An Axiomatic Basis for Computer Programming. C. A. R. Hoare


In this paper an attempt is made to explore the logical foundations of computer programming by use of techniques which were first applied in the study of geometry and have later been extended to other branches of mathematics. This involves the elucidation of sets of axioms and rules of inference which can be used in proofs of the properties of computer programs. Examples are given of such axioms and rules, and a formal proof of a simple theorem is displayed. Finally, it is argued that important advantages, both practical and theoretical, may follow from a pursuance of these topics.

Graph Theorems


Cayley’s Formula • Mader’s Theorem • Menger’s Theorem • Euler’s Theorem • The Nordhaus-Gaddum Theorem • Dirac’s Theorem • The Chvátal-Erdős Theorem • Hall’s Theorem • In any bipartite graph, the maximum size of a matching is equal to the minimum size of a vertex cover. • Tutte’s Theorem • Petersen’s Theorem • The chromatic index of any bipartite graph is equal to its maximum degree. • Vizing’s Theorem • Brooks’ Theorem • Turán’s Theorem • The Kővári-Sós-Turán Theorem • Ramsey’s Theorem • Erdős’ lower bound for the Ramsey number r(k, k). • Euler’s Formula for planar connected graphs. • The Four Color Theorem (with a proof that five colors suffice).

Graph Isomorphism


Read László Babai’s work (quasipolynomial-time graph isomorphism algorithm).

Also read about retraction.

In graph theory, an isomorphism of graphs G and H is a bijection f between the vertex sets of G and H such that any two vertices u and v of G are adjacent in G if and only if f(u) and f(v) are adjacent in H. This kind of bijection is commonly described as an “edge-preserving bijection”, in accordance with the general notion of isomorphism as a structure-preserving bijection.

If an isomorphism exists between two graphs, then the graphs are called isomorphic, denoted G ≃ H. In the case when the bijection is a mapping of a graph onto itself, i.e., when G and H are one and the same graph, the bijection is called an automorphism of G.

Graph isomorphism is an equivalence relation on graphs and as such it partitions the class of all graphs into equivalence classes. A set of graphs isomorphic to each other is called an isomorphism class of graphs.
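Since isomorphism is defined purely by adjacency preservation, it can be checked directly (if slowly) by trying every bijection between the vertex sets. A minimal sketch in C++; the 4-vertex example graphs used with it are made up for illustration, not taken from the text:

```cpp
// Brute-force isomorphism test for small graphs: enumerate every
// bijection f between the vertex sets and check that it preserves
// adjacency in both directions. Edges are stored as pairs (u, v)
// with u < v on vertices 0..n-1.
#include <algorithm>
#include <set>
#include <utility>
#include <vector>
using namespace std;

using Graph = set<pair<int,int>>;  // edge set on vertices 0..n-1

bool isomorphic(const Graph& g, const Graph& h, int n) {
    vector<int> f(n);
    for (int i = 0; i < n; ++i) f[i] = i;
    do {  // try all n! candidate bijections
        bool ok = true;
        for (int u = 0; u < n && ok; ++u)
            for (int v = u + 1; v < n && ok; ++v)
                ok = (g.count({u, v}) > 0) ==
                     (h.count({min(f[u], f[v]), max(f[u], f[v])}) > 0);
        if (ok) return true;  // f is an edge-preserving bijection
    } while (next_permutation(f.begin(), f.end()));
    return false;
}
```

This factorial-time check only illustrates the definition; the point of the reading above (Babai) is that much better algorithms exist.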

The two graphs G and H below are isomorphic, despite their different-looking drawings.

[Figures omitted: drawings of graph G, with vertices a, b, c, d, g, h, i, j, and graph H, with vertices 1–8.]

One isomorphism between G and H is:

f(a) = 1, f(b) = 6, f(c) = 8, f(d) = 3, f(g) = 5, f(h) = 2, f(i) = 4, f(j) = 7

Deutsch–Jozsa algorithm


The Deutsch–Jozsa algorithm is a quantum algorithm, proposed by David Deutsch and Richard Jozsa in 1992, with improvements by Richard Cleve, Artur Ekert, Chiara Macchiavello, and Michele Mosca in 1998. Although of little practical use, it is one of the first examples of a quantum algorithm that is exponentially faster than any possible deterministic classical algorithm. It is also a deterministic algorithm, meaning that it always produces an answer, and that answer is always correct.

Problem statement

In the Deutsch–Jozsa problem, we are given a black box quantum computer known as an oracle that implements some function f : {0,1}^n → {0,1}. In layman’s terms, it takes n-digit binary values as input and produces either a 0 or a 1 as output for each such value. We are promised that the function is either constant (0 on all inputs or 1 on all inputs) or balanced (returns 1 for half of the input domain and 0 for the other half); the task then is to determine whether f is constant or balanced by using the oracle.
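To make the promise concrete, here are two hypothetical oracles of this shape (these specific functions are illustrative choices, not part of the original problem statement): a constant function, and the bit-parity function, which is balanced because exactly half of the 2^n inputs have odd parity.

```cpp
// Hypothetical n-bit oracles illustrating the Deutsch-Jozsa promise.
#include <bitset>

int constant_f(unsigned) { return 1; }      // constant: 1 on every input

int balanced_f(unsigned x) {                // balanced: parity of x's bits
    return std::bitset<32>(x).count() % 2;  // half the inputs have odd parity
}
```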


The Deutsch–Jozsa problem is specifically designed to be easy for a quantum algorithm and hard for any deterministic classical algorithm. The motivation is to show a black box problem that can be solved efficiently by a quantum computer with no error, whereas a deterministic classical computer would need exponentially many queries to the black box to solve the problem. More formally, it yields an oracle relative to which EQP, the class of problems that can be solved exactly in polynomial time on a quantum computer, and P are different.

Since the problem is easy to solve on a probabilistic classical computer, it does not yield an oracle separation with BPP, the class of problems that can be solved with bounded error in polynomial time on a probabilistic classical computer. Simon’s problem is an example of a problem that yields an oracle separation between BQP and BPP.

Classical solution

For a conventional deterministic algorithm, where n is the number of bits, 2^(n−1) + 1 evaluations of f are required in the worst case. To prove that f is constant, just over half the set of inputs must be evaluated and their outputs found to be identical (remembering that the function is guaranteed to be either balanced or constant, not somewhere in between). The best case occurs when the function is balanced and the first two output values that happen to be selected are different. For a conventional randomized algorithm, a constant k evaluations of the function suffice to produce the correct answer with high probability (failing with probability ε ≤ 1/2^(k−1)). However, 2^(n−1) + 1 evaluations are still required if we want an answer that is always correct. The Deutsch–Jozsa quantum algorithm produces an answer that is always correct with a single evaluation of f.
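The randomized classical strategy described above can be sketched as follows. This is an illustrative sketch, not code from the source: the k sample points would be drawn uniformly at random in practice, but are passed in explicitly here so the behavior is reproducible.

```cpp
// Randomized classical test for the Deutsch-Jozsa promise: evaluate f on k
// inputs. If two outputs differ, f is provably balanced; if all k agree we
// guess "constant", and for a balanced f that guess is wrong with
// probability at most 1/2^(k-1) when the samples are uniformly random.
#include <bitset>
#include <functional>
#include <vector>
using namespace std;

// true  -> all samples agreed, so we guess "constant"
// false -> two outputs differed, so f is certainly balanced
bool judge_constant(const function<int(unsigned)>& f,
                    const vector<unsigned>& samples) {
    for (unsigned x : samples)
        if (f(x) != f(samples[0])) return false;
    return true;
}

int parity(unsigned x) { return bitset<32>(x).count() % 2; }  // a balanced f
```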

Pollard’s rho algorithm


Pollard’s rho algorithm is a special-purpose integer factorization algorithm. It was invented by John Pollard in 1975. It is particularly effective for composite numbers having a small prime factor.

    x ← 2; y ← 2; d ← 1
    while d = 1:
        x ← g(x)
        y ← g(g(y))
        d ← gcd(|x - y|, n)
    if d = n:
        return failure
    else:
        return d

The ρ algorithm is based on Floyd’s cycle-finding algorithm and on the observation that (as in the birthday problem) t random numbers x₁, x₂, …, x_t in the range [1, n] will contain a repetition with probability P > 0.5 if t > 1.177·n^(1/2). The constant 1.177 comes from the more general result that if P is the probability that t random numbers in the range [1, n] contain a repetition, then P > 1 − exp(−t²/(2n)). Thus P > 0.5 provided 1/2 > exp(−t²/(2n)), i.e. t² > 2n ln 2, i.e. t > (2 ln 2)^(1/2)·n^(1/2) = 1.177·n^(1/2).
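The arithmetic of the bound can be checked numerically. A small sketch (the function names are mine, chosen for illustration):

```cpp
// Numeric check of the birthday bound above: the collision threshold is
// t = (2 ln 2)^(1/2) * n^(1/2) ~= 1.177 * n^(1/2), and plugging that t into
// the lower bound 1 - exp(-t^2/(2n)) gives exactly 1/2.
#include <cmath>

double collision_threshold(double n) {
    return std::sqrt(2.0 * std::log(2.0)) * std::sqrt(n);  // 1.177 * sqrt(n)
}

double repeat_prob_lower_bound(double t, double n) {
    return 1.0 - std::exp(-t * t / (2.0 * n));  // P > this value
}
```

For n = 365 (the classic birthday setting) the threshold comes out near 22.5 draws, matching the familiar "23 people" rule of thumb.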

The ρ algorithm uses g(x), a polynomial modulo n, as a generator of a pseudo-random sequence. (The most commonly used function is g(x) = (x² + 1) mod n.) Let’s assume n = pq. The algorithm generates the sequence x₁ = g(2), x₂ = g(g(2)), x₃ = g(g(g(2))), and so on. Two different sequences will in effect be running at the same time—the sequence {x_k} and the sequence {x_k mod p}. Since p < n^(1/2), the latter sequence is likely to repeat earlier than the former sequence. A repetition in the mod p sequence is detected by the fact that p divides gcd(x_k − x_m, n), where k < m. Once a repetition occurs, the sequence will cycle, because each term depends only on the previous one. The name ρ algorithm derives from the similarity in appearance between the Greek letter ρ and the directed graph formed by the values in the sequence and their successors. Once it is cycling, Floyd’s cycle-finding algorithm will eventually detect a repetition. The algorithm succeeds whenever the sequence {x_k mod p} repeats before the sequence {x_k}. The randomizing function g(x) must be a polynomial modulo n, so that it will work both modulo p and modulo n, that is, so that g(x mod p) ≡ g(x) (mod p).

#include <cstdlib>
#include <iostream>
using namespace std;

int gcd(int a, int b) {
	int remainder;
	while (b != 0) {
		remainder = a % b;
		a = b;
		b = remainder;
	}
	return a;
}

int main() {
	// n = 10403 = 101 * 103; this variant grows the cycle length in
	// doubling steps rather than using Floyd's two pointers.
	int number = 10403, x_fixed = 2, cycle_size = 2, x = 2, factor = 1;

	while (factor == 1) {
		for (int count = 1; count <= cycle_size && factor <= 1; count++) {
			x = (x * x + 1) % number;
			factor = gcd(abs(x - x_fixed), number);
		}
		cycle_size *= 2;
		x_fixed = x;
	}
	cout << "\nThe factor is " << factor;
	return 0;
}