Pollard’s rho algorithm

Pollard’s rho algorithm is a special-purpose integer factorization algorithm. It was invented by John Pollard in 1975. It is particularly effective for composite numbers with a small prime factor. In pseudocode (g is a polynomial modulo n, described below):

    x ← 2; y ← 2; d ← 1
    while d = 1:
        x ← g(x)
        y ← g(g(y))
        d ← gcd(|x - y|, n)
    if d = n: 
        return failure
    else:
        return d

The ρ algorithm is based on Floyd’s cycle-finding algorithm and on the observation that (as in the birthday problem) t random numbers x_1, x_2, …, x_t in the range [1, n] will contain a repetition with probability P > 0.5 if t > 1.177√n. The constant 1.177 comes from the more general result that if P is the probability that t random numbers in the range [1, n] contain a repetition, then P > 1 − exp(−t²/2n). Thus P > 0.5 provided 1/2 > exp(−t²/2n), i.e. t² > 2n ln 2, i.e. t > √(2 ln 2) · √n ≈ 1.177√n.
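To make the bound concrete, the following small Monte Carlo check (an illustrative sketch, not part of the algorithm; the range n = 1000000 and the trial count are arbitrary choices) draws t = ⌈1.177√n⌉ values uniformly from [1, n] many times and reports how often a repetition occurs; the observed fraction should exceed 0.5:

#include <cmath>
#include <iostream>
#include <random>
#include <unordered_set>

int main() {
    const long long n = 1000000;   // illustrative range size
    const int t = (int)std::ceil(1.177 * std::sqrt((double)n));   // 1177
    const int trials = 10000;

    std::mt19937_64 gen(12345);    // fixed seed for reproducibility
    std::uniform_int_distribution<long long> pick(1, n);

    int withRepeat = 0;
    for (int trial = 0; trial < trials; ++trial) {
        std::unordered_set<long long> seen;
        for (int i = 0; i < t; ++i) {
            if (!seen.insert(pick(gen)).second) {   // insertion fails on a repeat
                ++withRepeat;
                break;
            }
        }
    }
    // Expect a fraction greater than 0.5, per the birthday bound above.
    std::cout << "t = " << t << ", fraction with repetition = "
              << (double)withRepeat / trials << "\n";
}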

The ρ algorithm uses g(x), a polynomial modulo n, as a generator of a pseudo-random sequence. (The most commonly used function is g(x) = (x² + 1) mod n.) Let’s assume n = pq. The algorithm generates the sequence x_1 = g(2), x_2 = g(g(2)), x_3 = g(g(g(2))), and so on. Two different sequences are in effect running at the same time: the sequence {x_k} and the sequence {x_k mod p}. Since p < √n, the latter sequence is likely to repeat earlier than the former. Once x_k ≡ x_m (mod p) for some k < m with x_k ≠ x_m, the difference x_k − x_m is divisible by p, so gcd(|x_k − x_m|, n) is divisible by p and, unless it equals n, is a nontrivial factor of n. Once a repetition occurs, the sequence will cycle, because each term depends only on the previous one. The name ρ algorithm derives from the similarity in appearance between the Greek letter ρ and the directed graph formed by the values in the sequence and their successors. Once the sequence is cycling, Floyd’s cycle-finding algorithm will eventually detect a repetition. The algorithm succeeds whenever the sequence {x_k mod p} repeats before the sequence {x_k}. The randomizing function g(x) must be a polynomial modulo n so that it works both modulo p and modulo n; that is, so that g(x mod p) ≡ g(x) (mod p).
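The C++ implementation below searches for a factor of n = 10403 = 101 × 103. Rather than Floyd’s two-pointer scheme from the pseudocode above, it uses a variant in the style of Brent’s cycle detection: a saved value x_fixed is compared against x for up to cycle_size steps, after which the window doubles and x_fixed is updated.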

#include <cstdlib>   // std::abs
#include <iostream>

// Euclid's algorithm for the greatest common divisor.
int gcd(int a, int b) {
    while (b != 0) {
        int remainder = a % b;
        a = b;
        b = remainder;
    }
    return a;
}

int main() {
    int number = 10403, x_fixed = 2, cycle_size = 2, x = 2, factor = 1;

    while (factor == 1) {
        // Advance x through g(x) = (x^2 + 1) mod n, comparing against x_fixed.
        // (For larger n, use a wider integer type to avoid overflow in x*x.)
        for (int count = 1; count <= cycle_size && factor == 1; count++) {
            x = (x * x + 1) % number;
            factor = gcd(std::abs(x - x_fixed), number);
        }

        cycle_size *= 2;   // double the comparison window (Brent's variant)
        x_fixed = x;       // and save the current position
    }
    // factor == number would signal failure; here a prime factor is found.
    std::cout << "\nThe factor is " << factor << "\n";
    return 0;
}

Probabilistic Turing machine

In computability theory, a probabilistic Turing machine is a non-deterministic Turing machine which chooses between the available transitions at each point according to some probability distribution.

In the case of equal probabilities for the transitions, it can be defined as a deterministic Turing machine having an additional “write” instruction where the value of the write is uniformly distributed over the Turing machine’s alphabet (generally, an equal likelihood of writing a ‘1’ or a ‘0’ onto the tape). Another common reformulation is simply a deterministic Turing machine with an added tape full of random bits, called the random tape.

As a consequence, a probabilistic Turing machine can (unlike a deterministic Turing machine) have stochastic results: on a given input and instruction state machine, it may have different run times, or it may not halt at all; furthermore, it may accept an input in one execution and reject the same input in another execution.
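As a toy illustration of the random-tape formulation (a hypothetical machine invented for this sketch, not from the text), the deterministic procedure below consumes one bit from a separate random tape; rerunning it with fresh tapes can accept and reject the same input across executions:

#include <iostream>
#include <random>
#include <vector>

// A deterministic "machine": its answer is a pure function of the input
// and the random tape. All randomness lives in the tape contents.
bool runMachine(const std::vector<int>& input, const std::vector<int>& randomTape) {
    int parity = 0;
    for (int bit : input) parity ^= bit;   // deterministic work on the input
    return parity == randomTape[0];        // decision consults the random tape
}

int main() {
    std::vector<int> input{1, 0, 1};             // one fixed input string
    std::mt19937 gen(std::random_device{}());
    std::bernoulli_distribution coin(0.5);       // uniform random bits

    for (int run = 0; run < 5; ++run) {
        std::vector<int> tape{coin(gen) ? 1 : 0};   // fresh one-bit random tape
        std::cout << "run " << run << ": "
                  << (runMachine(input, tape) ? "accept" : "reject") << "\n";
    }
}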

Therefore, the notion of acceptance of a string by a probabilistic Turing machine can be defined in different ways. Various polynomial-time randomized complexity classes that result from different definitions of acceptance include RP, Co-RP, BPP and ZPP. If the machine is restricted to logarithmic space instead of polynomial time, the analogous RL, Co-RL, BPL, and ZPL complexity classes are obtained. Enforcing both restrictions yields RLP, Co-RLP, BPLP, and ZPLP.

Probabilistic computation is also critical for the definition of most classes of interactive proof systems, in which the verifier machine depends on randomness to avoid being predicted and tricked by the all-powerful prover machine. For example, the class IP equals PSPACE, but if randomness is removed from the verifier, we are left with only NP, which is not known to be, but is widely believed to be, a considerably smaller class.

One of the central questions of complexity theory is whether randomness adds power; that is, is there a problem that can be solved in polynomial time by a probabilistic Turing machine but not by a deterministic Turing machine? Or can deterministic Turing machines efficiently simulate all probabilistic Turing machines with at most a polynomial slowdown? It is widely believed by researchers that the latter is the case, which would imply P = BPP. The same question for log space instead of polynomial time (does L = BPLP?) is even more widely believed to be true. On the other hand, the power randomness gives to interactive proof systems, as well as the simple algorithms it creates for difficult problems such as polynomial-time primality testing and log-space graph connectedness testing, suggests that randomness may add power.

A quantum computer is another model of computation that is inherently probabilistic.

Ackermann function

In computability theory, the Ackermann function, named after Wilhelm Ackermann, is one of the simplest[1] and earliest-discovered examples of a total computable function that is not primitive recursive. All primitive recursive functions are total and computable, but the Ackermann function illustrates that not all total computable functions are primitive recursive.

After Ackermann’s publication[2] of his function (which had three nonnegative integer arguments), many authors modified it to suit various purposes, so that today “the Ackermann function” may refer to any of numerous variants of the original function. One common version, the two-argument Ackermann–Péter function, is defined as follows for nonnegative integers m and n:

A(m,n) = \begin{cases}
    n+1                & \text{if } m = 0 \\
    A(m-1,\,1)         & \text{if } m > 0 \text{ and } n = 0 \\
    A(m-1,\,A(m,n-1))  & \text{if } m > 0 \text{ and } n > 0.
\end{cases}

Its value grows rapidly, even for small inputs. For example, A(4,2) is an integer of 19,729 decimal digits.
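A direct C++ transcription of the definition above (illustrative; the naive recursion is only feasible for very small arguments):

#include <cstdint>
#include <iostream>

// The two-argument Ackermann–Péter function, exactly as defined above.
// Even A(4, 2) is hopeless this way; recursion depth and call counts
// explode for m >= 4.
uint64_t ackermann(uint64_t m, uint64_t n) {
    if (m == 0) return n + 1;
    if (n == 0) return ackermann(m - 1, 1);
    return ackermann(m - 1, ackermann(m, n - 1));
}

int main() {
    std::cout << "A(2,3) = " << ackermann(2, 3) << "\n";   // 9
    std::cout << "A(3,3) = " << ackermann(3, 3) << "\n";   // 61
}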

Random variate

In the mathematical fields of probability and statistics, a random variate is a particular outcome of a random variable; other outcomes of the same random variable would be different random variates, possibly with different values. Random variates are used when simulating processes driven by random influences (stochastic processes). In modern applications, such simulations derive random variates corresponding to any given probability distribution from computer procedures designed to create random variates corresponding to a uniform distribution, where these procedures actually provide values chosen from a uniform distribution of pseudorandom numbers.

Procedures to generate random variates corresponding to a given distribution are known as procedures for random variate generation or pseudo-random number sampling.

In probability theory, a random variable is a measurable function from a probability space to a measurable space of values that the variable can take on. In that context, and in statistics, those values are known as random variates, or occasionally random deviates, and this represents a wider meaning than just that associated with pseudorandom numbers.

Devroye defines a random variate generation algorithm (for real numbers) as follows:

Assume that

  1. Computers can manipulate real numbers.
  2. Computers have access to a source of random variates that are uniformly distributed on the closed interval [0,1].
Then a random variate generation algorithm is any program that halts almost surely and exits with a real number x. This x is called a random variate.

(Both assumptions are violated in most real computers. Computers necessarily lack the ability to manipulate real numbers, typically using floating-point representations instead. Most computers lack a source of true randomness (like certain hardware random number generators), and instead use pseudorandom number sequences.)
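As a concrete sketch of such an algorithm (the exponential distribution and the rate parameter lambda are illustrative choices, not from the text), inverse-transform sampling maps a uniform variate U on [0,1) to x = −ln(1 − U)/λ, an exponential random variate; in keeping with the caveat above, the uniform source here is a pseudorandom generator operating on floating-point numbers:

#include <cmath>
#include <iostream>
#include <random>

// Inverse-transform sampling: if U is uniform on [0,1), then
// x = -ln(1 - U) / lambda follows the exponential distribution
// with rate lambda.
double exponentialVariate(std::mt19937& gen, double lambda) {
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    double u = uniform(gen);   // pseudorandom stand-in for a true uniform variate
    return -std::log(1.0 - u) / lambda;
}

int main() {
    std::mt19937 gen(42);        // fixed seed for reproducibility
    const double lambda = 2.0;   // illustrative rate parameter
    for (int i = 0; i < 5; ++i)
        std::cout << exponentialVariate(gen, lambda) << "\n";
}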

The distinction between random variable and random variate is subtle and is not always made in the literature. It is useful when one wants to distinguish between a random variable itself with an associated probability distribution on the one hand, and random draws from that probability distribution on the other, in particular when those draws are ultimately derived by floating-point arithmetic from a pseudo-random sequence.