Prime Numbers And Primality Testing
Written by Mike James
Thursday, 27 June 2019
Pick a prime, any prime…

To return to primality testing. You might think that the simplest way to create a randomized test that n is prime is simply to pick random values between 2 and n and see if they divide n. The more values you pick without finding a divisor, the more certain you are that n is prime. While this is a randomized primality test, it isn't a particularly good one, and there are much, much better ones, both in terms of the amount of work they require and the level of certainty they provide. Any number that passes a randomized test with high probability is referred to as a "pseudo-prime". Currently one of the best known and most used tests is Rabin's Strong Pseudoprimality test, and it is interesting enough to explain in more detail.

If the number we want to test is n, assumed odd, first we have to find d and s such that n-1 = 2^s * d, where d is also odd (this is easy to do, try it). Next we pick an x smaller than n and form the sequence:

x^d, x^(2d), x^(4d), x^(8d), … , x^(n-1)  (mod n)

all calculated modulo n. Calculating modulo n simply means using "clock arithmetic", with the clock wrapping round to zero at n-1. For example, counting mod 6 goes 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0 and so on, as the clock wraps round at 5 back to 0 again.

Once you have the sequence of values, called a "Miller-Rabin sequence", you can be sure that if n is a prime the sequence ends in a 1. How does this help with a randomized primality test? Simple: just pick a value of x less than n and compute the Miller-Rabin sequence. If it doesn't end in a 1, then the number is composite and not a prime. Notice that if it does end in a 1, this doesn't prove that n is a prime, because some composite numbers also produce Miller-Rabin sequences that end in a 1.

The reason this is such a good test is that if n isn't a prime, at least three-quarters of all the x values you could pick at random produce Miller-Rabin sequences that don't end in a 1. So if you pick one value at random and it gives a sequence that ends in a 1, there is at most a one in four chance that n is really a composite non-prime. Pick two such values and the probability goes down to at most one in sixteen. If you pick M numbers and every one gives a sequence that ends in a 1, the probability that you have a non-prime is at most (1/4)^M. For example, if you get sequences that end in 1 with 100 random values, then the probability that you have a non-prime number is (1/4)^100, which is roughly 1/10^60. This is so exceedingly unlikely that you are safe in concluding that the number is indeed prime.

Such is the strange reasoning behind randomized testing for primes, and behind many other randomized algorithms. They never produce certainty, but what is the practical difference between a probability of 1/10^60 and a certain proof? There is more chance of a machine malfunction giving you the wrong answer while computing a certain proof of primality than of the Strong Pseudoprimality test failing!
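A minimal sketch of the test in Python may make this concrete. It is not from the article; the names decompose, miller_rabin_sequence and is_probable_prime are mine. One refinement over the end-in-1 description above: the standard strong test also checks where the first 1 in the sequence appears. For a prime the sequence must either start at 1 or reach its first 1 immediately after an n-1, and testing for this makes the check stricter.

```python
import random

def decompose(n):
    """Write n - 1 as (2^s) * d with d odd: keep halving until d is odd."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    return s, d

def miller_rabin_sequence(n, x):
    """The sequence x^d, x^(2d), x^(4d), ..., x^(n-1), all mod n."""
    s, d = decompose(n)
    seq = [pow(x, d, n)]
    for _ in range(s):
        seq.append(pow(seq[-1], 2, n))
    return seq

def is_probable_prime(n, trials=20):
    """Strong pseudoprimality test with `trials` random bases.

    A False answer is certain; a True answer is wrong with
    probability at most (1/4)^trials.
    """
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    s, d = decompose(n)
    for _ in range(trials):
        x = random.randrange(2, n - 1)
        v = pow(x, d, n)
        if v == 1 or v == n - 1:
            continue  # this base is consistent with n being prime
        for _ in range(s - 1):
            v = pow(v, 2, n)
            if v == n - 1:
                break  # the first 1 will be preceded by n-1, as a prime requires
        else:
            return False  # no n-1 ever appeared: n is definitely composite
    return True
```

Carmichael numbers show why the extra check matters: miller_rabin_sequence(561, 2) ends in a 1 even though 561 = 3 * 11 * 17, yet is_probable_prime(561) will reject it, because bases such as 2 reach that final 1 without ever passing through 560.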
Certainty restored?

So if knowing that a number is probably a prime is almost as good as knowing that it certainly is, what's the fuss about? There is still the philosophical point that a mathematical proof reveals a deeper truth than some random chance that something is true. What is more surprising is that the proof is so very simple and that it remained undiscovered for so long. The problem also seems, on the face of it, to be an NP problem. That is, the brute-force way of testing a prime takes exponential time, but if you give me a factor I can check that the number isn't prime with a single division; NP problems are exactly the ones that are easy to check in this way, even if they may be hard to compute (see the sketch at the end of this section). Also notice that proving a number to be a prime or not a prime doesn't actually help with the factoring problem. If you know a number is prime then it doesn't have factors, but if you prove the number to be non-prime you still don't know what its factors are.

Now consider what this means for the rest of the foundation of our cryptographic security. It is not the fact that we can test primes fast that is important for cryptography in general. It is more that something we thought was difficult has turned out to be easy. We assume that, because something is difficult and because mathematicians have been working on the problem for years, centuries even, the problem must really be difficult, intrinsically difficult. This is the assumption behind the factoring problem that many code systems rely on. Suppose now that the next problem Agrawal, Kayal, and Saxena, or some similar group, turn their attention to is factoring. Perhaps what was always assumed to be difficult will turn out not to be, if a sufficiently fresh approach is taken to the problem. Perhaps this has already happened.
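As a footnote to the NP point, here is a toy contrast, again in Python with illustrative names of my own. Verifying a claimed factor takes one division, while finding a factor by trial division takes time that grows exponentially in the number of digits of n.

```python
def verifies_composite(n, factor):
    """Checking a certificate: one division proves n is not prime."""
    return 1 < factor < n and n % factor == 0

def trial_division_factor(n):
    """Brute-force search for a factor; hopeless for large n."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return None  # no factor found, so n is prime

# 3215031751 = 151 * 751 * 28351 is the smallest composite number that
# passes the strong test for all of the bases 2, 3, 5 and 7.
print(verifies_composite(3215031751, 151))  # True, instantly
```

And, as noted above, learning that a number is composite, say from a failed Miller-Rabin sequence, gives you no head start at all on finding that 151.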
Last Updated ( Thursday, 27 June 2019 )