Quick: Is 341 a prime number? That's easy, if you know a few tricks. How about 4,294,967,297? Still a snap, if you use a computer. But what if the number has thousands of digits? Then things get murky, because the obvious way to settle the issue--systematically checking whether smaller numbers divide it--takes far too long. In recent decades, theorists have devised clever algorithms for telling whether a large number is prime, but none that could be proven to work quickly. Until now.
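The "obvious way" mentioned above can be sketched in a few lines of Python (the function name is mine, not from the article). Trial division only needs to test divisors up to the square root of n, which is why 4,294,967,297 is still a snap for a computer but thousand-digit numbers are hopeless:

```python
def smallest_factor(n):
    """Trial division: return the smallest divisor > 1 of n, or None if n is prime."""
    if n < 2:
        return None
    d = 2
    while d * d <= n:      # a composite n must have a factor no larger than sqrt(n)
        if n % d == 0:
            return d
        d += 1
    return None            # no divisor found, so n is prime

# 341 = 11 * 31, so it is not prime
print(smallest_factor(341))          # -> 11
# 4,294,967,297 = 641 * 6,700,417 (Euler's factorization of 2^32 + 1)
print(smallest_factor(4294967297))   # -> 641
```

For a 1,000-digit number, the loop would run on the order of 10^500 times, which is why provably fast tests matter.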
Three computer scientists at the Indian Institute of Technology in Kanpur have found a provably efficient algorithm for testing primality (a prime being a whole number evenly divisible only by itself and 1). Manindra Agrawal, a professor of computer science, and two students, Neeraj Kayal and Nitin Saxena, announced their result early this month, e-mailing copies of it to a number of experts in computational number theory.
What especially intrigues number theorists is that the algorithm and the proof of its efficiency are both very simple. Like other modern tests for primes, the new algorithm is based on a fact that Pierre de Fermat (of Last Theorem fame) discovered in the 17th century: If n is prime, then it evenly divides a^n - a for any number a. Fermat's test makes it possible to prove that a number n is not prime without finding any of its factors. For example, 2^9 - 2 = 510, which is not evenly divisible by 9. Hence, 9 cannot be a prime number.
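Fermat's criterion is cheap to check even for huge n, because a^n mod n can be computed by repeated squaring. A minimal sketch (the function name is mine; Python's built-in three-argument pow does the modular exponentiation):

```python
def fermat_passes(n, a=2):
    """Fermat's criterion: if n is prime, then a^n is congruent to a (mod n).
    A False result proves n composite; True is only consistent with primality."""
    return pow(a, n, n) == a % n

print(fermat_passes(9))    # False: 9 fails the test, so 9 is certainly not prime
print(fermat_passes(31))   # True: consistent with 31 being prime
print(fermat_passes(341))  # True, yet 341 = 11 * 31 -- a "false positive"
```

Note that 341, the number from the opening question, is exactly such a false positive for base 2.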
Unfortunately, some nonprime numbers also pass the test. To eliminate such "false positive" readings, the new algorithm runs a more elaborate but still elementary test, based on searching for pairs of numbers that fulfill a few simple conditions. If the search comes up dry, n is prime. The key to the algorithm's efficiency is that the search can be restricted to a small range of numbers.
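The full search the article describes is beyond a short snippet, but the elementary identity at the heart of the new algorithm can be illustrated. A number n > 1 is prime exactly when (x + 1)^n and x^n + 1 agree coefficient by coefficient modulo n, i.e. when every binomial coefficient C(n, k) for 0 < k < n is divisible by n. The sketch below checks this directly; it is correct but far too slow for large n, since it examines all n - 1 coefficients. The achievement of Agrawal, Kayal, and Saxena was proving that a related check can be restricted to a small range without losing correctness:

```python
def is_prime_poly(n):
    """Slow but deterministic test: n > 1 is prime exactly when
    (x + 1)^n = x^n + 1 (mod n) as polynomials, i.e. n divides
    C(n, k) for every 0 < k < n."""
    if n < 2:
        return False
    c = 1                           # C(n, 0)
    for k in range(1, n):
        c = c * (n - k + 1) // k    # incrementally update to C(n, k); exact division
        if c % n != 0:
            return False            # some middle coefficient survives mod n
    return True

print([m for m in range(2, 40) if is_prime_poly(m)])
# -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
print(is_prime_poly(341))  # False: unlike Fermat's test, this catches 341
```

Unlike the bare Fermat test, this criterion has no false positives; the cost is the huge number of coefficients, which is precisely what the new algorithm's restricted search avoids.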
It's a "delightful surprise" that such an easy algorithm had been missed all these years, says Carl Pomerance, a number theorist at Lucent Technologies' Bell Labs in Murray Hill, New Jersey. But cryptographers might have more mixed feelings. They rely on difficult computations, such as factoring large numbers, to safeguard computer systems. If primality can be vanquished so easily, who's to say that a quick algorithm for factoring isn't just around the corner? Or, as Pomerance puts it, "What else have we overlooked?"