## Beal’s Conjecture

Beal’s conjecture states that

If *m, n, r ≥ 3* and *x, y, z* are positive integers with *x^{m} + y^{n} = z^{r}*, then *x*, *y*, and *z* share a common factor.

Several years ago, Peter Norvig wrote a computer program to search for counterexamples. Norvig’s program was written in Python and run on a 400 MHz processor. I have access to a recent 64-bit AMD CPU with lots of RAM, so I decided to update his program and see how far I could push the threshold.

My approach is essentially the same as his. First, I choose an upper bound on the bases (*max_base*) and the exponents (*max_pow*). In this case, I used *max_base = max_pow = 1000*. With a simple enumeration over all six variables, this would give 1000^{6} = 10^{18} combinations, far too many for a current computer to handle. Furthermore, the numbers involved could be as large as 1000^{1000} = 10^{3000}. That’s a googol raised to the 30th power, and far larger than the native word size of any computer. In general, the running time is *O(max_base^{3} * max_pow^{3})*.

### Using a hash table

The key insight is to precompute all possible values of *z^{r}* and store them in a hash table. The keys in this hash table are the various values of *z^{r}*, and the values are the pairs *(r, z)*. Then, for each left-hand side of the equation, *x^{m} + y^{n}*, it suffices to look the value up in the hash table. If it’s there, we use the *(r, z)* value to check the common-factor condition and determine whether we’ve got a real counterexample. These checks take constant time, so it’s only necessary to enumerate all the *x*, *y*, *m*, and *n* values.
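As a concrete illustration, here is a minimal Python sketch of this hash-table approach. The names and the tiny bounds are my own, not taken from the actual program:

```python
from math import gcd

# Tiny bounds keep this demo fast; the real search used far larger ones.
max_base, max_pow = 20, 10

# Precompute every z^r; the key is the value z^r, the stored pair is (r, z).
powers = {}
for z in range(2, max_base + 1):
    for r in range(3, max_pow + 1):
        powers[z ** r] = (r, z)

# Enumerate only the left-hand sides and look each one up in constant time.
counterexamples = []
for x in range(2, max_base + 1):
    for m in range(3, max_pow + 1):
        for y in range(2, max_base + 1):
            for n in range(3, max_pow + 1):
                hit = powers.get(x ** m + y ** n)
                if hit is not None:
                    r, z = hit
                    if gcd(x, y) == 1:  # coprime bases would refute the conjecture
                        counterexamples.append((x, m, y, n, z, r))

print(counterexamples)  # the conjecture predicts this stays empty: []
```

Note that a value like 1024 has several representations (2^{10} and 4^{5}); the table keeps only one, which is enough, since any single representation suffices to flag a counterexample.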

At the cost of *O(max_base * max_pow)* storage and some initialization cost, we can check each *x^{m} + y^{n}* in constant time. Thus the total complexity of the program has been reduced to *O(max_base^{2} * max_pow^{2})*, in our case about 10^{12}, which is a manageable number for a modern computer.

### Working with reasonably-sized integers

This analysis assumed that we could compare integers in constant time, but with numbers as large as 10^{3000}, this is certainly not the case. To deal with this problem, I followed up on one of Norvig’s suggestions and used modular arithmetic. Rather than verifying that *x^{m} + y^{n} = z^{r}*, I instead verified that *x^{m} + y^{n} = z^{r} (mod N)*. If this is true, then we may or may not have a counterexample. But if it’s not, then we definitely don’t. The number of false positives depends on the choice of the modulus *N*. Larger values will yield fewer false positives, as will prime numbers. What we lose in accuracy, we gain in speed: working modulo *N*, we never have to consider numbers larger than *N*.
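For illustration, here is how the modular check might look in Python, whose built-in three-argument `pow` computes a power modulo *N* without ever forming the enormous intermediate value. The modulus and helper names here are my own:

```python
N = 2**31 - 1  # an arbitrary large prime modulus, just for this sketch

def lhs_mod(x, m, y, n, mod=N):
    # pow(x, m, mod) never builds the full x^m, only its residue mod N
    return (pow(x, m, mod) + pow(y, n, mod)) % mod

def rhs_mod(z, r, mod=N):
    return pow(z, r, mod)

# A true identity always survives the filter:
print(lhs_mod(3, 3, 6, 3) == rhs_mod(3, 5))   # 27 + 216 = 243 = 3^5 -> True

# A false one is almost always rejected:
print(lhs_mod(2, 3, 2, 4) == rhs_mod(5, 3))   # 8 + 16 = 24 != 125 -> False
```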

There’s no point in doing this unless the resulting numbers will fit into a machine word, so the largest reasonable choice on a 64-bit machine is *N = 2^{64}*. I tried this, but it had a rather high false-positive rate compared to primes, so I went with the primes. To do arithmetic modulo a prime *p* quickly on a computer, numbers as large as *p^{2}* must fit in a machine word. According to The Prime Pages, the largest primes less than 2^{32} are *p_{1} = 2^{32} - 5* and *p_{2} = 2^{32} - 17*. If *x^{m} + y^{n} = z^{r} (mod p_{1})* and *x^{m} + y^{n} = z^{r} (mod p_{2})*, then *(x, m, y, n, z, r)* is a *candidate*, and we need to pull out the big guns (arbitrary-precision arithmetic) to determine whether it constitutes an actual counterexample.
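In Python, the double filter plus the exact check might be sketched like this. This is illustrative only; the real program does the filtering in native code and only falls back to big integers for a candidate:

```python
from math import gcd

P1 = 2**32 - 5   # the two largest primes below 2^32
P2 = 2**32 - 17

def is_candidate(x, m, y, n, z, r):
    """True if the equation holds modulo both primes (the cheap filter)."""
    return all((pow(x, m, p) + pow(y, n, p)) % p == pow(z, r, p)
               for p in (P1, P2))

def is_counterexample(x, m, y, n, z, r):
    # Python integers are arbitrary precision, so the "big guns" come free here.
    return (is_candidate(x, m, y, n, z, r)
            and x**m + y**n == z**r
            and gcd(x, y) == 1)

# 3^3 + 6^3 = 243 = 3^5 is a genuine solution, so it passes both filters,
# but its bases share the factor 3, so it is not a counterexample.
print(is_candidate(3, 3, 6, 3, 3, 5))        # True
print(is_counterexample(3, 3, 6, 3, 3, 5))   # False
```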

How effective is this filtering process? Fantastically so. With *max_base=200* and *max_pow=100*, I got this sequence of candidates:

Stage | Candidates | Percent |
---|---|---|
All combinations | 115,555,328 | 100% |
Candidates (mod p_{1}) | 545 | 0.0005% |
Candidates (mod p_{2}) | 0 | 0.0% |
Counterexamples | 0 | 0.0% |

The big guns never even had to be pulled out! The first filter cut out roughly 199,999 out of every 200,000 possibilities, and the second one took care of the rest. More filters would make the existence of a genuine candidate even less likely.

### Constant factor optimizations

There were a few more optimizations to make before I had a finished program. First, from the symmetry of the equation, we can assume that *x* < *y*. This eliminates 50% of the computation. Furthermore, there’s no point in considering *(x, y)* pairs that wouldn’t violate Beal’s conjecture, namely those for which *gcd(x, y) > 1*. Given any two integers, there is roughly a 6/π^{2} chance that they are relatively prime to one another (see the Math Forum for a demonstration of this remarkable fact). Hence the coprimality test cuts out another 1 - 6/π^{2} ≈ 39% of the remaining pairs; together, the two steps leave only about 30% of the candidates.
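These savings are easy to sanity-check in Python (the bound here is my own choice for the demo):

```python
from math import gcd

max_base = 50
pairs = [(x, y)
         for x in range(1, max_base + 1)
         for y in range(x + 1, max_base + 1)  # symmetry cut: x < y
         if gcd(x, y) == 1]                   # coprimality cut

# Fraction of all (x, y) combinations that still need work.
fraction = len(pairs) / max_base**2
print(round(fraction, 3))  # approaches (1/2) * (6/pi^2), roughly 0.30
```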

The final step before running the search was to construct an optimized hash table to hold the 32-bit integers that we’ve used. I found a workable hash function designed by Thomas Wang and wrote a lightweight interface around it. There was minimal overhead, since I had no need to store values in the hash table, only keys. With the above hash function, I averaged about 1.5 keys in each nonempty bucket.
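For reference, here is a Wang integer mix as I remember it (the "hash32shift" variant), transcribed into Python with explicit 32-bit masking. Treat the exact constants as my recollection rather than gospel:

```python
def wang_hash32(key):
    """Thomas Wang-style 32-bit integer mix (hash32shift variant, from memory)."""
    M = 0xFFFFFFFF
    key = (~key + (key << 15)) & M  # key = (key << 15) - key - 1
    key = key ^ (key >> 12)
    key = (key + (key << 2)) & M    # key *= 5
    key = key ^ (key >> 4)
    key = (key * 2057) & M          # key = (key + (key << 3)) + (key << 11)
    key = key ^ (key >> 16)
    return key

# Nearby keys scatter across buckets:
print([wang_hash32(k) % 8 for k in range(8)])
```

Every step is invertible modulo 2^{32} (odd multipliers, xor-shifts), so the mix is a bijection on 32-bit words, which is exactly what you want feeding a bucketed table.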

### The results

My program ran about ten times faster than Norvig’s original Python program on the same hardware, and vastly faster than it did on his 400 MHz Pentium. For *max_base = max_pow = 1000*, my program completed the search in just shy of 17 hours. I forgot to record a tally of how many candidates made it past the first prime filter but not the second, so that data is missing from this table.

Stage | Candidates | Percent |
---|---|---|
All combinations | 301,980,444,768 | 100% |
Candidates (mod p_{1} and p_{2}) | 15,683 | 0.00000519% |
Counterexamples | 0 | 0.0% |

Sadly, still no counterexamples, and no $100,000. In the future, I may extend the search in other directions (to fill in more cells in Norvig’s table). But for now, it can be succinctly said that Beal’s conjecture holds for all bases and exponents up to 1,000.

### Correctness

Why should anyone believe my results? For one, you can have the source code and inspect it yourself. To compile, just run `make`. Running the program is a bit more complex, since it involves an executable as well as Norvig’s Python script. For this example, I used

    ./bealhash3 -v 1000 1000 | python beal.py 1000 1000 | tee 1000x1000.dat

For that matter, you can also inspect 1000x1000.dat, which contains a full list of the candidates produced in my run. If the `bealhash` program is run with a `-g` flag, it omits the *gcd(x,y)==1* test. This causes the Python script to print out legitimate solutions to the Beal equation. I ran Norvig’s original Python script with *max_base=200*, *max_pow=100* and got 809 such solutions. I then added my filter in front of it and got the exact same set of solutions, all 809 of them. This is the main reason I believe in the correctness of my code.

Note that this program *MUST BE RUN ON A 64-BIT MACHINE*. Otherwise, overflow will produce weirdness and most likely incorrectness.

### Future ideas

Virtually all of the execution time of the program is spent doing hash table lookups, so any further optimization would have to improve this. A perfect hash might help here, but I’m not particularly familiar with the theory of perfect hash functions.

Rather than searching over all variables less than a certain bound, I think it would make sense to consider only exponents that are either prime or a power of two. Arguments analogous to those for Fermat’s Last Theorem apply here. My program also does a fair amount of duplicated computation due to the fact that 2^{4} = 4^{2}, and so on.
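In fact, among the powers of two only the exponent 4 is needed: every *m ≥ 3* is divisible by an odd prime or by 4, and *x^{m}* is then already a perfect *q*-th power for that divisor *q*. A small sketch of the reduced exponent set (helper names are my own):

```python
def is_prime(k):
    """Tiny trial-division primality test, fine for small exponents."""
    return k >= 2 and all(k % d for d in range(2, int(k**0.5) + 1))

def reduced_exponents(max_pow):
    """4 plus the odd primes: every exponent >= 3 has one of these as a divisor."""
    return [4] + [q for q in range(3, max_pow + 1) if is_prime(q)]

print(reduced_exponents(20))  # [4, 3, 5, 7, 11, 13, 17, 19]
```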

As is often the case, a mathematical proof relating to Beal’s Conjecture could cut down the search space dramatically. For example, one of the first steps taken towards proving FLT was Fermat’s proof when the exponent was four. This eliminated all multiples of four from consideration. Something akin to that would be *very* helpful here.