A few years ago I discovered an amusing proof of Euclid’s theorem that there are infinitely many primes, which I thought I’d record here for posterity. (I subsequently learned that a similar argument appears in this paper by Paul Pollack.)
To motivate the proof, and illustrate how it works in an (admittedly silly) special case, suppose that there were just two prime numbers, called $p$ and $q$. Then by the fundamental theorem of arithmetic (i.e., unique factorization into primes) we would have the following identity between generating functions:
$$\sum_{n=1}^{\infty} x^n = x + \sum_{n=1}^{\infty} x^{pn} + \sum_{n=1}^{\infty} x^{qn} - \sum_{n=1}^{\infty} x^{pqn}.$$
Indeed, there is precisely one term $x^n$ for each integer $n \geq 1$ on the left-hand side, and the same is true for the right-hand side (the lone term $x$ accounts for $n = 1$; for $n > 1$, consider separately the cases where $p \mid n$ but $q \nmid n$, $q \mid n$ but $p \nmid n$, and $pq \mid n$). Using the formula for the sum of a geometric series, we can rewrite our formula as an identity between complex-analytic functions valid on the open unit disc $|x| < 1$:
$$\frac{x}{1-x} = x + \frac{x^p}{1-x^p} + \frac{x^q}{1-x^q} - \frac{x^{pq}}{1-x^{pq}}.$$
This is impossible, however, as we see by letting $x$ approach a primitive $pq$-th root of unity $\zeta$: since $\zeta$, $\zeta^p$, and $\zeta^q$ are all different from $1$, each term stays bounded except for $\frac{x^{pq}}{1-x^{pq}}$, which tends to infinity because $\zeta^{pq} = 1$.
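If you’d like to see the bookkeeping concretely, here is a quick Python sanity check (my own illustration, not part of the proof) with $p = 2$ and $q = 3$. It confirms that the right-hand side has coefficient $1$ exactly at $n = 1$ and at the multiples of $p$ or $q$ — so in the real world, where $2$ and $3$ are not the only primes, the two sides disagree exactly at the integers $n > 1$ divisible by neither.

```python
# Coefficient bookkeeping for the identity
#   sum_{n>=1} x^n = x + sum x^{pn} + sum x^{qn} - sum x^{pqn}
# with p = 2, q = 3 (an illustrative sketch; the identity is of course
# false in reality, precisely because 2 and 3 are not the only primes).
p, q, N = 2, 3, 100

for n in range(1, N + 1):
    # Coefficient of x^n on the right-hand side, by inclusion-exclusion:
    rhs = (n == 1) + (n % p == 0) + (n % q == 0) - (n % (p * q) == 0)
    # It equals 1 exactly when n = 1 or n is divisible by p or q:
    assert rhs == (1 if n == 1 or n % p == 0 or n % q == 0 else 0)

# Hence the two sides disagree exactly at the integers n > 1 divisible
# by neither p nor q:
mismatches = [n for n in range(1, N + 1)
              if not (n == 1 or n % p == 0 or n % q == 0)]
print(mismatches[:6])  # the integers > 1 coprime to 6
```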
This argument readily generalizes to any number of prime factors:
Theorem (Euclid): There are infinitely many prime numbers.
Proof: Suppose there were just finitely many primes $p_1, \ldots, p_k$. Then by the fundamental theorem of arithmetic and the principle of inclusion-exclusion, we would have the following identity between analytic functions on $|x| < 1$:
$$\frac{x}{1-x} = x + \sum_{\emptyset \neq S \subseteq \{1, \ldots, k\}} (-1)^{|S|+1} \frac{x^{p_S}}{1-x^{p_S}}, \qquad p_S := \prod_{i \in S} p_i.$$
If we now let $N = p_1 p_2 \cdots p_k$ and let $\zeta$ be a primitive $N$-th root of unity, the final term on the right-hand side (the one with $S = \{1, \ldots, k\}$, for which $p_S = N$) goes to infinity as $x$ approaches $\zeta$, while all the other terms approach a finite limit, a contradiction. Q.E.D.
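Here is a small numerical look at the blow-up step (again my own sketch, not part of the proof), pretending that $\{2, 3, 5\}$ were all the primes, so $N = 30$: as $x$ moves along the radius toward a primitive $30$th root of unity, the full-product term explodes while every proper-subset term stays bounded.

```python
# Blow-up near a primitive N-th root of unity, with "all the primes"
# taken to be {2, 3, 5}, so N = 30 (an illustrative sketch).
import cmath
from itertools import combinations

primes = [2, 3, 5]
N = 1
for pp in primes:
    N *= pp                       # N = 30
zeta = cmath.exp(2j * cmath.pi / N)   # primitive N-th root of unity

def term(x, m):
    return x**m / (1 - x**m)

for r in (0.9, 0.99, 0.999):
    x = r * zeta
    # Terms for the nonempty proper subsets of {2, 3, 5},
    # i.e. p_S in {2, 3, 5, 6, 10, 15}: these stay bounded,
    # since zeta^{p_S} != 1 for p_S < N.
    proper = []
    for k in (1, 2):
        for S in combinations(primes, k):
            m = 1
            for pp in S:
                m *= pp
            proper.append(abs(term(x, m)))
    print(f"r={r}: max proper-subset term {max(proper):.2f}, "
          f"full term {abs(term(x, N)):.2f}")
```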
I think there may be some pedagogical merit to this proof as an illustration of the use of generating functions, the principle of inclusion-exclusion, and summing geometric series. However, as with any “new” approach to a well-known theorem with many known proofs, one should also ask whether the method itself is of any general mathematical interest. I will make two quick “sales pitches” for the utility of this approach (and if you’re not convinced, that’s fine — I’m not in this for the money anyway).
First of all (and this was how I discovered the argument in the first place), a similar method yields a well-known theorem about “covering congruences” which, unlike Euclid’s theorem, does not as far as I know have a substantially simpler proof. The theorem is the following:
Theorem (Mirsky–Newman, formerly a conjecture of Erdős): If the positive integers are partitioned into a finite number of arithmetic progressions $A_1, \ldots, A_k$ with $A_i = \{a_i, a_i + d_i, a_i + 2d_i, \ldots\}$, where $k \geq 2$ and $d_i \geq 2$ for all $i$, then the maximum of the $d_i$ must occur at least twice.
Proof: Since $A_1, \ldots, A_k$ partitions the set of positive integers, we have
$$\sum_{n=1}^{\infty} x^n = \sum_{i=1}^{k} \sum_{m=0}^{\infty} x^{a_i + m d_i}.$$
Thus we have the following relation between analytic functions on $|x| < 1$:
$$\frac{x}{1-x} = \sum_{i=1}^{k} \frac{x^{a_i}}{1-x^{d_i}}.$$
Suppose that $d_k > d_i$ for $1 \leq i \leq k-1$ and let $\zeta$ be a primitive $d_k$-th root of unity. Then as $x$ approaches $\zeta$, the term $\frac{x^{a_k}}{1-x^{d_k}}$ approaches infinity while all other terms approach a finite limit (note that $\zeta^{d_i} \neq 1$ for $i < k$, since $0 < d_i < d_k$), a contradiction. Q.E.D.
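For a concrete sanity check of the statement (with a partition of my own choosing, not one from the theorem’s history): the progressions $1 + 2m$, $2 + 4m$, and $4 + 4m$ partition the positive integers, and the largest common difference, $4$, indeed occurs twice.

```python
# Mirsky-Newman sanity check on a sample partition (illustrative sketch):
# {1,3,5,...} (a=1, d=2), {2,6,10,...} (a=2, d=4), {4,8,12,...} (a=4, d=4).
progressions = [(1, 2), (2, 4), (4, 4)]  # (first term a, common difference d)
N = 1000

# Count how many progressions contain each n in 1..N:
counts = [0] * (N + 1)
for a, d in progressions:
    for n in range(a, N + 1, d):
        counts[n] += 1

assert all(c == 1 for c in counts[1:]), "not a partition"
ds = sorted(d for _, d in progressions)
assert ds.count(max(ds)) >= 2            # the maximum difference repeats
print("partition verified; differences:", ds)
```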
A similar proof strategy can be used to prove the following generalization of Euclid’s theorem. For the statement, recall that the Möbius function $\mu$ is defined by $\mu(n) = (-1)^k$ if $n$ is the product of $k$ distinct primes (so in particular $\mu(1) = 1$) and $\mu(n) = 0$ if $n$ is divisible by the square of a prime. Given a function $f : \mathbb{N} \to \mathbb{C}$, define its Möbius transform $\hat{f}$ to be
$$\hat{f}(n) = \sum_{d \mid n} \mu(n/d) \, f(d).$$
The Möbius inversion formula is the assertion that the inverse to the Möbius transform is given by the Dirichlet transform
$$g \longmapsto \check{g}, \qquad \check{g}(n) = \sum_{d \mid n} g(d).$$
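If it helps to see the two transforms side by side, here is a short Python sketch (the function names are my own) that computes $\mu$ by trial division and verifies the inversion formula on an arbitrary test function:

```python
# The Möbius transform and its inverse, the Dirichlet transform
# (an illustrative sketch; function names are mine).
def mu(n):
    # Classical Möbius function by trial division.
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # d^2 divides the original n
            result = -result
        d += 1
    if n > 1:
        result = -result          # one remaining prime factor
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius_transform(f, n):
    # f_hat(n) = sum over d | n of mu(n/d) * f(d)
    return sum(mu(n // d) * f(d) for d in divisors(n))

def dirichlet_transform(g, n):
    # g_check(n) = sum over d | n of g(d)
    return sum(g(d) for d in divisors(n))

# Möbius inversion: the Dirichlet transform of f_hat recovers f.
f = lambda n: n * n + 1                  # arbitrary test function
f_hat = lambda n: mobius_transform(f, n)
assert all(dirichlet_transform(f_hat, n) == f(n) for n in range(1, 60))
print("Möbius inversion verified for n = 1, ..., 59")
```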
Theorem (Pollack): For a nonzero function $f : \mathbb{N} \to \mathbb{C}$, the support of $f$ and the support of $\hat{f}$ cannot both be finite.
Euclid’s theorem follows immediately from the special case where $f$ is the function defined by $f(1) = 1$ and $f(n) = 0$ for $n \geq 2$, whose Möbius transform is equal to $\mu$: if there were only finitely many primes, there would be only finitely many squarefree integers, and so $\mu$ would have finite support as well.
Sketch of proof (see Pollack’s paper cited above for additional details): Suppose for the sake of contradiction that both $f$ and $g := \hat{f}$ have finite support. Then $F(x) := \sum_{n=1}^{\infty} f(n) x^n$ is a polynomial. On the other hand, for $|x| < 1$ the Möbius inversion formula tells us that
$$F(x) = \sum_{n=1}^{\infty} \Big( \sum_{d \mid n} g(d) \Big) x^n = \sum_{d=1}^{\infty} g(d) \, \frac{x^d}{1-x^d}.$$
Since $f$ is not identically zero, neither is $g$ (by Möbius inversion). Let $N$ be the largest positive integer such that $g(N) \neq 0$ and let $\zeta$ be a primitive $N$-th root of unity. Then as $x$ approaches $\zeta$, the right-hand side of the above identity approaches infinity while the left-hand side approaches a finite limit, a contradiction. Q.E.D.
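As a numerical sanity check of the key identity (my own sketch, with an arbitrarily chosen finitely supported $g$), one can compare a long truncation of $\sum_n f(n) x^n$ against $\sum_d g(d) \, x^d/(1-x^d)$ at a point inside the unit disc:

```python
# Numerical check of the identity
#   sum_{n>=1} f(n) x^n = sum_d g(d) x^d / (1 - x^d)   for |x| < 1,
# where f(n) = sum_{d | n} g(d) and g has finite support (illustrative).
g = {1: 2, 4: -1, 6: 3}          # arbitrary finitely supported g

def f(n):
    # Dirichlet transform of g, restricted to g's support:
    return sum(v for d, v in g.items() if n % d == 0)

x = 0.3 + 0.4j                   # |x| = 0.5 < 1
lhs = sum(f(n) * x**n for n in range(1, 200))    # truncated power series
rhs = sum(v * x**d / (1 - x**d) for d, v in g.items())
assert abs(lhs - rhs) < 1e-12    # truncation error ~ |x|^200, negligible
print("identity verified at x =", x)
```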
Note that Pollack’s theorem bears a formal resemblance to the Heisenberg uncertainty principle from quantum mechanics, which in its simplest mathematical form says that a nonzero function $f$ and its Fourier transform $\hat{f}$ cannot both be compactly supported.