This post contains some interesting mathematical curiosities I’ve recently learned about.
Formula for Primes
C. P. Willans (1964, paper)
$$ n^{\text{th}} \text{ prime} = 1 + \sum_{i=1}^{2^n} \Bigg \lfloor \left( \frac{n}{\sum_{j=1}^{i} \Big \lfloor \cos^2 \left( \pi \, \frac{(j-1)! + 1}{j} \right) \Big \rfloor } \right) ^{\frac{1}{n}} \Bigg \rfloor $$

Not a fast formula to compute, but it does produce the nth prime. It is very inefficient from a computational point of view, with two nested summations.
Wilson’s theorem: $(j-1)!+1$ is divisible by $j$ precisely when $j$ is prime or $j = 1$. The formula turns this into a primality indicator by wrapping it in $\cos$, squaring, and taking the floor: the inner term is 1 when $j$ is prime or 1, and 0 otherwise.
Bertrand’s postulate: for every integer $m \geq 2$ there exists a prime number $p$ such that $m < p < 2m$. By induction this guarantees the nth prime is at most $2^n$, which is why the outer sum runs up to $2^n$.
However, according to the paper “What is an answer?” by Herbert Wilf, this algorithm is not a real answer to the question “what is the nth prime?”
Here’s a short Python program implementing the formula.
```python
import math

def prime(n):
    total = 0
    for i in range(1, int(math.pow(2, n)) + 1):
        inner_sum = 0
        for j in range(1, i + 1):
            # Wilson's theorem test: contributes 1 if j is prime or 1, else 0
            div = (math.factorial(j - 1) + 1) / j
            inner_term = math.pow(math.cos(math.pi * div), 2)
            inner_sum = inner_sum + math.floor(inner_term)
        term = math.pow(n / inner_sum, 1 / n)
        total = total + math.floor(term)
    return 1 + total

for i in range(1, 7):
    print(prime(i), end=", ")
```
The code above correctly prints 2, 3, 5, 7, 11, 13 but fails when i = 7 with a calculation error (the floating-point cosine loses all precision once the factorials get large), and then at i = 8 with a float overflow. I’ve tried using numpy (most notably the numpy.float128 type) and Decimal, but the values reached by the variables in the inner loop are astronomical, and it keeps failing when calculating the next valid values, 17 and 19.
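Since the cosine is only there to encode Wilson’s theorem, one way to dodge the floating-point trouble entirely is to replace the $\cos^2$ indicator with the exact integer divisibility test it encodes. This is just a sketch of that idea, not Willans’ formula verbatim, since it never leaves integer arithmetic:

```python
def prime(n):
    # Same structure as Willans' formula, but the cos^2 primality indicator
    # is replaced by the exact test behind it (Wilson's theorem):
    # j is prime or 1 exactly when j divides (j-1)! + 1.
    limit = 2 ** n
    fact = 1        # running value of (i-1)!
    pi_count = 0    # Willans' inner sum: pi(i) + 1
    total = 0
    for i in range(1, limit + 1):
        if (fact + 1) % i == 0:   # i is 1 or prime
            pi_count += 1
        # floor((n / pi_count)^(1/n)) is 1 while pi_count <= n, else 0
        if pi_count <= n:
            total += 1
        fact *= i                 # now fact == i!
    return 1 + total

print([prime(n) for n in range(1, 11)])
# → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The integers involved are still astronomical (the running factorial reaches $(2^n - 1)!$), but Python’s arbitrary-precision integers handle them exactly, so there is nothing to overflow.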
The Ruby code below uses the BigDecimal library and calculates the first 10 values correctly, but it takes around 4 minutes to do so on my laptop:
```ruby
require 'bigdecimal'
require 'bigdecimal/math'

class BigDecimal
  def fact
    (2..self).reduce(1, :*)
  end
end

PI = BigMath.PI(10)

def prime(n)
  sum = BigDecimal(0)
  limit = 2 ** n
  for i in 1..limit do
    inner_sum = BigDecimal(0)
    for j in 1..i do
      div = (BigDecimal(j - 1).fact + 1) / BigDecimal(j)
      inner_term = BigMath.cos(PI * div, 10) ** BigDecimal(2)
      inner_sum = inner_sum + inner_term.floor
    end
    term = (n / inner_sum) ** Rational(1, n)
    sum = sum + term.floor
  end
  sum + BigDecimal(1)
end

for i in 1..10 do
  puts(prime(i).to_s('F'))
end
```
These are the times my laptop took to find the nth prime, and as you can see the times increase dramatically after the 8th prime, since each increment of $n$ doubles the range of the outer loop! I haven’t tried searching for the 12th; my estimate is that the calculation would take around 5 hours to complete.
| $n$ | Prime | Time (m:s) |
|---|---|---|
| 1 | 2 | 0:00.04 |
| 2 | 3 | 0:00.04 |
| 3 | 5 | 0:00.05 |
| 4 | 7 | 0:00.05 |
| 5 | 11 | 0:00.08 |
| 6 | 13 | 0:00.21 |
| 7 | 17 | 0:00.77 |
| 8 | 19 | 0:04.00 |
| 9 | 23 | 0:25.50 |
| 10 | 29 | 3:40.00 |
| 11 | 31 | 37:43.05 |
Stirling’s Approximation
This is a jaw-dropping formula, found in the 18th century, involving $\pi$ and $e$, that approximates $n!$ with ever-smaller relative error as $n$ grows. There’s a page about it on Wikipedia.
$$ n! \approx \left( \frac{n}{e} \right) ^n \sqrt {2 \pi n} $$

It can be derived through:
- Gamma function
- Taylor series
- Gaussian integral
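In fact, Stirling’s formula is the first term of a full asymptotic series, whose next term shows that the relative error is roughly $\frac{1}{12n}$ — consistent with the table below (for example, $1/(12 \cdot 10) \approx 0.83\%$):

$$ n! = \sqrt{2 \pi n} \left( \frac{n}{e} \right)^n \left( 1 + \frac{1}{12n} + \frac{1}{288n^2} - \dots \right) $$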
Here’s a table comparing this approximation to the actual values of the factorial function; the relative error shrinks steadily, falling to about 2% by the time $n$ reaches 4 or 5:
| $n$ | Factorial | Stirling | Difference | % |
|---|---|---|---|---|
| 0 | 1 | 0.00 | 1.00 | 100.00% |
| 1 | 1 | 0.92 | 0.08 | 7.79% |
| 2 | 2 | 1.92 | 0.08 | 4.05% |
| 3 | 6 | 5.84 | 0.16 | 2.73% |
| 4 | 24 | 23.51 | 0.49 | 2.06% |
| 5 | 120 | 118.02 | 1.98 | 1.65% |
| 6 | 720 | 710.08 | 9.92 | 1.38% |
| 7 | 5040 | 4980.40 | 59.60 | 1.18% |
| 8 | 40320 | 39902.40 | 417.60 | 1.04% |
| 9 | 362880 | 359536.87 | 3343.13 | 0.92% |
| 10 | 3628800 | 3598695.62 | 30104.38 | 0.83% |
| 11 | 39916800 | 39615625.05 | 301174.95 | 0.75% |
| 12 | 479001600 | 475687486.47 | 3314113.53 | 0.69% |
| 13 | 6227020800 | 6187239475.19 | 39781324.81 | 0.64% |
| 14 | 87178291200 | 86661001740.60 | 517289459.40 | 0.59% |
Download the source spreadsheet.
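If you’d rather not open a spreadsheet, the same table can be reproduced with a few lines of Python (a quick sketch along the same lines, not the original spreadsheet):

```python
import math

def stirling(n):
    # Stirling's approximation: (n/e)^n * sqrt(2*pi*n)
    return (n / math.e) ** n * math.sqrt(2 * math.pi * n)

for n in range(1, 11):
    exact = math.factorial(n)
    approx = stirling(n)
    error = 100 * (exact - approx) / exact
    print(f'{n:2d}  {exact:10d}  {approx:14.2f}  {error:5.2f}%')
```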
Curious Integral
This is the kind of exercise I like; I found it on Twitter back when it was still an interesting place.
$$ \int_{0}^1 x \cdot \sqrt{x \cdot \sqrt[3]{x \cdot \sqrt[4]{x \cdot \sqrt[5]{\dots}}}} \,dx = $$ $$ \int_{0}^1 x \cdot x^{\frac{1}{2}} \cdot x^{\frac{1}{2 \cdot 3}} \cdot x^{\frac{1}{2 \cdot 3 \cdot 4}} \cdot x^{\frac{1}{2 \cdot 3 \cdot 4 \cdot 5}} \cdots \,dx = $$ $$ \int_{0}^1 x^{\sum\limits_{n=1}^{\infty} \frac{1}{n!}} \,dx = $$ $$ \int_{0}^1 x^{e-1} \,dx = \tfrac{x^e}{e} \Big|_0^1 = \frac{1}{e} $$

Quake Fast Inverse Square Root Algorithm

This is the famous (and wickedly fast) code used in Quake to calculate inverse square roots ($1/\sqrt{x}$), which was one of the secrets of the incredible speed of the game.
```c
float Q_rsqrt( float number )
{
    long i;
    float x2, y;
    const float threehalfs = 1.5F;

    x2 = number * 0.5F;
    y  = number;
    i  = * ( long * ) &y;                       // evil floating point bit hack
    i  = 0x5f3759df - ( i >> 1 );               // what the fuck?
    y  = * ( float * ) &i;
    y  = y * ( threehalfs - ( x2 * y * y ) );   // 1st iteration

    return y;
}
```
There are blog posts, videos, and a Wikipedia article. I also wrote an article on De Programmatica Ipsum about this algorithm.
Note: current implementations of sqrt() from math.h are much faster than what was available back in the 90s when Quake was made (modern CPUs even have dedicated instructions for it), so today this code is no longer needed. But back then it was nothing short of a miracle.
Somebody wrote the obligatory Rust version of the code above, with the same caveat that the current sqrt() implementation is both faster and more accurate.
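For the curious, the bit-level trick translates to Python almost directly with the struct module; this is just an illustrative sketch, reinterpreting the float’s bits as a 32-bit integer the same way the C pointer cast does:

```python
import math
import struct

def q_rsqrt(number):
    # Reinterpret the float32's bits as an unsigned 32-bit integer
    i = struct.unpack('<I', struct.pack('<f', number))[0]
    # The magic constant: a first approximation of 1/sqrt(x) via exponent manipulation
    i = 0x5f3759df - (i >> 1)
    # Reinterpret the bits back as a float32
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    # One iteration of Newton's method refines the estimate
    return y * (1.5 - 0.5 * number * y * y)

for x in (1.0, 2.0, 100.0):
    print(f'q_rsqrt({x}) = {q_rsqrt(x):.6f}  vs  1/sqrt = {1 / math.sqrt(x):.6f}')
```

After the single Newton iteration the result is already within a fraction of a percent of the true value.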
Simple Square Root Algorithms in Python
More square root algorithms, though these are definitely not as optimized; they are just recursive and non-recursive versions of Newton’s method, for the sake of illustration.
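Both versions below iterate the same update rule, which comes from applying Newton’s method to $f(x) = x^2 - a$:

$$ x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} = x_k - \frac{x_k^2 - a}{2 x_k} = \frac{1}{2} \left( x_k + \frac{a}{x_k} \right) $$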
```python
import math

def sqroot1(number, guess = 100, iterations = 20):
    # Recursive Newton's method with a fixed number of iterations
    if iterations == 0:
        return guess
    current = (guess + number / guess) / 2
    return sqroot1(number, current, iterations - 1)

def sqroot2(number, precision = .01):
    # Adapted from exercise 2.1 of "Schaum's Computer Science", page 182
    guess = float(number / (1 + len(str(number))))
    iterations = 100
    while abs(guess * guess - number) > precision and iterations > 0:
        iterations -= 1
        guess = (guess + number / guess) / 2
    return guess, iterations

value = 9876543210
val1 = sqroot1(value)
val2, iterations2 = sqroot2(value, 0.00001)
val3 = math.sqrt(value)
print(f'sqroot1({value}) = {val1} (20 iterations)')
print(f'sqroot2({value}) = {val2} ({iterations2} iterations)')
print(f'math.sqrt({value}) = {val3}')
```
Running the Python code above prints this:

```
sqroot1(9876543210) = 99380.79900061179 (20 iterations)
sqroot2(9876543210) = 99380.79900061179 (82 iterations)
math.sqrt(9876543210) = 99380.79900061178
```