A few months ago I wrote something about algorithms for computing Fibonacci numbers, which was discussed in some of the nerdier corners of the internet (and even, curiously, made it into print).
Several people suggested that Binet’s closed-form formula for Fibonacci numbers might lead to an even faster algorithm. That’s an interesting idea, which we’re going to explore in this post. So, what is Binet’s formula? It goes like this:

    fib(n) = (φ^n − (1 − φ)^n) / √5

where φ = (1 + √5) / 2 is the golden ratio.
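As a quick sanity check: for n = 10 we have φ^10 ≈ 122.992 and (1 − φ)^10 ≈ 0.008, so their difference divided by √5 is ≈ 55.000, which is indeed fib(10).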
I’d like to explain why this formula works, because it’s a lovely story; and, despite what you might think from most of the explanations on the web, it’s perfectly possible to understand and appreciate it without knowing what “eigenvalue” means. But I fear that would be too long a digression here, so I’ll save it for another post.
There is a nice trick we can use to simplify the computation of this expression, taking advantage of the fact we know it always comes out as an integer. Since φ is about 1.618, its negative cousin (1 − φ) is about −0.618, which means that the term

    (1 − φ)^n / √5

is always between -1/2 and 1/2. So we can just compute

    φ^n / √5

and round to the nearest integer.
The simplest way to implement this idea is to use ordinary floating-point arithmetic. For example, in C we might write:
#include <math.h>

double fib_fixed(long n)
{
    double phi = (sqrt(5) + 1) / 2.0;
    return round(pow(phi, n) / sqrt(5));
}
This is simple and fast. The only trouble is that it almost always gives the wrong answer. On my machine the answer is wrong whenever n > 70. For example fib_fixed(71) gives 308061521170130, whereas the correct answer is 308061521170129, and it only gets worse from there.
We can improve the situation a little by tweaking the computation, but, however cunningly we do that, we’ll soon run into a fundamental obstacle: fib(79) is 14472334024676221, which can’t even be represented in an IEEE double. You can see that illustrated as a comedy Google calculator fail.
So if we want to implement this in a general way then we need to use arbitrary-precision floating-point arithmetic. For some reason this has not gone mainstream in the way that big ints have done, but the GMP library has a good implementation, so we’ll use that. Here’s the code:
void fib_float(mpf_t *result, long n)
{
    mpf_t sqrt5, phi;
    mp_bitcnt_t bitcnt;

    /* We need about n lg(phi) bits of precision */
    bitcnt = n * 7 / 10;

    /* Initialise sqrt5 to the square root of 5 */
    mpf_init2(sqrt5, bitcnt);
    mpf_sqrt_ui(sqrt5, 5);

    /* Initialise phi to the Golden Ratio */
    mpf_init2(phi, bitcnt);
    mpf_set(phi, sqrt5);
    mpf_add_ui(phi, phi, 1);
    mpf_div_2exp(phi, phi, 1);

    /* Compute phi ^ n / sqrt5 */
    mpf_init2(*result, bitcnt);
    mpf_pow_ui(*result, phi, n);
    mpf_div(*result, *result, sqrt5);

    /* Dispose of the temporary variables */
    mpf_clear(sqrt5);
    mpf_clear(phi);
}
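In case it helps, here is a minimal driver of my own (not part of the original code) showing one way to call fib_float and round the result to the nearest integer before printing. GMP’s mpf type has floor, ceiling and truncate but no round-to-nearest, so the sketch rounds by adding 1/2 and flooring:

#include <stdio.h>
#include <gmp.h>

/* ... fib_float as above ... */

int main(void)
{
    mpf_t result, half;

    fib_float(&result, 1000);

    /* fib_float leaves phi^n / sqrt(5) in result; round it to the
       nearest integer by adding 1/2 and taking the floor. */
    mpf_init_set_d(half, 0.5);
    mpf_add(result, result, half);
    mpf_floor(result, result);

    gmp_printf("fib(1000) = %.0Ff\n", result);

    mpf_clear(result);
    mpf_clear(half);
    return 0;
}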
This works fine, gives correct answers, and is reasonably fast. But how does it compare to the integer methods? For comparison, I rewrote the fib_fast algorithm from my last post to use GMP integers. Here’s the code:
void fib_int(mpz_t *result, long n)
{
    mpz_t a, b, c;
    long bit, mask;

    /* Set 'bit' to the most-significant 1-bit of 'n' */
    bit = 1; mask = 1;
    while (n > mask) {
        bit = bit << 1;
        mask = (mask << 1) | 1;
    }

    mpz_init_set_ui(a, 1);
    mpz_init_set_ui(b, 0);
    mpz_init_set_ui(c, 1);

    while (bit > 0) {
        if ( (n & bit) != 0 ) {
            /* a, b = (a + c) * b, (b * b) + (c * c) */
            mpz_add(a, a, c);
            mpz_mul(a, a, b);
            mpz_mul(b, b, b);
            mpz_addmul(b, c, c);
        } else {
            /* a, b = (a * a) + (b * b), b * (c + a) */
            mpz_add(c, c, a);    /* Temporarily set c := c + a */
            mpz_mul(a, a, a);
            mpz_addmul(a, b, b);
            mpz_mul(b, b, c);
        }
        mpz_add(c, a, b);
        bit >>= 1;
    }

    mpz_init_set(*result, b);

    mpz_clear(a);
    mpz_clear(b);
    mpz_clear(c);
}
Then I made a graph showing the execution time of both methods, against the input n. The vertical axis shows elapsed time, in ticks, when running on my computer. (But it doesn’t really matter what it is, since we are interested in the relationship between the speed of the two methods, rather than the exact speed of either.)
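A crude harness for this kind of measurement might look like the following. This is my own sketch, not the code used to produce the graphs; it times each call with clock() and prints CSV rows for a range of n:

#include <stdio.h>
#include <time.h>
#include <gmp.h>

/* ... fib_float and fib_int as defined above ... */

int main(void)
{
    long n;

    printf("n,float_ticks,int_ticks\n");
    for (n = 1000; n <= 1000000; n *= 2) {
        mpf_t f;
        mpz_t z;
        clock_t t0, t1, t2;

        t0 = clock();
        fib_float(&f, n);
        t1 = clock();
        fib_int(&z, n);
        t2 = clock();

        printf("%ld,%ld,%ld\n", n, (long)(t1 - t0), (long)(t2 - t1));

        mpf_clear(f);
        mpz_clear(z);
    }
    return 0;
}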
Clearly the integer method is faster, but the curves are not obviously different shapes. A log-log plot shows the relationship more clearly:
which supports the idea (consistent with what you’d expect in theory) that both algorithms have the same big-O complexity, but the integer method is consistently about five times faster.
Floating-point arithmetic has a reputation for being difficult and unreliable; a technique that is good for finding approximate answers quickly, but best avoided when a precise answer is required. And actually I think that’s a fair assessment on the whole, but it’s interesting to see how arbitrary-precision floating point arithmetic can give us an exact answer very easily in this case. Of course we lose a lot of the speed advantage that floating-point conventionally offers, because our hardware doesn’t have the same fast native support for these “big floats” that it does for IEEE doubles.
Another approach
A related idea is to take advantage of the fact that all the calculations produce numbers of a special form, e.g. a + bφ for integers a and b. This works essentially because φ^2 = 1 + φ, and so any polynomial in φ can be reduced to that form.
This turns out to be precisely equivalent to the matrix technique we used last time, because φ^n = fib(n-1) + fib(n)φ, and so computing powers of φ in this representation is just directly equivalent to computing Fibonacci numbers.
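To make that concrete, here is a minimal sketch of my own (the names GoldenInt and fib_golden are just illustrative, not from the post): represent a number as a pair (a, b) standing for a + bφ, multiply such pairs using φ^2 = 1 + φ, and compute φ^n by repeated squaring. It uses plain long longs rather than GMP, so it only works up to roughly n = 90 before overflowing:

#include <stdio.h>

/* Numbers of the form a + b*phi, with integer a and b. */
typedef struct { long long a, b; } GoldenInt;

/* (a + b*phi)(c + d*phi) = ac + (ad + bc)*phi + bd*phi^2
 *                        = (ac + bd) + (ad + bc + bd)*phi,
 * using phi^2 = 1 + phi.                                   */
static GoldenInt mul(GoldenInt x, GoldenInt y)
{
    GoldenInt r;
    r.a = x.a * y.a + x.b * y.b;
    r.b = x.a * y.b + x.b * y.a + x.b * y.b;
    return r;
}

/* phi^n by repeated squaring; since phi^n = fib(n-1) + fib(n)*phi,
 * fib(n) can be read off the phi-coefficient of the result.       */
static long long fib_golden(long n)
{
    GoldenInt result = {1, 0};   /* phi^0 = 1 + 0*phi */
    GoldenInt power  = {0, 1};   /* phi   = 0 + 1*phi */
    while (n > 0) {
        if (n & 1)
            result = mul(result, power);
        n >>= 1;
        if (n > 0)               /* skip the final, overflow-prone squaring */
            power = mul(power, power);
    }
    return result.b;
}

int main(void)
{
    printf("%lld\n", fib_golden(71));   /* should print 308061521170129 */
    return 0;
}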
A variation of this – as suggested on HN – is to use the fact that φ^n is always of the form

    (a + b√5) / 2

for integers a and b. This idea leads to the Lucas numbers, because

    φ^n = (luc(n) + fib(n)√5) / 2

where luc(n) = fib(n-1) + fib(n+1), and hence to an efficient integer algorithm based on the identities below (a sketch of such an algorithm follows them):
fib(2n) = luc(n) fib(n)
luc(2n) = 5 fib(n)^2 + 2 (-1)^n
luc(2n + 1) = (5/2) fib(n) (fib(n) + luc(n)) + (-1)^n
fib(2n + 1) = luc(2n + 1) - 2 luc(n) fib(n)
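For reference, here is a rough sketch of my own (not the author’s linked code) of a doubling algorithm in this spirit. It tracks the pair (fib(k), luc(k)) while consuming the bits of n from the top, using the closely related identities fib(2k) = fib(k) luc(k), luc(2k) = luc(k)^2 - 2 (-1)^k, fib(k+1) = (fib(k) + luc(k))/2 and luc(k+1) = (5 fib(k) + luc(k))/2 rather than exactly the ones above:

#include <gmp.h>

void fib_luc(mpz_t *result, long n)
{
    mpz_t f, l, t;
    long bit;
    int sign = 1;                 /* holds (-1)^k for the current index k */

    mpz_init_set_ui(f, 0);        /* fib(0) */
    mpz_init_set_ui(l, 2);        /* luc(0) */
    mpz_init(t);

    /* Find the index of the most-significant 1-bit of n */
    for (bit = 0; (n >> bit) > 1; bit++)
        ;

    for (; bit >= 0; bit--) {
        /* Doubling step: k -> 2k */
        mpz_mul(t, f, l);                     /* fib(2k) = fib(k) luc(k)       */
        mpz_mul(l, l, l);
        if (sign > 0) mpz_sub_ui(l, l, 2);    /* luc(2k) = luc(k)^2 - 2 (-1)^k */
        else          mpz_add_ui(l, l, 2);
        mpz_set(f, t);
        sign = 1;                             /* 2k is even */

        if ((n >> bit) & 1) {
            /* Increment step: 2k -> 2k + 1 */
            mpz_add(t, f, l);                 /* t = fib(2k) + luc(2k) */
            mpz_mul_ui(f, f, 5);
            mpz_add(l, f, l);
            mpz_fdiv_q_2exp(l, l, 1);         /* luc(2k+1) = (5 fib(2k) + luc(2k)) / 2 */
            mpz_fdiv_q_2exp(f, t, 1);         /* fib(2k+1) = (fib(2k) + luc(2k)) / 2   */
            sign = -1;                        /* 2k + 1 is odd */
        }
    }

    mpz_init_set(*result, f);
    mpz_clear(f);
    mpz_clear(l);
    mpz_clear(t);
}

Like fib_int, this does a constant number of big-integer operations per bit of n, so you would expect the same big-O behaviour and only a different constant factor.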
The complete code for this algorithm is here, and more generally all the code used in this post is on GitHub. Remarks and critique would be very welcome. You can follow me on Twitter at @robinhouston.
Comments

To be fair, you should compare the arbitrary-precision integer version against the arbitrary-precision floating point version, right?
Yeah, that’s what I did. The graphs show the arbitrary-precision floating point algorithm compared with the arbitrary-precision integer algorithm.
Sorry it was unclear. Out of interest, what did you think I meant? (I’m wondering how I could make it clearer.)
This article made me check my Fibonacci routines in Delphi, and sure enough, they were slightly off at “high” values like 71 and 79. However, Delphi and its ancestors have access to Intel’s extended precision reals (extended type; most likely “long double” for most C compilers). Both the integer and floating point functions were using extendeds (long doubles) where appropriate, but the constants I had defined for Phi and SqrtOf5 were only defined to the precision of a double. After looking up the current definitions of those values (in Wikipedia, which changed a lot since those routines were written) and adding a few more digits of precision, the old routines provide correct values over a greater range of their domains. It’s not important enough for me to test thoroughly right now, and I’m short on time… but it looks pretty good on quick tests.
Oh yeah… I don’t return doubles; either int64’s (long long) or extended (long double). The long long return type doesn’t overflow until the input parameter is greater than 92! Of course the extended function will never show more than 20 digits of precision… One of these days I’ll find a good arbitrary-precision lib for floats for old Delphis…
Very interesting article! I was wondering how this approach would match (performance-wise) against a very simple and naive approach like this: http://pastebin.com/quZ3qz7q (Proof of concept code).
I think there’s a small problem in your code for Binet’s formula. You estimate the value of the base-2 log of the golden ratio right, but when you multiply by n and it gets typecast to an unsigned long (per the GMP docs, mp_bitcnt_t is unsigned long) many (if not all) C compilers floor the result to typecast. You can lose some precision because you’re just shy of what you need. This was evidenced by some large (circa 10^6) computations compared against an iterative integer computation (unrolling the tail recursion of a recursive solution) also using GMP.
You should add a +1 to the precision.
Mike: Interesting comment. Thanks. Since 7/10 is a slight over-estimate for lg(φ) I thought I was safe here, but I didn’t analyse it carefully.
Do you have a concrete example where my float-calc program gives an incorrect answer? I can’t seem to find one. Even 10^7 is checking out for me.