In my opinion, Fibonacci and factorial calculations are not really the best examples. Memoisation really comes into its own when you have:
- A huge range of potential inputs to the calculation in question, but the range is still clearly restricted and known
- You know ahead of time that any actual use of the program will only use a small subset of possible inputs to your calculation (Fibonacci and factorial fail this)
- You don't know which particular inputs will be used at runtime, and therefore which particular results will need to be memoised (Fibonacci and factorial fail this too, up to a point)
Obviously if you do know all possible inputs, and space allows, you can consider replacing the function with a lookup (which is what I'd do for, say, an embedded CRC32 implementation with a known generator).
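For illustration, a table-driven CRC32 might look something like the sketch below (this assumes the common reflected polynomial 0xEDB88320; the table is built once at startup here, but with a known generator it could just as well be generated at build time and stored as a const table in flash):

```c
#include <stdint.h>
#include <stddef.h>

/* 256-entry table for the reflected CRC-32 polynomial 0xEDB88320.
 * Once filled, it replaces the bit-by-bit calculation entirely. */
static uint32_t crc32_table[256];

static void crc32_init(void)
{
    for (uint32_t i = 0; i < 256; i++) {
        uint32_t c = i;
        for (int bit = 0; bit < 8; bit++)
            c = (c & 1) ? (c >> 1) ^ 0xEDB88320u : (c >> 1);
        crc32_table[i] = c;
    }
}

static uint32_t crc32(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    while (len--)
        crc = (crc >> 8) ^ crc32_table[(crc ^ *buf++) & 0xFF];
    return crc ^ 0xFFFFFFFFu;
}
```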
...even better than #2 is when, for any particular run of the program, you can immediately say "the range of potential inputs will be restricted to a subset satisfying these conditions...".
Note that a lot of this might be probabilistic (or intuitive) — sure, someone might try all of the 10^13 possible inputs to your magic calculation, but you know that realistically they won't. If they do, the memoisation becomes pure overhead with no benefit to them. But you may well decide that this is acceptable, or allow bypassing the memoisation in such circumstances.
Here's an example, and I hope it's not too convoluted (or generalised) to be informative.
In some firmware I've written, one part of the program takes a read from an ADC, which could be any number from 0x000 to 0xFFF, and calculates an output for some other part of the program. This calculation also takes a set of user-tuneable numbers, but these are only read at program startup. This calculation is quite a hit the first time it's run.
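To give a rough shape of it (the names and maths here are made up for illustration; the relevant point is that the tuneable parameters are fixed after startup, leaving the 12-bit ADC code as the only input that varies at runtime):

```c
#include <stdint.h>
#include <math.h>

/* Hypothetical tuning parameters: read once at startup, then treated
 * as constants for the rest of the run. */
typedef struct {
    float gain;
    float offset;
    float curve;
} tuning_t;

static tuning_t g_tuning;   /* populated once during init */

/* Stand-in for the expensive calculation. */
float compute_output(uint16_t adc_code /* 0x000..0xFFF */)
{
    float x = (float)adc_code * g_tuning.gain + g_tuning.offset;
    return powf(x, g_tuning.curve);   /* placeholder for the real, slow work */
}
```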
Creating a lookup table ahead of time is ridiculous. The input domain is the Cartesian product of [0x000, ..., 0xFFF] and (a large range of floating point values) and (another large range...) and ... No thanks.
But no user requires or expects the device to work well when conditions change rapidly, and they'd much rather it works better when things are steady. So I make a tradeoff in computational behaviour that reflects these requirements: I want this calculation to be nice and fast when things are stable, and I don't care about when they aren't.
Given the definition of "slowly changing conditions" that the typical user expects, that ADC value is going to settle to a particular value and remain within about 0x010 of its settled value. Which value depends on the conditions.
The result of the calculation can therefore be memoised for these 16 potential inputs. If environmental conditions do change faster than expected, the "furthest" ADC read from the most recent is discarded (eg. if I've cached 0x210 to 0x21F, and then I read 0x222, I drop the 0x210 result).
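A minimal sketch of that scheme, assuming a hypothetical compute_output() for the slow calculation and a 16-slot table, where a miss on a full table evicts the entry whose ADC code is furthest from the current reading:

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdlib.h>

#define CACHE_SLOTS 16   /* matches the ~0x010 window described above */

typedef struct {
    uint16_t adc_code;
    float    result;
    bool     valid;
} memo_entry_t;

static memo_entry_t memo[CACHE_SLOTS];

float compute_output(uint16_t adc_code);   /* the slow calculation */

float memoised_output(uint16_t adc_code)
{
    int free_slot = -1;
    int farthest_slot = 0;
    int farthest_dist = -1;

    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (!memo[i].valid) {
            if (free_slot < 0)
                free_slot = i;
            continue;
        }
        if (memo[i].adc_code == adc_code)
            return memo[i].result;          /* hit: no recalculation */

        int dist = abs((int)memo[i].adc_code - (int)adc_code);
        if (dist > farthest_dist) {
            farthest_dist = dist;
            farthest_slot = i;
        }
    }

    /* Miss: use a free slot if one exists, otherwise evict the entry
     * whose ADC code is furthest from the current reading. */
    int slot = (free_slot >= 0) ? free_slot : farthest_slot;
    memo[slot].adc_code = adc_code;
    memo[slot].result   = compute_output(adc_code);
    memo[slot].valid    = true;
    return memo[slot].result;
}
```

The linear scan over 16 entries costs next to nothing compared with re-running the calculation.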
The drawback here is that if environmental conditions change a lot, that already-slow calculation runs a little slower. We've already established that this is an unusual use-case, but if someone later reveals that actually, they do want to operate it under unusually unstable conditions, I can implement a way to bypass the memoisation.
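If that ever happens, the bypass could be as simple as a compile-time switch (a sketch; MEMOISE_ADC_CALC and output_for are made-up names):

```c
#include <stdint.h>

/* Hypothetical bypass: a build-time switch (it could equally be a
 * runtime flag) that routes callers straight to the slow calculation
 * and skips the cache entirely. */
float memoised_output(uint16_t adc_code);   /* cached path (sketched above) */
float compute_output(uint16_t adc_code);    /* uncached, slow path */

#ifndef MEMOISE_ADC_CALC
#define MEMOISE_ADC_CALC 1
#endif

float output_for(uint16_t adc_code)
{
#if MEMOISE_ADC_CALC
    return memoised_output(adc_code);
#else
    return compute_output(adc_code);
#endif
}
```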