If you need the exact proportion, then there is no way around storing the individual actions so that you know exactly when to forget them.
If an approximate answer is good enough, then you can use a simple exponentially weighted moving average, updated on each observation with:
weighted_average = 0.99 * weighted_average + 0.01 * observation
With these constants, roughly 63.4% of the average's weight comes from the most recent 100 observations, 23.2% from the 100 before that, 8.5% from the 100 before those, and so on.
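Those weight fractions can be checked numerically; a minimal sketch, assuming the 0.99 decay factor above:

```python
# Fraction of the EWMA's total weight contributed by each
# successive block of 100 observations, for decay factor 0.99.
# The weight of the k-th most recent observation is proportional
# to 0.99**k, so a block's share is a difference of powers.
decay = 0.99
for block in range(3):
    share = decay ** (100 * block) - decay ** (100 * (block + 1))
    print(f"observations {100 * block + 1}-{100 * (block + 1)} back: {share:.1%}")
```

Running it prints shares of about 63.4%, 23.2%, and 8.5%, matching the figures above.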
In general, if you want something that behaves roughly like an average over the last n samples, use (1 - 1/n) and 1/n in place of 0.99 and 0.01 respectively.
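The "roughly like an average over the last n samples" claim holds for any n: with update weight 1/n, the most recent n samples always carry 1 - (1 - 1/n)^n of the total weight, which approaches 1 - 1/e (about 63%) as n grows. A quick check, with the particular values of n chosen arbitrarily:

```python
import math

# Share of total EWMA weight held by the most recent n observations
# when the update weight is 1/n. This converges to 1 - 1/e.
for n in (10, 100, 1000):
    recent_share = 1 - (1 - 1 / n) ** n
    print(f"n={n}: last {n} samples hold {recent_share:.1%} of the weight")

print(f"limit 1 - 1/e = {1 - 1 / math.e:.1%}")
```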
This average starts off biased toward its initial value (typically zero), and that bias takes a few hundred observations to die away. There are a couple of ways to correct it. The simplest is to count observations and set:
w = max(1/observations, 1/n)
weighted_average = (1 - w) * weighted_average + w * observation
This computes an exact average of all observations for the first n observations, and then switches to exponential weighting after that.
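Putting the correction together, a minimal sketch (the class name and the default n=100 are illustrative, not from the original):

```python
class CorrectedEWMA:
    """Exact running mean for the first n observations, then an
    exponentially weighted moving average with update weight 1/n."""

    def __init__(self, n=100):
        self.n = n
        self.count = 0
        self.average = 0.0

    def update(self, observation):
        self.count += 1
        # Early on, w = 1/count reproduces the exact mean;
        # once count > n, w is pinned at 1/n and the average
        # becomes a plain EWMA.
        w = max(1 / self.count, 1 / self.n)
        self.average = (1 - w) * self.average + w * observation
        return self.average
```

For the first n observations this returns exactly the sample mean (e.g. feeding it 1, 2, 3 yields 2.0), so the startup bias never appears.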
This technique is widely used in areas as far apart as the Unix load average, updating neural networks, and tracking various financial indicators on Wall St.