I have several questions about making a lowpass filter in Python with scipy.signal. I'd appreciate any answers.
- I'm trying to understand how scipy.signal.lfilter works. Does it take the filter coefficients and multiply them by the data values, so that for data[500] it does

      for b in range(0, len(coeff)):
          filtered = filtered + data[500 - b] * coeff[b]

  What's the difference between doing that and what lfilter is doing?
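For an FIR filter, i.e. a denominator of a = [1.0], lfilter computes exactly that direct-form sum at every output sample, with samples before the start of the array treated as zero. A minimal sketch checking the two against each other (the input signal and filter here are placeholders, not the question's data):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
sig = rng.standard_normal(200)       # placeholder input signal
coeff = signal.firwin(31, 0.1)       # small example FIR filter

# SciPy's implementation (a = 1.0 makes it a pure FIR convolution)
filtered_scipy = signal.lfilter(coeff, 1.0, sig)

# Manual direct-form convolution: y[n] = sum_b coeff[b] * sig[n - b]
filtered_manual = np.zeros_like(sig)
for n in range(len(sig)):
    for b in range(len(coeff)):
        if n - b >= 0:               # samples before the array start count as zero
            filtered_manual[n] += sig[n - b] * coeff[b]

print(np.allclose(filtered_scipy, filtered_manual))  # prints True
```

The difference is purely practical: lfilter runs the same recurrence in compiled code, supports IIR denominators (a other than 1), and can carry filter state across chunks via zi/zf.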
- I also don't understand how the number of taps affects when the filter output "starts". For different numbers of taps I see the output begin rising at different points in the data; I assumed it would start only after it had enough samples to use all the coefficients. In my example I have 683 coefficients from scipy.signal.firwin, but the filtered output only starts around sample 300-400, as you can see in the image "Firwin lowpass filter" (filter output in blue; sine wave in red; x goes from 0 to 1000):
    from scipy import signal
    a = signal.firwin(683, cutoff = 1.0/30, window = "hamming")  # 1.0/30, not 1/30, which is 0 on Python 2
    t = signal.lfilter(a, 1.0, sig)  # sig is the input sine wave
With fs = 1 and cutoff = fs/30, I get a lowpass filter from firwin whose output is delayed a lot, as shown in the image above. What can I do to reduce the delay?
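A symmetric (linear-phase) FIR filter delays every frequency component by (numtaps - 1) / 2 samples, so a 683-tap filter introduces a delay of 341 samples, which matches the output appearing around sample 300-400. A sketch of two common fixes, with a made-up sine wave standing in for sig:

```python
import numpy as np
from scipy import signal

n = np.arange(1000)
sig = np.sin(2 * np.pi * n / 100.0)   # placeholder for the question's sine wave

numtaps = 683
b = signal.firwin(numtaps, cutoff=1.0 / 30, window="hamming")

# A linear-phase FIR delays the signal by (numtaps - 1) / 2 samples.
delay = (numtaps - 1) // 2            # 341 samples here

# Fix 1: filter, then discard the first `delay` samples so the output
# lines up with the input.
y = signal.lfilter(b, 1.0, sig)
y_aligned = y[delay:]

# Fix 2: filtfilt runs the filter forward and backward, cancelling the
# phase delay entirely (at twice the cost, and only for offline data).
# padlen=0 because the default padding would exceed this short signal.
y_zero_phase = signal.filtfilt(b, 1.0, sig, padlen=0)
```

Using fewer taps also shrinks the delay, so it's worth not requesting more taps than your transition-width and attenuation spec actually needs.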
How would changing the sampling rate affect the filter?
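On the sampling-rate question: by default firwin interprets cutoff as a fraction of the Nyquist frequency (fs/2), so the coefficients depend only on that ratio; change fs and the same coefficients correspond to a different physical cutoff in Hz. Newer SciPy versions accept an fs argument so you can give the cutoff in Hz directly. A sketch with made-up frequencies:

```python
import numpy as np
from scipy import signal

fs = 1000.0        # assumed sampling rate in Hz
cutoff_hz = 50.0   # assumed desired physical cutoff in Hz

# Default convention: cutoff normalized to the Nyquist frequency fs/2.
b1 = signal.firwin(101, cutoff_hz / (fs / 2), window="hamming")

# Same filter, specifying the sampling rate explicitly (newer SciPy).
b2 = signal.firwin(101, cutoff_hz, window="hamming", fs=fs)

print(np.allclose(b1, b2))  # prints True
```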
I've found two methods online to approximate the number of taps (ripple and attenuation being the linear passband and stopband ripple values, and transition_width in the same units as fs):

    import math

    # Bellanger's estimate (2.0/3.0 rather than 2/3, which is 0 on Python 2)
    (2.0/3.0) * math.log10(1/(10*ripple*attenuation)) * fs/transition_width

    # Kaiser's estimate
    ((-10*math.log10(ripple*attenuation) - 13)/14.6) * fs/transition_width
Which is a better approximation?
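Both are rules of thumb rather than exact answers, and for typical specs they land close to each other, so either works as a starting point: estimate, design, check the actual frequency response, adjust. A sketch evaluating both for one made-up spec (the ripple, attenuation, and transition values are example assumptions, not from the question):

```python
import math

def bellanger_taps(delta_p, delta_s, fs, transition_width):
    # Bellanger's rule of thumb for FIR filter length
    return (2.0 / 3.0) * math.log10(1.0 / (10.0 * delta_p * delta_s)) * fs / transition_width

def kaiser_taps(delta_p, delta_s, fs, transition_width):
    # Kaiser's rule of thumb for FIR filter length
    return ((-10.0 * math.log10(delta_p * delta_s) - 13.0) / 14.6) * fs / transition_width

# Assumed spec: 1% passband ripple, 60 dB stopband (delta_s = 1e-3),
# fs = 1, transition width = fs / 100.
print(round(bellanger_taps(0.01, 1e-3, 1.0, 0.01)))  # prints 267
print(round(kaiser_taps(0.01, 1e-3, 1.0, 0.01)))     # prints 253
```

SciPy also ships signal.kaiserord, which returns a tap-count estimate (plus the Kaiser beta) for Kaiser-window designs from a ripple spec and transition width.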
Any clarification would be greatly appreciated.