Butterworth Filtering and Inverse FFT problems [message #33747]
Wed, 22 January 2003 06:27
jefield
Hi,
I'd be *very* grateful if anyone could help me on this. I am trying to
apply a high-pass Butterworth filter to some data and am encountering
a couple of problems.
My original data set is so large (~100,000 points), and so dominated
by the low-frequency component which I'm trying to remove, that the
FFT just looks (at initial magnification) like a couple of spikes at
either end of a flat line at y=0. In order to make the frequency
components a bit easier to recognise, I have created a 100-element
array which comprises the first 50 and last 50 elements of the FFT
array and have applied the Butterworth filter to that. I have then
replaced the first 50 and last 50 values of the FFT array with these
new values. HOWEVER - when I then take an inverse FFT of the filtered
array, the range of values I see is tiny compared with the range
present if I simply inverse-FFT the filtered 100-element array. Surely
adding in all those extra frequency components would increase the
range of values?
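In case it helps to see it concretely, this is roughly what I'm doing
(just a sketch - the cut-off frequency, filter order and exact form of
the Butterworth response below are placeholders rather than my real
values):

  spec = FFT(data, -1)                   ; forward FFT of the ~100,000 points
  n    = N_ELEMENTS(spec)

  ; 100-element array built from the first 50 and last 50 FFT values
  small = [spec[0:49], spec[n-50:n-1]]

  ; high-pass Butterworth response H = 1/(1 + (fc/f)^(2*order)),
  ; with the negative frequencies in the second half of SMALL
  f     = [FINDGEN(50), FINDGEN(50) - 50.0]
  fc    = 5.0                            ; cut-off in frequency bins (placeholder)
  order = 2                              ; filter order (placeholder)
  h     = 1.0 / (1.0 + (fc / (ABS(f) > 1e-6))^(2*order))
  small = small * h

  ; put the filtered values back into the full FFT array
  spec[0:49]     = small[0:49]
  spec[n-50:n-1] = small[50:99]

  result100  = FFT(small, 1)             ; inverse FFT of the 100-element array
  resultFull = FFT(spec, 1)              ; inverse FFT of the full filtered array

It's the range of values in RESULTFULL compared with RESULT100 that
surprises me, and both of them show the symmetry I describe below.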
My other problem is that in both cases, the final array is
(approximately) symmetrical about a vertical line at the centre of the
plot, while the original array certainly isn't! Presumably this is
something to do with positive/negative frequencies and/or
real/imaginary components?
I'm sure that I'm missing something really simple here, so please
excuse my ignorance. I'm new to DSP, new to IDL, and my university
physics has become very rusty!
Thanks very much!
Best wishes,
Julian