Comments on When do I need to put anti-aliasing filter in front of SAR ADC?
Parent
When do I need to put anti-aliasing filter in front of SAR ADC?
Signal characteristics are:
- 0 to 3.3V span
- 0 to 100 Hz range (there will be noise superposed on top, with unknown characteristics)
- Output resistance Rout=4700 ohm
I want to sample that with a SAR ADC built in the MCU. ADC has following characteristics:
- internal sample and hold capacitance --> Csh = 7pF
- sampling switch resistance --> Rsh = 6kOhm
- sampling time range --> programmable from t_s = 0.1us to t_s = 16us
My idea was to sample the input signal at a much higher rate than needed and implement an RC filter in the firmware so I can modify it easily.
Is there a benefit to putting an RC filter in front of the ADC, and what would it be? Or can I implement it purely in firmware without drawbacks?
Post
Since your noise has "unknown characteristics", you should do low pass filtering in analog before the discrete sampling of the A/D.
The two parameters that matter are the frequency content of the signal, and the sampling rate. In theory, nothing is lost as long as the sample rate is more than twice the highest frequency of the signal being sampled. However, since you don't know what that highest frequency is, it's a good idea to attenuate any such high frequencies before sampling.
Over-sampling, then doing filtering in firmware is a legitimate strategy. However, you still have to avoid aliasing at whatever rate you actually sample at.
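The firmware filter can be as simple as one accumulator update per sample. As a sketch (the fixed-point scaling and the filter fraction here are assumptions, not anything from the question):

```c
#include <stdint.h>

/* One-pole ("RC") low pass filter in fixed point:
   FILT <-- FILT + FF * (NEW - FILT), with FF = 2^-LOG2FF.
   At 50 kHz sampling, LOG2FF = 4 (FF = 1/16) puts the pole
   around 50 kHz / (2*pi*16), roughly 500 Hz. */
#define LOG2FF 4

static int32_t filt;   /* filter state, ADC counts scaled by 2^LOG2FF */

/* Call once per A/D reading; returns the filtered value in ADC counts. */
int32_t filter_sample(int32_t sample)
{
    filt += sample - (filt >> LOG2FF);
    return filt >> LOG2FF;
}
```

The shift-based filter fraction avoids a multiply, which matters on small microcontrollers, and changing LOG2FF moves the pole in octave steps.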
Fortunately, the maximum signal frequency of 100 Hz is "low" for a modern microcontroller. You have a lot of headroom to do over-sampling. Since you have a lot of room, the analog filter doesn't need to be accurate or tight. A single pole at 200 Hz should be good enough. That leaves some room to not cut into the desired signal much, while still heavily attenuating frequencies that cause aliasing at doable sampling rates.
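For example, if you place a capacitor right at the ADC input and rely on the existing 4.7 kΩ source resistance to form the pole (an assumption about your topology), the required value follows from f = 1/(2 π R C):

```c
#define PI 3.14159265358979

/* Capacitance that puts a single RC pole at frequency f
   given series resistance r, from f = 1/(2*pi*R*C). */
double pole_cap(double r, double f)
{
    return 1.0 / (2.0 * PI * r * f);
}
```

pole_cap(4700.0, 200.0) comes out around 170 nF, so a standard 150 nF or 220 nF part would land the pole close enough to 200 Hz for this purpose.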
You didn't give specifics, but you can probably sample every 20 µs at least, which means a rate of 50 kHz. That in turn means only frequencies above 25 kHz will cause artifacts that you can't squash with the firmware filter. Due to the 200 Hz analog low pass filter, such frequencies will be attenuated by (25 kHz)/(200 Hz) = 125 times, or about 7 bits. In addition, the noise amplitude at 25 kHz is probably already much smaller than the signal, but you haven't specified that. Most likely a 200 Hz analog low pass filter, followed by 50 kHz sampling, followed by firmware low pass filtering and decimation, will yield a signal to noise ratio much better than 7 bits.
Since you mentioned sampling time, let's look at that too. This has nothing directly to do with the above calculations, but does dictate a lower limit on the sampling period.
Overall there is a 7 pF cap driven by 10.7 kΩ of resistance (the 4.7 kΩ source plus the 6 kΩ sampling switch). The time constant is therefore (7 pF)(10.7 kΩ) = 75 ns. The half of full scale settling time (the "half life") is 0.693 of that, or 52 ns. That's a useful number because it tells you how long you need to wait for each additional bit of accuracy. If you want 10-bit accuracy, for example, then you need to sample for 519 ns, and 623 ns to get 12 bits. This sample time is added to the A/D conversion time to get the minimum overall period you can sample the signal with.
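The settling arithmetic can be captured in one line: each additional bit of accuracy costs one RC half-life. A sketch using the figures from the question:

```c
#include <math.h>

/* Minimum sample time for n_bits of settling of a cap c
   driven through resistance r: n_bits half-lives of the RC. */
double settle_time(double r, double c, int n_bits)
{
    return n_bits * r * c * log(2.0);
}
```

settle_time(4700.0 + 6000.0, 7e-12, 10) gives about 519 ns, and asking for 12 bits gives about 623 ns, matching the numbers above. Both fit easily inside the ADC's programmable 0.1 µs to 16 µs sample-time range.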