Post History

Moving average that uses less memory?


posted 4y ago by Olin Lathrop · edited 4y ago by Olin Lathrop

Answer
#4: Post edited by Olin Lathrop · 2020-06-22T22:42:44Z (over 4 years ago)
<p>You can't eliminate all noise while using no memory, no processing cycles, and adding no lag. However, you can do much better than a brute-force "moving average" filter.
<h3>FIR versus IIR filters</h3>
<p>There are two broad classes of digital filters, FIR (finite impulse response) and IIR (infinite impulse response). There is more to it, but as a good first introduction, FIR filters are table-based, while IIR filters are equation-based.
<p>FIR filters can realize more arbitrary filter functions, but require a lot of memory to do so. IIR filters use specific computations to get the desired filter function. This means they are limited to filter functions that can be realized by reasonable iterative equations, but generally require little state.
<h3>Basic digital low pass filter</h3>
<p>The most common filter I use for reducing high frequency noise in situations like you describe is the IIR filter:
<p>&nbsp; FILT &lt;-- FILT + FF(NEW - FILT)
<p>This is a single-pole low pass filter, the digital equivalent of a resistor in series followed by a capacitor to ground.
<p>FILT is a piece of persistent state, and the only persistent variable you need to compute this filter. NEW is the new value the filter is being updated with this iteration. FF is the <i>filter fraction</i>, which adjusts the "heaviness" of the filter.
<p>Look at this algorithm and see that for FF = 0 the filter is infinitely heavy, since the output never changes. For FF = 1 it's really no filter at all, since the output just follows the input. Useful values are in between.
<p>On small systems you pick FF to be 1/2<sup>N</sup> so that the multiply by FF can be accomplished as a right shift by N bits. For example, FF might be 1/16 and the multiply by FF therefore a right shift of 4 bits. Other than that shift, this filter needs only one subtract and one add, although the numbers usually need to be wider than the input value (more on numerical precision later).
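<p>As a rough illustration only (the real implementation later in this answer is dsPIC assembly, and the names here are just for the sketch), the same one-pole update in C with FF = 1/2<sup>4</sup>, so the multiply is a right shift of 4 bits:
<pre>
#include &lt;stdint.h>

/* One pole of FILT &lt;-- FILT + FF(NEW - FILT), with FF = 1/2^FFBITS so the
   multiply is a right shift.  Assumes the compiler implements >> on negative
   signed values as an arithmetic shift, which is the case on typical
   embedded targets. */
#define FFBITS 4                  /* FF = 1/16 */

static int32_t filt;              /* persistent filter state */

int32_t filter_update(int32_t new_value)
{
    filt += (new_value - filt) >> FFBITS;
    return filt;
}
</pre>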
<p>I usually take A/D readings significantly faster than they are needed and apply two of these filters cascaded. This is the digital equivalent of two R-C filters in series, and attenuates by 12 dB/octave above the rolloff frequency. However, for A/D readings it's usually more relevant to look at the filter in the time domain by considering its step response. This tells you how fast your system will see a change when the thing you are measuring changes.
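<p>Cascading just means feeding the updated FILT of one pole in as the NEW of the next. A minimal C sketch of two cascaded poles (again purely illustrative):
<pre>
#include &lt;stdint.h>

/* Two cascaded poles: the updated FILT of one pole becomes the NEW of the
   next.  Each pole has its own persistent state. */
static int32_t one_pole(int32_t *filt, int32_t new_value, int ffbits)
{
    *filt += (new_value - *filt) >> ffbits;
    return *filt;
}

static int32_t filt1, filt2;      /* one FILT per pole */

int32_t filter_sample(int32_t adc_reading)
{
    return one_pole(&filt2, one_pole(&filt1, adc_reading, 4), 4);
}
</pre>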
<h3>PLOTFILT program</h3>
<p>To facilitate designing these filters (which only means picking FF and deciding how many of them to cascade), I use my program PLOTFILT. You specify the number of shift bits for each FF in the cascaded series of filters, and it computes the step response and other values. For example, here is the result of "PLOTFILT 4 4":
<p>
<img src="https://electrical.codidact.com/uploads/LEFAV7754SHTXrr5nubrfxCT">
<p>The two parameters to PLOTFILT mean there will be two filters cascaded of the type described above. The values of 4 indicate the number of shift bits to realize the multiply by FF. The two FF values are therefore 1/16 in this case.
<p>The red trace is the unit step response, and is the main thing to look at. For example, this tells you that if the input changes instantaneously, the output of the combined filter will settle to 90% of the new value in 60 iterations. If you care about 95% settling time then you have to wait about 73 iterations, and for 50% settling time only 26 iterations.
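<p>These settling numbers can be sanity-checked without PLOTFILT by simply iterating the filter equation on a unit step. The following C sketch (not the PLOTFILT program, just a floating point approximation) reports the first iteration at which the cascade reaches a given fraction of the final value:
<pre>
#include &lt;stdio.h>

/* Feed a unit step into a cascade of one-pole filters, FF = 1/2^shift[p]
   for pole p, and return the first iteration at which the output reaches
   the requested fraction of the final value. */
static int settling_iters(const int *shift, int npoles, double frac)
{
    double filt[8] = {0.0};               /* assumes npoles &lt;= 8 */
    for (int n = 1; ; n++) {
        double x = 1.0;                   /* unit step input */
        for (int p = 0; p &lt; npoles; p++) {
            filt[p] += (x - filt[p]) / (double)(1 &lt;&lt; shift[p]);
            x = filt[p];
        }
        if (x >= frac) return n;
    }
}

int main(void)
{
    int two_poles[] = {4, 4};             /* the "PLOTFILT 4 4" case */
    printf("90%% settling: %d iterations\n",
           settling_iters(two_poles, 2, 0.90));
    return 0;
}
</pre>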
<p>The green trace shows the output from a single full amplitude spike. This gives you some idea of the random noise attenuation. It looks like no single sample will cause more than a 2.5% change in the output.
<p>The blue trace gives a subjective feeling for what this filter does with white noise. This is not a rigorous test, since there is no guarantee of exactly what the random numbers picked as the white noise input for this run of PLOTFILT contained. It's only to give you a rough feeling for how much the noise is squashed and how smooth the result is.
<p>PLOTFILT is available for free in my Full Runtime release at <a href="http://www.embedinc.com/pic/dload.htm">http://www.embedinc.com/pic/dload.htm</a>.
<h3>Computation, noise attenuation, and delay</h3>
<p>To some extent, the amount of computation per filter iteration, the noise attenuation, and the delay to specific settling fractions can be traded off against each other.
<p>For example, more poles with fewer shift bits each can result in the same random noise attenuation. This example has 5 poles of 3 shift bits each, compared to the 2 poles of 4 shift bits each above:
<p>
<img src="https://electrical.codidact.com/uploads/3N9yxE6coXogs9EAgZfp9rWf">
<p>Note that the green trace has the same peak of about 2.5%. With the green peak held constant, more poles result in more initial delay but a steeper eventual rise. In this case, the filter settles to values above 87% faster than the previous one. This is useful if you care about 90% settling time, for example. Note, however, that 2&frac12; times more computation is required.
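<p>With the settling-time sketch from the PLOTFILT section above, the two configurations can be compared directly (this fragment reuses the settling_iters function defined there):
<pre>
int two_poles[]  = {4, 4};                /* 2 poles, FF = 1/16 each */
int five_poles[] = {3, 3, 3, 3, 3};       /* 5 poles, FF = 1/8 each  */

printf("90%% settling, 2 poles of 4 bits: %d iterations\n",
       settling_iters(two_poles, 2, 0.90));
printf("90%% settling, 5 poles of 3 bits: %d iterations\n",
       settling_iters(five_poles, 5, 0.90));
</pre>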
<h3>Numerical precision</h3>
<p>Note that the multiply by FF creates Log<sub>2</sub>(1/FF) new bits below the binary point. On small systems, FF is usually chosen to be 1/2<sup>N</sup> so that this multiply is actually realized by a right shift of N bits. For no loss, the FILT state for each pole must then be the width of the input value plus N bits.
<p>For example, consider a 10 bit A/D reading with two poles of N = 3 (FF = 1/8) applied. For no loss, the first FILT must be at least 13 bits wide, and the second 16 bits wide.
<p>FILT is therefore usually a fixed point integer. Note that this doesn't change any of the math from the processor's point of view. In the example above, the first FILT could be thought of as 10.3 fixed point, and the second as 10.6 fixed point. Both fit into what the processor considers a 16 bit integer.
<p>To re-use the same code for each pole, it is usually convenient to have each FILT be the same format. In this case both would be 10.6, although the low 3 bits of the first FILT would remain 0.
<p>On most processors you'd be doing 16 bit integer operations due to the 10 bit A/D readings anyway. In that case you can still do exactly the same 16 bit integer operations, but start with the A/D readings left shifted by 6 bits. The processor doesn't know the difference and doesn't need to. Doing the math on whole 16 bit integers works whether you consider them to be 10.6 fixed point or true 16 bit integers (16.0 fixed point).
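<p>A C sketch of this arrangement, with the two 10.6 fixed point filter states and the left shift by 6 bits on the way in (32 bit variables are used here only to stay clear of C's overflow rules; on a 16 bit processor these would live in 16 bit words as just described):
<pre>
#include &lt;stdint.h>

#define FFBITS 3                          /* FF = 1/8 */

static int32_t filt1, filt2;              /* 10.6 fixed point filter states */

/* adc10 is the raw 10 bit reading, 0..1023.  It is converted to 10.6 fixed
   point by a left shift of 6, run through the two poles, then converted
   back.  The integer math is identical either way; only the interpretation
   of the bits changes.  Arithmetic >> on negative values is assumed, as in
   the earlier sketch. */
uint16_t filter_adc(uint16_t adc10)
{
    int32_t x = (int32_t)adc10 &lt;&lt; 6;      /* 10 bit reading to 10.6 fixed point */

    filt1 += (x - filt1) >> FFBITS;       /* first pole  */
    filt2 += (filt1 - filt2) >> FFBITS;   /* second pole */

    return (uint16_t)(filt2 >> 6);        /* back to 10 bit units */
}
</pre>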
<h5>Signed intermediate values</h5>
<p>Examine the filter equation carefully and you notice that the result of NEW - FILT can be &plusmn;NEW. This means the result of the subtract must be at least one bit wider than NEW.
<p>For simplicity, I usually leave one extra high bit in each FILT variable. That high bit will always be 0 when the new FILT is stored, but it simplifies intermediate calculations if the same number format can be used for all values. In the example above, that would require 17 bits for the second FILT, meaning 24 bits would be used on most processors. Or, you could use 11.5 fixed point for all values, and decide that the quantization noise due to the slight loss of precision in the second FILT is inconsequential in your application.
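<p>In C, one simple way to honor that extra bit while keeping FILT in a 16 bit word is to widen only the intermediate difference, for example:
<pre>
#include &lt;stdint.h>

#define FFBITS 3

static uint16_t filt;                     /* 10.6 fixed point, never negative */

/* NEW - FILT can be as large as +/-NEW, so the subtraction needs one more
   bit than NEW itself.  Here only the intermediate difference is widened;
   the stored FILT stays 16 bits. */
void filter_pole(uint16_t new16)          /* new value in the same 10.6 format */
{
    int32_t diff = (int32_t)new16 - (int32_t)filt;   /* needs a 17 bit signed range */
    filt = (uint16_t)((int32_t)filt + (diff >> FFBITS));
}
</pre>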
<h3>Firmware considerations</h3>
<p>I usually write a subroutine or macro to perform one filter pole operation, then apply that to each pole. Whether it's a subroutine or a macro depends on whether cycles or program memory are more important in that particular project. The amount of program memory available in microcontrollers has grown such that it hasn't been a limitation in recent projects, so I mostly use a macro now.
<p>I use some scratch state to pass NEW into the subroutine/macro, which updates FILT, but also loads the result into the same scratch state NEW was in. This makes it easy to apply to multiple poles, since the updated FILT of one pole is the NEW of the next pole. It can also be useful to have a pointer point to FILT on the way in, and have it updated to just after FILT on the way out. That way the filter routine automatically operates on consecutive filters in memory if called multiple times, as in the sketch below.
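<p>A C sketch of that calling convention (names are illustrative only, not taken from the assembly that follows):
<pre>
#include &lt;stdint.h>

/* NEW goes in, the updated FILT comes back (so it is the NEW for the next
   pole), and the state pointer is advanced just past the FILT that was
   updated.  Calling it repeatedly therefore walks consecutive filter
   states in memory. */
static int32_t filter_one_pole(int32_t new_value, int32_t **state, int ffbits)
{
    **state += (new_value - **state) >> ffbits;
    return *(*state)++;                   /* return updated FILT, advance pointer */
}

static int32_t filters[2];                /* two cascaded poles, consecutive in memory */

int32_t filter_sample(int32_t adc_reading)
{
    int32_t *p = filters;
    int32_t x = adc_reading;
    x = filter_one_pole(x, &p, 4);        /* first pole  */
    x = filter_one_pole(x, &p, 4);        /* second pole */
    return x;
}
</pre>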
<p>Here is one such filter macro. This is in assembler on a Microchip dsPIC, using my PREPIC preprocessor (also available for free in the Full Runtime release mentioned above):
<pre>
////////////////////////////////////////////////////////////////////////////////
//
// Macro FILTER ffbits
//
// Update the state of one low pass filter. The new input value is in W1:W0
// and the filter state to be updated is pointed to by W2.
//
// The updated filter value will also be returned in W1:W0 and W2 will point
// to the first memory past the filter state. This macro can therefore be
// invoked in succession to update a series of cascaded low pass filters.
//
// The filter formula is:
//
//   FILT &lt;-- FILT + FF(NEW - FILT)
//
// where the multiply by FF is performed by an arithmetic right shift of
// FFBITS.
//
// WARNING: W3 is trashed.
//
/macro filter
/var local ffbits integer = [arg 1] ;get number of bits to shift
/write
/write " ; Perform one pole low pass filtering, shift bits = " ffbits
/write " ;"
        sub w0, [w2++], w0 ;NEW - FILT --> W1:W0
        subb w1, [w2--], w1
        lsr w0, #[v ffbits], w0 ;shift the result in W1:W0 right
        sl w1, #[- 16 ffbits], w3
        ior w0, w3, w0
        asr w1, #[v ffbits], w1
        add w0, [w2++], w0 ;add FILT to make final result in W1:W0
        addc w1, [w2--], w1
        mov w0, [w2++] ;write result to the filter state, advance pointer
        mov w1, [w2++]
/write
/endmac</pre>
  • <h3>Firmware considerations</h3>
  • <P>I Usually write a subroutine or macro to perform one filter pole
  • operation, then apply that to each pole. Whether a subroutine or macro
  • depends on whether cycles or program memory are more important in that
  • particular project. The amount of program memory available in
  • microcontrollers has grown such that it hasn't been a limitation in recent
  • projects, so I mostly use a macro now.
  • <p>I use some scratch state to pass NEW into the subroutine/macro, which
  • updates FILT, but also loads that into the same scratch state NEW was in.
  • This makes it easy to apply to multiple poles since the updated FILT of one
  • pole is the NEW of the next pole. It can be useful to have
  • a pointer point to FILT on the way in, which is updated to just after FILT
  • on the way out. That way the filter routine automatically operates on
  • consecutive filters in memory if called multiple times.
  • <p>Here is one such filter macro. This is in assembler on a Microchip
  • dsPIC, using my PREPIC preprocessor (also available for free in the Full
  • Runtime release mentioned above):
  • <pre>
  • ////////////////////////////////////////////////////////////////////////////////
  • //
  • // Macro FILTER ffbits
  • //
  • // Update the state of one low pass filter. The new input value is in W1:W0
  • // and the filter state to be updated is pointed to by W2.
  • //
  • // The updated filter value will also be returned in W1:W0 and W2 will point
  • // to the first memory past the filter state. This macro can therefore be
  • // invoked in succession to update a series of cascaded low pass filters.
  • //
  • // The filter formula is:
  • //
  • // FILT <-- FILT + FF(NEW - FILT)
  • //
  • // where the multiply by FF is performed by a arithmetic right shift of
  • // FFBITS.
  • //
  • // WARNING: W3 is trashed.
  • //
  • /macro filter
  • /var local ffbits integer = [arg 1] ;get number of bits to shift
  • /write
  • /write " ; Perform one pole low pass filtering, shift bits = " ffbits
  • /write " ;"
  • sub w0, [w2++], w0 ;NEW - FILT --> W1:W0
  • subb w1, [w2--], w1
  • lsr w0, #[v ffbits], w0 ;shift the result in W1:W0 right
  • sl w1, #[- 16 ffbits], w3
  • ior w0, w3, w0
  • asr w1, #[v ffbits], w1
  • add w0, [w2++], w0 ;add FILT to make final result in W1:W0
  • addc w1, [w2--], w1
  • mov w0, [w2++] ;write result to the filter state, advance pointer
  • mov w1, [w2++]
  • /write
  • /endmac</pre>
#1: Initial revision by user avatar Olin Lathrop‭ · 2020-06-20T19:57:47Z (over 4 years ago)
<p>You can't eliminate all the noise while using no memory, no processing
cycles, and adding no lag.  However, you can do much better than a brute
force "moving average" filter.

<h3>FIR versus IIR filters</h3>

<p>There are two broad classes of digital filters: FIR (finite impulse
response) and IIR (infinite impulse response).  There is more to it, but
as a good first introduction, FIR filters are table-based, while IIR
filters are equation-based.

<p>FIR filters can realize more arbitrary filter functions, but require a
lot of memory to do so.  IIR filters use specific computations to get the
desired filter function.  This means they are limited to filter functions
that can be realized by reasonable iterative equations, but generally
require little state.
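
<p>To make the memory cost concrete, a brute force 16-sample moving
average might look something like the C sketch below (the names are just
for illustration).  The sample buffer is the FIR-style memory that the
filter described next avoids.

<pre>
#include &lt;stdint.h&gt;

#define MA_LEN 16               /* number of samples averaged */

typedef struct {
    uint16_t buf[MA_LEN];       /* last MA_LEN samples -- the memory cost */
    uint32_t sum;               /* running sum of the buffer contents */
    uint8_t  ind;               /* index of the oldest sample */
} ma_filter_t;

/* Add one new sample and return the updated average.  The structure is
   assumed to start out zeroed, so the average ramps up from 0 over the
   first MA_LEN samples. */
static uint16_t ma_update(ma_filter_t *f, uint16_t newval)
{
    f->sum -= f->buf[f->ind];   /* remove the oldest sample from the sum */
    f->sum += newval;           /* add the new sample */
    f->buf[f->ind] = newval;    /* overwrite the oldest sample */
    f->ind = (uint8_t)((f->ind + 1) % MA_LEN);
    return (uint16_t)(f->sum / MA_LEN);
}</pre>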

<h3>Basic digital low pass filter</h3>

<p>The most common filter for reducing high frequency noise I use in
situations like you describe is the IIR filter:

<p>&nbsp; FILT &lt;-- FILT + FF(NEW - FILT)

<p>This is a single-pole low pass filter, the digital equivalent of a
series resistor followed by a capacitor to ground.

<p>FILT is a piece of persistent state.  This is the only persistent
variable you need to compute this filter.  NEW is the new value that the
filter is being updated with this iteration.  FF is the <i>filter
fraction</i>, which adjusts the "heaviness" of the filter.

<p>Look at this algorithm and see that for FF = 0 the filter is infinitely
heavy since the output never changes.  For FF = 1, it's really no filter
at all since the output just follows the input.  Useful values are in
between.

<p>On small systems you pick FF to be 1/2<sup>N</sup> so that the multiply
by FF can be accomplished as a right shift by N bits.  For example, FF
might be 1/16 and the multiply by FF therefore a right shift of 4 bits.
Otherwise this filter needs only one subtract and one add, although the
numbers usually need to be wider than the input value (more on numerical
precision later).
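
<p>In C, one update of this filter might look something like the
following sketch.  This plain-integer version throws away the fraction
bits created by the shift; the "Numerical precision" section below shows
how to keep them.

<pre>
#include &lt;stdint.h&gt;

#define FILT_SHIFT 4            /* FF = 1/16, so shift right by 4 bits */

static int32_t filt;            /* the only persistent filter state */

/* One update of FILT <-- FILT + FF(NEW - FILT), with the multiply by FF
   done as a right shift.  Assumes the compiler implements >> on a
   negative int32_t as an arithmetic shift, which is the usual case. */
static int32_t filter_update(int32_t newval)
{
    filt += (newval - filt) >> FILT_SHIFT;
    return filt;
}</pre>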

<p>I usually take A/D readings significantly faster than they are needed
and apply two of these filters cascaded.  This is the digital equivalent
of two R-C filters in series, and attenuates by 12 dB/octave above the
rolloff frequency.  However, for A/D readings it's usually more relevant
to look at the filter in the time domain by considering its step response.
This tells you how fast your system will see a change when the thing you
are measuring changes.
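
<p>Cascading just means feeding the output of one pole into the next,
with one word of persistent state per pole.  Roughly, with the update run
at the fast A/D sample rate:

<pre>
#include &lt;stdint.h&gt;

#define FILT_SHIFT 4            /* FF = 1/16 for each of the two poles */

static int32_t filt1, filt2;    /* one persistent state word per pole */

/* Call this once per A/D reading, at the fast sample rate.  The output
   of the first pole is the NEW input of the second. */
static int32_t filter2_update(int32_t adc_reading)
{
    filt1 += (adc_reading - filt1) >> FILT_SHIFT;
    filt2 += (filt1 - filt2) >> FILT_SHIFT;
    return filt2;               /* the twice-filtered value */
}</pre>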

<h3>PLOTFILT program</h3>

<p>To facilitate designing these filters (which only means picking FF and
deciding how many of them to cascade), I use my program PLOTFILT.  You
specify the number of shift bits for each FF in the cascaded series of
filters, and it computes the step response and other values. For example,
here is the result of "PLOTFILT 4 4":

<p>
<img src="https://electrical.codidact.com/uploads/LEFAV7754SHTXrr5nubrfxCT">

<p>The two parameters to PLOTFILT mean there will be two filters cascaded
of the type described above.  The values of 4 indicate the number of shift
bits to realize the multiply by FF.  The two FF values are therefore 1/16
in this case.

<p>The red trace is the unit step response, and is the main thing to look
at.  For example, this tells you that if the input changes
instantaneously, the output of the combined filter will settle to 90% of
the new value in 60 iterations.  If you care about 95% settling time then
you have to wait about 73 iterations, and for 50% settling time only 26
iterations.

<p>The green trace shows you the output from a single full amplitude
spike.  This gives you some idea of the random noise attenuation.  It
looks like no single sample will cause more than a 2.5% change in the
output.

<p>The blue trace is to give a subjective feeling of what this filter does
with white noise.  This is not a rigorous test, since there is no guarantee
of exactly what the random numbers picked as the white noise input for this
run of PLOTFILT happened to contain.  It's only to give you a rough
feeling of how much it will be squashed and how smooth it is.

<p>PLOTFILT is available for free in my Full Runtime release at
<a href="http://www.embedinc.com/pic/dload.htm">http://www.embedinc.com/pic/dload.htm</a>.

<h3>Computation, noise attenuation, and delay</h3>

<p>To some extent, the amount of computation per filter iteration, the
noise attenuation, and the delay to specific settling fractions can be
traded off against each other.

<p>For example, more poles with lower shift bits each can result in the
same random noise attenuation.  This example has 5 poles of 3 bits shift
each, compared to the 2 poles of 4 shift bits each above:

<p>
<img
src="https://electrical.codidact.com/uploads/3N9yxE6coXogs9EAgZfp9rWf">

<p>Note that the green trace has the same peak of about 2.5%.  With the
green peak held constant, using more poles results in more initial delay,
but a steeper eventual rise.  In this case, the filter settles to values
above 87% faster than the previous one.  This is useful if you care about 90%
settling time, for example.  Note, however, that 2&frac12; times more
computation is required.
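
<p>You don't need PLOTFILT to get rough numbers like these.  A quick step
response simulation along the lines of the C sketch below (an
illustration, not the actual PLOTFILT code) lets you compare settling
times for different pole counts and shift values.  The exact iteration
counts depend on details such as when the step is considered applied, so
treat the results as approximate.

<pre>
#include &lt;stdio.h&gt;

/* Return the number of iterations for a cascade of NPOLES single-pole
   filters, each with FF = 1/2**SHBITS, to first reach FRACTION of a unit
   step applied at iteration 1.  Supports up to 16 poles. */
static int settle_iters(int npoles, int shbits, double fraction)
{
    double ff = 1.0 / (double)(1 << shbits);
    double filt[16] = {0.0};            /* one state per pole, start at 0 */
    int n;

    for (n = 1; ; n++) {
        double in = 1.0;                /* unit step input */
        int p;
        for (p = 0; p < npoles; p++) {
            filt[p] += ff * (in - filt[p]);
            in = filt[p];               /* this pole's output feeds the next */
        }
        if (in >= fraction) return n;
    }
}

int main(void)
{
    printf("2 poles, 4 bits: 90%% settle in %d iterations\n",
        settle_iters(2, 4, 0.90));
    printf("5 poles, 3 bits: 90%% settle in %d iterations\n",
        settle_iters(5, 3, 0.90));
    return 0;
}</pre>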

<h3>Numerical precision</h3>

<p>Note that the multiply by FF will create Log<sub>2</sub>(1/FF) new bits
below the binary point.  On small systems, FF is usually chosen to be
1/2<sup>N</sup> so that this multiply is actually realized by a right
shift of N bits.  For no loss, the FILT state for each pole must be the
width of the input value plus N bits.

<p>For example, consider a 10 bit A/D reading with two poles of N = 3 (FF
= 1/8) applied.  For no loss, the first FILT must be at least 13 bits
wide, and the second 16 bits wide.

<p>FILT is therefore usually a fixed point integer.  Note that this
doesn't change any of the math from the processor's point of view.  In the
example above, the first FILT could be thought of as 10.3 fixed point, and
the second as 10.6 fixed point.  Both can fit into what the processor
considers a 16 bit integer.

<p>To re-use the same code for each pole, it is usually convenient to have
each FILT be the same format.  In this case, both would be 10.6, although
the low 3 bits of the first FILT would remain 0.

<p>On most processors, you'd be doing 16 bit integer operations due to the
10 bit A/D readings anyway.  In this case, you can still do exactly the
same 16 bit integer operations, but start with the A/D readings left
shifted by 6 bits.  The processor doesn't know the difference and doesn't
need to.  Doing the math on whole 16 bit integers works whether you
consider them to be 10.6 fixed point or true 16 bit integers (16.0 fixed
point).

<h5>Signed intermediate values</h5>

<p>Examine the filter equation carefully, and you notice that the result
of NEW - FILT can be &plusmn;NEW.  This means the result of the subtract
must be at least one bit wider than NEW.

<p>For simplicity, I usually leave one extra high bit in each FILT
variable.  That high bit will always be 0 when the new FILT is stored, but
it simplifies intermediate calculations if the same format number can be
used for all values.  In the example above, that would require 17 bits for
the second FILT, meaning 24 bits would be used on most processors.  Or,
you could use 11.5 fixed point for all values, and decide that the
quantization noise due to the slight loss of precision in the second FILT
is inconsequential in your application.
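
<p>Putting the precision points together, the 10 bit, two pole, N = 3
example might be coded roughly as below.  This sketch keeps both FILT
values in 10.6 fixed point in 16 bit words and does the intermediate
NEW - FILT arithmetic in signed 32 bit temporaries, which sidesteps the
extra-bit bookkeeping at the cost of wider intermediate math.  The names
and the 32 bit choice are illustrative, not the only way to do it.

<pre>
#include &lt;stdint.h&gt;

#define FRAC_BITS  6            /* total fraction bits for 10.6 fixed point */
#define POLE_SHIFT 3            /* N = 3, so FF = 1/8 for each pole */

static uint16_t filt1, filt2;   /* filter state, 10.6 fixed point each */

/* Update both poles from a raw 10 bit A/D reading.  Returns the filtered
   value in 10.6 fixed point; shift right by FRAC_BITS to get back to the
   original 10 bit scale. */
static uint16_t filter_adc(uint16_t adc10)
{
    int32_t newf = (int32_t)adc10 << FRAC_BITS;  /* reading in 10.6 format */
    int32_t diff;

    diff  = newf - (int32_t)filt1;               /* NEW - FILT, may be negative */
    filt1 = (uint16_t)((int32_t)filt1 + (diff >> POLE_SHIFT));

    diff  = (int32_t)filt1 - (int32_t)filt2;     /* pole 1 output is NEW for pole 2 */
    filt2 = (uint16_t)((int32_t)filt2 + (diff >> POLE_SHIFT));

    return filt2;
}</pre>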

<h3>Firmware considerations</h3>

<p>I usually write a subroutine or macro to perform one filter pole
operation, then apply that to each pole.  Whether a subroutine or macro
depends on whether cycles or program memory are more important in that
particular project.  The amount of program memory available in
microcontrollers has grown such that it hasn't been a limitation in recent
projects, so I mostly use a macro now.

<p>I use some scratch state to pass NEW into the subroutine/macro, which
updates FILT, but also loads that into the same scratch state NEW was in.
This makes it easy to apply to multiple poles since the updated FILT of one
pole is the NEW of the next pole.  It can be useful to have
a pointer point to FILT on the way in, which is updated to just after FILT
on the way out.  That way the filter routine automatically operates on
consecutive filters in memory if called multiple times.
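
<p>In C, the same calling pattern might look something like the sketch
below (hypothetical names): the routine takes a pointer into an array of
consecutive filter states, updates the state it points at, advances the
pointer past it, and returns the updated value so it can be fed straight
into the next pole.

<pre>
#include &lt;stdint.h&gt;

#define POLE_SHIFT 3            /* shift bits per pole */

/* Update the filter pole whose state *statep points to.  On return,
   *statep points to the next filter state and the updated FILT is
   returned, which is the NEW value for the next pole. */
static int32_t filter_pole(int32_t **statep, int32_t newval)
{
    int32_t *filt = (*statep)++;        /* this pole's state, then advance */
    *filt += (newval - *filt) >> POLE_SHIFT;
    return *filt;
}

/* Run one new reading through NPOLES cascaded poles stored consecutively
   in memory, in the same spirit as the assembler macro below. */
static int32_t filter_cascade(int32_t *states, int npoles, int32_t newval)
{
    int32_t *p = states;
    int i;
    for (i = 0; i < npoles; i++) {
        newval = filter_pole(&p, newval);
    }
    return newval;
}</pre>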

<p>Here is one such filter macro.  This is in assembler on a Microchip
dsPIC, using my PREPIC preprocessor (also available for free in the Full
Runtime release mentioned above):

<pre>
////////////////////////////////////////////////////////////////////////////////
//
//   Macro FILTER ffbits
//
//   Update the state of one low pass filter.  The new input value is in W1:W0
//   and the filter state to be updated is pointed to by W2.
//
//   The updated filter value will also be returned in W1:W0 and W2 will point
//   to the first memory past the filter state.  This macro can therefore be
//   invoked in succession to update a series of cascaded low pass filters.
//
//   The filter formula is:
//
//     FILT <-- FILT + FF(NEW - FILT)
//
//   where the multiply by FF is performed by an arithmetic right shift of
//   FFBITS.
//
//   WARNING: W3 is trashed.
//
/macro filter
  /var local ffbits integer = [arg 1] ;get number of bits to shift

  /write
  /write "         ;   Perform one pole low pass filtering, shift bits = " ffbits
  /write "         ;"

         sub     w0, [w2++], w0 ;NEW - FILT --> W1:W0
         subb    w1, [w2--], w1

         lsr     w0, #[v ffbits], w0 ;shift the result in W1:W0 right
         sl      w1, #[- 16 ffbits], w3
         ior     w0, w3, w0
         asr     w1, #[v ffbits], w1

         add     w0, [w2++], w0 ;add FILT to make final result in W1:W0
         addc    w1, [w2--], w1

         mov     w0, [w2++]  ;write result to the filter state, advance pointer
         mov     w1, [w2++]

  /write
  /endmac</pre>