European Journal of Electrical Engineering: Received: 10 April 2019 Accepted: 28 May 2019
Design and Application of an Improved Least Mean Square Algorithm for Adaptive Filtering
Zhaojun Shen*, Rugang Wang
https://doi.org/10.18280/ejee.210307

ABSTRACT

This paper enumerates the strengths and defects of the traditional least mean square (LMS) algorithm for adaptive filtering, and then designs a novel LMS algorithm with variable step size and verifies its performance through simulation. In our algorithm, the step size is no longer adjusted by the square of the error e²(n), but by the correlation between the current error and the error of a previous moment, e(n−D). In this way, the algorithm becomes less sensitive to noise with weak autocorrelation, and manages to achieve fast convergence, high time-varying tracking accuracy, and small steady-state error. The simulation results show that our algorithm outperformed the traditional LMS algorithm with fixed step size in convergence speed, tracking accuracy and noise suppression. The research findings provide a new tool for many other fields of adaptive filtering, such as adaptive system identification and adaptive signal separation.

Keywords: adaptive filtering, least mean square (LMS) algorithm, variable step size, noise cancelation
1. INTRODUCTION

In the electronic data processing (EDP) system, the filter is a basic circuit that extracts useful signals from complex inputs, while suppressing noises or interferences. A digital filter takes digital signals as input and output. This device can change the relative ratio of the frequency components in the input, or filter out some frequency components by a certain computational relationship. To optimize the filtering effect, the digital filter can be used adaptively, that is, automatically adjust the filter parameters at the current time based on those acquired at the previous moment. This kind of adaptive digital filtering can deduce the unknown time-varying statistical properties of signals and noises [1-3].

One of the classical adaptive filtering algorithms is the least mean square (LMS) algorithm. With a simple structure and good stability, this minimum mean square error algorithm has been widely adopted in adaptive control, radar, system identification and signal processing. However, the traditional LMS algorithm cannot achieve fast convergence, high tracking accuracy and small steady-state error at the same time, owing to its fixed step size. To solve this problem, many improved LMS algorithms with variable step size have been developed for adaptive filtering [4-9].

Drawing on existing LMS algorithms, this paper designs a novel LMS algorithm with variable step size and verifies its performance through simulation. The simulation results show that our algorithm achieved fast convergence and small tracking error, and effectively eliminated interferences. Moreover, our algorithm minimized the parameter size and computing load, facilitating hardware implementation. Overall, our algorithm can achieve the optimal filtering effect, striking a balance between convergence speed, tracking accuracy and steady-state error [10-12].

2. ADAPTIVE FILTER

An adaptive filter generally consists of two parts: an adaptive processor and an adaptive algorithm. The former is a digital structure with adjustable parameters. The adaptive filter can be designed without knowing the statistical properties of the inputs and noises. In actual operation, these features are gradually learned or estimated by the adaptive filter, and used to adjust the parameters to optimize the filtering effect. The key features of the adaptive filter can be summed up as learning and tracking.

Figure 1. Principle of adaptive filter

The principle of the adaptive filter is illustrated in Figure 1, where the discrete time linear system stands for an actual programmable filter, x(n) is the input of the adaptive filter, y(n) is the output of the adaptive filter, d(n) is the desired output, and e(n) is the error inputted to the adaptive algorithm. Note that the error is the difference between the desired output and the actual output.

The filter parameters of the adaptive filter can be characterized by the impulse response h(n), which is affected by e(n). During operation, the adaptive filter automatically adjusts the impulse response such that the output gradually approaches the desired output.

Hence, the most significant difference between the adaptive filter and an ordinary filter lies in the fact that the adaptive filter can adjust its impulse response or filter parameters according
to the external environment. After a while, the adaptive filter can achieve the optimal filtering effect.

The adaptive algorithm is critical to the performance of the adaptive filter. To track the changes in the external environment, the adaptive algorithm must keep regulating the filter parameters according to the preset criterion, considering the input, output and original values of the filter parameters. In this research, the LMS algorithm is selected as the adaptive algorithm [13].

3. PRINCIPLE AND IMPROVEMENT OF LMS ALGORITHM

3.1 Principle of LMS algorithm

The LMS algorithm is a linear adaptive filtering algorithm. The algorithm involves two operations, namely, filtering and adaptation. The goal is to minimize the mean square error of e(n) = d(n) − y(n) through parameter adjustment, and to modify the weights accordingly. Given the input and desired output, the LMS algorithm can achieve a small computing load without off-line calculation. There are many variants of the LMS algorithm. For example, the following LMS algorithm is coupled with the method of steepest descent [14].

3.3 Improvement of LMS algorithm

The performance of the LMS algorithm in adaptive filtering can be evaluated accurately by three technical indices: convergence speed, time-varying tracking accuracy and steady-state error. To solve the contradiction between these indices, the key lies in the design of the mapping relationship between step size and error.

Drawing on the abovementioned LMS algorithms with variable step size, this paper designs a new LMS algorithm with variable step size:

W(n) = W(n − 1) + 2μ(n)e(n)X(n)    (3)

μ(n) = β(1 − exp(−α|e(n)|^m))    (4)

where α (α > 0) is the shape factor controlling the shape of the function, and β (β > 0) is the range factor controlling the value range of the function.
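To make the update rule concrete, the following NumPy sketch applies the variable step size of Eqs. (3)-(4) to a toy system-identification task. The 4-tap unknown system, the white input, and the chosen values of α, β and the exponent m are illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system-identification setup (illustrative, not the paper's experiment).
h_true = np.array([0.8, -0.4, 0.2, 0.1])   # unknown FIR system to identify
N, M = 5000, 4                             # samples, adaptive filter order
x = rng.standard_normal(N)                 # white input x(n)
d = np.convolve(x, h_true)[:N] + 0.01 * rng.standard_normal(N)  # desired d(n)

alpha, beta, m = 10.0, 0.05, 2             # shape factor, range factor, exponent (assumed)

w = np.zeros(M)                            # weight vector W(n)
for n in range(M - 1, N):
    X = x[n - M + 1:n + 1][::-1]           # input vector X(n)
    e = d[n] - w @ X                       # error e(n) = d(n) - y(n)
    mu = beta * (1.0 - np.exp(-alpha * abs(e) ** m))  # Eq. (4): large error, large step
    w = w + 2.0 * mu * e * X               # Eq. (3): weight update

print(np.round(w, 2))                      # converges towards h_true
```

Because μ(n) saturates at β for large errors and shrinks towards zero as the error dies out, the sketch shows the intended behavior directly: fast initial adaptation followed by a small steady-state step.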
To overcome this defect, the step size is no longer adjusted by the square of the error e²(n), but by the correlation between the current error and the error of a previous moment, e(n−D), where D is a positive integer falling between the time-dependence radius of the input and that of the noise. After a certain period, the autocorrelation of the noise drops to zero, so the noise has much less impact on the step size, reducing the sensitivity of our algorithm to noise. The improved formula for variable step size can be expressed as:

μ(n) = β(1 − exp(−αe(n)e(n − D)))    (5)

Thus, our algorithm now relies on the correlation value of the error, e(n)e(n−D), to adjust the step size. This adjustment method fully considers convergence speed and steady-state error, and reduces the sensitivity of the algorithm to noise with weak autocorrelation.

As mentioned before, the convergence condition of the LMS algorithm is 0 < μ(n) < 1/λmax, where λmax is the maximum eigenvalue of the input autocorrelation matrix. Thus, the range factor must satisfy β < 1/λmax. Under this condition, the algorithm will eventually converge, and the step size will gradually decline and reach its minimum after convergence.

Figure 3. Relationship curves between step size and error of the improved LMS algorithm

Figure 3 presents the relationship curves between step size and error of the improved LMS algorithm at different shape factors and range factors. In the initial phase of convergence, the absolute value of the error was large, the step size was long and the algorithm converged rapidly. Once the algorithm reached the steady state, both the absolute value of the error and the step size were minimized.

It can be seen from Figure 3 that, when the initial error remained the same, the algorithm converged successfully with β < 1/λmax, and the convergence speed increased with the shape factor. Similarly, the convergence speed also increased with the range factor when the shape factor was constant. When the two factors were too large, however, the step size was very long at convergence, despite the increase in convergence speed, resulting in a huge steady-state error.

Therefore, the shape factor and range factor should be selected to maximize the step size corresponding to the absolute value of the initial error, provided that the algorithm can still converge. In actual practice, the two factors should be optimized through experiments.

4. DISCUSSION

4.1 Advantages of step size setting

In our algorithm, the step size is no longer determined by the error at the current time, but by the correlation between the current error and the error at a previous moment, e(n−D). This new method for step size setting has many advantages. For example, the autocorrelation error is usually close to the optimal value, making the adjusted step size suitable for application. Besides, the step size update will not be affected by irrelevant noise sequences. Due to the large initial adaptive error, the step size is long at the beginning. As the autocorrelation error approaches the optimal value (zero), the step size will stabilize at a small level. In the initial phase, the algorithm converges rapidly with the large step size; in the later phase, the tracking error can be minimized by the small step size. The step size becomes more accurate after considering the previous step sizes. Therefore, our algorithm can prevent the noise impacts more effectively than the traditional LMS algorithm [19-20].

4.2 Application in adaptive noise cancellation (ANC) system

In EDP systems, the received signals often contain many noises, which push up the bit error ratio. These signals should be denoised adaptively with the optimal filter. The optimal filter can be fixed or adaptive. A fixed filter needs to know the statistical properties of signals and noises, while the adaptive one requires little or no such knowledge.

The ANC system is responsible for enhancing the signal-to-noise ratio (SNR) through noise suppression or attenuation. The basic principle is to remove the noises from the noisy signals, in contrast to the desired output. The noise removal is known as noise cancellation, which relies on the correlation between the noisy signals and the desired output. But the noises cannot be eliminated if the noisy signals are unrelated or weakly correlated. The residual noises will interfere with the filter and affect the adaptive algorithm.

Our algorithm provides a desirable way to solve this problem. The algorithm can distinguish between strongly correlated noise and unrelated or weakly correlated noise. Here, the former is called the additive noise signal n0, and the latter, the noise signal v.

Figure 4. The ANC mechanism of our algorithm

The ANC mechanism of our algorithm is illustrated in Figure 4, where n1 is the reference input. The main input contains the useful signal s to be extracted, the additive noise signal n0 and the noise signal v. The useful signal is not correlated with the noise signal, the additive noise signal or the reference input; the additive noise signal is related to the reference input, but not to the noise signal. The useful signal, the noise signal, the additive noise signal and the reference input are all zero-mean signals. Hence, the output of the ANC system can be expressed as:

e(n) = s(n) + n0 + v − y(n)    (6)
Since the useful signal is not correlated with the noise signal, the additive noise signal or the reference input, the mathematical expectation can be obtained by squaring both sides of equation (6):

E(e²) = E[(s + v)²] + E[(n0 − y)²]    (7)

The term E[(s + v)²] is not affected by the adjustment of the filter parameters to minimize E(e²). Thus, the minimum output energy can be described as:

min E(e²) = E[(s + v)²] + min E[(n0 − y)²]    (8)

…and restored the useful signal. Then, the discrete signals above were subjected to fast Fourier transform, producing their curves in the frequency domain (Figure 7). Obviously, our algorithm retained the useful signal and suppressed the frequency spectrum of the noises. In addition, our algorithm converged after fewer than 100 iterations, and controlled the steady-state error well. The simulation results show that our algorithm can converge rapidly with good stability, and eliminate the noises in received signals.
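The ANC configuration described above can be sketched in NumPy as follows. This is a minimal illustration under assumed conditions: a sinusoidal useful signal s, a short FIR channel producing the additive noise n0 from the reference n1, the chosen α and β values, and a max(μ, 0) guard on the step size (which the text does not specify), using the improved delayed-error-correlation step-size rule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed ANC scenario (illustrative only; not the paper's simulation).
N, M, D = 5000, 8, 5                          # samples, filter order, error delay
t = np.arange(N)
s = np.sin(2 * np.pi * 0.01 * t)              # useful signal s(n)
n1 = rng.standard_normal(N)                   # reference input n1(n)
n0 = np.convolve(n1, [0.9, -0.3, 0.15])[:N]   # additive noise: n1 through an assumed channel
v = 0.05 * rng.standard_normal(N)             # weakly correlated noise v(n)
d = s + n0 + v                                # main (primary) input of the ANC system

alpha, beta = 2.0, 0.05                       # shape and range factors (assumed values)
w = np.zeros(M)
e = np.zeros(N)                               # ANC output e(n) = cleaned signal
for n in range(M - 1, N):
    X = n1[n - M + 1:n + 1][::-1]             # reference input vector
    e[n] = d[n] - w @ X                       # e(n) = s + n0 + v - y(n), cf. Eq. (6)
    mu = beta * (1.0 - np.exp(-alpha * e[n] * e[n - D]))  # delayed-error correlation step
    mu = max(mu, 0.0)                         # guard against negative steps (assumption)
    w = w + 2.0 * mu * e[n] * X

# Residual noise power before and after cancellation (last 1000 samples).
before = np.mean((d[-1000:] - s[-1000:]) ** 2)
after = np.mean((e[-1000:] - s[-1000:]) ** 2)
print(before, after)                          # the filter should cut the noise power sharply
```

The filter can only cancel the component of the primary input that is correlated with the reference, so the output e(n) settles towards s + v, consistent with the minimum of equation (7).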
6. CONCLUSIONS
Our algorithm solves the inherent contradiction of the LMS algorithm with fixed step size: the inability to achieve fast convergence, high tracking accuracy and small steady-state error at the same time. More importantly, our algorithm converges faster than the existing LMS algorithms with variable step size. The excellent performance is achieved with a simple structure and easy implementation steps. Therefore, our algorithm can be applied to many other fields of adaptive filtering, such as adaptive system identification and adaptive signal separation.

REFERENCES

[1] Gao, Y., Xie, S.L. (2001). A variable step size LMS adaptive filtering algorithm and analysis. Chinese Journal of Electronics, 29(8): 1094-1097. https://doi.org/10.3321/j.issn:0372-2112.2001.08.023
[2] Zhang, Y.G., Chambers, J.A., Wang, W.W., Kendrick, P., Cox, T.J. (2007). A new variable step-size LMS algorithm with robustness to nonstationary noise. 2007 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1349-1352. https://doi.org/10.1109/ICASSP.2007.367095
[3] Zhao, S., Man, Z., Khoo, S., Wu, H.R. (2008). Variable step-size LMS algorithm with a quotient form. Signal Processing, 8(1): 67-76. https://doi.org/10.1016/j.sigpro.2008.07.013
[4] Abdolee, R., Champagne, B., Sayed, A.H. (2013). Diffusion LMS strategies for parameter estimation over fading wireless channels. Proceedings of the IEEE International Conference on Communications (ICC), pp. 1926-1930. https://doi.org/10.1109/ICC.2013.6654804
[5] Ren, Z.Z., Xu, J.C., Yan, Y.P. (2011). Improved variable step size LMS adaptive filtering algorithm and its performance analysis. Application Research of Computers, 28(3): 954-956. https://doi.org/10.3969/j.issn.1001-3695.2011.03.046
[6] Mayyas, K., Momani, F. (2011). An LMS adaptive algorithm with a new step-size control equation. Journal of the Franklin Institute, 348(4): 589-605. https://doi.org/10.1016/j.jfranklin.2011.01.003
[7] Mayyas, K. (2005). A new variable step size control method for the transform domain LMS adaptive algorithm. Circuits, Systems & Signal Processing, 24(6): 703-721. https://doi.org/10.1007/s00034-005-0705-7
[8] Zeng, X.X., Shao, Z.H., Lin, W.Z., Luo, H.B. (2018). Orientation holes positioning of printed board based on LS-Power spectrum density algorithm. Traitement du Signal, 35(3-4): 277-288. https://doi.org/10.3166/TS.35.277-288
[9] Zhu, Y.L., Xu, C.G., Xiao, D.G. (2019). Denoising ultrasonic echo signals with generalized s transform and singular value decomposition. Traitement du Signal, 36(2): 139-145. https://doi.org/10.18280/ts.360203
[10] Sristi, P., Lu, W.S., Antoniou, A. (2001). A new variable-step-size LMS algorithm and its application in sub-band adaptive filtering for echo cancellation. IEEE International Symposium on Circuits and Systems, 1(2): 721-724. https://doi.org/10.1109/ISCAS.2001.921172
[11] Huang, H.C., Lee, J. (2012). A new variable step-size NLMS algorithm and its performance analysis. IEEE Transactions on Signal Processing, 60(4): 2055-2060. https://doi.org/10.1109/tsp.2011.2181505
[12] Chan, S.C., Chu, Y.J., Zhang, Z.G. (2013). A new variable regularized transform domain NLMS adaptive filtering algorithm: acoustic applications and performance analysis. IEEE Transactions on Audio, Speech and Language Processing, 21(4): 868-878. https://doi.org/10.1109/tasl.2012.2231074
[13] Li, W., Zhao, Z., Tang, J., He, F., Li, Y., Xiao, H. (2013). Performance analysis and optimal design of the adaptive interference cancellation system. IEEE Transactions on Electromagnetic Compatibility, 55(6): 1068-1075. https://doi.org/10.1109/temc.2013.2265803
[14] Dalers, C.J. (2014). A method of adaptation between steepest-descent and Newton's algorithm for multichannel active control of tonal noise and vibration. International Congress on Sound & Vibration, 19(3): 1-8. https://doi.org/10.13140/2.1.3647.2960
[15] Zheng, Z., Liu, Z., Dong, Y. (2017). Steady-state and tracking analyses of improved proportionate affine projection algorithm. IEEE Transactions on Circuits and Systems II: Express Briefs, 65(11): 1793-1797. https://doi.org/10.1109/tcsii.2017.2767569
[16] Huang, B.Y., Xiao, Y.G., Ma, W.P., Wei, G., Sun, J.W. (2015). A simplified variable step size LMS algorithm for Fourier analysis and its statistical properties. Signal Processing, 117: 69-81. https://doi.org/10.1016/j.sigpro.2015.04.021
[17] Das, R.L., Chakraborty, M. (2015). On convergence of proportionate-type normalized least mean square algorithms. IEEE Transactions on Circuits and Systems II: Express Briefs, 62(5): 491-495. https://doi.org/10.1109/TCSII.2014.2386261
[18] Gui, G., Xu, L., Matsushita, S. (2015). Improved adaptive sparse channel estimation using mixed square/fourth error criterion. Journal of the Franklin Institute, 352(10): 4579-4594. https://doi.org/10.1016/j.jfranklin.2015.07.006
[19] Chen, Y., Tian, J.P., Liu, Y.P. (2015). New variable step size LMS adaptive filtering algorithm. Electronic Measurement Technology, 38(4): 27-31. https://doi.org/10.19651/j.cnki.emt.2015.04.007
[20] Ni, J., Chen, J., Chen, X. (2016). Diffusion sign-error LMS algorithm: formulation and stochastic behavior analysis. Signal Processing, 128: 142-149. https://doi.org/10.1016/j.sigpro.2016.03.022