RLS ALGORITHM FOR ACOUSTIC ECHO CANCELLATION
Amit Munjal*, Vibha Aggarwal**, Gurpal Singh***
*/**RIMT-IET Mandi Gobindgarh, India,*** BBSBEC Fatehgarh Sahib, India *amit_munjal@yahoo.com, **vibha_ec@yahoo.co.in, ***gurpal@bbsbec.org
Abstract- Acoustic echo is a common occurrence in today's telecommunication systems. It occurs when an audio source and sink operate in full duplex mode; an example of this is a hands-free loudspeaker telephone. In this situation the received signal is output through the telephone loudspeaker (audio source); this audio signal is then reverberated through the physical environment and picked up by the system's microphone (audio sink). The effect is the return to the distant user of time-delayed and attenuated images of their original speech signal. The signal interference caused by acoustic echo is distracting to both users and reduces the quality of the communication. This paper focuses on the use of the RLS algorithm to reduce this unwanted echo, thus increasing communication quality.
I. INTRODUCTION

Acoustic echo occurs when an audio signal is reverberated in a real environment, resulting in the original intended signal plus attenuated, time-delayed images of that signal. This paper will focus on the occurrence of acoustic echo in telecommunication systems. Such a system consists of coupled acoustic input and output devices, both of which are active concurrently. An example of this is a hands-free telephony system: the system has both an active loudspeaker and an active microphone input operating simultaneously.

The system then acts as both a receiver and transmitter in full duplex mode. When a signal is received by the system, it is output through the loudspeaker into an acoustic environment. This signal is reverberated within the environment and returned to the system via the microphone input. These reverberated signals contain time-delayed images of the original signal, which are then returned to the original sender (Figure 1.1, where ak is the attenuation and tk the time delay). The occurrence of acoustic echo in speech transmission causes signal interference and reduced quality of communication. The method used to cancel the echo signal is known as adaptive filtering.

Figure 1.1: Origins of acoustic echo.

Adaptive filters are dynamic filters which iteratively alter their characteristics in order to achieve an optimal desired output. An adaptive filter algorithmically alters its parameters in order to minimize a function of the difference between the desired output d(n) and its actual output y(n). This function is known as the cost function of the adaptive algorithm. Figure 1.2 shows a block diagram of the adaptive echo cancellation system implemented throughout this paper. Here the filter H(n) represents the impulse response of the acoustic environment, and W(n) represents the adaptive filter used to cancel the echo signal. The adaptive filter aims to equate its output y(n) to the desired output d(n) (the signal reverberated within the acoustic environment). At each iteration the error signal, e(n) = d(n) − y(n), is fed back into the filter, where the filter characteristics are altered accordingly [1].

Figure 1.2: Block diagram of an adaptive echo cancellation system.

The aim of the adaptive filter is to calculate the difference between the desired signal and the adaptive filter output, e(n). This error signal is fed back into the adaptive filter and its coefficients are changed algorithmically in order to minimize a function of this difference, known as the cost function. In the case of acoustic echo cancellation, the optimal output of the adaptive filter is equal in value to the unwanted echoed signal. When the adaptive filter output equals the desired signal, the error signal goes to zero. In this situation the echoed signal would be completely cancelled and the far user would not hear any of their original speech returned to them [4].
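The error-feedback principle described above can be sketched in a few lines of code. The snippet below is not the paper's method: it uses a simple one-tap LMS-style update purely to illustrate how the error e(n) = d(n) − y(n) drives the coefficient adaptation; the input sequence, echo-path gain, and step size are all hypothetical.

```python
# Minimal sketch of the adaptive feedback loop: the filter output y(n)
# is compared with the desired (echoed) signal d(n), and the error
# e(n) = d(n) - y(n) is fed back to adjust the filter coefficient.
# A one-tap LMS-style update is used only for illustration; the signal
# values, echo gain h and step size mu are hypothetical.
x = [1.0, -0.5, 0.8, 0.3, -0.9, 0.6, 0.2, -0.4] * 50   # far-end input
h = 0.7                                  # unknown echo-path gain
d = [h * xi for xi in x]                 # echoed (desired) signal

w = 0.0                                  # adaptive coefficient
mu = 0.1                                 # step size
for xi, di in zip(x, d):
    y = w * xi                           # filter output y(n)
    e = di - y                           # error signal e(n) = d(n) - y(n)
    w = w + mu * e * xi                  # coefficient update driven by e(n)

print(round(w, 4))                       # w converges toward h = 0.7
```

As the coefficient approaches the echo-path gain, the error (the residual echo heard by the far user) shrinks toward zero, which is exactly the cancellation behaviour described above.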
Proceedings of 2nd National Conference on Challenges & Opportunities in Information Technology (COIT-2008) RIMT-IET, Mandi Gobindgarh. March 29, 2008
II. RECURSIVE LEAST SQUARES (RLS) ALGORITHM.
Unlike the LMS algorithm and its derivatives, the RLS algorithm directly considers the values of previous error estimations. RLS algorithms are known for excellent performance when working in time-varying environments. These advantages come at the cost of increased computational complexity and some stability problems [3].

RLS algorithms attempt to minimize the cost function in equation 2.1 [2], where k = 1 is the time at which the RLS algorithm commences and λ is a small positive constant very close to, but smaller than, 1. With values of λ < 1 more importance is given to the most recent error estimates, and thus to the more recent input samples; this results in a scheme that places more emphasis on recent samples of observed data and tends to forget the past [2].

ζ(n) = Σ_{k=1}^{n} λ^{n−k} e_n²(k)        (2.1)

A. DERIVATION OF THE RLS ALGORITHM.

The RLS cost function of equation 2.1 shows that at a time n, all previous values of the estimation error since the commencement of the RLS algorithm are required. Clearly, as time progresses, the amount of data required to process this algorithm increases. The fact that memory and computation capabilities are limited makes the RLS algorithm a practical impossibility in its purest form. However, the derivation still assumes that all data values are processed; in practice only a finite number of previous values are considered, corresponding to the order of the RLS FIR filter, N.

First define y_n(k) as the output of the FIR filter at time n, using the current tap weight vector and the input vector of a previous time k. The estimation error value e_n(k) is the difference between the desired output value at time k and the corresponding value of y_n(k). These and other appropriate definitions are expressed in equation 2.2, for k = 1, 2, 3, ..., n.

y_n(k) = w^T(n) x(k)
e_n(k) = d(k) − y_n(k)
d(n) = [d(1), d(2), ..., d(n)]^T
y(n) = [y_n(1), y_n(2), ..., y_n(n)]^T
e(n) = [e_n(1), e_n(2), ..., e_n(n)]^T
e(n) = d(n) − y(n)        (2.2)

If we define X(n) as the matrix consisting of the n previous input column vectors up to the present time, then y(n) can also be expressed as equation 2.3.

X(n) = [x(1), x(2), ..., x(n)]
y(n) = X^T(n) w(n)        (2.3)

The cost function of equation 2.1 can then be expressed in matrix-vector form using Λ̃(n), a diagonal matrix consisting of the weighting factors.

ζ(n) = e^T(n) Λ̃(n) e(n)
Λ̃(n) = diag(λ^{n−1}, λ^{n−2}, λ^{n−3}, ..., 1)        (2.4)

Substituting values from equations 2.2 and 2.3, the cost function can be expanded and then reduced as in equation 2.5 (temporarily dropping the (n) notation for clarity).

ζ = d^T Λ̃ d − d^T Λ̃ y − y^T Λ̃ d + y^T Λ̃ y
  = d^T Λ̃ d − d^T Λ̃ (X^T w) − (X^T w)^T Λ̃ d + (X^T w)^T Λ̃ (X^T w)
  = d^T Λ̃ d − 2 θ_λ^T w + w^T Ψ_λ w        (2.5)

where

Ψ_λ(n) = X(n) Λ̃(n) X^T(n)
θ_λ(n) = X(n) Λ̃(n) d(n)

We then derive the gradient of this expression for the cost function with respect to the filter tap weights. Forcing the gradient to zero gives the coefficients of the filter, w(n), which minimize the cost function [5].

Ψ_λ(n) w(n) = θ_λ(n)
w(n) = Ψ_λ^{−1}(n) θ_λ(n)        (2.6)
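Equation 2.6 can be checked numerically on a toy problem. The sketch below builds Ψ_λ(n) and θ_λ(n) directly (non-recursively) for a hypothetical 2-tap filter and solves the resulting 2×2 system; since the desired signal is generated from known weights, the solution recovers them. All signal values here are invented for illustration.

```python
# Direct (non-recursive) solution of equation 2.6 for a 2-tap filter:
# w(n) = inv(Psi_lambda) * theta_lambda, with
# Psi_lambda = X Lambda X^T and theta_lambda = X Lambda d.
# The input sequence and true weights below are hypothetical.
lam = 0.99                      # forgetting factor, close to but below 1
x_seq = [1.0, -0.3, 0.5, 0.9, -0.7, 0.4, 0.1, -0.8, 0.6, 0.2]
w_true = [0.5, -0.25]           # weights used to generate d(k)

# Input vectors x(k) = [x(k), x(k-1)]^T and desired d(k) = w_true^T x(k)
n = len(x_seq)
X = [[x_seq[k], x_seq[k - 1] if k > 0 else 0.0] for k in range(n)]
d = [w_true[0] * xk[0] + w_true[1] * xk[1] for xk in X]

# Psi = sum_k lam^(n-1-k) x(k) x(k)^T ; theta = sum_k lam^(n-1-k) x(k) d(k)
Psi = [[0.0, 0.0], [0.0, 0.0]]
theta = [0.0, 0.0]
for k, xk in enumerate(X):
    g = lam ** (n - 1 - k)      # weighting factor from the diagonal of Lambda(n)
    for i in range(2):
        theta[i] += g * xk[i] * d[k]
        for j in range(2):
            Psi[i][j] += g * xk[i] * xk[j]

# Solve the 2x2 system Psi w = theta by Cramer's rule
det = Psi[0][0] * Psi[1][1] - Psi[0][1] * Psi[1][0]
w = [(theta[0] * Psi[1][1] - theta[1] * Psi[0][1]) / det,
     (Psi[0][0] * theta[1] - Psi[1][0] * theta[0]) / det]

print([round(wi, 6) for wi in w])   # recovers w_true = [0.5, -0.25]
```

Computing the solution this way requires inverting (here, explicitly solving) an N×N system at every time step, which is exactly the cost the recursive formulation below avoids.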
The matrix Ψ_λ(n) in equation 2.6 can be expanded and rearranged into a recursive form; the special form of the matrix inversion lemma in equation 2.7 can then be used to find an inverse for this matrix, which is required to calculate the tap weight vector update. The vector k(n), known as the gain vector, is included in order to simplify the calculation.

(A + α a a^T)^{−1} = A^{−1} − (α A^{−1} a a^T A^{−1}) / (1 + α a^T A^{−1} a)        (2.7)

Ψ_λ(n) = λ Ψ_λ(n−1) + x(n) x^T(n)

Applying the lemma with A = λ Ψ_λ(n−1), a = x(n) and α = 1:

Ψ_λ^{−1}(n) = λ^{−1} Ψ_λ^{−1}(n−1) − (λ^{−1} Ψ_λ^{−1}(n−1) x(n) x^T(n) λ^{−1} Ψ_λ^{−1}(n−1)) / (1 + λ^{−1} x^T(n) Ψ_λ^{−1}(n−1) x(n))
            = λ^{−1} ( Ψ_λ^{−1}(n−1) − k(n) x^T(n) Ψ_λ^{−1}(n−1) )        (2.8)

where

k(n) = (λ^{−1} Ψ_λ^{−1}(n−1) x(n)) / (1 + λ^{−1} x^T(n) Ψ_λ^{−1}(n−1) x(n))

The vector θ_λ(n) of equation 2.5 can also be expressed in a recursive form:

θ_λ(n) = λ θ_λ(n−1) + x(n) d(n)

Using this and substituting Ψ_λ^{−1}(n) from equation 2.8 into equation 2.6, we finally arrive at the filter weight update equation for the RLS algorithm, equation 2.9.

w(n) = Ψ_λ^{−1}(n) θ_λ(n)
     = Ψ_λ^{−1}(n−1) θ_λ(n−1) − k(n) x^T(n) Ψ_λ^{−1}(n−1) θ_λ(n−1) + k(n) d(n)
     = w(n−1) − k(n) x^T(n) w(n−1) + k(n) d(n)
     = w(n−1) + k(n) ( d(n) − w^T(n−1) x(n) )

w(n) = w(n−1) + k(n) e_{n−1}(n),  where  e_{n−1}(n) = d(n) − w^T(n−1) x(n)        (2.9)

C. IMPLEMENTATION OF THE RLS ALGORITHM.

In implementation, the memory of the RLS algorithm is confined to a finite number of values, corresponding to the order of the filter tap weight vector. Two factors of the RLS implementation should be noted: first, although matrix inversion is essential to the derivation of the RLS algorithm, no matrix inversion calculations are required in the implementation, greatly reducing the computational complexity of the algorithm; second, unlike the LMS-based algorithms, current variables are updated within the iteration they are to be used, using values from the previous iteration. To implement the RLS algorithm, the following steps are executed in the following order.

1. The filter output is calculated using the filter tap weights from the previous iteration and the current input vector.

   y_{n−1}(n) = w^T(n−1) x(n)        (2.10)

2. The intermediate gain vector is calculated.

   u(n) = Ψ_λ^{−1}(n−1) x(n)
   k(n) = u(n) / ( λ + x^T(n) u(n) )        (2.11)

3. The estimation error value is calculated.

   e_{n−1}(n) = d(n) − y_{n−1}(n)        (2.12)

4. The filter tap weight vector is updated using the error of equation 2.12 and the gain vector calculated in equation 2.11.

   w(n) = w(n−1) + k(n) e_{n−1}(n)        (2.13)

5. The inverse matrix is updated.

   Ψ_λ^{−1}(n) = λ^{−1} ( Ψ_λ^{−1}(n−1) − k(n) [ x^T(n) Ψ_λ^{−1}(n−1) ] )        (2.14)

Each iteration of the RLS algorithm requires 4N² multiplication operations and 3N² additions [2].

III. RESULTS OF RLS ALGORITHM

The RLS algorithm was simulated using Matlab and proved to be very effective. Figure 3.1 shows the input signal. Figures 3.2 and 3.3 show the estimated error signal and the desired signal. Figures 3.5 and 3.6 show the actual impulse response and the convergence of the RLS adaptive filter to it at integer multiples of 7500 iterations. Figure 3.8 shows the RLS adaptive filter echo signal attenuation.

Figure 3.1: Input Signal
Figure 3.2: Error Signal
Figure 3.3: Desired Signal
Figure 3.4: Adaptive Filter Output
Figure 3.5: Actual Impulse Response
Figure 3.6: Convergence of the RLS Adaptive Filter to Actual Impulse Response
Figure 3.7: Mean Square Error
Figure 3.8: RLS Algorithm Echo Signal Attenuation
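The five implementation steps of the RLS algorithm (equations 2.10 to 2.14) can be sketched in code. The listing below is not the paper's Matlab simulation; it is a minimal pure-Python rendering in which the echo-path impulse response, filter order, forgetting factor, and initialisation constant are assumptions chosen only for illustration.

```python
# Minimal RLS sketch following equations 2.10-2.14.
# The echo path h, filter order N, forgetting factor lam and the
# initialisation delta of the inverse matrix are illustrative assumptions.
import random

random.seed(0)
N = 4                              # FIR filter order
lam = 0.99                         # forgetting factor lambda
delta = 100.0                      # inverse matrix initialised as delta * I
h = [0.6, 0.3, -0.2, 0.1]          # hypothetical echo-path impulse response

w = [0.0] * N                                                         # tap weights
P = [[delta if i == j else 0.0 for j in range(N)] for i in range(N)]  # Psi^-1

x_hist = [0.0] * N                 # input vector x(n) = [x(n) ... x(n-N+1)]
for _ in range(2000):
    x_hist = [random.uniform(-1, 1)] + x_hist[:-1]
    d = sum(hi * xi for hi, xi in zip(h, x_hist))    # echoed (desired) signal

    # Step 1, eq 2.10: filter output y = w(n-1)^T x(n)
    y = sum(wi * xi for wi, xi in zip(w, x_hist))
    # Step 2, eq 2.11: u = P x ; gain vector k = u / (lam + x^T u)
    u = [sum(P[i][j] * x_hist[j] for j in range(N)) for i in range(N)]
    denom = lam + sum(xi * ui for xi, ui in zip(x_hist, u))
    k = [ui / denom for ui in u]
    # Step 3, eq 2.12: estimation error
    e = d - y
    # Step 4, eq 2.13: tap-weight update w(n) = w(n-1) + k e
    w = [wi + ki * e for wi, ki in zip(w, k)]
    # Step 5, eq 2.14: inverse matrix update P = (P - k [x^T P]) / lam
    xP = [sum(x_hist[i] * P[i][j] for i in range(N)) for j in range(N)]
    P = [[(P[i][j] - k[i] * xP[j]) / lam for j in range(N)] for i in range(N)]

print([round(wi, 3) for wi in w])   # tap weights converge toward h
```

Note that, as stated in Section II.C, no matrix inversion appears anywhere in the loop: only the inverse matrix P is carried from iteration to iteration and updated in place.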
CONCLUSION
For the RLS algorithm the average attenuation is -16.4965 dB and the number of multiplications per iteration is 4N². It has the greatest attenuation of the algorithms considered, and converges much faster than the LMS algorithm. This performance comes at the cost of computational complexity; considering the large FIR order required for echo cancellation, this is not feasible for real-time implementation. In future this echo cancellation could also be performed with other adaptive algorithms.
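The attenuation figure quoted above is, in principle, the ratio of residual-error power to echoed-signal power expressed in decibels. The sketch below shows this computation on hypothetical sample values, not the paper's actual data.

```python
# Attenuation in dB as the ratio of error power to echoed-signal power:
# attenuation = 10*log10( sum(e^2) / sum(d^2) ).
# The sample values below are hypothetical; the paper reports an
# average of -16.4965 dB on its own data.
import math

d = [0.9, -0.7, 0.5, -0.8, 0.6]       # echoed (desired) signal samples
e = [0.12, -0.10, 0.08, -0.11, 0.09]  # residual error after cancellation

p_d = sum(di * di for di in d)        # echoed-signal power
p_e = sum(ei * ei for ei in e)        # residual-error power
attenuation_db = 10.0 * math.log10(p_e / p_d)
print(round(attenuation_db, 2))       # -> -16.99
```

A more negative value means the residual echo is further below the original echo level, i.e. better cancellation.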
REFERENCES
[1] Haykin, Simon. 1991, Adaptive Filter Theory, 2nd Edition. Prentice-Hall Inc., New Jersey.
[2] Farhang-Boroujeny, B. 1999, Adaptive Filters: Theory and Applications. John Wiley and Sons, New York.
[3] Diniz, Paulo S. R. 1997, Adaptive Filtering: Algorithms and Practical Implementation. Kluwer Academic Publishers, Boston.
[4] Bellanger, Maurice G. 2001, Adaptive Digital Filters, 2nd Edition. Marcel Dekker Inc., New York.
[5] Chassaing, Rulph. 2002, DSP Applications Using C. John Wiley and Sons, New York.