
A variable step-size sign algorithm for channel estimation

Yuan-Ping Li, Ta-Sung Lee, Bing-Fei Wu

Department of Electrical and Computer Engineering, National Chiao Tung University, 1001, Ta-Hsueh Road, Hsinchu 30010, Taiwan

Article info

Article history: Received 2 November 2013; Received in revised form 17 February 2014; Accepted 21 March 2014; Available online 28 March 2014.

Keywords: Adaptive filters; Channel estimation; Impulsive noise; Least mean square; Sign algorithm; System identification

Abstract

This paper proposes a new variable step-size sign algorithm (VSSA) for unknown channel estimation or system identification, and applies this algorithm to an environment containing two-component Gaussian mixture observation noise. The step size is adjusted using the gradient-based weighted average of the sign algorithm. The proposed scheme exhibits a fast convergence rate and low misadjustment error, and provides robustness in environments with heavy-tailed impulsive interference.

© 2014 Elsevier B.V. All rights reserved.

1. Introduction

In recent years, variable step-size (VSS) techniques have been adopted in the least-mean-square (LMS) algorithm to improve the convergence rate [1–9]. A VSS technique was proposed in [4] that applies the squared instantaneous error to control the step size. A variable step-size LMS (VSLMS) algorithm using the weighted average of the gradient vector was proposed in [5], and a variable step-size normalized version (VSSNLMS) was proposed in [6]. A modified version of [4] using a noise-resilient variable step size was presented in [7]. A quotient-form LMS algorithm based on a filtered version of the quadratic error was proposed for system identification in [8]. An LMS algorithm for sparse channel estimation, which adds an l1-norm penalty to the cost function, was proposed in [9].

Channel estimation is performed by an adaptive filter whose weight vector w_i = [w_{0,i}, \ldots, w_{N-1,i}]^T has a tap length of N and is updated based on the error e_i, given by

e_i = d_i - w_i^T x_i   (1)

and

d_i = y_i + n_i = w_{opt}^T x_i + n_i,   (2)

where (\cdot)^T, d_i, x_i, y_i, n_i, and w_{opt} denote the vector transpose operator, the desired signal, the input signal vector x_i = [x_i, \ldots, x_{i-N+1}]^T, the output of the unknown system, the system noise, and the optimal Wiener weight, respectively, at time index i. The LMS adaptive filter with a fixed step size μ updates its weights as w_{i+1} = w_i + \mu e_i x_i, where e_i x_i is the gradient vector; this update follows from minimizing the cost function (1/2) e_i^2 with respect to the weights. The mathematical formulas used in these VSLMS algorithms to update the step size μ_i are summarized in Table 1. A common problem in these algorithms is that their convergence performance can be degraded by the presence of heavy-tailed impulsive interference. Because the energy of the instantaneous error is used as the cost function of the LMS algorithm [1–9] and the error signal is sensitive to impulsive noise, these LMS-type algorithms are prone to considerable degradation in several practical applications. Furthermore, because the error signal is used as an estimate for the step size, gradient-based algorithms are also sensitive to impulsive noise.


The sign algorithm (SA) [1–3,10–17] is receiving attention in the adaptive filtering area because of the simplicity of its implementation, and it can perform efficiently in the presence of impulsive interference. SA is more suitable for this application than LMS because it has a lower computational requirement and is resistant to impulsive interference. Building on these advantages of SA, several studies have used adaptive algorithms to reduce the detrimental effects of impulse noise. A robust mixed-norm (RMN) algorithm using the weighted averaging of the l1 and l2 norms of the error was proposed in [11], and its normalized version (NRMN) was introduced in [14]. A dual sign algorithm (DSA) switches between two sign algorithms, with a large step-size parameter for increasing the convergence speed and a small one for reducing the steady-state error [12,13]. An affine projection sign algorithm (APSA) [15] using an l1-norm optimization criterion has been proposed that achieves robustness against impulsive noise without involving any matrix inversion. A modified variable step-size APSA (MVSS-APSA) was proposed in [16] to obtain a faster convergence rate and smaller misalignment error than APSA, and a similar MVSS-APSA method applied to a subband adaptive filter was proposed in [17]. In [18], a variable sign-sign Wilcoxon algorithm was developed for system identification that performs efficiently in the presence of impulsive noise. The mathematical formulas used in these sign algorithms for updating the step size are summarized in Table 2.

This paper proposes a new framework based on scaling the conventional SA cost function by a critical factor γ, i.e., using the cost γ|e_i| (γ > 0); hence, its gradient vector is γ sgn(e_i) x_i and the weight update is w_{i+1} = w_i + γ sgn(e_i) x_i. Similar to the step size, the parameter γ determines the convergence time and level of misadjustment of the algorithm. When the convergence speed of the SA is increased using a large step size, the convergence exhibits a substantial chattering phenomenon. This loss of information in the sign error signals occurs because they provide only positive or negative polarities, similar to a switching mode with substantial chattering in a control system. To overcome this disadvantage, γ can be treated as a variable instead of a fixed value, thus compensating for the loss of information in the sign error signals.

Table 1
Summary and complexity of the step-size updates of some existing VSLMS algorithms.

Algorithm | Update equations of the step size | Number of mults (adds)
VSS [4] | \mu_i = \alpha \mu_{i-1} + \gamma e_i^2 | 2N+4 (2N+1)
VSLMS [5] | \hat{p}_i = \beta \hat{p}_{i-1} + e_{i-1} x_{i-1}, \quad \mu_i = \mu_{i-1} + \gamma e_i x_i^T \hat{p}_i | 5N+3 (4N)
VSSNLMS [6] | \hat{p}_i = \beta \hat{p}_{i-1} + (1-\beta) \frac{x_i}{\|x_i\|^2} e_i, \quad \mu_i = \mu_s \frac{\|\hat{p}_i\|^2}{\|x_i\|^2 \sigma_n^2 / (N\sigma_x^2) + \|\hat{p}_i\|^2} | 6N+6 (5N-1)
Proposed | \hat{p}_i = \beta \hat{p}_{i-1} + (1-\beta)\,\mathrm{sgn}(e_i)\, x_i, \quad \mu_i = \alpha \mu_{i-1} + \gamma_s \|\hat{p}_i\|^2 | 5N+2 (4N)

Note: parameters represented by the same symbols in different algorithms are not necessarily related. The complexities of the various algorithms include computation of the filter output and updates of the tap weights and step-size parameters (mults and adds denote multiplications and additions, respectively).
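To make the recursions in Table 1 concrete, the sketch below codes two representative rules (VSS [4] and VSLMS [5]) in Python; the numerical parameter values and the step-size clipping bounds are illustrative assumptions and are not taken from [4] or [5].

    import numpy as np

    def vss_step(mu_prev, e, alpha=0.97, gamma=1e-4, mu_min=1e-6, mu_max=0.1):
        """VSS [4]: mu_i = alpha*mu_{i-1} + gamma*e_i^2 (clipping is an added assumption)."""
        mu = alpha * mu_prev + gamma * e**2
        return float(np.clip(mu, mu_min, mu_max))

    def vslms_step(mu_prev, p_prev, e, x, e_prev, x_prev,
                   beta=0.99, gamma=1e-4, mu_min=1e-6, mu_max=0.1):
        """VSLMS [5]: p_i = beta*p_{i-1} + e_{i-1}*x_{i-1};  mu_i = mu_{i-1} + gamma*e_i*x_i^T p_i."""
        p = beta * p_prev + e_prev * x_prev
        mu = mu_prev + gamma * e * (x @ p)
        return float(np.clip(mu, mu_min, mu_max)), p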

Table 2
Summary and complexity of the step-size updates of some existing variable step-size sign algorithms.

Algorithm | Update equations of the step size | Number of mults (adds)
DSA [13] | r(e_i) = \mathrm{sgn}(e_i) \ \text{if} \ |e_i| \le \tau, \ L\,\mathrm{sgn}(e_i) \ \text{if} \ |e_i| > \tau; \quad \mu_i = \mu\, r(e_i) | 2N+1 (2N)
NRMN [14] | \lambda_i = 2\,\mathrm{erfc}[|d_i|/\hat{\sigma}_{d,i}], \quad \hat{\sigma}_{d,i} = \sqrt{\tfrac{1}{N-K_w-1}\, o_i^T T o_i}, \quad \mu_i = \frac{2\lambda_i A + (1-\lambda_i)\sqrt{2/\pi}\,(\sigma_b^2+\sigma_\eta^2)^{-1/2}}{N\sigma_x^2} | greater than 3N-K_w+4 (3N-K_w+2)
APSA [15] | \mu_i = \mu / \sqrt{\|x_i\|^2} | 3N (3N-1)
MVSS-APSA [16] | \beta_i = \lambda \beta_{i-1} + (1-\lambda)|e_{i-1}|, \quad \mu_i = \alpha \mu_{i-1} + (1-\alpha)\min\!\left( \frac{\big| |e_{i-1}| - \beta_i \big|}{\sqrt{\|x_{i-1}\|^2}},\ \mu_{i-1} \right) | 3N+4 (3N+2)
Proposed | \hat{p}_i = \beta \hat{p}_{i-1} + (1-\beta)\,\mathrm{sgn}(e_i)\, x_i, \quad \mu_i = \alpha \mu_{i-1} + \gamma_s \|\hat{p}_i\|^2 | 5N+2 (4N)

Note: the parameters T and o_i in the NRMN algorithm [14] are set according to T = Diag[1, …, 1, 0, …, 0] and o_i = Ο([d_i, …, d_{i−N+1}]^T); the o_i contains the …


Therefore, the algorithm can converge quickly by maintaining a large γ in the early stages of the adaptive process and using a small γ at the steady state to ensure accurate convergence. To this end, a smooth sign gradient vector \hat{p}_i is estimated using a weighted average with a smoothing factor β (0 < β < 1):

\hat{p}_i = \beta \hat{p}_{i-1} + (1-\beta)\,\mathrm{sgn}(e_i)\, x_i.   (3)

Using \gamma_s \|\hat{p}_i\|^2 (\gamma_s > 0) instead of γ in a recursive operation, the proposed variable step-size sign algorithm (VSSA) becomes

\mu_i = \alpha \mu_{i-1} + \gamma_s \|\hat{p}_i\|^2,   (4)

w_{i+1} = w_i + \mu_i\, \mathrm{sgn}(e_i)\, x_i,   (5)

where \|\cdot\|^2 denotes the squared Euclidean norm.
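As an illustration of (1) and (3)–(5), one VSSA iteration might be coded as in the following Python sketch; the function name is ours, and the default parameter values merely echo the white-Gaussian-input column of Table 3 as an example.

    import numpy as np

    def vssa_step(w, p_hat, mu_prev, x, d, alpha=0.99, beta=0.9999, gamma_s=1.6e-4):
        """One iteration of the proposed VSSA, following (1) and (3)-(5).

        w       : current weight vector (length N)
        p_hat   : smoothed sign gradient vector from the previous iteration
        mu_prev : previous step size mu_{i-1}
        x       : current input vector x_i (length N)
        d       : current desired sample d_i
        """
        e = d - w @ x                                          # error, Eq. (1)
        p_hat = beta * p_hat + (1.0 - beta) * np.sign(e) * x   # Eq. (3)
        mu = alpha * mu_prev + gamma_s * np.dot(p_hat, p_hat)  # Eq. (4)
        w = w + mu * np.sign(e) * x                            # Eq. (5)
        return w, p_hat, mu, e

Initializing p̂_0 = 0 and μ_0 = 0 is consistent with the initial conditions assumed in the analysis of Section 2.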

The recursions in (3) and (4) behave like low-pass filters, which effectively reduces the noise content. The gradient vector can be regarded as a criterion of optimal performance because it always points in the direction of the greatest rate of decrease during the adaptive process, toward the bottom of the error performance surface. Based on these advantages, the most favorable option is to apply the weighted average of the sign gradient vector in (3) and the recursive operation in (4) to determine the step size of the adaptive algorithm. The simulation results show that the proposed VSSA achieves faster convergence, a lower misadjustment error, and lower complexity than the gradient-based VSLMS. In addition, it provides robustness in environments exhibiting heavy-tailed impulsive interference.

2. Derivation and analysis of the proposed algorithm

2.1. Modification for impulse noise

The convergence behavior of (5) has been studied in [1–3,10], based on Gaussian inputs and independent additive Gaussian observation noise. To extend this to a two-component Gaussian mixture for the observation noise, similar assumptions are used in the convergence analysis. The input signal is white noise with zero mean and variance σ_x²; therefore, the autocorrelation matrix of the input signals is R = E(x_i x_i^T) = σ_x² I. Consider a contaminated Gaussian impulse noise n_i [12], defined as

n_i = b_i + \omega_i \eta_i,   (6)

where b_i and η_i are zero-mean, independent, white Gaussian sequences with variances σ_b² and σ_η² = K σ_b² (K ≫ 1), and ω_i is a Bernoulli random process, an independent sequence of zeros and ones with Pr[ω_i = 1] = p_r and Pr[ω_i = 0] = 1 − p_r. Thus, the probability density function (pdf) of n_i is given by

p_{n_i}(n_i) = (1-p_r)\, N(0, \sigma_b^2) + p_r\, N(0, (K+1)\sigma_b^2),   (7)

\sigma_n^2 = E(n_i^2) = \sigma_b^2 + p_r \sigma_\eta^2 = (1-p_r)\sigma_b^2 + p_r (K+1)\sigma_b^2.   (8)

If p_r = 0 or 1, then n_i is a zero-mean Gaussian random variable.
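For illustration, a contaminated Gaussian noise sequence of the form (6)–(8) can be generated as in the following sketch; the numerical values roughly mirror the 10 dB SNR setting of Section 3 and are otherwise arbitrary.

    import numpy as np

    def contaminated_gaussian_noise(n_samples, sigma_b2, K, p_r, rng=None):
        """Generate n_i = b_i + omega_i * eta_i as in (6)-(8).

        b_i ~ N(0, sigma_b2), eta_i ~ N(0, K*sigma_b2), omega_i ~ Bernoulli(p_r).
        """
        rng = np.random.default_rng() if rng is None else rng
        b = rng.normal(0.0, np.sqrt(sigma_b2), n_samples)
        eta = rng.normal(0.0, np.sqrt(K * sigma_b2), n_samples)
        omega = rng.random(n_samples) < p_r
        return b + omega * eta

    # Example: sigma_b^2 = 0.1 (10 dB SNR with sigma_y^2 = 1), K = 1e6, p_r = 0.1.
    noise = contaminated_gaussian_noise(100_000, sigma_b2=0.1, K=1_000_000, p_r=0.1)
    # The empirical variance should approach (1-p_r)*sigma_b2 + p_r*(K+1)*sigma_b2, Eq. (8).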

2.2. Mean and mean-squared behavior

Let v_i = w_i − w_opt, and let K_i = E(v_i v_i^T) denote the second-moment matrix of v_i. Inserting (2) into (1), the error can be written as

e_i = n_i - v_i^T x_i.   (9)

Taking the expectation of the squared error in (1) conditioned on v_i yields the mean squared error (MSE)

E(e_i^2 \mid v_i) \approx E(e_i^2) = \sigma_{e,i}^2.   (10)

Substituting (9) into (5), taking the expectation, and using the condition that μ_i is statistically independent of x_i, v_i, and e_i, the weight-error vector of the VSSA satisfies

E(v_{i+1}) = E(v_i) + E(\mu_i)\, E[\mathrm{sgn}(e_i)\, x_i].   (11)

The second moment K_i of the weight-error vector can be evaluated recursively as

K_{i+1} = K_i + E(\mu_i)\, E[\mathrm{sgn}(e_i)(v_i x_i^T + x_i v_i^T)] + E(\mu_i^2)\, R.   (12)

According to Appendix A, the weight-error vector and the second moment K_i can be obtained from (11) and (12), respectively, as

E(v_{i+1}) = \left\{ I - E(\mu_i)\sqrt{\frac{2}{\pi}} \left[ \frac{1-p_r}{\sqrt{\sigma_b^2 + \mathrm{tr}(R K_i)}} + \frac{p_r}{\sqrt{(K+1)\sigma_b^2 + \mathrm{tr}(R K_i)}} \right] R \right\} E(v_i),   (13)

K_{i+1} = K_i - E(\mu_i)\sqrt{\frac{2}{\pi}}\, (K_i R + R K_i) \left[ \frac{1-p_r}{\sqrt{\sigma_b^2 + \mathrm{tr}(R K_i)}} + \frac{p_r}{\sqrt{(K+1)\sigma_b^2 + \mathrm{tr}(R K_i)}} \right] + E(\mu_i^2)\, R.   (14)

Assuming the initial condition \hat{p}_0 = 0 and taking the expectation of the squared norm of (3), the following is obtained using Lemma 1:

E(\|\hat{p}_i\|^2) = (1-\beta)^2 \sum_{k=1}^{i} \sum_{m=1}^{i} \beta^{i-k} \beta^{i-m}\, E[\mathrm{sgn}(e_k)\,\mathrm{sgn}(e_m)\, x_k^T x_m]
= (1-\beta)^2 \left[ \sum_{\substack{k=1 \\ k \ne m}}^{i} \sum_{\substack{m=1 \\ m \ne k}}^{i} \beta^{2i-k-m}\, \frac{2}{\pi}\, \frac{E(e_k e_m x_k^T x_m)}{\sigma_{e,k}\sigma_{e,m}} + \sum_{\substack{k=1 \\ k=m}}^{i} \beta^{2(i-k)}\, E(\|x_k\|^2) \right]
\approx (1-\beta)^2 \left\{ \sum_{\substack{k=1 \\ k \ne m}}^{i} \sum_{\substack{m=1 \\ m \ne k}}^{i} \beta^{2i-k-m}\, \frac{2}{\pi} \left[ \frac{1-p_r}{\sqrt{\sigma_b^2 + \mathrm{tr}(R K_k)}} + \frac{p_r}{\sqrt{(K+1)\sigma_b^2 + \mathrm{tr}(R K_k)}} \right] \left[ \frac{1-p_r}{\sqrt{\sigma_b^2 + \mathrm{tr}(R K_m)}} + \frac{p_r}{\sqrt{(K+1)\sigma_b^2 + \mathrm{tr}(R K_m)}} \right] E(v_k^T x_k x_k^T x_m x_m^T v_m) + \sum_{\substack{k=1 \\ k=m}}^{i} \beta^{2(i-k)}\, E(\|x_k\|^2) \right\},   (15)


where σ_{e,k} and σ_{e,m} are the standard deviations of the error sequences. Note that the bracketed terms involving p_r and (K+1)σ_b² in the final expression of (15) correspond to the effect of impulsive noise. Similarly, the expectation of the recursion in (4) can be obtained as

E(\mu_i) = \gamma_s \sum_{k=1}^{i} \alpha^{i-k}\, E(\|\hat{p}_k\|^2).   (16)

Eqs. (13)–(16) describe the transient behavior of the VSSA.

To analyze the steady-state performance, the following standard assumptions are made: (1) the white Gaussian noise n_i is statistically stationary, and is uncorrelated with and independent of the input signal x_i, which is distributed as N(0, σ_x²); and (2) when the step size is small at the steady state, the excess error converges to a value much smaller than the noise signal, so that e_i ≈ n_i. For a time index s, the system is assumed to be at the steady state when i ≥ s, and the error signals are assumed to be uncorrelated when k ≠ m; then (15) becomes

\lim_{i \to \infty} E(\|\hat{p}_i\|^2) \approx (1-\beta)^2 \sum_{k=s}^{i} \beta^{2(i-k)}\, N \sigma_x^2.   (17)

Hence, as i \to \infty, (17) can be further simplified to

E(\|\hat{p}_\infty\|^2) \approx \frac{1-\beta}{1+\beta}\, N \sigma_x^2.   (18)

Following the same procedure, as i \to \infty, substituting (18) into (16) simplifies (16) to

E(\mu_\infty) \approx \frac{\gamma_s}{1-\alpha} \cdot \frac{1-\beta}{1+\beta} \cdot N \sigma_x^2.   (19)

Using (10) and the Gaussian assumption in [12], the error can be modeled as a mixture of two Gaussian variables with mixing parameters p_r and 1 − p_r and respective variances (K+1)σ_b² + tr(R K_i) and σ_b² + tr(R K_i). Because the input x_i is white (R = σ_x² I), using Lemma 1 in Appendix A and the standard assumptions in [1–3,10,12], the MSE in (10) is

\sigma_{e,i}^2 = (1-p_r)\sigma_b^2 + p_r (K+1)\sigma_b^2 + \sigma_x^2\, \mathrm{tr}(K_i).   (20)

From the MSE in (20), it suffices to study a recursion for k_i = tr(K_i). Taking the trace of both sides of (14) yields

k_{i+1} = k_i - E(\mu_i)\, \sigma_x^2 \sqrt{\frac{8}{\pi}} \left[ \frac{1-p_r}{\sqrt{\sigma_b^2 + \sigma_x^2 k_i}} + \frac{p_r}{\sqrt{(K+1)\sigma_b^2 + \sigma_x^2 k_i}} \right] k_i + E(\mu_i^2)\, N \sigma_x^2.   (21)

Assuming the adaptive filter has converged as i \to \infty, the following is obtained:

\left[ \frac{1-p_r}{\sqrt{\sigma_b^2 + \sigma_x^2 k_\infty}} + \frac{p_r}{\sqrt{(K+1)\sigma_b^2 + \sigma_x^2 k_\infty}} \right] k_\infty = \sqrt{\frac{\pi}{8}}\, E(\mu_\infty)\, N.   (22)

Assuming σ_x² k_∞ ≪ σ_b² when the system has converged to the steady state and its step size is sufficiently small, (22) can be approximated as

k_\infty \approx \sqrt{\frac{\pi}{8}}\, E(\mu_\infty)\, N \left[ \frac{1-p_r}{\sigma_b} + \frac{p_r}{\sqrt{K+1}\,\sigma_b} \right]^{-1}.   (23)

The excess MSE (EMSE), defined as \xi_{\mathrm{excess}} = \mathrm{tr}(R K_\infty) = \sigma_x^2 k_\infty, is

\xi_{\mathrm{excess}} \approx \sqrt{\frac{\pi}{8}}\, E(\mu_\infty)\, N \sigma_x^2 \left[ \frac{1-p_r}{\sigma_b} + \frac{p_r}{\sqrt{K+1}\,\sigma_b} \right]^{-1}.   (24)

Hence, the EMSE in (24) shows that the VSSA is less affected by impulsive interference than the LMS algorithm (shown in Appendix B). Substituting (19) into (24), the EMSE of the proposed VSSA becomes

\xi_{\mathrm{excess}} \approx \sqrt{\frac{\pi}{8}}\, N^2 \sigma_x^4\, \frac{\gamma_s (1-\beta)}{(1-\alpha)(1+\beta)} \left[ \frac{1-p_r}{\sigma_b} + \frac{p_r}{\sqrt{K+1}\,\sigma_b} \right]^{-1}.   (25)

According to [1–3,10], to guarantee the stability of the MSE, α, β, and γ_s should be chosen such that

0 < E(\mu_\infty) \approx \frac{\gamma_s}{1-\alpha} \cdot \frac{1-\beta}{1+\beta} \cdot N \sigma_x^2 < \frac{\sqrt{\frac{\pi}{2}\{(1-p_r)\sigma_b^2 + p_r (K+1)\sigma_b^2\}}}{N \sigma_x^2},   (26)

0 < \gamma_s < \frac{(1-\alpha)(1+\beta)}{(1-\beta)\, N^2 \sigma_x^4}\, \sqrt{\frac{\pi}{2}\{(1-p_r)\sigma_b^2 + p_r (K+1)\sigma_b^2\}}.   (27)

Because K ≫ 1, the bracketed factor on the right-hand side of (25) becomes

\left[ \frac{1-p_r}{\sigma_b} + \frac{p_r}{\sqrt{K+1}\,\sigma_b} \right]^{-1} = \sigma_b \left[ 1 - p_r + \frac{p_r}{\sqrt{K+1}} \right]^{-1} \approx \sigma_b (1-p_r)^{-1}.   (28)

In most cases, (28) can be simplified to σ_b when p_r ≤ 0.1. Hence, the EMSE in (25) can be further simplified to

\xi_{\mathrm{excess}} \approx \sqrt{\frac{\pi}{8}}\, N^2 \sigma_x^4\, \frac{\gamma_s (1-\beta)}{(1-\alpha)(1+\beta)}\, \sigma_b, \qquad p_r \le 0.1.   (29)
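As a rough numerical check of (29), the theoretical steady-state EMSE can be evaluated with the white-Gaussian-input parameters of Table 3; the value σ_x² = 1 assumed below follows from the normalization σ_y² = 1 with w_opt^T w_opt = 1 described in Section 3.

    import math

    # Parameters from Table 3 (white Gaussian inputs) and the 10 dB SNR setting of
    # Section 3; sigma_x2 = 1 is an assumption implied by the normalization there.
    N, sigma_x2 = 25, 1.0
    alpha, beta, gamma_s = 0.99, 0.9999, 0.00016
    sigma_b = math.sqrt(0.1)          # SNR = 10*log10(sigma_y2/sigma_b2) = 10 dB

    emse = (math.sqrt(math.pi / 8.0) * N**2 * sigma_x2**2
            * gamma_s * (1.0 - beta) / ((1.0 - alpha) * (1.0 + beta)) * sigma_b)
    print(10.0 * math.log10(emse))    # theoretical EMSE of (29) in dB

The printed value is roughly −40 dB for these settings, which can be compared with the theoretical steady-state line included in Fig. 1.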

It can be observed from (29) that, when p_r ≤ 0.1, the EMSE of the proposed VSSA depends only on the standard deviation of the system noise and the variance of the input vector; the heavy-tailed impulsive noise σ_η² (= K σ_b²) can be completely neglected. In addition, the proposed algorithm also performed well when verified with p_r = 0.5 (not shown here). Using the EMSE in (25), ξ_excess can be determined


according to p_r as follows:

\xi_{\mathrm{excess}} \approx
\begin{cases}
\sqrt{\frac{\pi}{8}}\, \frac{N^2 \sigma_x^4\, \gamma_s (1-\beta)}{(1-\alpha)(1+\beta)}\, \sigma_b, & p_r = 0, \\
\sqrt{\frac{\pi}{8}}\, \frac{N^2 \sigma_x^4\, \gamma_s (1-\beta)}{(1-\alpha)(1+\beta)} \left[ \frac{1-p_r}{\sigma_b} + \frac{p_r}{\sqrt{K+1}\,\sigma_b} \right]^{-1}, & 0 < p_r < 1, \\
\sqrt{\frac{\pi}{8}}\, \frac{N^2 \sigma_x^4\, \gamma_s (1-\beta)}{(1-\alpha)(1+\beta)}\, \sqrt{K+1}\,\sigma_b, & p_r = 1.
\end{cases}   (30)

3. Simulation results and discussion

The performance of the proposed algorithm was evaluated through computer simulations of a channel estimation scenario, using an adaptive filter with a length of 25 taps (the same as that of the unknown channel) to demonstrate the validity of the analysis. Three types of Gaussian-distributed input signals were used: a white zero-mean Gaussian random sequence applied directly (white Gaussian inputs), and the same sequence filtered through either a third-order low-pass filter G_1(z) = 0.44/(1 − 1.5 z^{-1} + z^{-2} − 0.25 z^{-3}) (third-order inputs) or a first-order system G_2(z) = 1/(1 − 0.9 z^{-1}) (first-order inputs). The desired signal was generated by adding the contaminated Gaussian impulsive noise to the output of the system. The impulse response of the system was normalized so that w_opt^T w_opt = 1, and the input signal was scaled so that the output power was σ_y² = 1. The measurement noise b_i was added to y_i such that SNR = 10 dB and 0 dB, where the signal-to-noise ratio is SNR = 10 log_10(σ_y²/σ_b²). A strong impulsive interference with a Bernoulli–Gaussian distribution (ω_i η_i), where η_i was a white Gaussian random sequence with σ_η² = 100,000 σ_y² for both SNR = 10 dB and 0 dB, and ω_i was a Bernoulli process with Pr[ω_i = 1] = p_r, was also added to y_i. The reported results were averaged over 200 independent trials.

The simulation parameters of the various sign algorithms are listed in Table 3, following the original papers. Although studies of the step size for NRMN [14], APSA [15], and MVSS-APSA [16] have been carried out, there are no general guidelines for selecting the step size in these methods, so each parameter was adjusted manually to achieve good performance. The input signals were generated using direct white Gaussian inputs, G_1(z), and G_2(z) for Figs. 1–3, Figs. 4 and 5, and Figs. 6 and 7, respectively, at SNR = 10 dB. For SNR = 0 dB, the comparison of the EMSE curves is similar to the 10 dB case, so only the comparison with white Gaussian inputs is shown (Fig. 8).
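The input-signal generation described above can be sketched as follows; scipy.signal.lfilter performs the IIR filtering for G_1(z) and G_2(z), the randomly drawn 25-tap w_opt stands in for the unspecified unknown channel, and the power-scaling function is our reading of the normalization σ_y² = 1.

    import numpy as np
    from scipy.signal import lfilter

    rng = np.random.default_rng(0)
    n_samples, N = 100_000, 25

    u = rng.standard_normal(n_samples)                    # white Gaussian source
    x_white = u                                           # white Gaussian inputs
    x_3rd = lfilter([0.44], [1.0, -1.5, 1.0, -0.25], u)   # G1(z), third-order inputs
    x_1st = lfilter([1.0], [1.0, -0.9], u)                # G2(z), first-order inputs

    w_opt = rng.standard_normal(N)
    w_opt /= np.linalg.norm(w_opt)                        # w_opt^T w_opt = 1 (placeholder channel)

    def scale_to_unit_output(x, w_opt):
        """Scale the input so that the channel output power sigma_y^2 is 1."""
        y = lfilter(w_opt, [1.0], x)
        return x / np.std(y)

    x_white = scale_to_unit_output(x_white, w_opt)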

Fig. 1 shows a comparison of the EMSE curves of the proposed algorithm with those of other adaptive sign algorithms at 10 dB SNR without impulsive noise (p_r = 0); the theoretical value of the steady-state EMSE is also included. The proposed VSSA converged faster, for the same steady-state error, than SA with a fixed step size of μ = 0.00002, DSA [13], NRMN [14], and APSA [15] with one projection order. Although MVSS-APSA [16] (also using one projection order) had a higher initial convergence speed, the proposed VSSA reached a lower steady-state error; MVSS-APSA converges quickly at first because it starts with a large step size. It should be noted that the theoretical value of the steady-state EMSE is slightly biased from the simulation results because of the approximations and assumptions made in the steady-state performance analysis.

Fig. 2 shows the step size of the proposed algorithm in (a), the estimates of ||p̂_i||² without impulsive noise (p_r = 0) in (b), and the estimates of ||p̂_i||² with p_r = 0.1 in (c). The estimates of ||p̂_i||² and the step size were close to their respective theoretical steady-state values given by (18) and (19), which are represented by dashed lines. Fig. 3 shows a comparison of the EMSE curves of the proposed VSSA with those of other adaptive sign algorithms at 10 dB SNR with impulsive noise of p_r = 0.1.

Table 3
Simulation parameters of the variable step-size sign algorithms for the channel estimation problem (SNR = 10 dB).

Algorithm | Parameters | White Gaussian inputs | Third-order inputs | First-order inputs
SA | μ | 0.00002 | 0.00006 | 0.000227
DSA [13] | μ, τ, L | 0.00002, 1, 16 | 0.00006, 1, 16 | 0.000227, 1, 16
NRMN [14] | A, K_w | 0.001, 5 | 0.001, 5 | 0.001, 5
APSA [15] | μ | 0.00015 | 0.00025 | 0.00035
MVSS-APSA [16] | α, β, μ_0 | 0.99, 0.9999999, 0.5 | 0.99, 0.9999999, 0.5 | 0.99, 0.9999999, 0.5
Proposed | α, β, γ_s | 0.99, 0.9999, 0.00016 | 0.99, 0.9999, 0.00137 | 0.99, 0.9999, 0.0203

Fig. 1. Comparison of the EMSE for various adaptive sign algorithms (white Gaussian inputs, 10 dB SNR, and no impulsive noise (p_r = 0)).


Moreover, the channel coefficients were changed abruptly (all multiplied by −1) partway through the run. As observed in Fig. 3, the proposed method converged quickly and had a low misadjustment error; the proposed VSSA performed well and was robust to the heavy-tailed impulsive interference. Figs. 4 and 5 (third-order inputs) and Figs. 6 and 7 (first-order inputs) show the simulated results for the different input signals generated by G_1(z) and G_2(z). A result similar to that of Fig. 1 (10 dB SNR) is observed in Fig. 8 (0 dB SNR). In Fig. 8, DSA used μ = 0.00002, τ = 3, and L = 8; NRMN used A = 0.0007 and K_w = 5; the step size of APSA was set to μ = 0.0003 (using one projection order); MVSS-APSA used α = 0.99, λ = 0.9999999, μ_0 = 0.5, and one projection order; and the proposed VSSA used α = 0.99, β = 0.9999, and γ_s = 0.0005. These parameters were chosen to obtain the best performance and to achieve the same steady-state error for each of the compared algorithms. The proposed VSSA performed well at both 10 dB and 0 dB SNR with heavy-tailed impulsive noise.

Methods based on the weighted average of the gradient vector were introduced in [5,6]. The gradient vector is initially large and converges to a small value at the steady state, so it can be used as a performance index for convergence. However, this leads to performance degradation of the LMS-type algorithms [5,6] when impulsive interference is present (see Appendix B). Similarly, the algorithm in [4] is sensitive to high-level noise because the instantaneous error value is used and can therefore be contaminated by the noise.

Fig. 2. (a) Estimates of the step size for the proposed method. (b) Estimates of ||p̂_i||² with p_r = 0 for the proposed method, and (c) with p_r = 0.1, when the channel is changed. The dashed lines indicate the theoretical ||p̂_i||² and μ_i at the steady state (white Gaussian inputs at 10 dB SNR).

Fig. 3. Comparison of the EMSE for various adaptive sign algorithms (white Gaussian inputs, 10 dB SNR, and with impulsive noise of p_r = 0.1).

Fig. 4. Comparison of the EMSE for various adaptive sign algorithms (third-order inputs, 10 dB SNR, and no impulsive noise (p_r = 0)).


The performance of DSA [13] is determined by the transition threshold and the choice of the two step-size parameters; it amounts to hard switching from one step size to another. The step size maintains a large value when heavy-tailed impulsive interference is present, which leads to performance degradation. The cost function of NRMN [14], minimized according to a convex mixture of the first and second error norms, is mainly controlled by a time-varying mixing parameter. If the parameter estimate tends to a large value, the NRMN algorithm behaves like the LMS algorithm, which makes it prone to considerable degradation in the presence of heavy-tailed impulsive noise; when the parameter estimate is small, NRMN behaves like SA and hence converges slowly. Although APSA [15] can speed up under colored input conditions, it is practically similar to SA, which makes its convergence slower in Gaussian input environments. Compared to APSA, the MVSS-APSA algorithm [16] is derived by minimizing the mean-square deviation to compute the optimum step size, ensuring improved convergence rate and misalignment. However, MVSS-APSA uses a decreasing rule to control the step size: it always chooses the minimum of the adjacent step sizes, so its tracking capability is degraded when the channel changes.

From a robustness perspective, one approach to improving the performance of the LMS family is to control the step size using the squared norm of the sign gradient vector, enhancing the dynamic range of the step size between its maximum and minimum allowable values instead of using a fixed value of μ. The squared norm of the sign gradient vector covers the overall tracking process during adaptation and provides tracking capability when the channel changes, because the proposed VSSA uses instantaneous gradient vectors that always point in the direction of the greatest rate of decrease toward the bottom of the error performance surface. Furthermore, the recursive operations in (3) and (4), with the smoothing factors α and β, act like low-pass filters, which effectively reduces the noise content. This ensures that the proposed algorithm not only enhances the convergence rate and reduces the complexity, but also exhibits a low misadjustment error and is robust against strong impulsive disturbances.

Fig. 5. Comparison of the EMSE for various adaptive sign algorithms (third-order inputs, 10 dB SNR, and with impulsive noise of p_r = 0.1).

Fig. 6. Comparison of the EMSE for various adaptive sign algorithms (first-order inputs, 10 dB SNR, and no impulsive noise (p_r = 0)).

Fig. 7. Comparison of the EMSE for various adaptive sign algorithms (first-order inputs, 10 dB SNR, and with impulsive noise of p_r = 0.1).

Fig. 8. Comparison of the EMSE for various adaptive sign algorithms (white Gaussian inputs, 0 dB SNR, and no impulsive noise (p_r = 0)).


The simulation results demonstrate that the proposed method performs well and is robust under low SNR, strong impulsive interference, and colored input conditions. Regarding the complexity of the various adaptive schemes (Tables 1 and 2), the proposed approach requires 5N + 2 multiplications and 4N additions per filter output.

4. Conclusion

This paper introduces a new algorithm, the VSSA, which uses the squared Euclidean norm of a weighted-averaged sign gradient vector as a criterion for the convergence performance. The proposed VSSA combines the benefits of the gradient-based algorithm and SA: the gradient-based component makes the algorithm converge quickly with colored input signals, while the SA component guarantees robustness against impulsive interference. Analyses and computer simulations confirm that the proposed algorithm improves the performance of the conventional SA by offering a faster convergence rate, a lower misadjustment error, and a lower complexity than other gradient-based VSLMS algorithms. The proposed algorithm also exhibits high robustness against strong impulsive interference.

Acknowledgment

This work was funded in part by the Aiming for the Top University and Elite Research Center Development Plan, NSC 101-2221-E-009-093-MY2, and the MediaTek Research Center at National Chiao Tung University.

Appendix A. Proof of (13) and (14)

The following lemma is needed to verify (13) and (14).

Lemma 1. Let u_1 and u_2 be jointly Gaussian zero-mean random variables with variances σ_1² and σ_2², and let y = u_2 + n, where n, with the pdf given in (7), is independent of u_1 and u_2. Let z_1 = u_2 + h_1 and z_2 = u_2 + h_2, where h_1 (with variance σ_{h1}² = σ_b²) and h_2 (with σ_{h2}² = (K+1)σ_b²) are zero-mean Gaussian variables independent of u_1 and u_2. Then

E[\mathrm{sgn}(y)\, u_1] = \sum_{k=1}^{2} \varepsilon_k\, E[\mathrm{sgn}(z_k)\, u_1],   (A.1)

where ε_1 = 1 − p_r and ε_2 = p_r.

To obtain the second moment K_i of the weight-error vector from (12), it is necessary to calculate E[sgn(e_i) v_i x_i^T] and E[sgn(e_i) x_i v_i^T]. Thus, E[sgn(e_i) v_i x_i^T] can be written as

E[\mathrm{sgn}(e_i)\, v_i x_i^T] = E\{ E[\mathrm{sgn}(e_i)\, v_i x_i^T \mid v_i] \}.   (A.2)

Furthermore, using Price's theorem [19] and Refs. [1–3,10,12], the following result is obtained:

E[\mathrm{sgn}(e_i)\, x_i^T] = \sqrt{\frac{2}{\pi}}\, \frac{1}{\sigma_{e,i}}\, E(x_i^T e_i).   (A.3)
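The jointly Gaussian identity behind (A.3), E[sgn(e) x] = \sqrt{2/\pi}\, E(xe)/σ_e, is easy to verify by simulation; the following Monte Carlo sketch is an illustration and is not part of the original derivation.

    import numpy as np

    rng = np.random.default_rng(1)
    n_samples = 2_000_000

    # Jointly Gaussian, zero-mean e and x with an arbitrary correlation.
    cov = np.array([[1.0, 0.6],
                    [0.6, 2.0]])
    e, x = rng.multivariate_normal([0.0, 0.0], cov, n_samples).T

    lhs = np.mean(np.sign(e) * x)
    rhs = np.sqrt(2.0 / np.pi) * np.mean(x * e) / np.std(e)
    print(lhs, rhs)   # the two estimates agree to within Monte Carlo error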

Using Lemma 1 and (A.1)–(A.3), E[sgn(e_i) v_i x_i^T | v_i] can be written as

E[\mathrm{sgn}(e_i)\, v_i x_i^T \mid v_i] = v_i \sqrt{\frac{2}{\pi}} \sum_{k=1}^{2} \frac{\varepsilon_k}{\sigma_{e_k,i}}\, E(x_i^T e_{k,i} \mid v_i)
= v_i \sqrt{\frac{2}{\pi}} \left\{ \frac{1-p_r}{\sqrt{\sigma_b^2 + \mathrm{tr}(R K_i)}}\, E(x_i^T e_{1,i} \mid v_i) + \frac{p_r}{\sqrt{(K+1)\sigma_b^2 + \mathrm{tr}(R K_i)}}\, E(x_i^T e_{2,i} \mid v_i) \right\},   (A.4)

where e_i = -v_i^T x_i + n_i and e_{k,i} = -v_i^T x_i + h_{k,i} (k = 1, 2, with h_{1,i} of variance σ_{h1}² = σ_b² and h_{2,i} of variance σ_{h2}² = (K+1)σ_b²). Taking the expectation with respect to v_i and using E[x_i^T e_i | v_i] = -v_i^T R, the following is obtained:

E[\mathrm{sgn}(e_i)\, v_i x_i^T] = -\sqrt{\frac{2}{\pi}}\, K_i R \left\{ \frac{1-p_r}{\sqrt{\sigma_b^2 + \mathrm{tr}(R K_i)}} + \frac{p_r}{\sqrt{(K+1)\sigma_b^2 + \mathrm{tr}(R K_i)}} \right\}.   (A.5)

E[sgn(e_i) x_i v_i^T] can be derived by the same procedure:

E[\mathrm{sgn}(e_i)\, x_i v_i^T] = -\sqrt{\frac{2}{\pi}}\, R K_i \left\{ \frac{1-p_r}{\sqrt{\sigma_b^2 + \mathrm{tr}(R K_i)}} + \frac{p_r}{\sqrt{(K+1)\sigma_b^2 + \mathrm{tr}(R K_i)}} \right\}.   (A.6)

Hence,

E[\mathrm{sgn}(e_i)(v_i x_i^T + x_i v_i^T)] = -\sqrt{\frac{2}{\pi}}\, (K_i R + R K_i) \left\{ \frac{1-p_r}{\sqrt{\sigma_b^2 + \mathrm{tr}(R K_i)}} + \frac{p_r}{\sqrt{(K+1)\sigma_b^2 + \mathrm{tr}(R K_i)}} \right\}.   (A.7)

Similarly, (11) can be developed as

E(v_{i+1}) = E(v_i) + E(\mu_i)\, E[\mathrm{sgn}(e_i)\, x_i]
= \left\{ I - E(\mu_i)\sqrt{\frac{2}{\pi}} \left[ \frac{1-p_r}{\sqrt{\sigma_b^2 + \mathrm{tr}(R K_i)}} + \frac{p_r}{\sqrt{(K+1)\sigma_b^2 + \mathrm{tr}(R K_i)}} \right] R \right\} E(v_i).   (A.8)

Appendix B. Derivation of the excess MSE for the LMS algorithm

In this appendix, the EMSE of the LMS algorithm with a fixed step size μ is derived under the two-component Gaussian mixture observation noise given in (7) and (8). According to the standard assumptions used in [1–4,7–10,12], the weight-error vector and its second moment K_i can be evaluated recursively as

E(v_{i+1}) = [I - \mu R]\, E(v_i)   (B.1)

and

K_{i+1} = K_i - \mu (R K_i + K_i R) + \mu^2 [2 R K_i R + R\, \mathrm{tr}(R K_i)] + \mu^2 \sigma_n^2 R.   (B.2)

From the MSE in (20), it again suffices to study a recursion for k_i = tr(K_i). Taking the trace of both sides of (B.2) yields

k_{i+1} = k_i - 2\mu \sigma_x^2 k_i + \mu^2 (N+2)\, \sigma_x^4 k_i + \mu^2 N \sigma_x^2 \sigma_n^2.   (B.3)


Substituting (8) into (B.3) and assuming the adaptive filter has converged as i \to \infty, the following is obtained:

k_\infty = \frac{\mu N}{2 - \mu \sigma_x^2 (N+2)}\, \{(1-p_r)\sigma_b^2 + p_r (K+1)\sigma_b^2\}.   (B.4)

The EMSE, defined as \xi_{\mathrm{excess}} = \mathrm{tr}(R K_\infty) = \sigma_x^2 k_\infty with R = \sigma_x^2 I, is

\xi_{\mathrm{excess}} = \frac{\mu N \sigma_x^2}{2 - \mu \sigma_x^2 (N+2)}\, \{(1-p_r)\sigma_b^2 + p_r (K+1)\sigma_b^2\}.   (B.5)

It can be observed from (B.5) that the EMSE of the LMS algorithm depends on the power of the impulsive noise and the input power. Hence, the LMS algorithm, which uses the energy of the instantaneous error as its cost function, is sensitive to impulsive noise, making it prone to substantial degradation in several practical applications.
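To contrast (B.5) with the VSSA result in (25), the sketch below evaluates both theoretical EMSE expressions as p_r increases; the fixed LMS step size μ used here is an arbitrary illustrative choice, not a value taken from the paper.

    import math

    N, sigma_x2 = 25, 1.0
    sigma_b2, K = 0.1, 1_000_000           # 10 dB SNR setting, sigma_eta^2 = K*sigma_b2
    alpha, beta, gamma_s = 0.99, 0.9999, 0.00016
    mu_lms = 1e-3                          # illustrative fixed LMS step size (assumption)

    for p_r in (0.0, 0.01, 0.1):
        noise_var = (1 - p_r) * sigma_b2 + p_r * (K + 1) * sigma_b2
        emse_lms = mu_lms * N * sigma_x2 / (2 - mu_lms * sigma_x2 * (N + 2)) * noise_var   # (B.5)
        bracket = (1 - p_r) / math.sqrt(sigma_b2) + p_r / math.sqrt((K + 1) * sigma_b2)
        emse_vssa = (math.sqrt(math.pi / 8) * N**2 * sigma_x2**2
                     * gamma_s * (1 - beta) / ((1 - alpha) * (1 + beta)) / bracket)        # (25)
        print(p_r, 10 * math.log10(emse_lms), 10 * math.log10(emse_vssa))

For growing p_r, the LMS expression increases roughly in proportion to p_r K σ_b², whereas the VSSA expression changes only marginally, which is the point made in Section 2.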

References

[1] B. Farhang-Boroujeny, Adaptive Filters: Theory and Applications, Wiley, New York, 1998.
[2] A.H. Sayed, Adaptive Filters, John Wiley & Sons, New York, NY, USA, 2008.
[3] P.S.R. Diniz, Adaptive Filtering: Algorithms and Practical Implementation, third ed., Springer, New York, 2008.
[4] R.H. Kwong, E.W. Johnston, A variable step size LMS algorithm, IEEE Trans. Signal Process. 40 (7) (July 1992) 1633–1642.
[5] W.P. Ang, B. Farhang-Boroujeny, A new class of gradient adaptive step-size LMS algorithms, IEEE Trans. Signal Process. 49 (4) (April 2001) 805–810.
[6] H.C. Shin, A.H. Sayed, W.J. Song, Variable step-size NLMS and affine projection algorithms, IEEE Signal Process. Lett. 11 (2) (February 2004) 132–135.
[7] M.H. Costa, J.C.M. Bermudez, A noise resilient variable step-size LMS algorithm, Signal Process. 88 (March 2008) 733–748.
[8] S. Zhao, Z. Man, S. Khoo, H.R. Wu, Variable step-size LMS algorithm with a quotient form, Signal Process. 89 (1) (January 2009) 67–76.
[9] K. Shi, P. Shi, Convergence analysis of sparse LMS algorithms with l1-norm penalty based on white input signal, Signal Process. 90 (12) (December 2010) 3289–3293.
[10] V.J. Mathews, S.H. Cho, Improved convergence analysis of stochastic gradient adaptive filters using the sign algorithm, IEEE Trans. Acoust. Speech Signal Process. 35 (4) (April 1987) 450–454.
[11] J. Chambers, A. Avlonitis, A robust mixed-norm adaptive filter algorithm, IEEE Signal Process. Lett. 4 (2) (February 1997) 46–48.
[12] S.C. Bang, S. Ann, I. Song, Performance analysis of the dual sign algorithm for additive contaminated-Gaussian noise, IEEE Signal Process. Lett. 1 (12) (December 1994) 196–198.
[13] V.J. Mathews, Performance analysis of adaptive filters equipped with the dual sign algorithm, IEEE Trans. Signal Process. 39 (1) (January 1991) 85–91.
[14] E.V. Papoulis, T. Stathaki, A normalized robust mixed-norm adaptive algorithm for system identification, IEEE Signal Process. Lett. 11 (1) (January 2004) 173–176.
[15] T. Shao, Y.R. Zheng, J. Benesty, An affine projection sign algorithm robust against impulsive interferences, IEEE Signal Process. Lett. 17 (4) (February 2010) 173–176.
[16] S. Zhang, J. Zhang, Modified variable step-size affine projection sign algorithm, Electron. Lett. 49 (20) (September 2013) 1264–1265.
[17] J. Shin, J. Yoo, P. Park, Variable step-size sign subband adaptive filter, IEEE Signal Process. Lett. 20 (2) (February 2013) 173–176.
[18] S. Dash, M.N. Mohanty, Variable sign-sign Wilcoxon algorithm: a novel approach for system identification, Int. J. Electr. Comput. Eng. 2 (4) (August 2012) 481–486.
[19] R. Price, A useful theorem for nonlinear devices having Gaussian inputs, IRE Trans. Inf. Theory 4 (2) (June 1958) 69–72.
