INFINITE PRECISION ANALYSIS OF THE FAST QR ALGORITHMS BASED ON BACKWARD PREDICTION ERRORS

The conventional QR Decomposition Recursive Least Squares (QRD-RLS) method requires on the order of N^2 multiplications, O[N^2], per output sample. Nevertheless, a number of Fast QRD-RLS algorithms have been proposed with O[N] complexity. In particular, the Fast QRD-RLS algorithms based on backward prediction errors are well known for their good numerical behavior and low complexity. In such a scenario, considering the case where fixed-point arithmetic is employed, an infinite precision analysis providing the mean square values of the internal variables becomes very attractive for a practical implementation. In addition, a finite-precision analysis requires estimates of these mean square values. In this work, we first present an overview of the main Fast QRD-RLS algorithms, followed by an infinite precision analysis concerning the steady-state mean square values of the internal variables of four FQR-RLS algorithms. We stress that the goal of this paper is the presentation of the infinite precision analysis results, i.e., the expressions for the mean square values of the internal variables, for all FQR algorithms based on backward prediction errors. The validity of these analytical expressions is verified through computer simulations carried out in a system identification setup. In the appendixes, detailed pseudo-code implementations of each algorithm are listed.


INTRODUCTION
Since the first QR Decomposition (QRD) based Fast RLS algorithm introduced by John Cioffi in 1990 [1], many other Fast QRD-based RLS algorithms have been developed [2, 3, 4, 5, 6]. It can be seen in [5] that Fast QRD-RLS algorithms can be classified in terms of the type of triangularization applied to the input data matrix (upper or lower triangular) and the type of error vector (a posteriori or a priori) involved in the updating process. It can be seen from the Gram-Schmidt orthogonalization procedure that an upper triangularization (notation being the same as in [5]) involves the updating of forward prediction errors, while a lower triangularization involves the updating of backward prediction errors. Table 1 presents this classification and also indicates how these algorithms will be designated hereafter. Also note that only for the algorithms of [2] and [3] is a formal demonstration of numerical stability known; these algorithms are backward stable and minimal in the sense of system theory [2, 7].
This work focuses on the study of the steady-state mean square values of the internal variables of this class of Fast QRD-based algorithms, which are well known for their good numerical behavior and low computational complexity. Since these algorithms present similar performances in finite precision, especially when using a reasonably large wordlength, they are all currently subjects of research.
Particularly in the case of fixed-point arithmetic implementations, information about the range of the internal variables, such as that offered by an infinite precision analysis, is very valuable for a practical implementation.
It is also worth mentioning that a finite-precision analysis requires the estimates of the mean square values found in this work, some of them obtained here and others collected from the technical literature. The relevance of the infinite precision analysis can be clearly observed in [9], where the section "Quantization Error and Stability Analysis," addressing the finite precision analysis of the conventional QRD-RLS algorithm, was only possible with the results of the infinite precision analysis carried out in the previous section.
Since in an infinite precision environment many variables are identical for all Fast QR algorithms based on backward prediction errors mentioned in Table 1, the use of results from other works was possible. We have used theoretical expressions for the mean square values of different variables from the analysis of the conventional QR-RLS algorithm performed by Diniz and Siqueira in 1995 [9]. We have also used results for variables of the a posteriori Fast QR algorithm based on backward prediction errors [2] from the paper by Siqueira, Diniz, and Alwan [10] published in 1994. Finally, we have used some expressions derived in the work carried out by Miranda, Aguayo, and Gerken in 1997 [11] concerning the variables of the Fast QR algorithm based on a priori backward prediction errors [3].
The main contributions of this work, besides the new theoretical expressions developed, are the unified framework in which all FQR algorithms based on backward prediction errors are addressed and the presentation of all their infinite precision analyses using the same notation.
This paper is organized as follows. In Section 2 we present an overview of the Fast QR algorithms based on backward prediction errors. Then, in Sections 3 and 4, the infinite precision analysis concerning the steady-state mean square values of each internal variable is presented. In Section 5, the validation of the analytical results is carried out through computer simulations. Finally, some conclusions are summarized and the detailed algorithmic implementations are presented in the appendixes.

THE FQR ALGORITHMS BASED ON BACKWARD PREDICTION ERRORS
The RLS algorithms minimize the cost function

ξ(k) = ||e(k)||^2    (1)

where each component of the error vector e(k) is the a posteriori error at instant i weighted by λ^{(k-i)/2} (λ is the forgetting factor). The vector e(k) is given by

e(k) = d(k) - X(k)w(k)    (2)

In the equation above, the weighted desired (or reference) signal vector d(k), the coefficient vector w(k), and the input data matrix X(k) are defined as follows:

d(k) = [d(k)  λ^{1/2}d(k-1)  ...  λ^{k/2}d(0)]^T

X(k) = [x(k)  λ^{1/2}x(k-1)  ...  λ^{k/2}x(0)]^T

where N is the filter order (number of coefficients minus one), x(k) = [x(k)  x(k-1)  ...  x(k-N)]^T is the input signal vector (samples before instant 0 are considered equal to zero), and w(k) is the coefficient vector. The premultiplication of the equation above by the orthonormal matrix Q(k) triangularizes X(k) without affecting the cost function:

e_q(k) = Q(k)e(k) = [e_{q1}(k); e_{q2}(k)] = [d_{q1}(k); d_{q2}(k)] - [0; U(k)]w(k)    (6)

The weighted-square error in (1) is minimized by choosing w(k) such that the term d_{q2}(k) - U(k)w(k) is zero. Equation (6) can be written in a recursive form while avoiding the ever-increasing order of the vectors and matrices involved [8]:

[e_{q1}(k); d_{q2}(k)] = Q_θ(k) [d(k); λ^{1/2}d_{q2}(k-1)]    (7)

where e_{q1}(k) is the first element of e_q(k) and Q_θ(k) = Π_{i} Q_{θ_i}(k) is a sequence of Givens rotations that annihilates the elements of the input vector x(k) in the equation

[0^T; U(k)] = Q_θ(k) [x^T(k); λ^{1/2}U(k-1)]    (8)

Matrix Q_θ(k) in (7) can be partitioned as

Q_θ(k) = [γ(k)  g^T(k); g(k)  E(k)]    (11)

where, using (11) in (8) and recalling that Q_θ(k) is orthonormal, it is possible to prove that, for the case of lower triangularization,

f(k) = U^{-T}(k)x(k)    (12)

is the normalized a posteriori backward prediction error vector [3], and

a(k) = U^{-T}(k-1)x(k)/λ^{1/2}    (13)

is the normalized a priori backward prediction error vector [3]. The update of the a posteriori and the a priori backward prediction error vectors, f(k) and a(k) respectively, leads to two different algorithms, the so-called FQR_POS_B and FQR_PRI_B algorithms. The update equations of these vectors, listed in Tables 4 and 5, involve the forward prediction error norms ||e_f(k+1)|| and ||e_f(k)||.
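The rotation-based update above can be sketched numerically. The following is a minimal sketch of one conventional QRD-RLS time update, not the paper's pseudo-code: the helper names (`givens`, `qrd_rls_update`) and the explicit storage of the triangular factor U are our own, and the fast algorithms studied in this paper avoid forming U at all.

```python
import numpy as np

def givens(a, b):
    """Return (c, s) with c*a + s*b = r >= 0 and -s*a + c*b = 0."""
    r = np.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0
    return a / r, b / r

def qrd_rls_update(U, dq2, x, d, lam):
    """One conventional QRD-RLS time update (O(N^2) per sample).

    A sequence of Givens rotations annihilates the new input row x
    against sqrt(lam)*U, as in the triangularization equation; the same
    rotations are applied to the desired-signal pair, producing
    e_q1(k) and gamma(k) = prod_i cos(theta_i(k))."""
    n = len(x)
    U = np.sqrt(lam) * U
    dq2 = np.sqrt(lam) * dq2
    x = np.asarray(x, dtype=float).copy()
    eq1, gamma = float(d), 1.0
    for i in range(n):
        c, s = givens(U[i, i], x[i])
        Ui = U[i, :].copy()
        U[i, :] = c * Ui + s * x   # updated row i of the triangular factor
        x = -s * Ui + c * x        # element i of x is annihilated
        dq2_i = dq2[i]
        dq2[i] = c * dq2_i + s * eq1
        eq1 = -s * dq2_i + c * eq1
        gamma *= c                 # accumulate the product of cosines
    return U, dq2, eq1, gamma
```

Solving U(k)w = d_{q2}(k) by back-substitution then yields the least-squares coefficients; the fast algorithms of this paper instead propagate f(k), a(k), and e_{q1}(k) directly.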

MEAN SQUARE VALUES OF COMMON VARIABLES (FQR_POS_B AND FQR_PRI_B ALGORITHMS)
The matrix equations of the two implementations of Fast QR algorithms mentioned before are listed in Tables 4 and 5. As can be seen from these tables, several equations are exactly the same. In this section, we summarize the mean square values of all variables found in both algorithms.

Mean Square Values of cos θ_i(k) and sin θ_i(k)
The following results can be found in [9].

Mean Square Value of e_{fq1}(k)
The following result was first derived in [10].

Mean Square Value of ||e_f(k)||
The following result can be found in [10].

Mean Square Values of cos θ_{f_i}(k) and sin θ_{f_i}(k)
The following results were also derived in [10].

(22)
The same expression was also obtained in [11] using a different approach.

Mean Square Value of d_{q2}(k)
The following result was first introduced in [9].

Mean Square Value of e_{q1}(k)
From the joint process estimation part of the FQR_POS_B algorithm, we take the expressions of e_{q1}(k+1) and d_{q2_{N+2-i}}(k+1) and use them to derive the expected value of e_{q1}^2(k); we find the following relation
where E{d^2(k)} = σ_d^2 = σ_x^2 Σ_{i=0}^{N} w_i^2 + σ_n^2 is the variance of the reference signal and σ_n^2 is the variance of the measurement noise (it is assumed here that the algorithm is applied in a sufficient-order identification problem, i.e., the unknown FIR system has the same order as the adaptive filter). Finally, from the last equation of the algorithm and assuming that e_{q1}(k) and γ(k) are uncorrelated, we have (25)
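The reference-signal variance relation above is easy to verify by simulation. A minimal Monte Carlo sketch, where the FIR coefficients and variances are arbitrary choices of ours rather than the paper's setup:

```python
import numpy as np

# Monte Carlo check of sigma_d^2 = sigma_x^2 * sum_i(w_i^2) + sigma_n^2
# for a sufficient-order identification setup.
rng = np.random.default_rng(1)
w = np.array([0.5, -0.3, 0.2, 0.1, 0.05])  # hypothetical unknown FIR system
sigma_x2, sigma_n2 = 1.0, 1e-2
n_samples = 200_000

x = rng.normal(0.0, np.sqrt(sigma_x2), n_samples + len(w) - 1)
noise = rng.normal(0.0, np.sqrt(sigma_n2), n_samples)
# d(k) = w^T x(k) + n(k): the reference signal of the identification problem
d = np.convolve(x, w, mode="valid") + noise

theory = sigma_x2 * np.sum(w**2) + sigma_n2
measured = np.mean(d**2)
print(theory, measured)  # the two values agree to within Monte Carlo error
```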

MEAN SQUARE VALUES OF INTERNAL VARIABLES OF THE FQR_POS_B ALGORITHM
For this algorithm, from the derivation of (12), it can be observed that the last element of f(k+1), given by ε_b(k+1)/||e_b(k+1)||, was precalculated in a previous step. This fact leads to two slightly different versions of the same algorithm. The first one is based on this prior knowledge of the last element of f(k+1), while the second is based on the straightforward computation of f(k+1) and requires the calculation of ||e_b(k+1)||. The first version of this algorithm was introduced in [6] and its detailed description is presented in Appendix A. The second version was introduced in [2] and its detailed description is given in Appendix B.
For the infinite precision results of the FQR_POS_B algorithm, all variables have the same notation used in its detailed description.

Mean Square Value of f_i(k)
From the implementation of the step "Obtaining Q_θ(k+1)" (see Table 4 and Appendix A or B), we obtain an expression (26) for f_{N+2-i}(k+1). By taking the expected value of (26) squared and using the approximations (16) and (22), we obtain the following expression.
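Approximations such as (16) and (22) rest on treating the rotation cosines and the rotated variables as uncorrelated, so that the expectation of a squared product factors into a product of expectations. A synthetic illustration, where the two distributions are our own stand-ins rather than the algorithm's actual variables:

```python
import numpy as np

# For independent factors, E[(a*b)^2] = E[a^2] * E[b^2]; the analysis
# applies this factorization to products such as cos(theta_i(k)) * f_i(k).
rng = np.random.default_rng(3)
n = 500_000
a = 0.9 + 0.05 * rng.standard_normal(n)  # stand-in for cos(theta_i(k))
b = rng.standard_normal(n)               # stand-in for f_i(k)

lhs = np.mean((a * b) ** 2)              # direct mean square of the product
rhs = np.mean(a**2) * np.mean(b**2)      # factored approximation
print(lhs, rhs)  # the two estimates agree closely
```

When the factors are correlated the factorization only holds approximately, which is why these steady-state expressions are approximations.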

Mean Square Value of aux_i
The implementation of the step "Obtaining f(k+1)" can be carried out in two different ways, as mentioned in the beginning of this section. These implementations can be found in Appendices A and B, respectively.
In the first version of this algorithm, we take the expressions for f_{N+2-i}(k+1) and aux_i and use them to obtain the corresponding mean square value.

MEAN SQUARE VALUES OF INTERNAL VARIABLES OF THE FQR_PRI_B ALGORITHM
For the FQR_PRI_B algorithm, it is observed from the derivation of (13) that the last element of a(k+1) had been previously calculated. This observation leads to two slightly different versions of the same algorithm. The first version is based on the prior knowledge of the last element of a(k+1), i.e., a_{N+1}(k+1) = e'_b(k+1)/(λ^{1/2}||e_b(k)||), and was first presented in [4]. The second version of the FQR_PRI_B algorithm is based on the straightforward computation of a(k+1) according to (13) and requires the calculation of e'_b(k+1)/(λ^{1/2}||e_b(k)||).
The first version of the FQR_PRI_B algorithm was introduced in [4] and its detailed description is presented in Appendix C. The second version was introduced in [3] and its detailed description is given in Appendix D.
For the infinite precision results of the FQR_PRI_B algorithm, all variables have the same notation used in its detailed description.

Mean Square Value of a_i(k)
From the implementation of the step "Obtaining Q_θ(k+1)" (see Table 5 and Appendix C or D), we obtain the following expression. The implementation of the step "Obtaining a(k+1)" can be carried out in two distinct ways, as also discussed in the beginning of this section. These implementations are detailed in Appendices C and D, respectively.
In the first version of this algorithm we use the expressions for a_{N+2-i}(k+1) and aux_i to calculate E[a^2_{N+2-i}(k+1)]. For the second version we use the expressions for a_{i-1}(k+1) and aux_i to calculate E[a^2_{i-1}(k+1)]; it is easy to conclude that E[aux_i^2] = E[a_i^2(k)], and as a consequence, from (31), the following expression results.

SIMULATION RESULTS
In this section we consider a system identification example where the input signal is a zero-mean Gaussian random process with variance σ_x^2 = 10^{-3}, the measurement noise is Gaussian with variance σ_n^2 = 10^{-7}, and the desired signal is obtained through a fourth-order filter. In an ensemble of 1000 runs, each with 5000 samples, only the last 4000 output samples were used to calculate the mean square values. The chosen λ was 0.95.
The four algorithms were used in the simulation in order to compare the simulated with the theoretical results. From these results, Table 2 shows the total errors between the theoretical and simulated values for the non-common variables. This error was computed, for each algorithm, as the sum of the absolute values of the differences between the simulated values (in dB) and the theoretical values (in dB). As can be seen from this table, the lowest error corresponds to the FQR_POS_B Version 1 algorithm. This only means that we can predict the mean square values slightly better for this algorithm than for the others. All detailed results are shown in Table 3. As can be observed from this table, the predicted mean square values for all internal variables are very close to their measured values.
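The figure of merit used for Table 2 can be sketched as follows; the helper name and the sample power values are hypothetical, the actual table entries are those reported in the paper.

```python
import numpy as np

def total_db_error(simulated, theoretical):
    """Sum of |simulated - theoretical| with both steady-state mean
    square values expressed in dB, as used to build Table 2."""
    sim_db = 10.0 * np.log10(np.asarray(simulated, dtype=float))
    theo_db = 10.0 * np.log10(np.asarray(theoretical, dtype=float))
    return float(np.sum(np.abs(sim_db - theo_db)))

# e.g., three internal variables: measured vs. predicted mean square value
print(total_db_error([0.021, 0.105, 0.0009], [0.020, 0.100, 0.0010]))
```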

CONCLUSIONS
In this paper, four versions of Fast QR Decomposition algorithms based on backward prediction errors have been analyzed in an infinite precision environment. These algorithms are generally good choices among the Fast RLS algorithms due to their low computational complexity and proven stability when implemented with finite precision arithmetic.
Closed-form formulae for the estimation of the mean square values of the internal variables were obtained, and the theoretical results were compared with computer simulations, confirming the accuracy of the analysis.
These expressions are key for a proper implementation of these algorithms using fixed-point arithmetic processors, since the number of bits for each internal variable can be determined from its estimated mean square value. In addition, they are required in the finite-precision analysis of the FQR_PRI_B and FQR_POS_B algorithms which, so far, is not available in the literature.

Apolinário Jr., C. A. Medina S., and P. S. R. Diniz, Infinite Precision Analysis of the Fast QR Algorithms Based on Backward Prediction Errors

The following relation, which is also used in the conventional QR algorithm, is obtained by postmultiplying e_q^T(k)Q(k) by the pinning vector [1 0 ... 0]^T (note that scalars are represented by italic letters while vectors are written in boldface):

e(k) = e_{q1}(k) / Π_i cos θ_i(k) = e_{q1}(k)/γ(k)

where γ(k) is the first element of the first row of Q_θ(k).
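The relation above uses the fact that the (1,1) entry of a product of Givens rotations, each mixing the first row with one lower row, is the product of their cosines. A quick numerical check, with angles drawn arbitrarily by us and a rotation layout that mirrors the Q_{θ_i} structure:

```python
import numpy as np

def q_theta_i(theta, i, n):
    """Givens rotation mixing component 0 with component i+1 of an n-vector."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[0, 0], G[0, i + 1] = c, s
    G[i + 1, 0], G[i + 1, i + 1] = -s, c
    return G

rng = np.random.default_rng(2)
n = 5
thetas = rng.uniform(-np.pi / 2, np.pi / 2, n - 1)
Q = np.eye(n)
for i, th in enumerate(thetas):
    Q = q_theta_i(th, i, n) @ Q  # Q_theta = Q_{theta,N} ... Q_{theta,0}

gamma = Q[0, 0]
print(np.isclose(gamma, np.prod(np.cos(thetas))))  # True
```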

Table 3. Mean Square Values of Internal Variables. (Continues on p. 129.) (*) represents common variables.