A SINR Maximizing Interpolation-and-Decimation-based Dimensionality Reduction Technique, with Application to Beamforming

—We present a dimensionality reduction technique based on a joint interpolation and decimation scheme, with application to beamforming. The dimensionality reduction is achieved by a two-step procedure: interpolation followed by decimation. The array snapshots are interpolated by a finite impulse response (FIR) filter in order to generate correlation between their samples. The decimation stage then discards some samples from the correlated interpolator output signal, effectively reducing the snapshots' length. A notable point of this technique is the elegant and effective way the interpolation filter is designed: for a given decimation pattern, the interpolation filter maximizes the signal-to-interference-and-noise ratio (SINR) at the output of the decimation stage. The optimization of the reduced-dimensionality stage is made independently of the final application filtering stage, allowing the proposed scheme to be combined with any interference-suppressive or detection filter of choice. Investigation of this technique in light of the particularities of the beamforming signal model led to the simplifications proposed here, which significantly reduce its overall complexity. Comparison with renowned robust rank-reduction techniques shows that the proposed approach achieves an excellent SINR loss figure of merit with superior robustness and low computational complexity.


I. INTRODUCTION
Interpolation and decimation algorithms have been extensively studied for sampling-rate alteration and related applications [1]. In these algorithms, the input signal is filtered prior to the decimation stage in order to avoid aliasing. Just as the sampling rate is an important cost factor in digital signal processor implementations, so is the length of the input data in increasingly sophisticated algorithms.
One of this paper's authors investigated the interpolation and decimation concept with a focus on reducing the dimensionality of the observed data prior to the digital processing algorithms. The idea was to design the interpolation filter to minimize a certain cost function. This idea first originated an algorithm in which the dimensionality-reduction stage is coupled with the final application filter (the detection filter, for example), adaptively adjusting both the interpolation and detection filter weights [2]. This approach proved to give excellent results at low computational cost in Direct Sequence Code Division Multiple Access (DS-CDMA) communication scenarios using Minimum Mean Square Error (MMSE) filters [3], [4], [5].
Building on this concept, another approach was then proposed: to design an interpolator filter that maximizes the signal-to-noise ratio (SNR) at the output of the decimation stage, irrespective of the final application filter. Indeed, differently from [2], the optimization of the reduced-dimensionality stage is made independently of the filtering stage, allowing the proposed scheme to be combined with any interference-suppressive or detection filter of choice. Since the interpolation filter is now decoupled from the application filter, the advantage of this stand-alone dimensionality reduction block lies in the fact that it can be deployed upstream from any desired application filter, e.g. any kind of detection or estimation filter, without the need to redesign it from the application cost function, as has to be done for different applications in [2], [3], [4], [5]. This alternative approach was tested in Ultra Wide Band (UWB) communication scenarios, with excellent results [6], [7], [8].
The aforementioned method was further improved, leading to the design of a decoupled interpolation filter that maximizes the signal-to-interference-and-noise ratio (SINR), instead of only the SNR as in [6], [7], [8]. The design of this new interpolator is both creative and effective. This method was applied to DS-CDMA and UWB communication scenarios and reported in Portuguese in [9] and [10]. From now on, this method will be called the joint interpolation and decimation scheme (JIDS).
In this paper, we recast the JIDS algorithm for beamforming applications and specialize it considering the particularities of this specific problem. We propose simplified methods for selecting the algorithm parameters and for defining the best decimation strategy: we investigate the interdependence of two of the JIDS parameters, the decimation factor, F, and the interpolation filter length, L_v, and obtain a straightforward method to set the length of the interpolation filter. We also derive a new low-complexity criterion for selecting the best decimation pattern. The proposed specialized JIDS for beamforming will be referred to as JIDSB.
The Minimum Variance Distortionless Response (MVDR) beamforming filter [11] is a well-known beamforming technique that exploits the second-order statistics of the interference vector to minimize the array output variance while constraining the array response towards the direction of the signal of interest (SOI). The Minimum Power Distortionless Response (MPDR) beamforming filter, as denoted in [11], exploits the second-order statistics of the received vector to minimize the array output power under the same constraint. When the direction of arrival (DOA) of the SOI is known exactly, the output of the MPDR reduces to the output of the MVDR [11].
There are many techniques available to implement the MVDR filter with a smaller computational burden, basically avoiding the MVDR matrix inversion step, e.g. the stochastic-gradient (SG) technique [12], [13]. However, full-rank algorithms usually require a large number of snapshots to reach steady state. In large antenna array arrangements, this may cause degradation in convergence speed, especially in environments where only a small support of independent and identically distributed (IID) samples is available for estimation of the statistical quantities [14].
Reduced-rank techniques can mitigate these drawbacks. Principal Components (PC) [15], [16] and the Cross-Spectral Metric (CSM) [17] are examples of reduced-rank filtering schemes based on an eigendecomposition. The Multistage Wiener Filter (MWF) achieves rank reduction through the Krylov subspace, which has the added benefit of a further reduction in computational complexity based, for example, on the Lanczos [18], [19], [20] or Arnoldi [21], [20] algorithms, or on a conjugate-gradient (CG) implementation [22], [23] (CG-MWF). Other rank-reduction algorithms include the Auxiliary Vector Filter (AVF) [24], [25], which generates a sequence of linear auxiliary filters that converge to the MVDR filter, and the family of adaptive joint iterative optimization (JIO) algorithms [26], [27].
Another concern in beamforming is robustness, which indicates how the algorithms perform under unfavorable conditions, e.g. calibration errors, look-direction errors, distortions caused by source spreading, and poor estimation of statistical quantities due to small sample support sizes. This problem may be especially dramatic when the MPDR beamformer filter is used. Many approaches have been proposed to improve beamformer robustness, for example, adding extra quadratic (diagonal loading), linear point or derivative constraints [28], [12], [29], [30], robust estimation using random matrix theory [31], or eigenspace-based robust beamformers [32]. It is within this context that the JIDSB algorithm is inserted.
We apply the MVDR detection filter to the output of the dimensionality reduction stage and compare the JIDSB results in terms of SINR loss with the full-rank MVDR filter and several rank-reduction techniques such as PC, CSM and CG-MWF. In terms of robustness, we compare the JIDSB with the former methods with diagonal loading when the MPDR filter is applied. This paper is organized as follows. Section II describes the signal model. Section III combines the dimensionality reduction transforms with the beamformer weighting vector. Section IV recasts the JIDS, specialized here for the beamforming application. The proposed simplification procedures are explained in subsections IV-B and IV-C. Subsection IV-D addresses the computational complexity issue. Performance assessment examples of the proposed specialized and simplified JIDS (i.e. JIDSB) are provided in Section V. Finally, conclusions are given in Section VI.

II. SYSTEM MODEL
We consider a beamforming application with a uniform linear array (ULA) with M elements. The sensor array received vector at the i-th time snapshot, r(i) ∈ C^{M×1}, is given by

r(i) = b_0 s(θ_0) + x(i), (1)

where, without loss of generality, the signal of interest is represented as b_0, a complex random variable with power E[|b_0|^2] = σ_0^2 and steering vector

s(θ_0) = [1, e^{−j2π(d/λ_c) sin θ_0}, ..., e^{−j2π(M−1)(d/λ_c) sin θ_0}]^T, (2)

where λ_c is the carrier wavelength and d is the inter-element spacing of the ULA. The term x(i) ∈ C^{M×1} is the interference-plus-noise vector, x(i) = i(i) + n(i), where i(i) is the superposition of the Q interfering signals with directions of arrival θ_q, q = 1, ..., Q, given by:

i(i) = Σ_{q=1}^{Q} b_q s(θ_q). (3)

The elements of the vector b = [b_1, ..., b_Q]^T are modeled as random variables from uncorrelated zero-mean, circular complex processes, with variances given by σ_1^2, σ_2^2, ..., σ_Q^2. The vector n(i) ∈ C^{M×1} is the complex vector of sensor noise, which is assumed to be a zero-mean, spatially and temporally white, circular complex Gaussian vector. The beamformer output is then given by

z(i) = w^H r(i), (4)

where w = [w_1, ..., w_M]^T ∈ C^{M×1} is the complex weighting vector. The beamformer weighting vector w can be designed to maximize the SINR in z(i), according to the MVDR criterion [11],

min_w w^H R w, subject to w^H s(θ_0) = 1, (5)

where R = E[x(i) x^H(i)] is the autocorrelation matrix of the interference and noise. The well-known solution of (5) is the optimal weighting vector given by:

w_opt = R^{-1} s(θ_0) / (s^H(θ_0) R^{-1} s(θ_0)). (6)

It is easy to show that the SINR achieved with the optimal filter of (6) is given by

SINR_opt = σ_0^2 s^H(θ_0) R^{-1} s(θ_0). (7)

When the autocorrelation matrix is not known a priori, it has to be estimated from the observed data. In statistically stationary signal scenarios, the autocorrelation matrix can be estimated from the available sample support of N_s snapshots as

R̂ = (1/N_s) Σ_{i=1}^{N_s} x(i) x^H(i). (8)

The solution in (6) that entails the computation of the inverse of the estimated autocorrelation matrix, R̂, is called the MVDR sample matrix inversion (SMI) beamformer. The optimization problem that arises when we use the autocorrelation matrix of the whole incoming signal, R = E[r(i) r^H(i)], is known as MPDR [11].
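As a concrete illustration of the MVDR-SMI beamformer described above, the weights can be sketched in a few lines of Python (a minimal sketch; the array size, angles and powers below are arbitrary illustrative choices, not values from the paper):

```python
import numpy as np

def steering_vector(theta_deg, M, d_over_lambda=0.5):
    """ULA steering vector s(theta), half-wavelength spacing by default."""
    m = np.arange(M)
    return np.exp(-2j * np.pi * m * d_over_lambda * np.sin(np.radians(theta_deg)))

def mvdr_smi(X, s0):
    """MVDR-SMI weights from interference-plus-noise snapshots X (M x Ns)."""
    R_hat = X @ X.conj().T / X.shape[1]        # sample autocorrelation matrix
    w = np.linalg.solve(R_hat, s0)
    return w / (s0.conj() @ w)                 # enforce w^H s(theta_0) = 1, cf. (6)

rng = np.random.default_rng(0)
M, Ns = 16, 200
s0 = steering_vector(0.0, M)
sj = steering_vector(30.0, M)                  # one jammer at 30 degrees, JNR = 10 dB
b = np.sqrt(10.0 / 2) * (rng.standard_normal(Ns) + 1j * rng.standard_normal(Ns))
n = (rng.standard_normal((M, Ns)) + 1j * rng.standard_normal((M, Ns))) / np.sqrt(2)
w = mvdr_smi(np.outer(sj, b) + n, s0)
print(abs(w.conj() @ s0))                      # ~1.0 (distortionless towards the SOI)
print(abs(w.conj() @ sj))                      # << 1 (deep null towards the jammer)
```

With a sufficient sample support the SMI weights keep the distortionless response towards the SOI while placing a deep null towards the jammer.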
The MPDR and the MVDR solutions are identical in conditions of perfect knowledge of the autocorrelation matrix and the desired steering vector. But SMI-based MPDR beamformers are known to suffer from performance degradation [11], [28], [33]. The performance degradation is due to signal cancelation, termed as signal self-nulling. This problem becomes especially dramatic in practical scenarios, when there are mismatches between the assumed array response and the true array response. This situation arises, for example, when there is a finite sample support for estimating the autocorrelation matrix.
Diagonal loading is a popular approach to improve the MPDR-SMI beamformer robustness. It is derived by imposing an additional quadratic constraint either on the Euclidean norm of the weight vector itself or on its difference from a desired weight vector [28]. The estimated autocorrelation matrix with a diagonal loading γ ∈ R+, R̂_DL, is given by

R̂_DL = R̂ + γ I_M. (9)

III. DIMENSIONALITY REDUCTION TECHNIQUE FOR BEAMFORMING APPLICATION

Consider a dimensionality reduction transformation matrix T ∈ C^{D×M}. The observed i-th snapshot r_D(i) ∈ C^{D×1} after the dimensionality reduction, given by

r_D(i) = T r(i), (10)

is then processed by the beamforming filter, w_D ∈ C^{D×1}, whose output z_D(i) is given by

z_D(i) = w_D^H r_D(i). (11)

The complex weighting vector w_D = [w_1, ..., w_D]^T ∈ C^{D×1} is designed according to the MVDR criterion for the reduced observation r_D and is given by

w_D = R_D^{-1} s_D(θ_0) / (s_D^H(θ_0) R_D^{-1} s_D(θ_0)), (12)

where

R_D = T R T^H (13)

is the autocorrelation matrix of the interference plus noise after the dimensionality reduction stage, R ∈ C^{M×M} is the autocorrelation matrix of the interference plus noise of the original data, and s_D(θ_0) = T s(θ_0) is the desired signal steering vector after the dimensionality reduction stage. The block diagram of this process is illustrated in Fig. 1.
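The diagonal loading and the reduced-dimension MVDR filter of this section can be sketched in Python (a minimal sketch; the reduction matrix T below is an arbitrary row-selection example, chosen only to make the dimensions concrete):

```python
import numpy as np

def loaded_smi_weights(R_hat, s0, gamma):
    """SMI weights with diagonal loading: uses R_hat + gamma * I, cf. (9)."""
    R_dl = R_hat + gamma * np.eye(R_hat.shape[0])
    w = np.linalg.solve(R_dl, s0)
    return w / (s0.conj() @ w)                 # w^H s0 = 1

def reduced_mvdr(T, R_hat, s0):
    """MVDR in the reduced space: R_D = T R T^H and s_D = T s0."""
    R_D = T @ R_hat @ T.conj().T
    s_D = T @ s0
    w_D = np.linalg.solve(R_D, s_D)
    return w_D / (s_D.conj() @ w_D), s_D

rng = np.random.default_rng(1)
M, D, Ns = 8, 4, 100
X = (rng.standard_normal((M, Ns)) + 1j * rng.standard_normal((M, Ns))) / np.sqrt(2)
R_hat = X @ X.conj().T / Ns
s0 = np.exp(-1j * np.pi * np.arange(M) * np.sin(np.radians(20.0)))
T = np.eye(M)[:D]                              # toy reduction: keep the first D sensors
w_D, s_D = reduced_mvdr(T, R_hat, s0)
print(w_D.shape, abs(w_D.conj() @ s_D))        # reduced filter is distortionless: ~1.0
```

The distortionless constraint is preserved in the reduced space regardless of the particular T used.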

IV. JIDS DIMENSIONALITY REDUCTION TECHNIQUE
The JIDS dimensionality reduction technique was presented by one of the authors in a regional conference in Portuguese [9] for DS-CDMA and UWB communications. For the sake of completeness, this section gives an overview of the method, modifying it to take advantage of the beamforming system model. In general terms, the JIDS is based on two operations: interpolation and decimation, as depicted in Fig. 2. At the interpolation stage, the i-th received snapshot r(i) is filtered by v ∈ C^{L_v×1} (L_v << M) in order to correlate its components before the decimation stage. The decimation stage is implemented by means of a decimation matrix, D, that selects certain components, reducing the original dimension M by a factor of F. The resulting vector length is D = ⌊M/F⌋, where ⌊x⌋ denotes the largest integer not greater than x. For a uniform decimation by a factor of F, there are in fact F possible patterns, l ∈ {0, ..., F−1}. The index l, which designates the decimation pattern, D_l ∈ C^{D×M}, corresponds to the index of the first component of r(i) selected by the decimation block, that is,

[D_l]_{k,m} = 1 if m = l + kF, and [D_l]_{k,m} = 0 otherwise,

where k ∈ {0, ..., D−1} and m ∈ {0, ..., M−1}. The JIDS dimensionality reduction technique makes a joint choice of the interpolation filter, v_l, and the decimation pattern, D_l, as will be explained in the following subsection.
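The decimation matrix D_l can be sketched as follows (zero-based indices; `decimation_matrix` is our illustrative helper, not a name from the paper):

```python
import numpy as np

def decimation_matrix(M, F, l):
    """D_l in C^{D x M}: keeps components l, l+F, ..., l+(D-1)F, with D = floor(M/F)."""
    D = M // F
    Dl = np.zeros((D, M))
    Dl[np.arange(D), l + np.arange(D) * F] = 1.0
    return Dl

Dl = decimation_matrix(M=12, F=4, l=1)
print(Dl @ np.arange(12.0))       # selects components 1, 5, 9: [1. 5. 9.]
```

Applying D_l to a vector is therefore just a strided row selection, which is what makes the subsequent processing cheap.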
A. An Effective Design of the Interpolation Filter Specialized for Beamforming

Given a decimation pattern l, we seek the interpolation filter v*_l that maximizes the SINR at the output of the dimensionality reduction stage. Furthermore, we also seek the decimation pattern l* that results in the highest SINR among all F possible decimation patterns, l ∈ {0, ..., F−1}. The problem of choosing the decimation pattern, l*, will be shown to simplify to a trivial comparison of scalars.
In the following, we address the problem of finding the interpolation filter v*_l that maximizes the SINR at the output of the dimensionality reduction stage, given a decimation pattern l. The output of the dimensionality reduction stage using the l-th decimation pattern, at the i-th snapshot, is given by

r_{D_l}(i) = D_l V_l r(i), (15)

where V_l ∈ C^{M×M} is a Toeplitz matrix that implements the discrete convolution between v_l ∈ C^{L_v×1} and r(i) ∈ C^{M×1}. The first column of V_l is given by [v_l^T, 0, ..., 0]^T. Due to the convolution commutation property,

V_l r(i) = R v_l, (18)

where R ∈ C^{M×L_v} is a Toeplitz matrix whose first column is given by r(i) ∈ C^{M×1}. Using (18) we can rewrite (15) as

r_{D_l}(i) = D_l R v_l. (19)

Similarly, s_{D_l}, i_{D_l}(i) and n_{D_l}(i) are defined as

s_{D_l} = D_l V_l s(θ_0) = D_l S v_l, (20)
i_{D_l}(i) = D_l V_l i(i) = D_l I v_l, (21)
n_{D_l}(i) = D_l V_l n(i) = D_l N v_l, (22)

where S ∈ C^{M×L_v}, I ∈ C^{M×L_v} and N ∈ C^{M×L_v} are Toeplitz matrices with their first columns given respectively by s(θ_0) ∈ C^{M×1}, i(i) ∈ C^{M×1} and n(i) ∈ C^{M×1}. The SINR after the dimensionality reduction stage using the l-th decimation pattern is given by (dropping the snapshot index i for convenience)

SINR_l = σ_0^2 s_{D_l}^H s_{D_l} / E[(i_{D_l} + n_{D_l})^H (i_{D_l} + n_{D_l})]. (23)

The filter v*_l that maximizes (23) is the one that satisfies

v*_l = arg max_v { σ_0^2 s_{D_l}^H s_{D_l} / E[(i_{D_l} + n_{D_l})^H (i_{D_l} + n_{D_l})] }, (24)

which is equivalent to

v*_l = arg max_v { σ_0^2 v^H S^H diag(p_l) S v / E[v^H (I + N)^H diag(p_l) (I + N) v] }, (25)

since D_l^H D_l = diag(p_l). The numerator in (25) may be written as v^H A_l v, where A_l ∈ C^{L_v×L_v} is the Hermitian, non-negative definite matrix given by

A_l = σ_0^2 S^H diag(p_l) S.

The matrix diag(p_l) is a diagonal matrix formed with the elements of the vector p_l along the main diagonal. The vector p_l identifies the l-th, l ∈ {0, ..., F−1}, decimation pattern: it has zeros at the positions where the elements will be discarded and ones where the elements will be selected, that is, the elements with indices {l, l+F, l+2F, ..., l+(⌊M/F⌋−1)F} are retained.
The denominator in (25) may be written as v^H B_l v, where B_l ∈ C^{L_v×L_v} is the Hermitian, non-negative definite matrix given by

B_l = E[(I + N)^H diag(p_l) (I + N)].

The maximization problem in (25) can be restated as

v*_l = arg max_v f_l(v), where f_l(v) = v^H A_l v / v^H B_l v. (36)

The gradient of f_l(v) is computed as

∇_v f_l(v) = [A_l v (v^H B_l v) − B_l v (v^H A_l v)] / (v^H B_l v)^2. (37)

The values that null (37) must satisfy

A_l v = λ B_l v, (39)

or

F_l v = λ v, (40)

where F_l = B_l^{-1} A_l and λ is the scalar given by

λ = v^H A_l v / v^H B_l v. (41)

We notice that (40) is the eigenvalue equation of the matrix F_l. Therefore, a vector v that solves (40) must be an eigenvector of F_l associated with λ. But, as can be seen by comparing (41) with (36), λ is the SINR itself. Thus, in order to maximize the SINR we need to find the eigenvector associated with the largest eigenvalue of F_l. By doing so, we are choosing the interpolation filter, v*_l, that will produce the maximal SINR for the l-th decimation pattern. In summary, the interpolation filter v*_l that maximizes the SINR at the output of the dimensionality reduction stage, given a decimation pattern l, is the eigenvector associated with the largest eigenvalue of F_l.
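The design above reduces to an eigenvalue problem on F_l = B_l^{-1} A_l, cf. (40); a minimal Python sketch (the helper names and sizes are ours; B_l is estimated from interference-plus-noise snapshots as a sample average):

```python
import numpy as np

def conv_toeplitz(x, Lv):
    """M x Lv Toeplitz matrix X, first column x, so that X @ v is the convolution v * x."""
    M = len(x)
    X = np.zeros((M, Lv), dtype=complex)
    for q in range(Lv):
        X[q:, q] = x[:M - q]
    return X

def max_sinr_interpolator(s0, snapshots, p_l, Lv, sigma0_sq=1.0):
    """Interpolator for pattern p_l: dominant eigenvector of F_l = B_l^{-1} A_l."""
    S = conv_toeplitz(s0, Lv)
    A = sigma0_sq * S.conj().T @ (p_l[:, None] * S)      # A_l = sigma_0^2 S^H diag(p_l) S
    B = np.zeros((Lv, Lv), dtype=complex)
    for i in range(snapshots.shape[1]):                  # sample estimate of B_l
        Xi = conv_toeplitz(snapshots[:, i], Lv)
        B += Xi.conj().T @ (p_l[:, None] * Xi)
    B /= snapshots.shape[1]
    Fl = np.linalg.solve(B, A)
    vals, vecs = np.linalg.eig(Fl)
    k = np.argmax(vals.real)                             # lambda_max is the achievable SINR
    return vecs[:, k], vals[k].real

# toy usage: M = 16, L_v = F = 4, pattern l = 0, white interference-plus-noise
rng = np.random.default_rng(0)
M, Lv = 16, 4
s0 = np.exp(-1j * np.pi * np.arange(M) * np.sin(np.radians(20.0)))
p0 = np.zeros(M)
p0[0::4] = 1.0
X = (rng.standard_normal((M, 500)) + 1j * rng.standard_normal((M, 500))) / np.sqrt(2)
v_star, sinr = max_sinr_interpolator(s0, X, p0, Lv)
print(v_star.shape, sinr > 0)                            # (4,) True
```

Note that only L_v × L_v matrices are decomposed, which is what keeps the design stage cheap.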
In the following, we address the problem of finding the decimation pattern l * that results in the highest SINR among all F decimation patterns. It turns out that the answer is simple: we just have to compare the largest eigenvalue of F l for l ∈ {0, . . . , F − 1} and select l * as the decimation pattern that produces F l * with the largest eigenvalue. This process can be done in a parallel multiple branch structure with F branches, where each branch uses a different decimation pattern followed by a simple scalar comparison. By doing so, we are choosing the interpolation filter v * l * , that will produce the maximal SINR among all F decimation patterns.
We can now recast the JIDS for beamforming applications:
1) Construct the Toeplitz matrix S of the desired steering vector s(θ_0);
2) For the decimation pattern l, compute Â_l = σ_0^2 S^H diag(p_l) S;
3) Estimate B̂_l from the available training snapshots;
4) Compute F̂_l = B̂_l^{-1} Â_l;
5) Compute the largest eigenvalue, λ_max,l, of F̂_l;
6) Repeat steps 2 to 5 for the F possible decimation patterns and choose the decimation pattern l* that provides the largest eigenvalue of F̂_l;
7) For the decimation pattern l* (selected in the previous step), set the interpolation filter v*_{l*} as the eigenvector associated with the largest eigenvalue of F̂_{l*}.
After determining D_{l*} and V_{l*} using the steps described above, the JIDS dimensionality reduction transformation matrix T_J ∈ C^{D×M} is, thus, given by

T_J = D_{l*} V_{l*}. (45)
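The resulting transformation T_J of (45) can be sketched and checked against a direct filter-then-decimate computation (a minimal sketch with an arbitrary interpolator v; `jids_transform` is our illustrative name):

```python
import numpy as np

def jids_transform(v, M, F, l):
    """T_J = D_l V_l: rows l, l+F, ... of the M x M convolution matrix V_l."""
    Lv = len(v)
    V = np.zeros((M, M), dtype=complex)
    for q in range(M):                          # first column of V_l is [v^T, 0, ..., 0]^T
        V[q:q + Lv, q] = v[:M - q]
    return V[l::F][:M // F]                     # decimation = uniform row selection

rng = np.random.default_rng(2)
M, F, l = 16, 4, 1
v = rng.standard_normal(F) + 1j * rng.standard_normal(F)
T_J = jids_transform(v, M, F, l)
r = rng.standard_normal(M) + 1j * rng.standard_normal(M)
# T_J @ r equals interpolating r with v and then keeping every F-th sample:
print(np.allclose(T_J @ r, np.convolve(v, r)[:M][l::F][:M // F]))   # True
```

Because T_J has only D rows, applying it to each snapshot costs O(DM) operations.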

B. Selection of the Interpolator Length
Selection of the interpolator length, L_v, and the decimation factor, F, may require an extensive search. In this section we examine the particularities of the beamforming problem in order to suggest a good choice for setting those parameters.
Previous work [6] showed that, for scenarios where the observed data is corrupted only by white noise, the best results occur for interpolation filter lengths equal to the decimation factor, L_v = F. This may be explained by the fact that L_v = F is the filter length that combines the largest number of samples while preserving the statistical characteristics of the white noise vector. For this choice of L_v, the time interval between the preserved noise samples is greater than the memory of the interpolator filter, and the white noise vector remains white after the filtering and decimation operations. Indeed, it is straightforward to show that, due to the structure of V_l in (22), the k-th component z_k(i) of the filtered noise vector z(i) = V_l n(i) in (22) depends only on the (at most) L_v components n_k, n_{k−1}, ..., n_{k−L_v+1} of n(i), that is, z_k(i) = f(n_k, n_{k−1}, ..., n_{k−L_v+1}). The net effect of the matrix D_l in (22) is to select (keep) one out of every F components of z(i), resulting in n_{D_l}(i). In this respect, any two adjacent components of n_{D_l}(i) are indeed two components of z(i) spaced F components apart from each other. Thus, in order for any two adjacent components of n_{D_l}(i) to be uncorrelated, z_k = f(n_k, n_{k−1}, ..., n_{k−L_v+1}) and z_{k+F} = f(n_{k+F}, n_{k+F−1}, ..., n_{k+F−L_v+1}) must be uncorrelated. For this to hold, z_k and z_{k+F} cannot have components of n(i) in common, or equivalently, k + F − L_v + 1 > k, that is, F > L_v − 1; and since F and L_v are integers, F ≥ L_v. Therefore, L_v = F is the condition that combines the largest number of samples while preserving the statistical characteristics of the white noise vector.
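The whiteness-preservation argument can be checked numerically: with L_v = F, adjacent components of the filtered-and-decimated white noise share no input samples, so their sample correlation vanishes (a minimal Monte Carlo sketch with arbitrary sizes):

```python
import numpy as np

rng = np.random.default_rng(3)
M, F = 32, 4
v = rng.standard_normal(F)                    # interpolator with L_v = F
N = 20000                                     # Monte Carlo snapshots
n = rng.standard_normal((N, M))               # white noise
# z_k = sum_j v_j n_{k-j}, then keep one out of every F components
z = np.stack([np.convolve(row, v)[:M][::F] for row in n])
C = z.T @ z / N                               # sample covariance of the decimated output
corr = C / np.sqrt(np.outer(np.diag(C), np.diag(C)))
off = np.abs(corr - np.eye(corr.shape[0])).max()
print(off < 0.05)                             # components remain uncorrelated: True
```

Repeating the experiment with L_v > F makes adjacent decimated components share input samples and the off-diagonal correlations no longer vanish.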
That is a good starting point for investigating the parameter settings in beamforming scenarios as well. It is to be expected that in cases where the jammer-to-noise ratio (JNR) is very low, L_v = F is the best choice, as this approaches the white-noise-only scenario. In scenarios where the JNR is very high it may not be the best setting, but it may still be a good choice. We checked this through extensive computer simulations and verified that, for beamforming applications, setting L_v = F is indeed a good setting. In this subsection we show only two representative results for illustration purposes.
We simulated a beamforming scenario consisting of M = 64 elements, SOI at 0° and SNR = 10 dB. We varied the number of jammers and their JNR and evaluated the SINR loss for decimation factors of 2, 4, 8 and 16 over a range of filter lengths. The SINR loss (L_SINR) was computed as

L_SINR = SINR_D / SINR_opt,

where R is the true autocorrelation matrix of the noise and interference, known a priori, SINR_opt = σ_0^2 s^H(θ_0) R^{-1} s(θ_0), and

SINR_D = σ_0^2 s^H(θ_0) T_J^H (T_J R T_J^H)^{-1} T_J s(θ_0),

where T_J is defined in (45). The number of training samples for estimating B_l is N_B = 128, averaged over 200 Monte Carlo trials. In Fig. 3 we simulated three jammers impinging on the ULA at angles −30°, 50° and 65°. All jammers have a JNR of −9 dB. One can see that, for all decimation factors F, the lowest SINR losses occur for filter lengths set as L_v = F, as we expected. The arrows in the figure point to where the interpolator lengths are equal to the decimation factors. The SINR loss in dB is 10 log_10(L_SINR).
In Fig. 4 we simulated four jammers impinging on the ULA at angles −60°, −30°, 10° and 50°. All jammers have a high JNR of 15 dB. One can see that for all decimation factors, except for F = 4 (which is not the best reduction factor for this scenario anyway), the lowest SINR losses occur for filter lengths set as L_v = F. Therefore, in what follows we set L_v = F; this procedure avoids the additional burden of jointly optimizing the filter length and the decimation factor.

C. JIDS Simplification for Beamforming Environment -JIDSB
Deeper investigation of the ULA structure within the JIDS revealed that the JIDS can be further simplified. In this context, we propose a low-complexity criterion for selecting the best decimation pattern for ULAs. Considering the structure of the steering vector in a ULA, we can, instead of selecting the decimation pattern related to the largest eigenvalue, λ_max,l, of F_l, among all decimation patterns, l ∈ {0, ..., F−1}, select the decimation pattern l* that corresponds to the largest trace of F_l, denoted tr(F_l):

l* = arg max_{l ∈ {0, ..., F−1}} tr(F_l).

This procedure leads to similar performance and avoids the need for eigenvalue decompositions during the decimation pattern decision process. The trace of F_l (which is equal to the sum of all eigenvalues of F_l) is approximately equal to the largest eigenvalue of F_l because, when the length of the interpolation filter L_v equals the reduction factor F, the rank of F_l is at most two, meaning that F_l has at most two non-zero eigenvalues.
Proof. The rank of F_l = B_l^{-1} A_l is upper bounded by the minimum of the ranks of B_l^{-1} and A_l. Since B_l is invertible, B_l^{-1} is a full-rank matrix, leaving us with the analysis of A_l. Matrix A_l can be written as

A_l = σ_0^2 A_{D_l}^H A_{D_l},

where

A_{D_l} = D_l S.

The application of the JIDS in beamforming allows us to use the structure of the steering vector s(θ_0) = [s_0, s_1, ..., s_{M−1}]^T to go deeper into the structure of A_{D_l}. The matrix S ∈ C^{M×L_v} is a Toeplitz matrix, with its element S_{p,q}, at the p-th row, p ∈ {0, ..., M−1}, and q-th column, q ∈ {0, ..., L_v−1}, formed by

S_{p,q} = s_{p−q} for p ≥ q, and S_{p,q} = 0 otherwise. (55)

The m-th element, s_m, of the steering vector s(θ_0) ∈ C^{M×1} corresponds to the signal impinging on the m-th antenna element and is given by

s_m = e^{−j2πm(d/λ_c) sin θ_0}. (56)

Substituting (56) into (55),

S_{p,q} = e^{−j2π(p−q)(d/λ_c) sin θ_0} for p ≥ q,

or, equivalently,

S_{p,q} = e^{αp} e^{−αq} for p ≥ q,

where

α = −j2π(d/λ_c) sin θ_0.

We can then rewrite the "tall" matrix S row by row: the p-th row of S, for p ≥ L_v − 1, is the row vector [e^{−α·0}, e^{−α·1}, ..., e^{−α(L_v−1)}] multiplied by the complex scalar e^{αp}. Matrix S ∈ C^{M×L_v} has at most L_v linearly independent rows. Indeed, the M − L_v last rows of S are linearly dependent: each can be expressed as the multiplication of the row vector [e^{−α·0}, e^{−α·1}, ..., e^{−α(L_v−1)}] by a complex scalar. This means that the L_v linearly independent rows of S are precisely the first L_v rows of S. After decimation using the l-th uniform pattern, the i-th row of the resulting matrix A_{D_l}, denoted A_{D_l}(i,:), corresponds to the (iF+l)-th row of matrix S, denoted S(iF+l,:):

A_{D_l}(i,:) = S(iF+l,:), i ∈ {0, ..., D−1}.

Using L_v = F, the l-th uniform decimation pattern, l ∈ {0, ..., F−2}, yields two linearly independent rows, while the decimation pattern l = F−1 yields only one linearly independent row. Therefore, the rank of F_l is limited by the rank of A_l, which in turn is limited by the rank of A_{D_l}; consequently, the rank is at most two. This finishes the proof that F_l has at most two non-zero eigenvalues.
As a result, we can choose the F_l, l ∈ {0, ..., F−1}, with the largest trace (which is equal to the sum of all eigenvalues) instead of the one with the largest eigenvalue, without any noticeable performance degradation.
To compute the trace of F_l, we can use the fact that the trace of a matrix C = AB, with A, B, C ∈ C^{N×N}, is given by

tr(C) = Σ_{n=1}^{N} Σ_{m=1}^{N} [A]_{n,m} [B]_{m,n},

so that tr(F_l) = tr(B_l^{-1} A_l) can be computed without explicitly forming the matrix product. A representative result is depicted in Fig. 5, showing that the proposed decimation pattern selection procedure is in good agreement with the original one. The JIDS algorithm specialized for beamforming with the proposed simplification is named the JIDSB dimensionality reduction algorithm.
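The rank property behind the trace criterion is easy to verify numerically (a minimal sketch with σ_0^2 = 1 and arbitrary M, F; with L_v = F, every uniform pattern yields rank at most two, and pattern l = F − 1 yields rank one):

```python
import numpy as np

M, F = 16, 4
Lv = F                                        # interpolator length set as L_v = F
s0 = np.exp(-1j * np.pi * np.arange(M) * np.sin(np.radians(20.0)))
S = np.zeros((M, Lv), dtype=complex)
for q in range(Lv):
    S[q:, q] = s0[:M - q]                     # "tall" Toeplitz steering matrix
for l in range(F):
    p = np.zeros(M)
    p[l::F] = 1.0                             # l-th uniform decimation pattern
    A = S.conj().T @ (p[:, None] * S)         # A_l with sigma_0^2 = 1
    lam = np.linalg.eigvalsh(A)
    # all but the two largest eigenvalues vanish (rank <= 2)
    print(l, np.linalg.matrix_rank(A), np.isclose(lam[:-2].sum(), 0.0))
```

Since A_l has at most two non-zero eigenvalues, tr(F_l) is dominated by λ_max whenever B_l is well conditioned, which is what justifies the trace shortcut.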

D. Computational Complexity
In this subsection, we address the computational complexity of the JIDS and the JIDSB algorithms.
The main steps of the proposed algorithms take place in a lower-dimensional subspace, because the practical effect of the decimation matrix D_l is to select just D rows of S and R(i). Tables I and II show the computational complexity of the main parts of the JIDS and JIDSB algorithms, respectively. Fig. 6 shows how the computational complexity of the decimation pattern selection stage decreases with the simplification described in subsection IV-C, as a function of the reduction factor F. In order to assess the number of operations required for finding the eigenvector associated with the largest eigenvalue of an N × N matrix, we used the power method [20], which takes N_it iterations and involves N_it N^2 complex multiplications and N_it N(N−1) complex additions. For both algorithms (JIDS and JIDSB), we set the filter length equal to the reduction factor, L_v = F, and N_it = 5. We can see that the proposed simplification significantly reduces the number of complex operations for the decimation pattern selection stage.
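The power method referred to above can be sketched as follows (fixed N_it, as in the text; the accuracy depends on the gap between the two largest eigenvalues):

```python
import numpy as np

def power_method(F, n_it=5):
    """Approximate the dominant eigenpair of a square matrix in n_it iterations."""
    v = np.ones(F.shape[0], dtype=complex)
    v /= np.linalg.norm(v)
    for _ in range(n_it):                     # each pass: N^2 mults, N(N-1) adds
        v = F @ v
        v /= np.linalg.norm(v)
    lam = (v.conj() @ F @ v).real             # Rayleigh-quotient eigenvalue estimate
    return lam, v

lam, v = power_method(np.diag([3.0, 1.0, 0.5]))
print(round(lam, 3))                          # 3.0
```

For the JIDSB, the rank-two structure of F_l implies a large eigenvalue gap, so a small N_it is typically sufficient.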

V. NUMERICAL RESULTS AND COMPARISONS
In this section, we compare the JIDSB algorithm applied for reducing the dimensionality of the MVDR-SMI beamformer (JIDSB-SMI) with renowned rank reduction algorithms in terms of SINR loss performance, computational complexity, adapted beampattern and BER performance.
We compare the JIDSB-SMI with the PC-SMI, CSM-SMI and CG-MWF algorithms. The PC-SMI and CSM-SMI algorithms follow the same structure as the JIDSB-SMI, as explained in Section III: the received array snapshots go through the dimensionality reduction stage and are then used to estimate the lower-dimension autocorrelation matrix, which is then inverted and used in the computation of the MVDR or the MPDR filter. The rank of the PC-SMI algorithm is the number of vectors used to form the subspace that is spanned by the D eigenvectors associated with the D largest eigenvalues of the estimated autocorrelation matrix R̂ [15]. The rank of the CSM-SMI algorithm is the number of vectors used to form the subspace that is spanned by the D eigenvectors of the estimated autocorrelation matrix R̂ that maximize the cross-spectral metric [17]. The rank of the CG-MWF algorithm is the number of basis vectors used to describe the Krylov subspace of the estimated autocorrelation matrix R̂ [22]. The CG-MWF algorithm converges to the MVDR or the MPDR result without the need to invert the estimated autocorrelation matrix R̂. The rank of the JIDSB-SMI is ⌊M/F⌋ = D, which is the final length of the vectors after the dimensionality reduction stage.
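For reference, the PC dimensionality reduction used in the comparison can be sketched as follows (a minimal sketch: the rows of T are the D dominant eigenvectors of the estimated autocorrelation matrix, cf. [15]):

```python
import numpy as np

def pc_transform(R_hat, D):
    """Principal components: rows are the D dominant eigenvectors of R_hat."""
    vals, vecs = np.linalg.eigh(R_hat)        # eigenvalues in ascending order
    return vecs[:, ::-1][:, :D].conj().T      # keep the D largest

rng = np.random.default_rng(4)
X = (rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200))) / np.sqrt(2)
R_hat = X @ X.conj().T / 200
T = pc_transform(R_hat, 3)
print(T.shape, np.allclose(T @ T.conj().T, np.eye(3)))   # (3, 8) True
```

Unlike the JIDSB, this transform requires a full eigendecomposition of the M × M estimated autocorrelation matrix.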

A. SINR Loss Performance Comparison
We evaluate the SINR loss performance of the proposed JIDSB-SMI algorithm and compare it with the performance of the JIDS-SMI, PC-SMI, CSM-SMI and the CG-MWF algorithm.
In these simulations we adopt a ULA consisting of M = 64 sensor elements whose inter-element spacing is half a signal wavelength. The SOI is at 0°. We simulate two scenarios. For both scenarios, we examine the case when the amount of diagonal loading is set to γ = σ_n^2 in (9), just in order to avoid computational instabilities in the algorithms, and the case when the diagonal loading is set to γ = 10σ_n^2, which is empirically shown in [34] to be a suitable value. We compare the latter case using two different detection filters: the MVDR, when the desired signal is not present during estimation of the autocorrelation matrix, and the MPDR, when the desired signal is present during the estimation of the autocorrelation matrix and the SOI SNR equals 10 dB.
For all simulations the JIDS and JIDSB had equivalent performances, as expected, so in the following we will mention only the JIDSB for brevity. The mean SINR loss is defined as

L̄_SINR = E[SINR] / SINR_opt,

where R is the true autocorrelation matrix of noise and interference and SINR_opt is the SINR achieved by the optimal full-rank filter. The SINR is computed as

SINR = σ_0^2 |w^H T s(θ_0)|^2 / (w^H T R T^H w),

where w is the beamforming filter, T is the dimensionality reduction transformation (applied when necessary; T = I_M in the full-rank case) and the expectation is taken over the Monte Carlo runs. Figs. 7 and 10 show the SINR loss vs. sample support with a diagonal loading of σ_n^2; in this case the JIDSB had a markedly superior performance compared to the full-rank MVDR-SMI, CG-MWF, CSM-SMI and PC-SMI algorithms. Figs. 8 and 11 show the SINR loss vs. sample support with a diagonal loading of 10σ_n^2. As expected, all the algorithms improved their convergence rate when compared to the case of a diagonal loading of σ_n^2 depicted in Figs. 7 and 10. But, even with the considerable improvement of the full-rank MVDR-SMI and CG-MWF, the proposed JIDSB still had similar performance.
Next, we examine the effect of the presence of the desired signal in the observed data during estimation of the autocorrelation matrix. Figs. 9 and 12 show how the performance is affected by self-nulling when the MPDR detection filter is applied. As expected, for an SNR as high as 10 dB, there is a large degradation in the performance of all algorithms. Still, the JIDSB performed better than all the other algorithms. It should be stressed that in Fig. 9 the JIDSB showed a markedly superior performance.
Finally, we evaluate the SINR loss performance vs. the number of antennas for all the algorithms considered, for a sample support of 150 snapshots and two jammers. As can be seen from Fig. 13, as the number of antennas M grows (and consequently the dimensionality of the snapshot increases), the SINR loss performances of the CSM-SMI and PC-SMI algorithms degrade rather abruptly, while the CG-MWF and JIDSB algorithms exhibit only a slight degradation. Table VII shows the number of complex operations needed to complete all the steps described in Tables III, IV, V and VI for the algorithms considered. We used the power method (cf. Section IV-D) in order to find the eigenvector associated with the largest eigenvalue of an N × N matrix, with the number of iterations set to N_it = 5. In order to compute the inverse of an N × N matrix we used the Gauss-Jordan method, which takes (2N^3 + 3N^2 − 5N)/6 complex multiplications and additions.

Fig. 14 shows the number of complex operations required by each algorithm for the two scenarios considered in Section V-A, according to Table VII. Analysing Fig. 14 in light of Figs. 7, 8, 9, 10, 11 and 12, it can be seen that the superior SINR loss performance of the JIDSB compared to the other algorithms does not come at the expense of increased computational complexity. Indeed, the JIDSB performs better than the CSM-SMI and PC-SMI algorithms and still has lower computational complexity. Even considering the performance improvement of the CG-MWF in the case of diagonal loading, the proposed JIDSB still had similar performance, with a significantly lower computational complexity than the CG-MWF for the case of Fig. 11 (L_v = F = 2) and, for the case of Fig. 8 (L_v = F = 8), a lower computational complexity for smaller sample support and a similar one for larger sample support, as can be verified in Fig. 14.
Since the main contender of the JIDSB in terms of SINR loss performance is the CG-MWF, in Fig. 15 we further compare the computational complexity of the JIDSB algorithm with that of the CG-MWF for different decimation factors F, vs. different sizes of sample support N_s. The number of snapshots N_B used to estimate B_l ∈ C^{L_v × L_v} is the total number of snapshots available, N_B = N_s. We note from Fig. 15 that the JIDSB has a remarkably smaller computational complexity than the CG-MWF for factors F = 2 and F = 4. For F = 8, the complexity is notably lower for smaller sample support and almost the same as that of the CG-MWF for larger sample support. For F = 16, the JIDSB has a significantly increased computational complexity; this is because, for this pair of array size M and factor F, the JIDSB pre-processing steps take place in a not-so-reduced subspace. Fig. 16 depicts the adapted beampattern of all algorithms with the configuration that led to the results in Fig. 11, with a sample support of 200 snapshots. The arrows indicate the positions of the jammers. All curves were averaged over 200 Monte Carlo runs. From Fig. 16, we can see that all algorithms were able to set deep nulls at the jammers' positions. Moreover, the JIDSB beampattern follows more closely the beampatterns obtained by the CG-MWF and the full-rank MVDR-SMI, which is in good agreement with the SINR loss behaviour depicted in Fig. 11. Fig. 17 illustrates the application of the sensor array in a communication system and depicts the BER of all algorithms for a small sample support. The system model is as described in (4); the reduced-dimension snapshots z_D(i), as in (11), are fed to a minimum-distance QPSK detector, which is the optimal detector for Gaussian channels.
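The adapted-beampattern comparison of Fig. 16 can be sketched as follows. This is an illustrative NumPy example, not the paper's setup: array size, jammer angle and jammer power are assumptions. It shows how an MVDR-type weight, evaluated as B(θ) = |w^H a(θ)|^2 over a grid of angles, places a deep null at the jammer direction while keeping unit gain at the look direction.

```python
import numpy as np

def steering(M, theta):
    # Unnormalized ULA steering vector, half-wavelength spacing.
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

M = 16
theta_s, theta_j = 0.0, np.deg2rad(40)    # look and jammer angles (assumed)
a_s, a_j = steering(M, theta_s), steering(M, theta_j)
R = np.eye(M) + 1000 * np.outer(a_j, a_j.conj())  # noise + strong jammer

w = np.linalg.solve(R, a_s)
w = w / (a_s.conj() @ w)                  # MVDR constraint: w^H a_s = 1

# Beampattern in dB over a grid of arrival angles.
grid = np.deg2rad(np.linspace(-90, 90, 721))
pattern_db = [10 * np.log10(np.abs(w.conj() @ steering(M, t)) ** 2 + 1e-16)
              for t in grid]
# A deep notch appears near theta_j; gain at theta_s is 0 dB.
```

The same evaluation, applied to the weights produced by each reduced-rank algorithm, yields the family of curves compared in Fig. 16.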

D. BER Performance Comparison
Since noise and jammers are modeled as independent complex Gaussian random vectors, it is possible to compute the bit error probability P(e) semi-analytically, estimating it by an average over the simulation runs, where w_i and T_i are the detection filter and the dimensionality-reduction transformation obtained in the i-th simulation run and R is the autocorrelation matrix of the noise and interference. Fig. 17 depicts the semi-analytical BER of the described QPSK system using N_p = 1000 for a sample support of 20. On the horizontal axis we show the SNR relative to the reference detection SNR of the optimal full-rank case without interference. We can see that the JIDSB performance is slightly better than those of the full-rank MVDR-SMI and the CG-MWF; this is because the mean SINR for that scenario, with a sample support of only 20 snapshots, is slightly better for the JIDSB than for the CG-MWF.
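A minimal sketch of the semi-analytic approach is given below. It uses the standard Gray-coded QPSK result P(e) ≈ Q(√SINR) per run and averages over runs; this is an illustrative assumption, since the paper's exact expression (involving w_i, T_i and R, which determine the per-run SINR) is not reproduced here.

```python
import numpy as np
from math import erfc, sqrt

def qfunc(x):
    # Gaussian tail probability Q(x) = P(N(0,1) > x).
    return 0.5 * erfc(x / sqrt(2.0))

def semi_analytic_ber(sinrs):
    """Average per-run QPSK BER: mean over runs of Q(sqrt(SINR_i)).

    Each SINR_i is the output SINR fixed by the filters (w_i, T_i)
    estimated in the i-th simulation run, so no bit errors need to
    be counted: the Gaussian statistics give the BER in closed form.
    """
    return float(np.mean([qfunc(sqrt(g)) for g in sinrs]))

# Example: three runs whose estimated filters yield slightly
# different output SINRs (linear scale, illustrative values).
ber = semi_analytic_ber([9.0, 10.0, 11.0])
```

Because Q(·) is convex and decreasing, averaging over runs weights the low-SINR runs more heavily, which is why a small gain in mean SINR (as observed for the JIDSB at a sample support of 20) translates into a visible BER advantage.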

VI. CONCLUSION
We reported the JIDS dimensionality reduction technique, which achieves dimensionality reduction by means of a joint interpolation and decimation scheme. The JIDS design is an elegant and effective way to obtain the joint interpolation filter and decimation pattern, taking advantage of the correlation generated by the interpolation filter in order to eliminate samples while still achieving a high SINR after the decimation stage. We stress that the design is such that the interpolation filter maximizes the SINR at the output of the decimation stage, irrespective of the final application filter. We also proposed the JIDSB algorithm, a specialized version of the JIDS technique for beamforming, with new simplification procedures that resulted from the analysis of the combination of the beamforming system model with the JIDS structure. The JIDSB has a reduced number of operations and lower complexity compared with the JIDS, without degrading its performance.
We presented performance results in terms of SINR loss vs. sample support, applying the MVDR filter with and without diagonal loading and the MPDR filter with diagonal loading. The results show the superiority of the JIDSB-SMI in terms of SINR loss for the MPDR filter and for the MVDR filter without diagonal loading. With the MVDR filter with diagonal loading, the JIDSB performed similarly to the full-rank MVDR-SMI and the CG-MWF, while offering markedly lower computational complexity, especially for small sample supports.
In our understanding, the proposed JIDSB algorithm shows considerable advantages in beamforming scenarios: it is an inherently robust method, it offers similar or superior SINR loss performance compared with the full-rank MVDR-SMI and the CG-MWF, and it does so at low computational complexity.

José Mauro Fortes was born in Rio de Janeiro, Brazil, in 1950. He received the B.S. and M.Sc. degrees in electrical engineering (telecommunications) from the Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), in 1973 and 1976, respectively, and the M.Sc. and Ph.D. degrees in electrical engineering from Stanford University, USA, in 1978 and 1980, respectively. He returned to PUC-Rio in June 1980 and was a Professor with the Electrical Engineering Department until 2018, working at the University Center for Telecommunications Studies (CETUC), where he was the Head of the Communication Systems Group from 2000 to 2018. He is currently an Independent Consultant in satellite communications. In 1992, while on sabbatical leave, he was a Researcher with the General Electric Research and Development Center, Schenectady, USA. He has published several articles in national and international journals and conferences. He has coordinated a number of research projects and has been a Consultant in satellite communications for several private companies and telecommunications agencies.
His main research interests include communication theory, satellite communications, estimation theory, and digital transmission. For two terms (from 1996 to 2000), he was the President of the Brazilian Telecommunications Society (an IEEE ComSoc Sister Society), and for 13 years he was the Vice-Chairman of the ITU-R Study Group 4 (Fixed Satellite Service).

His areas of interest include statistical signal processing, communication theory, digital transmission and signal processing for communications, radar and electronic warfare.
He is a member of the IEEE. His areas of interest include communication theory, digital transmission and signal processing for communications, areas in which he has published more than 200 papers in refereed journals and conferences. He was a co-organizer of the Session on Recent Results for the IEEE Workshop on Information Theory, 1992, Salvador, Brazil. He also served as Technical Program Co-Chairman for the IEEE Global Telecommunications Conference (Globecom'99) held in Rio de Janeiro and as a Technical Program Member for several national and international conferences. He was in office for three terms on the Board of Directors of the Brazilian Communications Society (SBrT). He served as a member of its Advisory Council for four terms and as Associate Editor of the society's journal, the Journal of Communication and Information Systems.
He is currently an Emeritus Member of SBrT.