Recently, it has been shown that fusing the estimates of a set of sparse recovery algorithms yields an estimate better than the best estimate in the set, especially when the number of measurements is very limited. Though these schemes provide better sparse signal recovery performance, their higher computational requirements make them less attractive for low-latency applications. To alleviate this drawback, in this paper we develop a progressive fusion based scheme for low-latency applications in compressed sensing. In progressive fusion, the estimates of the participating algorithms are fused progressively according to the availability of estimates, which depends on the computational complexity of the participating algorithms and, in turn, on their latency requirements. Unlike other fusion algorithms, the proposed progressive fusion algorithm provides quick interim results and successive refinements during the fusion process, which is highly desirable in low-latency applications. We analyse the developed scheme by providing sufficient conditions for improvement of CS reconstruction quality, and show its practical efficacy through numerical experiments using synthetic and real-world data.
Low-resolution quantization has been introduced in the literature to reduce the power consumption of massive MIMO and millimeter-wave MIMO systems. Here, we analyze the bit error rate (BER) performance of quantized uplink massive MIMO employing few-bit-resolution ADCs. Considering zero-forcing (ZF) detection, we derive a signal-to-interference, quantization and noise ratio (SIQNR) to obtain an analytical BER approximation for coarsely quantized M-QAM massive MIMO systems, using a linear quantization model. The proposed expression is a function of the quantization resolution in bits. We further numerically investigate the effects of different quantization levels, from 1 bit to 4 bits, on the BER of three modulation types: QPSK, 16-QAM, and 64-QAM. Both uniform and non-uniform quantizers are employed in our simulations. Monte Carlo simulation results reveal that our approximate formula gives a tight upper bound on the BER performance of b-bit-resolution quantized systems using non-uniform quantizers, whereas the use of uniform quantizers causes lower performance. We also found only a small BER performance degradation in coarsely quantized systems, for example 2-3-bit QPSK and 3-4-bit 16-QAM, compared to the full-precision (unquantized) case. Moreover, this performance degradation can be compensated by increasing the number of antennas at the BS. (C) 2019 Published by Elsevier B.V.
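As a minimal illustration of the quantization step analyzed above (not the paper's full massive-MIMO simulation), the sketch below applies a b-bit uniform mid-rise quantizer to a real-valued signal; the function name and parameters are our own. For a uniformly distributed input, the distortion closely follows the classical step²/12 rule and shrinks as the resolution grows.

```python
import numpy as np

def uniform_quantize(x, bits, max_amp=1.0):
    """Mid-rise uniform quantizer with 2**bits levels over [-max_amp, max_amp]."""
    levels = 2 ** bits
    step = 2.0 * max_amp / levels
    clipped = np.clip(x, -max_amp, max_amp - 1e-9)  # keep the top sample in range
    # round down to a cell boundary, then report the cell midpoint
    return (np.floor(clipped / step) + 0.5) * step

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 100_000)
mse = {b: float(np.mean((uniform_quantize(x, b) - x) ** 2)) for b in (1, 2, 3, 4)}
```

The distortion for 1-bit quantization is near 1/12 (step size 1), and each extra bit reduces it roughly fourfold.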
In this paper, we focus on the solution of online optimization problems that arise often in signal processing and machine learning, in which we have access to streaming sources of data. We discuss algorithms for online optimization based on the prediction-correction paradigm, both in the primal and dual space. In particular, we leverage the typical regularized least-squares structure appearing in many signal processing problems to propose a novel and tailored prediction strategy, which we call extrapolation-based. By using tools from operator theory, we then analyze the convergence of the proposed methods as applied both to primal and dual problems, deriving an explicit bound for the tracking error, that is, the distance from the time-varying optimal solution. We further discuss the empirical performance of the algorithm when applied to signal processing, machine learning, and robotics problems.
In this paper, we investigate the problem of recovering the frequency components of a mixture of K complex sinusoids from a random subset of N equally spaced time-domain samples. Because of the random subset, the samples are effectively non-uniform. Moreover, the frequency values of the K complex sinusoids are assumed to vary continuously within a given range. For this problem, we propose a two-step strategy: (i) we first lift the incomplete set of uniform samples (unavailable samples are treated as missing data) into a structured matrix with missing entries, which is potentially low-rank; then (ii) we complete the matrix using a weighted nuclear norm minimization problem. We call the method weighted lifted-structured (WLi) low-rank matrix recovery. Our approach can be applied to a range of matrix structures, such as Hankel and double-Hankel, among others, and provides improvement over existing unweighted schemes such as EMaC and DEMaC. We provide theoretical guarantees for the proposed method, as well as numerical simulations in both noiseless and noisy settings. Both the theoretical and the numerical results confirm the superiority of the proposed approach.
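Step (i), lifting the incomplete sample set into a structured matrix with missing entries, can be sketched as follows for the single-Hankel case (function and variable names are ours; the actual WLi method then completes the matrix via weighted nuclear norm minimization). A fully observed Hankel matrix built from K sinusoids has rank K, which is what the completion step exploits.

```python
import numpy as np

def lift_to_hankel(samples, mask, rows):
    """Lift a length-N sample vector into a rows x (N - rows + 1) Hankel
    matrix H[i, j] = samples[i + j]; entries whose underlying sample is
    unobserved (mask False) are marked np.nan, to be filled by completion."""
    n = len(samples)
    H = np.empty((rows, n - rows + 1), dtype=complex)
    for i in range(rows):
        for j in range(n - rows + 1):
            H[i, j] = samples[i + j] if mask[i + j] else np.nan
    return H

# one complex sinusoid at N uniform sample points, two samples missing
N = 8
t = np.arange(N)
x = np.exp(2j * np.pi * 0.2 * t)
mask = np.ones(N, dtype=bool)
mask[[2, 5]] = False
H = lift_to_hankel(x, mask, rows=4)
```

With the full mask, the lifted matrix of this single sinusoid has rank one, illustrating the low-rank structure the method relies on.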
The Bayesian information criterion (BIC) is one of the most well-known criteria used for model order estimation in linear regression models. However, in its popular form, BIC is inconsistent as the noise variance tends to zero when the sample size is small and fixed. Several modifications of the original BIC have been proposed that take high-SNR consistency into account, but it has recently been observed that the performance of the high-SNR forms of BIC depends strongly on the scaling of the data. This scaling problem is a byproduct of the data-dependent penalty design, which generates irregular penalties when the data are scaled and often leads to greater underfitting or overfitting losses when the noise variance is very small or very large. In this paper, we present a new form of the BIC for order selection in linear regression models where the parameter vector dimension is small compared to the sample size. The proposed criterion eliminates the scaling problem and at the same time is consistent in both the large-sample and high-SNR regimes.
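For reference, the classical (popular) form of BIC discussed above can be sketched as follows; this is the standard criterion, not the proposed scale-invariant variant, and the synthetic example is our own.

```python
import numpy as np

def bic(y, X):
    """Classical BIC for a linear regression: N log(RSS/N) + k log N,
    with k the number of estimated regression parameters."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(1)
n = 500
t = np.linspace(-1.0, 1.0, n)
y = 1.0 + 2.0 * t - 3.0 * t ** 2 + 0.02 * rng.standard_normal(n)  # true order 2
scores = [bic(y, np.vander(t, order + 1, increasing=True)) for order in range(6)]
best_order = int(np.argmin(scores))
```

In this large-sample, moderate-noise regime the criterion selects the true polynomial order; the scaling issue the abstract targets appears when the noise variance becomes extreme.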
This paper presents a notch filter based on recursive least-squares sinusoidal modelling. This gives analytical insight both into least-squares modelling of sine waves in noise and into the use of constrained notch filters. In particular, the derived filter corresponds to a commonly used notch filter with constrained poles and zeros.
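A commonly used constrained notch filter of the kind referred to has transfer function H(z) = (1 - 2 cos(w0) z^-1 + z^-2) / (1 - 2 rho cos(w0) z^-1 + rho^2 z^-2), with zeros on the unit circle at +-w0 and poles constrained to the same radial lines at radius rho < 1. A direct-form sketch (our own implementation, not the paper's recursive least-squares derivation):

```python
import numpy as np

def constrained_notch(x, w0, rho):
    """Second-order notch: zeros on the unit circle at +-w0, poles at radius rho."""
    b = np.array([1.0, -2.0 * np.cos(w0), 1.0])            # numerator (zeros)
    a = np.array([1.0, -2.0 * rho * np.cos(w0), rho ** 2])  # denominator (poles)
    y = np.zeros_like(x)
    for n in range(len(x)):
        acc = b[0] * x[n]
        if n >= 1:
            acc += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            acc += b[2] * x[n - 2] - a[2] * y[n - 2]
        y[n] = acc
    return y

w0 = 2 * np.pi * 0.1
t = np.arange(2000)
x = np.sin(w0 * t)                        # sinusoid exactly at the notch
y = constrained_notch(x, w0, rho=0.95)
residual = float(np.max(np.abs(y[500:])))  # after transients die out
```

A tone at the notch frequency is removed entirely once the transient (which decays like rho^n) has died out, while frequencies away from the notch pass with near-unity gain; rho controls the notch bandwidth.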
The intersecting cortical model (ICM) is a neural-network-based model designed especially for image processing. It was derived from several visual cortex models and is essentially their intersection, i.e. the elements common to these models. The theoretical foundation of the ICM is given, and it is shown how the ICM can be derived as a reduced set of equations of the pulse-coupled neural network based on the models proposed by Eckhorn and Reitboeck. Tests of the ICM are presented: one on a series of images of an aircraft moving in the sky, two on car detection, and one on preparations for underground nuclear explosions. In these examples, the ICM is shown to be useful for imagery change detection: an aircraft moving against a homogeneous background without precise geometric matching; a car on a road; two cars moving in an urban setting without precise geometric matching; and a linear structure in a complex background. The ICM can be used when the moving objects are not too small and the background is not too difficult. Changes involving larger linear structures can be detected even if the background is not homogeneous.
During recent decades, there has been a substantial development in optimal mass transport theory and methods. In this work, we consider multi-marginal problems wherein only partial information of each marginal is available, a common setup in many inverse problems in, e.g., remote sensing and imaging. By considering an entropy regularized approximation of the original transport problem, we propose an algorithm corresponding to a block-coordinate ascent of the dual problem, where Newton's algorithm is used to solve the sub-problems. In order to make this computationally tractable for large-scale settings, we utilize the tensor structure that arises in practical problems, allowing for computing projections of the multi-marginal transport plan using only matrix-vector operations of relatively small matrices. As illustrating examples, we apply the resulting method to tracking and barycenter problems in spatial spectral estimation. In particular, we show that the optimal mass transport framework allows for fusing information from different time steps, as well as from different sensor arrays, also when the sensor arrays are not jointly calibrated. Furthermore, we show that by incorporating knowledge of underlying dynamics in tracking scenarios, one may arrive at accurate spectral estimates, as well as faithful reconstructions of spectra corresponding to unobserved time points.
An efficient algorithm for determining the unique minimal and stable realization of a window of Markov parameters and covariances is derived. The main difference compared to the Q-Markov COVER theory is that here we let the variance of the input noise be a variable, thus avoiding a certain data-consistency criterion. First, it is shown that maximizing the input variance of the realization over all interpolants yields a minimal-degree solution, a result closely related to maximum entropy. Second, the state-space approach of the Q-Markov COVER theory is used to analyze the stability and structure of the realization by straightforward application of familiar realization theory concepts; in particular, the occurrence of singular spectral measures is characterized.
This paper studies the possibility of upper bounding the position error for range-based positioning algorithms in wireless sensor networks. In this study, we argue that in certain situations when the measured distances between sensor nodes have positive errors, e.g., in non-line-of-sight (NLOS) conditions, the target node is confined to a closed bounded convex set (a feasible set) which can be derived from the measurements. Then, we formulate two classes of geometric upper bounds with respect to the feasible set. If an estimate is available, either feasible or infeasible, the position error can be upper bounded as the maximum distance between the estimate and any point in the feasible set (the first bound). Alternatively, if an estimate given by a positioning algorithm is always feasible, the maximum length of the feasible set is an upper bound on position error (the second bound). These bounds are formulated as nonconvex optimization problems. To progress, we relax the nonconvex problems and obtain convex problems, which can be efficiently solved. Simulation results show that the proposed bounds are reasonably tight in many situations, especially for NLOS conditions.
This paper proposes two novel iterative data-adaptive spectral estimation techniques for blood velocity estimation using medical ultrasound scanners. The techniques make no assumption on the sampling pattern of the emissions or the depth samples, allowing for duplex mode transmissions where B-mode images are interleaved with the Doppler emissions. Furthermore, the techniques are shown, using both simplified and more realistic Field II simulations as well as in vivo data, to outperform current state-of-the-art techniques, allowing for accurate estimation of the blood velocity spectrum using only 30% of the transmissions, thereby allowing for the examination of two separate vessel regions while retaining an adequate updating rate of the B-mode images. In addition, the proposed methods also allow for more flexible transmission patterns, as well as exhibit fewer spectral artifacts as compared to earlier techniques.
The problem of synthesizing multivariate stationary series Y[n] = (Y_1[n], ..., Y_p[n])^T, n ∈ Z, with prescribed non-Gaussian marginal distributions and a targeted covariance structure is addressed. The focus is on constructions based on a memoryless transformation Y_p[n] = f_p(X_p[n]) of a multivariate stationary Gaussian series X[n] = (X_1[n], ..., X_p[n])^T. The mapping between the targeted covariance and that of the Gaussian series is expressed via Hermite expansions. The various choices of the transforms f_p for a prescribed marginal distribution are discussed in a comprehensive manner. The interplay between the targeted marginal distributions, the choice of the transforms f_p, and the resulting reachability of the targeted covariance is discussed theoretically and illustrated with examples. Also, an original practical procedure guaranteeing positive definiteness of the transformed covariance, at the price of approximating the targeted covariance, is proposed, based on a simple and natural modification of the popular circulant matrix embedding technique. Applications of the proposed methodology are also discussed in the context of network traffic modeling. MATLAB codes implementing the proposed synthesis procedure are publicly available at http://www.hermir.org.
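One concrete instance of such a memoryless transform f_p is the classical inverse-CDF construction f(x) = F^{-1}(Phi(x)), which maps standard Gaussian samples to any prescribed continuous marginal F. The sketch below uses an exponential marginal as an illustrative choice of ours; the paper treats general marginals and the covariance reachability question via Hermite expansions.

```python
import math
import random

def gauss_to_exponential(x, lam=1.0):
    """Memoryless map f(x) = F^{-1}(Phi(x)) sending N(0,1) samples to an
    Exponential(lam) marginal: Phi is the standard normal CDF and F^{-1}
    the exponential quantile function."""
    u = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # Phi(x) in (0, 1)
    u = min(u, 1.0 - 1e-12)                          # guard the log below
    return -math.log(1.0 - u) / lam

random.seed(0)
ys = [gauss_to_exponential(random.gauss(0.0, 1.0), lam=2.0) for _ in range(50_000)]
mean_y = sum(ys) / len(ys)   # should be close to 1/lam = 0.5
```

The transform is applied sample by sample, so any temporal dependence of the Gaussian series is inherited (in transformed form) by the output series, which is exactly why matching the targeted covariance requires the Hermite-expansion machinery.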
An approach for the estimation of the frequencies and damping factors of exponentially damped cisoids (complex sinusoids) is presented. The method may be seen as an extension of the method of backward linear prediction and singular value decomposition of Kumaresan and Tufts to the second-order statistics domain. The proposed estimator is interpreted as a high-order Yule-Walker (HOYW) method using a data-based covariance matrix. The HOYW method is analysed at high SNR, where closed-form expressions for the accuracy of the estimates are derived. Using Monte Carlo simulations, the HOYW method is applied to data consisting of one and two damped cisoids in additive white noise. The simulation results are compared with those of the Kumaresan and Tufts method, with the Cramer-Rao bound, and with the derived theoretical results. The method is not statistically efficient, but the comparison shows that the HOYW method outperforms the method of Kumaresan and Tufts in terms of accuracy versus algorithmic complexity, and in terms of precision in the cases considered. These properties make the method suitable for providing fast initial estimates for nonlinear search methods.
This paper analyzes the frequency error variance of a low-complexity single-tone frequency estimator based on sample correlations of the input data. In the high-SNR scenario, it is analytically shown that the accuracy of a properly tuned algorithm is nearly optimal, i.e., it nearly attains the Cramer-Rao lower bound. At low SNR the statistical efficiency of the algorithm is degraded, but it is analytically proven that, for a large number of samples, the error variance attains the lower bound for this class of estimators.
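The simplest member of this estimator class takes the frequency from the phase of the lag-1 sample correlation; the sketch below is this minimal single-lag variant (our own construction; the tuned algorithm analyzed in the paper may combine several correlation lags).

```python
import numpy as np

def lag1_freq_estimate(x):
    """Estimate a single complex tone's frequency from the phase of the
    lag-1 sample correlation r(1) = sum_n x[n] conj(x[n-1])."""
    r1 = np.sum(x[1:] * np.conj(x[:-1]))
    return float(np.angle(r1))   # radians per sample

rng = np.random.default_rng(2)
n = np.arange(1000)
w_true = 0.3
noise = 0.01 * (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size))
x = np.exp(1j * (w_true * n + 0.7)) + noise
w_hat = lag1_freq_estimate(x)
```

At high SNR the phase of r(1) concentrates tightly around the true frequency, which is the regime where the paper shows near-optimal accuracy.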
In this paper the autoregressive (AR) spectral estimator is analyzed in the case of noisy sinusoidal data. An expression for the large-sample normalized variance is derived and studied in detail for increasing model order. In particular, a very simple formula is derived for the asymptotic (in both number of observed data and model order) normalized variance, which confirms a conjecture made by Sakai.
Finite impulse response filters for the prediction of polynomial signals are considered. An expression for the asymptotic noise gain (as the filter length increases without bound) is derived. It is shown that the asymptotic noise gain only depends on the polynomial order - in particular, it is independent of the prediction horizon. It is also shown that the noise gain forms a non-increasing sequence for increasing filter lengths. Numerical results that lend support to the theoretical findings are included.
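A minimal sketch of such a predictor can be built from the polynomial-reproduction constraints: an FIR filter h reproduces x[n + horizon] exactly for all polynomials up to the given order iff its moments satisfy sum_k h[k] (-k)^m = horizon^m for m = 0..order, and the minimum-norm solution of these constraints minimizes the noise gain sum_k h[k]^2. This construction and its names are ours, consistent with (but not taken from) the paper.

```python
import numpy as np

def poly_predictor(length, order, horizon):
    """Minimum-noise-gain FIR coefficients h such that sum_k h[k] x[n-k]
    exactly reproduces x[n+horizon] for every polynomial of the given order."""
    k = np.arange(length)
    # moment constraints: rows are (-k)^m for m = 0..order
    A = np.vander(-k.astype(float), order + 1, increasing=True).T
    b = np.array([float(horizon) ** m for m in range(order + 1)])
    # lstsq returns the minimum-norm solution -> minimum noise gain sum h^2
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    return h

h = poly_predictor(length=5, order=1, horizon=1)
t = np.arange(20, dtype=float)
x = 3.0 + 2.0 * t                                # first-order polynomial
pred = sum(h[k] * x[10 - k] for k in range(5))   # predicts x[11]
noise_gain = float(np.sum(h ** 2))
```

Consistent with the paper's non-increasing-sequence result, lengthening the filter can only lower (never raise) the minimum noise gain, since a longer filter contains every shorter one as a special case.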
In this paper a novel distributed algorithm derived from the Generalized Likelihood Ratio is proposed for real time change detection using sensor networks. The algorithm is based on a combination of recursively generated local statistics and a global consensus strategy, and does not require any fusion center. The problem of detection of an unknown change in the mean of an observed random process is discussed and the performance of the algorithm is analyzed in the sense of a measure of the error with respect to the corresponding centralized algorithm. The analysis encompasses asymmetric constant and randomly time varying matrices describing communications in the network, as well as constant and time varying forgetting factors in the underlying recursions. An analogous algorithm for detection of an unknown change in the variance is also proposed. Simulation results illustrate characteristic properties of the algorithms including detection performance in terms of detection delay and false alarm rate. They also show that the theoretical analysis connected to the problem of detecting change in the mean can be extended to the problem of detecting change in the variance.
Recently, state-space subspace system identification (4SID) has been suggested as an alternative to the more traditional prediction error system identification. The aim of this paper is to analyze the connections between these two different approaches to system identification. The conclusion is that 4SID can be viewed as a linear regression multistep-ahead prediction error method with certain rank constraints. This allows us to describe 4SID methods within the standard framework of system identification and linear regression estimation. For example, this observation is used to compare different cost-functions which occur rather implicitly in the ordinary framework of 4SID. From the cost-functions, estimates of the extended observability matrix are derived and related to previous work. Based on the estimates of the observability matrix, the asymptotic properties of two pole estimators, namely the shift invariance method and a weighted subspace fitting method, are analyzed. Expressions for the asymptotic variances of the pole estimation error are given. From these expressions, difficulties in choosing user-specified parameters are pointed out. Furthermore, it is found that a row-weighting in the subspace estimation step does not affect the pole estimation error asymptotically.
In this paper, the problem of least-squares (LS) preamble-based channel estimation in multicarrier systems under a deterministic channel assumption is revisited. The analysis is presented for the filterbank multicarrier offset quadrature amplitude modulation (FBMC/OQAM) system, although the derived results hold unchanged for the cyclic prefixed orthogonal frequency division multiplexing (CP-OFDM) system, as well. Assuming independent noise disturbances per subcarrier, we show that frequency-domain smoothing techniques can be used to improve the mean square error (MSE) performance. Depending on the number of subcarriers, the choice of smoothing may be different. We also present the time-domain formulation of the frequency-domain smoothing. Based on this formulation, we show that frequency-domain smoothing techniques can minimize the variance of the LS channel estimator, while they can further reduce its MSE by trading bias for variance. Simulations are given to support the derived results.
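The variance-reduction effect of frequency-domain smoothing can be illustrated with a toy sketch (our own construction, not the paper's FBMC/OQAM setup): when the channel's delay spread is short, its frequency response varies slowly across subcarriers, so averaging neighboring LS estimates suppresses the independent per-subcarrier noise at the cost of a small bias.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sub = 256

# smooth (short delay spread) channel frequency response over the subcarriers
taps = rng.standard_normal(4) + 1j * rng.standard_normal(4)
H = np.fft.fft(taps, n_sub)

# per-subcarrier LS estimate = true response + independent noise
noise = 0.3 * (rng.standard_normal(n_sub) + 1j * rng.standard_normal(n_sub))
H_ls = H + noise

def smooth(est, half_width):
    """Centered circular moving average across subcarriers (a minimal
    smoother; practical designs weight neighbors more carefully when
    trading bias for variance)."""
    n = len(est)
    kernel = np.zeros(n)
    kernel[np.arange(-half_width, half_width + 1)] = 1.0 / (2 * half_width + 1)
    return np.fft.ifft(np.fft.fft(est) * np.fft.fft(kernel))

mse_ls = float(np.mean(np.abs(H_ls - H) ** 2))
mse_sm = float(np.mean(np.abs(smooth(H_ls, 2) - H) ** 2))
```

Averaging over five subcarriers cuts the noise variance roughly fivefold while the bias stays small, so the smoothed estimate has a clearly lower MSE, mirroring the bias-variance trade discussed above.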
Filter bank-based multicarrier communications (FBMC) have recently attracted increased interest in both wired (e.g., xDSL, PLC) and wireless (e.g., cognitive radio) applications, due to their enhanced flexibility, higher spectral efficiency, and better spectral containment compared to conventional OFDM. A particular type of FBMC, the so-called FBMC/OQAM or OFDM/OQAM system, consisting of pulse shaped OFDM carrying offset QAM (OQAM) symbols, has received increasing attention due to, among other features, its higher spectral efficiency and implementation simplicity. It suffers, however, from an imaginary inter-carrier/inter-symbol interference that complicates signal processing tasks such as channel estimation. This paper focuses on channel estimation for OFDM/OQAM systems based on a known preamble. A review of the existing preamble structures and associated channel estimation methods is given, for both single- (SISO) and multiple-antenna (MIMO) systems. The various preambles are compared via simulations in both mildly and highly frequency selective channels.
A significant challenge in spectrum sensing is to lower the signal-to-noise ratio needed to detect the presence of primary users when the noise level may also be unknown. To meet this challenge, multi-antenna-based techniques are more efficient than other algorithms. In a typical compact multi-antenna system, the small inter-element spacing makes the mutual coupling between the thermal noises of adjacent receivers significant. In this paper, unlike most spectrum sensing algorithms, which assume spatially uncorrelated noise, we allow the noises on adjacent antennas to have arbitrary correlations. Also, in contrast to some other algorithms, no prior assumption is made on the temporal properties of the signals. We exploit low-rank/sparse matrix decomposition algorithms to obtain estimates of the noise and received source covariance matrices. Given these estimates, a semi-constant false alarm rate (S-CFAR) detector, in which the probability of false alarm is constant over scalings of the noise covariance matrix, is proposed to examine the presence of primary users. To analyze the efficiency of our algorithm, we derive an approximate probability of detection. Numerical simulations show that the proposed algorithm consistently and considerably outperforms state-of-the-art multi-antenna-based spectrum sensing algorithms.
In 1948 Wold introduced an isometric isomorphism between a Hilbert (linear) space formed from the weighted shifts of a numerical sequence and a suitable Hilbert space of values of a second-order stochastic sequence. Motivated by a recent resurrection of the idea in the context of cyclostationary sequences and processes, we present the details of the Wold isomorphism between cyclostationary stochastic sequences and cyclostationary numerical sequences. We show how Hilbert-space representations of cyclostationary sequences are interpreted in the case of numerical CS sequences.
This paper focuses on the problem of reconstructing low-rank matrices from underdetermined measurements using alternating optimization strategies. We endeavour to combine an alternating least-squares based estimation strategy with ideas from the alternating direction method of multipliers (ADMM) to recover low-rank matrices with linearly parameterized structures, such as Hankel matrices. The use of ADMM helps to improve the estimate in each iteration through its capability of incorporating information about the direction of estimates achieved in previous iterations. We show that merging these two alternating strategies leads to better performance and lower computation time than the existing alternating least squares (ALS) strategy. The improved performance is verified via numerical simulations with varying sampling rates and in real applications.
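For context, a plain ALS completion step, without the ADMM-style direction information the paper adds on top of it, can be sketched as follows (our own minimal implementation on an unstructured low-rank example).

```python
import numpy as np

def als_complete(M, mask, rank, n_iter=100, reg=1e-6):
    """Alternating least squares for low-rank completion: with V fixed,
    solve for each row of U on the observed entries, then swap roles."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    eye = reg * np.eye(rank)
    for _ in range(n_iter):
        for i in range(m):
            c = mask[i, :]
            U[i] = np.linalg.solve(V[c].T @ V[c] + eye, V[c].T @ M[i, c])
        for j in range(n):
            c = mask[:, j]
            V[j] = np.linalg.solve(U[c].T @ U[c] + eye, U[c].T @ M[c, j])
    return U @ V.T

# rank-2 ground truth with roughly 60% of entries observed
rng = np.random.default_rng(4)
A = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
mask = rng.random((20, 20)) < 0.6
X = als_complete(np.where(mask, A, 0.0), mask, rank=2)
rel_err = float(np.linalg.norm(X - A) / np.linalg.norm(A))
```

Each inner solve is a small regularized least-squares problem; the paper's contribution is to accelerate and stabilize exactly this kind of alternation, and to handle the linear (e.g., Hankel) structure constraints that plain ALS ignores.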
The line spectral frequencies (LSFs) are commonly used for the linear predictive/autoregressive model in speech and audio coding. Recently, probability density function (PDF)-optimized vector quantization (VQ) has been studied intensively for quantization of LSF parameters. In this paper, we study the VQ performance bound of the LSF parameters. The LSF parameters are transformed to the Delta LSF domain and the underlying distribution of the Delta LSF parameters is modeled by a Dirichlet mixture model (DMM) with a finite number of mixture components. The quantization distortion, in terms of the mean squared error (MSE), is calculated with high rate theory. For LSF quantization, the mapping relation between the perceptually motivated log spectral distortion (LSD) and the MSE is empirically approximated by a polynomial. With this mapping function, the minimum required bit rate (an empirical lower bound) for transparent coding of the LSF under DMM modeling is derived.
Efficient quantization of the linear predictive coding (LPC) parameters plays a key role in parametric speech coding. The line spectral frequency (LSF) representation of the LPC parameters has found its applications in speech model quantization. In practical implementations of vector quantization (VQ), probability density function optimized VQ has been shown to be more efficient than VQ based on training data. In this paper, we represent the LSF parameters in a unit vector form, which has directional characteristics. The underlying distribution of this unit vector variable is modeled by a von Mises-Fisher mixture model (VMM). An optimal inter-component bit allocation strategy is proposed based on high rate theory, and a distortion-rate (D-R) relation is derived for the VMM-based VQ (VVQ). Experimental results show that the VVQ outperforms the recently introduced Dirichlet mixture model-based VQ and the conventional Gaussian mixture model-based VQ in terms of modeling performance and D-R relation.
We address some theoretical guarantees for Schatten-p quasi-norm minimization (p ∈ (0,1]) in recovering low-rank matrices from compressed linear measurements. First, using null space properties of the measurement operator, we provide a sufficient condition for exact recovery of low-rank matrices. This condition guarantees unique recovery of matrices of rank equal to or larger than what is guaranteed by nuclear norm minimization. Second, this sufficient condition leads to a theorem proving that all restricted isometry property (RIP)-based sufficient conditions for ℓ_p quasi-norm minimization generalize to Schatten-p quasi-norm minimization. Based on this theorem, we provide a few RIP-based recovery conditions.
Suppose that a solution x to an underdetermined linear system b = Ax is given, and that x is approximately sparse, meaning that it has a few large components while the remaining entries are small. However, the total number of nonzero components of x is large enough to violate any condition for the uniqueness of the sparsest solution. On the other hand, if only the dominant components are considered, then the uniqueness conditions are satisfied. One intuitively expects that x should not be far from the true sparse solution x0. It has already been shown that this intuition is correct, by providing upper bounds on ||x-x0|| that are functions of the magnitudes of the small components of x but independent of x0. In this paper, we tighten one of the available bounds on ||x-x0|| and extend this result to the case where b is perturbed by noise. Additionally, we generalize the upper bounds to the low-rank matrix recovery problem.
In this letter, we discuss the problem of unfolding the frequency spectrum of undersampled wideband data. The problem is relevant to state-of-the-art radio-frequency measurement systems, which capture repetitive waveforms at a sampling rate that violates the Nyquist constraint. The problem is presented in a compact form through the inclusion of a complex operator called the CN operator. The resulting formulation eliminates the ambiguity caused by folded frequency spectra, in particular spectral lines located at multiples of the Nyquist frequency, which are captured with erroneous amplitude and phase values.
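The folding geometry underlying the problem can be sketched numerically (our own helper, not the CN-operator formulation): a tone sampled below its Nyquist rate appears at its image in the first Nyquist zone, and unfolding means inverting this many-to-one map.

```python
def folded_frequency(f, fs):
    """Apparent (aliased) frequency of a real tone at f Hz sampled at fs Hz:
    fold f into the first Nyquist zone [0, fs/2]. Lines landing exactly at
    multiples of fs/2 are the ambiguous cases discussed in the letter."""
    f = f % fs              # position within one spectral period
    return min(f, fs - f)   # reflect the upper half of the period

# a 7.3 GHz line sampled at 2 GHz appears at 0.7 GHz
alias = folded_frequency(7.3e9, 2.0e9)
```

Many distinct input frequencies share the same folded image, which is precisely the ambiguity that an unfolding operator must resolve.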
Despite being a strong candidate also for future cellular networks, OFDM’s main drawback is its high peak-to-average power ratio. This requires transmitters to deploy high-dynamic-range power amplifiers, which are difficult to manufacture and therefore expensive. It is particularly problematic in future IoT-based 5G networks, in which many presumably low-cost, low-power devices transmit data to a high-quality receiver. In order to make such transmitters as simple and cheap as possible, we consider receiver-side nonlinearity compensation and symbol detection. In particular, we study the problem of joint maximum-likelihood estimation-detection of channel and nonlinearity parameters and symbols using frequency-domain comb-type pilots in multi-path fading OFDM systems, and propose an iterative optimization algorithm to solve it. We also calculate the Cramér-Rao lower bound for a general type of memoryless nonlinearity and show that the proposed algorithm attains it at high signal-to-noise ratios. We then show by numerical evaluations that the algorithm converges quickly and that the difference between its performance and that of the genie-aided scenario, with perfect a priori knowledge of the nonlinearity parameters and channel, is negligible.
We consider the problem of model selection for high-dimensional linear regressions in the context of support recovery with multiple measurement vectors available. Here, we assume that the regression coefficient vectors have a common support and the elements of the additive noise vector are potentially correlated. Accordingly, to estimate the support, we propose a non-negative Lasso estimator that is based on covariance matching techniques. We provide deterministic conditions under which the support estimate of our method is guaranteed to match the true support. Further, we use the extended Fisher information criterion to select the tuning parameter in our non-negative Lasso. We also prove that the extended Fisher information criterion can find the true support with probability one as the number of rows in the design matrix grows to infinity. The numerical simulations confirm that our support estimate is asymptotically consistent. Finally, the simulations also show that the proposed method is robust to high correlation between columns of the design matrix.
This work studies dynamic scene deblurring (DSD) of a single photograph, mainly motivated by the very recent DeblurGAN method. It is discovered that training the generator of DeblurGAN alone unexpectedly results in both regular checkerboard effects and irregular block color excursions. In this paper, efforts are made on two fronts toward a more effective and robust adversarial learning approach to DSD. On the one hand, a kind of opposite-channel-based discriminative prior is developed, improving the deblurring performance of DeblurGAN without additional computational burden in the testing phase. On the other hand, a computationally more efficient and architecturally more robust auto-encoder is developed as a substitute for the original generator in DeblurGAN, promoting DeblurGAN to a new state-of-the-art method for DSD. For brevity, the proposed approach is dubbed DeblurGAN+. Experimental results on the benchmark GoPro dataset validate that DeblurGAN+ achieves more than 1.5 dB improvement over DeblurGAN in terms of PSNR when trained on the same amount of data. More importantly, the results on realistic non-uniformly blurred images demonstrate that DeblurGAN+ is more effective than DeblurGAN, as well as most variational model-based methods, in terms of both blur removal and detail recovery.
The maximum likelihood estimate in factor analysis is typically obtained as the solution of the stationary point equation of the likelihood function. This type of derivation suffers from two problems: it is rather cumbersome and, in fact, it is incomplete as it does not include a proof that the so-obtained estimate is indeed a global maximum point of the likelihood function. In this note we present a simple algebraic derivation of the maximum likelihood estimate in factor models with spherical noise that applies to the general complex-valued data case.
We discuss the intriguing notion of statistical superefficiency in a straightforward manner, with a minimum of formality. We point out that for any given parameter estimator there exist other estimators which have a strictly lower asymptotic variance and hence are statistically more efficient than the former. In particular, if the former estimator was statistically efficient (in the sense that its asymptotic variance was equal to the Cramer-Rao bound), then the latter estimators could be called "superefficient". Among other things, the phenomenon of superefficiency implies that asymptotically there exists no uniformly minimum-variance parameter estimator.
For compressed sensing over arbitrarily connected networks, we consider the problem of estimating underlying sparse signals in a distributed manner. We introduce a new signal model that helps to describe inter-signal correlation among connected nodes. Based on this signal model along with a brief survey of existing greedy algorithms, we develop distributed greedy algorithms with low communication overhead. Incorporating appropriate modifications, we design two new distributed algorithms where the local algorithms are based on appropriately modified existing orthogonal matching pursuit and subspace pursuit. Further, by combining advantages of these two local algorithms, we design a new greedy algorithm that is well suited for a distributed scenario. By extensive simulations we demonstrate that the new algorithms in a sparsely connected network provide good performance, close to the performance of a centralized greedy solution.
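As one example of the local greedy building blocks mentioned above, a bare-bones orthogonal matching pursuit can be sketched as follows; this is a centralized, single-node version of ours, whereas the paper's distributed variants add inter-node communication and support fusion on top of it.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the residual, then re-fit all chosen columns by least squares."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x, sorted(support)

rng = np.random.default_rng(5)
m, n, k = 30, 80, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 20, 61]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat, supp = omp(A, y, k)
```

With noiseless measurements and a well-conditioned random dictionary, k greedy iterations recover the exact support and coefficients; the distributed algorithms in the paper aim to preserve this behavior while exchanging only small messages between neighbors.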
A recursive algorithm for simultaneous estimation and tracking of the instantaneous frequencies and instantaneous frequency rates-of-change of signals that consist of multiple narrow-band components in noise is proposed and studied. The algorithm recursively separates the signal into its individual components and uses estimated phase differences to update the instantaneous frequency and frequency rate of each component. The main advantages of the proposed algorithm over frequency-only tracking algorithms known in the literature are the zero asymptotic bias (zero tracking delay) in estimating the instantaneous frequencies of linear FM (chirp) signals and more accurate tracking of frequencies that cross each other. The performance of the algorithm is studied by means of a linear filter approximation technique, and the derived results are compared with the appropriate (posterior) Cramer-Rao bound. The superior performance of the algorithm is illustrated by computer simulations.
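The core idea of reading instantaneous frequency off phase differences can be sketched on a noiseless chirp (a toy single-component version, not the paper's recursive multi-component tracker):

```python
import numpy as np

# Phase-difference instantaneous-frequency estimate on a noiseless chirp.
n = 1024
t = np.arange(n)
f0, rate = 0.05, 1e-5                 # start frequency and rate (cycles/sample)
phase = 2 * np.pi * (f0 * t + 0.5 * rate * t ** 2)
z = np.exp(1j * phase)                # analytic chirp signal

# One-lag phase differences give the local frequency without unwrapping.
inst_freq = np.angle(z[1:] * np.conj(z[:-1])) / (2 * np.pi)
true_freq = f0 + rate * (t[:-1] + 0.5)  # exact midpoint frequency of each lag
ok = np.allclose(inst_freq, true_freq, atol=1e-6)
```

Because the phase difference over one sample equals the midpoint frequency of a linear chirp exactly, this estimator has no tracking delay in the noiseless case, which is the bias property the paper establishes for its recursive algorithm.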
The problem of estimating the nominal direction of arrival and angular spread of a source surrounded by a large number of local scatterers, using an array of sensors, is addressed. This type of propagation occurs frequently in, for example, mobile communications. The maximum likelihood estimator is considered, and two computationally less complex estimators are also proposed. They are based on least-squares fits of the sample covariance to the theoretical covariance matrix derived from the measurement model. The performance of the least-squares-based algorithm is analyzed and, based on this, an optimally weighted least-squares criterion is proposed. The weighted least-squares algorithm is shown to be asymptotically efficient. Results of numerical experiments are presented to illustrate the small-sample behavior of the estimators. The nominal direction-of-arrival (DOA) estimates are compared with those provided by a standard subspace algorithm. Finally, the methods are applied to experimental array data to determine spread angles for non-line-of-sight scenarios.
We propose a Hilbert transform and analytic signal construction for signals over graphs. This is motivated by the popularity of the Hilbert transform, the analytic signal, and modulation analysis in conventional signal processing, and by the observation that complementary insight is often obtained by viewing conventional signals in the graph setting. Our definitions of the Hilbert transform and analytic signal use a conjugate-symmetry-like property exhibited by the graph Fourier transform (GFT), resulting in a 'one-sided' spectrum for the graph analytic signal. The resulting graph Hilbert transform is shown to possess many interesting mathematical properties and to highlight anomalies/discontinuities in a graph signal and the nodes across which those discontinuities occur. Using the graph analytic signal, we further define amplitude, phase, and frequency modulations for a graph signal. We illustrate the proposed concepts through applications to synthesized and real-world signals. For example, we show that the graph Hilbert transform can indicate the presence of anomalies and that the graph analytic signal and the associated amplitude and frequency modulations reveal complementary information in speech signals.
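The construction rests on the graph Fourier transform; a minimal sketch of the GFT machinery on a path graph (only the generic eigendecomposition the paper builds on, not its one-sided-spectrum construction) could look like:

```python
import numpy as np

# GFT on a small path graph: eigenvectors of the graph Laplacian play the
# role of Fourier modes, and eigenvalues order them by "graph frequency".
n = 8
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1                  # combinatorial Laplacian of a path
eigvals, U = np.linalg.eigh(L)           # GFT basis = Laplacian eigenvectors

x = np.sin(2 * np.pi * np.arange(n) / n)  # a signal on the graph's nodes
x_hat = U.T @ x                           # forward GFT
x_rec = U @ x_hat                         # inverse GFT
ok = np.allclose(x, x_rec)                # orthonormal basis: perfect inverse
```

Since the Laplacian eigenvectors form an orthonormal basis, the inverse GFT reconstructs the signal exactly; the paper's analytic-signal definition then zeroes out part of the spectrum x_hat in a symmetry-respecting way.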
The analytic signal (AS) was proposed by Gabor as a complex signal corresponding to a given real signal. The AS has a one-sided spectrum and gives rise to meaningful spectral averages. The Hilbert transform (HT) is a key component in Gabor's AS construction. We generalize the construction methodology by employing the fractional Hilbert transform (FrHT), without going through the standard fractional Fourier transform (FrFT) route. We discuss some properties of the fractional Hilbert operator and show how decomposition of the operator in terms of the identity and the standard Hilbert operators enables the construction of a family of analytic signals. We show that these analytic signals also satisfy Bedrosian-type properties and that their time-frequency localization properties are unaltered. We also propose a generalized-phase AS (GPAS) using a generalized-phase Hilbert transform (GPHT). We show that the GPHT shares many properties of the FrHT, in particular, selective highlighting of singularities, and a connection with Lie groups. We also investigate the duality between analyticity and causality concepts to arrive at a representation of causal signals in terms of the FrHT and GPHT. On the application front, we develop a secure multi-key single-sideband (SSB) modulation scheme and analyze its performance in noise and sensitivity to security key perturbations.
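The decomposition of the fractional Hilbert operator in terms of the identity and the standard Hilbert operators can be illustrated numerically; the sketch below uses one common convention, H_alpha = cos(alpha) I + sin(alpha) H, which may differ from the paper's exact parameterization:

```python
import numpy as np

def hilbert_ht(x):
    """Discrete Hilbert transform of a real signal via the FFT (even length)."""
    n = len(x)
    X = np.fft.fft(x)
    X[1:n // 2] *= -1j          # positive frequencies: multiply by -j
    X[n // 2 + 1:] *= 1j        # negative frequencies: multiply by +j
    X[0] = X[n // 2] = 0        # DC and Nyquist carry no Hilbert component
    return np.fft.ifft(X).real

def frht(x, alpha):
    """Fractional Hilbert transform via the identity/Hilbert decomposition."""
    return np.cos(alpha) * x + np.sin(alpha) * hilbert_ht(x)

t = np.arange(256) / 256
x = np.cos(2 * np.pi * 8 * t)
ident_ok = frht(x, 0.0) is not None and np.allclose(frht(x, 0.0), x)
# alpha = pi/2 reduces to the ordinary Hilbert transform: cos -> sin.
quad_ok = np.allclose(frht(x, np.pi / 2), np.sin(2 * np.pi * 8 * t), atol=1e-8)
```

The two endpoint checks (alpha = 0 gives the identity, alpha = pi/2 the ordinary Hilbert transform) confirm that the family interpolates between the two operators, which is what makes the one-parameter family of analytic signals possible.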
The ability of the continuous wavelet transform (CWT) to provide good time and frequency localization has made it a popular tool in time-frequency analysis of signals. Wavelets exhibit the constant-Q property, which is also possessed by the basilar membrane filters in the peripheral auditory system. The basilar membrane filters, or auditory filters, are often modeled by a Gammatone function, which provides a good approximation to experimentally determined responses. The filterbank derived from these filters is referred to as a Gammatone filterbank. In general, wavelet analysis can be likened to a filterbank analysis, hence the interesting link between standard wavelet analysis and the Gammatone filterbank. However, the Gammatone function does not exactly qualify as a wavelet because its time average is not zero. We show how bona fide wavelets can be constructed out of Gammatone functions. We analyze properties such as admissibility, time-bandwidth product, and vanishing moments, which are particularly relevant in the context of wavelets. We also show how the proposed auditory wavelets arise as the impulse response of a linear, shift-invariant system governed by a linear differential equation with constant coefficients. We propose analog circuit implementations of the proposed CWT. We also show how the Gammatone-derived wavelets can be used for singularity detection and time-frequency analysis of transient signals.
Many algorithms for transmission in multiple input multiple output (MIMO) communication systems rely on second order statistics of the channel realizations. The problem of estimating such second order statistics of MIMO channels, based on limited amounts of training data, is treated in this article. It is assumed that the Kronecker model holds. This implies that the channel covariance is the Kronecker product of one covariance matrix associated with the transmit array and the scattering at the transmitter and one associated with the receive array and the scattering at the receiver. The proposed estimator uses training data from a number of signal blocks (received during independent fades of the MIMO channel) to compute the estimate. This is in contrast to methods that assume that the channel realizations are directly available, or possible to estimate almost without error. It is also demonstrated how methods that make use of the training data indirectly via channel estimates can be biased. An estimator is derived that can, in an asymptotically optimal way, use not only the structure implied by the Kronecker assumption but also linear structure on the transmit and receive covariance matrices. The performance of the proposed estimator is analyzed, and numerical simulations illustrate the results and also provide insight into the small sample behaviour of the proposed method.
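The Kronecker structure that such estimators exploit can be illustrated with the classical Van Loan-Pitsianis nearest-Kronecker-product fit (a generic structural tool, not the paper's training-data-based estimator):

```python
import numpy as np

def nearest_kron(R, p, q):
    """Best Kronecker factors of a pq x pq matrix R ~ kron(Rt, Rr):
    rearrange the blocks so the Kronecker structure becomes rank one,
    then take the leading singular pair (Van Loan-Pitsianis)."""
    Rtilde = np.stack([R[i * q:(i + 1) * q, j * q:(j + 1) * q].ravel()
                       for i in range(p) for j in range(p)])
    U, s, Vt = np.linalg.svd(Rtilde, full_matrices=False)
    Rt = np.sqrt(s[0]) * U[:, 0].reshape(p, p)
    Rr = np.sqrt(s[0]) * Vt[0].reshape(q, q)
    return Rt, Rr

rng = np.random.default_rng(3)
p, q = 3, 4
At = rng.normal(size=(p, p)); Rt_true = At @ At.T   # transmit covariance
Ar = rng.normal(size=(q, q)); Rr_true = Ar @ Ar.T   # receive covariance
R = np.kron(Rt_true, Rr_true)                       # Kronecker channel covariance
Rt, Rr = nearest_kron(R, p, q)
ok = np.allclose(np.kron(Rt, Rr), R)                # exact for exactly-Kronecker R
```

The factors are only identifiable up to a scalar (and its inverse), but their Kronecker product is unique, which is what matters for the channel covariance.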
In this article, we investigate the performance of the recently proposed Direction-Of-Arrival (DOA) estimator POWDER (Prior Orthogonally Weighted Direction EstimatoR). The method exploits a specific form of prior information, namely that some DOAs are known, as well as that the correlation state between some of the source signals is known. In such scenarios, it is desirable to exploit the prior information already in the estimator design so that the knowledge can benefit the estimation of the DOAs of the unknown sources. Through an asymptotic statistical analysis, we derive closed-form expressions for the accuracy of the method. We also derive the relevant Cramér-Rao Bound, and we show the algorithm to be efficient under mild assumptions. The realizable performance in the finite-sample case is studied through numerical Monte-Carlo simulations, from which one can conclude that the theoretically predicted accuracies are attained for modest sample sizes and comparatively low SNR. This implies that the algorithm is significantly more accurate than other state-of-the-art methods in a wide range of scenarios.
Discrete-rate spectrum balancing in interference-limited multi-user and multi-carrier digital subscriber lines (DSL) is a large-scale, non-convex, combinatorial problem. Previously proposed algorithms for its (dual) optimal solution are applicable only to networks with few users, while the suboptimality of less complex bit-loading algorithms has not been adequately studied so far. We deploy constrained optimization techniques as well as problem-specific branch-and-bound and search-space reduction methods, which for the first time give a low-complexity guarantee of optimality in certain multi-user DSL networks of practical size. Simulation results precisely quantify the suboptimality of multi-user bit-loading schemes in a thousand ADSL2 scenarios under measured channel data.
For line spectrum estimation, we derive the maximum a posteriori probability estimator where prior knowledge of frequencies is modeled probabilistically. Since the spectrum is periodic, an appropriate distribution is the circular von Mises distribution that can parameterize the entire range of prior certainty of the frequencies. An efficient alternating projections method is used to solve the resulting optimization problem. The estimator is evaluated numerically and compared with other estimators and the Cramer-Rao bound.
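A minimal sketch of the MAP idea for a single complex sinusoid, combining the periodogram log-likelihood with a von Mises log-prior (hypothetical parameter values, and a grid search in place of the paper's alternating projections method):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 32
t = np.arange(n)
omega_true = 1.0
y = np.exp(1j * omega_true * t) + 0.3 * (rng.normal(size=n)
                                         + 1j * rng.normal(size=n))

grid = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
# Single-sinusoid log-likelihood (up to constants): the periodogram.
loglik = np.abs(np.exp(-1j * np.outer(grid, t)) @ y) ** 2 / n
# von Mises log-prior, mean mu and concentration kappa (hypothetical values);
# being periodic, it is a natural prior for a frequency on the unit circle.
mu, kappa = 1.1, 5.0
logprior = kappa * np.cos(grid - mu)
omega_map = grid[np.argmax(loglik + logprior)]   # MAP frequency estimate
```

Large kappa concentrates the prior near mu (near-certain prior knowledge), while kappa = 0 makes it uniform and reduces the MAP estimator to maximum likelihood, which is the full range of prior certainty the abstract refers to.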
We design a Generalized Fusion Algorithm for Compressive Sampling (gFACS) reconstruction. In the gFACS algorithm, several individual compressive sampling (CS) reconstruction algorithms participate to achieve better performance than any individual algorithm. The gFACS algorithm is iterative in nature, and its convergence is proved under certain conditions using Restricted Isometry Property (RIP)-based theoretical analysis. The theoretical analysis allows the participation of any off-the-shelf or new CS reconstruction algorithm with simple modifications, while still guaranteeing convergence. We show modifications of some well-known CS reconstruction algorithms for their seamless use in the gFACS algorithm. Simulation results show that the proposed gFACS algorithm indeed provides better performance than the participating individual algorithms.
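One simple instance of the fusion idea (in the spirit of, though not identical to, the gFACS iteration) is to least-squares-fit on the union of the participating estimates' supports and then prune back to the sparsity level:

```python
import numpy as np

def topk_support(x, k):
    """Indices of the k largest-magnitude entries of x."""
    return set(np.argsort(-np.abs(x))[:k].tolist())

def fuse(A, y, estimates, k):
    """Fuse CS estimates: least squares on the union of their top-k supports,
    then prune to the k largest entries and refit (a sketch of the idea)."""
    union = sorted(set().union(*(topk_support(x, k) for x in estimates)))
    coef, *_ = np.linalg.lstsq(A[:, union], y, rcond=None)
    x = np.zeros(A.shape[1])
    x[union] = coef
    keep = sorted(topk_support(x, k))
    coef, *_ = np.linalg.lstsq(A[:, keep], y, rcond=None)
    xk = np.zeros(A.shape[1])
    xk[keep] = coef
    return xk

rng = np.random.default_rng(4)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[[3, 20, 50]] = [1.5, -2.0, 0.8]
y = A @ x_true
# Two imperfect estimates, each recovering only part of the true support.
x1 = np.zeros(100); x1[[3, 20, 7]] = [1.4, -1.9, 0.3]
x2 = np.zeros(100); x2[[20, 50, 90]] = [-2.1, 0.7, 0.2]
x_fused = fuse(A, y, [x1, x2], k=3)
ok = np.allclose(x_fused, x_true, atol=1e-6)
```

Neither estimate alone covers the true support, but their union does, so the fused estimate recovers the signal exactly in this noiseless toy case: the mechanism by which fusion can beat the best participating algorithm.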
We investigate an existing distributed algorithm for learning sparse signals or data over networks. The algorithm is iterative and exchanges intermediate estimates of a sparse signal over the network. This learning strategy, based on exchanging intermediate estimates, requires only a limited communication overhead for information transmission. Our objective in this article is to show that the strategy learns well in spite of the limited communication. In pursuit of this objective, we first provide a restricted isometry property (RIP)-based theoretical analysis of the convergence of the iterative algorithm. Then, using simulations, we show that the algorithm provides competitive performance in learning sparse signals vis-a-vis an existing alternative distributed algorithm that exchanges more information, including observations and system parameters.