In this work we study the problem of estimating the parameters of a bilinear model describing, e.g., the amplitude modulation of extremely low frequency electromagnetic (ELFE) signatures of submarines. A similar problem arises in the estimation of a nonlinear dynamic system using a Hammerstein–Wiener model, where two static nonlinear blocks surround a linear dynamic block. For these purposes a new method is derived. In the same context, it is also shown that a two-stage method for parameter estimation of Hammerstein–Wiener models can be interpreted as an approximate least squares method. We also show the similarities with the problem of weighted low-rank approximation, and that these problems can be solved exactly in finite time using solvers for the global optimization of systems of polynomials based on self-dual optimization.
In this paper a number of covariance matrix estimators suggested in the literature are compared in terms of their performance in the context of array signal processing. More specifically, they are applied in adaptive beamforming, which is known to be sensitive to errors in the covariance matrix estimate and where often only a limited amount of data is available for estimation. Since many covariance matrix estimators take the form of diagonal loading or eigenvalue adjustment of the sample covariance matrix, and since they sometimes offer robustness to array imperfections and finite-sample errors, they are compared to a recent robust Capon beamforming (RCB) method, which also has a diagonal loading interpretation. Some of the covariance estimators show a significant improvement over the sample covariance matrix, and in some cases they match the performance of RCB even when a priori knowledge that is not available in practice is used to choose the user parameter of RCB.
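The diagonal loading interpretation mentioned above is easy to sketch: the sample covariance is regularized by adding a scaled identity before forming the Capon (MVDR) weights. The sketch below is a generic illustration, not the specific RCB method of the paper; the array size, loading value, and noise-only snapshot data are made-up assumptions.

```python
import numpy as np

def capon_weights(R, a, loading=0.0):
    """Capon/MVDR weights with optional diagonal loading of the covariance:
    w = (R + lam*I)^{-1} a / (a^H (R + lam*I)^{-1} a)."""
    Rl = R + loading * np.eye(R.shape[0])
    Ria = np.linalg.solve(Rl, a)
    return Ria / (a.conj() @ Ria)

# Hypothetical scenario: 8-element array, broadside steering vector.
m = 8
a = np.ones(m, dtype=complex)
rng = np.random.default_rng(0)
# Few snapshots -> poorly conditioned sample covariance matrix.
n_snap = 10
X = (rng.standard_normal((m, n_snap))
     + 1j * rng.standard_normal((m, n_snap))) / np.sqrt(2)
R_hat = X @ X.conj().T / n_snap

w = capon_weights(R_hat, a, loading=1.0)
print(abs(w.conj() @ a - 1.0) < 1e-10)  # distortionless constraint w^H a = 1
```

The loading term both stabilizes the inversion for small sample sizes and, as the abstract notes, mimics the effect of several robust covariance estimators.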
In this paper we introduce a new technique for estimating the parameters of the Keplerian model commonly used in radial velocity data analysis for extrasolar planet detection. The unknown parameters in the Keplerian model, namely the eccentricity e, orbital frequency f, periastron passage time T, longitude of periastron, and radial velocity amplitude K, are estimated by a new approach named SPICE (a semi-parametric iterative covariance-based estimation technique). SPICE enjoys global convergence, does not require the selection of any hyperparameters, and is computationally efficient (indeed, computing the SPICE estimates boils down to solving a numerically efficient linear program (LP)). The parameter estimates obtained from SPICE are then refined by means of a relaxation-based maximum likelihood algorithm (RELAX), and the significance of the resulting estimates is determined by a generalized likelihood ratio test (GLRT). A real-life radial velocity data set of the star HD 9446 is analyzed and the results obtained are compared with those reported in the literature.
In this note we show that the sparse estimation technique named Square-Root LASSO (SR-LASSO) is connected to a previously introduced method named SPICE. More concretely we prove that the SR-LASSO with a unit weighting factor is identical to SPICE. Furthermore we show via numerical simulations that the performance of the SR-LASSO changes insignificantly when the weighting factor is varied. SPICE stands for sparse iterative covariance-based estimation and LASSO for least absolute shrinkage and selection operator.
In this letter we revisit the problem of smoothed nonparametric spectral estimation via cepstrum thresholding. We formulate the problem of cepstrum thresholding as a multiple hypothesis testing problem and use the false discovery rate (FDR) and familywise error rate (FER) procedures to threshold the cepstral coefficients. We compare the FDR and FER approaches with a previously proposed individual hypothesis testing approach and show that the cepstrum thresholding based on FDR and FER can yield spectral estimates with lower mean square error (MSE).
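The FDR procedure referred to above is usually implemented with the Benjamini–Hochberg step-up rule: sort the p-values, find the largest index k with p_(k) <= k*q/n, and keep those k coefficients. The sketch below is a generic implementation of that rule; the p-values are made up for illustration and are not cepstral-coefficient tests from the paper.

```python
import numpy as np

def bh_threshold(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: boolean mask of hypotheses
    rejected (coefficients kept) at FDR level q."""
    p = np.asarray(pvals)
    n = p.size
    order = np.argsort(p)
    sorted_p = p[order]
    # Largest k (1-indexed) with p_(k) <= k*q/n.
    below = sorted_p <= (np.arange(1, n + 1) * q / n)
    keep = np.zeros(n, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        keep[order[: k + 1]] = True
    return keep

# Hypothetical p-values: three strong coefficients, one moderate, four noise.
pvals = np.array([0.001, 0.002, 0.004, 0.3, 0.5, 0.7, 0.9, 0.02])
mask = bh_threshold(pvals, q=0.05)
print(mask)  # keeps the entries with p = 0.001, 0.002, 0.004, 0.02
```

Note that the step-up rule keeps p = 0.02 here even though it exceeds q/n, which is exactly how FDR control is less conservative than a familywise (Bonferroni-style) threshold.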
We consider the problem of model selection for high-dimensional sparse linear regression models. We pose the model selection problem as a multiple-hypothesis testing problem and employ the methods of false discovery rate (FDR) and familywise error rate (FER) to solve it. We also present the reformulation of the FDR/FER-based approaches as criterion-based model selection rules and establish their relation to the extended Bayesian Information Criterion (EBIC), which is a state-of-the-art high-dimensional model selection rule. We use numerical simulations to show that the proposed FDR/FER method is well suited for high-dimensional model selection and performs better than EBIC.
In this paper we deal with the problem of spectral-line analysis of nonuniformly sampled multivariate time series, for which we introduce two methods: the first method, named SPICE (sparse iterative covariance-based estimation), is based on a covariance fitting framework, whereas the second method, named LIKES (likelihood-based estimation of sparse parameters), is a maximum likelihood technique. Both methods yield sparse spectral estimates and they do not require the choice of any hyperparameters. We numerically compare the performance of SPICE and LIKES with that of the recently introduced method of multivariate sparse Bayesian learning (MSBL).
We consider least squares (LS) approaches for locating a radiating source from range measurements (which we call R-LS) or from range-difference measurements (RD-LS) collected using an array of passive sensors. We also consider LS approaches based on squared range observations (SR-LS) and based on squared range-difference measurements (SRD-LS). Despite the fact that the resulting optimization problems are nonconvex, we provide exact solution procedures for efficiently computing the SR-LS and SRD-LS estimates. Numerical simulations suggest that the exact SR-LS and SRD-LS estimates outperform existing approximations of the SR-LS and SRD-LS solutions as well as approximations of the R-LS and RD-LS solutions which are based on a semidefinite relaxation.
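A rough illustration of why squared-range observations are attractive: r_i^2 = ||x||^2 - 2 a_i^T x + ||a_i||^2 becomes linear in (x, y) once y = ||x||^2 is treated as an extra unknown. The sketch below uses this common linearization; it is a simplified approximation, not the exact SR-LS solution procedure of the paper, and the sensor geometry is a made-up 2-D example.

```python
import numpy as np

def srls_linearized(anchors, ranges):
    """Linearized squared-range LS: solve for (x, y) with y = ||x||^2 from
    -2 a_i^T x + y = r_i^2 - ||a_i||^2, then discard y."""
    A = np.hstack([-2.0 * anchors, np.ones((anchors.shape[0], 1))])
    b = ranges**2 - np.sum(anchors**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:-1]

# Hypothetical setup: four sensors at the corners of a square,
# noiseless ranges to an unknown source.
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
source = np.array([1.0, 2.5])
ranges = np.linalg.norm(anchors - source, axis=1)
est = srls_linearized(anchors, ranges)
print(est)  # -> approx [1.0, 2.5]
```

The exact SR-LS estimator of the paper additionally enforces the coupling y = ||x||^2 as a constraint, which this linearization ignores; with noisy ranges the two generally differ.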
This paper addresses the estimation of the center frequency of complex exponential signals with time-varying amplitude. A method that requires few assumptions regarding the signal's envelope is proposed. It is based on the polar decomposition of a certain covariance matrix. The polar decomposition, a generalization to matrices of the complex number representation z = re^{iθ} with r > 0, is particularly suitable for the application considered. The notion of truncated polar decomposition is introduced. Simple schemes for estimating the signal's frequency, based on these decompositions, are presented. In contrast to most existing methods, the methods presented herein do not rely on any assumed structure for the time-varying amplitude, and they are shown to perform well for a large class of signals. The effectiveness and robustness of our approach are demonstrated on real radar data.
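The polar decomposition itself is easy to compute from an SVD: if A = W S V^H, then A = U P with the unitary factor U = W V^H and the Hermitian positive semidefinite factor P = V S V^H. The sketch below is generic numpy, not the truncated variant or the covariance-matrix application introduced in the paper.

```python
import numpy as np

def polar_decomposition(A):
    """Polar decomposition A = U P with U unitary and P Hermitian PSD,
    computed from the SVD A = W S V^H as U = W V^H, P = V S V^H."""
    W, s, Vh = np.linalg.svd(A)
    U = W @ Vh
    P = Vh.conj().T @ np.diag(s) @ Vh
    return U, P

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, P = polar_decomposition(A)
print(np.allclose(U @ P, A))                   # reconstruction holds
print(np.allclose(U.conj().T @ U, np.eye(4)))  # U is unitary
```

The analogy with z = re^{iθ} is direct: U plays the role of the unit-modulus phase factor e^{iθ} and P the role of the nonnegative modulus r.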
Magnetic resonance imaging of tissues with both fat and water resonances allows for absolute temperature mapping through parametric modeling. The fat resonance is used as a reference to determine the absolute water resonance frequency, which is linearly related to the temperature. The goal of this paper is to assess whether or not resonance-frequency-based absolute temperature mapping is feasible in fat tissue. This is done by examining identifiability conditions and analyzing the obtainable performance in terms of the Cramér-Rao bound of the temperature estimates. We develop the model by including multiple fat peaks, since even small fat resonances can be significant compared to the small water component in fat tissue. It is shown that a high signal-to-noise ratio is needed for practical use on a 1.5 T scanner, and that higher field strengths can improve the bound significantly. It is also shown that the choice of sampling interval is important to avoid aliasing. In sum, this type of magnetic resonance thermometry is feasible for fat tissue in applications where high field strength is used or where a high signal-to-noise ratio can be obtained.
In magnetic resonance imaging (MRI), the balanced steady-state free precession (bSSFP) pulse sequence has been shown to be of great interest, due to its relatively high signal-to-noise ratio in a short scan time. However, images acquired with this pulse sequence suffer from banding artifacts due to off-resonance effects. These artifacts typically appear as black bands covering parts of the image and they severely degrade the image quality. In this paper, we present a fast two-step algorithm for estimating the unknowns in the signal model and removing the banding artifacts. The first step consists of rewriting the model in such a way that it becomes linear in the unknowns (this step is named Linearization for Off-Resonance Estimation, or LORE). In the second step, we use Gauss-Newton iterative optimization with the parameters obtained by LORE as initial guesses. We name the full algorithm LORE-GN. Using both simulated and in vivo data, we show the performance gain associated with using LORE-GN as compared to general methods commonly employed in similar cases.
Purpose: The balanced steady-state free precession (bSSFP) pulse sequence has been shown to be of great interest due to its high signal-to-noise ratio efficiency. However, bSSFP images often suffer from banding artifacts due to off-resonance effects, which we aim to minimize in this article. Methods: We present a general and fast two-step algorithm for 1) estimating the unknowns in the bSSFP signal model from multiple phase-cycled acquisitions, and 2) reconstructing band-free images. The first step, linearization for off-resonance estimation (LORE), solves the nonlinear problem approximately by a robust linear approach. The second step applies a Gauss-Newton algorithm, initialized by LORE, to minimize the nonlinear least squares criterion. We name the full algorithm LORE-GN. Results: We derive the Cramér-Rao bound, a theoretical lower bound on the variance of any unbiased estimator, and show that LORE-GN is statistically efficient. Furthermore, we show that simultaneous estimation of T1 and T2 from phase-cycled bSSFP is difficult, since the Cramér-Rao bound is high at common signal-to-noise ratios. Using simulated, phantom, and in vivo data, we illustrate the band-reduction capabilities of LORE-GN compared to other techniques, such as sum-of-squares. Conclusion: Using LORE-GN we can successfully minimize banding artifacts in bSSFP.
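The Gauss-Newton refinement step can be sketched on a toy problem: at each iteration, the residual is linearized via the Jacobian and a linear least squares step is taken. The exponential model and starting values below are stand-ins for the bSSFP signal model and the LORE initializer, chosen only for illustration.

```python
import numpy as np

def gauss_newton(t, y, theta0, n_iter=50):
    """Gauss-Newton iterations for the toy model y = a*exp(-b*t):
    linearize the residual via the Jacobian, take an LS step, repeat."""
    a, b = theta0
    for _ in range(n_iter):
        f = a * np.exp(-b * t)
        r = y - f                              # current residual
        J = np.column_stack([f / a, -t * f])   # d f / d(a, b)
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        a, b = a + step[0], b + step[1]
    return a, b

t = np.linspace(0.0, 2.0, 30)
y = 3.0 * np.exp(-1.5 * t)                     # noiseless synthetic data
a_hat, b_hat = gauss_newton(t, y, theta0=(2.0, 1.0))
print(a_hat, b_hat)  # converges to approx (3.0, 1.5)
```

As in LORE-GN, the quality of the initial guess matters: Gauss-Newton converges quickly near the solution but can stall or diverge from a poor starting point, which is why a robust linear initializer is valuable.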
This article presents a way of modeling patient response to a pharmacotherapy by means of dynamic models with quantized output. The proposed modeling technique is exemplified by treatment of Parkinson's disease with Duodopa®, where the drug is continuously administered via duodenal infusion. Titration of Duodopa® is currently performed manually by a nurse judging the patient's motor symptoms on a quantized scale and adjusting the drug flow provided by a portable computer-controlled infusion pump. The optimal drug flow value is subject to significant inter-individual variation and the titration process might take up to two weeks for some patients. In order to expedite the titration procedure via automation, as well as to find optimal dosing strategies, a mathematical model of this system is sought. The proposed model is of Wiener type, with a linear dynamic block cascaded with a static nonlinearity in the form of a non-uniform quantizer whose levels are to be identified. An identification procedure based on the prediction error method and the Gauss-Newton algorithm is suggested. The datasets available from titration sessions are scarce, so finding a parsimonious model is essential. A few different model parameterizations and identification algorithms were initially evaluated. The results showed that models with four parameters giving accurate predictions can be identified for some of the available datasets.
In this paper we present an algorithm for sequence design with magnitude constraints. We formulate the design problem in a general setting, but also illustrate its relevance to parallel excitation MRI. The formulated non-convex design optimization criterion is minimized locally by means of a cyclic algorithm, consisting of two simple algebraic sub-steps. Since the algorithm truly minimizes the criterion, the obtained sequence designs are guaranteed to improve upon the estimates provided by a previous method, which is based on the heuristic principle of the Iterative Quadratic Maximum Likelihood algorithm. The performance of the proposed algorithm is illustrated in two numerical examples.
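A cyclic algorithm of this flavor can be sketched for the common magnitude-least-squares formulation, min over x and auxiliary phases phi of ||m ∘ e^{i phi} - A x||^2: one sub-step solves a linear LS problem for x, the other aligns phi with the phase of Ax. Both sub-steps are algebraic and each decreases the criterion. The criterion and random data below are illustrative assumptions, not the paper's exact design criterion.

```python
import numpy as np

def cyclic_mls(A, m, n_iter=100, seed=0):
    """Cyclic minimization of ||m*exp(i*phi) - A x||^2 over x and phi.
    Each sub-step is closed-form: a linear LS solve and a phase update."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=m.size)
    costs = []
    for _ in range(n_iter):
        target = m * np.exp(1j * phi)
        x, *_ = np.linalg.lstsq(A, target, rcond=None)  # LS sub-step
        phi = np.angle(A @ x)                           # phase sub-step
        costs.append(np.linalg.norm(m * np.exp(1j * phi) - A @ x))
    return x, np.array(costs)

# Hypothetical design problem: target magnitudes consistent with some x.
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 5)) + 1j * rng.standard_normal((20, 5))
m = np.abs(A @ (rng.standard_normal(5) + 1j * rng.standard_normal(5)))
x, costs = cyclic_mls(A, m)
print(np.all(np.diff(costs) <= 1e-10))  # criterion never increases
```

Since each sub-step exactly minimizes the criterion over its own variables, the cost sequence is monotonically non-increasing, which mirrors the "truly minimizes the criterion" property claimed for the paper's algorithm (though only local convergence is guaranteed for this nonconvex problem).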
Estimation of the transverse relaxation time, T2, from multi-echo spin-echo images is usually performed using the magnitude of the noisy data and a least squares (LS) approach. The noise in these magnitude images is Rice distributed, which can lead to a considerable bias in the LS-based T2 estimates. One way to avoid this bias problem is to estimate a real-valued and Gaussian distributed dataset from the complex data, rather than using the magnitude. In this paper, we propose two algorithms for phase correction which can be used to generate real-valued data suitable for LS-based parameter estimation approaches. The first is a Weighted Linear Phase Estimation algorithm, abbreviated as WELPE. This method improves upon a previously published algorithm, while simplifying the estimation procedure and extending it to support multi-coil input. The algorithm fits a linearly parameterized function to the multi-echo phase data in each voxel and, based on this estimated phase, projects the data onto the real axis. The second method is a maximum likelihood estimator of the true decaying signal magnitude, which can be efficiently implemented when the phase variation is linear in time. The performance of the algorithms is demonstrated via Monte Carlo simulations, by comparing the accuracy of the estimates. Furthermore, it is shown that using one of the proposed algorithms enables more accurate T2 estimates; in particular, phase-corrected data significantly reduce the estimation bias in a multi-component T2 relaxometry example, compared to using magnitude data. WELPE is also applied to a 32-echo in vivo brain dataset, to show its practical feasibility.
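The core idea of phase correction, rotating the complex data by an estimated phase and keeping the real part, can be illustrated on synthetic data. This toy uses a single constant phase rather than WELPE's linearly parameterized phase model, and the signal level and noise standard deviation are arbitrary assumptions; it only demonstrates why real-projected data avoid the Rician magnitude bias.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
true_mag = 1.0
true_phase = 0.7                                   # common phase (rad)
sigma = 0.5                                        # noise std per component
noise = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
z = true_mag * np.exp(1j * true_phase) + noise     # complex measurements

# Magnitude data: Rice distributed, positively biased at low SNR.
mag_est = np.abs(z).mean()

# Phase correction: estimate the common phase, rotate, keep the real part.
phase_hat = np.angle(z.mean())
real_est = (z * np.exp(-1j * phase_hat)).real.mean()

print(mag_est, real_est)  # magnitude mean overshoots; real-part mean does not
```

The rotated real part is (approximately) Gaussian around the true magnitude, so an ordinary LS fit to such data is unbiased, whereas the magnitude carries a noise-floor bias that propagates into the T2 estimate.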
For estimating angles of arrival, there are three well-known algorithms: weighted noise subspace fitting (WNSF), unconditional maximum likelihood (UML), and conditional maximum likelihood (CML). These algorithms can also be used for estimating/calibrating
We consider the problem of optimizing the quantization intervals (or thresholds) of low-resolution analog-to-digital converters (ADCs) via the minimization of a Cramér-Rao bound (CRB)-based metric. The interval design is formulated as a dynamic programming problem. A computationally efficient global algorithm, referred to as the interval design for enhanced accuracy (IDEA) algorithm, is presented to solve this optimization problem. If the realization in hardware of a quantizer with optimized intervals is difficult, it can be approximated by a design whose practical implementation is feasible. Furthermore, the optimized quantizer can also be useful in signal compression applications, in which case no approximation should be necessary. As an additional contribution, we establish the equivalence between the Lloyd-Max type of quantizer and a low signal-to-noise ratio version of our IDEA quantizer, and show that it holds true if and only if the noise is Gaussian. Furthermore, IDEA quantizers for several typical signals, for instance normally distributed signals, are provided. Finally, a number of numerical examples are presented to demonstrate that the use of IDEA quantizers can enhance the parameter estimation performance.
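The Lloyd-Max quantizer that IDEA is compared against alternates two closed-form updates: thresholds are set to the midpoints of adjacent levels, and levels are set to the conditional means of their cells. The empirical sketch below designs a 4-level quantizer for a Gaussian source; the sample size and iteration count are arbitrary choices, and this is the classical algorithm, not IDEA itself.

```python
import numpy as np

def lloyd_max(samples, n_levels=4, n_iter=50):
    """Empirical Lloyd-Max design: alternate (1) thresholds = midpoints of
    the levels and (2) levels = conditional means of each cell."""
    levels = np.quantile(samples, (np.arange(n_levels) + 0.5) / n_levels)
    for _ in range(n_iter):
        thresholds = 0.5 * (levels[:-1] + levels[1:])
        idx = np.digitize(samples, thresholds)
        for k in range(n_levels):
            cell = samples[idx == k]
            if cell.size:
                levels[k] = cell.mean()
    idx = np.digitize(samples, 0.5 * (levels[:-1] + levels[1:]))
    mse = np.mean((samples - levels[idx]) ** 2)
    return levels, mse

rng = np.random.default_rng(4)
x = rng.standard_normal(50000)
levels, mse = lloyd_max(x, n_levels=4)
print(levels)  # approx the known Gaussian optimum, near +/-0.45 and +/-1.51
```

Lloyd-Max minimizes the reconstruction MSE of the signal itself; the point of IDEA is that minimizing a CRB-based metric for parameter estimation generally yields different intervals, the two coinciding only in the low-SNR Gaussian case noted in the abstract.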
In this paper, we formulate the multi-pitch estimation problem and propose a number of methods to estimate the set of fundamental frequencies. The proposed methods, based on the nonlinear least-squares (NLS), Multiple Signal Classification (MUSIC) and the Capon principles, estimate the multiple fundamental frequencies via a number of one-dimensional searches. We also propose an iterative method based on the Expectation Maximization (EM) algorithm. The statistical properties of the methods are evaluated via Monte Carlo simulations for both the single- and multi-pitch cases.
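The one-dimensional search underlying an NLS-type pitch estimator can be sketched as a grid search over candidate fundamental frequencies: for each candidate, fit the harmonic model by linear LS and keep the candidate with the largest fitted energy. The harmonic signal, grid, and model order below are illustrative assumptions, not an implementation of the paper's methods.

```python
import numpy as np

def nls_pitch(y, n_harm, f_grid):
    """Approximate NLS single-pitch estimate: for each candidate f0, fit
    cos/sin terms at harmonics k*f0 by linear LS and score the candidate
    by the energy of the fitted harmonic component."""
    t = np.arange(y.size)
    best_f, best_val = f_grid[0], -np.inf
    for f0 in f_grid:
        cols = []
        for k in range(1, n_harm + 1):
            cols.append(np.cos(2 * np.pi * k * f0 * t))
            cols.append(np.sin(2 * np.pi * k * f0 * t))
        Z = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        val = np.linalg.norm(Z @ coef)
        if val > best_val:
            best_f, best_val = f0, val
    return best_f

# Hypothetical single-pitch signal: f0 = 0.1 cycles/sample, three harmonics.
t = np.arange(200)
y = (np.cos(2 * np.pi * 0.1 * t)
     + 0.5 * np.cos(2 * np.pi * 0.2 * t + 0.3)
     + 0.25 * np.cos(2 * np.pi * 0.3 * t))
f_hat = nls_pitch(y, n_harm=3, f_grid=np.linspace(0.05, 0.15, 101))
print(f_hat)  # -> 0.1
```

In the multi-pitch setting of the paper, one such one-dimensional search is performed per source (with the MUSIC and Capon variants replacing the LS fit by their respective spectra), which avoids a joint multidimensional search.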