One of the challenges for the successful use of wireless sensor networks in process industries is to design networks with energy-efficient transmission, increasing the lifetime of the deployed network while maintaining the required latency and bit-error rate. The design of such transmission schemes depends on the radio channel characteristics of the region. This paper presents an investigation of the statistical properties of the radio channel in a typical process industry, particularly when the network is meant to be deployed for a long time, e.g., days, weeks, or even months. Using extensive 17-20-h-long measurement campaigns in a rolling mill and a paper mill, we highlight the non-stationarity of the environment and quantify the ability of various distributions from the literature to describe the variations on the links. Finally, we analyze the design of an optimal received signal-to-noise ratio (SNR) for the deployed nodes and show that an improper choice of distribution for modeling the channel variations can lead to an overuse of energy by a factor of four or even higher.
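To illustrate why the choice of fading distribution matters for the SNR design, the following sketch compares the fade margin needed to reach a given outage probability under a Rayleigh fading model with that under a lognormal model. The numbers (1% outage target, 4 dB shadowing spread) are illustrative assumptions, not values from the measurement campaigns.

```python
import math
from statistics import NormalDist

def rayleigh_margin_db(outage):
    # Rayleigh fading: P(gamma < g) = 1 - exp(-g / mean); solve for mean/g in dB.
    return 10 * math.log10(1.0 / -math.log1p(-outage))

def lognormal_margin_db(outage, sigma_db):
    # Lognormal shadowing: the margin is sigma_db times the (1 - outage)
    # quantile of the standard normal distribution.
    return sigma_db * NormalDist().inv_cdf(1.0 - outage)

p = 0.01                                # target outage probability (assumed)
m_ray = rayleigh_margin_db(p)           # ~20 dB margin
m_ln = lognormal_margin_db(p, 4.0)      # ~9.3 dB margin for sigma = 4 dB
energy_factor = 10 ** ((m_ray - m_ln) / 10)
print(f"Rayleigh margin: {m_ray:.1f} dB, lognormal margin: {m_ln:.1f} dB")
print(f"energy overuse if Rayleigh is wrongly assumed: {energy_factor:.1f}x")
```

For these illustrative parameters the mismatch alone exceeds the factor-of-four energy overuse mentioned above.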
Lamb waves have proven to be very useful for plate inspection because large areas of a plate can be covered from a fixed position. This capability makes them suitable for both inspection and structural health monitoring (SHM) applications. During the last decade, research on the use of active arrays in combination with beamforming techniques has shown that a fixed array can be used to perform omni-directional monitoring of a plate structure. Dispersion and the presence of multiple propagating modes are issues that need to be addressed when working with Lamb waves. Previous work has mainly focused on conventional delay-and-sum (DAS) beamforming, reducing the effects of multiple modes through frequency selectivity and transducer design. This paper describes an adaptive beamforming technique using a minimum variance distortionless response beamforming (MVBF) approach for spatial Lamb wave filtering with multiple-transmitter-multiple-receiver arrays. Dispersion is compensated for by using theoretically calculated dispersion curves. Simulations are used to evaluate the performance of the technique for suppressing interfering Lamb modes, both with and without the presence of mode conversion, using different array configurations. A simple simulation model of the plate is used to compare the performance of different sizes of active arrays. An aluminum plate with artificial defects is used for the experimental evaluation. The results show that the MVBF approach performs considerably better than the widely used standard beamformer in terms of resolution and suppression of interfering modes.
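The core of the minimum variance approach can be shown in a stripped-down narrowband setting (the Lamb wave case additionally requires dispersion compensation and multi-mode steering, which are omitted here; array size, angles, and noise levels below are illustrative assumptions). The weights w = R⁻¹a / (aᴴR⁻¹a) keep unit gain in the look direction while placing a null on the interferer, which DAS cannot do:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8                                     # elements, half-wavelength spacing

def steer(theta):                         # narrowband steering vector
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

a_sig = steer(0.0)                        # look direction (broadside)
a_int = steer(np.deg2rad(20))             # interfering direction

# Sample covariance from interferer-plus-noise snapshots (signal-free
# training data, for clarity), with light diagonal loading.
s = rng.standard_normal(200) + 1j * rng.standard_normal(200)
snaps = np.outer(a_int, s)
R = snaps @ snaps.conj().T / 200 + 0.01 * np.eye(M)

w_mvdr = np.linalg.solve(R, a_sig)
w_mvdr /= a_sig.conj() @ w_mvdr           # distortionless: w^H a_sig = 1
w_das = a_sig / M                         # delay-and-sum weights

def gain_db(w, a):
    return 20 * np.log10(abs(w.conj() @ a))

print("interferer gain, DAS :", gain_db(w_das, a_int))
print("interferer gain, MVDR:", gain_db(w_mvdr, a_int))
```

The adaptive weights suppress the interferer far more deeply than DAS while keeping the look-direction response exactly at 0 dB.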
In this paper, a computationally efficient algorithm for Bayesian joint change point (CP) detection (CPD) in multiple time series is presented. The data generation model includes a number of change configurations (CC), each affecting a unique subset of the time series, which introduces correlation between the positions of CPs in the monitored time series. The inference objective is to identify joint changes and the associated CC. The algorithm consists of two stages: First, a univariate CPD algorithm is applied separately to each of the involved time series. The outcomes of this step are maximum a posteriori (MAP) detected CPs and posterior distributions of CPs conditioned on the MAP CPs. These outcomes are combined to approximate the posterior for the CCs. In the second stage, dynamic programming is used to find the maxima of this approximate CC posterior. The algorithm is applied to synthetic data and is shown to be both significantly faster and more accurate than a previously proposed algorithm designed to solve similar problems. In addition, the initial algorithm is extended with steps from the Maximization-Maximization algorithm, which allows the hyperparameters of the data generation model to be estimated jointly with the CCs, and we show that these estimates coincide with estimates obtained from a Markov chain Monte Carlo algorithm.
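The flavor of the univariate first stage can be sketched for the simplest tractable case: one change in the mean of Gaussian data with known noise level and flat priors, where the segment marginal likelihood has a closed form. This is a minimal stand-in for the paper's more general model; all parameter values are illustrative.

```python
import numpy as np

def log_seg_marginal(seg, sigma):
    # Gaussian segment with unknown mean (flat prior) and known sigma:
    # log of the marginal likelihood, up to CP-independent constants.
    m = len(seg)
    ss = np.sum((seg - seg.mean()) ** 2)
    return -0.5 * np.log(m) - ss / (2 * sigma ** 2)

def single_cp_posterior(y, sigma=1.0):
    """Posterior over a single change point position, flat prior on the CP."""
    n = len(y)
    logp = np.array([log_seg_marginal(y[:t], sigma) +
                     log_seg_marginal(y[t:], sigma)
                     for t in range(1, n)])
    logp -= logp.max()                    # stabilize before exponentiation
    p = np.exp(logp)
    return p / p.sum()                    # posterior over t = 1 .. n-1

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])
post = single_cp_posterior(y)
tau_map = 1 + int(np.argmax(post))        # MAP change point
print("MAP CP:", tau_map)
```

The posterior, rather than only the MAP point, is what the second stage combines across series.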
In this paper, we present a new algorithm for joint change point (CP) detection (CPD) in multiple time series. The algorithm is a computationally more efficient version of the previously proposed JCPD algorithm, which is based on a dynamic program (DP). Here we show how to reduce the computational cost of the DP by introducing a pruning step that removes unnecessary computations. The algorithm uses a Bayesian data generation model, and the CPs are estimated in a two-stage procedure: First, a univariate CPD algorithm is applied separately to each time series. The outputs from this stage are univariate posterior distributions of CPs, which are then combined to approximate the joint, multivariate CP posterior. The second stage of the algorithm uses a DP to find the maxima of the joint CP posterior. In this work we show that the computational cost of the second stage can be reduced by using pruning techniques that preserve the optimality of the DP. We demonstrate the computational savings on sets of synthetic data; for one of the sets, the pruning step reduced the processing time by an order of magnitude. Finally, we demonstrate the practical applicability of the algorithm by applying it to measurements of channel gain from radio links inside a paper mill. Supplementary material is given in the following chapter, where we compare the algorithm with similar methods and also present an alternative pruning condition that leads to a faster, but approximate, algorithm.
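Optimality-preserving pruning in a segmentation DP can be sketched on a single series with a penalized Gaussian cost, using the pruning condition of the PELT algorithm: a candidate last-CP position is discarded as soon as it can never again be optimal. This is a simplified stand-in for the joint multivariate DP; the data and penalty below are illustrative.

```python
import numpy as np

def gauss_cost(cum, cum2, s, t):
    # -2 x maximized log-likelihood of y[s:t] under a constant-mean model
    n = t - s
    sm = cum[t] - cum[s]
    ss = cum2[t] - cum2[s]
    return ss - sm * sm / n

def pruned_cp_dp(y, beta):
    """Optimal penalized segmentation by DP with PELT-style pruning; the
    pruned result equals the full DP because discarded candidates are
    provably suboptimal for all future times."""
    n = len(y)
    cum = np.concatenate([[0.0], np.cumsum(y)])
    cum2 = np.concatenate([[0.0], np.cumsum(y ** 2)])
    F = np.full(n + 1, np.inf)
    F[0] = -beta
    last = np.zeros(n + 1, dtype=int)
    cand = [0]                                 # surviving candidate positions
    for t in range(1, n + 1):
        vals = [F[s] + gauss_cost(cum, cum2, s, t) + beta for s in cand]
        best = int(np.argmin(vals))
        F[t] = vals[best]
        last[t] = cand[best]
        # prune: keep s only if it may still become optimal later
        cand = [s for s, v in zip(cand, vals) if v - beta <= F[t]]
        cand.append(t)
    cps = []                                   # backtrack the optimal CPs
    t = n
    while last[t] > 0:
        cps.append(int(last[t]))
        t = last[t]
    return sorted(cps)

rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(0, 1, 50), rng.normal(5, 1, 50)])
cps = pruned_cp_dp(y, 15.0)
print("change points:", cps)
```

The candidate list typically stays short, which is where the order-of-magnitude savings come from.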
The reliability and throughput in an industrial wireless sensor network can be improved by incorporating the predictions of channel gains when forming routing tables. Necessary conditions for such predictions to be useful are that statistical dependences exist between the channel gains and that those dependences extend over a long enough time to accomplish a rerouting. In this paper, we have studied such long-term dependences in channel gains for fixed wireless links in three factories. Long-term fading properties were modeled using a switched regime model, and Bayesian change point detection was used to split the channel gain measurements into segments. In this way, we translated the study of long-term dependences in channel gains into the study of dependences between the fading distribution parameters describing the segments. We measured the strengths of the dependences using mutual information and found that such dependences exist in a majority of the examined links. The strongest dependence appeared between the mean received powers in adjacent segments, but we also found significant dependences between segment lengths. In addition to the study of statistical dependences, we present summaries of the distributions of the fading parameters extracted from the segments, as well as of the lengths of these segments.
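A plug-in estimate of mutual information between per-segment parameters can be computed from a two-dimensional histogram. The correlated pair below is synthetic, standing in for mean received powers in adjacent segments; the bin count and sample sizes are illustrative choices.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in mutual information estimate (in nats) from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    nz = pxy > 0                              # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
# stand-in for mean received power in adjacent segments: correlated pair
p1 = rng.normal(size=5000)
p2 = 0.8 * p1 + 0.6 * rng.normal(size=5000)
mi_dep = mutual_information(p1, p2)
mi_ind = mutual_information(p1, rng.normal(size=5000))
print("MI(dependent):  ", mi_dep)
print("MI(independent):", mi_ind)
```

The histogram estimator is biased upward for independent data, so in practice the value for an independent surrogate pair is a useful baseline.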
Model selection based on observed data sequences is used to decide between different model structures within the class of multinomial, Markov, and hidden Markov models. In a unified Bayesian treatment, we derive posterior probabilities for different model structures without assuming prior knowledge of transition probabilities. We emphasize the following tests: 1) Given a particular data sequence of n outcomes, is each state equally likely? 2) Do the data support an independent model, or is a Markov model a more plausible description? 3) Are two data sequences generated from a) the same Markov model? b) the same hidden Markov model? For Markov models and independent multinomial models, all results are exact. For hidden Markov models, the exact solution is computationally prohibitive, and instead, an approximate solution is proposed.
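For the multinomial case the posterior model probabilities are available in closed form. The sketch below implements test 1 ("is each state equally likely?") as a log Bayes factor between a point-uniform model and a multinomial with a uniform Dirichlet prior; the count vectors are illustrative.

```python
from math import lgamma, log

def log_marginal_dirichlet(counts, alpha=1.0):
    # log of the marginal likelihood of the counts under a symmetric
    # Dirichlet(alpha) prior on the state probabilities (exact result).
    K, n = len(counts), sum(counts)
    return (lgamma(K * alpha) - K * lgamma(alpha)
            + sum(lgamma(c + alpha) for c in counts)
            - lgamma(n + K * alpha))

def log_bayes_factor_uniform(counts):
    """log Bayes factor of 'all states equally likely' versus 'unknown
    probabilities' (uniform Dirichlet prior); positive favours uniform."""
    K, n = len(counts), sum(counts)
    return -n * log(K) - log_marginal_dirichlet(counts)

print(log_bayes_factor_uniform([50, 50, 50]))   # balanced: favours uniform
print(log_bayes_factor_uniform([120, 20, 10]))  # skewed: against uniform
```

The same marginal-likelihood building block, applied per row of a transition count matrix, underlies the Markov-model comparisons; the hidden Markov case needs the approximation mentioned above.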
This paper treats time-domain model-based Bayesian image reconstruction for ultrasonic array imaging and, in particular, two reconstruction methods are presented. These two methods are based on a linear model of the array imaging system and they perform compensation in both the spatial and temporal domains using a minimum mean squared error (MMSE) criterion and a maximum a posteriori (MAP) estimation approach, respectively. The presented estimators perform compensation for both the electrical and acoustical wave propagation effects of the ultrasonic array system at hand. The estimators also take uncertainties into account, and, by the incorporation of proper prior knowledge, high-contrast superresolution reconstruction results are obtained. The novel nonlinear MAP estimator constrains the scattering amplitudes to be positive, which applies in applications where the scatterers have higher acoustic impedance than the surrounding medium. The linear MMSE and nonlinear MAP estimators are compared with the traditional delay-and-sum (DAS) beamformer with respect to both resolution and signal-to-noise ratio. The algorithms are compared using both simulated and measured data. The results show that the model-based methods can successfully compensate for both sidelobes and grating lobes, and they have superior temporal and lateral resolution compared with DAS beamforming. The ability of the nonlinear MAP estimator to suppress noise is also superior to both the linear MMSE estimator and the DAS beamformer.
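For a linear model y = Hx + e with Gaussian priors, the linear MMSE estimate has the familiar regularized least-squares form. The sketch below uses a random stand-in for the system matrix H, which in the array-imaging setting would encode the electrical and propagation effects; sizes and variances are illustrative assumptions. (The nonlinear MAP variant additionally enforces x ≥ 0 and requires an iterative solver, not shown.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_pix = 80, 40
H = rng.normal(size=(n_obs, n_pix))       # stand-in for the system matrix
sigma2, c2 = 0.1, 1.0                     # noise and prior variances (assumed)

x_true = np.zeros(n_pix)
x_true[[5, 20, 21]] = [1.0, 2.0, 1.5]     # a few point scatterers
y = H @ x_true + np.sqrt(sigma2) * rng.normal(size=n_obs)

# Linear MMSE for y = Hx + e, x ~ N(0, c2 I), e ~ N(0, sigma2 I):
#   x_hat = (H^T H / sigma2 + I / c2)^{-1} H^T y / sigma2
A = H.T @ H / sigma2 + np.eye(n_pix) / c2
x_mmse = np.linalg.solve(A, H.T @ y / sigma2)
print("max abs reconstruction error:", np.max(np.abs(x_mmse - x_true)))
```

The prior term I/c2 is what regularizes the inversion and suppresses the noise amplification a plain least-squares solution would exhibit.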
Modern array systems allow for excitation of separate elements using arbitrary wave forms. This is utilized in pulse compression and coded excitation techniques to improve the imaging performance. Such techniques are, however, somewhat inflexible since they use predefined excitation schemes. This paper presents a more flexible method for optimizing the input signals to an ultrasonic array in such a way that the scattering strengths at arbitrarily chosen control points in the insonified object can be estimated with as small an error as possible, measured with a mean squared error criterion. The statistically motivated method is based on a linear model of the array imaging system and takes into account both prior information regarding the scattering strengths and measurement errors. The input signals are found using genetic optimization and are constrained to have finite duration and bounds on the maximum amplitudes. Different constellations of control points, and different signal-to-noise ratios, yield different excitation schemes. The design approach finds multiple selective focal laws when relatively well-separated control points are chosen; when the control points are closely spaced, the resulting excitations produce more diffuse fields. Because of the flexibility in choosing the control points, the design method will be useful when developing transmission schemes aiming at fast imaging of large image areas using few transmissions.
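The structure of such a search can be sketched in a drastically reduced setting: one control point, one channel, and a known impulse response, where minimizing the amplitude-estimation error reduces to maximizing the energy of the convolved excitation under amplitude and duration bounds. Everything below (impulse response, population sizes, mutation rate) is an illustrative assumption, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=16)     # stand-in impulse response (transducer + medium)
L = 12                      # allowed excitation duration in samples

def fitness(u):
    # With one control point, the variance of the scattering-strength
    # estimate scales as 1/||h * u||^2, so maximize the convolved energy.
    return float(np.sum(np.convolve(h, u) ** 2))

def genetic_search(pop=40, gens=150, mut=0.1):
    P = rng.uniform(-1, 1, size=(pop, L))             # bounded-amplitude signals
    for _ in range(gens):
        f = np.array([fitness(u) for u in P])
        parents = P[np.argsort(f)[::-1][: pop // 2]]  # keep the fitter half
        i, j = rng.integers(0, len(parents), (2, pop // 2))
        mask = rng.random((pop // 2, L)) < 0.5
        kids = np.where(mask, parents[i], parents[j]) # uniform crossover
        kids = kids + mut * rng.normal(size=kids.shape)
        np.clip(kids, -1.0, 1.0, out=kids)            # enforce amplitude bounds
        P = np.vstack([parents, kids])
    f = np.array([fitness(u) for u in P])
    return P[np.argmax(f)], float(f.max())

u_best, f_best = genetic_search()
print("best convolved energy:", f_best)
```

The amplitude clipping after mutation is what keeps the hard bounds satisfied throughout the search, mirroring the constrained optimization described above.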
This paper proposes the use of phase shift migration for ultrasonic imaging of layered objects and objects immersed in water. The method, which was developed in reflection seismology, is a frequency domain technique that in a computationally efficient way restores images of objects that are isotropic and homogeneous in the lateral direction but inhomogeneous in depth. The performance of the proposed method was evaluated using immersion test data from a block with side-drilled holes with an additional scatterer residing in water. In this way, the method's capability of simultaneously imaging scatterers in different media and at different depths was investigated. The method was also applied to a copper block with flat bottom holes. The results verify that the proposed method is capable of producing high-resolution and low-noise images for layered or immersed objects.
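The elementary operation in phase shift migration is the downward continuation of the measured wavefield spectrum through one homogeneous layer: each (kx, omega) component is multiplied by a phase factor exp(i·kz·dz) with kz = sqrt((omega/c)² - kx²), and evanescent components are discarded. The sketch below shows one such step (sign conventions, grid sizes, and material values are illustrative assumptions):

```python
import numpy as np

def extrapolate(P, kx, w, c, dz):
    """One phase-shift migration step: continue the wavefield spectrum
    P(kx, omega) a distance dz through a homogeneous layer with speed c."""
    kz2 = (w / c) ** 2 - kx[:, None] ** 2          # on the (kx, omega) grid
    prop = kz2 > 0                                 # propagating part only
    kz = np.sqrt(np.where(prop, kz2, 0.0))
    return np.where(prop, P * np.exp(1j * kz * dz), 0.0)

nk, nw = 64, 64
kx = 2 * np.pi * np.fft.fftfreq(nk, d=1e-3)        # 1 mm element pitch
w = 2 * np.pi * np.linspace(0.1e6, 5e6, nw)        # temporal frequencies
P = np.ones((nk, nw), dtype=complex)               # toy spectrum
P_down = extrapolate(P, kx, w, 1480.0, 2e-3)       # e.g. 2 mm of water
P_back = extrapolate(P_down, kx, w, 1480.0, -2e-3) # inverse step recovers P
```

Layered or immersed objects are handled by chaining such steps with the layer-specific wave speed, imaging at each depth as the field is continued downward.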
The problem of evaluating compound probability distributions in which one of the involved distributions is normal, a problem that frequently occurs when modeling communication channels in indoor industrial environments, is considered. Three different methods are investigated and compared. They are here named Gauss-Newton-Raphson (GNR), which is based on the Laplace approximation, Gauss-Hermite Quadrature (GHQ), and the Discrete Convolutional Sum (DCS). The three methods are evaluated on a one-point problem assuming data that are continuous in amplitude, and on a problem where data are assumed to be received in quantized bins. The relative integral approximation error resulting from computing the compound distribution is evaluated for the different methods, and their complexities are compared. Simulations are provided to illustrate the advantages and disadvantages of the different methods.
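The GHQ idea can be shown in a few lines: after the standard change of variables, the integral of a conditional density against a normal mixing distribution becomes a weighted sum over Hermite nodes. The sketch below verifies it on a case with a closed form (normal compounded with normal); the conditional density and parameter values are illustrative.

```python
import numpy as np
from math import sqrt, pi, exp

def compound_pdf_gh(f_cond, x, mu, sigma, n_nodes=20):
    """Gauss-Hermite approximation of the compound density
    p(x) = integral of f_cond(x, s) * N(s; mu, sigma^2) ds."""
    t, wgt = np.polynomial.hermite.hermgauss(n_nodes)
    s = mu + sqrt(2.0) * sigma * t            # standard GH change of variables
    return float(np.sum(wgt * np.array([f_cond(x, si) for si in s])) / sqrt(pi))

def gauss_pdf(x, m, var):
    return exp(-(x - m) ** 2 / (2 * var)) / sqrt(2 * pi * var)

# Normal compounded with normal has the closed form N(x; mu, var1 + var2)
approx = compound_pdf_gh(lambda x, s: gauss_pdf(x, s, 1.0), 0.5, 0.0, 1.0)
exact = gauss_pdf(0.5, 0.0, 2.0)
print(approx, exact)
```

For smooth conditional densities a modest number of nodes already gives a very small relative error, which is the trade-off against GNR and DCS studied above.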
This paper presents an investigation of how to model the statistical properties of radio channels arising in industrial environments over long time horizons, e.g., hours and days. Based on extensive measurement campaigns, conducted at three different factory buildings, it is shown that for mobile transceivers the fading characteristics are Rayleigh or close to Rayleigh. However, for transceivers mounted at fixed locations, the use of conventional single fading distributions is not sufficient. It is shown that a suitable model structure for describing the fading properties of the radio channels, as measured by power, is a mixture of gamma and compound gamma-lognormal distributions. Furthermore, the complexity of the model generally increases with the observation interval. A model selection approach based on a connection between Kullback's mean discrimination information and the log-likelihood provides a robust choice of model structure. We show that while a (semi-)Markov chain constitutes a suitable model for the channel dynamics, the time dependence of the data can be neglected in the estimation of the parameters of the mixture distributions. Neglecting the time dependence in the data leads to a more efficient parametrization. Moreover, it is shown that the considered class of mixture distributions is identifiable for both continuous and quantized data under certain conditions, and under those conditions a maximum likelihood estimator derived under an independence assumption is shown to give consistent parameter estimates also for data that are not independent. The parameter estimates are obtained by maximizing the log-likelihood using a genetic algorithm combined with a local interior-point algorithm.
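As a simplified stand-in for mixture fitting and likelihood-based structure selection (the actual model uses gamma and gamma-lognormal components with a genetic plus interior-point search), the sketch below fits a two-component Gaussian mixture to synthetic log-power data by EM and compares its log-likelihood with a single-component fit:

```python
import numpy as np

def em_gmm(x, K, iters=200, seed=0):
    """EM for a 1-D K-component Gaussian mixture; returns weights, means,
    variances, and the final log-likelihood."""
    rng = np.random.default_rng(seed)
    w = np.full(K, 1.0 / K)
    mu = rng.choice(x, K, replace=False)          # initialize at data points
    var = np.full(K, x.var())
    for _ in range(iters):
        # E-step: log responsibilities, with the usual max-shift for stability
        logp = (-0.5 * (x[:, None] - mu) ** 2 / var
                - 0.5 * np.log(2 * np.pi * var) + np.log(w))
        m = logp.max(axis=1, keepdims=True)
        r = np.exp(logp - m)
        ll = float(np.sum(m.ravel() + np.log(r.sum(axis=1))))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted moment updates
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var, ll

rng = np.random.default_rng(1)
# synthetic stand-in for log-power with two fading regimes (dB values assumed)
x = np.concatenate([rng.normal(-80, 2, 2000), rng.normal(-65, 3, 1000)])
_, mu2, _, ll2 = em_gmm(x, 2)
_, _, _, ll1 = em_gmm(x, 1)
print("means:", np.sort(mu2), "  log-likelihood gain:", ll2 - ll1)
```

The log-likelihood gain of the richer structure is the quantity that, suitably penalized, drives the Kullback-based model selection described above.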
In this paper, a new computationally efficient sparse deconvolution algorithm for use on B-scan images from objects with relatively few scattering targets is presented. It is based on a linear image formation model that has been used earlier in connection with linear minimum mean squared error (MMSE) two-dimensional (2-D) deconvolution. The MMSE deconvolution results have shown improved resolution compared with the synthetic aperture focusing technique (SAFT), but at the cost of increased computation time. The proposed algorithm exploits the sparsity of the image, reducing the degrees of freedom in the reconstruction problem, to reduce the computation time and improve the resolution. The dominating task in the algorithm is detecting the set of active scattering targets, which is done by iterating between an updating pass that detects new points to include in the set and a downdating pass that removes redundant points. In the updating pass, a spatio-temporal matched filter is used to isolate potential candidates, and a subset of those is chosen using a detection criterion. The amplitudes of the detected scatterers are found by MMSE. The algorithm properties are illustrated using synthetic and real B-scans. The results show excellent resolution-enhancement and noise-suppression capabilities. The involved computation times are analyzed.
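The updating pass can be sketched in one dimension: a matched filter on the residual proposes the next scatterer position, and the amplitudes of the active set are re-estimated by regularized least squares (an MMSE-like step). This omits the downdating pass and the detection criterion, and the pulse, positions, and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 200
# columns of H: the known pulse shifted to each candidate scatterer position
pulse = np.array([0.3, 1.0, -0.8, 0.2])
H = np.zeros((n, p))
for j in range(p - len(pulse)):
    H[j:j + len(pulse), j] = pulse

x_true = np.zeros(p)
x_true[[40, 120]] = [2.0, -1.5]                # two point scatterers
y = H @ x_true + 0.05 * rng.normal(size=n)

active, res = [], y.copy()
for _ in range(2):                              # updating pass: detect new points
    corr = H.T @ res                            # spatio-temporal matched filter
    j = int(np.argmax(np.abs(corr)))
    active.append(j)
    # amplitudes of the active set by regularized least squares
    Ha = H[:, active]
    amp = np.linalg.solve(Ha.T @ Ha + 1e-3 * np.eye(len(active)), Ha.T @ y)
    res = y - Ha @ amp                          # residual for the next detection
print("detected positions:", sorted(active))
```

Because only the few active columns enter the solve, the cost stays far below that of a full-image MMSE deconvolution, which is the source of the claimed speedup.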
The synthetic aperture focusing technique (SAFT) is used to create focused images from ultrasound scans. SAFT has traditionally been applied only for imaging in a single medium, but the recently introduced phase shift migration (PSM) algorithm has expanded the use of SAFT to multilayer structures. In this article we present a similar focusing algorithm called multi-layer omega-k (MULOK), which combines PSM and the omega-k algorithm to perform multilayer imaging more efficiently. The asymptotic complexity is shown to be lower for MULOK than for PSM, and this is confirmed by comparing execution times for implementations of both algorithms. To facilitate the complexity analysis, a detailed description of algorithm implementation is included, which also serves as a guide for readers interested in practical implementation. Using data from an experiment with a multilayered structure, we show that there is essentially no difference in image quality between the two algorithms.