This paper focuses on distributed non-Bayesian quickest detection of a change in the probability distribution of a random process observed by a wireless sensor network (WSN), where the distributions before and after the change point are assumed to be known. The individual sensors are capable of harvesting energy from their surroundings, and each sensor decides whether to sense the observation signal depending on the energy at its disposal. If enough energy is available in its battery for sensing and processing, the sensor takes a sample of the observation signal and computes the log-likelihood ratio (LLR) of the two aforementioned distributions. Otherwise, the sensor abstains from sensing during that time slot and waits until a future time slot by which it has accumulated enough energy to perform the sensing and processing. Once a sensor computes the LLR, it uses that information to update the cumulative sum (CUSUM) test statistic and arrive at a local decision about the change point. When a change is detected, these local decisions are sent to the fusion center (FC), provided the transmitting sensor has enough energy to transmit its decision successfully, where they are collated into a single decision about the change point according to a pre-decided fusion rule. In this work, using asymptotic results on the detection delay of the CUSUM test for a single sensor, we derive asymptotic results for the expected detection delay (when the change occurs) under three common fusion rules, namely, the OR, AND, and $r$-out-of-$N$ rules. These results are analyzed for the scenario where the average harvested energy ($\mathit{\bar{H}}$) at each sensor is greater than or equal to the amount of energy required for sensing ($E_{s}$). We show that in such cases, the standard existing asymptotic results for the CUSUM test hold for the local decisions.
Consequently, we have determined corresponding results for the detection delay for decisions taken at the FC with the three aforementioned fusion rules by using the theory of order statistics. Numerical results are provided to support the theoretical claims.
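The local decision rule described above is the classical CUSUM recursion: each new sample's LLR is added to a running statistic that is clamped at zero, and a change is declared once the statistic crosses a threshold. The following toy sketch (our own illustration, not the paper's implementation) shows the recursion for a mean shift between two known Gaussian distributions; the means, variance, and threshold are illustrative choices, not values from the paper.

```python
def cusum_change_point(samples, mu0, mu1, sigma, h):
    """CUSUM test for a mean shift from N(mu0, sigma^2) to N(mu1, sigma^2):
    accumulate the per-sample log-likelihood ratio, clamp the statistic
    at zero, and raise a local alarm once it crosses the threshold h."""
    W = 0.0
    for k, x in enumerate(samples):
        # LLR of the post-change vs. pre-change Gaussian for one sample
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        W = max(0.0, W + llr)
        if W >= h:
            return k  # local alarm: change declared at sample index k
    return None  # no change detected

# Mean shifts from 0 to 1 at sample 50; each post-change sample adds 0.5
# to the statistic, so the alarm fires 10 samples later, at index 59.
alarm = cusum_change_point([0.0] * 50 + [1.0] * 50, 0.0, 1.0, 1.0, 5.0)
```

Before the change the expected LLR is negative, so the clamp at zero keeps the statistic from drifting downward; after the change the positive LLR drift drives it over the threshold, which is what yields the logarithmic detection-delay asymptotics used in the analysis.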
In this paper, we investigate the performance of distributed estimation schemes in a wireless sensor network in the presence of an eavesdropper. The sensors transmit observations to the fusion center (FC), and these transmissions are simultaneously overheard by the eavesdropper. Both the FC and the eavesdropper reconstruct a minimum mean-squared error estimate of the physical quantity observed. We address the problem of transmit power allocation for system performance optimization subject to a total average power constraint on the sensor(s) and a security/secrecy constraint on the eavesdropper. We mainly focus on two scenarios: 1) a single sensor with multiple transmit antennas and 2) multiple sensors, each with a single transmit antenna. For each scenario, given perfect channel state information (CSI) of the FC and full or partial CSI of the eavesdropper, we derive the transmission policies for the short-term and long-term cases. For the long-term power allocation case, when the sensor is equipped with multiple antennas, we can achieve zero information leakage in the full-CSI case and dramatically enhance the system performance by deploying the artificial noise technique in the partial-CSI case. Asymptotic expressions are derived for the long-term distortion at the FC as the number of sensors or the number of antennas becomes large. In addition, we consider the multiple-sensor multiple-antenna scenario, and simulations show that, given the same total number of transmit antennas, the multiple-antenna sensor network outperforms the multiple-sensor single-antenna network.
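The distortion metric in this abstract is the mean-squared error of a linear MMSE estimate formed at the FC from the sensors' noisy observations. As a minimal sketch of that fusion step (a generic LMMSE combiner under an assumed scalar Gaussian model, not the paper's power-allocation solution): each link contributes an SNR-like term to the posterior precision, and the resulting MSE is the distortion the power allocation then optimizes.

```python
import numpy as np

def lmmse_fusion(z, h, noise_var, prior_var):
    """Linear MMSE estimate of a zero-mean scalar source theta with
    variance prior_var, from observations z_i = h_i * theta + n_i,
    n_i ~ N(0, noise_var) independent across sensors.
    Returns the estimate and the resulting MSE (distortion)."""
    z = np.asarray(z, dtype=float)
    h = np.asarray(h, dtype=float)
    # Posterior precision = prior precision + per-link effective SNR terms
    post_prec = 1.0 / prior_var + np.sum(h ** 2 / noise_var)
    mse = 1.0 / post_prec
    theta_hat = mse * np.sum(h * z / noise_var)
    return theta_hat, mse

# Two identical unit-gain links halve the prior uncertainty twice over:
theta_hat, mse = lmmse_fusion([3.0, 3.0], [1.0, 1.0], 1.0, 1.0)
```

Because each sensor's term enters the precision additively, stronger links (larger $h_i^2/\sigma_i^2$) reduce the distortion fastest, which is the intuition behind allocating transmit power across sensors or antennas subject to the secrecy constraint.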
In this paper, we study the problem of pulsed-radar transmit code design for detection of moving targets in the presence of signal-dependent clutter. For unknown target Doppler shift, the optimal detector does not lead to a closed-form expression. Therefore, we resort to average and worst case performance metrics of the optimal detector for code design. We propose several algorithms under two novel frameworks to solve highly nonconvex design problems. We also consider low peak-to-average-power ratio (PAR) code design.
Radio frequency interference (RFI) mitigation is critical to the proper operation of ultrawideband (UWB) radar systems because RFI can severely degrade the radar imaging capability and target detection performance. In this article, we address the RFI mitigation problem for one-bit UWB radar systems. A one-bit UWB system obtains its signed measurements via a low-cost, high-rate sampling scheme, referred to as the continuous-time binary value (CTBV) technology. This sampling strategy compares the signal to a known threshold that varies with slow time and can be used to achieve a high sampling rate and quantization resolution with simple and affordable hardware. This article establishes a proper data model for the RFI sources and proposes a novel RFI mitigation method for the one-bit UWB radar system that uses the CTBV sampling technique. Specifically, we model the RFI sources as a sum of sinusoids with frequencies fixed during the coherent processing interval (CPI), and we exploit the sparsity of the RFI spectrum. We use an extended majorization-minimization-based 1bRELAX algorithm, referred to as 1bMMRELAX, to estimate the RFI source parameters from the signed measurements obtained by using the CTBV sampling strategy. We also devise a new fast frequency initialization method for the extended 1bMMRELAX algorithm to improve its computational efficiency. Moreover, a sparse method is introduced to recover the desired radar echoes using the estimated RFI parameters. Both simulated and experimental results are presented to demonstrate that our proposed algorithm outperforms the existing digital integration method, especially for severe RFI cases.
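The CTBV front end described above keeps only the sign of the received signal relative to a known, slow-time-varying threshold. The toy sketch below (our own illustration of that comparator step, not the authors' hardware or the 1bMMRELAX algorithm) shows how the same echo yields different sign patterns under different thresholds, which is what lets downstream processing recover amplitude information from one-bit data.

```python
import numpy as np

def ctbv_sample(x, thresholds):
    """One-bit sampling in the spirit of CTBV: each fast-time sample of
    pulse m is compared against that pulse's known threshold, and only
    the sign (+1 or -1) is retained. Varying the threshold across slow
    time preserves amplitude information despite one-bit quantization."""
    x = np.asarray(x, dtype=float)              # shape: (pulses, fast-time)
    t = np.asarray(thresholds, dtype=float)[:, None]  # one threshold per pulse
    return np.where(x >= t, 1, -1)

# Two pulses of the same echo, sampled against different thresholds:
# the sign patterns differ only because the threshold changed.
bits = ctbv_sample([[0.5, -0.2], [0.5, -0.2]], [0.0, 0.6])
```

A parameter-estimation stage such as 1bMMRELAX then fits the sinusoidal RFI model to these signed measurements, using knowledge of the per-pulse thresholds.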