In this paper, we propose a new architecture to enhance the privacy and security of networked control systems against malicious adversaries. We consider an adversary that first learns the system using system identification techniques (privacy), and then performs a data injection attack (security). In particular, we consider an adversary conducting zero-dynamics attacks (ZDA), which maximize the performance cost of the system whilst staying undetected. Using the proposed architecture, we show that it is possible to (i) introduce significant bias in the system estimates obtained by the adversary, thus providing privacy, and (ii) efficiently detect attacks when the adversary performs a ZDA using the identified system, thus providing security. Through numerical simulations, we illustrate the efficacy of the proposed architecture.
In this paper, we address the problem of covert communication in the presence of multiple wardens with a finite blocklength. The system consists of Alice, who aims to covertly transmit to Bob with the help of a jammer, and a Fusion Center (FC), which combines all the wardens' information and decides on the presence or absence of Alice's transmission. Both Alice and the jammer vary their signal power randomly to confuse the FC; in turn, the FC randomly changes its threshold to confuse Alice. The main focus of the paper is to study the impact of employing multiple wardens on the trade-off between the probability of error at the FC and the outage probability at Bob. Hence, we formulate the probability of error and the outage probability under the assumption that the channels from Alice and the jammer to Bob are subject to Rayleigh fading, while the channels from Alice and the jammer to the wardens are not subject to fading. We then utilize a two-player zero-sum game to model the interaction between Alice and the jammer, acting jointly as one player, and the FC as the second player. We derive a pay-off function that can be efficiently computed using linear programming to find the optimal distributions of the transmitting and jamming powers as well as the thresholds used by the FC. Analytical results and numerical simulations show that a cooperative jammer neutralizes the advantage the FC gains from using multiple wardens.
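As an illustration of the game-theoretic step above: the paper computes mixed equilibria via linear programming, but for a small payoff matrix the same equilibrium can be approximated with fictitious play. This is a generic stand-in (the payoff matrix below is an arbitrary matching-pennies example, not the paper's pay-off function):

```python
# Fictitious play for a small two-player zero-sum game: each player
# repeatedly best-responds to the opponent's empirical strategy; the
# empirical mixes converge to a mixed-strategy equilibrium.
# The payoff matrix is an invented example, not the paper's pay-off.

A = [[1.0, -1.0],
     [-1.0, 1.0]]  # matching pennies: value 0, equilibrium (1/2, 1/2)

def fictitious_play(A, iters=20000):
    m, n = len(A), len(A[0])
    row_counts, col_counts = [0] * m, [0] * n
    row_counts[0] += 1
    col_counts[0] += 1
    for _ in range(iters):
        # Row player (maximizer) best-responds to the column empirical mix.
        row_pay = [sum(A[i][j] * col_counts[j] for j in range(n)) for i in range(m)]
        row_counts[max(range(m), key=lambda i: row_pay[i])] += 1
        # Column player (minimizer) best-responds to the row empirical mix.
        col_pay = [sum(A[i][j] * row_counts[i] for i in range(m)) for j in range(n)]
        col_counts[min(range(n), key=lambda j: col_pay[j])] += 1
    x = [c / sum(row_counts) for c in row_counts]
    y = [c / sum(col_counts) for c in col_counts]
    value = sum(A[i][j] * x[i] * y[j] for i in range(m) for j in range(n))
    return x, y, value

x, y, v = fictitious_play(A)
```

An LP formulation, as used in the paper, would return the exact equilibrium; fictitious play only approximates it but needs no solver.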
Because of modern societies’ dependence on industrial control systems, adequate response to system failures is essential. In order to take appropriate measures, it is crucial for operators to be able to distinguish between intentional attacks and accidental technical failures. However, adequate decision support for this matter is lacking. In this paper, we use Bayesian Networks (BNs) to distinguish between intentional attacks and accidental technical failures, based on contributory factors and observations (or test results). To facilitate knowledge elicitation, we use extended fishbone diagrams for discussions with experts, and then translate those into the BN formalism. We demonstrate the methodology using an example in a case study from the water management domain.
Water management infrastructures such as floodgates are critical and increasingly operated by Industrial Control Systems (ICS). These systems are becoming more connected to the internet, either directly or through corporate networks, which makes them vulnerable to cyber-attacks. Abnormal behaviour in floodgates operated by ICS could be caused by both (intentional) attacks and (accidental) technical failures. When operators notice abnormal behaviour, they should be able to distinguish between those two causes to take appropriate measures: for example, replacing a sensor in case of intentionally manipulated sensor measurements would be ineffective and would not block the corresponding attack vector. In previous work, we developed the attack-failure distinguisher framework for constructing Bayesian Network (BN) models to enable operators to distinguish between those two causes, including the knowledge elicitation method to construct the directed acyclic graph and conditional probability tables of BN models. As a full case study of the attack-failure distinguisher framework, this paper presents a BN model constructed to distinguish between attacks and technical failures for the problem of incorrect sensor measurements in floodgates, addressing the problem faced by floodgate operators. We engaged experts who associate themselves with the safety and/or security community to construct the BN model and validate the qualitative part of the constructed BN model. The constructed BN model is usable in water management infrastructures to distinguish between intentional attacks and accidental technical failures in case of incorrect sensor measurements. This could help to decide on appropriate response strategies and avoid further complications in case of incorrect sensor measurements.
Both intentional attacks and accidental technical failures can lead to abnormal behaviour in components of industrial control systems. In our previous work, we developed a framework for constructing Bayesian Network (BN) models to enable operators to distinguish between those two classes, including knowledge elicitation to construct the directed acyclic graph of BN models. In this paper, we add a systematic method for knowledge elicitation to construct the Conditional Probability Tables (CPTs) of BN models, thereby completing a holistic framework to distinguish between attacks and technical failures. In order to elicit reliable probabilities from experts, we need to reduce the workload of experts in probability elicitation by reducing the number of conditional probabilities to elicit and facilitating individual probability entry. We utilise DeMorgan models to reduce the number of conditional probabilities to elicit, as they are suitable for modelling opposing influences, i.e., combinations of influences that promote and inhibit the child event. To facilitate individual probability entry, we use probability scales with numerical and verbal anchors. We demonstrate the proposed approach using an example from the water management domain.
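To illustrate why such canonical models reduce elicitation workload: instead of a full CPT (exponential in the number of parents), a DeMorgan-style gate needs only one weight per parent. The sketch below is one plausible parameterization, a noisy-OR over promoting causes combined with multiplicative inhibitors; the paper's exact DeMorgan model may differ, and all weights are invented:

```python
# DeMorgan-style CPT entry (illustrative parameterization): active
# promoting causes combine via a noisy-OR with a baseline (leak)
# probability, and each active inhibiting cause multiplicatively
# suppresses the child event. One weight per parent suffices, instead
# of a conditional probability per parent-state combination.

def demorgan_prob(baseline, promoters, inhibitors):
    """P(child) from (active, weight) pairs of promoting/inhibiting causes."""
    fail = 1.0 - baseline
    for active, w in promoters:       # noisy-OR over active promoters
        if active:
            fail *= 1.0 - w
    p = 1.0 - fail
    for active, w in inhibitors:      # each active inhibitor suppresses p
        if active:
            p *= 1.0 - w
    return p

# With a 0.1 leak, one active promoter (0.8) and one active inhibitor (0.5):
# (1 - 0.9 * 0.2) * 0.5 = 0.41
p = demorgan_prob(0.1, [(True, 0.8)], [(True, 0.5)])
```

With n binary parents, a full CPT requires 2^n probabilities, while a gate of this kind requires only n + 1 elicited numbers, which is the workload reduction the framework targets.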
This tutorial provides a high-level introduction to novel control-theoretic approaches for the security and privacy of cyber-physical systems (CPS). It takes a risk-based approach to the problem and develops a model framework that allows us to introduce and relate many of the recent contributions to the area. In particular, we explore the concept of risk in the context of CPS under cyber-attacks, paying special attention to the characterization of attack scenarios and to the interpretation of impact and likelihood for CPS. The risk management framework is then used to give an overview of and map different contributions in the area to three core parts of the framework: attack scenario description, quantification of impact and likelihood, and mitigation strategies. The overview is by no means complete, but it illustrates the breadth of the problems considered and the control-theoretic solutions proposed so far.
This paper addresses the issue of data injection attacks on actuators in control systems. Considering attacks that aim at maximizing impact while remaining undetected, the paper revisits the recently proposed output-to-output gain, which is compared to classical sensitivity metrics such as the H-infinity norm and the H-minus index. In its original formulation, the output-to-output gain is unbounded for strictly proper systems. This limitation is further investigated and addressed by modifying the performance output of the system and ensuring that the system from attack signal to performance output is also strictly proper. With this system description, and by using the theory of dissipative systems, a Bilinear Matrix Inequality (BMI) is formulated for system design. Using this BMI, a design algorithm is proposed based on the heuristic of alternating minimization. Through numerical simulations of the proposed algorithm, it is found that the output-to-output gain presents advantages over the other metrics: the effect of the attack is reduced in the performance output and increased in the detection output in a relatively large spectrum of frequencies.
In this paper, we consider the optimal controller design problem against data injection attacks on actuators for an uncertain control system. We consider attacks that aim at maximizing the attack impact while remaining stealthy in the finite horizon. To this end, we use the Conditional Value-at-Risk to characterize the risk associated with the impact of attacks. The worst-case attack impact is characterized using the recently proposed output-to-output l(2)-gain (OOG). We formulate the design problem and observe that it is non-convex and hard to solve. Using the framework of scenario-based optimization and a convex proxy for the OOG, we propose a convex optimization problem that approximately solves the design problem with probabilistic certificates. Finally, we illustrate the results through a numerical example.
This paper addresses the issue of data injection attacks on control systems. We consider attacks that aim at maximizing system disruption while staying undetected in the finite horizon. The maximum possible disruption caused by such attacks is formulated as a non-convex optimization problem whose dual problem is a convex semi-definite program. We show that the duality gap is zero using the S-lemma. To determine the optimal attack vector, we formulate a soft-constrained optimization problem using the Lagrangian dual function. The framework of dynamic programming for indefinite cost functions is used to solve the soft-constrained optimization problem and determine the attack vector. Using the Karush-Kuhn-Tucker conditions, we also provide necessary and sufficient conditions under which the obtained attack vector is optimal for the primal problem. Finally, we illustrate the results through numerical examples.
This paper firstly addresses the problem of risk assessment under false data injection attacks on uncertain control systems. We consider an adversary with complete system knowledge, injecting stealthy false data into an uncertain control system. We then use the Value-at-Risk to characterize the risk associated with the attack impact caused by the adversary. The worst-case attack impact is characterized by the recently proposed output-to-output gain. We observe that the risk assessment problem corresponds to an infinite non-convex robust optimization problem. To this end, we use dissipative system theory and the scenario approach to approximate the risk-assessment problem into a convex problem and also provide probabilistic certificates on the approximation. Secondly, we consider the problem of security measure allocation. We consider an operator with a constraint on the security budget. Under this constraint, we propose an algorithm to optimally allocate the security measures using the calculated risk such that the resulting Value-at-Risk is minimized. Finally, we illustrate the results through a numerical example. The numerical example also illustrates that security allocation based on the Value-at-Risk and allocation based on the impact on the nominal system may have different outcomes, thereby depicting the benefit of using risk metrics.
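To make the scenario flavor of this risk metric concrete: given sampled attack impacts (one per drawn scenario), the empirical Value-at-Risk at level alpha is the smallest impact level that is exceeded with empirical probability at most alpha. A minimal sketch with invented impact samples (the paper couples this with dissipativity-based worst-case impact computation):

```python
# Empirical Value-at-Risk from sampled attack impacts:
# VaR_alpha = inf{x : P(impact > x) <= alpha}, estimated from scenarios
# by taking the ceil((1 - alpha) * n)-th order statistic.
import math

def empirical_var(impacts, alpha):
    """Smallest sampled level exceeded with empirical probability <= alpha."""
    s = sorted(impacts)
    idx = math.ceil((1.0 - alpha) * len(s)) - 1
    return s[idx]

impacts = list(range(1, 101))          # stand-in for 100 sampled attack impacts
var95 = empirical_var(impacts, 0.05)   # 95% Value-at-Risk of the samples
```

Here exactly 5 of the 100 samples exceed the returned level, matching alpha = 0.05; scenario-approach theory then attaches probabilistic certificates to such sample-based quantities.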
This article addresses the detection of stealthy attacks on sensor measurements. Inspired by authentication schemes with weak cryptographic guarantees, we propose a watermarking approach to validate the data and its source. In particular, we propose a multiplicative scheme, where the sensor outputs are watermarked by a bank of filters, then transmitted through the possibly unsecured communication network. The original measurement data are finally reconstructed by a watermark remover. To allow the detection of replay attacks, the watermarking filters are devised as hybrid switching systems, whose parameters are assumed to be unknown to the adversary. Design rules are provided, guaranteeing that the nominal closed-loop performance is not deteriorated by the watermarking scheme and ensuring robust stability with mismatched filter parameters. Moreover, we design a switching protocol with no communication overhead to allow the watermarking filters to synchronously update their parameters. The detectability properties of cyber-attacks are analyzed, and the results are illustrated through numerical examples for replay and data injection attacks.
This chapter addresses the problem of detecting stealthy data injection attacks on sensor measurements in a networked control system. A multiplicative watermarking scheme is proposed, where the data from each sensor is post-processed by a time-varying filter called watermark generator. At the controller’s side, the watermark is removed from each channel by another filter, called the watermark remover, thus reconstructing the original signal. The parameters of each remover are matched to those of the corresponding generator, and are supposed to be a shared secret not known by the attacker. The rationale for time-varying watermarks is to allow model-based schemes to detect otherwise stealthy attacks by constantly introducing mismatches between the actual and the nominal dynamics used by the detector. A specific model-based diagnosis algorithm is designed to this end. Under the proposed watermarking scheme, the robustness and the detectability properties of the model-based detector are analyzed and guidelines for designing the watermarking filters are derived. Distinctive features of the proposed approach, with respect to other solutions like end-to-end encryption, are that the scheme is lightweight enough to be applied also to legacy control systems, the absence of side-effects such as delays, and the possibility of utilizing a robust controller to operate the closed-loop system in the event of the transmitter and receiver losing synchronization of their watermarking filters. The results are illustrated through numerical examples.
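The generator/remover pairing described above can be illustrated with a toy first-order filter pair: the generator filters each sensor sample with a secret parameter, and the matched remover inverts the filter to reconstruct the original signal. Real designs use richer, time-varying filters; the filter structure and numbers below are invented for illustration:

```python
# Toy multiplicative watermarking pair: the generator applies the FIR
# filter W(z) = 1 + a z^{-1} to the sensor signal, and the matched
# remover applies 1 / W(z) to recover it. The parameter a plays the
# role of the shared secret; structure and values are illustrative only.

def generator(samples, a):
    out, prev = [], 0.0
    for y in samples:
        out.append(y + a * prev)   # y_w[k] = y[k] + a * y[k-1]
        prev = y
    return out

def remover(watermarked, a):
    out, prev = [], 0.0
    for w in watermarked:
        y = w - a * prev           # invert using the previous recovered sample
        out.append(y)
        prev = y
    return out

signal = [1.0, 2.0, -0.5, 3.0]
recovered = remover(generator(signal, a=0.7), a=0.7)
```

With matched parameters the signal is reconstructed exactly; with a mismatched parameter at the remover, a persistent reconstruction error appears, which is precisely what lets a model-based detector flag an attacker who replays or injects data without knowing the secret filter.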
In this introductory chapter, we illustrate the book’s motivation and objective. In particular, the book takes its raison d’être from the need for protecting Cyber-Physical Systems (CPSs) against threats originating either in the cyber or in the physical domain. Exploring the concepts of safety, security, and privacy for CPSs thus emerged as the natural goal to reach. In order to better support this objective and to help the reader to navigate the book contents, a taxonomy of the above-mentioned concepts is introduced, based on a set of three triads, including the well-known Confidentiality, Integrity, and Availability triad which was introduced in the Information Technology security literature.
In networked control systems, leveraging the peculiarities of the cyber-physical domains and their interactions may lead to novel detection and defense mechanisms against malicious cyber-attacks. In this paper, we propose a multiplicative sensor watermarking scheme, where each sensor's output is separately watermarked by a Single Input Single Output (SISO) filter. Hence, such a scheme does not require communication between multiple sensors, but can still lead to detection and isolation of malicious cyber-attacks. In particular, we analyze the benefits of the proposed watermarking scheme for two attack scenarios: the physical sensor re-routing attack and the cyber measurement re-routing attack. For each attack scenario, detectability and isolability properties are analyzed with and without the proposed watermarking scheme, and we show how the watermarking scheme can be leveraged to detect cyber sensor routing attacks. In order to detect compromised sensors, we design an observer-based detector with a robust adaptive threshold. Additionally, we identify the sensors involved in the re-routing attacks by means of a tailored Recursive Least Squares parameter estimation algorithm. The results are illustrated through a numerical example.
This paper addresses the design of an active cyber-attack detection architecture based on multiplicative watermarking, allowing for detection of covert attacks. We propose an optimal design problem, relying on the so-called output-to-output l(2)-gain, which characterizes the maximum gain between the residual output of a detection scheme and some performance output. Although optimal, this control problem is non-convex. Hence, we propose an algorithm to design the watermarking filters by solving the problem suboptimally via LMIs. We show that, against covert attacks, the output-to-output l(2)-gain is unbounded without watermarking, and we provide a sufficient condition for boundedness in the presence of watermarks.
The alternating direction method of multipliers (ADMM) has emerged as a powerful technique for large-scale structured optimization. Despite many recent results on the convergence properties of ADMM, a quantitative characterization of the impact of the algorithm parameters on the convergence times of the method is still lacking. In this paper we find the optimal algorithm parameters that minimize the convergence factor of the ADMM iterates in the context of l2-regularized minimization and constrained quadratic programming. Numerical examples show that our parameter selection rules significantly outperform existing alternatives in the literature.
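The dependence of ADMM's convergence on the penalty parameter can be seen even in a scalar l2-regularized least-squares problem. The sketch below runs the plain ADMM iterations with an arbitrary penalty rho; the paper's contribution is the rule for choosing rho optimally, which this generic illustration does not implement:

```python
# ADMM on a scalar l2-regularized least-squares problem:
#   minimize 0.5 * (a*x - b)**2 + 0.5 * lam * z**2   subject to x == z,
# whose closed-form optimum is x* = a*b / (a*a + lam). The penalty rho
# governs the convergence factor of the iterates; here rho is arbitrary.

def admm_ridge(a, b, lam, rho, iters=200):
    x = z = u = 0.0                                  # scaled-form ADMM state
    for _ in range(iters):
        x = (a * b + rho * (z - u)) / (a * a + rho)  # x-update (least squares)
        z = rho * (x + u) / (lam + rho)              # z-update (regularizer)
        u = u + x - z                                # scaled dual update
    return x

x_star = 2.0 * 3.0 / (2.0 * 2.0 + 1.0)               # analytic optimum: 1.2
x_admm = admm_ridge(a=2.0, b=3.0, lam=1.0, rho=1.0)
```

Timing how many iterations are needed to reach a given accuracy as rho varies reproduces, in miniature, the convergence-factor trade-off that the paper's parameter selection rules optimize.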
We consider the problem of control and remote state estimation with battery constraints and energy harvesting at the sensor (transmitter) under DoS/jamming attacks. We derive the optimal non-causal energy allocation policy that depends on current properties of the channel and on future energy usage. The performance of this policy is analyzed under jamming attacks on the wireless channel, in which the assumed and the true channel gains differ, and we show that the resulting control cost is not monotonic with respect to the assumed channel gain used in the transmission policy. Additionally, we show that, if a stabilizing policy exists, then the optimal causal policy ensures stability of the estimation process. The results are illustrated for non-causal and causal energy allocation policies under different jamming attacks.
This paper proposes a secure state estimation scheme with non-periodic asynchronous measurements for linear continuous-time systems under false data attacks on the measurement transmit channel. After sampling the output of the system, a sensor transmits the measurement information in a triple composed of sensor index, time-stamp, and measurement value to the fusion center via vulnerable communication channels. The malicious attacker can corrupt a subset of the sensors through (i) manipulating the time-stamp and measurement value; (ii) blocking transmitted measurement triples; or (iii) injecting fake measurement triples. To deal with such attacks, we propose the design of local estimators based on observability space decomposition, where each local estimator updates the local state and sends it to the fusion center after sampling a measurement. Whenever there is a local update, the fusion center combines all the local states and generates a secure state estimate by adopting the median operator. We prove that local estimators of benign sensors are unbiased with stable covariance. Moreover, the fused central estimation error has bounded expectation and covariance against at most p corrupted sensors as long as the system is 2p-sparse observable. The efficacy of the proposed scheme is demonstrated through an application on a benchmark example of the IEEE 14-bus system.
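The median-based fusion step can be illustrated in a few lines: with at most p corrupted local estimators out of n >= 2p + 1, the coordinate-wise median is always bracketed by benign estimates, so a single large injected bias cannot move it far. A minimal sketch with an invented toy example:

```python
# Coordinate-wise median fusion of local state estimates, as a stand-in
# for the fusion center's median operator. The estimates below are toy
# numbers: five local estimators of a 2-dimensional state, the last one
# corrupted by a large injected bias.
from statistics import median

def fuse(local_estimates):
    """local_estimates: list of state vectors, one per local estimator."""
    dim = len(local_estimates[0])
    return [median(est[d] for est in local_estimates) for d in range(dim)]

estimates = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9], [1.05, 2.05], [100.0, -50.0]]
fused = fuse(estimates)   # stays close to the benign estimates
```

With n = 5 estimators, this tolerates p = 2 corruptions per the 2p-sparse observability condition; the paper's analysis supplies the accompanying bias and covariance guarantees.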
Low-voltage distribution grids experience a rising penetration of inverter-based, distributed generation. So that they not only avoid contributing to voltage problems but actively help solve them, these inverters are increasingly asked to participate in intelligent grid controls; communicating inverters implement distributed voltage droop controls. The impact of cyber-attacks on the stability of such distributed grid controls is poorly researched and is therefore addressed in this article. We characterize the potential impact of several attack scenarios by employing the positivity and diagonal dominance properties. In particular, we discuss measurement falsification scenarios where the attacker corrupts voltage measurement data received by the voltage droop controllers. Analytical, control-theoretic methods for assessing the impact on system stability and voltage magnitude are presented and validated via simulation.
We propose an actuator security index that can be used to localize and protect vulnerable actuators in a networked control system. Particularly, the security index of an actuator equals the minimum number of sensors and actuators that need to be compromised, such that a perfectly undetectable attack against that actuator can be conducted. We derive a method for computing the index in small-scale systems and show that the index can potentially be increased by placing additional sensors. The difficulties that appear once the system is large-scale are then outlined: the index is NP-hard to compute, sensitive with respect to system variations, and based on the assumption that the attacker knows the entire system model. To overcome these difficulties, a robust security index is introduced. The robust index can characterize actuators vulnerable in any system realization, can be calculated in polynomial time, and can be related to limited model knowledge attackers. Additionally, we analyze two sensor placement problems with the objective to increase the robust indices. We show that the problems have submodular structures, so their suboptimal solutions with performance guarantees can be computed in polynomial time. Finally, we illustrate the theoretical developments through examples.
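The combinatorial nature of the index, and hence the NP-hardness noted above, is visible in a brute-force sketch: enumerate component subsets by increasing size until some subset admits an attack. The predicate below is a stand-in; in the paper it encodes the existence of a perfectly undetectable attack against the given actuator:

```python
# Brute-force security-index-style computation: the smallest number of
# components whose compromise makes some attack predicate hold. The
# predicate here is an invented toy ("needs sensor s2 plus any actuator");
# the paper's predicate is a system-theoretic undetectability condition.
from itertools import combinations

def security_index(components, attack_possible):
    """Smallest k such that some k-subset satisfies attack_possible."""
    for k in range(1, len(components) + 1):
        for subset in combinations(components, k):
            if attack_possible(set(subset)):
                return k
    return None  # no attack exists even with every component compromised

comps = ['a1', 'a2', 's1', 's2']
idx = security_index(
    comps, lambda S: 's2' in S and any(c.startswith('a') for c in S))
```

The search is exponential in the number of components, which is why the paper introduces a robust index computable in polynomial time for large-scale systems.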
To protect industrial control systems from cyberattacks, multiple layers of security measures need to be allocated to prevent critical security vulnerabilities. However, both finding the critical vulnerabilities and then allocating security measures in a cost-efficient way become challenging when the number of vulnerabilities and measures is large. This paper proposes a framework that can be used once this is the case. In our framework, the attacker exploits security vulnerabilities to gain control over some of the sensors and actuators. The critical vulnerabilities are those that are not complex to exploit and can lead to a large impact on the physical world through the compromised sensors and actuators. To find these vulnerabilities efficiently, we propose an algorithm that uses the nondecreasing properties of the impact and complexity functions and properties of the security measure allocation problem to speed up the search. Once the critical vulnerabilities are located, the security measure allocation problem reduces to an integer linear program. Since integer linear programs are NP-hard in general, we reformulate this problem as a problem of minimizing a linear set function subject to a submodular constraint. A polynomial time greedy algorithm can then be applied to obtain a solution with guaranteed approximation bound. The applicability of our framework is demonstrated on a control system used for regulation of temperature within a building.
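The greedy step for the submodular-constrained reformulation can be sketched in a weighted set-cover flavor: repeatedly pick the measure with the best new-coverage-per-cost ratio until every critical vulnerability is covered. The measures, costs, and vulnerability sets below are invented for illustration; the paper's set function and approximation analysis are richer:

```python
# Greedy allocation for minimizing a linear cost subject to a submodular
# coverage constraint (weighted set-cover flavor). All data are toy values.

def greedy_allocate(measures, costs, required):
    """Pick measures until all required vulnerabilities are covered."""
    covered, chosen, total = set(), [], 0.0
    while not required <= covered:
        # Best cost-effectiveness: newly covered vulnerabilities per unit cost.
        best = max(
            (m for m in measures if m not in chosen and measures[m] - covered),
            key=lambda m: len(measures[m] - covered) / costs[m],
        )
        chosen.append(best)
        covered |= measures[best]
        total += costs[best]
    return chosen, total

measures = {'patch': {'v1', 'v2'}, 'firewall': {'v2', 'v3'},
            'audit': {'v1', 'v2', 'v3'}}
costs = {'patch': 1.0, 'firewall': 1.0, 'audit': 2.5}
chosen, total = greedy_allocate(measures, costs, required={'v1', 'v2', 'v3'})
```

Greedy selection on a submodular coverage function carries the classical logarithmic approximation guarantee, which is the kind of bound the paper invokes for its polynomial-time algorithm.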
The addition of physical watermarking to the control input is a well-adopted technique to detect data deception attacks on cyber-physical systems. However, the addition of the watermarking increases the control cost. On the other hand, the attack might be a rare event. In this paper, we propose to reduce the control cost when the system is not under attack by adding the watermarking as and when needed, depending on a posterior probability of attack. We first formulate a stochastic optimal control problem, and then solve it using dynamic programming by keeping a balance between the detection delay, false alarm rate (FAR), and the reduction in control cost. From the value iterations, we numerically find two thresholds, Th_e and Th_d with Th_d > Th_e, on the posterior probability of attack p_k. If p_k is greater than or equal to Th_e, then the watermarking signal is added at the (k+1)-th time instant. If p_k is greater than or equal to Th_d, then we declare that the system is under attack. We provide simulation results to illustrate our approach. For the example system model considered in this paper, we achieve a considerable reduction in the control cost during normal operation, compared to the case where watermarking is always present, without sacrificing much in the detection delay.
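The two-threshold rule can be sketched schematically: watermark once the posterior attack probability crosses the lower threshold, and raise an alarm once it crosses the upper one. The posterior update below is a generic Shiryaev-style recursion with a geometric change-point prior and an invented sequence of likelihood ratios; the paper's exact statistic and thresholds come from its value iterations:

```python
# Schematic two-threshold policy on the posterior attack probability p_k:
# add the watermark when p_k >= th_e, declare an attack when p_k >= th_d.
# The update is a generic Shiryaev-style recursion; rho and the likelihood
# ratios L_k are invented for illustration.

def step(p, L, rho=0.01):
    """One Shiryaev-style posterior update given likelihood ratio L."""
    prior = p + (1.0 - p) * rho              # prob. the attack has started
    return prior * L / (prior * L + (1.0 - prior))

def run(likelihood_ratios, th_e=0.2, th_d=0.9):
    p, actions = 0.0, []
    for L in likelihood_ratios:
        p = step(p, L)
        if p >= th_d:
            actions.append('alarm')          # declare the system under attack
            break
        actions.append('watermark' if p >= th_e else 'idle')
    return actions

# Uninformative likelihood ratios (no attack evidence) followed by large ones.
actions = run([1.0] * 3 + [8.0] * 6)
```

During quiet periods the policy stays idle and saves control cost; as evidence accumulates it first spends on watermarking to sharpen detection, and only then alarms, which is the cost/delay balance described above.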
Adding a physical watermarking signal to the control input of a networked control system increases the detection probability of data deception attacks at the expense of increased control cost. This paper proposes a parsimonious policy to limit the average number of watermarking events when the attack is not present, which in turn reduces the control cost. We model the system as a stochastic optimal control problem and apply dynamic programming to minimize the average detection delay (ADD) for fixed upper bounds on the false alarm rate (FAR) and the average number of watermarking events (ANW) before the attack. Under practical circumstances, the optimal solution results in a two-threshold policy on the posterior probability of attack, derived from the Shiryaev statistics for sequential change detection and assuming the change point is a random variable. We derive asymptotically approximate analytical expressions of ADD and FAR, applying the non-linear renewal theory for non-independent and identically distributed data. The derived expressions reveal that ADD reduces with the increase in the Kullback-Leibler divergence (KLD) between the post- and pre-attack distributions of the test statistics. Therefore, we further design the optimal watermarking that maximizes the KLD for a fixed increase in the control cost. The relationship between the ANW and the increase in control cost is also derived. Simulation studies are performed to illustrate and validate the theoretical results.
In this paper, we propose and analyze an attack detection scheme for securing the physical layer of a networked control system (NCS) with a wireless sensor network against attacks where the adversary replaces the true observations with stationary false data. An independent and identically distributed watermarking signal is added to the optimal linear quadratic Gaussian (LQG) control inputs, and a cumulative sum (CUSUM) test is carried out using the joint distribution of the innovation signal and the watermarking signal for quickest attack detection. We derive the expressions of the supremum of the average detection delay (SADD) for a multi-input and multi-output (MIMO) system under the optimal and sub-optimal CUSUM tests. The SADD is asymptotically inversely proportional to the expected Kullback–Leibler divergence (KLD) under certain conditions. The expressions for the MIMO case are simplified for multi-input and single-output systems and explored further to distil design insights. We provide insights into the design of an optimal watermarking signal to maximize KLD for a given fixed increase in LQG control cost when there is no attack. Furthermore, we investigate how the attacker and the control system designer can accomplish their respective objectives by changing the relative power of the attack signal and the watermarking signal. Simulations and numerical studies are carried out to validate the theoretical results.
One of the most studied forms of attacks on cyber-physical systems is the replay attack. The statistical similarities of the replayed signal and the true observations make the replay attack difficult to detect. In this article, we address the problem of replay attack detection by adding watermarking to the control inputs and then performing resilient detection using a cumulative sum (CUSUM) test on the joint statistics of the innovation signal and the watermarking signal, whereas existing work considers only the marginal distribution of the innovation signal. We derive the expression of the Kullback-Leibler divergence (KLD) between the two joint distributions before and after the replay attack, which is, asymptotically, inversely proportional to the detection delay. We perform a structural analysis of the derived KLD expression and suggest a technique to improve the KLD for systems with relative degree greater than one. A scheme to find the optimal watermarking signal variance for a fixed increase in the control cost to maximize the KLD under the CUSUM test is presented. We provide various numerical simulation results to support our theory. The proposed method is also compared with a state-of-the-art method based on the Neyman-Pearson detector, illustrating the smaller detection delay of the proposed sequential detector.
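For readers unfamiliar with CUSUM, the basic mechanics can be shown on a generic Gaussian mean-shift: accumulate the log-likelihood ratio, clip at zero, and alarm when the statistic crosses a threshold. This is only a generic illustration with invented samples; the paper runs CUSUM on the joint innovation/watermark statistics, not on a simple mean shift:

```python
# One-sided CUSUM on the log-likelihood ratio of a Gaussian mean shift
# (N(0,1) pre-change vs N(mu,1) post-change):
#   S_k = max(0, S_{k-1} + llr(x_k)),  alarm when S_k >= h.
# mu, h, and the sample values are illustrative choices.

def cusum(samples, mu=1.0, h=3.9):
    """Return the 1-based alarm time, or None if no alarm is raised."""
    s = 0.0
    for k, x in enumerate(samples, start=1):
        llr = mu * x - 0.5 * mu * mu   # Gaussian LLR for the mean shift
        s = max(0.0, s + llr)
        if s >= h:
            return k
    return None

pre = [0.1, -0.2, 0.0, 0.15, -0.1]               # behaves like N(0,1)
post = [1.2, 0.9, 1.1, 1.3, 1.0, 1.05, 0.95, 1.1]  # shifted mean
alarm_at = cusum(pre + post)
```

The expected slope of the statistic after the change equals the KLD between post- and pre-change distributions, which is why the detection delay shrinks as the KLD grows, the relationship the paper exploits when designing the watermark.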
In this paper, we propose a technique for Bayesian sequential detection of replay attacks on networked control systems with a constraint on the average number of watermarking (ANW) events used during normal system operation. Such a constraint limits the increase in the control cost due to watermarking. To determine the optimal sequence regarding the addition or otherwise of watermarking signals, we first formulate an infinite-horizon stochastic optimal control problem with a termination state. Then, applying the value iteration approach, we find an optimal policy that minimizes the average detection delay (ADD) for fixed upper bounds on the false alarm rate (FAR) and ANW. The optimal policy turns out to be a two-threshold policy on the posterior probability of attack. We derive approximate expressions of ADD and FAR as functions of the two derived thresholds and a few other parameters. A simulation study on a single-input single-output system illustrates that the proposed method improves the control cost considerably at the expense of small increases in ADD. We also perform simulation studies to validate the derived theoretical results.
In this paper, we perform structural analyses of a parsimonious watermarking policy, which minimizes the average detection delay (ADD) to detect data deception attacks on networked control systems (NCS) for a fixed upper bound on the false alarm rate (FAR). The addition of physical watermarking to the control input of an NCS increases the probability of attack detection at the cost of an increase in the control cost. Therefore, we formulate the problem of data deception attack detection for an NCS with the facility to add physical watermarking as a stochastic optimal control problem. Then we solve the problem by applying dynamic programming value iterations and find a parsimonious watermarking policy that decides to add watermarking and detects attacks based on the estimated posterior probability of attack. We analyze the optimal policy structure and find that it can be a one-, two-, or three-threshold policy depending on a few parameter values. Simulation studies show that the optimal policy for a practical range of parameter values is a two-threshold policy on the posterior probability of attack. Derivation of a threshold-based policy from the structural analysis of the value iteration method reduces the computational complexity during the runtime implementation and offers better structural insights. Furthermore, such an analysis provides a guideline for selecting the parameter values to meet the design requirements.
This paper proposes a game-theoretic approach to address the problem of optimal sensor placement against an adversary in uncertain networked control systems. The problem is formulated as a zero-sum game with two players, namely a malicious adversary and a detector. Given a protected performance vertex, we consider a detector, with uncertain system knowledge, that selects another vertex on which to place a sensor and monitors its output with the aim of detecting the presence of the adversary. On the other hand, the adversary, also with uncertain system knowledge, chooses a single vertex and conducts a cyber-attack on its input. The purpose of the adversary is to attack this vertex so as to maximally disrupt the protected performance vertex while remaining undetected by the detector. As our first contribution, the payoff of the above-defined zero-sum game is formulated in terms of the Value-at-Risk of the adversary’s impact. However, this payoff corresponds to an intractable optimization problem. To tackle this, we adopt the scenario approach to approximately compute the game payoff. Then, the optimal monitor selection is determined by analyzing the equilibrium of the zero-sum game. The proposed approach is illustrated via a numerical example of a 10-vertex networked control system.
This paper proposes a game-theoretic method to address the problem of optimal detector placement in a networked control system under cyber-attacks. The networked control system is composed of interconnected agents, each regulated by its local controller over unprotected communication, which leaves the system vulnerable to malicious cyber-attacks. To guarantee a given local performance, the defender optimally selects a single agent on which to place a detector at its local controller with the purpose of detecting cyber-attacks. On the other hand, an adversary optimally chooses a single agent on which to conduct a cyber-attack on its input with the aim of maximally worsening the local performance while remaining stealthy to the defender. First, we present a necessary and sufficient condition to ensure that the maximal attack impact on the local performance is bounded, which restricts the possible actions of the defender to a subset of available agents. Then, by considering the maximal attack impact on the local performance as the game payoff, we cast the problem of finding optimal actions of the defender and the adversary as a zero-sum game. Finally, given the possible action sets of the defender and the adversary, an algorithm is devised to determine the Nash equilibria of the zero-sum game, which yield the optimal detector placement. The proposed method is illustrated on an IEEE benchmark for power systems.
This paper considers the problem of security allocation in a networked control system under stealthy attacks, in which the system is composed of interconnected subsystems represented by vertices. A malicious adversary selects a single vertex on which to conduct a stealthy data injection attack to maximally disrupt the local performance while remaining undetected. On the other hand, a defender selects several vertices on which to allocate defense resources against the adversary. First, the objectives of the adversary and the defender with uncertain targets are formulated probabilistically, resulting in an expected worst-case impact of stealthy attacks. Next, we provide a graph-theoretic necessary and sufficient condition under which the cost for the defender and the expected worst-case impact of stealthy attacks are bounded. This condition enables the defender to restrict its admissible actions to a subset of available vertex sets. Then, we cast the problem of security allocation in a Stackelberg game-theoretic framework. Finally, the contribution of this paper is highlighted by utilizing the proposed admissible actions of the defender in the context of large-scale networks. A numerical example of a 50-vertex networked control system is presented to validate the obtained results.
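The leader-follower structure of such a Stackelberg security game can be illustrated with a toy enumeration (this is not the paper's algorithm, which additionally restricts the defender to the derived admissible action sets): the defender commits to a vertex set of a given budget, and the adversary best-responds with the single most harmful attack vertex. The `impact` argument is a hypothetical callable supplying the expected worst-case attack impact.

```python
from itertools import combinations

def stackelberg_security_allocation(impact, vertices, budget):
    # Leader (defender) commits to a vertex set D of size `budget`; the
    # follower (adversary) then best-responds with the most harmful single
    # attack vertex. Returns the defender set minimizing the worst impact.
    best_set, best_worst = None, None
    for D in combinations(vertices, budget):
        worst = max(impact(a, frozenset(D)) for a in vertices)
        if best_worst is None or worst < best_worst:
            best_set, best_worst = frozenset(D), worst
    return best_set, best_worst
```

For example, with an illustrative impact function that is zero on protected vertices and a fixed weight elsewhere, the defender protects the highest-weight vertex first. Exhaustive enumeration scales combinatorially, which is precisely why restricting the admissible action set matters for large networks.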
This paper proposes a game-theoretic approach to address the problem of optimal sensor placement for detecting cyber-attacks in networked control systems. The problem is formulated as a zero-sum game with two players, namely a malicious adversary and a detector. Given a protected target vertex, the detector places a sensor at a single vertex to monitor the system and detect the presence of the adversary. On the other hand, the adversary selects a single vertex through which to conduct a cyber-attack that maximally disrupts the target vertex while remaining undetected by the detector. As our first contribution, for a given pair of attack and monitor vertices and a known target vertex, the game payoff function is defined as the output-to-output gain of the respective system. Then, the paper characterizes the set of feasible actions by the detector that ensures bounded values of the game payoff. Finally, an algebraic sufficient condition is proposed to examine whether a given vertex belongs to the set of feasible monitor vertices. The optimal sensor placement is then determined by computing the mixed-strategy Nash equilibrium of the zero-sum game through linear programming. The approach is illustrated via a numerical example of a 10-vertex networked control system with a given target vertex.
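The paper above computes the mixed-strategy Nash equilibrium by linear programming over the payoff matrix. As a dependency-free stand-in that attains the same game value, the sketch below runs fictitious play, whose empirical frequencies are known to converge to an equilibrium in two-player zero-sum games; the payoff matrix entries here are purely illustrative.

```python
def fictitious_play(payoff, iters=50000):
    # payoff[i][j]: gain of the row player (maximizer, e.g. the adversary)
    # when row plays i and the column player (minimizer, e.g. the detector)
    # plays j; the game is zero-sum, so the column player loses this amount.
    m, n = len(payoff), len(payoff[0])
    row_counts, col_counts = [0] * m, [0] * n
    row_play, col_play = 0, 0
    for _ in range(iters):
        row_counts[row_play] += 1
        col_counts[col_play] += 1
        # Each player best-responds to the opponent's empirical mixture.
        col_play = min(range(n),
                       key=lambda j: sum(row_counts[i] * payoff[i][j]
                                         for i in range(m)))
        row_play = max(range(m),
                       key=lambda i: sum(col_counts[j] * payoff[i][j]
                                         for j in range(n)))
    total = float(iters)
    return ([c / total for c in row_counts],
            [c / total for c in col_counts])
```

On a matching-pennies-like payoff matrix, both empirical mixtures approach the uniform equilibrium. An LP formulation (maximize the game value subject to the mixture being a probability vector) is the exact method the paper uses and is preferable when a solver is available.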
Understanding smart grid cyber attacks is key to developing appropriate protection and recovery measures. Advanced attacks pursue maximized impact at minimized cost and detectability. This paper conducts a risk analysis of combined data integrity and availability attacks against power system state estimation. We compare the combined attacks with pure integrity attacks, i.e., false data injection (FDI) attacks. A security index for assessing vulnerability to these two kinds of attacks is proposed and formulated as a mixed-integer linear programming problem. We show that such combined attacks can succeed with fewer resources than FDI attacks. Combined attacks with limited knowledge of the system model also retain an advantage in remaining stealthy against bad data detection. Finally, the risk that combined attacks pose to reliable system operation is evaluated using the results from the vulnerability assessment and the attack impact analysis. The findings in this paper are validated and supported by a detailed case study.
In this paper, we address the problem of a set of discrete-time networked agents reaching average consensus privately and resiliently in the presence of a subset of attacked agents. Existing approaches to the problem rely on trade-offs between accuracy, privacy, and resilience, sacrificing one for the others. We show that a separation-like principle for privacy-preserving and resilient discrete-time average consensus is possible. Specifically, we propose a scheme that combines strategies from resilient average consensus and private average consensus, which yields both desired properties. The proposed scheme has polynomial time-complexity on the number of agents and the maximum number of attacked agents. In other words, each agent that is not under attack is able to detect and discard the values of the attacked agents, reaching the average consensus of non-attacked agents while keeping each agent's initial state private. Finally, we demonstrate the effectiveness of the proposed method with numerical results.
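The resilience side of such a scheme can be illustrated with one step of the W-MSR rule, a standard resilient-consensus building block (the privacy mechanism and the paper's specific detection-and-discard scheme are omitted here): each non-attacked agent discards up to f neighbor values above its own and up to f below before averaging.

```python
def wmsr_step(values, neighbors, f):
    # One W-MSR iteration: each agent sorts its neighbors' values, discards
    # up to f of them above its own value and up to f below, and averages
    # the remaining ones together with its own value (equal weights).
    new = []
    for i, x in enumerate(values):
        nbr = sorted(values[j] for j in neighbors[i])
        low = [v for v in nbr if v < x]
        high = [v for v in nbr if v > x]
        same = [v for v in nbr if v == x]
        kept = (low[min(f, len(low)):] + same
                + high[:len(high) - min(f, len(high))])
        new.append((x + sum(kept)) / (1 + len(kept)))
    return new
```

In a complete graph with one attacked agent broadcasting an extreme value, a single step with f = 1 already removes the outlier from every honest agent's update, so the honest states contract toward each other regardless of what the attacker sends.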
In this article, two limitations in current distributed model-based approaches for anomaly detection in large-scale uncertain nonlinear systems are addressed. The first limitation regards the high conservativeness of deterministic detection thresholds, against which a novel family of set-based thresholds is proposed. Such set-based thresholds are defined so as to guarantee robustness in a user-defined probabilistic sense, rather than a deterministic one. They are obtained by solving a chance-constrained optimization problem, thanks to a randomization technique based on the Scenario Approach. The second limitation regards the requirement, in distributed anomaly detection architectures, for different parties to regularly communicate local measurements. In settings where these parties want to preserve their privacy, such communication may be undesirable. In order to preserve privacy while still allowing distributed detection to be implemented, a novel privacy-preserving mechanism is proposed and a so-called privatized communication protocol is introduced. Theoretical guarantees on the achievable level of privacy are provided, along with a characterization of the robustness properties of the proposed distributed threshold set design, taking into account the privatized communication scheme. Finally, simulation studies are included to illustrate our theoretical developments.
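The scenario-based threshold idea can be sketched in its simplest one-dimensional form (illustrative only; the paper solves a chance-constrained design over threshold sets, not a scalar maximum): draw N i.i.d. healthy-residual scenarios and take their maximum as the detection threshold, so that a fresh healthy sample exceeds it with expected probability 1/(N+1), a standard scenario-approach bound.

```python
import random

def scenario_threshold(residual_model, n_scenarios=500, seed=0):
    # Scenario-approach threshold design in one dimension: sample N i.i.d.
    # healthy-residual scenarios and return their maximum. A fresh healthy
    # sample then exceeds the threshold with expected probability 1/(N+1).
    rng = random.Random(seed)
    return max(residual_model(rng) for _ in range(n_scenarios))
```

Here `residual_model` is a hypothetical sampler of the healthy residual; with, say, the magnitude of a standard Gaussian residual and N = 500, the resulting threshold is far less conservative than a deterministic worst-case bound while still controlling the false-alarm probability in the stated probabilistic sense.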
Distributed fault diagnosis has been proposed as an effective technique for monitoring large-scale, nonlinear, and uncertain systems. It is based on the decomposition of the large-scale system into a number of interconnected subsystems, each one monitored by a dedicated Local Fault Detector (LFD). In order to successfully account for the subsystem interconnections, neighboring LFDs are thus required to communicate with each other some of the measurements from their subsystems. However, such communication may expose private information of a given subsystem, such as its local input. To avoid this problem, we propose to use differential privacy to pre-process data before transmission.
In this paper, we consider stealthy data injection attacks against control systems and develop security sensitivity metrics to quantify their impact on the system. The final objective of this work is to use such metrics as objective functions in the design of optimal resilient controllers against stealthy attacks, akin to the classical design of optimal ℋ∞ robust controllers. As a first metric, the recently proposed ℓ2 output-to-output gain is examined, and fundamental limitations of this gain for systems with strictly proper dynamics are uncovered and characterized. To circumvent such limitations, a new security sensitivity metric is proposed, namely the truncated ℓ2 gain. Necessary and sufficient conditions for this gain to be finite are derived, and we show that this metric can cope with strictly proper systems. Finally, we report preliminary investigations on the design of optimal resilient controllers, which are supported and illustrated through numerical examples.
In this paper, we address the problem of distributed reconfiguration of networked control systems upon the removal of misbehaving sensors and actuators. In particular, we consider systems with redundant sensors and actuators cooperating to recover from faults. Reconfiguration is performed while minimizing a steady-state estimation error covariance and a quadratic control cost. A model-matching condition is imposed on the reconfiguration scheme. It is shown that the reconfiguration and its underlying computation can be distributed. Using an average dwell-time approach, the stability of the distributed reconfiguration scheme under finite-time termination is analyzed. The approach is illustrated in a numerical example.
In this paper, the problem of detecting stealthy false-data injection attacks on sensor measurements is considered. We propose a multiplicative watermarking scheme, where each sensor's output is individually fed to a SISO watermark generator whose parameters are assumed to be unknown to the adversary. Under such a scenario, the detectability properties of the attack are analyzed and guidelines for designing the watermarking filters are derived. Fundamental limitations in the case of single-output systems are also uncovered, for which an alternative approach is proposed. The results are illustrated through numerical examples.
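A minimal single-channel sketch of the idea (the paper's watermark generators are more general, and the filter coefficients below are illustrative): the sensor output passes through an invertible first-order filter whose parameters (a, b) are shared only between the sensor side and the operator side, so the operator can reconstruct the original signal exactly, while an adversary injecting data without knowing (a, b) produces a residual after de-watermarking.

```python
def watermark(signal, a, b):
    # First-order IIR watermarking filter: y[k] = a * y[k-1] + b * u[k].
    y, out = 0.0, []
    for u in signal:
        y = a * y + b * u
        out.append(y)
    return out

def dewatermark(signal, a, b):
    # Exact inverse filter, run at the operator side, which knows (a, b):
    # u[k] = (y[k] - a * y[k-1]) / b.
    prev, out = 0.0, []
    for y in signal:
        out.append((y - a * prev) / b)
        prev = y
    return out
```

Running `dewatermark(watermark(u, a, b), a, b)` recovers `u` up to floating-point precision, which is the model-matching property the legitimate operator relies on; an attacker replacing the watermarked signal with a plausible-looking one breaks this match and becomes detectable.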
In this chapter, we consider stealthy cyber- and physical attacks against control systems, where malicious adversaries aim at maximizing the impact on control performance, while simultaneously remaining undetected. As an initial goal, we develop security-related metrics to quantify the impact of stealthy attacks on the system. The key novelty of these metrics is that they jointly consider impact and detectability of attacks, unlike classical sensitivity metrics in robust control and fault detection. The final objective of this work is to use such metrics to guide the design of optimal resilient controllers and detectors against stealthy attacks, akin to the classical design of optimal robust controllers. We report preliminary investigations on the design of resilient observer-based controllers and detectors, which are supported and illustrated through numerical examples.
This Research-to-Practice Full Paper investigates emerging perspectives for the 21st-century engineering curriculum and discusses the crucial role that digital technologies will have in facilitating the management, evaluation, and development of such a curriculum. First, a vision for future engineering curricula is distilled from modern curricular perspectives and trends in future engineering professions, in which the integration of non-cognitive competences and the increased individualization of study paths are central. Core requirements for future curricula are outlined, which pose significant barriers to curricular change. Then, the role of technology in mitigating these barriers is discussed by outlining key aspects of a data-driven digital approach to the management of future curricula. To illustrate the proposed approach, the paper presents a case example of a digital tool that analyzes curriculum coherency at the content level. The paper concludes with a discussion of future research directions regarding the conceptualization and management of future engineering curricula through digital technologies.