This paper studies the optimal tradeoff between secrecy and non-secrecy rates of the MISO wiretap channels for different power constraint settings: sum power constraint only, per-antenna power constraints only, and joint sum and per-antenna power constraints. The problem is motivated by the fact that channel capacity and secrecy capacity are generally achieved by different transmit strategies. First, a necessary and sufficient condition to ensure a positive secrecy capacity is shown. The optimal tradeoff between secrecy rate and transmission rate is characterized by a weighted rate sum maximization problem. Since this problem is not necessarily convex, equivalent problem formulations are introduced to derive the optimal transmit strategies. Under sum power constraint only, a closed-form solution is provided. Under per-antenna power constraints, necessary conditions to find the optimal power allocation are derived. Sufficient conditions are provided for the special case of two transmit antennas. For the special case of aligned channels, the optimal transmit strategies can be deduced from an equivalent point-to-point channel problem. Lastly, the theoretical results are illustrated by numerical simulations.
Abnormal event detection is an important task in research and industrial applications, which has received considerable attention in recent years. Existing methods usually rely on standard frame-based cameras to record the data and process them with computer vision technologies. In contrast, this paper presents a novel neuromorphic vision-based abnormal event detection system. Compared to frame-based cameras, neuromorphic vision sensors, such as the Dynamic Vision Sensor (DVS), do not acquire full images at a fixed frame rate but rather have independent pixels that output intensity changes (called events) asynchronously at the time they occur. This also avoids the need to design an encryption scheme. Since events are triggered by moving edges in the scene, the DVS is a natural motion detector for abnormal objects and automatically filters out temporally redundant information. Based on this unique output, we first propose a highly efficient method based on the event density to select activated event cuboids and locate the foreground. We then design a novel event-based multiscale spatio-temporal descriptor to extract features from the activated event cuboids for abnormal event detection. Additionally, we build the NeuroAED dataset, the first public dataset dedicated to abnormal event detection with a neuromorphic vision sensor. The NeuroAED dataset consists of four sub-datasets: Walking, Campus, Square, and Stair. Experiments conducted on these datasets demonstrate the high efficiency and accuracy of our method.
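As a rough illustration of the event-density-based cuboid selection described above, the following numpy sketch bins events into a spatio-temporal grid and keeps the dense cuboids; the grid sizes and density threshold are assumed toy values, not the paper's, and this is not the authors' implementation:

```python
import numpy as np

def select_activated_cuboids(events, grid=(4, 4), t_bins=5, density_thresh=50):
    """Partition DVS events into spatio-temporal cuboids and keep the dense ones.

    events: (N, 3) array of (x, y, t) event coordinates.  Returns the set of
    (ix, iy, it) indices of cuboids whose event count reaches density_thresh;
    these 'activated' cuboids localize the moving foreground.
    """
    def bin_axis(v, n):
        v = (v - v.min()) / (np.ptp(v) + 1e-9)   # normalize the axis to [0, 1]
        return np.minimum((v * n).astype(int), n - 1)

    ix = bin_axis(events[:, 0], grid[0])
    iy = bin_axis(events[:, 1], grid[1])
    it = bin_axis(events[:, 2], t_bins)
    counts = {}
    for key in zip(ix, iy, it):
        counts[key] = counts.get(key, 0) + 1
    return {k for k, c in counts.items() if c >= density_thresh}
```

The selected cuboids would then feed a feature descriptor such as the paper's multiscale spatio-temporal one.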
We design a jamming-resistant receiver scheme to enhance the robustness of a massive MIMO uplink system against jamming. We assume that a jammer attacks the system both in the pilot and data transmission phases. The key feature of the proposed scheme is that, in the pilot phase, the base station estimates not only the legitimate channel, but also the jamming channel by exploiting a purposely unused pilot sequence. The jamming channel estimate is used to construct linear receiver filters that reject the impact of the jamming signal. The performance of the proposed scheme is analytically evaluated using the asymptotic properties of massive MIMO. The best regularized zero-forcing receiver and the optimal power allocations for the legitimate system and the jammer are also studied. Numerical results are provided to verify our analysis and show that the proposed scheme greatly improves the achievable rates, as compared with conventional receivers. Interestingly, the proposed scheme works particularly well under strong jamming attacks, since the improved estimate of the jamming channel outweighs the extra jamming power.
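A minimal numpy sketch of the key idea, estimating the jamming channel from a purposely unused pilot and projecting it out of the receive filter, is given below; the dimensions, powers, and the simple projection filter are illustrative assumptions rather than the paper's regularized zero-forcing design:

```python
import numpy as np

rng = np.random.default_rng(1)
M, tau = 64, 8                 # BS antennas and pilot length (toy values)
phi = np.eye(tau)              # orthonormal pilot book; phi[0] is used, phi[1] purposely unused
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)  # legitimate channel
g = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)  # jamming channel
p_j = 10.0                     # jamming power

# Pilot phase: the user sends phi[0] while the jammer transmits random symbols.
s_j = np.sqrt(p_j / 2) * (rng.standard_normal(tau) + 1j * rng.standard_normal(tau))
N = 0.1 * (rng.standard_normal((M, tau)) + 1j * rng.standard_normal((M, tau))) / np.sqrt(2)
Y = np.outer(h, phi[0]) + np.outer(g, s_j) + N

h_hat = Y @ phi[0].conj()      # estimate of h (contaminated by the jamming signal)
g_hat = Y @ phi[1].conj()      # correlating with the unused pilot isolates the
                               # jamming channel direction (up to a random scalar)

# Linear receive filter: match to h_hat after projecting out the jamming direction.
P_perp = np.eye(M) - np.outer(g_hat, g_hat.conj()) / np.linalg.norm(g_hat) ** 2
a = P_perp @ g_hat * 0 + P_perp @ h_hat

def sinr(filt, noise_var=0.01):
    """Post-filtering SINR for data symbols jammed with power p_j."""
    signal = abs(filt.conj() @ h) ** 2
    jamming = p_j * abs(filt.conj() @ g) ** 2
    return signal / (jamming + noise_var * np.linalg.norm(filt) ** 2)

improvement = sinr(a) / sinr(h_hat)   # large: the filter rejects the jamming
```

Plain maximum-ratio combining with the contaminated estimate `h_hat` partially points at the jammer, whereas the projected filter does not.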
We consider an aeronautical ad-hoc network relying on aeroplanes operating in the presence of a spoofer. The aggregated signal received by the terrestrial base station is considered as 'clean' or 'normal', if the legitimate aeroplanes transmit their signals and there is no spoofing attack. By contrast, the received signal is considered as 'spurious' or 'abnormal' in the face of a spoofing signal. An autoencoder (AE) is trained to learn the characteristics/features from a training dataset, which contains only normal samples associated with no spoofing attacks. The AE takes original samples as its input samples and reconstructs them at its output. Based on the trained AE, we define the detection thresholds of our spoofing discovery algorithm. To be more specific, contrasting the output of the AE against its input will provide us with a measure of geometric waveform similarity/dissimilarity in terms of the peaks of curves. To quantify the similarity between unknown testing samples and the given training samples (including normal samples), we first propose a so-called deviation-based algorithm. Furthermore, we estimate the angle of arrival (AoA) from each legitimate aeroplane and propose a so-called AoA-based algorithm. Then based on a sophisticated amalgamation of these two algorithms, we form our final detection algorithm for distinguishing the spurious abnormal samples from normal samples under a strict testing condition. In conclusion, our numerical results show that the AE improves the trade-off between the correct spoofing detection rate and the false alarm rate as long as the detection thresholds are carefully selected.
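The deviation-based detection idea above (train only on normal samples, flag large reconstruction deviations) can be sketched with a linear PCA autoencoder standing in for the paper's trained AE; the component count and threshold quantile are assumed toy values:

```python
import numpy as np

class LinearAutoencoder:
    """PCA-based linear autoencoder: a toy stand-in for the paper's trained AE.
    It is fit only on 'normal' (attack-free) samples; a test sample is flagged
    as abnormal when its reconstruction error exceeds a threshold set on the
    training data (the deviation-based idea, greatly simplified)."""

    def __init__(self, n_components=2):
        self.k = n_components

    def fit(self, X, quantile=0.99):
        self.mean = X.mean(axis=0)
        # The principal subspace of the normal data is the optimal linear code.
        _, _, vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.W = vt[:self.k]
        self.threshold = np.quantile(self._error(X), quantile)
        return self

    def _error(self, X):
        Z = (X - self.mean) @ self.W.T        # encode
        X_rec = Z @ self.W + self.mean        # decode (reconstruct)
        return np.linalg.norm(X - X_rec, axis=1)

    def is_abnormal(self, X):
        return self._error(X) > self.threshold
```

The detection threshold plays the role of the carefully selected thresholds discussed in the abstract; the AoA-based test would be combined with this rule separately.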
Network coding is an efficient means to improve the spectrum efficiency of satellite communications. However, its resilience to eavesdropping attacks is not well understood. This paper studies the confidentiality issue in a bidirectional satellite network consisting of two mobile users who want to exchange messages via a multibeam satellite using the XOR network coding protocol. We aim to maximize the sum secrecy rate by designing the optimal beamforming vector along with optimizing the return and forward link time allocation. The problem is nonconvex, and we find its optimal solution using semidefinite programming together with a 1-D search. For comparison, we also solve the sum secrecy rate maximization problem for a conventional reference scheme without network coding. Simulation results using realistic system parameters demonstrate that the bidirectional scheme using network coding provides a considerably higher secrecy rate than the conventional scheme.
We present a method for jamming a time-division duplex link using a transceiver with a large number of antennas. By utilizing beamforming, a jammer with M antennas can degrade the spectral efficiency of the primary link more than conventional omnidirectional jammers under the same power constraint, or perform equally well with approximately 1/M of the output power. The jammer operates without any prior knowledge of the channels to the legitimate transmitters or of the legitimate signals, relying instead on channel reciprocity.
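The claimed M-fold beamforming advantage can be checked with a short Monte Carlo sketch; the Rayleigh channels and the perfectly known beamforming direction are simplifying assumptions (the paper obtains the direction via reciprocity):

```python
import numpy as np

rng = np.random.default_rng(0)
M, P, trials = 100, 1.0, 2000
bf, omni = [], []
for _ in range(trials):
    # Channel from the M-antenna jammer to the victim receiver (unit-variance Rayleigh).
    g = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    w = g.conj() / np.linalg.norm(g)       # matched transmit beamformer (assumed known)
    bf.append(P * abs(g @ w) ** 2)         # beamformed jamming power at the victim: ~ M*P
    omni.append(P * abs(g[0]) ** 2)        # single-antenna omnidirectional jammer: ~ P
ratio = np.mean(bf) / np.mean(omni)        # empirically close to M
```

The ratio concentrates around M, which is exactly the 1/M output-power saving stated in the abstract.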
Privacy against an adversary (AD) that tries to detect the underlying privacy-sensitive data distribution is studied. The original data sequence is assumed to come from one of two known distributions, and the privacy leakage is measured by the probability of error of the binary hypothesis test carried out by the AD. A management unit (MU) is allowed to manipulate the original data sequence in an online fashion while satisfying an average distortion constraint. The goal of the MU is to maximize the minimal type II probability of error subject to a constraint on the type I probability of error assuming an adversarial Neyman-Pearson test, or to maximize the minimal error probability assuming an adversarial Bayesian test. The asymptotic exponents of the maximum minimal type II probability of error and the maximum minimal error probability are shown to be characterized by a Kullback-Leibler divergence rate and a Chernoff information rate, respectively. The privacy performance of particular management policies, the memoryless hypothesis-aware policy and the hypothesis-unaware policy with memory, is compared. The proposed formulation can also model adversarial example generation with minimal data manipulation to fool classifiers. Finally, the results are applied to a smart meter privacy problem, where the user's energy consumption is manipulated by adaptively using a renewable energy source in order to hide the user's activity from the energy provider.
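In the i.i.d. memoryless special case, the two exponents reduce to a single-letter KL divergence (Stein's lemma) and the Chernoff information; the sketch below computes both for a hypothetical pair of binary distributions (the distributions are assumed values for illustration, not from the paper):

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p||q) between finite distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def chernoff_information(p, q, grid=10001):
    """C(p, q) = -min_{0<=s<=1} log sum_x p(x)^s q(x)^(1-s), via a grid search."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    s = np.linspace(0.0, 1.0, grid)[:, None]
    log_moments = np.log(np.sum(p ** s * q ** (1 - s), axis=1))
    return float(-log_moments.min())

# Hypothetical pair of data distributions distinguished by the adversary.
p0, p1 = [0.8, 0.2], [0.4, 0.6]
np_exponent = kl(p0, p1)                       # Neyman-Pearson type II error exponent
bayes_exponent = chernoff_information(p0, p1)  # Bayesian error exponent
```

The Chernoff information never exceeds either KL divergence, matching the fact that the Bayesian test must control both error types at once.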
We study a lossy source coding problem with secrecy constraints in which a remote information source should be transmitted to a single destination via multiple agents in the presence of a passive eavesdropper. The agents observe noisy versions of the source and independently encode and transmit their observations to the destination via noiseless rate-limited links. The destination should estimate the remote source based on the information received from the agents within a certain mean distortion threshold. The eavesdropper, with access to side information correlated to the source, is able to listen in on one of the links from the agents to the destination in order to obtain as much information as possible about the source. This problem can be viewed as the so-called CEO problem with additional secrecy constraints. We establish inner and outer bounds on the rate-distortion-equivocation region of this problem. We also obtain the region in special cases where the bounds are tight. Furthermore, we study the quadratic Gaussian case and provide the optimal rate-distortion-equivocation region when the eavesdropper has no side information and an achievable region for a more general setup with side information at the eavesdropper.
In this paper, we develop a framework against inference attacks aimed at inferring the values of the controller gains of an active steering control system (ASCS). We first show that an adversary with access to the information shared by a vehicle via a vehicular ad hoc network (VANET) can reliably infer the values of the controller gains of an ASCS. This vulnerability may expose the driver as well as the manufacturer of the ASCS to severe financial and safety risks. To protect the controller gains of an ASCS against inference attacks, we propose a randomized filtering framework wherein the lateral velocity and yaw rate states of a vehicle are processed by a filter consisting of two components: a nonlinear mapping and a randomizer. The randomizer randomly generates a pair of pseudo gains which are different from the true gains of the ASCS. The nonlinear mapping performs a nonlinear transformation on the lateral velocity and yaw rate states. The nonlinear transformation is in the form of a dynamical system with a feedforward-feedback structure, which allows real-time and causal implementation of the proposed privacy filter. The output of the filter is then shared via the VANET. The optimal design of the randomizer is studied under a privacy constraint, expressed in terms of mutual information, that determines the protection level of the controller gains against inference attacks. It is shown that the optimal randomizer is the solution of a convex optimization problem. By characterizing the distribution of the filter's output, it is shown that this distribution depends on the pseudo gains rather than the true gains. Using information-theoretic inequalities, we analyze the ability of an adversary to estimate the controller gains from the output of the filter. Our analysis shows that the performance of any estimator in recovering the controller gains of an ASCS based on the output of the filter is limited by the privacy constraint.
The performance of the proposed privacy filter is compared with that of an additive noise privacy mechanism. Our numerical results show that the proposed privacy filter significantly outperforms the additive noise mechanism, especially in the low distortion regime.
In this paper, we present a notion of differential privacy (DP) for data that come from different classes. Here, the class membership is private information that needs to be protected. The proposed method is an output perturbation mechanism that adds noise to the released query response such that the analyst is unable to infer the underlying class label. The proposed DP method not only protects the privacy of class-based data but also meets accuracy requirements, and is computationally efficient and practical. We illustrate the efficacy of the proposed method empirically, outperforming the baseline additive Gaussian noise mechanism. We also examine a real-world application and apply the proposed DP method to the autoregressive moving average (ARMA) forecasting method, protecting the privacy of the underlying data source. Case studies on real-world advanced metering infrastructure (AMI) measurements of household power consumption validate the excellent performance of the proposed DP method while also preserving the accuracy of the forecasted power consumption measurements.
Machine learning models are known to memorize the unique properties of individual data points in a training set. This memorization capability can be exploited by several types of attacks to infer information about the training data, most notably, membership inference attacks. In this paper, we propose an approach based on information leakage for guaranteeing membership privacy. Specifically, we propose to use a conditional form of the notion of maximal leakage to quantify the information leaking about individual data entries in a dataset, i.e., the entrywise information leakage. We apply our privacy analysis to the Private Aggregation of Teacher Ensembles (PATE) framework for privacy-preserving classification of sensitive data and prove that the entrywise information leakage of its aggregation mechanism is Schur-concave when the injected noise has a log-concave probability density. The Schur-concavity of this leakage implies that increased consensus among teachers in labeling a query reduces its associated privacy cost. Finally, we derive upper bounds on the entrywise information leakage when the aggregation mechanism uses Laplace distributed noise.
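The role of teacher consensus can be illustrated with a toy version of PATE's Laplace-noise aggregation; the teacher counts and noise scale below are assumed values, and the snippet only demonstrates the stability intuition behind the Schur-concavity result, not the leakage bounds themselves:

```python
import numpy as np

def noisy_aggregate(votes, scale, rng):
    """Release the arg-max class after adding Laplace noise to the teacher
    vote counts (a sketch of the PATE aggregation the paper analyzes)."""
    counts = np.bincount(votes, minlength=2).astype(float)
    return int(np.argmax(counts + rng.laplace(0.0, scale, counts.size)))

rng = np.random.default_rng(0)
high_consensus = np.array([1] * 95 + [0] * 5)    # teachers mostly agree
low_consensus = np.array([1] * 55 + [0] * 45)    # teachers nearly split
agree = np.mean([noisy_aggregate(high_consensus, 5.0, rng) == 1 for _ in range(1000)])
split = np.mean([noisy_aggregate(low_consensus, 5.0, rng) == 1 for _ in range(1000)])
# Strong consensus makes the noisy answer far more stable, reflecting the
# Schur-concavity result: more consensus among teachers lowers the privacy
# cost of answering the query.
```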
In this paper, a generalized multiple access channel (MAC) model for secret key sharing between three terminals is considered. In this model, there are two transmitters and a receiver where all three terminals receive noisy channel outputs. In addition, there is a one-way public channel from the transmitters to the receiver. Each of the transmitters intends to share a secret key with the receiver by using the MAC and the public channel, where the transmitters are eavesdroppers with respect to each other. Two strategies for secret key sharing are considered, namely, the pre-generated key strategy and the two-stage key strategy. For both of them, inner and outer bounds of the secret key capacity region are derived. Furthermore, the effect of the public channel is discussed and the two strategies are compared. In both strategies, it is assumed that the channel outputs at the transmitters are only used for eavesdropping and not as inputs to the encoders. The effect of this assumption in the presence of the public channel is analyzed for some Gaussian MACs.
We design optimal privacy-enhancing and cost-efficient energy management strategies for consumers that are equipped with a rechargeable energy storage. The Kullback-Leibler divergence rate is used as the privacy measure and the expected cost-saving rate as the utility measure. The corresponding energy management strategy is designed by optimizing a weighted sum of both privacy and cost measures over a finite time horizon, which is achieved by formulating our problem as a belief-state Markov decision process. A computationally efficient approximate Q-learning method is proposed as a generalization to high-dimensional problems over an infinite time horizon. Finally, we explicitly characterize a stationary policy that achieves the steady belief state over an infinite time horizon, which greatly simplifies the design of the privacy-preserving energy management strategy. The performance of the practical design approaches is illustrated in numerical experiments.
In this paper, we design privacy-preserving and cost-efficient energy management strategies for smart grid users that are equipped with renewable energy sources. The adversary is assumed to employ a factorial hidden Markov model based inference for load disaggregation, and the corresponding joint log-likelihood of the model is utilized as the privacy measure. The studied dynamic pricing model is applicable to a commodity-limited market, where the price of a unit amount of energy is determined by the users' aggregated power request. The users' energy management strategies are designed under a non-cooperative game framework, where each user aims to optimize a weighted sum objective of both the privacy measure and the energy cost saving. The users' non-cooperative game is shown to admit a unique pure strategy Nash equilibrium. As an extension, a computationally efficient distributed Nash equilibrium seeking method for the energy management strategies is proposed, which also avoids the privacy leakage due to the sharing of payoff functions between users. The performance of practical designs of the energy management strategies in the equilibrium is finally illustrated by numerical experiments.
We study a statistical signal processing privacy problem, where an agent observes useful data Y and wants to reveal the information to a user. Since the useful data is correlated with the private data X, the agent employs a privacy mechanism to generate data U that can be released. We study the privacy mechanism design that maximizes the revealed information about Y while satisfying a strong ℓ1-privacy criterion. When a sufficiently small leakage is allowed, we show that the optimizer distributions of the privacy mechanism design problem have a specific geometry, i.e., they are perturbations of fixed vector distributions. This geometrical structure allows us to use a local approximation of the conditional entropy. By using this approximation, the original optimization problem can be reduced to a linear program, so that an approximate solution for the optimal privacy mechanism can be easily obtained. The main contribution of this work is to consider a non-invertible leakage matrix with non-zero leakage. In our first example, inspired by a watermark application, we first demonstrate the accuracy of the approximation. Then, we employ different measures for utility and privacy leakage to compare the privacy-utility trade-off of our approach with that of other methods. In particular, we show that by allowing small leakage, significant utility can be achieved using our method compared to the case where no leakage is allowed. In the second and third examples, which are based on the MNIST data set and medical applications, we illustrate the suggested design for the disclosed data U. It is shown that the letters of Y which disclose more information about X are combined (randomized) to produce a new letter of U.
In this paper, we study a stochastic disclosure control problem using information-theoretic methods. The useful data to be disclosed depend on private data that should be protected. Thus, we design a privacy mechanism to produce new data which maximizes the disclosed information about the useful data under a strong χ2-privacy criterion. For sufficiently small leakage, the privacy mechanism design problem can be geometrically studied in the space of probability distributions by a local approximation of the mutual information. By using methods from Euclidean information geometry, the original highly challenging optimization problem can be reduced to the problem of finding the principal right-singular vector of a matrix, which characterizes the optimal privacy mechanism. In two extensions, we first consider a scenario where an adversary receives a noisy version of the user's message, and then we look for a mechanism which produces U based on observing X, maximizing the mutual information between U and Y while satisfying the privacy criterion on U and Z under the Markov chain (Z,Y)-X-U.
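The reduction to a principal right-singular vector can be illustrated directly with numpy; the matrix here is a random stand-in for illustration, not the problem-dependent matrix constructed in the paper:

```python
import numpy as np

# After the local (Euclidean) approximation, the optimal perturbation
# direction of the privacy mechanism is the principal right-singular
# vector of a problem-dependent matrix; B below is a random stand-in.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 6))
_, singular_values, vt = np.linalg.svd(B, full_matrices=False)
principal_direction = vt[0]    # right-singular vector of the largest singular value
# It attains the operator norm: ||B v|| equals the largest singular value.
assert np.isclose(np.linalg.norm(B @ principal_direction), singular_values[0])
```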
In this work, we present polar code designs that offer a provably optimal solution for biometric identification and authentication systems under noisy enrollment for certain sources and observation channels. We consider a discrete memoryless biometric source and discrete symmetric memoryless observation channels. It is shown that the proposed polar code designs achieve the fundamental limits with privacy and secrecy constraints. Depending on how the secret keys are extracted and whether the privacy leakage rate should be close to zero, we consider four related setups: (i) the generated secret key system, (ii) the chosen secret key system, (iii) the generated secret key system with zero leakage, and (iv) the chosen secret key system with zero leakage. For the first two setups, (i) and (ii), the privacy level is characterized by the privacy leakage rate. For the last two setups, (iii) and (iv), private keys are additionally employed to achieve a privacy leakage rate close to zero. In setups (i) and (iii), it is assumed that the secret keys are generated, i.e., extracted from biometric information, while in setups (ii) and (iv) the secret keys provided to the system are chosen uniformly at random from some trusted source. This work provides the first examples of code designs achieving the fundamental limits for identification and authentication. Moreover, since the designs are based on polar codes, for which many existing works study low-complexity and short-blocklength coding, they also provide a code design structure and a framework for applying polar codes to biometric identification and authentication.
In this paper, we study fundamental trade-offs in privacy-preserving biometric identification systems with noisy enrollment. The proposed identification systems include helper data, secret keys, and private keys. Helper data are stored in a public database and used for identification. Secret keys are either stored in a secure database or provided to the user, and can be used in a subsequent step, e.g., for authentication. Private keys are provided by users and are also used for identification. We impose a noisy enrollment channel and arbitrarily small privacy and secrecy leakage rates. We characterize the optimal trade-off among the identification, secret key, private key, and helper data rates. Depending on how the secret keys are produced, we study two cases of the proposed privacy-preserving identification systems, where the secret keys are generated and chosen, respectively. By introducing private keys, it is shown that the identification system achieves a privacy leakage rate close to zero in both the generated and chosen secret key settings. The results also show that the identification rate and the secret key rate can be enlarged by increasing the private key rate. This work provides a framework for analyzing privacy-preserving identification systems and insight into the design of optimal systems.