We investigate the performance of delay constrained data transmission over wireless networks without end-to-end feedback. Forward error-correction coding (FEC) is performed at the bit level to combat channel distortions and random linear network coding (RLNC) is performed at the packet level to recover from packet erasures. We focus on the scenario where RLNC re-encoding is performed at intermediate nodes and we assume that any packet that contains bit errors after FEC decoding can be detected and erased. To facilitate explicit characterization of data transmission over network-coded wireless systems, we propose a generic two-layer abstraction of a network that models both bit/symbol-level operations at the lower layer (termed PHY-layer) over several heterogeneous links and packet-level operations at the upper layer (termed NET-layer). Based on this model, we propose a network reduction method to characterize the throughput-reliability function of the end-to-end transmission. Our approach not only reveals an explicit tradeoff between data delivery rate and reliability, but also provides an intuitive visualization of the bottlenecks within the underlying network. We illustrate our approach via a point-to-point link and a relay network and highlight the advantages of this method over capacity-based approaches.
Two-hop relay networks consist of three network nodes: a source, a relay station, and a destination, in which the relay station assists the source to communicate reliably and efficiently with the destination. Moreover, these networks provide a cost-efficient solution for achieving high data rates via cooperative communication between relays with single antennas. In two-hop relay networks, communication from a source to a destination takes place over two phases: in the first phase from the source to the relay station, and in the second phase from the relay station to the destination. Therefore, it is essential to formulate transmission strategies, i.e., TDMA, SDMA, hybrid TDMA-SDMA, and multicast, in terms of resource allocation and beamforming over the two phases, so that interference is taken into account and high data rates are achieved. In this thesis, several relay selection methods are proposed to optimize the network performance. The proposed transmission strategies are compared in different scenario settings in order to analyse and decide on the best strategy in each setting. Based on the simulation results, it is recommended to use an adaptive time-split ratio between the two phases. Brute-force relay selection gives the optimal relay assignment, but the Hungarian assignment algorithm performs very close to the brute-force performance. SDMA with cooperative relay connections and multiple antennas at the relays performs much better than the other transmission strategies. However, the multicast strategy performs much better if second-phase channel knowledge is not available at the base station.
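As an illustration of the relay-assignment step mentioned above, a brute-force search over all one-to-one relay assignments can be sketched as follows. This is a toy example with a hypothetical sum-rate matrix; the thesis's actual system model and performance metric are richer:

```python
from itertools import permutations

def brute_force_relay_assignment(rate):
    """Exhaustively search all one-to-one assignments of relays to
    source-destination pairs and return the one maximizing the sum rate.

    rate[i][j] is a (hypothetical) achievable rate when pair i is served
    by relay j; the matrix is assumed square (one relay per pair).
    """
    n = len(rate)
    best_perm, best_rate = None, float("-inf")
    for perm in permutations(range(n)):  # relay perm[i] serves pair i
        total = sum(rate[i][perm[i]] for i in range(n))
        if total > best_rate:
            best_rate, best_perm = total, perm
    return best_perm, best_rate
```

The search visits all n! permutations, which is only feasible for small networks; the Hungarian algorithm solves the same assignment problem in O(n^3) time, which is consistent with the observation that it performs close to brute force at far lower complexity.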
A pitch enhancement filter is designed with the objective of approaching the optimal rate-distortion trade-off. The filter shows significant perceptual benefits, reaffirming that information-theoretic and perceptual criteria are usually consistent. The filter is easy to implement and can be used as a complement to existing audio codecs. Our experiments show that it can improve the reconstruction quality of the AMR-WB standard.
In this paper, we present a computationally efficient sliding window time updating of the Capon and amplitude and phase estimation (APES) matched filterbank spectral estimators based on the time-variant displacement structure of the data covariance matrix. The presented algorithm forms a natural extension of the most computationally efficient algorithm to date, and offers a significant computational gain as compared to the computational complexity associated with the batch re-evaluation of the spectral estimates for each time-update. Furthermore, via simulations, the algorithm is found to be numerically superior to the time-updated spectral estimate formed from directly updating the data covariance matrix.
Nodes in a sensor network are traditionally used for sensing and data forwarding. However, with the increase of their computational capability, they can be used for in-network data processing, leading to a potential increase of the quality of the networked applications as well as the network lifetime. Visual analysis in sensor networks is a prominent example where the processing power of the network nodes needs to be leveraged to meet the frame rate and the processing delay requirements of common visual analysis applications. The modeling of the end-to-end performance for such networks is, however, challenging, because in-network processing violates the flow conservation law, which is the basis for most queuing analysis. In this work we propose to solve this methodological challenge through appropriately scaling the arrival and the service processes, and we develop probabilistic performance bounds using stochastic network calculus. We use the developed model to determine the main performance bottlenecks of networked visual processing. Our numerical results show that an end-to-end delay of 2-3 frame lengths is obtained with a violation probability in the order of 10^-6. Simulation shows that the obtained bounds overestimate the end-to-end delay by no more than 10%.
Compressive Sensing theory combines signal sampling and compression for sparse signals, resulting in a reduction of the sampling rate and the computational complexity of the measurement system. In recent years, many recovery algorithms have been proposed to reconstruct the signal efficiently. Look Ahead OMP (LAOMP) is a recently proposed method which uses a look ahead strategy and performs significantly better than other greedy methods. In this paper, we propose a modification to the LAOMP algorithm to choose the look ahead parameter L adaptively, thus reducing the complexity of the algorithm without compromising on the performance. The performance of the algorithm is evaluated through Monte Carlo simulations.
Numerous algorithms have been proposed recently for sparse signal recovery in Compressed Sensing (CS). In practice, the number of measurements can be very limited due to the nature of the problem, and/or the underlying statistical distribution of the non-zero elements of the sparse signal may not be known a priori. It has been observed that the performance of any sparse signal recovery algorithm depends on these factors, which makes the selection of a suitable sparse recovery algorithm difficult. To address such situations, we propose a fusion framework in which we employ multiple sparse signal recovery algorithms and fuse their estimates to get a better estimate. Theoretical results justifying the performance improvement are shown. The efficacy of the proposed scheme is demonstrated by Monte Carlo simulations using synthetic sparse signals and ECG signals selected from the MIT-BIH database.
Compressive Sampling Matching Pursuit (CoSaMP) is one of the popular greedy methods in the emerging field of Compressed Sensing (CS). In addition to its appealing empirical performance, CoSaMP also has strong theoretical guarantees for convergence. In this paper, we propose a modification in CoSaMP to adaptively choose the dimension of the search space in each iteration, using a threshold based approach. Using Monte Carlo simulations, we show that this modification improves the reconstruction capability of the CoSaMP algorithm in clean as well as noisy measurement cases. From empirical observations, we also propose an optimum value for the threshold to use in applications.
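For reference, the standard CoSaMP iteration can be sketched as follows. This is a minimal illustration with a fixed candidate-set size of 2k; the adaptive variant proposed in the paper would replace this fixed size with a threshold-based choice:

```python
import numpy as np

def cosamp(A, y, k, n_iter=10):
    """Standard CoSaMP: in each iteration, merge the 2k columns most
    correlated with the residual into the current support, solve least
    squares on the merged set, and prune back to the k largest entries."""
    n = A.shape[1]
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(n_iter):
        proxy = A.T @ residual
        # Standard CoSaMP always adds 2k candidates; the adaptive variant
        # would instead keep proxy entries above a chosen threshold.
        omega = np.argsort(-np.abs(proxy))[:2 * k]
        merged = sorted(set(int(t) for t in omega) | set(np.flatnonzero(x)))
        b, *_ = np.linalg.lstsq(A[:, merged], y, rcond=None)
        x = np.zeros(n)
        keep = np.argsort(-np.abs(b))[:k]
        for t in keep:  # prune: retain only the k largest coefficients
            x[merged[int(t)]] = b[int(t)]
        residual = y - A @ x
        if np.linalg.norm(residual) < 1e-10:
            break
    return x
```

The pruning step is what distinguishes CoSaMP from OMP: candidate columns admitted in one iteration can be discarded in a later one.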
Orthogonal Matching Pursuit (OMP) is a popular greedy pursuit algorithm widely used for sparse signal recovery from an undersampled measurement system. However, one of the main shortcomings of OMP is its irreversible selection procedure of columns of the measurement matrix; i.e., OMP does not allow removal of columns wrongly estimated in any of the previous iterations. In this paper, we propose a modification in OMP, using the well known Subspace Pursuit (SP), to refine the subspace estimated by OMP at any iteration and hence boost the sparse signal recovery performance of OMP. Using simulations we show that the proposed scheme improves the performance of OMP in clean and noisy measurement cases.
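The OMP selection step, together with an SP-style refinement that can discard wrongly chosen columns, can be sketched as follows. This is a minimal illustration under our own naming, not the exact scheme proposed in the paper:

```python
import numpy as np

def omp(A, y, k):
    """Standard OMP: grow the support one column at a time by maximum
    correlation with the residual; selections are never revisited."""
    n = A.shape[1]
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(n)
    x[support] = x_s
    return x, set(support)

def sp_refine(A, y, support, k):
    """One SP-style refinement step: enlarge the support with the k
    columns most correlated with the residual, solve least squares on
    the union, and keep the k largest coefficients. Unlike plain OMP,
    this lets wrongly selected columns be discarded."""
    n = A.shape[1]
    idx = sorted(support)
    x_s, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    residual = y - A[:, idx] @ x_s
    extra = np.argsort(-np.abs(A.T @ residual))[:k]
    union = sorted(set(idx) | set(int(t) for t in extra))
    x_u, *_ = np.linalg.lstsq(A[:, union], y, rcond=None)
    keep = np.argsort(-np.abs(x_u))[:k]
    new_support = sorted(union[int(t)] for t in keep)
    x_f, *_ = np.linalg.lstsq(A[:, new_support], y, rcond=None)
    out = np.zeros(n)
    out[new_support] = x_f
    return out, set(new_support)
```

Interleaving such refinement steps with OMP iterations is the essence of allowing the estimated subspace to be corrected before the next greedy selection.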
Although many sparse recovery algorithms have been proposed recently in compressed sensing (CS), it is well known that the performance of any sparse recovery algorithm depends on many parameters like dimension of the sparse signal, level of sparsity, and measurement noise power. It has been observed that a satisfactory performance of the sparse recovery algorithms requires a minimum number of measurements. This minimum number is different for different algorithms. In many applications, the number of measurements is unlikely to meet this requirement and any scheme to improve performance with fewer measurements is of significant interest in CS. Empirically, it has also been observed that the performance of the sparse recovery algorithms also depends on the underlying statistical distribution of the nonzero elements of the signal, which may not be known a priori in practice. Interestingly, it can be observed that the performance degradation of the sparse recovery algorithms in these cases does not always imply a complete failure. In this paper, we study this scenario and show that by fusing the estimates of multiple sparse recovery algorithms, which work with different principles, we can improve the sparse signal recovery. We present the theoretical analysis to derive sufficient conditions for performance improvement of the proposed schemes. We demonstrate the advantage of the proposed methods through numerical simulations for both synthetic and real signals.
For compressed sensing (CS), we develop a new scheme inspired by data fusion principles. In the proposed fusion based scheme, several CS reconstruction algorithms participate and they are executed in parallel, independently. The final estimate of the underlying sparse signal is derived by fusing the estimates obtained from the participating algorithms. We theoretically analyze this fusion based scheme and derive sufficient conditions for achieving a better reconstruction performance than any participating algorithm. Through simulations, we show that the proposed scheme has two specific advantages: 1) it provides good performance in a low dimensional measurement regime, and 2) it can deal with different statistical natures of the underlying sparse signals. The experimental results on real ECG signals show that the proposed scheme demands fewer CS measurements for an approximate sparse signal reconstruction.
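One natural instantiation of such a fusion rule can be sketched as follows; this is an illustration assuming least-squares refitting over the union of the participating supports, and the paper's exact scheme may differ in details:

```python
import numpy as np

def fuse_estimates(A, y, estimates, k):
    """Fuse sparse estimates from several participating algorithms:
    take the union of their supports, solve least squares restricted to
    that union, and retain the k largest coefficients. Even when no
    single algorithm finds the full support, the union often does."""
    n = A.shape[1]
    union = sorted({int(i) for x in estimates for i in np.flatnonzero(x)})
    x_u, *_ = np.linalg.lstsq(A[:, union], y, rcond=None)
    keep = np.argsort(-np.abs(x_u))[:k]
    support = sorted(union[int(t)] for t in keep)
    x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    out = np.zeros(n)
    out[support] = x_s
    return out
```

Because each participating algorithm runs independently, the fusion step adds only the cost of two restricted least-squares solves on top of the parallel reconstructions.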
Recently, it has been shown that fusing the estimates of a set of sparse recovery algorithms results in an estimate better than the best estimate in the set, especially when the number of measurements is very limited. Though these schemes provide better sparse signal recovery performance, their higher computational requirement makes them less attractive for low latency applications. To alleviate this drawback, in this paper we develop a progressive fusion based scheme for low latency applications in compressed sensing. In progressive fusion, the estimates of the participating algorithms are fused progressively according to the availability of the estimates. The availability of an estimate depends on the computational complexity of the corresponding algorithm, and in turn on its latency. Unlike other fusion algorithms, the proposed progressive fusion algorithm provides quick interim results and successive refinements during the fusion process, which is highly desirable in low latency applications. We analyse the developed scheme by providing sufficient conditions for improvement of the CS reconstruction quality and show its practical efficacy by numerical experiments using synthetic and real-world data.
Greedy Pursuits are very popular in Compressed Sensing for sparse signal recovery. Though many of the Greedy Pursuits possess elegant theoretical guarantees for performance, it is well known that their performance depends on the statistical distribution of the non-zero elements in the sparse signal. In practice, the distribution of the sparse signal may not be known a priori. It is also observed that the performance of Greedy Pursuits degrades as the number of available measurements decreases below a method-dependent threshold value. To improve the performance in these situations, we introduce a novel fusion framework for Greedy Pursuits and also propose two algorithms for sparse recovery. Through Monte Carlo simulations we show that the proposed schemes improve sparse signal recovery in clean as well as noisy measurement cases.
In this thesis we treat several problems related to information-theoretic security. The wiretap channel is the simplest information-theoretic model that treats security, and in the first chapters of the thesis we design practical codes for the wiretap channel.
First, we design two edge type low-density parity-check (LDPC) codes for the binary erasure wiretap channel (BEC-WT). For the scenario where the main channel is error-free and the eavesdropper's channel is a binary erasure channel (BEC), we construct a sequence of codes that achieves the secrecy capacity. These codes are based on standard LDPC codes for the BEC. However, our construction does not work when the main channel is not error-free. In that case we use a method based on linear programming to optimize the degree distributions of our codes, which allows us to design code ensembles that perform close to the secrecy capacity of the BEC-WT. We then generalize a method of Méasson, Montanari, and Urbanke for computing the conditional entropy of the message at the eavesdropper.
We then show that Arıkan's polar codes can be used to achieve the whole capacity-equivocation region for degraded symmetric wiretap channels with binary input alphabet. We also design polar codes for the decode-and-forward protocol over the physically degraded relay channel and for the bidirectional broadcast channel with common and confidential messages. We show that the codes achieve the capacity and the capacity-equivocation region of these channel models.
In the following chapter we treat a Gaussian channel model. We show that Joseph and Barron's sparse regression codes (SPARCs) can be used to achieve the secrecy capacity of wiretap channels with Gaussian noise and for the decode-and-forward protocol over the relay channel. We also treat the generation of secret keys from correlated Gaussian sources with the help of a public channel of limited capacity. We show that SPARCs achieve the capacity region of this problem.
In the final chapter we treat secret-key generation over fading channels. We first consider a scenario with multiple antennas and high signal-to-noise ratio (SNR) and propose a protocol based on training and randomness sharing. We then consider a scenario with a single antenna at each terminal and low SNR, where we restrict one terminal to transmitting only pilot signals. We propose a protocol based on sporadic training and opportunistic transmission with a wiretap code and show that it is optimal.
We consider code design for Wyner’s wiretap channel. Optimal coding schemes for this channel require an overall code that is capacity achieving for the main channel, partitioned into smaller subcodes, all of which are capacity achieving for the wiretapper’s channel. To accomplish this we introduce two edge type low density parity check (LDPC) ensembles for the wiretap channel. For the scenario when the main channel is error free and the wiretapper’s channel is a binary erasure channel (BEC) we find secrecy capacity achieving code sequences based on standard LDPC code sequences for the BEC. However, this construction does not work when there are also erasures on the main channel. For this case we develop a method based on linear programming to optimize two edge type degree distributions. Using this method we find code ensembles that perform close to the secrecy capacity of the binary erasure wiretap channel (BEC-WT). We generalize a method of Méasson, Montanari, and Urbanke in order to compute the conditional entropy of the message at the wiretapper. This conditional entropy is a measure of how much information is leaked to the wiretapper. We apply this method to relatively simple ensembles and find that they show very good secrecy performance.
Based on the work of Kudekar, Richardson, and Urbanke, which showed that regular spatially coupled codes are capacity achieving for the BEC, we construct a regular two edge type spatially coupled ensemble. We show that this ensemble achieves the whole capacity-equivocation region for the BEC-WT.
We also find a coding scheme using Arıkan's polar codes. These codes achieve the whole capacity-equivocation region for any symmetric binary input wiretap channel where the wiretapper’s channel is degraded with respect to the main channel.
We study the low SNR scaling of the non-coherent secret-key agreement capacity over a reciprocal, block-fading channel. For the restricted class of strategies, where one of the nodes is constrained to transmit pilot-only symbols, we show that the secret-key capacity scales as SNR · log T if T ≤ 1/SNR, where T denotes the coherence period, and as SNR · log(1/SNR) otherwise. Our upper bound is inspired by the genie-aided argument of Borade and Zheng (IT-Trans 2010). Our lower bound is based on bursty communication, channel training, and secret message transmission.
We study secret-key agreement over a non-coherent block-fading multiple input multiple output (MIMO) wiretap channel. We give an achievable scheme based on training and source emulation and analyze the rate in the high SNR regime. Based on this analysis we find the optimal number of antennas to use for training. Our main result is that if the sum of the number of antennas at Alice and Bob is larger than the coherence time of the channel, the achievable rate does not depend on the number of antennas at Eve. In this case source emulation is not needed, and using only training is optimal. We also consider the case when there is no public channel available. In this case we show that secret-key agreement is still possible by using the wireless channel for discussion, giving the same number of secure degrees of freedom as in the case with a public channel.
We consider transmission over a binary erasure wiretap channel using the code construction method introduced by Rathi et al. based on two edge type Low-Density Parity-Check (LDPC) codes and the coset encoding scheme. By generalizing the method of computing conditional entropy for standard LDPC ensembles introduced by Méasson, Montanari, and Urbanke to two edge type LDPC ensembles, we show how the equivocation for the wiretapper can be computed. We find that relatively simple constructions give very good secrecy performance and are close to the secrecy capacity.
We show that polar codes asymptotically achieve the whole capacity-equivocation region for the wiretap channel when the wiretapper's channel is degraded with respect to the main channel, and the weak secrecy notion is used. Our coding scheme also achieves the capacity of the physically degraded receiver-orthogonal relay channel. We show simulation results for moderate block length for the binary erasure wiretap channel, comparing polar codes and two edge type LDPC codes.
The integration of multiple services such as the transmission of private, common, and confidential messages at the physical layer is becoming important for future wireless networks in order to increase spectral efficiency. In this paper, bidirectional relay networks are considered, in which a relay node establishes bidirectional communication between two other nodes using a decode-and-forward protocol. In the broadcast phase, the relay transmits additional common and confidential messages, which then requires the study of the bidirectional broadcast channel (BBC) with common and confidential messages. This channel generalizes the broadcast channel with receiver side information considered by Kramer and Shamai. Low complexity polar codes are constructed that achieve the capacity region of both the degraded symmetric BBC, and the BBC with common and confidential messages. The use of polar codes allows an intuitive interpretation of how to incorporate receiver side information and secrecy constraints as different sets of frozen bits at the different receivers for an optimal code design. In order to show that the constructed codes achieve capacity, a tighter bound on the cardinality of an auxiliary random variable used in the converse is found using a method by Salehi.
We consider the bidirectional broadcast channel with common and confidential messages. We show that polar codes achieve the capacity of binary-input symmetric bidirectional broadcast channels with confidential messages, if one node's channel is a degraded version of the other node's channel. We also find a new bound on the cardinality of the auxiliary random variable in this setup.
A scenario of distributed sensing for networked control systems is considered and a new approach to distributed sensing and transmission is presented. The state process of a scalar first order linear time invariant dynamical system is sensed by a network of wireless sensors, which then instantaneously transmit their measurements to a remotely situated control unit over parallel Gaussian channels. The control unit aims to stabilize the system in mean square sense. The proposed non-linear delay-free sensing and transmission strategy is compared with the well-known amplify-and-forward strategy, using the LQG control cost as a figure of merit. It is demonstrated that the proposed nonlinear scheme outperforms the best linear scheme even when there are only two sensors in the network. The proposed sensing and transmission scheme can be implemented with a reasonable complexity and it is shown to be robust to the uncertainties in the knowledge of the sensors about the statistics of the measurement noise and the channel noise.
We illustrate how channel optimized vector quantization (COVQ) can be used for channels with both bit-errors and bit-erasures. First, a memoryless channel model is presented, and the performance of COVQs trained for this channel is evaluated for an i.i.d. Gaussian source. Then, the new method is applied in implementing an error-robust sub-band image coder, and we present image results that illustrate the resulting performance. Our experiments show that the new approach is able to outperform a traditional scheme based on separate source and channel coding.
A new design approach for multiple description vector quantizers over more than two channels is presented. The design is inspired by the concept of channel optimized vector quantization. While most previous works have split the decoder into several independent entities, identifying the appropriate channel model makes it straightforward to implement the multiple description design problem using only one decoder. Our simulation results compare systems with 2, 4 and 8 channels. We demonstrate significant gains over previous designs, as well as over a benchmark scheme based on separate quantization and forward erasure-correcting error control.
A new design approach for multiple description coding, based on multi-stage vector quantizers, is presented. The design is not limited to systems with two descriptions, but is also well suited for the n-descriptions case. Inspired by the concept of channel optimized vector quantization, the design can easily be tailored to suit different erasure channels, e.g. packet erasure channels with memory (burst-losses). The optimization procedure used in the design takes a sample-iterative approach. All stage codebooks are updated simultaneously for each vector in the training database. The resulting algorithm has the behaviour of a simulated annealing algorithm, with several good properties, e.g. it usually provides codebooks with good index assignments. Image results are presented for systems with 2 and 4 channels. The image coder is based on a subband transform followed by 64-dimensional vector quantization, to illustrate the capacity of the design to handle large problem sizes.
This paper is the first part of an investigation of whether the capacity of a binary-input memoryless symmetric channel under ML decoding can be achieved asymptotically by using non-binary LDPC codes. We consider (l,r)-regular LDPC codes both over finite fields and over the general linear group and compute their asymptotic binary weight distributions in the limit of large blocklength and large alphabet size. Surprisingly, the average binary weight distributions that we obtain do not tend to the binomial one for values of the normalized binary weight ω smaller than 1 - 2^(-l/r). However, this does not mean that non-binary codes do not achieve the capacity asymptotically, but rather that there exists some exponentially small fraction of codes in the ensemble which contains an exponentially large number of codewords of small weight. The justification of this fact is beyond the scope of this paper and will be given in [1].
We consider the design of bilayer-expurgated low-density parity-check (BE-LDPC) codes as part of a decode-and-forward protocol for use over the full-duplex relay channel. A new ensemble of codes, termed multi-edge-type bilayer-expurgated LDPC (MET-BE-LDPC) codes, is introduced where the BE-LDPC code design problem is transformed into the problem of optimizing the multinomials of a multi-edge-type LDPC code. We propose two design strategies for optimizing MET-BE-LDPC codes; the bilayer approach is preferred when the difference in SNR between the source-to-relay and the source-to-destination channels is small, while the bilayer approach with intermediate rates is preferred when this difference is large. In both proposed design strategies multi-edge-type density evolution is used for code optimization. The resulting MET-BE-LDPC codes exhibit improved threshold and bit-error-rate performance as compared to previously reported bilayer LDPC codes.
We investigate the error performance of multidimensional constellations in the multiple access and broadcast channels. More specifically, we provide closed-form expressions for the pairwise error probability (PEP) of joint maximum likelihood detection, for multiuser signaling in the presence of additive white Gaussian noise and Rayleigh fading. Arbitrary numbers of users and multidimensional signal sets are assumed, while the provided formula for the PEP is a function of the dimension-wise distances of the multidimensional constellation. Furthermore, a useful upper bound on the average symbol error probability is also obtained through the union bound. The analysis is applied to sparse code multiple access systems. The analytical results are validated successfully through simulations, and show their importance in multidimensional constellation design.
This paper investigates the error performance of sparse code multiple access (SCMA) networks with multiple access channels (MAC) and broadcast channels (BC). We give the closed-form expression for the pairwise error probability (PEP) of joint maximum likelihood (ML) detection for multiuser signals over additive white Gaussian noise (AWGN) and Rayleigh fading channels with an arbitrary number of users and multidimensional codebooks. An upper bound on the average symbol error rate (SER) is calculated. The bound is tight for both the AWGN and Rayleigh fading channels in the high SNR region. The analytical bounds are compared with simulations, and the results confirm the effectiveness of the analysis for both AWGN and Rayleigh fading channels.
Encoder-decoder design is considered for a closed-loop scalar control system with feedback transmitted over a binary symmetric channel. We propose an iterative procedure which can jointly optimize adaptive encoder-decoder pairs for a certainty equivalence controller. The goal is to minimize a design criterion, in particular, the linear quadratic (LQ) cost function over a finite horizon. The algorithm leads to a practically feasible design of time-varying non-uniform encoding and decoding. Numerical results demonstrate the promising performance obtained by employing the proposed iterative optimization algorithm.
Networked embedded control systems are present almost everywhere. A recent trend is to introduce radio communication in these systems to increase mobility and flexibility. Network nodes, such as the sensors, are often simple devices with limited computing and transmission power and low storage capacity, so an important problem concerns how to optimize the use of resources to provide sustained overall system performance. The approach to this problem taken in the thesis is to analyze and design the communication and control application layers in an integrated manner. We focus in particular on cross-layer design techniques for closed-loop control over non-ideal communication channels, motivated by future control systems with very low-rate and highly quantized sensor communication over noisy links. Several fundamental problems in the design of source–channel coding and optimal control for these systems are discussed.
The thesis consists of three parts. The first and main part is devoted to the joint design of the coding and control for linear plants, whose state feedback is transmitted over a finite-rate noisy channel. The system performance is measured by a finite-horizon linear quadratic cost. We discuss equivalence and separation properties of the system, and conclude that although certainty equivalence does not hold in general it can still be utilized, under certain conditions, to simplify the overall design by separating the estimation and the control problems. An iterative optimization algorithm for training the encoder–controller pairs, taking channel errors into account in the quantizer design, is proposed. Monte Carlo simulations demonstrate promising improvements in performance compared to traditional approaches.
In the second part of the thesis, we study the rate allocation problem for state feedback control of a linear plant over a noisy channel. Optimizing a time-varying communication rate, subject to a maximum average-rate constraint, can be viewed as a method to overcome the limited bandwidth and energy resources and to achieve better overall performance. The basic idea is to allow the sensor and the controller to communicate with a higher data rate when it is required. One general obstacle of optimal rate allocation is that it often leads to a non-convex and non-linear problem. We deal with this challenge by using high-rate theory and Lagrange duality. It is shown that the proposed method gives good performance compared to some other rate allocation schemes.
In the third part, encoder–controller design for Gaussian channels is addressed. Optimizing for the Gaussian channel increases the controller complexity substantially because the channel output alphabet is now infinite. We show that an efficient controller can be implemented using Hadamard techniques. Thereafter, we propose a practical controller that makes use of both soft and hard channel outputs.
The problem of allocating communication resources to multiple plants in a networked control system is investigated. In the presence of a shared communication medium, a total transmission rate constraint is imposed. For the purpose of optimizing the rate allocation to the plants over a finite horizon, two objective functions are considered. The first one is a single-objective function, and the second one is a multi-objective function. Because of the difficulty of deriving closed-form expressions for these functions, which depend on the instantaneous communication rate, an approximation is proposed using high-rate quantization theory. It is shown that the approximate objective functions are convex in the region of interest, both in the scalar case and in the multi-objective case. This allows us to establish a linear control policy, given by classical linear quadratic Gaussian theory, as a function of the channel. Based on this result, a new, complex relation between the control performance and the channel error probability is characterized.
In this paper, we study the iterative optimization of the encoder-controller pair for closed-loop control of a multi-dimensional plant over a noisy discrete memoryless channel. With the objective of minimizing the expected linear quadratic cost over a finite horizon, we propose a joint design of the sensor measurement quantization, channel error protection, and optimal controller actuation. It was shown in our previous work that, although this optimization problem is known to be hard in general, an iterative design procedure can be derived to obtain a locally optimal solution. However, in the vector case, optimizing the encoder for a fixed controller is in general not practically feasible due to the curse of dimensionality. In this paper, we propose a novel approach that uses approximate dynamic programming (ADP) to implement a computationally feasible encoder updating policy with promising performance. In particular, we introduce encoder updating rules based on the rollout approach. Numerical experiments are carried out to demonstrate the performance obtained by employing the proposed iterative design procedure and to compare it with other relevant schemes.
In this paper, we consider the joint optimization of the encoder-controller pair for closed-loop control with state feedback over a binary-input Gaussian channel (BGC). The objective is to minimize the expected linear quadratic cost over a finite horizon. This encoder-controller optimization problem is hard in general, mostly because of the curse of dimensionality. The result of this paper is a synthesis technique for a computationally feasible suboptimal controller that exploits both the soft and hard information of the channel outputs. The proposed controller is efficient in the sense that it embraces measurement quantization, error protection, and control over a finite-input, infinite-output noisy channel. How to implement this controller effectively is also addressed in the paper; in particular, this is done by using Hadamard techniques. Numerical experiments are carried out to verify the promising gain offered by the combined controller in comparison to the hard-information-based controller.
Optimal rate allocation in a networked control system with limited communication resources is instrumental to achieve satisfactory overall performance. In this paper, a practical rate allocation technique for state estimation in linear dynamic systems over a noisy channel is proposed. The method consists of two steps: (i) the overall distortion is expressed as a function of rates at all time instants by means of high-rate quantization theory, and (ii) a constrained optimization problem to minimize the overall distortion is solved by using Lagrange duality. Monte Carlo simulations illustrate the proposed scheme, which is shown to have good performance when compared to arbitrarily selected rate allocations.
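As a toy illustration of the two-step method: assuming (hypothetically) the standard high-rate distortion model D_t(r_t) = c_t·2^(-2·r_t) at each time instant t, minimizing the total distortion subject to a total rate budget Σ_t r_t = R admits a closed-form Lagrange-dual solution. This is a minimal sketch under that assumed model, not the paper's exact formulation:

```python
import math

def allocate_rates(c, R):
    """Closed-form rate allocation minimizing sum_t c_t * 2**(-2*r_t)
    subject to sum_t r_t = R, obtained via Lagrange duality.

    Hypothetical toy model for illustration:
    c : per-instant distortion sensitivities (positive)
    R : total rate budget in bits
    """
    T = len(c)
    # Geometric mean of the sensitivities sets the dual "water level".
    geo_mean = math.exp(sum(math.log(x) for x in c) / T)
    # Interior stationary point: equalizes the per-instant distortions.
    return [R / T + 0.5 * math.log2(x / geo_mean) for x in c]
```

For equal sensitivities the allocation is uniform; instants with larger c_t (more distortion-sensitive) receive more rate. The sketch ignores the nonnegativity constraint r_t ≥ 0, which the interior solution can violate for very skewed sensitivities.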
Optimal rate allocation in a networked control system with highly limited communication resources is instrumental to achieve satisfactory overall performance. In this paper, we propose a rate allocation technique for state feedback control in linear dynamic systems over a noisy channel. Our method consists of two steps: (i) the overall distortion is expressed as a function of rates at all time instants by means of high-rate quantization theory, and (ii) a constrained optimization problem to minimize the overall distortion is solved. We show that a non-uniform quantization is in general the best strategy for state feedback control over noisy channels. Monte Carlo simulations illustrate the proposed scheme, which is shown to have good performance compared to arbitrarily selected rate allocations.
Utility maximization in networked control systems (NCSs) is difficult in the presence of limited sensing and communication resources. In this paper, a new communication rate optimization method for state feedback control over a noisy channel is proposed. Linear dynamic systems with quantization errors, limited transmission rate, and noisy communication channels are considered. The most challenging part of the optimization is that no closed-form expressions are available for assessing the performance, and the optimization problem is nonconvex. The proposed method consists of two steps: (i) the overall NCS performance measure is expressed as a function of the rates at all time instants by means of high-rate quantization theory, and (ii) a constrained optimization problem to minimize a weighted quadratic objective function is solved. The proposed method is applied to the problems of state feedback control and state estimation. Monte Carlo simulations illustrate the performance of the proposed rate allocation. It is shown numerically that the proposed method performs better than arbitrarily selected rate allocations. It is also shown that for feedback control over noisy channels, nonuniform rate allocation can in certain cases outperform the uniform rate allocation commonly considered in quantized control systems.
To achieve satisfactory overall performance, optimal rate allocation in a networked control system with highly limited communication resources is instrumental. In this paper, a rate allocation technique for state feedback control in linear dynamic systems over a noisy channel is proposed. The method consists of two steps: (i) the overall cost is expressed as a function of the rates at all time instants by means of high-rate quantization theory, and (ii) a constrained optimization problem to minimize this cost is solved. It is shown that non-uniform quantization is in general the best strategy for state feedback control over noisy channels. Monte Carlo simulations illustrate the proposed scheme, which is shown to have good performance when compared to arbitrarily selected rate allocations.
We study a closed-loop scalar control system with feedback transmitted over a discrete noisy channel. For this problem, we propose a joint design of the state measurement quantization, protection against channel errors, and control. The goal is to minimize a linear quadratic cost function over a finite horizon. In particular we focus on a special case where we verify that certainty equivalence holds, and for this case we design joint source-channel encoder and decoder/estimator pairs. The proposed algorithm leads to a practically feasible design of time-varying non-uniform quantization and control. Numerical results demonstrate the promising performance obtained by employing the proposed iterative optimization algorithm.
Bandwidth limitations and energy constraints set severe restrictions on the design of control systems that utilize wireless sensor and actuator networks. It is common in these systems that a sensor node need not communicate with the controller continuously, but only at certain instants when it detects a disturbance event. In this paper, such a scenario is studied, with particular emphasis on efficient utilization of the shared communication resources. Encoder-decoder design for an event-based control system, with the plant affected by pulse disturbances, is considered. A new iterative procedure is proposed that can jointly optimize encoder-decoder pairs for a certainty equivalent controller. The goal is to minimize a design criterion, in particular a linear quadratic cost over a finite horizon. The algorithm leads to a feasible design of time-varying non-uniform encoder-decoder pairs. Numerical results demonstrate significant improvements in performance compared to a system using uniform quantization.
We study a closed-loop control system with state feedback transmitted over a noisy discrete memoryless channel. With the objective of minimizing the expected linear quadratic cost over a finite horizon, we propose a joint design of the sensor measurement quantization, channel error protection, and controller actuation. It is argued that, although this encoder-controller optimization problem is known to be hard in general, an iterative design procedure can be derived in which the controller is optimized for a fixed encoder, then the encoder is optimized for a fixed controller, and so on. Several properties of such a scheme are discussed. For a fixed encoder, we study how to optimize the controller given that full or partial side-information about the symbols received at the controller is available at the encoder. It is shown that the certainty equivalence controller is optimal when the encoder is optimal and has full side-information. For a fixed controller, expressions for the optimal encoder are given, and implications are discussed for the special cases when process, sensor, or channel noise is not present. Numerical experiments are carried out to demonstrate the performance obtained by employing the proposed iterative design procedure and to compare it with other relevant schemes.
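The alternation of optimizing one component for a fixed other is structurally similar to Lloyd's algorithm for quantizer design, where the encoder (partition) and decoder (codebook) are optimized in turn. A scalar toy sketch of that alternation, offered only as an analogy and not as the paper's encoder-controller iteration:

```python
def lloyd_quantizer(samples, levels, iters=20):
    """Alternating (Lloyd-style) quantizer design: with the decoder
    (codebook) fixed, optimize the encoder (nearest-codeword partition);
    with the encoder fixed, optimize the decoder (cell centroids).
    Toy scalar analogue of fixed-encoder / fixed-controller alternation.
    """
    pts = sorted(samples)
    # Deterministic initialization: spread codewords over the data range.
    codebook = [pts[(2 * k + 1) * len(pts) // (2 * levels)]
                for k in range(levels)]
    for _ in range(iters):
        # Encoder step: partition samples by nearest codeword.
        cells = [[] for _ in range(levels)]
        for s in samples:
            idx = min(range(levels), key=lambda k: (s - codebook[k]) ** 2)
            cells[idx].append(s)
        # Decoder step: move each codeword to its cell's centroid.
        codebook = [sum(c) / len(c) if c else codebook[k]
                    for k, c in enumerate(cells)]
    return sorted(codebook)
```

Each alternation step can only decrease the empirical distortion, so the iteration converges to a local optimum; the same monotonicity argument is what makes such fix-one-optimize-the-other procedures attractive despite the overall problem being hard.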
We study a closed-loop control system with feedback transmitted over a noisy discrete memoryless channel. We design encoder-controller pairs that jointly optimize the sensor measurement quantization, protection against channel errors, and control. The design goal is to minimize an expected linear quadratic cost over a finite horizon. As a result of deriving optimality criteria for this problem, we present new results on the validity of the separation principle subject to certain assumptions. More precisely, we show that the certainty equivalence controller is optimal when the encoder is optimal and has full side-information about the symbols received at the controller. We then use this result to formulate tractable design criteria in the general case. Finally, numerical experiments are carried out to demonstrate the performance obtained by various design methods.
We study a closed-loop multivariable control system with sensor feedback transmitted over a discrete noisy channel. For this problem, we propose a joint design of the state measurement quantization, protection against channel errors, and control. The proposed algorithm leads to a practically feasible design of time-varying non-uniform encoding and control. Numerical results demonstrate the performance obtained by employing the proposed iterative optimization algorithm.
Recent studies of sensor networks have been prompted by the rapid development of wireless communication and are motivated by promising applications in a number of areas. The development of sensor networks, however, faces several challenges, one of which is personal privacy. Secrecy-constrained sensor networks have been studied in previous work, where the privacy risk posed by eavesdropping is taken into account in the design of distributed physical-layer detection. However, a rigorous investigation of the receiver operating characteristics (ROC) has been lacking. This thesis focuses on a parallel distributed detection network with a fusion node that makes the final decision. We show how the fusion ROC changes with different settings when limited secrecy is taken into account. First, a Bayesian detection scenario is studied. Then, certain parameters are modified in order to study their impact on the ROC. Finally, a second scenario is analyzed, in which the eavesdropper is modeled as a Neyman-Pearson detector while the settings of the Bayesian distributed detection system still hold.
This paper investigates the problem of secret key generation over a wiretap channel when the terminals have access to correlated sources. These sources are independent of the main channel and the users observe them before the transmission takes place. A novel achievable scheme for this model is proposed and is shown to be optimal under certain less noisy conditions. This result improves upon the existing literature where the more stringent condition of degradedness was needed.
Vehicular communication is currently seen as a key technology for enabling safer and more comfortable driving. In the general effort to reduce the number of casualties and improve the traffic flow despite an increasing number of vehicles, this field has a promising future. IEEE 802.11p has been chosen as the standard for the physical-layer (PHY) design of wireless vehicular communication. However, the channels encountered in such situations pose several challenges for reliable communication. Time and frequency selectivity caused by dispersive environments and high mobility lead to doubly-selective channels, and the systems are expected to operate properly in spite of these disturbances. In this thesis, we focus on the design of receivers working at the PHY layer, with an emphasis on limited complexity. This poses strong constraints on the algorithms, which already have to cope with the limited amount of information provided by the training sequences. The solutions considered all involve joint channel estimation and decoding, characterized by the use of an iterative structure. Such structures allow the channel estimation to benefit from the knowledge provided by the decoder, which ultimately decreases the error rate. Following previous work, we use algorithms based on Minimum Mean Square Error (MMSE) or Maximum A Posteriori (MAP) estimation. These receivers were modified to operate on full frames instead of individual subcarriers, and various improvements were studied. We provide a detailed analysis of the complexity of the proposed designs, along with an evaluation of their decoding performance, and discuss the trade-offs between these two measures. Part of these analyses is used in [10]. Finally, we give an insight into some considerations that may arise when implementing the algorithms on testbeds.
In the year 2000, the Swedish telecom regulator, Post & Telestyrelsen (PTS), granted four licenses for the operation of 3G systems in a "beauty contest". To verify the coverage and the license requirements, PTS has developed a test procedure in which the field strength of the primary Common Pilot Channel (CPICH) is measured in a drive test. Designing such a test poses a number of challenges, mainly because the measurement accuracy in 3G needs to be extremely high: even a small systematic error of ~1 dB could, in Sweden, have the consequence that each operator would have to build more than 1000 extra sites, at a staggering cost of roughly 1 billion SEK. The present paper gives an overview of the considerations behind the design of the test method used for verification of the 3G license requirements in Sweden.
Resource assignment problems occur in a vast variety of applications, from scheduling problems over image recognition to communication networks. Often these problems can be modeled as a maximum weight matching problem in (bipartite) graphs or generalizations thereof, and efficient and practical algorithms are known for these problems. Although in some applications an assignment of the resources may be needed only once, in many others the assignment has to be computed repeatedly for different scenarios. In that case it is often essential that the assignments can be computed very fast. Moreover, implementing different assignments in different scenarios may come with a certain cost for the reconfiguration of the system. In this paper, we consider the problem of determining optimal assignments sequentially over a given time horizon, where consecutive assignments are coupled by constraints that control the cost of reconfiguration. We develop fast approximation and online algorithms for this problem with provable approximation guarantees and competitive ratios. Moreover, we present an extensive computational study of the applicability of our model and our algorithms in the context of orthogonal frequency division multiple access (OFDMA) wireless networks, finding a significant improvement in the total bandwidth of the system using our algorithms. For this application (the downlink of an OFDMA wireless cell), the run time of matching algorithms is extremely important, with an acceptable range of only a few milliseconds. For the considered realistic instances, our algorithms perform extremely well: the solution quality is, on average, within a factor of 0.8–0.9 of optimal off-line solutions, and the running times are at most 5 ms per phase even in the worst case. Thus, our algorithms are well suited for application in OFDMA systems.
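The core subproblem, a maximum-weight assignment of users to resources, can be made concrete with a brute-force sketch for tiny instances (exponential in n; practical systems use polynomial-time matching algorithms such as the O(n^3) Hungarian method). The weight matrix in the usage example is hypothetical, not data from the paper:

```python
from itertools import permutations

def max_weight_assignment(weights):
    """Brute-force maximum-weight assignment of n users to n resources.

    weights[i][j] is the value of assigning user i to resource j.
    Only feasible for tiny n; shown purely to make the objective concrete.
    """
    n = len(weights)
    best_perm, best_val = None, float("-inf")
    for perm in permutations(range(n)):
        val = sum(weights[i][perm[i]] for i in range(n))
        if val > best_val:
            best_perm, best_val = list(perm), val
    return best_perm, best_val
```

For example, `max_weight_assignment([[3, 1], [2, 4]])` assigns user 0 to resource 0 and user 1 to resource 1 for a total weight of 7.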