We address the weight-balancing problem for a distributed system whose components (nodes) can exchange information via interconnection links (edges) that form an arbitrary, possibly directed, communication topology (digraph). A weighted digraph is balanced if, for each node, the sum of the weights of the edges outgoing from that node is equal to the sum of the weights of the edges incoming to that node. Weight-balanced digraphs play a key role in a variety of applications, such as coordination of groups of robots, distributed decision making, and distributed averaging, which is important for a wide range of applications in signal processing. We propose a distributed algorithm for solving the weight-balancing problem in a minimum number of iterations, when the weights are nonnegative real numbers. We also provide examples to corroborate the proposed algorithm.
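The balance condition above can be stated concretely. The sketch below (a hypothetical helper, not the paper's algorithm) encodes a weighted digraph as an edge-to-weight map and checks the out-sum/in-sum condition at every node:

```python
def imbalance(weights, node):
    """Out-weight sum minus in-weight sum for a node.
    weights: dict mapping directed edge (u, v) to a nonnegative weight."""
    out_w = sum(w for (u, v), w in weights.items() if u == node)
    in_w = sum(w for (u, v), w in weights.items() if v == node)
    return out_w - in_w

def is_balanced(weights, nodes, tol=1e-9):
    """A digraph is weight-balanced iff every node has zero imbalance."""
    return all(abs(imbalance(weights, n)) < tol for n in nodes)

# A 3-cycle with equal weights: every node has one incoming and one
# outgoing edge of weight 2, so out-sum == in-sum at each node.
cycle = {(0, 1): 2.0, (1, 2): 2.0, (2, 0): 2.0}
print(is_balanced(cycle, [0, 1, 2]))  # True
```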
We propose a distributed coordination mechanism which enables nodes in a directed graph to accurately estimate their eigenvector centrality (eigencentrality) even if they update their values at times determined by their own clocks. The clocks need neither be synchronized nor have the same speed. The main idea is to let nodes adjust the weights on outgoing links to compensate for their update speed: the higher the update frequency, the smaller the link weights. Our mechanism is used to develop a distributed algorithm for computing the PageRank vector, commonly used to assign importance to web pages and rank search results. Although several distributed approaches in the literature can deal with asynchronism, they cannot handle the different update speeds that occur when servers have heterogeneous computational capabilities. When existing algorithms are executed using heterogeneous update speeds, they compute incorrect PageRank values. The advantages of our algorithm over existing approaches are verified through illustrative examples.
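For reference, the quantity the distributed mechanism is designed to compute is the standard PageRank vector. A minimal centralized power iteration, on an illustrative three-page link graph (not taken from the paper), looks like this:

```python
import numpy as np

# Centralized PageRank power iteration -- the target quantity that the
# distributed, asynchronous mechanism in the abstract computes.
d = 0.85                                   # damping factor
A = np.array([[0, 1, 1],                   # A[i, j] = 1 if page j links to i
              [1, 0, 0],
              [1, 1, 0]], dtype=float)
out = A.sum(axis=0)                        # out-degree of each page
M = A / out                                # column-stochastic link matrix
n = 3
x = np.ones(n) / n
for _ in range(100):
    x = d * M @ x + (1 - d) / n            # damped power iteration
print(x)  # PageRank vector; entries are positive and sum to 1
```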
We consider a wireless relay network that consists of a source, half-duplex decode-and-forward buffer-aided relays and a destination. While the majority of previous works on relay selection assume no direct transmission between source and destination in such a setting, we lift this assumption and propose a link selection policy that exploits both the buffering ability and the opportunity for successful reception of a packet directly from the source. The proposed relay selection scheme incorporates the instantaneous strength of the wireless links and adapts the relay selection decision based on the strongest available link. The evolution of the network as a whole is modeled by means of a Markov chain and thus, the outage probability is associated with the steady state of the Markov chain. It is deduced that even if the link between the source and the destination is in principle very unreliable, it is always beneficial for the source to multicast a packet to both the relay with the strongest available link and the destination.
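The key computational step in such an analysis, extracting steady-state probabilities from a Markov chain, can be sketched as follows (the two-state chain and its transition probabilities are illustrative, not from the paper):

```python
import numpy as np

# Toy 2-state chain as a stand-in for the network-state chain in the text.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# The stationary distribution pi satisfies pi P = pi with sum(pi) = 1;
# solve (P^T - I) pi = 0 augmented with the normalization constraint.
A = np.vstack([P.T - np.eye(2), np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
# If state 1 is designated the outage state, pi[1] is the outage probability.
print(pi)
```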
Many of the algorithms that have been proposed in the field of distributed computation rely on assumptions that require nodes to be aware of some global parameters. In this paper, we propose algorithms to compute some network parameters in a distributed fashion and in a finite number of steps. More specifically, given an arbitrary strongly connected network of interconnected nodes, by adapting a distributed finite-time approach, we develop distributed strategies that enable nodes to compute the following network parameters: the left-eigenvector, the out-degree, and the spectrum of weighted adjacency matrices.
Distributed algorithms for average consensus in directed graphs are typically asymptotic in the literature. In this work, we propose a protocol to distributively reach average consensus in a finite number of steps on interconnection topologies that form strongly connected directed graphs (digraphs). The average consensus value can be computed, based exclusively on local observations at each component, by running a protocol that requires each component to observe and store its own value over a finite and minimal number of steps, and to have knowledge of the number of its out-going links (i.e., the number of components to which it sends information). The proposed algorithm is demonstrated via illustrative examples.
Most algorithms for distributed averaging only guarantee asymptotic convergence. This paper introduces a distributed protocol that allows nodes to find the exact average of the initial values in a finite and minimum number of steps on interconnection topologies described by strongly connected directed graphs (digraphs). More specifically, under the assumption that each component has knowledge of the number of its outgoing links (i.e., the number of components to which it sends information), we show that the average value can be computed based on local observations over a finite time interval. The average can be obtained in a finite number of steps even when the information exchange is subject to delays. The proposed algorithm is the first in the literature that allows for distributed computation of the exact average in digraphs in finite time, with and without delays.
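As a point of reference for the finite-time schemes above, the classical asymptotic ratio-consensus (push-sum) iteration also only requires each node to know its out-degree. A minimal sketch on an illustrative three-node strongly connected digraph (this baseline converges only asymptotically, which is precisely what the finite-time protocols improve upon):

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (0, 2)]   # illustrative digraph
n = 3
out_deg = [sum(1 for u, v in edges if u == i) for i in range(n)]

# Each node splits its values equally over its out-links plus a self-loop,
# giving a column-stochastic matrix P[v, u] = 1 / (out_deg[u] + 1).
P = np.zeros((n, n))
for u, v in edges:
    P[v, u] = 1.0 / (out_deg[u] + 1)
for i in range(n):
    P[i, i] = 1.0 / (out_deg[i] + 1)

x = np.array([3.0, 5.0, 10.0])   # initial values; true average is 6
y = np.ones(n)                   # companion weights
for _ in range(200):
    x, y = P @ x, P @ y
print(x / y)  # each ratio approaches the average, 6.0
```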
In a Bayesian linear model, suppose the observation y = Hx + n stems from independent inputs x and n, which are Gaussian mixture (GM) distributed. With a known matrix H, the minimum mean square error (MMSE) estimator for x has an analytical form. However, its performance measure, the MMSE itself, has no such closed form. Because existing Bayesian MMSE bounds prove to have limited practical value in this setting, we instead seek analytical bounds for the MMSE, both upper and lower. This paper provides such bounds and relates them to the signal-to-noise ratio (SNR).
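In the scalar case the analytical form of the GM MMSE estimator is easy to write down: the posterior is again a Gaussian mixture, and the estimate is the responsibility-weighted sum of per-component linear MMSE estimates. A sketch (scalar model and parameters are illustrative):

```python
import math

def gm_mmse(y, h, comps, noise_var):
    """MMSE estimate of scalar x from y = h*x + n, with x a Gaussian
    mixture sum_k w_k N(m_k, v_k) and n ~ N(0, noise_var).
    comps: list of (w_k, m_k, v_k) tuples."""
    post = []
    for w, m, v in comps:
        # Evidence of y under component k: N(y; h*m, h^2 v + noise_var)
        s = h * h * v + noise_var
        lik = w * math.exp(-(y - h * m) ** 2 / (2 * s)) / math.sqrt(2 * math.pi * s)
        # Component-conditional posterior mean (standard Gaussian update)
        mean_k = m + (h * v / s) * (y - h * m)
        post.append((lik, mean_k))
    z = sum(l for l, _ in post)
    return sum(l * mk for l, mk in post) / z

# A single-component mixture reduces to the usual linear MMSE estimator:
est = gm_mmse(y=1.0, h=1.0, comps=[(1.0, 0.0, 1.0)], noise_var=1.0)
print(est)  # 0.5
```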
Using spectral and spectro-temporal auditory models along with perturbation-based analysis, we develop a new framework to optimize a feature vector such that it emulates the behavior of the human auditory system. The optimization is carried out in an offline manner based on the conjecture that the local geometries of the feature vector domain and the perceptual auditory domain should be similar. Using this principle along with a static spectral auditory model, we modify and optimize the static spectral mel frequency cepstral coefficients (MFCCs) without considering any feedback from the speech recognition system. We then extend the work to include spectro-temporal auditory properties into designing a new dynamic spectro-temporal feature vector. Using a spectro-temporal auditory model, we design and optimize the dynamic feature vector to incorporate the behavior of human auditory response across time and frequency. We show that a significant improvement in automatic speech recognition (ASR) performance is obtained for any environmental condition, clean as well as noisy.
Motivation: Estimation of bacterial community composition from a high-throughput sequenced sample is an important task in metagenomics applications. As the sample sequence data typically harbors reads of variable lengths and different levels of biological and technical noise, accurate statistical analysis of such data is challenging. Currently popular estimation methods are typically time-consuming in a desktop computing environment.
Results: Using sparsity-enforcing methods from the general sparse signal processing field (such as compressed sensing), we derive a solution to the community composition estimation problem by a simultaneous assignment of all sample reads to a pre-processed reference database. A general statistical model based on kernel density estimation techniques is introduced for the assignment task, and the model solution is obtained using convex optimization tools. Further, we design a greedy algorithm that obtains a solution quickly. Our approach offers a reasonably fast community composition estimation method, which is shown to be more robust to input data variation than a recently introduced related method.
Availability and implementation: A platform-independent Matlab implementation of the method is freely available at http://www.ee.kth.se/ctsoftware; source code that does not require access to Matlab is currently being tested and will be made available later through the above Web site.
In this paper, the Gaussian mixture model (GMM) based parametric framework is used to design a product vector quantization (PVQ) method that provides rate-distortion (R/D) performance optimality and bitrate scalability. We use a GMM consisting of a large number of Gaussian mixtures and invoke a block isotropic structure on the covariance matrices of the Gaussian mixtures. Using such a structured GMM, we design an optimum and bitrate-scalable PVQ, namely a split vector quantizer (SVQ), for each Gaussian mixture. The use of an SVQ allows for a trade-off between complexity and R/D performance that spans the two extreme limits provided by an optimum scalar quantizer and an unconstrained vector quantizer. The efficacy of the new GMM-based PVQ (GMPVQ) method is demonstrated for the application of speech spectrum quantization.
For compressive sensing (CS) recovery of Gaussian sparse signal, we explore the framework of Bayesian linear models to achieve a robust reconstruction performance in the presence of measurement noise. Using a priori statistical knowledge, we develop a minimum mean square error (MMSE) estimation based iterative greedy search algorithm. Through experimental evaluations, we show that the new algorithm provides a robust CS reconstruction performance compared to an existing least square based algorithm.
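An MMSE-flavored greedy search of the kind described can be sketched as follows. This is an OMP-style illustration with a ridge-regularized (Gaussian-prior MMSE) coefficient estimate on the selected support, not the authors' exact algorithm:

```python
import numpy as np

def greedy_mmse_cs(y, H, k, noise_var, signal_var):
    """Greedy support search; coefficients on the active set are estimated
    with the MMSE (ridge) formula under a Gaussian prior of variance
    signal_var and measurement noise of variance noise_var."""
    n = H.shape[1]
    support, r = [], y.copy()
    for _ in range(k):
        # Pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(H.T @ r)))
        if j not in support:
            support.append(j)
        Hs = H[:, support]
        # MMSE estimate: (Hs^T Hs + (noise_var/signal_var) I)^{-1} Hs^T y
        reg = (noise_var / signal_var) * np.eye(len(support))
        xs = np.linalg.solve(Hs.T @ Hs + reg, Hs.T @ y)
        r = y - Hs @ xs
    x_hat = np.zeros(n)
    x_hat[support] = xs
    return x_hat

# Sanity check with an identity H (not a compressive setup, but the
# greedy/MMSE mechanics are the same): recovers x = [0, 2, 0, -1.5].
y = np.array([0.0, 2.0, 0.0, -1.5])
x_hat = greedy_mmse_cs(y, np.eye(4), k=2, noise_var=1e-6, signal_var=1.0)
print(np.round(x_hat, 3))
```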
In current satellite communication systems, energy efficiency is of paramount importance since it strongly affects the satellite size and lifetime. In this paper, the energy efficiency of a multibeam downlink system is investigated in terms of maximizing the ratio of system throughput over consumed power. More specifically, the consumed power comprises both operating power which is needed for running the network equipment and transmit power which is needed for forwarding the information to the end users. Regarding the employed transmission technique, Minimum Mean Square Error (MMSE) beamforming together with power optimization for satellite downlink channel is used while full frequency reuse is assumed. By varying the ratio of operating and transmit power, we study the performance of the forward link in terms of spectral and energy efficiency.
Coordinated multi-point processing has shown great potential for cellular networks, while MIMO is the key to next-generation wireless communications. However, full exploitation of MIMO technology demands high antenna separation at the transceivers. This paper investigates the use of dual-polarized antennas as a means to overcome hardware size limitations. The uplink sum-rate capacity of a multicell joint processing (MJP) system employing dual-polarized antennas is evaluated through theoretical analysis and verified by numerical simulations. The system model incorporates uniformly distributed users, path loss and Rayleigh fading, extending Wyner's model. Optimal and MMSE receiver architectures are compared in terms of capacity and complexity. System capacity is calculated as cell size or cross-polar discrimination (XPD) varies. The results support the use of dual-polar decoding for low-XPD, dense cellular systems, while per-polarization processing is acceptable in high-XPD, sparse systems.
Multicell joint processing has been shown to efficiently suppress inter-cell interference, while providing a high capacity gain due to spatial multiplexing across distributed Base Stations (BSs). However, the complexity of the optimal joint decoder in the multicell uplink channel grows exponentially with the number of users, making it prohibitive to implement in practice. In this direction, this paper investigates the uplink capacity performance of multicell joint linear minimum mean square error (LMMSE) filtering, followed by single-user decoding. The considered cellular multiple-access channel model assumes both Rayleigh and Rician flat fading, path loss, distributed users and correlated multiple antennas at the base station side. The case of Rayleigh fading is tackled using a free probability approach, while the case of Rician fading is addressed through a deterministic equivalent calculated using non-linear programming techniques. In this context, it is shown that LMMSE can provide high spectral efficiencies in practical macrocellular scenarios.
Relay channels have been heavily studied during the last decades as a means of improving spectral efficiency, availability and coverage in combination with multiple antenna transceivers. Relay channels can comprise many hops, but the most practical approach at present is a dual-hop system. In addition, the simplest method of relaying in terms of complexity is amplify-and-forward. In this direction, we investigate the ergodic capacity of a dual-hop amplify-and-forward MIMO MAC and derive asymptotic closed-form expressions based on the principles of free probability theory. We extend the current literature by proposing an analytical model which can accommodate variance-profiled Gaussian matrices. Numerical results are utilized to verify the accuracy of the derived closed-form expressions and evaluate the effect of the channel parameters.
Multicell joint decoding (MJD) has been widely studied during the last decades as a new communication paradigm which can overcome the interference-limited nature of cellular systems. From a practical point of view, a feasible solution is to exploit clusters of cooperating Base Stations (BSs) with intracluster MJD. However, the clusters would still be affected by intercluster cochannel interference. In this paper, the corresponding channel model is established incorporating four impairments, namely additive white Gaussian noise, flat fading, path loss and cochannel interference. The asymptotic capacity limit of this channel is calculated based on an asymptotic free probability approach which exploits the additive and multiplicative free convolution in the R- and Σ-transform domain respectively, as well as properties of the η and Stieltjes transform. Numerical results are utilized to verify the accuracy of the derived closed-form expressions and evaluate the effect of the cochannel interference.
During the last decade, it has been well understood that communication over multiple antennas can increase linearly the multiplexing capacity gain and provide large spectral efficiency improvements. However, the majority of studies in this area were carried out ignoring cochannel interference. Only a small number of investigations have considered cochannel interference, but even therein simple channel models were employed, assuming identically distributed fading coefficients. In this paper, a generic model for a multiantenna channel is presented incorporating four impairments, namely additive white Gaussian noise, flat fading, path loss and cochannel interference. Both point-to-point and multiple-access MIMO channels are considered, including the case of cooperating Base Station clusters. The asymptotic capacity limit of this channel is calculated based on an asymptotic free probability approach which exploits the additive and multiplicative free convolution in the R- and S-transform domains, respectively, as well as properties of the η and Stieltjes transform. Numerical results are utilized to verify the accuracy of the derived closed-form expressions and evaluate the effect of the cochannel interference.
Multicell joint processing has been proven to be very efficient in overcoming the interference-limited nature of the cellular paradigm. However, for reasons of practical implementation global multicell joint decoding is not feasible and thus clusters of cooperating Base Stations have to be considered. In this context, intercluster interference has to be mitigated in order to harvest the full potential of multicell joint processing. In this paper, interference alignment is investigated as a means of intercluster interference mitigation and its performance is compared to global multicell joint processing. Both scenarios are modelled and analyzed using the per-cell ergodic sum-rate capacity as a figure of merit. In this process, a number of theorems are derived for analytically expressing the asymptotic eigenvalue distributions of the channel covariance matrices. The analysis is based on principles from Free Probability theory and especially properties in the R and Stieltjes transform domain.
Multicell joint processing has originated from information-theoretic principles as a means of reaching the fundamental capacity limits of cellular networks. However, global multicell joint decoding is highly complex and in practice clusters of cooperating Base Stations constitute a more realistic scenario. In this direction, the mitigation of intercluster interference rises as a critical factor towards achieving the promised throughput gains. In this paper, two intercluster interference mitigation techniques are investigated and compared, namely interference alignment and resource division multiple access. The cases of global multicell joint processing and cochannel interference allowance are also considered as an upper and lower bound to the interference alignment scheme, respectively. Each case is modelled and analyzed using the per-cell ergodic sum-rate throughput as a figure of merit. In this process, the asymptotic eigenvalue distribution of the channel covariance matrices is analytically derived based on free-probabilistic arguments in order to quantify the sum-rate throughput. Using numerical results, it is established that resource division multiple access is preferable for dense cellular systems, while cochannel interference allowance is advantageous for highly sparse cellular systems. Interference alignment provides superior performance for average to sparse cellular systems on the expense of higher complexity.
In conventional multibeam satellite systems, frequency and polarization orthogonalization have been traditionally employed for mitigating interbeam interference. However, the paradigm of multibeam joint precoding allows for full frequency reuse while assisting beam-edge users. In this paper, the performance of linear beamforming is investigated in terms of meeting traffic demands. More importantly, generic linear constraints are considered over the transmit covariance matrix in order to model the power pooling effect which can be implemented through flexible traveling wave tube amplifiers (TWTAs) or multiport amplifiers. The performance of this scheme is compared against conventional spotbeam systems based on the rate-balancing objective. In this context, it is shown that significantly higher spectral efficiency can be achieved through beamforming, while the flexible power constraint offers better rate balancing.
In security studies of cyber-physical systems, an attack is undetectable if it renders the measurements of the system equal to those of a possible attack-free physical response. In this paper, we investigate defense strategies against the undetectable single-attack for positive systems and second-order systems, both of which can be reinterpreted in terms of graphs with nodes and edges, while the undetectable attack enters through one of the nodes. We show that an arbitrary placement of a sensor prevents the undetectable single-attack for these classes of systems. It is worth emphasising that we do not need to measure at the corrupted node to prevent the undetectable single-attack, but can measure at any node. The defense strategy is of low complexity and can be readily implemented.
A big challenge for wireless networked control systems is how to design the underlying networking algorithms and protocols to provide high reliability, defined as the end-to-end probability of packet delivery, despite the high packet loss rates of individual wireless links. This paper formulates the problem of jointly designing a set of packet forwarding policies on a multipath mesh network to meet control application requirements. We derive several results to help understand the problem space. First, we demonstrate that some common approaches, like applying a single forwarding policy to all packets or always routing packets on disjoint paths, are not optimal for the application when the links are bursty. Second, we introduce the notion of dominance to give a partial ordering to sets of forwarding policies, used to prove that an optimal policy schedules all outgoing links at each node and that an upper bound on the performance attained by unicast forwarding policies on the network graph can be computed assuming a flooding policy. Third, we demonstrate how to convert application performance metrics to packet forwarding policy objectives, using the probability that the error covariance of a Kalman filter stays within a bound as our application metric. Fourth, we provide an algorithm to compute the joint probability mass function that a sequence of packets are delivered, given a set of policies and a network graph. Finally, we describe how to obtain optimal policies via an exhaustive search, motivating future research for more computationally efficient solutions.
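The fourth contribution, computing delivery probabilities from link statistics, can be illustrated by exact enumeration of link states on a tiny topology. The links and probabilities below are made up, links are treated as independent, and only a single packet under flooding is considered, unlike the bursty links and packet sequences studied in the paper:

```python
from itertools import product

# Topology: source s -> {r1, r2} -> destination d, flooding policy
# (every node forwards on all outgoing links). Probabilities are illustrative.
links = {('s', 'r1'): 0.8, ('s', 'r2'): 0.7,
         ('r1', 'd'): 0.9, ('r2', 'd'): 0.6}

def delivered(up):
    """Is s connected to d given which links are up? (two-hop topology)"""
    return (up[('s', 'r1')] and up[('r1', 'd')]) or \
           (up[('s', 'r2')] and up[('r2', 'd')])

# Sum the probability of every link-state combination that delivers.
p_deliver = 0.0
names = list(links)
for states in product([True, False], repeat=len(names)):
    up = dict(zip(names, states))
    prob = 1.0
    for l in names:
        prob *= links[l] if up[l] else 1.0 - links[l]
    if delivered(up):
        p_deliver += prob
print(round(p_deliver, 4))  # 0.8376
```

With disjoint paths this matches the closed form 1 - (1 - 0.8*0.9)(1 - 0.7*0.6); enumeration also handles topologies where paths share links.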
Much of the current theory of networked control systems uses simple point-to-point communication models as an abstraction of the underlying network. As a result, the controller has very limited information on the network conditions and performs suboptimally. This work models the underlying wireless multihop mesh network as a graph of links with transmission success probabilities, and uses a recursive Bayesian estimator to provide packet delivery predictions to the controller. The predictions are a joint probability distribution on future packet delivery sequences, and thus capture correlations between successive packet deliveries. We look at finite horizon LQG control over a lossy actuation channel and a perfect sensing channel, both without delay, to study how the controller can compensate for predicted network outages.
In this paper, we propose an iterative algorithm to jointly design codes and training sequences for frequency-selective block fading channels with partial channel state information (CSI) at the receiver. After showing that the maximum-likelihood (ML) decoding metric over channels with partial CSI can be well approximated by the joint maximum-likelihood (JML) decoding metric for combined channel estimation and data detection, we propose to use the JML criterion to search for good codes and training sequences in an iterative fashion. Simulations show that the code and training sequence found by our method can outperform a typical system using a channel code with a separately designed training sequence, in particular when codes of low rates are considered.
A symmetric signed Laplacian matrix uniquely defines a resistive electrical circuit, where the negative weights correspond to negative resistances. The positive semidefiniteness of signed Laplacian matrices is studied in this paper using the concept of effective resistance. We show that a signed Laplacian matrix is positive semidefinite with a simple zero eigenvalue if, and only if, the underlying graph is connected, and a suitably defined effective resistance matrix is positive definite.
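The criterion is easy to probe numerically: build L = B diag(w) B^T from the incidence matrix and check the spectrum. In the illustrative path graph below (weights chosen for illustration, not from the paper), one sufficiently negative weight destroys positive semidefiniteness:

```python
import numpy as np

# Path graph 1-2-3 with edge weights w12 = 2 and w23 = -0.5; in the
# circuit analogy the negative weight is a negative resistance.
B = np.array([[ 1,  0],
              [-1,  1],
              [ 0, -1]], dtype=float)   # incidence matrix
w = np.array([2.0, -0.5])
L = B @ np.diag(w) @ B.T                # signed Laplacian

eig = np.sort(np.linalg.eigvalsh(L))
print(eig)  # one negative eigenvalue: this L is not PSD

# With all-positive weights the same graph gives a PSD Laplacian.
L_pos = B @ np.diag(np.abs(w)) @ B.T
print(np.linalg.eigvalsh(L_pos).min() >= -1e-9)  # True
```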
The paper investigates the positive semidefiniteness of signed Laplacians. It is noted that a symmetric signed Laplacian defines a unique resistive electrical network, wherein the negative weights correspond to negative resistances. As such, the positive semidefiniteness of the signed Laplacians is equivalent to the passivity of the associated resistive networks. By utilizing n-port circuit theory, we obtain several equivalent conditions for signed Laplacians to be positive semidefinite with a simple zero eigenvalue. These conditions characterize a set of negative weights that maintain the semidefiniteness of the Laplacian. The results are used to analyze the small-disturbance angle stability of microgrids as an application.
We demonstrate a wireless real-time music-sharing application that lets users play music directly from their mobiles through a jukebox. We have designed and implemented the application using a previously developed content-centric opportunistic networking middleware. The jukebox plays the music file that is first in its playlist by streaming it in real-time from the publishing user device. All users can observe the collaboratively formed playlist on their mobiles in real-time. The application shows the usefulness of our middleware and demonstrates a new form of situated applications. The application handles churn and performs garbage collection after users depart.
The fast spreading of smart mobile devices has changed the way people create and share multimedia contents. A recently proposed idea is to distribute contents opportunistically when devices are within each other's communication range. In this work we have seized this idea and designed and implemented a real-time music-sharing application that allows people to share a sound system for collective listening. The system utilizes a content-centric opportunistic networking middleware. The application provides an interface for users to play audio files stored in their own mobile devices through a shared jukebox that is within direct communication range. The jukebox plays the audio file that is first in queue in its playlist by streaming it from the publishing user device. All devices synchronize with the jukebox so that the users can observe the contents of the playlist on their displays in real-time. This paper presents the idea and motivation for such a system as well as its design and evaluation.
In the paper, an extension of LaSalle's Invariance Principle to a class of switched linear systems is studied. One of the motivations is the consensus problem in multi-agent systems. Unlike most existing results in which each switching mode in the system needs to be asymptotically stable, this paper allows that the switching modes are only Lyapunov stable. Under certain ergodicity assumptions, an extension of LaSalle's Invariance Principle for global asymptotic stability is obtained. Then it is used to solve the consensus reaching problem of certain multi-agent systems in which each agent is modeled by a double integrator, and the associated interaction graph is switching and is assumed to be only jointly connected.
Particle tracking is of key importance for quantitative analysis of intracellular dynamic processes from time-lapse microscopy image data. Because manually detecting and following large numbers of individual particles is not feasible, automated computational methods have been developed for these tasks by many groups. Aiming to perform an objective comparison of methods, we gathered the community and organized an open competition in which participating teams applied their own methods independently to a commonly defined data set including diverse scenarios. Performance was assessed using commonly defined measures. Although no single method performed best across all scenarios, the results revealed clear differences between the various approaches, leading to notable practical conclusions for users and developers.
We consider a secure source coding problem with side informations at the decoder and the eavesdropper. The encoder has a source that it wishes to describe with limited distortion through a rate-limited link to a legitimate decoder. The message sent is also observed by the eavesdropper. The encoder aims to minimize both the distortion incurred by the legitimate decoder; and the information leakage rate at the eavesdropper. When the encoder has access to the side information (S.I.) at the decoder, we characterize the rate-distortion-information leakage rate (R.D.I.) region under a Markov chain assumption and when S.I. at the encoder does not improve the rate-distortion region as compared to the case when S.I. is absent. We then extend our setting to consider the case where the encoder and decoder obtain coded S.I. through a rate-limited helper, and characterize the R.D.I. region for several special cases under logarithmic loss distortion (log-loss). Finally, we consider the case of list or entropy constraints at the decoder and show that the R.D.I. region coincides with R.D.I. region under log-loss.
A method has been developed to tune the DO and pH controller PID parameters for pilot- and large-scale mammalian cultivation. Our approach is to identify a model of the variable to be controlled (e.g. DO, pH) and to design several possible PID controllers based on this model. The controllers were first tested in computer simulations, followed by wet simulation, and finally the best controller was tested on the real process. The approach is developed for the tuning of the DO controller of a 50 L bioreactor using a continuous microbubble oxygen flow. The method, called lag control here, is based on a lead-lag control design using Bode analysis where the prediction part is omitted. Experiments show that the approach results in a highly satisfactory DO control. The oxygen microbubbles were almost completely consumed before reaching the liquid surface, so the oxygen flow used to maintain the DO gave an excellent indication of the cellular oxygen consumption. The control system was robust against all the perturbations, i.e. cell growth, cell bleed, addition of air-saturated fresh medium, DO set point change and a second gas sparger used to strip out the carbon dioxide. This approach was also successfully used for the tuning of a 400 L bioreactor DO controller and pH controller.
The throughput of current multibeam satellite systems is limited by self-interference. Interference mitigation techniques have the potential to significantly increase the spectral efficiency of these satellite communication systems. The present contribution investigates the ergodic capacity of the return link of a multibeam satellite system, where full frequency reuse is employed and user signals are jointly processed at the gateway. The proposed model incorporates correlated satellite antennas over Rician channels, which represent some inherent characteristics of satellite communications. Additionally, the effects of shadowing caused by user mobility are modeled via the lognormal distribution. Hence, a composite Rician/lognormal fading channel with fully correlated receive antennas is considered. For this channel, a new lower bound on the ergodic capacity is analytically deduced and verified through simulations.
In this paper we examine opportunistic content distribution. We design and evaluate a caching strategy where a node will fetch and share contents on behalf of other nodes, although the contents are not of its own interest. We propose three caching options for improving the use of network resources: relay request on demand, hop-limit, and greedy relay request. The proposed strategies are implemented in the OMNeT++ simulator and evaluated on mobility traces from Legion Studio that have churn. We also compare our strategies with a strategy from the literature. The results are analyzed and they show that the use of opportunistic caching for a community of nodes may enhance the performance marginally while overhead increases significantly.
The objective of this work is to study time-minimum and energy-minimum global optimal control for dissipative open quantum systems whose dynamics is governed by the Lindblad equation. The controls appear only in the Hamiltonian. Using recent results regarding the decoupling of such dissipative dynamics into intra- and inter-unitary orbits, we transform the control system into a bi-linear control system on the Bloch ball (the unitary sphere together with its interior). We then design a numerical algorithm to construct an optimal path to achieve a desired point given initial states close to the origin (the singular point) of the Bloch ball. This is done both for the minimum-time and minimum-energy control problems.
We present a data symmetry reduction approach for model checking temporal-epistemic logic. The technique abstracts the epistemic indistinguishability relation for the knowledge operators and is shown to preserve temporal-epistemic formulae. We present a method for statically detecting data symmetry in an ISPL program, the input to the temporal-epistemic model checker MCMAS. The experiments we report show an exponential saving in verification time and space when verifying security properties of the NSPK protocol.
We present an abstraction technique for multi-agent systems preserving temporal-epistemic specifications. We abstract a multi-agent system, defined in the interpreted systems framework, by collapsing the local states and actions of each agent in the system. We show that the resulting abstract system simulates the concrete system, from which we obtain a preservation theorem: if a temporal-epistemic specification holds on the abstract system, the specification also holds on the concrete one. In principle this permits us to model check the abstract system rather than the concrete one, thereby saving time and space in the verification step. We illustrate the abstraction technique with two examples. The first example, a card game, illustrates the potential savings in the cost of model checking a typical MAS scenario. In the second example, the abstraction technique is used to verify a communication protocol with an arbitrarily large data domain.
In many robot applications, sensor feedback is needed to reduce uncertainties in environment models. However, sensor data acquisition also induces costs in terms of the time elapsed to make the observations and the computations needed to find new estimates. In this paper, we show how to use event-based sampling to reduce the number of measurements taken, thereby saving time, computational resources, and power, without jeopardizing critical system properties such as safety and goal convergence. This is done by combining recent advances in nonlinear estimation with event-based control using artificial potential fields. The results are particularly useful for real-time systems such as high-speed vehicles or teleoperated robots, where the cost of taking measurements is even higher in terms of stops or transmission times. We conclude the paper with a set of simulations that illustrate the effectiveness of the approach and compare it with a baseline approach using periodic measurements.
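The idea of measuring only when uncertainty demands it can be sketched with a scalar Kalman-filter toy example: the estimator propagates open loop and requests a measurement only when its predicted variance crosses a bound. The random-walk model, the threshold `p_max`, and all constants are hypothetical; the paper's setting uses nonlinear estimation with potential fields:

```python
import numpy as np

def run_event_based(x0, P0, steps, q=0.01, r=0.05, p_max=0.2):
    """Scalar random-walk example of event-based sampling: a measurement is
    taken only when the predicted variance P exceeds p_max. All names and
    values are illustrative, not from the paper."""
    rng = np.random.default_rng(1)
    x_true, x_hat, P = x0, x0, P0
    n_meas = 0
    for _ in range(steps):
        x_true += rng.normal(0.0, np.sqrt(q))  # true state drifts
        P += q                                  # prediction inflates variance
        if P > p_max:                           # event trigger: too uncertain
            z = x_true + rng.normal(0.0, np.sqrt(r))  # noisy measurement
            k = P / (P + r)                     # Kalman gain
            x_hat += k * (z - x_hat)            # measurement update
            P = (1.0 - k) * P                   # variance shrinks
            n_meas += 1
    return x_hat, P, n_meas
```

With these values only a small fraction of the steps trigger a measurement, yet the variance stays bounded by the trigger level, which is the trade-off the paper exploits.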
A subspace method for channel estimation is proposed for asymmetric antenna array systems. The so-called pilot contamination problem reported in [1] is found to be due to the linearity of the channel estimation in [2]. We show that it does not occur in cellular systems with power control and power-controlled handoff when the nonlinear channel estimation method proposed in this paper is used. Power-controlled handoff is needed to guarantee separability between the signal and interference subspaces. We derive the transmission conditions for subspace separability based on free probability and perturbation theory.
Recently, it has been shown that the time-varying multiple-access channel (MAC) with perfect channel state information (CSI) at the receiver and delayed feedback CSI at the transmitters can be modeled as a finite state MAC (FS-MAC) with delayed state feedback, where the time variation of the channel is characterized by the statistics of the underlying state process. To study the fundamental limits of secure transmission over multi-user wireless communication systems, we revisit the FS-MAC with delayed state feedback by considering an external eavesdropper, yielding what we call the finite state multiple-access wiretap channel (FS-MAC-WT) with delayed feedback. The main contribution of this paper is to show that taking full advantage of the delayed channel output feedback increases the secrecy rate region of the FS-MAC-WT with delayed state feedback. Moreover, via a degraded Gaussian fading example, we show the effects of feedback delay and channel memory on the secrecy sum rate of the FS-MAC-WT with delayed feedback.
This technical note develops a new distributed stochastic model predictive control (DSMPC) algorithm for a group of linear stochastic subsystems subject to additive uncertainty and coupled probabilistic constraints. We design the DSMPC algorithm by extending a centralized SMPC (CSMPC) scheme. To satisfy the coupled probabilistic constraints in a distributed manner, only one subsystem is permitted to optimize at each time step. In addition, by making explicit use of the probability distribution of the uncertainties, the probabilistic constraints are converted into a set of deterministic constraints on the predictions of nominal models. The distributed controller achieves recursive feasibility and ensures closed-loop stability for any choice of update sequence. Numerical examples illustrate the efficacy of the algorithm.
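The conversion of a probabilistic constraint into a deterministic one on the nominal prediction can be sketched for the standard Gaussian case. The function name and the Gaussian assumption are illustrative; the note's own conversion may differ in its details:

```python
import math
from statistics import NormalDist

def tighten(a, b, Sigma, eps):
    """Tighten the chance constraint P(a.(x + w) <= b) >= 1 - eps, with
    Gaussian disturbance w ~ N(0, Sigma), into the deterministic constraint
    a.x_nominal <= b_tight. Illustrative sketch of the conversion step, not
    the note's full DSMPC design."""
    n = len(a)
    # variance of the scalar disturbance term a.w
    var = sum(a[i] * Sigma[i][j] * a[j] for i in range(n) for j in range(n))
    # back-off by the (1 - eps)-quantile of the standard normal
    margin = NormalDist().inv_cdf(1.0 - eps) * math.sqrt(var)
    return b - margin
```

For example, with eps = 0.05 the nominal constraint is tightened by roughly 1.645 standard deviations of the disturbance term, so the original constraint holds with probability at least 95%.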
Inline reference monitoring is a powerful technique for enforcing security policies on untrusted programs. The security-by-contract paradigm proposed by the EU FP6 S3MS project uses policies, monitoring, and monitor inlining to secure third-party applications running on mobile devices. The focus of this paper is on multi-threaded Java bytecode. An important consideration is that inlining should interfere with the client program only when mandated by the security policy. In a multi-threaded setting, however, this requirement turns out to be problematic. Inliners generally use locks to control access to shared resources such as an embedded monitor state. Under Java's relaxed memory consistency model, this interferes with the application program's non-determinism and rules out the transparency property, namely that all policy-adherent behaviour of an application program is preserved under inlining. In its place we propose a notion of strong conservativity, formalising the property that the inliner may terminate the client program only when the policy is about to be violated. An example inlining algorithm is given and proved to be strongly conservative. Finally, benchmarks are given for four example applications studied in the S3MS project.
On-off keying (OOK) is an attractive modulation technique for reducing the energy consumption of power-constrained wireless networks. The OOK detection threshold, however, must be carefully selected to minimize the bit error rate (BER). This is a challenging task on resource-limited nodes or in networks with high mobility. In this paper, an efficient algorithm for computing the optimal threshold is proposed. The system scenario considers nodes transmitting simultaneously over the same frequencies under Rayleigh-lognormal or Rice-lognormal fading. It is shown that, by using the Stirling expansion for the BER together with a lognormal approximation, a fast contraction mapping can be constructed that computes the threshold numerically. The mapping is simple to implement and converges quickly. Numerical simulations verify the validity of the theoretical analysis and show that the new algorithm performs well in scenarios of practical interest.
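The approximation-plus-contraction strategy can be illustrated on a much simpler envelope-detection model (Rayleigh "off" versus Rician "on" in AWGN, no lognormal shadowing). The large-argument Bessel approximation and all parameter values below are assumptions for illustration, not the paper's fading model or its exact mapping:

```python
import numpy as np

def ook_threshold(A, sigma, tau0=1.0, tol=1e-10, max_iter=200):
    """Fixed-point (contraction-mapping) computation of an OOK detection
    threshold. For equiprobable symbols, equating the Rayleigh 'off' and
    Rician 'on' envelope densities gives I0(A*tau/sigma^2) = exp(A^2/(2*sigma^2)),
    where I0 is the modified Bessel function of order 0. Using the
    large-argument approximation I0(x) ~ exp(x)/sqrt(2*pi*x) turns this into
    the iteration tau <- (log_c + 0.5*log(2*pi*a*tau)) / a, a contraction for
    a*tau > 1/2. Illustrative analogue of the paper's approach only."""
    a = A / sigma**2
    log_c = A**2 / (2.0 * sigma**2)
    tau = tau0
    for _ in range(max_iter):
        tau_next = (log_c + 0.5 * np.log(2.0 * np.pi * a * tau)) / a
        if abs(tau_next - tau) < tol:   # converged to the fixed point
            return tau_next
        tau = tau_next
    return tau
```

For A = 2, sigma = 1 the iteration settles near 1.78 within a handful of steps, consistent with inverting `np.i0` directly, which is the kind of cheap convergence that makes such mappings attractive on resource-limited nodes.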
This paper presents a novel vision-based localization and mapping algorithm using image moments of region features. The environment is represented using regions, such as planes and/or 3D objects, instead of only a dense set of feature points. The regions can be uniquely defined using a small number of parameters; e.g., a plane can be completely characterized by its normal vector and its distance to a local coordinate frame attached to the plane. The variation of the image moments of the regions in successive images can be related to the parameters of the regions. Instead of tracking a large number of feature points, variations of image moments of regions can be computed by tracking the segmented regions or a few feature points on the objects in successive images. A map represented by regions can thus be characterized using a minimal set of parameters. The problem is formulated as a nonlinear filtering problem, and a new discrete-time nonlinear filter based on the state-dependent coefficient (SDC) form of nonlinear functions is presented. Monte Carlo simulations show that the new nonlinear filter is more accurate and consistent than the extended Kalman filter (EKF), as measured by the root-mean-squared error (RMSE) and the normalized estimation error squared (NEES).
Various industrial control applications have stringent end-to-end latency requirements on the order of a few milliseconds. Software-defined networking (SDN) is a promising solution for meeting these stringent requirements under varying traffic patterns, as it enables the flexible management of flows across the network. In particular, SDN makes it possible to ensure that traffic flows use congestion-free paths, reducing the end-to-end delay to the forwarding and processing delays at the SDN nodes. However, accommodating new flows at runtime is challenging in such a setting, as it may require the migration of existing flows without interrupting ongoing traffic. In this paper, we consider the problem of dynamic flow migration and propose a polynomial-time algorithm that finds a solution whenever direct flow migration is feasible. We furthermore propose an algorithm for computing both direct and indirect flow migrations and prove its correctness. Numerical results obtained on a FatTree network topology show that flow migration is typically necessary for networks with a moderate number of flows, while direct flow migration is feasible in around 60% of the cases.
Distributed aggregation algorithms have traditionally been applied to environments with no or rather low rates of node churn. The proliferation of mobile devices in recent years introduces high mobility and node churn to these environments, thus imposing a new dimension on the problem of distributed aggregation in terms of scalability and convergence speed. To address this, we present DiVote, a distributed voting protocol for mobile device-to-device communication. We investigate a particular use case, in which pedestrians equipped with mobile phones roam around in an urban area and participate in a distributed yes/no poll, which has both spatial and temporal relevance to the community. Each node casts a vote and collects votes from other participants in the system whenever in communication range; votes are immediately integrated into a local estimate. The objective of DiVote is to produce a precise mapping of the local estimate to the anticipated global voting result while preserving node privacy. Since mobile devices may have limited resources allocated for mobile sensing activities, DiVote utilizes D-GAP compression. We evaluate the proposed protocol via extensive trace-driven simulations of realistic pedestrian behavior, and demonstrate that it scales well with the number of nodes in the system. Furthermore, in densely populated areas the local estimate of participants does not deviate by more than 3% from the global result. Finally, in certain scenarios the achievable compression rate of DiVote is at least 19% for realistic vote distributions.
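The vote-aggregation step described above can be sketched as pairwise-averaging gossip: on each contact, two nodes average their local estimates of the yes fraction. This toy omits DiVote's D-GAP compression, churn handling, and privacy mechanisms, and all names and parameters are illustrative:

```python
import random

def divote_gossip(votes, n_rounds=20000, seed=0):
    """Pairwise-averaging gossip sketch of distributed vote aggregation:
    each node holds a local estimate of the 'yes' fraction (initially its
    own vote) and averages it with a peer's estimate on contact. Shows why
    local estimates converge to the global result; not DiVote itself."""
    rng = random.Random(seed)
    est = [float(v) for v in votes]      # 1.0 = yes, 0.0 = no
    n = len(est)
    for _ in range(n_rounds):
        i, j = rng.sample(range(n), 2)   # two nodes come into contact
        avg = (est[i] + est[j]) / 2.0    # exchange and average estimates
        est[i] = est[j] = avg
    return est
```

Pairwise averaging preserves the global mean, so every local estimate converges to the true voting result; the protocol's reported 3% deviation reflects the limited contact opportunities of real pedestrian mobility.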
Surveillance, management, and estimation of spontaneous crowd formations in urban environments, e.g., during open-air festivals or rush hours, are necessary measures for city administration. Most solutions that implement these measures, however, require additional costly hardware installations (e.g., observation cameras) and infrastructure support, and often pose privacy concerns. In this work, we present UrbanCount, a fully distributed crowd counting protocol for cities with high crowd densities. UrbanCount relies on mobile device-to-device communication to perform crowd estimation. Each node collects crowd size estimates from other participants in the system whenever in communication range and immediately integrates these estimates into a local estimate. The objective of UrbanCount is to produce a precise mapping of the local estimate to the anticipated global result while preserving node privacy. We evaluate the proposed protocol via extensive trace-driven simulations of synthetic and realistic mobility models. Furthermore, we investigate the dependency between accuracy and density, and demonstrate that in dense environments the local estimate does not deviate by more than 2% for synthetic and 7% for realistic scenarios.