Stochastic Gradient Descent (SGD) has become the standard tool for distributed training of massive machine learning models, in particular deep neural networks. Several families of communication-reduction methods, such as quantization, large-batch methods, and gradient sparsification, have been proposed to reduce the overheads of distribution. To date, gradient sparsification methods, in which each node sorts gradients by magnitude and only communicates a subset of the components while accumulating the rest locally, are known to yield some of the largest practical gains. Such methods can reduce the amount of communication per step by up to three orders of magnitude while preserving model accuracy. Yet, this family of methods has so far lacked theoretical justification, and this is the gap we address in this paper. We prove that, under analytic assumptions, sparsifying gradients by magnitude with local error correction provides convergence guarantees for data-parallel SGD, for both convex and non-convex smooth objectives. The main insight is that sparsification methods implicitly maintain bounds on the maximum impact of stale updates, thanks to selection by magnitude. Our analysis also reveals that these methods do require analytic conditions to converge well, justifying and complementing existing heuristics.
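As a concrete illustration of the mechanism analyzed here, the following minimal sketch implements magnitude-based top-k sparsification with local error accumulation. The dimensions, data, and sparsity level are hypothetical stand-ins; this illustrates the technique, not the paper's reference implementation.

```python
import numpy as np

def topk_with_error_feedback(grad, residual, k):
    """Keep the k largest-magnitude components of grad + residual;
    accumulate the discarded components locally for later rounds."""
    corrected = grad + residual                        # add back past truncation error
    idx = np.argpartition(np.abs(corrected), -k)[-k:]  # indices of the top-k magnitudes
    sparse = np.zeros_like(corrected)
    sparse[idx] = corrected[idx]                       # only these values are communicated
    return sparse, corrected - sparse                  # remainder stays as local error

# Toy usage: a worker communicates a 1%-dense gradient each step.
rng = np.random.default_rng(0)
dim, k = 1_000, 10
residual = np.zeros(dim)
for step in range(5):
    grad = rng.normal(size=dim)                        # stand-in stochastic gradient
    sparse, residual = topk_with_error_feedback(grad, residual, k)
    # `sparse` would be sent to peers and applied as an ordinary SGD update.
```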
We characterize the stability and achievable performance of networked estimation under correlated packet losses described by the Gilbert-Elliott model. For scalar continuous-time linear systems, we derive closed-form expressions for the mean-square distortion of the optimal estimator. The conditions for a stable mean-square estimation error are equivalent to those obtained previously for stability of 'peak distortions' [3]. We study how the estimator performance depends on loss probability and loss burst length, and show that the mean-square distortion remains bounded if the average burst length does not exceed a calculated bound. The main new finding is that, once we fix the mean length of loss bursts, the average packet loss rate influences the estimator's performance but not its stability.
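To illustrate the loss model, the sketch below simulates a Gilbert-Elliott (two-state Markov) packet-loss process, whose mean burst length is the inverse of the bad-to-good transition probability, and crudely estimates the distortion of an estimator that runs open loop during bursts. The dynamics, sampling period, and transition probabilities are hypothetical, and the variance recursion is a coarse discretization rather than the paper's closed-form expression.

```python
import numpy as np

# Gilbert-Elliott losses: a two-state Markov chain with a "good" state
# (packets delivered) and a "bad" state (packets lost); if q = P(bad -> good),
# the mean burst length is 1/q.
def simulate_losses(p_gb, p_bg, n, rng):
    """p_gb = P(good -> bad), p_bg = P(bad -> good); returns 1 where lost."""
    state, lost = 0, np.zeros(n, dtype=int)            # 0 = good, 1 = bad
    for k in range(n):
        lost[k] = state
        u = rng.random()
        state = (1 if u < p_gb else 0) if state == 0 else (0 if u < p_bg else 1)
    return lost

rng = np.random.default_rng(1)
a, h, n = 0.5, 0.1, 100_000            # unstable scalar dynamics, sampling period h
lost = simulate_losses(p_gb=0.05, p_bg=0.5, n=n, rng=rng)   # mean burst length = 2
var, acc = 0.0, 0.0
for k in range(n):
    # crude discretization: the error variance grows during a burst and is
    # reset at each reception (perfect measurements assumed, for simplicity)
    var = np.exp(2 * a * h) * var + h if lost[k] else 0.0
    acc += var
print("empirical mean-square distortion:", acc / n)
```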
The increasing demand for wireless cyber-physical systems requires correct design, implementation and validation of computation, communication and control methods. Traditional simulation tools, which focus on either computation, communication or control, are insufficient when the three aspects interact. Efforts to extend the traditional tools to cover multiple domains, e.g., from simulating only control aspects to simulating both control and communication, often rely on simplistic models of a small subset of possible communication solutions. We introduce GISOO, a virtual testbed for simulation of wireless cyber-physical systems that integrates two state-of-the-art simulators, Simulink and COOJA. GISOO enables users to evaluate actual embedded code for the wireless nodes in realistic cyber-physical experiments, observing the effects of both the control and communication components. In this way, a wide range of communication solutions can be evaluated without developing abstract models of their control-relevant aspects, and changes made to the networking code in simulations are guaranteed to be translated into production code without errors. A double-tank laboratory setup controlled over a multi-hop relay wireless network is used to validate GISOO and demonstrate its features.
One approach for state-of-health estimation on board electric vehicles is to train a data-driven virtual battery on operational data and use this model, rather than the actual battery, for performance tests. A temperature-dependent continuous-time output-error (OE) model is proposed as the virtual battery and is identified and validated on real operational data from electric buses. The proposed model is compared to discrete-time and parameter-invariant models and shows better performance on all data sets. In addition, the OE model structure is shown to be superior to a conventional AutoRegressive eXogenous (ARX) model for the purpose of modeling the battery voltage response. Finally, challenges regarding vehicle log data are identified and improvements to the model are suggested in order to capture observed unmodeled phenomena.
Physics-based battery models are important tools in battery research, development, and control. To obtain useful information from the models, accurate parametrization is essential. A complex model structure and many unknown and hard-to-measure parameters make parametrization challenging. Furthermore, numerous applications require non-invasive parametrization relying on parameter estimation from measurements of current and voltage. Parametrization of physics-based battery models from input-output data is a growing research area with many recent publications. This paper aims to bridge the gap between researchers from different fields that work with battery model parametrization, since successful parametrization requires both knowledge of the underlying physical system and an understanding of the theory and concepts behind parameter estimation. The review encompasses sensitivity analyses, methods for parameter optimization, structural and practical identifiability analyses, design of experiments, and methods for validation, as well as the use of machine learning in parametrization. We highlight that not all model parameters can be accurately identified, nor are all of them relevant for model performance; nonetheless, the reviewed literature shows no consensus on parameter importance. Local methods are commonly chosen because of their computational advantages. However, we find that the implications of using local methods to analyze nonlinear models are often not sufficiently considered in the reviewed literature.
Fast charging of electric vehicles remains a compromise between charging time and degradation penalty. Conventional battery management systems use experience-based charging protocols that are expected to meet vehicle lifetime goals. Novel electrochemical model-based battery fast charging uses a model to observe internal battery states. This enables control of charging rates based on states such as the lithium-plating potential, but relies on an accurate model as well as accurate model parameters. However, the impact of battery degradation on the model's accuracy, and therefore on the fitness of the estimated optimal charging procedure, is often not considered. In this work, we therefore investigate electrochemical model-based aging-adaptive fast charging of automotive lithium-ion cells. First, an electrochemical model is identified at the beginning of life for six automotive prototype cells and the electrochemically constrained fast-charging procedure is designed. The model parameters are then periodically re-evaluated during a cycling study and the charging procedure is updated to account for cell degradation. The proposed method is compared with two reference protocols to investigate both the effectiveness of the selected electrochemical constraints and the benefit of aging-adaptive usage. Finally, a post-mortem characterization is presented to highlight the benefit of aging-adaptive battery utilization.
Interference and link dynamics are major concerns for the stability and performance of protocols in wireless sensor networks (WSNs). In this paper, we evaluate the impact of channel hopping and adaptive routing on delay and reliability, focusing on delay-critical applications.
Motivated by large-scale optimization problems arising in the context of machine learning, there have been several advances in the study of asynchronous parallel and distributed optimization methods during the past decade. Asynchronous methods do not require all processors to maintain a consistent view of the optimization variables. Consequently, they generally can make more efficient use of computational resources than synchronous methods, and they are not sensitive to issues like stragglers (i.e., slow nodes) and unreliable communication links. Mathematical modeling of asynchronous methods involves proper accounting of information delays, which makes their analysis challenging. This article reviews recent developments in the design and analysis of asynchronous optimization methods, covering both centralized methods, where all processors update a master copy of the optimization variables, and decentralized methods, where each processor maintains a local copy of the variables. The analysis provides insights into how the degree of asynchrony impacts convergence rates, especially in stochastic optimization methods.
This paper studies a flexible algorithm for minimizing a sum of component functions, each of which depends on a large number of decision variables. Such formulations appear naturally in “big data” applications, where each function describes the loss estimated using the data available at a specific machine, and the number of features under consideration is huge. In our algorithm, a coordinator updates a global iterate based on delayed partial gradients of the individual objective functions with respect to blocks of coordinates. Delayed incremental gradient and delayed coordinate descent algorithms are obtained as special cases. Under the assumption of strong convexity and block coordinate-wise Lipschitz continuous partial gradients, we show that the algorithm converges linearly to a ball around the optimal value. Contrary to related proposals in the literature, our algorithm is delay-insensitive: it converges for any bounded information delay, and its step-size parameter can be chosen independently of the maximum delay bound.
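The update pattern can be sketched as follows, under a simplified delay model of our own (each partial gradient is computed and then applied only after a fixed delay): a coordinator minimizes a sum of least-squares terms using delayed partial gradients with respect to two coordinate blocks. The problem data, block split, delay, and step size are all hypothetical.

```python
import numpy as np
from collections import deque

# Sketch: a coordinator minimizes f(x) = sum_i 0.5*||A_i x - b_i||^2 using
# partial gradients w.r.t. coordinate blocks that arrive with a fixed delay.
rng = np.random.default_rng(2)
n_funcs, dim, block, delay, gamma = 4, 8, 4, 3, 0.01
A = [rng.normal(size=(10, dim)) for _ in range(n_funcs)]
b = [rng.normal(size=10) for _ in range(n_funcs)]

x = np.zeros(dim)
pipe = deque()                                     # in-flight (delayed) updates
for t in range(4000):
    i = t % n_funcs                                # incremental pass over components
    blk = slice(0, block) if (t // n_funcs) % 2 == 0 else slice(block, dim)
    g = A[i].T @ (A[i] @ x - b[i])                 # partial gradient at current iterate
    pipe.append((blk, g[blk].copy()))              # ...applied only `delay` steps later
    if len(pipe) > delay:
        blk_old, g_old = pipe.popleft()
        x[blk_old] -= gamma * g_old                # delayed block-coordinate step
grad = sum(A[i].T @ (A[i] @ x - b[i]) for i in range(n_funcs))
print("final gradient norm:", np.linalg.norm(grad))
```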
Serverless runtimes provide efficient and cost-effective environments for scalable computations, thanks to their event-driven and elastic nature. So far, they have mostly been used for stateless, data-parallel and sporadic computations. In this work, we propose exploiting serverless runtimes to solve generic, large-scale optimization problems. To this end, we implement a parallel optimization algorithm for solving a regularized logistic regression problem, and use AWS Lambda for the compute-intensive work. We show that relative speedups can be expected for up to 256 workers, and parallel efficiencies above 70% for up to 64 workers.
This paper presents a novel cross-layer design for joint power and end-to-end rate control optimization in DS-CDMA wireless networks, along with a detailed implementation and evaluation in the network simulator ns-2. Starting with a network utility maximization formulation of the problem, we derive distributed power control, transport rate and queue management schemes that jointly achieve the optimal network operation. Our solution has several attractive features compared to alternatives: it adheres to the natural time-scale separation between rapid power control updates and slower end-to-end rate adjustments, and uses simplified power control mechanisms with reduced signalling requirements. We argue that these features are critical for a successful real-world implementation. To validate these claims, we present a detailed implementation of a cross-layer adapted networking stack for DS-CDMA ad-hoc networks in ns-2. We describe several critical issues that arise in the implementation but are typically neglected in theoretical protocol design, and evaluate the alternatives in extensive simulations.
Device-to-device (D2D) communication integrated into cellular networks is an advanced tool for taking advantage of the proximity of devices, allowing cellular resources to be reused and thereby increasing the user bitrates and the system capacity. However, the introduction of D2D in legacy Long Term Evolution (LTE) cellular spectrum requires revisiting and modifying the existing radio resource management and power control (PC) techniques in order to fully realize the potential proximity and reuse gains and to limit the interference to the cellular layer. In this paper, we examine the performance of the legacy LTE PC toolbox and benchmark it against a utility-optimal iterative scheme. We find that the open-loop PC scheme of LTE performs well for cellular users, both in terms of the used transmit power levels and the achieved signal-to-interference-and-noise-ratio distribution. However, the performance of the D2D users, as well as the overall system throughput, can be boosted by the utility-optimal scheme, which takes better advantage of both the proximity and the reuse gains. We therefore propose a hybrid PC scheme, in which cellular users employ the legacy LTE open-loop PC, while D2D users employ the utility-optimizing distributed PC scheme. We also find that the hybrid scheme is not only nearly optimal and able to balance between spectral and energy efficiency, but also allows for a distributed implementation at the D2D users, while preserving the LTE PC scheme for the cellular users.
The curvature-aided IAG (CIAG) algorithm is an efficient asynchronous optimization method that accelerates IAG using a delay compensation technique. However, existing step-size rules for CIAG are conservative and hard to implement, and the Hessian computation in CIAG is often computationally expensive. To alleviate these issues, we first provide an easy-to-implement and less conservative step-size rule for CIAG. Next, we propose a modified CIAG algorithm that reduces the computational complexity by approximating the Hessian with a constant matrix. Convergence results are derived for each algorithm on both convex and strongly convex problems, and numerical experiments on logistic regression demonstrate their effectiveness in practice.
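The sketch below illustrates our reading of the constant-matrix idea: incremental aggregated gradients in which each stale component gradient is compensated by a fixed curvature matrix B times the iterate displacement. The least-squares data, step-size rule, and choice of B are hypothetical, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(7)
n, dim = 5, 6
A = [rng.normal(size=(8, dim)) for _ in range(n)]
b = [rng.normal(size=8) for _ in range(n)]
H_tot = sum(Ai.T @ Ai for Ai in A)                 # Hessian of the total loss
B = H_tot / n                                      # constant per-component surrogate
step = 0.5 / np.linalg.eigvalsh(H_tot)[-1]         # conservative step size

x = np.zeros(dim)
mem_x = [x.copy() for _ in range(n)]               # iterate each gradient was taken at
mem_g = [A[i].T @ (A[i] @ x - b[i]) for i in range(n)]
for t in range(1000):
    i = t % n                                      # refresh one stale component
    mem_x[i], mem_g[i] = x.copy(), A[i].T @ (A[i] @ x - b[i])
    # aggregate stale gradients, each corrected by B @ (current - old iterate)
    g = sum(mem_g[j] + B @ (x - mem_x[j]) for j in range(n))
    x = x - step * g
print("gradient norm:", np.linalg.norm(sum(A[i].T @ (A[i] @ x - b[i]) for i in range(n))))
```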
One of the main advantages of second-order methods in a centralized setting is that they are insensitive to the condition number of the objective function's Hessian. For applications such as regression analysis, this means that less pre-processing of the data is required for the algorithm to work well, as the ill-conditioning caused by highly correlated variables will not be as problematic. Similar condition number independence has not yet been established for distributed methods. In this paper, we analyze the performance of a simple distributed second-order algorithm on quadratic problems and show that its convergence depends only logarithmically on the condition number. Our empirical results indicate that the use of second-order information can yield large efficiency improvements over first-order methods, both in terms of iterations and communications, when the condition number is of the same order of magnitude as the problem dimension.
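A toy experiment along these lines: on a quadratic with Hessian H, fixed-step gradient descent degrades as the condition number grows, while a Newton-type step is unaffected. The dimensions and conditioning below are hypothetical.

```python
import numpy as np

# Condition-number (in)sensitivity on f(x) = 0.5 x'Hx: gradient descent slows
# down as cond(H) grows; an exact Newton step converges regardless.
rng = np.random.default_rng(3)
for kappa in (10.0, 1e4):
    H = np.diag(np.linspace(1.0, kappa, 50))       # condition number = kappa
    x0 = rng.normal(size=50)
    x_gd = x0.copy()
    for _ in range(100):                           # optimal fixed-step gradient descent
        x_gd -= (2.0 / (1.0 + kappa)) * (H @ x_gd)
    x_nt = x0 - np.linalg.solve(H, H @ x0)         # a single exact Newton step
    print(f"cond={kappa:8.0f}  gradient error={np.linalg.norm(x_gd):.1e}  "
          f"newton error={np.linalg.norm(x_nt):.1e}")
```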
Stability properties of monotone nonlinear systems with max-separable Lyapunov functions are considered in this paper, motivated by the following observations. First, recent results have shown that such Lyapunov functions are guaranteed to exist for asymptotically stable monotone systems on compact sets. Second, it is well-known that, for monotone linear systems, asymptotic stability implies the stronger properties of D-stability and robustness with respect to time-delays. This paper shows that similar properties hold for monotone nonlinear systems that admit max-separable Lyapunov functions. In particular, a notion of D-stability for monotone nonlinear systems and delay-independent stability will be discussed. The theoretical results are illustrated by means of examples.
We present POLO.jl, a Julia package that helps algorithm developers and machine-learning practitioners design and use state-of-the-art parallel optimization algorithms in a flexible and efficient way. POLO.jl extends our C++ library POLO, which has been designed and implemented with the same intentions. POLO.jl not only wraps selected algorithms in POLO and provides an easy mechanism to use data manipulation facilities and loss function definitions in Julia together with the underlying compiled C++ library, but it also uses the policy-based design technique in a Julian way to help users prototype optimization algorithms from their own building blocks. In our experiments, we observe that there is little overhead when using the compiled C++ code directly within Julia. We also notice that the performance of algorithms implemented in pure Julia is comparable with that of their C++ counterparts. Both libraries are hosted on GitHub under the free MIT license, and can be used easily by pulling the pre-built 64-bit architecture Docker images.
We present LShapedSolvers.jl, a suite of scalable stochastic programming solvers implemented in the Julia programming language. The solvers, which are based on the L-shaped algorithm, run efficiently in parallel, exploit problem structure, and operate on distributed data. The implementation introduces several flexible high-level abstractions that result in a modular design and simplify the development of algorithm variants. In addition, we demonstrate how the abstractions available in the Julia module for distributed computing are exploited to simplify the implementation of the parallel algorithms. The performance of the solvers is evaluated on large-scale problems for finding optimal orders on the Nordic day-ahead electricity market. With 16 worker cores, the fastest algorithm solves a distributed problem with 2.5 million variables and 1.5 million linear constraints about 19 times faster than Gurobi solves the extended form directly.
We present StochasticPrograms.jl, a user-friendly and powerful open-source framework for stochastic programming written in the Julia language. The framework includes both modeling tools and structure-exploiting optimization algorithms. Stochastic programming models can be efficiently formulated using an expressive syntax, and models can be instantiated, inspected, and analyzed interactively. The framework scales seamlessly to distributed environments. Small instances of a model can be run locally to ensure correctness, whereas larger instances are automatically distributed in a memory-efficient way onto supercomputers or clouds and solved using parallel optimization algorithms. These structure-exploiting solvers are based on variations of the classical L-shaped, progressive-hedging, and quasi-gradient algorithms. We provide a concise mathematical background for the various tools and constructs available in the framework along with code listings exemplifying their usage. Both software innovations related to the implementation of the framework and algorithmic innovations related to the structured solvers are highlighted. We conclude by demonstrating strong scaling properties of the distributed algorithms on numerical benchmarks in a multinode setup.
We develop a fast smoothing procedure for solving linear two-stage stochastic programs, which outperforms the well-known L-shaped algorithm on large-scale benchmarks. We derive problem-dependent bounds on the effect of smoothing and characterize the convergence rate of the proposed algorithm. The theory suggests that the smoothing scheme can be sped up by sacrificing accuracy in the final solution. To obtain an efficient and effective method, we suggest a hybrid solution that combines the speed of the smoothing scheme with the accuracy of the L-shaped algorithm. We benchmark a parallel implementation of the smoothing scheme against an efficient parallelized L-shaped algorithm on three large-scale stochastic programs, in a distributed environment with 32 worker cores. The smoothing scheme reduces the solution time by up to an order of magnitude compared to the L-shaped algorithm.
This paper deals with architecture and algorithm design for networked control systems (NCSs) that must cope with network outages. A networked PID controller and an internal-model-control-based tuning scheme are proposed, using the jitter margin to ensure stability despite dropped packets. For longer periods of disconnection, an outage heuristic is proposed. The control design and outage heuristic require only the step response of the process and are therefore simple to apply in practice. The effects of these outages on the stability and performance of the closed-loop control system are evaluated and compared with alternative schemes via extensive simulations. The simple networked PID controller with the outage heuristic is shown to perform well compared to other techniques proposed in the NCS literature.
We study a tournament format that extends a traditional double round-robin format with divisional single round-robin tournaments. Elitserien, the top Swedish handball league, uses such a format for its league schedule. We present a constraint programming model that characterizes the general double round-robin plus divisional single round-robin format. This integrated model allows scheduling to be performed in a single step, as opposed to common multistep approaches that decompose scheduling into smaller problems and may miss optimal solutions. In addition to the general constraints, we introduce Elitserien-specific requirements for its tournament. These general and league-specific constraints allow us to identify implicit and symmetry-breaking properties that reduce the time to solution from hours to seconds. A scalability study of the number of teams shows that our approach remains reasonably fast even for larger league sizes. The experimental evaluation shows that the integrated approach takes considerably less computational effort to schedule Elitserien than the previous decomposed approach does.
We address the weight-balancing problem for a distributed system whose components (nodes) can exchange information via interconnection links (edges) that form an arbitrary, possibly directed, communication topology (digraph). A weighted digraph is balanced if, for each node, the sum of the weights of the edges outgoing from that node is equal to the sum of the weights of the edges incoming to that node. Weight-balanced digraphs play a key role in a variety of applications, such as coordination of groups of robots, distributed decision making, and distributed averaging, which is important for a wide range of applications in signal processing. We propose a distributed algorithm that solves the weight-balancing problem in a minimum number of iterations when the weights are nonnegative real numbers. We also provide examples that corroborate the proposed algorithm.
We propose a distributed coordination mechanism which enables nodes in a directed graph to accurately estimate their eigenvector centrality (eigencentrality) even if they update their values at times determined by their own clocks. The clocks need neither be synchronized nor have the same speed. The main idea is to let nodes adjust the weights on outgoing links to compensate for their update speed: the higher the update frequency, the smaller the link weights. Our mechanism is used to develop a distributed algorithm for computing the PageRank vector, commonly used to assign importance to web pages and rank search results. Although several distributed approaches in the literature can deal with asynchronism, they cannot handle the different update speeds that occur when servers have heterogeneous computational capabilities. When existing algorithms are executed using heterogeneous update speeds, they compute incorrect PageRank values. The advantages of our algorithm over existing approaches are verified through illustrative examples.
In this paper we address the problem of balancing the processing load of MapReduce tasks running on heterogeneous clusters, i.e., clusters with different capacities and update cycles. We present a fully decentralized algorithm, based on ratio consensus, where each mapper decides the amount of workload data to handle for a single user job using only job specific local information, i.e., information that can be collected from directly connected neighboring mappers, regarding their current workload and capacity. In contrast to other algorithms in the literature, the proposed algorithm can be deployed in heterogeneous networks and can operate asynchronously in both directed and undirected communication topologies. The performance of the proposed algorithm is demonstrated via simulation experiments on large-scale strongly connected topologies.
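For reference, here is a minimal sketch of the ratio-consensus primitive the algorithm builds on: each node splits its two running sums equally over its outgoing links (plus a self-loop), and the ratio of the sums at every node converges to the network-wide average. The topology and workload values are hypothetical.

```python
import numpy as np

# Ratio consensus on a strongly connected digraph: node j splits its two
# running sums equally over its outgoing links and a self-loop, and the
# ratio x_i/y_i at every node converges to the average of the initial x.
out_neighbors = {0: [1], 1: [2, 3], 2: [0], 3: [0, 2]}   # hypothetical topology
n = len(out_neighbors)
x0 = np.array([8.0, 2.0, 6.0, 4.0])    # e.g., current workload at each mapper
x, y = x0.copy(), np.ones(n)

for _ in range(100):
    x_new, y_new = np.zeros(n), np.zeros(n)
    for j, nbrs in out_neighbors.items():
        share = 1.0 / (len(nbrs) + 1)  # equal split over out-links + self
        for i in list(nbrs) + [j]:
            x_new[i] += share * x[j]
            y_new[i] += share * y[j]
    x, y = x_new, y_new

print("per-node ratios:", x / y, " true average:", x0.mean())
```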
We study a cooperative network with a buffer-aided multi-antenna source, multiple half-duplex (HD) buffer-aided relays and a single destination. Such a setup could represent a cellular downlink scenario, in which the source can be a more powerful wireless device with a buffer and multiple antennas, while a set of intermediate less powerful devices are used as relays to reach the destination. The main target is to recover the multiplexing loss of the network by having the source and a relay simultaneously transmit their information to another relay and the destination, respectively. Successive transmissions in such a cooperative network, however, cause inter-relay interference (IRI). First, by assuming global channel state information (CSI), we show that the detrimental effect of IRI can be alleviated by precoding at the source, mitigating or even fully cancelling the interference. A cooperative relaying policy is proposed that employs a joint precoding design and relay-pair selection; both fixed-rate and adaptive-rate transmissions can be considered. For the case when channel state information is only available at the receiver side (CSIR), we propose a relay selection policy that employs a phase alignment technique to reduce the IRI. The performance of the two proposed relay-pair selection policies is evaluated and compared with other state-of-the-art relaying schemes in terms of outage and throughput. The results show that the use of a powerful source can provide considerable performance improvements.
In this paper, we study a simple X-relay configuration where the shared relay operates in full-duplex (FD) mode. The relay node may have limited spatial degrees of freedom and, as a result, may not be able to handle both the loop interference and the multiuser interference. Hence, a decision on the precoding scheme is required. It is often the case that the relay does not have the option of real-time switching between different precoding schemes, either due to hardware limitations of the relay or the increased complexity of the problem. We therefore investigate a 'static' precoding decision, where the relay node decides on its precoding scheme based only on statistical knowledge of the channel conditions. To perform this decision, the behavior of the system is formulated as a Markov chain and the outage probability of the system is derived in closed form with the precoding decision as a parameter. The outage probability is minimized by optimally choosing the precoding scheme, using easily verifiable conditions on the statistical knowledge of the channel conditions. Simulations validate the investigated scheme.
We consider a wireless relay network that consists of a source, half-duplex decode-and-forward buffer-aided relays and a destination. While the majority of previous works on relay selection assume no direct transmission between source and destination in such a setting, we lift this assumption and propose a link selection policy that exploits both the buffering ability and the opportunity for successful reception of a packet directly from the source. The proposed relay selection scheme incorporates the instantaneous strength of the wireless links and adapts the relay selection decision based on the strongest available link. The evolution of the network as a whole is modeled by means of a Markov chain, and the outage probability is thus associated with the steady state of the Markov chain. We deduce that even if the source-destination link is in principle very unreliable, it is always beneficial for the source to multicast a packet to both the relay with the strongest available link and the destination.
Many of the algorithms that have been proposed in the field of distributed computation rely on assumptions that require nodes to be aware of some global parameters. In this paper, we propose algorithms to compute some network parameters in a distributed fashion and in a finite number of steps. More specifically, given an arbitrary strongly connected network of interconnected nodes, by adapting a distributed finite-time approach, we develop distributed strategies that enable nodes to compute the following network parameters: the left-eigenvector, the out-degree, and the spectrum of weighted adjacency matrices.
Distributed algorithms for average consensus in directed graphs are typically asymptotic in the literature. In this work, we propose a protocol to distributively reach average consensus in a finite number of steps on interconnection topologies that form strongly connected directed graphs (digraphs). The average consensus value can be computed, based exclusively on local observations at each component, by running a protocol that requires each component to observe and store its own value over a finite and minimal number of steps, and to have knowledge of the number of its out-going links (i.e., the number of components to which it sends information). The proposed algorithm is demonstrated via illustrative examples.
Most algorithms for distributed averaging only guarantee asymptotic convergence. This paper introduces a distributed protocol that allows nodes to find the exact average of the initial values in a finite and minimum number of steps on interconnection topologies described by strongly connected directed graphs (digraphs). More specifically, under the assumption that each component has knowledge of the number of its outgoing links (i.e., the number of components to which it sends information), we show that the average value can be computed based on local observations over a finite time interval. The average can be obtained in a finite number of steps even when the information exchange is subject to delays. The proposed algorithm is the first in the literature that allows for distributed computation of the exact average in digraphs in finite time, with and without delays.
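A sketch of how such finite-time extrapolation can work, under our own simplified reading: a node stores its ratio-consensus iterates, detects the minimal linear recurrence they satisfy through a rank-deficient Hankel matrix of successive differences, and recovers the exact limit from the kernel vector. The topology, weights, and numerical tolerance are hypothetical.

```python
import numpy as np

def limit_from_trajectory(seq, tol=1e-9):
    """Exact limit of a convergent, linearly recurrent scalar sequence."""
    d = np.diff(seq)                                # differences decay to zero
    for m in range(1, len(d) // 2):
        H = np.array([[d[i + j] for j in range(m + 1)] for i in range(m + 1)])
        if np.linalg.matrix_rank(H, tol=tol) <= m:   # first rank-deficient Hankel
            b = np.linalg.svd(H)[2][-1]              # kernel = recurrence coefficients
            return float(b @ seq[:m + 1] / b.sum())  # telescoped limit formula
    raise RuntimeError("stored trajectory too short")

# Ratio-consensus iterates on a hypothetical strongly connected digraph.
out = {0: [1], 1: [2, 3], 2: [0], 3: [0, 2]}
n, T = 4, 30
x, y = np.array([3.0, 7.0, 1.0, 5.0]), np.ones(n)
xs, ys = [x.copy()], [y.copy()]
for _ in range(T):
    xn, yn = np.zeros(n), np.zeros(n)
    for j, nbrs in out.items():
        for i in list(nbrs) + [j]:
            xn[i] += x[j] / (len(nbrs) + 1)
            yn[i] += y[j] / (len(nbrs) + 1)
    x, y = xn, yn
    xs.append(x.copy()); ys.append(y.copy())

node = 0                                             # node 0 uses only its own history
xi = np.array([s[node] for s in xs])
yi = np.array([s[node] for s in ys])
print("exact average:", limit_from_trajectory(xi) / limit_from_trajectory(yi))
```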
Wireless sensor networks are trying to find their way from relatively undemanding applications such as environmental monitoring to applications such as industrial control, which have stronger requirements in terms of security and predictability. Predictability cannot be achieved without coordination, and the coordination of distributed entities and events requires time synchronization. Towards this end, we present a secure time synchronization service that, as our experimental results show, does not degrade time synchronization accuracy. Based on the time synchronization service, we implement time-slotted data collection and present results showing that in this way we can provide a predictable data collection service.
Frequency channel allocation is a key technique for improving the performance of cellular networks. In this paper, we address the channel allocation problem for a 5G multi-cell system. We consider a heterogeneous network in which cellular users, micro-cell users, and device-to-device (D2D) communications coexist within the radio footprint of the macro cell. We maximize the aggregate transmission rate, exploiting channel diversity and managing both the inter-cell interference typical of cellular networks and the intra-cell interference generated by the nonorthogonal transmissions of the small-cell and D2D users. By modeling the allocation problem as a potential game, whose Nash equilibria correspond to the local optima of the objective function, we propose a new decentralized solution. The convergence of our scheme is enforced by using a better-response dynamic based on a message-passing approach. The simulation results assess the validity of the proposed scheme in terms of convergence time and achievable rate under different settings.
This paper investigates the subcarrier allocation problem for uplink transmissions in a multi-cell network, where device-to-device communications are enabled. We focus on maximizing the aggregate transmission rate in the system accounting for both inter- and intra-cell interference. This problem is computationally hard due to its nonconvex and combinatorial nature. However, we show that it can be described by a potential game, and thus a Nash equilibrium can be found using iterative algorithms based on best/better response dynamics. In particular, we propose a simple iterative algorithm with limited signaling that is guaranteed to converge to an equilibrium point, corresponding to a local maximum of the potential function. Using extensive simulations, we show that the algorithm converges quickly also for dense networks, and that the distance to the true optimum is often small, at least for the small-sized networks for which we were able to compute the true optimum.
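To illustrate why better-response dynamics converge in a potential game, the sketch below runs them on a simple congestion-style channel-allocation game. This is a hypothetical stand-in interference model, not the paper's: each user's rate decreases with co-channel load, Rosenthal's potential increases at every improvement step, and the dynamics terminate at an equilibrium.

```python
import random

# Better-response dynamics for a channel-allocation congestion game.
random.seed(0)
n_users, n_channels = 12, 3
choice = [random.randrange(n_channels) for _ in range(n_users)]

def rate(user, ch, choice):
    """Rate of `user` on channel `ch`, shrinking with co-channel load."""
    load = sum(1 for u, c in enumerate(choice) if c == ch and u != user)
    return 1.0 / (1.0 + load)

improved = True
while improved:                         # asynchronous improvement rounds
    improved = False
    for u in range(n_users):
        cur = rate(u, choice[u], choice)
        best = max(range(n_channels), key=lambda c: rate(u, c, choice))
        if rate(u, best, choice) > cur + 1e-12:
            choice[u] = best            # unilateral improvement step
            improved = True             # the potential strictly increased

print("equilibrium channel loads:",
      [choice.count(c) for c in range(n_channels)])
```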
Device-to-Device (D2D) communication is a promising technology for improving the performance of proximity-based services. This paper demonstrates how the integration of D2D communication in cellular systems operating under dynamic Time Division Duplex (TDD) can improve energy efficiency. We perform joint optimization of mode selection, uplink/downlink transmission period, and power allocation to minimize the transmission energy consumption while satisfying a traffic requirement. Solutions are developed for two scenarios: with and without interference among D2D communications. Both formulations are expressed as mixed-integer nonlinear programming problems, which are NP-hard in general. We exploit the problem structure to develop efficient solutions for both scenarios. For the interference-free case, we develop algorithms that find the optimal solution in polynomial time. When considering interference, we propose a customized solver based on branch-and-bound that reduces the search complexity by taking advantage of problem-specific properties. We complement this solver with a more practical heuristic algorithm. Simulation results demonstrate that D2D communications in dynamic TDD systems can yield significant energy savings and improved spectral efficiency compared to traditional cellular communication. Furthermore, we give analytical characterizations of the receiver locations relative to a given transmitter for which D2D communication is optimal. These regions can be surprisingly large and are not necessarily circular.
Network-assisted Device-to-Device (D2D) communication is a promising technology for improving the performance of proximity-based services. This paper demonstrates how D2D communication can be used to improve the energy efficiency of cellular networks, leading to a greener system operation and a prolonged battery life of the mobile devices. Assuming a flexible TDD system, we develop optimal mode selection policies for minimizing the energy cost (either from the system or from the device perspective) while guaranteeing a certain rate requirement. The jointly optimal transmit power and time allocation, as well as the optimal mode selection, is found by solving a small convex optimization problem. Special attention is given to the geometrical interpretation of the obtained results. We show that when network energy is the primary concern, D2D mode is preferable in a large portion of the cell. When the device energy consumption is most important, on the other hand, the area where D2D mode is preferable shrinks and becomes close to circular. Finally, we investigate how network parameters affect the range where direct communication is preferred.
In cellular systems, the combination of Device-to-Device (D2D) communication and relaying is an efficient means for improving network coverage and transmission quality without additional infrastructure deployment. It enables communication between user pairs in situations when both their direct D2D transmission and the traditional communication via the base station experience poor channel quality. In this paper, we propose a joint relaying-operation selection and power-allocation scheme, herein called HyD2D, for relay-assisted D2D communication in a Rician fading environment. The target is to choose the set of communication links that minimizes the power consumption, while ensuring a minimum success probability. To overcome the nonconvexity of the outage probability constraints under Rician fading, we use the concept of a coherent measure of risk from the field of finance. We thereby obtain a linear programming formulation that we can efficiently solve. Simulations show that HyD2D selects the most energy-efficient relaying operation that satisfies the success probability requirement, while leveraging only statistical channel state information.
Device-to-device communication is a technology that allows users in close proximity to establish a direct communication link instead of passing through the base station. Because direct communications are likely to have a strong line-of-sight component in the received signal, it is reasonable to model the direct channel with Rician fading. In this paper, we propose a power-control scheme for device-to-device communications on a shared channel. Our allocation minimizes the total power consumption while limiting the link outage probability due to Rician fast fading. By leveraging the concept of conditional value-at-risk from the field of finance, we obtain a linear programming formulation that can be efficiently solved. Through simulation results, we show the benefit of the proposed power allocation compared to a deterministic power control that does not account for the random channel variations. Moreover, we provide insights into how the network topology and the parameter settings affect the performance and feasibility of the power allocation.
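The general shape of such a reformulation can be sketched with the sample-based Rockafellar-Uryasev representation of conditional value-at-risk, which turns the outage chance constraint into a linear program solvable with scipy.optimize.linprog. The channel model, SNR target, and all numbers below are hypothetical stand-ins, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Minimize transmit power p subject to CVaR_eps(gamma_min*noise - g*p) <= 0,
# a conservative linear surrogate of P(g*p < gamma_min*noise) <= eps,
# evaluated on N sampled Rician power gains g.
rng = np.random.default_rng(4)
N, eps, gamma_min, noise = 2000, 0.05, 2.0, 1.0
los = 1.0                                   # Rician: LOS component + scatter
g = np.abs(los + (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)) ** 2

# Decision vector z = [p, t, u_1..u_N]; objective: minimize p.
c = np.zeros(2 + N); c[0] = 1.0
# u_k >= gamma_min*noise - g_k*p - t   <=>   -g_k*p - t - u_k <= -gamma_min*noise
A1 = np.hstack([-g[:, None], -np.ones((N, 1)), -np.eye(N)])
b1 = -gamma_min * noise * np.ones(N)
# CVaR constraint:  t + (1/(N*eps)) * sum_k u_k <= 0
A2 = np.hstack([[0.0], [1.0], np.full(N, 1.0 / (N * eps))]).reshape(1, -1)
bounds = [(0, None), (None, None)] + [(0, None)] * N   # p >= 0, t free, u >= 0
res = linprog(c, A_ub=np.vstack([A1, A2]), b_ub=np.append(b1, 0.0),
              bounds=bounds, method="highs")
print("minimum power meeting the CVaR outage surrogate:", res.x[0])
```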
Networks with device-to-device (D2D) technology allow for two possible communication modes: traditional communication via the base station, and direct communication between the users. Recent studies show that in-band full-duplex (IBFD) operations can be advantageously combined with D2D communication to improve the spectral efficiency. However, no algorithms for selecting the communication mode of mobile users in IBFD networks have yet appeared in the literature. In this paper, we design a distributed mode selection scheme for users in D2D-enabled IBFD networks. The proposed scheme maximizes the users' probability of successful communication by leveraging only existing signaling mechanisms.
The dimensioning of wireless communication protocols for networked control involves a non-trivial trade-off between reliability and delay. Due to the lossy nature of wireless communications, there is a risk that sensor messages will be dropped. The end-to-end reliability can be improved by retransmitting dropped messages, but this comes at the expense of additional delays. In this work, we determine the number of retransmissions that strikes the optimal balance between communication reliability and delay, in the sense that it achieves the minimal expected linear-quadratic loss of the closed-loop system. An important feature of our setup is that it accounts for the random delays and possible losses that occur when unreliable communication is combatted with retransmissions. The resulting controller dynamically switches among a set of infinite-horizon linear-quadratic regulators, and is simple to implement. Numerical simulations are carried out to highlight the trade-off between reliability and delay.
This paper proposes a supervisory control structure for networked systems with time-varying delays. The control structure, in which a supervisor triggers the most appropriate controller from a multi-controller unit, aims at improving the closed-loop performance relative to what can be obtained using a single robust controller. Our analysis considers average dwell-time switching and is based on a novel multiple Lyapunov-Krasovskii functional. We develop stability conditions that can be verified by semi-definite programming, and show that the associated state feedback synthesis problem also can be solved using convex optimization tools. Extensions of the analysis and synthesis procedures to the case when the evolution of the delay mode is described by a Markov chain are also developed. Simulations on small and large-scale networked control systems are used to illustrate the effectiveness of our approach.
This paper proposes a supervisory control structure for networked systems with time-varying delays. The control structure, in which a supervisor triggers the most appropriate controller from a multi-controller unit, aims at improving the closed-loop performance relative to what can be obtained using a single robust controller. Our analysis considers average dwell-time switching and is based on a novel multiple Lyapunov-Krasovskii functional. We develop analysis conditions that can be verified by semi-definite programming, and show that the associated state feedback synthesis problem can also be solved using convex optimization. Small- and large-scale networked control systems are used to illustrate the effectiveness of our approach.
This paper investigates the stability of discrete-time linear systems with stochastic delays. We assume that the delays are modeled as random variables that take integer values with certain probabilities. For the scalar case, we provide an analytical bound on the delay probabilities that guarantees stability of the linear system. In the vector case, we derive a linear matrix inequality condition for computing delay probabilities that ensure stability of the closed-loop system. As a special case, we also determine step sizes of gradient algorithms with stochastic delays that guarantee convergence to the optimal solution in unconstrained quadratic programming. Numerical examples show the effectiveness of the proposed analysis techniques.
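The quadratic-programming special case is easy to illustrate by simulation: gradient descent that uses a randomly delayed iterate in each step remains stable for a small enough step size, and not for a larger one. The Hessian, delay distribution, and step sizes below are hypothetical.

```python
import numpy as np

# Monte Carlo sketch: x(k+1) = x(k) - gamma * H x(k - tau_k) with i.i.d.
# random integer delays tau_k; a small gamma keeps the iteration stable,
# while a larger one does not.
rng = np.random.default_rng(5)
H = np.diag([1.0, 4.0, 9.0])                 # Hessian of 0.5 x'Hx
delays, probs = [0, 1, 2], [0.5, 0.3, 0.2]   # P(tau_k = d)
for gamma in (0.05, 0.3):
    hist = [rng.normal(size=3) for _ in range(max(delays) + 1)]
    for k in range(500):
        tau = rng.choice(delays, p=probs)
        x_old = hist[-1 - tau]               # stale iterate used in the step
        hist.append(hist[-1] - gamma * (H @ x_old))
    print(f"gamma={gamma}: |x(500)| = {np.linalg.norm(hist[-1]):.3e}")
```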
We consider a finite-horizon linear-quadratic optimal control problem where only a limited number of control messages may be sent from the controller to the actuator. To restrict the number of control actions computed and transmitted by the controller, we employ a threshold-based event-triggering mechanism that decides whether or not a control message needs to be calculated and delivered. Due to the nature of threshold-based event-triggering algorithms, finding the optimal control sequence requires minimizing a quadratic cost function over a nonconvex domain. In this paper, we first provide an exact solution to this nonconvex problem by solving an exponential number of quadratic programs. To reduce the computational complexity, we then propose two efficient heuristic algorithms based on greedy search and the alternating direction method of multipliers. We then consider a receding horizon control strategy for linear systems controlled by event-triggered controllers, and we further provide a complete stability analysis of receding horizon control that uses finite-horizon optimization in the proposed class. Numerical examples demonstrate the viability of the presented design technique.
This paper develops a theoretical framework for quantifying the trade-off between communication cost and control performance in event-triggered control over lossy networks. We consider a system where the communication between the controller and actuator is dictated by a threshold-based event-triggering algorithm, and develop a Markov-chain model that describes the attempted and successful transmissions of control messages over the lossy communication channel. A feature of our model is that it considers retransmissions of unsuccessful messages and that it accounts for the delay associated with such retransmissions. A systematic framework for analyzing the trade-off between the communication rate and control performance and for optimal tuning of the event threshold emanates by combining this model with an analytical model of the closed-loop performance. Numerical examples demonstrate the effectiveness of the proposed framework.
We consider a stochastic system where the communication between the controller and the actuator is triggered by a threshold-based rule. The communication is performed across an unreliable link that stochastically erases transmitted packets. To decrease the communication burden, and as a partial protection against dropped packets, the controller sends a sequence of control commands to the actuator in each packet. These commands are stored in a buffer and applied sequentially until the next control packet arrives. In this context, we study dead-beat control laws and compute the expected linear-quadratic loss of the closed-loop system for any given event-threshold. Furthermore, we provide analytical expressions that quantify the trade-off between the communication cost and the control performance of event-triggered control systems. Numerical examples demonstrate the effectiveness of the proposed technique.
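A Monte Carlo sketch of this setup, under simplifying assumptions of our own (scalar integrator dynamics, hypothetical threshold, loss probability, and buffer length): packets carry a short dead-beat command sequence that the actuator buffers and plays out when subsequent packets are lost.

```python
import numpy as np

rng = np.random.default_rng(6)
a, thr, p_loss, horizon, buf_len = 1.0, 0.8, 0.3, 50_000, 3
x, buffer, loss = 0.0, [], 0.0
for k in range(horizon):
    if abs(x) > thr:                          # event: attempt a transmission
        if rng.random() > p_loss:             # packet got through
            # dead-beat sequence: cancel the state now, then hold at zero
            buffer = [-a * x] + [0.0] * (buf_len - 1)
    u = buffer.pop(0) if buffer else 0.0      # play buffered command, else 0
    loss += x * x + u * u                     # quadratic stage cost
    x = a * x + u + rng.normal(scale=0.5)     # scalar integrator plus noise
print("empirical LQ loss per step:", loss / horizon)
```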
This paper considers multiple linear stochastic control systems whose feedback loops are closed over a shared communication medium. A threshold-based event-triggering rule is used to transmit control commands from the controllers to the actuators, and network access is arbitrated using a static priority mechanism. Under these conditions, we study deadbeat control laws and compute the expected linear-quadratic loss of the closed-loop system as a function of the event-thresholds of the individual loops. Also, we present analytical expressions that quantify the trade-off between the communication cost and the control performance of such event-triggered control systems. Using a multi-dimensional exhaustive search method, we determine the set of event thresholds that attains the minimal expected linear-quadratic loss of the closed-loop systems. Simulation studies highlight the trade-off between the communication and control cost.
We consider the joint design of transmission schedules and controllers for networked control loops that use WirelessHART communication for sensor and actuator data. By parameterizing the design problem in terms of the sampling rate of the control loop, the co-design problem separates into two well-defined subproblems which admit optimal solutions: transmission scheduling should be done to maximize the delay-constrained reliability, while the control design should optimize closed-loop performance under packet loss. We illustrate how these problems can be solved and demonstrate our co-design framework for the case of linear-quadratic control.
We consider the joint design of packet forwarding policies and controllers for wireless control loops where sensor measurements are sent to the controller over an unreliable and energy-constrained multi-hop wireless network. For fixed sampling rate of the sensor, the co-design problem separates into two well-defined and independent subproblems: transmission scheduling for maximizing the deadline-constrained reliability and optimal control under packet loss. We develop optimal and implementable solutions for these subproblems and show that the optimally co-designed system can be efficiently found. Numerical examples highlight the many trade-offs involved and demonstrate the power of our approach.