Low-latency communication is one of the most important application scenarios in next-generation wireless networks. In communication-theoretic studies, latency is often defined as the time required to transmit a packet over a channel. However, under very stringent latency requirements and with complexity-constrained receivers, the time required to decode the packet cannot be ignored and must be included in the total latency through accurate modeling. In this paper, we first present a way to calculate the decoding time using a per-bit complexity metric and introduce an empirical model that accurately describes the trade-off between the decoding complexity and the performance of state-of-the-art codes. By considering various communication parameters, we show that including the decoding time in latency analyses has a significant effect on the optimal selection of parameters.
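The aggregate-latency idea above can be sketched as transmission time plus a decoding time derived from a per-bit complexity metric. All function names and numerical values below (blocklength, symbol rate, operations per bit, processor throughput) are illustrative assumptions for this sketch, not values from the paper.

```python
# Sketch: total latency = transmission time + decoding time, where the
# decoding time follows from a per-bit complexity metric. All numbers
# below are illustrative assumptions.

def transmission_time(n, symbol_rate):
    """Time to transmit n channel uses at the given symbol rate (symbols/s)."""
    return n / symbol_rate

def decoding_time(n, ops_per_bit, ops_per_second):
    """Decoding time from a per-bit complexity metric:
    total operations = n * ops_per_bit, divided by processor throughput."""
    return n * ops_per_bit / ops_per_second

n = 128                 # blocklength in channel uses (assumed)
symbol_rate = 1e6       # 1 Msymbol/s (assumed)
ops_per_bit = 5e3       # decoder complexity per bit (assumed)
ops_per_second = 1e9    # receiver processing capability (assumed)

t_tx = transmission_time(n, symbol_rate)
t_dec = decoding_time(n, ops_per_bit, ops_per_second)
total_latency = t_tx + t_dec
```

With these assumed numbers the decoding time exceeds the transmission time, which is exactly the regime in which neglecting decoding latency distorts the analysis.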
Squared-envelope receivers, also known as energy detectors, are low-cost, low-complexity receivers owing to their simplified circuitry. Hence, they are attractive implementation structures for future Internet-of-Things (IoT) applications. Even though there is considerable work in the wider research area of squared-envelope receivers, a comprehensive comparison and statistical characterization of training-assisted channel estimators for squared-envelope receivers appear to be absent from the literature. A detailed description of practical channel estimation schemes is necessary for the optimal training design of latency-constrained IoT applications. In this paper, various channel estimators are derived, their bias and variance are studied, and their performance is numerically compared against the Cramér-Rao lower bound.
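As a minimal illustration of training-assisted estimation with a squared-envelope receiver, the sketch below estimates the channel gain |h|² by averaging squared-envelope observations over unit-amplitude training symbols and subtracting the noise floor. The signal model, the assumption of known noise variance, and this particular estimator are choices made for the example only; they are not the estimators derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (assumed): the receiver observes r_k = |h * s_k + w_k|^2 for
# unit-amplitude training symbols s_k, and estimates g = |h|^2.
N = 1000              # number of training symbols (assumed)
h = 0.8 + 0.3j        # unknown channel coefficient (assumed)
sigma2 = 0.1          # noise variance, assumed known at the receiver

s = np.ones(N)        # unit training symbols
w = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
r = np.abs(h * s + w) ** 2      # squared-envelope observations

# For unit symbols, E[r_k] = |h|^2 + sigma2, so remove the noise floor:
g_hat = r.mean() - sigma2
g_true = np.abs(h) ** 2
```

This sample-mean estimator is unbiased under the assumed model; characterizing bias and variance for more refined estimators, and comparing against the Cramér-Rao bound, is the subject of the paper.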
The Internet-of-Things (IoT), which will enable billions of devices to connect seamlessly with each other and to the Internet, aims to enhance the quality of daily life in diverse fields. Today, even though an abundance of IoT applications already exists, the growth of IoT is expected to accelerate in the foreseeable future. IoT applications are mainly divided into two categories: (1) consumer IoT and (2) industrial IoT (IIoT). The IIoT consists of interconnected sensors, machinery, and other “things” that are used in various fields of industrial applications. Throughout this chapter, the main focus is on wireless communication for IIoT applications, and therefore the major challenges in designing a suitable wireless communication solution for IIoT applications are initially discussed. A comprehensive overview of the state-of-the-art wireless communication standards suitable for IIoT applications is presented, and representative comparisons of some of the most common industrial wireless communication technologies, including 5G, the next generation of wireless technology, are provided. Next, we focus on one of the most significant technologies for 5G systems, namely ultra-reliable low-latency communication (URLLC), which is highly relevant for mission-critical IIoT applications. We list the challenges of URLLC and study the theoretical limits on the transmission of short packets. In these information-theoretic works, latency is mostly computed as the total transmission time of a single packet. However, decoding an encoded packet is a computationally demanding operation, and for complexity-constrained receivers, such as low-complexity IIoT receivers, the time needed for decoding should also be taken into account in the latency analysis. Finally, by including the decoding duration, we present the trade-offs in low-latency communication for receivers with computational complexity constraints.
In this article, we study the trade-off between latency and reliability in ultra-reliable low-latency communication (URLLC) in the presence of decoding complexity constraints. We consider linear block codes transmitted over a binary-input AWGN channel and decoded with an ordered-statistics (OS) decoder. We first investigate the performance of OS decoders as a function of decoding complexity and propose an empirical model that accurately quantifies the corresponding trade-off. Next, a consistent way to compute the aggregate latency of complexity-constrained receivers is presented, where the latency due to decoding is also included. It is shown that, under strict latency requirements, decoding latency cannot be neglected in complexity-constrained receivers. Next, based on the proposed model, several optimization problems relevant to the design of URLLC systems are introduced and solved. It is shown that the decoding time has a drastic effect on the design of URLLC systems when constraints on decoding complexity are considered. Finally, it is illustrated that the proposed model also closely describes the performance-versus-complexity trade-off for other candidate coding schemes for URLLC, such as tail-biting convolutional codes, polar codes, and low-density parity-check codes.
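The idea of an empirical performance-versus-complexity model can be sketched as fitting a parametric curve to measured (complexity, error rate) pairs. The exponential form eps(C) = a·exp(-b·C) and the synthetic data points below are stand-in assumptions for illustration; the paper's actual model is not reproduced here.

```python
import numpy as np

# Illustrative fit of a performance-vs-complexity curve. The functional
# form eps(C) = a * exp(-b * C) is an assumed stand-in, and the data
# points are synthetic, not measured decoder results.

C = np.array([1e3, 2e3, 4e3, 8e3, 1.6e4])       # decoding complexity (ops/bit)
eps = np.array([1e-2, 3e-3, 5e-4, 2e-5, 4e-8])  # block error rates (synthetic)

# Linearize: log(eps) = log(a) - b * C, then least-squares fit.
slope, log_a_hat = np.polyfit(C, np.log(eps), deg=1)
b_hat = -slope

def eps_model(c):
    """Predicted error rate at complexity c under the fitted model."""
    return np.exp(log_a_hat - b_hat * c)
```

Once such a model is in hand, the complexity budget enters latency and reliability optimization problems as an ordinary continuous variable.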
The phenomenon of pilot contamination (PC) in multi-cell Massive MIMO systems is investigated in the presence of imperfect timing synchronization (TS). In particular, a basic setup is considered, where a base station (BS) is perfectly synchronized with the user in its own cell, but there is imperfect TS between the BS and a user in another cell, possibly due to different propagation distances. A discrete-time system model is derived from the continuous-time system model. The discrete-time system model accurately captures the phenomenon of imperfect TS in terms of the timing mismatch and the impulse responses of the pulse shaping filters. The derived discrete-time system model is used to study the achievable rates of a two-cell Massive MIMO uplink. It is shown that the structure imposed on the pilot-contaminating signal by the imperfect TS can be leveraged to mitigate the effect of PC. The level of PC suppression is quantified as a function of the timing mismatch and the characteristics of the transmit/receive pulse shaping filters.
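A toy version of the timing-mismatch effect: sampling the combined transmit/receive pulse away from its ideal instants turns a single effective tap into a set of discrete-time taps, and this structure is what the interfering signal inherits. The raised-cosine pulse and the mismatch value below are illustrative assumptions, not the filters analyzed in the paper.

```python
import numpy as np

def raised_cosine(t, beta=0.25):
    """Raised-cosine pulse p(t) with roll-off beta and symbol period T = 1."""
    t = np.asarray(t, dtype=float)
    num = np.sinc(t) * np.cos(np.pi * beta * t)
    denom = 1.0 - (2.0 * beta * t) ** 2
    # Handle the removable singularity at |t| = 1/(2*beta).
    near = np.isclose(denom, 0.0)
    return np.where(near,
                    (np.pi / 4) * np.sinc(1.0 / (2.0 * beta)),
                    num / np.where(near, 1.0, denom))

tau = 0.3                        # timing mismatch, fraction of a symbol (assumed)
k = np.arange(-3, 4)             # neighboring symbol indices
taps = raised_cosine(k - tau)    # effective taps for the mis-synchronized user
taps_sync = raised_cosine(k)     # perfect TS: Nyquist pulse, single tap at k = 0
```

With tau = 0 the pulse is Nyquist, so only the k = 0 tap is nonzero; with tau = 0.3 the energy spreads over several taps, which is the structure the paper exploits for PC suppression.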
The efficient design of ultra-reliable low-latency communication (URLLC) is a major research objective for next-generation wireless systems, in particular for industrial automation applications. Massive MIMO has been successful in providing high spectral and energy efficiency, and it is important to investigate the potential gains and limitations it exhibits when applied to URLLC. We study a scenario where two sets of nodes with different traffic characteristics communicate with a central node equipped with multiple antenna elements. We characterize the outage probability when fully orthogonal training sequences are used versus when the training sequences are shared between the two sets of nodes. It is shown that substantial performance gains can be reaped with shared training sequences when there are strict latency requirements and/or a large number of nodes to be served.
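The overhead argument can be illustrated with a toy Monte Carlo experiment: under a fixed frame length, fully orthogonal training consumes more symbols and forces a higher data rate, while shared training frees symbols at the cost of contamination. Every modeling choice below (contamination as an SNR penalty, a single effective channel per node, all parameter values) is an assumption for illustration, not the system model of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy comparison of orthogonal vs. shared training under a strict latency
# budget. All parameters and the contamination model are assumed.
T = 64            # frame length in symbols (latency budget, assumed)
K = 24            # nodes per set (assumed)
B = 80            # information bits per node per frame (assumed)
snr = 10.0        # linear SNR (assumed)
n_trials = 100_000

def outage(pilot_len, contamination):
    """Empirical outage probability for a Rayleigh channel when pilot_len
    symbols are spent on training and contamination acts as extra noise."""
    data_len = T - pilot_len
    rate = B / data_len                      # required bits per channel use
    h = (rng.standard_normal(n_trials) + 1j * rng.standard_normal(n_trials)) / np.sqrt(2)
    eff_snr = snr * np.abs(h) ** 2 / (1 + contamination * snr)
    return np.mean(np.log2(1 + eff_snr) < rate)

p_orth = outage(pilot_len=2 * K, contamination=0.0)  # each set gets own pilots
p_shared = outage(pilot_len=K, contamination=0.1)    # the two sets share pilots
```

In this toy setting the rate penalty of the doubled pilot overhead dominates the contamination penalty, so shared training yields the lower outage, in line with the regime identified in the paper.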
The problem of communication over Rayleigh fading channels with estimated channel state information at the receiver (CSIR) is investigated. Based on a related hypothesis testing problem in the Neyman-Pearson formulation, a non-asymptotic (in the codeword blocklength) converse bound on the maximal coding rate is derived. The bound succinctly summarizes the effect of various system parameters, including the length of the channel coherence interval, the lengths of the training and data intervals, and the power allocated to training and data transmission. The bound is also studied in the asymptotic (in the codeword blocklength) regime, and a particularly simple, non-trivial upper bound on the ergodic capacity of Rayleigh fading channels with estimated CSIR is obtained. Finally, a second-order asymptotic expansion of the non-asymptotic converse is provided, which can be very useful in the study of latency-constrained communication systems.
The effect of correlation between neighboring resource blocks (RBs) on the outage probability performance of OFDM-based Massive MIMO systems is investigated. An upper bound on the outage probability, the relevant performance metric for latency-constrained communication, is derived for two modes of operation that exploit this correlation structure, and it is compared with the baseline of orthogonal communication, where the correlation is ignored. It is observed that a substantial outage probability improvement can be reaped already at moderate correlation levels. Closed-form upper and lower bounds on the investigated outage probability are derived. The bounds are shown to be tight for a wide range of system parameters and can be used to draw insights on the optimal design of latency-constrained Massive MIMO systems.
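A toy Monte Carlo sketch of the underlying effect: when the fading on two RBs is correlated but not identical, an operation that draws on both RBs (here, simple RB selection) lowers the outage probability relative to using a single RB. The single-antenna setup, the correlation construction, and all parameter values are simplifying assumptions, not the Massive MIMO system of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup (assumed): two RBs with correlated Rayleigh fading, a fixed
# rate threshold, and RB selection as the correlation-exploiting operation.
n_trials = 200_000
rho = 0.7        # correlation between the two RB channel coefficients
snr = 10.0       # linear SNR
rate = 2.0       # target rate in bits per channel use

# Correlated complex Gaussian channel coefficients on the two RBs.
z1 = (rng.standard_normal(n_trials) + 1j * rng.standard_normal(n_trials)) / np.sqrt(2)
z2 = (rng.standard_normal(n_trials) + 1j * rng.standard_normal(n_trials)) / np.sqrt(2)
h1 = z1
h2 = rho * z1 + np.sqrt(1 - rho**2) * z2

# Baseline: orthogonal operation on RB 1 only.
out_single = np.mean(np.log2(1 + snr * np.abs(h1) ** 2) < rate)

# Exploit both RBs via selection of the stronger one.
gain = np.maximum(np.abs(h1) ** 2, np.abs(h2) ** 2)
out_joint = np.mean(np.log2(1 + snr * gain) < rate)
```

Even at the assumed correlation of 0.7 the selection scheme visibly reduces outage, mirroring the observation that moderate correlation already yields a substantial improvement.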