We consider two theorems from the theory of compressive sensing. The main one concerns uniform recovery for random sampling matrices, where the number of samples needed to recover an s-sparse signal from linear measurements (with high probability) is known to be $m \gtrsim s (\ln s)^3 \ln N$. We present new and improved constants together with what we consider to be a more explicit proof, one that also allows for a slightly larger class of $m \times N$ matrices by considering what is called effective sparsity. We also present a condition on the so-called restricted isometry constants, $\delta_s$, ensuring sparse recovery via $\ell_1$-minimization. We show that $\delta_{2s} < 4/\sqrt{41}$ is sufficient and that this can be improved further, almost allowing for a sufficient condition of the type $\delta_{2s} < 2/3$.
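To make the $\ell_1$-minimization setting above concrete, here is a minimal sketch of basis pursuit ($\min \|x\|_1$ subject to $Ax = y$) written as a linear program. The Gaussian sampling matrix, the problem sizes, and the SciPy solver are illustrative assumptions, not the constructions analyzed in the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, N, s = 40, 100, 5                             # measurements, dimension, sparsity (illustrative)
A = rng.standard_normal((m, N)) / np.sqrt(m)     # random Gaussian sampling matrix
x_true = np.zeros(N)
x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true

# Basis pursuit as an LP in (x, t): minimize sum(t) with |x_i| <= t_i and A x = y
c = np.concatenate([np.zeros(N), np.ones(N)])
A_eq = np.hstack([A, np.zeros((m, N))])
A_ub = np.block([[np.eye(N), -np.eye(N)],        # x - t <= 0
                 [-np.eye(N), -np.eye(N)]])      # -x - t <= 0
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N), A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)
x_hat = res.x[:N]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With these (comfortable) sizes the recovery error is typically near machine precision; shrinking m toward the $s (\ln s)^3 \ln N$ regime is where conditions such as the restricted isometry property become the deciding factor.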
Two sets of 0-1 vectors of fixed length form a uniquely decodable code pair if their Cartesian product is of the same size as their sumset, where the addition is pointwise over the integers. For the size of the sumset of such a pair, van Tilborg has given an upper bound in the general case. Urbanke and Li, and later Ordentlich and Shayevitz, have given better bounds in the unbalanced case, that is, when either of the two sets is sufficiently large. Improvements to the latter bounds are presented.
We present a simple deterministic gap-preserving reduction from SAT to the minimum distance of code problem over $\mathbb{F}_2$. We also show how to extend the reduction to work over any fixed finite field. Previously, a randomized reduction was known due to Dumer, Micciancio, and Sudan, which was recently derandomized by Cheng and Wan. These reductions rely on highly nontrivial coding theoretic constructions, whereas our reduction is elementary. As an additional feature, our reduction gives hardness within a constant factor even for asymptotically good codes, i.e., codes with constant positive rate and relative distance. Previously, it was not known how to achieve a deterministic reduction for such codes.
It is a well-known fact that feedback does not increase the capacity of point-to-point memoryless channels; however, its effect in secure communications is not yet fully understood. In this paper, an achievable scheme for the wiretap channel with generalized feedback is presented. This scheme, which uses the feedback signal to generate a shared secret key between the legitimate users, encrypts the message to be sent at the bit level. New capacity results for a class of channels are provided, as well as some new insights into the secret key agreement problem. Moreover, this scheme recovers previously reported rate regions from the literature, and thus it can be seen as a generalization that unifies several results in the field.
The use of large-scale antenna arrays can bring substantial improvements in energy and/or spectral efficiency to wireless systems due to the greatly improved spatial resolution and array gain. Recent works in the field of massive multiple-input multiple-output (MIMO) show that the user channels decorrelate when the number of antennas at the base stations (BSs) increases, thus strong signal gains are achievable with little interuser interference. Since these results rely on asymptotics, it is important to investigate whether the conventional system models are reasonable in this asymptotic regime. This paper considers a new system model that incorporates general transceiver hardware impairments at both the BSs (equipped with large antenna arrays) and the single-antenna user equipments (UEs). As opposed to the conventional case of ideal hardware, we show that hardware impairments create finite ceilings on the channel estimation accuracy and on the downlink/uplink capacity of each UE. Surprisingly, the capacity is mainly limited by the hardware at the UE, while the impact of impairments in the large-scale arrays vanishes asymptotically and interuser interference (in particular, pilot contamination) becomes negligible. Furthermore, we prove that the huge degrees of freedom offered by massive MIMO can be used to reduce the transmit power and/or to tolerate larger hardware impairments, which allows for the use of inexpensive and energy-efficient antenna elements.
We analyze the stability of standard, buffered, slotted-Aloha systems. Specifically, we consider a set of users, each equipped with an infinite buffer. Packets arrive into user i's buffer according to some stationary ergodic Markovian process of intensity $\lambda_i$. At the beginning of each slot, if user i has packets in its buffer, it attempts to transmit a packet with fixed probability $p_i$ over a shared resource/channel. The transmission is successful only when no other user attempts to use the channel. The stability of such systems has been open since their very first analysis in 1979 by Tsybakov and Mikhailov. In this paper, we propose an approximate stability condition that is provably exact when the number of users grows large. We provide theoretical evidence and numerical experiments to explain why the proposed approximate stability condition is extremely accurate even for systems with a small number of users (even two or three).
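As a quick empirical companion to the stability question above, the sketch below simulates a small buffered slotted-Aloha system and reports the time-averaged total backlog. Bernoulli arrivals (a special case of the stationary ergodic arrivals in the abstract) and the specific rates are simplifying assumptions; a bounded average merely suggests stability, it does not prove it.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, slots = 3, 200_000
lam = np.array([0.08, 0.10, 0.12])   # arrival rates (illustrative; Bernoulli arrivals)
p = np.array([0.30, 0.30, 0.30])     # per-user transmit probabilities
q = np.zeros(n_users, dtype=int)     # buffer occupancies
q_sum = 0.0

for _ in range(slots):
    q += (rng.random(n_users) < lam).astype(int)    # new packet arrivals
    tx = (q > 0) & (rng.random(n_users) < p)        # backlogged users attempt w.p. p_i
    if tx.sum() == 1:                               # success only if exactly one attempt
        q[tx] -= 1
    q_sum += q.sum()

print("time-averaged total backlog:", q_sum / slots)
```

For these rates each user's saturated service rate, $p_i \prod_{j \ne i}(1 - p_j) \approx 0.147$, exceeds its arrival rate, so the simulated queues stay small.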
The performance of bit-interleaved coded modulation (BICM) with (or without) iterative decoding (ID) is significantly influenced by the mapping of bits to the symbol constellation. Our main objective in this paper is to develop a systematic design approach for BICM-ID schemes, ensuring the best possible performance with iterative decoding. Although useful mappings for BICM-ID have been found based on various search strategies, no attempt has been made to systematically enumerate and classify all unique mappers for a given constellation. As the basis for a systematic enumeration and classification, we define the average bit-wise distance spectrum for a mapping from bits to symbols. Different bit-wise distance spectra are derived assuming no prior information or full prior information, respectively. The bit-wise distance spectra determine the corresponding bit-wise error probability and bit-wise mutual information. The latter allows us to use the classification of mappings with unique bit-wise distance spectra to also classify mappings with unique extremal points in the corresponding extrinsic information transfer (EXIT) curves. As an example of our approach, we classify 8PSK mappings into 86 classes of unique mappings according to bit-wise distance spectra. The classification can be used to significantly reduce the complexity of the search for suitable mappers for BICM-ID. For 8PSK and a given encoder, only 86 different mappings need to be investigated. As examples of the systematic design approach, the best 8PSK mappings for minimizing the convergence threshold are found for concatenation with the rate-1/2 $(5,7)_8$ and $(133,171)_8$ convolutional codes, and the rate-1/2 UMTS turbo code with identical constituent convolutional codes $(15/13)_8$.
An interesting practical consideration for decoding of serial or parallel concatenated codes with more than two components is the determination of the lowest complexity component decoder schedule which results in convergence. This correspondence presents an algorithm that finds such an optimal decoder schedule. A technique is also given for combining and projecting a series of three-dimensional extrinsic information transfer (EXIT) functions onto a single two-dimensional EXIT chart. This is a useful technique for visualizing the convergence threshold for multiple concatenated codes and provides a design tool for concatenated codes with more than two components.
We propose a systematic design framework for optimal, low-complexity punctured multiple parallel concatenated codes (MPCCs), based on minimizing the convergence threshold using extrinsic information transfer (EXIT) charts. As the convergence threshold is related to the area between the two EXIT curves, the corresponding optimization problem is equivalent to a curve-fitting problem. The EXIT curves are determined by the respective EXIT functions of the constituents, which can be conveniently shaped through the use of random puncturing and unequal energy allocations across parallel coding streams. The design task is therefore to find the optimal combination of constituents, puncturing ratios, and energy allocation for matching the EXIT curves. A search over all rate-one convolutional codes of memory length four or less is performed, identifying 98 classes of codes with unique EXIT functions out of a total of 310 codes. Low-complexity MPCCs with up to four constituents are found, where the convergence thresholds are observed to be within 0.1 dB or less of the fundamental minimum signal-to-noise ratio (SNR) corresponding to the binary phase-shift keying (BPSK) capacity for code rates 1/3 ≤ R < 7/8. Further allowing for unequal energy allocation, the convergence thresholds for lower code rates are similarly improved.
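Since the convergence threshold in the abstract above is tied to the area between the two EXIT curves, a curve-fitting search reduces to cheap numerical checks like the one sketched below. The two curve shapes are hypothetical placeholders, not EXIT functions of actual constituent codes.

```python
import numpy as np

I_A = np.linspace(0.0, 1.0, 101)             # a-priori mutual information grid
inner = 0.30 + 0.55 * I_A + 0.14 * I_A**2    # hypothetical inner-code EXIT curve
outer_inv = 0.10 + 0.80 * I_A                # hypothetical (inverted) outer-code curve

gap = inner - outer_inv
tunnel_open = bool(np.all(gap > 0))          # iterative decoding converges iff the tunnel stays open
area = float(np.mean(np.clip(gap, 0.0, None)))   # uniform-grid approximation of the area
print("tunnel open:", tunnel_open, "| area between curves:", round(area, 4))
```

Matching the curves then means driving this area toward zero (by adjusting puncturing ratios and energy allocations) while keeping the tunnel open.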
We consider codes over fixed alphabets against worst case symbol deletions. For any fixed $k \ge 2$, we construct a family of codes over an alphabet of size k with positive rate, which allow efficient recovery from a worst case deletion fraction approaching $1 - 2/(k + \sqrt{k})$. In particular, for binary codes, we are able to recover from a fraction of deletions approaching $1/(\sqrt{2}+1) = \sqrt{2} - 1 \approx 0.414$. Previously, even non-constructively, the largest deletion fraction known to be correctable with positive rate was $1 - \Theta(1/\sqrt{k})$, and around 0.17 for the binary case. Our result pins down the largest fraction of correctable deletions for k-ary codes as $1 - \Theta(1/k)$, since $1 - 1/k$ is an upper bound even for the simpler model of erasures where the locations of the missing symbols are known. Closing the gap between $\sqrt{2} - 1$ and 1/2 for the limit of worst case deletions correctable by binary codes remains a tantalizing open question.
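A two-line computation makes the stated fractions tangible, evaluating the construction's correctable deletion fraction $1 - 2/(k + \sqrt{k})$ against the erasure upper bound $1 - 1/k$:

```python
import math

for k in [2, 3, 4, 8, 16, 64]:
    lower = 1 - 2 / (k + math.sqrt(k))   # deletion fraction achieved by the construction
    upper = 1 - 1 / k                    # upper bound (holds already for erasures)
    print(f"k={k:3d}:  achieved {lower:.4f}   erasure bound {upper:.4f}")
```

For k = 2 this prints 0.4142, the $\sqrt{2} - 1$ figure quoted above, against the erasure bound 0.5.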
This paper presents lossless prefix codes optimized with respect to a payoff criterion consisting of a convex combination of maximum codeword length and average codeword length. The optimal codeword lengths obtained are based on a new coding algorithm, which transforms the initial source probability vector into a new probability vector according to a merging rule. The coding algorithm is equivalent to a partition of the source alphabet into disjoint sets on which a new transformed probability vector is defined as a function of the initial source probability vector and a scalar parameter. The payoff criterion considered encompasses a tradeoff between maximum and average codeword length; it is related to a payoff criterion consisting of a convex combination of average codeword length and average of an exponential function of the codeword length, and to an average codeword length payoff criterion subject to a limited length constraint. A special case of the first related payoff is connected to coding problems involving source probability uncertainty and codeword overflow probability, whereas the second related payoff complements limited-length Huffman coding algorithms.
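For orientation, the sketch below computes ordinary Huffman codeword lengths, which correspond to the endpoint of the payoff above where all weight sits on average length (zero weight on maximum length). The paper's merging-rule algorithm, which interpolates toward the max-length criterion, is not reproduced here.

```python
import heapq

def huffman_lengths(p):
    """Codeword lengths of a binary Huffman code for probability vector p
    (optimal when the payoff weight is entirely on average length)."""
    heap = [(pi, [i]) for i, pi in enumerate(p)]
    heapq.heapify(heap)
    lengths = [0] * len(p)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)     # merge the two least probable groups;
        p2, s2 = heapq.heappop(heap)     # every symbol in them gains one bit
        for i in s1 + s2:
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

p = [0.4, 0.3, 0.15, 0.1, 0.05]
L = huffman_lengths(p)
print(L, "avg:", sum(pi * li for pi, li in zip(p, L)), "max:", max(L))
```

Here the average length is 2.05 bits while the maximum is 4; shifting payoff weight toward maximum length would trade the former for the latter.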
This paper discovers fundamental principles of the backoff process that governs the performance of IEEE 802.11. A simplistic principle founded upon regular variation theory is that the backoff time has a truncated Pareto-type tail distribution with an exponent of $(\log \gamma)/\log m$ (where $m$ is the multiplicative factor and $\gamma$ is the collision probability). This reveals that the per-node backoff process is heavy-tailed in the strict sense for $\gamma > 1/m^2$, and paves the way for the following unifying result. The state-of-the-art theory on the superposition of heavy-tailed processes is applied to establish a dichotomy exhibited by the aggregate backoff process, putting emphasis on the importance of the time-scales on which we view the backoff processes. While the aggregation on normal time-scales leads to a Poisson process, on coarse time-scales it is approximated by a new limiting process possessing long-range dependence (LRD). This dichotomy turns out to be instrumental in formulating short-term fairness, extending existing formulas to arbitrary population sizes, and elucidating the absence of LRD in practical situations. A refined wavelet analysis is conducted to strengthen this argument.
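The tail exponent and the heavy-tail condition translate directly into numbers: with the convention $P(B > x) \sim x^{-\alpha}$, the exponent is $\alpha = -\log\gamma/\log m$, and $\gamma > 1/m^2$ is exactly $\alpha < 2$ (infinite variance). The values of $\gamma$ below are arbitrary illustrations.

```python
import math

m = 2                                   # binary exponential backoff doubles the window
for gamma in [0.10, 0.20, 0.40]:        # collision probabilities (illustrative)
    alpha = -math.log(gamma) / math.log(m)   # Pareto tail exponent, P(B > x) ~ x^-alpha
    tag = "heavy-tailed (alpha < 2)" if gamma > 1 / m**2 else "finite variance"
    print(f"gamma={gamma:.2f}: alpha={alpha:.3f}  ->  {tag}")
```

With $m = 2$ the threshold is $\gamma = 1/4$: a collision probability of 0.40 gives $\alpha \approx 1.32$ and hence a strictly heavy tail, while 0.10 gives $\alpha \approx 3.32$.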
Performance evaluation of the 802.11 MAC protocol is classically based on the decoupling assumption, which hypothesizes that the backoff processes at different nodes are independent. This decoupling assumption results from mean field convergence and is generally true in the transient regime in the asymptotic sense (when the number of wireless nodes tends to infinity), but, contrary to widespread belief, may not necessarily hold in the stationary regime. The issue is often related to the existence and uniqueness of a solution to a fixed point equation; however, it was also recently shown that this condition is not sufficient; in contrast, a sufficient condition is a global stability property of the associated ordinary differential equation. In this paper, we give a simple condition that establishes the asymptotic validity of the decoupling assumption for the homogeneous case (all nodes have the same parameters). We also discuss the heterogeneous and the differentiated service cases and formulate a new ordinary differential equation. We show that the uniqueness of a solution to the associated fixed point equation is not sufficient; we exhibit one case where the fixed point equation has a unique solution but the decoupling assumption is not valid in the asymptotic sense in the stationary regime.
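The fixed-point equation in question can be made concrete for the homogeneous case. The damped iteration below couples the standard collision equation $\gamma = 1 - (1 - \tau)^{n-1}$ with Bianchi's well-known expression for the attempt probability $\tau(\gamma)$, used here purely as an illustrative stand-in; the paper's sufficient condition concerns global stability of the associated ODE, which this iteration does not verify.

```python
def tau(p, W=32, m=5):
    """Bianchi's attempt probability for collision probability p
    (CW_min = W, m backoff stages)."""
    return 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))

n = 10        # number of saturated nodes
p = 0.3       # initial guess for the collision probability
for _ in range(200):
    p = 0.5 * p + 0.5 * (1 - (1 - tau(p)) ** (n - 1))   # damped fixed-point iteration

print(f"fixed point: collision prob {p:.4f}, attempt prob {tau(p):.4f}")
```

Finding such a fixed point is exactly the step that, per the abstract, does not by itself validate the decoupling assumption.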
In this paper, we consider an automatic-repeat-request (ARQ) retransmission protocol signaling over a block-fading multiple-input-multiple-output (MIMO) channel. Unlike previous work, we allow for multiple fading blocks within each transmission (ARQ round), and we constrain the transmitter to fixed rate codes constructed over complex signal constellations. In particular, we examine the general case of average input-power-constrained constellations with a fixed signaling alphabet of finite cardinality. This scenario is a suitable model for practical wireless communications systems employing orthogonal frequency division multiplexing (OFDM) techniques over a MIMO ARQ channel. Two cases of fading dynamics are considered, namely, short-term static fading, where channel fading gains change randomly for each ARQ round, and long-term static fading, where channel fading gains remain constant over all ARQ rounds pertaining to a given message. As our main result, we prove that for the block-fading MIMO ARQ channel with a fixed signaling alphabet satisfying a short-term power constraint, the optimal signal-to-noise ratio (SNR) exponent is given by a modified Singleton bound, relating all the system parameters. To demonstrate the practical significance of the theoretical analysis, we present numerical results showing that practical Singleton-bound-achieving maximum distance separable codes achieve the optimal SNR exponent.
This paper studies and derives new results for the interference channel with a relay (ICR). Three inner bounds for the discrete memoryless ICR are proposed, based on three coding strategies that employ layered coding at the relay. The first scheme is inspired by layered noisy network coding, proposed by Lim et al. for the two-way relay channel; the second and the third schemes rely on simpler encoding and decoding processes, dubbed layered quantize-forward. The performance of the proposed schemes is investigated for two classes of channels with Gaussian noise: the interference channel with in-band relay reception/out-of-band relay transmission and the interference channel with in-band relay reception/in-band relay transmission. For the former class of channels, it is shown that the first proposed scheme achieves the same inner bound as the generalized hash-forward scheme with incremental binning. In addition, the inner bound is within 0.5 bit of the capacity region under certain conditions on the channel parameters. For the latter class of channels, new upper bounds on the sum-rate are established by extending known upper bounds for symmetric channels. The first inner bound is shown to be within 0.5 bit of the capacity region if the relay's power exceeds a certain threshold, which depends on the channel parameters. Numerical examples show that the proposed schemes can achieve significantly higher sum-rates than other compress-forward schemes. Analysis also reveals a tradeoff between achievable rates, coding delay, and complexity of the proposed schemes. The results in this paper provide a better understanding of coding for the ICR; in particular, they show that layered coding is a beneficial element in multiuser networks with relays.
The framework of network equivalence theory developed by Koetter et al. introduces a notion of channel emulation to construct noiseless networks as upper (respectively, lower) bounding models, which can be used to calculate the outer (respectively, inner) bounds for the capacity region of the original noisy network. Based on the network equivalence framework, this paper presents scalable upper and lower bounding models for wireless networks with potentially many nodes. A channel decoupling method is proposed to decompose wireless networks into decoupled multiple-access channels and broadcast channels. The upper bounding model, consisting of only point-to-point bit pipes, is constructed by first extending the one-shot upper bounding models developed by Calmon et al. and then integrating them with network equivalence tools. The lower bounding model, consisting of both point-to-point and point-to-points bit pipes, is constructed based on a two-step update of the lower bounding models to incorporate the broadcast nature of wireless transmission. The main advantages of the proposed methods are their simplicity and the fact that they can be extended easily to large networks with a complexity that grows linearly with the number of nodes. It is demonstrated that the resulting upper and lower bounds can approach the capacity in some setups.
A method for constructing n-stage Galois NLFSRs with period $2^n - 1$ from n-stage maximum length LFSRs is presented. Nonlinearity is introduced into state cycles by adding a nonlinear Boolean function to the feedback polynomial of the LFSR. Each assignment of variables for which this function evaluates to 1 acts as a crossing point for the LFSR state cycle. The effect of nonlinearity is cancelled and state cycles are joined back by adding a copy of the same function to a later stage of the register. The presented method requires no extra time steps and has a smaller area overhead compared to previous approaches based on cross-join pairs. It is feasible for large n.
Conventional nonlinear feedback shift registers (NLFSRs) use the Fibonacci configuration in which the feedback is applied to the last bit only. In this paper, we show how to transform a Fibonacci NLFSR into an equivalent NLFSR in the Galois configuration, in which the feedback can be applied to every bit. Such a transformation can potentially reduce the depth of the circuits implementing feedback functions, thus decreasing the propagation time and increasing the throughput.
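The two configurations are easy to compare in software. The sketch below steps a 4-stage maximum-length LFSR (feedback polynomial $x^4 + x + 1$) in both Fibonacci and Galois form and confirms that each state cycle has period $2^n - 1 = 15$. This linear example only illustrates the two configurations; the transformations in these papers concern the nonlinear case.

```python
def fibonacci_step(state, n=4, taps=(0, 1)):
    """Fibonacci form: feedback = XOR of tapped bits, shifted in at stage n-1
    (taps (0, 1) realize s[t+4] = s[t] ^ s[t+1], i.e. x^4 + x + 1)."""
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return (state >> 1) | (fb << (n - 1))

def galois_step(state, n=4, poly=0b0011):
    """Galois form: shift left; if the bit falling out is 1, XOR the reduced
    polynomial (x^4 + x + 1 mod x^4 -> 0b0011) into the state."""
    msb = (state >> (n - 1)) & 1
    state = (state << 1) & ((1 << n) - 1)
    return state ^ poly if msb else state

for step in (fibonacci_step, galois_step):
    state0 = state = 1
    period = 0
    while True:
        state = step(state)
        period += 1
        if state == state0:
            break
    print(step.__name__, "period:", period)   # both print 15 = 2^4 - 1
```

As the following abstract explains, the two registers generate the same output sequence only after the initial states are mapped appropriately.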
The Fibonacci and the Galois configurations of nonlinear feedback shift registers (NLFSRs) are considered. In the former, the feedback is applied to the input bit of the shift register only. In the latter, the feedback can potentially be applied to every bit. The sufficient conditions for equivalence of NLFSRs in the Fibonacci and the Galois configurations have been formulated previously. The equivalent NLFSRs in different configurations normally have to be initialized to different states to generate the same output sequences. The mapping between the initial states of two equivalent NLFSRs in the Fibonacci and the Galois configurations is derived in this paper.
The problem of constructing a binary machine with the minimum number of stages generating a given binary sequence is addressed. Binary machines are a generalization of nonlinear feedback shift registers (NLFSRs) in which both connections, feedback and feedforward, are allowed and no chain connection between the register stages is required. An algorithm for constructing a shortest binary machine generating a given periodic binary sequence is presented.
Previous work on cooperative communications has concentrated primarily on the diversity benefits of such techniques. This paper, instead, considers the multiplexing benefits of cooperative communications. First, a new interpretation of the fundamental tradeoff between transmission rate and outage probability in multiple-antenna relay networks is given. It follows that multiplexing gains can be obtained at any finite signal-to-noise ratio (SNR) in full-duplex multiple-antenna relay networks. Thus, relaying can offer not only stronger link reliability, but also higher spectral efficiency.
Specifically, the decode-and-forward protocol is applied and networks that have one source, one destination, and multiple relays are considered. A receive power gain at the relays, which captures the network large-scale fading characteristics, is also considered. It is shown that this power gain can significantly affect the system diversity-multiplexing tradeoff for any finite SNR value. Several relaying protocols are proposed and are shown to offer nearly the same outage probability as if the transmit antennas at the source and the relay(s) were colocated, given certain SNR and receive power gains at the relays. Thus, a higher multiplexing gain than that of the direct link can be obtained if the destination has more antennas than the source.
Much of the analysis in the paper is valid for arbitrary channel fading statistics. These results point to a view of relay networks as a means for providing higher spectral efficiency, rather than only link reliability.
The notion of a 1-vertex transfer matrix for multidimensional codes is introduced. It is shown that the capacity of such codes, or the topological entropy, can be expressed as the limit of the logarithm of spectral radii of 1-vertex transfer matrices. Storage and computations using the 1-vertex transfer matrix are much smaller than those needed for the standard transfer matrix. The method is applied to estimate the first 15 digits of the entropy of the 2-D (0, 1) run length limited channel. A large-scale computation of eigenvalues for the (0, 1) run length limited channel in 2-D and 3-D has been carried out, in order to compare the computational cost of the new method with that of the standard transfer matrix and to have rigorous bounds against which to compare the estimates. This in turn leads to improvements on the best previous lower and upper bounds for these channels.
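The standard transfer-matrix baseline that the 1-vertex method improves on can be sketched in a few lines: enumerate the valid rows of a width-w strip for the 2-D (0, 1) run length limited ("no two adjacent 1s", i.e. hard square) constraint, build the strip-to-strip transfer matrix, and estimate the entropy from ratios of spectral radii. The widths are illustrative, and the quoted reference value ($\approx 0.587891$ bits/symbol) is the well-known hard-square entropy constant; the paper's method reaches far larger effective sizes.

```python
import numpy as np

def log2_spectral_radius(w):
    """log2 of the spectral radius of the width-w strip transfer matrix
    for the 2-D (0,1)-RLL constraint."""
    rows = [r for r in range(1 << w) if r & (r >> 1) == 0]     # no adjacent 1s in a row
    T = np.array([[float((a & b) == 0) for b in rows] for a in rows])  # no vertical neighbors
    return np.log2(max(abs(np.linalg.eigvals(T))))

for w in (6, 8, 10):
    est = log2_spectral_radius(w + 1) - log2_spectral_radius(w)
    print(f"strip width {w}: entropy estimate {est:.6f}")
# the estimates approach the known value ~0.587891 bits/symbol
```

The matrix dimension grows like a Fibonacci number in w, which is precisely the storage blow-up that motivates the 1-vertex formulation.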
The sparsity and compressibility of finite-dimensional signals are of great interest in fields such as compressed sensing. The notion of compressibility is also extended to infinite sequences of independent identically distributed or ergodic random variables based on the observed error in their nonlinear k-term approximation. In this paper, we use the entropy measure to study the compressibility of continuous-domain innovation processes (alternatively known as white noise). Specifically, we define such a measure as the entropy limit of the doubly quantized (time and amplitude) process. This provides a tool to compare the compressibility of various innovation processes. It also allows us to identify an analogue of the concept of "entropy dimension," which was originally defined by Rényi for random variables. Particular attention is given to stable and impulsive Poisson innovation processes. Here, our results recognize Poisson innovations as the more compressible ones, with an entropy measure far below that of stable innovations. While this result departs from previous knowledge regarding the compressibility of impulsive Poisson laws compared with continuous fat-tailed distributions, our entropy measure ranks $\alpha$-stable innovations according to their tail.
In some applications, the variance of additive measurement noise depends on the signal that we aim to measure. For instance, additive signal-dependent Gaussian noise (ASDGN) channel models are used in molecular and optical communication. Herein, we provide lower and upper bounds on the capacity of additive signal-dependent noise (ASDN) channels. The first lower bound is based on an extension of majorization inequalities, and the second lower bound utilizes the properties of the differential entropy. The lower bounds are valid for arbitrary ASDN channels. The upper bound is based on a previous idea of the authors ("symmetric relative entropy") and is applied to the ASDGN channels. These bounds indicate that in the ASDN channels (unlike the classical additive white Gaussian noise channels), the capacity does not necessarily become larger by reducing the noise variance function. We also provide sufficient conditions under which the capacity becomes infinite. This is complemented by some conditions implying that the capacity is finite, and a unique capacity-achieving measure exists (in the sense of the output measure).
Data transmission through frequency-selective block fading channels is considered in the case where neither the transmitter nor the receiver has any knowledge of the channel coefficients. Standard code design approaches for this scenario take channel uncertainty at the receiver into account by splitting the available channel coherence time into a part dedicated to training symbols utilized for channel estimation and a second part using an error-control coding scheme that is designed without channel uncertainty in mind. In contrast, in this correspondence joint codes are designed that are optimized for communication over the unknown channel and operate over the full coherence time. Using an approximation of the union bound on codeword error probability as design criterion, codes based on general complex-valued symbols are obtained with a gradient search optimization technique. Numerical examples for both single antenna as well as multiple-antenna systems illustrate that significant improvement over training-based schemes can be obtained.
The design of signal constellations for a communication system using multiple transmitter antennas over a Rayleigh-fading channel is considered under the assumption that no channel state information (CSI) is available at the transmitter and the receiver has acquired a CSI estimate with known error covariance. This setup encompasses the well-studied scenarios of perfect and no channel knowledge at the receiver and allows a smooth transition between these two cases. The data detection performance as a function of the CSI error covariance is analyzed and used to investigate the design of training blocks if such blocks are transmitted to provide CSI to the receiver. Moreover, two approaches to design constellations adapted to the error covariance of the receiver CSI are presented. Whereas the first approach is based on a generic gradient search method, the second approach uses an appropriate combination of constellations designed for perfect and no CSI at the receiver. Simulations confirm the benefits of the presented designs.
This paper studies the asymptotic performance of multi-hop amplify-and-forward relay multiple-antenna communication channels. Each multi-antenna relay terminal in the considered network amplifies the received signal, sent by a source, and retransmits it upstream toward a destination. Achievable ergodic rates of the relay channel with both jointly optimal detection and decoding and practical separate-decoding receiver architectures for arbitrary signaling schemes, along with average bit error rates for various types of detectors, are derived in the regime where the number of antennas at each terminal grows large without bound. To overcome the difficulty of averaging over channel realizations, we apply a large-system analysis based on the replica method from statistical physics. The validity of the large-system analysis is further verified through Monte Carlo simulations of realistic finite-sized systems.
Informally, an error-correcting code has nice list-decodability properties if every Hamming ball of large radius has a small number of codewords in it. Here, we report linear codes with nontrivial list-decodability: i.e., codes of large rate that are nicely list-decodable, and codes of large distance that are not nicely list-decodable. Specifically, on the positive side, we show that there exist codes of rate $R$ and block length $n$ that have at most $c$ codewords in every Hamming ball of radius $H^{-1}(1 - R - 1/c) \cdot n$. This answers the main open question from the work of Elias. This result also has consequences for the construction of concatenated codes of good rate that are list decodable from a large fraction of errors, improving previous results of Guruswami and Sudan in this vein. Specifically, for every $\epsilon > 0$, we present a polynomial time constructible asymptotically good family of binary codes of rate $\Omega(\epsilon^4)$ that can be list-decoded in polynomial time from up to a fraction $(1/2 - \epsilon)$ of errors, using lists of size $O(\epsilon^{-2})$. On the negative side, we show that for every $\delta$ and $c$, there exists $\tau < \delta$, $c_1 > 0$, and an infinite family of linear codes $\{C_i\}_i$ such that if $n_i$ denotes the block length of $C_i$, then $C_i$ has minimum distance at least $\delta \cdot n_i$ and contains more than $c_1 \cdot n_i^c$ codewords in some Hamming ball of radius $\tau \cdot n_i$. While this result is still far from known bounds on the list-decodability of linear codes, it is the first to bound the radius for list-decodability by a polynomial-sized list away from the minimum distance of the code.
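The radius bound in the positive result is easy to evaluate numerically; the sketch below inverts the binary entropy function by bisection (the rate R and list size c are illustrative values).

```python
import math

def H(x):
    """Binary entropy in bits."""
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def H_inv(y, tol=1e-12):
    """Inverse of H on [0, 1/2], by bisection (H is increasing there)."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if H(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

R, c = 0.5, 10    # illustrative rate and list size
print("relative radius:", H_inv(1 - R - 1/c))   # the fraction of n in the theorem above
```

For R = 0.5 and c = 10 the guaranteed relative radius is about 0.08, illustrating how the bound trades rate and list size against decoding radius.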
The list-decodability of random linear codes is shown to be as good as that of general random codes. Specifically, for every fixed finite field $\mathbb{F}_q$, $p \in (0, 1 - 1/q)$, and $\epsilon > 0$, it is proved that with high probability a random linear code $C$ in $\mathbb{F}_q^n$ of rate $1 - H_q(p) - \epsilon$ can be list decoded from a fraction $p$ of errors with lists of size at most $O(1/\epsilon)$. This also answers a basic open question concerning the existence of highly list-decodable linear codes, showing that a list size of $O(1/\epsilon)$ suffices to have rate within $\epsilon$ of the information-theoretically optimal rate of $1 - H_q(p)$. The best previously known list-size bound was $q^{O(1/\epsilon)}$ (except in the $q = 2$ case, where a list size of $O(1/\epsilon)$ was known). The main technical ingredient in the proof is a strong upper bound on the probability that $\ell$ random vectors chosen from a Hamming ball centered at the origin have too many (more than $\Omega(\ell)$) vectors from their linear span also lying in the ball.
Peaky and non-peaky signaling schemes have long been considered species apart in non-coherent wideband fading channels, as the first approaches asymptotically the linear-in-power capacity of a wideband AWGN channel with the same SNR, whereas the second reaches a nearly power-limited peak rate at some finite critical bandwidth and then falls to zero as bandwidth grows to infinity. In this paper, it is shown that this distinction is in fact an artifact of the limited attention paid in the past to the product between the bandwidth and the fraction of time it is in use. This fundamental quantity, termed bandwidth occupancy, measures average bandwidth usage over time. As it turns out, a peaky signal that transmits in an infinite bandwidth but only for an infinitesimal fraction of the time may have only a small bandwidth occupancy, and so does a non-peaky scheme that limits itself to the critical bandwidth even though more spectrum is available, so as not to degrade rate. The two types of signaling in the literature are harmonized to show that, for any type of signal, there is a fundamental limit: a critical bandwidth occupancy. All signaling schemes with the same bandwidth occupancy approach the linear-in-power capacity of wideband AWGN channels with the same asymptotic behavior as the bandwidth occupancy approaches its critical value. For a bandwidth occupancy above the critical value, the rate decreases to zero as the occupancy goes to infinity. This unified analysis not only recovers previous results on capacity bounds for peaky and non-peaky signaling schemes, but also reveals the fundamental tradeoff between accuracy and convergence when characterizing the maximal achievable rate.
This paper identifies the first general, explicit, and nonrandom MIMO encoder-decoder structures that guarantee optimality with respect to the diversity-multiplexing tradeoff (DMT), without employing a computationally expensive maximum-likelihood (ML) receiver. Specifically, the work establishes the DMT optimality of a class of regularized lattice decoders, and more importantly the DMT optimality of their lattice-reduction (LR)-aided linear counterparts. The results hold for all channel statistics, for all channel dimensions, and most interestingly, irrespective of the particular lattice code applied. As a special case, it is established that the LLL-based LR-aided linear implementation of the MMSE-GDFE lattice decoder facilitates DMT optimal decoding of any lattice code at a worst-case complexity that grows at most linearly in the data rate. This represents a fundamental reduction in decoding complexity compared to ML decoding, whose complexity is generally exponential in the rate. The generality of these results makes them applicable to a plethora of pertinent communication scenarios such as quasi-static MIMO, MIMO-OFDM, ISI, cooperative-relaying, and MIMO-ARQ channels, in all of which the DMT optimality of the LR-aided linear decoder is guaranteed. The adopted approach yields insight into, and motivates further study of, joint transceiver designs with an improved SNR gap to ML decoding.
In the setting of quasi-static multiple-input multiple-output channels, we consider the high signal-to-noise ratio (SNR) asymptotic complexity required by the sphere decoding (SD) algorithm for decoding a large class of full-rate linear space-time codes. With SD complexity having random fluctuations induced by the random channel, noise, and codeword realizations, the introduced SD complexity exponent manages to concisely describe the computational reserves required by the SD algorithm to achieve arbitrarily close to optimal decoding performance. Bounds and exact expressions for the SD complexity exponent are obtained for the decoding of large families of codes with arbitrary performance characteristics. For the particular example of decoding the recently introduced threaded cyclic-division-algebra-based codes (the only currently known explicit designs that are uniformly optimal with respect to the diversity-multiplexing tradeoff), the SD complexity exponent is shown to take a particularly concise form as a non-monotonic function of the multiplexing gain. To date, the SD complexity exponent also describes the minimum known complexity of any decoder that can provably achieve a gap to maximum likelihood performance that vanishes in the high SNR limit.
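For reference, the algorithm whose complexity fluctuations are characterized above can be stated compactly. Below is a minimal depth-first sphere decoder for a real-valued model y = Hx + n over an assumed BPSK alphabet; the QR factorization and the pruning rule are the standard ingredients, while radius scheduling, Schnorr-Euchner ordering, and complex lattices are omitted.

```python
import numpy as np

def sphere_decode(y, H, symbols=(-1.0, 1.0)):
    """Depth-first sphere decoder for y = H x + n over a finite alphabet.
    Returns the ML solution; pruning makes the visited-node count random,
    which is what the complexity exponent quantifies."""
    Q, R = np.linalg.qr(H)            # ||y - Hx||^2 = ||Q^T y - R x||^2 for square H
    z = Q.T @ y
    n = H.shape[1]
    best = [np.inf, None]

    def search(level, partial, d2):
        if d2 >= best[0]:
            return                    # prune: already outside the current sphere
        if level < 0:
            best[0], best[1] = d2, partial.copy()
            return
        for s in symbols:
            partial[level] = s
            r = z[level] - R[level, level:] @ partial[level:]
            search(level - 1, partial, d2 + r * r)

    search(n - 1, np.zeros(n), 0.0)
    return best[1]

rng = np.random.default_rng(2)
n = 4
H = rng.standard_normal((n, n))
x = rng.choice([-1.0, 1.0], n)
y = H @ x + 0.1 * rng.standard_normal(n)
print("sent:", x, "decoded:", sphere_decode(y, H))
```

At high SNR the pruning is aggressive and few nodes are visited; in deep fades the search degenerates toward full enumeration, which is the random fluctuation the exponent captures.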
Zhang et al. recently derived upper and lower bounds on the achievable diversity of an $N_R \times N_T$ i.i.d. Rayleigh fading multiple antenna system using transmit antenna selection, spatial multiplexing, and a linear receiver structure. For the case of $L = 2$ transmitting (out of $N_T$ available) antennas, the bounds are tight and therefore specify the maximal diversity order. For the general case with $L$
In this paper, we consider the detection of binary (antipodal) signals transmitted in a spatially multiplexed fashion over a fading multiple-input-multiple-output (MIMO) channel, where the detection is done by means of semidefinite relaxation (SDR). The SDR detector is an attractive alternative to maximum-likelihood (ML) detection since its complexity is polynomial rather than exponential. Assuming that the channel matrix is drawn with independent identically distributed (i.i.d.) real-valued Gaussian entries, we study the receiver diversity and prove that the SDR detector achieves the maximum possible diversity. Thus, the error probability of the receiver tends to zero at the same rate as that of the optimal ML receiver in the high signal-to-noise ratio (SNR) limit. This significantly strengthens previous performance guarantees available for the semidefinite relaxation detector. Additionally, it proves that full diversity detection is also possible in certain scenarios when using a noncombinatorial receiver structure.
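A compact rendition of the SDR detector for the real-valued BPSK model is given below, using CVXPY (an assumed dependency) to solve the semidefinite program. The homogenizing lift and the simple sign-based rounding are the textbook form of the relaxation; problem sizes and noise level are illustrative.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
m, n = 6, 4
H = rng.standard_normal((m, n))
x = rng.choice([-1.0, 1.0], n)
y = H @ x + 0.3 * rng.standard_normal(m)

# Lift: for v = [x; t] with t = +-1, v^T L v = ||H x - t y||^2, so relax
# X = v v^T to any PSD matrix with unit diagonal (dropping rank(X) = 1).
L = np.block([[H.T @ H, -(H.T @ y)[:, None]],
              [-(H.T @ y)[None, :], np.array([[y @ y]])]])
X = cp.Variable((n + 1, n + 1), symmetric=True)
prob = cp.Problem(cp.Minimize(cp.trace(L @ X)),
                  [X >> 0, cp.diag(X) == 1])
prob.solve()

x_hat = np.sign(X.value[:n, n])   # rank-one rounding via the last column (X[n, n] = 1)
print("sent:", x, "SDR estimate:", x_hat)
```

The SDP has polynomially many variables and constraints, which is the complexity advantage over ML enumeration that the diversity result above makes rigorous.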
This work considers how the presence of quantized channel information obtained from a feedback link may be utilized for determining a transmit weighting matrix that improves the performance of a predetermined orthogonal space-time block (OSTB) code. To reduce the effects of feedback delay, quantization errors and feedback channel bit errors, methods based on vector quantization for noisy channels are used in the design of the feedback link. The resulting transmission scheme and feedback link take the imperfect nature of the channel information into account while combining the benefits of conventional beamforming with those provided by OSTB coding.
Multiple transmit and receive antennas can be used in wireless systems to achieve high data rate communication. Recently, efficient space-time codes have been developed that utilize a large portion of the available capacity. These codes are designed under the assumption that the transmitter has no knowledge about the channel. In this work, on the other hand, we consider the case when the transmitter has partial, but not perfect, knowledge about the channel and how to improve a predetermined code so that this fact is taken into account. A performance criterion is derived for a frequency-nonselective fading channel and then utilized to optimize a linear transformation of the predetermined code. The resulting optimization problem turns out to be convex and can thus be efficiently solved using standard methods. In addition, a particularly efficient solution method is developed for the special case of independently fading channel coefficients. The proposed transmission scheme combines the benefits of conventional beamforming with those given by orthogonal space-time block coding. Simulation results for a narrow-band system with multiple transmit antennas and one or more receive antennas demonstrate significant gains over conventional methods, in a scenario with nonperfect channel knowledge.
This paper considers the problem of communicating over a relay channel with state when noncausal state information is partially available at the nodes. We first establish a lower bound on the achievable rates based on noisy network coding and Gelfand-Pinsker coding, and show that it provides an alternative characterization of a previously known bound. We then introduce the class of state-decoupled relay channels and show that our lower bound is tight for a subclass of semideterministic channels. We also compute the capacity for two specific examples of this subclass: a channel with multiplicative binary fading and a channel with additive Gaussian interference. These examples are not special cases of previous classes of semideterministic relay channels with known capacity.
The Gaussian, three-node relay channel with orthogonal receive components (i.e., the transmitted signals from the source and the relay do not interfere with each other) is investigated. For such channels, linear relaying is a suboptimal strategy in general. This is because a linear scheme merely repeats the received noisy signal and does not utilize the available degrees of freedom efficiently. Against this background, nonlinear, symbol-wise (as opposed to block-wise) relaying strategies are developed to compensate for the shortcomings of the linear strategy. Optimal strategies are presented for two special cases of the general scenario, and it is shown that memoryless relaying can achieve the capacity. Furthermore, for the general Gaussian relay channel, a parametric piecewise linear (PL) mapping is proposed and analyzed. The achievable rates obtained by the PL mapping are computed numerically and optimized over a certain number of design parameters. It is concluded that optimized PL relaying always outperforms conventional instantaneous linear relaying (amplify-and-forward). It is also illustrated that the proposed PL relaying scheme can improve on sophisticated block Markov encoding (i.e., decode-and-forward) when the source-relay link is ill-conditioned (relative to the other links). Furthermore, PL relaying can work at rates close to those achieved by side-information encoding (i.e., compress-and-forward), but at a much lower complexity.
A multiantenna channel with partial channel state information at the transmitter (CSIT) is studied. The partial CSIT takes the form of the channel matrix corrupted by additive white Gaussian noise (AWGN) with a variance that is assumed to decay as a power of the signal-to-noise ratio (SNR). It is shown that under a long-term power constraint and in the regime of asymptotically high SNR, a large diversity gain over the channel can be achieved by rarely using a high transmit power that compensates for bad channel realizations. Examples relating the diversity gain of the system to the channel Doppler bandwidth are discussed.
The problem of resource allocation to maximize the outage exponent over a fading relay channel using the decode-and-forward protocol with quantized channel state feedback (CSF) is studied. Three different scenarios are considered: relay-to-source, destination-to-relay, and destination-to-source-and-relay CSF. In the relay-to-source CSF scenario, it is found that using merely one bit of CSF to control the source transmit power is sufficient to achieve the multiantenna upper bound in a range of multiplexing gains. In the destination-to-relay CSF scenario, the systems slightly outperform dynamic decode-and-forward (DDF) at high multiplexing gains, even with only one bit of feedback. Finally, in the destination-to-source-and-relay CSF scenario, if the source-relay channel gain is unknown to the feedback quantizer at the destination, the diversity gain only grows linearly in the number of feedback levels, in sharp contrast to an exponential growth for multiantenna channels. In this last scenario, a simple scheme is shown to perform close to the corresponding upper bound.
The optimality of nondynamic DF relaying over a general three-node Nakagami fading channel in terms of the diversity-multiplexing tradeoff (DMT) is established. In particular, when the fading of the relay-source link is less severe than that of the relay-destination link, both orthogonal and nonorthogonal DF protocols are shown to be DMT-optimal over certain ranges of the multiplexing gains. It is also shown that when the multiplexing gains are sufficiently high, the DMT of the DF protocols is completely independent of the statistical properties of the relay-destination link.
The compress-and-forward (CF) strategy achieves the optimal diversity-multiplexing tradeoff (DMT) of a three-node half-duplex relay network in slow fading, under the assumption that the relay has perfect knowledge of all three channel coefficients and that the relay makes use of Wyner-Ziv (WZ) source coding with side information. This paper studies the achievable DMT of the same network when the relay is constrained to make use of standard (non-WZ) source coding. Under a short-term power constraint at the relay, using source coding without side information results in a significant loss in terms of the DMT. For multiplexing gains $r \le 2/3$, this loss can be fully compensated for by using power control at the relay. On the contrary, for $r \in (2/3, 1)$, the loss with respect to WZ coding remains significant.
The diversity-multiplexing (D-M) tradeoff in a multiantenna channel with optimized resolution-constrained channel state feedback is characterized. The concept of minimum guaranteed multiplexing gain in the forward link is introduced and shown to significantly influence the optimal D-M tradeoff. It is demonstrated that power control based on the feedback is instrumental in achieving the D-M tradeoff, and that rate adaptation is important in obtaining a high diversity gain even at high rates. A criterion for determining whether finite-length codes are tradeoff optimal is presented, leading to a useful geometric characterization of the class of extended approximately universal codes. With codes from this class, the optimal D-M tradeoff is achievable by the combination of a feedback-dependent power controller and a single codebook for single-rate transmission, or two codebooks for adaptive-rate transmission. Finally, lower bounds to the optimal D-M tradeoff based on Gaussian coding arguments are also studied. In contrast to the no-feedback case, these random coding bounds are only asymptotically tight, but can quickly approach the optimal tradeoff even with moderate codeword lengths.
We consider secure multi-terminal source coding problems in the presence of a public helper. Two main scenarios are studied: 1) source coding with a helper where the coded side information from the helper is eavesdropped by an external eavesdropper and 2) triangular source coding with a helper where the helper is considered as a public terminal. We are interested in how the helper can support the source transmission subject to a constraint on the amount of information leaked due to its public nature. We characterize the tradeoff among transmission rate, incurred distortion, and information leakage rate at the helper/eavesdropper in the form of the rate-distortion-leakage region for various classes of problems.
Two classes of source/channel coding problems, namely, coding with action-dependent side information and coding with additional signal reconstruction are considered in a unified fashion. In the source coding setting, a decoder wishes to reconstruct the source subject to a distortion constraint, while an encoder is required to estimate the decoder's reconstruction reliably. Side information is action-dependent in the sense that its quality and/or availability at the encoder or decoder can be influenced by a cost-constrained action sequence. In the channel coding dual, the decoder wishes to decode both the message and the channel input sequence reliably, and the channel state information available at the encoder or decoder is assumed to depend on the action sequence. We consider discrete memoryless systems and characterize single letter expressions for the rate-distortion-cost function and channel capacity for the respective source and channel coding problems.
We consider the problems of secure lossy source coding with side information in the presence of a passive eavesdropper who has access to the source description. The encoder wishes to compress the source sequence in order to satisfy a distortion criterion at the decoder, while revealing only limited knowledge about the source to the eavesdropper. The side information available to the encoder, the legitimate decoder, or the eavesdropper can be influenced by a cost-constrained action sequence. Three different settings are studied. In the first two settings, we are interested in understanding the influence of the action sequence on the rate-distortion-leakage tradeoff, where the action is taken either by the decoder or by the encoder to influence the side information at the decoder and eavesdropper. Next, we consider a setting where common action-dependent side information is available securely to both encoder and decoder, and thus can be used for secret key generation. We characterize the optimal rate-distortion-cost-leakage region, or the corresponding inner bounds, for a discrete memoryless source in the above settings. The results are useful in characterizing fundamental limits, for example, in secure sensor networking and future cyber-physical systems.
We introduce a Kullback-Leibler-type distance between spectral density functions of stationary stochastic processes and solve the problem of optimal approximation of a given spectral density $\Psi$ by one that is consistent with prescribed second-order statistics. In general, such statistics are expressed as the state covariance of a linear filter driven by a stochastic process whose spectral density is sought. In this context, we show i) that there is a unique spectral density $\Phi$ which minimizes this Kullback-Leibler distance, ii) that this optimal approximant is of the form $\Psi/Q$, where the "correction term" $Q$ is a rational spectral density function, and iii) that the coefficients of $Q$ can be obtained numerically by solving a suitable convex optimization problem. In the special case where $\Psi = 1$, the convex functional becomes quadratic and the solution is then specified by linear equations.
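In symbols, and restating items i)-iii) in assumed but standard notation (with $G$ the transfer function of the filter and $\Sigma$ the prescribed state covariance), the problem solved is

```latex
\hat{\Phi} \;=\; \arg\min_{\Phi > 0}\;
  \int_{-\pi}^{\pi} \Psi(e^{i\theta})\,
  \log\frac{\Psi(e^{i\theta})}{\Phi(e^{i\theta})}\,\frac{d\theta}{2\pi}
\quad\text{subject to}\quad
  \int_{-\pi}^{\pi} G(e^{i\theta})\,\Phi(e^{i\theta})\,G(e^{i\theta})^{*}\,
  \frac{d\theta}{2\pi} \;=\; \Sigma ,
```

and the cited result is that the minimizer exists, is unique, and takes the rational-correction form $\hat{\Phi} = \Psi/Q$, with the coefficients of $Q$ computable by convex optimization.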
We investigate large wireless networks subject to security constraints. In contrast to the point-to-point, interference-limited communications considered in prior works, we propose active cooperative relaying-based schemes. We consider a network with $n_l$ legitimate nodes, $n_e$ eavesdroppers, and path loss exponent $\alpha \ge 2$. As long as $n_e^2 (\log n_e)^{\gamma} = o(n_l)$, for some positive $\gamma$, we show that one can obtain an unbounded secure aggregate rate. This means zero-cost secure communication, given a fixed total power constraint for the entire network. We achieve this result through: 1) the source using Wyner's randomized encoder and a serial (multi-stage) block Markov scheme, to cooperate with the relays, and 2) the relays acting as a virtual multi-antenna to apply beamforming against the eavesdroppers. Our simpler parallel (two-stage) relaying scheme can achieve the same unbounded secure aggregate rate when $n_e^{\alpha/2+1} (\log n_e)^{\gamma + \delta(\alpha/2+1)} = o(n_l)$ holds, for some positive $\gamma, \delta$. Finally, we study the improvement (to the detriment of legitimate nodes) that the eavesdroppers achieve in terms of the information leakage rate in a large cooperative network in the case of collusion. We show that again zero-cost secure communication is possible if $n_e^{2+2/\alpha} (\log n_e)^{\gamma} = o(n_l)$ holds, for some positive $\gamma$; that is, in the case of collusion slightly fewer eavesdroppers can be tolerated compared with the non-colluding case.
This paper studies the asymptotic outage performance of incremental redundancy automatic-repeat-request (INR-ARQ) transmission over multiple-input multiple-output (MIMO) block-fading channels with discrete input constellations. We first show that transmission with random codes using a discrete signal constellation across all transmit antennas achieves the optimal outage diversity given by the Singleton bound. The optimal SNR-exponent and outage diversity of INR-ARQ transmission over the MIMO block-fading channel are then analysed. We show that a significant gain in outage diversity is obtained by providing more than one bit feedback at each ARQ round. Thus, the outage performance of INR-ARQ transmission can be remarkably improved with minimal additional overhead. A practical feedback-and-power-adaptation rule is proposed for MIMO INR-ARQ, demonstrating the benefits provided by multibit feedback. Although the rule is sub-optimal in terms of outage performance, it achieves the optimal outage diversity.