2022 (English). In: IEEE Transactions on Communications, ISSN 0090-6778, E-ISSN 1558-0857, Vol. 70, no. 5, p. 3081-3095. Article in journal (Refereed). Published.
Abstract [en]
We consider a finite-state Discrete-Time Markov Chain (DTMC) source that can be sampled to detect the events when the DTMC transits to a new state. Our goal is to study the trade-off between sampling frequency and staleness in detecting these events. We argue that, for the problem at hand, using Age of Information (AoI) to quantify the staleness of a sample is conservative; we therefore study another freshness metric, the age penalty, defined as the time elapsed since the first transition out of the most recently observed state. We study two optimization problems: minimizing the average age penalty subject to an average sampling frequency constraint, and minimizing the average sampling frequency subject to an average age penalty constraint; both are Constrained Markov Decision Problems. We solve them using the Lagrangian MDP approach, and also provide structural results that reduce the search space. Our numerical results demonstrate that the computed Markov policies not only outperform optimal periodic sampling policies, but also achieve sampling frequencies close to or lower than that of an optimal clairvoyant (non-causal) sampling policy, if a small age penalty is allowed.
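The age penalty metric described in the abstract can be illustrated with a small simulation. The sketch below is not from the paper: it assumes a toy two-state DTMC with a hypothetical transition matrix `P` and a simple periodic sampler, and tracks the age penalty as defined above (time since the first transition out of the most recently observed state, and zero while the chain is still in that state).

```python
import random

def simulate(P, horizon, period, seed=0):
    """Simulate a DTMC sampled every `period` slots and track the age
    penalty: time elapsed since the first transition out of the most
    recently observed state (0 while no such transition has occurred)."""
    rng = random.Random(seed)
    n = len(P)
    state = 0
    observed = state          # state seen at the most recent sample
    first_exit = None         # slot of first transition out of `observed`
    total_penalty = 0
    samples = 0
    for t in range(1, horizon + 1):
        # one DTMC step
        state = rng.choices(range(n), weights=P[state])[0]
        # record the first departure from the last observed state;
        # a later return to `observed` does not reset the penalty
        if first_exit is None and state != observed:
            first_exit = t
        if t % period == 0:   # periodic sampler takes a fresh sample
            samples += 1
            observed = state
            first_exit = None
        total_penalty += 0 if first_exit is None else (t - first_exit)
    return total_penalty / horizon, samples / horizon

# Toy two-state chain (illustrative values, not from the paper)
P = [[0.9, 0.1],
     [0.2, 0.8]]
avg_penalty, freq = simulate(P, horizon=100_000, period=5)
```

Sweeping `period` in such a sketch exposes the trade-off the paper optimizes: a longer sampling period lowers the sampling frequency but raises the average age penalty, while the paper's Markov policies choose when to sample adaptively rather than on a fixed schedule.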
Place, publisher, year, edition, pages
New York: Institute of Electrical and Electronics Engineers (IEEE), 2022
Keywords
Measurement, Markov processes, Frequency shift keying, Web pages, Delays, Databases, Receivers, Age of information, age penalty, sampling, DTMC source, CMDP
National Category
Telecommunications; Computer Systems
Identifiers
urn:nbn:se:kth:diva-313504 (URN)
10.1109/TCOMM.2022.3160563 (DOI)
000797439600018 ()
2-s2.0-85126675858 (Scopus ID)
Funder
EU, European Research Council, 742648
Swedish Research Council, 2016-04404
Note
QC 20220607
2022-06-07 2022-06-07 2023-11-22 Bibliographically approved