We consider stochastic sensor scheduling with application to networked control systems. We model the sampling instants of a networked system as jumps between the states of a continuous-time Markov chain. We introduce a cost function for this Markov chain composed of terms that depend on the average sampling frequencies of the subsystems and on the effort needed to change the parameters of the underlying Markov chain. By extending Brockett's recent results on the optimal control of Markov chains, we derive an optimal scheduling policy that fairly allocates network resources (i.e., access to the network) among the control loops. We apply this scheduling policy to a networked control system composed of several scalar decoupled subsystems and compute upper bounds on their closed-loop performance. We illustrate the results numerically on a networked system composed of several water tanks.
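To make the scheduling model concrete, the following is a minimal sketch (with hypothetical transition rates, not taken from the paper) of a continuous-time Markov chain whose jump into state i is read as a sampling instant for subsystem i; simulating it yields the empirical average sampling frequency of each subsystem, the quantity the cost function penalizes.

```python
import random

def simulate_ctmc_schedule(rates, horizon, seed=0):
    """Simulate a CTMC with off-diagonal rates rates[i][j] over [0, horizon].

    A jump into state i is interpreted as a sampling instant for
    subsystem i; returns the empirical sampling frequency of each
    subsystem (jumps into that state per unit time).
    """
    rng = random.Random(seed)
    n = len(rates)
    state, t = 0, 0.0
    visits = [0] * n
    while True:
        total = sum(rates[state][j] for j in range(n) if j != state)
        t += rng.expovariate(total)  # exponential holding time in `state`
        if t > horizon:
            break
        # Pick the next state with probability proportional to its rate.
        r = rng.uniform(0.0, total)
        acc = 0.0
        for j in range(n):
            if j == state:
                continue
            acc += rates[state][j]
            if r <= acc:
                state = j
                break
        visits[state] += 1  # this jump is a sampling instant for `state`
    return [v / horizon for v in visits]

# Hypothetical 3-subsystem example: larger rates into state 0 grant
# control loop 0 more frequent access to the network.
rates = [[0.0, 1.0, 1.0],
         [2.0, 0.0, 1.0],
         [2.0, 1.0, 0.0]]
freqs = simulate_ctmc_schedule(rates, horizon=1000.0)
```

Tuning the generator rates trades average sampling frequency against how far the chain's parameters must move, which is the trade-off the paper's cost function formalizes.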