The Optimal Control of Partially Observable Markov Processes over a Finite Horizon

Operations Research - Volume 21, Issue 5, pp. 1071-1088 - 1973
Richard D. Smallwood1, Edward J. Sondik2
1Stanford University, Stanford, California, and Xerox Palo Alto Research Center, Palo Alto, California
2Stanford University, Stanford, California

Abstract

This paper formulates the optimal control problem for a class of mathematical models in which the system to be controlled is characterized by a finite-state discrete-time Markov process. The states of this internal process are not directly observable by the controller; rather, he has available a set of observable outputs that are only probabilistically related to the internal state of the system. The formulation is illustrated by a simple machine-maintenance example, and other specific application areas are also discussed. The paper demonstrates that, if there are only a finite number of control intervals remaining, then the optimal payoff function is a piecewise-linear, convex function of the current state probabilities of the internal Markov process. In addition, an algorithm for utilizing this property to calculate the optimal control policy and payoff function for any finite horizon is outlined. These results are illustrated by a numerical example for the machine-maintenance problem.
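The abstract's central claim can be illustrated with a small sketch: the controller tracks a belief (a probability vector over the hidden states), updates it by Bayes' rule from the probabilistically related observations, and the optimal finite-horizon payoff is the maximum over a finite set of linear functions of that belief, hence piecewise-linear and convex. The transition matrix, observation likelihoods, and alpha vectors below are illustrative assumptions in the spirit of the machine-maintenance example, not values from the paper.

```python
# Illustrative sketch (assumed numbers): a two-state machine-maintenance
# process with hidden states 0 = working, 1 = broken.

def value(belief, alpha_vectors):
    """Piecewise-linear convex payoff: V(b) = max_k sum_i alpha_k[i] * b[i]."""
    return max(sum(a_i * b_i for a_i, b_i in zip(alpha, belief))
               for alpha in alpha_vectors)

def belief_update(belief, obs, P, O):
    """Bayes update after one transition (matrix P) and one observation,
    where O[state][obs] gives the probability of each observable output."""
    n = len(belief)
    predicted = [sum(belief[i] * P[i][j] for i in range(n)) for j in range(n)]
    unnorm = [O[j][obs] * predicted[j] for j in range(n)]
    z = sum(unnorm)  # probability of the observation; assumed > 0 here
    return [u / z for u in unnorm]

P = [[0.9, 0.1], [0.0, 1.0]]        # machine may break, never self-repairs
O = [[0.8, 0.2], [0.3, 0.7]]        # output 0 hints "working", 1 "broken"
alphas = [[10.0, 0.0], [4.0, 4.0]]  # one linear piece per candidate policy

b = belief_update([0.5, 0.5], obs=0, P=P, O=O)
print(round(value(b, alphas), 3))   # payoff at the updated belief
```

Because each alpha vector corresponds to a complete policy for the remaining control intervals, taking the pointwise maximum over them yields both the optimal payoff and the optimal action at any belief, which is the property the paper's algorithm exploits.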
