Feasibility of on-line speed policies in real-time systems

Springer Science and Business Media LLC - Volume 56 - Pages 254-292 - 2020
Bruno Gaujal1, Alain Girault1, Stéphan Plassart1
1Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LIG, Grenoble, France

Abstract

We consider a real-time system where a single processor with variable speed executes an infinite sequence of sporadic and independent jobs. We assume that job sizes and relative deadlines are bounded by $C$ and $\varDelta$ respectively, and we denote by $S_{\max}$ the maximal speed of the processor. In such a real-time system, a speed selection policy dynamically chooses (i.e., on-line) the speed of the processor to execute the current, not yet finished, jobs. We say that an on-line speed policy is feasible if it is able to execute any sequence of jobs while meeting two constraints: the processor speed never exceeds $S_{\max}$ and no job misses its deadline. In this paper, we compare the feasibility regions of four on-line speed selection policies in single-processor real-time systems, namely Optimal Available (OA) (Yao et al., IEEE Annual Foundations of Computer Science, 1995), Average Rate (AVR) (Yao et al. 1995), (BKP) (Bansal et al., J ACM 54:1, 2007), and a Markovian Policy based on dynamic programming (MP) (Gaujal et al., Technical Report hal-01615835, Inria, 2017). We establish the feasibility condition on $S_{\max}$ for each of these policies. These results reinforce the interest of (MP), which is not only optimal for energy consumption (on average) but is also optimal regarding feasibility.
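To make the feasibility constraints concrete, the sketch below (an illustration added here, not material from the paper) simulates two of the classic policies, (OA) and (AVR), on a small sporadic job trace and checks the two constraints: the requested speed must stay at or below $S_{\max}$, and every job must finish by its deadline. The job trace, the unit time step, and the helper names (Job, simulate, speed_oa, speed_avr) are assumptions made for illustration.

```python
# Illustrative sketch only (not from the paper): discrete-time simulation of the
# (OA) and (AVR) on-line speed policies on a sporadic job sequence.
# The example jobs, the unit time step, and all names are assumptions.
import copy
from dataclasses import dataclass, field

@dataclass
class Job:
    arrival: int        # release time
    size: float         # amount of work, bounded by C in the paper
    rel_deadline: int   # relative deadline, bounded by Delta in the paper
    remaining: float = field(init=False)

    def __post_init__(self):
        self.remaining = self.size

    @property
    def abs_deadline(self) -> int:
        return self.arrival + self.rel_deadline

def speed_oa(active, t):
    """(OA): max over future deadlines d of (remaining work due by d) / (d - t)."""
    horizons = sorted({j.abs_deadline for j in active if j.abs_deadline > t})
    if not horizons:
        return 0.0
    return max(sum(j.remaining for j in active if j.abs_deadline <= d) / (d - t)
               for d in horizons)

def speed_avr(active, t):
    """(AVR): sum of the densities size / relative deadline of the active jobs."""
    return sum(j.size / j.rel_deadline for j in active)

def simulate(jobs, policy, s_max, horizon):
    """Run EDF at the policy's speed, capped at s_max; return the peak requested
    speed and the number of deadline misses."""
    pending, misses, peak = [], 0, 0.0
    for t in range(horizon):
        pending += [j for j in jobs if j.arrival == t]
        active = [j for j in pending if j.remaining > 1e-9]
        requested = policy(active, t)
        peak = max(peak, requested)
        budget = min(requested, s_max)          # the processor cannot exceed S_max
        for j in sorted(active, key=lambda j: j.abs_deadline):  # EDF order in [t, t+1)
            done = min(budget, j.remaining)
            j.remaining -= done
            budget -= done
        misses += sum(1 for j in active
                      if j.abs_deadline == t + 1 and j.remaining > 1e-9)
    return peak, misses

if __name__ == "__main__":
    jobs = [Job(0, 2.0, 3), Job(1, 1.0, 2), Job(4, 2.0, 4)]
    for name, policy in [("OA", speed_oa), ("AVR", speed_avr)]:
        peak, misses = simulate(copy.deepcopy(jobs), policy, s_max=2.0, horizon=10)
        print(f"({name}) peak requested speed = {peak:.2f}, deadline misses = {misses}")
```

On this toy trace both policies stay below the cap; the paper's contribution is the exact threshold on $S_{\max}$ (as a function of $C$ and $\varDelta$) below which such deadline misses become unavoidable for each policy.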

References

Augustine J, Irani S, Swamy C (2004) Optimal power-down strategies. In: Symposium on Foundations of Computer Science, FOCS'04. IEEE, Rome, Italy, pp 530–539
Bansal N, Kimbrel T, Pruhs K (2007) Speed scaling to manage energy and temperature. J ACM 54:1
Chen J, Stoimenov N, Thiele L (2009) Feasibility analysis of on-line DVS algorithms for scheduling arbitrary event streams. In: Real-Time Systems Symposium, RTSS'09. Washington (DC), USA, pp 261–270
Gaujal B, Girault A, Plassart S (2017) Dynamic speed scaling minimizing expected energy consumption for real-time tasks. Technical Report hal-01615835, Inria
Jejurikar R, Gupta R (2004) Procrastination scheduling in fixed priority real-time systems. In: Conference on Languages, Compilers, and Tools for Embedded Systems, LCTES'04. ACM, Washington (DC), USA, pp 57–66
Liu C, Layland J (1973) Scheduling algorithms for multiprogramming in a hard real-time environment. J ACM 20(1):46–61
Petrovitsch M (1901) Sur une manière d'étendre le théorème de la moyenne aux équations différentielles du premier ordre. Math Ann 54(3):417–436
Puterman ML (2005) Markov decision processes: discrete stochastic dynamic programming. Wiley Series in Probability and Statistics. Wiley, New York
Sutton RS, Barto AG (2018) Reinforcement learning: an introduction, 2nd edn. The MIT Press
Thiele L, Chakraborty S, Naedele M (2000) Real-time calculus for scheduling hard real-time systems. In: International Symposium on Circuits and Systems, ISCAS'00. IEEE, pp 101–104
Yao F, Demers A, Shenker S (1995) A scheduling model for reduced CPU energy. In: IEEE Annual Foundations of Computer Science, pp 374–382