Parallel evolutionary approaches for game playing and verification using Intel Xeon Phi

Journal of Parallel and Distributed Computing - Volume 133 - Pages 258-271 - 2019
Sebastián Rodríguez1, Facundo Parodi1, Sergio Nesmachnow1
1Facultad de Ingeniería, Universidad de la República, Uruguay

References

Alba, 2013, Parallel metaheuristics: recent advances and new trends, Int. Trans. Oper. Res., 20, 1, 10.1111/j.1475-3995.2012.00862.x
Aloupis, 2014, Classic Nintendo games are (computationally) hard, 40
Bäck, 1997
Barriga, 2014, Parallel UCT search on GPUs, 1
M. Bodén, A guide to recurrent neural networks and backpropagation, The Dallas project, 2001.
Bourki, 2010, Scalability and parallelization of Monte-Carlo tree search, 48
Chaslot, 2008, Parallel Monte-Carlo tree search, 60
D. Dyer, Watchmaker framework for evolutionary computation, [Online] http://watchmaker.uncommons.org/. (Accessed October 2016).
Fang, 2014, Test-driving Intel Xeon Phi, 137
García-Sánchez, 2015, Towards automatic StarCraft strategy generation using genetic programming, 284
Glover, 1986, Future paths for integer programming and links to artificial intelligence, Comput. Oper. Res., 13, 533, 10.1016/0305-0548(86)90048-1
Guzdial, 2017, Game engine learning from video
Guzdial, 2016, Game level generation from gameplay videos
Hart, 1968, A formal basis for the heuristic determination of minimum cost paths, IEEE Trans. Syst. Sci. Cybern., 4, 100, 10.1109/TSSC.1968.300136
Hausknecht, 2014, A neuroevolution approach to general Atari game playing, IEEE Trans. Comput. Intell. AI Games, 6, 355, 10.1109/TCIAIG.2013.2294713
Hong, 2004, Evolution of emergent behaviors for shooting game characters in Robocode, 634
Jørgensen, 2009
Leane, 2017, An evolutionary metaheuristic algorithm to optimise solutions to NES games, 19
Logas, 2014, Software verification games: designing Xylem, The Code of Plants
Mnih, 2015, Human-level control through deep reinforcement learning, Nature, 518, 529, 10.1038/nature14236
Murphy, 2013, The first level of Super Mario Bros. is easy with lexicographic orderings and time travel, 112
Nesmachnow, 2010, Computación científica de alto desempeño en la Facultad de Ingeniería, Universidad de la República, Rev. Asoc. Ingenieros Uruguay, 61, 12
Nesmachnow, 2014, An overview of metaheuristics: accurate and efficient methods for optimisation, Int. J. Metaheuristics, 3, 320, 10.1504/IJMHEUR.2014.068914
Ortega, 2013, Imitating human playing styles in Super Mario Bros, Entertainment Comput., 4, 93, 10.1016/j.entcom.2012.10.001
Osborn, 2014, A game-independent play trace dissimilarity metric
F. Parodi, S. Rodríguez Leopold, S. Iturriaga, S. Nesmachnow, Optimizing a pinball computer player using evolutionary algorithms, in: Proceedings of the XVIII Latin-Iberoamerican Conference on Operations Research, 2016.
Risi, 2017, Neuroevolution in games: state of the art and open challenges, IEEE Trans. Comput. Intell. AI Games, 9, 25, 10.1109/TCIAIG.2015.2494596
J. Schaffer, D. Whitley, L. Eshelman, Combinations of genetic algorithms and neural networks: a survey of the state of the art, in: International Workshop on Combinations of Genetic Algorithms and Neural Networks, 1992, pp. 1-37.
Simpson, 2012
Stanley, 2002, Evolving neural networks through augmenting topologies, Evol. Comput., 10, 99, 10.1162/106365602320169811
A. Summerville, J. Osborn, M. Mateas, CHARDA: Causal hybrid automata recovery via dynamic analysis, 2017. ArXiv preprint arXiv:1707.03336.
Togelius, 2009, Super Mario evolution, 156
Van Hasselt, 2016, Deep reinforcement learning with double Q-learning, 2094
Yannakakis, 2015, A panorama of artificial and computational intelligence in games, IEEE Trans. Comput. Intell. AI Games, 317, 10.1109/TCIAIG.2014.2339221
Yannakakis, 2018
Zook, 2015, Monte-Carlo tree search for simulation-based strategy analysis