Ad hoc teamwork by learning teammates’ task

Francisco S. Melo1, Alberto Sardinha1
1INESC-ID and Instituto Superior Técnico, Universidade de Lisboa, Porto Salvo, Portugal

Abstract

Keywords


References

Abbeel, P. (2008). Apprenticeship learning and reinforcement learning with application to robotic control. PhD thesis, Stanford University.

Agmon, N., & Stone, P. (2012). Leading ad hoc agents in joint action settings with multiple teammates. In Proceedings of 11th International Conference on Autonomous Agents and Multiagent Systems (pp. 341–348).

Albrecht, S., & Ramamoorthy, S. (2013). A game-theoretic model and best-response learning method for ad hoc coordination in multiagent systems. In Proceedings of 2013 International Conference on Autonomous Agents and Multiagent Systems (pp. 1155–1156).

Barrett, S., & Stone, P. (2011). Ad hoc teamwork modeled with multi-armed bandits: An extension to discounted infinite rewards. In Proceedings of 2011 AAMAS Workshop on Adaptive and Learning Agents (pp. 9–14).

Barrett, S., & Stone, P. (2012). An analysis framework for ad hoc teamwork tasks. In Proceedings of 11th International Conference on Autonomous Agents and Multiagent Systems (pp. 357–364).

Barrett, S., Stone, P., & Kraus, S. (2011). Empirical evaluation of ad hoc teamwork in the pursuit domain. In Proceedings of 10th International Conference on Autonomous Agents and Multiagent Systems (pp. 567–574).

Barrett, S., Stone, P., Kraus, S., & Rosenfeld, A. (2013). Teamwork with limited knowledge of teammates. In Proceedings of 27th AAAI Conference on Artificial Intelligence.

Barron, A. (1988). The exponential convergence of posterior probabilities with implications for Bayes estimators of density functions. Technical Report 7, University of Illinois at Urbana-Champaign.

Blackwell, D., & Dubins, L. (1962). Merging of opinions with increasing information. The Annals of Mathematical Statistics, 33(3), 882–886.

Boutilier, C. (1996). Planning, learning and coordination in multiagent decision processes. In Proceedings of 6th Conference on Theoretical Aspects of Rationality and Knowledge (pp. 195–210).

Bowling, M., & McCracken, P. (2005). Coordination and adaptation in impromptu teams. In Proceedings of 20th AAAI Conference on Artificial Intelligence (pp. 53–58).

Cesa-Bianchi, N., & Lugosi, G. (2006). Prediction, learning and games. New York: Cambridge University Press.

Chakraborty, D., & Stone, P. (2013). Cooperating with a Markovian ad hoc teammate. In Proceedings of 12th International Conference on Autonomous Agents and Multiagent Systems (pp. 1085–1092).

Clarke, B., & Barron, A. (1990). Information-theoretic asymptotics of Bayes methods. IEEE Transactions on Information Theory, 36(3), 453–471.

de Farias, D., & Megiddo, N. (2006). Combining expert advice in reactive environments. The Journal of the ACM, 53(5), 762–799.

Duff, M. (2002). Optimal learning: Computational procedures for Bayes-adaptive Markov decision processes. PhD thesis, University of Massachusetts Amherst.

Fu, J., & Kass, R. (1988). The exponential rates of convergence of posterior distributions. Annals of the Institute of Statistical Mathematics, 40(4), 683–691.

Fudenberg, D., & Levine, D. (1989). Reputation and equilibrium selection in games with a patient player. Econometrica, 57(4), 759–778.

Fudenberg, D., & Levine, D. (1993). Steady state learning and Nash equilibrium. Econometrica, 61(3), 547–573.

Fudenberg, D., & Levine, D. (1998). The theory of learning in games. Cambridge, MA: MIT Press.

Ganzfried, S., & Sandholm, T. (2011). Game theory-based opponent modeling in large imperfect-information games. In Proceedings of 10th International Conference on Autonomous Agents and Multiagent Systems (pp. 533–540).

Genter, K., Agmon, N., & Stone, P. (2011). Role-based ad hoc teamwork. In Proceedings of 25th AAAI Conference on Artificial Intelligence (pp. 1782–1783).

Genter, K., Agmon, N., & Stone, P. (2013). Ad hoc teamwork for leading a flock. In Proceedings of 12th International Conference on Autonomous Agents and Multiagent Systems (pp. 531–538).

Ghosal, S., & van der Vaart, A. (2007). Convergence rates of posterior distributions for non-i.i.d. observations. The Annals of Statistics, 35(1), 192–223.

Ghosal, S., Ghosh, J., & van der Vaart, A. (2000). Convergence rates of posterior distributions. The Annals of Statistics, 28(2), 500–531.

Gittins, J. (1979). Bandit processes and dynamic allocation indices. Journal of the Royal Statistical Society B, 41(2), 148–177.

Gmytrasiewicz, P., & Doshi, P. (2005). A framework for sequential planning in multiagent settings. Journal of Artificial Intelligence Research, 24, 49–79.

Gossner, O., & Tomala, T. (2008). Entropy bounds on Bayesian learning. Journal of Mathematical Economics, 44, 24–32.

Haussler, D., & Opper, M. (1997). Mutual information, metric entropy and cumulative entropy risk. Annals of Statistics, 25(6), 2451–2492.

Hoeffding, W. (1963). Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58, 13–30.

Jordan, J. (1991). Bayesian learning in normal form games. Games and Economic Behavior, 3, 60–81.

Jordan, J. (1992). The exponential convergence of Bayesian learning in normal form games. Games and Economic Behavior, 4(2), 202–217.

Kaelbling, L., Littman, M., & Cassandra, A. (1998). Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101, 99–134.

Kalai, E., & Lehrer, E. (1993). Rational learning leads to Nash equilibrium. Econometrica, 61(5), 1019–1045.

Kaufmann, E., Cappé, O., & Garivier, A. (2012). On Bayesian upper confidence bounds for bandit problems. In Proceedings of 15th International Conference on Artificial Intelligence and Statistics (pp. 592–600).

Kaufmann, E., Korda, N., & Munos, R. (2012). Thompson sampling: An asymptotically optimal finite-time analysis. In Proceedings of 23rd International Conference on Algorithmic Learning Theory (pp. 199–213).

Kautz, H., Pelavin, R., & Tenenberg, J. (1991). A formal theory of plan recognition and its implementation. In Reasoning about plans (pp. 69–125). San Mateo, CA: Morgan Kaufmann.

Kocsis, L., & Szepesvári, C. (2006). Bandit based Monte-Carlo planning. In Proceedings of 17th European Conference on Machine Learning (pp. 282–293).

Lai, T., & Robbins, H. (1985). Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1), 4–22.

Leyton-Brown, K., & Shoham, Y. (2008). Essentials of game theory: A concise, multidisciplinary introduction. San Rafael, CA: Morgan & Claypool Publishers.

Liemhetcharat, S., & Veloso, M. (2014). Weighted synergy graphs for effective team formation with heterogeneous ad hoc agents. Artificial Intelligence, 208, 41–65.

Littman, M. (2001). Value-function reinforcement learning in Markov games. Cognitive Systems Research, 2(1), 55–66.

Madani, O., Hanks, S., & Condon, A. (1999). On the undecidability of probabilistic planning and infinite-horizon partially observable Markov decision problems. In Proceedings of 16th AAAI Conference on Artificial Intelligence (pp. 541–548).

Nachbar, J. (1997). Prediction, optimization and learning in repeated games. Econometrica, 65(2), 275–309.

Ng, A., & Russell, S. (2000). Algorithms for inverse reinforcement learning. In Proceedings of 17th International Conference on Machine Learning (pp. 663–670).

Pineau, J., Gordon, G., & Thrun, S. (2006). Anytime point-based approximations for large POMDPs. Journal of Artificial Intelligence Research, 27, 335–380.

Poupart, P., Vlassis, N., Hoey, J., & Regan, K. (2006). An analytic solution to discrete Bayesian reinforcement learning. In Proceedings of 23rd International Conference on Machine Learning (pp. 697–704).

Pourmehr, S., & Dadkhah, C. (2012). An overview on opponent modeling in RoboCup soccer simulation 2D. In Robot soccer world cup XV (pp. 402–414). Berlin: Springer.

Puterman, M. (2005). Markov decision processes: Discrete stochastic dynamic programming. New York: Wiley.

Ramchurn, S., Osborne, M., Parson, O., Rahwan, T., Maleki, S., Reece, S., Huynh, T., Alam, M., Fischer, J., Rodden, T., Moreau, L., & Roberts, S. (2013). AgentSwitch: Towards smart energy tariff selection. In Proceedings of 12th International Conference on Autonomous Agents and Multiagent Systems (pp. 981–988).

Rosenthal, S., Biswas, J., & Veloso, M. (2010). An effective personal mobile robot agent through symbiotic human-robot interaction. In Proceedings of 9th International Conference on Autonomous Agents and Multiagent Systems (pp. 915–922).

Seuken, S., & Zilberstein, S. (2008). Formal models and algorithms for decentralized decision making under uncertainty. Journal of Autonomous Agents and Multiagent Systems, 17(2), 190–250.

Shani, G., Pineau, J., & Kaplow, R. (2013). A survey of point-based POMDP solvers. Journal of Autonomous Agents and Multiagent Systems, 27(1), 1–51.

Shen, X., & Wasserman, L. (2001). Rates of convergence of posterior distributions. The Annals of Statistics, 29(3), 687–714.

Shiryaev, A. (1996). Probability. New York: Springer.

Spaan, M., & Vlassis, N. (2005). Perseus: Randomized point-based value iteration for POMDPs. Journal of Artificial Intelligence Research, 24, 195–220.

Spaan, M., Gordon, G., & Vlassis, N. (2006). Decentralized planning under uncertainty for teams of communicating agents. In Proceedings of 5th International Conference on Autonomous Agents and Multiagent Systems (pp. 249–256).

Stone, P., & Kraus, S. (2010). To teach or not to teach? Decision-making under uncertainty in ad hoc teams. In Proceedings of 9th International Conference on Autonomous Agents and Multiagent Systems (pp. 117–124).

Stone, P., & Veloso, M. (2000). Multiagent systems: A survey from a machine learning perspective. Autonomous Robots, 8(3), 345–383.

Stone, P., Kaminka, G., Kraus, S., & Rosenschein, J. (2010). Ad hoc autonomous agent teams: Collaboration without pre-coordination. In Proceedings of 24th AAAI Conference on Artificial Intelligence (pp. 1504–1509).

Stone, P., Kaminka, G., & Rosenschein, J. (2010). Leading a best-response teammate in an ad hoc team. In Agent-mediated electronic commerce: Designing trading strategies and mechanisms for electronic markets. Lecture notes in business information processing (pp. 132–146). Berlin: Springer.

Sutton, R., & Barto, A. (1998). Reinforcement learning: An introduction. Cambridge, MA: MIT Press.

Walker, S., Lijoi, A., & Prünster, I. (2007). On rates of convergence for posterior distributions in infinite-dimensional models. The Annals of Statistics, 35(2), 738–746.

Wang, X., & Sandholm, T. (2002). Reinforcement learning to play an optimal Nash equilibrium in team Markov games. Advances in Neural Information Processing Systems, 15, 1571–1578.

Watkins, C. (1989). Learning from delayed rewards. PhD thesis, King’s College, University of Cambridge.

Wu, F., Zilberstein, S., & Chen, X. (2011). Online planning for ad hoc autonomous agent teams. In Proceedings of 22nd International Joint Conference on Artificial Intelligence (pp. 439–445).

Yorke-Smith, N., Saadati, S., Myers, K., & Morley, D. (2012). The design of a proactive personal agent for task management. International Journal on Artificial Intelligence Tools, 21(1), 90–119.