Adaptive Dynamics Learning and Q-initialization in the context of multiagent learning

Authors: Burkov, Andriy
Advisor: Chaib-draa, Brahim
Abstract: Multiagent learning is a promising direction for current and future research on intelligent systems. While the single-agent case has been well studied over the last two decades, the multiagent case has not been broadly studied due to its complexity. When several autonomous agents learn and act simultaneously, the environment becomes effectively unpredictable, and assumptions made in the single-agent case, such as stationarity and the Markov property, often no longer hold in the multiagent context. In this Master's work we survey what has been done in this research field and propose an original approach to multiagent learning in the presence of adaptive agents. We explain why such an approach gives promising results by comparing it with several existing approaches. It is important to note that one of the most challenging problems of all multiagent learning algorithms is their high computational complexity, due to the fact that the state space of a multiagent problem grows exponentially with the number of agents acting in the environment. In this work we propose a novel approach to reducing the complexity of multiagent reinforcement learning. This approach significantly reduces the portion of the state space that agents must visit in order to learn an efficient solution. We then evaluate our algorithms on a set of empirical tests and give a preliminary theoretical result, which is a first step toward establishing the validity of our approaches to multiagent learning.
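The abstract mentions Q-initialization as a way to reduce the state space that must be explored. The thesis's actual algorithms are not reproduced here; the following is only a minimal sketch of the general idea, assuming plain tabular Q-learning on a hypothetical 5-state chain task, where the Q-table is seeded with a heuristic prior (`init`) instead of zeros so that a useful greedy policy emerges after visiting fewer states. All names, the toy environment, and the parameter values are illustrative assumptions, not the author's method.

```python
import random

N_STATES = 5          # toy chain: states 0..4, state 4 is the goal
ACTIONS = (0, 1)      # 0 = move left, 1 = move right

def step(state, action):
    """Deterministic chain dynamics; reward 1.0 only on entering the goal."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

def q_learning(episodes=200, alpha=0.5, gamma=0.9, epsilon=0.1,
               init=0.0, seed=0):
    rng = random.Random(seed)
    # Q-initialization (illustrative): every entry starts at the prior `init`
    # rather than 0, biasing early greedy choices before any learning occurs.
    Q = {(s, a): init for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])
            nxt, r = step(s, a)
            # standard one-step Q-learning update
            best_next = max(Q[(nxt, x)] for x in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = nxt
    return Q

Q = q_learning(init=1.0)  # optimistic prior as the initialization heuristic
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

With the optimistic prior, the greedy policy settles on moving right in every non-goal state. In the multiagent setting studied in the thesis, the same principle is attractive precisely because the joint state space is too large to visit exhaustively.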
Document Type: Master's thesis (Mémoire de maîtrise)
Issue Date: 2007
Open Access Date: 12 April 2018
Permalink: http://hdl.handle.net/20.500.11794/19077
Grantor: Université Laval
Collection: Thèses et mémoires

Files in this item:
24476.pdf (924.33 kB, Adobe PDF)
All documents in CorpusUL are protected by the Copyright Act of Canada.