Learning to solve sequential decision-making tasks is difficult. Humans spend years exploring their environment, essentially at random, before they are able to reason, solve difficult tasks, and collaborate with other humans towards a common goal. Artificially intelligent agents are similar in this respect. Reinforcement Learning (RL) is a well-known technique for training autonomous agents through interaction with the environment, but the learning process has high sample complexity: many interactions are needed before an effective policy is inferred, especially when multiple agents act in the environment simultaneously.
However, prior knowledge can be leveraged to accelerate learning and to enable harder tasks to be solved. Just as humans build skills and reuse them by relating different tasks, RL agents can reuse knowledge from previously solved tasks and from the exchange of knowledge with other agents in the environment. In fact, virtually all of the most challenging tasks currently solved by RL rely on embedded knowledge-reuse techniques, such as Imitation Learning, Learning from Demonstration, and Curriculum Learning.
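To make the idea concrete, the sketch below shows one simple form of knowledge reuse: warm-starting a tabular Q-learning agent with value estimates learned on a related source task instead of learning from scratch. This is a minimal, illustrative sketch rather than a method from the book; the environment object, its reset()/step() interface, the action list, and the q_learning helper are all assumptions made for the example.

```python
import random
from collections import defaultdict

def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.99,
               epsilon=0.1, initial_q=None):
    """Tabular Q-learning, optionally warm-started from a source-task Q-table."""
    # Knowledge reuse: start from Q-values learned on a related source task
    # (if provided) instead of from zeros, which can reduce the number of
    # environment interactions needed to reach a good policy on the target task.
    q = defaultdict(float, initial_q or {})
    for _ in range(episodes):
        state, done = env.reset(), False   # hypothetical environment interface
        while not done:
            # epsilon-greedy action selection
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            # one-step temporal-difference update
            best_next = max(q[(next_state, a)] for a in actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

# Learning from scratch:
#     q_target = q_learning(env, actions)
# Reusing knowledge from a previously solved source task:
#     q_target = q_learning(env, actions, initial_q=dict(q_source))
```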
This book surveys the literature on knowledge reuse in multiagent RL. The authors define a unifying taxonomy of state-of-the-art solutions for reusing knowledge and provide a comprehensive discussion of recent progress in the area. Readers will find a detailed account of the many ways in which knowledge can be reused in multiagent sequential decision-making tasks, as well as of the scenarios in which each approach is most effective. The authors also offer their view of the current low-hanging fruit in the area and of the still-open questions whose answers could yield breakthrough developments. Finally, the book points researchers who intend to join the area, or to leverage these techniques, to useful resources, including a list of conferences, journals, and implementation tools.
This book will be useful to a wide audience and will hopefully promote new dialogues across communities and novel developments in the area.
The information in the "Summary" section may refer to different editions of this title.
Felipe Leno da Silva (Leno) holds a Ph.D. (2019) from the University of São Paulo, Brazil. He is currently a Postdoctoral Researcher at the Advanced Institute for AI, where he helped organize one of the first Brazilian AI residency programs. He has been actively researching knowledge reuse for multiagent RL since the start of his Ph.D. and is a firm believer that RL will bridge the gap between virtual agents and the physical real world. Leno enjoys serving the AI community in oft-neglected yet important roles. He has been part of the Program Committees of most of the major AI conferences and has organized multiple workshops, such as the Adaptive and Learning Agents (ALA) and the Scaling-Up Reinforcement Learning (SURL) workshop series. Leno is a strong advocate for the inclusion of minorities in the AI community and has been involved in multiple iterations of the Latinx in AI workshop at NeurIPS.
Anna Helena Reali Costa (Anna Reali) is a Full Professor at the Universidade de São Paulo (USP), Brazil. She received her Ph.D. at USP, investigated robot vision as a research scientist at the University of Karlsruhe, and was a guest researcher at Carnegie Mellon University, working on the integration of learning, planning, and execution in mobile robot teams. She is the Director of the Data Science Center (C2D), a partnership between USP and the Itau-Unibanco bank, and a member of the Center for Artificial Intelligence (C4AI), a partnership between USP, IBM, and FAPESP. Her scientific contributions lie in AI and Machine Learning, in particular RL; her long-term research objective is to create autonomous, ethical, and robust agents that can learn to interact in complex and dynamic environments, aiming at the well-being of human beings.
The information in the "About this book" section may refer to different editions of this title.
EUR 17.11 shipping from the U.S.A. to Italy
EUR 9.70 shipping from Germany to Italy
From: moluna, Greven, Germany
Condition: New. This is a print-on-demand item and will be printed for you after your order. Item code 608128888
Quantity: More than 20 available
From: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Paperback. Condition: New. This item is printed on demand (it takes 3-4 days longer) - new stock. 132 pp. English. Item code 9783031004636
Quantity: 2 available
From: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. In English. Item code ria9783031004636_new
Quantity: More than 20 available
From: GreatBookPrices, Columbia, MD, U.S.A.
Condition: New. Item code 44569090-n
Quantity: More than 20 available
From: AHA-BUCH GmbH, Einbeck, Germany
Paperback. Condition: New. Print on demand, new stock - printed after ordering. Item code 9783031004636
Quantity: 1 available
From: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
Paperback. Condition: New. This item is printed on demand - print-on-demand title. New stock. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 132 pp. English. Item code 9783031004636
Quantity: 1 available
From: GreatBookPricesUK, Woodford Green, United Kingdom
Condition: New. Item code 44569090-n
Quantity: More than 20 available
From: California Books, Miami, FL, U.S.A.
Condition: New. Item code I-9783031004636
Quantity: More than 20 available
From: Books Puddle, New York, NY, U.S.A.
Condition: New. 1st edition NO-PA16APR2015-KAP. Item code 26395061422
Quantity: 4 available
From: Chiron Media, Wallingford, United Kingdom
PF. Condition: New. Item code 6666-IUK-9783031004636
Quantity: 10 available