Markov Decision Processes (MDPs) are widely popular in Artificial Intelligence for modeling sequential decision-making scenarios with probabilistic dynamics. They are the framework of choice when designing an intelligent agent that needs to act for long periods of time in an environment where its actions could have uncertain outcomes. MDPs are actively researched in two related subareas of AI, probabilistic planning and reinforcement learning. Probabilistic planning assumes known models for the agent's goals and domain dynamics, and focuses on determining how the agent should behave to achieve its objectives. On the other hand, reinforcement learning additionally learns these models based on the feedback the agent gets from the environment. This book provides a concise introduction to the use of MDPs for solving probabilistic planning problems, with an emphasis on the algorithmic perspective. It covers the whole spectrum of the field, from the basics to state-of-the-art optimal and approximation algorithms. We first describe the theoretical foundations of MDPs and the fundamental solution techniques for them. We then discuss modern optimal algorithms based on heuristic search and the use of structured representations. A major focus of the book is on the numerous approximation schemes for MDPs that have been developed in the AI literature. These include determinization-based approaches, sampling techniques, heuristic functions, dimensionality reduction, and hierarchical representations. Finally, we briefly introduce several extensions of the standard MDP classes that model and solve even more complex planning problems. Table of Contents: Introduction / MDPs / Fundamental Algorithms / Heuristic Search Algorithms / Symbolic Algorithms / Approximation Algorithms / Advanced Notes
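As an editorial illustration of the fundamental solution techniques the summary refers to (not material taken from the book), here is a minimal Python sketch of value iteration on a tiny goal-oriented MDP. The state names, actions, transition probabilities, and costs below are made-up assumptions for the example.

    # Minimal value-iteration sketch for a toy goal-oriented MDP.
    # The model below (states, actions, probabilities, costs) is an
    # illustrative assumption, not an example from the book.
    MDP = {
        # state -> action -> list of (probability, next_state, cost)
        "s0": {"safe":  [(1.0, "s1", 2.0)],
               "risky": [(0.5, "goal", 1.0), (0.5, "s0", 1.0)]},
        "s1": {"go": [(1.0, "goal", 1.0)]},
        "goal": {},  # absorbing goal state: no actions, zero cost-to-go
    }

    def value_iteration(mdp, eps=1e-6):
        """Return the expected cost-to-go V(s) and a greedy policy."""
        V = {s: 0.0 for s in mdp}
        while True:
            delta = 0.0
            for s, actions in mdp.items():
                if not actions:            # goal state keeps value 0
                    continue
                # Bellman backup: cheapest expected immediate cost plus future cost
                best = min(sum(p * (c + V[s2]) for p, s2, c in outcomes)
                           for outcomes in actions.values())
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < eps:                # stop once values have converged
                break
        policy = {s: min(acts, key=lambda a: sum(p * (c + V[s2]) for p, s2, c in acts[a]))
                  for s, acts in mdp.items() if acts}
        return V, policy

On this toy model, value_iteration(MDP) converges to V roughly equal to {'s0': 2.0, 's1': 1.0, 'goal': 0.0} with greedy policy {'s0': 'risky', 's1': 'go'}; the heuristic search and approximation algorithms discussed in the book scale this basic idea to much larger state spaces.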
The information in the "Summary" section may refer to different editions of this title.
Mausam is a Research Assistant Professor at the Turing Center in the Department of Computer Science at the University of Washington, Seattle. His research interests span various sub-fields of artificial intelligence, including sequential decision making under uncertainty, large-scale natural language processing, Web information systems, heuristic search, machine learning, and AI applications to crowd-sourcing. Mausam obtained a Ph.D. from the University of Washington in 2007 and a Bachelor of Technology from IIT Delhi in 2001. His Ph.D. thesis received an honorable mention for the 2008 ICAPS Best Dissertation Award. Mausam has published scores of papers in top AI conferences and journals. He has served on the senior program committees of AI conferences such as AAAI and IJCAI, on the program committees of several other conferences, and on NSF panels.
The information in the "About this book" section may refer to different editions of this title.
EUR 17.02 shipping from U.S.A. to Italy
EUR 8.01 shipping from the United Kingdom to Italy
From: Speedyhen, London, United Kingdom
Condition: NEW. Item code NW9783031004315
Quantity: 1 available
From: PBShop.store UK, Fairford, GLOS, United Kingdom
PAP. Condition: New. New book. Shipped from UK. Established seller since 2000. Item code GB-9783031004315
Quantity: 1 available
From: moluna, Greven, Germany
Condition: New. This is a print-on-demand item and will be printed for you after you place your order. Item code 608128860
Quantity: More than 20 available
From: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. In English. Item code ria9783031004315_new
Quantity: More than 20 available
From: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Paperback. Condition: New. This item is printed on demand; it takes 3-4 days longer. New stock. 212 pp. English. Item code 9783031004315
Quantity: 2 available
From: California Books, Miami, FL, U.S.A.
Condition: New. Item code I-9783031004315
Quantity: More than 20 available
From: BargainBookStores, Grand Rapids, MI, U.S.A.
Paperback or Softback. Condition: New. Planning with Markov Decision Processes: An AI Perspective 0.82. Book. Item code BBS-9783031004315
Quantity: 5 available
From: GreatBookPricesUK, Woodford Green, United Kingdom
Condition: New. Item code 44569072-n
Quantity: More than 20 available
From: Revaluation Books, Exeter, United Kingdom
Paperback. Condition: Brand New. 210 pages. 9.25 x 7.51 x 0.48 inches. In stock. This item is printed on demand. Item code __3031004310
Quantity: 1 available
From: AHA-BUCH GmbH, Einbeck, Germany
Paperback. Condition: New. Print on demand, new stock; printed after ordering. Item code 9783031004315
Quantity: 1 available