This book is concerned with a class of discrete-time stochastic control processes known as controlled Markov processes (CMP's), also known as Markov decision processes or Markov dynamic programs. Starting in the mid-1950s with Richard Bellman, many contributions to CMP's have been made, and applications to engineering, statistics, and operations research, among other areas, have also been developed. The purpose of this book is to present some recent developments in the theory of adaptive CMP's, i.e., CMP's that depend on unknown parameters. Thus at each decision time, the controller or decision-maker must estimate the true parameter values and then adapt the control actions to the estimated values. We do not intend to describe all aspects of stochastic adaptive control; rather, the selection of material reflects our own research interests. The prerequisite for this book is a knowledge of real analysis and probability theory at the level of, say, Ash (1972) or Royden (1968), but no previous knowledge of control or decision processes is required. The presentation, on the other hand, is meant to be self-contained, in the sense that whenever a result from analysis or probability is used, it is usually stated in full, and references are supplied for further discussion if necessary. Several appendices are provided for this purpose. The material is divided into six chapters. Chapter 1 contains the basic definitions of the stochastic control problems we are interested in; a brief description of some applications is also provided.
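The estimate-then-adapt loop described above is the certainty-equivalence idea that the table of contents calls the "principle of estimation and control". As a rough illustration of the outer loop only, here is a minimal Python sketch on an invented two-action example; the model, the unknown Bernoulli parameter, the payoff values, and the 1/t exploration rule are assumptions of this sketch, not material from the book.

```python
import random

# Illustrative sketch only: the two-action model, the Bernoulli parameter,
# the payoffs, and the 1/t exploration rule are invented for illustration;
# none of this is taken from the book.

random.seed(0)

TRUE_P = 0.7  # unknown parameter the controller must estimate online

def step(action, p):
    """Environment response: 'risky' pays 1 with probability p, else 0; 'safe' pays 0.4."""
    if action == "safe":
        return 0.4
    return 1.0 if random.random() < p else 0.0

successes, trials = 0.0, 0
total_reward = 0.0

for t in range(1, 201):
    # 1) Estimate the unknown parameter from the observations collected so far.
    p_hat = successes / trials if trials > 0 else 0.5
    # 2) Adapt the action to the estimated value (certainty equivalence),
    #    with occasional exploration so the estimate keeps improving.
    explore = random.random() < 1.0 / t
    action = "risky" if (explore or p_hat > 0.4) else "safe"
    reward = step(action, TRUE_P)
    total_reward += reward
    # 3) Update the statistics that feed the estimator.
    if action == "risky":
        trials += 1
        successes += reward  # reward is 0.0 or 1.0 for the risky action

p_hat = successes / trials if trials > 0 else 0.5
print(f"estimated p = {p_hat:.3f}, average reward = {total_reward / 200:.3f}")
```

The book develops this idea, together with nonstationary value iteration, for general controlled Markov processes under discounted and average reward criteria; the sketch mirrors only the pattern of re-estimating the unknown parameter at each decision time and then acting as if the estimate were the true value.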
The information in the "Summary" section may refer to different editions of this title.
1 Controlled Markov Processes
  1.1 Introduction
  1.2 Stochastic Control Problems
      Control Models
      Policies
      Performance Criteria
      Control Problems
  1.3 Examples
      An Inventory/Production System
      Control of Water Reservoirs
      Fisheries Management
      Nonstationary MCM’s
      Semi-Markov Control Models
  1.4 Further Comments
2 Discounted Reward Criterion
  2.1 Introduction
      Summary
  2.2 Optimality Conditions
      Continuity of v*
  2.3 Asymptotic Discount Optimality
  2.4 Approximation of MCM’s
      Nonstationary Value-Iteration
      Finite-State Approximations
  2.5 Adaptive Control Models
      Preliminaries
      Nonstationary Value-Iteration
      The Principle of Estimation and Control
      Adaptive Policies
  2.6 Nonparametric Adaptive Control
      The Parametric Approach
      New Setting
      The Empirical Distribution Process
      Nonparametric Adaptive Policies
  2.7 Comments and References
3 Average Reward Criterion
  3.1 Introduction
      Summary
  3.2 The Optimality Equation
  3.3 Ergodicity Conditions
  3.4 Value Iteration
      Uniform Approximations
      Successive Averagings
  3.5 Approximating Models
  3.6 Nonstationary Value Iteration
      Nonstationary Successive Averagings
      Discounted-Like NVI
  3.7 Adaptive Control Models
      Preliminaries
      The Principle of Estimation and Control (PEC)
      Nonstationary Value Iteration (NVI)
  3.8 Comments and References
4 Partially Observable Control Models
  4.1 Introduction
      Summary
  4.2 PO-CM: Case of Known Parameters
      The PO Control Problem
  4.3 Transformation into a CO Control Problem
      I-Policies
      The New Control Model
  4.4 Optimal I-Policies
  4.5 PO-CM’s with Unknown Parameters
      PEC and NVI I-Policies
  4.6 Comments and References
5 Parameter Estimation in MCM’s
  5.1 Introduction
      Summary
  5.2 Contrast Functions
  5.3 Minimum Contrast Estimators
  5.4 Comments and References
6 Discretization Procedures
  6.1 Introduction
      Summary
  6.2 Preliminaries
  6.3 The Non-Adaptive Case
      A Non-Recursive Procedure
      A Recursive Procedure
  6.4 Adaptive Control Problems
      Preliminaries
      Discretization of the PEC Adaptive Policy
      Discretization of the NVI Adaptive Policy
  6.5 Proofs
      The Non-Adaptive Case
      The Adaptive Case
  6.6 Comments and References
Appendix A. Contraction Operators
Appendix B. Probability Measures
  Total Variation Norm
  Weak Convergence
Appendix C. Stochastic Kernels
Appendix D. Multifunctions and Measurable Selectors
  The Hausdorff Metric
  Multifunctions
References
Author Index
The information in the "About this book" section may refer to different editions of this title.
EUR 24,00 shipping from Germany to the U.S.A.
EUR 3,40 shipping within the U.S.A.
From: Lucky's Textbooks, Dallas, TX, U.S.A.
Condition: New. Seller inventory # ABLIING23Feb2215580174916
Quantity: More than 20 available
From: Best Price, Torrance, CA, U.S.A.
Condition: New. SUPER FAST SHIPPING. Seller inventory # 9780387969664
Quantity: 1 available
From: NEPO UG, Rüsselsheim am Main, Germany
Condition: Good. Edition: 1989. 148 pages. Copy from an academic library. Language: English. Weight in grams: 469. 24,3 x 16,2 x 1,6 cm, hardcover. Seller inventory # 400676
Quantity: 1 available
From: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. In. Seller inventory # ria9780387969664_new
Quantity: More than 20 available
From: Rarewaves.com USA, London, LONDO, United Kingdom
Hardback. Condition: New. 1989 ed. Seller inventory # LU-9780387969664
Quantity: More than 20 available
From: THE SAINT BOOKSTORE, Southport, United Kingdom
Hardback. Condition: New. This item is printed on demand. New copy - usually dispatched within 5-9 working days. Seller inventory # C9780387969664
Quantity: More than 20 available
From: moluna, Greven, Germany
Hardcover. Condition: New. Seller inventory # 458432933
Quantity: More than 20 available
From: Rarewaves.com UK, London, United Kingdom
Hardback. Condition: New. 1989 ed. Seller inventory # LU-9780387969664
Quantity: More than 20 available
From: AHA-BUCH GmbH, Einbeck, Germany
Book. Condition: New. New stock. Seller inventory # 9780387969664
Quantity: 2 available