Vikram Krishnamurthy
Partially Observed Markov Decision Processes
From Filtering to Controlled Sensing
- Hardcover
This book covers formulation, algorithms, and structural results of partially observed Markov decision processes, linking theory to real-world applications in controlled sensing.
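As a rough illustration of the belief-state (HMM filter) recursion at the heart of the "from filtering to controlled sensing" theme, here is a minimal NumPy sketch. The two-state transition matrix P, the observation likelihoods B, and the observation sequence are made-up illustrative values, not taken from the book.

```python
import numpy as np

def hmm_filter(belief, P, B, y):
    """One Bayesian belief update: predict with the transition matrix P,
    then correct with the likelihood of observation y (column y of B),
    and renormalize to a probability vector."""
    unnormalized = B[:, y] * (P.T @ belief)
    return unnormalized / unnormalized.sum()

# Hypothetical two-state example (values are assumptions for illustration).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])      # Markov chain transition probabilities
B = np.array([[0.8, 0.2],
              [0.3, 0.7]])      # observation likelihoods (states x observations)
belief = np.array([0.5, 0.5])   # uniform prior over the two states

for y in [0, 1, 1]:             # a short, made-up observation sequence
    belief = hmm_filter(belief, P, B, y)
    print(belief)
```

In a POMDP this belief vector is the information state on which the controller acts; in controlled sensing, the chosen action would additionally select which observation likelihoods apply at each step.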
Other customers were also interested in
- P. P. Vaidyanathan, Signal Processing and Optimization for Transceiver Systems, 160,99 €
- William A. Pearlman, Digital Signal Compression, 105,99 €
- Hyeong Soo Chang, Simulation-Based Algorithms for Markov Decision Processes, 75,99 €
- Abhi Naha, Essentials of Mobile Handset Design, 54,99 €
- Junyi Li, OFDMA Mobile Broadband Communications, 123,99 €
- Azadeh Kushki, WLAN Positioning Systems, 96,99 €
Product details
- Publisher: Cambridge University Press
- Number of pages: 488
- Publication date: 21 March 2016
- Language: English
- Dimensions: 250mm x 175mm x 31mm
- Weight: 1020g
- ISBN-13: 9781107134607
- ISBN-10: 1107134609
- Item no.: 44263413
- Manufacturer information
- Libri GmbH
- Europaallee 1
- 36244 Bad Hersfeld
- gpsr@libri.de
Vikram Krishnamurthy is a Professor and Canada Research Chair in Statistical Signal Processing at the University of British Columbia, Vancouver. His research contributions focus on nonlinear filtering, stochastic approximation algorithms and POMDPs. Dr Krishnamurthy is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and served as a distinguished lecturer for the IEEE Signal Processing Society. In 2013, he received an honorary doctorate from KTH, Royal Institute of Technology, Sweden.
Preface
1. Introduction
Part I. Stochastic Models and Bayesian Filtering: 2. Stochastic state-space models
3. Optimal filtering
4. Algorithms for maximum likelihood parameter estimation
5. Multi-agent sensing: social learning and data incest
Part II. Partially Observed Markov Decision Processes: Models and Algorithms: 6. Fully observed Markov decision processes
7. Partially observed Markov decision processes (POMDPs)
8. POMDPs in controlled sensing and sensor scheduling
Part III. Partially Observed Markov Decision Processes: 9. Structural results for Markov decision processes
10. Structural results for optimal filters
11. Monotonicity of value function for POMDPs
12. Structural results for stopping time POMDPs
13. Stopping time POMDPs for quickest change detection
14. Myopic policy bounds for POMDPs and sensitivity to model parameters
Part IV. Stochastic Approximation and Reinforcement Learning: 15. Stochastic optimization and gradient estimation
16. Reinforcement learning
17. Stochastic approximation algorithms: examples
18. Summary of algorithms for solving POMDPs
Appendix A. Short primer on stochastic simulation
Appendix B. Continuous-time HMM filters
Appendix C. Markov processes
Appendix D. Some limit theorems
Bibliography
Index.