Vikram Krishnamurthy (Cornell University, New York)
Partially Observed Markov Decision Processes
- Hardcover
This survey of formulation, algorithms, and structural results in POMDPs focuses on underlying concepts and connections to real-world applications in controlled sensing, keeping technical machinery to a minimum. The new edition includes inverse reinforcement learning, non-parametric Bayesian inference, variational Bayes and conformal prediction.
Other customers were also interested in
- Alexey B. Piunovskiy (The University of Liverpool, UK): Counterexamples in Markov Decision Processes, 136,99 €
- Theodore J. Sheskin: Markov Chains and Decision Processes for Engineers and Managers, 71,99 €
- James R. Kirkwood: Markov Processes, 81,99 €
- Gregory F. Lawler (University of Chicago, Illinois, USA): Introduction to Stochastic Processes, 117,99 €
- 5G Mobile and Wireless Communications Technology, 123,99 €
- Eldad Perahia: Next Generation Wireless LANs, 92,99 €
- Teik-Cheng Lim (Singapore University of Social Sciences): A Partially Auxetic Metamaterial Inspired by the Maltese Cross, 22,99 €
Product Details
- Publisher: Cambridge University Press
- 2nd revised edition
- Page count: 652
- Publication date: April 28, 2025
- Language: English
- Dimensions: 260mm x 183mm x 39mm
- Weight: 1372g
- ISBN-13: 9781009449434
- ISBN-10: 1009449435
- Item no.: 72879217
- Manufacturer information
- Libri GmbH
- Europaallee 1
- 36244 Bad Hersfeld
- gpsr@libri.de
Vikram Krishnamurthy is Professor of Electrical and Computer Engineering at Cornell University. From 2002 to 2016, he was Professor and Senior Canada Research Chair in Statistical Signal Processing at the University of British Columbia. His research contributions are in statistical signal processing and stochastic optimization and control, with applications in social networks, adaptive radar systems and biological ion channels. He is a Fellow of the IEEE and has served as a Distinguished Lecturer for the IEEE Signal Processing Society and as Editor-in-Chief of the IEEE Journal of Selected Topics in Signal Processing. He was awarded an honorary doctorate by the Royal Institute of Technology (KTH), Sweden, in 2014.
Preface to revised edition
Notation
1. Introduction
I. Stochastic Models and Bayesian Filtering: 2. Stochastic state space model
3. Optimal filtering
4. Algorithms for maximum likelihood parameter estimation
5. Multi-agent sensing: social learning and data incest
6. Nonparametric Bayesian inference
II. POMDPs: Models and Applications: 7. Fully observed Markov decision processes
8. Partially observed Markov decision processes
9. POMDPs in controlled sensing and sensor scheduling
III. POMDP Structural Results: 10. Structural results for Markov decision processes
11. Structural results for optimal filters
12. Monotonicity of value function for POMDPs
13. Structural results for stopping-time POMDPs
14. Stopping-time POMDPs for quickest detection
15. Myopic policy bounds for POMDPs and sensitivity to model parameters
IV. Stochastic Gradient Algorithms and Reinforcement Learning: 16. Stochastic optimization and gradient estimation
17. Reinforcement learning
18. Stochastic gradient algorithms: convergence analysis
19. Discrete stochastic optimization
V. Inverse Reinforcement Learning: 20. Revealed preferences for inverse reinforcement learning
21. Bayesian inverse reinforcement learning
Appendix A. Short primer on stochastic simulation
Appendix B. Continuous-time HMM filters
Appendix C. Discrete-time martingales
Appendix D. Markov processes
Appendix E. Some limit theorems in statistics
Appendix F. Summary of POMDP algorithms
Bibliography
Index.