- A complete resource on Approximate Dynamic Programming (ADP), including on-line simulation code
- Provides a tutorial that readers can use to start implementing the learning algorithms presented in the book
- Includes ideas, directions, and recent results on current research issues, and addresses applications where ADP has been successfully implemented
- The contributors are leading researchers in the field.

ADP, or Approximate Dynamic Programming, has gone by many different names, including Reinforcement Learning (RL), Adaptive Critics (AC), and Neuro-Dynamic Programming (NDP). The dynamic programming approach to decision and control problems involving nonlinear dynamic systems provides the optimal solution in any stochastic or uncertain environment. This handbook presents general-purpose programming tools for doing optimization over time, using learning and approximation to handle problems that severely challenge conventional methods.
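To make the "learning and approximation" theme concrete, here is a minimal sketch of TD(0) with linear function approximation (the subject of Chapter 9) on a toy random-walk task. The task, constants, and helper names are illustrative assumptions, not code from the handbook.

```python
# Minimal illustrative sketch (not from the book): TD(0) with linear
# function approximation, the basic "learning + approximation" recipe
# shared by ADP, RL, and adaptive-critic methods.
import random

N_STATES = 5   # non-terminal states 1..5 of a random walk; 0 and 6 are terminal
GAMMA = 1.0    # undiscounted episodic task
ALPHA = 0.05   # step size

def features(s):
    """One-hot feature vector for state s (the tabular case as a linear model)."""
    phi = [0.0] * N_STATES
    if 1 <= s <= N_STATES:
        phi[s - 1] = 1.0
    return phi

# Weights of the linear value approximation v(s) = w . phi(s)
w = [0.0] * N_STATES

def value(s):
    return sum(wi * fi for wi, fi in zip(w, features(s)))

for episode in range(5000):
    s = 3  # start in the middle
    while 1 <= s <= N_STATES:
        s_next = s + random.choice((-1, 1))          # random-walk dynamics
        r = 1.0 if s_next == N_STATES + 1 else 0.0   # reward only at the right exit
        v_next = value(s_next) if 1 <= s_next <= N_STATES else 0.0
        # TD(0) update: w <- w + alpha * (r + gamma * v(s') - v(s)) * phi(s)
        delta = r + GAMMA * v_next - value(s)
        phi = features(s)
        w = [wi + ALPHA * delta * fi for wi, fi in zip(w, phi)]
        s = s_next

# True values for this walk are 1/6, 2/6, ..., 5/6
print([round(value(s), 2) for s in range(1, N_STATES + 1)])
```

With a one-hot feature map this reduces to the tabular case; replacing it with a compact feature vector over a large state space gives the approximate setting the handbook addresses.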
JENNIE SI is Professor of Electrical Engineering, Arizona State University, Tempe, AZ. She is director of the Intelligent Systems Laboratory, which focuses on the analysis and design of learning and adaptive systems. In addition to her own publications, she is an Associate Editor for IEEE Transactions on Neural Networks, and a past Associate Editor for IEEE Transactions on Automatic Control and IEEE Transactions on Semiconductor Manufacturing. She was co-chair of the 2002 NSF Workshop on Learning and Approximate Dynamic Programming. ANDREW G. BARTO is Professor of Computer Science, University of Massachusetts, Amherst. He is co-director of the Autonomous Learning Laboratory, which carries out interdisciplinary research on machine learning and modeling of biological learning. He is a core faculty member of the Neuroscience and Behavior Program of the University of Massachusetts and was co-chair of the 2002 NSF Workshop on Learning and Approximate Dynamic Programming. He currently serves as an associate editor of Neural Computation. WARREN B. POWELL is Professor of Operations Research and Financial Engineering at Princeton University. He is director of CASTLE Laboratory, which focuses on real-time optimization of complex dynamic systems arising in transportation and logistics. DONALD C. WUNSCH is the Mary K. Finley Missouri Distinguished Professor in the Electrical and Computer Engineering Department at the University of Missouri, Rolla. He heads the Applied Computational Intelligence Laboratory, holds a joint appointment in Computer Science, and is President-Elect of the International Neural Networks Society.
Table of Contents
Foreword.
1. ADP: goals, opportunities and principles.
Part I: Overview.
2. Reinforcement learning and its relationship to supervised learning.
3. Model-based adaptive critic designs.
4. Guidance in the use of adaptive critics for control.
5. Direct neural dynamic programming.
6. The linear programming approach to approximate dynamic programming.
7. Reinforcement learning in large, high-dimensional state spaces.
8. Hierarchical decision making.
Part II: Technical advances.
9. Improved temporal difference methods with linear function approximation.
10. Approximate dynamic programming for high-dimensional resource allocation problems.
11. Hierarchical approaches to concurrency, multiagency, and partial observability.
12. Learning and optimization - from a system theoretic perspective.
13. Robust reinforcement learning using integral-quadratic constraints.
14. Supervised actor-critic reinforcement learning.
15. BPTT and DAC - a common framework for comparison.
Part III: Applications.
16. Near-optimal control via reinforcement learning.
17. Multiobjective control problems by reinforcement learning.
18. Adaptive critic based neural network for control-constrained agile missile.
19. Applications of approximate dynamic programming in power systems control.
20. Robust reinforcement learning for heating, ventilation, and air conditioning control of buildings.
21. Helicopter flight control using direct neural dynamic programming.
22. Toward dynamic stochastic optimal power flow.
23. Control, optimization, security, and self-healing of benchmark power systems.