Contents

1 Introduction
  1.1 What is Deep Reinforcement Learning?
  1.2 Three Machine Learning Paradigms
  1.3 Overview of the Book
2 Tabular Value-Based Methods
  2.1 Sequential Decision Problems
  2.2 Tabular Value-Based Agents
  2.3 Classic Gym Environments
  2.4 Summary and Further Reading
  2.5 Exercises
3 Approximating the Value Function
  3.1 Large, High-Dimensional, Problems
  3.2 Deep Value-Based Agents
  3.3 Atari 2600 Environments
  3.4 Summary and Further Reading
  3.5 Exercises
4 Policy-Based Methods
  4.1 Continuous Problems
  4.2 Policy-Based Agents
  4.3 Locomotion and Visuo-Motor Environments
  4.4 Summary and Further Reading
  4.5 Exercises
5 Model-Based Methods
  5.1 Dynamics Models of High-Dimensional Problems
  5.2 Learning and Planning Agents
  5.3 High-dimensional Environments
  5.4 Summary and Further Reading
  5.5 Exercises
6 Two-Agent Reinforcement Learning
  6.1 Two-Agent Zero-Sum Problems
  6.2 Tabula Rasa Self-Play Agents
  6.3 Self-Play Environments
  6.4 Summary and Further Reading
  6.5 Exercises
7 Multi-Agent Reinforcement Learning
Author: Aske Plaat
Publisher: Springer
Published: 06/12/2022
Pages: 406
Binding Type: Paperback
Weight: 1.30lbs
Size: 9.21h x 6.14w x 0.86d
ISBN13: 9789811906374
ISBN10: 9811906378
BISAC Categories:
- Computers | Artificial Intelligence | General
- Computers | Computer Science

About the Author
Aske Plaat is a Professor of Data Science at Leiden University and scientific director of the Leiden Institute of Advanced Computer Science (LIACS). He is co-founder of the Leiden Centre of Data Science (LCDS) and initiated SAILS, a multidisciplinary program on artificial intelligence. His research interests include reinforcement learning, combinatorial games, and self-learning systems. He is the author of Learning to Play (published by Springer in 2020), which covers reinforcement learning and games.
This title is not returnable