Mathematical Foundations of Reinforcement Learning (English Edition)
Shiyu Zhao (趙世鈺)
- Publisher: Tsinghua University Press
- Publication date: 2024-07-01
- List price: $708
- Sale price: $602 (15% off)
- Language: Simplified Chinese
- ISBN: 7302658528
- ISBN-13: 9787302658528
Related categories: Reinforcement, English
Stock is ordered after purchase (approx. 4 to 6 weeks)
Table of Contents
Overview of this Book 1
Chapter 1 Basic Concepts 6
1.1 A grid world example 7
1.2 State and action 8
1.3 State transition 9
1.4 Policy 11
1.5 Reward 13
1.6 Trajectories, returns, and episodes 15
1.7 Markov decision processes 18
1.8 Summary 20
1.9 Q&A 20
Chapter 2 State Values and the Bellman Equation 21
2.1 Motivating example 1: Why are returns important? 23
2.2 Motivating example 2: How to calculate returns? 24
2.3 State values 26
2.4 The Bellman equation 27
2.5 Examples for illustrating the Bellman equation 30
2.6 Matrix-vector form of the Bellman equation 33
2.7 Solving state values from the Bellman equation 35
2.7.1 Closed-form solution 35
2.7.2 Iterative solution 35
2.7.3 Illustrative examples 36
2.8 From state value to action value 38
2.8.1 Illustrative examples 39
2.8.2 The Bellman equation in terms of action values 40
2.9 Summary 41
2.10 Q&A 42
Chapter 3 Optimal State Values and the Bellman Optimality Equation 43
3.1 Motivating example: How to improve policies? 45
3.2 Optimal state values and optimal policies 46
3.3 The Bellman optimality equation 47
3.3.1 Maximization of the right-hand side of the BOE 48
3.3.2 Matrix-vector form of the BOE 49
3.3.3 Contraction mapping theorem 50
3.3.4 Contraction property of the right-hand side of the BOE 53
3.4 Solving an optimal policy from the BOE 55
3.5 Factors that influence optimal policies 58
3.6 Summary 63
3.7 Q&A 63
Chapter 4 Value Iteration and Policy Iteration 66
4.1 Value iteration 68
4.1.1 Elementwise form and implementation 68
4.1.2 Illustrative examples 70
4.2 Policy iteration 72
4.2.1 Algorithm analysis 73
4.2.2 Elementwise form and implementation 76
4.2.3 Illustrative examples 77
4.3 Truncated policy iteration 81
4.3.1 Comparing value iteration and policy iteration 81
4.3.2 Truncated policy iteration algorithm 83
4.4 Summary 85
4.5 Q&A 86
Chapter 5 Monte Carlo Methods 89
5.1 Motivating example: Mean estimation 91
5.2 MC Basic: The simplest MC-based algorithm 93
5.2.1 Converting policy iteration to be model-free 93
5.2.2 The MC Basic algorithm 94
5.2.3 Illustrative examples 96
5.3 MC Exploring Starts 99
5.3.1 Utilizing samples more efficiently 100
5.3.2 Updating policies more efficiently 101
5.3.3 Algorithm description 101
5.4 MC ε-Greedy: Learning without exploring starts 102
5.4.1 ε-greedy policies 103
5.4.2 Algorithm description 103
5.4.3 Illustrative examples 105
5.5 Exploration and exploitation of ε-greedy policies 106
5.6 Summary 111
5.7 Q&A 111
Chapter 6 Stochastic Approximation 114
6.1 Motivating example: Mean estimation 116
6.2 Robbins-Monro algorithm 117
6.2.1 Convergence properties 119
6.2.2 Application to mean estimation 123
6.3 Dvoretzky's convergence theorem 124
6.3.1 Proof of Dvoretzky's theorem 125
6.3.2 Application to mean estimation 126
6.3.3 Application to the Robbins-Monro theorem 127
6.3.4 An extension of Dvoretzky's theorem 127
6.4 Stochastic gradient descent 128
6.4.1 Application to mean estimation 130
6.4.2 Convergence pattern of SGD 131
6.4.3 A deterministic formulation of SGD 133
6.4.4 BGD, SGD, and mini-batch GD 134
6.4.5 Convergence of SGD 136
6.5 Summary 138
6.6 Q&A 138
Chapter 7 Temporal-Difference Methods 140
7.1 TD learning of state values 142
7.1.1 Algorithm description 142
7.1.2 Property analysis 144
7.1.3 Convergence analysis 146
7.2 TD learning of action values: Sarsa 149
7.2.1 Algorithm description 149
7.2.2 Optimal policy learning via Sarsa 151
7.3 TD learning of action values: n-step Sarsa 154
7.4 TD learning of optimal action values: Q-learning 156
7.4.1 Algorithm description 156
7.4.2 Off-policy vs. on-policy 158
7.4.3 Implementation 160
7.4.4 Illustrative examples 161
7.5 A unified viewpoint 165
7.6 Summary 165
7.7 Q&A 166
Chapter 8 Value Function Approximation 168
8.1 Value representation: From table to function 170
8.2 TD learning of state values with function approximation 174
8.2.1 Objective function 174
8.2.2 Optimization algorithms 180
8.2.3 Selection of function approximators 182
8.2.4 Illustrative examples 183
8.2.5 Theoretical analysis 187
8.3 TD learning of action values with function approximation 198
8.3.1 Sarsa with function approximation 198
8.3.2 Q-learning with function approximation 200
8.4 Deep Q-learning 201
8.4.1 Algorithm description 202
8.4.2 Illustrative examples 204
8.5 Summary 207
8.6 Q&A 207
Chapter 9 Policy Gradient Methods 211
9.1 Policy representation: From table to function 213
9.2 Metrics for defining optimal policies 214
9.3 Gradients of the metrics 219
9.3.1 Derivation of the gradients in the discounted case 221
9.3.2 Derivation of the gradients in the undiscounted case 226
9.4 Monte Carlo policy gradient (REINFORCE) 232
9.5 Summary 235
9.6 Q&A 235
Chapter 10 Actor-Critic Methods 237
10.1 The simplest actor-critic algorithm (QAC) 239
10.2 Advantage actor-critic (A2C) 240
10.2.1 Baseline invariance 240
10.2.2 Algorithm description 243
10.3 Off-policy actor-critic 244
10.3.1 Importance sampling 245
10.3.2 The off-policy policy gradient theorem 247
10.3.3 Algorithm description 249
10.4 Deterministic actor-critic 251
10.4.1 The deterministic policy gradient theorem 251
10.4.2 Algorithm description 258
10.5 Summary 259
10.6 Q&A 260
Appendix A Preliminaries for Probability Theory 262
Appendix B Measure-Theoretic Probability Theory 268
Appendix C Convergence of Sequences 276
C.1 Convergence of deterministic sequences 277
C.2 Convergence of stochastic sequences 280
Appendix D Preliminaries for Gradient Descent 284
Bibliography 290
Symbols 297
Index 299