Handbook of Learning and Approximate Dynamic Programming
Jennie Si, Andrew G. Barto, Warren Buckler Powell, Don Wunsch
- Publisher: Wiley
- Publication date: 2004-08-02
- List price: $6,600
- VIP price: 5% off, $6,270
- Language: English
- Pages: 672
- Binding: Hardcover
- ISBN: 047166054X
- ISBN-13: 9780471660545
Related categories: Artificial Intelligence, Control Systems
In stock, ships immediately (stock = 1)
Customers who bought this item also bought:
- $1,176 Optical Networks: A Practical Perspective, 2/e
- $931 Network Systems Design Using Network Processors (Paperback)
- $399 Beginning Visual C++ 6 (Paperback)
- $1,176 Computer Organization and Design: The Hardware/Software Interface, 3/e (IE) (US edition ISBN: 1558606041)
- $1,615 CCNA Cisco Certified Network Associate Study Guide, 5/e (640-801)
- $2,250 WiMAX Handbook
Product Description
Description:
Approximate dynamic programming solves decision and control problems
While advances in science and engineering have enabled us to design and build complex systems, how to control and optimize them remains a challenge. This was made clear, for example, by the major power outage across dozens of cities in the Eastern United States and Canada in August of 2003. Learning and approximate dynamic programming (ADP) is emerging as one of the most promising mathematical and computational approaches to solve nonlinear, large-scale, dynamic control problems under uncertainty. It draws heavily both on rigorous mathematics and on biological inspiration and parallels, and helps unify new developments across many disciplines.
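For readers new to the area, the dynamic-programming core that ADP approximates can be summarized by the Bellman optimality equation for a discounted stochastic control problem (a standard formulation added here for context, not a quote from the book):

$$
V^*(s) \;=\; \max_{a}\; \mathbb{E}\!\left[\, r(s,a) + \gamma\, V^*(s') \;\middle|\; s, a \,\right],
$$

where $s'$ is the random next state, $r(s,a)$ is the one-step reward, and $\gamma \in [0,1)$ is a discount factor. ADP methods replace the exact value function $V^*$, which is intractable to compute for large nonlinear systems, with a learned parametric approximation.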
The foundations of learning and approximate dynamic programming have evolved from several fields: optimal control, artificial intelligence (reinforcement learning), operations research (dynamic programming), and stochastic approximation methods (neural networks). Applications of these methods span engineering, economics, business, and computer science. In this volume, leading experts in the field summarize the latest research in areas including:
- Reinforcement learning and its relationship to supervised learning
- Model-based adaptive critic designs
- Direct neural dynamic programming
- Hierarchical decision-making
- Multistage stochastic linear programming for resource allocation problems
- Concurrency, multiagency, and partial observability
- Backpropagation through time and derivative adaptive critics
- Applications of approximate dynamic programming and reinforcement learning in control-constrained agile missiles; power systems; heating, ventilation, and air conditioning; helicopter flight control; transportation and more.
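The topics above share a common computational core: learning an approximate value function from simulated or observed transitions. The sketch below is a minimal, self-contained illustration of that idea, using TD(0) with a linear (one-hot) approximator on a toy random-walk problem; the toy problem and all of its parameters are assumptions made for this example and do not come from the book.

```python
# Illustrative sketch (not from the book): approximate a value function with a
# parametric model and update it by temporal-difference (TD(0)) learning on a
# hypothetical 1-D random walk whose goal is the right-most state.
import numpy as np

n_states = 10            # states 0..9; state 9 is the terminal "goal"
gamma = 0.95             # discount factor
alpha = 0.05             # learning rate
rng = np.random.default_rng(0)

def features(s):
    """One-hot features; a real ADP application would use a coarser approximator."""
    phi = np.zeros(n_states)
    phi[s] = 1.0
    return phi

w = np.zeros(n_states)   # weights of the linear value-function approximator

for episode in range(2000):
    s = 0
    while s != n_states - 1:
        # Assumed fixed policy: move right with prob 0.7, left with prob 0.3.
        s_next = min(s + 1, n_states - 1) if rng.random() < 0.7 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0        # reward only at the goal
        v_s = w @ features(s)
        v_next = 0.0 if s_next == n_states - 1 else w @ features(s_next)
        td_error = r + gamma * v_next - v_s               # estimated Bellman residual
        w += alpha * td_error * features(s)               # stochastic approximation step
        s = s_next

print("Learned state values:", np.round(w, 2))
```

The learned values increase toward the goal state, which is the qualitative behavior an exact dynamic-programming solution of this toy problem would also show; the point of the sketch is only how sampling plus a parametric approximator stands in for exact backups.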