Integral and Inverse Reinforcement Learning for Optimal Control Systems and Games
Bosen Lian, Wenqian Xue, Frank L. Lewis
- Publisher: Springer
- Publication date: 2024-03-06
- List price: $6,290
- VIP price: 5% off, $5,976
- Language: English
- Pages: 267
- Binding: Hardcover (also called cloth, retail trade, or trade)
- ISBN: 3031452518
- ISBN-13: 9783031452512
- Related categories: Reinforcement Learning, Control Systems, Deep Learning
Imported title ordered from overseas (requires separate checkout)
Product Description
Integral and Inverse Reinforcement Learning for Optimal Control Systems and Games develops these learning techniques with both breadth and depth, motivated by applications to autonomous driving and microgrid systems. Integral reinforcement learning (RL) achieves model-free control without the system estimation, and its inevitable estimation errors, required by system identification methods; the novel inverse RL methods fill a gap for readers seeking data-driven, model-free solutions to inverse optimization and optimal control, imitation learning, and autonomous driving, among other areas.
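To make the model-free claim concrete, the sketch below (an illustration under assumed values, not an algorithm reproduced from the book) runs on-policy integral RL policy iteration on a small continuous-time LQR problem: the critic parameters are fit from measured states and accumulated stage costs via the integral Bellman equation, the plant matrix A is used only to simulate data, and only the input matrix B enters the policy-improvement step. The two-state plant, cost weights, horizon, and helper names (rollout, phi) are hypothetical.

```python
import numpy as np

# Hypothetical plant, used only to *generate* trajectory data; the learner never reads A.
A = np.array([[0.0, 1.0], [-3.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                               # assumed state and input cost weights
R = np.array([[1.0]])

def rk4_step(f, z, h):
    k1 = f(z); k2 = f(z + 0.5*h*k1); k3 = f(z + 0.5*h*k2); k4 = f(z + h*k3)
    return z + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

def rollout(x0, K, T, h=1e-3):
    """Simulate the closed loop under u = -K x and accumulate the stage cost."""
    def f(z):
        x = z[:2]
        u = -K @ x
        dx = A @ x + B @ u
        dc = x @ Q @ x + u @ R @ u
        return np.concatenate([dx, [dc]])
    z = np.concatenate([x0, [0.0]])
    for _ in range(int(round(T / h))):
        z = rk4_step(f, z, h)
    return z[:2], z[2]                      # x(T) and the integral of the cost over [0, T]

def phi(x):
    # Quadratic features so that x' P x = phi(x) @ [p11, p12, p22]
    return np.array([x[0]**2, 2.0*x[0]*x[1], x[1]**2])

K = np.zeros((1, 2))                        # initial stabilizing gain (this example plant is open-loop stable)
rng = np.random.default_rng(0)
for it in range(6):
    rows, rhs = [], []
    for _ in range(12):                     # several short rollouts give a well-posed regression
        x0 = rng.uniform(-1.0, 1.0, 2)
        xT, cost = rollout(x0, K, T=0.5)
        # Integral Bellman equation: phi(x(0)) @ theta = cost + phi(x(T)) @ theta
        rows.append(phi(x0) - phi(xT))
        rhs.append(cost)
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    P = np.array([[theta[0], theta[1]], [theta[1], theta[2]]])
    K = np.linalg.solve(R, B.T @ P)         # policy improvement: uses B, never A
    print(f"iteration {it}: K = {K.ravel()}")
```

The regression enforces x(t)' P x(t) = integral over [t, t+T] of (x'Qx + u'Ru) plus x(t+T)' P x(t+T) along closed-loop trajectories; inverse RL, by contrast, would treat the weights Q and R as the unknowns to be recovered from demonstrated behavior.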
Graduate students will find that this book offers a thorough introduction to integral and inverse RL for feedback control related to optimal regulation and tracking, disturbance rejection, and multiplayer and multiagent systems. For researchers, it provides a combination of theoretical analysis, rigorous algorithms, and a wide-ranging selection of examples. The book equips practitioners working in various domains, including aircraft, robotics, power systems, and communication networks, with theoretical insights valuable for tackling the real-world challenges they face.
About the Authors
Bosen Lian obtained his B.S. degree from the North China University of Water Resources and Electric Power, Zhengzhou, China, in 2015, the M.S. degree from Northeastern University, Shenyang, China, in 2018, and the Ph.D. from the University of Texas at Arlington, TX, USA, in 2021. He is currently an Assistant Professor at the Electrical and Computer Engineering Department, Auburn University, Auburn, AL, USA. Prior to that, he was an Adjunct Professor at the Electrical Engineering Department, University of Texas at Arlington and a Postdoctoral Research Associate at the University of Texas at Arlington Research Institute. His research interests focus on reinforcement learning, inverse reinforcement learning, distributed estimation, distributed control, and robotics.
Wenqian Xue received the B.Eng. degree from Qingdao University, Qingdao, China, in 2015 and the M.S.E. degree from Northeastern University, Shenyang, China, in 2018, where she is currently pursuing the Ph.D. degree. She was a Research Assistant (Visiting Scholar) with the University of Texas at Arlington from 2019 to 2021. Her current research interests include learning-based data-driven control, reinforcement learning and inverse reinforcement learning, game theory, and distributed control of multi-agent systems. She is a reviewer for Automatica, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Cybernetics, etc.
Frank L. Lewis obtained the Bachelor's degree in Physics/EE and the MSEE at Rice University, the M.S. in Aeronautical Engineering from the University of West Florida, and the Ph.D. at Georgia Tech. He is a Fellow of the National Academy of Inventors, IEEE, IFAC, AAAS, the European Union Academy of Science, and the U.K. Institute of Measurement & Control, a Professional Engineer in Texas, and a U.K. Chartered Engineer. He is UTA Charter Distinguished Scholar Professor, UTA Distinguished Teaching Professor, and holder of the Moncrief-O'Donnell Chair at the University of Texas at Arlington Research Institute. Lewis is ranked number 19 in the world among all scientists in Electronics and Electrical Engineering by Research.com, and number 5 in the world in the subfield of Industrial Engineering and Automation according to a 2021 Stanford University research study, with 80,000 Google Scholar citations and an H-index of 123. He works in feedback control, intelligent systems, reinforcement learning, cooperative control systems, and nonlinear systems. He is the author of 8 U.S. patents, numerous journal special issues, 445 journal papers, and 20 books, including the textbooks Optimal Control, Aircraft Control, Optimal Estimation, and Robot Manipulator Control. He received the Fulbright Research Award, NSF Research Initiation Grant, ASEE Terman Award, International Neural Network Society Gabor Award, U.K. Institute of Measurement & Control Honeywell Field Engineering Medal, IEEE Computational Intelligence Society Neural Networks Pioneer Award, AIAA Intelligent Systems Award, and AACC Ragazzini Award. He has received over $12M in 100 research grants from NSF, ARO, ONR, AFOSR, DARPA, and U.S. industry contracts, and as Director of the UTA Research Institute SBIR Program he helped win the US SBA Tibbets Award in 1996.
Hamidreza Modares received the B.S. degree from the University of Tehran, Tehran, Iran, in 2004, the M.S. degree from the Shahrood University of Technology, Shahrood, Iran, in 2006, and the Ph.D. degree from the University of Texas at Arlington, Arlington, TX, USA, in 2015. He is currently an Assistant Professor in the Department of Mechanical Engineering at Michigan State University. Prior to joining Michigan State University, he was an Assistant Professor in the Department of Electrical Engineering, Missouri University of Science and Technology. His current research interests include control and security of cyber-physical systems, machine learning in control, distributed control of multi-agent systems, and robotics. He is an Associate Editor of IEEE Transactions on Neural Networks and Learning Systems.
Bahare Kiumarsi received the B.S. degree in electrical engineering from the Shahrood University of Technology, Iran, in 2009, the M.S. degree in electrical engineering from the Ferdowsi University of Mashhad, Iran, in 2013, and the Ph.D. degree in electrical engineering from the University of Texas at Arlington, Arlington, TX, USA, in 2017. In 2018, she was a Post-Doctoral Research Associate with the Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL, USA. She is currently an Assistant Professor with the Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI, USA. Her current research interests include machine learning in control, security of cyber-physical systems, game theory, and distributed control.