Explainable AI: Interpreting, Explaining and Visualizing Deep Learning
Tentative Chinese title: 可解釋的人工智慧:深度學習的詮釋、解釋與視覺化

Samek, Wojciech; Montavon, Grégoire; Vedaldi, Andrea

  • Publisher: Springer
  • Publication date: 2019-08-30
  • List price: $4,130
  • VIP price: $3,924 (95% of list)
  • Language: English
  • Pages: 439
  • Binding: Quality Paper (trade paperback)
  • ISBN: 3030289532
  • ISBN-13: 9783030289539
  • Related categories: DeepLearning
  • Imported book (must be checked out separately)

Product Description

The development of "intelligent" systems that can take decisions and act autonomously promises faster and more consistent decision-making. A limiting factor for broader adoption of AI technology, however, is the inherent risk of ceding human control and oversight to "intelligent" machines. For sensitive tasks involving critical infrastructure or affecting human well-being and health, it is crucial to limit the possibility of improper, non-robust, and unsafe decisions and actions. Before deploying an AI system, there is therefore a strong need to validate its behavior and establish guarantees that it will continue to perform as expected in a real-world environment. In pursuit of this objective, ways for humans to verify the agreement between an AI system's decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI focused on exposing complex AI models to humans in a systematic and interpretable manner.

The 22 chapters included in this book provide a timely snapshot of the algorithms, theory, and applications of interpretable and explainable AI techniques proposed in recent years, reflecting the current discourse in the field and pointing to directions for future development. The book is organized in six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI.
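To give a flavor of the attribution methods covered under "explaining the decisions of AI systems", here is a minimal sketch of gradient×input attribution on a toy linear-sigmoid model. This example is illustrative only and is not taken from the book; the model weights and inputs are hypothetical values chosen for demonstration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gradient_x_input(w, b, x):
    """Attribute the output of f(x) = sigmoid(w.x + b) to each input feature.

    Since df/dx_i = sigmoid'(z) * w_i, the gradient*input relevance of
    feature i is x_i * w_i * sigmoid'(z). Features with larger absolute
    relevance contributed more strongly to this particular prediction.
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    s = sigmoid(z)
    grad_scale = s * (1.0 - s)  # derivative of the sigmoid at z
    return [xi * wi * grad_scale for wi, xi in zip(w, x)]

# Toy model and input (hypothetical values, for illustration only).
w = [2.0, -1.0, 0.5]
b = 0.1
x = [1.0, 3.0, 0.0]

relevances = gradient_x_input(w, b, x)
# relevances[0] is positive (feature pushes the output up),
# relevances[1] is negative, and relevances[2] is zero since x[2] = 0.
```

For deep networks, the same idea is applied via automatic differentiation, and the book surveys more refined relevance-propagation schemes that address the shortcomings of raw gradients.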
