Explainable AI with Python
Tentative Chinese title: 使用 Python 的可解釋人工智慧

Gianfagna, Leonida; Di Cecco, Antonio

Product Description

This book provides a full presentation of the current concepts and available techniques for making machine learning systems more explainable. The approaches presented can be applied to almost all current machine learning models: linear and logistic regression, deep-learning neural networks, natural language processing, and image recognition, among others.
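
To give a flavor of the "intrinsically interpretable" models the book starts from, below is a minimal sketch, not taken from the book: a logistic regression whose fitted coefficients can be read directly as an explanation. The scikit-learn pipeline and breast-cancer dataset are illustrative assumptions made here.

```python
# A minimal sketch of an intrinsically interpretable model: a logistic
# regression whose standardized coefficients double as feature effects.
# Dataset and pipeline are assumptions made for illustration only.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# Each coefficient is the change in log-odds per standard deviation of
# its feature -- the explanation comes "for free" from the model itself.
coefs = model.named_steps["logisticregression"].coef_[0]
for name, w in sorted(zip(X.columns, coefs), key=lambda t: -abs(t[1]))[:5]:
    print(f"{name}: {w:+.2f}")
```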

Progress in machine learning is increasing the use of artificial agents to perform critical tasks previously handled by humans (in healthcare, law, and finance, among others). While the principles that guide the design of these agents are understood, most of the current deep-learning models are "opaque" to human understanding. Explainable AI with Python fills the current gap in the literature on this emerging topic by taking both a theoretical and a practical perspective, quickly equipping the reader to work with the tools and code of Explainable AI.

Beginning with examples of what Explainable AI (XAI) is and why it is needed in the field, the book details different approaches to XAI depending on the specific context and need. Hands-on work on interpretable models, with specific examples in Python, is then presented, showing how intrinsically interpretable models can be interpreted and how to produce "human understandable" explanations. Model-agnostic methods for XAI are shown to produce explanations without relying on the internals of "opaque" ML models. Using examples from computer vision, the authors then look at explainable models for deep learning and at prospective methods for the future. Taking a practical perspective, the authors demonstrate how to use ML and XAI effectively in science. The final chapter explains adversarial machine learning and how to do XAI with adversarial examples.
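
As a taste of the model-agnostic methods mentioned above, here is a minimal sketch, again not taken from the book, using permutation importance: the fitted model is treated as a black box, and each feature's importance is measured by how much shuffling it degrades the model's score. The random-forest model and wine dataset are illustrative assumptions.

```python
# A minimal sketch of a model-agnostic explanation: permutation
# importance needs only predict/score, never the model's internals.
# Model and dataset are assumptions made for illustration only.
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature on held-out data and record the score drop:
# the bigger the drop, the more the "opaque" model relied on it.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```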

About the Authors

Leonida Gianfagna (PhD, MBA) is a theoretical physicist currently working in cybersecurity as R&D Director for Cyber Guru. Before joining Cyber Guru, he worked at IBM for 15 years, holding leading roles in software development in ITSM (IT Service Management). He is the author of several publications in theoretical physics and computer science and is accredited as an IBM Master Inventor (15+ filings).

Antonio Di Cecco is a theoretical physicist with a strong mathematical background who is fully engaged in delivering education on AI/ML at all levels, from beginners to experts, both in face-to-face classes and remotely. The main strength of his approach is its deep dive into the mathematical foundations of AI/ML models, which opens new angles for presenting AI/ML knowledge and for improving on the existing state of the art. Antonio also holds a Master in Economics with a focus on innovation, and has teaching experience. He leads School of AI in Italy, with chapters in Rome and Pescara.
