Hands-On Explainable AI (XAI) with Python (Paperback)
Rothman, Denis
- Publisher: Packt Publishing
- Publication date: 2020-07-30
- List price: $1,850
- VIP price: 5% off, $1,758
- Language: English
- Pages: 454
- Binding: Quality Paper (also called trade paper)
- ISBN: 1800208138
- ISBN-13: 9781800208131
Related categories:
Python, Programming Languages, Artificial Intelligence
Related translation:
Python 可解釋AI(XAI)實戰 (Simplified Chinese edition)
In stock, ships immediately (stock = 1)
Product Description
Open up the black-box models in your AI applications to make them fair, trustworthy, and secure. Familiarize yourself with the basic principles and tools needed to deploy Explainable AI (XAI) in your apps and reporting interfaces.
Key Features
- Learn explainable AI tools and techniques to process trustworthy AI results
- Understand how to detect, handle, and avoid common issues with AI ethics and bias
- Integrate fair AI into popular apps and reporting tools to deliver business value using Python and associated tools
Book Description
Effectively translating AI insights to business stakeholders requires careful planning, design, and visualization choices. Describing the problem, the model, and the relationships among variables and findings is often subtle, surprising, and technically complex.
Hands-On Explainable AI (XAI) with Python will see you work through hands-on machine learning projects in Python that are strategically arranged to strengthen your grasp of AI results analysis. You will build models, interpret results with visualizations, and integrate XAI reporting tools into different applications.
You will build XAI solutions in Python, TensorFlow 2, Google Cloud's XAI platform, Google Colaboratory, and other frameworks to open up the black box of machine learning models. The book will introduce you to several open-source XAI tools for Python that can be used throughout the machine learning project life cycle.
You will learn how to explore machine learning model results, review key influencing variables and variable relationships, detect and handle bias and ethics issues, and use Python to integrate predictions and machine learning model visualizations into user-explainable interfaces.
By the end of this AI book, you will possess an in-depth understanding of the core concepts of XAI.
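To give a flavor of the open-source XAI tooling covered in the book (SHAP has its own chapter in the table of contents below), here is a minimal sketch, not taken from the book, that assumes the shap and scikit-learn packages and uses a synthetic dataset:

```python
# Minimal SHAP sketch (illustrative only, not from the book): explain a tree
# model's predictions with Shapley values on a synthetic dataset.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic data standing in for a real business dataset.
X, y = make_regression(n_samples=500, n_features=6, noise=0.1, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])

# summary_plot ranks features by their average impact on the predictions;
# this is the kind of visualization the book integrates into explainable
# reporting interfaces.
shap.summary_plot(shap_values, X[:100], show=False)
```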
What you will learn
- Plan for XAI through the different stages of the machine learning life cycle
- Estimate the strengths and weaknesses of popular open-source XAI applications
- Examine how to detect and handle bias issues in machine learning data
- Review ethics considerations and tools to address common problems in machine learning data
- Share XAI design and visualization best practices
- Integrate explainable AI results using Python models
- Use XAI toolkits for Python in machine learning life cycles to solve business problems
Who this book is for
This book is not an introduction to Python programming or machine learning concepts. You must have some foundational knowledge and/or experience with machine learning libraries such as scikit-learn to make the most out of this book.
Some of the potential readers of this book include:
- Professionals who already use Python for data science, machine learning, research, and analysis
- Data analysts and data scientists who want an introduction to explainable AI tools and techniques
- AI project managers who must address the contractual and legal obligations of AI explainability during the acceptance phase of their applications
About the Author
Denis Rothman graduated from Sorbonne University and Paris-Diderot University, writing one of the very first word2vector embedding solutions. He began his career authoring one of the first AI cognitive natural language processing (NLP) chatbots, applied as a language teacher for Moët et Chandon and other companies. He has also authored an AI resource optimizer for IBM and apparel producers. He then authored an advanced planning and scheduling (APS) solution that is used worldwide. Denis is an expert in explainable AI (XAI), having added interpretable, mandatory, acceptance-based explanation data and explanation interfaces to the solutions implemented for major corporate aerospace, apparel, and supply chain projects.
Table of Contents
- Explaining Artificial Intelligence with Python
- White Box XAI for AI Bias and Ethics
- Explaining Machine Learning with Facets
- Microsoft Azure Machine Learning Model Interpretability with SHAP
- Building an Explainable AI Solution from Scratch
- AI Fairness with Google's What-If Tool (WIT)
- A Python Client for Explainable AI Chatbots
- Local Interpretable Model-Agnostic Explanations (LIME), illustrated in the sketch after this list
- The Counterfactual Explanations Method
- Contrastive XAI
- Anchors XAI
- Cognitive XAI
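As an illustration of the LIME chapter listed above, here is a minimal sketch, not taken from the book, that assumes the lime and scikit-learn packages and uses the Iris dataset rather than the book's data:

```python
# Minimal LIME sketch (illustrative only, not from the book): explain a single
# prediction of a classifier with a local interpretable surrogate model.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(iris.data, iris.target)

# LimeTabularExplainer perturbs one instance and fits a simple linear model
# around it to approximate the classifier locally.
explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=list(iris.target_names),
    discretize_continuous=True,
)
explanation = explainer.explain_instance(
    iris.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # per-feature contributions for this prediction
```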