Practical Explainable AI Using Python: Artificial Intelligence Model Explanations Using Python-based Libraries, Extensions, and Frameworks
Chinese title (tentative): 實用的可解釋人工智慧:使用Python的人工智慧模型解釋與相關函式庫、擴展及框架
Mishra, Pradeepta
Product Description
Chapter 1: Introduction to Model Explainability and Interpretability
Chapter Goal: This chapter explains what model explainability and interpretability are, with illustrations in Python.
No of pages: 30-40
Chapter 2: AI Ethics, Bias, and Reliability
Chapter Goal: This chapter covers different frameworks that use XAI Python libraries to control bias, apply reliability principles, and maintain ethics while generating predictions.
No of pages: 30-40
Chapter 3: Model Explainability for Linear Models Using XAI Components
Chapter Goal: This chapter explains how to use LIME, SKATER, SHAP, and other libraries to explain the decisions made by linear models in supervised learning tasks on structured data.
No of pages: 30-40
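Before reaching for LIME or SHAP, note that a linear model already carries an exact explanation of its own: each feature's contribution to the decision score is its coefficient times its value. A minimal sketch using scikit-learn on synthetic data (the LIME/SKATER/SHAP APIs themselves are covered in the chapter):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)  # feature 0 dominates the label

model = LogisticRegression().fit(X, y)

# For a linear model the decision score decomposes exactly:
# score(x) = intercept + sum_i coef_i * x_i
x = X[0]
contributions = model.coef_[0] * x          # per-feature contribution
score = model.intercept_[0] + contributions.sum()
```

The `contributions` vector is, in effect, the exact version of the additive attributions that SHAP approximates for arbitrary models.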
Chapter 4: Model Explainability for Non-Linear Models Using XAI Components
Chapter Goal: This chapter explains how to use LIME, SKATER, SHAP, and other libraries to explain the decisions made by non-linear models, such as tree-based models, in supervised learning tasks on structured data.
No of pages: 30-40
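For a single decision tree, one instance's prediction can be explained directly by the rule path it follows from root to leaf. A small sketch of the idea with scikit-learn's `decision_path` (synthetic data; the chapter's library-based tooling automates this for larger models):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0.5).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Walk the path one instance takes through the tree and collect its rules
x = X[:1]
node_ids = tree.decision_path(x).indices
feature, threshold = tree.tree_.feature, tree.tree_.threshold
rules = [
    f"feature[{feature[n]}] {'<=' if x[0, feature[n]] <= threshold[n] else '>'} {threshold[n]:.2f}"
    for n in node_ids
    if feature[n] >= 0  # a negative feature index marks a leaf node
]
```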
Chapter 5: Model Explainability for Ensemble Models Using XAI Components
Chapter Goal: This chapter explains how to use LIME, SKATER, SHAP, and other libraries to explain the decisions made by ensemble models, such as tree-based ensembles, in supervised learning tasks on structured data.
No of pages: 30-40
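Ensembles have no single rule path to read off, so model-agnostic techniques become essential. One of the simplest is permutation importance: shuffle one feature at a time and measure how much the model's score drops. A minimal sketch with scikit-learn:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = (X[:, 0] > 0).astype(int)  # only feature 0 carries signal

forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy
result = permutation_importance(forest, X, y, n_repeats=5, random_state=0)
```

`result.importances_mean` should single out feature 0; SHAP and SKATER refine this global picture down to per-prediction attributions.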
Chapter 6: Model Explainability for Time Series Models Using XAI Components
Chapter Goal: This chapter explains how to use LIME, SKATER, SHAP, and other libraries to explain the decisions made by time series models on structured data, covering both univariate and multivariate time series models.
No of pages: 30-40
Chapter 7: Model Explainability for Natural Language Processing Using XAI Components
Chapter Goal: This chapter explains how to use LIME, SKATER, SHAP, and other libraries to explain the decisions made by text classification, summarization, and sentiment classification models.
No of pages: 30-40
Chapter 8: AI Model Fairness Using What-If Scenarios
Chapter Goal: This chapter explains how to use Google's What-If Tool (WIT) and custom libraries to assess the fairness of an AI model.
No of pages: 30-40
Chapter 9: Model Explainability for Deep Neural Network Models
Chapter Goal: This chapter explains how to use Python libraries to interpret neural network and deep learning models, such as LSTM and CNN models, using techniques such as SmoothGrad and DeepLIFT.
No of pages: 30-40
Chapter 10: Counterfactual Explanations for XAI Models
Chapter Goal: This chapter provides counterfactual explanations for the predictions of individual instances. The "event" is the predicted outcome of an instance; the "causes" are the particular feature values of that instance which were input to the model and "caused" a certain prediction.
No of pages: 30-40
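The core idea of a counterfactual explanation is a question: what is the smallest change to this instance that flips the model's outcome? A toy sketch of that search for a linear classifier, nudging the instance toward the decision boundary (dedicated libraries handle the general, non-linear case covered in the chapter):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

x = np.array([-1.0, -1.0])  # an instance the model assigns to class 0
# Step toward the decision boundary until the predicted outcome flips
direction = model.coef_[0] / np.linalg.norm(model.coef_[0])
counterfactual = x.copy()
for _ in range(200):
    if model.predict([counterfactual])[0] == 1:
        break
    counterfactual += 0.05 * direction
```

The difference `counterfactual - x` is the explanation: "had these feature values been this much larger, the prediction would have been positive."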
Chapter 11: Contrastive Explanation for Machine Learning
Chapter Goal: This chapter uses foil trees, a model-agnostic approach to extracting explanations, to find the set of rules that causes the actual outcome (the fact) to be predicted instead of the alternative (the foil).
No of pages: 20-30
Chapter 12: Model-Agnostic Explanations by Identifying Prediction Invariance
Chapter Goal: This chapter uses anchor-LIME (a-LIME), a model-agnostic technique that produces high-precision rule-based explanations with clearly defined coverage boundaries.
No of pages: 20-30
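An anchor is judged by two numbers: precision (how reliably the prediction holds when the rule applies) and coverage (how much of the data the rule applies to). A hand-rolled sketch evaluating one candidate rule against a black-box model, not the a-LIME algorithm itself, which searches for such rules automatically:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Candidate anchor rule for positively predicted instances: "feature 0 > 0"
covered = X[:, 0] > 0
preds = model.predict(X)
precision = (preds[covered] == 1).mean()  # when the rule holds, prediction is 1
coverage = covered.mean()                 # fraction of data the rule applies to
```

High precision with non-trivial coverage is exactly the "clear coverage boundary" property the chapter's technique optimizes for.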
Chapter 13: Model Explainability for Rule-based Expert Systems