Practical Explainable AI Using Python: Artificial Intelligence Model Explanations Using Python-based Libraries, Extensions, and Frameworks

Mishra, Pradeepta

  • Publisher: Apress
  • Publication date: 2021-12-15
  • List price: $2,100
  • Sale price: $1,680 (20% off)
  • Language: English
  • Pages: 364
  • Binding: Quality Paper - also called trade paper
  • ISBN: 1484271572
  • ISBN-13: 9781484271575
  • Categories: Python, Artificial Intelligence
  • Ships immediately (stock = 1)

Description

Chapter 1: Introduction to Model Explainability and Interpretability
Chapter Goal: This chapter explains what model explainability and interpretability are, using Python. No of pages: 30-40
Chapter 2: AI Ethics, Bias, and Reliability
Chapter Goal: This chapter covers frameworks that use XAI Python libraries to control bias, apply the principles of reliability, and maintain ethics while generating predictions. No of pages: 30-40
Chapter 3: Model Explainability for Linear Models Using XAI Components
Chapter Goal: This chapter explains the use of LIME, SKATER, SHAP, and other libraries to explain the decisions made by linear models on supervised learning tasks with structured data. No of pages: 30-40
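To give a flavor of what this chapter covers: for a linear model, the SHAP attribution of each feature has a simple closed form, which can be sketched with NumPy alone. This is an illustrative sketch, not code from the book; the data and variable names are made up.

```python
import numpy as np

# For a linear model f(x) = w.x + b, the SHAP value of feature j for an
# instance x is w_j * (x_j - mean(X_j)): the attribution that libraries
# such as shap compute exactly for linear models.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1          # noise-free toy target

# Fit w and b by least squares.
A = np.hstack([X, np.ones((100, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
w, b = coef[:3], coef[3]

x = X[0]                      # the instance to explain
shap_values = w * (x - X.mean(axis=0))
pred = X @ w + b

# The attributions sum to the gap between this prediction
# and the average prediction over the data.
assert np.isclose(shap_values.sum(), (x @ w + b) - pred.mean())
print(np.round(shap_values, 3))
```

The same additivity property (attributions summing to prediction minus baseline) is what SHAP generalizes to non-linear models in the later chapters.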
Chapter 4: Model Explainability for Non-Linear Models Using XAI Components
Chapter Goal: This chapter explains the use of LIME, SKATER, SHAP, and other libraries to explain the decisions made by non-linear models, such as tree-based models, on supervised learning tasks with structured data. No of pages: 30-40
Chapter 5: Model Explainability for Ensemble Models Using XAI Components
Chapter Goal: This chapter explains the use of LIME, SKATER, SHAP, and other libraries to explain the decisions made by ensemble models, such as tree-based ensembles, on supervised learning tasks with structured data. No of pages: 30-40
Chapter 6: Model Explainability for Time Series Models Using XAI Components
Chapter Goal: This chapter explains the use of LIME, SKATER, SHAP, and other libraries to explain the decisions made by time series models, both univariate and multivariate, on structured data. No of pages: 30-40
Chapter 7: Model Explainability for Natural Language Processing Using XAI Components
Chapter Goal: This chapter explains the use of LIME, SKATER, SHAP, and other libraries to explain the decisions made by text classification, summarization, and sentiment classification models. No of pages: 30-40
Chapter 8: AI Model Fairness Using What-If Scenarios
Chapter Goal: This chapter explains the use of Google's What-If Tool (WIT) and custom libraries to explain the fairness of an AI model. No of pages: 30-40
Chapter 9: Model Explainability for Deep Neural Network Models
Chapter Goal: This chapter explains the use of Python libraries to interpret neural network and deep learning models, such as LSTM and CNN models, using smooth grad and deep shift. No of pages: 30-40
Chapter 10: Counterfactual Explanations for XAI Models
Chapter Goal: This chapter provides counterfactual explanations for the predictions of individual instances. The "event" is the predicted outcome of an instance; the "cause" consists of the particular feature values of that instance that were input to the model and "caused" a certain prediction. No of pages: 30-40
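The core idea of a counterfactual explanation can be sketched in a few lines: search for the smallest change to an input that flips the model's prediction. The toy "loan" model, feature names, and threshold below are hypothetical, chosen only to illustrate the event/cause framing above.

```python
# Hypothetical sketch of a counterfactual explanation: find the smallest
# change to one feature ("income") that flips the predicted outcome
# from "loan denied" (0) to "loan approved" (1).
def predict(income, debt):
    # toy scoring rule: approve when income - 2*debt exceeds a threshold
    return 1 if income - 2 * debt > 50 else 0

income, debt = 60.0, 20.0          # the instance to explain
assert predict(income, debt) == 0  # the "event": the loan is denied

step = 1.0
cf_income = income
while predict(cf_income, debt) == 0:
    cf_income += step              # greedy search along a single feature
print(f"Counterfactual: raising income from {income} to {cf_income} "
      f"flips the prediction to approved.")
```

Libraries covered in the book replace this brute-force search with optimization over all features while keeping the counterfactual close to the original instance.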
Chapter 11: Contrastive Explanation for Machine Learning
Chapter Goal: In this chapter we will use foil trees, a model-agnostic approach that extracts explanations by finding the set of rules that causes the actual outcome (the fact) to be predicted instead of the other outcome (the foil). No of pages: 20-30
Chapter 12: Model-Agnostic Explanations by Identifying Prediction Invariance
Chapter Goal: In this chapter we will use anchor-LIME (a-LIME), a model-agnostic technique that produces high-precision rule-based explanations with very clear coverage boundaries. No of pages: 20-30
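The "high precision with clear coverage" idea behind anchor-style rules can be illustrated with a toy model: propose a rule on the input, then measure how often perturbed samples satisfying the rule keep the same prediction (precision) and how much of the input space the rule covers (coverage). The model, rule, and numbers below are made up for illustration.

```python
import numpy as np

# Hypothetical anchor-style rule: a predicate on the input such that samples
# satisfying it almost always receive the same prediction as the instance
# being explained (high precision), with an explicit coverage estimate.
rng = np.random.default_rng(1)

def model(X):
    # toy classifier: positive when the first feature is large
    return (X[:, 0] > 0.5).astype(int)

x = np.array([0.8, 0.3])                  # the instance to explain
candidate_rule = lambda X: X[:, 0] > 0.6  # proposed anchor: "feature_0 > 0.6"

# Sample from the input distribution, then measure precision and coverage.
samples = rng.uniform(0, 1, size=(10_000, 2))
mask = candidate_rule(samples)
precision = (model(samples[mask]) == model(x[None])[0]).mean()
coverage = mask.mean()
print(f"precision={precision:.2f}, coverage={coverage:.2f}")
```

An anchor algorithm searches over many candidate rules for one whose precision exceeds a target threshold while keeping coverage as large as possible.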
Chapter 13: Model Explainability for Rule based Exper
