Explainable and Interpretable Models in Computer Vision and Machine Learning (The Springer Series on Challenges in Machine Learning)
- Publisher: Springer
- Publication date: 2019-01-16
- List price: $6,250
- VIP price: 5% off, $5,938
- Language: English
- Pages: 299
- Binding: Paperback
- ISBN: 3319981307
- ISBN-13: 9783319981307
Related categories:
Machine Learning, Computer Vision
Imported title (requires separate checkout)
Product Description
This book compiles leading research on the development of explainable and interpretable machine learning methods in the context of computer vision and machine learning.
Research progress in computer vision and pattern recognition has produced a variety of modeling techniques with almost human-like performance. Although these models obtain astounding results, they are limited in their explainability and interpretability: What is the rationale behind a given decision? What in the model's structure explains its functioning? Hence, while good performance is a critical characteristic of learning machines, explainability and interpretability are needed to take learning machines to the next step: inclusion in decision support systems that involve human supervision.
This book, written by leading international researchers, addresses key topics of explainability and interpretability, including the following:
· Evaluation and Generalization in Interpretable Machine Learning
· Explanation Methods in Deep Learning
· Learning Functional Causal Models with Generative Neural Networks
· Learning Interpretable Rules for Multi-Label Classification
· Structuring Neural Networks for More Explainable Predictions
· Generating Post Hoc Rationales of Deep Visual Classification Decisions
· Ensembling Visual Explanations
· Explainable Deep Driving by Visualizing Causal Attention
· Interdisciplinary Perspective on Algorithmic Job Candidate Search
· Multimodal Personality Trait Analysis for Explainable Modeling of Job Interview Decisions
· Inherently Explainable Pattern Theory-Based Video Event Interpretations