Gaussian Processes for Machine Learning (Hardcover)
Carl Edward Rasmussen, Christopher K. I. Williams
- Publisher: MIT
- Publication date: 2005-11-23
- List price: $2,300
- Member price: 5% off, $2,185
- Language: English
- Pages: 272
- Binding: Hardcover
- ISBN: 026218253X
- ISBN-13: 9780262182539
Related categories: Machine Learning

Unavailable for order
Customers who bought this item also bought...
- $1,107 Bioinformatics: The Machine Learning Approach, 2/e (Hardcover)
- $1,264 Introduction to Machine Learning
- $1,274 Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 5/e
- $990 Artificial Intelligence for Games (Hardcover)
- $990 Large-Scale Kernel Machines
- $1,215 Fundamentals of Communication Systems, 2/e (Paperback)
Description
Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics.
The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed both from a Bayesian and a classical perspective. Many connections to other well-known techniques from machine learning and statistics are discussed, including support-vector machines, neural networks, splines, regularization networks, relevance vector machines and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. Appendixes provide mathematical background and a discussion of Gaussian Markov processes.
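The regression treatment the blurb mentions centers on computing the GP posterior from a covariance function via a Cholesky factorization. A minimal NumPy sketch of that idea follows; the function names, the squared-exponential kernel choice, and the hyperparameter values are illustrative assumptions, not code from the book.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of inputs."""
    sq_dists = (np.sum(X1**2, axis=1)[:, None]
                + np.sum(X2**2, axis=1)[None, :]
                - 2.0 * X1 @ X2.T)
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def gp_posterior(X, y, X_star, noise=0.1):
    """GP regression posterior mean and covariance at test inputs X_star."""
    K = rbf_kernel(X, X) + noise**2 * np.eye(len(X))   # noisy train covariance
    K_s = rbf_kernel(X, X_star)                        # train-test covariance
    K_ss = rbf_kernel(X_star, X_star)                  # test covariance
    L = np.linalg.cholesky(K)                          # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # alpha = K^{-1} y
    mean = K_s.T @ alpha                               # posterior mean
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v                               # posterior covariance
    return mean, cov
```

With a small noise level the posterior mean nearly interpolates the training targets, while the posterior covariance quantifies uncertainty away from the data; this Cholesky-based formulation is the numerically stable route the GP literature generally recommends over direct matrix inversion.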
Carl Edward Rasmussen is a Research Scientist at the Department of Empirical Inference for Machine Learning and Perception at the Max Planck Institute for Biological Cybernetics, Tübingen.
Christopher K. I. Williams is Professor of Machine Learning and Director of the Institute for Adaptive and Neural Computation in the School of Informatics, University of Edinburgh.
Table of Contents
Series Foreword
Preface
Symbols and Notation
1 Introduction
2 Regression
3 Classification
4 Covariance Functions
5 Model Selection and Adaptation of Hyperparameters
6 Relationships between GPs and Other Models
7 Theoretical Perspectives
8 Approximation Methods for Large Datasets
9 Further Issues and Conclusions
Appendix A Mathematical Background
Appendix B Gaussian Markov Processes
Appendix C Datasets and Code
Bibliography
Author Index
Subject Index