Scaling up Machine Learning: Parallel and Distributed Approaches (Hardcover)
Ron Bekkerman, Mikhail Bilenko, John Langford
- Publisher: Cambridge University Press
- Publication Date: 2011-12-30
- List Price: $4,400
- VIP Price: 5% off, $4,180
- Language: English
- Pages: 492
- Binding: Hardcover
- ISBN: 0521192242
- ISBN-13: 9780521192248
Related Categories:
Machine Learning
Imported title, ordered from overseas (must be checked out separately)
Product Description
This book presents an integrated collection of representative approaches for scaling up machine learning and data mining methods on parallel and distributed computing platforms. Demand for parallelizing learning algorithms is highly task-specific: in some settings it is driven by enormous dataset sizes, in others by model complexity or by real-time performance requirements. Making task-appropriate algorithm and platform choices for large-scale machine learning requires understanding the benefits, trade-offs, and constraints of the available options. The solutions presented in the book cover a range of parallelization platforms, from FPGAs and GPUs to multi-core systems and commodity clusters; concurrent programming frameworks, including CUDA, MPI, MapReduce, and DryadLINQ; and learning settings (supervised, unsupervised, semi-supervised, and online learning). Extensive coverage of the parallelization of boosted trees, SVMs, spectral clustering, belief propagation, and other popular learning algorithms, together with deep dives into several applications, makes the book equally useful for researchers, students, and practitioners.