Scaling up Machine Learning: Parallel and Distributed Approaches (Paperback)
- Publisher: Cambridge
- Publication date: 2018-03-29
- Price: $1,360
- VIP price: $1,292 (5% off)
- Language: English
- Pages: 491
- Binding: Paperback
- ISBN: 1108461743
- ISBN-13: 9781108461740
Related categories: Machine Learning
Ships immediately (stock = 1)
Customers who bought this item also bought:
- $250 Superscalar Processor Design (超標量處理器設計)
- $1,960 Large-Scale and Distributed Optimization (Lecture Notes in Mathematics)
- $301 Feature Engineering Made Easy (特徵工程入門與實踐)
- $378 Product Manager Methodology: Building a Complete Product Knowledge System (產品經理方法論)
- $403 Middle-Platform Product Manager: Digital Transformation Case Studies for Complex Product Architectures (中台產品經理)
- $806 Data Governance: Digital Transformation for Industrial Enterprises, 2nd Edition (數據治理, 第2版)
Related topics
Product description
This book presents an integrated collection of representative approaches for scaling up machine learning and data mining methods on parallel and distributed computing platforms. Demand for parallelizing learning algorithms is highly task-specific: in some settings it is driven by the enormous dataset sizes, in others by model complexity or by real-time performance requirements. Making task-appropriate algorithm and platform choices for large-scale machine learning requires understanding the benefits, trade-offs, and constraints of the available options. Solutions presented in the book cover a range of parallelization platforms from FPGAs and GPUs to multi-core systems and commodity clusters, concurrent programming frameworks including CUDA, MPI, MapReduce, and DryadLINQ, and learning settings (supervised, unsupervised, semi-supervised, and online learning). Extensive coverage of parallelization of boosted trees, SVMs, spectral clustering, belief propagation and other popular learning algorithms and deep dives into several applications make the book equally useful for researchers, students, and practitioners.
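The MapReduce-style data parallelism mentioned above can be sketched in a few lines: each worker computes a partial gradient on its own data shard (map), the partials are summed (reduce), and one gradient-descent step is applied. This is a minimal illustrative sketch of the general pattern the book surveys, not code from the book; all names and the toy dataset are invented for this example.

```python
# Sketch of data-parallel gradient descent in the MapReduce style:
# map = per-shard partial gradients, reduce = sum, then one update step.
from multiprocessing import Pool

def partial_gradient(args):
    """Map step: gradient of squared error for a linear model on one data shard."""
    w, shard = args
    g = [0.0] * len(w)
    for x, y in shard:
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        for i, xi in enumerate(x):
            g[i] += 2.0 * err * xi
    return g

def parallel_step(w, shards, lr=0.01):
    """One data-parallel gradient-descent step: map over shards, reduce by summing."""
    with Pool(len(shards)) as pool:
        partials = pool.map(partial_gradient, [(w, s) for s in shards])
    total = [sum(gs) for gs in zip(*partials)]  # reduce: sum partial gradients
    n = sum(len(s) for s in shards)
    return [wi - lr * gi / n for wi, gi in zip(w, total)]

if __name__ == "__main__":
    # Toy data for y = 2*x, split across two shards (two workers)
    shards = [[([1.0], 2.0), ([2.0], 4.0)], [([3.0], 6.0), ([4.0], 8.0)]]
    w = [0.0]
    for _ in range(200):
        w = parallel_step(w, shards, lr=0.02)
    print(round(w[0], 2))  # converges toward the true weight 2.0
```

Real systems differ mainly in where the reduce happens (a parameter server, an all-reduce, or a single MapReduce job per iteration), but the map/reduce decomposition of the gradient is the common core.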