Learning Spark: Lightning-Fast Big Data Analysis (Paperback)
Holden Karau, Andy Konwinski, Patrick Wendell, Matei Zaharia
- Publisher: O'Reilly
- Publication date: 2015-02-27
- List price: $1,560
- VIP price: 5% off, $1,482
- Language: English
- Pages: 276
- Binding: Paperback
- ISBN: 1449358624
- ISBN-13: 9781449358624
Related categories:
Spark, Big Data, Data Science
Related translation:
Spark 學習手冊 (Learning Spark: Lightning-Fast Big Data Analysis) (Traditional Chinese edition)
Other editions:
Learning Spark: Lightning-Fast Data Analytics, 2/e (Paperback)
Sales ranking:
🥈 No. 2, English-language books, June 2016
Customers who bought this item also bought...
- $825 Advanced Analytics with Spark: Patterns for Learning from Data at Scale (Paperback)
- $2,048 Hadoop: The Definitive Guide, 4/e (Paperback)
Description
The Web is getting faster, and the data it delivers is getting bigger. How can you handle everything efficiently? This book introduces Spark, an open source cluster computing system that makes data analytics fast to run and fast to write. You’ll learn how to run programs faster, using primitives for in-memory cluster computing. With Spark, your job can load data into memory and query it repeatedly much quicker than with disk-based systems like Hadoop MapReduce.
Written by the developers of Spark, this book will have you up and running in no time. You’ll learn how to express MapReduce jobs with just a few simple lines of Spark code, instead of spending extra time and effort working with Hadoop’s raw Java API.
- Quickly dive into Spark capabilities such as collect, count, reduce, and save
- Use one programming paradigm instead of mixing and matching tools such as Hive, Hadoop, Mahout, and S4/Storm
- Learn how to run interactive, iterative, and incremental analyses
- Integrate with Scala to manipulate distributed datasets like local collections
- Tackle partitioning issues, data locality, default hash partitioning, user-defined partitioners, and custom serialization
- Use other languages by means of pipe() to achieve the equivalent of Hadoop streaming
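The bullet points above revolve around a handful of core operations: mapping over a dataset, reducing it, counting, and collecting results. As a rough plain-Python sketch of that paradigm (no Spark required; in Spark these steps would be lazy RDD methods such as flatMap, map, reduceByKey, count, and collect on a SparkContext), the classic word-count job that takes substantial Hadoop Java boilerplate reduces to a few functional lines:

```python
# Plain-Python analogy for Spark's core RDD operations.
# In Spark, `lines` would come from sc.textFile(...) and each step
# below would be a distributed, lazily evaluated RDD transformation.
lines = [
    "spark makes data analytics fast",
    "fast to run and fast to write",
]

# flatMap: split each line into individual words
words = [w for line in lines for w in line.split()]

# map: pair each word with an initial count of 1
pairs = [(w, 1) for w in words]

# reduceByKey: sum the counts for each distinct word
counts = {}
for word, n in pairs:
    counts[word] = counts.get(word, 0) + n

total_words = len(words)          # analogue of rdd.count()
result = sorted(counts.items())   # analogue of rdd.collect()

print(total_words)       # 12
print(counts["fast"])    # 3
```

The point of the analogy is the shape of the pipeline, not the mechanics: Spark lets you chain these transformations on in-memory distributed collections, so iterative and interactive queries avoid rereading data from disk on every pass.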