
Fast Data Processing with Spark
Tentative Chinese title: 使用 Spark 快速數據處理

Holden Karau

  • Publisher: Packt Publishing
  • Publication date: 2013-09-08
  • List price: $1,640
  • Member price: $1,558 (95% of list)
  • Language: English
  • Pages: 120
  • Binding: Paperback
  • ISBN: 1782167064
  • ISBN-13: 9781782167068
  • Categories: Spark
  • Overseas purchase (checked out separately)

Product Description

Spark offers a streamlined way to write distributed programs, and this tutorial gives you, as a software developer, the know-how to make the most of Spark's many great features, providing an extra string to your bow.

Overview

  • Implement Spark's interactive shell to prototype distributed applications
  • Deploy Spark jobs to various clusters (standalone, Mesos, YARN, EC2, EMR, and so on), with tools such as Chef to automate setup
  • Use Shark's SQL query-like syntax with Spark

In Detail

Spark is a framework for writing fast, distributed programs. Spark solves problems similar to those addressed by Hadoop MapReduce, but with a fast in-memory approach and a clean, functional-style API. With its ability to integrate with Hadoop, and with built-in tools for interactive query analysis (Shark), large-scale graph processing and analysis (Bagel), and real-time analysis (Spark Streaming), it can be used interactively to quickly process and query big data sets.
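The MapReduce-style, functional flow described above can be sketched in plain Python. Note that this is a single-machine illustration of the pattern only, not Spark's actual API:

```python
# Word count, the canonical MapReduce example, in the functional style
# Spark's API encourages. Illustration only -- no Spark required.
lines = ["spark makes fast data", "fast data processing", "spark spark"]

# "map" phase: split each line into (word, 1) pairs,
# analogous to flatMap + map over an RDD of lines
pairs = [(word, 1) for line in lines for word in line.split()]

# "reduce" phase: sum the counts per key,
# analogous to reduceByKey(add) in Spark
counts = {}
for word, n in pairs:
    counts[word] = counts.get(word, 0) + n

print(counts["spark"])  # occurrences of "spark" across all lines
```

In real Spark, the same two phases run in parallel across a cluster, with the framework handling partitioning and shuffling between them.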

Fast Data Processing with Spark covers how to write distributed MapReduce-style programs with Spark. The book will guide you through every step required to write effective distributed programs, from setting up your cluster and interactively exploring the API, to deploying your job to the cluster and tuning it for your purposes.

Fast Data Processing with Spark covers everything from setting up your Spark cluster in a variety of situations (standalone, EC2, and so on) to using the interactive shell to write distributed code. From there, we move on to how to write and deploy distributed jobs in Java, Scala, and Python.

We then examine how to use the interactive shell to quickly prototype distributed programs and explore the Spark API. We also look at how to use Hive with Spark through Shark's SQL-like query syntax, and how to manipulate resilient distributed datasets (RDDs).

What you will learn from this book

  • Prototype distributed applications with Spark's interactive shell
  • Learn different ways to interact with Spark's distributed representation of data (RDDs)
  • Load data from the various data sources
  • Query Spark with a SQL-like query syntax
  • Integrate Shark queries with Spark programs
  • Effectively test your distributed software
  • Tune a Spark installation
  • Install and set up Spark on your cluster
  • Work effectively with large data sets
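One bullet above mentions querying Spark with a SQL-like syntax, which Shark provides on top of Spark data. As a rough single-machine analogy using only the standard library (sqlite3 here is an illustration, not Shark), the same kind of aggregation query looks like this:

```python
import sqlite3

# Build a small in-memory table of (user, bytes) log records
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (user TEXT, bytes INTEGER)")
conn.executemany(
    "INSERT INTO logs VALUES (?, ?)",
    [("ann", 120), ("bob", 300), ("ann", 80)],
)

# A GROUP BY aggregation -- the kind of query Shark runs over Spark data
rows = conn.execute(
    "SELECT user, SUM(bytes) FROM logs GROUP BY user ORDER BY user"
).fetchall()
print(rows)
```

The appeal of Shark is the same as here: analysts can express aggregations declaratively in SQL while the engine (Spark, in Shark's case) handles the execution.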

Approach

This book will be a basic, step-by-step tutorial, which will help readers take advantage of all that Spark has to offer.

Who this book is written for

Fast Data Processing with Spark is for software developers who want to learn how to write distributed programs with Spark. It will help developers who have faced problems too large to handle on a single computer. No previous experience with distributed programming is necessary. This book assumes knowledge of either Java, Scala, or Python.
