Spark for Python Developers (Paperback)
Amit Nandi
- Publisher: Packt Publishing
- Publication date: 2015-12-24
- List price: $1,330
- Member price: 5% off, $1,264
- Language: English
- Pages: 206
- Binding: Paperback
- ISBN: 1784399698
- ISBN-13: 9781784399696
Related categories: Python, Programming Languages, Spark
Ships immediately (1 in stock)
Customers who bought this item also bought...
- Advanced Analytics with Spark: Patterns for Learning from Data at Scale (Paperback), $825
- Machine Learning with R, 2/e (Paperback), $825
- Apache Spark Machine Learning Blueprints (Paperback), $1,064
- Mastering Scala Machine Learning (Paperback), $990
Product Description
Key Features
- Set up real-time streaming and batch data-intensive infrastructure using Spark and Python
- Deliver insightful visualizations in a web app using Spark (PySpark)
- Inject live data using Spark Streaming with real-time events (a minimal sketch follows this list)
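To give a flavor of the streaming feature above, here is a minimal, hypothetical PySpark Streaming sketch. It assumes a local Spark installation and a text source publishing lines to localhost port 9999 (for example `nc -lk 9999`); neither the source nor the port comes from the book.

```python
# Minimal sketch: count words arriving over a TCP socket in 10-second batches.
# Assumes a local Spark installation; the host and port are illustrative only.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "StreamingSketch")   # at least 2 threads: receiver + worker
ssc = StreamingContext(sc, batchDuration=10)       # 10-second micro-batches

lines = ssc.socketTextStream("localhost", 9999)    # live event feed
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))   # per-batch word counts
counts.pprint()                                    # print each batch's results

ssc.start()
ssc.awaitTermination()
```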
Book Description
Looking for a cluster computing system that provides high-level APIs? Apache Spark is your answer: an open source, fast, and general-purpose cluster computing system. Spark's multi-stage in-memory primitives deliver performance up to 100 times faster than Hadoop MapReduce for some workloads, and it is also well suited to machine learning algorithms.
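As a taste of that high-level API, a minimal PySpark sketch (assuming a local Spark installation; the input file path is purely illustrative):

```python
# A minimal sketch only, assuming a local Spark installation; the input
# file path is purely illustrative.
from pyspark import SparkContext

sc = SparkContext("local[*]", "WordCountSketch")

counts = (sc.textFile("data/sample.txt")            # load a text file as an RDD
            .flatMap(lambda line: line.split())     # split each line into words
            .map(lambda word: (word, 1))
            .reduceByKey(lambda a, b: a + b))       # aggregate counts in memory

print(counts.take(10))                              # peek at the first few pairs
sc.stop()
```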
Are you a Python developer inclined to work with the Spark engine? If so, this book will be your companion as you create data-intensive apps using Spark as a processing engine, Python visualization libraries, and web frameworks such as Flask.
To begin with, you will learn the most effective way to install a Python development environment powered by Spark, Blaze, and Bokeh. You will then find out how to connect to data stores such as MySQL, MongoDB, Cassandra, and Hadoop.
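The book covers those stores in its own way; purely as an illustration of the idea, one simple way to hand MongoDB documents to Spark is to fetch them with pymongo on the driver (the database and collection names below are hypothetical):

```python
# A hedged illustration only, not the book's exact recipe: fetch documents
# with pymongo on the driver and distribute them as a Spark RDD.
# The database and collection names are hypothetical.
from pymongo import MongoClient
from pyspark import SparkContext

client = MongoClient("localhost", 27017)
docs = list(client["meetup"]["events"].find({}, {"_id": 0}))  # plain dicts, ObjectId dropped

sc = SparkContext("local[*]", "MongoSketch")
events = sc.parallelize(docs)        # distribute the documents across the cluster
print(events.count())
sc.stop()
```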
You'll expand your skills throughout, getting familiar with the various data sources (GitHub, Twitter, Meetup, and blogs), their data structures, and solutions to effectively tackle their complexities. You'll explore datasets using IPython Notebook and discover how to optimize the data models and pipeline. Finally, you'll learn how to create training datasets and train machine learning models.
By the end of the book, you will have created a real-time, insightful, trend-tracking data-intensive app with Spark.
What you will learn
- Create a Python development environment powered by Spark (PySpark), Blaze, and Bokeh
- Build a real-time trend tracker data-intensive app
- Visualize the trends and insights gained from data using Bokeh
- Generate insights from data using machine learning through Spark MLlib
- Juggle with data using Blaze
- Create training datasets and train the machine learning models (a minimal MLlib sketch follows this list)
- Test the machine learning models on test datasets
- Deploy the machine learning algorithms and models and scale them for real-time events
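As a rough sketch of the train-and-test workflow listed above, using the MLlib RDD-based API (toy data, not the book's dataset):

```python
# A rough sketch of the MLlib (RDD-based) train/test workflow with toy data;
# the feature vectors below are stand-ins, not the book's dataset.
from pyspark import SparkContext
from pyspark.mllib.classification import LogisticRegressionWithSGD
from pyspark.mllib.regression import LabeledPoint

sc = SparkContext("local[*]", "MLlibSketch")

data = sc.parallelize([
    LabeledPoint(0.0, [0.0, 1.0]), LabeledPoint(1.0, [1.0, 0.0]),
    LabeledPoint(0.0, [0.1, 0.9]), LabeledPoint(1.0, [0.9, 0.2]),
    LabeledPoint(0.0, [0.2, 0.8]), LabeledPoint(1.0, [0.8, 0.1]),
])
train, test = data.randomSplit([0.75, 0.25], seed=42)     # training / test split

model = LogisticRegressionWithSGD.train(train, iterations=100)

# Evaluate on the held-out set (guard against an empty split on toy data)
pairs = test.map(lambda p: (p.label, model.predict(p.features)))
accuracy = pairs.filter(lambda lp: lp[0] == lp[1]).count() / float(max(test.count(), 1))
print("test accuracy:", accuracy)
sc.stop()
```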
About the Author
Amit Nandi studied physics at the Free University of Brussels in Belgium, where he did his research on computer-generated holograms. Computer-generated holograms are the key components of an optical computer, which is powered by photons running at the speed of light. He then worked with the university's Cray supercomputer, sending batch jobs of programs written in Fortran. This gave him a taste for computing, which kept growing. He has worked extensively on large business reengineering initiatives, using SAP as the main enabler. For the last 15 years, he has focused on start-ups in the data space, pioneering new areas of the information technology landscape. He is currently focusing on large-scale data-intensive applications as an enterprise architect, data engineer, and software developer. He understands and speaks seven human languages. Although Python is his computer language of choice, he aims to be able to write fluently in seven computer languages too.
Table of Contents
- Setting Up a Spark Virtual Environment
- Building Batch and Streaming Apps with Spark
- Juggling Data with Spark
- Learning from Data Using Spark
- Streaming Live Data with Spark
- Visualizing Insights and Trends