Data Engineering with Databricks Cookbook: Build effective data and AI solutions using Apache Spark, Databricks, and Delta Lake
Chadha, Pulkit
- Publisher: Packt Publishing
- Publication date: 2024-05-31
- List price: $1,920
- VIP price: 5% off, $1,824
- Language: English
- Pages: 438
- Binding: Quality Paper - also called trade paper
- ISBN: 1837633355
- ISBN-13: 9781837633357
Related categories:
Spark, Artificial Intelligence
In stock, ships immediately (stock = 1)
Product Description
Work through 70 recipes to implement reliable data pipelines with Apache Spark, optimally store and process structured and unstructured data in Delta Lake, and use Databricks to orchestrate and govern your data
Key Features
- Learn data ingestion, data transformation, and data management techniques using Apache Spark and Delta Lake
- Gain practical guidance on using Delta Lake tables and orchestrating data pipelines
- Implement reliable DataOps and DevOps practices, and enforce data governance policies on Databricks
- Purchase of the print or Kindle book includes a free PDF eBook
Book Description
Data Engineering with Databricks Cookbook will guide you through recipes to effectively use Apache Spark, Delta Lake, and Databricks for data engineering, beginning with an introduction to data ingestion and loading with Apache Spark.
As you progress, you'll be introduced to a range of data manipulation and data transformation solutions that can be applied to your data. You'll find out how to manage and optimize Delta tables, as well as how to ingest and process streaming data. The book will also show you how to diagnose and resolve performance problems in Apache Spark applications and Delta Lake. Later chapters will show you how to use Databricks to implement DataOps and DevOps practices and teach you how to orchestrate and schedule data pipelines using Databricks Workflows. Finally, you'll understand how to set up and configure Unity Catalog for data governance.
By the end of this book, you'll be well-versed in building reliable and scalable data pipelines using modern data engineering technologies.
What you will learn
- Perform data loading, ingestion, and processing with Apache Spark
- Discover data transformation techniques and custom user-defined functions (UDFs) in Apache Spark
- Manage and optimize Delta tables with Apache Spark and Delta Lake APIs
- Use Spark Structured Streaming for real-time data processing
- Optimize Apache Spark application and Delta table query performance
- Implement DataOps and DevOps practices on Databricks
- Orchestrate data pipelines with Delta Live Tables and Databricks Workflows
- Implement data governance policies with Unity Catalog
Who this book is for
This book is for data engineers, data scientists, and data practitioners who want to learn how to build efficient and scalable data pipelines using Apache Spark, Delta Lake, and Databricks. To get the most out of this book, you should have basic knowledge of data architecture, SQL, and Python programming.