Grokking Machine Learning (機器學習圖解)

By Luis G. Serrano (Canada); translated by Guo Tao (郭濤)

Description

Grokking Machine Learning introduces machine learning algorithms and techniques in a clear, accessible way. Even readers with no more than high-school math can understand and apply powerful machine learning techniques. The book avoids dense jargon and provides clear explanations that rely only on basic algebra.

The book covers the following topics:

  1. Supervised algorithms that classify and split data: how to use supervised learning algorithms to classify and partition data.

  2. Methods for cleaning and simplifying data: how to handle noise and missing values, and how to simplify a dataset.

  3. Machine learning packages and tools: commonly used packages and tools, such as Scikit-learn in Python (see the short sketch after this list).

  4. Neural networks and ensemble methods for complex datasets: techniques for handling complex datasets with neural networks and ensemble methods.
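
For readers who want a feel for these tools before starting, here is a minimal illustrative sketch of the kind of supervised classification workflow the book teaches. It is not taken from the book; it assumes only that Python and scikit-learn are installed, and it uses scikit-learn's built-in Iris dataset for convenience.

    # A minimal, illustrative supervised-learning workflow (not from the book).
    # Assumes scikit-learn is installed: pip install scikit-learn
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    # Load a small labeled dataset: features X and labels y.
    X, y = load_iris(return_X_y=True)

    # Hold out part of the data for testing, as the book recommends.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    # Train a simple classifier and measure its accuracy on unseen data.
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)
    print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))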

The book is rich in examples, with engaging exercises and clear illustrations, and it focuses on the core concepts of machine learning. Readers should have some basic Python knowledge, but no prior knowledge of machine learning is required.

About the Author

Luis G. Serrano is a research scientist in quantum artificial intelligence. He previously worked as a machine learning engineer at Google and as the lead artificial intelligence educator at Apple.

Table of Contents

Chapter 1  What is machine learning? It is common sense, except done by a computer
1.1  Do I need a heavy math and coding background to understand machine learning?
1.2  What exactly is machine learning?
1.3  How do we get machines to make decisions from data? The remember-formulate-predict framework
1.4  Summary

Chapter 2  Types of machine learning
2.1  The difference between labeled and unlabeled data
2.2  Supervised learning: The branch of machine learning that works with labeled data
2.3  Unsupervised learning: The branch of machine learning that works with unlabeled data
2.4  What is reinforcement learning?
2.5  Summary
2.6  Exercises

Chapter 3  Drawing a line close to our points: Linear regression
3.1  The problem: Predicting the price of a house
3.2  The solution: Building a regression model for housing prices
3.3  How do we get the computer to draw this line? The linear regression algorithm
3.4  How do we measure our results? The error function
3.5  Real-life application: Using Turi Create to predict housing prices
3.6  What if the data is not in a line? Polynomial regression
3.7  Parameters and hyperparameters
3.8  Applications of regression
3.9  Summary
3.10  Exercises

Chapter 4  Optimizing the training process: Underfitting, overfitting, testing, and regularization
4.1  An example of underfitting and overfitting using polynomial regression
4.2  How do we get the computer to pick the right model? Testing
4.3  Where did we break the golden rule, and how do we fix it? The validation set
4.4  A numerical way to decide how complex our model should be: The model complexity graph
4.5  Another alternative for avoiding overfitting: Regularization
4.6  Polynomial regression, testing, and regularization with Turi Create
4.7  Summary
4.8  Exercises

Chapter 5  Using lines to split our points: The perceptron algorithm
5.1  The problem: We are on an alien planet and we can't understand the aliens' language!
5.2  How do we determine whether a classifier is good or bad? The error function
5.3  How do we find a good classifier? The perceptron algorithm
5.4  Coding the perceptron algorithm
5.5  Applications of the perceptron algorithm
5.6  Summary
5.7  Exercises

Chapter 6  A continuous approach to splitting points: Logistic classifiers
6.1  Logistic classifiers: A continuous version of perceptron classifiers
6.2  How do we find a good logistic classifier? The logistic regression algorithm
6.3  Coding the logistic regression algorithm
6.4  Real-life application: Classifying IMDB reviews with Turi Create
6.5  Classifying into multiple classes: The softmax function
6.6  Summary
6.7  Exercises

Chapter 7  How do you measure classification models? Accuracy and related concepts
7.1  Accuracy: How often is the model correct?
7.2  How to fix the accuracy problem? Defining different types of errors and how to measure them
7.3  A useful tool for evaluating models: The ROC curve
7.4  Summary
7.5  Exercises

Chapter 8  Using probability to its maximum: The naive Bayes model
8.1  Sick or healthy? A story with Bayes' theorem as the hero
8.2  Use case: A spam-detection model
8.3  Building a spam-detection model with real data
8.4  Summary
8.5  Exercises

Chapter 9  Splitting data by asking questions: Decision trees
9.1  The problem: Recommending apps to users according to what they are likely to download
9.2  The solution: Building an app-recommendation system
9.3  Beyond questions like yes/no
9.4  The graphical boundary of decision trees
9.5  Real-life application: Modeling student admissions with Scikit-Learn
9.6  Decision trees for regression
9.7  Applications
9.8  Summary
9.9  Exercises

Chapter 10  Combining building blocks to gain more power: Neural networks
10.1  Getting started with neural networks on a more complicated alien planet
10.2  Training neural networks
10.3  Coding neural networks in Keras
10.4  Neural networks for regression
10.5  Other architectures for more complex datasets
10.6  Summary
10.7  Exercises

Chapter 11  Finding boundaries with style: Support vector machines and the kernel method
11.1  Using a new error function to build better classifiers
11.2  Coding SVMs in Scikit-Learn
11.3  Training SVMs with nonlinear boundaries: The kernel method
11.4  Summary
11.5  Exercises

Chapter 12  Combining models to maximize results: Ensemble learning
12.1  Getting help from our friends
12.2  Bagging: Joining weak learners randomly to build a strong learner
12.3  AdaBoost: Joining weak learners in a clever way to build a strong learner
12.4  Gradient boosting: Using decision trees to build strong learners
12.5  XGBoost: An extreme way to do gradient boosting
12.6  Applications of ensemble methods
12.7  Summary
12.8  Exercises

Chapter 13  Putting theory into practice: A real-life example of data engineering and machine learning
13.1  The Titanic dataset
13.2  Cleaning up the dataset: Missing values and how to deal with them
13.3  Feature engineering: Transforming the features in the dataset before training the models
13.4  Training the models
13.5  Tuning the hyperparameters to find the best model: Grid search
13.6  Using k-fold cross-validation to reuse data for training and validation
13.7  Summary
13.8  Exercises

The following material can be downloaded by scanning the QR code on the back cover:

Appendix A  Solutions to the exercises
Appendix B  The math behind gradient descent: Coming down a mountain using derivatives and slopes
Appendix C  References