Modern Data Mining Algorithms in C++ and CUDA C: Recent Developments in Feature Extraction and Selection Algorithms for Data Science
Masters, Timothy
- Publisher: Apress
- Publication date: 2020-06-06
- List price: $2,400
- VIP price: $2,280 (5% off)
- Language: English
- Pages: 213
- Binding: Quality Paper (also called trade paper)
- ISBN: 1484259874
- ISBN-13: 9781484259870
Related categories:
C++ Programming, CUDA, Algorithms-data-structures, Data-mining, Data Science
Related translation:
數據挖掘算法 — 基於 C++ 及 CUDA C (Simplified Chinese edition)
Ships immediately (stock: 1)
Description
Discover a variety of data-mining algorithms that are useful for selecting small sets of important features from among unwieldy masses of candidates, or extracting useful features from measured variables.
As a serious data miner you will often be faced with thousands of candidate features for your prediction or classification application, with most of the features being of little or no value. You'll know that many of these features may be useful only in combination with certain other features while being practically worthless alone or in combination with most others. Some features may have enormous predictive power, but only within a small, specialized area of the feature space. The problems that plague modern data miners are endless. This book helps you solve these problems by presenting modern feature selection techniques and the code to implement them. Some of these techniques are:
- Forward selection component analysis
- Local feature selection
- Linking features and a target with a hidden Markov model
- Improvements on traditional stepwise selection
- Nominal-to-ordinal conversion (a brief sketch of one such mapping follows this list)
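As a small illustration of the last item: one common way to convert a nominal variable into a numeric one is to replace each category with the mean target value observed for that category in the training data. The sketch below assumes that simple mean-per-category scheme; the function name category_to_ordinal and the toy data are hypothetical, and this shows only the general idea, not the book's own algorithm.

```cpp
#include <cstddef>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical illustration of nominal-to-ordinal conversion: replace each
// category of a nominal variable with the mean target value observed for
// that category in the training data. Not the book's algorithm.
std::map<std::string, double> category_to_ordinal(
        const std::vector<std::string>& categories,  // nominal value, one per training case
        const std::vector<double>& target)           // target value, one per training case
{
    std::map<std::string, double> sum;   // running sum of the target per category
    std::map<std::string, int> count;    // number of cases seen per category

    for (std::size_t i = 0; i < categories.size(); ++i) {
        sum[categories[i]] += target[i];
        ++count[categories[i]];
    }

    std::map<std::string, double> mapping;
    for (const auto& kv : sum)
        mapping[kv.first] = kv.second / count[kv.first];  // mean target for this category
    return mapping;
}

int main() {
    // Toy data: a nominal "sector" feature and a numeric target (e.g. a return).
    std::vector<std::string> sector = {"tech", "energy", "tech", "retail", "energy"};
    std::vector<double> ret = {0.03, -0.01, 0.05, 0.00, 0.01};

    for (const auto& kv : category_to_ordinal(sector, ret))
        std::cout << kv.first << " -> " << kv.second << '\n';
    return 0;
}
```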
All algorithms are intuitively justified and supported by the relevant equations and explanatory material. The author also presents and explains complete, highly commented source code.
The example code is in C++ and CUDA C, but Python or another language can be substituted; what matters is the algorithm, not the language used to implement it.
What You Will Learn
- Combine principal component analysis with forward and backward stepwise selection to identify a compact subset of a large collection of variables that captures the maximum possible variation within the entire set.
- Identify features that may have predictive power over only a small subset of the feature domain. Such features can be profitably used by modern predictive models but may be missed by other feature selection methods.
- Find an underlying hidden Markov model that controls the distributions of feature variables and the target simultaneously. The memory inherent in this method is especially valuable in high-noise applications such as prediction of financial markets.
- Improve traditional stepwise selection in three ways: examine a collection of 'best-so-far' feature sets; test candidate features for inclusion with cross validation to automatically and effectively limit model complexity; and at each step estimate the probability that the results so far, or the improvement obtained by adding a new variable, could be just the product of random good luck. A minimal sketch of the basic forward stepwise loop follows this list.
- Take a potentially valuable nominal variable (a category or class membership) that is unsuitable for input to a prediction model, and assign to each category a sensible numeric value that can be used as a model input.
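To make the stepwise improvements more concrete, here is a minimal sketch of the plain forward stepwise loop with cross validation as the inclusion criterion, under the assumption that a caller-supplied scoring callback performs the k-fold evaluation. The names forward_stepwise, FeatureSet, CvScorer, and cv_score are hypothetical; the sketch omits the 'best-so-far' collection and the luck-probability tests, and it is not the book's implementation.

```cpp
#include <cstddef>
#include <functional>
#include <set>

// Hypothetical sketch of a greedy forward stepwise loop that uses a
// user-supplied cross-validation scorer (higher is better) as the
// inclusion criterion.
using FeatureSet = std::set<std::size_t>;
using CvScorer   = std::function<double(const FeatureSet&)>;  // e.g. k-fold CV of some model

FeatureSet forward_stepwise(std::size_t n_features, const CvScorer& cv_score)
{
    FeatureSet selected;
    double best_score = -1e300;                   // CV score of the currently selected set

    for (;;) {
        std::size_t best_candidate = n_features;  // sentinel meaning "none found"
        double best_trial = best_score;

        for (std::size_t f = 0; f < n_features; ++f) {
            if (selected.count(f))
                continue;                         // feature already in the model
            FeatureSet trial = selected;
            trial.insert(f);                      // tentatively add candidate f
            double score = cv_score(trial);       // cross-validated merit of the trial set
            if (score > best_trial) {
                best_trial = score;
                best_candidate = f;
            }
        }

        if (best_candidate == n_features)
            break;                                // no candidate improves the CV score: stop
        selected.insert(best_candidate);
        best_score = best_trial;
    }
    return selected;
}
```

Because each candidate is judged by its cross-validated score rather than its in-sample fit, a variable that merely memorizes noise does not improve the criterion, which is how cross validation limits model complexity in this scheme.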
Who This Book Is For
Intermediate to advanced data science programmers and analysts. C++ and CUDA C experience is highly recommended. However, the book can serve as a framework for implementations in other languages such as Python.
About the Author
Timothy Masters has a PhD in statistics and is an experienced programmer. His dissertation was in image analysis. His career moved in the direction of signal processing, and for the last 25 years he's been involved in the development of automated trading systems in various financial markets.