Linear Algebra and Optimization for Machine Learning: A Textbook (Hardcover)
Aggarwal, Charu C.
- Publisher: Springer
- Publication date: 2020-05-13
- List price: $2,600
- Member price: $2,470 (5% off)
- Language: English
- Pages: 516
- Binding: Hardcover (also called cloth, retail trade, or trade)
- ISBN: 3030403432
- ISBN-13: 9783030403430
Related categories:
Machine Learning, Linear Algebra

Other editions:
Linear Algebra and Optimization for Machine Learning: A Textbook (Paperback)
In stock (ships immediately)
Customers who bought this item also bought:
- $1,962 Introduction to Machine Learning with Python: A Guide for Data Scientists (Paperback)
- $1,980 The Data Science Design Manual (Texts in Computer Science)
- $2,070 Machine Learning with Python Cookbook: Practical Solutions from Preprocessing to Deep Learning
- $1,911 An Introduction to Categorical Data Analysis, 3/e (Hardcover)
- $352 TensorFlow + Keras 自然語言處理實戰
Product Description
This textbook introduces linear algebra and optimization in the context of machine learning. Examples and exercises are provided throughout, together with access to a solutions manual. The book targets graduate-level students and professors in computer science, mathematics, and data science; advanced undergraduate students can also use it. The chapters are organized as follows:
1. Linear algebra and its applications: These chapters focus on the basics of linear algebra together with their common applications to singular value decomposition, matrix factorization, similarity matrices (kernel methods), and graph analysis. Numerous machine learning applications are used as examples, such as spectral clustering, kernel-based classification, and outlier detection. The tight integration of linear algebra methods with examples from machine learning differentiates this book from generic volumes on linear algebra. The focus is clearly on the most relevant aspects of linear algebra for machine learning and on teaching readers how to apply these concepts (an illustrative SVD sketch appears after this list).
2. Optimization and its applications: Much of machine learning is posed as an optimization problem in which we try to maximize the accuracy of regression and classification models. The "parent problem" of optimization-centric machine learning is least-squares regression. Interestingly, this problem arises in both linear algebra and optimization, and is one of the key problems connecting the two fields; a small worked sketch of this connection also appears after the list. Least-squares regression is also the starting point for support vector machines, logistic regression, and recommender systems. Furthermore, the methods for dimensionality reduction and matrix factorization also require the development of optimization methods. A general view of optimization in computational graphs is discussed, together with its applications to backpropagation in neural networks.
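To make point 1 concrete, here is a minimal NumPy sketch, not taken from the book, of one of the listed applications: a rank-k approximation of a data matrix via the singular value decomposition. The matrix dimensions and the choice k = 5 are arbitrary illustrative assumptions.

```python
import numpy as np

# Illustrative only: SVD-based low-rank approximation of a hypothetical data matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))           # hypothetical 100 x 20 data matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 5                                        # keep the 5 largest singular values
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # best rank-k approximation in Frobenius norm

# The approximation error equals the energy in the discarded singular values.
print(np.linalg.norm(A - A_k, "fro"), np.sqrt(np.sum(s[k:] ** 2)))
```

The same truncated decomposition is one instance of the matrix-factorization applications mentioned above.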
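The "connecting problem" of point 2 can be sketched just as briefly: the least-squares objective ||Xw - y||^2 has a closed-form solution via the normal equations (the linear-algebra view) and can equally be minimized by gradient descent (the optimization view). The data, step size, and iteration count below are hypothetical choices for illustration only.

```python
import numpy as np

# Hypothetical data for the least-squares problem: minimize ||Xw - y||^2 over w.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.01 * rng.standard_normal(200)

# Linear-algebra view: solve the normal equations X^T X w = X^T y.
w_normal = np.linalg.solve(X.T @ X, X.T @ y)

# Optimization view: minimize the same objective by gradient descent.
w = np.zeros(3)
lr = 0.01
for _ in range(2000):
    grad = 2 * X.T @ (X @ w - y) / len(y)    # gradient of the mean squared error
    w -= lr * grad

print(np.allclose(w_normal, w, atol=1e-3))   # both views recover (nearly) the same w
```

Both routes arrive at essentially the same coefficients, which is the sense in which least-squares regression links the two fields.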
A frequent challenge faced by beginners in machine learning is the extensive background required in linear algebra and optimization. One problem is that existing linear algebra and optimization courses are not specific to machine learning; therefore, one would typically have to complete more course material than is necessary to pick up machine learning. Furthermore, certain types of ideas and tricks from optimization and linear algebra recur more frequently in machine learning than in other application-centric settings. There is therefore significant value in developing a view of linear algebra and optimization that is better suited to the specific perspective of machine learning.
About the Author
Charu C. Aggarwal is a Distinguished Research Staff Member (DRSM) at the IBM T. J. Watson Research Center in Yorktown Heights, New York. He completed his undergraduate degree in Computer Science from the Indian Institute of Technology at Kanpur in 1993 and his Ph.D. in Operations Research from the Massachusetts Institute of Technology in 1996. He has published more than 400 papers in refereed conferences and journals and has applied for or been granted more than 80 patents. He is the author or editor of 19 books, including textbooks on data mining, neural networks, machine learning (for text), recommender systems, and outlier analysis. Because of the commercial value of his patents, he has thrice been designated a Master Inventor at IBM. He has received several internal and external awards, including the EDBT Test-of-Time Award (2014), the IEEE ICDM Research Contributions Award (2015), and the ACM SIGKDD Innovation Award (2019). He has served as editor-in-chief of ACM SIGKDD Explorations and is currently serving as an editor-in-chief of ACM Transactions on Knowledge Discovery from Data. He is a fellow of SIAM, ACM, and IEEE for "contributions to knowledge discovery and data mining algorithms."