Optimization for Machine Learning (Hardcover)
Suvrit Sra, Sebastian Nowozin, Stephen J. Wright
- Publisher: MIT Press
- Publication date: 2011-09-30
- List price: $2,240
- Member price: 5% off, $2,128
- Language: English
- Pages: 512
- Binding: Hardcover
- ISBN: 026201646X
- ISBN-13: 9780262016469
Related categories:
Machine Learning
Description
The interplay between optimization and machine learning is one of the most important developments in modern computational science. Optimization formulations and methods are proving to be vital in designing algorithms to extract essential knowledge from huge volumes of data. Machine learning, however, is not simply a consumer of optimization technology but a rapidly evolving field that is itself generating new optimization ideas. This book captures the state of the art of the interaction between optimization and machine learning in a way that is accessible to researchers in both fields.

Optimization approaches have enjoyed prominence in machine learning because of their wide applicability and attractive theoretical properties. The increasing complexity, size, and variety of today's machine learning models call for the reassessment of existing assumptions. This book starts the process of reassessment. It describes the resurgence in novel contexts of established frameworks such as first-order methods, stochastic approximations, convex relaxations, interior-point methods, and proximal methods. It also devotes attention to newer themes such as regularized optimization, robust optimization, gradient and subgradient methods, splitting techniques, and second-order methods. Many of these techniques draw inspiration from other fields, including operations research, theoretical computer science, and subfields of optimization. The book will enrich the ongoing cross-fertilization between the machine learning community and these other fields, and within the broader optimization community.
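To give a concrete flavor of the first-order and proximal methods the book surveys (in the context of regularized optimization), here is a minimal sketch of proximal gradient descent — the ISTA iteration — applied to the lasso problem. The problem data, step size rule, and iteration count below are illustrative choices, not taken from the book:

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1: shrinks each entry toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, step, n_iters=500):
    """Proximal gradient (ISTA) for the lasso:
       minimize 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                         # gradient of smooth term
        x = soft_threshold(x - step * grad, step * lam)  # proximal step on the l1 term
    return x

# Small synthetic problem: recover a sparse vector from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, with L the Lipschitz constant of the gradient
x_hat = ista(A, b, lam=0.1, step=step)
```

The soft-thresholding step is what distinguishes proximal gradient from plain gradient descent: it handles the nonsmooth `l1` penalty in closed form, so each iteration costs only a matrix–vector product plus an elementwise shrinkage.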