Optimization for Machine Learning
Edited by Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright
- Publisher: MIT Press
- Publication date: 2011-09-30
- List price: $2,700
- VIP price: 5% off, $2,565
- Language: English
- Pages: 512
- Binding: Trade paperback (quality paper)
- ISBN: 0262537761
- ISBN-13: 9780262537766
Related categories:
Machine Learning
Overseas special-order titles (checked out separately)
Customers who bought this item also bought...
- $1,274 The Intel Microprocessors, 8/e (IE-Paperback) (older stock; mildew spots on the fore-edge)
- $1,098 Neural Networks and Learning Machines, 3/e (IE-Paperback)
- $1,225 Computer Vision: A Modern Approach, 2/e (IE-Paperback)
- $403 自製編程語言
- $352 自己動手構造編譯系統:編譯、彙編與鏈接
- $1,343 Fundamentals of Database Systems, 7/e (IE-Paperback)
- $267 MATLAB/Simulink 系統模擬
Description
An up-to-date account of the interplay between optimization and machine learning, accessible to students and researchers in both communities.
The interplay between optimization and machine learning is one of the most important developments in modern computational science. Optimization formulations and methods are proving to be vital in designing algorithms to extract essential knowledge from huge volumes of data. Machine learning, however, is not simply a consumer of optimization technology but a rapidly evolving field that is itself generating new optimization ideas. This book captures the state of the art of the interaction between optimization and machine learning in a way that is accessible to researchers in both fields.
Optimization approaches have enjoyed prominence in machine learning because of their wide applicability and attractive theoretical properties. The increasing complexity, size, and variety of today's machine learning models call for the reassessment of existing assumptions, and this book starts that process. It describes the resurgence, in novel contexts, of established frameworks such as first-order methods, stochastic approximations, convex relaxations, interior-point methods, and proximal methods. It also devotes attention to newer themes such as regularized optimization, robust optimization, gradient and subgradient methods, splitting techniques, and second-order methods. Many of these techniques draw inspiration from other fields, including operations research, theoretical computer science, and subfields of optimization. The book will enrich the ongoing cross-fertilization between the machine learning community and these other fields, and within the broader optimization community.
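As a concrete illustration of one of the proximal methods named above, here is a minimal sketch of proximal gradient descent (ISTA) applied to the lasso problem, minimize (1/2)||Ax - b||^2 + lam * ||x||_1. This is not code from the book; the function names, problem sizes, and parameter values are illustrative assumptions.

```python
# Illustrative sketch (not from the book): proximal gradient descent (ISTA)
# for the lasso problem  minimize (1/2)||Ax - b||^2 + lam * ||x||_1.
# All sizes and parameter values below are assumptions chosen for the demo.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iters=500):
    """Minimize (1/2)||Ax - b||^2 + lam * ||x||_1 by proximal gradient steps."""
    # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant of the
    # gradient of the smooth term.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                   # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)  # proximal (shrinkage) step
    return x

# Usage on synthetic data with a sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))
x_true = np.zeros(50)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = ista(A, b, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 1e-3))  # indices of the recovered support
```

The split into a gradient step on the smooth term followed by a proximal step on the nonsmooth regularizer is the basic pattern behind many of the splitting and proximal techniques the book surveys.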