Log-Linear Models, Extensions, and Applications

Aleksandr Aravkin, Anna Choromanska, Li Deng

  • Publisher: MIT
  • Publication date: 2024-12-03
  • List price: $4,080
  • VIP price: $3,876 (95% of list price)
  • Language: English
  • Pages: 214
  • Binding: Quality Paper (trade paper)
  • ISBN: 0262553465
  • ISBN-13: 9780262553469
  • Imported title (must be checked out separately)

Product Description

Advances in training models with log-linear structures, covering topics including variable selection, the geometry of neural nets, and applications.

Log-linear models play a key role in modern big data and machine learning applications. From simple binary classification models through partition functions, conditional random fields, and neural nets, log-linear structure is closely related to performance in certain applications and influences the fitting techniques used to train models. This volume covers recent advances in training models with log-linear structures, including the underlying geometry, optimization techniques, and multiple applications. The first chapter shows readers the inner workings of machine learning, providing insights into the geometry of log-linear and neural net models. The remaining chapters range from introductory material to optimization techniques to involved use cases. The book, which grew out of a NIPS workshop, is suitable for graduate students doing research in machine learning, in particular deep learning, variable selection, and applications to speech recognition. The contributors come from academia and industry, allowing readers to view the field from both perspectives.
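
For readers new to the term, here is a minimal sketch of what "log-linear structure" refers to, written in standard notation (θ for parameters, φ for a feature map, Z for the partition function) rather than notation taken from the book itself. A conditional log-linear model assigns

\[
p(y \mid x;\theta) = \frac{\exp\!\big(\theta^{\top}\phi(x,y)\big)}{Z(x;\theta)},
\qquad
Z(x;\theta) = \sum_{y'} \exp\!\big(\theta^{\top}\phi(x,y')\big),
\]

so that \(\log p(y \mid x;\theta)\) is linear in \(\theta\) up to the normalizing term \(\log Z(x;\theta)\). Binary logistic regression and conditional random fields are special cases of this form.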

Contributors
Aleksandr Aravkin, Avishy Carmi, Guillermo A. Cecchi, Anna Choromanska, Li Deng, Xinwei Deng, Jean Honorio, Tony Jebara, Huijing Jiang, Dimitri Kanevsky, Brian Kingsbury, Fabrice Lambert, Aurélie C. Lozano, Daniel Moskovich, Yuriy S. Polyakov, Bhuvana Ramabhadran, Irina Rish, Dimitris Samaras, Tara N. Sainath, Hagen Soltau, Serge F. Timashev, Ewout van den Berg

About the Authors

Aleksandr Aravkin is Assistant Professor of Applied Mathematics at the University of Washington.

Anna Choromanska is Assistant Professor at New York University's Tandon School of Engineering.

Li Deng is Chief Artificial Intelligence Officer of Citadel.

Georg Heigold is Research Scientist at Google.

Tony Jebara is Associate Professor of Computer Science at Columbia University.

Dimitri Kanevsky is Research Scientist at Google.

Stephen J. Wright is Professor of Computer Science at the University of Wisconsin-Madison.
