Semi-Supervised Learning and Domain Adaptation in Natural Language Processing (Synthesis Lectures on Human Language Technologies)

Anders Søgaard

  • Publisher: Morgan & Claypool
  • Publication date: 2013-05-01
  • List price: $1,290
  • VIP price: $1,226 (95% of list)
  • Language: English
  • Pages: 104
  • Binding: Paperback
  • ISBN: 1608459853
  • ISBN-13: 9781608459858
  • Imported from overseas (requires separate checkout)

Description

This book introduces basic supervised learning algorithms applicable to natural language processing (NLP) and shows how their performance can often be improved by exploiting the marginal distribution of large amounts of unlabeled data. One reason unlabeled data can help is data sparsity, i.e., the limited amount of labeled data available in NLP. In most real-world NLP applications, however, our labeled data is also heavily biased. This book introduces extensions of supervised learning algorithms that cope with data sparsity and different kinds of sampling bias.

The book is intended to be both readable by first-year students and interesting to an expert audience. My intention was to introduce what is necessary to appreciate the major challenges we face in contemporary NLP related to data sparsity and sampling bias, without spending too much time on the details of supervised learning algorithms or particular NLP applications. I use text classification, part-of-speech tagging, and dependency parsing as running examples, and limit myself to a small set of cardinal learning algorithms. I have worried less about theoretical guarantees ("this algorithm never does too badly") than about useful rules of thumb ("in this case this algorithm may perform really well"). In NLP, data is so noisy, biased, and non-stationary that few theoretical guarantees can be established, and we are typically left with our gut feelings and a catalogue of crazy ideas. I hope this book will provide its readers with both.

Throughout the book we include snippets of Python code and empirical evaluations where relevant.
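The central semi-supervised idea in the description, letting a supervised learner exploit unlabeled text, is often illustrated with self-training: the model labels the unlabeled pool and retrains on its own confident predictions. Below is a minimal sketch in Python (the book itself uses Python snippets). The toy documents, the scikit-learn classifier, and the 0.6 confidence threshold are illustrative assumptions, not code from the book.

```python
# Minimal self-training sketch: one classic way to exploit unlabeled data
# on top of a supervised text classifier. All data here is a toy example.
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_docs = ["good great film", "wonderful acting", "terrible plot", "awful boring movie"]
y = np.array([1, 1, 0, 0])  # 1 = positive, 0 = negative
unlabeled_docs = ["great wonderful movie", "boring terrible acting", "good film", "awful plot"]

# Fit the vectorizer on all text so the feature space also covers the unlabeled pool.
vec = TfidfVectorizer()
X_all = vec.fit_transform(labeled_docs + unlabeled_docs)
X, U = X_all[: len(labeled_docs)], X_all[len(labeled_docs):]

clf = LogisticRegression()
for _ in range(5):  # a few self-training rounds
    clf.fit(X, y)
    if U.shape[0] == 0:
        break
    proba = clf.predict_proba(U)
    pseudo = clf.classes_[np.argmax(proba, axis=1)]  # model's own labels
    confident = proba.max(axis=1) >= 0.6  # the threshold is a tunable assumption
    if not confident.any():
        break
    # Promote confidently self-labeled examples into the training set.
    X = vstack([X, U[confident]])
    y = np.concatenate([y, pseudo[confident]])
    U = U[~confident]

print(clf.predict(vec.transform(["great film", "boring plot"])))
```

In practice the confidence threshold and the number of rounds trade off pseudo-label noise against how much of the unlabeled pool the learner gets to use.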

Table of Contents:
Introduction / Supervised and Unsupervised Prediction / Semi-Supervised Learning / Learning under Bias / Learning under Unknown Bias / Evaluating under Bias
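The "Learning under Bias" chapters concern labeled data drawn from a different distribution than the data we evaluate on. One standard remedy in this literature is importance weighting: upweight training examples that look like the target domain. The sketch below is a generic illustration of that idea under a covariate-shift assumption, using synthetic data and a logistic-regression domain discriminator; it is not the book's own presentation.

```python
# Importance weighting under covariate shift: reweight biased (source) training
# examples by how likely they are under the target distribution. The synthetic
# data and the discriminator-based weight estimate are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Source (labeled, biased sample) and target (unlabeled) inputs differ in mean.
X_src = rng.normal(loc=-1.0, scale=1.0, size=(500, 2))
y_src = (X_src[:, 0] + X_src[:, 1] > -2.0).astype(int)
X_tgt = rng.normal(loc=1.0, scale=1.0, size=(500, 2))

# 1) Train a domain discriminator: source (0) vs. target (1).
X_dom = np.vstack([X_src, X_tgt])
d_dom = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])
disc = LogisticRegression().fit(X_dom, d_dom)

# 2) Importance weights w(x) ~ p(target|x) / p(source|x) for each source example.
p_tgt = disc.predict_proba(X_src)[:, 1]
w = p_tgt / np.clip(1.0 - p_tgt, 1e-6, None)

# 3) Fit the task classifier on the reweighted source data.
clf = LogisticRegression().fit(X_src, y_src, sample_weight=w)
print(clf.predict(X_tgt[:5]))
```

The ratio p(target|x) / p(source|x) is the standard importance weight under covariate shift; clipping the denominator keeps extreme weights from destabilizing training.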