Knowledge-Augmented Methods for Natural Language Processing

Meng Jiang, Bill Yuchen Lin, Shuohang Wang

  • Publisher: Springer
  • Publication date: 2024-04-11
  • List price: $2,010
  • VIP price: $1,910 (95% of list price)
  • Language: English
  • Binding: Hardcover
  • ISBN: 9819707463
  • ISBN-13: 9789819707461
  • Imported title, ordered from overseas (requires separate checkout)


Product Description

Over the last few years, natural language processing has seen remarkable progress driven by larger-scale models, better training techniques, and greater availability of data. Examples of these advances include GPT-4, ChatGPT, and other pre-trained language models, which can characterize linguistic patterns and generate context-aware representations, resulting in high-quality output. However, these models rely solely on input-output pairs during training and therefore struggle to incorporate external world knowledge, such as named entities and their relations, common sense, and domain-specific content. Incorporating knowledge into the training and inference of language models is critical to their ability to represent language accurately, and knowledge is essential for achieving levels of intelligence that cannot be attained through statistical learning of input text patterns alone. This book reviews recent developments in natural language processing, focusing on the role of knowledge in language representation: it examines how pre-trained language models such as GPT-4 and ChatGPT are limited in their ability to capture external world knowledge, and it explores various approaches to incorporating knowledge into language models. Overall, this survey aims to provide insight into the importance of knowledge in natural language processing and to highlight recent advances in the field.


About the Authors

Dr. Meng Jiang is currently an assistant professor in the Department of Computer Science and Engineering at the University of Notre Dame. He obtained his B.E. and Ph.D. from Tsinghua University, spent two years at UIUC as a postdoc, and joined Notre Dame in 2017. His research interests include data mining, machine learning, and natural language processing, and he has published more than 100 peer-reviewed papers on these topics. He is the recipient of the Notre Dame International Faculty Research Award. His honors and awards include Best Paper Finalist at KDD 2014, the Best Paper Award at KDD-DLG 2020, and an ACM SIGSOFT Distinguished Paper Award at ICSE 2021. He received the NSF CRII Award in 2019 and the NSF CAREER Award in 2022.

Bill Yuchen Lin is a postdoctoral young investigator at the Allen Institute for AI (AI2), advised by Prof. Yejin Choi. He received his Ph.D. from the University of Southern California in 2022, advised by Prof. Xiang Ren. His research goal is to teach machines to think, talk, and act with commonsense knowledge and commonsense reasoning ability, as humans do. Toward this goal, he has been developing knowledge-augmented reasoning methods (e.g., KagNet, MHGRN, DrFact) and constructing benchmark datasets (e.g., CommonGen, RiddleSense, X-CSR) that require commonsense knowledge and complex reasoning for both NLU and NLG. He initiated an online compendium of commonsense reasoning research, which serves as a portal for the community.

Dr. Shuohang Wang is a senior researcher on the Knowledge and Language Team of the Cognitive Services Research Group. His research focuses mainly on question answering, multilingual NLU, summarization with deep learning, reinforcement learning, and few-shot learning. He has served as an area chair or senior PC member for ACL, EMNLP, and AAAI, and he co-organized the AAAI'23 workshop on Knowledge-Augmented Methods for NLP.

Dr. Yichong Xu is a senior researcher on the Knowledge and Language Team of the Cognitive Services Research Group. His research focuses on combining knowledge with NLP, with applications to question answering, summarization, and multimodal learning. He led the effort to achieve human parity on the CommonsenseQA benchmark, and he has given tutorials on knowledge-augmented NLP methods at ACL and WSDM. Prior to joining Microsoft, Dr. Xu received his Ph.D. in machine learning from Carnegie Mellon University.

Wenhao Yu is a Ph.D. candidate in the Department of Computer Science and Engineering at the University of Notre Dame. His research lies at the intersection of language models and knowledge, targeting knowledge-intensive applications such as open-domain question answering and commonsense reasoning. He has published over 15 conference papers and presented 3 tutorials at machine learning and natural language processing conferences, including ICLR, ICML, ACL, and EMNLP. He received the Bloomberg Ph.D. Fellowship in 2022 and won the Best Paper Award at SoCal NLP 2022. He has been a research intern at Microsoft Research and the Allen Institute for AI.

Dr. Chenguang Zhu is a principal research manager in the Microsoft Cognitive Services Research Group, where he leads the Knowledge and Language Team. His research covers knowledge-enhanced language models, text summarization, and prompt learning. Dr. Zhu has led teams to achieve human parity on CommonsenseQA, HellaSwag, and CoQA, and first place on CommonGen, FEVER, ARC, and SQuAD v1.0. He holds a Ph.D. in Computer Science from Stanford University and has published over 100 papers on NLP and knowledge-augmented methods. He has held tutorials and workshops on knowledge-augmented NLP at conferences such as ACL, AAAI, and WSDM, and he is the author of the book Machine Reading Comprehension: Algorithms and Practice, published by Elsevier.
