Python 3 Text Processing with NLTK 3 Cookbook (Paperback)
Jacob Perkins
- Publisher: Packt Publishing
- Publication date: 2014-08-26
- List price: $1,600
- Sale price: 20% off, $1,280
- Language: English
- Pages: 310
- Binding: Paperback
- ISBN: 1782167854
- ISBN-13: 9781782167853
Categories:
Python, Programming Languages
Product Description
Over 80 practical recipes on natural language processing techniques using Python's NLTK 3.0
About This Book
- Break text down into its component parts for spelling correction, feature extraction, and phrase transformation
- Learn how to do custom sentiment analysis and named entity recognition
- Work through natural language processing concepts with simple, easy-to-follow programming recipes
Who This Book Is For
This book is intended for Python programmers interested in learning how to do natural language processing. Maybe you've learned the limits of regular expressions the hard way, or you've realized that human language cannot be deterministically parsed like a computer language. Perhaps you have more text than you know what to do with, and need automated ways to analyze and structure that text. This Cookbook will show you how to train and use statistical language models to process text in ways that are practically impossible with standard programming tools. A basic knowledge of Python and basic text processing concepts is expected. Some experience with regular expressions will also be helpful.
In Detail
This book will show you the essential techniques of text and language processing. Starting with tokenization, stemming, and the WordNet dictionary, you'll progress to part-of-speech tagging, phrase chunking, and named entity recognition. You'll learn how various text corpora are organized, as well as how to create your own custom corpus. Then, you'll move on to text classification with a focus on sentiment analysis. And because NLP can be computationally expensive on large bodies of text, you'll try a few methods for distributed text processing. Finally, you'll be introduced to a number of other small but complementary Python libraries for text analysis, cleaning, and parsing.
This cookbook provides simple, straightforward examples so you can quickly learn text processing with Python and NLTK.
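As a taste of the book's opening topics, here is a minimal sketch of tokenization and stemming with NLTK (assuming NLTK is installed via `pip install nltk`; the sample sentence and variable names are illustrative, not taken from the book):

```python
from nltk.tokenize import WordPunctTokenizer  # regex-based, needs no corpus download
from nltk.stem import PorterStemmer

# Split a sentence into word and punctuation tokens
tokenizer = WordPunctTokenizer()
tokens = tokenizer.tokenize("The cats are running quickly.")
print(tokens)  # ['The', 'cats', 'are', 'running', 'quickly', '.']

# Reduce each token to its stem with the Porter algorithm
stemmer = PorterStemmer()
stems = [stemmer.stem(t) for t in tokens]
print(stems)
```

Later chapters build on exactly these primitives, swapping in trainable taggers and chunkers where rule-based tools fall short.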