GPT-4 Complete: A comprehensive technical guide to the new OpenAI model
Huck, Mark
- Publisher: Independently Published
- Publication date: 2023-04-03
- List price: $1,110
- VIP price: $1,055 (5% off)
- Language: English
- Pages: 216
- Binding: Quality Paper (trade paperback)
- ISBN-13: 9798390009154
Related categories: ChatGPT, Artificial Intelligence, Text-mining
Unavailable for order
Description
"#1 New Release in Artificial Intelligence" (Amazon, April 2023) ... Here is the definitive technical guide to GPT-4 as well as its loquacious counterpart, ChatGPT. Along with step-by-step examples for prompt engineering and fine tuning, the book looks at the current discussions around the technology's promise and peril. Includes a 2-year subscription to GPTAnalytica's PromptBuilder tool. Contents:
=============================
1 Preface
2 A short history of intelligence
. . 2.1 What is "intelligence"?
. . 2.2 Intelligence and humans
. . 2.3 Intelligence and computing
. . 2.4 Artificial intelligence
. . 2.5 Generative AI
. . 2.6 Conversant AI
. . 2.7 The Promethean Moment
3 Models and sources
. . 3.1 Natural Language Processing (NLP)
. . 3.2 Language Modeling (LM)
. . 3.3 Pre-GPT Language Models
. . 3.4 GPT Language Models
. . . . 3.4.1 From data to training set
. . . . 3.4.2 Limitations and bias
. . 3.5 Common Crawl
. . 3.6 WebText data set
. . . . 3.6.1 Test set
. . 3.7 Wikipedia
. . 3.8 Quality of sources
4 GPT-3
. . 4.1 Tokens
. . 4.2 Parameters
. . 4.3 GPT-3 and ChatGPT
5 GPT-4
6 ChatGPT
7 Using GPT and ChatGPT in OpenAI
. . 7.1 Playground
. . . . 7.1.1 Mode
. . . . 7.1.2 Model
. . . . 7.1.3 Temperature
. . 7.2 ChatGPT playground
. . 7.3 Get your API key
. . 7.4 Programmatic use of OpenAI
. . . . 7.4.1 Import the openai library
. . . . 7.4.2 An example chat API call
8 OpenAI via Python
9 OpenAI via Node.js
10 OpenAI .NET API
11 Prompt engineering
. . 11.1 Misunderstanding in human communication
. . 11.2 Misunderstanding in ChatGPT
. . 11.3 Model capabilities depend on context
. . 11.4 How to improve reliability on complex tasks
. . . . 11.4.1 Provide quality data
. . . . 11.4.2 Check your settings
. . . . 11.4.3 Use plain language to describe your inputs and outputs
. . . . 11.4.4 Show the API how to respond to any case
. . . . 11.4.5 Add context
. . . . 11.4.6 Include helpful information up-front
. . . . 11.4.7 Give examples
. . . . 11.4.8 Length of response
. . . . 11.4.9 Define a role
. . . . 11.4.10 Be more specific
. . . . 11.4.11 Divide a complex task into simpler tasks
. . . . 11.4.12 Prompt the model to explain before answering
. . . . 11.4.13 Ask for explanations before the answer
12 Fine tuning with a custom dataset
. . 12.1 Extract data into a csv file
. . 12.2 Check the headers in OpenAI
. . 12.3 Playground
. . 12.4 Create Prompt and Completion Pairs
. . 12.5 Prepare for GPT
. . 12.6 Fine-tune a GPT model with your data
. . 12.7 Interact with your fine-tuned model
13 Robust fine tuning
. . 13.1 Creating a robust, fine-tuned GPT model
. . . . 13.1.1 Step 1: Data preparation
. . . . 13.1.2 Step 2: Model architecture selection
. . . . 13.1.3 Step 3: Model training
. . . . 13.1.4 Step 4: Model evaluation
14 Self-taught reasoner
15 Data retrieval plug-in
. . 15.1 Plugins
. . 15.2 Retrieval Plugin
. . 15.3 Memory Feature
. . 15.4 Security
. . 15.5 API Endpoints
. . 15.6 Quickstart
16 Additional techniques
. . 16.1 Selection-inference prompting
. . 16.2 Faithful reasoning architecture
. . 16.3 Least-to-most prompting
17 Act-as prompts
18 Prompt templates
19 Template libraries
20 Prompt generators
21 GPTAnalytica PromptBuilder (user guide)
=============================
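The outline above names the topics only. As a rough illustration of the kind of request covered in sections 7.4.2 and 8 (a chat API call from Python), here is a minimal sketch, not taken from the book. It assumes the pre-1.0 openai Python package (the ChatCompletion interface that was current when the book was published in April 2023) and an API key supplied through the OPENAI_API_KEY environment variable; the model name, messages, and temperature value are placeholder choices.

# Minimal chat API call with the openai Python library (pre-1.0 interface).
# Assumes: `pip install "openai<1.0"` and OPENAI_API_KEY set in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code the key

response = openai.ChatCompletion.create(
    model="gpt-4",  # placeholder; "gpt-3.5-turbo" also works with this interface
    messages=[
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user", "content": "Explain what a token is in one sentence."},
    ],
    temperature=0.7,  # corresponds to the Playground's Temperature setting
)

# The reply text is nested under choices -> message -> content.
print(response["choices"][0]["message"]["content"])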