Pattern Recognition with Neural Networks in C++
Tentative Chinese title: 使用 C++ 的神經網絡模式識別

Pandya, Abhijit S., Macy, Robert B.

  • Publisher: CRC
  • Publication date: 2019-12-17
  • List price: $2,830
  • VIP price: $2,689 (95% of list)
  • Language: English
  • Pages: 432
  • Binding: Quality Paper - also called trade paper
  • ISBN: 0367448874
  • ISBN-13: 9780367448875
  • Related categories: C++ programming language
  • Imported title, ordered from overseas (must be checked out separately)

Product Description

The addition of artificial neural network computing to traditional pattern recognition has given rise to a new, different, and more powerful methodology that is presented in this book, a practical guide to the application of artificial neural networks. Geared toward the practitioner, Pattern Recognition with Neural Networks in C++ covers pattern classification and neural network approaches within the same framework. Through the book's presentation of underlying theory and numerous practical examples, readers gain an understanding that will allow them to make judicious design choices, rendering neural applications predictable and effective. The book provides an intuitive explanation of each method for each network paradigm, supported by a rigorous mathematical approach where necessary.

C++ has emerged as a rich and descriptive means by which concepts, models, and algorithms can be precisely described. For many of the neural network models discussed, C++ programs are presented for the actual implementation. Pictorial diagrams and in-depth discussions explain each topic. The necessary derivation steps for the mathematical models are included so that readers can incorporate new ideas into their programs as the field advances. For each approach, the authors clearly state the known theoretical results, the known tendencies of the approach, and their recommendations for getting the best results from the method.

The material covered in the book is accessible to working engineers with little or no explicit background in neural networks, yet it is presented in sufficient depth that readers with prior knowledge will also find it beneficial. Pattern Recognition with Neural Networks in C++ is also suitable for courses in neural networks at the advanced undergraduate or graduate level, and it is valuable for academic as well as practical research.
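
The blurb notes that working C++ programs accompany many of the models discussed. None of that code appears in this listing; the short program below is a sketch written here purely for illustration of the kind of material involved: a feed-forward multilayer perceptron trained with back propagation on the XOR problem. The 2-4-1 layer sizes, the learning rate, the random seed, and all identifiers are arbitrary choices of this listing, not taken from the book.

// Minimal feed-forward multilayer perceptron (2-4-1, sigmoid units) trained
// with plain back propagation on XOR. Illustrative sketch only: names,
// layer sizes, and hyper-parameters are our own, not code from the book.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

static double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

int main() {
    const int I = 2, H = 4;          // input and hidden layer sizes
    const double lr = 0.5;           // learning rate (arbitrary illustrative value)
    std::mt19937 rng(42);            // fixed seed for reproducibility
    std::uniform_real_distribution<double> init(-0.5, 0.5);

    // Weights: w1[i][j] input->hidden, w2[j] hidden->output, plus biases.
    std::vector<std::vector<double>> w1(I, std::vector<double>(H));
    std::vector<double> b1(H), w2(H);
    double b2 = init(rng);
    for (auto& row : w1) for (auto& w : row) w = init(rng);
    for (auto& b : b1) b = init(rng);
    for (auto& w : w2) w = init(rng);

    const double X[4][2] = {{0,0},{0,1},{1,0},{1,1}};
    const double T[4]    = {0, 1, 1, 0};

    std::vector<double> h(H);
    for (int epoch = 0; epoch < 20000; ++epoch) {
        for (int p = 0; p < 4; ++p) {
            // Forward pass.
            for (int j = 0; j < H; ++j) {
                double a = b1[j];
                for (int i = 0; i < I; ++i) a += X[p][i] * w1[i][j];
                h[j] = sigmoid(a);
            }
            double a2 = b2;
            for (int j = 0; j < H; ++j) a2 += h[j] * w2[j];
            double y = sigmoid(a2);

            // Backward pass: squared-error gradient through sigmoid units.
            double dy = (y - T[p]) * y * (1.0 - y);
            for (int j = 0; j < H; ++j) {
                double dh = dy * w2[j] * h[j] * (1.0 - h[j]);
                w2[j] -= lr * dy * h[j];
                for (int i = 0; i < I; ++i) w1[i][j] -= lr * dh * X[p][i];
                b1[j] -= lr * dh;
            }
            b2 -= lr * dy;
        }
    }

    // Report the trained network's outputs on the four XOR patterns.
    for (int p = 0; p < 4; ++p) {
        for (int j = 0; j < H; ++j) {
            double a = b1[j];
            for (int i = 0; i < I; ++i) a += X[p][i] * w1[i][j];
            h[j] = sigmoid(a);
        }
        double a2 = b2;
        for (int j = 0; j < H; ++j) a2 += h[j] * w2[j];
        std::printf("%g XOR %g -> %.3f\n", X[p][0], X[p][1], sigmoid(a2));
    }
    return 0;
}

The sketch compiles with any standard C++ compiler (for example, g++ -std=c++11 xor_mlp.cpp, where the file name is hypothetical) and prints the trained network's outputs for the four XOR patterns.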

Table of Contents

Introduction
Pattern Recognition Systems
Motivation for Artificial Neural Network Approach
A Prelude to Pattern Recognition
Statistical Pattern Recognition
Syntactic Pattern Recognition
The Character Recognition Problem
Organization of Topics
Neural Networks: An Overview
Motivation for Overviewing Biological Neural Networks
Background
Biological Neural Networks
Hierarchical Organization of the Brain
Historical Background
Artificial Neural Networks
Preprocessing
General
Dealing with Input from a Scanned Image
Image Compression
Edge Detection
Skeletonizing
Dealing with Input from a Tablet
Segmentation
Feed Forward Networks with Supervised Learning
Feed-Forward Multilayer Perceptron (FFMLP) Architecture
FFMLP in C++
Training with Back Propagation
A Primitive Example
Training Strategies and Avoiding Local Minima
Variations on Gradient Descent
Topology
ACON vs. OCON
Overtraining and Generalization
Training Set Size and Network Size
Conjugate Gradient Method
ALOPEX
Some Other Types of Neural Networks
General
Radial Basis Function Networks
Higher Order Neural Networks
Feature Extraction I: Geometric Features and Transformations
General
Geometric Features (Loops, Intersections and Endpoints)
Feature Maps
A Network Example Using Geometric Features
Feature Extraction Using Transformations
Fourier Descriptors
Gabor Transformations and Wavelets
Feature Extraction II: Principal Component Analysis
Dimensionality Reduction
Principal Components
Karhunen-Loeve (K-L) Transformation
Principal Component Neural Networks
Applications
Kohonen Networks and Learning Vector Quantization
General
K-Means Algorithm
An Introduction to the Kohonen Model
The Role of Lateral Feedback
Kohonen Self-Organizing Feature Map
Learning Vector Quantization
Variations on LVQ
Neural Associative Memories and Hopfield Networks
