Parallel Architectures For Artificial Neural Networks: Paradigms And Implementations
Provisional Chinese title: 人工神經網路的平行架構:範式與實作
N. Sundararajan, P. Saratchandran
- Publisher: Wiley
- Publication date: 1998-12-14
- Price: $4,890
- Member price: 5% off, $4,646
- Language: English
- Pages: 409
- Binding: Hardcover
- ISBN: 0818683996
- ISBN-13: 9780818683992
Imported title, ordered from overseas (checked out separately)
Description:
This excellent reference for all those involved in neural network research and application presents, in a single text, the essential aspects of parallel implementation for all major artificial neural network models. The book details implementations on various processor architectures (ring, torus, etc.) built on different hardware platforms, ranging from large general-purpose parallel computers to custom-built MIMD machines using transputers and DSPs.
The chapters are authored by the experts who performed the implementations, and each chapter presents their research results. The results fall into three parts:
- Theoretical analysis of parallel implementation schemes on MIMD message-passing machines.
- Details of parallel implementation of backpropagation (BP) neural networks on a large general-purpose parallel computer.
- Four chapters, each describing a special-purpose parallel neural computer configuration.
This book is aimed at graduate students and researchers working in artificial neural networks and parallel computing. Graduate level educators can use it to illustrate the methods of parallel computing for ANN simulation. The text is an ideal reference providing lucid mathematical analyses for practitioners in the field.
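For readers unfamiliar with the paradigms the book surveys, the idea behind training-set parallelism (the subject of Chapter 4) can be illustrated with a minimal sketch that is not taken from the book: each worker holds a full replica of the weights, computes the error gradient on its own slice of the training set, and the partial gradients are summed before a synchronous update. Because the batch gradient is a sum over samples, this is mathematically equivalent to a serial batch step. All function names and the single-layer model here are illustrative.

```python
import numpy as np

def local_gradient(W, X, y):
    """Squared-error gradient for a single linear layer on one data slice."""
    pred = X @ W          # forward pass on this worker's slice
    err = pred - y        # residual on this slice
    return X.T @ err      # dE/dW, summed over the slice's samples

def parallel_step(W, X, y, n_workers, lr=0.01):
    """One synchronous update: split the training set, sum partial gradients."""
    X_parts = np.array_split(X, n_workers)
    y_parts = np.array_split(y, n_workers)
    grad = sum(local_gradient(W, Xp, yp) for Xp, yp in zip(X_parts, y_parts))
    return W - lr * grad

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
true_W = rng.normal(size=(4, 1))
y = X @ true_W

W = np.zeros((4, 1))
W_parallel = parallel_step(W, X, y, n_workers=4)
W_serial = W - 0.01 * local_gradient(W, X, y)   # equivalent serial batch step
print(np.allclose(W_parallel, W_serial))        # the two updates agree
```

The real implementations analyzed in the book must additionally account for communication cost and load balance across heterogeneous processors, which this toy sketch ignores.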
Table of Contents:
1. Introduction (N. Sundararajan, P. Saratchandran, Jim Torresen).
2. A Review of Parallel Implementations of Backpropagation Neural Networks (Jim Torresen, Olav Landsverk).
I: Analysis of Parallel Implementations.
3. Network Parallelism for Backpropagation Neural Networks on a Heterogeneous Architecture (R. Arularasan, P. Saratchandran, N. Sundararajan, Shou King Foo).
4. Training-Set Parallelism for Backpropagation Neural Networks on a Heterogeneous Architecture (Shou King Foo, P. Saratchandran, N. Sundararajan).
5. Parallel Real-Time Recurrent Algorithm for Training Large Fully Recurrent Neural Networks (Elias S. Manolakos, George Kechriotis).
6. Parallel Implementation of ART1 Neural Networks on Processor Ring Architectures (Elias S. Manolakos, Stylianos Markogiannakis).
II: Implementations on a Big General-Purpose Parallel Computer.
7. Implementation of Backpropagation Neural Networks on Large Parallel Computers (Jim Torresen, Shinji Tomita).
III: Special Parallel Architectures and Application Case Studies.
8. Massively Parallel Architectures for Large-Scale Neural Network Computations (Yoshiji Fujimoto).
9. Regularly Structured Neural Networks on the DREAM Machine (Soheil Shams, Jean-Luc Gaudiot).
10. High-Performance Parallel Backpropagation Simulation with On-Line Learning (Urs A. Müller, Patrick Spiess, Michael Kocheisen, Beat Flepp, Anton Gunzinger, Walter Guggenbühl).
11. Training Neural Networks with SPERT-II (Krste Asanović, James Beck, David Johnson, Brian Kingsbury, Nelson Morgan, John Wawrzynek).
12. Concluding Remarks (N. Sundararajan, P. Saratchandran).