Parallel Architectures For Artificial Neural Networks: Paradigms And Implementations
N. Sundararajan, P. Saratchandran
- Publisher: Wiley
- Publication date: 1998-12-14
- List price: $4,830
- VIP price: 5% off, $4,589
- Language: English
- Pages: 409
- Binding: Hardcover
- ISBN: 0818683996
- ISBN-13: 9780818683992
Imported title purchased from overseas (requires separate checkout)
Description:
This excellent reference for all those involved in neural network research and application presents, in a single text, the necessary aspects of parallel implementation for all major artificial neural network models. The book details implementations on various processor architectures (ring, torus, etc.) built on different hardware platforms, ranging from large general-purpose parallel computers to custom-built MIMD machines using transputers and DSPs.
The chapters are written by the experts who performed the implementations, and each chapter presents their research results. These results are divided into three parts:
- Theoretical analysis of parallel implementation schemes on MIMD message-passing machines.
- Details of the parallel implementation of backpropagation (BP) neural networks on a large general-purpose parallel computer.
- Four chapters, each describing a special-purpose parallel neural computer configuration.
This book is aimed at graduate students and researchers working in artificial neural networks and parallel computing. Graduate-level educators can use it to illustrate the methods of parallel computing for ANN simulation. The text is an ideal reference, providing lucid mathematical analyses for practitioners in the field.
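As a rough illustration of the training-set parallelism paradigm analysed in Part I, the following is a minimal NumPy sketch (not taken from the book, and all names in it are hypothetical): the training set is split across simulated workers, each computes backpropagation gradients for a small network on its own shard, and the gradients are averaged before the weight update, as they would be after a reduction step on a message-passing machine.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(n_in, n_hid, n_out):
    # Small single-hidden-layer network: sigmoid hidden layer, linear output.
    return {
        "W1": rng.standard_normal((n_in, n_hid)) * 0.1,
        "W2": rng.standard_normal((n_hid, n_out)) * 0.1,
    }

def grads(params, X, Y):
    # Forward pass and backpropagation of the squared-error gradient.
    H = 1.0 / (1.0 + np.exp(-X @ params["W1"]))
    out = H @ params["W2"]
    err = out - Y
    dW2 = H.T @ err / len(X)
    dH = (err @ params["W2"].T) * H * (1.0 - H)   # back through the sigmoid
    dW1 = X.T @ dH / len(X)
    return {"W1": dW1, "W2": dW2}

def train_step(params, shards, lr=0.1):
    # Each shard plays the role of one worker; averaging the per-shard
    # gradients mimics the reduction that would follow a parallel pass.
    g_list = [grads(params, X, Y) for X, Y in shards]
    for k in params:
        params[k] -= lr * np.mean([g[k] for g in g_list], axis=0)

# Toy data, split across 4 simulated workers.
X = rng.standard_normal((400, 8))
Y = (X.sum(axis=1, keepdims=True) > 0).astype(float)
shards = list(zip(np.array_split(X, 4), np.array_split(Y, 4)))

params = init_params(8, 16, 1)
for _ in range(200):
    train_step(params, shards)
```

On a real MIMD machine the shards would live on separate processors and the averaging would be an explicit message-passing reduction; the book's chapters analyse the communication and load-balancing costs of exactly such schemes.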
Table of Contents:
1. Introduction (N. Sundararajan, P. Saratchandran, Jim Torresen).
2. A Review of Parallel Implementations of Backpropagation Neural Networks (Jim Torresen, Olav Landsverk).
I: Analysis of Parallel Implementations.
3. Network Parallelism for Backpropagation Neural Networks on a Heterogeneous Architecture (R. Arularasan, P. Saratchandran, N. Sundararajan, Shou King Foo).
4. Training-Set Parallelism for Backpropagation Neural Networks on a Heterogeneous Architecture (Shou King Foo, P. Saratchandran, N. Sundararajan).
5. Parallel Real-Time Recurrent Algorithm for Training Large Fully Recurrent Neural Networks (Elias S. Manolakos, George Kechriotis).
6. Parallel Implementation of ART1 Neural Networks on Processor Ring Architectures (Elias S. Manolakos, Stylianos Markogiannakis).
II: Implementations on a Big General-Purpose Parallel Computer.
7. Implementation of Backpropagation Neural Networks on Large Parallel Computers (Jim Torresen, Shinji Tomita).
III: Special Parallel Architectures and Application Case Studies.
8. Massively Parallel Architectures for Large-Scale Neural Network Computations (Yoshiji Fujimoto).
9. Regularly Structured Neural Networks on the DREAM Machine (Soheil Shams, Jean-Luc Gaudiot).
10. High-Performance Parallel Backpropagation Simulation with On-Line Learning (Urs A. Müller, Patrick Spiess, Michael Kocheisen, Beat Flepp, Anton Gunzinger, Walter Guggenbühl).
11. Training Neural Networks with SPERT-II (Krste Asanović, James Beck, David Johnson, Brian Kingsbury, Nelson Morgan, John Wawrzynek).
12. Concluding Remarks (N. Sundararajan, P. Saratchandran).