- Hardcover
Presents the parallel implementation aspects of all major artificial neural network models. The text details implementations on various processor architectures built on different hardware platforms, ranging from large parallel computers to MIMD machines using transputers and DSPs.
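One of the parallelization paradigms the book analyzes is training-set parallelism (Chapter 4): the training examples are partitioned across processors, each processor computes gradients over its own shard, and the gradients are combined before a shared weight update. The following is a minimal illustrative sketch of that idea, not code from the book; the single-weight least-squares "network", the shard layout, and all function names are invented for illustration.

```python
import random

def worker_gradient(w, shard):
    # Each "processor" computes the squared-error gradient over its own
    # shard of the training set, independently of the other processors.
    g = 0.0
    for x, y in shard:
        g += (w * x - y) * x
    return g / len(shard)

def train_step(w, shards, lr=0.1):
    # Training-set parallelism: per-shard gradients are computed in
    # parallel (here sequentially, for clarity), then averaged into
    # one global weight update.
    grads = [worker_gradient(w, s) for s in shards]
    return w - lr * sum(grads) / len(grads)

# Toy training set for the target function y = 2x.
random.seed(0)
data = [(x, 2.0 * x) for x in (random.uniform(-1, 1) for _ in range(40))]

# Partition the training set across 4 "processors".
shards = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(1000):
    w = train_step(w, shards)
# w converges toward the true slope 2.0
```

On a real heterogeneous architecture of the kind studied in the book, the shard sizes would be balanced against each processor's speed, and the gradient exchange would go over the interconnect rather than a Python list.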
Product details
- Publisher: Wiley
- Page count: 412
- Publication date: 14 December 1998
- Language: English
- Dimensions: 260 mm x 183 mm x 27 mm
- Weight: 968 g
- ISBN-13: 9780818683992
- ISBN-10: 0818683996
- Item no.: 22116757
- Manufacturer information
- Libri GmbH
- Europaallee 1
- 36244 Bad Hersfeld
- gpsr@libri.de
N. Sundararajan and P. Saratchandran are the authors of Parallel Architectures for Artificial Neural Networks: Paradigms and Implementations, published by Wiley.
1. Introduction (N. Sundararajan, P. Saratchandran, Jim Torresen).
2. A Review of Parallel Implementations of Backpropagation Neural Networks
(Jim Torresen, Olav Landsverk).
I: Analysis of Parallel Implementations.
3. Network Parallelism for Backpropagation Neural Networks on a
Heterogeneous Architecture (R. Arularasan, P. Saratchandran, N.
Sundararajan, Shou King Foo).
4. Training-Set Parallelism for Backpropagation Neural Networks on a
Heterogeneous Architecture (Shou King Foo, P. Saratchandran, N.
Sundararajan).
5. Parallel Real-Time Recurrent Algorithm for Training Large Fully
Recurrent Neural Networks (Elias S. Manolakos, George Kechriotis).
6. Parallel Implementation of ART1 Neural Networks on Processor Ring
Architectures (Elias S. Manolakos, Stylianos Markogiannakis).
II: Implementations on a Big General-Purpose Parallel Computer.
7. Implementation of Backpropagation Neural Networks on Large Parallel
Computers (Jim Torresen, Shinji Tomita).
III: Special Parallel Architectures and Application Case Studies.
8. Massively Parallel Architectures for Large-Scale Neural Network
Computations (Yoshiji Fujimoto).
9. Regularly Structured Neural Networks on the DREAM Machine (Soheil Shams,
Jean-Luc Gaudiot).
10. High-Performance Parallel Backpropagation Simulation with On-Line
Learning (Urs A. Müller, Patrick Spiess, Michael Kocheisen, Beat Flepp,
Anton Gunzinger, Walter Guggenbühl).
11. Training Neural Networks with SPERT-II (Krste Asanović, James Beck,
David Johnson, Brian Kingsbury, Nelson Morgan, John Wawrzynek).
12. Concluding Remarks (N. Sundararajan, P. Saratchandran).