Composed of three sections, this book presents the most popular training algorithm for neural networks: backpropagation. The first section presents the theory and principles behind backpropagation as seen from different perspectives such as statistics, machine learning, and dynamical systems. The second presents a number of network architectures that may be designed to match the general concepts of Parallel Distributed Processing with backpropagation learning. Finally, the third section shows how these principles can be applied to a number of different fields related to the cognitive sciences, including control, speech recognition, robotics, image processing, and cognitive psychology. The volume is designed to provide both a solid theoretical foundation and a set of examples that show the versatility of the concepts. Useful to experts in the field, it should also be most helpful to students seeking to understand the basic principles of connectionist learning and to engineers wanting to add neural networks in general -- and backpropagation in particular -- to their set of problem-solving methods.
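The backpropagation algorithm at the heart of this volume is, in essence, gradient descent applied through the chain rule: errors at the output are propagated backward, layer by layer, to compute weight updates. As a rough illustration (not taken from the book; the network size, learning rate, and XOR task are arbitrary choices), a one-hidden-layer sigmoid network can be trained like this:

```python
import numpy as np

# Illustrative sketch of backpropagation: a one-hidden-layer sigmoid
# network trained on XOR with plain gradient descent. All sizes and
# hyperparameters here are arbitrary, not drawn from the book.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for _ in range(2000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network output
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: chain rule, output layer first.
    # sigmoid'(z) = s * (1 - s), applied elementwise.
    d_out = (out - y) * out * (1 - out)    # gradient at output pre-activations
    d_h = (d_out @ W2.T) * h * (1 - h)     # gradient at hidden pre-activations

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)
```

The mean-squared error should fall steadily over training; the same backward recursion generalizes to deeper networks by repeating the hidden-layer step for each additional layer.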
For legal reasons, this download can only be delivered to billing addresses in A, B, BG, CY, CZ, D, DK, EW, E, FIN, F, GR, HR, H, IRL, I, LT, L, LR, M, NL, PL, P, R, S, SLO, SK.
About the Authors
Yves Chauvin, David E. Rumelhart
Table of Contents
Contents:
D.E. Rumelhart, R. Durbin, R. Golden, Y. Chauvin, Backpropagation: The Basic Theory.
A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, K.J. Lang, Phoneme Recognition Using Time-Delay Neural Networks.
C. Schley, Y. Chauvin, V. Henkle, Automated Aircraft Flare and Touchdown Control Using Neural Networks.
F.J. Pineda, Recurrent Backpropagation Networks.
M.C. Mozer, A Focused Backpropagation Algorithm for Temporal Pattern Recognition.
D.H. Nguyen, B. Widrow, Nonlinear Control with Neural Networks.
M.I. Jordan, D.E. Rumelhart, Forward Models: Supervised Learning with a Distal Teacher.
S.J. Hanson, Backpropagation: Some Comments and Variations.
A. Cleeremans, D. Servan-Schreiber, J.L. McClelland, Graded State Machines: The Representation of Temporal Contingencies in Feedback Networks.
S. Becker, G.E. Hinton, Spatial Coherence as an Internal Teacher for a Neural Network.
J.R. Bachrach, M.C. Mozer, Connectionist Modeling and Control of Finite State Systems Given Partial State Information.
P. Baldi, Y. Chauvin, K. Hornik, Backpropagation and Unsupervised Learning in Linear Networks.
R.J. Williams, D. Zipser, Gradient-Based Learning Algorithms for Recurrent Networks and Their Computational Complexity.
P. Baldi, Y. Chauvin, When Neural Networks Play Sherlock Holmes.
P. Baldi, Gradient Descent Learning Algorithms: A Unified Perspective.