Neural Networks and Speech Processing

Bibliographic Details
Main Authors: Morgan, David P.; Scofield, Christopher L. (Authors)
Format: eBook
Language: English
Published: New York, NY: Springer US, 1991
Edition: 1st ed. 1991
Series: The Springer International Series in Engineering and Computer Science
Subjects:
Online Access:
Collection: Springer Book Archives -2004 - Collection details see MPG.ReNa
LEADER 04681nmm a2200361 u 4500
001 EB000625342
003 EBX01000000000000000478424
005 00000000000000.0
007 cr|||||||||||||||||||||
008 140122 ||| eng
020 |a 9781461539506 
100 1 |a Morgan, David P. 
245 0 0 |a Neural Networks and Speech Processing  |h Elektronische Ressource  |c by David P. Morgan, Christopher L. Scofield 
250 |a 1st ed. 1991 
260 |a New York, NY  |b Springer US  |c 1991, 1991 
300 |a XVII, 391 p  |b online resource 
505 0 |a 1 Introduction -- 1.1 Motivation -- 1.2 A Few Words on Speech Recognition -- 1.3 A Few Words on Neural Networks -- 1.4 Contents -- 2 The Mammalian Auditory System -- 2.1 Introduction to Auditory Processing -- 2.2 The Anatomy and Physiology of Neurons -- 2.3 Neuroanatomy of the Auditory System -- 2.4 Recurrent Connectivity in the Auditory Pathway -- 2.5 Summary -- 3 An Artificial Neural Network Primer -- 3.1 A Neural Network Primer for Speech Scientists -- 3.2 Elements of Artificial Neural Networks -- 3.3 Learning in Neural Networks -- 3.4 Supervised Learning -- 3.5 Multi-Layer Networks -- 3.6 Unsupervised Learning -- 3.7 Summary -- 4 A Speech Technology Primer -- 4.1 A Speech Primer for Neural Scientists -- 4.2 Human Speech Production/Perception -- 4.3 ASR Technology -- 4.4 Signal Processing and Feature Extraction -- 4.5 Time Alignment and Pattern Matching -- 4.6 Language Models -- 4.7 Summary -- 5 Methods in Neural Network Applications --  
505 0 |a 5.1 The Allure of Neural Networks for Speech Processing -- 5.2 The Computational Properties of ANNs -- 5.3 ANN Limitations: The Scaling Problem -- 5.4 Structured ANN Solutions -- 5.5 Summary -- 6 Signal Processing and Feature Extraction -- 6.1 The Importance of Signal Representations -- 6.2 The Signal Processing Problem Domain -- 6.3 Biologically Motivated Signal Processing -- 6.4 ANNs for Conventional Signal Processing -- 6.5 Feature Representations -- 6.6 Summary -- 7 Time Alignment and Pattern Matching -- 7.1 Modeling Spectro-Temporal Structure -- 7.2 Time Normalization Via Pre-Processing -- 7.3 The Dynamic Programming Neural Network -- 7.4 HMM Motivated Networks -- 7.5 Recurrent Networks for Temporal Modeling -- 7.6 The Time Delay Neural Network -- 7.7 Summary -- 8 Natural Language Processing -- 8.1 The Importance of Language Processing -- 8.2 Syntactic Models -- 8.3 Semantic Models -- 8.4 Knowledge Representation -- 8.5 Summary -- 9 ANN Keyword Recognition --  
505 0 |a 9.1 Keyword Spotting -- 9.2 The Primary KWS System -- 9.3 DUR Experiments -- 9.4 Secondary Processing Experiments -- 9.5 Summary -- 10 Neural Networks and Speech Processing -- 10.1 Speech Processing Applications -- 10.2 Summary of Efforts in ASR -- 10.3 Concluding Remarks 
653 |a Electrical and Electronic Engineering 
653 |a Artificial Intelligence 
653 |a Electrical engineering 
653 |a Signal, Speech and Image Processing 
653 |a Artificial intelligence 
653 |a Signal processing 
700 1 |a Scofield, Christopher L.  |e [author] 
041 0 7 |a eng  |2 ISO 639-2 
989 |b SBA  |a Springer Book Archives -2004 
490 0 |a The Springer International Series in Engineering and Computer Science 
028 5 0 |a 10.1007/978-1-4615-3950-6 
856 4 0 |u https://doi.org/10.1007/978-1-4615-3950-6?nosfx=y  |x Verlag  |3 Volltext 
082 0 |a 621.382 
520 |a We would like to take this opportunity to thank all of those individuals who helped us assemble this text, including the people of Lockheed Sanders and Nestor, Inc., whose encouragement and support were greatly appreciated. In addition, we would like to thank the members of the Laboratory for Engineering Man-Machine Systems (LEMS) and the Center for Neural Science at Brown University for their frequent and helpful discussions on a number of topics discussed in this text. Although we both attended Brown from 1983 to 1985, and had offices in the same building, it is surprising that we did not meet until 1988. We also wish to thank Kluwer Academic Publishers for their professionalism and patience, and the reviewers for their constructive criticism. Thanks to John McCarthy for performing the final proof, and to John Adcock, Chip Bachmann, Deborah Farrow, Nathan Intrator, Michael Perrone, Ed Real, Lance Riek and Paul Zemany for their comments and assistance. We would also like to thank Khrisna Nathan, our most unbiased and critical reviewer, for his suggestions for improving the content and accuracy of this text. A special thanks goes to Steve Hoffman, who was instrumental in helping us perform the experiments described in Chapter 9.