Supervised Sequence Labelling with Recurrent Neural Networks


Bibliographic Details
Main Author: Graves, Alex
Format: eBook
Language: English
Published: Berlin, Heidelberg : Springer Berlin Heidelberg, 2012
Edition:1st ed. 2012
Series:Studies in Computational Intelligence
Subjects:
Online Access:
Collection: Springer eBooks 2005- (collection details: see MPG.ReNa)
LEADER 02714nmm a2200301 u 4500
001 EB000388060
003 EBX01000000000000000241112
005 00000000000000.0
007 cr|||||||||||||||||||||
008 130626 ||| eng
020 |a 9783642247972 
100 1 |a Graves, Alex 
245 0 0 |a Supervised Sequence Labelling with Recurrent Neural Networks  |h Elektronische Ressource  |c by Alex Graves 
250 |a 1st ed. 2012 
260 |a Berlin, Heidelberg  |b Springer Berlin Heidelberg  |c 2012 
300 |a XIV, 146 p  |b online resource 
505 0 |a Introduction -- Supervised Sequence Labelling -- Neural Networks -- Long Short-Term Memory -- A Comparison of Network Architectures -- Hidden Markov Model Hybrids -- Connectionist Temporal Classification -- Multidimensional Networks -- Hierarchical Subsampling Networks 
653 |a Computational intelligence 
653 |a Artificial Intelligence 
653 |a Computational Intelligence 
653 |a Artificial intelligence 
041 0 7 |a eng  |2 ISO 639-2 
989 |b Springer  |a Springer eBooks 2005- 
490 0 |a Studies in Computational Intelligence 
028 5 0 |a 10.1007/978-3-642-24797-2 
856 4 0 |u https://doi.org/10.1007/978-3-642-24797-2?nosfx=y  |x Verlag  |3 Volltext 
082 0 |a 006.3 
520 |a Supervised sequence labelling is a vital area of machine learning, encompassing tasks such as speech, handwriting and gesture recognition, protein secondary structure prediction and part-of-speech tagging. Recurrent neural networks are powerful sequence learning tools—robust to input noise and distortion, able to exploit long-range contextual information—that would seem ideally suited to such problems. However, their role in large-scale sequence labelling systems has so far been auxiliary. The goal of this book is a complete framework for classifying and transcribing sequential data with recurrent neural networks only. Three main innovations are introduced in order to realise this goal. Firstly, the connectionist temporal classification output layer allows the framework to be trained with unsegmented target sequences, such as phoneme-level speech transcriptions; this is in contrast to previous connectionist approaches, which were dependent on error-prone prior segmentation. Secondly, multidimensional recurrent neural networks extend the framework in a natural way to data with more than one spatio-temporal dimension, such as images and videos. Thirdly, the use of hierarchical subsampling makes it feasible to apply the framework to very large or high-resolution sequences, such as raw audio or video. Experimental validation is provided by state-of-the-art results in speech and handwriting recognition.
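The connectionist temporal classification (CTC) output layer named in the abstract sums the probability of every alignment between the network's per-timestep outputs and an unsegmented target sequence. As an illustration only (this is a minimal sketch of the standard CTC forward recursion in log space, not code from the book), the loss can be computed like this, where `log_probs`, `target`, and `blank` are hypothetical names chosen here:

```python
import math

NEG_INF = float("-inf")

def _logsumexp(*xs):
    """Numerically stable log(sum(exp(x))) over the given values."""
    m = max(xs)
    if m == NEG_INF:
        return NEG_INF
    return m + math.log(sum(math.exp(x - m) for x in xs))

def ctc_loss(log_probs, target, blank=0):
    """Negative log-likelihood of `target` under the CTC forward algorithm.

    log_probs: T rows, each a list of per-class log-probabilities
               (classes include the blank symbol).
    target:    label indices without blanks, e.g. a phoneme sequence.
    """
    # Extend the target with blanks: blank, l1, blank, l2, ..., lK, blank
    ext = [blank]
    for label in target:
        ext += [label, blank]
    S, T = len(ext), len(log_probs)

    # alpha[s]: log-probability of all alignments of the first t+1 frames
    # that end at extended-label position s
    alpha = [NEG_INF] * S
    alpha[0] = log_probs[0][blank]
    if S > 1:
        alpha[1] = log_probs[0][ext[1]]

    for t in range(1, T):
        new = [NEG_INF] * S
        for s in range(S):
            cands = [alpha[s]]                  # stay on the same symbol
            if s > 0:
                cands.append(alpha[s - 1])      # advance one position
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                cands.append(alpha[s - 2])      # skip the blank between distinct labels
            new[s] = _logsumexp(*cands) + log_probs[t][ext[s]]
        alpha = new

    # A valid alignment may end on the final label or the trailing blank.
    total = _logsumexp(alpha[-1], alpha[-2]) if S > 1 else alpha[-1]
    return -total
```

For example, with two timesteps of uniform probability 0.5 over {blank, 'a'} and the single-label target ['a'], the valid alignments are (a, -), (-, a) and (a, a), giving total probability 0.75 and a loss of -log 0.75.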