An Analytical Survey of End-to-End Speech Recognition Systems
Keywords:
automatic speech recognition, end-to-end systems, neural networks, deep learning

Abstract
This paper presents an analytical survey of end-to-end speech recognition systems and of the methods used to build, train, and optimize them. It considers models that use connectionist temporal classification (CTC) as the loss function of a neural network, attention-based models, and encoder-decoder models. It also covers neural networks built on conditional random fields (CRFs), a generalization of hidden Markov models that removes many shortcomings of standard hybrid speech recognition systems, such as the assumption that the elements of the input sequence of speech sounds are independent random variables. The survey further describes how language models can be integrated at the decoding stage, which yields a substantial reduction in recognition error for end-to-end models. Various modifications and improvements of the standard end-to-end architectures are described, for example, generalizations of connectionist temporal classification and the use of regularization in attention-based models. The research reviewed in this area shows that end-to-end speech recognition systems can reach results comparable to those of standard systems based on hidden Markov models, while using a simpler configuration and running faster both in training and in decoding. The survey also covers the most popular and actively developed libraries and toolkits for building end-to-end speech recognition systems, such as TensorFlow, Eesen, and Kaldi, and compares them by how simple and accessible they are for implementing end-to-end speech recognition systems.
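As background for the CTC-based family of models mentioned above, it helps to state the objective itself. The following formulation is a standard summary (after Graves et al. [27]), not a formula reproduced from this survey: for an input sequence x of T frames and a target label sequence y, CTC sums over all frame-level label paths π that collapse to y once blanks are removed and repeated symbols are merged,

\[
p(\mathbf{y} \mid \mathbf{x}) = \sum_{\pi \in \mathcal{B}^{-1}(\mathbf{y})} \prod_{t=1}^{T} p(\pi_t \mid \mathbf{x}),
\qquad
L_{\mathrm{CTC}} = -\ln p(\mathbf{y} \mid \mathbf{x}),
\]

where \(\mathcal{B}\) is the collapsing mapping and \(p(\pi_t \mid \mathbf{x})\) is the per-frame posterior produced by the network. Because the loss marginalizes over all alignments, no frame-level segmentation of the training data is needed.

Since the survey compares toolkits such as Keras [54] and TensorFlow [55] for building such systems, the sketch below shows one conventional way to wire the CTC loss into a small Keras acoustic model. It is a minimal illustration under assumed shapes and layer sizes, not code from any of the surveyed systems; the input names (feats, labels, in_len, lab_len) are hypothetical, while K.ctc_batch_cost and K.ctc_decode are existing Keras backend helpers.

```python
# Minimal sketch: a recurrent acoustic model trained with the CTC loss in Keras.
from keras import backend as K
from keras.layers import Dense, Input, Lambda, LSTM
from keras.models import Model

T, F, C = 100, 40, 29  # frames, feature dimension, output labels (incl. blank)

# Acoustic model: per-frame posteriors over the label set.
feats = Input(shape=(T, F), name="feats")
h = LSTM(128, return_sequences=True)(feats)
post = Dense(C, activation="softmax")(h)

# Extra inputs needed by the loss: target labels and true sequence lengths.
labels = Input(shape=(None,), dtype="int32", name="labels")
in_len = Input(shape=(1,), dtype="int32", name="in_len")
lab_len = Input(shape=(1,), dtype="int32", name="lab_len")

# ctc_batch_cost marginalizes over alignments, so no frame-level
# segmentation of the training data is required.
loss = Lambda(lambda args: K.ctc_batch_cost(*args), name="ctc")(
    [labels, post, in_len, lab_len])

# The loss is already computed inside the graph, so the compiled
# "loss function" simply passes it through.
model = Model([feats, labels, in_len, lab_len], loss)
model.compile(optimizer="adam", loss=lambda y_true, y_pred: y_pred)
```

At decoding time the per-frame posteriors are searched greedily or with a beam (K.ctc_decode in Keras), and, as noted above, an external language model can be integrated at this stage, typically by adding a weighted term \(\alpha \log p_{\mathrm{LM}}(\mathbf{y})\) to the acoustic score of each hypothesis.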
References
1. Ronzhin A.L., Karpov A.A., Li I.V. Speech and Multimodal Interfaces // M.: Nauka. 2006. 173 p. (in Russian).
2. Ganchev T., Fakotakis N., Kokkinakis G. Comparative evaluation of various MFCC implementations on the speaker verification task // Proceedings of the SPECOM. 2005. pp. 191–194.
3. Hermansky H., Malayath N. Speaker verification using speaker-specific mappings // Proc. RLA2C. 1998. 4 p.
4. Makovkin K.A. Hybrid hidden Markov model / multilayer perceptron models and their application in speech recognition systems: a survey // Speech Technologies. 2012. no. 3. pp. 58–83. (in Russian).
5. Cosi P. A KALDI-DNN-based ASR system for Italian // 2015 International Joint Conference on Neural Networks (IJCNN). 2015. pp. 1–5.
6. Kipyatkova I., Karpov A. DNN-Based Acoustic Modeling for Russian Speech Recognition Using Kaldi // International Conference on Speech and Computer. 2016. pp. 246–253.
7. LeCun Y., Bengio Y. Convolutional networks for images, speech, and time series // The Handbook of Brain Theory and Neural Networks. 1995. vol. 3361. no. 10.
8. Abdel-Hamid O. et al. Convolutional neural networks for speech recognition // IEEE/ACM Transactions on Audio, Speech, and Language Processing. 2014. vol. 22. no. 10. pp. 1533–1545.
9. Sainath T.N., Mohamed A.-R., Kingsbury B., Ramabhadran B. Deep convolutional neural networks for LVCSR // 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2013. pp. 8614–8618.
10. Robinson T., Hochberg M., Renals S. The use of recurrent neural networks in continuous speech recognition // Automatic Speech and Speaker Recognition. 1996. pp. 233–258.
11. Hochreiter S., Schmidhuber J. Long short-term memory // Neural Computation. 1997. vol. 9. no. 8. pp. 1735–1780.
12. Gapochkin A.V. Neural networks in speech recognition systems // Science Time. 2014. no. 1(1). pp. 29–36. (in Russian).
13. Graves A., Jaitly N., Mohamed A.-R. Hybrid speech recognition with deep bidirectional LSTM // 2013 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). 2013. pp. 273–278.
14. He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition // Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016. pp. 770–778.
15. Markovnikov N.M., Kipyatkova I., Karpov A., Filchenkov A. Deep neural networks in Russian speech recognition // Proceedings of 2017 Artificial Intelligence and Natural Language Conference. 2017. pp. 54–67.
16. Kipyatkova I.S., Karpov A.A. Variants of deep artificial neural networks for speech recognition systems // SPIIRAS Proceedings. 2016. vol. 49(6). pp. 80–103. (in Russian).
17. Ackley D.H., Hinton G.E., Sejnowski T.J. A learning algorithm for Boltzmann machines // Cognitive Science. 1985. vol. 9. no. 1. pp. 147–169.
18. Srivastava N. et al. Dropout: a simple way to prevent neural networks from overfitting // Journal of Machine Learning Research. 2014. vol. 15. no. 1. pp. 1929–1958.
19. Ioffe S., Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift // International Conference on Machine Learning. 2015. pp. 448–456.
20. Levenshtein V.I. Binary codes capable of correcting deletions, insertions, and reversals // Soviet Physics Doklady. 1966. vol. 10. pp. 707–710.
21. Mikolov T. et al. Recurrent neural network based language model // Interspeech. 2010. vol. 2. pp. 1045–1048.
22. Rao K., Peng F., Sak H., Beaufays F. Grapheme-to-phoneme conversion using long short-term memory recurrent neural networks // 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2015. pp. 4225–4229.
23. Jaitly N., Hinton G. Learning a better representation of speech soundwaves using restricted Boltzmann machines // 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2011. pp. 5884–5887.
24. Smolensky P. Information processing in dynamical systems: Foundations of harmony theory // Colorado University at Boulder Dept of Computer Science. 1986. pp. 194–281.
25. Bojarski M. et al. End to End Learning for Self-Driving Cars // 2016. preprint: arXiv:1604.07316. URL: https://arxiv.org/abs/1604.07316 (accessed 17.02.2018).
26. Sayre K.M. Machine recognition of handwritten words: A project report // Pattern Recognition. 1973. vol. 5. no. 3. pp. 213–228.
27. Graves A., Fernández S., Gomez F., Schmidhuber J. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks // Proceedings of the 23rd International Conference on Machine Learning. 2006. pp. 369–376.
28. Bridle J.S. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition // Neurocomputing. 1990. pp. 227–236.
29. Rabiner L.R. A tutorial on hidden Markov models and selected applications in speech recognition // Proceedings of the IEEE. 1989. vol. 77. no. 2. pp. 257–286.
30. Graves A., Jaitly N. Towards end-to-end speech recognition with recurrent neural networks // Proceedings of the 31st International Conference on Machine Learning (ICML-14). 2014. pp. 1764–1772.
31. Povey D. et al. The Kaldi speech recognition toolkit // IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society. 2011. 4 p.
32. WSJ English speech corpus. URL: https://catalog.ldc.upenn.edu/LDC93S6B (accessed 17.02.2018).
33. Miao Y., Gowayyed M., Metze F. EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding // 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). 2015. pp. 167–174.
34. Popović B., Pakoci E., Pekar D. End-to-End Large Vocabulary Speech Recognition for the Serbian Language // International Conference on Speech and Computer. 2017. pp. 343–352.
35. Mohri M., Pereira F., Riley M. Weighted finite-state transducers in speech recognition // Computer Speech & Language. 2002. vol. 16. no. 1. pp. 69–88.
36. Allauzen C. et al. A general and efficient weighted finite-state transducer library // International Conference on Implementation and Application of Automata. 2007. pp. 11–23.
37. Collobert R., Puhrsch C., Synnaeve G. Wav2letter: an end-to-end convnet-based speech recognition system // 2016. preprint: arXiv:1609.03193. URL: https://arxiv.org/abs/1609.03193 (accessed 17.02.2018).
38. Panayotov V., Chen G., Povey D., Khudanpur S. Librispeech: an ASR corpus based on public domain audio books // 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2015. pp. 5206–5210.
39. Deep learning toolkit Torch. URL: http://www.torch.ch/ (accessed 17.02.2018).
40. Zhang Y. et al. Towards end-to-end speech recognition with deep convolutional neural networks // 2017. preprint: arXiv:1701.02720. URL: https://arxiv.org/abs/1701.02720 (accessed 17.02.2018).
41. TIMIT English speech corpus. URL: https://catalog.ldc.upenn.edu/ldc93s1 (accessed 17.02.2018).
42. Sak H., de Chaumont Quitry F., Sainath T., Rao K. Acoustic modelling with CD-CTC-sMBR LSTM RNNs // 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). 2015. pp. 604–609.
43. Kingsbury B. Lattice-based optimization of sequence classification criteria for neural-network acoustic modeling // 2009 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2009). 2009. pp. 3761–3764.
44. Sainath T.N., Vinyals O., Senior A., Sak H. Convolutional, long short-term memory, fully connected deep neural networks // 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2015. pp. 4580–4584.
45. Fiscus J.G. A post-processing system to yield reduced word error rates: Recognizer output voting error reduction (ROVER) // Proceedings of 1997 IEEE Workshop on Automatic Speech Recognition and Understanding. 1997. pp. 347–354.
46. Soltau H., Liao H., Sak H. Neural speech recognizer: Acoustic-to-word LSTM model for large vocabulary speech recognition // 2016. preprint: arXiv:1610.09975. URL: https://arxiv.org/abs/1610.09975 (accessed 17.02.2018).
47. Liao H., McDermott E., Senior A. Large scale deep neural network acoustic modeling with semi-supervised training data for YouTube video transcription // 2013 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). 2013. pp. 368–373.
48. YouTube. URL: https://www.youtube.com/yt/lineups/ (accessed 17.02.2018).
49. Graves A. Sequence transduction with recurrent neural networks // 2012. preprint: arXiv:1211.3711. URL: https://arxiv.org/abs/1211.3711 (accessed 17.02.2018).
50. Boulanger-Lewandowski N., Bengio Y., Vincent P. High-dimensional sequence transduction // 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2013. pp. 3178–3182.
51. Graves A., Mohamed A.-R., Hinton G. Speech recognition with deep recurrent neural networks // 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2013. pp. 6645–6649.
52. Zhang Z. et al. Deep Recurrent Convolutional Neural Network: Improving Performance For Speech Recognition // 2016. preprint: arXiv:1611.07174. URL: https://arxiv.org/abs/1611.07174 (accessed 17.02.2018).
53. Wang Y., Deng X., Pu S., Huang Z. Residual convolutional CTC networks for automatic speech recognition // 2017. preprint: arXiv:1702.07793. URL: https://arxiv.org/abs/1702.07793 (accessed 17.02.2018).
54. Keras: The Python Deep Learning library. URL: https://keras.io/ (accessed 17.02.2018).
55. TensorFlow. An open source machine learning framework for everyone. URL: https://www.tensorflow.org/ (accessed 17.02.2018).
56. Deep learning toolkit Theano. URL: http://deeplearning.net/software/theano/ (accessed 17.02.2018).
57. The Microsoft Cognitive Toolkit. URL: https://docs.microsoft.com/ru-ru/cognitive-toolkit/ (accessed 17.02.2018).
58. Example implementation of a speech recognition system. URL: https://github.com/Microsoft/CNTK/tree/master/Tests/EndToEndTests/Speech/LSTM_CTC_MLF (accessed 17.02.2018).
59. CTC loss-function implementation. URL: https://github.com/baidu-research/warp-ctc (accessed 17.02.2018).
60. CTC model implementation using Kaldi. URL: https://github.com/lingochamp/kaldi-ctc (accessed 17.02.2018).
61. Cho K. et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation // 2014. preprint: arXiv:1406.1078. URL: https://arxiv.org/abs/1406.1078 (accessed 17.02.2018).
62. Sutskever I., Vinyals O., Le Q.V. Sequence to sequence learning with neural networks // Advances in Neural Information Processing Systems. 2014. pp. 3104–3112.
63. Chorowski J.K. et al. Attention-based models for speech recognition // Advances in Neural Information Processing Systems. 2015. pp. 577–585.
64. Bahdanau D., Cho K., Bengio Y. Neural machine translation by jointly learning to align and translate // 2014. preprint: arXiv:1409.0473. URL: https://arxiv.org/abs/1409.0473 (accessed 17.02.2018).
65. Mnih V., Heess N., Graves A. Recurrent models of visual attention // Advances in Neural Information Processing Systems. 2014. pp. 2204–2212.
66. Bahdanau D. et al. End-to-end attention-based large vocabulary speech recognition // 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2016. pp. 4945–4949.
67. Chan W., Jaitly N., Le Q., Vinyals O. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition // 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2016. pp. 4960–4964.
68. Kim S., Hori T., Watanabe S. Joint CTC-attention based end-to-end speech recognition using multi-task learning // 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2017. pp. 4835–4839.
69. Chorowski J., Bahdanau D., Cho K., Bengio Y. End-to-end continuous speech recognition using attention-based recurrent NN: first results // 2014. preprint: arXiv:1412.1602. URL: https://arxiv.org/abs/1412.1602 (accessed 17.02.2018).
70. Amodei D. et al. End to end speech recognition in English and Mandarin // ICLR 2016 Workshop. 2016. 12 p.
71. Zhang Y., Chan W., Jaitly N. Very deep convolutional networks for end-to-end speech recognition // 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2017. pp. 4845–4849.
72. Tjandra A., Sakti S., Nakamura S. Attention-based Wav2Text with feature transfer learning // 2017. preprint: arXiv:1709.07814. URL: https://arxiv.org/abs/1709.07814 (accessed 17.02.2018).
73. Implementation of an attention-based model. URL: https://github.com/rizar/attention-lvcsr (accessed 17.02.2018).
74. Van Merriënboer B. et al. Blocks and Fuel: Frameworks for deep learning // 2015. preprint: arXiv:1506.00619. URL: https://arxiv.org/abs/1506.00619 (accessed 17.02.2018).
75. Bahdanau D. et al. Task loss estimation for sequence prediction // 2015. preprint: arXiv:1511.06456. URL: https://arxiv.org/abs/1511.06456 (accessed 17.02.2018).
76. Luong T., Brevdo E., Zhao R. Neural machine translation (seq2seq) tutorial. 2017. URL: https://www.tensorflow.org/tutorials/seq2seq (accessed 17.02.2018).
77. Implementation of end-to-end models. URL: https://github.com/farizrahman4u/seq2seq (accessed 17.02.2018).
78. Lafferty J., McCallum A., Pereira F.C. Conditional random fields: Probabilistic models for segmenting and labeling sequence data // 2001. 8 p.
79. Fosler-Lussier E., He Y., Jyothi P., Prabhavalkar R. Conditional random fields in speech, audio, and language processing // Proceedings of the IEEE. 2013. vol. 101. no. 5. pp. 1054–1075.
80. Bottou L. Une Approche théorique de l'Apprentissage Connexionniste: Applications à la Reconnaissance de la Parole // Ph.D. thesis. Université de Paris XI. 1991. 236 p.
81. Hifny Y., Renals S. Speech recognition using augmented conditional random fields // IEEE Transactions on Audio, Speech, and Language Processing. 2009. vol. 17. no. 2. pp. 354–365.
82. Kong L., Dyer C., Smith N.A. Segmental recurrent neural networks // 2015. preprint: arXiv:1511.06018. URL: https://arxiv.org/abs/1511.06018 (accessed 17.02.2018).
83. Lu L. et al. Segmental recurrent neural networks for end-to-end speech recognition // 2016. preprint: arXiv:1603.00223. URL: https://arxiv.org/abs/1603.00223 (accessed 17.02.2018).
84. Palaz D., Doss M.M., Collobert R. Convolutional neural networks-based continuous speech recognition using raw speech signal // 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2015. pp. 4295–4299.
85. Amodei D. et al. Deep Speech 2: End-to-end speech recognition in English and Mandarin // International Conference on Machine Learning. 2016. pp. 173–182.
86. Deep learning toolkit Lasagne. URL: http://lasagne.readthedocs.io/en/latest/ (accessed 17.02.2018).
87. Chainer. A Powerful, Flexible, and Intuitive Framework for Neural Networks. URL: https://chainer.org/ (accessed 17.02.2018).
88. Dean J. et al. Large scale distributed deep networks // Advances in Neural Information Processing Systems. 2012. pp. 1223–1231.
89. Lu L., Kong L., Dyer C., Smith N.A. Multi-task Learning with CTC and Segmental CRF for Speech Recognition // 2017. preprint: arXiv:1702.06378. URL: https://arxiv.org/abs/1702.06378 (accessed 17.02.2018).
90. Neubig G. et al. DyNet: The Dynamic Neural Network Toolkit // 2017. preprint: arXiv:1701.03980. URL: https://arxiv.org/abs/1701.03980 (accessed 17.02.2018).
Published
2018-06-01
How to Cite
Markovnikov, N. M., & Kipyatkova, I. S. (2018). An analytical survey of end-to-end speech recognition systems. SPIIRAS Proceedings, 3(58), 77–110. https://doi.org/10.15622/sp.58.4
Section
Artificial Intelligence, Data and Knowledge Engineering