Our goal is to improve speech recognition methodology with the help of new algorithms developed in our laboratory.
Speech recognition offers challenging benchmarking tasks for efficient algorithms that can process and learn to represent large quantities of data. In addition to improving the acoustic models of phonemes, we aim to develop new statistical language models that are learned from data for difficult large-vocabulary continuous speech recognition tasks.
We currently specialize in the following research areas in speech recognition:
- Sub-word units and deep learning in language modeling
- Speaker adaptation and pronunciation rating in acoustic modeling
- Unlimited vocabulary continuous speech recognition
- Speech recognition and language modeling methods for under-resourced languages
- Methods for improving speech, audio and video indexing and translation
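As a rough illustration of the first research area above, sub-word units let a language model cover words it has never seen as whole units. The sketch below segments a word into subwords with a Viterbi search over a tiny hand-written unigram lexicon; the lexicon and its probabilities are purely hypothetical, since real systems learn subword inventories from large corpora (e.g. with unsupervised morphological analysis).

```python
import math

# Hypothetical unigram subword lexicon with toy probabilities,
# for illustration only; real inventories are learned from data.
SUBWORD_LOGPROB = {
    "un": math.log(0.05),
    "break": math.log(0.04),
    "able": math.log(0.06),
    "breakable": math.log(0.001),
    "a": math.log(0.08),
    "ble": math.log(0.01),
}

def segment(word):
    """Viterbi search for the most probable subword segmentation."""
    n = len(word)
    # best[i] = (log-probability, segmentation) of the prefix word[:i]
    best = [(-math.inf, [])] * (n + 1)
    best[0] = (0.0, [])
    for i in range(1, n + 1):
        for j in range(i):
            piece = word[j:i]
            if piece in SUBWORD_LOGPROB and best[j][0] > -math.inf:
                score = best[j][0] + SUBWORD_LOGPROB[piece]
                if score > best[i][0]:
                    best[i] = (score, best[j][1] + [piece])
    return best[n][1]

print(segment("unbreakable"))  # → ['un', 'break', 'able']
```

Because the search maximizes total log-probability, the model prefers three common subwords over the rarer whole-word entry "breakable", which is exactly the behavior that makes subword units attractive for unlimited-vocabulary recognition.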
We evaluate our systems in the following pilot applications:
- Unlimited vocabulary continuous dictation in different languages
- Acoustic and language model adaptation for speakers and dialects
- Speech recognition in games and second language learning
- Speech, audio and video indexing and retrieval
- Speech-to-speech translation and multimodal interfaces
Automatic speech recognition (ASR) and modern parametric speech synthesis (text-to-speech, TTS) systems typically share the same underlying statistical modeling scheme, hidden Markov models (HMMs). The acoustic features and their statistical models (HMMs) in ASR and TTS resemble each other. The language modeling tasks in ASR and TTS differ more, but shared tools such as deep neural networks and unsupervised and semi-supervised morphological analysis can help with text analysis in both. Speech-to-speech translation (S2ST) is a challenging application that combines both of these problems with statistical machine translation (SMT).
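To make the shared HMM machinery concrete, the sketch below computes the likelihood of an observation sequence with the forward algorithm. All parameters are assumed toy values: two hidden states and discrete observations standing in for quantized acoustic features, not a real acoustic model.

```python
import numpy as np

# Toy HMM parameters (assumed for illustration, not from a trained model).
A = np.array([[0.7, 0.3],      # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],      # per-state emission probabilities
              [0.2, 0.8]])     # over two discrete observation symbols
pi = np.array([0.5, 0.5])      # initial state distribution

def forward(obs):
    """Return P(obs | HMM) via the forward algorithm."""
    # alpha[s] = probability of the prefix seen so far, ending in state s
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

print(forward([0, 1, 1]))  # → 0.092125
```

The same recursion underlies both sides mentioned above: in ASR it scores acoustic feature sequences against phoneme models, while in HMM-based TTS the generative direction of the same models is used to produce parameter trajectories.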