Dynamic sharings of Gaussian densities using phonetic features

Lee, Kyung-Tak; Wellekens, Christian J.
ASRU 2001, IEEE Workshop on Automatic Speech Recognition and Understanding, December 9-13, 2001, Madonna di Campiglio, Trento, Italy

This paper describes a way to adapt the recognizer to pronunciation variability by dynamically sharing Gaussian densities across phonetic models. The method consists of three steps. First, an HMM recognizer outputs a lattice of the most likely word hypotheses for an input utterance. Then, the canonical pronunciation of each hypothesis is checked by comparing its theoretical phonetic features to those automatically extracted from the speech. If the comparison shows that a phoneme of a hypothesis was likely pronounced differently, its model is transformed by sharing its Gaussian densities with those of its possible alternate phone realization(s). Finally, the transformed models are used in a second recognition pass. The sharings are dynamic because they are automatically adapted to each input utterance. Experiments showed a 5.4% relative reduction in Word Error Rate compared to the baseline and a 2.7% relative reduction compared to a static sharing method.

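As a rough illustration of the sharing step described in the abstract, the sketch below pools the Gaussian densities of a canonical phone model with those of a likely alternate realization, weighted by how strongly the extracted phonetic features favour the alternate, and then scores frames against the pooled mixture for a second pass. All names, the diagonal-covariance GMM layout and the confidence weighting are assumptions made for illustration; the paper defines the actual sharing rule.

    # Hypothetical sketch: pool densities of a canonical phone with those of
    # an alternate realization before rescoring (not the paper's exact rule).
    import numpy as np

    def gaussian_loglik(x, mean, var):
        # Log-likelihood of frame x under one diagonal-covariance Gaussian.
        return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

    def share_densities(canonical, alternate, alt_confidence):
        # canonical / alternate: lists of (weight, mean, var) components.
        # alt_confidence in [0, 1]: how strongly the phonetic-feature
        # comparison suggests the alternate phone was actually pronounced.
        shared = [((1.0 - alt_confidence) * w, m, v) for w, m, v in canonical]
        shared += [(alt_confidence * w, m, v) for w, m, v in alternate]
        total = sum(w for w, _, _ in shared)
        return [(w / total, m, v) for w, m, v in shared]  # renormalise weights

    def frame_loglik(x, gmm):
        # Score one frame against a (possibly shared) Gaussian mixture.
        logs = [np.log(w) + gaussian_loglik(x, m, v) for w, m, v in gmm]
        return np.logaddexp.reduce(logs)

    # Toy usage: /t/ hypothesised, but the features point towards a flap.
    rng = np.random.default_rng(0)
    dim = 13
    make_gmm = lambda: [(0.5, rng.normal(size=dim), np.ones(dim)) for _ in range(2)]
    model_t, model_flap = make_gmm(), make_gmm()
    shared_t = share_densities(model_t, model_flap, alt_confidence=0.6)
    print(frame_loglik(rng.normal(size=dim), shared_t))

Because the transformed model keeps the canonical Gaussians alongside the alternate ones, a hypothesis is not forced into either pronunciation; the second pass simply gives the alternate realization a chance proportional to the evidence from the phonetic features.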

Type:
Conference
City:
Trento
Date:
2001-12-09
Department:
Sécurité numérique
Eurecom Ref:
738
Copyright:
© 2001 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

PERMALINK : https://www.eurecom.fr/publication/738