Abstract: The paper presents the authors' experience with HMMs (Hidden Markov Models) used for isolated word speech recognition in Bulgarian. Two methods motivated by the experiments are discussed, namely: (i) the precise quantization of the Gaussian probability density function (G-pdf) modeling the HMM states' output, and (ii) a method for averaging a set of HMMs trained on different versions of a given word. A universal threshold is evaluated for switching between the "large" and "slim" models of the G-pdf defined here. Experimental results on the use of this threshold in HMM averaging are also reported.
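As an illustration of the second idea, the sketch below averages the parameters of several identically structured HMMs trained on different utterances of the same word. This is only a minimal sketch under stated assumptions (single diagonal-covariance Gaussian per state, shared topology, element-wise averaging); it is not the authors' exact averaging procedure, and the names `HMM` and `average_hmms` are illustrative only.

```python
# Minimal sketch of averaging several single-Gaussian HMMs trained on
# different versions of the same word. Assumptions (not from the paper):
# each HMM has N states with diagonal-covariance Gaussian outputs, all
# models share the same topology, and averaging is element-wise.
from dataclasses import dataclass
import numpy as np

@dataclass
class HMM:
    trans: np.ndarray   # (N, N) state-transition probabilities
    means: np.ndarray   # (N, D) Gaussian means per state
    vars_: np.ndarray   # (N, D) Gaussian variances per state (diagonal)

def average_hmms(models: list[HMM]) -> HMM:
    """Element-wise average of identically shaped HMM parameter sets."""
    trans = np.mean([m.trans for m in models], axis=0)
    trans /= trans.sum(axis=1, keepdims=True)          # re-normalize rows
    means = np.mean([m.means for m in models], axis=0)
    vars_ = np.mean([m.vars_ for m in models], axis=0)
    return HMM(trans=trans, means=means, vars_=vars_)

# Example: average three toy 2-state, 3-dimensional models.
rng = np.random.default_rng(0)
models = [HMM(trans=np.full((2, 2), 0.5),
              means=rng.normal(size=(2, 3)),
              vars_=rng.uniform(0.5, 1.5, size=(2, 3)))
          for _ in range(3)]
avg = average_hmms(models)
print(avg.means)
```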