Noise robust formant frequency estimation method based on spectral model of repeated autocorrelation of speech

Abu Shafin Mohammad Mahdee Jameel, Shaikh Anowarul Fattah, Rajib Goswami, Weiping Zhu, M. Omair Ahmad

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

In this paper, a noise-robust formant frequency estimation scheme is developed based on a spectral model matching algorithm. Modeling the vocal tract as an autoregressive system, a spectral model of the repeated autocorrelation function (RACF) of a band-limited speech signal is proposed. It is shown that, because of the repeated autocorrelation operation on the band-limited signal, the proposed model exhibits prominent formant characteristics. First, an adaptive band selection criterion is developed from the given noisy speech observations. Next, a repeated autocorrelation operation is carried out on each resulting band-limited noisy speech signal, which not only reduces the effect of noise but also strengthens the dominant poles corresponding to the formant frequencies. Finally, the spectrum of the RACF is computed and, instead of direct spectral peak picking, a model-fitting scheme is introduced to determine the model parameters, which lead to the formant estimates. The proposed algorithm has been tested on natural vowels as well as naturally spoken sentences in the presence of different environmental noises. The proposed scheme is found to provide better formant estimation accuracy than several existing methods at low signal-to-noise ratios.
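The core idea above, that repeatedly autocorrelating a band-limited noisy signal suppresses noise while reinforcing the dominant spectral peak, can be illustrated with a minimal sketch. This is not the authors' full algorithm (it omits the adaptive band selection and the spectral model fitting); it only demonstrates the repeated autocorrelation step on a hypothetical single "formant" component buried in white noise, with all signal parameters chosen for illustration:

```python
import numpy as np

def autocorr(x):
    """Biased autocorrelation estimate, non-negative lags only."""
    n = len(x)
    return np.correlate(x, x, mode="full")[n - 1:] / n

def racf_peak_hz(x, fs, repeats=2):
    """Apply the autocorrelation operation `repeats` times and return
    the frequency (Hz) of the dominant spectral peak of the result."""
    r = x
    for _ in range(repeats):
        r = autocorr(r)
    # Windowed magnitude spectrum of the repeated ACF.
    spec = np.abs(np.fft.rfft(r * np.hanning(len(r))))
    freqs = np.fft.rfftfreq(len(r), d=1.0 / fs)
    # Skip the DC bin when picking the peak.
    return freqs[1 + np.argmax(spec[1:])]

# Hypothetical test signal: a 700 Hz tone standing in for a formant
# resonance, heavily corrupted by white noise.
fs = 8000
t = np.arange(fs // 4) / fs            # 250 ms of signal
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 700.0 * t)
noisy = clean + 0.8 * rng.standard_normal(len(t))

est = racf_peak_hz(noisy, fs, repeats=2)
```

Because the white-noise component of the ACF is concentrated near lag zero while the periodic component persists across lags, each autocorrelation pass sharpens the 700 Hz peak relative to the noise floor, which is why the estimate stays close to 700 Hz even at this low SNR.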
Original language: English
Pages (from-to): 1357-1370
Number of pages: 14
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 25
Issue number: 6
DOIs
Publication status: Published - 4 Nov 2016
Externally published: Yes

Keywords

  • Autocorrelation
  • formant estimation
  • repeated autocorrelation
  • speech analysis
  • spectrum
  • spectral model

