Psychoacoustics for Speech Denoising
Last Updated: Oct 29, 2017 · 2 min read
We present a psychoacoustically motivated cost function that balances network complexity against the perceptual performance of deep neural networks for speech denoising. During training, we add perceptual weights to the ordinary mean-squared error so that the loss emphasizes the most audible frequency bins while ignoring error from inaudible ones. To generate the weights, we employ psychoacoustic models to compute the global masking threshold from the clean speech spectra. We then evaluate the denoising performance of our perceptually guided network using both objective and perceptual sound quality metrics, testing network structures ranging from shallow and narrow to deep and wide. The experimental results show that our method is a valid approach for infusing perceptual significance into deep neural network operations. In particular, the larger perceptual performance gains seen in the simpler network topologies indicate that the proposed method can enable resource-efficient speech denoising on small devices without degrading the perceived signal fidelity.
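The weighting idea can be sketched as follows: bins whose clean-speech power sits above the global masking threshold are audible and contribute to the loss, while bins below it are masked and contribute little or nothing. This is a minimal NumPy sketch; the function names (`perceptual_weights`, `weighted_mse`) and the specific choice of weighting the error by the audibility margin in dB are illustrative assumptions, not necessarily the exact scheme used in the paper.

```python
import numpy as np

def perceptual_weights(clean_psd_db, masking_threshold_db, floor=0.0):
    """Per-bin weight from clean PSD and global masking threshold (both in dB).

    Bins whose PSD exceeds the masking threshold are audible and get a
    weight proportional to the margin; masked bins fall to the floor
    weight (0.0 removes their error entirely). This margin-based scheme
    is one plausible choice, assumed here for illustration.
    """
    audibility_db = clean_psd_db - masking_threshold_db
    return np.maximum(audibility_db, floor)

def weighted_mse(clean_spec, est_spec, weights):
    """Ordinary squared error per bin, scaled by the perceptual weights."""
    err = np.abs(clean_spec - est_spec) ** 2
    return np.sum(weights * err) / np.sum(weights)
```

In a TensorFlow training loop the same weights, precomputed from the clean spectra, would simply multiply the per-bin squared error inside the loss before reduction.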
Built with: Python, TensorFlow
|Fig 1. Plot of psychoacoustic model 1 (PAM-1) components. PAM-1 identifies the tonal set and then estimates a global masking threshold on top of the absolute threshold of hearing. These components are all determined from the input signal's power spectral density (PSD). The shaded area between the PSD curve and the global masking threshold represents audible spectral energy.|
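One PAM-1 component from the figure, the absolute threshold of hearing, has a standard closed-form approximation due to Terhardt that is widely used in MPEG psychoacoustic model 1. A small sketch of that curve (the tonal-set detection and masking-spread steps of PAM-1 are omitted here):

```python
import numpy as np

def absolute_threshold_of_hearing(freq_hz):
    """Terhardt's approximation of the absolute threshold of hearing,
    in dB SPL, as a function of frequency in Hz. The threshold is high
    at low frequencies, dips to its minimum a few kHz up (near the
    ear's most sensitive region), and rises again at high frequencies.
    """
    f = np.asarray(freq_hz, dtype=float) / 1000.0  # convert Hz -> kHz
    return (3.64 * f ** -0.8
            - 6.5 * np.exp(-0.6 * (f - 3.3) ** 2)
            + 1e-3 * f ** 4)
```

The global masking threshold in Fig. 1 is then the combination of this curve with the masking contributions of the detected tonal (and noise) maskers; error in bins that fall below the combined threshold is inaudible and can be ignored by the weighted loss.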