4th International Conference on Spoken Language Processing

Philadelphia, PA, USA
October 3-6, 1996

Neural Networks Learning with L1 Criteria and Its Efficiency in Linear Prediction of Speech Signals

Munehiro Namba, Hiroyuki Kamata, Yoshihisa Ishida

School of Science and Technology, Department of Electronics and Communications, Meiji University, Japan

Classical learning techniques such as the back-propagation algorithm minimize the expectation of the squared error that arises between the actual output and the desired output of a supervised neural network. A network trained by such a technique, however, does not behave in the desired way when it is embedded in a system that deals with non-Gaussian signals. Since least absolute estimation is known to be robust to noisy signals and to certain types of non-Gaussian signals, a network trained with this criterion may be less sensitive to the type of signal. This paper discusses the least absolute error (L1) criterion for error minimization in supervised neural networks, with particular attention to its efficiency in the linear prediction of speech. The computational load of conventional approaches to least absolute estimation has been much heavier than that of the usual least squares estimator, but the proposed approach can significantly improve analysis performance, since it is based on a simple gradient descent algorithm.
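
To make the contrast concrete, the following minimal Python sketch (not the authors' code) compares the usual least squares (L2) gradient descent update with the least absolute (L1) update for a linear predictor of a speech-like signal. The synthetic signal, prediction order p, and step size mu are illustrative assumptions, not values from the paper.

    # Minimal sketch: L2 vs. L1 gradient descent for linear prediction.
    # Signal model, order p, and step size mu are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    p = 8            # prediction order (assumed)
    mu = 1e-3        # step size (assumed)
    n = 4000

    # Synthetic AR(2) signal with occasional impulsive samples, a simple
    # stand-in for the non-Gaussian case discussed in the abstract.
    s = np.zeros(n)
    for t in range(2, n):
        s[t] = 1.5 * s[t-1] - 0.7 * s[t-2] + 0.1 * rng.standard_normal()
    s += (rng.random(n) < 0.01) * rng.standard_normal(n) * 2.0  # outliers

    w_l2 = np.zeros(p)
    w_l1 = np.zeros(p)
    for t in range(p, n):
        x = s[t-p:t][::-1]            # most recent p samples
        e2 = s[t] - w_l2 @ x          # prediction error, L2 predictor
        e1 = s[t] - w_l1 @ x          # prediction error, L1 predictor
        w_l2 += mu * e2 * x           # gradient step on e^2 / 2 (LMS)
        w_l1 += mu * np.sign(e1) * x  # (sub)gradient step on |e| (sign-LMS)

    print("L2 coefficients:", np.round(w_l2, 3))
    print("L1 coefficients:", np.round(w_l1, 3))

The only change between the two updates is replacing the error e by its sign, which is what keeps the per-sample cost of the L1 criterion comparable to ordinary least squares while damping the influence of impulsive samples.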


Bibliographic reference.  Namba, Munehiro / Kamata, Hiroyuki / Ishida, Yoshihisa (1996): "Neural networks learning with L1 criteria and its efficiency in linear prediction of speech signals", In ICSLP-1996, 1245-1248.