When: Apr 23 2020 @ 3:00 PM

This presentation will be given remotely. Follow this link for access to the Zoom meeting where it will take place. Please do not log in to the meeting earlier than 15 minutes before the presentation's start time.
Title: A Synergistic Combination of Signal Processing and Deep Learning for Robust Speech Processing
Abstract: When speech is captured with a distant microphone, it is subject to distortions caused by noise, reverberation, and overlapping speakers. Far-field speech processing systems must be robust to these distortions to function in real-world applications, and hence include front-end components to handle them. These front-end components are typically optimized with signal reconstruction objectives, which makes the overall speech processing system sub-optimal: the front-end is optimized independently of the downstream task. This approach has another significant constraint: the enhancement/separation system can be trained only on simulated data, and hence generalizes poorly to real data. Alternatively, these front-end systems can be trained with application-oriented objectives, and emergent end-to-end neural methods have made it easier to optimize the front-end in this manner. The goal of this work is to embed carefully designed multichannel speech enhancement/separation subnetworks inside a sequence-to-sequence automatic speech recognition (ASR) system. The work takes an explainable-AI approach: the intermediate outputs of the subnetworks remain interpretable even though the entire network is trained solely with a speech recognition error minimization criterion. This proposal pursues two directions: (1) simultaneous dereverberation and denoising using a single differentiable speech recognition network that also learns some important hyperparameters from the data, and (2) target speech extraction combining both anchor speech and location information, optimized using only the transcription as the target.
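Dereverberation front-ends of the kind used in direction (1) are commonly built on delayed linear prediction (as in the WPE method): each STFT frame is predicted from frames at least a few steps in the past, and the predictable (late-reverberant) part is subtracted. A minimal single-channel, single-frequency-bin sketch in NumPy is below; the actual system is multichannel, iteratively variance-weighted, and fully differentiable, and all names here are illustrative:

```python
import numpy as np

def delayed_linear_prediction(Y, order=10, delay=3):
    """WPE-style dereverberation for one frequency bin (illustrative sketch).

    Y: STFT frames of one frequency bin, shape (T,), real or complex.
    Each frame is predicted from frames at least `delay` frames in the
    past using `order` taps; the prediction (late reverberation) is
    subtracted. The real WPE algorithm iterates this with per-frame
    variance weighting, omitted here for brevity.
    """
    T = Y.shape[0]
    # Matrix of delayed past frames, shape (T, order).
    X = np.zeros((T, order), dtype=Y.dtype)
    for k in range(order):
        d = delay + k
        X[d:, k] = Y[:T - d]
    # Unweighted least-squares prediction filter g.
    g, *_ = np.linalg.lstsq(X, Y, rcond=None)
    # Residual = input minus predicted late reverberation.
    return Y - X @ g
```

In the proposed system the filter order above is not fixed by hand but estimated with a reinforcement learning approach, and the whole operation sits inside the ASR computation graph so that gradients of the recognition loss reach it.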
In the first direction, the dereverberation subnetwork is based on linear prediction, with the filter order hyperparameter estimated via a reinforcement learning approach, and the denoising (beamforming) subnetwork is based on a parametric multichannel Wiener filter whose speech distortion factor is also estimated inside the network. This method shows a considerable performance gain on real and unseen conditions. It is also shown that such a system, optimized with the ASR objective, improves speech enhancement quality on various signal-level metrics in addition to the ASR word error rate (WER). In the second direction, a target speech extraction subnetwork guided by both location and anchor speech is trained end-to-end with an ASR network. Experimental comparison with a traditional pipeline system verifies that this task can be realized with end-to-end ASR training objectives, without using parallel clean data. The results are promising on mixtures of two speakers and noise. The future plan is to optimize an explicit source localization front-end with a speech recognition objective, an important step toward a conversation system that recognizes who is speaking what, when, and where.
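For the beamforming subnetwork, the rank-1 parametric multichannel Wiener filter can be written as w = Φ_n⁻¹ Φ_s e_ref / (μ + tr(Φ_n⁻¹ Φ_s)), where Φ_s and Φ_n are the speech and noise spatial covariance matrices and μ is the speech distortion factor (μ = 0 recovers the distortionless MVDR solution). A minimal per-frequency-bin sketch, assuming NumPy; in the talk's system μ is estimated inside the network rather than fixed, and all function names are illustrative:

```python
import numpy as np

def pmwf_weights(phi_s, phi_n, mu=1.0, ref=0):
    """Rank-1 parametric multichannel Wiener filter for one frequency bin.

    phi_s, phi_n: (C, C) speech / noise spatial covariance matrices.
    mu: speech distortion factor (0 -> MVDR, 1 -> standard MWF);
        in the proposed system this trade-off is learned, not fixed.
    ref: index of the reference channel.
    Returns the (C,) beamformer weight vector.
    """
    num = np.linalg.solve(phi_n, phi_s)       # Phi_n^{-1} Phi_s
    return num[:, ref] / (mu + np.trace(num))

def apply_beamformer(w, Y):
    """Apply w^H to multichannel frames Y of shape (C, T) -> (T,)."""
    return np.conj(w) @ Y
```

With μ = 0 and a rank-1 speech covariance Φ_s = σ²_s d dᴴ, the output w^H d equals the steering-vector component at the reference channel, i.e. the speech passes undistorted; increasing μ trades speech distortion for stronger noise suppression, which is exactly the knob the network learns.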