Calendar

Sep 23, Thu
Thesis Proposal: Jaejin Cho
Sep 23 @ 3:00 pm

Note: This is a virtual presentation. Here is the link to where the presentation will take place.

Title: Improving speaker embedding in speaker verification: Beyond speaker discriminative training

Abstract: Speaker verification (SV) is the task of verifying a claimed identity from a voice signal. A well-performing SV system requires a method to transform a variable-length recording into a fixed-length representation (a.k.a. an embedding vector) that compacts the speaker's biometric information and captures features that distinguish different speakers. Two methods are popular: the i-vector and the x-vector. Although the i-vector is still used today, the x-vector outperforms it in many SV tasks as deep learning research has surged. The x-vector, however, has limitations, and this proposal mainly tackles two of them: 1) the embedding still includes information about the spoken text, and 2) the training cannot leverage data without speaker labels, since it requires those labels.
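
For readers unfamiliar with x-vector-style extraction, the sketch below illustrates how a variable-length recording can be pooled into a fixed-length embedding; the layer sizes, the simple convolutional frame encoder, and the statistics-pooling choice are illustrative assumptions, not the exact architecture discussed in the proposal.

```python
# Minimal x-vector-style extractor sketch (PyTorch). Layer sizes are illustrative.
import torch
import torch.nn as nn

class XVectorSketch(nn.Module):
    def __init__(self, feat_dim=40, hidden=512, embed_dim=256, num_speakers=1000):
        super().__init__()
        # Frame-level encoder: 1-D convolutions over time (a stand-in for TDNN layers).
        self.frame_encoder = nn.Sequential(
            nn.Conv1d(feat_dim, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Segment-level layers after statistics pooling (mean + std over time).
        self.segment = nn.Linear(2 * hidden, embed_dim)
        self.classifier = nn.Linear(embed_dim, num_speakers)  # speaker-discriminative head

    def forward(self, feats):                               # feats: (batch, time, feat_dim)
        h = self.frame_encoder(feats.transpose(1, 2))       # (batch, hidden, time)
        stats = torch.cat([h.mean(dim=2), h.std(dim=2)], dim=1)  # fixed-length statistics
        embedding = self.segment(stats)                     # the "x-vector"
        return embedding, self.classifier(embedding)

# Usage: recordings of different lengths map to embeddings of the same size.
model = XVectorSketch()
emb_a, _ = model(torch.randn(1, 300, 40))    # ~3 s of frames
emb_b, _ = model(torch.randn(1, 1000, 40))   # ~10 s of frames
assert emb_a.shape == emb_b.shape == (1, 256)
```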

In the first half, we tackle the text dependency of the x-vector speaker embedding. Spoken-text information remaining in the x-vector can degrade its performance in text-independent SV because utterances of the same speaker may have different embeddings due to different spoken text. This can lead to a false rejection, i.e., the system rejecting a valid target speaker. To tackle this issue, we propose to disentangle the spoken text and the speaker identity into separate latent factors using a text-to-speech (TTS) model. First, a multi-speaker end-to-end TTS system has text and speech encoders, each of which focuses on encoding information in its corresponding modality. These encoders enable text-independent speaker embedding learning by reconstructing the frames of a target speech segment given a speaker embedding computed from another speech segment of the same utterance. Second, many efforts in neural TTS research over recent years have improved speech synthesis quality. We hypothesize that speech synthesis quality and speaker embedding quality are positively correlated, since the speaker encoder in a TTS system must learn well to synthesize speech from multiple speakers well. We confirm both points through a series of experiments.
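
The following is a minimal sketch of the reconstruction idea described above: a speaker embedding is computed from one segment of an utterance and, together with the encoded text, is used to reconstruct the frames of another segment of the same utterance. The module shapes, the GRU encoders, and the simple L1 reconstruction loss are placeholders for illustration, not the actual TTS system used in this work.

```python
# Sketch of speaker-embedding learning via multi-speaker TTS reconstruction (PyTorch).
# Shapes and modules are assumptions for illustration only.
import torch
import torch.nn as nn

class SpeakerEncoder(nn.Module):
    def __init__(self, feat_dim=80, embed_dim=256):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, embed_dim, batch_first=True)

    def forward(self, ref_frames):             # (batch, time, feat_dim)
        out, _ = self.rnn(ref_frames)
        return out.mean(dim=1)                 # fixed-length speaker embedding

class TextEncoder(nn.Module):
    def __init__(self, vocab=100, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)

    def forward(self, tokens):                 # (batch, text_len)
        out, _ = self.rnn(self.emb(tokens))
        return out                             # (batch, text_len, hidden)

class Decoder(nn.Module):
    def __init__(self, hidden=256, embed_dim=256, feat_dim=80):
        super().__init__()
        self.proj = nn.Linear(hidden + embed_dim, feat_dim)

    def forward(self, text_states, spk_emb):
        # Broadcast the speaker embedding over the text time axis (no attention, for brevity).
        spk = spk_emb.unsqueeze(1).expand(-1, text_states.size(1), -1)
        return self.proj(torch.cat([text_states, spk], dim=-1))

spk_enc, txt_enc, dec = SpeakerEncoder(), TextEncoder(), Decoder()
ref_segment = torch.randn(2, 120, 80)    # segment A of an utterance
target_frames = torch.randn(2, 50, 80)   # frames of segment B (same utterance, same speaker)
tokens = torch.randint(0, 100, (2, 50))  # text of segment B (aligned length, for simplicity)

spk_emb = spk_enc(ref_segment)           # speaker identity taken from a *different* segment
recon = dec(txt_enc(tokens), spk_emb)    # reconstruct segment B from text + speaker embedding
loss = nn.functional.l1_loss(recon, target_frames)
loss.backward()
```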

In the second half, we focus on leveraging unlabeled data to learn embeddings. Considering that far more unlabeled data exists than labeled data, leveraging unlabeled data is essential, yet this is not straightforward with x-vector training. It is, however, possible with the proposed TTS method. First, we show how to use the TTS method for this purpose. The results show that it can leverage unlabeled data, but it still requires some labeled data to post-process the embeddings for the final SV system. To develop a completely unsupervised SV system, we apply a self-supervised technique proposed in computer vision research, self-distillation with no labels (DINO), and compare it to the TTS method. The results show that the DINO method outperforms the TTS method in unsupervised scenarios and enables SV with no labels.
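
For concreteness, here is a compressed sketch of a DINO-style objective applied to pooled representations of two augmented views of the same unlabeled recording; the toy backbone, temperatures, momentum, and centering update follow the general DINO recipe and are assumptions, not the exact configuration used in this proposal.

```python
# Sketch of a DINO-style (self-distillation with no labels) objective for speech (PyTorch).
# Backbone, temperatures, and momentum are illustrative placeholders.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(nn.Linear(40, 256), nn.ReLU(), nn.Linear(256, 128))  # toy encoder
student = backbone
teacher = copy.deepcopy(backbone)           # teacher is an EMA copy of the student, no gradients
for p in teacher.parameters():
    p.requires_grad_(False)

center = torch.zeros(128)                   # running center for the teacher outputs
t_student, t_teacher, momentum = 0.1, 0.04, 0.996

def dino_loss(view_a, view_b):
    """Two augmented views (e.g., two crops of the same recording), pooled to one vector each."""
    s_out = student(view_a)                                       # student sees one view
    with torch.no_grad():
        t_out = teacher(view_b)                                   # teacher sees the other view
        t_probs = F.softmax((t_out - center) / t_teacher, dim=-1) # center + sharpen
    s_logprobs = F.log_softmax(s_out / t_student, dim=-1)
    return -(t_probs * s_logprobs).sum(dim=-1).mean()             # cross-entropy, no labels needed

# One toy step: pooled features from two augmentations of the same unlabeled recording.
view_a, view_b = torch.randn(8, 40), torch.randn(8, 40)
loss = dino_loss(view_a, view_b)
loss.backward()

with torch.no_grad():                        # EMA update of the teacher and the center
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.mul_(momentum).add_((1 - momentum) * ps)
    center = 0.9 * center + 0.1 * teacher(view_b).mean(dim=0)
```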

Future work will focus on 1) exploring the DINO-based method in semi-supervised scenarios, and 2) fine-tuning the network for downstream tasks such as emotion recognition.

Committee Members

  • Najim Dehak, Department of Electrical and Computer Engineering
  • Jesús Villalba, Department of Electrical and Computer Engineering
  • Sanjeev Khudanpur, Department of Electrical and Computer Engineering
  • Hynek Hermansky, Department of Electrical and Computer Engineering
  • Laureano Moro-Velazquez, Department of Electrical and Computer Engineering
Sep 29, Wed
Dissertation Defense: Raghavendra Pappagari
Sep 29 @ 3:30 pm

Note: This is a virtual presentation. Here is the link to where the presentation will take place.

Title: Towards Better Understanding of Spoken Conversations: Assessment of Emotion and Sentiment

Abstract: Emotions play a vital role in our daily life, as they help us convey to others information that is impossible to express verbally. While humans can easily perceive emotions, emotions are notoriously difficult to define and recognize by machines. However, automatically detecting the emotion of a spoken conversation can be useful for a diverse range of applications such as human-machine interaction and conversation analysis. Automatic speech emotion recognition (SER) can be broadly classified into two types: SER from isolated utterances and SER from long recordings. In this thesis, we present machine-learning-based approaches to recognize emotion from both isolated utterances and long recordings.

Isolated utterances are usually shorter than 10 s in duration and are assumed to contain only one major emotion. One of the main obstacles to achieving high emotion recognition accuracy in this setting is the lack of large annotated datasets. We proposed to mitigate this problem by using transfer learning and data augmentation techniques. We show that utterance representations (x-vectors) extracted from speaker recognition models (x-vector models) contain emotion-predictive information and that adapting those models provides significant improvements in emotion recognition performance. To further improve performance, we proposed CopyPaste, a novel perceptually motivated data augmentation method for isolated utterances. Assuming that the presence of an emotion other than neutral dictates a speaker's overall perceived emotion in a recording, the concatenation of an emotional utterance (with emotion E) and a neutral utterance can still be labeled with emotion E. We show that training the model on this concatenated data along with the original training data improves its performance. We presented three CopyPaste schemes and evaluated them on two models, one trained independently and another using transfer learning from an x-vector (speaker recognition) model, in both clean and noisy test conditions. We validated the proposed approaches on three datasets, each collected with a different elicitation method: Crema-D (acted emotions), IEMOCAP (induced emotions), and MSP-Podcast (spontaneous emotions).
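
As a concrete illustration of the CopyPaste idea, the sketch below concatenates an emotional and a neutral utterance and keeps the emotional label; the waveform handling and random ordering are simplifications, not the exact schemes evaluated in the thesis.

```python
# Sketch of CopyPaste-style augmentation: concatenate an emotional and a neutral
# utterance and label the result with the (non-neutral) emotion. Simplified illustration.
import random
import numpy as np

def copypaste_augment(emotional, neutral, emotion_label, neutral_first=None):
    """emotional, neutral: 1-D waveform arrays; emotion_label: label of the emotional clip."""
    if neutral_first is None:
        neutral_first = random.random() < 0.5   # randomize the order of concatenation
    pieces = [neutral, emotional] if neutral_first else [emotional, neutral]
    augmented = np.concatenate(pieces)
    # The perceived emotion of the combined clip is assumed to follow the emotional segment.
    return augmented, emotion_label

# Toy usage with fake 16 kHz waveforms.
angry_clip = np.random.randn(16000 * 3)      # 3 s "angry" utterance
neutral_clip = np.random.randn(16000 * 2)    # 2 s neutral utterance
wav, label = copypaste_augment(angry_clip, neutral_clip, emotion_label="angry")
print(wav.shape, label)                      # (80000,) angry
```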

As isolated utterances are assumed to contain only one emotion, the proposed models make predictions at the utterance level, i.e., one emotion prediction for the whole utterance. However, these models cannot be applied directly to conversations, which can contain multiple emotions, unless the locations of emotion boundaries are known. In this work, we propose to recognize emotions in conversations by performing frame-level classification, where predictions are made at regular intervals. We investigated several deep learning architectures that can exploit context in conversations: transformers, ResNet-34, and BiLSTM. We show that models trained on isolated utterances perform worse than models trained on conversations, suggesting the importance of context. Based on the inner workings of the attention operation, we propose a data augmentation method, DiverseCatAugment (DCA), to equip transformer models with better classification ability. However, these models do not exploit the turn-taking pattern available in conversations. Speakers in a conversation take turns to exchange information, and the emotion in each turn can depend on the speaker's and the partner's emotions in past turns. We show that exploiting information about who is speaking when in the conversation improves emotion recognition performance, and the proposed models can exploit this speaker information even in the absence of speaker segmentation information.
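
A minimal sketch of the frame-level setup described above, with a simple way to inject who-is-speaking information by appending a per-frame speaker indicator to the acoustic features, might look as follows; the BiLSTM backbone and all dimensions are illustrative placeholders rather than the architectures evaluated in the thesis.

```python
# Sketch of frame-level emotion classification over a conversation (PyTorch),
# with a per-frame speaker indicator appended to the acoustic features.
import torch
import torch.nn as nn

class FrameEmotionTagger(nn.Module):
    def __init__(self, feat_dim=40, num_speakers=2, hidden=128, num_emotions=4):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim + num_speakers, hidden,
                           batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_emotions)

    def forward(self, feats, speaker_onehot):
        # feats: (batch, time, feat_dim); speaker_onehot: (batch, time, num_speakers)
        x = torch.cat([feats, speaker_onehot], dim=-1)
        out, _ = self.rnn(x)
        return self.head(out)        # (batch, time, num_emotions): one prediction per frame

model = FrameEmotionTagger()
feats = torch.randn(1, 500, 40)                  # ~5 s window of a conversation
speakers = torch.zeros(1, 500, 2)
speakers[:, :300, 0] = 1.0                       # speaker A talks first
speakers[:, 300:, 1] = 1.0                       # then speaker B
frame_logits = model(feats, speakers)
frame_preds = frame_logits.argmax(dim=-1)        # emotion label at regular intervals
```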

Annotating utterances with emotions is not a simple task: it is expensive, time-consuming, and depends on the number of emotions used for annotation. However, annotation schemes can be changed to reduce the annotation effort, depending on the application. For example, in some applications the goal is only to classify emotions as positive or negative rather than into more detailed categories such as angry, happy, sad, and disgusted. We considered one such application in this thesis: predicting customer satisfaction (CSAT) in call center conversations. CSAT is defined as the overall sentiment (positive vs. negative) of the customer about his or her interaction with the agent. As the goal is to predict only one label for the whole conversation, we perform utterance-level classification. We conducted a comprehensive search for adequate acoustic and lexical representations at different granularities of the conversation, such as the word/frame, turn, and call levels. From the acoustic signal, we found that the proposed x-vector representation combined with a feed-forward deep neural network outperformed widely used prosodic features. From transcripts, CSAT Tracker, a novel method that computes the overall prediction from individual segment outcomes, performed best. Both methods rely on transfer learning to obtain the best performance. We also performed fusion of acoustic and lexical features using a convolutional network. We evaluated our systems on US English telephone speech from call center data. We found that lexical models perform better than acoustic models and that their fusion provided significant gains. An analysis of errors revealed that calls in which customers accomplished their goal but were still dissatisfied are the most difficult to predict correctly. We also found that the customer's speech is more emotional than the agent's speech.
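
To make the fusion step concrete, here is a small sketch that fuses turn-level acoustic and lexical representations with a small convolutional network and pools them into a single call-level CSAT prediction; the dimensions, the mean pooling, and the specific layers are assumptions for illustration, not the evaluated system.

```python
# Sketch of fusing turn-level acoustic and lexical representations with a small
# convolutional network for call-level CSAT (positive vs. negative) prediction.
import torch
import torch.nn as nn

class CsatFusion(nn.Module):
    def __init__(self, acoustic_dim=256, lexical_dim=768, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(acoustic_dim + lexical_dim, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.head = nn.Linear(hidden, 2)        # positive vs. negative CSAT

    def forward(self, acoustic_turns, lexical_turns):
        # acoustic_turns: (batch, turns, acoustic_dim); lexical_turns: (batch, turns, lexical_dim)
        x = torch.cat([acoustic_turns, lexical_turns], dim=-1).transpose(1, 2)
        h = self.conv(x).mean(dim=2)            # pool turn-level features to one call-level vector
        return self.head(h)

model = CsatFusion()
acoustic = torch.randn(1, 30, 256)   # e.g., one x-vector per turn
lexical = torch.randn(1, 30, 768)    # e.g., one text-encoder vector per turn
logits = model(acoustic, lexical)    # call-level CSAT logits
```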

Committee Members:

  • Najim Dehak, Department of Electrical and Computer Engineering
  • Jesús Villalba, Department of Electrical and Computer Engineering
  • Hynek Hermansky, Department of Electrical and Computer Engineering