Calendar

Nov
14
Thu
Distinguished Lecture Series: Reimund Gerhard, University of Potsdam @ Olin Hall 305
Nov 14 @ 3:00 pm – 4:00 pm

Title: Electrets (Dielectrics with quasi-permanent Charges or Dipoles) – A long history and a bright future

Abstract: The history of electrets can be traced back to Thales of Miletus (approx. 624-546 B.C.E.), who reported that pieces of amber (“electron”) attract or repel each other. The science of fundamental electrical phenomena is closely intertwined with the development of electrets, which were known under such terms as “electrics”, “electrophores”, “charged/poled dielectrics”, etc. until about a century ago. Modern electret research started with Oliver Heaviside (1850-1925), who defined the concept of a “permanently electrized body” and proposed the name “electret” in 1885, and Mototarô Eguchi, who experimentally investigated carnauba wax electrets at the Higher Naval College in Tokyo around 1920. Today, we see a wide range of electret types, electret materials, and electret applications, which are being investigated and developed all over the world in a truly global endeavour. A classification of electrets will be followed by a few examples of useful electret effects and exciting device applications – mainly in the area of electro-mechanical and electro-acoustical transduction, which started with the invention of the electret microphone by Sessler and West in the early 1960s. Furthermore, possible synergies between electret research and ultra-high-voltage DC electrical insulation will be mentioned.

Bio: Reimund Gerhard is a Professor of Physics and Astronomy at the University of Potsdam and the current President of the IEEE Dielectrics and Electrical Insulation Society (DEIS). He graduated from the Technical University of Darmstadt as Diplom-Physiker in 1978 and earned his PhD (Doktor-Ingenieur) in Communications Engineering from TU Darmstadt in 1984. From 1985 to 1994, Gerhard was a Research Scientist and Project Manager at the Heinrich-Hertz Institute for Communications Technology (now the Fraunhofer Institute) in Berlin, Germany. He was appointed as a Professor at the University of Potsdam in 1994. From 2004 to 2012, Gerhard served as the Chairman of the Joint Board for the Master-of-Science Program in Polymer Science of FU Berlin, HU Berlin, TU Berlin, and the University of Potsdam. He also served as the Dean of the Faculty of Science at the University of Potsdam from 2008 to 2012, eventually serving as a Senator of the University of Potsdam from 2014 to 2016.

Prof. Gerhard has received many awards and honors over his long career, including an Award (ITG-Preis) from the Information Technology Society (ITG) in the VDE, a silver medal from the Foundation Werner-von-Siemens-Ring, a First Prize Technology Transfer Award Brandenburg, selection as Whitehead Memorial Lecturer of the IEEE CEIDP, and the Award of the EuroEAP Society “for his fundamental scientific contributions in the field of transducers based on dielectric polymers.” He is a Fellow of the American Physical Society (APS) and the Institute of Electrical and Electronics Engineers (IEEE). His research interests include polymer electrets with quasi-permanent space charge, ferro- or piezoelectrets (polymer films with electrically charged cavities), ferroelectric polymers with piezo- and pyroelectric properties, polymer composites with novel property combinations, physical mechanisms of dipole orientation and charge storage, electrically deformable dielectric elastomers (sometimes also called “electro-electrets”), as well as the physics of musical instruments.

Research Interests: 

  • Global or patterned electric charging or poling of dielectric polymer films (electrets)
  • Thermal (pyroelectrical) and acoustical (piezoelectrical) probing of electric-field profiles
  • Dielectric spectroscopy over large temperature and frequency ranges and at high voltages
  • Dipole orientation, ferroelectricity (switching, hysteresis, etc.), quasi-static and dynamic pyroelectricity, direct and inverse piezoelectricity in polymer films (including ferro-electrets)
  • Charge storage and transport and their molecular mechanisms in dielectric polymers
  • Dielectric elastomers (electro-electrets) and their applications in sensors and actuators
  • Demonstration and assessment of applications-relevant electro-mechanical, mechanoelectrical, and thermo-electrical transducer properties for device applications
  • Investigation of musical instruments (organs, pianos, violins) with use of polymer sensors

Note: There will be a reception after the lecture.

Nov
21
Thu
Thesis Proposal: Ruizhi Li @ Olin Hall 305
Nov 21 @ 3:00 pm – 4:00 pm

Title: A Practical and Efficient Multi-Stream Framework for End-to-End Speech Recognition

Abstract: The multi-stream paradigm in Automatic Speech Recognition (ASR) considers scenarios where parallel streams carry diverse or complementary task-related knowledge. In these cases, an appropriate strategy to fuse streams or select the most informative source is necessary. In recent years, with the increasing use of Deep Neural Networks (DNNs) in ASR, End-to-End (E2E) approaches, which directly transcribe human speech into text, have received greater attention. In this proposal, a multi-stream framework is presented based on the joint CTC/Attention E2E model, where parallel streams are represented by separate encoders aiming to capture diverse information. On top of the regular attention networks, a secondary stream-fusion network is introduced to steer the decoder toward the most informative encoders.

Two representative frameworks have been proposed: Multi-Encoder Multi-Resolution (MEM-Res) and Multi-Encoder Multi-Array (MEM-Array). Moreover, since an increasing number of streams (encoders) requires substantial memory and massive amounts of parallel data, a practical two-stage training scheme is further proposed in this work. Experiments are conducted on various corpora including Wall Street Journal (WSJ), CHiME-4, DIRHA and AMI. Compared with the best single-stream performance, the proposed framework achieves substantial improvements and also outperforms various conventional fusion strategies.
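
For intuition, the sketch below (illustrative only, not the proposal's implementation; all names and dimensions are invented) shows the stream-level fusion idea in plain numpy: each encoder produces its own context vector through frame-level attention, and a secondary attention over streams decides how much the decoder should trust each encoder at a given step.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, keys, values):
    """Frame-level dot-product attention: one decoder query over an encoder sequence."""
    scores = keys @ query / np.sqrt(len(query))   # (T,)
    return softmax(scores) @ values               # context vector, shape (d,)

def multi_stream_step(query, encoder_outputs, w_stream):
    """One decoding step with hierarchical (stream-level) attention.

    encoder_outputs: list of (T_i, d) arrays, one per encoder/stream.
    w_stream: (d, n_streams) projection used to score each stream's context.
    """
    contexts = [attend(query, enc, enc) for enc in encoder_outputs]
    stream_scores = np.array([c @ w_stream[:, i] for i, c in enumerate(contexts)])
    stream_weights = softmax(stream_scores)       # how much to trust each stream
    fused = sum(w * c for w, c in zip(stream_weights, contexts))
    return fused, stream_weights

# Toy example: two streams (e.g., two microphone arrays) with different lengths.
rng = np.random.default_rng(0)
d = 8
enc1, enc2 = rng.normal(size=(50, d)), rng.normal(size=(40, d))
fused, weights = multi_stream_step(rng.normal(size=d), [enc1, enc2], rng.normal(size=(d, 2)))
print(weights)   # stream-level attention weights, summing to 1
```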

The future plan aims to improve the robustness of the proposed multi-stream framework. Measuring the performance of an ASR system without ground truth could be beneficial in multi-stream scenarios to emphasize more informative streams over corrupted ones. In this proposal, four different Performance Monitoring (PM) techniques are investigated. The preliminary results suggest that PM measures on attention distributions and decoder posteriors are well correlated with true performance. Integration of PM measures and more sophisticated fusion mechanisms into the multi-stream framework will be the focus of future exploration.

Jan
30
Thu
Thesis Proposal: Pramuditha Perera @ Hackerman Hall B-17
Jan 30 @ 3:00 pm – 4:00 pm

Title: Deep Learning-based Novelty Detection

Abstract: In recent years, intelligent systems powered by artificial intelligence and computer vision that perform visual recognition have gained much attention. These systems observe instances and labels of known object classes during training and learn association patterns that can be used during inference. A practical visual recognition system should first determine whether an observed instance is from a known class. If it is from a known class, then the identity of the instance is queried through classification. The former process is commonly known as novelty detection (or novel class detection) in the literature. Given a set of image instances from known classes, the goal of novelty detection is to determine whether an observed image during inference belongs to one of the known classes.

We consider one-class novelty detection, where all training data are assumed to belong to a single class without any finer annotations available. We identify limitations of conventional approaches to one-class novelty detection and present a Generative Adversarial Network (GAN)-based solution. Our solution is based on learning latent representations of in-class examples using a denoising auto-encoder network. The key contribution of our work is our proposal to explicitly constrain the latent space to exclusively represent the given class. In order to accomplish this goal, firstly, we force the latent space to have bounded support by introducing a tanh activation in the encoder’s output layer. Secondly, using a discriminator in the latent space that is trained adversarially, we ensure that encoded representations of in-class examples resemble uniform random samples drawn from the same bounded space. Thirdly, using a second adversarial discriminator in the input space, we ensure all randomly drawn latent samples generate examples that look real.
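
The first two constraints can be made concrete with a small PyTorch-style sketch (purely illustrative; the layer sizes, losses, and names below are assumptions, not the authors' implementation): the encoder's tanh output bounds the latent code in (-1, 1), and a latent-space discriminator is trained adversarially so that encoded in-class examples become indistinguishable from uniform samples drawn from that bounded cube.

```python
import torch
import torch.nn as nn

LATENT_DIM = 64

class Encoder(nn.Module):
    """Maps a flattened input image to a latent code bounded in (-1, 1) via tanh."""
    def __init__(self, in_dim=784, latent_dim=LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))

    def forward(self, x):
        return torch.tanh(self.net(x))   # bounded support for the latent space

# Discriminator operating on latent codes (the second constraint).
latent_disc = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, 1))
bce = nn.BCEWithLogitsLoss()

def latent_adversarial_losses(encoder, x):
    """Push encoded in-class examples toward uniform samples on (-1, 1)^d."""
    real = torch.ones(x.size(0), 1)
    fake = torch.zeros(x.size(0), 1)
    z_prior = torch.rand(x.size(0), LATENT_DIM) * 2 - 1   # uniform on (-1, 1)^d
    z_enc = encoder(x)
    d_loss = bce(latent_disc(z_prior), real) + bce(latent_disc(z_enc.detach()), fake)
    g_loss = bce(latent_disc(z_enc), real)                 # encoder tries to fool the critic
    return d_loss, g_loss

# Toy usage with random data standing in for in-class images.
enc = Encoder()
d_loss, g_loss = latent_adversarial_losses(enc, torch.randn(16, 784))
```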

Finally, we introduce a gradient-descent based sampling technique that explores points in the latent space that generate potential out-of-class examples, which are fed back to the network to further train it to generate in-class examples from those points. The effectiveness of the proposed method is measured across four publicly available datasets using two one-class novelty detection protocols where we achieve state-of-the-art results.

Feb
27
Thu
Thesis Proposal: Raghavendra Pappagari @ Hackerman Hall B-17
Feb 27 @ 3:00 pm

Title: Towards a better understanding of spoken conversations: Assessment of sentiment and emotion

Abstract: In this talk, we present our work on understanding the emotional aspects of spoken conversations. Emotions play a vital role in our daily life, as they help us convey to other parties information that is impossible to express verbally.

While humans can easily perceive emotions, they are notoriously difficult to define and for machines to recognize. However, automatically detecting the emotion of a spoken conversation can be useful for a diverse range of applications such as human-machine interaction and conversation analysis. In this work, we consider emotion recognition in two particular scenarios. The first scenario is predicting customer sentiment/satisfaction (CSAT) in a call center conversation, and the second consists of emotion prediction in short utterances.

CSAT is defined as the overall sentiment (positive vs. negative) of the customer about his/her interaction with the agent. In this work, we perform a comprehensive search for adequate acoustic and lexical representations.

For the acoustic representation, we propose to use the x-vector model, which is known for its state-of-the-art performance in the speaker recognition task. The motivation behind using x-vectors for CSAT is that we observed that emotion information encoded in x-vectors affected speaker recognition performance. For the lexical representation, we introduce a novel method, CSAT Tracker, which computes the overall prediction based on individual segment outcomes. Both methods rely on transfer learning to obtain the best performance. We performed classification using convolutional neural networks that combine the acoustic and lexical features. We evaluated our systems on US English telephone speech from call center data. We found that lexical models perform better than acoustic models and that their fusion provided significant gains. Error analysis reveals that calls in which customers accomplished their goal but were still dissatisfied are the most difficult to predict correctly. We also found that the customer’s speech is more emotional compared to the agent’s speech.
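
As background on why x-vector models transfer naturally to utterance-level tasks such as CSAT, the sketch below (a generic illustration, not the presenter's code) shows the statistics-pooling step used in x-vector style networks: variable-length frame-level features are collapsed into a single fixed-size utterance vector by concatenating their per-dimension mean and standard deviation, which segment-level classification layers can then consume.

```python
import numpy as np

def statistics_pooling(frame_features):
    """Collapse (T, d) frame-level features into a fixed-size 2d-dimensional
    utterance vector by concatenating the per-dimension mean and standard
    deviation, as done in x-vector style architectures."""
    return np.concatenate([frame_features.mean(axis=0), frame_features.std(axis=0)])

# Toy usage: 300 frames of 512-dimensional features -> one 1024-dimensional vector.
frames = np.random.default_rng(0).normal(size=(300, 512))
print(statistics_pooling(frames).shape)   # (1024,)
```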

For the second scenario of predicting emotion, we present a novel approach based on x-vectors. We show that adapting the x-vector model for emotion recognition provides the best published results on three public datasets.

Mar
5
Thu
Thesis Proposal: Matthew Maciejewski @ Hackerman Hall B-17
Mar 5 @ 3:00 pm

Title: Single-Channel Speech Separation in Noisy and Reverberant Conditions

Abstract: An inevitable property of multi-party conversations is that more than one speaker will end up speaking simultaneously for portions of time. Many speech technologies, such as automatic speech recognition and speaker identification, are not designed to function on overlapping speech and suffer severe performance degradation under such conditions. Speech separation techniques aim to solve this problem by producing a separate waveform for each speaker in an audio recording with multiple talkers speaking simultaneously. The advent of deep neural networks has resulted in strong performance gains on the speech separation task. However, training and evaluation have been nearly ubiquitously restricted to a single dataset of clean, near-field read speech, which is not representative of many multi-person conversational settings that are frequently recorded with room microphones, introducing noise and reverberation. Due to the degradation of other speech technologies in these sorts of conditions, speech separation systems are expected to suffer a decrease in performance as well.

The primary goal of this proposal is to develop novel techniques to improve speech separation in noisy and reverberant recording conditions. One core component of this work is the creation of additional synthetic overlap corpora spanning a range of more realistic and challenging conditions. The lack of suitable data necessitates a first step of creating appropriate benchmark conditions against which to measure the performance of state-of-the-art methods. Another proposed line of investigation is the integration of speech separation techniques with speech enhancement, the task of enhancing a speech signal through the removal of noise or reverberation. This is a natural combination due to similarities in problem formulation and general approach. Finally, we propose an investigation into the effectiveness of speech separation as a pre-processing step for speech technologies, such as automatic speech recognition, that struggle with overlapping speech, as well as tighter integration of speech separation with these “downstream” systems.

Apr
2
Thu
Thesis Proposal: John Franklin
Apr 2 @ 3:00 pm

This presentation will be happening remotely over Zoom. Click this link as early as 15 minutes before the scheduled start time of the presentation to watch in a Zoom meeting.

Meeting ID: 618-589-385
Meeting Password: 261713

Title: Compressive Sensing for Wireless Systems with Massive Antenna Arrays

Abstract: Over the past two decades the world has enjoyed exponential growth in wireless connectivity that has fundamentally changed the way people communicate and has opened the door to limitless new applications. With the advent of 5G, users will now begin to enjoy enhanced mobile broadband links supporting peak rates of over 10 gigabits per second. 5G will also support massive machine-type communications and ultra-reliable low-latency communication with latencies below one millisecond. Continuing to increase system capacity requires the continual advancement of new technologies that make efficient use of finite spectrum resources.

Researchers have studied Multiple-Input-Multiple-Output (MIMO) communications over the last several decades as a way to increase system capacity. The MIMO channel is composed of multiple transmit (input) antennas and multiple receive (output) antennas. The channel is represented as the impulse response between each transmit and receive antenna pair. In the simplest of channels, the pairwise impulse response reduces to a single coefficient. Many theoretical MIMO results rely on Rayleigh channels featuring independent, identically distributed complex Gaussian variables as channel coefficients.

The concept of Massive MIMO emerged a decade ago and is a leading technology in 5G wireless. Massive MIMO features base stations with massive antenna arrays that simultaneously service many users. The Massive MIMO array has many more antennas than users. Unlike traditional phased array antennas, Massive MIMO arrays have all (or a large portion of) their antennas connected to receive chains for baseband processing. Successfully decoding each user’s data stream requires estimates of the propagation channel. Channel estimation is usually aided through the use of pilot signals that are known to both the user terminal and the base station. Simultaneously estimating the channel matrix between each user and each antenna in a massive MIMO array creates challenges for pilot sequence design. The more channel resources are reserved for pilot sequences, the fewer remain for user data.

Several efforts have shown that the mm-wave massive MIMO channel exhibits sparse structure. The number of distinct and resolvable paths between a user and a massive MIMO array is generally much smaller than the number of base station antennas. Early theoretical MIMO work relied on Rayleigh channels as they are useful for closed-form solutions. In reality, the massive MIMO mm-wave channel is low rank, as it can be modeled by a small number of resolvable multipath components. This opens opportunities for new channel estimation techniques using compressive sensing and sparse recovery.
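
As a concrete, deliberately simplified illustration of sparse recovery for channel estimation, the numpy sketch below runs orthogonal matching pursuit, a standard compressive-sensing baseline, to recover a synthetic sparse angular-domain channel from a small number of pilot measurements. The setup, dimensions, and names are invented for the example and are not taken from the proposal.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: greedy recovery of a sparse vector x from
    measurements y = A x (+ noise). A is the sensing matrix, e.g. pilot symbols
    combined with an angular dictionary for a mm-wave channel."""
    residual, support = y.copy(), []
    x_hat = np.zeros(A.shape[1], dtype=A.dtype)
    for _ in range(sparsity):
        idx = np.argmax(np.abs(A.conj().T @ residual))            # best-matching atom
        support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # re-fit on the support
        residual = y - A[:, support] @ coef
    x_hat[support] = coef
    return x_hat

# Toy example: a 64-entry angular channel with 3 paths, observed via 20 pilot measurements.
rng = np.random.default_rng(0)
n_ant, n_pilots, n_paths = 64, 20, 3
x = np.zeros(n_ant, dtype=complex)
x[rng.choice(n_ant, n_paths, replace=False)] = rng.normal(size=n_paths) + 1j * rng.normal(size=n_paths)
A = (rng.normal(size=(n_pilots, n_ant)) + 1j * rng.normal(size=(n_pilots, n_ant))) / np.sqrt(n_pilots)
y = A @ x + 0.01 * (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots))
print(np.linalg.norm(omp(A, y, n_paths) - x) / np.linalg.norm(x))  # small relative error
```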

Although Massive MIMO will be featured in future 5G services, there is still much untapped potential. Through developing better channel estimation schemes, additional system throughput can be achieved. This work will consider:

  • Generation of sparse mm-wave channels for analysis
  • Multi-user pilot design approaches for measuring the massive MIMO channel
  • Channel estimates formed through sparse recovery methods

Apr
16
Thu
Thesis Proposal: Golnoosh Kamali
Apr 16 @ 3:00 pm

This event will occur remotely in a Zoom meeting at this link. Please do not join the meeting until at least 15 minutes before the presentation is scheduled to start. 

Title: Using Systems Modeling to Localize the Seizure Onset Zone in Epilepsy Patients from Single Pulse Electrical Stimulation Recordings

Abstract: Surgical resection of the seizure onset zone (SOZ) could potentially lead to seizure freedom in medically refractory epilepsy patients. However, localizing the SOZ can be a time-consuming and tedious process involving visual inspection of intracranial electroencephalographic (iEEG) recordings captured during passive patient monitoring. Single pulse electrical stimulation (SPES) is currently performed on patients undergoing invasive EEG monitoring for the main purpose of mapping functional brain networks such as language and motor networks. We hypothesize that evoked responses from SPES can also be used to localize the SOZ, as they may express the natural frequencies and connectivity of the iEEG network. To test our hypothesis, we construct patient-specific single-input multi-output transfer function models from the evoked responses recorded from eight epilepsy patients who underwent SPES evaluation and iEEG monitoring. Our preliminary results suggest that the stimulation electrodes that produced the highest system gain, as measured by the 𝓗∞ norm, correspond to those electrodes clinically defined in the SOZ in successfully treated patients.
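
For readers unfamiliar with the metric, the sketch below (illustrative only; the FIR responses are synthetic placeholders rather than patient data, and estimation of the transfer functions themselves is omitted) shows one simple way to score the gain of a single-input multi-output system: the 𝓗∞ norm is the peak, over frequency, of the Euclidean norm of the frequency-response vector across output channels.

```python
import numpy as np
from scipy import signal

def hinf_norm_simo(impulse_responses, n_freq=4096):
    """Approximate the H-infinity norm of a single-input multi-output FIR system:
    the maximum, over frequency, of the 2-norm of the frequency-response vector.
    impulse_responses: array of shape (n_channels, n_taps)."""
    H = np.stack([signal.freqz(h, worN=n_freq)[1] for h in impulse_responses])  # (channels, n_freq)
    gain = np.linalg.norm(H, axis=0)   # ||H(e^{jw})||_2 at each frequency
    return gain.max()

# Toy example: decaying responses evoked at 3 recording electrodes by one stimulation site.
rng = np.random.default_rng(0)
responses = rng.normal(size=(3, 200)) * np.exp(-np.arange(200) / 40.0)
print(hinf_norm_simo(responses))
```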

Apr
23
Thu
Thesis Proposal: Aswin Shanmugam Subramanian
Apr 23 @ 3:00 pm

This presentation will be done remotely. Follow this link for access to the Zoom meeting where it will be taking place. It is advised that you do not log in to the meeting until at least 15 minutes before the presentation’s start time.

Title: A Synergistic Combination of Signal Processing and Deep Learning for Robust Speech Processing

Abstract: When speech is captured with a distant microphone, it includes distortions caused by noise, reverberation, and overlapping speakers. Far-field speech processing systems need to be robust to those distortions to function in real-world applications and hence have front-end components to handle them. The front-end components are typically optimized based on signal reconstruction objectives. This makes the overall speech processing system sub-optimal, as the front-end is optimized independently of the downstream task. This approach has the further constraint that the enhancement/separation system can be trained only with simulated data and hence does not generalize well to real data. Alternatively, these front-end systems can be trained with application-oriented objectives. Emergent end-to-end neural methods have made it easier to optimize the front-end in such a manner.

The goal of this work is to encompass carefully designed multichannel speech enhancement/separation subnetworks inside a sequence-to-sequence automatic speech recognition (ASR) system. This work takes an explainable AI approach to the problem, where the intermediate outputs of the subnetworks can be interpreted even though the entire network is trained only on the speech recognition error minimization criterion. This proposal looks at two directions: (1) simultaneous dereverberation and denoising using a single differentiable speech recognition network that also learns some important hyperparameters from the data, and (2) target speech extraction combining both anchor speech and location information, optimized with only the transcription as the target.

In the first direction, the dereverberation subnetwork is based on linear prediction, where the filter order hyperparameter is estimated using a reinforcement learning approach, and the denoising (beamforming) subnetwork is based on a parametric multichannel Wiener filter, where the speech distortion factor is also estimated inside the network. This method has shown a considerable gain in performance in real and unseen conditions. It is also shown how such a system optimized based on the ASR objective improves speech enhancement quality on various signal-level metrics in addition to the ASR word error rate (WER) metric. In the second direction, a location- and anchor-speech-guided target speech extraction subnetwork is trained end-to-end with an ASR network. Experimental comparison with a traditional pipeline system verifies that this task can be realized with end-to-end ASR training objectives without using parallel clean data. The results are promising in mixtures of two speakers and noise. The future plan is to optimize an explicit source localization front-end with a speech recognition objective. This can play an important role in realizing a conversation system that recognizes who is speaking what, when, and where.

Apr
30
Thu
Thesis Proposal: Ke Li
Apr 30 @ 3:00 pm

This presentation is happening remotely. Click this link as early as 15 minutes before the scheduled start time of the presentation to watch in a Zoom meeting.

Title: Context-aware Language Modeling and Adaptation for Automatic Speech Recognition

Abstract: Language models (LMs) are an important component in automatic speech recognition (ASR) and are usually trained on transcriptions. Language use is strongly influenced by factors such as domain, topic, style, and user preference. However, transcriptions from speech corpora are usually too limited to fully capture contextual variability in test domains, and some of this information is only available at test time. It is easily observed that a change of application domain often induces a mismatch in the lexicon and distribution of words. Even within the same domain, topics can shift and user preference can vary. These observations indicate that LMs trained purely on transcriptions, which may not be representative of test domains, are far from ideal and may severely affect ASR performance. To mitigate these mismatches, adapting LMs to contextual variables is desirable.

The goal of this work is to explore general and lightweight approaches to neural LM adaptation and context-aware modeling for ASR. In the adaptation direction, two approaches are investigated. The first is based on cache models. Although neural LMs outperform n-gram LMs on modeling longer context, previous studies show that some of them, for example LSTMs, still only capture a relatively short span of context. Cache models that capture relatively long-term self-trigger information have proved useful for n-gram LM adaptation. This work extends a fast marginal adaptation framework to neural LMs and adapts LSTM LMs in an unsupervised way. Specifically, pre-trained LMs are adapted to cache models estimated from decoded hypotheses. This method is lightweight as it does not require retraining. The second approach is interpolation-based. Linear interpolation is a simple and robust adaptation approach, but it is suboptimal since the weights are optimized globally and are not aware of local context. To tackle this issue, a mixer model that combines pre-trained neural LMs with dynamic weighting is proposed. Experimental results show that it outperforms fine-tuning and linear interpolation in most scenarios. As for context-aware modeling, this work proposes a simple and effective way to implicitly integrate cache models into neural LMs, providing a simple alternative to the pointer sentinel mixture model. Experiments show that the proposed method is more effective on relatively rare words and outperforms several baselines. Future work will focus on analyzing the importance and the effect of various contextual factors on ASR and developing approaches for representing and modeling these factors to improve ASR performance.
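
To give a feel for the underlying cache-plus-interpolation idea (this is a generic toy sketch, not the proposed mixer model or fast marginal adaptation; the vocabulary, probabilities, and weight are invented), the snippet below interpolates a base LM's next-word distribution with a unigram cache estimated from recently decoded words, so that self-triggering words seen in the recent history get boosted.

```python
from collections import Counter

def cache_interpolate(p_base, history, vocab, lam=0.1):
    """Interpolate a base LM's next-word distribution with a unigram cache.

    p_base: dict word -> probability from the pre-trained LM.
    history: list of recently decoded words (the cache content).
    lam: interpolation weight given to the cache."""
    counts = Counter(w for w in history if w in vocab)
    total = sum(counts.values())
    p_mix = {}
    for w in vocab:
        p_cache = counts[w] / total if total else 1.0 / len(vocab)
        p_mix[w] = (1 - lam) * p_base.get(w, 0.0) + lam * p_cache
    return p_mix

# Toy usage: the cache boosts recently seen, self-triggering words.
vocab = ["the", "network", "cache", "model", "speech"]
p_base = {"the": 0.4, "network": 0.2, "cache": 0.05, "model": 0.25, "speech": 0.1}
history = ["cache", "model", "cache"]
print(cache_interpolate(p_base, history, vocab, lam=0.2))
```
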
May
14
Thu
Thesis Proposal: Arun Nair
May 14 @ 3:00 pm

This presentation will be taking place remotely. Follow this link to enter the Zoom meeting where it will be hosted. It is advised that you do not enter the meeting until at least 15 minutes before the talk is scheduled to take place. 

Title: Machine Learning for Collaborative Signal Processing in Beamforming and Compressed Sensing

Abstract: Life today has become inextricably linked with the many sensors working in concert in our environment, from the webcam and microphone in our laptops to the arrays of wireless transmitters and receivers in cellphone towers. Collaborative signal processing methods tackle the challenge of efficiently processing data from multiple sources. Recently, machine learning methods have become very popular tools for collaborative signal processing, largely due to the success of deep learning. The large volume of data created by multiple sensors pairs well with the data-hungry nature of modern machine learning models, holding great promise for efficient solutions.

This proposal extends ideas from machine learning to problems in collaborative signal processing. Specifically, this work will focus on two collaborative signal processing methods – beamforming and compressed sensing. Beamforming is commonly employed in sensor arrays for directional signal transmission and reception by combining the signals received at the array elements to enhance a signal of interest. On the other hand, compressed sensing is a widely applicable mathematical framework that guarantees exact signal recovery even at sub-Nyquist sampling rates if suitable sparsity and incoherence assumptions are satisfied. Compressed sensing accomplishes this via convex or greedy optimization to fuse the information in a small number of signal measurements.
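
To make the beamforming idea concrete, here is a minimal frequency-domain delay-and-sum beamformer in numpy (a generic textbook sketch, not the proposal's audio-visual zooming or ultrasound system; the array geometry, sampling rate, and signals are invented for the example):

```python
import numpy as np

def delay_and_sum(signals, mic_positions, look_direction, fs, c=343.0):
    """Frequency-domain delay-and-sum beamformer.

    signals: (n_mics, n_samples) time-domain recordings.
    mic_positions: (n_mics, 3) microphone coordinates in meters.
    look_direction: unit vector pointing from the array toward the source.
    Circular-shift edge effects are ignored for simplicity."""
    # Delay to apply to each channel so a plane wave from look_direction sums in phase.
    delays = mic_positions @ look_direction / c
    n_mics, n_samples = signals.shape
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    aligned = spectra * np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft(aligned.mean(axis=0), n=n_samples)

# Toy usage: a 4-microphone linear array steered toward broadside.
rng = np.random.default_rng(0)
mics = np.array([[0.00, 0.0, 0.0], [0.05, 0.0, 0.0], [0.10, 0.0, 0.0], [0.15, 0.0, 0.0]])
recordings = rng.normal(size=(4, 16000))
enhanced = delay_and_sum(recordings, mics, np.array([0.0, 1.0, 0.0]), fs=16000)
```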

The first part of this work was motivated by the common experience of attempting to capture a video on a mobile device but having the target of interest contaminated by the surrounding environment (e.g., construction sounds from outside the camera’s field of view). Fusing visual and auditory information, we propose a novel audio-visual zooming algorithm that directionally filters the received audio data using beamforming to focus only on audio originating from within the field of view of the camera. Second, we improve the quality of ultrasound image formation by introducing a novel beamforming framework that leverages the benefits of deep learning. Ultrasound images currently suffer from severe speckle and clutter degradations, which cause poor image quality and reduce diagnostic utility. We propose to design a deep neural network to learn end-to-end transformations that extract information directly from raw received ultrasound channel data. Finally, we improve upon optimization-based compressed sensing recovery by replacing the slow iterative optimization algorithms with far faster convolutional neural networks.
