Title: Collaborative Regression and Classification via Bootstrapping
Abstract: In modern machine learning problems and applications, the data we deal with are large in both dimension and volume, making data analysis time-consuming and computationally inefficient. Sparse recovery algorithms are developed to extract the underlying low-dimensional structure from the data. Classical signal recovery based on l1 minimization solves the least-squares problem over all available measurements via sparsity-promoting regularization, and has shown promising performance in regression and classification. Previous work in Compressed Sensing (CS) theory reveals that when the true solution is sparse and the number of measurements is large enough, solutions to l1 minimization converge to the ground truth. In practice, when the number of measurements is low, when the noise level is high, or when measurements arrive sequentially in a streaming fashion, conventional l1 minimization algorithms tend to struggle in signal recovery.
This research work aims to use multiple local measurements, generated by resampling via bootstrap or sub-sampling, to efficiently make global predictions in the challenging scenarios described above. We develop two main approaches: one extends the conventional bagging scheme in sparse regression with a fixed bootstrapping ratio, whereas the other, called JOBS, enforces support consistency among bootstrapped estimators in a collaborative fashion. We first derive rigorous theoretical guarantees for both proposed approaches and then carefully evaluate them with extensive simulations to quantify their performance. Our algorithms are quite robust compared to conventional l1 minimization, especially in scenarios with high measurement noise and a low number of measurements. Our theoretical analysis also provides key guidance on how to choose optimal parameters, including bootstrapping ratios and the number of collaborative estimates. Finally, we demonstrate that our proposed approaches yield significant performance gains in both sparse regression and classification, two crucial problems in signal processing and machine learning.
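To make the bagging idea concrete (this is an illustrative sketch, not the JOBS algorithm or the thesis implementation), the pure-Python example below solves each bootstrapped l1 subproblem with plain ISTA and averages the estimates; the toy matrix, bootstrapping ratio, and regularization weight are all invented for illustration.

```python
import random

def ista_l1(A, y, lam=0.05, step=0.05, iters=2000):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 with ISTA (pure Python)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = Ax - y, then gradient g = A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        for j in range(n):
            z = x[j] - step * g[j]  # gradient step
            # soft-thresholding: proximal operator of the l1 penalty
            x[j] = max(abs(z) - step * lam, 0.0) * (1.0 if z > 0 else -1.0)
    return x

def bagged_l1(A, y, n_estimators=15, ratio=0.8, seed=0):
    """Bagging for sparse regression: average l1 solutions computed on
    bootstrap resamples (rows drawn with replacement) of the measurements."""
    rng = random.Random(seed)
    m, n = len(A), len(A[0])
    k = max(1, int(ratio * m))  # bootstrap sample size per estimator
    avg = [0.0] * n
    for _ in range(n_estimators):
        rows = [rng.randrange(m) for _ in range(k)]
        xb = ista_l1([A[i] for i in rows], [y[i] for i in rows])
        avg = [a + xi / n_estimators for a, xi in zip(avg, xb)]
    return avg

# Toy sparse problem: y is generated by the first column only.
A = [[1.0, 0.5, 0.2], [0.3, 1.0, 0.4], [0.8, 0.2, 1.0],
     [0.5, 0.9, 0.3], [0.2, 0.4, 0.8], [1.0, 0.1, 0.6]]
y = [2.0, 0.6, 1.6, 1.0, 0.4, 2.0]  # = A applied to x_true = [2, 0, 0]
x_hat = bagged_l1(A, y)
```

On this toy problem the averaged estimate concentrates on the first coefficient, mirroring the sparse ground truth [2, 0, 0].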
Title: Brain structure segmentation using multiple MRI pulse sequences
Abstract: Medical image segmentation is the process of delineating anatomical structures of interest in images. Automatic segmentation algorithms applied to brain magnetic resonance images (MRI) allow for the processing of large volumes of data for the study of neurodegenerative diseases. Widely-used segmentation software packages only require T1-weighted (T1-w) MRI and segment cortical and subcortical structures, but are unable to segment structures that do not appear in T1-w MRI. Other MRI pulse sequences have properties that allow for the segmentation of structures that are invisible (or barely discernible) in T1-w MRI.
In this dissertation, three novel medical image segmentation algorithms are proposed to segment the following structures of interest: the thalamus; the falx and tentorium; and the meninges. The common theme that connects these segmentation algorithms is that they use information from multiple MRI pulse sequences because the structures they target are nearly invisible in T1-w MRI. Segmentation of these structures is used in the study of neurodegenerative diseases such as multiple sclerosis and for the development of computational models of the brain for the study of traumatic brain injury.
Our automatic thalamus and thalamic nuclei segmentation algorithm extracts features from T1-w MRI, T2-w MRI, and diffusion tensor imaging (DTI) to train a random forest classifier. Using leave-one-out cross-validation on nine subjects, our algorithm achieves mean Dice coefficients of 0.897 and 0.902 for the left and right thalami, respectively, which are higher Dice scores than those of the three state-of-the-art methods we compared against.
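For reference, the Dice coefficient used to score these segmentations is twice the overlap divided by the total size of the two masks; a minimal sketch with toy voxel sets (the coordinates are invented for illustration):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two segmentations, each given
    as a set of voxel coordinates: 2|A ∩ B| / (|A| + |B|)."""
    if not mask_a and not mask_b:
        return 1.0  # both empty: perfect agreement by convention
    return 2.0 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# Toy example: two 4-voxel masks sharing 3 voxels.
auto = {(1, 1, 1), (1, 2, 1), (2, 1, 1), (2, 2, 1)}
manual = {(1, 1, 1), (1, 2, 1), (2, 1, 1), (3, 3, 1)}
print(dice(auto, manual))  # 2*3 / (4 + 4) = 0.75
```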
Our falx and tentorium segmentation algorithm uses T1-w MRI and susceptibility-weighted imaging (SWI) to register multiple atlases and fuse their boundary points to generate a subject-specific falx and tentorium. Our method is compared against single-atlas approaches and achieves the lowest mean surface distances of 0.86 mm and 0.99 mm to a manually delineated falx and tentorium, respectively.
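The mean surface distance used above can be sketched as the average distance from each boundary point of one surface to the nearest point of the other (a brute-force illustration with made-up points; practical pipelines typically symmetrize the measure and use spatial indexing):

```python
def mean_surface_distance(surf_a, surf_b):
    """Mean over points on surface A of the Euclidean distance to the
    nearest point on surface B (points are (x, y, z) tuples in mm)."""
    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
    return sum(min(dist(p, q) for q in surf_b) for p in surf_a) / len(surf_a)

# Toy boundaries: two point pairs offset by 1 mm along z.
est_pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
ref_pts = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)]
print(mean_surface_distance(est_pts, ref_pts))  # → 1.0
```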
Our meninges reconstruction algorithm uses T1-w MRI, T2-w MRI, and a synthetic computed tomography (CT) image generated via a convolutional neural network to find two layers of the meninges: the subarachnoid space and the dura mater. We compare our method with other brain extraction and intracranial volume estimation algorithms. Our method produces a subarachnoid space segmentation with a mean Dice score of 0.991, which is comparable to the top-performing state-of-the-art method, and produces a dura mater segmentation with a mean Dice score of 0.983, which is the highest among the compared methods.
Title: Minimally-Invasive Lens-free Computational Microendoscopy
Abstract: Ultra-miniaturized imaging tools are vital for numerous biomedical applications. Such minimally invasive imagers allow for navigation into hard-to-reach regions and, for example, observation of deep brain activity in freely moving animals with minimal ancillary tissue damage. Conventional solutions employ distal microlenses. However, as lenses become smaller and thus less invasive, they develop greater optical aberrations, requiring bulkier compound designs with a restricted field-of-view. In addition, tools capable of 3-dimensional volumetric imaging require components that physically scan the focal plane, which ultimately increases the distal complexity, footprint, and weight. Simply put, minimally-invasive imaging systems have limited information capacity for a given cross-sectional area.
This thesis explores minimally-invasive lens-free microendoscopy enabled by a successful integration of signal processing, optical hardware, and image reconstruction algorithms. Several computational microendoscopy architectures that simultaneously achieve miniaturization and high information content are presented. Leveraging computational imaging techniques enables color-resolved imaging with a wide field-of-view and 3-dimensional volumetric reconstruction of an unknown scene from a single camera frame without any actuated parts, further advancing the performance-versus-invasiveness trade-off of microendoscopy.
Title: Semi-supervised training for automatic speech recognition
Abstract: State-of-the-art automatic speech recognition (ASR) systems use sequence-level objectives like Connectionist Temporal Classification (CTC) and Lattice-free Maximum Mutual Information (LF-MMI) for training neural network-based acoustic models. These methods are known to be most effective with large datasets containing hundreds or thousands of hours of data. It is difficult to obtain large amounts of supervised data other than in a few major languages like English and Mandarin. It is also difficult to obtain supervised data covering a myriad of channel and environmental conditions. On the other hand, large amounts of unsupervised audio can be obtained fairly easily: there are enormous amounts of unsupervised data available from broadcast TV, call centers, and YouTube, for many different languages and in many environmental conditions. The goal of this research is to discover how best to leverage the available unsupervised data for training acoustic models for ASR.
In the first part of this thesis, we extend the Maximum Mutual Information (MMI) training to the semi-supervised training scenario. We show that maximizing Negative Conditional Entropy (NCE) over lattices from unsupervised data, along with state-level Minimum Bayes Risk (sMBR) on supervised data, in a multi-task architecture gives word error rate (WER) improvements without needing any confidence-based filtering.
In the second part of this thesis, we investigate using lattice-based supervision as the numerator graph to incorporate uncertainties in unsupervised data in the LF-MMI training framework. We explore various aspects of creating the numerator graph, including splitting lattices for minibatch training, applying tolerance to frame-level alignments, pruning beam sizes, the word LM scale, and the inclusion of pronunciation variants. We show that the WER recovery rate (WRR) of our proposed approach is 5-10% absolute better than that of the baseline using the 1-best transcript as supervision, and remains stable in the 40-60% range even on large-scale setups and across multiple languages.
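For readers unfamiliar with the metrics, here is a minimal sketch of WER and of one common formulation of the WER recovery rate (the exact WRR convention used in the thesis may differ; the numbers below are toy values):

```python
def wer(ref, hyp):
    """Word error rate: word-level Levenshtein distance / reference length."""
    r, h = ref.split(), hyp.split()
    # dp[i][j] = edit distance between first i ref words and first j hyp words
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,                      # deletion
                           dp[i][j - 1] + 1,                      # insertion
                           dp[i - 1][j - 1] + (r[i - 1] != h[j - 1]))  # substitution
    return dp[-1][-1] / len(r)

def wrr(wer_seed, wer_semisup, wer_oracle):
    """WER recovery rate: fraction of the gap between the seed system
    (supervised subset only) and the oracle (fully supervised) recovered
    by semi-supervised training."""
    return (wer_seed - wer_semisup) / (wer_seed - wer_oracle)

print(wer("the cat sat", "the cat sat down"))  # 1 insertion over 3 reference words
print(wrr(20.0, 15.0, 10.0))  # recovered half the seed-to-oracle gap → 0.5
```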
Finally, we explore transfer learning for the scenario where we have unsupervised data in a mismatched domain. First, we look at the teacher-student learning approach for cases where parallel data is available in the source and target domains. Here, we train a “student” neural network on the target domain to mimic a “teacher” neural network on the parallel source-domain data, using sequence-level posteriors instead of the traditional frame-level posteriors.
We show that the proposed approach is very effective at dealing with acoustic domain mismatch in multiple scenarios of unsupervised domain adaptation: clean to noisy speech, 8 kHz to 16 kHz speech, and close-talk microphone to distant microphone.
Second, we investigate approaches to mitigate language domain mismatch, and show that a matched language model significantly improves WRR. We finally show that our proposed semi-supervised transfer learning approach works effectively even on large-scale unsupervised datasets with 2000 hours of audio in natural and realistic conditions.
Title: Strategies for Handling Out-of-Vocabulary Words in Automatic Speech Recognition
Abstract: Nowadays, most ASR (automatic speech recognition) systems deployed in industry are closed-vocabulary systems, meaning the system can recognize only a limited vocabulary of words, whose pronunciations are provided to the system. Words outside this vocabulary are called out-of-vocabulary (OOV) words, for which either the pronunciations, or both the spellings and pronunciations, are unknown to the system. The basic motivations for developing strategies to handle OOV words are twofold. First, in the training phase, missing or wrong pronunciations of words in the training data result in poor acoustic models. Second, in the test phase, words outside the vocabulary cannot be recognized at all, and mis-recognition of OOV words may degrade recognition performance for their in-vocabulary neighbors as well. Therefore, this dissertation is dedicated to exploring strategies for handling OOV words in closed-vocabulary ASR.
First, we investigate dealing with OOV words in ASR training data by introducing an acoustic-data-driven pronunciation learning framework, which uses a likelihood-reduction-based criterion to select pronunciation candidates from multiple sources, namely standard grapheme-to-phoneme (G2P) algorithms and phonetic decoding, in a greedy fashion. This framework effectively expands a small hand-crafted pronunciation lexicon to cover OOV words, and the learned pronunciations have higher quality than those from G2P alone or from other baseline pruning criteria. Furthermore, applying the proposed framework to generate alternative pronunciations for in-vocabulary (IV) words improves both recognition performance on the relevant words and overall acoustic model performance.
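The greedy selection loop has roughly the following shape (a generic sketch: the `gain` callback standing in for the likelihood-reduction criterion, the candidate names, and the thresholds are all hypothetical, not the thesis implementation):

```python
def greedy_select(candidates, gain, max_prons=3, min_gain=1e-3):
    """Greedy forward selection: repeatedly add the pronunciation candidate
    with the largest marginal score until the gain falls below a threshold
    or the per-word pronunciation budget is exhausted."""
    selected, pool = [], list(candidates)
    while pool and len(selected) < max_prons:
        best = max(pool, key=lambda c: gain(selected, c))
        if gain(selected, best) < min_gain:
            break  # remaining candidates no longer reduce the likelihood enough
        selected.append(best)
        pool.remove(best)
    return selected

# Toy stand-in for the likelihood-reduction criterion: fixed per-candidate scores.
scores = {"ax": 5.0, "eh": 3.0, "uh": 0.0001}
toy_gain = lambda selected, cand: scores[cand]
print(greedy_select(["uh", "ax", "eh"], toy_gain))  # → ['ax', 'eh']
```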
Second, we investigate dealing with OOV words in ASR test data, i.e. OOV detection and recovery. We first conduct a comparative study of a hybrid lexical model (HLM) approach for OOV detection, and several baseline approaches, with the conclusion that the HLM approach outperforms others in both OOV detection and first pass OOV recovery performance. Next, we introduce a grammar-decoding framework for efficient second pass OOV recovery, showing that with properly designed schemes of estimating OOV unigram probabilities, the framework significantly improves OOV recovery and overall decoding performance compared to first pass decoding.
Finally, we propose an open-vocabulary word-level recurrent neural network language model (RNNLM) rescoring framework, making it possible to rescore lattices containing recovered OOVs using a word-level RNNLM that was ignorant of OOVs when it was trained. Altogether, the whole OOV recovery pipeline shows the potential of a highly efficient open-vocabulary word-level ASR decoding framework, tightly integrated into a standard WFST decoding pipeline.
Note: This is a virtual seminar that will be broadcast in Olin Hall 305. Refreshments will be available outside Olin Hall 305 at 2:30 PM.
Title: Computational infrastructure to improve scientific reproducibility
Abstract: The massive increase in the dimensionality of scientific data and the proliferation of complex data analysis methods have raised increasing concerns about the reproducibility of scientific results in many domains of science. I will first present evidence that analytic flexibility in neuroimaging research is associated with surprising variability in scientific outcomes in the wild, even holding the raw data constant. These findings motivate the development of well-tested software tools for neuroimaging data processing and analysis. I will focus in particular on the role of software development tools such as containerization and continuous integration, which provide the potential to deliver automated and reproducible data analysis at scale. I will also discuss the challenging tradeoffs inherent in the use of complex software by scientists, and the need for increased transparency and validation of scientific software.
Bio: Russell A. Poldrack is the Albert Ray Lang Professor in the Department of Psychology and Professor (by courtesy) of Computer Science at Stanford University, and Director of the Stanford Center for Reproducible Neuroscience. His research uses neuroimaging to understand the brain systems underlying decision making and executive function. His lab is also engaged in the development of neuroinformatics tools to help improve the reproducibility and transparency of neuroscience, including the Openneuro.org and Neurovault.org data sharing projects and the Cognitive Atlas ontology.
Title: Exploring scalable coating of inorganic semiconductor inks: the surface structure-property-performance correlations
Abstract: Inorganic semiconductor inks – such as colloidal quantum dots (CQDs) and transition metal oxides (MOs) – can potentially enable low-cost flexible and transparent electronics via ‘roll-to-roll’ printing. Surfaces of these nanometer-sized CQDs and MO ultra-thin films give rise to surface phenomena with implications for film formation during coating, crystallinity, and charge transport. In this talk, I will describe my recent efforts aimed at understanding the crucial role of surface structure in these materials using photoemission spectroscopy and X-ray scattering. Time-resolved X-ray scattering helps reveal the various stages of the CQD ink-to-film transformation during blade-coating. Interesting insights include evidence of an early onset of CQD nucleation toward self-assembly and superlattice formation. I will close by discussing fresh results which suggest that nanoscale morphology significantly impacts charge transport in MO ultra-thin (≈5 nm) films. Control over crystallographic texture and film densification allows us to achieve high-performing (electron mobility ≈40 cm²V⁻¹s⁻¹), blade-coated MO thin-film transistors.
Bio: Dr. Ahmad R. Kirmani is a Guest Researcher in the Materials Science and Engineering Division, National Institute of Standards and Technology (NIST), in the group of Dr. Dean M. DeLongchamp and Dr. Lee J. Richter. He is exploring scalable coating of inorganic semiconductor inks using X-ray scattering. He received his PhD in Materials Science and Engineering from the King Abdullah University of Science and Technology (KAUST) under the supervision of Prof. Aram Amassian in 2017 for probing the surface structure-property relationship in colloidal quantum dot photovoltaics. He has published 30 articles in high-impact journals such as Advanced Materials, ACS Energy Letters, and the Nature family, and has been a volunteer science writer for the Materials Research Society (MRS) for the past few years, contributing 10 news articles, opinions, and perspectives.
Title: Electrets (Dielectrics with quasi-permanent Charges or Dipoles) – A long history and a bright future
Abstract: The history of electrets can be traced back to Thales of Miletus (approx. 624-546 B.C.E.) who reported that pieces of amber (“electron”) attract or repel each other. The science of fundamental electrical phenomena is closely intertwined with the development of electrets which came under such terms as “electrics”, “electrophores”, “charged/poled dielectrics”, etc. until about one century ago. Modern electret research started with Oliver Heaviside (1850-1925), who defined the concept of a “permanently electrized body” and proposed the name “electret” in 1885, and Mototarô Eguchi, who experimentally investigated carnauba wax electrets at the Higher Naval College in Tokyo around 1920. Today, we see a wide range of electret types, electret materials, and electret applications, which are being investigated and developed all over the world in a truly global endeavour. A classification of electrets will be followed by a few examples of useful electret effects and exciting device applications – mainly in the area of electro-mechanical and electro-acoustical transduction which started with the invention of the electret microphone by Sessler and West in the early 1960s. Furthermore, possible synergies between electret research and ultra-high-voltage DC electrical insulation will be mentioned.
Bio: Reimund Gerhard is a Professor of Physics and Astronomy at the University of Potsdam and the current President of the IEEE Dielectrics and Electrical Insulation Society (DEIS). He graduated from the Technical University of Darmstadt as Diplom-Physiker in 1978 and earned his PhD (Doktor-Ingenieur) in Communications Engineering from TU Darmstadt in 1984. From 1985 to 1994, Gerhard was a Research Scientist and Project Manager at the Heinrich-Hertz Institute for Communications Technology (now the Fraunhofer Institute) in Berlin, Germany. He was appointed as a Professor at the University of Potsdam in 1994. From 2004 to 2012, Gerhard served as the Chairman of the Joint Board for the Master-of-Science Program in Polymer Science of FU Berlin, HU Berlin, TU Berlin, and the University of Potsdam. He also served as the Dean of the Faculty of Science at the University of Potsdam from 2008 to 2012, and as a Senator of the University of Potsdam from 2014 to 2016.
Prof. Gerhard has received many awards and honors over his long career, including an Award (ITG-Preis) from the Information Technology Society (ITG) in the VDE, a silver medal from the Foundation Werner-von-Siemens-Ring, a First Prize Technology Transfer Award Brandenburg, Whitehead Memorial Lecturer of the IEEE CEIDP, and the Award of the EuroEAP Society “for his fundamental scientific contributions in the field of transducers based on dielectric polymers.” He is a Fellow of the American Physical Society (APS) and the Institute of Electrical and Electronics Engineers (IEEE). His research interests include polymer electrets with quasi-permanent space charge, ferro- or piezoelectrets (polymer films with electrically charged cavities), ferroelectric polymers with piezo- and pyroelectric properties, polymer composites with novel property combinations, physical mechanisms of dipole orientation and charge storage, electrically deformable dielectric elastomers (sometimes also called “electro-electrets”), as well as the physics of musical instruments.
Note: There will be a reception after the lecture.
Title: A Theory and Practice of the Lifelong Learnable Forest
Abstract: Since Vapnik’s and Valiant’s seminal papers on learnability, various lines of research have generalized their concepts of learning and learners. In this paper, we formally define what it means to be a lifelong learner. Given this definition, we propose the first lifelong learning algorithm with theoretical guarantees that it can perform forward transfer and reverse transfer without experiencing catastrophic forgetting. Our algorithm, dubbed Lifelong Learning Forests, outperforms the current state-of-the-art deep lifelong learning algorithm on the CIFAR 10-by-10 challenge problem, despite its simplicity and mathematical tractability. Our approach immediately lends itself to further algorithmic developments that promise to exceed the current performance limits of existing approaches.
Title: Automated Spore Analysis Using Bright-Field Imaging and Raman Microscopy
Abstract: In 2015, it was determined that the United States Department of Defense had been shipping samples of B. anthracis spores which had undergone gamma irradiation but were not fully inactivated. In the aftermath of this event, alternative and orthogonal methods were investigated to analyze spores and determine their viability. In this thesis, we demonstrate a novel analysis technique that combines bright-field microscopy images with Raman chemical microscopy.
We first developed an image segmentation routine based on the watershed method to locate individual spores within bright-field images. This routine effectively demarcated 97.4% of the Bacillus spores within the bright-field images with minimal over-segmentation. Size and shape measurements, including major and minor axis lengths and area, were then extracted for 4048 viable spores and showed very good agreement with previously published values. When similar measurements were taken on 3627 gamma-irradiated spores, a statistically significant difference from the non-irradiated spores was noted for the minor axis length, the ratio of major to minor axis, and the total area. Classification results show the ability to correctly classify 67% of viable spores, with an 18% misclassification rate, using the bright-field image by thresholding on a minimum classification length.
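The length-thresholding step can be illustrated as follows (the minor-axis values and the threshold are invented toy numbers, not measurements from the thesis):

```python
def evaluate_threshold(minor_axes_um, labels, threshold_um):
    """Classify a spore as viable when its minor-axis length is at least the
    threshold; return (fraction of viable spores correctly classified,
    fraction of irradiated spores misclassified as viable)."""
    kept_viable = sum(1 for m, lab in zip(minor_axes_um, labels)
                      if m >= threshold_um and lab == "viable")
    kept_irr = sum(1 for m, lab in zip(minor_axes_um, labels)
                   if m >= threshold_um and lab == "irradiated")
    return (kept_viable / labels.count("viable"),
            kept_irr / labels.count("irradiated"))

# Toy data: viable spores tend to have slightly larger minor axes.
axes = [0.62, 0.60, 0.58, 0.49, 0.55, 0.47, 0.51, 0.45]
labels = ["viable"] * 4 + ["irradiated"] * 4
print(evaluate_threshold(axes, labels, 0.55))  # → (0.75, 0.25)
```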
Raman chemical imaging microscopy (RCIM) was then used to measure populations of viable, gamma-irradiated, and autoclaved spores of B. anthracis Sterne, B. atrophaeus, B. megaterium, and B. thuringiensis kurstaki. Significant spectral differences were observed between viable and inactivated spores due to the disappearance of features associated with calcium dipicolinate after irradiation. Principal component analysis showed the ability to distinguish viable spores of B. anthracis Sterne and B. atrophaeus from each other and from the other two Bacillus species.
Finally, Raman microscopy was used to classify mixtures of viable and gamma-inactivated spores. A technique was developed that fuses the size and shape characteristics obtained from the bright-field image to preferentially target viable spores. A practical demonstration of the technique was performed on a field of view containing approximately 7,000 total spores, of which only 12 were viable, simulating a sample that was not fully irradiated. Ten of these viable spores were properly classified while interrogating just 25% of the total spores.