Abstract: Despite major advances in artificial intelligence through deep learning methods, computer algorithms remain vastly inferior to mammalian brains and lack a fundamental feature of animal intelligence: they generalize poorly outside the domain of the data they have been trained on. This results in brittleness (e.g. adversarial attacks) and poor performance in transfer learning, few-shot learning, causal reasoning and scene understanding, as well as difficulty with lifelong and unsupervised learning – all important hallmarks of human intelligence. We conjecture that this gap arises because current deep learning architectures are severely under-constrained, lacking key model biases found in the brain that are instantiated by the multitude of cell types, pervasive feedback, innately structured connectivity, specific non-linearities, and local learning rules. There is ample behavioral evidence that the brain performs approximate Bayesian inference under a generative model of the world (also known as inverse graphics or analysis by synthesis), so the brain must have evolved a strong and useful model bias that allows it to efficiently learn such a generative model. Our goal, therefore, is to learn the brain’s model bias in order to engineer less artificial, and more intelligent, neural networks. Experimental neuroscience now has technologies that enable us to analyze how brain circuits work in great detail and with impressive breadth. Using these tour-de-force experimental methods, we have been collecting an unprecedented amount of neural responses (e.g. more than 1.5 million neuron-hours) from the visual cortex and have developed computational models that we use to extract principles of the functional organization of the brain and learn the brain’s model biases.
Biography: Dr. Andreas Tolias’ research goal is to decipher the brain’s mechanisms of intelligence. He studies how networks of neurons are structurally and functionally organized to process information. Research in his lab combines computational and machine learning approaches with electrophysiological (whole-cell and multi-electrode extracellular), multi-photon imaging, molecular and behavioral methods. He received his Ph.D. from MIT in Computational and Systems Neuroscience. The current focus of research in his lab is to reverse engineer neocortical intelligence. To this end, his lab is deciphering the structure of microcircuits in visual cortex (defining cell types and connectivity), elucidating the computations they perform, and applying these principles to develop novel machine learning algorithms. He has trained numerous graduate students and postdoctoral fellows and enjoys mentoring immensely.
Abstract: Chemically synthesized quantum dots (QDs) can potentially enable new classes of highly flexible, spectrally tunable lasers processible from solutions [1,2]. Despite considerable progress over the past years, however, colloidal-QD lasing is still at the laboratory stage, and an important challenge – the realization of lasing with electrical injection – remains unresolved. A major complication, which hinders progress in this field, is fast nonradiative Auger recombination of gain-active multicarrier species such as trions (charged excitons) and biexcitons [3,4]. Recently, we explored several approaches for mitigating the problem of Auger decay by taking advantage of a new generation of core/multi-shell QDs with a radially graded composition that allow for considerable (nearly complete) suppression of Auger recombination by “softening” the electron and hole confinement potentials [5,6]. Using these specially engineered QDs, we have been able to realize optical gain with direct-current electrical pumping, which has been a long-standing goal in the field of colloidal nanostructures. Further, we applied these dots to practically demonstrate the viability of a “zero-threshold-optical-gain” concept using not neutral but negatively charged particles, wherein the pre-existing electrons block, either partially or completely, the ground-state absorption. Such charged QDs are optical-gain-ready without excitation and, in principle, can exhibit lasing at vanishingly small pump levels. All of these exciting recent developments demonstrate the considerable promise of colloidal nanomaterials for implementing solution-processible optically and electrically pumped laser devices operating across a wide range of wavelengths.
Title: Dose Optimization for Pediatric Renal SPECT Imaging
Abstract: Like any real-world engineering problem, the design of an imaging system always requires tradeoffs. For medical imaging modalities using ionizing radiation, a major tradeoff is between diagnostic image quality (IQ) and risk to the patient from absorbed radiation dose. In nuclear medicine, reducing the radiation dose to the patient will always result in increased Poisson noise in the image. At the same time, reducing the radiation dose (RD), below some level at least, will always result in reduced risk of adverse effects to the patient. The overall goal of this research is to propose a rigorous IQ-RD tradeoff analysis method for pediatric nuclear medicine renal imaging. However, the methodologies developed in this proposal can also be applied to other nuclear medicine imaging applications and to other important medical modalities involving ionizing radiation, such as computed tomography and planar X-rays.
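The dose-noise coupling above follows directly from counting statistics: detected counts are Poisson-distributed, so relative noise scales as the inverse square root of the mean counts. A minimal illustration (the function name and count levels are ours, chosen for illustration only):

```python
import numpy as np

def relative_poisson_noise(mean_counts: float) -> float:
    """Standard deviation divided by mean for a Poisson variable:
    sqrt(N) / N = 1 / sqrt(N)."""
    return 1.0 / np.sqrt(mean_counts)

# Halving the dose halves the mean counts and raises relative noise by sqrt(2).
full_dose = relative_poisson_noise(10_000)  # 10k counts -> 1% relative noise
half_dose = relative_poisson_noise(5_000)
print(f"full dose: {full_dose:.4f}, half dose: {half_dose:.4f}")
```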
Balancing the tradeoffs between RD and IQ is especially important for children, as they are often considered more vulnerable to radiation than adults. In nuclear medicine imaging, reducing the RD requires reducing the administered activity (AA). Lower AA results in increased Poisson noise in the images or requires longer acquisition durations to maintain the noise level. In pediatric nuclear medicine, it is desirable to use the lowest AA and the shortest acquisition duration that give sufficient IQ for clinical diagnosis. In current clinical practice, the AA for pediatric molecular imaging is often based on the North American consensus guidelines (U.S.) and the European pediatric dosage card (Europe). Both of these dosing guidelines involve scaling the adult AA by patient weight, subject to upper and lower constraints on the AA. However, these guidelines were developed based on expert consensus or rough estimates (estimated count rates) of IQ rather than rigorous, objective measures of performance on the diagnostic task.
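The weight-based guidelines described above share a simple functional form, sketched below. All numeric values here are placeholders chosen for illustration; they are not taken from the actual North American guidelines or the European dosage card:

```python
# Hypothetical sketch of a weight-based dosing rule: scale a reference adult
# administered activity (AA) by patient weight, then clamp to lower and upper
# activity limits. Every parameter value below is a placeholder.
def weight_scaled_aa(weight_kg: float,
                     adult_aa_mbq: float = 100.0,
                     adult_weight_kg: float = 70.0,
                     min_aa_mbq: float = 20.0,
                     max_aa_mbq: float = 100.0) -> float:
    scaled = adult_aa_mbq * weight_kg / adult_weight_kg
    return min(max(scaled, min_aa_mbq), max_aa_mbq)

print(weight_scaled_aa(35.0))   # mid-weight child: plain linear scaling
print(weight_scaled_aa(7.0))    # small infant: clamped to the lower limit
print(weight_scaled_aa(100.0))  # heavy patient: clamped to the adult maximum
```

The proposal's point is precisely that any rule of this shape, being driven by weight alone, cannot guarantee constant IQ across patients.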
In this research, we propose a general framework for optimizing RD with task-based assessment of IQ. Here, IQ is defined as an objective measure of how well an observer performs the diagnostic task that the images were acquired to answer. Specifically, we propose to establish relationships between AA, acquisition duration, measures of body habitus, and IQ for pediatric patients undergoing renal molecular imaging procedures. To quantify IQ in terms of renal defect detectability, we have developed a projection image database modeling imaging of 99mTc-DMSA, a renal function agent. The database uses a highly realistic population of pediatric phantoms with anatomical and body-morphological variations. Using the developed projection image database, we have explored patient factors that affect IQ and are currently determining the relationships between IQ and AA (IQ-AA curve) in terms of these identified factors. Our preliminary data have shown that the current weight-based guidelines, based on scaling the AA by patient weight, are not optimal in the sense that they do not give the same image quality for patients with the same weight. Furthermore, we have found that factors more local to the target organ may be more robust than weight for estimating the AA needed to provide a constant IQ across a population of patients. In the case of renal imaging, we have discovered that girth is more robust than weight in predicting the AA needed to provide a desired IQ. In addition, in order to simulate a full clinical multi-slice detection task (mirroring what a nuclear medicine physician would do), we propose to develop a CNN-based model observer. We will perform human observer studies to verify and calibrate the developed model observers used to generate the IQ-AA curves.
The results of this proposal will provide the data needed by standards bodies to develop improved dosing guidelines for pediatric molecular imaging that result in more consistent image quality and absorbed dose.
Title: Modeling Cellular Events: Chemotaxis and Aneuploidy
Abstract: Biology is the ‘study of complex natural things’, and biologists are mostly interested in the details of that complexity in a system. Yet a simpler mathematical model often proves very effective in deciphering the basic working principles of a system. Despite their usefulness, such models are often criticized for not explaining in sufficient detail the wide range of experimental observations across different pharmacological/genetic perturbations.
Title: Extending the potential of thin-film optoelectronics via optical and photonic engineering
Project summary: Thin-film optoelectronics using solution-processed materials have become a strong research focus in recent decades. These technologies have demonstrated convenience and versatility, owing to their solution-processed nature, in a wide range of applications such as solar power harvesting, photodetection, light-emitting devices and even lasing. Some variants of these materials have also enabled and now dominate the field of flexible electronics, especially display technologies, having achieved large-scale industrialization and commercialization years ago, specifically in applications where their conventional counterparts – bulk semiconductors – are limited. The development of optoelectronic applications using organic materials, colloidal quantum dots, perovskites, etc., has been made possible by research progress in materials and chemical engineering of the active material itself, as well as in optical and photonic engineering of the device architecture and related structures. The focus of this project is mainly on the latter set of approaches, applied to lead chalcogenide-based colloidal quantum dot thin films.
Colloidal quantum dots (CQDs) are a type of semiconductor material in the form of nanocrystals (1-10 nm in diameter) of the corresponding bulk material. The spatial confinement of electrons and holes leads to significantly reconstructed energy band structures. Usually this manifests as a series of discrete energy levels above or below the corresponding bulk conduction and valence band edges, instead of the semi-continuum of states observed in bulk semiconductors. The spacings between the discrete energy levels depend strongly on the size of the quantum dots, which in turn determines the properties of the optical transitions responsible for absorption (Figure 1b), modulation of the refractive index, etc. In this sense, CQDs are considered “tunable” by controlling the ensemble so that it predominantly consists of CQDs of one desired shape and size.
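The size dependence of the gap can be illustrated with a first-order particle-in-a-sphere estimate (a simplified form of the Brus model that ignores the Coulomb correction). The PbS-like parameters below are approximate values used only to show the trend, not authoritative material constants:

```python
import numpy as np

HBAR = 1.054_571_8e-34  # reduced Planck constant, J*s
M0 = 9.109_383_7e-31    # electron rest mass, kg
EV = 1.602_176_6e-19    # J per eV

def confined_gap_ev(radius_nm: float, bulk_gap_ev: float,
                    me_eff: float, mh_eff: float) -> float:
    """Effective-mass particle-in-a-sphere estimate of a QD band gap:
    bulk gap plus a 1/R^2 confinement term for electron and hole."""
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * np.pi**2 / (2 * r**2)) * (
        1.0 / (me_eff * M0) + 1.0 / (mh_eff * M0))
    return bulk_gap_ev + confinement / EV

# Approximate PbS-like parameters (bulk gap ~0.41 eV, light carriers):
# smaller dots give larger effective gaps.
for r in (2.0, 3.0, 5.0):
    print(f"R = {r} nm -> gap ~ {confined_gap_ev(r, 0.41, 0.09, 0.09):.2f} eV")
```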
CQDs are solution-processed materials. The processing of CQDs starts from synthesis using solutions containing metal-organic precursors. The controlled growth of nanocrystals results in a dispersion of pristine CQDs in certain solvents. After that, the CQDs are purified and chemically treated to modify their surface ligands, through a series of precipitation, redispersion, phase transfer and concentration steps. The deposition of films of CQDs onto desired substrates is achieved by solution-compatible techniques such as spin-casting, blade coating and screen printing. A functional CQD film is usually 10-500 nm thick depending on its application and is usually preceded and/or succeeded by the deposition of other electronically functional device layers.
Lead sulfide (PbS) CQDs are widely used for applications involving solar photon absorption and resulting energy conversion. In the example of a CQD solar cell, PbS CQDs with effective band gaps of 1.3 eV are chosen as the active material. The full device utilizes a p-n or p-i-n structure, and a typical device architecture consists of a transparent conductive oxide (TCO) electrode layer, an electron transport layer (ETL), the absorbing PbS CQD film, a hole transport layer (HTL) and metal top electrode. Similar structures are also used in photodetectors and light emitting diodes, with critical layers substituted.
For the first section of the project, we studied and exploited the color-reproduction capabilities of reflective interference from CQD solar cells, while maintaining high photon absorption and current generation. The second section is aimed at exploring the possibility of simultaneously controlling the spectral reflection, transmission and absorption of thin-film optoelectronics using photonic crystal structures embedded in CQD films and other highly absorptive materials. In the third section, we devised and built a 2D multi-modal scanning characterization system for spatial mapping of photoluminescence (PL), transient photocurrent and transient photovoltage over a realistically large device area with micron resolution. The last section of the project focuses on economical and scalable solar concentration solutions for CQD and other thin-film solar cells.
We mostly limit our discussion and demonstration to PbS CQD solar cells within the scope of this proposal; however, it is worth pointing out that the techniques and principles described below could be applied to most optoelectronic materials that share the solution-compatible deposition and processing procedures.
Title: New Diagnostic and Therapeutic Tools for Intravascular Magnetic Resonance Imaging (IVMRI)
Abstract: Intravascular (IV) magnetic resonance imaging (IVMRI) is a developing technology that uses minimally invasive MRI coils to guide diagnosis and treatment. The combination of the signal-to-noise ratio (SNR) enhancement from microscopic local MRI coils and the multi-contrast mechanisms provided by MRI has expanded the possibilities of high-resolution, image-guided diagnosis and treatment of atherosclerosis and of nearby or surrounding cancers. Recent years have seen the development of many advanced MRI techniques, including MRI thermometry and real-time MRI, yet the development of procedures that apply these advances to intravascular MRI remains challenging.
Among interventional diagnostic techniques, MRI endoscopy is an IVMRI technique that transfers MRI from the laboratory frame-of-reference to the IV coil’s frame-of-reference. This enables high-resolution MRI of blood vessels with endoscopic-style functionality. Prior MRI endoscopy work was limited to ~2 frames per second (fps), which is not real-time and is potentially limiting in clinical applications. Improving the speed of MRI endoscopy further without excessive undersampling artifacts could enable the rapid deployment and advancement of an IVMRI endoscope entirely under MRI guidance to evaluate local, advanced, intra- and extra-vascular disease at high resolution using MRI’s unique multi-contrast and multi-functional assessment capabilities. Furthermore, with its unique capability for high-resolution thermometry, IVMRI is suitable for guiding and monitoring ablation therapy delivery in diseases such as vessel-involving cancers. Prior work using an IVMRI loopless antenna for both MRI and radiofrequency ablation (RFA) was limited in precision and ablated only the tissue in direct contact with the probe. Thus, one goal is to extend IVMRI methods using state-of-the-art real-time MRI acceleration methods to provide MRI endoscopy at a speed comparable to that of existing catheterization and optical endoscopy procedures.
A second goal is to provide a minimally invasive, IV-accessed ablation technology that could provide precise localization and perivascular ablation to render resectable an otherwise inaccessible or non-resectable cancer with vascular involvement.
To these ends, a Max Planck Institute (MPI) real-time MRI system employing graphics processing units (GPUs) is first adapted to facilitate MRI endoscopy at 10 fps with real-time display and is demonstrated in vitro and in vivo. To further improve image quality, we propose to use a convolutional neural network (CNN) trained on artifact patterns generated from motionless endoscopy to ameliorate artifacts during real-time imaging. A new method based on generative models and manifold learning is then proposed to optimize image contrast in response to the varying endoscopic surroundings.
To address the second goal, an intravascular ultrasound ablation transducer is integrated with IVMRI to provide a tool that can also deliver therapy. By integrating an IV high-intensity focused ultrasound (HIFU) ablation component, the precision and depth of ablation are extended and contact injuries can be avoided. Procedures are developed to evaluate accuracy using ex vivo samples, and feasibility is demonstrated in animals in vivo.
Title: Soroban: A Mixed-Signal Neuromorphic Processing in Memory Architecture
Abstract: To meet the scientific demand for future data-intensive processing, ranging from everyday tasks such as searching via images to critical health care applications such as disease diagnosis in personalized medicine, we urgently need a new cloud computing paradigm and energy-efficient, i.e. “green”, technologies. We believe that a brain-inspired approach that employs unconventional processing offers an alternative paradigm for BIGDATA computing.
My research aims to go beyond state-of-the-art processing-in-memory architectures. In the realm of unconventional processors, charge-based computing has been an attractive solution since its introduction with charge-coupled device (CCD) imagers in the seventies. Such architectures have been modified into compute-in-memory arrays that have been used for signal processing, neural networks and pattern recognition using the same underlying physics. Other work has utilized the same concept in charge-injection devices (CIDs), which have also been used for similar pattern recognition tasks. However, these computing elements have not been integrated with the support infrastructure for high-speed input/output commensurate with BIGDATA streaming applications. In this work, the CID concept is taken to a smaller 55 nm CMOS node and has shown promising preliminary results as a multilevel-input computing element for hardware inference applications. A mixed-signal charge-based vector-matrix multiplier (VMM) is explored, which computes directly on a common readout line of a dynamic random-access memory (DRAM). Low power consumption and high area density are achieved by storing local parameters in a DRAM computing crossbar.
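The abstract does not give circuit-level details, but the behavior of such a charge-domain compute-in-memory column can be sketched abstractly: each cell contributes charge proportional to the product of its stored weight and a multilevel input, and the shared readout line accumulates the sum, yielding one element of a vector-matrix product. A minimal behavioral sketch, with the function name and the additive-noise model assumed by us rather than taken from the Soroban design:

```python
import numpy as np

def charge_domain_vmm(inputs, weights, noise_sigma=0.0, rng=None):
    """Behavioral model of a charge-based crossbar: `inputs` is an (n,)
    multilevel input vector, `weights` an (n, m) array of stored charges.
    Returns the m readout-line values: ideal dot products plus optional
    Gaussian "analog" noise standing in for circuit non-idealities."""
    rng = rng or np.random.default_rng(0)
    ideal = inputs @ weights            # each column sums weight * input
    return ideal + rng.normal(0.0, noise_sigma, size=ideal.shape)

x = np.array([1.0, 2.0, 3.0])           # multilevel input levels
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])              # charges stored in the crossbar
print(charge_domain_vmm(x, W))          # ideal (noiseless) result: [4. 5.]
```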
Title: Advanced Image Reconstruction and Analysis for Fluorescence Molecular Tomography (FMT) and Positron Emission Tomography (PET)
Abstract: Molecular imaging provides efficient ways to monitor different biological processes noninvasively, and high-quality imaging is necessary in order to fully explore the value of molecular imaging. To this end, advanced image generation algorithms are able to significantly improve image quality and quantitative performance. In this research proposal, we focus on two imaging modalities that fall in the category of molecular imaging: fluorescence molecular tomography (FMT) and positron emission tomography (PET). Specifically, we studied the following two problems: i) the reconstruction problem in FMT and ii) partial volume correction in brain PET imaging.
Reconstruction in FMT: FMT is an optical imaging modality that uses diffuse light for imaging. The reconstruction problem for FMT is highly ill-posed due to photon scattering in biological tissue, and thus regularization techniques tend to be used to alleviate the ill-posed nature of the problem. Conventional reconstruction algorithms cause oversmoothing, which reduces the resolution of the reconstructed images. Moreover, a Gaussian noise model is commonly chosen even though measurements from most FMT systems, based on charge-coupled devices (CCDs) or photomultiplier tubes (PMTs), are contaminated by Poisson noise. In our work, we propose a reconstruction algorithm for FMT using sparsity-initialized maximum-likelihood expectation maximization (MLEM). The algorithm preserves edges by exploiting sparsity while taking Poisson noise into consideration. Through simulation experiments, we compare the proposed method with a pure sparse reconstruction method and with MLEM using uniform initialization, and we show that the proposed method holds several advantages over the other two.
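For readers unfamiliar with MLEM, the core multiplicative update for a linear Poisson model y ~ Poisson(Ax) can be sketched as follows. The sparsity-based initialization of the proposed algorithm is replaced here by a simple uniform start, and the toy system matrix is our own:

```python
import numpy as np

def mlem(A, y, x0, n_iter=500):
    """Maximum-likelihood expectation maximization for y ~ Poisson(A x):
    x <- x * A^T(y / Ax) / A^T 1. The update is multiplicative, so a
    nonnegative start stays nonnegative."""
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)   # measured / predicted counts
        x *= (A.T @ ratio) / sens              # multiplicative update
    return x

# Tiny noiseless demo: 3 measurements, 2 unknowns.
A = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.7, 0.3]])
x_true = np.array([2.0, 1.0])
y = A @ x_true
x_est = mlem(A, y, np.ones(2))                 # uniform initialization
```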
Partial volume correction of brain PET imaging: The so-called partial volume effect (PVE) is caused by the limited resolution of PET systems and reduces the quantitative accuracy of PET imaging. Based on the stage of implementation, partial volume correction (PVC) algorithms can be categorized into reconstruction-based and post-reconstruction methods. Post-reconstruction PVC methods can be implemented directly on reconstructed PET images and do not require access to raw data or to the reconstruction algorithms of PET scanners. Many of these methods use anatomical information from MRI to further improve their performance. However, conventional MR-guided post-reconstruction PVC methods require segmentation of MR images and assume a uniform activity distribution within each segmented region. In this proposal, we develop a post-reconstruction PVC method based on deconvolution via parallel level-set regularization. The method is implemented with non-smooth optimization based on the split Bregman method. The proposed method incorporates MRI information without requiring segmentation or making any assumption about the activity distribution. Simulation experiments are conducted to compare the proposed method with several other segmentation-free methods, as well as with a conventional segmentation-based PVC method. The results show that the proposed method outperforms the other segmentation-free methods and shows stronger resistance to MR information mismatch than the conventional segmentation-based PVC method.
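As a simplified, regularization-free stand-in for deconvolution-based PVC, the classic Richardson-Lucy iteration below sharpens a PSF-blurred 1D signal. The proposal's actual method additionally incorporates MRI-guided parallel level-set regularization via split Bregman, which is omitted here; all names and parameters are ours:

```python
import numpy as np

def gaussian_psf(size=9, sigma=1.5):
    """Normalized 1D Gaussian point-spread function."""
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def richardson_lucy(blurred, psf, n_iter=50):
    """Richardson-Lucy deconvolution: multiplicative update that keeps the
    estimate nonnegative (no regularization in this toy version)."""
    x = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        est = np.convolve(x, psf, mode="same")
        x *= np.convolve(blurred / np.maximum(est, 1e-12), psf_flip, mode="same")
    return x

signal = np.zeros(64)
signal[28:36] = 1.0                               # a small "hot" region
psf = gaussian_psf()
blurred = np.convolve(signal, psf, mode="same")   # simulated partial volume blur
recovered = richardson_lucy(blurred, psf)         # partially undoes the blur
```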
Note: This is a virtual seminar that will be broadcast in Olin Hall 305. Refreshments will be available outside Olin Hall 305 at 2:30 PM.
Title: Computational infrastructure to improve scientific reproducibility
Abstract: The massive increase in the dimensionality of scientific data and the proliferation of complex data analysis methods have raised growing concerns about the reproducibility of scientific results in many domains of science. I will first present evidence that analytic flexibility in neuroimaging research is associated with surprising variability in scientific outcomes in the wild, even holding the raw data constant. These findings motivate the development of well-tested software tools for neuroimaging data processing and analysis. I will focus in particular on the role of software development tools such as containerization and continuous integration, which provide the potential to deliver automated and reproducible data analysis at scale. I will also discuss the challenging tradeoffs inherent in the usage of complex software by scientists, and the need for increased transparency and validation of scientific software.
Bio: Russell A. Poldrack is the Albert Ray Lang Professor in the Department of Psychology and Professor (by courtesy) of Computer Science at Stanford University, and Director of the Stanford Center for Reproducible Neuroscience. His research uses neuroimaging to understand the brain systems underlying decision making and executive function. His lab is also engaged in the development of neuroinformatics tools to help improve the reproducibility and transparency of neuroscience, including the Openneuro.org and Neurovault.org data sharing projects and the Cognitive Atlas ontology.