Calendar

Thesis Proposal: Sanjukta Nandi Bose
Apr 15 @ 3:00 pm

Note: This is a virtual presentation.

Title: Early prediction of adverse clinical events and optimal intervention in ICUs

Abstract: Personalized healthcare is a rapidly evolving research area with tremendous potential for optimizing patient care strategies and improving patient outcomes. Traditionally, clinical decision making has relied on assessment and intervention based on the collective experience of physicians. Using big-data analytics techniques, we can now harness data-driven models to enable early prediction of patients at risk of adverse clinical events. These predictive models can provide timely analytical information to physicians, facilitating early therapeutic intervention and efficient management of patients in intensive care units (ICUs).

In addition to early prediction, it is equally important to optimize intervention strategies for critically ill patients. One such urgent need is to optimally oxygenate COVID-19 patients diagnosed with acute respiratory distress syndrome (ARDS). Patients with moderate to severe ARDS generally require mechanical ventilation to improve oxygen saturation and to reduce the risk of organ failure and death. The most common ventilator settings across all modes of mechanical ventilation are positive end-expiratory pressure (PEEP) and fraction of inspired oxygen (FiO2). Increasing either of these settings is expected to increase oxygen saturation. However, prolonged ventilation with high PEEP and FiO2 significantly increases the risk of ventilator-associated lung injury. Therefore, an optimal strategy is required to improve patient outcomes.

This thesis presents two overarching aims: (1) early prediction of adverse events and (2) optimal intervention for mechanically ventilated patients. In contrast to the fixed lead-time prediction models of prior work, we propose a new framework that hypothesizes the presence of a time-varying pre-event physiologic state that differentiates target patients from the control group. We also present a unique approach to patient risk stratification using an unsupervised clustering technique that identifies a high-risk group among all predicted positive cases, achieving a positive predictive value of more than 93% when applied to multiple organ dysfunction prediction.
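For reference, the positive predictive value (PPV) quoted above is the fraction of predicted positive cases that are true positives:

\[
\mathrm{PPV} = \frac{TP}{TP + FP},
\]

so a PPV above 93% means that fewer than 7% of the cases flagged as high risk are false alarms.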

In the second aim, we propose a novel application of data-driven linear parameter-varying (LPV) systems to capture the time-varying dynamics of oxygen saturation in response to ventilator settings as a patient's physiological state changes, and we compare them with linear time-invariant models. Most prior studies on closed-loop ventilator control have used stepwise rule-based procedures, fuzzy logic, or a combination of rule-based methods and proportional-integral-derivative (PID) controllers for closed-loop control of FiO2. Other studies have developed control strategies based on ventilator-measured variables and on various mathematical lung models. In contrast, we design optimal closed-loop ventilator strategies that are model based. A simulation of optimal ventilation settings for maintaining desired oxygen saturation using feedback control of LPV systems is presented.
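To make the LPV idea concrete, here is a minimal toy sketch (all dynamics, gains, and numbers are hypothetical illustrations, not the authors' model): a scalar oxygen-saturation state whose dynamics depend on a scheduling parameter standing in for the patient's physiological state, driven by proportional-integral feedback on FiO2.

```python
import numpy as np

# Toy LPV model: SpO2 dynamics s[k+1] = a(p) * s[k] + b(p) * u[k], where the
# scheduling parameter p represents a drifting physiological state and the
# input u is the FiO2 setting. All coefficients are hypothetical.
def a(p):
    return 0.90 + 0.05 * p      # state-transition gain varies with patient state

def b(p):
    return 0.10 + 0.05 * p      # input gain varies with patient state

target = 0.94                   # desired oxygen saturation
s, integ = 0.85, 0.0            # initial SpO2 and integral term
for k in range(200):
    p = 0.5 * (1 + np.sin(0.1 * k))                         # drifting patient state
    err = target - s
    integ += err
    u = np.clip(0.21 + 2.0 * err + 0.1 * integ, 0.21, 1.0)  # PI feedback on FiO2
    s = a(p) * s + b(p) * u                                  # LPV state update
print(f"final SpO2 ~ {s:.3f}")
```

In a linear time-invariant model, a(p) and b(p) would be constants; the LPV formulation lets the same feedback law account for a changing physiological state.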

Committee Members

  • Raimond L. Winslow, Department of Biomedical Engineering
  • Sridevi V. Sarma, Department of Biomedical Engineering
  • Enrique Mallada, Department of Electrical Engineering
  • Melania M. Bembea, Department of Anesthesiology and Critical Care Medicine

Thesis Proposal: Michelle Graham
Apr 29 @ 3:00 pm

Note: This is a virtual presentation.

Title: Photoacoustic imaging to detect major blood vessels and nerves during neurosurgery and head and neck surgery

Abstract: Real-time intraoperative guidance during minimally invasive neurosurgical and head and neck procedures is often limited to endoscopy, CT-guided image navigation, and electromyography, which are generally insufficient to locate major blood vessels and nerves hidden by tissue. Accidental damage to these hidden structures has incidence rates of 6.8% in surgeries to remove pituitary tumors (i.e., endonasal transsphenoidal surgery) and 3-4% in surgeries to remove parotid tumors (i.e., parotidectomy), often with severe consequences such as patient blindness, paralysis, and death. Photoacoustic imaging is a promising emerging technique for real-time visualization of subsurface blood vessels and nerves during these surgeries.

Limited optical penetration through bone and the presence of acoustic clutter, reverberations, aberration, and attenuation can degrade photoacoustic image quality and compromise the usefulness of this promising intraoperative guidance technique. To mitigate image degradation, photoacoustic imaging system parameters may be adjusted and optimized for the specific imaging environment. In particular, parameter adjustment can be categorized into the optimization of photoacoustic signal generation and the optimization of photoacoustic image formation (i.e., beamforming) and image display methods.

In this talk, I will describe my contributions leveraging amplitude- and coherence-based beamforming techniques to improve photoacoustic image display for the detection of blood vessels during endonasal transsphenoidal surgery. I will then present my contributions to the derivation of a novel photoacoustic spatial coherence theory, which provides a fundamental understanding critical to the optimization of coherence-based photoacoustic images. Finally, I will present a plan to translate this work from the visualization of blood vessels during neurosurgery to the visualization of nerves during head and neck surgery. Successful completion of this work will lay the foundation necessary to introduce novel intraoperative photoacoustic image guidance techniques that eliminate accidental injury to major blood vessels and nerves during minimally invasive surgeries.
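For readers unfamiliar with the two beamforming families mentioned above, here is a minimal, generic sketch (not the speaker's implementation) contrasting amplitude-based delay-and-sum (DAS) beamforming with a coherence-based metric in the spirit of short-lag spatial coherence (SLSC), applied to channel data that has already been time-delayed (focused):

```python
import numpy as np

def das(delayed):
    """Amplitude-based: coherent sum across the aperture.
    delayed has shape (n_channels, n_samples)."""
    return delayed.sum(axis=0)

def short_lag_coherence(delayed, max_lag=5):
    """Coherence-based (SLSC-style): average normalized correlation
    between channel pairs separated by small lags, over one axial kernel."""
    n_ch = delayed.shape[0]
    vals = []
    for lag in range(1, max_lag + 1):
        for i in range(n_ch - lag):
            a, b = delayed[i], delayed[i + lag]
            denom = np.sqrt(np.dot(a, a) * np.dot(b, b)) + 1e-12
            vals.append(np.dot(a, b) / denom)
    return float(np.mean(vals))
```

The intuition is that a coherent target such as a blood vessel yields high short-lag coherence even when its summed amplitude is weak, whereas diffuse clutter and reverberation yield low coherence, which is why coherence-based display can help in cluttered surgical fields.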

Committee Members

  • Muyinatu Bell, Department of Electrical and Computer Engineering
  • Xingde Li, Department of Biomedical Engineering
  • Jin Kang, Department of Electrical and Computer Engineering

Dissertation Defense: Jordi Abante
May 10 @ 3:00 pm

Note: This is a virtual presentation.

Title: Statistical Signal Processing Methods for Epigenetic Landscape Analysis

Abstract: Since the DNA structure was discovered in 1953, a great deal of effort has been put into studying this molecule in detail. We now know DNA comprises an organism’s genetic makeup and constitutes a blueprint for life. The study of DNA has dramatically increased our knowledge about cell function and evolution and has led to remarkable discoveries in biology and medicine.

Just as DNA is replicated during cell division, several chemical marks are also passed on to progeny during this process. Epigenetics studies these marks, which represent a fascinating research area given their crucial role. Among all known epigenetic marks, 5mC DNA methylation is probably one of the most important given its well-established association with various biological processes, such as development and aging, and diseases such as cancer. The work in this dissertation focuses primarily on this epigenetic mark, although it has the potential to be applied to other heritable marks.

In the 1940s, Waddington introduced the term epigenetic landscape to conceptually describe cell pluripotency and differentiation. The concept remained abstract until Jenkinson et al. (2017, 2018) estimated actual epigenetic landscapes from whole-genome bisulfite sequencing (WGBS) data, work that led to striking results with biological implications for development and disease. Here, we introduce an array of novel computational methods that draw from that work. First, we present CPEL, a method that uses a variant of the original landscape proposed by Jenkinson et al. and, together with a new hypothesis testing framework, allows for the detection of DNA methylation imbalances between homologous chromosomes. Then, we present CpelTdm, a method that builds upon CPEL to perform differential methylation analysis between groups of samples using targeted bisulfite sequencing data. Finally, we extend the original probabilistic model proposed by Jenkinson et al. to estimate methylation landscapes and perform differential analysis from nanopore data.
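For orientation, the landscape models in this line of work represent the joint methylation state of N CpG sites in a region, x in {0,1}^N, with a parametric Boltzmann-like (Ising) distribution; schematically (the published parameterizations differ in detail),

\[
P(\mathbf{x}) \;=\; \frac{1}{Z}\exp\!\left(\sum_{n=1}^{N}\alpha_n x_n \;+\; \sum_{n=1}^{N-1}\beta_n x_n x_{n+1}\right),
\]

where the \(\alpha_n\) capture per-site methylation propensity, the \(\beta_n\) capture cooperativity between neighboring sites, and Z is the normalizing constant. Hypothesis tests for imbalance or differential methylation then compare distributions estimated under such models.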

Overall, this work addresses immediate needs in the study of DNA methylation. The methods presented here can lead to a better characterization of this critical epigenetic mark and enable biological discoveries with implications for diagnosing and treating complex human diseases.

Committee Members

  • John Goutsias, Department of Electrical and Computer Engineering
  • Archana Venkataraman, Department of Electrical and Computer Engineering
  • Sanjeev Khudanpur, Department of Electrical and Computer Engineering

Dissertation Defense: Xing Di
May 24 @ 12:00 pm

Note: This is a virtual presentation.

Title: Deep Learning Based Face Image Synthesis

Abstract: Face image synthesis is an important problem in the biometrics and computer vision communities due to its applications in law enforcement and entertainment. In this thesis, we develop novel deep neural network models and associated loss functions for two face image synthesis problems, namely thermal to visible face synthesis and visual attribute to face synthesis.

In particular, for thermal to visible face synthesis, we propose a model which makes use of facial attributes to obtain better synthesis. We use attributes extracted from visible images to synthesize attribute-preserved visible images from thermal imagery. A pre-trained attribute predictor network is used to extract attributes from the visible image. Then, a novel multi-scale generator is proposed to synthesize the visible image from the thermal image guided by the extracted attributes. Finally, a pre-trained VGG-Face network is leveraged to extract features from the synthesized image and the input visible image for verification.

In addition, we propose another thermal to visible face synthesis method based on a self-attention generative adversarial network (SAGAN) which allows efficient attention-guided image synthesis. Rather than focusing only on synthesizing visible faces from thermal faces, we also propose to synthesize thermal faces from visible faces. Our intuition is based on the fact that thermal images also contain some discriminative information about the person for verification. Deep features from a pre-trained Convolutional Neural Network (CNN) are extracted from the original as well as the synthesized images. These features are then fused to generate a template which is then used for cross-modal face verification.

Regarding attribute-to-face image synthesis, we propose the Att2SK2Face model for face image synthesis from visual attributes via sketch. In this approach, we first synthesize a facial sketch corresponding to the visual attributes and then generate the face image based on the synthesized sketch. The proposed framework combines two different Generative Adversarial Networks (GANs): (1) a sketch generator network, which synthesizes a realistic sketch from the input attributes, and (2) a face generator network, which synthesizes facial images from the synthesized sketch images with the help of facial attributes.
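As a toy illustration of the two-stage idea (layer sizes, attribute count, and image sizes are hypothetical placeholders, not the Att2SK2Face architecture), the pipeline chains a sketch generator and a face generator:

```python
import torch
import torch.nn as nn

class SketchGenerator(nn.Module):
    """Stage 1: map an attribute vector to a grayscale sketch."""
    def __init__(self, n_attrs=40, img=64):
        super().__init__()
        self.img = img
        self.net = nn.Sequential(
            nn.Linear(n_attrs, 256), nn.ReLU(),
            nn.Linear(256, img * img), nn.Tanh())

    def forward(self, attrs):
        return self.net(attrs).view(-1, 1, self.img, self.img)

class FaceGenerator(nn.Module):
    """Stage 2: map the sketch, conditioned on attributes, to an RGB face."""
    def __init__(self, n_attrs=40):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + n_attrs, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())

    def forward(self, sketch, attrs):
        # Broadcast the attribute vector over the spatial grid and concatenate.
        a = attrs[:, :, None, None].expand(-1, -1, *sketch.shape[2:])
        return self.net(torch.cat([sketch, a], dim=1))

attrs = torch.rand(8, 40)                       # hypothetical attribute batch
face = FaceGenerator()(SketchGenerator()(attrs), attrs)
print(face.shape)                               # torch.Size([8, 3, 64, 64])
```

In the actual GAN setting, each generator would be trained against its own discriminator; only the generator path is sketched here.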

Finally, we propose another synthesis model, called Att2MFace, which can simultaneously synthesize multimodal faces from visual attributes without requiring paired data in different domains for training the network. We introduce a novel generator with multimodal stretch-out modules to simultaneously synthesize multimodal face images. Additionally, multimodal stretch-in modules are introduced in the discriminator which discriminates between real and fake images.

Committee Members

  • Vishal Patel, Department of Electrical and Computer Engineering
  • Rama Chellappa, Department of Electrical and Computer Engineering
  • Carlos Castillo, Department of Electrical and Computer Engineering

Dissertation Defense: Arun Nair
May 25 @ 12:30 pm

Note: This is a virtual presentation.

Title: Machine Learning for Beamforming in Ultrasound, Radar, and Audio

Abstract: Multi-sensor signal processing plays a crucial role in many everyday technologies, from correctly understanding speech on smart home devices to ensuring aircraft fly safely. A specific type of multi-sensor signal processing called beamforming forms a central part of this thesis. Beamforming combines the information from several spatially distributed sensors to directionally filter a scene, boosting the signal from a certain direction while suppressing others. The idea of beamforming is key to the domains of ultrasound, radar, and audio.
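In its simplest textbook form (not specific to this thesis), a far-field delay-and-sum beamformer with M sensors steered toward angle θ computes

\[
y(t) \;=\; \sum_{m=1}^{M} w_m\, x_m\big(t - \tau_m(\theta)\big),
\qquad
\tau_m(\theta) = \frac{d_m \sin\theta}{c},
\]

where \(x_m\) is the m-th sensor signal, \(d_m\) its position along the array, c the propagation speed, and \(w_m\) an aperture weight: signals arriving from direction θ add coherently while signals from other directions partially cancel.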

Machine learning, succinctly defined by Tom Mitchell as “the study of algorithms that improve automatically through experience,” is the other central part of this thesis. Machine learning, especially its sub-field of deep learning, has enabled breakneck progress on several problems that were previously thought intractable. Today, machine learning powers many of the cutting-edge systems we see on the internet for image classification, speech recognition, language translation, and more.

In this dissertation, we look at beamforming pipelines in ultrasound, radar, and audio through a machine learning lens and endeavor to improve different parts of these pipelines using ideas from machine learning. Starting in the ultrasound domain, we use deep learning as an alternative to beamforming and improve the information extraction pipeline by simultaneously generating both a segmentation map and a high-quality B-mode image directly from raw received ultrasound data.

Next, we move to the radar domain and study how deep learning can be used to improve signal quality in ultra-wideband synthetic aperture radar by suppressing radio frequency interference, random spectral gaps, and contiguous block spectral gaps. Because the networks are trained on and applied to raw single-aperture data prior to beamforming, the approach works with myriad sensor geometries and different beamforming equations, a crucial requirement in synthetic aperture radar.

Finally, we move to the audio domain and derive a machine learning inspired beamformer to tackle the problem of ensuring the audio captured by a camera matches its visual content, a problem we term audiovisual zoom. Unlike prior work which is capable of only enhancing a few individual directions, our method enhances audio from a contiguous field of view.

Committee Members

  • Trac Tran, Department of Electrical and Computer Engineering
  • Muyinatu Bell, Department of Electrical and Computer Engineering
  • Vishal Patel, Department of Electrical and Computer Engineering

Dissertation Defense: Takeshi Uejima
May 28 @ 9:00 am

Note: This is a virtual presentation.

Title: A Unified Visual Saliency Model for Neuromorphic Implementation

Abstract: Although computer capabilities have expanded tremendously, a significant wall remains between the computer and the human brain. The brain can process massive amounts of information obtained from a complex environment and control the entire body in real time with low energy consumption. This thesis tackles this mystery by modeling and emulating how the brain processes information, drawing on the available knowledge of biological and artificial intelligence from neuroscience, cognitive science, computer science, and computer engineering.

Saliency modeling relates to visual sense and biological intelligence. The retina captures a large amount of data about the environment and sends it to the brain. However, as the visual cortex cannot process all this information in detail at once, the early stages of visual processing discard unimportant information. Because only the fovea provides high-resolution imaging, we move our eyes toward the important parts of a scene. Therefore, eye movements can be thought of as an observable output of the early visual process in the brain. Saliency modeling aims to understand this mechanism and predict eye fixations.

Researchers have built biologically plausible saliency models that emulate the biological process from the retina through the visual cortex. However, although many saliency models have been proposed, most are not bio-realistic. This thesis models the biological mechanisms for the perception of texture, depth, and motion. While texture plays a vital role in the perception process, defining texture mathematically is not easy. Thus, it is necessary to build an architecture for texture processing based on the biological perception mechanism. Binocular stereopsis is another intriguing function of the brain. While many computational algorithms for stereovision have been evaluated, pursuing biological plausibility means implementing a neuromorphic method within a saliency model. Motion is another critical cue that helps animals survive. In this thesis, the motion feature is implemented in a bio-realistic way based on neurophysiological observation.

Moreover, the thesis will integrate these processes and propose a unified saliency model that can handle 3D dynamic scenes in a similar way to how the brain deals with the real environment. Thus, this investigation will use saliency modeling to examine intriguing properties of human visual processing and discuss how the brain achieves this remarkable capability.

Committee Members

  • Ralph Etienne-Cummings, Department of Electrical and Computer Engineering
  • Andreas Andreou, Department of Electrical and Computer Engineering
  • Philippe Pouliquen, Department of Electrical and Computer Engineering
  • Ernst Niebur, Department of Neuroscience

Dissertation Defense: Yan Jiang
Jun 29 @ 1:00 pm

Note: This is a virtual presentation.

Title: Leveraging Inverter-Interfaced Energy Storage for Frequency Control in Low-Inertia Power Systems

Abstract: The shift from conventional synchronous generation to renewable inverter-interfaced sources has led to a noticeable degradation of frequency dynamics in power systems, mainly due to a loss of inertia. Fortunately, recent technological advances and cost reductions in energy storage facilitate the potential for higher renewable energy penetration via inverter-interfaced energy storage. With proper control laws imposed on the inverters, the rapid power-frequency response of energy storage can help mitigate this degradation. A straightforward choice is to emulate the droop response and/or inertial response of synchronous generators through droop control (DC) or virtual inertia (VI), yet these do not necessarily fully exploit the benefits of inverter-interfaced energy storage. This thesis therefore challenges the naive choice of mimicking synchronous generator characteristics by advocating a principled control design perspective.

To achieve this goal, we build an analysis framework for quantifying the performance of power systems using signal and system norms, within which we perform a systematic study of the effect of different control laws on both frequency response metrics and storage economic metrics. More precisely, under a mild yet insightful proportionality assumption, we perform a modal decomposition that yields closed-form expressions or conditions for synchronous frequency, Nadir, rate of change of frequency (RoCoF), synchronization cost, frequency variance, and steady-state effort share. These pave the way for a better understanding of the sensitivities of the various performance metrics to different control laws.
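For context, in standard per-unit form (a textbook rendering, not the thesis's exact formulation), droop control and virtual inertia act on the local frequency deviation ω as

\[
u_{\mathrm{DC}}(t) = -\frac{1}{R}\,\omega(t),
\qquad
u_{\mathrm{VI}}(t) = -\frac{1}{R}\,\omega(t) - m_v\,\dot{\omega}(t),
\]

where R is the droop coefficient and \(m_v\) the virtual inertia constant. The derivative term is what allows VI to counteract a high RoCoF, but differentiating a noisy frequency measurement also amplifies that noise, foreshadowing the variance issue discussed next.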

Our analysis unveils several limitations of traditional control laws, such as the inability of DC to improve dynamic performance without sacrificing steady-state performance, and the unbounded frequency variance introduced by VI in the presence of frequency measurement noise. Therefore, rather than clinging to the idea of imitating synchronous generator behavior via inverter-interfaced energy storage, we search for better solutions.

We first propose dynam-i-c Droop control (iDroop), inspired by the classical lead/lag compensator, which provably enjoys several desirable properties. First of all, the added degrees of freedom in iDroop allow dynamic performance to be improved without compromising steady-state performance. In addition, the lead/lag property of iDroop makes it less sensitive to stochastic power fluctuations and frequency measurement noise. Last but not least, iDroop can be tuned either to achieve zero synchronization cost or to achieve Nadir elimination, by which we mean removing the overshoot in the transient system frequency. In particular, the Nadir-elimination tuning of iDroop exhibits the potential to balance the various performance metrics in practice. However, iDroop has no control over the RoCoF, which is undesirable in low-inertia power systems because of the risk of falsely triggering protection.

We then propose frequency shaping control (FS), an extension of iDroop whose most outstanding feature is its ability to shape the system frequency response to a sudden power imbalance into first-order dynamics whose synchronous frequency and RoCoF are set by two independent control parameters.

We finally validate the theoretical results through extensive numerical experiments on a more realistic power system test case that violates the proportionality assumption, which clearly confirms that our proposed control laws outperform the traditional ones overall.

Committee Members

  • Enrique Mallada, Department of Electrical and Computer Engineering
  • Pablo A. Iglesias, Department of Electrical and Computer Engineering
  • Dennice F. Gayme, Department of Mechanical Engineering
  • Petr Vorobev, Center for Energy Science and Technology, Skolkovo Institute of Science and Technology

Dissertation Defense: Ashwin Bellur
Jun 30 @ 10:00 am

Note: This is a virtual presentation.

Title: Bio-Mimetic Sensory Mapping with Attention for Auditory Scene Analysis

Abstract: The human auditory system performs complex auditory tasks, such as having a conversation in a busy cafe or picking out the melodic line of a particular instrument in an ensemble orchestra, with remarkable ease. It also effortlessly adapts to constantly changing conditions and novel stimuli. The auditory system achieves this through complex neuronal processes. First, the low-dimensional signal representing the acoustic stimulus is mapped to a higher-dimensional space through a series of feed-forward neuronal transformations, in which the different auditory objects in the scene become discernible. These feed-forward processes are then complemented by top-down processes such as attention, driven by cognitive regions, which modulate the feed-forward processes so as to shine a spotlight on the object of interest: the interlocutor in the busy cafe, or the instrument of interest in the ensemble orchestra.

In this work, we explore leveraging these mechanisms observed in the mammalian brain, within computational frameworks, for addressing various auditory scene analysis tasks such as speech activity detection, environmental sound classification and source separation. We develop bio-mimetic computational strategies to model the feed-forward sensory mapping processes as well as the corresponding complementary top-down mechanisms capable of modulating the feed-forward processes during attention.

In the first part of this work, we show, using Gabor filters as an approximation of the feed-forward processes, that retuning the feed-forward processes under top-down attentional feedback is extremely potent in enabling robust speech activity detection. We introduce the notion of memory to represent prior knowledge of acoustic objects and show that memories of objects can be used to deploy the necessary top-down feedback. Next, we expand the feed-forward processes into a data-driven distributed deep belief system whose multiple streams capture the stimulus at different spectrotemporal resolutions, a feature observed in the human auditory system. We show that such a distributed system with inherent redundancies, further complemented by top-down attentional mechanisms using distributed object memories, allows for robust classification of environmental sounds in mismatched conditions. Finally, we show that incorporating these ideas of distributed processing and attentional mechanisms into deep neural networks leads to state-of-the-art performance for even complex tasks such as source separation. Further, we show that in such a distributed system the sum of the parts is better than the individual parts, and that this property can be used to generate real-time top-down feedback, which in turn can be used to adapt the network to novel conditions during inference.

Overall, the results show that leveraging these biologically inspired mechanisms within computational frameworks leads to enhanced robustness and adaptability to novel conditions, traits of the human auditory system that we sought to emulate.

Committee Members

  • Mounya Elhilali, Department of Electrical and Computer Engineering
  • Najim Dehak, Department of Electrical and Computer Engineering
  • Rama Chellappa, Department of Electrical and Computer Engineering

Dissertation Defense: Soohyun Lee
Jun 30 @ 2:00 pm

Note: This is a virtual presentation.

Title: Optical coherence tomography (OCT)-guided ophthalmic therapy

Abstract: Optical coherence tomography (OCT), which noninvasively provides cross-sectional images at micrometer scale in real time, has been widely applied to the diagnosis and treatment guidance of various ocular diseases.

In the first part of this work, we develop a hand-held subretinal injector actively guided by a common-path OCT (CP-OCT) distal sensor. Subretinal injection is becoming increasingly prevalent in both the scientific research and clinical communities as an efficient way of treating retinal diseases. It delivers drugs or stem cells into the space between the retinal pigment epithelium (RPE) and the photoreceptor layer and thus directly affects resident cells and tissues in the subretinal space. However, the technique demands high stability and dexterity from the surgeon due to the fine anatomy of the retina, and it is challenging because of physiological motion such as hand tremor. We mainly focus on two aspects of the CP-OCT-guided subretinal injector: (i) a high-performance fiber probe based on a high-index epoxy lensed fiber to enhance CP-OCT retinal image quality; and (ii) automated layer identification and tracking, in which each retinal layer boundary, as well as the retinal surface, is tracked using 1D convolutional neural network (CNN)-based segmentation on A-scans for accurate localization of the needle. The CNN model is integrated into the CP-OCT system for real-time target boundary distance sensing, and unwanted axial motions are compensated based on the target boundary tracking. The CP-OCT distal sensor guided system is tested on ex vivo bovine retina and achieves micro-scale depth-targeting accuracy, showing its promise for clinical application.
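As a toy illustration of the per-A-scan segmentation idea (network size, depth, and class count are hypothetical, not the speaker's model), a small 1D CNN can label every sample of an A-scan with a layer class, from which boundary positions are read off as label transitions:

```python
import torch
import torch.nn as nn

class AScanSegmenter(nn.Module):
    """Assign each A-scan sample a retinal-layer class (toy sizes)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, n_classes, kernel_size=1))     # per-sample logits

    def forward(self, ascan):          # ascan: (batch, 1, depth)
        return self.net(ascan)         # logits: (batch, n_classes, depth)

ascans = torch.randn(2, 1, 1024)                    # two hypothetical A-scans
labels = AScanSegmenter()(ascans).argmax(dim=1)     # per-sample layer labels
# Boundary depths = indices where the predicted label changes.
boundaries = (labels[0, 1:] != labels[0, :-1]).nonzero().squeeze(-1) + 1
```

The tracked boundary depth can then feed the distance signal used for axial motion compensation.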

In the second part, we propose and demonstrate selective retina therapy (SRT) monitoring and temperature estimation based on speckle-variance OCT (svOCT) for dosimetry control. SRT is an effective laser treatment for retinal diseases associated with degradation of the RPE. SRT selectively targets the RPE, which reduces negative side effects and facilitates healing of the induced retinal lesions. However, selecting the proper laser energy is challenging because the lesions in the RPE are ophthalmoscopically invisible and melanin concentration varies between patients, and even between regions within an eye. SvOCT quantifies speckle pattern variation caused by moving particles or structural changes in biological tissues. The svOCT images are calculated as the interframe intensity variance of the acquired sequence, and they show abrupt speckle-variance changes induced by laser pulse irradiation. We find that the svOCT peak values correlate reliably with the degree of retinal lesion formation. The temperature at the neural retina and the RPE is estimated from the svOCT peak values using numerically calculated temperatures, and the estimates are consistent with the observed lesion formation.
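As stated above, the speckle-variance signal is the interframe intensity variance: for a sequence of N frames,

\[
SV_{ij} \;=\; \frac{1}{N}\sum_{n=1}^{N}\Big(I_{ijn} - \frac{1}{N}\sum_{m=1}^{N} I_{ijm}\Big)^{2},
\]

where \(I_{ijn}\) is the intensity at pixel (i, j) in frame n; pixels whose speckle decorrelates between frames, such as those altered by the laser pulses described above, exhibit large SV values.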

Committee Members

  • Jin U. Kang, Department of Electrical and Computer Engineering
  • Israel Gannot, Department of Electrical and Computer Engineering
  • Mark Foster, Department of Electrical and Computer Engineering

Thesis Proposal: Honghua Guan
Jul 6 @ 12:30 pm

Note: This is a virtual presentation.

Title: High-throughput Optical Explorer in Freely-behaving Rodents

Abstract: One critical goal of neuroscience is to explore the mechanisms underlying neuronal information processing, and a brain imaging tool capable of recording clear neuronal signals over prolonged periods is of great significance to this effort. Among the different imaging modalities, multiphoton microscopy has become the choice for in vivo brain applications owing to its subcellular resolution, optical sectioning, and deep penetration. The current experimental routine, however, requires head fixation of animals during data acquisition. This configuration inevitably introduces unwanted stress and limits many behavioral studies, such as those of social interaction. The scanning two-photon fiberscope is a promising technical direction for bridging this gap. Benefiting from its ultra-compact, lightweight design, it is an ideal optical brain imaging modality for assessing dynamic neuronal activities in freely behaving rodents with subcellular resolution. One significant challenge with the compact scanning two-photon fiberscope is its suboptimal imaging throughput due to the limited choices of miniature optomechanical components.

In this project, we present a compact multicolor two-photon fiberscope platform. We achieve three-wavelength excitation by synchronizing the pulse trains from a femtosecond optical parametric oscillator (OPO) and its pump. The imaging results demonstrate that we can excite several different fluorescent proteins simultaneously with optimal excitation efficiency. In addition, we propose a deep neural network (DNN)-based solution that significantly improves the imaging frame rate with minimal loss in image quality. This innovation enables a 10-fold speed enhancement for the scanning two-photon fiberscope, making it feasible to perform video-rate (26 fps) two-photon imaging in freely moving mice with excellent imaging resolution and SNR, which was previously not possible.

Committee Members

  • Xingde Li, Department of Biomedical Engineering
  • Mark Foster, Department of Electrical and Computer Engineering
  • Jin U. Kang, Department of Electrical and Computer Engineering
  • Israel Gannot, Department of Electrical and Computer Engineering
  • Hui Lu, Department of Pharmacology and Physiology, George Washington University