Dissertation Defense: Jordi Abante
May 10 @ 3:00 pm

Note: This is a virtual presentation.

Title: Statistical Signal Processing Methods for Epigenetic Landscape Analysis

Abstract: Since the structure of DNA was discovered in 1953, a great deal of effort has been devoted to studying this molecule in detail. We now know that DNA comprises an organism’s genetic makeup and constitutes a blueprint for life. The study of DNA has dramatically increased our knowledge of cell function and evolution and has led to remarkable discoveries in biology and medicine.

Just as DNA is replicated during cell division, several chemical marks are also passed on to progeny during this process. Epigenetics, the study of these marks, is a fascinating research area given their crucial role. Among all known epigenetic marks, 5mC DNA methylation is probably one of the most important, given its well-established association with various biological processes, such as development and aging, and with diseases such as cancer. The work in this dissertation focuses primarily on this epigenetic mark, although it can potentially be applied to other heritable marks.

In the 1940s, Waddington introduced the term epigenetic landscape to conceptually describe cell pluripotency and differentiation. The concept remained abstract until Jenkinson et al. (2017, 2018) estimated actual epigenetic landscapes from WGBS data, work that led to startling results with biological implications for development and disease. Here, we introduce an array of novel computational methods that draw from that work. First, we present CPEL, a method that uses a variant of the original landscape proposed by Jenkinson et al. and, together with a new hypothesis testing framework, allows for the detection of DNA methylation imbalances between homologous chromosomes. Then, we present CpelTdm, a method that builds upon CPEL to perform differential methylation analysis between groups of samples using targeted bisulfite sequencing data. Finally, we extend the original probabilistic model proposed by Jenkinson et al. to estimate methylation landscapes and perform differential analysis from nanopore data.
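CPEL’s actual test statistic is derived from the estimated landscapes, but the general shape of a two-group differential methylation test can be sketched with a plain permutation test. This is a simplified stand-in, not CPEL itself, and the methylation fractions below are invented for illustration:

```python
import random

def permutation_test(group_a, group_b, n_perm=10000, seed=0):
    """Two-sided permutation test on the difference of group means."""
    rng = random.Random(seed)
    n_a = len(group_a)
    observed = abs(sum(group_a) / n_a - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            hits += 1
    # Add-one smoothing keeps the estimate away from an impossible p = 0
    return (hits + 1) / (n_perm + 1)

# Invented methylation fractions at one locus for two groups of samples
p = permutation_test([0.9, 0.85, 0.92, 0.88], [0.4, 0.35, 0.5, 0.45])
print(p)  # small p: the groups plausibly differ
```

CPEL replaces this simple difference of means with statistics computed on the estimated methylation landscapes, which is what gives it power on allele-specific and targeted data.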

Overall, this work addresses immediate needs in the study of DNA methylation. The methods presented here can lead to a better characterization of this critical epigenetic mark and enable biological discoveries with implications for diagnosing and treating complex human diseases.

Committee Members

  • John Goutsias, Department of Electrical and Computer Engineering
  • Archana Venkataraman, Department of Electrical and Computer Engineering
  • Sanjeev Khudanpur, Department of Electrical and Computer Engineering
Dissertation Defense: Xing Di
May 24 @ 12:00 pm

Note: This is a virtual presentation.

Title: Deep Learning Based Face Image Synthesis

Abstract: Face image synthesis is an important problem in the biometrics and computer vision communities due to its applications in law enforcement and entertainment. In this thesis, we develop novel deep neural network models and associated loss functions for two face image synthesis problems, namely thermal to visible face synthesis and visual attribute to face synthesis.

In particular, for thermal to visible face synthesis, we propose a model which makes use of facial attributes to obtain better synthesis. We use attributes extracted from visible images to synthesize attribute-preserved visible images from thermal imagery. A pre-trained attribute predictor network is used to extract attributes from the visible image. Then, a novel multi-scale generator is proposed to synthesize the visible image from the thermal image guided by the extracted attributes. Finally, a pre-trained VGG-Face network is leveraged to extract features from the synthesized image and the input visible image for verification.

In addition, we propose another thermal to visible face synthesis method based on a self-attention generative adversarial network (SAGAN) which allows efficient attention-guided image synthesis. Rather than focusing only on synthesizing visible faces from thermal faces, we also propose to synthesize thermal faces from visible faces. Our intuition is based on the fact that thermal images also contain some discriminative information about the person for verification. Deep features from a pre-trained Convolutional Neural Network (CNN) are extracted from the original as well as the synthesized images. These features are then fused to generate a template which is then used for cross-modal face verification.

Regarding attribute to face image synthesis, we propose the Att2SK2Face model for face image synthesis from visual attributes via sketch. In this approach, we first synthesize a facial sketch corresponding to the visual attributes and then generate the face image based on the synthesized sketch. The proposed framework combines two different Generative Adversarial Networks (GANs): (1) a sketch generator network, which synthesizes a realistic sketch from the input attributes, and (2) a face generator network, which synthesizes facial images from the synthesized sketch images with the help of facial attributes.

Finally, we propose another synthesis model, called Att2MFace, which can simultaneously synthesize multimodal faces from visual attributes without requiring paired data in different domains for training the network. We introduce a novel generator with multimodal stretch-out modules to simultaneously synthesize multimodal face images. Additionally, multimodal stretch-in modules are introduced in the discriminator which discriminates between real and fake images.

Committee Members

  • Vishal Patel, Department of Electrical and Computer Engineering
  • Rama Chellappa, Department of Electrical and Computer Engineering
  • Carlos Castillo, Department of Electrical and Computer Engineering
Dissertation Defense: Arun Nair
May 25 @ 12:30 pm

Note: This is a virtual presentation.

Title: Machine Learning for Beamforming in Ultrasound, Radar, and Audio

Abstract: Multi-sensor signal processing plays a crucial role in several everyday technologies, from correctly understanding speech on smart home devices to ensuring aircraft fly safely. A specific type of multi-sensor signal processing called beamforming forms a central part of this thesis. Beamforming combines the information from several spatially distributed sensors to directionally filter information, boosting the signal from a chosen direction while suppressing others. The idea of beamforming is key to the domains of ultrasound, radar, and audio.
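The combining step described above can be sketched as a classic delay-and-sum beamformer. This is an illustrative toy with integer-sample delays and an invented four-microphone geometry, not one of the pipelines developed in the thesis:

```python
import numpy as np

def delay_and_sum(signals, mic_positions, angle_deg, fs, c=343.0):
    """Steer a linear array toward `angle_deg` by delaying each channel
    so a plane wave from that direction adds coherently.

    signals:       (n_mics, n_samples) array
    mic_positions: (n_mics,) positions along the array axis in meters
    """
    theta = np.deg2rad(angle_deg)
    delays = mic_positions * np.sin(theta) / c       # seconds per channel
    out = np.zeros(signals.shape[1])
    for x, d in zip(signals, delays):
        shift = int(round(d * fs))                   # integer-sample delay
        out += np.roll(x, -shift)                    # circular shift, for simplicity
    return out / len(signals)

# A 1 kHz tone arriving broadside (0 degrees) on a 4-mic array
fs = 16000
t = np.arange(1024) / fs
mics = np.array([0.0, 0.05, 0.10, 0.15])
sig = np.stack([np.sin(2 * np.pi * 1000 * t)] * 4)   # identical channels: broadside
steered = delay_and_sum(sig, mics, 0, fs)            # coherent sum, full amplitude
off = delay_and_sum(sig, mics, 60, fs)               # mis-steered: partial cancellation
print(steered.max(), off.max())
```

Real pipelines add fractional delays, apodization weights, and (in ultrasound) dynamic receive focusing; the thesis studies where learned components can replace or augment stages of such pipelines.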

Machine learning, succinctly defined by Tom Mitchell as “the study of algorithms that improve automatically through experience,” is the other central part of this thesis. Machine learning, especially its sub-field of deep learning, has enabled breakneck progress on several problems previously thought intractable. Today, machine learning powers many of the cutting-edge systems we see on the internet for image classification, speech recognition, language translation, and more.

In this dissertation, we look at beamforming pipelines in ultrasound, radar, and audio through a machine learning lens and endeavor to improve different parts of the pipelines using ideas from machine learning. Starting in the ultrasound domain, we use deep learning as an alternative to beamforming and improve the information extraction pipeline by simultaneously generating both a segmentation map and a high-quality B-mode image directly from raw received ultrasound data.

Next, we move to the radar domain and study how deep learning can be used to improve signal quality in ultra-wideband synthetic aperture radar by suppressing radio frequency interference, random spectral gaps, and contiguous block spectral gaps. Because the networks are trained on and applied to raw single-aperture data prior to beamforming, the approach can work with myriad sensor geometries and different beamforming equations, a crucial requirement in synthetic aperture radar.

Finally, we move to the audio domain and derive a machine-learning-inspired beamformer to tackle the problem of ensuring that the audio captured by a camera matches its visual content, a problem we term audiovisual zoom. Unlike prior work, which can enhance only a few individual directions, our method enhances audio from a contiguous field of view.

Committee Members

  • Trac Tran, Department of Electrical and Computer Engineering
  • Muyinatu Bell, Department of Electrical and Computer Engineering
  • Vishal Patel, Department of Electrical and Computer Engineering
Dissertation Defense: Takeshi Uejima
May 28 @ 9:00 am

Note: This is a virtual presentation.

Title: A Unified Visual Saliency Model for Neuromorphic Implementation

Abstract: Although computer capabilities have expanded tremendously, a significant wall remains between the computer and the human brain. The brain can process massive amounts of information obtained from a complex environment and control the entire body in real time with low energy consumption. This thesis tackles this mystery by modeling and emulating how the brain processes information based on the available knowledge of biological and artificial intelligence as studied in neuroscience, cognitive science, computer science, and computer engineering.

Saliency modeling relates to visual sense and biological intelligence. The retina captures and sends a vast amount of data about the environment to the brain. However, because the visual cortex cannot process all this information in detail at once, the early stages of visual processing discard unimportant information. Since only the fovea provides high-resolution imaging, individuals move their eyes toward the important parts of a scene. Eye movements can therefore be thought of as an observable output of the early visual process in the brain. Saliency modeling aims to understand this mechanism and predict eye fixations.
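A core ingredient of many saliency models is center-surround contrast: a location is salient when it differs strongly from its neighborhood. The following minimal sketch (not the model proposed in this thesis; the image and block size are arbitrary) captures that idea by comparing each pixel with the mean of its coarse block:

```python
import numpy as np

def center_surround(img, block=8):
    """Crude center-surround contrast: compare each pixel (center)
    with the mean of its coarse neighborhood (surround)."""
    h, w = img.shape
    hb, wb = h // block, w // block
    cropped = img[:hb * block, :wb * block]
    # Block means via reshape-and-average (the "surround" estimate)
    coarse = cropped.reshape(hb, block, wb, block).mean(axis=(1, 3))
    surround = np.kron(coarse, np.ones((block, block)))  # upsample back
    return np.abs(cropped - surround)

# A dim field with one bright patch: the patch pops out in the map
img = np.full((64, 64), 0.2)
img[28:36, 28:36] = 1.0
sal = center_surround(img)
print(np.unravel_index(sal.argmax(), sal.shape))  # lands inside the bright patch
```

Biologically plausible models replace this crude block average with multi-scale filters modeled on retinal and cortical receptive fields, and extend the features to texture, depth, and motion as in this thesis.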

Researchers have built biologically plausible saliency models that emulate the biological process from the retina through the visual cortex. Although many saliency models have been proposed, most are not bio-realistic. This thesis models the biological mechanisms underlying the perception of texture, depth, and motion. While texture plays a vital role in the perception process, defining texture mathematically is not easy; thus, it is necessary to build an architecture for texture processing based on the biological perception mechanism. Binocular stereopsis is another intriguing function of the brain. While scholars have evaluated many computational algorithms for stereovision, pursuing biological plausibility means implementing a neuromorphic method within a saliency model. Motion is another critical cue that helps animals survive. In this thesis, the motion feature is implemented in a bio-realistic way based on neurophysiological observations.

Moreover, the thesis integrates these processes and proposes a unified saliency model that can handle 3D dynamic scenes in a way similar to how the brain deals with the real environment. This investigation thus uses saliency modeling to examine intriguing properties of human visual processing and discusses how the brain achieves this remarkable capability.

Committee Members

  • Ralph Etienne-Cummings, Department of Electrical and Computer Engineering
  • Andreas Andreou, Department of Electrical and Computer Engineering
  • Philippe Pouliquen, Department of Electrical and Computer Engineering
  • Ernst Niebur, Department of Neuroscience
Dissertation Defense: Yan Jiang
Jun 29 @ 1:00 pm

Note: This is a virtual presentation.

Title: Leveraging Inverter-Interfaced Energy Storage for Frequency Control in Low-Inertia Power Systems

Abstract: The shift from conventional synchronous generation to renewable inverter-interfaced sources has led to a noticeable degradation of frequency dynamics in power systems, mainly due to a loss of inertia. Fortunately, recent technological advances and cost reductions in energy storage facilitate higher renewable energy penetration via inverter-interfaced energy storage. With proper control laws imposed on the inverters, the rapid power-frequency response of energy storage helps mitigate this degradation. A straightforward choice is to emulate the droop response and/or inertial response of synchronous generators through droop control (DC) or virtual inertia (VI), yet these do not necessarily fully exploit the benefits of inverter-interfaced energy storage. This thesis therefore challenges the naive choice of mimicking synchronous generator characteristics by advocating a principled control design perspective.

To achieve this goal, we build an analysis framework for quantifying the performance of power systems using signal and system norms, within which we perform a systematic study to evaluate the effect of different control laws on both frequency response metrics and storage economic metrics. More precisely, under a mild yet insightful proportionality assumption, we perform a modal decomposition that yields closed-form expressions or conditions for synchronous frequency, Nadir, rate of change of frequency (RoCoF), synchronization cost, frequency variance, and steady-state effort share. Together, these pave the way for a better understanding of the sensitivity of each performance metric to the different control laws.
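The frequency metrics above can be made concrete with a toy single-machine simulation (a sketch under arbitrary per-unit parameters, not the thesis’s multi-machine analysis). With droop control alone the closed loop is first order, so after a step power imbalance dP the frequency deviation starts at RoCoF dP/M and settles monotonically at the synchronous value dP/(D + 1/r), with no Nadir overshoot:

```python
import numpy as np

def simulate(M, D, r_droop, dP=-0.1, T=20.0, dt=1e-3):
    """Single-machine swing equation with droop-controlled storage:
        M * dw/dt = dP - D*w - (1/r_droop)*w
    Returns the frequency-deviation trajectory w(t) in per unit
    (forward-Euler integration)."""
    n = int(T / dt)
    w = np.zeros(n)
    for k in range(1, n):
        dwdt = (dP - D * w[k - 1] - w[k - 1] / r_droop) / M
        w[k] = w[k - 1] + dt * dwdt
    return w

w = simulate(M=5.0, D=1.0, r_droop=0.05)
nadir = w.min()               # worst transient deviation
rocof = (w[1] - w[0]) / 1e-3  # initial rate of change, equals dP/M
steady = w[-1]                # settles at dP/(D + 1/r_droop)
print(nadir, rocof, steady)
```

A first-order response like this has no overshoot, which is exactly the shape that the Nadir elimination and frequency shaping ideas discussed in the abstract aim to impose on a full multi-machine system.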

Our analysis unveils several limitations of traditional control laws, such as the inability of DC to improve dynamic performance without sacrificing steady-state performance, and the unbounded frequency variance introduced by VI in the presence of frequency measurement noise. Therefore, rather than clinging to the idea of imitating synchronous generator behavior via inverter-interfaced energy storage, we search for better solutions.

We first propose dynam-i-c Droop control (iDroop), inspired by the classical lead/lag compensator, which provably enjoys several desirable properties. First, the added degrees of freedom in iDroop allow dynamic performance to be improved without sacrificing steady-state performance. In addition, the lead/lag property of iDroop makes it less sensitive to stochastic power fluctuations and frequency measurement noise. Last but not least, iDroop can be tuned either to achieve zero synchronization cost or to achieve Nadir elimination, by which we mean removing the overshoot in the transient system frequency. In particular, the Nadir elimination tuning of iDroop shows potential for balancing the various performance metrics in practice. However, iDroop has no control over the RoCoF, which is undesirable in low-inertia power systems because of the risk of falsely triggering protection.

We then propose frequency shaping control (FS), an extension of iDroop whose most outstanding feature is its ability to shape the system frequency response to a sudden power imbalance into a first-order one, with the synchronous frequency and RoCoF set by two independent control parameters.

We finally validate the theoretical results through extensive numerical experiments on a more realistic power system test case that violates the proportionality assumption, which clearly confirm that our proposed control laws outperform the traditional ones overall.

Committee Members

  • Enrique Mallada, Department of Electrical and Computer Engineering
  • Pablo A. Iglesias, Department of Electrical and Computer Engineering
  • Dennice F. Gayme, Department of Mechanical Engineering
  • Petr Vorobev, Center for Energy Science and Technology, Skolkovo Institute of Science and Technology
Dissertation Defense: Ashwin Bellur
Jun 30 @ 10:00 am

Note: This is a virtual presentation.

Title: Bio-Mimetic Sensory Mapping with Attention for Auditory Scene Analysis

Abstract: The human auditory system performs complex auditory tasks, such as following a conversation in a busy cafe or picking out the melodic line of a particular instrument in an ensemble orchestra, with remarkable ease. It also adapts effortlessly to constantly changing conditions and novel stimuli. The auditory system achieves this through complex neuronal processes. First, the low-dimensional signal representing the acoustic stimulus is mapped to a higher-dimensional space through a series of feed-forward neuronal transformations, wherein the different auditory objects in the scene become discernible. These feed-forward processes are then complemented by top-down processes like attention, driven by the cognitive regions, which modulate the feed-forward processes in a manner that shines a spotlight on the object of interest: the interlocutor in the busy cafe, or the instrument of interest in the ensemble orchestra.

In this work, we explore leveraging these mechanisms observed in the mammalian brain, within computational frameworks, for addressing various auditory scene analysis tasks such as speech activity detection, environmental sound classification and source separation. We develop bio-mimetic computational strategies to model the feed-forward sensory mapping processes as well as the corresponding complementary top-down mechanisms capable of modulating the feed-forward processes during attention.

In the first part of this work, we show, using Gabor filters as an approximation of the feed-forward processes, that retuning the feed-forward processes under top-down attentional feedback is extremely potent in enabling robust detection of speech activity. We introduce the notion of memory to represent prior knowledge of acoustic objects and show that memories of objects can be used to deploy the necessary top-down feedback. Next, we expand the feed-forward processes into a data-driven distributed deep belief system consisting of multiple streams that capture the stimulus at different spectrotemporal resolutions, a feature observed in the human auditory system. We show that such a distributed system with inherent redundancies, further complemented by top-down attentional mechanisms using distributed object memories, allows for robust classification of environmental sounds in mismatched conditions. Finally, we show that incorporating these ideas of distributed processing and attentional mechanisms into deep neural networks leads to state-of-the-art performance even on complex tasks such as source separation. Further, we show that in such a distributed system the sum of the parts is better than the individual parts, and that this property can be used to generate real-time top-down feedback, which in turn can be used to adapt the network to novel conditions during inference.
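A Gabor filter of the kind mentioned above is simply a sinusoid under a Gaussian envelope, and its selectivity is easy to demonstrate. This sketch (a generic 1-D temporal Gabor with invented frequencies, not the thesis’s actual filterbank) shows a strong response to a matched tone and almost none to a distant one:

```python
import numpy as np

def gabor_kernel(fs, f0, sigma):
    """1-D Gabor filter: a sinusoid at f0 Hz under a Gaussian envelope,
    a common approximation of auditory receptive fields."""
    t = np.arange(-3 * sigma, 3 * sigma, 1 / fs)
    g = np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * f0 * t)
    return g / np.linalg.norm(g)  # unit-energy kernel

fs = 8000
g = gabor_kernel(fs, f0=500, sigma=0.01)
t = np.arange(4000) / fs
matched = np.convolve(np.sin(2 * np.pi * 500 * t), g, mode="same")
unmatched = np.convolve(np.sin(2 * np.pi * 2000 * t), g, mode="same")
print(np.abs(matched).max(), np.abs(unmatched).max())  # matched response is far larger
```

Attention-driven retuning can then be viewed as shifting parameters such as f0 or sigma so the filterbank emphasizes the object of interest.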

Overall, the results show that leveraging these biologically inspired mechanisms within computational frameworks leads to enhanced robustness and adaptability to novel conditions, traits of the human auditory system that we sought to emulate.

Committee Members

  • Mounya Elhilali, Department of Electrical and Computer Engineering
  • Najim Dehak, Department of Electrical and Computer Engineering
  • Rama Chellappa, Department of Electrical and Computer Engineering

Dissertation Defense: Soohyun Lee
Jun 30 @ 2:00 pm

Note: This is a virtual presentation.

Title: Optical coherence tomography (OCT)-guided ophthalmic therapy

Abstract: Optical coherence tomography (OCT), which noninvasively provides cross-sectional images at micrometer scale in real time, has been widely applied to the diagnosis and treatment guidance of various ocular diseases.

In the first part of this work, we develop a hand-held subretinal injector actively guided by a common-path OCT (CP-OCT) distal sensor. Subretinal injection is becoming increasingly prevalent in both the scientific research and clinical communities as an efficient way of treating retinal diseases. It delivers drugs or stem cells into the space between the RPE and photoreceptor layers and thus directly affects resident cells and tissues in the subretinal space. However, the technique demands great stability and dexterity from the surgeon due to the fine anatomy of the retina, and it is made challenging by physiological motions such as hand tremor. We focus mainly on two aspects of the CP-OCT guided subretinal injector: (i) a high-performance fiber probe based on a high-index epoxy lensed fiber to enhance CP-OCT retinal image quality; and (ii) automated layer identification and tracking, in which each retinal layer boundary, as well as the retinal surface, is tracked using 1D convolutional neural network (CNN)-based segmentation on A-scans for accurate localization of the needle. The CNN model is integrated into the CP-OCT system for real-time target boundary distance sensing, and unwanted axial motions are compensated based on the target boundary tracking. The CP-OCT distal sensor guided system is tested on ex vivo bovine retina and achieves micro-scale depth targeting accuracy, showing promise for clinical application.

In the second part, we propose and demonstrate selective retina therapy (SRT) monitoring and temperature estimation based on speckle variance OCT (svOCT) for dosimetry control. SRT is an effective laser treatment for retinal diseases associated with degradation of the retinal pigment epithelium (RPE). Because SRT selectively targets the RPE, it reduces negative side effects and facilitates healing of the induced retinal lesions. However, selecting the proper laser energy is challenging because the lesions in the RPE are ophthalmoscopically invisible and melanin concentration varies between patients and even between regions within an eye. SvOCT quantifies speckle pattern variation caused by moving particles or structural changes in biological tissues. SvOCT images are calculated as the interframe intensity variance of the acquired sequence, and they show abrupt speckle variance changes induced by laser pulse irradiation. We find that svOCT peak values correlate reliably with the degree of retinal lesion formation. The temperature at the neural retina and RPE is estimated from the svOCT peak values using numerically calculated temperatures, which are consistent with the observed lesion formation.
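The svOCT computation described above, interframe intensity variance over a frame sequence, reduces to a one-liner; the frame stack below is synthetic, with one region made dynamic to mimic laser-induced change:

```python
import numpy as np

def speckle_variance(frames):
    """svOCT: per-pixel intensity variance across a sequence of
    frames acquired at the same location."""
    return np.var(np.asarray(frames, dtype=float), axis=0)

# Static tissue gives near-zero variance; a region whose intensity
# changes frame to frame (as after a laser pulse) lights up.
rng = np.random.default_rng(0)
base = rng.random((64, 64))
frames = [base.copy() for _ in range(8)]
for i, f in enumerate(frames):
    f[20:30, 20:30] = base[20:30, 20:30] + 0.5 * np.sin(i)  # dynamic region
sv = speckle_variance(frames)
print(sv[25, 25] > sv[5, 5])  # True: the dynamic region has high variance
```

In the actual system, peaks of this variance map are what get correlated with lesion formation and fed into the temperature estimate.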

Committee Members

  • Jin U. Kang, Department of Electrical and Computer Engineering
  • Israel Gannot, Department of Electrical and Computer Engineering
  • Mark Foster, Department of Electrical and Computer Engineering
Thesis Proposal: Honghua Guan
Jul 6 @ 12:30 pm

Note: This is a virtual presentation.

Title: High-throughput Optical Explorer in Freely-behaving Rodents

Abstract: One critical goal of neuroscience is to explore the mechanisms underlying neuronal information processing. A brain imaging tool capable of recording clear neuronal signals over prolonged periods is therefore of great significance. Among the different imaging modalities, multiphoton microscopy has become the choice for in vivo brain applications owing to its subcellular resolution, optical sectioning, and deep penetration. The current experimental routine, however, requires head fixation of animals during data acquisition. This configuration inevitably introduces unwanted stress and limits many behavioral studies, such as those of social interaction. The scanning two-photon fiberscope is a promising technical direction for bridging this gap. Benefiting from its ultra-compact design and light weight, it is an ideal optical brain imaging modality for assessing dynamic neuronal activities in freely-behaving rodents with subcellular resolution. One significant challenge with the compact scanning two-photon fiberscope is its suboptimal imaging throughput, due to the limited choices of miniature optomechanical components.

In this project, we present a compact multicolor two-photon fiberscope platform. We achieve three-wavelength excitation by synchronizing the pulse trains from a femtosecond OPO and its pump. The imaging results demonstrate that we can excite several different fluorescent proteins simultaneously with optimal excitation efficiency. In addition, we propose a deep neural network (DNN) based solution that significantly improves the imaging frame rate with minimal loss in image quality. This innovation enables a 10-fold speed enhancement for the scanning two-photon fiberscope, making it feasible to perform video-rate (26 fps) two-photon imaging in freely-moving mice with excellent imaging resolution and SNR, which was previously not possible.

Committee Members

  • Xingde Li, Department of Biomedical Engineering
  • Mark Foster, Department of Electrical and Computer Engineering
  • Jin U. Kang, Department of Electrical and Computer Engineering
  • Israel Gannot, Department of Electrical and Computer Engineering
  • Hui Lu, Department of Pharmacology and Physiology, George Washington University
Closing Ceremonies for Computational Sensing and Medical Robotics (CSMR) REU
Aug 6 @ 9:00 am – 3:00 pm

The closing ceremonies of the Computational Sensing and Medical Robotics (CSMR) REU will take place virtually on Friday, August 6, from 9 a.m. until 3 p.m. Seventeen undergraduate students from across the country are eager to share the culmination of their work over the past 10 weeks this summer.

The schedule for the day is listed below, but each presentation is featured in more detail in the program. Please invite your students and faculty, and feel free to distribute this flyer to advertise the event.

We would love for everyone to come learn about the amazing summer research these students have been conducting!


2021 REU Final Presentations

  • Ben Frey, “Deep Learning for Lung Ultrasound Imaging of COVID-19 Patients” (faculty mentor: Muyinatu Bell; mentor: Lingyi Zhao)
  • Camryn Graham, “Optimization of a Photoacoustic Technique to Differentiate Methylene Blue from Hemoglobin” (faculty mentor: Muyinatu Bell; mentor: Eduardo Gonzalez)
  • Ariadna Rivera, “Autonomous Quadcopter Flying and Swarming” (faculty mentor: Enrique Mallada; mentor: Yue Shen)
  • Katie Sapozhnikov, “Force Sensing Surgical Drill” (faculty mentor: Russell Taylor; mentor: Anna Goodridge)
  • Savannah Hays, “Evaluating SLANT Brain Segmentation using CALAMITI” (faculty mentor: Jerry Prince; mentor: Lianrui Zuo)
  • Ammaar Firozi, “Robustness of Deep Networks to Adversarial Attacks” (faculty mentor: René Vidal; mentors: Kaleab Kinfu, Carolina Pacheco)

10:30 Break

  • Karina Soto Perez, “Brain Tumor Segmentation in Structural MRIs” (faculty mentor: Archana Venkataraman; mentor: Naresh Nandakumar)
  • Jonathan Mi, “Design of a Small Legged Robot to Traverse a Field of Multiple Types of Large Obstacles” (faculty mentor: Chen Li; mentors: Ratan Othayoth, Yaqing Wang, Qihan Xuan)
  • Arko Chatterjee, “Telerobotic System for Satellite Servicing” (faculty mentors: Peter Kazanzides, Louis Whitcomb, Simon Leonard; mentor: Will Pryor)
  • Lauren Peterson, “Can a Fish Learn to Ride a Bicycle?” (faculty mentor: Noah Cowan; mentor: Yu Yang)
  • Josiah Lozano, “Robotic System for Mosquito Dissection” (faculty mentors: Russell Taylor, Iulian Iordachita; mentor: Anna Goodridge)
  • Zulekha Karachiwalla, “Application of Dual Modality Haptic Feedback within Surgical Robotics” (faculty mentor: Jeremy Brown)

12:15 Break

  • James Campbell, “Understanding Overparameterization from Symmetry” (faculty mentor: René Vidal; mentor: Salma Tarmoun)
  • Evan Dramko, “Establishing FDR Control for Genetic Marker Selection” (faculty mentors: Soledad Villar, Jeremias Sulam)
  • Chase Lahr, “Modeling Dynamic Systems Through a Classroom Testbed” (faculty mentor: Jeremy Brown; mentor: Mohit Singhala)
  • Anire Egbe, “Object Discrimination Using Vibrotactile Feedback for Upper Limb Prosthetic Users” (faculty mentor: Jeremy Brown)
  • Harrison Menkes, “Measuring Proprioceptive Impairment in Stroke Survivors” (pre-recorded; faculty mentor: Jeremy Brown)

3:00 Winner Announced
Dissertation Defense: Debojyoti Biswas
Aug 9 @ 10:00 am

Note: This is a virtual presentation.

Title: Stochastic Models of Chemotaxing Signaling Processes

Abstract: Stochasticity is ubiquitous in biological processes, and its contribution to shaping the output response is not restricted to systems involving entities with low copy numbers; intrinsic fluctuations can also affect systems in which the interacting species are present in abundance. Chemotaxis, the migration of cells towards chemical cues, is one such example. Chemotaxis is a fundamental process behind a wide range of biological events, ranging from the innate immune response of organisms to cancer metastasis. In this dissertation, we study the role that stochastic fluctuations play in the mechanism that regulates chemotaxis in the social amoeba Dictyostelium discoideum. It has been argued theoretically and shown experimentally that stochastically driven threshold crossings of an underlying excitable system lead to the protrusions that enable amoeboid cells to move. To date, however, there has been no good computational model that accurately accounts for the effects of noise, as most models merely inject noise extraneously into deterministic models, leading to stochastic differential equations. In contrast, in this study we employ an entirely different paradigm to account for noise effects, based on the reaction-diffusion master equation. Using a modular approach and a three-dimensional cell model with specific subdomains attributed to the cell membrane and cortex, we develop a detailed model of the receptor-mediated regulation of the signal transduction excitable network (STEN), which has been shown to drive actin dynamics. Using this model, we recreate the patterns of wave propagation seen experimentally in both front- and back-side markers, and we recreate various perturbations. Our model provides further support for the biased excitable network hypothesis, which posits that directed motion results from spatially biased regulation of the threshold for activation of an excitable network.
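Master-equation models of this kind are typically simulated with Gillespie’s stochastic simulation algorithm rather than by adding noise to ODEs. The following minimal sketch (a generic birth-death process with invented rates, not the dissertation’s STEN model) shows the exact event-driven simulation whose intrinsic fluctuations can drive threshold crossings:

```python
import math
import random

def gillespie_birth_death(k_on=20.0, k_off=1.0, x0=0, t_end=50.0, seed=1):
    """Exact stochastic simulation (Gillespie SSA) of a birth-death
    process:  0 --k_on--> X,  X --k_off*x--> 0.
    Returns the event times and molecule counts."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        a_birth, a_death = k_on, k_off * x
        a_total = a_birth + a_death
        # Exponential waiting time until the next reaction
        t += -math.log(1.0 - rng.random()) / a_total
        # Pick which reaction fires, proportional to its propensity
        x += 1 if rng.random() * a_total < a_birth else -1
        times.append(t)
        counts.append(x)
    return times, counts

times, counts = gillespie_birth_death()
mean_x = sum(counts) / len(counts)
print(mean_x)  # fluctuates around k_on/k_off = 20
```

The reaction-diffusion master equation used in this work adds diffusive jumps between spatial subvolumes (membrane, cortex, cytosol) to the same event-driven core.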

Here we also consider another aspect of the chemotactic response. While front- and back-markers redistribute in response to chemoattractant gradients, over time this spatial heterogeneity becomes established and can persist even when the external chemoattractant gradient is removed. We refer to this persistent segregation of the cell into back and front regions as polarity. In this dissertation, we study various mechanisms by which polarity can be established. For example, we consider the role of vesicular trafficking as a means of bringing back-markers from the front to the rear of the cell. Then, we study how BAR-domain proteins, which are sensitive to membrane curvature, can amplify small shape heterogeneities, leading to cell polarization. Finally, we develop computational models describing a novel framework by which polarity can be established and perturbed through alteration of the charge distribution on the inner leaflet of the cell membrane.

Committee Members

  • Pablo A. Iglesias, Department of Electrical and Computer Engineering
  • Noah J. Cowan, Department of Mechanical Engineering
  • Enrique Mallada, Department of Electrical and Computer Engineering
  • Peter N. Devreotes, Department of Cell Biology