Dissertation Defense: Yufan He

Note: This is a virtual presentation.

Title: Retinal OCT Image Analysis Using Deep Learning

Abstract: Optical coherence tomography (OCT) is a noninvasive imaging modality that uses low-coherence light to take cross-sectional images of optically scattering media. OCT has been widely used in diagnosing retinal and neurological diseases by imaging the human retina. The thicknesses of retinal layers are important biomarkers for neurological diseases such as multiple sclerosis (MS). The peripapillary retinal nerve fiber layer (pRNFL) and ganglion cell plus inner plexiform layer (GCIP) thicknesses can be used to assess global disease progression in MS patients. Automated OCT image analysis tools are critical for quantitatively monitoring disease progression and exploring biomarkers. With the development of more powerful computational resources, deep learning based methods have achieved far better accuracy, speed, and algorithmic flexibility for many image analysis tasks. However, without task-specific modifications, these emerging deep learning methods are not satisfactory when applied directly to tasks such as retinal layer segmentation.

In this thesis, we present a set of novel deep learning based methods for OCT image analysis. Specifically, we focus on automated retinal layer segmentation from macular OCT images. The first problem we address is that existing deep learning methods do not incorporate explicit anatomical rules and cannot guarantee the layer segmentation hierarchy (pixels of an upper layer should have neither overlap with nor gaps from the pixels of the layer beneath it). To solve this, we developed an efficient fully convolutional network that generates structured layer surfaces with correct topology and is also able to segment retinal lesions (cysts or edema). The second problem we address is that segmentation uncertainty reduces the sensitivity of detecting subtle retinal changes in MS patients over time. To solve this, we developed a longitudinal deep learning pipeline that incorporates both inter-slice and longitudinal segmentation priors to achieve more consistent segmentation for monitoring patient-specific retinal changes. The third problem we address is that the performance of deep learning models degrades when test data come from a different scanner than the training data (domain shift). We address this problem by developing a novel test-time domain adaptation method. Unlike existing solutions, our model can dynamically adapt to each test subject during inference without time-consuming retraining. Our deep networks achieve state-of-the-art segmentation accuracy, speed, and flexibility compared to existing methods.
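The layer-hierarchy guarantee mentioned in the abstract can be illustrated with a minimal sketch (this is a hypothetical illustration of the general idea, not the thesis implementation): if a network predicts per-layer thickness offsets instead of raw surface positions, clamping the offsets to be non-negative and cumulatively summing them yields surfaces that can never cross, so layers have no overlaps or gaps by construction.

```python
import numpy as np

def surfaces_from_offsets(top_surface, thickness_logits):
    """Convert unconstrained network outputs into topology-correct surfaces.

    top_surface: shape (W,), depth of the topmost boundary in each image column.
    thickness_logits: shape (L, W), one unconstrained output row per layer.
    """
    # ReLU forces every layer thickness to be >= 0, so the running sum
    # produces boundaries ordered from top to bottom with no crossings.
    thickness = np.maximum(thickness_logits, 0.0)
    return top_surface + np.cumsum(thickness, axis=0)

# Example: 2 layers over 4 columns; note the negative logits get clamped.
top = np.zeros(4)
logits = np.array([[1.0, -2.0, 3.0, 0.0],
                   [2.0,  1.0, -1.0, 0.5]])
surfaces = surfaces_from_offsets(top, logits)
```

Checking `np.diff` down each column of the stacked boundaries confirms they are monotonically non-decreasing, i.e. the anatomical ordering holds for any network output.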

Committee Members

  • Jerry Prince, Department of Electrical and Computer Engineering
  • Archana Venkataraman, Department of Electrical and Computer Engineering
  • Vishal Patel, Department of Electrical and Computer Engineering