Our current research spans three thrusts: programmable devices and networks, scalable machine learning, and algorithmic approaches to healthcare.
Programmable Devices and Networks
Our mission is to improve large-scale networked systems by combining programmable networks with efficient data structures. Rack-scale computers are emerging to fundamentally change how datacenters are designed, built, and managed. They disaggregate the resources in each rack of servers into separate pools and organize them at the rack level. Such resource disaggregation enables fine-grained resource allocation and increases resource utilization. Resource management is essential for rack-scale computers to fully realize these benefits. Yet the densely packed resources and the rise of millisecond- and microsecond-scale tasks place unprecedented throughput and latency requirements on the resource manager. Today's server-based solutions fall short of meeting these requirements. This project investigates a new architecture that leverages the power and flexibility of new-generation programmable switches for resource management in rack-scale computers.
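The idea of rack-level pooling can be illustrated with a toy model. This is a minimal sketch, not the project's actual switch-based design: the `Pool` and `RackManager` names are hypothetical, and the all-or-nothing allocation policy is an assumption chosen only to show how disaggregated pools enable fine-grained, rack-wide allocation.

```python
# Toy model of resource disaggregation in a rack-scale computer.
# Hypothetical names (Pool, RackManager); not the project's real API.

class Pool:
    """A disaggregated resource pool, e.g. all CPU cores in a rack."""
    def __init__(self, name, capacity):
        self.name = name
        self.free = capacity

    def alloc(self, amount):
        if amount > self.free:
            return False
        self.free -= amount
        return True


class RackManager:
    """Allocates task demands against rack-level pools instead of per-server limits."""
    def __init__(self, pools):
        self.pools = {p.name: p for p in pools}

    def schedule(self, demand):
        # All-or-nothing allocation: check every pool first, then commit.
        if all(self.pools[r].free >= amt for r, amt in demand.items()):
            for r, amt in demand.items():
                self.pools[r].alloc(amt)
            return True
        return False


mgr = RackManager([Pool("cpu", 256), Pool("mem_gb", 1024)])
print(mgr.schedule({"cpu": 8, "mem_gb": 64}))   # True: fits in the rack pools
print(mgr.schedule({"cpu": 300, "mem_gb": 1}))  # False: exceeds the CPU pool
```

A real switch-based manager would make this decision in the data plane at line rate; the sketch only captures the pool abstraction.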
Scalable Machine Learning
Recent results indicate that sublinear methods are a promising direction for scalable machine learning: reducing communication costs in distributed and federated learning while maintaining training accuracy, compressing deep neural networks by pruning "insignificant" neurons, and improving selective plasticity methods in continual learning.
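One common way to cut communication cost in distributed training is top-k gradient sparsification: each worker sends only the k largest-magnitude gradient entries. The sketch below is illustrative only and is not claimed to be the group's method; the function name `topk_sparsify` is hypothetical.

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of a gradient vector,
    so a worker communicates k values instead of the full vector."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]  # indices of top-k magnitudes
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

rng = np.random.default_rng(0)
g = rng.normal(size=1000)          # a dense gradient from one worker
s = topk_sparsify(g, 10)
print(np.count_nonzero(s))         # 10 values sent instead of 1000
```

In practice such schemes typically accumulate the dropped residual locally across rounds to preserve training accuracy.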
Algorithmic Methods for Healthcare
Our current focus is on computational approaches to targeted cancer treatments, including immunotherapy, where we aim to combine multiple quantitative image-derived parameters to determine the best metrics for evaluating treatment response. Newer directions include radiomic approaches for faster COVID-19 diagnostics, scalable t-SNE and UMAP for cancer cell analysis, and fast methods for single-cell RNA sequencing on data sets spanning millions of cells.
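A standard scalability step before running t-SNE or UMAP on millions of cells is to reduce the gene-expression matrix to a few dozen principal components, often on a subsample of "landmark" cells. This is a minimal sketch of that preprocessing using plain NumPy, with toy data; it is not the group's pipeline.

```python
import numpy as np

def pca_reduce(X, n_components=50):
    """PCA via SVD: common preprocessing before t-SNE/UMAP on single-cell data."""
    Xc = X - X.mean(axis=0)                       # center each gene
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T               # project onto top components

rng = np.random.default_rng(1)
cells = rng.normal(size=(500, 200))               # toy data: 500 cells x 200 genes
landmarks = cells[rng.choice(500, 100, replace=False)]  # subsampled landmark cells
emb = pca_reduce(landmarks, n_components=50)
print(emb.shape)                                  # (100, 50)
```

The remaining cells can then be embedded relative to the landmarks, which keeps the expensive neighbor search sublinear in the full data set size.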
Our objective is to build a medical imaging learning system, termed MARINE (Medical image Analysis and Reasoning with Interactive Networks). The MARINE framework will be equipped with two major capabilities: lifelong learning, where a model can continually learn and integrate novel features characterizing a specific disease, and experience sharing, where models can communicate with each other to exchange what they have learned about their individual tasks and the features corresponding to a disease or a subset of diseases.
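One simple stand-in for experience sharing is federated-style weight averaging, where models trained on different diseases periodically merge their parameters. The sketch below is purely illustrative and assumes identical architectures; `share_experience` is a hypothetical name, not part of MARINE.

```python
import numpy as np

def share_experience(models):
    """Average per-layer weights across models with the same architecture,
    a minimal stand-in for experience sharing between MARINE models."""
    return [np.mean([m[i] for m in models], axis=0)
            for i in range(len(models[0]))]

# Two toy "models", each a list of weight arrays for the same architecture.
model_a = [np.ones((2, 2)), np.zeros(2)]
model_b = [3 * np.ones((2, 2)), 2 * np.ones(2)]
shared = share_experience([model_a, model_b])
print(shared[0])  # every entry is the mean of 1.0 and 3.0, i.e. 2.0
```

Richer sharing schemes would exchange learned feature extractors or distilled knowledge rather than raw weights, but the averaging example shows the basic communication pattern.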