Title: Dose Optimization for Pediatric Renal SPECT Imaging
Abstract: Like any real-world problem, the design of an imaging system requires tradeoffs. For medical imaging modalities that use ionizing radiation, a major tradeoff is between diagnostic image quality (IQ) and the risk to the patient from absorbed radiation dose. In nuclear medicine, reducing the radiation dose to the patient always increases the Poisson noise in the image. At the same time, reducing the radiation dose (RD), at least below some level, always reduces the risk of adverse effects to the patient. The overall goal of this research is to propose a rigorous IQ-RD tradeoff analysis method for pediatric nuclear medicine renal imaging. However, the methodologies developed in this proposal can also be applied to other nuclear medicine imaging applications and to other important medical modalities involving ionizing radiation, such as computed tomography and planar X-ray imaging.
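The statement that lower dose always means more Poisson noise follows from the fact that, for Poisson data, the relative noise is the inverse square root of the mean count. A minimal simulation illustrates this (the count levels and pixel count here are arbitrary, chosen only for the demonstration; this is not part of the proposal's methodology):

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_noise(mean_counts, n_pixels=100_000):
    """Empirical relative Poisson noise (std/mean) at a given mean
    count per pixel, estimated from simulated pixel values."""
    counts = rng.poisson(mean_counts, size=n_pixels)
    return counts.std() / counts.mean()

# For Poisson data std/mean = 1/sqrt(mean), so halving the counts
# (i.e., halving the administered activity at fixed acquisition
# duration) raises the relative noise by roughly sqrt(2).
```

For example, `relative_noise(100.0)` is close to 0.10, while `relative_noise(50.0)` is close to 0.14.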
Balancing the tradeoff between RD and IQ is especially important for children, who are often considered more vulnerable to radiation than adults. In nuclear medicine imaging, reducing the RD requires reducing the administered activity (AA). Lower AA results in increased Poisson noise in the images, or requires longer acquisition durations to maintain the noise level. In pediatric nuclear medicine, it is desirable to use the lowest AA and the shortest acquisition duration that give sufficient IQ for clinical diagnosis. In current clinical practice, AA for pediatric molecular imaging is often based on the North American consensus guidelines (U.S.) or the European pediatric dosage card (Europe). Both of these dosing guidelines scale the adult AA by patient weight, subject to upper and lower constraints on the AA. However, these guidelines were developed from expert consensus or rough surrogates for IQ (estimated count rates) rather than rigorous, objective measures of performance on the diagnostic task.
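The weight-based scaling with upper and lower constraints described above can be sketched as follows (a minimal illustration only: the function name and all numeric values are assumptions for the example, not the actual guideline constants):

```python
def weight_based_aa(adult_aa_mbq, weight_kg, adult_weight_kg=70.0,
                    min_aa_mbq=15.0, max_aa_mbq=100.0):
    """Scale an adult administered activity (AA) linearly by patient
    weight, then clamp to lower and upper limits, as in weight-based
    pediatric dosing schemes. All constants are illustrative."""
    scaled = adult_aa_mbq * (weight_kg / adult_weight_kg)
    return max(min_aa_mbq, min(scaled, max_aa_mbq))

# A 35 kg child receives half the adult AA; very light or very heavy
# patients hit the lower or upper constraint, respectively.
```

The key point of the proposal is that the linear scaling step above is calibrated by consensus, not by requiring constant task performance across patients.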
In this research, we propose a general framework for optimizing RD with task-based assessment of IQ. Here, IQ is defined as an objective measure of observer performance on the diagnostic task that the images were acquired to answer. Specifically, we propose to establish relationships between AA, acquisition duration, measures of body habitus, and IQ for pediatric patients undergoing renal molecular imaging procedures. To quantify IQ in terms of renal defect detectability, we have developed a projection image database modeling imaging of 99mTc-DMSA, a renal function agent. The database uses a highly realistic population of pediatric phantoms with anatomical and body-morphological variations. Using this projection image database, we have explored patient factors that affect IQ and are currently determining the relationship between IQ and AA (the IQ-AA curve) in terms of these factors. Our preliminary data show that the current weight-based guidelines are not optimal, in the sense that they do not give the same image quality for patients with the same weight. Furthermore, we have found that factors more local to the target organ may be more robust than weight for estimating the AA needed to provide a constant IQ across a population of patients. In the case of renal imaging, we have found that girth is more robust than weight in predicting the AA needed to provide a desired IQ. In addition, to simulate a full clinical multi-slice detection task (as a nuclear medicine physician would perform it), we propose to develop a CNN-based model observer. We will perform human observer studies to verify and calibrate the developed model observers used to generate the IQ-AA curves.
The results of this proposal will provide the data needed by standards bodies to develop improved dosing guidelines for pediatric molecular imaging that result in more consistent image quality and absorbed dose.
Title: Applications of high-speed optical signal processing in high-dimensional data acquisition
Abstract: Thanks to their large bandwidth and their ability to capture large amounts of information in parallel, optical technologies have transformed the way we capture, process, and communicate information. In this talk I will discuss how optical signal processing can be used in conjunction with novel data compression strategies to break the decades-long bottleneck faced by electronic systems. In particular, I will discuss the utility of optical signal processing in big-data applications ranging from high-speed material characterization to capturing neural signals over large volumes at unprecedented depth and speed.
During the first half of this talk I will discuss how we are taking advantage of parallel image acquisition techniques to gain a deeper understanding of rapidly evolving combustion events over a broad spectral range. Despite a rich body of scientific research, the volatile nature of the combustion process has been an obstacle to understanding the chemical kinetics involved in flame propagation and evolution. Many combustive reactions occur on sub-millisecond time scales and involve high-velocity motion and interaction of fuel reagents. Hyperspectral imaging technologies are an attractive solution because they combine high spatial resolution with fine spectral resolution. However, most conventional hyperspectral cameras rely on slow scanning mechanisms and are therefore ill-suited for capturing fast-evolving events. The emergence of compressive sensing (CS) over the past decade has opened the door to acquiring high-dimensional signals at high speed. In the first part of this talk I will discuss how novel optical techniques can be combined with CS algorithms to realize mega-frame-rate hyperspectral imaging platforms for material diagnostics.
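The core idea of CS is that a signal that is sparse in some basis can be recovered from far fewer measurements than its ambient dimension. A minimal sketch using the classic iterative shrinkage-thresholding algorithm (ISTA) illustrates this; the dimensions, sparsity level, and parameter values below are illustrative and unrelated to the actual hyperspectral system discussed in the talk:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=2000):
    """Iterative shrinkage-thresholding (ISTA): recover a sparse x
    from underdetermined linear measurements y = A @ x by minimizing
    0.5*||A x - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * (A.T @ (A @ x - y))       # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-threshold
    return x

# Demo: 40 random measurements of a 100-dimensional signal with
# only 4 nonzero entries.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 17, 42, 77]] = [1.0, -2.0, 1.5, 0.7]
y = A @ x_true
x_hat = ista(A, y)
```

Here the sparse signal is recovered to small relative error from measurements numbering well under half the signal dimension, which is the property that makes high-speed compressive acquisition possible.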
The second portion of my talk will focus on high spatio-temporal neural recording applications. Multi-photon microscopy has been a major breakthrough in overcoming optical scattering when imaging individual neurons deep inside the brain of live animals. Despite their impressive image quality and robustness to scattering, point-scanning multi-photon microscopes face a fundamental trade-off between field of view (FOV) and imaging speed. Higher-speed, volumetric multi-photon imaging and stimulation technologies have the potential to revolutionize monitoring of neural network activity in vivo. In this part I will discuss our efforts to develop a scalable, volumetric, two-photon neural recording technology that combines rapid, volumetric scanning of a wide illumination field with synchronized high-resolution dynamic spatial patterning within the illumination field. This approach will allow us both to rapidly address large volumes and to achieve high-resolution random access within sub-regions of the scan. We will leverage the random-access capabilities of this hardware to implement compressive and adaptive imaging strategies that maximize the image information acquired for a given time and laser energy.
Title: Modeling Cellular Events: Chemotaxis and Aneuploidy
Abstract: Biology is the ‘study of complex natural things’, and biologists are mostly interested in the details of that complexity in a system. But often a simpler mathematical model proves very efficient in deciphering the basic working principles underlying the system. Despite their usefulness, these models are often criticized for not being able to explain in sufficient detail the wide range of experimental observations arising from different pharmacological and genetic perturbations.