Research Project

A Better View from Repeated Exposures

Did you know that stars twinkle due to turbulence in the Earth’s atmosphere? We use repeated exposures to simultaneously infer the latent image of the sky and characterize the blur in each observation.

Today dedicated telescopes systematically survey the sky every night and produce a constant supply of raw data. In particular, the widest and deepest observations are obtained by multicolor imaging cameras, such as the one used by the Sloan Digital Sky Survey (SDSS). The development of these high-throughput devices has changed how astronomers think about and conduct their studies. Inexpensive point-and-shoot cameras take pictures at tens of megapixels, while ongoing scientific experiments already use gigapixel sensors; the largest planned astronomical camera is a 3.2-gigapixel array of CCD devices, which will take 600,000 CDs' worth of data every day. The vision of these cameras is often blurred by ever-changing, unknown distortions, such as those induced by the atmosphere and the other media that act as a screen between the sky and the sensor. Building instruments to correct for these artifacts is often impossible or prohibitively expensive in practice. For example, we pay a high price for launching telescopes into space to eliminate the fluctuating air between the camera's lenses and the celestial objects, or for developing deformable mirrors to correct the distortions that the atmosphere imprints on the wavefront.

How can we address these issues at a tractable cost? The solution will ultimately rely on repeated measurements. The current state of the art is to select a small number of lucky images with minimal blur and combine them for the best signal-to-noise ratio. The images, however, can only be consistently co-added if their point-spread functions (PSFs) are the same. This matching is achieved by convolving the images to a common PSF, which essentially degrades every measurement to the worst acceptable quality. There has to be a better way!
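To make the cost of PSF matching concrete, here is a minimal sketch of convolution-based co-addition, assuming (purely for illustration) registered exposures whose PSFs are Gaussians of known width; the function and variable names are hypothetical and not part of any existing pipeline.

    # Sketch only: degrade every exposure to the widest PSF, then average.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def coadd_to_common_psf(images, psf_sigmas):
        """Convolve each registered exposure up to the worst PSF, then average.

        images     : list of 2-D float arrays covering the same field
        psf_sigmas : per-exposure Gaussian PSF widths, in pixels
        """
        worst = max(psf_sigmas)
        matched = []
        for img, sigma in zip(images, psf_sigmas):
            # For Gaussians, variances add under convolution, so this kernel
            # takes the exposure's PSF to the common (worst) one.
            extra = np.sqrt(max(worst**2 - sigma**2, 0.0))
            matched.append(gaussian_filter(img, extra))
        # Straight mean; inverse-variance weighting is the usual refinement.
        return np.mean(matched, axis=0)

Every frame is smeared out to match the single worst exposure in the stack, which is exactly the information loss this project aims to avoid.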

This research seeks to advance observational astronomy by building an automated learning system that addresses these key issues. Using elements of blind deconvolution, robust statistics, and online/streaming algorithms, we are developing new strategies to extract images that are both high resolution and high signal-to-noise. The primary hardware platform for this project is built on Graphics Processing Units (GPUs) for efficient computation, an approach that has the potential to scale to next-generation surveys.
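As one hedged illustration of how these ingredients could fit together (a sketch under simplifying assumptions, not the project's actual algorithm), the code below alternates Richardson-Lucy-style multiplicative updates between each incoming exposure's unknown PSF and a shared latent sky image, so frames can be processed one at a time. The circular-convolution model, function names, and parameters are all illustrative.

    # Sketch only: streaming multi-frame blind deconvolution via alternating
    # Richardson-Lucy updates, with circular convolution done through FFTs.
    import numpy as np

    def conv2(a, b):
        """Circular 2-D convolution of two image-sized arrays."""
        return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

    def corr2(a, b):
        """Circular 2-D cross-correlation of a with b (adjoint of conv2(., b))."""
        return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

    def update_with_exposure(latent, exposure, n_iter=10, eps=1e-12):
        """Refine the shared latent image with one new blurry exposure.

        Both inputs are float 2-D arrays of the same shape; the exposure's
        PSF is unknown and is estimated jointly with the latent image.
        """
        psf = np.full_like(exposure, 1.0 / exposure.size)   # flat initial PSF guess
        for _ in range(n_iter):
            ratio = exposure / (conv2(latent, psf) + eps)    # data / current model
            psf = psf * corr2(ratio, latent) / (latent.sum() + eps)
            psf /= psf.sum() + eps                           # keep the PSF normalized
            ratio = exposure / (conv2(latent, psf) + eps)    # re-blur with the new PSF
            latent = latent * corr2(ratio, psf)              # RL image update (PSF sums to 1)
            latent = np.clip(latent, 0.0, None)              # guard against round-off
        return latent, psf

A streaming driver would simply initialize the latent image from the first frame and call `update_with_exposure` once per incoming exposure, so each frame refines both its own blur estimate and the accumulated sky image without revisiting old data.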

Our approach allows us to extract information from multiple noisy measurements (left) to create a single higher-quality image (right). The preliminary results show sharper features and reveal new sources (circled in red).

