Our lab develops Computational Imaging methods, which jointly design optics, devices, signal processing, and algorithms to enable capabilities that none of these components can achieve alone. Our research is inherently interdisciplinary, combining expertise in optics, signal processing, and information science.
- We build imaging systems that capture far richer information (e.g. 3D, optical phase, coherence) than traditional systems can.
- We use algorithms from signal processing, linear and nonlinear optimization, machine learning, and compressed sensing.
- We work on scientific and biomedical microscopy, as well as commercial cameras.
We work on a new computational microscopy platform, in which a commercial microscope is hacked by replacing the original lamp with an LED array. By computationally designing the illumination patterns, and without any moving parts, one can achieve Gigapixel imaging (wide field of view at high resolution), wide-field high-resolution 3D phase imaging, and real-time multi-modal imaging (brightfield, darkfield, differential phase contrast). Read more here.
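As a concrete illustration, the differential phase contrast (DPC) mode is commonly computed as the normalized difference of two images captured under complementary half-circle LED illumination. Below is a minimal sketch in plain NumPy; the function name and the small `eps` regularizer are illustrative choices, not the platform's actual code:

```python
import numpy as np

def dpc_image(i_left, i_right, eps=1e-9):
    """Differential phase contrast from two half-circle illumination images.

    The normalized difference of images taken under complementary half-pupil
    LED patterns yields a contrast image sensitive to phase gradients.
    """
    i_left = np.asarray(i_left, dtype=float)
    i_right = np.asarray(i_right, dtype=float)
    # eps guards against division by zero in dark regions
    return (i_left - i_right) / (i_left + i_right + eps)
```

In practice this contrast image would then be deconvolved to recover quantitative phase; only the contrast computation is shown here.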
Our recent work on coded illumination achieves Gigapixel phase imaging with sub-second capture times, which enables capturing videos of live samples without motion artifacts. Below is a Gigapixel video of unstained HeLa cells undergoing division over the course of 4 hours.
Imaging in Scattering Media
Multiple scattering is a longstanding challenge in many important areas, such as remote sensing and deep tissue imaging. It is the reason we cannot see far on a foggy day, or see through the palm of a hand.
The problem stems from the fact that each photon no longer follows a straight line (as it does in clear air), but instead takes a random path. Our eyes and traditional optical instruments are tuned to recognize the “ballistic photons” (those that follow straight paths); the “multiply scattered photons” only generate a diffuse background that obscures the objects of interest.
Fortunately, these multiply scattered photons still carry useful information about the object (an example is shown in the right figure). Our goal is to develop novel computational imaging methods that extract the information traditional methods would otherwise miss. Our work spans modeling of multiple scattering, imaging system design, and reconstruction algorithm development. Read more here.
Most real-world signals can be concisely represented in some domain, i.e. they are sparse. Sparsity is the reason we can use efficient compression algorithms (e.g. JPEG, MPEG) to access a wealth of information with minimal data usage.
Compressive imaging exploits the sparsity of optical signals when designing computational imaging systems, in order to drastically reduce the data requirement. It has found a wide spectrum of applications, such as object tracking (figure in the top right), imaging through scattering media, and multi-dimensional imaging.
Our research exploits sparsity in both vector representations (i.e. the number of non-zero coefficients in some basis; top-right figure) and matrix correlations (i.e. the rank of a matrix; bottom-right figure). Read more here.
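How sparsity enables recovery from far fewer measurements than unknowns can be illustrated with the classic iterative shrinkage-thresholding algorithm (ISTA) for the ℓ1-regularized least-squares problem. This is a generic textbook sketch, not our lab's reconstruction code:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrink each entry toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.1, n_iter=200):
    """ISTA for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1.

    Alternates a gradient step on the data-fit term with soft-thresholding,
    which promotes sparse solutions.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # inverse Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x
```

With a random 40×80 measurement matrix, a 3-sparse signal is recovered accurately even though the system is underdetermined.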
Lightfield / Phase Space / Coherence Imaging
When we see the 3D world, we rely on binocular vision and perspective cues. More generally, 3D information is obtained through knowledge of both space and angle simultaneously.
In imaging, the function that simultaneously describes space and angle information is known as the “lightfield”. Angular information is related to the “spatial frequency” of light, and the angular spread is in turn determined by the “spatial coherence” of light. These relations are captured elegantly by the frameworks of “phase space” and “optical coherence”.
Our lab develops computational imaging methods that fully capture the space-angle information to enable novel capabilities, such as super-resolution 3D microscopy (right figure) and imaging through scattering media. Read more here.
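A simple computational use of space-angle data is synthetic refocusing: shift each angular view in proportion to its angle, then average, so that objects at the chosen depth align and everything else blurs. A minimal 1-D shift-and-add sketch (our own illustrative code, with a simplified integer-pixel shift):

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing of a (n_views, n_pixels) 1-D lightfield.

    Each angular view is shifted in proportion to its view index relative to
    the central view, scaled by the refocus parameter alpha, then averaged.
    """
    n_views, n_pix = lightfield.shape
    center = (n_views - 1) / 2.0
    out = np.zeros(n_pix)
    for u in range(n_views):
        shift = int(round(alpha * (u - center)))
        out += np.roll(lightfield[u], shift)
    return out / n_views
```

A point with one pixel of disparity per view refocuses to a single sharp peak when alpha cancels the disparity.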
Digital Holographic Imaging
Digital holography is a simple yet powerful 3D imaging technique that can be easily implemented with a lensless setup. By recording the interference between the object wave and a reference wave, a hologram captures the 3D information in a single shot. Intuitively, the amount of spread in the diffraction fringes encodes the depth information. The 3D information can then be reconstructed digitally in post-processing.
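The digital reconstruction step can be sketched with the standard angular spectrum method: multiply the hologram field's Fourier transform by a free-space propagation kernel to refocus it to a chosen depth. A minimal sketch assuming square pixels (the function name and evanescent-wave handling are our own choices):

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, dx):
    """Propagate a 2-D complex field over distance z via the angular spectrum method.

    field      : complex array (the recorded/recovered wavefront)
    z          : propagation distance (same units as wavelength and dx)
    wavelength : optical wavelength
    dx         : pixel pitch (square pixels assumed)
    """
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)
    fy = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent components set to kz = 0
    H = np.exp(1j * kz * z)  # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Propagating forward and then backward by the same distance recovers the original field, which is a quick sanity check on the kernel.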
Holography has a wide range of applications, such as particle tracking (the movie on the right), and biological imaging. Read more here.
Phase Retrieval Algorithms
Phase retrieval is a fundamental problem in optical imaging. Light is a wave with both intensity and phase, but our eyes and cameras can sense only intensity (the power of the light), not phase, because the optical wave oscillates far too fast to capture directly.
Fortunately, one can design imaging systems that first convert the phase information into measurable intensity variations, and then computationally reconstruct the phase. These phase retrieval techniques have a wide range of applications: for example, they have been used to characterize the aberrations of the Hubble Space Telescope, to reconstruct Gigapixel phase in Fourier Ptychography, and to image through a diffusing screen.
Our lab develops phase retrieval algorithms based on measurements from angled illumination (e.g. Fourier Ptychography), defocus (e.g. Transport of Intensity), and spatial translation (e.g. ptychography). We develop algorithms within the frameworks of deconvolution, nonlinear optimization, statistical inference, and compressed sensing. Read more here.
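As a concrete example of the alternating-projection family of phase retrieval algorithms, the classic Gerchberg–Saxton loop alternates between enforcing the measured amplitude in the object plane and in the Fourier plane. This is the textbook algorithm, not a sketch of our lab's solvers:

```python
import numpy as np

def gerchberg_saxton(amp_obj, amp_fourier, n_iter=200, seed=0):
    """Recover phase from amplitudes measured in the object and Fourier planes.

    Starts from a random phase guess and alternately replaces the amplitude
    in each plane with the measured one, keeping the current phase estimate.
    """
    rng = np.random.default_rng(seed)
    field = amp_obj * np.exp(1j * rng.uniform(0, 2 * np.pi, amp_obj.shape))
    for _ in range(n_iter):
        F = np.fft.fft2(field)
        F = amp_fourier * np.exp(1j * np.angle(F))      # enforce Fourier-plane amplitude
        field = np.fft.ifft2(F)
        field = amp_obj * np.exp(1j * np.angle(field))  # enforce object-plane amplitude
    return field
```

The returned field satisfies the object-plane amplitude constraint exactly and, at convergence, is consistent with the Fourier-plane measurement as well (the phase is recovered up to inherent ambiguities such as a global phase offset).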