Our lab develops Computational Imaging methods, which jointly design optics, devices, signal processing, and algorithms to enable capabilities that none of these alone can achieve. Our research is inherently interdisciplinary, combining expertise in optical engineering, physics, and computation. We work on imaging technologies for scientific, biomedical, and neuroscience applications.
Reliable deep learning for quantitative biomedical imaging
Deep learning has achieved tremendous success in solving complex inverse problems in imaging. We are particularly interested in developing innovative deep learning techniques that are scalable and reliable for biomedical imaging problems with wide-ranging applications, such as single-cell imaging, neurophotonics, and cancer cell characterization. Read more here.
Our recent work on Bayesian deep learning demonstrates a new framework that provides uncertainty quantification, enabling reliable predictions.
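To illustrate the idea behind uncertainty quantification in Bayesian deep learning, here is a minimal sketch using Monte Carlo dropout on a hypothetical one-layer network. The weights and network shape are made-up placeholders, not the lab's actual framework; the point is only that sampling stochastic forward passes yields a predictive mean and a spread that quantifies model uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 32
W = rng.normal(size=(n_hidden, n_in))   # hypothetical "trained" weights
v = rng.normal(size=n_hidden)

def mc_dropout_predict(x, n_samples=200, drop_p=0.5):
    """Predictive mean and std from dropout masks sampled at *test* time."""
    preds = []
    for _ in range(n_samples):
        # Inverted dropout: zero out units, rescale the survivors.
        mask = (rng.random(n_hidden) > drop_p) / (1.0 - drop_p)
        h = np.maximum(W @ x, 0.0) * mask   # ReLU hidden layer with dropout
        preds.append(v @ h)
    preds = np.array(preds)
    return preds.mean(), preds.std()        # std = uncertainty estimate

mean, std = mc_dropout_predict(np.ones(n_in))
```

A large `std` flags inputs on which the sampled networks disagree, which is exactly the kind of signal a reliable imaging pipeline can use to reject untrustworthy reconstructions.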
Physics-informed deep learning imaging
The next generation of deep learning techniques will embed physics into their design, spurring innovation and discovery across many areas of science and technology. Our recent work exploits “deep” correlations of speckle patterns in diffusive media, enabling highly scalable imaging through scattering media. Read more here.
Computational biophotonic imaging systems
Computational imaging enables a wide range of impactful biomedical applications by integrating novel biophotonic instrumentation with state-of-the-art algorithms.
We are actively developing new computational biophotonic imaging systems in the following areas:
Intensity diffraction tomography for multi-scale label-free imaging
“Phase” information is typically considered “invisible” to pure intensity measurements. Quantifying label-free phase information in 3D is even more challenging and is often achieved using interferometers paired with complex scanning mechanisms.
Our recent work develops scan-free, motion-free intensity diffraction tomography techniques that do not require interferometry yet allow quantitative reconstruction of 3D phase information.
Our latest work demonstrates >10 Hz volume rates, making real-time label-free phase quantification of dynamic phenomena possible.
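The core linear reconstruction step in intensity diffraction tomography can be sketched as follows. Under the weak-object approximation, the intensity spectrum relates to the phase spectrum through a real-valued phase transfer function, so phase is recovered by Tikhonov-regularized deconvolution. The transfer function `H` below is a toy defocus-like placeholder, not the lab's actual model.

```python
import numpy as np

def tikhonov_phase(intensity, H, reg=1e-2):
    """Recover phase from one intensity image via regularized deconvolution."""
    I_hat = np.fft.fft2(intensity - intensity.mean())   # remove DC background
    phi_hat = H * I_hat / (H ** 2 + reg)                # Tikhonov inverse filter
    return np.real(np.fft.ifft2(phi_hat))

n = 64
fx = np.fft.fftfreq(n)
# Toy real-valued phase transfer function (illustrative stand-in).
H = np.sin(np.pi * (fx[:, None] ** 2 + fx[None, :] ** 2) / 0.25)

phi_true = np.zeros((n, n))
phi_true[24:40, 24:40] = 0.1                            # weak phase object
# Simulated measurement: phase filtered by H, plus a uniform background.
I = np.real(np.fft.ifft2(H * np.fft.fft2(phi_true))) + 1.0
phi_rec = tikhonov_phase(I, H)
```

The regularizer `reg` trades noise amplification against fidelity at frequencies where the transfer function is weak; real systems combine many such measurements (e.g. per LED angle) into one joint inverse problem.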
We work on a new computational microscopy platform, in which a commercial microscope is hacked by replacing the original lamp with an LED array. By computationally designing illumination patterns, and without any moving parts, one can achieve Gigapixel imaging (wide field of view and high resolution), 3D wide-field high-resolution phase imaging, and real-time multi-modal imaging (brightfield, darkfield, differential phase contrast). Read more here.
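The differential phase contrast (DPC) modality mentioned above reduces to a very simple computation: two images taken under complementary half-circle LED illumination are combined into a normalized difference, which to first order is proportional to the phase gradient. A minimal sketch (the images here are synthetic stand-ins for real camera frames):

```python
import numpy as np

def dpc(i_top, i_bottom, eps=1e-6):
    """Normalized difference of two half-circle illumination images."""
    # The normalization by the sum cancels absorption/illumination variations,
    # leaving (approximately) the phase-gradient contrast.
    return (i_top - i_bottom) / (i_top + i_bottom + eps)

rng = np.random.default_rng(0)
i_top = 1.0 + 0.1 * rng.random((8, 8))      # synthetic "top half" image
i_bottom = 1.0 + 0.1 * rng.random((8, 8))   # synthetic "bottom half" image
img = dpc(i_top, i_bottom)
```

A second left/right pair gives the orthogonal gradient component; quantitative phase then follows from a deconvolution with the DPC transfer function.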
Our recent work on coded illumination achieves Gigapixel phase imaging with sub-second capture times. This enables capturing videos of live samples without motion artifacts. Below is a Gigapixel video of unstained HeLa cells undergoing division over the course of 4 hours.
Imaging in Scattering Media
Multiple scattering is a longstanding challenge in many important areas, such as remote sensing and deep-tissue imaging. It is the reason we cannot see far on a foggy day or see through one’s palm.
The problem stems from the fact that each photon no longer follows a straight line (as in air) but instead takes a random path. Our eyes and traditional optical instruments are tuned to recognize the “ballistic photons” (those that follow straight paths); the “multiply scattered photons” only generate a diffuse background that obscures the objects of interest.
Fortunately, these multiply scattered photons still contain useful information about the object (an example is in the right figure). Our goal is to develop novel computational imaging methods that extract the extra information traditional methods would otherwise miss. Our work spans modeling of multiple scattering, imaging system design, and reconstruction algorithm development. Read more here.
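One concrete, well-known example of how scattered light retains object information: within the optical “memory effect” range, the autocorrelation of a camera-plane speckle pattern matches the autocorrelation of the hidden object, from which the object can be recovered by phase retrieval. This sketch computes only the autocorrelation step, via the Wiener-Khinchin relation; it is a generic illustration, not the lab's specific method.

```python
import numpy as np

def autocorrelation(img):
    """FFT-based autocorrelation of a mean-subtracted image."""
    f = np.fft.fft2(img - img.mean())
    # Power spectrum back-transformed = circular autocorrelation.
    return np.real(np.fft.ifft2(np.abs(f) ** 2))

rng = np.random.default_rng(0)
speckle = rng.random((32, 32))   # stand-in for a measured speckle image
ac = autocorrelation(speckle)    # zero-lag peak sits at index (0, 0)
```

Because the autocorrelation discards phase, a subsequent phase retrieval step (as in the section below) is needed to turn it back into an image of the object.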
Digital Holographic Imaging
Digital holography is a simple yet powerful 3D imaging technique. It can be easily implemented with a lensless setup. By recording the interference between the object and reference waves, a hologram captures the 3D information in a single shot. Intuitively speaking, the amount of spread in the diffraction fringes encodes the depth information. The 3D information can then be digitally reconstructed in post-processing.
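The digital reconstruction step is typically a numerical back-propagation of the recorded hologram, for instance by the angular spectrum method: refocusing to a depth z is a single multiplication by a transfer function in the Fourier domain. A minimal sketch (the wavelength and pixel size are illustrative values):

```python
import numpy as np

def angular_spectrum(field, z, wavelength=0.5e-6, dx=1e-6):
    """Propagate a complex field by distance z (meters) via the angular spectrum."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                      # spatial frequencies (1/m)
    fx2 = fx[:, None] ** 2 + fx[None, :] ** 2
    # Propagating waves only; evanescent components are clipped to zero.
    arg = np.maximum(1.0 / wavelength ** 2 - fx2, 0.0)
    H = np.exp(2j * np.pi * z * np.sqrt(arg))         # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

aperture = np.zeros((32, 32), dtype=complex)
aperture[16, 16] = 1.0                   # a point scatterer in the hologram plane
u = angular_spectrum(aperture, 50e-6)    # diffracted field 50 µm away
```

Propagating with `-z` refocuses the field back, which is exactly how a depth stack is reconstructed from a single hologram.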
Holography has a wide range of applications, such as particle tracking (the movie on the right) and biological imaging. Read more here.
Our latest work develops a multiple-scattering-based holographic model that enables accurate particle localization in 3D.
Lightfield / Coherence Imaging
When we see the 3D world, we rely on our binocular vision and perspective cues. More generally speaking, 3D information is obtained through simultaneous knowledge of both space and angle.
In imaging, the function that simultaneously describes space and angle information is known as the “lightfield”. Angular information is related to the “spatial frequency” of light, and the angular spread is further determined by the “spatial coherence” of light. These relations are captured elegantly by the frameworks of “phase space” and “optical coherence”.
Our lab develops computational imaging methods that fully capture the space-angle information to enable novel capabilities, such as super-resolution 3D microscopy (right figure) and imaging through scattering. Read more here.
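The simplest computation on a captured lightfield illustrates why space-angle information encodes depth: shift-and-add refocusing shifts each angular view in proportion to its angle and averages, bringing a chosen depth plane into focus. The 4D lightfield below is synthetic (every view is a shifted copy of one image, i.e. a single depth plane); the shift scale `alpha` selects the focal plane.

```python
import numpy as np

def refocus(L, alpha):
    """Shift-and-add refocus of a 4D lightfield L[u, v, x, y]."""
    nu, nv, nx, ny = L.shape
    out = np.zeros((nx, ny))
    for u in range(nu):
        for v in range(nv):
            # Shift each view proportionally to its angular coordinate.
            shift = (round(alpha * (u - nu // 2)), round(alpha * (v - nv // 2)))
            out += np.roll(L[u, v], shift, axis=(0, 1))
    return out / (nu * nv)

base = np.zeros((16, 16))
base[8, 8] = 1.0                          # a point at one depth plane
# Synthetic lightfield: view (u, v) sees the point shifted by its parallax.
L = np.stack([
    np.stack([np.roll(base, (-(u - 2), -(v - 2)), axis=(0, 1)) for v in range(5)])
    for u in range(5)
])
refocused = refocus(L, alpha=1.0)         # realigns every view onto the point
```

With the wrong `alpha` the views do not align and the point blurs out, which is the lightfield picture of defocus.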
Phase Retrieval
Phase retrieval is a fundamental problem in optical imaging. Light is a wave with both intensity and phase. However, our eyes and cameras can only sense intensity (the power of light), not the phase, since the optical wave oscillates too fast to capture directly.
Fortunately, one can design imaging systems that first convert the phase information into measurable intensity variations and then computationally reconstruct the phase. These phase retrieval techniques have a wide range of applications: for example, they have been used to characterize aberrations in the Hubble Space Telescope, to reconstruct Gigapixel phase in Fourier Ptychography, and to image through a diffusing screen.
Our lab develops phase retrieval algorithms based on measurements from angled illumination (e.g. Fourier Ptychography), defocus (e.g. Transport of Intensity), and spatial translations (e.g. Ptychography). We develop algorithms within the frameworks of deconvolution, nonlinear optimization, statistical inference, and compressed sensing. Read more here.
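The classic Gerchberg-Saxton iteration is the simplest member of the phase retrieval family and shows the alternating-projection idea: bounce between the object and Fourier planes, enforcing the measured magnitude in each. This is an illustration only; the algorithms above use richer data such as angled-illumination and defocus stacks.

```python
import numpy as np

def gerchberg_saxton(amp_obj, amp_fourier, n_iter=100):
    """Recover a complex field from its object- and Fourier-plane magnitudes."""
    field = amp_obj.astype(complex)                        # zero-phase start
    for _ in range(n_iter):
        F = np.fft.fft2(field)
        F = amp_fourier * np.exp(1j * np.angle(F))         # Fourier-magnitude constraint
        field = np.fft.ifft2(F)
        field = amp_obj * np.exp(1j * np.angle(field))     # object-magnitude constraint
    return field

rng = np.random.default_rng(0)
truth = np.exp(1j * rng.uniform(0, 2 * np.pi, (16, 16)))   # unit-amplitude phase object
amp_obj = np.abs(truth)
amp_fourier = np.abs(np.fft.fft2(truth))
rec = gerchberg_saxton(amp_obj, amp_fourier)
```

A well-known property of this iteration is that the Fourier-magnitude error never increases, which is why it makes a reliable baseline even though modern solvers replace it with nonlinear optimization.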
Compressive Imaging
Most real-life signals can be concisely represented in some domain, i.e. they are sparse. Sparsity is the reason we can use efficient compression algorithms (e.g. JPEG, MPEG) to access a wealth of information with minimal data usage.
Compressive imaging exploits the sparsity of optical signals in the design of computational imaging systems in order to drastically reduce data requirements. It has a wide spectrum of applications, such as object tracking (figure in the top right), imaging through scattering, and multi-dimensional imaging.
Our research exploits sparsity in both vector representations (i.e. the number of non-zero coefficients in some basis; top-right figure) and matrix correlations (i.e. the rank of a matrix; bottom-right figure). Read more here.
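The vector-sparsity case can be sketched with the textbook sparse-recovery algorithm, iterative soft-thresholding (ISTA): recover a sparse x from an underdetermined measurement y = A x by alternating a gradient step on the data term with a soft-threshold that promotes sparsity. The matrix A and signal here are synthetic, not from any actual imaging system.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L              # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100)) / np.sqrt(40)       # 40 measurements, 100 unknowns
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -1.5, 2.0]             # a 3-sparse signal
y = A @ x_true                                     # compressive measurements
x_hat = ista(A, y)
```

Even with 2.5x fewer measurements than unknowns, the sparsity prior pins down the solution, which is exactly the data reduction compressive imaging systems are designed around.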