Full publication list at Google Scholar.


Diffuser-based computational imaging funduscope
Yunzhe Li, Gregory N. McKay, Nicholas J. Durr, Lei Tian
Optics Express 28, pp. 19641-19654 (2020)

Poor access to eye care is a major global challenge that could be ameliorated by low-cost, portable, and easy-to-use diagnostic technologies. Diffuser-based imaging has the potential to enable inexpensive, compact optical systems that can reconstruct a focused image of an object over a range of defocus errors. Here, we present a diffuser-based computational funduscope that reconstructs important clinical features of a model eye. Compared to existing diffuser-imager architectures, our system features an infinite-conjugate design by relaying the ocular lens onto the diffuser. This offers shift-invariance across a wide field-of-view (FOV) and an invariant magnification across an extended depth range. Experimentally, we demonstrate fundus image reconstruction over a 33° FOV and robustness to ±4D refractive error using a constant point-spread-function. Combined with diffuser-based wavefront sensing, this technology could enable combined ocular aberrometry and funduscopic screening through a single diffuser sensor.

Displacement-agnostic coherent imaging through scatter with an interpretable deep neural network
Y Li, S Cheng, Y Xue, L Tian
arXiv preprint arXiv:2005.07318

Coherent imaging through scatter is a challenging topic in computational imaging. Both model-based and data-driven approaches have been explored to solve the inverse scattering problem. In our previous work, we have shown that a deep learning approach for coherent imaging through scatter can make high-quality predictions through unseen diffusers. Here, we propose a new deep neural network (DNN) model that is agnostic to a broader class of perturbations including scatter change, displacements, and system defocus up to 10× the depth of field. In addition, we develop a new analysis framework for interpreting the mechanism of our DNN model and visualizing its generalizability based on an unsupervised dimension reduction technique. We show that the DNN can unmix the diffuser/displacement information and distill the object-specific information to achieve generalization under different scattering conditions. Our work paves the way to a highly scalable deep learning approach to different scattering conditions and a new framework for network interpretation.

Comparing the fundamental imaging depth limit of two-photon, three-photon, and non-degenerate two-photon microscopy
Xiaojun Cheng, Sanaz Sadegh, Sharvari Zilpelwar, Anna Devor, Lei Tian, and David A. Boas
Optics Letters 45, pp. 2934-2937 (2020).

We have systematically characterized the degradation of imaging quality with depth in deep brain multi-photon microscopy, utilizing our recently developed numerical model that computes wave propagation in scattering media. The signal-to-background ratio (SBR) and the resolution determined by the width of the point spread function are obtained as functions of depth. We compare the imaging quality of two-photon (2PM), three-photon (3PM), and non-degenerate two-photon microscopy (ND-2PM) for mouse brain imaging. We show that the imaging depths of 2PM and ND-2PM are fundamentally limited by the SBR, while the SBR remains approximately invariant with imaging depth for 3PM. Instead, the imaging depth of 3PM is limited by the degradation of the resolution, if there is sufficient laser power to maintain the signal level at large depths. The roles of the concentration of dye molecules, the numerical aperture of the input light, the anisotropy factor, noise level, input laser power, and the effect of temporal broadening are also discussed.

Single-Shot 3D Widefield Fluorescence Imaging with a Computational Miniature Mesoscope
Yujia Xue, Ian G. Davison, David A. Boas, Lei Tian

Fluorescence imaging is indispensable to biology and neuroscience. The need for large-scale imaging in freely behaving animals has further driven the development of miniaturized microscopes (miniscopes). However, conventional microscopes/miniscopes are inherently constrained by their limited space-bandwidth product, shallow depth-of-field, and the inability to resolve 3D distributed emitters. Here, we present a Computational Miniature Mesoscope (CM2) that overcomes these bottlenecks and enables single-shot 3D imaging across an 8 × 7 mm² field-of-view and 2.5-mm depth-of-field, achieving 7-μm lateral and 250-μm axial resolution. Notably, the CM2 has a compact, lightweight design that integrates a microlens array for imaging and an LED array for excitation in a single platform. Its expanded imaging capability is enabled by computational imaging that augments the optics with algorithms. We experimentally validate the mesoscopic 3D imaging capability on volumetrically distributed fluorescent beads and fibers. We further quantify the effects of bulk scattering and background fluorescence in phantom experiments.

Plasmonic ommatidia for lensless compound-eye vision
Leonard C. Kogos, Yunzhe Li, Jianing Liu, Yuyu Li, Lei Tian & Roberto Paiella
Nature Communications 11: 1637 (2020).
In the news:
– BU ENG news: A Bug’s-Eye View

The vision system of arthropods such as insects and crustaceans is based on the compound-eye architecture, consisting of a dense array of individual imaging elements (ommatidia) pointing along different directions. This arrangement is particularly attractive for imaging applications requiring extreme size miniaturization, wide-angle fields of view, and high sensitivity to motion. However, the implementation of cameras directly mimicking the eyes of common arthropods is complicated by their curved geometry. Here, we describe a lensless planar architecture, where each pixel of a standard image-sensor array is coated with an ensemble of metallic plasmonic nanostructures that only transmits light incident along a small geometrically-tunable distribution of angles. A set of near-infrared devices providing directional photodetection peaked at different angles is designed, fabricated, and tested. Computational imaging techniques are then employed to demonstrate the ability of these devices to reconstruct high-quality images of relatively complex objects.

High-Throughput, High-Resolution Interferometric Light Microscopy of Biological Nanoparticles
C. Yurdakul, O. Avci, A. Matlock, A. J. Devaux, M. V. Quintero, E. Ozbay, R. A. Davey, J. H. Connor, W. C. Karl, L. Tian, M. S. Ünlü
ACS Nano 14, pp. 2002-2013 (2020).

Label-free, visible light microscopy is an indispensable tool for studying biological nanoparticles (BNPs). However, conventional imaging techniques have two major challenges: (i) weak contrast due to the low refractive-index difference with the surrounding medium and exceptionally small size and (ii) limited spatial resolution. Advances in interferometric microscopy have overcome the weak-contrast limitation and enabled direct detection of BNPs, yet lateral resolution remains a challenge in studying BNP morphology. Here, we introduce a wide-field interferometric microscopy technique augmented by computational imaging to demonstrate a 2-fold lateral resolution improvement over a large field-of-view (>100 × 100 μm²), enabling simultaneous imaging of more than 10⁴ BNPs at a resolution of ∼150 nm without any labels or sample preparation. We present a rigorous vectorial-optics-based forward model establishing the relationship between the intensity images captured under partially coherent asymmetric illumination and the complex permittivity distribution of nanoparticles. We demonstrate high-throughput morphological visualization of a diverse population of Ebola virus-like particles and a structurally distinct Ebola vaccine candidate. Our approach offers a low-cost and robust label-free imaging platform for high-throughput and high-resolution characterization of a broad size range of BNPs.

LED array reflectance microscopy for scattering-based multi-contrast imaging
Weiye Song, Alex Matlock, Sipei Fu, Xiaodan Qin, Hui Feng, Christopher V. Gabel, Lei Tian, and Ji Yi
Opt. Lett. 45, 1647-1650 (2020)

LED array microscopy is an emerging platform for computational imaging with significant utility for biological imaging. Existing LED array systems often exploit transmission imaging geometries of standard brightfield microscopes that leave the rich backscattered field undetected. This backscattered signal contains high-resolution sample information with superb sensitivity to subtle structural features that make it ideal for biological sensing and detection. Here, we develop an LED array reflectance microscope capturing the sample’s backscattered signal. In particular, we demonstrate multimodal brightfield, darkfield, and differential phase contrast imaging on fixed and living biological specimens including Caenorhabditis elegans (C. elegans), zebrafish embryos, and live cell cultures. Video-rate multimodal imaging at 20 Hz records real-time features of freely moving C. elegans and the fast beating heart of zebrafish embryos. Our new reflectance mode is a valuable addition to the LED array microscopy toolbox.

Design of a high-resolution light field miniscope for volumetric imaging in scattering tissue
Yanqin Chen, Bo Xiong, Yujia Xue, Xin Jin, Joseph Greene, and Lei Tian
Biomedical Optics Express. 11, pp. 1662-1678 (2020).

Integrating light field microscopy techniques with existing miniscope architectures has allowed for volumetric imaging of targeted brain regions in freely moving animals. However, the current design of light field miniscopes is limited by non-uniform resolution and long imaging path length. In an effort to overcome these limitations, this paper proposes an optimized Galilean-mode light field miniscope (Gali-MiniLFM), which achieves a more consistent resolution and a significantly shorter imaging path than its conventional counterparts. In addition, this paper provides a novel framework that incorporates the anticipated aberrations of the proposed Gali-MiniLFM into the point spread function (PSF) modeling. This more accurate PSF model can then be used in 3D reconstruction algorithms to further improve the resolution of the platform. Volumetric imaging in the brain necessitates the consideration of the effects of scattering. We conduct Monte Carlo simulations to demonstrate the robustness of the proposed Gali-MiniLFM for volumetric imaging in scattering tissue.

Inverse scattering for reflection intensity phase microscopy
Alex Matlock, Anne Sentenac, Patrick C. Chaumet, Ji Yi, and Lei Tian
Biomedical Optics Express. 11, pp. 911-926 (2020).

Reflection phase imaging provides label-free, high-resolution characterization of biological samples, typically using interferometric-based techniques. Here, we investigate reflection phase microscopy from intensity-only measurements under diverse illumination. We evaluate the forward and inverse scattering model based on the first Born approximation for imaging scattering objects above a glass slide. Under this design, the measured field combines linear forward-scattering and height-dependent nonlinear back-scattering from the object, which complicates object phase recovery. Using only the forward-scattering, we derive a linear inverse scattering model and evaluate this model’s validity range in simulation and experiment using a standard reflection microscope modified with a programmable light source. Our method provides enhanced contrast of thin, weakly scattering samples, complementing transmission techniques. This model provides a promising development for creating simplified intensity-based reflection quantitative phase imaging systems easily adoptable for biological research.

High-speed in vitro intensity diffraction tomography
Jiaji Li, Alex Matlock, Yunzhe Li, Qian Chen, Chao Zuo, Lei Tian
Advanced Photonics, 1(6), 066004 (2019).
Cover story.
⭑ Highlighted in the news: “Programmable LED ring enables label-free 3D tomography for conventional microscopes”

We demonstrate a label-free, scan-free intensity diffraction tomography technique utilizing annular illumination (aIDT) to rapidly characterize large-volume 3D refractive index distributions in vitro. By optimally matching the illumination geometry to the microscope pupil, our technique reduces the data requirement by 60× to achieve high-speed 10 Hz volume rates. Using 8 intensity images, we recover 350 × 100 × 20 μm³ volumes with near diffraction-limited lateral resolution of 487 nm and axial resolution of 3.4 μm. Our technique’s large volume rate and high resolution enable 3D quantitative phase imaging of complex living biological samples across multiple length scales. We demonstrate aIDT’s capabilities on unicellular diatom microalgae, epithelial buccal cell clusters with native bacteria, and live Caenorhabditis elegans specimens. Within these samples, we recover macroscale cellular structures, subcellular organelles, and dynamic micro-organism tissues with minimal motion artifacts. Quantifying such features has significant utility in oncology, immunology, and cellular pathophysiology, where these morphological features are evaluated for changes in the presence of disease, parasites, and new drug treatments. aIDT shows promise as a powerful high-speed, label-free microscopy technique for these applications where natural imaging is required to evaluate environmental effects on a sample in real time.

SIMBA: Scalable Inversion in Optical Tomography using Deep Denoising Priors
Zihui Wu, Yu Sun, Alex Matlock, Jiaming Liu, Lei Tian, Ulugbek S. Kamilov

Two features desired in a three-dimensional (3D) optical tomographic image reconstruction algorithm are the ability to reduce imaging artifacts and to do fast processing of large data volumes. Traditional iterative inversion algorithms are impractical in this context due to their heavy computational and memory requirements. We propose and experimentally validate a novel scalable iterative mini-batch algorithm (SIMBA) for fast and high-quality optical tomographic imaging. SIMBA enables high-quality imaging by combining two complementary information sources: the physics of the imaging system characterized by its forward model and the imaging prior characterized by a denoising deep neural net. SIMBA easily scales to very large 3D tomographic datasets by processing only a small subset of measurements at each iteration. We establish the theoretical fixed-point convergence of SIMBA under nonexpansive denoisers for convex data-fidelity terms. We validate SIMBA on both simulated and experimentally collected intensity diffraction tomography (IDT) datasets. Our results show that SIMBA can significantly reduce the computational burden of 3D image formation without sacrificing the imaging quality.
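The core loop, a data-fidelity gradient step on a random mini-batch of measurements followed by a denoising step, can be sketched on a toy linear inverse problem. This is an illustrative stand-in, not the paper's implementation: the forward model, the neighbor-averaging "denoiser", and all parameters below are hypothetical.

```python
import numpy as np

def smooth_denoiser(x, strength=0.1):
    """Stand-in for the deep denoising prior: mild neighbor averaging."""
    xp = np.pad(x, 1, mode="edge")
    return (1 - strength) * x + strength * (xp[:-2] + xp[2:]) / 2

def simba_like(y, A, n_iter=300, batch=8, step=0.05, rng=None):
    """Mini-batch plug-and-play iteration: gradient step on a random
    subset of the measurements, then the denoiser."""
    rng = np.random.default_rng(0) if rng is None else rng
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        idx = rng.choice(m, size=batch, replace=False)
        grad = A[idx].T @ (A[idx] @ x - y[idx]) / batch
        x = smooth_denoiser(x - step * grad)
    return x

# toy demo: consistent linear measurements of a smooth signal
rng = np.random.default_rng(1)
n, m = 50, 200
x_true = np.sin(np.linspace(0, 2 * np.pi, n))
A = rng.standard_normal((m, n))
y = A @ x_true
x_hat = simba_like(y, A, rng=rng)
```

Because only `batch` rows of `A` are touched per iteration, memory and per-step cost are independent of the total number of measurements, which is the property that lets a SIMBA-style method scale to large tomographic datasets.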

High-throughput, volumetric quantitative phase imaging with multiplexed intensity diffraction tomography
Alex Matlock, Lei Tian
Biomed. Opt. Express 10, pp. 6432-6448 (2019).

Intensity diffraction tomography (IDT) provides quantitative, volumetric refractive index reconstructions of unlabeled biological samples from intensity-only measurements. IDT is scanless and easily implemented in standard optical microscopes using an LED array but suffers from large data requirements and slow acquisition speeds. Here, we develop multiplexed IDT (mIDT), a coded illumination framework providing high volume-rate IDT for evaluating dynamic biological samples. mIDT combines illuminations from an LED grid using physical model-based design choices to improve acquisition rates and reduce dataset size with minimal loss to resolution and reconstruction quality. We analyze the optimal design scheme with our mIDT framework in simulation using the reconstruction error compared to conventional IDT and theoretical acquisition speed. With the optimally determined mIDT scheme, we achieve hardware-limited 4 Hz acquisition rates enabling 3D refractive index distribution recovery on live Caenorhabditis elegans worms and embryos as well as epithelial buccal cells. Our mIDT architecture provides a 60× speed improvement over conventional IDT and is robust across different illumination hardware designs, making it an easily adoptable imaging tool for volumetrically quantifying biological samples in their natural state.

Deep spectral learning for label-free optical imaging oximetry with uncertainty quantification
Rongrong Liu, Shiyi Cheng, Lei Tian, Ji Yi
Light: Science & Applications 8: 102 (2019).

Measurement of blood oxygen saturation (sO₂) by optical imaging oximetry provides invaluable insight into local tissue functions and metabolism. Despite different embodiments and modalities, all label-free optical-imaging oximetry techniques utilize the same principle of sO₂-dependent spectral contrast from haemoglobin. Traditional approaches for quantifying sO₂ often rely on analytical models that are fitted to the spectral measurements. These approaches in practice suffer from uncertainties due to biological variability, tissue geometry, light scattering, systemic spectral bias, and variations in the experimental conditions. Here, we propose a new data-driven approach, termed deep spectral learning (DSL), to achieve oximetry that is highly robust to experimental variations and, more importantly, able to provide uncertainty quantification for each sO₂ prediction. To demonstrate the robustness and generalizability of DSL, we analyse data from two visible-light optical coherence tomography (vis-OCT) setups across two separate in vivo experiments on rat retinas. Predictions made by DSL are highly adaptive to experimental variabilities as well as the depth-dependent backscattering spectra. Two neural-network-based models are tested and compared with the traditional least-squares fitting (LSF) method. The DSL-predicted sO₂ shows significantly lower mean-square errors than those of the LSF. For the first time, we have demonstrated en face maps of retinal oximetry along with a pixel-wise confidence assessment. Our DSL overcomes several limitations of traditional approaches and provides a more flexible, robust, and reliable deep learning approach for in vivo non-invasive label-free optical oximetry.
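For reference, the LSF baseline the paper compares against is essentially linear regression of the measured spectrum onto the two haemoglobin extinction spectra. The sketch below uses made-up extinction curves (not the real tabulated values) purely to show the mechanics.

```python
import numpy as np

# illustrative (NOT tabulated) extinction spectra over a visible band
wavelengths = np.linspace(520, 600, 40)
eps_hbo2 = 1.0 + 0.8 * np.sin((wavelengths - 520) / 25.0)  # hypothetical HbO2
eps_hb = 1.2 + 0.5 * np.cos((wavelengths - 520) / 30.0)    # hypothetical Hb

def fit_so2(spectrum):
    """LSF baseline: regress the measured spectrum onto the two extinction
    spectra and form sO2 from the fitted concentrations."""
    E = np.stack([eps_hbo2, eps_hb], axis=1)
    c, *_ = np.linalg.lstsq(E, spectrum, rcond=None)
    c = np.clip(c, 0.0, None)
    return c[0] / (c[0] + c[1] + 1e-12)

# a synthetic measurement at 70% saturation
spectrum = 0.7 * eps_hbo2 + 0.3 * eps_hb
```

In this noiseless toy, `fit_so2(spectrum)` returns 0.7 up to floating point; the paper's point is that real measurements violate this clean linear model (scattering, spectral bias, system variations), which is what the data-driven DSL absorbs.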

Development of a beam propagation method to simulate the point spread function degradation in scattering media
Xiaojun Cheng, Yunzhe Li, Jerome Mertz, Sava Sakadžić, Anna Devor, David A. Boas, Lei Tian
Opt. Lett. 44, 4989-4992 (2019).

Scattering is one of the main issues that limit the imaging depth in deep tissue optical imaging. To characterize the role of scattering, we have developed a forward model based on the beam propagation method and established the link between the macroscopic optical properties of the media and the statistical parameters of the phase masks applied to the wavefront. Using this model, we have analyzed the degradation of the point-spread function of the illumination beam in the transition regime from ballistic to diffusive light transport. Our method provides a wave-optic simulation toolkit to analyze the effects of scattering on image quality degradation in scanning microscopy. Our open-source implementation is available at
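The essence of such a model is split-step integration: angular-spectrum diffraction across each slab, then a random phase screen whose statistics stand in for the medium's macroscopic optical properties. A minimal sketch follows; the screen model and all parameters are illustrative, not the paper's calibrated mapping.

```python
import numpy as np

def angular_spectrum_step(field, dz, wavelength, dx):
    """Propagate a 2D complex field a distance dz in a homogeneous medium."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent part frozen, not decayed
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def bpm_through_scatter(field, n_slices, dz, wavelength, dx, phase_std, corr_px, rng):
    """Split-step beam propagation: diffract over dz, then apply a random
    phase screen with a given std and correlation length (in pixels)."""
    k = np.fft.fftfreq(field.shape[0])
    KX, KY = np.meshgrid(k, k)
    lowpass = np.exp(-(KX**2 + KY**2) * corr_px**2)  # crude spatial correlation
    for _ in range(n_slices):
        field = angular_spectrum_step(field, dz, wavelength, dx)
        screen = rng.standard_normal(field.shape)
        screen = np.real(np.fft.ifft2(np.fft.fft2(screen) * lowpass))
        screen *= phase_std / (screen.std() + 1e-12)
        field = field * np.exp(1j * screen)
    return field

# demo: a Gaussian beam through five scattering slabs
rng = np.random.default_rng(0)
n, dx, wavelength = 64, 0.5, 1.0
xs = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(xs, xs)
field_in = np.exp(-(X**2 + Y**2) / (2 * 4.0**2)).astype(complex)
field_out = bpm_through_scatter(field_in, 5, 2.0, wavelength, dx, 0.3, 4.0, rng)
```

Each step is unitary (the phase screens have unit modulus), so total power is conserved; scattering instead shows up as broadening of the focal spot, which is how PSF degradation can be quantified.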

Holographic particle-localization under multiple scattering
Waleed Tahir, Ulugbek S. Kamilov, Lei Tian
Advanced Photonics, 1(3), 036003 (2019).

We introduce a computational framework that incorporates multiple scattering for large-scale three-dimensional (3-D) particle localization using single-shot in-line holography. Traditional holographic techniques rely on single-scattering models that become inaccurate under high particle densities and large refractive index contrasts. Existing multiple scattering solvers become computationally prohibitive for large-scale problems, which comprise millions of voxels within the scattering volume. Our approach overcomes the computational bottleneck by slicewise computation of multiple scattering under an efficient recursive framework. In the forward model, each recursion estimates the next higher-order multiple scattered field among the object slices. In the inverse model, each order of scattering is recursively estimated by a nonlinear optimization procedure. This nonlinear inverse model is further supplemented by a sparsity promoting procedure that is particularly effective in localizing 3-D distributed particles. We show that our multiple-scattering model leads to significant improvement in the quality of 3-D localization compared to traditional methods based on single scattering approximation. Our experiments demonstrate robust inverse multiple scattering, allowing reconstruction of 100 million voxels from a single 1-megapixel hologram with a sparsity prior. The performance bound of our approach is quantified in simulation and validated experimentally. Our work promises utilization of multiple scattering for versatile large-scale applications.
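The recursive idea, where each pass adds the next order of multiple scattering, can be written compactly in one dimension as a scalar Born series, u_{n+1} = u_in + G[v·u_n]. The toy below (grid, Green's-function normalization, and potential are all illustrative) contracts geometrically when the scattering contrast is weak.

```python
import numpy as np

def born_series_1d(v, k, dx, orders):
    """Recursive Born series: u_{n+1} = u_in + G[v * u_n]; each recursion
    adds the next higher order of multiple scattering."""
    n = v.size
    x = (np.arange(n) - n // 2) * dx
    u_in = np.exp(1j * k * x)                             # incident plane wave
    g = (1j / (2 * k)) * np.exp(1j * k * np.abs(x)) * dx  # 1D Helmholtz Green's function
    u = u_in.copy()
    updates = []
    for _ in range(orders):
        u_new = u_in + np.convolve(v * u, g, mode="same")
        updates.append(np.linalg.norm(u_new - u))
        u = u_new
    return u, updates

# weak scatterer -> successive orders shrink rapidly
v = np.zeros(256)
v[120:140] = 0.05
u, updates = born_series_1d(v, k=2 * np.pi, dx=0.1, orders=6)
```

For strong scattering the plain series diverges; the paper's contribution is pairing this recursion with a nonlinear, sparsity-regularized inversion so that each scattering order is estimated stably at large scale.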

Reliable deep learning-based phase imaging with uncertainty quantification
Yujia Xue, Shiyi Cheng, Yunzhe Li, Lei Tian
Optica 6, 618-629 (2019).

Emerging deep-learning (DL)-based techniques have significant potential to revolutionize biomedical imaging. However, one outstanding challenge is the lack of reliability assessment in the DL predictions, whose errors are commonly revealed only in hindsight. Here, we propose a new Bayesian convolutional neural network (BNN)-based framework that overcomes this issue by quantifying the uncertainty of DL predictions. Foremost, we show that BNN-predicted uncertainty maps provide surrogate estimates of the true error from the network model and measurement itself. The uncertainty maps characterize imperfections often unknown in real-world applications, such as noise, model error, incomplete training data, and out-of-distribution testing data. Quantifying this uncertainty provides a per-pixel estimate of the confidence level of the DL prediction as well as the quality of the model and data set. We demonstrate this framework in the application of large space–bandwidth product phase imaging using a physics-guided coded illumination scheme. From only five multiplexed illumination measurements, our BNN predicts gigapixel phase images in both static and dynamic biological samples with quantitative credibility assessment. Furthermore, we show that low-certainty regions can identify spatially and temporally rare biological phenomena. We believe our uncertainty learning framework is widely applicable to many DL-based biomedical imaging techniques for assessing the reliability of DL predictions.
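The value of a per-prediction confidence estimate can be illustrated with a much simpler surrogate than a Bayesian CNN: refit a model on resampled data and report the spread of its predictions. Everything below (bootstrap, linear model, toy data) is a stand-in for the paper's BNN posterior sampling.

```python
import numpy as np

rng = np.random.default_rng(1)
# toy training data: a noisy line sampled on [0, 1]
x = rng.uniform(0, 1, 80)
y = 2.0 * x + 0.5 + rng.normal(0, 0.1, x.size)

def bootstrap_predict(x_query, n_boot=500):
    """Surrogate uncertainty: refit on bootstrap resamples and report the
    mean and spread of the predictions at x_query."""
    preds = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, x.size, x.size)
        slope, intercept = np.polyfit(x[idx], y[idx], 1)
        preds[b] = slope * x_query + intercept
    return preds.mean(), preds.std()

mean_in, std_in = bootstrap_predict(0.5)    # query inside the training range
mean_out, std_out = bootstrap_predict(2.0)  # out-of-distribution query
```

The spread grows sharply for the out-of-distribution query, mirroring the paper's observation that low-certainty regions flag incomplete training data or rare phenomena.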

Deep speckle correlation: a deep learning approach towards scalable imaging through scattering media
Yunzhe Li, Yujia Xue, Lei Tian
Optica 5, 1181-1190 (2018).
Top 15 most cited articles in Optica published in 2018 (Source: OSA)

Imaging through scattering is an important yet challenging problem. Tremendous progress has been made by exploiting the deterministic input–output “transmission matrix” for a fixed medium. However, this “one-to-one” mapping is highly susceptible to speckle decorrelations – small perturbations to the scattering medium lead to model errors and severe degradation of the imaging performance. Our goal here is to develop a new framework that is highly scalable to both medium perturbations and measurement requirements. To do so, we propose a statistical “one-to-all” deep learning (DL) technique that encapsulates a wide range of statistical variations for the model to be resilient to speckle decorrelations. Specifically, we develop a convolutional neural network (CNN) that is able to learn the statistical information contained in the speckle intensity patterns captured on a set of diffusers having the same macroscopic parameter. We then show for the first time, to the best of our knowledge, that the trained CNN is able to generalize and make high-quality object predictions through an entirely different set of diffusers of the same class. Our work paves the way to a highly scalable DL approach for imaging through scattering media.



Deep learning approach to Fourier ptychographic microscopy
Thanh Nguyen, Yujia Xue, Yunzhe Li, Lei Tian, George Nehmetallah
Opt. Express 26, 26470-26484 (2018).

Convolutional neural networks (CNNs) have gained tremendous success in solving complex inverse problems. The aim of this work is to develop a novel CNN framework to reconstruct video sequences of dynamic live cells captured using a computational microscopy technique, Fourier ptychographic microscopy (FPM). The unique feature of the FPM is its capability to reconstruct images with both wide field-of-view (FOV) and high resolution, i.e., a large space-bandwidth product (SBP), by taking a series of low-resolution intensity images. For live cell imaging, a single FPM frame contains thousands of cell samples with different morphological features. Our idea is to fully exploit the statistical information provided by this large spatial ensemble so as to make predictions in a sequential measurement, without using any additional temporal dataset. Specifically, we show that it is possible to reconstruct high-SBP dynamic cell videos with a CNN trained only on the first FPM dataset captured at the beginning of a time-series experiment. Our CNN approach reconstructs a 12,800 × 10,800-pixel phase image in only ~25 seconds, a 50× speedup compared to the model-based FPM algorithm. In addition, the CNN further reduces the required number of images in each time frame by ~6×. Overall, this significantly improves the imaging throughput by reducing both the acquisition and computational times. The proposed CNN is based on the conditional generative adversarial network (cGAN) framework. Additionally, we also exploit transfer learning so that our pre-trained CNN can be further optimized to image other cell types. Our technique demonstrates a promising deep learning approach to continuously monitor large live-cell populations over an extended time and gather useful spatial and temporal information with sub-cellular resolution.


High-throughput intensity diffraction tomography with a computational microscope
Ruilong Ling, Waleed Tahir, Hsing-Ying Lin, Hakho Lee, and Lei Tian
Biomed. Opt. Express 9, 2130-2141 (2018).

We demonstrate a motion-free intensity diffraction tomography technique that enables the direct inversion of 3D phase and absorption from intensity-only measurements for weakly scattering samples. We derive a novel linear forward model featuring slice-wise phase and absorption transfer functions using angled illumination. This new framework facilitates flexible and efficient data acquisition, enabling arbitrary sampling of the illumination angles. The reconstruction algorithm performs 3D synthetic aperture using a robust computation and memory efficient slice-wise deconvolution to achieve resolution up to the incoherent limit. We demonstrate our technique with thick biological samples having both sparse 3D structures and dense cell clusters. We further investigate the limitation of our technique when imaging strongly scattering samples. Imaging performance and the influence of multiple scattering is evaluated using a 3D sample consisting of stacked phase and absorption resolution targets. This computational microscopy system is directly built on a standard commercial microscope with a simple LED array source add-on, and promises broad applications by leveraging the ubiquitous microscopy platforms with minimal hardware modifications.



Structured illumination microscopy with unknown patterns and a statistical prior
Li-Hao Yeh, Lei Tian, and Laura Waller
Biomed. Opt. Express 8, 695-711 (2017).

Structured illumination microscopy (SIM) improves resolution by down-modulating high-frequency information of an object to fit within the passband of the optical system. Generally, the reconstruction process requires prior knowledge of the illumination patterns, which implies a well-calibrated and aberration-free system. Here, we propose a new algorithmic self-calibration strategy for SIM that does not need to know the exact patterns a priori, but only their covariance. The algorithm, termed PE-SIMS, includes a pattern-estimation (PE) step requiring the uniformity of the sum of the illumination patterns and a SIM reconstruction procedure using a statistical prior (SIMS). Additionally, we perform a pixel reassignment process (SIMS-PR) to enhance the reconstruction quality. We achieve 2× better resolution than a conventional widefield microscope, while remaining insensitive to aberration-induced pattern distortion and robust against parameter tuning.



Compressive holographic video
Zihao Wang, Leonidas Spinoulas, Kuan He, Lei Tian, Oliver Cossairt, Aggelos K. Katsaggelos, and Huaijin Chen
Opt. Express 25, 250-262 (2017).

Compressed sensing has been discussed separately in spatial and temporal domains. Compressive holography has been introduced as a method that allows 3D tomographic reconstruction at different depths from a single 2D image. Coded exposure is a temporal compressed sensing method for high speed video acquisition. In this work, we combine compressive holography and coded exposure techniques and extend the discussion to 4D reconstruction in space and time from one coded captured image. In our prototype, digital in-line holography was used for imaging macroscopic, fast moving objects. The pixel-wise temporal modulation was implemented by a digital micromirror device. In this paper we demonstrate 10× temporal super resolution with multiple depths recovery from a single image. Two examples are presented for the purpose of recording subtle vibrations and tracking small particles within 5 ms.



3D differential phase contrast microscopy
Michael Chen, Lei Tian, Laura Waller
Biomed. Opt. Express 7, 3940-3950 (2016).

We demonstrate 3D phase and absorption recovery from partially coherent intensity images captured with a programmable LED array source. Images are captured through-focus with four different illumination patterns. Using first Born and weak object approximations (WOA), a linear 3D differential phase contrast (DPC) model is derived. The partially coherent transfer functions relate the sample’s complex refractive index distribution to intensity measurements at varying defocus. Volumetric reconstruction is achieved by a global FFT-based method, without an intermediate 2D phase retrieval step. Because the illumination is spatially partially coherent, the transverse resolution of the reconstructed field achieves twice the NA of coherent systems and improved axial resolution.
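The global FFT-based reconstruction step amounts to a Tikhonov-regularized deconvolution with the precomputed transfer function. Below is a 2D sketch with a hypothetical antisymmetric (DPC-like) transfer function, not the paper's derived 3D WOA kernels.

```python
import numpy as np

def tikhonov_deconvolve(meas, H, beta=1e-4):
    """FFT-based Tikhonov inversion:
    phase = F^-1[ conj(H) * F[meas] / (|H|^2 + beta) ]."""
    M = np.fft.fft2(meas)
    return np.real(np.fft.ifft2(np.conj(H) * M / (np.abs(H) ** 2 + beta)))

n = 64
f = np.fft.fftfreq(n)
FX, FY = np.meshgrid(f, f)
# hypothetical antisymmetric transfer function (odd in fx, band-limited)
H = 1j * FX * np.exp(-(FX**2 + FY**2) / 0.05)

# simulate a measurement of a known phase object, then invert
phase = np.cos(2 * np.pi * 10 * np.arange(n) / n)[None, :] * np.ones((n, 1))
meas = np.real(np.fft.ifft2(H * np.fft.fft2(phase)))
rec = tikhonov_deconvolve(meas, H)
```

Where |H|² is much larger than beta the object is recovered almost exactly; near the zeros of H (e.g., the fx = 0 line, where an antisymmetric transfer function carries no information) the regularizer suppresses noise amplification at the cost of losing those components.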


Nonlinear Optimization Algorithm for Partially Coherent Phase Retrieval and Source Recovery
J. Zhong, L. Tian, P. Varma, L. Waller
IEEE Transactions on Computational Imaging 2 (3), 310 – 322 (2016).

We propose a new algorithm for recovering both complex field (phase and amplitude) and source distribution (illumination spatial coherence) from a stack of intensity images captured through focus. The joint recovery is formulated as a nonlinear least-square-error optimization problem, which is solved iteratively by a modified Gauss-Newton method. We derive the gradient and Hessian of the cost function and show that our second-order optimization approach outperforms previously proposed phase retrieval algorithms, for datasets taken with both coherent and partially coherent illumination. The method is validated experimentally in a commercial microscope with both Köhler illumination and a programmable LED dome.


Experimental robustness of Fourier Ptychography phase retrieval algorithms
L. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, L. Waller
Opt. Express 23(26), 33212-33238 (2015).

Fourier ptychography is a new computational microscopy technique that provides gigapixel-scale intensity and phase images with both wide field-of-view and high resolution. By capturing a stack of low-resolution images under different illumination angles, an inverse algorithm can be used to computationally reconstruct the high-resolution complex field. Here, we compare and classify multiple proposed inverse algorithms in terms of experimental robustness. We find that the main sources of error are noise, aberrations and mis-calibration (i.e. model mis-match). Using simulations and experiments, we demonstrate that the choice of cost function plays a critical role, with amplitude-based cost functions performing better than intensity-based ones. The reason for this is that Fourier ptychography datasets consist of images from both brightfield and darkfield illumination, representing a large range of measured intensities. Both noise (e.g. Poisson noise) and model mis-match errors are shown to scale with intensity. Hence, algorithms that use an appropriate cost function will be more tolerant to both noise and model mis-match. Given these insights, we propose a global Newton’s method algorithm which is robust and accurate. Finally, we discuss the impact of procedures for algorithmic correction of aberrations and mis-calibration.
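The noise-scaling argument is easy to check numerically: for Poisson shot noise, Var(I) = I, so intensity residuals grow with brightness, while the square root is variance-stabilizing (Var(√I) ≈ 1/4 at every level). An illustrative simulation, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
# photon levels spanning darkfield to brightfield images
levels = np.array([10.0, 100.0, 1000.0, 10000.0])
samples = rng.poisson(levels, size=(200_000, 4))

intensity_resid_std = samples.std(axis=0)           # grows like sqrt(level)
amplitude_resid_std = np.sqrt(samples).std(axis=0)  # ~0.5 at every level
```

An amplitude-based cost therefore weights brightfield and darkfield measurements comparably, which is consistent with the paper's finding that such cost functions tolerate both noise and model mis-match better.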


Computational illumination for high-speed in vitro Fourier ptychographic microscopy
L. Tian, Z. Liu, L. Yeh, M. Chen, J. Zhong, L. Waller
Optica 2(10), 904-911 (2015).

We demonstrate a new computational illumination technique that achieves a large space-bandwidth-time product, for quantitative phase imaging of unstained live samples in vitro. Microscope lenses can have either a large field of view (FOV) or high resolution, but not both. Fourier ptychographic microscopy (FPM) is a new computational imaging technique that circumvents this limit by fusing information from multiple images taken with different illumination angles. The result is a gigapixel-scale image having both a wide FOV and high resolution, i.e., a large space-bandwidth product. FPM has enormous potential for revolutionizing microscopy and has already found application in digital pathology. However, it suffers from long acquisition times (on the order of minutes), limiting throughput. Faster capture times would not only improve the imaging speed, but also allow studies of live samples, where motion artifacts degrade results. In contrast to fixed (e.g., pathology) slides, live samples are continuously evolving at various spatial and temporal scales. Here, we present a new source coding scheme, along with real-time hardware control, to achieve 0.8 NA resolution across a 4× FOV with subsecond capture times. We propose an improved algorithm and a new initialization scheme, which allow robust phase reconstruction over long time-lapse experiments. We present the first FPM results for both growing and confluent in vitro cell cultures, capturing videos of subcellular dynamical phenomena in popular cell lines undergoing division and migration. Our method opens up FPM to applications with live samples, for observing rare events in both space and time.


Computational imaging: Machine learning for 3D microscopy
L. Waller, L. Tian
Nature, 523, 416–417 (2015).

Artificial neural networks have been combined with microscopy to visualize the 3D structure of biological cells. This could lead to solutions for difficult imaging problems, such as the multiple scattering of light.

3D imaging in volumetric scattering media using phase-space measurements
H. Liu, E. Jonas, L. Tian, J. Zhong, B. Recht, L. Waller
Opt. Express 23, 14461-14471 (2015).

We demonstrate the use of phase-space imaging for 3D localization of multiple point sources inside scattering material. The effect of scattering is to spread angular (spatial frequency) information, which can be measured by phase-space imaging. We derive a multi-slice forward model for homogeneous volumetric scattering, then develop a reconstruction algorithm that exploits sparsity in order to further constrain the problem. By using 4D measurements for 3D reconstruction, the dimensionality mismatch provides significant robustness to multiple scattering, with either static or dynamic diffusers. Experimentally, our high-resolution 4D phase-space data are collected with a spectrogram setup, and our results successfully recover the 3D positions of multiple LEDs embedded in turbid scattering media.


Multi-contrast imaging and digital refocusing on a mobile microscope with a domed LED array
Z. Phillips, M. D’Ambrosio, L. Tian, J. Rulison, H. Patel, N. Sadras, A. Gande, N. Switz, D. Fletcher, L. Waller
PLoS ONE 10, e0124938 (2015).

We demonstrate the design and application of an add-on device for improving the diagnostic and research capabilities of CellScope—a low-cost, smartphone-based point-of-care microscope. We replace the single LED illumination of the original CellScope with a programmable domed LED array. By leveraging recent advances in computational illumination, this new device enables simultaneous multi-contrast imaging with brightfield, darkfield, and phase imaging modes. Further, we scan through illumination angles to capture lightfield datasets, which can be used to recover 3D intensity and phase images without any hardware changes. This digital refocusing procedure can be used for either 3D imaging or software-only focus correction, reducing the need for precise mechanical focusing during field experiments. All acquisition and processing is performed on the mobile phone and controlled through a smartphone application, making the computational microscope compact and portable. Using multiple samples and different objective magnifications, we demonstrate that the performance of our device is comparable to that of a commercial microscope. This unique device platform extends the field imaging capabilities of CellScope, opening up new clinical and research possibilities.


Quantitative differential phase contrast imaging in an LED array microscope
L. Tian, L. Waller
Opt. Express 23, 11394-11403 (2015).

Illumination-based differential phase contrast (DPC) is a phase imaging method that uses a pair of images with asymmetric illumination patterns. Distinct from coherent techniques, DPC relies on spatially partially coherent light, providing 2× better lateral resolution, better optical sectioning and immunity to speckle noise. In this paper, we derive the 2D weak object transfer function (WOTF) and develop a quantitative phase reconstruction method that is robust to noise. The effect of spatial coherence is studied experimentally, and multiple-angle DPC is shown to provide improved frequency coverage for more stable phase recovery. Our method uses an LED array microscope to achieve real-time (10 Hz) quantitative phase imaging with in vitro live cell samples.
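The reconstruction amounts to a regularized deconvolution of the DPC images by their phase transfer functions. The sketch below uses toy antisymmetric transfer functions as stand-ins for the WOTFs derived in the paper; all shapes and constants are illustrative:

```python
import numpy as np

n = 128
fx = np.fft.fftfreq(n)[None, :]
fy = np.fft.fftfreq(n)[:, None]

# Toy antisymmetric phase transfer functions for two complementary illumination
# axes (hypothetical stand-ins for the paper's 2D WOTFs).
env = np.exp(-40.0 * (fx ** 2 + fy ** 2))
H = [1j * np.sin(2 * np.pi * fx) * env,
     1j * np.sin(2 * np.pi * fy) * env]

phase = np.zeros((n, n))
phase[40:80, 50:90] = 0.5                      # weak phase object (radians)

# Weak-object forward model: each DPC image is the phase filtered by its WOTF.
dpc = [np.real(np.fft.ifft2(h * np.fft.fft2(phase))) for h in H]

# Tikhonov-regularized joint deconvolution over both illumination axes.
beta = 1e-4
num = sum(np.conj(h) * np.fft.fft2(d) for h, d in zip(H, dpc))
den = sum(np.abs(h) ** 2 for h in H) + beta
phase_rec = np.real(np.fft.ifft2(num / den))
```

Combining multiple axes in the denominator fills in the zero crossings of any single transfer function, which is the "improved frequency coverage" argument made in the abstract.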


Motion deblurring with temporally coded illumination in an LED array microscope
C. Ma, Z. Liu, L. Tian, Q. Dai, L. Waller
Opt. Lett. 40, 2281-2284 (2015).

Motion blur, which results from time-averaging an image over the camera’s exposure time, is a common problem in microscopy of moving samples. Here, we demonstrate linear motion deblurring using temporally coded illumination in an LED array microscope. By illuminating moving objects with a well-designed temporally coded sequence that varies within a single camera exposure, the resulting motion blur is made invertible and can be computationally removed. The scheme is implemented in an existing LED array microscope, whose illumination codes are grayscale, fast, and adaptive, leading to high-quality deblurring in a flexible implementation with no moving parts. The proposed method is demonstrated experimentally for fast-moving targets in a microfluidic environment.
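The invertibility argument can be illustrated in 1D: a flat (conventional) exposure has zeros in its spectrum that permanently destroy frequencies, while a broadband code does not. The 16-tap binary code below is an arbitrary illustration, not the optimized grayscale sequence from the paper:

```python
import numpy as np

n = 64
scene = np.zeros(n)
scene[[10, 11, 30]] = [1.0, 0.5, 0.8]                  # toy 1D object profile

# Exposure codes (hypothetical 16-tap sequences, normalized to unit exposure).
box   = np.ones(16)                                    # conventional open shutter
coded = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1], float)
box, coded = box / box.sum(), coded / coded.sum()

def blur(x, code):
    # Linear motion blur = (circular) convolution of the scene with the code.
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(code, n)))

def deblur(y, code, eps=1e-4):
    # Regularized inverse filter; succeeds only if the code's spectrum has no
    # (near-)zeros, which is what a broadband code provides.
    C = np.fft.fft(code, n)
    return np.real(np.fft.ifft(np.fft.fft(y) * np.conj(C) / (np.abs(C) ** 2 + eps)))

err_box   = np.max(np.abs(deblur(blur(scene, box), box) - scene))
err_coded = np.max(np.abs(deblur(blur(scene, coded), coded) - scene))
print(err_box, err_coded)
```

The box code's Fourier transform is a sinc with exact zeros, so those frequencies of the scene are unrecoverable; the binary code keeps every frequency above the noise floor, making the blur invertible.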


3D intensity and phase imaging from light field measurements in an LED array microscope
Lei Tian, L. Waller
Optica 2, 104-111 (2015).
One of the 15 most cited articles published in Optica in 2015 (Source: OSA, 2019)

Realizing high resolution across large volumes is challenging for 3D imaging techniques with high-speed acquisition. Here, we describe a new method for 3D intensity and phase recovery from 4D light field measurements, achieving enhanced resolution via Fourier Ptychography. Starting from geometric optics light field refocusing, we incorporate phase retrieval and correct diffraction artifacts. Further, we incorporate dark-field images to achieve lateral resolution beyond the diffraction limit of the objective (5x larger NA) and axial resolution better than the depth of field, using a low magnification objective with a large field of view. Our iterative reconstruction algorithm uses a multi-slice coherent model to estimate the 3D complex transmittance function of the sample at multiple depths, without any weak or single-scattering approximations. Data is captured by an LED array microscope with computational illumination, which enables rapid scanning of angles for fast acquisition. We demonstrate the method with thick biological samples in a modified commercial microscope, indicating the technique’s versatility for a wide range of applications.
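The multi-slice model alternates transmission through each slice's complex transmittance with free-space angular-spectrum propagation. A minimal sketch of that forward model, with illustrative parameters rather than the paper's experimental values:

```python
import numpy as np

def angular_spectrum(field, dz, wavelength, dx):
    """Propagate a 2D complex field by dz with the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    f2 = fx[None, :] ** 2 + fx[:, None] ** 2
    # Complex sqrt makes evanescent components decay instead of propagating.
    kz = 2 * np.pi * np.sqrt((1.0 / wavelength) ** 2 - f2 + 0j)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def multislice(slices, dz, wavelength, dx, illum):
    """Multi-slice forward model: transmit through each slice, then propagate."""
    field = illum
    for t in slices:
        field = angular_spectrum(field * t, dz, wavelength, dx)
    return field

# Illustrative parameters (microns), not the experimental values.
n, dx, wl, dz = 128, 0.5, 0.5, 5.0
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
bump = np.exp(-(X ** 2 + Y ** 2) / 50.0)
slices = [np.exp(1j * 0.3 * bump)] * 2              # two weak phase-only slices
illum = np.exp(1j * 2 * np.pi * (0.05 / wl) * X)    # tilted plane wave ~ one LED
exit_field = multislice(slices, dz, wl, dx, illum)
```

Reconstruction then inverts this model jointly over all illumination angles; because the model composes multiplications and propagations, it makes no weak- or single-scattering approximation.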


Partially coherent phase imaging with unknown source shape
J. Zhong, Lei Tian, J. Dauwels, L. Waller
Biomedical Optics Express 6, 257-265 (2015).

We propose a new method for phase retrieval that uses partially coherent illumination created by any arbitrary source shape in Köhler geometry. Using a stack of defocused intensity images, we recover not only the phase and amplitude of the sample, but also an estimate of the unknown source shape, which describes the spatial coherence of the illumination. Our algorithm uses a Kalman filtering approach that is fast, accurate, and robust to noise. The method is experimentally simple and flexible, and so should find use in optical, electron, X-ray, and other phase imaging systems that employ partially coherent light. We provide an experimental demonstration in an optical microscope with various condenser apertures.


Real-time brightfield, darkfield and phase contrast imaging in an LED array microscope
Z. Liu, Lei Tian, S. Liu, L. Waller
Journal of Biomedical Optics, 19(10), 106002 (2014).

We demonstrate a single-camera imaging system that can simultaneously acquire brightfield, darkfield and phase contrast images in real-time. Our method uses computational illumination via a programmable LED array at the source plane, providing flexible patterning of illumination angles. Brightfield, darkfield and differential phase contrast (DPC) images are obtained by changing the LED patterns, without any moving parts. Previous work with LED array illumination was only valid for static samples because the hardware speed was not fast enough to meet real-time acquisition and processing requirements. Here, we time multiplex patterns for each of the three contrast modes in order to image dynamic biological processes in all three contrast modes simultaneously. We demonstrate multi-contrast operation at the maximum frame rate of our camera (50 Hz with 2160×2560 pixels).


Multiplexed coded illumination for Fourier Ptychography with an LED array microscope
Lei Tian, X. Li, K. Ramchandran, L. Waller
Biomedical Optics Express 5, 2376-2389 (2014).
One of the decade’s most highly cited articles in Biomed. Opt. Express (Source: OSA, 2020)
⭑ Highly cited (Top 1%) papers between 2008-2018 (source: Web of Science, 2019)

Fourier Ptychography is a new computational microscopy technique that achieves gigapixel images with both wide field of view and high resolution in both phase and amplitude. The hardware setup involves a simple replacement of the microscope’s illumination unit with a programmable LED array, allowing one to flexibly pattern illumination angles without any moving parts. In previous work, a series of low-resolution images was taken by sequentially turning on each single LED in the array, and the data were then combined to recover a bandwidth much higher than the one allowed by the original imaging system. Here, we demonstrate a multiplexed illumination strategy in which multiple randomly selected LEDs are turned on for each image. Since each LED corresponds to a different area of Fourier space, the total number of images can be significantly reduced, without sacrificing image quality. We demonstrate this method experimentally in a modified commercial microscope. Compared to sequential scanning, our multiplexed strategy achieves similar results with approximately an order of magnitude reduction in both acquisition time and data capture requirements.


3D differential phase contrast microscopy with computational illumination using an LED array
Lei Tian, J. Wang, L. Waller
Optics Letters 39, 1326-1329 (2014).

We demonstrate 3D differential phase-contrast (DPC) microscopy, based on computational illumination with a programmable LED array. By capturing intensity images with various illumination angles generated by sequentially patterning an LED array source, we digitally refocus images through various depths via light field processing. The intensity differences from images taken at complementary illumination angles are then used to generate DPC images, which are related to the gradient of phase. The proposed method achieves 3D DPC with simple, inexpensive optics and no moving parts. We experimentally demonstrate our method by imaging a camel hair sample in 3D.