Full publication list at Google Scholar.
Inverse scattering for reflection intensity phase microscopy
Alex Matlock, Anne Sentenac, Patrick C. Chaumet, Ji Yi, and Lei Tian
Biomedical Optics Express Vol. 11, Issue 2, pp. 911-926 (2020).
Reflection phase imaging provides label-free, high-resolution characterization of biological samples, typically using interferometry-based techniques. Here, we investigate reflection phase microscopy from intensity-only measurements under diverse illumination. We evaluate the forward and inverse scattering model based on the first Born approximation for imaging scattering objects above a glass slide. Under this design, the measured field combines linear forward-scattering and height-dependent nonlinear back-scattering from the object, which complicates object phase recovery. Using only the forward-scattering, we derive a linear inverse scattering model and evaluate this model’s validity range in simulation and experiment using a standard reflection microscope modified with a programmable light source. Our method provides enhanced contrast of thin, weakly scattering samples that complements transmission techniques. This model provides a promising development for creating simplified intensity-based reflection quantitative phase imaging systems easily adoptable for biological research.
High-speed in vitro intensity diffraction tomography
Jiaji Li, Alex Matlock, Yunzhe Li, Qian Chen, Chao Zuo, Lei Tian
Advanced Photonics, 1(6), 066004 (2019). (on the cover)
We demonstrate a label-free, scan-free intensity diffraction tomography technique utilizing annular illumination (aIDT) to rapidly characterize large-volume 3D refractive index distributions in vitro. By optimally matching the illumination geometry to the microscope pupil, our technique reduces the data requirement by 60-fold.
SIMBA: Scalable Inversion in Optical Tomography using Deep Denoising Priors
Zihui Wu, Yu Sun, Alex Matlock, Jiaming Liu, Lei Tian, Ulugbek S. Kamilov
Two features desired in a three-dimensional (3D) optical tomographic image reconstruction algorithm are the ability to reduce imaging artifacts and to do fast processing of large data volumes. Traditional iterative inversion algorithms are impractical in this context due to their heavy computational and memory requirements. We propose and experimentally validate a novel scalable iterative mini-batch algorithm (SIMBA) for fast and high-quality optical tomographic imaging. SIMBA enables high-quality imaging by combining two complementary information sources: the physics of the imaging system characterized by its forward model and the imaging prior characterized by a denoising deep neural net. SIMBA easily scales to very large 3D tomographic datasets by processing only a small subset of measurements at each iteration. We establish the theoretical fixed-point convergence of SIMBA under nonexpansive denoisers for convex data-fidelity terms. We validate SIMBA on both simulated and experimentally collected intensity diffraction tomography (IDT) datasets. Our results show that SIMBA can significantly reduce the computational burden of 3D image formation without sacrificing the imaging quality.
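The mini-batch idea above can be illustrated with a toy problem. The NumPy sketch below is not SIMBA itself: simple soft-thresholding stands in for the deep denoising prior, and rows of a random matrix stand in for subsets of tomographic measurements, but it shows the same iteration of a subsampled gradient step followed by a denoising step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse recovery: y = A @ x_true + noise stands in for a tomographic
# dataset; each row of A plays the role of one measurement.
n, m, batch = 64, 256, 32
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)      # roughly A.T @ A ≈ I
y = A @ x_true + 0.01 * rng.standard_normal(m)

def denoise(v, tau=0.02):
    # Soft-thresholding stands in for the deep denoising prior.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(n)
for k in range(400):
    idx = rng.choice(m, batch, replace=False)     # random mini-batch
    grad = (m / batch) * A[idx].T @ (A[idx] @ x - y[idx])
    x = denoise(x - 0.2 * grad)                   # gradient step, then denoise
```

Because each iteration touches only `batch` of the `m` measurements, memory and per-iteration cost stay flat as the dataset grows, which is the scalability argument of the abstract.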
High-throughput, volumetric quantitative phase imaging with multiplexed intensity diffraction tomography
Alex Matlock, Lei Tian
Biomed. Opt. Express Vol. 10, Issue 12, pp. 6432-6448 (2019).
Intensity diffraction tomography (IDT) provides quantitative, volumetric refractive index reconstructions of unlabeled biological samples from intensity-only measurements. IDT is scanless and easily implemented in standard optical microscopes using an LED array but suffers from large data requirements and slow acquisition speeds. Here, we develop multiplexed IDT (mIDT), a coded illumination framework providing high volume-rate IDT for evaluating dynamic biological samples. mIDT combines illuminations from an LED grid using physical model-based design choices to improve acquisition rates and reduce dataset size with minimal loss to resolution and reconstruction quality. We analyze the optimal design scheme with our mIDT framework in simulation using the reconstruction error compared to conventional IDT and theoretical acquisition speed. With the optimally determined mIDT scheme, we achieve hardware-limited 4 Hz acquisition rates enabling 3D refractive index distribution recovery on live Caenorhabditis elegans worms and embryos as well as epithelial buccal cells. Our mIDT architecture provides a 60× speed improvement over conventional IDT and is robust across different illumination hardware designs, making it an easily adoptable imaging tool for volumetrically quantifying biological samples in their natural state.
Deep spectral learning for label-free optical imaging oximetry with uncertainty quantification
Rongrong Liu, Shiyi Cheng, Lei Tian, Ji Yi
Light: Science & Applications 8, 102 (2019).
Measurement of blood oxygen saturation (sO2) by optical imaging oximetry provides invaluable insight into local tissue functions and metabolism. Despite different embodiments and modalities, all label-free optical-imaging oximetry techniques utilize the same principle of sO2-dependent spectral contrast from haemoglobin. Traditional approaches for quantifying sO2 often rely on analytical models that are fitted to the spectral measurements. These approaches in practice suffer from uncertainties due to biological variability, tissue geometry, light scattering, systemic spectral bias, and variations in the experimental conditions. Here, we propose a new data-driven approach, termed deep spectral learning (DSL), to achieve oximetry that is highly robust to experimental variations and, more importantly, able to provide uncertainty quantification for each sO2 prediction. To demonstrate the robustness and generalizability of DSL, we analyse data from two visible light optical coherence tomography (vis-OCT) setups across two separate in vivo experiments on rat retinas. Predictions made by DSL are highly adaptive to experimental variabilities as well as the depth-dependent backscattering spectra. Two neural-network-based models are tested and compared with the traditional least-squares fitting (LSF) method. The DSL-predicted sO2 shows significantly lower mean-square errors than those of the LSF. For the first time, we have demonstrated en face maps of retinal oximetry along with a pixel-wise confidence assessment. Our DSL overcomes several limitations of traditional approaches and provides a more flexible, robust, and reliable deep learning approach for in vivo non-invasive label-free optical oximetry.
Development of a beam propagation method to simulate the point spread function degradation in scattering media
Xiaojun Cheng, Yunzhe Li, Jerome Mertz, Sava Sakadžić, Anna Devor, David A. Boas, Lei Tian
Opt. Lett. 44, 4989-4992 (2019).
Scattering is one of the main issues that limit the imaging depth in deep tissue optical imaging. To characterize the role of scattering, we have developed a forward model based on the beam propagation method and established the link between the macroscopic optical properties of the media and the statistical parameters of the phase masks applied to the wavefront. Using this model, we have analyzed the degradation of the point-spread function of the illumination beam in the transition regime from ballistic to diffusive light transport. Our method provides a wave-optic simulation toolkit to analyze the effects of scattering on image quality degradation in scanning microscopy. Our open-source implementation is available at https://github.com/BUNPC/Beam-Propagation-Method.
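The forward model described above can be sketched as a split-step simulation: angular-spectrum propagation between thin slices, each followed by a random phase screen. The grid size, wavelength, and screen statistics below are illustrative placeholders, not the calibrated optical properties from the paper; the linked repository contains the real implementation.

```python
import numpy as np

# Minimal split-step beam propagation through random phase screens.
N, dx, wav, dz = 256, 0.5e-6, 0.8e-6, 5e-6
fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)
kz = 2 * np.pi * dz * np.sqrt(np.maximum(1 / wav**2 - FX**2 - FY**2, 0.0))
prop = np.exp(1j * kz)                 # unit-modulus propagator (evanescent
                                       # components are simply frozen here)
x = (np.arange(N) - N / 2) * dx
X, Y = np.meshgrid(x, x)
field = np.exp(-(X**2 + Y**2) / (2 * (5e-6)**2))   # Gaussian illumination beam
energy_in = (np.abs(field)**2).sum()

rng = np.random.default_rng(1)
for _ in range(20):                    # 20 weakly scattering slices
    screen = np.exp(1j * 0.2 * rng.standard_normal((N, N)))  # random phase mask
    field = np.fft.ifft2(np.fft.fft2(field * screen) * prop)

# The focal spot spreads as ballistic light is converted to scattered light.
width = np.sqrt((np.abs(field)**2 * (X**2 + Y**2)).sum()
                / (np.abs(field)**2).sum())
```

Because every operation is a unit-modulus multiplication or an FFT pair, the simulation conserves energy exactly, which is a useful sanity check on any split-step implementation.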
Holographic particle-localization under multiple scattering
Waleed Tahir, Ulugbek S. Kamilov, Lei Tian
Advanced Photonics, 1(3), 036003 (2019).
We introduce a computational framework that incorporates multiple scattering for large-scale three-dimensional (3-D) particle localization using single-shot in-line holography. Traditional holographic techniques rely on single-scattering models that become inaccurate under high particle densities and large refractive index contrasts. Existing multiple scattering solvers become computationally prohibitive for large-scale problems, which comprise millions of voxels within the scattering volume. Our approach overcomes the computational bottleneck by slicewise computation of multiple scattering under an efficient recursive framework. In the forward model, each recursion estimates the next higher-order multiple scattered field among the object slices. In the inverse model, each order of scattering is recursively estimated by a nonlinear optimization procedure. This nonlinear inverse model is further supplemented by a sparsity promoting procedure that is particularly effective in localizing 3-D distributed particles. We show that our multiple-scattering model leads to significant improvement in the quality of 3-D localization compared to traditional methods based on single scattering approximation. Our experiments demonstrate robust inverse multiple scattering, allowing reconstruction of 100 million voxels from a single 1-megapixel hologram with a sparsity prior. The performance bound of our approach is quantified in simulation and validated experimentally. Our work promises utilization of multiple scattering for versatile large-scale applications.
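The recursive structure can be illustrated with a matrix toy model of the Born series u = u_in + G V u, in which each pass through the recursion adds the next order of multiple scattering. G and V below are random stand-ins scaled so the series converges; they are not the paper's slicewise Green's operator or scattering volume.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
G = rng.standard_normal((n, n)) / (4 * np.sqrt(n))  # keeps ||G V|| < 1
V = np.diag(rng.random(n))                          # weak scattering potential
u_in = rng.standard_normal(n)                       # incident field

# Exact total field solves (I - G V) u = u_in.
u_exact = np.linalg.solve(np.eye(n) - G @ V, u_in)

# Recursive estimate: each iteration adds one higher scattering order.
u = u_in.copy()
for order in range(30):
    u = u_in + G @ V @ u
```

When the spectral radius of G V exceeds one (strong scattering), this fixed-point recursion diverges, which is one way to see why strongly scattering volumes need more careful treatment.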
Reliable deep learning-based phase imaging with uncertainty quantification
Yujia Xue, Shiyi Cheng, Yunzhe Li, Lei Tian
Optica 6, 618-629 (2019).
Emerging deep-learning (DL)-based techniques have significant potential to revolutionize biomedical imaging. However, one outstanding challenge is the lack of reliability assessment in the DL predictions, whose errors are commonly revealed only in hindsight. Here, we propose a new Bayesian convolutional neural network (BNN)-based framework that overcomes this issue by quantifying the uncertainty of DL predictions. Foremost, we show that BNN-predicted uncertainty maps provide surrogate estimates of the true error from the network model and measurement itself. The uncertainty maps characterize imperfections often unknown in real-world applications, such as noise, model error, incomplete training data, and out-of-distribution testing data. Quantifying this uncertainty provides a per-pixel estimate of the confidence level of the DL prediction as well as the quality of the model and data set. We demonstrate this framework in the application of large space–bandwidth product phase imaging using a physics-guided coded illumination scheme. From only five multiplexed illumination measurements, our BNN predicts gigapixel phase images in both static and dynamic biological samples with quantitative credibility assessment. Furthermore, we show that low-certainty regions can identify spatially and temporally rare biological phenomena. We believe our uncertainty learning framework is widely applicable to many DL-based biomedical imaging techniques for assessing the reliability of DL predictions.
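Uncertainty quantification of this kind can be illustrated with Monte Carlo sampling of a stochastic network. The sketch below is not the paper's Bayesian CNN: a random linear map with dropout stands in for the trained model, but the recipe of repeated stochastic forward passes, with the sample mean as the prediction and the sample standard deviation as a per-output confidence estimate, is the same in spirit.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16))      # hypothetical "trained" weights
x = rng.standard_normal(16)            # one input measurement

def stochastic_forward(x, p=0.5):
    # Resample a dropout mask on every pass so predictions vary.
    mask = rng.random(W.shape) > p
    return (W * mask) @ x / (1 - p)

samples = np.stack([stochastic_forward(x) for _ in range(200)])
prediction = samples.mean(axis=0)      # surrogate for the network output
uncertainty = samples.std(axis=0)      # per-output uncertainty map
```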
Deep speckle correlation: a deep learning approach towards scalable imaging through scattering media
Yunzhe Li, Yujia Xue, Lei Tian
Optica 5, 1181-1190 (2018).
Imaging through scattering is an important yet challenging problem. Tremendous progress has been made by exploiting the deterministic input–output “transmission matrix” for a fixed medium. However, this “one-to-one” mapping is highly susceptible to speckle decorrelations – small perturbations to the scattering medium lead to model errors and severe degradation of the imaging performance. Our goal here is to develop a new framework that is highly scalable with respect to both medium perturbations and measurement requirements. To do so, we propose a statistical “one-to-all” deep learning (DL) technique that encapsulates a wide range of statistical variations for the model to be resilient to speckle decorrelations. Specifically, we develop a convolutional neural network (CNN) that is able to learn the statistical information contained in the speckle intensity patterns captured on a set of diffusers having the same macroscopic parameter. We then show for the first time, to the best of our knowledge, that the trained CNN is able to generalize and make high-quality object predictions through an entirely different set of diffusers of the same class. Our work paves the way to a highly scalable DL approach for imaging through scattering media.
Deep learning approach to Fourier ptychographic microscopy
Thanh Nguyen, Yujia Xue, Yunzhe Li, Lei Tian, George Nehmetallah
Opt. Express 26, 26470-26484 (2018).
Convolutional neural networks (CNNs) have gained tremendous success in solving complex inverse problems. The aim of this work is to develop a novel CNN framework to reconstruct video sequences of dynamic live cells captured using a computational microscopy technique, Fourier ptychographic microscopy (FPM). The unique feature of the FPM is its capability to reconstruct images with both wide field-of-view (FOV) and high resolution, i.e. a large space-bandwidth-product (SBP), by taking a series of low resolution intensity images. For live cell imaging, a single FPM frame contains thousands of cell samples with different morphological features. Our idea is to fully exploit the statistical information provided by this large spatial ensemble so as to make predictions in a sequential measurement, without using any additional temporal dataset. Specifically, we show that it is possible to reconstruct high-SBP dynamic cell videos by a CNN trained only on the first FPM dataset captured at the beginning of a time-series experiment. Our CNN approach reconstructs a 12,800 × 10,800-pixel phase image in only ~25 seconds, a 50× speedup compared to the model-based FPM algorithm. In addition, the CNN further reduces the required number of images in each time frame by ~6×. Overall, this significantly improves the imaging throughput by reducing both the acquisition and computational times. The proposed CNN is based on the conditional generative adversarial network (cGAN) framework. Additionally, we exploit transfer learning so that our pre-trained CNN can be further optimized to image other cell types. Our technique demonstrates a promising deep learning approach to continuously monitor large live-cell populations over an extended time and gather useful spatial and temporal information with sub-cellular resolution.
High-throughput intensity diffraction tomography with a computational microscope
Ruilong Ling, Waleed Tahir, Hsing-Ying Lin, Hakho Lee, and Lei Tian
Biomed. Opt. Express 9, 2130-2141 (2018).
We demonstrate a motion-free intensity diffraction tomography technique that enables the direct inversion of 3D phase and absorption from intensity-only measurements for weakly scattering samples. We derive a novel linear forward model featuring slice-wise phase and absorption transfer functions using angled illumination. This new framework facilitates flexible and efficient data acquisition, enabling arbitrary sampling of the illumination angles. The reconstruction algorithm performs 3D synthetic aperture using a robust computation and memory efficient slice-wise deconvolution to achieve resolution up to the incoherent limit. We demonstrate our technique with thick biological samples having both sparse 3D structures and dense cell clusters. We further investigate the limitation of our technique when imaging strongly scattering samples. Imaging performance and the influence of multiple scattering is evaluated using a 3D sample consisting of stacked phase and absorption resolution targets. This computational microscopy system is directly built on a standard commercial microscope with a simple LED array source add-on, and promises broad applications by leveraging the ubiquitous microscopy platforms with minimal hardware modifications.
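The slice-wise deconvolution step can be sketched for a single depth slice. The Gaussian transfer function below is a hypothetical stand-in for the paper's derived phase and absorption transfer functions; only the Tikhonov-regularized Fourier-domain inversion pattern carries over.

```python
import numpy as np

N = 128
phase_slice = np.zeros((N, N))
phase_slice[48:80, 48:80] = 0.5                     # toy object in one slice
fx = np.fft.fftfreq(N)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-(FX**2 + FY**2) / 0.2)                  # hypothetical transfer fn

# Linearized measurement: the slice filtered by its transfer function.
meas = np.fft.ifft2(H * np.fft.fft2(phase_slice)).real

# Slice-wise Tikhonov deconvolution: cheap, memory-efficient, per-slice.
alpha = 1e-3                                        # regularization weight
recon = np.fft.ifft2(np.conj(H) * np.fft.fft2(meas)
                     / (np.abs(H)**2 + alpha)).real
```

Each slice is recovered by an independent closed-form filter, which is why the full 3D reconstruction stays computation- and memory-efficient.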
Structured illumination microscopy with unknown patterns and a statistical prior
Li-Hao Yeh, Lei Tian, and Laura Waller
Biomed. Opt. Express 8, 695-711 (2017).
Structured illumination microscopy (SIM) improves resolution by down-modulating high-frequency information of an object to fit within the passband of the optical system. Generally, the reconstruction process requires prior knowledge of the illumination patterns, which implies a well-calibrated and aberration-free system. Here, we propose a new algorithmic self-calibration strategy for SIM that does not need to know the exact patterns a priori, but only their covariance. The algorithm, termed PE-SIMS, includes a pattern-estimation (PE) step requiring the uniformity of the sum of the illumination patterns and a SIM reconstruction procedure using a statistical prior (SIMS). Additionally, we perform a pixel reassignment process (SIMS-PR) to enhance the reconstruction quality. We achieve 2× better resolution than a conventional widefield microscope, while remaining insensitive to aberration-induced pattern distortion and robust against parameter tuning.
Compressive holographic video
Zihao Wang, Leonidas Spinoulas, Kuan He, Lei Tian, Oliver Cossairt, Aggelos K. Katsaggelos, and Huaijin Chen
Opt. Express 25, 250-262 (2017).
Compressed sensing has been discussed separately in spatial and temporal domains. Compressive holography has been introduced as a method that allows 3D tomographic reconstruction at different depths from a single 2D image. Coded exposure is a temporal compressed sensing method for high speed video acquisition. In this work, we combine compressive holography and coded exposure techniques and extend the discussion to 4D reconstruction in space and time from one coded captured image. In our prototype, digital in-line holography was used for imaging macroscopic, fast-moving objects. The pixel-wise temporal modulation was implemented by a digital micromirror device. In this paper we demonstrate 10× temporal super-resolution with recovery of multiple depths from a single image. Two examples are presented for the purpose of recording subtle vibrations and tracking small particles within 5 ms.
3D differential phase contrast microscopy
Michael Chen, Lei Tian, Laura Waller
Biomed. Opt. Express 7, 3940-3950 (2016).
We demonstrate 3D phase and absorption recovery from partially coherent intensity images captured with a programmable LED array source. Images are captured through-focus with four different illumination patterns. Using first Born and weak object approximations (WOA), a linear 3D differential phase contrast (DPC) model is derived. The partially coherent transfer functions relate the sample’s complex refractive index distribution to intensity measurements at varying defocus. Volumetric reconstruction is achieved by a global FFT-based method, without an intermediate 2D phase retrieval step. Because the illumination is spatially partially coherent, the transverse resolution of the reconstructed field achieves twice the NA of coherent systems and improved axial resolution.
Nonlinear Optimization Algorithm for Partially Coherent Phase Retrieval and Source Recovery
J. Zhong, L. Tian, P. Varma, L. Waller
IEEE Transactions on Computational Imaging 2 (3), 310 – 322 (2016).
We propose a new algorithm for recovering both complex field (phase and amplitude) and source distribution (illumination spatial coherence) from a stack of intensity images captured through focus. The joint recovery is formulated as a nonlinear least-square-error optimization problem, which is solved iteratively by a modified Gauss-Newton method. We derive the gradient and Hessian of the cost function and show that our second-order optimization approach outperforms previously proposed phase retrieval algorithms, for datasets taken with both coherent and partially coherent illumination. The method is validated experimentally in a commercial microscope with both Kohler illumination and a programmable LED dome.
Relaxation of mask design for single-shot phase imaging with a coded aperture
R. Egami, R. Horisaki, L. Tian, J. Tanida
Appl. Opt. 55, 1830-1837 (2016).
We present a method of relaxing the conditions of mask design in single-shot phase imaging with a coded aperture (SPICA), for extending the applications of SPICA. SPICA, based on compressive sensing, enables the acquisition of wide, high-resolution optical complex fields in a single exposure without the need for reference light. In our previous work on SPICA, a coded aperture (CA) was implemented with only amplitude modulation, resulting in a low transmission factor and low light efficiency because of the need for an independent phase retrieval process in the reconstruction. We attempt to alleviate these limitations by adapting a reconstruction algorithm to directly associate the phase-retrieval process with a sparsity-based reconstruction. With this approach, it is possible to realize SPICA with an amplitude-modulation-based CA having a high transmission factor, a phase-modulation-based CA, and a complex-amplitude (amplitude and phase)-modulation-based CA. We verified the effectiveness of these relaxed CA designs numerically and experimentally.
Experimental robustness of Fourier Ptychography phase retrieval algorithms
L. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, L. Waller
Opt. Express 23(26) 33212-33238 (2015).
Fourier ptychography is a new computational microscopy technique that provides gigapixel-scale intensity and phase images with both wide field-of-view and high resolution. By capturing a stack of low-resolution images under different illumination angles, an inverse algorithm can be used to computationally reconstruct the high-resolution complex field. Here, we compare and classify multiple proposed inverse algorithms in terms of experimental robustness. We find that the main sources of error are noise, aberrations and mis-calibration (i.e. model mis-match). Using simulations and experiments, we demonstrate that the choice of cost function plays a critical role, with amplitude-based cost functions performing better than intensity-based ones. The reason for this is that Fourier ptychography datasets consist of images from both brightfield and darkfield illumination, representing a large range of measured intensities. Both noise (e.g. Poisson noise) and model mis-match errors are shown to scale with intensity. Hence, algorithms that use an appropriate cost function will be more tolerant to both noise and model mis-match. Given these insights, we propose a global Newton’s method algorithm which is robust and accurate. Finally, we discuss the impact of procedures for algorithmic correction of aberrations and mis-calibration.
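One way to see why amplitude-based cost functions cope with the huge intensity range of combined brightfield and darkfield data: the square root is a variance-stabilizing transform for Poisson noise, so bright and dark pixels contribute errors on a comparable scale. A quick numerical check of this standard statistical fact:

```python
import numpy as np

rng = np.random.default_rng(4)
for mean in (10, 100, 10000):            # darkfield ... brightfield levels
    y = rng.poisson(mean, 200000)        # photon counts at this level
    # Intensity noise grows as sqrt(mean); amplitude noise stays near 0.5.
    print(mean, y.std(), np.sqrt(y).std())
```

With intensity-based costs, bright pixels dominate the residual; after the square root, all pixels carry roughly equal noise, which matches the paper's finding that amplitude-based costs tolerate both noise and model mis-match better.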
Computational illumination for high-speed in vitro Fourier ptychographic microscopy
L. Tian, Z. Liu, L. Yeh, M. Chen, J. Zhong, L. Waller
Optica 2(10), 904-911 (2015).
We demonstrate a new computational illumination technique that achieves a large space-bandwidth-time product, for quantitative phase imaging of unstained live samples in vitro. Microscope lenses can have either a large field of view (FOV) or high resolution, and not both. Fourier ptychographic microscopy (FPM) is a new computational imaging technique that circumvents this limit by fusing information from multiple images taken with different illumination angles. The result is a gigapixel-scale image having both a wide FOV and high resolution, i.e., a large space-bandwidth product. FPM has enormous potential for revolutionizing microscopy and has already found application in digital pathology. However, it suffers from long acquisition times (of the order of minutes), limiting throughput. Faster capture times would not only improve the imaging speed, but also allow studies of live samples, where motion artifacts degrade results. In contrast to fixed (e.g., pathology) slides, live samples are continuously evolving at various spatial and temporal scales. Here, we present a new source coding scheme, along with real-time hardware control, to achieve 0.8 NA resolution across a 4× FOV with subsecond capture times. We propose an improved algorithm and a new initialization scheme, which allow robust phase reconstruction over long time-lapse experiments. We present the first FPM results for both growing and confluent in vitro cell cultures, capturing videos of subcellular dynamical phenomena in popular cell lines undergoing division and migration. Our method opens up FPM to applications with live samples, for observing rare events in both space and time.
Computational imaging: Machine learning for 3D microscopy
L. Waller, L. Tian
Nature, 523, 416–417 (2015).
Self-learning based Fourier ptychographic microscopy
Y. Zhang, W. Jiang, L. Tian, L. Waller, Q. Dai
Opt. Express 23, 18471-18486 (2015).
Fourier Ptychographic Microscopy (FPM) is a newly proposed computational imaging method aimed at reconstructing a high-resolution wide-field image from a sequence of low-resolution images. These low-resolution images are captured under varied illumination angles and the FPM recovery routine then stitches them together in the Fourier domain iteratively. Although FPM has achieved success with static sample reconstructions, the long acquisition time inhibits real-time application. To address this problem, we propose here a self-learning based FPM which accelerates the acquisition and reconstruction procedure. We first capture a single image under normally incident illumination, and then use it to simulate the corresponding low-resolution images under other illumination angles. The simulation is based on the relationship between the illumination angles and the shift of the sample’s spectrum. We analyze the importance of the simulated low-resolution images in order to devise a selection scheme which only collects the ones with higher importance. The measurements are then captured with the selection scheme and employed to perform the FPM reconstruction. Since only measurements of high importance are captured, the time requirements of data collection as well as image reconstruction can be greatly reduced. We validate the effectiveness of the proposed method with simulation and experimental results showing that the reduction ratio of data size requirements can reach over 70%, without sacrificing image reconstruction quality.
3D imaging in volumetric scattering media using phase-space measurements
H. Liu, E. Jonas, L. Tian, J. Zhong, B. Recht, L. Waller
Opt. Express 23, 14461-14471 (2015).
We demonstrate the use of phase-space imaging for 3D localization of multiple point sources inside scattering material. The effect of scattering is to spread angular (spatial frequency) information, which can be measured by phase-space imaging. We derive a multi-slice forward model for homogeneous volumetric scattering, then develop a reconstruction algorithm that exploits sparsity in order to further constrain the problem. By using 4D measurements for 3D reconstruction, the dimensionality mismatch provides significant robustness to multiple scattering, with either static or dynamic diffusers. Experimentally, our high-resolution 4D phase-space data is collected by a spectrogram setup, with results successfully recovering the 3D positions of multiple LEDs embedded in turbid scattering media.
Transport of intensity phase retrieval and computational imaging for partially coherent fields: The phase space perspective
C. Zuo, Q. Chen, L. Tian, L. Waller, A. Asundi
Optics and Lasers in Engineering 71, 20-32 (2015).
The well-known transport of intensity equation (TIE) allows the phase of a coherent field to be retrieved non-interferometrically given positive defined intensity measurements and appropriate boundary conditions. However, in many cases, such as optical microscopy, the imaging system often involves extended and polychromatic sources for which the effect of partial coherence is not negligible. In this work, we present a phase-space formulation of the TIE for analyzing phase retrieval under partially coherent illumination. The conventional TIE is reformulated in the joint space-spatial frequency domain using Wigner distribution functions. The phase-space formulation clarifies the physical meaning of the phase of partially coherent fields, and enables explicit account of partial coherence effects on phase retrieval. The correspondence between the Wigner distribution function and the light field in the geometric-optics limit further enables the TIE to become a simple yet effective approach to realize high-resolution light field imaging for slowly varying phase specimens, in a purely computational way.
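For the special case of uniform intensity I0, the coherent TIE reduces to a Poisson equation, ∂I/∂z = -(I0/k)∇²φ, which is invertible with FFTs. A minimal sketch (note the forward simulation and inversion share the same discrete Laplacian, and the mean phase, undetermined by the TIE, is fixed only for comparison):

```python
import numpy as np

N, k, I0 = 128, 2 * np.pi / 0.5e-6, 1.0            # grid, wavenumber, intensity
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
phi = np.exp(-(X**2 + Y**2) / 0.1)                  # test phase object
fx = np.fft.fftfreq(N, d=x[1] - x[0])
FX, FY = np.meshgrid(fx, fx)
lap = -4 * np.pi**2 * (FX**2 + FY**2)               # Fourier symbol of Laplacian

# Forward: axial intensity derivative from the uniform-intensity TIE.
dIdz = -(I0 / k) * np.fft.ifft2(lap * np.fft.fft2(phi)).real

# Inverse: FFT-based inverse Laplacian; -1e-9 regularizes the DC bin.
phi_rec = np.fft.ifft2(np.fft.fft2(-k * dIdz / I0) / (lap - 1e-9)).real
phi_rec -= phi_rec.mean() - phi.mean()              # fix the free mean offset
```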
Multi-contrast imaging and digital refocusing on a mobile microscope with a domed LED array
Z. Phillips, M. D’Ambrosio, L. Tian, J. Rulison, H. Patel, N. Sadras, A. Gande, N. Switz, D. Fletcher, L. Waller
PLoS ONE 10, e0124938 (2015).
We demonstrate the design and application of an add-on device for improving the diagnostic and research capabilities of CellScope—a low-cost, smartphone-based point-of-care microscope. We replace the single LED illumination of the original CellScope with a programmable domed LED array. By leveraging recent advances in computational illumination, this new device enables simultaneous multi-contrast imaging with brightfield, darkfield, and phase imaging modes. Further, we scan through illumination angles to capture lightfield datasets, which can be used to recover 3D intensity and phase images without any hardware changes. This digital refocusing procedure can be used for either 3D imaging or software-only focus correction, reducing the need for precise mechanical focusing during field experiments. All acquisition and processing is performed on the mobile phone and controlled through a smartphone application, making the computational microscope compact and portable. Using multiple samples and different objective magnifications, we demonstrate that the performance of our device is comparable to that of a commercial microscope. This unique device platform extends the field imaging capabilities of CellScope, opening up new clinical and research possibilities.
Quantitative differential phase contrast imaging in an LED array microscope
L. Tian, L. Waller
Opt. Express 23, 11394-11403 (2015).
Illumination-based differential phase contrast (DPC) is a phase imaging method that uses a pair of images with asymmetric illumination patterns. Distinct from coherent techniques, DPC relies on spatially partially coherent light, providing 2× better lateral resolution, better optical sectioning and immunity to speckle noise. In this paper, we derive the 2D weak object transfer function (WOTF) and develop a quantitative phase reconstruction method that is robust to noise. The effect of spatial coherence is studied experimentally, and multiple-angle DPC is shown to provide improved frequency coverage for more stable phase recovery. Our method uses an LED array microscope to achieve real-time (10 Hz) quantitative phase imaging with in vitro live cell samples.
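The WOTF-based recovery can be sketched with hypothetical antisymmetric transfer functions: each asymmetric illumination pair yields a DPC image whose transfer function is odd along one frequency axis, so a single pair leaves one frequency line unrecoverable, and combining two orthogonal pairs in a joint Tikhonov inversion fills the coverage. The transfer functions below are toy models, not the derived 2D WOTF, and the DPC images are simulated directly in their linearized form.

```python
import numpy as np

N = 128
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
phi = 0.2 * np.exp(-(X**2 + Y**2) / 0.05)           # toy phase object
fx = np.fft.fftfreq(N)
FX, FY = np.meshgrid(fx, fx)
g = np.exp(-(FX**2 + FY**2) / 0.1)                  # smooth envelope
Hy, Hx = 1j * FY * g, 1j * FX * g                   # hypothetical odd WOTFs

# Linearized DPC images for top/bottom and left/right illumination pairs,
# i.e. what (I1 - I2)/(I1 + I2) reduces to for a weak phase object.
dpc_y = np.fft.ifft2(Hy * np.fft.fft2(phi)).real
dpc_x = np.fft.ifft2(Hx * np.fft.fft2(phi)).real

# Joint Tikhonov inversion over both illumination axes.
alpha = 1e-6
num = np.conj(Hy) * np.fft.fft2(dpc_y) + np.conj(Hx) * np.fft.fft2(dpc_x)
phi_rec = np.fft.ifft2(num / (np.abs(Hy)**2 + np.abs(Hx)**2 + alpha)).real
```

Only the DC term (the mean phase) remains undetermined, which is the generic situation for phase imaging.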
Motion deblurring with temporally coded illumination in an LED array microscope
C. Ma, Z. Liu, L. Tian, Q. Dai, L. Waller
Opt. Lett. 40, 2281-2284 (2015).
Motion blur, which results from time-averaging an image over the camera’s exposure time, is a common problem in microscopy of moving samples. Here, we demonstrate linear motion deblurring using temporally coded illumination in an LED array microscope. By illuminating moving objects with a well-designed temporal coded sequence that varies during each single camera exposure, the resulting motion blur is invertible and can be computationally removed. This scheme is implemented in an existing LED array microscope, providing benefits of being grayscale, fast, and adaptive, which leads to high-quality deblur performance and a flexible implementation with no moving parts. The proposed method is demonstrated experimentally for fast moving targets in a microfluidic environment.
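The invertibility requirement can be made concrete: a solid exposure is a temporal box blur whose spectrum has exact nulls, while an on/off sequence chosen to maximize the minimum spectral magnitude remains deconvolvable. The crude random search below only illustrates what "well-designed" means; it is not the paper's code sequence.

```python
import numpy as np

n_taps, n_fft = 32, 64

def min_mtf(code):
    # Worst-case spectral magnitude of the temporal blur kernel.
    kernel = np.zeros(n_fft)
    kernel[:n_taps] = code
    return np.abs(np.fft.fft(kernel)).min()

box = np.ones(n_taps)                  # conventional continuous exposure
rng = np.random.default_rng(6)
# Keep the random on/off sequence with the most invertible spectrum.
best = max((rng.integers(0, 2, n_taps) for _ in range(500)), key=min_mtf)
```

The box blur's spectral nulls destroy those frequencies irrecoverably; the coded sequence trades some light for a spectrum bounded away from zero, so the blur can be computationally removed.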
3D intensity and phase imaging from light field measurements in an LED array microscope
Lei Tian, L. Waller
Optica 2, 104-111 (2015).
Realizing high resolution across large volumes is challenging for 3D imaging techniques with high-speed acquisition. Here, we describe a new method for 3D intensity and phase recovery from 4D light field measurements, achieving enhanced resolution via Fourier Ptychography. Starting from geometric optics light field refocusing, we incorporate phase retrieval and correct diffraction artifacts. Further, we incorporate dark-field images to achieve lateral resolution beyond the diffraction limit of the objective (5× larger NA) and axial resolution better than the depth of field, using a low magnification objective with a large field of view. Our iterative reconstruction algorithm uses a multi-slice coherent model to estimate the 3D complex transmittance function of the sample at multiple depths, without any weak or single-scattering approximations. Data is captured by an LED array microscope with computational illumination, which enables rapid scanning of angles for fast acquisition. We demonstrate the method with thick biological samples in a modified commercial microscope, indicating the technique’s versatility for a wide range of applications.
Partially coherent phase imaging with unknown source shape
J. Zhong, Lei Tian, J. Dauwels, L. Waller
Biomedical Optics Express 6, 257-265 (2015).
We propose a new method for phase retrieval that uses partially coherent illumination created by any arbitrary source shape in Kohler geometry. Using a stack of defocused intensity images, we recover not only the phase and amplitude of the sample, but also an estimate of the unknown source shape, which describes the spatial coherence of the illumination. Our algorithm uses a Kalman filtering approach that is fast, accurate, and robust to noise. The method is experimentally simple and flexible, and so should find use in optical, electron, X-ray, and other phase imaging systems that employ partially coherent light. We provide an experimental demonstration in an optical microscope with various condenser apertures.
Empirical concentration bounds for compressive holographic bubble imaging based on a Mie scattering model
W. Chen, Lei Tian, S. Rehman, Z. Zhang, H. P. Lee, G. Barbastathis
Opt. Express 23 (2015).
We use compressive in-line holography to image air bubbles in water and investigate the effect of bubble concentration on reconstruction performance by simulation. Our forward model treats bubbles as finite spheres and uses Mie scattering to compute the scattered field in a physically rigorous manner. Although no simple analytical bounds on maximum concentration can be derived within the classical compressed sensing framework due to the complexity of the forward model, the receiver operating characteristic (ROC) curves in our simulation provide an empirical concentration bound for accurate bubble detection by compressive holography at different noise levels, yielding a maximum tolerable concentration much higher than that of the traditional back-propagation method.
Real-time brightfield, darkfield and phase contrast imaging in an LED array microscope
Z. Liu, Lei Tian, S. Liu, L. Waller
Journal of Biomedical Optics, 19(10), 106002 (2014).
We demonstrate a single-camera imaging system that can simultaneously acquire brightfield, darkfield, and phase contrast images in real time. Our method uses computational illumination via a programmable LED array at the source plane, providing flexible patterning of illumination angles. Brightfield, darkfield, and differential phase contrast (DPC) images are obtained by changing the LED patterns, without any moving parts. Previous work with LED array illumination was only valid for static samples because the hardware speed was not fast enough to meet real-time acquisition and processing requirements. Here, we time-multiplex the patterns for each of the three contrast modes in order to image dynamic biological processes in all three contrast modes simultaneously. We demonstrate multi-contrast operation at the maximum frame rate of our camera (50 Hz with 2160×2560 pixels).
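The per-frame arithmetic behind the contrast modes is simple: brightfield is the sum of the two complementary half-pupil images, and DPC is their normalized difference. The sketch below uses synthetic stand-ins for the two half-circle LED acquisitions.

```python
import numpy as np

# Minimal sketch of the multi-contrast arithmetic: brightfield is the sum of
# images taken under complementary LED half-circles, and DPC is their
# normalized difference. I_left / I_right are synthetic stand-in images.
rng = np.random.default_rng(0)
I_left = 1.0 + 0.1 * rng.random((64, 64))
I_right = 1.0 + 0.1 * rng.random((64, 64))

def dpc(I_a, I_b):
    """Differential phase contrast: the normalized difference of two
    complementary half-pupil images, proportional to the phase gradient."""
    return (I_a - I_b) / (I_a + I_b)

brightfield = I_left + I_right   # equivalent to all LEDs on within the NA
dpc_lr = dpc(I_left, I_right)    # left-right phase-gradient contrast
# Swapping the two half-circles flips the sign of the DPC image.
print(np.allclose(dpc(I_right, I_left), -dpc_lr))
```

A top/bottom LED pair yields the orthogonal phase-gradient component in exactly the same way, which is why cycling a few LED patterns suffices for all three contrast modes.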
Multiplexed coded illumination for Fourier Ptychography with an LED array microscope
Lei Tian, X. Li, K. Ramchandran, L. Waller
Biomedical Optics Express 5, 2376-2389 (2014).
Fourier Ptychography is a new computational microscopy technique that achieves gigapixel-scale images combining wide field of view with high resolution, in both phase and amplitude. The hardware setup involves a simple replacement of the microscope’s illumination unit with a programmable LED array, allowing one to flexibly pattern illumination angles without any moving parts. In previous work, a series of low-resolution images was taken by sequentially turning on each single LED in the array, and the data were then combined to recover a bandwidth much higher than that allowed by the original imaging system. Here, we demonstrate a multiplexed illumination strategy in which multiple randomly selected LEDs are turned on for each image. Since each LED corresponds to a different area of Fourier space, the total number of images can be significantly reduced without sacrificing image quality. We demonstrate this method experimentally in a modified commercial microscope. Compared to sequential scanning, our multiplexed strategy achieves similar results with approximately an order of magnitude reduction in both acquisition time and data capture requirements.
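A minimal sketch of the multiplexed capture strategy: rather than one image per LED, each frame turns on a random subset of LEDs, so far fewer frames are needed to cover the whole array. The numbers here (293 LEDs within the NA, 4 LEDs per frame) are illustrative, not the paper's exact configuration.

```python
import numpy as np

# Generate random multiplexed LED patterns: partition a random ordering of
# the LEDs into frames of `leds_per_frame` LEDs each, so every LED (i.e.,
# every region of Fourier space) is still covered by some frame.
rng = np.random.default_rng(0)
n_leds, leds_per_frame = 293, 4        # illustrative, not the paper's values
order = rng.permutation(n_leds)
frames = [order[i:i + leds_per_frame]
          for i in range(0, n_leds, leds_per_frame)]
covered = np.sort(np.concatenate(frames))
# 74 multiplexed frames cover all 293 LEDs, versus 293 sequential frames.
print(len(frames), np.array_equal(covered, np.arange(n_leds)))
```

The reconstruction then updates all Fourier-space regions associated with a frame's LEDs jointly, which is what preserves image quality despite the reduced frame count.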
3D differential phase contrast microscopy with computational illumination using an LED array
Lei Tian, J. Wang, L. Waller
Optics Letters 39, 1326 – 1329 (2014).
We demonstrate 3D differential phase-contrast (DPC) microscopy, based on computational illumination with a programmable LED array. By capturing intensity images with various illumination angles generated by sequentially patterning an LED array source, we digitally refocus images through various depths via light field processing. The intensity differences between images taken at complementary illumination angles are then used to generate DPC images, which relate to the gradient of the sample’s phase. The proposed method achieves 3D DPC with simple, inexpensive optics and no moving parts. We experimentally demonstrate our method by imaging a camel hair sample in 3D.
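The geometric "shift-and-add" refocusing step used in light field processing can be sketched as follows: an image taken under illumination angle with tangent (tan_x, tan_y) is laterally shifted in proportion to the refocus depth dz, and the shifted images are averaged. Integer pixel-unit shifts are an illustrative simplification here.

```python
import numpy as np

def refocus(images, tangents, dz):
    """Shift-and-add digital refocusing: shift each angle's image by
    dz * (tan_x, tan_y) in pixels, then average over all angles."""
    out = np.zeros_like(images[0], dtype=float)
    for img, (tx, ty) in zip(images, tangents):
        shift = (int(round(dz * ty)), int(round(dz * tx)))
        out += np.roll(img, shift, axis=(0, 1))
    return out / len(images)

# At dz = 0 no shift is applied, so refocusing reduces to a plain average.
imgs = [np.eye(8), 2 * np.eye(8)]
tans = [(1.0, 0.0), (-1.0, 0.0)]   # two complementary illumination angles
print(np.allclose(refocus(imgs, tans, 0.0), 1.5 * np.eye(8)))
```

For 3D DPC, the difference rather than the average of the refocused complementary-angle stacks is taken at each depth, yielding a phase-gradient image per refocus plane.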