Full publication list at Google Scholar.


Pupil engineering for extended depth-of-field imaging in a fluorescence miniscope
Joseph Greene, Yujia Xue, Jeffrey Alido, Alex Matlock, Guorong Hu, Kivilcim Kiliç, Ian Davison, Lei Tian
Neurophotonics, Vol. 10, Issue 4, 044302 (2023).

Fluorescence head-mounted microscopes, i.e., miniscopes, have emerged as powerful tools to analyze in vivo neural populations but exhibit a limited depth of field (DoF) due to the use of high-numerical-aperture (NA) gradient refractive index (GRIN) objective lenses. We present the extended depth-of-field miniscope (EDoF-Miniscope), which integrates an optimized thin and lightweight binary diffractive optical element (DOE) onto the GRIN lens of a miniscope to extend the DoF by 2.8× between twin foci in fixed scattering samples. We use a genetic algorithm that accounts for the GRIN lens's aberrations and the intensity loss from scattering in a Fourier-optics forward model to optimize the DOE, which we manufacture through single-step photolithography. We integrate the DOE into EDoF-Miniscope with a lateral accuracy of 70 μm to produce high-contrast signals without compromising the speed, spatial resolution, size, or weight. We characterize the performance of EDoF-Miniscope on 5- and 10-μm fluorescent beads embedded in scattering phantoms and demonstrate that EDoF-Miniscope facilitates deeper interrogation of neuronal populations in a 100-μm-thick mouse brain sample and of vessels in a whole mouse brain sample. Built from off-the-shelf components and augmented by a customizable DOE, this low-cost EDoF-Miniscope may find utility in a wide range of neural recording applications.
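The DOE design above couples a Fourier-optics point-spread-function model with a genetic search. The sketch below is a minimal illustration of that idea, not the paper's implementation: it evolves a binary 0/π phase mask over a few radial pupil zones to maximize the worst-case on-axis intensity across a defocus range. All grid sizes, zone counts, and GA parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pupil-plane grid (normalized radius) and circular aperture.
N = 64
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
aperture = R2 <= 1.0

# Genome: K bits, one per radial zone of a binary 0/pi phase DOE.
K = 8
zone = np.minimum((np.sqrt(R2) * K).astype(int), K - 1)

def onaxis_intensity(genome, defocus):
    """On-axis PSF intensity of the masked pupil at a given defocus (in waves)."""
    mask = genome[zone]                          # expand bits to the 2D pupil
    pupil = aperture * np.exp(1j * np.pi * mask + 2j * np.pi * defocus * R2)
    field = np.fft.fft2(pupil)
    return np.abs(field[0, 0]) ** 2              # DC pixel = on-axis focal point

def fitness(genome, defoci=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """EDoF criterion: worst-case on-axis intensity over the target defocus range."""
    return min(onaxis_intensity(genome, d) for d in defoci)

# Elitist genetic algorithm over binary genomes.
pop = rng.integers(0, 2, size=(30, K))
pop[0] = 0                                       # include the plain (no-DOE) pupil
for gen in range(40):
    fits = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(fits)[::-1][:10]]     # keep the 10 best (elitism)
    parents = elite[rng.integers(0, 10, size=(20, 2))]
    cut = rng.integers(1, K, size=20)            # single-point crossover
    kids = np.where(np.arange(K) < cut[:, None], parents[:, 0], parents[:, 1])
    flip = rng.random((20, K)) < 0.05            # bit-flip mutation
    pop = np.vstack([elite, np.where(flip, 1 - kids, kids)])

best = max(pop, key=fitness)
```

Because the initial population contains the flat (no-DOE) pupil and elitism never discards the current best, the evolved mask is guaranteed to score at least as well as no mask at all on this toy criterion.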

High-Speed Low-Light In Vivo Two-Photon Voltage Imaging of Large Neuronal Populations
Jelena Platisa, Xin Ye, Allison M Ahrens, Chang Liu, Ichun A Chen, Ian G Davison, Lei Tian, Vincent A Pieribone, Jerry L Chen
Nature Methods (2023).
 Github Project

Monitoring spiking activity across large neuronal populations at behaviorally relevant timescales is critical for understanding neural circuit function. Unlike calcium imaging, voltage imaging requires kilohertz sampling rates that reduce fluorescence detection to near shot-noise levels. High photon-flux excitation can overcome photon-limited shot noise, but photobleaching and photodamage restrict the number and duration of simultaneously imaged neurons. We investigated an alternative approach based on low photon-flux two-photon excitation, that is, voltage imaging below the shot-noise limit. This framework involved developing positive-going voltage indicators with improved spike detection (SpikeyGi and SpikeyGi2); a two-photon microscope (‘SMURF’) for kilohertz frame-rate imaging across a 0.4 mm × 0.4 mm field of view; and a self-supervised denoising algorithm (DeepVID) for inferring fluorescence from shot-noise-limited signals. Through these combined advances, we achieved simultaneous high-speed deep-tissue imaging of more than 100 densely labeled neurons over one hour in awake behaving mice. This demonstrates a scalable approach for voltage imaging across increasing neuronal populations.


Robust single-shot 3D fluorescence imaging in scattering media with a simulator-trained neural network
J. Alido, J. Greene, Y. Xue, G. Hu, Y. Li, K. Monk, B. DeBenedicts, I. Davison, L. Tian

Imaging through scattering is a pervasive and difficult problem in many biological applications. The high background and the exponentially attenuated target signals due to scattering fundamentally limit the imaging depth of fluorescence microscopy. Light-field systems are favorable for high-speed volumetric imaging, but the 2D-to-3D reconstruction is fundamentally ill-posed, and scattering exacerbates the conditioning of the inverse problem. Here, we develop a scattering simulator that models low-contrast target signals buried in a heterogeneous strong background. We then train a deep neural network solely on synthetic data to descatter and reconstruct a 3D volume from a single-shot light-field measurement with a low signal-to-background ratio (SBR). We apply this network to our previously developed Computational Miniature Mesoscope and demonstrate the robustness of our deep learning algorithm on a 75-μm-thick fixed mouse brain section and on bulk scattering phantoms with different scattering conditions. The network can robustly reconstruct emitters in 3D from a 2D measurement with an SBR as low as 1.05 and at depths up to one scattering length. We analyze fundamental tradeoffs based on network design factors and out-of-distribution data that affect the deep learning model's generalizability to real experimental data. Broadly, we believe that our simulator-based deep learning approach can be applied to a wide range of imaging-through-scattering techniques where experimental paired training data are lacking.

3D Chemical Imaging by Fluorescence-detected Mid-Infrared Photothermal Fourier Light Field Microscopy
Danchen Jia, Yi Zhang, Qianwan Yang, Yujia Xue, Yuying Tan, Zhongyue Guo, Meng Zhang, Lei Tian, Ji-Xin Cheng
Chem. Biomed. Imaging (2023).

Three-dimensional molecular imaging of living organisms and cells plays a significant role in modern biology. Yet, current volumetric imaging modalities are largely fluorescence-based and thus lack chemical content information. Mid-infrared photothermal microscopy is a chemical imaging technology that provides infrared spectroscopic information at submicrometer spatial resolution. Here, by harnessing thermosensitive fluorescent dyes to sense the mid-infrared photothermal effect, we demonstrate 3D fluorescence-detected mid-infrared photothermal Fourier light field (FMIP-FLF) microscopy at a speed of 8 volumes per second with submicrometer spatial resolution. Protein contents in bacteria and lipid droplets in living pancreatic cancer cells are visualized. Altered lipid metabolism in drug-resistant pancreatic cancer cells is observed with the FMIP-FLF microscope.

Fourier ptychographic topography
H. Wang, J. Zhu, J. Sung, G. Hu, J. Greene, Y. Li, S. Park, W. Kim, M. Lee, Y. Yang, L. Tian
Optics Express 31, pp. 11007-11018 (2023)

Topography measurement is essential for surface characterization, semiconductor metrology, and inspection applications. To date, performing high-throughput and accurate topography remains challenging due to the trade-off between field-of-view (FOV) and spatial resolution. Here we demonstrate a novel topography technique based on reflection-mode Fourier ptychographic microscopy, termed Fourier ptychographic topography (FPT). We show that FPT provides both a wide FOV and high resolution, and achieves nanoscale height reconstruction accuracy. Our FPT prototype is based on a custom-built computational microscope consisting of programmable brightfield and darkfield LED arrays. The topography reconstruction is performed by a sequential Gauss-Newton-based Fourier ptychographic phase retrieval algorithm augmented with total variation regularization. We achieve a synthetic numerical aperture (NA) of 0.84, 3× the native objective NA (0.28), and a diffraction-limited resolution of 750 nm across a 1.2 × 1.2 mm² FOV. We experimentally demonstrate FPT on a variety of reflective samples with different patterned structures. The reconstructed resolution is validated on both amplitude and phase resolution test features. The accuracy of the reconstructed surface profile is benchmarked against high-resolution optical profilometry measurements. In addition, we show that FPT provides robust surface profile reconstructions even on complex patterns with fine features that cannot be reliably measured by a standard optical profilometer. The spatial and temporal noise of our FPT system is characterized as 0.529 nm and 0.027 nm, respectively.
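The core of any Fourier ptychographic reconstruction is stitching a high-resolution spectrum together from many low-resolution images taken under different illumination angles. The toy below illustrates that stitching with a simple sequential Gerchberg-Saxton-style amplitude-replacement update on synthetic noiseless data; it is not the paper's Gauss-Newton solver with total-variation regularization, and all grid sizes, pupil radii, and spectrum shifts are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

Nh, Nl = 64, 32                          # high-res spectrum / low-res camera grids
obj = np.exp(1j * 0.5 * rng.standard_normal((Nh, Nh)))   # synthetic phase object
O_true = np.fft.fftshift(np.fft.fft2(obj))

# Binary pupil of the low-NA objective in the (centered) low-res Fourier plane.
u = np.arange(Nl) - Nl // 2
UX, UY = np.meshgrid(u, u)
pupil = UX**2 + UY**2 <= (Nl // 3)**2

# Each LED tilts the illumination, shifting which spectrum window is sampled.
shifts = [(sy, sx) for sy in (-8, 0, 8) for sx in (-8, 0, 8)]

def window(O, sy, sx):
    """Centered Nl x Nl window of the high-res spectrum, shifted by (sy, sx)."""
    cy, cx = Nh // 2 + sy, Nh // 2 + sx
    return O[cy - Nl // 2:cy + Nl // 2, cx - Nl // 2:cx + Nl // 2]

# Simulated low-res amplitude measurements (square root of intensity images).
meas = [np.abs(np.fft.ifft2(np.fft.ifftshift(window(O_true, sy, sx) * pupil)))
        for sy, sx in shifts]

def residual(O):
    """Data-consistency error of a candidate high-res spectrum O."""
    return sum(np.sum((np.abs(np.fft.ifft2(np.fft.ifftshift(
        window(O, sy, sx) * pupil))) - a) ** 2)
        for (sy, sx), a in zip(shifts, meas))

# Sequential update: enforce each measured amplitude, keep the running phase
# estimate, and paste the corrected patch back into the spectrum.
O = np.fft.fftshift(np.fft.fft2(np.ones((Nh, Nh), dtype=complex)))  # flat init
err0 = residual(O)
for _ in range(20):
    for (sy, sx), a in zip(shifts, meas):
        lr = np.fft.ifft2(np.fft.ifftshift(window(O, sy, sx) * pupil))
        lr = a * np.exp(1j * np.angle(lr))           # amplitude replacement
        patch = np.fft.fftshift(np.fft.fft2(lr))
        window(O, sy, sx)[pupil] = patch[pupil]      # paste inside pupil support
err1 = residual(O)
```

Because adjacent pupil windows overlap, each measurement constrains its neighbors, which is what lets the synthetic aperture exceed the objective NA.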

Multiple-scattering simulator-trained neural network for intensity diffraction tomography
A. Matlock, J. Zhu, L. Tian
Optics Express 31, 4094-4107 (2023)

Recovering 3D phase features of complex biological samples traditionally sacrifices computational efficiency and processing time for physical model accuracy and reconstruction quality. Here, we overcome this challenge using an approximant-guided deep learning framework in a high-speed intensity diffraction tomography system. Applying a physics model simulator-based learning strategy trained entirely on natural image datasets, we show our network can robustly reconstruct complex 3D biological samples. To achieve highly efficient training and prediction, we implement a lightweight 2D network structure that utilizes a multi-channel input for encoding the axial information. We demonstrate this framework on experimental measurements of weakly scattering epithelial buccal cells and strongly scattering C. elegans worms. We benchmark the network’s performance against a state-of-the-art multiple-scattering model-based iterative reconstruction algorithm. We highlight the network’s robustness by reconstructing dynamic samples from a living worm video. We further emphasize the network’s generalization capabilities by recovering algae samples imaged from different experimental setups. To assess the prediction quality, we develop a quantitative evaluation metric to show that our predictions are consistent with both multiple-scattering physics and experimental measurements.

Bond-Selective Intensity Diffraction Tomography
Jian Zhao, Alex Matlock, Hongbo Zhu, Ziqi Song, Jiabei Zhu, Biao Wang, Fukai Chen, Yuewei Zhan, Zhicong Chen, Yihong Xu, Xingchen Lin, Lei Tian, Ji-Xin Cheng
Nature Communications 13, 7767 (2022).

Recovering molecular information remains a grand challenge in the widely used holographic and computational imaging technologies. To address this challenge, we developed a computational mid-infrared photothermal microscope, termed Bond-selective Intensity Diffraction Tomography (BS-IDT). Based on a low-cost brightfield microscope with an add-on pulsed light source, BS-IDT recovers both infrared spectra and bond-selective 3D refractive index maps from intensity-only measurements. High-fidelity infrared fingerprint spectra extraction is validated. Volumetric chemical imaging of biological cells is demonstrated at a speed of ~20 s per volume, with a lateral and axial resolution of ~350 nm and ~1.1 µm, respectively. BS-IDT's application potential is investigated by chemically quantifying lipids stored in cancer cells and by volumetric chemical imaging of Caenorhabditis elegans with a large field of view (~100 µm × 100 µm).

Recovery of Continuous 3D Refractive Index Maps from Discrete Intensity-Only Measurements using Neural Fields
Renhao Liu, Yu Sun, Jiabei Zhu, Lei Tian, Ulugbek Kamilov
Nature Machine Intelligence 4, 781–791 (2022).

Intensity diffraction tomography (IDT) refers to a class of optical microscopy techniques for imaging the three-dimensional refractive index (RI) distribution of a sample from a set of two-dimensional intensity-only measurements. The reconstruction of artefact-free RI maps is a fundamental challenge in IDT due to the loss of phase information and the missing-cone problem. Neural fields have recently emerged as a new deep learning approach for learning continuous representations of physical fields. The technique uses a coordinate-based neural network to represent the field by mapping spatial coordinates to the corresponding physical quantities, in our case the complex-valued refractive index. We present Deep Continuous Artefact-free RI Field (DeCAF) as a neural-fields-based IDT method that can learn a high-quality continuous representation of an RI volume from its intensity-only and limited-angle measurements. The representation in DeCAF is learned directly from the measurements of the test sample by using the IDT forward model, without any ground-truth RI maps. We qualitatively and quantitatively evaluate DeCAF on simulated and experimental biological samples. Our results show that DeCAF can generate high-contrast and artefact-free RI maps and leads to an up to 2.1-fold reduction in mean squared error over existing methods.
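A neural field represents the volume as a function from coordinates to values rather than as a voxel grid. The snippet below sketches only that representational idea, with an untrained two-layer network and hypothetical sizes: random Fourier features lift the (x, y, z) coordinates so a small MLP can express fine detail, and the output has two real channels standing in for a complex-valued RI. DeCAF's actual architecture and its training through the IDT forward model are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Fourier features: project coordinates through fixed random frequencies
# so the downstream MLP can represent high-frequency spatial structure.
B = rng.normal(scale=10.0, size=(3, 128))

def encode(xyz):
    proj = 2 * np.pi * xyz @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

# Tiny untrained MLP: encoded coords -> (real, imaginary) RI perturbation.
W1 = rng.normal(scale=0.1, size=(256, 64)); b1 = np.zeros(64)
W2 = rng.normal(scale=0.1, size=(64, 2));   b2 = np.zeros(2)

def ri_field(xyz):
    """Evaluate the continuous field at an (n, 3) array of coordinates."""
    h = np.maximum(encode(xyz) @ W1 + b1, 0.0)   # ReLU hidden layer
    return h @ W2 + b2

pts = rng.uniform(-1, 1, size=(5, 3))
out = ri_field(pts)                              # shape (5, 2)
```

Because the field is continuous, it can be queried at arbitrary coordinates, which is what frees the reconstruction from a fixed voxel grid and its missing-cone artefacts.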

Deep-learning-augmented Computational Miniature Mesoscope
Yujia Xue, Qianwan Yang, Guorong Hu, Kehan Guo, Lei Tian
Optica 9, 1009-1021 (2022)

 Github Project

Fluorescence microscopy is essential to study biological structures and dynamics. However, existing systems suffer from a tradeoff between field-of-view (FOV), resolution, and complexity, and thus cannot fulfill the emerging need for miniaturized platforms providing micron-scale resolution across centimeter-scale FOVs. To overcome this challenge, we developed the Computational Miniature Mesoscope (CM2), which exploits a computational imaging strategy to enable single-shot 3D high-resolution imaging across a wide FOV in a miniaturized platform. Here, we present CM2 V2, which significantly advances both the hardware and computation. We complement the 3×3 microlens array with a new hybrid emission filter that improves the imaging contrast by 5×, and design a 3D-printed freeform collimator for the LED illuminator that improves the excitation efficiency by 3×. To enable high-resolution reconstruction across the large imaging volume, we develop an accurate and efficient 3D linear shift-variant (LSV) model that characterizes the spatially varying aberrations. We then train a multi-module deep learning model, CM2Net, using only the 3D-LSV simulator. We show that CM2Net generalizes well to experiments and achieves accurate 3D reconstruction across a 7-mm FOV and 800-μm depth, with 6-μm lateral and 25-μm axial resolution. This provides 8× better axial localization and 1400× faster speed compared to the previous model-based algorithm. We anticipate this simple and low-cost computational miniature imaging system will benefit many large-scale 3D fluorescence imaging applications.

High-fidelity intensity diffraction tomography with a non-paraxial multiple-scattering model
Jiabei Zhu, Hao Wang, Lei Tian
Optics Express Vol. 30, Issue 18, pp. 32808-32821 (2022).

 Github Project

We propose a novel intensity diffraction tomography (IDT) reconstruction algorithm based on the split-step non-paraxial (SSNP) model for recovering the 3D refractive index (RI) distribution of multiple-scattering biological samples. High-quality IDT reconstruction requires high-angle illumination to encode both low- and high-spatial-frequency information of the 3D biological sample. We show that our SSNP model can compute multiple scattering from high-angle illumination more accurately than paraxial-approximation-based multiple-scattering models. We apply this SSNP model to both sequential and multiplexed IDT techniques. We develop a unified reconstruction algorithm for both IDT modalities that is highly computationally efficient and is implemented in a modular automatic differentiation framework. We demonstrate the capability of our reconstruction algorithm on weakly scattering buccal epithelial cells as well as strongly scattering live C. elegans worms and live C. elegans embryos.

Optical spatial filtering with plasmonic directional image sensors
Jianing Liu, Hao Wang, Leonard C. Kogos, Yuyu Li, Yunzhe Li, Lei Tian, and Roberto Paiella
Optics Express Vol. 30, Issue 16, pp. 29074-29087 (2022).
Editors’ pick

Photonics provides a promising approach for image processing by spatial filtering, with the advantage of faster speeds and lower power consumption compared to electronic digital solutions. However, traditional optical spatial filters suffer from bulky form factors that limit their portability. Here we present a new approach based on pixel arrays of plasmonic directional image sensors, designed to selectively detect light incident along a small, geometrically tunable set of directions. The resulting imaging systems can function as optical spatial filters without any external filtering elements, leading to extreme size miniaturization. Furthermore, they offer the distinct capability to perform multiple filtering operations at the same time, through the use of sensor arrays partitioned into blocks of adjacent pixels with different angular responses. To establish the image processing capabilities of these devices, we present a rigorous theoretical model of their filter transfer function under both coherent and incoherent illumination. Next, we use the measured angle-resolved responsivity of prototype devices to demonstrate two examples of relevant functionalities: (1) the visualization of otherwise invisible phase objects and (2) spatial differentiation with incoherent light. These results are significant for a multitude of imaging applications ranging from microscopy in biomedicine to object recognition for computer vision.


Neurophotonic tools for microscopic measurements and manipulation: status report
Ahmed Abdelfattah, Sapna Ahuja, Taner Akkin, Srinivasa Rao Allu, David A. Boas, Joshua Brake, Erin M. Buckley, Robert E. Campbell, Anderson I. Chen, Xiaojun Cheng, Tomáš Cižmár, Irene Costantini, Massimo De Vittorio, Anna Devor, Patrick R. Doran, Mirna El Khatib, Valentina Emiliani, Natalie Fomin-Thunemann, Yeshaiahu Fainman, Tomás Fernández Alfonso, Christopher G. L. Ferri, Ariel Gilad, Xue Han, Andrew Harris, Elizabeth M. C. Hillman, Ute Hochgeschwender, Matthew G. Holt, Na Ji, Kivilcim Kiliç, Evelyn M. R. Lake, Lei Li, Tianqi Li, Philipp Mächler, Rickson C. Mesquita, Evan W. Miller, K.M. Naga Srinivas Nadella, U. Valentin Nägerl, Yusuke Nasu, Axel Nimmerjahn, Petra Ondrácková, Francesco S. Pavone, Citlali Perez Campos, Darcy S. Peterka, Filippo Pisano, Ferruccio Pisanello, Francesca Puppo, Bernardo L. Sabatini, Sanaz Sadegh, Sava Sakadžic, Shy Shoham, Sanaya N. Shroff, R. Angus Silver, Ruth R. Sims, Spencer L. Smith, Vivek J. Srinivasan, Martin Thunemann, Lei Tian, Lin Tian, Thomas Troxler, Antoine Valera, Alipasha Vaziri, Sergei A. Vinogradov, Flavia Vitale, Lihong V. Wang, Hana Uhlířová, Chris Xu, Changhuei Yang, Mu-Han Yang, Gary Yellen, Ofer Yizhar, Yongxin Zhao
Neurophotonics, 9(S1), 013001 (2022).

Neurophotonics was launched in 2014, coinciding with the launch of the BRAIN Initiative focused on the development of technologies for the advancement of neuroscience. For the last seven years, Neurophotonics’ agenda has been well aligned with this focus on neurotechnologies, featuring new optical methods and tools applicable to brain studies. While the BRAIN Initiative 2.0 is pivoting towards applications of these novel tools in the quest to understand the brain, in this article we review an extensive and diverse toolkit of novel methods to explore brain function that have emerged from the BRAIN Initiative and related large-scale efforts for the measurement and manipulation of brain structure and function. Here, we focus on neurophotonic tools mostly applicable to animal studies. A companion article, scheduled to appear later this year, will cover diffuse optical imaging methods applicable to noninvasive human studies. For each domain, we outline the current state of the art of the respective technologies, identify areas where innovation is needed, and provide an outlook on future directions.

Adaptive 3D descattering with a dynamic synthesis network
Waleed Tahir, Hao Wang, Lei Tian
Light: Science & Applications 11, 42 (2022).
 On the Cover

 Github Project

Deep learning has been broadly applied to imaging in scattering applications. A common framework is to train a “descattering” neural network for image recovery by removing scattering artifacts. To achieve the best results on a broad spectrum of scattering conditions, individual “expert” networks have to be trained for each condition. However, the performance of the expert sharply degrades when the scattering level at the testing time differs from the training. An alternative approach is to train a “generalist” network using data from a variety of scattering conditions. However, the generalist generally suffers from worse performance as compared to the expert trained for each scattering condition. Here, we develop a drastically different approach, termed dynamic synthesis network (DSN), that can dynamically adjust the model weights and adapt to different scattering conditions. The adaptability is achieved by a novel architecture that enables dynamically synthesizing a network by blending multiple experts using a gating network. Notably, our DSN adaptively removes scattering artifacts across a continuum of scattering conditions regardless of whether the condition has been used for the training, and consistently outperforms the generalist. By training the DSN entirely on a multiple-scattering simulator, we experimentally demonstrate the network’s adaptability and robustness for 3D descattering in holographic 3D particle imaging. We expect the same concept can be adapted to many other imaging applications, such as denoising, and imaging through scattering media. Broadly, our dynamic synthesis framework opens up a new paradigm for designing highly adaptive deep learning and computational imaging techniques.
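The mechanism described above is a gating network that blends several expert networks into one model conditioned on the scattering level. The sketch below illustrates just that blending step, with random stand-in "experts" (here small filter kernels) and a hand-set softmax gate; in the actual DSN the gate is itself a learned network and the experts are full descattering models, so every name and parameter here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three "expert" filter banks, e.g., each trained for one scattering level.
# Random stand-ins with a common shape; real experts are full networks.
experts = [rng.standard_normal((3, 3)) for _ in range(3)]

def gate(descriptor, temperature=1.0):
    """Softmax gating: map a scalar scattering descriptor to blending weights.
    A learned gating network would replace this hand-set distance scoring."""
    centers = np.array([0.0, 0.5, 1.0])      # each expert's nominal condition
    scores = -((descriptor - centers) ** 2) / temperature
    w = np.exp(scores - scores.max())        # stable softmax
    return w / w.sum()

def synthesize(descriptor, temperature=1.0):
    """Blend the expert weights into one model for the current condition."""
    w = gate(descriptor, temperature)
    return sum(wi * ki for wi, ki in zip(w, experts))

kernel = synthesize(0.3)   # a condition between the first two experts
```

Because the gate outputs a convex combination, the synthesized model varies continuously with the descriptor, which is what lets the network interpolate across scattering conditions it never saw during training.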

Roadmap on digital holography
Bahram Javidi, Artur Carnicer, Arun Anand, George Barbastathis, Wen Chen, Pietro Ferraro, J. W. Goodman, Ryoichi Horisaki, Kedar Khare, Malgorzata Kujawinska, Rainer A. Leitgeb, Pierre Marquet, Takanori Nomura, Aydogan Ozcan, YongKeun Park, Giancarlo Pedrini, Pascal Picart, Joseph Rosen, Genaro Saavedra, Natan T. Shaked, Adrian Stern, Enrique Tajahuerce, Lei Tian, Gordon Wetzstein, and Masahiro Yamaguchi
Optics Express Vol. 29, Issue 22, pp. 35078-35118 (2021).

This Roadmap article provides an overview of the vast array of research activities in the field of digital holography. The paper consists of 25 sections from prominent experts in the field, covering various aspects of digital holography including sensing, 3D imaging and displays, virtual and augmented reality, microscopy, cell identification, tomography, label-free live cell imaging, and other applications. Each section represents its author's vision of the significant progress, potential impact, important developments, and challenging issues in the field.

Review of bio-optical imaging systems with a high space-bandwidth product
Jongchan Park, David J. Brady, Guoan Zheng, Lei Tian, Liang Gao
Advanced Photonics, 3(4), 044001 (2021).

Optical imaging has served as a primary method to collect information about biosystems across scales—from the functionality of tissues to the morphological structures of cells and even biomolecules. However, to adequately characterize a complex biosystem, an imaging system with a number of resolvable points, referred to as the space-bandwidth product (SBP), in excess of one billion is typically needed. Since a gigapixel scale far exceeds the capacity of current optical imagers, compromises must be made, yielding either a low spatial resolution or a narrow field-of-view (FOV). The problem originates from the constituent refractive optics—the larger the aperture, the more challenging the correction of lens aberrations. It is therefore impractical for a conventional optical imaging system to achieve an SBP over hundreds of millions. To address this unmet need, a variety of high-SBP imagers have emerged over the past decade, enabling an unprecedented resolution and FOV beyond the limit of conventional optics. We provide a comprehensive survey of high-SBP imaging techniques, exploring their underlying principles and applications in bioimaging.

Acousto-optic ptychography
M. Rosenfeld, G. Weinberg, D. Doktofsky, Y. Li, L. Tian, O. Katz
Optica 8, 936-943 (2021).

Acousto-optic imaging (AOI) enables optical-contrast imaging deep inside scattering samples via localized ultrasound-modulation of scattered light. While AOI allows optical investigations at depths, its imaging resolution is inherently limited by the ultrasound wavelength, prohibiting microscopic investigations. Here, we propose a computational imaging approach that allows optical diffraction-limited imaging using a conventional AOI system. We achieve this by extracting diffraction-limited imaging information from speckle correlations in the conventionally detected ultrasound-modulated scattered-light fields. Specifically, we identify that since “memory-effect” speckle correlations allow estimation of the Fourier magnitude of the field inside the ultrasound focus, scanning the ultrasound focus enables robust diffraction-limited reconstruction of extended objects using ptychography (i.e., we exploit the ultrasound focus as the scanned spatial-gate probe required for ptychographic phase retrieval). Moreover, we exploit the short speckle decorrelation-time in dynamic media, which is usually considered a hurdle for wavefront-shaping-based approaches, for improved ptychographic reconstruction. We experimentally demonstrate noninvasive imaging of targets that extend well beyond the memory-effect range, with a 40-times resolution improvement over conventional AOI.

Microsecond fingerprint stimulated Raman spectroscopic imaging by ultrafast tuning and spatial-spectral learning
H. Lin, H.J. Lee, N. Tague, J.-B. Lugagne, C. Zong, F. Deng, J. Shin, L. Tian, W. Wong, M.J. Dunlop, J.-X. Cheng
Nature Communications 12(1) (2021).
In the news:
– BU ECE news.

Label-free vibrational imaging by stimulated Raman scattering (SRS) provides unprecedented insight into real-time chemical distributions. Specifically, SRS in the fingerprint region (400–1800 cm⁻¹) can resolve multiple chemicals in a complex bio-environment. However, due to the intrinsically weak Raman cross-sections and the lack of ultrafast spectral acquisition schemes with high spectral fidelity, SRS in the fingerprint region is not viable for studying living cells or large-scale tissue samples. Here, we report a fingerprint spectroscopic SRS platform that acquires a distortion-free SRS spectrum at 10 cm⁻¹ spectral resolution within 20 μs using a polygon scanner. Meanwhile, we significantly improve the signal-to-noise ratio by employing a spatial-spectral residual learning network, reaching a level comparable to that obtained with 100-times signal integration. Collectively, our system enables high-speed vibrational spectroscopic imaging of multiple biomolecules in samples ranging from a single live microbe to a tissue slice.

Deep Learning in Biomedical Optics
L. Tian, B. Hunt, M. Bell, J. Yi, J. Smith, M. Ochoa, X. Intes, N. Durr
Lasers in Surgery and Medicine 53(6), 748, (2021).
On the Cover

This article reviews deep learning applications in biomedical optics with a particular emphasis on image formation. The review is organized by imaging domains within biomedical optics and includes microscopy, fluorescence lifetime imaging, in vivo microscopy, widefield endoscopy, optical coherence tomography, photoacoustic imaging, diffuse tomography, and functional optical brain imaging. For each of these domains, we summarize how deep learning has been applied and highlight methods by which deep learning can enable new capabilities for optics in medicine. Challenges and opportunities to improve translation and adoption of deep learning in biomedical optics are also summarized.

Large-scale holographic particle 3D imaging with the beam propagation model
Hao Wang, Waleed Tahir, Jiabei Zhu, Lei Tian
Opt. Express 29, 17159-17172 (2021)

 Github Project

We develop a novel algorithm for large-scale holographic reconstruction of 3D particle fields. Our method is based on a multiple-scattering beam propagation method (BPM) combined with sparse regularization that enables recovering dense 3D particles of high refractive index contrast from a single hologram. We show that the BPM-computed hologram generates intensity statistics closely matching with the experimental measurements and provides up to 9× higher accuracy than the single-scattering model. To solve the inverse problem, we devise a computationally efficient algorithm, which reduces the computation time by two orders of magnitude as compared to the state-of-the-art multiple-scattering-based technique. We demonstrate superior reconstruction accuracy in both simulations and experiments under different scattering strengths. We show that the BPM reconstruction significantly outperforms the single-scattering method in particular for deep imaging depths and high particle densities.
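The forward model underlying the reconstruction is a split-step beam propagation method: each slice alternates a Fourier-domain diffraction step with a thin phase screen carrying that slice's refractive-index contrast. A minimal sketch, with illustrative units and parameters (micron-scale grid, hypothetical index contrasts):

```python
import numpy as np

def bpm_propagate(field, n_slices, dz, wavelength, dx, delta_n):
    """Split-step BPM: angular-spectrum diffraction over dz, then a thin
    phase screen for each slice's refractive-index contrast delta_n[s]."""
    N = field.shape[0]
    fx = np.fft.fftfreq(N, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)       # drop evanescent components
    k0 = 2 * np.pi / wavelength
    for s in range(n_slices):
        field = np.fft.ifft2(np.fft.fft2(field) * H)       # diffraction step
        field = field * np.exp(1j * k0 * delta_n[s] * dz)  # phase screen
    return field

# Demo: a Gaussian beam through 10 empty slices (zero index contrast).
N, dx, wl = 128, 0.5, 0.5                      # grid pixels, µm/pixel, µm
x = (np.arange(N) - N / 2) * dx
XX, YY = np.meshgrid(x, x)
field0 = np.exp(-(XX**2 + YY**2) / (2 * 8.0**2))
out = bpm_propagate(field0, n_slices=10, dz=1.0,
                    wavelength=wl, dx=dx, delta_n=np.zeros(10))
```

In free space the angular-spectrum step is a pure phase filter, so total power is conserved; this unitarity makes a convenient sanity check, and the per-slice phase screens are where particles with high index contrast enter the model.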

Single-cell cytometry via multiplexed fluorescence prediction by label-free reflectance microscopy
Shiyi Cheng, Sipei Fu, Yumi Mun Kim, Weiye Song, Yunzhe Li, Yujia Xue, Ji Yi, Lei Tian
Science Advances 7(3), eabe0431 (2021).
In the news:
– BU Hariri Institute: Deep Learning Allows for Digital Labeling of Multiple Cellular Structures

Traditional imaging cytometry uses fluorescence markers to identify specific structures but is limited in throughput by the labeling process. We develop a label-free technique that removes the need for physical staining and provides multiplexed readouts via a deep learning–augmented digital labeling method. We leverage the rich structural information and superior sensitivity of reflectance microscopy and show that digital labeling predicts accurate subcellular features after training on immunofluorescence images. We demonstrate up to a threefold improvement in prediction accuracy over the state of the art. Beyond fluorescence prediction, we show that single-cell-level structural phenotypes of cell cycles are correctly reproduced by the digital multiplexed images, including Golgi twins, Golgi haze during mitosis, and DNA synthesis. We further show that the multiplexed readouts enable accurate multiparametric single-cell profiling across a large cell population. Our method can markedly improve the throughput of imaging cytometry toward applications in phenotyping, pathology, and high-content screening.

Displacement-agnostic coherent imaging through scatter with an interpretable deep neural network
Y Li, S Cheng, Y Xue, L Tian
Optics Express Vol. 29, Issue 2, pp. 2244-2257 (2021).

Coherent imaging through scatter is a challenging task. Both model-based and data-driven approaches have been explored to solve the inverse scattering problem. In our previous work, we have shown that a deep learning approach can make high-quality and highly generalizable predictions through unseen diffusers. Here, we propose a new deep neural network model that is agnostic to a broader class of perturbations including scatterer change, displacements, and system defocus up to 10× depth of field. In addition, we develop a new analysis framework for interpreting the mechanism of our deep learning model and visualizing its generalizability based on an unsupervised dimension reduction technique. We show that our model can unmix the scattering-specific information and extract the object-specific information and achieve generalization under different scattering conditions. Our work paves the way to a robust and interpretable deep learning approach to imaging through scattering media.

Anatomical modeling of brain vasculature in two-photon microscopy by generalizable deep learning
Waleed Tahir, Sreekanth Kura, Jiabei Zhu, Xiaojun Cheng, Rafat Damseh, Fetsum Tadesse, Alex Seibel, Blaire S. Lee, Frederic Lesage, Sava Sakadzic, David A. Boas, Lei Tian
BME Frontiers, vol. 2021, Article ID 8620932

 Github Project

Segmentation of blood vessels from two-photon microscopy (2PM) angiograms of brains has important applications in hemodynamic analysis and disease diagnosis. Here we develop a generalizable deep-learning technique for accurate 2PM vascular segmentation of sizable regions in mouse brains acquired from multiple 2PM setups. In addition, the technique is computationally efficient, making it ideal for large-scale neurovascular analysis.

Single-Shot 3D Widefield Fluorescence Imaging with a Computational Miniature Mesoscope
Yujia Xue, Ian G. Davison, David A. Boas, Lei Tian
Science Advances 6, eabb7508 (2020).
On the Cover
In the news:
– BU ENG news: Brain Imaging Scaled Down
– BU CISE news: How Computational Imaging is Helping to Advance In-Vivo Studies of Brain Function

 Github Project

Fluorescence microscopes are indispensable to biology and neuroscience. The need for recording in freely behaving animals has further driven the development of miniaturized microscopes (miniscopes). However, conventional microscopes/miniscopes are inherently constrained by their limited space-bandwidth product, shallow depth of field (DOF), and inability to resolve three-dimensional (3D) distributed emitters. Here, we present a Computational Miniature Mesoscope (CM2) that overcomes these bottlenecks and enables single-shot 3D imaging across an 8 mm by 7 mm field of view and 2.5-mm DOF, achieving 7-μm lateral resolution and better than 200-μm axial resolution. The CM2 features a compact, lightweight design that integrates a microlens array for imaging and a light-emitting diode array for excitation. Its expanded imaging capability is enabled by computational imaging that augments the optics with algorithms. We experimentally validate the mesoscopic imaging capability on 3D fluorescent samples. We further quantify the effects of scattering and background fluorescence in phantom experiments.


Single-Shot Ultraviolet Compressed Ultrafast Photography
Yingming Lai, Yujia Xue, Christian-Yves Côté, Xianglei Liu, Antoine Laramée, Nicolas Jaouen, François Légaré, Lei Tian, Jinyang Liang
Laser & Photonics Reviews 2020, 14, 2000122.
On the Cover

Compressed ultrafast photography (CUP) is an emerging technique that allows imaging a nonrepeatable or difficult-to-produce transient event in a single shot. Despite many recent advances, existing CUP techniques operate only at visible and near-infrared wavelengths. In addition, spatial encoding via a digital micromirror device (DMD) in CUP systems often limits the field of view and imaging speed. Finally, conventional reconstruction algorithms offer limited control of the reconstruction process to further improve the image quality in the recovered datacubes of the scene. To overcome these limitations, this article reports a single-shot UV-CUP that exhibits a sequence depth of up to 1500 frames with a size of 1750 × 500 pixels at an imaging speed of 0.5 trillion frames per second. A patterned photocathode is integrated into a streak camera, which overcomes the previous restrictions in DMD-based spatial encoding and improves the system's compactness. Meanwhile, the plug-and-play alternating direction method of multipliers algorithm is implemented in CUP's image reconstruction to enhance reconstructed image quality. UV-CUP's single-shot ultrafast imaging ability is demonstrated by recording UV pulses transmitting through various spatial patterns. UV-CUP is expected to find many applications in both fundamental and applied science.


SIMBA: Scalable Inversion in Optical Tomography using Deep Denoising Priors
Zihui Wu, Yu Sun, Alex Matlock, Jiaming Liu, Lei Tian, Ulugbek S. Kamilov
IEEE Journal of Selected Topics in Signal Processing 14(6), 2020.

Two features desired in a three-dimensional (3D) optical tomographic image reconstruction algorithm are the ability to reduce imaging artifacts and to do fast processing of large data volumes. Traditional iterative inversion algorithms are impractical in this context due to their heavy computational and memory requirements. We propose and experimentally validate a novel scalable iterative minibatch algorithm (SIMBA) for fast and high-quality optical tomographic imaging. SIMBA enables high-quality imaging by combining two complementary information sources: the physics of the imaging system characterized by its forward model and the imaging prior characterized by a denoising deep neural net. SIMBA easily scales to very large 3D tomographic datasets by processing only a small subset of measurements at each iteration. We establish the theoretical fixed-point convergence of SIMBA under nonexpansive denoisers for convex data-fidelity terms. We validate SIMBA on both simulated and experimentally collected intensity diffraction tomography (IDT) datasets. Our results show that SIMBA can significantly reduce the computational burden of 3D image formation without sacrificing the imaging quality.
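The core SIMBA update, a minibatch gradient step on the data fidelity followed by a denoiser, can be sketched on a toy linear problem. The operators, the soft-thresholding "denoiser," and all parameters below are illustrative stand-ins, not the paper's intensity diffraction tomography model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inverse problem: recover x from many linear measurements y_i = A_i x.
n, m, num_meas = 64, 32, 40
A = [rng.standard_normal((m, n)) for _ in range(num_meas)]
x_true = rng.standard_normal(n)
y = [Ai @ x_true for Ai in A]

def soft_threshold(v, tau):
    """Stand-in denoiser D: soft-thresholding, a nonexpansive proximal map."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def simba_step(x, batch, step=1e-3, tau=1e-3):
    """One SIMBA iteration: minibatch data-fidelity gradient, then denoise."""
    grad = sum(A[i].T @ (A[i] @ x - y[i]) for i in batch) / len(batch)
    return soft_threshold(x - step * grad, tau)

x = np.zeros(n)
for _ in range(2000):
    batch = rng.choice(num_meas, size=5, replace=False)  # small random subset
    x = simba_step(x, batch)

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative error
```

Because each iteration touches only a few measurements, memory and per-iteration cost stay flat as the dataset grows, which is the scalability argument made above.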

Resolution-enhanced intensity diffraction tomography in high numerical aperture label-free microscopy
Jiaji Li, Alex Matlock, Yunzhe Li, Qian Chen, Lei Tian, and Chao Zuo
Photonics Research 8(12), 1818-1826 (2020)

We propose label-free and motion-free resolution-enhanced intensity diffraction tomography (reIDT) recovering the 3D complex refractive index distribution of an object. By combining an annular illumination strategy with a high numerical aperture (NA) condenser, we achieve near-diffraction-limited lateral resolution of 346 nm and axial resolution of 1.2 μm over a 130 μm × 130 μm × 8 μm volume. Our annular pattern matches the system's maximum NA to reduce the data requirement to 48 intensity frames. The reIDT system is directly built on a standard commercial microscope with simple LED array source and condenser add-ons, and promises broad applications for natural biological imaging with minimal hardware modifications. To test the capabilities of our technique, we present 3D complex refractive index reconstructions of an absorptive USAF resolution target and HeLa and HT29 human cancer cells. Our work provides an important step in intensity-based diffraction tomography toward high-resolution imaging applications.

Diffuser-based computational imaging funduscope
Yunzhe Li, Gregory N. McKay, Nicholas J. Durr, Lei Tian
Optics Express 28, 19641-19654 (2020)

Poor access to eye care is a major global challenge that could be ameliorated by low-cost, portable, and easy-to-use diagnostic technologies. Diffuser-based imaging has the potential to enable inexpensive, compact optical systems that can reconstruct a focused image of an object over a range of defocus errors. Here, we present a diffuser-based computational funduscope that reconstructs important clinical features of a model eye. Compared to existing diffuser-imager architectures, our system features an infinite-conjugate design by relaying the ocular lens onto the diffuser. This offers shift-invariance across a wide field-of-view (FOV) and an invariant magnification across an extended depth range. Experimentally, we demonstrate fundus image reconstruction over a 33° FOV and robustness to ±4D refractive error using a constant point-spread-function. Combined with diffuser-based wavefront sensing, this technology could enable combined ocular aberrometry and funduscopic screening through a single diffuser sensor.
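Because the infinite-conjugate design yields a shift-invariant, defocus-robust point-spread function, image recovery reduces to a single deconvolution with one calibrated PSF. A minimal sketch with a synthetic sparse caustic-like PSF and a Wiener filter; all shapes, the PSF model, and the SNR value are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 128
obj = np.zeros((N, N))
obj[40:60, 50:90] = 1.0                          # simple binary "fundus feature"

idx = rng.choice(N * N, size=20, replace=False)  # sparse caustic-like PSF: 20 points
psf = np.zeros(N * N)
psf[idx] = 1.0
psf = psf.reshape(N, N) / 20.0

OTF = np.fft.fft2(np.fft.ifftshift(psf))          # optical transfer function
meas = np.fft.ifft2(np.fft.fft2(obj) * OTF).real  # diffuser measurement (noise-free)

snr = 1e3                                         # assumed signal-to-noise ratio
wiener = np.conj(OTF) / (np.abs(OTF) ** 2 + 1.0 / snr)
rec = np.fft.ifft2(np.fft.fft2(meas) * wiener).real
print(np.abs(rec - obj).mean())                   # mean reconstruction error
```

The same constant filter applies across the whole field and depth range, which is the practical payoff of the shift- and magnification-invariance described above.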


Comparing the fundamental imaging depth limit of two-photon, three-photon, and non-degenerate two-photon microscopy
Xiaojun Cheng, Sanaz Sadegh, Sharvari Zilpelwar, Anna Devor, Lei Tian, and David A. Boas
Optics Letters 45, pp. 2934-2937 (2020).

We have systematically characterized the degradation of imaging quality with depth in deep brain multi-photon microscopy, utilizing our recently developed numerical model that computes wave propagation in scattering media. The signal-to-background ratio (SBR) and the resolution, determined by the width of the point spread function, are obtained as functions of depth. We compare the imaging quality of two-photon (2PM), three-photon (3PM), and non-degenerate two-photon microscopy (ND-2PM) for mouse brain imaging. We show that the imaging depths of 2PM and ND-2PM are fundamentally limited by the SBR, while the SBR remains approximately invariant with imaging depth for 3PM. Instead, the imaging depth of 3PM is limited by the degradation of the resolution, if there is sufficient laser power to maintain the signal level at large depths. The roles of the concentration of dye molecules, the numerical aperture of the input light, the anisotropy factor, the noise level, the input laser power, and the effect of temporal broadening are also discussed.

Plasmonic ommatidia for lensless compound-eye vision
Leonard C. Kogos, Yunzhe Li, Jianing Liu, Yuyu Li, Lei Tian & Roberto Paiella
Nature Communications 11: 1637 (2020).
In the news:
– BU ENG news: A Bug’s-Eye View
Highlighted in OPN Optics in 2020: Plasmonic Computational Compound-Eye Camera

 Github Project

The vision system of arthropods such as insects and crustaceans is based on the compound-eye architecture, consisting of a dense array of individual imaging elements (ommatidia) pointing along different directions. This arrangement is particularly attractive for imaging applications requiring extreme size miniaturization, wide-angle fields of view, and high sensitivity to motion. However, the implementation of cameras directly mimicking the eyes of common arthropods is complicated by their curved geometry. Here, we describe a lensless planar architecture, where each pixel of a standard image-sensor array is coated with an ensemble of metallic plasmonic nanostructures that only transmits light incident along a small geometrically-tunable distribution of angles. A set of near-infrared devices providing directional photodetection peaked at different angles is designed, fabricated, and tested. Computational imaging techniques are then employed to demonstrate the ability of these devices to reconstruct high-quality images of relatively complex objects.

High-Throughput, High-Resolution Interferometric Light Microscopy of Biological Nanoparticles
C. Yurdakul, O. Avci, A. Matlock, A. J. Devaux, M. V. Quintero, E. Ozbay, R. A. Davey, J. H. Connor, W. C. Karl, L. Tian, M. S. Ünlü
ACS Nano 2020, 14, 2, 2002-2013

Label-free, visible light microscopy is an indispensable tool for studying biological nanoparticles (BNPs). However, conventional imaging techniques have two major challenges: (i) weak contrast due to the low refractive-index difference with the surrounding medium and exceptionally small size and (ii) limited spatial resolution. Advances in interferometric microscopy have overcome the weak-contrast limitation and enabled direct detection of BNPs, yet lateral resolution remains a challenge in studying BNP morphology. Here, we introduce a wide-field interferometric microscopy technique augmented by computational imaging to demonstrate a 2-fold lateral resolution improvement over a large field-of-view (>100 × 100 μm²), enabling simultaneous imaging of more than 10⁴ BNPs at a resolution of ∼150 nm without any labels or sample preparation. We present a rigorous vectorial-optics-based forward model establishing the relationship between the intensity images captured under partially coherent asymmetric illumination and the complex permittivity distribution of nanoparticles. We demonstrate high-throughput morphological visualization of a diverse population of Ebola virus-like particles and a structurally distinct Ebola vaccine candidate. Our approach offers a low-cost and robust label-free imaging platform for high-throughput and high-resolution characterization of a broad size range of BNPs.

LED array reflectance microscopy for scattering-based multi-contrast imaging
Weiye Song, Alex Matlock, Sipei Fu, Xiaodan Qin, Hui Feng, Christopher V. Gabel, Lei Tian, and Ji Yi
Opt. Lett. 45, 1647-1650 (2020)

LED array microscopy is an emerging platform for computational imaging with significant utility for biological imaging. Existing LED array systems often exploit transmission imaging geometries of standard brightfield microscopes that leave the rich backscattered field undetected. This backscattered signal contains high-resolution sample information with superb sensitivity to subtle structural features that make it ideal for biological sensing and detection. Here, we develop an LED array reflectance microscope capturing the sample’s backscattered signal. In particular, we demonstrate multimodal brightfield, darkfield, and differential phase contrast imaging on fixed and living biological specimens including Caenorhabditis elegans (C. elegans), zebrafish embryos, and live cell cultures. Video-rate multimodal imaging at 20 Hz records real time features of freely moving C. elegans and the fast beating heart of zebrafish embryos. Our new reflectance mode is a valuable addition to the LED array microscopic toolbox.

Design of a high-resolution light field miniscope for volumetric imaging in scattering tissue
Yanqin Chen, Bo Xiong, Yujia Xue, Xin Jin, Joseph Greene, and Lei Tian
Biomedical Optics Express. 11, pp. 1662-1678 (2020).

Integrating light field microscopy techniques with existing miniscope architectures has allowed for volumetric imaging of targeted brain regions in freely moving animals. However, the current design of light field miniscopes is limited by non-uniform resolution and long imaging path length. In an effort to overcome these limitations, this paper proposes an optimized Galilean-mode light field miniscope (Gali-MiniLFM), which achieves a more consistent resolution and a significantly shorter imaging path than its conventional counterparts. In addition, this paper provides a novel framework that incorporates the anticipated aberrations of the proposed Gali-MiniLFM into the point spread function (PSF) modeling. This more accurate PSF model can then be used in 3D reconstruction algorithms to further improve the resolution of the platform. Volumetric imaging in the brain necessitates the consideration of the effects of scattering. We conduct Monte Carlo simulations to demonstrate the robustness of the proposed Gali-MiniLFM for volumetric imaging in scattering tissue.

Inverse scattering for reflection intensity phase microscopy
Alex Matlock, Anne Sentenac, Patrick C. Chaumet, Ji Yi, and Lei Tian
Biomedical Optics Express. 11, pp. 911-926 (2020)

 Github Project

Reflection phase imaging provides label-free, high-resolution characterization of biological samples, typically using interferometric techniques. Here, we investigate reflection phase microscopy from intensity-only measurements under diverse illumination. We evaluate the forward and inverse scattering model based on the first Born approximation for imaging scattering objects above a glass slide. Under this design, the measured field combines linear forward-scattering and height-dependent nonlinear back-scattering from the object, which complicates object phase recovery. Using only the forward-scattering, we derive a linear inverse scattering model and evaluate this model's validity range in simulation and experiment using a standard reflection microscope modified with a programmable light source. Our method provides enhanced contrast for thin, weakly scattering samples, complementing transmission techniques. This model provides a promising development for creating simplified intensity-based reflection quantitative phase imaging systems easily adoptable for biological research.

High-speed in vitro intensity diffraction tomography
Jiaji Li, Alex Matlock, Yunzhe Li, Qian Chen, Chao Zuo, Lei Tian
Advanced Photonics, 1(6), 066004 (2019).
On the Cover
⭑ Highlighted: "Programmable LED ring enables label-free 3D tomography for conventional microscopes"

 Github Project

We demonstrate a label-free, scan-free intensity diffraction tomography technique utilizing annular illumination (aIDT) to rapidly characterize large-volume 3D refractive index distributions in vitro. By optimally matching the illumination geometry to the microscope pupil, our technique reduces the data requirement by 60× to achieve high-speed 10 Hz volume rates. Using 8 intensity images, we recover 350 × 100 × 20 μm³ volumes with near diffraction-limited lateral resolution of 487 nm and axial resolution of 3.4 μm. Our technique's large volume rate and high resolution enable 3D quantitative phase imaging of complex living biological samples across multiple length scales. We demonstrate aIDT's capabilities on unicellular diatom microalgae, epithelial buccal cell clusters with native bacteria, and live Caenorhabditis elegans specimens. Within these samples, we recover macroscale cellular structures, subcellular organelles, and dynamic micro-organism tissues with minimal motion artifacts. Quantifying such features has significant utility in oncology, immunology, and cellular pathophysiology, where these morphological features are evaluated for changes in the presence of disease, parasites, and new drug treatments. aIDT shows promise as a powerful high-speed, label-free microscopy technique for these applications where natural imaging is required to evaluate environmental effects on a sample in real time.

High-throughput, volumetric quantitative phase imaging with multiplexed intensity diffraction tomography
Alex Matlock, Lei Tian
Biomed. Opt. Express 10, pp. 6432-6448 (2019).

Intensity diffraction tomography (IDT) provides quantitative, volumetric refractive index reconstructions of unlabeled biological samples from intensity-only measurements. IDT is scanless and easily implemented in standard optical microscopes using an LED array but suffers from large data requirements and slow acquisition speeds. Here, we develop multiplexed IDT (mIDT), a coded illumination framework providing high volume-rate IDT for evaluating dynamic biological samples. mIDT combines illuminations from an LED grid using physical model-based design choices to improve acquisition rates and reduce dataset size with minimal loss to resolution and reconstruction quality. We analyze the optimal design scheme with our mIDT framework in simulation using the reconstruction error compared to conventional IDT and theoretical acquisition speed. With the optimally determined mIDT scheme, we achieve hardware-limited 4 Hz acquisition rates enabling 3D refractive index distribution recovery on live Caenorhabditis elegans worms and embryos as well as epithelial buccal cells. Our mIDT architecture provides a 60× speed improvement over conventional IDT and is robust across different illumination hardware designs, making it an easily adoptable imaging tool for volumetrically quantifying biological samples in their natural state.

Deep spectral learning for label-free optical imaging oximetry with uncertainty quantification
Rongrong Liu, Shiyi Cheng, Lei Tian, Ji Yi
Light: Science & Applications 8: 102 (2019).

Measurement of blood oxygen saturation (sO2) by optical imaging oximetry provides invaluable insight into local tissue functions and metabolism. Despite different embodiments and modalities, all label-free optical-imaging oximetry techniques utilize the same principle of sO2-dependent spectral contrast from haemoglobin. Traditional approaches for quantifying sO2 often rely on analytical models that are fitted to the spectral measurements. These approaches in practice suffer from uncertainties due to biological variability, tissue geometry, light scattering, systemic spectral bias, and variations in the experimental conditions. Here, we propose a new data-driven approach, termed deep spectral learning (DSL), to achieve oximetry that is highly robust to experimental variations and, more importantly, able to provide uncertainty quantification for each sO2 prediction. To demonstrate the robustness and generalizability of DSL, we analyse data from two visible light optical coherence tomography (vis-OCT) setups across two separate in vivo experiments on rat retinas. Predictions made by DSL are highly adaptive to experimental variabilities as well as the depth-dependent backscattering spectra. Two neural-network-based models are tested and compared with the traditional least-squares fitting (LSF) method. The DSL-predicted sO2 shows significantly lower mean-square errors than those of the LSF. For the first time, we have demonstrated en face maps of retinal oximetry along with a pixel-wise confidence assessment. Our DSL overcomes several limitations of traditional approaches and provides a more flexible, robust, and reliable deep learning approach for in vivo non-invasive label-free optical oximetry.

Development of a beam propagation method to simulate the point spread function degradation in scattering media
Xiaojun Cheng, Yunzhe Li, Jerome Mertz, Sava Sakadžić, Anna Devor, David A. Boas, Lei Tian
Opt. Lett. 44, 4989-4992 (2019).

 Github Project

Scattering is one of the main issues that limit the imaging depth in deep tissue optical imaging. To characterize the role of scattering, we have developed a forward model based on the beam propagation method and established the link between the macroscopic optical properties of the media and the statistical parameters of the phase masks applied to the wavefront. Using this model, we have analyzed the degradation of the point-spread function of the illumination beam in the transition regime from ballistic to diffusive light transport. Our method provides a wave-optic simulation toolkit to analyze the effects of scattering on image quality degradation in scanning microscopy.
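The essence of the model, alternating angular-spectrum free-space propagation with thin random phase screens, can be sketched as follows. The grid, wavelength, slice spacing, and screen statistics below are illustrative, not the calibrated values that the paper links to macroscopic optical properties:

```python
import numpy as np

# Split-step beam propagation through random phase screens (scalar model).
N, dx = 256, 0.5e-6          # grid points, pixel size [m]
wavelength = 0.8e-6          # [m]
dz = 10e-6                   # slice spacing [m]
n_slices = 20
sigma_phi = 0.3              # std of each random phase screen [rad] (assumed)

k = 2 * np.pi / wavelength
fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)
kz2 = k**2 - (2 * np.pi) ** 2 * (FX**2 + FY**2)
kz = np.sqrt(np.maximum(kz2, 0.0))
H = np.exp(1j * kz * dz) * (kz2 > 0)   # angular-spectrum transfer function

def propagate(field):
    """Free-space propagation by one slice spacing dz."""
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Focused Gaussian illumination beam
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
field = np.exp(-(X**2 + Y**2) / (2 * (2e-6) ** 2)).astype(complex)

rng = np.random.default_rng(1)
for _ in range(n_slices):
    field = propagate(field)                                        # diffraction
    field *= np.exp(1j * sigma_phi * rng.standard_normal((N, N)))   # phase screen

psf = np.abs(field) ** 2
print(psf.max() / psf.sum())   # crude metric of focal confinement
```

Increasing `sigma_phi` or `n_slices` broadens the focus, mimicking the ballistic-to-diffusive transition the paper analyzes.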

Holographic particle-localization under multiple scattering
Waleed Tahir, Ulugbek S. Kamilov, Lei Tian
Advanced Photonics, 1(3), 036003 (2019).

We introduce a computational framework that incorporates multiple scattering for large-scale three-dimensional (3-D) particle localization using single-shot in-line holography. Traditional holographic techniques rely on single-scattering models that become inaccurate under high particle densities and large refractive index contrasts. Existing multiple scattering solvers become computationally prohibitive for large-scale problems, which comprise millions of voxels within the scattering volume. Our approach overcomes the computational bottleneck by slicewise computation of multiple scattering under an efficient recursive framework. In the forward model, each recursion estimates the next higher-order multiple scattered field among the object slices. In the inverse model, each order of scattering is recursively estimated by a nonlinear optimization procedure. This nonlinear inverse model is further supplemented by a sparsity promoting procedure that is particularly effective in localizing 3-D distributed particles. We show that our multiple-scattering model leads to significant improvement in the quality of 3-D localization compared to traditional methods based on single scattering approximation. Our experiments demonstrate robust inverse multiple scattering, allowing reconstruction of 100 million voxels from a single 1-megapixel hologram with a sparsity prior. The performance bound of our approach is quantified in simulation and validated experimentally. Our work promises utilization of multiple scattering for versatile large-scale applications.
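The recursive structure, where each pass adds the next order of multiple scattering, can be illustrated with a small matrix stand-in for the Green's operator and the scattering potential. This is purely a toy Born-series iteration, not the paper's slicewise holographic implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
G = rng.standard_normal((n, n)) / (4.0 * np.sqrt(n))  # spectral norm ~0.5, so the series converges
v = rng.uniform(0.0, 1.0, n)                          # diagonal scattering potential
u_in = rng.standard_normal(n)                         # incident field

u = u_in.copy()
for _ in range(200):
    u = u_in + G @ (v * u)        # each pass adds the next multiple-scattering order

u_exact = np.linalg.solve(np.eye(n) - G * v, u_in)    # full multiply-scattered field
print(np.linalg.norm(u - u_exact))
```

The recursion converges to the exact multiply-scattered field without ever forming or inverting the full operator, which is what makes the slicewise version tractable at the 100-million-voxel scale quoted above.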

Reliable deep learning-based phase imaging with uncertainty quantification
Yujia Xue, Shiyi Cheng, Yunzhe Li, Lei Tian
Optica 6, 618-629 (2019).

 Github Project

Emerging deep-learning (DL)-based techniques have significant potential to revolutionize biomedical imaging. However, one outstanding challenge is the lack of reliability assessment in the DL predictions, whose errors are commonly revealed only in hindsight. Here, we propose a new Bayesian convolutional neural network (BNN)-based framework that overcomes this issue by quantifying the uncertainty of DL predictions. Foremost, we show that BNN-predicted uncertainty maps provide surrogate estimates of the true error from the network model and measurement itself. The uncertainty maps characterize imperfections often unknown in real-world applications, such as noise, model error, incomplete training data, and out-of-distribution testing data. Quantifying this uncertainty provides a per-pixel estimate of the confidence level of the DL prediction as well as the quality of the model and data set. We demonstrate this framework in the application of large space–bandwidth product phase imaging using a physics-guided coded illumination scheme. From only five multiplexed illumination measurements, our BNN predicts gigapixel phase images in both static and dynamic biological samples with quantitative credibility assessment. Furthermore, we show that low-certainty regions can identify spatially and temporally rare biological phenomena. We believe our uncertainty learning framework is widely applicable to many DL-based biomedical imaging techniques for assessing the reliability of DL predictions.

Deep speckle correlation: a deep learning approach towards scalable imaging through scattering media
Yunzhe Li, Yujia Xue, Lei Tian
Optica 5, 1181-1190 (2018).
Top 5 most cited articles in Optica published in 2018 (Source: Google Scholar)

 Github Project

Imaging through scattering is an important yet challenging problem. Tremendous progress has been made by exploiting the deterministic input–output “transmission matrix” for a fixed medium. However, this “one-to-one” mapping is highly susceptible to speckle decorrelations – small perturbations to the scattering medium lead to model errors and severe degradation of the imaging performance. Our goal here is to develop a new framework that is highly scalable to both medium perturbations and measurement requirements. To do so, we propose a statistical “one-to-all” deep learning (DL) technique that encapsulates a wide range of statistical variations for the model to be resilient to speckle decorrelations. Specifically, we develop a convolutional neural network (CNN) that is able to learn the statistical information contained in the speckle intensity patterns captured on a set of diffusers having the same macroscopic parameter. We then show for the first time, to the best of our knowledge, that the trained CNN is able to generalize and make high-quality object predictions through an entirely different set of diffusers of the same class. Our work paves the way to a highly scalable DL approach for imaging through scattering media.



Deep learning approach to Fourier ptychographic microscopy
Thanh Nguyen, Yujia Xue, Yunzhe Li, Lei Tian, George Nehmetallah
Opt. Express 26, 26470-26484 (2018).

Convolutional neural networks (CNNs) have gained tremendous success in solving complex inverse problems. The aim of this work is to develop a novel CNN framework to reconstruct video sequences of dynamic live cells captured using a computational microscopy technique, Fourier ptychographic microscopy (FPM). The unique feature of FPM is its capability to reconstruct images with both wide field-of-view (FOV) and high resolution, i.e., a large space-bandwidth product (SBP), by taking a series of low-resolution intensity images. For live cell imaging, a single FPM frame contains thousands of cell samples with different morphological features. Our idea is to fully exploit the statistical information provided by this large spatial ensemble so as to make predictions in a sequential measurement, without using any additional temporal dataset. Specifically, we show that it is possible to reconstruct high-SBP dynamic cell videos by a CNN trained only on the first FPM dataset captured at the beginning of a time-series experiment. Our CNN approach reconstructs a 12,800 × 10,800-pixel phase image in only ~25 seconds, a 50× speedup compared to the model-based FPM algorithm. In addition, the CNN further reduces the required number of images in each time frame by ~6×. Overall, this significantly improves the imaging throughput by reducing both the acquisition and computational times. The proposed CNN is based on the conditional generative adversarial network (cGAN) framework. Additionally, we exploit transfer learning so that our pre-trained CNN can be further optimized to image other cell types. Our technique demonstrates a promising deep learning approach to continuously monitor large live-cell populations over an extended time and gather useful spatial and temporal information with sub-cellular resolution.


High-throughput intensity diffraction tomography with a computational microscope
Ruilong Ling, Waleed Tahir, Hsing-Ying Lin, Hakho Lee, and Lei Tian
Biomed. Opt. Express 9, 2130-2141 (2018)

 Github Project

We demonstrate a motion-free intensity diffraction tomography technique that enables the direct inversion of 3D phase and absorption from intensity-only measurements for weakly scattering samples. We derive a novel linear forward model featuring slice-wise phase and absorption transfer functions using angled illumination. This new framework facilitates flexible and efficient data acquisition, enabling arbitrary sampling of the illumination angles. The reconstruction algorithm performs 3D synthetic aperture using a robust computation and memory efficient slice-wise deconvolution to achieve resolution up to the incoherent limit. We demonstrate our technique with thick biological samples having both sparse 3D structures and dense cell clusters. We further investigate the limitation of our technique when imaging strongly scattering samples. Imaging performance and the influence of multiple scattering is evaluated using a 3D sample consisting of stacked phase and absorption resolution targets. This computational microscopy system is directly built on a standard commercial microscope with a simple LED array source add-on, and promises broad applications by leveraging the ubiquitous microscopy platforms with minimal hardware modifications.



Structured illumination microscopy with unknown patterns and a statistical prior
Li-Hao Yeh, Lei Tian, and Laura Waller
Biomed. Opt. Express 8, 695-711 (2017).

Structured illumination microscopy (SIM) improves resolution by down-modulating high-frequency information of an object to fit within the passband of the optical system. Generally, the reconstruction process requires prior knowledge of the illumination patterns, which implies a well-calibrated and aberration-free system. Here, we propose a new algorithmic self-calibration strategy for SIM that does not need to know the exact patterns a priori, but only their covariance. The algorithm, termed PE-SIMS, includes a pattern-estimation (PE) step requiring the uniformity of the sum of the illumination patterns and a SIM reconstruction procedure using a statistical prior (SIMS). Additionally, we perform a pixel reassignment process (SIMS-PR) to enhance the reconstruction quality. We achieve 2× better resolution than a conventional widefield microscope, while remaining insensitive to aberration-induced pattern distortion and robust against parameter tuning.



Compressive holographic video
Zihao Wang, Leonidas Spinoulas, Kuan He, Lei Tian, Oliver Cossairt, Aggelos K. Katsaggelos, and Huaijin Chen
Opt. Express 25, 250-262 (2017).

Compressed sensing has been discussed separately in spatial and temporal domains. Compressive holography has been introduced as a method that allows 3D tomographic reconstruction at different depths from a single 2D image. Coded exposure is a temporal compressed sensing method for high-speed video acquisition. In this work, we combine compressive holography and coded exposure techniques and extend the discussion to 4D reconstruction in space and time from one coded captured image. In our prototype, digital in-line holography was used for imaging macroscopic, fast-moving objects. The pixel-wise temporal modulation was implemented by a digital micromirror device. In this paper, we demonstrate 10× temporal super-resolution with recovery of multiple depths from a single image. Two examples are presented for the purpose of recording subtle vibrations and tracking small particles within 5 ms.
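The temporal half of the forward model, a single frame integrating mask-gated sub-frames, is compact to state. The shapes and binary DMD masks below are illustrative; the actual 4D reconstruction relies on sparsity priors not shown here:

```python
import numpy as np

rng = np.random.default_rng(3)
T, H, W = 10, 32, 32
video = rng.uniform(0.0, 1.0, (T, H, W))   # unknown sub-frames (holograms over time)
masks = rng.integers(0, 2, (T, H, W))      # per-pixel DMD temporal modulation
capture = (masks * video).sum(axis=0)      # the single coded 2D measurement
print(capture.shape)
```

Inverting this many-to-one map is what requires the compressed-sensing machinery: T sub-frames must be recovered from one coded frame, with the holographic depth recovery layered on top.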



3D differential phase contrast microscopy
Michael Chen, Lei Tian, Laura Waller
Biomed. Opt. Express 7, 3940-3950 (2016).

We demonstrate 3D phase and absorption recovery from partially coherent intensity images captured with a programmable LED array source. Images are captured through-focus with four different illumination patterns. Using first Born and weak object approximations (WOA), a linear 3D differential phase contrast (DPC) model is derived. The partially coherent transfer functions relate the sample’s complex refractive index distribution to intensity measurements at varying defocus. Volumetric reconstruction is achieved by a global FFT-based method, without an intermediate 2D phase retrieval step. Because the illumination is spatially partially coherent, the transverse resolution of the reconstructed field achieves twice the NA of coherent systems and improved axial resolution.
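The global FFT-based step amounts to solving, independently at each 3D frequency, a small Tikhonov-regularized least-squares system that couples phase and absorption through the transfer functions. A synthetic sketch, where random complex arrays stand in for the transfer functions derived under the weak object approximation:

```python
import numpy as np

rng = np.random.default_rng(4)
shape = (16, 32, 32)        # (z, y, x) frequency grid
n_pat = 4                   # illumination patterns

# Random complex stand-ins for the phase/absorption transfer functions
Hp = rng.standard_normal((n_pat,) + shape) + 1j * rng.standard_normal((n_pat,) + shape)
Ha = rng.standard_normal((n_pat,) + shape) + 1j * rng.standard_normal((n_pat,) + shape)

phase_true = rng.standard_normal(shape)
absorp_true = rng.standard_normal(shape)
Pf, Af = np.fft.fftn(phase_true), np.fft.fftn(absorp_true)
I_spec = np.stack([Hp[i] * Pf + Ha[i] * Af for i in range(n_pat)])  # measured spectra

# Per-frequency 2x2 Tikhonov-regularized normal equations, solved in closed form
eps = 1e-3
a = (np.abs(Hp) ** 2).sum(0) + eps
b = (np.conj(Hp) * Ha).sum(0)
c = (np.conj(Ha) * Hp).sum(0)
d = (np.abs(Ha) ** 2).sum(0) + eps
r1 = (np.conj(Hp) * I_spec).sum(0)
r2 = (np.conj(Ha) * I_spec).sum(0)
det = a * d - b * c
phase_rec = np.fft.ifftn((d * r1 - b * r2) / det).real
absorp_rec = np.fft.ifftn((a * r2 - c * r1) / det).real
print(np.abs(phase_rec - phase_true).max())
```

Because every frequency decouples, the whole volume is recovered with a handful of FFTs and elementwise arithmetic, with no iterative solver and no intermediate 2D phase retrieval.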

Nonlinear Optimization Algorithm for Partially Coherent Phase Retrieval and Source Recovery
J. Zhong, L. Tian, P. Varma, L. Waller
IEEE Transactions on Computational Imaging 2 (3), 310 – 322 (2016).

We propose a new algorithm for recovering both the complex field (phase and amplitude) and the source distribution (illumination spatial coherence) from a stack of intensity images captured through focus. The joint recovery is formulated as a nonlinear least-squares optimization problem, which is solved iteratively by a modified Gauss-Newton method. We derive the gradient and Hessian of the cost function and show that our second-order optimization approach outperforms previously proposed phase retrieval algorithms on datasets taken with both coherent and partially coherent illumination. The method is validated experimentally in a commercial microscope with both Köhler illumination and a programmable LED dome.
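The Gauss-Newton update at the core of such a solver can be sketched on a standard least-squares test problem: linearize the residual with its Jacobian J, solve the normal equations JᵀJ·d = −Jᵀr, and repeat. This is the Rosenbrock test function, not the paper's complex-field cost.

```python
import numpy as np

# Gauss-Newton on the Rosenbrock function written as a least-squares
# residual r(theta); the minimizer is (1, 1).
def residual(theta):
    x, y = theta
    return np.array([10.0 * (y - x**2), 1.0 - x])

def jacobian(theta):
    x, _ = theta
    return np.array([[-20.0 * x, 10.0],
                     [-1.0, 0.0]])

theta = np.array([-1.2, 1.0])               # classic starting point
for _ in range(10):
    r, J = residual(theta), jacobian(theta)
    theta = theta + np.linalg.solve(J.T @ J, -J.T @ r)  # Gauss-Newton step

print(np.round(theta, 6))                   # converges to [1. 1.]
```

Using second-order (curvature) information in this way is what lets the method outperform first-order alternating-projection phase retrieval updates.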

Experimental robustness of Fourier Ptychography phase retrieval algorithms
L. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, L. Waller
Opt. Express 23(26) 33212-33238 (2015).

Fourier ptychography is a new computational microscopy technique that provides gigapixel-scale intensity and phase images with both wide field-of-view and high resolution. By capturing a stack of low-resolution images under different illumination angles, an inverse algorithm can be used to computationally reconstruct the high-resolution complex field. Here, we compare and classify multiple proposed inverse algorithms in terms of experimental robustness. We find that the main sources of error are noise, aberrations and mis-calibration (i.e. model mis-match). Using simulations and experiments, we demonstrate that the choice of cost function plays a critical role, with amplitude-based cost functions performing better than intensity-based ones. The reason for this is that Fourier ptychography datasets consist of images from both brightfield and darkfield illumination, representing a large range of measured intensities. Both noise (e.g. Poisson noise) and model mis-match errors are shown to scale with intensity. Hence, algorithms that use an appropriate cost function will be more tolerant to both noise and model mis-match. Given these insights, we propose a global Newton’s method algorithm which is robust and accurate. Finally, we discuss the impact of procedures for algorithmic correction of aberrations and mis-calibration.
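The intensity-scaling argument can be illustrated directly: Poisson noise variance grows with the measured intensity, so an intensity-based cost is dominated by the bright brightfield images, while the square root is an approximately variance-stabilizing transform. The photon levels below are illustrative.

```python
import numpy as np

# std(I) grows like sqrt(mean) for Poisson noise, while std(sqrt(I))
# stays near 0.5 regardless of mean, so amplitude residuals weight
# darkfield and brightfield measurements comparably.
rng = np.random.default_rng(2)
for mean in [10, 100, 10000]:               # darkfield ... brightfield levels
    I = rng.poisson(mean, size=200000)
    print(mean, round(I.std(), 1), round(np.sqrt(I).std(), 3))
```

This is why amplitude-based cost functions tolerate the large intensity range of combined brightfield/darkfield Fourier ptychography datasets better than intensity-based ones.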

Computational illumination for high-speed in vitro Fourier ptychographic microscopy
L. Tian, Z. Liu, L. Yeh, M. Chen, J. Zhong, L. Waller
Optica 2(10), 904-911 (2015).

We demonstrate a new computational illumination technique that achieves a large space-bandwidth-time product, for quantitative phase imaging of unstained live samples in vitro. Microscope lenses can have either a large field of view (FOV) or high resolution, but not both. Fourier ptychographic microscopy (FPM) is a new computational imaging technique that circumvents this limit by fusing information from multiple images taken with different illumination angles. The result is a gigapixel-scale image having both a wide FOV and high resolution, i.e., a large space-bandwidth product. FPM has enormous potential for revolutionizing microscopy and has already found application in digital pathology. However, it suffers from long acquisition times (of the order of minutes), limiting throughput. Faster capture times would not only improve the imaging speed, but also allow studies of live samples, where motion artifacts degrade results. In contrast to fixed (e.g., pathology) slides, live samples are continuously evolving at various spatial and temporal scales. Here, we present a new source coding scheme, along with real-time hardware control, to achieve 0.8 NA resolution across a 4× FOV with subsecond capture times. We propose an improved algorithm and a new initialization scheme, which allow robust phase reconstruction over long time-lapse experiments. We present the first FPM results for both growing and confluent in vitro cell cultures, capturing videos of subcellular dynamical phenomena in popular cell lines undergoing division and migration. Our method opens up FPM to applications with live samples, for observing rare events in both space and time.


Computational imaging: Machine learning for 3D microscopy
L. Waller, L. Tian
Nature, 523, 416–417 (2015).

Artificial neural networks have been combined with microscopy to visualize the 3D structure of biological cells. This could lead to solutions for difficult imaging problems, such as the multiple scattering of light.

3D imaging in volumetric scattering media using phase-space measurements
H. Liu, E. Jonas, L. Tian, J. Zhong, B. Recht, L. Waller
Opt. Express 23, 14461-14471 (2015).

We demonstrate the use of phase-space imaging for 3D localization of multiple point sources inside scattering material. The effect of scattering is to spread angular (spatial frequency) information, which can be measured by phase-space imaging. We derive a multi-slice forward model for homogeneous volumetric scattering, then develop a reconstruction algorithm that exploits sparsity in order to further constrain the problem. By using 4D measurements for 3D reconstruction, the dimensionality mismatch provides significant robustness to multiple scattering, with either static or dynamic diffusers. Experimentally, our high-resolution 4D phase-space data are collected by a spectrogram setup, and the results successfully recover the 3D positions of multiple LEDs embedded in turbid scattering media.


Quantitative differential phase contrast imaging in an LED array microscope
L. Tian, L. Waller
Opt. Express 23, 11394-11403 (2015).

Illumination-based differential phase contrast (DPC) is a phase imaging method that uses a pair of images with asymmetric illumination patterns. Distinct from coherent techniques, DPC relies on spatially partially coherent light, providing 2× better lateral resolution, better optical sectioning and immunity to speckle noise. In this paper, we derive the 2D weak object transfer function (WOTF) and develop a quantitative phase reconstruction method that is robust to noise. The effect of spatial coherence is studied experimentally, and multiple-angle DPC is shown to provide improved frequency coverage for more stable phase recovery. Our method uses an LED array microscope to achieve real-time (10 Hz) quantitative phase imaging with in vitro live cell samples.
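The DPC measurement itself is simple to sketch: two images under complementary half-source illumination are combined as (I_l − I_r)/(I_l + I_r), which under the weak object approximation is linear in the phase gradient and, thanks to the normalization, independent of sample absorption. The linear image model below is a toy stand-in for the paper's weak object transfer function.

```python
import numpy as np

# Toy 1D DPC: complementary tilted illuminations modulate intensity in
# opposite directions with the phase gradient; the normalized difference
# cancels the (shared) absorption term exactly in this linear model.
rng = np.random.default_rng(3)
N = 256
phase = np.cumsum(rng.standard_normal(N)) * 0.01     # toy smooth phase profile
dphase = np.gradient(phase)
absorption = 1.0 + 0.3 * np.sin(np.arange(N) / 20)   # varying sample brightness

s = 0.5                                              # toy DPC sensitivity
I_l = absorption * (1 + s * dphase)
I_r = absorption * (1 - s * dphase)
I_dpc = (I_l - I_r) / (I_l + I_r)                    # = s * dphase exactly here

print(np.allclose(I_dpc, s * dphase))                # True
```

Quantitative phase then follows by deconvolving this linear signal with the WOTF, which is where the noise-robust inversion and multi-angle frequency coverage come in.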


Motion deblurring with temporally coded illumination in an LED array microscope
C. Ma, Z. Liu, L. Tian, Q. Dai, L. Waller
Opt. Lett. 40, 2281-2284 (2015).

Motion blur, which results from time-averaging an image over the camera’s exposure time, is a common problem in microscopy of moving samples. Here, we demonstrate linear motion deblurring using temporally coded illumination in an LED array microscope. By illuminating moving objects with a well-designed temporally coded sequence that varies during each single camera exposure, the resulting motion blur is invertible and can be computationally removed. This scheme is implemented in an existing LED array microscope, providing the benefits of being grayscale, fast, and adaptive, which leads to high-quality deblurring and a flexible implementation with no moving parts. The proposed method is demonstrated experimentally for fast-moving targets in a microfluidic environment.

3D intensity and phase imaging from light field measurements in an LED array microscope
Lei Tian, L. Waller
Optica 2, 104-111 (2015).
One of the 15 most cited articles in Optica published in 2015 (Source: OSA, 2019)

Realizing high resolution across large volumes is challenging for 3D imaging techniques with high-speed acquisition. Here, we describe a new method for 3D intensity and phase recovery from 4D light field measurements, achieving enhanced resolution via Fourier Ptychography. Starting from geometric optics light field refocusing, we incorporate phase retrieval and correct diffraction artifacts. Further, we incorporate dark-field images to achieve lateral resolution beyond the diffraction limit of the objective (5× larger NA) and axial resolution better than the depth of field, using a low magnification objective with a large field of view. Our iterative reconstruction algorithm uses a multi-slice coherent model to estimate the 3D complex transmittance function of the sample at multiple depths, without any weak or single-scattering approximations. Data is captured by an LED array microscope with computational illumination, which enables rapid scanning of angles for fast acquisition. We demonstrate the method with thick biological samples in a modified commercial microscope, indicating the technique’s versatility for a wide range of applications.
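The core of a multi-slice coherent model can be sketched briefly: the field is multiplied by each slice's complex transmittance, then numerically propagated to the next slice with the angular spectrum method. Grid size, wavelength, and spacing below are arbitrary illustrative choices, not the paper's parameters.

```python
import numpy as np

# Multi-slice propagation: multiply by a slice transmittance, then
# propagate by dz with the angular spectrum kernel, and repeat.
N, dx, wl, dz = 128, 0.5e-6, 0.5e-6, 2e-6    # pixels, pitch, wavelength, slice gap
fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)
arg = 1.0 / wl**2 - FX**2 - FY**2
kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # real for all frequencies at this sampling

def propagate(field, dist):
    """Angular spectrum propagation over distance dist."""
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dist))

slices = [np.exp(1j * 0.1 * np.random.default_rng(s).standard_normal((N, N)))
          for s in range(3)]                     # toy phase-only slices
field = np.ones((N, N), dtype=complex)           # plane-wave input
for t in slices:
    field = propagate(field * t, dz)             # multiply, then propagate

print(field.shape)
```

Because each slice multiplies the full propagated field, the model captures re-scattering between depths, which is what lets the reconstruction avoid weak- or single-scattering approximations.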

Partially coherent phase imaging with unknown source shape
J. Zhong, Lei Tian, J. Dauwels, L. Waller
Biomedical Optics Express 6, 257-265 (2015).

We propose a new method for phase retrieval that uses partially coherent illumination created by any arbitrary source shape in Köhler geometry. Using a stack of defocused intensity images, we recover not only the phase and amplitude of the sample, but also an estimate of the unknown source shape, which describes the spatial coherence of the illumination. Our algorithm uses a Kalman filtering approach which is fast, accurate and robust to noise. The method is experimentally simple and flexible, and so should find use in optical, electron, X-ray and other phase imaging systems which employ partially coherent light. We provide an experimental demonstration in an optical microscope with various condenser apertures.


Real-time brightfield, darkfield and phase contrast imaging in an LED array microscope
Z. Liu, Lei Tian, S. Liu, L. Waller
Journal of Biomedical Optics, 19(10), 106002 (2014).

We demonstrate a single-camera imaging system that can simultaneously acquire brightfield, darkfield and phase contrast images in real-time. Our method uses computational illumination via a programmable LED array at the source plane, providing flexible patterning of illumination angles. Brightfield, darkfield and differential phase contrast (DPC) images are obtained by changing the LED patterns, without any moving parts. Previous work with LED array illumination was only valid for static samples because the hardware speed was not fast enough to meet real-time acquisition and processing requirements. Here, we time multiplex patterns for each of the three contrast modes in order to image dynamic biological processes in all three contrast modes simultaneously. We demonstrate multi-contrast operation at the maximum frame rate of our camera (50 Hz with 2160×2560 pixels).
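The mapping from contrast mode to LED pattern is straightforward to sketch: LEDs within the objective NA give brightfield, LEDs outside it give darkfield, and two complementary half-circles of the brightfield set give DPC. The array size and NA cutoff below are illustrative values.

```python
import numpy as np

# Boolean LED masks for the three contrast modes on a toy 15x15 array;
# the real system cycles such patterns at camera frame rate.
y, x = np.mgrid[-7:8, -7:8] / 7.0           # normalized LED coordinates
r = np.hypot(x, y)
na = 0.5                                    # pupil edge in normalized units (illustrative)

brightfield = r <= na                       # inside the objective NA
darkfield = r > na                          # outside the objective NA
dpc_top = brightfield & (y > 0)             # complementary half-circles
dpc_bottom = brightfield & (y < 0)

print(brightfield.sum(), darkfield.sum())
```

Since the patterns are purely electronic, switching between modes requires no moving parts, which is what makes the time-multiplexed multi-contrast acquisition possible.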


Multiplexed coded illumination for Fourier Ptychography with an LED array microscope
Lei Tian, X. Li, K. Ramchandran, L. Waller
Biomedical Optics Express 5, 2376-2389 (2014).
One of the decade’s most highly cited articles in Biomed. Opt. Express (Source: OSA, 2020)
⭑ Highly cited (Top 1%) papers between 2008-2018 (source: Web of Science, 2019)

Fourier Ptychography is a new computational microscopy technique that achieves gigapixel images with both wide field of view and high resolution in both phase and amplitude. The hardware setup involves a simple replacement of the microscope’s illumination unit with a programmable LED array, allowing one to flexibly pattern illumination angles without any moving parts. In previous work, a series of low-resolution images was taken by sequentially turning on each single LED in the array, and the data were then combined to recover a bandwidth much higher than the one allowed by the original imaging system. Here, we demonstrate a multiplexed illumination strategy in which multiple randomly selected LEDs are turned on for each image. Since each LED corresponds to a different area of Fourier space, the total number of images can be significantly reduced, without sacrificing image quality. We demonstrate this method experimentally in a modified commercial microscope. Compared to sequential scanning, our multiplexed strategy achieves similar results with approximately an order of magnitude reduction in both acquisition time and data capture requirements.
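The multiplexing principle rests on mutual incoherence: an image taken with several LEDs on is simply the sum of the corresponding single-LED intensity images, so random subsets pack many Fourier-space regions into each exposure. The sketch below uses illustrative sizes and random per-LED images in place of real data.

```python
import numpy as np

# Multiplexed acquisition as an incoherent sum: 16 coded shots stand in
# for 64 sequential single-LED images.
rng = np.random.default_rng(5)
n_leds, n_px = 64, 100
single_led = rng.random((n_leds, n_px))        # stand-in per-LED intensity images

leds_per_shot, n_shots = 8, 16                 # 4x fewer images than LEDs
patterns = np.zeros((n_shots, n_leds))
for i in range(n_shots):
    patterns[i, rng.choice(n_leds, leds_per_shot, replace=False)] = 1.0

multiplexed = patterns @ single_led            # incoherent sum per shot
print(multiplexed.shape)                       # (16, 100)
```

The reconstruction then unmixes the overlapping Fourier-space contributions, which is why image quality survives the order-of-magnitude reduction in capture.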


3D differential phase contrast microscopy with computational illumination using an LED array
Lei Tian, J. Wang, L. Waller
Optics Letters 39, 1326 – 1329 (2014).

We demonstrate 3D differential phase-contrast (DPC) microscopy, based on computational illumination with a programmable LED array. By capturing intensity images with various illumination angles generated by sequentially patterning an LED array source, we digitally refocus images through various depths via light field processing. The intensity differences from images taken at complementary illumination angles are then used to generate DPC images, which are related to the gradient of phase. The proposed method achieves 3D DPC with simple, inexpensive optics and no moving parts. We experimentally demonstrate our method by imaging a camel hair sample in 3D.