Imaging Through Scattering

Deep speckle correlation: a deep learning approach towards scalable imaging through scattering media
Yunzhe Li, Yujia Xue, Lei Tian
arXiv:1806.04139, 2018

Imaging through scattering media is an important yet challenging problem. Tremendous progress has been made by exploiting the deterministic input-output relation of a static medium. However, this approach is highly susceptible to speckle decorrelation: small perturbations to the scattering medium lead to model errors and severe degradation of imaging performance. The problem is further complicated by the large number of phase-sensitive measurements required to characterize the input-output 'transmission matrix'. Our goal here is to develop a new framework that scales well with respect to both medium perturbations and measurement requirements. To do so, we abandon the traditional deterministic approach and instead propose a statistical framework with the representational power to encapsulate the wide range of statistical variations needed for model generalization. Specifically, we develop a convolutional neural network (CNN) that takes intensity-only speckle patterns as input and predicts the unscattered object as output. Importantly, instead of characterizing the single input-output relation of a fixed medium, we train our CNN to learn the statistical information shared by several scattering media of the same class. We then show that the CNN generalizes to a completely different set of scattering media from the same class, demonstrating its superior adaptability to medium perturbations. In our proof-of-concept experiment, we first train the CNN on speckle patterns captured through diffusers sharing the same macroscopic parameter (e.g., grit); the trained CNN is then able to make high-quality reconstructions from speckle patterns captured through an entirely different set of diffusers of the same grit. To investigate the physical underpinnings of our CNN, we conduct a correlation analysis and show that the captured speckle patterns, although decorrelated (correlation < e−1) by the classical Pearson correlation coefficient metric, still contain statistically invariant information. This invariant information is hard to exploit with deterministic models, but can be effectively utilized by our statistical CNN model. Our work paves the way to a highly scalable deep learning approach for imaging through scattering media.
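The Pearson correlation metric used in the decorrelation analysis above is straightforward to compute on intensity images; a minimal NumPy sketch (the function name `pearson_correlation` is illustrative, not from the paper):

```python
import numpy as np

def pearson_correlation(i1, i2):
    """Pearson correlation coefficient between two speckle intensity patterns.

    Flattens both 2D patterns, removes their means, and returns the
    normalized inner product, a value in [-1, 1].
    """
    a = i1.ravel().astype(float)
    b = i2.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Under this metric, two speckle patterns from different diffusers of the same grit would typically score below e⁻¹ ≈ 0.37, i.e., they are "decorrelated" in the classical sense even though, per the paper, they still share statistically invariant structure.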


3D imaging in volumetric scattering media using phase-space measurements
H. Liu, E. Jonas, L. Tian, J. Zhong, B. Recht, L. Waller
Opt. Express 23, 14461-14471 (2015).

We demonstrate the use of phase-space imaging for 3D localization of multiple point sources inside scattering material. The effect of scattering is to spread angular (spatial frequency) information, which can be measured by phase-space imaging. We derive a multi-slice forward model for homogeneous volumetric scattering, then develop a reconstruction algorithm that exploits sparsity in order to further constrain the problem. Because we use 4D measurements for a 3D reconstruction, the dimensionality mismatch provides significant robustness to multiple scattering, with either static or dynamic diffusers. Experimentally, our high-resolution 4D phase-space data are collected by a spectrogram setup, and the results successfully recover the 3D positions of multiple LEDs embedded in turbid scattering media.
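The sparsity-exploiting reconstruction step can be illustrated with a generic ℓ1-regularized least-squares solver. The sketch below uses ISTA (iterative shrinkage-thresholding) on an abstract linear forward model `A`; it is a stand-in under simplifying assumptions, not the paper's actual phase-space forward model or solver:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=500):
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by ISTA.

    A  : (m, n) linear forward model (here a generic matrix)
    y  : (m,) measurements
    lam: sparsity weight; larger values force more zeros in x
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * (A.T @ (A @ x - y))            # gradient step on the data term
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold
    return x
```

In the paper's setting the unknown is a sparse 3D volume (a few point sources), which is what makes the otherwise ill-posed inversion through scattering tractable.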


3D intensity and phase imaging from light field measurements in an LED array microscope
Lei Tian, L. Waller
Optica 2, 104-111 (2015).

Realizing high resolution across large volumes is challenging for 3D imaging techniques that require high-speed acquisition. Here, we describe a new method for 3D intensity and phase recovery from 4D light field measurements, achieving enhanced resolution via Fourier ptychography. Starting from geometric-optics light field refocusing, we incorporate phase retrieval and correct diffraction artifacts. Further, we incorporate dark-field images to achieve lateral resolution beyond the diffraction limit of the objective (5× larger NA) and axial resolution better than the depth of field, using a low-magnification objective with a large field of view. Our iterative reconstruction algorithm uses a multi-slice coherent model to estimate the 3D complex transmittance function of the sample at multiple depths, without any weak- or single-scattering approximations. Data are captured by an LED array microscope with computational illumination, which enables rapid scanning of angles for fast acquisition. We demonstrate the method with thick biological samples in a modified commercial microscope, indicating the technique's versatility for a wide range of applications.
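The multi-slice coherent model mentioned above alternates multiplication by each slice's complex transmittance with free-space propagation between slices. A minimal NumPy sketch using the angular spectrum method (function names and parameter values are illustrative, not the paper's implementation):

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Propagate a 2D complex field by distance dz via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)            # spatial frequencies for pixel pitch dx
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # clip evanescent components
    H = np.exp(1j * kz * dz)                # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

def multislice_forward(field, slices, dz, wavelength, dx):
    """Multi-slice model: multiply by each slice's transmittance, then propagate."""
    for t in slices:
        field = angular_spectrum_propagate(field * t, dz, wavelength, dx)
    return field
```

Because each slice multiplies the full complex field, this model captures multiple scattering between slices, which is why no weak- or single-scattering approximation is needed.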