Chang and Qianwan present posters at Sculpted Light in the Brain

Voltage imaging is an emerging tool for continuously recording the activity of large populations of neurons. Recently, a high-speed, low-light two-photon voltage imaging framework was developed that enables kilohertz scanning of neuronal populations in awake, behaving animals. However, at a high frame rate and over a large field-of-view (FOV), shot noise dominates pixel-wise measurements, and neuronal signals are difficult to identify in single raw frames. A further issue is that although deep-learning-based methods have shown promising results in image denoising, traditional supervised learning is not applicable here because ground-truth “clean” (high-SNR) measurements are unavailable. To address these issues, we developed DeepVID, a self-supervised deep learning framework for voltage imaging denoising that requires no ground-truth data. Inspired by previous self-supervised algorithms, DeepVID infers the underlying fluorescence signal from the temporal and spatial statistics of the measurement, exploiting the statistical independence of shot noise across frames and pixels. DeepVID reduced the frame-to-frame variability of the images and achieved a 15-fold improvement in SNR between denoised and raw image data.
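The core self-supervised idea can be illustrated with a minimal sketch: because shot noise is independent from frame to frame, a predictor of the center frame built only from its temporal neighbors can recover the underlying signal but not the noise. The toy signal, Poisson noise model, and simple neighbor-averaging predictor below are illustrative stand-ins, not the actual DeepVID network or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a noiseless fluorescence movie: a slowly varying sinusoidal signal
# shared by all pixels (an illustrative stand-in for voltage-imaging data).
T, H, W = 200, 16, 16
t = np.arange(T)
clean = (20 + 5 * np.sin(2 * np.pi * t / 50))[:, None, None] * np.ones((1, H, W))

# Shot noise: each pixel measurement is Poisson-distributed around the true
# fluorescence rate, independent across frames and pixels.
noisy = rng.poisson(clean).astype(float)

def denoise_center(movie, k=3):
    """Estimate each frame from its 2k temporal neighbors, EXCLUDING the frame
    itself -- so the estimator cannot simply copy the noise it must remove.
    (A trained network plays this role in a DeepVID-style framework.)"""
    T = movie.shape[0]
    out = np.empty_like(movie)
    for i in range(T):
        idx = [j for j in range(i - k, i + k + 1) if j != i and 0 <= j < T]
        out[i] = movie[idx].mean(axis=0)
    return out

denoised = denoise_center(noisy)

# The neighbor-based estimate is much closer to the true signal than the raw
# frames, because averaging independent shot noise suppresses its variance.
mse_raw = np.mean((noisy - clean) ** 2)
mse_dn = np.mean((denoised - clean) ** 2)
print(mse_dn < mse_raw)
```

The exclusion of the center frame is what makes the scheme self-supervised: the predictor is trained (here, hand-built) against the noisy center frame itself, yet its output converges to the noise-free signal because the noise in the target is independent of the inputs.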


Conventional microscopes are inherently constrained by their space-bandwidth product, forcing a trade-off between spatial resolution and field-of-view. The Computational Miniature Mesoscope (CM2) is a novel fluorescence imaging device that overcomes this bottleneck by jointly designing the optics and the algorithm. The CM2 platform achieves single-shot, large-scale volumetric imaging with single-cell resolution in a compact form factor. Here, we demonstrate CM2 V2 – an advanced CM2 system that integrates novel hardware improvements and a new deep learning reconstruction framework. On the hardware side, the platform features a 3D-printed freeform illuminator that achieves ~80% excitation efficiency – a ~3X improvement over our V1 design – and a hybrid emission filter design that improves measurement contrast by >5X. On the computational side, the proposed pipeline, termed CM2Net, is trained on simulated, realistic, field-varying data to perform fast and reliable 3D reconstruction. Compared to the model-based deconvolution in our V1 system, CM2Net achieves ~8X better axial localization and ~1400X faster reconstruction. The trained CM2Net is validated by imaging phantoms with embedded fluorescent particles. We experimentally demonstrate that CM2Net achieves 6-µm lateral and 24-µm axial resolution over a 7-mm FOV and an 800-µm depth range. We anticipate that this simple, low-cost computational miniature imaging system may be applied to a wide range of large-scale 3D fluorescence imaging tasks and wearable in-vivo neural recording in mice and other small animals.
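Training a reconstruction network like CM2Net on simulated data presupposes a forward model of the imaging system: a single-shot measurement is the sum over depth planes of the volume convolved with a depth-dependent point spread function (PSF). The sketch below generates one toy (measurement, ground-truth volume) training pair under that model; the Gaussian PSFs, dimensions, and defocus behavior are illustrative assumptions, not the real CM2 optics, and the trained network itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

H = W = 32  # lateral grid (toy scale; the real FOV is far larger)
Z = 4       # number of depth planes

def gaussian_psf(sigma, size=H):
    """Normalized 2D Gaussian PSF (a simplistic stand-in for measured PSFs)."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

# Deeper planes blur more, mimicking defocus with depth.
psfs = [gaussian_psf(1.0 + 0.8 * z) for z in range(Z)]

def forward(volume):
    """Simulate a single-shot 2D measurement from a (Z, H, W) volume:
    sum over depths of the plane convolved with its depth-dependent PSF.
    Circular convolution via FFT keeps the example short."""
    meas = np.zeros((H, W))
    for z in range(Z):
        meas += np.real(np.fft.ifft2(np.fft.fft2(volume[z]) *
                                     np.fft.fft2(np.fft.ifftshift(psfs[z]))))
    return meas

# One simulated training pair: sparse fluorescent "cells" at random 3D
# positions (ground truth), and the measurement the network would invert.
volume = np.zeros((Z, H, W))
for _ in range(10):
    z, y, x = rng.integers(Z), rng.integers(H), rng.integers(W)
    volume[z, y, x] = 1.0
measurement = forward(volume)

# Each PSF is normalized, so total intensity is conserved by the forward model.
print(np.isclose(measurement.sum(), volume.sum()))
```

A large set of such pairs, with PSFs that vary across the field as in the abstract's "field-varying" data, is what lets a network learn the inverse mapping from a single 2D measurement back to the 3D volume.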
