8th PhD from Tian lab: Chang Liu

Date: Wednesday, April 9

Time: 10:30 am

Location: 8 St. Mary’s Street, Room 339 (PHO 339)

Title: “Pushing the Limits of SNR and Resolution for In Vivo Neural Imaging via Self-supervised Learning”

BME PhD Dissertation Defense: April 9, 2025, Chang Liu

Advisory Committee:

Lei Tian, PhD – ECE, BME (Advisor)

Jerome C. Mertz, PhD – BME, ECE, Physics (Chair)

Jerry L. Chen, PhD – Biology, BME

Michael N. Economo, PhD – BME

Kayhan Batmanghelich, PhD – ECE

Abstract:

Imaging techniques capable of monitoring large populations of neurons at behaviorally relevant timescales are critical to understanding the brain and the nervous system. However, shot noise fundamentally limits the signal-to-noise ratio (SNR) and the optical resolution of in vivo neural imaging, necessitating advanced denoising methods to recover fast neuronal activities from noisy measurements in both the spatial and temporal domains. A particular challenge for in vivo neuronal activity denoising is the lack of "ground-truth" high-SNR measurements, which renders traditional supervised deep learning inapplicable. In this dissertation, I present two generations of self-supervised deep learning frameworks for two-photon voltage imaging denoising (DeepVID) in low-photon regimes; both model noise distributions directly from the data, enabling effective denoising without reliance on ground-truth high-SNR images.

The first framework, DeepVID, is designed to infer the underlying fluorescence signal from the temporal and spatial statistics of the raw measurements. Through qualitative and quantitative analyses, I demonstrate its superior denoising capabilities in both the spatial and temporal domains, with improved single-pixel SNR and enhanced spike detection. To address its limitation in balancing spatial and temporal denoising performance, I develop a second framework, DeepVID v2, which achieves decoupled spatiotemporal enhancement tailored for low-photon voltage imaging. By integrating an additional spatial-prior extraction branch into the DeepVID architecture and incorporating two adjustable parameters, DeepVID v2 effectively addresses the inherent tradeoff between spatial and temporal performance, enhancing its ability to resolve both fine spatial neuronal structures and rapid temporal dynamics. I further demonstrate the robustness of DeepVID v2 across a range of imaging conditions, including varying SNR levels and extreme low-photon scenarios.

These results underscore its potential as a powerful tool for denoising in vivo neural imaging data and advancing the study of neuronal activities within the brain.
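The core self-supervised idea described in the abstract can be illustrated with a toy example: if a frame is estimated only from its temporal neighbors, the independent shot noise in that frame cannot leak into the estimate, so no ground-truth high-SNR recording is needed. The NumPy sketch below is a minimal illustration of this principle using simple neighbor averaging; it is not the DeepVID architecture, and the signal model is entirely synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "movie": a slowly varying fluorescence signal over T frames (synthetic,
# for illustration only).
T, H, W = 64, 16, 16
t = np.linspace(0, 2 * np.pi, T)
signal = 50 + 20 * np.sin(t)[:, None, None] * np.ones((1, H, W))

# Shot noise: each measured frame is Poisson-distributed around the signal.
noisy = rng.poisson(signal).astype(float)

# Self-supervised "denoiser": estimate each interior frame from its temporal
# neighbors only. The center frame is excluded from its own estimate, so the
# target's independent noise cannot simply be copied through -- the same
# blind-spot principle that lets training proceed without ground truth.
denoised = noisy.copy()
for i in range(1, T - 1):
    denoised[i] = 0.5 * (noisy[i - 1] + noisy[i + 1])

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# Compare on interior frames only (boundary frames were left untouched).
err_noisy = mse(noisy[1:-1], signal[1:-1])
err_denoised = mse(denoised[1:-1], signal[1:-1])
print(err_denoised < err_noisy)  # averaging independent noise lowers error
```

Because the signal varies slowly relative to the frame rate, the interpolation bias is small while the independent Poisson noise in the two neighbors averages down, so the denoised estimate sits closer to the true signal. DeepVID replaces the fixed average with a learned network and, in v2, adds a spatial-prior branch, but the self-supervised blind-spot logic is the same.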