{"id":954,"date":"2018-10-21T12:19:41","date_gmt":"2018-10-21T17:19:41","guid":{"rendered":"https:\/\/sites.bu.edu\/tianlab\/?page_id=954"},"modified":"2025-11-16T15:29:49","modified_gmt":"2025-11-16T20:29:49","slug":"physics-embedded-deep-learning","status":"publish","type":"page","link":"https:\/\/sites.bu.edu\/tianlab\/publications\/physics-embedded-deep-learning\/","title":{"rendered":"Deep learning for physics-based imaging"},"content":{"rendered":"<p><a href=\"https:\/\/arxiv.org\/abs\/2505.10311\"><strong>Whitened Score Diffusion: A Structured Prior for Imaging Inverse Problems<br \/>\n<\/strong><\/a>J Alido, T Li, Y Sun, L Tian<br \/>\n<em><strong>NeurIPS<\/strong><\/em> 2025.<br \/>\n<strong>\u2b51<span>\u00a0<\/span><a href=\"https:\/\/github.com\/jeffreyalido\/wsdiffusion\">Github Project<\/a><\/strong><\/p>\n<p><img loading=\"lazy\" src=\"\/tianlab\/files\/2025\/11\/overview-441x636.png\" alt=\"\" width=\"441\" height=\"636\" class=\"size-medium wp-image-2491 aligncenter\" srcset=\"https:\/\/sites.bu.edu\/tianlab\/files\/2025\/11\/overview-441x636.png 441w, https:\/\/sites.bu.edu\/tianlab\/files\/2025\/11\/overview.png 531w\" sizes=\"(max-width: 441px) 100vw, 441px\" \/><\/p>\n<p><a href=\"https:\/\/opg.optica.org\/optica\/fulltext.cfm?uri=optica-11-6-860&amp;id=552177\"><strong>Wide-field, high-resolution reconstruction in computational multi-aperture miniscope using a Fourier neural network<\/strong><\/a><br \/>\nQianwan Yang, Ruipeng Guo, Guorong Hu, Yujia Xue, Yunzhe Li, and Lei Tian<br \/>\n<em><strong>Optica<\/strong><\/em> Vol. 11, Issue 6, pp. 
860-871 (2024).<br \/>\n<strong><span><span style=\"color: #993300;\"><span style=\"color: #0000ff;\">\u2b51<\/span><em>\u00a0<\/em><\/span><\/span><a href=\"https:\/\/github.com\/bu-cisl\/SV-FourierNet\">Github Project<\/a><\/strong><\/p>\n<p><img loading=\"lazy\" src=\"\/tianlab\/files\/2024\/05\/FTnet-636x414.png\" alt=\"\" width=\"636\" height=\"414\" class=\"size-medium wp-image-2255 aligncenter\" srcset=\"https:\/\/sites.bu.edu\/tianlab\/files\/2024\/05\/FTnet-636x414.png 636w, https:\/\/sites.bu.edu\/tianlab\/files\/2024\/05\/FTnet-1024x667.png 1024w, https:\/\/sites.bu.edu\/tianlab\/files\/2024\/05\/FTnet-768x500.png 768w, https:\/\/sites.bu.edu\/tianlab\/files\/2024\/05\/FTnet-1536x1001.png 1536w, https:\/\/sites.bu.edu\/tianlab\/files\/2024\/05\/FTnet-2048x1335.png 2048w\" sizes=\"(max-width: 636px) 100vw, 636px\" \/><\/p>\n<p><a href=\"https:\/\/opg.optica.org\/oe\/fulltext.cfm?uri=oe-31-3-4094&amp;id=525403\"><strong>Multiple-scattering simulator-trained neural network for intensity diffraction tomography<\/strong><\/a><br \/>\nA. Matlock, J. Zhu, L. Tian<br \/>\n<strong><em>Optics Express<\/em><\/strong> 31, 4094-4107 (2023)<\/p>\n<p xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\">Recovering 3D phase features of complex biological samples traditionally sacrifices computational efficiency and processing time for physical model accuracy and reconstruction quality. Here, we overcome this challenge using an approximant-guided deep learning framework in a high-speed intensity diffraction tomography system. Applying a physics model simulator-based learning strategy trained entirely on natural image datasets, we show our network can robustly reconstruct complex 3D biological samples. To achieve highly efficient training and prediction, we implement a lightweight 2D network structure that utilizes a multi-channel input for encoding the axial information. 
We demonstrate this framework on experimental measurements of weakly scattering epithelial buccal cells and strongly scattering<span>\u00a0<\/span><i>C. elegans<\/i><span>\u00a0<\/span>worms. We benchmark the network\u2019s performance against a state-of-the-art multiple-scattering model-based iterative reconstruction algorithm. We highlight the network\u2019s robustness by reconstructing dynamic samples from a living worm video. We further emphasize the network\u2019s generalization capabilities by recovering algae samples imaged from different experimental setups. To assess the prediction quality, we develop a quantitative evaluation metric to show that our predictions are consistent with both multiple-scattering physics and experimental measurements.<\/p>\n<p><img loading=\"lazy\" src=\"\/tianlab\/files\/2023\/05\/DL-IDT-636x411.png\" alt=\"\" width=\"636\" height=\"411\" class=\"size-medium wp-image-2103 aligncenter\" srcset=\"https:\/\/sites.bu.edu\/tianlab\/files\/2023\/05\/DL-IDT-636x411.png 636w, https:\/\/sites.bu.edu\/tianlab\/files\/2023\/05\/DL-IDT-1024x662.png 1024w, https:\/\/sites.bu.edu\/tianlab\/files\/2023\/05\/DL-IDT-768x496.png 768w, https:\/\/sites.bu.edu\/tianlab\/files\/2023\/05\/DL-IDT.png 1405w\" sizes=\"(max-width: 636px) 100vw, 636px\" \/><\/p>\n<p><a href=\"https:\/\/www.nature.com\/articles\/s42256-022-00530-3\"><strong>Recovery of Continuous 3D Refractive Index Maps from Discrete Intensity-Only Measurements using Neural Fields<br \/>\n<\/strong><\/a>Renhao Liu, Yu Sun, Jiabei Zhu, Lei Tian, Ulugbek Kamilov<br \/>\n<em><strong>Nature Machine Intelligence<\/strong><\/em> 4<span>, <\/span><span>781\u2013791 <\/span>(2022).<\/p>\n<p>Intensity diffraction tomography (IDT) refers to a class of optical microscopy techniques for imaging the three-dimensional refractive index (RI) distribution of a sample from a set of two-dimensional intensity-only measurements. 
The reconstruction of artefact-free RI maps is a fundamental challenge in IDT due to the loss of phase information and the missing-cone problem. Neural fields have recently emerged as a new deep learning approach for learning continuous representations of physical fields. The technique uses a coordinate-based neural network to represent the field by mapping the spatial coordinates to the corresponding physical quantities, in our case the complex-valued refractive index values. We present Deep Continuous Artefact-free RI Field (DeCAF) as a neural-fields-based IDT method that can learn a high-quality continuous representation of an RI volume from its intensity-only and limited-angle measurements. The representation in DeCAF is learned directly from the measurements of the test sample by using the IDT forward model without any ground-truth RI maps. We qualitatively and quantitatively evaluate DeCAF on simulated and experimental biological samples. Our results show that DeCAF can generate high-contrast and artefact-free RI maps and lead to an up to 2.1-fold reduction in the mean squared error over existing methods.<\/p>\n<section aria-labelledby=\"Abs1\" data-title=\"Abstract\" lang=\"en\">\n<div class=\"c-article-section\" id=\"Abs1-section\">\n<div class=\"c-article-section__content\" id=\"Abs1-content\">\n<p><img loading=\"lazy\" src=\"\/tianlab\/files\/2022\/09\/NF-IDT-636x363.png\" alt=\"\" width=\"636\" height=\"363\" class=\"aligncenter wp-image-2008 size-medium\" srcset=\"https:\/\/sites.bu.edu\/tianlab\/files\/2022\/09\/NF-IDT-636x363.png 636w, 
https:\/\/sites.bu.edu\/tianlab\/files\/2022\/09\/NF-IDT-1024x585.png 1024w, https:\/\/sites.bu.edu\/tianlab\/files\/2022\/09\/NF-IDT-768x439.png 768w, https:\/\/sites.bu.edu\/tianlab\/files\/2022\/09\/NF-IDT-1536x877.png 1536w, https:\/\/sites.bu.edu\/tianlab\/files\/2022\/09\/NF-IDT-2048x1169.png 2048w\" sizes=\"(max-width: 636px) 100vw, 636px\" \/><\/p>\n<\/div>\n<\/div>\n<\/section>\n<p><a href=\"https:\/\/opg.optica.org\/optica\/fulltext.cfm?uri=optica-9-9-1009&amp;id=497528\"><strong>Deep learning-augmented Computational Miniature Mesoscope<\/strong><\/a><br \/>\nYujia Xue, Qianwan Yang, Guorong Hu, Kehan Guo, Lei Tian<br \/>\n<span><em><strong>Optica<\/strong><\/em>\u00a0<\/span>9<span>, 1009-1021 (2022)<\/span><\/p>\n<p><strong><span><span style=\"color: #993300;\"><span style=\"color: #0000ff;\">\u2b51<\/span><em>\u00a0<\/em><\/span><\/span><a href=\"https:\/\/github.com\/bu-cisl\/Computational-Miniature-Mesoscope-CM2\">Github Project<\/a><\/strong><\/p>\n<p><span>Fluorescence microscopy is essential to study biological structures and dynamics. However, existing systems suffer from a tradeoff between field-of-view (FOV), resolution, and complexity, and thus cannot fulfill the emerging need of miniaturized platforms providing micron-scale resolution across centimeter-scale FOVs. To overcome this challenge, we developed Computational Miniature Mesoscope (CM\u00b2) that exploits a computational imaging strategy to enable single-shot 3D high-resolution imaging across a wide FOV in a miniaturized platform. Here, we present CM\u00b2 V2 that significantly advances both the hardware and computation. We complement the 3\u00d73 microlens array with a new hybrid emission filter that improves the imaging contrast by 5\u00d7, and design a 3D-printed freeform collimator for the LED illuminator that improves the excitation efficiency by 3\u00d7. To enable high-resolution reconstruction across the large imaging volume, we develop an accurate and efficient 3D linear shift-variant (LSV) model that characterizes the spatially varying aberrations. We then train a multi-module deep learning model, CM\u00b2Net, using only the 3D-LSV simulator. We show that CM\u00b2Net generalizes well to experiments and achieves accurate 3D reconstruction across a \u223c7-mm FOV and 800-\u03bcm depth, and provides \u223c6-\u03bcm lateral and \u223c25-\u03bcm axial resolution. This provides \u223c8\u00d7 better axial localization and \u223c1400\u00d7 faster speed as compared to the previous model-based algorithm. We anticipate this simple and low-cost computational miniature imaging system will be impactful to many large-scale 3D fluorescence imaging applications.<\/span><\/p>\n<p><img loading=\"lazy\" src=\"\/tianlab\/files\/2022\/08\/CM2V2-1024x289.png\" alt=\"\" width=\"800\" height=\"226\" class=\"aligncenter wp-image-1954\" srcset=\"https:\/\/sites.bu.edu\/tianlab\/files\/2022\/08\/CM2V2-1024x289.png 1024w, https:\/\/sites.bu.edu\/tianlab\/files\/2022\/08\/CM2V2-636x180.png 636w, https:\/\/sites.bu.edu\/tianlab\/files\/2022\/08\/CM2V2-768x217.png 768w, https:\/\/sites.bu.edu\/tianlab\/files\/2022\/08\/CM2V2-1536x434.png 1536w, https:\/\/sites.bu.edu\/tianlab\/files\/2022\/08\/CM2V2-2048x578.png 2048w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/><\/p>\n<p><a href=\"https:\/\/www.nature.com\/articles\/s41377-022-00730-x\"><strong>Adaptive 3D descattering with a dynamic synthesis network<\/strong><\/a><br \/>\nWaleed Tahir, Hao Wang, Lei Tian<br \/>\n<em><strong>Light: Science &amp; Applications<\/strong><\/em>\u00a011, 42, 2022<\/p>\n<p><strong><span><span style=\"color: #993300;\"><span style=\"color: #0000ff;\">\u2b51<\/span><em>\u00a0<\/em><\/span><\/span><a href=\"https:\/\/github.com\/bu-cisl\/DynamicSyntesisNetwork\">Github Project<\/a><\/strong><\/p>\n<p>Deep learning has been broadly applied to imaging in scattering applications. A common framework is to train a &#8220;descattering&#8221; neural network for image recovery by removing scattering artifacts. To achieve the best results on a broad spectrum of scattering conditions, individual &#8220;expert&#8221; networks have to be trained for each condition. However, the performance of the expert sharply degrades when the scattering level at the testing time differs from the training. 
An alternative approach is to train a &#8220;generalist&#8221; network using data from a variety of scattering conditions. However, the generalist typically suffers from worse performance than the expert trained for each scattering condition. Here, we develop a drastically different approach, termed dynamic synthesis network (DSN), that can dynamically adjust the model weights and adapt to different scattering conditions. The adaptability is achieved by a novel architecture that enables dynamically synthesizing a network by blending multiple experts using a gating network. Notably, our DSN adaptively removes scattering artifacts across a continuum of scattering conditions regardless of whether the condition was used during training, and consistently outperforms the generalist. By training the DSN entirely on a multiple-scattering simulator, we experimentally demonstrate the network&#8217;s adaptability and robustness for 3D descattering in holographic 3D particle imaging. We expect the same concept can be adapted to many other imaging applications, such as denoising and imaging through scattering media. 
Broadly, our dynamic synthesis framework opens up a new paradigm for designing highly adaptive deep learning and computational imaging techniques.<\/p>\n<p><img loading=\"lazy\" src=\"\/tianlab\/files\/2021\/07\/DSN-636x219.png\" alt=\"\" width=\"636\" height=\"219\" class=\"size-medium wp-image-1717 aligncenter\" srcset=\"https:\/\/sites.bu.edu\/tianlab\/files\/2021\/07\/DSN-636x219.png 636w, https:\/\/sites.bu.edu\/tianlab\/files\/2021\/07\/DSN-1024x353.png 1024w, https:\/\/sites.bu.edu\/tianlab\/files\/2021\/07\/DSN-768x264.png 768w, https:\/\/sites.bu.edu\/tianlab\/files\/2021\/07\/DSN-1536x529.png 1536w, https:\/\/sites.bu.edu\/tianlab\/files\/2021\/07\/DSN.png 1699w\" sizes=\"(max-width: 636px) 100vw, 636px\" \/><\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"https:\/\/www.osapublishing.org\/oe\/fulltext.cfm?uri=oe-29-2-2244&amp;id=446557\"><strong>Displacement-agnostic coherent imaging through scatter with an interpretable deep neural network<\/strong><\/a><br \/>\nY. Li, S. Cheng, Y. Xue, L. Tian<br \/>\n<em><strong>Optics Express<\/strong><\/em> Vol. 29, Issue 2, pp. 2244-2257 (2021).<\/p>\n<p xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\">Coherent imaging through scatter is a challenging task. Both model-based and data-driven approaches have been explored to solve the inverse scattering problem. In our previous work, we have shown that a deep learning approach can make high-quality and highly generalizable predictions through unseen diffusers. Here, we propose a new deep neural network model that is agnostic to a broader class of perturbations including scatterer change, displacements, and system defocus up to 10\u00d7 depth of field. In addition, we develop a new analysis framework for interpreting the mechanism of our deep learning model and visualizing its generalizability based on an unsupervised dimension reduction technique. 
We show that our model can unmix the scattering-specific information, extract the object-specific information, and achieve generalization under different scattering conditions. Our work paves the way to a<span>\u00a0<\/span><i>robust<\/i><span>\u00a0<\/span>and<span>\u00a0<\/span><i>interpretable<\/i><span>\u00a0<\/span>deep learning approach to imaging through scattering media.<\/p>\n<p xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\"><img loading=\"lazy\" src=\"\/tianlab\/files\/2021\/01\/DSCv2.jpeg\" alt=\"\" width=\"500\" height=\"439\" class=\"size-full wp-image-1578 aligncenter\" \/><\/p>\n<p><a href=\"https:\/\/doi.org\/10.1109\/JSTSP.2020.2999820\"><strong>SIMBA: Scalable Inversion in Optical Tomography using Deep Denoising Priors<\/strong><\/a><br \/>\nZihui Wu, Yu Sun, Alex Matlock, Jiaming Liu, Lei Tian, Ulugbek S. Kamilov<br \/>\n<em><strong>IEEE Journal of Selected Topics in Signal Processing<\/strong><\/em> 14(6), 2020.<\/p>\n<div class=\"abstract-text row\">\n<div class=\"col-12\">\n<div class=\"u-mb-1\">\n<div>Two features desired in a three-dimensional (3D) optical tomographic image reconstruction algorithm are the ability to reduce imaging artifacts and to rapidly process large data volumes. Traditional iterative inversion algorithms are impractical in this context due to their heavy computational and memory requirements. We propose and experimentally validate a novel scalable iterative minibatch algorithm (SIMBA) for fast and high-quality optical tomographic imaging. SIMBA enables high-quality imaging by combining two complementary information sources: the physics of the imaging system characterized by its forward model and the imaging prior characterized by a denoising deep neural net. 
SIMBA easily scales to very large 3D tomographic datasets by processing only a small subset of measurements at each iteration. We establish the theoretical fixed-point convergence of SIMBA\u00a0under nonexpansive denoisers for convex data-fidelity terms. We validate SIMBA\u00a0on both simulated and experimentally collected intensity diffraction tomography (IDT) datasets. Our results show that SIMBA can significantly reduce the computational burden of 3D image formation without sacrificing the imaging quality.<\/div>\n<\/div>\n<p><img loading=\"lazy\" src=\"\/tianlab\/files\/2019\/12\/Screen-Shot-2019-12-01-at-9.03.31-PM-1-636x169.png\" alt=\"\" width=\"636\" height=\"169\" class=\"size-medium wp-image-1182 aligncenter\" srcset=\"https:\/\/sites.bu.edu\/tianlab\/files\/2019\/12\/Screen-Shot-2019-12-01-at-9.03.31-PM-1-636x169.png 636w, https:\/\/sites.bu.edu\/tianlab\/files\/2019\/12\/Screen-Shot-2019-12-01-at-9.03.31-PM-1-768x204.png 768w, https:\/\/sites.bu.edu\/tianlab\/files\/2019\/12\/Screen-Shot-2019-12-01-at-9.03.31-PM-1-1024x272.png 1024w, https:\/\/sites.bu.edu\/tianlab\/files\/2019\/12\/Screen-Shot-2019-12-01-at-9.03.31-PM-1.png 1880w\" sizes=\"(max-width: 636px) 100vw, 636px\" \/><\/p>\n<p><a href=\"https:\/\/www.osapublishing.org\/optica\/abstract.cfm?uri=optica-6-5-618\"><strong>Reliable deep learning-based phase imaging with uncertainty quantification<\/strong><\/a><br \/>\nYujia Xue, Shiyi Cheng, Yunzhe Li, Lei Tian<br \/>\n<span><strong><em>Optica<\/em><\/strong>\u00a0<\/span>6<span>, 618-629 (2019)<\/span>.<\/p>\n<p><strong><span><span style=\"color: #993300;\"><span style=\"color: #0000ff;\">\u2b51<\/span><em>\u00a0<\/em><\/span><\/span><a href=\"https:\/\/github.com\/bu-cisl\/Illumination-Coding-Meets-Uncertainty-Learning\">Github Project<\/a><\/strong><\/p>\n<p><span>Emerging deep-learning (DL)-based techniques have significant potential to revolutionize biomedical imaging. 
However, one outstanding challenge is the lack of reliability assessment in the DL predictions, whose errors are commonly revealed only in hindsight. Here, we propose a new Bayesian convolutional neural network (BNN)-based framework that overcomes this issue by quantifying the uncertainty of DL predictions. Foremost, we show that BNN-predicted uncertainty maps provide surrogate estimates of the true error from the network model and measurement itself. The uncertainty maps characterize imperfections often unknown in real-world applications, such as noise, model error, incomplete training data, and out-of-distribution testing data. Quantifying this uncertainty provides a per-pixel estimate of the confidence level of the DL prediction as well as the quality of the model and data set. We demonstrate this framework in the application of large space\u2013bandwidth product phase imaging using a physics-guided coded illumination scheme. From only five multiplexed illumination measurements, our BNN predicts gigapixel phase images in both static and dynamic biological samples with quantitative credibility assessment. Furthermore, we show that low-certainty regions can identify spatially and temporally rare biological phenomena. 
We believe our uncertainty learning framework is widely applicable to many DL-based biomedical imaging techniques for assessing the reliability of DL predictions.<\/span><\/p>\n<div class=\"page\" title=\"Page 1\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p><img loading=\"lazy\" src=\"\/tianlab\/files\/2019\/02\/intro-636x308.png\" alt=\"\" width=\"636\" height=\"308\" class=\"size-medium wp-image-978 aligncenter\" srcset=\"https:\/\/sites.bu.edu\/tianlab\/files\/2019\/02\/intro-636x308.png 636w, https:\/\/sites.bu.edu\/tianlab\/files\/2019\/02\/intro-768x371.png 768w, https:\/\/sites.bu.edu\/tianlab\/files\/2019\/02\/intro-1024x495.png 1024w\" sizes=\"(max-width: 636px) 100vw, 636px\" \/><\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"https:\/\/doi.org\/10.1364\/OPTICA.5.001181\" target=\"_blank\" rel=\"noopener noreferrer\"><strong>Deep speckle correlation: a deep learning approach towards scalable imaging through scattering media<br \/>\n<\/strong><\/a>Yunzhe Li, Yujia Xue,\u00a0Lei Tian<br \/>\n<strong><em>Optica<\/em><\/strong> 5, 1181-1190 (2018).<br \/>\n<span style=\"color: #993300;\"><strong>\u2b51<\/strong><strong><em> Top 5 most cited articles in Optica published in 2018 (Source: Google Scholar)<\/em><\/strong><\/span><\/p>\n<p><strong><span><span style=\"color: #993300;\"><span style=\"color: #0000ff;\">\u2b51<\/span><em>\u00a0<\/em><\/span><\/span><a href=\"https:\/\/github.com\/bu-cisl\/Deep-Speckle-Correlation\">Github Project<\/a><\/strong><\/p>\n<p>Imaging through scattering is an important yet challenging problem. Tremendous progress has been made by exploiting the deterministic input\u2013output \u201ctransmission matrix\u201d for a fixed medium. However, this \u201cone-to-one\u201d mapping is highly susceptible to speckle decorrelations &#8211; small perturbations to the scattering medium lead to model errors and severe degradation of the imaging performance. 
Our goal here is to develop a new framework that is highly scalable to both medium perturbations and measurement requirements. To do so, we propose a statistical \u201cone-to-all\u201d deep learning (DL) technique that encapsulates a wide range of statistical variations for the model to be resilient to speckle decorrelations. Specifically, we develop a convolutional neural network (CNN) that is able to learn the statistical information contained in the speckle intensity patterns captured on a set of diffusers having the same macroscopic parameter. We then show for the first time, to the best of our knowledge, that the trained CNN is able to generalize and make high-quality object predictions through an entirely different set of diffusers of the same class. Our work paves the way to a highly scalable DL approach for imaging through scattering media.<\/p>\n<p><img loading=\"lazy\" src=\"\/tianlab\/files\/2016\/08\/intro-636x443.png\" alt=\"intro\" width=\"636\" height=\"443\" class=\"aligncenter wp-image-854 size-medium\" srcset=\"https:\/\/sites.bu.edu\/tianlab\/files\/2016\/08\/intro-636x443.png 636w, https:\/\/sites.bu.edu\/tianlab\/files\/2016\/08\/intro-768x535.png 768w, https:\/\/sites.bu.edu\/tianlab\/files\/2016\/08\/intro-1024x713.png 1024w\" sizes=\"(max-width: 636px) 100vw, 636px\" \/><\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Whitened Score Diffusion: A Structured Prior for Imaging Inverse Problems J Alido, T Li, Y Sun, L Tian NeurIPS 2025. \u2b51\u00a0Github Project Wide-field, high-resolution reconstruction in computational multi-aperture miniscope using a Fourier neural network Qianwan Yang, Ruipeng Guo, Guorong Hu, Yujia Xue, Yunzhe Li, and Lei Tian Optica Vol. 11, Issue 6, pp. 860-871 (2024). 
[&hellip;]<\/p>\n","protected":false},"author":12228,"featured_media":854,"parent":133,"menu_order":4,"comment_status":"closed","ping_status":"closed","template":"page-templates\/no-sidebars.php","meta":[],"_links":{"self":[{"href":"https:\/\/sites.bu.edu\/tianlab\/wp-json\/wp\/v2\/pages\/954"}],"collection":[{"href":"https:\/\/sites.bu.edu\/tianlab\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.bu.edu\/tianlab\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/tianlab\/wp-json\/wp\/v2\/users\/12228"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/tianlab\/wp-json\/wp\/v2\/comments?post=954"}],"version-history":[{"count":29,"href":"https:\/\/sites.bu.edu\/tianlab\/wp-json\/wp\/v2\/pages\/954\/revisions"}],"predecessor-version":[{"id":2504,"href":"https:\/\/sites.bu.edu\/tianlab\/wp-json\/wp\/v2\/pages\/954\/revisions\/2504"}],"up":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/tianlab\/wp-json\/wp\/v2\/pages\/133"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/tianlab\/wp-json\/wp\/v2\/media\/854"}],"wp:attachment":[{"href":"https:\/\/sites.bu.edu\/tianlab\/wp-json\/wp\/v2\/media?parent=954"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}