{"id":823,"date":"2018-05-09T09:06:51","date_gmt":"2018-05-09T13:06:51","guid":{"rendered":"https:\/\/sites.bu.edu\/tianlab\/?page_id=823"},"modified":"2025-11-05T22:12:44","modified_gmt":"2025-11-06T03:12:44","slug":"deep-learning-for-biomedical-imaging","status":"publish","type":"page","link":"https:\/\/sites.bu.edu\/tianlab\/publications\/deep-learning-for-biomedical-imaging\/","title":{"rendered":"Deep learning for biomedical imaging"},"content":{"rendered":"<p><a href=\"https:\/\/www.cell.com\/newton\/fulltext\/S2950-6360(25)00187-2\"><strong>Self-supervised elimination of non-independent noise in hyperspectral imaging<\/strong><\/a><br \/>\nG Ding, C Liu, J Yin, X Teng, Y Tan, H He, H Lin, L Tian, JX Cheng<br \/>\n<em><strong>Newton<\/strong><\/em> 1 (6)<br \/>\n<img loading=\"lazy\" src=\"https:\/\/www.cell.com\/cms\/10.1016\/j.newton.2025.100195\/asset\/91af2470-52cf-4321-b5a1-287f30f031dc\/main.assets\/fx1_lrg.jpg\" width=\"539\" height=\"539\" class=\"aligncenter\" \/><\/p>\n<p><a href=\"https:\/\/www.nature.com\/articles\/s41592-024-02575-1\"><strong>Label-free nanoscopy of cell metabolism by ultrasensitive reweighted visible stimulated Raman scattering<\/strong><\/a><br \/>\nHaonan Lin, Scott Seitz, Yuying Tan, Jean-Baptiste Lugagne, Le Wang, Guangrui Ding, Hongjian He, Tyler J. Rauwolf, Mary J. Dunlop, John H. Connor, John A. Porco Jr., Lei Tian &amp; Ji-Xin Cheng<br \/>\n<em><strong>Nature Methods<\/strong><\/em> (2025).<\/p>\n<p><img loading=\"lazy\" src=\"https:\/\/media.springernature.com\/full\/springer-static\/image\/art%3A10.1038%2Fs41592-024-02575-1\/MediaObjects\/41592_2024_2575_Fig1_HTML.png\" alt=\"Fig. 1\" width=\"775\" height=\"711\" class=\"aligncenter\" \/><\/p>\n<p><a href=\"https:\/\/www.nature.com\/articles\/s41377-024-01658-0\"><strong>Enhanced multiscale human brain imaging by semi-supervised digital staining and serial sectioning optical coherence tomography<\/strong><\/a><br \/>\nShiyi Cheng, Shuaibin Chang, Yunzhe Li, Anna Novoseltseva, Sunni Lin, Yicun Wu, Jiahui Zhu, Ann C. McKee, Douglas L. Rosene, Hui Wang, Irving J. Bigio, David A. Boas &amp; Lei Tian<br \/>\n<em><strong>Light: Science &amp; Applications<\/strong><\/em> 14, 57 (2025).<br \/>\n<strong>\u2b51<span>\u00a0<\/span><a href=\"https:\/\/github.com\/bu-cisl\/DS-OCT\">Github Project<\/a><\/strong><\/p>\n<p><img loading=\"lazy\" src=\"https:\/\/media.springernature.com\/full\/springer-static\/image\/art%3A10.1038%2Fs41377-024-01658-0\/MediaObjects\/41377_2024_1658_Fig1_HTML.png\" alt=\"Fig. 1\" class=\"aligncenter\" width=\"919\" height=\"396\" \/><\/p>\n<p><a href=\"https:\/\/opg.optica.org\/optica\/fulltext.cfm?uri=optica-11-6-860&amp;id=552177\"><strong>Wide-field, high-resolution reconstruction in computational multi-aperture miniscope using a Fourier neural network<\/strong><\/a><br \/>\nQianwan Yang, Ruipeng Guo, Guorong Hu, Yujia Xue, Yunzhe Li, and Lei Tian<br \/>\n<em><strong>Optica<\/strong><\/em> Vol. 11, Issue 6, pp. 
⭑ [Github Project](https://github.com/bu-cisl/SV-FourierNet)

![SV-FourierNet overview](https://sites.bu.edu/tianlab/files/2024/05/FTnet-1024x667.png)

**[Robust single-shot 3D fluorescence imaging in scattering media with a simulator-trained neural network](http://arxiv.org/abs/2303.12573)**
J. Alido, J. Greene, Y. Xue, G. Hu, Y. Li, K. Monk, B. DeBenedicts, I. Davison, L. Tian
*Optics Express* 32 (4), 6241–6257 (2024).
⭑ [Github Project](https://github.com/bu-cisl/sbrnet)

![SBRNet overview](https://sites.bu.edu/tianlab/files/2023/03/sbrnet-1024x702.png)

**[High-Speed Low-Light In Vivo Two-Photon Voltage Imaging of Large Neuronal Populations](https://www.nature.com/articles/s41592-023-01820-3)**
Jelena Platisa, Xin Ye, Allison M Ahrens, Chang Liu, Ichun A Chen, Ian G Davison, Lei Tian, Vincent A Pieribone, Jerry L Chen
*Nature Methods* 20, 1095–1103 (2023).
⭑ [Github Project](https://github.com/bu-cisl/DeepVID)
⭑ Spotlight: [AI to the rescue of voltage imaging](https://www.sciencedirect.com/science/article/pii/S2667237523001340?via%3Dihub#bib1), *Cell Reports Methods*

Monitoring spiking activity across large neuronal populations at behaviorally relevant timescales is critical for understanding neural circuit function. Unlike calcium imaging, voltage imaging requires kilohertz sampling rates that reduce fluorescence detection to near shot-noise levels. High-photon-flux excitation can overcome photon-limited shot noise, but photobleaching and photodamage restrict the number and duration of simultaneously imaged neurons. We investigated an alternative approach aimed at low two-photon flux: voltage imaging below the shot-noise limit. This framework involved developing positive-going voltage indicators with improved spike detection (SpikeyGi and SpikeyGi2); a two-photon microscope (‘SMURF’) for kilohertz frame-rate imaging across a 0.4 mm × 0.4 mm field of view; and a self-supervised denoising algorithm (DeepVID) for inferring fluorescence from shot-noise-limited signals. Through these combined advances, we achieved simultaneous high-speed deep-tissue imaging of more than 100 densely labeled neurons over one hour in awake behaving mice. This demonstrates a scalable approach for voltage imaging across increasing neuronal populations.

![DeepVID overview](https://sites.bu.edu/tianlab/files/2023/03/DeepVID-954x1024.png)
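A key property of DeepVID is that it needs no clean ground truth. Below is a minimal sketch of the general self-supervised idea behind such video denoisers: train a network to predict each frame from its temporal neighbors only, so that temporally independent shot noise cannot be reproduced and is suppressed. The architecture, window size, and training loop are illustrative assumptions, not the published implementation.

```python
# Minimal sketch of self-supervised temporal denoising in the spirit of
# DeepVID (architecture and hyperparameters here are illustrative).
# The network sees only neighboring frames, never the center frame, so it
# cannot learn the independent shot noise of the frame it must predict.
import torch
import torch.nn as nn

class TemporalDenoiser(nn.Module):
    def __init__(self, n_neighbors=4, width=32):
        super().__init__()
        # input channels: 2*n_neighbors frames (center frame withheld)
        self.net = nn.Sequential(
            nn.Conv2d(2 * n_neighbors, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, 3, padding=1),
        )

    def forward(self, neighbors):          # (B, 2K, H, W) noisy neighbor frames
        return self.net(neighbors)         # (B, 1, H, W) denoised center frame

def self_supervised_step(model, clip, opt, k=4):
    """clip: (B, 2k+1, H, W) raw noisy frames; the noisy center frame itself
    serves as the regression target (no clean data required)."""
    neighbors = torch.cat([clip[:, :k], clip[:, k + 1:]], dim=1)
    target = clip[:, k:k + 1]
    loss = nn.functional.mse_loss(model(neighbors), target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```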
**[Multiple-scattering simulator-trained neural network for intensity diffraction tomography](https://opg.optica.org/oe/fulltext.cfm?uri=oe-31-3-4094&id=525403)**
A. Matlock, J. Zhu, L. Tian
*Optics Express* 31, 4094–4107 (2023).

Recovering 3D phase features of complex biological samples traditionally sacrifices computational efficiency and processing time for physical model accuracy and reconstruction quality. Here, we overcome this challenge using an approximant-guided deep learning framework in a high-speed intensity diffraction tomography system. Applying a physics-model, simulator-based learning strategy trained entirely on natural image datasets, we show that our network can robustly reconstruct complex 3D biological samples. To achieve highly efficient training and prediction, we implement a lightweight 2D network structure that utilizes a multi-channel input for encoding the axial information. We demonstrate this framework on experimental measurements of weakly scattering epithelial buccal cells and strongly scattering *C. elegans* worms. We benchmark the network's performance against a state-of-the-art multiple-scattering model-based iterative reconstruction algorithm, highlight its robustness by reconstructing dynamic samples from a living worm video, and emphasize its generalization capabilities by recovering algae samples imaged on different experimental setups. To assess the prediction quality, we develop a quantitative evaluation metric to show that our predictions are consistent with both multiple-scattering physics and experimental measurements.
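The "lightweight 2D network with a multi-channel input" can be pictured as follows. This is a minimal sketch under stated assumptions: a plain convolutional stack stands in for the paper's model, and `idt_forward_model` is a placeholder for the physics simulator that generates training measurements from natural-image phantoms.

```python
# Minimal sketch of the simulator-based training strategy described above
# (names such as idt_forward_model are placeholders, not the paper's code).
# A purely 2D CNN takes the stack of intensity images as input channels and
# predicts the 3D volume as output channels, encoding depth in the channel axis.
import torch
import torch.nn as nn

n_meas, n_slices = 16, 32                  # illustrative sizes

net = nn.Sequential(                       # lightweight all-2D network
    nn.Conv2d(n_meas, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, n_slices, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

def train_step(natural_image_volume, idt_forward_model):
    """natural_image_volume: (B, n_slices, H, W) synthetic phantom built by
    stacking natural images; idt_forward_model: physics simulator returning
    (B, n_meas, H, W) intensity measurements."""
    meas = idt_forward_model(natural_image_volume)   # simulate measurements
    pred = net(meas)                                 # 2D net, depth in channels
    loss = nn.functional.l1_loss(pred, natural_image_volume)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```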
![Simulator-trained IDT overview](https://sites.bu.edu/tianlab/files/2023/05/DL-IDT-1024x662.png)

**[Recovery of Continuous 3D Refractive Index Maps from Discrete Intensity-Only Measurements using Neural Fields](https://www.nature.com/articles/s42256-022-00530-3)**
Renhao Liu, Yu Sun, Jiabei Zhu, Lei Tian, Ulugbek Kamilov
*Nature Machine Intelligence* 4, 781–791 (2022).

Intensity diffraction tomography (IDT) refers to a class of optical microscopy techniques for imaging the three-dimensional refractive index (RI) distribution of a sample from a set of two-dimensional intensity-only measurements. The reconstruction of artefact-free RI maps is a fundamental challenge in IDT due to the loss of phase information and the missing-cone problem. Neural fields have recently emerged as a deep learning approach for learning continuous representations of physical fields: a coordinate-based neural network represents the field by mapping spatial coordinates to the corresponding physical quantities, in our case the complex-valued refractive index. We present Deep Continuous Artefact-free RI Field (DeCAF), a neural-fields-based IDT method that can learn a high-quality continuous representation of an RI volume from its intensity-only and limited-angle measurements. The representation in DeCAF is learned directly from the measurements of the test sample by using the IDT forward model, without any ground-truth RI maps. We qualitatively and quantitatively evaluate DeCAF on simulated and experimental biological samples. Our results show that DeCAF can generate high-contrast and artefact-free RI maps and lead to an up to 2.1-fold reduction in mean squared error over existing methods.

![DeCAF overview](https://sites.bu.edu/tianlab/files/2022/09/NF-IDT-1024x585.png)
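A minimal sketch of the neural-field idea behind DeCAF: a coordinate MLP with a Fourier-feature encoding represents the continuous RI volume and is fitted through the IDT forward model directly against the intensity measurements, with no ground-truth RI. The layer sizes and the `idt_forward_model` hook below are assumptions, not the paper's implementation.

```python
# Minimal sketch of a neural-field (coordinate-network) representation.
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):          # standard positional encoding
    def __init__(self, n_freq=8):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(n_freq) * torch.pi)
    def forward(self, xyz):                # (N, 3) -> (N, 3 * 2 * n_freq)
        ang = xyz[..., None] * self.freqs  # (N, 3, n_freq)
        return torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(-2)

class RIField(nn.Module):
    def __init__(self, n_freq=8, width=256):
        super().__init__()
        self.enc = FourierFeatures(n_freq)
        self.mlp = nn.Sequential(
            nn.Linear(3 * 2 * n_freq, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 1),           # RI contrast at coordinate (x, y, z)
        )
    def forward(self, xyz):
        return self.mlp(self.enc(xyz))

def fit(field, grid_xyz, measured, idt_forward_model, steps=2000):
    """Render the field on a grid, push it through the (placeholder) IDT
    physics, and match the real intensity images; no ground-truth RI map."""
    opt = torch.optim.Adam(field.parameters(), lr=1e-4)
    for _ in range(steps):
        volume = field(grid_xyz)                     # continuous field -> grid
        loss = nn.functional.mse_loss(idt_forward_model(volume), measured)
        opt.zero_grad(); loss.backward(); opt.step()
```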
**[Deep learning-augmented Computational Miniature Mesoscope](https://opg.optica.org/optica/fulltext.cfm?uri=optica-9-9-1009&id=497528)**
Yujia Xue, Qianwan Yang, Guorong Hu, Kehan Guo, Lei Tian
*Optica* 9, 1009–1021 (2022).
⭑ [Github Project](https://github.com/bu-cisl/Computational-Miniature-Mesoscope-CM2)

Fluorescence microscopy is essential to study biological structures and dynamics. However, existing systems suffer from a tradeoff between field-of-view (FOV), resolution, and complexity, and thus cannot fulfill the emerging need for miniaturized platforms providing micron-scale resolution across centimeter-scale FOVs. To overcome this challenge, we developed the Computational Miniature Mesoscope (CM²), which exploits a computational imaging strategy to enable single-shot 3D high-resolution imaging across a wide FOV in a miniaturized platform. Here, we present CM² V2, which significantly advances both the hardware and the computation. We complement the 3×3 microlens array with a new hybrid emission filter that improves the imaging contrast by 5×, and design a 3D-printed freeform collimator for the LED illuminator that improves the excitation efficiency by 3×. To enable high-resolution reconstruction across the large imaging volume, we develop an accurate and efficient 3D linear shift-variant (LSV) model that characterizes the spatially varying aberrations. We then train a multi-module deep learning model, CM²Net, using only the 3D-LSV simulator. We show that CM²Net generalizes well to experiments and achieves accurate 3D reconstruction across a ~7-mm FOV and 800-µm depth, providing ~6-µm lateral and ~25-µm axial resolution. This yields ~8× better axial localization and ~1400× faster speed compared with the previous model-based algorithm. We anticipate this simple and low-cost computational miniature imaging system will be impactful for many large-scale 3D fluorescence imaging applications.

![CM² V2 overview](https://sites.bu.edu/tianlab/files/2022/08/CM2V2-1024x289.png)
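One common way to realize a 3D linear shift-variant forward model of the kind described above is to expand the spatially varying PSF over a small basis of shift-invariant kernels blended by spatial weight masks. The sketch below illustrates that decomposition under stated assumptions; the kernel count, calibration procedure, and the exact formulation used to train CM²Net are not specified here.

```python
# Minimal sketch of a masked-basis LSV forward model (illustrative, not the
# paper's calibrated model): y = sum_k mask_k * (h_k conv x), summed over depth.
import torch
import torch.fft as fft

def lsv_forward(volume, kernels, masks):
    """volume:  (D, H, W) fluorescence volume
    kernels: (D, K, H, W) depth-dependent PSF basis (zero-padded, centered)
    masks:   (K, H, W) spatial weights interpolating between basis PSFs
    returns: (H, W) simulated sensor image."""
    img = volume.new_zeros(volume.shape[-2:])
    V = fft.rfft2(volume)                            # (D, H, W//2 + 1)
    for k in range(masks.shape[0]):
        # shift-invariant convolution with the k-th basis PSF at every depth
        conv = fft.irfft2(V * fft.rfft2(kernels[:, k]), s=volume.shape[-2:])
        img = img + (masks[k] * conv).sum(dim=0)     # weight, sum over depth
    return img
```

Training pairs for a network like CM²Net can then be generated entirely in simulation: draw a synthetic volume, render it with `lsv_forward`, add sensor noise, and regress the volume from the rendered image.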
**[Microsecond fingerprint stimulated Raman spectroscopic imaging by ultrafast tuning and spatial-spectral learning](https://www.nature.com/articles/s41467-021-23202-z)**
H. Lin, H.J. Lee, N. Tague, J.-B. Lugagne, C. Zong, F. Deng, J. Shin, L. Tian, W. Wong, M.J. Dunlop, J.-X. Cheng
*Nature Communications* 12 (1) (2021).

Label-free vibrational imaging by stimulated Raman scattering (SRS) provides unprecedented insight into real-time chemical distributions. Specifically, SRS in the fingerprint region (400–1800 cm⁻¹) can resolve multiple chemicals in a complex bio-environment. However, due to the intrinsically weak Raman cross-sections and the lack of ultrafast spectral acquisition schemes with high spectral fidelity, SRS in the fingerprint region has not been viable for studying living cells or large-scale tissue samples. Here, we report a fingerprint spectroscopic SRS platform that acquires a distortion-free SRS spectrum at 10 cm⁻¹ spectral resolution within 20 µs using a polygon scanner. Meanwhile, we significantly improve the signal-to-noise ratio by employing a spatial-spectral residual learning network, reaching a level comparable to that obtained with 100 times the integration. Collectively, our system enables high-speed vibrational spectroscopic imaging of multiple biomolecules in samples ranging from a single live microbe to a tissue slice.

![Fingerprint SRS platform](https://sites.bu.edu/tianlab/files/2021/05/Screen-Shot-2021-05-24-at-11.02.29-AM-1024x568.png)
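A minimal sketch of a spatial-spectral residual-learning denoiser consistent with the description above: 3D convolutions couple the two spatial axes with the Raman-shift axis, and the network predicts the noise residual rather than the clean signal. Depth and width below are illustrative assumptions.

```python
# Minimal sketch of spatial-spectral residual denoising for hyperspectral
# SRS stacks (layer count and width are illustrative, not the paper's).
import torch
import torch.nn as nn

class SpatialSpectralDenoiser(nn.Module):
    def __init__(self, width=32, depth=5):
        super().__init__()
        layers = [nn.Conv3d(1, width, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv3d(width, width, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv3d(width, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):        # x: (B, 1, wavenumber, H, W) noisy SRS stack
        return x - self.body(x)  # residual learning: subtract predicted noise
```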
**[Single-cell cytometry via multiplexed fluorescence prediction by label-free reflectance microscopy](https://advances.sciencemag.org/content/7/3/eabe0431)**
Shiyi Cheng, Sipei Fu, Yumi Mun Kim, Weiye Song, Yunzhe Li, Yujia Xue, Ji Yi, Lei Tian
*Science Advances* 7 (3), eabe0431 (2021).

Traditional imaging cytometry uses fluorescence markers to identify specific structures but is limited in throughput by the labeling process. We develop a label-free technique that avoids physical staining and provides multiplexed readouts via a deep learning–augmented digital labeling method. We leverage the rich structural information and superior sensitivity of reflectance microscopy and show that digital labeling predicts accurate subcellular features after training on immunofluorescence images. We demonstrate up to a threefold improvement in prediction accuracy over the state of the art. Beyond fluorescence prediction, we demonstrate that single-cell-level structural phenotypes of cell cycles are correctly reproduced by the digital multiplexed images, including Golgi twins, Golgi haze during mitosis, and DNA synthesis. We further show that the multiplexed readouts enable accurate multiparametric single-cell profiling across a large cell population. Our method can markedly improve the throughput of imaging cytometry for applications in phenotyping, pathology, and high-content screening.

![Multiplexed digital labeling](https://sites.bu.edu/tianlab/files/2020/08/Mux_DIF-636x360.png)
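At its core, digital labeling is supervised image-to-image regression from a label-free input stack to registered fluorescence channels. The toy encoder/decoder below illustrates the multiplexed (multi-channel output) aspect only; the real architecture, channel counts, and losses differ and are detailed in the paper.

```python
# Minimal sketch of multiplexed digital labeling: one network predicts
# several fluorescence channels from a label-free reflectance stack.
# Channel counts and layers here are illustrative assumptions.
import torch
import torch.nn as nn

class DigitalLabeler(nn.Module):
    def __init__(self, in_ch=5, out_ch=3, width=32):  # e.g. 3 subcellular markers
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 2 * width, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(2 * width, width, 2, stride=2), nn.ReLU(),
            nn.Conv2d(width, out_ch, 3, padding=1),
        )

    def forward(self, reflectance):              # (B, in_ch, H, W)
        return self.dec(self.enc(reflectance))   # (B, out_ch, H, W) predicted IF
```

Training pairs each reflectance stack with registered immunofluorescence images; once trained, staining is purely computational, which is what removes the labeling bottleneck.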
**[Anatomical modeling of brain vasculature in two-photon microscopy by generalizable deep learning](https://spj.sciencemag.org/journals/bmef/2021/8620932/)**
Waleed Tahir, Sreekanth Kura, Jiabei Zhu, Xiaojun Cheng, Rafat Damseh, Fetsum Tadesse, Alex Seibel, Blaire S. Lee, Frederic Lesage, Sava Sakadzic, David A. Boas, Lei Tian
*BME Frontiers*, vol. 2021, Article ID 8620932 (2021).
⭑ [Github Project](https://github.com/bu-cisl/2PM_Vascular_Segmentation_DNN)

Segmentation of blood vessels from two-photon microscopy (2PM) angiograms of brains has important applications in hemodynamic analysis and disease diagnosis. Here we develop a generalizable deep-learning technique for accurate 2PM vascular segmentation of sizable regions in mouse brains acquired from multiple 2PM setups. The technique is also computationally efficient, making it ideal for large-scale neurovascular analysis.

![2PM vascular segmentation](https://sites.bu.edu/tianlab/files/2020/08/2PM_VAN-1024x574.png)
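Angiogram volumes of this size are typically segmented patch-wise. The sketch below shows one standard tiling-and-averaging inference scheme for a trained volumetric segmentation network; the patch size, stride, and `model` interface are assumptions for illustration, not details from the paper.

```python
# Minimal sketch of patch-wise 3D segmentation inference with overlap
# averaging (assumes all volume dimensions are at least `patch`).
import torch

@torch.no_grad()
def segment_volume(model, angiogram, patch=64, stride=48):
    """angiogram: (D, H, W) normalized 2PM stack -> (D, H, W) vessel probability."""
    D, H, W = angiogram.shape
    prob = torch.zeros_like(angiogram)
    hits = torch.zeros_like(angiogram)
    for z in range(0, D - patch + 1, stride):
        for y in range(0, H - patch + 1, stride):
            for x in range(0, W - patch + 1, stride):
                p = angiogram[z:z+patch, y:y+patch, x:x+patch][None, None]
                prob[z:z+patch, y:y+patch, x:x+patch] += torch.sigmoid(model(p))[0, 0]
                hits[z:z+patch, y:y+patch, x:x+patch] += 1
    return prob / hits.clamp(min=1)   # average overlapping predictions
```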
**[Deep spectral learning for label-free optical imaging oximetry with uncertainty quantification](https://www.nature.com/articles/s41377-019-0216-0)**
Rongrong Liu, Shiyi Cheng, Lei Tian, Ji Yi
*Light: Science & Applications* 8, 102 (2019).

Measurement of blood oxygen saturation (sO₂) by optical imaging oximetry provides invaluable insight into local tissue function and metabolism. Despite different embodiments and modalities, all label-free optical-imaging oximetry techniques utilize the same principle of sO₂-dependent spectral contrast from haemoglobin. Traditional approaches to quantifying sO₂ rely on analytical models fitted to the spectral measurements; in practice, they suffer from uncertainties due to biological variability, tissue geometry, light scattering, systemic spectral bias, and variations in experimental conditions. Here, we propose a new data-driven approach, termed deep spectral learning (DSL), to achieve oximetry that is highly robust to experimental variations and, more importantly, able to provide uncertainty quantification for each sO₂ prediction. To demonstrate the robustness and generalizability of DSL, we analyse data from two visible-light optical coherence tomography (vis-OCT) setups across two separate in vivo experiments on rat retinas. Predictions made by DSL are highly adaptive to experimental variability as well as to the depth-dependent backscattering spectra. Two neural-network-based models are tested and compared with the traditional least-squares fitting (LSF) method; the DSL-predicted sO₂ shows significantly lower mean squared errors than the LSF. For the first time, we demonstrate en face maps of retinal oximetry along with a pixel-wise confidence assessment. Our DSL overcomes several limitations of traditional approaches and provides a more flexible, robust, and reliable deep-learning approach for in vivo, non-invasive, label-free optical oximetry.

![vis-OCT retinal oximetry](https://sites.bu.edu/tianlab/files/2019/10/OCT.png)
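One standard way to obtain per-prediction uncertainty in a spectral regression task like this is a two-headed network trained with a Gaussian negative log-likelihood: one head predicts sO₂ and the other its variance. The sketch below illustrates that general pattern; DSL's actual architectures and likelihood are described in the paper and may differ.

```python
# Minimal sketch of spectral regression with built-in uncertainty
# (two-head design; layer sizes and band count are illustrative).
import torch
import torch.nn as nn

class SpectralOximeter(nn.Module):
    def __init__(self, n_bands=64, width=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_bands, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
        )
        self.mean = nn.Linear(width, 1)        # predicted sO2
        self.logvar = nn.Linear(width, 1)      # predicted uncertainty

    def forward(self, spectrum):               # (B, n_bands) backscatter spectrum
        h = self.trunk(spectrum)
        return self.mean(h), self.logvar(h)

def nll_loss(mu, logvar, target):
    # Gaussian negative log-likelihood: large errors are tolerated only
    # where the network also reports large variance, so the variance head
    # becomes a calibrated per-prediction confidence estimate.
    return (0.5 * (torch.exp(-logvar) * (target - mu) ** 2 + logvar)).mean()
```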
**[Reliable deep learning-based phase imaging with uncertainty quantification](https://www.osapublishing.org/optica/abstract.cfm?uri=optica-6-5-618)**
Yujia Xue, Shiyi Cheng, Yunzhe Li, Lei Tian
*Optica* 6, 618–629 (2019).
⭑ [Github Project](https://github.com/bu-cisl/Illumination-Coding-Meets-Uncertainty-Learning)

Emerging deep-learning (DL)-based techniques have significant potential to revolutionize biomedical imaging. However, one outstanding challenge is the lack of reliability assessment of DL predictions, whose errors are commonly revealed only in hindsight. Here, we propose a new Bayesian convolutional neural network (BNN)-based framework that overcomes this issue by quantifying the uncertainty of DL predictions. Foremost, we show that BNN-predicted uncertainty maps provide surrogate estimates of the true error from the network model and the measurement itself. The uncertainty maps characterize imperfections that are often unknown in real-world applications, such as noise, model error, incomplete training data, and out-of-distribution testing data. Quantifying this uncertainty provides a per-pixel estimate of the confidence level of the DL prediction, as well as of the quality of the model and dataset. We demonstrate this framework in the application of large space–bandwidth-product phase imaging using a physics-guided coded illumination scheme. From only five multiplexed illumination measurements, our BNN predicts gigapixel phase images of both static and dynamic biological samples with quantitative credibility assessment. Furthermore, we show that low-certainty regions can identify spatially and temporally rare biological phenomena. We believe this uncertainty learning framework is widely applicable to many DL-based biomedical imaging techniques for assessing the reliability of DL predictions.

![Uncertainty learning for phase imaging](https://sites.bu.edu/tianlab/files/2019/02/intro-1024x495.png)
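The sketch below shows how per-pixel uncertainty maps can be obtained from repeated stochastic forward passes, using Monte Carlo dropout as a stand-in for the Bayesian network; the paper's exact variational scheme may differ, and the sample count here is illustrative.

```python
# Minimal sketch of per-pixel uncertainty from a stochastic network:
# repeated forward passes with dropout active give a predictive mean
# (the phase estimate) and spread (the uncertainty map).
import torch

@torch.no_grad()
def predict_with_uncertainty(model, measurements, n_samples=32):
    """measurements: (1, C, H, W) multiplexed-illumination stack."""
    model.train()                 # keep dropout layers active at test time
    samples = torch.stack([model(measurements) for _ in range(n_samples)])
    model.eval()
    mean = samples.mean(dim=0)    # phase estimate
    std = samples.std(dim=0)      # per-pixel credibility (uncertainty) map
    return mean, std
```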
**[Deep learning approach to Fourier ptychographic microscopy](https://www.osapublishing.org/oe/fulltext.cfm?uri=oe-26-20-26470&id=398626)**
Thanh Nguyen, Yujia Xue, Yunzhe Li, Lei Tian, George Nehmetallah
*Optics Express* 26, 26470–26484 (2018).

Convolutional neural networks (CNNs) have gained tremendous success in solving complex inverse problems. The aim of this work is to develop a novel CNN framework to reconstruct video sequences of dynamic live cells captured using a computational microscopy technique, Fourier ptychographic microscopy (FPM). The unique feature of FPM is its capability to reconstruct images with both wide field-of-view (FOV) and high resolution, i.e., a large space-bandwidth product (SBP), from a series of low-resolution intensity images. For live-cell imaging, a single FPM frame contains thousands of cell samples with different morphological features. Our idea is to fully exploit the statistical information provided by this large spatial ensemble to make predictions in a sequential measurement, without using any additional temporal dataset. Specifically, we show that it is possible to reconstruct high-SBP dynamic cell videos with a CNN trained only on the first FPM dataset captured at the beginning of a time-series experiment. Our CNN approach reconstructs a 12,800 × 10,800-pixel phase image in only ~25 seconds, a ~50× speedup compared with the model-based FPM algorithm. In addition, the CNN reduces the required number of images in each time frame by ~6×. Overall, this significantly improves the imaging throughput by reducing both the acquisition and computational times. The proposed CNN is based on the conditional generative adversarial network (cGAN) framework, and we further exploit transfer learning so that the pre-trained CNN can be optimized to image other cell types. Our technique demonstrates a promising deep-learning approach to continuously monitor large live-cell populations over extended times and gather useful spatial and temporal information with sub-cellular resolution.

![Illumination pattern and ROI](https://sites.bu.edu/tianlab/files/2016/08/patternAndROI-636x611.jpg)
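The transfer-learning step mentioned above can be sketched as a brief fine-tune of the pre-trained generator on a small dataset of the new cell type. The freezing policy, loss, and step count below are illustrative assumptions; the full method is adversarial (cGAN) and is detailed in the paper.

```python
# Minimal sketch of fine-tuning a pre-trained FPM generator on a new cell
# type (freezing policy and loss are illustrative, not the paper's recipe).
from itertools import cycle, islice
import torch
import torch.nn as nn

def finetune(generator, loader, lr=1e-5, steps=500):
    """loader yields (low_res_stack, target_phase) pairs for the new cell type."""
    params = list(generator.parameters())
    for p in params[: len(params) // 2]:
        p.requires_grad = False          # keep early feature extractors fixed
    opt = torch.optim.Adam([p for p in params if p.requires_grad], lr=lr)
    for lr_stack, phase in islice(cycle(loader), steps):
        loss = nn.functional.l1_loss(generator(lr_stack), phase)
        opt.zero_grad(); loss.backward(); opt.step()
    return generator
```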