{"id":301,"date":"2016-08-13T21:42:24","date_gmt":"2016-08-14T02:42:24","guid":{"rendered":"https:\/\/sites.bu.edu\/tianlab\/?page_id=301"},"modified":"2026-04-23T17:03:06","modified_gmt":"2026-04-23T21:03:06","slug":"comp-fluo-imaging","status":"publish","type":"page","link":"https:\/\/sites.bu.edu\/tianlab\/publications\/comp-fluo-imaging\/","title":{"rendered":"Computational Fluorescence Imaging"},"content":{"rendered":"<p><a href=\"https:\/\/www.pnas.org\/doi\/10.1073\/pnas.2531386123\"><strong>Dual-channel event microscopy for ultrafast biological imaging<\/strong><\/a><br \/>\nR. Guo, X. Pan, Q. Deng, A. Ahmed, Q. Yang, J. Greene, T. Li, S.Y. Chan, Z. Chen, G. Hu, H. Feng, &amp; L. Tian<br \/>\n<strong><i>Proc. Natl. Acad. Sci. U.S.A.<\/i><\/strong><span>\u00a0 (<strong>PNAS<\/strong>) <\/span><span class=\"volumeIssueId mb-0\">123 (17) e2531386123<\/span><br \/>\n<img loading=\"lazy\" src=\"\/tianlab\/files\/2026\/04\/DEM-636x319.jpg\" alt=\"\" width=\"636\" height=\"319\" class=\"size-medium wp-image-2566 aligncenter\" srcset=\"https:\/\/sites.bu.edu\/tianlab\/files\/2026\/04\/DEM-636x319.jpg 636w, https:\/\/sites.bu.edu\/tianlab\/files\/2026\/04\/DEM-1024x513.jpg 1024w, https:\/\/sites.bu.edu\/tianlab\/files\/2026\/04\/DEM-768x385.jpg 768w, https:\/\/sites.bu.edu\/tianlab\/files\/2026\/04\/DEM-1536x770.jpg 1536w, https:\/\/sites.bu.edu\/tianlab\/files\/2026\/04\/DEM.jpg 2016w\" sizes=\"(max-width: 636px) 100vw, 636px\" \/><\/p>\n<p><a href=\"https:\/\/www.nature.com\/articles\/s41377-024-01502-5\"><strong>EventLFM: event camera integrated Fourier light field microscopy for ultrafast 3D imaging<\/strong><\/a><br \/>\nRuipeng Guo, Qianwan Yang, Andrew S. Chang, Guorong Hu, Joseph Greene, Christopher V. 
Gabel, Sixian You &amp; Lei Tian<br \/>\n<em><strong>Light: Science &amp; Applications<\/strong><\/em> 13: 144 (2024).<\/p>\n<p><img loading=\"lazy\" src=\"\/tianlab\/files\/2024\/06\/EventLFM-636x588.png\" alt=\"\" width=\"636\" height=\"588\" class=\"size-medium wp-image-2315 aligncenter\" srcset=\"https:\/\/sites.bu.edu\/tianlab\/files\/2024\/06\/EventLFM-636x588.png 636w, https:\/\/sites.bu.edu\/tianlab\/files\/2024\/06\/EventLFM-1024x947.png 1024w, https:\/\/sites.bu.edu\/tianlab\/files\/2024\/06\/EventLFM-768x710.png 768w, https:\/\/sites.bu.edu\/tianlab\/files\/2024\/06\/EventLFM-1536x1421.png 1536w, https:\/\/sites.bu.edu\/tianlab\/files\/2024\/06\/EventLFM-2048x1894.png 2048w\" sizes=\"(max-width: 636px) 100vw, 636px\" \/><\/p>\n<p><a href=\"https:\/\/opg.optica.org\/optica\/fulltext.cfm?uri=optica-11-6-860&amp;id=552177\"><strong>Wide-field, high-resolution reconstruction in computational multi-aperture miniscope using a Fourier neural network<\/strong><\/a><br \/>\nQianwan Yang, Ruipeng Guo, Guorong Hu, Yujia Xue, Yunzhe Li, and Lei Tian<br \/>\n<em><strong>Optica<\/strong><\/em> Vol. 11, Issue 6, pp. 
860-871 (2024).<br \/>\n<strong><span><span style=\"color: #993300;\"><span style=\"color: #0000ff;\">\u2b51<\/span><em>\u00a0<\/em><\/span><\/span><a href=\"https:\/\/github.com\/bu-cisl\/SV-FourierNet\">Github Project<\/a><\/strong><\/p>\n<p><img loading=\"lazy\" src=\"\/tianlab\/files\/2024\/05\/FTnet-636x414.png\" alt=\"\" width=\"636\" height=\"414\" class=\"size-medium wp-image-2255 aligncenter\" srcset=\"https:\/\/sites.bu.edu\/tianlab\/files\/2024\/05\/FTnet-636x414.png 636w, https:\/\/sites.bu.edu\/tianlab\/files\/2024\/05\/FTnet-1024x667.png 1024w, https:\/\/sites.bu.edu\/tianlab\/files\/2024\/05\/FTnet-768x500.png 768w, https:\/\/sites.bu.edu\/tianlab\/files\/2024\/05\/FTnet-1536x1001.png 1536w, https:\/\/sites.bu.edu\/tianlab\/files\/2024\/05\/FTnet-2048x1335.png 2048w\" sizes=\"(max-width: 636px) 100vw, 636px\" \/><\/p>\n<p><strong><a href=\"https:\/\/opg.optica.org\/boe\/fulltext.cfm?uri=boe-15-7-4101&amp;id=551448\">HiLo microscopy with caustic illumination<\/a><\/strong><br \/>\nGuorong Hu, Joseph Greene, Jiabei Zhu, Qianwan Yang, Shuqi Zheng, Yunzhe Li, Jeffrey Alido, Ruipeng Guo, Jerome Mertz, and Lei Tian<br \/>\n<em><strong>Biomedical Optics Express<\/strong><\/em> Vol. 15, Issue 7, pp. 
4101-4110 (2024).<\/p>\n<p><img loading=\"lazy\" src=\"\/tianlab\/files\/2024\/06\/causticHiLo-443x636.jpeg\" alt=\"\" width=\"443\" height=\"636\" class=\"size-medium wp-image-2265 aligncenter\" srcset=\"https:\/\/sites.bu.edu\/tianlab\/files\/2024\/06\/causticHiLo-443x636.jpeg 443w, https:\/\/sites.bu.edu\/tianlab\/files\/2024\/06\/causticHiLo-713x1024.jpeg 713w, https:\/\/sites.bu.edu\/tianlab\/files\/2024\/06\/causticHiLo-768x1103.jpeg 768w, https:\/\/sites.bu.edu\/tianlab\/files\/2024\/06\/causticHiLo-1070x1536.jpeg 1070w, https:\/\/sites.bu.edu\/tianlab\/files\/2024\/06\/causticHiLo-1426x2048.jpeg 1426w, https:\/\/sites.bu.edu\/tianlab\/files\/2024\/06\/causticHiLo-scaled.jpeg 1783w\" sizes=\"(max-width: 443px) 100vw, 443px\" \/><\/p>\n<p><a href=\"http:\/\/arxiv.org\/abs\/2303.12573\"><strong>Robust single-shot 3D fluorescence imaging in scattering media with a simulator-trained neural network <\/strong><\/a><br \/>\nJ. Alido, J. Greene, Y. Xue, G. Hu, Y. Li, K. Monk, B. DeBenedicts, I. Davison, L. Tian<br \/>\n<em><strong>Optics Express<\/strong><\/em> Vol. 32, Issue 4, pp. 
6241-6257 (2024).<br \/>\n<strong><span><span style=\"color: #993300;\"><span style=\"color: #0000ff;\">\u2b51<\/span><em>\u00a0<\/em><\/span><\/span><a href=\"https:\/\/github.com\/bu-cisl\/sbrnet\">Github Project<\/a><\/strong><\/p>\n<p><img loading=\"lazy\" src=\"\/tianlab\/files\/2023\/03\/sbrnet-636x436.png\" alt=\"\" width=\"636\" height=\"436\" class=\"aligncenter wp-image-2087 size-medium\" srcset=\"https:\/\/sites.bu.edu\/tianlab\/files\/2023\/03\/sbrnet-636x436.png 636w, https:\/\/sites.bu.edu\/tianlab\/files\/2023\/03\/sbrnet-1024x702.png 1024w, https:\/\/sites.bu.edu\/tianlab\/files\/2023\/03\/sbrnet-768x527.png 768w, https:\/\/sites.bu.edu\/tianlab\/files\/2023\/03\/sbrnet-1536x1054.png 1536w, https:\/\/sites.bu.edu\/tianlab\/files\/2023\/03\/sbrnet-2048x1405.png 2048w\" sizes=\"(max-width: 636px) 100vw, 636px\" \/><\/p>\n<p><a href=\"https:\/\/doi.org\/10.1117\/1.NPh.10.4.044302\"><strong>Pupil engineering for extended depth-of-field imaging in a fluorescence miniscope<\/strong><\/a><br \/>\nJoseph Greene, Yujia Xue, Jeffrey Alido, Alex Matlock, Guorong Hu, Kivilcim Kili\u00e7, Ian Davison, Lei Tian<br \/>\n<em><strong>Neurophotonics<\/strong><\/em>, Vol. 10, Issue 4, 044302 (2023).<\/p>\n<div class=\"section abstract\" id=\"abstract-1\">\n<div class=\"section\">\n<p>Fluorescence head-mounted microscopes, i.e., miniscopes, have emerged as powerful tools to analyze<span>\u00a0<\/span><i>in-vivo<\/i><span>\u00a0<\/span>neural populations but exhibit a limited depth-of-field (DoF) due to the use of high numerical aperture (NA) gradient refractive index (GRIN) objective lenses. We present extended depth-of-field (EDoF) miniscope, which integrates an optimized thin and lightweight binary diffractive optical element (DOE) onto the GRIN lens of a miniscope to extend the DoF by 2.8 \u00d7 between twin foci in fixed scattering samples. 
We use a genetic algorithm that considers the GRIN lens\u2019 aberration and intensity loss from scattering in a Fourier optics-forward model to optimize a DOE and manufacture the DOE through single-step photolithography. We integrate the DOE into EDoF-Miniscope with a lateral accuracy of 70 \u03bcm to produce high-contrast signals without compromising the speed, spatial resolution, size, or weight. We characterize the performance of EDoF-Miniscope across 5- and 10-\u03bcm fluorescent beads embedded in scattering phantoms and demonstrate that EDoF-Miniscope facilitates deeper interrogations of neuronal populations in a 100-\u03bcm-thick mouse brain sample and vessels in a whole mouse brain sample.\u00a0 <span>Built from off-the-shelf components and augmented by a customizable DOE, we expect that this low-cost EDoF-Miniscope may find utility in a wide range of neural recording applications.<\/span><\/p>\n<\/div>\n<p><img loading=\"lazy\" src=\"\/tianlab\/files\/2023\/05\/EDOF_miniscope-636x499.png\" alt=\"\" width=\"636\" height=\"499\" class=\"aligncenter wp-image-2111 size-medium\" srcset=\"https:\/\/sites.bu.edu\/tianlab\/files\/2023\/05\/EDOF_miniscope-636x499.png 636w, https:\/\/sites.bu.edu\/tianlab\/files\/2023\/05\/EDOF_miniscope-1024x804.png 1024w, https:\/\/sites.bu.edu\/tianlab\/files\/2023\/05\/EDOF_miniscope-768x603.png 768w, https:\/\/sites.bu.edu\/tianlab\/files\/2023\/05\/EDOF_miniscope-1536x1205.png 1536w, https:\/\/sites.bu.edu\/tianlab\/files\/2023\/05\/EDOF_miniscope-2048x1607.png 2048w\" sizes=\"(max-width: 636px) 100vw, 636px\" \/><\/p>\n<\/div>\n<p><a href=\"https:\/\/opg.optica.org\/optica\/fulltext.cfm?uri=optica-9-9-1009&amp;id=497528\"><strong>Deep learning-augmented Computational Miniature Mesoscope<\/strong><\/a><br \/>\nYujia Xue, Qianwan Yang, Guorong Hu, Kehan Guo, Lei Tian<br \/>\n<span><em><strong>Optica<\/strong><\/em>\u00a0<\/span>9<span>, 1009-1021 (2022)<\/span><\/p>\n<p><strong><span><span style=\"color: #993300;\"><span 
style=\"color: #0000ff;\">\u2b51<\/span><em>\u00a0<\/em><\/span><\/span><a href=\"https:\/\/github.com\/bu-cisl\/Computational-Miniature-Mesoscope-CM2\">Github Project<\/a><\/strong><\/p>\n<p><span>Fluorescence microscopy is essential to study biological structures and dynamics. However, existing systems suffer from a tradeoff between field-of-view (FOV), resolution, and complexity, and thus cannot fulfill the emerging need of miniaturized platforms providing micron-scale resolution across centimeter-scale FOVs. To overcome this challenge, we developed Computational Miniature Mesoscope (CM<sup>2<\/sup>) that exploits a computational imaging strategy to enable single-shot 3D high-resolution imaging across a wide FOV in a miniaturized platform. Here, we present CM<sup>2<\/sup> V2 that significantly advances both the hardware and computation. We complement the 3\u00d73 microlens array with a new hybrid emission filter that improves the imaging contrast by 5\u00d7, and design a 3D-printed freeform collimator for the LED illuminator that improves the excitation efficiency by 3\u00d7. To enable high-resolution reconstruction across the large imaging volume, we develop an accurate and efficient 3D linear shift-variant (LSV) model that characterizes the spatially varying aberrations. We then train a multi-module deep learning model, CM<sup>2<\/sup>Net, using only the 3D-LSV simulator. We show that CM<sup>2<\/sup>Net generalizes well to experiments and achieves accurate 3D reconstruction across a \u223c7-mm FOV and 800-\u03bcm depth, and provides \u223c6-\u03bcm lateral and \u223c25-\u03bcm axial resolution. This provides \u223c8\u00d7 better axial localization and \u223c1400\u00d7 faster speed as compared to the previous model-based algorithm. 
We anticipate this simple and low-cost computational miniature imaging system will be impactful to many large-scale 3D fluorescence imaging applications.<\/span><\/p>\n<p><img loading=\"lazy\" src=\"\/tianlab\/files\/2022\/08\/CM2V2-1024x289.png\" alt=\"\" width=\"800\" height=\"226\" class=\"aligncenter wp-image-1954\" srcset=\"https:\/\/sites.bu.edu\/tianlab\/files\/2022\/08\/CM2V2-1024x289.png 1024w, https:\/\/sites.bu.edu\/tianlab\/files\/2022\/08\/CM2V2-636x180.png 636w, https:\/\/sites.bu.edu\/tianlab\/files\/2022\/08\/CM2V2-768x217.png 768w, https:\/\/sites.bu.edu\/tianlab\/files\/2022\/08\/CM2V2-1536x434.png 1536w, https:\/\/sites.bu.edu\/tianlab\/files\/2022\/08\/CM2V2-2048x578.png 2048w\" sizes=\"(max-width: 800px) 100vw, 800px\" \/><\/p>\n<p><a href=\"https:\/\/advances.sciencemag.org\/content\/6\/43\/eabb7508\"><strong>Single-Shot 3D Widefield Fluorescence Imaging with a Computational Miniature Mesoscope<\/strong><\/a><br \/>\nYujia Xue, Ian G. Davison, David A. Boas, Lei Tian<br \/>\n<em><strong>Science Advances<\/strong><\/em> 6 (43), eabb7508 (2020)<br \/>\n<span style=\"color: #993300;\"><strong>\u2b51 On the Cover<br \/>\n\u2b51 In the news:<br \/>\n&#8211; BU ENG news: <\/strong><a href=\"http:\/\/www.bu.edu\/eng\/2020\/10\/21\/brain-imaging-scaled-down\/\">Brain Imaging Scaled Down<\/a><br \/>\n<\/span><\/p>\n<p><strong><span style=\"color: #0000ff;\">\u2b51<\/span>\u00a0<a href=\"https:\/\/github.com\/bu-cisl\/Computational-Miniature-Mesoscope-CM2\">Github Project<\/a><\/strong><\/p>\n<div class=\"section abstract\" id=\"abstract-2\">\n<p id=\"p-3\">Fluorescence microscopes are indispensable to biology and neuroscience. 
The need for recording in freely behaving animals has further driven the development in miniaturized microscopes (miniscopes). However, conventional microscopes\/miniscopes are inherently constrained by their limited space-bandwidth product, shallow depth of field (DOF), and inability to resolve three-dimensional (3D) distributed emitters. Here, we present a Computational Miniature Mesoscope (CM<sup>2<\/sup>) that overcomes these bottlenecks and enables single-shot 3D imaging across an 8 mm by 7 mm field of view and 2.5-mm DOF, achieving 7-\u03bcm lateral resolution and better than 200-\u03bcm axial resolution. The CM<sup>2<\/sup> features a compact lightweight design that integrates a microlens array for imaging and a light-emitting diode array for excitation. Its expanded imaging capability is enabled by computational imaging that augments the optics by algorithms. We experimentally validate the mesoscopic imaging capability on 3D fluorescent samples. We further quantify the effects of scattering and background fluorescence on phantom experiments.<\/p>\n<\/div>\n<p><img loading=\"lazy\" src=\"\/tianlab\/files\/2020\/10\/CM2_SA-1024x324.png\" alt=\"\" width=\"1024\" height=\"324\" class=\"alignnone wp-image-1500 size-large\" srcset=\"https:\/\/sites.bu.edu\/tianlab\/files\/2020\/10\/CM2_SA-1024x324.png 1024w, https:\/\/sites.bu.edu\/tianlab\/files\/2020\/10\/CM2_SA-636x201.png 636w, https:\/\/sites.bu.edu\/tianlab\/files\/2020\/10\/CM2_SA-768x243.png 768w, https:\/\/sites.bu.edu\/tianlab\/files\/2020\/10\/CM2_SA-1536x486.png 1536w, https:\/\/sites.bu.edu\/tianlab\/files\/2020\/10\/CM2_SA.png 2048w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p><strong><a href=\"https:\/\/www.osapublishing.org\/boe\/fulltext.cfm?uri=boe-11-3-1662&amp;id=427971\">Design of a high-resolution light field miniscope for 
volumetric imaging in scattering tissue<\/a><\/strong><br \/>\nYanqin Chen, Bo Xiong, Yujia Xue, Xin Jin, Joseph Greene, and Lei Tian<br \/>\n<strong><em>Biomedical Optics Express<\/em> <\/strong>11, pp. 1662-1678 (2020).<\/p>\n<p>Integrating light field microscopy techniques with existing miniscope architectures has allowed for volumetric imaging of targeted brain regions in freely moving animals. However, the current design of light field miniscopes is limited by non-uniform resolution and long imaging path length. In an effort to overcome these limitations, this paper proposes an optimized Galilean-mode light field miniscope (Gali-MiniLFM), which achieves a more consistent resolution and a significantly shorter imaging path than its conventional counterparts. In addition, this paper provides a novel framework that incorporates the anticipated aberrations of the proposed Gali-MiniLFM into the point spread function (PSF) modeling. This more accurate PSF model can then be used in 3D reconstruction algorithms to further improve the resolution of the platform. Volumetric imaging in the brain necessitates the consideration of the effects of scattering. 
We conduct Monte Carlo simulations to demonstrate the robustness of the proposed Gali-MiniLFM for volumetric imaging in scattering tissue.<\/p>\n<p><img loading=\"lazy\" src=\"\/tianlab\/files\/2020\/02\/LFM_Miniscope-636x425.png\" alt=\"\" width=\"636\" height=\"425\" class=\"size-medium wp-image-1233 aligncenter\" srcset=\"https:\/\/sites.bu.edu\/tianlab\/files\/2020\/02\/LFM_Miniscope-636x425.png 636w, https:\/\/sites.bu.edu\/tianlab\/files\/2020\/02\/LFM_Miniscope-768x513.png 768w, https:\/\/sites.bu.edu\/tianlab\/files\/2020\/02\/LFM_Miniscope-1024x684.png 1024w, https:\/\/sites.bu.edu\/tianlab\/files\/2020\/02\/LFM_Miniscope.png 1392w\" sizes=\"(max-width: 636px) 100vw, 636px\" \/><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Dual-channel event microscopy for ultrafast biological imaging R. Guo, X. Pan, Q. Deng, A. Ahmed, Q. Yang, J. Greene, T. Li, S.Y. Chan, Z. Chen, G. Hu, H. Feng, &amp; L. Tian Proc. Natl. Acad. Sci. 
U.S.A.\u00a0 (PNAS) 123 (17) e2531386123 EventLFM: event camera integrated Fourier light field microscopy for ultrafast 3D imaging Ruipeng Guo, [&hellip;]<\/p>\n","protected":false},"author":12228,"featured_media":1233,"parent":133,"menu_order":1,"comment_status":"closed","ping_status":"closed","template":"page-templates\/no-sidebars.php","meta":[],"_links":{"self":[{"href":"https:\/\/sites.bu.edu\/tianlab\/wp-json\/wp\/v2\/pages\/301"}],"collection":[{"href":"https:\/\/sites.bu.edu\/tianlab\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.bu.edu\/tianlab\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/tianlab\/wp-json\/wp\/v2\/users\/12228"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/tianlab\/wp-json\/wp\/v2\/comments?post=301"}],"version-history":[{"count":21,"href":"https:\/\/sites.bu.edu\/tianlab\/wp-json\/wp\/v2\/pages\/301\/revisions"}],"predecessor-version":[{"id":2569,"href":"https:\/\/sites.bu.edu\/tianlab\/wp-json\/wp\/v2\/pages\/301\/revisions\/2569"}],"up":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/tianlab\/wp-json\/wp\/v2\/pages\/133"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/tianlab\/wp-json\/wp\/v2\/media\/1233"}],"wp:attachment":[{"href":"https:\/\/sites.bu.edu\/tianlab\/wp-json\/wp\/v2\/media?parent=301"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}