<p><a href="https://www.osapublishing.org/oe/abstract.cfm?uri=oe-28-13-19641"><strong>Diffuser-based computational imaging funduscope</strong></a><br />
Yunzhe Li, Gregory N. McKay, Nicholas J. Durr, Lei Tian<br />
<em><strong>Optics Express</strong></em> 28, pp. 19641-19654 (2020)</p>
<p>Poor access to eye care is a major global challenge that could be ameliorated by low-cost, portable, and easy-to-use diagnostic technologies. Diffuser-based imaging has the potential to enable inexpensive, compact optical systems that can reconstruct a focused image of an object over a range of defocus errors. Here, we present a diffuser-based computational funduscope that reconstructs important clinical features of a model eye. Compared to existing diffuser-imager architectures, our system features an infinite-conjugate design that relays the ocular lens onto the diffuser. This offers shift-invariance across a wide field-of-view (FOV) and invariant magnification across an extended depth range. Experimentally, we demonstrate fundus image reconstruction over a 33° FOV and robustness to ±4D refractive error using a constant point-spread function (PSF).</p>
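<p>Because the shift-invariant system above has a constant PSF, image recovery reduces to a single 2-D deconvolution of the sensor measurement against one calibrated PSF. The paper's actual solver is not reproduced here; the following is only an illustrative Wiener-deconvolution sketch under that shift-invariance assumption, with a synthetic scene and all variable names of our own choosing.</p>

```python
import numpy as np

def wiener_deconvolve(measurement, psf, eps=1e-6):
    """Recover a scene from a shift-invariant measurement.

    measurement, psf: 2-D arrays of equal shape; eps is a
    regularization constant standing in for the inverse SNR.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))
    Y = np.fft.fft2(measurement)
    # Wiener filter: H* / (|H|^2 + eps)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(X))

# Synthetic check: blur a sparse scene with a random "diffuser"
# PSF via the same convolution model, then invert it.
rng = np.random.default_rng(0)
scene = np.zeros((64, 64))
scene[20, 30] = 1.0
scene[40, 15] = 0.5
psf = rng.random((64, 64))
psf /= psf.sum()
meas = np.real(np.fft.ifft2(np.fft.fft2(scene)
                            * np.fft.fft2(np.fft.ifftshift(psf))))
recon = wiener_deconvolve(meas, psf)
```

<p>In practice the PSF would be measured once from a calibration point source, and the same filter reused across the reported depth range, which is exactly what the constant-PSF result enables.</p>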
<p>Combined with diffuser-based wavefront sensing, this technology could enable combined ocular aberrometry and funduscopic screening through a single diffuser sensor.</p>
<p><img src="/tianlab/files/2020/02/CLO-636x475.png" alt="Diffuser-based computational funduscope" width="500" height="373" class="aligncenter wp-image-1229" /></p>
<p><a href="https://www.nature.com/articles/s41377-019-0216-0"><strong>Deep spectral learning for label-free optical imaging oximetry with uncertainty quantification</strong></a><br />
Rongrong Liu, Shiyi Cheng, Lei Tian, Ji Yi<br />
<em><strong>Light: Science &amp; Applications</strong></em> 8: 102 (2019)</p>
<p>Measurement of blood oxygen saturation
(<i>s</i>O<sub>2</sub>) by optical imaging oximetry provides invaluable insight into local tissue function and metabolism. Despite different embodiments and modalities, all label-free optical-imaging oximetry techniques utilize the same principle: <i>s</i>O<sub>2</sub>-dependent spectral contrast from haemoglobin. Traditional approaches for quantifying <i>s</i>O<sub>2</sub> often rely on analytical models fitted to the spectral measurements. In practice, these approaches suffer from uncertainties due to biological variability, tissue geometry, light scattering, systemic spectral bias, and variations in experimental conditions. Here, we propose a new data-driven approach, termed deep spectral learning (DSL), that achieves oximetry highly robust to experimental variations and, more importantly, provides uncertainty quantification for each <i>s</i>O<sub>2</sub> prediction. To demonstrate the robustness and generalizability of DSL, we analyse data from two visible-light optical coherence tomography (vis-OCT) setups across two separate in vivo experiments on rat retinas. Predictions made by DSL are highly adaptive to experimental variability as well as to the depth-dependent backscattering spectra. Two neural-network-based models are tested and compared with the traditional least-squares fitting (LSF) method. The DSL-predicted <i>s</i>O<sub>2</sub> shows significantly lower mean-square error than that of the LSF. For the first time, we demonstrate <i>en face</i> maps of retinal oximetry along with a pixel-wise confidence assessment.
Our DSL overcomes several limitations of the traditional approaches and provides a more flexible, robust, and reliable deep-learning approach for in vivo, non-invasive, label-free optical oximetry.</p>
<p><img src="/tianlab/files/2019/10/OCT.png" alt="Deep spectral learning oximetry" width="595" height="397" class="size-full wp-image-1140 aligncenter" /></p>
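<p>The pixel-wise confidence assessment described above is characteristic of heteroscedastic regression, in which a network outputs both an <i>s</i>O<sub>2</sub> estimate and its variance and is trained with a Gaussian negative log-likelihood. The sketch below illustrates that loss only; it is not the paper's model, and the toy numbers and names are our own.</p>

```python
import numpy as np

def heteroscedastic_nll(y_true, mu, log_var):
    """Gaussian negative log-likelihood with an input-dependent
    (heteroscedastic) variance, constant terms dropped.

    mu and log_var stand in for the two output heads of such a
    network: mu is the sO2 estimate, exp(log_var) its predicted
    variance. Errors are scaled by the model's own certainty,
    while the log_var term penalizes blanket caution.
    """
    return np.mean(0.5 * np.exp(-log_var) * (y_true - mu) ** 2
                   + 0.5 * log_var)

# Toy check of the incentive structure on two "pixels":
y = np.array([0.9, 0.6])                                   # true sO2
loss_right = heteroscedastic_nll(y, y, np.full(2, -4.0))   # confident, correct
loss_hedge = heteroscedastic_nll(y, y + 0.5, np.zeros(2))  # wrong, uncertain
loss_wrong = heteroscedastic_nll(y, y + 0.5, np.full(2, -4.0))  # confidently wrong
# loss_right < loss_hedge < loss_wrong
```

<p>Trained this way, the predicted variance can be rendered per pixel, which is one route to the kind of <i>en face</i> confidence map the abstract reports.</p>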