{"id":17,"date":"2015-10-29T14:22:57","date_gmt":"2015-10-29T18:22:57","guid":{"rendered":"https:\/\/sites.bu.edu\/tcn\/?page_id=17"},"modified":"2026-01-24T23:20:57","modified_gmt":"2026-01-25T04:20:57","slug":"research","status":"publish","type":"page","link":"https:\/\/sites.bu.edu\/tcn\/research\/","title":{"rendered":"Research"},"content":{"rendered":"<p>Note: Marc is on leave from Boston University for academic years 2025-2026 and 2026-2027.  We are not taking new students at this time.  (Also note that this research description is a couple of years old.)<\/p>\n<p>The overarching goal of the lab&#8217;s work is to develop a physical theory of cognition.\u00a0 That is, an elegant set of equations that simultaneously describe observable behavioral data and the dynamics of large groups of neurons in many cooperating brain regions.\u00a0 This effort involves three kinds of activities.\u00a0 We develop theoretical expressions to describe cognitive computations that could take place in the brain.\u00a0 We analyze neurophysiological data from empirical collaborators and open-source datasets to evaluate the neural predictions of the equations.\u00a0 And we compare the predictions of the models to behavioral data.\u00a0 Recently, with collaborators at the <a href=\"https:\/\/compmem.org\/\">University of Virginia<\/a> and <a href=\"https:\/\/homes.luddy.indiana.edu\/ztiganj\/\">Indiana University<\/a>, we have helped develop deep artificial neural networks that incorporate equations from theoretical neuroscience.<\/p>\n<h3>Neural theory of cognition<\/h3>\n<p>The basic hypothesis for neural computation is that the brain uses the same form of representation for information as a function of continuous variables, together with a relatively small set of data-independent operators to manipulate those functions <a href=\"https:\/\/sites.bu.edu\/tcn\/press\/\">(Howard &amp; Hasselmo, 2020)<\/a>.\u00a0 This hypothesis proposes that populations of neurons in many different 
brain regions code for the real Laplace transform of continuous variables.\u00a0 Neurons coding the real Laplace transform of a function over some variable should show exponentially-decaying receptive fields with a diversity of rate constants <em>s<\/em>.\u00a0 Other populations approximate the inverse Laplace transform, resulting in compact receptive fields over these spaces.\u00a0 These neurons map the rate constants onto the peaks of the receptive fields.\u00a0 Critically, this approach proposes that neural populations should be characterized by smoothly changing parameters.\u00a0 This diversity of parameters across the population is not noise, but is inherent to the information-processing capacity of the brain.\u00a0 Theoretical considerations (<a href=\"http:\/\/jmlr.org\/papers\/volume14\/shankar13a\/shankar13a.pdf\">Shankar &amp; Howard, 2013<\/a>; <a href=\"http:\/\/psycnet.apa.org\/doi\/10.1037\/rev0000081\">Howard &amp; Shankar, 2018<\/a>) lead to the prediction that the parameters should evenly tile not the continuous dimension represented, but the logarithm of that dimension.\u00a0 It has been known for some time that this relationship&#8212;the Weber-Fechner Law&#8212;applies to many neural and psychological dimensions.<\/p>\n<p>The level of description provided by these equations allows one to move easily back and forth between descriptions of the behavior of many neurons and cognitive models that can be evaluated with behavioral data.\u00a0 For instance, one can build cognitive models of a wide variety of memory tasks if one assumes a logarithmically-compressed temporal working memory of the past (<a href=\"http:\/\/psycnet.apa.org\/doi\/10.1037\/a0037840\">Howard, et al., 2015<\/a>; <a href=\"https:\/\/doi.org\/10.1016\/j.tics.2017.11.004\">Howard, 2018<\/a>).\u00a0 In addition to working memory, one can write out Laplace-domain versions of the classic evidence accumulation models used to account for data from simple decision-making tasks (<a 
href=\"https:\/\/doi.org\/10.1007\/s42113-018-0016-2\">Howard, et al., 2018<\/a>).\u00a0 This raises the possibility that one could build computational models of behavior composed of the same equations&#8212;i.e., the same kind of neural circuit&#8212;throughout (<a href=\"https:\/\/escholarship.org\/content\/qt7m38h6c9\/qt7m38h6c9.pdf\">Tiganj, et al., 2021<\/a>; <a href=\"https:\/\/cogsci.mindmodeling.org\/2019\/papers\/0206\/0206.pdf\">Tiganj, et al., 2019<\/a>).\u00a0 Ongoing work attempts to build cognitive models that cast reinforcement learning as an estimate of future events, as well as models of sequential planning and movement.<\/p>\n<h3>Neural representations of time, space, and other variables<\/h3>\n<p>So-called &#8220;time cells&#8221; fire sequentially in the moments following presentation of a salient stimulus.\u00a0 Time cells were originally characterized in the rodent hippocampus (<a href=\"https:\/\/doi.org\/10.1126\/science.1159775\">Pastalkova, et al., 2008<\/a>; <a href=\"https:\/\/doi.org\/10.1016\/j.neuron.2011.07.012\">MacDonald, et al., 2011<\/a>).\u00a0 Time cells were discovered around the time we first began working on a theory for representing the time of past events (<a href=\"http:\/\/dx.doi.org\/10.1016\/j.brainres.2010.07.045\">Shankar &amp; Howard, 2010<\/a>), and the theory made several specific predictions (see <a href=\"https:\/\/psycnet.apa.org\/doi\/10.1037\/a0033621\">Howard &amp; Eichenbaum, 2013<\/a>) that have subsequently been confirmed independently by many labs.\u00a0 For instance, different stimuli trigger distinct sequences, so that populations of time cells actually code for &#8220;what happened when&#8221; in the past (Tiganj, et al., 2019; <a href=\"https:\/\/doi.org\/10.1002\/hipo.23282\">Cruzado, et al., 2020<\/a>; Taxidis, et al., 
2020).\u00a0 Recent empirical work in our lab has focused on testing more specific questions.\u00a0 Collaborative work with Elizabeth Buffalo&#8217;s lab at the University of Washington has shown that neurons in entorhinal cortex decay exponentially with a variety of time constants <a href=\"https:\/\/doi.org\/10.1073\/pnas.1917197117\">(Bright, Meister, et al., 2020)<\/a>, implementing the equation for the real Laplace transform of time (see also <a href=\"https:\/\/doi.org\/10.1038\/s41586-018-0459-6\">Tsao, et al., 2018<\/a>).\u00a0 In collaboration with scientists studying calcium recordings from rodents, including researchers in the <a href=\"https:\/\/www.bu.edu\/csn\/\">Center for Systems Neuroscience<\/a> at BU, we showed evidence suggesting that time cell-like sequences can persist for several <em>minutes<\/em> <a href=\"https:\/\/doi.org\/10.1002\/hipo.23409\">(Liu, et al., 2022)<\/a>.\u00a0 A recent preprint reporting collaborative work with Michael Hasselmo&#8217;s lab at BU shows that hippocampal time cells fire in a very special kind of sequence <a href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/2021.10.25.465750v1\">(Cao, Bladon, et al., 2021)<\/a>: the times at which the cells peak are evenly spaced on a logarithmic scale.\u00a0 This is as predicted by theory (<a href=\"http:\/\/jmlr.org\/papers\/volume14\/shankar13a\/shankar13a.pdf\">Shankar &amp; Howard, 2013<\/a>; <a href=\"http:\/\/psycnet.apa.org\/doi\/10.1037\/rev0000081\">Howard &amp; Shankar, 2018<\/a>) and provides a mathematical connection between the neural representation of time in the brain and the many other sensory variables that obey the Weber-Fechner Law.<\/p>\n<p>Our ongoing work studies how temporal representations change systematically across brain regions.\u00a0 Moreover, we intend to test the hypothesis that the brain represents other types of information in the same way it represents time and space.\u00a0 We hope to study whether 
evidence accumulation in the brain uses the same kind of equations as the neural representation of past time.\u00a0 We are also extremely interested in neural representations of the time of <em>future<\/em> events.<\/p>\n<h3>Deep networks inspired by theoretical neuroscience<\/h3>\n<p>Whatever equations govern cognition have been shaped by countless eons of evolution, suggesting that they serve some adaptive purpose.\u00a0 Incorporating the equations that govern the brain into deep networks should endow those networks with abilities and properties that are not possible with other approaches, resulting in more robust and adaptive networks.\u00a0 The DeepSITH model <a href=\"https:\/\/papers.nips.cc\/paper\/2021\/file\/e7dfca01f394755c11f853602cb2608a-Paper.pdf\">(Jacques, et al., 2021)<\/a> builds a deep network of logarithmically-compressed time cells.\u00a0 DeepSITH was compared to RNNs, which are widely used in both computational neuroscience and artificial intelligence.\u00a0 Although the models performed similarly on many problems, DeepSITH dramatically outperformed generic RNNs on problems that required information to be maintained for a longer time.\u00a0 More dramatically, the SITHCon model adds a convolutional layer with shared weights to a deep network of logarithmically-compressed time cells; we trained it on a series of time-series classification tasks <a href=\"https:\/\/proceedings.mlr.press\/v162\/jacques22a.html\">(Jacques, et al., 2022)<\/a>.\u00a0 For instance, networks were trained to recognize the identity of spoken digits.\u00a0 Although all of the networks evaluated can perform this task, the SITHCon network generalizes in a human-like way.\u00a0 You can do this experiment with some friends: say the word &#8220;seven&#8221; as slowly as you can and see if they can still recognize what digit it is.\u00a0 Although it was never trained on faster or slower speech, SITHCon spontaneously generalizes (as, perhaps, did your friends).\u00a0 The 
scale-invariant property is possible only because this model uses the same equation&#8212;logarithmic compression&#8212;that our theoretical work predicts and that we observed in hippocampal time cells <a href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/2021.10.25.465750v1\">(Cao, et al., 2021)<\/a>.\u00a0 An active interest of the lab is integrating cognitive models that use logarithmically-compressed Laplace-domain representations into deep artificial neural networks.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Note: Marc is on leave from Boston University for academic years 2025-2026 and 2026-2027. We are not taking new students at this time. (Also note that this research description is a couple of years old.) The overarching goal of the lab&#8217;s work is to develop a physical theory of cognition.\u00a0 That is, an elegant set of equations [&hellip;]<\/p>\n","protected":false},"author":9217,"featured_media":0,"parent":0,"menu_order":5,"comment_status":"closed","ping_status":"closed","template":"","meta":[],"_links":{"self":[{"href":"https:\/\/sites.bu.edu\/tcn\/wp-json\/wp\/v2\/pages\/17"}],"collection":[{"href":"https:\/\/sites.bu.edu\/tcn\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.bu.edu\/tcn\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/tcn\/wp-json\/wp\/v2\/users\/9217"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/tcn\/wp-json\/wp\/v2\/comments?post=17"}],"version-history":[{"count":41,"href":"https:\/\/sites.bu.edu\/tcn\/wp-json\/wp\/v2\/pages\/17\/revisions"}],"predecessor-version":[{"id":938,"href":"https:\/\/sites.bu.edu\/tcn\/wp-json\/wp\/v2\/pages\/17\/revisions\/938"}],"wp:attachment":[{"href":"https:\/\/sites.bu.edu\/tcn\/wp-json\/wp\/v2\/media?parent=17"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}