News

Modeling Urgency Signals in Dorsal Premotor Cortex

By Liam Sennott, August 18th, 2020, in Research, Summer Projects

My name is Liam Sennott, and I am a rising senior at Boston University. I am a neuroscience major and a computer science minor. I worked in the Chand Lab remotely in the summer of 2020 through Boston University’s UROP program.

For my project, I created a mathematical model to help explain “urgency” signals in the dorsal premotor cortex (PMd). I did all of my work using MATLAB. 

The dorsal premotor cortex (PMd) is a brain area implicated in perceptual decision making. Prior work has shed light on how this brain region processes sensory evidence related to decision making. However, how an internal state such as urgency affects the decision-making process and how it manifests in this brain region is currently unclear.

To tackle this project, my goal was to build a model in MATLAB based on a recent study of decision making by Peter R. Murphy and his collaborators (Nat. Comm. 2016); this model would be adapted to test possible sources of urgency in PMd. To start, I implemented a simple “Lorenz attractor” system in MATLAB, which taught me the fundamentals of solving differential equations in MATLAB. The following figure displays the output of the solved Lorenz attractor system.

[Figure: output of the solved Lorenz attractor system]
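The project itself was done in MATLAB; as a rough, minimal sketch of the same exercise, here is one way to solve and plot the Lorenz system in Python with SciPy (standard textbook parameter values assumed):

    import numpy as np
    from scipy.integrate import solve_ivp
    import matplotlib.pyplot as plt

    def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # Right-hand side of the Lorenz system.
        x, y, z = state
        return [sigma * (y - x), x * (rho - y) - z, x * y - beta * z]

    # Integrate from an arbitrary initial condition.
    sol = solve_ivp(lorenz, t_span=(0, 40), y0=[1.0, 1.0, 1.0],
                    t_eval=np.linspace(0, 40, 10000))

    # Plot the classic butterfly-shaped trajectory.
    ax = plt.figure().add_subplot(projection="3d")
    ax.plot(sol.y[0], sol.y[1], sol.y[2], lw=0.5)
    ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
    plt.show()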

From there, I began building my model. The model assumes that decisions emerge from a competition between two accumulators that integrate evidence for each choice over time; the accumulator that wins the competition determines the decision generated by the model. The model contains a baseline input as well as a gain multiplier on the evidence for each choice. I chose to investigate two hypotheses about the source of urgency in PMd: 1) urgency is related to the baseline input to this brain region, and 2) urgency is related to the level of gain.
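To make the structure of the model concrete, here is a hedged Python sketch of a two-accumulator race with a baseline input and a gain multiplier on the evidence. All parameter values and the noise level are illustrative assumptions, not the values used in the project:

    import numpy as np

    def simulate_race(evidence=0.5, baseline=0.2, gain=1.0, threshold=1.0,
                      dt=1e-3, t_max=2.0, noise_sd=0.3, rng=None):
        # Euler simulation of two racing accumulators; returns (choice, RT).
        rng = np.random.default_rng(rng)
        x = np.zeros(2)                          # accumulators for the two choices
        drift = np.array([evidence, -evidence])  # evidence favors choice 0
        for step in range(int(t_max / dt)):
            # baseline input + gain-scaled evidence + diffusion noise
            x += (baseline + gain * drift) * dt \
                 + noise_sd * np.sqrt(dt) * rng.standard_normal(2)
            x = np.maximum(x, 0)                 # rates stay non-negative
            if x.max() >= threshold:             # first to threshold wins
                return int(np.argmax(x)), (step + 1) * dt
        return None, t_max                       # no decision before the deadline

    # Compare raising the baseline vs. raising the gain.
    for label, kwargs in [("baseline up", dict(baseline=0.8)),
                          ("gain up", dict(gain=3.0))]:
        rts = [simulate_race(rng=i, **kwargs)[1] for i in range(200)]
        print(label, "mean RT:", round(float(np.mean(rts)), 3))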

After testing the effects of altering the baseline input and the gain level, I found that the latter had a more dramatic effect on the choice-selective signal. When the gain increased, the choice-selective signal increased more rapidly (see figure below).

These two figures show that a notable difference between the two decision accumulators arises as the gain increases (right), but not as the baseline input increases (left). This suggests that the gain parameter could be emulating an urgency signal.

During my two months of work this summer, I learned a great deal about using MATLAB to solve differential equations, as well as about using a model to experiment with a simulated system. I also refined my MATLAB programming knowledge and my ability to produce quality plots. Finally, the project came with some difficulty because it was conducted remotely, but I am grateful to have had the opportunity to perform a research project from home.

Modeling neural activity: from cells to laminae to brain areas

By Morgane Butler, August 7th, 2020

My name is Morgane Butler, and I am a first-year PhD student rotating in the Chand Lab (remotely!) for the summer. I began the project with no experience in MATLAB and little experience in computational neuroscience. My hope was to improve my MATLAB skills, and that I did!

Dr. Chand had me start out with a simple model of predator-prey interactions called the Lotka-Volterra model. I used this exercise to understand important MATLAB basics and wrote an Euler method solver for the system of differential equations representing the model.
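The solver itself was written in MATLAB; as an illustrative sketch of the same Euler scheme, here is the predator-prey system in Python (the rate constants are assumed, not the ones used in the rotation):

    import numpy as np
    import matplotlib.pyplot as plt

    alpha, beta, delta, gamma = 1.1, 0.4, 0.1, 0.4   # assumed rate constants
    dt, t_max = 1e-3, 50.0
    n = int(t_max / dt)

    prey = np.empty(n); pred = np.empty(n)
    prey[0], pred[0] = 10.0, 10.0

    for k in range(n - 1):
        # dx/dt = alpha*x - beta*x*y ; dy/dt = delta*x*y - gamma*y
        prey[k + 1] = prey[k] + dt * (alpha * prey[k] - beta * prey[k] * pred[k])
        pred[k + 1] = pred[k] + dt * (delta * prey[k] * pred[k] - gamma * pred[k])

    t = np.arange(n) * dt
    plt.plot(t, prey, label="prey"); plt.plot(t, pred, label="predator")
    plt.xlabel("time"); plt.ylabel("population"); plt.legend(); plt.show()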

Next, I turned to “Feedforward and feedback frequency-dependent interactions in a large-scale laminar network of the primate cortex” (Mejias et al. 2016). Based on empirical data, this paper builds models of neuronal activity across the layers of the cortex and scales them up to feedforward and feedback connections between brain areas such as V1 and V4.

[Figure: recreated oscillatory activity from the laminar model]

Using the code from my Euler method solver and the differential equations provided in the supplementary materials, I recreated the models of oscillatory activity, as seen above. With a bit of troubleshooting and Dr. Chand's advice, I also conducted a power analysis of the oscillatory activity using a multitaper analysis from the Chronux MATLAB toolbox (http://chronux.org/). By modulating external inputs to the "neurons" I was able to show the behavior of the model in various conditions and across anatomical scales. I first built E-I models that replicate the high-frequency oscillatory activity in layer 2/3 and the low-frequency oscillatory activity in layer 5/6 from the paper. I then scaled up to connect the supragranular (above layer 4) and infragranular (below layer 4) layers. Finally, I connected area V1 to area V4, accounting for the anatomical feedforward and feedback projections between the areas. Notice that in the power spectrum of connected layer 2/3 (top right, below) there is an emergence of activity reminiscent of layer 5/6 in the 3-12 Hz (alpha) band.
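As a rough illustration of the two pieces involved, the sketch below simulates a single noise-driven E-I pair and estimates its power spectrum with a hand-rolled multitaper (DPSS) average. The time constants and coupling weights are illustrative values loosely patterned on the laminar model, not the exact published ones, and the original analysis used the Chronux toolbox in MATLAB rather than this Python stand-in:

    import numpy as np
    from scipy.signal.windows import dpss

    dt, t_max = 0.2e-3, 10.0                 # 0.2 ms Euler step, 10 s of data
    n = int(t_max / dt)
    tau = np.array([6e-3, 15e-3])            # fast E-I pair -> gamma-band rhythm
    J = np.array([[1.5, -3.25],
                  [3.5, -2.5]])              # [[E<-E, E<-I], [I<-E, I<-I]]
    I_ext = np.array([8.0, 6.0])             # assumed external drive
    sigma = 0.005                            # small noise keeps rates near-linear

    rng = np.random.default_rng(0)
    r = np.zeros(2)
    trace = np.empty(n)
    for k in range(n):
        drive = J @ r + I_ext + sigma * rng.standard_normal(2) / np.sqrt(dt)
        drive = np.maximum(drive, 0)         # threshold-linear transfer
        r += dt * (-r + drive) / tau
        trace[k] = r[0]                      # record the excitatory rate

    # Multitaper PSD: average periodograms over orthogonal DPSS tapers.
    x = trace - trace.mean()
    tapers = dpss(n, NW=4, Kmax=7)           # 7 tapers, time-bandwidth NW=4
    psd = (np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2).mean(axis=0)
    freqs = np.fft.rfftfreq(n, d=dt)
    mask = freqs > 1                         # skip the DC bin
    print("spectral peak (Hz):", freqs[mask][np.argmax(psd[mask])])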

Thoughtfully creating models such as this one allows us to form and develop hypotheses when we do experiments in the wet lab. If you are interested in looking deeper into my code and reading a more complete write-up please refer to my GitHub.

I can now confidently say that MATLAB is something I will use in my near future and I had a great time learning this new skill and growing as a scientist in the Chand Lab!

Dimensionality reduction for neural data analysis

By Munib Hasnain, June 18th, 2020

My name is Munib Hasnain and I am a first-year PhD student in the Biomedical Engineering Department at Boston University. I rotated in the Chand Lab during April and May 2020, where I learned about and applied dimensionality reduction techniques to neural data.

Advances in recording techniques have enabled neuroscientists to record activity from large populations of neurons simultaneously. This has enabled researchers to formulate population-level hypotheses and has expanded the sorts of analyses that can be performed. One form of analysis is dimensionality reduction, in which we seek low-dimensional representations of high-dimensional data: the idea being that there may be shared activity or mechanisms among a population of neurons that can describe the entire population. Dimensionality reduction methods extract these explanatory or latent variables and discard unexplained variance as noise. There are many methods to perform dimensionality reduction, but not all are suitable for neural data. During my rotation, I applied four different methods to understand their applicability to neural data and to get a sense of what it takes to implement them.

The first method I looked at was Principal Component Analysis (PCA). PCA seeks the directions of maximal variance in a dataset. All principal components are orthogonal to each other and are simply the eigenvectors of the data covariance matrix. Dimensionality reduction is achieved by projecting the original data onto some number of the principal components. Although PCA is a powerful technique, it is generally only applied to trial-averaged data, as it does not have a noise component. The lack of a noise model hinders the technique's ability to capture the true dynamics of the neural population, since neurons may contain both shared and independent variance.
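As a minimal illustration of this recipe, here is PCA from scratch in Python on toy data standing in for a neurons-by-time matrix (the latent structure is fabricated for the example):

    import numpy as np

    rng = np.random.default_rng(0)
    # Toy data: 50 "neurons" driven by 3 shared latent signals plus noise.
    latents = rng.standard_normal((3, 1000))
    weights = rng.standard_normal((50, 3))
    X = weights @ latents + 0.1 * rng.standard_normal((50, 1000))

    Xc = X - X.mean(axis=1, keepdims=True)    # center each neuron
    cov = Xc @ Xc.T / Xc.shape[1]             # neurons x neurons covariance
    evals, evecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    order = np.argsort(evals)[::-1]
    pcs = evecs[:, order[:3]]                 # top 3 principal components

    Z = pcs.T @ Xc                            # 3 x time latent trajectories
    print(f"variance explained: {evals[order[:3]].sum() / evals.sum():.2%}")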

Factor Analysis (FA), on the other hand, can be thought of as similar to PCA, but it contains an explicit noise model that is capable of separating the shared variance of neurons from the independent variance, making it more suitable for single-trial data analysis. The goal of applying PCA and FA is exactly the same, but FA has been explicitly designed to identify latent variables, whereas PCA provides an approximation to those latent variables.
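A hedged scikit-learn sketch of the contrast: FA fits a per-neuron noise variance that separates private variability from the shared latents, which PCA has no way to express. The toy data and component count are illustrative:

    import numpy as np
    from sklearn.decomposition import PCA, FactorAnalysis

    rng = np.random.default_rng(0)
    latents = rng.standard_normal((1000, 3))           # time x latent factors
    loading = rng.standard_normal((3, 50))             # latents -> 50 neurons
    noise_sd = rng.uniform(0.1, 1.0, size=50)          # unequal private noise
    X = latents @ loading + noise_sd * rng.standard_normal((1000, 50))

    pca = PCA(n_components=3).fit(X)
    fa = FactorAnalysis(n_components=3).fit(X)

    # FA recovers each neuron's private noise variance; PCA has no such term.
    print("true noise variances (first 5):", (noise_sd ** 2)[:5].round(2))
    print("FA noise_variance_ (first 5):  ", fa.noise_variance_[:5].round(2))
    Z = fa.transform(X)                                # latent estimates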

The next technique I looked at was Gaussian Process Factor Analysis (GPFA). When applying PCA or FA to neural data, the data must be smoothed over time before dimensionality reduction. GPFA combines smoothing and dimensionality reduction into a unified process, ensuring that the degree of smoothness and the relationship between latent and observed activity are optimized together. GPFA has been applied extensively to neural data and seems to be one of the more popular techniques for extracting latent neural trajectories.
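For contrast, the pre-smoothing step that PCA and FA require can be as simple as the sketch below (the bin and kernel widths are illustrative choices); GPFA instead learns the smoothness of each latent jointly with the mapping:

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    rng = np.random.default_rng(1)
    counts = rng.poisson(lam=0.4, size=(50, 200))   # neurons x 20 ms bins
    sigma_bins = 50 / 20                            # 50 ms kernel, in bin units
    rates = gaussian_filter1d(counts.astype(float), sigma=sigma_bins, axis=1)
    # 'rates' can now be passed to PCA or FA as in the sketches above.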

Lastly, I looked at Variational Latent Gaussian Processes (vLGP). This technique is similar to GPFA in that smoothing and dimensionality reduction are combined into a single process. vLGP differs from GPFA in that it assumes neural spiking follows a point process, which can be seen as preserving information between firing-rate bins that may be lost in techniques such as GPFA. Both GPFA and vLGP are suitable for single-trial neural data analysis.

My time in the Chand Lab was extremely valuable and I hope to be able to continue learning and applying large-scale neural data techniques to answer interesting questions about behavior and motor control. If you are interested in seeing the implementation of these techniques to both simulated and real data, you can find my code on GitHub.

openEyeTrack – A high speed multi-threaded eye tracker for head-fixed applications

By Chandramouli Chandrasekaran, April 23rd, 2020

Check out our new tool, openEyeTrack, a low-cost, open-source, high-speed eye tracker for tracking eye position in head-fixed applications. This was a summer project for a talented undergraduate in the lab, Jorge Paolo Casas. We were able to architect a multi-threaded eye tracker using OpenCV, a Teledyne DALSA camera, and C++. We are currently running behavioral experiments to validate the eye tracker for general use in psychophysical experiments and to derive estimates of accuracy. This work was inspired by the Oculomatic paper by Jan Zimmermann et al. (2016).

  1. Paper is available here.
  2. Code is available here.
  3. The archived v1.0.0 version of the software is available on Zenodo.

References

Zimmermann J, Vazquez Y, Glimcher PW, Pesaran B, Louie K (2016). Oculomatic: High speed, reliable, and accurate open-source eye tracking for humans and non-human primates. Journal of Neuroscience Methods, 270, 138-146.


Lab receives NARSAD funding, and Chand a Moorman-Simon Career Development Professorship

By Chandramouli Chandrasekaran, February 1st, 2020

We had a reasonably productive and successful year in 2019.

  • Karen Marmon joined the lab. She comes to us from the Salzmann lab and will be a research tech with us and lab manager for Jerry Chen.
  • Kenji Lee joined us as a graduate student. He comes from the Allen Institute and brings a lot of experience!
  • Hymavathy Balasubramanian completed her lab rotation with us. We will miss her. She did some awesome work on understanding waveform properties and how they change across cortical laminae.
  • Michael Kleinman, a graduate student at UCLA with Prof. Jonathan Kao, presented our collaborative work at Computational Cognitive Neuroscience 2019 (CCN 2019) (link). He won a travel award for his work!
  • The lab received a NARSAD Young Investigator Grant from the Brain and Behavior Research Foundation.
  • Chand was awarded a Moorman-Simon Career Development Professorship which provides more support for the lab. Thanks to Ruth Moorman and Sheldon Simon for the support of my lab and BU!
  • Our CHaRTr paper is also in press. Stay tuned for the final PDF. If you want to try the code out, it is available here :)!
  • Brenna Lee received a UROP award to continue her research from the summer. Congratulations Brenna!


Assessing lab videos of animal behavior using deep learning

By Brenna Lee, September 1st, 2019, in Summer Projects

My name is Brenna Lee and I’m about to start my senior year in biomedical engineering at Boston University. I have been very involved at BU, as I have worked as a tour guide, tutor, and Learning Assistant, and I’m currently the president of BU’s Irish dance team. This summer I finally got involved in research at BU in the Chand Lab in the Department of Anatomy & Neurobiology and the Department of Psychological and Brain Sciences. 

My summer project involved writing code to classify and assess behavior using experimentally obtained videos of animals from the lab performing tasks. I ran these videos through DeepLabCut, the Python toolbox from the Mathis Lab at Harvard, which uses deep learning to track the positions of specified animal body parts and produce labeled videos. I was then able to analyze the tracking data and make calculations by writing Python code in a Jupyter notebook.
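As a rough sketch of this kind of analysis, the snippet below reads DeepLabCut's CSV output (three header rows: scorer, body part, coordinate) with pandas and computes a simple speed estimate. The file name, body part, likelihood cutoff, and frame rate are all hypothetical:

    import numpy as np
    import pandas as pd

    df = pd.read_csv("video_labeled.csv", header=[0, 1, 2], index_col=0)
    scorer = df.columns.get_level_values(0)[0]      # the trained network's name
    paw = df[scorer]["forepaw"]                     # columns: x, y, likelihood

    # Drop low-confidence frames before computing kinematics.
    good = paw["likelihood"] > 0.9
    x = paw["x"][good].to_numpy()
    y = paw["y"][good].to_numpy()

    fps = 30.0                                      # assumed camera frame rate
    speed = np.hypot(np.diff(x), np.diff(y)) * fps  # pixels per second
    print("mean speed (px/s):", speed.mean())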

To start my project, I first had to install DeepLabCut, which I struggled with at first. I only knew how to code in MATLAB, and I had no experience yet with Linux, Python, or the other software involved in using DeepLabCut. However, I was able to resolve the issues simply by verifying that I had the correct corresponding versions of each piece of software installed. I then figured out how to use DeepLabCut most efficiently by practicing on stock videos of animals before beginning my project. When I first started working on the videos from the lab, I had a few challenges with the analysis because coding in Python was still new to me. There were some errors with indexing, but these were easy to fix. I also came across issues with keeping the code generalized: I had to make sure the code could work on all of the videos I ran it on, rather than just the one I was testing. However, I liked working through these issues because I got to try different approaches and determine which was best and most efficient.

Being able to work with Chand this summer allowed me to learn a lot, and I've gained so much experience that will definitely be beneficial when I start looking into graduate school. I learned how to code in a new programming language and work in an operating system I hadn't used before. I also learned what deep learning is and how it works. I feel much more confident in my programming abilities and problem-solving skills, and I look forward to returning to the lab in the fall.

Building an open source eye tracker

By Jorge Casas, July 16th, 2019, in Research, Summer Projects

My name is Paolo Casas, and this summer I had the opportunity to work at the Chand Lab in the Department of Anatomy & Neurobiology and the Department of Psychological and Brain Sciences at Boston University. I am a rising junior studying biomedical engineering and computer science. I was a member of the varsity swim team at BU for two years, and this upcoming fall I will be spending the semester abroad at the National University of Singapore.

My two-month summer project was to develop a C++ eye-tracking application that Chand and other scientists would be able to use in neuroscience and psychology experiments. Commercial eye trackers are available for purchase but can cost upwards of $10,000. Furthermore, commercial eye trackers often incorporate proprietary software, which limits the researcher's ability to get "under the hood" and modify the program for their specific needs. Luckily for me, I was able to take advantage of prebuilt APIs and libraries that I could incorporate into my code.

At first, I thought this would be a relatively easy task because I didn't need to code everything from scratch and could simply call the blob detection function from OpenCV (a computer vision library) on the frames. However, building a real-time system proved much more difficult than initially expected, as I had to deal with issues concerning type conversions, memory limitations and hierarchy, buffer overflows, compiling, and linking. Despite that, I was able to get a working version of my eye tracker up and running relatively quickly. The only problem was that, for Chand's purposes and research purposes in general, the rate at which the camera captured and processed images was far too slow. Thus, to speed up the application, I created a threaded model that can keep up with the 728 fps capture rate of the camera.
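The real application is written in C++ against the camera vendor's SDK, but the producer/consumer idea can be sketched in a few lines of Python with OpenCV: one thread grabs frames while another runs blob detection, so capture never stalls on processing. The camera source and detector settings here are illustrative:

    import queue
    import threading
    import cv2

    frames = queue.Queue(maxsize=64)

    def capture(source=0):
        # Producer: grab frames as fast as the camera delivers them.
        cap = cv2.VideoCapture(source)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames.put(frame)
        frames.put(None)                        # sentinel: end of stream

    def detect():
        # Consumer: find the pupil as the largest dark blob in each frame.
        params = cv2.SimpleBlobDetector_Params()
        params.filterByArea = True
        params.minArea = 100                    # ignore tiny specks
        detector = cv2.SimpleBlobDetector_create(params)
        while True:
            frame = frames.get()
            if frame is None:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            keypoints = detector.detect(gray)
            if keypoints:
                pupil = max(keypoints, key=lambda k: k.size)
                print("pupil at", pupil.pt)

    threading.Thread(target=capture, daemon=True).start()
    detect()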

Although I only spent two months working with Chand, I learned a lot and was able to further deepen my knowledge of computer science and software development as applied to neuroscience and psychology. Some of the things I picked up during my time with Chand include creating a makefile, becoming familiar with GitHub, working in a Linux system, creating shared memory spaces, setting up data transmission sockets, using mutex locks, writing markdown and bib files, and, most notably, creating a multi-threaded application. Currently, the software is pending submission to the Journal of Open Source Software so that others may also benefit from this low-cost but effective eye tracker. Working with Chand has been a very rewarding experience, and I am excited to visit the lab in the spring once the lab is fully up and running.

[Figure: example of eye tracking in progress]

New paper in Journal of Mathematical Psychology

Our first paper from the new lab, “Audiovisual detection at different intensities and delays,” has been accepted into the Journal of Mathematical Psychology.

Chandrasekaran C, Gondan MG. Audiovisual detection at different intensities and delays. In press, Journal of Mathematical Psychology; see the bioRxiv preprint (link).

In this paper, we modeled the accuracy (i.e., hit rate) and response times of monkeys performing a multisensory detection task at various intensities and delays. We found that the classical Wiener diffusion superposition model failed to describe the behavior of the monkeys but expanding this model to include a deadline could explain both the accuracy and RTs. This model outperformed other models we considered. If you are interested in this question, please look at the paper. A preprint is available on bioRxiv.
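As a schematic illustration of the deadline idea (not the actual model in the paper), the sketch below simulates a one-boundary Wiener diffusion in which accumulation that has not reached the bound by the deadline counts as a miss, so the deadline shapes both the hit rate and the RT distribution. All parameter values are illustrative:

    import numpy as np

    def detection_trial(drift, bound=1.0, deadline=1.5, dt=1e-3,
                        noise_sd=1.0, rng=None):
        # Return the RT if the diffusion hits the bound before the deadline.
        rng = np.random.default_rng(rng)
        x, t = 0.0, 0.0
        while t < deadline:
            x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
            t += dt
            if x >= bound:
                return t
        return None                              # deadline passed: a miss

    for drift in (0.5, 1.0, 2.0):                # stand-ins for stimulus intensity
        rts = [detection_trial(drift, rng=i) for i in range(500)]
        hits = [rt for rt in rts if rt is not None]
        print(f"drift {drift}: hit rate {len(hits) / 500:.2f}, "
              f"mean RT {np.mean(hits):.3f} s")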