Check out our new tool, openEyeTrack, a low-cost, open-source, high-speed eye tracker for head-fixed applications. This was a summer project for Jorge Paolo Casas, a talented undergraduate in the lab. We architected a multi-threaded eye tracker using OpenCV, a Teledyne DALSA camera, and C++. We are currently running behavioral experiments to validate the eye tracker for general use in psychophysical experiments and to derive estimates of its accuracy. This work was inspired by the Oculomatic paper by Jan Zimmermann et al.
- The paper is available here.
- The code is available here.
- The archived v1.0.0 version of the software is available on Zenodo.
Zimmermann J, Vazquez Y, Glimcher PW, Pesaran B, Louie K (2016). Oculomatic: High speed, reliable, and accurate open-source eye tracking for humans and non-human primates. Journal of Neuroscience Methods, 270, 138-146.
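The core idea behind this style of eye tracking is simple: in an infrared image the pupil is much darker than its surroundings, so finding it amounts to thresholding the frame and locating the dark blob. openEyeTrack itself does this in C++ with OpenCV's blob detector; the pure-Python sketch below (with a made-up `pupil_centroid` helper and a synthetic frame) only illustrates the idea.

```python
# Conceptual sketch of dark-pupil detection: threshold the frame and
# take the centroid of the dark pixels. openEyeTrack itself uses
# OpenCV's blob detection in C++; this illustrative version runs on a
# synthetic grayscale frame (a list of lists of pixel intensities).

def pupil_centroid(frame, threshold=50):
    """Return the (row, col) centroid of pixels darker than
    `threshold`, or None if no pixel qualifies."""
    rows = cols = count = 0
    for r, line in enumerate(frame):
        for c, px in enumerate(line):
            if px < threshold:
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None
    return rows / count, cols / count

# Synthetic 5x5 frame: bright background (200) with a dark "pupil" (10)
frame = [[200] * 5 for _ in range(5)]
frame[2][2] = frame[2][3] = frame[3][2] = frame[3][3] = 10

print(pupil_centroid(frame))  # (2.5, 2.5)
```

In the real system this computation has to run on every frame from a high-speed camera, which is what motivates the multi-threaded design described above.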
If you are interested in arbitrating between DDMs, urgency-gating, or collapsing-bound models to describe the behavior of observers in perceptual decision-making tasks, then check out our CHaRTr paper and accompanying toolbox (github.com/mailchand/CHaRTr).
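To make the model families concrete: a standard DDM accumulates noisy evidence to a fixed bound, while a collapsing-bound model shrinks that bound over time. CHaRTr is an R toolbox, so the Python sketch below (with illustrative parameter names, not CHaRTr's actual API) just shows what a single simulated trial of these models looks like.

```python
# Minimal sketch of one trial of a drift-diffusion model (DDM),
# optionally with a linearly collapsing bound -- the kind of model
# variant toolboxes like CHaRTr let you fit. Parameter names are
# illustrative only.
import random

def ddm_trial(drift, bound, dt=0.001, noise=1.0,
              collapse_rate=0.0, max_t=5.0, rng=random):
    """Simulate one trial; return (choice, reaction_time).
    collapse_rate > 0 makes the bound shrink linearly over time."""
    x, t = 0.0, 0.0
    while t < max_t:
        b = max(bound - collapse_rate * t, 0.0)  # current bound
        if x >= b:
            return 1, t      # upper-bound choice
        if x <= -b:
            return 0, t      # lower-bound choice
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return None, max_t       # no decision within the trial

random.seed(0)
choice, rt = ddm_trial(drift=1.5, bound=1.0)
print(choice, round(rt, 3))
```

Arbitrating between model families then amounts to fitting each variant to the observed choices and RT distributions and comparing them with a criterion such as AIC or BIC.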
We have had a reasonably productive and successful September.
- Karen Marmon joined the lab. She comes to us from the Salzmann lab and will be a research tech with us and lab manager for Jerry Chen.
- Kenji Lee joined us as a graduate student. He comes from the Allen Institute and brings a lot of experience!
- Hymavathy Balasubramanian completed her lab rotation with us, doing some awesome work on understanding waveform properties and how they change across cortical laminae. We will miss her.
- Michael Kleinman, a graduate student at UCLA working with Prof. Jonathan Kao, presented our collaborative work at the Conference on Cognitive Computational Neuroscience 2019 (CCN 2019) (link). He won a travel award for his work!
- The lab received a NARSAD Young Investigator Grant from the Brain and Behavior Research Foundation.
- Chand was awarded a Moorman-Simon Career Development Professorship, which provides more support for the lab. Thanks to Ruth Moorman and Sheldon Simon for their support of the lab and BU!
- Our CHaRTr paper is also in press. Stay tuned for the final PDF. If you want to try the code out, it is available here :)!
- Brenna Lee received a UROP award to continue her research from the summer. Congratulations Brenna!
My name is Brenna Lee and I’m about to start my senior year in biomedical engineering at Boston University. I have been very involved at BU, as I have worked as a tour guide, tutor, and Learning Assistant, and I’m currently the president of BU’s Irish dance team. This summer I finally got involved in research at BU in the Chand Lab in the Department of Anatomy & Neurobiology and the Department of Psychological and Brain Sciences.
My summer project involved writing code to classify and assess behavior using experimentally obtained videos of animals in the lab performing tasks. I analyzed these videos with DeepLabCut, the Python toolbox from the Mathis Lab at Harvard, which uses deep learning to track the positions of specific animal body parts and produce labeled videos. I was then able to analyze the tracked data and make calculations by writing Python code in a Jupyter notebook.
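DeepLabCut reports each tracked body part as per-frame (x, y) coordinates with a likelihood score, and downstream analysis works from those numbers. The sketch below, with made-up data and an illustrative `speeds` helper (not a DeepLabCut function), shows the kind of Jupyter-notebook calculation this enables: frame-to-frame speed, skipping low-confidence points.

```python
# Sketch of post-tracking analysis on DeepLabCut-style output:
# per-frame (x, y, likelihood) rows for one body part. The data
# below are made up for illustration.

def speeds(track, fps=30.0, min_likelihood=0.9):
    """Per-frame speed (pixels/second) from (x, y, likelihood) rows,
    emitting None where either endpoint is low-confidence."""
    out = []
    for (x0, y0, p0), (x1, y1, p1) in zip(track, track[1:]):
        if p0 < min_likelihood or p1 < min_likelihood:
            out.append(None)               # unreliable tracking
        else:
            dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
            out.append(dist * fps)
    return out

track = [(0.0, 0.0, 0.99), (3.0, 4.0, 0.98), (3.0, 4.0, 0.40)]
print(speeds(track))  # [150.0, None]
```

Filtering on the likelihood column is the standard guard against frames where the network lost the body part.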
To start my project, I first had to install DeepLabCut, which I struggled with at first. I only knew how to code in MATLAB and had no experience with Linux, Python, or the other software involved in using DeepLabCut. However, I was able to resolve the issues simply by verifying that I had the correct corresponding versions of each piece of software installed. I then figured out how to use DeepLabCut most efficiently by practicing on stock videos of animals before beginning my project. When I first started working on the videos from the lab, I had a few challenges with the analysis because coding in Python was still new to me. There were some errors with indexing, but these were easy to fix. I also ran into issues keeping the code generalized: I had to make sure it would work on every video I ran it on, rather than just the one I was testing. However, I liked working through these issues because I got to try different approaches and determine which was best and most efficient.
Being able to work with Chand this summer allowed me to learn a lot, and I’ve gained so much experience that will definitely be beneficial when I start looking into graduate school. I learned how to code in a new programming language and work in an operating system I hadn’t used before. I also learned what deep learning is and how it works. I feel much more confident in my programming and problem-solving skills, and I look forward to returning to the lab in the fall.
My name is Paolo Casas, and this summer I had the opportunity to work at the Chand Lab in the Department of Anatomy & Neurobiology and the Department of Psychological and Brain Sciences at Boston University. I am a rising junior studying biomedical engineering and computer science. I was a member of the varsity swim team at BU for two years, and this upcoming fall I will spend the semester abroad at the National University of Singapore.
My two-month summer project was to develop a C++ eye-tracking application that Chand and other scientists could use in neuroscience and psychology experiments. Commercial eye trackers are available for purchase but can cost upwards of $10,000. Furthermore, commercial eye trackers often incorporate proprietary software, which limits a researcher’s ability to get “under the hood” and modify the program for their specific needs. Luckily for me, I was able to take advantage of prebuilt APIs and libraries that I could incorporate into my code.
At first, I thought this would be a relatively easy task because I didn’t need to code everything from scratch and could just call the blob-detection function from OpenCV (a computer vision library) on the frames. However, building a real-time system proved much more difficult than expected, as I had to deal with issues concerning type conversions, memory limitations and hierarchy, buffer overflows, compiling, and linking. Despite that, I got a working version of my eye tracker up and running relatively quickly. The only problem was that, for Chand’s purposes and research purposes in general, the rate at which the camera captured and processed images was far too slow. To speed up the application, I created a threaded model that can keep up with the 728 fps capture rate of the camera.
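The threading fix described above is essentially a producer-consumer pipeline: one thread grabs frames as fast as the camera delivers them into a bounded buffer, while a separate thread does the slower detection work. openEyeTrack itself is written in C++; the Python sketch below, using fake integer "frames" and a stand-in for blob detection, only illustrates the pattern.

```python
# Producer-consumer sketch of the threaded acquisition/processing
# split. A bounded queue decouples the fast camera thread from the
# slower detection thread; None is used as an end-of-stream sentinel.
import queue
import threading

frames = queue.Queue(maxsize=64)   # bounded buffer between threads
results = []

def acquire(n_frames):
    """Producer: simulates the camera thread pushing frames."""
    for i in range(n_frames):
        frames.put(i)
    frames.put(None)               # sentinel: no more frames

def process():
    """Consumer: simulates the detection thread."""
    while True:
        frame = frames.get()
        if frame is None:
            break
        results.append(frame * 2)  # stand-in for blob detection

producer = threading.Thread(target=acquire, args=(100,))
consumer = threading.Thread(target=process)
producer.start(); consumer.start()
producer.join(); consumer.join()
print(len(results))  # 100
```

The bounded queue is the key design choice: it lets acquisition run at camera speed while applying backpressure if processing ever falls behind, instead of overflowing a buffer.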
Although I only spent two months working with Chand, I learned a lot and was able to further deepen my knowledge of computer science and software development as applied to neuroscience and psychology. To name a few, some of the things I picked up during my time with Chand are: how to create a makefile, becoming familiar with GitHub, working in a Linux system, creating shared memory spaces, setting up data-transmission sockets, using mutex locks, writing markdown and bib files, and, most notably, creating a multi-threaded application. The software is currently under submission at the Journal of Open Source Software so that others may also benefit from this low-cost but effective eye tracker. Working with Chand has been a very rewarding experience, and I am excited to visit the lab in the spring once it is fully up and running.
Our first paper from the new lab, "Audiovisual detection at different intensities and delays," has been accepted at the Journal of Mathematical Psychology.
Chandrasekaran C, Gondan MG. Audiovisual detection at different intensities and delays. In press at the Journal of Mathematical Psychology; see the bioRxiv preprint (link).
In this paper, we modeled the accuracy (i.e., hit rate) and response times of monkeys performing a multisensory detection task at various intensities and delays. We found that the classical Wiener diffusion superposition model failed to describe the behavior of the monkeys, but that expanding this model to include a deadline could explain both the accuracy and the RTs. This model outperformed the other models we considered. If you are interested in this question, please look at the paper. A preprint is available on bioRxiv.
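The role of the deadline is worth making concrete: in a diffusion model of detection, evidence drifts toward a bound, and a deadline turns trials that have not reached the bound in time into misses, so a single process constrains both hit rate and RTs. The sketch below is a generic illustration with made-up parameters, not the superposition model or the fitted values from the paper.

```python
# Sketch of a Wiener diffusion detection process with a response
# deadline: trials that reach the bound in time are hits; trials
# that run out of time are misses. Parameters are illustrative only.
import random

def detection_trial(drift, bound=1.0, deadline=1.0, dt=0.001,
                    noise=1.0, rng=random):
    """Return (hit, rt): hit is True if the bound is reached before
    the deadline; rt is the detection time (or the deadline)."""
    x, t = 0.0, 0.0
    while t < deadline:
        if x >= bound:
            return True, t
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return False, deadline

random.seed(1)
n = 500
hits = [detection_trial(drift=2.0)[0] for _ in range(n)]
print(round(sum(hits) / n, 2))   # hit rate < 1 because of the deadline
```

In the superposition idea, the auditory and visual evidence streams add (after the stimulus delay), raising the effective drift on bimodal trials and hence both speeding RTs and increasing hits under the same deadline.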
If you are like me and obsessed with understanding the reaction times and choice behavior of monkeys performing cognitive tasks, then check out our new toolbox, CHaRTr, which implements a wide range of decision-making models to describe behavior.
Matt Golub, a collaborator of mine from Stanford, will be presenting our work titled "Joint neural-behavioral models of perceptual decision making" at COSYNE. Matt has developed a new framework for training RNNs to model both neural data and behavior simultaneously.
Our new study, "Macaque dorsal premotor cortex exhibits decision-related activity only when specific stimulus-response associations are known," has been accepted for publication at Nature Communications. Please see an earlier version of the manuscript on bioRxiv (link).
The published article is now available on the Nature Communications website (link to the paper).
My new paper on beta-band activity in dorsal premotor cortex, coauthored with Iliana Bray and Krishna Shenoy, is now available! See link.