Most people are familiar with the idea that those who are blind have better hearing than those with normal vision. It was formerly thought that this compensation for lack of vision could only develop in the brains of the very young. However, new research conducted at the University of Maryland and Johns Hopkins University suggests that the brain may be more flexible than previously believed.
In the study, researchers kept one group of healthy mice in total darkness for a week, and exposed the other group to natural light for a week. Then the team used electrodes to measure activity in neurons in the mice’s primary auditory cortex. This is the part of the brain that processes how loud a sound is and where it comes from. By analyzing this data, researchers found that the mice that were exposed to a week of darkness had much better hearing than the control mice.
This suggests that the circuits that process sensory information can be re-wired in the brains of adult mice, even after the early critical period for hearing. These findings seem to contradict the idea that once the critical period for hearing is past, the auditory system doesn’t respond to changes in an individual’s soundscape.
In my vision modeling class this week, we were learning about the structure of the (primate) visual cortex and one of my classmates posed an interesting question: how is it that birds sustain such amazing visual acuity when they don’t seem to have the cortical volume to process that detailed information? In other words, how does a bird brain deal with a bird’s eye view? I was curious – and I still am, because so far I have not found a lot of research on the topic. Indeed, I imagine it’s difficult to come up with a definitive way to determine what a bird is experiencing for the sake of a laboratory experiment. Although, if I had to hazard a guess, perhaps much of a bird’s reaction to what it sees relies on more primitive structures – maybe birds rely more on instinct than interpretation? While this seems to remain mysterious, scientists do know some neat stuff about how birds’ eyes function in ways that allow them to see what we can’t. Check it out!
Dr. Frank Werblin at UC Berkeley has dedicated nearly his entire academic life to the study of the eye and visual processing. More recently, Dr. Werblin has completed his model of the retinal processing system, which he has dubbed “The Retinal Hypercircuit”. The Hypercircuit itself is made up of the five classical retinal cell types – Photoreceptor, Horizontal, Bipolar, Amacrine and Retinal Ganglion Cells – but a collaborative effort has since identified over 50 morphologically different cell types. Of this vast array of unique cell types, the most variance falls in the morphology of the Amacrine cells, which offer horizontal properties in the Inner Plexiform Layer between the Bipolar and Ganglion Cells. Although the mechanics behind the Hypercircuit are fascinating, what I find arguably more important is the output of the system, a topic which Werblin has indirectly stumbled upon, but which I believe could potentially lead to an incredibly progressive line of research. More
The mantis shrimp diverged evolutionarily from the crustacean mainline about 400 million years ago and has since developed unique characteristics. Unlike most other crustaceans, mantis shrimp actively hunt prey and kill it with a crushing blow which has been theorized to be strong enough to create bubbles containing gas at temperatures upwards of 2000 Kelvin. This quality, however, is nowhere near as stunning as the mantis shrimp’s most incredible attribute: their eyes. In April 2001, the most comprehensive paper to date describing the mantis shrimp’s visual system was published by Justin Marshall and Thomas Cronin in The Biological Bulletin. In their paper, the authors described the unusual characteristics of the mantis shrimp visual system and hypothesized the applications of this system in the development of machine vision. More
“As I closed my eyes, images – if they can be called such – began racing at an ever-increasing speed before me. Swirls of colors, shapes, forms, textures and sounds simply overpowered me to the point where I became immobile. Like many others before me, no doubt, I became somewhat frightened. What had I let myself in for? When I opened my eyes, the phantasmagoria of forms vanished, and I saw myself in the same room with the others.”
Donald M. Topping’s description is very similar to the accounts many others have given. He raised many questions about the vividness of the visions produced after his very first ingestion of the hallucinogenic brew Ayahuasca. What underlying brain mechanisms allow potentially healing, uplifting and fearful experiences to occur behind closed eyelids? That is what Draulio B. de Araujo and others set out to discover. More
Most would agree that the most important of our basic senses is sight. Without it, many basic forms of communication fall apart, the vibrance of the world around us dulls, and our understanding and ability to sense the complexity of the physical world diminishes. Without the ability to see, it would logically be impossible to portray our surroundings artistically in a coherent and visually realistic manner…
For patients who have lost their sight to various eye diseases, artificial retina technology allows them to experience limited vision once more.
The external parts of the artificial retina device include glasses with a mounted camera and a small computer.
The device also includes an electrode array implanted onto the patient’s retina. When the camera “sees” an image, the computer translates it into a pattern of neural signals. This pattern is then transmitted to the implanted electrodes, which directly stimulate the optic nerve. These signals can then be processed by the brain and interpreted as very rudimentary images.
The first artificial retina to be implanted in a patient, known as Argus I, included only sixteen electrodes that stimulated the optic nerve. However, the patient with this implant was still able to tell the difference between light and dark, and could make out basic shapes. The newer version of the technology, Argus II, now includes sixty electrodes. However, it is still limited in that patients can only tell the difference between light and dark areas, and can only see shapes, outlines, and blurs rather than detailed images. Regardless, this is a large improvement over no sight at all; patients with the implant are satisfied with even a partial return of their vision, and are hopeful that the technology will continue to improve. A third model of the artificial retina is now in development and will include over 200 electrodes.
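To get a feel for what the camera-to-electrode translation step involves, here is a toy sketch (my own illustration, not Second Sight’s actual encoding algorithm) that averages a camera frame down to one on/off brightness value per electrode, using a hypothetical 6×10 grid to stand in for Argus II’s sixty electrodes:

```python
# Toy sketch: reduce a camera image to a coarse electrode activation
# pattern. The 6x10 grid and 0.5 brightness threshold are illustrative
# assumptions, not the real Argus II encoding.

def image_to_electrodes(image, grid_rows=6, grid_cols=10):
    """Average pixel brightness over blocks, one block per electrode,
    then threshold each block into an on/off stimulation value."""
    rows, cols = len(image), len(image[0])
    block_r, block_c = rows // grid_rows, cols // grid_cols
    pattern = []
    for gr in range(grid_rows):
        row = []
        for gc in range(grid_cols):
            total, count = 0.0, 0
            for r in range(gr * block_r, (gr + 1) * block_r):
                for c in range(gc * block_c, (gc + 1) * block_c):
                    total += image[r][c]
                    count += 1
            row.append(1 if total / count > 0.5 else 0)
        pattern.append(row)
    return pattern

# A 60x100 test frame: bright left half, dark right half.
image = [[1.0 if c < 50 else 0.0 for c in range(100)] for r in range(60)]
pattern = image_to_electrodes(image)
```

Even this crude averaging preserves the coarse light/dark boundary of the scene, which is roughly the level of detail the article describes patients perceiving.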
Though the project began almost ten years ago, the implant has only recently been approved for patients in Europe. The company has not yet applied for FDA approval, but hopes to do so by the end of this year.
Second Sight – How is Argus II Designed to Produce Sight?
CBS News HealthPop – First Artificial Retina Approved in Europe
US Department of Energy Office of Science – About the Artificial Retina Project
In Disney/Pixar’s “Finding Nemo,” Marlin and Dory are swimming through murky waters en route to Sydney Harbor. Marlin suddenly exclaims, “Wait, I have definitely seen this floating speck before. That means we’ve passed it before and that means we’re going in circles and that means we’re not going straight!” – and he is probably right.
Is it really possible that when we cannot see where we are going, we actually travel in circles? Souman et al. tested this belief through a variety of experiments. In every case, they found that people deprived of a reliable visual cue were unable to keep to a straight line.
The first set of experiments had participants walk through a forest with no visual impediments (no blindfolds). One set of subjects traveled through the woods when it was cloudy, the second set when it was sunny. Everyone in the cloudy group walked in circles and repeatedly crossed their own earlier paths without noticing. In contrast, all of the subjects who could see the sun were able to maintain a relatively straight course with no circles.
The experiment was also performed on blindfolded subjects in an open field.
The blue paths correspond to the subjects who walked on cloudy days. Their paths are mostly curved, with many circles. The small straight stretches are most likely artifacts of the trial setup – participants walked for a period of time, then were unblindfolded and allowed to walk to the starting point of the next walking block. Even so, the lack of a visual stimulus while blindfolded always resulted in curved paths or circles. This contrasts with the yellow path: that subject walked on a sunny day and maintained a straight course for a long distance.
What causes this strange phenomenon? Could it perhaps be subtle differences in leg length that bias walkers toward one direction, thus accounting for the circular motion? Nope – the circle directions were still random. Even adding a sole lift to one shoe to create a more-than-subtle difference in leg length didn’t matter: the participants continued to walk in random circles.
Perhaps the only explanation is that our vision is so central to our daily lives that, without it, our movements drift randomly. A similar idea appears in studies in which subjects are kept in a room with constant lighting: their biological clocks drift freely with no night-and-day inputs. More studies should be performed to truly understand the importance of the visual system. Since we rely so heavily on vision, is it natural for movements to become randomized without it? Do those who are blind from birth experience the same walking-in-circles phenomenon? For now, the conclusion is that the sensory systems are complex and there is still much work to be done in understanding this strange phenomenon. So, if you ever find yourself lost in murky Australian waters, you probably should not just keep swimming, but rather, ask a friendly passing whale for directions.
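The drift idea can be illustrated with a toy simulation (my own sketch, not the model from the Souman et al. paper): a walker whose heading accumulates small random errors at every step veers and curls, while a walker whose heading is continually corrected by an external cue – like the sun – stays roughly on course:

```python
import math
import random

def walk(steps=1000, drift_sd=0.1, correct=False, seed=1):
    """Simulate a walker whose heading picks up small random errors.
    With correct=True, an external cue (e.g. the sun) pulls the heading
    back toward straight each step; without it, errors compound."""
    rng = random.Random(seed)
    x = y = heading = 0.0
    for _ in range(steps):
        heading += rng.gauss(0, drift_sd)  # small per-step heading error
        if correct:
            heading *= 0.1  # cue correction: damp heading back toward 0
        x += math.cos(heading)
        y += math.sin(heading)
    return x, y

blind_x, blind_y = walk(correct=False)  # no visual cue: path curls up
sun_x, sun_y = walk(correct=True)       # sun visible: nearly straight
```

Without the correction term, the heading itself performs a random walk, so the path inevitably bends back on itself – no bias toward left or right is needed, matching the paper’s finding that circle directions were random.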
Walking Straight into Circles – Current Biology
Most of us are probably not strangers to the recent hubbub in the media regarding the effects of video gaming on the brain. From whiny mothers and senators complaining that graphic video games predispose our youth to violence and damage their minds, to the claims that daily “brain training” video game exercises can improve your overall mental well-being, it can be hard to determine just how video games are actually affecting our brains. While the jury is still out as to whether or not violent video games overload the amygdala or if playing Brain Age every day on your Nintendo DS can boost your memory and cognitive abilities, several studies produced in the last year or so have made some very interesting discoveries regarding the effects of gaming on the brain. Though many of us may want to hear that playing StarCraft all day will predispose us to being strategic wizards and give us an edge at the next chess match, such is not the case. The actual findings, however, may still surprise you.
When you think of mentally stimulating activity in the realm of video games, you probably wouldn’t think of something like Call of Duty or Prince of Persia as a game that would really get synaptic efficacy churning. One would probably be more inclined to attribute that to electronic chess, or puzzle games like Tetris or Bejeweled, or even a tactical strategy game like Command and Conquer. According to most independent studies of video gaming, however, it is fast-paced action games (most commonly first-person shooters) just like Call of Duty that have been shown to provide beneficial effects on the brain. That’s right: your annoying roommate and all his obnoxious friends playing Halo at 3 am while you are trying to devise the perfect battle plan in WarCraft are doing something more mentally constructive than you! How exactly, though, do video games provide any benefit (karma, magic, summoned magical demons!?) and what areas of the brain do they act upon?
By testing the reaction times of groups of subjects both with and without extensive video gaming experience, researchers C. Shawn Green and Daphne Bavelier seem to have provided evidence that playing video games can substantially boost one’s overall attentional skills. Green and Bavelier observed that, unlike subjects without any experience playing video games, gamers exhibited a much stronger ability to fixate upon specific visual and spatial cues while filtering out superfluous ones. Subjects with gaming experience also displayed much faster reaction times in the spatial localization and object recognition tests that Green and Bavelier administered to them. Even more interesting was that these attentional abilities were not just specific to the test paradigms themselves, and could be applied to multiple other tests and situations with similarly above-average results.
When you consider the circumstances of the kind of video games that these subjects are used to performing under, these results seem to make sense. The action and pace of the games are fast and sporadic, with stimuli randomly popping up all over the place. The gamers are constantly conditioned and trained to respond quickly to certain stimuli, while filtering other unimportant stimuli out (and of course, they are rewarded for proper responses by either advancing further in the game or winning in general). Another important aspect of these games that Bavelier points to is the fact that there is no set of right/wrong answers or a specific learning paradigm in them, due to how random the games are. For this reason, and due to the fast pace such gameplay demands, Bavelier and Green also speculate that action video gaming benefits the decision-making skills of gamers as well by, again, forcing them to think and react accurately and quickly to specific stimuli while ignoring/rejecting others that would lead to a mistake in the game (a skill that the two have dubbed probabilistic inference). This goes strongly against all that admonishment your mother would give you back in the day about rotting your brain away in front of the Super Nintendo. In actuality, you could have been sharpening it!
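The evidence-accumulation idea behind probabilistic inference can be sketched with a toy drift-diffusion-style model (parameters of my own choosing for illustration, not values from Green and Bavelier’s work): a decision maker adds up noisy evidence until it crosses a bound, and a higher accumulation rate – the benefit attributed to gamers – means fewer steps to reach a decision:

```python
import random

def decide(drift, noise=0.5, threshold=10.0, seed=0):
    """Accumulate noisy evidence each step until its magnitude crosses
    the decision bound; return how many steps it took. A higher drift
    means evidence piles up faster, so decisions come sooner."""
    rng = random.Random(seed)
    evidence, steps = 0.0, 0
    while abs(evidence) < threshold:
        evidence += drift + rng.gauss(0, noise)
        steps += 1
    return steps

# Hypothetical accumulation rates: the "gamer" extracts more evidence
# per unit time than the "novice", so they reach the bound sooner.
gamer_rt = decide(drift=1.5)
novice_rt = decide(drift=0.5)
```

The point of the model is that faster responses need not mean lower accuracy: both decision makers wait for the same amount of total evidence, but one collects it more efficiently.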
Enhanced spatial attention and quick decision making are apparently not the only unexpected benefits of video gaming; according to a research team in Toronto, Canada, extensive gaming can also improve hand-eye coordinative tasks and overall visuomotor abilities. By performing fMRI analysis on several test subjects, both with extensive gaming experience (or week-long game training) and with no video game experience, while they conducted different visuomotor tasks (navigating a maze with joysticks, pointing in one direction while facing the other, etc.), it was found that those with gaming experience performed leagues better than those without. Even more curious, however, was that the gamers seemed to perform so much better and quicker than the non-gamers because they utilized a completely different neural network to process the test data! While non-gamers primarily employed their parietal lobes in the visuomotor tasks, the gamers utilized the prefrontal, premotor, primary sensorimotor and a larger portion of their parietal regions to process and respond to the tasks.
This shift in processing channels, however, did not result from viewing test information differently, or processing it differently in the retina; instead it came through a complete reorganization of the visuomotor pathways in the brain, developing a more efficient and effective pathway! Much like Bavelier and Green, the Canadian research team seems to attribute these changes to the fast pace of action gaming and the high attention to detail that said games demand of the players. Not only must the players translate the movements they desire for their in-game character onto the screen itself (and memorize multiple button patterns to do so), but they must constantly react as quickly and accurately as possible if they want to be able to keep playing. The researchers even joke at one point that with all the training such games offer to the players in speed, precision and accuracy with hand-eye coordinative movements, many of them could be potential candidates for surgeons someday!
Even if puzzle games may not give us amazing deductive powers and strategy gaming may not promote superhuman prefrontal abilities, video games can help us respond faster and develop different processing pathways for visuomotor tasks (a prospect that could prove to be very beneficial for Alzheimer’s patients, who are highly impaired in parietal visuospatial performance). While we know that joystick and button-pad gaming can foster such benefits, it would be interesting to see if any of the new “motion controlled” types of video games could increase the development of such skills by forcing the player to move the controller in the actual direction of movement or action in the game (as pioneered by Nintendo’s Wii and the PlayStation Move). This would be most interesting to study on Microsoft’s Xbox Kinect, a system that translates real-time motion-captured movements into the game itself, so a player can use his/her arms, legs and entire body as the controller! Could this foster enhanced visuomotor skills as well, or only serve to make you look silly as you prance around in front of the TV screen?
Sources and Related Reading:
Neuroscience News – Gamers Have Advantage in Performing Visuomotor Tasks
Medical News Today – Sharpening Decision-Making Skills Through Action Video Game Play
Nature Neuroscience – Carrot Sticks or Joysticks: Video Games Improve Vision
Cortex – Extensive Video Game Experience Alters Cortical Networks for Complex Visuospatial Transformations
PubMed Central – Effects of Action Video Games on the Spatial Distribution of Visuospatial Attention
Vision is one of the most impressive functions of the human brain. It interprets nothing but electromagnetic waves and paints a glorious picture of our daily existence from the scattered, chaotic sea of intertwining light waves that we call home. Many see their vision deteriorate and the world blur as time goes on, and while those problems can be corrected by optometry, blindness advances relentlessly for more than two million people worldwide in the form of retinitis pigmentosa (RP). RP is a heritable genetic disorder that leads to degeneration and loss of function in the retina’s photoreceptor cells, and can lead to full blindness in a matter of years. There is no cure or treatment for RP, but new research may change that very soon.
Vision starts in the photoreceptor when light activates rhodopsin, a G-protein coupled receptor pigment consisting of light-sensitive opsin and retinal. Light triggers a conformational change in retinal that kick-starts a G-protein-coupled visual cascade and the flow of visual information to the brain. In retinitis pigmentosa, rhodopsins in the photoreceptors become insensitive to light starting in the rod cells, and blindness sets in gradually. Rod cells, which are used in low light, deteriorate first, leading to night blindness; dysfunction then spreads to the cone cells used for color vision and acuity until full blindness plagues the individual. Fortunately, a team of French scientists has investigated this degradation and found a way to combat it by genetically reactivating the photoreceptor cells. Their study was featured in the July 23rd issue of Science.
The scientists isolated an archaebacterial rhodopsin analog called halorhodopsin that functions in the yellow and green wavelength range. They then introduced a halorhodopsin-encoding gene into retinitis pigmentosa model mice via a viral vector, alongside an untreated control group. They found that both slow- and fast-degrading retinal cells in the experimental mice regained their light sensitivity once halorhodopsin was integrated into their insensitive photoreceptor cells. Electrical responses were recorded from ganglion cells (the third-tier cell in the visual cascade), and healthy photoreceptor spikes were observed in response to light stimulation. Most importantly, lateral inhibition (the mechanism by which the brain discriminates the edges of objects) was fully preserved, and direction-selective responses were retained. Halorhodopsin mice also performed significantly better than the control RP mice in a battery of visually guided tasks, demonstrating that their photoreceptors had been successfully resensitized by halorhodopsin integration. The scientists also tested the resensitizing ability of halorhodopsin on cultured human retinal cells. They succeeded in integrating halorhodopsin into human cells via viral vectors, but could not conduct any clinical trials. However, photoreceptors expressing halorhodopsin demonstrated photocurrents and photovoltages that would be adequate to restore human vision.
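To see why lateral inhibition picks out edges, here is a minimal one-dimensional sketch (an assumed center-surround weighting chosen for illustration, not the circuit model from the paper): each cell’s response is its own input minus the average of its two neighbors, so uniform regions cancel out and only the step between dim and bright survives:

```python
# Toy 1-D lateral inhibition: a cell is excited by its own input and
# inhibited by its neighbors. Uniform luminance cancels to zero; a
# luminance step produces a paired negative/positive edge response.

def lateral_inhibition(signal):
    response = []
    for i in range(1, len(signal) - 1):
        surround = (signal[i - 1] + signal[i + 1]) / 2
        response.append(signal[i] - surround)
    return response

# A step edge: a dim region followed by a bright region.
luminance = [1, 1, 1, 1, 5, 5, 5, 5]
edges = lateral_inhibition(luminance)
```

In the flat regions the center and surround cancel exactly, while the two cells flanking the step respond strongly in opposite directions – which is why preserving this mechanism in the treated mice matters for seeing object boundaries rather than a uniform wash of light.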
This opens the door to treating retinitis pigmentosa in the genes – the same place where it starts. Although halorhodopsin therapy will not fully restore all wavelengths of human vision, it can still serve as a tool to help restore vision in the blind through optical devices. For example, an RP patient could be treated with halorhodopsin gene therapy, then outfitted with a device that images the visual field and translates it into halorhodopsin-recognizable wavelengths. This light mosaic is then projected onto the patient’s eyes, and they can “see” what is in front of them. The supplied image of the device is from a perspective article at the beginning of the current issue of Science.
Seeing the Light of Day – Science (Perspective)
Genetic Reactivation of Cone Photoreceptors Restores Responses in Retinitis Pigmentosa – Science (Research Article)