Parkinson’s disease is one of the most infamous neurological disorders known to medicine and has afflicted many, including Muhammad Ali and Michael J. Fox. Known for its characteristic tremors and shaking, the disease causes a loss of motor control as dopamine-releasing (dopaminergic) neurons die in a midbrain structure called the substantia nigra. In the pathology of Parkinson’s disease, this localized population of dopaminergic neurons starts to die, while other neurons in the substantia nigra and other nearby dopaminergic neurons remain healthy and functional. This selectivity has long puzzled researchers, but a recent letter to Nature provides some breakthroughs in understanding what is happening in these degenerating neurons.
Recent research into Parkinson’s suggests that the problems are localized to the mitochondria of the affected neurons, pointing to a disease rooted in metabolism. Mitochondria are the cell’s metabolic machinery and the stage for the Krebs cycle and the electron transport chain (ETC), the two major energy-producing processes of the cell. Mitochondrial metabolism produces large amounts of adenosine triphosphate (ATP), which is used as energy currency in all cells of the body, and also produces water as a byproduct. Oxidative stress in the water-producing portion of this pathway can generate oxygen free radicals, which are highly reactive because of an unpaired electron (free radicals are implicated in other degenerative disorders and in cancer). Scientists now suspect that these free radicals contribute to the neurodegeneration seen in Parkinson’s disease.
In the Nature study, scientists tagged the mitochondria of the Parkinson’s-susceptible neurons with a fluorescent protein that allowed them to observe the oxidation state of the cells in mice. They found that these cells were indeed under a high level of oxidative stress and showed stress patterns at regular intervals, consistent with the cells functioning rhythmically as they release dopamine. This rhythmic activity is associated with ATP-driven increases in calcium levels in the neurons, and the study suggests that this rhythmic calcium fluctuation is the instigator of the oxidative stress associated with Parkinson’s disease. In further experiments, the researchers blocked these calcium influxes with drugs, which led to a decrease in Parkinson’s-like oxidative stress in mouse models of early-onset Parkinson’s disease. These drugs are known to be well tolerated in humans and provide a legitimate option for drug therapy in the disease.
…Well, it’s not exactly Atkins, but the ketogenic (or “keto,” for short) diet is now being prescribed as a treatment for drug-resistant pediatric epilepsy. This low-carb, high-fat diet involves eggs, cheese, yogurt, and cream, all to a seemingly unreasonable and unhealthy extent. Patients require supplements to stay healthy and grow, and must drink enough fluids to avoid kidney stones. As crazy as it sounds, it works.
Dr. Elizabeth Thiele, a pediatric neurologist at Massachusetts General Hospital, has compiled clinical data showing that seven out of ten patients on this diet reduce their seizure count by more than ninety percent. The diet appears to have few lingering health effects after it is stopped and does not seem to stunt growth, and doctors have been using it since the early twentieth century. But how does fat prevent the surges of electrical activity in the brain that constitute epileptic seizures?
The current theory is that ingesting large amounts of fat forces the body to mimic starvation mode and burn fat for energy rather than carbohydrates. The ketone bodies that result from the breakdown of fats seem to have certain unique properties that help to protect the brain. All kinds of applications for these ketone bodies are being studied, including their ability to slow tumor growth and their potential use in the treatment of Alzheimer’s and Parkinson’s disease. How the ketone bodies do this has yet to be discovered, so drug development is limited. Fortunately, the studies demonstrating the effectiveness of the diet in pediatric cases are opening the door for further research funding.
Original article: Epilepsy’s Big, Fat Miracle – The New York Times
Additional reading: The ketogenic diet in childhood epilepsy – where are we now? – Archives of Disease in Childhood
Let’s face it: coaching is a part of our everyday lives. Whether we accept the advice or let our alter egos consume us with pride remains in question, but ultimately learning is the number one goal. A major topic of research at Case Western Reserve University’s Weatherhead School of Management since 1990, coaching has withstood the test of time as research continues to show that “effective coaching can lead to smoothly functioning organizations, better productivity and potentially more profit.”
However, there is still little understanding of which kinds of interactions contribute to or detract from coaching’s effectiveness. Coaching styles can and do vary widely, partly because the psycho-physiological mechanisms that respond to positive or negative stimuli are poorly understood. Research done at the university has since compared varying coaching styles, from the kind and compassionate to the rugged and raw. The results can then be used to reveal the psychological mechanisms by which learning is enhanced or reduced, depending on the style of coaching in question. “We’re trying to activate the parts of the brain that would lead a person to consider possibilities,” said Richard Boyatzis, distinguished university professor, and professor of organizational behavior, cognitive science and psychology. “We believe that would lead to more learning. By considering these possibilities we facilitate learning.”
Boyatzis believes that coaches attempt to arouse a Positive Emotional Attractor (PEA), which causes positive emotion and arouses neuroendocrine systems that stimulate better cognitive functioning, increased perceptual accuracy, and openness in the person being coached, taught or advised. On the flip side, emphasizing negativity by dwelling on weaknesses and flaws yields the opposite result. “You would activate the Negative Emotional Attractor (NEA), which causes people to defend themselves, and as a result they close down,” Boyatzis says. “One of the major reasons people work is for the chance to learn and grow. So at every managerial relationship, and every boss-subordinate relationship, people are more willing to use their talents if they feel they have an opportunity to learn and grow.”
Boyatzis demonstrated his ideas when two academic coaches with contrasting styles were each assigned to volunteer undergraduate students. Following a series of questions, the team found that “people respond much better to a coach they find inspiring and who shows compassion for them, rather than one who they perceive to be judging them. Sure enough, we found a trend in the same direction even for the neutral questions. Students tended to activate the areas associated with visioning more with the compassionate coach, even when the topics they were thinking about weren’t so positive,” said Jack, Boyatzis’s assistant.
All in all, everyone has a few weaknesses, whether they’re willing to admit it or not, but often we focus so much on the bottom line that we worry ourselves into the ground. It is more important to focus on what gets you going in the morning and makes you want to work hard and stay late; that is what truly embodies one’s character.
Coaching With Compassion Can ‘Light Up’ Human Thoughts – Science Daily
Most of us are probably not strangers to the recent hubbub in the media regarding the effects of video gaming on the brain. From whiny mothers and senators complaining that graphic video games predispose our youth to violence and damage their minds, to claims that daily “brain training” video game exercises can improve your overall mental well-being, it can be hard to determine just how video games are actually affecting our brains. While the jury is still out as to whether violent video games overload the amygdala, or whether playing Brain Age every day on your Nintendo DS can boost your memory and cognitive abilities, several studies published in the last year or so have made some very interesting discoveries regarding the effects of gaming on the brain. Though many of us may want to hear that playing StarCraft all day will turn us into strategic wizards and give us an edge at the next chess match, such is not the case. The actual findings, however, may still surprise you.
When you think of mentally stimulating activity in the realm of video games, you probably wouldn’t think of something like Call of Duty or Prince of Persia as a game that would really get synaptic efficacy churning. One would probably be more inclined to attribute that to electronic chess, or puzzle games like Tetris or Bejeweled, or even a tactical strategy game like Command and Conquer. According to most independent studies of video gaming, however, fast-paced action games (most commonly first-person shooters) like Call of Duty are the only types of video games that have been shown to provide any beneficial effects on the brain. That’s right: your annoying roommate and all his obnoxious friends playing Halo at 3 am while you are trying to devise the perfect battle plan in WarCraft are doing something more mentally constructive than you! How exactly do video games provide any benefit (karma, magic, summoned magical demons!?), and what areas of the brain do they act upon?
By testing the reaction times of groups of subjects with and without extensive video gaming experience, researchers C. Shawn Green and Daphne Bavelier have provided evidence that playing video games can substantially boost one’s overall attentional skills. Unlike subjects without any experience playing video games, the gamers Green and Bavelier observed exhibited a much stronger ability to fixate on specific visual and spatial cues while filtering out superfluous ones. Subjects with gaming experience also displayed much faster reaction times in the spatial localization and object recognition tests that Green and Bavelier administered. Even more interesting, the researchers observed that these attentional abilities were not specific to the test paradigms themselves and could be applied to multiple other tests and situations with similarly above-average results.
When you consider the circumstances these subjects are used to performing under in their games, the results make sense. The action and pace of the games are fast and sporadic, with stimuli randomly popping up all over the place. The gamers are constantly conditioned and trained to respond quickly to certain stimuli while filtering other, unimportant stimuli out (and of course, they are rewarded for proper responses by either advancing further in the game or winning outright). Another important aspect of these games that Bavelier points to is that they have no fixed set of right/wrong answers or a specific learning paradigm, due to how random the games are. For this reason, and due to the fast pace such gameplay demands, Bavelier and Green also speculate that action video gaming benefits gamers’ decision-making skills by, again, forcing them to think and react accurately and quickly to specific stimuli while ignoring or rejecting others that would lead to a mistake in the game (a skill the two describe as probabilistic inference). This goes strongly against all that admonishment your mother would give you back in the day about rotting your brain away in front of the Super Nintendo. In actuality, you could have been sharpening it!
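A standard way to formalize this kind of speeded decision-making is evidence accumulation to a threshold, the sort of model used in the probabilistic-inference literature. The Python sketch below is a toy illustration only; the parameter values and the mapping of “practice” to a higher drift rate are assumptions for demonstration, not figures from Green and Bavelier’s work:

```python
import random

def decide(drift, threshold=5.0, noise=1.0):
    """Accumulate noisy evidence until one decision bound is crossed:
    a toy model of the rapid target-vs-distractor judgments that
    action games constantly demand."""
    evidence, steps = 0.0, 0
    while abs(evidence) < threshold:
        evidence += drift + random.gauss(0, noise)
        steps += 1
    return ("target", steps) if evidence > 0 else ("distractor", steps)

# With no noise, a stronger drift (better evidence quality) reaches
# the bound in fewer steps, i.e. a faster correct response.
print(decide(5.0, noise=0.0))   # ('target', 1)
print(decide(1.0, noise=0.0))   # ('target', 5)
```

In this framing, gaming practice would correspond to a higher drift rate: evidence piles up faster, so the threshold is reached sooner without changing which bound is hit, giving faster responses at the same accuracy, which is the pattern the studies report.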
Enhanced spatial attention and quick decision-making are apparently not the only unexpected benefits of video gaming; according to a research team in Toronto, Canada, extensive gaming can also improve hand-eye coordination and overall visuomotor abilities. Using fMRI to scan subjects with extensive gaming experience (or a week of game training) and subjects with none while they performed different visuomotor tasks (navigating a maze with joysticks, pointing in one direction while facing another, etc.), the team found that those with gaming experience performed leagues better than those without. Even more curious, the gamers seemed to perform so much better and faster than the non-gamers because they used a completely different neural network to process the test data! While non-gamers primarily employed their parietal lobes in the visuomotor tasks, the gamers recruited the prefrontal, premotor, and primary sensorimotor cortices, along with a larger portion of their parietal regions, to process and respond to the tasks.
This shift in processing channels, however, did not result from viewing the test information differently or processing it differently in the retina; instead, it came from a reorganization of the visuomotor pathways in the brain into a more efficient and effective route! Much like Bavelier and Green, the Canadian research team attributes these changes to the fast pace of action gaming and the high attention to detail that such games demand of players. Not only must players translate the movements they intend for their in-game character onto the screen (memorizing multiple button patterns to do so), but they must also constantly react as quickly and accurately as possible if they want to keep playing. The researchers even joke at one point that, with all the training such games offer in the speed, precision and accuracy of hand-eye coordinated movements, many players could be potential candidates for surgeons someday!
Although video games may not give us amazing deductive powers through puzzle games or promote superhuman prefrontal abilities through strategy gaming, they can help us respond faster and develop different processing pathways for visuomotor tasks (a prospect that could prove very beneficial for Alzheimer’s patients, who are highly impaired in parietal visuospatial performance). While we know that joystick and button-pad gaming can foster such benefits, it would be interesting to see if any of the new “motion controlled” types of video games could develop such skills further by forcing the player to move the controller in the actual direction of movement or action in the game (as pioneered by Nintendo’s Wii and Sony’s PlayStation Move). This would be most interesting to study with Microsoft’s Kinect for the Xbox, a system that translates motion-captured movements into the game in real time, so a player can use his or her arms, legs and entire body as the controller! Could this foster enhanced visuomotor skills as well, or only serve to make you look silly as you prance around in front of the TV screen?
Sources and Related Reading:
Neuroscience News – Gamers Have Advantage in Performing Visuomotor Tasks
Medical News Today – Sharpening Decision-Making Skills Through Action Video Game Play
Nature Neuroscience – Carrot Sticks or Joysticks: Video Games Improve Vision
Cortex – Extensive Video Game Experience Alters Cortical Networks for Complex Visuospatial Transformations
PubMed Central – Effects of Action Video Games on the Spatial Distribution of Visuospatial Attention
Magic and neuroscience are not two commonly associated topics. Yet we don’t realize how pertinent these sleights of hand are to certain neural processes. Have you ever been walking down the street and been approached by a street magician? You say “bring it on, mister” and think: I’m smarter than this dude. If I pay close enough attention to what he’s doing, there’s no way he can trick me. But despite your best efforts, your wallet ends up in his hand and your chutzpah on the ground.
Based on a video from Scientific American, it appears that magicians are pseudo-neuroscientists. In the video, neuroscientists Stephen Macknik and Susana Martinez-Conde analyze the work of street illusionist Apollo Robbins. Macknik and Martinez-Conde explain that magicians rely on something called ‘active misdirection’ for their tricks. By using verbal cues and focusing his eyes on his left hand, Apollo directs your attention there. He then snipes your wallet out of your back pocket with a few quick movements of his right hand.
He probably threw in a couple jokes about the ol’ ball-and-chain or your hideous sweater, right? Turns out it’s difficult to pay attention to all of your surroundings while you’re laughing.
According to another, slightly far-fetched theory, magicians take advantage of ‘mirror neurons’ as well. Mirror neurons help us feel empathy or, in this case, cause us to act like the person we are interacting with. So when Apollo looks at his left hand, your mirror neurons fire, causing you to follow his gaze, and ‘poof’ goes your wallet.
Watch the video from Scientific American here.
We’re told to find ourselves a quiet nook, to maintain a schedule, and to tackle one subject at a time. Our parents tell us that naps are a waste of time. And mass media conglomerates encourage us to fill every spare moment with a quick video clip or a two-minute game on our cell phones. But as it turns out, it is time to quit buying into what we’re told creates the optimal environment and habits for learning.
First, forget about holing up at that same seldom-visited spot in the run-up to an exam. Studies have found that students who vary where they study remember the information better than those who always study in one place. The brain makes associations between what we study and where we study it; the greater the number of associations, the more enriched the material, and the better entrenched the memory.
Next, throw the one-subject-at-a-time approach out the window. Varying the type of material studied in one session has been shown to leave a deeper impression on the brain than focusing on a single topic at a time.
For example, a recent study in the journal Applied Cognitive Psychology had two groups of fourth graders taught how to calculate the dimensions of a prism. There were four types of problems; one group was given repeated examples of a single type, while the other was given a mix of all four. A day later the groups were tested on what they had learned, and the fourth graders who had practiced on the mixed problem sets performed twice as well.
Last but not least, when you have some down time, take the airplane approach: turn off all your electronics and take a nap. This technological age encourages nearly constant multitasking, but multitasking deprives our brains of much-needed rest. A continuous stream of digital input (via cell phones, iPods, computer screens, and televisions) forfeits the time when our minds could better learn and remember information, or even form ideas.
A study at UC-San Francisco found that rats do not process new information and transform it into a persistent memory until they are given a break from those new experiences. Thus in order for us to process what we’ve learned and experienced during the day, we need to rest our brains.
Recent studies have shown that sleep not only consolidates what you have already studied, but it also primes the mind for further learning. So an afternoon nap between classes (as long as you set your alarm) can actually be the final element to a perfect study system.
Digital Devices Deprive Brain of Needed Downtime- The New York Times
Forget What You Know About Good Study Habits- The New York Times
Behavior: Napping Can Prime the Brain for Learning- The New York Times
Obviously, our brain is the most complex part of our body, but did you ever think that people would use its powers to persuade and manipulate you to buy products seen in advertisements?
Well, with the ever-changing and advancing state of technology these days, it is no surprise that people would create ever more amazing tools, especially when applied to consumerism. Neuromarketers, researchers who use techniques from neuroscience to study people’s reactions to products, are bringing new studies to the forefront because only 2 percent of the brain’s energy is expended on conscious activities.
A.K. Pradeep, founder and chief executive of NeuroFocus, a neuromarketing firm based in Berkeley, California, believes that the only way to truly understand people’s inclinations is through studying their subconscious. NeuroFocus has therefore led the way in this emerging field, studying volunteers with eye-tracking devices and measurements of the brain’s electrical activity.
A volunteer undergoing testing that focuses on measuring his brain’s electrical frequencies and his eye movements.
By tapping into this realm, researchers are able to get a clear view of people’s unconscious thoughts when viewing commercials, movie trailers, or web sites. As Dr. Pradeep says, “We basically compute the deep subconscious response to stimuli.”
This process has now led to multiple companies forming in hopes of furthering the development of neuromarketing. And many big-name sponsors, such as Google, CBS, and Disney, have used neuromarketing to test consumer responses to advertisements, even political ones.
However, some people are concerned that companies could take advantage of consumers’ thoughts and use those insights against them.
“If I persuaded you to choose Toothpaste A or Toothpaste B, you haven’t really lost much, but if I persuaded you to choose President A or President B, the consequences could be much more profound,” Dr. Pradeep says.
This is not very likely, however, since companies are not focusing heavily on the political side of things, and we do still have control over our own brains.
Dr. Robert T. Knight, a professor of neuroscience and psychology at Berkeley, explains that neuromarketing may distinguish between positive and negative emotions, but it cannot be specific enough to say whether a positive emotion is joy or excitement. The only reliably measurable variable is whether the viewer pays attention. No correlation has been established between the brain-pattern responses measured by neuromarketers and purchasing or other real-world behavior.
Whatever your opinion, the initiative is just beginning, and the Advertising Research Foundation has developed a project for defining industrywide standards based on reviewing research done by participating neuromarketing firms.
The future looks bright for these companies as sponsors have poured in with great interest, but only time will tell the fate of our brains being used for or against us.
Neuromarketing: Ads That Whisper to the Brain – The New York Times
What would happen if humans were like turtles – alone at birth with no mom to guide them back home? We probably would not survive very long before getting attacked and/or eaten by something bigger than us. For many animal species, instinct guides survival. But for humans and other mammal species, nurture as an infant is crucial to our development.
Weaver et al. investigated the phenomenon of nurture in rats. They noted that some rat mothers extensively licked and groomed their pups, while others ignored them. Pups that received attention during the first week of life grew up to be happy and calm, while those that were ignored grew up to be anxious and more prone to disease. Epigenetics is the study of changes in gene activity that occur in response to the external environment without altering the DNA sequence itself. The behavioral differences here trace to a change in the glucocorticoid receptor (GR) gene during development. At birth, the gene is highly methylated and inactive. If a rat mother is attentive toward her pups, the pups’ GR gene gradually demethylates, making the gene more active; these pups will be more relaxed in response to stress. Pups that were not given attention, and that express little of the GR gene, respond poorly to stress. You can try being a rat mom in an interactive game here.
A related study by McGowan et al. examined hippocampal tissue from people who had committed suicide and been abused as children, and from people who had committed suicide with no history of childhood abuse. Compared with controls and with the subjects who were not abused, the subjects who had been abused had decreased levels of GR protein. This suggests that events later in life (such as those leading to a suicide) do not alter this mark; rather, it is early childhood interactions that cause the epigenetic changes underlying adult behavior. These data are consistent with the rat findings and show the importance and effect of proper nurture in childhood.
But in reality, how important is it to be calm and controlled in response to stress? Rats are found in urban areas as well as in the wild.
What would happen if one of the calm, happy rats were to stumble upon a mouse (or, in this case, rat) trap? It would be less concerned about danger and more likely to die, whereas an anxious rat would be guarded and could better survive the harsh environment.
What is the significance of these epigenetic changes for humans? Maybe living in a developed society has prevented us from realizing just how much of a role nurture plays in development. Do those born into a war-torn society have an inactive GR gene and thus a guarded, anxious personality? That may well be advantageous for survival.
In our society, we will of course never be left alone immediately after birth to fend for ourselves. But, what degree of nurture must we receive in order to grow up to be productive members of society? Why are species like turtles able to survive without a mom? Epigenetic studies will be key in future questions concerning nature and nurture.
Almost everyone can agree that our senses are what make life enjoyable: Your sense of smell helps you recognize delicious baked goods, your sense of sight lets you see how sexy you are in the mirror (very, I’m sure), your sense of balance makes a Saturday in Allston seem like a wacky whirlwind of wobbly adventure. But the underlying neural mechanisms behind our senses can be surprisingly complex and difficult to study. Sometimes, scientists have to improvise in unexpected ways.
Harvard neurobiologists recently published a study in which researchers engineered mice that could actually “smell” light, in order to better understand how odors are processed in the brain. If it seems counter-intuitive, or downright unbelievable, don’t worry: it is!
“In order to tease apart how the brain perceives differences in odors, it seemed most reasonable to look at the patterns of activation in the brain,” says Venkatesh N. Murthy, professor of molecular and cellular biology at Harvard. “But it is hard to trace these patterns using olfactory stimuli, since odors are very diverse and often quite subtle. So we asked: What if we make the nose act like a retina?”
You might be thinking, “WTF, how is that even possible??”, so it might help to have a little background on how smell and vision work:
When you sniff, chemical stimuli called “odorants” are drawn into your nose and nasal cavity, where they pass over a thin sheet of cells called the olfactory epithelium (though not before passing through a thin coating of mucus). The odorants then bind to odorant receptor proteins on the cilia (tiny hair-like projections on the receptor neurons, not to be confused with nose hairs) and cause the olfactory receptor neurons to fire. However, each individual neuron expresses only one kind of receptor protein, meaning that a single odorant may cause different neurons to fire differently, or not at all.
These neural signals are then received by glomeruli in the olfactory bulb. Here, each glomerulus receives input from all the neurons that share a specific receptor protein and sends the information to the olfactory cortex for further processing. The pattern of activity across the different glomeruli forms a sort of sensory map in the cortex, and different patterns in the map are thought to be responsible for the different smells that we perceive.
If that seems too complicated, just remember: Proteins in the nose respond to smells, eventually creating a map of activity in the glomeruli in the brain that identifies what the smells are!
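The combinatorial logic described above lends itself to a quick sketch. The Python below is purely illustrative; the receptor names and which odorants bind to them are invented for demonstration, not taken from real data:

```python
# Illustrative sketch of combinatorial odor coding.
# Receptor types and their ligands are made up for demonstration.
RECEPTOR_BINDING = {
    "R1": {"limonene", "octanal"},
    "R2": {"octanal"},
    "R3": {"limonene", "vanillin"},
}

def glomerular_pattern(odorant):
    """Each glomerulus pools neurons sharing one receptor type, so the
    'sensory map' for an odorant is the set of activated receptor types."""
    return frozenset(r for r, ligands in RECEPTOR_BINDING.items()
                     if odorant in ligands)

# Different odorants light up different combinations of glomeruli,
# which is what downstream cortex uses to tell smells apart.
print(glomerular_pattern("limonene"))  # R1 and R3 active
print(glomerular_pattern("octanal"))   # R1 and R2 active
```

Because each odorant activates a combination of receptor types rather than a single dedicated one, a modest number of receptor types can distinguish a huge number of odors.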
Vision works in a somewhat similar way. Light travels through the lens and pupil to the back of the eye, where it hits photoreceptors on the retina. The light activates a receptor protein, called an opsin, which activates other proteins in the cell and changes the photoreceptor’s membrane potential. This signal then continues through bipolar cells and ganglion cells, and eventually makes its way as a neural signal to the visual cortex and beyond.
So why would you want mice to smell light instead of odors?
“It makes intuitive sense to use odors to study smell,” Murthy says. “However, odors are so chemically complex that it is extremely difficult to isolate the neural circuits underlying smell that way.”
Optogenetics is a young field in which light-sensitive proteins are genetically engineered into neurons, including neurons outside the visual system, so that their activity can be controlled with light. Murthy and his associates used optogenetics to breed mice that expressed a subtype of opsin, called channelrhodopsin-2, in their olfactory sensory neurons. Basically, in addition to the proteins that sense odorants (i.e. smells), the mice had a protein that responds to light.
Murthy and his team were then able to use a special projector to activate specific glomeruli with patterned light while recording responses through tetrodes placed in the brain tissue. The team wanted to investigate whether mitral/tufted “sister cells,” cells that synapse with the same glomerulus, affect the sensory map. These cells are thought to modify activity from the olfactory sensory neurons, either by modulating the timing of the activity spikes or their rates. Both of these factors are very important for the brain’s identification of smells, but they are poorly understood.
The experimenters, by directly activating specific olfactory sensory neurons with tiny spots of blue light, were able to see the extent to which these sister cells affected the sensory map. Normally this would be incredibly difficult to study because of the complexity of smell: the kinds of odorants in the air, their concentration, and the timing of their passage through the nose can all affect activity in the glomeruli. By using light to drive the neurons directly, the team could control the neural activity far more precisely.
They concluded, based on the activity of the glomeruli under different combinations of light input, that more information leaves the glomeruli than enters from the neurons. Because the sister cells independently encode information about the timing and rates of neural activity, computations about smell are being made before the information is even processed by the olfactory cortex!
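One way to see how re-encoding along an extra axis can carry more information is a toy entropy calculation. The numbers below are hypothetical, chosen only to illustrate the idea that adding an independent timing channel multiplies the number of distinguishable output states; they are not measurements from the study:

```python
import math

def entropy(states):
    """Shannon entropy (bits) of a uniform distribution over distinct states."""
    return math.log2(len(set(states)))

# Hypothetical toy numbers: suppose a glomerulus passes on one of two
# firing rates...
glomerular_input = [("low",), ("high",)]
# ...but sister cells re-encode it along two independent axes
# (spike rate AND spike timing), multiplying the distinguishable states.
sister_output = [(rate, timing)
                 for rate in ("low", "high")
                 for timing in ("early", "late")]

print(entropy(glomerular_input))  # 1.0 bit in
print(entropy(sister_output))     # 2.0 bits out
```

Two independent binary channels yield four distinguishable states instead of two, so the output alphabet carries more bits than the input even though no new stimulus information arrived, which is the flavor of the glomerular finding.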
So why is this important? Well, obviously it’s mad cool that someone out there is breeding mice that can smell freaking light. It’s also a neat demonstration of how cutting-edge methods like optogenetics are being developed by neuroscientists to understand our brains and nervous systems in seemingly unconventional ways. Neural activity that is normally very messy can be much better controlled and studied, as Murthy and his team showed with smell. Someone could, for instance, breed mice that could taste light in order to study how taste is processed! Optogenetics as a field is only a few years old, too, so who knows what other kinds of techniques and technologies will be available to us in another 5, 10, or even 20 years (tastable music, perhaps? We can dream…).
Sadly, your own receptor proteins are already happy and taken care of in your nose, so you probably won’t be able to smell how good you look anytime soon (what a shame!), but at least a mouse could.
A recent study supports a cause-and-effect relationship between problems in the immune system and the development of mental illness. Nobel Prize-winning geneticist Mario Capecchi and his team linked a deficiency of microglial cells to trichotillomania, an impulse-control disorder that causes people to pull out their hair. In his experiment, Capecchi performed bone marrow transplants on “hair-pulling” mice. Within four months of the transplants, the mice no longer showed the disorder. Conversely, healthy mice given bone marrow from affected mice developed the behavioral disorder.
The hair pulling in the mice was found to be caused by a deficiency of microglia in the brain. Microglia are located throughout the brain and spinal cord and play a major role in the immune defense of the central nervous system. They are responsible for locating damaged neurons as well as fighting off infections that cross the blood-brain barrier. The microglial deficiency is attributed to a mutant Hoxb8 gene, which is responsible for part of the development of the body and its organs. Since microglial cells originate in the bone marrow and migrate to the brain later on, they are the only group of cells in the brain that express this gene, which is suggested to be very important for brain development.
Scientists have been aware of the link between the immune system and mental illness for a while now: people diagnosed with psychiatric disorders such as bipolar disorder, schizophrenia, and depression often have problems with their immune systems. This discovery, however, suggests a cause-and-effect relationship between problems in the immune system and psychiatric disorders. As Ghosh says, “Apart from the fact that this is the world’s first reported behavior transplant, this finding is an important landmark in our understanding of the genetic basis of behavior.” This opens a whole range of possibilities, as scientists can apply what they know about the immune system to discover causes of psychiatric disorders and to find ways to treat these illnesses with immune-based therapies.