The ability to picture things in our mind seems so natural, yet researchers have discovered that some people lack it entirely. The condition, known as “aphantasia,” is an inability to form mental images. Most of what is currently known about it comes from the work of neurologist Adam Zeman.
In 2005, Dr. Zeman of the University of Exeter Medical School encountered a patient, referred to as “MX,” who had lost his ability to create mental images after a minor surgical procedure. Dr. Zeman could not find any description of such a condition in the medical literature, so he gave MX a series of examinations. MX performed well on problem-solving and semantic memory tests, and he could look at the faces of famous people and name them; however, when given only their names, he could not picture their faces. Brain scans revealed that face-recognition regions that would be active in a typical brain during such a test were not active in MX’s brain.
More recently, Dr. Zeman and his colleagues performed another study on several people believed to have the same condition. The subjects showed similar symptoms: they could all perform general-knowledge tasks, like counting the windows in their house, but could not picture things like a sunrise. Notably, the scientists reported that many of the subjects had not acquired aphantasia from an injury; they had had it since birth. The condition is believed to affect as much as 2% of the population, but to find out how common it really is, Dr. Zeman hopes to recruit more people with it for a larger scanning study comparing their brains with those of people who can form mental images.
Firefox co-creator Blake Ross described what it is like to have had aphantasia since birth, and his surprise at discovering that other people can visualize things. “I can’t ‘see’ my father’s face or a bouncing blue ball, my childhood bedroom or the run I went on ten minutes ago,” he wrote on Facebook. “I thought ‘counting sheep’ was a metaphor. I’m 30 years old and I never knew a human could do any of this. And it is blowing my goddamned mind.”
Writer: Nathaniel Meshberg
Editor: Kawtar Bennani
Starting at conception, your genes lay out a neural map for the nervous system: your cells multiply and migrate to form the primitive beginnings of your brain. Much of what happens during this time is co-determined by your environment, which in the womb is shaped by the mother’s. Sometimes this interplay, or even purely genetic factors, results in congenital deafness. At birth, a baby who cannot hear has no auditory input traveling to the brain. But outside the womb, neural development continues at a rapid pace, and the brain keeps shaping itself. For deaf infants, this means it develops without any sound, likely allowing other functioning areas to encroach on the cortex traditionally reserved for auditory processing.
An interesting question is what this kind of plasticity (the brain’s ability to change and adapt) means for language learning in children given cochlear implants. An Ohio State University research team attempted to tease apart this issue by observing parents interacting with congenitally deaf children who use cochlear implants. The parents presented new toys with unique names to the children, and the researchers recorded the entire interaction from different angles and with eye trackers to see what most strongly draws a child’s attention.
The official results have not yet been released, but the hope is that studies like this will help reveal why children with cochlear implants, even those implanted at a very young age, have language delays and appear to learn language differently. In the video accompanying the study description, the parents of one subject describe how narrating daily activities to their son has made a world of difference in their communication, a tip they learned from participating in the experiment. However, it’s possible that this has to do simply with exposing the child to more language, or even with his developmental stage.
Once the results of this study, and others like it, are published, parents will hopefully be able to bridge the gap between themselves and their deaf children, better understanding the differences between being born with hearing and born without it.
Some peace and quiet might be nice, but too much may actually harm us. Humans are naturally social animals: we build communities, create teams, and generally like to be around others who share our ideas. Interacting with others, discussing topics, even criticizing one another: these are all forms of interaction that happen daily. What happens, then, when we are stripped of all this?

Research has shown that isolation damages people both neurologically and psychologically. In a study published in Neuroscience, Dr. Heidbreder’s team found that rearing rats in isolation caused hyperactivity, heightened startle responses, and food-hoarding behavior. Neurologically, the isolated rats also showed decreased dopamine and serotonin levels, and they experienced chronic stress. Unable to interact with other rats, they grew stressed and could not escape that stressed state.

What would this look like in humans? We actually have something very similar: solitary confinement. At the University of Pittsburgh School of Law, a panel of neuroscientists testified that isolation has serious degenerative neurological effects. One of the specialists, Akil, attributed this to stress hormones essentially “rewiring” the brain. Interacting with others lets us shed stress, and being deprived of that outlet leaves an individual chronically stressed. Isolation also dampens brain activity, since the stimulation of thought and action drives neurons to fire; without it, we are left with little but a state of stress. Further neurological data on how and which mechanisms of the brain deteriorate is lacking, because such research on humans would be inhumane. What we do know is that human interaction is close to essential.
To live well, we need interaction to keep our brains stimulated and functioning. If you’d like to learn more, there’s a great video on the subject that shows the real-life effects of isolation on a person.
~ Albert Wang
Overused expressions like “on the same page” or “same wavelength” may actually have some physiological truth behind them. When two people have a conversation or listen to the same story, it makes sense that they’d use similar parts of their brains; the question is just how similar this activation is. Researchers at Drexel and Princeton Universities recently teamed up to explore the issue using a newer technology, hoping to prove its efficacy.
Using functional near-infrared spectroscopy (fNIRS), a functional brain-imaging technique, the researchers sought to find out what happens when two people communicate and how face-to-face communication might be improved. In this study, subjects wore an fNIRS headband that measured their neural activity while they engaged in conversation with one another. This in itself is a big advantage: other imaging techniques that measure blood flow to brain regions, like fMRI, require people to lie down in a noisy machine, which is not at all conducive to personal conversation.
During the experiment, subjects listened to a story in their native language while their futuristic headbands measured activity in prefrontal and parietal areas. These regions were targeted because they’re largely responsible for the higher-order processing involved in relating to others, an important piece of any communicative effort. When the researchers examined the recordings, they saw that the listener’s brain activity closely resembled the speaker’s, after a delay. This copy-cat effect, however, was not observed when subjects didn’t understand the communicator, for example when the speaker spoke only Turkish and the listener was fluent only in English.
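That “resemblance after a delay” is the kind of pattern typically quantified with a lagged correlation: slide one signal in time and find where the match between the two recordings peaks. Here is a minimal Python sketch of the idea using synthetic signals, not real fNIRS data (the function names, delay, and noise level are invented for illustration, not taken from the study):

```python
import math
import random

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    sd_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (sd_a * sd_b)

def lagged_correlation(speaker, listener, max_lag):
    """Correlate the listener's signal against the speaker's at each lag
    (a positive lag means the listener trails the speaker)."""
    return {lag: pearson(speaker[: len(speaker) - lag], listener[lag:])
            for lag in range(max_lag + 1)}

rng = random.Random(0)
speaker = [rng.gauss(0, 1) for _ in range(500)]

# Synthetic "listener": the speaker's signal delayed by 3 samples, plus noise.
delay = 3
listener = [0.0] * delay + speaker[:-delay]
listener = [x + 0.3 * rng.gauss(0, 1) for x in listener]

corrs = lagged_correlation(speaker, listener, max_lag=10)
best_lag = max(corrs, key=corrs.get)
print(best_lag)  # 3: the correlation peaks at the built-in delay
```

The actual study measured coupling across many channels and subject pairs, but the core signature is the same: a correlation that peaks at a nonzero delay, and that disappears when there is nothing shared to track.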
The experimenters also found that their fNIRS recordings correlated closely with fMRI results from a similar experiment. This is a big deal: it supports fNIRS as a legitimate functional imaging technique and could open the door to a whole new wave of experiments involving communication. fNIRS may prove a useful tool in the future, especially for decoding this “brain-synching” during conversation.
~ Jackie Rocheleau
Happiness is often defined as the feeling of contentment or pleasure in doing something you like. But the formal definition of happiness and what it truly is may differ. In one study, researchers in Harvard’s Department of Psychology tried to determine the role of morality in happiness. Participants were given an example of a hypothetical person named “Tom,” who rarely felt sad or lonely, because he felt satisfied by stealing from students and reselling the stolen items to buy alcohol. Most of the subjects agreed that Tom is satisfied, but not exactly happy. Their reasoning was that one has to be good to be happy, and therefore morality plays a role in happiness. This was surprising, because Tom was clearly pleased with his decisions yet was deemed unhappy by people who had only just met this hypothetical character. So what exactly makes us happy?
In another study, Dr. Kringelbach and Dr. Berridge investigated the neuroscience behind happiness and pleasure. They started by comparing the correlation between happiness and hedonia (pleasure) with the correlation between happiness and eudaimonia (a life well lived). They found that most people associated happiness more strongly with hedonia, which helps explain why Tom might feel happy about his actions. They then identified the brain’s hedonic hotspots, centers that produce “neurochemical modulators” and enhance a liking reaction. The pleasure experienced there is translated into a motivational process that leads to wanting, as it increases dopamine (a neurochemical tied more to motivation than to pleasure itself) in the brain. By strengthening this connection between what makes you happy and the chemicals in your brain, you involuntarily reinforce pleasure-producing actions through repetition. An important note from the study is that “all pleasures seem to involve the same hedonic brain systems, even when linked to anticipation and memory.” This would lead individuals without access to certain “higher-order pleasures (such as monetary, artistic, musical, altruistic and transcendent pleasures)” to search for other ways to satisfy that biological need.
So how can we search for or create happiness? In a TED talk, Dan Gilbert explains that happiness can be synthesized. He brings up the concept of a psychological immune system: a way for you to feel better about the world you live in. This may come in the form of morality, as when the students from the initial study thought the hypothetical Tom could only be happy by being moral. It could also be the way Tom finds happiness through immoral actions. Gilbert explains that “the freedom of choice is an enemy to synthetic happiness”: keeping your options open breeds dissatisfaction, while being stuck with a choice makes you grow to like what you chose. As long as a decision remains reversible, indecisiveness can chip away at your happiness. To be truly happy, you must simply commit to a choice and be happy with it.
If you have time, check out the TED talk below, as it is truly a great watch.
TED talk: What makes you happy?
As important as it is to be productive and live a balanced life, it seems that for a great many people, students in particular, health comes second to term papers. I suppose it’s an occupational hazard, but it’s striking that despite the incredibly adverse effects on our intellect, sleep is the first healthy habit to go.
Most people have figured out on their own that we need sleep (or caffeine) to recharge and start the next day energized, but they seem to ignore sleep’s other, equally important benefits. The second best-known benefit is probably that sleep helps our immune systems function, which, while obviously important, can be supplemented to an extent with vitamins and healthy eating. The one function of sleep we absolutely cannot replace, however, is memory consolidation.
Basically, when you encounter or experience something important, an area of the brain called the hippocampus becomes active and works to hold that event in your short term memory. In order for these short term memories to become long term, and thus less easily forgotten, the brain needs to consolidate the information and store it outside of the hippocampus. This process involves the formation, breakdown, and reformation of synapses (connections between neurons) throughout the cortex. It turns out that one of the best things you can do to ensure that this process goes smoothly is sleep.
When you sleep, your brain “downscales” the activity of irrelevant synapses and “upscales,” or increases the activity of, important ones. A four-year-long study recently published physical evidence of this phenomenon in the form of actual images of synapses in mice: “they found that a few hours of sleep led on average to an 18 percent decrease in the size of the synapses” (Neuroscience News). Similar results have been reported in humans.
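As a toy numerical illustration of that up/down-scaling idea (the synapse names and scaling factors below are invented; only the direction of the effect comes from the study):

```python
# Toy model of sleep-dependent synaptic scaling (illustrative numbers only):
# the "important" synapse is strengthened slightly while "irrelevant" ones are
# downscaled, so average synapse size shrinks even as the signal stands out.

before = {"important": 1.0, "irrelevant_a": 0.5, "irrelevant_b": 0.5}
scale = {"important": 1.1, "irrelevant_a": 0.6, "irrelevant_b": 0.6}  # hypothetical
after = {name: size * scale[name] for name, size in before.items()}

avg_before = sum(before.values()) / len(before)
avg_after = sum(after.values()) / len(after)
print(round(1 - avg_after / avg_before, 2))  # 0.15: average size dropped overall

# The important synapse now stands out more against the irrelevant ones.
ratio_before = before["important"] / before["irrelevant_a"]
ratio_after = after["important"] / after["irrelevant_a"]
print(ratio_after > ratio_before)  # True
```

The point of the toy: an overall shrinkage (like the 18% the mouse study reported) is compatible with memories getting *clearer*, because the relevant connections lose less, or even gain, relative to the noise.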
Ironically, students give up a lot of sleep in the name of studying information they’ll never fully retain because of that lack of sleep.
~ Jackie Rocheleau
In theory, multitasking sounds efficient. Why perform tasks separately when you can perform them simultaneously and save time, right? In practice, however, multitasking is not as efficient as it seems. In fact, people are terrible multitaskers; most attempts at multitasking just shuttle attention back and forth between tasks, which depletes the brain’s supply of oxygenated glucose. Because this switching is so tiring, it then becomes harder to focus successfully even on a single task.
Multitasking is more than just exhausting, though. In a study performed at the Institut National de la Santé et de la Recherche Médicale, scientists examined the brain’s prefrontal cortex, which is involved in attention, and found that its left and right sides cooperate when people focus on a single task but work independently when people perform two tasks at once. Participants were asked to perform two tasks simultaneously; when the scientists told them they would receive a larger reward for accurately completing one of the tasks, fMRI showed increased neural activity in only one side of the prefrontal cortex. When the larger reward was associated instead with the other task, the other side of the prefrontal cortex showed more activity, suggesting that the brain divides its attention in half when there are two simultaneous goals.

Additionally, when asked to perform a third task, participants repeatedly forgot one of the three tasks and were three times as likely to make errors as when performing two. The study suggests that performing more than two tasks at once is especially hard because the brain has only two frontal lobes to assign to them. While some tasks are difficult to perform simultaneously, others seem much easier: reading while eating is easier than reading while driving, for instance, because eating demands less engagement from the prefrontal cortex than driving does.
Since multitasking generally increases the likelihood of error and is mentally exhausting, how else can a person cope with a busy schedule? Instead of multitasking, scientists recommend taking breaks every two hours and devoting specific timeslots to different activities, such as using social media only in the morning and at midday. With fewer distractions, productivity will rise and stress will fall.
~ Nathaniel Meshberg
This timeline will be familiar to those of you who have experienced an all-nighter. During the first 16 hours of day 1, you feel normal. Your attention span and working memory have not yet been affected. Then, around hour 17, you enter your “biological night time.” The hormone melatonin, which circulates from your brain to your body, reaches a peak level that signals to your body that it is night-time. This is when your performance rapidly deteriorates and reaches a minimum around 6 to 8am the next morning. While your performance may improve throughout the following day, it will remain below that of day one until you get a decent amount of sleep. This timeline of your performance is regulated by your internal biological time of day; it is not a linear deterioration based on the number of hours you have been awake.
A team of researchers used functional magnetic resonance imaging (fMRI), a noninvasive technique that measures and maps brain activity by detecting changes in blood oxygenation and flow, to scan the brains of 33 people across two days of sleep deprivation and after a period of recovery sleep. The participants’ melatonin levels were also measured to determine each person’s internal biological time. Brain images were taken while participants performed a reaction-time task: during sleep deprivation, in the evening and morning when performance changes rapidly, and after recovery sleep. The results showed that some brain regions, including subcortical areas such as the thalamus (which relays sensory information from receptors in the body to the cerebral cortex), follow the 24-hour circadian rhythm, with some variation in timing. Frontal brain regions showed decreased activity during sleep deprivation and a return to regular activity levels after recovery sleep. The effects of sleep deprivation were also evident in participants’ performance on simple reaction-time tasks.
While sleep deprivation affects various brain regions differently, its effects are pervasive. Hence, you should try to sleep between study sessions so your brain has a chance to consolidate the information you studied, and you can be at your top performance level to continue studying or to take your final the next day.
~ Sophia Hon
Visual and auditory neuroscience is being drawn on more and more in the new age of technology. We see it when Facebook automatically identifies our faces in tagged photos, or when Siri finally figures out that we want today’s weather and are not asking to call our mom. Artificial intelligence is now crossing over into various biological sciences, but scientists and engineers still need to figure out how to transfer what we know about the processing of natural sounds and images into a computer, and how a machine can adapt based on that information.
A main focus of visual neuroscience nowadays is face processing, or facial recognition. The part of the brain responsible for facial recognition is the occipito-temporal region, specifically the right middle fusiform gyrus. Facial recognition is thought to involve several stages, ranging from perceiving basic stimuli to deriving details from those stimuli. New research suggests that facial recognition is a highly connected brain mechanism that can be retrained in people who have experienced a brain injury.
By applying this basic biological science to artificial intelligence, researchers have been able to advance the technology around perception, both auditory and visual. MIT recently created a “machine-learning system [that] spontaneously reproduces aspects of human neurology,” meaning that the computational model learns in the same way the brain does. This detail apparently wasn’t knowingly built into the system; it showed up on its own during training. MIT’s Tomaso Poggio developed the system to train itself to recognize faces seen from different angles, and the researchers realized it had spontaneously added an intermediate processing step when they found it responded the same way to a face rotated by a given amount regardless of the direction of rotation. Another MIT lab, the Computer Science and Artificial Intelligence Laboratory (CSAIL), is working on a machine-learning system that can identify natural sounds and background noise, like crowds cheering or waves crashing, and learn from those identifications. What sets this system apart from its predecessors is that it does not require hand-annotated training data; instead, the researchers use videos to find correlations between visuals and sounds. Carl Vondrick, a graduate student at the lab, describes it as “the natural synchronization between vision and sound.”
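The trick of learning about sound without hand-labeled audio can be sketched very simply: let an existing vision system supply the labels for each video clip, and let the audio side learn from co-occurrence. The sketch below is a drastically simplified, invented stand-in for that idea (the clip data, labels, and function names are all hypothetical, not CSAIL’s actual pipeline):

```python
from collections import Counter, defaultdict

# Pretend these (visual_label, sound_feature) pairs were extracted from videos.
# The visual labels come from a pretrained image classifier, not from humans,
# so no hand-annotated audio data is needed.
video_clips = [
    ("beach", "waves_crashing"), ("beach", "waves_crashing"), ("beach", "seagulls"),
    ("stadium", "crowd_cheering"), ("stadium", "crowd_cheering"), ("stadium", "whistle"),
]

# "Training": count how often each sound co-occurs with each visual scene.
cooccurrence = defaultdict(Counter)
for visual_label, sound in video_clips:
    cooccurrence[sound][visual_label] += 1

def classify_sound(sound):
    """Guess the scene a sound belongs to, using only the learned counts."""
    return cooccurrence[sound].most_common(1)[0][0]

print(classify_sound("waves_crashing"))  # beach
print(classify_sound("crowd_cheering"))  # stadium
```

The real system uses deep networks over raw audio and video frames rather than counts over symbolic labels, but the supervision signal is the same: vision teaches audio, for free, via synchronization.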
Artificial intelligence and biology are growing closer and closer together as this era of technology progresses. It was only a few years ago that facial recognition software became widely available to the public; with further advances, who knows what to expect from artificial intelligence next.
Ever notice someone, friend or stranger, subconsciously mimic your behavior during a conversation? Ever notice yourself doing the same? If so, you may be wondering why this happens. Inside your brain, there are specific neurons called “mirror neurons,” and research from over the past decade suggests that these neurons could possibly be responsible for our strange, automatic mimicry of one another.
Mirror neurons were first discovered in the 1990s by Professor Giacomo Rizzolatti and his colleagues at the University of Parma, who were measuring motor-neuron activity linked to specific movements while feeding a monkey. Using electrodes, they were surprised to discover that motor neurons in an area of the monkey’s premotor cortex, called F5, fired both when the monkey performed a specific action and when it merely observed one. For example, the same individual neurons would fire when the monkey saw an experimenter put a peanut in his mouth and when the monkey put a peanut in its own mouth. Since then, researchers have been trying to establish the existence of mirror neurons in humans using neuroimaging. However, because neuroimaging measures millions of neurons firing at once rather than single neurons, researchers have so far demonstrated only the existence of a human mirror system, not of individual mirror neurons in the human brain.
Beyond actions and motor neurons, researchers have begun branching out to determine whether other areas of the brain participate in this mirror system. More recent research suggests that the mirror system responds not only to other people’s actions but to their emotions as well. For instance, imaging has shown that a brain region called the anterior insula activates both when someone feels disgusted and when they look at someone else who is disgusted. Even more astonishing, a person’s mirror neurons appear to fire differently when viewing an action performed with one intent rather than another (e.g., picking up a teacup during a tea party versus picking it up when the tea party is over). The ability of mirror neurons to respond to someone’s actions as well as their emotions and intentions has consequently led scientists to believe that mirror neurons are responsible for empathy.
Thus, since research shows that we possess a neural mechanism that causes us to automatically empathize with one another, it makes perfect sense that someone would naturally mimic another’s actions. You could say that our brains are always working to find connections and help us establish relationships without our even knowing! Gee, thanks brain!