In theory, multitasking sounds efficient. Why perform tasks separately when you can perform them simultaneously and save time, right? In practice, however, multitasking is not as efficient as it seems. In fact, people are terrible multitaskers; most attempts at multitasking merely result in attention switching back and forth between tasks, which depletes the brain’s supply of oxygenated glucose. This switching is so tiring that it becomes harder to focus successfully even on a single task.
Multitasking is more than just exhausting, though. In a study performed at the Institut National de la Santé et de la Recherche Médicale, scientists found that the prefrontal cortex, the brain region involved in attention, has a left and a right side. These sides cooperate when people focus on a single task and work independently when they perform two tasks at once. Participants were asked to perform two tasks simultaneously – when scientists told them they would receive a larger reward for accurately completing one of the tasks, fMRI showed increased neural activity in only one side of the prefrontal cortex. When the larger reward was instead associated with the other task, the other side of the prefrontal cortex showed more activity, suggesting that the brain splits the work in half when there are two simultaneous goals. Additionally, when asked to perform a third task, participants repeatedly forgot one of the three tasks and were three times as likely to make errors as when performing two tasks. The study suggests that performing more than two tasks at once is harder because the brain has only two frontal lobes to assign to them. While some tasks are difficult to perform simultaneously, others seem much easier. For instance, reading while eating is easier than reading while driving, because eating demands less engagement from the prefrontal cortex than driving does.
Since multitasking generally increases the likelihood of error and is mentally exhausting, how else can a person cope with a busy schedule? Instead of multitasking, scientists recommend taking breaks every two hours and devoting specific timeslots to different activities – for example, only using social media in the morning and at midday. With fewer distractions, productivity will increase and stress will decrease.
~ Nathaniel Meshberg
This timeline will be familiar to anyone who has pulled an all-nighter. During the first 16 hours of day one, you feel normal; your attention span and working memory are not yet affected. Then, around hour 17, you enter your “biological night-time.” The hormone melatonin, which circulates from your brain to the rest of your body, reaches a peak level that signals to your body that it is night. This is when your performance deteriorates rapidly, reaching a minimum around 6 to 8 a.m. the next morning. While your performance may improve throughout the following day, it will remain below that of day one until you get a decent amount of sleep. This performance timeline is regulated by your internal biological time of day; it is not a linear deterioration based on the number of hours you have been awake.
A team of researchers used functional magnetic resonance imaging (fMRI), a noninvasive technique that measures and maps brain activity by detecting changes in blood oxygenation and flow, to scan the brains of 33 people who were sleep deprived over two days and again following a period of recovery sleep. The participants’ melatonin levels were also measured to determine each person’s internal biological time. Brain images were taken while participants performed a reaction-time task: during sleep deprivation in the evening and morning, when performance changes rapidly, and after recovery sleep. The results showed some variation in how closely different brain regions follow the 24-hour circadian rhythm, including subcortical areas such as the thalamus, which relays sensory information from receptors in the body to the cerebral cortex. The frontal brain regions showed decreased activity during sleep deprivation and a return to regular activity after recovery sleep. The effects of sleep deprivation were also evident in participants’ performance on simple reaction-time tasks.
While sleep deprivation affects various brain regions differently, its effects are pervasive. Hence, you should try to sleep between study sessions so your brain has a chance to consolidate the information you studied, and you can be at your top performance level to continue studying or to take your final the next day.
~ Sophia Hon
Visual and auditory neuroscience is being applied more and more frequently in the new age of technology. We see it when Facebook automatically identifies our faces in tagged photos, or when Siri finally figures out that we want today’s weather and are not asking it to call our mom. Artificial intelligence is now crossing over into various biological sciences, but scientists and engineers still need to figure out how to transfer what we know about how the brain processes natural sounds and images to a computer, and how to build systems that can learn from that information.
The main focus of visual neuroscience today is face processing, or face recognition. The brain region responsible for facial recognition is the occipito-temporal region, specifically the right middle fusiform gyrus. Facial recognition is thought to involve several stages, ranging from perceiving basic stimuli to deriving details from those stimuli. New research has shown that facial recognition is a highly connected brain mechanism that can be retaught to those who have experienced a brain injury.
By applying this basic biological science to artificial intelligence, researchers have been able to advance the technology surrounding auditory and visual perception. MIT recently created a “machine-learning system [that] spontaneously reproduces aspects of human neurology,” meaning that the computational model learns in the same way the brain does. This detail wasn’t deliberately built into the system; it emerged on its own during the training process. MIT’s Tomaso Poggio developed the system to train itself to recognize faces seen from particular angles. The researchers realized the system had developed an extra processing step when it responded to a face rotated by a given degree regardless of the face’s orientation. Another MIT lab, the Computer Science and Artificial Intelligence Laboratory (CSAIL), is attempting to create a machine-learning system that can identify, and learn from, natural sounds and background noise such as crowds cheering or waves crashing. What sets this system apart from its predecessors is that it does not require hand-annotated training data; instead, the researchers use videos to find correlations between visuals and sounds. Carl Vondrick, a graduate student in the lab, describes it as “the natural synchronization between vision and sound.”
Artificial intelligence and biology are growing ever closer together as this era of technology progresses. It was only a few years ago that facial recognition software became widely available to the public; with further advances in technology, who knows what to expect from artificial intelligence next.
Ever notice someone, friend or stranger, subconsciously mimic your behavior during a conversation? Ever notice yourself doing the same? If so, you may be wondering why this happens. Inside your brain, there are specific neurons called “mirror neurons,” and research from over the past decade suggests that these neurons could possibly be responsible for our strange, automatic mimicry of one another.
Mirror neurons were first discovered in the 1990s by Professor Giacomo Rizzolatti and his colleagues at the University of Parma, who were measuring motor neuron activity linked to specific movements while feeding a monkey. Using electrodes, they were surprised to discover that motor neurons in an area of the monkey’s premotor cortex, called F5, fired both when the monkey observed a specific action and when it performed that action itself. For example, the same individual neurons fired when the monkey saw an experimenter put a peanut in his mouth and when the monkey put a peanut in its own mouth. Since then, researchers have been trying to establish the existence of mirror neurons in humans using neuroimaging. However, because neuroimaging measures the activity of millions of neurons at once rather than single neurons, researchers have so far demonstrated only the existence of a human mirror system, not of individual mirror neurons in the human brain.
Aside from examining people’s actions through motor neurons, researchers have begun investigating whether other areas of the brain participate in this mirror system. More recent research suggests the mirror system responds not only to other people’s actions but to their emotions as well. For instance, imaging has shown that a brain region called the anterior insula activates both when a person feels disgusted and when they look at someone else who is disgusted. Even more astonishing, a person’s mirror neurons appear to fire differently when viewing an action performed with one intent rather than another (e.g., picking up a teacup during a tea party versus picking it up when the tea party is over). The ability of mirror neurons to respond to others’ actions, emotions, and intentions has consequently led scientists to believe that mirror neurons underlie empathy.
Thus, since research shows that we possess a neural mechanism which causes us to automatically empathize with one another, it makes perfect sense for someone to naturally mimic another’s actions. You could say that our brains are always working to find connections and help us to establish relationships without us even knowing! Gee, thanks brain!
The inability to feel pleasure from activities a person normally enjoys is present in many psychiatric illnesses, such as schizophrenia and depression. The term for this symptom, ‘anhedonia,’ comes from Greek. Music, eating, play, and even sex can all lose their appeal for those who experience it.
The specific causes of anhedonia are unknown, but there is a great deal of ongoing research aimed at reducing this symptom in people with depression. According to Stanford scientist and physician Robert Malenka, MD, PhD, anhedonia and depression are typically associated with being in a stress-inducing environment, and the hormone melanocortin is also associated with depression-related syndromes. Another Stanford scientist, Emily Ferenczi, links anhedonia and depression to malfunction of the medial prefrontal cortex (mPFC), a brain region that mediates decision making, supports memory consolidation, and is now speculated to play a part in the brain’s reward system.
While there is no consensus on what causes anhedonia, recent studies of specific forms of it – such as musical anhedonia – are helping us understand the brain. According to researchers at McGill University in Montreal, about 3–5% of the healthy population experiences no pleasure from listening to music. This is characterized by insensitivity to music alone, not to other pleasurable stimuli such as money. These individuals understand and perceive the melodies and rhythms of music but do not enjoy musical stimuli. Using fMRI, scientists observed that in these people musical stimuli are associated with “a reduction in the activity of the nucleus accumbens, a key subcortical structure of the reward system,” while other pleasurable stimuli, such as money, activated the nucleus accumbens normally, as in a healthy person. Studies like these offer real insight into how our mysterious brain works and allow mental illnesses to be better understood.
~ Cindy Wu
Alzheimer’s Disease (AD) is one of the most terrifying things that can happen to a person and their family. Troubles brought about by old age are trying enough, but the added deficits and severe neural injury caused by AD can make a once highly functioning, caring, and involved family member a stranger to their loved ones. There is no cure, so caregivers, friends, and family can merely stand by and do their best to help, even as recognition of a wife, husband, mother, father, sister, or brother quickly fades away. There have, however, been hopes that certain techniques and treatments may slow the progression of the disease. One new study proposes that exercise can actually increase the thickness of the cortex in those diagnosed with mild cognitive impairment (MCI), a condition that often progresses to AD.
The results of this study show that exercise, particularly exercise that improves cardiorespiratory health, can actually increase cortical thickness, especially in the areas that degenerate fastest in AD. For both healthy elders and those affected with MCI, MRI scans showed an 8.49% increase in cortical thickness. Exercise not only protects against faster degeneration; it could also help older adults without MCI preserve their mental capacities.
What is interesting is that after participation in the exercise regimen, both the people with MCI and those without showed a smaller amount of cortex around the fusiform gyrus, the area largely credited with our uncanny ability to discriminate faces and other objects we are “experts” in. This raises questions about how this area develops and degenerates in healthy as well as cognitively impaired elderly individuals.
~ Jackie Rocheleau
If you like to skydive or take part in other dangerous, adrenaline-inducing activities such as extreme sports or drug use, you may be someone who is easily bored or impulsive. People who demonstrate “novelty seeking” behavior tend to prefer new or unexpected experiences. Studies show that these new experiences trigger the release of dopamine, a pleasure chemical, in the brain, which may be why some people are drawn to dangerous activities.
Dopamine is a neurotransmitter that regulates the brain’s reward and pleasure centers. Rewarding experiences, such as eating, activate the dopamine system, which then shapes how we perceive the task we’re doing and the reward or failure associated with it. Drugs that stimulate dopamine receptors are used to treat movement symptoms in Parkinson’s disease. A study found that 17 percent of Parkinson’s disease patients who took such drugs developed unexpected behavioral addictions. They were also more likely to engage in risky behaviors and demonstrated a preference for novelty. This suggests that an active dopamine system is positively correlated with the likelihood of taking risks.
Another study found that anticipating a win can increase brain activity in dopamine regions, whereas anticipating a loss decreases such activity. As expected, our expectation of a win or reward encourages us to take a risk. But similarly, the urge to avoid a loss also drives us to take a risk. Therefore, someone who is drawn to the thrill of skydiving may be acting on their urge to avoid serious loss, such as death.
It turns out that our chances of taking a risk can be manipulated. Research on rats shows that risk taking can be reduced by mimicking the dopamine signal that provides information about previous negative outcomes. The risk taking behavior of binge drinkers can also be reduced by experiencing, rather than expecting, a loss outcome.
So what makes some people more likely to be thrill-seeking than others? The answer has to do with both nature and nurture. Studies have found that people who carry a specific dopamine receptor variant are more likely to engage in risky behavior. This gene variant may increase the release of dopamine in the brain in response to unexpected rewards, making new experiences more thrilling. On the other hand, people may also engage in thrill-seeking behavior because of peer pressure to conform, or when they are feeling especially sad or stressed.
Nearing the end of this election season, we can see the great divide between Trump supporters and Hillary supporters. Both sides of the election show a great deal of unwavering support for their candidates. We can see that these supporters remain uncritical and unfazed when either candidate is involved with a scandal. What if there was science to explain why Hillary and Trump supporters are so unwilling to rationally criticize their candidate of choice?
In John Hartung’s article for the Foreign Policy Journal, he introduced research from Danish scientists that explains what happens in our brains when we hear people “preach” our beliefs. The study split participants into two groups by how they identified: religious or non-religious. Participants were put into MRI scanners and listened to recordings of a pastor preaching. In those who identified as more religious, something striking happened compared with the non-religious participants: their medial and dorsolateral prefrontal cortices shut off. The prefrontal cortex is the part of the brain that coordinates our executive functions, such as decision making, planning complex cognitive behavior, personality expression, and moderating social behavior. This kind of reaction in our brains isn’t limited to religious sermons; it is also likely relevant to our political beliefs.
Now we know why our Facebook newsfeeds are filled with people from both political parties constantly defending their candidate: hearing political candidates “preach” their beliefs may be affecting supporters’ brains, preventing them from fully recognizing that their belief system and candidate could be flawed. This could also be why many Trump supporters are completely unfazed by his various sexual assault scandals and are willing to believe anything Trump says without question. The same goes for Hillary’s supporters, who are willing to defend her despite her various scandals. This research gives us great insight into one possible reason behind people’s unwavering support for their favored candidate. With this in mind, we should all use our best judgment when voting on November 8th, and consider every aspect when choosing who we believe is the right candidate.
In recent years, many new technologies have aimed to make virtual reality a thing of the present rather than just a vision. If we define virtual reality as experiencing an illusion different from present reality, achieving virtual reality immersion has been a goal of ours since the 1800s. No machine engaging all the senses was actually built, however, until the 1930s–1950s, with Morton Heilig’s creation of the Sensorama, and even then the machine was too big to carry around. Today we have many devices that place us in a virtual environment and are far more portable than their predecessors. The Oculus Rift, HTC Vive, and even Google Cardboard all offer ways to experience a dimension different from the present. Although we may view these worlds as separate from our own, new studies have begun connecting virtual reality to real-life applications.
A recent study has shown how virtual reality can reveal that the brain may use perception, not sensation, to regulate emotions and behavior. In the study, conducted by Dr. Dobricki and Dr. Pauli, healthy people explored a VR simulation of a forest glade. Participants walked across a plank under one of four setups: the plank was placed either at treetop height or at ground level, and was either bouncy or steady. When the bouncy plank sat atop a high pillar, participants looked mainly below the horizon and rated the experience as negative. When the bouncy plank was at ground level, however, participants kept their heads above the horizon and rated the experience as positive. These results suggest that perception, and not necessarily the sensation of an environment, can shape individuals’ emotions and behavioral responses. Because of this ability to affect people’s emotions, many therapeutic techniques have already begun adopting VR simulation.
New therapeutic techniques have been adopted with the rise of VR. Enthusiasm for the technology comes from virtual reality’s ability to control variables while creating a naturally rich experience. Traditional therapeutic techniques struggle to create a controlled environment without sacrificing its ecological validity; VR offers maximum control over the environment while still allowing the patient to behave naturally within it. As this practice becomes more widely accepted, perhaps VR will become part of everyday life and eventually integrate with what we know to be reality. Only time will tell what new technologies this world will bring.
~ Albert Wang
In some situations, we end up surprising ourselves by how we act. These are the moments when we act automatically without thinking. It is as if we really didn’t know what was going to happen.
We have different narratives running through our minds, even without our conscious awareness. Underlying these narratives are complex networks of axonal tracts, synapses, and feedback loops. They interconnect in ways we don’t yet fully understand, offering explanations for human behavior that cannot be found elsewhere. One such explanation proposes that moral judgment and moral action are two separate entities, processed differently within and across individuals.
A study conducted at Plymouth University provides compelling evidence for separate processes underlying moral judgment and moral action. By comparing predicted actions in textbook moral paradigms with actions taken in virtual reality moral paradigms, the research team found divergent results that suggest separate mechanisms.
In the textbook paradigms, most participants predicted that they would not sacrifice others for a greater good, whereas in the virtual reality paradigm they did act in a utilitarian manner. Interestingly, subjects’ antisocial traits were also examined, and such traits predicted actions only in the virtual reality paradigm. These findings show that there can be stark differences between what we say we would do and what we would actually do. They also suggest that virtual reality is a more useful testing paradigm than less realistic methods for gaining insight into what people might actually do.
Clearly, morality is a complex human trait. A study like this shows us why we may have such difficulty making hard moral decisions. Trying to reconcile what we think we would do versus what we would actually do might be so hard simply because our brains make it that way.
~ Jackie Rocheleau