I hereby request that you read the following excerpt from Alex Bellos’ Alex’s Adventures in Numberland. However, I recognize that your choice to do so (or not to) is entirely up to you. You have one hundred percent control. Either way, here it is:
“The human brain finds it incredibly difficult, if not impossible, to fake randomness. And when we are presented with randomness, we often interpret it as non-randomness. For example, the shuffle feature on an iPod plays songs in a random order. But when Apple launched the feature, customers complained that it favoured certain bands because often tracks from the same band were played one after another. The listeners were guilty of the gambler’s fallacy. If the iPod shuffle were truly random, then each new song choice is independent of the previous choice. As the coin-flipping experiment shows, counterintuitively long streaks are the norm. If songs are chosen randomly, it is very possible, if not entirely likely, that there will be a cluster of songs by the same artist. Apple CEO Steve Jobs was totally serious when he said, in response to the outcry: ‘We’re making [the shuffle] less random to make it feel more random.’ Why is the gambler’s fallacy such a strong human urge? It’s all about control. We like to feel in control of our environments. If events occur randomly, we feel we have no control over them. Conversely, if we do have control over events, they are not random. This is why we prefer to see patterns when there are none. We are trying to salvage a feeling of control. The human need to be in control is a deep-rooted survival instinct. In the 1970s a fascinating (if brutal) experiment examined how important a sense of control was for elderly patients in a nursing home. Some patients were allowed to choose how their rooms were arranged and allowed to choose a plant to look after. The others were told how their rooms would be and had a plant chosen and tended for them. The result after 18 months was striking. The patients who had control over their rooms had a 15 percent death rate, but for those who had no control the rate was 30 percent. Feeling in control can keep us alive.”
Bellos is a British writer and broadcaster who studied mathematics and philosophy as an undergraduate (so his sweeping conclusions might be a little dramatized with pretty flourishes for emphasis, but seem worth mulling over for a couple minutes on a long summer day).
Bellos grazes over several points in these two short paragraphs: our ability to find patterns where there are none, our comfort with patterns, our constant grasp at control through patterns, and the emotional support that control gives. Many neuroscientists would argue that any control we enjoy is a mirage; every decision and thought is predetermined despite the turmoil we feel when making a decision. But if that is the case, why does picking the khaki shorts over the denim or “one venti, skinny, extra hot, double shot frappawappacino with whip, please” feel so right? If our decisions have all been made for us – and we are just speaking for the desires of our neurotransmitters – why would the illusion of control give us such comfort? Why would it double the survival rate of elderly patients? If my brain has carefully calculated all of my decisions from now until forever, why did we develop so that control feels so good?
In any case, a gold star to Steve Jobs for making the shuffle option less shuffle-y. Gambler’s fallacy, shmambler’s fallacy – I know my iPod was starting to show a clear preference for Queen.
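Bellos’ point about streaks is easy to check for yourself with a quick simulation. The sketch below assumes a hypothetical library of 10 artists with 10 songs each (the numbers are made up for illustration) and counts how often a truly random shuffle plays the same artist twice in a row:

```python
import random

def has_artist_streak(playlist):
    """Return True if any two consecutive tracks share an artist."""
    return any(a == b for a, b in zip(playlist, playlist[1:]))

# Hypothetical library: 10 artists, 10 songs each (artists labeled 0-9).
library = [artist for artist in range(10) for _ in range(10)]

trials = 10_000
streaks = 0
for _ in range(trials):
    shuffled = random.sample(library, len(library))  # an unbiased shuffle
    if has_artist_streak(shuffled):
        streaks += 1

print(f"{streaks / trials:.1%} of truly random shuffles "
      "play the same artist back to back")
```

Nearly every shuffle produces at least one back-to-back pair by the same artist – exactly the clustering that iPod listeners mistook for favoritism.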
What contributes more to creating a person’s identity (i.e. personality, behavior, intelligence)? Is it genetics, or is it the environment in which the person was raised? In other words, as Francis Galton might ask, is it “nature” or “nurture?”
When it comes to how empathetic someone is, Frans de Waal, a Dutch primatologist and ethologist, believes it’s both nature and nurture. He says that a person’s empathy is “innate” – inherited through genes – but also that a person can learn to become more or less empathetic. That seems reasonable; depending on early experiences and education, someone may be more or less of a certain characteristic.
But how is empathy innate? Two NewScientist writers, Philip Cohen and Ewen Callaway, wrote articles discussing the areas in our brains called the anterior cingulate cortex (ACC) and the anterior insula (AI), which become active not only when we are in pain but also when others are.
Imaging studies, cited in their articles, found a positive correlation between a volunteer’s reported empathy for a person in pain and activity in the pain-processing areas of the volunteer’s brain. This has led Cohen to believe, “Humans are hardwired to feel empathy.”
For example, in a study led by Shihui Han and colleagues, “17 Chinese and 16 Caucasian (from the US, Europe and Israel) volunteers” were shown videos of strangers, both Caucasian and Chinese, in pain while their brains were scanned using fMRI. While the fMRI results suggested that volunteers responded more empathetically to people of their own ethnicity or country, their self-reports indicated they “[felt] each other’s pain about equally.”
Interestingly, our brains seem to be “hardwired” to feel more for certain groups over others, whether we notice or not. These groups appear to consist of people we can identify more with, whether through ethnicity, age, gender, or any other in-group.
Frans de Waal would find these results quite understandable. He says, “Empathy is more pronounced the more similar you are to someone, the more close, socially close, you are to someone.” He continues to say that empathy “evolved… for members of any species that is cooperative and social… it’s important to take care of others in the group because you depend on [them], you survive by [them].”
Seemingly then, our brains, and likely those of other species, have evolved to serve a survival advantage; they respond in those pain-processing areas more actively when those like us are in pain, despite what we report as our level of empathy.
While we seem to be hardwired to empathize more with certain groups over others, we’re still united as a species to empathize with one another over those of other species.
Martha Farah, a cognitive neuroscience researcher, suggests that we have a “person network” divided into persons and non-persons, which has promoted closer social bonds within our species. Farah supports the existence of this network by pointing to the rare disorder prosopagnosia, which consists of “impaired visual recognition of the human face.” Damage to a specific area of the brain can “selectively” produce the disorder, demonstrating that specialized areas of the brain exist for discerning other humans.
Whether our brain also specializes in empathy towards non-persons is something to look into. For now, consider yawn contagion, which de Waal discusses with TIME. He says there is a “deep bodily connection” that allows pets to catch yawns from their owners. This seemingly innate connection breaks physical barriers between us and other animals, but what, if any, connection breaks emotional ones? And is it innate, or is it learned?
Have animal rights activists and pet lovers learned to be more empathetic towards non-persons? I’d like to think that it’s not just the influence of my environment that has led me to empathize with my childhood pets or toys – not to mention some of my favorite characters, like Hamm from Toy Story or Patrick from SpongeBob SquarePants.
Whether it is learned, innate, or both, I cannot say, but anthropomorphism seems to explain our emotional connections with non-humans. It breaks the barrier, allowing us to personify or add human characteristics to non-humans. For example, most people would probably like to think of their childhood pets as loved ones with human-like feelings and desires. However, would some stranger halfway across the world feel the same way you do about your pet? Probably not. They’d likely think of it as just another animal, simple as that.
Most people, if asked if they support animal rights, would probably answer ‘Yes’ or some derivative of that. But, would they promise to never buy any animal-based products (eggs, meat, suede, leather, or even the chinchilla coat seen on Teresa last week in The Real Housewives of New Jersey)? Most likely not. I mean, for anyone, that’s a hard promise to keep when we have other priorities.
So how do we go from talking to our pets as if they were humans to absentmindedly buying products that might contain ingredients of an animal just like our pets?
de Waal says we do this through dehumanization. We dehumanize animals just as readily as we anthropomorphize our favorite pets, toys, and characters. By removing human characteristics, like emotion or spoken language, we don’t have to feel as bad about buying that leather jacket we always wanted. de Waal reminds us, “We eat nonhuman animals, wear them, perform painful experiments on them, hold them captive for purposes of our own – sometimes in unhealthy condition. We make them work, and we kill them at will.”
So, the next time you shop and find that animal-based product you just NEED to buy, take a second to think about how you’re setting your priorities. Think about how, maybe unconsciously or unintentionally, you are dehumanizing the animals used for the creation of the product you’re about to buy. Couldn’t that animal be from the same species as your favorite TV character, or even your old pet? I think so, easily.
Are Humans Actually Selfish – Time
Learning Empathy From Apes – KPBS
Brain’s response muted when we see other races in pain – NewScientist
Humans are hardwired to feel others’ pain – NewScientist
Primates and Philosophers: How Morality Evolved – Google Books
As my good friend Cobb once told me, “Dreams feel real while we’re in them. It’s only when we wake up that we realize something was actually strange.”
OK, fine, Leonardo DiCaprio’s character from Inception isn’t real, but he does make a valid point. Oneirologists, those who study dreams, have traditionally viewed dreams as uncontrollable streams of sounds and images with the ability to induce a tremendous spectrum of emotion. However, the idea of lucid dreaming has caused the conventional understanding of dreams to collapse. A “lucid dream,” terminology coined by the Dutch psychiatrist Frederik van Eeden, is one in which the sleeper is aware that he or she is dreaming. This example of dissociation is wonderfully paradoxical in that it exhibits components of both waking and dreaming consciousness.
An American psychiatrist and dream researcher named Allan Hobson specializes in the quantification of mental events and their corresponding brain activities. Although he vehemently dismisses the idea of hidden meanings in dreams, he has embarked on a search along with other neurobiologists and cognitive scientists to decipher the neurological basis of consciousness. Hobson hypothesizes that subjects may learn to become lucid, self-awaken, and regulate plot control by intercalating voluntary decisions into the involuntary nature of the dream.
The validation of this idea would imply that the mind is capable of experiencing a waking and a dreaming state at the same time. Consequently, Hobson states, “…it may be possible to measure the physiological correlates of three conscious states, waking, non-lucid dreaming, and lucid dreaming in the laboratory.” If there is a psychological distinction between the three, there should also be a physiological difference.
The advent of lucid dreaming experimentation has not only benefitted Hollywood, but it has also provided possible treatment options for those hindered by frequent nightmares or post-traumatic stress disorder (PTSD). Methodologically speaking, the study of lucid dreaming presents a formidable challenge, but it is becoming an important component of the cognitive neurosciences.
Josefin Gavie and Antti Revonsuo have built on Hobson’s theories by proposing a technique termed lucid dreaming treatment (LDT). The key to this treatment is that the subject learns how to identify cues that facilitate lucidity during a dream, and the subject learns to manipulate the environment once lucidity is attained. The phenomenon of lucidity may prove to be a useful device in that it offers the sleeper a method to control components of the dream – altering and diminishing any threatening situation. Although the investigation of LDT is extremely new and incontestably controversial, it has shown promising preliminary results in its ability to lower the frequency of nightmares in the selected subjects.
The premise of the film Inception may be wildly hypothetical, but it has expertly amplified the current research on lucid dreams. However, researchers in the field should take a word of advice from the character of Eames: “You mustn’t be afraid to dream a little bigger, darling.”
The Neurobiology of Consciousness: Lucid Dreaming Wakes Up – J. Allan Hobson
The Future of Lucid Dreaming Treatment (PDF) – Josefin Gavie and Antti Revonsuo
The familiar mantra “practice makes perfect” may be taken too literally. The definition of effective practice as the constant repetition of a particular exercise – a golf swing, a tennis serve, a dance step – is faulty, as it turns out.
Time has reported on a study published in Nature Neuroscience by neuroscientists at the University of Southern California and UCLA. The study compares the results of repetitive, “constant practice” with the results of “variable practice.” In one experiment, scientists instructed subjects to copy a movement with their forearm as displayed by a line on a computer screen. One group, representing constant practice, repeated a single 60-degree movement 120 times. The variable practice group performed the same 60-degree movement only 60 times, but also performed three other movements 20 times each. The two groups did equally well in practice. However, when they were retested 24 hours later, the variable practice group outperformed the rote repetition group on the 60-degree task.
So, variable practice works – but why? Some of the subjects from each group were treated with transcranial magnetic stimulation (TMS). A portion of each group received TMS in the prefrontal cortex, and another portion received TMS in the primary motor cortex. The prefrontal cortex allows for executive functions like reasoning and planning, while the primary motor cortex deals with simple, physical task learning. Fittingly, when the prefrontal cortices of variable-practice group members were “messed with” by TMS, their performance declined. Performance also decreased when constant-practice subjects underwent TMS in their primary motor cortices. It seems that “tedium is bad for the brain”: the brain needs variety to learn actively, using higher structures like the prefrontal cortex to better retain what has been practiced.
It would be interesting to find out whether or not this concept applies to different types of learning, like studying for exams or playing an instrument. Even when training a dog, it is suggested to work amid distractions and to increase the time between clicking the “clicker” to let the dog know it has performed a task correctly and rewarding it with a treat. A higher level of focus seems to occur when there are more variables in the practice routine. My piano teacher must have been on to something when she gave me so much homework!
Article: Practice Structure and Motor Memory -Nature Neuroscience
I would hate to marginalize the Creationists who may frequent this blog, but it is becoming difficult to ignore all of the evidence for Evolution piling up higher and higher. Almost every field of study contributes to this growing body of information – from archaeology to biology and, in a recent surge, the rapidly growing field of neuroscience. Unfortunately for us, submitting to the idea of Evolution forces us to think of ourselves and our fellow humans as a little less awesome or unique – we have always reveled in our species’ immense capacity for complex language processing, among other things. But it didn’t just pop up out of nowhere a couple million years in.
To expand upon the research being done by neuroscience in exploring the evolution of the brain, this article focuses on Wernicke’s area (known for its dedication to processing auditory language information) in chimpanzees. This study used design-based stereologic methods to estimate regional volumes, total neuron number and neuron density. When compared to what we know about the human brain, the results are intriguing.
What did they find? A leftward asymmetry of this language area in the chimpanzee brains. What does this mean for us? It suggests that the left lateralization of the language area in the brain (left = language, left = language, first thing to memorize in Psych 101) originated before our cutting-edge human species, prior to the appearance of modern human language.
This investigation may seem generally boring – these chimpanzees aren’t Darwin’s finches or anything – but it certainly is significant in showing neuroscience’s huge potential to contribute to the case for Evolution. The researchers showed that a language specialization key to our unique language capabilities actually evolved prior to the emergence of modern humans, serving as a pre-adaptation to modern human language and speech. Studies like these are closing the gap between us and our ancestral species and, unfortunately, making us all feel a little less special about our leap to civilized society. Way to go, Evolution.
Similar articles investigating primate brains and language adaptations: http://current.com/1ri8u4c
P.S. – If anyone studying at BU is really, really into Evolution, I highly recommend the Ecuador Study Abroad Program… A trip to the Galapagos Islands (on a private yacht, no less) and to the Charles Darwin Research Station is the chance of a lifetime.
Capitalism gained a solid foundation in the 19th and 20th centuries due to the development of several philosophies of human nature, all of which proposed a rational, self-sufficient individual to be the most important element in any society. One author in particular, the famously arrogant Ayn Rand, who once said that “emotions are not tools of cognition,” advanced the idea that humans have direct contact with reality through sensory perception, and thus could inductively and deductively produce logical concepts of the world which exactly reflected the nature of reality. This idea, combined with the assumption that humans have free will, led Rand and others to the conclusion that the individual’s rational pursuit of survival, property and happiness was an inalienable right, and thus laissez-faire capitalism remained the only moral political system.
This was all nice and fine for a society that knew next to nothing about human psychology and brain physiology. But much of the data gathered since suggests that humans are irrational, lack an ability to directly perceive their environment through sensory perception, and use emotions as their guides in nearly all decision making.
As an example of human irrationality, consider the Ultimatum Game, in which one subject is given an amount of money, say $10, and told to give any amount of this to a second subject. Both subjects are informed that if the receiving subject refuses to accept the offered amount, neither subject gets to keep any money at all.
The receiving subjects tend to reject stingy offers, preferring an outcome which actually grants them less money than they would have received if they had simply swallowed their pride and accepted the puny sum.
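The game’s payoff structure is simple enough to sketch in code. This is a toy model, not a published experimental protocol: the $10 pot and the roughly-30-percent rejection threshold are illustrative assumptions loosely based on typical reported behavior.

```python
POT = 10  # dollars the proposer must split with the responder

def responder_accepts(offer, threshold=3):
    """Toy model of observed behavior: people tend to reject offers
    below roughly 30% of the pot, even though rejecting leaves them
    with nothing. The threshold here is an illustrative assumption."""
    return offer >= threshold

def play_round(offer):
    """Return (proposer_payout, responder_payout) for one round."""
    if responder_accepts(offer):
        return POT - offer, offer
    return 0, 0  # rejection: both players walk away empty-handed

print(play_round(1))  # stingy offer is rejected -> (0, 0)
print(play_round(5))  # fair split is accepted  -> (5, 5)
```

A perfectly “rational” responder would accept even $1, since something is better than nothing; real players routinely leave money on the table to punish stingy proposers.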
Further evidence for human irrationality can be seen in a study which asked its subjects to select a bar of soap to purchase among several different types. Subjects invariably justified their choice of one particular bar of soap, referencing its superior scent, sanitization capacity, and low price. Video records of the subjects revealed that they persistently caressed the smooth, ovular shape of the “superior” soap bar. The same soap’s popularity plummeted when it was presented in a rectangular shape.
Indeed, it appears that as humans we are often unknowingly overwhelmed by our whims and irrational urges. What, then, does this mean for capitalism?
If anything, it’s safe to say that as pursuers of happiness, we humans are at risk for making irrational decisions and conclusions about what will make us happy. We hoard electronic gadgets, eat sugary foods, and cram our brains full of reality TV and advertisements, all with the idea that these things bring us comfort and fulfillment. Our cities expand, our consumption of natural resources escalates, and our waste accumulates, everywhere. Still, we act surprised when we discover the rising rates of ADD, obesity and depression, and we somehow overlook the fact that our existence will be endangered upon the depletion of our finite resources.
Through adherence to a laissez-faire economic system, our inability to be consistently rational jeopardizes our survival. Hopefully, the influence of neuroscience and psychology will extend over the realm of politics and economics in time to save us from ourselves.
Messages and Myth – Dan P. Millar
For the New Intellectual – Ayn Rand
“Turn on, Tune in, Drop out.”
With this snappy catchphrase, and an eponymous book, Dr. Timothy Leary boldly endorsed the use of hallucinogens to the American public five decades ago. Citing potential medical benefits, Leary believed that psychedelic drugs like LSD and psilocybin could help patients overcome psychiatric illness and facilitate a higher stage of consciousness.
While his divisive stance and strident attitude made him a symbol of the counterculture movement of the 1960s, his ideas have largely been regarded by the scientific community as radical and medically insignificant. But recently, some researchers have begun to rethink those conclusions.
In April, the New York Times reported on the case of Clark Martin, a former psychologist who participated in a psilocybin study at Johns Hopkins Medical School. Suffering from depression, Martin opted to participate in the program after finding no solace in traditional treatments. In the study, Martin was given psilocybin and spent the next five hours listening to classical music in private reflection.
A year later, he reported that the psilocybin treatment had helped him largely overcome his depression, and considers the experience one of the most meaningful of his life. He wasn’t alone. Most of the patients in the study said the treatment yielded positive, long-lasting benefits.
Encouraged by these results, scientists are also considering studying the effects of psychedelics in the treatment of obsessive compulsive disorder, anxiety, post traumatic stress disorder, and addiction.
The Johns Hopkins study largely corroborates the data Leary had collected decades earlier in a 1960 study on psilocybin conducted at Harvard University, in which he saw significant positive results in his patients.
The use of psychoactive drugs in medicine is still relatively new. Not until the middle of the 20th century, with the introduction of the anti-psychotic drug chlorpromazine, did physicians turn to pharmaceuticals to treat psychiatric illness. These drugs largely displaced older, more invasive treatment options such as insulin shock therapy, psychosurgery, and electroconvulsive therapy.
Perhaps because of their initial success, pharmaceutical companies have recently embraced psychotropic medications, finding massive markets for the treatment of conditions like depression and anxiety.
But while drugs like Prozac and Valium have made fortunes, research for experimental treatments like psychedelics has garnered little attention. Without the profit making potential of less radical options, pharmaceutical companies are reluctant to allocate time and money to developing these equivocal treatments.
Still, a one-time session that produces lasting results could potentially provide a cheaper and less disruptive alternative to a daily pill. And the treatment, an interesting middle ground between therapy and medication, perhaps provides a more complete approach.
Now, 50 years removed from the counterculture movement that defined Leary’s research, his stigma is beginning to fade, but maybe his science will remain.
One hundred years ago, when Alzheimer’s Disease (AD) was even more of a mystery than it is now, amyloid protein aggregates were described as black spots that showed up on brain slices after autopsy. These aggregates, commonly known as plaques, denote the telltale sign that a patient has AD. Until recently, these plaques could only be detected after death, but Dr. Daniel Skovronsky, creator of Avid Radiopharmaceuticals, may have a solution.
On July 11th, Dr. Skovronsky will present his latest findings at the international meeting of the Alzheimer’s Association in Honolulu. He has spent the last five years developing a radioactive fluorine dye for use in positron emission tomography (PET) scans. The dye is engineered to make these PET scans accurate enough to compete with brain autopsies, currently the only method available to determine whether a patient has AD.
The Food and Drug Administration (FDA) questioned Dr. Skovronsky about his fluorine-18 dye and whether the results of fluorine-18 PET scans compare to the definitive results of brain autopsies. Dr. Skovronsky recruited thirty-five patients in hospice with ranging levels of memory loss; all of these patients would receive a PET scan and would have their brains autopsied post-mortem. The results of each patient’s PET scan matched his or her autopsy results.
If approved by the FDA, Dr. Skovronsky’s work will lead to more accurate diagnosis of Alzheimer’s disease. Currently, 20% of patients diagnosed with AD are revealed not to have the disease when an autopsy is performed. With fluorine-18, Dr. Skovronsky has fine-tuned a method to detect amyloid plaques in the brain of a living patient, which is a feat in itself. Previously, the only way to determine whether a patient had the disease was through autopsy – a posthumous procedure. Now, patients could have the chance to receive an accurate diagnosis while they are still alive and earlier in their lives.
In addition to simply detecting plaques, fluorine-18 will also aid in understanding the development of the disease, for plaques were found in patients deemed healthy by memory tests. Currently, people who are not diagnosed with AD early in life will not receive treatment until the disease has developed further, and they will likely not receive any preventative medicine. With Dr. Skovronsky’s PET scans, doctors could detect the disease earlier and administer preventative measures to slow its development. Also, patients who are currently misdiagnosed with AD do not receive the correct treatments for the conditions that are actually causing their memory loss or dementia, like depression.
The Vanishing Mind – Promise Seen for Detection of Alzheimer’s – NYTimes
The Alzheimer’s Disease Neuroimaging Initiative positron emission tomography core – Alzheimer’s Dement. 2010
In Vivo Imaging of Amyloid Deposition in Alzheimer Disease Using the Radioligand 18F-AV-45 (Flobetapir F 18) – The Journal of Nuclear Medicine