The cerebral cortex, the layer of neural tissue surrounding the cerebrum of the mammalian brain, plays key roles in memory, language, thought, attention, and consciousness. Until now, no invertebrate equivalent
to the cerebral cortex had been found, but Detlev Arendt, Raju Tomer, and colleagues may have identified an evolutionary counterpart. The answer is hidden in one simple creature: the worm. Wait, what? Yeah, you heard me. The marine ragworm, found at all water depths, has been shown to possess tissue resembling that of our mysterious cerebral cortex.
Arendt and his colleagues used a technique called cellular profiling to determine a molecular fingerprint for each cell type in this particular ragworm. The technique reveals which genes are turned on and off in each cell, providing a means of cellular categorization. Surprisingly, the mushroom bodies, regions of the ragworm’s brain thought to handle olfactory processing, show a striking molecular similarity to the tissue of our cerebral cortex. This intriguing discovery may provide remarkable insight into the evolutionary origins of what has become an incredibly important brain structure.
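Conceptually, the categorization step amounts to reducing each cell to an on/off expression fingerprint and grouping cells that share one. Here is a toy Python sketch of that idea; the gene names and profiles are entirely hypothetical, not data from the study:

```python
# Toy sketch: categorize cells by which genes are on (1) or off (0).
# All gene names and expression values below are made up for illustration.
from collections import defaultdict

cells = {
    "cell_a": {"gene1": 1, "gene2": 0, "gene3": 1},
    "cell_b": {"gene1": 1, "gene2": 0, "gene3": 1},
    "cell_c": {"gene1": 0, "gene2": 1, "gene3": 0},
}

def fingerprint(expression):
    """Reduce a cell's on/off expression profile to a hashable signature."""
    return tuple(sorted(expression.items()))

# Cells sharing a fingerprint fall into the same molecular category.
groups = defaultdict(list)
for name, expression in cells.items():
    groups[fingerprint(expression)].append(name)

for members in groups.values():
    print(members)
```

The real analysis works with image-registered expression maps rather than a hand-built table, but the grouping principle is the same: identical fingerprints imply the same cell category.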
Read more about this review here, or see the original article in Cell.
The discoveries of modern neuroscience have certainly heightened our understanding of the brain and its functions, and have begun to provide a physical groundwork for the complicated problem of effectively investigating the mind. While it is certainly beneficial to establish physical principles that underlie the brain’s cognitive functions, how does this affect the larger endeavor of understanding the mind? Neuroscientists such as Rebecca Saxe of MIT are converging on things like the neuroanatomical basis of moral judgment, just scraping the surface of what might bridge the gap between what physically “is” and what metaphysically “ought” to be. Saxe proposes that she has pinpointed the right temporoparietal junction (RTPJ) as a brain center for making moral judgments, and she has conducted experiments showing that magnetic brain stimulation can effectively change the moral judgments of her subjects. Please see her TED talk here for a full explanation of her study.
In the 1700s, David Hume proposed what is now widely known as the Is-Ought Problem. He calls for caution in making statements about morality, or what “ought” to be, based on extrapolations of what “is”: what ought to be does not necessarily follow from what is. The problem aptly applies to neuroscientists like Saxe, whose research makes strong suggestions about the neural basis of moral judgment and attempts to bridge the is-ought gap. All of this research is building a large library of what “is” concerning the brain, but it also suggests that metaphysical concepts such as morality and meta-ethics can be reduced to neural connections and connectivity. Hume stresses that while what is and what ought to be are important revelations in and of themselves, what ought to be need not follow from what is. Neuroscience must respect this separation as its advances begin to encroach on many of philosophy’s well-established concepts.
What I’m saying here is that modern neuroscience must use caution in drawing conclusions about human nature. Empirical evidence can certainly help us understand more abstract ideas, but the evidence and the ideas must remain separate with respect to causality. Making discoveries about brain function and the empirical science behind things like emotion or judgment is a valiant and respectable scientific investigation. However, this pursuit must be kept separate and distinct from the pursuit of understanding how we ought to be or act. Our moral thought is something more abstract and multidimensional than connections between neurons and sequential action potentials. While investigation of the science of the mind is important, it should not seek to explain our existence nor try to answer philosophy’s greatest problems with calculations and empirical data.
The Is-Ought Problem – David Hume via Wikipedia
Theory of Mind TED Talk – Rebecca Saxe (MIT)
Maciek Drejak Labs released an app earlier this year for the iPhone (which can also be used on the iPod Touch) called “Sleep Cycle.” Recently, Lifehacker rated it the best alarm clock application for smartphones in its weekly Hive Five feature.
The application works by monitoring your body movements during sleep. The user is instructed to place the phone face down between the fitted sheet and the mattress. Over the course of the night, the program registers high amounts of activity (movement) as “awake,” moderate activity as “dreaming” (REM sleep), and little to no activity as “deep sleep” (slow-wave sleep). For the first two or three nights of use, Sleep Cycle familiarizes itself with the user’s movement patterns by building a graph of the user’s sleep cycle.
At the peaks of the graph, the user is most likely in his or her lightest sleep. The user sets an alarm for the time he or she would like to wake up. Within the last thirty minutes of the night’s sleep, Sleep Cycle analyzes the peaks of the graph and attempts to wake the user gently at a peak of high activity (i.e., during light sleep).
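The logic described above boils down to thresholding movement into stages and scanning a pre-alarm window for an activity peak. A minimal Python sketch of that idea follows; the thresholds and the night’s data are made-up assumptions, since the app’s actual values are not published:

```python
# Illustrative sketch of the wake-up logic described above.
# Thresholds (0.7, 0.3) and the sample night are assumptions, not app values.

def classify(activity):
    """Map a movement reading (0..1) to a sleep stage."""
    if activity >= 0.7:
        return "awake"
    if activity >= 0.3:
        return "dreaming"   # REM sleep
    return "deep sleep"     # slow-wave sleep

def wake_minute(activity_by_minute, alarm_minute, window=30):
    """Within the last `window` minutes before the alarm, pick the minute
    with the highest activity (lightest sleep) as the moment to sound it."""
    start = alarm_minute - window
    window_slice = activity_by_minute[start:alarm_minute]
    peak = max(range(len(window_slice)), key=lambda i: window_slice[i])
    return start + peak

# A fake 8-hour night: mostly still, with a burst of movement near the end.
night = [0.1] * 450 + [0.2, 0.5, 0.8, 0.4, 0.2] + [0.15] * 25
print(wake_minute(night, alarm_minute=480))  # minute of the activity peak
```

The real app presumably smooths the raw accelerometer signal before classifying it, but the wake-window idea is the same: trade up to thirty minutes of sleep for an alarm that fires during light sleep.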
Customer reviews express some mixed results with this application, but overall, it appears that many people have positive experiences with Sleep Cycle. Some users report that when the application works properly, they feel wonderful when they wake up rather than being ripped out of deep sleep or a dream when the alarm goes off.
Although this application is only useful for iPhone and iPod Touch users, it is fairly inexpensive at $0.99, and it has the potential to help extremely deep sleepers.
Stuart Hameroff, MD, is an anesthesiologist and professor at the University of Arizona. In one of many articles and videos about consciousness on the Huffington Post, Hameroff describes how anesthesia can help explain consciousness.
If the brain produces consciousness (all aspects of the term), then it seems to follow that turning off the brain will also turn off consciousness. This is exactly how anesthetics work.
While most anesthetics are nonselective “dirty” drugs, they all produce loss of consciousness, amnesia, and immobility, either by opening inhibitory ion channels or by closing excitatory ion channels in neurons. The commonly used intravenous drug propofol, for example, acts by potentiating GABA-A receptors, the ubiquitous inhibitory channels of the central nervous system. Brain off = consciousness off.
Hameroff does not subscribe to this. He argues that consciousness is an intrinsic part of the universe and that anesthetics simply disconnect it from the brain. He also thinks that by saying “quantum” a lot, he can scientifically prove the existence of the soul.
What’s scary is that Hameroff has “MD” and “Professor” next to his name. Will Joe the Plumber see through the misinformation?
Don’t take the HuffPost too seriously:
Imagine that you’ve just spent the whole morning working non-stop. You’ve been hushing your stomach grumblings for the past hour and you cannot concentrate on anything but your hunger and that devastatingly slow-ticking clock.
Another hour passes and that long-awaited lunchtime break has finally come around. All you know is that you need food, and you need it now, so you decide to stay in your building and rush to the cafeteria. You enter, put your things down, and begin the search.
Which foods will you choose? Or, which foods will choose you?
Brian Wansink and David R. Just are trying to answer that question, specifically pertaining to children. In addition to being faculty at the Dyson School of Applied Economics and Management at Cornell, Wansink and Just are co-directors of the newly launched Cornell Center for Behavioral Economics in Child Nutrition Programs. With a $1 million grant from the U.S. Department of Agriculture, the center will provide valuable research on subtle behavioral influences, helping efforts to “nudge” children into making healthier eating choices.
Wansink says, “We’re taking some of the best researchers in the nation and pairing them with schools to figure out new, cool ways to get people to eat healthier.” For example, “by strategically placing healthy food at both the beginning and end of school lunch lines, more children choose them.”
Naming foods more descriptively, charging extra for dessert, and placing healthy foods like fruits into baskets also increased choice of healthier foods. Other interesting techniques Wansink and Just suggest can be seen in Joe McKendry’s illustration in The New York Times.
The benefits of this research seem clear enough. Children who are encouraged to eat healthier each day at school will likely develop long-lasting, healthy habits. These habits can then help reduce their risk of obesity and associated diseases.
So what about the drawbacks? Are there any? Is changing the way options are presented a violation of free will or choice? Just says he and his colleagues are “not eliminating choice… [they’re] pushing things where [they] can and not trying to do the impossible.”
What do you think? Are there degrees of choice? Are Just and Wansink simply lowering the degree children have in selecting foods to eat? Either way, isn’t it sort of disturbing that such subtle changes in placement, naming, or presentation can influence your decisions, whether it be which foods you eat or which habits you’ll develop?
The research in the rising field of behavioral economics certainly leads one to ask these questions. Author of Predictably Irrational, Dan Ariely talks on TED about how irrational people are in their decisions. Particularly, he discusses how easily external forces can influence choices. For example, depending on how a question is worded or presented, people respond differently even though the two forms of the question are essentially the same.
One interesting study he describes asks participants to pick the most attractive of three men’s photos. Two of the images are of the same man, Jerry, but one is edited to make him less attractive by distorting his facial features. The third image is of another man, Tom. Most participants chose the more attractive version of Jerry. However, if instead the set contains two images of Tom, one less attractive, and one image of Jerry, most people choose the more attractive version of Tom. Even though the original images of Tom and Jerry appeared in both sets, they did not receive the same response, because of an external force.
Through these examples, Ariely demonstrates how much influence the designer, whether of surveys, forms, or tests, has on the decisions of the people filling them out. Do the people still have a choice to choose Jerry when unattractive Tom highlights regular Tom so well? If yes, then why do most people choose Tom? Are there degrees of choice involved? How about degrees of resistance to external forces? Do they change at all when the designers have different intentions? For example, compare a store selling 2 shirts for the price of one and a cafeteria offering “creamy corn” and “two-dollar cookies.” Both places are trying to take advantage of subtle differences, but the cafeteria seems to have kinder intentions.
What can we make of all this? Can we change how irrational, as Ariely might say, we are? If so, should we advocate the advances of behavioral economics in their kinder intentions, despite the seeming drawbacks?
Stealth Health for Kids – Newsweek
Lunch Line Redesign – The New York Times
New center, with $1 million grant, aims to make school lunchrooms smarter – ChronicleOnline, Cornell University
Dan Ariely: On Our Buggy Moral Code – TED Talks
Tired of hearing about halogenation and hydrogenation reaction mechanisms? Keep that organic chemistry book open, because it gets better:
At Columbia University Medical Center, researchers have discovered a reason for the build-up of harmful proteins in Parkinson’s patients. The scientists have worked out a mechanism involving the build-up of a class of molecules known as polyamines, known neuron-killers involved in Parkinson’s disease. High-resolution fMRI scans showed a brainstem region in Parkinson’s patients that produced less activity than the same region in healthy patients. Based on tissue samples from deceased patients, Dr. Scott A. Small of Columbia posits that SAT1, an enzyme that breaks down polyamines, might play a role in the development of Parkinson’s and thus explain the differences in the fMRI studies.
Experiments in yeast, mice, and humans have shown the pathogenicity of polyamines. At Brandeis University, researchers showed that yeast cells engineered to produce polyamines died more quickly than those that were not. Scientists at the UC San Diego School of Medicine used mice both to demonstrate the connection between SAT1, polyamines, and Parkinson’s and to show that SAT1-targeting drugs could help deter the progression of the disease. In addition, Columbia researchers have studied the SAT1 gene and found a genetic variation that is present in Parkinson’s patients but absent in healthy control subjects.
Several years ago, polyamine-lowering drugs were being studied as potential cancer treatments. It was unclear in these trials whether or not the drugs could penetrate the blood-brain barrier. The ability to penetrate the blood-brain barrier is crucial for Parkinson’s treatments: if the drug cannot get through, it must be administered into the brain directly. Dr. Small’s lab is currently working on drugs that can pass through the blood-brain barrier so that Parkinson’s can be effectively treated with pills rather than brain surgery!
New Molecular Pathway Underlying Parkinson’s Disease Identified – Karin Eskenazi, media contact, neurosciencenews.com
Polyamine pathway contributes to the pathogenesis of Parkinson disease – Lewandowski et al. (Original Article)
Ever hear of “neurocinematics” – a term coined by Uri Hasson of Princeton University?
If not, it’s basically a set of methods borrowed from neuroscience, employed by neuromarketers, and targeted toward filmmakers. Using tools such as biometric devices (to track eye movements and heart rate), EEG (to analyze brain waves), and fMRI (to record brain activity), neuromarketers can help filmmakers better understand their viewers’ reactions, whether to completed pieces, screenings, or trailers (the latest Harry Potter movie trailer employed neurocinematics).
“Under the assumption that mental states are tightly related to brain states” (a hypothesis that is widely accepted by most neuroscientists and many philosophers), Hasson and colleagues found that “some films can exert considerable control over brain activity and eye movements.”
Neuromarketers ensure the reliability of their findings using several techniques.
To provide a baseline for measuring viewers’ brain activity, and to avoid measuring noise unrelated to the task at hand, neuromarketers first assess participants while they watch a non-stimulating target (e.g., a standard cross against a gray background), which should elicit minimal response. Neuromarketers then compare this baseline to the response elicited by a clip. Some participants may even be asked to watch a clip three or four times for comparison purposes.
Because the response of one participant does not say a lot about a clip, neuromarketers use inter-subject correlation analysis (ISC) to ensure further reliability. They can “assess similarities in the spatiotemporal responses across viewers’ brains,” in which correlations can “extend far beyond the visual and auditory cortices” to other areas, such as the lateral sulcus (LS), the postcentral sulcus, and the cingulate gyrus.
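In code, a common leave-one-out version of ISC correlates each viewer’s response time course with the average of everyone else’s, then averages those correlations. A minimal Python/NumPy sketch, using synthetic data in place of real fMRI time series:

```python
# Minimal leave-one-out inter-subject correlation (ISC) sketch.
# Synthetic data: each "viewer" sees the same film-driven signal plus noise.
import numpy as np

rng = np.random.default_rng(0)
stimulus = rng.standard_normal(200)                 # shared film-driven signal
viewers = np.stack([stimulus + 0.5 * rng.standard_normal(200)
                    for _ in range(5)])             # 5 viewers, one voxel each

def isc(responses):
    """Mean correlation of each subject's time course with the average
    of all the other subjects' time courses (leave-one-out ISC)."""
    corrs = []
    for i in range(len(responses)):
        others = np.delete(responses, i, axis=0).mean(axis=0)
        corrs.append(np.corrcoef(responses[i], others)[0, 1])
    return float(np.mean(corrs))

print(round(isc(viewers), 2))  # high, since the stimulus drives all viewers
```

A real analysis repeats this per voxel across the whole brain; a Hitchcock-style clip would then show high ISC over a large fraction of cortex, while a weakly controlling clip would not.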
In 2008, Uri Hasson and colleagues measured how viewers’ brains responded to different types of films, ranging from real-life scenarios, documentaries, and art films to Hollywood productions. While Alfred Hitchcock’s episode Bang! You’re Dead elicited similar responses across viewers in 65% of the cortex, Larry David’s Curb Your Enthusiasm did so in only 18%.
Hasson argues that the ISC level indicates how much control a filmmaker exerts, intentionally or not, over viewers’ experiences, leaving Hitchcock with considerably more control than Larry David.
Hasson’s team also changed the order of scenes for different participants and assessed viewers’ reactions, finding that the more coherent the scene order, the higher the ISC in parts of the brain involved in extracting meaning. Changing the order of scenes can help filmmakers determine which sequence effectively promotes their viewers’ understanding.
Phil Carlsen and Devon Hubbard of MindSign in San Diego, CA, suggest that neurocinematics can help filmmakers decide which actor will elicit more brain activity from viewers and, consequently, give a film a better shot at the box office. Not only that, but the method could also help assign movie ratings, depending on how brain areas associated with disgust and approval respond.
Carlsen has also found, not surprisingly, that 3D scenes activate the brain more than 2D scenes do, particularly when viewers wear modern polarized glasses rather than the older red-and-blue ones.
Neurocinematics has the ability to change the film industry immensely. Whether a filmmaker wants near-complete control or just enough to ensure his message crosses over, he can use this method to make it happen. Even the U.S. Advertising Research Foundation is seriously considering the new method and working to define standards for its quality and use, says Ron Wright of Sands Research.
While some may disagree over whether neurocinematics is killing creativity or invading human interest or personal privacy, others might find it revolutionary, providing filmmakers with more opportunities to create their ideal pieces, and viewers with more engaging, worthwhile films.
Wright, along with neuromarketing consultant Roger Dooley, would likely argue that the method is far from invading human interest or privacy. Wright believes there are too many variables in determining the human mental “buy button,” which would hypothetically lead someone to spend money – in this case on a film. Dooley does not believe that neuromarketers will “ever find some sort of magic spot that will allow [them] to accurately predict whether someone will purchase a product or not.”
Neurocinematics, agreeable or not, is becoming an important element in the blending of the arts and sciences.
By blending with dozens of other fields and producing remarkable methods and products along the way, neuroscience has, in my opinion, driven down quite a revolutionary road.
Sources & Additional interesting, related sites:
Songs in the key of EEG – Michael Brooks, NewScientist
Neurocinema – film producer Peter Katz, YouTube
MindSign Neuromarketing — MindSign
Science of the Movies – MindSign Neuromarketing – Nar Williams, YouTube
Neurocinematics: The Neuroscience of Film – Hasson et al., Projections (DOI: 10.3167/proj.2008.020102)
Brain scans gauge horror flick fear factor – Grace Wong, CNN