Tagged: Neuroscience and Society
Hey Scientists, Where’s My Jetpack?!: The future is here; it just looks a little different than expected
In almost every major futuristic science-fiction work of the last century, jetpacks and flying cars are seemingly as ubiquitous as today’s oversized SUVs, lining the closets and garages of every hardworking American. Understandably, in the year 2011, this has led many disenchanted Trekkies and devotees of assorted geek cultures to ask, “Well, scientists, where’s my jetpack?!” While I commiserate with my fellow fans of Asimov and Adams, several recent innovations have led me to believe that we all might be overlooking just how “futuristic” the time we live in really is. Accessing Google on an iPhone is about as close to the Hitchhiker’s Guide to the Galaxy as we may ever come. We can beam blueprints of intricate plastic objects, and now even organs, anywhere in the world and literally print them out. We have computers that can beat us at Jeopardy! And last but not least, ladies and gentlemen, I present to you BrainDriver, the thought-controlled car. On behalf of scientists everywhere, I accept your apologies, geeks.
A Peek at Parkinson’s: What’s New for the Old?
With the Pancakes for Parkinson’s event at Boston University coming up on April 2nd, I thought it would be a good time to check in on the latest in Parkinson’s research.
First, Parkinson’s disease (PD) is a motor disorder that affects the dopaminergic neurons of the brain, which are necessary for coordinating movement. Onset is usually around age 60, with early symptoms including tremor, stiffness, slowness of movement, and poor balance and coordination. While current treatments can help alleviate symptoms, none provides a cure.
Second, the mission of the Michael J. Fox Foundation for Parkinson’s Research and other support groups is to find better treatments for those living with the disease. With the Baby Boomer generation entering late adulthood and old age, more research needs to be done to better understand the disease and help those who have it find relief. Consider stopping by the GSU Alley for some pancakes next month to show your support for the Foundation and its cause!
Many approaches have been taken in PD research, ranging from studies of food intake to new technologies.
Moral Code
Why is it wrong to kill babies? Why is it wrong to take advantage of people with intellectual disabilities? To lie with the intention of cheating someone? To steal, especially from the poor? Is it possible that medieval European society was wrong to burn women suspected of witchcraft? Or did it save mankind from impending doom by doing so? Is it wrong to kick rocks when you’re in a bad mood?
Questions of right and wrong such as these have for millennia been answered by religious authorities who refer to the Bible for guidance. While the vast majority of people still turn to Abrahamic religious texts for moral guidance, there are other options for developing a moral code. Bibles aside, we can use our “natural” sense of what’s right and wrong to guide our actions; a code based on this natural sense would come from empirical studies of what most people consider to be right or wrong. Ignoring the logistics of creating such a code, we should note that the rules in it would have no reasoning behind them other than “we should do this because this is what comes naturally.” How does that sound? Pretty stupid.
The other option is to develop a moral code based on subjective metaphysical ideas, with a heavy backing of empirical facts. “Subjective” means these ideas won’t have an undeniability to them; they are what they are and that’s it. Take, for example, the rule “we should not kill babies.” There is no objective, scientific reason why we shouldn’t kill babies. “Wait!” you say, “killing babies is wrong because it harms the proliferation of our species and inflicts pain on the mothers and the babies themselves!” But why should we care about the proliferation of our species? About hurting some mother or her baby? While no one will deny that we should care about these things, there is nothing scientific that will explain why. Science may give us a neurological reason why we care about species proliferation (it will go something like, “there is a brain region that makes us care about the proliferation of our species”), but why should we be limited to what our brains tend to make us think or do?
Subjective rules like these must therefore be agreed upon with the understanding that they are subject to change. Interestingly, some argue that science can answer moral questions because it can show us what “well-being” is, how we can attain it, and so on. But the scientific reason why we should care about well-being is nowhere to be found. The result is that we can use science to answer moral questions, but we first have to agree (subjectively) that we want well-being. Science by itself cannot answer moral questions because it shows us what is rather than what ought to be. (Sam Harris is perhaps the most prominent figure to argue that science can be an authority on moral issues; his technical faux pas is an embarrassment to those who advocate “reason” in conduct.)
But more on the idea of metaphysically constructed moral codes. What properties should such a code have, and how should we go about synthesizing it? Having one fixed, rigid source as an authority for moral guidance is dangerous. Make no mistake: there must be some authority on moral questions, but it must be flexible and adaptable; it must stand the test of time on the one hand and adjust to novel conditions on the other. This sounds a lot like the Constitution of the United States. But even with such a document as the Constitution, which has provided unity and civil progress since the country’s founding, there are some who take its words literally and allow no further interpretation; if it’s not written in the Constitution, it can’t be in the law, they argue (see strict constructionism versus judicial activism). These folks also tend to be rather religious (read: they spend a lot of time listening to stories from the Bible; not to be confused with being “spiritual” or belonging to religions other than the Abrahamic ones). So while we must have a moral code, it must be flexible (i.e., able to change with time), and we must seek a balance between literal and imaginative interpretations, just as we do with the U.S. Constitution.
Why and how is a rigid moral authority dangerous? Our authority must change with time because new developments in our understanding of the world must update how we interact with others. For example, if science finds tomorrow that most animals have a brain structure that allows them to feel emotional pain the same way humans do, we will have to treat them with more empathy; research on dolphin cognition has recently prompted an effort by scientists to have dolphins considered and treated as nonhuman persons. Furthermore, if we don’t explain why we follow certain rules, we won’t understand why we follow them and therefore won’t know why violating them is bad. This unquestionability is what makes God as moral authority, or strict constructionists as lawmakers, particularly dangerous, and it leads to prejudice and ignorance. Our moral code must therefore be based on empirical research, with every rule subject to intense scrutiny (think of two-year-olds who keep asking, “but why?”).
But why should we have a moral code in the first place? Perhaps if everyone followed a moral code of some sort, the world would have fewer injustices and atrocities. Getting people to follow a moral code of any kind is a completely different issue.
Nonhuman Personhood for Dolphins
Scientific Misinformation
Stuart Hameroff, MD, is an anesthesiologist and professor at the University of Arizona. In one of many articles and videos about consciousness on the Huffington Post, Hameroff describes how anesthesia can help explain consciousness.
If the brain produces consciousness (all aspects of the term), then it seems to follow that turning off the brain will also turn off consciousness. This is exactly how anesthetics work.
While most anesthetics are nonselective "dirty" drugs, they all produce loss of consciousness, amnesia, and immobility by either opening inhibitory ion channels or closing excitatory ion channels in neurons. The commonly used intravenous drug propofol, for example, acts by potentiating GABA-A receptors, the ubiquitous inhibitory channels of the central nervous system. Brain off = consciousness off.
Hameroff does not subscribe to this. He argues that consciousness is an intrinsic part of the universe and that anesthetics simply disconnect it from the brain. He also thinks that by saying "quantum" a lot, he can scientifically prove the existence of the soul.
What's scary is that Hameroff has "MD" and "Professor" next to his name. Will Joe the Plumber see through the misinformation?
Don't take the HuffPost too seriously.
Further Blending the Arts and Sciences
Ever hear of “neurocinematics,” a term coined by Uri Hasson of Princeton University?
If not, it’s essentially a set of methods, built on instruments that used to be handled mostly by scientists, that neuromarketers now offer to filmmakers. Using tools such as biometric devices (to track eye movements and heart rate), EEG (to analyze brain waves), and fMRI (to record brain activity), neuromarketers can help filmmakers better understand their viewers’ reactions, whether to completed pieces, screenings, or trailers (the latest Harry Potter movie trailer was tested with neurocinematic methods).
“Under the assumption that mental states are tightly related to brain states” (a hypothesis widely accepted by neuroscientists and many philosophers), Hasson and colleagues found that “some films can exert considerable control over brain activity and eye movements.”
Neuromarketers ensure the reliability of their findings using several techniques.
To provide a baseline for measuring viewers’ brain activity, and to avoid measuring noise unrelated to the task at hand, neuromarketers first scan participants while they watch non-stimulating targets (e.g., a fixation cross on a gray background), which should elicit little response. They then compare this response (or lack thereof) to the one elicited by a clip. Some participants may even be asked to watch a clip three or four times for comparison purposes.
Because the response of a single participant does not say much about a clip, neuromarketers use inter-subject correlation (ISC) analysis to ensure further reliability. ISC lets them “assess similarities in the spatiotemporal responses across viewers’ brains,” and these correlations can “extend far beyond the visual and auditory cortices” to other areas, such as the lateral sulcus (LS), the postcentral sulcus, and the cingulate gyrus.
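For readers who like to see the arithmetic, here is a minimal sketch of what an inter-subject correlation computation might look like in Python, assuming fMRI time courses arranged as a (viewers × regions × timepoints) array; the array shapes, the 0.3 threshold, and the simulated data are my own illustrative choices, not Hasson’s pipeline.

```python
# A minimal, hypothetical sketch of inter-subject correlation (ISC):
# for each brain region, correlate every pair of viewers' fMRI time
# courses and average the pairwise correlations. Shapes, the 0.3
# threshold, and the simulated data are illustrative only.
import numpy as np
from itertools import combinations

def inter_subject_correlation(bold):
    """bold: array of shape (n_viewers, n_regions, n_timepoints)."""
    n_viewers, n_regions, _ = bold.shape
    isc = np.zeros(n_regions)
    for r in range(n_regions):
        pair_rs = [np.corrcoef(bold[i, r], bold[j, r])[0, 1]
                   for i, j in combinations(range(n_viewers), 2)]
        isc[r] = np.mean(pair_rs)
    return isc

# Toy usage: 10 viewers, 500 regions, 300 fMRI volumes of simulated data,
# where part of each signal is shared (stimulus-driven) and the rest is noise.
rng = np.random.default_rng(0)
shared = rng.standard_normal((1, 500, 300))
noise = rng.standard_normal((10, 500, 300))
isc = inter_subject_correlation(0.7 * shared + noise)
print(f"{np.mean(isc > 0.3):.0%} of regions show substantial ISC")
```

The same ISC value can then be compared across clips, or between intact and scrambled scene orders, as in the experiments described below.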
In 2008, Uri Hasson and colleagues measured how viewers’ brains responded to different types of film, ranging from real-life footage, documentaries, and art films to Hollywood productions. While the Alfred Hitchcock episode "Bang! You're Dead" elicited similar responses across all viewers in about 65% of the cortex, Larry David’s Curb Your Enthusiasm did so in only about 18%.
Hasson suggests that the ISC level indicates the extent of control a filmmaker has, intentionally or not, over his viewers’ experiences, which would leave Hitchcock with considerably more control than Larry David.
Hasson’s team also changed the order of scenes for different participants and assessed viewers’ reactions, finding that the more coherent the scene order, the higher the ISC in brain regions involved in extracting meaning. Reordering scenes this way can help filmmakers determine which sequence most effectively promotes viewers’ understanding.
Phil Carlsen and Devon Hubbard of MindSign in San Diego, CA, suggest that neurocinematics can help filmmakers decide which actor will elicit more brain activity from viewers and, consequently, give a film a better shot at the box office. The method could also help assign movie ratings, depending on how brain areas associated with disgust and approval respond.
Carlsen has also found, not surprisingly, that 3D scenes activate the brain more than 2D ones, particularly when viewers wear modern polarized glasses rather than the older red-and-blue ones.
Neurocinematics could change the film industry immensely. Whether a filmmaker wants near-complete control or just enough to ensure his message comes across, he can use this method to make it happen. Even the U.S. Advertising Research Foundation is taking the new method seriously, working to define regulatory standards and quality controls for it, says Ron Wright of Sands Research.
While some may worry that neurocinematics kills creativity or intrudes on human interest and personal privacy, others might find it revolutionary, offering filmmakers more opportunities to create their ideal pieces and viewers more engaging, worthwhile films.
Wright, along with neuromarketing consultant Roger Dooley, would likely argue that the method is far from invading human interest or privacy. Wright believes there are too many variables in determining the human mental “buy button,” which would hypothetically lead someone to spend money – in this case on a film. Dooley does not believe that neuromarketers will “ever find some sort of magic spot that will allow [them] to accurately predict whether someone will purchase a product or not.”
Neurocinematics, agreeable or not, is becoming an important element in the blending of the arts and sciences.
Blending with dozens of other fields and producing remarkable tools and methods along the way, neuroscience, in my opinion, is headed down quite a revolutionary road.
Sources & Additional interesting, related sites:
Songs in the key of EEG – Michael Brooks, NewScientist
Neurocinema – film producer Peter Katz, YouTube
MindSign Neuromarketing – MindSign
Science of the Movies - MindSign Neuromarketing – Nar Williams, YouTube
Neurocinematics: The Neuroscience of Film (DOI: 10.3167/proj.2008.020102) – Hasson et al., Projections
Brain scans gauge horror flick fear factor – Grace Wong, CNN
Subconscious Security: Our Next Big Life Investment?
Have you seen the hit summer movie Inception yet? If not, I recommend that you do, because it’s mind bottling (yeah, Blades of Glory’s Chazz Michael Michaels would approve). Either way, seen it or not, the movie piqued my curiosity about the ever-growing interaction between technology and our brains, our minds.
In the movie, Leonardo DiCaprio’s character, Cobb, is on a mission to plant an idea into another character's mind in order to safely, legally, go home to his kids.
Cobb and his colleagues use a PASIV (Portable Automated Somnacin IntraVenous) device to access the target’s mind while he is sleeping on an airplane.
Unfortunately for Cobb’s team, the target’s mind has what Cobb calls “subconscious security,” trained mental "projections" set up in his mind to protect it from intruders. To implant the idea, they have to find a way around this security, but how? Will they make it? The movie’s a must-see – watch and find out!
So, how is this far-off movie-world of inception, dream-sharing, and mind-reading relevant, or worthy of discussion at present?
Well, haven’t fMRI results been cluing us in on some of our emotions, conscious or not? In my last post, I discussed how one’s level of empathy correlates with activity in the ACC (anterior cingulate cortex) as recorded by fMRI. Whether lab volunteers knew it or not, they (their brains, really) reacted more strongly to the pain of those similar to them.
And now, with the rapid succession of advances in brain-imaging technology, mind-reading and even dream-recording do not seem so unrealistic. “Subconscious security” might actually come in handy if our privacy becomes too vulnerable.
Recent articles (one and two) discuss how researchers at Northwestern University have developed technology that can be used to “get inside the mind of a terrorist.” In their experiment, they created a mock terror-attack plan whose details were known and memorized by only some lab participants. The researchers found a correlation between the participants' P300 brain waves and their guilty knowledge, with “100 percent accuracy,” as J.P. Rosenfeld says.
Measuring the waves with electrodes attached to the participants' scalps, the researchers were able to determine whether participants had prior knowledge of, or strong familiarity with, certain dates, places, times, and so on.
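To make the logic concrete, here is a minimal, hypothetical sketch of a P300 “guilty knowledge” comparison in Python; the sampling rate, time window, microvolt margin, and simulated epochs are my own illustrative choices, not Rosenfeld’s actual protocol.

```python
# Hypothetical sketch of a P300 "guilty knowledge" comparison: average the
# EEG epochs evoked by probe items (details of the mock attack) and by
# irrelevant items, then compare amplitudes in the window where the P300
# appears. All parameters here are illustrative, not Rosenfeld's protocol.
import numpy as np

FS = 250  # sampling rate, Hz
P300_WINDOW = slice(int(0.3 * FS), int(0.6 * FS))  # roughly 300-600 ms post-stimulus

def mean_p300_amplitude(epochs):
    """epochs: (n_trials, n_samples) EEG from a midline electrode such as Pz."""
    erp = epochs.mean(axis=0)          # trial-averaged event-related potential
    return erp[P300_WINDOW].mean()     # mean amplitude in the P300 window

def knows_the_details(probe_epochs, irrelevant_epochs, margin_uv=1.0):
    """Flag guilty knowledge if probes evoke a clearly larger P300 (in microvolts)."""
    return mean_p300_amplitude(probe_epochs) - mean_p300_amplitude(irrelevant_epochs) > margin_uv

# Toy usage with simulated 1-second epochs: the "guilty" subject's probe
# trials carry an extra bump peaking near 400 ms.
rng = np.random.default_rng(1)
t = np.arange(FS) / FS
probe = rng.standard_normal((40, FS)) + 3 * np.exp(-((t - 0.4) ** 2) / 0.005)
irrelevant = rng.standard_normal((40, FS))
print(knows_the_details(probe, irrelevant))  # True for this simulated subject
```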
While TIME writer Eben Harrell notes the threat this type of experiment poses to privacy, as well as its clear limitations (confounding factors such as familiarity that stems not from guilty knowledge but from fond memories, say, of a hometown), he also notes that conventional interrogations can be more harmful than these experiments, and that accuracy can be improved simply by presenting more details to participants.
In her 2008 article, Celeste Biever tells readers how scientists, particularly Yukiyasu Kamitani and his colleagues, have managed to analyze brain scans and reconstruct images that lab volunteers had been viewing. The scientists believe their work has the potential to yield reconstructions with finer detail and even color.
John-Dylan Haynes of the Max Planck Institute in Germany, referred to in Biever’s article, says that the “next step is to find out if it is possible to image things that people are thinking of.” He even considers the possibility of making “a videotape of a dream” in the near future.
However, as Berns points out in an August 2010 article by Graham Lawton and Clare Wilson, ethical issues will likely be raised if this brain-imaging technology becomes too entangled with advertising and marketing companies. People would probably feel uncomfortable if advertisers became too knowledgeable about the workings of their minds, perhaps even enticing them to buy things they “don’t want, don’t need, and can’t afford.”
For now, though, the advertising approaches using this technology seem innocent enough. With EEG machines, advertisers can determine which designs, words, or advertisements elicit certain patterns of brain activity – in other words, which receive the most attention from potential buyers.
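As a rough illustration of the kind of comparison an EEG-based ad test might run, here is a toy Python sketch that compares alpha-band (8–12 Hz) power between two designs, on the common (and crude) heuristic that lower alpha power accompanies greater attention; the sampling rate, function names, and the heuristic itself are my own illustrative choices, not any firm’s actual method.

```python
# Toy sketch: compare a crude EEG "attention" proxy between two ad designs.
# Lower alpha-band (8-12 Hz) power is often read as a sign of greater
# engagement. Generic illustration only, not a real neuromarketing pipeline.
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate, Hz

def alpha_power(eeg):
    """eeg: 1-D signal from one electrode recorded while a viewer watches an ad."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)
    alpha_band = (freqs >= 8) & (freqs <= 12)
    return psd[alpha_band].mean()

def more_engaging(design_a_eeg, design_b_eeg):
    """Return which design shows lower alpha power, i.e. (crudely) more attention."""
    return "design A" if alpha_power(design_a_eeg) < alpha_power(design_b_eeg) else "design B"

# Toy usage: 30 seconds of simulated EEG per design; design B carries a
# strong 10 Hz rhythm, so design A is flagged as the more engaging one.
rng = np.random.default_rng(2)
design_a = rng.standard_normal(30 * FS)
design_b = rng.standard_normal(30 * FS) + 2 * np.sin(2 * np.pi * 10 * np.arange(30 * FS) / FS)
print(more_engaging(design_a, design_b))
```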
EEG machines have been used not only by advertisers but also by consumer companies, for purposes both inspiring and not so inspiring. Starting with the latter, some companies (e.g., Mattel, Emotiv) have sold cheap EEG devices to mainstream consumers, particularly gamers. One Californian company, NeuroSky, has built an emotion-detecting device that determines, say, whether the wearer is relaxed or anxious.
While not completely necessary or inspiring, devices like these are fascinating. I’d like to see what some device thinks I’m feeling, and how accurate it is.
Fortunately, there are inspiring uses for these EEG devices. As Tom Simonite discusses in his April 2009 article, people with disabilities or paralysis can use the devices to help control wheelchairs and even type on a computer. One engineer, Adam Wilson, even updated his Twitter feed using one of these systems (BCI2000): “USING EEG TO SEND TWEET.”
And it’s not only EEG devices that can help those with disabilities or paralysis; prosthetic limbs can, too. Weiss and Hofmann, professors at the Friedrich Schiller University of Jena, write that one of their systems allows the brain to “pick up…feedback from the prosthesis as if it was one’s own hand,” easing phantom pain.
DARPA (the Defense Advanced Research Projects Agency) has pushed prostheses even further, aspiring to restore the experience of touch to wearers. It awarded The Johns Hopkins University Applied Physics Laboratory (APL) over $30 million to "manage and test the Modular Prosthetic Limb (MPL)," which would use a "brain-controlled interface" to perform desired actions.
Despite hurdles along the way (as noted by Andrew Nusca in his blog), the lab has released a final design, described in a reprinted online article. It offers an impressive 22 degrees of motion, weighs roughly nine pounds (about the same as a natural limb), and responds to the wearer’s thoughts.
While the blooming relationship between technology and the brain has raised ethical questions about privacy and its use, it has also brought hope and awe to people. Maybe, with the seeming innocence of neuro-marketing thus far and the hope inspired by research and development, we won't need to turn to “subconscious security” just yet.
Sources:
Reading Terrorists Minds About Imminent Attack: Brain Waves Correlate to Guilty Knowledge in Mock Terrorism Scenarios – Science Daily (reprint)
Fighting Crime by Reading Minds – Eben Harrell of TIME
'Mind-reading' software could record your dreams – Celeste Biever of New Scientist
Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders – Kamitani et al., Neuron
Mind-reading marketers have ways of making you buy – Graham Lawton and Clare Wilson of New Scientist
Mind-reading headsets will change your brain – Tom Simonite of New Scientist
Prosthesis with information at its fingertips – Weiss and Hofmann of FS University of Jena
DARPA aims to control prosthetic limbs with brain implants – Andrew Nusca of Smart Planet
Thought Control of Prosthetic Limbs Funded by DARPA – Neuroscience News (reprint)
One Giant Leap for Mankind…in the Wrong Direction
On June 3rd, six volunteers were locked inside a mock space capsule to endure a 17-month simulation of a mission to Mars, called Mars500. This will be the longest trial of its kind; during the simulation, the all-male crew is expected to perform the operations required to complete a round-trip Mars mission. In addition, they must maintain relative physical and mental health in an isolated, confined environment. Scientists hope to gain perspective on the psychological stresses and effects an actual long-term space mission would have on its crew.
While such a lengthy test will provide useful data to psychologists and space scientists alike, it also seems to be a preliminary gesture towards a future of deep-space travel that is dominated entirely by men.
Women were excluded due to “tension between the sexes.” An organizer of the simulation alluded to a previous co-ed experiment in which a Russian volunteer attempted to kiss his female colleague at a New Year’s Eve party. As a result, a highly qualified female cosmonaut was not allowed to participate in this experiment.
For generations, similar fears of inappropriate sexual interaction have been used to keep women from working alongside their male counterparts in other settings. Women have been treated as a distracting element, to the point that their mere presence is thought to jeopardize the success of an endeavor.
An all-woman crew was considered unfeasible because, according to the organizers, only one woman out of 5,600 applicants was qualified for the job. Yet, considering that the discrimination was prompted by the actions of a man, it is a wonder that men aren’t considered unfit for such experiments because of a disqualifying inability to contain their sexual impulses. Ideally, of course, equal opportunity would be given to members of both genders.
This segregation will deprive scientists of further data on co-ed experiments of this type, making future real-life co-ed missions improbable, if not altogether impossible.
I can’t help but be reminded of the hackneyed but somehow lovable way in which Star Trek: The Next Generation made commentary on social issues, in this case through the introduction of a greedy, swindling, misogynistic race called the Ferengi, who are shocked to learn that members of Starfleet work alongside their females.
Indeed, in many utopian science-fiction tales, the future human race is portrayed as having reached some kind of social maturity, having outgrown its former preoccupation with delineating differences of gender, race, nationality, or religion. It is unfortunate that, at present, our rate of technological growth far surpasses that of our social progress.
SOURCE:
http://blogs.discovermagazine.com/80beats/2010/06/03/see-you-in-520-days-pretend-astronauts-begin-simulated-trip-to-mars/