“Out, damned spot! Out, I say!”
For those of you who’ve forgotten or perhaps even repressed your memories of high school English class, the line in the title is the cry of the power-hungry and all-around homicidal maniac Lady Macbeth, the female lead in Shakespeare’s great tragedy, Macbeth. Having committed regicide so that her husband might become king, she becomes convinced that she cannot wash King Duncan’s blood from her hands. Thoughts are soliloquized, guilt is manifested in madness, and archetypes are born.
Curtain.
Moral Code
Why is it wrong to kill babies? Why is it wrong to take advantage of mentally disabled people? To lie with the intention of cheating someone? To steal, especially from the poor? Is it possible that medieval European society was wrong to burn women suspected of witchcraft? Or did it save mankind from impending doom by doing so? Is it wrong to kick rocks when you’re in a bad mood?
Questions of right and wrong, such as these, have for millennia been answered by religious authorities who refer to the Bible for guidance. While the vast majority of people still turn to Abrahamic religious texts for moral guidance, there are other options for developing a moral code. Bibles aside, we could use our “natural” sense of what’s right and wrong to guide our actions; a code based on this natural sense would come from empirical studies of what most people consider right or wrong. Ignoring the logistics of creating such a code, we should note that its rules would have no reasoning behind them other than “we should do this because this is what comes naturally.” How does that sound? Pretty stupid.
The other option is to develop a moral code based on subjective metaphysical ideas, with a heavy backing of empirical facts. “Subjective” means these ideas won’t have an undeniability to them; they are what they are and that’s it. Take as an example the rule “we should not kill babies.” There is no objective, scientific reason why we shouldn’t kill babies. “Wait!” you say, “killing babies is wrong because it harms the proliferation of our species and inflicts pain on the mothers and the babies themselves!” But why should we care about the proliferation of our species? About hurting some mother or her baby? While no one will deny that we should care about these things, there is nothing scientific that will explain why. Science may give us a neurological reason why we care about species proliferation (it will go something like, “there is a brain region that makes us care about the proliferation of our species”), but why should we be limited to what our brains tend to make us think or do?
Subjective rules like these must therefore be agreed upon with the understanding that they are subject to change. Interestingly, some argue that science can answer moral questions because it can show us what “well-being” is, how we can get it, and so on. But the scientific reason why we should care about well-being is nowhere to be found. The result is that we can use science to answer moral questions, but we must first agree (subjectively) that we want well-being. Science by itself cannot answer moral questions because it shows us what is rather than what ought to be. (Sam Harris is the most prominent proponent of the claim that science can be an authority on moral issues; his technical faux pas is an embarrassment to those who advocate “reason” in conduct.)
But back to the idea of metaphysically constructed moral codes. What properties should such a code have, and how should we go about synthesizing it? Having one fixed, rigid source as an authority for moral guidance is dangerous. Make no mistake: there must be some authority on moral questions, but it must be flexible and adaptable; it must stand the test of time on the one hand and adjust to novel conditions on the other. This sounds a lot like the U.S. Constitution. But even with such a document, which has provided unity and civil progress since the country’s founding, there are some who take its words literally and allow no further interpretation; if it’s not written in the Constitution, it can’t be in the law, they argue (see Strict Constructionism versus Judicial Activism). These folks also tend to be rather religious (read: they spend a lot of time listening to stories from the Bible; not to be confused with the “spiritual” or with followers of religions other than the Abrahamic ones). So while we must have a moral code, it must be flexible (i.e., able to change with time), and we must seek a balance between literal and imaginative interpretations, just as we do with the U.S. Constitution.
Why and how is a rigid moral authority dangerous? Our authority must change with time because new developments in our understanding of the world should update how we interact with others. For example, if science finds tomorrow that most animals have a brain structure that lets them feel emotional pain the way humans do, we will have to treat them with more empathy; research on dolphin cognition has recently prompted an effort by scientists to have dolphins recognized and treated as nonhuman persons. Furthermore, if we don’t explain why we follow certain rules, we won’t understand their purpose and therefore won’t know why violating them is bad. This unquestionability, whether of God as moral authority or of Strict Constructionists as lawmakers, is what makes them particularly dangerous and leads to prejudice and ignorance. Our moral code must therefore be based on empirical research, with every rule subject to intense scrutiny (think of two-year-olds who keep asking, “but why?”).
But why should we have a moral code in the first place? Perhaps if everyone followed a moral code of some sort, the world would have fewer injustices and atrocities. Getting people to follow a moral code of any kind is a completely different issue.
A Real-Life Terminator?
In the 1984 film The Terminator, an artificially intelligent machine is sent back in time from 2029 to 1984 to exterminate a woman named Sarah Connor. The Terminator had not only a metal skeleton but also an external layer of living tissue, and was thus deemed a cyborg: a being with both biological and artificial parts. In 1984, no such cyborgs existed in the real world. Fourteen years later, that would change.
Kevin Warwick is a Professor of Cybernetics at the University of Reading in England, and in 1998 he became the world’s first cyborg. Under only local anesthetic, doctors implanted a small silicon chip transponder into his forearm. The chip emitted a unique signal that allowed him to be tracked throughout his workplace, and with a clench of his fist he was able to turn lights on and off, as well as operate doors, heaters, and computers.
To take the experiment to the next level, Warwick received another implant in 2002: a one-hundred-electrode array implanted into the median nerve fibers of his left forearm. With this implant, he was able to control electric wheelchairs and a mechanical arm. The neural signals used to control the arm were detailed enough that the mechanical arm could mimic Warwick’s own arm perfectly. While visiting Columbia University in New York, Warwick was even able to control the mechanical arm back in England and receive sensory feedback transmitted from the arm’s fingertips (the electrode array could also be used for stimulation).
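To make the architecture concrete: the system Warwick describes is essentially a closed loop that reads neural signals, decodes them into a motor command, actuates the arm, and stimulates the nerve with the arm’s sensor readings. Below is a minimal Python sketch of such a loop. It is an illustration only; the device objects and every function name (read_nerve_samples, set_grip, stimulate_nerve) are hypothetical stand-ins, not Warwick’s actual system.

```python
import numpy as np

def decode_grip_command(samples: np.ndarray, gain: float = 5.0) -> float:
    """Map raw median-nerve activity to a grip strength in [0, 1].

    A deliberately crude decoder: rectify and average the signal,
    then squash it. Real decoders are trained on recorded neural data.
    """
    activity = float(np.mean(np.abs(samples)))
    return min(max(activity / gain, 0.0), 1.0)

def control_loop(sensor, arm):
    """Hypothetical read-decode-actuate-stimulate cycle."""
    while True:
        samples = sensor.read_nerve_samples()       # frame from the electrode array
        arm.set_grip(decode_grip_command(samples))  # command the mechanical hand
        pressure = arm.read_fingertip_pressure()    # sensory feedback path
        sensor.stimulate_nerve(amplitude=pressure)  # close the loop via stimulation
```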
Although Warwick’s work could profoundly affect the world of medicine through its potential to aid those with nervous system damage, it has been considered quite controversial. After his first implant, Warwick announced that his enhancement made him a cyborg. But his work raises questions: when does a cyborg become a robot? If these types of implants become more common in the future, how would the population feel about these “enhanced” individuals? One day, implants like these could be used for anything from carrying a travel visa to storing our medical records, blood type, and allergies in case of emergency. Warwick is proud of his work because he is pioneering how humans can be integrated with computerized systems, but he has his own concerns as well. In one interview, he claimed it is a realistic possibility that humans will one day create artificial beings so intelligent that we won’t be able to turn them off. Will cybernetics research ever take us that far? We will just have to wait and see.
For more information on the work of Kevin Warwick, visit his website.
Mind the Gap
The discoveries of modern neuroscience have certainly heightened our understanding of the brain and its functions, and have begun to provide a physical groundwork for the complicated problem of effectively investigating the mind. While it is certainly beneficial to establish physical principles that underlie the brain's cognitive functions, how does this affect the larger endeavor of understanding the mind? Neuroscientists such as Rebecca Saxe of MIT are converging on things like the neuroanatomical basis of moral judgment and just scraping the surface of what might bridge the gap between what physically "is" and what metaphysically "ought" to be. Saxe proposes that she has pinpointed the right temporoparietal junction (RTPJ) as a brain center for making moral judgments, and she has conducted experiments with transcranial magnetic stimulation that can effectively change the moral judgments of her subjects. See her TED talk here for a full explanation of her study.
In the 1700s, David Hume proposed what has become widely known as the Is-Ought Problem. He called for caution in making statements about morality, about what "ought" to be, based on extrapolations from what "is": what ought to be does not necessarily follow from what is. The problem aptly applies to neuroscientists like Saxe, whose research makes strong suggestions about the neural basis of moral judgment and attempts to bridge the is-ought gap. All of this research is building a large library of what "is" concerning the brain, but it also suggests that metaphysical concepts such as morality and meta-ethics can be reduced to neural connections and connectivity. Hume stresses that while what is and what ought to be are important revelations in and of themselves, the latter need not follow from the former. Neuroscience must respect this separation as its advances begin to encroach on many of philosophy's well-established concepts.
What I'm saying here is that modern neuroscience must use caution in drawing conclusions about human nature. Empirical evidence can certainly help us understand more abstract ideas, but the evidence and the ideas must remain separate with respect to causality. Making discoveries about brain function and the empirical science behind things like emotion or judgment is a valiant and respectable scientific pursuit. However, it must be kept separate and distinct from the pursuit of understanding how we ought to be or act. Our moral thought is something more abstract and multidimensional than connections between neurons and sequences of action potentials. While investigating the science of the mind is important, it should not seek to explain our existence or try to answer philosophy's greatest problems with calculations and empirical data.
For reference:
The Is-Ought Problem - David Hume via Wikipedia
Theory of Mind TED Talk - Rebecca Saxe (MIT)
David Hume, Meta-Ethics - Wikipedia
Subconscious Security: Our Next Big Life Investment?
Have you seen the hit summer movie Inception yet? If not, I recommend you do, because it’s mind bottling (yeah, Will Ferrell’s Chazz Michael Michaels of Blades of Glory would approve). Either way, seen it or not, the movie piqued my curiosity about the ever-growing interaction between technology and our brains, our minds.
In the movie, Leonardo DiCaprio’s character, Cobb, is on a mission to plant an idea into another character's mind in order to safely, legally, go home to his kids.
Cobb and his colleagues use a PASIV (Portable Automated Somnacin IntraVenous) device to access the target’s mind while he is sleeping on an airplane.
Unfortunately for Cobb’s team, the target’s mind has what Cobb calls “subconscious security,” trained mental "projections" set up in his mind to protect it from intruders. To implant the idea, they have to find a way around this security, but how? Will they make it? The movie’s a must-see – watch and find out!
So, how is this far-off movie-world of inception, dream-sharing, and mind-reading relevant, or worthy of discussion at present?
Well, haven’t fMRI results been cluing us in on some of our emotions, conscious or not? In my last post, I discussed how one’s level of empathy correlates with activity in the ACC (anterior cingulate cortex) as recorded by fMRI. Whether lab volunteers knew it or not, they (their brains, really) reacted more strongly to the pain of those similar to them.
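For readers curious what “correlates with” means in practice, the analysis behind a claim like this is often just a correlation coefficient computed across subjects. Here is a toy Python computation with fabricated numbers, purely for illustration:

```python
import numpy as np

# Fabricated illustrative data: empathy questionnaire scores and mean ACC
# activation (e.g., fMRI response amplitudes) for ten hypothetical subjects.
empathy = np.array([12, 18, 25, 31, 34, 40, 45, 51, 55, 60])
acc_activity = np.array([0.8, 1.1, 1.0, 1.6, 1.5, 2.0, 1.9, 2.4, 2.2, 2.7])

# Pearson r: +1 would mean a perfect positive linear relationship.
r = np.corrcoef(empathy, acc_activity)[0, 1]
print(f"Pearson r = {r:.2f}")  # higher empathy, stronger ACC response
```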
And now, with the rapid succession of advancements in brain-imaging technology, mind-reading and even dream-recording do not seem so unrealistic. “Subconscious security” might actually come in handy if our privacy becomes too vulnerable.
Recent articles (one and two) discuss how researchers at Northwestern University have developed technology that can be used to “get inside the mind of a terrorist.” In their experiment, they created a mock terror-attack plan with details known and memorized only by some lab participants. The researchers found a correlation between the participants’ P300 brain waves and their guilty knowledge with “100 percent accuracy,” as J.P. Rosenfeld says.
Measuring the waves with electrodes attached to the participants’ scalps, the researchers were able to determine whether participants had prior knowledge of, or strong familiarity with, certain dates, places, and times.
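The logic of such a “guilty knowledge” test is simple enough to sketch: slice the EEG into epochs time-locked to each stimulus, average the epochs for each stimulus type, and compare the response amplitude in the P300 window (roughly 300–600 ms after onset). The Python sketch below is a simplified illustration of that idea, not Rosenfeld’s actual protocol; the sampling rate and the 1.5x decision threshold are assumptions.

```python
import numpy as np

FS = 250  # assumed EEG sampling rate in Hz

def p300_amplitude(epochs: np.ndarray) -> float:
    """Mean ERP amplitude in the 300-600 ms post-stimulus window.

    epochs: (n_trials, n_samples) array, one epoch per stimulus
    presentation, time-locked to stimulus onset.
    """
    erp = epochs.mean(axis=0)                  # averaging suppresses background EEG
    window = erp[int(0.3 * FS):int(0.6 * FS)]  # classic P300 latency range
    return float(window.mean())

def knows_detail(probe_epochs: np.ndarray, irrelevant_epochs: np.ndarray,
                 ratio: float = 1.5) -> bool:
    """Flag guilty knowledge if the probe item (e.g., the planned attack
    site) evokes a much larger P300 than irrelevant items do."""
    return p300_amplitude(probe_epochs) > ratio * p300_amplitude(irrelevant_epochs)
```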
While TIME writer Eben Harrell notes the threat this type of experiment poses to privacy, along with its clear limitations (confounding factors such as familiarity that stems not from guilty knowledge but from fond memories of, say, a hometown), he also notes that conventional interrogations can be more detrimental than these experiments, and that accuracy can be improved simply by presenting more details to participants.
In her 2008 article, Celeste Biever tells readers how scientists, particularly Yukiyasu Kamitani and his colleagues, have managed to analyze brain scans and reconstruct images that lab volunteers had seen. The scientists believe their work has the potential to produce sharper reconstructions, and even to recover color.
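Stripped of the multiscale machinery named in the paper’s title, the core idea of such decoding can be approximated with a plain linear model: learn a mapping from voxel activity to pixel intensities on images the subject has already viewed, then apply it to a new scan. Here is a toy ridge-regression sketch under that simplification; it is a stand-in for, not a reproduction of, Kamitani’s method.

```python
import numpy as np

def train_decoder(voxels: np.ndarray, images: np.ndarray,
                  alpha: float = 1.0) -> np.ndarray:
    """Ridge regression from voxel patterns to pixel intensities.

    voxels: (n_trials, n_voxels) fMRI activity recorded while viewing images
    images: (n_trials, n_pixels) the viewed images, flattened
    Returns W of shape (n_voxels, n_pixels) such that voxels @ W ~ images.
    """
    n_vox = voxels.shape[1]
    gram = voxels.T @ voxels + alpha * np.eye(n_vox)  # regularized Gram matrix
    return np.linalg.solve(gram, voxels.T @ images)

def reconstruct(new_scan: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Predict the viewed image from a new (1, n_voxels) scan."""
    return new_scan @ W
```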
John-Dylan Haynes of the Max Planck Institute in Germany, referred to in Biever’s article, says that the “next step is to find out if it is possible to image things that people are thinking of.” He even considers the possibility of making “a videotape of a dream” in the near future.
However, as Berns points out in an August 2010 article by Graham Lawton and Clare Wilson, ethical issues will likely be raised if this brain-imaging technology becomes too entangled with advertising and marketing companies. People would probably feel uncomfortable if advertisers became too knowledgeable about the workings of their minds, perhaps even enticing them to buy things they “don’t want, don’t need, and can’t afford.”
For now, though, the advertising approaches using this technology seem innocent enough. With EEG machines, advertisers can determine which designs, words, or advertisements elicit certain patterns of brain activity – in other words, which receive the most attention from potential buyers.
EEG machines have been used not only by advertisers but also by consumer-product companies, for purposes both inspiring and not so inspiring. Starting with the latter, some companies (e.g., Mattel, Emotiv) have sold cheap EEG devices to “mainstream consumers,” particularly gamers. One Californian company, NeuroSky, has built an emotion-detecting device that determines, say, whether one is relaxed or anxious.
While neither necessary nor particularly inspiring, devices like these are fascinating. I’d like to see what one of these devices thinks I’m feeling, and how accurate it is.
Fortunately, there are inspiring uses for these EEG devices. As discussed by Tom Simonite in his April 2009 article, people with disabilities or paralysis can use the devices to help control wheelchairs and even type on a computer. One engineer, Adam Wilson, updated his Twitter using one of these systems (BCI2000): “USING EEG TO SEND TWEET.”
EEG devices aren’t the only technology helping those with disabilities or paralysis; prosthetic limbs can as well. Professors Weiss and Hofmann of the Friedrich Schiller University of Jena report in their article that one of their systems allows the brain to “pick up…feedback from the prosthesis as if it was one’s own hand,” easing phantom pain.
DARPA (the Defense Advanced Research Projects Agency) has pushed prostheses even further, aspiring to give wearers back the experience of touch. It awarded The Johns Hopkins University Applied Physics Laboratory (APL) over $30 million to "manage and test the Modular Prosthetic Limb (MPL)," which would use a "brain-controlled interface" to perform desired actions.
Despite the hurdles along the way (as noted by Andrew Nusca in his blog), the lab has released a final design, as described in a reprinted online article. It offers an impressive 22 degrees of motion, weighs about nine pounds (the weight of an average natural limb), and responds to the wearer’s thoughts.
While the blooming relationship between technology and the brain has raised ethical questions about privacy and its use, it has also brought hope and awe to people. Maybe, with the seeming innocence of neuro-marketing thus far and the hope inspired by research and development, we won't need to turn to “subconscious security” just yet.
Sources:
Reading Terrorists Minds About Imminent Attack: Brain Waves Correlate to Guilty Knowledge in Mock Terrorism Scenarios – Science Daily (reprint)
Fighting Crime by Reading Minds – Eben Harrell of TIME
'Mind-reading' software could record your dreams – Celeste Biever of New Scientist
Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders – Kamitani et al. of Neuron
Mind-reading marketers have ways of making you buy – Graham Lawton and Clare Wilson of New Scientist
Mind-reading headsets will change your brain – Tom Simonite of New Scientist
Prosthesis with information at its fingertips – Weiss and Hofmann of FS University of Jena
DARPA aims to control prosthetic limbs with brain implants – Andrew Nusca of Smart Planet
Thought Control of Prosthetic Limbs Funded by DARPA – Neuroscience News (reprint)
Toasters With Feelings
Anthropomorphism is the attribution of human characteristics to inanimate objects, animals, or God. It has been a hallmark of faiths and religions worldwide. Humans have a natural tendency to assign intentions and desires to inanimate objects ("my computer isn't feeling well today - he's so slow!"), but they also strip "lower" beings (animals) of those same human characteristics.
We have a history of treating animals unnecessarily cruelly. I don't mean killing for food - that's necessary for our survival; I'm referring to dogfights, hunting, and other violence. We didn't even think animals could feel pain until quite recently!
Why do we think of lifeless forms as agents with intentions but of actual living creatures as emotionally inferior clumps of cells?
Could it be that the need to rationalize phenomena is simply stronger when the phenomena have absolutely no visible explanation?
And do toasters really have feelings??
One Giant Leap for Mankind…in the Wrong Direction
On June 3rd, six volunteers were locked inside a mock space capsule to endure a 17-month simulation of a mission to Mars, called Mars500. This will be the longest trial of its kind; during the simulation, the all-male crew is expected to perform the operations required to complete a round-trip Mars mission. In addition, they must maintain relative physical and mental health in an isolated, confined environment. Scientists hope to gain perspective on the psychological stresses and effects an actual long-term space mission would have on its crew.
While such a lengthy test will provide useful data to psychologists and space scientists alike, it also seems to be a preliminary gesture towards a future of deep-space travel that is dominated entirely by men.
Women were excluded due to “tension between the sexes.” An organizer of the simulation alluded to a previous co-ed experiment, in which a Russian volunteer attempted to kiss his female associate at a New Year’s Eve party. As a result, a highly qualified female cosmonaut was not allowed to participate in the experiment.
Similar fears of inappropriate sexual interaction have been used for generations to prevent women from accompanying their male counterparts in other situations. Women have been considered a distracting element, to the point that their mere presence is thought to jeopardize the success of an endeavor.
An all-woman crew was considered infeasible because, according to the organizers, only one woman out of 5,600 applicants was qualified for the job. Yet considering that the discrimination was prompted by the actions of a man, it is a wonder that men aren’t deemed unfit for such experiments on account of their disqualifying inability to contain their sexual impulses. Ideally, of course, equal opportunity would be given to members of both genders.
This segregation will deprive scientists of any further data on co-ed experiments of this type, rendering future real-life co-ed missions improbable, if not altogether impossible.
I can’t help but be reminded of the hackneyed but somehow lovable way in which Star Trek: The Next Generation made commentary on social issues, in this case through the introduction of a greedy, swindling, misogynistic race called the Ferengi, who are shocked to learn that members of Starfleet work alongside their females.
Indeed, in many Utopian science fiction tales, the future human race is portrayed as one that has reached some kind of social maturity and outgrown its former preoccupation with delineating differences of gender, race, nationality, or religion. It is unfortunate that, at present, our rate of technological growth far surpasses that of our social progress.
SOURCE:
http://blogs.discovermagazine.com/80beats/2010/06/03/see-you-in-520-days-pretend-astronauts-begin-simulated-trip-to-mars/