Hey Scientists, Where’s My Jetpack?! : The future is here; it just looks a little different than expected
In almost every major futuristic science-fiction work of the last century, jetpacks and flying cars are seemingly as ubiquitous as today's oversized SUVs, lining the closets and garages of every hardworking American. Understandably, in the year 2011, this has led many disenchanted Trekkies and purveyors of assorted geek cultures to ask, "Well, scientists, where's my jetpack?!" While I commiserate with my fellow fans of Asimov and Adams, several recent innovations have led me to believe that we all might be overlooking just how "futuristic" the time we live in really is. Accessing Google on the iPhone is certainly as close to the Hitchhiker's Guide to the Galaxy as we may ever come. We can beam blueprints of intricate plastic objects, and now even organs, anywhere in the world and literally print them out. We have computers that can beat us at Jeopardy! And last but not least, ladies and gentlemen, I present to you Brain Driver, the thought-controlled car. On behalf of scientists everywhere, I accept your apologies, geeks.
Research has shown that we can voluntarily control the firing rates of neurons in our brains, an insight that underpins the crucial advance of brain-operated machines. John P. Donoghue at Brown University has conducted research using neural interface systems (NISs) to aid paralyzed patients. NISs allow people to control artificial limbs: individuals simply need to think about commanding their artificial limbs, and signals are sent from their brains to control the movement of those limbs! This great feat is not the only applicable result of current research on brain-machine interfaces. Dr. Frank Guenther of Boston University uses electrodes implanted in a part of the brain that controls speech to tentatively give a voice back to those who have been struck mute by brain injuries. The signals picked up by these electrodes are sent wirelessly to a machine that interprets them and synthesizes them into speech. This is especially useful for patients suffering from locked-in syndrome, in which an individual with an otherwise normal brain is unable to communicate due to specific brain damage; the technology thus allows these individuals to communicate with the world! These discoveries are not only incredibly useful, but they also reveal the astonishing feats that the field of computational neuroscience is accomplishing today.
"Magic mirror on the wall, who is the fairest one of all?" asks the evil Queen in Snow White and the Seven Dwarfs. I don't deny that growing up on Disney gave me a somewhat skewed sense of reality at times. Wouldn't it be nice if we all had our own magic mirrors, constantly reminding us how wonderful and beautiful we are in the midst of the stress that is life?
A recent study by researchers at Cornell University has shown that we may actually have such a magic mirror: Facebook, as fate would have it. Opinions vary on how internet use affects our personalities, but this study shows that Facebook can have a short-term positive effect on self-esteem.
Obviously, our brain is the most complex part of our body, but did you ever think that people would use its powers to persuade and manipulate you into buying the products seen in advertisements?
Well, with the ever-advancing state of technology these days, it is no surprise that people would create ever more amazing applications, especially in the service of consumerism. Neuromarketers, researchers who use techniques from neuroscience to study people's reactions to products, are bringing new studies to the forefront, spurred by the estimate that only 2 percent of the brain's energy is expended on conscious activity.
A.K. Pradeep, founder and chief executive of NeuroFocus, a neuromarketing firm based in Berkeley, California, believes that the only way to truly understand people's inclinations is through studying their subconscious. Therefore, NeuroFocus has led the way in this upcoming field by researching volunteers through the use of eye-tracking devices and measuring the brain's electrical frequencies.
A volunteer undergoing testing that focuses on measuring his brain's electrical frequencies and his eye movements.
By tapping into this realm, researchers are able to get a clear view of people's unconscious thoughts when viewing commercials, movie trailers, or web sites. As Dr. Pradeep says, "We basically compute the deep subconscious response to stimuli."
This process has paved the way for multiple companies formed in hopes of furthering the development of neuromarketing. Many big-name sponsors, such as Google, CBS, and Disney, have used neuromarketing to test consumer responses to advertisements, even political ones.
However, some people are concerned that companies could take advantage of consumers' thoughts and turn them against the consumers themselves.
"If I persuaded you to choose Toothpaste A or Toothpaste B, you haven't really lost much, but if I persuaded you to choose President A or President B, the consequences could be much more profound," Dr. Pradeep says.
This is not very likely for now, since companies are not focusing heavily on the political side of things, and we do still have control over our own brains.
Dr. Robert T. Knight, a professor of neuroscience and psychology at Berkeley, explains that neuromarketing may distinguish positive from negative emotions, but it cannot be specific enough to say whether a positive emotion is joy or excitement. The only reliably measurable variable is whether the viewer pays attention. No correlation has yet been established between the brain-pattern responses measured in neuromarketing and actual purchasing or other behavior.
Whatever your opinion, the initiative is just beginning, and the Advertising Research Foundation has launched a project to define industrywide standards based on a review of research done by participating neuromarketing firms.
The future looks bright for these companies as interested sponsors pour in, but only time will tell whether our brains end up being used for us or against us.
Neuromarketing - Ads That Whisper to the Brain - NYTimes.com
Imagine: a mad scientist with a ray gun shoots at a neuron somewhere in cortical layer IV of your visual area MT, burning it up in a matter of microseconds (just for fun, imagine also that the ray gun leaves everything else intact).
With one neuron missing, you probably won't notice any perceptual change. But what if, one by one, all neurons in area MT went AWOL? You'd be stuck with an annoying inability to visually detect motion.
Now imagine that for every cell that our fancy ray gun hits, it replaces it with a magical transistor equivalent. These magical transistors have wires in place of each and every dendrite, a processing core, and some wires in place of axon(s). Naturally, the computational core analyzes the sum of all inputs and instructs the axon to "fire" accordingly. Given any set of inputs to the dendrite wires, the output of the axon wires is indistinguishable from that of the deceased neuron.
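The "magical transistor" described above is essentially a weighted-sum threshold unit: it adds up the signals arriving on its dendrite wires and "fires" when the total crosses a threshold. Here is a minimal sketch of that idea; the weights and threshold are purely hypothetical illustration values, not a model of any real MT neuron.

```python
def transistor_neuron(inputs, weights, threshold=1.0):
    """Return 1 (spike) if the weighted sum of dendrite inputs
    reaches the firing threshold, otherwise 0 (silent)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three "dendrite wires": two excitatory, one inhibitory (negative weight).
weights = [0.6, 0.7, -0.5]

print(transistor_neuron([1, 1, 0], weights))  # both excitatory inputs active -> 1
print(transistor_neuron([1, 0, 1], weights))  # inhibition cancels excitation -> 0
```

Real neurons integrate inputs over time and space in far richer ways, but this is the input-output contract the thought experiment demands: given the same inputs, the substitute produces the same spikes.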
We can still imagine that with one neuron replaced with one magical transistor, there wouldn't be any perceptual change. But what happens when more and more cells are replaced with transistors? Does perception change? Will our subject become blind to motion, as if area MT weren't there? Or will motion detection be just as good as with the real neurons? I am tempted to vote in favor of "No change [we can believe in]," but have to remain skeptical: there is simply no direct evidence for either stance.
Ray guns aside, it is not hard to see that a computational model of a brain circuit is a candidate replacement for real brain parts (especially considering the computational success of the Blue Brain Project's cortical column, which comprises 10,000 neurons and many more connections among them). For example, we can imagine thousands of electrodes recording the inputs to area MT and feeding them to a computer model (instead of to MT neurons); the model's outputs are then delivered, via other electrodes, to the targets that the real MT projects to, and ta-da! Not so fast. This version of the upgrade doesn't shed any more light on the problem than the first, but it does raise some questions: do the neurons in a circuit have to be connected in one specific way in order for the circuit to support perception? Or is it sufficient simply for the outputs of the substitute to match those of the real circuit, given any set of inputs? And what if the whole brain were replaced with something that produced the same outputs (i.e. behavior) given a set of sensory inputs - would that "brain" still produce perception?
Recent reports of artificial life forms which have "evolved" a basic form of intelligence have caused quite a stir in the biological and computer science communities.
This would normally be the time when I remind everyone that closer scrutiny must be paid to just what is meant by "life", "evolve" and "intelligence". But while those are all fascinating philosophical questions, there is no way in which a modest little blog post could begin to cover those topics.
Instead, I'd like to draw attention to a particular aspect of Isaac Asimov's writing, of which I can't help being reminded after reading these reports. As the father of the term "robotics" and all things relating to it, Asimov dealt with nearly all of the issues surrounding artificial intelligence. A few of his fictional robot characters even developed human-like, self-aware consciousness and creativity. But the one thing that stands out about these characters is that their consciousness was rarely a design of their creators, but rather a fluke: minute variations in the mechanized construction of their positronic brains amounted to unique, creative minds.
Asimov's choice to portray conscious robots as products of random chance forces us to think about how human consciousness evolved in reality. It may be that such consciousness is not strictly required to dramatically enhance an organism's chances of survival and reproduction. We tend to assume that our superior cognitive abilities grant us an enormous advantage over other species, and that the sort of consciousness which makes us self-aware, reflective, and creative was the "end result" of a very long line of brain development. But evolution does not work toward such a specific end. Plenty of other organisms (viruses, for example) persist with just as much vigor as we do, despite lacking the cognitive powers associated with the forebrain. Perhaps only a minor, random mutation produced a dramatic and permanent change in the brain, a change which ultimately amounted to consciousness. Who knows what the odds are that such an intelligence evolved, or will evolve again in a computer simulation? At least we can be reassured that, on a long enough time scale, even the most unlikely event can occur.
In any case, Boston University's own Isaac Asimov made many a prediction with his science fiction, and many more of them can be expected to come true.
"Artificial life forms evolve basic intelligence"-Catherine Brahic
The Stone (a philosophy-oriented opinion column in The New York Times) recently published two arguments related to free will. On July 22, Galen Strawson, a professor of philosophy at Reading University, presented the idea of determinism to his readers. In short, determinism is the idea that everything is causally linked to prior events. Because one cannot control the infinite influences of genetics, culture, and history, it is impossible for one to claim free will. The choices a person makes depend on an innate set of preferences the person already has; that innate set of preferences depends in turn on another set of preferences, and so the sequence regresses. Strawson grants that this "reality" about the universe does not change anybody's opinion about free will or the responsibility one feels for one's actions (he includes himself as an ultimate disbeliever), but simply believing free will is real does not make it so.
However, The Stone published an opposing argument on July 25, a mere three days later. Before readers could wrap their minds around Strawson's theory about free will, William Egginton, a professor at Johns Hopkins University, argued that free will is at the forefront of everyone's lives. Egginton explains that humans have a tendency to explore beyond what their senses can grasp. This trait is good, since it has helped us as a species understand more about our surroundings, but it also leads to projection and sensationalism. People are quick to make assumptions and draw conclusions about facts presented to them. For example, The New York Times itself sensationalized a "finding" of dark matter when, in reality, the amount detected was barely above what would be expected by chance.
Egginton described an experiment where monkeys "were taught to respond to a cue by choosing to look at one of two patterns." The computer that was hooked up to the monkeys then determined the decision the monkeys were about to make a few fractions of a second before the monkeys' eyes looked at the pattern. The scientists declared that because the monkeys were not taking time to weigh any options, the computer could predict the decisions that the monkeys were about to make.
But, Egginton asks, were the computers really able to predict such decisions? He argues that they were not predicting decisions; they were merely reading out the neural processes that led up to the monkeys' decisions. This makes sense if one considers the processing speed of computers. They generally perform functions (especially simple ones) faster than human brains do. One could guess that these computers also work faster than monkey brains, so the computer was simply reporting the outcome of those processes faster than the monkey could act on them. It was not extrapolating what the monkey would do before the monkey had decided; it was only giving a readout of what the monkey had already decided, a little bit faster than the monkey itself. No decisions were actually being made by the computer for the monkey.
Egginton finishes by saying that humans have free will whether they like it or not. They are prisoners of freedom not because they can choose but because they must choose.
However, these two arguments left me feeling dissatisfied. Although I'm a believer in free will (I love Sartre), I can't help but think that both arguments sit at extreme ends. I assume the editors at The New York Times wanted it this way, so readers could make choices of their own about their free will.
My father has always told me that life is about balance. As a result, I try to rationally balance everything I do, say, or think. I would say that a human's ability to have free will and live freely is a combination of determinism and free will itself. In addition to a tendency toward fanaticism, projecting our knowledge onto simple facts about the universe, we humans also like to categorize and rationalize things so that they make the most sense for our own lives (I'm doing it, too!).
For example, when two people are breaking up, one of the partners may say that the breakup was inevitable while the other partner says that he or she had no idea that the breakup was coming. The partner that said the breakup was inevitable would most likely say that the breakup was a sensible thing to do, while the other partner would probably say that the breakup was random and unexpected. The person who views the breakup as a surprise will most likely feel more pain and mourn the end of the relationship more than the person who saw it coming. Despite the fact that these two people are undergoing the same breakup, they rationalize the events as determined or random depending on their point of view. The way they rationalize correlates directly with the way they choose to cope with the breakup.
For some people, saying that a higher power like determinism or even God essentially makes decisions for them makes their lives easier because they don't want to be held responsible for some of their personality traits or actions. For others, saying that they have free will and must make decisions all the time makes them feel better because they will feel like they've done all they could to change a situation when they fail or succeed. Many feel at peace when they "know" that their choices have made them who they are.
I can't say that either side is right or wrong, but life must be more nuanced than either argument says it is. At times, I feel as if certain choices I have made have definitely influenced what I've done or why I feel one way or another about a scenario. But other times, I don't think that anything I could have done would have changed what happened to me. Sure, if I fall down when I'm walking down some stairs, I certainly could have done something else, but why does that matter? For situations like that, I think that it doesn't really matter if your free will did or didn't cause that situation. It happened regardless of what caused it. That is what is most important.
We like to ruminate on how what we did affects what happened, but it seems that we need to be spending more time thinking about how what we do now will affect how we are in the future.
What do you choose?
Your Move: The Maze of Free Will - Opinionator Blog - NYTimes.com
The Limits of the Coded World - Opinionator Blog - NYTimes.com
The media is always hungry for juicy stories about anything, with topics of interest ranging from Lindsay Lohan's latest adventures to the implications of another political ethics violation. The science writers at The New York Times are no exception. Dennis Overbye confessed in an essay yesterday that some writers are so eager to report sensational findings that they sometimes hype up their stories.
Shocking! Overbye gives an example of one such NYT article, which reported the amazing story that scientists had found hints of the elusive and mysterious dark matter in a Minnesota mine. He says the article stirred up hysteria but ultimately left people disappointed, once someone bothered to report that the amount of dark matter found was not far above what chance alone would produce. Overbye goes on to condemn the internet for spreading rumors, but he fails to note that the original hyped report on dark matter was written by him!
Perhaps our trusted science writers should do a bit more research before they publish their articles. But wait! They need the stories, and they need those stories to be catchy, dammit! Their job isn't to educate readers on the current state of a scientific field; their job is to report the latest findings, and the more controversial, the better. There's a new article every day about how exercise is good for you (or is it bad? I can't remember anymore) or how prostate and breast exams for cancer have been wrong all these years (don't worry - they'll turn out to be right again next week). No wonder Americans are confused about their health.
Individual studies are great, but they have to be taken in context and have to stand the test of time. Most findings in basic science research are small; it's the knowledge collected over many experiments and years that gives us a big picture of any one field. So the next time you read about "a new study," take it with a critical grain of salt.
What contributes more to creating a person's identity (i.e. personality, behavior, intelligence)? Is it genetics, or is it the environment in which the person was raised? In other words, as Francis Galton might ask, is it "nature" or "nurture"?
When it comes to how empathetic someone is, Frans de Waal, a Dutch primatologist and ethologist, believes it’s both nature and nurture. He says that a person’s empathy is “innate” – inherited through genes – but also that a person can learn to become more or less empathetic. That seems reasonable; depending on early experiences and education, someone may be more or less of a certain characteristic.
But how is empathy innate? Two NewScientist writers, Philip Cohen and Ewen Callaway, wrote articles discussing the areas in our brains called the anterior cingulate cortex (ACC) and the anterior insula (AI), which become active not only when we are in pain but also when others are.
Imaging studies, cited in their articles, found a positive correlation between a volunteer’s reported empathy for a person in pain and activity in the pain-processing areas of the volunteer’s brain. This has led Cohen to believe, “Humans are hardwired to feel empathy.”
For example, in a study led by Shihui Han and colleagues, "17 Chinese and 16 Caucasian (from the US, Europe and Israel) volunteers" were shown videos of strangers, both Caucasian and Chinese, in pain while their brains were scanned using fMRI. While the fMRI results suggested that volunteers responded more empathetically toward strangers of the same ethnicity or from the same country, their self-reported responses indicated they "[felt] each other's pain about equally."
Interestingly, our brains seem to be “hardwired” to feel more for certain groups over others, whether we notice or not. These groups appear to consist of people we can identify more with, whether through ethnicity, age, gender, or any other in-group.
Frans de Waal would find these results quite understandable. He says, “Empathy is more pronounced the more similar you are to someone, the more close, socially close, you are to someone.” He continues to say that empathy “evolved… for members of any species that is cooperative and social... it’s important to take care of others in the group because you depend on [them], you survive by [them].”
Seemingly then, our brains, and likely those of other species, have evolved to serve a survival advantage; they respond in those pain-processing areas more actively when those like us are in pain, despite what we report as our level of empathy.
While we seem to be hardwired to empathize more with certain groups over others, we’re still united as a species to empathize with one another over those of other species.
Martha Farah, a cognitive neuroscience researcher, suggests that we have a "person network" divided into persons and non-persons, which has promoted closer social bonds within our species. Farah supports the existence of this brain network by pointing to the rare disorder prosopagnosia, an "impaired visual recognition of the human face." Damage to a specific area of the brain can "selectively" produce the disorder, demonstrating that specialized brain areas exist for discerning other humans.
Whether our brains also specialize in empathy toward non-persons is something to look into. For now, consider yawn contagion, which de Waal discusses with TIME. He says there is a "deep bodily connection" that allows pets to catch yawns from their owners. This seemingly innate connection breaks physical barriers with other animals, but what connection, if any, breaks emotional ones? And is it innate, or is it learned?
Have animal rights activists and pet lovers learned to be more empathetic towards non-persons? I’d like to think that it’s not just the influence of my environment that has led me to empathize with my childhood pets or toys – not to mention some of my favorite characters, like Hamm from Toy Story or Patrick from SpongeBob SquarePants.
Whether it is learned, innate, or both, I cannot say, but anthropomorphism seems to explain our emotional connections with non-humans. It breaks the barrier, allowing us to personify or add human characteristics to non-humans. For example, most people would probably like to think of their childhood pets as loved ones with human-like feelings and desires. However, would some stranger halfway across the world feel the same way you do about your pet? Probably not. They’d likely think of it as just another animal, simple as that.
Most people, if asked if they support animal rights, would probably answer ‘Yes’ or some derivative of that. But, would they promise to never buy any animal-based products (eggs, meat, suede, leather, or even the chinchilla coat seen on Teresa last week in The Real Housewives of New Jersey)? Most likely not. I mean, for anyone, that’s a hard promise to keep when we have other priorities.
So how do we go from talking to our pets as if they were humans to absentmindedly buying products that might contain ingredients of an animal just like our pets?
de Waal says we do this through dehumanization. Just as we anthropomorphize our favorite pets, toys, and characters, we also dehumanize animals. By stripping away human characteristics, like emotion or spoken language, we don't have to feel as bad about buying that leather jacket we always wanted. de Waal reminds us, "We eat nonhuman animals, wear them, perform painful experiments on them, hold them captive for purposes of our own - sometimes in unhealthy condition. We make them work, and we kill them at will."
So, the next time you shop and find that animal-based product you just NEED to buy, take a second to think about how you’re setting your priorities. Think about how, maybe unconsciously or unintentionally, you are dehumanizing the animals used for the creation of the product you’re about to buy. Couldn’t that animal be from the same species as your favorite TV character, or even your old pet? I think so, easily.
Are Humans Actually Selfish – Time
Learning Empathy From Apes – KPBS
Brain's response muted when we see other races in pain – NewScientist
Humans are hardwired to feel others' pain – NewScientist
Primates and Philosophers: How Morality Evolved – Google Books
Anthropomorphism is the attribution of human characteristics to inanimate objects, animals, or God. It has been a hallmark of faiths and religions worldwide. Humans have a natural tendency to assign intentions and desires to inanimate objects ("my computer isn't feeling well today - he's so slow!"), but they also strip "lower" beings (animals) of those same human characteristics.
We have a history of treating animals unnecessarily cruelly. I don't mean killing for food - that's necessary for our survival; I'm referring to dog fights, hunting, and other violence. We didn't even think that animals could sense pain until quite recently!
Why do we think of lifeless forms as agents with intentions but of actual living creatures as emotionally inferior clumps of cells?
Could it be that the need to rationalize phenomena is simply stronger when the phenomena have absolutely no visible explanation?
And do toasters really have feelings??