Dr. Frank Werblin of UC Berkeley has dedicated nearly his entire academic life to the study of the eye and visual processing. Recently, Dr. Werblin completed a model of the retinal processing system that he has dubbed the “Retinal Hypercircuit”. The Hypercircuit itself is made up of the five classical retinal cell types — Photoreceptor, Horizontal, Bipolar, Amacrine, and Retinal Ganglion Cells — though a recent collaborative effort has identified over 50 morphologically distinct cell types. Of this vast array, the greatest variance falls in the morphology of the Amacrine cells, which provide horizontal connectivity in the Inner Plexiform Layer between the Bipolar and Ganglion Cells. Although the mechanics behind the Hypercircuit are fascinating, what I find arguably more important is the output of the system, a topic Werblin has indirectly stumbled upon, but one that I believe could lead to an incredibly progressive line of research.
For years, scientists have investigated cases of human brain damage as a means of better understanding the function of specific neural regions, but neuroanatomist Dr. Jill Bolte Taylor received the unique opportunity of experiencing this function-impeding damage firsthand. She awoke one morning to find herself having a stroke, and years later has recovered enough to share the event. Taylor’s unique experience sheds an interesting light on the underlying processes of our fascinating brains. Here is the video (via YouTube):
Video Link – Ted.com
Background – DrJillTaylor.com
The world seems to be moving faster and faster, and the demand for information, and for access to it, is drastically speeding up as well. Modern computers and related technologies have done a remarkable job of both creating and keeping up with this ever-growing demand. Perhaps one of the most interesting innovations on the scene of late is the emergence of a new form of information sharing and storage colloquially called “cloud computing”.
You’re lying on a sandy beach on a hot sunny afternoon, enjoying a few hours of much-needed laziness. As you open your eyes and confront the vastness of the ocean in front of you, light of roughly 470nm wavelength hits your retina, kindling an impossibly long cascade of events in your brain: a molecule called retinal changes shape, neurons fire action potentials down the optic nerve, signals arrive at the lateral geniculate nucleus deep in the brain and trigger more action potentials in the primary visual cortex at the back of your head, and so on ad infinitum. At some point, the mechanical wonder of 100 billion neurons working together produces something special: your experience of the color blue. What’s special is not that you can discriminate that color from others, nor that you are aware of it and paying attention to it. It is not notable that you can tell us about it, or assign a name to it. It’s that you have a subjective, qualitative experience of the color; there is something it is like to experience the color blue. Some philosophers call these experiences qualia – meaning “what kind” – but it is not important what kind of experience you are having, just that you are having one at all. Modern science hypothesizes that subjective experience is a product of the brain, but has no explanation for it.
A little self-education goes a long way. Let Richard Dawkins enlighten you (and if you’ve seen this already, it’s never a bad idea to brush up on the basics of life):
In the 2009 film Avatar, scientists exploring the planet Pandora used alien hybrid bodies called “avatars” that functioned through a mental connection established with their genetically-matched human counterparts.
While this kind of technology seems as fantastic as only science-fiction movies can portray it, recent work in the neuroscientific community may lead the world to think otherwise. Neurologist Olaf Blanke, of the Brain Mind Institute at the Ecole Polytechnique Fédérale de Lausanne in Switzerland, led a virtual-reality (VR) experiment that used computerized “virtual humans” to probe the neurobiological basis of our sense of location in space. Interestingly, his team seems to have discovered that the sensation of possessing a body arises as part of our own conscious experience.
Blanke and his team had volunteers wear VR stereoscopic visors, or view projections on a large screen, while the researchers challenged fundamental aspects of their self-perception. The scientists physically touched the subjects either in sync or out of sync with their digital human “avatars” as they wandered through 3D environments, and even ‘immersed’ them into an avatar of the opposite sex. They also changed the subjects’ perspective from the first- to the third-person point of view. While such methods may seem a bit odd and even unorthodox, the response of the subjects to such testing was both highly positive and truly fascinating. Indeed, as Blanke commented regarding his own observations: “They start thinking that the avatar is their own body; we created a partial out-of-body experience. We were able to disassociate touch and vision and make people think that their body was two metres in front of them”.
Throughout the experiment, subjects were fitted with electrode-containing skullcaps to record the electrical activity produced by their brains. The data collected by the electrodes and by brain imaging scans (via fMRI) during the study demonstrated a heightened response in the temporo-parietal and frontal regions of the volunteers’ brains, areas classically considered responsible for integrating touch and vision. These findings suggest that the subjects’ brains were successfully being tricked as they experienced their own “bodies” in virtual space.
Progress in our knowledge of self-awareness and virtual reality could lead to major advances in the fields of robotics, neuro-rehabilitation, and even severe-pain treatment. Imagine being able to temporarily “leave” the body as it heals after a serious injury! Though we may never get to explore Pandora, the implications of such out-of-body “avatar” experiences could be enormous.
Scientists project humans into avatars – Financial Times
Scientists explore the meaning of self-consciousness – Irish Times
The real avatar – EurekAlert
Stuart Hameroff, MD, is an anesthesiologist and professor at the University of Arizona. In one of many articles and videos about consciousness on the Huffington Post, Hameroff describes how anesthesia can help explain consciousness.
If the brain produces consciousness (all aspects of the term), then it seems to follow that turning off the brain will also turn off consciousness. This is exactly how anesthetics work.
While most anesthetics are nonselective “dirty” drugs, they all produce loss of consciousness, amnesia, and immobility by either opening inhibitory ion channels or closing excitatory ion channels in neurons. The commonly used intravenous drug propofol, for example, acts by potentiating GABA receptors, the ubiquitous inhibitory channels of the CNS. Brain off = consciousness off.
Hameroff does not subscribe to this. He argues that consciousness is an intrinsic part of the universe and that anesthetics simply disconnect it from the brain. He also thinks that by saying “quantum” a lot, he can scientifically prove the existence of the soul.
What’s scary is that Hameroff has “MD” and “Professor” next to his name. Will Joe the Plumber see through the misinformation?
Don’t take the HuffPost too seriously:
Imagine: a mad scientist with a ray gun shoots at a neuron somewhere in cortical layer IV of your visual area MT, burning it up in a matter of microseconds (just for fun, imagine also that the ray gun leaves everything else intact).
With one neuron missing, you probably won’t notice any perceptual change. But what if, one by one, all neurons in area MT went AWOL? You’d be stuck with an annoying inability to visually detect motion.
Now imagine that our fancy ray gun replaces every cell it hits with a magical transistor equivalent. These magical transistors have wires in place of each and every dendrite, a processing core, and wires in place of the axon(s). Naturally, the computational core analyzes the sum of all inputs and instructs the axon to “fire” accordingly. Given any set of inputs to the dendrite wires, the output of the axon wires is indistinguishable from that of the deceased neuron.
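To make the idea concrete, here is a minimal sketch of what such a “magical transistor” might compute, assuming a toy weighted-sum-and-threshold model (the function name, weights, and threshold are invented for illustration; real neurons integrate inputs nonlinearly over time):

```python
# Hypothetical sketch: a "magical transistor" modeled as a
# weighted-sum-and-threshold unit. This is a deliberate simplification
# of what a real neuron does.

def transistor_neuron(inputs, weights, threshold=1.0):
    """Sum the weighted dendrite-wire inputs; 'fire' (return 1) if the
    total reaches the threshold, otherwise stay silent (return 0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three dendrite wires: two excitatory weights, one inhibitory weight.
print(transistor_neuron([1, 1, 1], [0.6, 0.7, -0.4]))  # 0: sum 0.9 stays below threshold
print(transistor_neuron([1, 1, 0], [0.6, 0.7, -0.4]))  # 1: sum 1.3 crosses threshold
```

The thought experiment only requires that, whatever the internal mechanism, the output wires behave exactly as the dead neuron’s axon would have.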
We can still imagine that with one neuron replaced with one magical transistor, there wouldn’t be any perceptual change. But what happens when more and more cells are replaced with transistors? Does perception change? Will our subject become blind to motion, as if area MT weren’t there? Or will motion detection be just as good as with the real neurons? I am tempted to vote in favor of “No change [we can believe in],” but have to remain skeptical: there is simply no direct evidence for either stance.
Ray guns aside, it is not hard to see that a computational model of a brain circuit may be a candidate replacement for real brain parts (this is especially true considering the computational success of the Blue Brain Project’s cortical column, which comprises 10,000 neurons and many more connections among them). For example, we can imagine thousands of electrodes in place of inputs to area MT that connect to a computer model (instead of to MT neurons); the model’s outputs are then connected, via other electrodes, to the targets of the real MT’s outputs, and ta-da! Not so fast. This version of the upgrade doesn’t shed any more light on the problem than the first, but it does raise some questions: do the neurons in a circuit have to be connected in one specific way in order for the circuit to support perception? Or is it sufficient simply for the outputs of the substitute to match those of the real circuit, given any set of inputs? And, what if the whole brain were replaced with something that produced the same outputs (i.e. behavior) given a set of sensory inputs – would that “brain” still produce perception?
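The “same outputs for any inputs” criterion can be illustrated with a toy example, assuming two invented stand-ins for a motion detector (the names and the frame-comparison logic are hypothetical, not anything from the Blue Brain Project):

```python
# Toy illustration of functional equivalence: two differently-built
# "motion detectors" that agree on every input. The open question the
# post raises is whether such input/output equivalence is enough to
# preserve perception.

def biological_mt(frame_a, frame_b):
    """Stand-in for the real circuit: reports whether anything moved
    between two frames (represented as lists of pixel values)."""
    return any(a != b for a, b in zip(frame_a, frame_b))

def silicon_mt(frame_a, frame_b):
    """A substitute with different internals but identical
    input/output behavior."""
    moved = False
    for a, b in zip(frame_a, frame_b):
        moved = moved or (a != b)
    return moved

# The substitute is behaviorally indistinguishable on these inputs.
frames = [([0, 1, 0], [0, 1, 0]),   # nothing moved
          ([0, 1, 0], [1, 0, 0])]   # something moved
assert all(biological_mt(a, b) == silicon_mt(a, b) for a, b in frames)
```

Behavioral tests like this can only ever confirm matching outputs; whether the swap preserves the subject’s perception is exactly what remains unanswered.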