Large Scale Neuronal Network Simulations Take Off

November 4th, 2013

Google, IBM, Microsoft, Baidu, NEC, and others use deep learning and neural networks in their most recent speech recognition and image analysis systems. Neural networks have countless other uses, so naturally plenty of startups are trying to apply them in new ways. The problem now is how to implement neural network models in a way that mimics the circuitry of the brain. Brains are highly parallel and extremely efficient; the ultimate neural network model would be one that performs large-scale parallel operations while staying energy efficient. Current technology is not well suited to large-scale parallel processing, and it is nowhere near as efficient as our brain, which uses only about 20 watts of power on average (a typical supercomputer draws on the order of several megawatts of electricity).

One way future computers could mimic the efficiency of the brain is being developed by IBM, which believes that energy efficiency, not raw processing power, will guide the next generation of computers. Silicon chips have been doubling in power under Moore’s Law for almost half a century, but they are now reaching a physical limit. To break through it, researchers at IBM’s Zurich lab headed by Dr. Patrick Ruch and Dr. Bruno Michel want to mimic biology’s allometric scaling in new “bionic” computing. Allometric scaling describes how an animal’s metabolic rate rises with its body size; IBM’s approach starts with a 3D computing architecture, with processors stacked and memory sandwiched between them. To keep everything running without overheating, this biologically inspired computer would be powered, and cooled, by so-called electronic blood. The hope is that this fluid can multitask: just as blood supplies sugar while carrying away heat, the fluid would handle fueling and cooling at the same time.
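To make allometric scaling concrete, here is a back-of-the-envelope sketch of Kleiber’s law (metabolic rate scaling as roughly the 3/4 power of body mass); the coefficient below is only a rough fit for mammals, not anything from IBM’s design:

```python
# Back-of-the-envelope Kleiber's law: total metabolic rate grows as
# roughly mass^(3/4), so watts *per kilogram* fall as bodies get bigger.
# The coefficient a of about 3.4 is only a rough fit for mammals.
def metabolic_rate_watts(mass_kg, a=3.4, exponent=0.75):
    return a * mass_kg ** exponent

for label, mass in [("mouse", 0.02), ("human", 70.0), ("elephant", 4000.0)]:
    rate = metabolic_rate_watts(mass)
    print(f"{label:>8}: {rate:8.1f} W total, {rate / mass:6.2f} W/kg")
```

The per-kilogram figure drops as mass grows; that improving efficiency with scale is the trend IBM hopes dense, fluid-cooled 3D chips can follow.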

While IBM works on scaling down and improving efficiency, the largest neuronal network simulation to date has been achieved on a supercomputer in Japan. Researchers at the Okinawa Institute of Science and Technology Graduate University (OIST) used the K computer in a study conducted as part of the RIKEN HPCI Program for Computational Life Sciences. Using the supercomputer’s full computational power, the team carried out the largest neuronal network simulation to date. The simulation ran on NEST, open-source simulation software that anyone can freely download and use, taking advantage of its recently developed data structures. The team, led by Markus Diesmann and Abigail Morrison of the Institute of Neuroscience and Medicine at Jülich, simulated a network of 1.73 billion nerve cells connected by 10.4 trillion synapses. Running on 82,944 processors, the program took 40 minutes to simulate a single second of neuronal network activity. The scale of this simulation is obviously huge, yet it represents only 1 percent of the neural network of the brain. The purpose of the run was to test the limits of NEST’s simulation technology and the capabilities of K, and the researchers see it as a glimpse of what exascale machines will make possible: “If petascale computers like the K computer are capable of representing one percent of the network of a human brain today, then we know that simulating the whole brain at the level of the individual nerve cell and its synapses will be possible with exascale computers, hopefully available within the next decade.” Who knows what sorts of simulations or applications of neural networks we’ll be seeing then.
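Since NEST is open source, anyone can try it at desktop scale. Below is a toy sketch in PyNEST (the current NEST 3 API); the network size and every parameter are illustrative, and this is of course nothing like the K-computer study’s actual setup:

```python
# Toy PyNEST sketch: a small random network of leaky integrate-and-fire
# neurons driven by Poisson noise. Parameters are illustrative only.
import nest

nest.ResetKernel()

neurons = nest.Create("iaf_psc_alpha", 1000)
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
spikes = nest.Create("spike_recorder")

# Each neuron contacts 100 random targets; all neurons receive noise.
nest.Connect(neurons, neurons,
             conn_spec={"rule": "fixed_outdegree", "outdegree": 100},
             syn_spec={"weight": 20.0, "delay": 1.5})
nest.Connect(noise, neurons, syn_spec={"weight": 10.0})
nest.Connect(neurons, spikes)

# Simulate one second of biological time. On a laptop this takes moments;
# the 1.73-billion-neuron run needed 40 minutes on 82,944 processors.
nest.Simulate(1000.0)
print("Spikes recorded:", spikes.get("n_events"))
```

The study’s quoted numbers also imply an average of roughly 6,000 synapses per simulated neuron (10.4 trillion divided by 1.73 billion), in the ballpark of biological cortical connectivity.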

A recent achievement by a startup using neural networks is the solving of CAPTCHAs by AI company Vicarious. CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart; the Turing test, in which a machine’s responses become indistinguishable from a human’s, has long been a benchmark goal of AI research. The CAPTCHA recognition software is just a demonstration of Vicarious’s broader goal: creating an AI that can think like a human. The company is building recursive cortical networks that think and learn much the way we do. Instead of trying to model and simulate the brain itself, Vicarious uses only the elements of the brain needed for information processing and learning. The current state-of-the-art learning algorithms in artificial intelligence fall under deep learning. These algorithms power Apple’s Siri and Google’s voice search, but they are based on simplistic models of neurons far more primitive than those in our brains. Because of that simplicity, learning requires excessive computation. “That is not intelligence,” says cofounder D. Scott Phoenix. Instead, Vicarious is “trying to do the math behind the processes of the brain,” he says. It’s the same thinking, he adds, that is behind the observation that “airplanes don’t flap their wings. We’re focusing on lift and thrust vs. feathers and flapping.”
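For context on Phoenix’s complaint, this is the textbook arithmetic a single unit in a standard deep-learning network performs; a generic illustration, not Vicarious’s recursive cortical network:

```python
import math

# A single "neuron" in a conventional deep-learning network: a weighted
# sum pushed through a nonlinearity. Compare this one line of arithmetic
# with a real neuron's dendrites, ion channels, and spike timing.
def artificial_neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid squashing

# Hand-picked example values; "learning" is the costly search for weights.
print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.3, -0.5], bias=0.1))
```

Stacking many layers of units like this is what makes deep learning powerful, and also part of what makes training so computationally expensive.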

-Leo Shapiro


Sources:

IBM unveils concept for a future brain-inspired 3D computer – Kurzweil AI

Largest Neuronal Network Simulation to Date Achieved Using Japanese Supercomputer – Science Daily

AI Startup Vicarious Claims Milestone In Quest To Build A Brain: Cracking CAPTCHA – Forbes
