In the natural world, intelligence takes many forms. Bats use echolocation to navigate in the dark, for instance, while snakes rely on heat-sensing to hunt down prey. In the computer science world, too, many forms of artificial intelligence are to be found, where diverse networks perform different tasks. Cognitive scientists, for instance, are now using some of these newly developed networks to learn more about one of the most complex intelligent systems of all: the human brain.
“The fundamental questions cognitive neuroscientists and computer scientists seek to answer are similar,” says Aude Oliva of MIT. “They have a complex system made of components — for one, it’s called neurons and for the other, it’s called units — and we are doing experiments to try to determine what those components calculate.”
According to Oliva, neuroscientists are learning much about the role of contextual cues in human image recognition. By using "artificial neurons" — essentially lines of software code — within neural network models, they can parse out the various elements that go into recognizing a specific place or object.
“The brain is a deep and complex neural network,” says Nikolaus Kriegeskorte of Columbia University. “Neural network models are brain-inspired models that are now state-of-the-art in many artificial intelligence applications, such as computer vision.”
In a recent study of more than 10 million images, Oliva and colleagues taught an artificial network to recognize 350 different kinds of places, such as kitchens, bedrooms, parks, and living rooms. They expected the network to learn objects associated with each place, such as a bed with a bedroom. What they didn't expect was that the network would also learn to recognize people and animals, for example dogs at parks and cats in living rooms.
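The study's networks are deep convolutional models trained on millions of real photos, but the core idea — learning to map image features to place categories from labeled examples — can be sketched in miniature. The following is an illustrative stand-in, not the study's actual method: a simple softmax classifier trained on synthetic "image feature" vectors for four hypothetical scene categories, with all sizes and names chosen for clarity.

```python
import numpy as np

# Hedged sketch of supervised scene-category learning: a softmax classifier
# on synthetic feature vectors. The real study uses deep convolutional
# networks on millions of photos; everything here is illustrative.

rng = np.random.default_rng(0)
n_classes, n_features, n_per_class = 4, 16, 50  # e.g. kitchen, bedroom, park, living room

# Synthetic data: each scene category gets its own feature "signature" plus noise.
centers = rng.normal(size=(n_classes, n_features))
X = np.vstack([centers[c] + 0.3 * rng.normal(size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Plain gradient descent on the cross-entropy loss of a linear softmax model.
W = np.zeros((n_features, n_classes))
for _ in range(200):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    onehot = np.eye(n_classes)[y]
    W -= 0.1 * X.T @ (p - onehot) / len(X)

accuracy = (np.argmax(X @ W, axis=1) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The interesting point the researchers observed — the network also picking up on people and animals — arises because any feature that helps predict the category (a dog often appears in park photos) gets learned, whether or not the designers intended it.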
“The machine intelligence programs learn very quickly when given lots of data, which is what enables them to parse contextual learning at such a fine level,” Oliva says. “While it is not possible to dissect human neurons at such a level, the computer model performing a similar task is very clear. The artificial neural networks serve as ‘mini-brains’ that can be studied, changed, evaluated, and compared against responses given by human neural networks, so the cognitive neuroscientists have some sort of sketch of how a real brain may function,” she added.
Undeniably, Kriegeskorte says, these models have helped neuroscientists understand how people can recognize the objects around them in the blink of an eye. “This involves millions of signals emanating from the retina that sweep through a sequence of layers of neurons, extracting semantic information, for example that we’re looking at a street scene with several people and a dog,” he says. “Current neural network models can perform this kind of task using only computations that biological neurons can perform. Moreover, these neural network models can predict to some extent how a neuron deep in the brain will respond to any image.”
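The layered sweep Kriegeskorte describes can be sketched in a few lines: an input vector (standing in for retinal signals) passes through a sequence of layers, each applying only operations biological neurons could plausibly perform — weighted sums followed by a threshold-like nonlinearity. All layer sizes here are illustrative, not taken from any real model.

```python
import numpy as np

# Minimal feedforward sketch: signals sweep through layers, each layer
# computing weighted sums and a ReLU (threshold-like) nonlinearity —
# operations a biological neuron could in principle carry out.

rng = np.random.default_rng(42)
layer_sizes = [100, 50, 20, 5]  # retina-like input down to a few "semantic" units

weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

signal = rng.normal(size=layer_sizes[0])  # stand-in for signals from the retina
for W in weights:
    signal = np.maximum(0.0, signal @ W)  # weighted sum + nonlinearity

print("final-layer activations:", signal.round(3))
```

In a trained network the final-layer units would correspond to semantic categories (street scene, person, dog); here the weights are random, so the sketch shows only the shape of the computation, not recognition itself.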
Using computer science to understand the human brain is a relatively new field that is expanding rapidly thanks to advancements in computing speed and power, along with neuroscience imaging tools. The artificial networks cannot yet replicate human visual abilities, Kriegeskorte says, but by modeling the human brain, they are furthering understanding of both cognition and artificial intelligence. “It’s a uniquely exciting time to be working at the intersection of neuroscience, cognitive science, and AI,” he says.
In the same vein, Oliva says: “Human cognitive and computational neuroscience is a fast-growing area of research, and knowledge about how the human brain is able to see, hear, feel, think, remember, and predict is essential for developing better diagnostic tools, repairing the brain, and making sure it develops well.”
Filtering information for search engines, acting as an opponent in a board game, or recognizing images: artificial intelligence has far outpaced human intelligence in certain tasks. Several groups from the Freiburg excellence cluster BrainLinks-BrainTools, led by neuroscientist and private lecturer Dr. Tonio Ball, are showing how ideas from computer science could revolutionize brain research. In the scientific journal Human Brain Mapping, they illustrate how a self-learning algorithm decodes human brain signals measured by an electroencephalogram (EEG).
These signals included not only executed movements but also hand and foot movements that were merely imagined, as well as an imagined rotation of objects. Even though the algorithm was given no signal characteristics ahead of time, it works as quickly and precisely as traditional systems — systems that were created to solve specific tasks on the basis of predetermined brain-signal characteristics, and that are therefore not appropriate for every situation.
“Our software is based on brain-inspired models that have proven to be most helpful in decoding various natural signals such as phonetic sounds,” says computer scientist Robin Tibor Schirrmeister. The researcher is using it to rewrite the methods the team has used for decoding EEG data: so-called artificial neural networks are at the heart of the current project at BrainLinks-BrainTools.
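The contrast the article draws — traditional systems built on predetermined brain-signal characteristics versus networks that learn their own features — can be illustrated with a toy version of the traditional approach. This sketch is not the Freiburg team's method: it classifies synthetic "EEG" epochs for two imagined movements using a hand-picked feature (per-channel log band power), which works here only because we know in advance which characteristic distinguishes the classes. Channel counts, sampling rate, and the 11 Hz rhythm are all illustrative assumptions.

```python
import numpy as np

# Toy version of the traditional, predetermined-feature approach to EEG
# decoding: synthetic epochs for two imagined movements differ in the
# amplitude of an oscillation on a few channels, and a nearest-centroid
# classifier on hand-picked log-variance features separates them.

rng = np.random.default_rng(1)
n_epochs, n_channels, n_samples = 60, 8, 250  # e.g. 1 s of data at 250 Hz

def make_epoch(amplitude):
    t = np.arange(n_samples) / 250.0
    epoch = rng.normal(size=(n_channels, n_samples))  # background noise
    epoch[:4] += amplitude * np.sin(2 * np.pi * 11 * t)  # ~11 Hz rhythm on 4 channels
    return epoch

X = np.array([make_epoch(2.0 if i < n_epochs // 2 else 0.5) for i in range(n_epochs)])
y = np.array([0] * (n_epochs // 2) + [1] * (n_epochs // 2))

features = np.log(X.var(axis=2))  # the predetermined characteristic: log band power
centroids = np.array([features[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((features[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
accuracy = (pred == y).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

The point of the team's deep networks is precisely that no such `features` line has to be written by hand: the network learns which signal characteristics matter directly from the raw data, which is why it transfers to situations the hand-picked feature was never designed for.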