In the summer of 2009, the Israeli neuroscientist Henry Markram strode onto the TED stage in Oxford, England, and made an immodest proposal: Within a decade, he said, he and his colleagues would build a complete simulation of the human brain inside a supercomputer. They’d already spent years mapping the cells in the neocortex, the supposed seat of thought and perception. “It’s a bit like going and cataloging a piece of the rain forest,” Markram explained. “How many trees does it have? What shapes are the trees?” Now his team would create a virtual rain forest in silicon, from which they hoped artificial intelligence would organically emerge. If all went well, he quipped, perhaps the simulated brain would give a follow-up TED talk, beamed in by hologram.

Markram’s idea—that we might grasp the nature of biological intelligence by mimicking its forms—was rooted in a long tradition, dating back to the work of the Spanish anatomist and Nobel laureate Santiago Ramón y Cajal. In the late 19th century, Cajal undertook a microscopic study of the brain, which he compared to a forest so dense that “the trunks, branches, and leaves touch everywhere.” By sketching thousands of neurons in exquisite detail, Cajal was able to infer an astonishing amount about how they worked. He saw that they were effectively one-way input-output devices: They received electrochemical messages in treelike structures called dendrites and passed them along through slender tubes called axons, much like “the junctions of electric conductors.”

Cajal’s way of looking at neurons became the lens through which scientists studied brain function. It also inspired major technological advances. In 1943, the psychologist Warren McCulloch and his protégé Walter Pitts, a homeless teenage math prodigy, proposed an elegant framework for how brain cells encode complex thoughts. Each neuron, they theorized, performs a basic logical operation, combining multiple inputs into a single binary output: true or false. These operations, as simple as letters in the alphabet, could be strung together into words, sentences, paragraphs of cognition. McCulloch and Pitts’ model turned out not to describe the brain very well, but it became a key part of the architecture of the first modern computer. Eventually, it evolved into the artificial neural networks now commonly employed in deep learning.
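
To make the idea concrete, here is a minimal sketch of a McCulloch-Pitts-style unit in Python. The thresholds are illustrative rather than drawn from the 1943 paper, and the original model's absolute inhibitory inputs are omitted for brevity:

```python
# A McCulloch-Pitts-style unit: binary inputs are summed and compared
# against a threshold, producing a single true/false output.

def mp_neuron(inputs, threshold):
    """Fire (True) if enough inputs are active to meet the threshold."""
    return sum(inputs) >= threshold

# With two inputs, a threshold of 2 behaves like logical AND...
assert mp_neuron([1, 1], threshold=2)
assert not mp_neuron([1, 0], threshold=2)

# ...and a threshold of 1 behaves like logical OR.
assert mp_neuron([0, 1], threshold=1)
assert not mp_neuron([0, 0], threshold=1)
```

Chained together, units like these can compute any finite logical expression, which is the sense in which McCulloch and Pitts offered them as an alphabet of thought.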

These networks might better be called neural-ish. Like the McCulloch-Pitts neuron, they’re impressionistic portraits of what goes on in the brain. Suppose you’re approached by a yellow Labrador. In order to recognize the dog, your brain must funnel raw data from your retinas through layers of specialized neurons in your cerebral cortex, which pick out the dog’s visual features and assemble the final scene. A deep neural network learns to break down the world similarly. The raw data flows from a large array of neurons through several smaller sets of neurons, each pooling inputs from the previous layer in a way that adds complexity to the overall picture: The first layer finds edges and bright spots, which the next combines into textures, which the next assembles into a snout, and so on, until out pops a Labrador.
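
That funneling can be written as a few stacked matrix operations. The sketch below is a toy, not a real vision model: the layer sizes are arbitrary and the weights are random rather than trained, so the "Labrador score" is meaningless until learning tunes the weights.

```python
import numpy as np

# A toy forward pass through a stack of layers: each layer pools the
# previous one's outputs into fewer, more abstract features.

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer: weighted sums followed by a nonlinearity."""
    w = rng.standard_normal((n_out, x.size))  # untrained, random weights
    return np.maximum(0, w @ x)               # ReLU keeps positive activations

pixels = rng.random(1024)     # raw input: a flattened 32x32 image
edges = layer(pixels, 256)    # early layer: edges and bright spots
textures = layer(edges, 64)   # middle layer: textures and parts
score = layer(textures, 1)    # final layer: evidence for "Labrador"
print(score.item())
```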

Despite these similarities, most artificial neural networks are decidedly un-brainlike, in part because they learn using mathematical tricks that would be difficult, if not impossible, for biological systems to carry out. Yet brains and AI models do have something fundamental in common: Researchers still don’t understand why they work as well as they do.
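
The best known of those tricks is backpropagation, which sends exact error gradients backward through every layer, a feat real synapses have no known way to perform. Here is a one-weight sketch of the underlying gradient-descent step; the learning rate and data are purely illustrative:

```python
# Gradient descent on a single weight: nudge w to reduce squared error.
# Backpropagation applies this same chain-rule bookkeeping across many
# layers at once.

w, lr = 0.0, 0.1            # weight and learning rate
x, target = 2.0, 1.0        # a single training example

for _ in range(50):
    y = w * x                       # forward pass: the neuron's output
    grad = 2 * (y - target) * x     # d(error)/dw for error = (y - target)**2
    w -= lr * grad                  # step downhill along the gradient

print(w)  # converges toward target / x = 0.5
```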

What computer scientists and neuroscientists are after is a universal theory of intelligence—a set of principles that holds true both in tissue and in silicon. What they have instead is a muddle of details. Eleven years and $1.3 billion after Markram proposed his simulated brain, the project has contributed no fundamental insights to the study of intelligence.

Part of the problem is something the writer Lewis Carroll put his finger on more than a century ago. Carroll imagined a nation so obsessed with cartographic detail that it kept expanding the scale of its maps—6 yards to the mile, 100 yards to the mile, and finally a mile to the mile. A map the size of an entire country is impressive, certainly, but what does it teach you? Even if neuroscientists can re-create intelligence by faithfully simulating every molecule in the brain, they won’t have found the underlying principles of cognition. As the physicist Richard Feynman famously asserted, “What I cannot create, I do not understand.” To which Markram and his fellow cartographers might add: “And what I can create, I do not necessarily understand.”
