“Do we have a chance of ever understanding brain function without brain simulations?” So asked the Human Brain Project (HBP), the brainchild of Henry Markram, in a new paper in the prestigious journal Neuron.
The key, the team argued, is to consider brain simulators in the vein of calculus for Newton’s laws—not as specific ideas of how the brain works, but rather as a programming language that can execute many candidate neural models, or programs, now and in the future. Viewed not as a vanity project, but rather as the way forward to understanding—and eventually imitating—higher brain functions, brain simulation earns a resounding yes.
Because of the brain’s complexity and chaotic nature, the authors argue, we shouldn’t rein in simulation efforts; rather, we need to ramp them up and develop multiple “brain-simulation engines” with varying levels of detail.
These “general purpose” simulators, when distributed globally on a cloud-based platform, will not just provide neuroscientists with an indispensable tool to test their hypotheses about various brain functions—how we make decisions, memories, and emotions. They may also be the key to finally linking the individual components of human intelligence, neurons, to functional neural circuits and, ultimately, to behavior or even consciousness—that is, how our experience of “self” emerges from the seemingly random chattering of millions of neurons.
When Markram announced in 2009 that we would fully simulate the human brain within a decade, he was either heralded as a visionary or condemned as a madman. Despite the controversy, he subsequently skyrocketed to fame as the creator of the HBP, the European Union flagship project aimed at reconstructing entire brains inside supercomputers as “digital replicas” capable of learning and memory. In other words, true AI.
The project raised eyebrows from the start, both for its ambition and its lack of specifics: What level of detail should a simulation go into? We barely have a parts list for the brain, critics argued. How will static brain maps capture the nuances of experience and intuition, both indispensable to human intelligence but amorphous in physical form? Since its birth, neuroscience has had simulations of individual neurons and circuits. And behavior, no matter how complex, relies on a multitude of circuits rather than the entire brain at once. Why, then, is whole-brain simulation even necessary to understand our responses?
Then there’s the hardware argument: even IBM’s supercomputers—with which Markram struck up a tumultuous partnership—struggle to mimic the brain’s parallel computing prowess.
Yet the following decade proved the HBP prescient in its quest to deconstruct—and then reconstruct—the brain in silico. Although Markram was overly optimistic—we’re nowhere close to reconstructing even a mouse brain—innovations in microscopy and advances in computing power and algorithms have kicked off projects to functionally map brain connections across the world.
To pick apart brain networks and link them to behavioral output—think going from programming commands to compiling and running the program—we first need to know the “keyboard,” or “letters,” of the brain. That is, we need to meet all the types of neurons inside our heads.
Here, the Allen Institute for Brain Science has been paramount. The Allen Cell Types Database, for example, consolidates information about neurons’ precise 3D shapes and how they react to different electrical zaps. Add a basic understanding of the biophysical properties of an average neuron, and it’s then possible to simulate many different neuronal models using the same simulator, the team said.
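As a rough sketch of what “many neuron models, one simulator” means, here is a minimal leaky integrate-and-fire neuron responding to a step of injected current. Every parameter value below is invented for illustration; this is not the Allen Institute’s or the HBP’s actual code.

```python
import numpy as np

def simulate_lif(i_ext, dt=0.1, tau=10.0, v_rest=-70.0,
                 v_thresh=-55.0, v_reset=-75.0, r_m=10.0):
    """Leaky integrate-and-fire neuron driven by an injected current.

    i_ext: array of injected current (nA) at each time step of size dt (ms).
    Returns the membrane-voltage trace (mV) and the indices of spikes.
    """
    v = np.full(len(i_ext), v_rest)
    spikes = []
    for t in range(1, len(i_ext)):
        # Voltage leaks back toward rest while the input current pushes it up.
        dv = (-(v[t - 1] - v_rest) + r_m * i_ext[t - 1]) * (dt / tau)
        v[t] = v[t - 1] + dv
        if v[t] >= v_thresh:   # threshold crossed: register a spike and reset
            spikes.append(t)
            v[t] = v_reset
    return v, spikes

# A 2 nA current "zap" lasting 60 ms (dt = 0.1 ms, 1000 steps total).
current = np.zeros(1000)
current[200:800] = 2.0
voltage, spike_times = simulate_lif(current)
print(f"{len(spike_times)} spikes")
```

Swapping in a different neuron model means changing only `simulate_lif`; the surrounding loop that delivers stimuli and records spikes stays the same, which is the sense in which one simulator can execute many candidate “programs.”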
“Data is not knowledge,” said Dr. Anton Arkhipov at the Institute. “Models are not knowledge either, but they can help you get closer to knowledge by combining and integrating the data.”
To the HBP, these are still only initial skeleton models; to construct general purpose brain simulators, we also need to know how the neurons connect, how strongly they connect, and how those connections change with experience.
Eventually the simulations should be able to pit experimental data, measured from real brains, against different models of how we think the brain works. These engines will then edge us closer to a better description of brain function for a given task.
This leads to one of the arguments for whole-brain simulation: it’ll help us solve the “biological imitation game,” a Turing test-like assay that pits digitally reconstructed brains against real ones. Iterations of the test help select increasingly more accurate models for a given task, which eventually become the most promising ideas for how specific biological networks operate. And because these models are based on mathematical equations, they could become the heart of next-generation AI.
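A toy version of that model-selection loop might look like the following, where each candidate “model” is scored against recorded firing rates and the best fit survives the round. The models, stimuli, and “recorded” rates here are all invented; a real imitation game would compare full simulator runs against in vivo recordings.

```python
import numpy as np

# Hypothetical recorded firing rates (Hz) from a real circuit under four stimuli.
recorded_rates = np.array([13.0, 44.0, 2.5, 29.0])
stimuli = np.array([1.2, 4.5, 0.3, 2.8])

# Three toy candidate models mapping stimulus strength to predicted rates;
# each stands in for a full brain-simulator run.
def model_linear(s):
    return 10.0 * s

def model_saturating(s):
    return 50.0 * s / (s + 0.5)

def model_constant(s):
    return np.full_like(s, 20.0)

def score(model):
    """Lower is better: mean squared error against the recorded rates."""
    return float(np.mean((model(stimuli) - recorded_rates) ** 2))

candidates = {"linear": model_linear,
              "saturating": model_saturating,
              "constant": model_constant}
best = min(candidates, key=lambda name: score(candidates[name]))
print(best)  # the model whose output is closest to the real recordings
```

Iterating this loop with harder stimuli and richer scoring metrics is the “game”: the surviving model is, by construction, the current best guess at how the biological network operates.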
The key here, the team argued, is to find the best metric of what makes a good model. Mimicking neural firing, the strategy of most brain-machine interfaces, isn’t enough. Rather, the model should also respond to other sorts of stimulation, including magnetic or light, with the same type of measurable effect as natural neurons.
How is an open question. “Perhaps the realistic behavior of a robot following motor commands produced by a model network could be one success criterion when such models become available?” the team suggested.
Digital biological intelligence aside, brain simulation would also massively improve neuroscience, both in terms of how experimental data is analyzed, and by offering a “model brain” to help neuroscientists generate hypotheses about brain functions.
As neuroscience moves into its big-data era, the field has struggled with how best to consolidate experimental data. For example, even if we could record in minute detail every brain signal that leads to a muscle twitch or enhanced memory encoding, those data would be all but meaningless without a way to parse the electrical signals into meaningful commands.
Brain simulations can guide the way forward, the team said. This includes generating virtual brains to benchmark how a particular network—or several—should respond to a particular input.
Finally, when released to the general neuroscience community, simulators would make it easy for scientists to form new hypotheses: for example, can EEG, which places electrodes on the scalp, record enough data to discriminate between the left and the right leg stepping up? Such experiments underpin brain-machine interfaces and currently require months of training to test. With a simulated brain, we could get answers in hours.
With exascale computers, likely a million times faster than an average desktop, just around the corner, simulating whole brain networks should become ever more feasible. Here’s where cloud computing comes in: most neuroscientists would access digital brains remotely through the cloud, as is currently the case for quantum computers. By building graphical user interfaces and plug-and-play tools for assembling neural networks into the simulators, digital brains could become accessible to anyone interested.
The article doesn’t entirely refute the naysayers. But it does lay out clearer guidance for brain simulation going forward. Although the HBP is expected to end in 2023, the team believes long-term investment is critical to reaping the benefits of a digital brain.
Rather than single ideas, “brain simulators should be viewed as ‘mathematical observatories’ to test various candidate hypotheses,” the team said.
Going forward, they foresee at least three different levels of simulation: exquisitely detailed models of single neurons; integrate-and-fire neurons that can be linked up into networks; and large populations of “point” neurons that focus on the way large brain regions behave.
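The coarsest of those three levels can be sketched with a firing-rate network in which each “point” unit stands in for an entire population of neurons. Below is a minimal example in the spirit of the classic Wilson-Cowan equations; the weights and time constants are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np

# Two coupled populations: excitatory (E) and inhibitory (I).
# Each is a single "point" unit summarizing thousands of real neurons.
w_ee, w_ei, w_ie, w_ii = 10.0, 8.0, 12.0, 3.0  # made-up coupling weights
tau, dt = 10.0, 0.1                             # time constant and step (ms)

def f(x):
    """Sigmoid activation: maps net input to a firing rate in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x + 4.0))

r_e, r_i = 0.1, 0.1
for _ in range(2000):  # 200 ms of simulated time
    drive_e = w_ee * r_e - w_ei * r_i + 2.0  # constant external input to E
    drive_i = w_ie * r_e - w_ii * r_i
    # Each population relaxes toward the rate its current input dictates.
    r_e += dt / tau * (-r_e + f(drive_e))
    r_i += dt / tau * (-r_i + f(drive_i))

print(round(r_e, 3), round(r_i, 3))
```

Models like this trade single-neuron detail for the ability to simulate whole brain regions cheaply, which is exactly the trade-off the three proposed levels of simulation span.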
The renowned physicist Richard Feynman once said, “What I cannot create, I do not understand.” That is how the HBP makes its strongest case: yes, it is expensive; yes, it will be for the long haul. But brain simulation is how we will ultimately solve the mysteries of the brain—both biological and artificial.