[LINK] Unifying Predictive Coding With Backpropagation
[epistemic status: I know a little about the predictive coding side of this, but almost nothing about backpropagation or the math behind the unification. I am posting this mostly as a link to people who know more.]
This is a link to / ad for a great recent Less Wrong post by lsusr, Predictive Coding Has Been Unified With Backpropagation, itself about a recent paper Predictive Coding Approximates Backprop Along Arbitrary Computation Graphs.
Predictive coding is the most plausible current theory of how the brain works. I’ve written about it elsewhere, especially here.
Backpropagation is an algorithm involved in most modern machine learning / AI. If you create a neural net and “train it” by feeding it problems and answers, the backpropagation algorithm tells you how to make the error in each of its answers “flow backward” through the model, nudging every weight toward the values that would have made the model predict the right answers. Hopefully this makes the model generally good at solving that class of problem, and then you can feed it new problems you want it to solve.
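To make that concrete, here is a toy version of the training loop: a tiny two-layer network trained by backpropagation in plain numpy. Everything in it (the made-up data, the variable names, the learning rate) is my own illustration, not anything taken from lsusr’s post or the paper.

```python
# Toy two-layer network trained by backpropagation (numpy only).
# All names and numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Fake "problems and answers": 3-dimensional inputs, 0/1 targets.
x = rng.normal(size=(100, 3))
y = (x.sum(axis=1, keepdims=True) > 0).astype(float)

W1 = rng.normal(scale=0.1, size=(3, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(8, 1))   # hidden -> output weights
lr = 0.1

for step in range(500):
    # Forward pass: the prediction flows forward through the layers.
    h = np.tanh(x @ W1)
    pred = h @ W2

    # Backward pass: the output error flows backward through the same
    # weights, telling each earlier weight how it should change.
    err_out = pred - y                          # error at the output
    grad_W2 = h.T @ err_out / len(x)
    err_hidden = (err_out @ W2.T) * (1 - h**2)  # error pushed back through tanh
    grad_W1 = x.T @ err_hidden / len(x)

    W1 -= lr * grad_W1
    W2 -= lr * grad_W2
```

Notice that the backward pass literally sends information backward, through the very same weights the forward pass used.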
We’re pretty sure the brain doesn’t directly use backpropagation. Real backpropagation requires, well, propagation going backwards. But neurons can only send information one way: Neuron A sends to Neuron B, but not vice versa. On the other hand, the brain seems to do a lot of the same things artificial neural networks do, with a suspiciously similar structure. So researchers have long suspected the brain was doing some kind of approximation of backpropagation that ended up in the same place.
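For a flavor of what such an approximation could look like, here is a toy predictive-coding-style version of the same network. Instead of an explicit backward pass, each layer keeps its own local prediction-error units, the hidden activity settles over a few iterations, and the weight updates then use only the errors available at that layer. This is my own loose sketch of the general scheme (numpy again, with made-up names and step sizes), not the exact algorithm from the paper.

```python
# Toy predictive-coding-style learning (numpy only). A loose illustration of
# the general scheme; the details here are my own guesses, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=(100, 3))                          # inputs
y = (x.sum(axis=1, keepdims=True) > 0).astype(float)   # targets

W1 = rng.normal(scale=0.1, size=(3, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))
lr, infer_lr = 0.1, 0.2

for step in range(500):
    v0 = x                  # input layer, clamped to the data
    v1 = np.tanh(v0 @ W1)   # hidden layer, initialized by a forward sweep
    v2 = y                  # output layer, clamped to the targets

    # Iterative inference: nudge the hidden activity to reduce the local
    # prediction errors, instead of running an explicit backward pass.
    # (W2.T here stands in for separate feedback connections, which is how
    # the biological story usually handles it.)
    for _ in range(20):
        e1 = v1 - np.tanh(v0 @ W1)   # prediction error at the hidden layer
        e2 = v2 - v1 @ W2            # prediction error at the output layer
        v1 += infer_lr * (-e1 + e2 @ W2.T)

    # Weight updates use only each layer's own error and activity.
    e1 = v1 - np.tanh(v0 @ W1)
    e2 = v2 - v1 @ W2
    W2 += lr * (v1.T @ e2) / len(x)
    W1 += lr * (v0.T @ (e1 * (1 - np.tanh(v0 @ W1) ** 2))) / len(x)
```

The claim of the recent papers is that, after this kind of settling, the local error units end up carrying approximately the same gradients that backpropagation would have computed, so the two procedures learn the same thing.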
A series of recent papers helps flesh this out. Predictive coding can approximate backpropagation without needing backwards information transfer. The most recent paper shows that you can do this for arbitrary computational graphs. lsusr writes:
There are two big implications of this.
1. This paper permanently fuses artificial intelligence and neuroscience into a single mathematical field.
2. This paper opens up possibilities for neuromorphic computing hardware.
I’m not sure I’m as excited as lsusr is; AI and neuroscience have always been a single field in some spiritual sense, and they continue to be very different fields in practice. And I’m not sure what more neuromorphic computers would be good for - lsusr suggests “a computer that doesn’t break when you cut it in half”, but it sounds easier to just avoid letting your computer end up in a situation where that might matter.
But there’s been a debate over whether existing neural nets are already pretty similar to the brain, or whether the brain is doing fundamentally more advanced things that we don’t understand at all. This line of research provides some evidence that artificial and natural intelligence are already more similar than we thought.
The paper: https://arxiv.org/pdf/2006.04182.pdf
The article: https://www.lesswrong.com/posts/JZZENevaLzLLeC3zn/predictive-coding-has-been-unified-with-backpropagation