Flexible machine-learning models that can adapt to changing conditions can be developed by solving the differential equations that describe brain dynamics.
Last year, researchers at MIT announced that they had built "liquid" neural networks, inspired by the brains of small species. These models are flexible and robust, able to learn on the job and adapt to changing conditions, which makes them suited to safety-critical, real-world tasks like driving and flying.
As the number of neurons and synapses grows, however, these models become computationally expensive to run, and they require cumbersome numerical solvers for the differential equations at their core. As with many physical phenomena, that math is difficult to solve, so the solvers must take many small steps to reach a solution.
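To illustrate why numerical solvers need many small steps, here is a minimal sketch (not the researchers' actual solver) using explicit Euler integration on a simple decaying dynamic, dx/dt = -k*x. When the dynamics are fast (large k), a coarse step size blows up, and only very small steps recover the true behavior:

```python
def euler_integrate(k, x0, dt, total_time):
    """Integrate dx/dt = -k*x with explicit Euler steps of size dt."""
    x = x0
    steps = int(total_time / dt)
    for _ in range(steps):
        x += dt * (-k * x)  # one small Euler step
    return x

# Fast dynamics (k=50): a coarse step diverges wildly,
# while a fine step approaches the true value exp(-50) ~ 2e-22.
coarse = euler_integrate(k=50.0, x0=1.0, dt=0.1, total_time=1.0)
fine = euler_integrate(k=50.0, x0=1.0, dt=0.001, total_time=1.0)
```

The fine solution needs 1,000 steps where the coarse one took 10, and that step count multiplies across every neuron and synapse in the network.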
The same team of researchers has now found a way to alleviate this bottleneck. By solving the differential equation that underlies the interaction of two neurons through synapses, they unlocked a new class of fast, efficient artificial-intelligence algorithms. These models share the same traits as liquid neural networks (flexible, causal, robust, and explainable) but are orders of magnitude faster and more scalable. Because such networks remain compact and adaptable after training, they could be used to draw insight from data in the future.
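The advantage of a closed-form solution can be sketched with a toy linear analogue of a leaky neuron, dx/dt = -x/tau + I (this is a simplification for illustration; the researchers' actual derivation handles the nonlinear synaptic interaction). Where a numerical solver must march through many steps, the closed form gives the state at any time t in a single evaluation:

```python
import math

def closed_form_state(x0, tau, current, t):
    """State of dx/dt = -x/tau + current at time t, in one evaluation.

    The steady state is tau*current; the initial condition decays
    toward it exponentially with time constant tau.
    """
    steady = tau * current
    return steady + (x0 - steady) * math.exp(-t / tau)

def euler_state(x0, tau, current, t, dt=1e-4):
    """Same dynamic solved numerically, requiring t/dt steps."""
    x = x0
    for _ in range(int(t / dt)):
        x += dt * (-x / tau + current)
    return x

# One closed-form evaluation replaces ~10,000 Euler steps here.
exact = closed_form_state(x0=0.0, tau=0.5, current=2.0, t=1.0)
approx = euler_state(x0=0.0, tau=0.5, current=2.0, t=1.0)
```

Evaluating a formula once instead of stepping a solver is what makes the resulting networks so much faster at inference time.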