MIT Solved Century-Old Equation for More Efficient AI/ML Algorithms

MIT has solved a century-old differential equation that was causing ‘liquid’ AI’s computational bottleneck

MIT created an AI/ML algorithm that can adapt to new information on the job, not only during initial training. These “liquid” neural networks (liquid in the Bruce Lee sense) literally play 4D chess. Their models require time-series data to operate, which makes them well suited to time-sensitive tasks such as pacemaker monitoring or investment forecasting. The problem is that data throughput has become a bottleneck, and scaling these systems has grown prohibitively expensive, computationally.

Researchers at MIT announced on Tuesday that they had found a way around this restriction, not by expanding the data pipeline but by solving a differential equation that has baffled mathematicians since 1907. Specifically, the team solved “the differential equation behind the interaction of two neurons through synapses… to unlock a new type of fast and efficient artificial intelligence algorithm.”

In a press release on Tuesday, MIT professor and CSAIL director Daniela Rus stated that “the new machine learning models, called ‘CfCs’ [closed-form continuous-time], substitute the differential equation defining the computation of the neuron with a closed-form approximation. This preserves the beautiful properties of liquid networks without requiring numerical integration.” CfC models are causal, compact, and explainable; they are also easy to predict and train. “They open the door to trusted machine learning for safety-critical applications.”
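The speed-up Rus describes comes from evaluating an analytic expression instead of iterating a numerical solver. As a rough intuition only (this is not MIT’s actual CfC formulation; the toy dynamics, function names, and parameters below are illustrative assumptions), consider a neuron whose state decays toward an input-driven target. Its trajectory can be computed either by stepping an ODE solver or in a single closed-form evaluation:

```python
import numpy as np

def ode_step_euler(x, I, W, b, tau, dt, n_steps):
    """Numerically integrate dx/dt = -x/tau + tanh(W @ I + b)
    (a simplified leaky-neuron ODE, not the full liquid-network dynamics)."""
    drive = np.tanh(W @ I + b)
    for _ in range(n_steps):            # cost grows with the number of solver steps
        x = x + dt * (-x / tau + drive)
    return x

def cfc_style_step(x0, I, W, b, tau, t):
    """Closed-form solution of the same ODE: one evaluation, no iteration.
    x(t) = x0 * e^(-t/tau) + tau * drive * (1 - e^(-t/tau))."""
    drive = np.tanh(W @ I + b)
    decay = np.exp(-t / tau)
    return x0 * decay + tau * drive * (1.0 - decay)
```

The closed-form call costs the same regardless of how far ahead in time you evaluate, whereas the solver’s cost scales with its step count; that difference is the flavor of efficiency gain the CfC work targets.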