Adrien - Friday, May 15, 2026

🧠 Continuous learning in AI: drawing inspiration from biological synapses

Continuous learning in artificial intelligence involves a delicate trade-off between forgetting old knowledge and rigidity in the face of new data. In a study published in Nature Communications, scientists used Bayesian approaches inspired by biological synapses to introduce uncertainty and better balance memory and adaptation.

The human brain learns continuously while preserving acquired knowledge, a balance that artificial intelligence (AI) systems still struggle to replicate. When an AI model is trained on new information, it tends either to erase previously acquired knowledge (catastrophic forgetting) or, conversely, to become too rigid to integrate the new data (catastrophic remembering).
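Catastrophic forgetting can be seen even in a toy model. The sketch below (illustrative only, not from the study) trains a single linear weight by plain gradient descent on task A (y = 2x), then on task B (y = -2x); because the update rule has no mechanism protecting old knowledge, performance on task A collapses:

```python
import numpy as np

def mse(w, x, y):
    # Mean squared error of the linear model y_hat = w * x.
    return float(np.mean((w * x - y) ** 2))

def sgd(w, x, y, lr=0.1, steps=200):
    # Plain gradient descent on the MSE; no protection of old knowledge.
    for _ in range(steps):
        grad = np.mean(2 * (w * x - y) * x)  # d(MSE)/dw
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y_a, y_b = 2.0 * x, -2.0 * x          # task A, then task B

w = sgd(0.0, x, y_a)                  # learn task A: w converges near 2
loss_a_before = mse(w, x, y_a)        # near zero
w = sgd(w, x, y_b)                    # learn task B: w is pulled to -2
loss_a_after = mse(w, x, y_a)         # large: task A has been overwritten

print(loss_a_before, loss_a_after)
```

The opposite failure mode, excessive rigidity, appears when updates are throttled so strongly to protect task A that task B can never be learned; continual-learning methods such as MESU aim to sit between the two extremes.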

Scientists from the Center for Nanosciences and Nanotechnologies (C2N, CNRS/Université Paris-Saclay), CEA-Leti, and CEA-List drew inspiration from neuroscience, where recent work suggests that biological synapses follow Bayesian principles: they adjust their representations of the world by weighing new observations against prior knowledge, while accounting for their degree of uncertainty.

Building on this, the team proposed a new continuous learning framework called Metaplasticity from Synaptic Uncertainty (MESU).

In MESU, each connection in the network acts as a Bayesian synapse that maintains its own uncertainty estimate. It adapts its learning speed to the confidence placed in new information, while a gradual forgetting mechanism discounts data deemed less relevant. MESU thus gives concrete form, in a machine-learning setting, to neuroscientific hypotheses about how the brain balances memory stability and cognitive flexibility.
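The idea of an uncertainty-modulated learning rate can be sketched in a few lines. The code below is a simplified illustration of the principle, not the authors' actual MESU equations: each weight is a Gaussian "synapse" with mean `mu` and uncertainty `sigma`; the step size on `mu` scales with `sigma**2` (uncertain synapses learn fast, confident ones are protected), and `sigma` slowly relaxes back toward a prior, standing in for gradual forgetting. The constant `grad_sigma` in the usage loop is a placeholder for consistent evidence shrinking the uncertainty.

```python
class BayesianSynapse:
    """Toy Gaussian synapse: mean mu, uncertainty sigma (illustrative only)."""

    def __init__(self, sigma_prior=1.0, forget=1e-3):
        self.mu = 0.0
        self.sigma = sigma_prior
        self.sigma_prior = sigma_prior
        self.forget = forget                 # relaxation toward the prior

    def update(self, grad_mu, grad_sigma=0.0):
        # Uncertainty-scaled step: confident synapses (small sigma) move little.
        self.mu -= self.sigma ** 2 * grad_mu
        # Consistent evidence shrinks sigma; the prior term slowly re-opens it,
        # so no synapse is frozen forever (gradual forgetting).
        self.sigma -= self.sigma ** 2 * grad_sigma
        self.sigma += self.forget * (self.sigma_prior - self.sigma)
        self.sigma = max(self.sigma, 1e-6)   # keep uncertainty positive

# Usage: repeatedly pull mu toward a target value of 1.0.
syn = BayesianSynapse()
for _ in range(100):
    syn.update(grad_mu=syn.mu - 1.0, grad_sigma=0.5)
# mu has converged near 1 and sigma has shrunk: the synapse is consolidated,
# so any later, conflicting gradient would move it only slowly.
```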

Experiments showed that MESU achieves a robust balance between memorization and adaptation. Across several benchmarks, including animal image classification, permuted digit recognition, and incremental object learning, MESU significantly reduces both forgetting and learning rigidity, while providing reliable uncertainty estimates. It outperforms continuous learning methods based on explicit task consolidation or separation.

Beyond these results, MESU establishes a strong theoretical link between neuroscience and machine learning by formalizing a brain-inspired approach to continuous learning. The team's next step is to extend MESU toward probabilistic models compatible with embedded hardware, making this bio-inspired continuous learning applicable to real-world, low-power AI devices.

Continuous learning corresponds to a sequential training scenario, in which several datasets are presented one after another. In the MESU framework, the neural network's weights follow a probability distribution that, unlike in previous methods, approximates a formulation balancing learning and forgetting.
© Damien Querlioz, C2N