Blog
We show how continual learning benefits from selecting only the more informative (or surprising) new data points. Based on our paper "Continual Learning with Informative Samples - An Empirical Evaluation of Coreset Strategies".
We introduce AdaCL, a new method that adapts continual learning hyperparameters to each new task.
We show how sparse training helps us learn much faster while forgetting less. Based on our CoLLAs paper "Continual Learning with Dynamic Sparse Training".
We introduce MALIBO, a novel and scalable framework that leverages meta-learning for fast and efficient Bayesian optimization.
We present SERENA, a neuro-inspired solution for continual learning that mimics the self-regulated neurogenesis process in the human brain.
On why we wrote our paper.
We visited Probabl in Paris to discuss open source and open science.