Blog

Self-Regulated Neurogenesis for Online Data-Incremental Learning
We present SERENA, a neuro-inspired solution for continual learning that mimics the self-regulated neurogenesis process in the human brain.

Continual Learning with Dynamic Sparse Training
We show how sparse training helps models learn much faster while forgetting less. Based on our CPAL paper "Continual Learning with Dynamic Sparse Training".

Continual Learning with Informative Samples
We show how continual learning benefits from selecting only the most informative (or surprising) new data points.

Meta-learning for Likelihood-free Bayesian Optimization
We introduce MALIBO, a novel and scalable framework that leverages meta-learning for fast and efficient Bayesian optimization.

Adaptive Continual Learning
We introduce AdaCL, a new method that optimally adapts continual learning hyperparameters to every new task.

The AutoML Benchmark
On why we wrote our paper.

OpenML x Probabl Hackathon
We visited Probabl in Paris to discuss open source and open science.