Blog

Continual Learning with Informative Samples
Analysis 2025-06-04

We show how continual learning benefits from selecting only the most informative (or surprising) new data points. Based on our paper "Continual Learning with Informative Samples - An Empirical Evaluation of Coreset Strategies".


Adaptive Continual Learning
Method 2025-06-04

We introduce AdaCL, a new method that adapts continual learning hyperparameters to each new task.


Continual Learning with Dynamic Sparse Training
Analysis 2025-06-04

We show how dynamic sparse training lets us learn much faster while forgetting less. Based on our CoLLAs paper "Continual Learning with Dynamic Sparse Training".


Meta-learning for Likelihood-free Bayesian Optimization
Method 2025-06-04

We introduce MALIBO, a novel and scalable framework that leverages meta-learning for fast and efficient Bayesian optimization.


Self-Regulated Neurogenesis for Online Data-Incremental Learning
Method 2025-06-04

We present SERENA, a neuro-inspired solution for continual learning that mimics the self-regulated neurogenesis process in the human brain.


The AutoML Benchmark
Benchmarking 2024-12-06

Why we wrote our paper "AMLB: an AutoML Benchmark" and what its main contributions are.


OpenML x Probabl Hackathon
Hackathon 2024-09-19

We visited Probabl in Paris for a hackathon and discussions on open source and open science.