
Blog

Self-Regulated Neurogenesis for Online Data-Incremental Learning
Method · 2025-06-05 · CoLLAs 2025

We present SERENA, a neuro-inspired solution for continual learning that mimics the self-regulated neurogenesis process in the human brain.

Continual Learning with Dynamic Sparse Training
Analysis · 2025-06-04 · CPAL 2024

We show how sparse training helps us learn much faster while forgetting less, based on our CPAL paper "Continual Learning with Dynamic Sparse Training".

Continual Learning with Informative Samples
Analysis · 2025-06-03

We show how continual learning benefits from selecting only the more informative (or surprising) new data points.

Meta-learning for Likelihood-free Bayesian Optimization
Method · 2025-06-02 · ICML 2024

We introduce MALIBO, a novel and scalable framework that leverages meta-learning for fast and efficient Bayesian optimization.

Adaptive Continual Learning
Method · 2025-06-01 · ContinualAI Unconference 2023

We introduce AdaCL, a new method that optimally adapts continual learning hyperparameters to every new task.

The AutoML Benchmark
Benchmarking · 2024-12-06 · JMLR

Why we wrote our paper "AMLB: an AutoML Benchmark" and what its main contributions are.

OpenML x Probabl Hackathon
Hackathon · 2024-09-19

We visited Probabl in Paris to discuss open source and open science.
