A fast and simple implementation of RL algorithms, designed to run fully on GPU. This code is a fork of RSL RL, extended with model-based RL algorithms supporting Robotic World Model and Uncertainty-Aware Robotic World Model.
Authors: Chenhao Li, Andreas Krause, Marco Hutter
Affiliation: ETH AI Center, Learning & Adaptive Systems Group and Robotic Systems Lab, ETH Zurich
The package can be installed via PyPI with:

```bash
pip install rsl-rl-lib
```

or by cloning this repository and installing it with:

```bash
git clone https://github.com/leggedrobotics/rsl_rl_rwm.git
cd rsl_rl_rwm
pip install -e .
```

The package supports the following logging frameworks, which can be configured through the `logger` setting:
- Tensorboard: https://www.tensorflow.org/tensorboard/
- Weights & Biases: https://wandb.ai/site
- Neptune: https://docs.neptune.ai/
For a demo configuration of PPO, please check the `example_config.yaml` file.
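As a rough sketch of how the logger is selected in such a configuration (the field names other than `logger` are assumptions for illustration; the authoritative schema is the one in `example_config.yaml`):

```yaml
# Hypothetical configuration sketch -- consult example_config.yaml for the
# actual schema; field names below are assumptions, not the real keys.
algorithm:
  class_name: PPO          # assumed: algorithm selected by class name
  learning_rate: 1.0e-3    # assumed hyperparameter name
logger: wandb              # one of: tensorboard, wandb, neptune (per the list above)
wandb_project: my_project  # assumed: project name used when logger is wandb
```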
If you use the library with model-based reinforcement learning, please cite the following work:
```bibtex
@article{li2025robotic,
  title={Robotic world model: A neural network simulator for robust policy optimization in robotics},
  author={Li, Chenhao and Krause, Andreas and Hutter, Marco},
  journal={arXiv preprint arXiv:2501.10100},
  year={2025}
}
@article{li2025offline,
  title={Uncertainty-aware robotic world model makes offline model-based reinforcement learning work on real robots},
  author={Li, Chenhao and Krause, Andreas and Hutter, Marco},
  journal={arXiv preprint arXiv:2504.16680},
  year={2025}
}
```

