RL Unplugged: A Suite of Benchmarks for Offline Reinforcement Learning

Information

RL Unplugged is a comprehensive suite of benchmark datasets designed for offline reinforcement learning (RL). Offline RL methods allow agents to learn policies from logged datasets, bypassing the need for online interaction with the environment, which is crucial for real-world applications where online exploration may be costly, unsafe, or impractical.

RL Unplugged includes a variety of datasets from diverse domains such as:

  • Atari 2600 games: A popular benchmark for discrete-action environments, with data generated from DQN agents.
  • DM Control Suite: A set of continuous control tasks for simulated robotic environments.
  • DM Locomotion: High-dimensional motor control tasks for simulated humanoid and rodent agents.
  • Real-world RL Suite: Tasks designed to reflect real-world challenges like action delays, stochastic dynamics, and non-stationarity.

The dataset spans different types of environments, including partially observable ones, and supports both discrete and continuous action spaces. It aims to standardize the evaluation of offline RL algorithms and foster reproducibility and accessibility in RL research.

The original dataset is stored in TensorFlow format. We extracted the data and converted it into NumPy files.
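
The exact file layout and field names of the converted data are not documented here, so the snippet below is only a minimal, hypothetical sketch of how such NumPy files might be loaded; the actual filenames, fields, and array shapes depend on the task and the conversion script used.

```python
import numpy as np

# Hypothetical filenames: each .npy file is assumed to hold one field of the
# logged transitions (observations, actions, rewards) as a NumPy array whose
# first dimension indexes transitions.
observations = np.load("observations.npy")
actions = np.load("actions.npy")
rewards = np.load("rewards.npy")

print(observations.shape, actions.shape, rewards.shape)

# Iterate over logged transitions, e.g. to fill an offline replay buffer.
for obs, act, rew in zip(observations, actions, rewards):
    pass  # consume (obs, act, rew) in your offline RL training loop
```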
