---
library_name: stable-baselines3
tags:
  - QbertNoFrameskip-v4
  - deep-reinforcement-learning
  - reinforcement-learning
  - stable-baselines3
model-index:
  - name: QRDQN
    results:
      - task:
          type: reinforcement-learning
          name: reinforcement-learning
        dataset:
          name: QbertNoFrameskip-v4
          type: QbertNoFrameskip-v4
        metrics:
          - type: mean_reward
            value: 23787.50 +/- 2397.09
            name: mean_reward
            verified: false
---

# QRDQN Agent playing QbertNoFrameskip-v4

This is a trained model of a QRDQN agent playing QbertNoFrameskip-v4 using the stable-baselines3 library and the RL Zoo.

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo
SB3: https://github.com/DLR-RM/stable-baselines3
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):

```bash
pip install rl_zoo3
```

```bash
# Download the model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo qrdqn --env QbertNoFrameskip-v4 -orga MattStammers -f logs/
python -m rl_zoo3.enjoy --algo qrdqn --env QbertNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the same commands from anywhere:

```bash
python -m rl_zoo3.load_from_hub --algo qrdqn --env QbertNoFrameskip-v4 -orga MattStammers -f logs/
python -m rl_zoo3.enjoy --algo qrdqn --env QbertNoFrameskip-v4 -f logs/
```
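
Alternatively, here is a minimal sketch of loading the checkpoint directly in Python, without the RL Zoo scripts. The repo id and filename below assume the usual RL Zoo naming convention (`<orga>/<algo>-<env>` and `<algo>-<env>.zip`); they are not taken from this card, so adjust them if they differ.

```python
from huggingface_sb3 import load_from_hub
from sb3_contrib import QRDQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

checkpoint = load_from_hub(
    repo_id="MattStammers/qrdqn-QbertNoFrameskip-v4",  # assumed repo id
    filename="qrdqn-QbertNoFrameskip-v4.zip",          # assumed filename
)

# Recreate the evaluation environment with the same preprocessing the agent
# was trained with: AtariWrapper (applied by make_atari_env) plus 4 stacked frames.
env = VecFrameStack(make_atari_env("QbertNoFrameskip-v4", n_envs=1), n_stack=4)

model = QRDQN.load(checkpoint, env=env)

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```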

## Training (with the RL Zoo)

```bash
python -m rl_zoo3.train --algo qrdqn --env QbertNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo qrdqn --env QbertNoFrameskip-v4 -f logs/ -orga MattStammers
```

## Hyperparameters

```python
OrderedDict([('batch_size', 64),
             ('env_wrapper',
              ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_fraction', 0.025),
             ('frame_stack', 4),
             ('n_timesteps', 50000000.0),
             ('normalize', False),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy')])
```
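
For illustration, the sketch below (not the RL Zoo training code) shows roughly how these hyperparameters map onto a direct `sb3_contrib.QRDQN` call. Anything not listed above (learning rate, buffer size, etc.) falls back to the library or zoo defaults and is omitted here.

```python
from sb3_contrib import QRDQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# env_wrapper: AtariWrapper is applied automatically by make_atari_env;
# frame_stack: 4 corresponds to VecFrameStack(n_stack=4).
env = VecFrameStack(make_atari_env("QbertNoFrameskip-v4"), n_stack=4)

model = QRDQN(
    "CnnPolicy",
    env,
    batch_size=64,
    exploration_fraction=0.025,
    optimize_memory_usage=False,
    verbose=1,
)
model.learn(total_timesteps=50_000_000)  # n_timesteps from the table above
```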

## Environment Arguments

```python
{'render_mode': 'rgb_array'}
```
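
As a small sketch (not the zoo's internal code), these environment arguments are keyword arguments forwarded to `gym.make`; with `make_atari_env` they can be passed via `env_kwargs`:

```python
from stable_baselines3.common.env_util import make_atari_env

# render_mode is forwarded to gym.make for each environment instance.
env = make_atari_env(
    "QbertNoFrameskip-v4",
    n_envs=1,
    env_kwargs={"render_mode": "rgb_array"},
)
```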

## Additional Comments

Training for this agent seems to peak at about 50 million timesteps.

Interestingly, this agent doesn't even seem to care about using the spinners. I guess it gets so good at dodging the snake that it considers them valueless.