lucasbertola committed
Commit 0b11db9
Parent: 3f16f5c

Initial commit

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,85 @@
+ ---
+ library_name: stable-baselines3
+ tags:
+ - BreakoutNoFrameskip-v4
+ - deep-reinforcement-learning
+ - reinforcement-learning
+ - stable-baselines3
+ model-index:
+ - name: DQN
+   results:
+   - metrics:
+     - type: mean_reward
+       value: 11.40 +/- 1.56
+       name: mean_reward
+     task:
+       type: reinforcement-learning
+       name: reinforcement-learning
+     dataset:
+       name: BreakoutNoFrameskip-v4
+       type: BreakoutNoFrameskip-v4
+ ---
+
+ # **DQN** Agent playing **BreakoutNoFrameskip-v4**
+ This is a trained model of a **DQN** agent playing **BreakoutNoFrameskip-v4**
+ using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
+ and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
+
+ The RL Zoo is a training framework for Stable Baselines3
+ reinforcement learning agents,
+ with hyperparameter optimization and pre-trained agents included.
+
+ ## Usage (with SB3 RL Zoo)
+
+ RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
+ SB3: https://github.com/DLR-RM/stable-baselines3<br/>
+ SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
+
+ Install the RL Zoo (with SB3 and SB3-Contrib):
+ ```bash
+ pip install rl_zoo3
+ ```
+
+ Download the model and watch the agent play:
+ ```bash
+ # Download the model and save it into the logs/ folder
+ python -m rl_zoo3.load_from_hub --algo dqn --env BreakoutNoFrameskip-v4 -orga lucasbertola -f logs/
+ python -m rl_zoo3.enjoy --algo dqn --env BreakoutNoFrameskip-v4 -f logs/
+ ```
+
+ If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the same commands from anywhere:
+ ```bash
+ python -m rl_zoo3.load_from_hub --algo dqn --env BreakoutNoFrameskip-v4 -orga lucasbertola -f logs/
+ python -m rl_zoo3.enjoy --algo dqn --env BreakoutNoFrameskip-v4 -f logs/
+ ```
+
+ ## Training (with the RL Zoo)
+ ```bash
+ python -m rl_zoo3.train --algo dqn --env BreakoutNoFrameskip-v4 -f logs/
+ # Upload the model and generate a video (when possible)
+ python -m rl_zoo3.push_to_hub --algo dqn --env BreakoutNoFrameskip-v4 -f logs/ -orga lucasbertola
+ ```
+
+ ## Hyperparameters
+ ```python
+ OrderedDict([('batch_size', 32),
+              ('buffer_size', 100000),
+              ('env_wrapper',
+               ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
+              ('exploration_final_eps', 0.01),
+              ('exploration_fraction', 0.1),
+              ('frame_stack', 4),
+              ('gradient_steps', 1),
+              ('learning_rate', 0.0001),
+              ('learning_starts', 100000),
+              ('n_timesteps', 200000.0),
+              ('optimize_memory_usage', False),
+              ('policy', 'CnnPolicy'),
+              ('target_update_interval', 1000),
+              ('train_freq', 4),
+              ('normalize', False)])
+ ```
+
+ ## Environment Arguments
+ ```python
+ {'render_mode': 'rgb_array'}
+ ```
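Beyond the RL Zoo CLI, the saved agent can also be loaded directly with stable-baselines3. The sketch below is an editorial illustration, not part of the committed card: it assumes `dqn-BreakoutNoFrameskip-v4.zip` from this commit sits in the working directory and that `stable-baselines3`, `gymnasium` and the Atari extras are installed; the `AtariWrapper` plus 4-frame stack mirrors the `env_wrapper`/`frame_stack` hyperparameters above but is an assumption about the exact preprocessing.

```python
# Sketch only: load the committed agent with stable-baselines3 and run it.
# Requires the Atari ROMs, e.g. pip install "gymnasium[atari,accept-rom-license]".
import gymnasium as gym
from stable_baselines3 import DQN
from stable_baselines3.common.atari_wrappers import AtariWrapper
from stable_baselines3.common.vec_env import DummyVecEnv, VecFrameStack

def make_env():
    # Same preprocessing as the env_wrapper + frame_stack hyperparameters above.
    return AtariWrapper(gym.make("BreakoutNoFrameskip-v4", render_mode="rgb_array"))

env = VecFrameStack(DummyVecEnv([make_env]), n_stack=4)
model = DQN.load("dqn-BreakoutNoFrameskip-v4.zip", env=env)

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```

The 4-frame stack has to match the `frame_stack: 4` entry in config.yml; with a different stack size the observation shape would not match the saved `CnnPolicy`.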
args.yml ADDED
@@ -0,0 +1,81 @@
+ !!python/object/apply:collections.OrderedDict
+ - - - algo
+     - dqn
+   - - conf_file
+     - dqn.yml
+   - - device
+     - auto
+   - - env
+     - BreakoutNoFrameskip-v4
+   - - env_kwargs
+     - null
+   - - eval_episodes
+     - 5
+   - - eval_freq
+     - 25000
+   - - gym_packages
+     - []
+   - - hyperparams
+     - null
+   - - log_folder
+     - logs/
+   - - log_interval
+     - -1
+   - - max_total_trials
+     - null
+   - - n_eval_envs
+     - 1
+   - - n_evaluations
+     - null
+   - - n_jobs
+     - 1
+   - - n_startup_trials
+     - 10
+   - - n_timesteps
+     - -1
+   - - n_trials
+     - 500
+   - - no_optim_plots
+     - false
+   - - num_threads
+     - -1
+   - - optimization_log_path
+     - null
+   - - optimize_hyperparameters
+     - false
+   - - progress
+     - false
+   - - pruner
+     - median
+   - - sampler
+     - tpe
+   - - save_freq
+     - -1
+   - - save_replay_buffer
+     - false
+   - - seed
+     - 558558252
+   - - storage
+     - null
+   - - study_name
+     - null
+   - - tensorboard_log
+     - ''
+   - - track
+     - false
+   - - trained_agent
+     - ''
+   - - truncate_last_trajectory
+     - true
+   - - uuid
+     - false
+   - - vec_env
+     - dummy
+   - - verbose
+     - 1
+   - - wandb_entity
+     - null
+   - - wandb_project_name
+     - sb3
+   - - wandb_tags
+     - []
config.yml ADDED
@@ -0,0 +1,29 @@
+ !!python/object/apply:collections.OrderedDict
+ - - - batch_size
+     - 32
+   - - buffer_size
+     - 100000
+   - - env_wrapper
+     - - stable_baselines3.common.atari_wrappers.AtariWrapper
+   - - exploration_final_eps
+     - 0.01
+   - - exploration_fraction
+     - 0.1
+   - - frame_stack
+     - 4
+   - - gradient_steps
+     - 1
+   - - learning_rate
+     - 0.0001
+   - - learning_starts
+     - 100000
+   - - n_timesteps
+     - 200000.0
+   - - optimize_memory_usage
+     - false
+   - - policy
+     - CnnPolicy
+   - - target_update_interval
+     - 1000
+   - - train_freq
+     - 4
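Both args.yml and config.yml are serialized with the `!!python/object/apply:collections.OrderedDict` tag, which `yaml.safe_load` refuses to construct. Below is a minimal sketch of reading them back with PyYAML, assuming the files are in the working directory; the unsafe loader executes the Python tag, so only use it on files you trust, such as these written by rl_zoo3 itself.

```python
# Sketch only: load the RL Zoo argument/hyperparameter files back into Python.
import yaml  # PyYAML >= 5.1 provides yaml.unsafe_load

with open("config.yml") as f:
    hyperparams = yaml.unsafe_load(f)  # collections.OrderedDict of hyperparameters

with open("args.yml") as f:
    cli_args = yaml.unsafe_load(f)     # collections.OrderedDict of CLI arguments

print(hyperparams["learning_rate"])    # 0.0001
print(cli_args["seed"])                # 558558252
```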
dqn-BreakoutNoFrameskip-v4.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:446f55362f9843b0852ed054c909ca43a17663637c47197d1d5730f4ea22528b
+ size 27202099
dqn-BreakoutNoFrameskip-v4/_stable_baselines3_version ADDED
@@ -0,0 +1 @@
+ 2.0.0a5
dqn-BreakoutNoFrameskip-v4/data ADDED
The diff for this file is too large to render. See raw diff
 
dqn-BreakoutNoFrameskip-v4/policy.optimizer.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d768339be60b73633c6fff1e920400190a30534b8edf34c4f0a2fa07a196ac7e
+ size 13497547
dqn-BreakoutNoFrameskip-v4/policy.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f57086d38517220ee36861f2c65dc03708569b112a165e9408394660c7a10f5
+ size 13496745
dqn-BreakoutNoFrameskip-v4/pytorch_variables.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d030ad8db708280fcae77d87e973102039acd23a11bdecc3db8eb6c0ac940ee1
+ size 431
dqn-BreakoutNoFrameskip-v4/system_info.txt ADDED
@@ -0,0 +1,9 @@
+ - OS: Windows-10-10.0.22621-SP0 10.0.22621
+ - Python: 3.11.4
+ - Stable-Baselines3: 2.0.0a5
+ - PyTorch: 2.0.1+cu118
+ - GPU Enabled: True
+ - Numpy: 1.25.0
+ - Cloudpickle: 2.2.1
+ - Gymnasium: 0.28.1
+ - OpenAI Gym: 0.25.2
env_kwargs.yml ADDED
@@ -0,0 +1 @@
+ render_mode: rgb_array
replay.mp4 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5bb022645f50c002440a6ed274bbaac4570407eab5f988fd2459157be6432e42
+ size 72041
results.json ADDED
@@ -0,0 +1 @@
+ {"mean_reward": 11.4, "std_reward": 1.5620499351813308, "is_deterministic": false, "n_eval_episodes": 10, "eval_datetime": "2023-07-02T15:37:06.664603"}
train_eval_metrics.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3b3903a07aa8134ce3e93ea7b461b90135cee7eb7b3b53096a3b57495b3bc5cf
+ size 22032