ThNaToS committed on
Commit f200fa3 · verified · 1 Parent(s): 85f1fc2

Upload folder using huggingface_hub

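The commit message above says the folder was pushed with the huggingface_hub client. A minimal sketch of that kind of upload, assuming the local folder is the Colab training directory that appears in sf_log.txt; the repo id comes from the push URL logged below.

# Sketch of an upload like this commit via the public huggingface_hub API.
# The folder path is assumed from the training directory in sf_log.txt.
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="/content/train_dir/default_experiment",  # assumed local path
    repo_id="ThNaToS/rl_course_vizdoom_health_gathering_supreme",
    commit_message="Upload folder using huggingface_hub",
)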
.summary/0/events.out.tfevents.1722513751.ea6709614f88 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1124f386aae3b52711033cca0081f5cb66442ca78626d91d9ed1449dd41d7cdd
+ size 452605
README.md CHANGED
@@ -15,7 +15,7 @@ model-index:
   type: doom_health_gathering_supreme
   metrics:
   - type: mean_reward
- value: 5.60 +/- 1.38
+ value: 10.15 +/- 4.96
   name: mean_reward
   verified: false
  ---
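The only change to the model card is the reported metric: the mean episode reward climbs from 5.60 +/- 1.38 to 10.15 +/- 4.96 after the extra 2M training steps. The figure reads as mean ± standard deviation over evaluation episodes; a trivial, hypothetical sketch of how such a value is formatted:

# Hypothetical illustration of the "mean +/- std" metric format used above;
# episode_rewards stands in for the per-episode rewards gathered at evaluation.
import statistics

episode_rewards = [12.3, 4.8, 15.1, 8.4]  # made-up example values
mean = statistics.mean(episode_rewards)
std = statistics.pstdev(episode_rewards)
print(f"value: {mean:.2f} +/- {std:.2f}")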
checkpoint_p0/best_000000929_3805184_reward_22.984.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b820a60b5456f582f58ef205862d4fd2c97ac5d4f48a944bf720c60efabe3b7b
+ size 34929243
checkpoint_p0/checkpoint_000000898_3678208.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e700719fa2e017844c2ab4daf38f797b684f5696fd09a753e34d93ad34e2a496
+ size 34929669
checkpoint_p0/checkpoint_000000978_4005888.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7dc8fbf1c5b36b65e0825213a61508ff926f04e640ba12ac83a6928e996336b
+ size 34929669
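The checkpoint entries above are Git LFS pointer files (version, oid, size); the actual weight tensors live in LFS storage. A hedged sketch of pulling the best checkpoint back down with huggingface_hub and opening it with PyTorch, assuming it is a standard torch.save archive:

# Sketch: fetch one of the LFS-backed checkpoints added in this commit and
# inspect it. Assumes the .pth file is a plain torch.save() archive.
import torch
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="ThNaToS/rl_course_vizdoom_health_gathering_supreme",
    filename="checkpoint_p0/best_000000929_3805184_reward_22.984.pth",
)
state = torch.load(ckpt_path, map_location="cpu", weights_only=False)
print(sorted(state.keys()))  # expected: model/optimizer state, train_step, env_steps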
config.json CHANGED
@@ -65,7 +65,7 @@
   "summaries_use_frameskip": true,
   "heartbeat_interval": 20,
   "heartbeat_reporting_interval": 600,
- "train_for_env_steps": 2000000,
+ "train_for_env_steps": 4000000,
   "train_for_seconds": 10000000000,
   "save_every_sec": 120,
   "keep_checkpoints": 2,
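This is the only functional change in config.json: the training budget is doubled from 2M to 4M environment steps, which is what lets the existing experiment resume instead of stopping immediately. The run itself passed --train_for_env_steps=4000000 on the command line (see sf_log.txt below); a rough sketch of the equivalent manual edit to the saved config, offered only as an assumption-laden illustration:

# Sketch: bump the step budget in a saved Sample Factory experiment config.
# The path matches the log below; editing the JSON by hand is an assumption --
# the actual run used the --train_for_env_steps=4000000 command-line override.
import json

config_path = "/content/train_dir/default_experiment/config.json"
with open(config_path) as f:
    cfg = json.load(f)

cfg["train_for_env_steps"] = 4_000_000  # was 2_000_000
with open(config_path, "w") as f:
    json.dump(cfg, f, indent=4)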
replay.mp4 CHANGED
@@ -1,3 +1,3 @@
   version https://git-lfs.github.com/spec/v1
- oid sha256:ba09fdf676b9674ae9ae7089c2e9338c70ce5bf920a002ccbde8e36b573274ab
- size 9864889
+ oid sha256:04d0aa7afcc208b9ff12043189c8ab1221af36bba574c4c184b07e07c3677cd4
+ size 18994842
sf_log.txt CHANGED
@@ -1899,3 +1899,970 @@ main_loop: 74.4915
   [2024-08-01 12:01:10,320][00719] Avg episode rewards: #0: 8.600, true rewards: #0: 5.600
   [2024-08-01 12:01:10,322][00719] Avg episode reward: 8.600, avg true_objective: 5.600
   [2024-08-01 12:01:40,769][00719] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
1902
+ [2024-08-01 12:01:49,880][00719] The model has been pushed to https://huggingface.co/ThNaToS/rl_course_vizdoom_health_gathering_supreme
1903
+ [2024-08-01 12:02:31,800][00719] Environment doom_basic already registered, overwriting...
1904
+ [2024-08-01 12:02:31,801][00719] Environment doom_two_colors_easy already registered, overwriting...
1905
+ [2024-08-01 12:02:31,804][00719] Environment doom_two_colors_hard already registered, overwriting...
1906
+ [2024-08-01 12:02:31,805][00719] Environment doom_dm already registered, overwriting...
1907
+ [2024-08-01 12:02:31,807][00719] Environment doom_dwango5 already registered, overwriting...
1908
+ [2024-08-01 12:02:31,809][00719] Environment doom_my_way_home_flat_actions already registered, overwriting...
1909
+ [2024-08-01 12:02:31,810][00719] Environment doom_defend_the_center_flat_actions already registered, overwriting...
1910
+ [2024-08-01 12:02:31,812][00719] Environment doom_my_way_home already registered, overwriting...
1911
+ [2024-08-01 12:02:31,813][00719] Environment doom_deadly_corridor already registered, overwriting...
1912
+ [2024-08-01 12:02:31,814][00719] Environment doom_defend_the_center already registered, overwriting...
1913
+ [2024-08-01 12:02:31,816][00719] Environment doom_defend_the_line already registered, overwriting...
1914
+ [2024-08-01 12:02:31,818][00719] Environment doom_health_gathering already registered, overwriting...
1915
+ [2024-08-01 12:02:31,819][00719] Environment doom_health_gathering_supreme already registered, overwriting...
1916
+ [2024-08-01 12:02:31,820][00719] Environment doom_battle already registered, overwriting...
1917
+ [2024-08-01 12:02:31,822][00719] Environment doom_battle2 already registered, overwriting...
1918
+ [2024-08-01 12:02:31,823][00719] Environment doom_duel_bots already registered, overwriting...
1919
+ [2024-08-01 12:02:31,824][00719] Environment doom_deathmatch_bots already registered, overwriting...
1920
+ [2024-08-01 12:02:31,825][00719] Environment doom_duel already registered, overwriting...
1921
+ [2024-08-01 12:02:31,827][00719] Environment doom_deathmatch_full already registered, overwriting...
1922
+ [2024-08-01 12:02:31,828][00719] Environment doom_benchmark already registered, overwriting...
1923
+ [2024-08-01 12:02:31,829][00719] register_encoder_factory: <function make_vizdoom_encoder at 0x7e9b3fe57370>
1924
+ [2024-08-01 12:02:31,853][00719] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
1925
+ [2024-08-01 12:02:31,854][00719] Overriding arg 'train_for_env_steps' with value 4000000 passed from command line
1926
+ [2024-08-01 12:02:31,861][00719] Experiment dir /content/train_dir/default_experiment already exists!
1927
+ [2024-08-01 12:02:31,862][00719] Resuming existing experiment from /content/train_dir/default_experiment...
1928
+ [2024-08-01 12:02:31,866][00719] Weights and Biases integration disabled
1929
+ [2024-08-01 12:02:31,869][00719] Environment var CUDA_VISIBLE_DEVICES is 0
1930
+
1931
+ [2024-08-01 12:02:33,787][00719] Starting experiment with the following configuration:
1932
+ help=False
1933
+ algo=APPO
1934
+ env=doom_health_gathering_supreme
1935
+ experiment=default_experiment
1936
+ train_dir=/content/train_dir
1937
+ restart_behavior=resume
1938
+ device=gpu
1939
+ seed=None
1940
+ num_policies=1
1941
+ async_rl=True
1942
+ serial_mode=False
1943
+ batched_sampling=False
1944
+ num_batches_to_accumulate=2
1945
+ worker_num_splits=2
1946
+ policy_workers_per_policy=1
1947
+ max_policy_lag=1000
1948
+ num_workers=8
1949
+ num_envs_per_worker=4
1950
+ batch_size=1024
1951
+ num_batches_per_epoch=1
1952
+ num_epochs=1
1953
+ rollout=32
1954
+ recurrence=32
1955
+ shuffle_minibatches=False
1956
+ gamma=0.99
1957
+ reward_scale=1.0
1958
+ reward_clip=1000.0
1959
+ value_bootstrap=False
1960
+ normalize_returns=True
1961
+ exploration_loss_coeff=0.001
1962
+ value_loss_coeff=0.5
1963
+ kl_loss_coeff=0.0
1964
+ exploration_loss=symmetric_kl
1965
+ gae_lambda=0.95
1966
+ ppo_clip_ratio=0.1
1967
+ ppo_clip_value=0.2
1968
+ with_vtrace=False
1969
+ vtrace_rho=1.0
1970
+ vtrace_c=1.0
1971
+ optimizer=adam
1972
+ adam_eps=1e-06
1973
+ adam_beta1=0.9
1974
+ adam_beta2=0.999
1975
+ max_grad_norm=4.0
1976
+ learning_rate=0.0001
1977
+ lr_schedule=constant
1978
+ lr_schedule_kl_threshold=0.008
1979
+ lr_adaptive_min=1e-06
1980
+ lr_adaptive_max=0.01
1981
+ obs_subtract_mean=0.0
1982
+ obs_scale=255.0
1983
+ normalize_input=True
1984
+ normalize_input_keys=None
1985
+ decorrelate_experience_max_seconds=0
1986
+ decorrelate_envs_on_one_worker=True
1987
+ actor_worker_gpus=[]
1988
+ set_workers_cpu_affinity=True
1989
+ force_envs_single_thread=False
1990
+ default_niceness=0
1991
+ log_to_file=True
1992
+ experiment_summaries_interval=10
1993
+ flush_summaries_interval=30
1994
+ stats_avg=100
1995
+ summaries_use_frameskip=True
1996
+ heartbeat_interval=20
1997
+ heartbeat_reporting_interval=600
1998
+ train_for_env_steps=4000000
1999
+ train_for_seconds=10000000000
2000
+ save_every_sec=120
2001
+ keep_checkpoints=2
2002
+ load_checkpoint_kind=latest
2003
+ save_milestones_sec=-1
2004
+ save_best_every_sec=5
2005
+ save_best_metric=reward
2006
+ save_best_after=100000
2007
+ benchmark=False
2008
+ encoder_mlp_layers=[512, 512]
2009
+ encoder_conv_architecture=convnet_simple
2010
+ encoder_conv_mlp_layers=[512]
2011
+ use_rnn=True
2012
+ rnn_size=512
2013
+ rnn_type=gru
2014
+ rnn_num_layers=1
2015
+ decoder_mlp_layers=[]
2016
+ nonlinearity=elu
2017
+ policy_initialization=orthogonal
2018
+ policy_init_gain=1.0
2019
+ actor_critic_share_weights=True
2020
+ adaptive_stddev=True
2021
+ continuous_tanh_scale=0.0
2022
+ initial_stddev=1.0
2023
+ use_env_info_cache=False
2024
+ env_gpu_actions=False
2025
+ env_gpu_observations=True
2026
+ env_frameskip=4
2027
+ env_framestack=1
2028
+ pixel_format=CHW
2029
+ use_record_episode_statistics=False
2030
+ with_wandb=False
2031
+ wandb_user=None
2032
+ wandb_project=sample_factory
2033
+ wandb_group=None
2034
+ wandb_job_type=SF
2035
+ wandb_tags=[]
2036
+ with_pbt=False
2037
+ pbt_mix_policies_in_one_env=True
2038
+ pbt_period_env_steps=5000000
2039
+ pbt_start_mutation=20000000
2040
+ pbt_replace_fraction=0.3
2041
+ pbt_mutation_rate=0.15
2042
+ pbt_replace_reward_gap=0.1
2043
+ pbt_replace_reward_gap_absolute=1e-06
2044
+ pbt_optimize_gamma=False
2045
+ pbt_target_objective=true_objective
2046
+ pbt_perturb_min=1.1
2047
+ pbt_perturb_max=1.5
2048
+ num_agents=-1
2049
+ num_humans=0
2050
+ num_bots=-1
2051
+ start_bot_difficulty=None
2052
+ timelimit=None
2053
+ res_w=128
2054
+ res_h=72
2055
+ wide_aspect_ratio=False
2056
+ eval_env_frameskip=1
2057
+ fps=35
2058
+ command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000
2059
+ cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000}
2060
+ git_hash=unknown
2061
+ git_repo_name=not a git repository
2062
+ [2024-08-01 12:02:33,792][00719] Saving configuration to /content/train_dir/default_experiment/config.json...
2063
+ [2024-08-01 12:02:33,795][00719] Rollout worker 0 uses device cpu
2064
+ [2024-08-01 12:02:33,796][00719] Rollout worker 1 uses device cpu
2065
+ [2024-08-01 12:02:33,797][00719] Rollout worker 2 uses device cpu
2066
+ [2024-08-01 12:02:33,798][00719] Rollout worker 3 uses device cpu
2067
+ [2024-08-01 12:02:33,800][00719] Rollout worker 4 uses device cpu
2068
+ [2024-08-01 12:02:33,801][00719] Rollout worker 5 uses device cpu
2069
+ [2024-08-01 12:02:33,802][00719] Rollout worker 6 uses device cpu
2070
+ [2024-08-01 12:02:33,804][00719] Rollout worker 7 uses device cpu
2071
+ [2024-08-01 12:02:33,890][00719] Using GPUs [0] for process 0 (actually maps to GPUs [0])
2072
+ [2024-08-01 12:02:33,893][00719] InferenceWorker_p0-w0: min num requests: 2
2073
+ [2024-08-01 12:02:33,925][00719] Starting all processes...
2074
+ [2024-08-01 12:02:33,926][00719] Starting process learner_proc0
2075
+ [2024-08-01 12:02:33,975][00719] Starting all processes...
2076
+ [2024-08-01 12:02:33,986][00719] Starting process inference_proc0-0
2077
+ [2024-08-01 12:02:33,986][00719] Starting process rollout_proc0
2078
+ [2024-08-01 12:02:33,988][00719] Starting process rollout_proc1
2079
+ [2024-08-01 12:02:33,988][00719] Starting process rollout_proc2
2080
+ [2024-08-01 12:02:33,988][00719] Starting process rollout_proc3
2081
+ [2024-08-01 12:02:33,988][00719] Starting process rollout_proc4
2082
+ [2024-08-01 12:02:33,988][00719] Starting process rollout_proc5
2083
+ [2024-08-01 12:02:33,988][00719] Starting process rollout_proc6
2084
+ [2024-08-01 12:02:33,988][00719] Starting process rollout_proc7
2085
+ [2024-08-01 12:02:47,832][11824] Using GPUs [0] for process 0 (actually maps to GPUs [0])
2086
+ [2024-08-01 12:02:47,832][11824] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
2087
+ [2024-08-01 12:02:47,904][11824] Num visible devices: 1
2088
+ [2024-08-01 12:02:47,942][11824] Starting seed is not provided
2089
+ [2024-08-01 12:02:47,943][11824] Using GPUs [0] for process 0 (actually maps to GPUs [0])
2090
+ [2024-08-01 12:02:47,943][11824] Initializing actor-critic model on device cuda:0
2091
+ [2024-08-01 12:02:47,943][11824] RunningMeanStd input shape: (3, 72, 128)
2092
+ [2024-08-01 12:02:47,944][11824] RunningMeanStd input shape: (1,)
2093
+ [2024-08-01 12:02:48,044][11824] ConvEncoder: input_channels=3
2094
+ [2024-08-01 12:02:48,365][11841] Worker 3 uses CPU cores [1]
2095
+ [2024-08-01 12:02:48,394][11840] Worker 2 uses CPU cores [0]
2096
+ [2024-08-01 12:02:48,498][11838] Worker 0 uses CPU cores [0]
2097
+ [2024-08-01 12:02:48,503][11842] Worker 4 uses CPU cores [0]
2098
+ [2024-08-01 12:02:48,593][11843] Worker 5 uses CPU cores [1]
2099
+ [2024-08-01 12:02:48,638][11839] Worker 1 uses CPU cores [1]
2100
+ [2024-08-01 12:02:48,661][11837] Using GPUs [0] for process 0 (actually maps to GPUs [0])
2101
+ [2024-08-01 12:02:48,661][11837] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
2102
+ [2024-08-01 12:02:48,680][11837] Num visible devices: 1
2103
+ [2024-08-01 12:02:48,688][11848] Worker 7 uses CPU cores [1]
2104
+ [2024-08-01 12:02:48,718][11849] Worker 6 uses CPU cores [0]
2105
+ [2024-08-01 12:02:48,731][11824] Conv encoder output size: 512
2106
+ [2024-08-01 12:02:48,732][11824] Policy head output size: 512
2107
+ [2024-08-01 12:02:48,747][11824] Created Actor Critic model with architecture:
2108
+ [2024-08-01 12:02:48,747][11824] ActorCriticSharedWeights(
2109
+ (obs_normalizer): ObservationNormalizer(
2110
+ (running_mean_std): RunningMeanStdDictInPlace(
2111
+ (running_mean_std): ModuleDict(
2112
+ (obs): RunningMeanStdInPlace()
2113
+ )
2114
+ )
2115
+ )
2116
+ (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
2117
+ (encoder): VizdoomEncoder(
2118
+ (basic_encoder): ConvEncoder(
2119
+ (enc): RecursiveScriptModule(
2120
+ original_name=ConvEncoderImpl
2121
+ (conv_head): RecursiveScriptModule(
2122
+ original_name=Sequential
2123
+ (0): RecursiveScriptModule(original_name=Conv2d)
2124
+ (1): RecursiveScriptModule(original_name=ELU)
2125
+ (2): RecursiveScriptModule(original_name=Conv2d)
2126
+ (3): RecursiveScriptModule(original_name=ELU)
2127
+ (4): RecursiveScriptModule(original_name=Conv2d)
2128
+ (5): RecursiveScriptModule(original_name=ELU)
2129
+ )
2130
+ (mlp_layers): RecursiveScriptModule(
2131
+ original_name=Sequential
2132
+ (0): RecursiveScriptModule(original_name=Linear)
2133
+ (1): RecursiveScriptModule(original_name=ELU)
2134
+ )
2135
+ )
2136
+ )
2137
+ )
2138
+ (core): ModelCoreRNN(
2139
+ (core): GRU(512, 512)
2140
+ )
2141
+ (decoder): MlpDecoder(
2142
+ (mlp): Identity()
2143
+ )
2144
+ (critic_linear): Linear(in_features=512, out_features=1, bias=True)
2145
+ (action_parameterization): ActionParameterizationDefault(
2146
+ (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
2147
+ )
2148
+ )
2149
+ [2024-08-01 12:02:49,006][11824] Using optimizer <class 'torch.optim.adam.Adam'>
2150
+ [2024-08-01 12:02:50,171][11824] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000490_2007040.pth...
2151
+ [2024-08-01 12:02:50,215][11824] Loading model from checkpoint
2152
+ [2024-08-01 12:02:50,218][11824] Loaded experiment state at self.train_step=490, self.env_steps=2007040
2153
+ [2024-08-01 12:02:50,219][11824] Initialized policy 0 weights for model version 490
2154
+ [2024-08-01 12:02:50,226][11824] LearnerWorker_p0 finished initialization!
2155
+ [2024-08-01 12:02:50,226][11824] Using GPUs [0] for process 0 (actually maps to GPUs [0])
2156
+ [2024-08-01 12:02:50,460][11837] RunningMeanStd input shape: (3, 72, 128)
2157
+ [2024-08-01 12:02:50,461][11837] RunningMeanStd input shape: (1,)
2158
+ [2024-08-01 12:02:50,480][11837] ConvEncoder: input_channels=3
2159
+ [2024-08-01 12:02:50,638][11837] Conv encoder output size: 512
2160
+ [2024-08-01 12:02:50,639][11837] Policy head output size: 512
2161
+ [2024-08-01 12:02:50,722][00719] Inference worker 0-0 is ready!
2162
+ [2024-08-01 12:02:50,724][00719] All inference workers are ready! Signal rollout workers to start!
2163
+ [2024-08-01 12:02:50,958][11838] Doom resolution: 160x120, resize resolution: (128, 72)
2164
+ [2024-08-01 12:02:50,957][11849] Doom resolution: 160x120, resize resolution: (128, 72)
2165
+ [2024-08-01 12:02:50,960][11840] Doom resolution: 160x120, resize resolution: (128, 72)
2166
+ [2024-08-01 12:02:50,963][11842] Doom resolution: 160x120, resize resolution: (128, 72)
2167
+ [2024-08-01 12:02:51,044][11848] Doom resolution: 160x120, resize resolution: (128, 72)
2168
+ [2024-08-01 12:02:51,051][11843] Doom resolution: 160x120, resize resolution: (128, 72)
2169
+ [2024-08-01 12:02:51,049][11839] Doom resolution: 160x120, resize resolution: (128, 72)
2170
+ [2024-08-01 12:02:51,053][11841] Doom resolution: 160x120, resize resolution: (128, 72)
2171
+ [2024-08-01 12:02:51,874][00719] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 2007040. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
2172
+ [2024-08-01 12:02:52,246][11840] Decorrelating experience for 0 frames...
2173
+ [2024-08-01 12:02:52,251][11849] Decorrelating experience for 0 frames...
2174
+ [2024-08-01 12:02:53,291][11839] Decorrelating experience for 0 frames...
2175
+ [2024-08-01 12:02:53,284][11843] Decorrelating experience for 0 frames...
2176
+ [2024-08-01 12:02:53,294][11840] Decorrelating experience for 32 frames...
2177
+ [2024-08-01 12:02:53,303][11848] Decorrelating experience for 0 frames...
2178
+ [2024-08-01 12:02:53,442][11838] Decorrelating experience for 0 frames...
2179
+ [2024-08-01 12:02:53,882][00719] Heartbeat connected on Batcher_0
2180
+ [2024-08-01 12:02:53,886][00719] Heartbeat connected on LearnerWorker_p0
2181
+ [2024-08-01 12:02:53,926][00719] Heartbeat connected on InferenceWorker_p0-w0
2182
+ [2024-08-01 12:02:54,058][11841] Decorrelating experience for 0 frames...
2183
+ [2024-08-01 12:02:54,314][11849] Decorrelating experience for 32 frames...
2184
+ [2024-08-01 12:02:54,582][11839] Decorrelating experience for 32 frames...
2185
+ [2024-08-01 12:02:54,586][11843] Decorrelating experience for 32 frames...
2186
+ [2024-08-01 12:02:54,840][11842] Decorrelating experience for 0 frames...
2187
+ [2024-08-01 12:02:55,330][11841] Decorrelating experience for 32 frames...
2188
+ [2024-08-01 12:02:55,466][11849] Decorrelating experience for 64 frames...
2189
+ [2024-08-01 12:02:55,852][11842] Decorrelating experience for 32 frames...
2190
+ [2024-08-01 12:02:55,867][11840] Decorrelating experience for 64 frames...
2191
+ [2024-08-01 12:02:55,892][11848] Decorrelating experience for 32 frames...
2192
+ [2024-08-01 12:02:56,213][11843] Decorrelating experience for 64 frames...
2193
+ [2024-08-01 12:02:56,562][11839] Decorrelating experience for 64 frames...
2194
+ [2024-08-01 12:02:56,870][00719] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 2007040. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
2195
+ [2024-08-01 12:02:57,043][11848] Decorrelating experience for 64 frames...
2196
+ [2024-08-01 12:02:57,334][11838] Decorrelating experience for 32 frames...
2197
+ [2024-08-01 12:02:57,377][11840] Decorrelating experience for 96 frames...
2198
+ [2024-08-01 12:02:57,437][11839] Decorrelating experience for 96 frames...
2199
+ [2024-08-01 12:02:57,558][00719] Heartbeat connected on RolloutWorker_w1
2200
+ [2024-08-01 12:02:57,614][00719] Heartbeat connected on RolloutWorker_w2
2201
+ [2024-08-01 12:02:57,698][11842] Decorrelating experience for 64 frames...
2202
+ [2024-08-01 12:02:57,795][11849] Decorrelating experience for 96 frames...
2203
+ [2024-08-01 12:02:58,142][00719] Heartbeat connected on RolloutWorker_w6
2204
+ [2024-08-01 12:02:58,271][11848] Decorrelating experience for 96 frames...
2205
+ [2024-08-01 12:02:58,497][00719] Heartbeat connected on RolloutWorker_w7
2206
+ [2024-08-01 12:02:59,263][11838] Decorrelating experience for 64 frames...
2207
+ [2024-08-01 12:02:59,391][11842] Decorrelating experience for 96 frames...
2208
+ [2024-08-01 12:02:59,682][11843] Decorrelating experience for 96 frames...
2209
+ [2024-08-01 12:02:59,867][11841] Decorrelating experience for 64 frames...
2210
+ [2024-08-01 12:02:59,948][00719] Heartbeat connected on RolloutWorker_w4
2211
+ [2024-08-01 12:02:59,977][00719] Heartbeat connected on RolloutWorker_w5
2212
+ [2024-08-01 12:03:01,870][00719] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 2007040. Throughput: 0: 75.8. Samples: 758. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
2213
+ [2024-08-01 12:03:01,872][00719] Avg episode reward: [(0, '3.108')]
2214
+ [2024-08-01 12:03:02,138][11838] Decorrelating experience for 96 frames...
2215
+ [2024-08-01 12:03:02,644][00719] Heartbeat connected on RolloutWorker_w0
2216
+ [2024-08-01 12:03:03,412][11824] Signal inference workers to stop experience collection...
2217
+ [2024-08-01 12:03:03,495][11837] InferenceWorker_p0-w0: stopping experience collection
2218
+ [2024-08-01 12:03:03,687][11841] Decorrelating experience for 96 frames...
2219
+ [2024-08-01 12:03:03,771][00719] Heartbeat connected on RolloutWorker_w3
2220
+ [2024-08-01 12:03:04,793][11824] Signal inference workers to resume experience collection...
2221
+ [2024-08-01 12:03:04,794][11837] InferenceWorker_p0-w0: resuming experience collection
2222
+ [2024-08-01 12:03:06,870][00719] Fps is (10 sec: 1228.8, 60 sec: 819.4, 300 sec: 819.4). Total num frames: 2019328. Throughput: 0: 212.3. Samples: 3184. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
2223
+ [2024-08-01 12:03:06,879][00719] Avg episode reward: [(0, '4.088')]
2224
+ [2024-08-01 12:03:11,870][00719] Fps is (10 sec: 2867.2, 60 sec: 1433.9, 300 sec: 1433.9). Total num frames: 2035712. Throughput: 0: 393.5. Samples: 7868. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
2225
+ [2024-08-01 12:03:11,877][00719] Avg episode reward: [(0, '5.510')]
2226
+ [2024-08-01 12:03:14,116][11837] Updated weights for policy 0, policy_version 500 (0.0021)
2227
+ [2024-08-01 12:03:16,870][00719] Fps is (10 sec: 4096.0, 60 sec: 2130.2, 300 sec: 2130.2). Total num frames: 2060288. Throughput: 0: 453.9. Samples: 11346. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
2228
+ [2024-08-01 12:03:16,877][00719] Avg episode reward: [(0, '6.805')]
2229
+ [2024-08-01 12:03:21,870][00719] Fps is (10 sec: 4505.6, 60 sec: 2457.9, 300 sec: 2457.9). Total num frames: 2080768. Throughput: 0: 601.1. Samples: 18032. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
2230
+ [2024-08-01 12:03:21,876][00719] Avg episode reward: [(0, '7.640')]
2231
+ [2024-08-01 12:03:25,257][11837] Updated weights for policy 0, policy_version 510 (0.0027)
2232
+ [2024-08-01 12:03:26,870][00719] Fps is (10 sec: 3276.8, 60 sec: 2457.9, 300 sec: 2457.9). Total num frames: 2093056. Throughput: 0: 624.9. Samples: 21868. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
2233
+ [2024-08-01 12:03:26,878][00719] Avg episode reward: [(0, '7.773')]
2234
+ [2024-08-01 12:03:26,881][11824] Saving new best policy, reward=7.773!
2235
+ [2024-08-01 12:03:31,870][00719] Fps is (10 sec: 3276.8, 60 sec: 2662.7, 300 sec: 2662.7). Total num frames: 2113536. Throughput: 0: 613.2. Samples: 24526. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
2236
+ [2024-08-01 12:03:31,877][00719] Avg episode reward: [(0, '7.471')]
2237
+ [2024-08-01 12:03:35,383][11837] Updated weights for policy 0, policy_version 520 (0.0023)
2238
+ [2024-08-01 12:03:36,870][00719] Fps is (10 sec: 4096.0, 60 sec: 2821.9, 300 sec: 2821.9). Total num frames: 2134016. Throughput: 0: 698.9. Samples: 31446. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
2239
+ [2024-08-01 12:03:36,877][00719] Avg episode reward: [(0, '7.646')]
2240
+ [2024-08-01 12:03:41,870][00719] Fps is (10 sec: 3686.4, 60 sec: 2867.4, 300 sec: 2867.4). Total num frames: 2150400. Throughput: 0: 816.2. Samples: 36730. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2241
+ [2024-08-01 12:03:41,876][00719] Avg episode reward: [(0, '7.895')]
2242
+ [2024-08-01 12:03:41,885][11824] Saving new best policy, reward=7.895!
2243
+ [2024-08-01 12:03:46,870][00719] Fps is (10 sec: 3276.8, 60 sec: 2904.6, 300 sec: 2904.6). Total num frames: 2166784. Throughput: 0: 844.5. Samples: 38760. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2244
+ [2024-08-01 12:03:46,873][00719] Avg episode reward: [(0, '8.047')]
2245
+ [2024-08-01 12:03:46,880][11824] Saving new best policy, reward=8.047!
2246
+ [2024-08-01 12:03:47,549][11837] Updated weights for policy 0, policy_version 530 (0.0035)
2247
+ [2024-08-01 12:03:51,870][00719] Fps is (10 sec: 3686.4, 60 sec: 3003.9, 300 sec: 3003.9). Total num frames: 2187264. Throughput: 0: 932.8. Samples: 45160. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
2248
+ [2024-08-01 12:03:51,872][00719] Avg episode reward: [(0, '9.277')]
2249
+ [2024-08-01 12:03:51,885][11824] Saving new best policy, reward=9.277!
2250
+ [2024-08-01 12:03:56,874][00719] Fps is (10 sec: 4094.1, 60 sec: 3344.8, 300 sec: 3087.7). Total num frames: 2207744. Throughput: 0: 961.5. Samples: 51140. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2251
+ [2024-08-01 12:03:56,877][00719] Avg episode reward: [(0, '8.553')]
2252
+ [2024-08-01 12:03:57,537][11837] Updated weights for policy 0, policy_version 540 (0.0023)
2253
+ [2024-08-01 12:04:01,870][00719] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3042.9). Total num frames: 2220032. Throughput: 0: 927.6. Samples: 53086. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
2254
+ [2024-08-01 12:04:01,874][00719] Avg episode reward: [(0, '8.819')]
2255
+ [2024-08-01 12:04:06,870][00719] Fps is (10 sec: 3688.1, 60 sec: 3754.7, 300 sec: 3167.7). Total num frames: 2244608. Throughput: 0: 905.2. Samples: 58766. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2256
+ [2024-08-01 12:04:06,874][00719] Avg episode reward: [(0, '8.116')]
2257
+ [2024-08-01 12:04:08,753][11837] Updated weights for policy 0, policy_version 550 (0.0020)
2258
+ [2024-08-01 12:04:11,870][00719] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3225.8). Total num frames: 2265088. Throughput: 0: 970.6. Samples: 65544. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2259
+ [2024-08-01 12:04:11,877][00719] Avg episode reward: [(0, '9.217')]
2260
+ [2024-08-01 12:04:16,870][00719] Fps is (10 sec: 3686.3, 60 sec: 3686.4, 300 sec: 3228.8). Total num frames: 2281472. Throughput: 0: 967.1. Samples: 68044. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2261
+ [2024-08-01 12:04:16,876][00719] Avg episode reward: [(0, '9.031')]
2262
+ [2024-08-01 12:04:20,473][11837] Updated weights for policy 0, policy_version 560 (0.0024)
2263
+ [2024-08-01 12:04:21,870][00719] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3231.4). Total num frames: 2297856. Throughput: 0: 915.1. Samples: 72624. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
2264
+ [2024-08-01 12:04:21,873][00719] Avg episode reward: [(0, '9.470')]
2265
+ [2024-08-01 12:04:21,883][11824] Saving new best policy, reward=9.470!
2266
+ [2024-08-01 12:04:26,870][00719] Fps is (10 sec: 4096.1, 60 sec: 3822.9, 300 sec: 3320.1). Total num frames: 2322432. Throughput: 0: 946.0. Samples: 79298. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
2267
+ [2024-08-01 12:04:26,872][00719] Avg episode reward: [(0, '8.755')]
2268
+ [2024-08-01 12:04:29,589][11837] Updated weights for policy 0, policy_version 570 (0.0027)
2269
+ [2024-08-01 12:04:31,870][00719] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3317.9). Total num frames: 2338816. Throughput: 0: 976.8. Samples: 82718. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
2270
+ [2024-08-01 12:04:31,875][00719] Avg episode reward: [(0, '9.244')]
2271
+ [2024-08-01 12:04:31,886][11824] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000571_2338816.pth...
2272
+ [2024-08-01 12:04:32,080][11824] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000455_1863680.pth
2273
+ [2024-08-01 12:04:36,873][00719] Fps is (10 sec: 3275.7, 60 sec: 3686.2, 300 sec: 3315.8). Total num frames: 2355200. Throughput: 0: 928.8. Samples: 86958. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2274
+ [2024-08-01 12:04:36,876][00719] Avg episode reward: [(0, '9.898')]
2275
+ [2024-08-01 12:04:36,877][11824] Saving new best policy, reward=9.898!
2276
+ [2024-08-01 12:04:41,239][11837] Updated weights for policy 0, policy_version 580 (0.0035)
2277
+ [2024-08-01 12:04:41,870][00719] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3351.4). Total num frames: 2375680. Throughput: 0: 938.4. Samples: 93364. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2278
+ [2024-08-01 12:04:41,876][00719] Avg episode reward: [(0, '9.922')]
2279
+ [2024-08-01 12:04:41,886][11824] Saving new best policy, reward=9.922!
2280
+ [2024-08-01 12:04:46,870][00719] Fps is (10 sec: 4507.1, 60 sec: 3891.2, 300 sec: 3419.4). Total num frames: 2400256. Throughput: 0: 969.0. Samples: 96690. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2281
+ [2024-08-01 12:04:46,874][00719] Avg episode reward: [(0, '10.710')]
2282
+ [2024-08-01 12:04:46,877][11824] Saving new best policy, reward=10.710!
2283
+ [2024-08-01 12:04:51,870][00719] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3379.3). Total num frames: 2412544. Throughput: 0: 953.6. Samples: 101676. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
2284
+ [2024-08-01 12:04:51,872][00719] Avg episode reward: [(0, '10.308')]
2285
+ [2024-08-01 12:04:52,437][11837] Updated weights for policy 0, policy_version 590 (0.0019)
2286
+ [2024-08-01 12:04:56,870][00719] Fps is (10 sec: 2867.2, 60 sec: 3686.7, 300 sec: 3375.2). Total num frames: 2428928. Throughput: 0: 917.8. Samples: 106846. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
2287
+ [2024-08-01 12:04:56,872][00719] Avg episode reward: [(0, '10.446')]
2288
+ [2024-08-01 12:05:01,870][00719] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3434.4). Total num frames: 2453504. Throughput: 0: 937.5. Samples: 110232. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2289
+ [2024-08-01 12:05:01,873][00719] Avg episode reward: [(0, '10.711')]
2290
+ [2024-08-01 12:05:02,338][11837] Updated weights for policy 0, policy_version 600 (0.0055)
2291
+ [2024-08-01 12:05:06,870][00719] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3428.6). Total num frames: 2469888. Throughput: 0: 973.5. Samples: 116430. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2292
+ [2024-08-01 12:05:06,877][00719] Avg episode reward: [(0, '10.704')]
2293
+ [2024-08-01 12:05:11,870][00719] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3423.2). Total num frames: 2486272. Throughput: 0: 913.3. Samples: 120396. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
2294
+ [2024-08-01 12:05:11,874][00719] Avg episode reward: [(0, '10.919')]
2295
+ [2024-08-01 12:05:11,888][11824] Saving new best policy, reward=10.919!
2296
+ [2024-08-01 12:05:14,346][11837] Updated weights for policy 0, policy_version 610 (0.0027)
2297
+ [2024-08-01 12:05:16,870][00719] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3446.4). Total num frames: 2506752. Throughput: 0: 912.2. Samples: 123766. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2298
+ [2024-08-01 12:05:16,878][00719] Avg episode reward: [(0, '11.950')]
2299
+ [2024-08-01 12:05:16,881][11824] Saving new best policy, reward=11.950!
2300
+ [2024-08-01 12:05:21,870][00719] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3468.0). Total num frames: 2527232. Throughput: 0: 968.9. Samples: 130556. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
2301
+ [2024-08-01 12:05:21,877][00719] Avg episode reward: [(0, '12.216')]
2302
+ [2024-08-01 12:05:21,888][11824] Saving new best policy, reward=12.216!
2303
+ [2024-08-01 12:05:25,289][11837] Updated weights for policy 0, policy_version 620 (0.0020)
2304
+ [2024-08-01 12:05:26,874][00719] Fps is (10 sec: 3685.0, 60 sec: 3686.2, 300 sec: 3461.8). Total num frames: 2543616. Throughput: 0: 919.0. Samples: 134724. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2305
+ [2024-08-01 12:05:26,880][00719] Avg episode reward: [(0, '12.296')]
2306
+ [2024-08-01 12:05:26,883][11824] Saving new best policy, reward=12.296!
2307
+ [2024-08-01 12:05:31,870][00719] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3456.1). Total num frames: 2560000. Throughput: 0: 893.4. Samples: 136892. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
2308
+ [2024-08-01 12:05:31,872][00719] Avg episode reward: [(0, '12.047')]
2309
+ [2024-08-01 12:05:36,154][11837] Updated weights for policy 0, policy_version 630 (0.0030)
2310
+ [2024-08-01 12:05:36,870][00719] Fps is (10 sec: 3687.8, 60 sec: 3754.9, 300 sec: 3475.5). Total num frames: 2580480. Throughput: 0: 932.2. Samples: 143626. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2311
+ [2024-08-01 12:05:36,876][00719] Avg episode reward: [(0, '12.265')]
2312
+ [2024-08-01 12:05:41,872][00719] Fps is (10 sec: 4095.1, 60 sec: 3754.5, 300 sec: 3493.7). Total num frames: 2600960. Throughput: 0: 940.3. Samples: 149160. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
2313
+ [2024-08-01 12:05:41,874][00719] Avg episode reward: [(0, '11.543')]
2314
+ [2024-08-01 12:05:46,870][00719] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3464.1). Total num frames: 2613248. Throughput: 0: 908.2. Samples: 151100. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
2315
+ [2024-08-01 12:05:46,872][00719] Avg episode reward: [(0, '12.319')]
2316
+ [2024-08-01 12:05:46,880][11824] Saving new best policy, reward=12.319!
2317
+ [2024-08-01 12:05:48,127][11837] Updated weights for policy 0, policy_version 640 (0.0023)
2318
+ [2024-08-01 12:05:51,870][00719] Fps is (10 sec: 3277.5, 60 sec: 3686.4, 300 sec: 3481.7). Total num frames: 2633728. Throughput: 0: 903.7. Samples: 157096. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
2319
+ [2024-08-01 12:05:51,876][00719] Avg episode reward: [(0, '12.954')]
2320
+ [2024-08-01 12:05:51,891][11824] Saving new best policy, reward=12.954!
2321
+ [2024-08-01 12:05:56,870][00719] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3498.3). Total num frames: 2654208. Throughput: 0: 954.7. Samples: 163356. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2322
+ [2024-08-01 12:05:56,875][00719] Avg episode reward: [(0, '12.579')]
2323
+ [2024-08-01 12:05:58,454][11837] Updated weights for policy 0, policy_version 650 (0.0025)
2324
+ [2024-08-01 12:06:01,870][00719] Fps is (10 sec: 3686.2, 60 sec: 3618.1, 300 sec: 3492.4). Total num frames: 2670592. Throughput: 0: 924.3. Samples: 165358. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2325
+ [2024-08-01 12:06:01,873][00719] Avg episode reward: [(0, '12.936')]
2326
+ [2024-08-01 12:06:06,870][00719] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3486.9). Total num frames: 2686976. Throughput: 0: 882.8. Samples: 170282. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
2327
+ [2024-08-01 12:06:06,877][00719] Avg episode reward: [(0, '13.393')]
2328
+ [2024-08-01 12:06:06,879][11824] Saving new best policy, reward=13.393!
2329
+ [2024-08-01 12:06:09,914][11837] Updated weights for policy 0, policy_version 660 (0.0026)
2330
+ [2024-08-01 12:06:11,870][00719] Fps is (10 sec: 4096.3, 60 sec: 3754.7, 300 sec: 3522.6). Total num frames: 2711552. Throughput: 0: 935.5. Samples: 176818. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
2331
+ [2024-08-01 12:06:11,875][00719] Avg episode reward: [(0, '13.332')]
2332
+ [2024-08-01 12:06:16,870][00719] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3516.6). Total num frames: 2727936. Throughput: 0: 949.4. Samples: 179614. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2333
+ [2024-08-01 12:06:16,872][00719] Avg episode reward: [(0, '14.066')]
2334
+ [2024-08-01 12:06:16,874][11824] Saving new best policy, reward=14.066!
2335
+ [2024-08-01 12:06:21,870][00719] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3491.4). Total num frames: 2740224. Throughput: 0: 890.4. Samples: 183694. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2336
+ [2024-08-01 12:06:21,872][00719] Avg episode reward: [(0, '14.692')]
2337
+ [2024-08-01 12:06:21,898][11824] Saving new best policy, reward=14.692!
2338
+ [2024-08-01 12:06:21,902][11837] Updated weights for policy 0, policy_version 670 (0.0027)
2339
+ [2024-08-01 12:06:26,870][00719] Fps is (10 sec: 3686.4, 60 sec: 3686.6, 300 sec: 3524.5). Total num frames: 2764800. Throughput: 0: 911.1. Samples: 190158. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
2340
+ [2024-08-01 12:06:26,872][00719] Avg episode reward: [(0, '14.051')]
2341
+ [2024-08-01 12:06:31,515][11837] Updated weights for policy 0, policy_version 680 (0.0020)
2342
+ [2024-08-01 12:06:31,870][00719] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3537.5). Total num frames: 2785280. Throughput: 0: 940.9. Samples: 193442. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2343
+ [2024-08-01 12:06:31,874][00719] Avg episode reward: [(0, '13.288')]
2344
+ [2024-08-01 12:06:31,889][11824] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000680_2785280.pth...
2345
+ [2024-08-01 12:06:32,064][11824] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000490_2007040.pth
2346
+ [2024-08-01 12:06:36,870][00719] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3513.5). Total num frames: 2797568. Throughput: 0: 901.1. Samples: 197644. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2347
+ [2024-08-01 12:06:36,875][00719] Avg episode reward: [(0, '12.283')]
2348
+ [2024-08-01 12:06:41,870][00719] Fps is (10 sec: 3276.8, 60 sec: 3618.3, 300 sec: 3526.2). Total num frames: 2818048. Throughput: 0: 892.8. Samples: 203534. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2349
+ [2024-08-01 12:06:41,872][00719] Avg episode reward: [(0, '12.177')]
2350
+ [2024-08-01 12:06:43,301][11837] Updated weights for policy 0, policy_version 690 (0.0026)
2351
+ [2024-08-01 12:06:46,870][00719] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3538.3). Total num frames: 2838528. Throughput: 0: 924.4. Samples: 206956. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
2352
+ [2024-08-01 12:06:46,874][00719] Avg episode reward: [(0, '13.925')]
2353
+ [2024-08-01 12:06:51,870][00719] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3532.9). Total num frames: 2854912. Throughput: 0: 934.7. Samples: 212344. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
2354
+ [2024-08-01 12:06:51,876][00719] Avg episode reward: [(0, '14.358')]
2355
+ [2024-08-01 12:06:55,459][11837] Updated weights for policy 0, policy_version 700 (0.0032)
2356
+ [2024-08-01 12:06:56,870][00719] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3527.6). Total num frames: 2871296. Throughput: 0: 889.1. Samples: 216828. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
2357
+ [2024-08-01 12:06:56,875][00719] Avg episode reward: [(0, '14.279')]
2358
+ [2024-08-01 12:07:01,870][00719] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3539.0). Total num frames: 2891776. Throughput: 0: 901.9. Samples: 220200. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
2359
+ [2024-08-01 12:07:01,872][00719] Avg episode reward: [(0, '14.994')]
2360
+ [2024-08-01 12:07:01,884][11824] Saving new best policy, reward=14.994!
2361
+ [2024-08-01 12:07:04,902][11837] Updated weights for policy 0, policy_version 710 (0.0026)
2362
+ [2024-08-01 12:07:06,871][00719] Fps is (10 sec: 4095.4, 60 sec: 3754.6, 300 sec: 3549.9). Total num frames: 2912256. Throughput: 0: 951.2. Samples: 226500. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2363
+ [2024-08-01 12:07:06,874][00719] Avg episode reward: [(0, '13.790')]
2364
+ [2024-08-01 12:07:11,870][00719] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3528.9). Total num frames: 2924544. Throughput: 0: 894.6. Samples: 230416. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2365
+ [2024-08-01 12:07:11,875][00719] Avg episode reward: [(0, '14.033')]
2366
+ [2024-08-01 12:07:16,870][00719] Fps is (10 sec: 3277.3, 60 sec: 3618.1, 300 sec: 3539.6). Total num frames: 2945024. Throughput: 0: 887.0. Samples: 233358. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
2367
+ [2024-08-01 12:07:16,873][00719] Avg episode reward: [(0, '16.495')]
2368
+ [2024-08-01 12:07:16,875][11824] Saving new best policy, reward=16.495!
2369
+ [2024-08-01 12:07:17,181][11837] Updated weights for policy 0, policy_version 720 (0.0028)
2370
+ [2024-08-01 12:07:21,870][00719] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3565.1). Total num frames: 2969600. Throughput: 0: 942.4. Samples: 240054. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
2371
+ [2024-08-01 12:07:21,881][00719] Avg episode reward: [(0, '17.297')]
2372
+ [2024-08-01 12:07:21,893][11824] Saving new best policy, reward=17.297!
2373
+ [2024-08-01 12:07:26,871][00719] Fps is (10 sec: 3685.8, 60 sec: 3618.0, 300 sec: 3544.9). Total num frames: 2981888. Throughput: 0: 908.9. Samples: 244438. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2374
+ [2024-08-01 12:07:26,874][00719] Avg episode reward: [(0, '16.998')]
2375
+ [2024-08-01 12:07:29,677][11837] Updated weights for policy 0, policy_version 730 (0.0017)
2376
+ [2024-08-01 12:07:31,870][00719] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3540.2). Total num frames: 2998272. Throughput: 0: 878.9. Samples: 246508. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2377
+ [2024-08-01 12:07:31,876][00719] Avg episode reward: [(0, '17.987')]
2378
+ [2024-08-01 12:07:31,885][11824] Saving new best policy, reward=17.987!
2379
+ [2024-08-01 12:07:36,870][00719] Fps is (10 sec: 3687.0, 60 sec: 3686.4, 300 sec: 3549.9). Total num frames: 3018752. Throughput: 0: 907.7. Samples: 253190. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
2380
+ [2024-08-01 12:07:36,872][00719] Avg episode reward: [(0, '15.836')]
2381
+ [2024-08-01 12:07:38,634][11837] Updated weights for policy 0, policy_version 740 (0.0017)
2382
+ [2024-08-01 12:07:41,870][00719] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3559.3). Total num frames: 3039232. Throughput: 0: 937.8. Samples: 259028. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
2383
+ [2024-08-01 12:07:41,872][00719] Avg episode reward: [(0, '15.651')]
2384
+ [2024-08-01 12:07:46,870][00719] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3540.7). Total num frames: 3051520. Throughput: 0: 907.6. Samples: 261042. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
2385
+ [2024-08-01 12:07:46,873][00719] Avg episode reward: [(0, '15.549')]
2386
+ [2024-08-01 12:07:50,671][11837] Updated weights for policy 0, policy_version 750 (0.0019)
2387
+ [2024-08-01 12:07:51,870][00719] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3623.9). Total num frames: 3076096. Throughput: 0: 899.4. Samples: 266972. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2388
+ [2024-08-01 12:07:51,877][00719] Avg episode reward: [(0, '14.995')]
2389
+ [2024-08-01 12:07:56,870][00719] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3693.3). Total num frames: 3096576. Throughput: 0: 960.0. Samples: 273618. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2390
+ [2024-08-01 12:07:56,872][00719] Avg episode reward: [(0, '14.879')]
2391
+ [2024-08-01 12:08:01,691][11837] Updated weights for policy 0, policy_version 760 (0.0025)
2392
+ [2024-08-01 12:08:01,875][00719] Fps is (10 sec: 3684.3, 60 sec: 3686.1, 300 sec: 3707.2). Total num frames: 3112960. Throughput: 0: 939.1. Samples: 275622. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
2393
+ [2024-08-01 12:08:01,878][00719] Avg episode reward: [(0, '15.911')]
2394
+ [2024-08-01 12:08:06,870][00719] Fps is (10 sec: 3276.8, 60 sec: 3618.2, 300 sec: 3707.2). Total num frames: 3129344. Throughput: 0: 900.2. Samples: 280562. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2395
+ [2024-08-01 12:08:06,872][00719] Avg episode reward: [(0, '16.049')]
2396
+ [2024-08-01 12:08:11,660][11837] Updated weights for policy 0, policy_version 770 (0.0029)
2397
+ [2024-08-01 12:08:11,870][00719] Fps is (10 sec: 4098.3, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 3153920. Throughput: 0: 953.2. Samples: 287330. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2398
+ [2024-08-01 12:08:11,875][00719] Avg episode reward: [(0, '16.724')]
2399
+ [2024-08-01 12:08:16,872][00719] Fps is (10 sec: 4095.1, 60 sec: 3754.5, 300 sec: 3693.3). Total num frames: 3170304. Throughput: 0: 974.2. Samples: 290350. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
2400
+ [2024-08-01 12:08:16,877][00719] Avg episode reward: [(0, '16.911')]
2401
+ [2024-08-01 12:08:21,870][00719] Fps is (10 sec: 2867.1, 60 sec: 3549.8, 300 sec: 3693.3). Total num frames: 3182592. Throughput: 0: 916.4. Samples: 294428. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
2402
+ [2024-08-01 12:08:21,873][00719] Avg episode reward: [(0, '16.554')]
2403
+ [2024-08-01 12:08:23,691][11837] Updated weights for policy 0, policy_version 780 (0.0017)
2404
+ [2024-08-01 12:08:26,870][00719] Fps is (10 sec: 3687.2, 60 sec: 3754.8, 300 sec: 3707.2). Total num frames: 3207168. Throughput: 0: 936.1. Samples: 301154. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
2405
+ [2024-08-01 12:08:26,874][00719] Avg episode reward: [(0, '16.407')]
2406
+ [2024-08-01 12:08:31,870][00719] Fps is (10 sec: 4505.8, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 3227648. Throughput: 0: 964.1. Samples: 304428. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2407
+ [2024-08-01 12:08:31,872][00719] Avg episode reward: [(0, '17.116')]
2408
+ [2024-08-01 12:08:31,886][11824] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000788_3227648.pth...
2409
+ [2024-08-01 12:08:32,057][11824] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000571_2338816.pth
2410
+ [2024-08-01 12:08:34,083][11837] Updated weights for policy 0, policy_version 790 (0.0024)
2411
+ [2024-08-01 12:08:36,870][00719] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 3239936. Throughput: 0: 929.5. Samples: 308800. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2412
+ [2024-08-01 12:08:36,875][00719] Avg episode reward: [(0, '16.571')]
2413
+ [2024-08-01 12:08:41,870][00719] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 3260416. Throughput: 0: 912.8. Samples: 314692. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
2414
+ [2024-08-01 12:08:41,872][00719] Avg episode reward: [(0, '18.520')]
2415
+ [2024-08-01 12:08:41,886][11824] Saving new best policy, reward=18.520!
2416
+ [2024-08-01 12:08:44,783][11837] Updated weights for policy 0, policy_version 800 (0.0023)
2417
+ [2024-08-01 12:08:46,870][00719] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3721.1). Total num frames: 3284992. Throughput: 0: 942.2. Samples: 318016. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
2418
+ [2024-08-01 12:08:46,872][00719] Avg episode reward: [(0, '18.578')]
2419
+ [2024-08-01 12:08:46,878][11824] Saving new best policy, reward=18.578!
2420
+ [2024-08-01 12:08:51,870][00719] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3693.4). Total num frames: 3297280. Throughput: 0: 952.9. Samples: 323442. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2421
+ [2024-08-01 12:08:51,872][00719] Avg episode reward: [(0, '18.700')]
2422
+ [2024-08-01 12:08:51,941][11824] Saving new best policy, reward=18.700!
2423
+ [2024-08-01 12:08:56,813][11837] Updated weights for policy 0, policy_version 810 (0.0028)
2424
+ [2024-08-01 12:08:56,870][00719] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3721.1). Total num frames: 3317760. Throughput: 0: 908.7. Samples: 328220. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2425
+ [2024-08-01 12:08:56,873][00719] Avg episode reward: [(0, '18.126')]
2426
+ [2024-08-01 12:09:01,870][00719] Fps is (10 sec: 4096.0, 60 sec: 3755.0, 300 sec: 3707.2). Total num frames: 3338240. Throughput: 0: 911.4. Samples: 331362. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
2427
+ [2024-08-01 12:09:01,877][00719] Avg episode reward: [(0, '17.835')]
2428
+ [2024-08-01 12:09:06,539][11837] Updated weights for policy 0, policy_version 820 (0.0019)
2429
+ [2024-08-01 12:09:06,870][00719] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 3358720. Throughput: 0: 962.9. Samples: 337758. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2430
+ [2024-08-01 12:09:06,875][00719] Avg episode reward: [(0, '18.048')]
2431
+ [2024-08-01 12:09:11,870][00719] Fps is (10 sec: 3276.6, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 3371008. Throughput: 0: 905.3. Samples: 341894. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
2432
+ [2024-08-01 12:09:11,873][00719] Avg episode reward: [(0, '18.645')]
2433
+ [2024-08-01 12:09:16,870][00719] Fps is (10 sec: 3276.8, 60 sec: 3686.5, 300 sec: 3707.2). Total num frames: 3391488. Throughput: 0: 901.9. Samples: 345014. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
2434
+ [2024-08-01 12:09:16,873][00719] Avg episode reward: [(0, '21.383')]
2435
+ [2024-08-01 12:09:16,881][11824] Saving new best policy, reward=21.383!
2436
+ [2024-08-01 12:09:18,084][11837] Updated weights for policy 0, policy_version 830 (0.0037)
2437
+ [2024-08-01 12:09:21,870][00719] Fps is (10 sec: 4505.9, 60 sec: 3891.2, 300 sec: 3707.2). Total num frames: 3416064. Throughput: 0: 953.4. Samples: 351702. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
2438
+ [2024-08-01 12:09:21,879][00719] Avg episode reward: [(0, '22.293')]
2439
+ [2024-08-01 12:09:21,889][11824] Saving new best policy, reward=22.293!
2440
+ [2024-08-01 12:09:26,870][00719] Fps is (10 sec: 3686.3, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 3428352. Throughput: 0: 926.3. Samples: 356378. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
2441
+ [2024-08-01 12:09:26,876][00719] Avg episode reward: [(0, '22.552')]
2442
+ [2024-08-01 12:09:26,883][11824] Saving new best policy, reward=22.552!
2443
+ [2024-08-01 12:09:30,219][11837] Updated weights for policy 0, policy_version 840 (0.0029)
2444
+ [2024-08-01 12:09:31,872][00719] Fps is (10 sec: 2866.5, 60 sec: 3618.0, 300 sec: 3693.3). Total num frames: 3444736. Throughput: 0: 897.5. Samples: 358408. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
2445
+ [2024-08-01 12:09:31,874][00719] Avg episode reward: [(0, '22.323')]
2446
+ [2024-08-01 12:09:36,870][00719] Fps is (10 sec: 4096.1, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 3469312. Throughput: 0: 927.7. Samples: 365188. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2447
+ [2024-08-01 12:09:36,876][00719] Avg episode reward: [(0, '20.231')]
2448
+ [2024-08-01 12:09:39,377][11837] Updated weights for policy 0, policy_version 850 (0.0019)
2449
+ [2024-08-01 12:09:41,872][00719] Fps is (10 sec: 4096.1, 60 sec: 3754.5, 300 sec: 3679.4). Total num frames: 3485696. Throughput: 0: 950.4. Samples: 370990. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2450
+ [2024-08-01 12:09:41,876][00719] Avg episode reward: [(0, '18.581')]
2451
+ [2024-08-01 12:09:46,870][00719] Fps is (10 sec: 3276.7, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 3502080. Throughput: 0: 925.0. Samples: 372988. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2452
+ [2024-08-01 12:09:46,877][00719] Avg episode reward: [(0, '19.255')]
2453
+ [2024-08-01 12:09:51,047][11837] Updated weights for policy 0, policy_version 860 (0.0024)
2454
+ [2024-08-01 12:09:51,870][00719] Fps is (10 sec: 3687.3, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 3522560. Throughput: 0: 919.4. Samples: 379132. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2455
+ [2024-08-01 12:09:51,872][00719] Avg episode reward: [(0, '18.368')]
2456
+ [2024-08-01 12:09:56,870][00719] Fps is (10 sec: 4505.7, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 3547136. Throughput: 0: 976.1. Samples: 385820. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2457
+ [2024-08-01 12:09:56,872][00719] Avg episode reward: [(0, '19.757')]
2458
+ [2024-08-01 12:10:01,870][00719] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 3559424. Throughput: 0: 950.2. Samples: 387774. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
2459
+ [2024-08-01 12:10:01,877][00719] Avg episode reward: [(0, '19.424')]
2460
+ [2024-08-01 12:10:02,656][11837] Updated weights for policy 0, policy_version 870 (0.0035)
2461
+ [2024-08-01 12:10:06,870][00719] Fps is (10 sec: 3276.7, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 3579904. Throughput: 0: 906.4. Samples: 392488. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2462
+ [2024-08-01 12:10:06,876][00719] Avg episode reward: [(0, '20.203')]
2463
+ [2024-08-01 12:10:11,870][00719] Fps is (10 sec: 4096.0, 60 sec: 3823.0, 300 sec: 3707.2). Total num frames: 3600384. Throughput: 0: 953.0. Samples: 399264. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
2464
+ [2024-08-01 12:10:11,874][00719] Avg episode reward: [(0, '19.456')]
2465
+ [2024-08-01 12:10:12,417][11837] Updated weights for policy 0, policy_version 880 (0.0031)
2466
+ [2024-08-01 12:10:16,870][00719] Fps is (10 sec: 3686.5, 60 sec: 3754.7, 300 sec: 3693.3). Total num frames: 3616768. Throughput: 0: 973.9. Samples: 402232. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
2467
+ [2024-08-01 12:10:16,872][00719] Avg episode reward: [(0, '20.500')]
2468
+ [2024-08-01 12:10:21,870][00719] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3693.4). Total num frames: 3633152. Throughput: 0: 912.0. Samples: 406226. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
2469
+ [2024-08-01 12:10:21,872][00719] Avg episode reward: [(0, '20.083')]
2470
+ [2024-08-01 12:10:24,437][11837] Updated weights for policy 0, policy_version 890 (0.0025)
2471
+ [2024-08-01 12:10:26,870][00719] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 3653632. Throughput: 0: 930.4. Samples: 412856. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
2472
+ [2024-08-01 12:10:26,875][00719] Avg episode reward: [(0, '20.061')]
2473
+ [2024-08-01 12:10:31,876][00719] Fps is (10 sec: 4093.3, 60 sec: 3822.7, 300 sec: 3707.1). Total num frames: 3674112. Throughput: 0: 961.7. Samples: 416270. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
2474
+ [2024-08-01 12:10:31,882][00719] Avg episode reward: [(0, '20.380')]
2475
+ [2024-08-01 12:10:31,898][11824] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000898_3678208.pth...
2476
+ [2024-08-01 12:10:32,088][11824] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000680_2785280.pth
2477
+ [2024-08-01 12:10:35,101][11837] Updated weights for policy 0, policy_version 900 (0.0034)
2478
+ [2024-08-01 12:10:36,870][00719] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3693.4). Total num frames: 3690496. Throughput: 0: 924.4. Samples: 420730. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
2479
+ [2024-08-01 12:10:36,875][00719] Avg episode reward: [(0, '20.253')]
2480
+ [2024-08-01 12:10:41,870][00719] Fps is (10 sec: 3279.0, 60 sec: 3686.5, 300 sec: 3707.2). Total num frames: 3706880. Throughput: 0: 900.9. Samples: 426360. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-08-01 12:10:41,879][00719] Avg episode reward: [(0, '20.686')]
+ [2024-08-01 12:10:45,678][11837] Updated weights for policy 0, policy_version 910 (0.0019)
+ [2024-08-01 12:10:46,870][00719] Fps is (10 sec: 4096.0, 60 sec: 3823.0, 300 sec: 3721.1). Total num frames: 3731456. Throughput: 0: 931.8. Samples: 429704. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-08-01 12:10:46,872][00719] Avg episode reward: [(0, '22.407')]
+ [2024-08-01 12:10:51,870][00719] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 3747840. Throughput: 0: 953.6. Samples: 435400. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+ [2024-08-01 12:10:51,875][00719] Avg episode reward: [(0, '21.935')]
+ [2024-08-01 12:10:56,870][00719] Fps is (10 sec: 3276.7, 60 sec: 3618.1, 300 sec: 3707.2). Total num frames: 3764224. Throughput: 0: 906.3. Samples: 440046. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2024-08-01 12:10:56,877][00719] Avg episode reward: [(0, '21.694')]
+ [2024-08-01 12:10:57,428][11837] Updated weights for policy 0, policy_version 920 (0.0026)
+ [2024-08-01 12:11:01,870][00719] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 3784704. Throughput: 0: 915.4. Samples: 443424. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-08-01 12:11:01,876][00719] Avg episode reward: [(0, '22.639')]
+ [2024-08-01 12:11:01,884][11824] Saving new best policy, reward=22.639!
+ [2024-08-01 12:11:06,870][00719] Fps is (10 sec: 4096.1, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 3805184. Throughput: 0: 961.8. Samples: 449506. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+ [2024-08-01 12:11:06,876][00719] Avg episode reward: [(0, '22.984')]
+ [2024-08-01 12:11:06,879][11824] Saving new best policy, reward=22.984!
+ [2024-08-01 12:11:08,124][11837] Updated weights for policy 0, policy_version 930 (0.0020)
+ [2024-08-01 12:11:11,870][00719] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 3817472. Throughput: 0: 903.7. Samples: 453522. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2024-08-01 12:11:11,872][00719] Avg episode reward: [(0, '21.402')]
+ [2024-08-01 12:11:16,870][00719] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3721.1). Total num frames: 3837952. Throughput: 0: 892.6. Samples: 456432. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-08-01 12:11:16,873][00719] Avg episode reward: [(0, '20.302')]
+ [2024-08-01 12:11:19,117][11837] Updated weights for policy 0, policy_version 940 (0.0029)
+ [2024-08-01 12:11:21,870][00719] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 3858432. Throughput: 0: 943.2. Samples: 463172. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+ [2024-08-01 12:11:21,872][00719] Avg episode reward: [(0, '20.968')]
+ [2024-08-01 12:11:26,870][00719] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 3874816. Throughput: 0: 926.2. Samples: 468040. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+ [2024-08-01 12:11:26,872][00719] Avg episode reward: [(0, '20.007')]
+ [2024-08-01 12:11:31,401][11837] Updated weights for policy 0, policy_version 950 (0.0033)
+ [2024-08-01 12:11:31,870][00719] Fps is (10 sec: 3276.6, 60 sec: 3618.5, 300 sec: 3707.2). Total num frames: 3891200. Throughput: 0: 897.3. Samples: 470084. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+ [2024-08-01 12:11:31,878][00719] Avg episode reward: [(0, '18.805')]
+ [2024-08-01 12:11:36,870][00719] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 3915776. Throughput: 0: 918.8. Samples: 476748. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-08-01 12:11:36,872][00719] Avg episode reward: [(0, '19.099')]
+ [2024-08-01 12:11:40,105][11837] Updated weights for policy 0, policy_version 960 (0.0014)
+ [2024-08-01 12:11:41,870][00719] Fps is (10 sec: 4505.9, 60 sec: 3822.9, 300 sec: 3721.1). Total num frames: 3936256. Throughput: 0: 954.5. Samples: 482998. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-08-01 12:11:41,874][00719] Avg episode reward: [(0, '18.094')]
+ [2024-08-01 12:11:46,870][00719] Fps is (10 sec: 3276.6, 60 sec: 3618.1, 300 sec: 3707.2). Total num frames: 3948544. Throughput: 0: 922.2. Samples: 484924. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-08-01 12:11:46,873][00719] Avg episode reward: [(0, '18.225')]
+ [2024-08-01 12:11:51,636][11837] Updated weights for policy 0, policy_version 970 (0.0041)
+ [2024-08-01 12:11:51,870][00719] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3735.0). Total num frames: 3973120. Throughput: 0: 921.2. Samples: 490960. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+ [2024-08-01 12:11:51,877][00719] Avg episode reward: [(0, '19.173')]
+ [2024-08-01 12:11:56,870][00719] Fps is (10 sec: 4505.8, 60 sec: 3822.9, 300 sec: 3735.0). Total num frames: 3993600. Throughput: 0: 988.8. Samples: 498020. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+ [2024-08-01 12:11:56,873][00719] Avg episode reward: [(0, '18.431')]
+ [2024-08-01 12:11:59,604][11824] Stopping Batcher_0...
+ [2024-08-01 12:11:59,604][11824] Loop batcher_evt_loop terminating...
+ [2024-08-01 12:11:59,604][00719] Component Batcher_0 stopped!
+ [2024-08-01 12:11:59,620][11824] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+ [2024-08-01 12:11:59,702][00719] Component RolloutWorker_w4 stopped!
+ [2024-08-01 12:11:59,706][11842] Stopping RolloutWorker_w4...
+ [2024-08-01 12:11:59,706][11842] Loop rollout_proc4_evt_loop terminating...
+ [2024-08-01 12:11:59,721][00719] Component RolloutWorker_w6 stopped!
+ [2024-08-01 12:11:59,726][11849] Stopping RolloutWorker_w6...
+ [2024-08-01 12:11:59,726][11849] Loop rollout_proc6_evt_loop terminating...
+ [2024-08-01 12:11:59,733][11843] Stopping RolloutWorker_w5...
+ [2024-08-01 12:11:59,733][11843] Loop rollout_proc5_evt_loop terminating...
+ [2024-08-01 12:11:59,733][00719] Component RolloutWorker_w5 stopped!
+ [2024-08-01 12:11:59,745][00719] Component RolloutWorker_w1 stopped!
+ [2024-08-01 12:11:59,750][11837] Weights refcount: 2 0
+ [2024-08-01 12:11:59,745][11839] Stopping RolloutWorker_w1...
+ [2024-08-01 12:11:59,753][11839] Loop rollout_proc1_evt_loop terminating...
+ [2024-08-01 12:11:59,754][00719] Component RolloutWorker_w0 stopped!
+ [2024-08-01 12:11:59,756][11838] Stopping RolloutWorker_w0...
+ [2024-08-01 12:11:59,756][11838] Loop rollout_proc0_evt_loop terminating...
+ [2024-08-01 12:11:59,771][00719] Component InferenceWorker_p0-w0 stopped!
+ [2024-08-01 12:11:59,776][00719] Component RolloutWorker_w2 stopped!
+ [2024-08-01 12:11:59,778][11840] Stopping RolloutWorker_w2...
+ [2024-08-01 12:11:59,778][11840] Loop rollout_proc2_evt_loop terminating...
+ [2024-08-01 12:11:59,781][11837] Stopping InferenceWorker_p0-w0...
+ [2024-08-01 12:11:59,782][11837] Loop inference_proc0-0_evt_loop terminating...
+ [2024-08-01 12:11:59,816][11841] Stopping RolloutWorker_w3...
+ [2024-08-01 12:11:59,815][00719] Component RolloutWorker_w3 stopped!
+ [2024-08-01 12:11:59,826][11841] Loop rollout_proc3_evt_loop terminating...
+ [2024-08-01 12:11:59,825][00719] Component RolloutWorker_w7 stopped!
+ [2024-08-01 12:11:59,826][11848] Stopping RolloutWorker_w7...
+ [2024-08-01 12:11:59,835][11848] Loop rollout_proc7_evt_loop terminating...
+ [2024-08-01 12:11:59,883][11824] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000788_3227648.pth
+ [2024-08-01 12:11:59,923][11824] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+ [2024-08-01 12:12:00,235][00719] Component LearnerWorker_p0 stopped!
+ [2024-08-01 12:12:00,238][00719] Waiting for process learner_proc0 to stop...
+ [2024-08-01 12:12:00,243][11824] Stopping LearnerWorker_p0...
+ [2024-08-01 12:12:00,243][11824] Loop learner_proc0_evt_loop terminating...
+ [2024-08-01 12:12:02,140][00719] Waiting for process inference_proc0-0 to join...
+ [2024-08-01 12:12:02,242][00719] Waiting for process rollout_proc0 to join...
+ [2024-08-01 12:12:04,440][00719] Waiting for process rollout_proc1 to join...
+ [2024-08-01 12:12:04,445][00719] Waiting for process rollout_proc2 to join...
+ [2024-08-01 12:12:04,448][00719] Waiting for process rollout_proc3 to join...
+ [2024-08-01 12:12:04,451][00719] Waiting for process rollout_proc4 to join...
+ [2024-08-01 12:12:04,454][00719] Waiting for process rollout_proc5 to join...
+ [2024-08-01 12:12:04,458][00719] Waiting for process rollout_proc6 to join...
+ [2024-08-01 12:12:04,461][00719] Waiting for process rollout_proc7 to join...
+ [2024-08-01 12:12:04,464][00719] Batcher 0 profile tree view:
+ batching: 14.2866, releasing_batches: 0.0414
+ [2024-08-01 12:12:04,465][00719] InferenceWorker_p0-w0 profile tree view:
+ wait_policy: 0.0000
+ wait_policy_total: 206.9662
+ update_model: 4.6550
+ weight_update: 0.0019
+ one_step: 0.0031
+ handle_policy_step: 313.0695
+ deserialize: 8.4927, stack: 1.6646, obs_to_device_normalize: 63.1328, forward: 168.1129, send_messages: 15.1628
+ prepare_outputs: 41.5703
+ to_cpu: 23.9472
+ [2024-08-01 12:12:04,467][00719] Learner 0 profile tree view:
+ misc: 0.0026, prepare_batch: 7.6723
+ train: 39.1823
+ epoch_init: 0.0033, minibatch_init: 0.0032, losses_postprocess: 0.3232, kl_divergence: 0.4214, after_optimizer: 2.0595
+ calculate_losses: 13.8445
+ losses_init: 0.0018, forward_head: 0.7894, bptt_initial: 9.3296, tail: 0.6387, advantages_returns: 0.1402, losses: 1.8140
+ bptt: 0.9599
+ bptt_forward_core: 0.9218
+ update: 22.1558
+ clip: 0.4885
+ [2024-08-01 12:12:04,470][00719] RolloutWorker_w0 profile tree view:
+ wait_for_trajectories: 0.2294, enqueue_policy_requests: 50.7448, env_step: 417.6763, overhead: 7.5089, complete_rollouts: 3.6553
+ save_policy_outputs: 11.0508
+ split_output_tensors: 4.4648
+ [2024-08-01 12:12:04,471][00719] RolloutWorker_w7 profile tree view:
+ wait_for_trajectories: 0.1597, enqueue_policy_requests: 52.2065, env_step: 420.3663, overhead: 7.2817, complete_rollouts: 3.4570
+ save_policy_outputs: 11.0252
+ split_output_tensors: 4.3506
+ [2024-08-01 12:12:04,473][00719] Loop Runner_EvtLoop terminating...
+ [2024-08-01 12:12:04,475][00719] Runner profile tree view:
+ main_loop: 570.5493
+ [2024-08-01 12:12:04,476][00719] Collected {0: 4005888}, FPS: 3503.4
+ [2024-08-01 12:12:04,506][00719] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+ [2024-08-01 12:12:04,508][00719] Overriding arg 'num_workers' with value 1 passed from command line
+ [2024-08-01 12:12:04,510][00719] Adding new argument 'no_render'=True that is not in the saved config file!
+ [2024-08-01 12:12:04,512][00719] Adding new argument 'save_video'=True that is not in the saved config file!
+ [2024-08-01 12:12:04,513][00719] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+ [2024-08-01 12:12:04,515][00719] Adding new argument 'video_name'=None that is not in the saved config file!
+ [2024-08-01 12:12:04,516][00719] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
+ [2024-08-01 12:12:04,517][00719] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+ [2024-08-01 12:12:04,518][00719] Adding new argument 'push_to_hub'=False that is not in the saved config file!
+ [2024-08-01 12:12:04,519][00719] Adding new argument 'hf_repository'=None that is not in the saved config file!
+ [2024-08-01 12:12:04,520][00719] Adding new argument 'policy_index'=0 that is not in the saved config file!
+ [2024-08-01 12:12:04,521][00719] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+ [2024-08-01 12:12:04,522][00719] Adding new argument 'train_script'=None that is not in the saved config file!
+ [2024-08-01 12:12:04,523][00719] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+ [2024-08-01 12:12:04,524][00719] Using frameskip 1 and render_action_repeat=4 for evaluation
+ [2024-08-01 12:12:04,552][00719] RunningMeanStd input shape: (3, 72, 128)
+ [2024-08-01 12:12:04,554][00719] RunningMeanStd input shape: (1,)
+ [2024-08-01 12:12:04,567][00719] ConvEncoder: input_channels=3
+ [2024-08-01 12:12:04,605][00719] Conv encoder output size: 512
+ [2024-08-01 12:12:04,606][00719] Policy head output size: 512
+ [2024-08-01 12:12:04,625][00719] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+ [2024-08-01 12:12:05,081][00719] Num frames 100...
+ [2024-08-01 12:12:05,206][00719] Num frames 200...
+ [2024-08-01 12:12:05,332][00719] Num frames 300...
+ [2024-08-01 12:12:05,452][00719] Num frames 400...
+ [2024-08-01 12:12:05,571][00719] Num frames 500...
+ [2024-08-01 12:12:05,689][00719] Num frames 600...
+ [2024-08-01 12:12:05,811][00719] Num frames 700...
+ [2024-08-01 12:12:05,944][00719] Num frames 800...
+ [2024-08-01 12:12:05,997][00719] Avg episode rewards: #0: 14.000, true rewards: #0: 8.000
+ [2024-08-01 12:12:05,999][00719] Avg episode reward: 14.000, avg true_objective: 8.000
+ [2024-08-01 12:12:06,124][00719] Num frames 900...
+ [2024-08-01 12:12:06,245][00719] Num frames 1000...
+ [2024-08-01 12:12:06,370][00719] Num frames 1100...
+ [2024-08-01 12:12:06,493][00719] Num frames 1200...
+ [2024-08-01 12:12:06,622][00719] Num frames 1300...
+ [2024-08-01 12:12:06,742][00719] Num frames 1400...
+ [2024-08-01 12:12:06,867][00719] Num frames 1500...
+ [2024-08-01 12:12:06,997][00719] Num frames 1600...
+ [2024-08-01 12:12:07,123][00719] Num frames 1700...
+ [2024-08-01 12:12:07,243][00719] Num frames 1800...
+ [2024-08-01 12:12:07,363][00719] Num frames 1900...
+ [2024-08-01 12:12:07,491][00719] Num frames 2000...
+ [2024-08-01 12:12:07,613][00719] Num frames 2100...
+ [2024-08-01 12:12:07,736][00719] Num frames 2200...
+ [2024-08-01 12:12:07,864][00719] Num frames 2300...
+ [2024-08-01 12:12:07,928][00719] Avg episode rewards: #0: 26.020, true rewards: #0: 11.520
+ [2024-08-01 12:12:07,929][00719] Avg episode reward: 26.020, avg true_objective: 11.520
+ [2024-08-01 12:12:08,057][00719] Num frames 2400...
+ [2024-08-01 12:12:08,182][00719] Num frames 2500...
+ [2024-08-01 12:12:08,309][00719] Num frames 2600...
+ [2024-08-01 12:12:08,436][00719] Num frames 2700...
+ [2024-08-01 12:12:08,558][00719] Num frames 2800...
+ [2024-08-01 12:12:08,682][00719] Num frames 2900...
+ [2024-08-01 12:12:08,813][00719] Num frames 3000...
+ [2024-08-01 12:12:08,944][00719] Num frames 3100...
+ [2024-08-01 12:12:09,074][00719] Num frames 3200...
+ [2024-08-01 12:12:09,200][00719] Num frames 3300...
+ [2024-08-01 12:12:09,320][00719] Num frames 3400...
+ [2024-08-01 12:12:09,445][00719] Num frames 3500...
+ [2024-08-01 12:12:09,563][00719] Avg episode rewards: #0: 27.840, true rewards: #0: 11.840
+ [2024-08-01 12:12:09,565][00719] Avg episode reward: 27.840, avg true_objective: 11.840
+ [2024-08-01 12:12:09,627][00719] Num frames 3600...
+ [2024-08-01 12:12:09,746][00719] Num frames 3700...
+ [2024-08-01 12:12:09,869][00719] Num frames 3800...
+ [2024-08-01 12:12:09,991][00719] Num frames 3900...
+ [2024-08-01 12:12:10,088][00719] Avg episode rewards: #0: 22.332, true rewards: #0: 9.832
+ [2024-08-01 12:12:10,090][00719] Avg episode reward: 22.332, avg true_objective: 9.832
+ [2024-08-01 12:12:10,172][00719] Num frames 4000...
+ [2024-08-01 12:12:10,299][00719] Num frames 4100...
+ [2024-08-01 12:12:10,419][00719] Num frames 4200...
+ [2024-08-01 12:12:10,539][00719] Num frames 4300...
+ [2024-08-01 12:12:10,667][00719] Avg episode rewards: #0: 19.524, true rewards: #0: 8.724
+ [2024-08-01 12:12:10,668][00719] Avg episode reward: 19.524, avg true_objective: 8.724
+ [2024-08-01 12:12:10,719][00719] Num frames 4400...
+ [2024-08-01 12:12:10,841][00719] Num frames 4500...
+ [2024-08-01 12:12:10,968][00719] Num frames 4600...
+ [2024-08-01 12:12:11,093][00719] Num frames 4700...
+ [2024-08-01 12:12:11,219][00719] Num frames 4800...
+ [2024-08-01 12:12:11,337][00719] Num frames 4900...
+ [2024-08-01 12:12:11,460][00719] Num frames 5000...
+ [2024-08-01 12:12:11,581][00719] Num frames 5100...
+ [2024-08-01 12:12:11,705][00719] Num frames 5200...
+ [2024-08-01 12:12:11,830][00719] Num frames 5300...
+ [2024-08-01 12:12:12,006][00719] Avg episode rewards: #0: 19.990, true rewards: #0: 8.990
+ [2024-08-01 12:12:12,007][00719] Avg episode reward: 19.990, avg true_objective: 8.990
+ [2024-08-01 12:12:12,019][00719] Num frames 5400...
+ [2024-08-01 12:12:12,149][00719] Num frames 5500...
+ [2024-08-01 12:12:12,271][00719] Num frames 5600...
+ [2024-08-01 12:12:12,391][00719] Num frames 5700...
+ [2024-08-01 12:12:12,510][00719] Num frames 5800...
+ [2024-08-01 12:12:12,683][00719] Num frames 5900...
+ [2024-08-01 12:12:12,855][00719] Num frames 6000...
+ [2024-08-01 12:12:13,026][00719] Num frames 6100...
+ [2024-08-01 12:12:13,125][00719] Avg episode rewards: #0: 18.890, true rewards: #0: 8.747
+ [2024-08-01 12:12:13,127][00719] Avg episode reward: 18.890, avg true_objective: 8.747
+ [2024-08-01 12:12:13,275][00719] Num frames 6200...
+ [2024-08-01 12:12:13,439][00719] Num frames 6300...
+ [2024-08-01 12:12:13,598][00719] Num frames 6400...
+ [2024-08-01 12:12:13,768][00719] Num frames 6500...
+ [2024-08-01 12:12:13,958][00719] Num frames 6600...
+ [2024-08-01 12:12:14,126][00719] Num frames 6700...
+ [2024-08-01 12:12:14,310][00719] Num frames 6800...
+ [2024-08-01 12:12:14,488][00719] Num frames 6900...
+ [2024-08-01 12:12:14,661][00719] Num frames 7000...
+ [2024-08-01 12:12:14,841][00719] Num frames 7100...
+ [2024-08-01 12:12:15,029][00719] Num frames 7200...
+ [2024-08-01 12:12:15,154][00719] Num frames 7300...
+ [2024-08-01 12:12:15,303][00719] Avg episode rewards: #0: 19.839, true rewards: #0: 9.214
+ [2024-08-01 12:12:15,305][00719] Avg episode reward: 19.839, avg true_objective: 9.214
+ [2024-08-01 12:12:15,345][00719] Num frames 7400...
+ [2024-08-01 12:12:15,468][00719] Num frames 7500...
+ [2024-08-01 12:12:15,590][00719] Num frames 7600...
+ [2024-08-01 12:12:15,711][00719] Num frames 7700...
+ [2024-08-01 12:12:15,834][00719] Num frames 7800...
+ [2024-08-01 12:12:15,920][00719] Avg episode rewards: #0: 18.243, true rewards: #0: 8.688
+ [2024-08-01 12:12:15,921][00719] Avg episode reward: 18.243, avg true_objective: 8.688
+ [2024-08-01 12:12:16,020][00719] Num frames 7900...
+ [2024-08-01 12:12:16,139][00719] Num frames 8000...
+ [2024-08-01 12:12:16,266][00719] Num frames 8100...
+ [2024-08-01 12:12:16,392][00719] Num frames 8200...
+ [2024-08-01 12:12:16,490][00719] Avg episode rewards: #0: 16.935, true rewards: #0: 8.235
+ [2024-08-01 12:12:16,491][00719] Avg episode reward: 16.935, avg true_objective: 8.235
+ [2024-08-01 12:13:01,502][00719] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
+ [2024-08-01 12:13:01,540][00719] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+ [2024-08-01 12:13:01,542][00719] Overriding arg 'num_workers' with value 1 passed from command line
+ [2024-08-01 12:13:01,544][00719] Adding new argument 'no_render'=True that is not in the saved config file!
+ [2024-08-01 12:13:01,545][00719] Adding new argument 'save_video'=True that is not in the saved config file!
+ [2024-08-01 12:13:01,549][00719] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+ [2024-08-01 12:13:01,550][00719] Adding new argument 'video_name'=None that is not in the saved config file!
+ [2024-08-01 12:13:01,552][00719] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
+ [2024-08-01 12:13:01,553][00719] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+ [2024-08-01 12:13:01,554][00719] Adding new argument 'push_to_hub'=True that is not in the saved config file!
+ [2024-08-01 12:13:01,555][00719] Adding new argument 'hf_repository'='ThNaToS/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
+ [2024-08-01 12:13:01,556][00719] Adding new argument 'policy_index'=0 that is not in the saved config file!
+ [2024-08-01 12:13:01,557][00719] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+ [2024-08-01 12:13:01,558][00719] Adding new argument 'train_script'=None that is not in the saved config file!
+ [2024-08-01 12:13:01,559][00719] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+ [2024-08-01 12:13:01,560][00719] Using frameskip 1 and render_action_repeat=4 for evaluation
+ [2024-08-01 12:13:01,588][00719] RunningMeanStd input shape: (3, 72, 128)
+ [2024-08-01 12:13:01,590][00719] RunningMeanStd input shape: (1,)
+ [2024-08-01 12:13:01,603][00719] ConvEncoder: input_channels=3
+ [2024-08-01 12:13:01,642][00719] Conv encoder output size: 512
+ [2024-08-01 12:13:01,644][00719] Policy head output size: 512
+ [2024-08-01 12:13:01,661][00719] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+ [2024-08-01 12:13:02,112][00719] Num frames 100...
+ [2024-08-01 12:13:02,252][00719] Num frames 200...
+ [2024-08-01 12:13:02,385][00719] Avg episode rewards: #0: 2.560, true rewards: #0: 2.560
+ [2024-08-01 12:13:02,386][00719] Avg episode reward: 2.560, avg true_objective: 2.560
+ [2024-08-01 12:13:02,442][00719] Num frames 300...
+ [2024-08-01 12:13:02,567][00719] Num frames 400...
+ [2024-08-01 12:13:02,695][00719] Num frames 500...
+ [2024-08-01 12:13:02,825][00719] Num frames 600...
+ [2024-08-01 12:13:02,952][00719] Num frames 700...
+ [2024-08-01 12:13:03,074][00719] Num frames 800...
+ [2024-08-01 12:13:03,196][00719] Num frames 900...
+ [2024-08-01 12:13:03,320][00719] Num frames 1000...
+ [2024-08-01 12:13:03,446][00719] Num frames 1100...
+ [2024-08-01 12:13:03,571][00719] Num frames 1200...
+ [2024-08-01 12:13:03,695][00719] Num frames 1300...
+ [2024-08-01 12:13:03,827][00719] Num frames 1400...
+ [2024-08-01 12:13:03,961][00719] Num frames 1500...
+ [2024-08-01 12:13:04,066][00719] Avg episode rewards: #0: 15.695, true rewards: #0: 7.695
+ [2024-08-01 12:13:04,068][00719] Avg episode reward: 15.695, avg true_objective: 7.695
+ [2024-08-01 12:13:04,144][00719] Num frames 1600...
+ [2024-08-01 12:13:04,282][00719] Num frames 1700...
+ [2024-08-01 12:13:04,460][00719] Num frames 1800...
+ [2024-08-01 12:13:04,625][00719] Num frames 1900...
+ [2024-08-01 12:13:04,791][00719] Num frames 2000...
+ [2024-08-01 12:13:04,973][00719] Num frames 2100...
+ [2024-08-01 12:13:05,144][00719] Num frames 2200...
+ [2024-08-01 12:13:05,306][00719] Num frames 2300...
+ [2024-08-01 12:13:05,475][00719] Num frames 2400...
+ [2024-08-01 12:13:05,661][00719] Num frames 2500...
+ [2024-08-01 12:13:05,832][00719] Num frames 2600...
+ [2024-08-01 12:13:06,033][00719] Num frames 2700...
+ [2024-08-01 12:13:06,214][00719] Num frames 2800...
+ [2024-08-01 12:13:06,397][00719] Avg episode rewards: #0: 20.250, true rewards: #0: 9.583
+ [2024-08-01 12:13:06,399][00719] Avg episode reward: 20.250, avg true_objective: 9.583
+ [2024-08-01 12:13:06,445][00719] Num frames 2900...
+ [2024-08-01 12:13:06,615][00719] Num frames 3000...
+ [2024-08-01 12:13:06,770][00719] Num frames 3100...
+ [2024-08-01 12:13:06,896][00719] Num frames 3200...
+ [2024-08-01 12:13:07,026][00719] Num frames 3300...
+ [2024-08-01 12:13:07,145][00719] Num frames 3400...
+ [2024-08-01 12:13:07,269][00719] Num frames 3500...
+ [2024-08-01 12:13:07,391][00719] Num frames 3600...
+ [2024-08-01 12:13:07,512][00719] Num frames 3700...
+ [2024-08-01 12:13:07,638][00719] Num frames 3800...
+ [2024-08-01 12:13:07,764][00719] Num frames 3900...
+ [2024-08-01 12:13:07,894][00719] Num frames 4000...
+ [2024-08-01 12:13:08,026][00719] Num frames 4100...
+ [2024-08-01 12:13:08,147][00719] Avg episode rewards: #0: 21.638, true rewards: #0: 10.387
+ [2024-08-01 12:13:08,149][00719] Avg episode reward: 21.638, avg true_objective: 10.387
+ [2024-08-01 12:13:08,206][00719] Num frames 4200...
+ [2024-08-01 12:13:08,332][00719] Num frames 4300...
+ [2024-08-01 12:13:08,456][00719] Num frames 4400...
+ [2024-08-01 12:13:08,575][00719] Num frames 4500...
+ [2024-08-01 12:13:08,695][00719] Num frames 4600...
+ [2024-08-01 12:13:08,819][00719] Num frames 4700...
+ [2024-08-01 12:13:08,946][00719] Num frames 4800...
+ [2024-08-01 12:13:09,075][00719] Num frames 4900...
+ [2024-08-01 12:13:09,159][00719] Avg episode rewards: #0: 19.846, true rewards: #0: 9.846
+ [2024-08-01 12:13:09,160][00719] Avg episode reward: 19.846, avg true_objective: 9.846
+ [2024-08-01 12:13:09,257][00719] Num frames 5000...
+ [2024-08-01 12:13:09,376][00719] Num frames 5100...
+ [2024-08-01 12:13:09,501][00719] Num frames 5200...
+ [2024-08-01 12:13:09,621][00719] Num frames 5300...
+ [2024-08-01 12:13:09,726][00719] Avg episode rewards: #0: 17.565, true rewards: #0: 8.898
+ [2024-08-01 12:13:09,728][00719] Avg episode reward: 17.565, avg true_objective: 8.898
+ [2024-08-01 12:13:09,806][00719] Num frames 5400...
+ [2024-08-01 12:13:09,939][00719] Num frames 5500...
+ [2024-08-01 12:13:10,071][00719] Num frames 5600...
+ [2024-08-01 12:13:10,195][00719] Num frames 5700...
+ [2024-08-01 12:13:10,318][00719] Num frames 5800...
+ [2024-08-01 12:13:10,445][00719] Num frames 5900...
+ [2024-08-01 12:13:10,569][00719] Num frames 6000...
+ [2024-08-01 12:13:10,693][00719] Num frames 6100...
+ [2024-08-01 12:13:10,822][00719] Num frames 6200...
+ [2024-08-01 12:13:10,953][00719] Num frames 6300...
+ [2024-08-01 12:13:11,082][00719] Num frames 6400...
+ [2024-08-01 12:13:11,204][00719] Num frames 6500...
+ [2024-08-01 12:13:11,327][00719] Num frames 6600...
+ [2024-08-01 12:13:11,451][00719] Num frames 6700...
+ [2024-08-01 12:13:11,576][00719] Num frames 6800...
+ [2024-08-01 12:13:11,700][00719] Num frames 6900...
+ [2024-08-01 12:13:11,826][00719] Num frames 7000...
+ [2024-08-01 12:13:11,960][00719] Num frames 7100...
+ [2024-08-01 12:13:12,092][00719] Num frames 7200...
+ [2024-08-01 12:13:12,217][00719] Num frames 7300...
+ [2024-08-01 12:13:12,295][00719] Avg episode rewards: #0: 21.596, true rewards: #0: 10.453
+ [2024-08-01 12:13:12,297][00719] Avg episode reward: 21.596, avg true_objective: 10.453
+ [2024-08-01 12:13:12,401][00719] Num frames 7400...
+ [2024-08-01 12:13:12,526][00719] Num frames 7500...
+ [2024-08-01 12:13:12,647][00719] Num frames 7600...
+ [2024-08-01 12:13:12,768][00719] Num frames 7700...
+ [2024-08-01 12:13:12,899][00719] Num frames 7800...
+ [2024-08-01 12:13:13,024][00719] Num frames 7900...
+ [2024-08-01 12:13:13,152][00719] Num frames 8000...
+ [2024-08-01 12:13:13,257][00719] Avg episode rewards: #0: 20.675, true rewards: #0: 10.050
+ [2024-08-01 12:13:13,259][00719] Avg episode reward: 20.675, avg true_objective: 10.050
+ [2024-08-01 12:13:13,333][00719] Num frames 8100...
+ [2024-08-01 12:13:13,456][00719] Num frames 8200...
+ [2024-08-01 12:13:13,582][00719] Num frames 8300...
+ [2024-08-01 12:13:13,702][00719] Num frames 8400...
+ [2024-08-01 12:13:13,823][00719] Num frames 8500...
+ [2024-08-01 12:13:13,951][00719] Num frames 8600...
+ [2024-08-01 12:13:14,069][00719] Num frames 8700...
+ [2024-08-01 12:13:14,198][00719] Num frames 8800...
+ [2024-08-01 12:13:14,322][00719] Num frames 8900...
+ [2024-08-01 12:13:14,445][00719] Num frames 9000...
+ [2024-08-01 12:13:14,569][00719] Num frames 9100...
+ [2024-08-01 12:13:14,700][00719] Num frames 9200...
+ [2024-08-01 12:13:14,834][00719] Num frames 9300...
+ [2024-08-01 12:13:14,964][00719] Num frames 9400...
+ [2024-08-01 12:13:15,040][00719] Avg episode rewards: #0: 21.684, true rewards: #0: 10.462
+ [2024-08-01 12:13:15,042][00719] Avg episode reward: 21.684, avg true_objective: 10.462
+ [2024-08-01 12:13:15,154][00719] Num frames 9500...
+ [2024-08-01 12:13:15,275][00719] Num frames 9600...
+ [2024-08-01 12:13:15,395][00719] Num frames 9700...
+ [2024-08-01 12:13:15,521][00719] Num frames 9800...
+ [2024-08-01 12:13:15,642][00719] Num frames 9900...
+ [2024-08-01 12:13:15,768][00719] Num frames 10000...
+ [2024-08-01 12:13:15,899][00719] Num frames 10100...
+ [2024-08-01 12:13:16,018][00719] Avg episode rewards: #0: 20.852, true rewards: #0: 10.152
+ [2024-08-01 12:13:16,020][00719] Avg episode reward: 20.852, avg true_objective: 10.152
+ [2024-08-01 12:14:12,911][00719] Replay video saved to /content/train_dir/default_experiment/replay.mp4!