[2024-08-16 08:36:08,676][00784] Saving configuration to /content/train_dir/default_experiment/config.json...
[2024-08-16 08:36:08,681][00784] Rollout worker 0 uses device cpu
[2024-08-16 08:36:08,683][00784] Rollout worker 1 uses device cpu
[2024-08-16 08:36:08,685][00784] Rollout worker 2 uses device cpu
[2024-08-16 08:36:08,686][00784] Rollout worker 3 uses device cpu
[2024-08-16 08:36:08,688][00784] Rollout worker 4 uses device cpu
[2024-08-16 08:36:08,689][00784] Rollout worker 5 uses device cpu
[2024-08-16 08:36:08,691][00784] Rollout worker 6 uses device cpu
[2024-08-16 08:36:08,694][00784] Rollout worker 7 uses device cpu
[2024-08-16 08:36:08,841][00784] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-08-16 08:36:08,842][00784] InferenceWorker_p0-w0: min num requests: 2
[2024-08-16 08:36:08,876][00784] Starting all processes...
[2024-08-16 08:36:08,878][00784] Starting process learner_proc0
[2024-08-16 08:36:08,926][00784] Starting all processes...
[2024-08-16 08:36:08,956][00784] Starting process inference_proc0-0
[2024-08-16 08:36:08,959][00784] Starting process rollout_proc0
[2024-08-16 08:36:08,959][00784] Starting process rollout_proc1
[2024-08-16 08:36:08,959][00784] Starting process rollout_proc2
[2024-08-16 08:36:08,959][00784] Starting process rollout_proc3
[2024-08-16 08:36:08,959][00784] Starting process rollout_proc4
[2024-08-16 08:36:08,959][00784] Starting process rollout_proc5
[2024-08-16 08:36:08,959][00784] Starting process rollout_proc6
[2024-08-16 08:36:08,960][00784] Starting process rollout_proc7
[2024-08-16 08:36:20,397][03929] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-08-16 08:36:20,399][03929] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2024-08-16 08:36:20,476][03929] Num visible devices: 1
[2024-08-16 08:36:20,522][03929] Starting seed is not provided
[2024-08-16 08:36:20,523][03929] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-08-16 08:36:20,524][03929] Initializing actor-critic model on device cuda:0
[2024-08-16 08:36:20,525][03929] RunningMeanStd input shape: (3, 72, 128)
[2024-08-16 08:36:20,526][03929] RunningMeanStd input shape: (1,)
[2024-08-16 08:36:20,617][03929] ConvEncoder: input_channels=3
[2024-08-16 08:36:20,869][03944] Worker 2 uses CPU cores [0]
[2024-08-16 08:36:20,982][03946] Worker 4 uses CPU cores [0]
[2024-08-16 08:36:20,981][03943] Worker 0 uses CPU cores [0]
[2024-08-16 08:36:21,068][03945] Worker 1 uses CPU cores [1]
[2024-08-16 08:36:21,072][03948] Worker 5 uses CPU cores [1]
[2024-08-16 08:36:21,073][03950] Worker 7 uses CPU cores [1]
[2024-08-16 08:36:21,104][03947] Worker 3 uses CPU cores [1]
[2024-08-16 08:36:21,127][03942] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-08-16 08:36:21,128][03942] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2024-08-16 08:36:21,146][03949] Worker 6 uses CPU cores [0]
[2024-08-16 08:36:21,147][03942] Num visible devices: 1
[2024-08-16 08:36:21,179][03929] Conv encoder output size: 512
[2024-08-16 08:36:21,179][03929] Policy head output size: 512
[2024-08-16 08:36:21,194][03929] Created Actor Critic model with architecture:
[2024-08-16 08:36:21,194][03929] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
[2024-08-16 08:36:25,494][03929] Using optimizer <class 'torch.optim.adam.Adam'>
[2024-08-16 08:36:25,495][03929] No checkpoints found
[2024-08-16 08:36:25,495][03929] Did not load from checkpoint, starting from scratch!
[2024-08-16 08:36:25,495][03929] Initialized policy 0 weights for model version 0
[2024-08-16 08:36:25,501][03929] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-08-16 08:36:25,512][03929] LearnerWorker_p0 finished initialization!
[2024-08-16 08:36:25,873][03942] RunningMeanStd input shape: (3, 72, 128)
[2024-08-16 08:36:25,874][03942] RunningMeanStd input shape: (1,)
[2024-08-16 08:36:25,899][03942] ConvEncoder: input_channels=3
[2024-08-16 08:36:26,102][03942] Conv encoder output size: 512
[2024-08-16 08:36:26,102][03942] Policy head output size: 512
[2024-08-16 08:36:28,347][00784] Inference worker 0-0 is ready!
[2024-08-16 08:36:28,349][00784] All inference workers are ready! Signal rollout workers to start!
[2024-08-16 08:36:28,451][03947] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-16 08:36:28,474][03948] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-16 08:36:28,487][03950] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-16 08:36:28,506][03945] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-16 08:36:28,664][03943] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-16 08:36:28,685][03949] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-16 08:36:28,699][03944] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-16 08:36:28,695][03946] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-08-16 08:36:28,834][00784] Heartbeat connected on Batcher_0
[2024-08-16 08:36:28,837][00784] Heartbeat connected on LearnerWorker_p0
[2024-08-16 08:36:28,877][00784] Heartbeat connected on InferenceWorker_p0-w0
[2024-08-16 08:36:30,309][00784] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2024-08-16 08:36:30,591][03943] Decorrelating experience for 0 frames...
[2024-08-16 08:36:30,593][03944] Decorrelating experience for 0 frames...
[2024-08-16 08:36:30,601][03945] Decorrelating experience for 0 frames...
[2024-08-16 08:36:30,595][03947] Decorrelating experience for 0 frames...
[2024-08-16 08:36:30,596][03948] Decorrelating experience for 0 frames...
[2024-08-16 08:36:30,605][03950] Decorrelating experience for 0 frames...
[2024-08-16 08:36:31,609][03950] Decorrelating experience for 32 frames...
[2024-08-16 08:36:31,611][03948] Decorrelating experience for 32 frames...
[2024-08-16 08:36:32,292][03943] Decorrelating experience for 32 frames...
[2024-08-16 08:36:32,294][03944] Decorrelating experience for 32 frames...
[2024-08-16 08:36:32,296][03949] Decorrelating experience for 0 frames...
[2024-08-16 08:36:33,261][03945] Decorrelating experience for 32 frames...
[2024-08-16 08:36:33,444][03950] Decorrelating experience for 64 frames...
[2024-08-16 08:36:33,447][03948] Decorrelating experience for 64 frames...
[2024-08-16 08:36:33,873][03943] Decorrelating experience for 64 frames...
[2024-08-16 08:36:33,963][03947] Decorrelating experience for 32 frames...
[2024-08-16 08:36:34,586][03946] Decorrelating experience for 0 frames...
[2024-08-16 08:36:34,711][03944] Decorrelating experience for 64 frames...
[2024-08-16 08:36:35,079][03945] Decorrelating experience for 64 frames...
[2024-08-16 08:36:35,119][03949] Decorrelating experience for 32 frames...
[2024-08-16 08:36:35,161][03950] Decorrelating experience for 96 frames...
[2024-08-16 08:36:35,169][03948] Decorrelating experience for 96 frames...
[2024-08-16 08:36:35,307][00784] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2024-08-16 08:36:35,444][00784] Heartbeat connected on RolloutWorker_w5
[2024-08-16 08:36:35,456][00784] Heartbeat connected on RolloutWorker_w7
[2024-08-16 08:36:35,660][03947] Decorrelating experience for 64 frames...
[2024-08-16 08:36:36,119][03945] Decorrelating experience for 96 frames...
[2024-08-16 08:36:36,229][00784] Heartbeat connected on RolloutWorker_w1
[2024-08-16 08:36:36,483][03946] Decorrelating experience for 32 frames...
[2024-08-16 08:36:36,567][03943] Decorrelating experience for 96 frames...
[2024-08-16 08:36:36,689][03949] Decorrelating experience for 64 frames...
[2024-08-16 08:36:36,743][00784] Heartbeat connected on RolloutWorker_w0
[2024-08-16 08:36:36,982][03944] Decorrelating experience for 96 frames...
[2024-08-16 08:36:37,153][00784] Heartbeat connected on RolloutWorker_w2
[2024-08-16 08:36:37,544][03947] Decorrelating experience for 96 frames...
[2024-08-16 08:36:37,674][00784] Heartbeat connected on RolloutWorker_w3
[2024-08-16 08:36:37,683][03946] Decorrelating experience for 64 frames...
[2024-08-16 08:36:37,757][03949] Decorrelating experience for 96 frames...
[2024-08-16 08:36:37,848][00784] Heartbeat connected on RolloutWorker_w6
[2024-08-16 08:36:38,097][03946] Decorrelating experience for 96 frames...
[2024-08-16 08:36:38,158][00784] Heartbeat connected on RolloutWorker_w4
[2024-08-16 08:36:40,307][00784] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 3.6. Samples: 36. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2024-08-16 08:36:40,313][00784] Avg episode reward: [(0, '1.131')]
[2024-08-16 08:36:41,212][03929] Signal inference workers to stop experience collection...
[2024-08-16 08:36:41,248][03942] InferenceWorker_p0-w0: stopping experience collection
[2024-08-16 08:36:43,666][03929] Signal inference workers to resume experience collection...
[2024-08-16 08:36:43,667][03942] InferenceWorker_p0-w0: resuming experience collection
[2024-08-16 08:36:45,307][00784] Fps is (10 sec: 409.6, 60 sec: 273.1, 300 sec: 273.1). Total num frames: 4096. Throughput: 0: 150.0. Samples: 2250. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2024-08-16 08:36:45,309][00784] Avg episode reward: [(0, '1.940')]
[2024-08-16 08:36:50,310][00784] Fps is (10 sec: 2047.4, 60 sec: 1024.0, 300 sec: 1024.0). Total num frames: 20480. Throughput: 0: 266.6. Samples: 5332. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-08-16 08:36:50,312][00784] Avg episode reward: [(0, '3.201')]
[2024-08-16 08:36:54,412][03942] Updated weights for policy 0, policy_version 10 (0.0034)
[2024-08-16 08:36:55,307][00784] Fps is (10 sec: 4096.0, 60 sec: 1802.4, 300 sec: 1802.4). Total num frames: 45056. Throughput: 0: 333.1. Samples: 8326. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-08-16 08:36:55,312][00784] Avg episode reward: [(0, '4.040')]
[2024-08-16 08:37:00,307][00784] Fps is (10 sec: 4097.2, 60 sec: 2048.1, 300 sec: 2048.1). Total num frames: 61440. Throughput: 0: 485.9. Samples: 14576. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-08-16 08:37:00,314][00784] Avg episode reward: [(0, '4.386')]
[2024-08-16 08:37:05,307][00784] Fps is (10 sec: 3276.8, 60 sec: 2223.7, 300 sec: 2223.7). Total num frames: 77824. Throughput: 0: 562.8. Samples: 19698. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-08-16 08:37:05,315][00784] Avg episode reward: [(0, '4.514')]
[2024-08-16 08:37:06,275][03942] Updated weights for policy 0, policy_version 20 (0.0023)
[2024-08-16 08:37:10,307][00784] Fps is (10 sec: 3276.8, 60 sec: 2355.3, 300 sec: 2355.3). Total num frames: 94208. Throughput: 0: 546.1. Samples: 21844. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-08-16 08:37:10,314][00784] Avg episode reward: [(0, '4.436')]
[2024-08-16 08:37:15,258][03942] Updated weights for policy 0, policy_version 30 (0.0038)
[2024-08-16 08:37:15,307][00784] Fps is (10 sec: 4505.6, 60 sec: 2730.8, 300 sec: 2730.8). Total num frames: 122880. Throughput: 0: 639.7. Samples: 28786. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-08-16 08:37:15,309][00784] Avg episode reward: [(0, '4.360')]
[2024-08-16 08:37:15,319][03929] Saving new best policy, reward=4.360!
[2024-08-16 08:37:20,307][00784] Fps is (10 sec: 4505.4, 60 sec: 2785.4, 300 sec: 2785.4). Total num frames: 139264. Throughput: 0: 781.3. Samples: 35160. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-08-16 08:37:20,311][00784] Avg episode reward: [(0, '4.375')]
[2024-08-16 08:37:20,318][03929] Saving new best policy, reward=4.375!
[2024-08-16 08:37:25,307][00784] Fps is (10 sec: 2867.2, 60 sec: 2755.6, 300 sec: 2755.6). Total num frames: 151552. Throughput: 0: 824.5. Samples: 37140. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-08-16 08:37:25,309][00784] Avg episode reward: [(0, '4.474')]
[2024-08-16 08:37:25,321][03929] Saving new best policy, reward=4.474!
[2024-08-16 08:37:27,477][03942] Updated weights for policy 0, policy_version 40 (0.0023)
[2024-08-16 08:37:30,307][00784] Fps is (10 sec: 3686.6, 60 sec: 2935.6, 300 sec: 2935.6). Total num frames: 176128. Throughput: 0: 899.1. Samples: 42710. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 08:37:30,312][00784] Avg episode reward: [(0, '4.474')]
[2024-08-16 08:37:35,311][00784] Fps is (10 sec: 4503.7, 60 sec: 3276.6, 300 sec: 3024.6). Total num frames: 196608. Throughput: 0: 978.7. Samples: 49376. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 08:37:35,314][00784] Avg episode reward: [(0, '4.537')]
[2024-08-16 08:37:35,335][03929] Saving new best policy, reward=4.537!
[2024-08-16 08:37:37,395][03942] Updated weights for policy 0, policy_version 50 (0.0016)
[2024-08-16 08:37:40,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 2984.3). Total num frames: 208896. Throughput: 0: 963.2. Samples: 51670. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-08-16 08:37:40,311][00784] Avg episode reward: [(0, '4.503')]
[2024-08-16 08:37:45,307][00784] Fps is (10 sec: 3278.2, 60 sec: 3754.7, 300 sec: 3058.4). Total num frames: 229376. Throughput: 0: 925.3. Samples: 56214. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-08-16 08:37:45,311][00784] Avg episode reward: [(0, '4.420')]
[2024-08-16 08:37:48,426][03942] Updated weights for policy 0, policy_version 60 (0.0025)
[2024-08-16 08:37:50,307][00784] Fps is (10 sec: 4505.6, 60 sec: 3891.4, 300 sec: 3174.5). Total num frames: 253952. Throughput: 0: 966.9. Samples: 63210. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2024-08-16 08:37:50,311][00784] Avg episode reward: [(0, '4.587')]
[2024-08-16 08:37:50,314][03929] Saving new best policy, reward=4.587!
[2024-08-16 08:37:55,307][00784] Fps is (10 sec: 4095.9, 60 sec: 3754.7, 300 sec: 3180.5). Total num frames: 270336. Throughput: 0: 994.4. Samples: 66594. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 08:37:55,310][00784] Avg episode reward: [(0, '4.402')]
[2024-08-16 08:38:00,307][00784] Fps is (10 sec: 2867.1, 60 sec: 3686.4, 300 sec: 3140.3). Total num frames: 282624. Throughput: 0: 933.6. Samples: 70798. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 08:38:00,310][00784] Avg episode reward: [(0, '4.472')]
[2024-08-16 08:38:00,431][03942] Updated weights for policy 0, policy_version 70 (0.0026)
[2024-08-16 08:38:05,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3233.8). Total num frames: 307200. Throughput: 0: 922.9. Samples: 76692. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-08-16 08:38:05,309][00784] Avg episode reward: [(0, '4.609')]
[2024-08-16 08:38:05,321][03929] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000075_307200.pth...
[2024-08-16 08:38:05,462][03929] Saving new best policy, reward=4.609!
[2024-08-16 08:38:09,530][03942] Updated weights for policy 0, policy_version 80 (0.0012)
[2024-08-16 08:38:10,312][00784] Fps is (10 sec: 4503.4, 60 sec: 3890.9, 300 sec: 3276.7). Total num frames: 327680. Throughput: 0: 948.9. Samples: 79844. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-08-16 08:38:10,315][00784] Avg episode reward: [(0, '4.549')]
[2024-08-16 08:38:15,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3276.9). Total num frames: 344064. Throughput: 0: 944.5. Samples: 85212. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-08-16 08:38:15,309][00784] Avg episode reward: [(0, '4.452')]
[2024-08-16 08:38:20,307][00784] Fps is (10 sec: 3278.5, 60 sec: 3686.4, 300 sec: 3276.9). Total num frames: 360448. Throughput: 0: 911.9. Samples: 90408. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-08-16 08:38:20,315][00784] Avg episode reward: [(0, '4.473')]
[2024-08-16 08:38:21,381][03942] Updated weights for policy 0, policy_version 90 (0.0029)
[2024-08-16 08:38:25,307][00784] Fps is (10 sec: 4095.9, 60 sec: 3891.2, 300 sec: 3348.1). Total num frames: 385024. Throughput: 0: 934.9. Samples: 93742. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-08-16 08:38:25,316][00784] Avg episode reward: [(0, '4.552')]
[2024-08-16 08:38:30,310][00784] Fps is (10 sec: 4094.8, 60 sec: 3754.5, 300 sec: 3345.0). Total num frames: 401408. Throughput: 0: 977.1. Samples: 100188. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-08-16 08:38:30,318][00784] Avg episode reward: [(0, '4.383')]
[2024-08-16 08:38:32,466][03942] Updated weights for policy 0, policy_version 100 (0.0037)
[2024-08-16 08:38:35,307][00784] Fps is (10 sec: 3276.9, 60 sec: 3686.7, 300 sec: 3342.4). Total num frames: 417792. Throughput: 0: 913.0. Samples: 104296. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-08-16 08:38:35,314][00784] Avg episode reward: [(0, '4.374')]
[2024-08-16 08:38:40,307][00784] Fps is (10 sec: 3687.5, 60 sec: 3822.9, 300 sec: 3371.4). Total num frames: 438272. Throughput: 0: 907.5. Samples: 107432. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-08-16 08:38:40,309][00784] Avg episode reward: [(0, '4.386')]
[2024-08-16 08:38:42,351][03942] Updated weights for policy 0, policy_version 110 (0.0019)
[2024-08-16 08:38:45,307][00784] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3428.6). Total num frames: 462848. Throughput: 0: 965.7. Samples: 114256. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-08-16 08:38:45,312][00784] Avg episode reward: [(0, '4.267')]
[2024-08-16 08:38:50,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3393.9). Total num frames: 475136. Throughput: 0: 942.5. Samples: 119104. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-08-16 08:38:50,312][00784] Avg episode reward: [(0, '4.315')]
[2024-08-16 08:38:54,153][03942] Updated weights for policy 0, policy_version 120 (0.0021)
[2024-08-16 08:38:55,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3418.1). Total num frames: 495616. Throughput: 0: 922.7. Samples: 121362. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-08-16 08:38:55,313][00784] Avg episode reward: [(0, '4.563')]
[2024-08-16 08:39:00,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3440.7). Total num frames: 516096. Throughput: 0: 959.3. Samples: 128382. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-08-16 08:39:00,315][00784] Avg episode reward: [(0, '4.559')]
[2024-08-16 08:39:03,750][03942] Updated weights for policy 0, policy_version 130 (0.0012)
[2024-08-16 08:39:05,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3461.8). Total num frames: 536576. Throughput: 0: 968.2. Samples: 133978. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-08-16 08:39:05,312][00784] Avg episode reward: [(0, '4.586')]
[2024-08-16 08:39:10,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3686.7, 300 sec: 3430.4). Total num frames: 548864. Throughput: 0: 939.4. Samples: 136016. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-08-16 08:39:10,313][00784] Avg episode reward: [(0, '4.728')]
[2024-08-16 08:39:10,399][03929] Saving new best policy, reward=4.728!
[2024-08-16 08:39:14,883][03942] Updated weights for policy 0, policy_version 140 (0.0015)
[2024-08-16 08:39:15,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3475.4). Total num frames: 573440. Throughput: 0: 929.3. Samples: 142002. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-08-16 08:39:15,310][00784] Avg episode reward: [(0, '4.521')]
[2024-08-16 08:39:20,307][00784] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3493.7). Total num frames: 593920. Throughput: 0: 985.1. Samples: 148624. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 08:39:20,313][00784] Avg episode reward: [(0, '4.338')]
[2024-08-16 08:39:25,314][00784] Fps is (10 sec: 3274.4, 60 sec: 3686.0, 300 sec: 3463.9). Total num frames: 606208. Throughput: 0: 958.0. Samples: 150548. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 08:39:25,319][00784] Avg episode reward: [(0, '4.269')]
[2024-08-16 08:39:27,302][03942] Updated weights for policy 0, policy_version 150 (0.0029)
[2024-08-16 08:39:30,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3754.9, 300 sec: 3481.6). Total num frames: 626688. Throughput: 0: 915.2. Samples: 155438. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 08:39:30,310][00784] Avg episode reward: [(0, '4.344')]
[2024-08-16 08:39:35,313][00784] Fps is (10 sec: 4096.4, 60 sec: 3822.5, 300 sec: 3498.1). Total num frames: 647168. Throughput: 0: 950.6. Samples: 161888. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-08-16 08:39:35,319][00784] Avg episode reward: [(0, '4.602')]
[2024-08-16 08:39:38,662][03942] Updated weights for policy 0, policy_version 160 (0.0017)
[2024-08-16 08:39:40,308][00784] Fps is (10 sec: 2866.9, 60 sec: 3618.1, 300 sec: 3449.3). Total num frames: 655360. Throughput: 0: 944.1. Samples: 163848. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-08-16 08:39:40,314][00784] Avg episode reward: [(0, '4.539')]
[2024-08-16 08:39:45,310][00784] Fps is (10 sec: 2048.6, 60 sec: 3413.1, 300 sec: 3423.8). Total num frames: 667648. Throughput: 0: 859.5. Samples: 167064. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 08:39:45,316][00784] Avg episode reward: [(0, '4.427')]
[2024-08-16 08:39:50,307][00784] Fps is (10 sec: 3277.2, 60 sec: 3549.9, 300 sec: 3440.7). Total num frames: 688128. Throughput: 0: 858.0. Samples: 172590. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-08-16 08:39:50,310][00784] Avg episode reward: [(0, '4.372')]
[2024-08-16 08:39:51,258][03942] Updated weights for policy 0, policy_version 170 (0.0014)
[2024-08-16 08:39:55,307][00784] Fps is (10 sec: 4507.1, 60 sec: 3618.1, 300 sec: 3476.6). Total num frames: 712704. Throughput: 0: 887.3. Samples: 175944. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-08-16 08:39:55,314][00784] Avg episode reward: [(0, '4.452')]
[2024-08-16 08:40:00,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3471.9). Total num frames: 729088. Throughput: 0: 888.1. Samples: 181966. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-08-16 08:40:00,316][00784] Avg episode reward: [(0, '4.651')]
[2024-08-16 08:40:02,597][03942] Updated weights for policy 0, policy_version 180 (0.0016)
[2024-08-16 08:40:05,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3467.3). Total num frames: 745472. Throughput: 0: 838.3. Samples: 186346. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-08-16 08:40:05,309][00784] Avg episode reward: [(0, '4.709')]
[2024-08-16 08:40:05,320][03929] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000182_745472.pth...
[2024-08-16 08:40:10,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3481.6). Total num frames: 765952. Throughput: 0: 870.2. Samples: 189702. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-08-16 08:40:10,311][00784] Avg episode reward: [(0, '4.549')]
[2024-08-16 08:40:12,211][03942] Updated weights for policy 0, policy_version 190 (0.0019)
[2024-08-16 08:40:15,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3495.3). Total num frames: 786432. Throughput: 0: 914.1. Samples: 196574. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-08-16 08:40:15,311][00784] Avg episode reward: [(0, '4.469')]
[2024-08-16 08:40:20,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3490.5). Total num frames: 802816. Throughput: 0: 864.0. Samples: 200764. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-08-16 08:40:20,313][00784] Avg episode reward: [(0, '4.580')]
[2024-08-16 08:40:23,948][03942] Updated weights for policy 0, policy_version 200 (0.0032)
[2024-08-16 08:40:25,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3618.6, 300 sec: 3503.4). Total num frames: 823296. Throughput: 0: 881.9. Samples: 203532. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 08:40:25,313][00784] Avg episode reward: [(0, '4.752')]
[2024-08-16 08:40:25,324][03929] Saving new best policy, reward=4.752!
[2024-08-16 08:40:30,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3515.8). Total num frames: 843776. Throughput: 0: 962.2. Samples: 210360. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-08-16 08:40:30,310][00784] Avg episode reward: [(0, '4.533')]
[2024-08-16 08:40:34,796][03942] Updated weights for policy 0, policy_version 210 (0.0012)
[2024-08-16 08:40:35,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3550.2, 300 sec: 3510.9). Total num frames: 860160. Throughput: 0: 950.2. Samples: 215350. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-08-16 08:40:35,310][00784] Avg episode reward: [(0, '4.459')]
[2024-08-16 08:40:40,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3686.5, 300 sec: 3506.2). Total num frames: 876544. Throughput: 0: 922.7. Samples: 217466. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-08-16 08:40:40,312][00784] Avg episode reward: [(0, '4.660')]
[2024-08-16 08:40:44,863][03942] Updated weights for policy 0, policy_version 220 (0.0019)
[2024-08-16 08:40:45,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3891.4, 300 sec: 3533.8). Total num frames: 901120. Throughput: 0: 936.4. Samples: 224104. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-08-16 08:40:45,312][00784] Avg episode reward: [(0, '4.760')]
[2024-08-16 08:40:45,320][03929] Saving new best policy, reward=4.760!
[2024-08-16 08:40:50,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3528.9). Total num frames: 917504. Throughput: 0: 978.1. Samples: 230362. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2024-08-16 08:40:50,311][00784] Avg episode reward: [(0, '4.634')]
[2024-08-16 08:40:55,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3524.1). Total num frames: 933888. Throughput: 0: 949.9. Samples: 232446. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 08:40:55,312][00784] Avg episode reward: [(0, '4.376')]
[2024-08-16 08:40:56,566][03942] Updated weights for policy 0, policy_version 230 (0.0025)
[2024-08-16 08:41:00,307][00784] Fps is (10 sec: 4095.9, 60 sec: 3822.9, 300 sec: 3549.9). Total num frames: 958464. Throughput: 0: 924.3. Samples: 238166. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-08-16 08:41:00,314][00784] Avg episode reward: [(0, '4.819')]
[2024-08-16 08:41:00,317][03929] Saving new best policy, reward=4.819!
[2024-08-16 08:41:05,307][00784] Fps is (10 sec: 4505.7, 60 sec: 3891.2, 300 sec: 3559.8). Total num frames: 978944. Throughput: 0: 976.6. Samples: 244712. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 08:41:05,313][00784] Avg episode reward: [(0, '4.943')]
[2024-08-16 08:41:05,330][03929] Saving new best policy, reward=4.943!
[2024-08-16 08:41:06,182][03942] Updated weights for policy 0, policy_version 240 (0.0012)
[2024-08-16 08:41:10,307][00784] Fps is (10 sec: 3276.9, 60 sec: 3754.7, 300 sec: 3540.1). Total num frames: 991232. Throughput: 0: 968.0. Samples: 247094. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-08-16 08:41:10,312][00784] Avg episode reward: [(0, '4.828')]
[2024-08-16 08:41:15,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3549.9). Total num frames: 1011712. Throughput: 0: 916.7. Samples: 251610. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2024-08-16 08:41:15,310][00784] Avg episode reward: [(0, '4.654')]
[2024-08-16 08:41:17,720][03942] Updated weights for policy 0, policy_version 250 (0.0027)
[2024-08-16 08:41:20,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3559.3). Total num frames: 1032192. Throughput: 0: 960.0. Samples: 258550. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-08-16 08:41:20,314][00784] Avg episode reward: [(0, '4.900')]
[2024-08-16 08:41:25,309][00784] Fps is (10 sec: 4095.0, 60 sec: 3822.8, 300 sec: 3568.4). Total num frames: 1052672. Throughput: 0: 985.9. Samples: 261836. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 08:41:25,311][00784] Avg episode reward: [(0, '4.734')]
[2024-08-16 08:41:29,310][03942] Updated weights for policy 0, policy_version 260 (0.0016)
[2024-08-16 08:41:30,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3610.0). Total num frames: 1064960. Throughput: 0: 934.4. Samples: 266152. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 08:41:30,311][00784] Avg episode reward: [(0, '4.522')]
[2024-08-16 08:41:35,307][00784] Fps is (10 sec: 3687.3, 60 sec: 3822.9, 300 sec: 3693.3). Total num frames: 1089536. Throughput: 0: 931.7. Samples: 272290. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 08:41:35,314][00784] Avg episode reward: [(0, '4.797')]
[2024-08-16 08:41:38,597][03942] Updated weights for policy 0, policy_version 270 (0.0012)
[2024-08-16 08:41:40,307][00784] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3748.9). Total num frames: 1110016. Throughput: 0: 962.0. Samples: 275736. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 08:41:40,316][00784] Avg episode reward: [(0, '4.741')]
[2024-08-16 08:41:45,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 1126400. Throughput: 0: 951.7. Samples: 280994. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-08-16 08:41:45,313][00784] Avg episode reward: [(0, '4.886')]
[2024-08-16 08:41:50,285][03942] Updated weights for policy 0, policy_version 280 (0.0044)
[2024-08-16 08:41:50,307][00784] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3735.0). Total num frames: 1146880. Throughput: 0: 925.4. Samples: 286354. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-08-16 08:41:50,309][00784] Avg episode reward: [(0, '4.964')]
[2024-08-16 08:41:50,312][03929] Saving new best policy, reward=4.964!
[2024-08-16 08:41:55,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3748.9). Total num frames: 1167360. Throughput: 0: 945.9. Samples: 289660. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 08:41:55,310][00784] Avg episode reward: [(0, '5.005')]
[2024-08-16 08:41:55,320][03929] Saving new best policy, reward=5.005!
[2024-08-16 08:42:00,307][00784] Fps is (10 sec: 3686.3, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 1183744. Throughput: 0: 982.9. Samples: 295842. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-08-16 08:42:00,310][00784] Avg episode reward: [(0, '4.904')]
[2024-08-16 08:42:00,842][03942] Updated weights for policy 0, policy_version 290 (0.0019)
[2024-08-16 08:42:05,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3748.9). Total num frames: 1200128. Throughput: 0: 919.2. Samples: 299916. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 08:42:05,315][00784] Avg episode reward: [(0, '4.901')]
[2024-08-16 08:42:05,322][03929] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000293_1200128.pth...
[2024-08-16 08:42:05,490][03929] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000075_307200.pth
[2024-08-16 08:42:10,307][00784] Fps is (10 sec: 3686.5, 60 sec: 3822.9, 300 sec: 3721.1). Total num frames: 1220608. Throughput: 0: 920.3. Samples: 303246. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-08-16 08:42:10,310][00784] Avg episode reward: [(0, '5.060')]
[2024-08-16 08:42:10,398][03929] Saving new best policy, reward=5.060!
[2024-08-16 08:42:11,409][03942] Updated weights for policy 0, policy_version 300 (0.0021)
[2024-08-16 08:42:15,313][00784] Fps is (10 sec: 4093.5, 60 sec: 3822.5, 300 sec: 3734.9). Total num frames: 1241088. Throughput: 0: 972.0. Samples: 309896. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-08-16 08:42:15,316][00784] Avg episode reward: [(0, '5.221')]
[2024-08-16 08:42:15,380][03929] Saving new best policy, reward=5.221!
[2024-08-16 08:42:20,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 1257472. Throughput: 0: 937.8. Samples: 314490. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
|
[2024-08-16 08:42:20,313][00784] Avg episode reward: [(0, '5.188')] |
|
[2024-08-16 08:42:23,097][03942] Updated weights for policy 0, policy_version 310 (0.0020) |
|
[2024-08-16 08:42:25,307][00784] Fps is (10 sec: 3688.7, 60 sec: 3754.8, 300 sec: 3735.0). Total num frames: 1277952. Throughput: 0: 920.1. Samples: 317142. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:42:25,312][00784] Avg episode reward: [(0, '5.136')] |
|
[2024-08-16 08:42:30,307][00784] Fps is (10 sec: 4505.5, 60 sec: 3959.4, 300 sec: 3748.9). Total num frames: 1302528. Throughput: 0: 961.2. Samples: 324246. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:42:30,310][00784] Avg episode reward: [(0, '5.136')] |
|
[2024-08-16 08:42:31,980][03942] Updated weights for policy 0, policy_version 320 (0.0022) |
|
[2024-08-16 08:42:35,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 1318912. Throughput: 0: 966.9. Samples: 329864. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:42:35,309][00784] Avg episode reward: [(0, '5.292')] |
|
[2024-08-16 08:42:35,324][03929] Saving new best policy, reward=5.292! |
|
[2024-08-16 08:42:40,307][00784] Fps is (10 sec: 3276.9, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 1335296. Throughput: 0: 940.3. Samples: 331972. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:42:40,313][00784] Avg episode reward: [(0, '5.475')] |
|
[2024-08-16 08:42:40,317][03929] Saving new best policy, reward=5.475! |
|
[2024-08-16 08:42:44,003][03942] Updated weights for policy 0, policy_version 330 (0.0016) |
|
[2024-08-16 08:42:45,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 1351680. Throughput: 0: 938.8. Samples: 338086. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:42:45,310][00784] Avg episode reward: [(0, '5.684')] |
|
[2024-08-16 08:42:45,320][03929] Saving new best policy, reward=5.684! |
|
[2024-08-16 08:42:50,309][00784] Fps is (10 sec: 3276.1, 60 sec: 3686.3, 300 sec: 3721.1). Total num frames: 1368064. Throughput: 0: 940.6. Samples: 342244. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:42:50,312][00784] Avg episode reward: [(0, '5.604')] |
|
[2024-08-16 08:42:55,307][00784] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3721.1). Total num frames: 1380352. Throughput: 0: 910.1. Samples: 344202. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:42:55,313][00784] Avg episode reward: [(0, '5.632')] |
|
[2024-08-16 08:42:57,745][03942] Updated weights for policy 0, policy_version 340 (0.0015) |
|
[2024-08-16 08:43:00,307][00784] Fps is (10 sec: 3277.5, 60 sec: 3618.2, 300 sec: 3707.2). Total num frames: 1400832. Throughput: 0: 880.0. Samples: 349490. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:43:00,314][00784] Avg episode reward: [(0, '5.343')] |
|
[2024-08-16 08:43:05,307][00784] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3721.2). Total num frames: 1425408. Throughput: 0: 933.9. Samples: 356516. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-08-16 08:43:05,309][00784] Avg episode reward: [(0, '5.698')] |
|
[2024-08-16 08:43:05,317][03929] Saving new best policy, reward=5.698! |
|
[2024-08-16 08:43:06,620][03942] Updated weights for policy 0, policy_version 350 (0.0027) |
|
[2024-08-16 08:43:10,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3721.1). Total num frames: 1441792. Throughput: 0: 943.5. Samples: 359598. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-08-16 08:43:10,309][00784] Avg episode reward: [(0, '5.760')] |
|
[2024-08-16 08:43:10,314][03929] Saving new best policy, reward=5.760! |
|
[2024-08-16 08:43:15,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3618.5, 300 sec: 3721.1). Total num frames: 1458176. Throughput: 0: 877.4. Samples: 363730. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:43:15,313][00784] Avg episode reward: [(0, '5.983')] |
|
[2024-08-16 08:43:15,325][03929] Saving new best policy, reward=5.983! |
|
[2024-08-16 08:43:18,243][03942] Updated weights for policy 0, policy_version 360 (0.0024) |
|
[2024-08-16 08:43:20,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 1482752. Throughput: 0: 905.5. Samples: 370610. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:43:20,309][00784] Avg episode reward: [(0, '6.274')] |
|
[2024-08-16 08:43:20,316][03929] Saving new best policy, reward=6.274! |
|
[2024-08-16 08:43:25,307][00784] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3735.0). Total num frames: 1503232. Throughput: 0: 935.1. Samples: 374052. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:43:25,311][00784] Avg episode reward: [(0, '6.730')] |
|
[2024-08-16 08:43:25,323][03929] Saving new best policy, reward=6.730! |
|
[2024-08-16 08:43:29,282][03942] Updated weights for policy 0, policy_version 370 (0.0013) |
|
[2024-08-16 08:43:30,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3721.1). Total num frames: 1515520. Throughput: 0: 906.2. Samples: 378866. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:43:30,312][00784] Avg episode reward: [(0, '6.514')] |
|
[2024-08-16 08:43:35,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3735.0). Total num frames: 1540096. Throughput: 0: 944.4. Samples: 384740. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-08-16 08:43:35,317][00784] Avg episode reward: [(0, '7.046')] |
|
[2024-08-16 08:43:35,327][03929] Saving new best policy, reward=7.046! |
|
[2024-08-16 08:43:38,570][03942] Updated weights for policy 0, policy_version 380 (0.0023) |
|
[2024-08-16 08:43:40,307][00784] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3721.1). Total num frames: 1560576. Throughput: 0: 977.5. Samples: 388190. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-08-16 08:43:40,314][00784] Avg episode reward: [(0, '7.455')] |
|
[2024-08-16 08:43:40,362][03929] Saving new best policy, reward=7.455! |
|
[2024-08-16 08:43:45,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3735.0). Total num frames: 1576960. Throughput: 0: 989.2. Samples: 394006. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:43:45,309][00784] Avg episode reward: [(0, '7.582')] |
|
[2024-08-16 08:43:45,316][03929] Saving new best policy, reward=7.582! |
|
[2024-08-16 08:43:50,209][03942] Updated weights for policy 0, policy_version 390 (0.0026) |
|
[2024-08-16 08:43:50,307][00784] Fps is (10 sec: 3686.3, 60 sec: 3823.1, 300 sec: 3735.0). Total num frames: 1597440. Throughput: 0: 943.9. Samples: 398992. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-08-16 08:43:50,309][00784] Avg episode reward: [(0, '8.020')] |
|
[2024-08-16 08:43:50,314][03929] Saving new best policy, reward=8.020! |
|
[2024-08-16 08:43:55,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3735.0). Total num frames: 1617920. Throughput: 0: 951.7. Samples: 402424. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:43:55,309][00784] Avg episode reward: [(0, '8.489')] |
|
[2024-08-16 08:43:55,316][03929] Saving new best policy, reward=8.489! |
|
[2024-08-16 08:43:59,567][03942] Updated weights for policy 0, policy_version 400 (0.0012) |
|
[2024-08-16 08:44:00,308][00784] Fps is (10 sec: 4095.7, 60 sec: 3959.4, 300 sec: 3735.0). Total num frames: 1638400. Throughput: 0: 1011.2. Samples: 409236. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:44:00,312][00784] Avg episode reward: [(0, '9.557')] |
|
[2024-08-16 08:44:00,318][03929] Saving new best policy, reward=9.557! |
|
[2024-08-16 08:44:05,307][00784] Fps is (10 sec: 3276.7, 60 sec: 3754.6, 300 sec: 3735.0). Total num frames: 1650688. Throughput: 0: 950.6. Samples: 413386. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:44:05,309][00784] Avg episode reward: [(0, '9.285')] |
|
[2024-08-16 08:44:05,393][03929] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000404_1654784.pth... |
|
[2024-08-16 08:44:05,549][03929] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000182_745472.pth |
|
[2024-08-16 08:44:10,307][00784] Fps is (10 sec: 3686.8, 60 sec: 3891.2, 300 sec: 3735.0). Total num frames: 1675264. Throughput: 0: 936.5. Samples: 416196. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-08-16 08:44:10,311][00784] Avg episode reward: [(0, '8.589')] |
|
[2024-08-16 08:44:11,122][03942] Updated weights for policy 0, policy_version 410 (0.0024) |
|
[2024-08-16 08:44:15,308][00784] Fps is (10 sec: 4505.3, 60 sec: 3959.4, 300 sec: 3735.0). Total num frames: 1695744. Throughput: 0: 985.8. Samples: 423226. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:44:15,316][00784] Avg episode reward: [(0, '8.390')] |
|
[2024-08-16 08:44:20,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3749.0). Total num frames: 1712128. Throughput: 0: 968.9. Samples: 428340. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-08-16 08:44:20,311][00784] Avg episode reward: [(0, '8.758')] |
|
[2024-08-16 08:44:22,676][03942] Updated weights for policy 0, policy_version 420 (0.0029) |
|
[2024-08-16 08:44:25,307][00784] Fps is (10 sec: 3277.1, 60 sec: 3754.7, 300 sec: 3735.0). Total num frames: 1728512. Throughput: 0: 938.3. Samples: 430412. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-08-16 08:44:25,314][00784] Avg episode reward: [(0, '9.055')] |
|
[2024-08-16 08:44:30,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3749.0). Total num frames: 1753088. Throughput: 0: 947.1. Samples: 436624. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:44:30,309][00784] Avg episode reward: [(0, '9.024')] |
|
[2024-08-16 08:44:32,092][03942] Updated weights for policy 0, policy_version 430 (0.0013) |
|
[2024-08-16 08:44:35,307][00784] Fps is (10 sec: 4095.9, 60 sec: 3822.9, 300 sec: 3776.7). Total num frames: 1769472. Throughput: 0: 970.8. Samples: 442678. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:44:35,313][00784] Avg episode reward: [(0, '9.585')] |
|
[2024-08-16 08:44:35,325][03929] Saving new best policy, reward=9.585! |
|
[2024-08-16 08:44:40,310][00784] Fps is (10 sec: 2866.4, 60 sec: 3686.2, 300 sec: 3776.7). Total num frames: 1781760. Throughput: 0: 936.5. Samples: 444570. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:44:40,311][00784] Avg episode reward: [(0, '9.697')] |
|
[2024-08-16 08:44:40,315][03929] Saving new best policy, reward=9.697! |
|
[2024-08-16 08:44:44,212][03942] Updated weights for policy 0, policy_version 440 (0.0039) |
|
[2024-08-16 08:44:45,307][00784] Fps is (10 sec: 3686.5, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 1806336. Throughput: 0: 909.6. Samples: 450168. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:44:45,314][00784] Avg episode reward: [(0, '10.261')] |
|
[2024-08-16 08:44:45,322][03929] Saving new best policy, reward=10.261! |
|
[2024-08-16 08:44:50,313][00784] Fps is (10 sec: 4504.0, 60 sec: 3822.6, 300 sec: 3776.6). Total num frames: 1826816. Throughput: 0: 971.3. Samples: 457100. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:44:50,319][00784] Avg episode reward: [(0, '9.496')] |
|
[2024-08-16 08:44:54,950][03942] Updated weights for policy 0, policy_version 450 (0.0032) |
|
[2024-08-16 08:44:55,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 1843200. Throughput: 0: 961.7. Samples: 459474. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:44:55,314][00784] Avg episode reward: [(0, '9.846')] |
|
[2024-08-16 08:45:00,307][00784] Fps is (10 sec: 3688.6, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 1863680. Throughput: 0: 915.9. Samples: 464442. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:45:00,310][00784] Avg episode reward: [(0, '10.075')] |
|
[2024-08-16 08:45:04,511][03942] Updated weights for policy 0, policy_version 460 (0.0013) |
|
[2024-08-16 08:45:05,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3790.5). Total num frames: 1884160. Throughput: 0: 958.4. Samples: 471466. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-08-16 08:45:05,313][00784] Avg episode reward: [(0, '11.375')] |
|
[2024-08-16 08:45:05,409][03929] Saving new best policy, reward=11.375! |
|
[2024-08-16 08:45:10,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3776.7). Total num frames: 1900544. Throughput: 0: 982.8. Samples: 474636. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:45:10,313][00784] Avg episode reward: [(0, '11.422')] |
|
[2024-08-16 08:45:10,333][03929] Saving new best policy, reward=11.422! |
|
[2024-08-16 08:45:15,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3686.5, 300 sec: 3776.7). Total num frames: 1916928. Throughput: 0: 927.6. Samples: 478368. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:45:15,313][00784] Avg episode reward: [(0, '11.143')] |
|
[2024-08-16 08:45:16,753][03942] Updated weights for policy 0, policy_version 470 (0.0022) |
|
[2024-08-16 08:45:20,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 1941504. Throughput: 0: 942.4. Samples: 485086. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:45:20,309][00784] Avg episode reward: [(0, '11.369')] |
|
[2024-08-16 08:45:25,307][00784] Fps is (10 sec: 4505.5, 60 sec: 3891.2, 300 sec: 3790.5). Total num frames: 1961984. Throughput: 0: 979.0. Samples: 488622. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:45:25,314][00784] Avg episode reward: [(0, '12.074')] |
|
[2024-08-16 08:45:25,323][03929] Saving new best policy, reward=12.074! |
|
[2024-08-16 08:45:26,565][03942] Updated weights for policy 0, policy_version 480 (0.0025) |
|
[2024-08-16 08:45:30,310][00784] Fps is (10 sec: 3275.8, 60 sec: 3686.2, 300 sec: 3776.6). Total num frames: 1974272. Throughput: 0: 964.5. Samples: 493574. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-08-16 08:45:30,317][00784] Avg episode reward: [(0, '12.895')] |
|
[2024-08-16 08:45:30,325][03929] Saving new best policy, reward=12.895! |
|
[2024-08-16 08:45:35,307][00784] Fps is (10 sec: 3276.9, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 1994752. Throughput: 0: 934.9. Samples: 499166. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-08-16 08:45:35,313][00784] Avg episode reward: [(0, '13.655')] |
|
[2024-08-16 08:45:35,324][03929] Saving new best policy, reward=13.655! |
|
[2024-08-16 08:45:37,344][03942] Updated weights for policy 0, policy_version 490 (0.0012) |
|
[2024-08-16 08:45:40,307][00784] Fps is (10 sec: 4506.9, 60 sec: 3959.6, 300 sec: 3790.5). Total num frames: 2019328. Throughput: 0: 956.6. Samples: 502520. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-08-16 08:45:40,314][00784] Avg episode reward: [(0, '12.953')] |
|
[2024-08-16 08:45:45,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 2035712. Throughput: 0: 976.5. Samples: 508384. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:45:45,314][00784] Avg episode reward: [(0, '13.303')] |
|
[2024-08-16 08:45:49,048][03942] Updated weights for policy 0, policy_version 500 (0.0025) |
|
[2024-08-16 08:45:50,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3755.1, 300 sec: 3790.5). Total num frames: 2052096. Throughput: 0: 921.0. Samples: 512910. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:45:50,315][00784] Avg episode reward: [(0, '12.522')] |
|
[2024-08-16 08:45:55,311][00784] Fps is (10 sec: 3275.4, 60 sec: 3754.4, 300 sec: 3762.7). Total num frames: 2068480. Throughput: 0: 915.2. Samples: 515826. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:45:55,317][00784] Avg episode reward: [(0, '12.636')] |
|
[2024-08-16 08:46:00,307][00784] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3735.0). Total num frames: 2080768. Throughput: 0: 925.2. Samples: 520004. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-08-16 08:46:00,312][00784] Avg episode reward: [(0, '13.065')] |
|
[2024-08-16 08:46:02,635][03942] Updated weights for policy 0, policy_version 510 (0.0020) |
|
[2024-08-16 08:46:05,307][00784] Fps is (10 sec: 2458.6, 60 sec: 3481.6, 300 sec: 3735.0). Total num frames: 2093056. Throughput: 0: 865.1. Samples: 524016. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-08-16 08:46:05,313][00784] Avg episode reward: [(0, '12.918')] |
|
[2024-08-16 08:46:05,324][03929] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000511_2093056.pth... |
|
[2024-08-16 08:46:05,533][03929] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000293_1200128.pth |
|
[2024-08-16 08:46:10,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3748.9). Total num frames: 2117632. Throughput: 0: 848.4. Samples: 526800. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:46:10,314][00784] Avg episode reward: [(0, '13.437')] |
|
[2024-08-16 08:46:12,789][03942] Updated weights for policy 0, policy_version 520 (0.0018) |
|
[2024-08-16 08:46:15,307][00784] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3748.9). Total num frames: 2138112. Throughput: 0: 893.0. Samples: 533758. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-08-16 08:46:15,313][00784] Avg episode reward: [(0, '14.042')] |
|
[2024-08-16 08:46:15,324][03929] Saving new best policy, reward=14.042! |
|
[2024-08-16 08:46:20,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3735.0). Total num frames: 2154496. Throughput: 0: 886.4. Samples: 539054. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:46:20,312][00784] Avg episode reward: [(0, '14.843')] |
|
[2024-08-16 08:46:20,314][03929] Saving new best policy, reward=14.843! |
|
[2024-08-16 08:46:24,287][03942] Updated weights for policy 0, policy_version 530 (0.0012) |
|
[2024-08-16 08:46:25,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3762.8). Total num frames: 2174976. Throughput: 0: 858.9. Samples: 541170. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:46:25,312][00784] Avg episode reward: [(0, '16.016')] |
|
[2024-08-16 08:46:25,321][03929] Saving new best policy, reward=16.016! |
|
[2024-08-16 08:46:30,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3686.6, 300 sec: 3748.9). Total num frames: 2195456. Throughput: 0: 880.0. Samples: 547982. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:46:30,309][00784] Avg episode reward: [(0, '16.706')] |
|
[2024-08-16 08:46:30,313][03929] Saving new best policy, reward=16.706! |
|
[2024-08-16 08:46:33,365][03942] Updated weights for policy 0, policy_version 540 (0.0019) |
|
[2024-08-16 08:46:35,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3748.9). Total num frames: 2215936. Throughput: 0: 920.1. Samples: 554316. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:46:35,310][00784] Avg episode reward: [(0, '16.885')] |
|
[2024-08-16 08:46:35,319][03929] Saving new best policy, reward=16.885! |
|
[2024-08-16 08:46:40,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3735.0). Total num frames: 2228224. Throughput: 0: 898.8. Samples: 556270. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-08-16 08:46:40,309][00784] Avg episode reward: [(0, '15.817')] |
|
[2024-08-16 08:46:44,851][03942] Updated weights for policy 0, policy_version 550 (0.0015) |
|
[2024-08-16 08:46:45,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3748.9). Total num frames: 2252800. Throughput: 0: 935.0. Samples: 562078. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-08-16 08:46:45,309][00784] Avg episode reward: [(0, '16.645')] |
|
[2024-08-16 08:46:50,307][00784] Fps is (10 sec: 4915.2, 60 sec: 3754.7, 300 sec: 3762.8). Total num frames: 2277376. Throughput: 0: 1003.6. Samples: 569178. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-08-16 08:46:50,314][00784] Avg episode reward: [(0, '15.722')] |
|
[2024-08-16 08:46:55,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3686.7, 300 sec: 3748.9). Total num frames: 2289664. Throughput: 0: 997.2. Samples: 571676. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-08-16 08:46:55,312][00784] Avg episode reward: [(0, '16.326')] |
|
[2024-08-16 08:46:55,854][03942] Updated weights for policy 0, policy_version 560 (0.0034) |
|
[2024-08-16 08:47:00,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 2310144. Throughput: 0: 951.1. Samples: 576558. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-08-16 08:47:00,310][00784] Avg episode reward: [(0, '16.876')] |
|
[2024-08-16 08:47:04,907][03942] Updated weights for policy 0, policy_version 570 (0.0014) |
|
[2024-08-16 08:47:05,307][00784] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3776.6). Total num frames: 2334720. Throughput: 0: 991.2. Samples: 583658. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2024-08-16 08:47:05,310][00784] Avg episode reward: [(0, '17.454')] |
|
[2024-08-16 08:47:05,318][03929] Saving new best policy, reward=17.454! |
|
[2024-08-16 08:47:10,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3762.8). Total num frames: 2351104. Throughput: 0: 1014.2. Samples: 586808. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:47:10,311][00784] Avg episode reward: [(0, '16.980')] |
|
[2024-08-16 08:47:15,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 2367488. Throughput: 0: 960.0. Samples: 591182. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:47:15,309][00784] Avg episode reward: [(0, '17.816')] |
|
[2024-08-16 08:47:15,323][03929] Saving new best policy, reward=17.816! |
|
[2024-08-16 08:47:16,524][03942] Updated weights for policy 0, policy_version 580 (0.0014) |
|
[2024-08-16 08:47:20,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3776.7). Total num frames: 2392064. Throughput: 0: 965.8. Samples: 597778. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:47:20,309][00784] Avg episode reward: [(0, '18.646')] |
|
[2024-08-16 08:47:20,316][03929] Saving new best policy, reward=18.646! |
|
[2024-08-16 08:47:25,307][00784] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3762.8). Total num frames: 2412544. Throughput: 0: 1000.1. Samples: 601276. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:47:25,314][00784] Avg episode reward: [(0, '18.053')] |
|
[2024-08-16 08:47:25,974][03942] Updated weights for policy 0, policy_version 590 (0.0016) |
|
[2024-08-16 08:47:30,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3762.8). Total num frames: 2428928. Throughput: 0: 985.6. Samples: 606432. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:47:30,312][00784] Avg episode reward: [(0, '18.080')] |
|
[2024-08-16 08:47:35,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3776.7). Total num frames: 2449408. Throughput: 0: 958.4. Samples: 612306. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:47:35,310][00784] Avg episode reward: [(0, '17.065')] |
|
[2024-08-16 08:47:36,799][03942] Updated weights for policy 0, policy_version 600 (0.0039) |
|
[2024-08-16 08:47:40,307][00784] Fps is (10 sec: 4505.6, 60 sec: 4096.0, 300 sec: 3804.4). Total num frames: 2473984. Throughput: 0: 979.8. Samples: 615768. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:47:40,314][00784] Avg episode reward: [(0, '16.031')] |
|
[2024-08-16 08:47:45,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3804.4). Total num frames: 2490368. Throughput: 0: 1007.3. Samples: 621886. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-08-16 08:47:45,312][00784] Avg episode reward: [(0, '15.210')] |
|
[2024-08-16 08:47:47,803][03942] Updated weights for policy 0, policy_version 610 (0.0018) |
|
[2024-08-16 08:47:50,307][00784] Fps is (10 sec: 3276.7, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 2506752. Throughput: 0: 955.6. Samples: 626660. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-08-16 08:47:50,310][00784] Avg episode reward: [(0, '14.984')] |
|
[2024-08-16 08:47:55,307][00784] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3832.2). Total num frames: 2531328. Throughput: 0: 965.2. Samples: 630244. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:47:55,314][00784] Avg episode reward: [(0, '15.056')] |
|
[2024-08-16 08:47:56,796][03942] Updated weights for policy 0, policy_version 620 (0.0019) |
|
[2024-08-16 08:48:00,307][00784] Fps is (10 sec: 4505.7, 60 sec: 4027.7, 300 sec: 3818.3). Total num frames: 2551808. Throughput: 0: 1023.2. Samples: 637228. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:48:00,309][00784] Avg episode reward: [(0, '15.895')] |
|
[2024-08-16 08:48:05,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2564096. Throughput: 0: 973.2. Samples: 641574. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-08-16 08:48:05,313][00784] Avg episode reward: [(0, '16.636')] |
|
[2024-08-16 08:48:05,325][03929] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000626_2564096.pth... |
|
[2024-08-16 08:48:05,534][03929] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000404_1654784.pth |
|
[2024-08-16 08:48:08,454][03942] Updated weights for policy 0, policy_version 630 (0.0012) |
|
[2024-08-16 08:48:10,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3832.2). Total num frames: 2588672. Throughput: 0: 957.5. Samples: 644362. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:48:10,310][00784] Avg episode reward: [(0, '17.203')] |
|
[2024-08-16 08:48:15,307][00784] Fps is (10 sec: 4915.2, 60 sec: 4096.0, 300 sec: 3832.2). Total num frames: 2613248. Throughput: 0: 1002.1. Samples: 651526. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:48:15,310][00784] Avg episode reward: [(0, '17.380')] |
|
[2024-08-16 08:48:17,960][03942] Updated weights for policy 0, policy_version 640 (0.0017) |
|
[2024-08-16 08:48:20,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 2625536. Throughput: 0: 990.4. Samples: 656874. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:48:20,313][00784] Avg episode reward: [(0, '17.937')] |
|
[2024-08-16 08:48:25,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 2646016. Throughput: 0: 961.1. Samples: 659016. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:48:25,309][00784] Avg episode reward: [(0, '17.292')] |
|
[2024-08-16 08:48:28,429][03942] Updated weights for policy 0, policy_version 650 (0.0024) |
|
[2024-08-16 08:48:30,307][00784] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3832.2). Total num frames: 2670592. Throughput: 0: 982.0. Samples: 666076. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:48:30,309][00784] Avg episode reward: [(0, '17.536')] |
|
[2024-08-16 08:48:35,307][00784] Fps is (10 sec: 4095.9, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 2686976. Throughput: 0: 1018.6. Samples: 672496. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:48:35,310][00784] Avg episode reward: [(0, '17.575')] |
|
[2024-08-16 08:48:39,901][03942] Updated weights for policy 0, policy_version 660 (0.0016) |
|
[2024-08-16 08:48:40,307][00784] Fps is (10 sec: 3276.7, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 2703360. Throughput: 0: 985.0. Samples: 674570. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:48:40,310][00784] Avg episode reward: [(0, '18.359')] |
|
[2024-08-16 08:48:45,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3959.4, 300 sec: 3832.2). Total num frames: 2727936. Throughput: 0: 959.4. Samples: 680402. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:48:45,315][00784] Avg episode reward: [(0, '18.493')] |
|
[2024-08-16 08:48:48,619][03942] Updated weights for policy 0, policy_version 670 (0.0021) |
|
[2024-08-16 08:48:50,308][00784] Fps is (10 sec: 4914.7, 60 sec: 4095.9, 300 sec: 3846.1). Total num frames: 2752512. Throughput: 0: 1023.0. Samples: 687612. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:48:50,310][00784] Avg episode reward: [(0, '19.363')] |
|
[2024-08-16 08:48:50,316][03929] Saving new best policy, reward=19.363! |
|
[2024-08-16 08:48:55,307][00784] Fps is (10 sec: 3686.5, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 2764800. Throughput: 0: 1014.5. Samples: 690014. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:48:55,313][00784] Avg episode reward: [(0, '20.263')] |
|
[2024-08-16 08:48:55,326][03929] Saving new best policy, reward=20.263! |
|
[2024-08-16 08:49:00,095][03942] Updated weights for policy 0, policy_version 680 (0.0036) |
|
[2024-08-16 08:49:00,307][00784] Fps is (10 sec: 3277.2, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 2785280. Throughput: 0: 960.4. Samples: 694744. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:49:00,313][00784] Avg episode reward: [(0, '19.484')] |
|
[2024-08-16 08:49:05,307][00784] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3832.2). Total num frames: 2805760. Throughput: 0: 1001.1. Samples: 701922. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:49:05,309][00784] Avg episode reward: [(0, '19.570')] |
|
[2024-08-16 08:49:10,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2818048. Throughput: 0: 1003.7. Samples: 704182. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:49:10,312][00784] Avg episode reward: [(0, '18.218')] |
|
[2024-08-16 08:49:12,493][03942] Updated weights for policy 0, policy_version 690 (0.0032) |
|
[2024-08-16 08:49:15,309][00784] Fps is (10 sec: 2457.0, 60 sec: 3618.0, 300 sec: 3790.5). Total num frames: 2830336. Throughput: 0: 922.8. Samples: 707606. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:49:15,316][00784] Avg episode reward: [(0, '17.870')] |
|
[2024-08-16 08:49:20,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3804.4). Total num frames: 2850816. Throughput: 0: 896.7. Samples: 712848. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-08-16 08:49:20,312][00784] Avg episode reward: [(0, '18.237')] |
|
[2024-08-16 08:49:23,280][03942] Updated weights for policy 0, policy_version 700 (0.0016) |
|
[2024-08-16 08:49:25,307][00784] Fps is (10 sec: 4506.6, 60 sec: 3822.9, 300 sec: 3804.4). Total num frames: 2875392. Throughput: 0: 928.9. Samples: 716372. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-08-16 08:49:25,314][00784] Avg episode reward: [(0, '18.040')] |
|
[2024-08-16 08:49:30,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3804.4). Total num frames: 2891776. Throughput: 0: 943.8. Samples: 722872. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:49:30,313][00784] Avg episode reward: [(0, '17.661')] |
|
[2024-08-16 08:49:34,598][03942] Updated weights for policy 0, policy_version 710 (0.0038) |
|
[2024-08-16 08:49:35,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3818.3). Total num frames: 2908160. Throughput: 0: 881.7. Samples: 727288. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:49:35,309][00784] Avg episode reward: [(0, '17.172')] |
|
[2024-08-16 08:49:40,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 2932736. Throughput: 0: 903.6. Samples: 730678. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:49:40,309][00784] Avg episode reward: [(0, '17.248')] |
|
[2024-08-16 08:49:43,404][03942] Updated weights for policy 0, policy_version 720 (0.0024) |
|
[2024-08-16 08:49:45,307][00784] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3818.4). Total num frames: 2953216. Throughput: 0: 958.0. Samples: 737856. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:49:45,315][00784] Avg episode reward: [(0, '17.822')] |
|
[2024-08-16 08:49:50,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3618.2, 300 sec: 3818.3). Total num frames: 2969600. Throughput: 0: 904.9. Samples: 742642. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:49:50,309][00784] Avg episode reward: [(0, '18.188')] |
|
[2024-08-16 08:49:54,743][03942] Updated weights for policy 0, policy_version 730 (0.0026) |
|
[2024-08-16 08:49:55,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3818.3). Total num frames: 2990080. Throughput: 0: 913.3. Samples: 745282. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:49:55,310][00784] Avg episode reward: [(0, '19.021')] |
|
[2024-08-16 08:50:00,307][00784] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3014656. Throughput: 0: 994.5. Samples: 752358. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:50:00,309][00784] Avg episode reward: [(0, '19.891')] |
|
[2024-08-16 08:50:04,708][03942] Updated weights for policy 0, policy_version 740 (0.0012) |
|
[2024-08-16 08:50:05,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3832.2). Total num frames: 3031040. Throughput: 0: 1005.9. Samples: 758112. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:50:05,313][00784] Avg episode reward: [(0, '20.799')] |
|
[2024-08-16 08:50:05,322][03929] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000740_3031040.pth... |
|
[2024-08-16 08:50:05,518][03929] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000511_2093056.pth |
|
[2024-08-16 08:50:05,534][03929] Saving new best policy, reward=20.799! |
|
[2024-08-16 08:50:10,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3047424. Throughput: 0: 969.9. Samples: 760018. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:50:10,312][00784] Avg episode reward: [(0, '20.560')] |
|
[2024-08-16 08:50:15,031][03942] Updated weights for policy 0, policy_version 750 (0.0020) |
|
[2024-08-16 08:50:15,310][00784] Fps is (10 sec: 4094.9, 60 sec: 4027.7, 300 sec: 3832.2). Total num frames: 3072000. Throughput: 0: 970.7. Samples: 766554. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:50:15,316][00784] Avg episode reward: [(0, '20.641')] |
|
[2024-08-16 08:50:20,307][00784] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3832.2). Total num frames: 3092480. Throughput: 0: 1023.6. Samples: 773352. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:50:20,310][00784] Avg episode reward: [(0, '19.639')] |
|
[2024-08-16 08:50:25,309][00784] Fps is (10 sec: 3277.0, 60 sec: 3822.8, 300 sec: 3832.2). Total num frames: 3104768. Throughput: 0: 995.4. Samples: 775472. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:50:25,314][00784] Avg episode reward: [(0, '19.539')] |
|
[2024-08-16 08:50:26,436][03942] Updated weights for policy 0, policy_version 760 (0.0025) |
|
[2024-08-16 08:50:30,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 3129344. Throughput: 0: 961.4. Samples: 781118. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-08-16 08:50:30,314][00784] Avg episode reward: [(0, '17.750')] |
|
[2024-08-16 08:50:35,108][03942] Updated weights for policy 0, policy_version 770 (0.0019) |
|
[2024-08-16 08:50:35,307][00784] Fps is (10 sec: 4916.2, 60 sec: 4096.0, 300 sec: 3846.1). Total num frames: 3153920. Throughput: 0: 1012.8. Samples: 788218. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-08-16 08:50:35,309][00784] Avg episode reward: [(0, '17.481')] |
|
[2024-08-16 08:50:40,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3832.2). Total num frames: 3166208. Throughput: 0: 1015.9. Samples: 790998. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:50:40,313][00784] Avg episode reward: [(0, '18.414')] |
|
[2024-08-16 08:50:45,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3846.1). Total num frames: 3186688. Throughput: 0: 956.0. Samples: 795380. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-08-16 08:50:45,314][00784] Avg episode reward: [(0, '18.023')] |
|
[2024-08-16 08:50:46,599][03942] Updated weights for policy 0, policy_version 780 (0.0026) |
|
[2024-08-16 08:50:50,307][00784] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3873.9). Total num frames: 3211264. Throughput: 0: 986.8. Samples: 802516. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:50:50,312][00784] Avg episode reward: [(0, '18.868')] |
|
[2024-08-16 08:50:55,307][00784] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3901.6). Total num frames: 3231744. Throughput: 0: 1024.9. Samples: 806140. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:50:55,309][00784] Avg episode reward: [(0, '17.981')] |
|
[2024-08-16 08:50:56,477][03942] Updated weights for policy 0, policy_version 790 (0.0018) |
|
[2024-08-16 08:51:00,311][00784] Fps is (10 sec: 3275.4, 60 sec: 3822.7, 300 sec: 3901.6). Total num frames: 3244032. Throughput: 0: 982.8. Samples: 810782. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-08-16 08:51:00,313][00784] Avg episode reward: [(0, '19.489')] |
|
[2024-08-16 08:51:05,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 3268608. Throughput: 0: 971.4. Samples: 817066. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:51:05,313][00784] Avg episode reward: [(0, '20.359')] |
|
[2024-08-16 08:51:06,653][03942] Updated weights for policy 0, policy_version 800 (0.0014) |
|
[2024-08-16 08:51:10,307][00784] Fps is (10 sec: 4917.2, 60 sec: 4096.0, 300 sec: 3915.5). Total num frames: 3293184. Throughput: 0: 1004.8. Samples: 820688. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:51:10,314][00784] Avg episode reward: [(0, '21.978')] |
|
[2024-08-16 08:51:10,320][03929] Saving new best policy, reward=21.978! |
|
[2024-08-16 08:51:15,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3891.4, 300 sec: 3901.6). Total num frames: 3305472. Throughput: 0: 1000.9. Samples: 826158. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:51:15,312][00784] Avg episode reward: [(0, '23.061')] |
|
[2024-08-16 08:51:15,330][03929] Saving new best policy, reward=23.061! |
|
[2024-08-16 08:51:18,178][03942] Updated weights for policy 0, policy_version 810 (0.0017) |
|
[2024-08-16 08:51:20,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 3325952. Throughput: 0: 957.6. Samples: 831312. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:51:20,313][00784] Avg episode reward: [(0, '23.970')] |
|
[2024-08-16 08:51:20,319][03929] Saving new best policy, reward=23.970! |
|
[2024-08-16 08:51:25,307][00784] Fps is (10 sec: 4505.6, 60 sec: 4096.1, 300 sec: 3915.5). Total num frames: 3350528. Throughput: 0: 973.1. Samples: 834788. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-08-16 08:51:25,310][00784] Avg episode reward: [(0, '23.346')] |
|
[2024-08-16 08:51:26,874][03942] Updated weights for policy 0, policy_version 820 (0.0023) |
|
[2024-08-16 08:51:30,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 3366912. Throughput: 0: 1025.9. Samples: 841546. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-08-16 08:51:30,311][00784] Avg episode reward: [(0, '22.431')] |
|
[2024-08-16 08:51:35,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3915.5). Total num frames: 3383296. Throughput: 0: 961.7. Samples: 845792. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:51:35,311][00784] Avg episode reward: [(0, '22.297')] |
|
[2024-08-16 08:51:38,551][03942] Updated weights for policy 0, policy_version 830 (0.0015) |
|
[2024-08-16 08:51:40,307][00784] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3915.5). Total num frames: 3407872. Throughput: 0: 957.7. Samples: 849236. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:51:40,315][00784] Avg episode reward: [(0, '19.617')] |
|
[2024-08-16 08:51:45,307][00784] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3901.6). Total num frames: 3428352. Throughput: 0: 1008.8. Samples: 856176. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-08-16 08:51:45,309][00784] Avg episode reward: [(0, '18.916')] |
|
[2024-08-16 08:51:49,155][03942] Updated weights for policy 0, policy_version 840 (0.0029) |
|
[2024-08-16 08:51:50,309][00784] Fps is (10 sec: 3276.1, 60 sec: 3822.8, 300 sec: 3901.6). Total num frames: 3440640. Throughput: 0: 974.8. Samples: 860932. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:51:50,312][00784] Avg episode reward: [(0, '19.216')] |
|
[2024-08-16 08:51:55,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3915.5). Total num frames: 3465216. Throughput: 0: 950.8. Samples: 863472. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:51:55,309][00784] Avg episode reward: [(0, '19.505')] |
|
[2024-08-16 08:51:58,689][03942] Updated weights for policy 0, policy_version 850 (0.0029) |
|
[2024-08-16 08:52:00,307][00784] Fps is (10 sec: 4916.1, 60 sec: 4096.3, 300 sec: 3915.5). Total num frames: 3489792. Throughput: 0: 988.0. Samples: 870620. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:52:00,309][00784] Avg episode reward: [(0, '20.246')] |
|
[2024-08-16 08:52:05,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3901.6). Total num frames: 3502080. Throughput: 0: 1003.1. Samples: 876450. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-08-16 08:52:05,309][00784] Avg episode reward: [(0, '21.692')] |
|
[2024-08-16 08:52:05,362][03929] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000856_3506176.pth... |
|
[2024-08-16 08:52:05,563][03929] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000626_2564096.pth |
|
[2024-08-16 08:52:10,307][00784] Fps is (10 sec: 2867.3, 60 sec: 3754.7, 300 sec: 3901.6). Total num frames: 3518464. Throughput: 0: 970.7. Samples: 878468. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:52:10,309][00784] Avg episode reward: [(0, '21.568')] |
|
[2024-08-16 08:52:10,324][03942] Updated weights for policy 0, policy_version 860 (0.0019) |
|
[2024-08-16 08:52:15,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3901.6). Total num frames: 3543040. Throughput: 0: 961.7. Samples: 884824. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:52:15,312][00784] Avg episode reward: [(0, '22.703')] |
|
[2024-08-16 08:52:20,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3887.7). Total num frames: 3559424. Throughput: 0: 1001.2. Samples: 890848. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:52:20,309][00784] Avg episode reward: [(0, '22.195')] |
|
[2024-08-16 08:52:20,355][03942] Updated weights for policy 0, policy_version 870 (0.0043) |
|
[2024-08-16 08:52:25,311][00784] Fps is (10 sec: 2866.0, 60 sec: 3686.1, 300 sec: 3873.8). Total num frames: 3571712. Throughput: 0: 963.4. Samples: 892594. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:52:25,315][00784] Avg episode reward: [(0, '22.981')] |
|
[2024-08-16 08:52:30,307][00784] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3860.0). Total num frames: 3588096. Throughput: 0: 892.4. Samples: 896336. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-08-16 08:52:30,309][00784] Avg episode reward: [(0, '21.500')] |
|
[2024-08-16 08:52:33,100][03942] Updated weights for policy 0, policy_version 880 (0.0030) |
|
[2024-08-16 08:52:35,307][00784] Fps is (10 sec: 4097.7, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 3612672. Throughput: 0: 939.2. Samples: 903194. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-08-16 08:52:35,310][00784] Avg episode reward: [(0, '21.527')] |
|
[2024-08-16 08:52:40,307][00784] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 3633152. Throughput: 0: 961.6. Samples: 906746. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:52:40,313][00784] Avg episode reward: [(0, '22.071')] |
|
[2024-08-16 08:52:43,889][03942] Updated weights for policy 0, policy_version 890 (0.0021) |
|
[2024-08-16 08:52:45,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3873.8). Total num frames: 3649536. Throughput: 0: 912.4. Samples: 911676. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-08-16 08:52:45,313][00784] Avg episode reward: [(0, '22.451')] |
|
[2024-08-16 08:52:50,307][00784] Fps is (10 sec: 3686.4, 60 sec: 3823.1, 300 sec: 3860.0). Total num frames: 3670016. Throughput: 0: 912.8. Samples: 917526. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-08-16 08:52:50,313][00784] Avg episode reward: [(0, '23.074')] |
|
[2024-08-16 08:52:53,285][03942] Updated weights for policy 0, policy_version 900 (0.0026) |
|
[2024-08-16 08:52:55,307][00784] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3873.8). Total num frames: 3694592. Throughput: 0: 946.7. Samples: 921070. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:52:55,309][00784] Avg episode reward: [(0, '25.152')] |
|
[2024-08-16 08:52:55,323][03929] Saving new best policy, reward=25.152! |
|
[2024-08-16 08:53:00,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3887.7). Total num frames: 3710976. Throughput: 0: 939.1. Samples: 927084. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:53:00,313][00784] Avg episode reward: [(0, '24.542')] |
|
[2024-08-16 08:53:04,829][03942] Updated weights for policy 0, policy_version 910 (0.0026) |
|
[2024-08-16 08:53:05,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3860.0). Total num frames: 3727360. Throughput: 0: 912.2. Samples: 931896. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:53:05,314][00784] Avg episode reward: [(0, '24.068')] |
|
[2024-08-16 08:53:10,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3751936. Throughput: 0: 951.5. Samples: 935406. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:53:10,310][00784] Avg episode reward: [(0, '22.860')] |
|
[2024-08-16 08:53:13,960][03942] Updated weights for policy 0, policy_version 920 (0.0023) |
|
[2024-08-16 08:53:15,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3873.8). Total num frames: 3768320. Throughput: 0: 1021.8. Samples: 942318. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-08-16 08:53:15,313][00784] Avg episode reward: [(0, '22.226')] |
|
[2024-08-16 08:53:20,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3860.0). Total num frames: 3784704. Throughput: 0: 964.9. Samples: 946614. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2024-08-16 08:53:20,309][00784] Avg episode reward: [(0, '21.718')] |
|
[2024-08-16 08:53:25,000][03942] Updated weights for policy 0, policy_version 930 (0.0019) |
|
[2024-08-16 08:53:25,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3959.7, 300 sec: 3860.0). Total num frames: 3809280. Throughput: 0: 955.2. Samples: 949732. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:53:25,309][00784] Avg episode reward: [(0, '22.285')] |
|
[2024-08-16 08:53:30,307][00784] Fps is (10 sec: 4915.2, 60 sec: 4096.0, 300 sec: 3887.7). Total num frames: 3833856. Throughput: 0: 1005.4. Samples: 956918. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-08-16 08:53:30,309][00784] Avg episode reward: [(0, '21.483')] |
|
[2024-08-16 08:53:35,312][00784] Fps is (10 sec: 3684.5, 60 sec: 3890.9, 300 sec: 3873.8). Total num frames: 3846144. Throughput: 0: 990.9. Samples: 962122. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-08-16 08:53:35,321][00784] Avg episode reward: [(0, '21.682')] |
|
[2024-08-16 08:53:35,833][03942] Updated weights for policy 0, policy_version 940 (0.0027) |
|
[2024-08-16 08:53:40,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3866624. Throughput: 0: 961.1. Samples: 964320. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:53:40,313][00784] Avg episode reward: [(0, '20.808')] |
|
[2024-08-16 08:53:45,202][03942] Updated weights for policy 0, policy_version 950 (0.0016) |
|
[2024-08-16 08:53:45,307][00784] Fps is (10 sec: 4507.9, 60 sec: 4027.7, 300 sec: 3860.0). Total num frames: 3891200. Throughput: 0: 980.5. Samples: 971208. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-08-16 08:53:45,315][00784] Avg episode reward: [(0, '20.854')] |
|
[2024-08-16 08:53:50,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 3907584. Throughput: 0: 1012.4. Samples: 977452. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-08-16 08:53:50,310][00784] Avg episode reward: [(0, '21.905')] |
|
[2024-08-16 08:53:55,307][00784] Fps is (10 sec: 3276.8, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 3923968. Throughput: 0: 981.5. Samples: 979572. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-08-16 08:53:55,313][00784] Avg episode reward: [(0, '22.731')] |
|
[2024-08-16 08:53:56,498][03942] Updated weights for policy 0, policy_version 960 (0.0022) |
|
[2024-08-16 08:54:00,307][00784] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 3948544. Throughput: 0: 966.4. Samples: 985804. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-08-16 08:54:00,309][00784] Avg episode reward: [(0, '22.454')] |
|
[2024-08-16 08:54:05,307][00784] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3901.6). Total num frames: 3969024. Throughput: 0: 1030.0. Samples: 992962. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-08-16 08:54:05,315][00784] Avg episode reward: [(0, '21.641')] |
|
[2024-08-16 08:54:05,325][03929] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000969_3969024.pth... |
|
[2024-08-16 08:54:05,560][03929] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000740_3031040.pth |
|
[2024-08-16 08:54:05,660][03942] Updated weights for policy 0, policy_version 970 (0.0016) |
|
[2024-08-16 08:54:10,309][00784] Fps is (10 sec: 3685.5, 60 sec: 3891.0, 300 sec: 3915.5). Total num frames: 3985408. Throughput: 0: 1009.2. Samples: 995150. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-08-16 08:54:10,312][00784] Avg episode reward: [(0, '22.001')] |
|
[2024-08-16 08:54:15,032][00784] Component Batcher_0 stopped! |
|
[2024-08-16 08:54:15,032][03929] Stopping Batcher_0... |
|
[2024-08-16 08:54:15,038][03929] Loop batcher_evt_loop terminating... |
|
[2024-08-16 08:54:15,041][03929] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-08-16 08:54:15,093][00784] Component RolloutWorker_w7 stopped! |
|
[2024-08-16 08:54:15,098][00784] Component RolloutWorker_w1 stopped! |
|
[2024-08-16 08:54:15,098][03945] Stopping RolloutWorker_w1... |
|
[2024-08-16 08:54:15,101][00784] Component RolloutWorker_w2 stopped! |
|
[2024-08-16 08:54:15,104][03944] Stopping RolloutWorker_w2... |
|
[2024-08-16 08:54:15,107][03942] Weights refcount: 2 0 |
|
[2024-08-16 08:54:15,109][03945] Loop rollout_proc1_evt_loop terminating... |
|
[2024-08-16 08:54:15,093][03950] Stopping RolloutWorker_w7... |
|
[2024-08-16 08:54:15,124][03950] Loop rollout_proc7_evt_loop terminating... |
|
[2024-08-16 08:54:15,126][00784] Component InferenceWorker_p0-w0 stopped! |
|
[2024-08-16 08:54:15,105][03944] Loop rollout_proc2_evt_loop terminating... |
|
[2024-08-16 08:54:15,128][03942] Stopping InferenceWorker_p0-w0... |
|
[2024-08-16 08:54:15,129][03942] Loop inference_proc0-0_evt_loop terminating... |
|
[2024-08-16 08:54:15,138][00784] Component RolloutWorker_w4 stopped! |
|
[2024-08-16 08:54:15,140][03946] Stopping RolloutWorker_w4... |
|
[2024-08-16 08:54:15,144][03946] Loop rollout_proc4_evt_loop terminating... |
|
[2024-08-16 08:54:15,148][00784] Component RolloutWorker_w0 stopped! |
|
[2024-08-16 08:54:15,153][03947] Stopping RolloutWorker_w3... |
|
[2024-08-16 08:54:15,154][03947] Loop rollout_proc3_evt_loop terminating... |
|
[2024-08-16 08:54:15,153][00784] Component RolloutWorker_w3 stopped! |
|
[2024-08-16 08:54:15,162][00784] Component RolloutWorker_w6 stopped! |
|
[2024-08-16 08:54:15,164][03949] Stopping RolloutWorker_w6... |
|
[2024-08-16 08:54:15,150][03943] Stopping RolloutWorker_w0... |
|
[2024-08-16 08:54:15,175][03949] Loop rollout_proc6_evt_loop terminating... |
|
[2024-08-16 08:54:15,176][03943] Loop rollout_proc0_evt_loop terminating... |
|
[2024-08-16 08:54:15,205][03948] Stopping RolloutWorker_w5... |
|
[2024-08-16 08:54:15,206][03948] Loop rollout_proc5_evt_loop terminating... |
|
[2024-08-16 08:54:15,205][00784] Component RolloutWorker_w5 stopped! |
|
[2024-08-16 08:54:15,229][03929] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000856_3506176.pth |
|
[2024-08-16 08:54:15,244][03929] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-08-16 08:54:15,512][00784] Component LearnerWorker_p0 stopped! |
|
[2024-08-16 08:54:15,515][00784] Waiting for process learner_proc0 to stop... |
|
[2024-08-16 08:54:15,517][03929] Stopping LearnerWorker_p0... |
|
[2024-08-16 08:54:15,517][03929] Loop learner_proc0_evt_loop terminating... |
|
[2024-08-16 08:54:16,939][00784] Waiting for process inference_proc0-0 to join... |
|
[2024-08-16 08:54:17,172][00784] Waiting for process rollout_proc0 to join... |
|
[2024-08-16 08:54:18,432][00784] Waiting for process rollout_proc1 to join... |
|
[2024-08-16 08:54:18,437][00784] Waiting for process rollout_proc2 to join... |
|
[2024-08-16 08:54:18,441][00784] Waiting for process rollout_proc3 to join... |
|
[2024-08-16 08:54:18,446][00784] Waiting for process rollout_proc4 to join... |
|
[2024-08-16 08:54:18,449][00784] Waiting for process rollout_proc5 to join... |
|
[2024-08-16 08:54:18,453][00784] Waiting for process rollout_proc6 to join... |
|
[2024-08-16 08:54:18,457][00784] Waiting for process rollout_proc7 to join... |
|
[2024-08-16 08:54:18,460][00784] Batcher 0 profile tree view: |
|
batching: 29.7697, releasing_batches: 0.0257 |
|
[2024-08-16 08:54:18,464][00784] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0100 |
|
wait_policy_total: 408.0159 |
|
update_model: 7.4290 |
|
weight_update: 0.0019 |
|
one_step: 0.0043 |
|
handle_policy_step: 603.5794 |
|
deserialize: 14.5886, stack: 3.0811, obs_to_device_normalize: 130.6681, forward: 277.1909, send_messages: 29.3243 |
|
prepare_outputs: 120.1353 |
|
to_cpu: 87.1566 |
|
[2024-08-16 08:54:18,465][00784] Learner 0 profile tree view: |
|
misc: 0.0055, prepare_batch: 16.9176 |
|
train: 76.0566 |
|
epoch_init: 0.0227, minibatch_init: 0.0065, losses_postprocess: 0.7486, kl_divergence: 0.6916, after_optimizer: 34.5623 |
|
calculate_losses: 24.9520 |
|
losses_init: 0.0061, forward_head: 1.7244, bptt_initial: 15.8375, tail: 1.0730, advantages_returns: 0.2597, losses: 3.4642 |
|
bptt: 2.2895 |
|
bptt_forward_core: 2.1971 |
|
update: 14.3848 |
|
clip: 1.4621 |
|
[2024-08-16 08:54:18,469][00784] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.3010, enqueue_policy_requests: 99.5789, env_step: 830.0814, overhead: 13.5500, complete_rollouts: 7.2137 |
|
save_policy_outputs: 25.2000 |
|
split_output_tensors: 8.3764 |
|
[2024-08-16 08:54:18,470][00784] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.4212, enqueue_policy_requests: 100.6676, env_step: 830.1248, overhead: 13.1825, complete_rollouts: 6.4185 |
|
save_policy_outputs: 24.4080 |
|
split_output_tensors: 8.4258 |
|
[2024-08-16 08:54:18,474][00784] Loop Runner_EvtLoop terminating... |
|
[2024-08-16 08:54:18,475][00784] Runner profile tree view: |
|
main_loop: 1089.5989 |
|
[2024-08-16 08:54:18,476][00784] Collected {0: 4005888}, FPS: 3676.5 |
|
[2024-08-16 08:58:46,821][00784] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2024-08-16 08:58:46,822][00784] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2024-08-16 08:58:46,824][00784] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2024-08-16 08:58:46,828][00784] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2024-08-16 08:58:46,829][00784] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2024-08-16 08:58:46,831][00784] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2024-08-16 08:58:46,833][00784] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2024-08-16 08:58:46,836][00784] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2024-08-16 08:58:46,837][00784] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2024-08-16 08:58:46,838][00784] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2024-08-16 08:58:46,839][00784] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2024-08-16 08:58:46,840][00784] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2024-08-16 08:58:46,842][00784] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2024-08-16 08:58:46,843][00784] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2024-08-16 08:58:46,844][00784] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2024-08-16 08:58:46,866][00784] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-08-16 08:58:46,870][00784] RunningMeanStd input shape: (3, 72, 128) |
|
[2024-08-16 08:58:46,873][00784] RunningMeanStd input shape: (1,) |
|
[2024-08-16 08:58:46,888][00784] ConvEncoder: input_channels=3 |
|
[2024-08-16 08:58:47,042][00784] Conv encoder output size: 512 |
|
[2024-08-16 08:58:47,044][00784] Policy head output size: 512 |
|
[2024-08-16 08:58:48,759][00784] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-08-16 08:58:49,622][00784] Num frames 100... |
|
[2024-08-16 08:58:49,739][00784] Num frames 200... |
|
[2024-08-16 08:58:49,861][00784] Num frames 300... |
|
[2024-08-16 08:58:49,991][00784] Num frames 400... |
|
[2024-08-16 08:58:50,101][00784] Avg episode rewards: #0: 5.480, true rewards: #0: 4.480 |
|
[2024-08-16 08:58:50,102][00784] Avg episode reward: 5.480, avg true_objective: 4.480 |
|
[2024-08-16 08:58:50,165][00784] Num frames 500... |
|
[2024-08-16 08:58:50,281][00784] Num frames 600... |
|
[2024-08-16 08:58:50,346][00784] Avg episode rewards: #0: 4.040, true rewards: #0: 3.040 |
|
[2024-08-16 08:58:50,347][00784] Avg episode reward: 4.040, avg true_objective: 3.040 |
|
[2024-08-16 08:58:50,457][00784] Num frames 700... |
|
[2024-08-16 08:58:50,577][00784] Num frames 800... |
|
[2024-08-16 08:58:50,698][00784] Num frames 900... |
|
[2024-08-16 08:58:50,817][00784] Num frames 1000... |
|
[2024-08-16 08:58:50,935][00784] Num frames 1100... |
|
[2024-08-16 08:58:51,076][00784] Num frames 1200... |
|
[2024-08-16 08:58:51,193][00784] Num frames 1300... |
|
[2024-08-16 08:58:51,312][00784] Num frames 1400... |
|
[2024-08-16 08:58:51,430][00784] Num frames 1500... |
|
[2024-08-16 08:58:51,550][00784] Num frames 1600... |
|
[2024-08-16 08:58:51,670][00784] Num frames 1700... |
|
[2024-08-16 08:58:51,788][00784] Avg episode rewards: #0: 13.177, true rewards: #0: 5.843 |
|
[2024-08-16 08:58:51,791][00784] Avg episode reward: 13.177, avg true_objective: 5.843 |
|
[2024-08-16 08:58:51,846][00784] Num frames 1800... |
|
[2024-08-16 08:58:51,970][00784] Num frames 1900... |
|
[2024-08-16 08:58:52,091][00784] Num frames 2000... |
|
[2024-08-16 08:58:52,215][00784] Num frames 2100... |
|
[2024-08-16 08:58:52,315][00784] Avg episode rewards: #0: 11.843, true rewards: #0: 5.342
[2024-08-16 08:58:52,316][00784] Avg episode reward: 11.843, avg true_objective: 5.342
[2024-08-16 08:58:52,390][00784] Num frames 2200...
[2024-08-16 08:58:52,515][00784] Num frames 2300...
[2024-08-16 08:58:52,650][00784] Num frames 2400...
[2024-08-16 08:58:52,766][00784] Num frames 2500...
[2024-08-16 08:58:52,881][00784] Num frames 2600...
[2024-08-16 08:58:53,019][00784] Num frames 2700...
[2024-08-16 08:58:53,143][00784] Num frames 2800...
[2024-08-16 08:58:53,257][00784] Num frames 2900...
[2024-08-16 08:58:53,378][00784] Num frames 3000...
[2024-08-16 08:58:53,497][00784] Num frames 3100...
[2024-08-16 08:58:53,623][00784] Num frames 3200...
[2024-08-16 08:58:53,760][00784] Num frames 3300...
[2024-08-16 08:58:53,878][00784] Num frames 3400...
[2024-08-16 08:58:54,004][00784] Num frames 3500...
[2024-08-16 08:58:54,133][00784] Num frames 3600...
[2024-08-16 08:58:54,210][00784] Avg episode rewards: #0: 17.036, true rewards: #0: 7.236
[2024-08-16 08:58:54,211][00784] Avg episode reward: 17.036, avg true_objective: 7.236
[2024-08-16 08:58:54,308][00784] Num frames 3700...
[2024-08-16 08:58:54,426][00784] Num frames 3800...
[2024-08-16 08:58:54,548][00784] Num frames 3900...
[2024-08-16 08:58:54,669][00784] Num frames 4000...
[2024-08-16 08:58:54,787][00784] Num frames 4100...
[2024-08-16 08:58:54,910][00784] Num frames 4200...
[2024-08-16 08:58:55,035][00784] Num frames 4300...
[2024-08-16 08:58:55,163][00784] Num frames 4400...
[2024-08-16 08:58:55,284][00784] Num frames 4500...
[2024-08-16 08:58:55,403][00784] Num frames 4600...
[2024-08-16 08:58:55,523][00784] Num frames 4700...
[2024-08-16 08:58:55,646][00784] Num frames 4800...
[2024-08-16 08:58:55,795][00784] Num frames 4900...
[2024-08-16 08:58:55,966][00784] Num frames 5000...
[2024-08-16 08:58:56,198][00784] Avg episode rewards: #0: 20.160, true rewards: #0: 8.493
[2024-08-16 08:58:56,200][00784] Avg episode reward: 20.160, avg true_objective: 8.493
[2024-08-16 08:58:56,212][00784] Num frames 5100...
[2024-08-16 08:58:56,368][00784] Num frames 5200...
[2024-08-16 08:58:56,533][00784] Num frames 5300...
[2024-08-16 08:58:56,694][00784] Num frames 5400...
[2024-08-16 08:58:56,852][00784] Num frames 5500...
[2024-08-16 08:58:57,041][00784] Num frames 5600...
[2024-08-16 08:58:57,214][00784] Num frames 5700...
[2024-08-16 08:58:57,382][00784] Num frames 5800...
[2024-08-16 08:58:57,554][00784] Num frames 5900...
[2024-08-16 08:58:57,726][00784] Num frames 6000...
[2024-08-16 08:58:57,898][00784] Num frames 6100...
[2024-08-16 08:58:58,066][00784] Num frames 6200...
[2024-08-16 08:58:58,237][00784] Num frames 6300...
[2024-08-16 08:58:58,354][00784] Num frames 6400...
[2024-08-16 08:58:58,507][00784] Avg episode rewards: #0: 22.261, true rewards: #0: 9.261
[2024-08-16 08:58:58,510][00784] Avg episode reward: 22.261, avg true_objective: 9.261
[2024-08-16 08:58:58,533][00784] Num frames 6500...
[2024-08-16 08:58:58,651][00784] Num frames 6600...
[2024-08-16 08:58:58,766][00784] Num frames 6700...
[2024-08-16 08:58:58,888][00784] Num frames 6800...
[2024-08-16 08:58:59,011][00784] Num frames 6900...
[2024-08-16 08:58:59,130][00784] Num frames 7000...
[2024-08-16 08:58:59,254][00784] Num frames 7100...
[2024-08-16 08:58:59,370][00784] Num frames 7200...
[2024-08-16 08:58:59,495][00784] Num frames 7300...
[2024-08-16 08:58:59,616][00784] Num frames 7400...
[2024-08-16 08:58:59,735][00784] Num frames 7500...
[2024-08-16 08:58:59,854][00784] Num frames 7600...
[2024-08-16 08:58:59,979][00784] Num frames 7700...
[2024-08-16 08:59:00,100][00784] Num frames 7800...
[2024-08-16 08:59:00,218][00784] Num frames 7900...
[2024-08-16 08:59:00,345][00784] Num frames 8000...
[2024-08-16 08:59:00,502][00784] Avg episode rewards: #0: 24.354, true rewards: #0: 10.104
[2024-08-16 08:59:00,504][00784] Avg episode reward: 24.354, avg true_objective: 10.104
[2024-08-16 08:59:00,528][00784] Num frames 8100...
[2024-08-16 08:59:00,649][00784] Num frames 8200...
[2024-08-16 08:59:00,766][00784] Num frames 8300...
[2024-08-16 08:59:00,889][00784] Num frames 8400...
[2024-08-16 08:59:01,013][00784] Num frames 8500...
[2024-08-16 08:59:01,130][00784] Num frames 8600...
[2024-08-16 08:59:01,259][00784] Num frames 8700...
[2024-08-16 08:59:01,334][00784] Avg episode rewards: #0: 23.018, true rewards: #0: 9.684
[2024-08-16 08:59:01,336][00784] Avg episode reward: 23.018, avg true_objective: 9.684
[2024-08-16 08:59:01,436][00784] Num frames 8800...
[2024-08-16 08:59:01,557][00784] Num frames 8900...
[2024-08-16 08:59:01,677][00784] Num frames 9000...
[2024-08-16 08:59:01,796][00784] Num frames 9100...
[2024-08-16 08:59:01,915][00784] Num frames 9200...
[2024-08-16 08:59:02,045][00784] Num frames 9300...
[2024-08-16 08:59:02,166][00784] Num frames 9400...
[2024-08-16 08:59:02,292][00784] Num frames 9500...
[2024-08-16 08:59:02,414][00784] Num frames 9600...
[2024-08-16 08:59:02,533][00784] Num frames 9700...
[2024-08-16 08:59:02,654][00784] Num frames 9800...
[2024-08-16 08:59:02,772][00784] Num frames 9900...
[2024-08-16 08:59:02,890][00784] Num frames 10000...
[2024-08-16 08:59:03,016][00784] Num frames 10100...
[2024-08-16 08:59:03,135][00784] Num frames 10200...
[2024-08-16 08:59:03,256][00784] Num frames 10300...
[2024-08-16 08:59:03,385][00784] Num frames 10400...
[2024-08-16 08:59:03,504][00784] Num frames 10500...
[2024-08-16 08:59:03,625][00784] Num frames 10600...
[2024-08-16 08:59:03,744][00784] Num frames 10700...
[2024-08-16 08:59:03,868][00784] Num frames 10800...
[2024-08-16 08:59:03,943][00784] Avg episode rewards: #0: 26.016, true rewards: #0: 10.816
[2024-08-16 08:59:03,944][00784] Avg episode reward: 26.016, avg true_objective: 10.816
[2024-08-16 09:00:05,619][00784] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
[2024-08-16 09:02:19,147][00784] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2024-08-16 09:02:19,149][00784] Overriding arg 'num_workers' with value 1 passed from command line
[2024-08-16 09:02:19,151][00784] Adding new argument 'no_render'=True that is not in the saved config file!
[2024-08-16 09:02:19,153][00784] Adding new argument 'save_video'=True that is not in the saved config file!
[2024-08-16 09:02:19,155][00784] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2024-08-16 09:02:19,157][00784] Adding new argument 'video_name'=None that is not in the saved config file!
[2024-08-16 09:02:19,159][00784] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2024-08-16 09:02:19,160][00784] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2024-08-16 09:02:19,161][00784] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2024-08-16 09:02:19,162][00784] Adding new argument 'hf_repository'='jimjiang203/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2024-08-16 09:02:19,163][00784] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2024-08-16 09:02:19,164][00784] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2024-08-16 09:02:19,165][00784] Adding new argument 'train_script'=None that is not in the saved config file!
[2024-08-16 09:02:19,166][00784] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2024-08-16 09:02:19,167][00784] Using frameskip 1 and render_action_repeat=4 for evaluation
[2024-08-16 09:02:19,176][00784] RunningMeanStd input shape: (3, 72, 128)
[2024-08-16 09:02:19,181][00784] RunningMeanStd input shape: (1,)
[2024-08-16 09:02:19,195][00784] ConvEncoder: input_channels=3
[2024-08-16 09:02:19,230][00784] Conv encoder output size: 512
[2024-08-16 09:02:19,232][00784] Policy head output size: 512
[2024-08-16 09:02:19,252][00784] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2024-08-16 09:02:19,727][00784] Num frames 100...
[2024-08-16 09:02:19,843][00784] Num frames 200...
[2024-08-16 09:02:19,978][00784] Num frames 300...
[2024-08-16 09:02:20,113][00784] Num frames 400...
[2024-08-16 09:02:20,252][00784] Num frames 500...
[2024-08-16 09:02:20,425][00784] Num frames 600...
[2024-08-16 09:02:20,598][00784] Num frames 700...
[2024-08-16 09:02:20,761][00784] Num frames 800...
[2024-08-16 09:02:20,916][00784] Num frames 900...
[2024-08-16 09:02:21,081][00784] Num frames 1000...
[2024-08-16 09:02:21,265][00784] Avg episode rewards: #0: 21.700, true rewards: #0: 10.700
[2024-08-16 09:02:21,267][00784] Avg episode reward: 21.700, avg true_objective: 10.700
[2024-08-16 09:02:21,319][00784] Num frames 1100...
[2024-08-16 09:02:21,484][00784] Num frames 1200...
[2024-08-16 09:02:21,651][00784] Num frames 1300...
[2024-08-16 09:02:21,811][00784] Num frames 1400...
[2024-08-16 09:02:21,981][00784] Num frames 1500...
[2024-08-16 09:02:22,165][00784] Num frames 1600...
[2024-08-16 09:02:22,331][00784] Num frames 1700...
[2024-08-16 09:02:22,506][00784] Num frames 1800...
[2024-08-16 09:02:22,680][00784] Num frames 1900...
[2024-08-16 09:02:22,813][00784] Num frames 2000...
[2024-08-16 09:02:22,939][00784] Num frames 2100...
[2024-08-16 09:02:23,049][00784] Avg episode rewards: #0: 22.705, true rewards: #0: 10.705
[2024-08-16 09:02:23,051][00784] Avg episode reward: 22.705, avg true_objective: 10.705
[2024-08-16 09:02:23,121][00784] Num frames 2200...
[2024-08-16 09:02:23,246][00784] Num frames 2300...
[2024-08-16 09:02:23,364][00784] Num frames 2400...
[2024-08-16 09:02:23,484][00784] Num frames 2500...
[2024-08-16 09:02:23,621][00784] Num frames 2600...
[2024-08-16 09:02:23,739][00784] Num frames 2700...
[2024-08-16 09:02:23,890][00784] Avg episode rewards: #0: 19.270, true rewards: #0: 9.270
[2024-08-16 09:02:23,891][00784] Avg episode reward: 19.270, avg true_objective: 9.270
[2024-08-16 09:02:23,918][00784] Num frames 2800...
[2024-08-16 09:02:24,044][00784] Num frames 2900...
[2024-08-16 09:02:24,163][00784] Num frames 3000...
[2024-08-16 09:02:24,288][00784] Num frames 3100...
[2024-08-16 09:02:24,405][00784] Num frames 3200...
[2024-08-16 09:02:24,537][00784] Num frames 3300...
[2024-08-16 09:02:24,663][00784] Num frames 3400...
[2024-08-16 09:02:24,778][00784] Num frames 3500...
[2024-08-16 09:02:24,895][00784] Num frames 3600...
[2024-08-16 09:02:25,022][00784] Num frames 3700...
[2024-08-16 09:02:25,135][00784] Avg episode rewards: #0: 20.368, true rewards: #0: 9.367
[2024-08-16 09:02:25,138][00784] Avg episode reward: 20.368, avg true_objective: 9.367
[2024-08-16 09:02:25,204][00784] Num frames 3800...
[2024-08-16 09:02:25,330][00784] Num frames 3900...
[2024-08-16 09:02:25,447][00784] Num frames 4000...
[2024-08-16 09:02:25,566][00784] Num frames 4100...
[2024-08-16 09:02:25,685][00784] Num frames 4200...
[2024-08-16 09:02:25,800][00784] Num frames 4300...
[2024-08-16 09:02:25,917][00784] Num frames 4400...
[2024-08-16 09:02:26,045][00784] Num frames 4500...
[2024-08-16 09:02:26,158][00784] Num frames 4600...
[2024-08-16 09:02:26,283][00784] Num frames 4700...
[2024-08-16 09:02:26,403][00784] Num frames 4800...
[2024-08-16 09:02:26,523][00784] Num frames 4900...
[2024-08-16 09:02:26,639][00784] Num frames 5000...
[2024-08-16 09:02:26,760][00784] Num frames 5100...
[2024-08-16 09:02:26,878][00784] Num frames 5200...
[2024-08-16 09:02:26,957][00784] Avg episode rewards: #0: 23.438, true rewards: #0: 10.438
[2024-08-16 09:02:26,959][00784] Avg episode reward: 23.438, avg true_objective: 10.438
[2024-08-16 09:02:27,055][00784] Num frames 5300...
[2024-08-16 09:02:27,177][00784] Num frames 5400...
[2024-08-16 09:02:27,303][00784] Num frames 5500...
[2024-08-16 09:02:27,425][00784] Num frames 5600...
[2024-08-16 09:02:27,550][00784] Num frames 5700...
[2024-08-16 09:02:27,669][00784] Num frames 5800...
[2024-08-16 09:02:27,788][00784] Num frames 5900...
[2024-08-16 09:02:27,905][00784] Num frames 6000...
[2024-08-16 09:02:28,066][00784] Avg episode rewards: #0: 22.473, true rewards: #0: 10.140
[2024-08-16 09:02:28,067][00784] Avg episode reward: 22.473, avg true_objective: 10.140
[2024-08-16 09:02:28,094][00784] Num frames 6100...
[2024-08-16 09:02:28,210][00784] Num frames 6200...
[2024-08-16 09:02:28,335][00784] Num frames 6300...
[2024-08-16 09:02:28,454][00784] Num frames 6400...
[2024-08-16 09:02:28,572][00784] Num frames 6500...
[2024-08-16 09:02:28,693][00784] Num frames 6600...
[2024-08-16 09:02:28,813][00784] Num frames 6700...
[2024-08-16 09:02:28,935][00784] Num frames 6800...
[2024-08-16 09:02:29,021][00784] Avg episode rewards: #0: 21.172, true rewards: #0: 9.743
[2024-08-16 09:02:29,023][00784] Avg episode reward: 21.172, avg true_objective: 9.743
[2024-08-16 09:02:29,119][00784] Num frames 6900...
[2024-08-16 09:02:29,238][00784] Num frames 7000...
[2024-08-16 09:02:29,370][00784] Num frames 7100...
[2024-08-16 09:02:29,490][00784] Num frames 7200...
[2024-08-16 09:02:29,612][00784] Num frames 7300...
[2024-08-16 09:02:29,731][00784] Num frames 7400...
[2024-08-16 09:02:29,847][00784] Num frames 7500...
[2024-08-16 09:02:29,977][00784] Num frames 7600...
[2024-08-16 09:02:30,100][00784] Num frames 7700...
[2024-08-16 09:02:30,225][00784] Num frames 7800...
[2024-08-16 09:02:30,356][00784] Num frames 7900...
[2024-08-16 09:02:30,473][00784] Num frames 8000...
[2024-08-16 09:02:30,595][00784] Num frames 8100...
[2024-08-16 09:02:30,713][00784] Num frames 8200...
[2024-08-16 09:02:30,835][00784] Num frames 8300...
[2024-08-16 09:02:30,933][00784] Avg episode rewards: #0: 23.795, true rewards: #0: 10.420
[2024-08-16 09:02:30,936][00784] Avg episode reward: 23.795, avg true_objective: 10.420
[2024-08-16 09:02:31,018][00784] Num frames 8400...
[2024-08-16 09:02:31,140][00784] Num frames 8500...
[2024-08-16 09:02:31,255][00784] Num frames 8600...
[2024-08-16 09:02:31,379][00784] Num frames 8700...
[2024-08-16 09:02:31,497][00784] Num frames 8800...
[2024-08-16 09:02:31,617][00784] Num frames 8900...
[2024-08-16 09:02:31,736][00784] Num frames 9000...
[2024-08-16 09:02:31,854][00784] Num frames 9100...
[2024-08-16 09:02:31,978][00784] Num frames 9200...
[2024-08-16 09:02:32,054][00784] Avg episode rewards: #0: 23.462, true rewards: #0: 10.240
[2024-08-16 09:02:32,057][00784] Avg episode reward: 23.462, avg true_objective: 10.240
[2024-08-16 09:02:32,161][00784] Num frames 9300...
[2024-08-16 09:02:32,278][00784] Num frames 9400...
[2024-08-16 09:02:32,406][00784] Num frames 9500...
[2024-08-16 09:02:32,526][00784] Num frames 9600...
[2024-08-16 09:02:32,644][00784] Num frames 9700...
[2024-08-16 09:02:32,739][00784] Avg episode rewards: #0: 22.333, true rewards: #0: 9.733
[2024-08-16 09:02:32,741][00784] Avg episode reward: 22.333, avg true_objective: 9.733
[2024-08-16 09:03:29,316][00784] Replay video saved to /content/train_dir/default_experiment/replay.mp4!