diff --git "a/sf_log.txt" "b/sf_log.txt"
new file mode 100644
--- /dev/null
+++ "b/sf_log.txt"
@@ -0,0 +1,973 @@
+[2023-02-27 10:46:05,198][00394] Saving configuration to /content/train_dir/default_experiment/config.json...
+[2023-02-27 10:46:05,201][00394] Rollout worker 0 uses device cpu
+[2023-02-27 10:46:05,204][00394] Rollout worker 1 uses device cpu
+[2023-02-27 10:46:05,205][00394] Rollout worker 2 uses device cpu
+[2023-02-27 10:46:05,207][00394] Rollout worker 3 uses device cpu
+[2023-02-27 10:46:05,209][00394] Rollout worker 4 uses device cpu
+[2023-02-27 10:46:05,210][00394] Rollout worker 5 uses device cpu
+[2023-02-27 10:46:05,211][00394] Rollout worker 6 uses device cpu
+[2023-02-27 10:46:05,213][00394] Rollout worker 7 uses device cpu
+[2023-02-27 10:46:05,405][00394] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-27 10:46:05,407][00394] InferenceWorker_p0-w0: min num requests: 2
+[2023-02-27 10:46:05,440][00394] Starting all processes...
+[2023-02-27 10:46:05,441][00394] Starting process learner_proc0
+[2023-02-27 10:46:05,499][00394] Starting all processes...
+[2023-02-27 10:46:05,507][00394] Starting process inference_proc0-0
+[2023-02-27 10:46:05,508][00394] Starting process rollout_proc0
+[2023-02-27 10:46:05,510][00394] Starting process rollout_proc1
+[2023-02-27 10:46:05,510][00394] Starting process rollout_proc2
+[2023-02-27 10:46:05,510][00394] Starting process rollout_proc3
+[2023-02-27 10:46:05,510][00394] Starting process rollout_proc4
+[2023-02-27 10:46:05,510][00394] Starting process rollout_proc5
+[2023-02-27 10:46:05,510][00394] Starting process rollout_proc6
+[2023-02-27 10:46:05,510][00394] Starting process rollout_proc7
+[2023-02-27 10:46:14,657][11881] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-27 10:46:14,658][11881] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
+[2023-02-27 10:46:15,299][11881] Num visible devices: 1
+[2023-02-27 10:46:15,305][11899] Worker 4 uses CPU cores [0]
+[2023-02-27 10:46:15,323][11881] Starting seed is not provided
+[2023-02-27 10:46:15,324][11881] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-27 10:46:15,324][11881] Initializing actor-critic model on device cuda:0
+[2023-02-27 10:46:15,325][11881] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-27 10:46:15,327][11881] RunningMeanStd input shape: (1,)
+[2023-02-27 10:46:15,444][11881] ConvEncoder: input_channels=3
+[2023-02-27 10:46:15,446][11897] Worker 1 uses CPU cores [1]
+[2023-02-27 10:46:15,510][11901] Worker 5 uses CPU cores [1]
+[2023-02-27 10:46:15,514][11900] Worker 3 uses CPU cores [1]
+[2023-02-27 10:46:15,538][11895] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-27 10:46:15,538][11895] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
+[2023-02-27 10:46:15,539][11898] Worker 2 uses CPU cores [0]
+[2023-02-27 10:46:15,558][11895] Num visible devices: 1
+[2023-02-27 10:46:15,560][11902] Worker 6 uses CPU cores [0]
+[2023-02-27 10:46:15,596][11903] Worker 7 uses CPU cores [1]
+[2023-02-27 10:46:15,712][11896] Worker 0 uses CPU cores [0]
+[2023-02-27 10:46:15,919][11881] Conv encoder output size: 512
+[2023-02-27 10:46:15,920][11881] Policy head output size: 512
+[2023-02-27 10:46:15,978][11881] Created Actor Critic model with architecture:
+[2023-02-27 10:46:15,978][11881] ActorCriticSharedWeights(
+  (obs_normalizer): ObservationNormalizer(
+    (running_mean_std): RunningMeanStdDictInPlace(
+      (running_mean_std): ModuleDict(
+        (obs): RunningMeanStdInPlace()
+      )
+    )
+  )
+  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+  (encoder): VizdoomEncoder(
+    (basic_encoder): ConvEncoder(
+      (enc): RecursiveScriptModule(
+        original_name=ConvEncoderImpl
+        (conv_head): RecursiveScriptModule(
+          original_name=Sequential
+          (0): RecursiveScriptModule(original_name=Conv2d)
+          (1): RecursiveScriptModule(original_name=ELU)
+          (2): RecursiveScriptModule(original_name=Conv2d)
+          (3): RecursiveScriptModule(original_name=ELU)
+          (4): RecursiveScriptModule(original_name=Conv2d)
+          (5): RecursiveScriptModule(original_name=ELU)
+        )
+        (mlp_layers): RecursiveScriptModule(
+          original_name=Sequential
+          (0): RecursiveScriptModule(original_name=Linear)
+          (1): RecursiveScriptModule(original_name=ELU)
+        )
+      )
+    )
+  )
+  (core): ModelCoreRNN(
+    (core): GRU(512, 512)
+  )
+  (decoder): MlpDecoder(
+    (mlp): Identity()
+  )
+  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+  (action_parameterization): ActionParameterizationDefault(
+    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
+  )
+)
+[2023-02-27 10:46:23,492][11881] Using optimizer
+[2023-02-27 10:46:23,493][11881] No checkpoints found
+[2023-02-27 10:46:23,494][11881] Did not load from checkpoint, starting from scratch!
+[2023-02-27 10:46:23,494][11881] Initialized policy 0 weights for model version 0
+[2023-02-27 10:46:23,497][11881] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-02-27 10:46:23,505][11881] LearnerWorker_p0 finished initialization!
+[2023-02-27 10:46:23,688][11895] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-27 10:46:23,689][11895] RunningMeanStd input shape: (1,)
+[2023-02-27 10:46:23,702][11895] ConvEncoder: input_channels=3
+[2023-02-27 10:46:23,801][11895] Conv encoder output size: 512
+[2023-02-27 10:46:23,801][11895] Policy head output size: 512
+[2023-02-27 10:46:25,091][00394] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-27 10:46:25,398][00394] Heartbeat connected on Batcher_0
+[2023-02-27 10:46:25,402][00394] Heartbeat connected on LearnerWorker_p0
+[2023-02-27 10:46:25,416][00394] Heartbeat connected on RolloutWorker_w0
+[2023-02-27 10:46:25,421][00394] Heartbeat connected on RolloutWorker_w1
+[2023-02-27 10:46:25,425][00394] Heartbeat connected on RolloutWorker_w2
+[2023-02-27 10:46:25,426][00394] Heartbeat connected on RolloutWorker_w3
+[2023-02-27 10:46:25,430][00394] Heartbeat connected on RolloutWorker_w4
+[2023-02-27 10:46:25,435][00394] Heartbeat connected on RolloutWorker_w5
+[2023-02-27 10:46:25,436][00394] Heartbeat connected on RolloutWorker_w6
+[2023-02-27 10:46:25,444][00394] Heartbeat connected on RolloutWorker_w7
+[2023-02-27 10:46:26,160][00394] Inference worker 0-0 is ready!
+[2023-02-27 10:46:26,162][00394] All inference workers are ready! Signal rollout workers to start!
+[2023-02-27 10:46:26,167][00394] Heartbeat connected on InferenceWorker_p0-w0
+[2023-02-27 10:46:26,294][11897] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-27 10:46:26,297][11903] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-27 10:46:26,338][11900] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-27 10:46:26,343][11901] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-27 10:46:26,346][11898] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-27 10:46:26,346][11896] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-27 10:46:26,351][11899] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-27 10:46:26,357][11902] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-27 10:46:27,217][11900] Decorrelating experience for 0 frames...
+[2023-02-27 10:46:27,217][11899] Decorrelating experience for 0 frames...
+[2023-02-27 10:46:27,218][11903] Decorrelating experience for 0 frames...
+[2023-02-27 10:46:27,215][11902] Decorrelating experience for 0 frames...
+[2023-02-27 10:46:27,602][11902] Decorrelating experience for 32 frames...
+[2023-02-27 10:46:28,027][11902] Decorrelating experience for 64 frames...
+[2023-02-27 10:46:28,194][11901] Decorrelating experience for 0 frames...
+[2023-02-27 10:46:28,207][11903] Decorrelating experience for 32 frames...
+[2023-02-27 10:46:28,226][11900] Decorrelating experience for 32 frames...
+[2023-02-27 10:46:29,138][11901] Decorrelating experience for 32 frames...
+[2023-02-27 10:46:29,153][11897] Decorrelating experience for 0 frames...
+[2023-02-27 10:46:29,446][11900] Decorrelating experience for 64 frames...
+[2023-02-27 10:46:29,448][11903] Decorrelating experience for 64 frames...
+[2023-02-27 10:46:30,091][00394] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-27 10:46:30,595][11897] Decorrelating experience for 32 frames...
+[2023-02-27 10:46:30,799][11901] Decorrelating experience for 64 frames...
+[2023-02-27 10:46:30,923][11903] Decorrelating experience for 96 frames...
+[2023-02-27 10:46:31,585][11902] Decorrelating experience for 96 frames...
+[2023-02-27 10:46:31,926][11897] Decorrelating experience for 64 frames...
+[2023-02-27 10:46:31,984][11901] Decorrelating experience for 96 frames...
+[2023-02-27 10:46:32,799][11900] Decorrelating experience for 96 frames...
+[2023-02-27 10:46:32,850][11897] Decorrelating experience for 96 frames...
+[2023-02-27 10:46:33,120][11899] Decorrelating experience for 32 frames...
+[2023-02-27 10:46:34,085][11896] Decorrelating experience for 0 frames...
+[2023-02-27 10:46:34,154][11899] Decorrelating experience for 64 frames...
+[2023-02-27 10:46:35,091][00394] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-27 10:46:35,156][11896] Decorrelating experience for 32 frames...
+[2023-02-27 10:46:35,350][11899] Decorrelating experience for 96 frames...
+[2023-02-27 10:46:36,140][11898] Decorrelating experience for 0 frames...
+[2023-02-27 10:46:36,239][11896] Decorrelating experience for 64 frames...
+[2023-02-27 10:46:37,860][11898] Decorrelating experience for 32 frames...
+[2023-02-27 10:46:38,301][11896] Decorrelating experience for 96 frames...
+[2023-02-27 10:46:40,091][00394] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 79.1. Samples: 1186. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-02-27 10:46:40,099][00394] Avg episode reward: [(0, '2.258')]
+[2023-02-27 10:46:40,606][11881] Signal inference workers to stop experience collection...
+[2023-02-27 10:46:40,624][11895] InferenceWorker_p0-w0: stopping experience collection
+[2023-02-27 10:46:40,693][11898] Decorrelating experience for 64 frames...
+[2023-02-27 10:46:41,062][11898] Decorrelating experience for 96 frames...
+[2023-02-27 10:46:42,875][11881] Signal inference workers to resume experience collection...
+[2023-02-27 10:46:42,877][11895] InferenceWorker_p0-w0: resuming experience collection
+[2023-02-27 10:46:45,091][00394] Fps is (10 sec: 1228.8, 60 sec: 614.4, 300 sec: 614.4). Total num frames: 12288. Throughput: 0: 176.9. Samples: 3538. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-02-27 10:46:45,094][00394] Avg episode reward: [(0, '3.078')]
+[2023-02-27 10:46:50,091][00394] Fps is (10 sec: 2867.2, 60 sec: 1146.9, 300 sec: 1146.9). Total num frames: 28672. Throughput: 0: 257.0. Samples: 6424. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:46:50,097][00394] Avg episode reward: [(0, '3.711')]
+[2023-02-27 10:46:53,617][11895] Updated weights for policy 0, policy_version 10 (0.0017)
+[2023-02-27 10:46:55,091][00394] Fps is (10 sec: 3276.7, 60 sec: 1501.9, 300 sec: 1501.9). Total num frames: 45056. Throughput: 0: 353.9. Samples: 10618. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 10:46:55,096][00394] Avg episode reward: [(0, '4.061')]
+[2023-02-27 10:47:00,091][00394] Fps is (10 sec: 3276.8, 60 sec: 1755.4, 300 sec: 1755.4). Total num frames: 61440. Throughput: 0: 441.3. Samples: 15446. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 10:47:00,093][00394] Avg episode reward: [(0, '4.278')]
+[2023-02-27 10:47:04,185][11895] Updated weights for policy 0, policy_version 20 (0.0013)
+[2023-02-27 10:47:05,091][00394] Fps is (10 sec: 3686.5, 60 sec: 2048.0, 300 sec: 2048.0). Total num frames: 81920. Throughput: 0: 469.9. Samples: 18794. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:47:05,095][00394] Avg episode reward: [(0, '4.538')]
+[2023-02-27 10:47:10,091][00394] Fps is (10 sec: 4096.0, 60 sec: 2275.6, 300 sec: 2275.6). Total num frames: 102400. Throughput: 0: 561.9. Samples: 25284. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:47:10,093][00394] Avg episode reward: [(0, '4.478')]
+[2023-02-27 10:47:10,106][11881] Saving new best policy, reward=4.478!
+[2023-02-27 10:47:15,091][00394] Fps is (10 sec: 3276.7, 60 sec: 2293.7, 300 sec: 2293.7). Total num frames: 114688. Throughput: 0: 654.9. Samples: 29470. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
+[2023-02-27 10:47:15,094][00394] Avg episode reward: [(0, '4.442')]
+[2023-02-27 10:47:16,689][11895] Updated weights for policy 0, policy_version 30 (0.0025)
+[2023-02-27 10:47:20,091][00394] Fps is (10 sec: 3276.8, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 135168. Throughput: 0: 702.0. Samples: 31592. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:47:20,094][00394] Avg episode reward: [(0, '4.419')]
+[2023-02-27 10:47:25,091][00394] Fps is (10 sec: 4096.0, 60 sec: 2594.1, 300 sec: 2594.1). Total num frames: 155648. Throughput: 0: 817.4. Samples: 37970. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:47:25,096][00394] Avg episode reward: [(0, '4.367')]
+[2023-02-27 10:47:26,944][11895] Updated weights for policy 0, policy_version 40 (0.0018)
+[2023-02-27 10:47:30,091][00394] Fps is (10 sec: 3276.8, 60 sec: 2798.9, 300 sec: 2583.6). Total num frames: 167936. Throughput: 0: 877.6. Samples: 43030. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 10:47:30,094][00394] Avg episode reward: [(0, '4.348')]
+[2023-02-27 10:47:35,092][00394] Fps is (10 sec: 2457.4, 60 sec: 3003.7, 300 sec: 2574.6). Total num frames: 180224. Throughput: 0: 851.2. Samples: 44728. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 10:47:35,095][00394] Avg episode reward: [(0, '4.427')]
+[2023-02-27 10:47:40,091][00394] Fps is (10 sec: 2457.6, 60 sec: 3208.5, 300 sec: 2566.8). Total num frames: 192512. Throughput: 0: 830.5. Samples: 47992. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:47:40,095][00394] Avg episode reward: [(0, '4.464')]
+[2023-02-27 10:47:43,270][11895] Updated weights for policy 0, policy_version 50 (0.0020)
+[2023-02-27 10:47:45,091][00394] Fps is (10 sec: 3277.2, 60 sec: 3345.1, 300 sec: 2662.4). Total num frames: 212992. Throughput: 0: 833.9. Samples: 52970. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 10:47:45,094][00394] Avg episode reward: [(0, '4.463')]
+[2023-02-27 10:47:50,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 2746.7). Total num frames: 233472. Throughput: 0: 834.3. Samples: 56336. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:47:50,094][00394] Avg episode reward: [(0, '4.405')]
+[2023-02-27 10:47:52,397][11895] Updated weights for policy 0, policy_version 60 (0.0016)
+[2023-02-27 10:47:55,094][00394] Fps is (10 sec: 3685.3, 60 sec: 3413.2, 300 sec: 2776.1). Total num frames: 249856. Throughput: 0: 831.8. Samples: 62716. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:47:55,101][00394] Avg episode reward: [(0, '4.580')]
+[2023-02-27 10:47:55,188][11881] Saving new best policy, reward=4.580!
+[2023-02-27 10:48:00,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3413.3, 300 sec: 2802.5). Total num frames: 266240. Throughput: 0: 829.0. Samples: 66776. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:48:00,094][00394] Avg episode reward: [(0, '4.795')]
+[2023-02-27 10:48:00,103][11881] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000065_266240.pth...
+[2023-02-27 10:48:00,295][11881] Saving new best policy, reward=4.795!
+[2023-02-27 10:48:05,091][00394] Fps is (10 sec: 3277.8, 60 sec: 3345.1, 300 sec: 2826.2). Total num frames: 282624. Throughput: 0: 828.6. Samples: 68878. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:48:05,094][00394] Avg episode reward: [(0, '4.575')]
+[2023-02-27 10:48:05,443][11895] Updated weights for policy 0, policy_version 70 (0.0035)
+[2023-02-27 10:48:10,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3413.3, 300 sec: 2925.7). Total num frames: 307200. Throughput: 0: 834.8. Samples: 75534. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 10:48:10,093][00394] Avg episode reward: [(0, '4.513')]
+[2023-02-27 10:48:15,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 2941.7). Total num frames: 323584. Throughput: 0: 853.0. Samples: 81416. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 10:48:15,094][00394] Avg episode reward: [(0, '4.583')]
+[2023-02-27 10:48:16,035][11895] Updated weights for policy 0, policy_version 80 (0.0013)
+[2023-02-27 10:48:20,093][00394] Fps is (10 sec: 2866.6, 60 sec: 3345.0, 300 sec: 2920.6). Total num frames: 335872. Throughput: 0: 860.9. Samples: 83470. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 10:48:20,096][00394] Avg episode reward: [(0, '4.648')]
+[2023-02-27 10:48:25,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 2969.6). Total num frames: 356352. Throughput: 0: 886.3. Samples: 87874. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 10:48:25,093][00394] Avg episode reward: [(0, '4.414')]
+[2023-02-27 10:48:27,627][11895] Updated weights for policy 0, policy_version 90 (0.0015)
+[2023-02-27 10:48:30,091][00394] Fps is (10 sec: 4096.8, 60 sec: 3481.6, 300 sec: 3014.7). Total num frames: 376832. Throughput: 0: 924.3. Samples: 94564. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:48:30,094][00394] Avg episode reward: [(0, '4.283')]
+[2023-02-27 10:48:35,091][00394] Fps is (10 sec: 4095.8, 60 sec: 3618.2, 300 sec: 3056.2). Total num frames: 397312. Throughput: 0: 922.5. Samples: 97850. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-27 10:48:35,094][00394] Avg episode reward: [(0, '4.520')]
+[2023-02-27 10:48:38,798][11895] Updated weights for policy 0, policy_version 100 (0.0024)
+[2023-02-27 10:48:40,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3034.1). Total num frames: 409600. Throughput: 0: 881.3. Samples: 102374. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-27 10:48:40,099][00394] Avg episode reward: [(0, '4.614')]
+[2023-02-27 10:48:45,091][00394] Fps is (10 sec: 3277.0, 60 sec: 3618.1, 300 sec: 3072.0). Total num frames: 430080. Throughput: 0: 894.9. Samples: 107048. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-27 10:48:45,093][00394] Avg episode reward: [(0, '4.602')]
+[2023-02-27 10:48:49,686][11895] Updated weights for policy 0, policy_version 110 (0.0014)
+[2023-02-27 10:48:50,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3107.3). Total num frames: 450560. Throughput: 0: 921.7. Samples: 110356. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:48:50,096][00394] Avg episode reward: [(0, '4.400')]
+[2023-02-27 10:48:55,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3618.3, 300 sec: 3113.0). Total num frames: 466944. Throughput: 0: 919.5. Samples: 116910. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:48:55,097][00394] Avg episode reward: [(0, '4.426')]
+[2023-02-27 10:49:00,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3118.2). Total num frames: 483328. Throughput: 0: 884.4. Samples: 121212. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:49:00,094][00394] Avg episode reward: [(0, '4.502')]
+[2023-02-27 10:49:02,394][11895] Updated weights for policy 0, policy_version 120 (0.0020)
+[2023-02-27 10:49:05,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3123.2). Total num frames: 499712. Throughput: 0: 887.2. Samples: 123394. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:49:05,099][00394] Avg episode reward: [(0, '4.613')]
+[2023-02-27 10:49:10,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3177.5). Total num frames: 524288. Throughput: 0: 925.5. Samples: 129522. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:49:10,097][00394] Avg episode reward: [(0, '4.571')]
+[2023-02-27 10:49:11,981][11895] Updated weights for policy 0, policy_version 130 (0.0025)
+[2023-02-27 10:49:15,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3180.4). Total num frames: 540672. Throughput: 0: 917.3. Samples: 135844. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:49:15,094][00394] Avg episode reward: [(0, '4.442')]
+[2023-02-27 10:49:20,093][00394] Fps is (10 sec: 3276.2, 60 sec: 3686.4, 300 sec: 3183.1). Total num frames: 557056. Throughput: 0: 890.4. Samples: 137920. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:49:20,095][00394] Avg episode reward: [(0, '4.483')]
+[2023-02-27 10:49:25,091][00394] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3163.0). Total num frames: 569344. Throughput: 0: 885.8. Samples: 142234. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 10:49:25,094][00394] Avg episode reward: [(0, '4.508')]
+[2023-02-27 10:49:25,116][11895] Updated weights for policy 0, policy_version 140 (0.0026)
+[2023-02-27 10:49:30,096][00394] Fps is (10 sec: 3685.2, 60 sec: 3617.8, 300 sec: 3210.3). Total num frames: 593920. Throughput: 0: 925.1. Samples: 148682. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:49:30,100][00394] Avg episode reward: [(0, '4.674')]
+[2023-02-27 10:49:33,926][11895] Updated weights for policy 0, policy_version 150 (0.0013)
+[2023-02-27 10:49:35,091][00394] Fps is (10 sec: 4505.6, 60 sec: 3618.2, 300 sec: 3233.7). Total num frames: 614400. Throughput: 0: 926.0. Samples: 152028. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-27 10:49:35,096][00394] Avg episode reward: [(0, '4.588')]
+[2023-02-27 10:49:40,091][00394] Fps is (10 sec: 3688.3, 60 sec: 3686.4, 300 sec: 3234.8). Total num frames: 630784. Throughput: 0: 892.0. Samples: 157052. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-27 10:49:40,094][00394] Avg episode reward: [(0, '4.361')]
+[2023-02-27 10:49:45,091][00394] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3215.4). Total num frames: 643072. Throughput: 0: 890.0. Samples: 161264. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:49:45,094][00394] Avg episode reward: [(0, '4.558')]
+[2023-02-27 10:49:47,068][11895] Updated weights for policy 0, policy_version 160 (0.0020)
+[2023-02-27 10:49:50,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3256.8). Total num frames: 667648. Throughput: 0: 914.0. Samples: 164524. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:49:50,093][00394] Avg episode reward: [(0, '4.765')]
+[2023-02-27 10:49:55,095][00394] Fps is (10 sec: 4503.7, 60 sec: 3686.2, 300 sec: 3276.7). Total num frames: 688128. Throughput: 0: 930.7. Samples: 171406. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:49:55,102][00394] Avg episode reward: [(0, '4.667')]
+[2023-02-27 10:49:56,880][11895] Updated weights for policy 0, policy_version 170 (0.0032)
+[2023-02-27 10:50:00,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3276.8). Total num frames: 704512. Throughput: 0: 895.2. Samples: 176130. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-27 10:50:00,093][00394] Avg episode reward: [(0, '4.381')]
+[2023-02-27 10:50:00,112][11881] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000172_704512.pth...
+[2023-02-27 10:50:05,091][00394] Fps is (10 sec: 2868.4, 60 sec: 3618.1, 300 sec: 3258.2). Total num frames: 716800. Throughput: 0: 894.7. Samples: 178180. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:50:05,094][00394] Avg episode reward: [(0, '4.656')]
+[2023-02-27 10:50:08,900][11895] Updated weights for policy 0, policy_version 180 (0.0026)
+[2023-02-27 10:50:10,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3295.0). Total num frames: 741376. Throughput: 0: 929.0. Samples: 184038. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:50:10,094][00394] Avg episode reward: [(0, '4.764')]
+[2023-02-27 10:50:15,093][00394] Fps is (10 sec: 4504.5, 60 sec: 3686.3, 300 sec: 3312.4). Total num frames: 761856. Throughput: 0: 934.2. Samples: 190720. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-27 10:50:15,102][00394] Avg episode reward: [(0, '4.482')]
+[2023-02-27 10:50:19,997][11895] Updated weights for policy 0, policy_version 190 (0.0013)
+[2023-02-27 10:50:20,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.5, 300 sec: 3311.7). Total num frames: 778240. Throughput: 0: 910.9. Samples: 193018. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-27 10:50:20,095][00394] Avg episode reward: [(0, '4.550')]
+[2023-02-27 10:50:25,091][00394] Fps is (10 sec: 2867.8, 60 sec: 3686.4, 300 sec: 3293.9). Total num frames: 790528. Throughput: 0: 894.9. Samples: 197324. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:50:25,097][00394] Avg episode reward: [(0, '4.616')]
+[2023-02-27 10:50:30,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.7, 300 sec: 3327.0). Total num frames: 815104. Throughput: 0: 935.6. Samples: 203368. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:50:30,097][00394] Avg episode reward: [(0, '4.793')]
+[2023-02-27 10:50:30,906][11895] Updated weights for policy 0, policy_version 200 (0.0023)
+[2023-02-27 10:50:35,091][00394] Fps is (10 sec: 4505.7, 60 sec: 3686.4, 300 sec: 3342.3). Total num frames: 835584. Throughput: 0: 938.5. Samples: 206756. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:50:35,096][00394] Avg episode reward: [(0, '4.526')]
+[2023-02-27 10:50:40,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3325.0). Total num frames: 847872. Throughput: 0: 907.2. Samples: 212226. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:50:40,093][00394] Avg episode reward: [(0, '4.335')]
+[2023-02-27 10:50:43,418][11895] Updated weights for policy 0, policy_version 210 (0.0017)
+[2023-02-27 10:50:45,091][00394] Fps is (10 sec: 2867.1, 60 sec: 3686.4, 300 sec: 3324.1). Total num frames: 864256. Throughput: 0: 892.0. Samples: 216268. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 10:50:45,099][00394] Avg episode reward: [(0, '4.276')]
+[2023-02-27 10:50:50,091][00394] Fps is (10 sec: 3686.3, 60 sec: 3618.1, 300 sec: 3338.6). Total num frames: 884736. Throughput: 0: 905.9. Samples: 218946. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:50:50,100][00394] Avg episode reward: [(0, '4.339')]
+[2023-02-27 10:50:53,421][11895] Updated weights for policy 0, policy_version 220 (0.0026)
+[2023-02-27 10:50:55,091][00394] Fps is (10 sec: 4096.1, 60 sec: 3618.4, 300 sec: 3352.7). Total num frames: 905216. Throughput: 0: 920.6. Samples: 225466. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:50:55,098][00394] Avg episode reward: [(0, '4.519')]
+[2023-02-27 10:51:00,091][00394] Fps is (10 sec: 3686.5, 60 sec: 3618.1, 300 sec: 3351.3). Total num frames: 921600. Throughput: 0: 883.3. Samples: 230466. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+[2023-02-27 10:51:00,095][00394] Avg episode reward: [(0, '4.457')]
+[2023-02-27 10:51:05,091][00394] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3335.3). Total num frames: 933888. Throughput: 0: 876.8. Samples: 232474. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-27 10:51:05,099][00394] Avg episode reward: [(0, '4.572')]
+[2023-02-27 10:51:06,886][11895] Updated weights for policy 0, policy_version 230 (0.0012)
+[2023-02-27 10:51:10,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3348.7). Total num frames: 954368. Throughput: 0: 890.8. Samples: 237412. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-27 10:51:10,099][00394] Avg episode reward: [(0, '4.712')]
+[2023-02-27 10:51:15,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3550.0, 300 sec: 3361.5). Total num frames: 974848. Throughput: 0: 902.0. Samples: 243958. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:51:15,096][00394] Avg episode reward: [(0, '4.564')]
+[2023-02-27 10:51:16,454][11895] Updated weights for policy 0, policy_version 240 (0.0019)
+[2023-02-27 10:51:20,094][00394] Fps is (10 sec: 3685.2, 60 sec: 3549.7, 300 sec: 3360.1). Total num frames: 991232. Throughput: 0: 889.1. Samples: 246770. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:51:20,103][00394] Avg episode reward: [(0, '4.446')]
+[2023-02-27 10:51:25,091][00394] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3401.8). Total num frames: 1003520. Throughput: 0: 857.5. Samples: 250812. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-27 10:51:25,098][00394] Avg episode reward: [(0, '4.326')]
+[2023-02-27 10:51:30,091][00394] Fps is (10 sec: 2868.1, 60 sec: 3413.3, 300 sec: 3457.3). Total num frames: 1019904. Throughput: 0: 870.7. Samples: 255450. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:51:30,094][00394] Avg episode reward: [(0, '4.488')]
+[2023-02-27 10:51:30,277][11895] Updated weights for policy 0, policy_version 250 (0.0021)
+[2023-02-27 10:51:35,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3345.1, 300 sec: 3512.8). Total num frames: 1036288. Throughput: 0: 876.9. Samples: 258406. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:51:35,093][00394] Avg episode reward: [(0, '4.692')]
+[2023-02-27 10:51:40,091][00394] Fps is (10 sec: 2867.2, 60 sec: 3345.1, 300 sec: 3512.8). Total num frames: 1048576. Throughput: 0: 822.4. Samples: 262476. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-27 10:51:40,096][00394] Avg episode reward: [(0, '4.763')]
+[2023-02-27 10:51:45,095][00394] Fps is (10 sec: 2456.6, 60 sec: 3276.6, 300 sec: 3498.9). Total num frames: 1060864. Throughput: 0: 786.5. Samples: 265862. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-27 10:51:45,102][00394] Avg episode reward: [(0, '4.635')]
+[2023-02-27 10:51:45,614][11895] Updated weights for policy 0, policy_version 260 (0.0022)
+[2023-02-27 10:51:50,091][00394] Fps is (10 sec: 2867.2, 60 sec: 3208.5, 300 sec: 3499.0). Total num frames: 1077248. Throughput: 0: 788.0. Samples: 267934. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:51:50,094][00394] Avg episode reward: [(0, '4.479')]
+[2023-02-27 10:51:55,093][00394] Fps is (10 sec: 3687.1, 60 sec: 3208.4, 300 sec: 3512.8). Total num frames: 1097728. Throughput: 0: 809.4. Samples: 273838. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:51:55,099][00394] Avg episode reward: [(0, '4.394')]
+[2023-02-27 10:51:56,032][11895] Updated weights for policy 0, policy_version 270 (0.0018)
+[2023-02-27 10:52:00,091][00394] Fps is (10 sec: 4505.6, 60 sec: 3345.1, 300 sec: 3526.7). Total num frames: 1122304. Throughput: 0: 815.0. Samples: 280634. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:52:00,093][00394] Avg episode reward: [(0, '4.500')]
+[2023-02-27 10:52:00,113][11881] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000274_1122304.pth...
+[2023-02-27 10:52:00,296][11881] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000065_266240.pth
+[2023-02-27 10:52:05,091][00394] Fps is (10 sec: 3687.2, 60 sec: 3345.1, 300 sec: 3499.0). Total num frames: 1134592. Throughput: 0: 800.2. Samples: 282778. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
+[2023-02-27 10:52:05,094][00394] Avg episode reward: [(0, '4.565')]
+[2023-02-27 10:52:08,296][11895] Updated weights for policy 0, policy_version 280 (0.0019)
+[2023-02-27 10:52:10,091][00394] Fps is (10 sec: 2867.2, 60 sec: 3276.8, 300 sec: 3512.8). Total num frames: 1150976. Throughput: 0: 807.8. Samples: 287162. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:52:10,094][00394] Avg episode reward: [(0, '4.602')]
+[2023-02-27 10:52:15,096][00394] Fps is (10 sec: 3684.5, 60 sec: 3276.5, 300 sec: 3512.8). Total num frames: 1171456. Throughput: 0: 842.6. Samples: 293370. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:52:15,103][00394] Avg episode reward: [(0, '4.498')]
+[2023-02-27 10:52:18,005][11895] Updated weights for policy 0, policy_version 290 (0.0013)
+[2023-02-27 10:52:20,091][00394] Fps is (10 sec: 4505.6, 60 sec: 3413.5, 300 sec: 3526.7). Total num frames: 1196032. Throughput: 0: 849.6. Samples: 296640. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:52:20,097][00394] Avg episode reward: [(0, '4.467')]
+[2023-02-27 10:52:25,091][00394] Fps is (10 sec: 3688.3, 60 sec: 3413.3, 300 sec: 3526.7). Total num frames: 1208320. Throughput: 0: 877.1. Samples: 301946. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:52:25,098][00394] Avg episode reward: [(0, '4.652')]
+[2023-02-27 10:52:30,091][00394] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3540.6). Total num frames: 1224704. Throughput: 0: 900.3. Samples: 306372. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:52:30,094][00394] Avg episode reward: [(0, '4.686')]
+[2023-02-27 10:52:30,675][11895] Updated weights for policy 0, policy_version 300 (0.0032)
+[2023-02-27 10:52:35,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3568.4). Total num frames: 1245184. Throughput: 0: 924.2. Samples: 309524. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:52:35,094][00394] Avg episode reward: [(0, '4.778')]
+[2023-02-27 10:52:39,752][11895] Updated weights for policy 0, policy_version 310 (0.0020)
+[2023-02-27 10:52:40,092][00394] Fps is (10 sec: 4505.0, 60 sec: 3686.3, 300 sec: 3582.2). Total num frames: 1269760. Throughput: 0: 941.5. Samples: 316204. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:52:40,096][00394] Avg episode reward: [(0, '4.845')]
+[2023-02-27 10:52:40,108][11881] Saving new best policy, reward=4.845!
+[2023-02-27 10:52:45,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.6, 300 sec: 3554.5). Total num frames: 1282048. Throughput: 0: 901.6. Samples: 321206. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:52:45,095][00394] Avg episode reward: [(0, '4.879')]
+[2023-02-27 10:52:45,099][11881] Saving new best policy, reward=4.879!
+[2023-02-27 10:52:50,091][00394] Fps is (10 sec: 2867.6, 60 sec: 3686.4, 300 sec: 3554.5). Total num frames: 1298432. Throughput: 0: 899.8. Samples: 323270. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 10:52:50,094][00394] Avg episode reward: [(0, '4.768')]
+[2023-02-27 10:52:52,636][11895] Updated weights for policy 0, policy_version 320 (0.0026)
+[2023-02-27 10:52:55,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.5, 300 sec: 3568.4). Total num frames: 1318912. Throughput: 0: 926.9. Samples: 328872. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:52:55,093][00394] Avg episode reward: [(0, '4.615')]
+[2023-02-27 10:53:00,091][00394] Fps is (10 sec: 4505.4, 60 sec: 3686.4, 300 sec: 3596.1). Total num frames: 1343488. Throughput: 0: 940.9. Samples: 335706. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:53:00,094][00394] Avg episode reward: [(0, '4.531')]
+[2023-02-27 10:53:02,282][11895] Updated weights for policy 0, policy_version 330 (0.0015)
+[2023-02-27 10:53:05,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3554.5). Total num frames: 1355776. Throughput: 0: 926.6. Samples: 338338. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:53:05,093][00394] Avg episode reward: [(0, '4.554')]
+[2023-02-27 10:53:10,091][00394] Fps is (10 sec: 2867.3, 60 sec: 3686.4, 300 sec: 3554.5). Total num frames: 1372160. Throughput: 0: 901.7. Samples: 342522. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
+[2023-02-27 10:53:10,094][00394] Avg episode reward: [(0, '4.470')]
+[2023-02-27 10:53:14,640][11895] Updated weights for policy 0, policy_version 340 (0.0014)
+[2023-02-27 10:53:15,091][00394] Fps is (10 sec: 3686.3, 60 sec: 3686.7, 300 sec: 3582.3). Total num frames: 1392640. Throughput: 0: 933.4. Samples: 348376. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 10:53:15,094][00394] Avg episode reward: [(0, '4.440')]
+[2023-02-27 10:53:20,091][00394] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3596.1). Total num frames: 1417216. Throughput: 0: 938.7. Samples: 351764. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 10:53:20,098][00394] Avg episode reward: [(0, '4.749')]
+[2023-02-27 10:53:25,091][00394] Fps is (10 sec: 3686.5, 60 sec: 3686.4, 300 sec: 3568.4). Total num frames: 1429504. Throughput: 0: 915.0. Samples: 357380. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 10:53:25,101][00394] Avg episode reward: [(0, '4.941')]
+[2023-02-27 10:53:25,106][11881] Saving new best policy, reward=4.941!
+[2023-02-27 10:53:25,772][11895] Updated weights for policy 0, policy_version 350 (0.0012)
+[2023-02-27 10:53:30,091][00394] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3554.5). Total num frames: 1445888. Throughput: 0: 896.5. Samples: 361550. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:53:30,095][00394] Avg episode reward: [(0, '5.061')]
+[2023-02-27 10:53:30,110][11881] Saving new best policy, reward=5.061!
+[2023-02-27 10:53:35,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3582.3). Total num frames: 1466368. Throughput: 0: 907.3. Samples: 364098. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:53:35,097][00394] Avg episode reward: [(0, '4.842')]
+[2023-02-27 10:53:36,771][11895] Updated weights for policy 0, policy_version 360 (0.0027)
+[2023-02-27 10:53:40,091][00394] Fps is (10 sec: 4095.9, 60 sec: 3618.2, 300 sec: 3582.3). Total num frames: 1486848. Throughput: 0: 932.3. Samples: 370824. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:53:40,094][00394] Avg episode reward: [(0, '4.757')]
+[2023-02-27 10:53:45,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3568.4). Total num frames: 1503232. Throughput: 0: 902.4. Samples: 376314. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:53:45,095][00394] Avg episode reward: [(0, '4.621')]
+[2023-02-27 10:53:48,734][11895] Updated weights for policy 0, policy_version 370 (0.0018)
+[2023-02-27 10:53:50,091][00394] Fps is (10 sec: 3276.9, 60 sec: 3686.4, 300 sec: 3568.4). Total num frames: 1519616. Throughput: 0: 890.7. Samples: 378418. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
+[2023-02-27 10:53:50,099][00394] Avg episode reward: [(0, '4.645')]
+[2023-02-27 10:53:55,091][00394] Fps is (10 sec: 3686.5, 60 sec: 3686.4, 300 sec: 3582.3). Total num frames: 1540096. Throughput: 0: 913.6. Samples: 383634. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-27 10:53:55,097][00394] Avg episode reward: [(0, '4.730')]
+[2023-02-27 10:53:58,407][11895] Updated weights for policy 0, policy_version 380 (0.0025)
+[2023-02-27 10:54:00,091][00394] Fps is (10 sec: 4096.1, 60 sec: 3618.2, 300 sec: 3596.1). Total num frames: 1560576. Throughput: 0: 938.0. Samples: 390586. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:54:00,093][00394] Avg episode reward: [(0, '4.667')]
+[2023-02-27 10:54:00,175][11881] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000382_1564672.pth...
+[2023-02-27 10:54:00,300][11881] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000172_704512.pth
+[2023-02-27 10:54:05,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3568.4). Total num frames: 1576960. Throughput: 0: 928.7. Samples: 393554. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:54:05,097][00394] Avg episode reward: [(0, '4.640')]
+[2023-02-27 10:54:10,093][00394] Fps is (10 sec: 3276.1, 60 sec: 3686.3, 300 sec: 3568.4). Total num frames: 1593344. Throughput: 0: 899.5. Samples: 397860. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:54:10,097][00394] Avg episode reward: [(0, '4.544')]
+[2023-02-27 10:54:11,048][11895] Updated weights for policy 0, policy_version 390 (0.0034)
+[2023-02-27 10:54:15,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3582.3). Total num frames: 1613824. Throughput: 0: 930.2. Samples: 403408. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:54:15,093][00394] Avg episode reward: [(0, '4.669')]
+[2023-02-27 10:54:20,091][00394] Fps is (10 sec: 4096.8, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 1634304. Throughput: 0: 948.2. Samples: 406766. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:54:20,094][00394] Avg episode reward: [(0, '4.552')]
+[2023-02-27 10:54:20,165][11895] Updated weights for policy 0, policy_version 400 (0.0015)
+[2023-02-27 10:54:25,104][00394] Fps is (10 sec: 4090.8, 60 sec: 3753.9, 300 sec: 3596.1). Total num frames: 1654784. Throughput: 0: 934.8. Samples: 412902. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:54:25,108][00394] Avg episode reward: [(0, '4.492')]
+[2023-02-27 10:54:30,095][00394] Fps is (10 sec: 3275.4, 60 sec: 3686.1, 300 sec: 3568.3). Total num frames: 1667072. Throughput: 0: 909.2. Samples: 417230. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-27 10:54:30,098][00394] Avg episode reward: [(0, '4.833')]
+[2023-02-27 10:54:32,925][11895] Updated weights for policy 0, policy_version 410 (0.0034)
+[2023-02-27 10:54:35,091][00394] Fps is (10 sec: 3281.0, 60 sec: 3686.4, 300 sec: 3582.3). Total num frames: 1687552. Throughput: 0: 913.1. Samples: 419508. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:54:35,096][00394] Avg episode reward: [(0, '4.803')]
+[2023-02-27 10:54:40,091][00394] Fps is (10 sec: 4097.7, 60 sec: 3686.4, 300 sec: 3610.0). Total num frames: 1708032. Throughput: 0: 952.6. Samples: 426500. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:54:40,096][00394] Avg episode reward: [(0, '4.606')]
+[2023-02-27 10:54:42,168][11895] Updated weights for policy 0, policy_version 420 (0.0022)
+[2023-02-27 10:54:45,104][00394] Fps is (10 sec: 4090.5, 60 sec: 3753.8, 300 sec: 3596.0). Total num frames: 1728512. Throughput: 0: 926.2. Samples: 432276. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:54:45,111][00394] Avg episode reward: [(0, '4.689')]
+[2023-02-27 10:54:50,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3568.4). Total num frames: 1740800. Throughput: 0: 907.6. Samples: 434396. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:54:50,098][00394] Avg episode reward: [(0, '5.034')]
+[2023-02-27 10:54:54,597][11895] Updated weights for policy 0, policy_version 430 (0.0022)
+[2023-02-27 10:54:55,091][00394] Fps is (10 sec: 3281.2, 60 sec: 3686.4, 300 sec: 3582.3). Total num frames: 1761280. Throughput: 0: 920.5. Samples: 439282. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:54:55,097][00394] Avg episode reward: [(0, '5.295')]
+[2023-02-27 10:54:55,101][11881] Saving new best policy, reward=5.295!
+[2023-02-27 10:55:00,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3610.0). Total num frames: 1781760. Throughput: 0: 946.4. Samples: 445994. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:55:00,100][00394] Avg episode reward: [(0, '5.437')]
+[2023-02-27 10:55:00,113][11881] Saving new best policy, reward=5.437!
+[2023-02-27 10:55:04,946][11895] Updated weights for policy 0, policy_version 440 (0.0014)
+[2023-02-27 10:55:05,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3596.1). Total num frames: 1802240. Throughput: 0: 943.1. Samples: 449204. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 10:55:05,097][00394] Avg episode reward: [(0, '5.254')]
+[2023-02-27 10:55:10,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3686.5, 300 sec: 3568.4). Total num frames: 1814528. Throughput: 0: 902.7. Samples: 453510. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:55:10,093][00394] Avg episode reward: [(0, '5.488')]
+[2023-02-27 10:55:10,108][11881] Saving new best policy, reward=5.488!
+[2023-02-27 10:55:15,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3582.3). Total num frames: 1835008. Throughput: 0: 918.9. Samples: 458578. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:55:15,101][00394] Avg episode reward: [(0, '5.041')]
+[2023-02-27 10:55:16,714][11895] Updated weights for policy 0, policy_version 450 (0.0022)
+[2023-02-27 10:55:20,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3610.0). Total num frames: 1855488. Throughput: 0: 941.8. Samples: 461890. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:55:20,098][00394] Avg episode reward: [(0, '4.847')]
+[2023-02-27 10:55:25,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3618.9, 300 sec: 3582.3). Total num frames: 1871872. Throughput: 0: 927.8. Samples: 468250. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 10:55:25,097][00394] Avg episode reward: [(0, '5.037')]
+[2023-02-27 10:55:27,954][11895] Updated weights for policy 0, policy_version 460 (0.0014)
+[2023-02-27 10:55:30,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3686.7, 300 sec: 3568.4). Total num frames: 1888256. Throughput: 0: 894.4. Samples: 472512. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-27 10:55:30,101][00394] Avg episode reward: [(0, '5.114')]
+[2023-02-27 10:55:35,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3596.1). Total num frames: 1908736. Throughput: 0: 896.1. Samples: 474720. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:55:35,093][00394] Avg episode reward: [(0, '5.236')]
+[2023-02-27 10:55:38,578][11895] Updated weights for policy 0, policy_version 470 (0.0014)
+[2023-02-27 10:55:40,092][00394] Fps is (10 sec: 4095.7, 60 sec: 3686.4, 300 sec: 3610.0). Total num frames: 1929216. Throughput: 0: 937.2. Samples: 481456. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:55:40,094][00394] Avg episode reward: [(0, '4.911')]
+[2023-02-27 10:55:45,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3687.2, 300 sec: 3610.0). Total num frames: 1949696. Throughput: 0: 918.8. Samples: 487342. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:55:45,094][00394] Avg episode reward: [(0, '4.651')]
+[2023-02-27 10:55:50,091][00394] Fps is (10 sec: 3277.1, 60 sec: 3686.4, 300 sec: 3582.3). Total num frames: 1961984. Throughput: 0: 894.2. Samples: 489444. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-27 10:55:50,098][00394] Avg episode reward: [(0, '4.544')]
+[2023-02-27 10:55:50,872][11895] Updated weights for policy 0, policy_version 480 (0.0021)
+[2023-02-27 10:55:55,094][00394] Fps is (10 sec: 2866.3, 60 sec: 3617.9, 300 sec: 3582.2). Total num frames: 1978368. Throughput: 0: 899.6. Samples: 493996. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:55:55,103][00394] Avg episode reward: [(0, '4.671')]
+[2023-02-27 10:56:00,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3623.9). Total num frames: 2002944. Throughput: 0: 936.3. Samples: 500712. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:56:00,094][00394] Avg episode reward: [(0, '4.563')]
+[2023-02-27 10:56:00,105][11881] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000489_2002944.pth...
+[2023-02-27 10:56:00,224][11881] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000274_1122304.pth
+[2023-02-27 10:56:00,879][11895] Updated weights for policy 0, policy_version 490 (0.0037)
+[2023-02-27 10:56:05,091][00394] Fps is (10 sec: 4097.2, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 2019328. Throughput: 0: 934.9. Samples: 503960. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 10:56:05,099][00394] Avg episode reward: [(0, '4.525')]
+[2023-02-27 10:56:10,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3596.1). Total num frames: 2035712. Throughput: 0: 892.9. Samples: 508430. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:56:10,093][00394] Avg episode reward: [(0, '4.580')]
+[2023-02-27 10:56:13,612][11895] Updated weights for policy 0, policy_version 500 (0.0029)
+[2023-02-27 10:56:15,091][00394] Fps is (10 sec: 3276.7, 60 sec: 3618.1, 300 sec: 3596.2). Total num frames: 2052096. Throughput: 0: 905.8. Samples: 513274. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:56:15,094][00394] Avg episode reward: [(0, '4.593')]
+[2023-02-27 10:56:20,095][00394] Fps is (10 sec: 3684.8, 60 sec: 3617.9, 300 sec: 3623.9). Total num frames: 2072576. Throughput: 0: 927.2. Samples: 516446. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:56:20,102][00394] Avg episode reward: [(0, '4.710')]
+[2023-02-27 10:56:22,830][11895] Updated weights for policy 0, policy_version 510 (0.0015)
+[2023-02-27 10:56:25,100][00394] Fps is (10 sec: 4092.3, 60 sec: 3685.8, 300 sec: 3637.7). Total num frames: 2093056. Throughput: 0: 922.8. Samples: 522992. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:56:25,109][00394] Avg episode reward: [(0, '4.703')]
+[2023-02-27 10:56:30,095][00394] Fps is (10 sec: 3686.5, 60 sec: 3686.2, 300 sec: 3637.8). Total num frames: 2109440. Throughput: 0: 887.0. Samples: 527260. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:56:30,099][00394] Avg episode reward: [(0, '4.625')]
+[2023-02-27 10:56:35,091][00394] Fps is (10 sec: 3279.9, 60 sec: 3618.1, 300 sec: 3651.7). Total num frames: 2125824. Throughput: 0: 887.7. Samples: 529392. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:56:35,098][00394] Avg episode reward: [(0, '4.499')]
+[2023-02-27 10:56:35,765][11895] Updated weights for policy 0, policy_version 520 (0.0033)
+[2023-02-27 10:56:40,092][00394] Fps is (10 sec: 3687.5, 60 sec: 3618.1, 300 sec: 3679.5). Total num frames: 2146304. Throughput: 0: 928.6. Samples: 535780. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:56:40,096][00394] Avg episode reward: [(0, '4.336')]
+[2023-02-27 10:56:45,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 2166784. Throughput: 0: 921.5. Samples: 542178. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:56:45,099][00394] Avg episode reward: [(0, '4.486')]
+[2023-02-27 10:56:45,752][11895] Updated weights for policy 0, policy_version 530 (0.0013)
+[2023-02-27 10:56:50,100][00394] Fps is (10 sec: 3274.1, 60 sec: 3617.6, 300 sec: 3665.5). Total num frames: 2179072. Throughput: 0: 895.1. Samples: 544246. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:56:50,108][00394] Avg episode reward: [(0, '4.689')]
+[2023-02-27 10:56:55,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3686.6, 300 sec: 3651.7). Total num frames: 2199552. Throughput: 0: 892.4. Samples: 548590. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:56:55,093][00394] Avg episode reward: [(0, '4.882')]
+[2023-02-27 10:56:57,676][11895] Updated weights for policy 0, policy_version 540 (0.0042)
+[2023-02-27 10:57:00,091][00394] Fps is (10 sec: 4099.8, 60 sec: 3618.1, 300 sec: 3679.5). Total num frames: 2220032. Throughput: 0: 933.0. Samples: 555258. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:57:00,093][00394] Avg episode reward: [(0, '4.583')]
+[2023-02-27 10:57:05,091][00394] Fps is (10 sec: 4095.8, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 2240512. Throughput: 0: 937.8. Samples: 558644. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:57:05,099][00394] Avg episode reward: [(0, '4.628')]
+[2023-02-27 10:57:08,599][11895] Updated weights for policy 0, policy_version 550 (0.0014)
+[2023-02-27 10:57:10,094][00394] Fps is (10 sec: 3685.4, 60 sec: 3686.2, 300 sec: 3679.5). Total num frames: 2256896. Throughput: 0: 898.9. Samples: 563436. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:57:10,095][00394] Avg episode reward: [(0, '4.688')]
+[2023-02-27 10:57:15,091][00394] Fps is (10 sec: 3276.9, 60 sec: 3686.4, 300 sec: 3651.7). Total num frames: 2273280. Throughput: 0: 900.9. Samples: 567798. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:57:15,094][00394] Avg episode reward: [(0, '4.626')]
+[2023-02-27 10:57:19,757][11895] Updated weights for policy 0, policy_version 560 (0.0018)
+[2023-02-27 10:57:20,091][00394] Fps is (10 sec: 3687.4, 60 sec: 3686.7, 300 sec: 3679.5). Total num frames: 2293760. Throughput: 0: 926.4. Samples: 571078. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:57:20,097][00394] Avg episode reward: [(0, '4.682')]
+[2023-02-27 10:57:25,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3687.0, 300 sec: 3693.3). Total num frames: 2314240. Throughput: 0: 931.5. Samples: 577696. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:57:25,101][00394] Avg episode reward: [(0, '4.613')]
+[2023-02-27 10:57:30,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3618.4, 300 sec: 3665.6). Total num frames: 2326528. Throughput: 0: 891.9. Samples: 582312. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:57:30,101][00394] Avg episode reward: [(0, '4.590')]
+[2023-02-27 10:57:31,742][11895] Updated weights for policy 0, policy_version 570 (0.0017)
+[2023-02-27 10:57:35,091][00394] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3637.8). Total num frames: 2342912. Throughput: 0: 894.1. Samples: 584470. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:57:35,097][00394] Avg episode reward: [(0, '4.771')]
+[2023-02-27 10:57:40,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3686.5, 300 sec: 3679.5). Total num frames: 2367488. Throughput: 0: 927.5. Samples: 590326. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:57:40,094][00394] Avg episode reward: [(0, '4.931')]
+[2023-02-27 10:57:41,805][11895] Updated weights for policy 0, policy_version 580 (0.0022)
+[2023-02-27 10:57:45,091][00394] Fps is (10 sec: 4505.5, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 2387968. Throughput: 0: 930.6. Samples: 597134. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:57:45,101][00394] Avg episode reward: [(0, '4.799')]
+[2023-02-27 10:57:50,095][00394] Fps is (10 sec: 3275.5, 60 sec: 3686.7, 300 sec: 3665.5). Total num frames: 2400256. Throughput: 0: 904.2. Samples: 599338. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:57:50,101][00394] Avg episode reward: [(0, '4.697')]
+[2023-02-27 10:57:54,837][11895] Updated weights for policy 0, policy_version 590 (0.0048)
+[2023-02-27 10:57:55,091][00394] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3637.8). Total num frames: 2416640. Throughput: 0: 890.6. Samples: 603512. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:57:55,094][00394] Avg episode reward: [(0, '4.594')]
+[2023-02-27 10:58:00,091][00394] Fps is (10 sec: 3687.9, 60 sec: 3618.1, 300 sec: 3665.6). Total num frames: 2437120. Throughput: 0: 932.4. Samples: 609756. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:58:00,094][00394] Avg episode reward: [(0, '4.580')]
+[2023-02-27 10:58:00,106][11881] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000595_2437120.pth...
+[2023-02-27 10:58:00,248][11881] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000382_1564672.pth
+[2023-02-27 10:58:03,999][11895] Updated weights for policy 0, policy_version 600 (0.0014)
+[2023-02-27 10:58:05,091][00394] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 2461696. Throughput: 0: 932.2. Samples: 613026. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 10:58:05,097][00394] Avg episode reward: [(0, '4.558')]
+[2023-02-27 10:58:10,093][00394] Fps is (10 sec: 3685.6, 60 sec: 3618.2, 300 sec: 3665.5). Total num frames: 2473984. Throughput: 0: 901.8. Samples: 618280. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:58:10,096][00394] Avg episode reward: [(0, '4.538')]
+[2023-02-27 10:58:15,091][00394] Fps is (10 sec: 2867.1, 60 sec: 3618.1, 300 sec: 3637.8). Total num frames: 2490368. Throughput: 0: 892.7. Samples: 622486. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:58:15,100][00394] Avg episode reward: [(0, '4.574')]
+[2023-02-27 10:58:16,883][11895] Updated weights for policy 0, policy_version 610 (0.0025)
+[2023-02-27 10:58:20,093][00394] Fps is (10 sec: 3686.4, 60 sec: 3618.0, 300 sec: 3665.5). Total num frames: 2510848. Throughput: 0: 909.9. Samples: 625416. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 10:58:20,101][00394] Avg episode reward: [(0, '4.678')]
+[2023-02-27 10:58:25,091][00394] Fps is (10 sec: 4096.2, 60 sec: 3618.1, 300 sec: 3679.5). Total num frames: 2531328. Throughput: 0: 929.8. Samples: 632168. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:58:25,094][00394] Avg episode reward: [(0, '4.738')]
+[2023-02-27 10:58:26,502][11895] Updated weights for policy 0, policy_version 620 (0.0021)
+[2023-02-27 10:58:30,091][00394] Fps is (10 sec: 3687.2, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 2547712. Throughput: 0: 890.8. Samples: 637222. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-27 10:58:30,098][00394] Avg episode reward: [(0, '4.743')]
+[2023-02-27 10:58:35,091][00394] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3637.8). Total num frames: 2560000. Throughput: 0: 888.2. Samples: 639302. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:58:35,098][00394] Avg episode reward: [(0, '4.675')]
+[2023-02-27 10:58:38,975][11895] Updated weights for policy 0, policy_version 630 (0.0028)
+[2023-02-27 10:58:40,092][00394] Fps is (10 sec: 3685.9, 60 sec: 3618.1, 300 sec: 3665.6). Total num frames: 2584576. Throughput: 0: 915.8. Samples: 644724. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:58:40,099][00394] Avg episode reward: [(0, '4.629')]
+[2023-02-27 10:58:45,091][00394] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3679.5). Total num frames: 2605056. Throughput: 0: 925.5. Samples: 651402. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:58:45,094][00394] Avg episode reward: [(0, '4.921')]
+[2023-02-27 10:58:50,024][11895] Updated weights for policy 0, policy_version 640 (0.0018)
+[2023-02-27 10:58:50,097][00394] Fps is (10 sec: 3684.8, 60 sec: 3686.3, 300 sec: 3665.5). Total num frames: 2621440. Throughput: 0: 910.2. Samples: 653990. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:58:50,099][00394] Avg episode reward: [(0, '4.830')]
+[2023-02-27 10:58:55,091][00394] Fps is (10 sec: 2867.1, 60 sec: 3618.1, 300 sec: 3637.8). Total num frames: 2633728. Throughput: 0: 887.6. Samples: 658220. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-27 10:58:55,097][00394] Avg episode reward: [(0, '4.688')]
+[2023-02-27 10:59:00,091][00394] Fps is (10 sec: 3278.6, 60 sec: 3618.1, 300 sec: 3651.7). Total num frames: 2654208. Throughput: 0: 918.7. Samples: 663826. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:59:00,093][00394] Avg episode reward: [(0, '4.633')]
+[2023-02-27 10:59:01,307][11895] Updated weights for policy 0, policy_version 650 (0.0023)
+[2023-02-27 10:59:05,091][00394] Fps is (10 sec: 4505.7, 60 sec: 3618.1, 300 sec: 3679.5). Total num frames: 2678784. Throughput: 0: 928.2. Samples: 667182. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:59:05,094][00394] Avg episode reward: [(0, '4.841')]
+[2023-02-27 10:59:10,095][00394] Fps is (10 sec: 3684.9, 60 sec: 3618.0, 300 sec: 3651.6). Total num frames: 2691072. Throughput: 0: 905.1. Samples: 672900. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:59:10,102][00394] Avg episode reward: [(0, '4.859')]
+[2023-02-27 10:59:13,325][11895] Updated weights for policy 0, policy_version 660 (0.0013)
+[2023-02-27 10:59:15,091][00394] Fps is (10 sec: 2867.2, 60 sec: 3618.2, 300 sec: 3637.8). Total num frames: 2707456. Throughput: 0: 887.2. Samples: 677144. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:59:15,098][00394] Avg episode reward: [(0, '4.698')]
+[2023-02-27 10:59:20,091][00394] Fps is (10 sec: 3687.9, 60 sec: 3618.3, 300 sec: 3638.0). Total num frames: 2727936. Throughput: 0: 898.8. Samples: 679748. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:59:20,098][00394] Avg episode reward: [(0, '4.608')]
+[2023-02-27 10:59:23,337][11895] Updated weights for policy 0, policy_version 670 (0.0017)
+[2023-02-27 10:59:25,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3665.6). Total num frames: 2748416. Throughput: 0: 927.7. Samples: 686468. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:59:25,098][00394] Avg episode reward: [(0, '4.740')]
+[2023-02-27 10:59:30,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3651.7). Total num frames: 2764800. Throughput: 0: 900.0. Samples: 691900. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:59:30,097][00394] Avg episode reward: [(0, '4.913')]
+[2023-02-27 10:59:35,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 2781184. Throughput: 0: 888.8. Samples: 693982. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:59:35,094][00394] Avg episode reward: [(0, '4.905')]
+[2023-02-27 10:59:36,072][11895] Updated weights for policy 0, policy_version 680 (0.0014)
+[2023-02-27 10:59:40,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3618.2, 300 sec: 3638.0). Total num frames: 2801664. Throughput: 0: 909.0. Samples: 699124. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:59:40,096][00394] Avg episode reward: [(0, '5.060')]
+[2023-02-27 10:59:45,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3665.6). Total num frames: 2822144. Throughput: 0: 936.3. Samples: 705958. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 10:59:45,096][00394] Avg episode reward: [(0, '4.948')]
+[2023-02-27 10:59:45,286][11895] Updated weights for policy 0, policy_version 690 (0.0024)
+[2023-02-27 10:59:50,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3618.5, 300 sec: 3651.7). Total num frames: 2838528. Throughput: 0: 930.4. Samples: 709050. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 10:59:50,099][00394] Avg episode reward: [(0, '4.965')]
+[2023-02-27 10:59:55,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 2854912. Throughput: 0: 898.7. Samples: 713340. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 10:59:55,098][00394] Avg episode reward: [(0, '4.864')]
+[2023-02-27 10:59:58,083][11895] Updated weights for policy 0, policy_version 700 (0.0012)
+[2023-02-27 11:00:00,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 2875392. Throughput: 0: 923.6. Samples: 718706. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 11:00:00,094][00394] Avg episode reward: [(0, '4.912')]
+[2023-02-27 11:00:00,103][11881] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000702_2875392.pth...
+[2023-02-27 11:00:00,218][11881] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000489_2002944.pth
+[2023-02-27 11:00:05,091][00394] Fps is (10 sec: 4096.1, 60 sec: 3618.1, 300 sec: 3665.6). Total num frames: 2895872. Throughput: 0: 938.5. Samples: 721980. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
+[2023-02-27 11:00:05,096][00394] Avg episode reward: [(0, '4.854')]
+[2023-02-27 11:00:07,250][11895] Updated weights for policy 0, policy_version 710 (0.0012)
+[2023-02-27 11:00:10,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.7, 300 sec: 3651.7). Total num frames: 2912256. Throughput: 0: 927.0. Samples: 728182. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 11:00:10,100][00394] Avg episode reward: [(0, '4.650')]
+[2023-02-27 11:00:15,091][00394] Fps is (10 sec: 3276.7, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 2928640. Throughput: 0: 901.3. Samples: 732460. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 11:00:15,098][00394] Avg episode reward: [(0, '4.565')]
+[2023-02-27 11:00:19,892][11895] Updated weights for policy 0, policy_version 720 (0.0026)
+[2023-02-27 11:00:20,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3651.7). Total num frames: 2949120. Throughput: 0: 903.8. Samples: 734654. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 11:00:20,099][00394] Avg episode reward: [(0, '4.593')]
+[2023-02-27 11:00:25,091][00394] Fps is (10 sec: 4096.1, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 2969600. Throughput: 0: 936.4. Samples: 741264. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-27 11:00:25,100][00394] Avg episode reward: [(0, '4.593')]
+[2023-02-27 11:00:30,091][00394] Fps is (10 sec: 3686.3, 60 sec: 3686.4, 300 sec: 3651.7). Total num frames: 2985984. Throughput: 0: 915.5. Samples: 747154.
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:00:30,098][00394] Avg episode reward: [(0, '4.668')] +[2023-02-27 11:00:30,545][11895] Updated weights for policy 0, policy_version 730 (0.0020) +[2023-02-27 11:00:35,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 3002368. Throughput: 0: 894.2. Samples: 749288. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-27 11:00:35,093][00394] Avg episode reward: [(0, '4.537')] +[2023-02-27 11:00:40,091][00394] Fps is (10 sec: 3276.9, 60 sec: 3618.1, 300 sec: 3623.9). Total num frames: 3018752. Throughput: 0: 902.4. Samples: 753948. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:00:40,100][00394] Avg episode reward: [(0, '4.617')] +[2023-02-27 11:00:42,078][11895] Updated weights for policy 0, policy_version 740 (0.0012) +[2023-02-27 11:00:45,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 3043328. Throughput: 0: 932.6. Samples: 760674. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:00:45,098][00394] Avg episode reward: [(0, '4.546')] +[2023-02-27 11:00:50,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 3059712. Throughput: 0: 933.4. Samples: 763982. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:00:50,098][00394] Avg episode reward: [(0, '4.621')] +[2023-02-27 11:00:53,674][11895] Updated weights for policy 0, policy_version 750 (0.0019) +[2023-02-27 11:00:55,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 3076096. Throughput: 0: 891.9. Samples: 768318. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-27 11:00:55,099][00394] Avg episode reward: [(0, '4.585')] +[2023-02-27 11:01:00,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3637.8). Total num frames: 3092480. Throughput: 0: 909.2. Samples: 773374. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:01:00,094][00394] Avg episode reward: [(0, '4.499')] +[2023-02-27 11:01:04,235][11895] Updated weights for policy 0, policy_version 760 (0.0022) +[2023-02-27 11:01:05,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3651.7). Total num frames: 3112960. Throughput: 0: 932.8. Samples: 776630. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:01:05,093][00394] Avg episode reward: [(0, '4.640')] +[2023-02-27 11:01:10,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 3133440. Throughput: 0: 928.8. Samples: 783060. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:01:10,093][00394] Avg episode reward: [(0, '4.705')] +[2023-02-27 11:01:15,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3637.9). Total num frames: 3145728. Throughput: 0: 894.8. Samples: 787420. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:01:15,098][00394] Avg episode reward: [(0, '4.611')] +[2023-02-27 11:01:16,611][11895] Updated weights for policy 0, policy_version 770 (0.0025) +[2023-02-27 11:01:20,091][00394] Fps is (10 sec: 3276.7, 60 sec: 3618.1, 300 sec: 3637.9). Total num frames: 3166208. Throughput: 0: 894.8. Samples: 789552. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:01:20,093][00394] Avg episode reward: [(0, '4.614')] +[2023-02-27 11:01:25,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3651.7). Total num frames: 3186688. Throughput: 0: 932.5. Samples: 795912. 
Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:01:25,093][00394] Avg episode reward: [(0, '4.762')] +[2023-02-27 11:01:26,264][11895] Updated weights for policy 0, policy_version 780 (0.0026) +[2023-02-27 11:01:30,091][00394] Fps is (10 sec: 4096.1, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 3207168. Throughput: 0: 920.4. Samples: 802090. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-27 11:01:30,095][00394] Avg episode reward: [(0, '4.586')] +[2023-02-27 11:01:35,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3637.8). Total num frames: 3219456. Throughput: 0: 894.1. Samples: 804218. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:01:35,098][00394] Avg episode reward: [(0, '4.527')] +[2023-02-27 11:01:38,969][11895] Updated weights for policy 0, policy_version 790 (0.0013) +[2023-02-27 11:01:40,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 3239936. Throughput: 0: 895.3. Samples: 808606. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:01:40,098][00394] Avg episode reward: [(0, '4.563')] +[2023-02-27 11:01:45,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3665.7). Total num frames: 3260416. Throughput: 0: 935.2. Samples: 815460. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:01:45,098][00394] Avg episode reward: [(0, '4.822')] +[2023-02-27 11:01:48,173][11895] Updated weights for policy 0, policy_version 800 (0.0015) +[2023-02-27 11:01:50,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 3280896. Throughput: 0: 935.6. Samples: 818730. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:01:50,094][00394] Avg episode reward: [(0, '4.806')] +[2023-02-27 11:01:55,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3637.8). Total num frames: 3293184. Throughput: 0: 893.5. Samples: 823268. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:01:55,096][00394] Avg episode reward: [(0, '4.770')] +[2023-02-27 11:02:00,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 3313664. Throughput: 0: 901.6. Samples: 827990. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:02:00,097][00394] Avg episode reward: [(0, '4.617')] +[2023-02-27 11:02:00,107][11881] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000809_3313664.pth... +[2023-02-27 11:02:00,218][11881] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000595_2437120.pth +[2023-02-27 11:02:00,988][11895] Updated weights for policy 0, policy_version 810 (0.0019) +[2023-02-27 11:02:05,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3651.7). Total num frames: 3334144. Throughput: 0: 928.8. Samples: 831346. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:02:05,093][00394] Avg episode reward: [(0, '4.651')] +[2023-02-27 11:02:10,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 3354624. Throughput: 0: 936.8. Samples: 838066. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:02:10,098][00394] Avg episode reward: [(0, '4.839')] +[2023-02-27 11:02:11,008][11895] Updated weights for policy 0, policy_version 820 (0.0014) +[2023-02-27 11:02:15,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 3366912. Throughput: 0: 895.3. Samples: 842380. 
Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-27 11:02:15,098][00394] Avg episode reward: [(0, '4.789')] +[2023-02-27 11:02:20,091][00394] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3623.9). Total num frames: 3383296. Throughput: 0: 894.8. Samples: 844484. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:02:20,094][00394] Avg episode reward: [(0, '4.735')] +[2023-02-27 11:02:23,186][11895] Updated weights for policy 0, policy_version 830 (0.0019) +[2023-02-27 11:02:25,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 3407872. Throughput: 0: 931.8. Samples: 850538. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:02:25,098][00394] Avg episode reward: [(0, '4.634')] +[2023-02-27 11:02:30,096][00394] Fps is (10 sec: 4503.3, 60 sec: 3686.1, 300 sec: 3679.4). Total num frames: 3428352. Throughput: 0: 925.1. Samples: 857096. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) +[2023-02-27 11:02:30,100][00394] Avg episode reward: [(0, '4.475')] +[2023-02-27 11:02:34,278][11895] Updated weights for policy 0, policy_version 840 (0.0023) +[2023-02-27 11:02:35,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 3440640. Throughput: 0: 899.2. Samples: 859196. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:02:35,102][00394] Avg episode reward: [(0, '4.495')] +[2023-02-27 11:02:40,091][00394] Fps is (10 sec: 2868.7, 60 sec: 3618.1, 300 sec: 3623.9). Total num frames: 3457024. Throughput: 0: 893.6. Samples: 863482. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-27 11:02:40,098][00394] Avg episode reward: [(0, '4.626')] +[2023-02-27 11:02:44,952][11895] Updated weights for policy 0, policy_version 850 (0.0016) +[2023-02-27 11:02:45,091][00394] Fps is (10 sec: 4096.1, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 3481600. Throughput: 0: 934.8. Samples: 870058. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:02:45,094][00394] Avg episode reward: [(0, '4.720')] +[2023-02-27 11:02:50,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3665.6). Total num frames: 3497984. Throughput: 0: 935.3. Samples: 873434. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:02:50,095][00394] Avg episode reward: [(0, '4.558')] +[2023-02-27 11:02:55,095][00394] Fps is (10 sec: 3275.5, 60 sec: 3686.2, 300 sec: 3651.6). Total num frames: 3514368. Throughput: 0: 893.9. Samples: 878294. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:02:55,097][00394] Avg episode reward: [(0, '4.391')] +[2023-02-27 11:02:57,327][11895] Updated weights for policy 0, policy_version 860 (0.0017) +[2023-02-27 11:03:00,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3623.9). Total num frames: 3530752. Throughput: 0: 896.3. Samples: 882714. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-27 11:03:00,094][00394] Avg episode reward: [(0, '4.404')] +[2023-02-27 11:03:05,091][00394] Fps is (10 sec: 3687.9, 60 sec: 3618.1, 300 sec: 3651.7). Total num frames: 3551232. Throughput: 0: 924.4. Samples: 886084. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) +[2023-02-27 11:03:05,097][00394] Avg episode reward: [(0, '4.604')] +[2023-02-27 11:03:06,997][11895] Updated weights for policy 0, policy_version 870 (0.0019) +[2023-02-27 11:03:10,091][00394] Fps is (10 sec: 4505.6, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 3575808. Throughput: 0: 937.6. Samples: 892728. 
Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-27 11:03:10,099][00394] Avg episode reward: [(0, '4.688')] +[2023-02-27 11:03:15,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3651.7). Total num frames: 3588096. Throughput: 0: 896.5. Samples: 897436. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) +[2023-02-27 11:03:15,097][00394] Avg episode reward: [(0, '4.614')] +[2023-02-27 11:03:19,737][11895] Updated weights for policy 0, policy_version 880 (0.0016) +[2023-02-27 11:03:20,091][00394] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 3604480. Throughput: 0: 896.2. Samples: 899524. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:03:20,093][00394] Avg episode reward: [(0, '4.798')] +[2023-02-27 11:03:25,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3651.7). Total num frames: 3624960. Throughput: 0: 932.2. Samples: 905432. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:03:25,099][00394] Avg episode reward: [(0, '4.654')] +[2023-02-27 11:03:29,060][11895] Updated weights for policy 0, policy_version 890 (0.0013) +[2023-02-27 11:03:30,092][00394] Fps is (10 sec: 4095.6, 60 sec: 3618.4, 300 sec: 3679.4). Total num frames: 3645440. Throughput: 0: 933.9. Samples: 912086. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:03:30,099][00394] Avg episode reward: [(0, '4.436')] +[2023-02-27 11:03:35,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3651.7). Total num frames: 3661824. Throughput: 0: 906.8. Samples: 914240. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:03:35,099][00394] Avg episode reward: [(0, '4.556')] +[2023-02-27 11:03:40,091][00394] Fps is (10 sec: 3277.1, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 3678208. Throughput: 0: 894.6. Samples: 918546. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) +[2023-02-27 11:03:40,094][00394] Avg episode reward: [(0, '4.492')] +[2023-02-27 11:03:41,786][11895] Updated weights for policy 0, policy_version 900 (0.0033) +[2023-02-27 11:03:45,091][00394] Fps is (10 sec: 3686.3, 60 sec: 3618.1, 300 sec: 3651.8). Total num frames: 3698688. Throughput: 0: 936.5. Samples: 924858. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:03:45,097][00394] Avg episode reward: [(0, '4.363')] +[2023-02-27 11:03:50,091][00394] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 3719168. Throughput: 0: 935.4. Samples: 928176. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) +[2023-02-27 11:03:50,096][00394] Avg episode reward: [(0, '4.451')] +[2023-02-27 11:03:51,891][11895] Updated weights for policy 0, policy_version 910 (0.0018) +[2023-02-27 11:03:55,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.6, 300 sec: 3665.6). Total num frames: 3735552. Throughput: 0: 901.4. Samples: 933292. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) +[2023-02-27 11:03:55,098][00394] Avg episode reward: [(0, '4.517')] +[2023-02-27 11:04:00,091][00394] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3623.9). Total num frames: 3747840. Throughput: 0: 891.5. Samples: 937552. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) +[2023-02-27 11:04:00,100][00394] Avg episode reward: [(0, '4.497')] +[2023-02-27 11:04:00,113][11881] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000915_3747840.pth... 
+[2023-02-27 11:04:00,236][11881] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000702_2875392.pth
+[2023-02-27 11:04:03,914][11895] Updated weights for policy 0, policy_version 920 (0.0046)
+[2023-02-27 11:04:05,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 3772416. Throughput: 0: 913.9. Samples: 940648. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 11:04:05,093][00394] Avg episode reward: [(0, '4.528')]
+[2023-02-27 11:04:10,094][00394] Fps is (10 sec: 4504.2, 60 sec: 3617.9, 300 sec: 3679.4). Total num frames: 3792896. Throughput: 0: 935.0. Samples: 947508. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:04:10,097][00394] Avg episode reward: [(0, '4.720')]
+[2023-02-27 11:04:14,801][11895] Updated weights for policy 0, policy_version 930 (0.0013)
+[2023-02-27 11:04:15,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 3809280. Throughput: 0: 896.5. Samples: 952428. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
+[2023-02-27 11:04:15,098][00394] Avg episode reward: [(0, '4.708')]
+[2023-02-27 11:04:20,092][00394] Fps is (10 sec: 2867.9, 60 sec: 3618.1, 300 sec: 3637.8). Total num frames: 3821568. Throughput: 0: 896.7. Samples: 954592. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:04:20,098][00394] Avg episode reward: [(0, '4.744')]
+[2023-02-27 11:04:25,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 3846144. Throughput: 0: 925.7. Samples: 960204. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:04:25,093][00394] Avg episode reward: [(0, '4.699')]
+[2023-02-27 11:04:25,848][11895] Updated weights for policy 0, policy_version 940 (0.0019)
+[2023-02-27 11:04:30,091][00394] Fps is (10 sec: 4505.9, 60 sec: 3686.5, 300 sec: 3679.5). Total num frames: 3866624. Throughput: 0: 930.4. Samples: 966728. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
+[2023-02-27 11:04:30,098][00394] Avg episode reward: [(0, '4.686')]
+[2023-02-27 11:04:35,091][00394] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 3883008. Throughput: 0: 911.8. Samples: 969208. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 11:04:35,098][00394] Avg episode reward: [(0, '4.696')]
+[2023-02-27 11:04:37,938][11895] Updated weights for policy 0, policy_version 950 (0.0012)
+[2023-02-27 11:04:40,091][00394] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3637.8). Total num frames: 3895296. Throughput: 0: 891.2. Samples: 973396. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-27 11:04:40,094][00394] Avg episode reward: [(0, '4.786')]
+[2023-02-27 11:04:45,091][00394] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3651.7). Total num frames: 3915776. Throughput: 0: 924.2. Samples: 979142. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
+[2023-02-27 11:04:45,095][00394] Avg episode reward: [(0, '4.642')]
+[2023-02-27 11:04:48,443][11895] Updated weights for policy 0, policy_version 960 (0.0029)
+[2023-02-27 11:04:50,091][00394] Fps is (10 sec: 4095.9, 60 sec: 3618.1, 300 sec: 3665.6). Total num frames: 3936256. Throughput: 0: 928.2. Samples: 982418. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
+[2023-02-27 11:04:50,093][00394] Avg episode reward: [(0, '4.615')]
+[2023-02-27 11:04:55,091][00394] Fps is (10 sec: 3686.3, 60 sec: 3618.1, 300 sec: 3651.7). Total num frames: 3952640. Throughput: 0: 899.3. Samples: 987976. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
+[2023-02-27 11:04:55,098][00394] Avg episode reward: [(0, '4.546')]
+[2023-02-27 11:05:00,091][00394] Fps is (10 sec: 3276.9, 60 sec: 3686.4, 300 sec: 3637.8). Total num frames: 3969024. Throughput: 0: 885.0. Samples: 992254. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
+[2023-02-27 11:05:00,094][00394] Avg episode reward: [(0, '4.628')]
+[2023-02-27 11:05:01,291][11895] Updated weights for policy 0, policy_version 970 (0.0011)
+[2023-02-27 11:05:05,091][00394] Fps is (10 sec: 3686.5, 60 sec: 3618.1, 300 sec: 3651.7). Total num frames: 3989504. Throughput: 0: 897.7. Samples: 994986. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
+[2023-02-27 11:05:05,098][00394] Avg episode reward: [(0, '4.637')]
+[2023-02-27 11:05:08,730][11881] Stopping Batcher_0...
+[2023-02-27 11:05:08,732][11881] Loop batcher_evt_loop terminating...
+[2023-02-27 11:05:08,734][11881] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-27 11:05:08,738][00394] Component Batcher_0 stopped!
+[2023-02-27 11:05:08,783][11895] Weights refcount: 2 0
+[2023-02-27 11:05:08,793][00394] Component InferenceWorker_p0-w0 stopped!
+[2023-02-27 11:05:08,793][11895] Stopping InferenceWorker_p0-w0...
+[2023-02-27 11:05:08,804][11895] Loop inference_proc0-0_evt_loop terminating...
+[2023-02-27 11:05:08,811][00394] Component RolloutWorker_w0 stopped!
+[2023-02-27 11:05:08,816][00394] Component RolloutWorker_w5 stopped!
+[2023-02-27 11:05:08,819][11901] Stopping RolloutWorker_w5...
+[2023-02-27 11:05:08,811][11896] Stopping RolloutWorker_w0...
+[2023-02-27 11:05:08,828][11902] Stopping RolloutWorker_w6...
+[2023-02-27 11:05:08,826][11896] Loop rollout_proc0_evt_loop terminating...
+[2023-02-27 11:05:08,827][00394] Component RolloutWorker_w6 stopped!
+[2023-02-27 11:05:08,820][11901] Loop rollout_proc5_evt_loop terminating...
+[2023-02-27 11:05:08,836][11899] Stopping RolloutWorker_w4...
+[2023-02-27 11:05:08,836][00394] Component RolloutWorker_w4 stopped!
+[2023-02-27 11:05:08,832][11902] Loop rollout_proc6_evt_loop terminating...
+[2023-02-27 11:05:08,849][00394] Component RolloutWorker_w3 stopped!
+[2023-02-27 11:05:08,850][11898] Stopping RolloutWorker_w2...
+[2023-02-27 11:05:08,838][11899] Loop rollout_proc4_evt_loop terminating...
+[2023-02-27 11:05:08,852][00394] Component RolloutWorker_w2 stopped!
+[2023-02-27 11:05:08,856][11900] Stopping RolloutWorker_w3...
+[2023-02-27 11:05:08,861][11898] Loop rollout_proc2_evt_loop terminating...
+[2023-02-27 11:05:08,871][00394] Component RolloutWorker_w1 stopped!
+[2023-02-27 11:05:08,875][11897] Stopping RolloutWorker_w1...
+[2023-02-27 11:05:08,875][11897] Loop rollout_proc1_evt_loop terminating...
+[2023-02-27 11:05:08,879][00394] Component RolloutWorker_w7 stopped!
+[2023-02-27 11:05:08,883][11903] Stopping RolloutWorker_w7...
+[2023-02-27 11:05:08,876][11900] Loop rollout_proc3_evt_loop terminating...
+[2023-02-27 11:05:08,889][11903] Loop rollout_proc7_evt_loop terminating...
+[2023-02-27 11:05:08,919][11881] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000809_3313664.pth
+[2023-02-27 11:05:08,932][11881] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-27 11:05:09,118][00394] Component LearnerWorker_p0 stopped!
+[2023-02-27 11:05:09,123][00394] Waiting for process learner_proc0 to stop...
+[2023-02-27 11:05:09,126][11881] Stopping LearnerWorker_p0...
+[2023-02-27 11:05:09,127][11881] Loop learner_proc0_evt_loop terminating...
+[2023-02-27 11:05:11,086][00394] Waiting for process inference_proc0-0 to join...
+[2023-02-27 11:05:11,753][00394] Waiting for process rollout_proc0 to join...
+[2023-02-27 11:05:11,760][00394] Waiting for process rollout_proc1 to join...
+[2023-02-27 11:05:12,402][00394] Waiting for process rollout_proc2 to join...
+[2023-02-27 11:05:12,411][00394] Waiting for process rollout_proc3 to join...
+[2023-02-27 11:05:12,414][00394] Waiting for process rollout_proc4 to join...
+[2023-02-27 11:05:12,416][00394] Waiting for process rollout_proc5 to join...
+[2023-02-27 11:05:12,417][00394] Waiting for process rollout_proc6 to join...
+[2023-02-27 11:05:12,421][00394] Waiting for process rollout_proc7 to join...
+[2023-02-27 11:05:12,422][00394] Batcher 0 profile tree view:
+batching: 25.7700, releasing_batches: 0.0255
+[2023-02-27 11:05:12,424][00394] InferenceWorker_p0-w0 profile tree view:
+wait_policy: 0.0000
+  wait_policy_total: 554.6697
+update_model: 7.7464
+  weight_update: 0.0014
+one_step: 0.0059
+  handle_policy_step: 516.4659
+    deserialize: 15.0708, stack: 2.9974, obs_to_device_normalize: 114.0557, forward: 248.2549, send_messages: 26.1759
+    prepare_outputs: 83.6037
+      to_cpu: 51.9044
+[2023-02-27 11:05:12,425][00394] Learner 0 profile tree view:
+misc: 0.0067, prepare_batch: 16.5836
+train: 76.4678
+  epoch_init: 0.0097, minibatch_init: 0.0103, losses_postprocess: 0.5752, kl_divergence: 0.5372, after_optimizer: 32.9634
+  calculate_losses: 27.5546
+    losses_init: 0.0143, forward_head: 1.8299, bptt_initial: 18.0597, tail: 1.1343, advantages_returns: 0.2970, losses: 3.5739
+    bptt: 2.3090
+      bptt_forward_core: 2.2016
+  update: 14.2526
+    clip: 1.4293
+[2023-02-27 11:05:12,428][00394] RolloutWorker_w0 profile tree view:
+wait_for_trajectories: 0.4081, enqueue_policy_requests: 149.7261, env_step: 841.2801, overhead: 21.5995, complete_rollouts: 7.4514
+save_policy_outputs: 20.6615
+  split_output_tensors: 10.3818
+[2023-02-27 11:05:12,431][00394] RolloutWorker_w7 profile tree view:
+wait_for_trajectories: 0.3089, enqueue_policy_requests: 154.1436, env_step: 839.9402, overhead: 20.9316, complete_rollouts: 6.7222
+save_policy_outputs: 20.2122
+  split_output_tensors: 9.9854
+[2023-02-27 11:05:12,432][00394] Loop Runner_EvtLoop terminating...
+[2023-02-27 11:05:12,434][00394] Runner profile tree view:
+main_loop: 1146.9944
+[2023-02-27 11:05:12,437][00394] Collected {0: 4005888}, FPS: 3492.5
+[2023-02-27 11:06:05,797][00394] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2023-02-27 11:06:05,799][00394] Overriding arg 'num_workers' with value 1 passed from command line
+[2023-02-27 11:06:05,804][00394] Adding new argument 'no_render'=True that is not in the saved config file!
+[2023-02-27 11:06:05,806][00394] Adding new argument 'save_video'=True that is not in the saved config file!
+[2023-02-27 11:06:05,809][00394] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-27 11:06:05,812][00394] Adding new argument 'video_name'=None that is not in the saved config file!
+[2023-02-27 11:06:05,814][00394] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-27 11:06:05,816][00394] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2023-02-27 11:06:05,817][00394] Adding new argument 'push_to_hub'=False that is not in the saved config file!
+[2023-02-27 11:06:05,819][00394] Adding new argument 'hf_repository'=None that is not in the saved config file!
+[2023-02-27 11:06:05,821][00394] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2023-02-27 11:06:05,823][00394] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2023-02-27 11:06:05,824][00394] Adding new argument 'train_script'=None that is not in the saved config file!
+[2023-02-27 11:06:05,826][00394] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+[2023-02-27 11:06:05,828][00394] Using frameskip 1 and render_action_repeat=4 for evaluation
+[2023-02-27 11:06:05,853][00394] Doom resolution: 160x120, resize resolution: (128, 72)
+[2023-02-27 11:06:05,856][00394] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-27 11:06:05,859][00394] RunningMeanStd input shape: (1,)
+[2023-02-27 11:06:05,876][00394] ConvEncoder: input_channels=3
+[2023-02-27 11:06:06,533][00394] Conv encoder output size: 512
+[2023-02-27 11:06:06,535][00394] Policy head output size: 512
+[2023-02-27 11:06:08,982][00394] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-27 11:06:10,281][00394] Num frames 100...
+[2023-02-27 11:06:10,402][00394] Num frames 200...
+[2023-02-27 11:06:10,529][00394] Num frames 300...
+[2023-02-27 11:06:10,679][00394] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
+[2023-02-27 11:06:10,680][00394] Avg episode reward: 3.840, avg true_objective: 3.840
+[2023-02-27 11:06:10,713][00394] Num frames 400...
+[2023-02-27 11:06:10,827][00394] Num frames 500...
+[2023-02-27 11:06:10,951][00394] Num frames 600...
+[2023-02-27 11:06:11,073][00394] Num frames 700...
+[2023-02-27 11:06:11,196][00394] Num frames 800...
+[2023-02-27 11:06:11,290][00394] Avg episode rewards: #0: 4.660, true rewards: #0: 4.160
+[2023-02-27 11:06:11,291][00394] Avg episode reward: 4.660, avg true_objective: 4.160
+[2023-02-27 11:06:11,378][00394] Num frames 900...
+[2023-02-27 11:06:11,514][00394] Num frames 1000...
+[2023-02-27 11:06:11,632][00394] Num frames 1100...
+[2023-02-27 11:06:11,752][00394] Num frames 1200...
+[2023-02-27 11:06:11,869][00394] Num frames 1300...
+[2023-02-27 11:06:11,997][00394] Num frames 1400...
+[2023-02-27 11:06:12,065][00394] Avg episode rewards: #0: 6.360, true rewards: #0: 4.693
+[2023-02-27 11:06:12,067][00394] Avg episode reward: 6.360, avg true_objective: 4.693
+[2023-02-27 11:06:12,176][00394] Num frames 1500...
+[2023-02-27 11:06:12,303][00394] Num frames 1600...
+[2023-02-27 11:06:12,419][00394] Num frames 1700...
+[2023-02-27 11:06:12,587][00394] Avg episode rewards: #0: 5.730, true rewards: #0: 4.480
+[2023-02-27 11:06:12,589][00394] Avg episode reward: 5.730, avg true_objective: 4.480
+[2023-02-27 11:06:12,602][00394] Num frames 1800...
+[2023-02-27 11:06:12,712][00394] Num frames 1900...
+[2023-02-27 11:06:12,827][00394] Num frames 2000...
+[2023-02-27 11:06:12,947][00394] Num frames 2100...
+[2023-02-27 11:06:13,092][00394] Avg episode rewards: #0: 5.352, true rewards: #0: 4.352
+[2023-02-27 11:06:13,095][00394] Avg episode reward: 5.352, avg true_objective: 4.352
+[2023-02-27 11:06:13,125][00394] Num frames 2200...
+[2023-02-27 11:06:13,243][00394] Num frames 2300...
+[2023-02-27 11:06:13,362][00394] Num frames 2400...
+[2023-02-27 11:06:13,481][00394] Num frames 2500...
+[2023-02-27 11:06:13,612][00394] Num frames 2600...
+[2023-02-27 11:06:13,697][00394] Avg episode rewards: #0: 5.373, true rewards: #0: 4.373
+[2023-02-27 11:06:13,700][00394] Avg episode reward: 5.373, avg true_objective: 4.373
+[2023-02-27 11:06:13,791][00394] Num frames 2700...
+[2023-02-27 11:06:13,911][00394] Num frames 2800...
+[2023-02-27 11:06:14,031][00394] Num frames 2900...
+[2023-02-27 11:06:14,155][00394] Num frames 3000...
+[2023-02-27 11:06:14,222][00394] Avg episode rewards: #0: 5.154, true rewards: #0: 4.297
+[2023-02-27 11:06:14,223][00394] Avg episode reward: 5.154, avg true_objective: 4.297
+[2023-02-27 11:06:14,336][00394] Num frames 3100...
+[2023-02-27 11:06:14,503][00394] Num frames 3200...
+[2023-02-27 11:06:14,686][00394] Num frames 3300...
+[2023-02-27 11:06:14,897][00394] Avg episode rewards: #0: 4.990, true rewards: #0: 4.240
+[2023-02-27 11:06:14,902][00394] Avg episode reward: 4.990, avg true_objective: 4.240
+[2023-02-27 11:06:14,930][00394] Num frames 3400...
+[2023-02-27 11:06:15,100][00394] Num frames 3500...
+[2023-02-27 11:06:15,263][00394] Num frames 3600...
+[2023-02-27 11:06:15,429][00394] Num frames 3700...
+[2023-02-27 11:06:15,602][00394] Num frames 3800...
+[2023-02-27 11:06:15,732][00394] Avg episode rewards: #0: 5.044, true rewards: #0: 4.267
+[2023-02-27 11:06:15,738][00394] Avg episode reward: 5.044, avg true_objective: 4.267
+[2023-02-27 11:06:15,845][00394] Num frames 3900...
+[2023-02-27 11:06:16,014][00394] Num frames 4000...
+[2023-02-27 11:06:16,181][00394] Num frames 4100...
+[2023-02-27 11:06:16,356][00394] Num frames 4200...
+[2023-02-27 11:06:16,456][00394] Avg episode rewards: #0: 4.924, true rewards: #0: 4.224
+[2023-02-27 11:06:16,459][00394] Avg episode reward: 4.924, avg true_objective: 4.224
+[2023-02-27 11:06:38,142][00394] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
+[2023-02-27 11:07:55,668][00394] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+[2023-02-27 11:07:55,669][00394] Overriding arg 'num_workers' with value 1 passed from command line
+[2023-02-27 11:07:55,673][00394] Adding new argument 'no_render'=True that is not in the saved config file!
+[2023-02-27 11:07:55,675][00394] Adding new argument 'save_video'=True that is not in the saved config file!
+[2023-02-27 11:07:55,677][00394] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+[2023-02-27 11:07:55,679][00394] Adding new argument 'video_name'=None that is not in the saved config file!
+[2023-02-27 11:07:55,680][00394] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
+[2023-02-27 11:07:55,681][00394] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+[2023-02-27 11:07:55,683][00394] Adding new argument 'push_to_hub'=True that is not in the saved config file!
+[2023-02-27 11:07:55,684][00394] Adding new argument 'hf_repository'='Clawoo/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
+[2023-02-27 11:07:55,685][00394] Adding new argument 'policy_index'=0 that is not in the saved config file!
+[2023-02-27 11:07:55,686][00394] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+[2023-02-27 11:07:55,687][00394] Adding new argument 'train_script'=None that is not in the saved config file!
+[2023-02-27 11:07:55,689][00394] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+[2023-02-27 11:07:55,690][00394] Using frameskip 1 and render_action_repeat=4 for evaluation
+[2023-02-27 11:07:55,719][00394] RunningMeanStd input shape: (3, 72, 128)
+[2023-02-27 11:07:55,721][00394] RunningMeanStd input shape: (1,)
+[2023-02-27 11:07:55,736][00394] ConvEncoder: input_channels=3
+[2023-02-27 11:07:55,775][00394] Conv encoder output size: 512
+[2023-02-27 11:07:55,779][00394] Policy head output size: 512
+[2023-02-27 11:07:55,799][00394] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
+[2023-02-27 11:07:56,280][00394] Num frames 100...
+[2023-02-27 11:07:56,402][00394] Num frames 200...
+[2023-02-27 11:07:56,521][00394] Num frames 300...
+[2023-02-27 11:07:56,647][00394] Num frames 400...
+[2023-02-27 11:07:56,765][00394] Avg episode rewards: #0: 5.480, true rewards: #0: 4.480
+[2023-02-27 11:07:56,767][00394] Avg episode reward: 5.480, avg true_objective: 4.480
+[2023-02-27 11:07:56,840][00394] Num frames 500...
+[2023-02-27 11:07:56,964][00394] Num frames 600...
+[2023-02-27 11:07:57,096][00394] Num frames 700...
+[2023-02-27 11:07:57,225][00394] Num frames 800...
+[2023-02-27 11:07:57,329][00394] Avg episode rewards: #0: 4.660, true rewards: #0: 4.160
+[2023-02-27 11:07:57,331][00394] Avg episode reward: 4.660, avg true_objective: 4.160
+[2023-02-27 11:07:57,424][00394] Num frames 900...
+[2023-02-27 11:07:57,554][00394] Num frames 1000...
+[2023-02-27 11:07:57,688][00394] Num frames 1100...
+[2023-02-27 11:07:57,811][00394] Num frames 1200...
+[2023-02-27 11:07:57,894][00394] Avg episode rewards: #0: 4.387, true rewards: #0: 4.053
+[2023-02-27 11:07:57,895][00394] Avg episode reward: 4.387, avg true_objective: 4.053
+[2023-02-27 11:07:57,998][00394] Num frames 1300...
+[2023-02-27 11:07:58,138][00394] Num frames 1400...
+[2023-02-27 11:07:58,318][00394] Num frames 1500...
+[2023-02-27 11:07:58,489][00394] Num frames 1600...
+[2023-02-27 11:07:58,653][00394] Avg episode rewards: #0: 4.660, true rewards: #0: 4.160
+[2023-02-27 11:07:58,659][00394] Avg episode reward: 4.660, avg true_objective: 4.160
+[2023-02-27 11:07:58,720][00394] Num frames 1700...
+[2023-02-27 11:07:58,881][00394] Num frames 1800...
+[2023-02-27 11:07:59,047][00394] Num frames 1900...
+[2023-02-27 11:07:59,207][00394] Num frames 2000...
+[2023-02-27 11:07:59,386][00394] Num frames 2100...
+[2023-02-27 11:07:59,468][00394] Avg episode rewards: #0: 4.824, true rewards: #0: 4.224
+[2023-02-27 11:07:59,474][00394] Avg episode reward: 4.824, avg true_objective: 4.224
+[2023-02-27 11:07:59,614][00394] Num frames 2200...
+[2023-02-27 11:07:59,776][00394] Num frames 2300...
+[2023-02-27 11:07:59,945][00394] Num frames 2400...
+[2023-02-27 11:08:00,159][00394] Avg episode rewards: #0: 4.660, true rewards: #0: 4.160
+[2023-02-27 11:08:00,162][00394] Avg episode reward: 4.660, avg true_objective: 4.160
+[2023-02-27 11:08:00,173][00394] Num frames 2500...
+[2023-02-27 11:08:00,347][00394] Num frames 2600...
+[2023-02-27 11:08:00,516][00394] Num frames 2700...
+[2023-02-27 11:08:00,680][00394] Num frames 2800...
+[2023-02-27 11:08:00,866][00394] Avg episode rewards: #0: 4.543, true rewards: #0: 4.114
+[2023-02-27 11:08:00,869][00394] Avg episode reward: 4.543, avg true_objective: 4.114
+[2023-02-27 11:08:00,909][00394] Num frames 2900...
+[2023-02-27 11:08:01,076][00394] Num frames 3000...
+[2023-02-27 11:08:01,238][00394] Num frames 3100...
+[2023-02-27 11:08:01,409][00394] Num frames 3200...
+[2023-02-27 11:08:01,579][00394] Avg episode rewards: #0: 4.455, true rewards: #0: 4.080
+[2023-02-27 11:08:01,581][00394] Avg episode reward: 4.455, avg true_objective: 4.080
+[2023-02-27 11:08:01,642][00394] Num frames 3300...
+[2023-02-27 11:08:01,772][00394] Num frames 3400...
+[2023-02-27 11:08:01,901][00394] Num frames 3500...
+[2023-02-27 11:08:02,025][00394] Num frames 3600...
+[2023-02-27 11:08:02,136][00394] Avg episode rewards: #0: 4.387, true rewards: #0: 4.053
+[2023-02-27 11:08:02,138][00394] Avg episode reward: 4.387, avg true_objective: 4.053
+[2023-02-27 11:08:02,201][00394] Num frames 3700...
+[2023-02-27 11:08:02,318][00394] Num frames 3800...
+[2023-02-27 11:08:02,442][00394] Num frames 3900...
+[2023-02-27 11:08:02,570][00394] Num frames 4000...
+[2023-02-27 11:08:02,663][00394] Avg episode rewards: #0: 4.332, true rewards: #0: 4.032
+[2023-02-27 11:08:02,665][00394] Avg episode reward: 4.332, avg true_objective: 4.032
+[2023-02-27 11:08:21,442][00394] Replay video saved to /content/train_dir/default_experiment/replay.mp4!