We can also convert to a different time scale, for instance from UTC to TT. This uses the same attribute mechanism as above but now returns a new `Time` object:
```python
t2 = t.tt
t2
t2.jd
```
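The same attribute access works for the other scales Astropy supports (TAI, TDB, UT1, and so on); a minimal sketch, assuming the `t` defined above:

```python
# Hedged sketch: each scale attribute returns a new Time object on that scale.
t_tai = t.tai          # UTC -> International Atomic Time
print(t_tai.scale)     # 'tai'
print(t_tai.iso)       # ISO string, now expressed on the TAI scale
```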
Note that both the ISO (ISOT) and JD representations of `t2` differ from those of `t` because they are expressed on the TT time scale. Of course, one cannot tell this from the numbers or strings alone; one format that does keep this information is the `fits` format:
```python
print(t2.fits)
```
Sidereal Time

Apparent or mean sidereal time can be calculated using `sidereal_time()`. The method returns a `Longitude` with units of hourangle, which by default is for the longitude corresponding to the location with which the `Time` object is initialized. Like the scale transformations, ERFA C-library routines are used under the hood, which support calculations following different IAU resolutions. Sample usage:
```python
t = Time('2006-01-15 21:24:37.5', scale='utc', location=('120d', '45d'))
t.sidereal_time('mean')
t.sidereal_time('apparent')
```
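The longitude can also be passed explicitly rather than taken from the `Time` object's location; a minimal sketch, assuming the `t` defined above:

```python
# Hedged sketch: give the longitude explicitly; 'greenwich' is accepted as a shortcut.
t.sidereal_time('apparent', 'greenwich')
t.sidereal_time('mean', longitude='120d')
```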
Time Deltas

Simple time arithmetic is supported using the TimeDelta class. The following operations are available:

* Create a TimeDelta explicitly by instantiating a class object
* Create a TimeDelta by subtracting two Times
* Add a TimeDelta to a Time object to get a new Time
* Subtract a TimeDelta from a Time object to get a new Time
* Add two TimeDelta objects to get a new TimeDelta
* Negate a TimeDelta or take its absolute value
* Multiply or divide a TimeDelta by a constant or array
* Convert TimeDelta objects to and from time-like Quantities

The `TimeDelta` class is derived from the `Time` class and shares many of its properties. One difference is that the time scale has to be one for which one day is exactly 86400 seconds. Hence, the scale cannot be UTC.
```python
t1 = Time('2010-01-01 00:00:00')
t2 = Time('2010-02-01 00:00:00')
dt = t2 - t1  # Difference between two Times
dt
dt.sec

from astropy.time import TimeDelta
dt2 = TimeDelta(50.0, format='sec')
t3 = t2 + dt2  # Add a TimeDelta to a Time
t3
```
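The scaling and `Quantity` conversions listed above look roughly as follows; a minimal sketch, assuming the `dt` computed in the previous cell and that `astropy.units` is available:

```python
import astropy.units as u

# Hedged sketch: TimeDelta arithmetic and Quantity interoperability.
dt * 2                        # scale a TimeDelta by a constant
dt.to(u.hour)                 # convert to a time-like Quantity
dt3 = TimeDelta(36 * u.hour)  # build a TimeDelta directly from a Quantity
```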
Timezones

When a Time object is constructed from a timezone-aware `datetime`, no timezone information is saved in the `Time` object. However, `Time` objects can be converted to timezone-aware datetime objects:
```python
from datetime import datetime
from astropy.time import Time, TimezoneInfo
import astropy.units as u

utc_plus_one_hour = TimezoneInfo(utc_offset=1*u.hour)
dt_aware = datetime(2000, 1, 1, 0, 0, 0, tzinfo=utc_plus_one_hour)
t = Time(dt_aware)  # Loses timezone info, converts to UTC

print(t)  # will return UTC
print(t.to_datetime(timezone=utc_plus_one_hour))  # to timezone-aware datetime
```
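To make the "no timezone information is saved" point concrete: without a `timezone` argument, `to_datetime()` gives back a naive datetime in UTC. A minimal sketch, assuming the `t` constructed above:

```python
# Hedged sketch: the default conversion is a naive (tzinfo=None) datetime in UTC.
naive = t.to_datetime()
print(naive, naive.tzinfo)  # tzinfo is None
```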
Model parameters
```python
# Model parameters
# Relies on objects set up in earlier cells of the notebook: K (the Keras backend),
# np, math, ImageDataGenerator, and the labels/train/test dataframes.
BATCH_SIZE = 64
EPOCHS = 30
LEARNING_RATE = 0.0001
HEIGHT = 64
WIDTH = 64
CANAL = 3
N_CLASSES = labels.shape[0]
ES_PATIENCE = 5
DECAY_DROP = 0.5
DECAY_EPOCHS = 10

classes = list(map(str, range(N_CLASSES)))


def f2_score_thr(threshold=0.5):
    # F2 metric on backend tensors, thresholding predictions at `threshold`.
    def f2_score(y_true, y_pred):
        beta = 2
        y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold), K.floatx())
        true_positives = K.sum(K.clip(y_true * y_pred, 0, 1), axis=1)
        predicted_positives = K.sum(K.clip(y_pred, 0, 1), axis=1)
        possible_positives = K.sum(K.clip(y_true, 0, 1), axis=1)
        precision = true_positives / (predicted_positives + K.epsilon())
        recall = true_positives / (possible_positives + K.epsilon())
        return K.mean(((1 + beta**2) * precision * recall) /
                      ((beta**2) * precision + recall + K.epsilon()))
    return f2_score


def custom_f2(y_true, y_pred):
    # NumPy version of the F2 score for already-binarized labels/predictions.
    beta = 2
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    p = tp / (tp + fp + K.epsilon())
    r = tp / (tp + fn + K.epsilon())
    f2 = (1 + beta**2) * p * r / (p * beta**2 + r + 1e-15)
    return f2


def step_decay(epoch):
    # Step learning-rate decay: halve the rate every DECAY_EPOCHS epochs.
    initial_lrate = LEARNING_RATE
    drop = DECAY_DROP
    epochs_drop = DECAY_EPOCHS
    lrate = initial_lrate * math.pow(drop, math.floor((1 + epoch) / epochs_drop))
    return lrate


train_datagen = ImageDataGenerator(rescale=1./255, validation_split=0.25)

train_generator = train_datagen.flow_from_dataframe(
    dataframe=train,
    directory="../input/imet-2019-fgvc6/train",
    x_col="id",
    y_col="attribute_ids",
    batch_size=BATCH_SIZE,
    shuffle=True,
    class_mode="categorical",
    classes=classes,
    target_size=(HEIGHT, WIDTH),
    subset='training')

valid_generator = train_datagen.flow_from_dataframe(
    dataframe=train,
    directory="../input/imet-2019-fgvc6/train",
    x_col="id",
    y_col="attribute_ids",
    batch_size=BATCH_SIZE,
    shuffle=True,
    class_mode="categorical",
    classes=classes,
    target_size=(HEIGHT, WIDTH),
    subset='validation')

test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_dataframe(
    dataframe=test,
    directory="../input/imet-2019-fgvc6/test",
    x_col="id",
    target_size=(HEIGHT, WIDTH),
    batch_size=1,
    shuffle=False,
    class_mode=None)
```
Found 81928 images belonging to 1103 classes.
Found 27309 images belonging to 1103 classes.
Found 7443 images.
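The `step_decay` schedule defined above is not wired into anything in this cell; a minimal sketch of how it could be attached through Keras's `LearningRateScheduler` callback (the wiring itself is an assumption, not shown in this excerpt):

```python
from keras.callbacks import LearningRateScheduler

# Assumption: attach the step_decay schedule defined above as a training callback.
lr_scheduler = LearningRateScheduler(step_decay, verbose=1)
# later, include it in the callbacks list handed to the training call
```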
Model
```python
# Relies on imports from earlier cells: os, and the Keras layers/applications/
# optimizers/callbacks used below (Input, GlobalAveragePooling2D, Dropout, Dense,
# Model, applications, optimizers, EarlyStopping).
print(os.listdir("../input/nasnetlarge"))


def create_model(input_shape, n_out):
    input_tensor = Input(shape=input_shape)
    base_model = applications.NASNetLarge(weights=None,
                                          include_top=False,
                                          input_tensor=input_tensor)
    base_model.load_weights('../input/nasnetlarge/NASNet-large-no-top.h5')
    x = GlobalAveragePooling2D()(base_model.output)
    x = Dropout(0.5)(x)
    x = Dense(1024, activation='relu')(x)
    x = Dropout(0.5)(x)
    final_output = Dense(n_out, activation='sigmoid', name='final_output')(x)
    model = Model(input_tensor, final_output)
    return model


# Warm up the model.
# First: train only the top layers (which were randomly initialized).
model = create_model(input_shape=(HEIGHT, WIDTH, CANAL), n_out=N_CLASSES)
for layer in model.layers:
    layer.trainable = False
for i in range(-5, 0):
    model.layers[i].trainable = True

optimizer = optimizers.Adam(lr=LEARNING_RATE)
metrics = ["accuracy", "categorical_accuracy"]
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=ES_PATIENCE)
callbacks = [es]
model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=metrics)
model.summary()
```
WARNING:tensorflow: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Colocations handled automatically by placer.
WARNING:tensorflow: calling dropout with keep_prob is deprecated and will be removed in a future version. Please use `rate` instead of `keep_prob` (rate = 1 - keep_prob).

[model.summary() layer table for the NASNetLarge-based model, starting from input_1 (None, 64, 64, 3) and running through the stem and the repeated reduction/normal cells; truncated in this excerpt]
separable_conv_2_bn_normal_left (None, 8, 8, 168) 672 separable_conv_2_normal_left2_4[0 __________________________________________________________________________________________________ separable_conv_2_bn_normal_righ (None, 8, 8, 168) 672 separable_conv_2_normal_right2_4[ __________________________________________________________________________________________________ normal_left3_4 (AveragePooling2 (None, 8, 8, 168) 0 normal_bn_1_4[0][0] __________________________________________________________________________________________________ normal_left4_4 (AveragePooling2 (None, 8, 8, 168) 0 adjust_bn_4[0][0] __________________________________________________________________________________________________ normal_right4_4 (AveragePooling (None, 8, 8, 168) 0 adjust_bn_4[0][0] __________________________________________________________________________________________________ separable_conv_2_bn_normal_left (None, 8, 8, 168) 672 separable_conv_2_normal_left5_4[0 __________________________________________________________________________________________________ normal_add_1_4 (Add) (None, 8, 8, 168) 0 separable_conv_2_bn_normal_left1_ separable_conv_2_bn_normal_right1 __________________________________________________________________________________________________ normal_add_2_4 (Add) (None, 8, 8, 168) 0 separable_conv_2_bn_normal_left2_ separable_conv_2_bn_normal_right2 __________________________________________________________________________________________________ normal_add_3_4 (Add) (None, 8, 8, 168) 0 normal_left3_4[0][0] adjust_bn_4[0][0] __________________________________________________________________________________________________ normal_add_4_4 (Add) (None, 8, 8, 168) 0 normal_left4_4[0][0] normal_right4_4[0][0] __________________________________________________________________________________________________ normal_add_5_4 (Add) (None, 8, 8, 168) 0 separable_conv_2_bn_normal_left5_ normal_bn_1_4[0][0] __________________________________________________________________________________________________ normal_concat_4 (Concatenate) (None, 8, 8, 1008) 0 adjust_bn_4[0][0] normal_add_1_4[0][0] normal_add_2_4[0][0] normal_add_3_4[0][0] normal_add_4_4[0][0] normal_add_5_4[0][0] __________________________________________________________________________________________________ activation_82 (Activation) (None, 8, 8, 1008) 0 normal_concat_3[0][0] __________________________________________________________________________________________________ activation_83 (Activation) (None, 8, 8, 1008) 0 normal_concat_4[0][0] __________________________________________________________________________________________________ adjust_conv_projection_5 (Conv2 (None, 8, 8, 168) 169344 activation_82[0][0] __________________________________________________________________________________________________ normal_conv_1_5 (Conv2D) (None, 8, 8, 168) 169344 activation_83[0][0] __________________________________________________________________________________________________ adjust_bn_5 (BatchNormalization (None, 8, 8, 168) 672 adjust_conv_projection_5[0][0] __________________________________________________________________________________________________ normal_bn_1_5 (BatchNormalizati (None, 8, 8, 168) 672 normal_conv_1_5[0][0] __________________________________________________________________________________________________ activation_84 (Activation) (None, 8, 8, 168) 0 normal_bn_1_5[0][0] __________________________________________________________________________________________________ activation_86 (Activation) 
(None, 8, 8, 168) 0 adjust_bn_5[0][0] __________________________________________________________________________________________________ activation_88 (Activation) (None, 8, 8, 168) 0 adjust_bn_5[0][0] __________________________________________________________________________________________________ activation_90 (Activation) (None, 8, 8, 168) 0 adjust_bn_5[0][0] __________________________________________________________________________________________________ activation_92 (Activation) (None, 8, 8, 168) 0 normal_bn_1_5[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_left1_5 (None, 8, 8, 168) 32424 activation_84[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_right1_ (None, 8, 8, 168) 29736 activation_86[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_left2_5 (None, 8, 8, 168) 32424 activation_88[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_right2_ (None, 8, 8, 168) 29736 activation_90[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_left5_5 (None, 8, 8, 168) 29736 activation_92[0][0] __________________________________________________________________________________________________ separable_conv_1_bn_normal_left (None, 8, 8, 168) 672 separable_conv_1_normal_left1_5[0 __________________________________________________________________________________________________ separable_conv_1_bn_normal_righ (None, 8, 8, 168) 672 separable_conv_1_normal_right1_5[ __________________________________________________________________________________________________ separable_conv_1_bn_normal_left (None, 8, 8, 168) 672 separable_conv_1_normal_left2_5[0 __________________________________________________________________________________________________ separable_conv_1_bn_normal_righ (None, 8, 8, 168) 672 separable_conv_1_normal_right2_5[ __________________________________________________________________________________________________ separable_conv_1_bn_normal_left (None, 8, 8, 168) 672 separable_conv_1_normal_left5_5[0 __________________________________________________________________________________________________ activation_85 (Activation) (None, 8, 8, 168) 0 separable_conv_1_bn_normal_left1_ __________________________________________________________________________________________________ activation_87 (Activation) (None, 8, 8, 168) 0 separable_conv_1_bn_normal_right1 __________________________________________________________________________________________________ activation_89 (Activation) (None, 8, 8, 168) 0 separable_conv_1_bn_normal_left2_ __________________________________________________________________________________________________ activation_91 (Activation) (None, 8, 8, 168) 0 separable_conv_1_bn_normal_right2 __________________________________________________________________________________________________ activation_93 (Activation) (None, 8, 8, 168) 0 separable_conv_1_bn_normal_left5_ __________________________________________________________________________________________________ separable_conv_2_normal_left1_5 (None, 8, 8, 168) 32424 activation_85[0][0] __________________________________________________________________________________________________ separable_conv_2_normal_right1_ (None, 
8, 8, 168) 29736 activation_87[0][0] __________________________________________________________________________________________________ separable_conv_2_normal_left2_5 (None, 8, 8, 168) 32424 activation_89[0][0] __________________________________________________________________________________________________ separable_conv_2_normal_right2_ (None, 8, 8, 168) 29736 activation_91[0][0] __________________________________________________________________________________________________ separable_conv_2_normal_left5_5 (None, 8, 8, 168) 29736 activation_93[0][0] __________________________________________________________________________________________________ separable_conv_2_bn_normal_left (None, 8, 8, 168) 672 separable_conv_2_normal_left1_5[0 __________________________________________________________________________________________________ separable_conv_2_bn_normal_righ (None, 8, 8, 168) 672 separable_conv_2_normal_right1_5[ __________________________________________________________________________________________________ separable_conv_2_bn_normal_left (None, 8, 8, 168) 672 separable_conv_2_normal_left2_5[0 __________________________________________________________________________________________________ separable_conv_2_bn_normal_righ (None, 8, 8, 168) 672 separable_conv_2_normal_right2_5[ __________________________________________________________________________________________________ normal_left3_5 (AveragePooling2 (None, 8, 8, 168) 0 normal_bn_1_5[0][0] __________________________________________________________________________________________________ normal_left4_5 (AveragePooling2 (None, 8, 8, 168) 0 adjust_bn_5[0][0] __________________________________________________________________________________________________ normal_right4_5 (AveragePooling (None, 8, 8, 168) 0 adjust_bn_5[0][0] __________________________________________________________________________________________________ separable_conv_2_bn_normal_left (None, 8, 8, 168) 672 separable_conv_2_normal_left5_5[0 __________________________________________________________________________________________________ normal_add_1_5 (Add) (None, 8, 8, 168) 0 separable_conv_2_bn_normal_left1_ separable_conv_2_bn_normal_right1 __________________________________________________________________________________________________ normal_add_2_5 (Add) (None, 8, 8, 168) 0 separable_conv_2_bn_normal_left2_ separable_conv_2_bn_normal_right2 __________________________________________________________________________________________________ normal_add_3_5 (Add) (None, 8, 8, 168) 0 normal_left3_5[0][0] adjust_bn_5[0][0] __________________________________________________________________________________________________ normal_add_4_5 (Add) (None, 8, 8, 168) 0 normal_left4_5[0][0] normal_right4_5[0][0] __________________________________________________________________________________________________ normal_add_5_5 (Add) (None, 8, 8, 168) 0 separable_conv_2_bn_normal_left5_ normal_bn_1_5[0][0] __________________________________________________________________________________________________ normal_concat_5 (Concatenate) (None, 8, 8, 1008) 0 adjust_bn_5[0][0] normal_add_1_5[0][0] normal_add_2_5[0][0] normal_add_3_5[0][0] normal_add_4_5[0][0] normal_add_5_5[0][0] __________________________________________________________________________________________________ activation_95 (Activation) (None, 8, 8, 1008) 0 normal_concat_5[0][0] __________________________________________________________________________________________________ activation_94 (Activation) (None, 
8, 8, 1008) 0 normal_concat_4[0][0] __________________________________________________________________________________________________ reduction_conv_1_reduce_6 (Conv (None, 8, 8, 336) 338688 activation_95[0][0] __________________________________________________________________________________________________ adjust_conv_projection_reduce_6 (None, 8, 8, 336) 338688 activation_94[0][0] __________________________________________________________________________________________________ reduction_bn_1_reduce_6 (BatchN (None, 8, 8, 336) 1344 reduction_conv_1_reduce_6[0][0] __________________________________________________________________________________________________ adjust_bn_reduce_6 (BatchNormal (None, 8, 8, 336) 1344 adjust_conv_projection_reduce_6[0 __________________________________________________________________________________________________ activation_96 (Activation) (None, 8, 8, 336) 0 reduction_bn_1_reduce_6[0][0] __________________________________________________________________________________________________ activation_98 (Activation) (None, 8, 8, 336) 0 adjust_bn_reduce_6[0][0] __________________________________________________________________________________________________ separable_conv_1_pad_reduction_ (None, 11, 11, 336) 0 activation_96[0][0] __________________________________________________________________________________________________ separable_conv_1_pad_reduction_ (None, 13, 13, 336) 0 activation_98[0][0] __________________________________________________________________________________________________ separable_conv_1_reduction_left (None, 4, 4, 336) 121296 separable_conv_1_pad_reduction_le __________________________________________________________________________________________________ separable_conv_1_reduction_righ (None, 4, 4, 336) 129360 separable_conv_1_pad_reduction_ri __________________________________________________________________________________________________ separable_conv_1_bn_reduction_l (None, 4, 4, 336) 1344 separable_conv_1_reduction_left1_ __________________________________________________________________________________________________ separable_conv_1_bn_reduction_r (None, 4, 4, 336) 1344 separable_conv_1_reduction_right1 __________________________________________________________________________________________________ activation_97 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_reduction_lef __________________________________________________________________________________________________ activation_99 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_reduction_rig __________________________________________________________________________________________________ separable_conv_2_reduction_left (None, 4, 4, 336) 121296 activation_97[0][0] __________________________________________________________________________________________________ separable_conv_2_reduction_righ (None, 4, 4, 336) 129360 activation_99[0][0] __________________________________________________________________________________________________ activation_100 (Activation) (None, 8, 8, 336) 0 adjust_bn_reduce_6[0][0] __________________________________________________________________________________________________ separable_conv_2_bn_reduction_l (None, 4, 4, 336) 1344 separable_conv_2_reduction_left1_ __________________________________________________________________________________________________ separable_conv_2_bn_reduction_r (None, 4, 4, 336) 1344 separable_conv_2_reduction_right1 
__________________________________________________________________________________________________ separable_conv_1_pad_reduction_ (None, 13, 13, 336) 0 activation_100[0][0] __________________________________________________________________________________________________ activation_102 (Activation) (None, 8, 8, 336) 0 adjust_bn_reduce_6[0][0] __________________________________________________________________________________________________ reduction_add_1_reduce_6 (Add) (None, 4, 4, 336) 0 separable_conv_2_bn_reduction_lef separable_conv_2_bn_reduction_rig __________________________________________________________________________________________________ separable_conv_1_reduction_righ (None, 4, 4, 336) 129360 separable_conv_1_pad_reduction_ri __________________________________________________________________________________________________ separable_conv_1_pad_reduction_ (None, 11, 11, 336) 0 activation_102[0][0] __________________________________________________________________________________________________ activation_104 (Activation) (None, 4, 4, 336) 0 reduction_add_1_reduce_6[0][0] __________________________________________________________________________________________________ separable_conv_1_bn_reduction_r (None, 4, 4, 336) 1344 separable_conv_1_reduction_right2 __________________________________________________________________________________________________ separable_conv_1_reduction_righ (None, 4, 4, 336) 121296 separable_conv_1_pad_reduction_ri __________________________________________________________________________________________________ separable_conv_1_reduction_left (None, 4, 4, 336) 115920 activation_104[0][0] __________________________________________________________________________________________________ activation_101 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_reduction_rig __________________________________________________________________________________________________ separable_conv_1_bn_reduction_r (None, 4, 4, 336) 1344 separable_conv_1_reduction_right3 __________________________________________________________________________________________________ separable_conv_1_bn_reduction_l (None, 4, 4, 336) 1344 separable_conv_1_reduction_left4_ __________________________________________________________________________________________________ reduction_pad_1_reduce_6 (ZeroP (None, 9, 9, 336) 0 reduction_bn_1_reduce_6[0][0] __________________________________________________________________________________________________ separable_conv_2_reduction_righ (None, 4, 4, 336) 129360 activation_101[0][0] __________________________________________________________________________________________________ activation_103 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_reduction_rig __________________________________________________________________________________________________ activation_105 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_reduction_lef __________________________________________________________________________________________________ reduction_left2_reduce_6 (MaxPo (None, 4, 4, 336) 0 reduction_pad_1_reduce_6[0][0] __________________________________________________________________________________________________ separable_conv_2_bn_reduction_r (None, 4, 4, 336) 1344 separable_conv_2_reduction_right2 __________________________________________________________________________________________________ separable_conv_2_reduction_righ (None, 4, 4, 336) 121296 activation_103[0][0] 
__________________________________________________________________________________________________ separable_conv_2_reduction_left (None, 4, 4, 336) 115920 activation_105[0][0] __________________________________________________________________________________________________ adjust_relu_1_7 (Activation) (None, 8, 8, 1008) 0 normal_concat_4[0][0] __________________________________________________________________________________________________ reduction_add_2_reduce_6 (Add) (None, 4, 4, 336) 0 reduction_left2_reduce_6[0][0] separable_conv_2_bn_reduction_rig __________________________________________________________________________________________________ reduction_left3_reduce_6 (Avera (None, 4, 4, 336) 0 reduction_pad_1_reduce_6[0][0] __________________________________________________________________________________________________ separable_conv_2_bn_reduction_r (None, 4, 4, 336) 1344 separable_conv_2_reduction_right3 __________________________________________________________________________________________________ reduction_left4_reduce_6 (Avera (None, 4, 4, 336) 0 reduction_add_1_reduce_6[0][0] __________________________________________________________________________________________________ separable_conv_2_bn_reduction_l (None, 4, 4, 336) 1344 separable_conv_2_reduction_left4_ __________________________________________________________________________________________________ reduction_right5_reduce_6 (MaxP (None, 4, 4, 336) 0 reduction_pad_1_reduce_6[0][0] __________________________________________________________________________________________________ zero_padding2d_3 (ZeroPadding2D (None, 9, 9, 1008) 0 adjust_relu_1_7[0][0] __________________________________________________________________________________________________ reduction_add3_reduce_6 (Add) (None, 4, 4, 336) 0 reduction_left3_reduce_6[0][0] separable_conv_2_bn_reduction_rig __________________________________________________________________________________________________ add_3 (Add) (None, 4, 4, 336) 0 reduction_add_2_reduce_6[0][0] reduction_left4_reduce_6[0][0] __________________________________________________________________________________________________ reduction_add4_reduce_6 (Add) (None, 4, 4, 336) 0 separable_conv_2_bn_reduction_lef reduction_right5_reduce_6[0][0] __________________________________________________________________________________________________ cropping2d_3 (Cropping2D) (None, 8, 8, 1008) 0 zero_padding2d_3[0][0] __________________________________________________________________________________________________ reduction_concat_reduce_6 (Conc (None, 4, 4, 1344) 0 reduction_add_2_reduce_6[0][0] reduction_add3_reduce_6[0][0] add_3[0][0] reduction_add4_reduce_6[0][0] __________________________________________________________________________________________________ adjust_avg_pool_1_7 (AveragePoo (None, 4, 4, 1008) 0 adjust_relu_1_7[0][0] __________________________________________________________________________________________________ adjust_avg_pool_2_7 (AveragePoo (None, 4, 4, 1008) 0 cropping2d_3[0][0] __________________________________________________________________________________________________ adjust_conv_1_7 (Conv2D) (None, 4, 4, 168) 169344 adjust_avg_pool_1_7[0][0] __________________________________________________________________________________________________ adjust_conv_2_7 (Conv2D) (None, 4, 4, 168) 169344 adjust_avg_pool_2_7[0][0] __________________________________________________________________________________________________ activation_106 (Activation) (None, 4, 4, 1344) 0 
reduction_concat_reduce_6[0][0] __________________________________________________________________________________________________ concatenate_3 (Concatenate) (None, 4, 4, 336) 0 adjust_conv_1_7[0][0] adjust_conv_2_7[0][0] __________________________________________________________________________________________________ normal_conv_1_7 (Conv2D) (None, 4, 4, 336) 451584 activation_106[0][0] __________________________________________________________________________________________________ adjust_bn_7 (BatchNormalization (None, 4, 4, 336) 1344 concatenate_3[0][0] __________________________________________________________________________________________________ normal_bn_1_7 (BatchNormalizati (None, 4, 4, 336) 1344 normal_conv_1_7[0][0] __________________________________________________________________________________________________ activation_107 (Activation) (None, 4, 4, 336) 0 normal_bn_1_7[0][0] __________________________________________________________________________________________________ activation_109 (Activation) (None, 4, 4, 336) 0 adjust_bn_7[0][0] __________________________________________________________________________________________________ activation_111 (Activation) (None, 4, 4, 336) 0 adjust_bn_7[0][0] __________________________________________________________________________________________________ activation_113 (Activation) (None, 4, 4, 336) 0 adjust_bn_7[0][0] __________________________________________________________________________________________________ activation_115 (Activation) (None, 4, 4, 336) 0 normal_bn_1_7[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_left1_7 (None, 4, 4, 336) 121296 activation_107[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_right1_ (None, 4, 4, 336) 115920 activation_109[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_left2_7 (None, 4, 4, 336) 121296 activation_111[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_right2_ (None, 4, 4, 336) 115920 activation_113[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_left5_7 (None, 4, 4, 336) 115920 activation_115[0][0] __________________________________________________________________________________________________ separable_conv_1_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_1_normal_left1_7[0 __________________________________________________________________________________________________ separable_conv_1_bn_normal_righ (None, 4, 4, 336) 1344 separable_conv_1_normal_right1_7[ __________________________________________________________________________________________________ separable_conv_1_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_1_normal_left2_7[0 __________________________________________________________________________________________________ separable_conv_1_bn_normal_righ (None, 4, 4, 336) 1344 separable_conv_1_normal_right2_7[ __________________________________________________________________________________________________ separable_conv_1_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_1_normal_left5_7[0 __________________________________________________________________________________________________ activation_108 (Activation) (None, 4, 4, 336) 0 
separable_conv_1_bn_normal_left1_ __________________________________________________________________________________________________ activation_110 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_right1 __________________________________________________________________________________________________ activation_112 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_left2_ __________________________________________________________________________________________________ activation_114 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_right2 __________________________________________________________________________________________________ activation_116 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_left5_ __________________________________________________________________________________________________ separable_conv_2_normal_left1_7 (None, 4, 4, 336) 121296 activation_108[0][0] __________________________________________________________________________________________________ separable_conv_2_normal_right1_ (None, 4, 4, 336) 115920 activation_110[0][0] __________________________________________________________________________________________________ separable_conv_2_normal_left2_7 (None, 4, 4, 336) 121296 activation_112[0][0] __________________________________________________________________________________________________ separable_conv_2_normal_right2_ (None, 4, 4, 336) 115920 activation_114[0][0] __________________________________________________________________________________________________ separable_conv_2_normal_left5_7 (None, 4, 4, 336) 115920 activation_116[0][0] __________________________________________________________________________________________________ separable_conv_2_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_2_normal_left1_7[0 __________________________________________________________________________________________________ separable_conv_2_bn_normal_righ (None, 4, 4, 336) 1344 separable_conv_2_normal_right1_7[ __________________________________________________________________________________________________ separable_conv_2_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_2_normal_left2_7[0 __________________________________________________________________________________________________ separable_conv_2_bn_normal_righ (None, 4, 4, 336) 1344 separable_conv_2_normal_right2_7[ __________________________________________________________________________________________________ normal_left3_7 (AveragePooling2 (None, 4, 4, 336) 0 normal_bn_1_7[0][0] __________________________________________________________________________________________________ normal_left4_7 (AveragePooling2 (None, 4, 4, 336) 0 adjust_bn_7[0][0] __________________________________________________________________________________________________ normal_right4_7 (AveragePooling (None, 4, 4, 336) 0 adjust_bn_7[0][0] __________________________________________________________________________________________________ separable_conv_2_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_2_normal_left5_7[0 __________________________________________________________________________________________________ normal_add_1_7 (Add) (None, 4, 4, 336) 0 separable_conv_2_bn_normal_left1_ separable_conv_2_bn_normal_right1 __________________________________________________________________________________________________ normal_add_2_7 (Add) (None, 4, 4, 336) 0 separable_conv_2_bn_normal_left2_ separable_conv_2_bn_normal_right2 
__________________________________________________________________________________________________ normal_add_3_7 (Add) (None, 4, 4, 336) 0 normal_left3_7[0][0] adjust_bn_7[0][0] __________________________________________________________________________________________________ normal_add_4_7 (Add) (None, 4, 4, 336) 0 normal_left4_7[0][0] normal_right4_7[0][0] __________________________________________________________________________________________________ normal_add_5_7 (Add) (None, 4, 4, 336) 0 separable_conv_2_bn_normal_left5_ normal_bn_1_7[0][0] __________________________________________________________________________________________________ normal_concat_7 (Concatenate) (None, 4, 4, 2016) 0 adjust_bn_7[0][0] normal_add_1_7[0][0] normal_add_2_7[0][0] normal_add_3_7[0][0] normal_add_4_7[0][0] normal_add_5_7[0][0] __________________________________________________________________________________________________ activation_117 (Activation) (None, 4, 4, 1344) 0 reduction_concat_reduce_6[0][0] __________________________________________________________________________________________________ activation_118 (Activation) (None, 4, 4, 2016) 0 normal_concat_7[0][0] __________________________________________________________________________________________________ adjust_conv_projection_8 (Conv2 (None, 4, 4, 336) 451584 activation_117[0][0] __________________________________________________________________________________________________ normal_conv_1_8 (Conv2D) (None, 4, 4, 336) 677376 activation_118[0][0] __________________________________________________________________________________________________ adjust_bn_8 (BatchNormalization (None, 4, 4, 336) 1344 adjust_conv_projection_8[0][0] __________________________________________________________________________________________________ normal_bn_1_8 (BatchNormalizati (None, 4, 4, 336) 1344 normal_conv_1_8[0][0] __________________________________________________________________________________________________ activation_119 (Activation) (None, 4, 4, 336) 0 normal_bn_1_8[0][0] __________________________________________________________________________________________________ activation_121 (Activation) (None, 4, 4, 336) 0 adjust_bn_8[0][0] __________________________________________________________________________________________________ activation_123 (Activation) (None, 4, 4, 336) 0 adjust_bn_8[0][0] __________________________________________________________________________________________________ activation_125 (Activation) (None, 4, 4, 336) 0 adjust_bn_8[0][0] __________________________________________________________________________________________________ activation_127 (Activation) (None, 4, 4, 336) 0 normal_bn_1_8[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_left1_8 (None, 4, 4, 336) 121296 activation_119[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_right1_ (None, 4, 4, 336) 115920 activation_121[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_left2_8 (None, 4, 4, 336) 121296 activation_123[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_right2_ (None, 4, 4, 336) 115920 activation_125[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_left5_8 
(None, 4, 4, 336) 115920 activation_127[0][0] __________________________________________________________________________________________________ separable_conv_1_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_1_normal_left1_8[0 __________________________________________________________________________________________________ separable_conv_1_bn_normal_righ (None, 4, 4, 336) 1344 separable_conv_1_normal_right1_8[ __________________________________________________________________________________________________ separable_conv_1_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_1_normal_left2_8[0 __________________________________________________________________________________________________ separable_conv_1_bn_normal_righ (None, 4, 4, 336) 1344 separable_conv_1_normal_right2_8[ __________________________________________________________________________________________________ separable_conv_1_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_1_normal_left5_8[0 __________________________________________________________________________________________________ activation_120 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_left1_ __________________________________________________________________________________________________ activation_122 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_right1 __________________________________________________________________________________________________ activation_124 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_left2_ __________________________________________________________________________________________________ activation_126 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_right2 __________________________________________________________________________________________________ activation_128 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_left5_ __________________________________________________________________________________________________ separable_conv_2_normal_left1_8 (None, 4, 4, 336) 121296 activation_120[0][0] __________________________________________________________________________________________________ separable_conv_2_normal_right1_ (None, 4, 4, 336) 115920 activation_122[0][0] __________________________________________________________________________________________________ separable_conv_2_normal_left2_8 (None, 4, 4, 336) 121296 activation_124[0][0] __________________________________________________________________________________________________ separable_conv_2_normal_right2_ (None, 4, 4, 336) 115920 activation_126[0][0] __________________________________________________________________________________________________ separable_conv_2_normal_left5_8 (None, 4, 4, 336) 115920 activation_128[0][0] __________________________________________________________________________________________________ separable_conv_2_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_2_normal_left1_8[0 __________________________________________________________________________________________________ separable_conv_2_bn_normal_righ (None, 4, 4, 336) 1344 separable_conv_2_normal_right1_8[ __________________________________________________________________________________________________ separable_conv_2_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_2_normal_left2_8[0 __________________________________________________________________________________________________ separable_conv_2_bn_normal_righ (None, 4, 4, 336) 1344 separable_conv_2_normal_right2_8[ 
__________________________________________________________________________________________________ normal_left3_8 (AveragePooling2 (None, 4, 4, 336) 0 normal_bn_1_8[0][0] __________________________________________________________________________________________________ normal_left4_8 (AveragePooling2 (None, 4, 4, 336) 0 adjust_bn_8[0][0] __________________________________________________________________________________________________ normal_right4_8 (AveragePooling (None, 4, 4, 336) 0 adjust_bn_8[0][0] __________________________________________________________________________________________________ separable_conv_2_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_2_normal_left5_8[0 __________________________________________________________________________________________________ normal_add_1_8 (Add) (None, 4, 4, 336) 0 separable_conv_2_bn_normal_left1_ separable_conv_2_bn_normal_right1 __________________________________________________________________________________________________ normal_add_2_8 (Add) (None, 4, 4, 336) 0 separable_conv_2_bn_normal_left2_ separable_conv_2_bn_normal_right2 __________________________________________________________________________________________________ normal_add_3_8 (Add) (None, 4, 4, 336) 0 normal_left3_8[0][0] adjust_bn_8[0][0] __________________________________________________________________________________________________ normal_add_4_8 (Add) (None, 4, 4, 336) 0 normal_left4_8[0][0] normal_right4_8[0][0] __________________________________________________________________________________________________ normal_add_5_8 (Add) (None, 4, 4, 336) 0 separable_conv_2_bn_normal_left5_ normal_bn_1_8[0][0] __________________________________________________________________________________________________ normal_concat_8 (Concatenate) (None, 4, 4, 2016) 0 adjust_bn_8[0][0] normal_add_1_8[0][0] normal_add_2_8[0][0] normal_add_3_8[0][0] normal_add_4_8[0][0] normal_add_5_8[0][0] __________________________________________________________________________________________________ activation_129 (Activation) (None, 4, 4, 2016) 0 normal_concat_7[0][0] __________________________________________________________________________________________________ activation_130 (Activation) (None, 4, 4, 2016) 0 normal_concat_8[0][0] __________________________________________________________________________________________________ adjust_conv_projection_9 (Conv2 (None, 4, 4, 336) 677376 activation_129[0][0] __________________________________________________________________________________________________ normal_conv_1_9 (Conv2D) (None, 4, 4, 336) 677376 activation_130[0][0] __________________________________________________________________________________________________ adjust_bn_9 (BatchNormalization (None, 4, 4, 336) 1344 adjust_conv_projection_9[0][0] __________________________________________________________________________________________________ normal_bn_1_9 (BatchNormalizati (None, 4, 4, 336) 1344 normal_conv_1_9[0][0] __________________________________________________________________________________________________ activation_131 (Activation) (None, 4, 4, 336) 0 normal_bn_1_9[0][0] __________________________________________________________________________________________________ activation_133 (Activation) (None, 4, 4, 336) 0 adjust_bn_9[0][0] __________________________________________________________________________________________________ activation_135 (Activation) (None, 4, 4, 336) 0 adjust_bn_9[0][0] 
__________________________________________________________________________________________________ activation_137 (Activation) (None, 4, 4, 336) 0 adjust_bn_9[0][0] __________________________________________________________________________________________________ activation_139 (Activation) (None, 4, 4, 336) 0 normal_bn_1_9[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_left1_9 (None, 4, 4, 336) 121296 activation_131[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_right1_ (None, 4, 4, 336) 115920 activation_133[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_left2_9 (None, 4, 4, 336) 121296 activation_135[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_right2_ (None, 4, 4, 336) 115920 activation_137[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_left5_9 (None, 4, 4, 336) 115920 activation_139[0][0] __________________________________________________________________________________________________ separable_conv_1_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_1_normal_left1_9[0 __________________________________________________________________________________________________ separable_conv_1_bn_normal_righ (None, 4, 4, 336) 1344 separable_conv_1_normal_right1_9[ __________________________________________________________________________________________________ separable_conv_1_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_1_normal_left2_9[0 __________________________________________________________________________________________________ separable_conv_1_bn_normal_righ (None, 4, 4, 336) 1344 separable_conv_1_normal_right2_9[ __________________________________________________________________________________________________ separable_conv_1_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_1_normal_left5_9[0 __________________________________________________________________________________________________ activation_132 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_left1_ __________________________________________________________________________________________________ activation_134 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_right1 __________________________________________________________________________________________________ activation_136 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_left2_ __________________________________________________________________________________________________ activation_138 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_right2 __________________________________________________________________________________________________ activation_140 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_left5_ __________________________________________________________________________________________________ separable_conv_2_normal_left1_9 (None, 4, 4, 336) 121296 activation_132[0][0] __________________________________________________________________________________________________ separable_conv_2_normal_right1_ (None, 4, 4, 336) 115920 activation_134[0][0] __________________________________________________________________________________________________ separable_conv_2_normal_left2_9 (None, 
4, 4, 336) 121296 activation_136[0][0] __________________________________________________________________________________________________ separable_conv_2_normal_right2_ (None, 4, 4, 336) 115920 activation_138[0][0] __________________________________________________________________________________________________ separable_conv_2_normal_left5_9 (None, 4, 4, 336) 115920 activation_140[0][0] __________________________________________________________________________________________________ separable_conv_2_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_2_normal_left1_9[0 __________________________________________________________________________________________________ separable_conv_2_bn_normal_righ (None, 4, 4, 336) 1344 separable_conv_2_normal_right1_9[ __________________________________________________________________________________________________ separable_conv_2_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_2_normal_left2_9[0 __________________________________________________________________________________________________ separable_conv_2_bn_normal_righ (None, 4, 4, 336) 1344 separable_conv_2_normal_right2_9[ __________________________________________________________________________________________________ normal_left3_9 (AveragePooling2 (None, 4, 4, 336) 0 normal_bn_1_9[0][0] __________________________________________________________________________________________________ normal_left4_9 (AveragePooling2 (None, 4, 4, 336) 0 adjust_bn_9[0][0] __________________________________________________________________________________________________ normal_right4_9 (AveragePooling (None, 4, 4, 336) 0 adjust_bn_9[0][0] __________________________________________________________________________________________________ separable_conv_2_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_2_normal_left5_9[0 __________________________________________________________________________________________________ normal_add_1_9 (Add) (None, 4, 4, 336) 0 separable_conv_2_bn_normal_left1_ separable_conv_2_bn_normal_right1 __________________________________________________________________________________________________ normal_add_2_9 (Add) (None, 4, 4, 336) 0 separable_conv_2_bn_normal_left2_ separable_conv_2_bn_normal_right2 __________________________________________________________________________________________________ normal_add_3_9 (Add) (None, 4, 4, 336) 0 normal_left3_9[0][0] adjust_bn_9[0][0] __________________________________________________________________________________________________ normal_add_4_9 (Add) (None, 4, 4, 336) 0 normal_left4_9[0][0] normal_right4_9[0][0] __________________________________________________________________________________________________ normal_add_5_9 (Add) (None, 4, 4, 336) 0 separable_conv_2_bn_normal_left5_ normal_bn_1_9[0][0] __________________________________________________________________________________________________ normal_concat_9 (Concatenate) (None, 4, 4, 2016) 0 adjust_bn_9[0][0] normal_add_1_9[0][0] normal_add_2_9[0][0] normal_add_3_9[0][0] normal_add_4_9[0][0] normal_add_5_9[0][0] __________________________________________________________________________________________________ activation_141 (Activation) (None, 4, 4, 2016) 0 normal_concat_8[0][0] __________________________________________________________________________________________________ activation_142 (Activation) (None, 4, 4, 2016) 0 normal_concat_9[0][0] __________________________________________________________________________________________________ adjust_conv_projection_10 
[Truncated model.summary() output: NASNet normal cells 10 through 17 plus the reduction cell after cell 12, each repeating the same separable-convolution, batch-normalization, add and concatenate pattern; per-branch feature maps are (None, 4, 4, 336) before the reduction and (None, 2, 2, 672) after it, with cell concatenations of (None, 4, 4, 2016) and (None, 2, 2, 4032) respectively.]
__________________________________________________________________________________________________ normal_left4_17 (AveragePooling (None, 2, 2, 672) 0 adjust_bn_17[0][0] __________________________________________________________________________________________________ normal_right4_17 (AveragePoolin (None, 2, 2, 672) 0 adjust_bn_17[0][0] __________________________________________________________________________________________________ separable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left5_17[ __________________________________________________________________________________________________ normal_add_1_17 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left1_ separable_conv_2_bn_normal_right1 __________________________________________________________________________________________________ normal_add_2_17 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left2_ separable_conv_2_bn_normal_right2 __________________________________________________________________________________________________ normal_add_3_17 (Add) (None, 2, 2, 672) 0 normal_left3_17[0][0] adjust_bn_17[0][0] __________________________________________________________________________________________________ normal_add_4_17 (Add) (None, 2, 2, 672) 0 normal_left4_17[0][0] normal_right4_17[0][0] __________________________________________________________________________________________________ normal_add_5_17 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left5_ normal_bn_1_17[0][0] __________________________________________________________________________________________________ normal_concat_17 (Concatenate) (None, 2, 2, 4032) 0 adjust_bn_17[0][0] normal_add_1_17[0][0] normal_add_2_17[0][0] normal_add_3_17[0][0] normal_add_4_17[0][0] normal_add_5_17[0][0] __________________________________________________________________________________________________ activation_248 (Activation) (None, 2, 2, 4032) 0 normal_concat_16[0][0] __________________________________________________________________________________________________ activation_249 (Activation) (None, 2, 2, 4032) 0 normal_concat_17[0][0] __________________________________________________________________________________________________ adjust_conv_projection_18 (Conv (None, 2, 2, 672) 2709504 activation_248[0][0] __________________________________________________________________________________________________ normal_conv_1_18 (Conv2D) (None, 2, 2, 672) 2709504 activation_249[0][0] __________________________________________________________________________________________________ adjust_bn_18 (BatchNormalizatio (None, 2, 2, 672) 2688 adjust_conv_projection_18[0][0] __________________________________________________________________________________________________ normal_bn_1_18 (BatchNormalizat (None, 2, 2, 672) 2688 normal_conv_1_18[0][0] __________________________________________________________________________________________________ activation_250 (Activation) (None, 2, 2, 672) 0 normal_bn_1_18[0][0] __________________________________________________________________________________________________ activation_252 (Activation) (None, 2, 2, 672) 0 adjust_bn_18[0][0] __________________________________________________________________________________________________ activation_254 (Activation) (None, 2, 2, 672) 0 adjust_bn_18[0][0] __________________________________________________________________________________________________ activation_256 (Activation) (None, 2, 2, 672) 0 adjust_bn_18[0][0] 
__________________________________________________________________________________________________ activation_258 (Activation) (None, 2, 2, 672) 0 normal_bn_1_18[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_left1_1 (None, 2, 2, 672) 468384 activation_250[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_right1_ (None, 2, 2, 672) 457632 activation_252[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_left2_1 (None, 2, 2, 672) 468384 activation_254[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_right2_ (None, 2, 2, 672) 457632 activation_256[0][0] __________________________________________________________________________________________________ separable_conv_1_normal_left5_1 (None, 2, 2, 672) 457632 activation_258[0][0] __________________________________________________________________________________________________ separable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_1_normal_left1_18[ __________________________________________________________________________________________________ separable_conv_1_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_1_normal_right1_18 __________________________________________________________________________________________________ separable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_1_normal_left2_18[ __________________________________________________________________________________________________ separable_conv_1_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_1_normal_right2_18 __________________________________________________________________________________________________ separable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_1_normal_left5_18[ __________________________________________________________________________________________________ activation_251 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left1_ __________________________________________________________________________________________________ activation_253 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_right1 __________________________________________________________________________________________________ activation_255 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left2_ __________________________________________________________________________________________________ activation_257 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_right2 __________________________________________________________________________________________________ activation_259 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left5_ __________________________________________________________________________________________________ separable_conv_2_normal_left1_1 (None, 2, 2, 672) 468384 activation_251[0][0] __________________________________________________________________________________________________ separable_conv_2_normal_right1_ (None, 2, 2, 672) 457632 activation_253[0][0] __________________________________________________________________________________________________ separable_conv_2_normal_left2_1 (None, 2, 2, 672) 468384 activation_255[0][0] __________________________________________________________________________________________________ 
separable_conv_2_normal_right2_ (None, 2, 2, 672) 457632 activation_257[0][0] __________________________________________________________________________________________________ separable_conv_2_normal_left5_1 (None, 2, 2, 672) 457632 activation_259[0][0] __________________________________________________________________________________________________ separable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left1_18[ __________________________________________________________________________________________________ separable_conv_2_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_2_normal_right1_18 __________________________________________________________________________________________________ separable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left2_18[ __________________________________________________________________________________________________ separable_conv_2_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_2_normal_right2_18 __________________________________________________________________________________________________ normal_left3_18 (AveragePooling (None, 2, 2, 672) 0 normal_bn_1_18[0][0] __________________________________________________________________________________________________ normal_left4_18 (AveragePooling (None, 2, 2, 672) 0 adjust_bn_18[0][0] __________________________________________________________________________________________________ normal_right4_18 (AveragePoolin (None, 2, 2, 672) 0 adjust_bn_18[0][0] __________________________________________________________________________________________________ separable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left5_18[ __________________________________________________________________________________________________ normal_add_1_18 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left1_ separable_conv_2_bn_normal_right1 __________________________________________________________________________________________________ normal_add_2_18 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left2_ separable_conv_2_bn_normal_right2 __________________________________________________________________________________________________ normal_add_3_18 (Add) (None, 2, 2, 672) 0 normal_left3_18[0][0] adjust_bn_18[0][0] __________________________________________________________________________________________________ normal_add_4_18 (Add) (None, 2, 2, 672) 0 normal_left4_18[0][0] normal_right4_18[0][0] __________________________________________________________________________________________________ normal_add_5_18 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left5_ normal_bn_1_18[0][0] __________________________________________________________________________________________________ normal_concat_18 (Concatenate) (None, 2, 2, 4032) 0 adjust_bn_18[0][0] normal_add_1_18[0][0] normal_add_2_18[0][0] normal_add_3_18[0][0] normal_add_4_18[0][0] normal_add_5_18[0][0] __________________________________________________________________________________________________ activation_260 (Activation) (None, 2, 2, 4032) 0 normal_concat_18[0][0] __________________________________________________________________________________________________ global_average_pooling2d_1 (Glo (None, 4032) 0 activation_260[0][0] __________________________________________________________________________________________________ dropout_1 (Dropout) (None, 4032) 0 global_average_pooling2d_1[0][0] 
__________________________________________________________________________________________________ dense_1 (Dense) (None, 1024) 4129792 dropout_1[0][0] __________________________________________________________________________________________________ dropout_2 (Dropout) (None, 1024) 0 dense_1[0][0] __________________________________________________________________________________________________ final_output (Dense) (None, 1103) 1130575 dropout_2[0][0] ================================================================================================== Total params: 90,177,185 Trainable params: 5,260,367 Non-trainable params: 84,916,818 __________________________________________________________________________________________________
MIT
Model backlog/Deep Learning/NasNetLarge/[43th] - Fine-tune - NasNetLarge - head.ipynb
dimitreOliveira/iMet-Collection-2019-FGVC6
Train top layers
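The model itself was assembled earlier in the notebook. For context, below is a minimal sketch of how a frozen NASNetLarge base with this classification head might be put together. The layer shapes (GlobalAveragePooling, Dropout, Dense 1024, Dropout, Dense 1103) are taken from the summary above; the dropout rates, the relu activation, the binary cross-entropy loss, and the metric list are assumptions, and some Keras versions constrain the allowed input size when loading ImageNet weights for NASNet.

```python
from keras.applications.nasnet import NASNetLarge
from keras.layers import GlobalAveragePooling2D, Dropout, Dense
from keras.models import Model
from keras.optimizers import Adam

# Frozen pretrained base; the non-trainable parameters in the summary come from here
base_model = NASNetLarge(weights='imagenet', include_top=False,
                         input_shape=(HEIGHT, WIDTH, CANAL))
for layer in base_model.layers:
    layer.trainable = False

# Trainable head, matching the layer shapes reported by model.summary() above
x = GlobalAveragePooling2D()(base_model.output)
x = Dropout(0.5)(x)                              # dropout rate is an assumption
x = Dense(1024, activation='relu')(x)            # activation is an assumption
x = Dropout(0.5)(x)
output = Dense(N_CLASSES, activation='sigmoid', name='final_output')(x)

model = Model(inputs=base_model.input, outputs=output)
model.compile(optimizer=Adam(lr=LEARNING_RATE),
              loss='binary_crossentropy',        # multi-label setup; assumed
              metrics=['accuracy', 'categorical_accuracy'])
```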
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size

history = model.fit_generator(generator=train_generator,
                              steps_per_epoch=STEP_SIZE_TRAIN,
                              validation_data=valid_generator,
                              validation_steps=STEP_SIZE_VALID,
                              epochs=EPOCHS,
                              callbacks=callbacks,
                              verbose=2,
                              max_queue_size=16, workers=3, use_multiprocessing=True)
WARNING:tensorflow:From /opt/conda/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.cast instead. Epoch 1/30 - 579s - loss: 0.0435 - acc: 0.9903 - categorical_accuracy: 0.0462 - val_loss: 0.1462 - val_acc: 0.9971 - val_categorical_accuracy: 0.0245 Epoch 2/30 - 565s - loss: 0.0192 - acc: 0.9970 - categorical_accuracy: 0.0713 - val_loss: 0.0919 - val_acc: 0.9971 - val_categorical_accuracy: 0.0304 Epoch 3/30 - 559s - loss: 0.0166 - acc: 0.9971 - categorical_accuracy: 0.0783 - val_loss: 0.0503 - val_acc: 0.9971 - val_categorical_accuracy: 0.0248 Epoch 4/30 - 552s - loss: 0.0153 - acc: 0.9971 - categorical_accuracy: 0.0843 - val_loss: 0.0295 - val_acc: 0.9971 - val_categorical_accuracy: 0.0299 Epoch 5/30 - 569s - loss: 0.0144 - acc: 0.9971 - categorical_accuracy: 0.0856 - val_loss: 0.0200 - val_acc: 0.9971 - val_categorical_accuracy: 0.0274 Epoch 6/30 - 563s - loss: 0.0140 - acc: 0.9971 - categorical_accuracy: 0.0838 - val_loss: 0.0165 - val_acc: 0.9971 - val_categorical_accuracy: 0.0346 Epoch 7/30 - 550s - loss: 0.0136 - acc: 0.9971 - categorical_accuracy: 0.0900 - val_loss: 0.0152 - val_acc: 0.9971 - val_categorical_accuracy: 0.0262 Epoch 8/30 - 549s - loss: 0.0134 - acc: 0.9972 - categorical_accuracy: 0.0884 - val_loss: 0.0149 - val_acc: 0.9971 - val_categorical_accuracy: 0.0270 Epoch 9/30 - 549s - loss: 0.0134 - acc: 0.9971 - categorical_accuracy: 0.0905 - val_loss: 0.0150 - val_acc: 0.9971 - val_categorical_accuracy: 0.0299 Epoch 10/30 - 542s - loss: 0.0132 - acc: 0.9971 - categorical_accuracy: 0.0925 - val_loss: 0.0149 - val_acc: 0.9971 - val_categorical_accuracy: 0.0248 Epoch 11/30 - 548s - loss: 0.0131 - acc: 0.9972 - categorical_accuracy: 0.0915 - val_loss: 0.0149 - val_acc: 0.9971 - val_categorical_accuracy: 0.0242 Epoch 12/30 - 545s - loss: 0.0131 - acc: 0.9972 - categorical_accuracy: 0.0929 - val_loss: 0.0151 - val_acc: 0.9971 - val_categorical_accuracy: 0.0266 Epoch 13/30 - 540s - loss: 0.0130 - acc: 0.9972 - categorical_accuracy: 0.0938 - val_loss: 0.0150 - val_acc: 0.9971 - val_categorical_accuracy: 0.0251 Epoch 14/30 - 549s - loss: 0.0130 - acc: 0.9972 - categorical_accuracy: 0.0965 - val_loss: 0.0151 - val_acc: 0.9971 - val_categorical_accuracy: 0.0238 Epoch 15/30 - 548s - loss: 0.0130 - acc: 0.9972 - categorical_accuracy: 0.0986 - val_loss: 0.0151 - val_acc: 0.9971 - val_categorical_accuracy: 0.0303 Epoch 00015: early stopping
MIT
Model backlog/Deep Learning/NasNetLarge/[43th] - Fine-tune - NasNetLarge - head.ipynb
dimitreOliveira/iMet-Collection-2019-FGVC6
Complete model graph loss
sns.set_style("whitegrid")
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, sharex='col', figsize=(20,7))

ax1.plot(history.history['loss'], label='Train loss')
ax1.plot(history.history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')

ax2.plot(history.history['acc'], label='Train Accuracy')
ax2.plot(history.history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')

ax3.plot(history.history['categorical_accuracy'], label='Train Cat Accuracy')
ax3.plot(history.history['val_categorical_accuracy'], label='Validation Cat Accuracy')
ax3.legend(loc='best')
ax3.set_title('Cat Accuracy')

plt.xlabel('Epochs')
sns.despine()
plt.show()
_____no_output_____
MIT
Model backlog/Deep Learning/NasNetLarge/[43th] - Fine-tune - NasNetLarge - head.ipynb
dimitreOliveira/iMet-Collection-2019-FGVC6
Find best threshold value
lastFullValPred = np.empty((0, N_CLASSES))
lastFullValLabels = np.empty((0, N_CLASSES))
for i in range(STEP_SIZE_VALID+1):
    im, lbl = next(valid_generator)
    scores = model.predict(im, batch_size=valid_generator.batch_size)
    lastFullValPred = np.append(lastFullValPred, scores, axis=0)
    lastFullValLabels = np.append(lastFullValLabels, lbl, axis=0)
print(lastFullValPred.shape, lastFullValLabels.shape)

def find_best_fixed_threshold(preds, targs, do_plot=True):
    score = []
    thrs = np.arange(0, 0.5, 0.01)
    for thr in thrs:
        score.append(custom_f2(targs, (preds > thr).astype(int)))
    score = np.array(score)
    pm = score.argmax()
    best_thr, best_score = thrs[pm], score[pm].item()
    print(f'thr={best_thr:.3f}', f'F2={best_score:.3f}')
    if do_plot:
        plt.plot(thrs, score)
        plt.vlines(x=best_thr, ymin=score.min(), ymax=score.max())
        plt.text(best_thr+0.03, best_score-0.01, f'$F_{2}=${best_score:.3f}', fontsize=14)
        plt.show()
    return best_thr, best_score

threshold, best_score = find_best_fixed_threshold(lastFullValPred, lastFullValLabels, do_plot=True)
thr=0.050 F2=0.211
MIT
Model backlog/Deep Learning/NasNetLarge/[43th] - Fine-tune - NasNetLarge - head.ipynb
dimitreOliveira/iMet-Collection-2019-FGVC6
Apply model to test set and output predictions
test_generator.reset()
STEP_SIZE_TEST = test_generator.n//test_generator.batch_size
preds = model.predict_generator(test_generator, steps=STEP_SIZE_TEST)

predictions = []
for pred_ar in preds:
    valid = []
    for idx, pred in enumerate(pred_ar):
        if pred > threshold:
            valid.append(idx)
    if len(valid) == 0:
        valid.append(np.argmax(pred_ar))
    predictions.append(valid)

filenames = test_generator.filenames
label_map = {valid_generator.class_indices[k] : k for k in valid_generator.class_indices}

results = pd.DataFrame({'id':filenames, 'attribute_ids':predictions})
results['id'] = results['id'].map(lambda x: str(x)[:-4])
results['attribute_ids'] = results['attribute_ids'].apply(lambda x: list(map(label_map.get, x)))
results["attribute_ids"] = results["attribute_ids"].apply(lambda x: ' '.join(x))
results.to_csv('submission.csv', index=False)
results.head(10)
_____no_output_____
MIT
Model backlog/Deep Learning/NasNetLarge/[43th] - Fine-tune - NasNetLarge - head.ipynb
dimitreOliveira/iMet-Collection-2019-FGVC6
# Civil Air Patrol and Georegistration

In the previous session, we saw how a series of 2D images taken of a 3D scene can be used to recover the 3D information by exploiting geometric constraints of the cameras. Now the question is: how do we take this technique and apply it in a disaster response scenario?

We are going to look at a specific case study, using images from the Low Altitude Disaster Imagery (LADI) dataset, taken by the Civil Air Patrol (CAP). As we work with this dataset, keep in mind the two major questions from the previous lecture:

- _What_ is in an image (e.g. debris, buildings, etc.)?
- _Where_ are these things located _in 3D space_?

## Civil Air Patrol

Civil Air Patrol (CAP) is the civilian auxiliary of the United States Air Force (USAF). The origins of CAP date back to the pre-World War II era. As the Axis powers became a growing threat, civilian aviators in the United States feared that the government would shut down general aviation as a precautionary measure. These aviators thus had to prove to the federal government that civilian aviation was not only not a danger, but actually a benefit to the war effort. As a result of these efforts, two separate programs were created. One was a Civilian Pilot Training Program, intended to increase the pool of people who could operate an aircraft should the need to deploy additional troops arise. The second called for the organization of civilian aviators and opened the door to the creation of CAP. Once the United States entered WWII proper, CAP embarked on a plethora of activities, some of which are still practiced today. It continued its cadet education programs. It also began patrolling the coasts and borders. Finally, in 1942 it started conducting search and rescue (SAR) missions. These missions were a resounding success and remain one of the main components of CAP's work today.

CAP has five congressionally mandated missions:

(1) To provide an organization to—(A) encourage and aid citizens of the United States in contributing their efforts, services, and resources in developing aviation and in maintaining air supremacy; and (B) encourage and develop by example the voluntary contribution of private citizens to the public welfare.

(2) To provide aviation education and training especially to its senior and cadet members.

(3) To encourage and foster civil aviation in local communities.

(4) To provide an organization of private citizens with adequate facilities to assist in meeting local and national emergencies.

(5) To assist the Department of the Air Force in fulfilling its noncombat programs and missions.

Source: https://www.law.cornell.edu/uscode/text/36/40302

CAP's work today revolves around emergency response. CAP is involved in roughly 85% of all SAR missions in the United States and its territories. After natural disasters, CAP is responsible for assessing damage in affected communities, delivering supplies, and providing transportation, in addition to its usual SAR missions.

https://kvia.com/health/2020/06/18/el-paso-civil-air-patrol-flying-virus-tests-to-labs-in-money-saving-effort-for-texas/
https://www.southernminn.com/article_2c5739a5-826f-53bb-a658-922fb1aa1627.html

Part of CAP's emergency programming is taking aerial imagery of affected areas. This imagery is the highest-resolution, most timely imagery we have available of a post-disaster situation. Even the highest-resolution satellite imagery is often limited in its geographical coverage, not very timely, or occluded by clouds.
These are images taken of Puerto Rico after Hurricane Maria in 2017.

CAP has taken hundreds of thousands of images of disaster-affected areas in the past decades. And yet, even though it is some of the best imagery we have access to, it is rarely if ever used in practice. _Why?_

## The LADI dataset

Part of the effort in making CAP imagery more useful is trying to make more sense of the content of the images. To that end, researchers at MIT Lincoln Laboratory released the Low Altitude Disaster Imagery (LADI) dataset. This dataset contains hundreds of thousands of CAP images with crowdsourced labels corresponding to infrastructure, environment, and damage categories. This begins to answer the first of the two questions we set out with initially. We'll start working on these labels tomorrow. For now, we will focus solely on the images themselves.

What are some of the limitations of this dataset?

## Exercise

Imagine you have acquired $200,000 to implement some improvement to the way CAP takes aerial imagery. Hurricane season starts in five months, so whatever improvements you propose need to be implemented by then. Separate into your breakout rooms and answer the following questions:

- What specific hurdles to using CAP images do you want to address? Identify at least two.
- Design a proposal to address the challenges you identified above, taking into account the budget and time constraints. Improvements can be of any sort (technical, political, social, etc.).
- What are the advantages and disadvantages of implementing your proposal?
- Identify at least three different stakeholder groups in this situation. What are their specific needs? How does your proposal address these needs? How does your proposal fall short?
- Draw out a budget breakdown and a timeline, as well as a breakdown of which stakeholders you are prioritizing and why.

Prepare to present these at 1:30pm.
Typically, a solver will initialize the camera positions at their GPS coordinates and use bundle adjustment to correct the errors in the GPS measurements, although there is certainly more than one way to do this.

Let's give this a shot and see what happens! As it so happens, OpenSfM is already equipped to handle GPS coordinates.
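As a toy illustration of recovering $\lambda$ for a two-image reconstruction, one might compare the GPS-derived distance between the two camera positions with the distance between the reconstructed camera centers. This is a sketch only: the coordinates below are made up, and pymap3d (installed later in this notebook if missing) is used just to turn the two GPS fixes into metric offsets.

```python
import numpy as np
from pymap3d import geodetic2enu   # pip install pymap3d if needed

# Hypothetical camera GPS fixes (lat, lon, alt) and reconstructed camera centers
cam1_lla = (18.40, -66.06, 450.0)
cam2_lla = (18.41, -66.06, 450.0)
cam1_rec = np.array([0.0, 0.0, 0.0])    # first camera sits at the reconstruction origin
cam2_rec = np.array([0.0, 1.3, 0.05])   # arbitrary, unscaled reconstruction units

# Express the second GPS fix in a local ENU frame centered on the first camera
e, n, u = geodetic2enu(*cam2_lla, *cam1_lla)
d_gps = np.linalg.norm([e, n, u])              # metres
d_rec = np.linalg.norm(cam2_rec - cam1_rec)    # reconstruction units

scale = d_gps / d_rec   # lambda: multiply reconstructed coordinates by this to get metres
print(scale)
```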
import sys
import open3d as o3d
import json
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import cv2
import os

# Take initial guess of intrinsic parameters through metadata
!opensfm extract_metadata CAP_sample_1
# Detect feature points
!opensfm detect_features CAP_sample_1
# Match feature points across images
!opensfm match_features CAP_sample_1
# This creates "tracks" for the features. That is to say, if a feature in image 1 is matched with one in image 2,
# and in turn that one is matched with one in image 3, then it links the matches between 1 and 3.
!opensfm create_tracks CAP_sample_1
# Calculates the essential matrix, the camera pose and the reconstructed feature points
!opensfm reconstruct CAP_sample_1
# adding the --all command to include all partial reconstructions
!opensfm export_ply --all CAP_sample_1

import open3d as o3d
from open3d import JVisualizer

# it turns out that we have two partial reconstructions from the reconstruct command
# open3d actually has a very convenient way of combining point clouds, just by using the + operator
pcd = o3d.io.read_point_cloud("CAP_sample_1/reconstruction_files/reconstruction_0.ply")
pcd += o3d.io.read_point_cloud("CAP_sample_1/reconstruction_files/reconstruction_1.ply")

visualizer = JVisualizer()
visualizer.add_geometry(pcd)
visualizer.show()
_____no_output_____
FSFULLR
09-Civil_Air_Patrol_and_georegistration.ipynb
bwsi-remote-sensing-2020/12-Civil_Air_Patrol_and_Georegistration
So what are we seeing? We see two collections of points, both mostly coplanar internally (which we expect, given that this is a mostly planar scene), but the two sets are not aligned with each other! Let's look a bit more closely...
# here, we're just going to plot the z (altitude) values of the reconstructed points
point_coord = np.asarray(pcd.points)
plt.hist(point_coord[:, 2].ravel())
plt.show()
_____no_output_____
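One way to confirm that the odd altitude values come from the two partial reconstructions sitting in inconsistent frames is to inspect each exported .ply file separately. A quick check, assuming the two files exported above:

```python
import numpy as np
import open3d as o3d

for fname in ["CAP_sample_1/reconstruction_files/reconstruction_0.ply",
              "CAP_sample_1/reconstruction_files/reconstruction_1.ply"]:
    pts = np.asarray(o3d.io.read_point_cloud(fname).points)
    # Each partial reconstruction is internally consistent, but their z ranges
    # need not agree with each other (or with real altitudes).
    print(fname, "z range:", pts[:, 2].min(), "to", pts[:, 2].max())
```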
FSFULLR
09-Civil_Air_Patrol_and_georegistration.ipynb
bwsi-remote-sensing-2020/12-Civil_Air_Patrol_and_Georegistration
So not only are the points misaligned, but we're getting wild altitude values! **What's going on?**

## Exercise

Let's make a critical assumption: the image coordinates (the GPS coordinates of the camera as it takes each image) all lie on a plane (in the mathematical sense). Answer the following questions:

- How many points are needed to specify a (mathematical) plane?
- In addition to the number of points, what other requirement do those points need to satisfy?
- Look at the visualization above. Do the camera points fulfill that requirement?
- One way to resolve the ambiguity is to determine what direction is "up" (i.e. pointing away from the center of the Earth). Propose a solution to determine the up-vector. You can either assume the same setup that we currently have or propose new sensors/other setups.

### Proposed solution

We're going to make a fair (but limited) assumption that the ground is mostly flat. It turns out we can fit a plane through the reconstructed ground points and find a direction perpendicular to that plane (called the plane normal). If the ground is flat, then the normal should be close enough to the up direction. Note that this assumption does not hold for an area with a lot of inclination; in practice, we would most likely augment this with a Digital Elevation Model (DEM).

To implement the proposed solution, go to the CAP_sample_1/config.yaml file and change "align_orientation_prior" from "horizontal" to "plane_based". Afterwards, we run the previous commands as usual.
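Before re-running the pipeline, here is a small sketch, independent of OpenSfM, of what "fit a plane and take its normal" means in practice: a least-squares plane fit via the SVD, with the singular vector of least variance serving as the normal. For simplicity it uses all points of one partial reconstruction as a stand-in for "ground points", which is only a rough approximation even for a mostly planar scene.

```python
import numpy as np
import open3d as o3d

def plane_normal(points):
    """Estimate the unit normal of the best-fit plane through an (N, 3) array of points."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the direction of
    # least variance, i.e. the normal of the best-fit plane for roughly planar data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    return normal if normal[2] >= 0 else -normal   # flip so "up" has positive z

pts = np.asarray(o3d.io.read_point_cloud(
    "CAP_sample_1/reconstruction_files/reconstruction_0.ply").points)
print(plane_normal(pts))
```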
# This creates "tracks" for the features. That is to say, if a feature in image 1 is matched with one in image 2,
# and in turn that one is matched with one in image 3, then it links the matches between 1 and 3.
!opensfm create_tracks CAP_sample_1
# Calculates the essential matrix, the camera pose and the reconstructed feature points
!opensfm reconstruct CAP_sample_1
# adding the --all command to include all partial reconstructions
!opensfm export_ply --all CAP_sample_1
2020-07-17 15:50:42,846 INFO: reading features 2020-07-17 15:50:42,900 DEBUG: Merging features onto tracks 2020-07-17 15:50:42,943 DEBUG: Good tracks: 3429 2020-07-17 15:50:45,033 INFO: Starting incremental reconstruction 2020-07-17 15:50:45,081 INFO: Starting reconstruction with image_url_pr_10_13_sample_11.jpg and image_url_pr_10_13_sample_12.jpg 2020-07-17 15:50:45,119 INFO: Two-view reconstruction inliers: 1748 / 1748 2020-07-17 15:50:45,316 INFO: Triangulated: 1551 2020-07-17 15:50:45,343 DEBUG: Ceres Solver Report: Iterations: 3, Initial cost: 3.447386e+02, Final cost: 3.387344e+02, Termination: CONVERGENCE 2020-07-17 15:50:45,494 DEBUG: Ceres Solver Report: Iterations: 3, Initial cost: 3.402295e+02, Final cost: 3.387048e+02, Termination: CONVERGENCE Align plane: [ 0.03595051 -0.99933397 0.00625842 0. ] 2020-07-17 15:50:45,725 DEBUG: Ceres Solver Report: Iterations: 16, Initial cost: 2.366617e+01, Final cost: 1.747716e+01, Termination: CONVERGENCE 2020-07-17 15:50:45,733 INFO: Removed outliers: 0 2020-07-17 15:50:45,734 INFO: ------------------------------------------------------- 2020-07-17 15:50:45,751 INFO: image_url_pr_10_13_sample_13.jpg resection inliers: 766 / 769 2020-07-17 15:50:45,786 DEBUG: Ceres Solver Report: Iterations: 4, Initial cost: 6.813228e+01, Final cost: 5.968688e+01, Termination: CONVERGENCE 2020-07-17 15:50:45,787 INFO: Adding image_url_pr_10_13_sample_13.jpg to the reconstruction 2020-07-17 15:50:45,911 INFO: Re-triangulating Align plane: [-1.64391829e-01 8.28579845e-02 9.82908887e-01 1.68926384e-14] 2020-07-17 15:50:47,131 DEBUG: Ceres Solver Report: Iterations: 72, Initial cost: 1.071345e+02, Final cost: 5.625512e+01, Termination: CONVERGENCE 2020-07-17 15:50:47,593 DEBUG: Ceres Solver Report: Iterations: 11, Initial cost: 5.525121e+01, Final cost: 5.462564e+01, Termination: CONVERGENCE 2020-07-17 15:50:47,608 INFO: Removed outliers: 0 2020-07-17 15:50:47,610 INFO: ------------------------------------------------------- 2020-07-17 15:50:47,613 INFO: Some images can not be added 2020-07-17 15:50:47,614 INFO: ------------------------------------------------------- Align plane: [-1.14533010e-01 1.24543053e-01 9.85581665e-01 1.94621923e-14] 2020-07-17 15:50:48,227 DEBUG: Ceres Solver Report: Iterations: 31, Initial cost: 6.335694e+01, Final cost: 5.542189e+01, Termination: CONVERGENCE 2020-07-17 15:50:48,243 INFO: Removed outliers: 0 2020-07-17 15:50:48,275 INFO: {'points_count': 2519, 'cameras_count': 3, 'observations_count': 5860, 'average_track_length': 2.3263199682413656, 'average_track_length_notwo': 3.0} 2020-07-17 15:50:48,275 INFO: Starting reconstruction with image_url_pr_10_13_sample_07.jpg and image_url_pr_10_13_sample_08.jpg 2020-07-17 15:50:48,337 INFO: Two-view reconstruction inliers: 737 / 738 2020-07-17 15:50:48,491 INFO: Triangulated: 738 2020-07-17 15:50:48,502 DEBUG: Ceres Solver Report: Iterations: 2, Initial cost: 4.094747e+02, Final cost: 4.090971e+02, Termination: CONVERGENCE 2020-07-17 15:50:48,564 DEBUG: Ceres Solver Report: Iterations: 2, Initial cost: 4.091877e+02, Final cost: 4.090466e+02, Termination: CONVERGENCE Align plane: [ 0.0163867 -0.99986469 -0.00144285 0. 
] 2020-07-17 15:50:48,738 DEBUG: Ceres Solver Report: Iterations: 28, Initial cost: 1.971526e+01, Final cost: 1.242737e+01, Termination: CONVERGENCE 2020-07-17 15:50:48,742 INFO: Removed outliers: 1 2020-07-17 15:50:48,742 INFO: ------------------------------------------------------- Align plane: [-5.31242776e-02 3.42329121e-02 9.98000961e-01 -3.78115673e-15] 2020-07-17 15:50:48,824 DEBUG: Ceres Solver Report: Iterations: 11, Initial cost: 1.899088e+01, Final cost: 1.239818e+01, Termination: CONVERGENCE 2020-07-17 15:50:48,828 INFO: Removed outliers: 0 2020-07-17 15:50:48,878 INFO: {'points_count': 737, 'cameras_count': 2, 'observations_count': 1474, 'average_track_length': 2.0, 'average_track_length_notwo': -1} 2020-07-17 15:50:48,878 INFO: Reconstruction 0: 3 images, 2519 points 2020-07-17 15:50:48,878 INFO: Reconstruction 1: 2 images, 737 points 2020-07-17 15:50:48,878 INFO: 2 partial reconstructions in total.
FSFULLR
09-Civil_Air_Patrol_and_georegistration.ipynb
bwsi-remote-sensing-2020/12-Civil_Air_Patrol_and_Georegistration
# Georegistration

The process of assigning GPS coordinates to individual pixels is called _georegistration_ or _georeferencing_. This requires us to perform a final transformation from the pixel coordinates of each image to the 3D reconstructed coordinates. Before doing so, it is worthwhile to talk a bit about what exactly our 3D coordinate system is. You might recall that not all coordinate reference systems lend themselves well to geometric transformations. Specifically, we want our 3D coordinate system to be Cartesian (i.e. three orthogonal, right-handed axes). OpenSfM performs its reconstructions in what is known as a *local tangent plane coordinate system*, specifically *local east, north, up (ENU) coordinates*. The way this works is: you select an origin somewhere in the world (in our case, it is saved in the reference_lla.json file), and you align your axes such that the x-axis is parallel to latitudes and increases eastward, the y-axis is parallel to meridians and increases northward, and the z-axis points away from the center of the Earth. The image below shows how this works.

In order to convert from ENU coordinates to geodetic coordinates (i.e. latitude, longitude, altitude), you need to know the origin.
# Origin of our reconstruction, as given by the reference_lla.json (made from the reconstruction)
with open("CAP_sample_1/reference_lla.json", "r") as f:
    reference_lla = json.load(f)
latitude = reference_lla["latitude"]
longitude = reference_lla["longitude"]
altitude = reference_lla["altitude"]

# This is the json file that contains the reconstructed feature points
with open("CAP_sample_1/reconstruction.json", "r") as f:
    reconstructions = json.load(f)
_____no_output_____
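With the origin loaded, converting any reconstructed ENU point to latitude, longitude, and altitude is a single call. A small sketch using pymap3d (which is installed later in this notebook); the ENU point below is made up:

```python
from pymap3d import enu2geodetic

# Hypothetical reconstructed point, in metres east/north/up of the reference origin
e, n, u = 120.0, -45.0, 3.0
lat, lon, alt = enu2geodetic(e, n, u, latitude, longitude, altitude)
print(lat, lon, alt)
```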
FSFULLR
09-Civil_Air_Patrol_and_georegistration.ipynb
bwsi-remote-sensing-2020/12-Civil_Air_Patrol_and_Georegistration
There is a bit of work we need to go through to finalize the georegistration. First, we need to match the reconstructed features with the features in each image; the tracks.csv file and the reconstruction.json can help us do that. The columns of tracks.csv are as follows: image name, track ID (the ID of the reconstructed point), feature ID (the ID of the feature within the image), the *normalized* image coordinates x and y, the normalization factor s, and the RGB color of the feature.
from opensfm.features import denormalized_image_coordinates

# reading the csv
tracks = pd.read_csv("CAP_sample_1/tracks.csv", sep="\t", skiprows=1,
                     names=["image_name", "track_id", "feature_id", "x", "y", "s", "R", "G", "B"])

# we need to denormalize the coordinates to turn them into regular pixel coordinates
normalized_coor = tracks[["x", "y", "s"]]
denormalized_coor = denormalized_image_coordinates(normalized_coor.values, 4496, 3000)

# create a new column with the denormalized coordinates
tracks["denorm_x"] = denormalized_coor[:, 0]
tracks["denorm_y"] = denormalized_coor[:, 1]
_____no_output_____
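To make the linkage concrete, here is a short sketch that takes a few rows of `tracks` for one image and looks up the corresponding reconstructed 3D points by `track_id`. This just re-expresses, for a handful of points, what the georegistration loop below does in bulk:

```python
# Pick one image that appears in the first partial reconstruction
reconst = reconstructions[0]
shot = list(reconst["shots"].keys())[0]

# Features of that image which were successfully reconstructed
ids_3d = set(map(int, reconst["points"].keys()))
shot_tracks = tracks[(tracks["image_name"] == shot) & (tracks["track_id"].isin(ids_3d))]

for _, row in shot_tracks.head(5).iterrows():
    xyz = reconst["points"][str(int(row["track_id"]))]["coordinates"]
    print("pixel ({:.1f}, {:.1f}) -> ENU {}".format(row["denorm_x"], row["denorm_y"], xyz))
```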
FSFULLR
09-Civil_Air_Patrol_and_georegistration.ipynb
bwsi-remote-sensing-2020/12-Civil_Air_Patrol_and_Georegistration
We're going to store the georegistration by creating a new .tif file for every CAP image. As you may recall, .tif files save not just the pixel data but also the projection information that allows the image to be displayed on top of other map data. There are two parts to doing this:

- First, we need to create an _orthorectified_ image. Simply put, this is one that is transformed such that it looks as though you are looking at it from the top down.
- Second, we need to add *ground control points* (GCPs) to the orthorectified image. GCPs are correspondences between world coordinates and pixel coordinates.

Once we add the GCPs, any mapping software can plot the image such that the GCPs are aligned with their underlying coordinates.
import shutil
import gdal, osr
try:
    from pymap3d import enu2geodetic
except:
    !pip install pymap3d
    from pymap3d import enu2geodetic
import random
from skimage import transform

if not os.path.isdir("CAP_sample_1/geotiff/"):
    os.mkdir("CAP_sample_1/geotiff/")
if not os.path.isdir("CAP_sample_1/ortho/"):
    os.mkdir("CAP_sample_1/ortho/")

for reconst in reconstructions:
    for shot in reconst["shots"]:
        # some housekeeping
        shot_name = shot.split(".")[0]
        img = cv2.imread("CAP_sample_1/images/"+shot)
        shape = img.shape

        # here we get the features from the image and their corresponding reconstructed features
        reconst_ids = list(map(int, reconst["points"].keys()))
        tracks_shot = tracks[(tracks["image_name"] == shot) & (tracks["track_id"].isin(reconst_ids))]
        denorm_shot = np.round(tracks_shot[["denorm_x", "denorm_y"]].values)
        reconst_shot = np.array([reconst["points"][str(point)]["coordinates"] for point in tracks_shot["track_id"]])

        # we're going to create an image that is distorted to fit within the world coordinates
        # pix_shot is just the reconstructed feature coordinates offset by some amount so that
        # all coordinates are positive.
        offset = np.min(reconst_shot[:, :2])
        pix_shot = reconst_shot[:, :2] - np.multiply(offset, offset < 0)

        # transformation for the new orthorectified image
        H, inliers = cv2.findHomography(denorm_shot, pix_shot)

        # filtering out points that didn't fit the transformation
        reconst_shot = reconst_shot[inliers.ravel()==1, :]
        denorm_shot = np.round(denorm_shot[inliers.ravel()==1, :])
        pix_shot = np.round(pix_shot[inliers.ravel()==1, :])

        # creating the ortho image
        shape = tuple(np.max(pix_shot, axis=0).astype(int))
        ortho_img = cv2.warpPerspective(img, H, shape)
        cv2.imwrite("CAP_sample_1/ortho/" + shot + "_ortho.jpg", ortho_img)

        # here we convert all of the reconstructed points into lat/lon coordinates
        geo_shot = np.array([enu2geodetic(reconst_shot[i, 0], reconst_shot[i, 1], reconst_shot[i, 2],
                                          latitude, longitude, altitude)
                             for i in range(reconst_shot.shape[0])])
        idx = random.sample(range(len(geo_shot)), 10)
        pix_shot_sample = pix_shot[idx, :]
        geo_shot_sample = geo_shot[idx, :]

        # creating the Ground Control Points
        orig_fn = "CAP_sample_1/ortho/" + shot + "_ortho.jpg"
        fn = "CAP_sample_1/geotiff/" + shot_name + "_GCP.tif"
        orig_ds = gdal.Open(orig_fn)
        gdal.GetDriverByName('GTiff').CreateCopy(fn, orig_ds)
        ds = gdal.Open(fn, gdal.GA_Update)
        sr = osr.SpatialReference()
        sr.SetWellKnownGeogCS('WGS84')
        gcps = [gdal.GCP(geo_shot_sample[i, 1], geo_shot_sample[i, 0], 0,
                         int(pix_shot_sample[i, 0]), int(pix_shot_sample[i, 1]))
                for i in range(geo_shot_sample.shape[0])]
        ds.SetGCPs(gcps, sr.ExportToWkt())
        ds = None

import rasterio
import rasterio.plot

fig = plt.figure(figsize=(15, 15))
files = [os.path.join('CAP_sample_1/geotiff/', f) for f in os.listdir('CAP_sample_1/geotiff/') if f.endswith("tif")]
for i, file in enumerate(files):
    with rasterio.open(file, "r") as dataset:
        # dataset_mask = dataset.read_masks(1)
        # dataset_read = dataset.read(1)
        # rasterio.plot.show(np.ma.masked_where(dataset_mask==0, dataset_read), ax=ax)
        ax = fig.add_subplot(3, 2, i+1)
        rasterio.plot.show(dataset, ax=ax)
        ax.axis("equal")
_____no_output_____
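The GeoTIFFs written above only carry the GCPs; they are not yet resampled onto a north-up grid. If a fully rectified raster is needed (rather than letting the GIS warp on the fly), something along the lines of GDAL's Warp with a thin-plate-spline transform could be applied. This is a sketch and not part of the original workflow; adjust the file names as needed.

```python
import gdal

src = "CAP_sample_1/geotiff/image_url_pr_10_13_sample_11_GCP.tif"     # one of the files created above
dst = "CAP_sample_1/geotiff/image_url_pr_10_13_sample_11_warped.tif"  # hypothetical output path

# Resample the GCP-tagged image onto a WGS84 (EPSG:4326) grid using the embedded control points
gdal.Warp(dst, src, dstSRS="EPSG:4326", tps=True, dstNodata=0)
```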
FSFULLR
09-Civil_Air_Patrol_and_georegistration.ipynb
bwsi-remote-sensing-2020/12-Civil_Air_Patrol_and_Georegistration
# Pre-process input data for coastal variable extraction

Author: Emily Sturdivant; [email protected]

***

Pre-process files to be used in extractor.ipynb (Extract barrier island metrics along transects). See the project [README](https://github.com/esturdivant-usgs/BI-geomorph-extraction/blob/master/README.md) and the Methods Report (Zeigler et al., in review).

## Pre-processing steps

1. Pre-created geomorphic features: dunes, shoreline points, armoring.
2. Inlets
3. Shoreline
4. Transects - extend and sort
5. Transects - tidy

## Notes

This process requires some manipulation of the spatial layers by the user. When applicable, instructions are described in this file.

***

## Import modules
import os
import sys
import pandas as pd
import numpy as np
import arcpy
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')

try:
    import core.functions_warcpy as fwa
    import core.functions as fun
except ImportError as err:
    print("Looks like you need to install the module to your ArcGIS environment. Please see the README for details.")

from core.setvars import *
No module named 'CoastalVarExtractor' Looks like you need to install the module to your ArcGIS environment. To do so: pip install git+https://github.com/esturdivant-usgs/BI-geomorph-extraction.git
CC0-1.0
temp/prepper_old.ipynb
esturdivant-usgs/geomorph-working-files
If you don't want to formally install the module, you'll need to add the path to the package to your system path:

```python
mod_path = r"path\to\dir\BI-geomorph-extraction"  # replace with path to module
sys.path.insert(0, mod_path)
import CoastalVarExtractor.functions_warcpy as fwa
```

## Initialize variables

Based on the project directory and the site and year you have input, setvars.py will set a bunch of variables as the names of folders, files, and fields.

1) Set up the project folder and paths:
from core.setvars import *

# Inputs - vector
orig_trans = os.path.join(arcpy.env.workspace, 'DelmarvaS_SVA_LT')
ShorelinePts = os.path.join(home, 'SLpts')
dlPts = os.path.join(home, 'DLpts')
dhPts = os.path.join(home, 'DHpts')

# Inputs - raster
elevGrid = os.path.join(home, 'DEM')
elevGrid_5m = os.path.join(home, 'DEM_5m')
SubType = os.path.join(home, 'FI11_SubType')
VegType = os.path.join(home, 'FI11_VegType')
VegDens = os.path.join(home, 'FI11_VegDens')
GeoSet = os.path.join(home, 'FI11_GeoSet')
DisMOSH = os.path.join(home, 'FI11_DisMOSH')

# Files to create or modify
armorLines = os.path.join(home, 'armorLines')
inletLines = os.path.join(home, 'inletLines')
SA_bounds = 'SA_bounds'

# Outputs
extendedTrans = os.path.join(home, 'extTrans')
extTrans_tidy = os.path.join(home, 'tidyTrans')
barrierBoundary = os.path.join(home, 'bndpoly_2sl')
shoreline = os.path.join(home, 'ShoreBetweenInlets')
tr_w_anthro = os.path.join(home, 'extTrans_wAnthro')
_____no_output_____
CC0-1.0
temp/prepper_old.ipynb
esturdivant-usgs/geomorph-working-files
## Dunes and armoring

Display the points and the DEM in a GIS to check for irregularities. For example, if shoreline points representing a distance less than X m are visually offset from the general shoreline, they should likely be removed. Another red flag is when the positions of dlows and dhighs in relation to the shore are illogical, i.e. dune crests are seaward of dune toes.

If fill values in the morphology datasets are not -99999, then replace them with Null values. If they are -99999, the extractor can accept the fill values as long as they match those used in the rest of the extractor; it also accepts Null (None or np.nan) values. The morphology datasets do not need to be reprojected to UTM because the find_ClosestPt2Trans_snap() function will reproject them if necessary.

Replace fill values with Null. This is only necessary if the fill values differ from what will be used during the extraction routine to follow (the default is -99999).
fwa.ReplaceValueInFC(dhPts, oldvalue=fill, newvalue=None, fields=["dhigh_z"])       # Dhighs
fwa.ReplaceValueInFC(dlPts, oldvalue=fill, newvalue=None, fields=["dlow_z"])        # Dlows
fwa.ReplaceValueInFC(ShorelinePts, oldvalue=fill, newvalue=None, fields=["slope"])  # Shoreline
_____no_output_____
CC0-1.0
temp/prepper_old.ipynb
esturdivant-usgs/geomorph-working-files
Project the points to UTM if they are not already. If reprojection is performed, we need to change the file name used in later processing. If desired, delete dune points with missing Z values; this is not strictly necessary because you can choose to exclude those points from the beach width calculation.
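No reprojection code is included in this notebook for that first step; if the morphology points do need to be reprojected, a minimal sketch with arcpy might look like the following (the output name and the Describe-based check are assumptions, not part of the original workflow). The next cell then handles the optional deletion of points with fill elevation values.

```python
# Reproject the dune-crest points to the project UTM spatial reference, if necessary
if arcpy.Describe(dhPts).spatialReference.name != utmSR.name:
    dhPts_utm = dhPts + '_utm'                      # hypothetical output name
    arcpy.Project_management(dhPts, dhPts_utm, utmSR)
    dhPts = dhPts_utm                               # use the reprojected file name from here on
```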
# Delete points with fill elevation value from dune crests
fmapdict = fwa.find_similar_fields('DH', dhPts, fields=['_z'])
arcpy.CopyFeatures_management(dhPts, dhPts+'_orig')
fwa.DeleteFeaturesByValue(dhPts, [fmapdict['_z']['src']], deletevalue=-99999)
print('Deleted dune crest points with fill elevation values. Original file is saved with the _orig suffix.')

# Delete points with fill elevation value from dune toes
fmapdict = fwa.find_similar_fields('DL', dlPts, fields=['_z'])
arcpy.CopyFeatures_management(dlPts, dlPts+'_orig')
fwa.DeleteFeaturesByValue(dlPts, [fmapdict['_z']['src']], deletevalue=-99999)
print('Deleted dune toe points with fill elevation values. Original file is saved with the _orig suffix.')
_____no_output_____
CC0-1.0
temp/prepper_old.ipynb
esturdivant-usgs/geomorph-working-files
## Armoring

If the dlows do not capture the entire top-of-beach due to atypical formations caused by anthropogenic modification, you may need to digitize the beachfront armoring. The next code block will generate an empty feature class. Refer to the DEM and orthoimagery. If there is no armoring in the study area, continue. If there is armoring, use the Editing toolbar to add lines to the feature class that trace instances of armoring. Common manifestations of what we call armoring are sandfencing, sandbagging, and concrete seawalls. If there is no armoring file in the project geodatabase, the extractor script will notify you that it is proceeding without armoring.

*__Requires manipulation in GIS__*
arcpy.CreateFeatureclass_management(home, os.path.basename(armorLines), 'POLYLINE', spatial_reference=utmSR)
print("{} created. Now manually digitize the shorefront armoring.".format(armorLines))
_____no_output_____
CC0-1.0
temp/prepper_old.ipynb
esturdivant-usgs/geomorph-working-files
## Inlets

We also need to manually digitize inlets if an inlet delineation does not already exist. To do so, the code below will produce the feature class. Afterwards, use the Editing toolbar to create a line where the oceanside shore meets a tidal inlet. If the study area includes both sides of an inlet, that inlet will be represented by two lines. The inlet lines are used to define the bounds of the oceanside shore, which is also considered the point where the oceanside shore meets the bayside. Inlet lines must intersect the MHW contour. (What do we do when the study area, and not an inlet, marks the end?)

*__Requires manipulation in GIS__*
# manually create lines that correspond to end of land and cross the MHW line (use bndpoly/DEM)
arcpy.CreateFeatureclass_management(home, os.path.basename(inletLines), 'POLYLINE', spatial_reference=utmSR)
print("{} created. Now we'll stop for you to manually create lines at each inlet.".format(inletLines))
_____no_output_____
CC0-1.0
temp/prepper_old.ipynb
esturdivant-usgs/geomorph-working-files
## Shoreline

The shoreline is produced through a combination of the DEM and the shoreline points. The first step converts the DEM to both MTL and MHW contour polygons. Those polygons are combined to produce the full shoreline, which is considered to fall at MHW on the oceanside and at MTL on the bayside (to include partially submerged wetland).

If the study area does not end cleanly at an inlet, create a separate polyline feature class (the default name is 'SA_bounds') and add lines that bisect the shoreline; they should look and function like inlet lines. Specify this in the arguments for DEMtoFullShorelinePoly() and CreateShoreBetweenInlets().

At some small inlets, channel depth may be above MTL. In this case, the script left to its own devices will leave the MTL contour between the two inlet lines. This can be rectified after processing by deleting the mid-inlet features from the temp file 'shore_2split'.
SA_bounds = 'SA_bounds'

bndpoly = fwa.DEMtoFullShorelinePoly(elevGrid_5m, sitevals['MTL'], sitevals['MHW'], inletLines, ShorelinePts)
print('Select features from {} that should not be included in the final shoreline polygon. '.format(bndpoly))
Creating the MTL contour polgon from the DEM... Creating the MHW contour polgon from the DEM... Combining the two polygons... Isolating the above-MTL portion of the polygon to the bayside... User input required! Select extra features in bndpoly for deletion. Recommended technique: select the polygon/s to keep and then Switch Selection.
CC0-1.0
temp/prepper_old.ipynb
esturdivant-usgs/geomorph-working-files
*__Requires display in GIS__*

User input is required to identify only the areas within the study area and eliminate isolated landmasses that are not. Once the features to delete are selected, either delete them in the GIS or run the code below. Make sure the bndpoly variable matches the layer name in the GIS.

__Do not__ select the features in ArcGIS and then run DeleteFeatures in this Notebook Python kernel; that will delete the entire feature class.

```
arcpy.DeleteFeatures_management(bndpoly)
```

The next step snaps the boundary polygon to the shoreline points anywhere they don't already match, as long as they are within 25 m of each other.
bndpoly = 'bndpoly'
barrierBoundary = fwa.NewBNDpoly(bndpoly, ShorelinePts, barrierBoundary, '25 METERS', '50 METERS')
shoreline = fwa.CreateShoreBetweenInlets(barrierBoundary, inletLines, shoreline, ShorelinePts, proj_code)
Splitting \\Mac\stor\Projects\TransectExtraction\FireIsland2010\FireIsland2010.gdb\bndpoly_2sl_edited at inlets... Preserving only those line segments that intersect shoreline points... Dissolving the line to create \\Mac\stor\Projects\TransectExtraction\FireIsland2010\FireIsland2010.gdb\ShoreBetweenInlets...
CC0-1.0
temp/prepper_old.ipynb
esturdivant-usgs/geomorph-working-files
After this step, you'll want to make sure the shoreline looks okay. There should be only one line segment for each stretch of shore between two inlets. Segments may be incorrectly deleted if the shoreline points are missing in the area. Segments may be incorrectly preserved if they intersect a shoreline point. To rectify, either perform manual editing or rerun this code with modifications. Transects - extend, sort, and tidyCreate extendedTrans, NASC transects for the study area extended to cover the island, with gaps filled, and sorted in the field sort_ID. 1. Extend the transects and use a copy of the lines to fill alongshore gaps
# Delete transects over 200 m outside of the study area. if input("Need to remove extra transects? 'y' if barrierBoundary should be used to select. ") == 'y': fwa.RemoveTransectsOutsideBounds(orig_trans, barrierBoundary) trans_extended = fwa.ExtendLine(orig_trans, os.path.join(arcpy.env.scratchGDB, 'trans_ext_temp'), extendlength, proj_code) trans_presort = fwa.CopyAndWipeFC(trans_extended, os.path.join(arcpy.env.scratchGDB, 'trans_presort_temp'), ['sort_ID']) print("MANUALLY: use groups of existing transects in new FC '{}' to fill gaps.".format(trans_presort))
Need to remove extra transects? 'y' if barrierBoundary exists and should be used to select. y \\Mac\stor\Projects\TransectExtraction\Fisherman2014\Fisherman2014.gdb\DelmarvaS_SVA_LT is already projected in UTM. MANUALLY: use groups of existing transects in new FC '\\Mac\stor\Projects\TransectExtraction\Fisherman2014\scratch.gdb\trans_presort_temp' to fill gaps.
CC0-1.0
temp/prepper_old.ipynb
esturdivant-usgs/geomorph-working-files
*__Requires manipulation in GIS__*1. Edit the trans_presort_temp feature class. __Move and rotate__ groups of transects to fill in gaps that are greater than 50 m alongshore. There is no need to preserve the original transects, but avoid overlapping the transects with each other and with the originals. Do not move any transects only slightly: if a transect is moved, it will no longer match its original and will not be deleted in the next stage. If you do slightly move one, either undo the move or delete that line entirely.
fwa.RemoveDuplicates(trans_presort, trans_extended, barrierBoundary)
_____no_output_____
CC0-1.0
temp/prepper_old.ipynb
esturdivant-usgs/geomorph-working-files
2. Sort the transects along the shoreUsually if the shoreline curves, we need to identify different groups of transects for sorting. This is because the GIS will not correctly establish the alongshore order by simple ordering from the identified sort_corner. If this is the case, answer __yes__ to the next prompt.
sort_lines = fwa.SortTransectPrep(spatialref=utmSR)
Do we need to sort the transects in batches to preserve the order? (y/n) y MANUALLY: Add features to sort_lines. Indicate the order of use in 'sort' and the sort corner in 'sort_corn'.
CC0-1.0
temp/prepper_old.ipynb
esturdivant-usgs/geomorph-working-files
*__Requires manipulation in GIS__*The last step generated an empty sort lines feature class if you indicated that transects need to be sorted in batches to preserve the order. Now, the user creates lines that will be used to spatially sort transects in groups. For each group of transects:1. __Create a new line__ in 'sort_lines' that intersects all transects in the group. The transects intersected by the line will be sorted independently before being appended to the preceding groups. (*__add example figure__*)2. __Assign values__ for the fields 'sort,' 'sort_corn,' and 'reverse.' 'sort' indicates the order in which the line should be used and 'sort_corn' indicates the corner from which to perform the spatial sort ('LL', 'UL', etc.). 'reverse' indicates whether the order should be reversed (roughly equivalent to 'DESCENDING').3. Run the following code to create a new sorted transect file.
fwa.SortTransectsFromSortLines(trans_presort, extendedTrans, sort_lines, tID_fld)
Creating new feature class \\Mac\stor\Projects\TransectExtraction\Fisherman2014\Fisherman2014.gdb\extTrans to hold sorted transects... Sorting sort lines by field sort... For each line, creating subset of transects and adding them in order to the new FC... Copying the generated OID values to the transect ID field (sort_ID)...
CC0-1.0
temp/prepper_old.ipynb
esturdivant-usgs/geomorph-working-files
3. Tidy the extended (and sorted) transects to remove overlap*__Requires manipulation in GIS__*Overlapping transects cause problems during conversion to 5-m points and to rasters. We create a separate feature class with the 'tidied' transects, in which the lines don't overlap. This is largely a manual process with the following steps: 1. __Select__ transects to be used to split other transects. Prioritize transects that a) were originally from NASC, b) have dune points within 25 m, and c) are oriented perpendicular to shore. (*__add example figure__*)2. Use the __Copy Features__ geoprocessing tool to copy only the selected transects into a new feature class. If desired, here is the code that could be used to copy the selected features and clear the selection: ```python arcpy.CopyFeatures_management(extendedTrans, overlapTrans_lines) arcpy.SelectLayerByAttribute_management(extendedTrans, "CLEAR_SELECTION") ```3. Run the code below to split the transects at the selected lines of overlap.
overlapTrans_lines = os.path.join(arcpy.env.scratchGDB, 'overlapTrans_lines_temp') if not arcpy.Exists(overlapTrans_lines): overlapTrans_lines = input("Filename of the feature class of only 'boundary' transects: ") trans_x = arcpy.Intersect_analysis([extendedTrans, overlapTrans_lines], os.path.join(arcpy.env.scratchGDB, 'overlap_points_temp'), 'ALL', output_type="POINT") arcpy.SplitLineAtPoint_management(extendedTrans, trans_x, extTrans_tidy)
_____no_output_____
CC0-1.0
temp/prepper_old.ipynb
esturdivant-usgs/geomorph-working-files
Importing all the required libraries
import pandas as pd from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score from sklearn.preprocessing import LabelEncoder from sklearn.ensemble import BaggingClassifier
_____no_output_____
MIT
Airplane_Accident - HackerEarth/8. HackerEarth ML - BaggingClassifier.ipynb
phileinSophos/ML-DL_Problems
Reading the training dataset into a Pandas DataFrame
data = pd.read_csv('train.csv') data.head()
_____no_output_____
MIT
Airplane_Accident - HackerEarth/8. HackerEarth ML - BaggingClassifier.ipynb
phileinSophos/ML-DL_Problems
Getting the target variable into Y
Y = data['Severity'] Y.shape
_____no_output_____
MIT
Airplane_Accident - HackerEarth/8. HackerEarth ML - BaggingClassifier.ipynb
phileinSophos/ML-DL_Problems
Dropping the irrelevant columns from the training data
data = data.drop(columns=['Severity','Accident_ID','Accident_Type_Code','Adverse_Weather_Metric'],axis=1) data.head()
_____no_output_____
MIT
Airplane_Accident - HackerEarth/8. HackerEarth ML - BaggingClassifier.ipynb
phileinSophos/ML-DL_Problems
Creating the LabelEncoder object, which will encode the target severities into numerical form
label_encode = LabelEncoder() y = label_encode.fit_transform(Y) x_train,x_test,y_train,y_test = train_test_split(data,y,test_size=0.3) bag = BaggingClassifier(n_estimators=100,) bag.fit(data,y) predictions = bag.predict(x_test) accuracy_score(y_test,predictions) test_data = pd.read_csv('test.csv') accident_id = test_data['Accident_ID'] print(test_data.shape) test_data = test_data.drop(columns=['Accident_ID','Accident_Type_Code','Adverse_Weather_Metric'],axis=1) test_data.shape predictions = bag.predict(test_data) predictions = label_encode.inverse_transform(predictions) result_df = pd.DataFrame({'Accident_ID':accident_id,'Severity':predictions}) result_df.head() result_df.to_csv('Prediction.csv',index=False)
_____no_output_____
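Note that the cell above fits the BaggingClassifier on the full dataset (`bag.fit(data, y)`) and then scores it on `x_test`, which is drawn from that same data, so the reported accuracy is optimistic. A minimal sketch of the more usual evaluation, fitting only on the training split (it assumes `data` and `y` exist exactly as in the cell above; the `random_state` values are illustrative):

```python
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# split first, then fit only on the training portion
x_train, x_test, y_train, y_test = train_test_split(data, y, test_size=0.3, random_state=42)

bag = BaggingClassifier(n_estimators=100, random_state=42)
bag.fit(x_train, y_train)               # fit on the training split only
val_pred = bag.predict(x_test)          # evaluate on data the model has not seen
print(accuracy_score(y_test, val_pred))
```

This keeps the held-out score honest; the final model can still be refit on all of `data` before predicting on `test.csv`.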
MIT
Airplane_Accident - HackerEarth/8. HackerEarth ML - BaggingClassifier.ipynb
phileinSophos/ML-DL_Problems
"Chapter 1: Data Model"> Introduction about what "Pythonic" means.- toc:true- badges: true- author: JJmachan Pythonic Card DeckTo undertant how python works as a framework it is crutial that you get the Python Data Model. Python is very consistent and by that I mean that once you have some experince with the language you can start to correctly make informed guesses on other features about python even if its new. This will help you make your objects more pythonic by leveraging the options python has for:1. Iteration2. Collections3. Attribute access4. Operator overloading5. Function and method invocation6. Object creation and destruction7. String representation and formatting8. Managed contexts (i.e., with blocks)Studing these will give you the power to make your own python object play nicely with the python language and use many of the freatures mentioned above. In short makes you code "pythonic".Let see an example to show you the power of `__getitem__` and `__len__`.
import collections # namedtuple - tuples with names for each value in it (much like a class) Card = collections.namedtuple('Card', ['rank', 'suit']) c = Card('7', 'diamonds') # individual card object print(c) print(c.rank, c.suit) # class to represent the deck of cards class FrenchDeck: ranks = [str(n) for n in range(2, 11)] + list('JQKA') suits = 'spades diamonds clubs hearts'.split() def __init__(self): self._cards = [Card(rank, suit) for suit in self.suits for rank in self.ranks] def __len__(self): return len(self._cards) def __getitem__(self, position): return self._cards[position] deck = FrenchDeck() # with this simple class, we can already use `len` and `__getitem__` len(deck), deck[0]
_____no_output_____
Apache-2.0
_notebooks/2021-07-02-ch1-data-model.ipynb
jjmachan/fluent-python
Now we have created a class FrenchDeck that is short but still packs a punch. All the basic operations are supported. Now imagine we have another use case: picking a random card. Normally we would add another method, but in this case we can use Python's existing library function `random.choice()`.
from random import choice choice(deck)
_____no_output_____
Apache-2.0
_notebooks/2021-07-02-ch1-data-model.ipynb
jjmachan/fluent-python
> We’ve just seen two advantages of using special methods to leverage the Python data model:> 1. The users of your classes don’t have to memorize arbitrary method names for standard operations (“How to get the number of items? Is it .size(), .length(), or what?”).> 2. It’s easier to benefit from the rich Python standard library and avoid reinventing the wheel, like the random.choice function.But we have even more features
# because of __getitem__, our deck is now slicable deck[1:5] # because of __getitem__, is iterable for card in deck: if card.rank == 'K': print(card) # iteration is often implicit hence if the collection has no __contains__ method # the in operator does a sequential scan. Card('Q', 'spades') in deck Card('M', 'spades') in deck
_____no_output_____
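Because `FrenchDeck` only defines `__len__` and `__getitem__`, it also supports the built-in `reversed()` for free: when an object has no `__reversed__` method, Python falls back to the sequence protocol. A small illustration using the `deck` created above:

```python
# reversed() works through __len__ and __getitem__ when __reversed__ is absent
for card in list(reversed(deck))[:3]:
    print(card)
# Card(rank='A', suit='hearts')
# Card(rank='K', suit='hearts')
# Card(rank='Q', suit='hearts')
```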
Apache-2.0
_notebooks/2021-07-02-ch1-data-model.ipynb
jjmachan/fluent-python
We can also make use of the built-in `sorted()` function. We just need to provide a function that returns the sort value of each card. Here the logic is provided in `spades_high`.
suit_value = dict(spades=3, hearts=2, diamonds=1, clubs=0) def spades_high(card): rank_value = FrenchDeck.ranks.index(card.rank) return rank_value*len(suit_value) + suit_value[card.suit] for card in sorted(deck, key=spades_high)[:10]: print(card)
Card(rank='2', suit='clubs') Card(rank='2', suit='diamonds') Card(rank='2', suit='hearts') Card(rank='2', suit='spades') Card(rank='3', suit='clubs') Card(rank='3', suit='diamonds') Card(rank='3', suit='hearts') Card(rank='3', suit='spades') Card(rank='4', suit='clubs') Card(rank='4', suit='diamonds')
Apache-2.0
_notebooks/2021-07-02-ch1-data-model.ipynb
jjmachan/fluent-python
> Although FrenchDeck implicitly inherits from object, its functionality is not inherited, but comes from leveraging the data model and composition. By implementing the special methods `__len__` and `__getitem__`, our FrenchDeck behaves like a standard Python sequence, allowing it to benefit from core language features (e.g., iteration and slicing) and from the standard library, as shown by the examples using random.choice, reversed, and sorted. Thanks to composition, the `__len__` and `__getitem__` implementations can hand off all the work to a *list* object, `self._cards`. How special methods are usedNormally you just define these special methods and let them be called through the built-in operations like `len()`, `in`, and `[index]`, instead of calling `object.__len__()` directly. This gives you a speed-up in some cases and also plays nicely with other Python library functions, since they are all now interfacing with the same endpoints. Enumerating Numeric TypesSpecial methods can also be used to respond to operators like +, -, etc. We will see an example with vector operations.
from math import hypot class Vector: def __init__(self, x=0, y=0): self.x = x self.y = y def __repr__(self): return 'Vector(%d, %d)' %(self.x, self.y) def __abs__(self): return hypot(self.x, self.y) def __bool__(self): return bool(self.x or self.y) def __add__(self, other): x = self.x + other.x y = self.y + other.y return Vector(x, y) def __mul__(self, scalar): x = scalar * self.x y = scalar * self.y return Vector(x, y) v = Vector(3, 4) a = Vector(0, 0) print(v) print(abs(v)) print(v*2) print(v + a)
Vector(3, 4) 5.0 Vector(6, 8) Vector(3, 4)
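One detail worth noting about the `Vector` above: `v * 2` works because Python calls `Vector.__mul__`, but `2 * v` raises a `TypeError`, since `int.__mul__` returns `NotImplemented` and Python then looks for `Vector.__rmul__`, which we did not define. A minimal sketch of the reflected operator (my own addition, not part of the original cell):

```python
class Vector2(Vector):
    def __rmul__(self, scalar):
        # called for `scalar * vector` once the left operand has given up
        return self * scalar

w = Vector2(3, 4)
print(w * 2)   # Vector(6, 8)
print(2 * w)   # Vector(6, 8) -- now also works, via __rmul__
```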
Apache-2.0
_notebooks/2021-07-02-ch1-data-model.ipynb
jjmachan/fluent-python
As you can see, we implemented many special methods but we don't directly invoke them. The special methods are meant to be invoked by the interpreter most of the time, unless you are doing a lot of metaprogramming.
bool(a)
_____no_output_____
Apache-2.0
_notebooks/2021-07-02-ch1-data-model.ipynb
jjmachan/fluent-python
String RepresentationWe use the `__repr__` special method to get the built-in string representation of the object for inspection (note its usage in the `Vector` object). Contrast `__repr__` with `__str__`, which is called by `str()` and is used to return a string for display to the end user. If you're only implementing one of them, stick with `__repr__`, since `print()` will fall back to it if `__str__` is not found. Arithmetic OperatorsIn the above example we have implemented `__add__` and `__mul__`. Note that in both cases we return a new object, reading from self and other without modifying them. This is the expected behaviour. Boolean Value of Custom TypeIn Python any object can be used in a boolean context. If neither `__bool__` nor `__len__` is implemented, the object is truthy by default. If `__bool__` is implemented, it is called; if not, Python calls `__len__` and checks whether the length is 0.
class Test: def __init__(self, x): self.x = x t = Test(0) t, bool(t) class Test: def __init__(self, x): self.x = x def __bool__(self): return bool(self.x) t = Test(0) t, bool(t)
_____no_output_____
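To make the `__repr__`/`__str__` distinction above concrete, here is a small sketch (my own example, not from the book): `repr()` targets developers and the interactive console, `str()`/`print()` target end users, and `print()` falls back to `__repr__` when `__str__` is missing.

```python
class Temperature:
    def __init__(self, celsius):
        self.celsius = celsius

    def __repr__(self):
        # unambiguous representation for inspection/debugging
        return 'Temperature(celsius=%r)' % self.celsius

    def __str__(self):
        # friendly representation for end users
        return '%g °C' % self.celsius

t = Temperature(21.5)
print(repr(t))   # Temperature(celsius=21.5)
print(t)         # 21.5 °C
```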
Apache-2.0
_notebooks/2021-07-02-ch1-data-model.ipynb
jjmachan/fluent-python
!rm -r sample_data/ import pandas as pd pd.set_option('display.max_columns', None) import numpy as np from sklearn.decomposition import PCA from sklearn.preprocessing import StandardScaler import matplotlib.pyplot as plt import seaborn as sns from scipy.cluster.hierarchy import dendrogram, linkage from scipy.cluster.hierarchy import cophenet from scipy.spatial.distance import pdist # computing the distance from scipy.cluster.hierarchy import inconsistent from scipy.cluster.hierarchy import fcluster
_____no_output_____
OLDAP-2.2.1
PCA_Classification.ipynb
diviramon/NBA-Rookie-Analytics
UTILS
# PCA class derived from skelean standard PCA package # code adapted from: https://github.com/A-Jyad/NBAPlayerClustering class PCA_adv: def __init__(self, data, var_per): self.data = data self.pca = PCA(var_per, random_state = 0) self.PCA = self.pca.fit(self.Standard_Scaler_Preprocess().drop(['PLAYER', 'TEAM', 'POSITION'], axis = 1)) def Standard_Scaler_Preprocess(self): std_scale = StandardScaler() std_scale_data = std_scale.fit_transform(self.data.drop(['PLAYER', 'TEAM', 'POSITION'], axis = 1)) std_scale_data = pd.DataFrame(std_scale_data, columns = self.data.drop(['PLAYER', 'TEAM', 'POSITION'], axis = 1).columns.tolist()) std_scale_data['PLAYER'] = self.data['PLAYER'] std_scale_data['TEAM'] = self.data['TEAM'] std_scale_data['POSITION'] = self.data['POSITION'] return std_scale_data def PCA_name(self): PCA_name = [] for i in range(1, self.PCA.n_components_ + 1): PCA_name += ['PC' + str(i)] return PCA_name def PCA_variance(self): pca_variance = pd.DataFrame({"Variance Explained" : self.PCA.explained_variance_, 'Percentage of Variance Explained' : self.PCA.explained_variance_ratio_}, index = self.PCA_name()) pca_variance['Percentage of Variance Explained'] = (pca_variance['Percentage of Variance Explained'] * 100).round(0) pca_variance['Cumulative Percentage of Variance Explained'] = pca_variance['Percentage of Variance Explained'].cumsum() return pca_variance def PCA_transform(self, n): pca_data = self.pca.fit_transform(self.Standard_Scaler_Preprocess().drop(['PLAYER', 'TEAM', 'POSITION'], axis = 1)) pca_data = pd.DataFrame(pca_data, columns = self.PCA_name()) index = [] for i in range(1, n+1): index += ['PC' + str(i)] pca_data = pca_data[index] pca_data['PLAYER'] = self.Standard_Scaler_Preprocess()['PLAYER'] pca_data['TEAM'] = self.Standard_Scaler_Preprocess()['TEAM'] pca_data['POSITION'] = self.Standard_Scaler_Preprocess()['POSITION'] return pca_data def Heatmap(self): pca_eigen = pd.DataFrame(self.PCA.components_, columns = self.Standard_Scaler_Preprocess().drop(['PLAYER', 'TEAM', 'POSITION'], axis = 1).columns.tolist(), index = self.PCA_name()).T plt.figure(figsize = (10,10)) sns.heatmap(pca_eigen.abs(), vmax = 0.5, vmin = 0) def PCA_sorted_eigen(self, PC): pca_eigen = pd.DataFrame(self.PCA.components_, columns = self.Standard_Scaler_Preprocess().drop(['PLAYER', 'TEAM', 'POSITION'], axis = 1).columns.tolist(), index = self.PCA_name()).T return pca_eigen.loc[pca_eigen[PC].abs().sort_values(ascending = False).index][PC] # simple heat map function def HeatMap(df, vert_min, vert_max): plt.figure(figsize = (10,10)) sns.heatmap(df.corr(), vmin = vert_min, vmax = vert_max, center = 0, cmap = sns.diverging_palette(20, 220, n = 200), square = True) # utility function to normalize the players' data def Standard_Scaler_Preprocess(data): std_scale = StandardScaler() std_scale_data = std_scale.fit_transform(data.drop(['PLAYER', 'TEAM', 'POSITION'], axis = 1)) std_scale_data = pd.DataFrame(std_scale_data, columns = data.drop(['PLAYER', 'TEAM', 'POSITION'], axis = 1).columns.tolist()) std_scale_data['PLAYER'] = data['PLAYER'] std_scale_data['TEAM'] = data['TEAM'] std_scale_data['POSITION'] = data['POSITION'] return std_scale_data # Hierarchical Clustering class # code adapted from: https://github.com/A-Jyad/NBAPlayerClustering class Cluster: def __init__(self, df, method): self.df = df self.method = method self.linked = linkage(self.df, self.method) # calculates cophenete value def cophenet_value(self): c, coph_dists = cophenet(self.linked, pdist(self.df)) return c # denogram plotting function def 
dendrogram_truncated(self, n, y_min = 0, max_d = 0): plt.title('Hierarchical Clustering Dendrogram (truncated)') plt.xlabel('sample index') plt.ylabel('distance') dendro = dendrogram( self.linked, truncate_mode='lastp', # show only the last p merged clusters p=n, # show only the last p merged clusters leaf_rotation=90., leaf_font_size=12., show_contracted=True, # to get a distribution impression in truncated branches ) for i, d, c in zip(dendro['icoord'], dendro['dcoord'], dendro['color_list']): x = 0.5 * sum(i[1:3]) y = d[1] #if y > annotate_above: plt.plot(x, y, 'o', c=c) plt.annotate("%.3g" % y, (x, y), xytext=(0, -5), textcoords='offset points', va='top', ha='center') if max_d: plt.axhline(y=max_d, c='k') plt.ylim(ymin = y_min) plt.show() def inconsistency(self): depth = 3 incons = inconsistent(self.linked, depth) return incons[-15:] # silhoute and elbow plot def elbow_plot(self, cut = 0): last = self.linked[(-1*cut):, 2] last_rev = last[::-1] idxs = np.arange(1, len(last) + 1) plt.plot(idxs, last_rev) acceleration = np.diff(last, 2) # 2nd derivative of the distances self.acceleration_rev = acceleration[::-1] plt.plot(idxs[:-2] + 1, self.acceleration_rev) plt.show() def elbow_point(self): k = self.acceleration_rev.argmax() + 2 # if idx 0 is the max of this we want 2 clusters return k def create_cluster(self, max_d): clusters = fcluster(self.linked, max_d, criterion='distance') return clusters
_____no_output_____
OLDAP-2.2.1
PCA_Classification.ipynb
diviramon/NBA-Rookie-Analytics
DATA LOADING
data = pd.read_csv('Data/career.csv') # csv file with the career averages of all players who played more than 10 seasons data.drop(['Unnamed: 0'], axis =1, inplace=True) # csv conversion automatically creates an index column which is not needed data.head()
_____no_output_____
OLDAP-2.2.1
PCA_Classification.ipynb
diviramon/NBA-Rookie-Analytics
PCA Analysis
pca = PCA_adv(data, 0.89) # create PCA object that covers 89% of the variance pca.PCA_variance() pca_df = pca.PCA_transform(4) # run PCA for the first 4 components pca.Heatmap() # heatmap of the PCs and variables pca.PCA_sorted_eigen('PC1')[:10] # eigenvalues for PC1 pc1 = pca_df[['PLAYER','POSITION','PC1']].copy() pc1.nlargest(10,'PC1') # players with largest PC1 pca.PCA_sorted_eigen('PC2')[:10] # eigenvalues for PC2 pc2 = pca_df[['PLAYER','POSITION','PC2']].copy() pc2.nlargest(10,'PC2') # players with largest PC2 pca.PCA_sorted_eigen('PC3')[:10] # eigenvalues for PC3 pc3 = pca_df[['PLAYER','POSITION','PC3']].copy() pc3.nlargest(10,'PC3') # players with largest PC3 pca.PCA_sorted_eigen('PC4')[:10] # eigenvalues for PC4 pc4 = pca_df[['PLAYER','POSITION','PC4']].copy() pc4.nlargest(10,'PC4') # players with largest PC4 pca_df.head() data_scaled = Standard_Scaler_Preprocess(pca_df) # normalize and standardize the PCA for clustering data_scaled.head() data_scaled.describe().round(1) # check PCs are standardized num_data_scaled = data_scaled.drop(['PLAYER', 'POSITION', 'TEAM'], axis = 1) # keep numerical categories only num_data_scaled.columns
_____no_output_____
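A quick sanity check on the cell above: `PCA_adv(data, 0.89)` passes 0.89 to scikit-learn's PCA, which keeps the smallest number of components explaining at least 89% of the variance, while `PCA_transform(4)` assumes that number is 4. A one-line check using the `pca` object already created above (the expected value of 4 is an assumption from the notebook, not something the code enforces):

```python
# number of principal components actually retained by PCA(0.89)
print(pca.PCA.n_components_)  # expected to be 4 for PCA_transform(4) to be consistent
```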
OLDAP-2.2.1
PCA_Classification.ipynb
diviramon/NBA-Rookie-Analytics
K-MEANS
# elbow test for K-means to predict appropiate number of clusters from sklearn.cluster import KMeans Sum_of_squared_distances = [] K = range(1,20) for k in K: km = KMeans(n_clusters=k) km = km.fit(num_data_scaled) Sum_of_squared_distances.append(km.inertia_) plt.plot(K, Sum_of_squared_distances, 'bx-') plt.xlabel('k') plt.ylabel('Sum_of_squared_distances') plt.title('Elbow Method For Optimal k') plt.show() # Silhouette test for K-means to predict appropiate number of clusters from sklearn.metrics import silhouette_score sil = [] kmax = 10 # dissimilarity would not be defined for a single cluster, thus, minimum number of clusters should be 2 for k in range(2, kmax+1): kmeans = KMeans(n_clusters = k).fit(num_data_scaled) labels = kmeans.labels_ sil.append(silhouette_score(num_data_scaled, labels, metric = 'euclidean')) plt.plot(sil, 'bx-') plt.xlabel('k') plt.ylabel('Silhouette Score') plt.title('Silhouette Method For Optimal k') plt.show() # Run K-means for 6 clusters X = num_data_scaled.copy() kmeans = KMeans(n_clusters=6) kmeans.fit(X) y_kmeans = kmeans.labels_ centers = kmeans.cluster_centers_ # Plot Results from mpl_toolkits.mplot3d import Axes3D import numpy as np X['K-cluster'] = y_kmeans fig = plt.figure(figsize = (10,10)) ax = fig.add_subplot(111, projection='3d') for i in range(6): x = np.array(X[X['K-cluster'] == i]['PC1']) y = np.array(X[X['K-cluster'] == i]['PC2']) z = np.array(X[X['K-cluster'] == i]['PC3']) ax.scatter(x, y, z, marker = 'o', s = 30) plt.title('K-Clusters Results') ax.set_xlabel('PC1') ax.set_ylabel('PC2') ax.set_zlabel('PC3') ax.legend([0,1,2,3,4,5]) for i in range(6): ax.scatter(centers[i][0],centers[i][1],centers[i][2],marker = 'o', s = 50,c='black') # plot the centers plt.show() # assign clusters to the players data_scaled_k = data_scaled.copy() data_scaled_k['K-cluster'] = y_kmeans # Plot values per cluster plt.bar([0,1,2,3,4,5],data_scaled_k['K-cluster'].value_counts().sort_index()) plt.xlabel('K-Cluster') plt.ylabel('Number of Players') plt.title('Player Distribution per Cluster') plt.show() data_scaled_k['K-cluster'].value_counts().sort_index() # heatmap for each cluster plt.figure(figsize = (10,10)) sns.heatmap(data_scaled_k.groupby('K-cluster').mean(), vmin = -1.5, vmax = 1.5, center = 0, cmap = sns.diverging_palette(20, 220, n = 200), square = True) # Find Representative Players in the clusters data_scaled_k[data_scaled_k['K-cluster'] == 5][['PLAYER','POSITION','K-cluster','PC3']].sort_values(['PC3'],ascending=False).head(10) # Save players classification for rookie cost analysis results = data_scaled_k[['PLAYER','K-cluster']].copy() results = results.rename({'K-cluster' : 'CLUSTER'}, axis = 1) results.to_csv('results-k-cluster.csv')
_____no_output_____
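Note that `KMeans(n_clusters=6)` above is run without a fixed seed, so the cluster labels (and therefore `results-k-cluster.csv`) can vary between runs, even though the PCA step used `random_state=0`. If reproducibility matters, a seeded variant is a one-line change; a sketch assuming `num_data_scaled` as defined above:

```python
from sklearn.cluster import KMeans

# same configuration as above, but with a fixed seed for repeatable labels
kmeans = KMeans(n_clusters=6, random_state=0)
kmeans.fit(num_data_scaled)
y_kmeans = kmeans.labels_
```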
OLDAP-2.2.1
PCA_Classification.ipynb
diviramon/NBA-Rookie-Analytics
Complete Hierarchy
data_scaled_c = data_scaled.copy() # run complete linkage clustering complete = Cluster(num_data_scaled, 'complete') complete.dendrogram_truncated(15, 5, 6.2) # plot dendrogram complete.elbow_plot(15) # elbow and silhouette plot # Calculate Complete Clusters data_scaled_c['complete_cluster'] = complete.create_cluster(6) data_scaled_c['complete_cluster'].value_counts().sort_index() # 3D plot results X = data_scaled_c.copy() fig = plt.figure(figsize = (10,10)) ax = fig.add_subplot(111, projection='3d') for i in range(1,6): x = np.array(X[X['complete_cluster'] == i]['PC1']) y = np.array(X[X['complete_cluster'] == i]['PC2']) z = np.array(X[X['complete_cluster'] == i]['PC3']) ax.scatter(x, y, z, marker = 'o', s = 30) plt.title('Complete-Cluster Results') ax.set_xlabel('PC1') ax.set_ylabel('PC2') ax.set_zlabel('PC3') ax.legend([1,2,3,4,5]) plt.show() # Plot values per cluster plt.bar([1,2,3,4,5],data_scaled_c['complete_cluster'].value_counts().sort_index()) plt.xlabel('Complete-Cluster') plt.ylabel('Number of Players') plt.title('Player Distribution per Cluster') plt.show() # heatmap plot plt.figure(figsize = (10,10)) sns.heatmap(data_scaled_c.groupby('complete_cluster').mean(), vmin = -1.5, vmax = 1.5, center = 0, cmap = sns.diverging_palette(20, 220, n = 200), square = True) # get representative players per cluster data_scaled_c[data_scaled_c['complete_cluster'] == 5][['PLAYER','POSITION','complete_cluster','PC4']].sort_values(['PC4'],ascending=False).head(10) # Save results res = data_scaled_c[['PLAYER','complete_cluster']].copy() res = res.rename({'complete_cluster' : 'CLUSTER'}, axis = 1) res.to_csv('results-complete.csv')
_____no_output_____
OLDAP-2.2.1
PCA_Classification.ipynb
diviramon/NBA-Rookie-Analytics
SINGLE
data_scaled_s = data_scaled.copy() # run single linkage clustering single = Cluster(num_data_scaled, 'single') single.dendrogram_truncated(15) # plot dendrogram single.elbow_plot(15) # elbow and silhouette plot # Inadequate for the given data (all players fall in one cluster) data_scaled_s['single_cluster'] = single.create_cluster(1.5) data_scaled_s['single_cluster'].value_counts()
_____no_output_____
OLDAP-2.2.1
PCA_Classification.ipynb
diviramon/NBA-Rookie-Analytics
Average
data_scaled_a = data_scaled.copy() # run average linkage clustering average = Cluster(num_data_scaled, 'average') average.dendrogram_truncated(15, 3, 4) # plot dendrogram average.elbow_plot(15) # silhouette and elbow plot # Inadequate for the given data data_scaled_a['average_cluster'] = average.create_cluster(3.5) data_scaled_a['average_cluster'].value_counts()
_____no_output_____
OLDAP-2.2.1
PCA_Classification.ipynb
diviramon/NBA-Rookie-Analytics
WARD method
# calculate ward linkage data_scaled_w = data_scaled.copy() ward = Cluster(num_data_scaled, 'ward') ward.dendrogram_truncated(15, 5, 11) # calculate elbow and silhouette plots ward.elbow_plot(15) # Cluster the data data_scaled_w['ward_cluster'] = ward.create_cluster(10) data_scaled_w['ward_cluster'].value_counts().sort_index() # 3D plot results X = data_scaled_w.copy() fig = plt.figure(figsize = (10,10)) ax = fig.add_subplot(111, projection='3d') for i in range(1,8): x = np.array(X[X['ward_cluster'] == i]['PC1']) y = np.array(X[X['ward_cluster'] == i]['PC2']) z = np.array(X[X['ward_cluster'] == i]['PC3']) ax.scatter(x, y, z, marker = 'o', s = 30) plt.title('Ward-Cluster Results') ax.set_xlabel('PC1') ax.set_ylabel('PC2') ax.set_zlabel('PC3') ax.legend([1,2,3,4,5,6,7]) plt.show() # Plot values per cluster plt.bar([1,2,3,4,5,6,7],data_scaled_w['ward_cluster'].value_counts().sort_index()) plt.xlabel('Ward-Cluster') plt.ylabel('Number of Players') plt.title('Player Distribution per Cluster') plt.show() # plot heatmap of PCs per Cluster plt.figure(figsize = (10,10)) sns.heatmap(data_scaled_w.groupby('ward_cluster').mean(), vmin = -1.5, vmax = 1.5, center = 0, cmap = sns.diverging_palette(20, 220, n = 200), square = True) # results are very similar to K-means so discard
_____no_output_____
OLDAP-2.2.1
PCA_Classification.ipynb
diviramon/NBA-Rookie-Analytics
Practice Notebook - Putting It All Together Hello, coders! Below we have code similar to what we wrote in the last video. Go ahead and run the following cell that defines our `get_event_date`, `current_users` and `generate_report` methods.
def get_event_date(event): return event.date def current_users(events): events.sort(key=get_event_date) machines={} for event in events: if event.machine not in machines: machines[event.machine]=set() if event.type =="login": machines[event.machine].add(event.user) elif event.type=="logout": machines[event.machine].remove(event.user) return machines def generate_report(machines): for machine,users in machines.items(): if len(users)>0:user_list=",".join(users) print("{}: {}".format(machines,user_list))
_____no_output_____
MIT
Crash Course on Python/pygrams_notebooks/utf-8''C1M6L1_Putting_It_All_Together.ipynb
garynth41/Google-IT-Automation-with-Python-Professional-Certificate
No output should be generated from running the custom function definitions above. To check that our code is doing everything it's supposed to do, we need an `Event` class. The code in the next cell below initializes our `Event` class. Go ahead and run this cell next.
class Event: def __init__(self, event_date, event_type, machine_name, user): self.date = event_date self.type = event_type self.machine = machine_name self.user = user
_____no_output_____
MIT
Crash Course on Python/pygrams_notebooks/utf-8''C1M6L1_Putting_It_All_Together.ipynb
garynth41/Google-IT-Automation-with-Python-Professional-Certificate
Ok, we have an `Event` class that has a constructor and sets the necessary attributes. Next let's create some events and add them to a list by running the following cell.
events = [ Event('2020-01-21 12:45:56', 'login', 'myworkstation.local', 'jordan'), Event('2020-01-22 15:53:42', 'logout', 'webserver.local', 'jordan'), Event('2020-01-21 18:53:21', 'login', 'webserver.local', 'lane'), Event('2020-01-22 10:25:34', 'logout', 'myworkstation.local', 'jordan'), Event('2020-01-21 08:20:01', 'login', 'webserver.local', 'jordan'), Event('2020-01-23 11:24:35', 'logout', 'mailserver.local', 'chris'), ]
_____no_output_____
MIT
Crash Course on Python/pygrams_notebooks/utf-8''C1M6L1_Putting_It_All_Together.ipynb
garynth41/Google-IT-Automation-with-Python-Professional-Certificate
Now we've got a bunch of events. Let's feed these events into our `current_users` function and see what happens.
users = current_users(events) print(users)
{'webserver.local': {'lane', 'jordan'}, 'myworkstation.local': set()}
MIT
Crash Course on Python/pygrams_notebooks/utf-8''C1M6L1_Putting_It_All_Together.ipynb
garynth41/Google-IT-Automation-with-Python-Professional-Certificate
Uh oh. The code in the previous cell produces an error message. This is because we have a user in our `events` list who was logged out of a machine he was not logged into. Do you see which user this is? Make edits to the first cell containing our custom function definitions to see if you can fix this error message. There may be more than one way to do so. Remember, when you have finished making your edits, rerun that cell as well as the cell that feeds the `events` list into our `current_users` function to see whether the error message has been fixed. Once the error message has been cleared and you have correctly outputted a dictionary with machine names as keys, your custom functions are properly finished. Great! Now try generating the report by running the next cell.
generate_report(users)
{'webserver.local': {'lane', 'jordan'}, 'myworkstation.local': set()}: lane,jordan {'webserver.local': {'lane', 'jordan'}, 'myworkstation.local': set()}: lane,jordan
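One possible way to satisfy the exercise above (handle a logout for a user who was never recorded as logged in) is to use `set.discard`, which does nothing when the element is absent. This is only one valid solution, sketched below; it relies on `get_event_date` from the earlier cell. Note also that `generate_report` in the original cell formats `machines` (the whole dictionary) instead of `machine`, which is why the report output repeats the full dictionary.

```python
def current_users(events):
    events.sort(key=get_event_date)
    machines = {}
    for event in events:
        if event.machine not in machines:
            machines[event.machine] = set()
        if event.type == "login":
            machines[event.machine].add(event.user)
        elif event.type == "logout":
            # discard() ignores users that were never logged in, so no KeyError
            machines[event.machine].discard(event.user)
    return machines

def generate_report(machines):
    for machine, users in machines.items():
        if len(users) > 0:
            user_list = ", ".join(users)
            print("{}: {}".format(machine, user_list))  # machine, not machines
```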
MIT
Crash Course on Python/pygrams_notebooks/utf-8''C1M6L1_Putting_It_All_Together.ipynb
garynth41/Google-IT-Automation-with-Python-Professional-Certificate
Master Data Science for Business - Data Science Consulting - Session 2 Notebook 2: Web Scraping with Scrapy: Getting reviews from TripAdvisorTo Do (note for Cap): -Remove the parts of the code that the students must complete by themselves 1. Importing packages
import scrapy from scrapy.crawler import CrawlerProcess from scrapy.spiders import CrawlSpider, Rule from scrapy.selector import Selector import sys from scrapy.http import Request from scrapy.linkextractors import LinkExtractor import json import logging import pandas as pd
_____no_output_____
MIT
Day2/.ipynb_checkpoints/Notebook 2 - TripAdvisor_sol-checkpoint.ipynb
ALaks96/CenterParcs_NLP_SentimentAnalysis_Webscraping
2. Some classes and functions
# -*- coding: utf-8 -*- # Define here the models for your scraped items # # See documentation in: # https://doc.scrapy.org/en/latest/topics/items.html class HotelreviewsItem(scrapy.Item): # define the fields for your item here like: rating = scrapy.Field() review = scrapy.Field() title = scrapy.Field() trip_date = scrapy.Field() trip_type = scrapy.Field() published_date = scrapy.Field() hotel_type = scrapy.Field() hotel_name = scrapy.Field() price_range = scrapy.Field() reviewer_id = scrapy.Field() review_language = scrapy.Field() def user_info_splitter(raw_user_info): """ :param raw_user_info: :return: """ user_info = {} splited_info = raw_user_info.split() for element in splited_info: converted_element = get_convertible_elements_as_dic(element) if converted_element: user_info[converted_element[0]] = converted_element[1] return user_info
_____no_output_____
MIT
Day2/.ipynb_checkpoints/Notebook 2 - TripAdvisor_sol-checkpoint.ipynb
ALaks96/CenterParcs_NLP_SentimentAnalysis_Webscraping
3. Creating the JSON pipeline
#JSON pipeline: you can rename 'tripadvisor.jl' to the name of your choice class JsonWriterPipeline(object): def open_spider(self, spider): self.file = open('tripadvisor.jl', 'w') def close_spider(self, spider): self.file.close() def process_item(self, item, spider): line = json.dumps(dict(item)) + "\n" self.file.write(line) return item
_____no_output_____
MIT
Day2/.ipynb_checkpoints/Notebook 2 - TripAdvisor_sol-checkpoint.ipynb
ALaks96/CenterParcs_NLP_SentimentAnalysis_Webscraping
4. SpiderWhen you go on a TripAdvisor page, you will see 5 reviews per page. Reviews are not fully displayed on that page, so you have to open them (i.e., follow the link of each review to tell Scrapy to scrape that page) to scrape them. This means we will use 2 parsing functions: -The first one will go on the page of the park and get the links of the reviews -The second one will go on each review's page and scrape it using the parse_item() method. To Do: Complete the code with XPath to get the proper items to scrape. Once you are done, you can "Restart and run all cells" to see if everything is working correctly.
class MySpider(CrawlSpider): name = 'BasicSpider' domain_url = "https://www.tripadvisor.com" # allowed_domains = ["https://www.tripadvisor.com"] start_urls = [ "https://www.tripadvisor.fr/ShowUserReviews-g1573379-d1573383-r629218790-Center_Parcs_Les_Trois_Forets-Hattigny_Moselle_Grand_Est.html", "https://www.tripadvisor.fr/ShowUserReviews-g1573379-d1573383-r645720538-Center_Parcs_Les_Trois_Forets-Hattigny_Moselle_Grand_Est.html"] #Custom settings to modify settings usually found in the settings.py file custom_settings = { 'LOG_LEVEL': logging.WARNING, 'ITEM_PIPELINES': {'__main__.JsonWriterPipeline': 1}, # Used for pipeline 1 'FEED_FORMAT':'json', # Used for pipeline 2 'FEED_URI': 'tripadvisor.json' # Used for pipeline 2 } def parse(self, response): item = HotelreviewsItem() item["reviewer_id"] = next(iter(response.xpath( "//div[contains(@class,'prw_reviews_resp_sur_h_featured_review')]/div/div/div/div/div[contains(@class,'prw_reviews_user_links_hs')]/span/@data-memberid").extract()), None) item["review_language"] = next(iter(response.xpath( "//div[contains(@class,'prw_reviews_resp_sur_h_featured_review')]/div/div/div/div/div[contains(@class,'prw_reviews_user_links_hs')]/span/@data-language").extract()), None) review_url_on_page = response.xpath('//script[@type="application/ld+json"]/text()').extract() review = eval(review_url_on_page[0]) item["review"] = review["reviewBody"].replace("\\n", "") item["title"] = review["name"] item["rating"] = review["reviewRating"]["ratingValue"] item["hotel_type"] = review["itemReviewed"]["@type"] item["hotel_name"] = review["itemReviewed"]["name"] item["price_range"] = review["itemReviewed"]["priceRange"] try: item["published_date"] = review["datePublished"] except KeyError: item["published_date"] = next(iter(response.xpath( f"//div[contains(@id,'review_{review_id}')]/div/div/span[@class='ratingDate']/@title""").extract()), None) item["trip_type"] = next(iter(response.xpath("//div[contains(@class," "'prw_reviews_resp_sur_h_featured_review')]/div/div/div/div/div" "/div/div/div[contains(@class,'noRatings')]/text()").extract()), None) try: item["trip_date"] = next(iter(response.xpath("//div[contains(@class," "'prw_reviews_resp_sur_h_featured_review')]/div/div/div/div[" "contains(@class,'prw_reviews_stay_date_hsx')]/text()").extract( )), None) except: item["trip_date"] = next(iter(response.xpath( "//div[contains(@id,'review_538163624')]/div/div/div[@data-prwidget-name='reviews_stay_date_hsx']/text()").extract()), None) yield item
_____no_output_____
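The spider above scrapes review pages that are listed directly in `start_urls`; the two-step pattern described in the Spider section (one callback collecting review links from a listing page, a second callback parsing each review) would look roughly like the sketch below. The listing URL and the XPath expressions are placeholders, not the real TripAdvisor markup.

```python
class TwoStepSpider(scrapy.Spider):
    name = 'TwoStepSpider'
    # placeholder listing URL for illustration only
    start_urls = ["https://www.tripadvisor.fr/Hotel_Review-g1573379-d1573383-Reviews.html"]

    def parse(self, response):
        # 1) collect the link of every review shown on the listing page
        for href in response.xpath("//a[contains(@class, 'review-link')]/@href").extract():  # placeholder XPath
            yield response.follow(href, callback=self.parse_item)

    def parse_item(self, response):
        # 2) scrape the full review on its own page
        item = HotelreviewsItem()
        item["title"] = response.xpath("//h1/text()").extract_first()  # placeholder XPath
        yield item
```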
MIT
Day2/.ipynb_checkpoints/Notebook 2 - TripAdvisor_sol-checkpoint.ipynb
ALaks96/CenterParcs_NLP_SentimentAnalysis_Webscraping
5. Crawling
process = CrawlerProcess({ 'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)' }) process.crawl(MySpider) process.start()
2019-01-14 16:37:44 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: scrapybot) 2019-01-14 16:37:44 [scrapy.utils.log] INFO: Versions: lxml 4.3.0.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.9.0, Python 3.7.2 (default, Jan 2 2019, 17:07:39) [MSC v.1915 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.1.1a 20 Nov 2018), cryptography 2.4.2, Platform Windows-10-10.0.16299-SP0 2019-01-14 16:37:44 [scrapy.crawler] INFO: Overridden settings: {'FEED_FORMAT': 'json', 'FEED_URI': 'tripadvisor.json', 'LOG_LEVEL': 30, 'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'}
MIT
Day2/.ipynb_checkpoints/Notebook 2 - TripAdvisor_sol-checkpoint.ipynb
ALaks96/CenterParcs_NLP_SentimentAnalysis_Webscraping
6. Importing and reading the scraped dataIf you've succeeded, you should see here a dataframe with 2 entries corresponding to the first 2 reviews of the park, and 11 columns for each item scraped.
dfjson = pd.read_json('tripadvisor.json') #Previewing DF dfjson.head() dfjson.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 2 entries, 0 to 1 Data columns (total 11 columns): hotel_name 2 non-null object hotel_type 2 non-null object price_range 2 non-null object published_date 2 non-null object rating 2 non-null int64 review 2 non-null object review_language 2 non-null object reviewer_id 2 non-null object title 2 non-null object trip_date 2 non-null object trip_type 1 non-null object dtypes: int64(1), object(10) memory usage: 256.0+ bytes
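Note that the crawl actually produced two files: `tripadvisor.json` (from `FEED_URI`, read above) and `tripadvisor.jl` (written by `JsonWriterPipeline`). The `.jl` file is JSON Lines, one JSON object per line, so pandas needs `lines=True` to read it; a small sketch:

```python
import pandas as pd

# JSON Lines output written by JsonWriterPipeline: one review per line
dfjl = pd.read_json('tripadvisor.jl', lines=True)
dfjl.head()
```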
MIT
Day2/.ipynb_checkpoints/Notebook 2 - TripAdvisor_sol-checkpoint.ipynb
ALaks96/CenterParcs_NLP_SentimentAnalysis_Webscraping
Deep Reinforcement Learning for Stock Trading from Scratch: Single Stock TradingTutorials to use OpenAI DRL to trade single stock in one Jupyter Notebook | Presented at NeurIPS 2020: Deep RL Workshop* This blog is based on our paper: FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance, presented at NeurIPS 2020: Deep RL Workshop.* Check out medium blog for detailed explanations: https://towardsdatascience.com/finrl-for-quantitative-finance-tutorial-for-single-stock-trading-37d6d7c30aac* Please report any issues to our Github: https://github.com/AI4Finance-LLC/FinRL-Library/issues Content * [1. Problem Definition](0)* [2. Getting Started - Load Python packages](1) * [2.1. Install Packages](1.1) * [2.2. Check Additional Packages](1.2) * [2.3. Import Packages](1.3) * [2.4. Create Folders](1.4)* [3. Download Data](2)* [4. Preprocess Data](3) * [4.1. Technical Indicators](3.1) * [4.2. Perform Feature Engineering](3.2)* [5. Build Environment](4) * [5.1. Training & Trade Data Split](4.1) * [5.2. User-defined Environment](4.2) * [5.3. Initialize Environment](4.3) * [6. Implement DRL Algorithms](5) * [7. Backtesting Performance](6) * [7.1. BackTestStats](6.1) * [7.2. BackTestPlot](6.2) * [7.3. Baseline Stats](6.3) * [7.4. Compare to Stock Market Index](6.4) Part 1. Problem Definition This problem is to design an automated trading solution for single stock trading. We model the stock trading process as a Markov Decision Process (MDP). We then formulate our trading goal as a maximization problem.The components of the reinforcement learning environment are:* Action: The action space describes the allowed actions through which the agent interacts with the environment. Normally, a ∈ A includes three actions: a ∈ {−1, 0, 1}, where −1, 0, 1 represent selling, holding, and buying one stock. Also, an action can be carried upon multiple shares. We use an action space {−k, ..., −1, 0, 1, ..., k}, where k denotes the number of shares. For example, "Buy 10 shares of AAPL" or "Sell 10 shares of AAPL" are 10 or −10, respectively.* Reward function: r(s, a, s′) is the incentive mechanism for an agent to learn a better action. The reward is the change of the portfolio value when action a is taken at state s and arriving at new state s′, i.e., r(s, a, s′) = v′ − v, where v′ and v represent the portfolio values at state s′ and s, respectively.* State: The state space describes the observations that the agent receives from the environment. Just as a human trader needs to analyze various information before executing a trade, so our trading agent observes many different features to better learn in an interactive environment.* Environment: single stock trading for AAPLThe data of the single stock that we will be using for this case study is obtained from Yahoo Finance API. The data contains Open-High-Low-Close price and volume.We use Apple Inc. stock: AAPL as an example throughout this article, because it is one of the most popular and profitable stocks. Part 2. Getting Started- Load Python Packages 2.1. Install all the packages through FinRL library
## install finrl library !pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
Collecting git+https://github.com/AI4Finance-LLC/FinRL-Library.git Cloning https://github.com/AI4Finance-LLC/FinRL-Library.git to /tmp/pip-req-build-gpm5bcb4 Running command git clone -q https://github.com/AI4Finance-LLC/FinRL-Library.git /tmp/pip-req-build-gpm5bcb4 Requirement already satisfied (use --upgrade to upgrade): finrl==0.0.1 from git+https://github.com/AI4Finance-LLC/FinRL-Library.git in /usr/local/lib/python3.6/dist-packages Requirement already satisfied: numpy<1.19.0,>=1.16.0 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (1.18.5) Requirement already satisfied: pandas==1.1.4 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (1.1.4) Requirement already satisfied: stockstats in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (0.3.2) Requirement already satisfied: yfinance in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (0.1.55) Requirement already satisfied: scikit-learn==0.21.0 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (0.21.0) Requirement already satisfied: gym==0.15.3 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (0.15.3) Requirement already satisfied: stable-baselines[mpi] in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (2.10.1) Requirement already satisfied: tensorflow==1.15.4 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (1.15.4) Requirement already satisfied: joblib==0.15.1 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (0.15.1) Requirement already satisfied: matplotlib==3.2.1 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (3.2.1) Requirement already satisfied: pytest<6.0.0,>=5.3.2 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (5.4.3) Requirement already satisfied: setuptools<42.0.0,>=41.4.0 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (41.6.0) Requirement already satisfied: wheel<0.34.0,>=0.33.6 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (0.33.6) Requirement already satisfied: pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2 from git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (0.9.2+75.g4b901f6) Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.6/dist-packages (from pandas==1.1.4->finrl==0.0.1) (2.8.1) Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas==1.1.4->finrl==0.0.1) (2018.9) Requirement already satisfied: int-date>=0.1.7 in /usr/local/lib/python3.6/dist-packages (from stockstats->finrl==0.0.1) (0.1.8) Requirement already satisfied: multitasking>=0.0.7 in /usr/local/lib/python3.6/dist-packages (from yfinance->finrl==0.0.1) (0.0.9) Requirement already satisfied: lxml>=4.5.1 in /usr/local/lib/python3.6/dist-packages (from yfinance->finrl==0.0.1) (4.6.2) Requirement already satisfied: requests>=2.20 in /usr/local/lib/python3.6/dist-packages (from yfinance->finrl==0.0.1) (2.23.0) Requirement already satisfied: scipy>=0.17.0 in /usr/local/lib/python3.6/dist-packages (from scikit-learn==0.21.0->finrl==0.0.1) (1.4.1) Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from gym==0.15.3->finrl==0.0.1) (1.15.0) Requirement already satisfied: cloudpickle~=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym==0.15.3->finrl==0.0.1) (1.2.2) Requirement already satisfied: pyglet<=1.3.2,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym==0.15.3->finrl==0.0.1) (1.3.2) 
Requirement already satisfied: opencv-python in /usr/local/lib/python3.6/dist-packages (from stable-baselines[mpi]->finrl==0.0.1) (4.1.2.30) Requirement already satisfied: mpi4py; extra == "mpi" in /usr/local/lib/python3.6/dist-packages (from stable-baselines[mpi]->finrl==0.0.1) (3.0.3) Requirement already satisfied: google-pasta>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (0.2.0) Requirement already satisfied: tensorflow-estimator==1.15.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (1.15.1) Requirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (1.1.2) Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (3.3.0) Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (1.33.2) Requirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (0.10.0) Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (0.8.1) Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (1.1.0) Requirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (1.12.1) Requirement already satisfied: keras-applications>=1.0.8 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (1.0.8) Requirement already satisfied: protobuf>=3.6.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (3.12.4) Requirement already satisfied: tensorboard<1.16.0,>=1.15.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (1.15.0) Requirement already satisfied: gast==0.2.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (0.2.2) Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib==3.2.1->finrl==0.0.1) (2.4.7) Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib==3.2.1->finrl==0.0.1) (1.3.1) Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib==3.2.1->finrl==0.0.1) (0.10.0) Requirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from pytest<6.0.0,>=5.3.2->finrl==0.0.1) (20.4) Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.6/dist-packages (from pytest<6.0.0,>=5.3.2->finrl==0.0.1) (20.3.0) Requirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.6/dist-packages (from pytest<6.0.0,>=5.3.2->finrl==0.0.1) (8.6.0) Requirement already satisfied: pluggy<1.0,>=0.12 in /usr/local/lib/python3.6/dist-packages (from pytest<6.0.0,>=5.3.2->finrl==0.0.1) (0.13.1) Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from pytest<6.0.0,>=5.3.2->finrl==0.0.1) (0.2.5) Requirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.6/dist-packages (from pytest<6.0.0,>=5.3.2->finrl==0.0.1) (1.9.0) Requirement already satisfied: importlib-metadata>=0.12; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from pytest<6.0.0,>=5.3.2->finrl==0.0.1) (2.0.0) Requirement 
already satisfied: empyrical>=0.5.0 in /usr/local/lib/python3.6/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (0.5.5) Requirement already satisfied: seaborn>=0.7.1 in /usr/local/lib/python3.6/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (0.11.0) Requirement already satisfied: ipython>=3.2.3 in /usr/local/lib/python3.6/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (5.5.0) Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests>=2.20->yfinance->finrl==0.0.1) (2.10) Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests>=2.20->yfinance->finrl==0.0.1) (1.24.3) Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests>=2.20->yfinance->finrl==0.0.1) (3.0.4) Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests>=2.20->yfinance->finrl==0.0.1) (2020.11.8) Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.3.2,>=1.2.0->gym==0.15.3->finrl==0.0.1) (0.16.0) Requirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras-applications>=1.0.8->tensorflow==1.15.4->finrl==0.0.1) (2.10.0) Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow==1.15.4->finrl==0.0.1) (3.3.3) Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow==1.15.4->finrl==0.0.1) (1.0.1) Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata>=0.12; python_version < "3.8"->pytest<6.0.0,>=5.3.2->finrl==0.0.1) (3.4.0) Requirement already satisfied: pandas-datareader>=0.2 in /usr/local/lib/python3.6/dist-packages (from empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (0.9.0) Requirement already satisfied: pygments in /usr/local/lib/python3.6/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (2.6.1) Requirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.6/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (4.3.3) Requirement already satisfied: pexpect; sys_platform != "win32" in /usr/local/lib/python3.6/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (4.8.0) Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.6/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (1.0.18) Requirement already satisfied: pickleshare in /usr/local/lib/python3.6/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (0.7.5) Requirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.6/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (0.8.1) Requirement already satisfied: decorator in /usr/local/lib/python3.6/dist-packages (from 
ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (4.4.2) Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.6/dist-packages (from traitlets>=4.2->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (0.2.0) Requirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.6/dist-packages (from pexpect; sys_platform != "win32"->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (0.6.0) Building wheels for collected packages: finrl Building wheel for finrl (setup.py) ... [?25l[?25hdone Created wheel for finrl: filename=finrl-0.0.1-cp36-none-any.whl size=24270 sha256=18b4aee2509abb83c51b85c8a644bfc7d48c424223d88c2876f7a185aa940241 Stored in directory: /tmp/pip-ephem-wheel-cache-hs_4dvki/wheels/9c/19/bf/c644def96612df1ad42c94d5304966797eaa3221dffc5efe0b Successfully built finrl
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
2.2. Check if the additional packages needed are present, if not install them. * Yahoo Finance API* pandas* numpy* matplotlib* stockstats* OpenAI gym* stable-baselines* tensorflow* pyfolio
import pkg_resources import pip installedPackages = {pkg.key for pkg in pkg_resources.working_set} required = {'yfinance', 'pandas', 'matplotlib', 'stockstats','stable-baselines','gym','tensorflow'} missing = required - installedPackages if missing: !pip install yfinance !pip install pandas !pip install matplotlib !pip install stockstats !pip install gym !pip install stable-baselines[mpi] !pip install tensorflow==1.15.4
Requirement already satisfied: yfinance in /usr/local/lib/python3.6/dist-packages (0.1.55) Requirement already satisfied: lxml>=4.5.1 in /usr/local/lib/python3.6/dist-packages (from yfinance) (4.6.1) Requirement already satisfied: requests>=2.20 in /usr/local/lib/python3.6/dist-packages (from yfinance) (2.23.0) Requirement already satisfied: multitasking>=0.0.7 in /usr/local/lib/python3.6/dist-packages (from yfinance) (0.0.9) Requirement already satisfied: pandas>=0.24 in /usr/local/lib/python3.6/dist-packages (from yfinance) (1.1.4) Requirement already satisfied: numpy>=1.15 in /usr/local/lib/python3.6/dist-packages (from yfinance) (1.18.5) Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests>=2.20->yfinance) (2020.11.8) Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests>=2.20->yfinance) (2.10) Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests>=2.20->yfinance) (1.24.3) Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests>=2.20->yfinance) (3.0.4) Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.24->yfinance) (2.8.1) Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.24->yfinance) (2018.9) Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.7.3->pandas>=0.24->yfinance) (1.15.0) Requirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (1.1.4) Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.6/dist-packages (from pandas) (2.8.1) Requirement already satisfied: numpy>=1.15.4 in /usr/local/lib/python3.6/dist-packages (from pandas) (1.18.5) Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas) (2018.9) Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.7.3->pandas) (1.15.0) Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (3.2.1) Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (0.10.0) Requirement already satisfied: numpy>=1.11 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (1.18.5) Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (1.3.1) Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (2.8.1) Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib) (2.4.7) Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from cycler>=0.10->matplotlib) (1.15.0) Requirement already satisfied: stockstats in /usr/local/lib/python3.6/dist-packages (0.3.2) Requirement already satisfied: int-date>=0.1.7 in /usr/local/lib/python3.6/dist-packages (from stockstats) (0.1.8) Requirement already satisfied: pandas>=0.18.1 in /usr/local/lib/python3.6/dist-packages (from stockstats) (1.1.4) Requirement already satisfied: numpy>=1.9.2 in /usr/local/lib/python3.6/dist-packages (from stockstats) (1.18.5) Requirement already satisfied: python-dateutil>=2.4.2 in /usr/local/lib/python3.6/dist-packages (from int-date>=0.1.7->stockstats) (2.8.1) Requirement 
already satisfied: six>=1.9.0 in /usr/local/lib/python3.6/dist-packages (from int-date>=0.1.7->stockstats) (1.15.0) Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.18.1->stockstats) (2018.9) Requirement already satisfied: gym in /usr/local/lib/python3.6/dist-packages (0.15.3) Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from gym) (1.15.0) Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym) (1.4.1) Requirement already satisfied: pyglet<=1.3.2,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym) (1.3.2) Requirement already satisfied: numpy>=1.10.4 in /usr/local/lib/python3.6/dist-packages (from gym) (1.18.5) Requirement already satisfied: cloudpickle~=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym) (1.2.2) Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.3.2,>=1.2.0->gym) (0.16.0) Requirement already satisfied: stable-baselines[mpi] in /usr/local/lib/python3.6/dist-packages (2.10.1) Requirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from stable-baselines[mpi]) (1.1.4) Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from stable-baselines[mpi]) (1.18.5) Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from stable-baselines[mpi]) (1.4.1) Requirement already satisfied: gym[atari,classic_control]>=0.11 in /usr/local/lib/python3.6/dist-packages (from stable-baselines[mpi]) (0.15.3) Requirement already satisfied: opencv-python in /usr/local/lib/python3.6/dist-packages (from stable-baselines[mpi]) (4.1.2.30) Requirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from stable-baselines[mpi]) (0.15.1) Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from stable-baselines[mpi]) (3.2.1) Requirement already satisfied: cloudpickle>=0.5.5 in /usr/local/lib/python3.6/dist-packages (from stable-baselines[mpi]) (1.2.2) Requirement already satisfied: mpi4py; extra == "mpi" in /usr/local/lib/python3.6/dist-packages (from stable-baselines[mpi]) (3.0.3) Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.6/dist-packages (from pandas->stable-baselines[mpi]) (2.8.1) Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas->stable-baselines[mpi]) (2018.9) Requirement already satisfied: pyglet<=1.3.2,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym[atari,classic_control]>=0.11->stable-baselines[mpi]) (1.3.2) Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from gym[atari,classic_control]>=0.11->stable-baselines[mpi]) (1.15.0) Requirement already satisfied: Pillow; extra == "atari" in /usr/local/lib/python3.6/dist-packages (from gym[atari,classic_control]>=0.11->stable-baselines[mpi]) (7.0.0) Requirement already satisfied: atari-py~=0.2.0; extra == "atari" in /usr/local/lib/python3.6/dist-packages (from gym[atari,classic_control]>=0.11->stable-baselines[mpi]) (0.2.6) Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->stable-baselines[mpi]) (1.3.1) Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->stable-baselines[mpi]) (2.4.7) Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from 
matplotlib->stable-baselines[mpi]) (0.10.0) Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.3.2,>=1.2.0->gym[atari,classic_control]>=0.11->stable-baselines[mpi]) (0.16.0) Requirement already satisfied: tensorflow==1.15.4 in /usr/local/lib/python3.6/dist-packages (1.15.4) Requirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (1.12.1) Requirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (1.1.2) Requirement already satisfied: gast==0.2.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (0.2.2) Requirement already satisfied: wheel>=0.26; python_version >= "3" in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (0.33.6) Requirement already satisfied: tensorflow-estimator==1.15.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (1.15.1) Requirement already satisfied: keras-applications>=1.0.8 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (1.0.8) Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (1.15.0) Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (0.8.1) Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (1.33.2) Requirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (0.10.0) Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (3.3.0) Requirement already satisfied: tensorboard<1.16.0,>=1.15.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (1.15.0) Requirement already satisfied: numpy<1.19.0,>=1.16.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (1.18.5) Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (1.1.0) Requirement already satisfied: protobuf>=3.6.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (3.12.4) Requirement already satisfied: google-pasta>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (0.2.0) Requirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras-applications>=1.0.8->tensorflow==1.15.4) (2.10.0) Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow==1.15.4) (3.3.3) Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow==1.15.4) (41.6.0) Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow==1.15.4) (1.0.1) Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from markdown>=2.6.8->tensorboard<1.16.0,>=1.15.0->tensorflow==1.15.4) (2.0.0) Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard<1.16.0,>=1.15.0->tensorflow==1.15.4) (3.4.0)
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
2.3. Import Packages
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.use('Agg')
import datetime

from finrl.config import config
from finrl.marketdata.yahoodownloader import YahooDownloader
from finrl.preprocessing.preprocessors import FeatureEngineer
from finrl.preprocessing.data import data_split
from finrl.env.environment import EnvSetup
from finrl.env.EnvMultipleStock_train import StockEnvTrain
from finrl.env.EnvMultipleStock_trade import StockEnvTrade
from finrl.model.models import DRLAgent
from finrl.trade.backtest import BackTestStats, BaselineStats, BackTestPlot

# Disable the warnings
import warnings
warnings.filterwarnings('ignore')
_____no_output_____
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
2.4. Create Folders
import os if not os.path.exists("./" + config.DATA_SAVE_DIR): os.makedirs("./" + config.DATA_SAVE_DIR) if not os.path.exists("./" + config.TRAINED_MODEL_DIR): os.makedirs("./" + config.TRAINED_MODEL_DIR) if not os.path.exists("./" + config.TENSORBOARD_LOG_DIR): os.makedirs("./" + config.TENSORBOARD_LOG_DIR) if not os.path.exists("./" + config.RESULTS_DIR): os.makedirs("./" + config.RESULTS_DIR)
_____no_output_____
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Part 3. Download Data

Yahoo Finance is a website that provides stock data, financial news, financial reports, etc. All the data provided by Yahoo Finance is free.

* FinRL uses a class **YahooDownloader** to fetch data from Yahoo Finance API
* Call Limit: Using the Public API (without authentication), you are limited to 2,000 requests per hour per IP (or up to a total of 48,000 requests a day).

-----

class YahooDownloader:
    Provides methods for retrieving daily stock data from Yahoo Finance API

    Attributes
    ----------
    start_date : str
        start date of the data (modified from config.py)
    end_date : str
        end date of the data (modified from config.py)
    ticker_list : list
        a list of stock tickers (modified from config.py)

    Methods
    -------
    fetch_data()
        Fetches data from Yahoo API
# from config.py start_date is a string config.START_DATE # from config.py end_date is a string config.END_DATE
_____no_output_____
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
ticker_list is a list of stock tickers; in a single-stock trading case, the list contains only one ticker.
# Download and save the data in a pandas DataFrame: data_df = YahooDownloader(start_date = config.START_DATE, end_date = config.END_DATE, ticker_list = ['AAPL']).fetch_data() data_df.shape data_df.head()
_____no_output_____
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
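For intuition, the `fetch_data()` call above is essentially a thin wrapper around the yfinance package installed earlier. A minimal sketch of the equivalent direct download is shown below; the column renaming is an illustrative assumption, not FinRL's exact output format.

```python
import yfinance as yf

# Minimal sketch (assumed equivalent of YahooDownloader.fetch_data):
# pull daily OHLCV bars for one ticker and tidy the frame into the
# lower-case layout used in the rest of this notebook.
raw = yf.download("AAPL", start="2009-01-01", end="2020-09-30")
raw = raw.reset_index()
raw.columns = ["date", "open", "high", "low", "close", "adjcp", "volume"]
raw["tic"] = "AAPL"
print(raw.shape)
raw.head()
```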
Part 4. Preprocess Data

Data preprocessing is a crucial step for training a high-quality machine learning model. We need to check for missing data and do feature engineering in order to convert the data into a model-ready state.

* FinRL uses a class **FeatureEngineer** to preprocess the data.
* Add **technical indicators**. In practical trading, various information needs to be taken into account, for example the historical stock prices, current holding shares, technical indicators, etc.

class FeatureEngineer:
    Provides methods for preprocessing the stock price data

    Attributes
    ----------
    df : DataFrame
        data downloaded from Yahoo API
    feature_number : int
        number of features we used
    use_technical_indicator : boolean
        use technical indicator or not
    use_turbulence : boolean
        use turbulence index or not

    Methods
    -------
    preprocess_data()
        main method to do the feature engineering

4.1 Technical Indicators

* FinRL uses stockstats to calculate technical indicators such as **Moving Average Convergence Divergence (MACD)**, **Relative Strength Index (RSI)**, **Average Directional Index (ADX)**, **Commodity Channel Index (CCI)** and various other indicators and stats.
* **stockstats**: supplies a wrapper StockDataFrame based on **pandas.DataFrame** with inline stock statistics/indicators support.
## we store the stockstats technical indicator column names in config.py tech_indicator_list=config.TECHNICAL_INDICATORS_LIST print(tech_indicator_list) ## user can add more technical indicators ## check https://github.com/jealous/stockstats for different names tech_indicator_list=tech_indicator_list+['kdjk','open_2_sma','boll','close_10.0_le_5_c','wr_10','dma','trix'] print(tech_indicator_list)
['macd', 'rsi_30', 'cci_30', 'dx_30', 'kdjk', 'open_2_sma', 'boll', 'close_10.0_le_5_c', 'wr_10', 'dma', 'trix']
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
4.2 Perform Feature Engineering
data_df = FeatureEngineer(data_df.copy(), use_technical_indicator=True, tech_indicator_list = tech_indicator_list, use_turbulence=False, user_defined_feature = True).preprocess_data() data_df.head()
_____no_output_____
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Part 5. Build Environment

Considering the stochastic and interactive nature of automated stock trading, the task is modeled as a **Markov Decision Process (MDP)** problem. During training, the agent observes the stock price change, takes an action, and receives a reward, adjusting its strategy accordingly. By interacting with the environment, the trading agent derives a trading strategy that maximizes cumulative reward over time.

Our trading environments, based on the OpenAI Gym framework, simulate live stock markets with real market data according to the principle of time-driven simulation.

The action space describes the allowed actions through which the agent interacts with the environment. Normally, an action a takes one of three values {-1, 0, 1}, where -1, 0, 1 represent selling, holding, and buying one share. An action can also be carried out on multiple shares: we use an action space {-k, …, -1, 0, 1, …, k}, where k denotes the number of shares to buy and -k denotes the number of shares to sell. For example, "Buy 10 shares of AAPL" is 10 and "Sell 10 shares of AAPL" is -10. The continuous action space is normalized to [-1, 1], since the policy is defined on a Gaussian distribution, which needs to be normalized and symmetric.

5.1 Training & Trade data split

* Training: 2009-01-01 to 2018-12-31
* Trade: 2019-01-01 to 2020-09-30
train = data_split(data_df, start = config.START_DATE, end = config.START_TRADE_DATE)
trade = data_split(data_df, start = config.START_TRADE_DATE, end = config.END_DATE)
#train = data_split(data_df, start = '2009-01-01', end = '2019-01-01')
#trade = data_split(data_df, start = '2019-01-01', end = '2020-09-30')

## data normalization; this part is optional and has little impact
features_list = list(train.columns)
features_list.remove('date')
features_list.remove('tic')
features_list.remove('close')
print(features_list)

from sklearn import preprocessing
data_normaliser = preprocessing.StandardScaler()
# fit the scaler on the training data only, then apply the same scaling
# to the trade data to avoid look-ahead bias
train[features_list] = data_normaliser.fit_transform(train[features_list])
trade[features_list] = data_normaliser.transform(trade[features_list])
['open', 'high', 'low', 'volume', 'macd', 'rsi_30', 'cci_30', 'dx_30', 'kdjk', 'open_2_sma', 'boll', 'close_10.0_le_5_c', 'wr_10', 'dma', 'trix', 'daily_return']
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
5.2 User-defined Environment: a simulation environment class
import numpy as np import pandas as pd from gym.utils import seeding import gym from gym import spaces import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plt class SingleStockEnv(gym.Env): """A single stock trading environment for OpenAI gym Attributes ---------- df: DataFrame input data stock_dim : int number of unique stocks hmax : int maximum number of shares to trade initial_amount : int start money transaction_cost_pct: float transaction cost percentage per trade reward_scaling: float scaling factor for reward, good for training state_space: int the dimension of input features action_space: int equals stock dimension tech_indicator_list: list a list of technical indicator names turbulence_threshold: int a threshold to control risk aversion day: int an increment number to control date Methods ------- _sell_stock() perform sell action based on the sign of the action _buy_stock() perform buy action based on the sign of the action step() at each step the agent will return actions, then we will calculate the reward, and return the next observation. reset() reset the environment render() use render to return other functions save_asset_memory() return account value at each time step save_action_memory() return actions/positions at each time step """ metadata = {'render.modes': ['human']} def __init__(self, df, stock_dim, hmax, initial_amount, transaction_cost_pct, reward_scaling, state_space, action_space, tech_indicator_list, turbulence_threshold, day = 0): #super(StockEnv, self).__init__() #money = 10 , scope = 1 self.day = day self.df = df self.stock_dim = stock_dim self.hmax = hmax self.initial_amount = initial_amount self.transaction_cost_pct =transaction_cost_pct self.reward_scaling = reward_scaling self.state_space = state_space self.action_space = action_space self.tech_indicator_list = tech_indicator_list # action_space normalization and shape is self.stock_dim self.action_space = spaces.Box(low = -1, high = 1,shape = (self.action_space,)) # Shape = 181: [Current Balance]+[prices 1-30]+[owned shares 1-30] # +[macd 1-30]+ [rsi 1-30] + [cci 1-30] + [adx 1-30] self.observation_space = spaces.Box(low=0, high=np.inf, shape = (self.state_space,)) # load data from a pandas dataframe self.data = self.df.loc[self.day,:] self.terminal = False self.turbulence_threshold = turbulence_threshold # initalize state: inital amount + close price + shares + technical indicators + other features self.state = [self.initial_amount] + \ [self.data.close] + \ [0]*self.stock_dim + \ sum([[self.data[tech]] for tech in self.tech_indicator_list ], [])+ \ [self.data.open] + \ [self.data.high] + \ [self.data.low] +\ [self.data.daily_return] # initialize reward self.reward = 0 self.cost = 0 # memorize all the total balance change self.asset_memory = [self.initial_amount] self.rewards_memory = [] self.actions_memory=[] self.date_memory=[self.data.date] self.close_price_memory = [self.data.close] self.trades = 0 self._seed() def _sell_stock(self, index, action): # perform sell action based on the sign of the action if self.state[index+self.stock_dim+1] > 0: #update balance self.state[0] += \ self.state[index+1]*min(abs(action),self.state[index+self.stock_dim+1]) * \ (1- self.transaction_cost_pct) self.state[index+self.stock_dim+1] -= min(abs(action), self.state[index+self.stock_dim+1]) self.cost +=self.state[index+1]*min(abs(action),self.state[index+self.stock_dim+1]) * \ self.transaction_cost_pct self.trades+=1 else: pass def _buy_stock(self, index, action): # perform buy action based on the sign 
of the action available_amount = self.state[0] // self.state[index+1] # print('available_amount:{}'.format(available_amount)) #update balance self.state[0] -= self.state[index+1]*min(available_amount, action)* \ (1+ self.transaction_cost_pct) self.state[index+self.stock_dim+1] += min(available_amount, action) self.cost+=self.state[index+1]*min(available_amount, action)* \ self.transaction_cost_pct self.trades+=1 def step(self, actions): # print(self.day) self.terminal = self.day >= len(self.df.index.unique())-1 # print(actions) if self.terminal: #plt.plot(self.asset_memory,'r') #plt.savefig('results/account_value_train.png') #plt.close() end_total_asset = self.state[0]+ \ sum(np.array(self.state[1:(self.stock_dim+1)])*np.array(self.state[(self.stock_dim+1):(self.stock_dim*2+1)])) print("begin_total_asset:{}".format(self.asset_memory[0])) print("end_total_asset:{}".format(end_total_asset)) df_total_value = pd.DataFrame(self.asset_memory) #df_total_value.to_csv('results/account_value_train.csv') print("total_reward:{}".format(self.state[0]+sum(np.array(self.state[1:(self.stock_dim+1)])*np.array(self.state[(self.stock_dim+1):(self.stock_dim*2+1)]))- self.initial_amount )) print("total_cost: ", self.cost) print("total_trades: ", self.trades) df_total_value.columns = ['account_value'] df_total_value['daily_return']=df_total_value.pct_change(1) if df_total_value['daily_return'].std() !=0: sharpe = (252**0.5)*df_total_value['daily_return'].mean()/ \ df_total_value['daily_return'].std() print("Sharpe: ",sharpe) print("=================================") df_rewards = pd.DataFrame(self.rewards_memory) #df_rewards.to_csv('results/account_rewards_train.csv') return self.state, self.reward, self.terminal,{} else: #print(actions) actions = actions * self.hmax self.actions_memory.append(actions) #actions = (actions.astype(int)) begin_total_asset = self.state[0]+ \ sum(np.array(self.state[1:(self.stock_dim+1)])*np.array(self.state[(self.stock_dim+1):(self.stock_dim*2+1)])) #print("begin_total_asset:{}".format(begin_total_asset)) argsort_actions = np.argsort(actions) sell_index = argsort_actions[:np.where(actions < 0)[0].shape[0]] buy_index = argsort_actions[::-1][:np.where(actions > 0)[0].shape[0]] for index in sell_index: # print('take sell action'.format(actions[index])) self._sell_stock(index, actions[index]) for index in buy_index: # print('take buy action: {}'.format(actions[index])) self._buy_stock(index, actions[index]) self.day += 1 self.data = self.df.loc[self.day,:] #load next state # print("stock_shares:{}".format(self.state[29:])) self.state = [self.state[0]] + \ [self.data.close] + \ list(self.state[(self.stock_dim+1):(self.stock_dim*2+1)]) + \ sum([[self.data[tech]] for tech in self.tech_indicator_list ], [])+ \ [self.data.open] + \ [self.data.high] + \ [self.data.low] +\ [self.data.daily_return] end_total_asset = self.state[0]+ \ sum(np.array(self.state[1:(self.stock_dim+1)])*np.array(self.state[(self.stock_dim+1):(self.stock_dim*2+1)])) self.asset_memory.append(end_total_asset) self.date_memory.append(self.data.date) self.close_price_memory.append(self.data.close) #print("end_total_asset:{}".format(end_total_asset)) self.reward = end_total_asset - begin_total_asset # print("step_reward:{}".format(self.reward)) self.rewards_memory.append(self.reward) self.reward = self.reward*self.reward_scaling return self.state, self.reward, self.terminal, {} def reset(self): self.asset_memory = [self.initial_amount] self.day = 0 self.data = self.df.loc[self.day,:] self.cost = 0 self.trades = 0 
self.terminal = False self.rewards_memory = [] self.actions_memory=[] self.date_memory=[self.data.date] #initiate state self.state = [self.initial_amount] + \ [self.data.close] + \ [0]*self.stock_dim + \ sum([[self.data[tech]] for tech in self.tech_indicator_list ], [])+ \ [self.data.open] + \ [self.data.high] + \ [self.data.low] +\ [self.data.daily_return] return self.state def render(self, mode='human'): return self.state def save_asset_memory(self): date_list = self.date_memory asset_list = self.asset_memory #print(len(date_list)) #print(len(asset_list)) df_account_value = pd.DataFrame({'date':date_list,'account_value':asset_list}) return df_account_value def save_action_memory(self): # date and close price length must match actions length date_list = self.date_memory[:-1] close_price_list = self.close_price_memory[:-1] action_list = self.actions_memory df_actions = pd.DataFrame({'date':date_list,'actions':action_list,'close_price':close_price_list}) return df_actions def _seed(self, seed=None): self.np_random, seed = seeding.np_random(seed) return [seed]
_____no_output_____
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
5.3 Initialize Environment

* **stock dimension**: the number of unique stock tickers we use
* **hmax**: the maximum number of shares to buy or sell per trade
* **initial amount**: the amount of money we use to trade at the beginning
* **transaction cost percentage**: a per-share rate charged on every trade
* **tech_indicator_list**: a list of technical indicator names (modified from config.py)
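To make the action convention concrete: the policy emits a value in [-1, 1] and the environment rescales it by hmax to get a share count, mirroring the `actions = actions * self.hmax` line in SingleStockEnv above. A tiny sketch with hypothetical action values:

```python
# Hypothetical action values from a policy, rescaled to share counts by hmax.
hmax = 200
for a in (-1.0, -0.25, 0.0, 0.6, 1.0):
    shares = a * hmax
    side = 'sell' if shares < 0 else ('hold' if shares == 0 else 'buy')
    print(f"action {a:+.2f} -> {side} {abs(shares):.0f} shares")
```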
## we store the stockstats technical indicator column names in config.py ## check https://github.com/jealous/stockstats for different names tech_indicator_list # the stock dimension is 1, because we only use the price data of AAPL. len(train.tic.unique()) # account balance + close price + shares + technical indicators + open-high-low-price + 1 returns stock_dimension = len(train.tic.unique()) state_space = 1 + 2*stock_dimension + len(tech_indicator_list)*stock_dimension + 4*stock_dimension print(state_space) env_setup = EnvSetup(stock_dim = stock_dimension, state_space = state_space, hmax = 200, initial_amount = 100000, transaction_cost_pct = 0.001, tech_indicator_list = tech_indicator_list) env_train = env_setup.create_env_training(data = train, env_class = SingleStockEnv) train.head()
_____no_output_____
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Part 6: Implement DRL Algorithms

* The implementation of the DRL algorithms is based on **OpenAI Baselines** and **Stable Baselines**. Stable Baselines is a fork of OpenAI Baselines, with a major structural refactoring and code cleanups.
* The FinRL library includes fine-tuned standard DRL algorithms, such as DQN, DDPG, Multi-Agent DDPG, PPO, SAC, A2C and TD3. We also allow users to design their own DRL algorithms by adapting these implementations.
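For reference, DRLAgent is a convenience layer over Stable Baselines. A minimal sketch of training a model on the environment directly is shown below; the hyperparameter values are illustrative assumptions, not FinRL's defaults.

```python
from stable_baselines import PPO2

# Sketch: train a Stable Baselines PPO model directly on the training env.
# (DRLAgent.train_PPO below wraps roughly this, plus model saving/logging.)
sketch_model = PPO2('MlpPolicy', env_train,
                    n_steps=128, ent_coef=0.005,
                    learning_rate=0.00025, verbose=0)
sketch_model.learn(total_timesteps=10000)
```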
agent = DRLAgent(env = env_train)
_____no_output_____
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Model Training: 5 models, A2C, DDPG, PPO, TD3, SAC

Model 1: A2C
## default hyperparameters in config file config.A2C_PARAMS print("==============Model Training===========") now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M') a2c_params_tuning = {'n_steps':5, 'ent_coef':0.005, 'learning_rate':0.0007, 'verbose':0, 'timesteps':100000} model_a2c = agent.train_A2C(model_name = "A2C_{}".format(now), model_params = a2c_params_tuning)
==============Model Training=========== begin_total_asset:100000 end_total_asset:176934.7576968735 total_reward:76934.75769687351 total_cost: 5882.835153967686 total_trades: 2484 Sharpe: 0.46981434691347806 ================================= begin_total_asset:100000 end_total_asset:595867.5745766863 total_reward:495867.57457668625 total_cost: 4290.078180151586 total_trades: 2514 Sharpe: 0.8764031127847676 ================================= begin_total_asset:100000 end_total_asset:583671.8077524664 total_reward:483671.8077524664 total_cost: 5838.791503323599 total_trades: 2512 Sharpe: 0.8828870827729837 ================================= begin_total_asset:100000 end_total_asset:637429.0815745457 total_reward:537429.0815745457 total_cost: 3895.962820358061 total_trades: 2514 Sharpe: 0.8993083850920852 ================================= begin_total_asset:100000 end_total_asset:766699.1715777694 total_reward:666699.1715777694 total_cost: 1336.049787657923 total_trades: 2515 Sharpe: 0.9528759647152936 ================================= begin_total_asset:100000 end_total_asset:882677.1870489779 total_reward:782677.1870489779 total_cost: 785.3824416674332 total_trades: 2515 Sharpe: 1.000739173295064 ================================= begin_total_asset:100000 end_total_asset:927423.8880478856 total_reward:827423.8880478856 total_cost: 254.9934727955073 total_trades: 2515 Sharpe: 1.0182559497960086 ================================= begin_total_asset:100000 end_total_asset:1003931.2516248588 total_reward:903931.2516248588 total_cost: 103.18390643660977 total_trades: 2515 Sharpe: 1.0458180695295847 ================================= begin_total_asset:100000 end_total_asset:1034917.0496185564 total_reward:934917.0496185564 total_cost: 115.83752941795201 total_trades: 2515 Sharpe: 1.0560390646001476 ================================= begin_total_asset:100000 end_total_asset:1028252.0867060601 total_reward:928252.0867060601 total_cost: 504.6352044569054 total_trades: 2515 Sharpe: 1.0539729580189 ================================= begin_total_asset:100000 end_total_asset:1012919.8832652074 total_reward:912919.8832652074 total_cost: 1087.4567894795407 total_trades: 2515 Sharpe: 1.049357005305756 ================================= begin_total_asset:100000 end_total_asset:1009170.4684736083 total_reward:909170.4684736083 total_cost: 330.15603324727704 total_trades: 2515 Sharpe: 1.0477097038943333 ================================= begin_total_asset:100000 end_total_asset:1008728.6930000533 total_reward:908728.6930000533 total_cost: 105.3839081911078 total_trades: 2515 Sharpe: 1.0473237020347772 ================================= begin_total_asset:100000 end_total_asset:1066405.9457223369 total_reward:966405.9457223369 total_cost: 99.93001295428193 total_trades: 2515 Sharpe: 1.0661702887529563 ================================= begin_total_asset:100000 end_total_asset:1076095.1269021085 total_reward:976095.1269021085 total_cost: 99.8999331866183 total_trades: 2515 Sharpe: 1.0691763688716032 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1076202.7779122659 total_reward:976202.7779122659 total_cost: 99.89889871461293 total_trades: 2515 Sharpe: 1.0692246929592815 ================================= begin_total_asset:100000 end_total_asset:1076713.6513732625 total_reward:976713.6513732625 total_cost: 
99.89764339307739 total_trades: 2515 Sharpe: 1.0693742840547629 ================================= begin_total_asset:100000 end_total_asset:1073821.6024768997 total_reward:973821.6024768997 total_cost: 99.89993767998253 total_trades: 2515 Sharpe: 1.0684508094123852 ================================= begin_total_asset:100000 end_total_asset:1071677.1316402173 total_reward:971677.1316402173 total_cost: 99.89588509830196 total_trades: 2515 Sharpe: 1.0678225871360378 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1073289.289490049 total_reward:973289.289490049 total_cost: 99.89958180492658 total_trades: 2515 Sharpe: 1.0683035175424689 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1075447.4353602997 total_reward:975447.4353602997 total_cost: 99.89794734264501 total_trades: 2515 Sharpe: 1.0689847753454724 ================================= begin_total_asset:100000 end_total_asset:1049675.3015436474 total_reward:949675.3015436474 total_cost: 100.54921497903216 total_trades: 2515 Sharpe: 1.060809128824055 ================================= begin_total_asset:100000 end_total_asset:1047590.4577336748 total_reward:947590.4577336748 total_cost: 101.89706915307313 total_trades: 2514 Sharpe: 1.0602217979823982 ================================= begin_total_asset:100000 end_total_asset:1047776.4501636802 total_reward:947776.4501636802 total_cost: 102.07005234048698 total_trades: 2515 Sharpe: 1.0602399076152227 ================================= begin_total_asset:100000 end_total_asset:1012040.9935228077 total_reward:912040.9935228077 total_cost: 106.2468007798095 total_trades: 2515 Sharpe: 1.0486006596980257 ================================= begin_total_asset:100000 end_total_asset:981578.9094083403 total_reward:881578.9094083403 total_cost: 110.2152610478695 total_trades: 2514 Sharpe: 1.0376627607165896 ================================= begin_total_asset:100000 end_total_asset:1011185.4051029399 total_reward:911185.4051029399 total_cost: 106.32009020648678 total_trades: 2515 Sharpe: 1.0481405595104392 ================================= begin_total_asset:100000 
end_total_asset:921929.9658751409 total_reward:821929.9658751409 total_cost: 116.26213817031766 total_trades: 2513 Sharpe: 1.0158525285971 ================================= begin_total_asset:100000 end_total_asset:937270.7163539234 total_reward:837270.7163539234 total_cost: 116.24364382860685 total_trades: 2515 Sharpe: 1.0219615653157366 ================================= begin_total_asset:100000 end_total_asset:1006007.6539997341 total_reward:906007.6539997341 total_cost: 106.56197241422906 total_trades: 2515 Sharpe: 1.0462902027048853 ================================= begin_total_asset:100000 end_total_asset:988280.4663097259 total_reward:888280.4663097259 total_cost: 107.06134741607173 total_trades: 2515 Sharpe: 1.0401128283240537 ================================= Training time (A2C): 3.6911093990008035 minutes
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Model 2: DDPG
## default hyperparameters in config file config.DDPG_PARAMS print("==============Model Training===========") now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M') ddpg_params_tuning = { 'batch_size': 128, 'buffer_size':100000, 'verbose':0, 'timesteps':50000} model_ddpg = agent.train_DDPG(model_name = "DDPG_{}".format(now), model_params = ddpg_params_tuning)
==============Model Training=========== WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/ddpg/policies.py:136: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version. Instructions for updating: Use keras.layers.Dense instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/tf_util.py:449: The name tf.get_collection is deprecated. Please use tf.compat.v1.get_collection instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/tf_util.py:449: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/ddpg/ddpg.py:94: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/ddpg/ddpg.py:444: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/tf_util.py:432: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead. begin_total_asset:100000 end_total_asset:99995.7058569457 total_reward:-4.294143054299639 total_cost: 0.17149700678428698 total_trades: 10 Sharpe: -0.4393937514363672 ================================= begin_total_asset:100000 end_total_asset:125223.8481070335 total_reward:25223.848107033496 total_cost: 693.701142432137 total_trades: 1159 Sharpe: 0.22317075597890754 ================================= begin_total_asset:100000 end_total_asset:78872.9656911464 total_reward:-21127.034308853603 total_cost: 354.44841880019516 total_trades: 270 Sharpe: -0.31473719368550507 ================================= begin_total_asset:100000 end_total_asset:101105.50020441337 total_reward:1105.5002044133726 total_cost: 158.41803119077085 total_trades: 523 Sharpe: 0.05295084331155536 ================================= begin_total_asset:100000 end_total_asset:92841.32190924165 total_reward:-7158.678090758345 total_cost: 285.19241356424914 total_trades: 441 Sharpe: -0.04400567053352091 ================================= begin_total_asset:100000 end_total_asset:100098.01839813044 total_reward:98.01839813044353 total_cost: 317.5804545814511 total_trades: 529 Sharpe: 0.08613603512851835 ================================= begin_total_asset:100000 end_total_asset:92739.4599723879 total_reward:-7260.540027612107 total_cost: 191.08645151072778 total_trades: 309 Sharpe: -0.05614602909156426 ================================= begin_total_asset:100000 end_total_asset:184718.08681328793 total_reward:84718.08681328793 total_cost: 413.2914850454691 total_trades: 1474 Sharpe: 0.468037661266039 ================================= begin_total_asset:100000 end_total_asset:348737.8082337941 total_reward:248737.80823379412 total_cost: 1077.897208581877 total_trades: 2325 Sharpe: 0.8488781572304104 ================================= begin_total_asset:100000 end_total_asset:1066685.5776203808 total_reward:966685.5776203808 total_cost: 104.86199927912372 total_trades: 2515 Sharpe: 1.0662577405642613 ================================= begin_total_asset:100000 end_total_asset:546140.3665567372 total_reward:446140.3665567372 total_cost: 1984.94501164637 total_trades: 2039 Sharpe: 1.0962526319196348 ================================= begin_total_asset:100000 end_total_asset:725392.1516885572 total_reward:625392.1516885572 
total_cost: 1929.21672055331 total_trades: 2367 Sharpe: 1.0850685606375767 ================================= begin_total_asset:100000 end_total_asset:1197963.7491200864 total_reward:1097963.7491200864 total_cost: 775.1689520095539 total_trades: 2515 Sharpe: 1.1368189820751509 ================================= begin_total_asset:100000 end_total_asset:742963.9653327786 total_reward:642963.9653327786 total_cost: 4533.239665881099 total_trades: 2515 Sharpe: 1.1079544079376524 ================================= begin_total_asset:100000 end_total_asset:1144761.2711953667 total_reward:1044761.2711953667 total_cost: 3276.7738260039214 total_trades: 2515 Sharpe: 1.1819869523465631 ================================= begin_total_asset:100000 end_total_asset:1037986.4133432165 total_reward:937986.4133432165 total_cost: 1362.696404204281 total_trades: 2515 Sharpe: 1.074102780542535 ================================= begin_total_asset:100000 end_total_asset:379800.5713001307 total_reward:279800.5713001307 total_cost: 561.097958557322 total_trades: 1108 Sharpe: 1.017939795759656 ================================= begin_total_asset:100000 end_total_asset:1057570.3289890413 total_reward:957570.3289890413 total_cost: 230.8395712170385 total_trades: 2515 Sharpe: 1.0649367921722361 ================================= begin_total_asset:100000 end_total_asset:1262476.1474488103 total_reward:1162476.1474488103 total_cost: 3144.3996860926018 total_trades: 2515 Sharpe: 1.192124716528578 ================================= begin_total_asset:100000 end_total_asset:1082336.6108373601 total_reward:982336.6108373601 total_cost: 4391.917908269401 total_trades: 2515 Sharpe: 1.2316074665722823 ================================= begin_total_asset:100000 end_total_asset:1073915.1368067395 total_reward:973915.1368067395 total_cost: 3961.0404912539616 total_trades: 2515 Sharpe: 1.182975331285516 ================================= begin_total_asset:100000 end_total_asset:995355.5827891508 total_reward:895355.5827891508 total_cost: 2987.380600884907 total_trades: 2275 Sharpe: 1.2184826630430359 ================================= begin_total_asset:100000 end_total_asset:1265155.931393874 total_reward:1165155.931393874 total_cost: 3421.904957608416 total_trades: 2515 Sharpe: 1.176230159082932 ================================= begin_total_asset:100000 end_total_asset:354591.09329952736 total_reward:254591.09329952736 total_cost: 2402.9491733768123 total_trades: 1728 Sharpe: 1.1527467961622158 ================================= begin_total_asset:100000 end_total_asset:803623.4664680425 total_reward:703623.4664680425 total_cost: 4972.748300329915 total_trades: 2023 Sharpe: 1.2292805568673504 ================================= begin_total_asset:100000 end_total_asset:955506.5108108347 total_reward:855506.5108108347 total_cost: 3991.8851063311604 total_trades: 2515 Sharpe: 1.0580866091966117 ================================= begin_total_asset:100000 end_total_asset:1155244.3521296869 total_reward:1055244.3521296869 total_cost: 948.3196589027705 total_trades: 2515 Sharpe: 1.115323871625628 ================================= begin_total_asset:100000 end_total_asset:558497.3118469787 total_reward:458497.31184697873 total_cost: 764.4295498483248 total_trades: 2160 Sharpe: 0.9180767731547095 ================================= begin_total_asset:100000 end_total_asset:1066247.2705421762 total_reward:966247.2705421762 total_cost: 1693.194371330125 total_trades: 2515 Sharpe: 1.0701861305505607 ================================= begin_total_asset:100000 
end_total_asset:1182423.7881506896 total_reward:1082423.7881506896 total_cost: 4612.575868804704 total_trades: 2515 Sharpe: 1.1897017571714126 ================================= begin_total_asset:100000 end_total_asset:352639.7791066152 total_reward:252639.77910661523 total_cost: 2203.071873404569 total_trades: 1706 Sharpe: 0.9297194480752586 ================================= begin_total_asset:100000 end_total_asset:512017.8187501993 total_reward:412017.8187501993 total_cost: 3237.2744638466706 total_trades: 2074 Sharpe: 1.2296052920172373 ================================= begin_total_asset:100000 end_total_asset:1026617.409790139 total_reward:926617.409790139 total_cost: 2235.833171563652 total_trades: 2515 Sharpe: 1.0634461951643783 ================================= begin_total_asset:100000 end_total_asset:432922.27221052325 total_reward:332922.27221052325 total_cost: 1965.1113230232177 total_trades: 1676 Sharpe: 0.9558190650202323 ================================= begin_total_asset:100000 end_total_asset:1136563.8991799501 total_reward:1036563.8991799501 total_cost: 4048.353596072037 total_trades: 2515 Sharpe: 1.1567139637696162 ================================= begin_total_asset:100000 end_total_asset:457739.8968391317 total_reward:357739.8968391317 total_cost: 1451.009792129765 total_trades: 1722 Sharpe: 0.9887615430292522 ================================= begin_total_asset:100000 end_total_asset:832672.3654919548 total_reward:732672.3654919548 total_cost: 2254.518771357834 total_trades: 2117 Sharpe: 1.0499743963093453 ================================= begin_total_asset:100000 end_total_asset:903730.0291357596 total_reward:803730.0291357596 total_cost: 4160.4464784263955 total_trades: 2515 Sharpe: 1.0537325331716016 ================================= begin_total_asset:100000 end_total_asset:868039.507615209 total_reward:768039.507615209 total_cost: 1324.6054822848214 total_trades: 2515 Sharpe: 1.0055657486465792 ================================= Training time (DDPG): 7.679340577125549 minutes
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Model 3: PPO
config.PPO_PARAMS print("==============Model Training===========") now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M') ppo_params_tuning = {'n_steps':128, 'nminibatches': 4, 'ent_coef':0.005, 'learning_rate':0.00025, 'verbose':0, 'timesteps':50000} model_ppo = agent.train_PPO(model_name = "PPO_{}".format(now), model_params = ppo_params_tuning)
==============Model Training=========== WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/tf_util.py:191: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/tf_util.py:200: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/policies.py:116: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/input.py:25: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/policies.py:561: flatten (from tensorflow.python.layers.core) is deprecated and will be removed in a future version. Instructions for updating: Use keras.layers.flatten instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/layers/core.py:332: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version. Instructions for updating: Please use `layer.__call__` method instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/tf_layers.py:123: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/distributions.py:418: The name tf.random_normal is deprecated. Please use tf.random.normal instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/ppo2/ppo2.py:190: The name tf.summary.scalar is deprecated. Please use tf.compat.v1.summary.scalar instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/ppo2/ppo2.py:198: The name tf.trainable_variables is deprecated. Please use tf.compat.v1.trainable_variables instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/math_grad.py:1424: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/ppo2/ppo2.py:206: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/ppo2/ppo2.py:240: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/ppo2/ppo2.py:242: The name tf.summary.merge_all is deprecated. Please use tf.compat.v1.summary.merge_all instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/base_class.py:1169: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead. 
begin_total_asset:100000 end_total_asset:467641.36933949846 total_reward:367641.36933949846 total_cost: 6334.431322515711 total_trades: 2512 Sharpe: 0.8257905133964245 ================================= WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/tf_util.py:502: The name tf.Summary is deprecated. Please use tf.compat.v1.Summary instead. begin_total_asset:100000 end_total_asset:598301.9358692836 total_reward:498301.9358692836 total_cost: 6714.914704657209 total_trades: 2514 Sharpe: 0.9104137553610742 ================================= begin_total_asset:100000 end_total_asset:487324.45261743915 total_reward:387324.45261743915 total_cost: 6694.683756348197 total_trades: 2513 Sharpe: 0.8778200252832747 ================================= begin_total_asset:100000 end_total_asset:376587.1472550176 total_reward:276587.1472550176 total_cost: 6498.996226416659 total_trades: 2500 Sharpe: 0.9265883206757147 ================================= begin_total_asset:100000 end_total_asset:411775.78502221894 total_reward:311775.78502221894 total_cost: 6672.303574431684 total_trades: 2509 Sharpe: 0.8663978354433025 ================================= begin_total_asset:100000 end_total_asset:443250.46347580303 total_reward:343250.46347580303 total_cost: 6792.764421390525 total_trades: 2515 Sharpe: 0.8567059823183628 ================================= begin_total_asset:100000 end_total_asset:547712.6511708717 total_reward:447712.6511708717 total_cost: 6901.285057676673 total_trades: 2511 Sharpe: 0.9463287997507608 ================================= begin_total_asset:100000 end_total_asset:534293.6779391705 total_reward:434293.6779391705 total_cost: 7026.2048333167895 total_trades: 2515 Sharpe: 0.9103397038651807 ================================= begin_total_asset:100000 end_total_asset:767260.8108358055 total_reward:667260.8108358055 total_cost: 6963.422003443312 total_trades: 2515 Sharpe: 0.9969063532868196 ================================= begin_total_asset:100000 end_total_asset:862184.7490450073 total_reward:762184.7490450073 total_cost: 6934.620506435971 total_trades: 2515 Sharpe: 1.0262666662712374 ================================= begin_total_asset:100000 end_total_asset:877375.7041656245 total_reward:777375.7041656245 total_cost: 6802.841792068231 total_trades: 2515 Sharpe: 1.0294698704729517 ================================= Training time (PPO): 0.8607302109400431 minutes
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Model 4: TD3
## default hyperparameters in config file config.TD3_PARAMS print("==============Model Training===========") now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M') td3_params_tuning = { 'batch_size': 128, 'buffer_size':200000, 'learning_rate': 0.0002, 'verbose':0, 'timesteps':50000} model_td3 = agent.train_TD3(model_name = "TD3_{}".format(now), model_params = td3_params_tuning)
==============Model Training=========== begin_total_asset:100000 end_total_asset:766882.06486716 total_reward:666882.06486716 total_cost: 122.06275547719093 total_trades: 2502 Sharpe: 0.9471484567377753 ================================= begin_total_asset:100000 end_total_asset:1064261.8436314124 total_reward:964261.8436314124 total_cost: 99.89867026527524 total_trades: 2515 Sharpe: 1.065522879677433 ================================= begin_total_asset:100000 end_total_asset:1065410.0297433857 total_reward:965410.0297433857 total_cost: 99.89616402166861 total_trades: 2515 Sharpe: 1.0658930407952774 ================================= begin_total_asset:100000 end_total_asset:1062109.9810929787 total_reward:962109.9810929787 total_cost: 99.8953728848598 total_trades: 2515 Sharpe: 1.0648226545239186 ================================= begin_total_asset:100000 end_total_asset:1066748.4141685755 total_reward:966748.4141685755 total_cost: 99.89584806687385 total_trades: 2515 Sharpe: 1.06632128735182 ================================= begin_total_asset:100000 end_total_asset:1064717.8522568534 total_reward:964717.8522568534 total_cost: 99.89954986181223 total_trades: 2515 Sharpe: 1.0656773439462421 ================================= begin_total_asset:100000 end_total_asset:1063618.256994051 total_reward:963618.2569940509 total_cost: 99.89578810058728 total_trades: 2515 Sharpe: 1.065318652036845 ================================= begin_total_asset:100000 end_total_asset:1065101.978900172 total_reward:965101.978900172 total_cost: 99.90007986039683 total_trades: 2515 Sharpe: 1.0657876842044585 ================================= begin_total_asset:100000 end_total_asset:1065345.1607699129 total_reward:965345.1607699129 total_cost: 99.89532260010348 total_trades: 2515 Sharpe: 1.0658841213300805 ================================= begin_total_asset:100000 end_total_asset:1066239.1006302314 total_reward:966239.1006302314 total_cost: 99.89946311191612 total_trades: 2515 Sharpe: 1.0661338428981897 ================================= begin_total_asset:100000 end_total_asset:1064642.5474156558 total_reward:964642.5474156558 total_cost: 99.89530934433792 total_trades: 2515 Sharpe: 1.0656451438551164 ================================= begin_total_asset:100000 end_total_asset:1066120.7395977282 total_reward:966120.7395977282 total_cost: 99.89889606461536 total_trades: 2515 Sharpe: 1.0661139044989152 ================================= begin_total_asset:100000 end_total_asset:1065188.3816049164 total_reward:965188.3816049164 total_cost: 99.89524269603959 total_trades: 2515 Sharpe: 1.0658240771799103 ================================= begin_total_asset:100000 end_total_asset:1062915.9535119308 total_reward:962915.9535119308 total_cost: 99.89634893255415 total_trades: 2515 Sharpe: 1.065112303207818 ================================= begin_total_asset:100000 end_total_asset:1066825.939915284 total_reward:966825.939915284 total_cost: 99.89954193874149 total_trades: 2515 Sharpe: 1.0663221666084541 ================================= begin_total_asset:100000 end_total_asset:1064761.0628751868 total_reward:964761.0628751868 total_cost: 99.89933540212928 total_trades: 2515 Sharpe: 1.0656814880277525 ================================= begin_total_asset:100000 end_total_asset:1068713.0753036987 total_reward:968713.0753036987 total_cost: 99.89722384904954 total_trades: 2515 Sharpe: 1.0669419606856339 ================================= begin_total_asset:100000 end_total_asset:1066851.9421668774 total_reward:966851.9421668774 total_cost: 
99.8963950145522 total_trades: 2515 Sharpe: 1.066357578203035 ================================= begin_total_asset:100000 end_total_asset:1066403.710948116 total_reward:966403.7109481159 total_cost: 99.89754595417725 total_trades: 2515 Sharpe: 1.066210452143935 ================================= Training time (DDPG): 4.460090506076813 minutes
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Model 5: SAC
## default hyperparameters in config file config.SAC_PARAMS print("==============Model Training===========") now = datetime.datetime.now().strftime('%Y%m%d-%Hh%M') sac_params_tuning={ 'batch_size': 64, 'buffer_size': 100000, 'ent_coef':'auto_0.1', 'learning_rate': 0.0001, 'learning_starts':200, 'timesteps': 50000, 'verbose': 0} model_sac = agent.train_SAC(model_name = "SAC_{}".format(now), model_params = sac_params_tuning)
==============Model Training=========== WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/sac/policies.py:63: The name tf.log is deprecated. Please use tf.math.log instead. begin_total_asset:100000 end_total_asset:628197.7965312647 total_reward:528197.7965312647 total_cost: 161.17551531590826 total_trades: 2493 Sharpe: 0.8786969593304516 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 
1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= begin_total_asset:100000 end_total_asset:1077672.4272528668 total_reward:977672.4272528668 total_cost: 99.89875688362122 total_trades: 2515 Sharpe: 1.069666896575114 ================================= Training time (SAC): 5.726297716299693 minutes
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Trading
* We use the environment class we initialized in 5.3 to create a stock trading environment.
* Assume that we have $100,000 of initial capital at 2019-01-01.
* We use the trained model (model_td3 in the code below) to trade AAPL.
trade.head() # create trading env env_trade, obs_trade = env_setup.create_env_trading(data = trade, env_class = SingleStockEnv) ## make a prediction and get the account value change df_account_value, df_actions = DRLAgent.DRL_prediction(model=model_td3, test_data = trade, test_env = env_trade, test_obs = obs_trade)
begin_total_asset:100000 end_total_asset:308768.3018266945 total_reward:208768.30182669451 total_cost: 99.89708306503296 total_trades: 439 Sharpe: 1.9188345294206783 =================================
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Part 7: Backtesting Performance

Backtesting plays a key role in evaluating the performance of a trading strategy. An automated backtesting tool is preferred because it reduces human error. We usually use the Quantopian pyfolio package to backtest our trading strategies. It is easy to use and consists of various individual plots that provide a comprehensive image of the performance of a trading strategy.

7.1 BackTestStats

Pass in df_account_value; this information is stored in the env class.
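To see roughly what such a backtest summary involves, here is a minimal sketch that uses pyfolio directly. It assumes df_account_value has 'date' and 'account_value' columns (as produced by DRL_prediction above); treat it as an illustration, not the exact BackTestStats implementation.

```
import pandas as pd
from pyfolio import timeseries

# convert the account-value curve into a daily returns series
# (assumes 'date' and 'account_value' columns; adjust if your frame differs)
curve = df_account_value.copy()
curve.index = pd.to_datetime(curve['date'])
daily_returns = curve['account_value'].pct_change().dropna()

# pyfolio's perf_stats reports annual return, Sharpe ratio, max drawdown, etc.
perf_stats = timeseries.perf_stats(returns=daily_returns)
print(perf_stats)
```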
print("==============Get Backtest Results===========") perf_stats_all = BackTestStats(account_value=df_account_value) perf_stats_all = pd.DataFrame(perf_stats_all) perf_stats_all.to_csv("./"+config.RESULTS_DIR+"/perf_stats_all_"+now+'.csv')
==============Get Backtest Results=========== annual return: 104.80443553947256 sharpe ratio: 1.9188345294206783 Annual return 0.907331 Cumulative returns 2.087683 Annual volatility 0.374136 Sharpe ratio 1.918835 Calmar ratio 2.887121 Stability 0.909127 Max drawdown -0.314268 Omega ratio 1.442243 Sortino ratio 2.903654 Skew NaN Kurtosis NaN Tail ratio 1.049744 Daily value at risk -0.044288 Alpha 0.000000 Beta 1.000000 dtype: float64
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
7.2 BackTestPlot
print("==============Compare to AAPL itself buy-and-hold===========") %matplotlib inline BackTestPlot(account_value=df_account_value, baseline_ticker = 'AAPL')
==============Compare to AAPL itself buy-and-hold=========== annual return: 104.80443553947256 sharpe ratio: 1.9188345294206783 [*********************100%***********************] 1 of 1 completed Shape of DataFrame: (440, 7)
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
7.3 Baseline Stats
print("==============Get Baseline Stats===========") baesline_perf_stats=BaselineStats('AAPL') print("==============Get Baseline Stats===========") baesline_perf_stats=BaselineStats('^GSPC')
==============Get Baseline Stats=========== [*********************100%***********************] 1 of 1 completed Shape of DataFrame: (440, 7) Annual return 0.176845 Cumulative returns 0.328857 Annual volatility 0.270644 Sharpe ratio 0.739474 Calmar ratio 0.521283 Stability 0.339596 Max drawdown -0.339250 Omega ratio 1.174869 Sortino ratio 1.015508 Skew NaN Kurtosis NaN Tail ratio 0.659621 Daily value at risk -0.033304 Alpha 0.000000 Beta 1.000000 dtype: float64
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
7.4 Compare to Stock Market Index
print("==============Compare to S&P 500===========") %matplotlib inline # S&P 500: ^GSPC # Dow Jones Index: ^DJI # NASDAQ 100: ^NDX BackTestPlot(df_account_value, baseline_ticker = '^GSPC')
_____no_output_____
MIT
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
Multi-Layer Perceptron, MNIST
---
In this notebook, we will train an MLP to classify images from the [MNIST database](http://yann.lecun.com/exdb/mnist/) of hand-written digits. The process will be broken down into the following steps:
1. Load and visualize the data
2. Define a neural network
3. Train the model
4. Evaluate the performance of our trained model on a test dataset

Before we begin, we have to import the necessary libraries for working with data and PyTorch.
# import libraries import torch import numpy as np
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
--- Load and Visualize the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)

Downloading may take a few moments, and you should see your progress as the data is loading. You may also choose to change the `batch_size` if you want to load more data at a time.

This cell will create DataLoaders for each of our datasets.
# The MNIST datasets are hosted on yann.lecun.com that has moved under CloudFlare protection # Run this script to enable the datasets download # Reference: https://github.com/pytorch/vision/issues/1938 from six.moves import urllib opener = urllib.request.build_opener() opener.addheaders = [('User-agent', 'Mozilla/5.0')] urllib.request.install_opener(opener) from torchvision import datasets import torchvision.transforms as transforms # number of subprocesses to use for data loading num_workers = 0 # how many samples per batch to load batch_size = 20 # convert data to torch.FloatTensor transform = transforms.ToTensor() # choose the training and test datasets train_data = datasets.MNIST(root='data', train=True, download=True, transform=transform) test_data = datasets.MNIST(root='data', train=False, download=True, transform=transform) # prepare data loaders train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers) test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers)
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz Processing... Done!
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
Visualize a Batch of Training Data

The first step in a classification task is to take a look at the data, make sure it is loaded in correctly, then make any initial observations about patterns in that data.
import matplotlib.pyplot as plt
%matplotlib inline

# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy()

# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
    ax.imshow(np.squeeze(images[idx]), cmap='gray')
    # print out the correct label for each image
    # .item() gets the value contained in a Tensor
    ax.set_title(str(labels[idx].item()))
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
View an Image in More Detail
img = np.squeeze(images[1]) fig = plt.figure(figsize = (12,12)) ax = fig.add_subplot(111) ax.imshow(img, cmap='gray') width, height = img.shape thresh = img.max()/2.5 for x in range(width): for y in range(height): val = round(img[x][y],2) if img[x][y] !=0 else 0 ax.annotate(str(val), xy=(y,x), horizontalalignment='center', verticalalignment='center', color='white' if img[x][y]<thresh else 'black')
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)

The architecture takes as input a 784-dim Tensor of pixel values for each image and produces a Tensor of length 10 (our number of classes) that indicates the class scores for an input image. This particular example uses two hidden layers and dropout to avoid overfitting.
import torch.nn as nn import torch.nn.functional as F # define the NN architecture class Net(nn.Module): def __init__(self): super(Net, self).__init__() # number of hidden nodes in each layer (512) hidden_1 = 512 hidden_2 = 512 # linear layer (784 -> hidden_1) self.fc1 = nn.Linear(28 * 28, hidden_1) # linear layer (n_hidden -> hidden_2) self.fc2 = nn.Linear(hidden_1, hidden_2) # linear layer (n_hidden -> 10) self.fc3 = nn.Linear(hidden_2, 10) # dropout layer (p=0.2) # dropout prevents overfitting of data self.dropout = nn.Dropout(0.2) def forward(self, x): # flatten image input x = x.view(-1, 28 * 28) # add hidden layer, with relu activation function x = F.relu(self.fc1(x)) # add dropout layer x = self.dropout(x) # add hidden layer, with relu activation function x = F.relu(self.fc2(x)) # add dropout layer x = self.dropout(x) # add output layer x = self.fc3(x) return x # initialize the NN model = Net() print(model)
Net( (fc1): Linear(in_features=784, out_features=512, bias=True) (fc2): Linear(in_features=512, out_features=512, bias=True) (fc3): Linear(in_features=512, out_features=10, bias=True) (dropout): Dropout(p=0.2) )
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)

It's recommended that you use cross-entropy loss for classification. If you look at the documentation (linked above), you can see that PyTorch's cross-entropy function applies a softmax function to the output layer *and* then calculates the log loss.
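A quick way to see this equivalence: compare `nn.CrossEntropyLoss` applied to raw scores with an explicit log-softmax followed by negative log-likelihood loss — the two results match (a minimal illustrative check, not part of the training pipeline).

```
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(4, 10)            # raw, unnormalized scores for a batch of 4 images
targets = torch.randint(0, 10, (4,))   # ground-truth class indices

ce_loss = nn.CrossEntropyLoss()(logits, targets)
# equivalent: softmax (in log space) on the output layer, then log loss
manual_loss = F.nll_loss(F.log_softmax(logits, dim=1), targets)

print(torch.allclose(ce_loss, manual_loss))  # True
```

This is also why the forward pass of `Net` above returns the raw output of `fc3` without applying a softmax.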
# specify loss function (categorical cross-entropy) criterion = nn.CrossEntropyLoss() # specify optimizer (stochastic gradient descent) and learning rate = 0.01 optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
--- Train the Network

The steps for training/learning from a batch of data are described in the comments below:
1. Clear the gradients of all optimized variables
2. Forward pass: compute predicted outputs by passing inputs to the model
3. Calculate the loss
4. Backward pass: compute gradient of the loss with respect to model parameters
5. Perform a single optimization step (parameter update)
6. Update average training loss

The following loop trains for 50 epochs; take a look at how the values for the training loss decrease over time. We want it to decrease while also avoiding overfitting the training data.
# number of epochs to train the model n_epochs = 50 model.train() # prep model for training for epoch in range(n_epochs): # monitor training loss train_loss = 0.0 ################### # train the model # ################### for data, target in train_loader: # clear the gradients of all optimized variables optimizer.zero_grad() # forward pass: compute predicted outputs by passing inputs to the model output = model(data) # calculate the loss loss = criterion(output, target) # backward pass: compute gradient of the loss with respect to model parameters loss.backward() # perform a single optimization step (parameter update) optimizer.step() # update running training loss train_loss += loss.item()*data.size(0) # print training statistics # calculate average loss over an epoch train_loss = train_loss/len(train_loader.dataset) print('Epoch: {} \tTraining Loss: {:.6f}'.format( epoch+1, train_loss ))
Epoch: 1 Training Loss: 0.833544 Epoch: 2 Training Loss: 0.321996 Epoch: 3 Training Loss: 0.247905 Epoch: 4 Training Loss: 0.201408 Epoch: 5 Training Loss: 0.169627 Epoch: 6 Training Loss: 0.147488 Epoch: 7 Training Loss: 0.129424 Epoch: 8 Training Loss: 0.116433 Epoch: 9 Training Loss: 0.104333 Epoch: 10 Training Loss: 0.094504 Epoch: 11 Training Loss: 0.085769 Epoch: 12 Training Loss: 0.080728 Epoch: 13 Training Loss: 0.073689 Epoch: 14 Training Loss: 0.067905 Epoch: 15 Training Loss: 0.063251 Epoch: 16 Training Loss: 0.058666 Epoch: 17 Training Loss: 0.055106 Epoch: 18 Training Loss: 0.050979 Epoch: 19 Training Loss: 0.048491 Epoch: 20 Training Loss: 0.046173 Epoch: 21 Training Loss: 0.044311 Epoch: 22 Training Loss: 0.041405 Epoch: 23 Training Loss: 0.038702 Epoch: 24 Training Loss: 0.036634 Epoch: 25 Training Loss: 0.035159 Epoch: 26 Training Loss: 0.033605 Epoch: 27 Training Loss: 0.030255 Epoch: 28 Training Loss: 0.029026 Epoch: 29 Training Loss: 0.028722 Epoch: 30 Training Loss: 0.027026 Epoch: 31 Training Loss: 0.026134 Epoch: 32 Training Loss: 0.022992 Epoch: 33 Training Loss: 0.023809 Epoch: 34 Training Loss: 0.022347 Epoch: 35 Training Loss: 0.021212 Epoch: 36 Training Loss: 0.020292 Epoch: 37 Training Loss: 0.019413 Epoch: 38 Training Loss: 0.019758 Epoch: 39 Training Loss: 0.017851 Epoch: 40 Training Loss: 0.017023 Epoch: 41 Training Loss: 0.016846 Epoch: 42 Training Loss: 0.016187 Epoch: 43 Training Loss: 0.015530 Epoch: 44 Training Loss: 0.014553 Epoch: 45 Training Loss: 0.014781 Epoch: 46 Training Loss: 0.013546 Epoch: 47 Training Loss: 0.013328 Epoch: 48 Training Loss: 0.012698 Epoch: 49 Training Loss: 0.012012 Epoch: 50 Training Loss: 0.012588
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
--- Test the Trained Network

Finally, we test our best model on previously unseen **test data** and evaluate its performance. Testing on unseen data is a good way to check that our model generalizes well. It may also be useful to be granular in this analysis and take a look at how this model performs on each class, as well as at its overall loss and accuracy.
# initialize lists to monitor test loss and accuracy
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))

model.eval()  # prep model for evaluation

for data, target in test_loader:
    # forward pass: compute predicted outputs by passing inputs to the model
    output = model(data)
    # calculate the loss
    loss = criterion(output, target)
    # update test loss
    test_loss += loss.item()*data.size(0)
    # convert output probabilities to predicted class
    _, pred = torch.max(output, 1)
    # compare predictions to true label
    correct = np.squeeze(pred.eq(target.data.view_as(pred)))
    # calculate test accuracy for each object class
    for i in range(batch_size):
        label = target.data[i]
        class_correct[label] += correct[i].item()
        class_total[label] += 1

# calculate and print avg test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))

for i in range(10):
    if class_total[i] > 0:
        print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
            str(i), 100 * class_correct[i] / class_total[i],
            np.sum(class_correct[i]), np.sum(class_total[i])))
    else:
        print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))

print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
    100. * np.sum(class_correct) / np.sum(class_total),
    np.sum(class_correct), np.sum(class_total)))
Test Loss: 0.052876 Test Accuracy of 0: 99% (972/980) Test Accuracy of 1: 99% (1127/1135) Test Accuracy of 2: 98% (1012/1032) Test Accuracy of 3: 98% (992/1010) Test Accuracy of 4: 98% (968/982) Test Accuracy of 5: 98% (875/892) Test Accuracy of 6: 98% (946/958) Test Accuracy of 7: 98% (1010/1028) Test Accuracy of 8: 97% (949/974) Test Accuracy of 9: 98% (990/1009) Test Accuracy (Overall): 98% (9841/10000)
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
Visualize Sample Test Results

This cell displays test images and their labels in this format: `predicted (ground-truth)`. The text will be green for accurately classified examples and red for incorrect predictions.
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)

# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds = torch.max(output, 1)
# prep images for display
images = images.numpy()

# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
    ax.imshow(np.squeeze(images[idx]), cmap='gray')
    ax.set_title("{} ({})".format(str(preds[idx].item()), str(labels[idx].item())),
                 color=("green" if preds[idx]==labels[idx] else "red"))
_____no_output_____
MIT
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
Exercise 1

1.1. Which sigs are valid?

```
P = (887387e452b8eacc4acfde10d9aaf7f6d9a0f975aabb10d006e4da568744d06c, 61de6d95231cd89026e286df3b6ae4a894a3378e393e93a0f45b666329a0ae34)
z, r, s = ec208baa0fc1c19f708a9ca96fdeff3ac3f230bb4a7ba4aede4942ad003c0f60, ac8d1c87e51d0d441be8b3dd5b05c8795b48875dffe00b7ffcfac23010d3a395, 68342ceff8935ededd102dd876ffd6ba72d6a427a3edb13d26eb0781cb423c4
z, r, s = 7c076ff316692a3d7eb3c3bb0f8b1488cf72e1afcd929e29307032997a838a3d, eff69ef2b1bd93a66ed5219add4fb51e11a840f404876325a1e8ffe0529a2c, c7207fee197d27c618aea621406f6bf5ef6fca38681d82b2f06fddbdce6feab6
```

1.2. Make [these tests](/edit/session3/ecc.py) pass

```
ecc.py:S256Test:test_verify
ecc.py:PrivateKeyTest:test_sign
```
# Exercise 1.1 from ecc import S256Point, G, N px = 0x887387e452b8eacc4acfde10d9aaf7f6d9a0f975aabb10d006e4da568744d06c py = 0x61de6d95231cd89026e286df3b6ae4a894a3378e393e93a0f45b666329a0ae34 signatures = ( # (z, r, s) (0xec208baa0fc1c19f708a9ca96fdeff3ac3f230bb4a7ba4aede4942ad003c0f60, 0xac8d1c87e51d0d441be8b3dd5b05c8795b48875dffe00b7ffcfac23010d3a395, 0x68342ceff8935ededd102dd876ffd6ba72d6a427a3edb13d26eb0781cb423c4), (0x7c076ff316692a3d7eb3c3bb0f8b1488cf72e1afcd929e29307032997a838a3d, 0xeff69ef2b1bd93a66ed5219add4fb51e11a840f404876325a1e8ffe0529a2c, 0xc7207fee197d27c618aea621406f6bf5ef6fca38681d82b2f06fddbdce6feab6), ) # initialize the public point # use: S256Point(x-coordinate, y-coordinate) # iterate over signatures # u = z / s, v = r / s # finally, uG+vP should have the x-coordinate equal to r # Exercise 1.2 reload(ecc) run_test(ecc.S256Test('test_verify')) run_test(ecc.PrivateKeyTest('test_sign'))
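One possible way to fill in the loop above, shown only as a sketch: it assumes the ecc module's S256Point supports scalar multiplication and exposes its x-coordinate as a field element with a .num attribute.

```
# hedged sketch of Exercise 1.1 — verify each (z, r, s) against P = (px, py)
point = S256Point(px, py)
for z, r, s in signatures:
    s_inv = pow(s, N-2, N)   # 1/s mod N via Fermat's little theorem (N is prime)
    u = z * s_inv % N        # u = z / s
    v = r * s_inv % N        # v = r / s
    # the signature is valid if the x-coordinate of uG + vP equals r
    print((u*G + v*point).x.num == r)
```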
_____no_output_____
BSD-2-Clause
session3/session3.ipynb
casey-bowman/pb-exercises
Exercise 2

2.1. Verify the DER signature for the hash of "ECDSA is awesome!" for the given SEC pubkey

`z = int.from_bytes(double_sha256('ECDSA is awesome!'), 'big')`

Public Key in SEC Format: 0204519fac3d910ca7e7138f7013706f619fa8f033e6ec6e09370ea38cee6a7574

Signature in DER Format: 304402201f62993ee03fca342fcb45929993fa6ee885e00ddad8de154f268d98f083991402201e1ca12ad140c04e0e022c38f7ce31da426b8009d02832f0b44f39a6b178b7a1
# Exercise 2.1 from ecc import S256Point, Signature from helper import double_sha256 der = bytes.fromhex('304402201f62993ee03fca342fcb45929993fa6ee885e00ddad8de154f268d98f083991402201e1ca12ad140c04e0e022c38f7ce31da426b8009d02832f0b44f39a6b178b7a1') sec = bytes.fromhex('0204519fac3d910ca7e7138f7013706f619fa8f033e6ec6e09370ea38cee6a7574') # message is the double_sha256 of the message "ECDSA is awesome!" z = int.from_bytes(double_sha256(b'ECDSA is awesome!'), 'big') # parse the der format to get the signature # parse the sec format to get the public key # use the verify method on S256Point to validate the signature # WIF Example from helper import encode_base58_checksum secret = 2**256 - 2**200 s = secret.to_bytes(32, 'big') print(encode_base58_checksum(b'\x80'+s)) print(encode_base58_checksum(b'\x80'+s+b'\x01')) print(encode_base58_checksum(b'\xef'+s)) print(encode_base58_checksum(b'\xef'+s+b'\x01'))
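A sketch of how the verification could look, assuming the ecc module provides Signature.parse for DER and S256Point.parse for SEC (in some sessions these parsers are themselves exercises to implement), plus a verify method on the point:

```
# hedged sketch of Exercise 2.1
sig = Signature.parse(der)      # assumes a DER parser on Signature
point = S256Point.parse(sec)    # assumes a SEC parser on S256Point
print(point.verify(z, sig))     # True if the signature is valid for z
```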
_____no_output_____
BSD-2-Clause
session3/session3.ipynb
casey-bowman/pb-exercises
Exercise 3

WIF is the serialization of a Private Key.

3.1. Find the WIF Format of the following:
* \\(2^{256}-2^{199}\\), mainnet, compressed
* \\(2^{256}-2^{201}\\), testnet, uncompressed
* 0dba685b4511dbd3d368e5c4358a1277de9486447af7b3604a69b8d9d8b7889d, mainnet, uncompressed
* 1cca23de92fd1862fb5b76e5f4f50eb082165e5191e116c18ed1a6b24be6a53f, testnet, compressed

3.2. Make [this test](/edit/session3/ecc.py) pass

```
ecc.py:PrivateKeyTest:test_wif
```
# Exercise 3.1 from helper import encode_base58_checksum components = ( # (secret, testnet, compressed) (2**256-2**199, False, True), (2**256-2**201, True, False), (0x0dba685b4511dbd3d368e5c4358a1277de9486447af7b3604a69b8d9d8b7889d, False, False), (0x1cca23de92fd1862fb5b76e5f4f50eb082165e5191e116c18ed1a6b24be6a53f, True, True), ) # iterate through components # get the private key in 32-byte big-endian: num.to_bytes(32, 'big') # prepend b'\x80' for mainnet, b'\xef' for testnet # append b'\x01' for compressed # base58 the whole thing with checksum # print the wif # Exercise 3.2 reload(ecc) run_test(ecc.PrivateKeyTest('test_wif'))
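Following the recipe in the comments (and the WIF example shown under Exercise 2), a sketch of the loop might look like this:

```
# hedged sketch of Exercise 3.1
for secret, testnet, compressed in components:
    # 32-byte big-endian secret
    s = secret.to_bytes(32, 'big')
    # network prefix: 0x80 for mainnet, 0xef for testnet
    prefix = b'\xef' if testnet else b'\x80'
    # compressed keys get a trailing 0x01
    suffix = b'\x01' if compressed else b''
    print(encode_base58_checksum(prefix + s + suffix))
```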
_____no_output_____
BSD-2-Clause
session3/session3.ipynb
casey-bowman/pb-exercises
Exercise 4

4.1. Make [this test](/edit/session3/tx.py) pass

```
tx.py:TxTest:test_parse_version
```
# Exercise 4.1 reload(tx) run_test(tx.TxTest('test_parse_version'))
_____no_output_____
BSD-2-Clause
session3/session3.ipynb
casey-bowman/pb-exercises
Exercise 5

5.1. Make [this test](/edit/session3/tx.py) pass

```
tx.py:TxTest:test_parse_inputs
```
# Exercise 5.1 reload(tx) run_test(tx.TxTest('test_parse_inputs'))
_____no_output_____
BSD-2-Clause
session3/session3.ipynb
casey-bowman/pb-exercises
Exercise 6

6.1. Make [this test](/edit/session3/tx.py) pass

```
tx.py:TxTest:test_parse_outputs
```
# Exercise 6.1 reload(tx) run_test(tx.TxTest('test_parse_outputs'))
_____no_output_____
BSD-2-Clause
session3/session3.ipynb
casey-bowman/pb-exercises
Exercise 7

7.1. Make [this test](/edit/session3/tx.py) pass

```
tx.py:TxTest:test_parse_locktime
```
# Exercise 7.1 reload(tx) run_test(tx.TxTest('test_parse_locktime'))
_____no_output_____
BSD-2-Clause
session3/session3.ipynb
casey-bowman/pb-exercises