Aside: Image processing via convolutions

As a fun way to both check your implementation and gain a better understanding of the kinds of operations convolutional layers can perform, we will set up an input containing two images and manually define filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images, and we can then visualize the results as a sanity check.

Colab Users Only

Please execute the cell below to copy the two example images (a kitten and a puppy) to the Colab VM.
# Colab users only! %mkdir -p cs231n/notebook_images %cd drive/My\ Drive/$FOLDERNAME/cs231n %cp -r notebook_images/ /content/cs231n/ %cd /content/ from imageio import imread from PIL import Image kitten = imread('cs231n/notebook_images/kitten.jpg') puppy = imread('cs231n/notebook_images/puppy.jpg') # kitten is wide, and puppy is already square d = kitten.shape[1] - kitten.shape[0] kitten_cropped = kitten[:, d//2:-d//2, :] img_size = 200 # Make this smaller if it runs too slow resized_puppy = np.array(Image.fromarray(puppy).resize((img_size, img_size))) resized_kitten = np.array(Image.fromarray(kitten_cropped).resize((img_size, img_size))) x = np.zeros((2, 3, img_size, img_size)) x[0, :, :, :] = resized_puppy.transpose((2, 0, 1)) x[1, :, :, :] = resized_kitten.transpose((2, 0, 1)) # Set up a convolutional weights holding 2 filters, each 3x3 w = np.zeros((2, 3, 3, 3)) # The first filter converts the image to grayscale. # Set up the red, green, and blue channels of the filter. w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]] w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]] w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]] # Second filter detects horizontal edges in the blue channel. w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]] # Vector of biases. We don't need any bias for the grayscale # filter, but for the edge detection filter we want to add 128 # to each output so that nothing is negative. b = np.array([0, 128]) # Compute the result of convolving each input in x with each filter in w, # offsetting by b, and storing the results in out. out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1}) def imshow_no_ax(img, normalize=True): """ Tiny helper to show images as uint8 and remove axis labels """ if normalize: img_max, img_min = np.max(img), np.min(img) img = 255.0 * (img - img_min) / (img_max - img_min) plt.imshow(img.astype('uint8')) plt.gca().axis('off') # Show the original images and the results of the conv operation plt.subplot(2, 3, 1) imshow_no_ax(puppy, normalize=False) plt.title('Original image') plt.subplot(2, 3, 2) imshow_no_ax(out[0, 0]) plt.title('Grayscale') plt.subplot(2, 3, 3) imshow_no_ax(out[0, 1]) plt.title('Edges') plt.subplot(2, 3, 4) imshow_no_ax(kitten_cropped, normalize=False) plt.subplot(2, 3, 5) imshow_no_ax(out[1, 0]) plt.subplot(2, 3, 6) imshow_no_ax(out[1, 1]) plt.show()
License: MIT | Path: assignment2/ConvolutionalNetworks.ipynb | Repo: pranav-s/Stanford_CS234_CV_2017
Convolution: Naive backward pass

Implement the backward pass for the convolution operation in the function `conv_backward_naive` in the file `cs231n/layers.py`. Again, you don't need to worry too much about computational efficiency.

When you are done, run the following to check your backward pass with a numeric gradient check.
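If it helps to see the overall shape of the computation before the check, here is a minimal sketch of one way a naive backward pass could be organized. It assumes the cache is laid out as `(x, w, b, conv_param)` by your `conv_forward_naive`; it is a sketch of the standard nested-loop approach, not the assignment's reference solution.

```python
import numpy as np

def conv_backward_naive_sketch(dout, cache):
    """Naive conv backward: loop over every output location and accumulate gradients."""
    x, w, b, conv_param = cache                       # assumes this cache layout
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    _, _, H_out, W_out = dout.shape

    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    dx_pad = np.zeros_like(x_pad)
    dw = np.zeros_like(w)
    db = dout.sum(axis=(0, 2, 3))                     # bias gradient: sum of upstream grads

    for n in range(N):
        for f in range(F):
            for i in range(H_out):
                for j in range(W_out):
                    h0, w0 = i * stride, j * stride
                    window = x_pad[n, :, h0:h0 + HH, w0:w0 + WW]
                    dw[f] += window * dout[n, f, i, j]
                    dx_pad[n, :, h0:h0 + HH, w0:w0 + WW] += w[f] * dout[n, f, i, j]

    dx = dx_pad[:, :, pad:pad + H, pad:pad + W]       # strip the zero padding
    return dx, dw, db
```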
np.random.seed(231) x = np.random.randn(4, 3, 5, 5) w = np.random.randn(2, 3, 3, 3) b = np.random.randn(2,) dout = np.random.randn(4, 2, 5, 5) conv_param = {'stride': 2, 'pad': 3} out, cache = conv_forward_naive(x, w, b, conv_param) dx, dw, db = conv_backward_naive(dout, cache) dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout) # Your errors should be around e-8 or less. print('Testing conv_backward_naive function') print('dx error: ', rel_error(dx, dx_num)) print('dw error: ', rel_error(dw, dw_num)) print('db error: ', rel_error(db, db_num)) print(dw_num) print(dw) t = np.array([[1,2], [3,4]]) np.rot90(t, k=2)
Max-Pooling: Naive forward

Implement the forward pass for the max-pooling operation in the function `max_pool_forward_naive` in the file `cs231n/layers.py`. Again, don't worry too much about computational efficiency.

Check your implementation by running the following:
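For orientation before the check, a minimal sketch of a naive max-pooling forward pass (one possible approach, not necessarily the reference solution):

```python
import numpy as np

def max_pool_forward_naive_sketch(x, pool_param):
    """Naive max pooling: take the max over each (pool_height x pool_width) window."""
    N, C, H, W = x.shape
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    H_out = 1 + (H - ph) // stride
    W_out = 1 + (W - pw) // stride

    out = np.zeros((N, C, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            h0, w0 = i * stride, j * stride
            out[:, :, i, j] = x[:, :, h0:h0 + ph, w0:w0 + pw].max(axis=(2, 3))

    cache = (x, pool_param)
    return out, cache
```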
x_shape = (2, 3, 4, 4) x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape) pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2} out, _ = max_pool_forward_naive(x, pool_param) correct_out = np.array([[[[-0.26315789, -0.24842105], [-0.20421053, -0.18947368]], [[-0.14526316, -0.13052632], [-0.08631579, -0.07157895]], [[-0.02736842, -0.01263158], [ 0.03157895, 0.04631579]]], [[[ 0.09052632, 0.10526316], [ 0.14947368, 0.16421053]], [[ 0.20842105, 0.22315789], [ 0.26736842, 0.28210526]], [[ 0.32631579, 0.34105263], [ 0.38526316, 0.4 ]]]]) # Compare your output with ours. Difference should be on the order of e-8. print('Testing max_pool_forward_naive function:') print('difference: ', rel_error(out, correct_out))
Testing max_pool_forward_naive function: difference: 4.1666665157267834e-08
Max-Pooling: Naive backward

Implement the backward pass for the max-pooling operation in the function `max_pool_backward_naive` in the file `cs231n/layers.py`. You don't need to worry about computational efficiency.

Check your implementation with numeric gradient checking by running the following:
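The key idea is that only the input element that achieved the max in each window receives gradient. A minimal sketch, assuming the `(x, pool_param)` cache layout from the forward sketch above:

```python
import numpy as np

def max_pool_backward_naive_sketch(dout, cache):
    """Route each upstream gradient to the location that was the max in the forward pass."""
    x, pool_param = cache                             # assumes the forward sketch's cache layout
    N, C, H, W = x.shape
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    _, _, H_out, W_out = dout.shape

    dx = np.zeros_like(x)
    for n in range(N):
        for c in range(C):
            for i in range(H_out):
                for j in range(W_out):
                    h0, w0 = i * stride, j * stride
                    window = x[n, c, h0:h0 + ph, w0:w0 + pw]
                    mask = (window == window.max())   # 1 at the argmax, 0 elsewhere
                    dx[n, c, h0:h0 + ph, w0:w0 + pw] += mask * dout[n, c, i, j]
    return dx
```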
np.random.seed(231) x = np.random.randn(3, 2, 8, 8) dout = np.random.randn(3, 2, 4, 4) pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2} dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout) out, cache = max_pool_forward_naive(x, pool_param) dx = max_pool_backward_naive(dout, cache) # Your error should be on the order of e-12 print('Testing max_pool_backward_naive function:') print('dx error: ', rel_error(dx, dx_num))
Testing max_pool_backward_naive function: dx error: 3.27562514223145e-12
Fast layers

Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file `cs231n/fast_layers.py`. The fast convolution implementation depends on a Cython extension; to compile it, either execute the local development cell (option A) if you are developing locally, or the Colab cell (option B) if you are running this assignment in Colab.

---
**Very Important, Please Read.** For **both** options A and B, you have to **restart** the notebook after compiling the Cython extension. In Colab, save the notebook (`File -> Save`), then click `Runtime -> Restart Runtime -> Yes`. This restarts the kernel, which means local variables will be lost. Just re-execute the cells from top to bottom, skipping the cell below, as you only need to run the compilation step once.
---

Option A: Local Development

Go to the `cs231n` directory and execute the following in your terminal:

```bash
python setup.py build_ext --inplace
```

Option B: Colab

Execute the cell below only **ONCE**.
%cd drive/My\ Drive/$FOLDERNAME/cs231n/ !python setup.py build_ext --inplace
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights.

**NOTE:** The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met, the fast pooling implementation will not be much faster than the naive implementation.

You can compare the performance of the naive and fast versions of these layers by running the following:
# Rel errors should be around e-9 or less from cs231n.fast_layers import conv_forward_fast, conv_backward_fast from time import time %load_ext autoreload %autoreload 2 np.random.seed(231) x = np.random.randn(100, 3, 31, 31) w = np.random.randn(25, 3, 3, 3) b = np.random.randn(25,) dout = np.random.randn(100, 25, 16, 16) conv_param = {'stride': 2, 'pad': 1} t0 = time() out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param) t1 = time() out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param) t2 = time() print('Testing conv_forward_fast:') print('Naive: %fs' % (t1 - t0)) print('Fast: %fs' % (t2 - t1)) print('Speedup: %fx' % ((t1 - t0) / (t2 - t1))) print('Difference: ', rel_error(out_naive, out_fast)) t0 = time() # dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive) t1 = time() dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast) t2 = time() print('\nTesting conv_backward_fast:') print('Naive: %fs' % (t1 - t0)) print('Fast: %fs' % (t2 - t1)) print('Speedup: %fx' % ((t1 - t0) / (t2 - t1))) # print('dx difference: ', rel_error(dx_naive, dx_fast)) # print('dw difference: ', rel_error(dw_naive, dw_fast)) # print('db difference: ', rel_error(db_naive, db_fast)) # Relative errors should be close to 0.0 from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast np.random.seed(231) x = np.random.randn(100, 3, 32, 32) dout = np.random.randn(100, 3, 16, 16) pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2} t0 = time() out_naive, cache_naive = max_pool_forward_naive(x, pool_param) t1 = time() out_fast, cache_fast = max_pool_forward_fast(x, pool_param) t2 = time() print('Testing pool_forward_fast:') print('Naive: %fs' % (t1 - t0)) print('fast: %fs' % (t2 - t1)) print('speedup: %fx' % ((t1 - t0) / (t2 - t1))) print('difference: ', rel_error(out_naive, out_fast)) t0 = time() dx_naive = max_pool_backward_naive(dout, cache_naive) t1 = time() dx_fast = max_pool_backward_fast(dout, cache_fast) t2 = time() print('\nTesting pool_backward_fast:') print('Naive: %fs' % (t1 - t0)) print('fast: %fs' % (t2 - t1)) print('speedup: %fx' % ((t1 - t0) / (t2 - t1))) print('dx difference: ', rel_error(dx_naive, dx_fast))
Testing pool_forward_fast: Naive: 0.248409s fast: 0.001589s speedup: 156.347839x difference: 0.0 Testing pool_backward_fast: Naive: 0.323171s fast: 0.007276s speedup: 44.415689x dx difference: 0.0
Convolutional "sandwich" layersPreviously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file `cs231n/layer_utils.py` you will find sandwich layers that implement a few commonly used patterns for convolutional networks. Run the cells below to sanity check they're working.
from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward np.random.seed(231) x = np.random.randn(2, 3, 16, 16) w = np.random.randn(3, 3, 3, 3) b = np.random.randn(3,) dout = np.random.randn(2, 3, 8, 8) conv_param = {'stride': 1, 'pad': 1} pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2} out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param) dx, dw, db = conv_relu_pool_backward(dout, cache) dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout) # Relative errors should be around e-8 or less print('Testing conv_relu_pool') print('dx error: ', rel_error(dx_num, dx)) print('dw error: ', rel_error(dw_num, dw)) print('db error: ', rel_error(db_num, db)) from cs231n.layer_utils import conv_relu_forward, conv_relu_backward np.random.seed(231) x = np.random.randn(2, 3, 8, 8) w = np.random.randn(3, 3, 3, 3) b = np.random.randn(3,) dout = np.random.randn(2, 3, 8, 8) conv_param = {'stride': 1, 'pad': 1} out, cache = conv_relu_forward(x, w, b, conv_param) dx, dw, db = conv_relu_backward(dout, cache) dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout) # Relative errors should be around e-8 or less print('Testing conv_relu:') print('dx error: ', rel_error(dx_num, dx)) print('dw error: ', rel_error(dw_num, dw)) print('db error: ', rel_error(db_num, db))
Testing conv_relu: dx error: 1.5218619980349303e-09 dw error: 2.702022646099404e-10 db error: 1.451272393591721e-10
Three-layer ConvNet

Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network. Open the file `cs231n/classifiers/cnn.py` and complete the implementation of the `ThreeLayerConvNet` class. Remember that you can use the fast/sandwich layers (already imported for you) in your implementation. Run the following cells to help you debug:

Sanity check loss

After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about `log(C)` for `C` classes. When we add regularization, the loss should go up slightly.
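The `log(C)` figure is easy to compute directly for CIFAR-10's 10 classes, which gives a concrete number to compare the printed loss against:

```python
import numpy as np

# A randomly initialized classifier assigns roughly uniform probability 1/C to each
# of the C classes, so the expected softmax loss is -log(1/C) = log(C).
print(np.log(10))  # ~2.3026 -- the no-regularization loss below should be close to this
```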
model = ThreeLayerConvNet() N = 50 X = np.random.randn(N, 3, 32, 32) y = np.random.randint(10, size=N) loss, grads = model.loss(X, y) print('Initial loss (no regularization): ', loss) model.reg = 0.5 loss, grads = model.loss(X, y) print('Initial loss (with regularization): ', loss)
Initial loss (no regularization): 2.3025850635890874 Initial loss (with regularization): 2.508599728507643
Gradient check

After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer. Note: correct implementations may still have relative errors up to the order of e-2.
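As a reminder of what the numeric gradient check is doing under the hood, here is a minimal sketch of the centered-difference idea (the `eval_numerical_gradient` helper used below handles the same bookkeeping; this is illustrative, not the assignment's code):

```python
import numpy as np

def numerical_gradient_sketch(f, x, h=1e-6):
    """Approximate df/dx_i by the centered difference (f(x + h*e_i) - f(x - h*e_i)) / (2h)."""
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        old = x[idx]
        x[idx] = old + h
        fp = f(x)                       # f with x_i nudged up
        x[idx] = old - h
        fm = f(x)                       # f with x_i nudged down
        x[idx] = old                    # restore the original value
        grad[idx] = (fp - fm) / (2 * h)
        it.iternext()
    return grad
```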
num_inputs = 2 input_dim = (3, 16, 16) reg = 0.0 num_classes = 10 np.random.seed(231) X = np.random.randn(num_inputs, *input_dim) y = np.random.randint(num_classes, size=num_inputs) model = ThreeLayerConvNet(num_filters=3, filter_size=3, input_dim=input_dim, hidden_dim=7, dtype=np.float64, reg=0.5) loss, grads = model.loss(X, y) # Errors should be small, but correct implementations may have # relative errors up to the order of e-2 for param_name in sorted(grads): f = lambda _: model.loss(X, y)[0] param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6) e = rel_error(param_grad_num, grads[param_name]) print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
W1 max relative error: 1.103635e-05 W2 max relative error: 1.521379e-04 W3 max relative error: 1.763147e-05 b1 max relative error: 3.477652e-05 b2 max relative error: 2.516375e-03 b3 max relative error: 7.945660e-10
Overfit small data

A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
np.random.seed(231) num_train = 100 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } model = ThreeLayerConvNet(weight_scale=1e-2) solver = Solver(model, small_data, num_epochs=15, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=True, print_every=1) solver.train() # Print final training accuracy print( "Small data training accuracy:", solver.check_accuracy(small_data['X_train'], small_data['y_train']) ) # Print final validation accuracy print( "Small data validation accuracy:", solver.check_accuracy(small_data['X_val'], small_data['y_val']) )
Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
plt.subplot(2, 1, 1) plt.plot(solver.loss_history, 'o') plt.xlabel('iteration') plt.ylabel('loss') plt.subplot(2, 1, 2) plt.plot(solver.train_acc_history, '-o') plt.plot(solver.val_acc_history, '-o') plt.legend(['train', 'val'], loc='upper left') plt.xlabel('epoch') plt.ylabel('accuracy') plt.show()
Train the net

By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001) solver = Solver(model, data, num_epochs=1, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=True, print_every=20) solver.train() # Print final training accuracy print( "Full data training accuracy:", solver.check_accuracy(small_data['X_train'], small_data['y_train']) ) # Print final validation accuracy print( "Full data validation accuracy:", solver.check_accuracy(data['X_val'], data['y_val']) )
Visualize Filters

You can visualize the first-layer convolutional filters from the trained network by running the following:
from cs231n.vis_utils import visualize_grid grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1)) plt.imshow(grid.astype('uint8')) plt.axis('off') plt.gcf().set_size_inches(5, 5) plt.show()
Spatial Batch Normalization

We already saw that batch normalization is a very useful technique for training deep fully-connected networks. As proposed in the original paper (link in `BatchNormalization.ipynb`), batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."

Normally batch normalization accepts inputs of shape `(N, D)` and produces outputs of shape `(N, D)`, where we normalize across the minibatch dimension `N`. For data coming from convolutional layers, batch normalization needs to accept inputs of shape `(N, C, H, W)` and produce outputs of shape `(N, C, H, W)`, where the `N` dimension gives the minibatch size and the `(H, W)` dimensions give the spatial size of the feature map.

If the feature map was produced using convolutions, then we expect every feature channel's statistics (e.g., mean and variance) to be relatively consistent both between different images and between different locations within the same image -- after all, every feature channel is produced by the same convolutional filter! Therefore spatial batch normalization computes a mean and variance for each of the `C` feature channels by computing statistics over the minibatch dimension `N` as well as the spatial dimensions `H` and `W`.

[1] [Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", ICML 2015.](https://arxiv.org/abs/1502.03167)

Spatial batch normalization: forward

In the file `cs231n/layers.py`, implement the forward pass for spatial batch normalization in the function `spatial_batchnorm_forward`. Check your implementation by running the following:
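One common implementation strategy, sketched here before the check, is to fold the spatial dimensions into the batch dimension and reuse the vanilla batch normalization from the previous notebook. A minimal sketch, assuming `batchnorm_forward` is available in `cs231n/layers.py` (one possible approach, not necessarily the reference solution):

```python
import numpy as np
from cs231n.layers import batchnorm_forward   # assumes the vanilla implementation from earlier

def spatial_batchnorm_forward_sketch(x, gamma, beta, bn_param):
    """Treat every (n, h, w) position as one sample of a C-dimensional feature vector."""
    N, C, H, W = x.shape
    x_flat = x.transpose(0, 2, 3, 1).reshape(-1, C)           # (N*H*W, C)
    out_flat, cache = batchnorm_forward(x_flat, gamma, beta, bn_param)
    out = out_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)  # back to (N, C, H, W)
    return out, cache
```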
np.random.seed(231) # Check the training-time forward pass by checking means and variances # of features both before and after spatial batch normalization N, C, H, W = 2, 3, 4, 5 x = 4 * np.random.randn(N, C, H, W) + 10 print('Before spatial batch normalization:') print(' Shape: ', x.shape) print(' Means: ', x.mean(axis=(0, 2, 3))) print(' Stds: ', x.std(axis=(0, 2, 3))) # Means should be close to zero and stds close to one gamma, beta = np.ones(C), np.zeros(C) bn_param = {'mode': 'train'} out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param) print('After spatial batch normalization:') print(' Shape: ', out.shape) print(' Means: ', out.mean(axis=(0, 2, 3))) print(' Stds: ', out.std(axis=(0, 2, 3))) # Means should be close to beta and stds close to gamma gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8]) out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param) print('After spatial batch normalization (nontrivial gamma, beta):') print(' Shape: ', out.shape) print(' Means: ', out.mean(axis=(0, 2, 3))) print(' Stds: ', out.std(axis=(0, 2, 3))) np.random.seed(231) # Check the test-time forward pass by running the training-time # forward pass many times to warm up the running averages, and then # checking the means and variances of activations after a test-time # forward pass. N, C, H, W = 10, 4, 11, 12 bn_param = {'mode': 'train'} gamma = np.ones(C) beta = np.zeros(C) for t in range(50): x = 2.3 * np.random.randn(N, C, H, W) + 13 spatial_batchnorm_forward(x, gamma, beta, bn_param) bn_param['mode'] = 'test' x = 2.3 * np.random.randn(N, C, H, W) + 13 a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param) # Means should be close to zero and stds close to one, but will be # noisier than training-time forward passes. print('After spatial batch normalization (test-time):') print(' means: ', a_norm.mean(axis=(0, 2, 3))) print(' stds: ', a_norm.std(axis=(0, 2, 3)))
After spatial batch normalization (test-time): means: [-0.08446363 0.08091916 0.06055194 0.04564399] stds: [1.0241906 1.09568294 1.0903571 1.0684257 ]
Spatial batch normalization: backward

In the file `cs231n/layers.py`, implement the backward pass for spatial batch normalization in the function `spatial_batchnorm_backward`. Run the following to check your implementation using a numeric gradient check:
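If the forward pass uses the reshape trick sketched above, the backward pass can apply the same reshape to `dout`, call the vanilla backward pass, and undo the reshape. Again a sketch, assuming `batchnorm_backward` from `cs231n/layers.py` and the cache produced by the forward sketch:

```python
import numpy as np
from cs231n.layers import batchnorm_backward   # assumes the vanilla implementation from earlier

def spatial_batchnorm_backward_sketch(dout, cache):
    """Mirror the forward reshape so the vanilla batchnorm backward can be reused."""
    N, C, H, W = dout.shape
    dout_flat = dout.transpose(0, 2, 3, 1).reshape(-1, C)     # (N*H*W, C)
    dx_flat, dgamma, dbeta = batchnorm_backward(dout_flat, cache)
    dx = dx_flat.reshape(N, H, W, C).transpose(0, 3, 1, 2)    # back to (N, C, H, W)
    return dx, dgamma, dbeta
```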
np.random.seed(231) N, C, H, W = 2, 3, 4, 5 x = 5 * np.random.randn(N, C, H, W) + 12 gamma = np.random.randn(C) beta = np.random.randn(C) dout = np.random.randn(N, C, H, W) bn_param = {'mode': 'train'} fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0] fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0] fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0] dx_num = eval_numerical_gradient_array(fx, x, dout) da_num = eval_numerical_gradient_array(fg, gamma, dout) db_num = eval_numerical_gradient_array(fb, beta, dout) #You should expect errors of magnitudes between 1e-12~1e-06 _, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param) dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache) print('dx error: ', rel_error(dx_num, dx)) print('dgamma error: ', rel_error(da_num, dgamma)) print('dbeta error: ', rel_error(db_num, dbeta)) print(dx[0]) print(dx_num[0]) print(rel_error(dx[0], dx_num[0]))
[[[ 4.82500057e-08 -6.18616346e-07 3.49900510e-08 -1.01656172e-06 2.21274578e-04] [-8.18296259e-07 1.06936237e-07 2.56172679e-08 6.08105008e-08 -7.21749724e-06] [-1.88721588e-05 3.50657245e-07 -3.24214426e-06 -9.50347872e-08 3.30202440e-07] [-4.69853077e-06 -3.05069739e-07 9.78361879e-08 7.28839098e-06 -4.55892654e-08]] [[-1.04277935e-07 9.63218452e-07 1.98862366e-06 3.38830371e-06 1.56879316e-06] [-1.76798576e-06 -2.04511247e-04 1.24032782e-02 -8.19890551e-10 8.00572597e-06] [-1.97819314e-08 -2.21420815e-07 -2.18365091e-07 5.12938128e-07 5.80098461e-06] [-7.32695890e-08 1.74940725e-08 -5.10295706e-07 2.65071171e-05 6.13496551e-08]] [[ 4.10499437e-09 -1.64792199e-09 7.77772208e-08 -3.98336745e-08 -5.19554157e-07] [ 8.93598739e-10 1.54241596e-09 -6.17399690e-10 5.98332851e-11 1.54458330e-09] [-1.78912331e-07 4.76659981e-10 1.29507309e-08 1.66509831e-09 1.57005474e-09] [ 5.76918608e-10 -5.78221607e-10 1.91686211e-09 7.21027908e-08 -7.31198445e-11]]] [[[ 4.82429917e-08 -6.18613071e-07 3.49810343e-08 -1.01655161e-06 2.21274577e-04] [-8.18296101e-07 1.06943155e-07 2.56248467e-08 6.08208895e-08 -7.21751834e-06] [-1.88721444e-05 3.50657095e-07 -3.24213026e-06 -9.50347287e-08 3.30190632e-07] [-4.69853074e-06 -3.05076200e-07 9.78253870e-08 7.28839423e-06 -4.55828488e-08]] [[-1.04280277e-07 9.63219277e-07 1.98862054e-06 3.38830579e-06 1.56879241e-06] [-1.76798531e-06 -2.04511240e-04 1.24032784e-02 -8.19870920e-10 8.00571709e-06] [-1.97798693e-08 -2.21423050e-07 -2.18359769e-07 5.12933335e-07 5.80098067e-06] [-7.32873779e-08 1.74943463e-08 -5.10310657e-07 2.65071114e-05 6.13411225e-08]] [[ 4.10535500e-09 -1.64264801e-09 7.77871231e-08 -3.98317671e-08 -5.19546842e-07] [ 8.99421228e-10 1.53625905e-09 -6.14967580e-10 5.86159573e-11 1.54866956e-09] [-1.78944574e-07 4.76763898e-10 1.29497212e-08 1.66313198e-09 1.57114321e-09] [ 5.78678420e-10 -5.89311768e-10 1.91779211e-09 7.20991748e-08 -7.49812329e-11]]] 0.0011090161282718075
Group Normalization

In the previous notebook, we mentioned that Layer Normalization is an alternative normalization technique that mitigates the batch size limitations of Batch Normalization. However, as the authors of [2] observed, Layer Normalization does not perform as well as Batch Normalization when used with convolutional layers:

> With fully connected layers, all the hidden units in a layer tend to make similar contributions to the final prediction, and re-centering and rescaling the summed inputs to a layer works well. However, the assumption of similar contributions is no longer true for convolutional neural networks. The large number of hidden units whose receptive fields lie near the boundary of the image are rarely turned on and thus have very different statistics from the rest of the hidden units within the same layer.

The authors of [3] propose an intermediary technique. In contrast to Layer Normalization, where you normalize over the entire feature per datapoint, they suggest a consistent splitting of each per-datapoint feature into G groups and a per-group, per-datapoint normalization instead.

[Figure: visual comparison of the normalization techniques discussed so far (image edited from [3]).]

Even though an assumption of equal contribution is still being made within each group, the authors hypothesize that this is not as problematic, since innate grouping arises within features for visual recognition. One example they use to illustrate this is that many high-performance handcrafted features in traditional computer vision have terms that are explicitly grouped together. Take, for example, Histogram of Oriented Gradients [4] -- after computing histograms per spatially local block, each per-block histogram is normalized before being concatenated together to form the final feature vector.

You will now implement Group Normalization. Note that the normalization technique you are about to implement was published at ECCV in 2018 -- this truly is still an ongoing and excitingly active field of research!

[2] [Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. "Layer Normalization." stat 1050 (2016): 21.](https://arxiv.org/pdf/1607.06450.pdf)

[3] [Wu, Yuxin, and Kaiming He. "Group Normalization." arXiv preprint arXiv:1803.08494 (2018).](https://arxiv.org/abs/1803.08494)

[4] [N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Computer Vision and Pattern Recognition (CVPR), 2005.](https://ieeexplore.ieee.org/abstract/document/1467360/)

Group normalization: forward

In the file `cs231n/layers.py`, implement the forward pass for group normalization in the function `spatial_groupnorm_forward`. Check your implementation by running the following:
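Conceptually, group normalization reshapes each datapoint into `G` groups of `C/G` channels and normalizes over each group (including its spatial extent), then applies the usual per-channel scale and shift. A minimal sketch, treating `gamma` and `beta` as per-channel vectors of shape `(C,)` as in the check below (one possible approach, not necessarily the reference solution):

```python
import numpy as np

def spatial_groupnorm_forward_sketch(x, gamma, beta, G, gn_param):
    """Normalize each datapoint over G groups of C/G channels plus their spatial extent."""
    eps = gn_param.get('eps', 1e-5)
    N, C, H, W = x.shape
    x_group = x.reshape(N, G, C // G, H, W)

    mean = x_group.mean(axis=(2, 3, 4), keepdims=True)        # one mean/var per (n, group)
    var = x_group.var(axis=(2, 3, 4), keepdims=True)
    x_norm = ((x_group - mean) / np.sqrt(var + eps)).reshape(N, C, H, W)

    # gamma and beta are per-channel; broadcast over N, H, W
    out = gamma.reshape(1, C, 1, 1) * x_norm + beta.reshape(1, C, 1, 1)
    cache = (x_norm, var, gamma, G, eps)
    return out, cache
```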
np.random.seed(231) # Check the training-time forward pass by checking means and variances # of features both before and after spatial batch normalization N, C, H, W = 2, 6, 4, 5 G = 2 x = 4 * np.random.randn(N, C, H, W) + 10 x_g = x.reshape((N*G,-1)) print('Before spatial group normalization:') print(' Shape: ', x.shape) print(' Means: ', x_g.mean(axis=1)) print(' Stds: ', x_g.std(axis=1)) # Means should be close to zero and stds close to one gamma, beta = 2*np.ones(C), np.zeros(C) bn_param = {'mode': 'train'} out, _ = spatial_groupnorm_quick_forward(x, gamma, beta, G, bn_param) out_g = out.reshape((N*G,-1)) print('After spatial group normalization:') print(' Shape: ', out.shape) print(' Means: ', out_g.mean(axis=1)) print(' Stds: ', out_g.std(axis=1)) np.vstack(list([np.hstack([[g]*H*W for g in gamma])])*N).shape p = np.zeros((3,4)) print(p) q = np.hsplit(p, 2) print(q) np.hstack(q) print(np.arange(36).reshape((6,6)).reshape((18,-1))) print(np.arange(36).reshape((6,6))) print(np.arange(36).reshape((6,6)).reshape((18,-1)).reshape((6, -1)))
[[ 0 1] [ 2 3] [ 4 5] [ 6 7] [ 8 9] [10 11] [12 13] [14 15] [16 17] [18 19] [20 21] [22 23] [24 25] [26 27] [28 29] [30 31] [32 33] [34 35]] [[ 0 1 2 3 4 5] [ 6 7 8 9 10 11] [12 13 14 15 16 17] [18 19 20 21 22 23] [24 25 26 27 28 29] [30 31 32 33 34 35]] [[ 0 1 2 3 4 5] [ 6 7 8 9 10 11] [12 13 14 15 16 17] [18 19 20 21 22 23] [24 25 26 27 28 29] [30 31 32 33 34 35]]
Spatial group normalization: backward

In the file `cs231n/layers.py`, implement the backward pass for spatial group normalization in the function `spatial_groupnorm_backward`. Run the following to check your implementation using a numeric gradient check:
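For completeness, here is a sketch of a backward pass that matches the cache layout of the forward sketch above; it applies the standard normalization gradient within each (datapoint, group) of `m = (C/G)*H*W` elements. This is only a sketch under those assumptions, not the assignment's reference code.

```python
import numpy as np

def spatial_groupnorm_backward_sketch(dout, cache):
    """Backward pass matching the forward sketch above."""
    x_norm, var, gamma, G, eps = cache
    N, C, H, W = dout.shape
    m = (C // G) * H * W                                      # elements per (n, group)

    dbeta = dout.sum(axis=(0, 2, 3))
    dgamma = (dout * x_norm).sum(axis=(0, 2, 3))

    dxn = (dout * gamma.reshape(1, C, 1, 1)).reshape(N, G, C // G, H, W)
    xn = x_norm.reshape(N, G, C // G, H, W)
    inv_std = 1.0 / np.sqrt(var + eps)                        # shape (N, G, 1, 1, 1)

    # Standard normalization backward, applied per (n, group):
    dx = (inv_std / m) * (m * dxn
                          - dxn.sum(axis=(2, 3, 4), keepdims=True)
                          - xn * (dxn * xn).sum(axis=(2, 3, 4), keepdims=True))
    return dx.reshape(N, C, H, W), dgamma, dbeta
```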
np.random.seed(231) N, C, H, W = 2, 6, 4, 5 G = 2 x = 5 * np.random.randn(N, C, H, W) + 12 gamma = np.random.randn(C) beta = np.random.randn(C) dout = np.random.randn(N, C, H, W) gn_param = {} fx = lambda x: spatial_groupnorm_quick_forward(x, gamma, beta, G, gn_param)[0] fg = lambda a: spatial_groupnorm_quick_forward(x, gamma, beta, G, gn_param)[0] fb = lambda b: spatial_groupnorm_quick_forward(x, gamma, beta, G, gn_param)[0] dx_num = eval_numerical_gradient_array(fx, x, dout) da_num = eval_numerical_gradient_array(fg, gamma, dout) db_num = eval_numerical_gradient_array(fb, beta, dout) _, cache = spatial_groupnorm_quick_forward(x, gamma, beta, G, gn_param) dx, dgamma, dbeta = spatial_groupnorm_backward(dout, cache) #You should expect errors of magnitudes between 1e-12~1e-07 print('dx error: ', rel_error(dx_num, dx)) print('dgamma error: ', rel_error(da_num, dgamma)) print('dbeta error: ', rel_error(db_num, dbeta))
dx error: 7.413109542981906e-08 dgamma error: 9.468085754206675e-12 dbeta error: 3.35440867127888e-12
Pre-processing and analysis for one-source with distance 25

Load or create R scripts
get.data <- dget("get_data.r") #script to read data files get.pars <- dget("get_pars.r") #script to extract relevant parameters from raw data get.mv.bound <- dget("get_mvbound.r") #script to look at movement of boundary across learning plot.cirib <- dget("plot_cirib.r") #script to plot confidence intervals as ribbon plot zscore <- function(v){(v - mean(v, na.rm=T))/sqrt(var(v, na.rm=T))} #function to compute Z score
License: MIT | Path: Experiment1_Main/Components/One25/one25.ipynb | Repo: ttrogers/frigo-chen-rogers
Load data
fnames <- list.files(pattern = "*.csv") #create a vector of data file names, assuming all csv files are data nfiles <- length(fnames) #number of data files alldat <- list(get.data(fnames[1])) #initialize list containing all data with first subject for(i1 in c(2:nfiles)) alldat[[i1]] <- get.data(fnames[i1]) #populate list with rest of data allpars <- get.pars(alldat) #extract parameters from grid test 1 and 2 from all data
[1] "Processing sj: 1"
NOTE that get.pars will produce warnings whenever a subject's data has a perfectly strict boundary -- these can be safely ignored.
head(allpars) dim(allpars)
KEY

* **PID**: Unique tag for each participant
* **cond**: Experiment condition
* **axlab**: Which shape (spiky/smooth) got the "Cooked" label? For counterbalancing; not of interest
* **closebound**: Location of the closest source boundary
* **cbside**: Factor indicating which side of the range midpoint has the close boundary
* **sno**: Subject number within condition
* **txint, slope, bound**: Intercept, slope, and estimated boundary from logistic regression on test 1 and test 2 data. NOTE that only the boundary estimate is used in analysis.
* **bshift**: Boundary shift direction and magnitude, measured as test 2 boundary minus test 1 boundary
* **alshift**: Placeholder for the aligned boundary shift (see below); currently just a copy of bshift
* **Zalshift**: Z-scored aligned shift, recalculated below

Check data for outliers

Set Zscore threshold for outlier rejection
zthresh <- 2.5
First check t1bound and t2bound to see if there are any impossible values.
plot(allpars$t1bound, allpars$t2bound)
There is an impossible t2bound so let's remove it.
dim(allpars) sjex <- as.character(allpars$PID[allpars$t2bound < 0]) #Add impossible value to exclude list sjex <- unique(sjex) #remove any accidental repeats noo <- allpars[is.na(match(allpars$PID, sjex)),] #Copy remaining subjects to noo object dim(noo)
Write "no impossible" (nimp) file for later agglomeration in mega-data
write.csv(noo, "summary/one25_grids_nimp.csv", row.names = F, quote=F)
Check to make sure "aligned" shift computation worked (should be an X pattern)
plot(noo$alshift, noo$bshift)
Check initial boundary for outliers
plot(zscore(noo$t1bound)) abline(h=c(-zthresh,0,zthresh))
Add any outliers to the exclusion list and recompute no-outlier data structure
sjex <- c(sjex, as.character(allpars$PID[abs(zscore(allpars$t1bound)) > zthresh])) sjex <- unique(sjex) #remove accidental repeats noo <- noo[is.na(match(noo$PID, sjex)),] dim(noo)
Now compute Zscore for aligned shift for all subjects and look for outliers
noo$Zalshift <- zscore(noo$alshift) #Compute Z scores for this aligned shift plot(noo$Zalshift); abline(h = c(-zthresh,0,zthresh)) #plot Zscores
Again add any outliers to exclusion list and remove from noo
sjex <- c(sjex, as.character(noo$PID[abs(noo$Zalshift) > zthresh])) sjex <- unique(sjex) #remove accidental repeats noo <- noo[is.na(match(noo$PID, sjex)),] dim(noo)
Data analysis

Does the initial (t1) boundary differ between the two groups? It shouldn't, since both groups have had exactly the same experience up to this point.
t.test(t1bound ~ closebound, data = noo)
Reassuringly, it doesn't. So what is the location of the initial boundary on average?
t.test(noo$t1bound) #NB t.test of a single vector is a good way to compute mean and CIs
The mean boundary is shifted a bit positive relative to the midpoint between labeled examples.

Next, looking across all subjects, does the aligned boundary shift differ reliably from zero? Also, what are the confidence limits on the mean shift?
t.test(noo$alshift)
The boundary shifts reliably toward the close source. The mean amount of shift is 18, and the confidence interval spans 9-27.

Next, where does the test 2 boundary lie for each group, and does this differ depending on where the source was?
t.test(t2bound ~ closebound, data = noo)
When the source was at 125, the boundary ends up at 134; when the source was at 175, the boundary ends up at 166.

Is the boundary moving all the way to the source?
t.test(noo$t2bound[noo$closebound==125]) #compute confidence intervals for source at 125 subgroup t.test(noo$t2bound[noo$closebound==175]) #compute confidence intervals for source at 175 subgroup
In both cases boundaries move toward the source. When the initial boundary is closer to the source (source at 175), the final boundary ends up at the source. When it is farther away (source at 125), the final boundary ends up a little short of the source.

Another way of looking at the movement is to compute, for each subject, how far the source was from the learner's initial boundary, and see whether this distance predicts the amount of shift:
#Predict the boundary shift from the distance between initial bound and source m <- lm(bshift ~ t1dist, data = noo) #fit linear model predicting shift from distance summary(m) #look at model parameters
Distance significantly predicts shift. The intercept is not reliably different from zero, so with zero distance the boundary does not shift. The slope of 0.776 suggests that the boundary shifts about 78 percent of the way toward the close source. Let's visualize:
plot(noo$t1dist, noo$bshift) #plot distance of source against boundary shift abline(lm(bshift~t1dist, data = noo)$coefficients) #add least squares line abline(0,1, col = 2) #Add line with slope 1 and intercept 0
The black line shows the least-squares linear fit; the red line shows the expected slope if the learner moved all the way to the source. The true slope is quite a bit shallower. If we compute confidence limits on the slope we get:
confint(m, 't1dist', level = 0.95)
So the confidence interval extends very close to 1.

Export parameter data
write.csv(noo, paste("summary/onesrc25_noo_z", zthresh*10, ".csv", sep=""), row.names=F, quote=F)
Further analyses

Movement of the boundary over the course of learning
nsj <- length(alldat) #Number of subjects is length of alldat object mvbnd <- matrix(0, nsj, 301) #Initialize matrix of 0s to hold boundary-movement data, with 301 windows for(i1 in c(1:nsj)) mvbnd[i1,] <- get.mv.bound(alldat, sj=i1) #Compute move data for each sj and store in matrix rows
Warning messages (repeated many times): "glm.fit: fitted probabilities numerically 0 or 1 occurred" and "glm.fit: algorithm did not converge"
Again, ignore warnings here
tmp <- cbind(allpars[,1:6], mvbnd) #Add subject and condition data columns mvb.noo <- tmp[is.na(match(tmp$PID, sjex)),] #Remove excluded subjects head(mvb.noo) tmp <- mvb.noo[,7:307] #Copy movement data into temporary object tmp[abs(tmp) > 250] <- NA #Remove boundary estimates that are extreme (outside 50-250 range) tmp[tmp < 50] <- NA mvb.noo[,7:307] <- tmp #Put remaining data back in plot.cirib(mvb.noo[mvb.noo$bounds==125,7:307], genplot=T) plot.cirib(mvb.noo[mvb.noo$bounds==175,7:307], genplot=F, color=4) abline(h=150, lty=2) abline(h=175, col=4) abline(h=125, col=2) title("Boundary shift over training")
_____no_output_____
MIT
Experiment1_Main/Components/One25/one25.ipynb
ttrogers/frigo-chen-rogers
About This notebook covers the following: * basic usage of Keras * combining (crossing) features * building a dataset * saving and loading models (two approaches) * adding checkpoints
import tensorflow as tf from tensorflow.keras import layers import numpy as np import matplotlib.pyplot as plt import math from tensorflow.keras.utils import plot_model import os # fea_x = [i for i in np.arange(0, math.pi * 2.0, 0.01)] # print(fea_x[:50]) x0 = np.random.randint(0, math.pi * 6.0 * 100.0, 5000) / 100.0 x1 = np.random.randint(0, math.pi * 6.0 * 100.0, 5000) / 100.0 x2 = np.random.randint(0, math.pi * 6.0 * 100.0, 1000) / 100.0 # Noisy feaY0 = [np.random.randint(10 * math.sin(i), 20) for i in x0] feaY1 = [np.random.randint(-20, 10 * math.sin(i)) for i in x1] feaY2 = [np.random.randint(-10, 10) for i in x2] fea_x = np.concatenate([x0, x1, x2]) fea_y = np.concatenate([feaY0, feaY1, feaY2]) label0 = np.repeat(0, 5000) label1 = np.repeat(1, 5000) label2 = np.random.randint(0,2, 1000) label = np.concatenate([label0, label1, label2]) fea_1 = [] fea_2 = [] fea_3 = [] fea_4 = [] fea_5 = [] for i in range(len(label)): x = fea_x[i] y = fea_y[i] ex_1 = x * y ex_2 = x * x ex_3 = y * y ex_4 = math.sin(x) ex_5 = math.sin(y) fea_1.append(ex_1) fea_2.append(ex_2) fea_3.append(ex_3) fea_4.append(ex_4) fea_5.append(ex_5) fea = np.c_[fea_x, fea_y, fea_1, fea_2, fea_3, fea_4, fea_5] dataset = tf.data.Dataset.from_tensor_slices((fea, label)) dataset = dataset.shuffle(10000) dataset = dataset.batch(500) dataset = dataset.repeat() ds_iteror = dataset.make_one_shot_iterator().get_next() len(fea[0]) with tf.Session() as sess: def _pltfunc(sess): res = sess.run(ds_iteror) # print(res) lb = res[1] t_fea = res[0] for index in range(len(lb)): tfs = t_fea[index] if lb[index] > 0: plt.scatter(tfs[0], tfs[1], marker='o', c='orange') else: plt.scatter(tfs[0], tfs[1], marker='o', c='green') _pltfunc(sess) _pltfunc(sess) _pltfunc(sess) plt.show() inputs = tf.keras.Input(shape=(7, )) x = layers.Dense(7, activation=tf.keras.activations.relu)(inputs) x1 = layers.Dense(7, activation='relu')(x) # x2 = layers.Dense(32, activation='relu')(x1) # x3 = layers.Dense(24, activation='relu')(x2) # x4 = layers.Dense(16, activation='relu')(x3) # x5 = layers.Dense(8, activation='relu')(x4) predictions = layers.Dense(2, activation='softmax')(x1) model = tf.keras.Model(inputs=inputs, outputs=predictions) # The compile step specifies the training configuration. 
# opt = tf.train.AdamOptimizer(learning_rate=0.0001) opt = tf.train.AdagradOptimizer(learning_rate=0.1) # opt = tf.train.RMSPropOptimizer(0.1) model.compile(optimizer=opt, loss=tf.keras.losses.sparse_categorical_crossentropy, metrics=['accuracy']) model.fit(dataset, epochs=10, steps_per_epoch=200) # model.fit(fea, label, epochs=10, batch_size=500, steps_per_epoch=300) model.fit(dataset, epochs=10, steps_per_epoch=200) result = model.predict([[[1, -10]]]) print(np.argmax(result[0])) result = model.predict([[[1, 10]]]) print(np.argmax(result[0])) os.getcwd() # 模型可视化 plot_model(model, to_file=os.getcwd()+ '/model.png') from IPython.display import SVG import tensorflow.keras.utils as tfku tfku.plot_model(model) # SVG(model_to_dot(model).create(prog='dot', format='svg')) for i in range(1000): randomX = np.random.randint(0, 10 * math.pi * 6.0) / 10.0 randomY = 0 if np.random.randint(2) > 0: randomY = np.random.randint(10 * math.sin(randomX), 20) else: randomY = np.random.randint(-20, 10 * math.sin(randomX)) ex_1 = randomX * randomY ex_2 = randomX**2 ex_3 = randomY**2 ex_4 = math.sin(randomX) ex_5 = math.sin(randomY) color = '' result = model.predict([[[randomX, randomY, ex_1, ex_2, ex_3, ex_4, ex_5]]]) pred_index = np.argmax(result[0]) if pred_index > 0: color = 'orange' else: color = 'green' plt.scatter(randomX, randomY, marker='o', c=color) plt.show()
_____no_output_____
MIT
Notes/KerasExercise.ipynb
GrayLand119/GLColabNotes
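A note on the two quick `model.predict([[[1, -10]]])` calls in the cell above: the model's `Input` layer expects all seven engineered features, so passing only the raw (x, y) pair will fail with a shape mismatch. A minimal sketch that assembles the full feature vector before predicting; the helper name `build_features` is my own, not part of the original notebook, and it assumes the `model` trained above is in scope.

import math
import numpy as np

def build_features(x, y):
    """Recreate the seven engineered features the model was trained on."""
    return np.array([[x, y, x * y, x * x, y * y, math.sin(x), math.sin(y)]])

# Usage: predict the class for two sample points.
for px, py in [(1.0, -10.0), (1.0, 10.0)]:
    result = model.predict(build_features(px, py))
    print(px, py, '->', np.argmax(result[0]))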
Save Model
!pip install h5py pyyaml model_path = os.getcwd() + "/mymodel.h5" model_path
_____no_output_____
MIT
Notes/KerasExercise.ipynb
GrayLand119/GLColabNotes
The default optimizer is used here. A default optimizer cannot be saved directly with the model, so after loading the model you need to create the optimizer again and recompile. Optimizers built into Keras can be saved and loaded directly, for example: `tf.keras.optimizers.Adam()`
model.save(model_path) new_model = tf.keras.models.load_model(model_path) opt = tf.train.AdagradOptimizer(learning_rate=0.1) # opt = tf.train.RMSPropOptimizer(0.1) new_model.compile(optimizer=opt, loss=tf.keras.losses.sparse_categorical_crossentropy, metrics=['accuracy']) new_model.summary() loss, acc = new_model.evaluate(dataset, steps=200) print("Restored model, accuracy: {:5.2f}%".format(100*acc)) print(new_model.layers[1].get_weights()) print(new_model.layers[3].get_weights())
[array([[ 0.06555444, -0.5212194 ], [-0.5949386 , -0.09970472], [-0.37951565, -0.21464492], [-0.13808419, 0.24510457], [ 0.36669165, -0.2663816 ], [ 0.45086718, -0.26410016], [-0.04899281, -0.6156222 ]], dtype=float32), array([-0.4162824, 0.4162828], dtype=float32)]
MIT
Notes/KerasExercise.ipynb
GrayLand119/GLColabNotes
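To illustrate the note above, here is a small sketch of the save/load flow when the model is compiled with a built-in Keras optimizer such as `tf.keras.optimizers.Adam` (mentioned in the note); under that assumption the optimizer is serialized into the HDF5 file and no recompile is needed after loading. It reuses the `model`, `model_path`, and `dataset` objects defined above.

# Compile with a built-in Keras optimizer so its configuration is stored in the .h5 file.
model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
              loss=tf.keras.losses.sparse_categorical_crossentropy,
              metrics=['accuracy'])
model.save(model_path)

restored = tf.keras.models.load_model(model_path)  # no explicit compile() needed
loss, acc = restored.evaluate(dataset, steps=200)
print("Restored model, accuracy: {:5.2f}%".format(100 * acc))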
Save as a pb file
pb_model_path = os.getcwd() + '/pbmdoel' pb_model_path tf.contrib.saved_model.save_keras_model(new_model, pb_model_path) !ls {pb_model_path}
assets saved_model.pb variables
MIT
Notes/KerasExercise.ipynb
GrayLand119/GLColabNotes
Load the pb file
model2 = tf.contrib.saved_model.load_keras_model(pb_model_path) model2.summary() # 使用前要先编译 model2.compile(optimizer=opt, loss=tf.keras.losses.sparse_categorical_crossentropy, metrics=['accuracy']) loss, acc = model2.evaluate(dataset, steps=200) print("Restored model, accuracy: {:5.2f}%".format(100*acc))
200/200 [==============================] - 0s 2ms/step - loss: 0.2203 - acc: 0.9250 Restored model, accuracy: 92.50%
MIT
Notes/KerasExercise.ipynb
GrayLand119/GLColabNotes
[Strings](https://docs.python.org/3/library/stdtypes.htmltext-sequence-type-str)
my_string = 'Python is my favorite programming language!' my_string type(my_string) len(my_string)
_____no_output_____
MIT
notebooks/beginner/notebooks/strings.ipynb
jordantcarlisle/learn-python3
Respecting [PEP8](https://www.python.org/dev/peps/pep-0008/maximum-line-length) with long strings
long_story = ('Lorem ipsum dolor sit amet, consectetur adipiscing elit.' 'Pellentesque eget tincidunt felis. Ut ac vestibulum est.' 'In sed ipsum sit amet sapien scelerisque bibendum. Sed ' 'sagittis purus eu diam fermentum pellentesque.') long_story
_____no_output_____
MIT
notebooks/beginner/notebooks/strings.ipynb
jordantcarlisle/learn-python3
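One detail worth noticing in the cell above: adjacent string literals are concatenated with no separator, so a missing trailing space produces glued-together words such as "elit.Pellentesque" in `long_story`. A tiny illustration:

# Adjacent literals are joined exactly as written - no spaces are inserted.
glued = ('first part' 'second part')
spaced = ('first part ' 'second part')
print(glued)   # first partsecond part
print(spaced)  # first part second part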
`str.replace()` If you don't know how it works, you can always check the `help`:
help(str.replace)
Help on method_descriptor: replace(self, old, new, count=-1, /) Return a copy with all occurrences of substring old replaced by new. count Maximum number of occurrences to replace. -1 (the default value) means replace all occurrences. If the optional argument count is given, only the first count occurrences are replaced.
MIT
notebooks/beginner/notebooks/strings.ipynb
jordantcarlisle/learn-python3
This will not modify `my_string` because replace is not done in-place.
my_string.replace('a', '?') print(my_string)
Python is my favorite programming language!
MIT
notebooks/beginner/notebooks/strings.ipynb
jordantcarlisle/learn-python3
You have to store the return value of `replace` instead.
my_modified_string = my_string.replace('is', 'will be') print(my_modified_string)
Python will be my favorite programming language!
MIT
notebooks/beginner/notebooks/strings.ipynb
jordantcarlisle/learn-python3
`str.format()`
secret = '{} is cool'.format('Python') print(secret) print('My name is {} {}, you can call me {}.'.format('John', 'Doe', 'John')) # is the same as: print('My name is {first} {family}, you can call me {first}.'.format(first='John', family='Doe'))
My name is John Doe, you can call me John. My name is John Doe, you can call me John.
MIT
notebooks/beginner/notebooks/strings.ipynb
jordantcarlisle/learn-python3
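Since Python 3.6, f-strings give the same result as `str.format()` with the variables written inline; a short equivalent of the example above:

first, family = 'John', 'Doe'
# Expressions inside the braces are evaluated directly.
print(f'My name is {first} {family}, you can call me {first}.')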
`str.join()`
help(str.join) pandas = 'pandas' numpy = 'numpy' requests = 'requests' cool_python_libs = ', '.join([pandas, numpy, requests]) print('Some cool python libraries: {}'.format(cool_python_libs))
Some cool python libraries: pandas, numpy, requests
MIT
notebooks/beginner/notebooks/strings.ipynb
jordantcarlisle/learn-python3
Alternatives (not as [Pythonic](http://docs.python-guide.org/en/latest/writing/style/idioms) and [slower](https://waymoot.org/home/python_string/)):
cool_python_libs = pandas + ', ' + numpy + ', ' + requests print('Some cool python libraries: {}'.format(cool_python_libs)) cool_python_libs = pandas cool_python_libs += ', ' + numpy cool_python_libs += ', ' + requests print('Some cool python libraries: {}'.format(cool_python_libs))
Some cool python libraries: pandas, numpy, requests Some cool python libraries: pandas, numpy, requests
MIT
notebooks/beginner/notebooks/strings.ipynb
jordantcarlisle/learn-python3
`str.upper(), str.lower(), str.title()`
mixed_case = 'PyTHoN hackER' mixed_case.upper() mixed_case.lower() mixed_case.title()
_____no_output_____
MIT
notebooks/beginner/notebooks/strings.ipynb
jordantcarlisle/learn-python3
`str.strip()`
help(str.strip) ugly_formatted = ' \n \t Some story to tell ' stripped = ugly_formatted.strip() print('ugly: {}'.format(ugly_formatted)) print('stripped: {}'.format(ugly_formatted.strip()))
ugly: Some story to tell stripped: Some story to tell
MIT
notebooks/beginner/notebooks/strings.ipynb
jordantcarlisle/learn-python3
`str.split()`
help(str.split) sentence = 'three different words' words = sentence.split() print(words) type(words) secret_binary_data = '01001,101101,11100000' binaries = secret_binary_data.split(',') print(binaries)
['01001', '101101', '11100000']
MIT
notebooks/beginner/notebooks/strings.ipynb
jordantcarlisle/learn-python3
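`str.split()` also accepts a `maxsplit` argument that limits how many splits are performed, which is handy for key/value style strings where the value itself may contain the separator; a short sketch:

record = 'timestamp=2021-01-01 12:00:00'
key, value = record.split('=', 1)  # split only on the first '='
print(key)    # timestamp
print(value)  # 2021-01-01 12:00:00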
Calling multiple methods in a row
ugly_mixed_case = ' ThIS LooKs BAd ' pretty = ugly_mixed_case.strip().lower().replace('bad', 'good') print(pretty)
this looks good
MIT
notebooks/beginner/notebooks/strings.ipynb
jordantcarlisle/learn-python3
Note that execution order is from left to right. Thus, this won't work:
pretty = ugly_mixed_case.replace('bad', 'good').strip().lower() print(pretty)
this looks bad
MIT
notebooks/beginner/notebooks/strings.ipynb
jordantcarlisle/learn-python3
[Escape characters](http://python-reference.readthedocs.io/en/latest/docs/str/escapes.htmlescape-characters)
two_lines = 'First line\nSecond line' print(two_lines) indented = '\tThis will be indented' print(indented)
This will be indented
MIT
notebooks/beginner/notebooks/strings.ipynb
jordantcarlisle/learn-python3
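When backslashes should be kept literally (for example in Windows paths or regular expressions), raw string literals disable escape processing; a small sketch:

escaped = 'C:\\temp\\new_folder'   # backslashes must be doubled in a normal literal
raw = r'C:\temp\new_folder'        # raw string: backslashes are kept as-is
print(escaped == raw)              # True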
ADVANCED TEXT MINING- This material was created for research and teaching purposes that use text mining.- If you would like to use this material for teaching, please contact the email address below.- Unauthorized distribution of this material is prohibited.- For inquiries about lectures, copyright, publication, patents, or co-authorship, please get in touch.- **Contact : ADMIN([email protected])**--- WEEK 02-2. Understanding Python data structures- Covers the Python data structures used for handling text data.--- 1. Understanding the LIST data structure--- 1.1. LIST: declares a structure that can store values or other data structures.---
# 1) 리스트를 생성합니다. new_list = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] print(new_list) # 2) 리스트의 마지막 원소 뒤에 새로운 원소를 추가합니다. new_list.append(100) print(new_list) # 3) 더하기 연산자를 활용해 두 리스트를 결합합니다. new_list = new_list + [101, 102] print(new_list) # 4-1) 리스트에 존재하는 특정 원소 중 일치하는 가장 앞의 원소를 삭제합니다. new_list.remove(3) print(new_list) # 4-2) 리스트에 존재하는 N 번째 원소를 삭제합니다. del new_list[3] print(new_list) # 5) 리스트에 존재하는 N 번째 원소의 값을 변경합니다. new_list[0] = 105 print(new_list) # 6) 리스트에 존재하는 모든 원소를 오름차순으로 정렬합니다. new_list.sort() #new_list.sort(reverse=False) print(new_list) # 7) 리스트에 존재하는 모든 원소를 내림차순으로 정렬합니다. new_list.sort(reverse=True) print(new_list) # 8) 리스트에 존재하는 모든 원소의 순서를 거꾸로 변경합니다. new_list.reverse() print(new_list) # 9) 리스트에 존재하는 모든 원소의 개수를 불러옵니다. length = len(new_list) print(new_list) # 10-1) 리스트에 특정 원소에 존재하는지 여부를 in 연산자를 통해 확인합니다. print(100 in new_list) # 10-2) 리스트에 특정 원소에 존재하지 않는지 여부를 not in 연산자를 통해 확인합니다. print(100 not in new_list)
False
Apache-2.0
practice-note/week_02/W02-2_advanced-text-mining_python-data-structure.ipynb
fingeredman/advanced-text-mining
1.2. LIST indexing: retrieves specific elements from a list.---
new_list = [0, 1, 2, 3, 4, 5, 6, 7, "hjvjg", 9] # 1) 리스트에 존재하는 N 번째 원소를 불러옵니다. print("0번째 원소 :", new_list[0]) print("1번째 원소 :", new_list[1]) print("4번째 원소 :", new_list[4]) # 2) 리스트에 존재하는 N번째 부터 M-1번째 원소를 리스트 형식으로 불러옵니다. print("0~3번째 원소 :", new_list[0:3]) print("4~9번째 원소 :", new_list[4:9]) print("2~3번째 원소 :", new_list[2:3]) # 3) 리스트에 존재하는 N번째 부터 모든 원소를 리스트 형식으로 불러옵니다. print("3번째 부터 모든 원소 :", new_list[3:]) print("5번째 부터 모든 원소 :", new_list[5:]) print("9번째 부터 모든 원소 :", new_list[9:]) # 4) 리스트에 존재하는 N번째 이전의 모든 원소를 리스트 형식으로 불러옵니다. print("1번째 이전의 모든 원소 :", new_list[:1]) print("7번째 이전의 모든 원소 :", new_list[:7]) print("9번째 이전의 모든 원소 :", new_list[:9]) # 5) 리스트 인덱싱에 사용되는 정수 N의 부호가 음수인 경우, 마지막 원소부터 |N|-1번째 원소를 의미합니다. print("끝에서 |-1|-1번째 이전의 모든 원소 :", new_list[:-1]) print("끝에서 |-1|-1번째 부터 모든 원소 :", new_list[-1:]) print("끝에서 |-2|-1번째 이전의 모든 원소 :", new_list[:-2]) print("끝에서 |-2|-1번째 부터 모든 원소 :", new_list[-2:])
끝에서 |-1|-1번째 이전의 모든 원소 : [0, 1, 2, 3, 4, 5, 6, 7, 8] 끝에서 |-1|-1번째 부터 모든 원소 : [9] 끝에서 |-2|-1번째 이전의 모든 원소 : [0, 1, 2, 3, 4, 5, 6, 7] 끝에서 |-2|-1번째 부터 모든 원소 : [8, 9]
Apache-2.0
practice-note/week_02/W02-2_advanced-text-mining_python-data-structure.ipynb
fingeredman/advanced-text-mining
1.3. Multidimensional LIST: the elements of a list can themselves store various values or data structures.---
# 1-1) 리스트의 원소에는 유형(TYPE)의 값 또는 자료구조를 섞어서 저장할 수 있습니다. new_list = ["텍스트", 0, 1.9, [1, 2, 3, 4], {"서울": 1, "부산": 2, "대구": 3}] print(new_list) # 1-2) 리스트의 각 원소의 유형(TYPE)을 type(변수) 함수를 활용해 확인합니다. print("Type of new_list[0] :", type(new_list[0])) print("Type of new_list[1] :", type(new_list[1])) print("Type of new_list[2] :", type(new_list[2])) print("Type of new_list[3] :", type(new_list[3])) print("Type of new_list[4] :", type(new_list[4])) # 2) 리스트 원소에 리스트를 여러개 추가하여 다차원 리스트(NxM)를 생성할 수 있습니다. new_list = [[0, 1, 2], [2, 3, 7], [9, 6, 8], [4, 5, 1]] print("new_list :", new_list) print("new_list[0] :", new_list[0]) print("new_list[1] :", new_list[1]) print("new_list[2] :", new_list[2]) print("new_list[3] :", new_list[3]) # 3-1) 다차원 리스트(NxM)를 정렬하는 경우 기본적으로 각 리스트의 첫번째 원소를 기준으로 정렬합니다. new_list.sort() print("new_list :", new_list) print("new_list[0] :", new_list[0]) print("new_list[1] :", new_list[1]) print("new_list[2] :", new_list[2]) print("new_list[3] :", new_list[3]) # 3-2) 다차원 리스트(NxM)를 각 리스트의 N 번째 원소를 기준으로 정렬합니다. new_list.sort(key=lambda elem: elem[2]) print("new_list :", new_list) print("new_list[0] :", new_list[0]) print("new_list[1] :", new_list[1]) print("new_list[2] :", new_list[2]) print("new_list[3] :", new_list[3])
new_list : [[4, 5, 1], [0, 1, 2], [2, 3, 7], [9, 6, 8]] new_list[0] : [4, 5, 1] new_list[1] : [0, 1, 2] new_list[2] : [2, 3, 7] new_list[3] : [9, 6, 8]
Apache-2.0
practice-note/week_02/W02-2_advanced-text-mining_python-data-structure.ipynb
fingeredman/advanced-text-mining
2. Understanding the DICTIONARY data structure--- 2.1. DICTIONARY: declares a structure that can store values or other data structures.---
# 1) 딕셔너리를 생성합니다. new_dict = {"마케팅팀": 98, "개발팀": 78, "데이터분석팀": 83, "운영팀": 33} print(new_dict) # 2) 딕셔너리의 각 원소는 KEY:VALUE 쌍의 구조를 가지며, KEY 값에 대응되는 VALUE를 불러옵니다. print(new_dict["마케팅팀"]) # 3-1) 딕셔너리에 새로운 KEY:VALUE 쌍의 원소를 추가합니다. new_dict["미화팀"] = 55 print(new_dict) # 3-2) 딕셔너리에 저장된 각 원소의 KEY 값은 유일해야하기 때문에, 중복된 KEY 값이 추가되는 경우 VALUE는 덮어쓰기 됩니다. new_dict["데이터분석팀"] = 100 print(new_dict) # 4) 딕셔너리에 다양한 유형(TYPE)의 값 또는 자료구조를 VALUE로 사용할 수 있습니다. new_dict["데이터분석팀"] = {"등급": "A"} new_dict["운영팀"] = ["A"] new_dict["개발팀"] = "재평가" new_dict[0] = "오타" print(new_dict)
{'마케팅팀': 98, '개발팀': '재평가', '데이터분석팀': {'등급': 'A'}, '운영팀': ['A'], '미화팀': 55, 0: '오타'}
Apache-2.0
practice-note/week_02/W02-2_advanced-text-mining_python-data-structure.ipynb
fingeredman/advanced-text-mining
2.2. DICTIONARY indexing: retrieves the elements of a dictionary in list form.---
# 1-1) 다양한 함수를 활용해 딕셔너리를 인덱싱 가능한 구조로 불러옵니다. new_dict = {"마케팅팀": 98, "개발팀": 78, "데이터분석팀": 83, "운영팀": 33} print("KEY List of new_dict :", new_dict.keys()) print("VALUE List of new_dict :", new_dict.values()) print("(KEY, VALUE) List of new_dict :", new_dict.items()) for i, j in new_dict.items(): print(i, j) # 1-2) 불러온 자료구조를 실제 리스트 자료구조로 변환합니다. print("KEY List of new_dict :", list(new_dict.keys())) print("VALUE List of new_dict :", list(new_dict.values())) print("(KEY, VALUE) List of new_dict :", list(new_dict.items()))
KEY List of new_dict : ['마케팅팀', '개발팀', '데이터분석팀', '운영팀'] VALUE List of new_dict : [98, 78, 83, 33] (KEY, VALUE) List of new_dict : [('마케팅팀', 98), ('개발팀', 78), ('데이터분석팀', 83), ('운영팀', 33)]
Apache-2.0
practice-note/week_02/W02-2_advanced-text-mining_python-data-structure.ipynb
fingeredman/advanced-text-mining
Euler Problem 148 ================= We can easily verify that none of the entries in the first seven rows of Pascal's triangle are divisible by 7. However, if we check the first one hundred rows, we will find that only 2361 of the 5050 entries are not divisible by 7. Find the number of entries which are not divisible by 7 in the first one billion (10^9) rows of Pascal's triangle.
def f(n): if n == 0: return 1 return (1+(n%7))*f(n//7) def F(n): if n == 0: return 0 r = n % 7 return 28*F(n//7) + r*(r+1)//2*f(n//7) print(F(10**9))
2129970655314432
MIT
Euler 148 - Exploring Pascal's triangle.ipynb
Radcliffe/project-euler
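The closed form used above relies on the fact that C(n, k) is not divisible by 7 exactly when every base-7 digit of k is at most the corresponding digit of n (Kummer/Lucas), so row n contributes the product of (digit + 1) over the base-7 digits of n. As a quick, independent sanity check, a brute-force count of the first 100 rows should reproduce the 2361 quoted in the problem statement; a small sketch (not part of the solution code):

from math import comb

# Count entries in the first 100 rows of Pascal's triangle that are not divisible by 7.
count = sum(1 for n in range(100) for k in range(n + 1) if comb(n, k) % 7 != 0)
print(count)  # expected: 2361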
Gradient CheckingWelcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking. You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker. But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".Let's do it!
# Packages import numpy as np from testCases import * from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector
_____no_output_____
BSD-2-Clause
Coursera Deeplearning Specialization/c2wk1c - Gradient+Checking+v1.ipynb
hamil168/Data-Science-Misc
1) How does gradient checking work?Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$. Let's look back at the definition of a derivative (or gradient):$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small."We know the following:- $\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly. - You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct. Lets use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct! 2) 1-dimensional gradient checkingConsider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct. **Figure 1** : **1D linear model** The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation"). **Exercise**: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions.
# GRADED FUNCTION: forward_propagation def forward_propagation(x, theta): """ Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x) Arguments: x -- a real-valued input theta -- our parameter, a real number as well Returns: J -- the value of function J, computed using the formula J(theta) = theta * x """ ### START CODE HERE ### (approx. 1 line) J = theta * x ### END CODE HERE ### return J x, theta = 2, 4 J = forward_propagation(x, theta) print ("J = " + str(J))
J = 8
BSD-2-Clause
Coursera Deeplearning Specialization/c2wk1c - Gradient+Checking+v1.ipynb
hamil168/Data-Science-Misc
**Expected Output**: ** J ** 8 **Exercise**: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac { \partial J }{ \partial \theta} = x$.
# GRADED FUNCTION: backward_propagation def backward_propagation(x, theta): """ Computes the derivative of J with respect to theta (see Figure 1). Arguments: x -- a real-valued input theta -- our parameter, a real number as well Returns: dtheta -- the gradient of the cost with respect to theta """ ### START CODE HERE ### (approx. 1 line) dtheta = x ### END CODE HERE ### return dtheta x, theta = 2, 4 dtheta = backward_propagation(x, theta) print ("dtheta = " + str(dtheta))
dtheta = 2
BSD-2-Clause
Coursera Deeplearning Specialization/c2wk1c - Gradient+Checking+v1.ipynb
hamil168/Data-Science-Misc
**Expected Output**: ** dtheta ** 2 **Exercise**: To show that the `backward_propagation()` function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking.**Instructions**:- First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow: 1. $\theta^{+} = \theta + \varepsilon$ 2. $\theta^{-} = \theta - \varepsilon$ 3. $J^{+} = J(\theta^{+})$ 4. $J^{-} = J(\theta^{-})$ 5. $gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$- Then compute the gradient using backward propagation, and store the result in a variable "grad"- Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula:$$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$You will need 3 Steps to compute this formula: - 1'. compute the numerator using np.linalg.norm(...) - 2'. compute the denominator. You will need to call np.linalg.norm(...) twice. - 3'. divide them.- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.
# GRADED FUNCTION: gradient_check def gradient_check(x, theta, epsilon = 1e-7): """ Implement the backward propagation presented in Figure 1. Arguments: x -- a real-valued input theta -- our parameter, a real number as well epsilon -- tiny shift to the input to compute approximated gradient with formula(1) Returns: difference -- difference (2) between the approximated gradient and the backward propagation gradient """ # Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit. ### START CODE HERE ### (approx. 5 lines) thetaplus = theta + epsilon # Step 1 thetaminus = theta - epsilon # Step 2 J_plus = thetaplus * x # Step 3 J_minus = thetaminus * x # Step 4 gradapprox = (J_plus - J_minus) / (2 * epsilon) # Step 5 ### END CODE HERE ### # Check if gradapprox is close enough to the output of backward_propagation() ### START CODE HERE ### (approx. 1 line) grad = backward_propagation(x, theta) ### END CODE HERE ### ### START CODE HERE ### (approx. 1 line) numerator = np.linalg.norm(grad - gradapprox) # Step 1' denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2' difference = numerator / denominator # Step 3' ### END CODE HERE ### if difference < 1e-7: print ("The gradient is correct!") else: print ("The gradient is wrong!") return difference x, theta = 2, 4 difference = gradient_check(x, theta) print("difference = " + str(difference))
The gradient is correct! difference = 2.91933588329e-10
BSD-2-Clause
Coursera Deeplearning Specialization/c2wk1c - Gradient+Checking+v1.ipynb
hamil168/Data-Science-Misc
**Expected Output**:The gradient is correct! ** difference ** 2.9193358103083e-10 Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in `backward_propagation()`. Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it! 3) N-dimensional gradient checking The following figure describes the forward and backward propagation of your fraud detection model. **Figure 2** : **deep neural network***LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID*Let's look at your implementations for forward propagation and backward propagation.
def forward_propagation_n(X, Y, parameters): """ Implements the forward propagation (and computes the cost) presented in Figure 3. Arguments: X -- training set for m examples Y -- labels for m examples parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3": W1 -- weight matrix of shape (5, 4) b1 -- bias vector of shape (5, 1) W2 -- weight matrix of shape (3, 5) b2 -- bias vector of shape (3, 1) W3 -- weight matrix of shape (1, 3) b3 -- bias vector of shape (1, 1) Returns: cost -- the cost function (logistic cost for one example) """ # retrieve parameters m = X.shape[1] W1 = parameters["W1"] b1 = parameters["b1"] W2 = parameters["W2"] b2 = parameters["b2"] W3 = parameters["W3"] b3 = parameters["b3"] # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID Z1 = np.dot(W1, X) + b1 A1 = relu(Z1) Z2 = np.dot(W2, A1) + b2 A2 = relu(Z2) Z3 = np.dot(W3, A2) + b3 A3 = sigmoid(Z3) # Cost logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y) cost = 1./m * np.sum(logprobs) cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) return cost, cache
_____no_output_____
BSD-2-Clause
Coursera Deeplearning Specialization/c2wk1c - Gradient+Checking+v1.ipynb
hamil168/Data-Science-Misc
Now, run backward propagation.
def backward_propagation_n(X, Y, cache): """ Implement the backward propagation presented in figure 2. Arguments: X -- input datapoint, of shape (input size, 1) Y -- true "label" cache -- cache output from forward_propagation_n() Returns: gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables. """ m = X.shape[1] (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache dZ3 = (A3 - Y) # / (A3 * (1 - A3)) WHY ISN'T dZ3 more complicated dW3 = 1./m * np.dot(dZ3, A2.T) db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True) dA2 = np.dot(W3.T, dZ3) dZ2 = np.multiply(dA2, np.int64(A2 > 0)) dW2 = 1./m * np.dot(dZ2, A1.T) #2* before the end of assignment made us look for errors up here db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True) dA1 = np.dot(W2.T, dZ2) dZ1 = np.multiply(dA1, np.int64(A1 > 0)) dW1 = 1./m * np.dot(dZ1, X.T) db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True) # 4 / m before the end of assn made us look for errors up here gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3, "dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1} return gradients
_____no_output_____
BSD-2-Clause
Coursera Deeplearning Specialization/c2wk1c - Gradient+Checking+v1.ipynb
hamil168/Data-Science-Misc
You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct. **How does gradient checking work?**.As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still:$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "`dictionary_to_vector()`" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.The inverse function is "`vector_to_dictionary`" which outputs back the "parameters" dictionary. **Figure 2** : **dictionary_to_vector() and vector_to_dictionary()** You will need these functions in gradient_check_n()We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that.**Exercise**: Implement gradient_check_n().**Instructions**: Here is pseudo-code that will help you implement the gradient check.For each i in num_parameters:- To compute `J_plus[i]`: 1. Set $\theta^{+}$ to `np.copy(parameters_values)` 2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$ 3. Calculate $J^{+}_i$ using to `forward_propagation_n(x, y, vector_to_dictionary(`$\theta^{+}$ `))`. - To compute `J_minus[i]`: do the same thing with $\theta^{-}$- Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to `parameter_values[i]`. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute: $$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$
# GRADED FUNCTION: gradient_check_n def gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7): """ Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n Arguments: parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3": grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters. x -- input datapoint, of shape (input size, 1) y -- true "label" epsilon -- tiny shift to the input to compute approximated gradient with formula(1) Returns: difference -- difference (2) between the approximated gradient and the backward propagation gradient """ # Set-up variables parameters_values, _ = dictionary_to_vector(parameters) grad = gradients_to_vector(gradients) num_parameters = parameters_values.shape[0] J_plus = np.zeros((num_parameters, 1)) J_minus = np.zeros((num_parameters, 1)) gradapprox = np.zeros((num_parameters, 1)) # Compute gradapprox for i in range(num_parameters): # Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]". # "_" is used because the function you have to outputs two parameters but we only care about the first one ### START CODE HERE ### (approx. 3 lines) thetaplus = np.copy(parameters_values) # Step 1 #print(thetaplus[i][0]) thetaplus[i,0] = thetaplus[i,0] + epsilon #thetaplus[i][0] = thetaplus[i][0] + epsilon # Step 2 J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus)) # Step 3 ### END CODE HERE ### # Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]". ### START CODE HERE ### (approx. 3 lines) thetaminus = np.copy(parameters_values) # Step 1 thetaminus[i,0] = thetaplus[i,0] - epsilon #thetaminus[i][0] = thetaplus[i][0] - epsilon # Step 2 J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus)) # Step 3 ### END CODE HERE ### # Compute gradapprox[i] ### START CODE HERE ### (approx. 1 line) gradapprox[i] = (J_plus[i] - J_minus[i]) / (epsilon) #Why isn't the 2 eeps in here? need this to be correct. #gradapprox[i] = (J_plus[i] - J_minus[i]) / (2. * epsilon) #How is should be! ### END CODE HERE ### # Compare gradapprox to backward propagation gradients by computing difference. ### START CODE HERE ### (approx. 1 line) numerator = np.linalg.norm(grad - gradapprox) # Step 1' denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2' difference = numerator / denominator # Step 3' ### END CODE HERE ### if difference > 2e-7: print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m") else: print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m") #print(gradapprox) return difference X, Y, parameters = gradient_check_n_test_case() cost, cache = forward_propagation_n(X, Y, parameters) gradients = backward_propagation_n(X, Y, cache) difference = gradient_check_n(parameters, gradients, X, Y)
Your backward propagation works perfectly fine! difference = 1.69916980932e-07
BSD-2-Clause
Coursera Deeplearning Specialization/c2wk1c - Gradient+Checking+v1.ipynb
hamil168/Data-Science-Misc
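The in-code comments above show some hesitation about the `2 * epsilon` factor: the central-difference formula (1) nudges the parameter in both directions from its original value and divides by 2ε. A minimal sketch of that inner-loop body as the formula prescribes, using the same variable names as the cell above and shown only for comparison:

# Central-difference approximation for parameter i (sketch).
thetaplus = np.copy(parameters_values)
thetaplus[i, 0] += epsilon
J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus))

thetaminus = np.copy(parameters_values)
thetaminus[i, 0] -= epsilon                                  # nudge down from the ORIGINAL value
J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus))

gradapprox[i] = (J_plus[i] - J_minus[i]) / (2. * epsilon)    # divide by 2*epsilon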
Code along 4 Scale, Standardize, or Normalize with scikit-learn When to use MinMaxScaler, RobustScaler, StandardScaler, and Normalizer Attribution: Jeff Hale Why is it often necessary to perform so-called variable transformation/feature scaling, that is, to standardize, normalize, or otherwise change the scale of data during data analysis? As covered in the lecture on data wrangling, data may need to be formatted (variable transformation) to improve the performance of many data-analysis algorithms. One kind of data formatting, which can be done in many different ways, is so-called feature scaling. There can be several reasons why data may need to be scaled, for example: * Neural networks, regression algorithms, and K-nearest neighbors do not work as well unless the attributes (features) the algorithm uses are on relatively similar scales. * Some of the methods for scaling, standardizing, and normalizing can also reduce the negative impact outliers can have in certain algorithms. * Sometimes it is also important to have data that is normally distributed (standardized). *"Scale" here does not mean the scale used on maps, where, for example, a scale of 1:50 000 means that every distance on the map is 50 000 times shorter than in reality.*
#Importerar de bibliotek vi behöver import numpy as np import pandas as pd from sklearn import preprocessing import matplotlib import matplotlib.pyplot as plt import seaborn as sns import warnings warnings.filterwarnings('ignore') #Denna kod sätter upp hur matplotlib ska visa grafer och plotar %matplotlib inline matplotlib.style.use('ggplot') #Generera lite input #(den som är extremt intresserad kan läsa följande, intressanta och roliga förklaring kring varför random.seed egentligen är pseudorandom) #https://www.sharpsightlabs.com/blog/numpy-random-seed/ np.random.seed(34)
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
Original Distributions Data as it might look in its original form, i.e., as collected, before any pre-processing has been carried out. To have data to use in the exercises, the code below creates a number of randomized distributions of data.
#skapa kolumner med olika fördelningar df = pd.DataFrame({ 'beta': np.random.beta(5, 1, 1000) * 60, # beta 'exponential': np.random.exponential(10, 1000), # exponential 'normal_p': np.random.normal(10, 2, 1000), # normal platykurtic 'normal_l': np.random.normal(10, 10, 1000), # normal leptokurtic }) # make bimodal distribution first_half = np.random.normal(20, 3, 500) second_half = np.random.normal(-20, 3, 500) bimodal = np.concatenate([first_half, second_half]) df['bimodal'] = bimodal # create list of column names to use later col_names = list(df.columns)
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
Task 1: a. Plot the curves created in the cell above in one and the same figure using the [seaborn library](https://seaborn.pydata.org/api.htmldistribution-api).>Make sure it is clear which curve represents which distribution.>>The code for the figure itself is given; continue coding in the same cell.>>HINT! All five are distribution plots.
# plot original distribution plot fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8)) ax1.set_title('Original Distributions') #De fem kurvorna sns.kdeplot(df['beta'], ax=ax1) sns.kdeplot(df['exponential'], ax=ax1) sns.kdeplot(df['normal_p'], ax=ax1) sns.kdeplot(df['normal_l'], ax=ax1) sns.kdeplot(df['bimodal'], ax=ax1);
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
b. Show the first five rows of the dataframe that contains all the distributions.
df.head()
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
c. For all five attributes, compute: * mean * median What handy method can be used to get a set of statistical measures for a dataframe? Retrieve this information with that method.
df.describe()
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
d. In pandas you can plot your dataframe in a few different ways. Make a plot to find out what the scales of the different attributes look like: are all five on roughly the same scale?
df.plot()
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
* All values lie within similar intervals. e. What happens if the following column of randomized values is added?
new_column = np.random.normal(1000000, 10000, (1000,1)) df['new_column'] = new_column col_names.append('new_column') df['new_column'].plot(kind='kde') # plot våra originalvärden tillsammans med det nya värdet fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8)) ax1.set_title('Original Distributions') sns.kdeplot(df['beta'], ax=ax1) sns.kdeplot(df['exponential'], ax=ax1) sns.kdeplot(df['normal_p'], ax=ax1) sns.kdeplot(df['normal_l'], ax=ax1) sns.kdeplot(df['bimodal'], ax=ax1); sns.kdeplot(df['new_column'], ax=ax1);
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
How did it go? Let's try a few different ways of scaling dataframes. MinMaxScaler MinMaxScaler subtracts the minimum of a column from each value in that column and then divides by the range (the column maximum minus the column minimum), so every value ends up between 0 and 1.
mm_scaler = preprocessing.MinMaxScaler() df_mm = mm_scaler.fit_transform(df) df_mm = pd.DataFrame(df_mm, columns=col_names) fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8)) ax1.set_title('After MinMaxScaler') sns.kdeplot(df_mm['beta'], ax=ax1) sns.kdeplot(df_mm['exponential'], ax=ax1) sns.kdeplot(df_mm['normal_p'], ax=ax1) sns.kdeplot(df_mm['normal_l'], ax=ax1) sns.kdeplot(df_mm['bimodal'], ax=ax1) sns.kdeplot(df_mm['new_column'], ax=ax1);
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
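To see that MinMaxScaler really is just (x - min) / (max - min) per column, the same transformation can be reproduced directly in pandas; a small sketch using the `df` and `df_mm` objects created above:

# Manual min-max scaling of one column, for comparison with df_mm.
manual_beta = (df['beta'] - df['beta'].min()) / (df['beta'].max() - df['beta'].min())
print(manual_beta.min(), manual_beta.max())                    # 0.0 1.0
print(np.allclose(manual_beta.values, df_mm['beta'].values))   # True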
What has happened to the values?
df_mm['beta'].min() df_mm['beta'].max()
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
Let's compare with the minimum and maximum value of each column before we scaled our dataframe.
mins = [df[col].min() for col in df.columns] mins maxs = [df[col].max() for col in df.columns] maxs
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
Let's check the minimums and maximums for each column after MinMaxScaler.
mins = [df_mm[col].min() for col in df_mm.columns] mins maxs = [df_mm[col].max() for col in df_mm.columns] maxs
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
What has happened? RobustScaler RobustScaler subtracts the column median from each value and divides by the interquartile range (the difference between the 75th and 25th percentiles).
r_scaler = preprocessing.RobustScaler() df_r = r_scaler.fit_transform(df) df_r = pd.DataFrame(df_r, columns=col_names) fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8)) ax1.set_title('After RobustScaler') sns.kdeplot(df_r['beta'], ax=ax1) sns.kdeplot(df_r['exponential'], ax=ax1) sns.kdeplot(df_r['normal_p'], ax=ax1) sns.kdeplot(df_r['normal_l'], ax=ax1) sns.kdeplot(df_r['bimodal'], ax=ax1) sns.kdeplot(df_r['new_column'], ax=ax1);
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
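RobustScaler's (x - median) / IQR can likewise be checked by hand; a sketch, assuming the default quantile range of (25, 75) and that pandas and scikit-learn interpolate the percentiles the same way:

# Manual robust scaling of one column, for comparison with df_r.
q1, q3 = df['beta'].quantile(0.25), df['beta'].quantile(0.75)
manual_beta = (df['beta'] - df['beta'].median()) / (q3 - q1)
print(np.allclose(manual_beta.values, df_r['beta'].values))  # True (up to floating-point rounding)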
Let's check min and max again afterwards (NOTE: compare with the original at the top, before we applied the different scaling methods).
mins = [df_r[col].min() for col in df_r.columns] mins maxs = [df_r[col].max() for col in df_r.columns] maxs
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
What has happened? StandardScaler StandardScaler scales each column to have a mean of 0 and a standard deviation of 1.
s_scaler = preprocessing.StandardScaler() df_s = s_scaler.fit_transform(df) df_s = pd.DataFrame(df_s, columns=col_names) fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8)) ax1.set_title('After StandardScaler') sns.kdeplot(df_s['beta'], ax=ax1) sns.kdeplot(df_s['exponential'], ax=ax1) sns.kdeplot(df_s['normal_p'], ax=ax1) sns.kdeplot(df_s['normal_l'], ax=ax1) sns.kdeplot(df_s['bimodal'], ax=ax1) sns.kdeplot(df_s['new_column'], ax=ax1);
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
Let's check min and max after the scaling once again.
mins = [df_s[col].min() for col in df_s.columns] mins maxs = [df_s[col].max() for col in df_s.columns] maxs
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
What has happened? And compared with the two methods before? Normalizer Normalizer transforms rows instead of columns, by default dividing each row by its Euclidean (l2) norm, i.e. the square root of the sum of the squared values in that row.
n_scaler = preprocessing.Normalizer() df_n = n_scaler.fit_transform(df) df_n = pd.DataFrame(df_n, columns=col_names) fig, (ax1) = plt.subplots(ncols=1, figsize=(10, 8)) ax1.set_title('After Normalizer') sns.kdeplot(df_n['beta'], ax=ax1) sns.kdeplot(df_n['exponential'], ax=ax1) sns.kdeplot(df_n['normal_p'], ax=ax1) sns.kdeplot(df_n['normal_l'], ax=ax1) sns.kdeplot(df_n['bimodal'], ax=ax1) sns.kdeplot(df_n['new_column'], ax=ax1);
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
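Because Normalizer works row-wise, the same result can be obtained by dividing every row by its own l2 norm; a small sketch for comparison with `df_n`:

# Manual row-wise l2 normalization, for comparison with df_n.
row_norms = np.sqrt((df ** 2).sum(axis=1))
manual = df.div(row_norms, axis=0)
print(np.allclose(manual.values, df_n.values))  # True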
Min and max after scaling
mins = [df_n[col].min() for col in df_n.columns] mins maxs = [df_n[col].max() for col in df_n.columns] maxs
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
What has happened? Now let's look at all the scaling methods together; however, we skip the Normalizer since it is very unusual to want to rescale rows. Combined plot
#Själva figuren fig, (ax0, ax1, ax2, ax3) = plt.subplots(ncols=4, figsize=(20, 8)) ax0.set_title('Original Distributions') sns.kdeplot(df['beta'], ax=ax0) sns.kdeplot(df['exponential'], ax=ax0) sns.kdeplot(df['normal_p'], ax=ax0) sns.kdeplot(df['normal_l'], ax=ax0) sns.kdeplot(df['bimodal'], ax=ax0) sns.kdeplot(df['new_column'], ax=ax0); ax1.set_title('After MinMaxScaler') sns.kdeplot(df_mm['beta'], ax=ax1) sns.kdeplot(df_mm['exponential'], ax=ax1) sns.kdeplot(df_mm['normal_p'], ax=ax1) sns.kdeplot(df_mm['normal_l'], ax=ax1) sns.kdeplot(df_mm['bimodal'], ax=ax1) sns.kdeplot(df_mm['new_column'], ax=ax1); ax2.set_title('After RobustScaler') sns.kdeplot(df_r['beta'], ax=ax2) sns.kdeplot(df_r['exponential'], ax=ax2) sns.kdeplot(df_r['normal_p'], ax=ax2) sns.kdeplot(df_r['normal_l'], ax=ax2) sns.kdeplot(df_r['bimodal'], ax=ax2) sns.kdeplot(df_r['new_column'], ax=ax2); ax3.set_title('After StandardScaler') sns.kdeplot(df_s['beta'], ax=ax3) sns.kdeplot(df_s['exponential'], ax=ax3) sns.kdeplot(df_s['normal_p'], ax=ax3) sns.kdeplot(df_s['normal_l'], ax=ax3) sns.kdeplot(df_s['bimodal'], ax=ax3) sns.kdeplot(df_s['new_column'], ax=ax3);
_____no_output_____
Apache-2.0
Scale, Standardize, or Normalize with scikit-learn.ipynb
2IS239-Data-Analytics/Code_along4
Tutorial 2. Solving a 1D diffusion equation
# Document Author: Dr. Vishal Sharma # Author email: [email protected] # License: MIT # This tutorial is applicable for NAnPack version 1.0.0-alpha4
_____no_output_____
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS
I. BackgroundThe objective of this tutorial is to present the step-by-step solution of a 1D diffusion equation using NAnPack such that users can follow the instructions to learn using this package. The numerical solution is obtained using the Forward Time Central Spacing (FTCS) method. The detailed description of the FTCS method is presented in Section IV of this tutorial. II. Case DescriptionWe will be solving a classical problem of a suddenly accelerated plate in fluid mechanics, which has a known exact solution. In this problem, the fluid is bounded between two parallel plates. The upper plate remains stationary and the lower plate is suddenly accelerated in the *y*-direction at velocity $U_o$. It is required to find the velocity profile between the plates for the given initial and boundary conditions.(For the sake of simplicity in setting up numerical variables, let's assume that the *x*-axis is pointed in the upward direction and the *y*-axis is pointed along the horizontal direction as shown in the schematic below: ![parallel-plate-plot.png](attachment:1be77927-d72d-49db-86dc-b2af1aeed6b7.png) **Initial conditions**$$u(t=0.0, 0.0<x\leq H) = 0.0 \;m/s$$$$u(t=0.0, x=0.0) = 40.0 \;m/s$$**Boundary conditions**$$u(t\geq0.0, x=0.0) = 40.0 \;m/s$$$$u(t\geq0.0, x=H) = 0.0 \;m/s$$Viscosity of fluid, $\;\;\nu = 2.17*10^{-4} \;m^2/s$ Distance between plates, $\;\;H = 0.04 \;m$ Grid step size, $\;\;dx = 0.001 \;m$ Simulation time, $\;\;T = 1.08 \;sec$Specify the required simulation inputs based on our setup in the configuration file provided with this package. You may choose to save the configuration file with any other filename. I have saved the configuration file in the "input" folder of my project directory such that the relative path is `./input/config.ini`. III. Governing EquationThe governing equation for the given application is the simplified form of the Navier-Stokes equation, which is given as:$$\frac{\partial u} {\partial t} = \nu\frac{\partial^2 u}{\partial x^2}$$This is the diffusion equation model and is classified as a parabolic PDE. IV. FTCS methodThe forward time central spacing approximation equation in 1D is presented here. This is a time-explicit method, which means that one unknown is calculated using the known neighbouring values from the previous time step. Here *i* represents the grid point location, *n*+1 is the future time step, and *n* is the current time step.$$u_{i}^{n+1} = u_{i}^{n} + \frac{\nu\Delta t}{(\Delta x)^2}(u_{i+1}^{n} - 2u_{i}^{n} + u_{i-1}^{n})$$The order of this approximation is $[(\Delta t), (\Delta x)^2]$. The diffusion number is given as $d_{x} = \nu\frac{\Delta t}{(\Delta x)^2}$ and for one-dimensional applications the stability criterion is $d_{x}\leq\frac{1}{2}$. The solution presented here is obtained using a diffusion number = 0.5 (CFL = 0.5 in the configuration file). The time step size will be computed using the expression for the diffusion number. Beginners are encouraged to try diffusion numbers greater than 0.5 as an exercise after running this script. Users are encouraged to read my blogs on numerical methods - [link here](https://www.linkedin.com/in/vishalsharmaofficial/detail/recent-activity/posts/). V. Script Development*Please note that this code script is provided in file `./examples/tutorial-02-diffusion-1D-solvers-FTCS.py`.*As per the established Python coding guidelines [PEP 8](https://www.python.org/dev/peps/pep-0008/imports), all package imports must be done at the top part of the script in the following sequence -- 1. import standard library modules, 2. import third-party modules, 3. import local application/library specific modules. Accordingly, in our code we will be importing the following required modules (in alphabetical order). If you are using Jupyter notebook, hit `Shift + Enter` on each cell after typing the code.
import matplotlib.pyplot as plt from nanpack.benchmark import ParallelPlateFlow import nanpack.preprocess as pre from nanpack.grid import RectangularGrid from nanpack.parabolicsolvers import FTCS import nanpack.postprocess as post
_____no_output_____
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS
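For readers who want to see what the FTCS update in Section IV amounts to without the library, one explicit time step can be written directly in NumPy; this is a plain-NumPy sketch, not NAnPack's own implementation:

import numpy as np

def ftcs_step(u_old, d):
    """One FTCS update on the interior points; boundary values are handled separately."""
    u_new = u_old.copy()
    u_new[1:-1] = u_old[1:-1] + d * (u_old[2:] - 2.0 * u_old[1:-1] + u_old[:-2])
    return u_new

# Tiny usage example with diffusion number d = nu*dt/dx**2 = 0.5.
u = np.zeros(41)
u[0] = 40.0
u = ftcs_step(u, 0.5)
print(u[:4])  # [40. 20.  0.  0.]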
As the first step in the simulation, we have to tell our script to read the inputs and assign those inputs to the variables/objects that we will use throughout the code. For this purpose, there is a class `RunConfig` in the `nanpack.preprocess` module. We will call this class and assign an object (instance) to it so that we can use its member variables. The `RunConfig` class is written in such a manner that its methods get executed as soon as its instance is created. Users must provide the configuration file path as a parameter to the `RunConfig` class.
FileName = "path/to/project/input/config.ini" # specify the correct file path cfg = pre.RunConfig(FileName) # cfg is an instance of RunConfig class which can be used to access class variables. You may choose any variable in place of cfg.
******************************************************* ******************************************************* Starting configuration. Searching for simulation configuration file in path: "D:/MyProjects/projectroot/nanpack/input/config.ini" SUCCESS: Configuration file parsing. Checking whether all sections are included in config file. Checking section SETUP: Completed. Checking section DOMAIN: Completed. Checking section MESH: Completed. Checking section IC: Completed. Checking section BC: Completed. Checking section CONST: Completed. Checking section STOP: Completed. Checking section OUTPUT: Completed. Checking numerical setup. User inputs in SETUP section check: Completed. Accessing domain geometry configuration: Completed Accessing meshing configuration: Completed. Calculating grid size: Completed. Assigning COLD-START initial conditions to the dependent term. Initialization: Completed. Accessing boundary condition settings: Completed Accessing constant data: Completed. Calculating time step size for the simulation: Completed. Calculating maximum iterations/steps for the simulation: Completed. Accessing simulation stop settings: Completed. Accessing settings for storing outputs: Completed. ********************************************************** CASE DESCRIPTION SUDDENLY ACC. PLATE SOLVER STATE TRANSIENT MODEL EQUATION DIFFUSION DOMAIN DIMENSION 1D LENGTH 0.04 GRID STEP SIZE dX 0.001 TIME STEP 0.002 GRID POINTS along X 41 DIFFUSION CONST. 2.1700e-04 DIFFUSION NUMBER 0.5 TOTAL SIMULATION TIME 1.08 NUMBER OF TIME STEPS 468 START CONDITION COLD-START ********************************************************** SUCEESS: Configuration completed.
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS
You will obtain several configuration messages on your output screen so that you can verify that your inputs are correct and that the configuration has completed successfully. The next step is the assignment of the initial and boundary conditions. For assigning boundary conditions, I have created a function `BC()` which we will be calling in the next cell. I have included this function at the bottom of this tutorial for your reference. Note that U is the dependent variable that was initialized when we executed the configuration, and thus we will be using `cfg.U` to access the initialized U. In a similar manner, all the inputs provided in the configuration file can be obtained by using the configuration class object `cfg.` as the prefix to the variable names. Users are allowed to use any object name of their choice.*If you are using Jupyter Notebook, the function BC must be executed before it is referenced, otherwise you will get an error. Jump to the bottom of this notebook where you see code cell 1 containing the `BC()` function.*
# Assign initial conditions cfg.U[0] = 40.0 cfg.U[1:] = 0.0 # Assign boundary conditions U = BC(cfg.U)
_____no_output_____
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS
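The `BC()` helper itself is defined at the bottom of the original notebook and is not reproduced in this excerpt; based on the stated boundary conditions (u = 40 m/s at x = 0 and u = 0 at x = H), it presumably looks something like the following sketch, which is my reconstruction rather than the author's code:

def BC(U):
    """Apply Dirichlet boundary conditions at both ends of the 1D domain."""
    U[0] = 40.0    # suddenly accelerated plate at x = 0
    U[-1] = 0.0    # stationary plate at x = H
    return U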
Next, we will calculate the locations of all grid points within the domain using the function `RectangularGrid()` and save the values into X. We also need the diffusion number in the X direction. For 1D applications, NAnPack treats the diffusion number as equal to the CFL value that we entered in the configuration file, so this step could be skipped; however, this is not the case in two-dimensional applications, and therefore, to stay consistent and avoid confusion, we will use the function `DiffusionNumbers()` to compute the term `diffX`.
X, _ = RectangularGrid(cfg.dX, cfg.iMax) diffX,_ = pre.DiffusionNumbers(cfg.Dimension, cfg.diff, cfg.dT, cfg.dX)
Calculating diffusion numbers: Completed.
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS
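As a quick sanity check on the configuration output shown earlier (diffusion number 0.5, dX = 0.001 m, nu = 2.17e-4 m²/s), the time step implied by the diffusion number can be computed directly; a small sketch:

nu, dX, d = 2.17e-4, 0.001, 0.5
dT = d * dX ** 2 / nu    # dT = d*dx^2/nu
print(round(dT, 4))      # ~0.0023 s, consistent with the 0.002 shown in the log (which appears to truncate the value)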
Next, we will initialize some local variables before start the time stepping:
Error = 1.0 # variable to keep track of error n = 0 # variable to advance in time
_____no_output_____
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS
Start the time loop using a while loop such that if one of the conditions returns False, the time stepping stops. For an explanation of each line, see the comments. Please note the indentation of the code within the while loop. Take extra care with indentation, as Python is very particular about it.
while n <= cfg.nMax and Error > cfg.ConvCrit: # start loop Error = 0.0 # reset error to 0.0 at the beginning of each step n += 1 # advance the value of n at each step Uold = U.copy() # store solution at time level, n U = FTCS(Uold, diffX) # solve for U using FTCS method at time level n+1 Error = post.AbsoluteError(U, Uold) # calculate errors U = BC(U) # Update BC post.MonitorConvergence(cfg, n, Error) # Use this function to monitor convergence post.WriteSolutionToFile(U, n, cfg.nWrite, cfg.nMax,\ cfg.OutFileName, cfg.dX) # Write output to file post.WriteConvHistToFile(cfg, n, Error) # Write convergence log to history file
ITER ERROR ---- ----- 10 4.92187500 20 3.52394104 30 2.88928896 40 2.50741375 50 2.24550338 60 2.05156084 70 1.90048503 80 1.77844060 90 1.67704721 100 1.59085792 110 1.51614304 120 1.45025226 130 1.39125374 140 1.33771501 150 1.28856146 160 1.24298016 170 1.20035213 180 1.16020337 190 1.12216882 200 1.08596559 210 1.05137298 220 1.01821734 230 0.98636083 240 0.95569280 250 0.92612336 260 0.89757851 270 0.86999638 280 0.84332454 290 0.81751777 300 0.79253655 310 0.76834575 320 0.74491380 330 0.72221190 340 0.70021355 350 0.67889409 360 0.65823042 370 0.63820074 380 0.61878436 390 0.59996158 400 0.58171354 410 0.56402217 420 0.54687008 430 0.53024053 440 0.51411737 450 0.49848501 460 0.48332837 STATUS: SOLUTION OBTAINED AT TIME LEVEL= 1.08 s. TIME STEPS= 468 Writing convergence log file: Completed. Files saved: "D:/MyProjects/projectroot/nanpack/output/HISTftcs1D.dat".
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS
In the above convergence monitor, it is worth noting that the solution error is gradually moving towards zero, which is what we need to confirm stability of the solution. If the solution becomes unstable, the errors will rise, probably up to the point where your code crashes. The solution obtained is time-dependent, and therefore we did not allow the code to run until convergence is observed. If a steady-state solution is desired, set the STATE key in the configuration file to "STEADY" and specify a much larger value of the nMax key, say nMax = 5000. Obtaining a steady-state solution is left as an exercise for the users. Also, try running the solution with a larger grid step size, $\Delta x$, or a larger time step size, $\Delta t$. After the time stepping is completed, save the final results to the output files.
# Write output to file post.WriteSolutionToFile(U, n, cfg.nWrite, cfg.nMax, cfg.OutFileName, cfg.dX) # Write convergence history log to a file post.WriteConvHistToFile(cfg, n, Error)
_____no_output_____
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS
Verify that the files are saved in the target directory. Now let us obtain the analytical solution of this flow, which will help us validate our code.
# Obtain analytical solution Uana = ParallelPlateFlow(40.0, X, cfg.diff, cfg.totTime, 20)
_____no_output_____
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS
Next, we will validate our results by plotting them using the matplotlib package that we imported above. Type the following lines of code:
plt.rc("font", family="serif", size=8) # Assign fonts in the plot fig, ax = plt.subplots(dpi=150) # Create axis for plotting plt.plot(U, X, ">-.b", linewidth=0.5, label="FTCS",\ markersize=5, markevery=5) # Plot data with required labels and markers, customize the plot however you may like plt.plot(Uana, X, "o:r", linewidth=0.5, label="Analytical",\ markersize=5, markevery=5) # Plot analytical solution on the same plot plt.xlabel('Velocity (m/s)') # X-axis labelling plt.ylabel('Plate distance (m)') # Y-axis labelling plt.title(f"Velocity profile\nat t={cfg.totTime} sec", fontsize=8) # Plot title plt.legend() plt.show() # Show plot- this command is very important
_____no_output_____
MIT
docs/source/examples/tutorial-02-diffusion-1D-solvers-FTCS.ipynb
vxsharma-14/DIFFUS