# Description: This program uses a Convolutional Neural Network (CNN)
# to classify handwritten digits (0-9)
# Import the libraries
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten, MaxPool2D
from keras.datasets import mnist
from keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
# Load the MNIST data, already split into train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 ━━━━━━━━━━━━━━━━━━━━ 0s 0us/step
# Get the image shapes
print(X_train.shape)
print(X_test.shape)
(60000, 28, 28)
(10000, 28, 28)
plt.imshow(X_train[2])
<matplotlib.image.AxesImage at 0x7d583f968610>
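# Added check (not in the original): y_train holds the integer class for each
# training image, so this prints the digit shown above (expected to be 4 for
# the standard Keras MNIST ordering)
print(y_train[2])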
# Reshape the data to add a channel dimension (28x28 grayscale -> 28x28x1)
X_train = X_train.reshape(60000, 28, 28, 1)
X_test = X_test.reshape(10000, 28, 28, 1)
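# Note (an optional step, not part of the original run): pixel values are still
# in the 0-255 range here. A common extra preprocessing step is to scale them
# to [0, 1] before training, e.g.:
#   X_train = X_train.astype('float32') / 255.0
#   X_test = X_test.astype('float32') / 255.0
# The training log further below was produced without this scaling.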
# One-hot encode the labels (each digit becomes a 10-element vector)
y_train_one_hot = to_categorical(y_train)
y_test_one_hot = to_categorical(y_test)
# Print the new label for the first training example
print(y_train_one_hot[0])
[0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
# Build the CNN model
model = Sequential()
# Add model layers
model.add(Conv2D(64, kernel_size=3, activation='relu', input_shape=(28, 28, 1)))  # 64 3x3 filters
model.add(Conv2D(32, kernel_size=3, activation='relu'))  # 32 3x3 filters
model.add(MaxPool2D(pool_size=(2, 2)))  # downsample the feature maps by 2x2
model.add(Flatten())  # flatten the feature maps into a single vector
model.add(Dense(10, activation='softmax'))  # one probability per digit class
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
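# Optional sanity check (not in the original): print the layer output shapes
# and parameter counts before training
model.summary()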
# Train the model
hist = model.fit(X_train,y_train_one_hot, validation_data=(X_test,y_test_one_hot), epochs=10)
Epoch 1/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 152s 80ms/step - accuracy: 0.8982 - loss: 0.8538 - val_accuracy: 0.9752 - val_loss: 0.0818
Epoch 2/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 201s 80ms/step - accuracy: 0.9762 - loss: 0.0775 - val_accuracy: 0.9778 - val_loss: 0.0704
Epoch 3/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 150s 80ms/step - accuracy: 0.9816 - loss: 0.0580 - val_accuracy: 0.9795 - val_loss: 0.0697
Epoch 4/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 202s 80ms/step - accuracy: 0.9855 - loss: 0.0438 - val_accuracy: 0.9820 - val_loss: 0.0593
Epoch 5/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 202s 80ms/step - accuracy: 0.9897 - loss: 0.0305 - val_accuracy: 0.9815 - val_loss: 0.0725
Epoch 6/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 202s 80ms/step - accuracy: 0.9909 - loss: 0.0287 - val_accuracy: 0.9796 - val_loss: 0.0779
Epoch 7/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 150s 80ms/step - accuracy: 0.9933 - loss: 0.0220 - val_accuracy: 0.9836 - val_loss: 0.0803
Epoch 8/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 202s 80ms/step - accuracy: 0.9941 - loss: 0.0174 - val_accuracy: 0.9820 - val_loss: 0.0896
Epoch 9/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 201s 80ms/step - accuracy: 0.9936 - loss: 0.0190 - val_accuracy: 0.9798 - val_loss: 0.0926
Epoch 10/10
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 203s 81ms/step - accuracy: 0.9953 - loss: 0.0148 - val_accuracy: 0.9827 - val_loss: 0.1042
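# Follow-up sketch (not in the original notebook): evaluate the trained model
# on the held-out test set and inspect a few predictions
test_loss, test_acc = model.evaluate(X_test, y_test_one_hot, verbose=0)
print('Test accuracy:', test_acc)
# predict() returns one probability per class; argmax picks the predicted digit
predictions = model.predict(X_test[:4])
print(np.argmax(predictions, axis=1))  # predicted digits
print(y_test[:4])                      # true digits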