Dogs vs Cats Image Classification With Image Augmentation

In this tutorial, we will discuss how to classify images as pictures of cats or pictures of dogs. We'll build an image classifier using a tf.keras.Sequential model and load data using tf.keras.preprocessing.image.ImageDataGenerator.

Specific concepts that will be covered:

In the process, we will build practical experience and develop intuition around the following concepts:

  • Building data input pipelines using the tf.keras.preprocessing.image.ImageDataGenerator class — How can we efficiently work with data on disk to interface with our model?
  • Overfitting - what it is, how to identify it, and how to prevent it.
  • Data Augmentation and Dropout - Key techniques to fight overfitting in computer vision tasks that we will incorporate into our data pipeline and image classifier model.

We will follow the general machine learning workflow:

  1. Examine and understand data
  2. Build an input pipeline
  3. Build our model
  4. Train our model
  5. Test our model
  6. Improve our model/Repeat the process

Before you begin

Before running the code in this notebook, reset the runtime by going to Runtime -> Reset all runtimes in the menu above. If you have been working through several notebooks, this will help you avoid reaching Colab's memory limits.

Importing packages

Let's start by importing required packages:

  • os — to read files and directory structure
  • numpy — for some matrix math outside of TensorFlow
  • matplotlib.pyplot — to plot the graph and display images in our training and validation data
In [1]:
import tensorflow as tf
In [2]:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
In [3]:
import os
import matplotlib.pyplot as plt
import numpy as np
In [4]:
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)

Data Loading

To build our image classifier, we begin by downloading the dataset. The dataset we are using is a filtered version of the Dogs vs. Cats dataset from Kaggle (the original dataset was provided by Microsoft Research).

In previous Colabs, we've used TensorFlow Datasets, which is a very easy and convenient way to use datasets. In this Colab, however, we will make use of the tf.keras.preprocessing.image.ImageDataGenerator class, which reads data from disk. We therefore need to directly download Dogs vs. Cats from a URL and unzip it to the Colab filesystem.

In [5]:
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
zip_dir = tf.keras.utils.get_file('cats_and_dogs_filtered.zip', origin=_URL, extract=True)
Downloading data from https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip
68608000/68606236 [==============================] - 1s 0us/step

The dataset we have downloaded has the following directory structure.

cats_and_dogs_filtered
|__ train
    |______ cats: [cat.0.jpg, cat.1.jpg, cat.2.jpg ...]
    |______ dogs: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]
|__ validation
    |______ cats: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ...]
    |______ dogs: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...]

We can list the directories with the following terminal command:

In [6]:
zip_dir_base = os.path.dirname(zip_dir)
!find $zip_dir_base -type d -print
/root/.keras/datasets
/root/.keras/datasets/cats_and_dogs_filtered
/root/.keras/datasets/cats_and_dogs_filtered/train
/root/.keras/datasets/cats_and_dogs_filtered/train/dogs
/root/.keras/datasets/cats_and_dogs_filtered/train/cats
/root/.keras/datasets/cats_and_dogs_filtered/validation
/root/.keras/datasets/cats_and_dogs_filtered/validation/dogs
/root/.keras/datasets/cats_and_dogs_filtered/validation/cats

We'll now assign variables with the proper file paths for the training and validation sets.

In [7]:
base_dir = os.path.join(os.path.dirname(zip_dir), 'cats_and_dogs_filtered')
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')

train_cats_dir = os.path.join(train_dir, 'cats')  # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')  # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')  # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')  # directory with our validation dog pictures

Understanding our data

Let's look at how many cat and dog images we have in our training and validation directories.

In [8]:
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))

num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))

total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
In [9]:
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)

print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
total training cat images: 1000
total training dog images: 1000
total validation cat images: 500
total validation dog images: 500
--
Total training images: 2000
Total validation images: 1000

Setting Model Parameters

For convenience, we'll set up variables that will be used later while pre-processing our dataset and training our network.

In [10]:
BATCH_SIZE = 100  # Number of training examples to process before updating our model's variables
IMG_SHAPE  = 150  # Our training data consists of images with a width and height of 150 pixels

After defining our generators for training and validation images, the flow_from_directory method will load images from disk, apply rescaling, and resize them to the required dimensions, all in a single line of code.

Data Augmentation

Overfitting often occurs when we have a small number of training examples. One way to fix this problem is to augment our dataset so that it has sufficient number and variety of training examples. Data augmentation takes the approach of generating more training data from existing training samples, by augmenting the samples through random transformations that yield believable-looking images. The goal is that at training time, your model will never see the exact same picture twice. This exposes the model to more aspects of the data, allowing it to generalize better.

In tf.keras we can implement this using the same ImageDataGenerator class we used before. We simply pass the transformations we want as arguments, and the class takes care of applying them to the dataset during training.

To start off, let's define a function that can display an image, so we can see the type of augmentation that has been performed. Then, we'll look at specific augmentations that we'll use during training.

In [11]:
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
    fig, axes = plt.subplots(1, 5, figsize=(20,20))
    axes = axes.flatten()
    for img, ax in zip(images_arr, axes):
        ax.imshow(img)
    plt.tight_layout()
    plt.show()

Flipping the image horizontally

We can begin by randomly applying horizontal flip augmentation to our dataset and seeing how individual images will look after the transformation. This is achieved by passing horizontal_flip=True as an argument to the ImageDataGenerator class.

In [12]:
image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)

train_data_gen = image_gen.flow_from_directory(batch_size=BATCH_SIZE,
                                               directory=train_dir,
                                               shuffle=True,
                                               target_size=(IMG_SHAPE,IMG_SHAPE))
Found 2000 images belonging to 2 classes.

To see the transformation in action, let's take one sample image from our training set and repeat it five times. The augmentation will be randomly applied (or not) to each repetition.

In [13]:
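# Note (added comment): indexing the generator, as in train_data_gen[0], returns one batch
# as an (images, labels) tuple, so [0][0][0] selects the first image of the first batch.
# The random augmentation is re-applied on every call, so repeating this five times yields
# five differently augmented views of the same source image.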
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)

Rotating the image

The rotation augmentation will randomly rotate the image by up to a specified number of degrees, in either direction. Here, we'll set it to 45.

In [14]:
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)

train_data_gen = image_gen.flow_from_directory(batch_size=BATCH_SIZE,
                                               directory=train_dir,
                                               shuffle=True,
                                               target_size=(IMG_SHAPE, IMG_SHAPE))
Found 2000 images belonging to 2 classes.

To see the transformation in action, let's once again take a sample image from our training set and repeat it. The augmentation will be randomly applied (or not) to each repetition.

In [15]:
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)

Applying Zoom

We can also apply zoom augmentation to our dataset, randomly zooming images by up to 50% (zoom_range=0.5 corresponds to zoom factors between 0.5 and 1.5).

In [16]:
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)

train_data_gen = image_gen.flow_from_directory(batch_size=BATCH_SIZE,
                                               directory=train_dir,
                                               shuffle=True,
                                               target_size=(IMG_SHAPE, IMG_SHAPE))
Found 2000 images belonging to 2 classes.

One more time, take a sample image from our training set and repeat it. The augmentation will be randomly applied (or not) to each repetition.

In [17]:
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)

Putting it all together

We can apply all these augmentations, and even others, with just one line of code, by passing the augmentations as arguments with proper values.

Here, we have applied rescale, rotation of 40 degrees, width shift, height shift, shear, zoom, and horizontal flip augmentations to our training images.

In [18]:
image_gen_train = ImageDataGenerator(
      rescale=1./255,
      rotation_range=40,
      width_shift_range=0.2,
      height_shift_range=0.2,
      shear_range=0.2,
      zoom_range=0.2,
      horizontal_flip=True,
      fill_mode='nearest')

train_data_gen = image_gen_train.flow_from_directory(batch_size=BATCH_SIZE,
                                                     directory=train_dir,
                                                     shuffle=True,
                                                     target_size=(IMG_SHAPE,IMG_SHAPE),
                                                     class_mode='binary')
Found 2000 images belonging to 2 classes.

Let's visualize what a single image would look like five different times when we pass these augmentations randomly to our dataset.

In [19]:
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)

Creating Validation Data generator

Generally, we only apply data augmentation to our training examples, since the validation images should remain representative of the unmodified data our model will be evaluated on. So, in this case we are only rescaling our validation images and converting them into batches using ImageDataGenerator.

In [20]:
image_gen_val = ImageDataGenerator(rescale=1./255)

val_data_gen = image_gen_val.flow_from_directory(batch_size=BATCH_SIZE,
                                                 directory=validation_dir,
                                                 target_size=(IMG_SHAPE, IMG_SHAPE),
                                                 class_mode='binary')
Found 1000 images belonging to 2 classes.

Model Creation

Define the model

The model consists of four convolution blocks with a max pool layer in each of them.

Before the final Dense layers, we're also applying Dropout with a probability of 0.5. This means that 50% of the values coming into the Dropout layer will be randomly set to zero during training. This helps to prevent overfitting.
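As a minimal illustration (a sketch, not part of the original notebook), the snippet below applies a standalone Dropout layer to a vector of ones. In training mode roughly half of the values are zeroed and the surviving values are scaled by 1/(1 - rate); in inference mode the layer passes values through unchanged.

import tensorflow as tf

# Dropout with rate 0.5, applied outside of any model just to observe its behavior
drop = tf.keras.layers.Dropout(0.5)
x = tf.ones((1, 8))
print(drop(x, training=True).numpy())   # roughly half zeros, survivors scaled to 2.0
print(drop(x, training=False).numpy())  # unchanged: all ones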

Then we have a fully connected layer with 512 units and a relu activation function. The final Dense layer outputs raw scores (logits) for the two classes, dogs and cats; these are converted to class probabilities by the softmax applied inside the loss function when we compile the model.

In [21]:
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),

    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),

    tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),

    tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),

    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(2)
])

Compile the model

As usual, we will use the adam optimizer. Since our labels are integer class indices and the model outputs raw logits, we'll use SparseCategoricalCrossentropy with from_logits=True as the loss function. We would also like to look at training and validation accuracy on each epoch as we train our network, so we are passing in the metrics argument.
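As a quick sanity check (a sketch, not part of the original notebook), sparse categorical crossentropy with from_logits=True takes integer labels and raw logits, and is equivalent to applying softmax and then taking the negative log-probability of the true class:

import numpy as np
import tensorflow as tf

logits = tf.constant([[2.0, 0.5]])  # raw model output for one image (2 classes)
labels = tf.constant([0])           # integer class index of the true class
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
print(float(loss_fn(labels, logits)))                # ~0.2014
print(-np.log(tf.nn.softmax(logits).numpy()[0, 0]))  # same value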

In [22]:
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

Model Summary

Let's look at all the layers of our network using the summary method.

In [23]:
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 148, 148, 32)      896       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 74, 74, 32)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 72, 72, 64)        18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 36, 36, 64)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 34, 34, 128)       73856     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 17, 17, 128)       0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 15, 15, 128)       147584    
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 7, 7, 128)         0         
_________________________________________________________________
dropout (Dropout)            (None, 7, 7, 128)         0         
_________________________________________________________________
flatten (Flatten)            (None, 6272)              0         
_________________________________________________________________
dense (Dense)                (None, 512)               3211776   
_________________________________________________________________
dense_1 (Dense)              (None, 2)                 1026      
=================================================================
Total params: 3,453,634
Trainable params: 3,453,634
Non-trainable params: 0
_________________________________________________________________

Train the model

It's time to train our network.

Since our batches are coming from a generator (ImageDataGenerator), we'll use fit_generator instead of fit.
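As a side note, in recent TensorFlow versions fit accepts generators directly and fit_generator is deprecated, so an equivalent call would look like the sketch below (it relies on the EPOCHS variable defined in the next cell); the original run uses fit_generator as shown.

history = model.fit(
    train_data_gen,
    steps_per_epoch=int(np.ceil(total_train / float(BATCH_SIZE))),
    epochs=EPOCHS,
    validation_data=val_data_gen,
    validation_steps=int(np.ceil(total_val / float(BATCH_SIZE)))
)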

In [24]:
EPOCHS = 100
history = model.fit_generator(
    train_data_gen,
    steps_per_epoch=int(np.ceil(total_train / float(BATCH_SIZE))),
    epochs=EPOCHS,
    validation_data=val_data_gen,
    validation_steps=int(np.ceil(total_val / float(BATCH_SIZE)))
)
Epoch 1/100
20/20 [==============================] - 19s 963ms/step - loss: 0.7140 - accuracy: 0.5105 - val_loss: 0.6939 - val_accuracy: 0.5000
Epoch 2/100
20/20 [==============================] - 20s 990ms/step - loss: 0.6892 - accuracy: 0.5225 - val_loss: 0.6916 - val_accuracy: 0.5000
Epoch 3/100
20/20 [==============================] - 20s 981ms/step - loss: 0.6820 - accuracy: 0.5380 - val_loss: 0.6768 - val_accuracy: 0.5610
Epoch 4/100
20/20 [==============================] - 20s 979ms/step - loss: 0.6695 - accuracy: 0.5795 - val_loss: 0.6450 - val_accuracy: 0.6260
Epoch 5/100
20/20 [==============================] - 20s 978ms/step - loss: 0.6502 - accuracy: 0.6110 - val_loss: 0.6146 - val_accuracy: 0.6510
Epoch 6/100
20/20 [==============================] - 19s 975ms/step - loss: 0.6359 - accuracy: 0.6315 - val_loss: 0.5838 - val_accuracy: 0.6730
Epoch 7/100
20/20 [==============================] - 20s 983ms/step - loss: 0.6512 - accuracy: 0.6050 - val_loss: 0.6270 - val_accuracy: 0.6430
Epoch 8/100
20/20 [==============================] - 20s 985ms/step - loss: 0.6130 - accuracy: 0.6785 - val_loss: 0.5828 - val_accuracy: 0.7010
Epoch 9/100
20/20 [==============================] - 20s 982ms/step - loss: 0.6103 - accuracy: 0.6675 - val_loss: 0.6934 - val_accuracy: 0.5920
Epoch 10/100
20/20 [==============================] - 20s 991ms/step - loss: 0.6055 - accuracy: 0.6750 - val_loss: 0.5942 - val_accuracy: 0.6730
Epoch 11/100
20/20 [==============================] - 20s 978ms/step - loss: 0.5989 - accuracy: 0.6610 - val_loss: 0.5632 - val_accuracy: 0.7050
Epoch 12/100
20/20 [==============================] - 20s 976ms/step - loss: 0.5940 - accuracy: 0.6770 - val_loss: 0.5941 - val_accuracy: 0.6730
Epoch 13/100
20/20 [==============================] - 20s 979ms/step - loss: 0.6011 - accuracy: 0.6795 - val_loss: 0.5938 - val_accuracy: 0.6850
Epoch 14/100
20/20 [==============================] - 19s 970ms/step - loss: 0.6190 - accuracy: 0.6560 - val_loss: 0.5834 - val_accuracy: 0.6940
Epoch 15/100
20/20 [==============================] - 19s 965ms/step - loss: 0.5903 - accuracy: 0.6780 - val_loss: 0.5486 - val_accuracy: 0.7080
Epoch 16/100
20/20 [==============================] - 19s 966ms/step - loss: 0.5901 - accuracy: 0.6900 - val_loss: 0.5469 - val_accuracy: 0.7140
Epoch 17/100
20/20 [==============================] - 19s 967ms/step - loss: 0.5674 - accuracy: 0.7125 - val_loss: 0.5577 - val_accuracy: 0.6870
Epoch 18/100
20/20 [==============================] - 19s 962ms/step - loss: 0.5540 - accuracy: 0.7035 - val_loss: 0.5222 - val_accuracy: 0.7310
Epoch 19/100
20/20 [==============================] - 19s 963ms/step - loss: 0.5464 - accuracy: 0.7245 - val_loss: 0.5318 - val_accuracy: 0.7280
Epoch 20/100
20/20 [==============================] - 19s 969ms/step - loss: 0.5543 - accuracy: 0.7180 - val_loss: 0.5443 - val_accuracy: 0.7060
Epoch 21/100
20/20 [==============================] - 19s 957ms/step - loss: 0.5637 - accuracy: 0.7035 - val_loss: 0.5265 - val_accuracy: 0.7330
Epoch 22/100
20/20 [==============================] - 19s 959ms/step - loss: 0.5516 - accuracy: 0.7130 - val_loss: 0.5043 - val_accuracy: 0.7490
Epoch 23/100
20/20 [==============================] - 19s 958ms/step - loss: 0.5252 - accuracy: 0.7490 - val_loss: 0.5103 - val_accuracy: 0.7270
Epoch 24/100
20/20 [==============================] - 19s 962ms/step - loss: 0.5205 - accuracy: 0.7440 - val_loss: 0.5219 - val_accuracy: 0.7260
Epoch 25/100
20/20 [==============================] - 19s 971ms/step - loss: 0.5287 - accuracy: 0.7225 - val_loss: 0.5268 - val_accuracy: 0.7220
Epoch 26/100
20/20 [==============================] - 19s 961ms/step - loss: 0.5114 - accuracy: 0.7410 - val_loss: 0.4995 - val_accuracy: 0.7600
Epoch 27/100
20/20 [==============================] - 19s 966ms/step - loss: 0.5093 - accuracy: 0.7455 - val_loss: 0.5555 - val_accuracy: 0.7260
Epoch 28/100
20/20 [==============================] - 19s 964ms/step - loss: 0.5300 - accuracy: 0.7270 - val_loss: 0.5192 - val_accuracy: 0.7240
Epoch 29/100
20/20 [==============================] - 19s 966ms/step - loss: 0.5105 - accuracy: 0.7500 - val_loss: 0.5325 - val_accuracy: 0.7330
Epoch 30/100
20/20 [==============================] - 19s 964ms/step - loss: 0.5116 - accuracy: 0.7410 - val_loss: 0.5425 - val_accuracy: 0.7170
Epoch 31/100
20/20 [==============================] - 19s 966ms/step - loss: 0.5266 - accuracy: 0.7445 - val_loss: 0.5643 - val_accuracy: 0.7250
Epoch 32/100
20/20 [==============================] - 19s 964ms/step - loss: 0.5123 - accuracy: 0.7570 - val_loss: 0.4991 - val_accuracy: 0.7540
Epoch 33/100
20/20 [==============================] - 19s 967ms/step - loss: 0.4888 - accuracy: 0.7600 - val_loss: 0.4895 - val_accuracy: 0.7680
Epoch 34/100
20/20 [==============================] - 19s 956ms/step - loss: 0.4891 - accuracy: 0.7620 - val_loss: 0.5002 - val_accuracy: 0.7530
Epoch 35/100
20/20 [==============================] - 19s 960ms/step - loss: 0.4908 - accuracy: 0.7635 - val_loss: 0.5182 - val_accuracy: 0.7370
Epoch 36/100
20/20 [==============================] - 19s 956ms/step - loss: 0.4907 - accuracy: 0.7610 - val_loss: 0.4909 - val_accuracy: 0.7580
Epoch 37/100
20/20 [==============================] - 19s 958ms/step - loss: 0.4736 - accuracy: 0.7695 - val_loss: 0.4720 - val_accuracy: 0.7750
Epoch 38/100
20/20 [==============================] - 19s 955ms/step - loss: 0.4772 - accuracy: 0.7660 - val_loss: 0.4702 - val_accuracy: 0.7670
Epoch 39/100
20/20 [==============================] - 19s 958ms/step - loss: 0.4769 - accuracy: 0.7770 - val_loss: 0.4686 - val_accuracy: 0.7640
Epoch 40/100
20/20 [==============================] - 19s 973ms/step - loss: 0.4778 - accuracy: 0.7725 - val_loss: 0.4530 - val_accuracy: 0.7770
Epoch 41/100
20/20 [==============================] - 20s 977ms/step - loss: 0.4515 - accuracy: 0.7945 - val_loss: 0.4569 - val_accuracy: 0.7760
Epoch 42/100
20/20 [==============================] - 19s 972ms/step - loss: 0.4559 - accuracy: 0.7810 - val_loss: 0.5129 - val_accuracy: 0.7510
Epoch 43/100
20/20 [==============================] - 19s 971ms/step - loss: 0.4851 - accuracy: 0.7760 - val_loss: 0.4824 - val_accuracy: 0.7550
Epoch 44/100
20/20 [==============================] - 19s 972ms/step - loss: 0.4391 - accuracy: 0.7895 - val_loss: 0.5122 - val_accuracy: 0.7430
Epoch 45/100
20/20 [==============================] - 19s 975ms/step - loss: 0.4373 - accuracy: 0.7950 - val_loss: 0.4323 - val_accuracy: 0.7910
Epoch 46/100
20/20 [==============================] - 19s 972ms/step - loss: 0.4478 - accuracy: 0.7905 - val_loss: 0.4902 - val_accuracy: 0.7680
Epoch 47/100
20/20 [==============================] - 19s 971ms/step - loss: 0.4511 - accuracy: 0.7735 - val_loss: 0.4359 - val_accuracy: 0.7980
Epoch 48/100
20/20 [==============================] - 19s 968ms/step - loss: 0.4405 - accuracy: 0.7930 - val_loss: 0.4320 - val_accuracy: 0.8030
Epoch 49/100
20/20 [==============================] - 19s 970ms/step - loss: 0.4363 - accuracy: 0.7985 - val_loss: 0.4337 - val_accuracy: 0.8110
Epoch 50/100
20/20 [==============================] - 19s 969ms/step - loss: 0.4461 - accuracy: 0.7985 - val_loss: 0.4474 - val_accuracy: 0.7850
Epoch 51/100
20/20 [==============================] - 19s 968ms/step - loss: 0.4207 - accuracy: 0.8145 - val_loss: 0.4457 - val_accuracy: 0.7830
Epoch 52/100
20/20 [==============================] - 19s 968ms/step - loss: 0.4343 - accuracy: 0.7950 - val_loss: 0.4187 - val_accuracy: 0.7950
Epoch 53/100
20/20 [==============================] - 19s 964ms/step - loss: 0.4150 - accuracy: 0.8135 - val_loss: 0.4614 - val_accuracy: 0.7880
Epoch 54/100
20/20 [==============================] - 19s 969ms/step - loss: 0.4082 - accuracy: 0.8080 - val_loss: 0.4082 - val_accuracy: 0.8100
Epoch 55/100
20/20 [==============================] - 19s 966ms/step - loss: 0.4008 - accuracy: 0.8260 - val_loss: 0.4019 - val_accuracy: 0.8240
Epoch 56/100
20/20 [==============================] - 19s 972ms/step - loss: 0.3741 - accuracy: 0.8345 - val_loss: 0.4427 - val_accuracy: 0.7860
Epoch 57/100
20/20 [==============================] - 19s 967ms/step - loss: 0.4038 - accuracy: 0.8180 - val_loss: 0.4382 - val_accuracy: 0.7960
Epoch 58/100
20/20 [==============================] - 19s 965ms/step - loss: 0.3947 - accuracy: 0.8175 - val_loss: 0.5087 - val_accuracy: 0.7740
Epoch 59/100
20/20 [==============================] - 19s 965ms/step - loss: 0.3858 - accuracy: 0.8300 - val_loss: 0.4182 - val_accuracy: 0.8130
Epoch 60/100
20/20 [==============================] - 19s 963ms/step - loss: 0.3822 - accuracy: 0.8275 - val_loss: 0.3840 - val_accuracy: 0.8230
Epoch 61/100
20/20 [==============================] - 19s 961ms/step - loss: 0.3715 - accuracy: 0.8365 - val_loss: 0.4419 - val_accuracy: 0.7950
Epoch 62/100
20/20 [==============================] - 19s 968ms/step - loss: 0.3910 - accuracy: 0.8300 - val_loss: 0.4519 - val_accuracy: 0.7890
Epoch 63/100
20/20 [==============================] - 19s 963ms/step - loss: 0.3868 - accuracy: 0.8220 - val_loss: 0.4003 - val_accuracy: 0.8160
Epoch 64/100
20/20 [==============================] - 19s 967ms/step - loss: 0.3888 - accuracy: 0.8220 - val_loss: 0.3939 - val_accuracy: 0.8200
Epoch 65/100
20/20 [==============================] - 19s 966ms/step - loss: 0.3668 - accuracy: 0.8425 - val_loss: 0.4342 - val_accuracy: 0.8050
Epoch 66/100
20/20 [==============================] - 19s 966ms/step - loss: 0.3824 - accuracy: 0.8270 - val_loss: 0.3968 - val_accuracy: 0.8210
Epoch 67/100
20/20 [==============================] - 19s 968ms/step - loss: 0.3620 - accuracy: 0.8355 - val_loss: 0.4759 - val_accuracy: 0.7950
Epoch 68/100
20/20 [==============================] - 19s 968ms/step - loss: 0.3816 - accuracy: 0.8245 - val_loss: 0.3953 - val_accuracy: 0.8160
Epoch 69/100
20/20 [==============================] - 19s 969ms/step - loss: 0.3757 - accuracy: 0.8315 - val_loss: 0.4319 - val_accuracy: 0.7950
Epoch 70/100
20/20 [==============================] - 19s 968ms/step - loss: 0.3568 - accuracy: 0.8345 - val_loss: 0.4077 - val_accuracy: 0.8200
Epoch 71/100
20/20 [==============================] - 20s 977ms/step - loss: 0.3554 - accuracy: 0.8360 - val_loss: 0.3772 - val_accuracy: 0.8330
Epoch 72/100
20/20 [==============================] - 19s 973ms/step - loss: 0.3755 - accuracy: 0.8425 - val_loss: 0.4089 - val_accuracy: 0.8210
Epoch 73/100
20/20 [==============================] - 19s 971ms/step - loss: 0.3643 - accuracy: 0.8315 - val_loss: 0.3880 - val_accuracy: 0.8200
Epoch 74/100
20/20 [==============================] - 19s 968ms/step - loss: 0.3499 - accuracy: 0.8460 - val_loss: 0.4107 - val_accuracy: 0.8260
Epoch 75/100
20/20 [==============================] - 20s 975ms/step - loss: 0.3484 - accuracy: 0.8460 - val_loss: 0.3952 - val_accuracy: 0.8270
Epoch 76/100
20/20 [==============================] - 19s 972ms/step - loss: 0.3258 - accuracy: 0.8580 - val_loss: 0.4103 - val_accuracy: 0.8180
Epoch 77/100
20/20 [==============================] - 19s 969ms/step - loss: 0.3217 - accuracy: 0.8540 - val_loss: 0.3912 - val_accuracy: 0.8450
Epoch 78/100
20/20 [==============================] - 19s 971ms/step - loss: 0.3132 - accuracy: 0.8580 - val_loss: 0.3854 - val_accuracy: 0.8420
Epoch 79/100
20/20 [==============================] - 19s 970ms/step - loss: 0.3541 - accuracy: 0.8440 - val_loss: 0.4224 - val_accuracy: 0.8120
Epoch 80/100
20/20 [==============================] - 19s 967ms/step - loss: 0.3542 - accuracy: 0.8465 - val_loss: 0.3910 - val_accuracy: 0.8280
Epoch 81/100
20/20 [==============================] - 19s 969ms/step - loss: 0.3538 - accuracy: 0.8410 - val_loss: 0.3660 - val_accuracy: 0.8400
Epoch 82/100
20/20 [==============================] - 19s 966ms/step - loss: 0.3199 - accuracy: 0.8665 - val_loss: 0.3769 - val_accuracy: 0.8320
Epoch 83/100
20/20 [==============================] - 20s 975ms/step - loss: 0.3090 - accuracy: 0.8690 - val_loss: 0.3853 - val_accuracy: 0.8450
Epoch 84/100
20/20 [==============================] - 19s 975ms/step - loss: 0.3140 - accuracy: 0.8650 - val_loss: 0.3889 - val_accuracy: 0.8290
Epoch 85/100
20/20 [==============================] - 20s 981ms/step - loss: 0.3279 - accuracy: 0.8555 - val_loss: 0.3804 - val_accuracy: 0.8310
Epoch 86/100
20/20 [==============================] - 20s 975ms/step - loss: 0.3117 - accuracy: 0.8690 - val_loss: 0.3977 - val_accuracy: 0.8410
Epoch 87/100
20/20 [==============================] - 20s 980ms/step - loss: 0.2915 - accuracy: 0.8815 - val_loss: 0.3542 - val_accuracy: 0.8440
Epoch 88/100
20/20 [==============================] - 20s 976ms/step - loss: 0.3199 - accuracy: 0.8585 - val_loss: 0.3761 - val_accuracy: 0.8380
Epoch 89/100
20/20 [==============================] - 20s 976ms/step - loss: 0.3058 - accuracy: 0.8665 - val_loss: 0.3709 - val_accuracy: 0.8480
Epoch 90/100
20/20 [==============================] - 20s 975ms/step - loss: 0.3184 - accuracy: 0.8620 - val_loss: 0.3574 - val_accuracy: 0.8550
Epoch 91/100
20/20 [==============================] - 20s 979ms/step - loss: 0.3084 - accuracy: 0.8620 - val_loss: 0.3574 - val_accuracy: 0.8410
Epoch 92/100
20/20 [==============================] - 20s 976ms/step - loss: 0.2907 - accuracy: 0.8760 - val_loss: 0.3676 - val_accuracy: 0.8390
Epoch 93/100
20/20 [==============================] - 20s 976ms/step - loss: 0.2927 - accuracy: 0.8765 - val_loss: 0.3553 - val_accuracy: 0.8340
Epoch 94/100
20/20 [==============================] - 20s 978ms/step - loss: 0.2892 - accuracy: 0.8750 - val_loss: 0.4256 - val_accuracy: 0.8250
Epoch 95/100
20/20 [==============================] - 20s 977ms/step - loss: 0.2698 - accuracy: 0.8805 - val_loss: 0.4215 - val_accuracy: 0.8390
Epoch 96/100
20/20 [==============================] - 20s 975ms/step - loss: 0.2957 - accuracy: 0.8745 - val_loss: 0.3955 - val_accuracy: 0.8290
Epoch 97/100
20/20 [==============================] - 20s 976ms/step - loss: 0.2796 - accuracy: 0.8765 - val_loss: 0.3721 - val_accuracy: 0.8480
Epoch 98/100
20/20 [==============================] - 19s 972ms/step - loss: 0.3007 - accuracy: 0.8760 - val_loss: 0.3621 - val_accuracy: 0.8400
Epoch 99/100
20/20 [==============================] - 20s 976ms/step - loss: 0.2816 - accuracy: 0.8740 - val_loss: 0.3924 - val_accuracy: 0.8240
Epoch 100/100
20/20 [==============================] - 20s 982ms/step - loss: 0.3000 - accuracy: 0.8705 - val_loss: 0.3570 - val_accuracy: 0.8390

Visualizing results of the training

We'll now visualize the results we get after training our network.

In [25]:
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(EPOCHS)

plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.savefig('./foo.png')
plt.show()