
Extracting bottleneck features using pretrained InceptionV3 - differences between Keras' implementation and native TensorFlow implementation


(Apologies for the long post)

All,

I want to use the bottleneck features from a pretrained InceptionV3 model to predict the classification of my input images. Before training a model and predicting, I tried 3 different approaches for extracting the bottleneck features.

My 3 approaches yielded different bottleneck features (they differed not just in values but even in size).

  1. Size of my bottleneck features from Approaches 1 and 2: <number of input images x 3 x 3 x 2048>. Size of my bottleneck features from Approach 3: <number of input images x 2048>.

    Why are the sizes different between the Keras-based InceptionV3 model and the native TensorFlow model? My guess is that when I say include_top=False in Keras, I'm not extracting the 'pool_3/_reshape:0' layer. Is this correct? If yes, how do I extract the 'pool_3/_reshape:0' layer in Keras? If my guess is incorrect, what am I missing? (See the sketch after this list.)

  2. I compared the bottleneck feature values from Approaches 1 and 2 and they were significantly different. I think I'm feeding them the same input images, because I resize and rescale my images before I even read them as input for my script. I passed no options to the ImageDataGenerator in Approach 1, and according to its documentation the default values leave my input images unchanged. I set shuffle to False, so I assumed predict_generator and predict read the images in the same order. What am I missing?
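
If the guess in question 1 is right, one way to get a pooled 2048-vector per image directly in Keras is sketched below. This is a minimal sketch, not from the original issue, and it assumes Keras 2.x (where InceptionV3 accepts a pooling argument):

    from keras.applications import InceptionV3
    from keras.layers import GlobalAveragePooling2D
    from keras.models import Model

    # pooling='avg' appends global average pooling after the last conv block,
    # collapsing the 3x3x2048 feature map into a single 2048-vector per image
    pooled_model = InceptionV3(include_top=False, weights='imagenet',
                               input_shape=(150, 150, 3), pooling='avg')

    # Equivalent surgery for Keras versions without the pooling argument
    base = InceptionV3(include_top=False, weights='imagenet', input_shape=(150, 150, 3))
    pooled_model_alt = Model(inputs=base.input,
                             outputs=GlobalAveragePooling2D()(base.output))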


Please note:

My input images are in RGB format (so number of channels = 3) and I resized all of them to 150x150. I used the preprocess_input function from Keras' inception_v3.py to preprocess all my images.

    def preprocess_input(image):
        image /= 255.
        image -= 0.5
        image *= 2.
        return image
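
As a quick sanity check (my addition, not part of the original post), this maps pixel values from [0, 255] to [-1, 1], and expects a float array:

    import numpy as np

    img = np.array([[0., 127.5, 255.]])   # example pixel values
    print(preprocess_input(img))          # -> [[-1.  0.  1.]]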

Approach 1: Used Keras with TensorFlow as the backend, an ImageDataGenerator to read my data, and model.predict_generator to compute bottleneck features

I followed the example (section "Using the bottleneck features of a pre-trained network: 90% accuracy in a minute") from Keras' blog. Instead of the VGG model listed there I used InceptionV3. Below is the snippet of code I used:

(Code not shown here, but what I did before the code below): read all input images, resize to 150x150x3, rescale according to the preprocess_input function mentioned above, save the resized and rescaled images.

    from keras.applications.inception_v3 import InceptionV3
    from keras.preprocessing.image import ImageDataGenerator

    train_datagen = ImageDataGenerator()
    train_generator = train_datagen.flow_from_directory(my_input_dir, target_size=(150,150), shuffle=False, batch_size=16)

    # get bottleneck features
    # use pre-trained model and exclude top layer - which is used for classification
    pretrained_model = InceptionV3(include_top=False, weights='imagenet', input_shape=(150,150,3))
    bottleneck_features_train_v1 = pretrained_model.predict_generator(train_generator, len(train_generator.filenames)//16)
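
One detail worth flagging here (my observation, not from the original post): len(train_generator.filenames)//16 is floor division, so when the image count is not a multiple of the batch size the final partial batch is never requested and the feature array comes out shorter than the input set. A ceiling division covers every image:

    import math

    # request enough steps to cover the last partial batch as well
    steps = int(math.ceil(len(train_generator.filenames) / 16.0))
    bottleneck_features_train_v1 = pretrained_model.predict_generator(train_generator, steps)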

Approach 2: Used Keras with TensorFlow as the backend, my own reader, and model.predict to compute bottleneck features

The only difference between this approach and the earlier one is that I used my own reader for the input images. (Code not shown here, but what I did before the code below): read all input images, resize to 150x150x3, rescale according to the preprocess_input function mentioned above, save the resized and rescaled images.

    from keras.applications.inception_v3 import InceptionV3

    img_width, img_height = 150, 150

    # inputImages is a numpy array of size <number of input images x 150 x 150 x 3>
    inputImages = readAllJPEGsInFolderAndMergeAsRGB(my_input_dir)

    # get bottleneck features
    # use pre-trained model and exclude top layer - which is used for classification
    pretrained_model = InceptionV3(include_top=False, weights='imagenet', input_shape=(img_width, img_height, 3))
    bottleneck_features_train_v2 = pretrained_model.predict(inputImages, batch_size=16)
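
readAllJPEGsInFolderAndMergeAsRGB is the poster's own helper and its code is not shown; a hypothetical sketch of what such a reader might look like (the name, sorting, and PIL usage are all assumptions on my part):

    import os
    import numpy as np
    from PIL import Image

    # Hypothetical reader: load every JPEG, force RGB, resize to 150x150,
    # apply preprocess_input (defined above), and stack into one array.
    # sorted() keeps the order deterministic, which matters when comparing
    # against a flow_from_directory generator with shuffle=False.
    def readAllJPEGsInFolderAndMergeAsRGB(folder):
        images = []
        for name in sorted(os.listdir(folder)):
            if name.lower().endswith(('.jpg', '.jpeg')):
                img = Image.open(os.path.join(folder, name)).convert('RGB').resize((150, 150))
                images.append(preprocess_input(np.array(img, dtype=np.float32)))
        return np.stack(images)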

Approach 3: Used TensorFlow (no Keras) to compute bottleneck features

I followed retrain.py to extract bottleneck features for my input images. Please note that the weights used by that script can be obtained from http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz

As mentioned in that example, I used bottleneck_tensor_name = 'pool_3/_reshape:0' as the layer to extract and compute bottleneck features. As in the first 2 approaches, I used resized and rescaled images as input to the script, and I called this feature list bottleneck_features_train_v3.
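
For reference, a minimal sketch of the TF 1.x pattern that retrain.py uses (this is my sketch, not the poster's code; the graph file comes from the tarball linked above, the tensor names are the ones retrain.py uses, and 'some_image.jpg' is a placeholder):

    import tensorflow as tf

    # Load the frozen Inception graph shipped in inception-2015-12-05.tgz
    with tf.gfile.FastGFile('classify_image_graph_def.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')

    with tf.Session() as sess:
        bottleneck_tensor = sess.graph.get_tensor_by_name('pool_3/_reshape:0')
        # Note: this graph decodes, resizes, and rescales the raw JPEG bytes
        # itself, so it never sees the 150x150 preprocessed images used above
        jpeg_data = tf.gfile.FastGFile('some_image.jpg', 'rb').read()
        features = sess.run(bottleneck_tensor, {'DecodeJpeg/contents:0': jpeg_data})
        print(features.shape)  # (1, 2048)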

Thank you so much


P.S. I have posted this question on Stack Overflow too. I will post an answer here if it is answered on SO.

Issue Analytics

  • State: closed
  • Created: 6 years ago
  • Comments: 7 (2 by maintainers)

Top GitHub Comments

6 reactions
brge17 commented, Nov 8, 2017

Working code is below. Step 1 is to load the Inception V3 model, step 2 is to print it and find where you want the bottleneck features from, step 3 is to perform surgery and make a new network with the same input as Inception V3 and the desired output, and finally step 4 is to predict.

Change the get_layer string as needed.

    from keras.models import Model
    from keras.applications import InceptionV3

    import numpy as np

    # Randomly generated image
    dummy_img = np.random.rand(1, 299, 299, 3)

    # Source model
    model = InceptionV3()
    model.summary()

    # Surgery
    bottleneck_model = Model(inputs=model.input, outputs=model.get_layer('avg_pool').output)
    bottleneck_model.summary()

    # Predict
    bottleneck_features = bottleneck_model.predict(dummy_img)
    print(bottleneck_features.shape)
    print(bottleneck_features)
2 reactions
brge17 commented, Nov 7, 2017

I would recommend doing something like the following (you can also index by name instead of number):

    bottleneck_model = Model(inputs=source_model.input,
                             outputs=source_model.get_layer(index=[insert number]).output)

and then simply call predict_on_batch or predict_generator.
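
For example (a hypothetical usage sketch: source_model is assumed to be a loaded InceptionV3, and batch a preprocessed image array):

    bottleneck_model = Model(inputs=source_model.input,
                             outputs=source_model.get_layer('avg_pool').output)
    features = bottleneck_model.predict_on_batch(batch)  # shape: (batch_size, 2048)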
