
Loading a trained model, popping the last two layers, and then saving it

See original GitHub issue

So I’m working with this architecture for a facial point network:

from keras.layers import Input, Conv2D, Reshape
from keras.models import Model
# spatial_softArgmax is a custom layer defined elsewhere in my project

n_kpts = 68  # number of keypoints

input_shape = (1, 100, 100)

input_1 = Input(shape=input_shape)

conv_1 = Conv2D(34, kernel_size=(9, 9),
                activation='tanh',
                input_shape=input_shape,
                padding='same',
                data_format='channels_first')(input_1)

conv_2 = Conv2D(34, kernel_size=(9, 9),
                activation='tanh',
                padding='same',
                data_format='channels_first')(conv_1)

conv_3 = Conv2D(34, kernel_size=(9, 9),
                activation='tanh',
                padding='same',
                data_format='channels_first')(conv_2)

conv_4 = Conv2D(34, kernel_size=(9, 9),
                activation='tanh',
                padding='same',
                data_format='channels_first')(conv_3)

conv_5 = Conv2D(34, kernel_size=(9, 9),
                activation='tanh',
                padding='same',
                data_format='channels_first')(conv_4)

softargmax = spatial_softArgmax(68)(conv_4)

reshape = Reshape((68, 2))(softargmax)

model = Model(inputs=input_1, outputs=reshape)

I need to get rid of the reshape and softargmax layers (spatial_softArgmax is a custom layer) and save the model as just the input plus conv_1 through conv_5, so that the output is the output of that last convolutional layer. I have a trained model saved as an .h5 file with all of these layers, but I run into trouble when trying to pop the layers and re-save. Here's the script I wrote for that:

from keras.models import load_model
# spatial_softArgmax is imported from my custom-layer module

def get_weights_without_softargmax(fname):
    model = load_model(fname, custom_objects={'spatial_softArgmax': spatial_softArgmax})

    model.summary()
    model.layers.pop()  # reshape layer
    model.layers.pop()  # spatial softargmax

    model.summary()

    model.save("no_softargmax_" + str(fname))

The first model.summary() call prints a summary of the whole network:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 1, 100, 100)       0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 34, 100, 100)      2788      
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 34, 100, 100)      93670     
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 34, 100, 100)      93670     
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 34, 100, 100)      93670     
_________________________________________________________________
spatial_soft_argmax_1 (spati (None, 68)                0         
_________________________________________________________________
reshape_1 (Reshape)          (None, 34, 2)             0         
=================================================================
Total params: 283,798
Trainable params: 283,798
Non-trainable params: 0

and the second one prints the now-popped network:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 1, 100, 100)       0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 34, 100, 100)      2788      
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 34, 100, 100)      93670     
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 34, 100, 100)      93670     
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 34, 100, 100)      93670     
=================================================================
Total params: 283,798
Trainable params: 283,798
Non-trainable params: 0

but when I try to do model.save(), I get this error:

Traceback (most recent call last):
  File "weight_chopper.py", line 18, in <module>
    get_weights_without_softargmax("34pts_94percent.h5")
  File "weight_chopper.py", line 16, in get_weights_without_softargmax
    model.save("no_softargmax_" + str(fname))
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 2553, in save
    save_model(self, filepath, overwrite, include_optimizer)
  File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 107, in save_model
    'config': model.get_config()
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 2390, in get_config
    new_node_index = node_conversion_map[node_key]
KeyError: u'reshape_1_ib-0'

Note how it's still referencing the old reshape layer. When I defined the model, I said model = Model(inputs=input_1, outputs=reshape), so does it still think that the model has that reshape output? How can I convince it otherwise? I've tried doing another Model(inputs=..., outputs=...) call, but there aren't any appropriate values to plug in for the inputs and outputs!

How can I get the model to save (preferably as a compiled model) with just the convolutional layers?

Issue Analytics

  • State: closed
  • Created: 6 years ago
  • Comments: 15 (4 by maintainers)

Top GitHub Comments

29 reactions
fchollet commented, Dec 12, 2017

there aren’t any values I can give for inputs= and outputs= because I haven’t defined the model in that script…

You can retrieve these from your model:

new_model = Model(model.inputs, model.layers[-3].output)  # assuming you want the 3rd layer from the last
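
In this model the negative index has to count back past the disconnected custom head, so selecting the layer by name may be less error-prone. A minimal sketch, assuming the auto-generated name conv2d_4 shown in the summary above is the last convolution you want to keep:

new_model = Model(model.inputs, model.get_layer('conv2d_4').output)
new_model.summary()  # should now end at conv2d_4, output shape (None, 34, 100, 100)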
25 reactions
fchollet commented, Dec 12, 2017

Wait, I got confused. I thought you were using the pop method of a Sequential model, but that’s not what you are doing. Please post your full code.

Note that pop is not possible with the functional API; it’s only implemented for Sequential. If you want to drop some layers in the functional API, you’d do:

new_model = Model(inputs=input_1, outputs=conv_5)

In your case.
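
Putting the two suggestions together, here is a minimal sketch of the chopping script, assuming the custom layer class can be imported (the module name below is hypothetical) and that conv2d_4, as shown in the summary above, is the last layer to keep:

from keras.models import Model, load_model
from my_layers import spatial_softArgmax  # hypothetical import; use wherever your custom layer lives

def save_without_softargmax(fname):
    # Load the full trained model, including the custom layer
    model = load_model(fname, custom_objects={'spatial_softArgmax': spatial_softArgmax})

    # Rebuild a functional model that stops at the last convolution;
    # it shares the already-trained weights with the loaded model
    truncated = Model(inputs=model.inputs,
                      outputs=model.get_layer('conv2d_4').output)

    truncated.summary()
    truncated.save("no_softargmax_" + fname)  # saves architecture + weights

save_without_softargmax("34pts_94percent.h5")

If you want the saved file to contain a compiled model, call truncated.compile(...) with your optimizer and loss before truncated.save(...); otherwise Keras saves the architecture and weights without optimizer state.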

Read more comments on GitHub >

Top Results From Across the Web

How to Save and Load Your Keras Deep Learning Model
In this post, you will discover how to save your Keras models to files and load them up again to make predictions. After...
Read more >
How to add and remove new layers in keras after loading ...
You can take the output of the last model and create a new model. The lower layers remains the same. model.summary() model.layers.pop() ...
Read more >
Save and Load a Model with TensorFlow's Keras API
The last saving mechanism we'll discuss only saves the weights of the model. We can do this by calling model.save_weights() and passing in...
Read more >
Saving and Loading Models - PyTorch
When saving a model for inference, it is only necessary to save the trained model's learned parameters. Saving the model's state_dict with the...
Read more >
FAQ - Keras Documentation
You can then use keras.models.load_model(filepath) to reinstantiate your model. load_model will also take care of compiling the model using the saved training ......
Read more >
