
Train CNN on the "green" channel from RGB images

See original GitHub issue

Hello everyone, I'm trying to train my model on just the G (green) channel of my images and to compare the results with training on the full RGB images.

My images are located in 2 sub-directories, so I'm using the "flow_from_directory" function to read them.

I've also defined a preprocessing function that should do this green-channel extraction for me:

    def preprocess(x): return K.expand_dims(x[1,:,:],0)

and this is my model:

def get_model(input_shape=(3,256,256), classes=2, lr=1e-4, channels=3):
    model = Sequential([
        Lambda(preprocess, input_shape=input_shape, output_shape=(channels,) + input_shape[1:]),
        BatchNormalization(axis=1),
        Convolution2D(32, 3, 3, activation='relu', border_mode='same'),
        BatchNormalization(axis=1),
        # bla bla bla, ... (remaining layers elided)
        BatchNormalization(),
        Dense(1000, activation='relu'),
        BatchNormalization(),
        Dense(classes, activation='softmax')
    ])
    model.compile(Adam(lr=lr), loss='categorical_crossentropy', metrics=['accuracy'])
    return model

def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True, batch_size=8,
                class_mode='categorical', target_size=(256,256), color='rgb'):
    return gen.flow_from_directory(dirname, target_size=target_size, class_mode=class_mode,
                                   shuffle=shuffle, batch_size=batch_size, color_mode=color)

this is my main code:

path = '/home/ubuntu/images/'
test_batches = get_batches(path+'valid', target_size=(224,224))
model = get_model(input_shape=(3,224,224),channels=1)
model.fit_generator(test_batches, samples_per_epoch=test_batches.nb_sample, nb_epoch=1)

when I run it I get the following error:

ValueError: GpuDnnConv images and kernel must have the same stack size

Apply node that caused the error: GpuDnnConv{algo='small', inplace=True}(GpuContiguous.0, GpuContiguous.0, GpuAllocEmpty.0, GpuDnnConvDesc{border_mode='half', subsample=(1, 1), conv_mode='conv', precision='float32'}.0, Constant{1.0}, Constant{0.0})
Toposort index: 458
Inputs types: [CudaNdarrayType(float32, (True, False, False, False)), CudaNdarrayType(float32, 4D), CudaNdarrayType(float32, (True, False, False, False)), <theano.gof.type.CDataType object at 0x7f0b120ab090>, Scalar(float32), Scalar(float32)]
Inputs shapes: [(1, 3, 224, 224), (32, 1, 3, 3), (1, 32, 224, 224), 'No shapes', (), ()]
Inputs strides: [(0, 50176, 224, 1), (9, 0, 3, 1), (0, 50176, 224, 1), 'No strides', (), ()]
Inputs values: ['not shown', 'not shown', 'not shown', <capsule object NULL at 0x7f0afa9d46c0>, 1.0, 0.0]
Inputs name: ('image', 'kernel', 'output', 'descriptor', 'alpha', 'beta')

Outputs clients: [[GpuElemwise{add,no_inplace}(GpuDnnConv{algo='small', inplace=True}.0, GpuDimShuffle{x,0,x,x}.0)]]

HINT: Re-running with most Theano optimizations disabled could give you a back-trace of when this node was created. This can be done by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'. HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
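The input shapes in the traceback already point at the bug: the "image" arrives as (1, 3, 224, 224), i.e. still 3 channels, while the first conv kernel is (32, 1, 3, 3), i.e. built for 1 input channel (because the model was created with channels=1). For a channels-first batch, x[1,:,:] selects the sample at batch index 1 rather than the green channel. A minimal NumPy sketch (NumPy standing in for the backend tensor; the batch size of 8 is assumed for illustration):

```python
import numpy as np

# Channels-first batch, as Theano-backed Keras sees it: (batch, channels, H, W)
x = np.zeros((8, 3, 224, 224), dtype=np.float32)

# The question's preprocess: x[1,:,:] picks the SAMPLE at batch index 1
# (shape (3, 224, 224)) -- all 3 channels survive -- and expand_dims then
# fabricates a batch axis of size 1.
out = np.expand_dims(x[1, :, :], 0)
print(out.shape)  # (1, 3, 224, 224) -- the "image" shape in the traceback

# The conv layer, however, was built for 1 input channel, so its kernel is
# (32, 1, 3, 3). 3 != 1 is exactly the "same stack size" mismatch cuDNN reports.
```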

How to fix it? Any help would be greatly appreciated.

Issue Analytics

  • State: closed
  • Created: 6 years ago
  • Comments: 6 (1 by maintainers)

Top GitHub Comments

1 reaction
Golbstein commented, Sep 15, 2017

I found the solution to my problem. The preprocessing function should be as follows:

    def preprocess(x): return K.expand_dims(x[:,1,:,:],1)
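Sketched with NumPy in place of the backend tensor (batch size 8 and 224x224 are assumed for illustration), the corrected slicing keeps the batch axis, indexes channel 1 (green), and restores the channel axis so downstream layers see a single-channel batch:

```python
import numpy as np

# Channels-first batch: (batch, channels, height, width)
x = np.zeros((8, 3, 224, 224), dtype=np.float32)

# x[:,1,:,:] keeps every sample but only channel 1 -> (8, 224, 224);
# expand_dims at axis 1 restores the channel axis -> (8, 1, 224, 224).
green = np.expand_dims(x[:, 1, :, :], 1)
print(green.shape)  # (8, 1, 224, 224)
```

This now matches the (32, 1, 3, 3) kernel that a channels=1 model builds.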

0 reactions
pauloprado commented, Jul 14, 2022

If you know the channel order and which channel you intend to use, you can slice it out with sub-array access. To avoid losing a dimension when extracting a single channel, specify a range like:

    def preprocess(x):
        # print(x) -> Tensor("IteratorGetNext:0", shape=(None, height, width, channels), dtype=float32)
        return x[:, :, :, 1:2]

In my case, since I'm using an RGB image, the "1:2" ensures I get the green channel without losing any dimension:

    print(x[:, :, :, 1:2])  # -> Tensor("sequential/lambda/strided_slice:0", shape=(None, height, width, 1), dtype=float32)
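The same idea in NumPy (batch and image sizes are made up for illustration): a plain integer index drops the channel axis, while a length-1 slice preserves it:

```python
import numpy as np

# Channels-last batch, as TF 2.x Keras delivers it: (batch, H, W, channels)
x = np.zeros((8, 224, 224, 3), dtype=np.float32)

dropped = x[:, :, :, 1]    # integer index -> channel axis gone: (8, 224, 224)
kept = x[:, :, :, 1:2]     # length-1 slice -> channel axis kept: (8, 224, 224, 1)
print(dropped.shape, kept.shape)
```

Keeping the trailing axis matters because Conv2D layers expect a 4-D input.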


Top Results From Across the Web

  • Training Keras CNN model on Green channel from RGB images: "I've been trying to solve this for about 24 hours so far but couldn't come up with something unfortunately. The question is simple:..."
  • How will channels (RGB) effect convolutional neural network?: "When RGB image is used as input to CNN, the depth of filter (or kernel) is always equal to depth of image (so..."
  • RGB images as input to CNN - Cross Validated: "No, you misunderstood. The weights (red) are used to obtain the output (green), you do not convolve a weight with another. Continuing this..."
  • Convolution Neural Network for Image Processing: "Whenever we study a digital image, it usually comes with three color channels, i.e. the Red-Green-Blue channels, popularly known as the 'RGB'..."
  • training a CNN with colors - MATLAB Answers - MathWorks: "By single colored images I'm assuming that you are referring to single channel (red or green or blue) of RGB (Truecolor) Images."
