How to Create a Multi-Input Convolutional Neural Network Model for Image Classification?
I am a beginner in deep learning and I hope you can help me solve my issue.
I want to create a CNN model that takes two images as input and produces one output: the shared class of the two images. The model takes one image from dataset Type1 and one image from dataset Type2. The two datasets contain the same classes, but each class in Type1 has more images than the corresponding class in Type2. The datasets are structured as follows.
The model should take one image from the Type1 dataset and one image from the Type2 dataset and classify the pair into a single class (ClassA, ClassB, ...).
Type1 dataset
|Train
    |ClassA
        |image1
        |image2
        |image3
        |image4
        -----
    |ClassB
        |image1
        |image2
        |image3
        |image4
        -----
    |ClassC
        |image1
        |image2
        |image3
        |image4
        -----
    |ClassD
        |image1
        |image2
        |image3
        |image4
        -----
    ----------------
|Validate
    -----------
|Test
    --------------
Type2 dataset
|Train
    |ClassA
        |image1
        |image2
        -----
    |ClassB
        |image1
        |image2
        -----
    |ClassC
        |image1
        |image2
        -----
    |ClassD
        |image1
        |image2
        -----
    ----------------
|Validate
    -----------
|Test
    --------------
So, I want to create a model that takes two images as input (one from Type1 and one from Type2), as long as they are from the same class. Also, I want each image from Type1 to be paired with every image from Type2 of the same class. How can I do this?
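For the exhaustive same-class pairing, one approach is to enumerate all pairs up front with itertools.product. A minimal sketch, assuming the directory layout shown above; pair_same_class is a hypothetical helper, not part of any library:

```python
import itertools
import os

def pair_same_class(type1_dir, type2_dir):
    """Return (path1, path2, class_name) triples pairing every Type1
    image with every Type2 image of the same class."""
    pairs = []
    for class_name in sorted(os.listdir(type1_dir)):
        dir1 = os.path.join(type1_dir, class_name)
        dir2 = os.path.join(type2_dir, class_name)
        if not (os.path.isdir(dir1) and os.path.isdir(dir2)):
            continue
        files1 = sorted(os.listdir(dir1))
        files2 = sorted(os.listdir(dir2))
        # Cartesian product: each Type1 image meets every Type2 image
        for f1, f2 in itertools.product(files1, files2):
            pairs.append((os.path.join(dir1, f1),
                          os.path.join(dir2, f2),
                          class_name))
    return pairs
```

With 4 images per class in Type1 and 2 per class in Type2, this yields 4 × 2 = 8 pairs per class, which a custom generator can then load in batches.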
The code:
# First branch (Type1 images)
in1 = Input(...)
x = Conv2D(...)(in1)
# ... more layers ...
out1 = Dense(...)(x)
# Second branch (Type2 images)
in2 = Input(...)
x = Conv2D(...)(in2)
# ... more layers ...
out2 = Dense(...)(x)
# Merge the outputs of the two branches and classify
concatenated_layer = concatenate([out1, out2])
output_layer = Dense(no_classes, activation='softmax', name='prediction')(concatenated_layer)
model = Model(inputs=[in1, in2], outputs=[output_layer])
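For reference, a runnable version of that two-branch structure might look like the following. The 224×224 input size matches the generators later in the question, but the layer depth and filter counts are illustrative assumptions, not a recommended architecture:

```python
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D,
                                     Flatten, Dense, concatenate)
from tensorflow.keras.models import Model

no_classes = 4  # ClassA .. ClassD

def branch(inp):
    # A small convolutional stack; depth and filters are placeholders
    x = Conv2D(32, (3, 3), activation='relu')(inp)
    x = MaxPooling2D()(x)
    x = Conv2D(64, (3, 3), activation='relu')(x)
    x = MaxPooling2D()(x)
    x = Flatten()(x)
    return Dense(128, activation='relu')(x)

in1 = Input(shape=(224, 224, 3))  # Type1 image
in2 = Input(shape=(224, 224, 3))  # Type2 image
merged = concatenate([branch(in1), branch(in2)])
output_layer = Dense(no_classes, activation='softmax',
                     name='prediction')(merged)
model = Model(inputs=[in1, in2], outputs=[output_layer])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```

Each branch gets its own weights here; if the two image types should share one feature extractor, build a single branch model and call it on both inputs instead.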
input_imgen = ImageDataGenerator(rescale=1./255,
                                 shear_range=0.2,
                                 zoom_range=0.2,
                                 rotation_range=5.,
                                 horizontal_flip=True)
# Validation/test images must be rescaled the same way as training images
test_imgen = ImageDataGenerator(rescale=1./255)
def generate_generator_multiple(generator, dir1, dir2, batch_size, img_height, img_width):
    # shuffle=False keeps both streams in directory order; note this only
    # yields same-class pairs if both directories hold the same number of
    # files per class, which is not the case for the datasets above
    genX1 = generator.flow_from_directory(dir1,
                                          target_size=(img_height, img_width),
                                          class_mode='categorical',
                                          batch_size=batch_size,
                                          shuffle=False,
                                          seed=7)
    genX2 = generator.flow_from_directory(dir2,
                                          target_size=(img_height, img_width),
                                          class_mode='categorical',
                                          batch_size=batch_size,
                                          shuffle=False,
                                          seed=7)
    while True:
        X1i = next(genX1)  # Python 3: use next() rather than .next()
        X2i = next(genX2)
        yield [X1i[0], X2i[0]], X2i[1]  # yield both images and their shared label
inputgenerator = generate_generator_multiple(generator=input_imgen,
                                             dir1=train_iris_data,
                                             dir2=train_face_data,
                                             batch_size=32,
                                             img_height=224,
                                             img_width=224)
testgenerator = generate_generator_multiple(generator=test_imgen,
                                            dir1=valid_iris_data,
                                            dir2=valid_face_data,
                                            batch_size=1,
                                            img_height=224,
                                            img_width=224)
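Since relying on shuffle=False and a shared seed only lines the two streams up when both directories contain the same number of files per class, it is safer with these unbalanced datasets to drive batches from an explicit same-class pair list. A sketch, where load_image is any callable turning a path into an array (for example Keras' load_img plus img_to_array), and the (path1, path2, class_name) triple format is an assumption of this sketch:

```python
import random
import numpy as np

def paired_batch_generator(pairs, class_indices, load_image,
                           batch_size=32, shuffle=True):
    """Yield ([type1_batch, type2_batch], one_hot_labels) forever.

    `pairs` is a list of (path1, path2, class_name) triples and
    `class_indices` maps class names to integer ids."""
    no_classes = len(class_indices)
    while True:
        order = list(pairs)
        if shuffle:
            random.shuffle(order)
        for start in range(0, len(order), batch_size):
            chunk = order[start:start + batch_size]
            x1 = np.stack([load_image(p1) for p1, _, _ in chunk])
            x2 = np.stack([load_image(p2) for _, p2, _ in chunk])
            y = np.zeros((len(chunk), no_classes), dtype='float32')
            for row, (_, _, cls) in enumerate(chunk):
                y[row, class_indices[cls]] = 1.0  # one-hot label
            yield [x1, x2], y
```

Because the pairing is explicit, every batch is guaranteed to contain matching classes regardless of how many images each dataset holds per class.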
The following code snippets should be helpful; try them and do not hesitate to ask if you have any questions.
This is the code of the image generator for three inputs.
This is the code of the model with three inputs:
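The original snippets were not preserved in this copy of the thread. A sketch of what the three-input combination might look like, following the same pattern as the two-input generator above but taking already-built single-input generators so the pattern is independent of the directory setup:

```python
def generate_generator_three(genX1, genX2, genX3):
    """Combine three single-input generators into one three-input stream.

    Each genXi must yield (images, labels) batches in the same class
    order, e.g. three flow_from_directory iterators sharing a seed."""
    while True:
        b1, b2, b3 = next(genX1), next(genX2), next(genX3)
        # Three image batches; the label batch is taken from the first stream
        yield [b1[0], b2[0], b3[0]], b1[1]
```

The model side extends the same way: build a third branch and pass all three inputs, e.g. Model(inputs=[in1, in2, in3], outputs=[output_layer]).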
How can I get the labels/classes from these generators, e.g. multi_train_generator.labels()? (The combined generator is a plain Python generator with no such method; the class information lives on the underlying flow_from_directory iterators, via their classes and class_indices attributes.)