
How do I add a Conv2D layer on top of a Keras model? My input shape is (299, 299, 15), but to use the pretrained ImageNet weights the input must have 3 channels, so my idea was to add a Conv2D layer that reduces the channels from 15 to 3.

from tensorflow.keras.layers import Input, Conv2D
from tensorflow.keras.models import Model
from tensorflow.keras.applications import InceptionResNetV2

image = Input(shape=(299, 299, 15))
x = Conv2D(3, kernel_size=(8, 8), strides=(2, 2), activation='relu')(image)
model1 = Model(inputs=image, outputs=x)

model2 = InceptionResNetV2(include_top=False, weights='imagenet', input_tensor=None, input_shape=(299, 299, 3))

2 Answers

This first creates a model that takes x_input of shape (299, 299, 15) and applies a convolution to reduce the channels to 3. The output of this model is then fed to the base model (InceptionResNetV2), and a few layers are added on top, such as GlobalAveragePooling2D and Dense layers. The final model is built with x_input as the input layer and a Dense layer predicting 10 classes as the output layer.

import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D

# define the input with 15 channels
x_input = tf.keras.layers.Input(shape=(299, 299, 15))
# 1x1 convolution to go from 15 channels to 3
x_conv = tf.keras.layers.Conv2D(3, 1)(x_input)
# model that performs the channel-reducing convolution
conv_model = Model(inputs=x_input, outputs=x_conv)
# store the model output, which is used below as input for the base model
conv_output = conv_model.output

# define the base model and apply it to the convolved tensor
base_model = tf.keras.applications.InceptionResNetV2(
    weights='imagenet',
    include_top=False
)(conv_output)

# add a global spatial average pooling layer
x = GlobalAveragePooling2D()(base_model)
# add a fully-connected layer
x = Dense(1024, activation='relu')(x)
# and a logistic layer -- let's say we have 10 classes
predictions = Dense(10, activation='softmax')(x)

# this is the model we will train
model = Model(inputs=x_input, outputs=predictions)

model.summary()
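
To actually train it, compile and fit the model as usual. A minimal sketch, assuming one-hot labels for the 10 classes and hypothetical arrays train_images (shape (N, 299, 299, 15)) and train_labels, which are not part of the question:

# minimal training sketch (assumed data: train_images with shape (N, 299, 299, 15),
# train_labels one-hot encoded with 10 classes)
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5, batch_size=32)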


Try:

import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D
from tensorflow.keras.models import Model

image = Input(shape=(299, 299, 15))
x = Conv2D(3, kernel_size=(8, 8), strides=(2, 2), activation='relu')(image)
model1 = Model(inputs=image, outputs=x)
x = model1.output
x = tf.keras.applications.InceptionResNetV2(include_top=False, weights='imagenet', input_tensor=None)(x)
model2 = Model(inputs=image, outputs=x)
print(model2.summary())

You might want to add the pooling='max' parameter to the InceptionResNetV2 call. That would make its output a one-dimensional vector you could feed into a Dense layer (a sketch of that variant follows the summary below). The model summary is:

Model: "functional_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, 299, 299, 15)]    0         
_________________________________________________________________
conv2d (Conv2D)              (None, 146, 146, 3)       2883      
_________________________________________________________________
inception_resnet_v2 (Functio (None, None, None, 1536)  54336736  
=================================================================
Total params: 54,339,619
Trainable params: 54,279,075
Non-trainable params: 60,544
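
Here is a minimal sketch of the pooling='max' variant with a Dense classification head on top; the 10-class output and the head itself are illustrative assumptions, not part of the original question:

import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Dense
from tensorflow.keras.models import Model

image = Input(shape=(299, 299, 15))
x = Conv2D(3, kernel_size=(8, 8), strides=(2, 2), activation='relu')(image)
# pooling='max' applies global max pooling, so the backbone outputs a flat vector of length 1536
x = tf.keras.applications.InceptionResNetV2(include_top=False, weights='imagenet', pooling='max')(x)
# classification head -- the 10 classes here are an assumption for illustration
predictions = Dense(10, activation='softmax')(x)
model = Model(inputs=image, outputs=predictions)
model.summary()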
