Try:

import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D
from tensorflow.keras.models import Model

# Map the 15-channel input down to the 3 channels InceptionResNetV2 expects
image = Input(shape=(299, 299, 15))
x = Conv2D(3, kernel_size=(8, 8), strides=(2, 2), activation='relu')(image)
x = tf.keras.applications.InceptionResNetV2(include_top=False, weights='imagenet', input_tensor=None)(x)
model2 = Model(inputs=image, outputs=x)
model2.summary()
You might also want to pass pooling='max' to InceptionResNetV2. That makes the backbone output a one-dimensional vector per sample, which you can feed directly into a Dense layer.
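A minimal sketch of that variant; weights=None here just to keep the sketch light (use weights='imagenet' for the pretrained backbone), and the 10-class Dense head is a hypothetical example:

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Dense
from tensorflow.keras.models import Model

image = Input(shape=(299, 299, 15))
x = Conv2D(3, kernel_size=(8, 8), strides=(2, 2), activation='relu')(image)
# pooling='max' applies global max pooling, so the backbone emits a flat
# (None, 1536) vector instead of a 4-D feature map
x = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights=None, pooling='max')(x)
out = Dense(10, activation='softmax')(x)  # hypothetical 10-class head
model = Model(inputs=image, outputs=out)
print(model.output_shape)  # (None, 10)
```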
The model summary is:
Model: "functional_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 299, 299, 15)]    0
_________________________________________________________________
conv2d (Conv2D)              (None, 146, 146, 3)       2883
_________________________________________________________________
inception_resnet_v2 (Functio (None, None, None, 1536)  54336736
=================================================================
Total params: 54,339,619
Trainable params: 54,279,075
Non-trainable params: 60,544