
The code is like this. The variable 'bneck' is a Keras Sequential model, and I want to get the output of one of its middle layers.

...
x = bneck(x)
x = CBNModule(960, 1, 1, activation=HSwish())(x)  # 7 * 960
s32 = CBNModule(320, 1, 1, activation=HSwish())(x)  # 7 * 960 -> 7 * 320
s32 = CBNModule(24, 1, 1, activation=HSwish())(s32)  # 7 * 320 -> 7 * 24
s16 = k.layers.Add()([
    CBNModule(24, 1, 1, activation=HSwish())(bneck.layers[12].output),
    UpModule(24, 2)(s32)
])  # (14 * 160 -> 14 * 24) + (7 * 24 -> 14 * 24)
...
return keras.Model(inputs=[...], outputs=[...])
    

When I run model.summary(), I get an error like this: ValueError: Graph disconnected: cannot obtain value for tensor KerasTensor

The error comes from bneck.layers[12].output. But when I replace x = bneck(x) with

for layer in bneck.layers:
    x = layer(x)

there is no error. Why is that? What's the difference between them?

  • You want to join a sequential model's output to a functional API model, am I right? Commented Mar 18, 2021 at 2:54
  • Yes. Apart from the final output, I also want to get the outputs of middle layers in the Sequential model. Commented Mar 18, 2021 at 7:13

2 Answers


First, you have to create a feature extractor based on your desired output layer. Your graph gets disconnected at bneck.layers[12].output because that tensor belongs to bneck's own internal graph, built on bneck's own Input: calling bneck(x) treats the whole Sequential model as a single layer, so its internal tensors are not connected to the x you are building, whereas looping over bneck.layers re-applies each layer to your x, which keeps every intermediate output in your graph. Let's say you have model A and model B, and you want the outputs of some layers (say 2 layers) from model A and want to use them in model B to complete its architecture. To do that, you first create 2 feature extractors from model A as follows:

extractor_one = Model(modelA.input, expected_layer_1.output)
extractor_two = Model(modelA.input, expected_layer_2.output)
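
Applied to the question's code, a minimal sketch could look like the following (it assumes bneck is the Sequential sub-model and x_in is the tensor currently fed into bneck(x); x_in is a hypothetical name for illustration):

# Hypothetical sketch: expose bneck's intermediate feature map via an extractor
bneck_s16 = keras.Model(bneck.input, bneck.layers[12].output)

x = bneck(x_in)               # final bneck output, as before
s16_feat = bneck_s16(x_in)    # intermediate bneck output, connected to the new graph

# then use s16_feat in place of bneck.layers[12].output inside the Add() call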

Here I will walk you through a simple code example. There may be more flexible and smarter approaches, but this is one of them. I will build a sequential model and train it on CIFAR10; next, I will build a functional model that utilizes some of the sequential model's layers (just 2 of them) and train the complete model on CIFAR100.

import tensorflow as tf 

seq_model = tf.keras.Sequential(
    [
        tf.keras.Input(shape=(32, 32, 3)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.Conv2D(128, 3, activation="relu"),
        tf.keras.layers.Conv2D(256, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(), 
        tf.keras.layers.Dense(10, activation='softmax')
     
    ]
)

seq_model.summary()

Train on the CIFAR10 dataset

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

# train set / data 
x_train = x_train.astype('float32') / 255
y_train = tf.keras.utils.to_categorical(y_train , num_classes=10)

print(x_train.shape, y_train.shape)

seq_model.compile(
          loss      = tf.keras.losses.CategoricalCrossentropy(),
          metrics   = tf.keras.metrics.CategoricalAccuracy(),
          optimizer = tf.keras.optimizers.Adam())
# fit 
seq_model.fit(x_train, y_train, batch_size=128, epochs=5, verbose = 2)

# -------------------------------------------------------------------
(50000, 32, 32, 3) (50000, 10)
Epoch 1/5
27s 66ms/step - loss: 1.2229 - categorical_accuracy: 0.5647
Epoch 2/5
26s 67ms/step - loss: 1.1389 - categorical_accuracy: 0.5950
Epoch 3/5
26s 67ms/step - loss: 1.0890 - categorical_accuracy: 0.6127
Epoch 4/5
26s 67ms/step - loss: 1.0475 - categorical_accuracy: 0.6272
Epoch 5/5
26s 67ms/step - loss: 1.0176 - categorical_accuracy: 0.6409

Now, let's say we want the outputs of the following two layers of this sequential model:

tf.keras.layers.Conv2D(64, 3, activation="relu") # (None, 26, 26, 64)   
tf.keras.layers.Conv2D(256, 3, activation="relu") # (None, 22, 22, 256) 

To get them, we first create two feature extractors from the sequential model:

last_layer_outputs = tf.keras.Model(seq_model.input, seq_model.layers[-3].output)
last_layer_outputs.summary() # (None, 22, 22, 256)  

mid_layer_outputs = tf.keras.Model(seq_model.input, seq_model.layers[2].output)
mid_layer_outputs.summary() # (None, 26, 26, 64)   
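
As a quick sanity check (not part of the original flow, just an assumed usage example), you can run a dummy batch through both extractors and confirm the shapes:

import numpy as np

# dummy batch of 4 CIFAR10-sized images
dummy = np.random.rand(4, 32, 32, 3).astype("float32")

print(last_layer_outputs(dummy).shape)  # (4, 22, 22, 256)
print(mid_layer_outputs(dummy).shape)   # (4, 26, 26, 64)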

Optionally, if we want to freeze them, we can do that now too. Freezing makes sense here because we use the same type of dataset (CIFAR10 → CIFAR100).

print('last layer output')
# just freezing first 2 layer 
for layer in last_layer_outputs.layers[:2]:
  layer.trainable = False

# checking 
for l in last_layer_outputs.layers:
    print(l.name, l.trainable)


print('\nmid layer output')
# freeze all layers
mid_layer_outputs.trainable = False

# checking 
for l in mid_layer_outputs.layers:
    print(l.name, l.trainable)

last layer output
input_11 False
conv2d_81 False
conv2d_82 False
conv2d_83 False
conv2d_84 True
conv2d_85 True

mid layer output
input_11 False
conv2d_81 False
conv2d_82 False
conv2d_83 False

Now, let's create a new model with the functional API and use the above two feature extractors.

encoder_input = tf.keras.Input(shape=(32, 32, 3), name="img")
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(encoder_input)

last_x = last_layer_outputs(encoder_input)
print(last_x.shape) # (None, 22, 22, 256)

mid_x = mid_layer_outputs(encoder_input)
mid_x = tf.keras.layers.Conv2D(32, kernel_size=3, strides=1)(mid_x)
print(mid_x.shape) # (None, 24, 24, 32)

last_x = tf.keras.layers.GlobalMaxPooling2D()(last_x)
mid_x = tf.keras.layers.GlobalMaxPooling2D()(mid_x)
print(last_x.shape, mid_x.shape) # (None, 256) (None, 32)

encoder_output = tf.keras.layers.Concatenate()([last_x, mid_x])
print(encoder_output.shape) # (None, 288)

encoder_output = tf.keras.layers.Dense(100, activation='softmax')(encoder_output)
print(encoder_output.shape) # (None, 100)

encoder = tf.keras.Model(encoder_input, encoder_output, name="encoder")
encoder.summary()

Train on the CIFAR100 dataset

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar100.load_data()

# train set / data 
x_train = x_train.astype('float32') / 255
y_train = tf.keras.utils.to_categorical(y_train , num_classes=100)

print(x_train.shape, y_train.shape)

encoder.compile(
          loss      = tf.keras.losses.CategoricalCrossentropy(),
          metrics   = tf.keras.metrics.CategoricalAccuracy(),
          optimizer = tf.keras.optimizers.Adam())
# fit 
encoder.fit(x_train, y_train, batch_size=128, epochs=5, verbose = 1)

Reference: Feature extraction with a Sequential model


Comments

I have some new questions. In this case, does it mean that the two extractors run separately? Actually, for the same input image, I would like to run the model once and extract the different features.
If you don't understand any part, please feel free to ask.
I gave another answer based on your comment; if it is what you want, please mark it as the accepted answer so that future readers can benefit.

Based on your first comment on my first post, I'm adding a new answer rather than editing the existing one, as it is already quite long. Anyway, your concern is reasonable. I was also struggling with a similar issue with the subclassing API, here. But it seems I didn't phrase my question well there, as people didn't see it as a matter of concern.

Anyway, here is a more concise and precise answer, where we build a single model with the desired outputs: a single extractor, rather than the previous two separate extractors, which brought extra computational overhead. Let's say this is our sequential model:

import tensorflow as tf 

seq_model = tf.keras.Sequential(
    [
        tf.keras.Input(shape=(32, 32, 3)),
        tf.keras.layers.Conv2D(16, 3, activation="relu", name='conv1'),
        tf.keras.layers.Conv2D(32, 3, activation="relu", name='conv2'),
        tf.keras.layers.Conv2D(64, 3, activation="relu", name='conv3'),
        tf.keras.layers.Conv2D(128, 3, activation="relu", name='conv4'),
        tf.keras.layers.Conv2D(256, 3, activation="relu", name='conv5'),
        tf.keras.layers.GlobalAveragePooling2D(), 
        tf.keras.layers.Dense(10, activation='softmax')
     
    ]
)

for l in seq_model.layers:
    print(l.name, l.output_shape)

conv1 (None, 30, 30, 16)
conv2 (None, 28, 28, 32)
conv3 (None, 26, 26, 64)
conv4 (None, 24, 24, 128)
conv5 (None, 22, 22, 256)
global_average_pooling2d_3 (None, 256)
dense_3 (None, 10)

Now we want conv3 and conv5 from this single model. We can do that easily as follows:

check_model = tf.keras.models.Model(
        inputs=[seq_model.input], 
        outputs=[seq_model.get_layer('conv3').output,
                 seq_model.get_layer('conv5').output]
    )

# check 
for i in check_model(tf.keras.Input((32, 32, 3))):
    print(i.name, i.shape)

model_13/conv3/Relu:0 (None, 26, 26, 64)
model_13/conv5/Relu:0 (None, 22, 22, 256)
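
For eager inference, a multi-output model returns one array per output; a quick (assumed) usage sketch:

import numpy as np

# predict on a dummy batch; the model returns a list with one array per output
conv3_feat, conv5_feat = check_model.predict(np.random.rand(2, 32, 32, 3).astype("float32"))
print(conv3_feat.shape)  # (2, 26, 26, 64)
print(conv5_feat.shape)  # (2, 22, 22, 256)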

Nice, two feature outputs from the expected layers. Now, let's use these two outputs (as in my first post) to build a functional API model.

encoder_input = tf.keras.Input(shape=(32, 32, 3), name="img")
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(encoder_input)

last_x, mid_x = check_model(encoder_input)  # a single call yields both outputs
print(last_x.shape) # (None, 26, 26, 64) - model_13/conv3/Relu:0

# mid_x: model_13/conv5/Relu:0 (None, 22, 22, 256)
mid_x = tf.keras.layers.Conv2D(32, kernel_size=3, strides=1)(mid_x)
print(mid_x.shape) # (None, 20, 20, 32)

last_x = tf.keras.layers.GlobalMaxPooling2D()(last_x)
mid_x = tf.keras.layers.GlobalMaxPooling2D()(mid_x)
print(last_x.shape, mid_x.shape) # (None, 64) (None, 32)

encoder_output = tf.keras.layers.Concatenate()([last_x, mid_x])
print(encoder_output.shape) # (None, 96)

encoder_output = tf.keras.layers.Dense(100, activation='softmax')(encoder_output)
print(encoder_output.shape) # (None, 100)

encoder = tf.keras.Model(encoder_input, encoder_output, name="encoder")

tf.keras.utils.plot_model(
    encoder,
    show_shapes=True,
    show_layer_names=True
)
(None, 26, 26, 64)
(None, 20, 20, 32)
(None, 64) (None, 32)
(None, 96)
(None, 100)


