Is it possible, using TensorFlow, to create a customized network structure in which the input layer has an extra connection to a hidden layer that is not adjacent to it? As an example, suppose I have the simple network below.

import numpy as np
import random
import tensorflow as tf
from tensorflow import keras 

m = 200
n = 5
my_input = np.random.random([m, n])
my_output = np.random.random([m, 1])

my_model = tf.keras.Sequential([  
    tf.keras.layers.Flatten(input_shape=(my_input.shape[1],)),
    tf.keras.layers.Dense(32, activation='softmax'),
    tf.keras.layers.Dense(32, activation='tanh'),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(1) 
])   
                    

my_model.compile(loss='mse', optimizer=tf.keras.optimizers.Adam(learning_rate=0.001))
res = my_model.fit(my_input, my_output, epochs=50, batch_size=1, verbose=0)

Is there a way for the input layer to have an extra connection to the third Dense layer, the one with the ReLU activation? While doing so, I'd like to apply different constraints to each connection: for the connection coming from the previous layer, I'd like to use GlorotNormal weight initialization, and for the extra connection coming from the input layer, HeUniform initialization.

I tried to visualize what I have in mind below.

[Image: diagram of the desired network, with a skip connection from the input layer to the ReLU layer]

1 Answer

Use the Keras functional API and tf.concat:

import numpy as np
import random
import tensorflow as tf
from tensorflow import keras 

m = 200
n = 5
my_input = np.random.random([m, n])
my_output = np.random.random([m, 1])

inputs = tf.keras.layers.Input((my_input.shape[1],))  
x = tf.keras.layers.Flatten()(inputs)
x = tf.keras.layers.Dense(32, activation='softmax')(x)
x = tf.keras.layers.Dense(32, activation='tanh', kernel_initializer=tf.keras.initializers.GlorotNormal())(x)
y = tf.keras.layers.Dense(my_input.shape[1], kernel_initializer=tf.keras.initializers.HeUniform())(inputs)
x = tf.keras.layers.Dense(32, activation='relu')(tf.concat([x, y], axis=1))
outputs = tf.keras.layers.Dense(1)(x)

my_model = tf.keras.Model(inputs, outputs)

dot_img_file = 'model_1.png'
tf.keras.utils.plot_model(my_model, to_file=dot_img_file, show_shapes=True)
my_model.compile(loss='mse', optimizer=tf.keras.optimizers.Adam(learning_rate=0.001))
res = my_model.fit(my_input, my_output, epochs=50, batch_size=1, verbose=0)
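Note that after the concatenation the ReLU layer holds a single kernel spanning both branches, so its own initializer cannot differ per connection; the code above works around that by putting HeUniform on the extra Dense projection of the inputs. If you want each incoming connection to carry its own separately-initialized kernel inside one layer, a custom layer is another option. Below is a minimal sketch (the class name `DualInputDense` is made up for illustration, and the GlorotNormal/HeUniform placement follows the question's request):

```python
import tensorflow as tf

class DualInputDense(tf.keras.layers.Layer):
    """Dense layer over two inputs, each with its own kernel/initializer."""

    def __init__(self, units, activation=None, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.activation = tf.keras.activations.get(activation)

    def build(self, input_shape):
        # input_shape is a list: [previous-layer shape, skip-input shape]
        prev_dim, skip_dim = input_shape[0][-1], input_shape[1][-1]
        # Kernel for the connection from the previous layer (GlorotNormal)
        self.w_prev = self.add_weight(
            shape=(prev_dim, self.units),
            initializer=tf.keras.initializers.GlorotNormal(),
            trainable=True, name='w_prev')
        # Kernel for the extra connection from the input layer (HeUniform)
        self.w_skip = self.add_weight(
            shape=(skip_dim, self.units),
            initializer=tf.keras.initializers.HeUniform(),
            trainable=True, name='w_skip')
        self.b = self.add_weight(
            shape=(self.units,), initializer='zeros',
            trainable=True, name='b')

    def call(self, inputs):
        prev, skip = inputs
        z = tf.matmul(prev, self.w_prev) + tf.matmul(skip, self.w_skip) + self.b
        return self.activation(z)

inputs = tf.keras.layers.Input((5,))
x = tf.keras.layers.Dense(32, activation='softmax')(inputs)
x = tf.keras.layers.Dense(32, activation='tanh')(x)
# ReLU layer receives both the previous layer and the raw inputs
x = DualInputDense(32, activation='relu')([x, inputs])
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.compile(loss='mse', optimizer='adam')
```

Because the two kernels are separate variables, you can also attach different constraints or regularizers to each via the corresponding `add_weight` arguments.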

[Image: plot_model output (model_1.png) showing the extra connection from the input into the ReLU layer]

10 Comments

Thanks! I will give it a try. May I ask where you found this info? I went through the website and could not find any information.
Thanks for the image. Looking at it now, how am I going to constrain the weights in those two connections differently? My intention is to use different weight structures for x and inputs.
Oh okay! I thought you were reading a png file and calling it thru tensorflow.
Yeah you can define a new layer. You probably need to install the Graphviz library to plot your model.