
I created a custom Keras layer to manually change the activations of the previous layer during inference. The following is a basic layer that simply multiplies the activations by a number.

import numpy as np
from keras import backend as K
from keras.layers import Layer
import tensorflow as tf

class myLayer(Layer):

    def __init__(self, n=None, **kwargs):
        self.n = n
        super(myLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        self.output_dim = input_shape[1]
        super(myLayer, self).build(input_shape)

    def call(self, inputs):
        # scale the incoming activations by n
        changed = tf.multiply(inputs, self.n)

        forTest = changed    # scaled activations at inference time
        forTrain = inputs    # activations pass through unchanged while training

        return K.in_train_phase(forTrain, forTest)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)

It works fine when I use it like this with the IRIS dataset:

from keras.models import Sequential
from keras.layers import Dense, Activation

units = 16  # hidden layer width (any value)

model = Sequential()
model.add(Dense(units, input_shape=(5,)))
model.add(Activation('relu'))
model.add(myLayer(n=3))
model.add(Dense(units))
model.add(Activation('relu'))
model.add(Dense(3))
model.add(Activation('softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
model.summary()
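A minimal sketch of training and evaluating such a model (assuming X_train/Y_train and X_test/Y_test arrays already exist, with 5 input columns and one-hot labels to match the model above):

model.fit(X_train, Y_train, epochs=50, batch_size=16, verbose=0)
loss, acc = model.evaluate(X_test, Y_test, verbose=0)
print('test accuracy: %.4f' % acc)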

However, now I want to move 'n' from __init__ to the call function, so I can apply different values of n after training to evaluate the model. The idea is to have a placeholder in place of n, which can be initialized with some value before calling evaluate. I am not sure how to achieve this. What would be the correct approach for this? Thanks

1 Answer


You should work the same way the Concatenate layer does.

Layers that take multiple inputs rely on the inputs (and the input shapes) being passed in a list.

See the verification part in build, call and compute_output_shape:

def call(self, inputs):
    if not isinstance(inputs, list):
        raise ValueError('This layer should be called on a list of inputs.')

    mainInput = inputs[0]
    nInput = inputs[1]

    changed = tf.multiply(mainInput, nInput)
    # I suggest using an equivalent function from K instead of tf here,
    # in case you ever want to try Theano or another backend later.
    # If n is a scalar, then just "changed = nInput * mainInput" is ok.

    # ....the rest of the code....
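For reference, the whole layer could look like this (one possible way to fill in the rest; it assumes n arrives with shape (samples, 1), so plain broadcasting works):

from keras import backend as K
from keras.layers import Layer

class myLayer(Layer):

    def build(self, input_shape):
        if not isinstance(input_shape, list):
            raise ValueError('This layer should be called on a list of inputs.')
        # input_shape[0] is the shape of the main input
        self.output_dim = input_shape[0][1]
        super(myLayer, self).build(input_shape)

    def call(self, inputs):
        if not isinstance(inputs, list):
            raise ValueError('This layer should be called on a list of inputs.')
        mainInput = inputs[0]
        nInput = inputs[1]

        # (samples, units) * (samples, 1) broadcasts over the feature axis
        changed = mainInput * nInput

        # keep the original behavior: untouched in training, scaled at test time
        return K.in_train_phase(mainInput, changed)

    def compute_output_shape(self, input_shape):
        if not isinstance(input_shape, list):
            raise ValueError('This layer should be called on a list of inputs.')
        return (input_shape[0][0], self.output_dim)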

Then you call this layer by passing it a list. But for that, I strongly recommend you move away from Sequential models; they're pure limitation.

import numpy as np
from keras.models import Model
from keras.layers import Input, Dense

inputTensor = Input((5,))  # the original input (from your input_shape)

# this is just a suggestion, to have n as a manually created var,
# but you can figure out your own ways of calculating n later
nInput = Input((1,))
    # old answer: nInput = Input(tensor=K.variable([n]))

# creating the graph
out = Dense(units, activation='relu')(inputTensor)

# your layer here uses the output of the dense layer and the nInput
out = myLayer()([out, nInput])
    # here you will have to handle n with the same number of samples as x;
    # you can use `inputs[1][0,0]` inside the layer

out = Dense(units, activation='relu')(out)
out = Dense(3, activation='softmax')(out)

# create the model with two inputs and one output:
model = Model([inputTensor, nInput], out)
    # nInput is now a part of the model's inputs

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])

With the old answer, using Input(tensor=...), the model will not demand (as it usually would) that you pass two inputs to the fit and predict methods.
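In case it helps, a rough sketch of that older variant (assuming the model is built exactly as above; K.set_value lets you change n later without retraining):

from keras import backend as K
from keras.layers import Input

nValue = 3.0
nVar = K.variable([nValue])     # backend variable holding n
nInput = Input(tensor=nVar)     # tensor inputs are not fed at fit/predict time

# ...build and compile the model exactly as above...
# model = Model([inputTensor, nInput], out)

# only the main data is passed, since nInput is already a tensor:
# model.fit(X_train, Y_train, ...)

# n can then be changed before evaluating, without retraining:
# K.set_value(nVar, [5.0])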

But using the new option, with Input(shape=...), it will demand two inputs, so:

nArray = np.full((X_train.shape[0], 1), n)
model.fit([X_train, nArray], Y_train, ....)
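This also makes it easy to evaluate the trained model under several values of n, which was the original goal (a sketch, assuming X_test/Y_test exist):

import numpy as np

for n in [1, 2, 3, 5]:
    nArray = np.full((X_test.shape[0], 1), n)
    loss, acc = model.evaluate([X_test, nArray], Y_test, verbose=0)
    print('n = %d: loss = %.4f, acc = %.4f' % (n, loss, acc))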

Unfortunately, I couldn't make it work with n having only one element. It must have exactly the same number of samples as the main input (this is a Keras limitation).


11 Comments

Thanks Daniel. That is exactly what I was looking for. However, I tried your code and it is giving me an error. Do you know why I am getting this?
An issue here says: this is a symptom of one of two things: your inputs don't come from keras.layers.Input(), or your outputs aren't the output of a Keras layer. Make sure you are only passing to Model (1) inputs generated via Input and (2) outputs generated by a Keras layer, with no further ops applied to them.
Ok, sorry for that. Now it's working. Notice that nInput is now an Input, and that the model must be created taking it into account, in Model([inputTensor, nInput], out).
I've tried many things, and the only one that actually worked was using Input((1,)) for nInput, and then training with n passed as n=np.full((X_train.shape[0],1), nValue). See the updated answer.
Aah! Found an answer, using K.variable instead of K.placeholder: stackoverflow.com/questions/46234722/…
