I created a custom Keras layer whose purpose is to manually change the activations of the previous layer during inference. Below is a basic layer that simply multiplies the activations by a number.
import numpy as np
from keras import backend as K
from keras.layers import Layer
import tensorflow as tf

class myLayer(Layer):
    def __init__(self, n=None, **kwargs):
        self.n = n
        super(myLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        self.output_dim = input_shape[1]
        super(myLayer, self).build(input_shape)

    def call(self, inputs):
        # Scale the activations by n at test time only;
        # during training the inputs pass through unchanged.
        changed = tf.multiply(inputs, self.n)
        forTest = changed
        forTrain = inputs
        return K.in_train_phase(forTrain, forTest)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)
It works fine when I use it like this with the IRIS dataset:
from keras.models import Sequential
from keras.layers import Dense, Activation

units = 10  # hidden layer width; the exact value is not important here

model = Sequential()
model.add(Dense(units, input_shape=(5,)))
model.add(Activation('relu'))
model.add(myLayer(n=3))
model.add(Dense(units))
model.add(Activation('relu'))
model.add(Dense(3))
model.add(Activation('softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
model.summary()
However, now I want to move 'n' from __init__ to the call function, so that I can apply different values of n after training when evaluating the model. The idea is to have a placeholder in place of n which can be initialized with some value before calling evaluate. I am not sure how to achieve this. What would be the correct approach? Thanks
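For illustration, this is roughly the behaviour I am after. The sketch below is only my own guess, using a backend variable that K.set_value could update between evaluate calls; the variable approach, the initial value of 1.0, and the layer index 2 are all my assumptions, not something I know to be correct:

class myLayer(Layer):
    def __init__(self, **kwargs):
        # n is now a backend variable instead of a fixed Python number;
        # it starts at 1.0 so the layer is a no-op until changed (assumption)
        self.n = K.variable(1.0)
        super(myLayer, self).__init__(**kwargs)

    def call(self, inputs):
        changed = tf.multiply(inputs, self.n)
        # unchanged in the train phase, scaled in the test phase
        return K.in_train_phase(inputs, changed)

# after training, before each evaluation
# (myLayer is model.layers[2] in the model above):
# K.set_value(model.layers[2].n, 3.0)
# score = model.evaluate(x_test, y_test)
# K.set_value(model.layers[2].n, 5.0)
# score = model.evaluate(x_test, y_test)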