I have implemented the custom layer below, which resizes a learnable parameter, seed_vectors, at call time to match the batch size of the input z, using the function repeat.
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow import repeat
from tensorflow.keras.layers import LayerNormalization


class PoolingMultiHeadAttention(tf.keras.layers.Layer):

    def __init__(self, d, k, h):
        """
        Arguments:
            d: an integer, input dimension.
            k: an integer, number of seed vectors.
            h: an integer, number of heads.
        """
        super(PoolingMultiHeadAttention, self).__init__()
        self.seed_vectors = self.add_weight(initializer='uniform',
                                            shape=(1, k, d),
                                            trainable=True)

    def call(self, z):
        """
        Arguments:
            z: a float tensor with shape [b, n, d].
        Returns:
            a float tensor with shape [b, k, d].
        """
        b = z.shape[0]
        s = self.seed_vectors
        s = repeat(s, (b), axis=0, name='rep')  # shape [b, k, d]
        return s * z


# Dimensionality test
z = tf.random.normal(shape=(10, 2, 9))
pma = PoolingMultiHeadAttention(d=9, k=2, h=3)
pma(z)
I have tested the input/output dimensionality in unit tests and it works fine, but unfortunately, when I use this layer inside a Model, it fails with the following error:
<ipython-input-4-89023d123369>:110 call *
s = repeat(s, (b), axis=0, name='rep') # shape [b, k, d]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py:5616 repeat **
return repeat_with_axis(input, repeats, axis, name)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py:5478 repeat_with_axis
repeats = convert_to_int_tensor(repeats, name="repeats")
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py:5388 convert_to_int_tensor
tensor = ops.convert_to_tensor(tensor, name=name, preferred_dtype=dtype)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py:1341 convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py:317 _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py:258 constant
allow_broadcast=True)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py:296 _constant_impl
allow_broadcast=allow_broadcast))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_util.py:439 make_tensor_proto
raise ValueError("None values not supported.")
ValueError: None values not supported.
This error seems to be related either to a missing output (or an output that is None), which I know is not the case because I have tested the function in eager mode and it works, or to backprop not working with this op (repeat) for some reason.
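For reference, this is roughly the eager-mode check I mean; a minimal sketch where the zero tensor and the concrete batch size 10 are just stand-ins for illustration:

import tensorflow as tf

# Stand-in for seed_vectors with shape [1, k, d].
s = tf.zeros((1, 2, 9))
b = 10  # a concrete batch size, known in eager mode
out = tf.repeat(s, b, axis=0)  # shape [10, 2, 9]
print(out.shape)

So repeat itself produces the expected shape when it is given a plain Python integer as the repeat count.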
I do not know of any alternative way to resize that parameter at runtime, and (almost) the same code works fine in PyTorch (https://github.com/TropComplique/set-transformer/blob/master/blocks.py).
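For comparison, the PyTorch pattern I mean looks roughly like this; it mirrors my layer above and is paraphrased from memory of the linked blocks.py, so the names and details are approximate rather than a verbatim quote:

import torch
import torch.nn as nn

class PMASketch(nn.Module):
    def __init__(self, d, k):
        super().__init__()
        # Learnable seeds with shape [1, k, d].
        self.seed_vectors = nn.Parameter(torch.randn(1, k, d))

    def forward(self, z):
        b = z.size(0)                          # batch size is a plain int at runtime
        s = self.seed_vectors.repeat(b, 1, 1)  # shape [b, k, d]
        return s * z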
Thanks