I would like to train a CNN using a 2D numpy array as input, but I am receiving this error: ValueError: Error when checking input: expected conv2d_input to have 4 dimensions, but got array with shape (21, 21).

My input is indeed a 21x21 numpy array of floats. The first layer of the network is defined as Conv2D(32, (3, 3), input_shape=(21, 21, 1)) to match the shape of the input array.

I have found some similar questions, but none pertaining to a 2D input array; they mostly deal with images. According to the documentation, Conv2D expects a 4D input tensor of shape (samples, channels, rows, cols), but I cannot find any documentation explaining the meaning of these values. Similar questions about image inputs suggest reshaping the input array using np.ndarray.reshape(), but when I try that I receive an input error.

How can I train a CNN on such an input array? Should input_shape be a different size tuple?

1 Answer

Your current numpy array has dimensions (21, 21). However, TensorFlow expects input tensors in the format (batch_size, height, width, channels), or BHWC, meaning you need to convert your numpy input array from 2 dimensions to 4. One way to do so is as follows:

input = np.expand_dims(input, axis=0)   # add batch dimension: (21, 21) -> (1, 21, 21)
input = np.expand_dims(input, axis=-1)  # add channel dimension: (1, 21, 21) -> (1, 21, 21, 1)

Now, the numpy input array has dimensions: (1, 21, 21, 1) which can be passed to a TF Conv2D operation.
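Equivalently, the two expand_dims calls can be collapsed into a single reshape. A minimal sketch, using a random array as a stand-in for your data:

```python
import numpy as np

# stand-in for your 21x21 array of floats
x = np.random.rand(21, 21).astype("float32")

# one-step reshape to the 4D (batch, height, width, channels) layout
x = x.reshape(1, 21, 21, 1)

print(x.shape)  # (1, 21, 21, 1)
```

Either way, the underlying data is unchanged; only the shape metadata differs.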

Hope this helps! :)


1 Comment

Yes, thanks! I looked at so many threads, but this one works for me. My special case is that I create a synthetic image with the values 0 or 255, starting from np.zeros((width, height)).
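For a case like the commenter's, the same reshape applies to a synthetic binary image. A hypothetical sketch (the square drawn into the array is just an illustration):

```python
import numpy as np

width, height = 21, 21

# synthetic image with values 0 or 255, starting from np.zeros
img = np.zeros((height, width), dtype=np.float32)
img[5:15, 5:15] = 255.0  # draw a filled square

# reshape to the 4D (batch, height, width, channels) layout Conv2D expects
img = img.reshape(1, height, width, 1)

print(img.shape)  # (1, 21, 21, 1)
```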
