
I have an autoencoder written in Keras, as shown below. However, I'm getting the following error:

ValueError: Error when checking model input: the list of Numpy arrays
that you are passing to your model is not the size the model expected. 
Expected to see 1 arrays but instead got the following list of 374 arrays

where 374 is the number of my training images.

How can I train an autoencoder on my data in this case?

from keras.layers import Input, Dense
from keras.models import Model
import cv2
import os

training_directory = '/training'
testing_directory ='/validation'
results_directory = '/results'
training_images = []
validation_images = []

# the size of the encoded representation
encoding_dimension = 4096
# input placeholder
input_image = Input(shape=(262144,))
# the encoded representation of the input
encoded = Dense(encoding_dimension,activation='relu')(input_image)
# reconstruction of the input (lossy)
decoded = Dense(262144,activation='sigmoid')(encoded)
# map the input image to its reconstruction
autoencoder = Model(input_image,decoded)

# encoder model
# map an input image to its encoded representation
encoder = Model(input_image,encoded)

# decoder model

# placeholder for an encoded input
encoded_input = Input(shape=(encoding_dimension,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input,decoder_layer(encoded_input))

for root, dirs, files in os.walk(training_directory):
    for file in files:
        image = cv2.imread(os.path.join(root, file))
        training_images.append(image)

for root, dirs, files in os.walk(testing_directory):
    for file in files:
        image = cv2.imread(os.path.join(root, file))
        validation_images.append(image)

autoencoder.compile(optimizer='adam',loss='binary_crossentropy')

autoencoder.fit(training_images,epochs=10,batch_size=20,shuffle=True,validation_data=validation_images)

encoded_images = encoder.predict(validation_images)
decoded_images = decoder.predict(encoded_images)

Thanks.

EDIT

I added the following instead of the for-loops:

from keras.preprocessing.image import ImageDataGenerator

training_generator = ImageDataGenerator()
validation_generator = ImageDataGenerator()
training_images = training_generator.flow_from_directory(training_directory, class_mode='input')
validation_images = validation_generator.flow_from_directory(testing_directory, class_mode='input')

But I got the following:

TypeError: Error when checking model input: data should be a Numpy
array, or list/dict of Numpy arrays. Found
<keras.preprocessing.image.DirectoryIterator object at 0x2aff3a806650>...

which occurred on this statement:

autoencoder.fit(
    training_images, 
    epochs=10, 
    batch_size=20, 
    shuffle=True,
    validation_data=validation_images)

Any ideas?

  • I believe the problem lies in the shape of the training_images list; this link may help. Commented Jan 23, 2019 at 18:00
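For reference, the direct fix the comment hints at is to stack the Python list into a single NumPy array and to pass the input as the target as well, since an autoencoder is trained to reproduce its input. A minimal sketch, assuming all images are read as single-channel and share the same size, so that each one flattens to 262144 values:

import numpy as np

# stack the list of images into one (num_images, 262144) array
x_train = np.asarray(training_images, dtype='float32') / 255.0
x_train = x_train.reshape(len(x_train), -1)
x_val = np.asarray(validation_images, dtype='float32') / 255.0
x_val = x_val.reshape(len(x_val), -1)

# the input doubles as the target
autoencoder.fit(x_train, x_train,
                epochs=10, batch_size=20, shuffle=True,
                validation_data=(x_val, x_val))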

1 Answer


Although your immediate problem is a shape issue, I would recommend using Keras's image preprocessing features, in particular the ImageDataGenerator class:

keras.preprocessing.image.ImageDataGenerator: Generate batches of tensor image data with real-time data augmentation. The data will be looped over (in batches).

It will give you access to transformations, data augmentation, and other useful features for working with your data. For the autoencoder part you need:

img_gen.flow_from_directory(training_directory, ..., class_mode='input')

which will take images from the directory and return them as input-output pairs after applying any desired transformations. The documentation describes these transformations and what they allow you to do; a sketch of how everything fits together follows.
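Putting it together with the model from the question: a minimal sketch, assuming 512x512 grayscale images (512 * 512 = 262144, the model's input size) and an older Keras whose fit() does not accept generators, which is why fit_generator() is used. The flatten_batches helper is a hypothetical wrapper added here because the Dense model expects flat vectors while flow_from_directory yields 4-D image batches:

from keras.preprocessing.image import ImageDataGenerator

training_generator = ImageDataGenerator(rescale=1.0 / 255)
validation_generator = ImageDataGenerator(rescale=1.0 / 255)

training_iterator = training_generator.flow_from_directory(
    training_directory,
    target_size=(512, 512),
    color_mode='grayscale',
    class_mode='input',
    batch_size=20)
validation_iterator = validation_generator.flow_from_directory(
    testing_directory,
    target_size=(512, 512),
    color_mode='grayscale',
    class_mode='input',
    batch_size=20)

# flatten each (batch, 512, 512, 1) batch to (batch, 262144) for the Dense model
def flatten_batches(iterator):
    for x, y in iterator:
        yield x.reshape(len(x), -1), y.reshape(len(y), -1)

autoencoder.fit_generator(
    flatten_batches(training_iterator),
    steps_per_epoch=len(training_iterator),
    epochs=10,
    validation_data=flatten_batches(validation_iterator),
    validation_steps=len(validation_iterator))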


2 Comments

  • Thanks for your kind reply. I made an edit as you mentioned, but got an error. Please see the EDIT in my question.
