I want to understand how to generate a high-resolution image from a low-resolution one using convolutional neural networks.

Is it necessary for the network's input to be the smaller image, with the output being an image twice its size?

I made the following model:

from tensorflow.keras.layers import Input, UpSampling2D, Dense
from tensorflow.keras.models import Model

w, h, c = x_train[0].shape  # width, height, channels of a low-resolution training image

inp = Input(shape=(w, h, c), name='LR')
x = UpSampling2D(size=(2, 2), name='UP')(inp)           # double the spatial dimensions
x = Dense(720, activation='relu', name='hide')(x)       # Dense acts on the last (channel) axis
x = Dense(1280, activation='relu', name='hide2')(x)
output = Dense(3, activation='relu', name='output')(x)  # back to 3 channels

model = Model(inputs=inp, outputs=output)
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=50, verbose=0)

y_train contains images twice the size of those in x_train.

But I get the following error message:

ResourceExhaustedError: OOM when allocating tensor with shape[4608000,720] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
     [[{{node hide/MatMul}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info

What am I doing wrong?

  • You might want to look into the concept of autoencoders. Commented Mar 5, 2019 at 12:26
  • I did model.fit(x_train, y_train, batch_size=1024, epochs=50, verbose=0) and the result is "exceeded 10% of system memory". Commented Mar 5, 2019 at 12:39
  • By the way, your model is not a CNN (see the convolutional sketch after these comments). Commented Mar 5, 2019 at 12:41
  • @MatiasValdenegro What is the correct form in this case? Commented Mar 5, 2019 at 12:43
  • That's a different question; as @desertnaut mentions, you should ask only one question at a time. Commented Mar 5, 2019 at 12:51
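For reference, a minimal sketch of what a genuinely convolutional version of the model could look like. This is an illustrative assumption, not the commenters' exact suggestion: the layer widths are arbitrary, and the sigmoid output assumes pixel values scaled to [0, 1].

from tensorflow.keras.layers import Input, Conv2D, UpSampling2D
from tensorflow.keras.models import Model

# Hypothetical SRCNN-style layout: upsample first, then let
# convolutions reconstruct the high-resolution detail.
inp = Input(shape=(w, h, c), name='LR')
x = UpSampling2D(size=(2, 2))(inp)                                 # naive 2x upsampling
x = Conv2D(64, (3, 3), padding='same', activation='relu')(x)
x = Conv2D(32, (3, 3), padding='same', activation='relu')(x)
out = Conv2D(3, (3, 3), padding='same', activation='sigmoid')(x)   # 3-channel HR image

model = Model(inputs=inp, outputs=out)
model.compile(loss='mse', optimizer='adam')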

1 Answer

Such out-of-memory (OOM) errors are typical of batch sizes that simply cannot fit into your memory: the float32 tensor of shape [4608000, 720] in your traceback needs 4,608,000 × 720 × 4 bytes ≈ 13 GB on its own.

I did model.fit(x_train, y_train, batch_size=1024, epochs=50, verbose=0) and the result is "exceeded 10% of system memory".

1024 sounds too large, then. Start small (e.g. 64) and increase gradually in powers of 2 (128, 256, ...) until you reach the largest batch size that still fits into your memory.
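A minimal sketch of that search, assuming the model and data from the question (the candidate sizes are arbitrary; note that on CPU an OOM can also crash the process outright rather than raise an exception, as the comments below describe):

import tensorflow as tf

# Try progressively larger batch sizes for one epoch each;
# stop at the first one that triggers an OOM error.
largest_ok = None
for batch_size in [64, 128, 256, 512, 1024]:
    try:
        model.fit(x_train, y_train, batch_size=batch_size, epochs=1, verbose=0)
        largest_ok = batch_size
    except tf.errors.ResourceExhaustedError:
        break
print('Largest batch size that fits:', largest_ok)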

The general discussion in "How to calculate optimal batch size" might be helpful, too.


3 Comments

Modifying the batch_size did not work for me. My network must be wrong.
@Beto What exactly do you mean by "did not work"? Did the OOM error go away? That was the point of the answer. So, you may want to kindly accept it and open a new question on the methodology part (which I see you have done already).
No; the error now is "exceeded 10% of system memory", and it crashes my system (Linux).
