
This is my example code:

import numpy as np
import tensorflow as tf

N = 3000
with tf.variable_scope("scope") as scope:
    A = tf.Variable(np.random.randn(N, N), dtype=tf.float32, name='A')

sess = tf.Session()

for _ in range(100):
    sess.run(tf.global_variables_initializer())

Running the code allocates more than 10 GB of memory on my machine. I want to re-train my model multiple times without having to reset the whole graph to the default graph every time. What am I missing?

Thanks!

1 Answer


I found the problem. For anybody else running into this in the future: each call to tf.global_variables_initializer() inside the loop creates a new initialization operation (plus the assign ops behind it) and adds it to the graph, so the graph keeps growing with every iteration. The solution for me was to build the initialization operation once and reuse it. This fixes the memory 'leak' for me:

import numpy as np
import tensorflow as tf

N = 3000
tf.reset_default_graph()
with tf.variable_scope("scope") as scope:
    A = tf.Variable(np.random.randn(N, N), dtype=tf.float32, name='A')

# Build the initialization op once, outside the loop
varlist = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="scope")
init = tf.variables_initializer(varlist)  # or tf.global_variables_initializer()

for _ in range(100):
    sess = tf.Session()
    sess.run(init)  # reuse the same init op instead of building a new one each iteration
    sess.close()    # release the session's resources before the next run
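
As a side note, and not something the fix above requires: if you want to be sure that nothing in the loop ever adds operations to the graph again (which is exactly what the repeated tf.global_variables_initializer() calls were doing), you can print the op count and finalize the graph after building it. This is just a small sketch of that sanity check, reusing the same N, scope and variable as above; Graph.finalize() makes any later attempt to add an op raise a RuntimeError instead of silently growing the graph.

import numpy as np
import tensorflow as tf

N = 3000
tf.reset_default_graph()
with tf.variable_scope("scope") as scope:
    A = tf.Variable(np.random.randn(N, N), dtype=tf.float32, name='A')

init = tf.global_variables_initializer()
print(len(tf.get_default_graph().get_operations()))  # op count is now fixed for the rest of the run

# Freeze the graph: adding any new op from here on raises a RuntimeError
tf.get_default_graph().finalize()

for _ in range(100):
    sess = tf.Session()
    sess.run(init)   # running existing ops is still allowed on a finalized graph
    sess.close()
    # sess.run(tf.global_variables_initializer()) here would now fail loudly
    # instead of quietly adding ops and eating memory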