This is my example code:
import numpy as np
import tensorflow as tf

N = 3000
with tf.variable_scope("scope") as scope:
    A = tf.Variable(np.random.randn(N, N), dtype=tf.float32, name='A')

sess = tf.Session()
for _ in range(100):
    # re-initialize the variable on every iteration
    sess.run(tf.global_variables_initializer())
Running this allocates more than 10 GB of memory on my machine, and the usage keeps growing with each iteration. I want to re-train my model multiple times without having to reset the whole graph to the default graph every time. What am I missing?
Thanks!