I am interested in implementing a recursive neural network in TensorFlow, along the lines of what was done in "How can I implement a recursive neural network in TensorFlow?".
However, in that implementation, the `parallel_iterations` argument of the `tf.while_loop` call was fixed at 1. I fear this might be too slow. Since the trees I am going to feed into TensorFlow have parts that do not depend on each other, I would like to set `parallel_iterations` to a higher value. However, some dependencies in the input tree are inevitable, and I am afraid that setting a higher value may break them.
So my question is: does TensorFlow's `tf.while_loop` automatically capture dependencies, so that parallelism is only applied to parts that do not depend on each other?
The TensorFlow documentation says the following:

> For correct programs, while_loop should return the same result for any parallel_iterations > 0.
But I am not sure what they mean by "correct programs".
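To make the question concrete, here is a toy sketch of my own (not from the linked answer): a loop whose iterations depend on each other only through the loop variables. My understanding is that such data dependencies are part of the dataflow graph, so `parallel_iterations` can only overlap the independent parts of different iterations:

```python
import tensorflow as tf

def cond(i, acc):
    return i < 10

def body(i, acc):
    # acc depends on the value from the previous iteration,
    # so this dependency is carried through the loop variables.
    return i + 1, acc + i

i0 = tf.constant(0)
acc0 = tf.constant(0)

# Same result whether iterations are serialized or allowed to overlap.
_, acc_serial = tf.while_loop(cond, body, [i0, acc0], parallel_iterations=1)
_, acc_parallel = tf.while_loop(cond, body, [i0, acc0], parallel_iterations=10)
# Both should equal sum(range(10)) == 45.
```

If this is what "correct programs" means (results determined only by dependencies carried through loop variables, not by iteration interleaving), then presumably my tree case is safe, but I would like confirmation.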