I'm writing a multi-process program and using a queue to collect the results. After that, I move the results from the queue to a list in the main process (the other processes have terminated). Here's the code:
import multiprocessing
result = multiprocessing.Queue(maxsize=16)
# the multiprocessing part which fills the Queue with 16 elements
res_list = []
while not result.empty():
    res_list.append(result.get())
print(result.qsize())
print(len(res_list))
I'm expecting
0
16
But the output is
3
13
I also got other numbers such as 4, 12 when running the program several times. Sometimes I do get 0, 16, but this is not guaranteed.
I've read this post: Multiprocessing Queue empty() function not working reliably in python
The answer mentioned that the number qsize() returns might not be reliable. But I think it is only that number that is unreliable; the queue itself should actually be emptied. Also, empty() returns True (which is why the program exits the while loop), so the queue should already be empty.
However, res_list only gets 13 elements. So I'm doubting whether the queue is really emptied, and why this happened. Also, does anyone have an idea of how I can get all my results out of the result queue?
Thanks a lot!
Call queue.get(block=False) in a loop inside a try...except block. The exception raised once the queue is finally empty will be queue.Empty. That's when you know the queue is actually empty and you can break the loop. As for your question of why this happens, it may simply be that the feeder process which puts items on the queue is slower than the main process. The queue does momentarily become empty, which causes your loop to break, but the feeder process puts more items on it after that. More context on how the feeder process functions would be helpful here.

q.qsize, q.full, and q.empty may all be unreliable, due to the fact that a background thread handles reads and writes to the underlying pipe, and no checks are done on the state of that thread. I would almost never rely on them, and consequently almost never use them. They are included for cross-compatibility with the non-multiprocessing version of Queue, where those calls are reliable (because there's no thread to worry about). I think @Charchit is probably most correct here that you manage to empty the queue briefly before all the items are put in by the child process.

Another approach is to wait for the child processes with join. But the documentation warns of a possible deadlock if you join the child processes before retrieving all the items they have put on the queue. One way to handle this is for each of the N child processes to put a sentinel value on the queue as its final item. The main process then does blocking get calls until it has seen N sentinel values. Note that even a non-blocking get can raise the Empty exception before all 16 items have been put.