I have a queue work delegator that spawns 15-25 subprocesses via a custom multiprocessing.Pool(). Each worker emits 1-3 logging.info messages, for a total of 10-15 messages in under 1000 ms, and I've noticed that the timestamps are always sequential and never collide across messages. This suggests to me that there is a shared lock somewhere in multiprocessing or logging, but I can't figure out where exactly it is.
This is mostly asked for educational purposes, as the software in question is going to be refactored to be async or multithreaded: 90% of its real time is spent in IO (a remote API, not number crunching).
The logging configuration mirrors Django's, as I liked how that was organized:
LOGGING['handlers']['complex_console'] = {
    'level': 'DEBUG',
    'class': 'logging.StreamHandler',
    'formatter': 'complex',
}
LOGGING['loggers']['REDACTED_sync'] = {
    'handlers': ['complex_console'],
    'propagate': True,
    'level': 'DEBUG',
}
Some quick clarification, multiprocessing.Process does use fork but calls to logging.getLogger() are not made until after a child-process is spawned.