I need to log plenty of data while running my system code. What logging packages can I use for efficient, asynchronous logging? Is the standard Python logging package (https://docs.python.org/2/library/logging.html) asynchronous by default?
-
No, it's not. But it's easy to write your own handler that drops the message into a Queue, where it is picked up by another thread. – Klaus D., Aug 23, 2017 at 14:49
-
@KlausD. Can you maybe explain more or suggest some link to read about it? – Ziva, Aug 23, 2017 at 15:30
-
If it's Linux, you can use syslog or syslog-ng directly, which is quick to use. – Armen Babakanian, Aug 23, 2017 at 19:49
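To illustrate the comment above: the standard library already ships a `SysLogHandler` that forwards records to a syslog daemon. A minimal sketch, assuming a daemon listening on UDP port 514 on localhost (on Linux you could instead pass the local socket path, e.g. `"/dev/log"`); the logger name `myapp` is just an example:

```python
import logging
from logging.handlers import SysLogHandler

# Assumed daemon location; swap in address="/dev/log" for the
# conventional local Unix socket on Linux.
handler = SysLogHandler(address=("localhost", 514))
handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))

logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("hello from syslog")
```

With a UDP socket the handler's `emit()` is a single non-blocking `sendto()`, which is why this route is quick.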
-
Related? docs.python.org/3/howto/logging-cookbook.html#blocking-handlers – starball, Sep 15, 2024 at 10:12
2 Answers
Async code can use the usual logging features without resorting to special async modules or wrappers. Code like this is possible.
import logging
async def do_some_async_stuff():
    logging.getLogger(__name__).info("Started doing stuff...")
    logging.getLogger(__name__).warning("Things went awry...")
The concern here is whether submitting log entries will incur some delay while the entries are written to file, depriving the asynchronous system of the opportunity to run other tasks in the meantime. This can happen if a blocking handler that writes to file is added directly somewhere along the logging hierarchy.
There's a simple solution for this provided by the standard logging module: use a non-blocking handler that enqueues its messages to the desired blocking handler running in its own private thread.
Purism aside, there's no hard rule that precludes using a QueueHandler to give async code a non-blocking log handler, paired with a blocking handler hosted in a QueueListener.
The solution below is entirely compatible with coroutines that obtain loggers and submit entries in the typical fashion - no wrappers with calls to .run_in_executor() are needed. Async code won't experience any blocking behavior from the logging system.
For example, a QueueHandler can be set up as the root handler:
import logging
import queue
from logging.handlers import QueueHandler

log_queue = queue.Queue()
queue_handler = QueueHandler(log_queue)  # Non-blocking handler.

root = logging.getLogger()
root.addHandler(queue_handler)           # Attached to the root logger.
And the blocking handler you want can be put inside a QueueListener:
from logging.handlers import QueueListener, RotatingFileHandler

rot_handler = RotatingFileHandler(...)  # The blocking handler.

# Sitting comfortably in its own thread, isolated from async code.
queue_listener = QueueListener(log_queue, rot_handler)
queue_listener.start()
Then configure the handler nested in the listener with whatever log entry formatting you need.
I personally like the rotating file handler because it limits the size and number of log files produced, deleting the oldest when a new backup is created.
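For instance, the nested handler's rotation limits and formatter can be configured like this (the file name, size limit, and backup count are illustrative values, not recommendations):

```python
import logging
from logging.handlers import RotatingFileHandler

# Rotate at ~1 MB and keep at most 5 backups; the oldest backup is
# deleted when a new one is created.
rot_handler = RotatingFileHandler("app.log", maxBytes=1_000_000,
                                  backupCount=5)
rot_handler.setFormatter(logging.Formatter(
    "%(asctime)s %(name)s %(levelname)s: %(message)s"))
```

The formatter lives on the handler inside the listener, not on the QueueHandler, so formatting also happens off the async thread.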
7 Comments
You can execute logging.info() calls using a pool of n workers via concurrent.futures.ThreadPoolExecutor, where n should always equal one:
import concurrent.futures
import logging

executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def info(msg, *args):
    executor.submit(logging.info, msg, *args)
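A minimal usage sketch of this approach; the basicConfig call and the explicit wait/shutdown at the end are additions not in the answer, there to make sure everything is written before the program exits:

```python
import concurrent.futures
import logging

logging.basicConfig(level=logging.INFO)
executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def info(msg, *args):
    # Hand the actual I/O off to the single worker thread; the caller
    # returns immediately. One worker keeps messages in order.
    return executor.submit(logging.info, msg, *args)

futures = [info("message %d", i) for i in range(3)]
concurrent.futures.wait(futures)  # Ensure everything was written.
executor.shutdown()
```

With max_workers=1 the submissions from a single caller are logged in order; the trade-off versus QueueHandler is that every other handler attached to the root logger also runs on that one worker thread.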
4 Comments