When I use sd_journal_open(), the process's memory usage balloons, with the amount depending on the OS and the flags used to open the journal.
I wrote a test that does nothing more than open the journal and idle, so I could observe the memory footprint. On a semi-lightly used, stock Ubuntu server with 16GB RAM, opening with no flags (so all possible sources are loaded) pushed process memory to almost 1GB. Using SD_JOURNAL_RUNTIME_ONLY reduced it to a trivial amount, but almost no log data was available in that mode. SD_JOURNAL_SYSTEM appears to be the best compromise: most system logs remain available at around 150MB. However, on a CentOS server with only 4GB RAM, opening the journal always takes around 250MB no matter which flags I use. (The flags are described on the sd_journal_open(3) manpage.)
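For reference, a minimal sketch of the kind of test I'm describing: open the journal with a given flag and idle so the process can be inspected with top/ps. (This isn't my exact test program, just the shape of it; swap the flag to compare footprints.)

```c
#include <systemd/sd-journal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    sd_journal *j;

    /* Try SD_JOURNAL_SYSTEM, SD_JOURNAL_RUNTIME_ONLY, or 0 (all sources)
     * and compare VIRT/RES in top for each. */
    int r = sd_journal_open(&j, SD_JOURNAL_SYSTEM);
    if (r < 0) {
        fprintf(stderr, "sd_journal_open failed: %s\n", strerror(-r));
        return 1;
    }

    pause();  /* idle here; inspect memory from another terminal */

    sd_journal_close(j);
    return 0;
}
```

Build with `gcc test.c -lsystemd` (requires the libsystemd development headers).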
- Aside from the flags used to open the journal, are there systemd journal configuration settings that affect this?
- Are there any other programmatic tactics for reducing the memory required to access journal data from an application?
Note: The memory figures quoted above are VIRT, but RES is still roughly ten times what it takes to open a given log file directly: anywhere from 30MB to nearly 100MB merely to open the journal.