
When I use sd_journal_open(), the process's memory usage balloons, by an amount that depends on the OS and the flags used to open the journal.

I wrote a test that does nothing more than open the journal and idle, so I could review the memory size. On a semi-lightly used, stock Ubuntu server with 16 GB RAM, opening with no flags (so all possible sources are loaded) took process memory to almost 1 GB. Using SD_JOURNAL_RUNTIME_ONLY reduced it to a trivial number, but almost no data was available in that mode – no logs. SD_JOURNAL_SYSTEM appears to be the best compromise that still makes most system logs available (around 150 MB). But on a CentOS server with only 4 GB RAM, opening the journal always takes around 250 MB no matter which flags I use. (The flags are described in the sd_journal_open(3) man page.)

  • Aside from the flags used to open the journal, are there systemd journal configuration settings that affect this?
  • Are there any other programmatic tactics to slim down the memory requirements of accessing journal data in an app?

Note: the quoted memory usage is VIRT, but RES is still ten times what it would take to directly open a given log file – anywhere from 30 MB to nearly 100 MB merely to open the journal.


1 Answer

Libsystemd's journal API is memory-hungry, but VIRT is not memory usage – it is address-space usage. Libsystemd reads journals through memory-mapped files via mmap() rather than lseek() and read(): each .journal file is mapped directly into the process's address space, but the mappings do not in themselves contribute to physical memory usage. Only the pages the process has actually read count, and they count as 'cached' memory that the kernel can reclaim under pressure. (So total memory usage is probably lower than it would be with read(), which has to copy data into the process's own allocated memory.)

So it is really only RES that needs reducing, not VIRT. One way to achieve that is to reduce the number of *.journal files in /var/log/journal – depending on your log-rotation parameters you may have quite a few of them, and sd_journal's RAM usage seems to grow with the number of files it has to interleave.

  • Use persistent storage (on disk in /var/log/journal) rather than runtime storage. Runtime storage (stored in the /run tmpfs) itself contributes to "permanent" RAM usage.

  • If you have many tiny .journal files, configure systemd-journald with a lower SystemMaxFiles= and a higher SystemMaxFileSize=, so that logs are archived in fewer, larger .journal files. Also lower MaxRetentionSec= to keep old logs from accumulating.

  • If you have many interactive users (UIDs), each UID gets its own set of journal files (having root alone already doubles the number of .journal files, as each set is rotated independently).

    This is sometimes useful, as it allows unprivileged users to see the syslog messages their own programs have generated, but on severely constrained systems you can disable the per-user split with SplitMode=none.

  • Collect logs to another system and analyze them there instead.
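Putting the journald-side suggestions together, a drop-in such as /etc/systemd/journald.conf.d/compact.conf might look like this (values are illustrative, not recommendations; restart systemd-journald afterwards):

```ini
[Journal]
Storage=persistent
SystemMaxFiles=10
SystemMaxFileSize=64M
MaxRetentionSec=1month
SplitMode=none
```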

Note that the journal files containing @ in their name are no longer "live"; they can be safely deleted manually if you want to discard some old logs.
