
The issue is that all of the commands listed below set the file size to 0 (zero), but only temporarily.
Once new log lines are written, the file size becomes the old size (before the truncate) plus the size of the new log messages.
This makes it look like truncate does not free the disk space occupied by the file.
The file size should only reflect the log messages written after the truncate command.

I tried the options below before asking this question.

  1. $ > sysout.log
  2. $ : > sysout.log
  3. $ cat /dev/null > sysout.log
  4. $ cp /dev/null sysout.log
  5. $ dd if=/dev/null of=sysout.log
  6. $ echo "" > sysout.log
  7. $ echo -n "" > sysout.log
  8. $ truncate -s 0 sysout.log

First, I checked the file size (the file contained 10 lines of log messages):

[root@mylaptop ~]# ls -lh sysout.log
-rw-r--r-- 1 root root 6.0K Dec 2 11:30 sysout.log

Then I ran the truncate command:

[root@mylaptop ~]# truncate -s0 sysout.log
[root@mylaptop ~]# ls -lh sysout.log
-rw-r--r-- 1 root root 0 Dec 2 11:31 sysout.log

After a few seconds, two lines of log messages were written to the file, but the file size was:

[root@mylaptop ~]# ls -lh sysout.log
-rw-r--r-- 1 root root 6.3K Dec 2 11:31 sysout.log

As you can see, the new messages are added on top of the old size.
How can I free the disk space? Or is there another approach?
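To check whether disk space is really still occupied, the apparent size (ls) can be compared with the actual disk usage (du). A standalone sketch using a throwaway sparse file created by truncate (the file name is made up for illustration):

```shell
# Create a 1 MiB sparse file: the apparent size is large,
# but few or no data blocks are actually allocated on disk
truncate -s 1M sparse_demo.log
ls -l sparse_demo.log    # apparent size: 1048576 bytes
du -k sparse_demo.log    # actual disk usage: close to zero on most filesystems
```

If du reports far less than ls, the "old file size" is a hole, not real data on disk.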

  • If a process keeps running (maybe a daemon process), these logs will keep filling up. How about using a log rotation mechanism? Or you could put the same commands in a script and add it to crontab for scheduled runs. Commented Dec 3, 2019 at 4:32
  • Thanks @RavinderSingh13, yes, I am going to write a crontab entry for this. But the issue is: I want to check the file size, and if it is greater than 5GB, copy the logs to S3 and then truncate the file. Since the size is not released after the truncate, the job would end up copying the file to S3 very frequently. Please tell me about the log rotation mechanism. Commented Dec 3, 2019 at 4:37
  • @AniketKulkarni: If no other process has the log file open and keeps feeding it, I cannot reproduce the effect. For instance, I ran ls > out.txt; truncate -s 0 out.txt; touch out.txt; echo abc >> out.txt, and the file out.txt contains only abc, nothing else. Commented Dec 3, 2019 at 6:02
  • @user1934428: The file is created by the command "java -jar my_app.jar > sysout.log 2>&1 &", so it is continuously receiving data. Commented Dec 3, 2019 at 6:36
  • For every log you want to manage, you have to check the daemon generating it; you can often send that daemon a signal telling it to restart its log. Consult the daemon's documentation. Logrotate is a mechanism that implements this. Commented Dec 3, 2019 at 6:54
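As a sketch of the logrotate mechanism mentioned above: a config file dropped into /etc/logrotate.d/ can rotate the file by size with copytruncate (copy the log aside, then truncate the original in place). The path and limits below are assumptions for illustration. Note that copytruncate has the same offset caveat as a manual truncate, so the writer should still log in append mode, or be restarted via a postrotate script:

```
# Hypothetical file: /etc/logrotate.d/sysout
/root/sysout.log {
    size 5G          # rotate once the file exceeds 5 GB
    rotate 4         # keep 4 rotated copies
    compress
    copytruncate     # copy the log aside, then truncate the original in place
}
```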

1 Answer


For a long-running process that opened the file in write mode (via '>', or otherwise), the kernel tracks the offset of the process's next write. Even if the file is truncated to size 0, the next write resumes at that old offset. Most likely, based on your description, the long-running process continues to log at the old offset, effectively leaving a run of zero bytes (a sparse hole) at the start of the file.

  • Verify by inspecting the file: did the initial content disappear?
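The offset behaviour is easy to reproduce in the shell alone, with no Java involved. A standalone sketch using a throwaway file name and file descriptor 3 held open in write (non-append) mode:

```shell
# Hold demo.log open in write (non-append) mode on fd 3
exec 3> demo.log
echo "first line" >&3         # fd 3's offset is now 11
truncate -s 0 demo.log        # size drops to 0, but fd 3 still points at offset 11
echo "second line" >&3        # written at offset 11, leaving an 11-byte hole of NULs
exec 3>&-
ls -l demo.log                # apparent size: 23 bytes, not 12
od -c demo.log | head -n 3    # shows the leading \0 bytes before "second line"
```

The 11 bytes of the first line reappear as NULs: that is exactly the "old size + new size" effect seen in the question.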

The solution is simple: instead of logging in write mode, use append mode.

# Start with a clean file
rm -f sysout.log
# Force append mode.
java -jar my_app.jar >> sysout.log 2>&1 &

... 
truncate ...
# New data is now written starting at the truncated size -- here, the start of the file.

Note that every process and connection writing to the file must use append mode.
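The same experiment with the descriptor opened in append mode shows the fix. A standalone sketch: with O_APPEND, every write lands at the current end of file, so the effective offset resets along with the truncate (throwaway file name assumed):

```shell
# Hold demo_append.log open in append mode on fd 4
exec 4>> demo_append.log
echo "first line" >&4
truncate -s 0 demo_append.log   # size drops to 0
echo "after truncate" >&4       # append mode writes at the new end: offset 0
exec 4>&-
cat demo_append.log             # only "after truncate" -- no hole, no inflated size
```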


1 Comment

The solution worked. Thanks for the explanation of write vs. append mode.
