I am trying to write an archiver script in bash, but there are a huge number of files, really too many: about 1 million.
My plan was to create the list of files with:
cd /path/to/log/directory/
find . -type f > logfilelist.txt
and then tar and gzip them with:
tar -cvf logarchive.tar $(cat logfilelist.txt)
gzip logarchive.tar
But because the cat substitution expands to too many arguments, tar fails with an "Arg list too long" error.
So I thought that if I could read the file in a loop, I could archive the files piece by piece using tar's append mode. But a loop that runs a million times, one file per iteration, is not practical. Can I instead read the list file in multi-line chunks, something like this:
tar -cf logarchive.tar $(first 50000 lines of logfilelist.txt)
for each remaining chunk
do
    tar -rvf logarchive.tar $(next 50000 lines of logfilelist.txt)
done
Is it possible to cat a specific range of lines from a file like that?
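If you do want the chunked approach, you don't need to slice lines with cat at all: split can cut the list into fixed-size pieces, and GNU tar's -T (--files-from) option reads file names from a file instead of the argument list. A minimal sketch of that idea, using a small scratch tree in place of /path/to/log/directory/ and a 2-line chunk size in place of 50000:

```shell
#!/bin/sh
# Demo setup: a scratch directory stands in for /path/to/log/directory/.
work=$(mktemp -d)
mkdir "$work/logs"
for i in 1 2 3 4 5; do echo "entry $i" > "$work/logs/app$i.log"; done

cd "$work/logs" || exit 1
find . -type f > "$work/logfilelist.txt"   # keep the list outside the tree

# Cut the list into fixed-size chunks: chunk_aa, chunk_ab, ...
# (use -l 50000 for the real list)
split -l 2 "$work/logfilelist.txt" "$work/chunk_"

# Create the archive from the first chunk, then append the rest with -r.
# -T reads the names from a file, so nothing goes on the command line.
first=1
for chunk in "$work"/chunk_*; do
    if [ "$first" -eq 1 ]; then
        tar -cf "$work/logarchive.tar" -T "$chunk"
        first=0
    else
        tar -rf "$work/logarchive.tar" -T "$chunk"
    fi
done
gzip "$work/logarchive.tar"
```

In practice even the loop is unnecessary: tar -cf logarchive.tar -T logfilelist.txt reads the whole million-line list in one go, because the names never pass through the kernel's argument-list limit.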
"And then, I will tar and gzip them with" -- why not just do that from the start, like cd /path/to/log/directory/ ; tar -cvf logarchive.tar . ?
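Building on that comment: tar can also compress in the same pass with -z and skip the separate cd with -C, so no file list is ever built and the argument list stays tiny. A minimal sketch, with a mktemp directory standing in for /path/to/log/directory/:

```shell
#!/bin/sh
dir=$(mktemp -d)        # stands in for /path/to/log/directory/
out=$(mktemp -d)        # where the archive goes, outside the tree
for i in 1 2 3; do echo "entry $i" > "$dir/app$i.log"; done

# tar walks the directory tree itself, so a million files is fine:
# -z gzips on the fly, -C changes into the directory first.
tar -czf "$out/logarchive.tar.gz" -C "$dir" .
```

Writing the archive outside the directory being archived avoids tar trying to include the archive in itself.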