This one does precisely what was requested:
#!/bin/bash
ctr=0;
# Read 1M lines, strip newline chars, put the results into an array named "asdf"
# readarray returns success even at end of input, so also check that it
# actually read something or this will loop forever on an empty array
while readarray -n 1000000 -t asdf && ((${#asdf[@]})); do
ctr=$((${ctr}+1));
# "${asdf[@]}" expands each entry in the array such that any special characters in
# the filename won't cause problems
tar czf /destination/path/asdf.${ctr}.tgz "${asdf[@]}";
# If you don't want compression, use this instead:
#tar cf /destination/path/asdf.${ctr}.tar "${asdf[@]}";
# this is the canonical way to generate output
# for consumption by read/readarray in bash
done < <(find /source/path -not -type d);
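For restoring later it's worth knowing that GNU tar strips the leading / from the absolute paths find emits, so each chunk unpacks relative to the current directory (untested; /restore/path is just a stand-in):
cd /restore/path
for f in /destination/path/asdf.*.tgz; do tar xzf "${f}"; done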
readarray (in bash) can also execute a callback function every so-many lines (-C names the callback, -c sets the interval), so that could potentially be re-written to resemble:
function something() {...}
find /source/path -not -type d \
| readarray -c 1000000 -C something -t asdf
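Fleshing that out into something closer to runnable (a hedged, untested sketch: per the bash manual the callback is handed the index of the line just read as $1 and the line itself as $2, before that element is assigned, and I've swapped the pipe for process substitution so the callback doesn't run in a throwaway subshell):
#!/bin/bash
ctr=0; start=0;
chunk() {
# elements start..$1-1 are already assigned; the line that tripped the
# callback ($2) hasn't been assigned yet, so append it by hand
ctr=$((${ctr}+1));
tar czf /destination/path/asdf.${ctr}.tgz "${asdf[@]:start:$1-start}" "$2";
start=$(($1+1));
}
# -c sets the callback interval; -C names the callback
readarray -c 1000000 -C chunk -t asdf < <(find /source/path -not -type d);
# the final partial chunk never trips the callback, so sweep it up here
if ((${#asdf[@]} > start)); then
ctr=$((${ctr}+1));
tar czf /destination/path/asdf.${ctr}.tgz "${asdf[@]:start}";
fi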
GNU parallel could be leveraged to do something similar (untested; I don't have parallel installed where I'm at so I'm winging it):
find /source/path -not -type d -print0 \
| parallel -j4 -d '\0' -N1000000 tar czf '/destination/path/thing_backup.{#}.tgz'
Since that's untested, you could add the --dry-run arg to see what it'll actually do (example below). I like this one the best, but not everyone has parallel installed. -j4 makes it run 4 jobs at a time; -d '\0' combined with find's -print0 splits the input on NUL bytes, so special characters in filenames (whitespace, even newlines) can't break anything. The rest should be self-explanatory.
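For instance, this prints each tar command line that would run without actually running anything:
find /source/path -not -type d -print0 \
| parallel --dry-run -j4 -d '\0' -N1000000 tar czf '/destination/path/thing_backup.{#}.tgz'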
Something similar could be done with parallel but I don't like it because it generates random filenames:
find /source/path -not -type d -print0 \
| parallel -j4 -d '\0' -N1000000 --tmpdir /destination/path --files tar cz
I don't [yet?] know of a way to make it generate sequential filenames.
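One possible workaround (untested; it leans on --files printing each temp file's name on stdout and on -k keeping that output in input order) is to rename the temp files sequentially as they come out the other end:
ctr=0
find /source/path -not -type d -print0 \
| parallel -k -j4 -d '\0' -N1000000 --tmpdir /destination/path --files tar cz \
| while IFS= read -r tmp; do
ctr=$((${ctr}+1))
mv "${tmp}" "/destination/path/thing_backup.${ctr}.tgz"
done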
xargs could also be used, but unlike parallel there's no straightforward way to generate the output filename, so you'd end up doing something stupid/hacky like this:
find /source/path -not -type d -print0 \
| xargs -P 4 -0 -n 1000000 bash -euc 'tar czf "$(mktemp --suffix=".tgz" /destination/path/backup_XXX)" "$@"' _
(-n rather than -L because the input is NUL-delimited rather than line-delimited, and the trailing _ fills bash's $0 so the first filename doesn't get swallowed by it.)
The OP said they didn't want to use split ... I thought that seemed weird, as cat will re-join the pieces just fine; this produces a tar and splits it into 3 GiB chunks:
tar c /source/path | split -b $((3*1024*1024*1024)) - /destination/path/thing.tar.
... and this un-tars them into the current directory:
cat /destination/path/thing.tar.* | tar x
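If you want to sanity-check the chunks before extracting for real, listing the archive exercises the same rejoin without writing any files:
cat /destination/path/thing.tar.* | tar t | wc -l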