jq's IO facilities are rather primitive, so I'd suggest starting with:
def chunks(n):
  def c: .[0:n], (if length > n then .[n:]|c else empty end);
  c;
chunks(5)
The key now is to use the -c command-line option:
jq -c -f chunk.jq foo.json
With your data, this will produce a stream of three arrays, one per line.
You can pipe that into split, awk, or similar to send each line to a separate file, e.g.
awk '{n++; print > "out" n ".json"}'
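As a concrete end-to-end sketch (the 13-element array below is hypothetical; the question's actual data isn't reproduced here):

```shell
# chunk.jq holds the filter defined above.
cat > chunk.jq <<'EOF'
def chunks(n):
  def c: .[0:n], (if length > n then .[n:]|c else empty end);
  c;
chunks(5)
EOF

# Hypothetical sample input.
echo '[1,2,3,4,5,6,7,8,9,10,11,12,13]' > foo.json

# One compact array per line, each line sent to its own file.
jq -c -f chunk.jq foo.json | awk '{n++; print > "out" n ".json"}'
```

This leaves out1.json, out2.json and out3.json containing [1,2,3,4,5], [6,7,8,9,10] and [11,12,13] respectively.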
If you want the arrays to be pretty-printed in each file, you could then use jq on each, perhaps with sponge (from moreutils), along the lines of:
for f in out*.json ; do jq . "$f" | sponge "$f" ; done
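If sponge isn't available, a temporary file per chunk achieves the same in-place rewrite (out1.json below is a hypothetical stand-in for one of the files produced by the previous step):

```shell
# Hypothetical stand-in for a chunk file.
echo '[1,2,3]' > out1.json

# Pretty-print each file in place without sponge.
for f in out*.json ; do
  jq . "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
```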
def-free solution
If you don't want to define a function, or prefer a one-liner for the jq component of the pipeline, consider this:
jq -c --argjson n 5 'recurse(.[$n:]; length > 0) | .[0:$n]' foo.json
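For instance, with a hypothetical 7-element array and $n lowered to 3, the one-liner yields three chunks:

```shell
# Hypothetical sample input.
echo '[1,2,3,4,5,6,7]' > foo.json

# recurse repeatedly drops the first $n elements while anything remains;
# .[0:$n] then takes the head of each intermediate array.
jq -c --argjson n 3 'recurse(.[$n:]; length > 0) | .[0:$n]' foo.json
# -> [1,2,3]
#    [4,5,6]
#    [7]
```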
Notes
chunks will also work on strings.
chunks defines the 0-arity inner function, c, to take advantage of jq's support for tail-call optimization, which applies to recursive 0-arity filters.
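To illustrate the note about strings: jq's slice syntax .[a:b] yields substrings, so the same definition chunks a (hypothetical) string input too:

```shell
# chunks(4) on a 10-character string produces 4+4+2 characters.
echo '"abcdefghij"' | jq -c '
  def chunks(n):
    def c: .[0:n], (if length > n then .[n:]|c else empty end);
    c;
  chunks(4)'
# -> "abcd"
#    "efgh"
#    "ij"
```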