You are running them sequentially.
Maybe one of the other scripts is still going?
Try starting them all in background.
Simple version -
for f in *.sh; do bash "$f" -H & done
If the scripts produce any output this will be a mess, though, and if you log out they will be killed along with your session. Here's an elaborated version to handle both problems -
for f in *.sh; do
    nohup bash "$f" -H <&- > "$f.log" 2>&1 &
done
The & at the end puts each script into the background so the loop can start the next one without waiting for the current $f to finish. nohup makes the script ignore SIGHUP (the hangup signal), so if the whole thing takes a long time you can disconnect and come back later.
<&- closes stdin so nothing sits waiting for input. > "$f.log" gives each script a log of its own, so you can check them individually without their output getting all intermixed. 2>&1 makes sure any error output goes into the same log as stdout. Be aware that stderr is unbuffered while stdout is buffered when it goes to a file, so if an error message shows up in a weird place (too early) in the log, switch the redirections around:
nohup bash "$f" -H <&- 2>"$f.log" 1>&2 &
which still sends both streams to the same log; whether the ordering improves depends on how the programs themselves buffer their output.
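If buffering really is the issue, a more direct fix (a sketch, assuming GNU coreutils is available) is stdbuf, which asks dynamically linked stdio programs to line-buffer their stdout; bash passes the setting down to the commands each script runs, so it may or may not help depending on what those scripts call:

for f in *.sh; do
    # force line-buffered stdout so output lands in the log as it is produced
    nohup stdbuf -oL bash "$f" -H <&- > "$f.log" 2>&1 &
done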
Why do you give them all the same -H argument?
Since you mention below that you have 5k scripts to run, that probably explains why it's taking so long... You might not want to pound the server with all of those at once, so let's elaborate on that just a little more...
Minimally, I'd do something like this:
for f in *.sh; do
    nohup nice bash "$f" -H <&- > "$f.log" 2>&1 &
    sleep 0.1   # fractional seconds are ok; a longer pause means fewer starts per second
done
This will start roughly ten per second until all of them have been launched, nice runs each $f at a lower priority so normal system requests can still get ahead of them, and nohup keeps them alive if you disconnect, as before.
A better solution might be a tool built for exactly this, such as GNU parallel or xargs -P.
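For example (a sketch; the limit of 8 concurrent jobs is arbitrary, and -H is just carried over from your scripts):

# GNU parallel: at most 8 scripts at a time, each with its own log
parallel -j8 'nice bash {} -H > {}.log 2>&1' ::: *.sh

# roughly the same thing with plain xargs
printf '%s\0' *.sh | xargs -0 -n1 -P8 sh -c 'nice bash "$1" -H > "$1.log" 2>&1' _

Either way the limit caps how many run at once, rather than merely spacing out the launches the way sleep does.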
You can also run the scripts with tracing enabled (bash -x "$f", or set -x inside them) to see what's going on. And of course make sure that the master script (the one you posted) does not have the .sh extension, to avoid recursive invocation. BTW, is it deliberate that you pass the same argument, -H, to every invoked script? It's unrelated to your problem, but still somewhat odd.
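If the master script has to keep its .sh extension, one way to guard against the recursion (a sketch; run_all.sh is a made-up name for the script you posted):

for f in *.sh; do
    [ "$f" = run_all.sh ] && continue   # skip the driver script itself
    nohup nice bash "$f" -H <&- > "$f.log" 2>&1 &
    sleep 0.1
done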