
Dear experts, I have a small problem. I am trying to run, in one go, multiple shell scripts that share the same extension (.sh) and live inside one directory. So far I wrote a common runner script, shown below, but the problem is that it never finishes; it just keeps running, and I cannot find where the problem is. I hope some expert can look into it. My small code is below. If I run the scripts manually, like bash scriptone.sh, bash scriptkk.sh, each works fine, but I don't want to do it by hand. Thanks.

#!/bin/sh

for f in *.sh; do
  bash "$f" -H 
done
  • I would trace the execution (using set -x) to see what's going on. And of course make sure that the master script (the one you posted) does not have the extension .sh, to avoid recursive invocation. BTW, is it deliberate that you pass the same argument, -H, to every invoked script? It's unrelated to your problem, but still somewhat odd. Commented Jul 16, 2020 at 7:20
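The tracing the comment suggests can be tried without editing the runner at all: bash -x prints each command to stderr before executing it. A minimal demonstration, using a throwaway script (the /tmp path and filename are just for illustration, not one of the real scripts):

```shell
# Create a tiny throwaway script and run it with tracing enabled.
printf 'echo hello\n' > /tmp/demo_trace.sh

# -x makes bash print each command (prefixed with "+") to stderr as it runs.
bash -x /tmp/demo_trace.sh 2> /tmp/demo_trace.log

cat /tmp/demo_trace.log   # the trace shows "+ echo hello"
```

Run the real runner the same way (bash -x ./runner) and the trace will show immediately if it starts invoking itself.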

2 Answers


You are probably calling yourself recursively. If the runner itself ends in .sh, the glob matches it and it launches a fresh copy of itself on every pass. Skip it explicitly:

#!/bin/sh

for f in *.sh; do
  if [ "$f" = "$0" ]; then   # skip the runner itself; note POSIX sh uses =, not ==
    continue
  else
    echo "running: $f"
    bash "$f" -H
  fi
done
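One caveat with comparing against "$0": it holds the script exactly as invoked, so running the script as ./runner.sh makes "$0" equal "./runner.sh", which never matches the bare "runner.sh" the glob produces, and the recursion returns. Comparing basenames avoids that. A self-contained sketch (the /tmp directory and script names are made up for the demo):

```shell
# Demo setup: two worker scripts plus the runner in a scratch directory.
mkdir -p /tmp/skip_demo && cd /tmp/skip_demo
printf 'echo ran-a\n' > a.sh
printf 'echo ran-b\n' > b.sh

cat > runner.sh <<'EOF'
#!/bin/sh
self=$(basename "$0")          # "runner.sh" even when invoked as ./runner.sh
for f in *.sh; do
  [ "$f" = "$self" ] && continue   # never re-run ourselves
  bash "$f" -H
done
EOF

sh ./runner.sh   # runs a.sh and b.sh once each, then exits
```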

5 Comments

Thanks @oguz, force of habit.
Your script also keeps going; it does not stop.
How many shell scripts does it have to run? How long does it take for them to run?
I need to run nearly 5000 scripts.
I'd encourage you to look into gnu-parallel to take full advantage of your CPU.
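To make the gnu-parallel suggestion concrete: GNU parallel runs a fixed number of jobs at once, and where it isn't installed, xargs -P gives a rough stand-in. A sketch using throwaway demo scripts (the /tmp path, script names, and job counts are all made up; the real scripts would keep their -H argument the same way):

```shell
# Demo setup: three trivial scripts in a scratch directory.
mkdir -p /tmp/par_demo && cd /tmp/par_demo
for i in 1 2 3; do printf 'echo "script %s done"\n' "$i" > "script$i.sh"; done

# With GNU parallel installed you could write:
#   parallel -j4 'bash {} -H > {}.log 2>&1' ::: *.sh
# xargs -P achieves much the same without installing anything extra:
printf '%s\0' *.sh | xargs -0 -n1 -P2 sh -c 'bash "$1" -H > "$1.log" 2>&1' _
```

Either way each script gets its own log file, and at most the given number run at a time.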

You are running them sequentially.
Maybe one of the other scripts is still going?

Try starting them all in background.
Simple version -

for f in *.sh; do bash "$f" -H & done

If there's any output this will be a mess though, and if you log out they will be killed by SIGHUP. Here's an elaborated version to handle such things -

for f in *.sh; do
  nohup bash "$f" -H <&- > "$f.log" 2>&1 &
done

The & at the end puts each script into the background so the loop can start the next one without waiting for the current $f to finish. nohup makes them ignore SIGHUP, so if the run takes a long time you can disconnect and come back later.

<&- closes stdin. > "$f.log" gives each script a log of its own so you can check them individually without the output getting intermixed. 2>&1 makes sure any error output goes into the same log as stdout. Be aware that stderr is typically unbuffered while stdout is buffered when written to a file, so if an error seems to land in a weird place (too early) in the log, you can try swapping the redirections around:

  nohup bash "$f" -H <&- 2> "$f.log" 1>&2 &

though both forms ultimately point the two streams at the same file, and each script's programs decide their own buffering, so the ordering may still not be perfect.

Why do you give them all the same -H argument?

Since you mention in the comments that you have 5000 scripts to run, that may well explain why it's taking so long. You might not want to pound the server with all of those at once, so let's elaborate a little more.

Minimally, I'd do something like this:

for f in *.sh; do
  nohup nice bash "$f" -H <&- > "$f.log" 2>&1 &
  sleep 0.1 # fractional seconds ok; a longer pause means fewer starts per second
done

This will start nine or ten scripts per second until all of them have been launched, and nice runs each $f at a lower priority so normal system requests can get ahead of it.
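Note that sleep only paces the start-up rate; it doesn't cap how many scripts run at once. If the scripts are long-running, a batching loop bounds concurrency directly in portable sh. A self-contained sketch (the /tmp directory, job names, and batch size are invented for the demo):

```shell
# Demo setup: five trivial scripts in a scratch directory.
mkdir -p /tmp/batch_demo && cd /tmp/batch_demo
for i in 1 2 3 4 5; do printf 'echo "job %s"\n' "$i" > "job$i.sh"; done

BATCH=2   # at most this many scripts in flight at once
n=0
for f in *.sh; do
  bash "$f" -H > "$f.log" 2>&1 &
  n=$((n + 1))
  if [ "$n" -ge "$BATCH" ]; then
    wait   # let the whole batch finish before starting more
    n=0
  fi
done
wait       # catch the final partial batch
```

Bash 4.3+ also offers wait -n, which unblocks as soon as any one job finishes instead of waiting for a whole batch, but the batch version above works in plain sh.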

A better solution might be a dedicated tool such as GNU parallel.

2 Comments

Wow, that got a lot more complex than I intended, lol
I will give it a try.
