Background: I'm writing a wrapper script that handles sending our researchers' jobs to our computing clusters (one uses SLURM, the other Torque/PBS). It also has to serve as a drop-in replacement for the wrapper scripts shipped with a few pieces of analysis software (e.g. FSL), so those programs can interact with the cluster directly.
On the user side, the script would be called like this:
submit_job [arguments] <command to run>
The call to the scheduler gets assembled, and <command to run> should end up in a small script that gets submitted to the scheduler. Currently this is implemented like so:
# In case of SLURM
sbatch <sbatch arguments> <<EOF
#!/usr/bin/env bash
srun "${@}"
EOF
However, if I just use ${@}, the quoting is not preserved. The command could contain quotes (e.g. paths with spaces), and there could be multiple commands separated by semicolons (in that case the whole thing would be wrapped in quotes). I cannot really control what gets put in there by the combination of people and software that use the script.
So, how can I get the command (which could contain quotes, e.g. paths with spaces, and possibly multiple commands separated by semicolons) from A (script parameters) to B (variable) to C (heredoc) without messing up the quoting?
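To make the failure concrete, here is a minimal reproduction (a sketch, with plain bash standing in for sbatch/srun) of what the inner shell actually receives when the positional parameters are expanded inside an unquoted heredoc:

```shell
#!/usr/bin/env bash
# Minimal reproduction: the heredoc body is expanded by the *outer*
# shell, so the inner script receives flat text and the original
# word boundaries are lost.
set -- mkdir "foo bar"    # simulate: submit_job mkdir "foo bar"

bash <<EOF
echo "inner argc: \$#"    # escaped, so the inner shell expands it: 0
printf '<%s>\n' $@        # expanded by the outer shell; quotes gone
EOF
```

The inner shell reports zero arguments of its own, and the expanded `$@` arrives as three separate words (`<mkdir>`, `<foo>`, `<bar>`) instead of two.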
I assemble the scheduler commands using Bash arrays, as recommended in the oft-cited FAQ entry, and it works fine. However, while I can do this:
#!/usr/bin/env bash
run=( mkdir "foo bar" )
set -x
"${run[@]}"
set +x
and the directory foo bar is created, I cannot do this:
#!/usr/bin/env bash
run=( mkdir "foo bar" )
/usr/bin/env bash <<EOF
set -x
${run[@]}
"${run[@]}"
set +x
EOF
The first, unquoted line inside the heredoc creates two directories, foo and bar, because the expansion happens in the outer shell before the inner shell parses the script. The second, quoted line fails: the quote characters become literal text in the heredoc, so the inner shell tries, and fails, to run a single command literally named mkdir foo bar.
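For what it's worth, one direction that appears to survive the heredoc is re-escaping each word with bash's printf %q before expansion, so the inner shell re-parses the words with their original boundaries. This is a sketch, not a confirmed solution; it assumes the inner shell is also bash, since %q produces bash-style escapes:

```shell
#!/usr/bin/env bash
# Sketch: serialize the array into a single shell-quoted string.
run=( mkdir "foo bar" )
quoted=$(printf '%q ' "${run[@]}")   # e.g.: mkdir foo\ bar

# The heredoc now expands to properly escaped text, which the inner
# bash re-parses into the original words.
bash <<EOF
$quoted
EOF
```

After running this, a single directory named "foo bar" exists, rather than two directories foo and bar.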
So, what am I left with?
Thank you in advance