
So I have something along the lines of:

set -eo pipefail
ssh localhost "ls; exit 1" | cat > output

I thought pipefail would prevent cat from writing to output (because of the nonzero exit code from the ssh session), but it seems it will not. I then tried creating a FIFO, but I'm unsure how to read the exit status of ssh without consuming the content of the FIFO, since it seems the write will block until something "consumes" it.

set -eo pipefail
[[ -e /tmp/sshdump ]] && rm /tmp/sshdump; mkfifo /tmp/sshdump;
ssh localhost "ls; exit 1" > /tmp/sshdump &
wait "$!" && cat /tmp/sshdump > output

So, how do I:

  1. stop the pipeline in the first example early when ssh fails, so its output never reaches cat?
  2. check the exit code of ssh without consuming the FIFO's content?

If it matters, a POSIX-compatible solution is always appreciated, but bash is fine too. In my particular case I'm using Windows and Git Bash, so maybe there are some quirks I'm not aware of.

EDIT: Full disclosure, I should probably have explained what I was trying to achieve with this from the get-go, rather than simplifying it to the point where the question doesn't properly reflect the intent. I was trying to pipe the output of "mysqldump" on a remote ssh shell directly to "wp db import" on my machine without creating intermediate files. I originally thought pipefail would stop the pipeline and thus prevent the import on my end from going through if there was a failure. This was desirable to me for the simplicity of not creating temporary files, and of course less code.

  • But the ssh session completed successfully, when the remote commands finished. The remote $STATUS is not brought back from the remote session. This sounds like an XY problem. What are you trying to accomplish, that this seems desirable? Commented Aug 1, 2024 at 22:55
  • Oh really? I was under the impression that the exit status of the session depended on the exit status of the remote shell in the case of ssh. I'm trying to redirect a stream to my computer, which is no problem, but I don't want to receive it if it returns any exit code other than 0. Commented Aug 2, 2024 at 0:07
  • Sorry to add yet another comment but according to the manual: "ssh exits with the exit status of the remote command or with 255 if an error occurred" - man7.org/linux/man-pages/man1/ssh.1.html#EXIT_STATUS Commented Aug 2, 2024 at 0:25

1 Answer


When the shell processes

ssh localhost "ls; exit 1" | cat > output

it sets up the redirection and pipe before actually running any of the commands. So output is created or truncated before cat runs.

There’s another gotcha here: pipefail only changes how the exit status of a pipeline is determined; it doesn’t control execution within the pipeline, even with set -e. All the commands in a pipeline are started simultaneously (or close enough); they’re not run sequentially. This means that cat runs alongside ssh, and will in most cases have processed the output from ls before the session exits with code 1.
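A quick way to see this for yourself (a minimal demonstration, using a command group in place of ssh):

```shell
#!/bin/bash
# With pipefail, the pipeline's exit status comes from the failing
# command, but cat still runs concurrently and writes whatever data
# arrived before the failure.
set -o pipefail
{ echo hello; exit 1; } | cat > output
echo "pipeline status: $?"   # prints 1, taken from the left-hand side
cat output                   # prints "hello": cat ran regardless
rm -f output
```

So the nonzero status is reported correctly, it just arrives too late to stop cat from writing.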

One way to do what you’re after is to store the output somewhere and only process it if the ssh command succeeds:

set -o errexit # the more self-explanatory form of set -e
tmp=$(mktemp ./.tmpXXXXXX) # temp file preferably in same directory
                           # as final file (here .) so the mv is more 
                           # likely to be a straight atomic rename.
trap 'rm -f -- "$tmp"' EXIT INT HUP ALRM TERM
ssh localhost "ls; exit 1" > "$tmp"
mv -- "$tmp" output

Beware that mktemp creates files readable and writable by the owner only, so you may want to adjust the permissions afterwards.
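For instance (0644 here is just an illustrative target mode; use whatever the final file should end up with):

```shell
#!/bin/sh
tmp=$(mktemp ./.tmpXXXXXX)
# mktemp created "$tmp" with mode 0600; widen it to 0644 before the
# rename so the final file is readable by other users too.
chmod 644 -- "$tmp"
mv -- "$tmp" output
```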

The ksh93 shell has a built-in way to do that:

ssh localhost "ls; exit 1" >; output

It does the same thing as above and also takes care of restoring the original permissions of the file (if it was already there at the start) or honouring the umask if it didn't exist beforehand.

Another benefit of writing into a separate file, renamed to the target at the end, is that the output file is created or replaced atomically. At no point would another process trying to open output see it unfinished.

  • I do admit I really did think every command in a pipe ran sequentially, so that certainly explains things! Thanks for clearing up my misunderstanding about pipefail and also the advice about using trap. I still see no good way of avoiding intermediate files, which could be a limiting factor if the transfers are large enough, but I suppose there are always lower-level languages for such cases. Commented Aug 2, 2024 at 14:15
