
I have a shell script that I use on GitLab CI to run a supplied command. The issue is that when the command fails, it doesn't display the error; the job just says ERROR: Job failed: exit code 1. When I ran the script locally, it did output the failed command's error. Is there a way I could force it to display the error through my script before it exits the job?

The relevant part of my script:

output="$( (cd "$FOLDER"; eval "$@") 2>&1 )"
if [ "$output" ]; then
    echo -e "$output\n"
fi
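A likely explanation (an assumption about the CI shell, not stated in the question): GitLab runs job scripts so that any failing line aborts the job, much like set -e. Under errexit, the failing command substitution kills the script before the echo that would have shown the captured output. A minimal reproduction sketch:

```shell
# Reproduction sketch: under errexit, the failing assignment aborts the
# script, so the echo that would show the captured output never runs.
rc=0
out="$(sh -c '
    set -e
    output="$( (cd "."; eval "false") 2>&1 )"
    echo "$output"    # never reached: the assignment above fails first
')" || rc="$?"
# rc is nonzero and out is empty -- the error text was swallowed
```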
  • Locally I am using bash since I am on Ubuntu, but on the CI it's running on an Alpine Docker image, so it's ash. Commented Jan 24, 2018 at 10:30
  • Hey, glad to say that your suggestion led me to a solution. This is what I came up with: output=$(cd "$FOLDER"; eval "$@") || code="$?". Then I just did checks on $code. You should do the honors of posting an answer. I could also give you the working code snippet to put in the answer. Commented Jan 26, 2018 at 4:20

1 Answer


One way to trap an error that works in every shell is to combine two commands with a logical OR, ||.

Catch an error from a subshell:

output="$( (cd "$FOLDER"; eval "$@") 2>&1 )" || errorcode="$?"

This saves the error code from the previous command if it fails.
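Combining this capture with a check on the saved code lets the script print the output before failing the job. A minimal sketch, with FOLDER and "$@" as in the question and true standing in for the real command:

```shell
# Capture output and exit code; on failure, print the output so the CI
# log shows what went wrong, then exit with the original code.
FOLDER="."
set -- true                   # hypothetical command for illustration
errorcode=0
output="$( (cd "$FOLDER"; eval "$@") 2>&1 )" || errorcode="$?"
if [ "$errorcode" -ne 0 ]; then
    printf '%s\n' "$output"
    exit "$errorcode"
fi
```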

Exit with your own error code if an important command fails:

output="$( (cd "$FOLDER"; eval "$@") 2>&1 )" || exit 12

For more complex things, one can define a function to be called after the OR:

handle_error() {
    # do stuff
}

output="$( (cd "$FOLDER"; eval "$@") 2>&1 )" || handle_error
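A fuller sketch of such a handler (the handler body here is our own, not from the answer): it prints the captured output and fails the job with the command's original exit code.

```shell
# Handler that surfaces the failure in the CI log before exiting.
handle_error() {
    code="$?"                  # exit code of the failed command
    echo "Command failed with exit code $code:"
    printf '%s\n' "$output"
    exit "$code"
}

FOLDER="."
set -- echo ok                 # hypothetical command for illustration
output="$( (cd "$FOLDER"; eval "$@") 2>&1 )" || handle_error
```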