
I'm trying to debug a CI pipeline and want to create a custom logger stage that dumps a bunch of information about the environment in which the pipeline is running.

I tried adding this:

stages:
    - logger

logger-commands:
    stage: logger
    allow_failure: true
    script:
        - echo 'Examining environment'
        - echo PWD=$(pwd) Using image ${CI_JOB_IMAGE}
        - git --version
        - echo --------------------------------------------------------------------------------
        - env
        - echo --------------------------------------------------------------------------------
        - npm --version
        - node --version
        - java -version
        - mvn --version
        - kaniko --version
        - echo -------------------------------------------------------------------------------- 

The problem is that the Java command is failing because java isn't installed. The error says:

/bin/sh: eval: line 217: java: not found

I know I could remove the java -version line, but I'm trying to come up with a canned logger stage that I could reuse in all my CI pipelines. It would probe for Java, Maven, Node, npm, Python, and whatever else I want to include, and I realize some of those commands will fail because they aren't installed in every image.

Searching for a solution got me close with this pattern:

./script_that_fails.sh > /dev/null 2>&1 || FAILED=true

if [ "$FAILED" ]; then
    ./do_something.sh
fi
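As a runnable sanity check of the pattern above (using false as a stand-in for script_that_fails.sh, and an echo in place of do_something.sh):

```shell
# Stand-in for ./script_that_fails.sh: `false` always exits non-zero.
false > /dev/null 2>&1 || FAILED=true

# Quoting $FAILED keeps the test valid even when the variable is unset.
if [ "$FAILED" ]; then
    echo "recovery step would run here"
fi
```

The || only fires when the command fails, so FAILED stays unset on success and the if-branch is skipped.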

So that is helpful, but my question is this:

Is there anything built into GitLab's CI pipeline syntax (or shell syntax) that allows all commands in a given step to run even if one of them fails?

        - npm --version || echo npm failed
        - node --version || echo node failed
        - java -version || echo java failed

That syntax is a little cleaner, but I'm trying to make it simpler still.

2 Comments
  • Does this answer your question? Is it possible to allow for a script in a CI/CD job to fail? Commented Apr 19, 2021 at 4:38
  • GitLab CI relies entirely on the return code of the executed command. The only workaround is to make the command return a successful status - in your case, probably something like this: java -version || echo "No java installed" Commented Apr 19, 2021 at 4:41

1 Answer


The suggestions already mentioned are good, but I was looking for something simpler, so I wrote the following shell script. The script always returns a zero exit code, so the CI pipeline always treats the command as successful.

If the command did fail, the command is printed along with the non-zero exit code.

#!/bin/sh
# File: runit
# Run the given command; report (but ignore) any failure.
"$@"
EXITCODE=$?
if [ $EXITCODE -ne 0 ]; then
    echo "CMD: $*"
    echo "Ignored exit code ($EXITCODE)"
fi
exit 0

Testing it as follows:

./runit ls "/bad dir"
echo "ExitCode = $?"

Gives this output:

ls: cannot access /bad dir: No such file or directory
CMD: ls /bad dir
Ignored exit code (2)
ExitCode = 0

Notice that even though the command failed, the final ExitCode = 0 is what the CI pipeline will see.

To use it in the pipeline, that shell script has to be available inside the CI runner job; I'll research the cleanest way to include it. For example,

stages:
  - logger-safe

logger-safe-commands:
  stage: logger-safe
  allow_failure: true
  script:
    - ./runit npm --version
    - ./runit java -version
    - ./runit mvn --version

I don't like this solution because it requires an extra file in the repo, but it is in the spirit of what I'm looking for. So far the simplest built-in option is:

    - some_command || echo command failed $?
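One way to avoid the extra file (a sketch I have not verified on a real runner) is to declare runit as a shell function instead of a script, for example in before_script, so every script line can use it without shipping anything in the repo:

```shell
# Same logic as the runit script above, written as a shell function.
runit() {
    "$@"
    EXITCODE=$?
    if [ $EXITCODE -ne 0 ]; then
        echo "CMD: $*"
        echo "Ignored exit code ($EXITCODE)"
    fi
    return 0
}

# Example: `false` exits with code 1, but runit reports and swallows it.
runit false
```

In .gitlab-ci.yml the function body would need to be collapsed onto one line (or written with a YAML block scalar) inside before_script, after which script lines can read - runit java -version; the YAML quoting of the braces takes some care.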

1 Comment

Nice! Helped me a lot :)
