
I'm running a Kubernetes Job from a Jenkins pipeline and want to stream its logs until completion.

I currently use this pattern in a Bash script:

job_status_cmd_complete="kubectl get job ${kube_job_name} --namespace=${kube_namespace} -o jsonpath={.status.conditions[?(@.type=='Complete')].status}"
job_status_cmd_failed="kubectl get job ${kube_job_name} --namespace=${kube_namespace} -o jsonpath={.status.conditions[?(@.type=='Failed')].status}"

function jobIsFinished {
  if [[ $($job_status_cmd_complete) == 'True' || $($job_status_cmd_failed) == 'True' ]]; then
    echo 'True'
  else
    echo 'False'
  fi
}
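As an aside, storing the kubectl command in a string and expanding it unquoted is fragile: the `[?(...)]` filter can be glob-expanded against files in the working directory. A minimal sketch of the same check as a plain function (the name `job_is_finished` is mine) that returns an exit status instead of echoing a string:

```shell
# Returns 0 (success) if the Job has a Complete or Failed condition
# with status True, non-zero otherwise.
job_is_finished() {
  local job=$1 ns=$2 complete failed
  complete=$(kubectl get job "$job" --namespace="$ns" \
    -o jsonpath="{.status.conditions[?(@.type=='Complete')].status}")
  failed=$(kubectl get job "$job" --namespace="$ns" \
    -o jsonpath="{.status.conditions[?(@.type=='Failed')].status}")
  [ "$complete" = "True" ] || [ "$failed" = "True" ]
}

# Usage:
#   until job_is_finished "$kube_job_name" "$kube_namespace"; do sleep 2; done
```

Because the function takes the job name and namespace as arguments and returns an exit status, it plugs directly into `until`/`while` without the extra string comparison.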

# First log stream — immediately after pod is ready
kubectl logs --tail=-1 --follow --selector=job-name="$kube_job_name" --namespace="$kube_namespace" -c "$kube_job_name"

# Loop to check for completion
while [[ $(jobIsFinished) != 'True' ]]; do
  # Second log stream to catch more output in case the job is still running
  kubectl logs --tail=100 --follow --selector=job-name="$kube_job_name" --namespace="$kube_namespace" -c "$kube_job_name"
done
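An alternative that avoids two overlapping streams entirely is to run a single kubectl logs --follow in the background and poll the Job's conditions in the foreground, stopping the stream once a terminal condition appears. A sketch, assuming as in the script above that the container name matches the job name (the function name `stream_and_wait` is mine):

```shell
# Stream logs once while waiting for the Job to reach a terminal state.
stream_and_wait() {
  local job=$1 ns=$2

  # Single log stream for all pods of the Job, started in the background.
  kubectl logs --tail=-1 --follow --selector=job-name="$job" --namespace="$ns" -c "$job" &
  logs_pid=$!

  # Poll until any condition (Complete or Failed) reports status True,
  # instead of relying on a second, duplicating log stream.
  while [ -z "$(kubectl get job "$job" --namespace="$ns" \
      -o jsonpath="{.status.conditions[?(@.status=='True')].type}")" ]; do
    sleep 2
  done

  # Stop the follower; it would otherwise keep the pipeline step alive.
  kill "$logs_pid" 2>/dev/null
  wait "$logs_pid" 2>/dev/null || true
}
```

`kubectl wait --for=condition=complete job/$job` would be more declarative, but it blocks until its timeout when the Job fails instead of completing, so a poll over both conditions is simpler to reason about here.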

Why I do this:

  1. The first kubectl logs --follow is to get logs as soon as the pod is ready.
  2. The second one in the loop is a safeguard to continue watching if the job is long-running or restarts.
  3. This was intended to ensure I don't miss logs if the first stream gets disconnected or if the pod is slow to start logging.

Issue:

  1. This approach sometimes causes duplicate logs, especially in Jenkins output.

  2. It's not clear whether --follow maintains a log offset across invocations or restarts from the beginning on each call.

Is kubectl logs --follow stateless and prone to duplication if used multiple times?

  • Configuring a log collector will probably be more reliable than trying to manage all of the possible cases of kubectl logs; see Logging Architecture in the Kubernetes documentation. Configuring this is more of a system-administration question than a programming one. Yes, kubectl logs is stateless and prone to repeating content. Commented Aug 6 at 15:58
  • Thanks for the answer. What I see is that after kubectl logs --tail=-1 ... the job status isn't set yet, so the while loop starts duplicating logs (kubectl logs --tail=100 ...). A solution is to wait for the status first: while [[ -z "$(kubectl get job ${kube_job_name} --namespace=${kube_namespace} -o jsonpath='{.status.conditions}')" ]]; do counter=$((counter + 1)); echo "Waiting... ($counter)"; sleep 1; done. What do you think? Commented Aug 7 at 5:18
  • or a sleep 2 :D Commented Aug 7 at 8:03
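The workaround discussed in the comments above, cleaned up and bounded so a Job that never publishes conditions can't hang the pipeline forever, might look like this (the function name `wait_for_conditions` and the 300-iteration limit are my additions):

```shell
# Wait (up to ~5 minutes) for the Job to publish any status condition
# before re-entering the log loop; avoids duplicating logs while the
# status is still empty.
wait_for_conditions() {
  local job=$1 ns=$2 counter=0
  while [ -z "$(kubectl get job "$job" --namespace="$ns" \
      -o jsonpath='{.status.conditions}')" ]; do
    counter=$((counter + 1))
    if [ "$counter" -gt 300 ]; then
      return 1   # give up after ~5 minutes
    fi
    echo "Waiting... ($counter)"
    sleep 1
  done
  return 0
}
```

Note this only delays the second stream; because kubectl logs is stateless, a long-running Job that outlives the first --follow will still replay whatever --tail allows when the loop restarts the stream.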
