
I am trying to set up Kubernetes pods with the following file. The image is just the basic hello-world image from Docker Hub.

But I keep getting errors.

I created a deployment via the following command:

kubectl apply -f config.yaml

There are no errors and it says the deployment was created. But a few seconds later, when I try to see the pods via the following command, I get these errors.

kubectl get pod

Output

some-deployment-857fb6d98b-jzwhq   0/1     ImagePullBackOff   0         
some-deployment-857fb6d98b-ethfs   0/1     ImagePullBackOff   0          
some-deployment-857fb6d98b-w8hgt   0/1     ImagePullBackOff   0  

    

After a while, when I run the same get pod command, I get this:

some-deployment-6f88c9fd89-65c47   0/1     CrashLoopBackOff   8          
some-deployment-84d66585d4-k52ns   0/1     CrashLoopBackOff   9          
some-deployment-857fb6d98b-jzwhq   0/1     ErrImagePull       0          
some-deployment-857fb6d98b-w8hgt   0/1     ErrImagePull       0  

    

I changed to another image to see if the image was the issue.
The status keeps changing. Why do I have 4 pods now?

some-deployment-6f88c9fd89-65c47   0/1     Completed          8          
some-deployment-84d66585d4-k52ns   0/1     CrashLoopBackOff   8          
some-deployment-857fb6d98b-jzwhq   0/1     ImagePullBackOff   0          
some-deployment-857fb6d98b-w8hgt   0/1     ImagePullBackOff   0 

     

After a while, this:

some-deployment-6f88c9fd89-65c47   0/1     CrashLoopBackOff   8          
some-deployment-84d66585d4-k52ns   0/1     CrashLoopBackOff   9          
some-deployment-857fb6d98b-jzwhq   0/1     ImagePullBackOff   0          
some-deployment-857fb6d98b-w8hgt   0/1     ImagePullBackOff   0  

    

The issue seems to be with the image pull. The image is coming from the public Docker Hub in my case.
What is causing the image pull issue?

This is the file in use.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: networkchuckcoffee-deployment
  labels:
    app: nccoffee
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nccoffee
  template:
    metadata:
      labels:
        app: nccoffee
    spec:
      containers:
      - name: nccoffee
        image: hello-world # initially attempted this image which is also public on DockerHUB -> thenetworkchuck/nccoffee 
        imagePullPolicy: Always
        ports:
        - containerPort: 80

Pod logs (the same message appears for all pods):

Command Used:

kubectl logs some-deployment-857fb6d98b-w8hgt

--

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Describe output (not concerned even if there is sensitive data in here; I will be deleting this deployment afterwards):

Name:         some-deployment-857fb6d98b-jzwhq
Namespace:    default
Priority:     0
Node:         app-pool-8nid8/10.106.0.5
Start Time:   Sat, 06 Feb 2021 15:45:38 +0000
Labels:       app=nccoffee
              pod-template-hash=857fb6d98b
Annotations:  <none>
Status:       Running
IP:           10.244.1.96
IPs:
  IP:           10.244.1.96
Controlled By:  ReplicaSet/some-deployment-857fb6d98b
Containers:
  nccoffee:
    Container ID:   containerd://beb3b0ac0cd63abc1821e259c0fe24b8d8170bee68d50bffc5590c9154f07ead
    Image:          hello-world
    Image ID:       docker.io/library/hello-world@sha256:31b9c7d48790f0d8c50ab433d9c3b7e17666d6993084c002c2ff1ca09b96391d
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sat, 06 Feb 2021 15:56:40 +0000
      Finished:     Sat, 06 Feb 2021 15:56:40 +0000
    Ready:          False
    Restart Count:  7
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-n4n95 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-n4n95:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-n4n95
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  13m                  default-scheduler  Successfully assigned default/some-deployment-857fb6d98b-jzwhq to app-pool-8nid8
  Normal   Pulled     13m                  kubelet            Successfully pulled image "hello-world" in 802.715724ms
  Normal   Pulled     13m                  kubelet            Successfully pulled image "hello-world" in 766.964136ms
  Normal   Pulled     12m                  kubelet            Successfully pulled image "hello-world" in 790.211436ms
  Normal   Created    12m (x4 over 13m)    kubelet            Created container nccoffee
  Normal   Pulled     12m                  kubelet            Successfully pulled image "hello-world" in 794.431351ms
  Normal   Started    12m (x4 over 13m)    kubelet            Started container nccoffee
  Normal   Pulling    11m (x5 over 13m)    kubelet            Pulling image "hello-world"
  Normal   Pulled     11m                  kubelet            Successfully pulled image "hello-world" in 807.455498ms
  Warning  BackOff    3m1s (x47 over 13m)  kubelet            Back-off restarting failed container
  • Did you check that you did not reach the Docker Hub pull limit (from memory, 100 pulls per 6 hours as an anonymous user)? If you're operating the cluster from an enterprise network with a single outgoing IP, this can happen very quickly. Commented Feb 6, 2021 at 15:19
  • What logs does Kubernetes issue when you run kubectl logs? Commented Feb 6, 2021 at 15:20
  • @Zeitounator I'm pulling from a local network and have only run this file twice so far, so I guess that's about 6 pulls of the hello-world image, which is very small. Commented Feb 6, 2021 at 15:48
  • The size does not matter (and I'm really talking about the image ;)), only the number. Looking at the logs as @MargachChris suggests should give you more clues. Commented Feb 6, 2021 at 15:50
  • @MargachChris Added logs above for one of the pods, which is in CrashLoopBackOff status. All pods output the same log message. Commented Feb 6, 2021 at 15:51

1 Answer


CrashLoopBackOff is a perfectly normal state for the hello-world image. It's designed to print the message you saw in the logs and then exit; it is doing exactly what it was designed to do.

Kubernetes Deployments expect their containers to keep running. When a container exits, it is automatically restarted; if it keeps exiting as soon as it starts, it is assumed to be crashing, and hence you get CrashLoopBackOff. You need a container that will not exit; hello-world is not it.
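As an illustration (not your intended image), swapping in a long-running image such as nginx keeps the container alive and avoids CrashLoopBackOff; a minimal sketch of your manifest with that change:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: long-running-example   # hypothetical name for this sketch
spec:
  replicas: 1
  selector:
    matchLabels:
      app: long-running-example
  template:
    metadata:
      labels:
        app: long-running-example
    spec:
      containers:
      - name: web
        image: nginx            # serves HTTP indefinitely instead of printing and exiting
        ports:
        - containerPort: 80
```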

As others have mentioned, your problem with thenetworkchuck/nccoffee could well be that you've hit your pull rate limit on Docker Hub.

Finally, the reason you have four pods is that you are not deleting the failed ones; a Deployment will stay running until deleted, and each image change creates a new ReplicaSet whose old pods linger while the rollout is stuck. The ImagePullBackOff pods are most likely from the nccoffee image, and the CrashLoopBackOff ones from hello-world.
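One way to see where the mixed pods come from (a sketch; it requires access to your cluster, and the label and Deployment name are taken from the manifest in the question):

```
# Each image change creates a new ReplicaSet under the same Deployment;
# pods from old ReplicaSets can linger while the rollout never completes.
kubectl get replicasets -l app=nccoffee

# Inspect the rollout state and revision history for the Deployment
kubectl rollout status deployment/networkchuckcoffee-deployment
kubectl rollout history deployment/networkchuckcoffee-deployment
```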

kubectl get deployments will show what you have, and then you can use kubectl delete deployment <deployment name>.
Alternatively, kubectl delete -f <file.yaml> will delete what you created with kubectl apply -f <file.yaml> (or kubectl create -f ...).


1 Comment

Appreciate the detail in this answer; it certainly clarified my doubts. This, along with the comments above, triggered me to look in the right spot for the issue. The thenetworkchuck/nccoffee image was not working with the default :latest tag for some reason; adding its specific tag pulled the image successfully.
