
I have an off-the-shelf Kubernetes cluster running on AWS, installed with the kube-up script. I would like to run some containers that are in a private Docker Hub repository. But I keep getting a "not found" error:

 > kubectl get pod
NAME                      READY     STATUS                                        RESTARTS   AGE
maestro-kubetest-d37hr    0/1       Error: image csats/maestro:latest not found   0          22m

I've created a secret containing a .dockercfg file. I've confirmed it works by running the following commands:

 > kubectl get secrets docker-hub-csatsinternal -o yaml | grep dockercfg: | cut -f 2 -d : | base64 -D > ~/.dockercfg
 > docker pull csats/maestro
latest: Pulling from csats/maestro

I've confirmed I'm not using the new .dockercfg format; mine looks like this:

> cat ~/.dockercfg
{"https://index.docker.io/v1/":{"auth":"REDACTED BASE64 STRING HERE","email":"[email protected]"}}

I've tried running the Base64 encode on Debian instead of OS X, no luck there. (It produces the same string, as might be expected.)

Here's the YAML for my Replication Controller:

---
kind: "ReplicationController"
apiVersion: "v1"
metadata:
  name: "maestro-kubetest"
spec:
  replicas: 1
  selector:
    app: "maestro"
    ecosystem: "kubetest"
    version: "1"
  template:
    metadata:
      labels:
        app: "maestro"
        ecosystem: "kubetest"
        version: "1"
    spec:
      imagePullSecrets:
        - name: "docker-hub-csatsinternal"
      containers:
        - name: "maestro"
          image: "csats/maestro"
          imagePullPolicy: "Always"

      restartPolicy: "Always"
      dnsPolicy: "ClusterFirst"

kubectl version:

Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.3", GitCommit:"61c6ac5f350253a4dc002aee97b7db7ff01ee4ca", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.3", GitCommit:"61c6ac5f350253a4dc002aee97b7db7ff01ee4ca", GitTreeState:"clean"}

Any ideas?

4 Comments

  • In your example you're pulling two different images - did you try pulling maestro? Commented Sep 11, 2015 at 19:07
  • Good catch -- reran the command with the right image. Same result. Commented Sep 17, 2015 at 0:07
  • I'm experiencing the same problem. Did you find the solution? Commented Sep 18, 2015 at 23:30
  • If it's still useful to you two months later, yes I did. Heh. Commented Nov 14, 2015 at 0:29

4 Answers


Another possible reason why you might see "image not found" is if the namespace of your secret doesn't match the namespace of the container.

For example, if your Deployment yaml looks like

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mydeployment
  namespace: kube-system

Then you must make sure the Secret yaml uses a matching namespace:

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: kube-system
data:
  .dockerconfigjson: ****
type: kubernetes.io/dockerconfigjson

If you don't specify a namespace for your secret, it will end up in the default namespace and won't get used. There is no warning message. I just spent hours on this issue so I thought I'd share it here in the hope I can save somebody else the time.
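As a sketch of the fix (the secret name, credentials, and namespace below are placeholders), the secret can be created directly in the Deployment's namespace with kubectl, then verified:

```shell
# Create the registry secret in the same namespace as the Deployment.
# All names and credentials here are hypothetical.
kubectl create secret docker-registry mysecret \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=myuser \
  --docker-password=mypassword \
  [email protected] \
  --namespace=kube-system

# Confirm the secret landed in the expected namespace:
kubectl get secret mysecret --namespace=kube-system
```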


3 Comments

I have exactly the same issue here. The only difference is that I actually WANT the secret to be in the default namespace as I don't want to create it for all namespaces. Is there a way to reference to a secret the default namespace while the ingress is created inside a namespace?
Including type: kubernetes.io/dockerconfigjson was the missing secret sauce that none of the other answers around the internet seemed to include. Thanks for that!
@Randy one of the main purposes of namespaces in Kubernetes is secret isolation. Therefore, it is impossible for an application running in namespace A to read a secret in namespace B.

Docker generates a config.json file in ~/.docker/. It looks like this:

{
    "auths": {
        "index.docker.io/v1/": {
            "auth": "ZmFrZXBhc3N3b3JkMTIK",
            "email": "[email protected]"
        }
    }
}

What you actually want is:

{"https://index.docker.io/v1/": {"auth": "XXXXXXXXXXXXXX", "email": "[email protected]"}}

Note three things:

  1. There is no auths wrapping.
  2. There is https:// in front of the URL.
  3. It's one line.

Then base64-encode that string and use it as the data for the .dockercfg key:

apiVersion: v1
kind: Secret
metadata: 
  name: registry
data:
  .dockercfg: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==
type: kubernetes.io/dockercfg

Note again that the .dockercfg value must be a single line (base64 tends to produce a multi-line string).
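A quick way to get a single-line base64 string on any platform is to strip the newlines explicitly (the credentials below are fake, for illustration):

```shell
# One-line legacy .dockercfg payload (fake credentials for illustration).
DOCKERCFG='{"https://index.docker.io/v1/": {"auth": "ZmFrZXBhc3N3b3JkMTIK", "email": "[email protected]"}}'

# GNU base64 wraps its output at 76 columns by default; stripping newlines
# with tr keeps the encoded value on a single line, as the Secret data requires.
printf '%s' "$DOCKERCFG" | base64 | tr -d '\n'
echo
```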

2 Comments

You got me to the solution -- turns out my problem was that my secret was type: Opaque instead of type: kubernetes.io/dockercfg. Cheers.
It's worth noting that the trailing slash on the URL is also required; https://index.docker.io/v1 does not work. ;)

Another reason you might see this error is using a kubectl version different from the cluster version (e.g. kubectl 1.9.x against a 1.8.x cluster).

The format of the secret generated by the kubectl create secret docker-registry command has changed between versions.

A 1.8.x cluster expects a secret in this format:

{  
   "https://registry.gitlab.com":{  
      "username":"...",
      "password":"...",
      "email":"...",
      "auth":"..."
   }
}

But the secret generated by the 1.9.x kubectl has this format:

{  
   "auths":{  
      "https://registry.gitlab.com":{  
         "username":"...",
         "password":"...",
         "email":"...",
         "auth":"..."
      }
   }
}

So, double-check the value of the .dockercfg data in your secret and verify that it matches the format your Kubernetes cluster version expects.
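One way to see which format your cluster actually received is to decode the stored secret in place (the secret name regcred is a placeholder):

```shell
# Decode the stored .dockercfg payload to inspect its JSON shape.
# The backslash escapes the leading dot in the data key name.
kubectl get secret regcred -o jsonpath='{.data.\.dockercfg}' | base64 -d
```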

3 Comments

Thanks a lot, that really solved my problem. I fiddled around with minikube on Windows, and while Chocolatey installs the kubectl CLI at version 1.9 by default, minikube for Windows is still on 1.8. Always run kubectl version and double-check for matching versions.
This should be the accepted answer. MrE's answer is on the right track but it doesn't mention why the format is different. Thanks so much eschnou, you finally solved the problem that had me banging my head against the wall for 2 days.
I don't understand how the previous version/format can't also be supported. It's so easy to allow both and instead the internet is full of people struggling

I've been experiencing the same problem. What I noticed is that in the example (https://kubernetes.io/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod), .dockercfg has the following format:

{ 
   "https://index.docker.io/v1/": { 
     "auth": "ZmFrZXBhc3N3b3JkMTIK", 
     "email": "[email protected]" 
   } 
}

While the one generated by docker in my machine looks something like this:

{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "ZmFrZXBhc3N3b3JkMTIK",
            "email": "[email protected]"
        }
    }
}

By checking the source code, I found that there is actually a test for this use case (https://github.com/kubernetes/kubernetes/blob/6def707f9c8c6ead44d82ac8293f0115f0e47262/pkg/kubelet/dockertools/docker_test.go#L280).

I can confirm that if you take just the contents of "auths", as in the example, and encode that, it will work for you.

The documentation should probably be updated; I will raise a ticket on GitHub.
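The unwrapping step can be sketched as a small script (a hypothetical helper, with fake credentials) that takes the modern "auths"-wrapped config.json and produces the base64-encoded legacy payload for the Secret's data field:

```python
import base64
import json

def config_json_to_dockercfg_b64(config_json: str) -> str:
    """Convert Docker's modern config.json into a base64-encoded
    legacy .dockercfg payload (no "auths" wrapping, single line)."""
    config = json.loads(config_json)
    # Unwrap the "auths" object if present; the legacy format is the bare mapping.
    legacy = config.get("auths", config)
    one_line = json.dumps(legacy, separators=(",", ":"))
    return base64.b64encode(one_line.encode()).decode()

# Fake credentials, for illustration only.
modern = '{"auths": {"https://index.docker.io/v1/": {"auth": "ZmFrZXBhc3N3b3JkMTIK", "email": "[email protected]"}}}'
print(config_json_to_dockercfg_b64(modern))
```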

1 Comment

Hmm -- didn't solve my problem. I believe I tried both formats originally.
