
I'm having some trouble getting the Nginx ingress controller working in my Kubernetes cluster. I have created the nginx-ingress deployments, services, roles, etc., according to https://kubernetes.github.io/ingress-nginx/deploy/

I also deployed a simple hello-world app that listens on port 8080:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hello-world
  namespace: default
spec:
  selector:
    matchLabels:
      name: hello-world
  template:
    metadata:
      labels:
        name: hello-world
    spec:
      containers:
      - name: hello-world
        image: myrepo/hello-world
        resources:
          requests:
            memory: 200Mi
            cpu: 150m
          limits:
            cpu: 300m
        ports:
          - name: http
            containerPort: 8080
            protocol: TCP

And created a service for it:

kind: Service
apiVersion: v1
metadata:
  namespace: default
  name: hello-world
spec:
  selector:
    app: hello-world
  ports:
    - name: server
      port: 8080

Finally, I created a TLS secret (my-tls-secret) and deployed the nginx ingress per the instructions. For example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: hello-world
  namespace: default
spec:
  rules:
    - host: hello-world.mydomain.com
      http:
        paths:
        - path: /
          backend:
            serviceName: hello-world
            servicePort: server
  tls:
      - hosts:
          - hello-world.mydomain.com
        secretName: my-tls-secret

However, I am unable to ever reach my application, and in the logs I see

W0103 19:11:15.712062       6 controller.go:826] Service "default/hello-world" does not have any active Endpoint.
I0103 19:11:15.712254       6 controller.go:172] Configuration changes detected, backend reload required.
I0103 19:11:15.864774       6 controller.go:190] Backend successfully reloaded.

I am not sure why it says Service "default/hello-world" does not have any active Endpoint. I have used a similar service definition for the traefik ingress controller without any issues.

I'm hoping I'm missing something obvious with the nginx ingress. Any help you can provide would be appreciated!
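
(For anyone hitting the same message: a quick way to see what the controller is complaining about is to check whether the Service actually has endpoints. A diagnostic sketch, assuming the resource names and the default namespace used above:)

```shell
# "<none>" under ENDPOINTS means the selector matched no ready Pods
kubectl get endpoints hello-world -n default

# compare the Service's selector with the labels actually on the Pods
kubectl get service hello-world -n default -o jsonpath='{.spec.selector}'
kubectl get pods -n default --show-labels
```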

3 Comments

  • Naming everything hello-world is a really nice way to get stuck later. (Jun 2, 2020)
  • I suppose it was not actually the connection of nginx to the service that failed, but the connection of the service to the pod; it is irrelevant which labels you use, as long as they match. (Feb 23, 2021)
  • This is one of those posts on SO where basically all the answers are right. (Aug 14, 2022)

6 Answers


I discovered what I was doing wrong. In my DaemonSet definition I was using name as my selector label:

  selector:
    matchLabels:
      name: hello-world
  template:
    metadata:
      labels:
        name: hello-world

Whereas in my service I was using app

  selector:
    app: hello-world

After updating my DaemonSet to use app as well, so that it matched the service selector, it worked:

  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
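
(The fix can be verified by querying Pods with the same selector the Service uses; names here follow the example above:)

```shell
# must return the hello-world Pods; an empty result means the labels
# still do not match the Service selector
kubectl get pods -n default -l app=hello-world

# the ENDPOINTS column should now list the Pod IPs
kubectl get endpoints hello-world -n default
```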

3 Comments

Exactly. I had a similar case of mismatched labels between the deployment and the service sections.
@cookandy Did you mean after updating the application definition to use app, it worked?
I might have a similar error. could you take a look here and tell me if I am doing same mistake? serverfault.com/questions/1168759/…

This can also happen when the ingress class of the ingress controller does not match the ingress class in the Ingress resource manifest used for your services.

Nginx installation command (short example, Helm 2 syntax):

  helm install stable/nginx-ingress \
  --name ${INGRESS_RELEASE_NAME} \
  --namespace ${K8S_NAMESPACE} \
  --set controller.scope.enabled=true \
  --set controller.scope.namespace=${K8S_NAMESPACE} \
  --set controller.ingressClass=${NGINX_INGRESS_CLASS}

Ingress resource spec (excerpt):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
  annotations:
    # the following line is not valid K8s or Helm syntax as written,
    # but shows that the two values must be the same
    kubernetes.io/ingress.class: ${NGINX_INGRESS_CLASS}
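
(One way to compare the two classes, sketched with placeholder names for the controller's namespace and deployment; adjust to your installation:)

```shell
# the class the controller watches (visible in its container args)
kubectl -n <controller-namespace> get deployment <controller-deployment> \
  -o jsonpath='{.spec.template.spec.containers[0].args}'

# the class requested by the Ingress resource (dots in the key are escaped)
kubectl get ingress hello-world -n default \
  -o jsonpath='{.metadata.annotations.kubernetes\.io/ingress\.class}'
```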



In our case, this was caused by defining the Ingress resource in a different namespace than the services.

kind: Ingress
apiVersion: networking.k8s.io/v1beta1
metadata:
  name: nginx-ingress-rules
  namespace: default       # <= must be the same namespace as the services you are trying to reach
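
(A quick cross-check using only standard kubectl: list Ingresses and Services across all namespaces and make sure each Ingress lives in the same namespace as the Services it references:)

```shell
kubectl get ingress --all-namespaces
kubectl get services --all-namespaces
```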



In my case, I included an id label in my Service selector that was missing from the Deployment's Pod template labels, and this prevented the endpoints controller from finding the correct Pods. Note that the reverse (an extra label on the Pods that does not appear in the selector) is fine, since the selector only needs to match a subset of a Pod's labels:

---
apiVersion: v1
kind: Service
metadata:
  name: some-service
spec:
  ports:
    - name: port-name
      port: 1234
      protocol: TCP
  selector:
    app: some-app
    id: "0"  ## include in both or neither



In my case, the deployment's containers that the service was pointing to were not running, due to an image pull error (ErrImagePull / ImagePullBackOff). Once I resolved the issue and the containers started running, the endpoint error went away.
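
(To check for this case, a sketch assuming the hello-world names from the question, with the Pod name as a placeholder:)

```shell
# Pods stuck in ErrImagePull / ImagePullBackOff are not Ready and are
# therefore excluded from the Service's endpoints
kubectl get pods -n default -l app=hello-world
kubectl describe pod <pod-name> -n default   # the Events section shows the pull error
```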



I had entered the wrong host domain in my Ingress resource spec.
