
I have created a Kubernetes cluster with one master node and two worker nodes on Ubuntu VMs using MicroK8s. Currently I have to prefix every kubectl command with microk8s, for example: microk8s kubectl get nodes

To use kubectl directly, I ran the following command on all three nodes: sudo snap alias microk8s.kubectl kubectl

On the master node it works fine:

root@controller:/home/kundurusrikanthreddy9# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
worker-2     Ready    worker   22m   v1.26.1
controller   Ready    master   26m   v1.26.1
worker-1     Ready    worker   22m   v1.26.1

But running the same command on a worker node produces this output:

kundurusrikanthreddy9@worker-1:~$ sudo microk8s kubectl get node
This MicroK8s deployment is acting as a node in a cluster.
Please use the microk8s kubectl on the master.

How can I resolve this issue? Can anyone help me with this?

I am expecting either a resolution or an explanation of what is causing it.

  • I wouldn't expect to ever log in directly to the nodes. Stack Overflow is specifically about programming questions, however; the help center describes what is on-topic here. This sort of system-administration question might be better asked on another site like Server Fault or DevOps. Commented Mar 16, 2023 at 11:12
  • kubectl is a command-line tool for talking to the control plane. The API server that responds to kubectl listens on port 8443 or 6443. The master knows the state of the cluster; the workers don't. If you want every worker to respond to kubectl, you should promote them all to masters. Commented Sep 24, 2023 at 6:09

2 Answers


You probably can't connect from a worker node using kubectl because you don't have valid client credentials there. You can use the kubeconfig that kubelet uses on the node to authenticate with the API server, since that is what kubelet itself authenticates with, or you can create your own kubeconfig file:

https://devopscube.com/kubernetes-kubeconfig-file/
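A minimal sketch of the second approach, assuming you can copy files from the master to the worker; the hostnames, user, and paths below are illustrative, not from the question:

```shell
# On the master: export a kubeconfig with the cluster's address
# and credentials (microk8s provides this subcommand).
microk8s config > /tmp/microk8s.kubeconfig

# Copy it to the worker (adjust user/host to your setup).
scp /tmp/microk8s.kubeconfig kundurusrikanthreddy9@worker-1:~/.kube/config

# On the worker: point kubectl at the copied kubeconfig.
export KUBECONFIG=$HOME/.kube/config
kubectl get nodes
```

One caveat: microk8s config writes the API server address as seen from the master; if the worker reaches the master via a different IP or hostname, edit the server: field in the copied file accordingly.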

Comments


This behavior is specific to the kubectl wrapper shipped in the microk8s snap: it is hard-coded to bail out on any kubectl call made from a node other than the master. I can't speak to why this behavior was chosen.

I recommend using an independent kubectl binary (with a kubeconfig pointing to your API server, of course). The official docs don't raise any issues with this approach (link).
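A sketch of that setup on a worker node, assuming an amd64 Ubuntu VM and that you already have a kubeconfig for the cluster; the pinned version and paths are illustrative:

```shell
# Download a standalone kubectl binary from the official release
# mirror (pin a version close to your cluster's, e.g. v1.26.1).
curl -LO "https://dl.k8s.io/release/v1.26.1/bin/linux/amd64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin/kubectl

# Point it at a kubeconfig for your apiserver and verify.
export KUBECONFIG=$HOME/.kube/config
kubectl get nodes
```

This sidesteps the snap wrapper's master-only check entirely, since the standalone binary has no knowledge of MicroK8s.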

In my own (purely anecdotal) experience, this represents a more typical setup. My work environment includes a few independent clusters ... each with its own unique method of authentication, administration, security policies, etc. I communicate with them from a local kubectl on a development server (which coincidentally resides on one of the clusters).

