248

I am trying to see how much memory and CPU a Kubernetes pod uses. I ran the following command for this:

kubectl top pod podname --namespace=default

I am getting the following error:

W0205 15:14:47.248366    2767 top_pod.go:190] Metrics not available for pod default/podname, age: 190h57m1.248339485s
error: Metrics not available for pod default/podname, age: 190h57m1.248339485s
  1. What do I do about this error? Is there any other way to get CPU and memory usage of the pod?
  2. I saw the sample output of this command which shows CPU as 250m. How is this to be interpreted?

  3. Do we get the same output if we enter the pod and run the linux top command?

1 Comment
If you run top inside the pod, it will be as if you ran it on the host system, because the pod is using the host system's kernel. stackoverflow.com/a/51656039/429476 Commented Jul 14, 2020 at 10:04

19 Answers

240

CHECK WITHOUT METRICS SERVER or ANY THIRD PARTY TOOL


If you want to check a pod's CPU/memory usage without installing any third-party tool, you can read it from the pod's cgroup.

  1. Exec into the pod: kubectl exec -it pod_name -n namespace -- /bin/bash
  2. Run cat /sys/fs/cgroup/cpu/cpuacct.usage for CPU usage
  3. Run cat /sys/fs/cgroup/memory/memory.usage_in_bytes for memory usage

Make sure you have added a resources section (requests and limits) to the deployment, so that the cgroup limits are set and the container respects the limits defined at the pod level.

NOTE: memory.usage_in_bytes reports memory in bytes and cpuacct.usage reports cumulative CPU time in nanoseconds. These values change frequently as the pod's workload varies.
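A rough sketch for turning cpuacct.usage into a CPU percentage: the counter is cumulative, so sample it twice and look at the delta (this assumes cgroup v1 paths and that awk is available in the container image):

t1=$(cat /sys/fs/cgroup/cpu/cpuacct.usage); sleep 1
t2=$(cat /sys/fs/cgroup/cpu/cpuacct.usage)
# nanoseconds of CPU time used in a 1-second window, as a percentage of one core
awk -v d=$((t2 - t1)) 'BEGIN { printf "%.2f%% of one core\n", d / 10000000 }'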


10 Comments

How can I calculate the percentage of CPU used from this value? Or is there a way to determine what percentage of the CPU allocated to a pod/deployment is being used?
For copy paste: cat /sys/fs/cgroup/memory/memory.usage_in_bytes & cat /sys/fs/cgroup/cpu/cpuacct.usage
You can use cat /sys/fs/cgroup/memory/memory.usage_in_bytes | numfmt --to=iec to get the numbers in KiB/MiB/GiB.
How should you interpret the CPU value? What is the unit?
Another way to get the MB usage if you don't have numfmt available: cat /sys/fs/cgroup/memory/memory.usage_in_bytes | awk '{ foo = $1 / 1024 / 1024 ; print foo "MB" }'
236

kubectl top pod <pod-name> -n <namespace> --containers

FYI, this is on v1.16.2
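If you want to see all pods at once, sorted by usage, something like this should work (support for the --sort-by flag on top depends on your kubectl version):

kubectl top pods -A --sort-by=memory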

5 Comments

I understand that metrics server must first be installed: $ kubectl top pod mypod -n mynamespace --containers Error from server (NotFound): podmetrics.metrics.k8s.io "mynamespace/mypod" not found
@user9074332, Yes you need metrics server installed first. You can do so by executing following commands: wget https://raw.githubusercontent.com/pythianarora/total-practice/master/sample-kubernetes-code/metrics-server.yaml kubectl create -f metrics-server.yaml
"kubectl get pods --namespace product | grep Running | awk '{print $1}' | kubectl top pod $1 --namespace product --containers" for overall output instead of running for each pod
not adding namespace will give pod not found error
Use watch if you want to execute the top command periodically. Example for watch interval of 5 sec : watch -n5 kubectl top pod <pod-name> -n <namespace-name> --containers . watch man page
84

Use k9s for a super easy way to check all your resources' cpu and memory usage.


4 Comments

Why don't I see the CPU and MEM columns in k9s? Is it a limitation of DigitalOcean kubernetes?
@deed02392 not sure - try expanding your terminal?
@deed02392 You need to install metrics-server to have that available: github.com/kubernetes-sigs/metrics-server
Any idea on how to configure k9s to display Mi for memory ?
56
  1. As described in the docs, you should install metrics-server

  2. 250m means 250 millicores (milliCPU). The CPU resource is measured in CPU units; in Kubernetes, 1 CPU unit is equivalent to:

    • 1 AWS vCPU
    • 1 GCP Core
    • 1 Azure vCore
    • 1 Hyperthread on a bare-metal Intel processor with Hyperthreading

    Fractional values are allowed. A Container that requests 0.5 CPU is guaranteed half as much CPU as a Container that requests 1 CPU. You can use the suffix m to mean milli. For example 100m CPU, 100 milliCPU, and 0.1 CPU are all the same. Precision finer than 1m is not allowed.

    CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine. (See the requests/limits example after this list.)

  3. No. kubectl top pod podname shows metrics for a given pod. Linux top and free run inside a container and report metrics based on what the virtual filesystem /proc/ exposes; they are not aware of the cgroup the container runs in.

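Requests and limits use the same millicore notation mentioned in point 2. A minimal sketch of setting them from the command line (the deployment name and values here are hypothetical):

kubectl set resources deployment/nginx --requests=cpu=250m,memory=64Mi --limits=cpu=500m,memory=128Mi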

2 Comments

For the 3rd point, the link you gave says that running top inside the pod is the same as running it on the host system. But when I tried it, the outputs don't match.
Actually that statement is wrong; it does not report the same thing, but they work the same way. The main difference is that the contents of the /proc/ filesystem in the container differ from the host's, so the results won't be the same. I've added another link with more detailed information.
49

A quick way to check CPU/Memory is by using the following kubectl command. I found it very useful.

kubectl describe PodMetrics <pod_name>

replace <pod_name> with the pod name you get by using

kubectl get pod
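If the describe output looks sparse, the same object can also be fetched directly (this still requires metrics-server; the podmetrics resource is served by the metrics.k8s.io API):

kubectl get podmetrics <pod_name> -o yaml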

3 Comments

error: the server doesn't have a resource type "PodMetrics"
@JRichardsz you need to install the k8s metrics server first kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Can you believe that I ran the describe command on a pod but no details related to memory or CPU usage were displayed? Does some specific feature need to be enabled?
36

You need to run the metrics server to make the commands below work with correct data:

  1. kubectl get hpa
  2. kubectl top node
  3. kubectl top pods

Without the metrics server, go into the pod by running the commands below:

  1. kubectl exec -it pods/{pod_name} -- sh
  2. cat /sys/fs/cgroup/memory/memory.usage_in_bytes

You will get the pod's memory usage in bytes.

1 Comment

To add, there should also be a file that tells you the memory limit: /sys/fs/cgroup/memory/memory.limit_in_bytes. With these files you can calculate the memory usage percentage of that pod. Preferably have a script on the pod itself calculate the percentage and write it to a file; then it is as simple as kubectl exec pod/<pod_name> -- cat <memory_load_percentage_file> to get its memory load.
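A minimal sketch of that calculation, run inside the pod (it assumes cgroup v1 paths and that a memory limit is actually set; without a limit, memory.limit_in_bytes holds a huge "unlimited" value):

u=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
l=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
awk -v u="$u" -v l="$l" 'BEGIN { printf "memory: %.1f%% of limit\n", 100 * u / l }'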
19

Not sure why it's not here

  1. To see all pods with time alive - kubectl get pods --all-namespaces
  2. To see memory and CPU - kubectl top pods --all-namespaces

1 Comment

It's not there because the metrics API server is simply not installed by default on kubernetes, at least not on vanilla.
13

If you use Prometheus operator or VictoriaMetrics operator for Kubernetes monitoring, then the following PromQL queries can be used for determining per-container, per-pod and per-node resource usage:

  • Per-container memory usage in bytes:
sum(container_memory_usage_bytes{container!~"POD|"}) by (namespace,pod,container)
  • Per-container CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!~"POD|"}[5m])) by (namespace,pod,container)
  • Per-pod memory usage in bytes:
sum(container_memory_usage_bytes{container!=""}) by (namespace,pod)
  • Per-pod CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (namespace,pod)
  • Per-node memory usage in bytes:
sum(container_memory_usage_bytes{container!=""}) by (node)
  • Per-node CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (node)
  • Per-node memory usage percentage:
100 * (
  sum(container_memory_usage_bytes{container!=""}) by (node)
    / on(node)
  kube_node_status_capacity{resource="memory"}
)
  • Per-node CPU usage percentage:
100 * (
  sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (node)
    / on(node)
  kube_node_status_capacity{resource="cpu"}
)

5 Comments

Okay so I tried per-pod memory usage in bytes and it doesn't compare with what is reported by kubectl top or manually adding the memory usage after getting it via api.
@AnupamSrivastava, it would be great if you could provide an example pod with its memory usage reported by the query and memory usage returned by kubectl top?
Underrated answer. Plug and play for a meaningful grafana dashboard
hi @valyala, is there a way to calculate CPU usage percentage by container on each pod? I tried this but does not work ` 100 * ( sum(rate(container_cpu_usage_seconds_total{container="my_name"}[5m])) by (pod) / on(pod) kube_node_status_capacity{resource="cpu"} ) `
@tandathuynh148, try rate(container_cpu_usage_seconds_total{container!~"|POD"}[5m]) / on(node) group_left() kube_node_status_capacity{resource="cpu"}
7

As Heapster is deprecated and will not receive any future releases, you should install metrics-server instead.

You can install metrics-server in following way:

  1. Clone the metrics-server github repo: git clone https://github.com/kubernetes-incubator/metrics-server.git

Edit the deploy/1.8+/metrics-server-deployment.yaml file and add the following flags to the command section:

- command:
     - /metrics-server
     - --metric-resolution=30s
     - --kubelet-insecure-tls
     - --kubelet-preferred-address-types=InternalIP
  2. Run the following command: kubectl apply -f deploy/1.8+

This will install everything the metrics server needs.

For more info, please have a look at my following answer:

How to Enable KubeAPI server for HPA Autoscaling Metrics

Comments

7

An alternative approach without having to install the metrics server.

It requires crictl to be installed on the worker nodes where the pods are running. There is a Kubernetes task for this in the official docs.

Once you have installed it properly you can use the commands below. (I had to use sudo in my case, but it may not be required, depending on your Kubernetes cluster install.)

  1. Find the container ID of the pod: sudo crictl ps
  2. Use stats to get CPU and RAM: sudo crictl stats <CONTAINERID>

Sample output for reference:

CONTAINER           CPU %               MEM                 DISK                INODES
873f04b6cef94       0.50                54.16MB             28.67kB             8
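If you don't want to pick the container ID out by eye, something along these lines should go from pod name to stats (the --name and --pod filter flags are assumptions; check crictl --help on your node):

POD_ID=$(sudo crictl pods --name pod_name -q)
sudo crictl ps --pod "$POD_ID" -q | xargs -n1 sudo crictl stats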

1 Comment

Worth noting that this assumes Kubernetes is running containerd, not the dockerd runtime used by earlier releases.
6

In my use case I wanted to aggregate memory/CPU usage per namespace, to see how heavy or lightweight a Harbor system running in my small K3s cluster would be, so I wrote this Python script using the kubernetes Python client:

from kubernetes import client, config
import matplotlib.pyplot as plt
import pandas as pd

def cpu_n(cpu_str: str):
    if cpu_str == "0":
        return 0.0
    assert cpu_str.endswith("n")
    return float(cpu_str[:-1])

def mem_Mi(mem_str: str):
    if mem_str == "0":
        return 0.0
    assert mem_str.endswith("Ki") or mem_str.endswith("Mi")
    val = float(mem_str[:-2])
    if mem_str.endswith("Ki"):
        return val / 1024.0
    if mem_str.endswith("Mi"):
        return val

config.load_kube_config()
api = client.CustomObjectsApi()
v1 = client.CoreV1Api()
cpu_usage_pct = {}
mem_usage_mb = {}
namespaces = [item.metadata.name for item in v1.list_namespace().items]
for ns in namespaces:
    resource = api.list_namespaced_custom_object(group="metrics.k8s.io", version="v1beta1", namespace=ns, plural="pods")
    cpu_total_n = 0.0
    mem_total_Mi = 0.0
    for pod in resource["items"]:
        for container in pod["containers"]:
            usage = container["usage"]
            cpu_total_n += cpu_n(usage["cpu"])
            mem_total_Mi += mem_Mi(usage["memory"])
    if mem_total_Mi > 0:
        mem_usage_mb[ns] = mem_total_Mi
    if cpu_total_n > 0:
        cpu_usage_pct[ns] = cpu_total_n * 100 / 10**9

df_mem = pd.DataFrame({"ns": mem_usage_mb.keys(), "memory_mbi": mem_usage_mb.values()})
df_mem.sort_values("memory_mbi", inplace=True)

_, [ax1, ax2] = plt.subplots(2, 1, figsize=(12, 12))

ax1.barh("ns", "memory_mbi", data=df_mem)
ax1.set_ylabel("Namespace", size=14)
ax1.set_xlabel("Memory Usage [MBi]", size=14)
total_memory_used_Mi = round(sum(mem_usage_mb.values()))
ax1.set_title(f"Memory usage by namespace [{total_memory_used_Mi}Mi total]", size=16)

df_cpu = pd.DataFrame({"ns": cpu_usage_pct.keys(), "cpu_pct": cpu_usage_pct.values()})
df_cpu.sort_values("cpu_pct", inplace=True)
ax2.barh("ns", "cpu_pct", data=df_cpu)
ax2.set_ylabel("Namespace", size=14)
ax2.set_xlabel("CPU Usage [%]", size=14)
total_cpu_usage_pct = round(sum(cpu_usage_pct.values()))
ax2.set_title(f"CPU usage by namespace [{total_cpu_usage_pct}% total]", size=16)

plt.show()

Sample output looks like this: [bar charts of memory and CPU usage per namespace]

Of course, keep in mind that this is just a snapshot of your system's memory and CPU usage; it can vary a lot as workloads become more or less active.

1 Comment

nice script! But I ran into an issue when running it: ``` $ python mem-usage.py Traceback (most recent call last): File "~/mem-usage.py", line 28, in <module> resource = api.list_namespaced_custom_object(group="metrics.k8s.io", version="v1beta1", namespace=ns, plural="pods") ... ``` Any idea?
3

To check the usage of individual pods in Kubernetes, type the following commands in a terminal on the node where the pod is running (this only works when the node uses the Docker runtime):

$ docker ps | grep <pod_name>

This will give you the list of running containers on that node. To check CPU and memory utilization, use:

$ docker stats <container_id>

CONTAINER ID   NAME   CPU %   MEM USAGE / LIMIT   MEM %   NET I/O   BLOCK I/O   PIDS

2 Comments

Like we can see MEM USAGE / LIMIT, can we see the CPU limit as well? Any idea?
Yes, it will show Memory as well as CPU usage.
2

You can use the metrics API (metrics.k8s.io) directly.

For example:

kubectl -n default get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods/nginx-7fb5bc5df-b6pzh | jq

{
  "kind": "PodMetrics",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "name": "nginx-7fb5bc5df-b6pzh",
    "namespace": "default",
    "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/nginx-7fb5bc5df-b6pzh",
    "creationTimestamp": "2021-06-14T07:54:31Z"
  },
  "timestamp": "2021-06-14T07:53:54Z",
  "window": "30s",
  "containers": [
    {
      "name": "nginx",
      "usage": {
        "cpu": "33239n",
        "memory": "13148Ki"
      }
    },
    {
      "name": "git-repo-syncer",
      "usage": {
        "cpu": "0",
        "memory": "6204Ki"
      }
    }
  ]
}

Where nginx-7fb5bc5df-b6pzh is the pod's name.

Pay attention: CPU is measured in nanoCPUs, where 1×10^9 nanoCPUs = 1 CPU.
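If you only want the per-container usage figures, jq can pull them out of the same response, for example:

kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods/nginx-7fb5bc5df-b6pzh | jq -r '.containers[] | "\(.name): cpu=\(.usage.cpu) memory=\(.usage.memory)"'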

1 Comment

When metrics are not available this will not work
1

You need to deploy metrics-server (or the now-deprecated Heapster) to see the CPU and memory usage of the pods.

Comments

1

To complete Dashrath Mundkar's answer, this can be run without entering the pod (from a command prompt):

kubectl exec pod_name -n namespace -- cat /sys/fs/cgroup/cpu/cpuacct.usage
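The same pattern works for memory (cgroup v1 path):

kubectl exec pod_name -n namespace -- cat /sys/fs/cgroup/memory/memory.usage_in_bytes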

Comments

0

In case you are using minikube, you can enable the metrics-server addon; this will show the information in the dashboard.
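The addon can be enabled from the command line:

minikube addons enable metrics-server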

Comments

0

I know this is an old thread, but I just found it trying to do something similar. In the end, I found I can just use the Visual Studio Code Kubernetes plugin. This is what I did:

  • Select the cluster and open the Workloads/Pods section, find the pod you want to monitor (you can reach the pod through any other grouping in the Workloads section)
  • Right-click on the pod and select "Terminal"
  • Now you can either cat the files described above or use the "top" command to monitor CPU and memory in real-time.

Hope it helps

Comments

0

Metrics are available only if the metrics server is enabled or a third-party solution like Prometheus is configured. Otherwise you need to look at /sys/fs/cgroup/cpu/cpuacct.usage for CPU usage, which is the total CPU time consumed by this cgroup/container, and /sys/fs/cgroup/memory/memory.usage_in_bytes for memory usage, which is the total memory consumed by all processes in the cgroup/container.

Also don't forget another beast called QoS, whose classes are Guaranteed, Burstable and BestEffort. If your pod is Burstable or BestEffort, it is more likely to be evicted or OOM-killed under node memory pressure than a Guaranteed pod, even if it has not breached its own CPU or memory limits.
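You can check a pod's QoS class with:

kubectl get pod <pod_name> -o jsonpath='{.status.qosClass}'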

Kubernetes is FUN!!!

Comments

-1

If you exec into your pod using sh or bash, you can run the top command, which will give you some stats about resource utilisation that update every few seconds.


7 Comments

I'm getting error bash: top: command not found
You might have to use your package manager to install it
Pods whose images are built from the "scratch" base image generally do not have top installed.
Just to consider, this method won't give you independent stats of a given pod; it will show stats of the cluster.
> just to consider, this method won't give you independent stats of a given pod; it will show stats of the cluster. | This is correct; this will give you the resources of the node where your pod is. See stackoverflow.com/a/51656039/5697747