Define semantic conventions for k8s metrics #1032
I love the idea of moving forward with this work. According to the collector end-user survey, k8s and the collector are a big part of our end-users' stack, so moving the related semconvs forward is a great idea.
In general, my team has been happy with the metrics collected by
(#33598) Having recently been working with the `kubeletstats` receiver (using it and contributing to it), I would like to volunteer to help with its maintenance. I intend to dedicate time to contributing to the component as well as helping with the existing and future issue queue. Also, being a member of the [semconv-k8s-approvers](https://github.com/orgs/open-telemetry/teams/semconv-k8s-approvers) and [semconv-container-approvers](https://github.com/orgs/open-telemetry/teams/semconv-container-approvers) teams will help bring more alignment between the [Semantic Conventions](open-telemetry/semantic-conventions#1032) and the Collector's implementation within this specific scope.

- ✅ Member of the OpenTelemetry organization
- PRs authored: https://github.com/open-telemetry/opentelemetry-collector-contrib/pulls?q=is%3Apr+author%3AChrsMark++label%3Areceiver%2Fkubeletstats%2Cinternal%2Fkubeletstats%2Cinternal%2Fkubelet
- Issues I have been involved in: https://github.com/open-telemetry/opentelemetry-collector-contrib/issues?q=is%3Aissue+commenter%3AChrsMark+label%3Areceiver%2Fkubeletstats%2Cinternal%2Fkubeletstats+

/cc @dmitryax @TylerHelmuth, with whom I have already discussed this

Signed-off-by: ChrsMark <[email protected]>
I have updated the description to group metrics together in a meaningful way. I hope this makes the list less overwhelming and that people willing to help can pick up a group and work on it. Maybe we could create standalone issues per group if that helps, link them here to simplify the list in this issue's description, and use this issue as a meta issue.
I removed this from the system semantic conventions WG since this WG does not handle Kubernetes-related semantic conventions.
Reading through the existing definitions, some of these are modeled as attributes. However, in the Collector we emit them as metrics. For example: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/e7ebc6e1676aa661880a09b0ff93a9cccad8f011/receiver/k8sclusterreceiver/testdata/e2e/expected.yaml#L703-L709

@povilasv @TylerHelmuth do you have more context on how/why those were implemented as metrics in the Collector?
@dmitryax might know |
Hi! 👋 This issue was mentioned today during the Java SIG meeting, as we now have the ability to capture "state metrics" with a structure similar to the one defined in the Hardware semconv. When browsing the definitions in this issue, several of them look like state metrics. Maybe using a modeling similar to what we use in Java and the HW semconv could be relevant here.
Thanks, @SylvainJuge! I think we could use a similar modeling here for the `.status`, `.phase`, and `.condition` ones.
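For illustration, a state-metric modeling along those lines might look like the following semconv-style YAML. This is a hypothetical sketch, not an approved definition: the group id, instrument choice, unit, and enum member briefs are all assumptions, mirroring the `hw.status` pattern of reporting one data point per possible state (1 for the active state, 0 for the others).

```yaml
groups:
  # Hypothetical sketch: k8s.pod.phase as a "state metric".
  # One data point is emitted per possible phase; the data point
  # for the pod's current phase has value 1, all others have 0.
  - id: metric.k8s.pod.phase
    type: metric
    metric_name: k8s.pod.phase
    stability: experimental
    brief: "Describes the current phase of the pod."
    instrument: updowncounter
    unit: "{pod}"
    attributes:
      - id: k8s.pod.phase
        type:
          members:
            - id: pending
              value: "pending"
              brief: "The pod has been accepted but is not yet running."
            - id: running
              value: "running"
              brief: "The pod is bound to a node and at least one container is running."
            - id: succeeded
              value: "succeeded"
              brief: "All containers terminated successfully."
            - id: failed
              value: "failed"
              brief: "All containers terminated and at least one failed."
```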
I think both a Resource attribute and a gauge metric tracking historic usage and its state changes are useful. The Resource attribute is basically the same thing as
OTel defines three signals - metrics, logs, traces. Resource attributes are metadata attached to those signals. I don't understand the discussion above regarding "use resource attributes instead of metrics". Resource attributes are not first-class things that exist independent of the core signals. |
My confusion was mainly about the naming. For this specific one, what @povilasv mentioned makes sense, since we can model a container's restarts as a metric, but it can also be used as an identifier. This already happens for logs parsed with the container parser, where the container restart count is a Resource Attribute of the log record: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/stanza/docs/operators/container.md#add-metadata-from-file-path. Probably we need to name these two differently to avoid confusion. I don't know if we have hit something similar in SemConv so far. For the rest of the list, I think we should be fine, also taking into account #1032 (comment).
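To make the dual use concrete, here is a hypothetical illustration of the same restart count appearing both as a metric value and as identifying metadata on a log record. The attribute names and values below are assumptions for illustration, not actual collector output:

```yaml
# Hypothetical: restart count as a cumulative metric data point
# describing the container over its lifetime.
metrics:
  - name: k8s.container.restarts
    sum:
      value: 3

# Hypothetical: restart count as an identifying attribute on logs
# scraped from one specific container instance (its 2nd restart),
# distinguishing that instance's log file from earlier ones.
logs:
  resource:
    attributes:
      k8s.pod.name: "my-pod"
      k8s.container.name: "my-container"
      k8s.container.restart_count: "2"
```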
In general, I see that the
That was discussed today in SemConv SIG meeting (Nov 4, 2024). It seems that this modeling could fit well here and in #1212. @braydonk will prepare a proposal to put this as generic guidance in Semantic Conventions (thank you Braydon :)). Based on the outcome of this we can unblock the related PRs. |
That resource attribute is used to identify a particular container instance in a pod when we scrape logs from it. Container logs are written to files following a pattern that includes the restart count.
Area(s)
area:k8s
Is your change request related to a problem? Please describe.
At the moment there are no Semantic Conventions for k8s metrics.
Describe the solution you'd like
Even if we cannot consider the k8s metrics stable, we can start by adding metrics that are not controversial, to make some progress here. This issue aims to collect the existing k8s metrics in the Collector and keep track of any related work.
Below I'm providing an initial list of metrics coming from the `kubeletstats` and `k8scluster` receivers. Note that these are subject to change over time, so we should check back with the Collector to verify the current state.

cc: @open-telemetry/semconv-k8s-approvers
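As a sketch of what one such convention entry could look like, the following follows the semconv model-file YAML layout. This is hypothetical, not an approved definition: the group id, brief, and unit are assumptions, using one of the less controversial metrics from the list as an example.

```yaml
groups:
  # Hypothetical entry for a metric emitted by the kubeletstats
  # receiver; shown only to illustrate the shape of a definition.
  - id: metric.k8s.pod.uptime
    type: metric
    metric_name: k8s.pod.uptime
    stability: experimental
    brief: "The time the pod has been running."
    instrument: gauge
    unit: "s"
```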
Describe alternatives you've considered
No response
Additional context
Below there are also some metrics from namespaces other than `k8s.*`. I leave them in intentionally so that they are taken into account accordingly.

kubeletstats metrics
cpu metrics: #1489
memory metrics: #1490
filesystem metrics: #1488
network metrics: #1487 ✅
uptime metrics: #1486 ✅
volume metrics: #1485
k8scluster metrics
deployment metrics: #1636 ✅
cronjob metrics: #1660
k8s.cronjob.active_jobs
daemonset metrics: #1649 ✅
k8s.daemonset.current_scheduled_nodes
k8s.daemonset.desired_scheduled_nodes
k8s.daemonset.misscheduled_nodes
k8s.daemonset.ready_nodes
hpa metrics: #1644 ✅
k8s.hpa.max_replicas
k8s.hpa.min_replicas
k8s.hpa.current_replicas
k8s.hpa.desired_replicas
job metrics: #1660
k8s.job.active_pods
k8s.job.desired_successful_pods
k8s.job.failed_pods
k8s.job.max_parallel_pods
k8s.job.successful_pods
namespace metrics: #1668
k8s.namespace.phase
replicaset metrics: #1636 ✅
k8s.replicaset.desired
k8s.replicaset.available
replication_controller metrics #1636 ✅
k8s.replication_controller.desired
k8s.replication_controller.available
statefulset metrics: #1637 ✅
k8s.statefulset.desired_pods
k8s.statefulset.ready_pods
k8s.statefulset.current_pods
k8s.statefulset.updated_pods
container metrics
k8s.container.cpu_request
k8s.container.cpu_limit
k8s.container.memory_request
k8s.container.memory_limit
k8s.container.storage_request
k8s.container.storage_limit
k8s.container.ephemeralstorage_request
k8s.container.ephemeralstorage_limit
k8s.container.restarts
k8s.container.ready
pod metrics
k8s.pod.phase
k8s.pod.status_reason
resource_quota metrics
k8s.resource_quota.hard_limit
k8s.resource_quota.used
node metrics
k8s.node.condition
related issue: open-telemetry/opentelemetry-collector-contrib#33760
Openshift metrics
openshift.clusterquota.limit
openshift.clusterquota.used
openshift.appliedclusterquota.limit
openshift.appliedclusterquota.used
Related issues
TBA