Add support for GPU monitoring #1601
base: main
Conversation
Codecov Report
Attention: Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #1601      +/-   ##
==========================================
+ Coverage   48.94%   49.06%   +0.11%
==========================================
  Files         227      235       +8
  Lines       20636    21983    +1347
==========================================
+ Hits        10101    10785     +684
- Misses      10010    10629     +619
- Partials      525      569      +44
```

... and 20 files with indirect coverage changes.
Force-pushed: 6777208 to ce25955, ce25955 to 60173ad, 60173ad to dd0dd9c.
Approving with a minor edit requested
Looks good overall - left some minor questions/suggestions. Thanks for the helpful comments in the feature.go code!
api/datadoghq/v2alpha1/const.go (outdated)
```go
NVIDIADevicesMountPath  = "/var/run/nvidia-container-devices/all"
NVIDIADevicesVolumeName = "nvidia-devices"
DevNullPath             = "/dev/null" // used to mount the NVIDIADevicesHostPath to /dev/null in the container, it's just used as a "signal" to the nvidia runtime to use the nvidia devices
```
Since these (including line 83) only seem to be used in the feature/gpu/feature.go code, can they be placed within the feature/gpu directory? e.g. https://github.com/DataDog/datadog-operator/blob/main/internal/controller/datadogagent/feature/kubernetesstatecore/const.go
Moved back to gpu/const.go
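The thread above touches on the non-obvious trick these constants encode: the host's /dev/null is mounted at /var/run/nvidia-container-devices/all purely as a "signal" to the NVIDIA container runtime that all GPU devices should be exposed to the container. A minimal, self-contained sketch of how a feature could assemble that volume/mount pair (the `Volume` and `VolumeMount` structs here are simplified stand-ins for the real k8s.io/api/core/v1 types, and `nvidiaDevicesVolume` is a hypothetical helper, not code from this PR):

```go
package main

import "fmt"

// Simplified stand-ins for corev1.Volume / corev1.VolumeMount, for
// illustration only; the real feature code uses k8s.io/api/core/v1.
type Volume struct {
	Name     string
	HostPath string
}

type VolumeMount struct {
	Name      string
	MountPath string
	ReadOnly  bool
}

const (
	nvidiaDevicesMountPath  = "/var/run/nvidia-container-devices/all"
	nvidiaDevicesVolumeName = "nvidia-devices"
	// devNullPath is mounted at nvidiaDevicesMountPath purely as a signal
	// to the NVIDIA runtime that all GPU devices should be exposed.
	devNullPath = "/dev/null"
)

// nvidiaDevicesVolume builds the volume/mount pair described in the review
// thread: host /dev/null mounted read-only at the nvidia-container-devices path.
func nvidiaDevicesVolume() (Volume, VolumeMount) {
	vol := Volume{Name: nvidiaDevicesVolumeName, HostPath: devNullPath}
	mnt := VolumeMount{
		Name:      nvidiaDevicesVolumeName,
		MountPath: nvidiaDevicesMountPath,
		ReadOnly:  true,
	}
	return vol, mnt
}

func main() {
	vol, mnt := nvidiaDevicesVolume()
	fmt.Printf("volume %s: host %s -> container %s\n", vol.Name, vol.HostPath, mnt.MountPath)
}
```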
config/manager/kustomization.yaml (outdated)
```diff
-newName: gcr.io/datadoghq/operator
-newTag: 1.11.1
+newName: 601427279990.dkr.ecr.us-east-1.amazonaws.com/guillermo.julian/sandbox
+newTag: operator
```
Mind undoing these changes?
Oops, sorry. Removed!
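For reference, the reverted block in config/manager/kustomization.yaml would point back at the public operator image, using the values from the old side of the diff above (the surrounding `images:` list structure and the `name:` key are assumed from a standard kustomize layout, not shown in this diff):

```yaml
images:
  - name: controller
    newName: gcr.io/datadoghq/operator
    newTag: "1.11.1"
```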
```go
// GPUMonitoringType monitoring feature.
GPUMonitoringType = "gpu"
```
Suggested change:

```diff
-// GPUMonitoringType monitoring feature.
-GPUMonitoringType = "gpu"
+// GPUIDType GPU feature.
+GPUIDType = "gpu"
```
```go
	return &gpuMonitoringFeature{}
}

type gpuMonitoringFeature struct {
```
Nit, but for consistency maybe we could rename to gpuFeature
Renamed
```diff
@@ -78,6 +78,9 @@ const (
 	KubeServicesAndEndpointsListeners = "kube_services kube_endpoints"
 	EndpointsChecksConfigProvider = "endpointschecks"
 	ClusterAndEndpointsConfigProviders = "clusterchecks endpointschecks"
+
+	// DefaultGPUMonitoringRuntimeClass default runtime class for GPU pods
+	DefaultGPUMonitoringRuntimeClass = "nvidia"
```
Mind moving this to feature/gpu/const.go as well?
```diff
@@ -82,6 +82,8 @@ type DatadogFeatures struct {
 	SBOM *SBOMFeatureConfig `json:"sbom,omitempty"`
 	// ServiceDiscovery
 	ServiceDiscovery *ServiceDiscoveryFeatureConfig `json:"serviceDiscovery,omitempty"`
+	// GPU monitoring
+	GPUMonitoring *GPUMonitoringFeatureConfig `json:"gpu,omitempty"`
```
Sorry I missed this in the first pass.
Suggested change:

```diff
-GPUMonitoring *GPUMonitoringFeatureConfig `json:"gpu,omitempty"`
+GPU *GPUFeatureConfig `json:"gpu,omitempty"`
```
```go
// GPUMonitoringFeatureConfig contains the GPU monitoring configuration.
type GPUMonitoringFeatureConfig struct {
```
Suggested change:

```diff
-// GPUMonitoringFeatureConfig contains the GPU monitoring configuration.
-type GPUMonitoringFeatureConfig struct {
+// GPUFeatureConfig contains the GPU monitoring configuration.
+type GPUFeatureConfig struct {
```
```go
const nvidiaDevicesMountPath = "/var/run/nvidia-container-devices/all"
const nvidiaDevicesVolumeName = "nvidia-devices"
const devNullPath = "/dev/null" // used to mount the NVIDIADevicesHostPath to /dev/null in the container, it's just used as a "signal" to the nvidia runtime to use the nvidia devices
```
Suggested change:

```diff
-const nvidiaDevicesMountPath = "/var/run/nvidia-container-devices/all"
-const nvidiaDevicesVolumeName = "nvidia-devices"
-const devNullPath = "/dev/null" // used to mount the NVIDIADevicesHostPath to /dev/null in the container, it's just used as a "signal" to the nvidia runtime to use the nvidia devices
+const (
+	nvidiaDevicesMountPath  = "/var/run/nvidia-container-devices/all"
+	nvidiaDevicesVolumeName = "nvidia-devices"
+	devNullPath             = "/dev/null" // used to mount the NVIDIADevicesHostPath to /dev/null in the container, it's just used as a "signal" to the nvidia runtime to use the nvidia devices
+)
```
```diff
@@ -71,4 +71,6 @@ const (
 	DummyIDType = "dummy"
 	// ServiceDiscoveryType service discovery feature.
 	ServiceDiscoveryType = "service_discovery"
+	// GPUIDType monitoring feature.
```
Suggested change:

```diff
-// GPUIDType monitoring feature.
+// GPUIDType GPU monitoring feature.
```
```go
// DefaultGPUMonitoringRuntimeClass default runtime class for GPU pods
DefaultGPUMonitoringRuntimeClass = "nvidia"
```
Suggested change:

```diff
-// DefaultGPUMonitoringRuntimeClass default runtime class for GPU pods
-DefaultGPUMonitoringRuntimeClass = "nvidia"
+// DefaultGPURuntimeClass default runtime class for GPU pods
+DefaultGPURuntimeClass = "nvidia"
```
What does this PR do?
This PR adds support for the GPU monitoring feature, making the changes required to improve the customer experience of enabling it.
Motivation
Simplify deployment of GPU monitoring.
Additional Notes
This is an initial implementation of the feature. It does not support deployment of mixed clusters (those where not all nodes have GPUs).
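Based on the field names visible in the diff snippets above (the `gpu` JSON key on `DatadogFeatures`), a DatadogAgent manifest enabling the feature might look like the following; this is an illustrative sketch, not verified against a released CRD:

```yaml
apiVersion: datadoghq.com/v2alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  features:
    gpu:
      enabled: true
```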
Minimum Agent Versions
Are there minimum versions of the Datadog Agent and/or Cluster Agent required?
Describe your test plan
- Deploy with `feature.gpu.enabled: yes`.
- Check that the Agent pods have `.runtimeClassName: nvidia`, with `kubectl get pod datadog-agent-XXX -o json | jq ".spec.runtimeClassName"`.
- Check that `DD_GPU_MONITORING_ENABLED` is set to true in both the `agent` and `system-probe` containers.

Checklist
- `bug`, `enhancement`, `refactoring`, `documentation`, `tooling`, and/or `dependencies` labels
- `qa/skip-qa` label