feat: RFC Implementation Supporting AWS On-Demand Capacity Reservations #1263
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: tvonhacht-apple. The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
Hi @tvonhacht-apple. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Pull Request Test Coverage Report for Build 9123804018 (Details)
💛 - Coveralls
In the PR description, I recommend mentioning AWS On-Demand Capacity Reservations (not just ODCR).
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
1 similar comment
@@ -294,6 +294,16 @@ func mapCandidates(proposed, current []*Candidate) []*Candidate {
// on an instance type. If the instance type has a spot offering available, then it uses the spot offering
// to get the launch price; else, it uses the on-demand launch price
func worstLaunchPrice(ofs []cloudprovider.Offering, reqs scheduling.Requirements) float64 {
if reqs.Get(v1beta1.CapacityTypeLabelKey).Has("capacity-reservation") {
Should we also update our cloudprovider types.go file to have an integer count for the available instances in the offering? Given that we are going to be scheduling using that count value, we don't want to continually overshoot the capacity that we think we can schedule to the capacity-reservation offering.
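For illustration, a minimal sketch of what that could look like, assuming a hypothetical ReservedCapacity field (the field name and placement are illustrative, not the project's actual API):

```go
// Sketch only: the real type lives in the project's cloudprovider/types.go.
// Offering describes one way an instance type can be launched.
type Offering struct {
	CapacityType string // e.g. "spot", "on-demand", "capacity-reservation"
	Zone         string
	Price        float64
	Available    bool

	// ReservedCapacity is a hypothetical count of instances still free in
	// the capacity reservation backing this offering. The scheduler could
	// decrement it as it plans launches so it never overshoots what the
	// reservation can actually hold.
	ReservedCapacity int
}
```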
@@ -294,6 +294,16 @@ func mapCandidates(proposed, current []*Candidate) []*Candidate {
// on an instance type. If the instance type has a spot offering available, then it uses the spot offering
// to get the launch price; else, it uses the on-demand launch price
func worstLaunchPrice(ofs []cloudprovider.Offering, reqs scheduling.Requirements) float64 {
We should also think about how this interacts with our current scheduling behavior and capacity reservation prioritization. Today, we continually pack pods as tightly as we can onto bigger and bigger instance types. This can cause us to rule out capacity reservations that would be cheaper to use if we supported a combination of spot, on-demand, and ODCR capacity. This probably works fine if we are just building ODCR prioritization with on-demand/spot fallback over the same instance type sets. But the second you extend the set of instance types to a mix of those that have ODCR offerings and those that don't, you get weird modeling during scheduling: overpacking pods can rule out ODCR capacity and leave behind only spot and on-demand.

Considering the reserved capacity is effectively free (it is already paid for), we could build affordances so that we always choose a free offering when it's available, and stop packing pods once doing so would push us outside of this free ODCR offering, e.g. create a new node instead of moving to a bigger instance type. See the sketch below.

Something to think about as we are considering the design options here.
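As a rough sketch of that affordance, using the hypothetical Offering fields from the earlier comment (one possible approach, not the agreed design):

```go
// cheapestOffering always prefers a capacity-reservation offering, treating
// it as free because the capacity is already paid for; otherwise it falls
// back to comparing list prices. Sketch only; names are illustrative, and
// it assumes ofs is non-empty.
func cheapestOffering(ofs []Offering) Offering {
	best := ofs[0]
	for _, of := range ofs[1:] {
		bestReserved := best.CapacityType == "capacity-reservation"
		ofReserved := of.CapacityType == "capacity-reservation"
		switch {
		case ofReserved && !bestReserved:
			// Reserved capacity wins over any spot/on-demand price.
			best = of
		case !ofReserved && bestReserved:
			// Keep the reserved offering regardless of price.
		case of.Price < best.Price:
			best = of
		}
	}
	return best
}
```

In the same spirit, a price fallback like worstLaunchPrice could treat a matching capacity-reservation offering as price 0, so consolidation never rules out reserved capacity on price grounds.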
/assign jonathan-innis
This PR has been inactive for 14 days. StaleBot will close this stale PR after 14 more days of inactivity.
Needed for RFC aws/karpenter-provider-aws#5716
Helps the RFC implementation in karpenter-provider-aws: aws/karpenter-provider-aws#6198
Description
RFC: aws/karpenter-provider-aws#5716
How was this change tested?
Running in a personal EKS cluster.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.