
Add experimental Carbon Aware queue scaler for RabbitMQ #3381

Closed
rossf7 opened this issue Jul 15, 2022 · 16 comments
Labels
feature (All issues for new features that have been committed to), needs-discussion, scaler, stale-bot-ignore (All issues that should not be automatically closed by our stale bot)

Comments


rossf7 commented Jul 15, 2022

Proposal

Add an experimental carbon-aware queue scaler for RabbitMQ. It will scale to 0 when the marginal carbon intensity of the electricity grid is outside the configured range.

The marginal intensity represents the emissions rate of the electricity generator(s) which are responding to changes in load on the local grid at a certain time.
https://www.watttime.org/api-documentation/#introduction

This can be used to schedule non-time-sensitive tasks so that they run when emissions are lower. Tasks can also be scaled to 0 when emissions are high.

It is also possible to schedule where workloads run to reduce emissions, but that is out of scope for this proposal.

However, for reference, the Green Software Foundation is developing a Carbon Aware SDK to do this that also uses marginal emissions. It is based on the Software Carbon Intensity (SCI) specification, which is currently in alpha.

Scaler Source

RabbitMQ queue length & WattTime API real time emissions index

Scaling Mechanics

The WattTime API provides the marginal carbon intensity for the configured location. This data is updated every 5 minutes, so it is timely enough to make scheduling decisions.

percent - A percentile value between 0 (minimum MOER in the last month, i.e. clean) and 100 (maximum MOER in the last month, i.e. dirty), representing the relative real-time marginal emissions intensity.

The percent value can be used as a trigger for scaling and is available to all users.

https://www.watttime.org/api-documentation/#real-time-emissions-index
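As a minimal sketch, a scaler would decode the index response and read the percent value. The exact JSON shape and field types below are assumptions to verify against the WattTime API docs; only the percent field is described above.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// indexResponse models the fields a scaler would read from the
// real-time emissions index. Field names and types are assumptions
// based on the public docs, not a verified schema.
type indexResponse struct {
	BA      string `json:"ba"`      // balancing authority, e.g. "CAISO_NORTH"
	Percent string `json:"percent"` // "0" (cleanest) to "100" (dirtiest) over the last month
}

// parseIndex decodes an index API response body.
func parseIndex(body []byte) (indexResponse, error) {
	var idx indexResponse
	err := json.Unmarshal(body, &idx)
	return idx, err
}

func main() {
	// Illustrative payload, not real API output.
	idx, err := parseIndex([]byte(`{"ba": "CAISO_NORTH", "percent": "53"}`))
	if err != nil {
		panic(err)
	}
	fmt.Printf("grid %s is at percentile %s of marginal intensity\n", idx.BA, idx.Percent)
}
```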

The WattTime API will also return the marginal emissions rate if the user has a Pro subscription, so optionally the scaler could also allow scaling on this value.

moer - Marginal Operating Emissions Rate (MOER) value measured in lbs/MWh. This is only available for PRO subscriptions.

For the WattTime API the user needs to specify which grid they are using. This is referred to as the BA (balancing authority) and there is an API endpoint to determine this. https://www.watttime.org/api-documentation/#determine-grid-region

To make the scaler easier to use, we could define a high/medium/low traffic-light system, so users can decide whether workloads can run at medium intensity or only at low intensity. This would also abstract the WattTime API, meaning other sources of carbon intensity data could be used.
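The traffic-light abstraction could be as simple as mapping the percentile onto three levels. A sketch in Go, where the 33/66 thresholds are illustrative and not part of the proposal:

```go
package main

import "fmt"

// IntensityLevel abstracts the raw percentile into a traffic-light
// style signal, so scaler config need not be tied to WattTime.
type IntensityLevel string

const (
	Low    IntensityLevel = "low"
	Medium IntensityLevel = "medium"
	High   IntensityLevel = "high"
)

// levelForPercent maps a 0-100 relative intensity percentile to a
// level. The cutoffs here are hypothetical defaults.
func levelForPercent(percent float64) IntensityLevel {
	switch {
	case percent < 33:
		return Low
	case percent < 66:
		return Medium
	default:
		return High
	}
}

func main() {
	fmt.Println(levelForPercent(20)) // low
	fmt.Println(levelForPercent(50)) // medium
	fmt.Println(levelForPercent(90)) // high
}
```

A user could then configure "run at medium or below" without knowing which data provider supplied the percentile.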

Authentication Source

RabbitMQ & WattTime API

Anything else?

This paper from October 2021 shows that in most regions, shifting delay-tolerant workloads to weekends can reduce carbon emissions by 20%, and shifting to the next day can reduce them by 5%. https://arxiv.org/abs/2110.13234

At the Green Web Foundation we have a Go library that integrates with the WattTime API and could be used to fetch the grid intensity data. https://github.com/thegreenwebfoundation/grid-intensity-go

For this proposal we're suggesting the WattTime API, as it has good regional coverage and a free API for relative intensity. However, there are other sources of carbon intensity data, such as the UK National Grid (https://carbonintensity.org.uk/), which has a public API, or ElectricityMap, which has a paid API.

We're suggesting RabbitMQ because it's a widely used queue that can run in-cluster. However, the approach should work with any queue-based scaler.


rossf7 commented Jul 15, 2022

@tomkerkhove @mkorbi I'm afraid it took longer than I'd hoped but here is the feature request we discussed in the CNCF WG Environmental Sustainability.

Keen to hear your thoughts when you have time to review this. Thanks!

@JorTurFer added the stale-bot-ignore label Jul 15, 2022
@JorTurFer

Thanks for opening the issue!
Nice proposal. We already had this in mind, but it first requires some changes to the scaling logic, to allow more complex logic than is currently supported. Just in case, I have added the label cant-touch-this to ensure that the stale bot doesn't close this issue, because it's important and we need some time to address the prerequisites before doing this.


rossf7 commented Jul 15, 2022

Thanks @JorTurFer! Great that you already have this in mind, and I can see how it would need changes to the scaling logic.


JorTurFer commented Jul 15, 2022

#2440

@tomkerkhove added the needs-discussion and feature labels and removed the needs-discussion label Jul 25, 2022
@tomkerkhove

I don't think this depends on #2440 though, @JorTurFer. In this case, what we want is an AND behavior when using multiple triggers, instead of an OR. It's not specifically the need for custom formulas, IMO.

Another aspect of this is that we can simply introduce a "marginal carbon intensity" scaler that uses this API, which end-users need a license for (similar to other scalers).

We can do that as a first step and cover the combination scenarios later on. Thoughts?

@tomkerkhove moved this from Proposed to To Do in Roadmap - KEDA Core Jul 25, 2022
@JorTurFer

That is exactly the problem. It's the HPA controller that does the OR; KEDA only exposes all the metrics separately. If we want to do an AND, we have to do it inside our own metrics server and expose only that calculated metric instead of all of them.

@JorTurFer

Of course, advanced scenarios are not needed, but at least a simple formula scenario is required. Another option could be to introduce this as part of every scaler and apply the formula for all of them as part of their own metrics request. Honestly, I don't like this second option, because why should carbon get this treatment and other metrics not? That's why I think we need to wait until formulas are integrated.
@zroubalik ?

@tomkerkhove

Of course, advanced scenarios are not needed, but at least a simple formula scenario is required. Another option could be to introduce this as part of every scaler and apply the formula for all of them as part of their own metrics request

I don't see why this has to be done with a formula and cannot be a standalone scaler like any other one?

Honestly, I don't like this second option, because why should carbon get this treatment and other metrics not?

Simple: carbon-friendly processing. Only scale if it is allowed with respect to sustainability. If we are not allowed to, then we should not run our secondary/optional workloads.


JorTurFer commented Jul 25, 2022

How do you propose we include the carbon variable without adding the formula feature? At the ScaledObject level instead of inside a trigger? Maybe inside each scaler as part of it?


tomkerkhove commented Jul 25, 2022

Just as simple as this (not a spec, just an example):

triggers:
- type: carbon-emission
  metadata:
    maximumImpact: "5%"

@JorTurFer

That's the point: we could introduce it outside triggers or inside each trigger, but not as a trigger itself. Every trigger is mapped to a metric in the HPA, and from there we don't have control over how it's evaluated, because it's the HPA controller that does it. We could introduce this maximum impact as a variable for the whole ScaledObject, or inside each trigger, and then calculate the value inside each scaler itself. But right now we definitely cannot do an AND operation, because KEDA only creates an HPA pointing to all the metrics in the ScaledObject and exposes those metrics one by one; the OR (MAX, to be more accurate) is done by the HPA controller.
In a real case, imagine a RabbitMQ scaler + carbon. At the moment we can modify the RabbitMQ metric value to include the carbon value, but we will expose only the RabbitMQ metric. Basically, we would apply a formula whose inputs are the scaler metric value and the carbon metric value; but if we expose both, the HPA controller will apply the MAX, and we cannot change that.
Please @zroubalik, correct me if I'm wrong with my explanation.
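The "expose a single calculated metric" idea above can be sketched as a gating formula: evaluate the carbon condition internally and only report the queue metric when it passes, so the HPA controller ever sees one value. This is a hypothetical formula for illustration, not KEDA's implementation.

```go
package main

import "fmt"

// combinedMetric gates the queue metric on a carbon condition.
// maxPercent is the carbon percentile above which scaling is
// suppressed. Both the function and the formula are hypothetical.
func combinedMetric(queueLength int64, carbonPercent, maxPercent float64) int64 {
	if carbonPercent > maxPercent {
		return 0 // grid too dirty: report no work so the HPA scales down
	}
	return queueLength // grid clean enough: report the real queue length
}

func main() {
	fmt.Println(combinedMetric(120, 40, 60)) // carbon OK: 120
	fmt.Println(combinedMetric(120, 80, 60)) // carbon too high: 0
}
```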


tomkerkhove commented Jul 29, 2022

That's the point: we could introduce it outside triggers or inside each trigger, but not as a trigger itself. Every trigger is mapped to a metric in the HPA, and from there we don't have control over how it's evaluated, because it's the HPA controller that does it. We could introduce this maximum impact as a variable for the whole ScaledObject, or inside each trigger, and then calculate the value inside each scaler itself. But right now we definitely cannot do an AND operation, because KEDA only creates an HPA pointing to all the metrics in the ScaledObject and exposes those metrics one by one; the OR (MAX, to be more accurate) is done by the HPA controller.
In a real case, imagine a RabbitMQ scaler + carbon. At the moment we can modify the RabbitMQ metric value to include the carbon value, but we will expose only the RabbitMQ metric. Basically, we would apply a formula whose inputs are the scaler metric value and the carbon metric value; but if we expose both, the HPA controller will apply the MAX, and we cannot change that.
Please @zroubalik, correct me if I'm wrong with my explanation.

I don't get why this cannot be a standalone scaler, as per the sample above?

Not to fix the current ask, but as a first step.

@JorTurFer

I don't get why this cannot be a standalone scaler as per the sample above?

Because we cannot change the OR/MAX done by the HPA controller, AFAIK; it's outside KEDA's scope. We can expose all the metrics and the HPA controller will apply its logic. If we want to "hack" that logic, we need to do it internally and expose a single metric.

@tomkerkhove

You're thinking too much from a technical point of view :) Let's take a step back: the initial ask was to combine carbon information with RabbitMQ, which is indeed not possible.

However, my proposal is to introduce it as a standalone scaler for now, so that end-users can already use it. Similar to this brain dump:

triggers:
- type: carbon-emission
  metadata:
    maximumImpact: "5%"

This would allow people to scale based on the impact they are already having. For example, I can have a maximum of 100 replicas and a minimum of 0. As the impact increases, we scale down.

Simply put: take the value from the API and invert it.
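The "invert it" idea above can be sketched as a linear mapping from the intensity percentile to a target replica count, falling from maxReplicas on a clean grid to minReplicas on a dirty one. The linear mapping and function name are illustrative assumptions, not KEDA's implementation.

```go
package main

import "fmt"

// desiredReplicas inverts the relative carbon intensity percentile:
// as intensity rises from 0 to 100, the target replica count falls
// linearly from maxReplicas to minReplicas. Hypothetical sketch only.
func desiredReplicas(percent float64, minReplicas, maxReplicas int) int {
	if percent <= 0 {
		return maxReplicas // cleanest grid: run everything
	}
	if percent >= 100 {
		return minReplicas // dirtiest grid: scale to the floor
	}
	span := float64(maxReplicas - minReplicas)
	return minReplicas + int(span*(100-percent)/100)
}

func main() {
	fmt.Println(desiredReplicas(0, 0, 100))   // clean grid: 100
	fmt.Println(desiredReplicas(50, 0, 100))  // mid intensity: 50
	fmt.Println(desiredReplicas(100, 0, 100)) // dirty grid: 0
}
```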

@JorTurFer

Okay,
after a talk we have understood each other. Creating a carbon scaler is doable right now (and a good idea), but only to work like any other scaler, not to modify other scalers based on it.
To achieve that goal, we need the formula feature implemented.

@tomkerkhove

I have created #3467 as a dedicated scaler. I will close this one, given that the main ask, combining scalers, is something we are already tracking.

@tomkerkhove closed this as not planned Aug 2, 2022
Repository owner moved this from To Do to Ready To Ship in Roadmap - KEDA Core Aug 2, 2022
@tomkerkhove moved this from Ready To Ship to Done in Roadmap - KEDA Core Aug 3, 2022