Fix minimum up-/ downtime #1021
Conversation
if self._time_step_allows_flexibility(
    t, m.flows[i, o].nonconvex.max_up_down, m.TIMESTEPS.at(-1)
):
Removing this calls for a different handling of the first time steps. Maybe we continue to not create the constraint for the first N time steps, so that the first time of switching down can continue to be our marker.
Also: the old style assumed cyclic time. I think we should leave that as an option.
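To make these two points concrete, here is a minimal, purely illustrative sketch (not the solph implementation; the function name, its arguments, and the `cyclic` flag are assumptions) of which time steps would get a minimum-uptime constraint if the first N steps are skipped in the non-cyclic case, while the cyclic option keeps the constraint everywhere:

```python
def constraint_timesteps(n_timesteps, min_uptime, cyclic=False):
    """Time steps for which a minimum-uptime constraint would be built.

    Non-cyclic: skip the first `min_uptime` steps, so the initial status
    (and the first switch-down) remains the marker.
    Cyclic: build the constraint for every step; the window wraps around.
    """
    if cyclic:
        return list(range(n_timesteps))
    return list(range(min_uptime, n_timesteps))


print(constraint_timesteps(10, 3))               # [3, 4, ..., 9]
print(constraint_timesteps(10, 3, cyclic=True))  # [0, 1, ..., 9]
```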
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Or the first time steps have to be fixed, and afterwards only the constraints for the time steps > uptime/downtime are implemented.
Just to make this more graphic: the old implementation interpreted the minimum uptime or downtime as forcing the flow up or down both in the beginning and in the end.
Thinking about this a bit longer, there is a bug.
pp2 = solph.components.Source(
label="plant_min_up_constraints",
outputs={
bel: solph.Flow(
nominal_value=10,
min=0.5,
max=1.0,
variable_costs=10,
nonconvex=solph.NonConvex(minimum_uptime=2, minimum_downtime=4, initial_status=1),
)
},
)
This results in the flow being forced active for four time steps. I think backwards compatibility with this can be dropped, as it's an obvious mistake.
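As a purely illustrative sketch of the bug described above (the variable names are mine, not the solph code): the old formulation fixed the initial status for `max(minimum_uptime, minimum_downtime)` steps, so with `initial_status=1` the flow stays forced on for 4 steps even though `minimum_uptime` is only 2.

```python
# Illustrative only: a single max_up_down value mixes uptime and downtime.
minimum_uptime, minimum_downtime, initial_status = 2, 4, 1
max_up_down = max(minimum_uptime, minimum_downtime)

# number of leading steps with a fixed status under the old interpretation
forced_steps = max_up_down  # 4, although minimum_uptime would only need 2
print("status fixed to", initial_status, "for", forced_steps, "steps")
```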
The goal is to force the status up or down (depending on initial_status) only in the beginning. Or is there a reason why it has to be forced up/down at the end?
In my opinion, the second plot ("current implementation") shows the correct behaviour.
The "current implementation" in this branch does only force the status up/down in the beginning. However, it does so for longer than needed (third plot and #1021 (comment)).
I think, it would make sense to allow forcing the status up/down in the end of the optimisation horizon as an equivalent to the "balanced" storage. If the storage has minimum downsteps of 4, it can have 2 in the end plus 2 in the beginning. But this should just be an option. The old implementation (in v0.5.1 and before) really made no sense.
This allows to go back to the indexing we used before.
The old implementation lead to unwanted situations. E.g. a Flow with `initial_status=0` forcibly off for more time steps than `minimum_downtime` if the the `minimum_uptime` was bigger.
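To make the "balanced" option from above concrete, here is a minimal sketch (my own helper, not part of solph): treating the horizon as cyclic means a minimum downtime of 4 may be satisfied by an off-period at the end that continues at the beginning.

```python
def satisfies_cyclic_min_downtime(status, min_downtime):
    """Check that every off-period, allowed to wrap around the horizon,
    lasts at least `min_downtime` steps (illustration only)."""
    if all(s == 0 for s in status):
        return len(status) >= min_downtime
    # rotate so the series starts with an "on" step; off-runs then never wrap
    start = status.index(1)
    rotated = status[start:] + status[:start]
    run = 0
    for s in rotated + [1]:  # sentinel closes a trailing run
        if s == 0:
            run += 1
        else:
            if 0 < run < min_downtime:
                return False
            run = 0
    return True


# 2 steps off at the end plus 2 steps off at the beginning satisfy a
# minimum downtime of 4 in the balanced/cyclic interpretation:
print(satisfies_cyclic_min_downtime([0, 0, 1, 1, 1, 1, 0, 0], 4))  # True
```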
I think it makes sense to first refactor the old code before extending it. So, here we go.
This also means that `NonConvex.max_up_down` has to be deleted, as it's not meaningful for two series.
Initialising the sequence as non-empty allows using it in expressions like `max(sequence(value)) == value`.
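A tiny illustration of that idea (a stand-in class, not solph's actual sequence helper): a scalar-backed sequence that reports a non-zero length, so built-ins like `max()` work on it.

```python
class ScalarSequence:
    """Stand-in for a scalar wrapped as a sequence (illustration only)."""

    def __init__(self, value, length=1):
        self.value = value
        self.length = length  # non-empty by construction

    def __len__(self):
        return self.length

    def __getitem__(self, index):
        if index >= self.length:
            raise IndexError(index)
        return self.value


value = 3
assert max(ScalarSequence(value)) == value  # would fail on an empty sequence
```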
The "fixed" test just tests if an error is risen on access. Saving the (not existing) result to a variable should avoid code quality bots from complaining.
I do not get why this does not fail in dev, but I am taking the liberty to alter it in this branch so that it does not cause problems.
The new formulation needs fewer constraints, so the constraint test needs to be adjusted.
I really do not get why this was not an issue before. (I know that Pyomo started "rounding" some time ago.)
The classical mode with a constant number of steps seems to work in the refactored version. However, the results when giving a series are not really intuitive (or are wrong). I wrote a small script to investigate this:

from oemof import solph
def test_min_uptime():
# create an energy system
idx = solph.create_time_index(2024, number=24)
es = solph.EnergySystem(timeindex=idx, infer_last_interval=False)
source = solph.Bus(label="source", balanced=False)
sink = solph.Bus(
label="sink",
inputs={
source: solph.Flow(
nominal_value=10,
min=0.1,
max=[
1.01, 1.02, 1.03, 1.04, 1.05, 1.06,
1.07, 1.08, 1.09, 1.10, 1.11, 1.12,
1.13, 1.14, 1.15, 1.16, 1.17, 1.18,
1.19, 1.20, 1.21, 1.22, 1.23, 1.24,
],
variable_costs=[
0.06, 0.05, 0.04, -9, 0.02, 0.01,
0.07, 0.08, 0.09, 0.10, 0.11, -9,
0.13, 0.14, 0.15, -9, 0.17, 0.18,
0.24, 0.23, 0.22, -9, 0.20, 0.19,
],
nonconvex=solph.NonConvex(
minimum_uptime=[
3, 3, 2, 3, 3, 3,
2, 2, 2, 2, 2, 2,
4, 4, 4, 4, 4, 4,
3, 3, 3, 3, 3, 3,
],
)
),
},
balanced=False,
)
es.add(source, sink)
om = solph.Model(es)
om.solve(solver="cbc", solve_kwargs={"tee": False})
results = solph.processing.results(om)
data = solph.views.node(results, source)
print(list(data["sequences"][(source, sink), "flow"]))
if __name__ == "__main__":
    test_min_uptime()

This results in the following time series for the Flow:
[flow time series output]
Checking the downtime also leads to iterating one time step more than expected:

from oemof import solph
def test_min_downtime():
# create an energy system
idx = solph.create_time_index(2024, number=24)
es = solph.EnergySystem(timeindex=idx, infer_last_interval=False)
source = solph.Bus(label="source", balanced=False)
sink = solph.Bus(
label="sink",
inputs={
source: solph.Flow(
nominal_value=10,
min=0.1,
max=[
1.01, 1.02, 1.03, 1.04, 1.05, 1.06,
1.07, 1.08, 1.09, 1.10, 1.11, 1.12,
1.13, 1.14, 1.15, 1.16, 1.17, 1.18,
1.19, 1.20, 1.21, 1.22, 1.23, 1.24,
],
variable_costs=[
-0.06, -0.05, -0.04, 9, -0.02, -0.01,
-0.07, -0.08, -0.09, 0.10, -0.11, 9,
-0.13, -0.14, -0.15, 9, -0.17, -0.18,
-0.24, -0.23, -0.22, 9, -0.20, -0.19,
],
nonconvex=solph.NonConvex(
minimum_downtime=[
3, 3, 2, 3, 3, 3,
2, 2, 2, 2, 2, 2,
4, 4, 4, 4, 4, 4,
3, 3, 3, 3, 3, 3,
],
initial_status=1,
)
),
},
balanced=False,
)
es.add(source, sink)
om = solph.Model(es)
om.solve(solver="cbc", solve_kwargs={"tee": False})
results = solph.processing.results(om)
data = solph.views.node(results, source)
print(list(data["sequences"][(source, sink), "flow"]))
if __name__ == "__main__":
    test_min_downtime()

This results in the following time series for the Flow (old version):
[flow time series output]
I keep forgetting that
I think we should merge this already and leave the calculation of the required number of steps for a given length in hours to a second PR. Note: technically speaking, this is not API stable. However, I would consider the old behaviour a bug.
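A rough sketch of what that follow-up calculation might look like (function name and signature are my assumptions, not an existing solph API): translate a duration in hours into a number of time steps based on a possibly non-uniform timeincrement series.

```python
def steps_for_duration(hours, timeincrement):
    """Smallest number of leading steps whose summed length covers `hours`
    (illustration only; a real implementation would handle each start step).
    """
    covered = 0.0
    for steps, dt in enumerate(timeincrement, start=1):
        covered += dt
        if covered >= hours:
            return steps
    raise ValueError("duration exceeds the optimisation horizon")


print(steps_for_duration(4, [1, 1, 1, 1, 1]))      # 4 steps of 1 h each
print(steps_for_duration(4, [0.5, 0.5, 1, 2, 1]))  # 4 steps summing to 4 h
```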
I checked only the code without investment; maybe someone else can verify that part. The constraints themselves I approve.
Seems plausible to me, thank you @p-snft and @AntonellaConsolinno. Admittedly, I only quickly scrolled through the code, but I like the additions, also in terms of tests.
For `minimum_uptime` and `minimum_downtime`, time steps were counted. However, time steps might have different lengths, which can lead to inconsistencies.
Fixes #1017