Fix limits handling in slack distribution on generators #1041
Conversation
Signed-off-by: Damien Jeandemange <[email protected]>
double absMismatch = Math.abs(slackBusActivePowerMismatch);
boolean shouldDistributeSlack = absMismatch > slackBusPMaxMismatch / PerUnit.SB && absMismatch > ActivePowerDistribution.P_RESIDUE_EPS;
ℹ️ Note to reviewers: this part could be a separate bugfix PR... let me know...
If slackBusPMaxMismatch is small in comparison with ActivePowerDistribution.P_RESIDUE_EPS, the outer loop becomes unstable because 1/ ActivePowerDistribution will not distribute anything and 2/ the outer loop says something has to be distributed.
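The interaction between the two thresholds can be sketched as a minimal standalone example (the constant value and the class are invented for illustration; this is not the library code):

```java
// Minimal sketch: the outer loop and the distribution must agree on a single
// effective threshold, otherwise the loop keeps requesting a distribution
// that ActivePowerDistribution silently skips, and never converges.
public class ThresholdCheckSketch {

    static final double P_RESIDUE_EPS = 1e-5; // assumed value, for illustration only

    // The distribution only acts above P_RESIDUE_EPS, so the outer loop must
    // also check against it, not only against the configured MW threshold.
    static boolean shouldDistribute(double absMismatchPu, double slackBusPMaxMismatchMw, double sb) {
        return absMismatchPu > slackBusPMaxMismatchMw / sb && absMismatchPu > P_RESIDUE_EPS;
    }

    public static void main(String[] args) {
        // configured threshold (1e-4 MW / 100 MVA = 1e-6 p.u.) is below
        // P_RESIDUE_EPS: without the second condition the loop would oscillate
        System.out.println(shouldDistribute(5e-6, 1e-4, 100.0)); // false
        System.out.println(shouldDistribute(2e-4, 1e-4, 100.0)); // true
    }
}
```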
Thank you for catching this! I think we can keep it in this PR.
double mismatch = remainingMismatch;
if (iteration == 0) {
    // "undo" everything from targetP to go back to initialP
    for (ParticipatingElement participatingGenerator : participatingElements) {
        LfGenerator generator = (LfGenerator) participatingGenerator.getElement();
        done += generator.getInitialTargetP() - generator.getTargetP();
        mismatch -= generator.getInitialTargetP() - generator.getTargetP();
        generator.setTargetP(generator.getInitialTargetP());
    }
}
ℹ️ Note to reviewers
This is the way I found to fix the issue while also avoiding a massive refactor: today ActivePowerDistribution is completely stateless. Making ActivePowerDistribution aware of a "starting point" would require more work / refactorings...
Note also that such refactorings will need to be addressed for future developments planned in #979 and powsybl-entsoe/# - planned end of this year / Q4 2024 on our side.
Indeed, imagine a merit-order rebalancing: the starting point is crucial, otherwise you would end up with both generators from the upward list moved and generators from the downward list moved...
@vmouradian I added you as a reviewer because I noticed that this change, if not properly implemented, would break what you did in #1017 - unit tests are unchanged, but your double check is welcome.
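The path dependence that the reset avoids can be illustrated with a small standalone sketch (all numbers and class names are invented; this is not the library code):

```java
// Hypothetical two-generator example of the "hysteresis" described above.
// Both generators start balanced at 100 MW with equal participation factors;
// gA has maxP = 105 MW.
public class HysteresisSketch {

    // distribute +20 MW, then -20 MW, always restarting from the CURRENT targetP
    static double[] distributeWithoutReset() {
        double gA = 100, gB = 100, maxA = 105;
        // iteration 1: +20 MW, i.e. +10 MW each; gA caps at 105 MW and its
        // 5 MW residue is re-distributed to gB
        gA = Math.min(gA + 10, maxA); // 105
        gB = gB + 10 + 5;             // 115
        // iteration 2: after PV-PQ switching the mismatch reverses to -20 MW,
        // i.e. -10 MW each from the current operating point
        gA -= 10;                     // 95
        gB -= 10;                     // 105
        // total is back to 200 MW, but gA ended below and gB above the initial
        // point: the final split depends on the solution path
        return new double[] {gA, gB};
    }

    public static void main(String[] args) {
        double[] r = distributeWithoutReset();
        // with the fix, targets are first reset to initialTargetP, and the net
        // mismatch relative to that point (0 MW) moves nothing: 100 / 100
        System.out.println(r[0] + " / " + r[1]); // 95.0 / 105.0
    }
}
```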
if (stepResult.movedBuses()) {
    movedBuses = true;
}
double done = step.run(participatingElements, iteration, remainingMismatch);
"powerDistributed" might be a clearer name than "done", don't you agree?
}

@Override
- public ActivePowerDistribution.StepResult run(List<ParticipatingElement> participatingElements, int iteration, double remainingMismatch) {
+ public double run(List<ParticipatingElement> participatingElements, int iteration, double remainingMismatch) {
    // normalize participation factors at each iteration start as some
    // generators might have reached a limit and have been discarded
    ParticipatingElement.normalizeParticipationFactors(participatingElements);

    double done = 0d;
Same, another name than "done" might be better (even if the name "done" was not introduced in this PR).
I double checked the SA generator action part, and everything indeed seems to work as expected.
As a general comment, I think we have to use our summer time to add functional documentation about this kind of feature. We can add new pages on the website about features and robustness. @jeandemanged What do you think?
@@ -334,6 +334,7 @@ private void applyGeneratorChange(LfNetworkParameters networkParameters) {
    if (!generator.isDisabled()) {
        double newTargetP = generatorChange.isRelative() ? generator.getTargetP() + generatorChange.activePowerValue() : generatorChange.activePowerValue();
        generator.setTargetP(newTargetP);
I was about to merge the PR but I have a doubt: why don't you also change these lines in LfContingency?

Set<LfBus> generatorBuses = new HashSet<>();
for (LfGenerator generator : lostGenerators) {
    generator.setTargetP(0); // -> initial targetP to zero too?
    LfBus bus = generator.getBus();
    generatorBuses.add(bus);
    generator.setParticipating(false);
    generator.setDisabled(true);
}
Checked: indeed, generator.setParticipating(false) already solved this issue.
Indeed, but adding setInitialTargetP(0) would improve code clarity and won't harm; I am adding it.
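The resulting lost-generator handling can be sketched as follows (GeneratorStub is an invented stand-in for LfGenerator; the field names mirror the setters in the snippet above):

```java
// Minimal self-contained sketch: on a generator contingency, reset both the
// current targetP and the distribution restart point (initialTargetP) to 0 MW.
public class ContingencySketch {

    static class GeneratorStub {
        double targetP = 50, initialTargetP = 50;
        boolean participating = true, disabled = false;
    }

    static void loseGenerator(GeneratorStub g) {
        g.targetP = 0;
        g.initialTargetP = 0;      // the clarity fix: restart point also goes to 0
        g.participating = false;   // already sufficient to exclude it from distribution
        g.disabled = true;
    }

    public static void main(String[] args) {
        GeneratorStub g = new GeneratorStub();
        loseGenerator(g);
        System.out.println(g.targetP + " " + g.initialTargetP); // 0.0 0.0
    }
}
```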
Signed-off-by: Damien Jeandemange <[email protected]>
Please check if the PR fulfills these requirements
Does this PR already have an issue describing the problem?
No
What kind of change does this PR introduce?
Bug fix
What is the current behavior?
We can observe a nasty "hysteresis" effect on active power slack distribution on generators. Slack distribution on loads does not exhibit this behavior.
On large networks it is quite common that the DistributedSlack outer loop first moves generation in one direction and then goes back in the other direction, in particular due to PV-PQ switching. In the current implementation, the DistributedSlack outer loop always restarts from the current generator targetP. Because of the effects of the active power limits, the solution becomes "solution path dependent": ultimately, a perfectly balanced input network may end up having some generators moved upward and other generators moved downward.
Here is the current behavior (click to expand)
What is the new behavior (if this is a feature change)?
Active power distribution on generators now always restarts from the same initial point, as defined by LfGenerator.initialTargetP, solving the issue.
Here is the new, fixed behavior (click to expand)
In the context of security analysis, the previous behavior still needs to be maintained; this was done as follows:
Does this PR introduce a breaking change or deprecate an API?
Extra Information
No existing unit tests were harmed by this bugfix, in particular: