Add migration to fix database edges state #5640
base: stable
Conversation
CodSpeed Performance Report: Merging #5640 will not alter performance.
Force-pushed 083b5df to a6518f2
""" | ||
Fix corrupted state introduced by Migration012 when duplicating a CoreAccount (branch Aware) | ||
being part of a CoreStandardGroup (branch Agnostic). Database is corrupted at multiple points: | ||
- Old CoreAccount node <> group_member node `active` edge has no `to` time (possibly because of #5590). |
Actually, #5590 deserves a dedicated migration fixing the missing `to` time for any pair of a `deleted` edge on `-global-` with an `active` edge on another branch.
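A minimal sketch of what such a query might look like, assuming the edge property names (`status`, `branch`, `from`, `to`) and the `-global-` branch name seen elsewhere in this PR; the pattern is illustrative, not the actual migration:

```cypher
// Illustrative sketch only: close any still-open active edge whose node
// pair also has a deleted edge on -global-, reusing the deletion time as
// the missing `to` time. Property names assumed from the PR snippets.
MATCH (rel)-[deleted_edge {status: "deleted", branch: "-global-"}]-(peer)
MATCH (rel)-[active_edge {status: "active"}]-(peer)
WHERE active_edge.branch <> "-global-"
  AND active_edge.to IS NULL
SET active_edge.to = deleted_edge.from
```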
A resolved comment thread on backend/infrahub/core/migrations/graph/m019_restore_rels_to_time.py (outdated).
This is a good start. As I say in my comments, I think we should split this into at least 2 migrations. I took a shot at writing a more general version of what I think the first migration should be.
Two more resolved comment threads on backend/infrahub/core/migrations/graph/m019_restore_rels_to_time.py (outdated).
Force-pushed ab99efb to 4ea0e08
Force-pushed a1d546e to 7828def
Force-pushed 7828def to a842f14
This is most of the way there; some comments on the queries.
Six resolved comment threads on backend/infrahub/core/migrations/graph/m019_restore_rels_to_time.py (outdated).
Force-pushed cd4c02b to 746e80e
Force-pushed 746e80e to 699adfb
Looking great, just a few small comments to go.
""" | ||
|
||
params = {"global_branch": GLOBAL_BRANCH_NAME} | ||
self.params.update(params) |
I think you can drop this `params` update.
MATCH (rel)-[active_edge {status: "active"}]-(peer_2)
RETURN active_edge.branch as active_edge_branch
LIMIT 1
}
If a relationship is created on a branch and then merged into main, there will actually be 2 `active` edges between the `Node` node and the `Relationship` node, and it is possible that a node in this situation is deleted on just one of the branches for which it has an active edge. A minimal illustration of that state is sketched below.
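Purely illustrative data, with labels, edge type, and branch names assumed rather than taken from an actual database:

```cypher
// After merging "my-branch" into main, the same Node/Relationship pair
// carries two active edges, one per branch (illustrative values only).
CREATE (r:Relationship)<-[:IS_VISIBLE {status: "active", branch: "main"}]-(n:Node)
CREATE (r)<-[:IS_VISIBLE {status: "active", branch: "my-branch"}]-(n)
```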
Would it work to just use `deleted_edge_branch` here instead of trying to get `active_edge_branch`? `deleted_edge_branch` should be the correct branch to delete, right?
I should have added more comments. Taking into consideration that the relationship and the node might not have the same branch support type, we may need to retrieve the active branch. In detail:
- If rel is agnostic, we should delete on the global branch (and we do not use deleted_edge_branch).
- If rel is aware and the deleted node is aware, we should use the deleted edge branch.
- If rel is aware and the deleted node is agnostic, we need to create deleted edges for any branch on which this relationship exists. I did not have in mind situations where there could be multiple active branches, so I think we should actually retrieve every distinct branch for which an active edge exists, instead of a single one as is currently done (see the sketch below).
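A sketch of that change against the snippet under review, reusing its variable names; illustrative only:

```cypher
// Collect all distinct branches that carry an active edge, rather than
// returning an arbitrary single one with LIMIT 1.
MATCH (rel)-[active_edge {status: "active"}]-(peer_2)
RETURN collect(DISTINCT active_edge.branch) AS active_edge_branches
```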
This makes sense to me.
WITH rel, peer_2, branch, branch_level, deleted_time
MATCH (rel)<-[:IS_VISIBLE]-(peer_2)
MERGE (rel)<-[:IS_VISIBLE {status: "deleted", branch: branch, branch_level: branch_level, from: deleted_time}]-(peer_2)
}
There could also be `HAS_OWNER` or `HAS_SOURCE` edges linking a `Relationship` node to a `Node`. I believe these are not widely used right now, but adding 4 more subqueries to handle them won't hurt; a sketch of one such subquery follows.
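For instance, one of those extra subqueries might mirror the `IS_VISIBLE` merge above. This is a sketch only: it continues from the same variables as the snippet under review, and the edge direction for `HAS_OWNER` is assumed to match the `IS_VISIBLE` pattern.

```cypher
// Sketch: same deleted-edge merge as for IS_VISIBLE, applied to HAS_OWNER
// (HAS_SOURCE would follow the same shape). Direction and property names
// are assumed, not verified against the schema.
WITH rel, branch, branch_level, deleted_time
MATCH (rel)<-[:HAS_OWNER]-(owner)
MERGE (rel)<-[:HAS_OWNER {status: "deleted", branch: branch, branch_level: branch_level, from: deleted_time}]-(owner)
```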
Fixes IFC-1204. Detail of the issue is within the migration code.

I updated the PR to have 3 separate queries.

Note that testing each query separately might not be relevant, as:
- `NodeListGetRelationshipsQuery` filters on both the requested branch AND -global- even for fully aware nodes/rels, so setting an edge on global would not break it
- … `deleted` edge.

So in the end, a single test has been added reproducing (almost) the corrupted state observed on the user db. Note this test does not trigger the IFC-1204 bug, but a manual test confirmed this migration fixes it.