
Incremental hom search with constraints #62

Merged · 6 commits · May 20, 2024
Conversation

@kris-brown (Collaborator) commented May 19, 2024

This addresses the issues from #59.

I'll write here an exposition of the strategy taken, which could be the start of a follow-up blog post to this one. Here, $L$ will represent the pattern we are monitoring. $X$ will be the state of the world, which can be updated by deletion $del: X \leftarrowtail X'$ and by addition $update: X \rightarrowtail X'$ (via pushout from some predeclared $add: I \rightarrowtail R$).
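Concretely, in Catlab this addition-by-pushout looks like the following minimal sketch (the toy data and the names `ι`, `update`, `X′` are mine, not from the PR):

```julia
using Catlab, Catlab.CategoricalAlgebra, Catlab.Graphs

X = path_graph(Graph, 3)               # current world state: v1 → v2 → v3
I = Graph(1)                           # interface: a single vertex
R = path_graph(Graph, 2)               # R glues one new edge onto that vertex

add = ACSetTransformation(I, R; V=[1]) # predeclared add : I ↣ R (monic)
ι   = ACSetTransformation(I, X; V=[3]) # where the interface sits inside X

po     = pushout(add, ι)
X′     = ob(po)                        # the updated world
update = legs(po)[2]                   # update : X ↣ X′, monic since add is
```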

I'll now consider three types of constraints:

1. Monic constraints

These were straightforward to implement, because a monic constraint cannot be violated by deleting or adding things via the above means. Once we start considering pushouts with non-monic $add: I \rightarrow R$, which can merge things, we'll have to reconsider this.

Our general strategy for finding new homs is (suitably constrained) backtracking homomorphism search, and monic constraints play very nicely with that computationally (Catlab's backtracking search already supports them). So the only change needed to support monic constraints is to attach an IncConstraints object to the IncHomSet, with a monic field that gets passed to any backtracking searches.
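For illustration, this is how a monic constraint enters Catlab's backtracking search (toy data; the IncHomSet plumbing itself lives in AlgebraicRewriting):

```julia
using Catlab, Catlab.CategoricalAlgebra, Catlab.Graphs

L = Graph(2)                             # pattern: two vertices, no edges
X = path_graph(Graph, 2)                 # world: v1 → v2

length(homomorphisms(L, X))              # 4: the two pattern vertices may collapse
length(homomorphisms(L, X; monic=true))  # 2: injective matches only
```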

2. Positive application conditions (PAC): $p: L \rightarrow P$

A common reaction to the idea of PACs is "why not just use a bigger pattern?". There are many reasons not to (e.g. it's tedious to copy around a lot of dead weight in the $I$ and $R$ of the rewrite rule), and, most relevantly for agent-based modeling, there may be multiple ways to satisfy the PAC, but these should not be considered different matches!

One way to prove the existence of a commuting morphism $P \rightarrow X$ is to carry a witness, but we don't want to carry these witnesses around all the time (moreover, we'd need all of them, since some may get invalidated while others persist). So an existing match is presumed to have some commuting morphism $P \rightarrow X$, but we do not store it explicitly.

When a deletion does not destroy a given map $L \rightarrow X$, how do we know that same deletion has not invalidated the match via a PAC? For now, we judge it efficient enough to check, after each deletion and for each match, whether that match still satisfies all of its PACs, because these are just small search problems (the hom searches start out with most of the domain already initialized). Likewise, when we run our normal procedure for finding new morphisms introduced by some addition $add: I \rightarrowtail R$, we can simply filter them by whether they satisfy the PACs.
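As a sketch of that check (satisfies_pac is an illustrative name, not library API): given a match $m: L \rightarrow X$ and a PAC $p: L \rightarrow P$, we look for a commuting $P \rightarrow X$, seeding Catlab's backtracking search with the part of the domain that $m$ already determines:

```julia
using Catlab, Catlab.CategoricalAlgebra

function satisfies_pac(m::ACSetTransformation, p::ACSetTransformation)
  P, X = codom(p), codom(m)
  # Pin each part of P in the image of p to wherever m sends the
  # corresponding part of L, so the search only extends the match to P ∖ L.
  init = NamedTuple(o => Dict(p[o](i) => m[o](i) for i in parts(dom(p), o))
                    for o in objects(acset_schema(P)))
  !isnothing(homomorphism(P, X; initial=init))
end
```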

The tricky part about PACs is that adding something (the world is updated, $update: X \rightarrowtail X'$, due to an addition $add: I \rightarrowtail R$) may activate some new match $L \rightarrow X'$ completely unrelated to any of the old matches $Hom(L,X)$, by virtue of allowing that match to satisfy its PAC. Here we use cached partial overlaps once again to solve a hom search problem that has an existential constraint[^1]: we want a morphism $P \rightarrow X'$ (which implicitly gives a match $L \rightarrow X'$ by precomposing with $p$) such that some element of $P \setminus L$ maps to $X' \setminus X$ (thus, the addition enabled the activation of the match).

But searching for all morphisms $P \rightarrow X'$ seems just as hard as recomputing the hom set we care about, $Hom(L,X')$. The solution is to enumerate overlaps $(P \setminus L) \rightarrow (X' \setminus X)$, which are all the possible ways something newly added could give you a match purely by activating the PAC. Now, we don't literally have ACSets like $P \setminus L$, because they would have undefined foreign keys (i.e. they don't satisfy the discrete opfibration condition); but, for the subobject induced by $p: L \rightarrow P$, we have the complement $\sim p$ as our best approximation to $P \setminus L$ that doesn't leave anything out. So we can search for overlaps between $\sim p$ and $\sim add$, and thereby obtain a bunch of very constrained, tractable search problems.
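To make the existential constraint concrete, here is a deliberately naive reference version that enumerates all of $Hom(P, X')$ and filters; this is exactly the cost the cached overlaps between $\sim p$ and $\sim add$ avoid (illustrative names, not the PR's implementation):

```julia
using Catlab, Catlab.CategoricalAlgebra

# A morphism h : P → X′ witnesses a *newly activated* match iff some part
# of P outside the image of p lands outside the image of update : X ↣ X′.
function newly_activated(p::ACSetTransformation, update::ACSetTransformation)
  P, X′ = codom(p), codom(update)
  filter(homomorphisms(P, X′)) do h
    any(objects(acset_schema(P))) do o
      fresh_P  = setdiff(parts(P, o),  collect(p[o]))
      fresh_X′ = setdiff(parts(X′, o), collect(update[o]))
      any(i -> h[o](i) in fresh_X′, fresh_P)
    end
  end
end
```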

3. Negative application conditions (NAC): $n: L \rightarrow N$

When we discover a putative new match, we can filter it based on whether it satisfies all of its NACs, so there is no tricky aspect to supporting NACs when we are adding things. The challenge is that, when we delete something, a match could spring out of nowhere because the deletion allows its NAC to be satisfied. In the case of DPO deletion, there is something similar to the above we can do: if the deletion was due to a pushout complement induced by a morphism $left: L \leftarrowtail I$, we search for overlaps between $\sim n$ and $\sim left$, since we want all maps $N \rightarrow X$ that send something in $N \setminus L$ to something in $X \setminus X'$. Once we have those, we can get a newly activated match $L \rightarrow X$ by precomposing with $n$, and our hom search can ensure that the $L$ subobject of $N$ is only allowed to map into the $X'$ subobject of $X$.
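The NAC filter on putative new matches mirrors the PAC check sketched earlier, with the result negated (satisfies_nac is again an illustrative name):

```julia
using Catlab, Catlab.CategoricalAlgebra

function satisfies_nac(m::ACSetTransformation, n::ACSetTransformation)
  N, X = codom(n), codom(m)
  # Seed the search so that L's image in N must land where m sends L in X.
  init = NamedTuple(o => Dict(n[o](i) => m[o](i) for i in parts(dom(n), o))
                    for o in objects(acset_schema(N)))
  isnothing(homomorphism(N, X; initial=init))  # satisfied iff no commuting N → X
end
```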

What's even trickier about deletion enabling new matches due to NACs is that SPO and SqPO deletion have a "cascade delete" effect. This is bad because we cannot statically compute, in advance, all the possible overlaps needed to solve the existential constraint efficiently. By default, when doing a DPO deletion, we opt for the algorithm above; when this is not possible, we take the following strategy.

We need to compute overlaps on the fly, between $\sim n$ and $\sim del$ (rather than $\sim left$). Now, $\sim del$ is still roughly the size of $\sim left$ except in pathological cases; the real shame is that we can't compute it once and for all before we start a simulation. However, once we have it, things work similarly to the above.

Conclusion

Incremental hom search is cool! There is still a lot of performance left on the table (e.g. #63, integrating with MarkAsDeleted, integrating with VM hom search, using nauty to make partial-overlap and subobject finding faster). Something that remains to be addressed is the incorporation of dangling conditions, in which case deleting something could activate new matches unrelated to the existing hom set.

[^1]: To reiterate something from the blog post: it's easy to integrate universally quantified constraints into backtracking hom search, because you can kill a branch the moment the constraint is violated; existential constraints are trickier, because you'll waste a lot of time going down fruitless paths.

@kris-brown merged commit e5b2bf7 into main on May 20, 2024 · 7 checks passed
Closes: Extend incremental hom search to include monic constraints, application conditions, gluing conditions