Compiler validation: provide an option for PGo to attempt compiling invariants into generated code #122
Some thoughts, after mulling over the noted challenges.

It seems that attempting to work backwards from plain TLA+ invariants, written in terms of the TLA+ output, may be an exercise in frustration. By design, the variables involved in the TLA+ output may differ significantly from those available to the generated code, which means that, if we want to go down this road, we face another variation of the abstraction problem inherent to what PGo tries to do.

One way to address this might be to allow archetype-level invariants to be specified in MPCal, in terms of the archetype resources. This might allow an implementation to reasonably execute the invariants, but it would in turn make it highly unclear what the TLA+ model should do. Worse, the invariants might need to perform reads on the archetype resources (because that is the only way to access such things at runtime), meaning that the assertions would destructively affect system state. And that is before we even ask what the TLA+ translation of such invariants might look like. Clearly, the "obvious" solution has some issues from a practical viewpoint.

At a high level, the biggest roadblock might just be that, unlike the TLA+ spec, the implementation is unlikely to ever have all the information it needs to check most interesting assertions (at least, in one centralised location). All that being said, maybe the answer lies in asking: how do you check distributed assertions at runtime? Distributed tracing + verification seems to be an answer here.
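To make the destructive-read problem concrete, here is a minimal Go sketch. Every name in it (`ArchetypeResource`, `channelResource`, `checkInvariant`) is invented for illustration and is not PGo's actual runtime API. The point: if the only way an invariant can observe a resource is the same `Read` operation the archetype itself uses, then evaluating the invariant consumes state.

```go
package main

import "fmt"

// ArchetypeResource is a hypothetical stand-in for a runtime
// resource interface; PGo's real interface differs.
type ArchetypeResource interface {
	Read() (interface{}, error)
	Write(value interface{}) error
}

// channelResource models a resource backed by a message channel:
// reads are destructive, since each Read consumes a message.
type channelResource struct {
	ch chan interface{}
}

func (r *channelResource) Read() (interface{}, error) {
	return <-r.ch, nil // side effect: the message is gone
}

func (r *channelResource) Write(v interface{}) error {
	r.ch <- v
	return nil
}

// checkInvariant is what a compiled archetype-level invariant might
// look like: to evaluate its predicate it must Read the resource,
// removing the message the archetype itself was about to process.
func checkInvariant(inbox ArchetypeResource) bool {
	msg, err := inbox.Read()
	if err != nil {
		return false
	}
	_, ok := msg.(string) // hypothetical predicate: "messages are strings"
	return ok
}

func main() {
	inbox := &channelResource{ch: make(chan interface{}, 1)}
	inbox.Write("hello")
	fmt.Println("invariant holds:", checkInvariant(inbox))
	// The archetype body that was supposed to process "hello" now
	// finds its inbox empty: the assertion ate its own input.
}
```

The assertion changed the very state it was meant to observe, which is exactly the problem described above.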
Both of these things would not be unreasonable to do, e.g. using our distributed tracing lib or something like it: https://github.com/DistributedClocks/tracing. The challenge would then be to map the outputs back to something that the verification conditions can "understand". This would require (1) obviously, generating something one could feed to a model checker, and (2) some care when writing those conditions. For instance, one might want to avoid depending strongly on overly conservative model checking assumptions (e.g. some exact configuration of nodes, or a statically chosen set or overly restrictive model of input/output values), which would prevent the verification conditions from being applied to the more diverse set of scenarios that might occur at runtime.

So, this challenge seems to be more or less a runtime problem. It might be possible to somehow assist developers in writing appropriately abstract verification conditions, but this is essentially a case of needing to record system behaviour, format it for e.g. TLC, and check that the properties are satisfied.
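As a rough Go sketch of that "record behaviour, format it for TLC" step: the `Event` struct and `ToTLATrace` function below are invented stand-ins, not part of any existing library; a real recorder such as the tracing lib above would also capture vector clocks so events can be ordered across nodes.

```go
package main

import (
	"fmt"
	"strings"
)

// Event is one recorded state change on some node. A real tracing
// library would attach ordering metadata (e.g. vector clocks);
// this sketch assumes events arrive already ordered.
type Event struct {
	Node string
	Var  string
	Val  string // a TLA+ value, already rendered as text
}

// ToTLATrace renders recorded events as a TLA+ sequence of records.
// A hand-written "trace spec" could then replay this sequence,
// stepping TLC through exactly the observed behaviour while the
// original verification conditions are checked.
func ToTLATrace(events []Event) string {
	var b strings.Builder
	b.WriteString("Trace == <<\n")
	for i, e := range events {
		if i > 0 {
			b.WriteString(",\n")
		}
		fmt.Fprintf(&b, "  [node |-> %q, var |-> %q, val |-> %s]", e.Node, e.Var, e.Val)
	}
	b.WriteString("\n>>")
	return b.String()
}

func main() {
	events := []Event{
		{Node: "server1", Var: "state", Val: "\"leader\""},
		{Node: "server2", Var: "state", Val: "\"follower\""},
	}
	fmt.Println(ToTLATrace(events))
}
```

The interesting (and open) part is everything this sketch skips: mapping implementation-level values back to spec-level ones, and keeping the verification conditions abstract enough to apply to whatever configurations show up at runtime.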
@fhackett Just in case you're not aware of work that appears related:
By the way, have you considered targeting programming languages other than Go (e.g. https://microsoft.github.io/coyote/) that might be more amenable to verification?
@lemmy thanks for the links. I wasn't aware of these, and they were interesting to check out. Links to material referenced in the YT material:
Some notes, based on my understanding:
re: using other programming languages: given a rewrite currently in progress, it should become a lot simpler to add other languages / output modes to PGo. This (in my opinion) is also important in research, as it relates to our ability to try things and ask questions without being (too) afraid of the effort answering them might require :)
From private conversations with the authors of that paper: most of the problems came from trying to capture the internal state of MongoDB. This didn't prove easy because of various locks at the implementation level; recording traces at the network layer would likely have been less problematic. Secondly, trace checking was actually super successful, in that it immediately discovered a bug. The problem was that it was a known bug, and there is no way to "blacklist" known bugs. Note that test case generation didn't find a single bug; not promising!
Interesting, thanks for the context. re: capturing internal state, that makes more precise what we might have to deal with, and reinforces the idea that we might have better luck, given our generative control over some aspects of concurrency. re: test generation, also good to know. I didn't notice that detail. |
It would be much easier to argue PGo's soundness if generated programs could check themselves.
This issue suggests adding a flag to PGo such that it attempts to extract safety properties from the TLA+ and compile assertions of them into the generated code. Temporal properties will likely not be practical to handle.
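For concreteness, a compiled assertion might look something like the following Go sketch. All names are hypothetical, and checking at label boundaries is only one plausible strategy; none of this reflects PGo's actual output.

```go
package main

import "fmt"

// checkInvariant is what PGo might emit for a TLA+ safety property
// such as
//   Inv == counter <= MaxValue
// evaluated over local variables the generated code actually has.
func checkInvariant(counter, maxValue int) {
	if !(counter <= maxValue) {
		panic(fmt.Sprintf("invariant violated: counter = %d > MaxValue = %d",
			counter, maxValue))
	}
}

func main() {
	const maxValue = 10
	counter := 0
	for i := 0; i < 5; i++ {
		counter++
		// One plausible compilation strategy: re-check the invariant at
		// every label boundary, mirroring where TLC evaluates it.
		checkInvariant(counter, maxValue)
	}
	fmt.Println("ran to completion; invariant held at every step")
}
```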
Challenges: