Replies: 3 comments
-
The reason I asked @nikswamy for something like this is that I'm used to writing proofs in F*, not Coq. That is, I write a proof by stringing together a series of hints: asserting relevant facts, triggering quantifiers, invoking useful lemmas with appropriate parameters, introducing and eliminating terms, etc. I work forward from the preconditions, establishing new facts along the way, but not being very thorough about it because I know Z3 will fill in simple gaps in the reasoning process. I don't write proofs like a Coq person would, i.e., carefully working backward from a goal to hypotheses, specifying the justification for every step in this process.

So, when I heard about there being tactics in F*, this is what I thought they'd be like. I thought I could program a tactic algorithm that would take as input some terms to reflect on, and would generate as output the various things I think of as being a proof: assertions, triggering terms, lemma invocations, etc. This would be very useful to me, since I have a lot of lemmas that are tedious to write because they're just applying a basic algorithm.

For instance, I often have to generate proofs that predicates P1, P2, ..., P20 all preserve some invariant. Those predicates have a bunch of subterms, a few of which are hard to reason about. So I write a lemma for each complex subterm showing a key property about it with respect to the invariant. Then, for every top-level predicate, I write the proof by manually unfolding the predicate's definition and, everywhere there's an invocation of a complex subterm, invoking the corresponding lemma. I'd like to be able to metaprogram this simple algorithm (see the sketch below).

I know this is a different way of thinking than tactics are designed for. They're designed for programming a Coq-like backwards proof. But, given that this is a tactic language embedded in F*, I'm guessing there are a lot more users used to writing proofs in the forward-hinting style than the backwards-thorough style. So I may not be the only one who would appreciate a means to use F* tactics that way.
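
A minimal sketch of that pattern (hypothetical names `state`, `inv`, `complex`, `complex_preserves_inv`, `p1`; not from any particular development), just to make the shape of the per-predicate proofs concrete:

```fstar
assume val state : Type0
assume val inv   : state -> prop

(* a "complex subterm" that several of the predicates mention *)
assume val complex : state -> state

(* the key lemma about it, proved once *)
assume val complex_preserves_inv :
  s:state -> Lemma (requires inv s) (ensures inv (complex s))

(* one of the top-level predicates P1..P20 *)
let p1 (s s':state) : prop = s' == complex s

(* the tedious per-predicate lemma: unfold p1 and invoke the subterm lemma *)
let p1_preserves_inv (s s':state)
  : Lemma (requires inv s /\ p1 s s') (ensures inv s')
  = complex_preserves_inv s
```

For twenty predicates, the last two definitions get repeated with only the subterm lemmas changing, which is exactly the kind of thing one would like to generate with a tactic.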
-
Hi Nik, Jay. Indeed this is a usual problem when proving lemmas, due to having value types in most places. (I'll write a bit of recap here.) As you notice, using an `assert ... by` works:

```fstar
val test0 : (x:int) -> Lemma (requires x > 2) (ensures p x)
let test0 x = assert (p x) by (apply_lemma (`prove_p))
```

(Though admittedly that becomes a bit annoying when it is a complex formula.) As noted, the goal in that case looks like:

```
Goal 1/1
x: int
p: pure_post unit
uu___: x > 2 /\ (forall (pure_result: unit). p x ==> p pure_result)
--------------------------------------------------------------------------------
squash (p x)
(*?u52*) _
```

This comes "for free" from the way we extract tactic-marked assertions from the VCs. Hence this is only the case for an `assert ... by`. Another alternative is to write a tactic to turn a goal for a `Pure` computation into a goal about its result value, with the precondition as a hypothesis:

```fstar
assume
val p (x:int) : prop

assume
val prove_p (x:int) : Lemma (requires x > 1) (ensures p x)

private
val __intro_pure :
    a:Type ->
    b:Type ->
    pre : (a -> pure_pre) ->
    post : (x:a -> pure_post' b (pre x)) ->
    (x:a -> squash (pre x) -> y:b{post x y}) ->
    (x:a -> Pure b (requires (pre x)) (ensures (post x)))

private
let __intro_pure a b pre post f x = f x ()

let intro_pure () : Tac (binder * binder) =
  apply (`__intro_pure);
  let x = intro () in
  let pre = intro () in
  (x, pre)

val test : (x:int) -> Lemma (requires x > 2) (ensures p x)
let test =
  _ by (let _ = intro_pure() in
        apply_lemma (`prove_p))

val test2 : (z:int) -> (y:int) -> Lemma (requires z > y /\ y > 2) (ensures p z)
let test2 =
  _ by (let _ = intro () in
        let _ = intro_pure() in
        apply_lemma (`prove_p))
```

The `intro_pure` tactic turns the `Pure` goal into a plain value goal, with the precondition as a hypothesis. In the first example, this leaves the following goal:

```
Goal 1/1
x: int
uu___: squash (x > 2)
--------------------------------------------------------------------------------
y: unit{p x}
(*?u135*) _
```

which is really just `squash (p x)`.

Now, this is still not very satisfactory. A user would justifiably just want to write the proof directly, without the `intro_pure` shim (see the sketch below). So if we could make this work via some encoding like the above, that would be much better IMO. At least for proving...
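
Concretely, something like the following, which is exactly what does not work today, since the hole only gets the value type `unit`:

```fstar
val test : (x:int) -> Lemma (requires x > 2) (ensures p x)
let test x = _ by (apply_lemma (`prove_p))  (* no intro_pure shim *)
```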
-
There may be a middle ground between adding computation holes and the encoding sketched above.

In the thread, there are already several subtle implementation details for adding computation holes. Another one that comes to mind is that the typechecker today takes the expected (value) type of the hole from the surrounding context. A middle ground may be that the synth hook always returns a value-typed term, but with the lemma's pre- and postconditions made available to it; in this example, the programmer would then still write the proof as a tactic filling a hole, much as in the encoding above, rather than via a new primitive.

One unknown here is how these pre- and postconditions (or WPs) are tracked in the proof state and exposed to the synth hook.
-
I was talking with @jaylorch after @mtzguido's seminar on tactics and metaprograms.
Say you want to metaprogram the proof of a lemma using something like this:
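
A sketch of the general shape, with hypothetical placeholders (`pre`, `post`, `build_the_proof`, `my_lemma`), just to fix the form:

```fstar
open FStar.Tactics

(* hypothetical placeholders *)
assume val pre  : int -> prop
assume val post : int -> prop
assume val build_the_proof : unit -> Tac unit

val my_lemma : x:int -> Lemma (requires pre x) (ensures post x)
let my_lemma x = _ by (build_the_proof ())  (* the Lemma body is a hole, to be filled by a tactic *)
```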
This doesn't work today, because holes in Meta-F* just have value types. In this case, the tactic just sees that the goal is `unit`, and further, the tactic engine does not allow solving goals with terms that have non-trivial computation types.

To make this concrete, I'll use a simple example. Say I have this in scope:
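
Something like the `p` and `prove_p` from the recap in the reply above:

```fstar
assume
val p (x:int) : prop

assume
val prove_p (x:int) : Lemma (requires x > 1) (ensures p x)
```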
And now I want to metaprogram a proof of the following lemma:
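
A sketch of the attempted definition, reconstructed from the points that follow (the hole filled with `exact` of the quoted call to `prove_p`):

```fstar
val test : (x:int) -> Lemma (requires x > 2) (ensures p x)
let test x = _ by (exact (`(prove_p x)))
```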
This fails for several related reasons:

1. The hole just has type `unit`.
2. When checking the quoted term in `exact`, we check it just as a `unit` in the current environment, excluding the precondition of the lemma. So, this produces a proof obligation `x > 1`, which is not provable.
3. For the postcondition, what we do is simply to check that any solution of the hole, i.e., any `unit`-typed term, is well-typed as the body of the lemma. In this case, that also produces a proof obligation of `x > 2 ==> p x`, which again is not provable (without the call to `prove_p`).

That brings me to the title of this post:
Can we introduce a new kind of "computation hole" in Meta-F*: i.e., a hole with a computation type? This would address the three problems above:
1. In this case, the hole would have type `Lemma (requires x > 2) (ensures p x)`.
2. We could introduce a new primitive, e.g., `exact_computation` (see the sketch below), which, when typechecking a term for solving a computation hole, would typecheck it in a context where we expect it to have a computation type.
3. We would use the computation type of the hole when building the VC for the context that includes it, not just the value type.
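
For instance (a sketch only; `exact_computation` is the proposed primitive, not something that exists today), the example above might then be discharged as:

```fstar
let test (x:int) : Lemma (requires x > 2) (ensures p x) =
  _ by (
    (* the hole would carry the computation type
       Lemma (requires x > 2) (ensures p x), so the precondition x > 2
       would be in scope when checking the quoted term *)
    exact_computation (`(prove_p x)))
```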
There are some subtleties though:
- Can other terms be dependent on computation holes? The simple answer is no. But maybe there are some cases in which we would like to allow it; that would bring up effect elaboration etc.
- Is it possible to solve a computation hole by anything other than building some syntax and calling `exact_computation`? This also seems hard, since the behavior of a computation type is determined by the user-defined effect system. So, incrementally filling a computation hole by transforming one computation hole into another would require applying some effect combinator.
t
and solving usingexact_computation t
could be useful in metaprogramming things like lemmas that would be too painful to type out by hand.WDYT @mtzguido @aseemr @jaylorch and others?