
Add Intersection Type related problems #205

Merged
3 commits merged into uds-psl:master on Aug 24, 2023

Conversation

mrhaandi
Collaborator

New problems

  • strong normalization wrt. beta-reduction in the lambda-calculus
  • intersection type inhabitation, typability, type checking

New reductions

  • reduction from Krivine machine halting to strong normalization
  • reduction from intersection type typability to intersection type type checking
  • reduction from strong normalization to intersection type typability
  • reduction from simple semi-Thue system 01 rewriting to intersection type inhabitation

…calculus

added reduction from Krivine machine halting to strong normalization
added reduction from intersection type typability to intersection type type checking
added reduction from strong normalization to intersection type typability
added reduction from simple semi-Thue system 01 rewriting to intersection type inhabitation
@mrhaandi mrhaandi requested a review from yforster August 23, 2023 12:58
@DmxLarchey
Collaborator

Btw, about ten years ago I implemented a significant part of Krivine's book in Coq, including in particular the equivalence between SN and typability in system D for pure lambda terms. The project used scoped de Bruijn syntax, though.

I used the following additional reference for system D: Strict Intersection Types for the Lambda Calculus by Steffen van Bakel. Empty intersections were also forbidden, but by assuming that the list was not empty rather than by representing it as a pair of a head and a tail.
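For illustration, a minimal Coq sketch of the head-and-tail representation mentioned above (constructor names here are hypothetical, not the development's actual identifiers):

```coq
(* Hypothetical sketch: strict intersection types where the
   left-hand side of an arrow is a nonempty intersection,
   represented as a head type paired with a (possibly empty)
   list of further types. The empty intersection is thereby
   unrepresentable by construction. *)
Inductive sty : Type :=
  | atom : nat -> sty                     (* type variable *)
  | arr : sty -> list sty -> sty -> sty.  (* (s /\ s1 /\ .. /\ sn) -> t *)
```

With this encoding no side condition on non-emptiness is needed; every `arr` carries at least the head type on its left.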

For some reason I do not recall now, I had to specialize the type system to bounded assignments, i.e. to measure the height of typing derivations. I guess some proofs relied on that measure. Do you also need it for your purpose of just characterizing SN with D-typability?

@mrhaandi
Collaborator Author

mrhaandi commented Aug 23, 2023

Do you also need it for your purpose of just characterizing SN with D-typability?

@DmxLarchey No, there are several key aspects to this particular proof.

  • As pointed out in [1, Remark 17.2.16 (ii)], almost all published proofs of this fact (including van Bakel's) contain mistakes. Chances are, you had exactly this problem. I also had this issue.
  • The technique I use is not exactly from the literature. Instead of using strong normalization directly, I rely on a family of perpetual reduction strategies. It is very similar to [2]; however, I do not use a fixed reduction strategy ([2] uses Barendregt's perpetual F_infinity reduction strategy). This results in a simpler proof.

[1] Barendregt, Henk, Wil Dekkers, and Richard Statman. Lambda calculus with types. Cambridge University Press, 2013.
[2] Neergaard, Peter Møller. "Theoretical pearls: A bargain for intersection types: a simple strong normalization proof." Journal of Functional Programming 15.5 (2005): 669-677.

@DmxLarchey
Collaborator

DmxLarchey commented Aug 24, 2023

@mrhaandi Concerning the equivalence between SN and typability in system D (also denoted λ∩⁻ in the online version of Lambda Calculus with Types [1]): of course, I first had to realize that SN has to be defined inductively in terms of the Acc predicate, and especially not in terms of infinite reduction sequences (as done in most textbooks that do not focus on constructive reasoning).
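For concreteness, a minimal Coq sketch of this inductive definition of SN (here term and step are assumed placeholders, not the names used in any particular development):

```coq
(* Assumed placeholders for the syntax and one-step beta-reduction. *)
Parameter term : Type.
Parameter step : term -> term -> Prop.

(* SN as accessibility wrt. the inverse of [step]: *)
Definition SN (t : term) : Prop := Acc (fun u v => step v u) t.

(* Equivalently, as a standalone inductive predicate: a term is SN
   iff all of its one-step reducts are SN. Normal forms are SN
   vacuously, and no infinite reduction sequence can start at an
   SN term. *)
Inductive sn : term -> Prop :=
  | sn_intro t : (forall u, step t u -> sn u) -> sn t.
```

The Acc form gives well-founded induction on reduction for free, which is what constructive proofs about SN typically recurse on.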

For both sides of the equivalence, I tried to follow the proof in Krivine's book [3]:

  • To show that every D-typable term is SN, the proof follows the technique of Girard, also reproduced in [1] as I understand it. That proof did work out, except that the measure I use to prove SN a -> SN (u[a] t1 ... tn) -> SN ((λu) a t1 ... tn) is not N(t) := the sum of the lengths of all the normalizations of t (as suggested in the book), but rather L(t) := the length of the longest normalization of t. Edit: see also Lemma 17.2.2, p. 735 of [1].
  • For the converse, i.e. that every SN term is D-typable, the proof in Krivine's book fails (both in the several textbook versions and in the online version [3]). Indeed, the induction measure used to prove Thm 4.17, pp. 69-70, namely N(t) again, is actually not decreasing in the case of t := λ...λ ((x) t1) where x is a variable. In that case, there is a length-preserving one-to-one correspondence between the normalizations of t and the normalizations of t1, hence N(t) = N(t1), contrary to what is stated in the book. Notice that, and this is misleading, the argument does work for t := λ...λ ((x) t1 t2) and terms with more arguments. To solve the issue, I replaced the measure by a lexicographic product of L(t) (as above) and the strict subterm order, and it worked out. Edit: [1] suggests in Rem. 17.2.16 (ii), p. 742, that one can switch to a non-standard notion of length for reduction sequences, and this would fix the bug in Krivine's proof. I also observed that: one possibility I thought of, but did not implement in the end, was to measure the length of a reduction sequence as the total number of symbols used when writing down the sequence.

There is some work needed to define L(t) by Fixpoint on a proof sn : SN t, so L is dependent on sn but proof-irrelevant in that argument. This relies on the fact that beta-reduction is finitely branching. I would consider it an easy exercise now, but at the time it was not so easy for me to work with the dependent version of Acc_rect.
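A minimal sketch of such a definition, assuming a hypothetical reducts function that enumerates the finitely many one-step reducts (all names here are placeholders):

```coq
Require Import List.
Import ListNotations.

Parameter term : Type.
(* Assumption: beta-reduction is finitely branching, witnessed by a
   function listing the one-step reducts of a term. *)
Parameter reducts : term -> list term.

(* SN phrased via the listed reducts. *)
Definition SN (t : term) : Prop := Acc (fun u v => In u (reducts v)) t.

(* Maximum of a dependently-indexed family over a list. *)
Fixpoint max_In {A : Type} (l : list A) :
    (forall x, In x l -> nat) -> nat :=
  match l return (forall x, In x l -> nat) -> nat with
  | [] => fun _ => 0
  | x :: r => fun f =>
      Nat.max (f x (or_introl eq_refl))
              (max_In r (fun y H => f y (or_intror H)))
  end.

(* L t h : length of the longest reduction sequence from t, defined
   by dependent recursion (Acc_rect) on the proof h : SN t. *)
Definition L (t : term) (h : SN t) : nat :=
  @Acc_rect term (fun u v => In u (reducts v)) (fun _ => nat)
    (fun t _ IH => max_In (reducts t) (fun u Hu => S (IH u Hu)))
    t h.
```

Normal forms get L = 0, and each reduction step contributes a successor; the accessibility proof only justifies termination of the recursion and does not influence the computed value.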

Btw, I cannot find Remark 17.2.16 in that online version of [1], and I do not have access to the paper book. Could you please forward me the text of the remark? I am not sure the issue and your solution are the same as the ones I encountered.

@mrhaandi
Collaborator Author

For the converse, i.e. every SN term is D-typable, the proof in Krivine's book fails (both in the several textbook versions and in the online version [3]).

I studied several approaches. There are three different textbook ones; Krivine's is broken but can be fixed (as you say). Maybe the simplest proof is in the printed version of Lambda Calculus with Types (2013).
I went with my own approach (which is different from the existing ones) due to ongoing work on a different calculus for which only this approach works. I plan to refer to the implementation in this PR as an additional use case of the technique.

Member

@yforster yforster left a comment


I'm happy with the PR. My only comment is that we could specify that we are talking about the strong call-by-name lambda-calculus. This can indeed be considered the lambda-calculus, but we use the terminology weak call-by-name, so we might as well be precise here.

@mrhaandi
Collaborator Author

This can indeed be considered the lambda-calculus, but we use the terminology weak call-by-name, so we might as well be precise here.

@yforster I changed the documentation and comments from
Strong normalization wrt. beta-reduction for given terms in the lambda-calculus
to
Strong normalization for given terms in the strong call-by-name lambda-calculus

changed (scons t var) to (scons t (fun x => var x))
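For context, a minimal sketch of the stream-cons substitution primitive involved in that commit (Autosubst-style; this definition is illustrative, not necessarily the library's):

```coq
(* [scons t sigma] is the de Bruijn substitution sending index 0
   to t and index (S n) to sigma n. *)
Definition scons {X : Type} (t : X) (sigma : nat -> X) (n : nat) : X :=
  match n with
  | 0 => t
  | S m => sigma m
  end.
```

Eta-expanding var to (fun x => var x) presumably keeps the second argument of scons in the syntactic shape that the substitution lemmas and rewriting machinery expect.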
@mrhaandi mrhaandi merged commit a7f4b4f into uds-psl:master Aug 24, 2023
1 check passed