I think it's important to have a core language that serves as a good default target.
My initial attempt looks like this:
```ocaml
type core =
  (* first four constructors correspond to regular term constructors *)
  | Operator of string * core_scope list
  | Var of string
  | Sequence of core list
  | Primitive of primitive
  (* plus, core-specific ctors *)
  | Lambda of sort list * core_scope
  | CoreApp of core * core list
  | Case of core * core_scope list
  (** A metavariable refers to a term captured directly from the
      left-hand-side *)
  | Metavar of string
  (** Meaning is very similar to a metavar in that it refers to a capture
      from the left-hand-side. However, a meaning variable is interpreted. *)
  | Meaning of string
```
This issue is here to braindump some thoughts and references.
*Implementing functional languages: a tutorial*: "The principal content of the book is a series of implementations of a small functional language called the Core language. The Core language is designed to be as small as possible, so that it is easy to implement, but still rich enough to allow modern non-strict functional languages to be translated into it without losing efficiency."
The first question to answer is what exactly is this core language for? It's to unambiguously define the semantics of a language (via translation to core). It's nice if we can do other things like step it with a debugger, but that's secondary.
Two important concerns, fairly unique to this project, are inclusion of terms from other languages and computational primitives.
By "terms from other languages" I mean that denotational semantics (in general / in LVCA) is about translating from language `A` to language `B`. When using core, this is specialized to a translation from `A` to `Core(B)`, where `Core(B)` is core terms with terms from `B` embedded. As an example, a case expression in `Core(bool)`:
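One possible concrete rendering (the syntax here is purely illustrative, not settled notation; the point is only that `true()` and `false()` are terms of the embedded language, not of core):

```
case(true(); true() -> false(); false() -> true())
```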
Some of the syntax is up for debate, but the point is that this is the equivalent of (OCaml) `match true with true -> false | false -> true`, but where booleans are not built into core at all; they come from the language embedded in core.
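That OCaml equivalent can be checked directly:

```ocaml
(* Booleans here are ordinary OCaml booleans; this just spells out the
   expression the case form corresponds to. *)
let result = match true with true -> false | false -> true
(* result = false *)
```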
The other concern I mentioned above is computational primitives, by which I mean primitives that are expected to actually do something. For example, you might have a primitive `#not`, in which case you could write something like the above example as `#not(true())`. Here `#not` is not built into the specification of core, but it's provided by the runtime environment. (I'm using a hash to denote primitives, but it's just a convention I think is nice.)
With primitives we're now dealing with "core plus": core extended with a set of primitives. So the example `#not(true())` is a term in `Core(bool()){#not}`. The syntax is completely undecided, but the idea is that this term can be evaluated in any environment that provides the `#not` primitive. I think this is really cool. You could easily find the set of primitives your language relies on. It would even be possible to do a translation to a different set of primitives, e.g. `Core(bool()){#not} -> Core(bool()){#nand}`.
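A minimal sketch of the "evaluated in any environment that provides the primitive" idea, assuming an illustrative term type and an association list as the environment (none of this is LVCA's actual API):

```ocaml
(* Illustrative term type: embedded-language operators plus primitive
   applications. *)
type term =
  | Operator of string * term list  (* e.g. true() *)
  | Prim of string * term list      (* e.g. #not(true()) *)

(* A runtime environment maps primitive names to implementations. *)
let environment : (string * (term list -> term)) list =
  [ ( "not"
    , function
      | [ Operator ("true", []) ] -> Operator ("false", [])
      | [ Operator ("false", []) ] -> Operator ("true", [])
      | _ -> failwith "#not: expected a single boolean" ) ]

(* Evaluate a term; evaluation fails if the environment lacks a primitive. *)
let rec eval env = function
  | Operator (name, args) -> Operator (name, List.map (eval env) args)
  | Prim (name, args) -> (List.assoc name env) (List.map (eval env) args)
```

For example, `eval environment (Prim ("not", [Operator ("true", [])]))` produces `Operator ("false", [])`, and evaluating the same term in an environment without `"not"` raises `Not_found`.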
Should functions be applied to multiple arguments simultaneously? GHC Core only allows a single argument; Mitchell's thesis and GRIN allow multiple arguments.
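The two choices are interconvertible, which suggests the question is mostly about convenience and arity information. A sketch (both AST types are illustrative assumptions):

```ocaml
(* Single-argument application, GHC Core style. *)
type single = Var of string | App of single * single

(* Multi-argument application, Mitchell/GRIN style. *)
type multi = VarN of string | AppN of multi * multi list

(* Lower multi-arg applications to nested single-arg ones. *)
let rec lower : multi -> single = function
  | VarN x -> Var x
  | AppN (f, args) ->
      List.fold_left (fun acc a -> App (acc, lower a)) (lower f) args
```

For example, `lower (AppN (VarN "f", [VarN "x"; VarN "y"]))` yields `App (App (Var "f", Var "x"), Var "y")`.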
Should let-bindings be added as terms? All three of the systems just mentioned have them. By the way, here's a great answer about why Gallina (Coq's core) needs let-bindings. As far as I can tell, that's not a concern for us, since we don't plan for our core to be dependently typed.
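The reason it's not a concern here: in a non-dependent core, a let-binding can be desugared to an immediately-applied lambda, so it need not be a primitive term former. A sketch over an illustrative expression type:

```ocaml
type expr =
  | Var of string
  | Lam of string * expr
  | App of expr * expr
  | Let of string * expr * expr

(* let x = e in b  ==>  (fun x -> b) e *)
let rec desugar = function
  | Var x -> Var x
  | Lam (x, b) -> Lam (x, desugar b)
  | App (f, a) -> App (desugar f, desugar a)
  | Let (x, e, b) -> App (Lam (x, desugar b), desugar e)
```

(In a dependently typed core like Gallina this translation loses typing information, which is exactly why Coq keeps `let` primitive.)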