I couldn't find many small implementations of a simply-typed lambda calculus, so I decided to create one in F#. This might be useful for anyone interested in the basics of implementing a functional programming language.
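For readers who want a feel for what goes into such an implementation, here is a minimal sketch in Haskell (not the author's F# code; the tiny term language with booleans as the only base type is my own simplification):

data Ty = TBool | TArr Ty Ty deriving (Eq, Show)

data Tm
  = Var String
  | Lam String Ty Tm   -- \x : T -> e
  | App Tm Tm
  | Tru
  | Fls
  deriving Show

type Ctx = [(String, Ty)]

-- Infer the type of a term in a context, or fail with Nothing.
infer :: Ctx -> Tm -> Maybe Ty
infer ctx (Var x)     = lookup x ctx
infer _   Tru         = Just TBool
infer _   Fls         = Just TBool
infer ctx (Lam x t e) = TArr t <$> infer ((x, t) : ctx) e
infer ctx (App f a)   = do
  TArr t1 t2 <- infer ctx f          -- the function must have an arrow type
  t1'        <- infer ctx a
  if t1 == t1' then Just t2 else Nothing

The whole checker is a single recursive function over the syntax, which is why small implementations like the one announced above tend to fit in a few dozen lines.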
Hi there.
I've recently been learning about the "Propositions as Types" paradigm, which stems from the Curry-Howard correspondence, but I can't seem to find online material with beginner-friendly explanations of the "next steps" that follow from the correspondence between the simply typed lambda calculus and propositional calculus (a.k.a. zeroth-order logic). Therefore, I would appreciate help with some questions and/or pointers to resources which may answer them.
From what I understand so far, adding types to the lambda calculus gives up Turing-completeness, as the calculus becomes "strongly normalizing", meaning every well-typed program terminates through reduction to a normal form (question 1: where can I find a proof?). It is then pretty straightforward to see that its type system corresponds to propositional calculus with implication (->). So far so good.
At this point begins the logic system rabbit-hole.
Further reading on different logics made me wonder exactly which logic the correspondence is with. It is clearly not classical logic, as that seems to require call/cc or some other control mechanism, and the answer seems to be intuitionistic propositional logic (question 2: which variant?). However, I don't see where the other logical connectives are, or a clear listing of the intuitionistic axioms and their corresponding rules in the simply typed lambda calculus.
It seems natural to extend the simply typed lambda calculus to a more powerful logic. For instance, adding a product type (*) gets us the correspondent of logical conjunction (AND). Logical disjunction (OR), then, seems to find its type-system correspondent in sum types (a.k.a. disjoint unions). I then wonder (question 3) why functional languages such as OCaml and Haskell have chosen sum types (a.k.a. tagged unions) to represent these, instead of the "more usual" non-disjoint union (e.g. TypeScript union types). I guess this could be related to the constructive necessity of proofs in the view of intuitionistic logic, but I'm ...
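For what it's worth, the conjunction/disjunction correspondence described above can be written out directly in Haskell, treating types as propositions and programs as proofs (the names below are my own):

-- Conjunction A AND B is the pair type (a, b); disjunction A OR B is Either a b.

-- Commutativity of conjunction: A AND B implies B AND A.
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- Left introduction for disjunction: A implies A OR B.
orIntroL :: a -> Either a b
orIntroL = Left

-- Elimination for disjunction: (A -> C) -> (B -> C) -> (A OR B) -> C.
orElim :: (a -> c) -> (b -> c) -> Either a b -> c
orElim f _ (Left x)  = f x
orElim _ g (Right y) = g y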
Here's my idea for a simply typed lambda calculus with macros on terms:
There would be six kinds of terms: lambdas, function application, variables, macro lambdas, macro application, and let-in bindings. Each term would have a type and a classification. Types are as you expect, where lambdas and macro lambdas would each need a type annotation on their binding variable, as usual. There would be two kinds of classification: runtime and macro. Lambdas, function application, and macro application return a term of classification runtime; macro lambdas return a term of classification macro. Variables bound by either lambdas or macro lambdas will always have classification runtime, whereas variables bound by let bindings match the classification of their binding. Function application expects two runtime terms; macro application expects a macro term and a runtime term.
If !\x -> .. is the syntax for macro lambdas and x!(e) is the syntax for macro application, here are some example terms:
(!\x : int -> x + 1)!(2) -- reduces at compile time to 2 + 1, reduces to 3 at runtime
(\x : int -> x + 1)(2) -- does not reduce at compile time, reduces at runtime to 3
(!\x : int -> x)(2) -- invalid, function applications expect a runtime value
(\x : int -> x)!(2) -- invalid, macro application expects a macro as its first argument
(!\x : (int -> int) -> x(2))!(\y : int -> y) -- invalid, macro application expects a runtime value as its second argument
Here are some basic typing rules I made for this: https://i.imgur.com/BDjX7YP.png, where e : τ ▹ κ means that e has type τ and classification κ.
Now, with this blueprint, it's not too hard to extend this to add classification polymorphism, to make either or both of lambda and application use said polymorphism, or to add higher-order macros (macros on macros).
If any of this has been done before, please let me know.
Edit: I expanded on my typing rules to include higher order macros: https://i.imgur.com/4ssZDP8.png and I plan on looking into staging research now.
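To make the classification discipline above concrete, here is a hedged Haskell sketch of one possible reading of the rules (my own reading, not the linked rules themselves; let-in bindings, higher-order macros, and classification polymorphism are left out):

data Ty  = TInt | TFun Ty Ty deriving (Eq, Show)
data Cls = Runtime | Macro deriving (Eq, Show)

data Term
  = Var String
  | IntLit Int
  | Lam String Ty Term      -- \x : T -> e
  | App Term Term           -- e1(e2)
  | MacLam String Ty Term   -- !\x : T -> e
  | MacApp Term Term        -- e1!(e2)

type Env = [(String, (Ty, Cls))]

-- Check a term, returning its type and classification.
check :: Env -> Term -> Maybe (Ty, Cls)
check env (Var x)        = lookup x env
check _   (IntLit _)     = Just (TInt, Runtime)
check env (Lam x t e)    = do                    -- lambdas are runtime terms
  (tb, _) <- check ((x, (t, Runtime)) : env) e   -- bound variables are runtime
  Just (TFun t tb, Runtime)
check env (MacLam x t e) = do                    -- macro lambdas are macro terms
  (tb, _) <- check ((x, (t, Runtime)) : env) e
  Just (TFun t tb, Macro)
check env (App f a)      = do                    -- function application: two runtime terms
  (TFun t1 t2, Runtime) <- check env f
  (t1', Runtime)        <- check env a
  if t1 == t1' then Just (t2, Runtime) else Nothing
check env (MacApp f a)   = do                    -- macro application: a macro and a runtime term
  (TFun t1 t2, Macro) <- check env f
  (t1', Runtime)      <- check env a
  if t1 == t1' then Just (t2, Runtime) else Nothing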
Hi,
I made a type checker for the lambda calculus. I haven't added universal quantification yet though.
You provide a term along the lines of
λ(builtin/int, x, local/x)
And it type checks it. The / is used for namespacing; local is the "local" namespace that lambda binders introduce values into.
https://raw.githubusercontent.com/sstewartgallus/jsystemf/master/peacod-plato/resources/com/sstewartgallus/plato/frontend/typeinference.pl
Hi all,
I'm currently taking a Syntax course at my university and we have been working with typed lambda calculus. I have no prior background with math higher than pre-calc, and it took me weeks just to wrap my head around the idea that verbs are functions. I've looked up videos on YouTube and reviewed my lecture notes, and I'm simply not getting how to write lexical entries for different words, especially ones whose syntactic categories are ambiguous. Is there anyone who could provide me with an ELI5 explanation, or just link me to some resources that aren't totally over my head coming from no knowledge of calculus? I'm doing fine with things like transitive verbs, but when it comes to more complex phrases and sentences, I am totally at a loss for how to begin. The course is almost over and I'm sort of panicking about the final, so any help would be much appreciated. Edit: looking for actual answers if anyone has them. I've already tried talking to my instructor; the way the material has been presented was not helpful.
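Not a substitute for your lecture notes, but one way to see "verbs are functions" concretely is to write the usual Montague-style lexical entries as a Haskell sketch (the toy entities and verbs here are my own, not from any particular textbook):

-- Semantic type e (entities) and t (truth values).
data Entity = Alice | Bob deriving (Eq, Show)

-- An intransitive verb is a one-place predicate: type <e,t>.
sleeps :: Entity -> Bool
sleeps Alice = True
sleeps Bob   = False

-- A transitive verb takes its object first, then its subject: type <e,<e,t>>,
-- i.e. the lexical entry \y. \x. loves(x, y).
loves :: Entity -> Entity -> Bool
loves Bob Alice = True      -- loves applied to Bob, then to Alice: "Alice loves Bob"
loves _   _     = False

-- "Alice loves Bob" composes as (loves Bob) applied to Alice.
aliceLovesBob :: Bool
aliceLovesBob = loves Bob Alice

The point is just that a verb's lexical entry is a function that eats its arguments one at a time, and a sentence denotes the truth value you get once every argument slot is filled.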
I'm not really sure if this post fits here, but I suppose I rather ask here than in r/AskProgramming.
Anyway, here is the question:
Given the simply typed lambda calculus with call-by-value evaluation and the regular inference rules for abstraction, application, and variables: if I just add the Unit type with the constant * : Unit, is the following true?
If t -> t' and t':T, then t:T
I suspect the answer is no, but I can't come up with a counterexample.
Again, I'm sorry if this doesn't belong here, but I can't find any other proper community to ask in.
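For comparison with the question above, one candidate counterexample (my own, assuming as usual that the beta rule carries no typing side condition) is to apply a function to a value of the wrong type:

t  =  (\x:Unit -> *) (\y:Unit -> y)
t  ->  *        and   * : Unit

Yet t itself does not type check, since the application requires its argument to have type Unit while \y:Unit -> y has type Unit -> Unit. So under those assumptions, subject expansion (the property asked about) fails.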
I am writing interpreters for several typed variants of the lambda calculus. I'd like to generate some random terms to feed the interpreter. I know there's the Boltzmann sampler for generating random values of inductive datatypes; this may be appropriate for the untyped lambda calculus, but not for typed ones, since there's no guarantee the generated term is well typed. So is there any good way to generate well-typed terms?
A naive method: generate an untyped term, pass it to a type inference engine, and discard the ill-typed ones. This only works for Hindley-Milner systems, which have decidable type inference; when working with the simply typed lambda calculus (no polymorphic types) or System F (undecidable inference), I must generate terms which are already annotated with correct types.
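One alternative worth sketching is type-directed generation: build terms top-down from a target type, so every generated term is well typed and fully annotated by construction. The fuel-bounded generator below is my own naive sketch in Haskell (not the Boltzmann approach, and the base constant Con is an assumption of the sketch):

import System.Random (randomRIO)

data Ty = TBase | TArr Ty Ty deriving (Eq, Show)
data Tm = Var Int | Con | Lam Ty Tm | App Tm Tm deriving Show
-- de Bruijn indices; Con is an assumed constant of type TBase

-- Generate a term of type ty in context ctx; fuel bounds the size.
gen :: [Ty] -> Ty -> Int -> IO Tm
gen ctx ty fuel
  | fuel <= 0 = pure (fallback ctx ty)            -- out of fuel: smallest inhabitant
  | otherwise = do
      pick <- randomRIO (0 :: Int, 2)
      case (pick, ty) of
        (_, TArr a b) | pick /= 0 ->
          Lam a <$> gen (a : ctx) b (fuel - 1)    -- introduce a lambda
        (2, _) -> do                              -- build an application at an assumed argument type
          let argTy = TBase
          f <- gen ctx (TArr argTy ty) (fuel - 1)
          x <- gen ctx argTy (fuel - 1)
          pure (App f x)
        _ -> pure (fallback ctx ty)               -- fall back to a variable or constant

-- The smallest inhabitant of ty: a matching variable if the context has one,
-- otherwise built from Con and lambdas.
fallback :: [Ty] -> Ty -> Tm
fallback ctx ty = case [Var i | (i, t) <- zip [0 ..] ctx, t == ty] of
  v : _ -> v
  []    -> case ty of
    TBase    -> Con
    TArr a b -> Lam a (fallback (a : ctx) b)

Because the terms carry their annotations, the same idea applies to the simply typed case directly; for System F one would additionally have to generate type abstractions and instantiations.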
EDIT: Nevermind, I found it, see my comment.
I can't find a paper I remember I stumbled upon about a year ago. My Google- and google-scholar-fu is failing me.
It was about a set of typedness-preserving (but not necessarily type-preserving) edit operations for the simply typed lambda calculus. The edits were complete in the sense that you could get from any valid expression to any other with a sequence of these operations.
Maybe one of you knows about it?
In the lambda cube, the axis λ2 is for "polymorphism" or "second-order" typed lambda calculus.
Why isn't that name used for the axis λω, the typed lambda calculus with type operators? Don't you need type operators to get "higher-kinded" types?
Also, there seems to be a level of difference between the above wiki article, and this page http://www.rbjones.com/rbjpub/logic/cl/tlc001.htm
Where the wiki has 'terms', the second link has 'types', and where the wiki has 'types', the second link has 'kinds'.
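As a point of comparison (in Haskell rather than the pure lambda cube, and with names of my own): a type operator is just a function at the level of types, and higher-kinded polymorphism is quantification over such operators.

{-# LANGUAGE KindSignatures #-}

-- Pair is a type operator: it maps two types to a type, so it has kind * -> * -> *.
data Pair a b = MkPair a b

-- Quantifying over a variable of kind * -> *: this is where type operators and
-- "higher-kinded" types meet.
class Mappable (f :: * -> *) where
  mapIt :: (a -> b) -> f a -> f b

instance Mappable (Pair c) where
  mapIt g (MkPair x y) = MkPair x (g y)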
I don't know a lot about PLT, but I'm thinking about a language design anyway. Here's the broad sketch:
The language is the dependently typed lambda calculus plus a few conventions. We put lots of sophistication into the library and evaluator, but the core semantics stay fixed. The first line of a source file is the name of a function; the rest of it is a big string. The evaluator takes the string, Church encodes it, and passes it to the function. The result of this is whatever our target is: an I/O action, executable, audio, video, VHDL, etc. The evaluator figures out what to do with it based on the type - execute, write to the filesystem, etc.
If I understand the implications of dependent types correctly, the type of the function that goes at the top of our file can fully express the syntax of the sub-language used below it. So we should get syntax errors out of our typechecker.
The bare dependently typed lambda calculus is obviously very inefficient for most things. The evaluator, however, can do guaranteed-correct optimizations based on the types. Oh, that's a Church numeral? Use GMP instead! Oh, that's a (Church-encoded) character string? Use UTF-8 instead! Et cetera. Up to full supercompilation, hopefully. I don't know how much of this can be shoved into the library - hopefully almost all of it.
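To make the Church-numeral point concrete, here is a hedged Haskell sketch (my own illustration, not the proposed evaluator) of recognizing the Church encoding and swapping in ordinary integers:

{-# LANGUAGE RankNTypes #-}

-- A Church numeral applies a function n times.
type Church = forall a. (a -> a) -> a -> a

czero :: Church
czero _ z = z

csucc :: Church -> Church
csucc n s z = s (n s z)

-- Interpreting a Church numeral with machine arithmetic: the kind of
-- "use GMP/Integer instead" translation described above.
toInt :: Church -> Integer
toInt n = n (+ 1) 0

fromInt :: Integer -> Church
fromInt 0 = czero
fromInt k = csucc (fromInt (k - 1))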
Comments, suggestions, hysterical laughter?
First of all, I hope this is the right subreddit for this question!
I've recently had the chance to peruse the bachelor's dissertation of a fellow undergrad from Germany. His topic is confluence in the Calculus of Constructions - this is largely irrelevant to my question, but it might help motivate it.
He mentions that the simply typed lambda calculus only deals with one universe, and the meaning behind this is left implied. He stated that by adding more "universes" we get things like polymorphism, type operators, and dependent types. I'm curious as to what he means by "universes". I'm thinking in a purely physical sense - changing Euclidean space to R^3 instead of R^2 adds a whole host of operations and things we can operate on (squares to cubes, circles to spheres).
A lemma says that if x is a variable and e1, ..., ek are strongly normalizing, then so is xe1...ek. Note that this lemma is used to prove the Strong Normalization Theorem.
The proof goes like this:
Any reduction sequence from xe1...ek will have the form
xe1...ek →β ... →β xf1...fk →β xg1...gk →β ...
where at each stage, for exactly one index j, fj →β gj, and for the others, fi ≡ gi. This means that if there is an infinite reduction sequence from xe1...ek then there must be one from one of the ei's, a contradiction to their being strongly normalizing.
Question: Are we sure any reduction sequence has the form above? Can't some term be applied to the next one?
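One observation that may bear on this (my own wording, not part of the quoted proof): since x is a variable and not a lambda abstraction, xe1...ek is never itself a beta-redex, and no reduction inside an ei can turn the head x into a lambda. So every redex lies inside one of the arguments, and each step has the form

x e1 ... ei ... ek  →β  x e1 ... ei' ... ek      (with ei →β ei')

In particular, no argument is ever applied to the next one, because the spine headed by x persists through every reduction.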
Hey guys, so I'm fairly new to the whole Cardano developer ecosystem. I am currently learning Solidity and its ins and outs. I have recently begun reading about the workings of Cardano, its academic background and foundation, and believe it has the potential to be a great blockchain, if not the best, if more developers hop on board.
So as a result I'm interested in learning how to develop Cardano dapps. I know that Haskell is an unconditional prerequisite that one must learn to enter this space. What I would like to know is how much lambda calculus I truly need to know in order to create fully functioning, quality dapps with Haskell. I will admit I'm not the strongest in math, but I am capable of learning the lambda calculus if need be; I just need to know if it's something I MUST know in order to be a functional Cardano dev.
Thanks in advance for any responses.
As in the title, I would appreciate any recommendation.