Notes from KatsConf2
Anjana Sofia Vakil · 19 Feb 2017

Hello from Dublin! Yesterday I had the privilege of attending KatsConf2, a functional programming conference put on by the fun-loving, welcoming, and crazy-well-organized @FunctionalKats. It was a whirlwind of really great talks from some of the best speakers around. Here's a glimpse into what I learned:

There's no such thing as an objectively perfect programming language: all languages make tradeoffs. But it is possible to find/design a language that's more perfect for you and your project's needs.

Automation, automation, automation:
- Generative programming lets you write high-level code that generates low-level code
- Program derivation and synthesis let you write specifications/tests and leave it to the computer to figure out the code
- (Boring!) code rewriting tasks can be streamlined too

Relational programming, Total programming and Type-Driven Development are (cool/mindblowing) things.

You can do web programming with FP - and interestingly, even in a total language like Idris.

I took a bunch of notes during the talks, in case you're hungering for more details. But @jessitron took awesome graphical notes that I've linked to in the talks below, so just go read those! And for the complete experience, check out this storify of the #KatsConf2 tweets made by Vicky Twomey-Lee, who led a great ally skills workshop the evening before the conference: [View the story "Kats Conf 2" on Storify]

Hopefully this gives you an idea of what was said and which brain-exploding things you should go look up now! Personally it opened up a bunch of cans of worms for me - definitely a lot of the material went over my head, but I have a ton of stuff to go find out more (i.e. the first thing) about.

Disclaimer: The (unedited!!!) notes below represent my initial impressions of the content of these talks, jotted down as I listened. They may or may not be totally accurate, or precisely/adequately represent what the speakers said or think, and the code examples are almost certainly mistake-ridden. Read at your own risk!

The origin story of FunctionalKats
- FunctionalKatas => FunctionalKats => (as of today) FunctionalKubs
- Meetups in Dublin & other locations
- Katas for solving programming problems in different functional languages
- Talks about FP and related topics
- Welcome to all, including beginners

The Perfect Language
Bodil Stokke @bodil

"Bodil's opinions on the Perfect Language. #katsConf2 Rather uncontroversial, it must be early in the morning https://t.co/KsqGAKubpd" — Jessica Kerr (@jessitron) February 18, 2017

What would the perfect programming language look like? "MS Excel!" "Nobody wants to say 'JavaScript' as a joke?" "Lisp!" "I know there are Clojurians in the audience, they're suspiciously silent…"

There's no such thing as the perfect language; languages are about compromise. What the perfect language actually is is a personal thing. I get paid to make whatever products I feel like to make life better for programmers. So I thought: I should design the perfect language. What do I want in a language?

It should be hard to make mistakes

On that note, let's talk about JavaScript. It was designed to be easy to get into, and not to place too many restrictions on what you can do.
But this means it's easy to make mistakes & get unexpected results (cf. the crazy stuff that happens when you add different things in JS). By restricting the types of inputs/outputs (see TypeScript), we can throw errors for incorrect input types - error messages may look like the compiler yelling at you, but really they're saving you a bunch of work later on by telling you up front.

Let's look at PureScript. Category theory! Semiring: something like addition/multiplication that has commutativity (a+b == b+a). Semigroup: …?

There should be no ambiguity

1 + 2 * 3 vs. (+ 1 (* 2 3)). Pony: 1 + (2 * 3) – you have to use parentheses to make precedence explicit.

It shouldn't make you think

Joe made a language at Ericsson in the late 80s called "Erlang". This is a gif of Joe from the Erlang movie. He's my favorite movie star. Immutability: in Erlang, values and variable bindings never change. At all. This takes away some cognitive overhead (because we don't have to think about what value a variable has at the moment). Erlang tends to basically fold over state: the old state is an input to the function and the new state is an output.
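As an aside (my sketch, not from the talk), this "old state in, new state out" style really is just a fold in any functional language; in Haskell:

    -- Illustration only: modeling Erlang-style "old state in, new
    -- state out" as a plain fold over a list of events.
    data Account = Account { balance :: Int } deriving Show

    data Event = Deposit Int | Withdraw Int

    -- The old state is an input, the new state is the output.
    step :: Account -> Event -> Account
    step (Account b) (Deposit n)  = Account (b + n)
    step (Account b) (Withdraw n) = Account (b - n)

    main :: IO ()
    main = print (foldl step (Account 0) [Deposit 10, Withdraw 3])
    -- Account {balance = 7}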
The "abstraction ceiling"

This term has to do with being able to express abstractions in your language. Those of you who don't know C: you don't know what you're missing, and I urge you not to find out. If garbage collection is a thing you don't have to worry about in your language, that's fantastic.

Elm doesn't really let you abstract over the fact that e.g. map over array, list, set is somehow the same type of operation. So you have to provide 3 different variants of a function that can be mapped over any of the 3 types of collections. This is a bit awkward, but Elm programmers tend not to mind, because there's a tradeoff: the fact that you can't do this makes the type system simple, so that Elm programmers get succinct, helpful error messages from the compiler.

I was learning Rust recently and I wanted to be able to express this abstraction. If you have a Collection trait, you can express that you take in a Collection and return a Collection. But you can't specify that the output Collection has to be the same type as the incoming one. Rust doesn't have the ability to deal with this, but they're trying to add it. We can do this in Haskell, because we have functors. And that's the last time I'm going to use a term from category theory, I promise.
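As an aside, here's roughly what that looks like in Haskell (my sketch, not Bodil's): one function works for every mappable container, and the output container is guaranteed to be the same type constructor as the input one - exactly the abstraction Elm makes you spell out three times:

    -- Illustration only: abstracting over "mappable" containers.
    doubleAll :: (Functor f, Num a) => f a -> f a
    doubleAll = fmap (* 2)

    main :: IO ()
    main = do
      print (doubleAll [1, 2, 3])  -- [2,4,6]
      print (doubleAll (Just 21))  -- Just 42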
On the other hand, in a language like Lisp you can use its metaprogramming capabilities to raise the abstraction ceiling in other ways.

Efficiency

I have a colleague, and when I suggested using OCaml as an implementation language for our ideal language, she rejected it because it was 50% slower than C. In slower languages like Python or Ruby you tend to have performance-critical code written in the lower-level language of C. But my feeling is that in theory, we should be able to take a language like Haskell and build a smarter compiler that can be more efficient. The problem is that we're designing languages that are built on the lambda calculus and so on, but the machines they're implemented on are not built on that idea, but rather on the von Neumann architecture. The computer has to do a lot of contortions to take the beautiful lambda calculus idea and convert it into something that can run on an architecture designed from very different principles. This obviously complicates writing a performant, high-level language.

Rust wanted to provide a language as high-level as possible, but with zero-cost abstractions. So instead of garbage collection, Rust has a type-system-assisted kind of clean-up. This is easier to deal with than the C version. If you want persistent data structures a la Erlang or Clojure, they can be pretty efficient, but simple mutation is always going to be more efficient. We couldn't do PDSs natively.

Suppose you have a language that's low-level enough to have zero-cost abstractions, but you can plug in something like garbage collection, currying, perhaps extend the type system, so that you can write high-level programs using that functionality, but it's not actually part of the library. I have no idea how to do this but it would be really cool.

Summing up, you need to think about:
- Ergonomics
- Abstraction
- Efficiency
- Tooling (often forgotten at first, but very important!)
- Community (code sharing, documentation, education, marketing)

Your language has to be open source. You can make a proprietary language, and you can make it succeed if you throw enough money at it, but even the successful historical examples of that were eventually open-sourced, which enabled their continued use. I could give a whole other talk about open source.

Functional programming & static typing for server-side web
Oskar Wickström @owickstrom

FP has been influencing JavaScript a lot in the last few years: you have ES6 functional features, libraries like Underscore, Ramda, etc., products like React with FP/FRP at their core, and JS as a compile target for functional languages. But the focus is still client-side JS. Single page applications: using the browser to write apps more like you wrote desktop apps before. Not the same model as perhaps the web browser was intended for at the beginning. Lots of frameworks to choose from: Angular, Ember, Meteor, React et al. Without JS on the client, you get nothing. There's been talk recently of "isomorphic" applications: one framework which runs exactly the same way on the server and the client. The term is sort of stolen & not used in the same way as in category theory.

Static typing would be really useful for middleware, which is a common abstraction but very easy to mess up if dynamically typed. In Clojure if you mess up the middleware you get the Stack Trace of Doom. Let's use extensible records in PureScript - shout out to Edwin's talk related to this. That inspired me to implement this in PureScript, which started this project called Hyper, which is what I'm working on right now in my free time. Goals:
- Safe HTTP middleware architecture
- Make effects of middleware explicit
- No magic

How? Track middleware effects in the type system, leveraging extensible records in PureScript; provide a common API for middleware; write middleware that can work on multiple backends. Design: a Conn, sort of like in Elixir: instead of passing a request and returning a response, pass them all together as a single unit. Middleware: a function that takes a connection c and returns another connection type c' inside another type m. Indexed monads: similar to a state monad, but with two extra parameters: the type of the state before this action, and the type after. We can use this to prohibit effectful operations which aren't correct. Response state transitions: Hyper uses phantom types to track the state of the response, guaranteeing correctness in response side effects.
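As an aside, here's a rough Haskell sketch of that idea (mine; Hyper itself is PureScript and its real API differs - every name below is invented): a phantom "response state" index on the connection type makes out-of-order response writes into type errors:

    {-# LANGUAGE DataKinds, KindSignatures #-}

    -- Hypothetical sketch of indexed, phantom-typed middleware.
    data Status = StatusLineOpen | HeadersOpen | BodyOpen

    newtype Conn (i :: Status) (j :: Status) a = Conn { runConn :: IO a }

    -- Each step's output state must match the next step's input state.
    writeStatus :: Int -> Conn 'StatusLineOpen 'HeadersOpen ()
    writeStatus code = Conn (putStrLn ("HTTP/1.1 " ++ show code ++ " OK"))

    closeHeaders :: Conn 'HeadersOpen 'BodyOpen ()
    closeHeaders = Conn (putStrLn "")

    send :: String -> Conn 'BodyOpen 'BodyOpen ()
    send body = Conn (putStr body)

    -- Indexed bind, in the spirit of an indexed monad.
    (>>>=) :: Conn i j a -> (a -> Conn j k b) -> Conn i k b
    Conn m >>>= f = Conn (m >>= runConn . f)

    -- This compiles; swapping the order of the steps would not.
    respond :: Conn 'StatusLineOpen 'BodyOpen ()
    respond = writeStatus 200 >>>= \_ -> closeHeaders >>>= \_ -> send "Hello!"

    main :: IO ()
    main = runConn respond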
Functional Program Derivation
@Felienne

(Amazing, hand-drawn, animal-filled) Slides

"#KatsConf2 notes from @Felienne TL;DR Math is good practice at being precise. This helps with programming. https://t.co/4QB9xFcLay" — Jessica Kerr (@jessitron) February 18, 2017

Program derivation: Problem => Specification => Derivation => Program. Someone said this is like "refactoring in reverse".

Generalization: introduce parameters instead of constant values. Induction: prove something for a base case and a first step, and you've proven it for all numbers. Induction hypothesis: if you are at step n, you must have been at step n-1 before that. With these elements, we have a program! We just make an if/else: e.g. for sum(n), if n == 0: return 0; else return sum(n-1) + n.
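Translated into Haskell (my rendering, not Felienne's), the proof and the program are literally the same shape - the base case and the inductive step become the two equations of the recursive definition:

    -- Illustration only: induction structure as recursion.
    sumTo :: Int -> Int
    sumTo 0 = 0                    -- base case
    sumTo n = sumTo (n - 1) + n    -- inductive step: sum(n) = sum(n-1) + n

    -- e.g. sumTo 5 == 15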
It all comes down to writing the right specification, which is where we need to step away from the keyboard and think. Induction is the basis of recursion. We can use induction to create a specification for sorting lists from which we can derive the QuickSort algorithm. But we get 2 sorting algorithms for the price of 1: if we place a restriction that we can only do one recursive call, we can tweak the specification to derive InsertionSort, thus proving that Insertion Sort is a special case of Quick Sort. I stole this from a PhD dissertation ("Functional Program Derivation" by ). This is all based on program derivation work by Dijkstra.

Takeaways:
- Programming == Math. Practicing some basic math is going to help you write code, even if you won't be doing these kinds of exercises in your day-to-day.
- Calculations provide insight.
- Delay choices where possible. Say "let's assume a solution to this part of the problem" and then go back and solve it later.

I'm writing a whole book on this; if you're interested in giving feedback on chapter drafts let me know! mail at felienne dot com

Q&A: Is there a link between the specification and the complexity of the program? Yes, the specification has implications for the implementation. The choices you make within the specification (e.g. caching values, splitting computation) affect the efficiency of the program.

What about proof assistants? Those are nice if you're writing a dissertation or whatnot, but if you're at the stage where you're practicing this, the exercise is being precise, so I recommend doing this on paper. The second your fingers touch the keyboard, you can outsource your preciseness to the computer.

Once you've got your specification, how do you ensure that your program meets it? One of the things you could do is write the spec in something like FsCheck, or you could convert the specification into tests. Testing and specification really enrich each other. Writing tests as a way to test your specification is also a good way to go. You should also have some cases for which you know, or have an intuition of, the behavior. But none of this is supposed to go in a machine; it's supposed to be on paper.

The cake and eating it: or, writing expressive high-level programs that generate fast low-level code at runtime
Nada Amin @nadamin

Distinguish stages of computation. Program generator: basic types (Int, String, T) are executed at code generation time; Rep(Int), Rep(String), Rep(T) are left as variables in the generated code and executed at program run time(?).

Shonan Challenge for Generative Programming - part of the generative-programming-for-HPC literature: you want to generate code that is specialized to a particular matrix. Demo of generating code to solve this challenge.

Generative Programming Patterns: deep linguistic reuse; turning interpreters into compilers. You can think of the process of staging as something which generates code, and think of an interpreter as taking code plus extra input and creating a result. Putting them together, we get something that takes code and symbolic input and in the interpret stage generates code which takes actual input, which in the execution stage produces a result. This idea dates back to 1971: Futamura's Partial Evaluation.
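As a toy illustration of that staging picture (my own Haskell sketch, far simpler than the framework in the talk): partially applying an interpreter to the code behaves like generating a residual program that waits only for the actual input:

    -- Illustration only, in the spirit of Futamura's first projection.
    data Expr = Lit Int | Input | Add Expr Expr

    -- "Generation time": the Expr is traversed once...
    interp :: Expr -> (Int -> Int)
    interp (Lit n)   = const n
    interp Input     = id
    interp (Add a b) = let fa = interp a
                           fb = interp b
                       in \x -> fa x + fb x

    -- ...leaving a residual function of the run-time input alone.
    compiled :: Int -> Int
    compiled = interp (Add Input (Lit 1))

    main :: IO ()
    main = print (compiled 41)  -- 42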
Generating efficient low-level code, e.g. for specialized parsers: we can take an efficient HTTP parser from 2000+ lines to 200 with parser combinators. But while this is great for performance, it leaves big security holes, so we can use independent tools to verify the generated code after the fact. Sometimes generating code is not the right solution to your problem. More info on the particular framework I'm using: Generative Programming for 'Abstraction without Regret'.

Rug: an External DSL for Coding Code Transformations (with Scala Parser-Combinators)
Jessica Kerr @jessitron, Atomist

The last talk was about abstraction without (performance) regret. This talk is about abstraction without the regret of making your code harder to read. Elm is a particularly good language to modify automatically, because it's got some boilerplate - but I love that boilerplate! No polymorphism, no type classes: I know exactly what that code is going to do! Reading it is great, but writing it can be a bit of a headache. As a programmer I want to spend my time thinking about what the users need and what my program is supposed to do. I don't want to spend my time going "Oh no, I forgot to put that thing there".

Here's a simple Elm program that prints "Hello world". The goal is to write a program that modifies this existing Elm code and changes the greeting that we print. We're going to do this with Scala. The goal is to generate readable code that I can later go ahead and change. It's more like a templating engine, but instead of starting with a templating file it starts from a cromulent Scala program. Our goal is to parse an Elm file into a parse tree, which gives us the meaningful bits of that file. The "parser" in parser combinators is actually a combination of lexer and parser. Reuse is dangerous, dependencies are dangerous, because they create coupling. (Controlled, automated) cut & paste is a safer solution. (At which point @jessitron does some crazy-fast live coding to write an Elm parser in Scala.)

Rug is the super-cool open-source project I get to work on as my day job now! It's a framework for creating code rewriters. Repo for this talk. In conclusion: any time my job feels easy, I think "OMG I'm doing it wrong". But I don't want to introduce abstraction into my code, because someone else is going to have difficulty reading that. I want to be able to abstract without sacrificing code readability. I can make my job faster and harder by automating it.

Relational Programming
Will Byrd

"Your days are numbered, human programmers. #KatsConf2 @webyrd https://t.co/NK8tRg3sei" — Jessica Kerr (@jessitron) February 18, 2017

There are many programming paradigms that don't get enough attention. The one I want to talk about today is Relational Programming. It's somewhat representative of Logic Programming, like Prolog. I want to show you what can happen when you commit fully to the paradigm, and see where that leads us. Functional Programming is a special case of Relational Programming, as we're going to see in a minute.

What is functional programming about? There's a hint in the name. It's about functions: the idea that representing computation in the form of mathematical functions could be useful. Because you can compose functions, you don't have to reason about mutable state, etc. - there are advantages to modeling computation as mathematical functions. In relational programming, instead of representing computation as functions we represent it as relations. You can think of a relation in many ways: if you're familiar with relational databases, you can think in terms of tuples, where we want to reason over sets or collections of tuples; or we can think of it in terms of algebra - like high school algebra - where we have variables representing unknown quantities and we have to figure out their values. We'll see that we can get FP as a special case - there's a different set of tradeoffs - but when you commit fully to this paradigm you can get some very surprising behavior.

Let's start in our functional world. We're going to write a little program in Scheme or Racket to manipulate lists. We'll just do something simple like append or concatenate. Let's define append in Scheme:

    (define append
      (lambda (l s)
        (if (null? l)
            s
            (cons (car l) (append (cdr l) s)))))

We're going to use a relational programming language called miniKanren, which is basically an extension that has been applied to lots of languages, and which allows us to put in variables representing values and ask miniKanren to fill in those values. So I'm going to define appendo. (By convention we define our names ending in -o; it's kind of a long story, happy to explain offline.) (Writes a bunch of miniKanren that we don't really understand.) Now I can do:

    > (run 1 (q) (appendo '(a b c) '(d e) q))
    ((a b c d e))

So far, not very interesting; if this is all it does then it's no better than append. But where it gets interesting is that I can run it backwards to find an input:

    > (run 1 (X) (appendo '(a b c) X '(a b c d e)))
    ((d e))

Or I can ask it to find N possible inputs:

    > (run 2 (X Y) (appendo X Y '(a b c d e)))
    ((a b c d) (e))
    ((a b c d e) ())

Or all possible inputs:

    > (run* (X Y) (appendo X Y '(a b c d e)))
    ((a b c d) (e))
    ((a b c d e) ())
    ...

What happens if I do this?

    > (run* (X Y Z) (appendo X Y Z))

It will run forever. This is sort of like a database query, except where the tables are infinite.
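As an aside (my sketch, not from the talk), you can get a feel for this relational reading even in a plain functional language: given only append's output, enumerate every pair of inputs that could have produced it - like running appendo backwards. In Haskell:

    -- Illustration only: append, read as a relation on (input, input, output).
    splits :: [a] -> [([a], [a])]
    splits zs = [ (take n zs, drop n zs) | n <- [0 .. length zs] ]

    main :: IO ()
    main = mapM_ print (splits "abcde")
    -- ("","abcde"), ("a","bcde"), ... , ("abcde","")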
One program we could write is an interpreter, an evaluator. We're going to take an eval that's written in miniKanren, called evalo, which takes two arguments: the expression to be evaluated, and the value of that expression.

    > (run 1 (q) (evalo '(lambda (x) x) q))
    ((closure x x ()))
    > (run 1 (q) (evalo '(list 'a) q))
    ((a))

A professor wrote a Valentine's Day post, "99 ways to say 'I love you' in Racket", to teach people Racket by showing 99 different Racket expressions that evaluate to the list `(I love you)`:

    > (run 99 (q) (evalo q '(I love you)))
    ...99 ways...

What about quines? A quine is a program that evaluates to itself. How could we find or generate a quine?

    > (run 1 (q) (evalo q q))

And twines: two different programs p and q where p evaluates to q and q evaluates to p:

    > (run 1 (p q) (=/= p q) (evalo p q) (evalo q p))
    ...two expressions that basically quote/unquote themselves...

What would happen if we ran Scheme's append in our evaluator?

    > (run 1 (q)
        (evalo `(letrec ((append (lambda (l s)
                                   (if (null? l)
                                       s
                                       (cons (car l) (append (cdr l) s))))))
                  (append '(a b c) '(d e)))
               q))
    ((a b c d e))

But we can also put the variable inside the definition of append:

    > (run 1 (q)
        (evalo `(letrec ((append (lambda (l s)
                                   (if (null? l)
                                       ,q
                                       (cons (car l) (append (cdr l) s))))))
                  (append '(a b c) '(d e)))
               '(a b c d e)))
    (s)

Now we're starting to synthesize programs based on specifications. When I gave this talk at PolyConf a couple of years ago Jessitron trolled me about how long it took to run this; since then we've gotten quite a bit faster. This is a tool called Barliman that I (and Greg Rosenblatt) have been working on; it's basically a frontend, a dumb GUI to the interpreter we were just playing with. It's just a prototype. We can see a partially specified definition - a Scheme function that's partially defined, with metavariables that are fill-in-the-blanks for some Scheme expressions that we don't know what they are yet. Barliman's going to guess what the definition is going to be:

    (define ,A (lambda ,B ,C))

Now we give Barliman a bunch of examples, like (append '() '()) gives '(). It guesses what the missing expressions were based on those examples. The more test cases we give it, the better an approximation of the program it guesses. With 3 examples, we can get it to correctly guess the definition of append.

Yes, you are going to lose your jobs. Well, some people are going to lose their jobs. This is actually something that concerns me, because this tool is going to get a lot better. If you want to see the full dog & pony show, watch the ClojureConj talk I gave with Greg. Writing the tests is indeed the harder part. But if you're already doing TDD or property-based testing, you're already writing the tests - why don't you just let the computer figure out the code for you based on those tests? Some people say this is too hard, the search space is too big. But that's what they said about Go, and it turns out that if you use the right techniques plus a lot of computational power, Go isn't as hard as we thought. I think in about 10-15 years program synthesis won't be as hard as we think now. We'll have much more powerful IDEs, much more powerful synthesis tools. It could even tell you as you're writing your code whether it's inconsistent with your tests. What this will do for jobs, I don't know. Maybe it won't pan out, but I can no longer tell you that this definitely won't work. I think we're at the point now where a lot of the academic researchers are looking at a bunch of different parts of synthesis, and no one's really combining them, but when they do, there will be huge breakthroughs. I don't know what it's going to do, but it's going to do something.

Working hard to keep things lazy
Raichoo @raichoo

"The how, why, and trade offs of non-strictness in Haskell @raichoo #KatsConf2 https://t.co/TWkJjW3gL7" — Jessica Kerr (@jessitron) February 18, 2017

Without laziness, we waste a lot of space, because when we have recursion we have to keep allocating memory for each evaluated thing. Laziness allows us to get around that. What is laziness, from a theoretical standpoint? The first thing we want to talk about is different ways to evaluate expressions:

    f x y = x + y

    f (1 + 1) (2 + 2)
      => (1 + 1) + (2 + 2)
      => 2 + 4
      => 6

This evaluation was in normal order. Church-Rosser Theorem: the order of evaluation doesn't matter; ultimately a lambda expression will evaluate to the same thing. But! We have things like non-termination, and termination can only be determined after the fact.

Here's a way we can think of types: let's think of a Boolean as something which has three possible values: True, False, and "bottom" (_|_), which represents not-yet-determined, a computation that hasn't ended yet. True and False are more defined than bottom (e.g. _|_ <= True): a partial ordering. Monotone functions: if we have a function f that takes a Bool and returns a Bool, and x and y are Bools where x <= y, then f x <= f y. We can now show that f _|_ = True together with f x = False doesn't work out, because it would have the consequence that True <= False, which doesn't hold - and that's a good thing, because if it did we would have solved the halting problem. What's nice here is that if we write a function and evaluate it in normal order, in the lazy way, then this naturally works out.

Laziness is basically non-strictness (this normal order thing I've been talking about the whole time) plus sharing. Laziness lets us reuse code and use combinators. This is something I miss from Haskell when I use any other language.
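A classic little example of what non-strictness plus sharing buys (my aside, not from the talk):

    -- 'fibs' is an infinite list: only the demanded prefix is ever
    -- computed (non-strictness), and the list refers to itself, so
    -- each element is computed once rather than recomputed (sharing).
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    main :: IO ()
    main = print (take 10 fibs)  -- [0,1,1,2,3,5,8,13,21,34]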
Honorable mention: Purely Functional Data Structures by Chris Okasaki. When you have persistent data structures, you need laziness to make the whole amortization argument go through. The book introduces its own dialect of ML (lazy ML).

How do we do laziness in Haskell (in GHC)? At an intermediate stage of compilation called STG, Haskell takes unoptimized code and optimizes it to make it lazy. (???)

Total Functional Programming
Edwin Brady @edwinbrady

"Type driven development of interactive, total programs @edwinbrady #KatsConf2 https://t.co/KpzHhzXxMX" — Jessica Kerr (@jessitron) February 18, 2017

Idris is a pure functional language with dependent types. It's a "total" language, which means you have program totality: a program either terminates, or keeps giving you new results. The goals are to encourage type-driven development, and to reduce the cost of writing correct software - giving you more tools to know upfront that the program will do the correct thing. People on the internet say you can't do X, you can't do Y in a total language. I'm going to do X and Y in a total language.

Types become plans for a program. Define the type up front, and use it to guide writing the program. You define the program interactively. The compiler should be less like a teacher, and more like a lab assistant: you say "let's work on this" and it says "yes! let me help you". As you go, you refine the type and the program as necessary. Test-driven development has "red, green, refactor". We have "type, define, refine".

If you care about types, you should also care about totality: you don't have a type that completely describes your program unless your program is total. Given f : T, if the program f is total, we know that it will always give a result of type T. If it's partial, we only know that if it gives a result, the result will be of type T - but it might crash, run forever, etc., and not give a result. The difference between total and partial functions in this world: if f is total, we can think of it as a theorem.

Idris can tell us whether or not it thinks a program is total (though we can't be sure, because we haven't solved the halting problem "yet", as a student once wrote in an assignment). If I write a program that type checks but Idris thinks it's possibly not total, then I've probably done the wrong thing. So in my Idris code I can tell it that some function I'm defining should be total. I can also tell Idris that if I can prove something that's impossible, then I can basically deduce anything, e.g. an alt-fact about arithmetic. We have the absurd keyword.

We have Streams, where a Stream is sort of like a list without nil, so potentially infinite. As far as the runtime is concerned, this means it's lazy, even though we have strictness. Idris uses IO like Haskell to write interactive programs. IO is a description of actions that we expect the program to make(?). If you want to write interactive programs that loop, that stops them being total. But we can solve this by describing looping programs as a stream of IO actions. We know that the potentially-infinite loops are only going to get evaluated when we have a bit more information about what the program is going to do. It turns out you can use this to write servers - which run forever and accept requests - that are total. (So the people on the internet are wrong.)

Check out David Turner's paper "Elementary Strong Functional Programming", where he argues that totality is more important than Turing-completeness, so if you have to give up one you should give up the latter. Book coming out: Type-Driven Development with Idris.
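As a final aside, here's a rough Haskell analogue of that "loop as a stream of IO actions" idea (mine, not Edwin's Idris; GHC won't check totality, but laziness gives the same shape): the looping program is just a potentially infinite description, and a separate runner evaluates only as much of it as needed:

    -- Illustration only: an interactive program as a (possibly infinite)
    -- description of IO actions, separated from the thing that runs it.
    data InteractiveIO = Do (IO ()) InteractiveIO  -- one action, then continue

    -- A "server" described as an infinite stream of actions:
    echoForever :: InteractiveIO
    echoForever = Do (getLine >>= putStrLn) echoForever

    -- The runner forces only a finite prefix of the description:
    runFor :: Int -> InteractiveIO -> IO ()
    runFor 0 _          = pure ()
    runFor n (Do act k) = act >> runFor (n - 1) k

    main :: IO ()
    main = runFor 3 echoForever  -- echo three lines, then stop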