So there’s this strange compilation step in which you take a lambda term that you can kind of understand and turn it into a complete mess of S’s and K’s that you can’t understand at all. But when you apply it to an argument, miraculously, it computes the same answer as the original lambda stuff did. And that was another example of something that was very clever and, to me at the time, implausible. But nevertheless you could see that it would just always work.
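To make the translation concrete, here is a minimal textbook sketch in Haskell of the classical “bracket abstraction” that turns lambda terms into applications of S, K, and I. It illustrates the general technique only; it is not the particular compiler discussed here.

    -- Lambda terms and combinator terms.
    data Term = Var String | App Term Term | Lam String Term
    data Comb = S | K | I | CVar String | CApp Comb Comb deriving Show

    -- Compile a lambda term to combinators, innermost lambdas first.
    compile :: Term -> Comb
    compile (Var x)   = CVar x
    compile (App f a) = CApp (compile f) (compile a)
    compile (Lam x b) = abstract x (compile b)

    -- abstract x e builds a combinator term that, applied to x, behaves like e:
    --   [x] x     = I
    --   [x] (f a) = S ([x] f) ([x] a)
    --   [x] e     = K e    (a variable other than x, or a bare combinator)
    abstract :: String -> Comb -> Comb
    abstract x (CVar y) | x == y = I
    abstract x (CApp f a)        = CApp (CApp S (abstract x f)) (abstract x a)
    abstract _ e                 = CApp K e

    -- Running 'compile' on \x. \y. x yields S (K K) I, which, under the rules
    --   I a -> a,  K a b -> a,  S f g a -> f a (g a),
    -- applied to arguments a and b computes back to a, just as the lambda term does.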

I don’t know quite what it was that turned me on about this. I found it completely inspirational. It’s partly, I suppose, because, being interested in hardware, it felt like this is a way you could implement the lambda calculus. Because the lambda calculus doesn’t look like it’s an implementation mechanism at all. It’s a bit of a mathematical way of thinking, a bit remote from the machine. This S-K stuff looks as if you could just run it and indeed you can.

Seibel: So, you had a sense that, OK, I’ll just build a machine that has S and K hardwired and then all I’ve got to do is compile things to a series of S and K ops.

Peyton Jones: In fact that’s exactly what my friends did. William Stoye and Thomas Clarke and a couple of others built this machine, SKIM, the SKI Machine, which directly executed S and K. For some reason I wasn’t directly involved in that project. But at the time there was this feeling developing. John Backus’s paper, called “Can Programming Be Liberated from the von Neumann Style?,” was extremely influential at the time. It was his Turing Award lecture and he was this guy who had invented Fortran saying, in effect, “Functional programming is the way of the future.”

Furthermore, he said, “Maybe we should develop new computer architectures to execute this stuff on.” So this very high-level endorsement of this research area meant we cited that paper like crazy. And so SKIM was an example of such a thing. We thought maybe this unusual way of going about executing, or at least thinking about, programs turns into a completely different sort of computer architecture. That phase lasted from about 1980 to 1990: radical architectures for functional programming. I now regard it as slightly misdirected but nevertheless it was terribly exciting.

Lazy evaluation was another huge motivating factor. With the benefit of hindsight I now think lazy evaluation is just wonderful but at that time it was sort of pivotal. Lazy evaluation is this idea that functions don’t evaluate their arguments. Again the motivating factor was something to do with it being beautiful or elegant and unusual and radical.
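A two-line illustration, not from the interview: in Haskell the call below returns 42, because under call-by-need the erroring argument is never demanded.

    ignoreArg :: a -> Int
    ignoreArg _ = 42

    demo :: Int
    demo = ignoreArg (error "this argument is never evaluated")  -- evaluates to 42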

That’s kind of good for catching the imagination: it looks as if this might be a way of thinking about programming in a whole new way. Rather than just putting one more brick in the wall, we can build a whole new wall. That’s very exciting. I was strongly motivated by that. Was it just that it was a neat trick? In some ways I think neat tricks are very significant. Lazy evaluation was just so neat and you could do such remarkably different things that you wouldn’t think were possible.

Seibel: Like what?

Peyton Jones: I remember my friend John Hughes wrote a program for me. For a project I was doing two implementations of the lambda calculus and comparing their performance, so John gave me some test programs. One of them was a program that computed the decimal expansion of e to arbitrary precision. It was a lazy program; it was rather beautiful because it produced all the digits of e.

Seibel: Eventually.

Peyton Jones: Eventually, that’s right. But it was up to the consumer. You didn’t have to say how many digits to produce in advance. You just got given this list and you kept hauling on elements of the list and it wouldn’t give you another digit until it had spent enough cycles computing it. So that’s not something that’s very obvious to do if you’re writing a C program. Actually you can do it with enough cleverness. But it’s not a natural programming paradigm for C. You can almost only do it once you’ve seen the lazy functional program. Whereas John’s program was just about four or five lines. Amazing.
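This is not John Hughes’s original program, and it takes rather more than four or five lines, but a small sketch in the same lazy style looks like this: an infinite Haskell list of the decimal digits of e, computed from the series e = 1/0! + 1/1! + 1/2! + …, where each digit is produced only when the consumer pulls on the list.

    import Data.Ratio ((%))

    -- Decimal digits of e, produced lazily: 2, 7, 1, 8, 2, 8, ...
    -- Keep a rational interval known to contain e (a partial sum of 1/n! plus a
    -- safe bound on the tail), refine it with more terms, and emit the next digit
    -- as soon as both endpoints agree on it.
    eDigits :: [Integer]
    eDigits = go 1 0 intervals
      where
        -- After summing 1/0! .. 1/(k-1)!, the remaining tail is less than 2/k!.
        intervals :: [(Rational, Rational)]
        intervals =
          [ (s, s + 2 % fact k)
          | (k, s) <- zip [1 ..] (tail (scanl (+) 0 [1 % fact n | n <- [0 ..]])) ]
        fact n = product [1 .. n]

        -- 'scale' shifts the next undetermined digit into the units position;
        -- 'seen' is the value of the digits emitted so far at that scale.
        go _ _ [] = []   -- unreachable: 'intervals' is infinite
        go scale seen ivs@((lo, hi) : rest)
          | d lo == d hi = d lo : go (scale * 10) (10 * (seen + d lo)) ivs
          | otherwise    = go scale seen rest
          where
            d x = floor (x * fromInteger scale) - seen

    main :: IO ()
    main = print (take 20 eDigits)   -- [2,7,1,8,2,8,1,8,2,8,4,5,9,0,4,5,2,3,5,3]

The consumer decides how far to go: `take 20 eDigits` forces twenty digits, `take 1000` would force a thousand, and nothing is computed beyond what is demanded.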

Seibel: Other languages have since special-cased that kind of computation with, for example, generators in Python or something where you can yield values. Was there something that made you say, “Aha; there are lots of things that could be fruitfully looked at as an infinite series of computations from which we just want to draw answers until we’re tired of it?” As opposed to saying, “Oh, that’s an interesting technique for certain problems but not the basis for everything.”

Peyton Jones: I think at this stage I wasn’t as reflective as that. I just thought it was so cool. And fun. I think it’s important to do what you find motivating and interesting and follow it. I just found it very inspiring. I don’t think I really thought there are deep principled reasons why this is the way to do programming. I just thought it was a rather wonderful way to do programming. I like skiing. Well, why do I like skiing? Not because it’s going to change the world; just because it’s a lot of fun.

I now think the important thing about laziness is that it kept us pure. You’ll have seen this in several of my talks probably. But I actually really like laziness. Given a choice I’d choose a lazy language. I think it’s really helpful for all kinds of programming things. I’m sure you’ve read John Hughes’s paper, “Why Functional Programming Matters.” It’s probably the earliest articulate exposition of why laziness might be important in more than a cute way. And his main story is that it helps you write modular programs.

Lazy evaluation lets you write generators (his example is generating all the possible moves in your chess game) separately from your consumer, which walks over the tree and does alpha-beta minimaxing or something. Or if you’re generating the whole sequence of approximations of an answer, then you have a consumer who says when to stop. It turns out that by separating generators from consumers you can modularly decompose your program. Whereas, if you’re having to generate it along with a consumer that’s saying when to stop, that can make your program much less modular. Modular in the sense of separate thoughts in separate places that can be composed together. John’s paper gives some nice examples of ways in which you can change the consumer or change the generator, independently from each other, and that lets you plug together new programs that would have been more difficult to get by modifying one tightly interwoven one.
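A well-known example from that paper is numerical: a generator producing the infinite stream of Newton-Raphson approximations to a square root, and a separate consumer that decides when the stream has converged. The Haskell rendering below follows that idea but is not quoted from the paper.

    -- Generator: an infinite stream of Newton-Raphson approximations to sqrt n.
    approximations :: Double -> Double -> [Double]
    approximations n x0 = iterate (\x -> (x + n / x) / 2) x0

    -- Consumer: walk the stream and stop when two successive values are close enough.
    within :: Double -> [Double] -> Double
    within eps (a : b : rest)
      | abs (a - b) <= eps = b
      | otherwise          = within eps (b : rest)
    within _ _ = error "within: stream ended before converging"  -- unreachable on an infinite stream

    -- Plugging them together; either side can be swapped out independently.
    squareRoot :: Double -> Double
    squareRoot n = within 1e-9 (approximations n n)

Because the generator never decides how many approximations to produce, the same approximations function can be reused with a different stopping rule, and the same within consumer can be pointed at a different generator, without touching the other half.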

So that’s all about why laziness is a good thing. It’s also very helpful at a very local level in your program. You tend to find Haskell programmers will write down a function definition with some local definitions. So they’ll say “f of x equals blah, blah, blah where…” And in the where clause they write down a bunch of definitions and of these definitions, not all are needed in all cases. But you just write them down anyway. The ones that are needed will get evaluated; the ones that aren’t needed won’t. So you don’t have to think, “Oh, goodness, all of these subexpressions are going to be evaluated but I can’t evaluate that because that would crash because of a divide by zero so I have to move the definition into the right branch of the conditional.”

There’s none of that. You tend to just write down auxiliary definitions that might be needed and the ones that are needed will be evaluated. So that’s a kind of programming convenience thing. It’s a very, very convenient mechanism.
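A small example of the pattern being described, not taken from the interview: the auxiliary definition sits unconditionally in the where clause, but under lazy evaluation the division is only performed on the branch that actually needs it, so nothing goes wrong when n is zero.

    safeRatio :: Integer -> Integer -> Maybe Integer
    safeRatio m n
      | n == 0    = Nothing
      | otherwise = Just q
      where
        q = m `div` n   -- would raise a divide-by-zero error if it were forced with n == 0
        -- further auxiliary definitions could go here; only the ones demanded are evaluated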

But getting back to the big picture, if you have a lazy evaluator, it’s harder to predict exactly when an expression is going to be evaluated. So that means if you want to print something on the screen, every call-by-value language, where the order of evaluation is completely explicit, does that by having an impure “function” (I’m putting quotes around it because it now isn’t a function at all) with type something like string to unit. You call this function and as a side effect it puts something on the screen. That’s what happens in Lisp; it also happens in ML. It happens in essentially every call-by-value language.
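Haskell’s own Debug.Trace.trace is a standard example of such an impure “function”: its type, String -> a -> a, says nothing about printing, and in a lazy language you cannot easily predict when, or whether, its output appears, because that depends on when the surrounding expression happens to be evaluated. A small illustration:

    import Debug.Trace (trace)

    example :: Int
    example = fst (trace "first component demanded" 1, trace "never demanded" (2 :: Int))
    -- Evaluating 'example' (say, with 'print example') prints only the first message:
    -- the second component of the pair is never forced, so its trace call never runs.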

