Now in a pure language, if you have a function from string to unit you would never need to call it because you know that it just gives the answer unit. That's all a function can do, is give you the answer. And you know what the answer is. But of course if it has side effects, it's very important that you do call it. In a lazy language the trouble is if you say, "f applied to print "hello"," then whether f evaluates its first argument is not apparent to the caller of the function. It's something to do with the innards of the function. And if you pass it two arguments, f of print "hello" and print "goodbye", then you might print either or both in either order or neither. So somehow, with lazy evaluation, doing input/output by side effect just isn't feasible. You can't write sensible, reliable, predictable programs that way. So, we had to put up with that. It was a bit embarrassing really because you couldn't really do any input/output to speak of. So for a long time we essentially had programs which could just take a string to a string. That was what the whole program did. The input string was the input and the result string was the output and that's all the program could really ever do.
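A minimal sketch of the hazard he's describing, using unsafePerformIO to play the role of the hypothetical impure print (the definitions here are illustrative, not from the interview):

```haskell
import System.IO.Unsafe (unsafePerformIO)

-- An "impure print": a side effect hidden inside a pure-looking
-- value, which is exactly what lazy evaluation can't tolerate.
impurePrint :: String -> ()
impurePrint s = unsafePerformIO (putStrLn s)

-- Whether f forces its arguments is invisible at the call site.
f :: () -> () -> Int
f _ y = y `seq` 42   -- forces only its second argument

main :: IO ()
main = print (f (impurePrint "hello") (impurePrint "goodbye"))
-- Prints "goodbye" but never "hello": which effects happen, and
-- in what order, depends on the innards of f, not on the caller.
```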

You could get a bit clever by making the output string encode some output commands that were interpreted by some outer interpreter. So the output string might say, "Print this on the screen; put that on the disk." An interpreter could actually do that. So you imagine the functional program is all nice and pure and there's sort of this evil interpreter that interprets a string of commands. But then, of course, if you read a file, how do you get the input back into the program? Well, that's not a problem, because you can output a string of commands that are interpreted by the evil interpreter and, using lazy evaluation, it can dump the results back into the input of the program. So the program now takes a stream of responses to a stream of requests. The stream of requests goes to the evil interpreter that does the things to the world. Each request generates a response that's then fed back to the input. And because evaluation is lazy, the program has emitted a request just in time for the response to come round the loop and be consumed as an input. But it was a bit fragile because if you consumed your response a bit too eagerly, then you'd get some kind of deadlock, because you'd be asking for the answer to a question you hadn't yet spat out of your back end.
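This became the stream-based I/O of early Haskell, where a whole program was a function from responses to requests. A compressed sketch of the idea (the historical Request and Response types had many more constructors than these):

```haskell
-- A stripped-down version of early Haskell's stream-based I/O.
data Request  = ReadLine | PrintLine String
data Response = Line String | Done

-- The whole program: a stream of responses in, requests out.
type Dialogue = [Response] -> [Request]

-- Echo one line. Crucially, ReadLine is emitted *before* the
-- response list is inspected; laziness makes that possible.
echo :: Dialogue
echo resps = ReadLine : case resps of
  Line s : _ -> [PrintLine s]
  _          -> []

-- A pure stand-in for the "evil interpreter": it ties the lazy
-- knot, feeding each request's response back into the program.
run :: Dialogue -> [String] -> [String]
run prog inputs = [s | PrintLine s <- reqs]
  where
    reqs  = prog resps
    resps = answer reqs inputs
    answer (ReadLine    : rs) (l : ls) = Line l : answer rs ls
    answer (PrintLine _ : rs) ls       = Done   : answer rs ls
    answer _                  _        = []

-- The deadlock he mentions: this version inspects the response
-- before emitting the request it depends on, so run badEcho
-- loops forever, asking for the answer to an unasked question.
badEcho :: Dialogue
badEcho (Line s : _) = [ReadLine, PrintLine s]
badEcho _            = []
```

Here run echo ["hi"] yields ["hi"], while run badEcho ["hi"] diverges, which is exactly the fragility being described.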

The point of this is laziness drove us into a corner in which we had to think of ways around this I/O problem. I think that that was extremely important. The single most important thing about laziness was it drove us there. But that wasn't the way it started. Where it started was, laziness is cool; what a great programming idiom.

Seibel: Since you started programming, what's changed about how you think about programming?

Peyton Jones: I think probably the big changes in how I think about programming have been to do with monads and type systems. Compared to the early '80s, thinking about purely functional programming with relatively simple type systems, now I think about a mixture of purely functional, imperative, and concurrent programming mediated by monads. And the types have become a lot more sophisticated, allowing you to express a much wider range of programs than I think, at that stage, I'd envisaged. You can view both of those as somewhat evolutionary, I suppose.
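A small illustration of that mixture as it looks in today's Haskell, with the monad in each signature marking which world the code lives in (the example is mine, not his; it needs the stm library that ships with GHC):

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.STM

-- Purely functional: no monad, no effects, just a value.
square :: Int -> Int
square x = x * x

-- Imperative: the IO monad sequences real side effects.
greet :: String -> IO ()
greet name = putStrLn ("hello, " ++ name)

-- Concurrent: STM gives composable, transactional shared state.
deposit :: TVar Int -> Int -> STM ()
deposit account n = modifyTVar' account (+ n)

main :: IO ()
main = do
  account <- newTVarIO 0
  _ <- forkIO (atomically (deposit account (square 5)))
  greet "world"
  threadDelay 10000            -- let the forked thread finish
  atomically (readTVar account) >>= print
```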

Seibel: For instance, since your first abortive attempt at writing a compiler you've written lots of compilers. You must have learned some things about how to do that that enable you to do it successfully now.

Peyton Jones: Yes. Well, lots of things. Of course that was a compiler for an imperative language written in an imperative language. Now I'm writing a compiler for a functional language in a functional language. But a big feature of GHC, our compiler for Haskell, is that the intermediate language it uses is itself typed.

Seibel: And is the typing on the intermediate representation just carrying through the typing from the original source?

Peyton Jones: It is, but it's much more explicit. In the original source, lots of type inference is going on and the source language is carefully crafted so that type inference is possible. In the intermediate language, the type system is much more general, much more expressive because it's more explicit: every function argument is decorated with its type. There's no type inference, there's just type checking for the intermediate language. So it's an explicitly typed language whereas the source language is implicitly typed.
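A toy version of the distinction (a sketch of the idea, not GHC's actual Core): when every binder carries its type, the compiler only has to check, never infer:

```haskell
-- A toy explicitly typed intermediate language. Every lambda
-- binder is annotated, so no inference is needed, only checking.
data Ty = TInt | TArrow Ty Ty deriving (Eq, Show)

data Expr
  = Var String
  | Lit Int
  | Lam String Ty Expr      -- \(x :: t) -> e, binder annotated
  | App Expr Expr

type Env = [(String, Ty)]

-- Type *checking*: a single walk over the term comparing the
-- annotations; inference, by contrast, has to solve for them.
check :: Env -> Expr -> Maybe Ty
check env (Var x)     = lookup x env
check _   (Lit _)     = Just TInt
check env (Lam x t e) = TArrow t <$> check ((x, t) : env) e
check env (App f a)   = do
  TArrow arg res <- check env f
  t <- check env a
  if t == arg then Just res else Nothing
```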

Type inference is based on a carefully chosen set of rules that make sure that it just fits within what the type inference engine can figure out. If you transform the program by a source-to-source transformation, maybe you've now moved outside that boundary. Type inference can't reach it any more. So that's bad for an optimization. You don't want optimizations to have to worry about whether you might have just gone out of the boundaries of type inference.
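A classic concrete case (my example, not one from the interview): beta-expanding a let binding into a lambda application preserves the program's meaning but moves it outside what Hindley-Milner inference can type, because only let-bound variables get generalized:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Inferred fine: the let-bound f generalizes to forall a. a -> a,
-- so it can be used at both Int and Bool.
ok :: (Int, Bool)
ok = let f = id in (f 1, f True)

-- The "same" program after a meaning-preserving transformation.
-- Hindley-Milner inference rejects it: a lambda-bound f gets one
-- monomorphic type, and no single type fits both f 1 and f True.
--   bad = (\f -> (f 1, f True)) id

-- With an explicit rank-2 annotation it checks again; being more
-- explicit buys back expressiveness, just as in GHC's
-- intermediate language.
apply :: (forall a. a -> a) -> (Int, Bool)
apply f = (f 1, f True)

okAgain :: (Int, Bool)
okAgain = apply id
```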

Seibel: So that points out that there are programs that are correct, because you're assuming a legitimate source-to-source transformation, which, if you had written it by hand, the compiler would have said, "I'm sorry; I can't type this."

Peyton Jones: Right. That's the nature of static type systems—and why dynamic languages are still interesting and important. There are programs you can write which can't be typed by a particular type system but which nevertheless don't "go wrong" at runtime, which is the gold standard—don't segfault, don't add integers to characters. They're just fine.

Seibel: So when advocates of dynamic and static typing bicker, the dynamic folks say, "Well, there are lots of those programs—static typing gets in the way of writing the program I want to write." And then the fans of static typing say, "No, they exist but in reality it's not a problem." What's your take on that?

Peyton Jones: It's partly to do with simple familiarity. It's very like me saying I've not got a visceral feel for writing C++ programs. Or, you don't miss lazy evaluation because you've never had it, whereas I'd miss it because I'm going to use it a lot. Maybe dynamic typing is a bit like that. My feeling—for what it's worth, given that I'm biased culturally—is that large chunks of programs can be perfectly well statically typed, particularly in these very rich type systems. And where it's possible, it's very valuable for reasons that have been extensively rehearsed.

But one that is less often rehearsed is maintenance. When you have a blob of code that you wrote three years ago and you want to make a systemic change to it—not just a little tweak to one procedure, but something that is going to have pervasive effects—I find type systems are incredibly helpful.

This happens in our own compiler. I can make a change to GHC, to data representations that pervade the compiler, and can be confident that I've found all the places where they're used. And I'd be very anxious about that in a more dynamic language. I'd be anxious that I'd missed one and shipped a compiler where somebody feeds in some data that I never had and it just falls over something that I hadn't consistently changed.
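In Haskell terms the mechanism is mundane but powerful (a hypothetical example, not GHC's real types): change a pervasive data type and the compiler enumerates every site that must follow:

```haskell
-- Suppose a representation that pervades the compiler grows a
-- constructor during a systemic change (hypothetical types):
data Expr
  = Var String
  | App Expr Expr
  | Lam String Expr    -- newly added case

-- With -Wincomplete-patterns (part of -Wall), every function
-- that matches on Expr but misses the new case is reported, so
-- the type checker finds "all the places where it's used".
pretty :: Expr -> String
pretty (Var x)   = x
pretty (App f a) = "(" ++ pretty f ++ " " ++ pretty a ++ ")"
-- warning: Pattern match(es) are non-exhaustive: Lam _ _
```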

I suppose static types, for me, also perform part of my explanation of what the program does. It's a little language in which I can say something, but not too much, about what this program does. People often ask, "What's the equivalent of UML diagrams for a functional language?" And I think the best answer I've ever been able to come up with is, it's the type system. When an object-oriented programmer might draw some pictures, I'm sitting there writing type signatures. They're not diagrammatic, to be sure, but because they are a formal language, they form a permanent part of the program text and are statically checked against the code that I write. So they have all sorts of good properties, too. It's almost an architectural description of part of what your program does.
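For instance, the signatures of a compiler pipeline read like a block diagram even with every body elided (all names here are hypothetical, not GHC's):

```haskell
-- Empty placeholder types; in a real compiler each is a rich
-- structure. The signatures alone sketch the architecture.
data SrcModule
data Renamed
data Typed
data Core
data ParseError
data TypeError

parse     :: String -> Either ParseError SrcModule
rename    :: SrcModule -> Renamed
typecheck :: Renamed -> Either TypeError Typed
desugar   :: Typed -> Core
optimize  :: Core -> Core

-- Bodies elided: the point is what the types say, not the code.
parse     = undefined
rename    = undefined
typecheck = undefined
desugar   = undefined
optimize  = undefined
```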

