Seibel: Because you have transaction isolation.

Peyton Jones: Because they are put in isolation. So that’s really rather a powerful reasoning principle. Because it says you can use your sequential reasoning about imperative code despite the fact that the program’s concurrent. You’ve got to establish what those top-level invariants are, but that’s good for your soul, too. Because then you know what things you are trying to maintain. If you get an exception thrown in the middle of a transaction, that’s cool, too—that can’t destroy the invariants because the transaction is abandoned without effect. I think this is fabulous. Then reasoning about performance issues is a different level—you’ve guaranteed a certain sort of correctness; now you want to make sure you haven’t got any performance holes. Those are harder to get at—at the moment I don’t know anything better than profiling and directed feedback tools for doing that.
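That property about exceptions is easy to see in GHC’s STM. Here is a small illustrative sketch (the scenario and names are invented for this example, not taken from the interview): an exception raised inside atomically abandons the transaction, so a half-applied update can never leak out and the invariant that the two balances sum to 100 survives.

```haskell
import Control.Concurrent.STM
import Control.Exception (SomeException, try)

-- Invariant to maintain: the two balances always sum to 100.
main :: IO ()
main = do
  a <- newTVarIO (60 :: Int)
  b <- newTVarIO (40 :: Int)
  r <- try $ atomically $ do
         modifyTVar a (subtract 70)      -- temporarily breaks the invariant...
         x <- readTVar a
         if x < 0
           then error "overdrawn"        -- ...but we throw before committing,
           else modifyTVar b (+ 70)      -- so nothing is ever visible outside
  print (r :: Either SomeException ())  -- Left <exception>: it propagates
  balances <- (,) <$> readTVarIO a <*> readTVarIO b
  print balances                         -- (60,40): the transaction left no trace
```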

Seibel: It strikes me that while optimistic concurrency has been used from time to time in persistent databases, it’s never really gotten a foothold there compared to lock-based concurrency.

Peyton Jones: Of course STM can be implemented in all sorts of ways—optimistic concurrency is only one of them. You can lock as you go, which is more like a pessimistic concurrency model.

Seibel: But there’s also a reason that lock managers are the hairiest part of databases.

Peyton Jones: Right. So the thing about STM is you want to make sure that one person, or one team, gets to implement STM and everybody else gets to use it. You can pay them a lot of money and shut them in a small dark room for a year to try to make sure they do a really good job.

But then that work is usable by everybody through a very simple interface. That’s what I think is nice about it. What I want to avoid is having that level of expertise in everybody’s head. The example that I used at a talk I gave yesterday—this comes from Maurice Herlihy—is a double-ended queue: insert and delete elements.

A sequential implementation of a double-ended queue is a first-year undergraduate programming problem. For a concurrent implementation with a lock per node, it’s a research paper problem. That is too big a step. It’s absurd for something to be so hard. With transactional memory it’s an undergraduate problem again. You simply wrap “atomic” around the insert and delete operations—job done. That’s amazing, I think. It’s a qualitative difference. Now the people who implement the STM, they’d have to make sure they atomically commit a bunch of changes to memory as one. That’s not easy to do, just with compare and swaps. It can be done but you have to be careful about it.
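By way of illustration, here is a minimal sketch in GHC Haskell of what “wrap atomic around the operations” amounts to. The Deque type and the operation names are invented for this example, and the underlying sequential structure is just a Data.Sequence rather than Herlihy’s linked nodes; the point is only that the ordinary sequential code becomes thread-safe by running inside atomically.

```haskell
import Control.Concurrent.STM
import Data.Sequence (Seq, ViewL (..), ViewR (..), viewl, viewr, (<|), (|>))
import qualified Data.Sequence as Seq

-- The deque is an ordinary sequential structure held in a transactional variable.
newtype Deque a = Deque (TVar (Seq a))

newDeque :: IO (Deque a)
newDeque = Deque <$> newTVarIO Seq.empty

-- The "undergraduate" sequential code, made safe for concurrent callers simply
-- by wrapping it in 'atomically': the whole read-modify-write commits as one step.
pushFront :: Deque a -> a -> IO ()
pushFront (Deque tv) x = atomically (modifyTVar tv (x <|))

pushBack :: Deque a -> a -> IO ()
pushBack (Deque tv) x = atomically (modifyTVar tv (|> x))

popFront :: Deque a -> IO (Maybe a)
popFront (Deque tv) = atomically $ do
  s <- readTVar tv
  case viewl s of
    EmptyL  -> return Nothing
    x :< s' -> writeTVar tv s' >> return (Just x)

popBack :: Deque a -> IO (Maybe a)
popBack (Deque tv) = atomically $ do
  s <- readTVar tv
  case viewr s of
    EmptyR  -> return Nothing
    s' :> x -> writeTVar tv s' >> return (Just x)
```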

And then if there are performance problems to do with starvation then you may need to do some application-level thinking about how to avoid that. But then you express the results of your application-level thinking, again using STM. I do think for that kind of program it’s a leap forward.

There was one other thing I wanted to mention—this goes back to functional programming. STM, of course, has nothing directly to do with functional programming at all. It’s really about mutating shared state—that doesn’t sound very functional.

But what happened is this: I went to a talk given by Tim Harris about STM in Java. I’d never heard about STM before; I just happened to go to his talk. He was describing an STM in which he had “atomic” but really not much else. You could implement these atomic transactions.

I said, “Wow, that seems really neat. Ah, so you need to log every side effect on memory. Every load and store instruction. Gosh, well there are a lot of those in Java.” But in Haskell there are practically none because they occur in this monadic setting. So loads and stores in Haskell are extremely explicit—programmers think of them as a big deal.
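For example (an invented fragment, not from the talk): where a Java method updating a field performs an implicit load and an implicit store, the Haskell version spells both out as monadic actions, so the effects an STM would have to log are already explicit in the program text.

```haskell
import Data.IORef

-- In Java, "balance += amount" hides a load and a store.
-- In Haskell every mutation is an explicit action in a monad:
deposit :: IORef Int -> Int -> IO ()
deposit ref amount = do
  balance <- readIORef ref           -- explicit load
  writeIORef ref (balance + amount)  -- explicit store
```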

So I thought, “Oh, we should just try replicating this atomic memory stuff in Haskell because it would be a very cool feature to have.” And off we went—I talked to Tim about how to do this. Before long, because of the kind of framework that we had—this kind of pure, rather sparer framework that we had—we’d invented retry and orElse. Retry is this mechanism that allows you to do blocking within a transaction and orElse is the bit that allows you to do choice within a transaction. Neither of these are things that had occurred to him or his colleagues in developing transactional memory for Java because the rest of their context was rather more complicated.

So they hadn’t really thought much about blocking. Or maybe they just assumed that the way you do blocking was you say, atomically, “Only run this transaction when this predicate holds.” But that’s very noncompositional—supposing you wanted to get something out of one bank account and put it in another, well, what’s the condition that the transaction can run under? Answer: well, if there’s enough money in the first bank account and there’s enough space in the second—let’s say that they’re limited at both ends. So that’s a rather complicated condition to figure out. And it gets even more complicated if there’s a third bank account involved. It’s very noncompositional—you have to look inside the methods and pull all their preconditions out to the front.

That’s what he had and it kind of worked fine for small programs but clearly wasn’t very satisfactory. So in a Haskell context we came up with this retry, orElse thing which we’ve since transplanted back into the mainstream imperative context and they’re busy doing retry and orElse as well. That’s great.
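Roughly (with an invented Account type and an invented capacity limit), the bank-account example looks like this with retry and orElse in GHC’s STM: each operation keeps its own blocking condition, and the composite transfer just sequences them, with no precondition hoisted to the front.

```haskell
import Control.Concurrent.STM

type Account = TVar Int

-- Blocks (via retry) until the balance is large enough; callers never see
-- the precondition.
withdraw :: Account -> Int -> STM ()
withdraw acc n = do
  bal <- readTVar acc
  if bal < n then retry else writeTVar acc (bal - n)

-- Blocks until a capacity-limited account has room for the deposit.
depositLimited :: Int -> Account -> Int -> STM ()
depositLimited limit acc n = do
  bal <- readTVar acc
  if bal + n > limit then retry else writeTVar acc (bal + n)

-- The combined blocking condition ("enough money here and enough space
-- there") falls out of the composition for free.
transfer :: Int -> Account -> Account -> Int -> IO ()
transfer limit from to n =
  atomically (withdraw from n >> depositLimited limit to n)

-- orElse adds choice: take the money from the first account that can pay.
transferFromEither :: Int -> Account -> Account -> Account -> Int -> IO ()
transferFromEither limit from1 from2 to n = atomically $
  (withdraw from1 n `orElse` withdraw from2 n) >> depositLimited limit to n
```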

Seibel: So there’s nothing actually inherent about Haskell that enabled that concept? It was just that you were able to think of it?

Peyton Jones: That’s right. There was less crap, basically, so the cool idea stood out in higher relief. It became more disgusting that there was no way to do blocking without losing the abstraction. That’s what led us to retry and orElse. I think a really good place for functional programming to be, or a role for it to play, is as a kind of laboratory in which to examine the beast. And then ideas can feed back. And this STM was a particularly clear example because there was a transition in both directions. Here there was a loop that actually got closed, which I thought was lovely.

Seibel: What’s your desert-island list of books for programmers?

Peyton Jones: Well, you should definitely read Jon Bentley’s Programming Pearls. Speaking of pearls, Brian Hayes has a lovely chapter in this book Beautiful Code entitled “Writing Programs for ‘The Book’” where I think by “The Book” he means a program that will have eternal beauty. You’ve got two points and a third point and you have to find which side of the line between the two points this third point is on. And several solutions don’t work very well. But then there’s a very simple solution that just does it right.
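The simple solution he seems to be alluding to is the classic orientation test: the sign of a two-by-two determinant, equivalently the cross product of the two edge vectors. A small sketch in my own formulation, not code from Hayes’s chapter:

```haskell
data Side = LeftOf | RightOf | OnLine
  deriving (Show, Eq)

-- Which side of the directed line a->b does point c lie on?
-- The answer is the sign of the cross product (b - a) x (c - a).
-- (With floating-point coordinates the sign can still come out wrong for
-- nearly collinear points, which is part of what makes the problem subtle.)
sideOfLine :: (Num a, Ord a) => (a, a) -> (a, a) -> (a, a) -> Side
sideOfLine (ax, ay) (bx, by) (cx, cy) =
  case compare ((bx - ax) * (cy - ay) - (by - ay) * (cx - ax)) 0 of
    GT -> LeftOf
    LT -> RightOf
    EQ -> OnLine
```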

Of course, Don Knuth’s series, The Art of Computer Programming. I don’t think it was ever anything I read straight through; it’s not that kind of book. I certainly referred to it a lot at one stage. Chris Okasaki’s book Purely Functional Data Structures. Fantastic. It’s like Arthur Norman’s course only spread out to a whole book. It’s about how you can do queues and lookup tables and heaps without any side effects but with good complexity bounds. Really, really nice book. Everyone should read this. It’s also quite short and accessible as well. Structure and Interpretation of Computer Programs. Abelson and Sussman. I loved that. And Compiling with Continuations, Andrew Appel’s book about how to compile a functional program using continuation passing style. Also wonderful.

Books that were important to me but I haven’t read for a long time: A Discipline of Programming by Dijkstra. Dijkstra is very careful about writing beautiful programs. These ones are completely imperative but they have the “Hoare property”: rather than having no obvious bugs, they obviously have no bugs. And it gives very nice, elegant reasoning to reason about them. That’s a book that introduced me for the first time to reasoning about programs in a pretty watertight way. Another book that at the time made a huge impression on me was Per Brinch Hansen’s book about writing concurrent operating systems. I read it lots of times.

