So I worked seriously on the implementation of Emacs probably for only about four or six weeks. At which point it became clear that Stallman understood what the program was. I wanted to get back to doing graduate-student things. So Stallman did the other 99.999 percent of the work. But I played a role in catalyzing it and beginning the implementation.

Seibel: On a different subject, academic computer science is very mathematical these days. How important is it for working programmers to be able to understand, say, the math in Knuth? Versus saying, “I need to sort things; I’m going to flip through Knuth and skip to where he says, ‘This is the best algorithm’ and implement that”?

Steele: I don’t know. There are parts of Knuth that I don’t understand because I don’t have the mathematical training. Particularly if it involves higher or continuous math. I’m a little weaker on that. I think my strengths are in things like combinatorics and permutations, group theory, things like that. And I find myself using that over and over and over again. Maybe that’s because that’s the particular hammer I have at hand. I don’t think that every programmer needs that. But I think that mathematics formalizes concepts that programmers do need to work with every day.

I’ll give you an example from my recent programming-language work on parallel languages, this Fortress project that I’ve got going. Suppose you want to add up a bunch of numbers. Well, one thing you can do is you can initialize a register to zero and add in the numbers one at a time—a classic sequential technique.

Notice that this depends on the addition operation having an identity. You needed to start with zero. Trivial little observation, but it’s there.
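A minimal sketch of that sequential strategy in Java (the method name and the array argument are illustrative, not from the interview):

```java
// Sequential sum: start from the identity element (zero) and
// fold the numbers in one at a time, in order.
static int sumSequential(int[] numbers) {
    int acc = 0;          // zero is the identity for addition
    for (int n : numbers) {
        acc += n;         // one addition per element
    }
    return acc;
}
```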

Now here’s another strategy—you can lay out all the numbers in a row and then add them up pairwise, giving you a bunch of sums, and keep adding pairwise until you’ve only got one number left. And of course if at any point you’ve got an odd number of them you just leave that extra one over and let it go to the next stage and so forth. Well, that works fine, too. And if you’re using floating-point numbers it may actually give you a little bit better accuracy, although with the bookkeeping overhead it’s sometimes not worth it.

Notice that this will give you the same answer, at least if you’re working with integers, as starting with zero and adding them one at a time. This depends on the fact that addition is associative. In other words, it doesn’t matter in what way you group the numbers.
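Here is one way the pairwise scheme might look in Java; a sketch under the same assumptions as before, with the odd element carried forward to the next round as described:

```java
// Pairwise sum: repeatedly combine adjacent pairs until one value
// remains. Correctness relies only on associativity, so the
// grouping can differ from the sequential fold's.
static int sumPairwise(int[] numbers) {
    if (numbers.length == 0) return 0;   // identity for the empty case
    int[] level = numbers.clone();
    int count = level.length;
    while (count > 1) {
        int next = 0;
        for (int i = 0; i + 1 < count; i += 2) {
            level[next++] = level[i] + level[i + 1];  // combine a pair
        }
        if (count % 2 == 1) {
            level[next++] = level[count - 1];  // odd one carries over
        }
        count = next;
    }
    return level[0];
}
```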

And then there’s a third strategy. Suppose you’ve got a bunch of processors. You could parcel out the pairs to the processors and distribute that work. The “start with zero and add them one at a time” algorithm is hard to parallelize, but with the bunching in pairs, you’ve in effect made a tree and you can assign processors different parts of the tree, and they can do their parts independently, and only at the end do they need to interact and you get the sum.
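In modern Java that tree-shaped division of labor is roughly what a parallel stream does under the hood via fork/join; this sketch leans on the library rather than spelling out the tree by hand:

```java
import java.util.Arrays;

// Tree-shaped parallel sum: the runtime splits the array into
// subranges, sums each independently on different threads, and
// combines the partial sums at the end.
static int sumParallelTree(int[] numbers) {
    return Arrays.stream(numbers).parallel().sum();
}
```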

OK, so that’s cool. Here’s another parallel strategy that’s more like the first one: pick some register and initialize it to zero, and then let processors compete for grabbing numbers and adding them into that common place. This involves questions of synchronization, but you will still get the same answer. But this depends on addition being both associative and commutative. That is, not only does it not matter how you group the numbers, it also doesn’t matter in what order you process the numbers.
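A sketch of that shared-register strategy using an atomic accumulator (the thread count, method name, and work-splitting scheme are illustrative); the interleaving of updates is nondeterministic, which is exactly why commutativity is needed:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Shared-accumulator sum: each thread atomically adds its elements
// into one common "register". The answer is deterministic only
// because integer addition is associative and commutative; the
// order in which updates land is not.
static int sumShared(int[] numbers, int nThreads) throws InterruptedException {
    AtomicInteger acc = new AtomicInteger(0);   // the shared register
    Thread[] workers = new Thread[nThreads];
    for (int t = 0; t < nThreads; t++) {
        final int start = t;
        workers[t] = new Thread(() -> {
            // each worker grabs every nThreads-th element
            for (int i = start; i < numbers.length; i += nThreads) {
                acc.addAndGet(numbers[i]);      // synchronized add
            }
        });
        workers[t].start();
    }
    for (Thread w : workers) w.join();
    return acc.get();
}
```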

Mathematicians have big, scary words like “identity” and “associativity” and “commutativity” to talk about this stuff—it’s their shorthand. But programmers need to know about ideas like, it doesn’t matter in what order you do it. And they need to know about ideas like, it’s OK if you regroup things. So to that extent I claim that mathematical ideas are important at some level to programmers.

Seibel: Obviously that’s a good example to give because anyone who understands arithmetic can understand it. But do you find that there are kind of higher-level concepts that come back into programming in the same way?

Steele: Now suppose I’m generating a report. The typical thing is you make a bunch of print statements and you depend on doing them in order and things get printed out in the order you said. Well, in this multicore world maybe I would like to break up the generation of the report and parcel it out to processors. Well, how about concatenating strings? Can I use the same techniques I used for adding up numbers? Turns out it is associative but not commutative—that tells me exactly which tricks will work for the strings and which ones won’t. As a language designer worrying about designing parallel programming languages, I find these concepts and a vocabulary for them very useful.
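Because concatenation is associative, the report pieces can be combined in a tree; because it is not commutative, the combining must respect the order of the pieces. A Java sketch (the names are made up for illustration):

```java
import java.util.List;

// Parallel report assembly: reduce combines chunks in a tree
// (allowed by associativity), while the ordered stream guarantees
// chunks are never reordered (required because concatenation is
// not commutative).
static String assembleReport(List<String> chunks) {
    return chunks.parallelStream()
                 .reduce("", String::concat);  // "" is the identity
}
```

In production code one would collect into a StringBuilder for efficiency; reduce is used here because it makes the algebraic requirements visible.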

Seibel: Speaking of being a language designer, how have your ideas about language design changed over time?

Steele: I think that the biggest change in my thinking is what I set down in that talk “Growing a Language” at OOPSLA almost ten years ago, in 1998. Back in the ’70s people would invent languages and make a complete design and implement it and you’d be done. Or you wouldn’t be done, but you had this idea that there is this complete thing.

So Pascal was an invention. Things were in it for a reason and things were left out for a reason but it was a complete design. And if it turned out that it wasn’t complete—if it turned out that string processing wasn’t that great, well, too bad: Wirth had designed the language. And PL/I got designed and Ada got designed. And maybe Ada and C++ were close to the last of that generation. And C++ not so much, because it did sort of evolve over time.

I realized as languages got more complicated they were really too big to design all at once and that languages would necessarily from now on undergo evolution because they were too big to design all at once or to implement all at once. And that caused a change in how I approached programming-language design and thinking about it.

Seibel: So you think Java was not designed that way?

Steele: I think maybe Java was not and should have been. Java has evolved through the Java Community Process. That has addressed more API than core language issues. And while features have been added to the language over the last 12 or 13 years, I think the team that designed Java in the early ’90s thought they were designing a complete language for a specific self-contained purpose. You know, they were aiming at set-top boxes.

Seibel: Right.

Steele: They hadn’t even envisioned at that point programming the World Wide Web or having as huge a user base as it has turned out to have. And so I thought they were designing a fairly small, self-contained kernel language on top of which you would then build a bunch of APIs and use it for various things. And that was the model and that’s the way it’s worked out. And part of my thoughts about “Growing a Language” came out of observing that process, that Java turned out to be a little bit too small and people wanted more features to get to other things.

In particular there was pressure for the kind of for loop that could iterate through an enumeration. That is a feature that got added to the language. There was pressure from the high-performance scientific computing community to put in more features to support floating point and stuff like that. That pretty much got rejected by the Java Community Process, and I think that was more for social reasons than technical reasons.
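That loop is the enhanced for statement that shipped in Java 5; a small reminder of its shape (the method and parameter names are made up for illustration):

```java
import java.util.List;

// The enhanced "for-each" loop: iterate through any Iterable
// without managing an index or an explicit Iterator.
static void printAll(List<String> items) {
    for (String item : items) {
        System.out.println(item);
    }
}
```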

But there is this demand for adding to the language and then there was a social process that gated that in various ways. And so I got to thinking that maybe for a really successful programming language, you need to design and plan for the social process as much as you design the technical features of the language and think about how those two things are going to interact. And Fortress is our first experiment with that, or at least my first experiment with that. And it’s still early in the game—it’s only a half-done experiment.

