You see stuff like Ruby, which took influences from Ada and Smalltalk. That’s great. I don’t mind eclecticism. Though Ruby does seem kind of overhyped. Nothing bad about it, just sometimes the fan boys make it sound like the second coming and it’s going to solve all your problems, and that’s not the case. We should have new languages but they should not be overhyped. Like the C++ hype, the whole “design patterns will save us.” Though maybe they were reacting to the conservatism of the Unix C world of the ’80s.

But at some point we have to have better languages. And the reason is to have proof assistants or proof systems, to have some kind of automatic verification of some claims you’re making in your code. You won’t get all of them, right? And the dynamic tools like Valgrind and its race detectors, that’s great too. There’s no silver bullet, as Brooks said, but there are better languages and we should migrate to them as we can.

Seibel: To what extent should programming languages be designed to prevent programmers from making mistakes?

Eich: So a blue-collar language like Java shouldn’t have a crazy generic system because blue-collar people can’t figure out what the hell the syntax means with covariant, contravariant type constraints. Certainly I’ve experienced some toe loss due to C and C++’s foot guns. Part of programming is engineering; part of engineering is working out various safety properties, which matter. Doing a browser they matter. They matter more if you’re doing the Therac-25. Though that was more a thread-scheduling problem, as I recall. But even then, you talk about better languages for writing concurrent programs or exploiting hardware parallelism. We shouldn’t all be using synchronized blocks—we certainly shouldn’t be using mutexes or spin locks. So the kind of leverage you can get through languages may involve trade-offs where you say, “I’m going, for safety, to sacrifice some expressiveness.”

With JavaScript I think we held to this, against the wild, woolly Frenchmen superhackers who want to use JavaScript as a sort of a lambda x86 language. We’re not going to add call/cc; there’s no reason to. Besides the burden on implementers—let’s say that wasn’t a problem—people would definitely go astray with it. Not necessarily the majority, but enough people who wanted to be like the superhackers. There’s sort of a programming ziggurat—the Right Stuff, you know. People are climbing towards the top, even though some of the people at the top sometimes fall off or lose a toe.

You can only borrow trouble so many different ways in JavaScript. There are first-class functions. There are prototypes, which are a little confusing to people still because they’re not the standard classical OOP.
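
For readers who haven’t met prototypes, here is a minimal sketch of the delegation model being described; the names are illustrative only and don’t come from the interview.

    // A minimal sketch of prototype-based objects: behavior lives on a
    // shared prototype and is found by delegation, not copied per instance.
    function Point(x, y) {
      this.x = x;
      this.y = y;
    }

    // norm is defined once, on Point.prototype, and shared by all instances.
    Point.prototype.norm = function () {
      return Math.sqrt(this.x * this.x + this.y * this.y);
    };

    var p = new Point(3, 4);
    p.norm();                                       // 5, found via the prototype chain
    Object.getPrototypeOf(p) === Point.prototype;   // true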

That’s almost enough. I’m not a minimalist who says, “That’s it; we should freeze the language.” That’s convenient cover for Microsoft, and it kind of outrages me, because I see people wasting a lot of time and still having bugs. You know, you can still have lots of hard-to-find bugs with lambda coding.

Doug has taught people different patterns, but I do agree with Peter Norvig: those patterns show some kind of defect in the language. These patterns are not free. There’s no free lunch. So we should be looking for evolution in the language that adds the right bits. Adding optional types probably will happen. They might even be more like PLT contracts.
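
As a rough, purely illustrative sketch of what a contract-flavored check could look like in JavaScript—not a proposal from the interview, just one reading of what PLT-style contracts do at a function boundary:

    // A tiny contract-style wrapper: check a precondition on the argument and
    // a postcondition on the result, naming the offending boundary on failure.
    function contract(pre, post, fn, name) {
      return function (x) {
        if (!pre(x)) { throw new TypeError(name + ': bad argument ' + x); }
        var result = fn(x);
        if (!post(result)) { throw new TypeError(name + ': bad result ' + result); }
        return result;
      };
    }

    var isFiniteNumber = function (v) { return typeof v === 'number' && isFinite(v); };
    var sqrt = contract(isFiniteNumber, isFiniteNumber, Math.sqrt, 'sqrt');

    sqrt(9);     // 3
    sqrt(-1);    // throws TypeError: sqrt: bad result NaN (postcondition fails)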

Seibel: A lot of the stuff you’re dealing with, from static analysis of your C++ to the tracing JITs and new features for JavaScript, seems like you’re trying to keep up with some pretty cutting-edge computer-science research.

Eich: So we’re fighting the good fight but we’re trying to be smart about it. We’re also trying to move the research needle because—this is something else that was obvious to me, even back when I was in school, and I think it’s still a problem—there are a lot of problems with academic research. It’s widely separated from industry.

So there’s something wrong that we’d like to fix. We’ve been working with academics who are practically minded. That’s been great. We don’t have much money so we’re going to have to use leverage—partly it’s just getting people to talk and network together.

You lose something when the academics are all off chasing NSF grants every year. The other thing is, you see the rise in dynamic languages. You see crazy, idiotic statements about how dynamic languages are going to totally unseat Java and static languages, which is nonsense. But the academics are out there convinced static type systems are the ultimate end and they’re researching particular kinds of static type systems like the ML, Hindley-Milner type inference and it’s completely divorced from industry.

Seibel: Why is that? Because it’s not solving any real problems or because it’s only a partial solution?

Eich: We did some work with SML New Jersey to self-host the reference implementation of JavaScript, fourth edition, which is now defunct. We were trying to make a definitional interpreter. We weren’t even using Hindley-Milner. We would annotate types and arguments to avoid these crazy, notorious error messages you get when it can’t unify types and picks some random source code to blame and it’s usually the wrong one. So there’s a quality-of-implementation issue there. Maybe there’s a type-theoretic problem there too because it is difficult, when unification fails, to have useful blame.

Now you could do more research and try to develop some higher-level model of cognitive errors that programmers make and get better blame coordinates. Maybe I’m just picking on one minor issue here, but it does seem like that’s a big one.

Academia has not been helpful in leading people toward a better model. I think academia has been kind of derelict. Maybe it’s not their fault. The economics that they subsist on aren’t good. But we all knew we were headed toward this massively parallel future. Nobody has solved it. Now they’re all big about transactional memory. That’s not going to solve it. You’re not going to have nested transactions rolling back and contending across a large number of processors. It’s not going to be efficient. It’s not going to actually work correctly in some cases. You can’t map all your concurrent or parallel programming algorithms onto it. And you shouldn’t try.

People like Joe Armstrong have done really good work with the shared-nothing approach. You see that a lot in custom systems in browser implementations. Chrome is big on it. We do it our own way in our JavaScript implementation. And shared nothing is not even interesting to academics, I think. Transactional memory is more interesting, especially with the sort of computer-architecture types because they can figure out ways to make good instructions and hardware support for it. But it’s not going to solve all the problems we face.
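
A loose sketch of that shared-nothing style using browser Web Workers, where the two sides communicate only by copying messages; the file name and the task are made up for illustration.

    // main.js — spawn a worker; the only channel between the two threads is
    // postMessage, and message data is structured-cloned, never shared.
    var worker = new Worker('sum-worker.js');   // hypothetical worker script

    worker.onmessage = function (event) {
      console.log('sum from worker:', event.data);
    };

    worker.postMessage([1, 2, 3, 4]);

    // sum-worker.js — the worker keeps its own state and answers by message.
    onmessage = function (event) {
      var sum = event.data.reduce(function (a, b) { return a + b; }, 0);
      postMessage(sum);
    };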

I think there will be progress and it should involve programming languages. That’s why I do think the talk about the second golden age isn’t wrong. It’s just that we haven’t connected the users of the languages with the would-be developers with the academics who might research a really breakthrough language.

Seibel: You got a master’s but not a PhD. Would you generally recommend that people who want to be programmers should go get a PhD in computer science? Or should only certain kinds of people do that?

Eich: I think only certain kinds of people. It takes certain skills to do a PhD, and sometimes you wonder if it’s ultimately given just because you endured. But then you get the three letters to put after your name if you want to. And that helps you open certain doors. But my experience in the Valley in this inflationist boom of 20 years or so that we’ve been living through—though that may be coming to an end—was certainly that it wasn’t a good economic trade-off. So I don’t have regrets about that.

