Physics was also less satisfying to me because it has kind of stalled. There’s something not quite right when you have these big inductive theories where people are polishing corners and inventing stuff like dark energy, which is basically unfalsifiable. I was gravitating toward something that was more practical but still had some theoretical strength based in mathematics and logic.

Then I went to University of Illinois Champaign-Urbana to get a master’s degree, at least. I was thinking of going all the way but I got stuck in a project that was basically shanghaied by IBM. They had a strange 68020 machine they had acquired from a company in Danbury, Connecticut, and they ported Xenix to it. It was so buggy they co-opted our research project and had us become like a QA group. Every Monday we’d have the blue suit come out and give us a pep talk. My professors were kind of supine about it. I should’ve probably found somebody new, but I also heard Jim Clark speak on campus and I pretty much decided I wanted to go work at Silicon Graphics.

Seibel: What were you working on at SGI?

Eich: Kernel and networking code mostly. The amount of language background that I used there grew over time because we ended up writing our own network-management and packet-sniffing layer and I wrote the expression language for matching fields and packets, and I wrote the translator that would reduce and optimize that to a short number of mask-and-match filters over the front 36 bytes of the packet.
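A minimal C++ sketch of the kind of reduction described here, under assumptions: each compiled expression becomes a conjunction of mask-and-match tests applied to the first 36 bytes of a packet. The `MaskMatch` representation, names, and byte-order handling are illustrative, not SGI’s actual code.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

constexpr std::size_t kHeaderBytes = 36;  // only the packet front is examined

// One compiled test: mask a field at a fixed offset and compare it.
struct MaskMatch {
    std::size_t   offset;  // byte offset into the header
    std::uint32_t mask;    // which bits of the field matter
    std::uint32_t value;   // expected value after masking
};

// A whole expression reduces to a short list of mask-and-match filters.
bool matches(const std::array<std::uint8_t, kHeaderBytes>& header,
             const std::vector<MaskMatch>& filters) {
    for (const auto& f : filters) {
        if (f.offset + sizeof(std::uint32_t) > header.size())
            return false;                       // field lies past the front bytes
        std::uint32_t field = 0;
        std::memcpy(&field, header.data() + f.offset, sizeof(field));
        if ((field & f.mask) != f.value)        // byte order handling omitted
            return false;
    }
    return true;
}

int main() {
    std::array<std::uint8_t, kHeaderBytes> header{};
    header[12] = 0x08;                          // an Ethertype-like field, for illustration
    std::vector<MaskMatch> ip_only = {{12, 0x000000FF, 0x00000008}};
    return matches(header, ip_only) ? 0 : 1;
}
```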

And I ended up writing another language implementation, a compiler that would generate C code given a protocol description. Somebody wanted us to support AppleTalk in this packet sniffer. It was a huge, complex grab bag of protocol syntax for sequences and fields of various sizes and dependent types of… mostly arrays, things like that. It was fun and challenging to write. I ended up using some of the old Dragon book (Aho and Ullman) compiler skills. But that was it. I think I did a unifdef clone. Dave Yost had done one and it didn’t handle #if expressions and it didn’t do expression minimization based on some of the terms being pound-defined or undefined, so I did that. And that’s still out there. I think it may have made its way into Linux.

I was at SGI from ’85 to ’92. In ’92 somebody I knew at SGI had gone to MicroUnity and I was tired of SGI bloating up and acquiring companies and being overrun with politicians. So I jumped and it was to MicroUnity, which George Gilder wrote about in the ’90s in Forbes ASAP as if it was going to be the next big thing. Then down the memory hole; it turned into a $200 million crater in North Sunnyvale. It was a very good learning experience. I did some work on GCC there, so I got some compiler-language hacking. I did a little editor language for MPEG2 video where you could write this crufty pseudospec language like the ISO spec or the IEC spec, and actually generate test bit streams that have all the right syntax.

Seibel: And then after MicroUnity you ended up at Netscape and the rest is history. Looking back, is there anything you wish you had done differently as far as learning to program?

Eich: I was doing a lot of physics until I switched to math and computer science. I was doing enough math that I was getting some programming but I had already studied some things on my own, so when I was in the classes I was already sitting in the back kind of moving ahead or being bored or doing something else. That was not good for personal self-discipline and I probably missed some things that I could’ve studied.

I’ve talked to people who’ve gone through a PhD curriculum and they obviously have studied certain areas to a greater depth than I have. I feel like that was the chance that I had then. Can’t really go back and do it now. You can study anything on the Internet but do you really get the time with the right professor and the right coursework, do you get the right opportunities to really learn it? But I’ve not had too many regrets about that.

As far as programming goes, I was doing, like I said, low-level coding. I’m not an object-oriented, design-patterns guy. I never bought the Gamma book. Some people at Netscape did, some of Jamie Zawinski’s and my nemeses from another acquisition, they waved it around like the Bible and they were kind of insufferable because they weren’t the best programmers.

I’ve been more low-level than I should’ve been. I think what I’ve learned with Mozilla and Firefox has been about more test-driven development, which I think is valuable. And other things like fuzz testing, which we do a lot of. We have many source languages and big deep rendering pipelines and other kinds of evaluation pipelines that have lots of opportunity for memory safety bugs. So we have found fuzz testing to be more productive than almost any other kind of testing.
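A toy fuzzing harness, sketched under assumptions: `parse_input` is a placeholder for any parser in a rendering or evaluation pipeline, and the loop simply feeds it random mutations of a seed input. Real fuzzers add coverage feedback, corpus management, and sanitizer instrumentation; here a crash or sanitizer report would signal a memory-safety bug.

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Stand-in for a real parser; the target is an assumption for this sketch.
bool parse_input(const std::vector<std::uint8_t>& data) {
    return data.size() >= 4 && data[0] == 0x89;   // trivial placeholder check
}

int main() {
    std::vector<std::uint8_t> seed = {0x89, 'P', 'N', 'G', 0x0D, 0x0A};
    std::mt19937 rng(12345);
    std::uniform_int_distribution<int> byte(0, 255);

    for (int i = 0; i < 100000; ++i) {
        std::vector<std::uint8_t> input = seed;
        // Flip a handful of random bytes in the copy of the seed input.
        for (int k = 0; k < 4; ++k)
            input[rng() % input.size()] = static_cast<std::uint8_t>(byte(rng));
        parse_input(input);   // a crash or sanitizer report here is a bug
    }
    return 0;
}
```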

I’ve also pushed us to invest in static analysis and that’s been profitable, though it’s fairly esoteric. We have some people we hired who are strong enough to use it.

Seibel: What kind of static analysis?

Eich: Static analysis of C++, which is difficult. Normally in static analysis you’re doing some kind of whole-program analysis and you like to do things like prove facts about memory. So you have to disambiguate memory to find all the aliases, which is an exponential problem, which is generally infeasible in any significant program. But the big breakthrough has been that you don’t really need to worry about memory. If you can build a complete control-flow graph and connect all the virtual methods to their possible implementation, you can do a process of partial evaluation over the code without actually running it. You can find dead code and you can find redundant tests and you can find missing null tests.
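A small illustrative example (not Mozilla’s analyzer) of the kinds of findings such control-flow-based partial evaluation can surface without any alias reasoning: a redundant null test, the dead code behind it, and a missing null test.

```cpp
#include <cstdio>

void report(const char* name, int* p) {
    if (p == nullptr) {
        std::printf("%s: null\n", name);
        return;
    }
    if (p == nullptr) {                   // redundant test: provably false here
        std::printf("unreachable\n");     // dead code on every path
    }
    std::printf("%s = %d\n", name, *p);   // safe: p is known non-null by now
}

void report_first(const int* items, int count) {
    // Missing null test: items is dereferenced on the count > 0 path without
    // any branch establishing that it is non-null.
    if (count > 0)
        std::printf("first = %d\n", *items);
}

int main() {
    int x = 7;
    report("x", &x);
    report_first(&x, 1);
    return 0;
}
```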

And you can actually do more if you go to higher levels of discourse where we all operate, where there’s a proof system in our head about the program we’re writing. But we don’t have a type system in the common languages to express the terms of the proof. That’s a real problem. The Curry-Howard correspondence says there’s a correspondence between logic systems and type systems, and types are terms and programs are proofs, and you should be able to write down these higher-level models that you’re trying to enforce. Like, this array should have some constraint on its length, at least in this early phase, and then after that it maybe has a different or no constraint. Part of the trick is you go through these nursery phases or other phases where you have different rules. Or you’re inside your own abstraction’s firewall and you violate your own invariants for efficiency but you know what you’re doing and from the outside it’s still safe. That’s very hard to implement in a fully type-checked fashion.
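One way to sketch the phase idea in C++, with hypothetical types of my own: the length constraint is carried in the type during an early phase and dropped after a “seal” step, so the compiler enforces the invariant only while it is supposed to hold.

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Phase 1: the length is part of the type, so the compiler enforces it.
template <std::size_t N>
struct FixedBuffer {
    std::array<int, N> data{};
};

// Phase 2: once sealed, the length constraint no longer applies.
struct SealedBuffer {
    std::vector<int> data;
};

template <std::size_t N>
SealedBuffer seal(const FixedBuffer<N>& b) {
    return SealedBuffer{std::vector<int>(b.data.begin(), b.data.end())};
}

int main() {
    FixedBuffer<4> early;                  // the early-phase constraint holds here
    early.data = {1, 2, 3, 4};
    // early.data = {1, 2, 3, 4, 5};       // would not compile: too many elements
    SealedBuffer later = seal(early);      // after the phase change, no constraint
    return static_cast<int>(later.data.size()) == 4 ? 0 : 1;
}
```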

When you write Haskell programs you’re forced to decide your proof system in advance of knowing what it is you’re doing. Dynamic languages became popular because people can actually rapidly prototype and keep this latent type system in their head. Then maybe later on, if they have a language that can support it, or if they’re recoding in a static language, they can write down the types. That was one of the reasons why in JavaScript we were interested in optional typing and we still are, though it’s controversial in the committee. There’s still a strong chance we’ll get some kind of hybrid type system into a future version of JavaScript.

So we would like to annotate our C++ with annotations that conservative static analysis could look at. And it would be conservative so it wouldn’t fall into the halting-problem black hole and take forever trying to go exponential. It would help us to prove things about garbage-collector safety or partitioning of functions into which control can flow from a script, functions from which control can flow back out to the script, and things to do with when you have to rematerialize your interpreter stack in order to make security judgments. It would give us some safety properties we can prove. A lot of them are higher-level properties. They aren’t just memory safety. So we’re going to have to keep fighting that battle.
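A hedged sketch of what such annotations might look like, using Clang’s `annotate` attribute; the macro names and attribute strings are hypothetical, and the checker that would consume them is assumed rather than shown.

```cpp
// A checker reading these tags could verify call-graph properties such as
// "code reachable from script never calls GC-unsafe code" without solving
// the general aliasing problem.
#if defined(__clang__)
#  define ANALYSIS_TAG(tag) __attribute__((annotate(tag)))
#else
#  define ANALYSIS_TAG(tag)
#endif

#define CAN_RUN_SCRIPT ANALYSIS_TAG("can-run-script")
#define GC_UNSAFE      ANALYSIS_TAG("gc-unsafe")

CAN_RUN_SCRIPT void dispatch_event() {}   // control may flow into script here
GC_UNSAFE      void touch_raw_heap() {}   // must stay unreachable from script paths

CAN_RUN_SCRIPT void handle_click() {
    dispatch_event();      // allowed: both ends are marked script-callable
    // touch_raw_heap();   // a conservative checker would flag this call
}

int main() {
    handle_click();
    return 0;
}
```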

