Then there are user-interface things where you just don’t know until you build it. You think this interaction will be great but then you show it to the user and half the users just can’t get it. Then you have to backtrack and come up with something new.

Seibel: Leaving aside designing user interactions, when is prototyping valuable? As opposed to just thinking about how something is going to work?

Norvig: I think it’s useful to imagine the solution, to see if it’s going to work. It’s useful to see if it feels comfortable. You want a set of tools that are going to help you build what you have to build now and are going to help you evolve the system over time. And if you start out prototyping and all of a sudden it feels clunky, then maybe you’ve got the wrong set of primitives. It’d be good to know that as soon as possible.

Seibel: What about the idea of using tests to drive design?

Norvig: I see tests more as a way of correcting errors rather than as a way of design. This extreme approach of saying, “Well, the first thing you do is write a test that says I get the right answer at the end,” and then you run it and see that it fails, and then you say, “What do I need next?”—that doesn’t seem like the right way to design something to me.

It seems like only if it was so simple that the solution was preordained would that make sense. I think you have to think about it first. You have to say, “What are the pieces? How can I write tests for pieces until I know what some of them are?” And then, once you’ve done that, then it is good discipline to have tests for each of those pieces and to understand well how they interact with each other and the boundary cases and so on. Those should all have tests. But I don’t think you drive the whole design by saying, “This test has failed.”

The other thing I don’t like is a lot of the things we run up against at Google don’t fit this simple Boolean model of test. You look at these test suites and they have assertEqual and assertNotEqual and assertTrue and so on. And that’s useful but we also want to have assertAsFastAsPossible and assert over this large database of possible queries we get results whose score is precision value of such and such and recall value of such and such and we’d like to optimize that. And they don’t have these kinds of statistical or continuous values that you’re trying to optimize, rather than just having a Boolean “Is this right or wrong?”

Seibel: But ultimately all of those can get converted into Booleans—run a bunch of queries and capture all those values and see if they’re all within the tolerances that you want.

Norvig: You could. But you can tell, just from the methods that the test suites give you, that they aren’t set up to do that; they haven’t thought about that as a possibility. I’m surprised at how much this type of approach is accepted at Google—when I was at Junglee I remember having to teach the QA team about it. We were doing this shopping search and saying, “We want a test where on this query we want to get 80 percent right answers.” And so they’re saying, “Right! So if it’s a wrong answer it’s a bug, right?” And I said, “No, it’s OK to have one wrong answer as long as it’s not 80 percent.” So they say, “So a wrong answer’s not a bug?” It was like those were the only two possibilities. There wasn’t an idea that it’s more of a trade-off.
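[The threshold idea being discussed here can be sketched as an ordinary Boolean test wrapped around an aggregate statistic: individual wrong answers are tolerated, but the batch-level precision must clear a bar. This is a minimal illustration, not Google or Junglee code; `run_search` and `is_relevant` are hypothetical stand-ins for a real search backend and real labeled relevance data.]

```python
import unittest

def run_search(query):
    """Hypothetical stand-in for a shopping-search backend."""
    canned = {
        "red shoes": ["red shoes", "red sneakers", "red heels"],
        "laptop bag": ["laptop bag", "toaster"],  # one wrong answer
    }
    return canned.get(query, [])

def is_relevant(query, result):
    """Hypothetical relevance judgment (a real system would use labeled data)."""
    return any(word in result for word in query.split())

class StatisticalSearchTest(unittest.TestCase):
    def test_precision_over_query_set(self):
        queries = ["red shoes", "laptop bag"]
        relevant = total = 0
        for q in queries:
            for r in run_search(q):
                total += 1
                relevant += is_relevant(q, r)
        precision = relevant / total
        # One wrong answer is not a bug; dropping below 80 percent precision is.
        self.assertGreaterEqual(precision, 0.8)
```

[Here "toaster" is a wrong answer for "laptop bag", yet the test passes, because overall precision is 4/5 = 0.8—exactly the trade-off the QA team struggled with.]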

Seibel: But you are still a believer in unit tests. How should programmers think about testing?

Norvig: They should write lots of tests. They should think about different conditions. And I think you want to have more complex regression tests as well as the unit tests. And think about failure modes—I remember one of the great lessons I got about programming was when I showed up at the airport at Heathrow, and there was a power failure and none of the computers were working. But my plane was on time.

Somehow they had gotten print-outs of all the flights. I don’t know where—there must have been some computer off-site. I don’t know whether they printed them that morning or if they had a procedure of always printing them the night before and sending them over and every day when there is power they just throw them out. But somehow they were there and the people at the gates had a procedure for using the paper backup rather than using the computer system.

I thought that was a great lesson in software design. I think most programmers don’t think about, “How well does my program work when there’s no power?”

Seibel: How does Google work when there’s no power?

Norvig: Google does not work very well without power. But we have backup power and multiple data centers. And we do think in terms of, “How well does my piece work when the server it’s connecting to is down or when there are other sorts of failures?” Or, “I’m running my program on a thousand machines; what happens when one of them dies?” How does that computation get restarted somewhere else?
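[The restart question Norvig poses can be sketched in a few lines: run a computation over many shards, and when a worker dies, retry that shard instead of failing the whole job. This is a toy single-process illustration of the pattern, not how Google’s infrastructure works; `process_shard` and `run_with_restarts` are hypothetical names, and the random failure simulates a dying machine.]

```python
import random

def process_shard(shard_id):
    """Hypothetical worker; fails nondeterministically, like a dying machine."""
    if random.random() < 0.3:
        raise RuntimeError(f"worker for shard {shard_id} died")
    return shard_id * 2

def run_with_restarts(shard_ids, max_attempts=5):
    """Retry each failed shard, as if restarting it on another machine."""
    results = {}
    for shard in shard_ids:
        for _attempt in range(max_attempts):
            try:
                results[shard] = process_shard(shard)
                break  # shard done; move on to the next one
            except RuntimeError:
                continue  # "restart somewhere else": just try the shard again
        else:
            raise RuntimeError(f"shard {shard} failed {max_attempts} times")
    return results
```

[A real system would also deal with stragglers, duplicate results from retried work, and persistent state—but the core decision is the same: a single machine’s death is an expected event, not a job-level error.]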

Seibel: Knuth has an essay about developing TeX where he talks about flipping over to this pure, destructive QA personality and doing his darnedest to break his own code. Do you think most developers are good at that?

Norvig: No. And I had an example of that in my spelling corrector. I had introduced a bug in the code that measured how well I was doing and simultaneously had made some minor change in the real code. I ran it and I got back a much better score for how well it was doing. And I believed it! If it had been a much worse score I would have never said, “Oh, this minor change to the real function must have made it much worse.” But I was willing to believe this minor change made the score much better rather than being skeptical and saying, “Nah, couldn’t have made that much difference; there must be something else wrong.”

Seibel: How do you avoid over-generalization and building more than you need and consequently wasting resources that way?

Norvig: It’s a battle. There are lots of battles around that. And I’m probably not the best person to ask because I still like having elegant solutions rather than practical solutions. So I have to sort of fight with myself and say, “In my day job I can’t afford to think that way.” I have to say, “We’re out here to provide the solution that makes the most sense and if there’s a perfect solution out there, probably we can’t afford to do it.” We have to give up on that and say, “We’re just going to do what’s the most important now.” And I have to instill that in myself and in the people I work with. There’s some saying in German about the perfect being the enemy of the good; I forget exactly where it comes from—every practical engineer has to learn that lesson.

Seibel: Why is it so tempting to solve a problem we don’t really have?

Norvig: You want to be clever and you want closure; you want to complete something and move on to something else. I think people are built to only handle a certain amount of stuff and you want to say, “This is completely done; I can put it out of my mind and then I can go on.” But you have to calculate, well, what’s the return on investment for solving it completely? There’s always this sort of S-shaped curve and by the time you get up to 80 or 90 percent completion, you’re starting to get diminishing returns. There are 100 other things you could be doing that are just at the bottom of the curve where you get much better returns. And at some point you have to say, “Enough is enough, let’s stop and go do something where we get a better return.”

Seibel: And how can programmers learn to better recognize where they are on that curve?

Norvig: I think you set the right environment, where it’s results-oriented. And I think people can train themselves. You want to optimize, but left to yourself you optimize your own sense of comfort and that’s different from what you really should be optimizing—some people would say return on investment for the company, others would say satisfaction of your customers. You have to think how much is it going to benefit the customer if I go from 95 percent to 100 percent on this feature vs. working on these ten other features that are at 0 percent.
