
books chapter ten

Week X.

coders

Dan Ingalls didn’t invent Smalltalk, but he implemented it. He also invented BitBlt, possibly the most famous function in computer history. Very early on, he had a business selling a profiler that actually worked. The first version was for Fortran, but he had trouble selling it because Fortran programmers weren’t interested. “And who are they all working for? They’re all working for the government. Do they care how efficient their program is? Not really. What they really want to do is to show that the computer is overloaded and that they need a new computer and more money.” So he switched to COBOL, since businesses have limited budgets for new computers and thus actually care about efficiency, and that version was very popular.

Dan likes interactive programming. Type a statement, er, expression I mean, see the result. Although it’s a bit of a myth that Smalltalk was written in Smalltalk. The early versions were not, and it wasn’t until Squeak that it was Smalltalk all the way down.

He laments that adding JavaScript to HTML was a bit backwards. We should have started with dynamic graphics and simplified from there. He doesn’t mention it, but I guess PostScript would satisfy this requirement. So there’s a thought experiment: how would things be different if every web page were a remote NeWS style display?

founders

Caterina Fake founded Flickr. There were some other photo web sites at the time, but they were targeted at people uploading a bunch of pictures from a digital camera, picking out the good ones, and ordering prints. For those sites the online digital photo wasn’t the end product, but it turns out lots of people were happy just sharing albums online. The commodification of cameras resulted in the commodification of photos as well. Where once you’d hire a photographer for special events, now you’d bring your own camera to take pictures of lunch. “We may be the most boring startup that you interview for your book because our path was fairly smooth.”

Brewster Kahle founded WAIS, Alexa, and the Internet Archive. Before that, he worked at Thinking Machines. (Side link: Richard Feynman and The Connection Machine) WAIS was trying to invent internet publishing, somewhat pre-web. Of course, for this to work they needed publishing partners. WAIS tried working with only the best in each industry because they weren’t interested in being number 2. That’s an interesting twist. I usually hear about existing companies being slow to adapt and resistant to change. After all, what they’ve been doing has been working. But on the other hand, if you can get them interested in a direction, they’re less likely to half ass it? Plus: “Since we were so inexpensive - we were living based on furniture that didn’t match; we had learned our lessons of how to live very inexpensively - we could do things as production that they would normally pay an Ernst & Young just to do a study.” Go where the money is.

After WAIS, he founded Alexa, which was supposed to be a guide to the internet. You’re looking at one page, you’re probably interested in similar pages, even if the author hasn’t linked to them. But there’s no way you could use search to find them, because that will never scale. Heh, so he was a bit wrong about that. But his idea of collaborative filtering still has some appeal. Not sure I’d want one company to control it any more than I like one company controlling search, but some alternative means of discovery sure would be nice. Alexa was then bought by Amazon. He tells a funny story that Amazon was spending too much money on hardware, and Bezos asked what they should do, and Brewster said they should stop buying so much hardware. Solid advice.

man-month

The best tools are the sharpest tools. Instead of every programmer creating their own tools, they should use a common set. The toolmaster is also responsible for learning and mastering the use of external tools and explaining their use to the other programmers.

Next we consider the kinds of computers needed for development. This is a bit dated, of course. Every computer should have, raises pinky, one meeeelion bytes of memory. Except for the embedded space, my development machine looks a lot like my target machine. It’s rare to develop a new operating system for prototype hardware. But some advice remains pertinent. During the first phase of development, nobody needs access to test runs; then suddenly everybody finishes their component and needs testing time all at once. I worked on a project like that, and wished we had followed Brooks’s advice. Schedule and allocate testing time to each team in advance, and allow them to use it or squander it as they decide. Instead we just had continuous testing of the main branch, so everybody tossed their code in to see what happened. The usual result was chaos.

Brooks describes an early form of source control. Everybody writes their code, which they can modify at will. Then an integrator copies it into a library, where it undergoes more testing. Then it gets promoted to the “current version sublibrary”. A rather manual process, but the general idea, that gating and testing lead to stability, still holds. Write lots of documentation. (This requires use of a text editor, but those are easy to find these days.)

Program in a high level language. Optimizing compilers are getting better, and if they’re insufficient, the slow one to five percent of a program can be rewritten in assembly. So that’s where this advice comes from! As for which language, the only reasonable choice is PL/I, though it may be faster to work things out in APL first.

Brooks is kinda down on interactive programming, sticking with batch systems. Well, actually, he acknowledges that debugging turnaround time is an important part of productivity. Although one might consider web services as a form of batch processing. If it’s not a service I’m writing, but a service I’m using, I fire off some request and wait to see what comes out, but there’s no way to single step through the remote end. So maybe in the end Brooks was right. Batch programming hasn’t gone away. And it is a pain to debug.

pragmatic

33. Refactor your code when it gets too hairy, but don’t postpone it for too long. Schedule time in advance to do it, and make sure everybody knows it’s on the schedule. They make the analogy that writing software is not like constructing a building, but more like gardening, which I think is a pretty good one. You only make money by picking crops, but if you don’t stop to pull weeds, you’ll have increasing difficulty growing crops.

34. Write code that’s easy to test. If you use design by contract, you can try testing by contract. And/or try unit tests. Eventually all software meets the ultimate test, production, so consider how you’ll debug in that environment. Add useful logging and inspection capabilities.
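Not from the book, but here’s a minimal sketch of what testing by contract might look like in Python: the contract’s preconditions become assertions, and the unit tests probe the contract’s boundaries. The percentile function and its tests are invented for illustration.

    import unittest

    def percentile(values, p):
        """Contract: values is non-empty and 0 <= p <= 100."""
        assert values, "precondition: values must be non-empty"
        assert 0 <= p <= 100, "precondition: p must be in [0, 100]"
        ordered = sorted(values)
        return ordered[round((len(ordered) - 1) * p / 100)]

    class TestPercentile(unittest.TestCase):
        def test_contract_boundaries(self):
            # probe the edges the contract promises to handle
            self.assertEqual(percentile([3, 1, 2], 0), 1)
            self.assertEqual(percentile([3, 1, 2], 100), 3)

        def test_contract_violations_fail_fast(self):
            # a violated precondition should fail loudly, not silently
            with self.assertRaises(AssertionError):
                percentile([], 50)

    if __name__ == "__main__":
        unittest.main()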

35. Beware the evil wizard. Code generation is good if you understand it, but modern wizards that dump out ready to run skeleton programs can be more trouble than they’re worth. Do you even understand what’s going on in all that code you’re running but not writing?

code

We return to the task of building a computer from circuits and flip flops. If we wire up a decoder and a selector to a bank of latches, we can address them bit by bit. We might even access them randomly. Hello, RAM. And we learn all about kilobytes and megabytes and powers of two that are not quite powers of ten, but we say they are anyway. A home computer might have 32MB or 64MB of RAM, which is the first hint that this book is actually a bit older than I thought.
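As a toy model (an array indexed by the address lines, not Petzold’s actual latch diagrams), the read/write interface of RAM fits in a few lines of Python:

    class RAM:
        """A bank of one-byte latches behind an address decoder and selector."""

        def __init__(self, address_bits):
            # every extra address line doubles the number of latches
            self.latches = [0] * (1 << address_bits)

        def write(self, address, data):
            # the decoder routes data-in to exactly one latch
            self.latches[address] = data & 0xFF

        def read(self, address):
            # the selector picks exactly one latch's output
            return self.latches[address]

    ram = RAM(10)        # 10 address lines: 2**10 = 1024 bytes, one "kilobyte"
    ram.write(0x123, 42)
    assert ram.read(0x123) == 42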

Now that we’ve got some memory, we’re going to soup up our adding machine with some automation. This is a great chapter. If we wire up a simple counter (driven by a clock) to the address lines, we can step through a memory array, adding up the numbers we find and storing the sums back. Alas, when the address counter rolls over, it will go back around and start adding sums together.
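Mimicking that behavior in a few lines of Python (a sketch of the idea, not the circuit) makes the rollover problem easy to see:

    mem = [3, 5, 7, 11]        # the bytes we want to sum
    acc = 0                    # the accumulating latch
    addr = 0                   # the clock-driven address counter
    for tick in range(8):      # nothing tells the clock to stop...
        acc = (acc + mem[addr]) & 0xFF    # 8-bit add, overflow lost
        mem[addr] = acc                   # running sum written back
        addr = (addr + 1) % len(mem)      # ...so the counter rolls over
    print(mem)                 # the second pass has re-added the stored sums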

What we want is to split things apart so we have some control over the operation at each step. Load, add, store, and halt. This will let us add varying numbers of addends, not just pairs. But in order to tell our machine how to do this, we need some extra memory. So we’ve got Data RAM and Code RAM. But maybe we want to add numbers larger than 8 bits? We can use the carry output from our adder, as long as we add a place to store it. So we’ll include a one-bit carry latch, and make a new Add With Carry instruction that reads the carry bit. With this, and the right instruction sequence, we can add 16-, 24-, or 32-bit numbers. And similarly with Subtract With Borrow.
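Here’s the arithmetic of that scheme as a Python sketch, assuming an 8-bit adder with a one-bit carry latch:

    def add8(a, b, carry_in=0):
        """One pass through an 8-bit adder: (sum mod 256, carry out)."""
        total = a + b + carry_in
        return total & 0xFF, total >> 8

    def add16(x, y):
        """A 16-bit add built from a plain Add and an Add With Carry."""
        lo, carry = add8(x & 0xFF, y & 0xFF)       # Add: low bytes first
        hi, carry = add8(x >> 8, y >> 8, carry)    # Add With Carry: high bytes
        return (hi << 8) | lo, carry

    assert add16(0x01FF, 0x0001) == (0x0200, 0)
    assert add16(0xFFFF, 0x0001) == (0x0000, 1)    # carry out of the top byte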

It’s still kind of a pain though, because the sum of every two bytes is stored after them in memory. (Our memory address lines are still just hooked up to the oscillator-driven counter.) “To fix this problem, I’m going to make a fundamental and excruciating change to the automated adder that will at first seem insanely complicated.” Every instruction will be expanded from 1 byte to 3 bytes: after the opcode, there will be a two-byte address from which to load or store, etc. And since we’ve decoupled our data addresses from the clock, we can store our whole program in the same memory as our data. One thing to note is that each operation now requires more clock cycles to complete. This more sophisticated adder runs at only one fourth the speed of the first model.
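Concretely, a program in this format might be laid out like so. The opcode values are made up for the sketch, not Petzold’s actual encoding, and addresses are stored high byte first:

    # hypothetical opcodes, invented for illustration
    LOAD, ADD, STORE, HALT = 0x10, 0x20, 0x11, 0xFF

    program = [
        LOAD,  0x10, 0x00,   # load the byte at address 0x1000
        ADD,   0x10, 0x01,   # add the byte at address 0x1001
        STORE, 0x10, 0x02,   # store the sum at address 0x1002
        HALT,                # halt stays a one-byte instruction here
    ]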

Mixing code and data means we have one new problem. What if we want to extend our program with some new instructions, but the memory has already been used by some data? It’d be terribly inconvenient to rewrite everything. Meet our newest instruction, jump. This lets us load a new address into the program counter. Useful, but not as useful as a conditional jump. We already have most of the pieces. We’ll add one more one-bit latch, like the carry flag, but it will be the zero flag. This is set whenever the result of an add or subtract instruction is all zero bits. New instructions like Jump If Zero and Jump If Not Zero will use it.
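Putting it all together, a toy fetch-decode-execute loop (again with invented opcodes, not Petzold’s exact machine) shows how the program counter, the one-bit latches, and the jumps cooperate:

    # same made-up opcodes as above, plus the two jumps
    LOAD, ADD, STORE, JMP, JZ, HALT = 0x10, 0x20, 0x11, 0x30, 0x31, 0xFF

    def run(mem):
        """Toy CPU: 8-bit accumulator, 16-bit addresses, carry/zero flags."""
        acc = carry = zero = 0       # accumulator and two one-bit latches
        pc = 0                       # program counter
        while mem[pc] != HALT:
            op = mem[pc]
            addr = (mem[pc + 1] << 8) | mem[pc + 2]   # two-byte operand
            pc += 3
            if op == LOAD:
                acc = mem[addr]
            elif op == ADD:
                total = acc + mem[addr]
                acc, carry = total & 0xFF, total >> 8
                zero = int(acc == 0)   # zero flag set on arithmetic
            elif op == STORE:
                mem[addr] = acc
            elif op == JMP:
                pc = addr              # load a new address into the counter
            elif op == JZ and zero:
                pc = addr              # taken only when the zero flag is set

    mem = [LOAD, 0x10, 0x00, ADD, 0x10, 0x01, STORE, 0x10, 0x02, HALT]
    mem += [0] * (0x1003 - len(mem))   # grow memory out to the data area
    mem[0x1000], mem[0x1001] = 56, 27
    run(mem)
    assert mem[0x1002] == 83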

And with that, we have a computer. A real, live computer. The best part about putting all this hardware together is now we get to write some software. In assembly language, using all these instructions we’ve just invented. In theory, all the technology necessary to assemble this computer existed more than a century ago, although I think it would have been a bit unwieldy with telegraph relays. And many aspects of binary logic weren’t fully understood until 1945. Having come this far, next chapter we’re going to go back and visit historical computing machines.

coda

Brooks makes a nice contrast with some interviewees. Last week, Guy Steele went around and replaced everybody’s TECO macros with a common set, so everyone could use the same tools, just as advised. This week, Dan Ingalls advocates for immediate feedback, instead of the slow statement-by-statement development of PL/I. Brooks is at his best describing what worked and didn’t work for the OS/360 development team, but not so much at predicting the future. Not much of a surprise; predictions are tricky.

I really liked the Code chapter on automation. It took a while to build up a functioning adder and latches and so forth, but in one (longish) chapter we can put it all together and make a real computer. Everything is easier in hindsight, but we can go from basic parts to really complex operations in fewer steps than one might think.

Posted 26 Aug 2017 21:32 by tedu
Tagged: bookreview