books chapter thirteen

Apparently we’re on a biweekly schedule now.

coders

Fran Allen wanted to be a math teacher, but took a job as a programmer at IBM to pay for school. This was long ago, back when the stereotype was that women made the best programmers because they were more detail-oriented. She went on to work on the Stretch supercomputer, particularly its compiler, and later on other advances in compiler optimization. Her first programming experience, in college, was on the IBM 650, which was a drum machine. Instructions were stored on the drum, spinning round and round, so if you wanted your program to go fast you needed to optimize its physical layout.

This experience perhaps helped when it came to the Stretch computer. The Stretch processor had a very complex memory model, with varying degrees of latency and concurrent instructions, which required interleaving instructions properly for best performance. So that was one of the roles of the compiler optimizer.

Fran agrees you should build one to throw away, but adds the important caveat that you still need to think about what you’re doing. If the first one is carelessly built crap, you probably won’t learn much from it.

She relates a funny story: when working on the Stretch compiler, somebody wanted to use a hash table for the symbol table, but she vetoed that in favor of a linked list, because hashing was new and uncertain. But the developers persevered, implemented hashing, and showed it could work. A fun time to program, when using a hash table was a risky decision. The moral is to listen to people’s ideas, but also to be careful bringing outsiders onto a team, because this can go very wrong as well. She has another story about a project ruined in that way. Newcomers have limited knowledge of the current project, and a tendency to think whatever worked for the last project will work here. And they can be forceful and convincing in getting the team to go along, but what worked there may not work here.

A lot of Fran’s work has dealt with optimization, in particular unlocking parallelism, which is increasingly relevant. This makes her quite disappointed with the rise of C. It’s very difficult to perform these kinds of optimizations on C code. Seibel asks about Java, etc., but they’re quite similar: the language overspecifies the layout of data. The compiler, for instance, needs the freedom to reorder fields, choose row vs. column order, AoS vs. SoA, etc. Fran thinks we need to go back and rethink a lot of how we design and program computers. I wonder what her thoughts on GPU shaders would be. Rather low level, but a lot of the parallelism occurs “outside” the program.
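As a concrete illustration of the kind of layout decision she means, here’s a sketch in C (mine, not from the interview). The language nails down whether the data is an array of structures or a structure of arrays, and the compiler isn’t free to switch between them even when one is clearly better for the loop at hand.

```c
#include <stddef.h>

#define N 1024

/* Array of structures: the layout a typical C program specifies.
 * The compiler must keep x, y, z interleaved in memory, even if the
 * hot loop only ever touches x. */
struct particle { double x, y, z; };
struct particle aos[N];

/* Structure of arrays: the layout a vectorizing compiler might prefer,
 * but in C the programmer has to rewrite the code to get it. */
struct particles {
    double x[N];
    double y[N];
    double z[N];
};
struct particles soa;

double sum_x_aos(void)
{
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        s += aos[i].x;   /* strided access: skips over y and z every step */
    return s;
}

double sum_x_soa(void)
{
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        s += soa.x[i];   /* contiguous access: friendly to caches and vector units */
    return s;
}
```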

A final remark about patents. Fran worked at IBM research, and sometimes it was very difficult to get IBM corporate to adopt the things they were working on. But if they published a paper, a competitor would implement the idea, and then the business side would take notice. So patenting things would ironically have held IBM back.

founders

Joel Spolsky founded Fog Creek Software, where I happened to work a few years after this interview. Joel was inspired by Philip Greenspun and ArsDigita, who we met before, and wanted to found a software consultancy that would eventually turn into a software product company. Joel didn’t want to give away the software for free, however, because consulting has a linear growth curve but software licenses can be sold in increasing numbers without increasing expenses. In contrast to Greenspun’s take on the demise of ArsDigita, Joel notes that consulting in general completely collapsed at the same time. “The consulting market is the derivative of every other market.” When times are good, companies hire consultants to grow faster. When times are bad, the consultants are the first to be cut. Fog Creek had to lay off their own first employees and was down to just Joel and Michael for a while. Then they released FogBugz, and started making money again. So the plan to have a consultancy feed a product company kind of failed, but they got the product made anyway.

Selling FogBugz turned out to be a challenge. First they tried to find a distributor, but that model is pretty dead these days. Then they wanted to sell the entire company to another company that would know how to sell software, but the deal fell through. (And that other company wasn’t very good at selling software, either.) Then they tried affiliate links and coupons and some other stuff that soaked up engineering time. These techniques were marginally profitable, but the return wasn’t nearly as good as it was when they spent more effort on the product. Improving the product and releasing a new version led to many more sales than anything else they did. And of course, the Joel on Software blog. If tons of people are reading your blog, it’s a good place to stick links to your product. But maybe that doesn’t scale for everyone.

Don’t pay attention to competitors, but listen to users and potential users. If you’re finding out about what users want by reading competitors’ announcements, that’s a bad position to be in. And they may be wrong, too.

Joel worked in a bread factory for two years. Good times.

An anecdote about the importance of really knowing one’s platform. Loading a segment register on the 386 was very slow, so you had to be careful about using far pointers. Microsoft knew this, so their program loaded fast, but Borland didn’t, and their program was very slow. Thankfully I mostly missed out on this fun, but at the source level near and far pointers look very similar. The abstraction hides what may be crippling performance.
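For the curious, this is roughly what it looked like in the 16-bit DOS compilers; near and far are compiler extensions, not standard C, and this is a from-memory sketch rather than the actual code Joel is describing.

```c
/* 16-bit x86 C, roughly as the old DOS compilers had it.  The "near" and
 * "far" keywords are compiler extensions; modern compilers won't accept
 * them. */
char near *np;   /* 16-bit offset within the current data segment */
char far  *fp;   /* 32-bit segment:offset pair */

void copy(char far *dst, char far *src, unsigned n)
{
    /* Dereferencing a far pointer may reload a segment register on every
     * access.  The source looks almost identical to the near version,
     * but on a 386 the cost could be wildly different. */
    while (n--)
        *dst++ = *src++;
}
```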

A point about a brilliant hack being something everybody says you need, that you then find a way to do without. His example is Ruby on Rails. Before Rails, everybody thought database column names needed to be completely flexible, so you’d have to write lots of code to map database names to application names. Then Rails says, no, you can just require that they be the same, and all the flexible mapping code can be removed.

Stephen Kaufer founded TripAdvisor. He wanted to take a vacation, but it was really hard to find reliable reviews of the places and hotels he was considering. The initial idea was to manually curate an archive of links, then sort them by relevance. The original business model proposed selling access to this database to other sites, like Yahoo and AOL. That didn’t work, because when they pitched their offering to these companies, the other company wanted TripAdvisor to pay them to be on their site. TripAdvisor’s own web site was only supposed to be a demo, to highlight the database, but it was steadily gaining traffic, so they thought about banner ads but didn’t have the traffic to make much money. Eventually somebody noticed that they could place very well qualified links on the pages for individual hotels, so they struck a deal where each hotel page linked directly to the Expedia booking page for that hotel. This worked really well, and soon everybody wanted TripAdvisor to send them clicks.

man-month

Skipping ahead another decade, we’re going to review “No Silver Bullet” and see how well it’s held up and respond to some of the comments and criticism it’s received. To recap, Brooks’s central argument was that there was no magical solution. Most of the early retorts were of the form, but did you consider X? Here we are several years later, and X has not in fact revolutionized the field.

Some clarifications about the original paper. Brooks used the word accidental to describe the implementation of software. In this, he means not the common sense of occurring by chance, but the older meaning closer to incidental. As a matter of fact, we should be able to measure what portion of software development comes from mental craftsmanship of the design and what portion from creating the implementation. How much effort is spent worrying about and fixing pointers and memory allocations? Estimates vary, but nobody has claimed that the accidental part is larger than 9/10.
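The arithmetic behind that threshold (my restatement, not the paper’s): if a fraction $a$ of the effort is accidental, then eliminating all of it, which would itself take magic, improves productivity by a factor of

$$\frac{1}{1 - a}$$

which only reaches the promised order of magnitude when $a > 9/10$.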

Do not misinterpret the original paper to mean that software development is difficult because of some deficiency in how it’s built. The argument is that the essential difficulties are inherent to the field. Somebody claimed that the original paper was too gloomy, but Brooks asks if there’s anything wrong with that. Should Einstein not have published the theory of relativity because it’s too gloomy that nothing can exceed the speed of light? He compares the silver bullet seekers to the alchemists searching for the philosopher’s stone. Realism trumps wishful thinking.

He does, however, offer an alternative view, proposed by a physicist. The motion of gas molecules in a container is very complex, but at a higher level we have the laws of thermodynamics and kinetic theory of gases. So perhaps we may yet find a new understanding of software that brings order to chaos. This sounds interesting, although I’m not sure I’m ready to deal with software on a scale where we measure it in moles.

One of the changes proposed that seems to be taking place is the increased reuse of software. The field has sufficiently advanced that meaningful reusable components are widely available. Some commentary on the slow growth of C++ and object-oriented programming in general: it seems we haven’t yet learned its true value. It doesn’t make the first project faster, or the second, but perhaps projects after that can be built faster by more easily reusing code. I would investigate the use of open source these days. Entire applications and servers are reused now.

One of the impediments to reuse seems to be that the consumers of libraries are equally capable of producing their own version. So if it’s harder to find and validate a component than to write one, or, most importantly, if it’s perceived to be harder, we will write our own. What’s interesting here is that he calls out how difficult it can be to assess the functionality of a component. I can certainly attest to this. I visit the home page of a random web framework. What does it do? The words I use to describe my problem and the words they use to describe the problem solved have very little overlap. Even if it’s exactly what I’m searching for, how do I know this? This is contrasted with mathematical libraries, which have fixed and precise terminology. A glance at the manual for a numeric package will immediately tell you what it’s good for.

He compares learning a large software library with learning a new (human) language. Few people learn a language simply by memorizing lists of words. Syntax and semantics are learned incrementally. Examples and context are important. I think we’ve gotten a little better about this, with guided tours through new languages, but maybe some room for improvement. How about annotated versions of complete programs?

So, to sum up, the sooner we accept that software is difficult, the sooner we can get to work on the incremental improvements that are possible.

pragmatic

Stepping up to the project level. Previous advice was mostly for individuals, but now we can see how it works in a group context.

41. Pragmatic teams. Take all the previous advice and make it work for a team. Don’t let problems sit unsolved. Everybody needs to care about quality, or it will demoralize the one person who does. For this reason, it’s ridiculous to appoint a quality officer. Lots of problems can be harder to see in a group. For example, scope creep, where each individual may have only slightly more work, but the aggregate amount is substantial. Or everybody assumes somebody else is handling it. It’s bad to duplicate your code; it’s even worse for a teammate to duplicate your code.

Particular reference: the chief programmer concept, which sounds a lot like the surgical team idea, proposed by Baker in 1972 in the IBM Systems Journal. (Brooks cites this as a reference in the surgical team chapter. Actually proposed by Mills in 1971, and tested by Baker.)

42. Ubiquitous automation means automate everything you do repeatedly. With a special emphasis on deterministic processes. When every developer sets up their environment by hand, you’re going to get some strange results. Use make, it’s great. Use tools to generate web pages that track progress, code reviews, etc. There’s a final note here, citing this article which I haven’t read, that code reviews are effective but conducting them in meetings is not. As stated previously, I’ve had the opposite experience. Code review meeting was awesome.

43. Ruthless testing really just means lots of it. Unit tests for the little stuff, integration tests for the big stuff. Don’t just test ideal conditions; also perform tests for resource exhaustion and stress testing. They particularly call out testing at different screen resolutions, and other usability testing. It can be hard to test a GUI, though this is greatly simplified by adding an alternative interface and decoupling the front end.

For some tests, it’s obvious what’s passing and failing. For others, you need to establish a baseline for comparison. Performance and resource regressions require regular and repeated testing to detect. Every bug that gets fixed must have a test, no matter how obvious, to make sure no bug is ever found a second time.
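As a minimal sketch of that last point, in C with assert; the function and the bug are hypothetical, just to show the shape of a regression test that pins a fix in place.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical function that, in this story, once crashed on the empty
 * string.  The name and the bug are made up for illustration. */
size_t count_words(const char *s)
{
    size_t n = 0;
    int in_word = 0;

    for (; *s; s++) {
        if (*s == ' ') {
            in_word = 0;
        } else if (!in_word) {
            in_word = 1;
            n++;
        }
    }
    return n;
}

int main(void)
{
    /* Regression tests added when the bug was fixed, so the same bug can
     * never be found a second time. */
    assert(count_words("") == 0);
    assert(count_words("one two  three") == 3);
    return 0;
}
```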

They propose having a project saboteur deliberately introduce flaws into the program to see if your test suite detects them. I’ve never heard of this idea. I wonder how well it works, and moreover, how well it’s received. I can imagine tensions escalating quickly.

code

So we’ve got our computer, with its processor and RAM. But that’s not enough. We need to connect to other devices, notably some means of communicating with the outside world, to have a complete computer. For that, we need a bus. An early bus was the S-100, used by the Altair. What’s interesting about this design is that the processor goes on one expansion board, RAM goes on one or more other expansion boards, and so on. There’s nothing on the motherboard, just connection slots for expansion boards. After S-100 came the ISA bus used by the original PC. Then Micro Channel, and EISA, and finally PCI.

Back to the Altair: if we’re making memory boards for this computer, they’re all connected by the same bus. By default, they would all occupy the same address, which might be fun for a bit, but probably not very useful. Instead, each board has a DIP switch on it for specifying the high bits of the addresses it responds to. Now, this is a little bit complicated because all the output signals of each board are still connected together. This is trouble, because if one chip outputs 2.2 volts for a 1 and another outputs 0.4 volts for a 0, you’ll read something strange in the middle. The output signal actually needs to be a tri-state output. The third state is nothing, as if the chip weren’t connected at all. So when the address lines on the bus indicate that a board is not in use, it must switch its outputs to that third, disconnected state. That’s what makes the bus work.
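A toy model of that selection logic in C; the names and the 4K board size are my invention, not the book’s, but it shows the idea: a board only drives the data lines when the high address bits match its switches, and otherwise stays out of the way.

```c
#include <stdint.h>

/* One hypothetical 4K memory board on the bus.  The DIP switches select
 * which 4K window of the 64K address space this board answers to. */
struct board {
    uint8_t dip_switches;   /* compared against address bits A12..A15 */
    uint8_t ram[4096];
};

/* Returns 1 and drives *data only when this board is selected; otherwise
 * it "tri-states" by leaving the data lines untouched. */
int board_read(const struct board *b, uint16_t addr, uint8_t *data)
{
    if ((addr >> 12) != b->dip_switches)
        return 0;           /* deselected: outputs go to the third state */
    *data = b->ram[addr & 0x0fff];
    return 1;
}
```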

CRTs are old and weird, but they were still the norm at the time of writing. The magic electron beam zips back and forth lighting things up. We can think of the screen in terms of pixels, however, which makes things easier. To draw a character on the screen, in the olden text mode days, you would have some embedded ROM that contains the data for how to draw each character. If you switch to graphics mode, you will need a lot more RAM to store the color and pixel data. A TV can’t do much better than 320 x 200 resolution given various constraints, which isn’t very good, so IBM sold higher resolution monitors. The first PC could display 25 lines of 80 columns. 80 columns, just like on the old IBM punch cards. Later graphics adapters increased resolution to 640 x 480. Computer monitors use a 4:3 aspect ratio because that’s the aspect ratio Thomas Edison picked for his motion picture camera and projector.
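A sketch of what the character generator is doing, in C rather than in hardware; the 80x25 and 8x8 numbers match the original PC, but the code and names are mine.

```c
#include <stdint.h>

#define COLS 80
#define ROWS 25

/* One 8-byte bitmap per character; on real hardware this lives in the
 * character ROM. */
const uint8_t font_rom[256][8] = { /* glyph bitmaps omitted */ };

uint8_t text_ram[ROWS][COLS];             /* what programs write: 2,000 bytes */
uint8_t framebuffer[ROWS * 8][COLS * 8];  /* what actually gets scanned out */

void refresh(void)
{
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++) {
            const uint8_t *glyph = font_rom[text_ram[r][c]];
            for (int y = 0; y < 8; y++)
                for (int x = 0; x < 8; x++)
                    framebuffer[r * 8 + y][c * 8 + x] =
                        (glyph[y] >> (7 - x)) & 1;   /* 1 = lit pixel */
        }
}
```

Note the asymmetry: text mode needs only the 2,000 bytes of character RAM plus a fixed font ROM, while storing every pixel takes far more memory, which is why graphics mode was the expensive option.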

That’s output. Now let’s look at input, the keyboard. Keyboards output scan codes, not characters, and it’s the responsibility of a computer program to translate them. We are briefly introduced to interrupts, which cause the processor to jump to a new piece of code. Pressing a key triggers an interrupt, causing the processor to read which key is down. Or which keys, plural, are down, depending on the design.
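A sketch of the software side in C; the table is only a fragment and the real PC keyboard protocol has more to it, but this is the gist of scan code translation.

```c
#include <stdint.h>

/* The keyboard reports key positions (scan codes), not characters; the
 * mapping lives in software.  This table is a made-up fragment. */
static const char scancode_to_ascii[128] = {
    [0x1e] = 'a', [0x30] = 'b', [0x2e] = 'c',   /* ...and so on */
};

volatile char last_char;

/* Called when the keyboard raises its interrupt. */
void keyboard_isr(uint8_t scancode)
{
    if (scancode & 0x80)    /* high bit set: key released, ignore it */
        return;
    last_char = scancode_to_ascii[scancode & 0x7f];
}
```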

The trouble with RAM is that it gets erased when powered down. The trouble with punch cards is that they’re hard to erase. Thus, we prefer magnetic storage. “The paper was soon replaced with a stronger cellulose acetate base, and one of the most enduring and well-known of all recording media was born. Reels of magnetic tape — now conveniently packaged in plastic cassettes — still provide an extremely popular medium for recording and playing back music and video.” Eventually tape gave way to disk. In 1956, the first disk drive, the IBM RAMAC, stored 5 megabytes of data on 50 metal disks 2 feet in diameter. Disks have shrunk since then, and now removable floppy disks are used to distribute commercial software.

Now that we have a complete computer, we need some software to make it go. An important program for a computer is the operating system. When you first turn on our hypothetical computer, you’ll see garbage on the screen as random bytes in video memory are displayed, and the CPU will grind around performing nonsense calculations. So what we need is a reset switch. As long as reset is flipped, the processor won’t do anything, and we will have the opportunity to program instructions into RAM. We’re currently doing this with an elaborate switchboard containing 16 switches for the address and 8 switches for the data. This is going to be tedious as hell.

The first thing we program is a tiny bit of initialization code that reads from the keyboard. We have a very simple input language, basically peek and poke and run. After we’ve entered this via the switches, we can enter the rest of our program on the keyboard. Still programming in raw byte values, but a lot faster and easier. Nevertheless, even this tiny program will be lost if we reset the system. Instead, we save it to a read-only memory (ROM) chip and connect this to our computer at address 0. Now our computer will immediately begin executing this code when powered on.
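Here’s a sketch in C of the kind of tiny monitor being described; the command letters and syntax are invented for illustration, and on the real machine “run” would jump straight into the bytes you just entered rather than printing anything.

```c
#include <stdint.h>
#include <stdio.h>

/* A toy peek/poke/run monitor.  Commands (invented syntax):
 *   p ADDR       peek: print the byte at ADDR
 *   w ADDR VAL   poke: write VAL to ADDR
 *   r ADDR       run: hand control to the code at ADDR */
uint8_t mem[65536];

void monitor(void)
{
    char cmd;
    unsigned addr, val;

    for (;;) {
        if (scanf(" %c %x", &cmd, &addr) != 2)
            return;
        switch (cmd) {
        case 'p':
            printf("%04X: %02X\n", addr, mem[addr & 0xffff]);
            break;
        case 'w':
            if (scanf("%x", &val) == 1)
                mem[addr & 0xffff] = (uint8_t)val;
            break;
        case 'r':
            /* On the real machine this would jump to the code at addr;
             * that part can't be simulated meaningfully here. */
            printf("run %04X\n", addr);
            return;
        }
    }
}

int main(void) { monitor(); return 0; }
```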

We’d also like to save the contents of RAM between sessions. Saving to disk is the obvious solution. To avoid the boring task of remembering what data is in what sector on disk, we invent the file system. And now we’re well on our way to making an operating system.

Let’s study the CP/M operating system, written for 8-bit computers by Gary Kildall in the 1970s. CP/M lives on a 77 track, 8 inch floppy. The first two tracks are CP/M, the rest are allocated for storage. The first two allocation blocks are used for the directory. In the book, that’s rendered as “the directory,” because it’s a new term, but I’m more amused by the singularity of it. The directory contains entries for each file, which can have an 8 character name and a 3 character type. Fun fact: the disk map in each directory entry could only cover 16KB, so files larger than that required additional directory entries with a special flag set.
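From memory, a directory entry looks something like this as a C struct (32 bytes; double-check the real CP/M documentation before relying on the details).

```c
#include <stdint.h>

/* A CP/M directory entry, as I remember the layout. */
struct cpm_dirent {
    uint8_t user;        /* user number; 0xE5 marks a deleted entry */
    char    name[8];     /* file name, space padded */
    char    type[3];     /* file type, space padded */
    uint8_t extent;      /* which 16KB chunk of the file this entry maps */
    uint8_t reserved[2];
    uint8_t records;     /* 128-byte records used in this chunk */
    uint8_t blocks[16];  /* allocation block numbers: 16 x 1KB = 16KB max */
};
```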

A note about booting. A computer that was designed to run CP/M would have some ROM code sufficient to load the first sector off the disk. Then the loaded code is run, which loads the rest of CP/M into memory. More or less how things are still done today, plus 100 more stages.

CP/M includes a few basic commands, to list files and erase them, etc. It can also run user programs. CP/M provides some useful functions for applications, such as reading from the keyboard or reading and writing files. This is done via the CALL 5 instruction. When CP/M loads, it places a jump instruction at address 0005h (0x0005 for the rest of us) to its actual code. Any program which calls this code will jump into a subroutine of the operating system. This abstraction allows a program to run on several different kinds of computers. As long as it has an Intel 8080 chip, it doesn’t matter exactly how the keyboard and disk drive are attached.
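The concept translated into C, since I can’t write 8080 assembly from memory: a single well-known entry point that the OS installs and every program calls through. The names and dispatch here are illustrative, not a faithful CP/M implementation (though function 2 really was console output, if I recall correctly).

```c
#include <stdio.h>

typedef int (*bdos_fn)(int function, int argument);

/* In CP/M this slot is the jump instruction at address 0005h; here it's
 * just a global function pointer. */
bdos_fn system_entry;

/* The operating system's dispatcher. */
static int bdos(int function, int argument)
{
    switch (function) {
    case 2:  return putchar(argument);   /* console output */
    /* ...keyboard and file calls would go here... */
    default: return -1;
    }
}

void os_boot(void) { system_entry = bdos; }

/* An application never cares how the console is wired up; it only knows
 * the entry point and the function numbers. */
void app(void)
{
    const char *msg = "HELLO\r\n";
    while (*msg)
        system_entry(2, *msg++);
}

int main(void) { os_boot(); app(); return 0; }
```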

CP/M gave way to MS-DOS. Similar, but different. The FAT filesystem is a bit more complicated, and even includes subdirectories (with version 2.0). Of particular note, the CALL 5 API was retired in favor of software interrupts, a new feature on the 8086. INT 21h, pronounced “int twenty-one.”

AT&T invented UNIX, but they were a monopoly, so they had to give it away, but then they were allowed to sell it again, and look where we are today.

coda

We are now 30 years removed from No Silver Bullet. Time for another look back? Has anything substantial changed?

Posted 30 Sep 2017 03:12 by tedu Updated: 30 Sep 2017 03:26
Tagged: bookreview