
books chapter twelve

A week of cautionary tales.

coders

Ken Thompson likes chess, and helped invent UTF-8. He also did some other thing. Some of the first programs he worked on ran on analog computers, which require very careful scaling of inputs so as not to clip, so he wrote a program that ran on a digital computer to help him program the analog one. Speculation here, but I’m going to guess analog computers aren’t as helpful for writing digital programs. One of the first interesting programs he wrote was a pentomino tiling solver, which sounds like it could be a fun exercise. He once tried to write a Fortran compiler, but it actually turned into a B compiler.
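
It does sound like a fun exercise. Here’s a toy sketch, not Ken’s program, of the backtracking skeleton such a solver tends to have. To keep the piece table short it tiles a 3x4 board with L-trominoes; a real pentomino solver is the same loop with twelve pieces and all their rotations and reflections.

    /* toy tiling solver: fill a 3x4 board with L-trominoes using
       backtracking; a pentomino solver has the same skeleton, just
       more pieces and orientations */
    #include <stdio.h>
    #include <string.h>

    #define H 3
    #define W 4
    #define NORIENT 4
    #define NCELL 3

    /* the four orientations of the L-tromino: a 2x2 square minus one corner */
    static const int piece[NORIENT][NCELL][2] = {
        { {0,0}, {1,0}, {1,1} },
        { {0,0}, {0,1}, {1,0} },
        { {0,0}, {0,1}, {1,1} },
        { {0,1}, {1,0}, {1,1} },
    };

    static char board[H][W];

    static int
    solve(int placed)
    {
        int r, c, o, k, i;

        /* find the first empty cell; the next piece must cover it */
        for (r = 0; r < H; r++)
            for (c = 0; c < W; c++)
                if (board[r][c] == '.')
                    goto found;
        return 1; /* no empty cells left: solved */
    found:
        for (o = 0; o < NORIENT; o++) {
            for (k = 0; k < NCELL; k++) {
                /* anchor cell k of the piece on (r, c) */
                int ok = 1;
                for (i = 0; i < NCELL && ok; i++) {
                    int rr = r + piece[o][i][0] - piece[o][k][0];
                    int cc = c + piece[o][i][1] - piece[o][k][1];
                    if (rr < 0 || rr >= H || cc < 0 || cc >= W ||
                        board[rr][cc] != '.')
                        ok = 0;
                }
                if (!ok)
                    continue;
                for (i = 0; i < NCELL; i++)
                    board[r + piece[o][i][0] - piece[o][k][0]]
                         [c + piece[o][i][1] - piece[o][k][1]] = 'A' + placed;
                if (solve(placed + 1))
                    return 1;
                for (i = 0; i < NCELL; i++) /* undo and try the next fit */
                    board[r + piece[o][i][0] - piece[o][k][0]]
                         [c + piece[o][i][1] - piece[o][k][1]] = '.';
            }
        }
        return 0;
    }

    int
    main(void)
    {
        int r;

        memset(board, '.', sizeof(board));
        if (solve(0))
            for (r = 0; r < H; r++)
                printf("%.*s\n", W, board[r]);
        return 0;
    }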

Ken was pretty happy as an academic researcher, and wasn’t interested in an industry job, but Bell Labs offered him a free trip to the east coast, and when he got there he found out these were the people writing the papers he’d been reading. And so he started working on the MULTICS project, before that was cancelled. Bell Labs didn’t have particular directives for what they should be researching, but after backing out of MULTICS, operating systems was high on the list of things not to research. That’s what Ken wanted to do, though, so that’s what he did.

Ken notes that we can’t have too much reverence for existing code. We need to be willing to rewrite it or it rots. Seibel asks the obvious question: what happens if you introduce a bug because you didn’t understand the original? Then you debug it and fix the problem. It’s unclear when this debugging takes place. If you find the bugs as you go, great, but if somebody else is finding the bugs, this sounds a little less great.

Interesting note about Bell Labs. There was a phone number you could call, and anything you said would be recorded and transcribed, and the next day you’d have a stack of paper in your inbox. Another example of providing support staff to get things done.

And now, I think, a somewhat infamous exchange, where Ken argues against C being responsible for buffer overflows, etc. Some people write fragile code; some people don’t. I think some of his arguments are pretty weak, or at least poorly articulated, and he chooses some bad examples. I would not equate string truncation with a buffer overflow. He does make a good point that one can become root by trying to exploit a buffer overflow in su, or one can just sweet talk su into giving you a root shell. (Or maybe today you’d sweet talk Struts into running a little something special.)
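
For reference, the two failure modes look quite different in C. A minimal sketch, with the overflowing call left safely commented out:

    /* truncation vs. overflow: different failure modes */
    #include <stdio.h>

    int
    main(void)
    {
        const char *input = "a string longer than eight bytes";
        char buf[8];

        /* truncation: snprintf never writes past the end of buf; worst
           case, the copy is cut short and still NUL-terminated */
        snprintf(buf, sizeof(buf), "%s", input);
        printf("truncated: %s\n", buf);

        /* overflow: strcpy copies until it finds a NUL in the source,
           trampling whatever lives past buf; don't do this */
        /* strcpy(buf, input); */

        return 0;
    }

Truncation can still be a bug, but it’s a bounded one.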

We return to the idea of testing. Another vote for printf. For something complicated, like a network protocol, that gets tested when people use it. The original move fast and break things. I wonder if this is an attitude that works, or develops, in a small group, like the original UNIX team, where you can break something and then somebody will notice and then you fix it. It’s never clear whether Ken is talking about software for himself or a million users.

At first, this “bugs happen” attitude sounds really cavalier. Reflecting on this, though, I wonder if he’s just being honest. I made a mistake once (just the once), and ok, sure, I’ll fix it, and I did, but no big deal, right? Nothing quite so unsettling as somebody saying what we’re really thinking.

“I love yacc. I just love yacc. It just does exactly what you want done. Its complement, Lex, is horrible. It does nothing you want done.”

founders

It’s consulting week.

David Heinemeier Hansson didn’t found 37signals, but he did create their first product, Basecamp. They had very little budget to spare for the creation of a new product, and so they kept everything as lean as possible. They were their own first customer, which helped them focus on features that were truly necessary. But even some features they wanted never got done. Pointed observation: their original market focus on other consulting agencies meant they should have included features like billing and time tracking, but those features never got added, and instead people started using Basecamp for things like wedding planning. People didn’t know the product wasn’t meant for them because it didn’t include the features they didn’t need. It seems likely that if they had gone too far down the feature path, they’d have inadvertently rejected a large part of their ultimate market. Of course, some missing features really were missed, like time zones. Hard coding Chicago time may work even in Copenhagen, as long as you have a loose schedule and the days still match, but less well when you have tighter schedules or are living in Australia. Always plan to ship a major update soon after releasing a new product. It shows early adopters you care. So don’t take that big vacation just yet. On the benefits of a distributed team: he always had a few hours of alone time without interruption.

Philip Greenspun founded ArsDigita, then brought in the wrong VCs for the wrong reasons at the wrong time, and things ended less well. The ArsDigita legend begins with a road trip, then posting some pictures online, then creating a forum for visitors to ask questions about photography, then finally ending up with this web forum toolkit that could run on modest hardware and be managed by just one person. He looked around and wondered why all these other sites with server farms and teams of sysadmins kept crashing, so he decided to give away the software he wrote in the hopes that other sites would suck less. This done, companies would call him up and ask for features, and he’d tell them to edit the source which they already have, and they’d reply, no you edit it, take our money. Finally he set up a consultancy because the requests were getting out of hand. This sounds like a smoother way to turn an open source project into revenue than some other models, but that’s just me. Build it and they will come, and if they don’t, build something else. Don’t force the issue.

Philip wanted the consultancy to be a sort of training facility for programmers. Everybody had direct responsibility for talking to customers, getting work done, and even profits and losses. The teams were kept very small so that each project would be part of a professional portfolio, with individuals’ names on it.

He returns to this point repeatedly. Professional engineers are problem solvers. Their ultimate objective is always to solve the customer’s problem, not necessarily to translate specifications into code. He contrasts this with a cautionary tale of outsourcing. “They did exactly what we told them, no matter how ridiculous!” In a later episode, after the bad VCs, they hired more salespeople to mediate communications with customers, and the programmers just clocked in and out and implemented whatever crossed their desks. One customer was quite upset that the project was late. When Philip asked around, the programmers protested that the customer kept adding requirements, but apparently nobody had told the customer that this would delay the project, let alone by how much. After Philip intervened, the customer said that of course they wanted the basic version first, on time; the bonus features could be added later. The problem was a diffusion of responsibility.

One of the major advantages ArsDigita had over their competitors, who were similarly selling web frameworks, is that they were also using their framework. Before features were officially released, they were run on the photo site. They’d get feedback, fix the bugs, adjust the interface, and when it was ready, just tar up the filesystem and there’s your release. Having a live site and being able to watch problems develop was very beneficial. He contrasts this with how a word processor might be developed. It’s given to a QA team, who test various features, but nobody on the QA team actually goes through the experience of writing a novel.

Business was good, profits were flowing, but after growing a bit, they wanted to hire a business manager. Competent business people, however, are not inclined to work at small startups, so they needed to IPO. But the banks refused to underwrite the offering because they’d have to do due diligence, and that’s slow and expensive. Turns out there’s a loophole: if you have big name VC backing, the banks will underwrite the IPO without the due diligence. So in order to hire a competent manager they needed to go public, which meant they needed to take VC funding. They did, and then the VCs brought in new management, who turned out to be incompetent. So much for that plan. The new management fired Philip and friends and lost lots of money, but because they were incompetent they didn’t know this until the money had almost run out. Lawsuits and looting and much hilarity, for lack of a better word, ensued.

ArsDigita is remembered for all that, plus the Ferrari. There was a Ferrari parked out front, the epitome of excess, right? It was meant as a recruiting lure. Recruit ten friends, get to drive the Ferrari. Nobody ever got to drive it, but it was great marketing. It looked extravagant, but leasing a Ferrari is actually kind of cheap for a successful business. On the other hand, in the late stages of the company, they were spending many multiples of that on salary for salespeople who weren’t selling anything and marketers who were generating less press than the Ferrari. But somehow that’s not extravagant. I think this experience suggests a rule: if you’re profitable, it’s not extravagant, and if you’re losing money (and especially if you don’t even know how much), then whatever you’re doing is extravagant.

man-month

There’s no silver bullet. This was a paper originally published in 1986, so we’ve jumped ahead quite a bit in the timeline. I didn’t immediately realize this, although some of the references make it clear we’ve departed from the IBM big iron mainframe world. Definitely worth reading in its entirety.

“There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.”

Software involves two kinds of tasks, essential and accidental. Essential tasks are the design of the software and planning what it should do. Accidental tasks are the grunt work of getting the semicolons in the right place. Lots of progress has been made in reducing the time spent on accidental tasks (moving from assembly to higher level languages, for instance), but unless we’re still spending 9/10 of our effort on fairly low level implementation, even reducing accidental work to zero will not make us 10Xers.
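
The arithmetic is essentially Amdahl’s law: if a fraction f of the effort is accidental, eliminating all of it buys a speedup of 1/(1-f), so a tenfold improvement requires f to already be 0.9. A quick sketch:

    /* Brooks's argument, as arithmetic: the payoff from eliminating
       accidental work entirely, for various accidental fractions f */
    #include <stdio.h>

    int
    main(void)
    {
        double f;

        for (f = 0.1; f < 0.95; f += 0.2)
            printf("accidental fraction %.1f -> max speedup %.1fx\n",
                f, 1.0 / (1.0 - f));
        return 0;
    }

Unless accidental work already dominates nine to one, there’s no order of magnitude to be had.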

The lack of silver bullets doesn’t mean we can’t make progress, but we should expect it to be slower, smaller, and incremental. He makes an interesting analogy. Before the advent of germ theory, we thought we could cure disease with exorcisms and rituals. Just had to discover the magic, first. But germ theory tells us there is no magic, which seems like a step backwards, but it allows us to make progress by researching in the right direction.

“I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation.”

So what is the essence of software? Brooks identifies four properties: complexity, conformity, changeability, and invisibility.

Software is very complex. It has many moving parts. And unlike many other engineering disciplines, it becomes more complex as it scales. In software, we eliminate duplicate elements, instead of repeating them. A hotel with 200 rooms is twice the size of a hotel with 100 rooms, but not twice as complex, since the rooms are all alike. A program with 200 classes is twice the size, but considerably more than twice as complex as a program with 100 classes.

Software is capable of conformity, therefore it often must. We invent arbitrary nonsensical interfaces, and then we get to live with them. I’d argue this isn’t entirely essential; we can always choose to make better interfaces. In practice though, that may not be an option. The LDAP server was already there when I got here.

Software is capable of change, therefore it often must. In particular, it seems easy to change compared to physical artifacts. This too seems a bit self-inflicted, but as developers we’re often downstream from the point where such decisions are made.

Software is invisible. We can make physical models of buildings, and see how things fit together and whether there’s enough room for people to move about, etc. Not so with software. We can draw a diagram of control flow, or data flow, or data types, but none of these are sufficient for understanding. A program, as it runs on a computer, is an abstract entity that can’t be reduced to a simpler model of itself. I thought this was a very important section, because it addresses the question of what software even is. It’s certainly not the lines of text in a file. That’s merely an interface to the pure “thought-stuff” that is software.

So that’s what’s hard about making progress in the future. A quick recap of some of the past breakthroughs that solved accidental difficulties. High level languages make us more productive and reliable. But now that we don’t worry about the placement of individual bits, how much more progress can be made? Time sharing, by providing immediate feedback, speeds up development. But we’re already talking about millisecond response times, so how much faster can things get? Unified environments, like Unix, make it easier to get things done by providing lots of tools that are always and immediately available. Standard file formats so we don’t have to start from scratch. Maybe there’s still some room for improvement.

What’s the great silver hope for the future? “One of the most touted recent developments is the programming language Ada, a general-purpose, high-level language of the 1980s.” With some adjustment to the language and the decade, I’d say that’s one of the great timeless statements of programming. Object oriented programming can be subdivided into abstract data types and hierarchical types. Both are good, but alas we’re still dealing with accidental difficulties. It becomes easier to express the design of a program, but no easier to design it. Artificial intelligence, which is vaguely defined and unlikely to help. Expert systems which make suggestions? This sounds kind of weird, but we might allow fuzzing to slip in here. Automatic programming. Just tell the computer what program you want, and it writes it. But how to tell the computer what program to program? Graphical programming moves beyond text, but see above regarding the difficulties representing programs as diagrams. The essence of the program is still invisible; this is just an alternative interface to it. “Program verification is a very powerful concept, and it will be very important for such things as secure operating system kernels.” But it’s also a lot of work, and nobody wants to do it, which I think remains generally true. And finally, we can still improve our environments and tools. Some foreshadowing of what might be an IDE here, where everything you need is right there at your fingertips. Also, get a better workstation, but don’t expect magic. Ridiculous to think there’s been 30 years of hardware progress since this was written and it’s arguable how much better things have really gotten.

Returning to the conceptual difficulties of software, some proposals to make it easier to design software. Buy it, don’t build it. Let somebody else do the thinking. He relates an interesting note that in the 50s and 60s, nobody wanted off the shelf accounting software. Everybody wanted a custom version. In the 80s, such software became very popular, even though it wasn’t really any more capable. What changed is that the cost of hardware came down. After buying a $2 million computer, spending another 10% on a software package that did exactly what one wanted and exactly fit one’s existing process was sensible. It’s less sensible to buy a $50,000 machine and pay several times that again for a custom program. Instead you buy what’s available and adapt your processes to fit. Kind of a neat lesson here and some food for thought, about our willingness to compromise and how cheaper software changes our habits.

Rapid prototyping, early delivery, feedback gathering. “Much of present-day software acquisition procedures rests upon the assumption that one can specify a satisfactory system in advance, get bids for its construction, have it built, and install it. I think this assumption is fundamentally wrong, and that many software acquisition problems spring from that fallacy.” Then he refines an analogy. Brooks used to talk about writing software. Then he switched to saying building software. The implication is there are specifications, and components, and it’s put together in an orderly fashion. Now he talks about growing software. The implication is that it starts small, then grows over time, but at any point in time, it’s complete and at least minimally functional. This requires top-down development, but it gives better results (and importantly, immediate results) than building parts in isolation and trying to combine them after the fact.

Then Brooks includes a feisty little table of exciting and not exciting products. Unix, good. DOS, bad. APL, good. COBOL, bad. Smalltalk, good. Algol, bad. C, curiously absent, but probably eaten by an algol.

pragmatic

38. Don’t start until you’re ready. Strangely, they don’t mean wait until you have requirements, but until you know what code you’re going to write. If you’re uncertain, wait. This seems kinda vague. How do you know when you’re certain? They do suggest prototyping to test ideas, however.

39. Be careful with over specification. They quote a British Airways memorandum from the December 1996 issue of Pilot magazine:

The Landing Pilot is the Non-Handling Pilot until the 'decision altitude' call, when the Handling Non-Landing Pilot hands the handling to the Non-Handling Landing Pilot, unless the latter calls 'go-around,' in which case the Handling Non-Landing Pilot continues handling and the Non-Handling Landing Pilot continues non-handling until the next call of 'land' or 'go-around' as appropriate. In view of recent confusions over these rules, it was deemed necessary to restate them clearly.

This is a fuzzy area. A great many problems stem from under specification. The sensible advice is to try to maintain separation between what something does and how it does it.

40. Should you use formal methods? Yes, but make sure you’re benefiting from them. Don’t become a slave to the tool.

code

We revisit code in the sense of a means of representing information. There are several ways to represent text. For instance, a story might be several narrow columns of text in a magazine, then it might be republished in a book, but it’s still the same story. He attempts to demonstrate this by writing “Call me Ishmael.” in two different fonts, but both sentences appear in the same font in the Kindle version. Not the intended demonstration, but I suppose it works, too.

We’ve already seen Braille code, but now we’re going to study Baudot code, invented by Emile Baudot for the French Telegraph Service. This was modified by Donald Murray and standardized by the ITU as ITA-2, the International Telegraph Alphabet No. 2. We still commonly refer to it as Baudot code (from which we get the word baud), but it’s really the Murray code. It’s a 5-bit code, which is enough for 32 codes. We’ve got all the regular letters, the space, and perennial text favorites, the carriage return and line feed. There’s also a special shift code, which transitions to the figure character set, including numbers and more punctuation. Rather unexpectedly, there are separate figure shift and letter shift codes, not a single code to toggle back and forth. This isn’t mentioned, but as an extension this lets one shift from letters to letters again to access lowercase. That’s a good use case. My experiments with shift codes have generally reused the same, single code point to toggle behaviors. Making distinct “up” and “down” codes allows cycling through a variety of character sets, but it’s probably hell for normalization.
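
Decoding a shift-code stream means carrying a little state from character to character. Here’s a sketch: the FIGS (0x1B) and LTRS (0x1F) values are the genuine ITA-2 shift codes, and the letters table follows the usual ITA-2 assignment, but check a real table before trusting it; figures are reduced to a ‘#’ placeholder here.

    /* stateful decoding of a 5-bit shift-code stream, ITA-2 style */
    #include <stdio.h>

    #define FIGS 0x1b
    #define LTRS 0x1f

    /* letters mode, indexed by code; slots 27 and 31 are the shifts */
    static const char letters[33] =
        "\0E\nA SIU\rDRJNFCKTZLWHYPQOBG\0MXV\0";

    static void
    decode(const unsigned char *codes, int n)
    {
        int i, shifted = 0; /* start in letters mode */

        for (i = 0; i < n; i++) {
            if (codes[i] == FIGS) { shifted = 1; continue; }
            if (codes[i] == LTRS) { shifted = 0; continue; }
            putchar(shifted ? '#' : letters[codes[i] & 0x1f]);
        }
    }

    int
    main(void)
    {
        /* H E L L O, then FIGS, a figure code, LTRS, and E again */
        unsigned char msg[] = { 20, 1, 18, 18, 24, FIGS, 23, LTRS, 1 };

        decode(msg, sizeof(msg));
        putchar('\n');
        return 0;
    }

Lose one shift code in transit and everything after it decodes in the wrong mode, which is the classic problem with stateful encodings.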

Problems with shift codes abound, so let’s try using more bits. Letters and numbers alone are 62 codes, so with just a few extra characters, we’re going to need more than 6 bits. How about 7? 128 characters should be enough for anyone. One such encoding is ASCII. It’s got all the letters, all the numbers, some useful punctuation, some less useful punctuation. Important note: letters and numbers appear in order, which is convenient for sorting and comparing. ASCII even has some really weird stuff in the bottom 32 codes because typewriters are weird. This is demonstrated by using tabs to align some text to columns, which again doesn’t appear in the Kindle edition. Another cautionary tale.
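
The in-order property is easy to demonstrate: digit conversion is a subtraction, and case conversion is a single bit.

    /* ASCII puts letters and digits in contiguous runs, which makes
       comparisons and conversions cheap */
    #include <stdio.h>

    int
    main(void)
    {
        char c;

        printf("'0' = %d, '9' = %d\n", '0', '9');   /* 48 .. 57 */
        printf("'A' = %d, 'a' = %d\n", 'A', 'a');   /* differ by 32 */

        /* digit value is a subtraction */
        printf("'7' - '0' = %d\n", '7' - '0');

        /* case folding is one bit: 0x20 */
        for (c = 'A'; c <= 'E'; c++)
            putchar(c | 0x20);
        putchar('\n');
        return 0;
    }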

Unlike ASCII, EBCDIC, descended from the IBM punch cards we’ve previously examined, doesn’t keep all the letters in order. It’s weird, and subject to constraints like too many holes breaking the punch cards, and did I mention it’s weird?

We typically store 7-bit ASCII codes in 8-bit bytes, wasting some space, but it’s convenient this way. Some figures for reference. A magazine page might have 7200 bytes of text on it. War and Peace is 3.9 megabytes. The United States Library of Congress has a lot of books.

The extra bit can be used for extended ASCII character sets. This includes extra letters wearing funny hats, and some specialized punctuation, like the no-break space that appears as a space but doesn’t break a line. This is useful for phrases like “WW II”, which, you guessed it, line wrapped in my copy. Alas, nobody could agree on which funny hats the letters should wear, so there are a great many different extensions, and so a bunch of companies got together to invent Unicode and in the darkness bind them.
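
To see why the distinction matters, here’s a toy greedy line wrapper that only breaks at plain spaces (0x20); a Latin-1 no-break space (0xA0) sails through, keeping “WW II” together. A real wrapper would also handle tabs, hyphens, and Unicode.

    /* greedy word wrap that breaks only at plain spaces, so a
       Latin-1 no-break space (0xA0) never splits a phrase */
    #include <stdio.h>

    static void
    wrap(const unsigned char *s, int width)
    {
        int col = 0, i, next;

        for (i = 0; s[i]; i = next) {
            /* measure the next unbreakable chunk, up to a plain space */
            for (next = i; s[next] && s[next] != ' '; next++)
                ;
            int len = next - i;
            if (col > 0 && col + 1 + len > width) {
                putchar('\n');
                col = 0;
            } else if (col > 0) {
                putchar(' ');
                col++;
            }
            fwrite(s + i, 1, len, stdout);
            col += len;
            if (s[next] == ' ')
                next++;
        }
        putchar('\n');
    }

    int
    main(void)
    {
        /* 0xA0 is the Latin-1 no-break space between WW and II */
        wrap((const unsigned char *)
            "the allies won WW\xa0II after much effort", 16);
        return 0;
    }

With a width of 16, “WW II” moves to the next line as a unit instead of splitting.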

coda

Worse than not getting what you asked for is getting exactly what you asked for. If you take requirements or specifications and turn them into code without thinking, like a mean-spirited genie, maybe you’re a great coder, but probably not a great engineer.

Posted 16 Sep 2017 03:31 by tedu Updated: 16 Sep 2017 03:31
Tagged: bookreview