A couple of years ago now I read Peter Naur’s “Programming as Theory-Building” (alternative PDF link) and it was a mind-blower. Yes, this Naur is the same Naur you know from Backus-Naur Form aka BNF. Anyway, you should go and read this essay now. It’s not very long. I’ll wait while you read it.
Back? Okay! Let’s do a bit of a crawl through some main points.
Highlights of the essay
Naur opens with his thesis statement, the argument he’s about to make:
[P]rogramming properly should be regarded as an activity by which the programmers form or achieve a certain kind of insight, a theory, of the matters at hand. This suggestion is in contrast to what appears to be a more common notion, that programming should be regarded as a production of a program and certain other texts.
Programming isn’t about writing the code; it’s about understanding the problem and expressing that understanding through code. That understanding is what allows us to modify the code without harming its design. Naur discusses three real-world cases of existing programs being modified over time: by the original team itself, by a team closely connected to the original team, and by a team that had only documentation to go on. The team with the closest connection was most successful at making additions that worked with the existing design, and not against it:
The conclusion seems inescapable that at least with certain kinds of large programs, the continued adaption, modification, and correction of errors in them, is essentially dependent on a certain kind of knowledge possessed by a group of programmers who are closely and continuously connected with them.
Naur then goes into what “theory” means, philosophically and in practice. Here are his three points about what a programmer having the theory can do, lightly edited:
- Explain how the solution relates to the affairs of the world1 that it helps to handle.
- Explain why each part of the program is what it is, in other words is able to support the actual program text with a justification of some sort.
- Respond constructively to any demand for a modification of the program so as to support the affairs of the world in a new manner.
Naur is talking about “programs” here, and today we add on top of that the systems in which many programs interoperate to do complex things. So the demands are a little higher: we need to have theories of the systems we build and maintain, as well as theories of the pieces of that system.
The term I’m more likely to use myself for what Naur calls “the theory of the program” would be “a mental model of the system”. Some accumulation of facts in my head has allowed me to build a map of the territory (where things are implemented or “happen” in the system) and to predict behaviors given inputs. My mental model might be shallow in places where I haven’t had to make changes and very deep and detailed where I have recently worked. I need to keep that model constantly refreshed through review and rehearsal. While I’m model-building I find myself making small changes to the code, like renaming variables for clarity once I understand them, tweaking log lines, or adding comments.
The ability to fluidly make functional changes to the system I’ve modeled requires even deeper understanding of how it’s implemented, because there are patterns and structures in the code that I have to work with rather than against. This gets closer to what Naur means when he talks about the theory of a program. It’s not just the what, but the how and the why. Why does having a theory of the program matter? Because this enables rapid and effective modification of the program to respond to changing requirements without piling up technical debt or hacks.
It must be obvious that built-in program flexibility is no answer to the general demand for adapting programs to the changing circumstances of the world.
Alas, it is not obvious, we say, looking at all the premature generalization happening around us.
I’m going to quote this entire paragraph because of how important it feels to me, with some commentary.
On the basis of the Theory Building View the decay of a program text as a result of modifications made by programmers without a proper grasp of the underlying theory becomes understandable. As a matter of fact, if viewed merely as a change of the program text and of the external behaviour of the execution, a given desired modification may usually be realized in many different ways, all correct. At the same time, if viewed in relation to the theory of the program these ways may look very different, some of them perhaps conforming to that theory or extending it in a natural way, while others may be wholly inconsistent with that theory, perhaps having the character of unintegrated patches on the main part of the program.
We might call these “unintegrated patches” technical debt or hacks, but either way, we know it’s a problem when we see it. Somebody has worked against the grain of the wood when carving a new feature into the system, and it feels wrong. The hacks get in the way when you need to make the next change. They might not work well because they’re at odds with other design choices. They might be sitting right next to an already-existing affordance to add that new behavior!
Continuing on in this paragraph:
This difference of character of various changes is one that can only make sense to the programmer who possesses the theory of the program. At the same time the character of changes made in a program text is vital to the longer term viability of the program. For a program to retain its quality it is mandatory that each modification is firmly grounded in the theory of it. Indeed, the very notion of qualities such as simplicity and good structure can only be understood in terms of the theory of the program, since they characterize the actual program text in relation to such program texts that might have been written to achieve the same execution behaviour, but which exist only as possibilities in the programmer’s understanding.
People who do have the theory of the program can make changes that work with what’s there already. They know where the affordances are. Naur says that simplicity and quality only make sense in the context of that code to begin with, and this point is a good one. Let’s try another metaphor: Writing a program is like finding a domain-specific language to express the problem and its solution, a language that expresses your understanding of the domain. It’s a truism that code is communication. It is primarily communication with other humans, not with a compiler, with a set of verbs and nouns chosen by you as the best expression of your understanding of the problem. Other people reading your code must learn to read your new language, and to make changes they need to write it.
How do programmers learn a theory of the system? Naur says documentation and the source code are not enough, and someone new to a system needs hands-on mentoring:
What is required is that the new programmer has the opportunity to work in close contact with the programmers who already possess the theory, so as to be able to become familiar with the place of the program in the wider context of the relevant real world situations and so as to acquire the knowledge of how the program works and how unusual program reactions and program modifications are handled within the program theory.
Humans learn through guided practice with others. Spend time working with people who understand a system, and you’ll begin to understand it too.
This is great if the people who have the theory of a system are still around to talk to.
Nobody’s around any more
The real world is often more like Naur’s “group B” case, where further development on software happened without the benefit of close contact with theory-holders. I’ve described myself a couple of times as a software archaeologist, digging out bits of architecture and sorting through refuse pits to figure out how a past software team lived and what the heck they were thinking about. Given that reality, let’s ask two practical questions in response to Naur’s article:
- How can you rebuild the theory of a program or system if its original authors aren’t around to teach you?
- How can you leave useful information behind yourself to help future maintainers rebuild the theory you have?
In my experience, I had to spend a lot of time reading code and building my own mental model of the software, how it was constructed, and how the pieces worked together (the archaeologist metaphor I mentioned above). I had to reconstruct the theory by looking at the textual artifact and what it was doing in practice.
My colleague Chris Dickinson does some interesting things while doing code spelunks. He generates other artifacts as he goes, such as textual call diagrams that he can turn into graphs with graphviz. He’ll do this as a vertical slice for a specific code path as well, ending up with a detailed call flow chart showing every network traversal or call made to build a single web page, for instance. There are cognitive reasons why making drawings or notes like this is helpful: you improve your understanding of a concept by expressing it in a different form than you’re receiving it. (This is related to why taking notes in a lecture is helpful. Hear -> write -> read.)
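A textual call diagram like the ones described above might look something like this sketch (all the node names and network annotations here are invented for illustration). Writing the call path as Graphviz DOT means the notes can later be rendered into a picture with `dot -Tsvg calls.dot -o calls.svg`:

```shell
# Hypothetical vertical slice: the calls made to build one web page,
# recorded as a Graphviz DOT file. The // comments mark network hops.
cat > calls.dot <<'EOF'
digraph page_load {
  rankdir=LR;
  "GET /profile" -> "render_profile";
  "render_profile" -> "fetch_user";        // network: user service
  "fetch_user" -> "fetch_permissions";     // network: auth service
  "render_profile" -> "fetch_avatar_url";  // network: media CDN
}
EOF
cat calls.dot
```

Even unrendered, the DOT file is a useful artifact to check into the repo: it is plain text, diffable, and readable without any tooling.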
I often tried to use commit logs to figure out why specific changes were made, but was more often frustrated than enlightened. The past generations of programmers at this company did not often write helpful commit messages.2 One specific commit that introduced an incredibly expensive bug had a commit message like “scope fixes”. Why was the change made? What did the programmer intend? Nobody knows.
I had to supplement by talking with people who weren’t familiar with the source code but could tell me what it did and why that was desirable or not. These people might not be the programmer-operators that Naur discusses, but they are expert user-operators. They have a theory of the program too! The one remaining programmer on staff who really understood a particular piece of software was priceless, and I’m grateful they were as amiable about explaining things as they were.
An aside about retention
I wish now to point out what might be obvious to you about the cost of team turnover. When there’s one human being left on a team who understands how that pile of legacy code works, you’re in trouble. If there are none, you’re in worse trouble. You have to hire people who can walk into messes cold and figure them out without help, and those people don’t come cheap.
Keep people around. Give them raises rather than making them leave to get more money. Keep them feeling good in their daily work.
Since we haven’t prevented, let’s try curing
Given this experience, and given the lightbulb moment that reading Naur gave me, I changed my development practice. I started thinking about ways I could help other programmers (maybe programmers I’d never meet) build useful theories about the software they inherited to maintain. If I found good ways to do this, I could then socialize those approaches and turn them into team practices.
Here are some of the things I’ve started doing.
I deliberately distinguished maintainer documentation from user documentation. The people who need to consume a service’s API are a completely different audience from the people who need to maintain that service. They might have different skillsets and programming languages: a person working on a website is probably writing in TypeScript, while the service might be in Rust, C#, or anything at all. They have completely different concerns as well. The maintainer needs to dip into the source and read details about the internals. Making the consumer of an API dip into the internals of its implementation would be a waste of their time.
Maintainers are the people who need the theory, so I invested time writing documentation for them. That documentation belongs as close to the source code as possible, and at least partly inside it in the form of comments. Don’t waste time documenting what can be seen through simple reading. Document why that function exists and what purpose it serves in the software. When might I call it? Does it have side effects? Is there anything important about the inputs and outputs that I might not be able to deduce by reading the source of the function? All of those things are clues about the thinking of the original author of the function that can help their successor figure out what that author’s theory of the program was.
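Here is a sketch of the difference between documenting the *what* and documenting the *why*, using an invented shell function and an invented scenario. The comment is aimed at a future maintainer: it records intent and a constraint they could not deduce from the body alone.

```shell
# Hypothetical example. The comment answers "why does this exist?" and
# "what can't I deduce from reading the body?", not "what does each line do?".

# Why: the upstream inventory feed is eventually consistent, so a missing
# SKU here usually means "not replicated yet", not "deleted". Callers rely
# on the empty-string fallback rather than an error. Side effect: logs a
# warning to stderr so replication drift stays visible.
lookup_sku() {
  local sku="$1"
  grep -m1 "^${sku}," inventory.csv || {
    echo "warn: sku ${sku} not yet replicated" >&2
    echo ""   # fallback by design; see comment above
  }
}

# Demo data and calls:
printf 'A1,widget\nB2,sprocket\n' > inventory.csv
lookup_sku A1
lookup_sku C3
```

A comment like “greps the CSV for the SKU” would add nothing; the fallback rationale is exactly the clue a successor needs to reconstruct the theory.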
Chunk up a level: writing about that function might help a maintainer fix a bug with it, but that isn’t sufficient for getting across the theory of the program. There are structural choices you make as you put together the program, as well as major decisions you make that inform the design. More specifically, the program exists to solve a problem, some “affair of the world” that Naur refers to. What was that problem? Is there a concise statement of that problem anywhere? What approach did you take to solving that problem statement? What tradeoffs did you make and why? What values did you hold as you made those tradeoffs? Why did you organize the source code in that particular way? What belongs where?
I started putting design documents, decision records, notes about spikes, and so on into the same repo as the source code. If you’re doing code deep-dives and generating artifacts about what you learn, check those artifacts into the source repo too! Put the videos somewhere durable and link to them. I duplicated some of these documents in the company’s documentation platform of choice, but the duplicates were not as important to me. I can’t guarantee a future maintainer will discover those artifacts. In fact, I can’t predict that any documentation platform other than the source code repo will exist in the future.
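Decision records in the repo can be as lightweight as numbered markdown files. This sketch uses one common convention (a `docs/decisions` directory and a context/decision/consequences template); the directory name, file name, and decision content are all invented for the example:

```shell
# A minimal decision record checked in next to the code it explains.
mkdir -p docs/decisions
cat > docs/decisions/0001-queue-over-cron.md <<'EOF'
# 1. Use a work queue instead of cron for report generation

Status: accepted

## Context
Report generation was timing out under the five-minute cron window.

## Decision
Move generation onto the existing job queue; cron only enqueues work.

## Consequences
Reports can retry on failure. Operators must now watch queue depth.
EOF
ls docs/decisions
```

A record like this captures the “what was the problem, what tradeoffs did we make” questions above, and it travels with every clone of the repo.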
Toward the end of our joint tenure at our most recent job, my colleague Chris and I recorded videos of us doing code-centric deep dives through specific interesting, important, or especially difficult-to-understand aspects of the system. My hope is that these more conversational artifacts substitute in some way for having human mentors present to give people personal guided tours.
Oh yeah, and I continued writing tomes in my PR/commit messages, and being fussy about what actually lands into the mainline branch from my PRs.
A theory of theory-building
Here are Naur’s points, rephrased:
- The original authors of a system develop a theory of that system as they work, which comes from their understanding of the problem they’re solving and the decisions they made designing the solution.
- Programmers need to share that theory in order to make changes to what the system does successfully, without degrading its quality.
- People learn how complex systems work by being taught about them by other humans.
- Documentation isn’t enough, even if it a) exists, b) is truthful, and c) is discovered and read.
And here are my reactions:
- Do as much teaching of other programmers as possible while you’re in a job.
- Since turnover is a fact of life, and you won’t stay at any one job forever, do your best to leave artifacts behind that help your successors theory-build.
- Bargain with Naur’s position about documentation not being enough by investing time into writing documentation aimed at describing the theory of the program.
- Put that documentation as close to the source code as possible, because the source code is the only artifact guaranteed to survive.
- Write good commit messages.
The next people to come along will still have to put in the work, but you’ll have at least tried to make it easier on them.
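On the commit-message point, here is a sketch of the difference a message body makes, in a throwaway repo (the change and the message content are invented). The shape is: subject line, blank line, then the *why*, which is exactly the part a message like “scope fixes” omits:

```shell
# Demo in a temporary repo so the commands are self-contained.
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "session lookup fix" > session.txt && git add session.txt

# Subject, then a body explaining intent: -m twice inserts the blank line.
git commit -q -m "Treat null session scope as read-only" \
              -m "Rows created before the v2 migration carry a null scope.
Callers assume scope lookup never fails, so map null to read-only
instead of throwing. Only legacy rows can hit this path."

git log -1 --format=%B
```

A future maintainer reading that log entry learns what the author intended and which rows are affected, without having to rediscover it the hard way.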
I love this “affairs of the world” phrasing for “the thing the program is supposed to do”. It points right at the fact that the requirements come from external context.
I think that git commit -m is a culprit here. It encourages people to type very short messages. The GitHub concept of the “pull request” improves on this by giving people a big text box to type in, so they aren’t pushed to keep their messages to 50 characters tops. This isn’t enough, though, if teams don’t have a culture of encouraging each other to write good PR descriptions, or of squashing branches down into single commits with a good message.