
Software Design for Flexibility: a review

If you were to judge Hanson and Sussman’s recent book by its cover, you might think it to be the unofficial sequel to Structure and Interpretation of Computer Programs. It has similar cover art, uses Scheme for all its code examples, and treks through similar territory. It is never far from its spiritual predecessor, always trailing in its shadow.

The book aims, as the title suggests, to prevent you from writing code that traps you. The authors call this additive programming, an approach that advocates for augmenting existing functionality instead of changing it. They describe multiple techniques to achieve it: composition, generic procedures/multiple dispatch, domain-specific languages, pattern matching, evaluation/compilation, layering, and propagation.
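
To give a flavour of the simplest of these, here is a minimal sketch of composition in Scheme. The example is mine, not the book's:

(define (compose f g)
  (lambda args (f (apply g args))))

;; Neither sum nor squares needs to change to get the new behaviour.
(define (sum lst) (apply + lst))
(define (squares lst) (map (lambda (x) (* x x)) lst))

(define sum-of-squares (compose sum squares))

(sum-of-squares '(1 2 3)) ; => 14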

You’re probably familiar with most of these as they tend to show up in things software developers use every day. They are good and useful. Maybe you don’t know them all, or don’t know that you have seen them, in which case you would do well to at least have a passing familiarity with them.

The book does a stellar job of fawning over the techniques while not being convincing. It’s more proselytizing than persuasion or instruction. Unless you share the same vision for software that the authors put forth, the explanations will ring hollow because they do not match up with the realities developers are confronted with on a regular basis.


The core tenet of additive programming is to preserve existing functionality as much as possible. As the authors state early on, it’s about not breaking what already works.

One should not have to modify a working program. One should be able to add to it to implement new functionality or to adjust old functions for new requirements. (p. 2)

This encourages the program to be made of general building blocks whose functionality encompasses more than the initial requirements. The point, then, is that your software should be extensible.

Such an approach is laudable, but there is a popular and successful programming movement that specifically rejects this mindset. Extreme Programming’s principle of “you aren’t gonna need it” (Yagni) says that you should not be adding support for future features. Martin Fowler outlines the reasoning behind Yagni, and it’s a compelling case. Code that does more is more code you have to test. And it may not get used, so it’s more work for no gain. And, most simply, it’s just more code to write.

Taking Yagni too seriously can lead to some absurdities, of course (“I never use multiplication so why have it?”), but when done in moderation it makes sense. There are many stories of successful projects that started out as a minimalist implementation with no regard for future functionality that managed to turn into something more useful and robust. The mantra of “move fast and break things” may be destructive in the long term, but at the outset the underlying idea has merit: get something usable in the hands of users so you can implement the features they actually want or the ones that help them accomplish their goals.

Looking at the additive programming principle from a distance, Extreme Programming seems to satisfy the requirement of being additive despite being internally destructive. One of XP’s core tenets is a good test suite that describes the functionality of the system. Any change to the system (say, through refactoring) must result in the test suite passing. This allows for internal changes that preserve existing functionality, even if it means you rewrite what implements it.

It’s not just modern programming practices the authors seem to be ignorant of or indifferent to; it’s also concerns over the maxims that justify flexibility. In particular, Postel’s Law, also known as “the robustness principle,” gets cited on a few occasions as a key way of thinking in terms of flexible systems. The robustness principle is one of the core principles of network protocol design and is largely seen as an important factor in making the Internet what it is today.

It is also seen as the cause of many serious problems. The robustness principle is often quoted as, “be conservative in what you do, be liberal in what you accept from others.” Criticism of this idea has been around for some time. Does it mean you should accept malformed or incomplete input? Such an approach was proposed for RSS, with the rebuttal that it’s not difficult to generate legal XML, so why intentionally make processing it ambiguous and error-prone?

The security community has called for a rethink of Postel’s Law for over a decade because of all the problems it has caused. This is ironic, since the book champions robustness as a way to make things more secure. (Arguably true, depending on the context, but that context is not explored in the book.) Even the later formulation of the robustness principle (RFC 1122, section 1.2.2) warns against being too accepting.

As with any design decision there is a natural tension between approaches with respect to their applicability and costs. Although trite, it is true that practically everything in software is a trade-off. Do you place more emphasis on getting something out there that (hopefully) works, or do you spend some time to get an elegant base from which you can (hopefully) produce the features you envision with ease? Any easy answer to this question will be clear, simple, and wrong. Pushing for a flexible software design is certainly something to consider, but the book does little to nothing to make you aware of the considerations aside from aphorisms like “with great power comes even greater responsibility.”


Another prominent theme in the book is evolvability. Software being malleable is of great importance, the authors argue, because it’s that malleability that facilitates evolution. Consider, for example, a generic function that dispatches to a specialized one based on the types of its arguments, instead of a single function with an implementation of each case embedded within it. Here, adding an independent, specialized function is all that is required when something new comes along.
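
To make the idea concrete, here is a minimal sketch of such a dispatcher in Scheme. It is my example, built on a simple list of applicability predicates, not the book's (more elaborate) machinery:

(define (make-generic name)
  (let ((handlers '()))
    (define (add-handler! applicable? handler)
      (set! handlers (cons (cons applicable? handler) handlers)))
    (define (dispatch . args)
      ;; Try each (predicate . handler) pair in turn.
      (let loop ((hs handlers))
        (cond ((null? hs)
               (error "No applicable handler:" name args))
              ((apply (caar hs) args)
               (apply (cdar hs) args))
              (else (loop (cdr hs))))))
    (values dispatch add-handler!)))

(define-values (plus add-plus-handler!) (make-generic 'plus))

(add-plus-handler! (lambda (a b) (and (number? a) (number? b))) +)

;; Supporting a new case later is purely additive: nothing above changes.
(add-plus-handler! (lambda (a b) (and (string? a) (string? b)))
                   string-append)

(plus 1 2)       ; => 3
(plus "ab" "cd") ; => "abcd"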

This evolution and malleability is talked about as a quality of the code. The authors stress that most code today emphasizes speed and efficiency, and that this emphasis gets in the way of robustness. So, in their view, does correctness.

In computer science, we are taught that the “correctness” of software is paramount, and that correctness is to be achieved by establishing formal specification of components and systems of components and by providing proofs that the specifications of a combination of components are met by the specification of the components and the pattern by which they are combined. We assert that this discipline makes systems more brittle. In fact, to make truly robust systems we must discard such a tight discipline. (p. 19)

Aside from the fact that they present no compelling argument for this assertion, the claims about robustness and evolvability have some immediate problems to address.

For starters, what counts as a “truly robust system”? There is no sincere attempt to define it aside from many allusions to various biological entities. To my knowledge the authors offer up one software example: Emacs. They say this in a footnote on the first page.

For example, Emacs is an extensible editor that has evolved gracefully to adapt to changes in the computing environment and to changes in its users’ expectations. (p. 1)

Leaving aside the “graceful evolution” part of their description of Emacs, if adapting to changes in computing environments and user expectations amounts to an evolvable system, then there is a good argument to be made that an awful lot of software out there has done a good job of evolving. GCC has kept up with architectural changes in hardware. Linux, Windows, and macOS keep getting updated to provide new functionality. Do these count as systems? Do they count as robust?

One aspect of robustness, the authors argue, is exploratory behaviour. If the software generates alternatives that are tested for fitness it has a better chance of handling itself in the wake of change. Justification for this approach comes with the standard trope of throwing more hardware at the problem (they say, “what are all those cores for, anyway?” – which may hint at why they don’t seem to care deeply about efficiency) while at the same time missing the fact that this happens already at the macro level: the people making the software create new features and put them out there to be tried.

Granted, it is slightly unfair to take the evolutionary concepts described and apply them at a high level of implementation since the authors appear to be talking about systems that do not need to be “replaced with entirely new designs as the problem domain changes.” But then, did the problem domain really change for Emacs all that much over the past 30 years? Their descriptions make it sound like most systems become useless in the face of change and… that’s just not true. An awful lot of systems we use are based on old software that has transformed to meet the requirements of changing run time environments. It seems as though the authors have no interest in including such “adaptations” in their definition because they don’t match their ideal. On the other hand, it really feels like such software is evolving.

The discussion at the beginning of the book may shed some light on why concrete examples and definitions are lacking.

Assumptions made during the design and construction of a program may reduce the possible future extensions of the program. Instead of making such assumptions, we build our programs to make just-in-time decisions based on the environment that the program is running in. (p. 2)

Programs are meant to be autonomous agents that adapt without maintenance by developers. This is a grandiose and impractical idea at the moment, and one that requires a lot more research (and is probably, like a lot of what comes out of the tech world, just a bad idea that sounds cool). Is discounting development teams working on software really their intention here?

Reading the discussion about robustness and evolution in the book, one can’t help but get caught up in the enthusiasm while at the same time wondering what the authors actually mean.


Perhaps this is the wrong way to interpret what the authors are going for. All the talk of flexibility points to the notion of a user customizing their computing experience. Nearly any time extensibility comes up in software discussions, it leads to the idea of adjusting things to one’s desires or needs. Bending the machine to your will, as it were.

Instead of looking at it from a team development standpoint, look at it from an individual’s computing environment standpoint. You want software that optimizes for the creation of more software.

This is an idea/dream that has persisted for a long time, arguably since the beginning of software development. Look no further than the philosophy of the Free Software and Open Source movements, as well as all the current work on making programming environments more accessible.

There is no doubt about the success of the idea. Much of what runs the services we use today is based on software that anyone can access and reshape. But to say that everyone is a developer, or even wants to be one, is laying it on a bit thick. While there are plenty of software developers out there, they pale in comparison to the number of people who just don’t seem to care to program their machines. (Even software developers don’t want to develop software all the time.) Advocating for software design that optimizes for the much smaller group comes across as absurd, if not arrogant.

There is evidence the authors might think that software should be all about the tinkering developer, though, as demonstrated by the designs presented in the book. Consider their regular expression example in the second chapter. They say that Unix’s regular expressions are “bad” and proceed to design and implement a version that requires the user to write

(r:seq (r:quote "a") (r:dot) (r:quote "c"))

instead of

"a.c"

As someone who writes a lot of small regexps every day, I prefer the “bad” one. I expect most would. Even a tinkering developer.
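
For what it’s worth, the combinators themselves are easy enough to sketch. Something along these lines captures the approach, where each combinator compiles to a POSIX-style pattern string (the details are mine, not the book’s exact code):

(define (r:dot) ".")

;; Escape characters that are special in basic regular expressions.
(define (r:quote str)
  (apply string-append
         (map (lambda (char)
                (if (memv char '(#\. #\[ #\\ #\^ #\$ #\*))
                    (string #\\ char)
                    (string char)))
              (string->list str))))

;; Group the concatenation so it composes as a single unit.
(define (r:seq . exprs)
  (string-append "\\(" (apply string-append exprs) "\\)"))

(r:seq (r:quote "a") (r:dot) (r:quote "c")) ; => the pattern \(a.c\)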

What is notable about all the examples presented in the book is how they don’t look like applications and instead look like extensions of a programming language. The interface to everything they present is code. The overwhelming sense you get when reading them is that the code interface is the final product.

This stands out as silly in the context of how most people write software or even use computers. And it’s why the examples are not compelling. The authors write as if every application should be a programming language, while ignoring the evidence showing that most people don’t seem to want such a thing.

It’s also why the book is such a disappointment. It describes techniques that can be incredibly useful and then squanders them on examples that are not relatable. The bulk of applications out there are written by software developers that are then used by both developers and end-users. The interface is rarely code. Using the techniques effectively when the end product is not an API likely won’t be as obvious because of the downstream implications (and speaking from personal experience, it isn’t). When the interface is code and the consumer is a tinkering developer, you can afford more flexibility. That is not so much the case when the consumer of the software is only interested in specific goals, such as “show me the pictures my friends have posted recently”.

The disservice the book does to its reader is the lack of discussion around the trade-offs of flexibility while purporting to be about software design. If you’re a seasoned developer, you will likely find the plethora of code examples a case of form over function. In all honesty, you can skip most of them because the examples don’t provide much insight into how or why you should consider using the technique they are demonstrating. It is the case that with careful study you can see the techniques’ usefulness, but it’s somewhat disingenuous to describe the book as something that will help you “avoid programming yourself into a corner” when it’s clear that the corners of all the presented problems have been sussed out already.

This may seem harsh, but if you read the back cover, the preface, or the start of the book, you will come away with the impression that by reading the book you will know, well, how to avoid programming yourself into a corner. I doubt that to be true. You will learn good ways to help avoid difficult-to-escape corners, but the mere use of these techniques is not enough to avoid them in the first place. Adding flexibility to a system has its own costs.

Ultimately, the authors demonstrate so much love for these ideas that it blinds them to environments in which software gets written that do not match their own. There is an undercurrent of utopianism throughout the text that sees software as a self-managing entity, akin to a biological organism, that will simplify systems. If anything, biological systems have proven very difficult to manage, so perhaps we should tread carefully when taking inspiration from them in designing our own.

March 1, 2022
