I’m really enjoying answering your questions and comments, because it’s making me think about things I had clearly taken for granted! This week, a question I received via email…
In response to Thinking about APIs, reader Richard made this comment directly to me via email:
Have you thought about integration tests? That could solve some of the issues about the changing contracts.
Yes, integration tests — indeed all kinds of automated checking — definitely help to establish our software’s correctness. They operate within the first of the 4 rules of Simple Design: “passes all of the tests”. But the thesis of this series of articles is that we can do more.
Shigeo Shingo, one of the founding fathers of Lean Manufacturing, said something to the effect that:
Inspection to prevent defects is absolutely required of any process,
but inspection to find defects is waste.
Paraphrasing this into the domain of software development:
It is more efficient to prevent defects from entering our code than it is to find them later — even if “later” is only the time it takes to run our tests.
So I want to find ways to bring that feedback forward in time, so that I can “see” the coupling before I break something — and without having to execute anything. I don’t want to have to rely on test execution, or compilation, or static analysis, or any of the other tools provided within a modern IDE or CI/CD pipeline. I just want to be able to look at this one module right here, at this current level of abstraction, and “know” that it is coupled to something else “over there”.
I want to explore whether it is possible to write code that speaks clearly and immediately, regardless of what tools I have at my disposal. And even if you have different tools — less capable IDE, differently configured pipeline, etc — the code should still speak to you in exactly the same way that it speaks to me.
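To make the kind of coupling I mean concrete, here is a minimal, hypothetical sketch (all names — `make_order`, `invoice`, `Order` — are invented for illustration). In the first half, two conceptual modules share a dictionary key as their contract: looking at either function alone, nothing tells you the other exists, so a rename breaks things only when something executes. The second half names the contract itself, so the coupling is visible right where it is used:

```python
from dataclasses import dataclass

# --- conceptually, orders.py ---
def make_order(items):
    # The string key "total_cents" is an implicit contract:
    # nothing in THIS function says who else relies on it.
    return {"items": items, "total_cents": sum(items)}

# --- conceptually, billing.py ---
def invoice(order):
    # Coupled to a key chosen "over there" in make_order.
    # Rename it there and this fails only at run (or test) time.
    return f"Amount due: {order['total_cents']} cents"

# --- a more explicit alternative: name the contract ---
@dataclass
class Order:
    items: list
    total_cents: int

def make_order_explicit(items):
    return Order(items=items, total_cents=sum(items))

def invoice_explicit(order: Order):
    # The dependency on the Order contract is now stated here,
    # in this module, readable without running anything.
    return f"Amount due: {order.total_cents} cents"

print(invoice(make_order([100, 250])))
print(invoice_explicit(make_order_explicit([100, 250])))
```

This isn’t a claim that dataclasses “solve” coupling — the coupling is still there, as it must be — only that the second version lets me see it at this level of abstraction, without executing tests or leaning on an IDE.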
It may be that we will discover that there are “degrees of explicitness”, and that they are somehow related to the speed of our feedback loops. I sincerely hope that isn’t the case though. I have this naive expectation that code quality shouldn’t be related to tooling; that we can discover something “universal” — although I definitely expect those universal ideas to have widely differing expressions depending on our programming language / paradigm / ecosystem.
One final observation: The tests are also code. They are thus built using coupling, both internal — within the tests’ codebase — and with the system under test. Thus, far from “solving” our implicitness problem, I view tests as just another codebase whose coupling I need to make explicit. The code of the tests should still satisfy the 4 rules of simple design, just as our production code should.
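A small, hypothetical sketch of what I mean (the `Counter` class is invented for illustration). The first test couples itself to an internal representation of the system under test; the second couples itself only to the published behaviour:

```python
class Counter:
    def __init__(self):
        self._ticks = []          # internal representation detail

    def increment(self):
        self._ticks.append(1)

    def value(self):
        return len(self._ticks)

def test_implicitly_coupled():
    # Coupled to the internal list: change _ticks to a plain int
    # and this test breaks even though behaviour is unchanged.
    c = Counter()
    c.increment()
    assert c._ticks == [1]

def test_explicitly_coupled():
    # Coupled only to the published contract, value(), which is
    # the coupling we actually want — stated in terms of behaviour.
    c = Counter()
    c.increment()
    assert c.value() == 1

test_implicitly_coupled()
test_explicitly_coupled()
```

Both tests pass today; only one of them will survive a refactoring of the internals. That difference in coupling is something I want to be able to read directly from the test code.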
What do you think — am I on a mad Quixotic quest? Can we always write code that has no implicit coupling across encapsulation boundaries?
I think maybe it’s time we began to explore some more (a lot more) worked examples…
I think Integration Tests can sometimes be things that are written because they make people *feel* safe, but don't necessarily provide the guarantees of safety they appear to (you can't usefully write a test against someone severing a cable with a backhoe, after all). Our unease with ambiguity is perfectly understandable, but it often trips us up. One of the ways it does so is a tendency to shift questions that are predominantly human ones into the technical arena, rather than learning to live with them.
Maintenance of contracts is a human issue, and not something we ultimately have control over, especially if there are third parties involved (hard to avoid, and by now we should know that people can, and have, done crazy things). I think this points us in two directions. The first, for the things we do control, is to push leftwards: reducing implicit coupling, making code habitable, all the good things you have been talking about. In the other direction, I think we have to learn to accept that the only way we can know something is actually working is to observe it doing so (i.e. in production), and try to build something that can tell us when it isn't.
My first contact with that Shigeo Shingo quote was in the context of manufacturing. The example there is often that of a separate 'inspection' team hunting defects before material entered subsequent work steps: an inspection removed both from where the defect is created and from where it has its impact.
So initially I was confused by your interpretation in the context of code. I would not have put the system boundary of the defect creation at the writing, but at the deployment of code. A great provocation, in the sense of 'thought-provoking'. Prior to your post I would not have gone beyond shrinking the boundary from 'deployed' to 'committed'.
You've previously discussed with me how the 'refactor' step of 'red-green-refactor' is essentially an aesthetic one. The 'red' is driven by the product, the 'green' by the 'red' but the 'refactor' is driven by these harder to grasp things in our heads.
I am hopeful that we can hone these senses regarding coupling. Since you've worked with us, our team has definitely started noticing and addressing coupling a lot more. We've started to listen for symptoms such as shotgun-surgery being needed to change something.
But it is often still only after broken tests inform us of coupling that we see it. So I'm looking forward to seeing which language, perspectives and thought patterns you can unearth to shorten the feedback loop.