Whenever we want to solve a problem or build something useful, we have decisions to make about the tools we choose. Choosing the right tools requires a good understanding of the problems. Often though, there is a tendency to jump to familiar tools regardless of the problem. “I love the balance of this hammer. It just works well.” The hammer is probably awesome and it may have been used successfully to drive screws for years, but what if we stood back and reflected on why we are choosing a hammer to drive screws?
In 2007, Dave Snowden and Mary Boone published “A Leader’s Framework for Decision Making” in the Harvard Business Review. In that article, they introduced the Cynefin framework (pronounced ku-nev-in). Cynefin is a way to categorize the kind of problem we have so we can apply the right tools. Briefly, there are four kinds of problems: obvious problems, where best practices apply; complicated problems, where expertise and good practices apply; complex problems, where cause and effect only become clear in hindsight and we must probe and experiment; and chaotic problems, where we must act first to restore order.
You start by approaching each problem, categorizing it, and then applying the right tools to solve it. For example, tying your shoes seems like an obvious problem with a single best practice. However, when we review the many contexts for tying shoes, everything from dress shoes to hiking boots, it becomes clear that the problem is contextual: it is complicated, and we need to use good practices rather than one best practice.
In software delivery, applying the wrong mental model to the problem space is very common and negatively impacts outcomes throughout the value stream. A clear example is the quality process: using the wrong mental model for software quality causes us to verify the wrong things in the wrong way.
Is Development Complicated?
It is common to categorize software development as “complicated” because we write code that implements the requirements. We can then verify quality by testing that the code matches the requirements. Using this mental model, the rational thing to do is to create a QA department to verify that the development teams are implementing the specifications correctly. We want to deliver relatively small batches to the QA team, but we do not want to deliver to the end-user until QA has ensured quality. This is using the tools from assembly line manufacturing where quality can be easily verified.
Let’s build a car. We establish an assembly line and begin construction. As individual parts are built, they are tested for adherence to spec using test fixtures and then assembled into larger units where other test fixtures run additional tests on the integrated components. Finally, the car is driven to make sure it all works together. So far, we have a model that resembles unit, integration, and end-to-end testing patterns. However, we’ve made a huge mistake. No one wants to buy our car.
We’ve built the wrong thing using the best quality practices we know, and yet it is still poor quality because it doesn’t fit the need.
This happens all of the time in software development because we over-simplify the problem at hand. We are not assembling and delivering code. It only appears that way on the surface. In Cynefin, if you over-simplify the problem, you will fall into the chaotic problem space and will need to react to recover. This feels very familiar.
Development is Complex
What is the right mental model? Using our car example, we skipped right past the most critical part: what should we build, and how should we build it? The analogy of development as an assembly line is almost entirely wrong. If we have automated build and deploy systems, those are our assembly lines. They have automated the complicated good practices and obvious best practices. However, everything that happens before we submit code to our assembly line is complex and emergent. The correct analogy is the design studio.
We are not assembling a car from specs. We are developing a new car: getting feedback as we go, iterating toward the specifications we want to build, and designing the most efficient way to build it. We are making decisions about components. Do we need to design new brakes? Can we use off-the-shelf Brembos? What interface changes do we need to make to use those brakes on our new prototype? And so on.
In manufacturing, it is very expensive to change these decisions after we construct our assembly line. So design precedes construction, we lock in as many decisions as we can after extensive research, and we hope we’ve developed a car people want.
In software, the economic forces are entirely different. The most expensive part isn’t building and running the assembly line. In fact, creating the build and deploy automation is the cheapest thing we do. The cost comes from the R&D work of deciding how to solve the problem and writing code to see if it solves it. To mitigate that cost, we need to design a series of feedback loops to answer several quality questions: is it stable, performant, secure, and fit for purpose?
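Those feedback loops can be sketched as ordered gates in a delivery pipeline. The check names and thresholds below are illustrative assumptions, not a prescribed toolchain; the point is that three of the four quality questions can be answered automatically before delivery, while the fourth cannot.

```python
# Illustrative sketch: three of the four quality questions as pre-delivery gates.
# Check names, fields, and thresholds are hypothetical assumptions.

def check_stability(change):
    """Is it stable? e.g., the automated test suites pass."""
    return change["tests_passed"]

def check_performance(change):
    """Is it performant? e.g., p95 latency stays within a budget (ms)."""
    return change["p95_latency_ms"] <= 200

def check_security(change):
    """Is it secure? e.g., no known vulnerabilities in the dependencies."""
    return change["open_vulnerabilities"] == 0

# "Fit for purpose" is deliberately absent from this list: it can only be
# answered by delivering to users and observing, not by a pre-delivery gate.
GATES = [check_stability, check_performance, check_security]

def ready_to_deliver(change):
    """Run every gate; any failure blocks delivery of this small batch."""
    return all(gate(change) for gate in GATES)

change = {"tests_passed": True, "p95_latency_ms": 140, "open_vulnerabilities": 0}
print(ready_to_deliver(change))  # True: safe to deliver and start the user feedback loop
```

The ordering is a design choice: cheap, fast checks first, so a small change that fails fails quickly and cheaply.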
We can build a quality process that verifies stability, performance, and security before we deliver anything. However, “fit for purpose” is subjective. For that, we need a quality process that continuously verifies fitness with users, and we need to deliver the smallest changes we can verify to reduce the cost of being wrong. We must understand that we are not assembling identical cars; every single delivery from our assembly line is a prototype that differs from the one before. We cannot use static test fixtures built by another team. In fact, that is destructive to our quality process, because waiting for someone else to build the fixtures means we build bigger things to test and drive up the cost of being wrong. Our test fixtures can only verify that we are building what we think is needed, so we must constantly adjust them as our understanding changes. We are prototyping fixtures and building things to fit those fixtures so that, when something proves fit for purpose, we can replicate the success.
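One common way to verify “fit for purpose” with real users while keeping the cost of being wrong small is to expose each small change to a fraction of users and let observed usage, rather than a static fixture, decide its fate. The flag name, rollout fraction, and success threshold below are hypothetical assumptions, not a recommendation.

```python
# Hypothetical sketch: deliver a small change behind a flag to a few users,
# then answer "fit for purpose" from what those users actually do.
import random

FLAG_ROLLOUT = {"new_checkout": 0.05}  # assumption: expose to ~5% of users first

def is_enabled(flag, user_id, rollout=FLAG_ROLLOUT):
    """Stable per-user assignment: the same user always gets the same answer."""
    random.seed(f"{flag}:{user_id}")
    return random.random() < rollout.get(flag, 0.0)

def fit_for_purpose(completed, abandoned):
    """The subjective question made observable: did users finish the flow?

    The 80% completion threshold is an assumption the team adjusts as
    understanding changes, just as the fixtures themselves must be adjusted.
    """
    total = completed + abandoned
    return total > 0 and completed / total >= 0.8

print(fit_for_purpose(completed=92, abandoned=8))  # True: keep and widen the rollout
```

If the signal says the change is not fit for purpose, only a small, cheap prototype has to be reworked, which is the whole point of shrinking the batch.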
Focusing on continuous integration and delivery can make us more efficient at delivering small batches of work. Designing our quality process to optimize for user feedback from ever smaller batches will make us more effective at delivering the right thing. To do this well, we need to stop over-simplifying what we do. Software development is complex, and if we apply the wrong tools to this problem space…
we will fall into chaos.