Volatility: Everything You Learned About Writing Software Is Wrong

Change is inevitable. It can also be painful. If you fail to take the potential for change seriously early enough, you are doomed to rewrite the software. If you dive too deep into the details, you will run over your budget and fail to deliver a product at all. What can you do about this? The answer might surprise you, because it’s probably contrary to what you learned about programming.

What is Volatility?

Volatility is defined by Google as, “liability to change rapidly and unpredictably, especially for the worse.” In his paper on Stability, Robert C. Martin (Uncle Bob) identifies several factors that make a piece of software prone to change. I categorize these into three forces that constantly interact with each other.

  • Extrinsic Forces: Changes from outside the system may cause a need for change in the system. These are the most difficult changes to identify. However, expect these changes to occur as a result of new business ventures, new customers, and new requirements.
  • Intrinsic Forces: The internal contents of a software component may cause a need for change to that component. Maybe the force behind it was a bug, the need to refactor to clean code, or the need to improve performance.
  • Transitive Forces: If a component changes, any component that depends upon it may need to change as well. A single change to a low-level dependency can therefore cascade through every component that sits above it in the dependency chain.

Failure to account for each of these forces during your design phase can cause a ripple effect of changes to propagate throughout your entire system. It may not rip a hole in the space-time continuum, but it could rip one in your client’s pocket.

The Wrong Way to Decompose a Software System: Functional Decomposition

Programmers are taught in school to functionally decompose software requirements. This means that they start with a high-level requirement as a single function (or module/class) and then break this function into multiple smaller functions (modules/classes). They repeat this process for each of the smaller functions until they have solved the problem.

The result of functional decomposition is not something that can evolve well over time. As Juval Lowy discusses in his webcast on Software Architecture Decomposition, this practice can lead to components that are bloated, difficult to reuse, and nearly impossible to change.

Let’s port Uncle Bob’s Copy program to C# and use it as a trivial demonstration of decomposition gone bad. At a high level, the Copy program must do the following:

  1. Read a key from the keyboard.
  2. If the key is not an End of File marker, write the character to the printer and go back to step 1.

When we break this program down, we arrive with the following code:

static void Copy()
{
    int c;
    // ReadKeyboard() returns 0 when it reaches the end-of-file marker.
    while ((c = ReadKeyboard()) != 0)
        WritePrinter(c);
}

The Copy() method is the primary concern of the application. Its two concerns were broken down into smaller functions. The implementations of those two functions are not important to this exercise.

Even though the above code is clean, Uncle Bob admits that this implementation has serious implications. The Copy() method has a hard dependency on ReadKeyboard() and WritePrinter(). What do we do if we want to read from a different source or write to a different sink? In order to accomplish this, we would have to modify the Copy() method. By doing so, we violate the Open/Closed Principle, which says that software entities should be open for extension, but closed for modification.
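To see the problem concretely, here is a runnable sketch of what "supporting a new source by modification" tends to look like. The names and stubs below are illustrative, not part of Uncle Bob's program: a canned string stands in for the keyboard and a StringBuilder for the printer, so the example runs without real devices.

```csharp
using System;
using System.Text;

// Stub "devices" so the sketch is runnable without real hardware.
var keyboardInput = "hi";
var keyboardPos = 0;
var printer = new StringBuilder();

int ReadKeyboard() => keyboardPos < keyboardInput.Length ? keyboardInput[keyboardPos++] : 0;
int ReadFile() => 0;                          // a hypothetical second source, stubbed out
void WritePrinter(int c) => printer.Append((char)c);

// Every additional source or sink forces another branch inside Copy(),
// so the method must be reopened and retested each time -- exactly the
// Open/Closed violation described above.
void Copy(bool fromKeyboard)
{
    int c;
    while ((c = fromKeyboard ? ReadKeyboard() : ReadFile()) != 0)
        WritePrinter(c);
}

Copy(fromKeyboard: true);
Console.WriteLine(printer.ToString()); // prints "hi"
```

Notice that the flag (or an enum, or a switch statement) lives inside Copy() itself, so every new requirement lands in the one method we wanted to leave alone.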

A Better Way to Decompose: Volatile Decomposition

The design has crumbled under the weight of the new requirement to read and write to alternate sources and sinks. A more appropriate design is to create an abstraction that will shelter the Copy() method from this volatility. By creating an abstraction for reading and writing, we can invert the dependency on the source and sink and then inject these abstractions into the Copy() method. Once more, we’ll port Uncle Bob’s implementation to C#:

static void Copy(IReader reader, IWriter writer)
{
    int c;
    while ((c = reader.Read()) != 0)
        writer.Write(c);
}

At this point, Copy() has a dependency on two abstractions: IReader and IWriter. Since these are interfaces, they are unlikely to change. The consumer of the Copy() method can now inject any concrete implementation of IReader and IWriter. As long as the calling protocols are adhered to, there should be no reason to modify the Copy() method.
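The sketch below shows one way this injection might look end to end. The concrete types (CannedReader, MemoryWriter) are hypothetical stand-ins for real devices, chosen so the example is self-contained and runnable.

```csharp
using System;
using System.Text;

// The abstractions that shelter Copy() from volatility.
interface IReader { int Read(); }
interface IWriter { void Write(int c); }

// A string-backed reader standing in for the keyboard; returns 0 at end of input.
class CannedReader : IReader
{
    private readonly string source;
    private int pos;
    public CannedReader(string source) { this.source = source; }
    public int Read() => pos < source.Length ? source[pos++] : 0;
}

// A StringBuilder-backed writer standing in for the printer.
class MemoryWriter : IWriter
{
    public StringBuilder Output { get; } = new StringBuilder();
    public void Write(int c) => Output.Append((char)c);
}

// Copy() never changes, no matter which reader or writer is supplied.
void Copy(IReader reader, IWriter writer)
{
    int c;
    while ((c = reader.Read()) != 0)
        writer.Write(c);
}

var sink = new MemoryWriter();
Copy(new CannedReader("volatility"), sink);
Console.WriteLine(sink.Output.ToString()); // prints "volatility"
```

Swapping in a file-backed reader or a network-backed writer is now a matter of writing a new class, not editing Copy().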

A Word of Warning

Treat volatile decomposition with care. If you look hard enough, you will find a potential for change in everything. You must search for changes that are probable, not simply possible. In a future post, we’ll investigate some techniques on discovering and decomposing requirements based upon volatility.


What are your experiences with maintaining functionally decomposed software applications? Do you find it easy to introduce new features into these systems?
