
“I would not give a fig for the simplicity this side of complexity, but I would give my life for the simplicity on the other side of complexity.”
— Oliver Wendell Holmes
We seem to appreciate simple solutions to complex problems, and the above quote resonates with many people. But we’re also suspicious of solutions that are too simple for the situation. We’re going to explore how we typically deal with simplicity and complexity, and begin a search for how to find the “other side.”
Let me simplify (sorry) the language here a bit. I’m going to call “the simplicity on this side of complexity” “simplistic.” The term suggests that someone has oversimplified the description of a situation to the point where it is unhelpful or, often, harmful.
I’m going to call “the simplicity on the far side of complexity” “elegance.” Elegance is often associated with luxury or style, but we’re going to lean into the alternate usage, the one from science, math, and engineering, where a solution is both highly effective and simple.
And of course, I’m going to couch this all in the context of enterprise information systems.
The Default Progression
It seems like most enterprise information systems go through a pretty predictable progression, from simplistic, to complex, to (very occasionally) elegant.
There is also a second order effect, which may be the most profound. When solutions get too complex, practitioners often subdivide the problem (in an effort to make it locally simpler, which they do) while making the global problem even more complex.
We’re going to explore a lot of this at a simple (simplistic) level, where we can all relate to it, and then see if, via analogy, we can scale up to things that are hard for us to comprehend (the complexity of enterprise information systems).
Our Toy Example
I think almost everyone has done this: built a simple Excel spreadsheet and then kept adding on to it until it breaks under its own weight.
The brilliance of spreadsheets was the ease of getting started. This was baked in from the beginning. VisiCalc, and later Lotus 1-2-3, had all the characteristics I’m talking about here, but Excel and Google Sheets are most people’s reference points.
You can build a simple spreadsheet that does some interesting analytics in minutes. There is a very good chance that the first version has constants hard-coded in the formulas. It sums up specific ranges of cells (such that if you add a row, it may or may not get included). We’ve all done this. Many of these are just one-off analytics to solve a problem at hand. No harm, no foul. Just don’t pretend this is a “solution” to a broader family of problems. This is the exact point where most “simplistic” solutions get their comeuppance.
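The failure mode translates directly out of spreadsheet formulas into any language. Here is a minimal sketch (the tax rate and sales figures are invented for illustration): the first cut bakes the constants and the “range” into the formula itself, while the second names the constant and takes whatever data arrives, so adding a row or changing the rate touches one place instead of many.

```python
# First cut: constants and ranges baked into the "formula".
# Fine for a one-off; brittle the moment a row is added or the
# mystery constant changes.
def q1_total_v1():
    sales = [1200, 950, 1430]   # implicitly "rows 2-4"
    return sum(sales) * 1.07    # what is 1.07? tax? markup? nobody knows

# Second cut: the constant is named and the range is whatever data
# arrives, so new rows are included automatically.
TAX_RATE = 0.07  # illustrative value, not a real rate

def quarter_total(sales):
    return sum(sales) * (1 + TAX_RATE)

print(q1_total_v1())
print(quarter_total([1200, 950, 1430]))  # same answer, safer to change
```

The two versions compute the same number today; they differ entirely in what the next change request costs.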
Right now, readers are thinking, “Yeah, but how bad could it get?” As it turns out, amazingly bad. First, many spreadsheets start out innocuous, living on one analyst’s desk, and through a process of Darwinian evolution, those that prove useful get promoted to bigger and bigger roles in the enterprise. Studies have found that 88% of enterprise spreadsheets contain errors, and in 5–30% of cases the errors were “serious.”1
We worked with an architectural firm (building architecture, not software architecture) in the early 1990s. One of them shared a copy of a complex problem they were working on: a printout of the formula from a single cell in an Excel spreadsheet. The formula filled the whole page, with hundreds of nested conditionals. More instructive was the reason it had been printed out: the analyst was trying to make a change to the formula and had penciled in a few notes, and a few more parentheses, somewhere in the middle of it.
Early in my career, I wrote a payroll system in assembly language. I suspect that payroll system was less complex than this single Excel formula. Every change to a system (or a spreadsheet) like this is orders of magnitude more difficult and riskier than it needs to be.
This is a small, narrow example, meant to get our heads around the bigger issue. The bigger issue is the complexity we find in the constellation of systems that make up the typical enterprise information system ecosystem.
Where Does “Simplistic” Come From?
Let’s start with where simplistic comes from. You have a simplistic starting point if you begin building a system knowing only a small subset of the requirements you will ultimately entertain. We get simplistic starting points from superficial understanding.
We have to start somewhere, but often a small bit of forethought can forestall a great deal of future rework.
An even more dangerous source of “simplistic starting points” comes from categorical thinking about systems. Management often thinks: “We have a problem.” They then work out what category of problem it is, say “inventory” and then buy an inventory system, and turn it over to someone to implement it. Depending on what the original problem was, the new inventory system is often orders of magnitude more complex than the problem it was meant to solve and involves major conversion and integration projects to implement.
There seems to be no shortage of overly simplistic starting points for systems. Current generations of GenAI will ensure that we can get these simplistic starting points implemented even more rapidly.
Why Does It Always Become Complex?
Systems become complex because change requests arrive one at a time. For each request, the implementor has a choice: Implement it in the current structure, even though it is much harder than it could be if the starting point were different; or burn it all down and start over.
“Burn it all down” almost never gets the nod. Yet, if all the changes arrived at the same time, the answer would be more obvious.
Think about it: if you receive a change request and you recognize that it will cost ten times as much to implement it in the current system as in a hypothetical better one, you may pause and think about it a bit, but you’re still not likely to act on it. You may be staring at a change that will cost $100,000, and you know, from experience with other systems, that a similar change to a more elegant system might cost $10,000 or even less (two orders of magnitude of difference is not unheard of here). But if moving from the current system to the more elegant one is going to cost millions of dollars, and add delay and risk, it will not get done. The sponsors compare the $100,000 change request to the multimillion-dollar upgrade and take the incremental solution almost every time.
If dozens of these requests arrived simultaneously, the case might make itself. But it rarely happens that way. The changes trickle in, one at a time, and the systems continue to get more complex, attempting to incorporate the new reality.
How To Recognize Complexity
To paraphrase Ernest Hemingway: “You need to develop a shockproof complexity detector.”2 Mine is perhaps over-tuned. I can’t stand it when I check into an Airbnb and find three entertainment remotes, each with 40-plus buttons. I typically spend more time trying to turn the TV on than watching anything.
Complexity lives on because people accept it as the status quo. It doesn’t occur to them that things could be much simpler — especially just after they have mastered the complexity. When you master the complexity you have a skill, a superpower; you can do something others can’t (such as adding a new event type to the React UI, or changing the channel on the remote). This advantage would be wiped clean if you were able to simplify the ecosystem. So, you don’t.
This complexity embrace continues at many levels. You might be the master of a single bit of complex code. I once met someone who was a world expert in SAP’s product classification subsystem. Enterprise architects are often revered for knowing how all the pieces fit together, which is a whole ‘nother level of complexity.
The skill of mastering complexity is only a prerequisite to the skill of overcoming it. To overcome it, you first must also recognize unnecessary complexity. The best way to recognize unnecessary complexity is to know what could be.
Knowing what could be often involves experience with different systems, sometimes even different types of systems. It involves making and then applying analogies. Henry Ford was inspired by the meat processing “disassembly line” to create the automotive production line for the Model T. In the process, he took 90% of the direct labor out of the assembly of cars. He would never have gotten there by watching his skilled craftsmen assembling the now forgotten Model N and Model S.
In the case of enterprise systems, we need to know, from other environments, just what is possible in terms of complexity reduction. We need a mental library of approaches that have worked elsewhere, to serve as reference points for where our own systems have gotten too complex.
One Way to Deal with Complexity: Refactoring
It seems to me that in the early days of the agile movement, a lot more time and attention went into refactoring code. Refactoring is the discipline of restructuring code, without changing its behavior, after it has accreted a pile of hacks that have collectively made change more difficult than it needs to be.
Refactoring is exactly the discipline that I’m talking about here. The only reason I’m not singing its praises from the rooftops is that it seems to be used a lot more sparingly these days, and more importantly, it’s a more localized improvement. The big changes we need in enterprises are at the systems of systems level, where refactoring isn’t an option.
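As a toy illustration of the kind of local win refactoring delivers (the tiers, years, and rates here are all invented): a pile of nested conditionals, of the sort that filled the architect’s one-cell formula, collapses into a small data table plus a two-line lookup, without changing any answer.

```python
# Before: logic buried in nested conditionals.  Each new tier means
# another branch, and another chance to misplace a parenthesis.
def discount_v1(tier, years):
    if tier == "gold":
        if years >= 5:
            return 0.20
        else:
            return 0.15
    elif tier == "silver":
        if years >= 5:
            return 0.10
        else:
            return 0.07
    else:
        return 0.0

# After: the same rules expressed as data.  Adding a tier is a table
# edit, not a code change, and the table can be reviewed at a glance.
DISCOUNTS = {
    ("gold", True): 0.20, ("gold", False): 0.15,
    ("silver", True): 0.10, ("silver", False): 0.07,
}

def discount_v2(tier, years):
    return DISCOUNTS.get((tier, years >= 5), 0.0)

# Behavior-preserving: both versions agree on every case we try.
for args in [("gold", 6), ("gold", 1), ("silver", 2), ("bronze", 9)]:
    assert discount_v1(*args) == discount_v2(*args)
```

That behavior-preserving property is the whole point: the tests that passed before the refactoring still pass after it, which is what makes the restructuring safe to do incrementally.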
Another Way to Deal With Complexity: Complete Requirements
Earlier I said that one reason complexity persists is that requirements show up as a dribble, over time. One solution, which occasionally works, is to get a complete set of the requirements up front. The reason it works sometimes is exactly what we implied earlier: “If I’d known then what I know now, I’d have done it differently.”
One reason it often doesn’t work is that people tend to describe their requirements in terms of the existing system. The requirements end up baking in some of the unnecessary complexity and flaws of the very system being replaced. Most subject matter experts know their domain primarily through the systems they use to deal with it, rather than through the domain itself. As a result, they tend to express what they know in terms and structures they learned from systems that were often arbitrarily and unnecessarily complex.
Another reason it often doesn’t work is that users tend to see these complete-requirements projects as a last chance to get everything on their wish list into the new system. This often adds items of little, or even negative, value, which runs up the complexity of the resulting system.
The final reason, and this is more about why complete requirements are always incomplete, is that many of a system’s requirements don’t even exist at the time the system is built. They are discovered through use, or they are imposed from outside, as regulations or as demands from customers or suppliers that cannot be refused.
I don’t want to throw the requirements baby out with the bathwater. I am a big fan of requirements; I’m just advocating thinking through the themes that are likely to change the shape of the system, rather than trying to be exhaustive on the details.
Attacking the Root Cause
To really get at the root cause, I think we need to see what is generating the complexity. A lot of complexity lives in code. Code is brittle; tiny syntactic changes can break systems. The logic of a program can get complex, and there are metrics, such as cyclomatic complexity, that measure just how complex.
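Cyclomatic complexity is, roughly, one plus the number of decision points in a piece of code: every branch, loop, or boolean operator adds an independent path that someone has to understand and test. The sketch below estimates it by walking a Python syntax tree; real tools are more careful about the exact counting rules, so treat this as an illustration of the idea, not a production metric.

```python
import ast

# Node types that add a decision point (a rough, illustrative set).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic(source):
    """Crude cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

straight_line = "def f(x):\n    return x + 1\n"
branchy = (
    "def g(x):\n"
    "    if x > 0 and x < 10:\n"
    "        return 'small'\n"
    "    elif x >= 10:\n"
    "        return 'big'\n"
    "    return 'negative'\n"
)

print(cyclomatic(straight_line))  # 1: a single path through the code
print(cyclomatic(branchy))        # 4: two ifs plus a boolean operator
```

The number itself matters less than the trend: a one-page formula with hundreds of nested conditionals, like the architect’s spreadsheet cell, would score in the hundreds, and every point is a path someone must reason about when making a change.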
Modern systems contain millions of lines of code: some of it written by the programmers solving the problem at hand, much of it inherited from software libraries, and more and more of it now generated by agentic AI. Refactoring usually reduces the total amount of code and makes the code that remains more understandable, and therefore more maintainable.
If we look a bit deeper, in enterprise applications the need for, and amount of, code is primarily driven by the complexity of the various database schemas. Adding a single attribute, or column, to a relational database can often require thousands of new lines of code: code to access the new element, code to shuffle it in and out of various APIs, to move it into and out of the DOM, to validate it in various places, plus new code to test the impact the new field might have on all the other code, APIs, extracts, etc.
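A miniature of that ripple, with all names invented for illustration: one new `discount_code` column on a hypothetical `Order` touches the model, the validation layer, and the API serialization, and each of those touches needs its own tests. In a real enterprise stack the same field would also ripple through migrations, UI, integrations, and extracts.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class Order:                    # model layer: new column, new field
    order_id: int
    total: float
    discount_code: Optional[str] = None   # <- the "one new attribute"

def validate(order):            # validation layer: new rule for the field
    if order.discount_code is not None and len(order.discount_code) != 8:
        raise ValueError("discount_code must be 8 characters")

def to_api(order):              # serialization layer: new key in the payload
    return asdict(order)        # ...which every consumer must now handle

o = Order(order_id=1, total=99.5, discount_code="SPRING24")
validate(o)
print(to_api(o))  # the new key appears in every downstream payload
```

Three small functions here; in a production landscape the same single attribute fans out across dozens of such layers, which is why schema complexity, not code per se, is the multiplier.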
Cutting the complexity of a schema in half will generally mean half as much code is needed, which means half the complexity, half the bugs, half the testing, half the risk. And these days, “half” is table stakes.
The fastest way to increase the data footprint of your enterprise is to add another application system. Implementing a system brings with it all the complexity of its schema, user interface, workflow, and especially all the systems integration that will be needed to fit it in with the rest of the landscape.
Turns out, that is what enterprises have been doing for the last several decades, with the result that most of the firms we look at and work with are supporting data landscapes that are three orders of magnitude more complex than they need to be. Not three times as complex, but three orders of magnitude: 1,000 times as complex as an elegant replacement would be (and note, not 1,000%, which is only 10 times). We typically find a million attributes under management where a thousand would do.
Does It Even Matter?
Biological life is complex. GenAI is complex. Maybe we should just accept complexity as the way things are and move on.
I’m going to suggest what I suspect is a minority opinion here. Maybe elegance is our last best hope for getting a grip on what’s happening around us and to us. Maybe with an elegance lens, we have a hope of making informed interventions.
Forgoing this seems like abdicating to things we can’t possibly understand. I stand for understanding, and therefore for having some agency over as much of the world as possible.
This includes enterprise information systems, which at the moment are pretty much inscrutable. But I think there is hope.
Summary
There is a predictable progression in the complexity of person-made artifacts. That progression is that they usually start off simple. Too simple. Simplistic. As the artifact adapts to the changes being thrust onto it from daily use, it inevitably takes on more structure to accommodate the variety of the inputs being added. Depending on the original structure, these accommodations may make the artifact more complex. Often, vastly more complex.
At some point, the complexity overwhelms the artifact. Then, one of three things happens. One, people live with it and give up trying to change it. This is the classic legacy system. The second possibility is that the artifact is thrown out (possibly replaced with another, which is often at least as complex, making this only a partial victory). The third possibility is to radically restructure the artifact as if the requirements had all arrived at the same time, rather than in the order they historically arrived. When it works, this is the elegant solution. Because it incorporates a wide variety of inputs, it often accommodates future change more gracefully.
This simplistic-to-complex phase change is even more pronounced at the systems-of-systems level (the enterprise level). Systems of systems are often exponentially more complex than individual systems. Attempting to deal with local complexity by subdividing the problem often provides some local complexity reduction at the expense of increasing the overall complexity.
The way out is to study the root cause and deal with what is generating the complexity. And that is the ever-increasing complexity of all the application schemas that must be managed. I think I forgot to mention: the solution is the data-centric approach.
(No tokens were harmed in the writing of the article.)
1 Five Costs and Perils of Spreadsheets for Business Analytics
2 Often misquoted; the best original source I could find was the Paris Review — theparisreview.org/interviews/4825/the-art-of-fiction-no-21-ernest-hemingway — or, without the paywall, bradybouchard.ca/republished/hemingway.html — where the exact quote is “The most essential gift for a good writer is a built-in, shockproof, shit detector.”
