This column started off with a bang. Last time, I chose the title “Is Agile killing EA?” and while there wasn’t a lot of discussion here, a gentleman named Jason Bloomberg posted a link to it in a LinkedIn group, where it generated a lively debate.
(Note: last time I said I was going to write on “EA as waste reducer.” I need to stop making commitments like that, as I write best when I just go with whatever is at the front of my mind.)
I’m doubling down in this column. While I believe that the *practice* of enterprise architecture will survive (for reasons I gave in my last column), there is no question that organizational forms are changing. In particular, the so-called “center of excellence” seems to be – based on multiple reports I am receiving – under serious pressure, if not hastening towards its end.
What is a center of excellence? Two of my first roles were in centers of excellence, with widely varying scopes. The first was focused on ERP software and was essentially an entire delivery center for a large consultancy, capable of deploying large, multi-skilled teams focused on the end-to-end delivery of a major ERP system. I’m not talking about this kind of COE.
The second role was in an architectural COE focused on data architecture, data modeling and data governance. This functionally focused, shared-service COE was in existence for at least 15 years, but was disbanded this year according to reports I am hearing.
Ultimately, it doesn’t surprise me that this happened. There was always friction around the governance that this team exercised over a critical resource (access to persistent data storage), and we found ourselves on the critical path more times than I care to admit.
I have seen internal centers of excellence (sometimes front-ended by “service catalogs”) for:
- Project management
- Business analysis/architecture
- Quality assurance
- Enterprise architecture
- Data management
- Enterprise integration
- Infrastructure engineering
While some may have more staying power than others, I am not optimistic about the long-term survival of any of them. The question is, why is this happening?
The idea of organizing work by functional skill is an old one. In a factory, production would flow through a sequence of specialized work centers, and to some extent this persists today for good reasons.
It’s not surprising that the first few decades of industrial IT sought to emulate this model, with systems development “work” being routed through analogous centers. Project Management COEs defined the method of flow and enforced it through their ownership of the project management resource. Project managers orchestrated the resulting work and applied pressure to drive it through the functional silos. Analogous to a manufacturing production model, a typical IT flow might pass through:
- Requirements (perhaps COE-based)
- Analysis & design (some parts COE-based)
- Testing (some COE-based)
Many models existed. Sometimes the COE was primarily a home for functional resources who would be fractionally allocated to project teams; in other cases, a ticket-driven shared-services model would be used. In either case, dysfunctional friction and handoffs could easily emerge, and important, valuable work with a high cost of delay could be blocked.
And whether you owned a machine-tool center in an industrial facility or a QA center of excellence in an IT shop, you sought to optimize your function locally. Often this was done by imposing burdens on the upstream work center through increasingly stringent standards for what would be accepted. Case after case, local optimization led to sub-optimal overall flow and management by “who screams loudest.”
But even as this model (accompanied by a supporting “waterfall” methodology) was being instituted for IT systems development worldwide, the basic premises were under fierce questioning. The initial stirrings started years ago, in the origins of Lean thinking and works like Eli Goldratt’s *The Goal* (the inspiration for *The Phoenix Project*).
In 1991, right around the time many functional centers of excellence were being formed, Preston Smith and Don Reinertsen published *Developing Products in Half the Time*. In that work, Reinertsen continued his ongoing exploration into the challenge of product development, making the following observations:
“In general, we find most companies inclined to organize … around functional skills. The mechanical engineers are grouped together, as are the electrical engineers and the software engineers. … Our experience suggests …[i]f we organize around physical or logical subsystems … we tend to get much faster development.”
“One of the biggest problems we see in team design is the fragmentation of team member time among too many projects… managers underestimate the difficulty of making a fragmented team work… As a rough rule of thumb, assume that the less work someone has to do on your project, the more likely they are to be a delay on the critical path.”
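Reinertsen’s rule of thumb has a grounding in basic queueing theory: the expected wait at a busy shared resource grows nonlinearly as its utilization approaches 100%. A toy sketch (my own illustration using the standard M/M/1 approximation, not a calculation from the book):

```python
def expected_wait(utilization: float, avg_task_days: float) -> float:
    """Expected queueing delay under the M/M/1 approximation:
    wait = (rho / (1 - rho)) * average service time.
    Delay explodes as utilization (rho) approaches 1."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return (utilization / (1 - utilization)) * avg_task_days

# A fractionally allocated specialist who is 95% busy makes everyone wait
for u in (0.50, 0.80, 0.90, 0.95):
    print(f"{u:.0%} busy -> {expected_wait(u, avg_task_days=2.0):.1f} days average wait")
```

With two-day tasks, a specialist who is 50% utilized imposes about a two-day wait; at 95% utilization the wait balloons to roughly 38 days. This is why the least-allocated person so often ends up on the critical path.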
Reinertsen is an interesting source of insight for IT systems professionals. A polymath naval officer trained as a nuclear engineer at the Naval War College, and later a McKinsey consultant, he is not an IT professional per se. Yet he is one of the most influential voices in Agile, widely cited by many of the best-known Agile thought leaders. He was instrumental in David Anderson’s development of Kanban, and his influence on the Scaled Agile Framework is also apparent.
He offers a systematic, well-quantified Lean approach to the broad topic of product development and the processes and practices enabling its flow. Note that Lean has always had two faces: production (the assembly line) and product development (how the assembly line comes to be).
There is often concern among software and IT professionals that manufacturing insights (including Lean) are inapplicable to IT management. But it is specifically the principles of manufacturing *production* that translate poorly to IT and systems development: development work is not repeatable, while production is all about repeatability and minimizing variation.
When we turn to product development – whether based on biotechnology, materials science, electrical engineering, or software-based systems – we see common themes:
- higher levels of variability in the process
- a need to optimize for information generation (product development is the generation of information)
- a corresponding requirement for fast feedback, as answers are not known in advance
In short, product development is always an iterative, uncertain, and risky process. There is nothing unique about software and IT in this respect.
What is unique in the current moment is the sheer, accelerating demand for IT-based systems and products. The transition to a product-centric orientation (for internal as well as externally facing IT) is now well under way, accelerated by phenomena like Cloud and the insatiable hunger of the modern economy for an ever-increasing digital component of *all* products. The demand for fast digital time to market is driving out older, less efficient, functional models, and that means the COE.
The concerns that gave rise to the COE model don’t go away. In the Spotify model, functional excellence still has a place and is promoted through the concepts of chapters and guilds. These are not merely advisory; they have some policy authority when matters of consistent functional approach are critical.
And finally, as with any large-scale transition, this is contradictory and uneven. IT shops are complex adaptive systems, and there is much investment in the current model. Many organizations will continue to use COEs. Perhaps flow through them can be improved – prioritizing based on Cost of Delay is a promising approach, one I hope to explore further.
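One common way to operationalize Cost of Delay for a shared-services queue is “CD3” (Cost of Delay Divided by Duration), a scheduling heuristic associated with Reinertsen’s work: items with the highest delay cost per unit of effort go first. A minimal sketch, with entirely hypothetical work items and numbers:

```python
from dataclasses import dataclass

@dataclass
class WorkItem:
    name: str
    cost_of_delay: float   # e.g. $k lost per week the item waits
    duration_weeks: float  # estimated time to move it through the COE

    @property
    def cd3(self) -> float:
        """Cost of Delay Divided by Duration -- highest value goes first."""
        return self.cost_of_delay / self.duration_weeks

# Hypothetical backlog sitting in a shared-services COE queue
backlog = [
    WorkItem("Data model review, Project A", cost_of_delay=10, duration_weeks=4),
    WorkItem("Schema change, Project B", cost_of_delay=8, duration_weeks=1),
    WorkItem("Governance exception, Project C", cost_of_delay=3, duration_weeks=0.5),
]

for item in sorted(backlog, key=lambda w: w.cd3, reverse=True):
    print(f"{item.name}: CD3 = {item.cd3:.1f}")
```

Note that the item with the highest raw Cost of Delay (Project A) is scheduled last: the short Project B and C items release far more value per week of the COE’s scarce capacity.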