Death by Spreadsheet

I suppose most of us have heard the phrase “death by PowerPoint,” referring to those occasions when the burping, animated clip art gets in the way of the message. There is another phrase some of us are also familiar with – “death by spreadsheet.”

Publisher’s note: From time to time, TDAN.com unburies oldie-but-goodie features from years gone by. This feature was originally published in October 2012 (pre-TDAN.com relaunch) and is still relevant today.


Imagine this scenario – a team of analysts and business experts are putting together the requirements for a complex new application. Will they model their business processes using a process modeling tool? They might, but let’s say they don’t in this case because they don’t have a process modeling tool, and such tools are seen as too expensive. Will they produce UML activity diagrams and use cases? Probably not; there’s a common misconception that UML is only for system designers and developers. Will they produce high-level data models in a data modeling tool? I doubt that very much – I haven’t met many business analysts comfortable with producing data models. Okay, what does that leave? By the way, did I mention the need to manage and compare versions of the requirements, and to allow several people to update them at the same time? We also need to link the requirements to design artefacts and other stuff, like data models and processes.

Very often, the requirements are recorded in multiple Word documents and Excel spreadsheets, and SharePoint is used to manage versioning and sharing; analysts usually have a lot of experience with Word and Excel. The software cost will be low because we already have licenses for Word, Excel and SharePoint. We develop a set of linked spreadsheets, each one designed for a specific task. We give each requirement and each row in a spreadsheet a unique reference, so we can cross-reference the documents when we need to. We may decide to create hyperlinks to connect documents to each other. It’s easy to link to another file, and maybe to a bookmark of some kind within a file. But can we see the hyperlink from the “other” end? Of course not; that’s not how hyperlinks work. What happens when the name or location of the file changes? No problem – what we need is another spreadsheet where we duplicate this rats’ nest of connections in a searchable manner. In effect, we create a catalogue of our documentation and the links between it all – something else for our analysts to remember to maintain, as if they don’t already have enough to do. Databases can be relied on to maintain cross-references; people are much more likely to forget to update them – they even forget to add the link in the first place.
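To make the maintenance burden concrete, here is a minimal sketch, in Python, of the kind of housekeeping script a team ends up writing just to police a hand-maintained catalogue like this. The file name and column layout are illustrative assumptions, not taken from any real project:

```python
import csv
from pathlib import Path

# Hypothetical catalogue of cross-references, maintained by hand.
# Assumed columns: source_doc, source_ref, target_doc, target_ref
CATALOGUE = Path("requirements_catalogue.csv")

def find_broken_links(catalogue: Path) -> list[dict]:
    """Return catalogue rows whose target document no longer exists."""
    broken = []
    with catalogue.open(newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if not Path(row["target_doc"]).exists():
                broken.append(row)
    return broken

if __name__ == "__main__":
    for row in find_broken_links(CATALOGUE):
        print(f"{row['source_doc']} ({row['source_ref']}) -> "
              f"missing file {row['target_doc']}")
```

Even a script like this only catches files that have moved or been renamed; a link that was never added to the catalogue in the first place simply doesn’t exist as far as the script is concerned.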

How can we store and structure this information so that it is useful and reliable once the workshops have finished? Does everyone store documents in the right place in SharePoint when they should? What happens when person B wants to edit a document or spreadsheet that person A is working on, so person B takes a copy to work on and stores it in their own folder outside SharePoint? Eventually, there may be several copies of a document, spread around various folders inside and outside SharePoint. Which version should I be reading?

Can we find out the status of the use cases that are relevant to each requirement? Probably not, unless we create another spreadsheet, or add the information to our catalogue of documentation. Can we see which requirements reference the field “maximum shift allowance”? Probably not.
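By contrast, if the requirements, use cases and field references lived in even the simplest relational store, both of those questions become single queries, and the cross-references are enforced by the database rather than remembered by the analysts. A minimal sketch, assuming a hypothetical SQLite schema (the table and column names are invented for illustration, not taken from any particular tool):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # make the database police the links
conn.executescript("""
CREATE TABLE requirement (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE use_case (
    id INTEGER PRIMARY KEY,
    status TEXT,
    requirement_id INTEGER NOT NULL REFERENCES requirement(id)
);
CREATE TABLE field_reference (
    requirement_id INTEGER NOT NULL REFERENCES requirement(id),
    field_name TEXT NOT NULL
);
""")

# Status of the use cases relevant to each requirement: one join, not another spreadsheet.
use_case_status = conn.execute("""
    SELECT r.id, r.title, u.id, u.status
    FROM requirement r JOIN use_case u ON u.requirement_id = r.id
""").fetchall()

# Which requirements reference the field 'maximum shift allowance'? Another single query.
field_refs = conn.execute("""
    SELECT r.id, r.title
    FROM requirement r JOIN field_reference f ON f.requirement_id = r.id
    WHERE f.field_name = 'maximum shift allowance'
""").fetchall()

# And a dangling cross-reference is rejected outright instead of quietly rotting.
try:
    conn.execute("INSERT INTO use_case VALUES (99, 'Draft', 12345)")
except sqlite3.IntegrityError as error:
    print("Refused:", error)  # FOREIGN KEY constraint failed
```

None of this requires an expensive tool; the point is simply that structured, linked storage can answer traceability questions that a pile of spreadsheets cannot.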

It’s pretty low maintenance: just a few hours needed each week to make sure everything connects, with only several hundred links to maintain manually. After all, it must be cheaper than spending all that money on tools, mustn’t it?

In one way, it is cheaper – we haven’t spent any money on software. However, we do have a large overhead maintaining all this stuff. Still, we won’t need to maintain it forever, will we? We can afford to chuck away all that work, once the second and third releases have gone live, can’t we? It doesn’t matter if we can’t answer future questions like “Why did we do it this way, when that isn’t the way the package does it out of the box?” or “If we decide to change the value of the maximum shift allowance mentioned in Requirement 32, what does that affect?” When the next major version of the package comes along, we’ll have no problem figuring out the impact on our business, assuming the documents are up to date, we can find them, and the links all still work.

It’s ironic, isn’t it? The organization is in a pickle because we’ve built all these data and process silos over the years, and the solution is to simplify things using a complex package, service-oriented architecture, a combination of both, or something else. So, we’re replacing the organization’s process and data silos with something that integrates much better, reduces effort and cost, and is less liable to lead to confusion, error, or even jail sentences for those held responsible.  How do we manage the design and implementation of this new nirvana? We build our own metadata silos (multiple documents and spreadsheets, process and data models, etc.) which don’t link together reliably, each group develops their own deliverables, and a lot of effort is required to make sure all these silos connect with each other. I forgot – that’s what spreadsheets are good at, isn’t it? Nope, that’s what integrated tools are good at. After all, isn’t that what we’re giving the business people, a set of integrated tools for managing their business, often to replace their own version of “death by spreadsheet”?


George McGeachie

George McGeachie has worked in the field of modeling and metadata in a variety of industries for many years. George has evaluated, implemented, tailored and used various data modeling and metadata management tools during data modeling assignments around the globe. He is joint author of Data Modeling Made Simple with PowerDesigner.
