Data Governance Toolset Disruption is Coming

Something is coming in technology that may not wreak havoc in data governance, but it should at least shake things up a bit.

Just as in the automotive industry of the early 1900s, a shift is beginning that will meet most needs at a fraction of the cost of the luxury solutions.

There were plenty of high-quality cars in 1910: Daimler, Benz, Pierce-Arrow, Olds, and others. The Olds Limited retailed for $4,600, more than the price of a new, no-frills three-bedroom house. A Mercedes Simplex would have cost roughly $7,500.

Certainly, there were cheaper autos, but these were more like the older horseless carriages than the modern autos that were evolving. Much like using Excel or Word documents for your data knowledge repository rather than a better-equipped platform.

Then along came Ford at about $800, and you know the rest of that story.

Ford saw that standardization of a common platform was a key to moving the industry from producing a few hundred Mercedes Simplexes to 15 million Fords over the lifetime of the Model T.

But that didn't mean the Model T was not customizable. There were many conversion kits; even Sears sold kits that turned the Model T into a pickup truck, and others turned it into a farm tractor or a sawmill. But those customizations relied on the fact that the car was standardized in the first place.

One key to success was the standardization of parts, a practice that had begun a century earlier in clock and gun manufacturing, which allowed easy maintenance and easy fitting of new hardware.

What does this have to do with the data governance tools of today?

The data governance tool landscape is dominated by large full-suite toolsets that have all the bells and whistles needed to make them stand-alone, self-contained, and naturally expensive solutions. Similar to having the lambskin seats in the Simplex.

Full-suite toolsets are not just expensive to implement; consider also the need for custom connections to whatever other systems you are using, and the training time needed to learn new workflow software. The Model T assembly line was efficient in part because it used common bolt and nut sizes, a new concept at the time.

Now, consider the lowly data standards repository, or data dictionary. Today, you can spend several hundred thousand dollars (and more) on a suite that has workflows, the repository, a glossary, collaboration methods, perhaps even a metadata discovery tool, and then you get to customize it to your environment (if customization is even allowed).

But the most ubiquitous option companies use today is to manage the knowledge about their data in Excel spreadsheets, because it is essentially free. While Excel doesn't meet the need for rich content and is difficult to manage as a single source of truth, it is mostly OK given its low cost of entry. With the aforementioned cost of the full suites, it's easy to stick with Excel when the next step up is so expensive.

But something cool is coming… actually, it's already here.

Companies such as ServiceNow are promoting their landscape not only as the ITIL powerhouse it is, but also as a platform as a service (PaaS). This means a company can build its data standards knowledge management as structured data, much like in its old Excel sheets, but with far more functionality and the ability to take advantage of the functional standardization inherent in any ServiceNow implementation.
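As a minimal sketch of what that can look like, assume a custom dictionary table has already been created on the platform. The instance name, the table name x_acme_data_dictionary, and the field names below are illustrative assumptions, not any real app's schema; the Table REST API itself (/api/now/table/{tableName}) is standard ServiceNow functionality:

```typescript
// Hedged sketch: reading data-dictionary entries held as structured data
// in a custom ServiceNow table, via the standard Table REST API.
// Instance, table, and field names are illustrative assumptions.

interface DictionaryEntry {
  field_name: string;          // e.g., "customer_credit_limit"
  business_definition: string; // the "what" and "why"
  data_steward: string;        // the "who"
  source_system: string;       // the "where"
}

async function fetchDictionaryEntries(): Promise<DictionaryEntry[]> {
  const url =
    "https://acme.service-now.com/api/now/table/x_acme_data_dictionary" +
    "?sysparm_limit=100";
  const response = await fetch(url, {
    headers: {
      // Basic auth for brevity (Node 18+); OAuth is also supported.
      Authorization:
        "Basic " + Buffer.from("user:password").toString("base64"),
      Accept: "application/json",
    },
  });
  if (!response.ok) {
    throw new Error(`Table API request failed: ${response.status}`);
  }
  // The Table API wraps rows in a top-level "result" array.
  const body = (await response.json()) as { result: DictionaryEntry[] };
  return body.result;
}
```

The point is not the particular fields but that the dictionary lives as ordinary platform records, reachable through the same standardized interfaces as everything else on the instance.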

If you have ServiceNow, why pay for yet another workflow in a stand-alone governance platform? Indeed, why pay for more than one workflow in your enterprise, period? Why pay for a rules engine, process analytics, dashboards, a ticket system, a collaboration platform, and so on, when you already have them on your ServiceNow platform?

These parts are standardized, so new apps can use them like Lego blocks that bolt on and connect, meaning you pay for functionality only once. You can then add the apps you are ready to consume as you need them. The new apps will be more cost-effective because they reuse the standard functionality and focus their efforts on the added value they bring, without reinventing any wheels.

Of course, there are many bells and whistles not yet available for data governance on ServiceNow and other PaaS offerings, but they are coming. Artificial intelligence (AI) has arrived; it only remains for someone to apply it to data governance applications, and you will have AI in your standards for, at worst, an incremental cost.

RuleBase is one example of a certified app that can be downloaded. It can be summed up as a business and IT data dictionary on steroids, and it fits nicely, providing a single source of truth for the "what, where, why, when, how, who, and because" of your master data. It takes advantage of all the functionality inherent in the platform at no added cost in software or DBA training, since it adds only incrementally to the training and management already in place.

DAMA (the Data Management Association) discusses master data metadata in several chapters of the DMBOK 2 (Data Management Body of Knowledge, version 2). It is vague about the full scope of the business metadata needed to support knowledge transfer when organizations reorganize, acquire or are acquired, or simply add new "bolt-on" functionality to their existing landscape. But it is clear that much more than technical metadata is needed. The data standard must cover all the bases and thus must scale both horizontally and vertically. If your standards management application is part of a much larger integrated application like ServiceNow, then its scalability concerns become insignificant. If you are using a traditional suite on a server, whether externally or internally hosted, then you need to worry about scale, size, security, and hardware cost.

In a previous TDAN.com article, Bob Seiner details what it means to govern data (http://tdan.com/what-does-it-mean-to-govern-data/10867). In it, he details eight areas of governing data, and I would suggest that each of these can and should have its knowledge documented for a detailed understanding of the "who, what, when, where, why, and because" of the data fields being populated or consumed by the new processes, platform, or company.

The scope of business metadata and knowledge management is for another article (coming soon), but the point is that much of it can be structured and therefore mined quantitatively and leveraged in numerous ways. The key is to have it in a central and universally accessible location on your landscape, so it can truly be the single source of knowledge about your data in the most cost-effective way possible.
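As a small illustration of what "mined quantitatively" can mean, here is a hedged sketch, reusing the illustrative entry shape from the earlier example, that computes simple completeness metrics over exported dictionary records:

```typescript
// Hedged sketch: measuring how complete the dictionary content is.
// The entry shape is the same illustrative assumption as before.

interface Entry {
  field_name: string;
  business_definition: string;
  data_steward: string;
}

function completenessReport(entries: Entry[]): void {
  const pct = (n: number) =>
    entries.length === 0
      ? "0.0%"
      : ((100 * n) / entries.length).toFixed(1) + "%";
  const noSteward =
    entries.filter((e) => e.data_steward.trim() === "").length;
  const noDefinition =
    entries.filter((e) => e.business_definition.trim() === "").length;

  console.log(`Entries without a steward:    ${pct(noSteward)}`);
  console.log(`Entries without a definition: ${pct(noDefinition)}`);
}

// Example usage with two hand-written records:
completenessReport([
  {
    field_name: "customer_credit_limit",
    business_definition: "Maximum open receivables allowed",
    data_steward: "Finance",
  },
  { field_name: "plant_code", business_definition: "", data_steward: "" },
]);
```

Once the knowledge is held as structured records rather than spreadsheet cells, this kind of measurement, and the dashboards built on it, comes almost for free.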

Now that we are in 2018, a company can choose to buy the custom Mercedes, or it can consider the "Lego building block" approach: starting-point "apps" that get you going fast and still allow you to adapt them to your needs, at a fraction of the investment and far more quickly than building your own.

By getting this structure in a standardized way, a company saves the time needed to create the structure itself and can take advantage of all the functionality immediately. Just download the functionality and get working on content.

The future of governance software is not that everyone will customize their Bugatti with the required assistance of implementation experts, but that the software will become more "app-like" and require fewer and fewer software personnel to implement, train, and maintain. This allows more focus on making the content of the data dictionary rich, up to date, and adapted to the company culture.

When it comes to data dictionaries and metadata repositories, my bet is on fast, easy, and content-rich, but at the Model T price point.

What do you think?


Richard King

Since 2003, Richard King has focused on master data governance, primarily in SAP-based environments and from the business perspective. He is currently a Partner at DataIntent LLC, a recent spinoff of gicom LP, where he was the COO. Prior to gicom LP, Richard's consulting experience included PwC, after first spending 25 years in a variety of positions in the chemical industry. His experience includes managing a specialty chemical business for a major multinational, including reorganizations and acquisition integrations where data management played a key role. He has consolidated and managed a divisional-level supply chain and later organized the SAP master data governance organization, both for Bayer Chemicals. He has presented at ASUG in 2006, 2007, 2009, and 2011, as well as to the ASUG SIG for Data Governance. He is currently on the steering committee for the ASUG Data Governance SIG working group for data quality metrics.
