What Does Manage Your Data and Information Assets Mean?

One of the more popular sound bites in data management today is “Manage data and information assets.”[1] It’s catchy and it sounds like a good idea, but what exactly does it mean? And, most importantly, what should organizations do differently? This article aims to answer the first question and suggest a first step for answering the second.

Three Tests

In today’s organizations “capital,” in all its various forms, and “people,” including their backs and brains, are generally recognized as assets. Some organizations may recognize customers, their processes, or information technologies as assets as well, but only capital and people enjoy near-universal recognition. A casual scan also reveals that most
organizations manage their capital and people more professionally and aggressively than things they do not recognize as assets. They:

  • Take active measures to improve the value of their assets. For example, they invest in hiring the right people, training employees, and succession planning.
  • Use (exploit might be a better word) their assets inside the organization and in their marketplaces. For example, they use people’s brainpower to innovate and develop strategies to market
    and sell their products. And they invest their dollars in plant and equipment to manufacture them economically.
  • Adjust their management systems in recognition of the asset’s special properties. They put someone in charge (e.g., chief financial officers, heads of human resources), and they recognize
    (for example) that managing a person and managing a dollar are not the same.

For data and information to rise to the level of capital and people as assets in the eyes of the organization, I propose they must pass three equivalent “tests.” I call the first test
the “care and feeding” test. To pass, organizations must acquire the right kinds of high-quality data and information; and their people must be able to find, access, and understand
them. The data and information must be of sufficiently high quality that people can trust them and use them with a high degree of confidence. And they must take reasonable steps to prevent these
assets (the data and information) from being lost, stolen, or used in inappropriate ways.

I call the second test the “unique and significant contribution” test. To pass, data and information must be part and parcel of the organization’s present and future. They must
make unique and significant contributions to the day-in, day-out running of the business, to its value proposition, and the unique selling points of its products and services. Perhaps most
significantly, data and information must be at the core of innovation, strategy, and competitive position.

I call the third test the “special properties” test. Data and information have properties that present both opportunities and perils unlike any other asset. The easiest to see is that
they may be shared. Departments compete for the organization’s investment dollars; if operations gets funds, then finance may not. Not so data and information. Operations and finance can
use exactly the same data at exactly the same time, neither depriving the other. The flip side is that data is harder to protect, as the fifty million customer records reported lost or stolen in 2005 attest.[2]

Do Today’s Organizations Pass?

An exercise popular in many training courses goes something like this. The class is asked to imagine a fine antique French desk, recently purchased for $20,000 USD. Atop the desk sits a brand new
laptop computer that cost $2,000 USD and comes complete with all the bells and whistles. Sitting right next to the laptop is a CD that cost about ten cents. The CD contains the only known list of
the names and purchases of the organization’s fifty largest customers. Now, the exercise goes, a fire has started and you can only save one of the three. Which do you save?

In 1900, everyone would have saved the desk. And in 1986, a mere twenty years ago, some people would have still saved the desk. Others, intoxicated by the new technology, would have selected the
PC. Today virtually everyone chooses to save the CD. And they immediately recognize that it is not the CD that is worth saving, but the data it contains. People intuitively judge that the data is
worth far more than $20,000, even though they can’t put a price tag on it. Left to their reflexes alone, virtually everyone recognizes that data and information are extremely valuable assets.

But do organizations pass the three tests?

While I’ve done no scientifically defensible study, my guess is that few organizations could pass the care and feeding test. Issues of data quality are legion. I’ll cite just a few:

  • An IBM-sponsored study suggests that knowledge workers spend up to thirty percent (30%) of their time searching for the data they need.[3]
    Compounding this problem, another study suggests that searches are unsuccessful thirty percent of the time.[4]
  • Accuracy is poor. While there is considerable variation, database to database and organization to organization, a good estimate is that ten to twenty-five percent (10% – 25%) of data records
    contain errors (including data that is simply missing).[5]
  • The number one cause of failure of enterprise systems and data warehouses is poor data quality.[6]
  • Organizations cannot create a complete view of their customers.

More than a few organizations would pass the unique and significant contribution test. Some, such as Bloomberg, Interactive Data, Morningstar and Tele-Tech, sell data on the open markets. Hedge
funds seek to uncover and exploit information asymmetries. The Oakland A’s are excellent “miners” of player performance data, and “infomediators” such as Google help
people find the data and information they need. I count at least fifteen distinct ways to bring data and information to the marketplace, and more and more companies are beginning to do so.

I don’t think many organizations could pass the special properties test. As one example, when asked who is ultimately responsible for data in their organization, most people cite the chief
information officer. This is an unfortunate choice of title, as most who have this title are really chief information technology officers. The difference between managing “I” (data and
information) and managing “IT” is akin to the difference between lightning and a lightning bug!

First Step: Market-Driven Data Quality

So what must organizations do if data and information are to assume their rightful place alongside capital and people as assets?

For many organizations, the most appropriate first step is improving data quality, by at least an order of magnitude and maybe even more. By now, there are plenty of examples, in industry after
industry, of companies that have done just that. And the promise of reduced costs, more nimble operations, and trusted decisions has indeed been fulfilled in such organizations.

Make no mistake. These companies have worked very hard to achieve their results, but there is really no mystery about what they’ve done. In a nutshell, they’ve:

  • Adopted the philosophy of preventing errors at their sources.
  • Made management accountabilities very clear.
  • Measured quality levels (usually against reality), identified root causes of errors, and conducted improvement projects to eliminate them.
  • Implemented controls to keep errors from coming back.
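The measurement step above can be sketched in a few lines of code. The snippet below is my own illustration, not taken from any of the companies cited: it estimates the fraction of records failing at least one validation rule, which is how the “ten to twenty-five percent” figures cited earlier are typically obtained. The field names and rules are hypothetical.

```python
# Minimal sketch of data quality measurement: estimate the fraction of
# records that fail at least one validation rule. Field names and rules
# are hypothetical, chosen only to illustrate the technique.

records = [
    {"customer_id": "C001", "email": "ann@example.com", "purchases": 12},
    {"customer_id": "C002", "email": "", "purchases": 5},                  # missing email
    {"customer_id": "C003", "email": "bob@example.com", "purchases": -3},  # impossible count
    {"customer_id": "C004", "email": "cara@example.com", "purchases": 7},
]

# Each rule returns True when the record passes.
rules = {
    "email present": lambda r: bool(r["email"]),
    "purchases non-negative": lambda r: r["purchases"] >= 0,
}

def error_rate(records, rules):
    """Fraction of records failing at least one rule (missing data counts as an error)."""
    failed = sum(
        1 for r in records if not all(check(r) for check in rules.values())
    )
    return failed / len(records)

print(f"{error_rate(records, rules):.0%} of records contain errors")
```

A real program would, of course, carry far more rules and check them against real-world referents, but the principle is the same: you cannot identify root causes, or know whether controls are working, until you have a number like this to track over time.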

Larry English, David Loshin, Rich Wang, myself, and many others have written extensively on both the “why to’s” and “how to’s” of data quality.

Impressive as these successes have been, most have started out with rather pedestrian goals. Protecting the investment in a data warehouse, cutting costs, or complying with regulation are typical examples.

But data will never rise to the level of an asset for such reasons alone. Data must prove itself in the marketplace. So I’d like to suggest a simpler, more direct, and more powerful driver
for the data quality program. It starts with answers to a sequence of questions:

  1. Which data is most important in the marketplace? Note that when asked this way, the answer “customer data” means “data our customers see,” rather than “data about
    our customers.”

  2. How good is this data in the eyes of our customers?
  3. Which data should we improve to improve the customer experience, make more money, and drive our innovation?

Then, drive the quality program based on this data.

Approaching data quality in this manner yields the same cost reductions as the more typical inside-out approach. It has several other benefits as well:

  • First, it gets data quality off the “cost ledger” and onto the “revenue ledger.”
  • It narrows the scope of the data quality program so it can move faster, and it results in increased customer satisfaction.
  • In parallel, it helps clarify business responsibilities for quality and relieves IT of a job it cannot do well.
  • It helps the business “get a feel” for data and information, their organic nature, and the unique ways they add value.

Each of the above helps an organization and its managers and leaders gain the experience they will need to tackle the trickier “unique and significant contribution” and “special
properties” tests.


[1] I prefer “manage data and information assets” to the more commonly used “manage data and information as business assets” because many interpret the latter to mean, “data and information aren’t really business assets. But you should manage them as though they are.”
[2] Privacy Rights Clearinghouse, “A Chronology of Data Breaches Reported Since the ChoicePoint Incident,” updated 7/10/06, http://www.privacyrights.org/ar/ChronDataBreaches.htm (last checked 7/12/06).
[3] Evelyn Tarner and David Paget-Brown, IBM Business Consulting Services, “How financial institutions can tune in to the advantages of enterprise data management,” http://www-03.ibm.com/industries/financialservices/doc/content/resource/thought/1595427103.html
[4] Susan Feldman, “The High Cost of Not Finding Information,” KMWorld, Volume 13, Issue 3, March 2004.
[5] This figure is taken from the Gartner study cited in Rick Whiting, “Hamstrung by Defective Data,” InformationWeek, May 8, 2006. It agrees with detailed measurements made in a variety of settings by the author’s clients.
[6] T. Friedman, “Data Quality ‘Firewall’ Enhances Value of the Data Warehouse,” Gartner Research, April 13, 2003, and “CRM Demands Data Cleansing,” Gartner Research, December 3, 2004.



About Thomas Redman

Dr. Thomas C. Redman is President of Navesink Consulting Group, based in Little Silver, NJ. Known by many as the “Data Doc,” Dr. Redman was the first to extend quality principles to data and information. By advancing the body of knowledge, Tom’s innovations have raised the standard of data quality in today’s information-based economy. Tom’s work has helped any number of organizations understand the importance of high-quality data and start their data quality programs. Through his expertise and practical advice, organizations have saved millions of dollars per year.  Tom’s proven, repeatable tools, techniques and roadmaps have helped clients in telecommunications, financial services, computer products, dot-coms, logistics, consumer goods, and government agencies.