Data for Information Management

“Your strategic and corporate reporting will only be as effective as the performance management of your operational systems.”

Every organization or enterprise understands this one-liner, but how many actually work toward making it happen? The prime need for every enterprise is to ensure that the performance management behind its decision making is backed by a solid foundation of data quality, which in turn gives it the capability to tackle larger concerns such as regulatory compliance, customer churn, growth in new lines of business, and security and privacy.

Most enterprises have invested heavily in ERP, CRM, EDW, performance management, and reporting systems, yet decisions are still being made on top of poor data quality, and doubts about those decisions remain. What is the core underlying issue that hampers effective decision making or raises doubts about the quality of the data backing those decisions?

This article addresses some of those underlying problems and looks at a practical approach toward effective governance: performance-driven operational systems for quality decision making.

Data Management

When an organization initiates or plans a project in the area of business intelligence (BI), the first question it needs to answer is whether it has the right data management structure in place to create meaningful insight from the data it is accumulating.

If the assumption is that a data warehousing/business intelligence initiative will by itself cleanse the data, house it in relevant structures, and create value with a handsome ROI, that decision needs to be reconsidered. If, however, a structure is already in place that governs and monitors data throughout its life cycle in the organization, then any advanced solution can be built on top of it safely and with minimal investment, yielding a good ROI.

Data management is no longer restricted to structured data alone. With semi-structured and unstructured data comprising more than half of the total data in an established organization, these forms must also be encompassed to create a mature information management practice.

Each type of data has its own set of rules and policies governing its creation, existence, and death within an organization. While data is still active, information management uses it to create a better informed workforce, with the sole intention of helping people make better decisions. More recently, regulatory compliance has also mandated the retention of data.

A typical data life cycle in an organization goes through the following phases:
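As a rough illustration, the life cycle can be sketched as a simple state model. The phase names and transitions below are an assumption chosen for illustration, not an authoritative list:

```python
from enum import Enum, auto

# Hypothetical phase names; treat them as an illustrative assumption.
class DataLifecyclePhase(Enum):
    CREATION = auto()      # data is captured by an operational system
    ACTIVE_USE = auto()    # data is consumed for operations and decisions
    REPURPOSING = auto()   # data flows to downstream/analytic applications
    ARCHIVAL = auto()      # data is retained for compliance on cheap storage
    PURGE = auto()         # data is destroyed once retention expires

# Allowed transitions, sketched as a simple adjacency map.
TRANSITIONS = {
    DataLifecyclePhase.CREATION: {DataLifecyclePhase.ACTIVE_USE},
    DataLifecyclePhase.ACTIVE_USE: {DataLifecyclePhase.REPURPOSING,
                                    DataLifecyclePhase.ARCHIVAL},
    DataLifecyclePhase.REPURPOSING: {DataLifecyclePhase.ARCHIVAL},
    DataLifecyclePhase.ARCHIVAL: {DataLifecyclePhase.PURGE},
    DataLifecyclePhase.PURGE: set(),
}

def can_transition(current, target):
    """Return True if moving from `current` to `target` is a valid step."""
    return target in TRANSITIONS[current]
```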

Enterprise information management (which encompasses EII, MDM, and related technologies and methods for governing information generation, access, and distribution) depends on its critical subset, data life cycle management, for the integrity, auditability, and quality of the data, and thereby provides insightful information for decision makers.

Data by itself is not sufficient for information needs; raw data without metadata is of little use. Data has to have a meaningful representation for information users, and it needs a complete description of the transformations it goes through and of its path through all business processes and their supporting applications. With this metadata in place, the data is ready to be re-purposed for downstream applications such as operational reporting and decision support systems. The re-purposed data that flows down to these applications has a similar life cycle of its own, which helps the information management team govern and implement solutions enterprise-wide.
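A minimal sketch of that kind of metadata record is shown below. The structure and field names are assumptions for illustration only, not a metadata standard:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TransformationStep:
    process: str        # business process that touched the data
    application: str    # supporting application that performed the step
    description: str    # what the transformation did

@dataclass
class DatasetMetadata:
    name: str
    source_system: str
    business_owner: str
    lineage: List[TransformationStep] = field(default_factory=list)

    def record_step(self, process, application, description):
        """Append one hop of the dataset's path for auditability."""
        self.lineage.append(TransformationStep(process, application, description))

# Example: a customer dataset annotated as it moves toward reporting.
customers = DatasetMetadata("customer_master", "CRM", "Sales Operations")
customers.record_step("Sales", "CRM", "De-duplicated contact records")
customers.record_step("Finance", "ERP", "Added payment-method attributes")
```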

The simple example shown below can further explain the point. Consider the creation of the customer entity. The typical business processes involved can include marketing, sales, logistics, and finance.

[Figure: contribution of the marketing, sales, logistics, and finance processes to the customer master record]

Each activity in the business process above contributes details or attributes of the customer to be maintained in the master store. For instance, the marketing team may not be able to capture the customer's preferred mailing or shipment address, while finance can provide the banking details or information about the payment method the customer has opted for. A CRM representative can enrich the same record with scores based on customer-provided feedback. Transaction details, however, would not be stored in the CDI/MDM system.
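A minimal sketch of that enrichment pattern follows. The attribute names and the simple merge rule are assumptions for illustration, not a prescription for any MDM product:

```python
from typing import Dict

# A customer "golden record" assembled from contributions by different
# business processes. Attribute names are illustrative assumptions.
class CustomerMasterRecord:
    def __init__(self, customer_id: str):
        self.customer_id = customer_id
        self.attributes: Dict[str, str] = {}
        self.contributed_by: Dict[str, str] = {}

    def contribute(self, process: str, attributes: Dict[str, str]) -> None:
        """Merge attributes from one business process into the master record."""
        for key, value in attributes.items():
            self.attributes[key] = value
            self.contributed_by[key] = process   # keep provenance per attribute

record = CustomerMasterRecord("CUST-001")
record.contribute("Marketing", {"name": "Acme Corp", "segment": "SMB"})
record.contribute("Logistics", {"shipping_address": "12 Dock Road"})
record.contribute("Finance", {"payment_method": "Net-30 invoice"})
record.contribute("CRM", {"satisfaction_score": "4.5"})
# Transactions themselves stay in the operational systems, not in the master.
```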

In the past, organizations faced the challenge of consolidating data for MDM or DSS from legacy applications that captured similar information (as above for the customer) but each had its own format, data store, and technology. This was addressed by implementing enterprise applications that catered to various businesses and covered most of the business processes. These, however, had drawbacks of their own: they could not be customized entirely to an organization's existing processes and demanded a business process re-engineering (BPR) exercise. With SOA, and with detailed capture of business process orchestration along with metadata, many organizations were able to avoid the need to redefine or re-engineer existing processes. Of course, this demands a well-established information and data management practice within the organization.

With the speed of information and data processing, with competition, and with IT becoming more pervasive, technology has shifted from manual processes to packaged software and now toward Internet-based transactions. Most businesses can sell their products online and complete financial transactions through secured gateways. These advances have changed the way the customer data processing cycle is implemented within an organization. Where every piece of data from a business process once depended on manual capture, it is now captured via web-based applications, and customers provide their information online rather than having it collected over calls and keyed in by business representatives. This has increased both the number of transactions that can be completed in a given period and the amount of data being collected. The cycle time to close an order has come down from a few days to a few hours, and now to a few minutes.

The process described above can be restated as shown below:

[Figure: the restated process, with web-based, self-service capture replacing manual data entry]
This change has led to a different approach to data management and governance. Business processes that once depended on a series of applications and the interfaces among them have been reduced to a standard package supporting both online and offline data movement.

The data life cycle does not change with the new set of interfaces; the data still goes through the same phases. What changes is the way the data is captured, processed, and stored, and the retention policy applied to it. The collected data is still re-purposed for downstream applications or used for reporting, and the principles and practices of data representation, and of how informative it can be made, still need to be governed by information management and its subset, data management.

What Has Changed?

From the data perspective, apart from the way data is captured and processed:

  • The volume of data generated has increased over the years, and businesses now handle more transactions per day than before. The models for processing, integrating, storing, and governing data have advanced accordingly, with new tools and software available from vendors.
  • Depending on internal systems alone for data is no longer sufficient. Data from external sources needs to be brought in, aligned and processed against internal data structures, and then used as input for day-to-day reporting and decision making.
  • The approach to data quality has also changed. Organizations now want to assess potential losses and correct data proactively, as it is created, rather than react after a loss has already been registered (see the validation sketch after this list).
  • Unstructured data also needs to be extracted and integrated with structured data for information generation.
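As a rough sketch of that proactive stance, incoming records can be validated at the point of capture rather than in a later cleanup pass. The specific rules and field names below are assumptions for illustration only:

```python
import re

# Illustrative validation rules applied as a record is created,
# rather than during a later, reactive cleanup run.
RULES = {
    "email": lambda v: re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", v) is not None,
    "postal_code": lambda v: v.strip() != "",
    "payment_method": lambda v: v in {"card", "invoice", "direct_debit"},
}

def validate_at_creation(record: dict) -> list:
    """Return a list of quality issues found before the record is persisted."""
    issues = []
    for field, rule in RULES.items():
        value = record.get(field)
        if value is None:
            issues.append(f"missing field: {field}")
        elif not rule(value):
            issues.append(f"invalid value for {field}: {value!r}")
    return issues

new_customer = {"email": "buyer@example.com", "postal_code": "", "payment_method": "card"}
print(validate_at_creation(new_customer))  # flags the empty postal_code
```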

From the business process perspective:

  • Organizations need to redefine their strategies as market scenarios keep changing. This directly requires re-engineering one or more of the implemented processes, or creating new ones.
  • No single process serves a particular information need in isolation; rather, the processes are correlated and together create and enhance the value of the data as it moves through each of them.
  • Organizations no longer look at performance department by department; they manage performance based on processes, seeded by the data each process generates. Departments rarely have processes and systems dedicated to individual performance assessment, so organizations in effect assess how well each department collaborates with the others for positive growth.
  • The latency between measuring operational performance metrics and measuring strategic performance metrics is much narrower than it used to be, given fast-changing business scenarios, competition, and the global dynamics of the economy.

From the information consumer perspective:

  • Decision making is no longer restricted to the top of the organizational hierarchy. Over the years, systems were designed to cater to the needs of top management and, later, middle management, and those systems were mostly based on historical data. Now organizations are also investing in an intelligent workforce, empowering it with decision-making capabilities.
  • Customers are no longer willing to wait for a decision on their requests, so the direct points of contact with the customer, the business representatives, need to be empowered with both static and tactical knowledge and a set of options to improve the customer experience.
  • The currency of data was an issue in the past, but the technologies and solutions now available can bring the latest information to a decision maker's desktop and assist in responding to a situation in near real time.
  • Collaborative decision making is becoming a necessity, to reduce the effort spent context switching between applications in order to make decisions.

What kind of system needs to be in place to cater to these changes and needs, while remaining agile enough to withstand future business-scenario-driven changes?

What is Operational Intelligence (OI)?

Operational intelligence is a solution implemented to monitor the business processes in place, most of which are managed by the operational systems deployed for routine business activities. Most definitions speak of improving business processes and enriching the customer experience through near-real-time monitoring of information via the data those systems generate.

When mature data management processes are in place, information management takes on a larger role in creating value-added services from the key asset: data. How much data needs to be stored, at what level, and what information needs to be generated and for whom become the key drivers for implementing a rewarding information management system.

Traditional business intelligence applications were designed to house historical data once it had been processed and was ready to be re-purposed. The integrated data grew over the years, resulting in costly maintenance of huge volumes which, if analyzed, would reveal that only a small percentage is actively used for decision making or information generation.

Integrating only this active data with current data can yield good insight into what actions need to be taken. The rest of the data, called inactive or dormant, can be housed in inexpensive storage and maintained as regulatory compliance requires.

Active data can be brought out of near-line storage or a maintained data repository for specific needs such as MDM and process data marts. Once the data there becomes inactive, it can be moved to inexpensive storage and retained until purged.
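A minimal sketch of such a tiering policy is shown below. The thresholds, tier names, and record fields are assumptions chosen for illustration, not recommendations:

```python
from datetime import datetime, timedelta

# Assumed, illustrative policy: data untouched for 90 days is "inactive";
# inactive data is archived cheaply and purged after a 7-year retention period.
ACTIVE_WINDOW = timedelta(days=90)
RETENTION_PERIOD = timedelta(days=7 * 365)

def classify(record: dict, now: datetime) -> str:
    """Decide which storage tier a record belongs to."""
    age_since_use = now - record["last_accessed"]
    age_since_creation = now - record["created"]
    if age_since_creation > RETENTION_PERIOD:
        return "purge"            # retention expired: destroy
    if age_since_use <= ACTIVE_WINDOW:
        return "active_store"     # fast storage, feeds near-real-time analysis
    return "archive"              # inexpensive storage, kept for compliance

now = datetime(2011, 6, 1)
order = {"created": datetime(2009, 1, 15), "last_accessed": datetime(2011, 5, 20)}
print(classify(order, now))  # -> "active_store"
```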

With open source tools and EII, EAI, and CDC techniques now available, extracting event-triggered data and combining it with the active repositories helps maintain quality in real time and speeds decision making at the right levels of the hierarchy. The performance data can be mashed up in a web dashboard with minimal intervention.
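The following is a rough, generic sketch of that event-triggered flow. The event shape, the in-memory stand-in for the active repository, and the metric names are all assumptions; a real implementation would use a specific CDC tool and dashboard product:

```python
from collections import defaultdict

# A toy in-memory stand-in for the "active" repository of recent data.
active_repository = {}
# Running operational metrics that a dashboard could poll or subscribe to.
metrics = defaultdict(int)

def on_change_event(event: dict) -> None:
    """Handle one change-data-capture style event as it is emitted."""
    key = (event["table"], event["key"])
    if event["op"] in ("insert", "update"):
        active_repository[key] = event["row"]
    elif event["op"] == "delete":
        active_repository.pop(key, None)
    metrics[f'{event["table"]}_{event["op"]}s'] += 1

# Example: events trickling in from an operational order system.
on_change_event({"table": "orders", "key": 101, "op": "insert",
                 "row": {"status": "placed", "amount": 250.0}})
on_change_event({"table": "orders", "key": 101, "op": "update",
                 "row": {"status": "shipped", "amount": 250.0}})
print(dict(metrics))  # {'orders_inserts': 1, 'orders_updates': 1}
```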

A second look at the data life cycle summarizes the above:

[Figure: the data life cycle revisited, with quality auditing, near-real-time analysis of active data, and tiered storage (steps 1 through 6)]
In the diagram above, collected data is audited for quality, reflecting a higher maturity with respect to data quality and governance. The processed data is captured for analysis along with recent history, together termed active data. The results of this analysis are made available to the business representative as information designed for his or her decision making. Steps 4, 4a, and 4b happen in near real time, on the assumption that data reaching stage 4 is within acceptable quality limits. When active data loses its relevance for decision making over time, it is moved to a different level of storage for traditional analysis and reporting. Step 6 thus caters to two levels of storage: one for active data that aids near-real-time information processing, and one for historical data, which has less relevance than the active data.
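Read end to end, that flow can be sketched as a small pipeline. The step functions below are placeholders named after the stages described above, not real APIs:

```python
def process_event(event, active_store, quality_check, analyze, publish):
    """One pass through the near-real-time part of the cycle (through steps 4, 4a, 4b).

    quality_check, analyze, and publish are placeholder callables standing in
    for the quality audit, the analysis against active data, and the delivery
    of results to the business representative.
    """
    if not quality_check(event):       # only data within acceptable quality limits proceeds
        return None
    active_store.append(event)         # collected data joins recent history (active data)
    insight = analyze(event, active_store)
    publish(insight)                   # made available to the representative in near real time
    return insight

def retire_inactive(active_store, historical_store, is_still_relevant):
    """Step 6: split storage into active data and less-relevant historical data."""
    historical_store.extend(e for e in active_store if not is_still_relevant(e))
    active_store[:] = [e for e in active_store if is_still_relevant(e)]
```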

Summary

To create the right foundation for an intelligent enterprise, it is critical to have a mature information/data management practice that governs the data and information life cycles, combined with established management to take care of, and integrate, the business processes.
