Operations management and enterprise governance both rely on measuring and reporting scores for the metrics used to quantify the performance of particular processes. While the use of (and reaction to)
metric scores differs based on the business processes being monitored, there are certain characteristics that are common to the concept of a metric. In turn, individual metrics can be grouped
together into higher-level scores computed as functions of the individual component measurements. At the most basic level, a metric is a single quantified value that answers a
specific business question. In other words, the base-level metric, in many cases, can be as simple as a count.
It is important to remember that every metric is intended to answer a business question; therefore, before defining a model or building a system, the analysts must engage the business users to help
identify, clarify, and document the specific business data needs for the system. This requires interviewing key system stakeholders and customers to clarify business processes and needs as a
prelude to synthesizing the requirements for the system model. It also entails reviewing, analyzing, and integrating system information and stakeholder interview data to create key artifacts, such
as a fact/qualifier matrix, that will drive the definition of the model, the determination of the data sources to populate the model, and the processes for presenting metric information back to the business users.
The Fact/Qualifier Matrix
Without loss of generality, let’s initially consider the base-level metric and enumerate the key attributes that are relevant for capturing the performance measures critical for answering the
business questions in a fact/qualifier matrix. A fact/qualifier matrix is a standard data warehouse development tool designed to organize specific business information needs into a format that aids
in developing relational and multidimensional data structures. Facts represent specific business questions or discrete items of information that are tracked or managed, and are generally items that
can be counted or measured such as quantity or volume. Qualifiers represent conditions or dimensions that are used to filter or organize facts. Qualifiers include items such as time or geographic
area. All business facts are listed down the left-hand column, and all possible qualifier dimensions are listed across the top row. An entry at any intersection identifies the qualifying dimensions
used to filter, organize, or aggregate the associated fact data.
As an example, business questions such as “How many widgets have been shipped by size each week by warehouse?” and “What are the total sales for widgets by size each week for each
salesperson by region?” provide insight into shipping and sales performance as part of a performance management activity. The fact/qualifier matrix will specify the business
facts that are expected to be measured – in this case, “the number of widgets shipped” and “total sales of widgets shipped” – and the specific dimensional
qualifiers are “size,” “week,” “warehouse,” “salesperson,” and “region.” Subsequent iterations of the analysis process may derive greater
precision for some of these qualifiers, such as breaking out the “week” qualifier into more granular time frames (e.g., daily, or even hourly).
Figure 1: Example Fact/Qualifier Matrix
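The intersections in the matrix can be sketched as a simple data structure. The following Python sketch takes the fact and qualifier names from the widget example above; everything else is illustrative. Each fact maps to the set of qualifier dimensions marked at its intersections, and the shared qualifiers fall out directly:

```python
# Qualifier dimensions from the widget example (columns of the matrix).
QUALIFIERS = ["size", "week", "warehouse", "salesperson", "region"]

# Each fact (row) maps to the qualifiers marked at its intersections.
MATRIX = {
    "number of widgets shipped": {"size", "week", "warehouse"},
    "total sales of widgets shipped": {"size", "week", "salesperson", "region"},
}

def qualifies(fact: str, qualifier: str) -> bool:
    """Return True if the matrix marks this fact/qualifier intersection."""
    return qualifier in MATRIX.get(fact, set())

# Qualifiers shared by more than one fact hint at common dimensions
# for the eventual dimensional model.
common = set.intersection(*MATRIX.values())
```

Here `common` comes out to the "size" and "week" qualifiers, flagging them as dimensions shared across facts.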
Documentation for the facts and their qualifiers may already exist, but restructuring those requirements within a fact/qualifier matrix provides some benefits:
- It provides a logical structure for capturing business user requirements.
- It provides a standardized way to present the requirements back to the business users for validation.
- It enables the analyst to review the collected business requirements and determine if there are any overlapping facts or common dimensional qualifiers.
- It guides the analysts in identifying the data sources that can be used to populate the metrics data model.
The Metrics Model
The fact/qualifier matrix provides guidance for developing the metrics model. For the most part, each metric is either a base metric, consisting of a single quantification of a requested qualified
fact, or a compound metric whose qualified score is computed as a function of the scores associated with other metrics, along with other relevant metadata such as units of measure, classification, and evaluation
criteria (or thresholds) for reporting satisfaction of business user expectations.
To continue the earlier example, the business user might be interested in more than just the number of widgets shipped: each warehouse might be required to ship a minimum of
20 units of each size of widget every week, with an expectation of shipping at least 30 units of each size each week. In this case, the metric is the count (broken out along the defined
dimensions), and the evaluation criteria might be specified in terms of a three-tiered scoring:
- Green – if the number is greater than or equal to 30
- Yellow – if the number is greater than or equal to 20 and less than 30
- Red – if the number is less than 20
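The three-tiered evaluation can be expressed as a small function. This is an illustrative sketch; the thresholds (20 and 30) come directly from the example above:

```python
def score(count: int, target: int = 30, minimum: int = 20) -> str:
    """Three-tiered evaluation of a weekly shipment count.

    Thresholds follow the example: green at or above the expectation
    (30 units), yellow at or above the minimum (20), red below that.
    """
    if count >= target:
        return "green"
    if count >= minimum:
        return "yellow"
    return "red"
```

Keeping the thresholds as parameters rather than hard-coded values lets the same evaluation logic serve metrics with different business expectations.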
Metrics defined at a higher aggregate level compose the results of the incorporated metric scores. In our example, the cumulative score across all the warehouses for each widget size shipped is
the sum of the number of widget units shipped for each size, divided by the number of warehouses. In other words, the cumulative average reflects performance at the higher level, and we can apply the
same scoring thresholds as at the lower level.
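The aggregation can be sketched as follows; the warehouse names and shipment counts are hypothetical, chosen only to illustrate the computation:

```python
# shipments[warehouse][size] = units of each widget size shipped this week
# (illustrative numbers, not real data)
shipments = {
    "Austin": {"small": 35, "large": 18},
    "Dayton": {"small": 25, "large": 30},
    "Fresno": {"small": 30, "large": 24},
}

def cumulative_average(shipments: dict, size: str) -> float:
    """Sum the units shipped for one size across all warehouses and
    divide by the number of warehouses, as described above."""
    return sum(by_size[size] for by_size in shipments.values()) / len(shipments)
```

With these numbers, the small-widget average is 30.0 (green under the earlier thresholds) while the large-widget average is 24.0 (yellow), even though individual warehouses score red or green.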
Since the metrics and scores associated with compound metrics are to be reported to the end-client, the designer must enable the definition of “buckets,” or categories, for aligning
metric scores into the right collection. For example, one might group the logistics performance metrics into one category and the sales metrics into another. This effectively defines a taxonomy
structure for metrics. However, be aware that some metrics may support more than one hierarchy, so the model’s means of categorization must not restrict a metric
to a single logical bucket.
The metrics model must be able to support the capture of both the base-level metrics (pure counts) and the more complex compound metrics composed of lower-level metrics. This suggests a model that
supports a hierarchy that captures the parent-child relationship as well as the categories associated with the eventual reporting scheme. Doing so enables the end-client to review the top-level
report, but also enables the drill-through capability that allows the analyst to seek out root causes exposed through the reporting. As long as the model captures the score for each metric as well
as the evaluation criteria within each hierarchy, the analyst can drill down through the hierarchy levels to determine where the specific measures indicate an opportunity for operational
improvement or for instituting alternate policies to improve performance.
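A minimal sketch of such a hierarchy follows; the metric names and scores are hypothetical. Parent metrics hold child metrics, and the drill-down walks the tree to find the base metrics whose evaluation indicates a problem:

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    score: str                      # "green" / "yellow" / "red"
    children: list = field(default_factory=list)

def drill_down(metric: Metric, status: str = "red") -> list:
    """Walk the parent-child hierarchy and collect the base metrics whose
    evaluation matches the given status -- the drill-through an analyst
    uses to locate root causes behind a top-level score."""
    if not metric.children:
        return [metric.name] if metric.score == status else []
    found = []
    for child in metric.children:
        found.extend(drill_down(child, status))
    return found

# Illustrative hierarchy: a top-level shipping metric composed per warehouse.
shipping = Metric("widget shipping", "yellow", [
    Metric("Austin", "yellow", [
        Metric("Austin small widgets", "red"),
        Metric("Austin large widgets", "green"),
    ]),
    Metric("Dayton", "green", [
        Metric("Dayton small widgets", "green"),
    ]),
])
```

Here `drill_down(shipping)` traces the top-level yellow score down to the one red base metric, pointing the analyst at the Austin small-widget shipments.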
We have touched upon some concepts that are relevant to developing a model for performance metrics, suggesting certain characteristics that must be incorporated, including:
- Metric concept name, unit of measure, quantity/score
- Qualifiers for each metric
- Evaluation criteria (for assessing and communicating conformance to business user expectations)
- Hierarchical grouping
- Multiple taxonomies
- Categorization of collected metrics
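One way to capture these characteristics in a single record is sketched below; the field names and sample values are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MetricRecord:
    """Illustrative record for one metric observation."""
    name: str                     # metric concept name
    unit: str                     # unit of measure
    score: float                  # quantity/score
    qualifiers: dict              # qualifier name -> value for this observation
    thresholds: tuple             # (minimum, target) evaluation criteria
    parent: Optional[str] = None  # hierarchical grouping
    categories: set = field(default_factory=set)  # multiple taxonomies

# A single observation from the widget example (values are made up).
widgets = MetricRecord(
    name="widgets shipped",
    unit="units",
    score=27.0,
    qualifiers={"size": "small", "week": "week 1", "warehouse": "Austin"},
    thresholds=(20, 30),
    parent="logistics performance",
    categories={"logistics", "weekly operations"},
)
```

Note that `categories` is a set rather than a single value, reflecting the earlier point that a metric must not be restricted to a single logical bucket.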
In the next article, we will explore the metrics model in greater detail.