If you have had the opportunity to participate in the selection and acquisition of a software product, any product, you probably experienced, in addition to lots of work and documentation, significant distress in attempting to bring about consensus. If the product was intended for use across your organization, the frustration was probably deeper. Even when all the participants agree at the outset to pursue the best solution for the organization as a whole, and in many cases are willing and ready to do so, disparate opinions and predispositions quite often prove insurmountable.
The “Classic” Selection Method

Usually, the process works well during requirements gathering in preparation for the issuance of a Request for Information (RFI) or a Request for Proposal (RFP). It may continue to be smooth until the vendors’ responses arrive. Then, as the assessment process begins, all too often progress stalls or, even worse, dissolves into total chaos!
What causes an otherwise reasonable group of professionals to become unable to reach common ground? Why are they unable to quickly come together to assess a product or capability? For many it seems like an unsolvable puzzle. However, those who survive the experience find that the most common reason for this breakdown is that the method used to assess products is not effective.
Let’s review the typical approach.
After significant and diligent effort in arriving at the right set of vendors and questions for the selection of a new software product, the team begins receiving responses from the solicited vendors. The team begins to rate each capability of a product to perform a particular function using the “10-point scale.” This is a 1 to 10 scale where each participant assesses each capability by assigning a “rating.” This approach is widely used, and many think it is “simple and to the point.” But is it?
As stated above, the process works well until the group comes together to give the vendor a “common and final” rating. The rating of some capabilities is accomplished quickly; however, eventually the process slows down or comes to a halt. A typical conversation goes like this:
— “Well, I feel it is closer to an 8 than a 7.”
— “No, it is a 7 not an 8; vendor XYZ is better at this and we gave them an 8.”
The problem is that there is no operational definition: what does “7” mean? What does “8” mean? Pick any number; what does it mean? How do you know it is one or the other? No one knows. These discussions can go on for hours. Some teams resort to “averaging” the ratings given to a product category in an attempt to overcome the standoff; but what does the average mean? Sometimes they cannot agree on a final rating. In extreme cases, some participants break away from the team and decide to report separate, and often conflicting, recommendations to their management.
Why is this happening?
One of the reasons for this process failure is the lack of operational definitions. Dr. Deming indicated that “the purpose of an operational definition is to provide the worker with a clear understanding of what kind of work is acceptable and what kind of work is unacceptable thus enabling an operation to produce consistent results.”1 The commonly used “1 to 10” ratings lack operational definition; they are subjective and subject to interpretation based on the worker’s personal experiences and preferences.
Effective Methods for Software Selection

In a recent software selection effort, we used a method based on operational definitions that proved to be exceedingly practical and effective in bringing about consensus and resulted in a selection that all participants agreed was the best possible outcome. A true win-win!
How did it work?
Let me explain using an example. In this case, the organization was looking for software to provide data profiling and data quality capabilities. We began by applying Stephen Covey’s second habit of highly effective people, “begin with the end in mind,”2 and defined what the final report would look like. The team agreed that the final report was to include the selected product and a description of the selection process. The team wanted the report to be self-sufficient in explaining not only the chosen product but also how it was selected.
Preparing and Submitting an RFP

The process starts with the preparation of the Request for Proposal (RFP) document. In this phase, the project team identifies and documents the set of functional requirements (expected capabilities) and technical considerations (expected technical fit) to be addressed by the chosen product and prepares a draft of the RFP document. An extended team reviews the draft for completeness and correctness. The extended team is a group of subject matter experts from all interested business and information technology areas as well as a representative from the procurement department.
In our example, the RFP included the following:
- Functional Capabilities grouped into mandatory and essential sub-groups. Mandatory meant that a product unable to provide these capabilities would not be considered for selection
- Technical Considerations grouped into mandatory, essential, and descriptive sub-groups. Descriptive items are intended to provide additional background
- Pricing, including license fees, maintenance fees, support fees, training fees and consulting fees
- Vendor & Product Strategy describing the vendor, product, and consulting profiles, past performance, and client references
While the RFP is prepared, the team defines their vendor selection approach and identifies the candidate vendors to participate in the RFP. Vendors are identified using various methods including:
- Vendors with which the organization already has a relationship; it is easier to incorporate new software if it is designed or supported to work with products the organization already has in place,
- Vendors identified by research organizations, such as Gartner or Forrester, listed as leaders in their market reports, and
- Vendors recommended by employees or contractors based on their prior experience.
Once all candidates are identified, the team immediately eliminates those not aligned with the organization’s strategic direction. The final report lists these vendors and the rationale for their exclusion. In our example, one strategic position was “any vendor or product that cannot work with our installed ETL will not be considered for acquisition.” Vendors perceived as unable to meet this expectation were excluded from the RFP submittal.
Out of the 10 vendors originally identified as candidates, three were not aligned with the organization’s strategy; the RFP was submitted to the remaining seven vendors.
Preparing for the Assessment

While the vendors prepare their responses, the project team sets the stage for the assessment. The assessors have two major objectives: normalize and then categorize the capabilities to give each a weight factor that properly represents its expected impact; and develop clearly defined assessment ratings to enable consistent rating of each product capability. They:
- Normalize the capabilities in the RFP. On the one hand, the RFP must be comprehensive from the participants’ point of view; on the other, the product must be assessed based on business impact. Therefore, the questions in the RFP must be normalized to enable a balanced assessment; not all questions are rated equally. The assessors normalize the capabilities to ensure that the ratings given represent the organization’s needs while reducing the work associated with the assessment process, as follows:
- Collapse related capabilities into single, equivalent entries as follows:
- Threshold groups (“pass or fail;” more on thresholds later) are capabilities better assessed as a single entry; in our example, these included vendor strategy (19 to one), product strategy (14 to one), consulting (6 to one) and training and support (9 to one)
- Standard capabilities that are well supported in the industry; in our example, systems security (15 to one) and product documentation (3 to one)
- Emerging capabilities that are highly unlikely to be well supported in the industry; in our example, meta-data management (18 to seven) and data lineage (5 to one)
- Expand capabilities for critical functions; these are functions that have significant impact on the organization and require more attention and therefore more weight in their factors. In our example, the mission-critical information profiling and quality control capability to perform “Oil & Gas Data Corrections” is expanded into its sub-sections to ensure that its weight factors for “Geographical Transformations,” “Geological Coding Consistency,” “Identification of Numeric Values Exceeding Specified Limits” and “Verification of Control Totals” direct increased attention to these critical functions
- Ignore capabilities with little relevance to avoid unnecessary work. In our example, the technical description questions used to better understand the candidate products did not require independent rating.
- Characterize the normalized capabilities. Each assessment capability is given a weight factor based on its impact to the organization. The characterization of a capability is based on the following two dimensions:
- Significant Value: the capability is of significant business value to the organization (stated as “yes” or “no”). A capability with a “yes” in this characterization requires a brief rationale agreed to by all participants. It is important to note that all capabilities have value, but only a handful, about 20%, have significantly more value
- Current Practice: indicates that there is a current practice that will be enhanced by this capability; therefore, the expectation is that this capability will improve existing practices. Any capability with this characterization requires a brief explanation of the current practice. About 20 to 50% of the capabilities can be characterized as current practices.
- Categorize the characterized capabilities as follows (the icons shown in Figure 3 are provided as mnemonics for each category to enable the reader to quickly recognize the capability’s value to the organization):
- Critical: the capability is of Significant Value and a Current Practice. In our example, all the mandatory capabilities are critical; in total, 20 out of 110 capabilities (18%) fall in this category
- Important: the capability is of Significant Value but not a Current Practice. In our example 33 capabilities (30%) fall in this category
- Vital: of Nominal Value and a Current Practice. In our example 3 capabilities (3%) fall in this category
- Useful: of Nominal Value and not a Current Practice. In our example 54 capabilities (49%) fall in this category.
Figure 1: Normalized Requirements and Considerations
The result was that the 219 original RFP questions on capabilities were condensed to 110 assessment capabilities (see Figure 1). This reduction in the number of capabilities to assess was achieved without loss of relevance.
Figure 2: Capability Characterization
Figure 3: Capability Categories
Once a capability is categorized, it is shown in all documentation with its corresponding icon to enable the reader to associate it with its weight factor (see Figure 4).
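For readers who want to see the categorization logic in one place, here is a minimal Python sketch of how the two characterization dimensions map to the four categories. The weight factors shown are illustrative assumptions only; the factors actually used are those defined in Figure 4.

```python
# Minimal sketch of the capability categorization. The weight factors are
# illustrative assumptions; the actual factors appear in Figure 4.
CATEGORY_WEIGHTS = {"Critical": 4, "Important": 3, "Vital": 2, "Useful": 1}

def categorize(significant_value: bool, current_practice: bool) -> str:
    """Map the two characterization dimensions to a capability category."""
    if significant_value and current_practice:
        return "Critical"    # significant value and enhances a current practice
    if significant_value:
        return "Important"   # significant value but not a current practice
    if current_practice:
        return "Vital"       # nominal value but enhances a current practice
    return "Useful"          # nominal value and not a current practice

# Example: a capability of significant value that supports a current practice
category = categorize(significant_value=True, current_practice=True)
print(category, CATEGORY_WEIGHTS[category])  # Critical 4
```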
- Develop an Assessment Scale that enables the assessor to assign a rating to a product capability with a high likelihood that the same rating will be assigned to the same product capability by any other assessor; therefore, the scale must have clear operational definitions. The scale was defined as follows:
- Excellent: the product has this capability, meets the basic expectation, offers additional features that make it more effective or efficient than expected, and does it better than its competitors. Can be described as “far exceeding expectations” (10% or less of the product capabilities may fall in this group)
- Very Good: the product has this capability, meets the basic expectation and offers additional features that make it more effective or efficient than expected; other products offer similar capabilities. Can be described as “better than expected” (10% or less of the product capabilities may fall in this group)
- Good: the product has this capability and meets expectations. Can be described as “what we want” or “what we seek.” (80% of the product capabilities may fall in this group)
- Fair: the product has this capability but does not fully meet the expectations or, to do so, adds burden (“can do it but …”). Any add-ons, third-party complements or client customizations were rated as “fair.” Can be described as “available but lacking” (10% or less of the product capabilities may fall in this group)
- Poor: the product does not have this capability or its capability is largely insufficient to meet the expectations. Can be described as “missing or marginal” (10% or less of the product capabilities may fall in this group).
Each product capability rating has a weight factor and an icon. The icons shown in Figure 5 are provided as mnemonics for each rating to enable the reader to quickly recognize the ability of the vendor or product to meet the organization’s expectation for a capability, a group of capabilities or the overall solution.
Figure 4: Categorized Capabilities
The categorization of the capabilities is done before the product assessments and independent of the vendors’ responses. The categorization is revised only when new evidence indicates that the category does not represent the organization’s needs correctly (again, independent of the vendors’ ability to meet the capability).
Figure 5: Product Capability Assessment Ratings
- Rate each product capability and develop a Final Rating for the product:
- The score for each product capability is the capability weight factor (a property of the capability, independent of the product) multiplied by the product assessment rating factor (unique to each product capability). There are 20 possible combinations, yielding 10 different scores ranging from 0 to 16 (see Table 1); a brief sketch of this calculation follows Table 2.
- To assess each product capability, the assessors ask the four questions in the decision tree (see Table 2); for any product capability not rated as “good,” the assessor must provide a rationale for the rating. These notes become the descriptions for each capability assessment in the final report.
- The final product rating has two parts: a graphical representation of the average of all the individual ratings for the product, rounded down to the nearest rating (for example, averages of 3.4, 3.5, and 3.7 are all shown as “very good”); and a composite numeric rating resulting from the sum of the numeric ratings on all the capabilities for the product.
Table 1: Product Capability Final Score (Category & Rating)
Table 2: Decision Tree for Product Capability Assessment
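To make the arithmetic concrete, the following minimal Python sketch assumes category weight factors of 4, 3, 2, and 1 and rating factors of 4 (“excellent”) down to 0 (“poor”); the factors actually used appear in Table 1 and Figure 5. With these assumed factors, the 4 × 5 = 20 combinations produce exactly 10 distinct scores ranging from 0 to 16.

```python
# Minimal sketch of the capability scoring and final product rating. The
# weight and rating factors below are illustrative assumptions; the actual
# factors appear in Table 1 and Figure 5.
from math import floor

CATEGORY_WEIGHTS = {"Critical": 4, "Important": 3, "Vital": 2, "Useful": 1}
RATING_FACTORS = {"Excellent": 4, "Very Good": 3, "Good": 2, "Fair": 1, "Poor": 0}

def capability_score(category: str, rating: str) -> int:
    """Score for one product capability: category weight times rating factor."""
    return CATEGORY_WEIGHTS[category] * RATING_FACTORS[rating]

def composite_rating(assessments: list[tuple[str, str]]) -> int:
    """Composite numeric rating: sum of the individual capability scores."""
    return sum(capability_score(category, rating) for category, rating in assessments)

def graphical_rating(assessments: list[tuple[str, str]]) -> str:
    """Average of the individual rating factors, rounded down to a rating."""
    average = sum(RATING_FACTORS[rating] for _, rating in assessments) / len(assessments)
    by_factor = {factor: name for name, factor in RATING_FACTORS.items()}
    return by_factor[floor(average)]  # e.g., an average of 3.4 maps to "Very Good"

# With the assumed factors, the 20 category/rating combinations yield
# 10 distinct scores between 0 and 16.
distinct_scores = sorted({capability_score(c, r) for c in CATEGORY_WEIGHTS for r in RATING_FACTORS})
print(distinct_scores)  # [0, 1, 2, 3, 4, 6, 8, 9, 12, 16]

# Example with three capability assessments (category, rating):
sample = [("Critical", "Very Good"), ("Important", "Good"), ("Useful", "Fair")]
print(composite_rating(sample), graphical_rating(sample))  # 19 Good
```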
Conducting the Assessment

When the responses to the RFP arrived, the team was ready to begin the assessment. The assessment is performed in three steps: the first two steps are “pass or fail;” the third is a determination of “best fit,” based on the vendors’ responses, for the final selection.
Pass or Fail: Strategic Alignment & Minimum Required

In the “pass or fail” steps, vendors advance to the next step only if they pass the basic criteria. Otherwise, they are dropped from the assessment at this point; no further analysis is conducted on their proposal.
First, the assessors rate the responses to the “vendor and product strategy” to determine alignment with the organization’s direction. As mentioned earlier, if the proposed solution is considered as unable to work with the installed ETL, the vendor is dropped and no further analysis is performed. One vendor was classified in this group and was dropped; six vendors remained.
Second, the assessors review the mandatory capabilities (see Table 3 for an example).
A vendor’s proposal must “pass” all mandatory capabilities (that is, receive a minimum rating of “good” on each) to move to the next step. One vendor’s proposal did not receive the minimum “good” rating on all capabilities and was dropped. With little effort, the team reduced the number of vendors to five.
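As a hedged illustration of the two pass-or-fail gates, the sketch below assumes each proposal carries a strategic-alignment flag and a rating for every mandatory capability; the field and function names are hypothetical, not taken from the actual assessment workbook.

```python
# Minimal sketch of the two pass-or-fail gates; names are hypothetical.
PASSING_RATINGS = {"Excellent", "Very Good", "Good"}  # "good" or better

def passes_gates(strategically_aligned: bool, mandatory_ratings: dict[str, str]) -> bool:
    """A proposal advances only if it is aligned with the organization's
    strategic direction and every mandatory capability is rated at least 'Good'."""
    if not strategically_aligned:
        return False  # e.g., the product cannot work with the installed ETL
    return all(rating in PASSING_RATINGS for rating in mandatory_ratings.values())

# Example: one mandatory capability rated "Fair" drops the proposal.
print(passes_gates(True, {"Data profiling": "Good", "Data quality rules": "Fair"}))  # False
print(passes_gates(True, {"Data profiling": "Good", "Data quality rules": "Good"}))  # True
```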
Conducting the Best Fit Assessment

During this part of the assessment, the assessment team groups and then compares the vendors’ responses based on their functional and technical capabilities. The groups enable the assessors to compare the products on key dimensions such as data profiling, defect monitoring, data lineage, and so on. Each assessor rates a proposal and then reviews the result with the core team and then with the extended team (see Table 4 for an example).
The team uses the final ratings to identify the “best fit” proposal based on:
- The highest product score
- The best price (as offered by the vendor; subject to negotiation before final acquisition)
The team summarizes the ratings and rationales from the participants’ observations in a final report and selects the top performer. The report explains each of the steps and results as outlined in this article. It consists of several pages of well-organized, clearly stated, and easy-to-find facts about the capabilities, the vendors’ solutions, and the recommended finalist.
After the Assessment

After the final demonstration, the organization is able to pursue the acquisition and implementation of the recommended solution. The contract negotiation, acquisition, and implementation processes are conducted expeditiously and with minimal effort because everyone involved understands the selection rationale, either through direct participation or via the final report.
Final Comment

The team was able to conduct the selection and implementation of this critical but contentious software product in a very efficient and effective manner thanks to the use of a well-defined process, operational definitions, and rating methods. The same process, operational definitions, and rating methods can be applied to a variety of products and organizations once the methods and factors are adjusted to better align with the expected outcomes.
What do you think? Let me know your thoughts at andres.perez@irm-consulting.com.
Endnotes:
- Dr. W. E. Deming, Out of the Crisis; Massachusetts Institute of Technology Center for Advanced Educational Services, 1982.
- Stephen R. Covey, The 7 Habits of Highly Effective People; Free Press, 1989.