As data warehousing reaches maturity, companies are increasingly focusing on improving access to the data warehouse. While there are generally accepted best
practices for designing data warehouses, there are few such standards for selecting and managing business intelligence (BI) tools. When selecting BI tools, users seldom know what they want until
they see it; and IT seldom understands the true cost, design, and maintenance implications of a BI tool until they’ve implemented it.
Complicating the scene, many companies already own multiple BI tools thanks to departmental initiatives, mergers, or acquisitions. Few executives want to terminate successful BI implementations; however, many desire the benefits of standardizing on a single BI platform: lower cost of ownership and easier information sharing. In selecting BI tools, there are a number of things that
companies can do to ensure they select the tool that best fits their tactical and strategic needs. Keep in mind: the goal is not to find the “best tool” but to find the “best
fit.” Every BI tool has its sweet spot. If you deploy the tool outside its sweet spot, you will see its weaknesses. Ultimately, the measure of success is not which tool you select, but
rather, how you use it!
About the Author
Cindi Howson is president of ASK and has 10 years of business intelligence and data warehousing experience. She has helped Fortune 500, medium-sized and small businesses select and
implement a number of leading BI tools. She teaches Evaluating BI Toolsets and Marketing the BI Application for TDWI, writes for Intelligent Enterprise, and is the author of BusinessObjects: The
Complete Reference and biscorecard.com. She has an MBA from Rice University. She can be reached at firstname.lastname@example.org.
1. Failing to Involve Users in the BI Tool Selection
As part of a data warehouse project, you will have multiple tool selections: ETL, data cleansing, relational database, and business intelligence (BI) tools. For the most part, users care little
about which tools IT selects on the back end. Users, however, own the BI tool. Failing to involve them in the selection almost guarantees mediocre adoption. Not involving them is the equivalent of
letting the Department of Transportation select which cars we drive—after all, any old car will get us from point A to point B, won’t it?
With different stakeholders involved in BI tool selection, it’s easy to see why it can be particularly painful.
- IT wants the easiest tool to deploy and maintain, as well as one that fits within existing architecture standards.
- Purchasing may want to extend a relationship with an existing software supplier, (e.g., an ERP or RDBMS vendor) who also provides BI tools, rather than work with a new BI vendor.
- End users want sexy, intuitive tools that empower them to access their own reports or to do their own analysis.
Ideally, a BI selection committee is comprised of a team that includes all of the above stakeholders. This enables the various constituents to communicate and prioritize their requirements. For IT,
the challenge is to translate technical requirements into user cost/benefits. For example, users will not care if the architecture is ROLAP or MOLAP, but they care if queries are slow or if data
can only be updated on a weekly basis. The types of users to include in the selection committee are both current users of legacy reporting systems as well as secondary information consumers. Secondary
consumers are users who may not log in to a legacy reporting system but who still need the information for decision making.
When users are not adequately involved in the tool selection, a number of things can happen depending on who has budgetary control and what capabilities are missing. If you are implementing a new
centralized data warehouse with a standardized BI toolset (replacing legacy reporting systems or independent data marts), users may continue to use the alternative tools for as long as possible. Thus IT is left supporting two
data sources and users make decisions based on multiple versions of the truth.
If the BI tool you selected is inflexible, users will dump data into Excel or Access and create mini data marts. This also leads to multiple versions of the truth and increased costs. If the BI
tool is difficult to use, decision makers may rely on gut feel or specialized analysts to retrieve the information. For individual business units with their own IT budget, they may select and
implement a departmental BI tool. Good luck trying to shut down a departmental solution!

User requirements should include specific features (e.g., dual Y-axis charts) as well as usage scenarios (interactivity,
offline usage). For certain usage scenarios, the requirements may differ. For example, users may request both offline access and Web-based access. It’s important to understand what
functionality they want in the two environments. Is full drill down required via the Web as well as offline? Or is it acceptable if offline access has some restrictions?
2. Relying on Informal Selection Approaches
- “I worked with the tool in a previous company so I thought it would be good enough for my new company.”
- “Management saw some demos and thought the BI vendor would be a good company to partner with.”
- “It came for free.”
Often I hear these comments when companies are forced to do a re-selection—if that’s any indication as to whether these approaches work. For a limited few, they might. However, given
that BI tool selections can be so contentious and that thousands or millions of dollars are at stake, I recommend following a formal selection process comprised of the following steps:
- Form the selection committee comprised of both users and IT
- Define the target user groups and usage scenarios
- Refine the business information requirements
- Research vendor capabilities and issues
- Develop ranked selection criteria, revising criteria throughout the process
- Invite vendors for onsite, scripted demos
- Match vendor capabilities with internal requirements
- Select a short-list of vendors to deliver proof of concept
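The "ranked selection criteria" step above can be made concrete with a simple weighted scoring model: each criterion gets a weight, each vendor a 1-5 score, and the weighted totals make the committee's trade-offs explicit. The criteria, weights, and vendor scores below are invented purely for illustration.

```python
# Illustrative weighted scoring model for a BI tool selection.
CRITERIA_WEIGHTS = {  # weights must sum to 1.0
    "ease of use": 0.30,
    "fits architecture standards": 0.20,
    "offline access": 0.15,
    "licensing cost": 0.20,
    "vendor viability": 0.15,
}

def weighted_score(scores):
    """Combine per-criterion scores (1-5) into one weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

vendor_a = {"ease of use": 4, "fits architecture standards": 3,
            "offline access": 5, "licensing cost": 2, "vendor viability": 4}
vendor_b = {"ease of use": 5, "fits architecture standards": 4,
            "offline access": 2, "licensing cost": 4, "vendor viability": 3}

print(f"Vendor A: {weighted_score(vendor_a):.2f}")
print(f"Vendor B: {weighted_score(vendor_b):.2f}")
```

Because the criteria are revised throughout the process, keeping the weights in one visible place makes it easy to re-rank vendors as the committee's priorities shift.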
A formal process better ensures that various requirements are considered and prioritized. This minimizes the risk of implementation surprises; users know what functionality they will and
won’t be getting, and IT understands the resources
required to support the new BI tool.
Even when you follow a process, you will still have some dissatisfied stakeholders. However, you will be better able to communicate the committee’s decisions with documented costs,
prioritized requirements, and acceptable trade-offs.
3. Analysis Paralysis
It’s important to adhere to a tool selection process, but that process must be short! If you take six months to a year to do a BI selection, the vendor capabilities will have changed, as will
members of the selection committee. You don’t need a new player joining the team at the final hour only to question all the work you’ve done to date. So it’s important to agree to a
strict timeframe at the start of the BI selection. If you agree to reach a decision within two months, you will have to limit the number of tools you evaluate in depth. Follow the project
manager’s mantra of time, scope, and resources. If you increase the number of tools you want to evaluate, you will have to increase the evaluation time.
It’s very easy to get drawn into playing with all the cool technology. However, the ultimate business goal of the BI tool and of the data warehouse is to deliver information and business
value. The faster you fulfill the business goals, the more momentum and success the BI applications will have. The longer you take to do that, the more opportunities you will miss.
Remember: the BI tool is not nearly as important as what you do with the BI tool.
4. Failing to Differentiate Users or BI Tools
Users have different reporting and analysis needs and information requirements. BI tools have varying capabilities. A good BI strategy will match the user types with the tool capabilities.
Increasingly, you can buy a full tool suite from one vendor or you may purchase particular categories of tools from multiple vendors to achieve best-of-breed functionality. Can you buy one tool for
all users? Well, think about your own car buying habits: in theory you can buy one car for a family of four. But imagine the concessions each stakeholder will have to make. My husband wants the gas
guzzling, yet trendy Hummer, I want the Mercedes sports coupe, and the kids want a dune buggy. Instead, we get a minivan—a compromise of everyone’s needs and costs. So in selecting BI
tools, you may group segments of users to share one tool that includes compromises.
Two general categories of users are report authors and report consumers. Report authors may be IT personnel or power users or both. These two different categories of report authors will have
different sets of requirements. Report consumers can range from executives who only want to receive a report when there is a problem, to knowledge workers who want to analyze data to uncover
problems and opportunities. Some companies will have four sets of BI tools for these four user segments (IT author, power user author, executive consumers, and knowledge workers). Others will have
one BI tool that is deployed in different ways. To identify your various user types, evaluate them against several criteria such as:
- Job function or role
- Job level
- Internal user versus external customer or supplier
- Level of computer literacy
- Type of data they access by subject area, level of aggregation, and variability
- Frequency of access
- What they want to do with the data once they access it
- Devices and locations for accessing the data (Web, notebook, PDA, desktop)
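A hypothetical sketch of how a few of the profiling criteria above might drive segmentation into the four segments discussed earlier (IT report developers, power user authors, executive consumers, and knowledge workers). Real segmentation would weigh many more attributes; the rules here are invented for illustration.

```python
# Hypothetical rules mapping user-profile answers to generic BI segments.
def classify_user(authors_reports, is_it_staff, analyzes_data):
    """Map a user profile to one of four generic BI user segments."""
    if authors_reports:
        # Report authors split by whether they sit in IT or the business.
        return "IT report developer" if is_it_staff else "power user author"
    # Consumers split by whether they actively analyze data or just monitor it.
    return "knowledge worker" if analyzes_data else "executive consumer"

print(classify_user(authors_reports=False, is_it_staff=False,
                    analyzes_data=True))
```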
Next, understand the various categories of BI tools. TDWI’s course Evaluating BI Toolsets is an excellent resource to help with this. The main categories include:
- Production reporting
- Management reporting
- Online Analytical Processing (OLAP)
- Analytic Applications
Some BI tools can be deployed in ways that allow them to cross a given category. For example, some management reporting tools allow you to build basic dashboards. So you need to consider what
additional functionality you get if you select a specialized dashboard application. Does it fulfill critical user requirements for a particular user segment, and at what cost?
Finally, match the user segment with the tool you’ve selected for a given BI tool category. Look for areas in which you can minimize the number of tools any one user will have to learn as
well as tools that can be deployed in different ways. The following table provides generic user segments with possible BI tool categories.
| User Segment | BI Tool |
| --- | --- |
| Executive consumers | Scorecards; scheduled management reports that generate alerts |
| IT report developer | Production reporting authoring tool |
| Report consumers | Interactive management reports |
5. Failing to Read the Fine Print
BI software license agreements require an advanced degree—although I can’t figure out in what! Fail to negotiate the license carefully, and you will buy more product than you need, or
conversely, significantly underestimate your license costs. No two vendors license their products the same way, making it difficult to get an apples-to-apples comparison of licensing costs.
Often the purchasing agreement is developed among multiple people with multiple understandings of what you want and what you will get. As the technical customer, you may know how you intend to
deploy the BI tool. However, the purchasing department executing the contract may not be as educated in the finer points of which modules you need, what platforms you will run on, and which
databases you will run against.
The vendor sales person knows how to sell the software, but may not understand technical limitations that affect licensing. A technical sales consultant will, but these conversations are not
automatic. (I agree that it’s absurd for a BI sales rep not to know their own product line. But to be fair, look at car salesmen—do they know how the car works?) Further, the BI
software industry has high sales force turnover, with some vendors being worse than others. The best situation is when the vendor provides you with a veteran account manager who understands the
product line in depth and understands how you plan to use the BI tools. This ensures you only buy what you need, as well as improves customer loyalty.
I have seen companies be very diligent in the RFP process. The number of users and planned architecture are well documented and communicated. However, these same requirements are not written into
the license agreement, increasing the chances that, as deployment ramps up, the company will be non-compliant.
Pay careful attention to user-based versus server-based licensing. When it’s user based, does the license restrict you to named or concurrent users? Does the software automatically prevent
you from exceeding the licensing count? With Web-based deployments, companies are increasingly purchasing server-based licenses. For these types of licenses, pay careful attention to which
platforms you plan to deploy (initially as well as in the future), which databases you will access, CPU limitations, and whether there is a fee for having test, development, and production
environments. Finally, many companies think of a “user” as someone who logs into the system.
However, if one of your requirements is the ability to distribute standard reports in PDF or Excel format via e-mail, then chances are the vendor also considers these e-mail recipients users.
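To make the trade-offs concrete, here is a hypothetical cost comparison of the same deployment under named-user, concurrent-user, and per-CPU licensing. All prices are invented for illustration; real vendor quotes vary widely.

```python
# Hypothetical per-unit prices -- real vendor pricing varies widely.
NAMED_USER_PRICE = 500     # per named user
CONCURRENT_PRICE = 2_000   # per concurrent seat
CPU_PRICE = 25_000         # per server CPU

def license_cost(total_users, peak_concurrent, cpus):
    """Cost of one deployment under three licensing models."""
    return {
        "named": total_users * NAMED_USER_PRICE,
        "concurrent": peak_concurrent * CONCURRENT_PRICE,
        "per_cpu": cpus * CPU_PRICE,
    }

# 1,000 named users, roughly 100 logged in at peak, on a 4-CPU server.
# Remember: e-mail recipients of scheduled reports may count as users too.
costs = license_cost(total_users=1_000, peak_concurrent=100, cpus=4)
print(costs)
```

Running the numbers this way for your planned deployment, initial and future, is exactly the analysis that belongs in the license agreement itself.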
Maintenance fees are another aspect commonly overlooked. These fees are usually a percentage of the initial agreement, but whether it’s a percentage of the discount price or the list price is
variable. Maintenance usually includes software upgrades. However, some vendors consider new functionality as new software (not an upgrade) that requires additional fees and negotiations.
Understand this in advance and word your contract accordingly.
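The list-price versus discount-price distinction is worth a quick calculation. With the illustrative figures below (a 40% negotiated discount and a 20% annual maintenance rate, both invented), the same maintenance percentage costs tens of thousands more per year when applied to list price.

```python
# Maintenance-fee arithmetic under the two interpretations.
list_price = 1_000_000
net_price = list_price * (1 - 0.40)   # after a 40% negotiated discount
rate = 0.20                           # 20% annual maintenance rate

fee_on_list = rate * list_price       # fee computed on list price
fee_on_net = rate * net_price         # fee computed on discounted price
print(fee_on_list, fee_on_net, fee_on_list - fee_on_net)
```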
6. Overlooking Departmental Initiatives
Many companies have multiple BI tools, implemented by individual departments, inherited through company acquisition, or bundled with other software packages. When it comes to defining requirements, cost is one of the largest considerations. If your company has already made heavy investments in departmental licenses, training, and consulting, these costs should not be overlooked. On the other hand, if
you’ve purchased a BI tool that has largely been shelfware, you have to consider those licenses as sunk costs. The software may be “free,” but implementation costs, training, and
hardware are not. If other BI tools have been deployed successfully, it’s helpful to understand the factors that contributed to the success. For BI selection teams focusing on an
enterprisewide solution, this can be a somewhat insulting experience that requires a high dose of diplomacy. Someone did it before you? Someone did it better than you? If your goal is to
standardize and reduce the number of BI tools, then individual departments will naturally feel threatened by your assessment. Their implementation is a success: their tool should be selected as a
standard. Perhaps it should be. In these cases, engaging the assistance of an objective consultant can minimize “turf” wars and improve communication. Finding that objective consultant
is not as easy since many consulting firms specialize in particular BI products.
Defining and communicating plans for existing departmental initiatives is essential for any dialogue with existing BI teams. When the BI selection is complete, what will happen to existing
initiatives? Will they:
- Freeze the current implementation?
- Shut down the departmental initiative?
- Convert the departmental BI solution to an enterprise standard?
Clearly, the first two alternatives are the most intimidating for departmental implementations; the latter is what they may fight for. If you can involve any current BI experts in the selection
committee, it will help with buy-in as well as with requirements definition.
At the same time, you want to learn from the mistakes of earlier BI implementations. Did they learn the hard way that you can’t build big cubes with DOLAP tools? Did they find that
materialized views sped relational queries but the company has a limited number of DBAs who know how to implement them? Just as you want to identify the success factors for the BI tool, you want to
understand the obstacles that influence your selection as well as your deployment plans.
7. Equating Ad Hoc with a Blank Screen
For years, users have been frustrated with a lack of information access. IT has been overloaded with backlogs of requests for more reports or modified reports. Ad hoc reporting was supposed to solve
these problems. Users would be empowered with the ability to create their own reports, without assistance from IT. IT could provide access to data and would never have to write another report.
Users are not very good at defining requirements. They ask for everything just in case they might one day need a particular piece of data. If they don’t ask for it up front, it may take
another year for IT to provide it. IT is also not very good at forcing users to prioritize their requirements and rightly does not want to be accused of not delivering the data the business needs. So
often with an ad hoc query environment, many IT shops give users access to every data element in the data warehouse. Whoa! The unsuspecting business user who attempts to build a query for sales by
product slowly realizes that sales can be booked by order date, by invoice date, by list price, by discount price, net of returns, and so on! The most savvy users will persevere and will get an
answer, although perhaps not the one they were expecting and most certainly not the same answer as the next user. More cautious users will seek help from a specialist. The end result: users say the
BI tool is unfriendly and the data can’t be trusted!
When users ask for ad hoc access, this is not what they want. Users want ad hoc reporting tools because they want flexibility and fast access to information. That doesn’t mean they want to
start with a blank screen that will be populated from a list of thousands of potential data elements. Nor does it mean they want to wait a year for IT to make one simple change to a report.
When deploying ad hoc reporting tools, IT must still author standard reports. (Surprise!) Alternatively, power users may author these reports. Either way, you must provide users with a starting
point: a report template that users can easily modify. The modified report may eventually become a standard report or it may remain a personalized view. These standard reports may include prompts
that dynamically build the report: do you want European sales or U.S. sales?
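A prompt-driven standard report can be sketched as a parameterized query: the stored report is a template, and the prompt value the user picks (here, a sales region) is bound at run time. The table and column names are hypothetical, and SQLite stands in for the warehouse just to make the sketch runnable.

```python
import sqlite3

# The stored "standard report" is a parameterized query; the prompt value
# chosen by the user is bound at run time.
REPORT_SQL = """
    SELECT product, SUM(net_sales) AS sales
    FROM sales_fact
    WHERE region = ?
    GROUP BY product
"""

def run_sales_report(conn, region):
    """Run the standard sales report for the region the user selected."""
    return conn.execute(REPORT_SQL, (region,)).fetchall()

# Tiny in-memory stand-in for the warehouse, just to exercise the prompt.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_fact (product TEXT, region TEXT, net_sales REAL)")
conn.executemany("INSERT INTO sales_fact VALUES (?, ?, ?)",
                 [("Widget", "Europe", 100.0),
                  ("Widget", "US", 250.0),
                  ("Gadget", "US", 75.0)])

print(run_sales_report(conn, "Europe"))   # only the European rows
```

The point is that one template serves many users: the report that answers "European sales or U.S. sales?" is the same report, differently prompted.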
In addition to providing users with starter reports, IT must provide a process in which they capture new and modified reports so that these reports can be leveraged across the organization. As
users explore previously hidden data, they will find more effective ways of analyzing it. While regulatory reports may be fairly static, management reports are not. Establishing a center of
excellence with experts who periodically and proactively review available reports is one way to capture what users are doing. Here too, if you think users will establish this process themselves,
you are mistaken. Users barely have time to access and analyze data without dedicating time to ensure it’s a repeatable process.
8. Training Users on the Tool and not the Data
BI tool training is a challenge because there is no single way to deliver it. First, just as you select BI tools according to user segments, you must also deliver different training approaches to
various user segments. A senior executive, who looks at scorecards and receives alerts, may only need a cheat-sheet with instructions on how to log into a browser-based BI application. A power
user, on the other hand, may need several days of classroom training.
In either case, it’s important to provide both data and tool training. The most effective training combines both in an integrated course. Users learn how to use the tool with their data.
However, this can be expensive and difficult to deliver and maintain. For every subject area, BI tool, and user segment, you need different training material and methods. Vendor-provided training
material must be customized, but by whom? The data experts are not necessarily capable trainers.
Another approach to BI training is to provide generic tool training supplemented by data discussion workshops. This is easier to administer and maintain; however, it may not be as effective as an integrated course.
Bottom line: no matter which approach you use, you must train users on both the BI tool and the data.
9. Failing to Promote the BI Application
Promoting any application is a foreign concept for most IT shops. You are building what the users asked for, so why
should you promote it? Yet it’s important to realize that not all users requested the BI application. Probably only a select few have been involved in the various planning, selection,
development, and implementation phases.
You should begin promoting the BI application during the planning stages. Your messaging at this early stage should be quite high level, focusing on general capabilities and subject areas.
“In 2004, we will be delivering a new sales reporting tool.” As you move closer to delivering capabilities, the messaging should be more specific, emphasizing how the new BI tool will
benefit that particular user and when. “The BI sales reporting tool allows interactive analysis so you can identify and better serve top customers.”
Use a variety of media to promote the BI application, again tailored for each user segment. Promotional media include e-mail, the corporate intranet, company newsletters, and staff meetings or road shows.
Increasingly, BI vendors look for customers who have successfully deployed their tools to act as case studies. BI vendors may create videos describing the benefits and successes of the BI
application. This allows the vendor to promote their tool externally and provides the customer an excellent way of promoting the application internally.
When developing a marketing strategy and key promotional messages, it’s helpful to enlist the help of your internal marketing or communications department.
10. Failing to Fix the Data
Users see a graphical tool with color-coded maps and indicators and think it’s the answer to all their information needs. When you start delivering poor quality data via a sexy BI tool, the BI
tool will quickly get blamed for any data discrepancies. It’s difficult for users to differentiate between the BI tool and the data sources.
This poses a difficult challenge for project teams who are often split into distinct groups: the data warehouse team and the BI application team. Rarely does the BI application team have the
authority to correct ETL processes and aggregation rules.
Yet, when these processes have errors, users no longer trust the tool. BI tools introduce additional calculations that also may cause users to question the data quality.
Some data problems are programmatic; the ETL or BI rules were incorrect. Others are process related; the business thought certain processes worked a particular way and still others are simply input
errors. Programmatic errors can be resolved by quality assurance (QA) procedures. When writing custom code, developers frequently have code reviews in which other developers review each
other’s program logic. The same QA process seldom happens with BI logic—transformations that may happen in the metadata layer, cube, or individual report. Fail to fix any programmatic
errors before users see them in the BI tool, and you justifiably face a credibility issue. Some development teams have a defined data validation phase (Does the output in the BI tool match the
source system?). While data validation helps, it’s not a replacement for a strong QA process—data may match a basic report, but not another data slice or report that contains numerous
report-level calculations. Is the error in the report or in the ETL process? If your QA process is less than stellar, any new BI implementation must start with a tightly controlled pilot. The pilot
phase, then, is used to test functionality as well as to uncover data quality problems. Users know in advance they may get incorrect data.
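The data-validation phase described above can be as simple as comparing an aggregate computed from the source system against the same aggregate from the BI tool's output. This is a minimal sketch with invented data; as noted, a match on one slice complements a strong QA process rather than replacing it.

```python
# Minimal data-validation check: do source-system and BI-tool totals agree?
def totals_match(source_rows, bi_rows, tolerance=0.01):
    """True when total sales agree within a rounding tolerance."""
    source_total = sum(r["sales"] for r in source_rows)
    bi_total = sum(r["sales"] for r in bi_rows)
    return abs(source_total - bi_total) <= tolerance

source = [{"sales": 100.0}, {"sales": 250.5}]   # from the source system
bi_output = [{"sales": 350.5}]                  # what the BI report shows
print(totals_match(source, bi_output))          # True
```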
For process and input errors, production users often have to see the data—in the BI tool—before they realize there is a problem. Assigning a data steward—in which a business user
(rather than IT) is ultimately responsible for data quality—can help in the long term. However, changing work processes that improve data quality rarely happens quickly. So the debate begins
about whether to implement a quicker ETL fix that gives cleaner data, or to do the harder, longer work of fixing the root cause. It depends, of course, but my vote is to do both. It is a Catch-22.
If you fix it in the ETL process, the business sees no work process to fix. If you don’t fix it in the ETL process, the data warehouse and BI tool may be unusable. When users discover these
process-related data problems, they may devise their own solutions to improve data quality, such as massaging the data in Excel or writing complex report formulas to mask the problems. Input errors
are most easily resolved (e.g., a $1000 invoice mistakenly entered as $10,000). The BI tool can help users uncover and correct these errors by analysis and exception highlighting. Sales were
exceptionally higher this month than last: was it an effective promotion or, sadly, an input error?
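Exception highlighting of this kind can be sketched as a simple rule: flag any month whose sales jump more than some threshold over the prior month, so a user can ask whether it was a real spike or the $10,000-for-$1,000 input error described above. The threshold and data are invented for illustration.

```python
# Flag months whose sales jump suspiciously over the prior month.
def flag_spikes(monthly_sales, threshold=3.0):
    """Return months where sales exceed `threshold` times the prior month."""
    flags = []
    for prev, curr in zip(monthly_sales, monthly_sales[1:]):
        if prev["sales"] > 0 and curr["sales"] > threshold * prev["sales"]:
            flags.append(curr["month"])
    return flags

sales = [{"month": "Jan", "sales": 1_000},
         {"month": "Feb", "sales": 1_100},
         {"month": "Mar", "sales": 10_000}]   # promotion, or input error?
print(flag_spikes(sales))   # ['Mar']
```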
Bottom line: with clean data, your BI tool can be successful. With bad data, any BI tool will fail.