Measuring Your DBAs

Readers of my writings sometimes ask me questions about databases and database administration, which I welcome. And at times I will take the opportunity to answer particularly intriguing questions in print. One intriguing question I have been asked more than once is: “What metrics and measurements are useful for managing how effective your DBA group is?”

This is not a very easy question to answer because a good DBA must be a “jack of all trades.” By that, I mean a DBA needs to be competent at many different tasks (or trades) in order to be most effective — and each of these “trades” can have multiple metrics for measuring success. For example, a metric suggested by one reader was to measure the number of SQL statements that are processed successfully. But what does “successfully” mean? Does it mean simply that the statement returned the correct results, or does it mean it returned the correct results in a reasonable time?

And what is a “reasonable” time? Two seconds? One minute? A half hour? Unless you have established service level agreements, it is unfair to measure the DBA on response time. And the DBA must participate in establishing and managing reasonable service level agreements (in terms of cost and response time) lest they be handed a task that cannot be achieved. (In the September 2022 edition of this column, I addressed “The Importance of Service Level Management”.)
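Once an SLA target exists, compliance becomes straightforward to measure. As a purely illustrative sketch (the 2-second threshold and sample timings are my own assumptions, not figures from any real SLA), response-time compliance could be computed like this:

```python
# Hypothetical sketch of an SLA-compliance metric. The 2-second
# threshold and the sample timings below are illustrative assumptions.
def sla_compliance(response_times_sec, threshold=2.0):
    """Return the fraction of requests that met the SLA threshold."""
    met = sum(1 for t in response_times_sec if t <= threshold)
    return met / len(response_times_sec)

times = [0.4, 1.2, 0.9, 3.5, 0.7, 2.1, 0.8, 1.0]
print(f"{sla_compliance(times):.0%} of requests met the 2-second target")
```

The point of the sketch is that the metric only becomes fair once the threshold has been negotiated in the SLA; without one, any number you pick is arbitrary.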

Measuring the number of incident reports was another metric suggested. This is fine if it is limited to true problems that the DBA could reasonably have prevented or resolved. The DBA should not be held accountable for bugs in the DBMS (which are the vendor's responsibility), nor for design elements forced on him or her by an unreasonable schedule from management or an overzealous development team.

I like the idea of using an availability metric, but it should be tempered against the environment and your organization's up-time needs. In other words, what level of availability is required? Once again, this comes back to service level agreements. And the DBA should not be judged harshly for missing availability targets if the DBMS does not deliver the technical infrastructure to achieve the desired level of availability, or if the organization does not purchase database availability solutions from a third-party vendor. Of course, the DBA can be held accountable if it was his or her decision to purchase the DBMS in use, but this is not usually the case; many DBAs are brought in well after the DBMS has been selected.
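To make this concrete, an availability metric that is fair to the DBA needs to exclude downtime outside his or her control. The sketch below is a hypothetical illustration (the figures and the exclusion rule are my own assumptions, not a standard):

```python
def availability(total_minutes, downtime_minutes, excluded_minutes=0):
    """Uptime percentage over a period. excluded_minutes covers outages
    outside the DBA's control (e.g., DBMS vendor bugs, hardware failures)."""
    measured = total_minutes - excluded_minutes
    return (measured - downtime_minutes) / measured * 100

# Hypothetical 30-day month (43,200 minutes): 90 minutes of downtime,
# of which 60 were caused by a DBMS bug and excluded from the DBA's measure.
print(f"{availability(43_200, 30, 60):.3f}% availability")
```

Whether vendor-caused outages are excluded, and how they are classified, is exactly the kind of decision that belongs in the service level agreement rather than in the measurement after the fact.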

What about a metric based on response to problems? This metric would not necessarily mean that the problem was resolved, but that the DBA has responded to the “complaining” entity and is working on a resolution. Such a metric would lean toward treating database administration as a service or help desk type of function. But I think this is much too narrow a metric for measuring DBAs. After all, they do much more than just respond to problems.

Any DBA evaluation metric must be developed with an understanding of the environment in which the DBA works. This requires in-depth analysis of things such as:

  • Number of applications that must be supported;
  • Number of databases and size of those databases;
  • On-premises versus cloud database support;
  • Use of the databases (OLTP, OLAP, web-enabled, analytics, AI, ad hoc, etc.);
  • Number of different DBMSs and vendors (that is, Oracle, Db2, SQL Server, etc.);
  • Number of OS platforms to be supported (Windows, UNIX, Linux, z/OS, etc.);
  • Special consideration for ERP applications due to their non-standard DBMS usage;
  • Number of users and number of concurrent users;
  • Type of service level agreements in effect or planned;
  • Requirement to participate in a DevOps, agile development environment;
  • Availability required (24/7 or something less);
  • The impact of database downtime on the business ($$$);
  • Performance requirements (sub-second or longer—gets back to the SLA issue);
  • Type of applications (mission critical vs. non-mission critical); and
  • Frequency of change requests.

This is probably an incomplete list, but it accurately shows the complexity and challenges faced by DBAs daily.

Of course, the best way to measure DBA effectiveness is to judge the quality of all the tasks that your DBAs perform. But many aspects of such measurement will be subjective. Keep in mind that a DBA performs many tasks to ensure that the organization's data and databases are useful, usable, available, and correct. These tasks include data modeling, logical and physical database design, performance monitoring and tuning, assuring availability, managing security and authorization, backup and recovery, ensuring data integrity, and, really, anything that interfaces with the company's databases. Developing a consistent, non-subjective metric for measuring all of these tasks is challenging.

You’ll probably need to come up with a complex formula encompassing all of the above — and more — to do the job correctly, which is probably why I’ve never seen a fair, non-subjective, metric-based measurement program put together for DBAs.
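If you did attempt such a formula, one common shape is a weighted average of normalized sub-metrics. The sketch below is purely illustrative: the factors, weights, and scores are my own hypothetical assumptions, not a prescription from this column.

```python
# Hypothetical composite DBA score. Every factor, weight, and sample
# value here is an illustrative assumption, not a recommendation.
WEIGHTS = {
    "sla_compliance": 0.30,   # fraction of requests meeting SLA targets
    "availability":   0.30,   # uptime within the DBA's control
    "incident_rate":  0.20,   # inverted, so fewer true incidents scores higher
    "change_success": 0.20,   # change requests applied without rollback
}

def dba_score(metrics):
    """Weighted average of metric values, each normalized to 0.0-1.0."""
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

example = {"sla_compliance": 0.95, "availability": 0.999,
           "incident_rate": 0.90, "change_success": 0.97}
print(f"Composite DBA score: {dba_score(example):.2f}")
```

Even this simple sketch exposes the real difficulty: choosing the factors and weights is itself a subjective exercise, which is why the fairness of any such program depends on negotiating them openly with the DBA group rather than imposing them.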

If you implement such a program, I’d love to hear the details and how it is accepted by the DBA group, their management, and their customers/users.


Craig Mullins

Craig S. Mullins is a data management strategist and principal consultant for Mullins Consulting, Inc. He has three decades of experience in the field of database management, including working with DB2 for z/OS since Version 1. Craig is also an IBM Information Champion and is the author of two books: DB2 Developer’s Guide and Database Administration: The Complete Guide to Practices and Procedures. You can contact Craig via his website.
