The Foundation of Excellent Performance

Published in TDAN.com April 2001

Introduction

Fast databases are no longer a nice-to-have – they are a necessity. E-commerce databases that power globally used web sites must complete user transactions and present information fast enough to keep impatient customers from clicking over to a competitor’s web site. Corporations that need up-to-date internal information cannot wait for long, drawn-out processes to crunch numbers and compile competitive statistics. Instead, they need databases capable of quickly churning out the data necessary to compete in today’s economy.

The quest for the holy grail of better performance is run every day by database professionals at large and small companies alike. To help improve performance, many turn to expert database consultants and database performance monitors that track down and help eliminate system bottlenecks. However, in the pursuit of better overall database performance, many professionals ignore what is perhaps the number one contributor to excellent DBMS speed – the physical database design. This paper addresses that issue and demonstrates why database warriors need to return to the foundational practice of building the right database for the right type of job.

Why is Design Overlooked?

Simply, for two reasons:

  1. It’s difficult to perform correctly
  2. It takes time (and sometimes lots of it)

Designing a high-performance database is complicated work. It takes skill and experience to produce a design that runs as fast as lightning. Sadly, experienced personnel are at a premium these days, so junior or completely green workers are called upon to design and build databases. The mindset of needing a staff of experienced logical data modelers was thrown out in the early nineties, when CASE tools that promised the world cracked under the strain of increasing business workloads. Because many CASE tools failed to deliver what they had promised, and because many stressed logical design as the necessary forerunner of a good system, the importance of logical design was discounted. Corporations had endured enough projects that never got off the drawing board, so RAD became the accepted mode of development. The end result was — and still is — that logical design isn’t taken nearly as seriously in overall system development as it should be.

The second reason quality designs are overlooked when the topic of performance is discussed is that a good design requires a lot of up-front time – and time isn’t something a lot of companies have these days. The application lifecycle has never been shorter than it is right now. Projects that would have taken years to complete just five years ago are being pushed out in six months or less. Accomplishing such a feat requires one of two things: (1) superior personnel using state-of-the-art software tools or (2) the elimination of necessary tasks from the application construction equation. Usually, one of the first things to go is the logical database design phase. Instead of sitting down and intelligently laying out the necessary components and objects of a database, the database structure is built in the development phase alongside the code base used to run the application. The end result is a design that never had a chance to succeed.

Instead of concentrating on good physical database design, database professionals look to other methods to enhance performance. However, when they do, they risk missing the boat entirely and could
end up dazed and confused with a database that simply won’t perform.

Exposing the Number One Performance Myth of Today

Whether it’s in the realm of database technology or any other discipline, some maxims are whispered around the campfire so often that they are taken as gospel at face value and never questioned – especially when supposed “experts” mouth the words. Such is the case with a database performance myth that has been around for as long as I can remember. It goes something like this:

“Sixty to seventy percent of a database’s overall performance is derived from the code that is written against it.”

This is a complete untruth, or at the very least, an overestimation of the impact that properly written SQL code has on a running physical database. Good coding practices definitely count (many times heavily) toward the success of any database application, but to state affirmatively that they contribute nearly two-thirds is a stretch. The reason this proverb cannot pass the reality test is that it is stated independently of what good or bad code can do in the face of poor physical design. Let me illustrate with a real-world case.

A few years ago, I was called into a life insurance company to investigate the cause of a poorly performing management reporting system. The database was an Oracle7 engine that resided on a fairly
robust IBM AIX machine. The front end consisted of a GUI report application that constructed a number of management summary reports. The problem was that most of the reports took an abnormally long
time to run, with the average response time being ninety minutes from start to finish for a single report. Since the reports needed to be rebuilt several times a day, this was a completely
unacceptable response time scenario for the end users. After ruling out typical “quick-fix” solutions of giving Oracle more memory and ensuring no hardware bottlenecks existed (swapping, I/O
contention, etc.), I asked to see the code being used to create one of the reports. After all, I had (like most other IS professionals) been brought up through the database ranks being told that
code was what caused a system to live or die from a performance standpoint. On the surface, nothing appeared wildly out of place in the SQL code. A fairly sophisticated join predicate linked
together what seemed to be only six tables. The database itself was only about 500MB or so in size, so volume was not an issue. I then asked an important question: “Are these tables or views?”
“Views” was the response. I then extracted the definition of the first view used in the code, and what I discovered was nothing short of amazing: the view consisted of a join of 33 tables, with 27 of the join predicates being outer joins!
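To illustrate the kind of check that exposed the problem – the view, table, and column names below are hypothetical, not the insurance company’s actual objects – a view’s underlying query can be pulled from Oracle’s data dictionary and inspected for runaway join logic:

    -- Pull the definition of a suspect view from the data dictionary
    SELECT text
      FROM user_views
     WHERE view_name = 'RPT_POLICY_SUMMARY_V';

    -- A definition like this (greatly abbreviated) is a red flag: dozens of
    -- tables strung together with outer joins just to feed a summary report
    CREATE OR REPLACE VIEW rpt_policy_summary_v AS
    SELECT p.policy_id, c.client_name, a.agent_name /* ... */
      FROM policies p, clients c, agents a /* ... 30 more tables ... */
     WHERE p.client_id = c.client_id(+)
       AND p.agent_id  = a.agent_id(+);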

I can say without hesitation that no amount of code rework or SQL rewrite technique could salvage that situation. The developers were doing the absolute best they could; however, they were up against a database that had been normalized to the nth degree by an overzealous, logical-model-minded DBA. The physical design had been implemented in a way that made no sense from a performance standpoint, and nothing short of a major design change was going to alter the situation.

The physical design constrains all code – good or bad – and has the capability to turn even the best written SQL into molasses. After all, how can a SQL developer obtain unique key index access
unless the physical index has been created and is in place? How can a database coder scan only the parts of a table that they need unless that table has been partitioned to accommodate such a
request? Only when a solid physical design is put in place – a design that fits the application like a glove – can SQL code really take off and make for some impressive response times. But good
design comes first.

So what did I do with the poorly performing management reporting system? The DBA steadfastly refused to modify any of his maze-like design, so changing the primary physical database structure was out. I grabbed one of the developers and, using a 4GL development environment (PowerBuilder), created a small, customized ETL product the clients could use. The user could bring up a GUI front end and, with a few mouse clicks, create a set of denormalized reporting tables and then build the reports they needed. Amazingly, the extract, transform, and load procedure, coupled with the report creation, crossed the finish line in less than seven minutes. A 94% reduction in response time was achieved in the same hardware environment, but with a different, improved physical design.
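The heart of that little ETL product was nothing exotic. A rough sketch of the idea, with illustrative names standing in for the real report objects: the base tables are joined once, flattened into a reporting table, and the reports read the denormalized copy.

    -- Build a flattened reporting table straight from the base tables,
    -- bypassing the 33-table views entirely (names are illustrative)
    CREATE TABLE rpt_policy_summary AS
    SELECT p.policy_id,
           c.client_name,
           a.agent_name,
           SUM(pr.premium_amount) AS total_premium
      FROM policies p, clients c, agents a, premiums pr
     WHERE p.client_id  = c.client_id
       AND p.agent_id   = a.agent_id
       AND pr.policy_id = p.policy_id
     GROUP BY p.policy_id, c.client_name, a.agent_name;

    -- The management reports then query the denormalized copy directly
    SELECT client_name, total_premium
      FROM rpt_policy_summary
     ORDER BY total_premium DESC;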

The Link between Performance Monitoring and Physical Design

Every database professional I know wants to be thought of as an expert in database tuning. The consultants that make the most money out in the field are the ones who can miraculously transform a
sluggish, wheezing database into one that runs fast and efficiently. The books that fly off the shelf in the technical bookstores are the ones that promise secret hidden tips on accelerating the
performance of database systems. And almost every database administrator covets their complicated SQL scripts that dig into the heart of a database’s internals and regurgitate mountains of
difficult to interpret statistics. But do those down in the database trenches really know what to do with all the information produced through performance monitors and SQL scripts? How does one
really monitor a database for performance and become good at making a difference in the response times end users experience?

The key to understanding the discipline of performance monitoring is this: when you monitor a database for performance, you are really validating your physical design implementation. If the performance monitor you choose to use is blasting you with flashing lights, alarm bells, and pager alerts, it’s probably because your physical design is failing. If all is quiet on the scene in your performance monitor, then your physical design is likely a current success. It really is almost as simple as that.

To be sure, there are performance situations that aren’t directly impacted by the physical design. Lock contention, for example, is mostly an application or coding issue. But on a grand scale, your performance monitoring output speaks volumes about your talents as a database designer. Got I/O contention problems in your database? Then you likely didn’t segment the tables, indexes, and storage structures properly in your physical design. Observing too many long table scans? Chances are you didn’t adhere to a proper indexing strategy. Experiencing out-of-space headaches with either your storage structures or objects? It’s a good bet you didn’t size your database properly in your initial physical design. I could continue down this path for some time, but you get the idea.
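On Oracle, for instance, a couple of the symptoms above can be spotted with nothing more than the V$ views the database already exposes – a rough diagnostic sketch:

    -- Long (full) table scans versus short ones: a high, growing count of
    -- long scans usually points back at the indexing scheme
    SELECT name, value
      FROM v$sysstat
     WHERE name IN ('table scans (long tables)', 'table scans (short tables)');

    -- File-level I/O skew: hot datafiles suggest tables and indexes were not
    -- separated across tablespaces and drives in the physical design
    SELECT df.name, fs.phyrds, fs.phywrts
      FROM v$filestat fs, v$datafile df
     WHERE fs.file# = df.file#
     ORDER BY fs.phyrds DESC;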

The tragic thing is that much of today’s mindset dismisses the idea that altering and improving a database’s physical design will yield the largest possible performance benefit. Part of the
reason for this is that modifying the design of a physical database – especially one that is currently in production – is no easy task and oftentimes requires healthy amounts of off-hours work by
the administrator. So instead, many take the quick fix approach to performance problems, which equates to throwing hardware at the situation in most cases. Either the server box itself is upgraded,
more processors are introduced to the mix, or a decent amount of RAM is added. In the short term, things appear to get better, and if the database is relatively static in nature, things may remain
that way. But if the database is dynamic and the data/user load continues to grow, the situation will slide back to the point where it once was.

The reason for this is a foundational one. If the foundation is flawed, then the house needs to be put in order at that level before anything else is done. But much of the way performance monitoring and problem resolution is performed today isn’t handled that way. It’s like a homeowner who discovers that the house has a cracked foundation, puts a new coat of paint on the outside to temporarily cover up the cracks, and then declares all is well. Even worse, the homeowner could attempt to add on to the home in hopes of improving its value or appeal. But let’s face it – with a cracked foundation, who will buy it? The same thing holds true for adding more hardware onto a poorly designed database. You may throw more RAM at a badly performing database, and for a while those performance cracks get covered up. But over time, as more data and users are added, those foundational cracks will reappear and must be dealt with yet again. Regardless of the effort involved, it’s much better to attack the foundation problem and correct it permanently.

As an example, a database administrator may use a performance monitor to find that the data buffer cache hit ratio is far below acceptable levels (typically 80% or less). The DBA may erroneously conclude that more RAM is needed or that the buffer cache should be enlarged. But what if the problem instead stems from the fact that too many long table scans are occurring? Most DBMSs will quickly recycle the data obtained from large table scan operations to keep stale data out of the cache. To be sure, the problem could be a coding issue, with developers writing SQL predicates that cannot use the available indexes. Or, more likely, the database may not have the correct indexes in place to help the code avoid the many long table scans. If this physical design flaw can be correctly identified, then no extra RAM may be needed at all.
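For reference, the classic Oracle buffer cache hit ratio comes straight from V$SYSSTAT; checked alongside the long table scan counts shown earlier, it helps separate a genuine memory shortage from an indexing flaw:

    -- Buffer cache hit ratio = 1 - (physical reads / logical reads)
    SELECT 1 - (phys.value / (db.value + cons.value)) AS buffer_cache_hit_ratio
      FROM v$sysstat phys,
           v$sysstat db,
           v$sysstat cons
     WHERE phys.name = 'physical reads'
       AND db.name   = 'db block gets'
       AND cons.name = 'consistent gets';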

What about the link between availability and design? According to Oracle Corporation’s own studies of client downtime, the largest percentage – up to 36% – is attributable to design-related issues. If that isn’t a wake-up call to get serious about design, I don’t know what is.

 



Figure 1 – Downtime statistics provided by Oracle Corporation

How to Make the Biggest Performance Impact in your Database

If this paper has convinced you that proper physical design should be your number one performance goal as a database administrator, then it is time to get serious about how you manage your physical design lifecycle. So how do you get started in making a noticeable difference in the physical designs of the databases currently under your care, and those you are destined to encounter or build in the future? The first step is a mental one: make the commitment to pay more attention to excellent physical design. As an aside, all project management personnel need to make this same commitment, as guaranteeing a solid physical design foundation takes more up-front resources. But make no mistake, it is an understatement to say that it is time well spent.

The next step involves education on the part of the database designer. Of course, the best way to become a design guru is to put in time in the trenches and work with every style of database – heavy OLTP, data warehousing, and cross-platform data mart designs. You will learn very quickly which designs stand and which physical foundations crack when you go up against heavy-duty e-commerce and mega-user systems. There are also a variety of good educational classes and books on the subject of physical design to aid in the learning process.

Creating robust, efficient physical designs can be difficult and intricate work. You will need to arm yourself with serious power tools that can slice through the difficulties involved in building and retrofitting complex physical database designs. Long gone are the days when a DBA or modeler could handle most of the work with a SQL query interface and a drawing tool. Today’s relational databases are too robust and contain too many complexities for such primitive aids.

At a minimum, you will need two things flanking both ends of your arsenal: a serious data modeling tool and a robust performance monitoring product. We have already established the fact that
performance monitoring is really the validation of a database’s physical design. When foundational cracks are identified with the monitor, you will need a high-quality design tool to aid in
rectifying the situation.

Physical DBAs who do not like to use data modeling tools will need two other software products instead: a feature-rich database administration tool and a change control product. The database administration tool is used to create new objects for a database as well as to modify the properties of existing objects. This tool is normally used in an ad-hoc manner and is great for graphically redesigning a database in real time.

The change control product is a different animal. If you will not use a data modeling tool to capture and version-control the designs of your databases, then you will need another method for protecting designs that are in place and working. Having such “snapshot backups” of your database schemas will prove invaluable when disaster strikes.

A DBA I once worked with was managing a large packaged financial application when she learned the value of a change control tool. She had to make a complex change to one of the database’s critical tables and thought she had built the right script to do the job. Unfortunately, she didn’t have everything in place, and when she ran her change job she ended up losing a number of important indexes that existed on the table. Worse yet, since her table and data looked OK, she thought all was well and didn’t realize the indexes were gone. The next day, many parts of the application slowed to a snail’s pace as queries that used to complete in an instant now took forever. The changed table was identified as the source of the problem, but while my DBA friend discovered that the table now had no indexes, she didn’t know which columns had been indexed (something not uncommon in huge financial applications). Through trial and error, she was able to get her indexing scheme back in place, but not before a lot of time had been lost.
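Even without a full change control product, the index definitions she needed could have been captured ahead of time from the data dictionary – a minimal sketch, with a hypothetical table name:

    -- Snapshot the indexed columns of a critical table before changing it
    SELECT index_name, column_name, column_position
      FROM user_ind_columns
     WHERE table_name = 'GL_TRANSACTIONS'
     ORDER BY index_name, column_position;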

This is one case where a good change control tool can save you. Nearly every good tool in this category offers a synchronization feature that allows a DBA to compare an up-and-running database with
a saved snapshot of that database’s object definitions. Once differences are identified, a click of the mouse can restore any missing objects.

But a change control tool can also help you in your physical design iterations. By periodically capturing changes you make to the physical design of your database, you can learn what worked and
what didn’t. And if you make an “oops” and actually cause more harm than good, you can instruct your change control tool to automatically put things back to the way they were.

Spotting Physical Design Flaws

Once you have your database design arsenal in place, you can begin the work of building correct physical designs from scratch and managing the physical design lifecycle once a system goes into
production. But how do you quickly spot physical design flaws in an up-and-running database? It definitely takes a trained eye to uncover the root cause of identified performance problems, but the
table below will help get you started. It lists just a few of the most common database performance problems and the possible physical design gremlins that could be the culprit. Oracle is used as
the database example.

 

Performance Category / Performance Problem / Possible Design Cause

Memory
  Poor buffer cache hit ratio
    · Too many long table scans – invalid indexing scheme
    · Not enough RAM devoted to the buffer cache memory area
    · Invalid object placement using Oracle 8’s KEEP and RECYCLE buffer caches
    · Not keeping small lookup tables in cache using the CACHE table parameter
  Poor memory/disk sort ratio
    · Not presorting data when possible

Contention
  Redo log waits
    · Incorrect sizing of Oracle redo logs
    · Insufficient memory allocated to the log buffer area
  Free list waits
    · Not enough free lists assigned to tables

I/O
  Identified disk contention
    · Not separating tables and their accompanying indexes into different tablespaces on different physical drives
  Slow access to system information
    · Not placing the SYSTEM tablespace on a little-accessed physical drive
  Slow disk sorts
    · Placing the tablespace used for disk sort activity on a RAID5 drive or heavily accessed physical volume
  Abnormally high physical I/O
    · Too many long table scans – invalid indexing scheme
    · Not enough RAM devoted to the buffer cache memory area
    · Invalid object placement using Oracle 8’s KEEP and RECYCLE buffer caches
    · Not keeping small lookup tables in cache using the CACHE table parameter

Space
  Out-of-space conditions (storage structures)
    · Poorly forecasted data volumes in the physical design
  Tablespace fragmentation
    · Invalid settings for either object space sizes or tablespace object settings (PCTINCREASE, etc.)
    · Not using locally managed tablespaces in Oracle8

Users/SQL
  Large JOIN queries
    · Overnormalized database design

Object activity
  Chaining in tables
    · Incorrect PCTFREE and PCTUSED settings for objects
    · Too small a database block size
  Rollback extension
    · Incorrect sizing of rollback segments for the given application transactions
  Many table scans
    · Incorrect indexing scheme
  Object fragmentation
    · Incorrect initial sizing
Using a quality performance monitor, you can be quickly led to the performance headaches in your database, and then, using either your data modeling tool or the combination of your database administration and change control products, you can remedy the situation. Fixing foundational flaws in a database is never easy, but perhaps one day we will be treated to software that gets things right before the situation turns ugly.

The Dream Software Tool for Design

The ultimate software tool that would really aid in improving a database’s physical design, and therefore overall performance, has yet to be delivered. It is a product that would place a
database’s physical design and environment under a microscope and then produce an expertly altered physical design, crafted especially for the given database’s needs. All the data modeling tools
on the market can help you build a data model, but they can’t tell you how to build the right data model, and that is a subtle but huge difference.

Let’s take the case of when a designer should use a bitmap index. Every data modeling tool will allow you to design a bitmap index for a table in a model, but none will stop you from putting a bitmap index on a table where one doesn’t belong.

To determine whether a bitmap index should be used, the designer first needs to know the column’s cardinality. For those not familiar with bitmap indexes, they work in roughly the reverse fashion of a normal B-tree index. Most indexes require high cardinality (many distinct values) in the table column to work effectively; bitmap indexes are designed to work with low-cardinality data. For example, if I have a database that tracks patients admitted to a hospital, I may have a column in an admissions table called INSURED that tracks whether the patient was insured – basically a YES/NO column. This would be a terrible choice for a regular B-tree index, but it could definitely qualify for a bitmap index.
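As a quick sketch of that first check – table and index names are illustrative – the cardinality can be confirmed from the data itself before the bitmap index is created:

    -- Confirm the column really is low cardinality
    SELECT COUNT(DISTINCT insured) AS distinct_values,
           COUNT(*)                AS total_rows
      FROM admissions;

    -- A handful of distinct values over many rows makes a bitmap candidate
    CREATE BITMAP INDEX admissions_insured_bmx ON admissions (insured);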

The second thing a designer needs to know when putting a bitmap index on a table is data volume. Almost any index is useless for enhancing performance on tables with little data, because most DBMSs will ignore an index on a small table and instead cache and scan the table faster than index access would allow. On the other hand, if millions of rows were present in our hospital admissions table, then a bitmap index could really prove useful.

The third thing a designer needs to know when deciding whether a bitmap index is appropriate is how frequently data modifications occur on the table. Bitmap indexes are notorious for causing performance slowdowns on tables with high DML activity. As proof of this concept, I once inherited a database that was extremely critical both in terms of company visibility and bottom-line impact. Complaints quickly surfaced about the database’s performance, and while many of the normal performance statistics looked good, there seemed to be a bottleneck whenever an OLTP transaction passed through the system. I quickly traced the problem to the hub table of the database – nearly every transaction passed into and out of this one table. The designer who preceded me had chosen to place eight bitmap indexes on this table, which was the object of much DML activity in the system. This design decision violated nearly every rule of thumb with respect to bitmap indexes. Removing all of the bitmap indexes produced an end result like the parting of the Red Sea: response time throughout the system was immediately restored to more than acceptable measures.
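A rough sketch of that diagnosis and fix – again with hypothetical object names – is nothing more complicated than this:

    -- Find any bitmap indexes sitting on the heavily modified hub table
    SELECT index_name
      FROM user_indexes
     WHERE table_name = 'ORDER_TRANSACTIONS'
       AND index_type = 'BITMAP';

    -- Drop the offenders once they are confirmed to throttle DML throughput
    DROP INDEX order_trans_status_bmx;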

The final and perhaps most important consideration when deciding whether a bitmap index is right for a table is user access patterns. In other words, will the index be used at all? If no one asks the question “How many insured patients were admitted this month?” in a SQL query, then the bitmap index placed on the INSURED column of my hospital admissions table is basically useless.
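For illustration, that question translates into a query of roughly this shape – the kind of predicate the optimizer must see before the bitmap index earns its keep:

    -- Count this month's insured admissions (illustrative schema)
    SELECT COUNT(*)
      FROM admissions
     WHERE insured = 'YES'
       AND admit_date >= TRUNC(SYSDATE, 'MM');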

All four considerations count and must be weighed when it comes time to decide whether a bitmap index belongs on a table column. The only problem is that a data modeler or DBA may not have all the facts needed to make a correct decision before the system goes live. Or perhaps the designer isn’t privy to the knowledge needed to make the right choice when it comes to index placement.

Here is where the dream tool comes into play. First, a data/work load must be imposed on the database to mimic what is to come with respect to user traffic, user requests, and data volume. If a load testing tool can be used before a system goes into production to do this – great. Otherwise, a manual, user-driven model office environment must be put in place. In any event, once such a load exists, the yet-to-be-invented tool interrogates the database and captures data volumes, object statistics, and user request patterns. The tool then digests this information and constructs a physical design model that fits the system perfectly. All necessary indexes are present, physical storage placements are correct, and all objects that desperately need denormalizing are reconstructed. The tool would basically tell the designer how the data model should have looked in the beginning.

Until such a product comes about, using a combination of modeling and performance monitoring tools will be the de facto method for ensuring high performance physical database designs.

Conclusion

Databases that showcase high performance are always rooted in an excellent physical design. Although the mentality of many corporations these days is to discount the value of planning and correctly creating a good physical database design, the fact is that you will be hard pressed to make a better contribution to a fast-moving system than laying the right foundation. Does it take time and skill? Sure, but then most everything good does.


Robin Schumacher

Robin Schumacher is Vice-President of Product Management at Embarcadero Technologies, Inc. He has been involved in data modeling and database administration for over 14 years and has experience in DB2, Oracle, SQL Server, and Teradata.
