What is VLDB? Very Large Databases Part 3

Published in TDAN.com April 2002

Articles in this series – Part 1, Part 2



5. Query Techniques

Welcome to the discussion on VLDB Query Techniques. In this chapter of the series we present an overview of building proper queries for a VLDB environment. If you have not been reading the series from the beginning, we recommend you take a moment to do so; the concepts in this chapter build on the previous chapters.

A full series on query techniques will follow this one, providing additional insight into constructing queries, measuring performance, single-passing, and determining result sets. This series therefore only scratches the surface, introducing the ideas and concepts of querying a partitioned VLDB system. The technical details, along with a case study, will be covered in that later series, titled Retrieving Data from VLDB.


5.1. Statistics of an Index

Each index has a set of statistics kept by the RDBMS. These statistics tell the optimizer how to mathematically weigh the use of the index. Is it heavy? Light? Will it cause a table scan or an index scan? Or can the data be accessed by index position? The statistics house different numbers depending on the RDBMS vendor, but most engines we know of keep statistics that the optimizer uses. To keep the optimizer current with the information that has changed in the table, the DBA must issue a command such as UPDATE STATISTICS or ANALYZE.

This is especially important in VLDB, where partitioned tables are common. Statistics are kept for each partition (if a partitioned index is used), dividing the cost of accessing that index by the number of partitions. One might think that many partitions mean super-fast access, but as explained before, too many partitions carry a high I/O cost and are not good for indexing either. With partition-level statistics, the optimizer gets a chance to choose the right partition of information based on the statistics and the SQL WHERE clause.

In some RDBMS engines, statistics can be updated or analyzed at a low, medium, or high percentage. This is simply a sampling rate: a low percentage examines a small sample of all the rows in the table. In VLDB these statistics should be kept at “high” or full-table; if partitioning is used, they should be kept at full-partition. Otherwise queries take too long attempting to retrieve data. Old or bad statistics frequently lead the optimizer to table and index scans, whether or not the table is partitioned.
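
To make this concrete, here is a minimal sketch of what these commands can look like. The Oracle- and Sybase/SQL Server-flavored syntax, the table, and the partition name are all illustrative assumptions; consult your vendor's manuals for the exact form.

    -- Low sampling rate: fast, but can mislead the optimizer on a VLDB table.
    ANALYZE TABLE sales ESTIMATE STATISTICS SAMPLE 10 PERCENT;

    -- Full computation on a single partition (the level recommended above).
    ANALYZE TABLE sales PARTITION (sales_2001_q4) COMPUTE STATISTICS;

    -- Sybase / SQL Server flavor: refresh statistics after heavy change.
    UPDATE STATISTICS sales;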


5.2. Index Hints

Frequently when we consider query tuning, we think of adding hints for the optimizer. In some cases this is a new concept; some users only think about tuning the SQL itself. There are pros and cons to adding index hints, and in this section we explore some of the reasons to use them and some of the reasons not to. Hints can make a difference in a VLDB/VLDW data set and the corresponding queries. This is also why most databases see the value of materialized views, or in-memory index-only tables.

So what is an index hint?

A hint is a statement placed inside the SQL query to tell the optimizer which index it should “favor” when going after information. The syntax varies from database to database, but the hint is usually listed near the table name. Why should I look at index hints? If your query has been tuned as much as possible and still seems to take too long to execute, an index hint could speed it up quite a bit. On the other hand, giving the optimizer the wrong index (or changing the WHERE clause on the query) could kill both the index hint and the query. You must take care in providing index hints for your queries. For instance, you’ve often heard that columns wrapped in functions (such as LTRIM or UPPER) cause the query to use table scans instead of indexes. Before “function-based indexes” came along, an index hint, where both columns (or at least one side of the WHERE clause) were indexed, could cut the query time in half: it would replace the table scan with an index scan, which is almost always faster. Now, with function-based indexes, this problem is diluted. The index is defined as a function of the column, and the results of the function are stored in the index. When a query is found to use that function, the optimizer resolves to that index, and the query runs faster.
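
As a sketch, in Oracle-style syntax (the table, column, and index names are made up for illustration, and hint syntax differs in Sybase, SQL Server, and other engines):

    -- Tell the optimizer to favor a specific index.
    SELECT /*+ INDEX(c ix_customer_lastname) */ c.customer_id, c.last_name
      FROM customer c
     WHERE c.last_name = 'SMITH';

    -- Wrapping the column in a function defeats an ordinary index:
    --   WHERE UPPER(c.last_name) = 'SMITH'   (likely a table scan)

    -- A function-based index stores the function's results, so the
    -- predicate above can use an index once again.
    CREATE INDEX ix_customer_lastname_up ON customer (UPPER(last_name));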

How can an index hint hurt?

There are several ways an index hint embedded in the query can hurt performance. These are listed below:

  • The index statistics have not been updated, and the table has undergone tremendous change.
  • The index is deleted by the DBA.
  • The query changes, and a better index is available for it to use, but the optimizer won’t use it because of the index hint embedded in the original query.

Basically, index hints are best left to query re-write engines. If a query re-write engine, in the course of tracking metadata, can see that the selectivity of an index would be much higher if it were used, it can put an index hint in the query without the end user knowing it. This is becoming more commonplace as data sets and intermediate aggregates grow to new levels. In other words, the BI tools are implementing query re-write engines and intermediate aggregates: the selects are run against the RDBMS, the results returned, and the aggregates built in RAM before being shipped to the clients.

That gets us away from index hints, though. Why do we need index hints on VLDB? Good question. Frequently the tables (data sets) are much too large to keep the statistics updated, or the tables haven’t been properly designed. Because of this lack of architecture it is nearly impossible to rebuild the indexes or build new ones. In a perfect world, index hints would never be needed; then again, we don’t live in a perfect world.

Consider a hint against a partitioned index: it would keep the query to a single partition. The query would then be extremely fast, yet limited (to the data in that partition). The minute the user requested data outside the partition, a table scan might take place. In VLDB, table scans kill performance and must be avoided at all costs.
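
A hedged sketch of the idea, assuming a sales table range-partitioned by date (all names, types, and ranges here are hypothetical):

    -- The WHERE clause stays inside one partition's range, so the
    -- optimizer can prune the query down to that single partition.
    SELECT SUM(amount)
      FROM sales
     WHERE sale_date BETWEEN DATE '2001-10-01' AND DATE '2001-12-31';

    -- Widen or drop the date predicate, and the engine may have to touch
    -- every partition: effectively the table scan warned about above.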

For the specific commands to control index hints, look into the DBA manuals for administering your database.


5.3. Multi-pass Query Optimization

Too often when tuning queries, the query creator forgets that a multiple-pass query can be too slow. Multiple-pass queries are queries that make more than one pass through the data they retrieve. Most of the time the multi-pass query carries a tremendous I/O cost; it can be the difference between a query that takes 30 minutes and the same query (tuned for a single pass) taking 2 to 5 seconds. I have been tuning queries for the past 10 years, and for the past 5 I have relied on a book that I would recommend to anyone, on any database, who must tune queries. The book is:

“Optimizing Transact-SQL: Advanced Programming Techniques”, by David Rozenshtein, Ph.D., Anatoly Abramovich, Ph.D., and Eugene Birger, Ph.D. SQL Forum Press. ISBN 0-9649812-0-3.

Even though this appears to be a Sybase SQL book, the concepts in it are standard ANSI SQL-92. I would urge everyone to read this book and understand its nuances. It covers everything from pivoting to folding, from histograms to contiguous regions in data. Basically, the concepts in this book provide you with the tools to take a multi-pass query of any nature and attempt to rebuild it as a single-pass query.
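
To give a flavor of the single-pass idea, here is a minimal example of my own in ANSI SQL-92 (the table and column names are hypothetical; this is not an excerpt from the book). Counting rows by status with three separate SELECTs makes three passes over the data; CASE-based pivoting returns the same answer in one pass.

    -- Multi-pass version: three scans of the same table.
    SELECT COUNT(*) FROM orders WHERE status = 'OPEN';
    SELECT COUNT(*) FROM orders WHERE status = 'SHIPPED';
    SELECT COUNT(*) FROM orders WHERE status = 'CANCELLED';

    -- Single-pass version: one scan, with CASE doing the pivoting.
    SELECT SUM(CASE WHEN status = 'OPEN'      THEN 1 ELSE 0 END) AS open_cnt,
           SUM(CASE WHEN status = 'SHIPPED'   THEN 1 ELSE 0 END) AS shipped_cnt,
           SUM(CASE WHEN status = 'CANCELLED' THEN 1 ELSE 0 END) AS cancelled_cnt
      FROM orders;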

What I will say is this: I can usually stop tuning a query when it has reached either an index scan of a single partition, or access by index position (which is optimal). There have been cases where I’ve had to use a stored procedure with a cursor to pull the primary keys from a simple WHERE clause with no GROUP BYs. In those cases, the internal query in the loop pulls the records that meet the criteria by primary key and inserts them into a temporary table; following that operation, the temporary table is aggregated or joined to other rows, roughly as sketched below.
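
A minimal sketch of that approach, in Transact-SQL style (the procedure, table, and column names are assumptions, and cursor syntax differs somewhat between Sybase and SQL Server):

    CREATE PROCEDURE extract_and_aggregate AS
    BEGIN
        CREATE TABLE #work_set (order_id INT, amount MONEY)

        DECLARE @pk INT
        -- The driving query touches only the key column.
        DECLARE pk_cursor CURSOR FOR
            SELECT order_id FROM big_orders WHERE region = 'WEST'

        OPEN pk_cursor
        FETCH NEXT FROM pk_cursor INTO @pk
        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- Each probe is a fast primary-key lookup on the wide table,
            -- not a scan.
            INSERT INTO #work_set
            SELECT order_id, amount FROM big_orders WHERE order_id = @pk

            FETCH NEXT FROM pk_cursor INTO @pk
        END
        CLOSE pk_cursor
        DEALLOCATE pk_cursor

        -- Aggregate (or join) the much smaller temporary table.
        SELECT SUM(amount) AS total_amount FROM #work_set
    END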

This approach has taken an operation on 3+ million rows, where the indexes were horribly defined and a required weekly table scan took a whopping 15 hours, down to 15 minutes, without changing the table structure or the indexes. The result set was an aggregated 1 million rows. Eventually I was able to convert the process to a view and a single-pass extraction/aggregation. The table was not partitioned, it was extremely wide, and as I said, the index statistics were not up to date, nor were the indexes well defined.


5.4. Parallel Query, Partitioned Query

Another forgotten aspect is the parallel query. It is frequently forgotten that in order to execute a parallel query, partitioning must be in place, not to mention multiple engines (CPUs). A parallel query simply means that the data can be retrieved in parallel by breaking the query down into several pieces; each CPU then fires its piece of the query against a different section/partition of information. This is only one example, and there are many other ways to describe what happens with a parallel query, but it is basically a query whose work can be partitioned.

In order to execute a parallel query, the environment must be tuned properly. Multiple CPUs must be available, as well as multiple controllers. Another piece is that the disk should be in RAW format with RAID 0+1. If any one of these elements is not in place, the engine has a difficult time executing parallel queries, because the cost becomes prohibitive. For instance, if the RDBMS engine would choose to execute your query in parallel, but then found that the disk was RAID 5, it would determine that the cost of competing I/O across multiple queries would prevent it from running any faster. In fact, it would run slower than a single query, so the RDBMS engine would choose NOT to run the query in parallel.

What would happen if it did run in parallel? Let’s say it split the query into 4 equal pieces (partitions), then attempted to gather data from a non-partitioned disk layout (even if the table itself were partitioned, this would not work). Each of the 4 queries would ask the disk for its corresponding information, and the disk would thrash, attempting to locate each piece of information for each piece of the query across all the disks and multiple stripes. If the disk were truly RAID 0+1 and RAW format, the query might find that certain partitions lie on different disks; because of this, each disk can operate independently and return the answer to its piece of the query quickly. Again, this is an important concept; we simply want to raise awareness here in the VLDB series. The series on retrieving your information will cover this in depth with a case study.
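
For reference, this is roughly what requesting a parallel query looks like in Oracle-style syntax. The table name and the degree of parallelism are illustrative assumptions, and other engines expose parallelism through configuration rather than hints.

    -- Ask for the query's work to be split four ways.
    SELECT /*+ PARALLEL(s, 4) */ region, SUM(amount)
      FROM sales s
     GROUP BY region;

    -- Or declare a default degree of parallelism on the table itself.
    -- The optimizer can still refuse to parallelize if the I/O cost
    -- (for example, on RAID 5) outweighs the benefit.
    ALTER TABLE sales PARALLEL 4;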


5.5. Outer Join Techniques

One of the last query items frequently overlooked in tuning is the outer join (sometimes called an outside join). On occasion an outer join can be faster than an equi-join (equal join). An outer join (for those who don’t know) is basically a SQL construct where all rows from table 1 are retrieved, and any row from table 2 that has a match is also retrieved; when table 2 doesn’t have a match, table 1’s rows are still returned, and the columns selected from table 2 return NULL. There are ways to limit this type of join in the WHERE clause. Sometimes this type of join is faster because the optimizer chooses a better index, or a better index method. If you aren’t familiar with outer joins, or have never tried an outer join as an alternative while tuning a query, I suggest you examine them more closely. They are well documented in many different books, including the manuals that come with your RDBMS engine.
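
For illustration, with hypothetical tables (ANSI syntax shown; some engines of this era still use (+) or *= notation for outer joins):

    -- Equi-join: returns only the customers that have orders.
    SELECT c.customer_id, c.last_name, o.order_id
      FROM customer c, orders o
     WHERE o.customer_id = c.customer_id;

    -- Outer join: returns every customer; the order columns come back
    -- NULL where no match exists.
    SELECT c.customer_id, c.last_name, o.order_id
      FROM customer c
      LEFT OUTER JOIN orders o ON o.customer_id = c.customer_id;

    -- Limiting the join in the WHERE clause, as mentioned above: adding
    --   WHERE o.order_id IS NULL
    -- keeps only the customers with no orders at all.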


6. Conclusions

Welcome to the final discussion on VLDB Concepts. In this chapter of the series we provide some concluding remarks. If you have not been reading the series from the beginning, we recommend you take
a moment to do so. The concepts in this chapter reference previous chapters.


6.1. Impacts on the Environment: A Recap

What we have discussed in this series are the concepts behind VLDB. We’ve introduced hardware, software, threading, and some other basic concepts that are extremely important to an understanding of VLDB. These concepts have a huge impact on the environment within the machines running VLDB today. Please take a minute to re-read some of the points summarized below. We hope you’ll find them useful.

  • We are seeing rapid change in both technology and the requirements for analysis of massive sets of information. More and more businesses are looking at the related issues and recognizing that the patterns, trends, and analysis of this information are becoming critical to the success of the business.
  • Users need to understand that the questions they are beginning to ask about the content (specifically the historical patterns of the content) require more and more analysis.
  • The bottom line is: architecture, engineering, and requirements are extremely important in considering VLDB.
  • Sometimes the configuration of the RDBMS acts like a speed governor on a car: no matter what is done to the hardware or software configuration, even adding RAM, speed stays constant.
  • Without the proper configuration and setup, the system could fail.

These points are made throughout the series. The bottom line is that more and more information is traveling through these systems, and environment and configuration are an important part of their success (or failure). The impacts are many (and great), and the time to begin thinking about these things is now. At the current rate of change we are doubling our RDBMS size every 6 months or less (on average). You will be in VLDB territory before you know it, and if it catches you off-guard, well… you know the rest of the story.


6.2. Storage Devices: Things to Think About

There are new types of storage devices available today, and they are growing in capacity as well. In fact, within the next 8 months there will be so much storage available on disk that near-line and off-line storage may no longer be a necessity (except to back information up, or roll it off). Just remember: information that isn’t on-line doesn’t help anybody (it can’t be used to answer questions). Also remember: data compressed in any format or form must be uncompressed before it can be used; while it sits compressed, the information is unusable. What’s coming with storage devices? Bigger, faster, better. Of course it will come at a cost, but it’s definitely something to watch.

How can different storage devices affect my VLDB? The architecture inside the hardware must abide by VLDB rules: partitioned hardware to begin with, with parallel processing capabilities and buffering mechanisms. Without this kind of forethought, a standard storage device could cause failure. These standard devices usually push this logic off onto the CPUs of the machines they are attached to, which then requires expensive clustering hardware so that the disks can operate in a “shared nothing” architecture. Shared nothing is one of the successful methods of the future; clustering provides an alternative, but it may not be the answer. Dividing and conquering the problem is a good thing, but if the storage device pushes this work off the device and onto the CPU, it not only takes resources away from the RDBMS engine, it also means the data must be transferred to the machine just to get it onto another storage device. Internal transfer of large amounts of information within a local device (never hitting the network or machine resources) is critical to the success of VLDB and data sharing.

© 2001-2002 Core Integration Partners, Inc.
www.coreintegration.com

Dan Linstedt

Cofounder of Genesee Academy, RapidACE, and BetterDataModel.com, Daniel Linstedt is an internationally known expert in data warehousing, business intelligence, analytics, very large data warehousing (VLDW), OLTP, and performance and tuning. He has been the lead technical architect on enterprise-wide data warehouse projects and refinements for many Fortune 500 companies. Linstedt is an instructor of The Data Warehousing Institute and a featured speaker at industry events. He is a Certified DW2.0 Architect. He has worked with companies including IBM, Informatica, Ipedo, X-Aware, Netezza, Microsoft, Oracle, Silver Creek Systems, and Teradata. He is trained in SEI/CMMi Level 5, and is the inventor of The Matrix Methodology and the Data Vault data modeling architecture. He has built expert training courses, trained hundreds of industry professionals, and is the voice of Bill Inmon’s blog at http://www.b-eye-network.com/blogs/linstedt/.
