Did you know that the order in which you code your predicates can have an impact on query performance? It is usually a minimal impact, but it may buy you a couple of microseconds for a very performance-critical query. In order to use predicate ordering to your advantage, however, you need to be armed with some basic information on how DB2 evaluates predicates as it processes your SQL.
So, before we continue, let's review the order in which DB2 evaluates predicates at execution time. DB2 will evaluate indexable predicates first: matching predicates before non-matching. Then it evaluates Stage 1 predicates, and finally Stage 2 predicates. Within each of these four groups, DB2 will evaluate equal predicates, then BETWEEN and NOT NULL predicates, and finally any other predicates. If more than one predicate exists within a group, DB2 will evaluate them in the physical order in which they are coded in the SQL statement.
The re-ordering of predicates to take advantage of this situation should be considered only as a last resort. When implemented, the technique will usually shave only a small amount from the query's execution time. It is also important to note that predicate order will not impact a query's access path: it will remain unchanged (as shown in the PLAN_TABLE).
Now, how can we use this to our advantage? Consider the following query:
SELECT LAST_NAME, FIRST_NAME
FROM EMPLOYEE
WHERE SEX = 'M'
AND TITLE = 'MANAGER';
For the purposes of this discussion, assume that no index exists for either of the columns coded in the predicates. The two predicates are therefore of the same type: Stage 1, equal predicates. Furthermore, we know our data: in our organization there is approximately a 50-50 split between males and females, and 15% of all employees are managers.
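If you are not sure of these percentages, a quick look at the data (or at the catalog statistics) can tell you. A minimal sketch, assuming the EMPLOYEE table and column names used in the example:

-- Rough check of the data distribution for the two predicate columns
SELECT SEX, COUNT(*) AS ROW_COUNT
FROM EMPLOYEE
GROUP BY SEX;

SELECT TITLE, COUNT(*) AS ROW_COUNT
FROM EMPLOYEE
GROUP BY TITLE;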
To optimize this query, then, we can swap the two predicates to achieve better performance. So the query becomes:
SELECT LAST_NAME, FIRST_NAME
FROM EMPLOYEE
WHERE TITLE = 'MANAGER'
AND SEX = 'M';
Why should this query outperform the previous version? Well, assume we have 100,000 employees. If DB2 retrieves 50% of the rows (SEX = 'M') and then retrieves 15% of those 50% (TITLE = 'MANAGER'), we will have processed 57,500 rows:

( 100000 * 0.5 ) + ( ( 100000 * 0.5 ) * 0.15 ) = 57,500
But if, instead, DB2 were to retrieve 15% of the rows first (TITLE = 'MANAGER') and then 50% of those (SEX = 'M'), we will have processed only 22,500 rows:

( 100000 * 0.15 ) + ( ( 100000 * 0.15 ) * 0.5 ) = 22,500
Obviously, it is better for fewer rows to qualify early, thereby reducing the answer set and the number of rows that will have to be subsequently scanned.
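You can also verify for yourself that swapping the predicates does not change the access path by running EXPLAIN on both versions of the query and comparing the PLAN_TABLE output. A minimal sketch, assuming a PLAN_TABLE already exists under your authorization ID (the QUERYNO values are arbitrary):

-- EXPLAIN both versions of the query under different QUERYNO values
EXPLAIN PLAN SET QUERYNO = 101 FOR
SELECT LAST_NAME, FIRST_NAME
FROM EMPLOYEE
WHERE SEX = 'M'
AND TITLE = 'MANAGER';

EXPLAIN PLAN SET QUERYNO = 102 FOR
SELECT LAST_NAME, FIRST_NAME
FROM EMPLOYEE
WHERE TITLE = 'MANAGER'
AND SEX = 'M';

-- The access path details should be identical for both versions
SELECT QUERYNO, ACCESSTYPE, MATCHCOLS, ACCESSNAME, PREFETCH
FROM PLAN_TABLE
WHERE QUERYNO IN (101, 102)
ORDER BY QUERYNO, QBLOCKNO, PLANNO;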
Wednesday, April 01, 2015
SQL Performance Basics: Part 3, Eliminating Predicates?
It is technically possible to learn how to write SQL statements without having an in-depth knowledge of the data. However, the better you know your data, the better your application performance will be. Let's look at a simple example.
Consider the following SQL statement:
SELECT FIRST_NAME, LAST_NAME, GRADE_LEVEL
FROM EMPLOYEE
WHERE TITLE = 'VP'
AND GRADE_LEVEL >= 10;
This statement retrieves all rows for vice presidents who are at a grade level of 10 or above. But what if we know more about our data? Say, for example, that the starting grade level for vice presidents in our organization is 10. Therefore, it is impossible for anyone with a lower grade level to achieve the title of VP. That makes the second predicate redundant in this case. If we remove this predicate it will not logically change the results, but with less checking of the data required (DB2 won't have to check for GRADE_LEVEL >= 10), performance may be improved.
It is important though that you truly do "know your data." For example, it is not sufficient to merely note that for current rows in the EMPLOYEE table, no vice presidents are at a grade level below 10. This may just be a coincidence. Do not base your knowledge of your data on the current state of the data. You must truly know your business criteria to determine that a correlation between two columns (such as between GRADE_LEVEL and TITLE) actually exists. And only then should you modify your SQL. Failure to do so can result in incorrect results being returned.
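As a quick sanity check (remembering that the current data can never prove a business rule), you could look for rows that would violate the assumed correlation. A count of zero is consistent with the rule but does not confirm it:

-- Count current rows that would violate the assumed TITLE/GRADE_LEVEL rule
SELECT COUNT(*) AS VP_BELOW_GRADE_10
FROM EMPLOYEE
WHERE TITLE = 'VP'
AND GRADE_LEVEL < 10;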
By reducing the number of predicates on your SQL statements you may be able to achieve better performance by:
- Reducing BIND (and REBIND) time because fewer options will probably need to be examined by the DB2 Optimizer.
- Reducing execution time due to a smaller path length caused by the removal of search criteria from the optimized access path. DB2 will always make sure that it processes each predicate coded for the SQL statement. Removing predicates removes work -- and less work equals less time to process the SQL.
Also, if the predicate was already there and you are removing it, comment out the predicate instead and be sure to document exactly why you are doing so in the code... that way, when somebody else takes a look at it later they'll know what happened and why.
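For example, a hedged sketch of how the statement might look with the redundant predicate commented out and documented (the wording of the comment is only an illustration):

SELECT FIRST_NAME, LAST_NAME, GRADE_LEVEL
FROM EMPLOYEE
WHERE TITLE = 'VP';
-- AND GRADE_LEVEL >= 10
--   Predicate removed: per our business rules, the starting grade level for
--   VPs is 10, so this check is redundant. Restore it if the policy changes.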
Tuesday, March 24, 2015
Time to Start Your IDUG in Philadelphia Planning
Spring is in the air... well, at least it is South of the Mason-Dixon line... and that means it is time to plan your trip to this year's IDUG North American DB2 Tech Conference.
Anybody who has ever attended an IDUG conference knows about all of the good things you can expect to encounter, experience and learn at the event. That includes technical sessions on all of the latest and greatest DB2 technologies and features, networking opportunities to meet IBM developers and industry consultants, and the vendor exhibit hall where you can learn about software and tools to help you get the most out of DB2... not to mention picking up a few tchotchkes and mementos.
Additionally, there are Special Interest Groups (where you can discuss the latest industry trends and topics with other interested technicians), Hands-On Labs (delivering hands-on training led by IBM), and the opportunity to take free IBM certification exams.
But there's even more... not only is this year's conference in Philadelphia, PA, a new venue for IDUG, but the half- and full-day educational seminars (that used to require an additional fee) are included in your registration fee.
I'll be presenting my DB2 Performance Roadmap presentation at this year's IDUG, so be sure to stop in to say "Hi" and chat about DB2, big data, or your favorite topic du jour!
The IDUG DB2 Tech Conference is the place to be to learn all about DB2 from IBMers, gold consultants, IBM champions, end users, and more. With all of this great stuff going on in Philadelphia this May, why wouldn't you want to be there!?!?
Download the IDUG brochure for more details.
Monday, March 16, 2015
SQL Performance Basics: Part 2, Rely on Indexes
Perhaps the single most important aspect of SQL tuning is indexing. All developers should know all of the indexes that exist on any table upon which they write SQL statements. When an index exists on a column (or columns), DB2 can use the index to reduce I/O instead of scanning the entire table to satisfy a predicate.
For critical queries, where no useful indexes exist, it usually makes sense to create an index to improve query performance. Of course, developers should enlist the assistance of a DBA to ensure the creation of appropriate indexes.
Let's start with some basics. For example, consider this SQL statement:
SELECT LASTNAME, SALARY FROM EMP WHERE EMPNO = '000010' AND DEPTNO = 'D01'
What index or indexes would make sense for this simple query? First, think about all the possible indexes that you could create. Your first short list probably looks something like this:
- Index1 on EMPNO
- Index2 on DEPTNO
- Index3 on EMPNO and DEPTNO
This is a good start, and Index3 is probably the best of the lot. It lets DB2 use the index to immediately look up the row or rows that satisfy the two simple predicates in the WHERE clause. Of course, if you already have a lot of indexes on the EMP table, you might want to examine the impact of creating yet another index on the table. There are several factors to consider.
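For instance, a minimal sketch of what Index3 might look like (the index name is an assumption, and options such as uniqueness, buffer pool, and free space are omitted):

-- Composite index supporting both equality predicates in the query
CREATE INDEX XEMP_EMPNO_DEPTNO
ON EMP (EMPNO, DEPTNO);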
Indexing Factors to Consider
For starters, you need to weigh the impact of data modification. DB2 will automatically maintain every index you create. This means every INSERT and every DELETE to this table will insert and delete not just from the table, but also from its indexes. And if you UPDATE the value of a column that is in an index, you also update the index. So, indexes speed the process of retrieval but slow down modification.
You should also consider the impact to any existing indexes and applications before creating a new index. If an index already exists on EMPNO or DEPTNO, it might not be wise to create another index on the combination. However, it might make sense to change the other index to add the missing column. But not always, because the order of the columns in the index can make a big difference depending on the query. For example, consider this query:
SELECT LASTNAME, SALARY FROM EMP WHERE EMPNO = '000010' AND DEPTNO > 'D01';
In this case, EMPNO should be listed first in the index. And DEPTNO should be listed second, allowing DB2 to do a direct index lookup on the first column (EMPNO) and then a scan on the second (DEPTNO) for the greater-than.
Furthermore, if indexes already exist for both columns (one for EMPNO and one for DEPTNO), DB2 can use them both to satisfy this query so creating another index might not be necessary.
Finally, you should consider the importance of the query you are attempting to tune. The more important the query, the more you might want to tune by index creation. Of course, the term "importance" is not always easy to quantify. If you are coding a query that the CEO will run every day, you want to make sure it delivers optimal performance. So building indexes for that particular query is important. On the other hand, a query for a clerk might not necessarily be weighted as high, so that query might have to make do with the indexes that already exist. Of course, the decision depends on the application's importance to the business--not just on the user's importance. If the clerk runs business critical transactions and the CEO is simply printing off a report for later consumption, then the importance varies... right?
Index Overloading
Index design involves much more than I have covered so far. For example, you might consider index overloading to achieve index-only access. If all the data that a SQL query asks for is contained in the index, DB2 might be able to satisfy the request using only the index. Consider our previous SQL statement. We asked for LASTNAME and SALARY, given information about EMPNO and DEPTNO. And we also started by creating an index on the EMPNO and DEPTNO columns. If we include LASTNAME and SALARY in the index as well, we never need to access the EMP table because all the data we need exists in the index. This technique can significantly improve performance because it cuts down on the number of I/O requests.
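A hedged sketch of such an overloaded index (the index name is an assumption; depending on your DB2 version you might instead consider INCLUDE columns on a unique index rather than adding the columns to the key):

-- Overloaded index: EMPNO and DEPTNO satisfy the predicates, while LASTNAME
-- and SALARY are carried along so the query can be satisfied index-only
CREATE INDEX XEMP_IXONLY
ON EMP (EMPNO, DEPTNO, LASTNAME, SALARY);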
Keep in mind that making every query an index-only access is not prudent or even possible. You should save this technique for particularly troublesome or important SQL statements.
The Bottom Line
A solid understanding of the indexes that exist -- and how additional indexes can help -- is vital to the performance of your DB2 applications. Take the time to understand the indexes that exist for your applications and you can become a better developer, becoming more valuable to your organization and earning the respect of your peers!
Tuesday, March 10, 2015
SQL Performance Basics: Ask For Only What You Need
As you write SQL statements to access DB2 data, there are certain very simple, yet important rules to follow to encourage efficient SQL. Of course, SQL performance is a complex topic and to understand every nuance of how SQL performs can take a lifetime to master. That being said, adhering to the following simple rules puts you on the right track to achieving high-performing DB2 applications.
The first rule is to always provide only the exact columns that you need to retrieve in the SELECT-list of each SQL SELECT statement. Sometimes this is shortened to “do not use SELECT *”. The shorthand SELECT * means retrieve all columns from the table(s) being accessed. This is fine for quick and dirty queries but is bad practice for inclusion in application programs because DB2 tables may need to be changed in the future to include additional columns. SELECT * will retrieve those new columns, too, and your program may not be capable of handling the additional data without requiring time-consuming changes.

Even if the program needs every column, it is better to explicitly ask for each column by name in the SQL statement for clarity and to avoid accessing new and unwanted columns.
DB2 consumes additional resources for every column you specify to be returned. If the program does not need the data, it should not ask for it.
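For example, if a program only needs an employee's number, last name, and salary, a hedged sketch of the two approaches (using the EMP table from the example that follows):

-- Avoid: returns every column, including any added to the table in the future
SELECT *
FROM EMP;

-- Better: ask only for the columns the program actually uses
SELECT EMPNO, LASTNAME, SALARY
FROM EMP;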
The second part of asking only for what you need is using the WHERE clause to filter data in the SQL instead of bringing it all into your program and filtering it there, which is a common rookie mistake. It is much better for DB2 to filter the data before returning it to your program, because DB2 uses additional I/O and CPU resources to obtain each row of data. The fewer rows passed to your program, the more efficient your SQL will be. So, the following SQL
SELECT EMPNO, LASTNAME, SALARY
FROM EMP
WHERE SALARY > 50000.00;
is better than simply reading all of the data without the WHERE clause and then checking each row in your program to see if the SALARY is greater than 50000.00.