Monday, May 21, 2007

New OLAP Capabilities [DB2 9 for z/OS]

DB2 9 for z/OS offers several new SQL features focused on OLAP functionality. These OLAP specifications provide the ability to return ranking, row numbering, and existing aggregate function information as a scalar value in the result of a query. You can include OLAP specifications in an expression, in a select-list, or in the ORDER BY clause of a select-statement.


The result to which the OLAP specification is applied is the result table of the innermost subselect that includes the OLAP specification. OK, at this point you might well be asking, “So, what are these new OLAP things and what can they do for me?” Let’s take them one at a time.


First up, we have the RANK and DENSE_RANK functions. These functions specify that the ordinal rank of a row within the specified window is computed. Rows that are not distinct with respect to the ordering within the specified window are assigned the same rank. Here is a quick example:


SELECT
EMPNO,
LASTNAME,
FIRSTNAME,
SALARY+BONUS+COMM AS TOTAL_COMP,
RANK() OVER(ORDER BY SALARY+BONUS+COMM DESC) AS RANK_COMP
FROM EMP
WHERE SALARY+BONUS+COMM > 30000
ORDER BY LASTNAME;


This query will rank employees who have total compensation greater than $30,000, but order the results by last name. This allows you to rank data differently than the order in which it is presented.

You can define the results of ranking with gaps in the sequential rank numbering by using the RANK specification, or without gaps, by using the DENSE_RANK specification.

The difference between the two can be a bit difficult to decipher at first. Think of it this way: RANK specifies that the rank of a row is defined as 1 plus the number of rows that strictly precede the row. So, if two or more rows are not distinct you will get gaps in the ranking. With DENSE_RANK the rank of a row is defined as 1 plus the number of preceding rows that are distinct with respect to the ordering. In this case there will be no gaps in the sequential rank numbering. Consider the following data:


EMPNO  LASTNAME  FIRSTNAME  SALARY  BONUS   COMM
-----  --------  ---------  ------  ------  ------
100    MULLINS   CRAIG      500000  100000  400000
200    SHAW      DONNA       25000   10000       0
300    ABBOTT    RANDY      700000  300000       0
400    WISER     BUD         10000       0       0
500    GREEN     RACHEL      40000    2000    5000



The results of the previous query run against this data would look like this:


300  ABBOTT   RANDY    1000000  1
500  GREEN    RACHEL     47000  3
100  MULLINS  CRAIG    1000000  1
200  SHAW     DONNA      35000  4

(WISER does not appear in the result because his total compensation of 10000 fails the WHERE clause.)


Note that ABBOTT and MULLINS both earn the most, and their amounts are identical, so they share the number one ranking. Because this is not a dense rank, the next rank value is 3, not 2.
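
For comparison, here is what a DENSE_RANK version of that query might look like (a sketch, using the same EMP columns as the earlier example):

SELECT
EMPNO,
LASTNAME,
FIRSTNAME,
SALARY+BONUS+COMM AS TOTAL_COMP,
DENSE_RANK() OVER(ORDER BY SALARY+BONUS+COMM DESC) AS DRANK_COMP
FROM EMP
WHERE SALARY+BONUS+COMM > 30000
ORDER BY LASTNAME;

Run against the same sample data, ABBOTT and MULLINS would still share rank 1, but GREEN would be ranked 2 and SHAW 3, because dense ranking leaves no gaps.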


The next OLAP specification introduced by DB2 9 for z/OS is ROW_NUMBER. ROW_NUMBER specifies that a sequential row number is computed for the row that is defined by the ordering, starting with 1 for the first row. If the ORDER BY clause is not specified in the window, the row numbers are assigned to the rows in an arbitrary order, as the rows are returned. This satisfies an often-requested capability to simply assign a number to the result rows of a query. Row numbers also enable easy formulation of queries for computing histogram statistics and they enable formation of other OLAP specifications (for example, moving sums, moving averages, and so on).


Here is a sample query using ROW_NUMBER:



SELECT
ROW_NUMBER() OVER(ORDER BY WORKDEPT, LASTNAME) AS NUMBER,
LASTNAME,
SALARY
FROM EMP
ORDER BY WORKDEPT, LASTNAME;


The result of a RANK, DENSE_RANK, and ROW_NUMBER specification is BIGINT, and the result cannot be null.


One more thing to consider for the new OLAP features is the ability to partition the results for an OLAP specification. This is specified using the PARTITION BY clause with a partitioning-expression, which is an expression used to define the partitioning of the result table. Each column name that is referenced in a partitioning-expression must unambiguously reference a column of the result table of the subselect that contains the OLAP specification. A partitioning-expression cannot include a scalar-fullselect, a function that is not deterministic, or a function that has an external action.


Here is an SQL example using partitioning:



SELECT
WORKDEPT,
EMPNO,
LASTNAME,
FIRSTNAME,
EDLEVEL,
DENSE_RANK() OVER
(PARTITION BY WORKDEPT ORDER BY EDLEVEL DESC) AS RANK_EDLEVEL
FROM EMP
ORDER BY WORKDEPT, LASTNAME;


This SQL ranks the employees within each department according to their education level. And because DENSE_RANK is specified, multiple employees with the same education level in a department share a rank without creating a gap in the ranking sequence.


Sometimes RANK, DENSE_RANK, and ROW_NUMBER are called window functions. An OLAP specification is not valid in a WHERE, VALUES, GROUP BY, HAVING, or SET clause. An OLAP specification cannot be used as an argument of an aggregate function. When invoking an OLAP specification, a window is specified that defines the rows and order of the rows over which the function is applied.
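
Because an OLAP specification cannot appear in a WHERE clause, the usual workaround is to compute the rank in a nested table expression and filter on it in the outer query. Here is a sketch of that technique (the RANKED_EMPS correlation name is just illustrative), returning the employees whose total compensation ranks in the top three:

SELECT LASTNAME, TOTAL_COMP, RANK_COMP
FROM (SELECT LASTNAME,
             SALARY+BONUS+COMM AS TOTAL_COMP,
             RANK() OVER(ORDER BY SALARY+BONUS+COMM DESC) AS RANK_COMP
      FROM EMP) AS RANKED_EMPS
WHERE RANK_COMP <= 3
ORDER BY RANK_COMP;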

Tuesday, May 15, 2007

Implicitly Hidden Columns [DB2 9 for z/OS]

Another nice new feature deep in the bowels of DB2 9 for z/OS is the ability to hide columns from the SELECT * statement. As far back as anyone can remember the advice has been given to avoid using SELECT * in application programs. But I still see it every now and then.

Now don't get me wrong, SELECT * is a nice shorthand when you are writing quick & dirty SQL using SPUFI or some other ad hoc SQL tool. But it does not belong in your application programs, because a subsequent ALTER to add a column will cause those programs to fail: more columns are returned than the programmer coded for.

"OK," you may be asking, "so what? I thought you were writing about DB2 9 here?" Fair enough. DB2 9 for z/OS adds the ability to code IMPLICITLY HIDDEN on the column specification of a CREATE or ALTER TABLE statement. By coding IMPLICITYLY HIDDEN, the column will not be visible in the result of a SQL statement unless you explicitly refer to the column by name. So, SELECT * will not return any implicitly hidden columns to the result set.

For example, in the following table C2 is implicitly hidden and will not be returned as the result of SELECT *:

CREATE TABLE T1
(C1 SMALLINT NOT NULL,
C2 CHAR(10) IMPLICITLY HIDDEN,
C3 TIMESTAMP)
IN DB.TS;
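
To illustrate, given the T1 definition above, the difference in behavior would look something like this:

-- Returns only C1 and C3; the implicitly hidden column C2 is omitted
SELECT * FROM T1;

-- Returns C2 as well, because it is referenced explicitly by name
SELECT C1, C2, C3 FROM T1;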

This has some obvious benefits. First of all, if you are at one of those shops where programmers did not follow the "no SELECT * in programs" rule, then you can simply add every new column with the IMPLICITLY HIDDEN attribute and those SELECT * statements will keep on running because they won't see the new columns.

Or, you might want to take a more comprehensive approach, and specify every column (except one, perhaps the key) of every new (or modified) table as IMPLICITLY HIDDEN. If every column except the key is hidden, then a SELECT * won't return anything except the key - you'll have to explicitly specify all of those other columns to get them into your result sets. Of course, this negates the ability to use SELECT * for quick & dirty SPUFI queries because implicitly hidden columns will be hidden there, too.

There are a few caveats on the usage of IMPLICITLY HIDDEN. You cannot specify IMPLICITLY HIDDEN for a column that is defined as a ROWID, or a distinct type that is based on a ROWID. Additionally, IMPLICITLY HIDDEN must not be specified for all columns of a table.

Monday, May 14, 2007

Index on Expressions [DB2 9 for z/OS]

DB2 9 for z/OS offers, for the first time, the ability to create an index on data that is not technically in the table. At this point, you may well be asking “What does that mean?” and “Why would I do that?” I’ll attempt to answer both of those questions.

Basically, the neat new feature is an extension to the CREATE INDEX statement that lets you create an index on an expression. That begs the question of just what is an expression… Well, an expression can be as simple as a column reference, or it can be a built-in function invocation or even a more general expression, with certain restrictions.

The expression that is being indexed is referred to as the key-expression. The maximum length of the text string of each key-expression is 4000 bytes after conversion to UTF-8. Each key-expression must contain at least one reference to a column of the table named in the ON clause of the index. Referenced columns cannot be LOB, XML, or DECFLOAT data types, nor can they be a distinct type that is based on one of those data types. Referenced columns cannot have a FIELDPROC, nor can they have a SECURITY LABEL. Furthermore, the key-expression cannot include any of the following:
  • a subquery
  • an aggregate function
  • a function that is not deterministic
  • a function that has an external action
  • a user-defined function
  • a sequence reference
  • a host variable
  • a parameter marker
  • a special register
  • a CASE expression
  • an OLAP specification
Unlike a simple index, the index key of an index on expression is composed by concatenating the result (also known as a key-target) of the expression that is specified in the ON clause. An index that is created on an expression lets a query take advantage of index access (if the index is chosen by the optimizer) and avoid a table space scan.

Perhaps an example would help to clarify this topic. Consider this sample DDL:

CREATE INDEX XUPLN
ON EMP
(UPPER(LASTNAME, 'En_US'))
USING STOGROUP DSN8G910
PRIQTY 360 SECQTY 36
ERASE NO
COPY YES;

This example would create an index on the LASTNAME column of the EMP table, but would index the data after applying the UPPER function. This is useful if you store the data in mixed case but submit queries with parameter markers (or host variables) in upper case only. By indexing the data as upper case, the index better matches your queries.
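
For example, a query along these lines (a sketch, assuming a :SEARCH-NAME host variable that contains an upper case value) could qualify for a match on the XUPLN index instead of requiring a table space scan:

SELECT EMPNO, LASTNAME, FIRSTNAME
FROM EMP
WHERE UPPER(LASTNAME, 'En_US') = :SEARCH-NAME;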

Query performance can be enhanced if the optimizer chooses that index. When you use an index on an expression, the results of the expression are evaluated when data is inserted (or when the index is rebuilt) and are kept in the index. If the optimizer chooses that index, the predicate is evaluated against the values that are stored in the index. As a result, the overhead of evaluating the expression for each row at run time is avoided.

This is a nice new feature of DB2 9 for z/OS that you should consider if you have queries that search on expressions.

Monday, April 30, 2007

Database Definition on Demand [DB2 9 for z/OS]

As you probably know, online schema evolution (sometimes referred to as “online schema change”) was one of the key new features of DB2 V8. But, as its name implies, its capabilities continue to evolve. With V9, online schema evolution expands to simplify more types of database definition changes. The new term IBM is using for this in V9 is Database Definition On Demand (DDOD).

One of the nice new components provided by DDOD in V9 is that online table space reorganization is significantly improved. Prior to V9, reorganizing just a few partitions of a partitioned table space could require a lengthy BUILD2 phase, during which the affected logical partitions of non-partitioned secondary indexes were unavailable. V8 removed that outage for data-partitioned secondary indexes (DPSIs), and now V9 removes the BUILD2 phase for all types of secondary indexes.

Another new DDOD capability supports replacing one table quickly with another. This is accomplished via cloning and the technique can even avoid the need to REBIND packages. Cloning allows you to generate, in the same table space, a new table with the same structure as the original table. After creating the clone you can do with it what you want – LOAD, DELETE, INSERT, UPDATE, etc. data – and then exchange the clone table name with the current table name. In this way you can keep the existing table operational while you work on the next “generation” of that table in its clone. When the clone is ready, you EXCHANGE it with the existing table. Nice, huh?
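
The DDL involved is straightforward. Here is a hedged sketch, assuming a base table named EMP_CURRENT and a clone named EMP_NEXT (both names are just placeholders):

-- Create the clone in the same table space as the base table
ALTER TABLE EMP_CURRENT ADD CLONE EMP_NEXT;

-- After loading and preparing the data in EMP_NEXT, swap the two
EXCHANGE DATA BETWEEN TABLE EMP_CURRENT AND EMP_NEXT;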

How about being able to rename a column within your table or rename an index? V9 provides the ability to do both! No longer do you have to DROP and re-CREATE in order to rename columns and indexes.
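
For example (the column and index names here are just placeholders):

ALTER TABLE EMP RENAME COLUMN COMM TO COMMISSION;

RENAME INDEX XEMP2 TO XEMP_WORKDEPT;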

DB2 V9 also introduces a new type of table space that combines the attributes of segmented and partitioned table spaces: the universal table space. Universal table spaces offer improved space management for variable length rows because, like segmented table spaces, they use space map pages that carry more free space information. Also like segmented table spaces, universal table spaces deliver improved mass delete performance, and the table segments can be reused immediately after a mass delete.

There are two types of universal table spaces:

  • Partition-by-growth: this type of universal table space partitions the data as it grows, without the need to specify key ranges. It is beneficial for tables that will grow over time and/or need the additional limits afforded by partitioning, but that can still benefit from the space management characteristics of segmented table spaces. Like any universal table space, it can contain only a single table.
  • Range-partitioned: this type of universal table space requires a key range for partitioning, and it too can contain only a single table. This is basically adding segmentation to the existing partitioned table space. (A sketch of the DDL for both types appears below.)
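
Here is a hedged sketch of the DDL for each type (the database, table space, and sizing values are illustrative). In V9, specifying MAXPARTITIONS produces a partition-by-growth table space, while specifying both NUMPARTS and SEGSIZE produces a range-partitioned universal table space:

-- Partition-by-growth universal table space
CREATE TABLESPACE TSPBG IN MYDB
MAXPARTITIONS 10
LOCKSIZE ANY;

-- Range-partitioned universal table space
CREATE TABLESPACE TSRANGE IN MYDB
NUMPARTS 12
SEGSIZE 64
LOCKSIZE ANY;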

Additionally, you can define SMS constructs (DATACLAS, MGMTCLAS, and STORCLAS) on a STOGROUP, and you can ALTER those constructs as well. Table space and index logging parameters can also be altered.
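
For instance, here is a sketch of both capabilities (the storage group, VCAT, SMS class, and table space names are illustrative and assume your SMS environment defines those classes):

-- SMS constructs on a storage group
CREATE STOGROUP SGAPP1
VOLUMES('*') VCAT DSNC910
DATACLAS DCAPP1
MGMTCLAS MCAPP1
STORCLAS SCAPP1;

-- Altering the logging attribute of a table space
ALTER TABLESPACE MYDB.TSPBG NOT LOGGED;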

DB2 V9 even adds a new capability to change the DB2 early code without requiring an IPL.

So, with DB2 9 for z/OS we get more flexibility in modifying our database schemas. And that is a good thing, right?

Thursday, April 19, 2007

SELECT from DELETE, UPDATE, and MERGE [DB2 9 for z/OS]

Another nice new SQL feature provides the ability to SELECT from DELETE, UPDATE, and MERGE statements.

This capability is similar to the SELECT from INSERT feature that was introduced with DB2 V8. So, before looking at the new V9 feature, let’s review the V8 feature.

The ability to SELECT from an INSERT statement is an intriguing feature. To understand why this is so, we need to review some background details. In some cases, it is possible to perform actions on an inserted row before it gets saved to disk. For example, a BEFORE TRIGGER might change data before it is even recorded to disk. But the application program will not have any knowledge of this change that is made in the trigger. Identity columns and columns with defaults (especially user-defined defaults) have similar effects. What if the program needs to know the final column values? Prior to V8 this was difficult and inefficient to implement.

Well, the SELECT FROM INSERT feature introduced in DB2 V8 solves this problem. It allows you to both insert the row and retrieve the values of the columns with a single SQL statement. It performs very well because it performs both the INSERT and the SELECT as a single operation. Consider the following example:


SELECT COL5 INTO :C5-HV
FROM FINAL TABLE
(INSERT INTO SAMPLE_TABLE (COL1, COL2, COL5, COL7)
VALUES('JONES', 'CHARLES', CURRENT DATE, 'HOURLY')
);

The data is inserted as specified in the VALUES clause, and retrieved as specified in the SELECT. Without the ability to select COL5, the program would have no knowledge of the value supplied to COL5, because it was assigned using CURRENT DATE. With this new syntax the program can retrieve the CURRENT DATE value that was just inserted into COL5 without incurring additional overhead.

OK, on to V9. In this new version you can retrieve columns from rows that are modified via DELETE, UPDATE, and MERGE statements, thereby replacing multiple SQL calls with one. DB2 V9 allows a searched UPDATE or a searched DELETE statement in the FROM clause of a SELECT statement that is a subselect, or in the SELECT INTO statement. This capability allows a user or program to know which values were updated or deleted.
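
For example, here is a sketch of retrieving the rows removed by a DELETE (OLD TABLE is used because deleted rows have no final state; the department value is just an example):

SELECT EMPNO, LASTNAME, SALARY
FROM OLD TABLE
(DELETE FROM EMP
WHERE WORKDEPT = 'E21');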

And, if you recall from a few blog entries ago, DB2 9 for z/OS also adds the MERGE statement. Well, it also adds the ability to SELECT from the MERGE. This allows you to return all the rows that were either inserted or updated as a result of the MERGE.

Here is an example of a SELECT from an UPDATE:


SELECT SUM(SALARY) INTO :SAL-HV
FROM FINAL TABLE
(UPDATE EMP
SET SALARY = SALARY * 1.02
WHERE WORKDEPT = 'A01');


Prior to this capability you would have had to run the UPDATE statement and then, only after it finished, run a SELECT to add up the new salary values. Now, instead of multiple statements requiring multiple passes through the data, you can consolidate the work into one statement.

Nice, huh?