Tuesday, May 15, 2007

Implicitly Hidden Columns [DB2 9 for z/OS]

Another nice new feature deep in the bowels of DB2 9 for z/OS is the ability to hide columns from the SELECT * statement. As far back as anyone can remember, the advice has been given to avoid using SELECT * in application programs. But I still see it every now and then.

Now don't get me wrong, SELECT * is a nice shorthand when you are writing quick & dirty SQL using SPUFI or some other ad hoc SQL tool. But it does not belong in your application programs, because a subsequent ALTER that adds a column will cause those programs to fail: more columns are returned than the programmer coded the program to handle.

"OK," you may be asking, "so what? I thought you were writing about DB2 9 here?" Fair enough. DB2 9 for z/OS adds the ability to code IMPLICITLY HIDDEN on the column specification of a CREATE or ALTER TABLE statement. By coding IMPLICITYLY HIDDEN, the column will not be visible in the result of a SQL statement unless you explicitly refer to the column by name. So, SELECT * will not return any implicitly hidden columns to the result set.

For example, in the following table C2 is implicitly hidden and will not be returned as the result of SELECT *:

CREATE TABLE T1
(C1 SMALLINT NOT NULL,
C2 CHAR(10) IMPLICITLY HIDDEN,
C3 TIMESTAMP)
IN DB.TS;
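
To see the effect, compare these two queries against T1 (a quick sketch of the behavior described above):

SELECT * FROM T1;           -- returns C1 and C3 only
SELECT C1, C2, C3 FROM T1;  -- returns all three columns, including C2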

This has some obvious benefits. First of all, if you are one of those shops where programmers did not follow the no SELECT * in programs rule, then you can simply add every new column with the IMPLICITLY HIDDEN attribute and those SELECT * statements will keep on running because they won't see the new columns.
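
For example, adding a new column to our sample table with this attribute might look something like the following (a sketch; the column name and data type are hypothetical):

ALTER TABLE T1
ADD C4 DECIMAL(9,2) IMPLICITLY HIDDEN;

Existing SELECT * statements against T1 would continue to return only C1 and C3.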

Or, you might want to take a more comprehensive approach, and specify every column (except one, perhaps the key) of every new (or modified) table as IMPLICITLY HIDDEN. If every column except the key is hidden, then a SELECT * won't return anything except the key - you'll have to explicitly specify all of those other columns to get them into your result sets. Of course, this negates the ability to use SELECT * for quick & dirty SPUFI queries because implicitly hidden columns will be hidden there, too.

There are a few caveats on the usage of IMPLICITLY HIDDEN. You cannot specify IMPLICITLY HIDDEN for a column that is defined as a ROWID, or a distinct type that is based on a ROWID. Additionally, IMPLICITLY HIDDEN must not be specified for all columns of a table.

Monday, May 14, 2007

Index on Expressions [DB2 9 for z/OS]

DB2 9 for z/OS offers, for the first time, the ability to create an index on data that is not technically in the table. At this point, you may well be asking “What does that mean?” and “Why would I do that?” I’ll attempt to answer both of those questions.

Basically, the neat new feature is an extension to the CREATE INDEX statement that lets you create an index on an expression. That begs the question of just what is an expression… Well, an expression can be as simple as a column reference, or it can be a built-in function invocation or even a more general expression, with certain restrictions.

The expression that is being indexed is referred to as the key-expression. The maximum length of the text string of each key-expression is 4000 bytes after conversion to UTF-8. Each key-expression must contain at least one reference to a column of the table named in the ON clause of the index. Referenced columns cannot be LOB, XML, or DECFLOAT data types, nor can they be a distinct type that is based on one of those data types. Referenced columns cannot have FIELDPROCs, nor can they have a SECURITY LABEL. Furthermore, the key-expression cannot include any of the following:
  • subquery
  • aggregate function
  • non-deterministic function
  • function that has an external action
  • user-defined function
  • sequence reference
  • host variable
  • parameter marker
  • special register
  • CASE expression
  • OLAP specification

Unlike a simple index, the index key of an index on expression is composed by concatenating the results (also known as key-targets) of the expressions that are specified in the ON clause. An index that is created on an expression lets a query take advantage of index access (if the index is chosen by the optimizer) and avoid a table space scan.

Perhaps an example would help to clarify this topic. Consider this sample DDL:

CREATE INDEX XUPLN
ON EMP
(UPPER(LAST_NAME, 'En_US'))
USING STOGROUP DSN8G910
PRIQTY 360 SECQTY 36
ERASE NO
COPY YES;

This example would create an index on the LAST_NAME column of the EMP table, but would index on the data after applying the UPPER function. This is useful if you store the data in mixed case but submit queries with parameter markers (or host variables) in upper case only. By indexing the data as upper case the index better matches your queries.
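
For instance, a query something like the following (a sketch; the host variable name is made up, and it is assumed to contain an upper-case value) could qualify for index access via XUPLN, because the predicate expression matches the key-expression exactly:

SELECT EMPNO, LAST_NAME
FROM EMP
WHERE UPPER(LAST_NAME, 'En_US') = :LAST-NAME-HV;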

Query performance can be enhanced if the optimizer chooses that index. When you use an index on an expression, the results of the expressions are evaluated at insertion time or during an index rebuild and are kept in the index. If the optimizer chooses to use that index, the predicate is evaluated against the values that are stored in the index. As a result, the run-time overhead of evaluating the expression is eliminated.

This is a nice new feature of DB2 9 for z/OS that you should consider if you have queries that search on expressions.

Monday, April 30, 2007

Database Definition on Demand [DB2 9 for z/OS]

As you probably know, online schema evolution (sometimes referred to as “online schema change”) was one of the key new features of DB2 V8. But, as its name implies, its capabilities continue to evolve. With V9, online schema evolution expands to simplify more types of database definition changes. The new term IBM is using for this in V9 is Database Definition On Demand (DDOD).

One of the nice new components provided by DDOD in V9 is that online table space reorganization is significantly improved. Prior to V9, when reorganizing just a couple of partitions in a partitioned table space, the BUILD2 phase could take a long time to complete. V8 removed that outage for DPSIs, and now V9 removes the BUILD2 phase for all types of secondary indexes.

Another new DDOD capability supports replacing one table quickly with another. This is accomplished via cloning and the technique can even avoid the need to REBIND packages. Cloning allows you to generate, in the same table space, a new table with the same structure as the original table. After creating the clone you can do with it what you want – LOAD, DELETE, INSERT, UPDATE, etc. data – and then exchange the clone table name with the current table name. In this way you can keep the existing table operational while you work on the next “generation” of that table in its clone. When the clone is ready, you EXCHANGE it with the existing table. Nice, huh?
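
In case you are curious about the mechanics, the flow looks something like this (a sketch; the table names are hypothetical, so verify the exact syntax in the SQL Reference):

ALTER TABLE ORDERS
ADD CLONE ORDERS_CLONE;

-- LOAD, INSERT, UPDATE, DELETE, etc. against ORDERS_CLONE, then:

EXCHANGE DATA BETWEEN TABLE ORDERS AND ORDERS_CLONE;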

How about being able to rename a column within your table or rename an index? V9 provides the ability to do both! No longer do you have to DROP and re-CREATE in order to rename columns and indexes.
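
The idea is sketched below (hypothetical names; the RENAME COLUMN clause is part of ALTER TABLE, and the RENAME statement is extended to cover indexes):

ALTER TABLE EMP
RENAME COLUMN EMP_PHONE TO EMP_PHONE_NO;

RENAME INDEX XEMP1 TO XEMP1_OLD;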

DB2 V9 also introduces a new type of table space that combines the attributes of segmented and partitioned table spaces: the universal table space. Universal table spaces offer improved space management for variable-length rows because, like segmented table spaces, they use space map pages that carry more detailed free space information. Also like segmented table spaces, universal table spaces deliver improved mass delete performance, and you can immediately reuse the table segments after a mass delete.

There are two types of universal table spaces:

  • Partition-by-growth: this type of universal table space partitions the data as it grows, without the need to specify key ranges. It is beneficial for tables that will grow over time and/or need the additional limits afforded by partitioning, but that can still benefit from the performance characteristics of a segmented table space. You can define more than one table in this type of universal table space if you wish (see the example after this list).
  • Range-partitioned: this type of universal table space requires a key range for partitioning – and it can contain only a single table. This is basically adding segmentation to the existing partitioned table space.
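
For example, a partition-by-growth universal table space might be defined something like this (a minimal sketch with hypothetical names and sizes; specifying MAXPARTITIONS without key ranges is what makes the table space partition-by-growth):

CREATE TABLESPACE TSEXAMPL
IN DBEXAMPL
MAXPARTITIONS 12
SEGSIZE 64
LOCKSIZE ANY;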

Additionally, you can define SMS constructs (MGMTCLAS, DATACLAS, and STORCLAS) on a STOGROUP, and you can ALTER those constructs as well. And table space and index logging parameters can be altered.

DB2 V9 even adds a new capability to change the DB2 early code without requiring an IPL.

So, with DB2 9 for z/OS we get more flexibility in modifying our database schemas. And that is a good thing, right?

Thursday, April 19, 2007

SELECT from DELETE, UPDATE, and MERGE [DB2 9 for z/OS]

Another nice new SQL feature provides the ability to SELECT from DELETE, UPDATE, and MERGE statements.

This capability is similar to the SELECT from INSERT feature that was introduced with DB2 V8. So, before looking at the new V9 feature, let’s review the V8 feature.

The ability to SELECT from an INSERT statement is an intriguing feature. To understand why this is so, we need to review some background details. In some cases, it is possible to perform actions on an inserted row before it gets saved to disk. For example, a BEFORE TRIGGER might change data before it is even recorded to disk. But the application program will not have any knowledge of this change that is made in the trigger. Identity columns and columns with defaults (especially user-defined defaults) have similar effects. What if the program needs to know the final column values? Prior to V8 this was difficult and inefficient to implement.

Well, the SELECT FROM INSERT feature introduced in DB2 V8 solves this problem. It allows you to both insert the row and retrieve the values of the columns with a single SQL statement. It performs very well because it performs both the INSERT and the SELECT as a single operation. Consider the following example:


SELECT COL5 INTO :C5-HV
FROM FINAL TABLE
(INSERT INTO SAMPLE_TABLE (COL1, COL2, COL5, COL7)
VALUES('JONES', 'CHARLES', CURRENT DATE, 'HOURLY')
);

The data is inserted as specified in the VALUES clause, and retrieved as specified in the SELECT. Without the ability to select COL5, the program would have no knowledge of the value supplied to COL5, because it was assigned using CURRENT DATE. With this new syntax the program can retrieve the CURRENT DATE value that was just inserted into COL5 without incurring additional overhead.

OK, on to V9. In this new version you can retrieve columns from rows that are modified via DELETE, UPDATE, and MERGE statements, thereby replacing multiple SQL calls with one. DB2 V9 allows a searched UPDATE or a searched DELETE statement in the FROM clause of a SELECT statement that is a subselect, or in the SELECT INTO statement. This capability allows a user or program to know which values were updated or deleted.
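
For a DELETE, you SELECT from OLD TABLE to see the rows as they existed before they were deleted. A quick sketch (the department value is hypothetical):

SELECT COUNT(*) INTO :DEL-CNT-HV
FROM OLD TABLE
(DELETE FROM EMP
WHERE WORKDEPT = 'E21');

This removes the rows and tells the program how many were deleted, all in a single statement.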

And, if you recall from a few blog entries ago, DB2 9 for z/OS also adds the MERGE statement. Well, it also adds the ability to SELECT from the MERGE. This allows you to return all the rows that were either inserted or updated as a result of the MERGE.

Here is an example of a SELECT from an UPDATE:


SELECT SUM(SALARY) INTO :SAL-HV
FROM FINAL TABLE
(UPDATE EMP
SET SALARY = SALARY * 1.02
WHERE WORKDEPT = 'A01');


Prior to this capability, you would have had to run the UPDATE statement and then, only after it finished, run a SELECT to add up the new salary values. Now, instead of multiple statements requiring multiple passes through the data, you can consolidate everything into one.

Nice, huh?

Tuesday, April 17, 2007

Free DB2 Access Paths Webinar - April 18, 2007

Just a quick note to inform my readers that I will be conducting a webinar tomorrow - Wednesday, April 18, 2007 - on the topic of Change Control for DB2 Access Path Selection. The session will start at 10:30 Central time and will last about an hour. To register, click on the link above.

Here is a short description of what I'll discuss: Most changes are strictly controlled in the mainframe environment. But that is not the case for DB2 access paths. When we BIND or REBIND a program, DB2 formulates access paths “on the fly” and we do not have any control over what DB2 creates for us. This lack of control over changes can cause unpredictable performance.

NOTE
This webinar is in the past and can no longer be viewed.

Tuesday, April 10, 2007

MERGE and TRUNCATE [DB2 9 for z/OS]

Two additional new SQL statements available in DB2 Version 9 are the MERGE and TRUNCATE statements.

MERGE

The MERGE statement basically takes two “tables” and merges the data into one table. The table that will contain the merged results is referred to as the target; the other participating table is called the source. Rows in the target that match the source are updated and rows that do not exist in the target are inserted from the source to the target.

If you use other DBMSs you may be somewhat familiar with MERGE functionality. It is sometimes called an UPSERT (taking the UP from update and the SERT from insert). A simplified version of the MERGE syntax follows:


MERGE INTO table_name
USING table_name
ON (condition)
WHEN MATCHED THEN
UPDATE SET column1 = value1 [, column2 = value2 ...]
WHEN NOT MATCHED THEN
INSERT (column1 [, column2 ...])
VALUES (value1 [, value2 ...]) ;



The DB2 implementation is a tad different, though. Instead of the USING clause specifying an actual table, it instead specifies a VALUES clause of data for a single row or an array of rows. So the source, if it consists of multiple rows, must be populated into a host variable array.

So, say we have a customer table, CUST, and we want to accept several customers from a file. If the customer already exists, we want to populate it with the new, updated information; if the customer does not exist, we want to insert the new customer. To accomplish this in DB2 V9, we can write a MERGE statement such as this:


MERGE INTO CUST C
USING (VALUES (:CUSTNO, :CUSTNAME, :CUSTDESC)
FOR :HV_NROWS ROWS) AS N (CUSTNO, NAME, DESC)
ON (C.CUSTNO = N.CUSTNO)
WHEN MATCHED THEN UPDATE
SET (C.NAME, C.DESC) = (N.NAME, N.DESC)
WHEN NOT MATCHED THEN INSERT (CUSTNO, NAME, DESC)
VALUES (N.CUSTNO, N.NAME, N.DESC)
NOT ATOMIC CONTINUE ON SQLEXCEPTION;



Of course, this is a simple example as there will likely be many other columns in the customer table that would be of interest. But you can easily extrapolate from this example using it as a template of sorts to build a more complex example.

The rows of input data are processed separately. When errors are encountered and NOT ATOMIC CONTINUE ON SQLEXCEPTION is in effect, processing continues, and some of the specified rows will not be processed. Regardless of the failure of any particular source row, the MERGE statement will not undo any changes that it has already made to the database; the merge is simply attempted for the rows that follow the failed row. However, the minimum level of atomicity is a single source row; in other words, it is not possible for a source row to be only partially merged.

If you are using triggers be sure to consult the SQL Reference manual (PDF) to understand how MERGE impacts trigger processing.

At any rate, you need to know that the MERGE statement lets you combine UPDATE and INSERT across many rows into a table based upon a matching key. You can embed the MERGE statement in an application program or issue it interactively. The statement is executable and can be dynamically prepared. In addition, you can use the SELECT FROM MERGE statement to return all the updated rows and inserted rows, including column values that are generated by DB2.
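
For example, building on the earlier CUST example, a SELECT FROM MERGE might look something like this sketch (using the same hypothetical host variable array as before):

SELECT CUSTNO, NAME
FROM FINAL TABLE
(MERGE INTO CUST C
USING (VALUES (:CUSTNO, :CUSTNAME, :CUSTDESC)
FOR :HV_NROWS ROWS) AS N (CUSTNO, NAME, DESC)
ON (C.CUSTNO = N.CUSTNO)
WHEN MATCHED THEN UPDATE
SET (C.NAME, C.DESC) = (N.NAME, N.DESC)
WHEN NOT MATCHED THEN INSERT (CUSTNO, NAME, DESC)
VALUES (N.CUSTNO, N.NAME, N.DESC)
NOT ATOMIC CONTINUE ON SQLEXCEPTION);

This returns every row that was inserted or updated by the MERGE, including any column values generated by DB2.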

TRUNCATE

OK, so that is MERGE, but the title of this blog entry is MERGE and TRUNCATE, so what is TRUNCATE? Well, that is an easier story to tell. The TRUNCATE statement is simply a quick way to DELETE all of the data from a table. The table can be in any type of table space and it can be either a base table or a declared global temporary table. If the table contains LOB or XML columns, the corresponding table spaces and indexes are also truncated.

For clarification, consider the following example:



TRUNCATE TABLE EXAMPLE_TABLE
REUSE STORAGE
IGNORE DELETE TRIGGERS
IMMEDIATE;



Seems easy enough, doesn't it? But what are those additional parameters? Well, REUSE STORAGE tells DB2 to empty the allocated storage but keep it allocated. The alternative, which is the default, is DROP STORAGE. This option tells DB2 to release the storage that is allocated for the table and to make it available for use by the same table or any other table in the table space. REUSE STORAGE is ignored for a table in a simple table space, and the statement is processed as if DROP STORAGE were specified.

The next parameter, which is the default if nothing is specified, is IGNORE DELETE TRIGGERS. This tells DB2 not to fire any DELETE triggers. Alternatively, you could specify RESTRICT WHEN DELETE TRIGGERS, which will return an error if there are any delete triggers defined on the table.

Finally, we have the IMMEDIATE option. This causes the TRUNCATE to be immediately executed and it cannot be undone. If IMMEDIATE is not specified you can issue a ROLLBACK to undo the TRUNCATE.
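
In other words, the following sequence (a sketch illustrating the default behavior) would leave the data intact:

TRUNCATE TABLE EXAMPLE_TABLE;  -- IMMEDIATE not specified

ROLLBACK;  -- the truncation is undone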

Synopsis

So with DB2 9 for z/OS we have two new helpful SQL statements that can simplify our coding efforts. MERGE to combine INSERT and UPDATE processing and TRUNCATE to quickly DELETE all data from a table. Keep them in mind and use them to aid your DB2 application development efforts.

Monday, April 09, 2007

New DB2 9 Security Redbook

Just a quick entry today to alert everyone that there is a new DB2 for z/OS redbook available on the topics of regulatory compliance, security, and audit. The redbook is titled Securing DB2 and Implementing MLS on z/OS and you can download it for free today over the web.

The redbook is 360 pages (including index) and covers the plethora of new security features in DB2 for z/OS. If you haven't looked at DB2's authorization and security functionality in a while, there is much to learn... and this redbook will be very illuminating.

That's all for today.

Thursday, April 05, 2007

Native XML Support in DB2 Databases [DB2 9 for z/OS]

One of the biggest technological advances in DB2 V9 is the ability to combine the management of structured and unstructured data. Basically, V9 will allow you to store data as native XML. This capability has already been introduced into V9 of DB2 for Linux, Unix, and Windows.

Many of you may well ask "Hey, what's the big deal here? Can't we already use the XML Extender and store XML data in DB2 prior to V9?" Yes, but V9 changes the game. You will be able to search and analyze structured data in a relational data repository and unstructured data in an XML repository without the need to reformat it. So your regular "relational" data gets stored as always, and XML data gets stored in its native format without the need to shove it into a CLOB or shred it into "relational" columns. The approach is novel in that DB2 will now support native XML via dual storage engines: the traditional SQL/relational engine and a new XML engine. DB2 9 for z/OS handles XML as a new data type that is stored in a natural hierarchy - different from relational data.

For those of you not familiar with XML, you need to know that there are big differences between XML data and typical DB2 data. Foremost among these differences is that XML data is hierarchical, whereas "relational" DB2 data is basically "flat." Additionally, XML data is self-describing. XML tags identify and name the data elements in the XML document, concentrating both the data and its structure into a single document; so, in essence, the XML document describes itself. This is important to keep in mind because a single XML document can have many different types of data, whereas "relational" DB2 data is defined in the system catalog by its column definition - and all data in the same column must have the same data type (e.g. you cannot store a name in an integer column). Finally, XML data is ordered, whereas "relational" DB2 data is not. The order in which data items are specified in the XML document is relevant, and there is often no other way to specify order within an XML document. For relational data, the order of the rows is not guaranteed unless you specify an ORDER BY clause on one or more columns.

OK, now, just how would you support XML data in DB2 V9 then? Think of XML as just another data type. You would use the XML data type in a CREATE TABLE statement to define a column to be of type XML. Each column of type XML can hold one XML document for every row of the table. Even though the XML documents are logically associated with a row, XML and "relational" columns are stored differently. The "relational" columns are stored in the traditional structures we all know and love; the XML data is stored in hierarchical structures. Don't let that scare you. IBM has seamlessly integrated XML with relational data to simplify application development while optimizing search performance with highly optimized XML indexes.

Here is a quick example walkthrough from the DB2 XML Guide manual that creates a simple table with an XML column. First, as with triggers, when you create tables with XML in SPUFI be sure to set the SQL terminator to a character other than a semicolon, for example, the pound sign (#). This is done so that your SQL can have embedded semicolons. Also, you'll probably want to set CAPS OFF in SPUFI to preserve lower case. Then, create a table like this:

CREATE TABLE MYCUSTOMER (CID BIGINT, INFO XML)
#

This creates a two-column table, with the first column defined as a big integer and the second holding the XML data.
Next, we’ll build an index over XML data. We will assume that the XML documents to be stored in the INFO column will have a root element named customerinfo with an attribute named Cid. So, here is the DDL for the unique index on the Cid attribute:
CREATE UNIQUE INDEX MYCUT_CID_XMLIDX ON MYCUSTOMER(INFO)
GENERATE KEY USING XMLPATTERN
‘declare default element namespace
   "http://posample.org"; /customerinfo/@Cid’
AS SQL DECFLOAT
#
The XML pattern defining the index is case-sensitive. The element and attribute names in the XML pattern must match the element and attribute names in the XML documents exactly. Now we can insert a couple of XML documents into the INFO column, such as:
INSERT INTO MYCUSTOMER (CID, INFO) VALUES (1000,
'<customerinfo xmlns="http://posample.org" cid="1000">
  <name>Kathy Smith</name>
  <addr country="Canada">
    <street>5 Rosewood</street>
    <city>Toronto</city>
    <prov-state>Ontario</prov-state>
    <pcode-zip>M6W 1E6</pcode-zip>
  </addr>
  <phone type="work">416-555-1358</phone>
 </customerinfo>')
#
INSERT INTO MYCUSTOMER (CID, INFO) VALUES (1002,
'<customerinfo xmlns="http://posample.org" cid="1002">
  <name>Jim Noodle</name>
  <addr country="Canada">
    <street>25 EastCreek</street>
    <city>Markham</city>
    <prov-state>Ontario</prov-state>
    <pcode-zip>N9C 3T6</pcode-zip>
  </addr>
  <phone type="work">905-555-7258</phone>
  <phone type="cell">905-555-7254</phone>
 </customerinfo>')
#
Then you can issue a SELECT statement against this table to verify that the XML documents were successfully inserted. For example:

SELECT CID, INFO FROM MYCUSTOMER
#

V9 also supports XPath to query elements within an XML document, as well as catalog extensions to support definitions of XML schemas. Furthermore, the IBM DB2 utilities have been extended so that they can be used to administer XML data, too.

To my mind, though, one of the problems with XML in DB2 9 for z/OS is the lack of support for XQuery. XQuery is an XML query language capable of traversing XML documents. Just as SQL is the query language for native DB2 data, XQuery is the query language for native XML data. DB2 9 for Linux, Unix, and Windows supports XQuery, but DB2 9 for z/OS does not. For an independent tutorial on XQuery, click on this link or for an IBM tutorial on using XQuery in DB2 LUW click on this link instead.

So, how do you retrieve XML data using DB2 9 for z/OS? You can use SQL to retrieve entire XML documents from XML columns just like you would retrieve any other column. But if you need to retrieve portions of an XML document you will need to specify XPath expressions, through SQL with XML extensions. For an independent tutorial on XPath, click on this link.

Here is an example of using XPath to identify data within our XML data:

DELETE FROM MYCUSTOMER
WHERE XMLEXISTS (
  'declare default element namespace "http://posample.org";
   /customerinfo/phone[@type="cell"]'
  PASSING INFO)
#

This should DELETE any XML document that has cell phone information, and for the purposes of this example, that would be CID 1002. I do not wish to go into any detailed description of XPath in this blog, but you can use XML functions with XPath expressions to traverse the XML document for data.

One final note: some of the IBM documentation could be clearer. For example, I take exception with this paragraph lifted directly out of the "What's New" manual (GC18-9856-00):

"Support for XML capabilities and functions span the entire DB2 family. Version 8 of DB2 for z/OS and Version 8 of DB2 for Linux, Unix, and Windows provide basic support for storing, retrieving, and querying XML documents. DB2 9 for Linux, UNIX and Windows continues the work by delivering rich support of XML, including an XQuery interface to the data. Now, DB2 V9.1 for z/OS expands on similar support by delivering seamless integration of XML data and relational data in the DB2 database."

Anyone reading that paragraph would be completely justified in expecting DB2 V9 for z/OS to include XQuery support. It seems to have been written using intentionally misleading wording in order to avoid admitting that XQuery is not supported on z/OS. At least, that is what it seems like to me; I could be wrong.

I'm also interested in how many folks out there in DB2-mainframe-land expect to use the XML capabilities of DB2 for z/OS. Please sign in and leave a comment expressing whether or not you plan to use DB2's XML support.

Thanks, and that is all for today. Keep an eye out for future DB2 9 for z/OS related posts as I plan to continue adding to this series on new V9 features over the course of the next month or so (at least). Cheers!

Wednesday, April 04, 2007

ORDER BY and FETCH FIRST in Subselects [DB2 9 for z/OS]

Here is another quick post in my series on new features in DB2 9 for z/OS.

Today, we will look at the additional flexibility gained in how the ORDER BY and FETCH FIRST n ROWS ONLY clauses can be specified in V9. Prior to V9, the only place you could specify these clauses was at the statement level. Indeed, this has been a source of confusion for many DB2 SQL programmers.

A discussion of DB2 SELECT should be broken down into three topics:
  1. fullselect,
  2. subselect, and
  3. select-statement.
The select-statement is the form of a query that can be directly specified in a DECLARE CURSOR statement, or prepared and then referenced in a DECLARE CURSOR statement. It is the thing most people think of when they think of SELECT in all its glory. If so desired, it can be issued interactively using SPUFI. The select-statement consists of a fullselect and any of the following optional clauses: order-by, fetch-first, update, read-only, optimize-for, isolation, and queryno. Well, that was the case until V9, which still allows the fetch-first and order-by clauses at this level, but now also at the fullselect and subselect levels!

A fullselect can be part of a select-statement, a CREATE VIEW statement, or an INSERT statement. Basically, a fullselect specifies a result table. Prior to V9, this sometimes confused folks as they tried to put a FETCH FIRST n ROWS clause or an ORDER BY in a view or as part of an INSERT. That was not allowed!

Finally, a subselect is a component of the fullselect. A subselect specifies a result table derived from the result of its first FROM clause. The derivation can be described as a sequence of operations in which the result of each operation is input for the next.

This is all a bit confusing. Think of it this way: in a subselect you specify the FROM to get the tables, the WHERE to get the conditions, GROUP BY to get aggregation, HAVING to get the conditions on the aggregated data, and the SELECT clause to get the actual columns. In a fullselect you add in the UNION to combine subselects and other fullselects. Finally, you add on any optional order-by, fetch-first, update, read-only, optimize-for, isolation and queryno clauses to get the select-statement.

But, of course, as of V9 you can use the order-by and/or the fetch-first at the subselect or fullselect level. This can be useful if you want to limit the results within a subquery or part of a UNION statement. Also, if you specify ORDER BY with FETCH FIRST n ROWS ONLY, the result is ordered before the fetch-first is applied. (That's a good thing.)

So, that means all of the following are now legal SQL formulations in V9:

(SELECT COL1 FROM T1
UNION
SELECT COL1 FROM T2
ORDER BY 1)
UNION
SELECT COL1 FROM T3
ORDER BY 1;


This example shows how ORDER BY can be applied in fullselects with UNION.

SELECT EMP_ACT.EMPNO, PROJNO
FROM EMP_ACT
WHERE EMP_ACT.EMPNO IN (SELECT EMP.EMPNO
FROM EMP
ORDER BY SALARY DESC
FETCH FIRST 10 ROWS ONLY);


And this example will return the employee number and project number for any projects assigned to employees with one of the top ten salaries.

So, once you move to V9 you will have much more latitude in how you write your SELECTs. If you are interested in more details, here is a link to the section of the DB2 9 for z/OS SQL Reference manual on building SQL queries.

Tuesday, April 03, 2007

INTERSECT and EXCEPT [DB2 9 for z/OS]

With this blog entry I am introducing a new series in which I will briefly blog about the new features of DB2 9 for z/OS. Today's entry will cover the new INTERSECT and EXCEPT keywords.

DB2 for Linux, Unix, and Windows has supported INTERSECT and EXCEPT in SQL SELECT statements for quite some time now, and with V9 the z/OS platform catches up. These two set operations can be used to simplify some SQL statements. Think of them as being similar to the UNION operation.

INTERSECT is used to match result sets between two tables. If the data is the same in both result sets, it passes through. When INTERSECT ALL is specified, the result consists of all rows that are in both result sets. If INTERSECT is specified without the ALL option, the duplicates will be removed from the results. For example, the following SQL will show all customers in the USA who are also employees (with no duplicates):

SELECT last_name, first_name, cust_num
FROM CUST
WHERE country = 'USA'
INTERSECT
SELECT last_name, first_name, emp_num
FROM EMP
WHERE country = 'USA';


EXCEPT, on the other hand, combines non-matching rows from two result tables. Some other DBMS implementations refer to this as the MINUS operation. When EXCEPT ALL is specified, the result consists of all rows from the first result table that do not have a corresponding row in the second and any duplicate rows are kept. If EXCEPT is specified without the ALL option, duplicates are eliminated. As an example, the following SQL will return only those items from TABLE1 that are not also in TABLE2:

SELECT item FROM TABLE1
EXCEPT
SELECT item FROM TABLE2;


Both INTERSECT and EXCEPT make it easier to formulate SQL requests...

Friday, March 30, 2007

Mimicking ROWNUM in DB2

I was going through the many random files stored on my PC when I came across this useful nugget. A year or so ago a reader sent me this code. He and his supervisor had come up with a way to simulate the Oracle rownum feature in DB2 and wanted to share it with the world. I think, at the time, I posted it up on TechTarget as I was doing Q+A there on the old search390.com site. But that site is now defunct, so I'm reposting the solution here on my DB2portal blog for all to share.



SELECT DEPTNO, ROW#
FROM DEDBADM.DEPT TB1,
TABLE (SELECT COUNT(*) + 1 AS ROW#
FROM DEDBADM.DEPT TB2
WHERE TB2.DEPTNO < TB1.DEPTNO) AS TDEPT_TAB
WHERE ROW# = 2;



If you decide to use this code, be careful because there is no guarantee that the row numbers will be the same from run to run of the SQL. As long as you get the same access path the row numbers should be the same from run to run, but if the access path changes -- for example, from a table space scan to indexed access or vice versa -- then the row numbers for the runs will differ. That is, each row may have a different number than it had for a previous execution.

All in all, though, a nice solution to the problem!


Friday, March 23, 2007

IBM Annual Report: Good News for DB2 DBAs

I sometimes enjoy reading through the annual reports of companies like IBM (that is, companies whose products I use "on the job"), so I thought maybe readers of my blog might enjoy that, too. With that in mind, IBM's annual report for 2006 is now up and available online.

What nuggets of information did I find in it? Well, I have to admit, I have not scoured it from front to back (and I probably won't), but I did try to track down the status of DB2 within the company. DB2 falls within IBM's Information Management brand, in their Middleware software segment. The annual report tells me that the Middleware software segment is doing better than the overall software category within IBM:
  • The Middleware segment grew to $13.891 billion in 2006 from $12.552 billion in 2005. And that represents year over year growth of 10.7%.
  • IBM's entire software portfolio posted revenue of $18.161 billion in 2006 versus $16.83 billion in 2005, representing a year over year growth of 7.9%.
OK, but what about DB2? The best I can do is all of Information Management, which includes DB2, as well as IMS and Informix, and all of the data management tools that IBM sells. These products posted a 14% gain in 2006 over 2005. Nice...

In terms of hardware, though, IBM saw a slide in that market segment. The company posted revenue of $21.97 billion in 2006, down from the $23.857 billion posted in 2005. This is a 7.9% drop in annual revenue for hardware. But, you may well ask, since we care about the mainframe more than those other platforms, how did it do?

It did well, my friends. Of the four major computer brands sold by IBM, the z series boxes (that is, the mainframes) did the best. System z was up 7.8% over 2005. Its closest competitor was System x (the x86-based servers), which was up 3.7%. Bad news for the other two computing platforms: System i (that is, the AS/400s) was down 15% and System p (the UNIX servers) was down 1.1%.

So, if you are my kinda people - that is, DBAs working on DB2 for z/OS - then IBM's annual report contains good news all around. The DBMS business is healthy and so is the mainframe business.

Monday, March 19, 2007

DB2 for z/OS V8 Performance Workshops

Just a quick note this morning to let folks in Zurich, Switzerland and London, UK know about a three-day performance workshop for DB2 V8 coming up in June. The workshop is based on the IBM Redbook DB2 for z/OS Version 8 Performance Topics, SG24-6465. If you are looking for an overview and update of performance-related issues you can tackle with DB2 V8, consider this workshop. According to the redbook web site this workshop helps "you understand the performance implications of migrating from DB2 V7 to DB2 V8, highlights the key performance functions and sets the right expectations. It provides the type of information needed to evaluate the performance impact of DB2 V8 and the capacity planning needs."

Of course, this workshop information is directed mostly at my European readers... unless, of course, you are a USA reader with some extra travel budget and your manager will allow you to travel overseas!?!? (I know, I know, you can't even get budget to travel to Scranton, let alone Zurich...)

That's all for today!

Wednesday, March 14, 2007

IDUG News

A lot of new stuff has been going on at the International DB2 Users Group (IDUG) the past few months, so I thought I'd write a quick blog entry to update folks about what's new.

First of all, this year's North American IDUG conference will be held the week of May 6 thru 10, 2007, in San Jose, CA. As always, the conference offers a fantastic learning and networking opportunity for DB2 and Informix users, with hundreds of technical sessions, 16 full-day pre-conference seminars, the opportunity to take certification exams for free, numerous special interest groups, and a great vendor exhibition. Also, this year's event is the first to offer IMS content, too - so if you are an IMS DBA or technician now is the time to add IDUG to your educational opportunities!

In other IDUG news, the group is expanding its conference coverage to India. The 2007 IDUG India Forum, taking place May 31 – June 2, 2007 in Bangalore, India, is geared towards professional application developers and DBAs. This three-day event will present content that will appeal to new users as well as experienced professionals. So if you are in India, IDUG is coming your way!

Finally, if you haven't visited the IDUG web site in awhile, now is the time to check it out again. IDUG volunteers have put a lot of effort into revamping the site and it looks great.

That's it for now... check in again soon!

Tuesday, March 06, 2007

DB2 9 for z/OS General Availability Announced

Today, March 6, 2007, IBM announced the general availability of the next version of DB2 for z/OS, DB2 9, for March 16, 2007. The full announcement can be read here.

You can read a high-level overview of DB2 9 for z/OS in my DB2 Magazine article titled DB2 9 for z/OS Roars to Life.

Wednesday, February 28, 2007

QMF vs. SPUFI

As regular readers of my blog know, I sometimes use the blog to answer questions I get via e-mail. This is one of those times...

The question I received is this:
Can you bring out the major differences between QMF and SPUFI?

Here is my response:
The biggest difference between QMF and SPUFI is that QMF is a query/reporting environment with the ability to format reports. SPUFI is just a quick and dirty SQL execution engine.

(Here is a trivia question: what does the acronym SPUFI stand for? The answer is provided at the bottom, so page down if you want to know...)

If you need to produce nice-looking reports, enable user input to a query, or store your queries and reports for future usage, QMF is a much better technology for doing so. QMF also offers data formatting and translation capabilities that are difficult (sometimes impossible) to accomplish using SPUFI and SQL alone.

A typical end-user might have difficulty using SPUFI because it requires using data sets to store your SQL statements and pass them to DB2. The results are also delivered to another data set. Most end users are not comfortable managing and manipulating mainframe data sets. QMF, on the other hand, stores its queries in tables and hides this fact from the user with a nice interface for saving and recalling SQL queries (and results).

Keep in mind, though, that SPUFI comes for free with DB2 whereas QMF is an add-on product and costs money. Not every DB2 customer will have QMF, whereas every DB2 (mainframe) customer will have SPUFI.



OK, now, what does SPUFI stand for?

The answer: SQL Processor Using File Input.

Tuesday, February 27, 2007

Dynamic SQL and Performance

The performance of dynamic SQL is one of the most widely debated DB2 issues. Some shops try to avoid it, but in this day and age of ERP/CRM systems, Java/JDBC programming, and web applications, all of which rely heavily on dynamic SQL, its use is becoming ever more common.

Still, many shops that allow dynamic SQL try to place strict controls on its use. But as new and faster versions of DB2 are released, many of the restrictions on dynamic SQL use can be eliminated.

I suppose that you can still find some valid reasons for prohibiting dynamic SQL. For example, you should avoid dynamic SQL when the dynamic SQL statements are just a series of static SQL statements in disguise. Consider an application that needs two or three predicates for one SELECT statement that is otherwise unchanged. Coding three static SELECT statements can be more efficient than coding one dynamic SELECT with a changeable predicate. The static SQL takes more time to code but probably less time to execute.

Another reason for avoiding dynamic SQL is that it can require more overhead to process than equivalent static SQL. Dynamic SQL incurs overhead because the cost of the dynamic bind, or PREPARE, must be added to the processing time of all dynamic SQL programs. But this overhead is not quite as costly as many people seem to think it is.

To determine the cost of a dynamic bind, consider running some queries using SPUFI with the DB2 Performance trace turned on. Then examine the performance reports or performance monitor output to determine the elapsed and TCB time required to perform the PREPARE. The results should show elapsed times less than 1 second and subsecond TCB times. The actual time required to perform the dynamic prepare will vary with the complexity of the SQL statement. In general, the more complex the statement, the longer DB2 will take to optimize it. So be sure to test SQL statements of varying complexity.

Of course, the times you get will vary based on your environment, the type of dynamic SQL you use, and the complexity of the statement being prepared. Complex SQL statements with many joins, table expressions, unions, and subqueries take longer to PREPARE than simple queries. However, factors such as the number of columns returned or the size of the table being accessed have little or no effect on the performance of the dynamic bind.

And prepared dynamic SQL can be cached in the EDM pool so that the same SQL statement can reuse the mini plan for the statement the next time it runs. Of course, the statement has to be exactly the same in order to benefit from the dynamic SQL cache.
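
For example (a hypothetical illustration; the table and column names are made up), each of these statements requires its own PREPARE because the literals differ:

SELECT NAME FROM CUST WHERE CUSTNO = 100;
SELECT NAME FROM CUST WHERE CUSTNO = 200;

Written with a parameter marker, the statement text stays identical from execution to execution, so the cached prepared statement can be reused:

SELECT NAME FROM CUST WHERE CUSTNO = ?;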

Keep in mind, too, that performance is not the only factor when deciding whether or not to use dynamic SQL. For example, if a dynamic SQL statement runs a little longer than a static SQL statement but saves days of programming cost then perhaps dynamic SQL is the better choice. It all depends on what is more important -- the cost of development and maintenance or squeezing out every last bit of performance at any cost.

Overhead issues notwithstanding, there are valid performance reasons for favoring dynamic SQL, too. For example, dynamic SQL can enable better use of indexes, choosing different indexes for different SQL formulations. Properly coded, dynamic SQL can use the column distribution statistics stored in the DB2 catalog, whereas static SQL is limited in how it can use these statistics. Use of the distribution statistics can cause DB2 to choose different access paths for the same query when different values are supplied to its predicates.

The REOPT bind parameter can be used to allow static SQL containing host variables or special registers to behave like incremental-bind statements. When dynamic reoptimization is activated, a dynamic bind similar to what is performed for dynamic SQL takes place. This means that these statements get compiled at the time of EXECUTE or OPEN instead of at BIND time. During this compilation, the access plan is chosen based on the real values of these variables.

Specifying REOPT ONCE causes the access plan to be cached after the first OPEN or EXECUTE request and it is used for subsequent execution of this statement. With REOPT ALWAYS, the access plan is regenerated for every OPEN and EXECUTE request, and the current set of host variable, parameter marker, and special register values is used to create this plan.

Additionally, consider that the KEEPDYNAMIC bind option can enhance the performance of dynamic SQL. When a plan or package is bound specifying KEEPDYNAMIC(YES), the prepared statement is maintained across COMMIT points. Contrast that with KEEPDYNAMIC(NO), where only cursors using the WITH HOLD option keep the prepared statement after a COMMIT.
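
As a rough sketch (the collection and program names are hypothetical, and option compatibility should be verified for your DB2 version), these options are specified at bind time:

BIND PACKAGE(MYCOLL) -
     MEMBER(MYPROG) -
     REOPT(ALWAYS) -
     VALIDATE(BIND)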

Dynamic SQL usually provides the most efficient development techniques for applications with changeable requirements (for example, numerous screen-driven queries). In addition, dynamic SQL generally reduces the number of SQL statements coded in your application program, thereby reducing the size of the plan and increasing the efficient use of system memory.

So, if you have a compelling reason to use dynamic SQL, then by all means, go ahead and code up your program to use dynamic SQL. I mean, after all, it is no longer the early, dark days of DB2 when dynamic SQL almost certainly meant performance problems. And, as I mentioned in the beginning, dynamic SQL is likely to be foisted on you in your new, more modern applications even if you continue to desperately keep it out of your COBOL programs.

For more in-depth details on dynamic SQL, consider downloading the following IBM redbook: DB2 for z/OS and OS/390: Squeezing the Most Out of Dynamic SQL.

Wednesday, February 14, 2007

A Report From SHARE

As regular readers of this blog know, I regularly attend and speak at the SHARE user conferences held twice annually. The Spring 2007 SHARE conference is being held this week, February 12 thru 16, 2007, in Tampa, Florida. It is now Wednesday, mid-way through the week, and time for me to post a quick update of what is going on at SHARE, from a DB2 point of view.

First of all, I’ve been privileged to deliver two presentations this week. The first is one of the perennial favorites called “A DB2 Performance Tuning Roadmap.” I’ve been delivering this presentation now, in various forms and with various changes, for about 8 years. And every time I think it is time to retire it, I hear from a user group or a conference that is interested in having me present it. And every time I get good feedback. I guess that is not so surprising, I mean, after all, performance is always one of those topics to raise interest among all DBAs and many developers. This particular presentation works as an overview of the three basic components of database performance management: that is, you need to manage and tune the application/SQL, the database objects, and the DBMS/subsystem. All three are integral to optimizing DB2 performance and the presentation walks through the basics of each of the three areas and offers guidance and suggestions.

The second presentation I delivered is called “Change Control for DB2 Access Paths” and it is a bit newer. The premise of this presentation is that organizations need to REBIND for performance, but they also need to control change in their production environment. In the presentation I discuss the issues involved in BIND/REBIND management and then introduce how NEON Enterprise Software’s Bind ImpactExpert can provide a solution.

OK, so much for my presentations, what else is happening DB2-wise at SHARE this week? Well, a lot of the presentations are by IBMers focusing on DB2 9 for z/OS. Even though this new version has not been officially released yet, IBM is getting the word out early so we all can understand the benefits of this new release – and also, so we can understand that it is not the goliath that V8 was (nay, let’s say is).

Roger Miller has offered a couple of great presentations on migration and performance issues, and Willie Favero spoke about all of the great new SQL features and functionality that will be available in DB2 9. Then there was the update on the DB2 utilities and what is happening to them in DB2 9… And there is a lot of good news – to net it out, more online utilities, more availability, and less CPU.

If you’re an IMS person, then there has been a lot of coverage of IMS Version 10 here, too. I haven’t personally attended many of the IMS sessions, but I’ve gathered handouts that cover IMS V10 dynamic resource definition, transaction manager enhancements, and database and DBRC enhancements.

And the zNextGen program, which is spearheaded by NEON Enterprise Software's own Kristine Harper, offered an interesting program this week as well. One of the highlights was a fun presentation by Cheryl Watson, a well-renowned expert in the mainframe world.

Basically, there is a lot of education going on in Tampa this week for mainframe database zealots… and I'm glad I'm here to soak it up. If you weren't able to make it to Tampa, consider trying to get to the Fall 2007 SHARE conference in San Diego, CA this August.

Monday, February 05, 2007

See You at SHARE?

Well, it is that time of the year again... that is, time for the first of two annual SHARE conferences. The "Spring 2007" SHARE will be held next week, in Tampa, FL.

For those of you who don't know about SHARE, it is an independent, volunteer-run association for IBM customers. It is not only mainframe-focused, but it is one of the last events mainframe folks can attend to hear about mainframe topics from z/OS to JES and from CICS to IMS to DB2.

I will be presenting on Monday (2/12) and Tuesday (2/13). On Monday I will cover a DB2 Performance Tuning Roadmap - basically presenting a high-level overview of the "things" you need to be aware of, and track, in order to assure an optimally performing DB2 environment. And then on Tuesday I'll be addressing change control for DB2 access paths. As you know, change is strictly controlled on the mainframe, but one exception has been DB2 access paths. Binds and Rebinds are typically done in the production environments without the benefit of oversight or prior testing. This lack of change control results in unpredictable performance impacts. This presentation will discuss the issues and problems this causes, as well as how you can use Bind ImpactExpert to analyze and manage access path changes in your DB2 environment.

And I'll be attending the event all week long, so I'll be sure to post a blog or two about interesting things I learn at the show... so get yourself to Tampa if you can, but if you can't, be sure to tune in during the week next week to hear about some of the highlights (at least my take on the highlights) of SHARE.

Wednesday, January 31, 2007

A Quick Intro to DB2 SQL Functions

The last post I made to this blog used functions to translate data from one form to another. Functions frequently can be used to overcome issues when you are reading data from your DB2 tables. With that in mind, I thought it might make sense to go over DB2 SQL functions at a basic level - as a review for those who might not have used them before and to refresh the memory of those who have.

Functions can be specified in SQL statements to transform data from one state to another. Two types of functions can be applied to data in a DB2 table using SQL: column functions and scalar functions. Column functions analyze a group of rows and compute a single value for a designated column or expression. For example, the SUM function can be used to add, returning the sum of the values instead of each individual value:

SELECT SUM(SALARY)
FROM EMP
WHERE WORKDEPT = 'A01';

This query will return the total salary for all employees in the A01 department. This is a better solution than writing a program to read all employee rows in department A01 and then adding the salaries. With the SUM function less data is returned to the program, so it will operate much more efficiently.

The second type of function is a scalar function. Scalar functions are applied to a column or expression and operate on a single value. For example, the DAYOFWEEK function reads a single date value and returns an integer in the range of 1 to 7, where 1 is Sunday and 7 is Saturday:

SELECT DAYOFWEEK(HIREDATE)
FROM EMP
WHERE EMPNO = '000100';

This query will return an integer between 1 and 7 that indicates the day of the week when employee number 000100 was hired. Of course, this is just one example of many, many scalar functions supported by DB2. For a complete list of the available functions, refer to Chapter 3 of the IBM DB2 SQL Reference manual. Using scalar functions is generally more efficient than writing your own code to do the same thing.

Speaking of writing your own code, though, you can write your own user-defined functions (UDFs) in addition to the base, system-defined functions. UDFs give you the ability to write your own customized DB2 functions and standardize on that code. Consider creating UDFs for specific algorithms and processing that need to be executed in multiple applications in your organization. By writing the code once, putting it into a DB2 UDF, and then calling it whenever you need it, you will minimize errors and improve reusability.
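
For example, a simple SQL scalar UDF might look something like this (a hypothetical sketch; the name and conversion logic are made up for illustration):

CREATE FUNCTION KM_TO_MILES(KM DECIMAL(9,2))
RETURNS DECIMAL(9,2)
LANGUAGE SQL
RETURN KM * 0.62137;

Once created, it can be invoked in your SQL just like a built-in scalar function, for example SELECT KM_TO_MILES(DISTANCE) FROM SHIPMENT.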

A user-defined function can be specified anywhere a system-defined function can be specified.

Sunday, January 28, 2007

Sorting Out a Solution in DB2

Sometimes an application can require different, perhaps even somewhat "odd" data sorting. These needs may cause developers to sit and scratch their head for hours trying to make DB2 do something that seems "unnatural." But often you can conjure up an answer by properly understanding the problem and applying some creative SQL.

Some of you might be asking "What the heck is he talking about?" That's a fair question, so let’s look at an example to clarify.

Assume that you have a table containing transactions, or some other type of interesting facts. The table has a CHAR(3) column containing the name of the day on which the transaction happened; let’s call this column DAY_NAME.

Now, let’s further assume that we want to write queries against this table that orders the results by DAY_NAME. We’d want Sunday first, followed by Monday, Tuesday, Wednesday, and so on. How can this be done?

Well, if we write the first query that comes to mind the results will obviously be sorted improperly:

SELECT DAY_NAME, COL1, COL2 . . .
FROM TXN_TABLE
ORDER BY DAY_NAME;

The results from this query would be ordered alphabetically; in other words:

FRI
MON
SAT
SUN
THU
TUE
WED

This is what I mean by an irregular sorting requirement. The example may not be an everyday need, but it is not unrealistic for a business to have this, or a similar requirement, that needs a different sort order than strictly alphabetical or numeric. So what is the solution here?

Of course, one solution would be to design the table with an additional numeric or alphabetic column that would sort properly. By this I mean that we could add a DAY_NUM column that would be 1 for Sunday, 2 for Monday, and so on. But this requires a database design change, and it becomes possible for the DAY_NUM and DAY_NAME to get out of sync.

There is another solution that is both elegant and does not require any change to the database. To implement this solution all you need is an understanding of SQL and SQL functions - in this case, the LOCATE function. Here is the SQL:

SELECT DAY_NAME, COL1, COL2 . . .
FROM TXN_TABLE
ORDER BY LOCATE(DAY_NAME, 'SUNMONTUEWEDTHUFRISAT');


The trick here is to understand how the LOCATE function works: it returns the starting position of the first occurrence of one string within another string.

So, in our example SQL, LOCATE finds the position of the DAY_NAME value within the string 'SUNMONTUEWEDTHUFRISAT', and returns the integer value of that position. So, if DAY_NAME is WED, the LOCATE function in the above SQL statement returns 10. Sunday would return 1, Monday 4, Tuesday 7, Wednesday 10, Thursday 13, Friday 16, and Saturday 19. This means that our results would be in the order we require.

(Note: Other database systems have a similar function called INSTR.)

Of course, you can go one step further if you’d like. Some queries may need to actually return the day of week. You can use the same technique with a twist to return the day of week value given only the day’s name. To turn this into the appropriate day of the week number (that is, a value of 1 through 7), we divide by three, use the INT function on the result to return only the integer portion of the result, and then add one:

INT(LOCATE(DAY_NAME, 'SUNMONTUEWEDTHUFRISAT')/3) + 1;

Let’s use our previous example of Wednesday again. The LOCATE function returns the value 10. So, INT(10/3) = 3 and add 1 to get 4. And sure enough, Wednesday is the fourth day of the week.

With a sound understanding of the features of DB2 SQL, many requirements that seem "odd" at first glance are achievable using nothing more than SQL and your imagination.