Monday, June 18, 2007

Skipping Locked Rows [DB2 9 for z/OS]

In DB2 9 it is possible for a transaction to skip over rows that are locked. This can be accomplished by means of the SKIP LOCKED DATA option within your SQL statement(s). SKIP LOCKED DATA can be specified in SELECT, SELECT INTO, and PREPARE, as well as searched UPDATE and DELETE statements. You can also use the SKIP LOCKED DATA option with the UNLOAD utility.

Of course, if a program skips over locked data then that data is not accessed and the program will not have it available. When this option is used DB2 will just skip over any locked data instead of waiting for it to be unlocked. The benefit, of course, is improved performance because you will not incur any lock wait time. But it comes at the cost of not accessing the locked data at all. This means that you should only utilize this clause when your program can tolerate skipping over some data.

The SKIP LOCKED DATA option is compatible with cursor stability (CS) isolation and read stability (RS) isolation. But it cannot be used with uncommitted read (UR) or repeatable read (RR) isolation levels. DB2 will simply ignore the SKIP LOCKED DATA clause under UR and RR isolation levels.

Additionally, SKIP LOCKED DATA works only with row locks and page locks. That means that SKIP LOCKED DATA does not apply to table, partition, LOB, XML, or table space locks.

Let's look at an example. Suppose we have a table (let's call it TAB) with 5 rows in it that looks like this:

KEY  FNAME  LNAME
---  -----  -------
1    JOE    MAMA
2    DON    KNOTTS
3    KIM    PORTANT
4    BOB    NOBBLE
5    KIM    BIMBO


Assume row-level locking. Next, assume that an UPDATE statement is run against the table, changing FNAME to JIM WHERE FNAME = 'KIM', and that it is left hanging out there without a COMMIT.
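A minimal sketch of that uncommitted update, using the TAB table from above, might look like this:

UPDATE TAB
   SET FNAME = 'JIM'
 WHERE FNAME = 'KIM';
-- no COMMIT is issued, so the locks on rows 3 and 5 are still held

With rows 3 and 5 locked by that transaction, we next run: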


SELECT COUNT(*)
FROM TAB
WHERE FNAME >= 'AAA'
SKIP LOCKED DATA;

The count returned would be 3 because DB2 skips the two locked rows (rows 3 and 5). And, of course, if the locks are released the count would be 5 again.

Wednesday, June 13, 2007

Do You Want to Ignore Clustering? [DB2 9 for z/OS]

DB2 9 for z/OS offers a new DDL parameter for your tables: APPEND. If you specify APPEND NO, which is the default, DB2 behaves as it always has: when rows are inserted or loaded, DB2 attempts to sequence them based on the clustering index key.

If you specify APPEND YES though, DB2 will ignore clustering during inserts and online LOAD processing. Instead of attempting to maintain clustering, DB2 will just append the rows at the end of the table or partition. If the table space is partition-by-growth (new DB2 9 feature) then DB2 can use any partition with space available at the end; for range-partitioned table spaces, obviously DB2 will append the data to the end of the partition corresponding to the range for the value(s) being inserted.

You might want to choose this option to speed up the addition of new data. Appending data is faster because DB2 does not have to search for the proper place to maintain clustering. And you can always re-cluster the table by running a REORG.
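For example, a log-style table that is almost always inserted into (and rarely read in clustering sequence) might be a good candidate for APPEND YES. Here is a minimal sketch; the table and column names are made up for illustration:

CREATE TABLE TRAN_LOG
  (TRAN_TS   TIMESTAMP    NOT NULL,   -- when the event occurred
   TRAN_USER CHAR(8)      NOT NULL,   -- who caused it
   TRAN_TEXT VARCHAR(200))            -- what happened
  APPEND YES;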

The APPEND option cannot be specified on LOB tables, XML tables, or tables in work files.

To track the state of the APPEND option there is a new column, APPEND, in SYSIBM.SYSTABLES in the DB2 Catalog. Its value will be either 'Y' or 'N'.
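That makes it easy to find the tables for which append processing is in effect with a simple catalog query:

SELECT CREATOR, NAME, APPEND
FROM SYSIBM.SYSTABLES
WHERE APPEND = 'Y';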

Monday, June 04, 2007

Reordered Row Format [DB2 9 for z/OS]

If you’ve worked with DB2 for a while, especially as a DBA, you’ve probably heard the advice to re-arrange the columns of your tables to optimize logging efficiency. Basically, the more data DB2 has to log, the more overhead your programs will incur, and performance will degrade. For fixed-length rows, DB2 logs from the first byte changed to the last byte changed. For variable-length rows, DB2 logs from the first byte changed to the end of the row, unless the change does not alter the length of the row, in which case DB2 again logs only from the first byte changed to the last byte changed.

So, the advice goes something like this: put your static columns (those that do not change frequently) at the beginning of the row and your dynamic columns (those that change more frequently) at the end of the row. And within each group, put the fixed-length columns before the variable-length ones. This would make your row look something like this:

[Static fixed-length cols]
[Static variable cols]
[Dynamic fixed-length cols]
[Dynamic variable cols]

Make sense?
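For example, a table designed along these lines might be laid out as follows (the table and column names are hypothetical; the comments note the category each column falls into):

CREATE TABLE CUSTOMER
  (CUST_ID     INTEGER       NOT NULL,   -- static, fixed-length
   BIRTH_DATE  DATE,                     -- static, fixed-length
   CUST_NAME   VARCHAR(40),              -- static, variable-length
   LAST_ORDER  TIMESTAMP,                -- dynamic, fixed-length
   BALANCE     DECIMAL(11,2),            -- dynamic, fixed-length
   COMMENTS    VARCHAR(254));            -- dynamic, variable-length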

Well, DB2 9 for z/OS takes this advice to heart (sort of). In New Function Mode (NFM), for new table spaces, DB2 will automatically put the variable columns at the end of the row. This is called reordered row format (RRF); the row format we are all familiar with today is now referred to as basic row format (BRF). This affects only how the row is physically stored; it does not change your DDL, nor does it require changes to anything external or to how you access the rows.

To summarize, this means that a row in RRF will store the fixed-length columns first and the variable columns at the end. Pointers within the row will point to the beginning of the variable columns.

So far so good, right? Well, DB2 will also convert our old table spaces to RRF over time. Once we are in DB2 9 NFM, a REORG or a LOAD REPLACE will cause a change from BRF to RRF. So run a LOAD REPLACE against a table space in NFM and the row format changes. REORG a partition and the row format for that partition changes. And yes, you can have a partitioned table space with some partitions in BRF and some in RRF.
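For example, assuming a table space named MYDB.MYTS that was created before V9 (and is therefore in BRF), either of the following utility statements run in NFM would leave the reorganized data in RRF; the first converts the entire table space, the second only partition 3:

REORG TABLESPACE MYDB.MYTS
REORG TABLESPACE MYDB.MYTS PART 3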

With RRF we can be sure that DB2 is putting our variable columns at the end of the row, where they belong. But it still does not help us with placing static columns before the dynamic ones. You’ll still have to guide DB2 to do that.

Tuesday, May 29, 2007

INSTEAD OF Triggers [DB2 9 for z/OS]

DB2 9 for z/OS introduces a new type of trigger: the INSTEAD OF trigger. The primary usage of INSTEAD OF triggers is to enable views that would not otherwise be updatable to support updates. Typically, a view that consists of multiple base tables cannot be updated. But with an INSTEAD OF trigger this problem can be surmounted. You can code an INSTEAD OF trigger to direct inserts, updates and deletes to the appropriate underlying tables of the view.


With the INSTEAD OF trigger, your application code does not have to include complex algorithms to specify which operations should be performed against views and which should be performed against base tables. Instead, all actions are performed against the view and the activated trigger determines which underlying base tables are to be impacted.


So, you might choose to code an INSTEAD OF trigger on a view over a join to allow modifications on the view to go through to the underlying base tables joined in that view. Or you can encode and decode data within a view: for example, the view could contain the decryption functions while the INSTEAD OF triggers use the encryption functions to ensure security.


Only one INSTEAD OF trigger is allowed for each type of operation on a given subject view. That is, one for inserts, one for updates, and one for deletes. Therefore, you can have a grand total of three INSTEAD OF triggers per view.


DB2 executes the triggered-action instead of the insert, update, or delete operation on the subject view. Neither the WHEN clause nor the FOR EACH STATEMENT clause is allowed in INSTEAD OF triggers.


Furthermore, there are some restrictions on the view in order for an INSTEAD OF trigger to be allowed. First of all, the view must exist at the current server when the INSTEAD OF trigger is created. Additionally, none of the following are permitted for a view to have an INSTEAD OF trigger:

  • a view defined with the WITH CASCADED CHECK option
  • a view on which a symmetric view has been defined
  • a view that references data encoded with different encoding schemes or CCSID values
  • a view with a ROWID, LOB, or XML column (or a distinct type that is defined as one of these types)
  • a view with a column based on an underlying column defined as an identity column, security label column, or a row change timestamp column
  • a view with a column that is defined (directly or indirectly) as an expression
  • a view with a column that is based on a column of a result table that involves a set operator
  • a view with any columns that have field procedures
  • a view where all of the underlying base tables are DB2 Catalog tables or created global temporary tables
  • a view that has other views dependent on it

One way to think of INSTEAD OF triggers is that they contain the inverse of the logic in the body of the view. If the view joins tables, the trigger should break the join apart to modify the correct data. If the view decrypts columns, the INSTEAD OF trigger should encrypt the columns. Etc.


Let’s take a look at an example to better understand the INSTEAD OF trigger. First, we create a view that joins the EMP and DEPT tables:



CREATE VIEW EMP_DEPT (EMPNO, FIRSTNME, MIDINIT, LASTNAME,
PHONENO, HIREDATE, DEPTNAME)
AS SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, PHONENO,
HIREDATE, DEPTNAME
FROM EMP, DEPT
WHERE EMP.WORKDEPT = DEPT.DEPTNO;



OK, so far, so good. But since this view is a join it is not updatable. Let's fix that by coding up some INSTEAD OF triggers. First, we'll take care of inserts:

CREATE TRIGGER E_D_ISRT
  INSTEAD OF INSERT ON EMP_DEPT
  REFERENCING NEW AS NEWEMP
  FOR EACH ROW MODE DB2SQL
  INSERT INTO EMP
    (EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, HIREDATE)
  VALUES
    (NEWEMP.EMPNO, NEWEMP.FIRSTNME, NEWEMP.MIDINIT, NEWEMP.LASTNAME,
     COALESCE
      ((SELECT DEPTNO FROM DEPT AS D WHERE D.DEPTNAME = NEWEMP.DEPTNAME),
       RAISE_ERROR('70001', 'Unknown dept name')
      ),
     NEWEMP.PHONENO, NEWEMP.HIREDATE);

An insert against the view would not be inserting a new department, so we will be inserting data into the EMP table. If the department does not exist, we'll raise an error. Next, we'll consider updates:

CREATE TRIGGER E_D_UPD
  INSTEAD OF UPDATE ON EMP_DEPT
  REFERENCING NEW AS NEWEMP OLD AS OLDEMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    VALUES(CASE WHEN NEWEMP.EMPNO = OLDEMP.EMPNO THEN 0
                ELSE RAISE_ERROR('70002', 'Must not change EMPNO') END);
    UPDATE EMP AS E SET
      (FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, HIREDATE)
      = (NEWEMP.FIRSTNME, NEWEMP.MIDINIT, NEWEMP.LASTNAME,
         COALESCE((SELECT DEPTNO FROM DEPT AS D
                   WHERE D.DEPTNAME = NEWEMP.DEPTNAME),
                  RAISE_ERROR('70001', 'Unknown dept name')),
         NEWEMP.PHONENO, NEWEMP.HIREDATE)
    WHERE NEWEMP.EMPNO = E.EMPNO;
  END

Finally, we take care of deletions:

CREATE TRIGGER E_D_DEL
  INSTEAD OF DELETE ON EMP_DEPT
  REFERENCING OLD AS OLDEMP
  FOR EACH ROW MODE DB2SQL
  DELETE FROM EMP AS E WHERE E.EMPNO = OLDEMP.EMPNO;


Using an INSTEAD OF trigger, each requested modification operation made against the view is replaced by the trigger logic. The trigger performs the insert, update, or delete on behalf of the view. No application changes are required because the code is in the trigger which resides in the database.


If you want to read more about INSTEAD OF triggers, I recommend this quite extensive article (albeit for DB2 LUW) out on the IBM Developer Works web site.


Thursday, May 24, 2007

LOB Enhancements [DB2 9 for z/OS]

IBM focused their attention on improving DB2’s ability to store and manage LOB data in Version 9. As anyone who has tried to use LOBs in a previous version of DB2 knows, the usability limitations were troublesome. But with Version 9, IBM chips away at some of the more annoying LOB limitations.

FETCHing LOBs

Prior to Version 9, there were two methods you could deploy in your programs to fetch LOB data:

  • Fetching data into a pre-allocated buffer
  • Using a LOB locator to retrieve a handle on the data.

Both methods have their issues. Fetching data into a preallocated buffer can cause virtual storage constraint problems, especially for larger LOBs. On the other hand, programs that use LOB locators but commit infrequently, or that do not explicitly free the locators, can consume considerable amounts of DB2 resources.

V9 introduces a new clause, WITH CONTINUE, for use on your FETCH statements. By coding your program to use WITH CONTINUE you can retrieve LOB columns in multiple pieces without using a LOB locator, and continue a FETCH operation to retrieve the remaining LOB data when truncation occurs. (Note: this method can be used with XML data, too.) You will have to manage the buffers and reassemble the pieces of data in your application program.

So, by specifying WITH CONTINUE on your FETCH statement you tell DB2 to allow subsequent FETCH CURRENT CONTINUE operations. These will allow you to access the remaining truncated LOB (or XML) column after the initial FETCH. If truncation occurs, DB2 will remember the truncation position and will not discard the remaining data. DB2 will return the total length that would have been required to hold all of the data of the LOB or XML column. This will either be in the first four bytes of the LOB host variable structure or in the 4 byte area that is pointed to by the SQLDATALEN pointer in the SQLVAR entry of the SQLDA for that host variable.
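In practice the flow looks something like the following sketch; the cursor name (C1) and the CLOB host variable (:CLOB-PIECE) are just assumptions for illustration, and your program would loop on the continuation FETCH, saving off each piece, until the entire value has been retrieved:

FETCH WITH CONTINUE C1 INTO :CLOB-PIECE;
-- the value was truncated, so fetch the remaining pieces
FETCH CURRENT CONTINUE C1 INTO :CLOB-PIECE;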

File Reference Variables

DB2 V9 adds support for a LOB file reference variable. A file reference variable is a host variable that is defined in a host language to contain the file name that directs file input and output for a large object (LOB).

Using file reference variables, large LOB values can be inserted from a file or selected into a file rather than a host variable. This means that your application program does not need to acquire storage to contain the LOB value. File reference variables also enable you to move LOB values from the DBMS to a client application or from a client application to a database server without going through the working storage of the client application.
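For example, assuming a host variable :RESUME-FILE that the program has declared as a CLOB file reference variable (e.g., SQL TYPE IS CLOB-FILE in COBOL), and a hypothetical EMP_RESUME table with a CLOB column named RESUME, the LOB value can be written directly to the file named in the variable:

SELECT RESUME
INTO :RESUME-FILE
FROM EMP_RESUME
WHERE EMPNO = '000130';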

LOBs and Utilities

The manner in which DB2 handles LOBs in utility processing has also been improved in DB2 V9.

Loading and unloading LOBs has been improved. For LOAD, an input field value can contain the name of a file that contains a LOB column value; the LOB column value will then be loaded from that file. For UNLOAD, you can store the value of a LOB column in a file and record the name of that file in the unloaded record for the base table row.

What about REORG? Well, prior to V9 you could not access LOB data during a REORG. And a REORG did not reclaim physical space from the LOB data set because LOBs were moved within the existing LOB table space. V9 fixes these problems. During a REORG (in V9), the original LOB table space is drained of writers. All LOBs are then extracted from the original data set and inserted into a shadow data set. When this operation is complete, all access to the LOB table space is stopped (the readers are drained) while the original data set is switched with the shadow data set. At this point, full access to the new data set is enabled, and an inline copy is taken to ensure recoverability of data.

Additionally, both CHECK LOB and CHECK DATA have SHRLEVEL REFERENCE and SHRLEVEL CHANGE options. Using these you can minimize downtime. For LOBs, CHECK DATA checks for consistency between a base table space and the corresponding LOB or XML table spaces. The CHECK LOB utility identifies structural defects in the LOB table space and any invalid LOB values. Running CHECK (DATA or LOB) with SHRLEVEL REFERENCE indicates that applications can read from but cannot write to the index, table space, or partition that is to be checked. SHRLEVEL CHANGE means that applications can read from and write to the index, table space, or partition that is to be checked.
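For example, the following control statements (the database and table space names are just placeholders) would check a LOB table space while allowing concurrent readers and writers, and then check the base table space against its auxiliary table spaces while allowing readers only:

CHECK LOB TABLESPACE MYDB.LOBTS SHRLEVEL CHANGE
CHECK DATA TABLESPACE MYDB.BASETS SHRLEVEL REFERENCE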

Performance

Finally, DB2 V9 provides several performance improvements related to LOBs. Using file reference variables, or using WITH CONTINUE to read LOBs in chunks, can improve performance over using locators.

And as we all know, removing locks can improve performance. DB2 V9 eliminates LOB locking for space allocation, as well as for insert, delete, update, and select operations. Additionally, a LOB lock is no longer required to serialize the consistency between the value of the LOB and the column of the base row for an uncommitted read operation.