Saturday, May 12, 2012

A Cookbook for DB2 LUW Developers


I recently came across an interesting new book on DB2 for Linux, Unix, and Windows titled IBM DB2 9.7 Advanced Application Developer Cookbook. The book shakes up the typical technology book format with a cookbook metaphor and succeeds in delivering the best of both approaches.

The authors, Sanjay Kumar and Mohankumar Saraswatipura, have obviously used the technology and clearly explain to the reader step-by-step tactics for tackling most of the important components of the DB2 application development life cycle. The book is divided into 9 chapters, each delivering a series of practical recipes for using, creating, and managing various aspects of DB2 application programming. The recipes are well-written and easy to understand, with lots of supporting code to guide you as you work through them. The cover of the book references “over 70 practical recipes,” but it sure seems like there are more than that.

The first chapter of the book covers recipes that focus on the application development enhancements made to DB2 9.7. This is an important chapter because sometimes new features get introduced so rapidly that developers do not get the chance to learn them before the next version comes out. Indeed, with DB2 10 being released in early April 2012, it is even more important to learn what DB2 9.7 added to the mix.

Other chapters I found quite useful were Chapter 5, which covers recipes for coding Java applications against DB2 databases, and Chapters 8 and 9, which cover recipes for monitoring and tuning your DB2 applications.

Of course, the trouble with recipes is that you rely on them when you want to make something specific, and a book containing a recipe for everything you might ever want to do with DB2 is not practical. You can, however, use many of the recipes as starting points for the “dish” you wish to “cook” and then add the “flourishes” you need to make the dish your own.

Whether you are a novice or a long-time DB2 coder, this book will be helpful as you design, plan, develop, and optimize your DB2 9.7 applications and databases.

Wednesday, April 25, 2012

IDUG DB2 Technical Conference 2012



The time is drawing near for this year’s North American IDUG DB2 Technical Conference. This year’s event is being held in Denver, CO, the week of May 14-18, 2012. The IDUG conference planning committee has brewed up a wonderful agenda of educational sessions, presentations, and courses. Check ’em all out online here. Whether you are a DBA, a developer, or a systems programmer, there is a plethora of learning opportunities. And there are sessions that run the gamut from DB2 for z/OS to DB2 LUW to data warehousing to Big Data to new version information and more! You can even try your hand at certification at the conference.

Even more importantly, IDUG offers a great chance to build and strengthen your DB2 network. You will meet everyone who is anyone at the IDUG conference. You can network with IBMers, consultants, and peers throughout the week at technical networking sessions and the products and services expo. And don’t forget the Dine Around where you can sign up to eat with your favorite DB2 experts.

One of my favorite stops every year is the IDUG book store, where you can browse the latest publications on DB2 and related technologies. I’m super excited about this year’s book store because the latest edition of my book, DB2 Developer’s Guide, covering DB2 through Version 10, will be available there this year.

There is truly no other place where you can cram so much DB2 stuff into a week. So if you rely on DB2 in your professional life, the IDUG DB2 Technical Conference is the place to be.

Thursday, March 22, 2012

Working With Variable Data


One of the key design issues that is addressed in most DB2 applications is how to implement character data that varies in size from row to row. The basic trade-off is “ease-of-use and performance” versus “storage requirements.” It is possible to save storage space by using variable columns instead of placing small amounts of data in a large fixed space. Each variable column carries a 2-byte overhead, however, for storing the length of the data. Additionally, variable columns tend to increase CPU usage and can cause the update process to become inefficient. When a variable column is updated with a larger value, the row becomes larger; if not enough space is available to store the row, it must be moved to another page. This makes the update and any subsequent retrieval slower.

Follow these rules when defining variable character columns (a brief example follows the list):
  • Avoid variable columns if sufficient disk space is available to store the data using fixed columns. The general rule is: disk is cheap! Trading the cost of disk for more efficient development and better subsequent performance is often an easy decision to make.
  • Though this may be obvious, do not define a variable column when the data does not vary from row to row.
  • Do not define a variable column if its maximum length is less than 30 bytes. Furthermore, do not define a variable column if its maximum length is within 10 bytes of the average length of the column. Adhering to these two rules will cause you to choose VARCHAR data types only when they can potentially provide enough DASD savings to offset the other costs.
  • Consider redefining variable columns by placing multiple rows of fixed length columns in another table or by shortening the columns and placing the overflow in another table.
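
To make these trade-offs concrete, here is a minimal sketch contrasting a fixed and a variable definition (the table and column names are illustrative only, not from the sample database):

    CREATE TABLE SAMP.CUSTOMER
        (CUSTNO    CHAR(10)     NOT NULL,   -- always 10 bytes; no length prefix
         CUSTNAME  VARCHAR(40)  NOT NULL);  -- 2-byte length prefix plus the data

CUSTNO stays CHAR because every value is the same length; CUSTNAME merits VARCHAR because names vary considerably and the 40-byte maximum clears the 30-byte guideline.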

If, after following these guidelines, VARCHAR columns need to be implemented, go ahead and do so. However, it is wise to continuously re-assess the decision to use variable character data.

So how can you monitor the effectiveness of variable columns? Well, it is possible to use simple SQL queries against the data to determine the effectiveness of using VARCHAR for a column instead of CHAR. Consider, for example, the PROJNAME column of the DSN81010.PROJ sample table. It is defined as VARCHAR(24).

To gauge whether VARCHAR is appropriate for this particular column, consider issuing the following SQL query:

     SELECT    COL_LGTH, COUNT(*)
     FROM      (SELECT LENGTH(PROJNAME) AS COL_LGTH
                FROM   DSN81010.PROJ
               ) AS T
     GROUP BY  COL_LGTH
     ORDER BY  COL_LGTH;


This query will produce a report listing the lengths (in this case, from 1 to 24, excluding those lengths which do not occur) and the number of times that each length occurs in the table.  These results can be analyzed to determine the range of lengths stored within the variable column. 

If you are not concerned about this level of detail, the following query can be used instead to summarize the space characteristics of the variable column in question:

    SELECT  24*COUNT(*),                          -- space used as CHAR(24)
            SUM(2+LENGTH(PROJNAME)),              -- space used as VARCHAR(24)
            24*COUNT(*)-SUM(2+LENGTH(PROJNAME)),  -- total space saved
            24,                                   -- avg. space as CHAR(24)
            AVG(2+LENGTH(PROJNAME)),              -- avg. space as VARCHAR(24)
            24-AVG(2+LENGTH(PROJNAME))            -- avg. space saved
    FROM    DSN81010.PROJ;



This query produces a report similar to the one shown below:

SPACE      SPACE        TOTAL   AVG.       AVG.         AVG.
USED AS    USED AS      SPACE   SPACE AS   SPACE AS     SPACE
CHAR(24)   VARCHAR(24)  SAVED   CHAR(24)   VARCHAR(24)  SAVED
--------   -----------  ------  ---------  -----------  -----
158058        96515     61543     24          16          8



The following table itemizes the definition and the calculation for each of the columns returned by this query:

Definition                         Calculation
----------                         -----------
Space Used As CHAR(24)             24*COUNT(*)
Space Used As VARCHAR(24)          SUM(2+LENGTH(PROJNAME))
Total Space Saved Using VARCHAR    24*COUNT(*)-SUM(2+LENGTH(PROJNAME))
Avg. Space Used As CHAR(24)        24
Avg. Space Used As VARCHAR(24)     AVG(2+LENGTH(PROJNAME))
Avg. Space Saved Using VARCHAR     24-AVG(2+LENGTH(PROJNAME))

The query can be modified for use with any VARCHAR-defined column. Simply change the constant 24 to the maximum length of the variable column as defined in the DDL. Using these tools, you can better judge the actual disk space savings accruing as a result of VARCHAR usage.
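
Rather than checking the DDL, you can also look up the maximum length in the DB2 Catalog. Here is a minimal sketch of such a lookup (substitute your own creator, table, and column names):

    SELECT  NAME, LENGTH               -- LENGTH is the defined maximum length
    FROM    SYSIBM.SYSCOLUMNS
    WHERE   TBCREATOR = 'DSN81010'
    AND     TBNAME    = 'PROJ'
    AND     NAME      = 'PROJNAME';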

Synopsis

There are sound reasons for using variable data types within DB2 databases. Yet it is important to remember that business conditions change, and what may have been a sound reason for using VARCHAR data in the past may no longer apply. Use the queries in this article to determine whether it is still appropriate to use VARCHAR data. In general, do not use VARCHAR for small columns or for columns whose length does not vary considerably.

Sunday, March 04, 2012

Fetching Multiple Rows

When you need to retrieve multiple rows, consider deploying a multi-row fetch to transfer more than one row using a single FETCH statement. This capability was added as of DB2 Version 8.

A multi-row FETCH retrieves multiple rows at one time into column arrays in your application program. By fetching multiple rows at once, your request can become more efficient, especially for distributed requests. The performance improvement using multi-row FETCH in general depends on several factors, such as whether the request is distributed, the number of rows to be fetched, the complexity of the SELECT statement, and the number of columns being fetched.

Nevertheless, using multi-row FETCH can improve performance even in local environments, with a significant reduction in CPU time possible. Tests conducted by IBM have shown performance gains of between 25% and 40% when processing 10 rows per SQL statement in programs that process a considerable number of rows. With such significant gains possible, why hasn’t everyone moved to multi-row FETCH? Well, perhaps because it requires programming changes. A multi-row FETCH requires a cursor defined with rowset positioning. A rowset is a group of rows that are operated on as a set. Such a cursor enables your program to retrieve more than one row using a single FETCH statement.

To use this feature, you must DECLARE your cursor using the WITH ROWSET POSITIONING clause. For example:

EXEC SQL
  DECLARE SAMPCURS CURSOR
  WITH ROWSET POSITIONING
  FOR
  SELECT DEPTNO
  FROM   DSN81010.DEPT
END-EXEC.

Furthermore, to use a multi-row fetch you must define the appropriate structures to receive the multi-row data. This means defining an array of host variables into which the fetched rows can be placed; each column fetched requires its own host variable array. Be sure to match the array size to the rowset size. With the appropriate setup coded, FETCH statements can be written to retrieve more than a single row from the result set. For example:

FETCH NEXT ROWSET FROM SAMPCURS
  FOR 5 ROWS
  INTO :HOSTVAR-ARRAY;

As you can see, a multiple-row FETCH looks much like a single-row FETCH, except for two additions: the ROWSET orientation and the FOR n ROWS clause. The ROWSET keyword specifies that the orientation of this cursor is rowset positioning (instead of single row). The FOR n ROWS clause specifies the size of the rowset to be returned. The maximum rowset size is 32,767.
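
For reference, here is a minimal sketch of the receiving host variable array in COBOL working storage, assuming DEPTNO is CHAR(3) as in the sample DEPT table (the names are illustrative):

01  HOSTVAR-ARRAY.
    05  DEPTNO-DATA        PIC X(3) OCCURS 5 TIMES.

The OCCURS count should match (or exceed) the FOR n ROWS value. After each FETCH, SQLERRD(3) in the SQLCA indicates how many rows were actually returned, which matters when you process the final, partial rowset.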

Rowset cursors are very useful when you need to retrieve many rows or large amounts of data in distributed systems. By retrieving multiple rows with a single FETCH, multiple trips between the application and the database can be eliminated, thereby improving network performance.

To learn more about multi-row FETCH consider attending my upcoming webinar on the topic. This presentation will introduce and define multi-row FETCH, how to use it, and the performance implications of doing so. The presentation will also touch upon multi-row UPDATE. And it will introduce the new SoftBase Attach Facility MRF Feature, which allows you to implement multi-row FETCH without coding changes. To attend, sign up at this link: https://www1.gotomeeting.com/register/754473201

Tuesday, February 28, 2012

Identifying Unused Indexes

Did you know that DB2 V9 added a new column to the Real Time Statistics to help identify unused indexes?

The LASTUSED column in the SYSIBM.SYSINDEXSPACESTATS table contains a date indicating the last time the index was used. Any time the index is used to satisfy a SELECT, FETCH, searched UPDATE, or searched DELETE, or to enforce a referential constraint, the date is updated.

This helps to solve the problem of determining whether or not an index is being used. Standard operating advice is to DROP or delete anything that is not going to be used. But trying to determine whether something is actually used or not can be difficult at times.

You could always query your PLAN_TABLEs, or the plan and package dependency tables in the DB2 Catalog, for static SQL. But what about dynamic SQL? That is more difficult. Now, as of DB2 V9, you can simply query the LASTUSED column to see when the index was last used. The LASTUSED date is maintained by partition, so for a partitioned index the last used date of each partition should be checked.
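
For example, here is a simple sketch of such a query (the one-year threshold is arbitrary; choose an interval that makes sense for your shop):

    SELECT  CREATOR, NAME, PARTITION, LASTUSED
    FROM    SYSIBM.SYSINDEXSPACESTATS
    WHERE   LASTUSED < CURRENT DATE - 1 YEAR   -- not used in over a year
    ORDER BY CREATOR, NAME, PARTITION;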

Of course, you will have to give it some time, because an index may support a rarely used query. Most shops have queries and programs that run quarterly, or even annually, but are nevertheless very important... and you wouldn't want to drop the indexes supporting those queries just because they run infrequently, because when they do run, they matter.

Examine the LASTUSED column over time to determine which indexes are truly not being used, and then DROP the unused indexes.