Tuesday, August 14, 2007

DB2-L Is Back Up & Running

In case you haven't noticed yet, the DB2 mailing list (DB2-L) is back up and running as of Monday, August 13, 2007.

The archives are not yet available, but it looks like they will be back up soon (with no data loss). So, welcome back, DB2-L - - we all missed you!

Thursday, August 09, 2007

DB2-L Is Down

Those of you who subscribe to the DB2 mailing list, also known as DB2-L, may have noticed that the usually steady stream of information, questions, and comments that used to pour into your in-box has dried up. Don't worry (well, you can worry a little bit, I s'pose), it is nothing you have done... DB2-L is not functional.

The company that hosts the list for IDUG moved the server two weeks ago and when they did they destroyed the disk drive. IDUG has been struggling to get everything rebuilt and back up since then, but it is taking longer than expected.

If you want to keep abreast of when it will be back online, status updates will be posted on IDUG's website...

Wednesday, August 08, 2007

Altering LONG VARCHAR [DB2 9 for z/OS]

Just a short entry today to discuss a feature that will be useful for organizations with tables that store long text strings as VARCHAR columns. It involves altering the data type of a column from LONG VARCHAR to a compatible data type.

In versions of DB2 prior to V9 it was not possible to change the data type of a LONG VARCHAR or LONG VARGRAPHIC column using ALTER. Instead, you had to unload the data, drop the table, re-create the table using the new column definitions, and reload the data into the new table. Of course, when you drop the table you lose any authorizations, indexes, etc. defined on it, so you have to save and re-create those as well. Basically, it wasn’t easy unless you had a change management tool that automated the process.

But before going any further, what is a LONG VARCHAR? Well, a VARCHAR(n) is a varying-length column for character string data, where n specifies the maximum length of the string. If n was greater than 254, the column was set up as a long string column – that is, a LONG VARCHAR; the same applies to VARGRAPHIC columns.

As of DB2 V9, you will not create any more LONG VARCHAR columns – they are all treated as VARCHARs. But the old LONG VARCHAR columns, migrated from earlier releases, still exist. The COLTYPE in the DB2 Catalog (SYSIBM.SYSCOLUMNS) will be LONGVAR for LONG VARCHAR columns and LONGVARG for LONG VARGRAPHIC columns.

The good news is that V9 allows you to ALTER the data type of a LONG VARCHAR to VARCHAR and a LONG VARGRAPHIC to VARGRAPHIC using ALTER TABLE with the ALTER COLUMN parameter. First, you must look up the LENGTH of the column in SYSIBM.SYSCOLUMNS and use that value for the length of the new data type specification. For example, if LONGCOL in table EXAMPLE is a LONG VARCHAR, you can ALTER it in DB2 V9 to VARCHAR. Assuming the length of the column is 500, you can issue the following ALTER to change the column from LONG VARCHAR to VARCHAR:

ALTER TABLE EXAMPLE
ALTER COLUMN LONGCOL
SET DATA TYPE VARCHAR(500);

You must specify the exact value stored in the DB2 Catalog for LENGTH in this ALTER TABLE statement. Failure to do so will result in an error message.
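If you want to find all of the affected columns (and the lengths to specify), a simple catalog query along the following lines should do the trick. This is just a sketch, so qualify it (by TBCREATOR, for example) as appropriate for your shop:

SELECT TBCREATOR, TBNAME, NAME, COLTYPE, LENGTH
FROM SYSIBM.SYSCOLUMNS
WHERE COLTYPE IN ('LONGVAR', 'LONGVARG')
ORDER BY TBCREATOR, TBNAME, NAME;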

So, you can convert all of those LONG VARCHAR columns from past versions of DB2 to VARCHAR using the ALTER statement in V9.

Monday, August 06, 2007

Encryption [DB2 9 for z/OS]

DB2 V9 offers some encryption news, but we need to go back a version to start the story. You see, DB2 supports encryption in Version 8 through encryption functions that need to be explicitly coded in order to encrypt and decrypt data.

These functions (ENCRYPT and DECRYPT) allow you to encrypt and decrypt data at the column level. Because you can specify a different password for every row that you insert, you are encrypting data at the “cell” level in your tables. If you use these functions to encrypt your data, be sure to put some mechanism in place to manage the passwords that are used to encrypt the data. Without the password, there is absolutely no way to decrypt the data.

To assist you in remembering the password, you have the option to specify a hint (for the password) at the time you encrypt the data. The following SQL example shows an INSERT that encrypts the SSN (social security number) using a password and a hint:

INSERT INTO EMP (SSN)
VALUES(ENCRYPT('289-46-8832','TARZAN','? AND JANE'));

The password is “TARZAN” and the hint we’ve chosen to provide is “? AND JANE”… so the hint will prompt us to think of Tarzan as the companion of Jane.
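By the way, if you forget the password but did supply a hint, you can retrieve the hint with the GETHINT built-in function; something along these lines should work:

SELECT GETHINT(SSN) AS PASSWORD_HINT
FROM EMP;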

In order to retrieve the encrypted data you will need to use the appropriate DECRYPT function (DECRYPT_BIT in this case), supplying the correct password. This is shown in the following SELECT statement:

SELECT DECRYPT_BIT(SSN,'TARZAN') AS SSN
FROM EMP;


If we fail to supply a password, or supply the wrong password, the data is returned in an encrypted format that is unreadable.

The result of encrypting data using the ENCRYPT function is VARCHAR FOR BIT DATA. The encryption algorithm is an internal algorithm. For those who care to know, it uses Triple DES cipher block chaining (CBC) with padding and the 128-bit secret key is derived from the password using an MD5 hash.

When defining columns to contain encrypted data, the DBA must be involved because the data storage required is significantly larger. The length of the column has to include the length of the non-encrypted data + 24 bytes + the number of bytes to the next 8-byte boundary + 32 bytes for the hint.
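For example, to store the 11-byte SSN used earlier along with a hint, the arithmetic works out to 11 + 24 + 5 (padding to the next 8-byte boundary) + 32 = 72 bytes. So the column definition might look something like the following (a sketch only; the table layout here is just illustrative):

CREATE TABLE EMP
(SSN VARCHAR(72) FOR BIT DATA);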

OK, that is all V8 stuff and this series of blog postings is supposed to be about V9 functionality, right? So what about version 9? Well, DB2 9 for z/OS offers some nice improvements to encryption support. Firstly, DB2 can take advantage of encryption hardware advances.

CP Assist for Cryptographic Function, aka CPACF, is available on z990 hardware. CPACF can run on all the CPUs, but remember, this feature is available only on z990 and later machines, not the older z900. The z990 also introduces a PCIXCC card which is needed for the IBM Data Encryption Tool, but not for the DB2 encryption functions.

Note: The IBM Data Encryption Tool (available from IBM at an additional price) offers encryption for DB2 tables at the table level, whereas the encryption functions (free with DB2) offer encryption at the column level.

The CP Assist for Cryptographic Function delivers cryptographic support on every CP with Data Encryption Standard (DES), Triple DES (TDES), and Advanced Encryption Standard (AES)-128 bit data encryption/decryption, as well as Secure Hash Algorithm (SHA-1) and SHA-256 hashing. For a more detailed discussion of CPACF, associated technology and functionality, check out the following IBM redbook: IBM eServer zSeries 990 (z990) Cryptography Implementation (SG24-7070).

Basically, the net result is that the cost of encrypting DB2 data under V9 is reduced on the z990 hardware.

Additionally, IBM has added encryption support in the controllers of its storage devices.

Both the IBM TS1120 and IBM Ultrium 4 tape drives include data encryption capabilities within the drive itself. This support can allow you to avoid the need for host-based encryption of data or the use of specialized encryption appliances. In addition, IBM claims that the encryption does not significantly impact the performance of the drives, so there should be minimal to no impact on the batch processing window when encrypting in this manner.

So far, we’ve been talking about encryption for data at rest. But DB2 9 for z/OS also improves support for encryption of data in transit. DB2 9 supports the Secure Socket Layer (SSL) protocol by implementing the z/OS Communications Server IP Application Transparent Transport Layer Security (AT-TLS) function. The z/OS V1R7 Communications Server for TCP/IP introduces the AT-TLS function in the TCP/IP stack for applications that require secure TCP/IP connections. AT-TLS performs transport layer security on behalf of the application, in this case DB2 for z/OS, by invoking the z/OS system SSL in the TCP layer of the TCP/IP stack. The z/OS system SSL provides support for TLS V1.0, SSL V3.0, and SSL V2.0 protocols.

So encryption of data over the wire is improved in z/OS 1.7. The Communications Server supports AT-TLS, which uses SSL data encryption. SSL encryption has been available on z/OS for a long time, but DB2 9 for z/OS now makes use of this facility and offers SSL encryption using a new secure port.

When acting as a requester, DB2 for z/OS can request a connection using the secure port of another DB2 subsystem. When acting as a server, and from within a trusted context (I’ll discuss trusted context in a later DB2portal blog entry), SSL encryption can be required for the connection.

So, little by little, better encryption support is being made available within the world of DB2 for z/OS.

Tuesday, July 24, 2007

Index Compression [DB2 9 for z/OS]

Another useful new feature debuting in V9 is the ability to compress indexes. We’ve been able to compress DB2 data in table spaces for a long time now, either through an exit routine or with the COMPRESS table space parameter (added in DB2 V3). But before V9 we’ve never been able to compress index data.

Why would you want to compress index data? Well, some types of applications require very large indexes on very large tables - - data warehousing applications are one good example. Sometimes, the storage required for indexes to support your data warehouse applications can exceed the storage required for the base table. So it makes sense that you might want to reduce the storage consumed by such indexes.

DB2 V9 introduces the COMPRESS parameter for indexes. You can specify COMPRESS YES (or NO) on your CREATE INDEX and ALTER INDEX statements. Index compression is specified at the index level and cannot be turned on or off on a partition-by-partition basis.
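For example, the statements might look something like this (the index, table, column, and buffer pool names are purely illustrative; as discussed below, a compressed index must use a larger buffer pool such as 8K):

CREATE INDEX XCUST1
ON CUSTOMER (LAST_NAME, FIRST_NAME)
COMPRESS YES
BUFFERPOOL BP8K0;

ALTER INDEX XCUST1 COMPRESS NO;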

Additionally, DB2 will only compress the data in leaf pages, not in the root page or any non-leaf pages in between. This makes sense because you don’t want to incur the expense of decompressing all of these types of pages in your indexes just to find the right leaf page range.

Index compression does not require a compression dictionary. As such, DB2 can begin to immediately compress data in the leaf pages of your newly created indexes.

Now think about what we’ve already learned for a minute. The data on the leaf page is compressed, but we will want to access it uncompressed, right? So index pages are stored on disk in a compressed format but are expanded when read. Those 4K index pages on disk will require more than 4K when expanded, which means that compressed indexes must be defined in a larger buffer pool (8K, 16K, or 32K). Regardless of the buffer pool page size you choose, DB2 will always compress the index data down into a 4K page on disk.

Another consideration to keep in mind is that index data is decompressed for index image copies, so a copy of an index will require more storage space than the actual index requires.

So, when you move to DB2 9 in NFM you have an additional compression decision to make: which indexes should be compressed and which should not? But it is a good thing to have more options at your disposal, especially for applications with huge indexing requirements.