Thursday, July 12, 2007

New Data Types [DB2 9 for z/OS]

As we continue our blog series on the new features and functionality of DB2 9 for z/OS, today we examine the four (OK, five) new data types introduced in this version of DB2.

BIGINT

First up, we have the BIGINT data type. A BIGINT is an exact numeric data type with a precision of 63 bits, stored in 8 bytes. This is the third integer data type available to DB2, and it offers the largest range of values of the three:

  • SMALLINT values can range from -32768 to 32767
  • INTEGER values can range from -2147483648 to 2147483647
  • BIGINT values can range from -9223372036854775808 to 9223372036854775807

So when you need to store very large integers, you no longer have to resort to DECIMAL with a zero scale.
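
As a quick sketch of how this might be used (the table and column names here are invented purely for illustration):

  CREATE TABLE EVENT_LOG
    (EVENT_ID   BIGINT       NOT NULL,   -- 8 bytes, roughly +/- 9.2 quintillion
     EVENT_DESC VARCHAR(100) NOT NULL);

  -- Values beyond the INTEGER range can be coded directly
  INSERT INTO EVENT_LOG
    VALUES (9223372036854775807, 'Largest possible BIGINT value');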

BINARY and VARBINARY

Next up, and perhaps more exciting, V9 delivers a true binary data type for the first time in DB2, so we no longer have to use a BLOB or CHAR with FOR BIT DATA. BINARY is a fixed-length binary string up to 255 bytes. DB2 9 also delivers a VARBINARY data type, which is a variable-length binary string up to 32704 bytes.
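
For example (again, the table and column names are made up), the new types are declared much like their character counterparts:

  CREATE TABLE DIGITAL_ASSET
    (ASSET_ID  INTEGER NOT NULL,
     MD5_HASH  BINARY(16),         -- fixed-length binary string (maximum 255 bytes)
     THUMBNAIL VARBINARY(4000));   -- variable-length binary string (maximum 32704 bytes)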

BINARY and VARBINARY data types extend the existing support for binary strings (BLOB) and are compatible with the BLOB data type; they are not compatible with character string data types. IBM does make it somewhat easy to migrate existing columns defined as CHAR FOR BIT DATA or VARCHAR FOR BIT DATA over to BINARY or VARBINARY using ALTER TABLE. If there is an index defined on the column, the index is placed in RBDP (Rebuild Pending) when the column is altered. Note, though, that you cannot alter BINARY or VARBINARY columns back to CHAR FOR BIT DATA or VARCHAR FOR BIT DATA.
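
The migration itself should be a single ALTER; as a sketch (the table and column names are hypothetical):

  ALTER TABLE ACCOUNT_TABLE
    ALTER COLUMN ACCT_TOKEN SET DATA TYPE VARBINARY(50);

  -- Any index on ACCT_TOKEN is now in RBDP and must be rebuilt
  -- (with the REBUILD INDEX utility, for example) before it can be used again.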

Also, there are some usage considerations to keep in mind. Two binary strings are equal only if they have the same length and the same bytes. If two strings are equal up to the length of the shorter string, the shorter string is considered less than the longer string, even when the remaining bytes in the longer string are hex zeros.
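
A quick query illustrates the point; if memory serves, binary string constants are coded with a BX prefix:

  SELECT CASE WHEN BX'12' = BX'1200' THEN 'EQUAL' ELSE 'NOT EQUAL' END,
         CASE WHEN BX'12' < BX'1200' THEN 'LESS'  ELSE 'NOT LESS'  END
    FROM SYSIBM.SYSDUMMY1;

  -- Returns NOT EQUAL and LESS: the trailing X'00' byte is significant,
  -- unlike the blank padding used when comparing CHAR values of unequal length.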

DECFLOAT

The next new data type supported by DB2 9 for z/OS is DECFLOAT. V9 takes advantage of new System z9 hardware support to deliver the DECFLOAT data type, which lets you use decimal floating-point numbers with greater precision. A decimal floating-point value (DECFLOAT) is an IEEE 754r number with a decimal point.

The maximum precision is 34 digits, and a decimal floating-point number is defined with either 16 or 34 digits of precision.

DECFLOAT(16) values can range from a low of -9.999999999999999×10**384 to a high of 9.999999999999999×10**384.

DECFLOAT(34) values can range from a low of -9.999999999999999999999999999999999×10**6144 to a high of 9.999999999999999999999999999999999×10**6144.

Anyone think they’ll need bigger numbers?

In addition, the DECFLOAT data type is able to represent several special values, or “non-number numbers”:

  • Infinity - a value that represents a number whose magnitude is infinitely large.
  • Quiet NaN - a value that represents undefined results and does not cause an invalid number condition (NaN stands for “not a number”).
  • Signaling NaN - a value that represents undefined results and causes an invalid number condition when used in any numerical operation.
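
As a rough sketch (the table and column names are invented, and my understanding is that the special values can be coded with the INFINITY, NAN, and SNAN keywords - check the SQL Reference), a DECFLOAT column might be used like this:

  CREATE TABLE SENSOR_READING
    (READING_ID  INTEGER NOT NULL,
     READING_VAL DECFLOAT(34));   -- 34 digits of precision

  INSERT INTO SENSOR_READING VALUES (1, 1234.56789);   -- ordinary value
  INSERT INTO SENSOR_READING VALUES (2, INFINITY);     -- infinitely large magnitude
  INSERT INTO SENSOR_READING VALUES (3, NAN);          -- quiet NaN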

XML

And finally, we get pureXML support to store XML as a native data type in DB2. That means you can specify XML as the data type for columns in your DB2 9 for z/OS tables. But I’m not really going to elaborate any further on XML here. A few more details can be found in an earlier post here.

3 comments:

Anonymous said...

Can you please provide additional information on why someone would want to use BigInt when they can use BigDecimal for better precision?

Is there a performance gain for using BigInt? How?

Thank you.

Craig S. Mullins said...

Well, BIGINT would be useful for someone who wanted to store integers greater than 2.1 billion (or so) within 8 bytes of storage.

For a DECIMAL(m,n) column the amount of storage required is (m/2)+1 bytes, with m/2 truncated. So DECIMAL(15) takes the same 8 bytes as a BIGINT, and from 16 digits up to the 19 digits a BIGINT can hold, BIGINT takes up less storage. For example, DECIMAL(18,0) needs (18/2)+1 = 10 bytes versus 8 bytes for a BIGINT.

Anonymous said...

Any idea what the DCLGEN will look like for these new data types?

As far as I know COBOL does not support such enormous numeric formats.

Does this mean that you must constantly CAST in your embedded SQL?

BTW, somewhere else I read that the new POWER6 also supports that format, so using DECFLOAT on Viper on AIX is also advised.