Tuesday, May 19, 2015

The Importance of Using DB2 Functions

There are three basic types of built-in functions available to DB2 users: 
  1. aggregate functions, 
  2. scalar functions, and 
  3. table functions. 

It is important that you understand what functions are available – as well as what they do – in order to simplify program development and data access. Functions are invoked simply by specifying the function name and any required operands. A built-in function can be used any place an expression can be used (with some exceptions).

The first type of function is called an aggregate function, or sometimes a column function. Aggregate functions can be used to compute, from a group of rows, a single value for a designated column or expression. This provides the capability to aggregate data, thereby enabling you to perform statistical calculations across many rows with one SQL statement.

An aggregate function can be specified in a WHERE clause only if that clause is part of a subquery of a HAVING clause. In addition, every column name specified in the expression of the aggregate function must be a correlated reference to the same group.

The aggregate functions are AVG, CORRELATION, COUNT, COUNT_BIG, COVARIANCE, COVARIANCE_SAMP, MAX, MIN, STDDEV, SUM, VARIANCE, and XMLAGG.
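
To make this concrete, here is a minimal sketch using AVG and COUNT. It assumes a hypothetical EMP table with WORKDEPT and SALARY columns (in the style of the DB2 sample tables); one row is returned per department, and HAVING filters on the aggregated value:

    SELECT   WORKDEPT,
             COUNT(*)    AS EMP_COUNT,   -- number of employees per department
             AVG(SALARY) AS AVG_SALARY   -- average salary per department
    FROM     EMP
    GROUP BY WORKDEPT
    HAVING   AVG(SALARY) > 50000;        -- keep only the higher-paying departments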

I won’t be explaining what each of these functions does in today’s blog entry; instead, I suggest that you review their functionality in the SQL Reference Guide (SC19-4066 for DB2 11 for z/OS).

The second type of function is the scalar function, which is probably the most commonly used type of function. However, there are so many scalar functions that it can be difficult to keep track of them all. A scalar function is applied to a column or expression and operates on a single value. Contrast this with the aggregate functions, which are applied to a set of data and return only a single result.

The result of a scalar function is a transformed version of the column or expression being operated on. The transformation of the value is based on the scalar function being applied and the value itself. For example, you might use the CHAR function to convert a DB2 date, time, timestamp, ROWID, floating point, integer, or decimal value to a character value… or you might use the DIGITS function to convert a number to a character string of digits.
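
As a quick, hypothetical illustration (again assuming an EMP sample-style table, here with a HIREDATE date column and a small integer EDLEVEL column):

    SELECT EMPNO,
           CHAR(HIREDATE, USA) AS HIRE_DATE,   -- date rendered as 'mm/dd/yyyy'
           DIGITS(EDLEVEL)     AS ED_DIGITS    -- number rendered as a character string of digits
    FROM   EMP;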

Of course, these are just a few simple examples of the many functions available in DB2 for z/OS. There are well over 100 scalar functions, each of which may be useful for minimizing the amount of code you write by letting DB2 do the work instead!

With so many functions available it would not make sense to cover them all here. Suffice it to say that with a sound understanding of DB2’s functions, many requirements that seem difficult at first glance are achievable using nothing more than SQL (check out this example to clarify what I mean: Sorting Out a Solution in DB2). 

The third type of function supported by DB2 is the table function, which can be specified only in the FROM clause of a query. This is because table functions return rows and columns, resembling a table. There are three basic table functions:
  • ADMIN_TASK_LIST: Returns a table with one row for each defined task in the administrative scheduler task list. The administrative scheduler is covered in Part VI, “DB2 Utilities and Commands.”
  • ADMIN_TASK_OUTPUT: For an execution of a stored procedure, returns the output parameter values and result sets, if available.
  • ADMIN_TASK_STATUS: Returns a table with one row for each defined task in the administrative scheduler task list, showing the status of the task the last time it was run.
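
A table function is referenced in the FROM clause using the TABLE keyword. Here is a sketch of querying the task status, assuming the administrative task scheduler functions live in the DSNADM schema (as they do in DB2 for z/OS):

    SELECT T.*
    FROM   TABLE(DSNADM.ADMIN_TASK_STATUS()) AS T;   -- one row per scheduled task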

Additional Useful Function Information 

DB2 also offers some other types of built-in functions including:
  • MQSeries Built-In Functions - scalar and table functions for use with IBM’s message queuing software, MQSeries. These functions enable MQSeries messages to be received and sent.
  • XML Built-In Functions - scalar functions to allow applications to efficiently generate and manipulate XML data with DB2.
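
For instance, a minimal sketch of generating XML with the XMLELEMENT and XMLATTRIBUTES functions, once again using the hypothetical EMP table:

    SELECT XMLELEMENT(NAME "employee",
                      XMLATTRIBUTES(E.EMPNO AS "id"),  -- attribute on the element
                      E.LASTNAME)                      -- element content
    FROM   EMP E;
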
The final built-in function to be aware of is the RAISE_ERROR function. It is used to raise an error condition in the SQLCA. The user supplies the SQLSTATE and error description for the error to be raised. The error will be raised with the specified SQLSTATE and a SQLCODE of –438. The RAISE_ERROR function can be used to signal application program and data problems.
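
A minimal sketch of RAISE_ERROR in action, using a made-up SQLSTATE of '70001' and the hypothetical EMP table (the ELSE branch supplies the data type for the CASE expression):

    SELECT EMPNO,
           CASE
             WHEN SALARY < 0
               THEN RAISE_ERROR('70001', 'Negative salary encountered')  -- fails with SQLCODE -438
             ELSE SALARY
           END AS SALARY
    FROM   EMP;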

What About User-Defined Functions?

DB2 also supports the creation of additional functions (and data types) to supplement the built-in functions (and data types) supplied out-of-the-box. User-defined functions (and types) give users the ability to effectively customize DB2 to their shop requirements. The ability to customize is potentially very powerful. It also can be quite complex and requires detailed knowledge, additional application development skills, and administrative dexterity.
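
Just to whet your appetite, here is a minimal sketch of a SQL scalar UDF (hypothetical name and logic) that converts Celsius to Fahrenheit:

    CREATE FUNCTION C_TO_F(CELSIUS DECIMAL(5,2))
      RETURNS DECIMAL(7,2)
      LANGUAGE SQL
      RETURN (CELSIUS * 9 / 5) + 32;   -- invoked like any built-in scalar function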


We will take a look at UDFs in the next blog post, so check in again next week for that!

Tuesday, May 12, 2015

A Trip Report from the 2015 IDUG DB2 Tech Conference

Last week I attended, and spoke at, the annual North American IDUG DB2 Tech Conference in Philadelphia, PA. As usual, the event was busy and chock full of useful and interesting DB2 information.

My week started on Sunday with the IBM gold consultant briefing, whereby IBM talks to the gold consultants about their upcoming plans and solicits feedback from us. I can’t really share with you what happened there, but as is almost always the case, it was time well spent.

The conference kicked off in earnest on Monday with the keynote from IBMers Tim Vincent and Namik Hrle titled “Big Data, Analytics and the Future of Data Management.” Tim and Namik discussed how the growth of data is fueling innovation and causing a big change in the way value is created. Some of the key takeaways from the keynote, for me at least, were:
  • The predominant source of Big Data for most projects is still structured transactions
  • Primary focus of most data projects is usually on understanding customers
  • There is a trend toward self-service
  • Clearly there is value in Hadoop but you can't replace all your DBMSes with it!

Perhaps the most salient tidbit shared at the keynote address was this: “People have forgotten, or never truly understood, how complex data integration actually is.” I think this is true; all too often people underestimate how difficult the integration of data will be. And I agree, too, with the sentiment of the presenters who declared that “We need to realize that data is never going to be in one place.”

The bottom line of the keynote: All of our existing DB2 skills will still be relevant but must co-exist with skills in newer technologies like NoSQL and Hadoop.

Good stuff!

Some of the other highlights of the week:
  • Attendance seemed to be higher than at the past few IDUG conferences. And I see growth in the number of DB2 for LUW attendees. IDUG, at least historically, was mostly attended by DB2 for z/OS folks. And it is probably still true that more z/OS folks attend than LUW folks, but the LUW attendance continues to grow and may surpass z/OS at some point! Of course, this is all based on my unscientific eyeballing of attendees at presentations.
  • My session on DB2 performance had a full room and nobody seemed to nod off or run for the exits while I spoke. I also delivered a VSP for Dell to a room full of DBAs and developers, as well as a couple of presentations for DataKinetics during an evening session. So that was all good!
  • I attended a couple of sessions on newer features of DB2 and how some of them are under-utilized. I think this speaks to a few trends hitting the world of DB2 these days. Number one: training is drying up. If you don’t get training on new features it is hard to use them appropriately. Number two: people fail to take advantage of free learning opportunities like webinars and freely-available PDF manuals.
  • The vendor exhibit hall was active and well-attended throughout the week. All the usual suspects were there like BMC, CA, dbi, Dell, and IBM, but there were some newer (at least to IDUG) companies, too, like DataKinetics and Imperva.
  • The educational seminars were integrated into the conference this year. And they did not cost extra to attend. That means that more people attended ed seminars this year (at least I think they did) and the conference offered more value to attendees looking for more in-depth education than can be gained in a one-hour session.


All in all, the 2015 North American IDUG Tech Conference was a big success. And it is not too early to start planning for next year’s conference, which will be in Austin, TX. Finally, I’ll be able to drive to an IDUG… see you there in 2016!