Monday, June 22, 2015

The DBMS Market Circa 2015

Today's blog post calls attention to a series of articles and product overviews I have been writing for the TechTarget SearchDataManagement portal on Database Management Systems (DBMS).

Firstly, I wrote a seven-part series of articles reviewing DBMS technology circa 2015. This series spans relational, NoSQL and in-memory database technology; here are the links to each of the seven articles:


Now you may be asking, why would I provide links to these articles on a DB2 blog? Good question. The answer is that it behooves you to keep up to date on the latest breakthroughs and offerings in the world of data management. Sure, we all can agree that DB2 is great and should be used by everybody! But let's face it, our organizations are going to have data-related projects where DB2 is not the primary DBMS... so read through those articles and get up to speed on the new NoSQL and in-memory database offerings out there.


I have also been writing a series of DBMS product overview documents that briefly review and highlight the features and capabilities of many popular DBMSes. I won't share all of them with you here today (if you are interested, they will all be linked to, over time, on my web site at http://mullinsconsulting.com/articles.html). I will, though, share the link for the TechTarget product overview I wrote of DB2: IBM DB2 relational DBMS overview.

That's all for today... thanks for reading!

Tuesday, June 16, 2015

Planning DB2 Educational Resources

Ensuring access to proper DB2 educational materials should be one of the first issues to be addressed after your organization decides to implement DB2. But education sometimes falls through the cracks... 

Does your organization understand what DB2 is? How it works? For what purposes it is needed at your shop? How it will be used? Without a sound understanding of DB2, its components, and features, it is unlikely that you will be able to use DB2 to its best advantage. A basic level of DB2 knowledge can be acquired through a short DB2 fundamentals class for the IT personnel charged with making DB2 a success at your organization. But long-term success with DB2 requires ongoing education.

After addressing the basics of DB2 education, you must support continuing DB2 education for your co-workers/employees. This support falls into four categories. 

The first category of training is a standard regimen of SQL and DB2 programming training to be used by application developers. Every programmer should receive basic training on SQL, education on how to embed SQL in the programming languages they will be using, and possibly additional courses on using DB2 with specific infrastructure software like MQ, WebSphere, CICS, and IMS. Also, with the importance of distributed access to DB2 these days, a course should be made available on that topic, with specific portions that address the technology used by your shop to provide distributed DB2 data access. If this basic level of DB2 education is not required for every DB2 programmer, then DB2 application performance will surely suffer as untrained coders write inefficient and incorrect SQL.

The second category of education support is external training for special needs. This support includes education for database administrators, technical support personnel, and performance analysts. Additionally, your organization needs to plan for ongoing education to keep appropriate personnel up-to-date on new versions and releases of DB2. Although IBM typically offers great courses for new DB2 releases, several third-party vendors such as KBCE and Themis regularly offer in-depth training and release-specific DB2 courses and lectures.

The third category of education is in-house, interactive training in the form of videos, computer-based training, and instructor-led courses. These courses should be used to augment and refresh the formal training given to your DB2 professional staff. 

The fourth, and final, category of support is reference material—for example, IBM’s DB2 manuals, DB2 books (such as DB2 Developer's Guide), vendor-supplied white papers, and industry publications and periodicals. Some organizations have third-party software for accessing manuals online, but the Web offers most everything needed these days.

IBM offers the free, web-based Information Center, as well as PDF versions of all DB2 manuals, freely available for download over the Web.

Of course, you should consider augmenting the standard IBM DB2 manuals with IBM redbooks. IBM redbooks provide in-depth, detailed coverage of a specific technology topic. IBM publishes redbooks on multiple subjects, including DB2, IMS, CICS, z/OS, and many other topics. IBM redbooks can greatly assist DB2 technicians working to understand a feature or nuance of DB2. You can download IBM redbooks for free in Adobe Acrobat format over the Web at http://www.redbooks.ibm.com/

IBM also offers many other useful sources of DB2 information on the Web including white papers, articles, and book excerpts from IBM Press publications. Another useful source for DB2 information is the IBM Developerworks website. Developerworks offers technical details about development using IBM software and frequently offers DB2-related articles. 

Independent Software Vendors (ISVs) are another rich source of DB2 information. The major vendors provide in-depth technical papers on features of DB2 that would be difficult for most shops to research in the same detail. BMC Software, CA Technologies, Compuware Corporation, and others are good sources for DB2-related white papers and content.

All of these educational components—in-house education, external education, and industry publications—are useful for learning how you can use DB2 effectively. You would be wise to have a mix of material that supports more than one of the categories outlined previously. In this way, you provide a varied learning environment that meets the needs of all students and allows each student to learn in the way most conducive to him or her. Plan to provide an on-site library of educational material addressing the following subjects:
  • Introduction to relational databases
  • Introduction to DB2 and SQL
  • Advanced SQL
  • Programming DB2 using (your languages of choice)
  • Programming DB2 in batch
  • Programming DB2 using TSO, CICS, and IMS
  • Programming DB2 and the Web
  • Creating DB2 stored procedures, triggers, and UDFs
  • Programming DB2 in a distributed environment
  • Debugging and problem analysis


You also might want to have an introductory DB2 database administration course to train new DBAs. In addition to this basic education library, plan to provide advanced education for technical DB2 users, such as DBAs, technical support personnel, and technical programmers and analysts. Advanced DBA topics (such as Sysplex Data Sharing, performance management, and backup/recovery) should be left to instructor-led training courses because of the complex nature of DB2 database administration. Additional advanced topics to consider include system administration (for systems programmers) and disaster recovery. Many vendors, including IBM and Themis, offer these classes. Searching for smaller consulting firms and local resources is also prudent; these firms sometimes can provide courses tailored to your installation needs.


The advanced education program should include allocating time to attend area user group meetings, the annual IBM Insight Conference, and the International DB2 Users Group (IDUG) conferences. The DB2 Symposium events also are useful resources for DB2 education. When DB2 users get together to share experiences at such forums, they uncover undocumented solutions and ideas that would be difficult to arrive at independently.

The bottom line is simple, though. Plan for ongoing DB2 education for your DBAs, programmers, and analysts... or plan on failing.

Tuesday, June 09, 2015

New England DB2 User Group

Just a brief blog post today to promote my upcoming speaking engagement at the New England DB2 User Group, in Sturbridge, MA (just outside of Boston). If you are in or around the area on June 18, 2015 be sure to stop by and participate in the meeting!

I will be giving two presentations (in the afternoon) that day. First up is my Big Data introductory presentation titled A Big Data Roadmap for the DB2 Professional. This was voted one of the Top Ten presentations at IDUG North America in 2014. My second presentation is a little bit different from what I normally present. It is titled Reduce Costs by Tuning DB2 to Reduce Your Rolling 4 Hour Average, and in it I will walk through the components of subcapacity pricing and variable workload license charges... and how to use that knowledge to tune and reduce your monthly mainframe costs.

There are two other speakers that day, including a pitch from IBM on migrating to DB2 11 for z/OS and another presentation from the State of CT on IBM's PureData system.

So if you are going to be in the New England area be sure to put the NEDB2UG meeting on your agenda.

Hope to see you there!

Monday, June 01, 2015

DB2 User-Defined Functions

In my last post here, I talked about the importance of built-in DB2 functions and how they can be used to simplify development. But DB2 also allows us to create our own, user-defined functions to programmatically extend the functionality of DB2.

A user-defined function, or UDF for short, is procedural functionality that you code up and add to DB2. The UDF, once coded and implemented, extends the functionality of DB2 SQL by enabling users to specify the UDF in SQL statements just like built-in SQL functions.

User-defined functions are ideal for organizations wanting to utilize DB2 and SQL to perform specialized routines with their own rules, business logic, and data transformation procedures.

UDFs are similar in functionality to application subroutines, but user-defined functions are different because they can be used inside SQL statements. The only way that user-defined functions can be executed is from within an SQL statement. This gives them great power. A user-defined function is not a substitute for an application subroutine, subprogram, or procedure. Instead, user-defined functions extend the functionality of the SQL language.
There are two basic ways to create a user-defined function: you can code your own program from scratch (using either a host programming language or SQL), or you can base the function on one that already exists (a sourced function, discussed below). Once coded, the CREATE FUNCTION statement registers the user-defined function to DB2 for use.

Types of User-Defined Functions (UDFs)

DB2 supports five different types of UDFs depending upon the manner in which they are coded and the type of functionality they support.
  • The first type is the external scalar UDF. Recall, from the prior post, that scalar functions are applied to a column or expression and operate on a single value. Furthermore, external functions are referred to as external because they are written and developed outside of (or external to) DB2. External UDFs must be written in a host programming language. DB2 user-defined functions can be written in Assembler, C, C++, COBOL, Java, or PL/I.
  • The next type of UDF is the external table UDF. Remember that table functions, when called, return an entire table. A table function is specified in the FROM clause of a SELECT statement, taking the place of a table, view, synonym, or alias.
  • A third type of user-defined function can be created from another existing function. This is a sourced function. A sourced function is based on a function that already exists: it can be based on a built-in function (scalar or column) or on another user-defined function that has already been created. External functions are coded from scratch, whereas sourced functions are created based on a pre-existing function and inherit the attributes of the underlying source function.
  • DB2 also supports the creation of SQL scalar user-defined functions. SQL scalar functions return a scalar value and are written entirely using SQL. Fairly extensive functionality can be delivered via SQL scalar functions using SQL procedural language.
  • The fifth and final type of UDF supported by DB2 was introduced in DB2 10 for z/OS: the SQL table UDF. SQL table functions are written exclusively in SQL but return a table as a result.
User-defined functions can be used to handle non-traditional data requirements, as well as to build DB2 databases that are customized to your business requirements.
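
To make these types more concrete, here is a minimal sketch of a SQL scalar UDF and a sourced UDF. All of the names are hypothetical, and the sourced example assumes a distinct type TEMPERATURE based on DECIMAL(5,2) has already been created:

    -- A SQL scalar UDF written entirely in SQL
    CREATE FUNCTION C_TO_F(CELSIUS DECIMAL(5,2))
      RETURNS DECIMAL(5,2)
      LANGUAGE SQL
      DETERMINISTIC
      NO EXTERNAL ACTION
      RETURN CAST((CELSIUS * 9.0 / 5.0) + 32 AS DECIMAL(5,2));

    -- A sourced UDF that inherits its behavior from the built-in MAX
    CREATE FUNCTION MAX_TEMP(TEMPERATURE)
      RETURNS TEMPERATURE
      SOURCE SYSIBM.MAX(DECIMAL(5,2));

    -- Once registered, a UDF is invoked just like a built-in function
    SELECT CITY, C_TO_F(AVG_CELSIUS)
    FROM   WEATHER_READINGS;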

There are a lot of details involved in coding and using UDFs, such as schema management, configuring WLM for execution, SQL restrictions, and more. But I will not be digging into those details in today’s post. I invite you to research the topic and learn how to extend the functionality of your DB2 environment using UDFs. 

Tuesday, May 19, 2015

The Importance of Using DB2 Functions

There are three basic types of built-in functions available to DB2 users: 
  1. aggregate functions, 
  2. scalar functions, and 
  3. table functions. 

It is important that you understand what functions are available – as well as what they do – in order to simplify program development and data access. Functions are invoked simply by specifying the function name and any required operands. A built-in function can be used any place an expression can be used (with some exceptions).

The first type of function is called an aggregate function, or sometimes a column function. Aggregate functions can be used to compute, from a group of rows, a single value for a designated column or expression. This provides the capability to aggregate data, thereby enabling you to perform statistical calculations across many rows with one SQL statement.

An aggregate function can be specified in a WHERE clause only if that clause is part of a subquery of a HAVING clause. In addition, every column name specified in the expression of the aggregate function must be a correlated reference to the same group.

The aggregate functions are AVG, CORRELATION, COUNT, COUNT_BIG, COVARIANCE, COVARIANCE_SAMP, MAX, MIN, STDDEV, SUM, VARIANCE, and XMLAGG.
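
For example, a single aggregate query can summarize salary data by department; this sketch assumes an EMP table like the one used in other examples on this blog:

    SELECT   DEPTNO,
             COUNT(*)    AS HEADCOUNT,
             AVG(SALARY) AS AVG_SALARY,
             MAX(SALARY) AS TOP_SALARY
    FROM     EMP
    GROUP BY DEPTNO;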

I won’t be explaining what each of these functions does in today’s blog entry; instead, I suggest that you review their functionality in the SQL Reference Guide (SC19-4066 for DB2 11 for z/OS).

The second type of function is the scalar function, which is probably the most commonly used type of function. However, there are so many scalar functions that it can be difficult to keep track of them all. A scalar function is applied to a column or expression and operates on a single value. Contrast this with the aggregate functions, which are applied to a set of data and return only a single result.

The result of a scalar function is a transformed version of the column or expression being operated on. The transformation of the value is based on the scalar function being applied and the value itself. For example, you might use the CHAR function to convert a DB2 date, time, timestamp, ROWID, floating point, integer, or decimal value to a character value… or you might use the DIGITS function to convert a number to a character string of digits.
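
As a quick sketch (the HIREDATE and EDLEVEL columns are assumed here, in the style of IBM's sample employee table):

    SELECT LASTNAME,
           CHAR(HIREDATE, USA) AS HIREDATE_USA,   -- date as MM/DD/YYYY
           DIGITS(EDLEVEL)     AS EDLEVEL_DIGITS  -- number as a string of digits
    FROM   EMP;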

Of course, these are just a few simple examples of the many functions available in DB2 for z/OS. There are well over 100 scalar functions, each of which may be useful to minimize the amount of code you write by making DB2 do it, instead!

With so many functions available it would not make sense to cover them all here. Suffice it to say, once you acquire a sound understanding of DB2’s functions, many requirements that seem difficult at first glance become achievable using nothing more than SQL (check out this example to clarify what I mean: Sorting Out a Solution in DB2).

The third type of function supported by DB2 is the table function, which can be specified only in the FROM clause of a query. This is because table functions return columns and resemble a table. There are three basic table functions:
  • ADMIN_TASK_LIST: Returns a table with one row for each defined task in the administrative scheduler task list.
  • ADMIN_TASK_OUTPUT: For an execution of a stored procedure, returns the output parameter values and result sets, if available.
  • ADMIN_TASK_STATUS: Returns a table with one row for each defined task in the administrative scheduler task list, showing the status of the task the last time it was run.
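
Because a table function returns a table, it must be referenced in the FROM clause, wrapped in the TABLE() keyword. A minimal sketch, assuming the scheduler routines reside in the default DSNADM schema:

    SELECT *
    FROM   TABLE (DSNADM.ADMIN_TASK_STATUS()) AS T;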

Additional Useful Function Information 

DB2 also offers some other types of built-in functions including:
  • MQSeries Built-In Functions - scalar and table functions for use with IBM’s message queuing software, MQSeries. These functions enable MQSeries messages to be received and sent.
  • XML Built-In Functions - scalar functions to allow applications to efficiently generate and manipulate XML data with DB2.
The final built-in function to be aware of is the RAISE_ERROR Function. It is used to raise an error condition in the SQLCA. The user supplies the SQLSTATE and error description for the error to be raised. The error will be raised with the specified SQLSTATE and a SQLCODE of –438. The RAISE_ERROR function can be used to signal application program and data problems.
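
As a sketch, RAISE_ERROR is typically coded within a CASE expression. The SQLSTATE value and the negative-salary check below are purely illustrative:

    SELECT EMPNO,
           CASE WHEN SALARY < 0
                THEN RAISE_ERROR('70001', 'SALARY CANNOT BE NEGATIVE')
                ELSE SALARY
           END AS CHECKED_SALARY
    FROM   EMP;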

What About User-Defined Functions?

DB2 also supports the creation of additional functions (and data types) to supplement the built-in functions (and data types) supplied out-of-the-box. User-defined functions (and types) give users the ability to effectively customize DB2 to their shop requirements. The ability to customize is potentially very powerful. It also can be quite complex and requires detailed knowledge, additional application development skills, and administrative dexterity.


We will take a look at UDFs in the next blog post, so check in again next week for that!

Tuesday, May 12, 2015

A Trip Report from the 2015 IDUG DB2 Tech Conference

Last week I attended, and spoke at, the annual North American IDUG DB2 Tech Conference in Philadelphia, PA. As usual, the event was busy and chock full of useful and interesting DB2 information.

My week started on Sunday with the IBM gold consultant briefing, whereby IBM talks to the gold consultants about their upcoming plans and solicits feedback from us. I can’t really share with you what happened there, but as is almost always the case, it was time well spent.

The conference kicked off in earnest on Monday with the keynote from IBMers Tim Vincent and Namik Hrle titled “Big Data, Analytics and the Future of Data Management.” Tim and Namik discussed how the growth of data is fueling innovation, causing a big change in the way value is created. Some of the key takeaways from the keynote, for me at least, were:
  • The predominant source of Big Data for most projects is still structured transactions
  • Primary focus of most data projects is usually on understanding customers
  • There is a trend toward self-service
  • Clearly there is value in Hadoop but you can't replace all your DBMSes with it!

Perhaps the most salient tidbit shared at the keynote address was this: “People have forgotten, or never truly understood, how complex data integration actually is.” I think this is true; all too often people underestimate how difficult the integration of data will be. And I agree, too, with the sentiment of the presenters who declared that “We need to realize that data is never going to be in one place.”

The bottom line of the keynote: All of our existing DB2 skills will still be relevant but must co-exist with skills in newer technologies like NoSQL and Hadoop.

Good stuff!

Some of the other highlights of the week:
  • Attendance seemed to be higher than at the past few IDUG conferences. And I see growth in the number of DB2 for LUW attendees. IDUG, at least historically, was mostly attended by DB2 for z/OS folks. And it is probably still true that more z/OS folks attend than LUW folks, but the LUW attendance continues to grow and may surpass z/OS at some point! Of course, this is all based on my unscientific eyeballing of attendees at presentations.
  • My session on DB2 performance had a full room and nobody seemed to nod off or run for the exits while I spoke. I also delivered a VSP for Dell to a room full of DBAs and developers, as well as a couple of presentations for DataKinetics during an evening session. So that was all good!
  • I attended a couple of sessions on newer features of DB2 and how some of them are under-utilized. I think this speaks to a few trends hitting the world of DB2 these days. Number one: training is drying up. If you don’t get training on new features it is hard to use them appropriately. Number two: people fail to take advantage of free learning opportunities like webinars and freely-available PDF manuals.
  • The vendor exhibit hall was active and well-attended throughout the week. All the usual suspects were there like BMC, CA, dbi, Dell, and IBM, but there were some newer (at least to IDUG) companies, too, like DataKinetics and Imperva.
  • The educational seminars were integrated into the conference this year. And they did not cost extra to attend. That means that more people attended ed seminars this year (at least I think they did) and the conference offered more value to attendees looking for more in-depth education than can be gained in an hour-long session.


All in all, the 2015 North American IDUG Tech Conference was a big success. And it is not too early to start planning for next year’s conference, which will be in Austin, TX. Finally, I’ll be able to drive to an IDUG… see you there in 2016!

Tuesday, April 28, 2015

Controlling Mainframe Software Costs

Although this blog usually focuses on DB2 issues, I sometimes use it to focus on other IT issues, usually mainframe-related. The primary purpose of today's blog post is to promote a webinar I'm conducting this Thursday called Managing Your z/OS Software Bill. The webinar is sponsored by Data Kinetics, the North American distributor of a product called AutoSoftCapping (or ASC for short), which can be used to help control the rolling four hour average and thereby reduce monthly software bills.

Cost containment is of critical importance for IT departments in this day and age of financial austerity... especially so in the mainframe world.  Every decision regarding your computer resources is weighed based on not only the value that they can deliver to your organization, but upon their cost to procure, implement, and maintain. And, in most cases, if a positive return on investment cannot be calculated, the software won’t be adopted, or the hardware won’t be upgraded.

An important opportunity for mainframe cost containment is to better manage the peak monthly capacity of your mainframe on an LPAR (logical partition) by LPAR basis. The pricing model for most mainframe software is based on the capacity of the machine on which the software will run. Note that this pricing model reflects the potential usage based on the capacity of the machine, not the actual usage. Some vendors offer usage-based pricing. You should actively discuss this with your current ISVs as it is becoming more common, more accurately represents fair usage, and can save you money.

IBM offers several subcapacity pricing models for many of its popular software offerings, including products such as z/OS, DB2, IMS, CICS, MQSeries and COBOL. Some of the benefits of subcapacity pricing include the ability to:
  • Grow hardware capacity without necessarily increasing your software charges
  • Pay for key software at LPAR-level granularity
  • Experience a low cost of incremental growth
  • Manage software cost by managing workload utilization

By tracking MSU usage by LPAR, you can be charged based on the maximum rolling four hour (R4H) average MSU usage instead of full capacity. For example, an LPAR on a 700 MSU machine whose R4H average never exceeds 320 MSUs would be charged for 320 MSUs, not 700. Most organizations with mainframes have shifted to some form of subcapacity pricing model, but not all of them understand how all of the "moving parts" work together. Thursday's webinar will help to clear that all up!

Managing mainframe software costs by adopting subcapacity pricing, soft capping techniques, and software like Data Kinetics' AutoSoftCapping can help your company to assure a cost-effective IT organization. In today’s cost-cutting, ROI-focused environment, doing anything less than that is short-sighted.

Tuesday, April 21, 2015

SQL Performance Basics: Part 7, Embed the SQL Directly Into Your Programs

You may recall that this is a subject I've written about before, but I think it is important enough to warrant covering briefly in this series on SQL performance basics... and that is, you should avoid black boxes if you want to optimize your applications.

So what is a black box? Simply stated, a black box is a database access routine that sits in between your application programs and DB2. Instead of coding embedded SQL in your application code, you make a call to the access routine (or black box).

The general idea here is that development is easier because programmers do not need to know how to write SQL. Instead, programmers call the black box to request data. SQL statements become calls – and every programmer knows how to code a call, right?

This approach is commonly referred to as a “black box” approach because the data access interface shields the developers from the “complexities” of SQL. The SQL is contained in that black box and programmers do not need to know how the SQL works – just how to call the black box for data. But there are a number of reasons why this approach is not sound:
  1. It is unwise to have uninformed and untrained developers writing DB2 applications without knowing the SQL behind their data access requests. What may seem like a simple request to a non-educated programmer may actually involve very complex and inefficient SQL “behind the scenes” in the black box.
  2. The black box coders will take shortcuts, like combining multiple types of SQL requests into one general query that returns more data than any single request needs... and then sending only the needed data back to the requester (see the sketch after this list). This violates Part 1 of this series.
  3. The black box is an access routine to DB2 data, but SQL is already an access method to DB2 data -- another one is not needed, and is therefore superfluous.
  4. The black box will add CPU cycles to your application because it consists of extra code that is not needed if the SQL is embedded right into your application programs.
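
Here is a hedged sketch of the over-retrieval problem from point 2; both queries are illustrative only:

    -- A generic black box tends to run one catch-all query and let
    -- callers throw away whatever they do not need:
    SELECT *
    FROM   EMP
    WHERE  EMPNO = ?;

    -- Purpose-built, embedded SQL asks only for what the program uses:
    SELECT LASTNAME, SALARY
    FROM   EMP
    WHERE  EMPNO = ?;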

Suffice it to say, avoid implementing SQL black boxes... they are just not worth the effort!


If you are interested in more details on this subject, I point you to this article I wrote on the subject in my DBA Corner column for Database Trends & Applications.

Thursday, April 16, 2015

SQL Performance Basics: Part 6, Code Relationally... or Avoid the Flat-File Mentality

Another pervasive problem permeating the DB2 development community -- and indeed the communities of most relational DBMSes -- is the “flat file” development mentality. What I mean by this is a programmer trying to access data in a relational database the same way that he would access data from a flat file.

DB2 is ‘relational’ in nature and, as such, operates on data a set-at-a-time, instead of the record-at-a-time processing used against flat files. In order to do justice to DB2, you need to change the way you think about accessing data.

To accomplish this, all users of DB2 need at least an overview education of relational database theory and a moderate to extensive amount of training in SQL. Without such a commitment your programmers are sure to develop ugly and inefficient database access code – and who can blame them? Programmers are used to working with files so they are just doing what comes naturally to them.

SQL is designed so that programmers specify what data is needed but they cannot specify how to retrieve it. SQL is coded without embedded data-navigational instructions. The DBMS analyzes SQL and formulates data-navigational instructions "behind the scenes.” This is foreign to the programmer who has never accessed data using SQL.

Every SQL manipulation statement operates on a table and results in another table. All operations native to SQL, therefore, are performed at a set level. One retrieval statement can return multiple rows; one modification statement can modify multiple rows. This feature of relational databases is called relational closure.

When accessing data, a programmer needs to think about what the end result should be and then code everything possible into the SQL. This means using the native features of SQL – joins and subselects and functions, etc. – instead of coding procedural host language code (whether in COBOL, C, Java or whatever) that, for example, opens up a cursor, fetches a row, and then uses a fetched value to open up another cursor. This is processing DB2 like a set of flat files... better to join the data!
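
For example, instead of fetching each department row and then opening a second cursor to retrieve that department's employees, one join does the work in a single statement (table and column names here follow IBM's familiar sample database):

    SELECT   D.DEPTNO, D.DEPTNAME, E.EMPNO, E.LASTNAME
    FROM     DEPT D
             INNER JOIN EMP E
                ON E.WORKDEPT = D.DEPTNO
    ORDER BY D.DEPTNO, E.LASTNAME;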

Educating programmers how to use SQL properly -- at a high level -- is probably the single most important thing you can do to optimize performance of your DB2 applications.

Saturday, April 11, 2015

SQL Performance Basics: Part 5, Avoid Sorting

Sorting is a very costly operation that you should strive to avoid if at all possible. Indexes are very useful for this purpose. DB2 can choose to use an available index to satisfy the sorting requirements of an ORDER BY, GROUP BY, or DISTINCT clause. Therefore, it can be beneficial to create indexes that correspond to these clauses for frequently executed SQL statements.

For example, consider this SQL statement:
      SELECT    LAST_NAME, TITLE, SALARY
      FROM      EMPLOYEE
      ORDER BY  LAST_NAME, TITLE;
 
If an index on LAST_NAME and TITLE exists, DB2 can use it to avoid sorting. By retrieving the data in order from the index, sorting becomes unnecessary.
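
A minimal sketch of such an index (the index name is arbitrary):

    CREATE INDEX XEMP_NAME_TITLE
        ON EMPLOYEE (LAST_NAME, TITLE);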

You can use this information to tune your SQL further to avoid sorting. When ordering is required by more than one column, sometimes removing a column from the ORDER BY clause can enable DB2 to use an index to avoid the sort.
Consider this SQL statement:
      SELECT    DEPT_CODE, MANAGER, DEPT_NAME
      FROM      DEPARTMENT
      ORDER BY  MANAGER, DEPT_CODE;
 
Say there is a unique index on MANAGER. DB2 will probably use this index to satisfy the request. However, a sort will be done after retrieving the data to put the results into DEPT_CODE order within MANAGER. But, since we know our data, we are able to change this situation. Because MANAGER is unique, we know that the following SQL statement is equivalent to the prior one:
      SELECT    DEPT_CODE, MANAGER, DEPT_NAME
      FROM      DEPARTMENT
      ORDER BY  MANAGER;
 
In this case, DB2 can use the index on MANAGER to avoid the sort. The extra column, DEPT_CODE, is removed from the ORDER BY clause. But, since MANAGER is unique, there can be at most one DEPT_CODE per MANAGER. Therefore, the sort order will be equivalent. Because we knew our data, we removed a sort from the process and made our query more efficient!
One final note on sorting: although most of you probably know this, it cannot be stated too strongly (or frequently) - always code an ORDER BY if the order of the results of your query is important. The ORDER BY clause is the only foolproof method of ensuring appropriate sort order of query results. Simply verifying that access is via an index is not sufficient to ensure the order of the results because:
  • The access path may change in the future to non-indexed access.
  • Even indexed access does not always return ordered results (e.g. list prefetch).

UNION VERSUS UNION ALL

Another simple, yet effective, means of enhancing SQL performance is to understand how UNION, EXCEPT and INTERSECT work. Let's start with UNION because it has been around in DB2 the longest. UNION takes the results of multiple SELECT statements and combines them together. It does this, as part of the UNION operation, by sorting the results and removing any duplicates. UNION ALL, on the other hand, will not sort and will not remove duplicates.

If you know that the results of the queries being unioned together are distinct (that is, no duplicates will be encountered), then you can use UNION ALL instead of UNION, and thereby enhance performance by avoiding the sort. Additionally, if you do not care whether duplicates are returned, always use UNION ALL. The same applies to the EXCEPT and INTERSECT operations, which were added as of DB2 9 for z/OS.
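
For example, assuming two hypothetical tables that, by definition, never contain the same employee, UNION ALL combines them without the sort that UNION would incur:

    SELECT EMPNO, LASTNAME
    FROM   CURRENT_EMP
    UNION ALL
    SELECT EMPNO, LASTNAME
    FROM   RETIRED_EMP;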

Tuesday, April 07, 2015

SQL Performance Basics: Part 4, The Order of Predicates

Did you know that the order in which you code your predicates can have an impact on query performance? It is usually a minimal impact, but it may buy you a couple of microseconds for a very performance-critical query. In order to use predicate ordering to your advantage, however, you need to be armed with some basic information on how DB2 evaluates predicates as it processes your SQL.

So, before we continue, let's review the order in which DB2 evaluates predicates at execution time. DB2 will evaluate indexable predicates first: matching predicates before non-matching. Then, Stage 1 predicates, and finally Stage 2 predicates. Within each of these four groups, DB2 will evaluate equal predicates, then BETWEEN and NOT NULL predicates, and finally, any other predicates. If more than one predicate exists within a group, then DB2 will evaluate them in the physical order in which they are coded in the SQL statement. 

The re-ordering of predicates to take advantage of this situation should be considered only as a last resort. When implemented, the technique will usually shave only a little bit from the query's execution time. It is also important to note that predicate order will not impact a query's access path: it will remain unchanged (as shown in the PLAN_TABLE).

Now, how can we use this to our advantage?

Consider the following query:

SELECT LAST_NAME, FIRST_NAME
FROM EMPLOYEE
WHERE SEX = 'M'
AND TITLE = 'MANAGER'


For the purposes of this discussion, no index exists for either of the columns coded in the predicates. They are therefore the same type: stage 1 and equal predicates. Furthermore, we know our data - in our organization, there is approximately a 50-50 split between males and females, and 15% of all employees are managers.

To optimize this query then, we can swap the two predicates to achieve better performance. So the query becomes:

SELECT LAST_NAME, FIRST_NAME
FROM EMPLOYEE
WHERE TITLE = 'MANAGER'
AND SEX = 'M'


Why should this query outperform the previous version? Well, assume we have 100,000 employees. If DB2 retrieves 50% of the rows (SEX = 'M') and then retrieves 15% of those 50%, we will have processed 57,500 rows:

( 100000 * 0.5 ) + ( ( 100000 * 0.5 ) * 0.15 ) = 57,500

But, if instead, DB2 were to retrieve 15% of the rows (TITLE = 'MANAGER') and then 50% of those, we will have processed only 22,500 rows:

( 100000 * 0.15) + ( ( 100000 * 0.15 ) * 0.5 ) = 22,500

Obviously, it is better for fewer rows to qualify early, thereby reducing the answer set and the number of rows that will have to be subsequently scanned.

Wednesday, April 01, 2015

SQL Performance Basics: Part 3, Eliminating Predicates?

It is technically possible to learn how to write SQL statements without having an in-depth knowledge of the data. However, the better you know your data, the better your application performance will be. Let's look at a simple example.

By reducing the number of predicates on your SQL statements you may be able to achieve better performance by:
  1. Reducing BIND (and REBIND) time because fewer options will probably need to be examined by the DB2 Optimizer.
  2. Reducing execution time due to a smaller path length caused by the removal of search criteria from the optimized access path. DB2 will always make sure that it processes each predicate coded for the SQL statement. Removing predicates removes work -- and less work equals less time to process the SQL.
Of course, you have to make sure that you can actually remove predicates without impacting the result set of your query, right? But sometimes - if you know your data - there are cases where you can eliminate predicates.

Consider the following SQL statement:

 SELECT FIRST_NAME, LAST_NAME, GRADE_LEVEL
 FROM   EMPLOYEE
 WHERE  TITLE = 'VP'
 AND    GRADE_LEVEL >= 10;


This statement retrieves all rows for vice presidents who are at a grade level of 10 or above. But, what if we know more about our data? Say, for example, that the starting grade level for vice presidents in our organization is 10. Therefore, it is impossible for anyone with a lower grade level to achieve the title of VP. That makes the second predicate redundant in this case. If we remove this predicate it will not logically change the results, but with less checking of the data required (DB2 won't have to check for GRADE_LEVEL >= 10), performance may be improved.

It is important though that you truly do "know your data." For example, it is not sufficient to merely note that for current rows in the EMPLOYEE table, no vice presidents are at a grade level below 10. This may just be a coincidence. Do not base your knowledge of your data on the current state of the data. You must truly know your business criteria to determine that a correlation between two columns (such as between GRADE_LEVEL and TITLE) actually exists. And only then should you modify your SQL. Failure to do so can result in incorrect results being returned.

Also, if the predicate was already there and you are removing it, comment out the predicate instead and be sure to document exactly why you are doing so in the code... that way, when somebody else takes a look at it later they'll know what happened and why.
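
For example (the comment wording is illustrative):

    SELECT FIRST_NAME, LAST_NAME, GRADE_LEVEL
    FROM   EMPLOYEE
    WHERE  TITLE = 'VP'
    --AND  GRADE_LEVEL >= 10
    --     (predicate removed: per our business rules, VPs start at
    --      grade level 10, so this check is redundant; restore it if
    --      that policy ever changes)
    ;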

Tuesday, March 24, 2015

Time to Start Your IDUG in Philadelphia Planning

Spring is in the air... well, at least it is South of the Mason-Dixon line... and that means it is time to plan your trip to this year's IDUG North American DB2 Tech Conference.

Anybody who has ever attended an IDUG conference knows about all of the good things you can expect to encounter, experience and learn at the event. That includes technical sessions on all of the latest and greatest DB2 technologies and features, networking opportunities to meet IBM developers and industry consultants, and the vendor exhibit hall where you can learn about software and tools to help you get the most out of DB2... not to mention picking up a few tchotchkes and mementos.

Additionally, there are Special Interest Groups (where you can discuss the latest industry trends and topics with other interested technicians), Hands-On Labs (delivering hands-on training led by IBM), and the opportunity to take free IBM certification exams.

But there's even more... not only is this year's conference in Philadelphia, PA, a new venue for IDUG, but the half- and full-day educational seminars (that used to require an additional fee) are included in your registration fee.

I'll be presenting my DB2 Performance Roadmap presentation at this year's IDUG, so be sure to stop in to say "Hi" and chat about DB2, big data, or your favorite topic du jour!

The IDUG DB2 Tech Conference is the place to be to learn all about DB2 from IBMers, gold consultants, IBM champions, end users, and more. With all of this great stuff going on in Philadelphia this May, why wouldn't you want to be there!?!?


Download the IDUG brochure for more details.

Monday, March 16, 2015

SQL Performance Basics: Part 2, Rely on Indexes

Perhaps the single most important aspect of SQL tuning is indexing. All developers should know all of the indexes that exist on any table upon which they write SQL statements. When an index exists on a column (or columns), DB2 can use the index to reduce I/O instead of scanning the entire table to satisfy a predicate. 

For critical queries, where no useful indexes exist, it usually makes sense to create an index to improve query performance. Of course, developers should enlist the assistance of a DBA to ensure the creation of appropriate indexes.

Let's begin with some basics. For example, consider this SQL statement:
    SELECT   LASTNAME, SALARY 
    FROM     EMP 
    WHERE    EMPNO = '000010' 
    AND      DEPTNO =  'D01'
What index or indexes would make sense for this simple query? First, think about all the possible indexes that you could create. Your first short list probably looks something like this:
  • Index1 on EMPNO
  • Index2 on DEPTNO
  • Index3 on EMPNO and DEPTNO
This is a good start, and Index3 is probably the best of the lot. It lets DB2 use the index to immediately look up the row or rows that satisfy the two simple predicates in the WHERE clause. Of course, if you already have a lot of indexes on the EMP table, you might want to examine the impact of creating yet another index on the table. There are several factors to consider. 
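
Creating Index3 takes one statement (the index name is arbitrary):

    CREATE INDEX XEMP_EMPNO_DEPTNO
        ON EMP (EMPNO, DEPTNO);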

Indexing Factors to Consider
For starters, you need to weigh the impact of data modification. DB2 will automatically maintain every index you create. This means every INSERT and every DELETE to this table will insert and delete not just from the table, but also from its indexes. And if you UPDATE the value of a column that is in an index, you also update the index. So, indexes speed the process of retrieval but slow down modification.
You should also consider the impact to any existing indexes and applications before creating a new index. If an index already exists on EMPNO or DEPTNO, it might not be wise to create another index on the combination. However, it might make sense to change the other index to add the missing column. But not always, because the order of the columns in the index can make a big difference depending on the query. For example, consider this query:

    SELECT   LASTNAME, SALARY
    FROM     EMP
    WHERE    EMPNO = '000010'
    AND      DEPTNO >  'D01';

In this case, EMPNO should be listed first in the index. And DEPTNO should be listed second, allowing DB2 to do a direct index lookup on the first column (EMPNO) and then a scan on the second (DEPTNO) for the greater-than.
Furthermore, if indexes already exist for both columns (one for EMPNO and one for DEPTNO), DB2 can use them both to satisfy this query so creating another index might not be necessary.
Finally, you should consider the importance of the query you are attempting to tune. The more important the query, the more you might want to tune by index creation. Of course, the term "importance" is not always easy to quantify. If you are coding a query that the CEO will run every day, you want to make sure it delivers optimal performance. So building indexes for that particular query is important. On the other hand, a query for a clerk might not necessarily be weighted as high, so that query might have to make do with the indexes that already exist. Of course, the decision depends on the application's importance to the business--not just on the user's importance. If the clerk runs business critical transactions and the CEO is simply printing off a report for later consumption, then the importance varies... right?

Index Overloading
Index design involves much more than I have covered so far. For example, you might consider index overloading to achieve index-only access. If all the data that a SQL query asks for is contained in the index, DB2 might be able to satisfy the request using only the index. Consider our previous SQL statement. We asked for LASTNAME and SALARY, given information about EMPNO and DEPTNO. And we also started by creating an index on the EMPNO and DEPTNO columns. If we include LASTNAME and SALARY in the index as well, we never need to access the EMP table because all the data we need exists in the index. This technique can significantly improve performance because it cuts down on the number of I/O requests.
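
A sketch of such an overloaded index, which simply adds the two queried columns to the index key (again, the index name is arbitrary):

    CREATE INDEX XEMP_COVERING
        ON EMP (EMPNO, DEPTNO, LASTNAME, SALARY);
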
Keep in mind that making every query an index-only access is not prudent or even possible. You should save this technique for particularly troublesome or important SQL statements.

The Bottom Line
A solid understanding of the indexes that exist -- and how additional indexes can help -- is vital to the performance of your DB2 applications. Take the time to understand the indexes that exist for your applications and you can become a better developer, making yourself more valuable to your organization and earning the respect of your peers!

Tuesday, March 10, 2015

SQL Performance Basics: Ask For Only What You Need

As you write SQL statements to access DB2 data, there are certain very simple, yet important rules to follow to encourage efficient SQL. Of course, SQL performance is a complex topic and to understand every nuance of how SQL performs can take a lifetime to master. That being said, adhering to the following simple rules puts you on the right track to achieving high-performing DB2 applications.

The first rule is to always provide only the exact columns that you need to retrieve in the SELECT-list of each SQL SELECT statement. Sometimes this is shortened to “do not use SELECT *”. The shorthand SELECT * means retrieve all columns from the table(s) being accessed. This is fine for quick and dirty queries but is bad practice for inclusion in application programs because DB2 tables may need to be changed in the future to include additional columns. SELECT * will retrieve those new columns, too, and your program may not be capable of handling the additional data without requiring time-consuming changes. 

Even if the program needs every column, it is better to explicitly ask for each column by name in the SQL statement for clarity and to avoid accessing new and unwanted columns.
DB2 consumes additional resources for every column you specify to be returned. If the program does not need the data, it should not ask for it.
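
A quick illustration of the difference:

    -- Avoid: returns every column, including any added in the future
    SELECT *
    FROM   EMP;

    -- Better: name exactly the columns the program uses
    SELECT EMPNO, LASTNAME, SALARY
    FROM   EMP;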

The second part of asking only for what you need is using the WHERE clause to filter data in the SQL instead of bringing it all into your program. This is a common rookie mistake. It is much better for DB2 to filter the data before returning it to your program. This is true because DB2 uses additional I/O and CPU resources to obtain each row of data. The fewer rows passed to your program, the more efficient your SQL will be. So, the following SQL

    SELECT  EMPNO, LASTNAME, SALARY
    FROM    EMP
    WHERE   SALARY > 50000.00;


is better than simply reading all of the data without the WHERE clause and then checking each row to see if the SALARY is greater than 50000.00 in your program.