Tuesday, October 11, 2022

See You In Scotland for IDUG?

Just a quick post to let folks know that I will be presenting at the IDUG EMEA Db2 Tech Conference the last week of October 2022. The conference will be held in Edinburgh, Scotland, UK at the Edinburgh International Conference Centre (EICC), and even though the event runs from Saturday to Wednesday (instead of the customary Sunday through Thursday), you can still find a full slate of educational opportunities delivered by IBMers, vendors, users, and consultants from all over the world. In other words, there will be an impressive array of shared Db2 knowledge to consume.

So, where can you find me there?


First of all, on Monday, October 24th at 15:40 (for those of you who struggle with military time that is 3:40 PM) I will be presenting with Insoft-InfoTel on the topic "Intelligent Automation of Db2 Administration and Management." 

This session will discuss the importance of automation, particularly as it is applied to managing your Db2 for z/OS databases. I'll look at trends in automation and things you should consider automating. We'll also take a look at the various ways that traditional database management tasks are undertaken and the risks and costs associated with those tactics. And then InfoTel will share their iDBA-Online solution for automating your Db2 for z/OS management tasks and how it can mitigate those risks and costs.

I will also be spending some time at the InfoTel booth in the exhibit hall on Monday evening, so be sure to stop by and discuss any Db2 topics with me!

I hope to see you at this year's IDUG EMEA conference the last week of October 2022. And if you are going to be there, be sure to attend my presentation and/or seek me out to say "Hello!"

Thursday, September 08, 2022

The Importance of Data Masking for IBM Db2 z/OS

A three-pronged set of trends is conspiring to increase the need for organizations to protect and mask sensitive data stored in Db2 for z/OS databases:

  • data privacy regulations place specific requirements on how data is to be protected,
  • data breaches continue to grow, thereby requiring improvements in data protection protocols, and
  • the ongoing requirement to copy production data to test environments in order to conduct realistic application testing.

Protecting your data while managing these three trends requires a systematic way to ensure that your sensitive data is not exposed or surreptitiously accessed. An ideal method to accomplish this is by masking sensitive data using a data masking tool.



If you want to learn all about the requirements for data masking, and how UBS-Hainer's BCV5 Masking Tool can protect your critical Db2 for z/OS data, be sure to register for and attend my upcoming webinar, Protect Your Sensitive Db2 for z/OS Data with the BCV5 Masking Tool, on October 13, 2022 at 11 am EST / 5 pm CEST.

I hope you will join me for this informative webinar.


Tuesday, September 06, 2022

New Series of Short, Low-Price Books

Today's blog post is to introduce my regular readers to a series of new books that I am writing and selling exclusively on Amazon. The books will be shorter than the ones I've written in the past; each one will be under 100 pages. The primary audience is Kindle eBook readers, but paperback print editions will also be available!

The general idea is to discuss a specific topic or idea within the realm of data, database systems, or mainframe computing, and to offer the books at a low price point. The eBook format will generally cost a couple of bucks or so (with the option for Kindle Unlimited readers to read for free)! And the print editions will generally cost around $10 or so. Very low cost with some high-quality information!

As of today, the first three books are now available to be ordered on Amazon. They are as follows:

The Tao of Db2: Achieving Balance and Understanding with Db2

This book follows the exploits of a seasoned Db2 DBA training his intern in the ways of Db2 before he can enjoy a well-earned retirement. Learn best practices for managing Db2 properly to achieve harmonious systems and applications that deliver quality and performance using the Tao of Db2.

Based on a presentation I gave at IDUG several years ago now, this book uses Tao as a training mechanism for achieving the proper mindset and qualities to manage and administer Db2 databases and applications. 

Database Design Reviews: Techniques to Prepare Databases and Applications for Operational Implementation

This book provides a framework for a series of design reviews that should be conducted to ensure the delivery of quality database and application code. It defines what a database design review is, and offers guidance on structuring the design review meetings, identifying the appropriate participants, and describing the timing and purpose of each meeting.

You can use this book as a template for establishing a consistent approach to database design reviews in your organization.

Regulatory Compliance for Data & Database Systems: Protect Your Sensitive Data

Being in compliance with governmental and industry regulations is important for organizations of all types and sizes. Indeed, there are numerous regulations impacting your data and database systems, ranging from industry regulations like PCI DSS to more encompassing governmental regulations like GDPR. There are around 100,000 regulations "out there" so it is certain that many of them apply to you and your business. But what do you need to do to protect your sensitive data to comply with all these regulations?

This book delivers an introduction to the regulatory landscape and its impact on data and database systems and management. We will take a high-level look at some of the most impactful regulations that affect how we manage database systems. The primary focus of this book is to examine several techniques that can be used to ensure compliance. We will look at things like database auditing, data masking, and data archiving. And it will show the tactics and software required to implement compliance within your database systems.
Armed with the information in this book you can confidently implement effective compliance for your corporate databases and systems.


Thursday, September 01, 2022

Know Your Isolation Levels to Develop Correct and Efficient Db2 Programs

Did you know that Db2 provides a method to change the way a program or SQL statement acquires locks? That way is known as the isolation level and it can be set to specify the locking behavior for a transaction or statement. Standard SQL defines four isolation levels that can be set using the SET TRANSACTION ISOLATION LEVEL statement:

  • Serializable
  • Repeatable read
  • Read committed
  • Read uncommitted

The isolation level determines the mode of page or row locking implemented by the program as it runs.

Db2 supports a variation of the standard isolation levels. Db2 implements page and row locking at the program execution level, which means that all page or row locks are acquired as needed during the program run. Page and row locks are released as the program runs, depending on the isolation level.

In Db2 you can specify the following four isolation levels: cursor stability (CS), repeatable read (RR), read stability (RS), and uncommitted read (UR).

Using the ISOLATION parameter of the BIND command you can set the isolation level of a package or plan. You also can use the WITH parameter on a SELECT statement to set the isolation level of a single SQL statement.
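
For example, here is a minimal sketch using the Db2 sample EMP table; the BIND command shown in the comment is illustrative only and not from any particular application:

    -- Statement-level isolation: the WITH clause on this query requests
    -- uncommitted read (UR) regardless of how the package was bound
    SELECT EMPNO, LASTNAME, SALARY
    FROM   DSN8B10.EMP
    WHERE  WORKDEPT = 'C01'
    WITH UR;

    -- Package-level isolation is set at BIND time, for example:
    --   BIND PACKAGE(COLL1) MEMBER(PROG1) ISOLATION(CS)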

Cursor stability is the Db2 implementation of the SQL standard read committed isolation level. CS is perhaps the most common DB2 isolation level in use in production applications because it offers a good tradeoff between data integrity and concurrency. When CS is specified the transaction will never read data that is not yet committed; only committed data can be read. Cursor stability is the current default isolation level if none is specified at BIND time.

A higher level of integrity is provided with repeatable read. Under an RR isolation level all page locks are held until they are released by a COMMIT (or ROLLBACK), whereas with CS read-only page locks are released as soon as another page is accessed.

An RR page locking strategy is useful when an application program requires consistency in rows that may be accessed twice in one execution of the program, or when an application program requires data integrity that cannot be achieved with CS.

As an example of a good reason to use RR page locking, consider a reporting program that scans a table to produce a detail report, and then scans it again to produce a summarized managerial report. If the program is bound using CS, the results of the first report might not match the results of the second.

Suppose that you are reporting the estimated completion dates for project activities. The first report lists every project and the estimated completion date. The second, managerial report lists only the projects with a completion date greater than one year.

The first report indicates that two activities are scheduled for more than one year. After the first report but before the second, however, an update occurs. A manager realizes that she underestimated the resources required for a project. She invokes a transaction to change the estimated completion date of one of her project's activities from 8 months to 14 months. The second report is produced by the same program, but reports 3 activities.

If the program used an RR isolation level rather than CS, an UPDATE that occurs after the production of the first report but before the second would not have been allowed. The program would have maintained the locks it held from the generation of the first report and the updater would be locked out until the locks were released.
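
To make this concrete, here is a minimal sketch of the two report queries, assuming a hypothetical PROJ_ACT table with an EST_END_DATE column (not the actual Db2 sample table definitions):

    -- Detail report: every activity and its estimated completion date.
    -- Binding the program with ISOLATION(RR), or coding WITH RR as shown,
    -- holds the read locks until COMMIT so both scans see the same data.
    SELECT PROJNO, ACTNO, EST_END_DATE
    FROM   PROJ_ACT
    WITH RR;

    -- Managerial report: only activities estimated at more than one year out
    SELECT PROJNO, ACTNO, EST_END_DATE
    FROM   PROJ_ACT
    WHERE  EST_END_DATE > CURRENT DATE + 1 YEAR
    WITH RR;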

How about another example? Consider a program that is looking for pertinent information about employees in the information center and software support departments who make more than $30,000 in base salary. In the DB2 sample tables department 'C01' is the information center and department 'E21' is software support.

The program opens a cursor based on the following SELECT statement:

    SELECT  EMPNO, FIRSTNME, LASTNAME,
            WORKDEPT, SALARY
    FROM    DSN8B10.EMP
    WHERE   WORKDEPT IN ('C01', 'E21')
    AND     SALARY > 30000;

The program then begins to FETCH employee rows. Assume further, as would probably be the case, that the statement uses the XEMP2 index on the WORKDEPT column. An update program that implements employee modifications is running concurrently. The program handles transfers by moving employees from one department to another, and implements raises by increasing the salary.

Assume that Sally Kwan, one of your employees, has just been transferred from the information center to software support. Assume further that another information center employee, Heather Nicholls, received a 10 percent raise. The update program running concurrently with the report program implements both of these modifications.
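
For illustration, the concurrent update program might issue statements along these lines against the sample EMP table (a sketch, not the actual program):

    -- Transfer Sally Kwan from the information center to software support
    UPDATE DSN8B10.EMP
       SET WORKDEPT = 'E21'
     WHERE LASTNAME = 'KWAN';

    -- Give Heather Nicholls a 10 percent raise
    UPDATE DSN8B10.EMP
       SET SALARY = SALARY * 1.10
     WHERE LASTNAME = 'NICHOLLS';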

If the report program were bound with an isolation level of CS, the second program could move Sally from 'C01' to 'E21' after she was reported to be in department 'C01' but before the entire report was finished. Thus, she could be reported twice: once as an information center employee and again as a software support employee. Although this circumstance is rare, it can happen with programs that use cursor stability. If the program were bound instead with RR, this problem could not happen. The update program probably would not be allowed to run concurrently with a reporting program, however, because it would experience too many locking problems.

Now consider Heather's dilemma. The raise increases her salary 10 percent, from $28,420 to $31,262. Her salary now fits the parameters specified in the WHERE condition of the SQL statement. Will she be reported? It depends on whether the update occurs before or after the row has been retrieved by the index scan, which is clearly a tenuous situation. Once again, RR avoids this problem.

You might be wondering, "If CS has the potential to cause so many problems, why is it used so ubiquitously? Why not trade the performance and concurrency gain of CS for the integrity of RR?"

The answer is simple: the types of problems outlined are rare. The expense of using RR, however, can be substantial in terms of concurrency. So trading the efficiency and concurrency of CS for the locking expense of RR usually is not a sound tradeoff.

The third isolation level provided by Db2 is read stability (RS). Read stability is similar in functionality to the RR isolation level, but a little less restrictive. A retrieved row or page is locked until the end of the unit of work; no other program can modify the data until the unit of work is complete, but other processes can insert values that might be read by your application if it accesses the row a second time.

Consider using read stability over repeatable read only when your program can handle retrieving a different set of rows each time a cursor or singleton SELECT is issued. If using read stability, be sure your application is not dependent on having the same number of rows returned each time.

Finally, we come to the last, and most maligned, isolation level: uncommitted read (UR). The UR isolation level provides read-through locks, also known as dirty read or read uncommitted. Using UR can help to overcome concurrency problems. When you're using an uncommitted read, an application program can read data that has been changed but is not yet committed.

UR can be a performance booster, too, because application programs bound using the UR isolation level will read data without taking locks. This way, the application program can read data contained in the table as it is being manipulated. Consider the following sequence of events:

1.   To change a specific value, at 9:00 a.m. a transaction containing the following SQL is executed:

UPDATE DSN8B10.EMP
   SET FIRSTNME = 'MICHELLE'
WHERE  EMPNO = 10020;

The transaction is a long-running one and continues to execute without issuing a COMMIT.

2.   At 9:01 a.m., a second transaction attempts to SELECT the data that was changed, but not committed.

If the UR isolation level were used for the second transaction, it would read the changed data even though it had yet to be committed. Obviously, if the program doesn't need to wait to take a lock and merely reads the data in whatever state it happens to be at that moment, the program will execute faster than if it had to wait for locks to be taken and resources to be freed before processing.
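
Assuming the second transaction issues something like the following (a sketch), the WITH UR clause is what lets it read the in-flight change without waiting for a lock:

    -- Reads the uncommitted value ('MICHELLE') without taking a lock
    -- and without waiting for the long-running transaction to COMMIT
    SELECT FIRSTNME
    FROM   DSN8B10.EMP
    WHERE  EMPNO = 10020
    WITH UR;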

The implications of reading uncommitted data, however, must be carefully examined before being implemented. Several types of problems can occur. Using the previous example, if the long-running transaction rolled back the UPDATE to EMPNO 10020, the program using dirty reads may have picked up the wrong name ("MICHELLE") because it was never committed to the database.

Inaccurate data values are not the only problems that can be caused by using UR. A dirty read can cause duplicate rows to be returned where none exist. Alternatively, a dirty read can cause no rows to be returned when one (or more) actually exists. Additionally, an ORDER BY clause does not guarantee that rows will be returned in order if the UR isolation level is used. Obviously, these problems must be taken into consideration before using the UR isolation level.

Keep in mind, too, that the UR isolation level applies to read-only operations: SELECT, SELECT INTO, and FETCH from a read-only result table. Any application plan or package bound with an isolation level of UR will use uncommitted read functionality for any read-only SQL. Operations contained in the same plan or package that are not read-only will use an isolation level of CS.

When is it appropriate to use UR isolation? The general rule of thumb is to avoid UR whenever the results must be 100 percent accurate. Following are examples of when this would be true:

  • Calculations that must balance are being performed on the selected data
  • Data is being retrieved from one source to insert to or update another
  • Production, mission-critical work is being performed that cannot contain or cause data integrity problems

In general, most production Db2 applications are not serious candidates for dirty reads. In a few specific situations, however, the dirty read capability will be of major benefit. Consider the following cases in which the UR isolation level could prove to be useful:

  • Access is required to a reference, code, or look-up table that basically is static in nature. Due to the non-volatile nature of the data, a dirty read would be no different than a normal read the majority of the time. In those cases when the code data is being modified, any application reading the data would incur minimal, if any, problems.
  • Statistical processing must be performed on a large amount of data. Your company, for example, might want to determine the average age of female employees within a certain pay range. The impact of an uncommitted read on an average of multiple rows will be minimal because a single value changed will not greatly impact the result.
  • Dirty reads can prove invaluable in a data warehousing environment that uses Db2 as the DBMS. A data warehouse is a time-sensitive, subject-oriented store of business data that is used for online analytical processing. Other than periodic data propagation and/or replication, access to the data warehouse is read-only. Because the data is generally not changing, an uncommitted read is perfect in a read-only environment because it can cause little damage. More data warehouse projects are being implemented in corporations worldwide, and Db2 with dirty read capability is a very wise choice for data warehouse implementation.
  • In those rare cases when a table, or set of tables, is used by a single user only, UR can make a lot of sense. If only one individual can be modifying the data, the application programs can be coded such that all (or most) reads are done using UR isolation level, and the data will still be accurate.
  • Finally, if the data being accessed already is inconsistent, little harm can be done using a dirty read to access the information.

Although the dirty read capability can provide relief to concurrency problems and deliver faster performance in specific situations, it also can cause data integrity problems and inaccurate results. Be sure to understand the implications of the UR isolation level and the problems it can cause before diving headlong into implementing it in your production applications.

Summary

It is important for Db2 DBAs and application programmers to know the four isolation levels and their impact on SQL. Using the isolation levels is an effective way to control concurrency and locking for your Db2 applications.

Thursday, August 04, 2022

All About zIIPs

If you work with Db2 then you need to know about the IBM zIIP specialty processor. This is true whether you are a DBA or a developer... and, let's face it, if you work on a mainframe in any capacity, you'll want to know at least something about zIIPs.

With that in mind, today's blog post is sort of a meta-post. You see, I've been writing a series of blog posts for Cloudframe on the topic of zIIPs. So, the goal today is to provide you with information about those posts and links to their content.

The first blog post, Understanding Mainframe Specialty Processors: zIIPs and More, is introductory in nature. It defines the term "specialty processor" and describes the various types of specialty processors available from IBM. And then it offers a bit more information about the zIIP.

The second post in this series is titled Digging Into the zIIP: What Does zIIP-Eligible Mean? The title is kind of self-describing as to what this post offers!

Next (3 of 8) we have Types of Processing That Can Utilize zIIPs & Why You Want to Use zIIPs which gets into TCBs and SRBs, enclaves, and discusses how zIIPs can reduce your mainframe software costs.

The fourth post in this series digs into one of the major benefits of zIIPs, namely the ability to run Java workloads on the zIIP. In the article, titled Java and the zIIP: Five Major Benefits, we look at the primary advantages that can be accrued by running Java on zIIPs instead of a general purpose CP.

And because the vast majority of mainframe programs are written in COBOL, the fifth post provides Options for Converting from COBOL to Java.

Next, we take a look at Common zIIP Usage Mistakes and How to Identify Them.

Then we turn our attention to the latest IBM pricing model, Tailored Fit Pricing, and examine how it can impact cost savings when it comes to zIIP usage. This post is titled, appropriately enough, The Impact of Tailored Fit Pricing on zIIPs.

And we close out with Predictions on the Future of zIIP and Specialty Processors (of course, this is my personal viewpoint on the topic with the proviso that nobody knows the future for sure)!

So it is my hope that you will take a moment or two and click through the links to the articles above that look interesting to you. And if you have any comments, or suggestions for future articles, as always, please post them below!

Friday, June 17, 2022

My Speaking Schedule at IDUG Db2 Tech Conference in Boston (2022)

 Just a quick note to let everybody who is coming to Boston in July for IDUG know what I will be speaking about and when my presentations are scheduled!

First of all, my regular IDUG session this year is titled "Things Your DBAs Hear... and how to stop making them crazy!" This session is based on my decades of experience as a DBA and as a consultant. It walks you through interactions between developers and DBAs in a light-hearted way. All of them are real-life examples of actual conversations I've been in (or observed). Attend this session to learn what frustrates DBAs and how improving your communication can improve your relationship with your DBAs... and therefore improve your development efforts! This is session E11, and it will be delivered on Wednesday, July 13 at 11:30 AM.

I will also be presenting at two different VSP sessions, one on Tuesday and another on Wednesday (this is the 10:15 AM time slot on both days).

On Tuesday, I will be presenting with InfoTel on the topic "To Protect and Preserve: Treat Your Data Properly or Pay the Consequences." This session will discuss vital data management issues such as data archiving and data protection (my portion), as well as some products that can help you manage your data better (the InfoTel portion).

On Wednesday, I will be presenting "How to Accelerate Db2 SQL Workloads... Without Db2!" for Log-On Software. This session takes a look at in-memory trends and issues, and shines a light on how QuickSelect can improve the performance of SQL queries.

I hope to see you at this year's IDUG North American conference the week of July 11, 2022. If you are there, come see one (or all) of my sessions... and be sure to say "Howdy!"

Friday, June 03, 2022

Time to Download the Db2 13 for z/OS Manuals!

Yesterday I posted a brief piece about the new version of Db2 for z/OS that is now generally available: Db2 13 for z/OS is Here! It is always an exciting time when a new version of Db2 for z/OS is unleashed on the world and one of the first things I recommend is to start reading the manuals to see what all is there!

I like having the PDF versions of the manuals on my hard drive, so with each new version, I start by downloading them! With that in mind, here is the link to the IBM page that hosts the PDF manuals for Db2 13 for z/OS.

Of course, over time, this documentation will be modified. Therefore, it is a good idea to be aware that subsequent modifications may render the versions on your hard drive inaccurate. You can always replace them periodically if you wish... perhaps when you adopt a new function level, make plans to download the latest iterations of the manuals.

There is also a great, new redbook on Db2 13 for z/OS that you should download and read. It is titled IBM Db2 13 for z/OS and More, and it, along with the What's New manual, can serve as a nice introduction to this new version.

A new manual that I need to mention is the SQL Data Insights Users Guide. If you read my earlier post (Db2 13 for z/OS is Here!) then you know that SQL Data Insights is a new Db2 13 capability that combines AI/deep learning with Db2 for z/OS and IBM Z technologies to deliver SQL-based semantic queries on tables and views. It is probably one of the most exciting new things in this new version...

So if you work with Db2 for z/OS don't delay... download those new manuals and start learning what new wonders Db2 13 will offer up!


Thursday, June 02, 2022

Db2 13 for z/OS is Here!

Here we are, in June 2022, about 5 years or so since Db2 12 for z/OS was released. And lo and behold, IBM has given us a new version of Db2 for z/OS to learn and adopt: Db2 13 for z/OS.

The new version is generally available (as of May 31, 2022). If you were not paying close attention though, you may have missed it. Db2 13 was announced at the same time as the new mainframe (IBM z16), so it didn't get quite the same level of attention. But those of us who use Db2 for z/OS day in and day out will find a lot of great new stuff in this latest and greatest version of Db2.

I'm not going to go into great detail about the new features and functionality of Db2 13 for z/OS today, but I will offer a high-level overview. Look for future blog posts to dig into more of the nitty gritty tech details and capabilities.

The first thing to mention is that you will need to activate the last Db2 12 function level 510 (FL510) before you can migrate to Db2 13. As many organizations are lagging behind in terms of function level activation, it will be interesting to see how this requirement impacts migration to Db2 13.

AI

So what can users expect from this new version? Well, it seems that the most talked-about features are related to adopting AI. Functions that deliver AI capabilities into Db2 will make it easier for organizations on the AI journey to integrate Db2 into their processes.

Perhaps the most significant AI addition to Db2 13 is the SQL Data Insights feature. Provided as an extension to Db2, SQL Data Insights uses built-in functions to deliver AI capabilities, such as uncovering heretofore unknown relationships in your data. Since it uses built-in functions, you can use it anywhere that you use SQL!
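
I'll save the details for future posts, but to give a flavor: the capability is surfaced through built-in functions such as AI_SIMILARITY. A hypothetical query might look something like this (a sketch only, assuming a CUSTOMER table that has been AI-enabled for SQL Data Insights; names and the exact function arguments are illustrative):

    -- Find the 10 customers the model considers most "similar" to
    -- customer 123456, based on relationships learned from the data
    SELECT   C.CUST_ID, C.NAME,
             AI_SIMILARITY(C.CUST_ID, 123456) AS SIMILARITY
    FROM     CUSTOMER C
    WHERE    C.CUST_ID <> 123456
    ORDER BY SIMILARITY DESC
    FETCH FIRST 10 ROWS ONLY;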

Db2 13 offers additional AI assistance, including the ability to simplify building models, Natural Language Processing (NLP), and exploitation of the speed of the IBM z16 for training and querying data.

IBM z16 Synergy

The next thing that has been highly-touted is that Db2 13 takes advantage of new capabilities delivered in the IBM z16 hardware.  The new Telum chip used by the z16 mainframe provides powerful AI capabilities that Db2 uses to bolster its AI capabilities (such as SQL Data Insights). And we have already touched on that in terms of speeding up training and querying data for AI.

Db2 for z/OS is unique in that it is the only major DBMS that is designed specifically for a single operating system (z/OS) and hardware platform (IBM Z). This enables IBM (the provider of the DBMS, O/S, and hardware) to take advantage of capabilities unique to the platform, because there is no worry about supporting other platforms.

One example of this unique synergy is the ability to improve sort performance using the SORTL instruction of the IBM z15 and z16. Additionally, the IBM z16 System Recovery Boost can minimize downtime by speeding up the performance of Db2 for z/OS restart.

But What About BAU?

OK, so there is new AI stuff and great synergy with the IBM Z, but what about the features and functionality that make it easier to keep up with Business As Usual (BAU)? You know, things like easier administration, better performance, and so on?

Good news! There are a plethora of great new capabilities and improvements in Db2 13 for z/OS. While I cannot adequately cover them in detail today, some examples include:

  • The ability to convert back-and-forth between partition-by-growth and partition-by-range Db2 table spaces. 
  • Support for more concurrent threads and open data sets, as well as improved storage conditions. 
  • DDF storage relief.
  • Real Time Statistics (RTS) improvements.
  • Many improvements to IBM Db2 utility functionality.
  • Security and compliance improvements, including integration to the IBM Z Security and Compliance Center.

Summary

The bottom line is that there is a new version of Db2 for z/OS that mainframe shops will need to learn and prepare for. As with any new Db2 version, it will be exciting to dig in and discover all of the new stuff that can help us do our jobs better... and improve our organization's efforts to use its data to improve business.

Keep checking back here as I will blog in more detail about the new capabilities of Db2 13 for z/OS over time... 

Wednesday, May 25, 2022

TCBs, SRBs, and Enclaves

Many Db2 DBAs first hear about Task Control Blocks (TCBs) and Service Request Blocks (SRBs) in an IBM performance class, but not everyone has taken one of those classes. And even for those who have, a refresher is probably in order.

At a high level, mainframe z/OS program code can execute in one of two modes: TCB mode, also known as task mode, or SRB mode. Most programs execute under the control of a task. Each thread is represented by a TCB. A program can exploit multiple processors if it is composed of multiple tasks, as most programs are.

An SRB is a control block that represents a routine that performs a particular function or service in a specified address space. SRBs are lightweight and efficient but are available only to supervisor state software. An SRB is similar to a TCB in that it identifies a unit of work to the system. But an SRB cannot “own” storage areas. SRB routines can obtain, reference, use, and free storage areas, but the areas must be owned by a TCB. SRB mode typically is used by operating system facilities and vendor programs to perform certain performance-critical functions.

In general, z/OS dispatches Db2 work in TCB mode if the request is local, or in SRB mode if the request is distributed. These parallel tasks are assigned the same importance as the originating address space. Of course, this is a gross generalization and as zIIPs have become ubiquitous more SRB mode work has been enabled (as only SRBs can run on the zIIP).

Preemptible enclaves are used to do the work on behalf of the originating TCB or SRB address space. Enclaves are grouped by common characteristics and service requests, and because they are preemptible, the z/OS dispatcher—and Workload Manager—can interrupt these tasks for more important ones. There are two types of preemptible SRBs: client SRBs and enclave SRBs.

If the Db2 request is distributed DRDA workload, then it will be executed in enclave SRBs. If the request is coming over a local connection, then it will be dispatched between TCBs, client SRBs, and in some cases enclave SRBs (such as for parallel queries and index maintenance).

What Is an Enclave?

An enclave is a construct that represents a transaction or unit of work. Enclaves are a method of managing mainframe transactions for non-traditional workloads. You can think of an enclave as an anchor point for resource accumulation regardless of where the transaction is executing.

With traditional workloads, it is relatively easy to map the resources consumed to the actual transaction doing the consumption. But with non-traditional workloads (such as web transactions, distributed processing, and so on) it can be more difficult because a transaction can span multiple platforms. Enclaves are used to overcome this difficulty by correlating closely to the end user’s view of the transaction.

So even though a non-traditional transaction can be composed of multiple “pieces” spanning many server address spaces, and can share those address spaces with other transactions, the enclave gives you more effective control over the non-traditional workload. 

Synopsis

Hopefully this short introduction to TCBs, SRBs, and Enclaves has been helpful. At least the next time you hear somebody use these terms you'll have some idea what they are talking about!

Wednesday, April 20, 2022

Moving from IMS to Db2?

As a Db2 consultant and regular blogger I often receive questions about technical things, but also on strategy and industry trends. I try to help but I always make it clear when answering that I cannot predict the future, only make educated guesses... So with that introduction out of the way, one of the regular questions that I get involves comparing IMS to Db2, or helping to determine if it makes sense to convert from IMS to Db2. Here is the text of a recent question that was posed to me:

"My customer is wondering about the possible advantages of converting his IMS DB/DC system to IMS DC/Db2. The application currently performs well with an internal response time of less than .5 seconds on average. Even with an arrival rate of 425 full-function transactions per second, the queue count rarely goes above 10. This system typically peaks at 12.5 million transactions per twelve-hour day against HDAM and HIDAM databases totaling close to 1 terabyte. The application itself is currently a bit over seven million lines of code. Can you comment on the relative cost vs. advantage of moving an existing application from IMS DB to Db2 along with relative CPU capacity requirements?"

Here is my response:

Well, the main advantages of converting from IMS/DB to Db2 would be the better support for ad hoc query, support of standard SQL (instead of non-standard DL/I) for writing queries, and a deeper pool of talent to support the Db2 environment. There are many more skilled Db2 developers out there than there are IMS techies these days. 

The primary benefit of sticking with IMS is the good and predictable performance you currently enjoy. And, of course, there would be no need to convert the database structures or to undertake the more difficult task of rewriting the necessary application code.

So, the high-level metric to keep in mind as you make your decision comes down to this: a well-designed IMS application will perform very fast, perhaps faster than a well-designed Db2 application (but that does not mean that Db2 is slow). On the other hand, it will be easier to develop applications for Db2, especially when data needs to be accessed multiple ways and in an ad hoc manner. 

I guess it boils down to this: How happy are you with the current application? Are you able to support it properly? And how many other IMS/DB databases do you support? If this is the last IMS/DB database and you are looking to convert away in order to rid yourself of the IMS license, then it makes sense to consider converting. But you should do a project plan and cost/benefit analysis before making your final decision (conversion can be very costly). If you have a lot of other IMS/DB databases, then don't convert to Db2 unless you cannot support the needs of your end users (management, ad hoc support, etc.) using IMS.

In terms of CPU requirements, Db2 will consume more CPU than IMS. Db2 optimizes queries internally whereas IMS programmers construct access paths to data. This additional requirement will cause Db2 to consume more CPU. But, of course, that additional CPU brings with it the enormous benefit of database optimization and better ad hoc query support.

Furthermore, if you do decide to convert from IMS to Db2, be sure to use Db2 as it was intended to be used. In other words, don't just convert segments to tables and be done. Make sure that you normalize your design and come up with a good "relational" Db2 database design. Also, be sure to train your programmers to write SQL queries. That means changing mindsets from the record-at-a-time processing of IMS to the group-at-a-time processing of Db2 and SQL.

For example, if you have master file processing logic it needs to be re-examined in Db2. Instead of open file, read record from file, use key to read record from database... you should be joining tables and reading from the join. This requires a different mindset and coding technique. 
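
To make the mindset shift concrete, here is a sketch using hypothetical CUSTOMER and ORDERS tables (illustrative names only): instead of reading a customer record and then issuing a separate call for each of its orders, a single join returns the combined result set.

    -- Set-at-a-time processing: one statement replaces the
    -- read-customer-then-read-each-order loop of the IMS/file approach
    SELECT C.CUST_ID, C.CUST_NAME,
           O.ORDER_ID, O.ORDER_DATE, O.ORDER_TOTAL
    FROM   CUSTOMER C
           INNER JOIN ORDERS O
              ON O.CUST_ID = C.CUST_ID
    WHERE  O.ORDER_DATE >= CURRENT DATE - 30 DAYS;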

As an additional consideration, keep in mind that Db2 is a much "newer" database technology than the hierarchical infrastructure used by IMS. So it stands to reason that newer capabilities and features will likely be implemented in Db2 before they are implemented in IMS; for example, AI and machine learning capabilities. That does not mean that IMS is obsolete; it isn't. IBM still maintains and upgrades IMS (at the time of writing, the current version is IMS 15.2, which was released for GA in March 2020).

Summary

Of course, this answer has been necessarily brief. I encourage you to read up on Db2 and understand its many differences with regard to IMS before jumping into a "go"/"no go" decision. Best of luck...

Thursday, April 07, 2022

Wishing Z Mainframe a Happy Birthday

Today, April 7, 2022, the IBM mainframe officially celebrates its 58th birthday. The IBM 360 was launched on April 7, 1964 and the world of enterprise computing has never been the same.

I'm old enough to remember in the early 1990s when Stewart Alsop of InfoWorld predicted the last mainframe would be unplugged on March 15, 1996. And ever since he made that woefully inaccurate prediction many think that the mainframe is no longer relevant. But they are wrong... Here we are, 26 years after that infamous predicted drop-dead date and the mainframe is still going strong! 

Mainframes are designed to be able to process a trillion web transactions a day with the capability to process 1.1 million transactions per second. And 70% of the Fortune 500 use mainframes!

To all of you diligent mainframe users out there, today is indeed a day to celebrate... another year has gone by, and mainframes are still here... running the world!

Wednesday, March 09, 2022

On The Road Again… IDUG North American Db2 Tech Conference 2022

Well, it looks like this year IDUG will be a live, in-person event again after a couple of years of living virtually. The COVID pandemic took its toll on IT events and conferences, but I, for one, am looking forward to getting back out there and mingling – safely – with my Db2 colleagues. And the North American IDUG conference in Boston this July is just the place to do it!

Yes, I know that July is still 3 months away, but it is definitely time to start making your plans now to attend this year's IDUG... because let’s face it, you probably haven’t traveled in a while so you’ll be rusty and need all the time you can get to plan effectively!  Starting now means that there’s still time to get your manager’s approval, make your travel plans, find all those suitcases you'll need that have been collecting dust, familiarize yourself with your airline's travel requirements, buy a new box of KN-95 masks, and be where all the cool Db2 folks will be this July 11-14, 2022.

If you’ve ever attended an IDUG conference before then you know why I’m excited. IDUG offers a world of phenomenal educational opportunities delivered by IBM developers, vendor experts, users, and consultants from all over the world. There will be a slew of informative technical sessions on all of the latest and greatest Db2 technologies and features. 

At this point, the agenda has not been set yet, but c’mon, you know it is going to be a great event. Especially with Db2 Vnext looming on the horizon, right?

So get on it and start planning to be in Boston this July. And if you need any help justifying your attendance at this year’s IDUG event, IDUG offers a handy template for creating a justification letter for your boss. You’ll need to make some changes for the 2022 event, but it is a nice starting point.

The Bottom Line

The IDUG Db2 Tech Conference is the place to be to learn all about Db2 from IBMers, gold consultants, IBM champions, end-users, ISVs, and more. With all of this great stuff going on this year in Boston, why wouldn't you want to be there!?!?

Tuesday, February 08, 2022

Db2 User-Defined Functions (UDFs)

A user-defined function, or UDF for short, enables you to customize Db2 to your shop's requirements. It is a very powerful feature that can be used to add procedural functionality, coded by the user, to Db2. The UDF, once coded and implemented, extends the functionality of Db2 by enabling users to specify the UDF in SQL statements just like built-in SQL functions.

Additional functionality and capability aside, it can also be complex to deliver and requires detailed knowledge, additional application development skills, and extra administrative handling.

User-defined functions are ideal for organizations wishing to utilize Db2 and SQL to perform specialized corporate routines that handle business logic and data transformation.

Types of UDFs

There are two ways of creating a user-defined function: you can code your own program from scratch using a traditional programming language; or you can use an existing function as a template, of sorts.

There are two types of user-defined functions that can be written from scratch: scalar functions and table functions. A scalar function is applied to a column or expression and operates on a single value. DB2 offers multiple built-in scalar functions, each of which can be applied to a column value or expression. Examples of built-in scalar functions include LTRIM, SQRT, and SUBSTR. You can read more about Db2 built-in functions in my earlier blog post, A Quick Intro to Db2 SQL Functions.
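
For example, built-in scalar functions can be coded directly in SQL statements, applied to columns or expressions (shown here against the Db2 sample EMP table):

    -- LTRIM removes leading blanks, SUBSTR extracts part of a string,
    -- and SQRT operates on a numeric expression
    SELECT LTRIM(LASTNAME),
           SUBSTR(FIRSTNME, 1, 1),
           SQRT(SALARY)
    FROM   DSN8B10.EMP;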

Table functions are a different type of function that, when invoked, returns an entire table. A table function is specified in the FROM clause of a SELECT statement, taking the place of a table, view, synonym, or alias.

Scalar and table user-defined functions are referred to as external functions, because they are written and developed outside of (or external to) Db2. External UDFs must be written in a host programming language, such as Assembler, C, COBOL, or PL/I.

The final type of user-defined function is a sourced function. A sourced function is based on a function that already exists; that is, it can be based on a built-in function or another user-defined function that has already been created. A sourced function can also be based on an existing scalar or column function.

User-defined functions are similar in functionality to application subroutines. But user-defined functions are different because they are used inside of SQL statements. This gives them great power. A user-defined function is not a substitute for an application subroutine, subprogram, or procedure. Instead, user-defined functions are used to extend the functionality of the SQL language.

The Schema

User-defined functions, user-defined distinct types, stored procedures, and triggers all are associated with a schema. By default, the schema name is the authid of the process that issues the CREATE FUNCTION, CREATE DISTINCT TYPE, CREATE PROCEDURE, or CREATE TRIGGER statement.

You can specify a schema name when you create a user-defined function, user-defined type, or trigger. If the schema name is not the same as the SQL authorization ID, then the issuer of the statement must have either SYSADM or SYSCTRL authority, or the authid of the issuing process must have the CREATEIN privilege on the schema.

For example, the following statement creates a user-defined function named FUNCX in the schema named MYSCHEMA:

     CREATE FUNCTION MYSCHEMA.FUNCX ...

If the MYSCHEMA component was not included in the CREATE statement, then the schema would default to the authid of the person (or process) that executed the CREATE statement. In short, the schema is set to the owner of the function. If the CREATE statement was embedded in a program, the owner is the authid of the owner of the plan or package; if the statement is dynamically prepared, the owner is the authid in the CURRENT SQLID special register.
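
For the dynamic case, here is a quick sketch of how the schema can be influenced by the CURRENT SQLID special register (assuming the authid has the required privileges on the schema):

    -- The dynamically prepared CREATE defaults its schema (and owner)
    -- to the CURRENT SQLID in effect at the time
    SET CURRENT SQLID = 'MYSCHEMA';
    CREATE FUNCTION FUNCX ...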

Table Functions

Table functions are different in nature from scalar functions. A table function is designed to return multiple columns and rows. In other words, the output of a table function is itself a table. An example using a table function follows:

SELECT WINNER, WINNER_SCORE,
       LOSER, LOSER_SCORE
FROM   GAME_RESULTS(5)
WHERE  LOSER_SCORE = 0;

In this case, the table function GAME_RESULTS( ) is used to return the win/loss statistics for football games. The table function can be used in SQL statements just like a regular DB2 table. The function program is designed to fill the rows and columns of the "table." The input parameter is an INTEGER value corresponding to the week the game was played; if 0 is entered, all weeks are considered. The query above would return all results where the losing team was shut out (had 0 points) during the fifth week of the season.

The following or similar CREATE FUNCTION statement could be used to define the GAME_RESULTS( ) function:

CREATE FUNCTION GAME_RESULTS(INTEGER)
  RETURNS TABLE (WEEK INTEGER,
                 WINNER CHAR(20),
                 WINNER_SCORE INTEGER,
                 LOSER CHAR(20),
                 LOSER_SCORE INTEGER)
  EXTERNAL NAME GAMES
  LANGUAGE C
  PARAMETER STYLE DB2SQL
  NO SQL
  DETERMINISTIC
  NO EXTERNAL ACTION
  FENCED
  SCRATCHPAD
  FINAL CALL
  DISALLOW PARALLEL
  CARDINALITY 300;

The parameter identifying this as a table function is the RETURNS TABLE parameter. This parameter is used to define the columns of the table function. The function program must create these rows itself, perhaps by reading them from another data source such as a flat file.

The value supplied for the CARDINALITY parameter is only an estimate. It is provided to help Db2 optimize statements using the table function. It is possible to return more or fewer rows than is specified in CARDINALITY.

Sourced Functions 

Sourced functions are created from functions that already exist. A sourced function can be based on any existing function, whether built-in (scalar and column) or user-defined (scalar). The biggest reason that sourced functions are created is to enable functions for user-defined distinct data types. This is required because Db2 implements strong typing. Here is an example of creating a sourced UDF:

 CREATE FUNCTION FINDWORD
    (DOCUMENT, VARCHAR(50))
  RETURNS INTEGER
  SPECIFIC FINDWORDDOC
  SOURCE SPECIFIC FINDWORDCLOB;

In this example a new function, FINDWORD, is created from an existing function FINDWORDCLOB. The function finds the location of the specified word expressed as a VARCHAR(50) value in the supplied DOCUMENT. The function returns an INTEGER indicating the location of the word in the DOCUMENT. DOCUMENT is a user-defined data type based on a CLOB data type.
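
For completeness, here is a sketch of how the strong typing plays out, using hypothetical names for the distinct type and table (illustrative only):

    -- Distinct type and a table that uses it
    CREATE DISTINCT TYPE DOCUMENT AS CLOB(1M);

    CREATE TABLE DOC_LIBRARY
      (DOC_ID   INTEGER NOT NULL,
       DOC_TEXT DOCUMENT);

    -- The sourced function accepts the DOCUMENT type directly,
    -- so no explicit casting back to CLOB is required
    SELECT DOC_ID
    FROM   DOC_LIBRARY
    WHERE  FINDWORD(DOC_TEXT, 'compliance') > 0;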

Summary

UDFs provide powerful functionality that can be used to customize Db2 SQL and standardize your operational functions and algorithms within your DBMS environment. 

Tuesday, February 01, 2022

Auditing Database Access and Change: A Necessity More Than a Nicety

The increasing burden of complying with government and industry regulations imposes significant, time-consuming requirements on IT projects and applications. And nowhere is the pressure to comply with regulations greater than on data stored in corporate databases.

Organizations must be hyper-vigilant as they implement controls to protect and monitor their data. One of the more useful techniques to protect your company’s database data is data access auditing, also known simply as database auditing. Database auditing is the process of monitoring access to, and modification of, selected database objects and resources within operational databases and retaining a detailed record of the access, where that record can be retrieved and analyzed as needed.

A data access auditing capability enables companies to produce an audit trail of information with regard to their database data. This audit trail should contain information such as what database objects were impacted, who performed the operations, and when the activity occurred. A comprehensive audit trail of database operations, coupled with an analysis engine to review and analyze the audit trail, allows data and security professionals, as well as IT auditors, to perform an in-depth analysis of access and modification patterns against data in your database systems. Only when armed with such details is it possible to comply with regulations, pass security audits, and drill down into the details to review potential vulnerabilities for effective issue resolution.

A Look at the Regulations and Requirements

A fine-grained audit trail is necessary to comply with many regulations that apply to organizations of all types.

Many of the PCI Data Security Standard requirements emphasize the importance of real-time monitoring and tracking of access to cardholder data, as well as continuous assessment of database security health status.

HIPAA, the Health Insurance Portability and Accountability Act, directs health care providers to protect individuals’ health care information, going so far as to state that the provider must be able to deliver a list of everyone who even so much as looked at a patient’s information. Could you produce a list of everyone who looked at a specific row or set of rows in any database you manage?

And then there is the Sarbanes-Oxley Act (SOX), which has the goal of reducing fraud and conflicts of interest, as well as improving disclosure and financial reporting. Section 404 of the SOX Act specifies that the CFO must guarantee the accuracy of the processes used to add up the numbers; processes that access and manipulate data in a database system. As such, it is important to be able to track who changed database schemata and database data for SOX compliance.

And these are only a few of the pertinent national, international, regional, and industry regulations that must be understood and complied with.

Database Access Auditing Techniques

So now that we understand why database access auditing is important, let’s take a look at how it can be accomplished. There are several popular techniques that can be deployed to audit your database structures.

The first technique is trace-based auditing, which is typically built directly into the native capabilities of the DBMS. An example is the audit trace feature of IBM Db2 for z/OS. When an audit trace is started, the DBMS begins to cut trace records when activity occurs against audited objects (selected by DDL option). However, Db2 only captures the first read or write per unit of work, which will clearly miss activities as most UOWs encompass more than one read or write. Alternatively, Db2 audit policies can be created for named tables to capture all activity, which improves the data captured, but can create an excess of audit records that need to be stored in SMF data sets.
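
To illustrate the native approach (a sketch only; the object name is made up and the trace class numbers should be verified for your Db2 version):

    -- The DDL option in question is the AUDIT clause on the table
    ALTER TABLE PROD.CUSTOMER AUDIT ALL;

    -- The audit trace itself is started with a Db2 command; audit
    -- classes 4 and 5 cover the first change and first read of
    -- audited objects per unit of work
    -START TRACE(AUDIT) CLASS(4,5) DEST(SMF)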

So, there are problems with this technique, including a high potential for performance degradation when audit tracing is enabled, a high probability that the database schema will need to be modified, and insufficient granularity of audit control, especially for reads.

Another technique is to scan and parse transaction logs. Every DBMS uses transaction logs to capture every database modification for recovery purposes. If you can read the log and interpret the data (which can be challenging as the data is not simple) it is possible to identify what data was changed and by which users. The biggest drawback to this technique is that database reads are not captured on transaction logs.

Additional issues with relying on log analysis for auditing data access include the following: logging can be disabled, such that modification information is never written to the log and therefore cannot be captured; the performance cost of scanning volumes and volumes of log files looking for only the specific information to audit; and the difficulty of retaining logs over long periods for auditing when they were designed for short-term retention for database recovery.

And that brings us to the third, and preferred, method of database auditing for organizations that are serious about regulatory compliance: professional software that proactively monitors and intercepts all SQL requests as they are executed by the DBMS. It is important that all SQL access is audited by monitoring for SQL at the database level, not just by sniffing network calls. This is important because not every SQL request goes over the network, especially for the mainframe platform where much of the activity is centralized and many important business transactions never venture over an IP network (e.g., a CICS or IMS transaction accessing Db2).

Proactive intercept-based database audit monitoring does not require transaction logs, does not require database schema modification, should be highly granular in terms of specifying what to audit, and should incur only minimal overhead.

One such product that implements intercept-based auditing for Db2 database access is DBARS, which stands for “Db2 Access Recording Services,” available from ESAI Group.

DBARS

Using DBARS for your Db2 database auditing needs makes sense as it offers a high-speed method for intercepting all Db2 database activities, not just modifications but also all reads. Furthermore, DBARS does not rely on Db2 tracing; instead, it uses a proprietary interface to intercept all Db2 SQL statements, regardless of origin. Therefore, you can use DBARS to create audit reports of all Db2 activity. Additionally, DBARS provides the ability to block suspicious SQL activity, giving you the ability to prevent fraudulent access attempts based on specific parameters and patterns.

As you look into your mainframe Db2 database auditing needs and compare functionality against the advice proffered in this article, you will see that DBARS offers all of the functions needed for auditing access to sensitive data in Db2 tables.

Important Features for a Database Auditing Solution

As you investigate the database access auditing requirements for your organization, you should make sure that the solutions you examine support your DBMS using intercept-based auditing, instead of the other methods.

You should also compile a list of the types of questions that you want your solution to be able to answer. A good database access auditing solution should be able to provide answers to at least the following questions:

  • Who accessed the data?
  • At what date and time was the data accessed?
  • What program or client software was used to access the data?
  • For batch mainframe users, what was the z/OS job name?
  • From what location was the request issued?
  • For distributed Db2 access, what were the names of the external server, application, and workstation?
  • What SQL was issued to access the data?
  • Was the request successful; and if so, how many rows of data were accessed or modified?
  • If the request was a modification, what data was changed? (A before and after image of the change should be accessible.)

Of course, there are numerous details that must be investigated for each of these questions. You will want to be able to review recent activities, but you will also want to be able to review actions that happened in the past, so a robust database access auditing solution should provide an independent mechanism for the long-term storage and access of audit details. It should be easy to query the audit trail, perhaps even offering canned queries for the most common types of queries. Nonetheless, the audit information should be accessible using industry-standard query tools to make it easier for auditors to customize queries as necessary.

An alerting capability is also desirable, such that when certain SQL activity is intercepted an alert is triggered to take further actions, such as recording an exception, sending information to a log, or pinging a DBA or security admin.

Advanced auditing solutions also provide the ability to proactively block suspect access to the database. For example, you may want to stop any attempted access outside of normal, scheduled programs over the weekend. At any rate, it is desirable for an auditing solution to be able to block activities based on parameters such as user name, program name, IP address, execution time, type of access, and the like. Such a capability is important because preventing fraudulent access is preferable to allowing it and reporting that it happened!

It is also important for a comprehensive database auditing solution to provide a mechanism to audit privileged users, such as DBAs and SYSADMs. Many privileged users have blanket access to all corporate data. Although they can access and modify it at their discretion, they should not be accessing and modifying production data without due cause. A database auditing solution enables organizations to implement a “trust, but verify” policy with their privileged users. This allows the administrators to retain the authority they need to be able to do their jobs, while at the same time giving the organization the peace of mind that everything the privileged users are doing is tracked for security and compliance purposes. Without a database auditing solution in place, privileged users are a potential compliance problem lurking within every database implementation.

The Benefits of a Professional Database Auditing Solution

The bottom line is that database auditing should be a crucial component of your organization’s data protection strategy. Auditing database activity is a core requirement of compliance with many government and industry regulations, but auditing is also an essential component of securing and protecting the important production data in your database systems.

Be sure to study the auditing and compliance requirements of your organization and to augment your DBMS with the appropriate tools to bolster the auditability of your databases.