
Monday, March 20, 2023

Harnessing the Power of zIIP Processors for Improved Db2 Performance and Lower Cost

As a Db2 DBA, you're constantly looking for ways to improve performance and efficiency while minimizing costs. One technology that can help achieve these goals is the zIIP (IBM System z Integrated Information Processor) processor. By offloading eligible Db2 workloads to zIIP processors, you can free up capacity on general-purpose processors and reduce costs, while improving performance.

So, what workloads are eligible for offloading to zIIP processors? XML processing, as well as portions of the Db2 LOAD, REORG, RUNSTATS, and REBUILD INDEX utilities, are among the most common. If you use third-party utilities (from vendors such as BMC, Broadcom, or InfoTel), it is likely that they, too, are zIIP-eligible, at least for some of their functionality.

Shifting workload to distributed applications through DDF is another good way to exploit zIIPs, because a significant portion of the processing for SQL requests that arrive over DDF (via TCP/IP) is zIIP-eligible. Most of the time, though, DBAs have little influence over moving workload to distributed processing; that choice is typically driven by application development plans rather than DBA tuning tactics.
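
As an example of the distributed case, any Java application that connects to Db2 for z/OS over TCP/IP using the IBM JCC type 4 driver flows through DDF, so a portion of its SQL processing is zIIP-eligible. Here is a minimal sketch; the host, port, location, credentials, and the EMP sample table are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class DdfQuery {
        public static void main(String[] args) throws Exception {
            // A type 4 JCC URL connects over TCP/IP to DDF, so the server-side
            // SQL processing is partially zIIP-eligible. Host, port, location,
            // and credentials below are placeholders.
            String url = "jdbc:db2://myhost.example.com:446/DB2LOC1";
            try (Connection con = DriverManager.getConnection(url, "myuser", "mypasswd");
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT EMPNO, LASTNAME FROM EMP WHERE WORKDEPT = ?")) {
                ps.setString(1, "D11");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("EMPNO") + " " + rs.getString("LASTNAME"));
                    }
                }
            }
        }
    }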

Nevertheless, by understanding what type of workload is zIIP-eligible and encouraging such usage, you can offload workload to zIIP processors. Moving workload from general-purpose processors to zIIPs can possibly improve system performance and reduce costs.

You might also want to take a look at converting some of your COBOL workload to Java, if at all possible, because Java programs are zIIP-eligible. Of course, this requires application developers to get involved, as well as (possibly) a conversion tool.

To fully harness the power of zIIP processors, it's important to identify eligible workloads and configure the system accordingly. Here are some tips to help you get started:

  • Configure Db2 for zIIP offload: Configure Db2 to take advantage of zIIP processors by setting the appropriate parameters and options. Consult the IBM Db2 documentation for specific guidance on configuring zIIP offload.

  • Monitor and analyze performance: Use Db2 performance monitoring tools to track the performance of zIIP-offloaded workloads and identify areas for further optimization. This can help you continually improve performance and efficiency over time.

By effectively utilizing zIIP processors for Db2 workloads, you can achieve significant cost savings and performance improvements on IBM Z mainframe systems. Don't let this powerful technology go to waste – start exploring the benefits of zIIP processors today!

Thursday, September 17, 2020

Convert Your COBOL Db2 Programs to Java Without Rebinding

As most Db2 developers and DBAs know, when you modify a Db2 program you have to prepare the program to enable it to be executed. This program preparation process requires running a series of code preprocessors that, when enacted in the proper sequence, create an executable load module and a Db2 application package. Both the executable load module and the application package are required before any Db2 program can be run, whether batch or online.

But it is not our intent here to walk through and explain all of the steps and nuances involved in Db2 program preparation. Instead, we are taking a look at the impact of converting COBOL programs to Java programs, particularly when it comes to the need to bind as a part of the process.

We all know that issuing the BIND command causes Db2 to formulate access paths for SQL. If enough things (statistics, memory, buffers, etc.) have changed, then access paths can change whenever you BIND or REBIND. And this can be troublesome to manage.

But if the SQL does not change, then it is not technically necessary to bind to create a new package. You can prevent unnecessary BIND operations by comparing the new DBRM from the pre-compile with the previous version. Of course, there is no native capability in Db2 or the BIND command to compare the DBRM. That is why there are third-party tools on the market that can be used for this purpose.
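
To illustrate the general idea (and not any particular vendor's implementation), such a comparison amounts to extracting the SQL statement text from each DBRM while ignoring volatile fields such as the consistency token and precompile timestamp. A hypothetical sketch in Java, where extractSqlStatements stands in for the DBRM-parsing logic a real tool would provide:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    public class DbrmCompare {
        // Hypothetical helper: pull the SQL statement text out of a DBRM image,
        // skipping volatile fields (consistency token, precompile timestamp).
        // A real tool implements this with knowledge of the DBRM record layout.
        static List<String> extractSqlStatements(byte[] dbrm) {
            throw new UnsupportedOperationException("stand-in for real DBRM parsing");
        }

        public static void main(String[] args) throws Exception {
            byte[] oldDbrm = Files.readAllBytes(Paths.get(args[0]));
            byte[] newDbrm = Files.readAllBytes(Paths.get(args[1]));
            // If the SQL text is identical, a BIND would produce the same
            // package, so the BIND step can safely be skipped.
            boolean sameSql = extractSqlStatements(oldDbrm)
                    .equals(extractSqlStatements(newDbrm));
            System.out.println(sameSql ? "SQL unchanged: skip the BIND"
                                       : "SQL changed: BIND required");
        }
    }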

But again, it is not the purpose of today’s post to discuss such tools. Instead, the topic is converting COBOL to Java. I have discussed this previously in the blog in the post Consider Cross-Compiling COBOL to Java to Reduce Costs, so you might want to take a moment to read through that post to acquaint yourself with the general topic.

Converting COBOL to Java and BIND

So, let’s consider a COBOL program with Db2 SQL statements in it. Most COBOL uses static SQL, meaning that the access paths are determined at bind time, not at execution time. If we convert that COBOL program to Java, we are not changing the SQL, just the code around it. Since the SQL does not change, a bind should not be required, at least in theory, right?

Well, we first need a quick discussion about the types of Java programs. You can use either JDBC or SQLJ for accessing Db2 data from a Java program. With JDBC the program uses dynamic SQL, whereas SQLJ delivers static SQL. The Db2 BIND command can be issued using either a DBRM (precompiler output) or a customized SQLJ profile.

So, part of the equation to avoid binding is to utilize SQLJ for converted COBOL programs.
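
To make the distinction concrete, here is the same lookup in both styles; the table and column names come from the Db2 sample database. The SQLJ clause is shown in a comment because SQLJ source is not plain Java: it must be run through the sqlj translator, and its profile customized and bound, before it can execute.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class StaticVsDynamic {
        // JDBC: dynamic SQL. The access path is determined when the
        // statement is prepared/executed, not at bind time.
        static String lastNameViaJdbc(Connection con, String empno) throws Exception {
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT LASTNAME FROM EMP WHERE EMPNO = ?")) {
                ps.setString(1, empno);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }

        // SQLJ: static SQL. The sqlj translator converts a clause like the
        // one below, and the resulting profile is customized and bound into
        // a package, fixing the access path at bind time:
        //
        //   #sql { SELECT LASTNAME INTO :lastName
        //          FROM EMP WHERE EMPNO = :empno };
    }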

CloudFrame, the company and product discussed in the referenced blog post above, can be used to convert COBOL programs into modular Java. And it uses SQLJ for the Db2 access. As such, with embedded SQLJ, static SQL will be used and the access paths will be determined at bind time instead of execution time.

But remember, we converted business logic, not SQL. The same SQL statements that were used in the COBOL program can be used in the converted Java. CloudFrame takes advantage of this and re-purposes the existing package from the previous COBOL program for the new Java SQLJ. CloudFrame automates the entire process as part of the conversion from COBOL to Java. This means that the static SQL from the COBOL program is converted and customized into SQLJ in Java. This built-in capability of CloudFrame allows you to simply reuse the same package information that was already generated and bound earlier.

This means no bind is required when you use CloudFrame to convert your Db2 COBOL applications to Java… and no access paths will change. And that is a good thing, right? Conversion and migration are already time-consuming processes; eliminating performance problems due to changing access paths means one less issue to worry about during a COBOL to Java conversion when you use CloudFrame.

Tuesday, August 18, 2020

Navigating the IBM COBOL 4.2 End of Service Waters: Chart a course to benefit your business

 

Surprisingly, COBOL has been in the news a lot recently due to its significant usage in many federal government and state systems, most notably the state unemployment systems. With the global COVID-19 pandemic, those unemployment systems were stressed like never before, with a 1600% increase in traffic (Government Computer News, May 12, 2020) as those impacted by the pandemic filed claims.

Nevertheless, there is another impending event that will likely pull COBOL back into the news as IBM withdraws older versions of the COBOL compiler from service. All IBM product versions go through a lifecycle that starts with GA (general availability), after some time moves to EOM (end of marketing) where IBM no longer sells that version, and ends with EOS (end of support) where IBM no longer supports that product or version. It is at this point that most customers will need to decide to stop using that product or upgrade to a newer version because IBM will no longer fix or support EOS products or versions.

Of course, code that was compiled using an unsupported COBOL compiler will continue to run, but it is not wise to use unsupported software for important, mission-critical software, such as is usually written using COBOL. And you need to be aware of interoperability issues if you rely on more than one version of the COBOL compiler.

So what is going on in the world of COBOL that will require your attention? First of all, earlier this year on April 30, 2020, IBM withdrew support for Enterprise COBOL 5.1 and 5.2. And Enterprise COBOL 4.2 will be withdrawn from service on April 30, 2022 – just about two years from now.

So now is the time for your organization to think about its migration strategy.

Why is COBOL still being used?

Sometimes people who do not work in a mainframe environment are surprised that COBOL is still being used. But it is, and it is not just a fringe language. COBOL is a language that was designed for business data processing, and it is extremely well-suited for that purpose. It provides features for manipulating data and printing reports that are common business requirements. COBOL was purposely designed for applications that perform transaction processing like payroll, banking, airline booking, etc. You put data in, process that data, and send results out.

COBOL was invented in 1959, so its history stretches back over 60 years; a lot of time for organizations to build complex applications to support their business. And IBM has delivered new capabilities and features over the years that enable organizations to keep up to date as they maintain their application portfolio.

So, COBOL is in wide use across many industries.

A majority of global financial transactions are processed using COBOL, including processing 85 percent of the world’s ATM swipes. According to Reuters, almost 3 trillion dollars in DAILY commerce flows through COBOL systems!

The reality is that more than 30 billion COBOL transactions run every day. And there are more than 220 billion lines of COBOL in use today. COBOL is not dead…

What’s new in COBOL 6

With COBOL 5.1 and 5.2 already out of support, and COBOL 4.2 soon to follow, one migration path is to Enterprise COBOL 6, and IBM has already delivered three releases of it: 6.1, 6.2, and 6.3. There are some nice new features that are in the latest version(s) of IBM Enterprise COBOL, including:

  • Compile and runtime support delivering performance improvements for z15 hardware and z/OS 2.4 operating system
  • Increased compiler capacity making it possible to compile and optimize larger programs (6.1)
  • 64-bit (AMODE 64) support in this compiler enables users to process large data tables that require greater than 2 GB of addressing space (6.3)
  • JSON support (6.1) including JSON PARSE statement (6.2)
  • Support for many new features from the COBOL 2002/2014 programming standards including new statements like ALLOCATE, FREE and INITIALIZE; addition of Dynamic Length elementary items; conditional compilation using the DEFINE compiler option, and more
  • Many new compiler options
  • Improved usability with USS

At the same time, there are concerns that need to be considered if and when you migrate to version 6. One example is that the new compiler will take longer to compile programs than earlier versions – from 5 to 12 times longer depending on the optimization level. There are also additional work data sets required and additional memory considerations that need to be addressed to ensure the compiler works properly. As much as 20 times more memory may be needed to compile than with earlier versions of the compiler.

Some additional compatibility issues to keep in mind are that your executables are required to be stored in PDSE data sets and that COBOL 6 programs cannot call or be called by OS/VS COBOL programs.

And of course, one of the biggest issues when migrating from COBOL 4.2 to a new version of COBOL is the possibility of invalid data – even if you have not changed your data or your program (other than re-compiling in COBOL 6). This happens because the new code generator may optimize the code differently. That is to say, you can get different generated code sequences for the same COBOL source with COBOL 6 than with 4.2 and earlier versions of COBOL. While this can help minimize CPU usage (a good thing) it can cause invalid data to be processed differently, causing different behavior at runtime (a bad thing).

Whether you will experience invalid data processing issues depends on your specific data and how your programmers coded to access it. Some examples of processing that may cause invalid data issues include invalid data in numeric USAGE DISPLAY data items; parameter/argument size mismatches; using TRUNC() with binary data values having more digits than they are defined for in working storage; and data items that are used before they have been assigned a value.

Migration considerations

Keep in mind that migration will be a lengthy process for any medium-to-large organization, mostly due to testing application behavior after compilation, and comparing it to pre-compilation behavior. You need to develop a plan that best suits your organization’s requirements and work to implement it in the roughly 2-year timeframe before IBM Enterprise COBOL 4.2 goes out of support.

Things to consider:

  • Gartner research shows that “huge ‘all-or-nothing’ modernization programs often fail to meet expectations”
  • What is your current state? Which COBOL compilers are you using and what is your end goal (6.1, 6.2, 6.3)?
  • Remember that compiled programs will continue to run, so it may not be imperative to re-compile everything prior to the end of support date. Of course, it can be difficult to keep track of what has been converted and what has not if you do not have a plan moving forward other than “convert when the program has to be changed at some point.” And it can become difficult to keep track of all the requirements and incompatibilities for multiple versions of COBOL if you do not plan for, and eventually convert to, a newer compiler version.
  • Do you have the COBOL talent and knowledge not only to convert but to continue supporting your existing portfolio of COBOL applications?
  • Enterprise application portfolios can be quite large, making it difficult to effectively discover and map all of the dependencies. Consider using tools to help. 

Migration challenges and options to consider

As you put your plan together, you might consider converting some of your COBOL applications to Java. An impending event such as the end of support for a compiler is a prime opportunity for doing so. But why might you want to convert your COBOL programs to Java?

Well, it can be difficult to obtain and keep skilled COBOL programmers. As COBOL coders age and retire, there are fewer and fewer programmers with the needed skills to manage and maintain all of the COBOL programs out there. At the same time, there are many skilled Java programmers available on the market, and universities are churning out more every year.

Additionally, Java code is portable, so if you ever want to move it to another platform it is much easier to do that with Java than with COBOL. Furthermore, it is easier to adopt cloud technologies and gain the benefits of elastic compute with Java programs.

Cost reduction can be another valid reason to consider converting from COBOL to Java. Java programs can be run on zIIP processors, which can reduce the cost of running your applications. A workload that runs on zIIPs is not subject to IBM (and most ISV) licensing charges... and, as every mainframe shop knows, the cost of software rises as capacity on the mainframe rises. But if capacity can be redirected to a zIIP processor, then software license charges do not accrue - at least for that workload.

Additional benefits of zIIPs include:

  • They are significantly cheaper to acquire than standard CPs
  • When workload is redirected to a zIIP it frees up capacity on the standard CP

So, there are many reasons to consider converting at least some of your COBOL programs to Java. Some may be worried about Java performance, but Java performance is similar to COBOL these days; in other words, most of the performance issues of the past have been resolved. Furthermore, there are many tools to help you develop, manage, and test your Java code, both on the mainframe and other platforms.

Keeping in mind the concerns about “all-or-nothing” conversions, most organizations will be working toward a mix of COBOL migrations and Java conversions, with a mix of COBOL and Java being the end result. As you plan for this, be sure to analyze and select appropriate candidate programs and applications for conversion to Java. There are tools that can analyze program functionality to assist you in choosing the best candidates. For example, you may want to avoid converting programs that frequently call other COBOL programs, as well as programs that use pre-relational DBMS technologies (such as IDMS and IMS).

How to convert COBOL to Java

At this point, you may be thinking, “Sure, I can see the merit in converting some of my programs to Java, but how can I do that? I don’t have the time for my developers to re-create COBOL programs in Java going line-by-line!” Of course, you don’t!

This is where an automated tool comes in handy. The CloudFrame Migration Suite provides code conversion tools, automation, and DevOps integration to deliver very maintainable, object-oriented Java that can integrate with modern technology available within your open architecture.  It can be used to refactor COBOL source code to Java without changing data, schedulers, and other infrastructure components. It is fully automated and seamlessly integrates with the change management systems you already use on the mainframe.

The Java code generated by CloudFrame will operate the same as your COBOL and produce the same output. There are even options you can use to maintain the COBOL 4.2 treatment of data, thereby avoiding the invalid data issues that can occur when you migrate to COBOL 6. This can help to reduce project testing and remediation time.

It is also possible to use CloudFrame to refactor your COBOL programs to Java but keep maintaining the code in COBOL. Such an approach, as described in this blog post (Consider Cross-Compiling COBOL to Java to Reduce Costs), can allow you to keep using your COBOL programmers for maintenance but to gain the zIIP eligibility of Java when you run the code.

Upcoming Webinar

To learn more about COBOL migration, modernization considerations, and how CloudFrame can help you to achieve your modernization goals, be sure to attend CloudFrame’s upcoming webinar, where I will be participating on a panel along with Venkat Pillay (CEO and founder of CloudFrame) and Dale Vecchio (industry analyst and former Gartner research VP). The webinar, titled Navigating the COBOL 4.2 End of Support (EOS) Waters: An expert panel discusses the best course of action to benefit your business, will be held on September 23, 2020, at 11:00 AM Eastern time. Be sure to register and attend!

Summary

Users of IBM Enterprise COBOL 4.2 need to be aware of the imminent end of service date in April 2022 and make appropriate plans for migrating off of the older compiler.

This can be a great opportunity to consider what should remain COBOL and where the opportunities to modernize to Java are.  Learn how CloudFrame can help you navigate that journey.

Tuesday, June 30, 2020

Consider Cross-Compiling COBOL to Java to Reduce Costs


Most organizations that rely on the mainframe for their mission-critical workload have a considerable amount of COBOL programs. COBOL was one of the first business-oriented programming languages, having been introduced in 1959. Designed for business and available when the IBM System/360 became popular, COBOL is ubiquitous in most mainframe shops.

Organizations that rely on COBOL need to make sure that they continue to support and manage these applications or risk interruptions to their business, such as those experienced by the COBOL applications that run the state unemployment systems when the COVID-19 pandemic caused a spike in unemployment applications.

Although COBOL continues to work -- and work well -- for many application needs, there are on-going challenges that will arise for organizations using COBOL. One issue is the lack of skilled COBOL programmers. The average age of a COBOL programmer is in the mid-50s, which means many are close to retirement. What happens when all of these programmers retire?

Another issue is cost containment. As business improves and workloads increase, your monthly mainframe software bill is likely increasing. IBM continues to release new pricing models that can help, such as Tailored Fit Pricing, but it is not easy to understand all of the different pricing models, nor is it quick or simple to switch, at least if you want to understand what you are switching to.

And you can’t really consider reducing cost without also managing to maintain your existing performance requirements. Sure, we all want to pay less, but we need to maintain our existing service level agreements and meet our daily batch window deadline.

Cross-Compiling COBOL to Java

Which brings me to the main point of today’s blog post. Have you considered cross-compiling your COBOL applications to Java? Doing so can help to address some of the issues we just discussed, as well as being a starting point toward your application modernization efforts.


What do I mean by cross-compiling COBOL to Java? Well, the general idea is to refactor the COBOL into high-quality Java using CloudFrame™. CloudFrame is both the company and the product, and it is used to migrate business logic in COBOL into modular Java. This refactoring of the code changes the program structure from COBOL to object-oriented Java without changing its external behavior.

After refactoring, there are no platform dependencies, which allows the converted Java workloads to run on any platform while not requiring changes to legacy data, batch schedulers, CICS triggers or Db2 stored procedures.

I can already hear some of you out there saying “wait-a-minute… do you really want me to convert all of my COBOL to Java?” You can, but I’m not really suggesting that you convert it all and leave COBOL behind… at least not immediately.

But first, let’s think about the benefits you can get when you refactor your COBOL into Java. Code that runs on a Java Virtual Machine (JVM) can run on zIIP processors. When programs run on the zIIP, the workload is not charged against the rolling four-hour average or the monthly capacity for your mainframe software bill. So, refactoring some of your COBOL to Java can help to lower your software bill.

Additionally, moving workload to zIIPs frees up your general-purpose processors to accommodate additional capacity. Many mainframe organizations are growing their workloads year after year, requiring them to upgrade their capacity. But if you can offload some of that work to the zIIP, not only can you use the general purpose capacity that is freed, but if you need to expand capacity you may be able to do it on zIIPs, which are less expensive to acquire than general purpose processors.

It's like CloudFrame is bringing cloud economics to the mainframe.

COBOL and Java

CloudFrame refactors batch COBOL workloads to Java without changing data, schedulers, and other infrastructure (e.g., MQ). CloudFrame is fully automated and seamlessly integrated with the change management systems you use on the mainframe. This means that your existing COBOL programmers can maintain the programs in COBOL while running the actual workloads in Java.

Yes, it is possible to use CloudFrame to refactor the COBOL to Java and then maintain and run Java only. But it is also possible to continue using your existing programmers to maintain the code in COBOL, and then use CloudFrame to refactor to Java and run the Java. This enables you to keep your existing developers while you embrace modernization in a manageable, progressive way that increases the frequency of tangible business deliverables at a lower risk.

An important consideration for such an approach is the backward compatibility that you can maintain. CloudFrame provides binary-compatible integration with your existing data sources (QSAM, flat files, VSAM, Db2), subsystems, and job schedulers. By maintaining COBOL and cross-compiling to Java, you keep your COBOL until you are ready to shift to Java. At any time, you can quickly fall back to your COBOL load module with no data changes. The Java data is identical to the COBOL data, except for date and timestamp fields.

With this progressive transformation approach, your migration team is in complete control of the granularity and velocity of the migration. It reduces the business risk of an all-or-nothing, shift-and-lift approach because you convert at your pace without completely eliminating the COBOL code.

Performance is always a consideration with conversions like this, but you can achieve similar performance, and sometimes even better performance, as long as you understand your code and refactor wisely. Of course, you are not going to convert all of your COBOL code to Java, but only those applications that make sense. By considering the cost savings that can be achieved and the type of programs involved, cross-compiling to Java using CloudFrame can be an effective, reasonable, and cost-saving approach to application modernization.

Check out their website at www.cloudframe.com or request more information.

Tuesday, March 10, 2020

A Guide to Db2 Performance for Application Developers



DBAs: are you looking for a way to help train your developers to code more efficient Db2 application programs? 
Programmers: do you want to understand the best practices for writing high-performing Db2 applications?
Well, my latest book, A Guide to Db2 Performance for Application Developers, is just what you are looking for! Available in both printed and eBook formats, this is the book you need to assure that you are building effective, efficient Db2 applications.


This book will make you a better programmer by teaching you how to write efficient code to access Db2 databases. Whether you write applications on the mainframe or distributed systems, this book will teach you practices, methods, and techniques for optimizing your SQL and applications as you build them. Write efficient applications and become your DBA's favorite developer by learning the techniques outlined in this book!

The methods outlined in this book will help you improve the performance of your Db2 applications. The material is written for all Db2 professionals, whether you are coding on z/OS (the mainframe) or on Linux, Unix or Windows (distributed systems). When there are pertinent differences between the platforms, they are explained in the text.

The focus of the book is on programming, coding and developing applications. As such, it does not focus on DBA, design, and data modeling issues, nor does it cover most Db2 utilities, DDL, and other non-programming related details. If you are a DBA, the book should still be of interest to you because DBAs are responsible for overall Db2 performance. Therefore, it makes sense to understand the programming aspect of performance.

It is important also to understand that the book is not about performance monitoring and tuning. Although these activities are important, they are typically not the domain of application developers. Instead, the book offers guidance on application development procedures, techniques, and philosophies. The goal of the book is to educate developers on how to write "good" application code that lends itself to optimal performance. By following the principles in this book you will be able to write code that does not require significant remedial, after-the-fact modifications by performance analysts. If you follow the guidelines in this book your DBAs and performance analysts will love you!

The book assumes that the reader has some basic SQL knowledge, so it does not cover how to write Db2 SQL code or how to code a Db2 program. It is also important to point out that the book does not rehash material that is freely available in the Db2 manuals, which can be downloaded or read online.

What you will get from reading this book is a well-grounded basis for designing and developing efficient Db2 applications that perform well.

You can order your copy of A Guide to Db2 Performance for Application Developers today at:

Wednesday, May 12, 2010

IDUG Tampa 2010, Day One

As usual, the North American IDUG conference is proving to be a hectic, yet enjoyable and informative time. The days are packed from morning until evening with technical sessions, networking, and running from here to there and back again.

Tuesday was the first day for normal IDUG sessions (the day-long seminars were moved to Monday this year), and the day was dominated (for me at least) by DB2 10 sessions. The spotlight session by Jeff Josten was an information-packed 90-minute overview of DB2 10 that can only be described as drinking from a firehose. About 200 other curious attendees and I sat at attention as Jeff discussed the features that back up the themes of Version 10: efficiency, resilience, and growing new workloads on DB2 for z/OS.

Jeff didn’t share a GA date for the new version, nor would anyone else from IBM this week, but it has been strongly hinted that it could be before the end of the year (2010).

The biggest “thing” being touted by IBM about DB2 10 is the performance gain it delivers right out-of-the-box. Jeff described IBM’s performance objective as historically being to deliver less than a 5% performance regression from release to release. But things have perked up recently. For DB2 9, most customers reported no regression, or even a gain, out of the box. And the new goal is no longer containing regression, but delivering gains. For DB2 10, the expectation is that many customers will reduce CPU time by 10% to 20% right out-of-the-box.

Jeff indicated that in IBM’s labs, the out-of-the-box CPU reduction for traditional workloads ranges from 5% to 10%, and for newer workloads (e.g., TCP/IP, stored procedures) the improvement is as much as 20%. And when you start using new functionality, you can reasonably expect to see up to 10% CPU reduction. Of course, Jeff was careful to note that these are pre-GA numbers, so things could change, even though there is no expectation that they will.

Additionally, there is a lot of focus on scalability in DB2 10. Shops can expect to support 5x to 10x more concurrent users, up to 20,000 per subsystem. This is possible due to virtual storage relief: threads have been moved above the bar.

Jeff went on to cover a lot of additional new functionality to be delivered with DB2 10, including parallel index update during INSERT (which should speed up inserts against tables with multiple indexes), DB2’s use of 1 MB page sizes (on z/OS) in buffer pools, multiple SQL access path and performance improvements, efficient caching of dynamic SQL with literals, LOB streaming between DDF and the rest of DB2, workfile spanned records (PBG), INSERT improvements for UTS, solid state disk monitoring and exploitation, temporal data support, timestamp data type improvements, and more.

Hash support is particularly interesting. With hashing you can get direct access to data with a single getpage instead of the multi-getpage approach of b-tree indexing. The targeted use case for hashes is the lookup of a row based upon its primary key. The hashing algorithm is stored in the DB2 engine. Never fear, though, because you can still define additional indexes on hashed tables and the optimizer will understand and prefer hashed access when it is possible. (I hear the IMS DBAs out there laughing. DB2 DBAs are now going to need to understand space calculations for hash space and what collisions and overflow mean.)
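
For reference, hash organization is declared on the table itself. Here is a sketch of what the DDL looks like, issued through JDBC; the table, columns, and HASH SPACE sizing are illustrative only:

    import java.sql.Connection;
    import java.sql.Statement;

    public class CreateHashTable {
        static void createAccountTable(Connection con) throws Exception {
            // DB2 10 hash organization: lookups by ACCT_ID become direct,
            // single-getpage accesses. Table name, columns, and the HASH
            // SPACE sizing below are illustrative.
            String ddl = "CREATE TABLE ACCOUNT"
                       + " (ACCT_ID CHAR(10) NOT NULL,"
                       + "  BALANCE DECIMAL(15,2),"
                       + "  PRIMARY KEY (ACCT_ID))"
                       + " ORGANIZE BY HASH UNIQUE (ACCT_ID) HASH SPACE 64 M";
            try (Statement stmt = con.createStatement()) {
                stmt.executeUpdate(ddl);
            }
        }
    }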

Next up was Roger Miller, who covered DB2 10 from a database administration perspective. He began his session by referencing the extra detail available in the DB2 10 webcast presentation that Roger delivered about a month ago, which is available on the web.

Roger stated that a lot of what is at the heart of DB2 10 is about making things easier for DBAs. And then, to prove his point, he talked for an hour about all of those things. Highlights included the reduced need for REORG, monitoring enhancements, hashing, and pureXML enhancements for usability, scalability, and performance.

A particularly interesting point made by Roger is that query parallelism these days is less about decreasing elapsed time and more about the ability to shuttle workload to a zIIP.

Roger also discussed the ability to skip V9 and go directly from V8 to V10. He cautioned, however, that folks who choose this path should not skip learning all about V9. For example, RUNSTATS changed in key ways in V9, so shops need to be careful to run RUNSTATS when moving to V10.

Roger also spoke about the significant changes to the DB2 Catalog and DB2 Directory in DB2 10. There are about 60 new table spaces, the links have been removed, inline LOBs are used in many places, and row level locking is used. These changes mean that online REORG works for everything in the catalog and the directory.

He also spoke about the various improvements to security administration in DB2 10. There is a new SECADM authority with no access to data, and there is also a new option for DBADM without data access. Another nice new option is DBADM authority for every database in the subsystem. And then there is the ability to REVOKE without cascading, something that DB2 security administrators have been requesting for years!

Changing pace, I attended Billy Sundarrajan’s presentation on “De-mystifying JDBC Universal Drivers – for the z/OS DBA.” The reality is that more and more dynamic SQL applications are being implemented, so knowing about JDBC drivers is a necessity, not a luxury, for the mainframe DBA.

Billy discussed the types of JDBC drivers and the installation issues involved. You can connect using either a type 2 or a type 4 driver. On z/OS, the type 2 driver connects directly to the local DB2 subsystem, while the type 4 driver is a pure-Java implementation that connects over TCP/IP through DDF, either directly or through a DB2 Connect gateway.
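
The difference shows up directly in the JDBC connection URL; here is a sketch with placeholder names:

    public class DriverUrls {
        // Type 2 on z/OS: a local attachment to the DB2 subsystem, so the
        // URL names only the DB2 location -- no host or port.
        static final String TYPE2_URL = "jdbc:db2:DB2LOC1";

        // Type 4: a pure-Java DRDA connection over TCP/IP into DDF (or a
        // DB2 Connect gateway). Host, port, and location are placeholders.
        static final String TYPE4_URL = "jdbc:db2://myhost.example.com:446/DB2LOC1";
    }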

He also discussed the benefits of setting end-user variables for monitoring and the different properties that can be used for configuration.
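
As a sketch of the end-user variable idea: the standard JDBC 4.0 client-info properties (which the IBM JCC driver supports and surfaces to DB2) let an application tag each connection so the DBA can tell end users apart in accounting and monitoring reports, even when everything runs under one generic auth ID. The property values below are placeholders:

    import java.sql.Connection;

    public class TagConnection {
        static void tagForMonitoring(Connection con, String endUser) throws Exception {
            // These values surface in DB2 accounting and monitoring records.
            // The names are standard JDBC 4.0 client-info properties; the
            // values are placeholders.
            con.setClientInfo("ClientUser", endUser);
            con.setClientInfo("ClientHostname", "webapp01.example.com");
            con.setClientInfo("ApplicationName", "PayrollInquiry");
        }
    }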

Of course, I attended a few other sessions and spent some time at the exhibit hall and caught up with some old friends and… well, this is long enough of a post for the first day… check back tomorrow for a shorter (I promise) synopsis of day two.