Friday, December 27, 2019

Planning Your Db2 Performance Monitoring Strategy


The first part of any Db2 performance management strategy should be to provide a comprehensive approach to the monitoring of the Db2 subsystems operating at your shop. This approach involves monitoring not only the threads accessing Db2 and the SQL they issue, but also the Db2 address spaces. 

There are three aspects that must be addressed in order to accomplish this task:
  • Batch reports run against Db2 trace records. While Db2 is running, you can activate traces that accumulate information, which can be used to monitor both the performance of the Db2 subsystem and the applications being run; a sample trace-start command appears after this list. For more details on Db2 traces see my earlier 2-part blog post (part 1, part 2).
  • Online access to Db2 trace information and Db2 control blocks. This type of monitoring also can provide information on Db2 and its subordinate applications.
  • Sampling Db2 application programs as they run and analyzing which portions of the code use the most resources.
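
For reference, the trace-based reporting mentioned in the first bullet relies on the standard Db2 traces (statistics, accounting, performance, and so on) that are started with the -START TRACE command. Here is a minimal sketch; the classes and SMF destination shown are common choices, but the exact classes you start should be driven by what you need to monitor and how much overhead you can tolerate:

    -START TRACE(STAT)  CLASS(1,3,4,5) DEST(SMF)
    -START TRACE(ACCTG) CLASS(1,2,3)   DEST(SMF)

The statistics trace feeds subsystem-level reporting and the accounting trace feeds thread-level reporting; both are typically written to SMF and post-processed by your batch reporting tool of choice.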

There are many in-depth details that comprise the task of setting these three components up to efficiently and effectively monitor your Db2 activity. I go over these details in my book, DB2 Developer's Guide, so I direct interested parties there for the gory details.

But let's go over some performance monitoring basics. When you’re implementing a performance monitoring methodology, keep these basic caveats in mind:
  • Do not overdo monitoring and tracing. Db2 performance monitoring can consume a tremendous amount of resources. Sometimes the associated overhead is worthwhile because the monitoring (problem determination or exception notification) can help alleviate or avoid a problem. However, absorbing a large CPU overhead to monitor a Db2 subsystem that is already performing within the desired scope of acceptance might not be worthwhile.
  • Plan and implement two types of monitoring strategies at your shop:
  1. ongoing performance monitoring to ferret out exceptions, and
  2. procedures for monitoring exceptions after they have been observed.
  • Do not try to drive a nail with a bulldozer. Use the correct tool for the job, based on the type of problem you’re monitoring. You would be unwise to turn on a trace that causes 200% CPU overhead to solve a production problem that could be solved just as easily by other types of monitoring (e.g. using EXPLAIN or Db2 Catalog reports; a minimal EXPLAIN sketch follows this list).
  • Tuning should not consume your every waking moment. Establish your Db2 performance tuning goals in advance, and stop when they have been achieved. Too often, tuning goes beyond the point at which reasonable gains can be realized for the amount of effort exerted. (For example, if your goal is to achieve a five-second response time for a TSO application, stop when you have achieved that goal instead of tuning it further even if you can.)
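
As a quick illustration of the lower-overhead option mentioned above, EXPLAIN externalizes the chosen access path to a PLAN_TABLE that you can then query. A minimal sketch (the query number and table are illustrative, and the PLAN_TABLE is typically qualified by your authid):

    EXPLAIN PLAN SET QUERYNO = 100 FOR
      SELECT LASTNAME, SALARY
      FROM   EMP
      WHERE  WORKDEPT = 'D11';

    SELECT QUERYNO, QBLOCKNO, PLANNO, METHOD, ACCESSTYPE,
           MATCHCOLS, ACCESSNAME, PREFETCH
    FROM   PLAN_TABLE
    WHERE  QUERYNO = 100
    ORDER  BY QBLOCKNO, PLANNO;

Keep in mind that EXPLAIN shows only the access path Db2 chose; it does not measure actual resource consumption, which is where the trace-based reports come in.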

Tuning goals should be set using the discipline of service level management (SLM). A service level is a measure of operational behavior. SLM ensures applications behave accordingly by applying resources to those applications based on their importance to the organization. Depending on the needs of the organization, SLM can focus on availability, performance, or both. In terms of availability, the service level can be defined as “99.95% uptime, during the hours of 9:00 AM to 10:00 PM on weekdays.” Of course, a service level can be more specific, stating “average response time for transactions will be two seconds or less for workloads of 500 or fewer users.”

For a service level agreement (SLA) to be successful, all of the parties involved must agree upon stated objectives for availability and performance. The end-users must be satisfied with the performance of their applications, and the DBAs and technicians must be content with their ability to manage the system to the objectives. Compromise is essential to reach a useful SLA.

If you do not identify service levels for each transaction, then you will always be managing to an unidentified requirement. Without a predefined and agreed upon SLA, how will the DBA and the end-users know whether an application is performing adequately? Without SLAs, business users and DBAs might have different expectations, resulting in unsatisfied business executives and frustrated DBAs... Not a good situation.

Wednesday, December 18, 2019

High Level Db2 Indexing Advice for Large and Small Tables


In general, creating indexes to support your most frequent and important Db2 SQL queries is a good idea. But the size of the table will be a factor in deciding whether to index at all and/or how many indexes to create.

For tables of more than 100 (or so) pages, it usually is best to define at least one index. This gives Db2 guidance on how to cluster the data. And, for the most part, you should follow the general advice of having a primary key for every table... and that means at least one unique index to support the primary key.
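
As a minimal sketch of that advice (the table, database, table space, and index names here are hypothetical), you would declare the primary key and back it with a unique, clustering index:

    CREATE TABLE EMP
      (EMPNO     CHAR(6)      NOT NULL,
       LASTNAME  VARCHAR(30)  NOT NULL,
       WORKDEPT  CHAR(3),
       SALARY    DECIMAL(9,2),
       PRIMARY KEY (EMPNO))
      IN DB1.TS1;

    CREATE UNIQUE INDEX XEMP01
      ON EMP (EMPNO)
      CLUSTER;

The CLUSTER keyword is what gives Db2 the guidance on how to sequence the data in the table space.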

If the table is large (more than 20,000 pages or so), you need to perform a balancing act to limit the indexes to those absolutely necessary for performance. When a large table has multiple indexes, data modification performance can suffer. When large tables lack indexes, however, access efficiency will suffer. This fragile balance must be monitored closely. In most situations, more indexes are better than fewer indexes because most applications are query-intensive rather than update-intensive. However, each table and application will have its own characteristics and requirements.

For tables containing a small number of pages (up to 100 or so pages), consider limiting indexes to those required for uniqueness and perhaps those that support common join criteria. This is a reasonable approach because such a small number of pages can be scanned as efficiently as, or more efficiently than, an index can be used.

For small tables you can add indexes when the performance of queries that access the table suffers. Test the performance of the query after the index is created, though, to ensure that the index helps. When you index a small table, increased I/O (due to index accesses) may cause performance to suffer when compared to a complete scan of all the data in the table.
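
In practice, that testing step can be as simple as creating the candidate index and then re-checking the access path (and, ideally, measured elapsed and CPU time) for the query in question. A hedged sketch with hypothetical names:

    CREATE INDEX XCODE01
      ON CODE_TABLE (CODE_TYPE, CODE_VALUE);

    EXPLAIN PLAN SET QUERYNO = 200 FOR
      SELECT CODE_DESC
      FROM   CODE_TABLE
      WHERE  CODE_TYPE  = 'ST'
      AND    CODE_VALUE = 'TX';

If the PLAN_TABLE row still shows a table space scan (ACCESSTYPE = 'R'), or if measured performance is no better than scanning, the new index is likely adding overhead without adding value and can be dropped.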

Tuesday, December 03, 2019

A Guide to Db2 Application Performance for Developers: A Holiday Discount!

Regular readers of my blog know that I have written a couple of Db2 books, including DB2 Developer's Guide, which has been in print for over 20 years across 6 different editions. But you may not be aware that I recently wrote a new Db2 book, this time focusing on the things that application programmers and developers need to do to write programs that perform well from the very start. This new book is called A Guide to Db2 Application Performance for Developers.



You see, in my current role as an independent consultant that focuses on data management issues and involves a lot of work with Db2, I get to visit a lot of different organizations... and I get to see a lot of poorly performing programs and applications. So I thought: "Wouldn't it be great if there was a book I could recommend that would advise coders on how to ensure optimal performance in their code as they write their Db2 programs?" Well, now there is... 
A Guide to Db2 Application Performance for Developers.

This book is written for all Db2 professionals, covering both Db2 for LUW and Db2 for z/OS. When there are pertinent differences between the two it will be pointed out in the text. The book’s focus is on developing applications, not database and system administration. So it doesn’t cover the things you don’t do on a daily basis as an application coder. Instead, the book offers guidance on application development procedures, techniques, and philosophies for producing optimal code. The goal is to educate developers on how to write good application code that lends itself to optimal performance. 

By following the principles in this book you should be able to write code that does not require significant remedial, after-the-fact modifications by performance analysts. If you follow the guidelines in this book your DBAs and performance analysts will love you!

The book does not rehash material that is freely available in Db2 manuals that can be downloaded or read online. It is assumed that the reader has access to the Db2 manuals for their environment (Linux, Unix, Windows, z/OS).

The book is not a tutorial on SQL; it assumes that you have knowledge of how to code SQL statements and embed them in your applications. Instead, it offers advice on how to code your programs and SQL statements for performance.

What you will get from reading this book is a well-grounded basis for designing and developing efficient Db2 applications that perform well. 

OK, you may be saying, but what about that "Holiday Discount" you mention in the title? Well, I am offering a discount for anyone who buys the book before the end of the year (2019). There are different discounts and codes for the print and ebook versions of the book:


  • To receive a 5% discount on the print version of the book, use code 5poff when you order at this link.
  • To receive $5.00 off on the ebook version of the book, use code 5off when you order at this link.
These codes only work on the Bookbaby site. You can, of course, buy the book at other book stores, such as Amazon, at whatever price they are currently charging!


Happy holidays... and why not treat the programmer in your life to a copy of A Guide to Db2 Application Performance for Developers?  They'll surely thank you for it.



Wednesday, November 27, 2019

Happy Thanksgiving 2019

Just a quick post today to wish all of my readers in the US (and everywhere, really) a very Happy Thanksgiving.



Thanksgiving is a day we celebrate in the USA by spending time with family, eating well (traditionally turkey), and giving thanks for all that we have and hold dear.

Oh... and also for watching football!

May all of you reading this have a warm and happy Thanksgiving holiday surrounded by your family and loved ones.

Happy Thanksgiving!

Thursday, November 07, 2019

Db2 12 for z/OS Function Level 506

Late last month, October 2019, IBM introduced a new function level, FL506, for Db2 12 for z/OS.  There are two significant impacts of this new function level:
  • Support for alternative function names
  • Support for implicitly dropping explicitly created table spaces
The first impact, support for additional, alternative names for some existing Db2 built-in functions, was added to improve compatibility across the Db2 product line. It is basically just a new way to refer to existing functionality, an alternative syntax, if you will. The following chart outlines the existing names and the new FL506 alternative syntax.

Table 1. Alternative Syntax for Function Names in FL506

  Existing Function Name                    New Alternative Syntax Name
  --------------------------------------    ---------------------------
  CHARACTER_LENGTH                          CHAR_LENGTH
  COVARIANCE or COVAR                       COVAR_POP
  HASH_MD5 or HASH_SHA1 or HASH_SHA256      HASH
  POWER                                     POW
  RAND                                      RANDOM
  LEFT                                      STRLEFT
  POSSTR                                    STRPOS
  RIGHT                                     STRRIGHT
  CLOB                                      TO_CLOB
  TIMESTAMP_FORMAT                          TO_TIMESTAMP


Support for these alternative spellings of built-in function names should make it easier to support applications across multiple members of the Db2 family where support already exists for these spellings. Of course, you may run into issues if you used any of the new spellings in your existing applications, for example as variable names.
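
Once FL506 is activated and your program is bound with an appropriate APPLCOMPAT level, the two spellings can be used interchangeably. A quick sketch:

    -- Existing spellings
    SELECT POWER(2, 10), RAND(), CHARACTER_LENGTH('Db2', CODEUNITS32)
    FROM   SYSIBM.SYSDUMMY1;

    -- Equivalent FL506 alternative spellings
    SELECT POW(2, 10), RANDOM(), CHAR_LENGTH('Db2', CODEUNITS32)
    FROM   SYSIBM.SYSDUMMY1;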

The other significant feature of FL506 is support for implicitly dropping explicitly created universal table spaces when a DROP TABLE statement is executed. Prior to FL506, dropping a table that resides in an explicitly created table space did not drop the table space.

If you use vendor tools that manage and generate scripts for DDL changes, they need to be modified to support FL506. If not, they could produce -204 SQL codes when the generated DDL is executed if that DDL contains a DROP TABLESPACE statement: the table space will have already been implicitly dropped, and trying to drop a table space that does not exist throws an error. Be sure to discuss this with your tool vendors before migrating to FL506 to understand their support timelines or whether a workaround is available.
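
To illustrate the behavior change (object names are hypothetical), consider a script that drops a table and then its explicitly created table space:

    CREATE TABLESPACE TS1 IN DB1;
    CREATE TABLE T1 (COL1 INTEGER NOT NULL) IN DB1.TS1;

    DROP TABLE T1;            -- at FL506 the table space DB1.TS1 is implicitly dropped too
    DROP TABLESPACE DB1.TS1;  -- this now fails with SQLCODE -204 because TS1 no longer exists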

Summary

IBM is using function levels to deliver significant new capabilities for Db2 12 for z/OS. It is important for you and your organization to keep up-to-date on this new functionality and to determine where and when it makes sense to introduce it into your Db2 databases and applications.

Also, be aware that if you are not currently running at FL505, moving to FL506 activates all earlier function levels. You can find a list of all the current function levels here.
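
For reference, a function level is activated with the -ACTIVATE command, and the currently active level can be verified with -DISPLAY GROUP; a minimal sketch:

    -ACTIVATE FUNCTION LEVEL (V12R1M506)
    -DISPLAY GROUP DETAIL

Remember, too, that applications see the new behavior only when they are bound with an APPLCOMPAT value of V12R1M506 or higher.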




Thursday, October 31, 2019

Have You Considered Speaking at the IDUG Db2 Technical Conference? You should!

The 2020 North American Db2 Technical Conference is being held in Dallas, TX the week of June 7th. The call for papers is still open, and IDUG is looking for Db2 folks who want to share their experiences with Db2. You can talk about a project you worked on, an experience you had tuning or optimizing your Db2 databases and applications, your experience implementing a new version or function level, how your team uses any Db2 feature or capability, or, really, anything related to your experience with Db2.

Speaking at a user group is a good way to expand your contacts and develop additional personal interaction skills. And I have also found it to be a good way to increase my technical knowledge and skills. Sure, as the presenter you are sharing your knowledge with the audience, but it always seems like I expand my knowledge and way of thinking about things when I deliver a presentation. Either because of questions I receive, or because putting the presentation together made me stop and think about things in different ways.

And if you are accepted to speak, your attendance at the conference is complimentary!

Putting together an abstract is not that difficult at all. You just need to complete a bit of biographical information about yourself, select a category for your presentation, provide an overview of your topic, and offer up a bulleted list of 5 objectives. The site guides you through submitting all of these things at this link.

Speaking at a conference can be a very rewarding experience... and once you start doing it, you'll want to do it again and again. So go ahead. Click here and submit your abstract and I hope I'll see you in Dallas in June 2020!

Thursday, October 17, 2019

See You in Rotterdam... at the IDUG Db2 Tech Conference

Next week the 2019 IDUG EMEA Db2 Tech Conference is coming to Rotterdam and I am looking forward to being there. This year’s event is being held the week of October 20-24, 2019. I hope you’ve already made your plans to be there, but if you haven’t, there’s still time to get your manager’s approval, make travel plans, and be where all the Db2 folks will be at the end of October.

If you’ve ever attended an IDUG conference before then you know how much useful information you can learn at the event. IDUG offers phenomenal educational opportunities delivered by IBM developers, vendor experts, users, and consultants from all over the world. There will be a slew of informative technical sessions on all of the latest and greatest Db2 technologies and features. 

And let's not forget the exhibit hall (aka Solutions Center) where vendors present and demo their products that can help you manage Db2 more effectively. It is a good place to learn about new technology solutions for Db2, but also to hang out and meet with IBMers, consultants, and your peers.

If you have any doubts whether there will be something worthwhile for you there, just take a look at this packed agenda! One of the conference highlights is always the great keynote session. This year's will be delivered by Al Martin, IBM VP of Development for Db2 Z and Watson Tools. He will talk about business and strategy for data and AI, highlighting how data is the foundation for AI. Should be informative and entertaining!

What Am I Up to at IDUG?

As usual, I will be busy at the conference. I will be arriving early into Rotterdam so I can get over the jet lag and then participate in some pre-conference meetings on Sunday. 

There are a couple of opportunities for you to stop by and say "Howdy!" to me, and I hope you will take advantage of them. On Tuesday, at 3:20 PM, I will be delivering a vendor-sponsored presentation (or VSP) for Infotel, titled Improving Db2 Application Quality for Optimizing Performance and Controlling Costs. My portion of the presentation focuses on the impact of DevOps on the database; it will be followed by Colin Oakhill of Infotel-Insoft, who will talk about how their SQL quality assurance solutions can aid the DevOps process for Db2 development.

Additionally, on Tuesday evening I'll be spending some time in the Infotel booth. So if you have any questions we didn’t answer in the VSP, you can ask us there. Be sure to stop by and say hello, take a look at Infotel’s SQL quality assurance solutions for Db2 for z/OS, and register to win one of the two copies of my Db2 application performance book that will be raffled off. If you win, be sure to get me to sign your copy!

A Guide to Db2 Performance for Application Developers


Just Look at All That IDUG Has to Offer!

You can attend complimentary workshops and hands-on labs being held throughout the duration of the conference. These half-day and full-day sessions are packed full of useful information that you can take home and apply to your Db2 environment. And this year there are sessions on Db2 migration, Db2 and the cloud, problem determination, machine learning, Zowe, and more. So be sure to track down the workshops that you want to attend and register for them before they fill up!

If you are looking for Db2 certification then IDUG is the place to be! All IDUG attendees can receive two complimentary certification coupons to take any IBM certification exams (to be completed at your leisure, as long as they are used before June 30, 2020).

And don't miss the Expert Panels where IBMers and other subject matter experts answer your questions. There are three separate panels this year covering Db2 for z/OS, Db2 for LUW and Application Development.

Finally, be sure that you download the mobile app for the conference to help you navigate all the opportunities available to you! Armed with the mobile app you’ll get daily intel on what’s happening at the conference.

Justifying Your Attendance

Finally, if you need any help justifying your attendance at this year’s IDUG event, just use this justification letter as your template to create an iron-clad rationale for your boss.

The Bottom Line

The IDUG Db2 Tech Conference is the place to be to learn all about Db2 from IBMers, gold consultants, IBM champions, end users, ISVs, and more. With all of this great stuff going on this year in Rotterdam, why would you want to be any place else!?!?


Monday, October 14, 2019

Mainframe Modernization: The Why and How

If your organization uses a mainframe or you are interested in modern mainframe computing issues, be sure to register for and join me in my webinar for GT Software, titled Mainframe Modernization: The Why and How, on Tuesday, October 29, 2019, from 12:00 PM - 1:00 PM CDT.

 Mainframe Modernization webinar


This webinar will discuss the rich heritage of the mainframe and the value of the applications and systems that have been written over many decades. Organizations rely on these legacy systems, and the business knowledge built into these applications drives their businesses.
But an application created 20 or more years ago will not be as accessible to modern users as it should be. Digital transformation that enables users to access applications and data quickly is the norm, but this requires modernizing access to the rich data and processes on the mainframe.

This presentation will examine the value proposition of the mainframe and look at the trends driving its usage and capabilities. I will review IT infrastructure challenges, including changing technology, cloud adoption, legacy applications, and development trends, and look at tactics to achieve mainframe modernization amid complexity and change.
So if mainframes are your thing, or you just want to learn more about the state of the modern mainframe, be sure to sign up and attend!

Tuesday, September 17, 2019

IBM Unleashes the z15 Mainframe



In New York City, on September 12, 2019, IBM announced the latest and greatest iteration of its Z systems mainframe computing platform, the IBM z15. And I was lucky enough to be there for the unveiling.

The official IBM announcement letter can be found here if you want to dive into the details. But before you go there, consider first reading what I have to say about it below.

Before going any further, here I am with the new z15 in New York… don’t we make a handsome couple? 



The event was held at 3 World Trade Center in lower Manhattan. Ross Mauri, General Manager of IBM Z, kicked off the event extolling the unprecedented security delivered by the z15 with encryption everywhere and the data privacy passports. He claims that the IBM z15 is the most secure platform you can get, and the new capabilities back that up. Mauri also acknowledged that "there's always the next big thing in technology" but stated that "IBM is innovating and leading by anticipating customer needs to ensure the on-going relevance of the mainframe."

And there is a lot to like about the new IBM z15 platform, both for long-time users and those embracing the platform for new development. IBM is embracing the multicloud approach and reminding everybody that the mainframe is a vital component of multicloud for many organizations.

But modern infrastructure with the latest application development techniques is not as simple as throwing out the old and bringing in the new. I mean, let’s face it, if you have a mainframe with possibly hundreds or thousands of man years of work invested in it, are you really going to take the time to re-code all of that mission-critical work just to have it on a “new” platform? Rewriting applications that work today cannot be the priority for serious businesses! Especially when the modern mainframe is as new as it gets, runs all of that legacy code that runs your business, and also supports new cloud apps and development, too.

The IBM Z works perfectly as a part of your multicloud development strategy. The cloud promises an open, flexible world. But your most critical workloads also need to run securely and without interruption. To accomplish both objectives you must support cloud with an underlying IT infrastructure. And for Fortune 500 companies and other large organizations, the multicloud includes the mainframe as part of the enabling infrastructure.

What’s New

The new IBM z15 is housed in a convenient 19-inch frame, which means it can be integrated into a standard rack. So you get all the benefits and strengths of the mainframe while fitting into the footprint expected by a standard data center.

Did you know that there are more transistors in the new IBM z15 chip than there are people in the world? Inside the IBM z15 processor chip, there are 15.6 miles of wires, 9.2 billion transistors and 26.2 billion wiring connections, all of which allow a single z15 server to process 1 trillion web transactions per day.

The mainframe is the ideal platform for many organizations. It provides the resiliency, security, and agility needed to power, secure, and integrate your hybrid cloud. And it capably, securely, and efficiently runs your transactions and the batch workload required to keep your business humming. IBM used to talk about five 9s of availability (that is 99.999%) but with the new IBM z15, IBM can deliver seven 9s (that is 99.99999%)! That is 3.16 seconds of downtime per year, or only 60.48 milliseconds of downtime per week. Now that is impressive!

The primary new features that are worth your time to investigate further, and that were highlighted by IBM at the kickoff event are:
  • Encryption everywhere which protects your data anywhere, even after it leaves your system, with new IBM Data Privacy Passports, which delivers privacy by policy.
  • Cloud native development that simplifies life for developers as they build and modernize applications using standard tools, including new support for Red Hat OpenShift. This enables you to both modernize the apps you have and to deploy new ones using the tools of your choice.
  • IBM Z Instant Recovery can reduce the impact of planned and unplanned downtime. Instant Recovery can speed the return to your pre-shutdown SLAs by up to 2x.

The flexibility of the z15 is noteworthy, too. The new IBM z15 provides the flexibility to implement 1 frame...


or up to 4 frames, as your capacity needs dictate.


And did you know it can run multiple operating systems, not just z/OS? The IBM Z platform can run z/OS, Linux on Z, z/VM, z/VSE, and z/TPF. This enables organizations to run legacy applications and modern, specialist ones using the operating system of their choice. Indeed, convenience and flexibility are hallmarks of the IBM Z platform.

The IBM z15 is a modern platform for all of your processing needs. And that is backed up not just by IBM, but also by a brand new survey from BMC Software, their 14th annual mainframe survey for 2019. The survey shows that 93% of respondents are confident in the combined long-term and new workload strength of the IBM Z platform, the strongest showing since 2013! Other highlights include a majority thinking that mainframe growth will continue, along with increasing MIPS/MSU consumption... not to mention that the mainframe is handling increases in data volume, number of databases, and transaction volume. If you are working with mainframes in any way, be sure to check out the new BMC Mainframe Survey.


Indeed, with the new IBM z15 things are looking great for the mainframe and those that rely upon it to power their digital business.

Wednesday, September 04, 2019

The Power of Data Masking for Data Protection

Data privacy regulations and the desire to protect sensitive data require methods to mask production data for test purposes. Data masking tools create structurally similar data that is not the same as the actual data, but can be used by application systems the same way as the actual data. The capability to mask data is important for compliance with regulations like GDPR and PCI-DSS, which place restrictions on how personally identifiable information (PII) can be used.

UBS Hainer’s Masking Tool for BCV5 (their test data management solution) offers robust masking of Db2 for z/OS data. I wrote about this capability previously on the blog last year (see Data Masking: An Imperative for Compliance and Governance, November 12, 2018), and if you are looking for a concise, yet thorough overview of the product’s data masking capabilities I point you to that blog post.

So why am I talking about data masking again? Well, it is a thorny problem that many organizations are still struggling with. As much as 80% of sensitive data resides in environments used for development, testing, and reporting. That is a lot of data that is ripe for exposure.

But I also wanted to share a new video produced by UBS Hainer that explains how data masking can help you to stay compliant and protect your sensitive data. It is well worth your time to watch this 2 minute video if you need to better address the protection of sensitive data at your shop.



Click to watch the video

Data masking is not a simple task, and as the video helps to explain, there is much to consider. Effectively masking your data requires a well-thought-out process and method for implementation to achieve success. As such, a tool like the BCV5 Masking Tool can simplify how you address your Db2 data protection requirements. It provides dozens of easy-to-use masking algorithms implemented using Db2 user-defined functions. It ensures that the same actual value is translated to the same masked value every time. And the masked value will be a plausible value that works the same as the data it is masking. The tool understands things like referential integrity, unique constraints, related data, and so on.
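
To make that concrete, here is a purely hypothetical sketch of populating a masked test copy using UDF-based masking; the MASK_* function names and table names are illustrative only and are not the actual BCV5 API:

    INSERT INTO TEST.CUSTOMER (CUST_ID, CUST_NAME, SSN, EMAIL)
      SELECT CUST_ID,
             MASK_NAME(CUST_NAME),   -- hypothetical UDF: same input always yields the same masked output
             MASK_SSN(SSN),          -- hypothetical UDF: result is a plausible, format-valid value
             MASK_EMAIL(EMAIL)       -- hypothetical UDF
      FROM   PROD.CUSTOMER;

It is that deterministic, consistent translation that preserves referential integrity and keeps related data usable across tables.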


A reliable method of automating the process of data masking that understands all of the complicated issues and solves them is clearly needed. And this is where UBS Hainer’s BCV5 Masking Tool excels.



Thursday, August 15, 2019

BMC AMI for DevOps Intelligently Integrates Db2 for z/OS Schema Changes

Organizations of all types and sizes have adopted a DevOps approach to building applications because it effectively implements small and frequent code changes using agile development techniques. This approach can significantly improve the time to value for application development. The DevOps approach is quite mature on distributed platforms, but it is also gaining traction on the mainframe.

As mainframe development teams begin to rely on DevOps practices more extensively, the need arises to incorporate Db2 for z/OS database changes. This capability has been lacking until recently, requiring manual intervention by the DBA team to analyze and approve schema changes. This, of course, slows things down, the exact opposite of the desired impact of DevOps. But now BMC has introduced a new solution that brings automated Db2 schema changes to DevOps, namely BMC AMI for DevOps.

BMC AMI for DevOps is designed to integrate into the DevOps tooling that your developers are already using. It integrates with the Jenkins Pipeline tool suite to provide an automated method of receiving, analyzing, and implementing Db2 schema changes as part of an application update.

By integrating with your application orchestration tools, BMC AMI for DevOps can capture the necessary database changes required to move from test to production. But it does not just apply these changes; it enforces and ensures best practices using built-in intelligence and automated communication between development and database administration.

The ability to enforce best practices is driven by BMC’s Automated Mainframe Intelligence (AMI), which is policy driven. The AMI capability builds much of the DBA oversight for schema changes into the DevOps pipeline, enforcing database design best practices as you go instead of requiring in-depth manual DBA oversight.

Incorporating a database design advisory capability into the process offloads manual, error-prone tasks to the computer. This integrated automation enables automatic evaluation of Db2 database schema change requests to streamline the DBA approval process and remove the manual processes that inhibit continuous delivery of application functionality.

Furthermore, consider that intelligent database administration functionality can be used to help alleviate the loss of expertise resulting from an aging, retiring workforce. This is a significant challenge for many organizations in the mainframe world.

But let’s not forget the developers. The goal of adopting a DevOps approach on the mainframe is to speed up application development, but at the same time it is important that we do not forgo the safeguards built into mainframe development and operations. So you need a streamlined DevOps process—powered by intelligent automation—in which application developers do not have to wait around for DBA reviews and responses. A self-service model with built-in communication and intelligence such as provided by AMI for DevOps delivers this capability.

The Bottom Line

BMC AMI for DevOps helps you to bring DevOps to the mainframe by integrating Db2 for z/OS schema changes into established and existing DevOps orchestration processes. This means you can use BMC AMI for DevOps to deliver the speed of development required by the agile techniques used for modern application delivery, without abandoning the safeguards DBAs require to assure the accuracy of database changes and the availability and reliability of the production system. And developers gain more self-service capability for Db2 schema changes using a well-defined pipeline process.

Thursday, August 01, 2019

DevOps is Coming to Db2 for z/OS


Mainframe development teams are relying on DevOps practices more extensively, bringing the need to incorporate Db2 for z/OS database changes into the toolset that is supporting their software development lifecycle (SDLC).

But most mainframe professionals have only heard a little about DevOps and are not really savvy as to what it entails. DevOps is an amalgamation of Development and Operations. The goal of DevOps is to increase collaboration between developers and operational support and management professionals, with the desired outcome of faster, more accurate software delivery.

DevOps typically relies on agile development, coupled with a collaborative approach between development and operations personnel during all stages of the application development lifecycle. The DevOps approach results in small and frequent code changes and it can significantly reduce the lead time for changes, lower the rate of failure, and reduce the mean time to recovery when errors are encountered. These are all desirable qualities, especially as organizations are embracing digital transformation driven by the 24/7 expectations of users and customers to access data and apps at any time from any device.

The need to be able to survive and thrive in the new digital economy has caused organizations to adopt new and faster methods of developing, testing and delivering application software. Moving from a waterfall software development methodology to an agile methodology is one way that organizations are speeding the time-to-delivery of their software development. Incorporating a DevOps approach is another.

Instead of long software development projects that may not deliver value for months, or perhaps even years (common using the Waterfall development methodology) an agile DevOps approach delivers value quickly, and then incrementally over time. DevOps enables the continuous delivery of new functionality demanded by customers in the digital economy.

Succeeding with DevOps, however, requires a cultural shift in which all groups within IT work in collaboration with one another, and where management endorses and cultivates this cultural change. Because DevOps relies upon incremental development and rapid software delivery, your IT department can only thrive if there is a culture of accountability, collaboration, and team responsibility for desired business outcomes. Furthermore, it requires solid, integrated automated tooling to facilitate the SDLC from development, through testing, to delivery. Creating such an environment and culture can be challenging.

With DevOps the result will be a constantly repeating cycle of continuous development, continuous integration and continuous deployment. This is typically depicted graphically as the infinity symbol such as in Figure 1 (below).

Figure 1 - continuous development, integration and deployment


Note, however, that this particular iteration of the DevOps infinity graphic calls out the participation of both the application and the database. This is an important, though often lacking, detail that should be stressed when adopting DevOps practices.

The Mainframe and DevOps

The adoption of DevOps has, until now, been much slower within mainframe development teams than for distributed and cloud application development. The staid nature of mainframe development and support, coupled with a glass house mentality and a rigid production turnover process, contributes to the delayed adoption of DevOps on the mainframe. This is not surprising, as mainframes are mostly used by large organizations running mission-critical workloads, organizations that tend to be averse to change and risk.

Additionally, the traditional waterfall development methodology has been used by most mainframe software developers for multiple decades, whereas DevOps is closely aligned with an agile approach, which differs significantly from waterfall.

Notwithstanding all of these barriers to acceptance of DevOps on the mainframe, mainframe developers can, and in some cases already do, successfully utilize a DevOps approach. Technically speaking, the mainframe is just another platform and there is nothing inherent in its design or usage that obviates the ability to participate in a DevOps approach to application development and delivery.

What about Db2 for z/OS?

Integrating database change into the application delivery lifecycle can be a stumbling block on the road to DevOps success. Development teams focus on application code, as they should, and typically view database structure changes as ancillary to their coding efforts. In most application development projects, it is not the programmer’s responsibility to administer the database and modify database structures. But applications rely on the database being designed, implemented, and changed in accordance with the needs of the business and the code.

This means that many development projects have automated their SDLC tool chain to speed up the delivery of applications. This is the “Dev” portion of DevOps. But the requisite automation and tooling has not been as pervasively implemented to speed up the delivery of database changes. This is the “Ops” portion of DevOps. And this is changing.

A big consideration is that the manner in which change is applied to applications differs from how database changes are applied. That means each must be managed using different techniques and probably different tools. When an application program changes, the code is compiled, and the load module is migrated from test to production. The old load module is saved for posterity in case the change needs to be backed out, but the change is a wholesale replacement of the executable code.

Database changes are different. The database is an entire configuration in each environment and changes get migrated. There is no wholesale replacement of the database structures. DDL commands are issued to ALTER, DROP, and CREATE the changes to the database structures as needed.
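
For a simple, non-intrusive change, that migration may amount to nothing more than a few DDL statements, for example (illustrative names):

    ALTER TABLE EMP
      ADD COLUMN BONUS DECIMAL(9,2);

    CREATE INDEX XEMP02
      ON EMP (WORKDEPT);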

From the perspective of database changes on Db2 for z/OS, DBAs need the ability to modify all the database objects supported by Db2 for z/OS. Supporting Db2 for z/OS using DevOps requires tooling that understands both Db2 for z/OS and the DevOps methodology and toolchain. And the tooling must understand how changes are made, as well as any underlying changes that may be required to effectively implement the database change. Some types of database changes are intrusive, requiring a complicated series of unloads, metadata captures, drops, creates, loads, and additional steps to implement. The tooling must be capable of making any of these changes in an automated way that the DBA trusts.
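
An intrusive change, by contrast, may require a sequence along these lines; this is only a sketch with hypothetical object names, and the exact steps depend on the change being made:

    -- 1. UNLOAD the existing data from DB1.TS1
    -- 2. Capture dependent metadata (indexes, views, authorizations) from the Db2 Catalog
    -- 3. Drop and re-create the objects with the changed definition
    DROP TABLESPACE DB1.TS1;
    CREATE TABLESPACE TS1 IN DB1;
    CREATE TABLE T1
      (ACCT_ID   CHAR(10)     NOT NULL,
       ACCT_DESC VARCHAR(100) NOT NULL)
      IN DB1.TS1;   -- the new definition, changed in a way that ALTER cannot handle
    -- 4. LOAD the previously unloaded data back into the new structure
    -- 5. Re-create views and indexes, re-grant authorizations, and REBIND dependent packages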

Fortunately, for organizations adopting DevOps on the mainframe with Db2, there is a solution for integrating Db2 database change into the DevOps toolchain: BMC AMI DevOps for Db2. BMC AMI DevOps for Db2 integrates with Jenkins, an application development orchestration tool, to automatically research and determine database schema change requirements, to streamline the review and approval process, and to safely implement the database schema changes making development and operations teams more efficient and agile.