
Thursday, June 02, 2022

Db2 13 for z/OS is Here!

Here we are, in June 2022, about five years since Db2 12 for z/OS was released. And lo and behold, IBM has given us a new version of Db2 for z/OS to learn and adopt: Db2 13 for z/OS!

The new version is generally available (as of May 31, 2022). If you were not paying close attention though, you may have missed it. Db2 13 was announced at the same time as the new mainframe (IBM z16), so it didn't get quite the same level of attention. But those of us who use Db2 for z/OS day in and day out will find a lot of great new stuff in this latest and greatest version of Db2.

I'm not going to go into great detail about the new features and functionality of Db2 13 for z/OS today, but I will offer a high-level overview. Look for future blog posts to dig into more of the nitty gritty tech details and capabilities.

The first thing to mention is that you will need to activate the last Db2 12 function level 510 (FL510) before you can migrate to Db2 13. As many organizations are lagging behind in terms of function level activation, it will be interesting to see how this requirement impacts migration to Db2 13.
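If you are not sure where your subsystems stand, you can check and activate the function level with standard Db2 commands. A minimal sketch (these are real Db2 commands, but verify the catalog-level prerequisites and your shop's procedures before activating anything):

```
-DISPLAY GROUP DETAIL
-ACTIVATE FUNCTION LEVEL (V12R1M510)
```

The -DISPLAY GROUP output shows the current, highest activated, and highest possible function levels; the -ACTIVATE command takes you to FL510 once the prerequisites are met.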

AI

So what can users expect from this new version? Well, it seems that the most talked-about features are related to adopting AI. Functions that deliver AI capabilities into Db2 will make it easier for organizations on the AI journey to integrate Db2 into their processes.

Perhaps the most significant AI addition to Db2 13 is the SQL Data Insights feature. Provided as an extension to Db2, SQL Data Insights uses built-in functions to deliver AI capabilities, such as uncovering heretofore unknown relationships in your data. And because it is implemented as built-in functions, you can use it anywhere that you use SQL!
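For example, SQL Data Insights adds built-in functions such as AI_SIMILARITY, which returns a score indicating how similar two entities are according to a model trained on your data. A minimal sketch, assuming SQL Data Insights is enabled and a model has been trained over an illustrative CUSTOMER table (the table and column names are hypothetical):

```sql
-- Find the 10 customers whose overall behavior is most similar
-- to customer 'C1234', per the model trained on CUSTOMER.
SELECT C.CUSTOMER_ID,
       AI_SIMILARITY(C.CUSTOMER_ID, 'C1234') AS SIMILARITY_SCORE
FROM   CUSTOMER C
WHERE  C.CUSTOMER_ID <> 'C1234'
ORDER BY SIMILARITY_SCORE DESC
FETCH FIRST 10 ROWS ONLY;
```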

Db2 13 offers additional AI-related help as well, simplifying the building of models, supporting Natural Language Processing (NLP), and exploiting the speed of the IBM z16 for training models and querying data.

IBM z16 Synergy

The next thing that has been highly touted is that Db2 13 takes advantage of new capabilities delivered in the IBM z16 hardware. The new Telum chip used by the z16 mainframe provides on-chip AI acceleration that Db2 exploits for features such as SQL Data Insights. And we have already touched on that in terms of speeding up training and querying data for AI.

Db2 for z/OS is unique in that it is the only major DBMS that is designed specifically for a single operating system (z/OS) and hardware platform (IBM Z). This enables IBM (the provider of the DBMS, O/S, and hardware) to take advantage of capabilities unique to the platform, because there is no worry about supporting other platforms.

One example of this unique synergy is the ability to improve sort performance using the SORTL instruction of the IBM z15 and z16. Additionally, the IBM z16 System Recovery Boost can minimize downtime by speeding up Db2 for z/OS restart.

But What About BAU?

OK, so there is new AI stuff and great synergy with the IBM Z, but what about the features and functionality that make it easier to keep up with Business As Usual (BAU)? You know, things like easier administration, better performance, and so on?

Good news! There are a plethora of great new capabilities and improvements in Db2 13 for z/OS. While I cannot adequately cover them in detail today, some examples include:

  • The ability to convert back-and-forth between partition-by-growth and partition-by-range Db2 table spaces (see the sketch after this list). 
  • Support for more concurrent threads and open data sets, as well as improved storage conditions. 
  • DDF storage relief.
  • Real Time Statistics (RTS) improvements.
  • Many improvements to IBM Db2 utility functionality.
  • Security and compliance improvements, including integration to the IBM Z Security and Compliance Center.
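To illustrate the first item in that list: converting a partition-by-growth table space to partition-by-range is now an online schema change rather than an unload/reload exercise. A hedged sketch of what the ALTER might look like (the table, column, and limit-key values are illustrative, and the change is a pending alteration materialized by a subsequent REORG):

```sql
-- Convert an illustrative PBG table to range partitioning on
-- ORDER_DATE; the pending change takes effect at the next REORG.
ALTER TABLE PROD.ORDERS
  ALTER PARTITIONING TO PARTITION BY RANGE (ORDER_DATE)
   (PARTITION 1 ENDING AT ('2020-12-31'),
    PARTITION 2 ENDING AT ('2021-12-31'),
    PARTITION 3 ENDING AT (MAXVALUE));
```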

Summary

The bottom line is that there is a new version of Db2 for z/OS that mainframe shops will need to learn and prepare for. As with any new Db2 version, it will be exciting to dig in and discover all of the new stuff that can help us do our jobs better... and improve our organization's efforts to use its data to improve business.

Keep checking back here as I will blog in more detail about the new capabilities of Db2 13 for z/OS over time... 

Sunday, May 09, 2021

Thinking About the Mainframe, the Cloud, and IBM Think 2021

A Bit about Think

I am looking forward to attending the IBM Think 2021 conference, IBM's annual flagship technology event. I have attended several in-person Think events, as well as last year’s virtual conference, and I always come away with new knowledge and additional insight into technology and IBM’s vast portfolio of hardware, software, and solutions. The Think conference is always one of the tech highlights of the year for me!

This year’s event, IBM Think 2021, is again being held as a virtual conference, May 11 and 12, 2021. And it is free of charge, which means that you can experience all the great education, presentations, and networking opportunities without having to leave your desk.

My favorite aspect of the Think conference is the breadth and scope of pertinent technical content that it covers. Whether you are a developer, a DBA, a data scientist, a manager, an executive, or any flavor of IT or business specialist, there will be a wealth of useful information presented to educate you and make you “think.”  Be sure to register here.

My Think 2021 Agenda

There are multiple sessions to be delivered at this year’s IBM Think conference that intrigue me because they focus on areas where I specialize. For example, Dr. Dario Gil, SVP and Director of IBM Research, will be delivering a keynote session on IT infrastructure that is sure to be educational. This session, 2081, offers a deep dive into the IBM innovations powering the next generation of hardware, including IBM Z.

Another session I am looking forward to is session 2303 focusing on security “everywhere.” It features IBM luminaries like Tom Rosamilia, Senior Vice President, IBM Systems, and Mary O’Brien, General Manager IBM Security. And Forrester Research Director, Lauren Nelson, will also be lending her industry expertise to the session.

But I think the Think 2021 session I am most looking forward to is The IBM Z roadmap for hybrid cloud and AI (session 1605), featuring Ross Mauri, General Manager for IBM Z. Mauri promises to offer a timely discussion on the business value of integrating the IBM Z platform as a full participant in your hybrid cloud. And he’ll speak with Russell Plew, Technology Senior Manager at M&T Bank, who will discuss their real-life experiences in doing so!

Why is this session so interesting to me? Well, I’ve worked with the mainframe my entire career, and as anybody who works on the mainframe knows, the IBM Z platform is used to drive mission-critical workloads across all major industry sectors, worldwide. If your organization needs to perform large-scale transaction processing (thousands of transactions per second), support thousands of users and programs concurrently, manage terabytes of information, and handle large-bandwidth communication, chances are you rely on the mainframe to do that because the platform excels at all of those things.

If you’ve ever deposited a check into your bank account, booked a flight on an airline, or used a credit card to purchase something, it is probable that a mainframe was involved in completing that activity!

Ever since Stewart Alsop of InfoWorld predicted the last mainframe would be unplugged on March 15, 1996 there has been a lingering perception that the mainframe would go away at some point. But here we are, 25 years later, and the mainframe is still going strong! At last year’s IBM Think conference IBM presented the following statistics on the mainframe’s ubiquity and power:

  • 70% of the Fortune 500 use mainframes, and 72% of customer-facing applications are dependent on the mainframe for some or all data processing.
  • Mainframes are designed to be able to process a trillion web transactions a day, with the capability to process 1.1 million transactions per second.
  • 95% of transactions in the banking, insurance, airline, and retail industries are handled by mainframes.


Indeed, the mainframe continues to offer a strong, unparalleled platform for performance, security, and reliability. Of course, the mainframe has changed and grown over its 50+ year lifespan. Today’s IBM z15 is light-years beyond the original IBM System/360 introduced in 1964. Some of the great newer capabilities of the IBM Z include encryption everywhere with pervasive encryption and Data Privacy Passports, rack-mountable mainframes, Instant Recovery, and cloud-native development. I’m looking forward to hearing how IBM’s customers have taken advantage of these, and other, capabilities to integrate the IBM Z into their hybrid cloud architecture.

It only makes sense that businesses relying on the mainframe will continue to do so, even as they embrace cloud computing. This is what the “hybrid” in the term hybrid cloud implies: an IT infrastructure that uses a mix of on-premises systems and private/public cloud from multiple providers. And this approach makes the most sense because everything can’t shift to the cloud immediately (perhaps ever); most existing applications were not built with an understanding of the public cloud, and it would take a lot of investment to re-engineer them to properly take advantage of a public cloud architecture. And even if you wanted to move everything, cloud service providers (CSPs) can’t build out their infrastructure fast enough to immediately absorb all the existing data center capacity “out there.”

So, it will be exciting to watch IBM continue to innovate on the IBM Z platform as enterprise customers work to integrate Z as a vital component of their hybrid cloud infrastructure. With the large investment enterprises have in their working mainframe applications, large data sets and databases containing crucial data, and high-volume processing requirements, they will continue to rely on the mainframe well into the future… and that makes it important to understand how IBM is enabling the IBM Z to participate in your hybrid cloud architecture.

So, join me at Think 2021 for session 1605 to learn how to use your investments in IBM Z and build and modernize applications into container-based workloads using a common DevOps experience. And stick around for other sessions to gain insights on harnessing the full value of IBM hardware, software and services in your organization as you continue to support, manage, and transform traditional business and IT operations.


Tuesday, August 18, 2020

Navigating the IBM COBOL 4.2 End of Service Waters: Chart a course to benefit your business

 

Surprisingly, COBOL has been in the news a lot recently due to its significant usage in many federal government and state systems, most recently unemployment systems. With the global COVID-19 pandemic, those unemployment systems were stressed like never before, with a 1600% increase in traffic (Government Computer News, May 12, 2020) as those impacted by the pandemic filed claims.

Nevertheless, there is another impending event that will likely pull COBOL back into the news as IBM withdraws older versions of the COBOL compiler from service. All IBM product versions go through a lifecycle that starts with GA (general availability), after some time moves to EOM (end of marketing) where IBM no longer sells that version, and ends with EOS (end of support) where IBM no longer supports that product or version. It is at this point that most customers will need to decide to stop using that product or upgrade to a newer version because IBM will no longer fix or support EOS products or versions.

Of course, code that was compiled using an unsupported COBOL compiler will continue to run, but it is not wise to use unsupported software for important, mission-critical software, such as is usually written using COBOL. And you need to be aware of interoperability issues if you rely on more than one version of the COBOL compiler.

So what is going on in the world of COBOL that will require your attention? First of all, earlier this year on April 30, 2020, IBM withdrew support for Enterprise COBOL 5.1 and 5.2. And Enterprise COBOL 4.2 will be withdrawn from service on April 30, 2022 – just about two years from now.

So now is the time for your organization to think about its migration strategy.

Why is COBOL still being used?

Sometimes people who do not work in a mainframe environment are surprised that COBOL is still being used. But it is, and it is not just a fringe language. COBOL is a language that was designed for business data processing, and it is extremely well-suited for that purpose. It provides features for manipulating data and printing reports that are common business requirements. COBOL was purposely designed for applications that perform transaction processing like payroll, banking, airline booking, etc. You put data in, process that data, and send results out.

COBOL was invented in 1959, so its history stretches back over 60 years; a lot of time for organizations to build complex applications to support their business. And IBM has delivered new capabilities and features over the years that enable organizations to keep up to date as they maintain their application portfolio.

So, COBOL is in wide use across many industries.

A majority of global financial transactions are processed using COBOL, including 85 percent of the world’s ATM swipes. According to Reuters, almost 3 trillion dollars in DAILY commerce flows through COBOL systems!

The reality is that more than 30 billion COBOL transactions run every day. And there are more than 220 billion lines of COBOL in use today. COBOL is not dead…

What’s New in COBOL 6

With COBOL 5.1 and 5.2 already out of support, and COBOL 4.2 soon to follow, one migration path is to Enterprise COBOL 6, and IBM has already delivered three releases of it: 6.1, 6.2, and 6.3. There are some nice new features that are in the latest version(s) of IBM Enterprise COBOL, including:

  • Compile and runtime support delivering performance improvements for z15 hardware and z/OS 2.4 operating system
  • Increased compiler capacity making it possible to compile and optimize larger programs (6.1)
  • 64-bit (AMODE 64) support in this compiler enables users to process large data tables that require greater than 2 GB of addressing space (6.3)
  • JSON support (6.1), including the JSON PARSE statement (6.2); see the sketch after this list
  • Support for many new features from the COBOL 2002/2014 programming standards, including new statements like ALLOCATE, FREE, and INITIALIZE; the addition of dynamic-length elementary items; conditional compilation using the DEFINE compiler option; and more
  • Many new compiler options
  • Improved usability with USS
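To give a flavor of the JSON support mentioned above, here is a minimal sketch of the JSON PARSE statement (Enterprise COBOL 6.2 or later). The program and data names are made up, and details such as name matching and exception handling should be verified against the Enterprise COBOL documentation:

```cobol
      * Minimal sketch: parse a JSON message into a COBOL group item.
       IDENTIFICATION DIVISION.
       PROGRAM-ID. JSONDEMO.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  JSON-BUFFER    PIC X(100)
           VALUE '{"CUSTREC": {"CUSTID": 1234, "CUSTNAME": "ACME"}}'.
       01  CUSTREC.
           05  CUSTID     PIC 9(9).
           05  CUSTNAME   PIC X(20).
       PROCEDURE DIVISION.
           JSON PARSE JSON-BUFFER INTO CUSTREC
               ON EXCEPTION
                   DISPLAY 'JSON PARSE FAILED: ' JSON-CODE
               NOT ON EXCEPTION
                   DISPLAY 'ID:   ' CUSTID
                   DISPLAY 'NAME: ' CUSTNAME
           END-JSON
           GOBACK.
```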

At the same time, there are concerns that need to be considered if and when you migrate to version 6. One example is that the new compiler will take longer to compile programs than earlier versions – from 5 to 12 times longer depending on the optimization level. There are also additional work data sets required and additional memory considerations that need to be addressed to ensure the compiler works properly. As much as 20 times more memory may be needed to compile than with earlier versions of the compiler.

Some additional compatibility issues to keep in mind are that your executables are required to be stored in PDSE data sets and that COBOL 6 programs cannot call or be called by OS/VS COBOL programs.

And of course, one of the biggest issues when migrating from COBOL 4.2 to a new version of COBOL is the possibility of invalid data – even if you have not changed your data or your program (other than re-compiling in COBOL 6). This happens because the new code generator may optimize the code differently. That is to say, you can get different generated code sequences for the same COBOL source with COBOL 6 than with 4.2 and earlier versions of COBOL. While this can help minimize CPU usage (a good thing) it can cause invalid data to be processed differently, causing different behavior at runtime (a bad thing).

Whether you will experience invalid data processing issues depends on your specific data and how your programmers coded to access it. Some examples of processing that may cause invalid data issues include invalid data in numeric USAGE DISPLAY data items; parameter/argument size mismatches; using TRUNC() with binary data values having more digits than they are defined for in working storage; and data items that are used before they have been assigned a value.
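Here is a contrived sketch of the kind of code that can behave differently after recompilation. The names are made up, and the actual behavior depends on your data and on compiler options such as NUMCHECK and TRUNC:

```cobol
      * Contrived example: a group-level MOVE places non-numeric
      * bytes into a zoned-decimal (USAGE DISPLAY) field without
      * any conversion or validation.
       IDENTIFICATION DIVISION.
       PROGRAM-ID. BADDATA.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-REC.
           05  WS-AMOUNT      PIC 9(5).
       PROCEDURE DIVISION.
           MOVE 'AB123' TO WS-REC
      *    WS-AMOUNT now holds invalid zoned-decimal data. The test
      *    below may evaluate differently (or abend under NUMCHECK)
      *    when compiled with COBOL 6 versus COBOL 4.2, because the
      *    generated code sequences differ.
           IF WS-AMOUNT = 123
               DISPLAY 'TREATED AS 123'
           ELSE
               DISPLAY 'NOT TREATED AS 123'
           END-IF
           GOBACK.
```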

Migration considerations

Keep in mind that migration will be a lengthy process for any medium-to-large organization, mostly due to testing application behavior after compilation, and comparing it to pre-compilation behavior. You need to develop a plan that best suits your organization’s requirements and work to implement it in the roughly 2-year timeframe before IBM Enterprise COBOL 4.2 goes out of support.

Things to consider:

  • Gartner research shows that “huge ‘all-or-nothing’ modernization programs often fail to meet expectations”
  • What is your current state? Which COBOL compilers are you using and what is your end goal (6.1, 6.2, 6.3)?
  • Remember that compiled programs will continue to run, so it may not be imperative to re-compile everything prior to the end of support date. Of course, it can be difficult to keep track of what has been converted and what has not if you do not have a plan moving forward other than “convert when the program has to be changed at some point.” And it can become difficult to keep track of all the requirements and incompatibilities for multiple versions of COBOL if you do not plan for, and eventually convert to a newer compiler version.
  • Do you have the COBOL talent and knowledge not only to convert but to continue supporting your existing portfolio of COBOL applications?
  • Enterprise application portfolios can be quite large, making it difficult to effectively discover and map all of the dependencies. Consider using tools to help. 

Migration Challenges and Options to Consider

As you put your plan together, you might consider converting some of your COBOL applications to Java. An impending event such as the end of support for a compiler is a prime opportunity for doing so. But why might you want to convert your COBOL programs to Java?

Well, it can be difficult to obtain and keep skilled COBOL programmers. As COBOL coders age and retire, there are fewer and fewer programmers with the needed skills to manage and maintain all of the COBOL programs out there. At the same time, there are many skilled Java programmers available on the market, and universities are churning out more every year.

Additionally, Java code is portable, so if you ever want to move it to another platform it is much easier to do that with Java than with COBOL. Furthermore, it is easier to adopt cloud technologies and gain the benefits of elastic compute with Java programs.

Cost reduction can be another valid reason to consider converting from COBOL to Java. Java programs can be run on zIIP processors, which can reduce the cost of running your applications. A workload that runs on zIIPs is not subject to IBM (and most ISV) licensing charges... and, as every mainframe shop knows, the cost of software rises as capacity on the mainframe rises. But if capacity can be redirected to a zIIP processor, then software license charges do not accrue - at least for that workload.

Additional benefits of zIIPs include:

  • They are significantly cheaper to acquire than standard CPs
  • When workload is redirected to a zIIP it frees up capacity on the standard CP

So, there are many reasons to consider converting at least some of your COBOL programs to Java. Some may be worried about Java performance, but Java performance is similar to COBOL these days; in other words, most of the performance issues of the past have been resolved. Furthermore, there are many tools to help you develop, manage, and test your Java code, both on the mainframe and other platforms.

Keeping in mind the concerns about “all-or-nothing” conversions, most organizations will be working toward a mix of COBOL migrations and Java conversions, with a mix of COBOL and Java being the end result. As you plan for this, be sure to analyze and select appropriate candidate programs and applications for conversion to Java. There are tools that can analyze program functionality to assist you in choosing the best candidates. For example, you may want to avoid converting programs that frequently call other COBOL programs and programs that use pre-relational DBMS technologies (such as IDMS and IMS).

How to convert COBOL to Java

At this point, you may be thinking, “Sure, I can see the merit in converting some of my programs to Java, but how can I do that? I don’t have the time for my developers to re-create COBOL programs in Java going line-by-line!” Of course, you don’t!

This is where an automated tool comes in handy. The CloudFrame Migration Suite provides code conversion tools, automation, and DevOps integration to deliver very maintainable, object-oriented Java that can integrate with modern technology available within your open architecture.  It can be used to refactor COBOL source code to Java without changing data, schedulers, and other infrastructure components. It is fully automated and seamlessly integrates with the change management systems you already use on the mainframe.

The Java code generated by CloudFrame will operate the same as your COBOL and produce the same output. There are even options you can use to maintain the COBOL 4.2 treatment of data, thereby avoiding the invalid data issues that can occur when you migrate to COBOL 6. This can help to reduce project testing and remediation time.

It is also possible to use CloudFrame to refactor your COBOL programs to Java but keep maintaining the code in COBOL. Such an approach, as described in this blog post (Consider Cross-Compiling COBOL to Java to Reduce Costs), can allow you to keep using your COBOL programmers for maintenance but to gain the zIIP eligibility of Java when you run the code.

Upcoming Webinar

To learn more about COBOL migration, modernization considerations, and how CloudFrame can help you to achieve your modernization goals, be sure to attend CloudFrame’s upcoming webinar, where I will be participating on a panel along with Venkat Pillay (CEO and founder of CloudFrame) and Dale Vecchio (industry analyst and former Gartner research VP). The webinar, titled Navigating the COBOL 4.2 End of Support (EOS) Waters: An expert panel discusses the best course of action to benefit your business, will be held on September 23, 2020 at 11:00 AM Eastern time. Be sure to register and attend!

Summary

Users of IBM Enterprise COBOL 4.2 need to be aware of the imminent end of service date in April 2022 and make appropriate plans for migrating off of the older compiler.

This can be a great opportunity to consider what should remain COBOL and where the opportunities to modernize to Java are.  Learn how CloudFrame can help you navigate that journey.

Wednesday, May 20, 2020

IBM Think 2020: Virtual, On Demand, Hybrid Cloud and Z

This year’s IBM Think event was quite different than in past years. Usually, Think is an in-person event and attracts a lot of people, typically more than ten thousand IT executives and practitioners. But as we all know, this year with the global COVID-19 pandemic an in-person event was not practical, so IBM held it on-line. And I have to say, they did a fantastic job of managing multiple threads of content without experiencing bandwidth or access issues – at least none that I encountered.
The theme and focus of the content for the event was different, too. Instead of the usual conference focus on products, announcements, and customer stories, this year’s event was more philanthropic. Oh, sure, you could still hear about IBM’s products and customer successes, but the keynote and featured sessions were at a higher level this year.
In the kickoff session, new IBM CEO Arvind Krishna spoke about the driving forces in IT as being hybrid cloud and AI. And he spoke about these things in the context of moving IBM forward, but also how they can be used to help healthcare workers combat pandemics like we are currently experiencing.
In another keynote, IBM Executive Chairman Ginni Rometty spoke with Will.i.am (of the Black Eyed Peas) about making the digital era inclusive through education, skills development, and the digital workforce.


And then there was Mayim Bialik’s session on women and STEM, which was sincere, heartfelt, and entertaining. 

For those who don’t know who she is, she is the actress who played Blossom (on Blossom) and Amy Farrah Fowler (on The Big Bang Theory)… but she is also a scientist with a doctorate in neuroscience. Bialik’s session focused on putting a positive female face on STEM, something that is definitely needed!

So, what about the technology side of things? Well, you can take a clue from Krishna’s assertion that IBM as a company has to have a “maniacal” focus on hybrid cloud and AI in order to compete. But the company has a rich and deep heritage across the computing spectrum that gives it a key advantage even as it adjusts to embracing hybrid cloud and AI.
The first thing to remember is that IBM uses the term “hybrid multicloud” very specifically and deliberately. Everything is not going to be in the cloud. Large enterprises continue to rely on the infrastructure and applications they have built over many years, many of them on z Systems mainframes. The key to the future is both on-premises and cloud, and IBM understands this with its hybrid cloud approach… as they clearly demonstrated at Think 2020.
My specific area of focus and expertise is the mainframe and Db2 for z/OS, so I sought out some sessions at Think in those areas. Let me tell you a bit about two of them.

First let’s take a quick look at how IBM Cloud Pak for Data can work with data on the Z platform. This information was drawn from IBM Distinguished Engineer Gary Crupi’s session, titled "Drive Actionable, Real-Time Insight from Your High-Value IBM Z Data Using IBM Cloud Pak for Data."

What is Cloud Pak for Data? Well, it is an IBM platform for unifying and simplifying the collection, organization, and analysis of data. Heretofore, it was mostly focused on non-mainframe platforms, but the latest release, version 3.0, is a major upgrade with an enhanced unified experience, expanded ecosystem, and optimized Red Hat integration. And it enables several ways for you to turn your enterprise data on IBM Z into actionable, real-time insight through the integrated cloud-native architecture of IBM Cloud Pak for Data.



Crupi’s session started out with the now familiar (at least to IBM customers and Think attendees) Ladder to AI and how Cloud Pak for Data helps to enable customers’ journeys up the ladder. Data is the foundation for smart business decisions, and AI can unlock the value of this data.

He went on to discuss the continuing importance of the mainframe, providing facts including:
  •  70% of Fortune 500 companies use mainframe for their most critical business functions
  •  72% of customer-facing applications are completely or very dependent on mainframe processing
  •  The mainframe handles 1.1 million transactions per second (compared to Google’s roughly 60,000 searches per second)
  •  95% of transactions in the banking, insurance, airline and retail industries run on the mainframe

These are all good points, and things that mainframe users like to hear. It is good to see IBM promoting the ubiquity and capabilities of the mainframe.



Now, what about IBM Cloud Pak for Data better exploiting mainframe data? Crupi went back to the AI Ladder to talk about z/OS capabilities for analyzing and collecting data for AI.


Solutions such as Watson Machine Learning for z/OS, Db2 AI for z/OS, and QMF can be used for analyzing data, while Db2 for z/OS and Tools, IDAA, and Data Virtualization Manager can be used for data collection. These things already exist, but using them effectively with distributed platform capabilities will be crucial to being able to climb the ladder to AI.

IBM Cloud Pak for Data will leverage IBM Z technology to bring valuable IBM Z data into a modern analytics/AI platform. It can now exploit IBM Z data and resources where appropriate, enabling you to further benefit from IBM Z technology and data.

A key new component of making the data on IBM Z accessible is IBM Db2 for z/OS Data Gate, a new product announced during Think 2020. Db2 Data Gate can help you reduce the cost and complexity of your data delivery with a simple, easy-to-deploy mechanism that delivers read-only access to Db2 for z/OS data. Instead of building and maintaining costly custom code, Db2 Data Gate does the work. Data can be synchronized between Db2 for z/OS data sources and target databases on IBM Cloud Pak for Data.


Instead of accessing data in the IBM Z data source directly, an application accesses a synchronized copy of the Db2 for z/OS data, hosted by a separate system. This target system can be established anywhere Cloud Pak for Data is supported, thus enabling a wide range of target platforms that include public cloud, on-premises, and private cloud deployments.


So IBM is helping you to expand the accessibility of your Z data.

And that brings me to the second session I’d like to briefly mention, Automate Your Mainframe z/OS Processes with Ansible [Session 6760]. 

Although Ansible is not a replacement for your operational mainframe automation tools, it can be used to communicate with and automate z/OS using out-of-the-box SSH access to z/OS UNIX System Services to execute commands and scripts, submit JCL, and copy data. And Ansible has existing modules that can be used to make calls to the RESTful/SOAP APIs that are available in many z/OS products.
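As a simple illustration of that SSH-based approach, here is a hedged sketch of a playbook. The host, data set, and member names are made up; the raw module is used because it needs no Python interpreter on the target z/OS system:

```yaml
---
# Minimal sketch: drive z/OS through Ansible's standard SSH connection.
- name: Simple z/OS automation over SSH
  hosts: zos_host
  gather_facts: no
  tasks:
    - name: Verify the z/OS UNIX System Services shell is reachable
      raw: uname -a

    - name: Submit JCL from a PDS member using TSO SUBMIT
      raw: tsocmd "SUBMIT 'MY.JCL.LIB(DAILYJOB)'"
```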


Ansible can be beneficial for orchestrating across platforms, including Z systems, and for simplifying configuration and deployment management. But keep in mind that Ansible is a proactive framework for automation; it is not intended to replace automation solutions that monitor and react.

Here is a nice, but by no means exhaustive, list of examples showing how Ansible can be used to interact with popular z/OS products.


The Bottom Line

The IBM Think 2020 conference was a great success considering how rapidly IBM had to move to convert it from an in-person event, to an online, virtual one. And the content was informative, entertaining, and had something for everybody. I hope you enjoyed my take on the event… feel free to share your comments below on anything I’ve written here, or on your experiences at the event.


Tuesday, September 17, 2019

IBM Unleashes the z15 Mainframe



In New York City, on September 12, 2019, IBM announced the latest and greatest iteration of its Z systems mainframe computing platform, the IBM z15. And I was lucky enough to be there for the unveiling.

The official IBM announcement letter can be found here if you want to dive into the details. But before you go there, consider first reading what I have to say about it below.

Before going any further, here I am with the new z15 in New York… don’t we make a handsome couple? 



The event was held at 3 World Trade Center in lower Manhattan. Ross Mauri, General Manager of IBM Z, kicked off the event extolling the unprecedented security delivered by the z15 with encryption everywhere and Data Privacy Passports. He claims that the IBM z15 is the most secure platform you can get, and the new capabilities back that up. Mauri also acknowledged that "there's always the next big thing in technology" but stated that "IBM is innovating and leading by anticipating customer needs to ensure the on-going relevance of the mainframe."

And there is a lot to like about the new IBM z15 platform, both for long-time users and those embracing the platform for new development. IBM is embracing the multicloud approach and reminding everybody that the mainframe is a vital component of multicloud for many organizations.

But modern infrastructure with the latest application development techniques is not as simple as throwing out the old and bringing in the new. I mean, let’s face it: if you have a mainframe with possibly hundreds or thousands of man-years of work invested in it, are you really going to take the time to re-code all of that mission-critical work just to have it on a “new” platform? Rewriting applications that work today cannot be the priority for serious businesses! Especially when the modern mainframe is as new as it gets, runs all of the legacy code that runs your business, and supports new cloud apps and development, too.

The IBM Z works perfectly as a part of your multicloud development strategy. The cloud promises an open, flexible world. But your most critical workloads also need to run securely and without interruption. To accomplish both objectives you must support cloud with an underlying IT infrastructure. And for Fortune 500 companies and other large organizations, the multicloud includes the mainframe as part of the enabling infrastructure.

What’s New

The new IBM z15 is housed in a convenient 19-inch rack, which means it can be integrated into a standard data center rack. So you get all the benefits and strengths of the mainframe while fitting into the footprint expected by a standard data center.

Did you know that there are more transistors in the new IBM z15 chip than there are people in the world? Inside the IBM z15 processor chip there are 15.6 miles of wires, 9.2 billion transistors, and 26.2 billion wiring connections — all of which allow a single z15 server to process 1 trillion web transactions per day.

The mainframe is the ideal platform for many organizations. It provides the resiliency, security, and agility needed to power, secure, and integrate your hybrid cloud. And it capably, securely, and efficiently runs your transactions and the batch workload required to keep your business humming. IBM used to talk about five 9s of availability (that is 99.999%) but with the new IBM z15, IBM can deliver seven 9s (that is 99.99999%)! That is 3.16 seconds of downtime per year, or only 60.48 milliseconds of downtime per week. Now that is impressive!

The primary new features that are worth your time to investigate further, and that were highlighted by IBM at the kickoff event, are:
  • Encryption everywhere, which protects your data anywhere, even after it leaves your system, with new IBM Data Privacy Passports delivering privacy by policy.
  • Cloud native development that simplifies life for developers as they build and modernize applications using standard tools, including new support for Red Hat OpenShift. This enables you to both modernize the apps you have and to deploy new ones using the tools of your choice.
  • IBM Z Instant Recovery can reduce the impact of planned and unplanned downtime. Instant Recovery can speed the return to your pre-shutdown SLAs by up to 2x.

The flexibility of the z15 is noteworthy, too. The new IBM z15 provides the flexibility to implement 1 frame...


or up to 4 frames, as your capacity needs dictate.


And did you know it can run multiple operating systems, not just z/OS? The IBM Z platform can run z/OS, Linux on Z, z/VM, z/VSE, and z/TPF. This enables organizations to run legacy applications and modern, specialist ones using the operating system of their choice. Indeed, convenience and flexibility are hallmarks of the IBM Z platform.

The IBM z15 is a modern platform for all of your processing needs. And that is backed up not just by IBM, but also by a brand new survey from BMC Software, its 14th annual mainframe survey for 2019. The survey shows that 93% are confident in the combined long-term and new workload strength of the IBM Z platform, the strongest showing since 2013! Other highlights include a majority thinking that mainframe growth will continue, along with increasing MIPS/MSU consumption... not to mention that the mainframe is handling increases in data volume, number of databases, and transaction volume. If you are working with mainframes in any way, be sure to check out the new BMC Mainframe Survey.


Indeed, with the new IBM z15 things are looking great for the mainframe and those that rely upon it to power their digital business.