Thursday, July 06, 2023

Top 10 Db2 Performance Tips - No. 2: Optimize Database Design

A well-designed database schema forms the foundation of a high-performing and efficient IBM Db2 database... and therefore, also serves as the basic starting point for efficient Db2 applications. The importance of optimizing the database design cannot be overstated, as it directly impacts query performance, data integrity, and overall system efficiency.

The Logical Data Model

The first step toward a proper database design is the creation of a logical data model. Before implementing databases of any sort, it is imperative to first develop a sound model of the data to be used. Novice database developers frequently begin with the quick-and-dirty approach to database implementation. They approach database design from a programming perspective. Because novices often lack experience with databases and data requirements gathering, they attempt to design databases like the flat files they are accustomed to using. This is a major mistake. Indeed, most developers using this approach quickly discover problems after the databases and applications become operational in a production environment. At a minimum, performance will suffer and data may not be as readily available as required. At worst, data integrity problems and/or performance problems may arise, rendering the entire application unusable.

The goal of a data model is to record the data requirements of a business process. The scope of the data model for each line of business must be comprehensive. A data model serves as a lexicon for the data needs of the business... and as a blueprint for the physical implementation of the structures of the database.

A key component of building a proper data model is to ensure proper normalization. 

Normalization

Normalization reduces data redundancy and inconsistencies by ensuring that the data elements are designed appropriately. A series of normalization rules is applied to the entities and data elements, each of which is called a “normal form.” If the data conforms to the first rule, the data model is said to be in “first normal form,” and so on.

A database design in First Normal Form (1NF) will have no repeating groups, and each instance of an entity can be identified by a primary key. For Second Normal Form (2NF), every data element in an entity must depend on the entire primary key for that entity, not just part of it. Third Normal Form (3NF) removes data elements that depend on anything other than the primary key, eliminating transitive dependencies. If the contents of a group of data elements can apply to more than a single entity instance, those data elements belong in a separate entity.

This is a quick and dirty introduction to normalization, and there are further levels of normalization not discussed here in order to keep the discussion moving along. For an introductory discussion of normalization visit http://wdvl.com/Authoring/DB/Normalization.

The bottom line is that normalization reduces data redundancy and improves data integrity by organizing data into logical entities and minimizing data duplication. By carefully analyzing the business requirements and applying normalization principles, database designers can create tables that are lean, efficient, and accurately represent the data model.
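
To make this more concrete, below is a minimal DDL sketch of what normalization accomplishes. The table and column names are hypothetical, invented purely for illustration (and keep in mind that in Db2 for z/OS each primary key must be backed by a unique index):

    -- Unnormalized: repeating item columns violate 1NF, and customer
    -- details are duplicated on every order
    CREATE TABLE ORDER_FLAT
      (ORDERNO   INTEGER,
       CUSTNAME  VARCHAR(40),
       CUSTADDR  VARCHAR(100),
       ITEM1     VARCHAR(20),
       ITEM2     VARCHAR(20),
       ITEM3     VARCHAR(20));

    -- Normalized: each entity gets its own table and primary key, and
    -- the repeating group becomes rows in a dependent table
    CREATE TABLE CUSTOMER
      (CUSTNO    INTEGER      NOT NULL PRIMARY KEY,
       CUSTNAME  VARCHAR(40)  NOT NULL,
       CUSTADDR  VARCHAR(100));

    CREATE TABLE ORDERS
      (ORDERNO   INTEGER NOT NULL PRIMARY KEY,
       CUSTNO    INTEGER NOT NULL REFERENCES CUSTOMER);

    CREATE TABLE ORDER_ITEM
      (ORDERNO   INTEGER     NOT NULL REFERENCES ORDERS,
       ITEMNO    SMALLINT    NOT NULL,
       PRODUCT   VARCHAR(20) NOT NULL,
       PRIMARY KEY (ORDERNO, ITEMNO));

Now customer data is stored only once, an order is no longer limited to three items, and every item must belong to a valid order.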

Relationships

Optimizing relationships between tables is another critical aspect of database design. Relationships, such as primary key-foreign key associations, define the logical connections between tables. This, too, should be evident in the logical data model, which is frequently depicted as an entity/relationship diagram.

Choosing appropriate indexing strategies, enforcing referential integrity, and carefully considering the cardinality and selectivity of relationships are crucial steps to ensure efficient query processing and join operations.
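
For example, it is usually wise to build an index on foreign key columns, because Db2 must access the dependent table both when checking referential integrity and when joining parent to child. A quick, hypothetical sketch (the names are invented for illustration):

    -- Index the foreign key to support joins back to the parent table
    -- and to speed referential integrity checking
    CREATE INDEX XORDER_CUST
      ON ORDERS (CUSTNO);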

From Logical to Physical

Assuming you have a well-designed logical data model, the next task is to transform that logical model into an actual physical database. The first step is to create an initial physical data model by transforming the logical data model into a physical implementation based on an understanding of the DBMS being used for deployment. To successfully create a physical database design you will need to have a good working knowledge of the features of the DBMS, including:

  • In-depth knowledge of the database objects supported by the DBMS and the physical structures and files required to support those objects.

  • Details regarding the manner in which the DBMS supports indexing, referential integrity, constraints, data types, and other features that augment the functionality of database objects.

  • Detailed knowledge of new and obsolete features for particular versions or releases of the DBMS to be used.

  • Knowledge of the DBMS configuration parameters that are in place.

  • Data definition language (DDL) skills to translate the physical design into actual database objects.

Armed with the correct information, you can create an effective and efficient database from a logical data model. Begin by performing a simple translation from logical terms to physical objects. Of course, this simple transformation will not result in a complete and correct physical database design – it is merely the first step. The transformation consists of the following (a brief sketch follows the list):

  • Transforming entities into tables

  • Transforming attributes into columns

  • Transforming domains into data types and constraints
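
As a simple, hypothetical illustration of these three transformations (the entity, attributes, and domains are invented for the example):

    -- Logical: entity EMPLOYEE with attributes EMP-ID (identifier),
    -- LAST-NAME, HIRE-DATE (domain: date), and SALARY (domain:
    -- non-negative money amount)
    -- Physical:
    CREATE TABLE EMPLOYEE
      (EMP_ID     INTEGER       NOT NULL,
       LAST_NAME  VARCHAR(40)   NOT NULL,
       HIRE_DATE  DATE          NOT NULL,
       SALARY     DECIMAL(9,2)  CHECK (SALARY >= 0),
       PRIMARY KEY (EMP_ID));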

Data Types

To support the mapping of attributes to table columns you will need to map each logical domain of the attribute to a physical data type and perhaps additional constraints. In a physical database, each column must be assigned a data type. 

Selecting appropriate data types is vital for optimizing database design. Choosing the right data types can have a significant impact on storage requirements, query performance, and overall system efficiency. Select data types that accurately represent the data and minimize storage overhead: use integer types instead of character types for numeric values, assure that date and time data use the appropriate date/time data types, and choose wisely among the various text and character data types for each column. Doing so helps to improve data integrity, optimize storage utilization, and improve query execution speed.
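
A quick sketch of the difference this can make, using hypothetical columns:

    -- Poor choices: dates and numbers stored as character strings
    --   HIRE_DATE  CHAR(10)   -- no validation, no date arithmetic
    --   QUANTITY   CHAR(5)    -- bigger than needed; sorts as text
    -- Better choices:
    --   HIRE_DATE  DATE       -- validated, supports date arithmetic
    --   QUANTITY   SMALLINT   -- compact; sorts and compares numerically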

Constraints

Furthermore, you will need to implement appropriate constraints, such as primary keys, unique constraints, and foreign keys. This enhances data integrity and query performance. Additional constraints such as check constraints and nullability enable the DBMS to better enforce data integrity, instead of leaving it to application code written at a later time. Constraints enforce data consistency rules, ensure referential integrity, and provide the optimizer with valuable information for query optimization.
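
For example, constraints can be declared when tables are created or added later with ALTER TABLE. The table, column, and constraint names below are hypothetical:

    -- Declare the business rule in the database, not in application code
    ALTER TABLE PROJECT
      ADD CONSTRAINT DATES_OK CHECK (START_DATE <= END_DATE);

    -- Enforce referential integrity between dependent and parent tables
    ALTER TABLE PROJECT
      ADD CONSTRAINT PROJ_DEPT FOREIGN KEY (DEPTNO) REFERENCES DEPT;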

An Iterative Process

It is worth mentioning that database design optimization is an iterative process that should consider not only the current requirements but also the future growth and scalability of the system. Regularly reviewing and revisiting the database design as the application evolves can help identify areas for improvement and ensure that the database remains optimized over time.

Finally...

In conclusion, a well-designed database schema is fundamental to achieving optimal performance for your database applications. By focusing on strategies such as normalization, relationship optimization, appropriate data types, and constraints, database designers can create a robust and efficient database environment. Optimizing the database design not only enhances query performance and data integrity but also lays the groundwork for scalability and adaptability as the system evolves.

Monday, June 26, 2023

Top 10 Db2 Performance Tips - No. 1: Analyze Query Execution Plans

Query execution plans play a crucial role in identifying performance bottlenecks within an IBM Db2 database. When a query is executed, the database optimizer determines the most efficient way to retrieve the requested data by generating a query execution plan. It does this by parsing SQL requests and turning them into actionable commands for accessing and modifying data. 

So the query execution plan outlines the steps and operations involved in executing the query, including accessing tables, applying filters, joining data, and performing sorting or aggregations.

Understanding query execution plans is important because they reveal the inner workings of how Db2 will process the query. And they also highlight areas where performance can be improved if you understand the various ways that data can be processed by Db2. By analyzing query execution plans, you can identify potential inefficiencies and take proactive steps to optimize queries for better performance.

For example, one high-level performance tuning step that you can identify using a query execution plan is to find table (space) scans that would be better off using an index instead.

Plan Tables

But how can you review and analyze query execution plans? Perhaps the first method used by most folks is to examine the PLAN_TABLE, a specific table used by the EXPLAIN command to record the choices made by the Db2 Optimizer when it formulates a query execution plan. The PLAN_TABLE provides detailed information about the access paths chosen, join strategies employed, and other relevant details that impact query performance.

As long as EXPLAIN(YES) is specified when you bind your program, or you explicitly issue an EXPLAIN statement, information about the query execution plan will be placed in the PLAN_TABLE.
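
For example, you can EXPLAIN a single dynamic statement and assign it an arbitrary QUERYNO so its rows are easy to find later (the EMP table and predicate here are hypothetical):

    EXPLAIN PLAN SET QUERYNO = 100 FOR
      SELECT LASTNAME, SALARY
      FROM   EMP
      WHERE  WORKDEPT = 'D11';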

So, after the optimizer creates the access paths and populates the PLAN_TABLE with data representing those access paths, you will need to examine the results to determine if everything is satisfactory. Many questions can be answered by analyzing the results of EXPLAIN – questions like:

  • if we are joining, what type of join is used (NLJ, MS, Hybrid)?
  • was an index used, and if so, how many columns matched?
  • are we doing a scan, and if so, what type of scan (full or page range)?
  • is prefetch being used, and if so, what type (sequential, list)?
  • was a hint used?
  • was parallelism used, and if so, what degree and type?
  • was a sort required, and if so, why (Join, Unique, Group By, Order By)?
  • what type of locking is required?
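
A simple query against the PLAN_TABLE can answer several of these questions. The columns below are standard PLAN_TABLE columns; the QUERYNO predicate just matches the arbitrary number used in the earlier EXPLAIN example:

    SELECT QBLOCKNO, PLANNO, METHOD, ACCESSTYPE, MATCHCOLS,
           ACCESSNAME, PREFETCH, TSLOCKMODE
    FROM   PLAN_TABLE
    WHERE  QUERYNO = 100
    ORDER BY QBLOCKNO, PLANNO;

For instance, ACCESSTYPE = 'R' indicates a table space scan, MATCHCOLS shows how many index columns matched, and METHOD identifies the join method.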

And that just covers the main PLAN_TABLE. There are actually many other explain tables available that, if created, will be populated by EXPLAIN. But that is beyond the scope of this high-level tip, which is already becoming rather lengthy.

Other Tools

There are also several additional tools and techniques that can be used to analyze Db2 query execution plans. One commonly used tool is Visual Explain, which provides a graphical representation of the query execution plan. Many tuners prefer a visual approach to reviewing and tuning access paths. Visual Explain allows users to visualize the flow of data, understand the order in which operations are performed, and identify potential performance bottlenecks.

Visual Explain is a feature of the free IBM Data Studio tool, as well as the new Db2 Administration Foundation offering. There are other tools, from other vendors, that also provide a visual Explain capability.

Speaking of which, another technique to analyze query execution plans is to use database monitoring tools. Such tools capture real-time performance data, including query execution plans, and provide useful performance metrics that can be reviewed and acted upon as needed. 

By monitoring the execution plans of frequently executed queries, it is possible to detect patterns, identify resource-intensive operations, and make informed decisions about query optimization.

A Bit of Guidance

It is wise to always BIND your production programs (that use static SQL) specifying EXPLAIN(YES) so that you can see the access paths chosen by Db2. This way, you can either check to make sure that no access paths have changed before proceeding to implement changes in production, or at least you have the information available should problems arise after you move code into production.

It can be difficult to determine if any access paths have changed, because you will need the old execution plans to compare to the new ones. For this reason, it is a good idea to keep several versions of access path information for each program. 
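
One low-tech way to do this is to copy the rows for a program into a history table after each bind. PLAN_TABLE_HIST is a hypothetical table, assumed to have been created with the same columns as your PLAN_TABLE:

    INSERT INTO PLAN_TABLE_HIST
      SELECT *
      FROM   PLAN_TABLE
      WHERE  PROGNAME = 'PROGRAM1';

Because the PLAN_TABLE includes a BIND_TIME column, the copied rows remain distinguishable from version to version.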

Additionally, sometimes the program has changed between EXPLAIN executions, which can make it challenging to find the exact SQL statements to compare. But you are still better off with older versions than without any historical data.

To optimize query execution plans for better performance, it is essential to focus on areas such as index usage, join strategies, and predicate selection. By strategically creating and maintaining appropriate indexes, rewriting complex queries, and refining predicate conditions, it is possible to influence the optimizer's decisions and improve query performance.
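
For example, if the PLAN_TABLE shows ACCESSTYPE = 'R' (a table space scan) for a query with a restrictive predicate, building an index on the predicate column may give the optimizer a better option. The table, column, and index names here are hypothetical:

    -- Query being tuned:  SELECT ... FROM EMP WHERE WORKDEPT = ?
    CREATE INDEX XEMP_WORKDEPT
      ON EMP (WORKDEPT);

    -- After a REBIND with EXPLAIN(YES), look for ACCESSTYPE = 'I'
    -- with MATCHCOLS = 1 in the new PLAN_TABLE rows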

In Conclusion 

Analyzing query execution plans is a vital step in optimizing the performance of IBM Db2 applications. By using tools like EXPLAIN and Visual Explain, PLAN_TABLE data, and monitoring tools, it is possible to gain valuable insights into query execution, identify potential bottlenecks, and enhance overall performance. 

By proactively analyzing and optimizing query execution plans, organizations can achieve faster response times, improved scalability, and a more efficient database environment.

Be sure to check back here next week when we will address database design as a performance issue.

Wednesday, June 21, 2023

Top 10 Performance Tuning Tips for IBM Db2

Performance is a critical aspect of any database management system, and IBM Db2 is no exception. As organizations deal with ever-increasing data volumes and complex workloads, it is crucial that Db2 environments are fine-tuned to ensure optimal performance, responsiveness, and scalability. In this multi-part blog post, I will examine the top ten performance tuning tips for assuring optimal IBM Db2 applications and systems.

Whether you are an experienced database administrator or just starting your journey with Db2, these performance tuning tips will provide some practical strategies and techniques to enhance the efficiency and speed of your Db2 deployments. This series of posts will cover a range of topics, including query optimization, database design, indexing strategies, memory configuration, I/O optimization, and more. 

By following these Db2 performance tips, you can significantly improve query response times, optimize resource utilization, and ensure the smooth operation of your Db2 environment. Whether you are dealing with transactional workloads, analytical queries, or a combination of both, the tips presented here will equip you with the knowledge and techniques to tackle performance bottlenecks and fine-tune Db2 to meet your specific needs.

If you are looking to optimize existing Db2 deployments or planning for a new implementation, this blog post series will serve as a valuable resource to guide your performance tuning efforts. So, let's dive into the top ten performance tuning tips for IBM Db2 and discover how you can unlock the full potential of your database system.

Here's an outline for the upcoming posts in this series on "Top 10 Performance Tuning Tips for IBM Db2":

  Tip 1: Analyze Query Execution Plans

  Tip 2: Optimize Database Design

  Tip 3: Efficient Indexing Strategies

  Tip 4: Effective Memory Configuration

  Tip 5: Workload Management and Query Optimization

  Tip 6: Monitoring and Performance Metrics

  Tip 7: Configuration Parameters and Tuning

  Tip 8: I/O Optimization

  Tip 9: Regular Maintenance and Housekeeping

  Tip 10: Stay Up to Date 

So be sure to keep tuning in to the Db2 Portal blog as we publish these Top 10 Performance Tips over the course of the next few weeks.

And be sure to share your own performance and tuning thoughts here... or on any of the subsequent posts in this series.


Thursday, June 01, 2023

Manage Your Test Data Before It Becomes a Bottleneck

Test data management is an important aspect of application development. Anybody who has ever been involved in software development knows that creating applications requires test data. Without the data to test your programs against, there is really no way to ensure that the programs are doing what you want them to do.


But test data management can quickly consume a lot of time and effort. Supporting multiple programmers, multiple database environments, and making sure you have sufficient test cases is easier said than done. And we are not just talking about developing new applications, but also modifying existing ones!

So, what can be done? Well, I'm going to point you to a couple of resources. First, I am delivering a webinar on this topic on June 8, 2023 titled Manage Your Test Data Before it Becomes a Bottleneck -- the same title as this blog post!

In this webinar, I discuss test data management and the trends impacting application development and testing. Some of the issues I examine include automation, shift-left testing, agile development, and DevOps. And I'll uncover the impact of these trends and how they drive up the speed and complexity of creating and testing applications. I will also define the components and requirements of test data management and examine the pitfalls that can lead test data management to become a bottleneck, slowing down your developers.

Another resource you can take a look at is the article I wrote for ELNION, titled The Test Data Management Struggles of Modern App Development.

In this article, I explore the issues impacting application developers regarding test data management, and look at how they can overcome these challenges to ensure a successful testing process.

So if test data management is a challenge for you and your organization, I urge you to take advantage of these resources by registering for and attending the webinar, and also reading the article.

Friday, May 12, 2023

Will I See You In Philadelphia for IDUG NA 2023?


Mother's Day is right around the corner and if you are a Db2 professional you know that can only mean one thing: the North American IDUG Db2 Tech Conference is nigh! It seems like every year our favorite conference competes with our Moms for our attention. But this year the folks at IDUG have helped us out by starting the conference on Tuesday instead of the Monday after Mother's Day. So go ahead, take Mom out to dinner and stay out as late as you want. You will still have a day to recover for your trip to Philadelphia.

Yes, IDUG is back in Philadelphia this year (2023). And I am looking forward to an exciting week of learning more about my favorite DBMS and mingling with others who think likewise. There is always a world of phenomenal educational opportunities delivered at IDUG each year by IBMers, vendor experts, users, and consultants from all over the world. There will be numerous informative technical sessions on all the latest and greatest Db2 technologies, features, and related products.

If you’ve ever attended an IDUG conference in the past then you know why I’m excited. If you haven't attended before, I'm sure you'll find a lot to occupy your interest!

And be sure to seek me out and say hello. I'll be delivering three presentations this year!

First up, on Tuesday at 2:00 pm I'll be taking a look at the trends impacting data and database systems in my session titled Database Trends 2023: Things Are Changing and You Better Keep Up!

Then on Wednesday I will be co-presenting a vendor-sponsored presentation for the fine folks at InfoTel on Intelligent Automation of Db2 Administration at 10:15 AM. I will also be spending a bit of time in the InfoTel booth so you can always stop by there to say hello or ask me a question!

And finally, on Thursday, I will be delivering a session titled Code Your Db2 Applications for Performance From the Start! This one starts at 2:00 PM and is targeted at application developers but it can be useful for DBAs and performance analysts, too! The presentation is based on my latest book, A Guide to Db2 Performance for Application Developers.

So I hope to see you in Philadelphia for a great week of education and camaraderie at this year's premier event for Db2 professionals, the IDUG Db2 Tech Conference. 



Monday, March 20, 2023

Harnessing the Power of zIIP Processors for Improved Db2 Performance and Lower Cost

As a Db2 DBA, you're constantly looking for ways to improve performance and efficiency while minimizing costs. One technology that can help achieve these goals is the zIIP (IBM System z Integrated Information Processor) processor. By offloading eligible Db2 workloads to zIIP processors, you can free up capacity on general-purpose processors and reduce costs, while improving performance.

So, what workloads are eligible for offloading to zIIP processors? XML processing, and portions of the Db2 LOAD, REORG, RUNSTATS and REBUILD utilities are among the most common. If you have third-party utilities (BMC, Broadcom, InfoTel) it is likely that they, too, will be zIIP-eligible, at least for some of their functionality.

Shifting workload to distributed/DDF is another good way to exploit zIIPs, because a significant portion of the SQL processing for requests that arrive through DDF is zIIP-eligible. But most of the time DBAs have little influence on moving workload to distributed processing. This choice is typically driven by application development plans instead of DBA tuning tactics.

Nevertheless, by understanding what type of workload is zIIP-eligible and encouraging such usage, you can offload workload to zIIP processors. Moving workload from general-purpose processors to zIIPs can possibly improve system performance and reduce costs.

You might also want to take a look at converting some of your COBOL workload to Java, if at all possible because Java programs are zIIP-eligible. Of course, this requires application developers to get involved, as well as (possibly) a conversion tool.

To fully harness the power of zIIP processors, it's important to identify eligible workloads and configure the system accordingly. Here are some tips to help you get started:

  • Configure Db2 for zIIP offload: Configure Db2 to take advantage of zIIP processors by setting the appropriate parameters and options. Consult the IBM Db2 documentation for specific guidance on configuring zIIP offload.

  • Monitor and analyze performance: Use Db2 performance monitoring tools to track the performance of zIIP offloaded workloads and identify areas for further optimization. This can help you continually improve performance and efficiency over time.

By effectively utilizing zIIP processors for Db2 workloads, you can achieve significant cost savings and performance improvements on IBM Z mainframe systems. Don't let this powerful technology go to waste – start exploring the benefits of zIIP processors today!

Tuesday, March 07, 2023

A Db2 for z/OS DBA Poem

I'm a hip-hopping monster of the CRT,

All the programmers here, they be envying me,

I'm a DBA, a master of data,

Work with Db2 and I diss all the haters!


I strive every day to give smooth operation,

And to keep the data in perfect formation.

My eyes are trained to spot any trouble,

To fix it before the data turns to rubble.


My hands deftly skip over the keyboard, 

Tuning and tweaking all the inefficient code.

I know all the ins and outs of Db2,

Every command, every option, every ZPARM, I do.


My skills are honed by years of practice,

Through all the challenges, I've gained my status.

My expertise and skills are sought by many,

But few can match all my skills, or any.


I'm a Db2 DBA, indeed I am...

A guardian of data, and a z/OS fan!

Wednesday, March 01, 2023

Consider Application-Level Lock Control in Db2 13 for z/OS

It has been close to a year since Db2 13 for z/OS became generally available. It was announced in April 2022 and delivered for GA on May 31, 2022.

As you think about migrating from Db2 12 to Db2 13, it is inevitable that you will consider the new functionality and capabilities that come with the new version. I've discussed the AI functionality of SQL Data Insights delivered in Db2 13, but haven't really dug into some of the other interesting new features.


Today, I want to briefly discuss application-level lock control. This new feature enables applications to take more control over Db2 locking. If you have applications that could benefit from different lock parameters than the system-wide settings used by Db2, then this new capability could be useful for at least some of your applications and tasks.

The first thing to note is that you must be at Function Level 500 before you can use application-level lock control. Using application-level lock control requires setting a special register using the SET CURRENT LOCK TIMEOUT statement. This statement can be included in application programs to control the lock wait duration in seconds. The data type is INTEGER, with a range of acceptable values from -1 to 32,767. Setting CURRENT LOCK TIMEOUT to -1 indicates an indefinite wait; setting it to 0 indicates no waiting at all.

Most DBAs reading that last sentence will shudder at the possible implications of waiting forever! Fortunately, there is another new DSNZPARM called SPREG_LOCKTIMEOUT_MAX that can limit the upper bound that an application can use for CURRENT LOCK TIMEOUT.

Nevertheless, in order to implement application-level lock control you will need to modify your application code. So, if you want to wait for locks for up to 50 seconds, you would issue

    SET CURRENT LOCK TIMEOUT = 50

before the SQL statement(s) that should wait for that duration.
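
A slightly fuller sketch follows; the table and predicate are hypothetical, and setting the special register back to NULL to revert to the subsystem default is my assumption, so verify that behavior against the Db2 documentation:

    SET CURRENT LOCK TIMEOUT = 50;      -- wait up to 50 seconds for locks

    UPDATE EMP                          -- this statement is now subject
       SET SALARY = SALARY * 1.04       -- to the 50-second timeout
     WHERE WORKDEPT = 'D11';

    SET CURRENT LOCK TIMEOUT = NULL;    -- assumption: NULL reverts to the
                                        -- subsystem timeout setting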

Of course, any applications using application-level lock control should be monitored for lock contention. This can be done using Db2 monitoring tools, such as Omegamon or Mainview. The trace record IFCID 437 can also be monitored to discover the specific applications and authorization IDs that use this special register.

Deadlock Resolution Control

Similar to controlling the lock timeout duration, Db2 13 also introduced the ability to control deadlock resolution at the application level. This is accomplished using a new built-in system global variable. Instead of relying solely on the system settings for deadlock detection, applications can set the new global variable: SYSIBMADM.DEADLOCK_RESOLUTION_PRIORITY

Valid values range from 0 to 255. The higher the value, the less likely it is that the application's locks will be chosen as the victim when the application is involved in a deadlock situation. Applications and users require the WRITE privilege on this global variable in order to set it successfully.

So, if you want to give an application a high deadlock resolution priority, you could issue

    SET SYSIBMADM.DEADLOCK_RESOLUTION_PRIORITY = 200

and then issue the statement(s) that you are concerned might deadlock.

Of course, using this global variable does not guarantee that the application won't experience a deadlock, because there are other factors involved that Db2 still must weigh.

One final note: you can use profile tables to set CURRENT LOCK TIMEOUT and SYSIBMADM.DEADLOCK_RESOLUTION_PRIORITY, as this support has been added in Db2 13.
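
As a sketch, a profile might apply a lock timeout to every thread for a given authorization ID with no application code change at all. This assumes the profile table mechanism in which DSN_PROFILE_ATTRIBUTES uses the SPECIAL_REGISTER keyword with the SET statement text in ATTRIBUTE1; the PROFILEID and AUTHID values are invented, so verify the specifics in the Db2 documentation before using this approach:

    INSERT INTO SYSIBM.DSN_PROFILE_TABLE
      (PROFILEID, AUTHID, PROFILE_ENABLED)
      VALUES (17, 'BATCHID', 'Y');

    INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
      (PROFILEID, KEYWORDS, ATTRIBUTE1)
      VALUES (17, 'SPECIAL_REGISTER', 'SET CURRENT LOCK TIMEOUT = 30');

    -- Profiles take effect when started with the -START PROFILE command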

Thursday, February 23, 2023

Asking ChatGPT About the Advantages of Db2 for z/OS over Oracle

If you read my blog post from yesterday, you know that I have been playing around with ChatGPT, the latest AI technology that is sweeping the headlines.

Yesterday I was asking it about the mainframe, but today I thought I'd ask ChatGPT about Db2 and Oracle, specifically:

In what ways is IBM Db2 for z/OS better than Oracle?

Yes, I know that is a loaded question and that the results would likely be different if I swapped the order of Db2 for z/OS and Oracle. Nevertheless, I liked the results that ChatGPT generated, so I figured I'd share them with everybody here!

So here is ChatGPT's reply:

"Both IBM Db2 for z/OS and Oracle are mature and widely used relational database management systems, and each has its strengths and weaknesses. However, here are some ways that IBM Db2 for z/OS might be considered better than Oracle:
Transaction Processing: Db2 for z/OS is known for its high performance and ability to handle large transaction processing workloads. It is optimized for OLTP (Online Transaction Processing) applications and can handle high concurrency and high throughput.
Security: IBM has a long-standing reputation for security, and Db2 for z/OS is no exception. It has a robust set of security features, including encryption, role-based access control, and auditing capabilities that help ensure that sensitive data is protected.
Cost: Db2 for z/OS can be less expensive to operate than Oracle for some use cases. It is designed to run on IBM mainframes, which are known for their scalability and efficiency, allowing organizations to reduce hardware and software costs. Additionally, because IBM mainframes are known for their reliability, uptime, and availability, they can help reduce costs associated with system downtime or data loss.
Integration: Db2 for z/OS can integrate with other IBM products and services, such as CICS (Customer Information Control System) and IMS (Information Management System). This can be particularly beneficial for organizations that already have an IBM infrastructure in place.
Built-in Analytics: Db2 for z/OS includes built-in analytics features, such as IBM Db2 Analytics Accelerator, which can help organizations to run complex analytics queries on large datasets. This can be particularly useful for organizations that need to process large amounts of data quickly and efficiently. 
It is worth noting that Oracle also has its own strengths and advantages, such as its support for a wide range of platforms and its focus on database consolidation. Ultimately, the choice between Db2 for z/OS and Oracle will depend on the specific needs and requirements of an organization."

Again, a quite reasonable response! I think the thing that jumped out at me the most here is the "Cost" section. As a grizzled, old mainframer I am always ready to fight back against unfounded claims that the mainframe option is always the more expensive one. But ChatGPT didn't make that mistake! Maybe because Oracle has a reputation of being expensive, too?

At any rate, I look forward to asking ChatGPT for help in my future endeavors. 


Wednesday, February 22, 2023

AI and ChatGPT Tell You Why You Should Use a Mainframe

ChatGPT has been getting a lot of attention for its ability to formulate reasonable responses to questions and inquiries. There have even been examples of ChatGPT being able to pass college exams and write up readable articles.

ChatGPT is a chatbot developed by OpenAI. It was launched in November of 2022. It is built on top of OpenAI's GPT-3 family of large language models and it has been fine-tuned using both supervised and reinforcement learning techniques.

So it is a somewhat practical example of AI being applied to a common problem... communication.

Being the mainframe zealot that I am, I decided to ask ChatGPT why I should use a mainframe. I wasn't sure what to expect, but I am delighted with the response that I got. 

Me: Why should I still use a mainframe?

ChatGPT: There are several reasons why you might still want to use a mainframe for certain applications:

  1. High Reliability: Mainframes are designed to handle massive workloads and to provide high levels of reliability and availability. They are built with redundant components and features that allow them to recover quickly from failures, making them ideal for mission-critical applications.
  2. Security: Mainframes are known for their robust security features. They have been used in industries such as finance, healthcare, and government for decades because they offer features such as hardware encryption, advanced access controls, and secure connectivity options.
  3. Scalability: Mainframes can scale vertically to handle large amounts of data and processing power. They can also scale horizontally by adding more processors or servers to a mainframe cluster, making them a good fit for organizations with rapidly growing or fluctuating workloads.
  4. Cost-Effective: While mainframes are expensive to purchase, they can provide a cost-effective solution for certain applications. Mainframes are designed to handle large workloads with high efficiency, and they can consolidate multiple applications onto a single platform, reducing the need for additional hardware and software.
  5. Legacy Systems: Many organizations have critical legacy applications that were designed to run on mainframes. Replacing these systems can be expensive and time-consuming, and in some cases, it may not be feasible to migrate these applications to modern platforms.
  6. Integration with Modern Technologies: Many mainframe systems now offer modern connectivity options, such as APIs and web services, which make it easier to integrate them with newer technologies and applications.

In summary, while mainframes are no longer the primary platform for many organizations, they still offer unique benefits and advantages for certain applications. Mainframes provide high levels of reliability, security, scalability, and cost-effectiveness, making them an attractive option for organizations with specific needs and requirements.

This seems like a reasonable response to me. What do you think? With technology this good, I can only imagine how much better it will get as the models are honed over time.


Monday, January 23, 2023

Escaping Single Quotes in Db2 SQL Statements

Every now and then I write about some things that may seem to be basic to old-timers, but can be very confusing to developers the first time they encounter them. Today's post is an example.

How do you handle a text string with a single quote in it? For example, something like this:

    Today is Craig's birthday (not really)

Well, the first thing to understand is that text in Db2 SQL is delimited by single quotes. So the first attempt people tend to make is something like this:

    'Today is Craig's birthday (not really)'

But they also quickly discover that this does not work. There are three single quotes here: one at the beginning, one between the "g" and the "s", and one at the end. So Db2 will choke on it and you will get an error.

The trick is knowing the escape character. An "escape" character is used to tell Db2 that the next character is actually part of the text string, not a delimiter. For Db2, the escape is to double up the single quote (note that other DBMSes may use a different technique). So, if we want to use our sample text phrase in Db2 SQL, we'd code it with the escape character, as follows:

    'Today is Craig''s birthday (not really)'

The single quote denoting the possessive for Craig's is doubled. This tells Db2 to treat the single quote as part of the text. So we could write SQL like this:

    SELECT 'Today is Craig''s birthday (not really)'
    FROM   SYSIBM.SYSDUMMY1;

This will return the text that we want:

    Today is Craig's birthday (not really)

This works even if there are multiple single quotes within the text. For example, consider if we want to use the following text in SQL:

    Today is not Craig's birthday but it is Beth's

We could write the SQL like this:

    SELECT 'Today is not Craig''s birthday but it is Beth''s'
    FROM   SYSIBM.SYSDUMMY1;

And it also works even at the end of the text string, even though it starts to look somewhat confusing. For example, what if we want to use the following text in SQL?

    I like the book 'A Confederacy of Dunces'

In that case, we still double up the single quotes, like so:

    SELECT 'I like the book ''A Confederacy of Dunces'''
    FROM   SYSIBM.SYSDUMMY1;

And that will return our desired text.

I hope this short treatise on how to use the escape for single quotes in Db2 SQL has been helpful!

Monday, December 19, 2022

Seasons Greetings 2022!

Just a short blog entry today wishing all my regular readers a very happy holiday season! 

And to let everybody know that I will not be posting anything new between now and the end of the year...

But be sure to check back again next year (2023) as I continue to write about Db2, mainframes, and database issues that impact us all!

Monday, November 21, 2022

Db2 Education is a Constant Requirement

Ensuring access to proper Db2 educational materials should be one of the first issues to be addressed after your organization decides to implement Db2. But education sometimes falls through the cracks... 

Does your organization understand what Db2 is? How it works? For what purposes it is needed at your shop? How it will be used? Without a sound understanding of Db2, its components, and features, it is unlikely that you will be able to use Db2 to its best advantage. A basic level of Db2 knowledge can be acquired through a short Db2 fundamentals class for the IT personnel charged with making Db2 a success at your organization. But long-term success with Db2 requires ongoing education.

After addressing the basics of Db2 education, you must support a curriculum of ongoing Db2 education for your organization. This support falls into four categories:

The first category of training is a standard regimen of SQL and Db2 programming training to be used by application developers. Every programmer should receive basic training on SQL, education on how to embed SQL in the programming languages they will be using, and possibly additional courses on using Db2 with specific infrastructure software like MQ, WebSphere, CICS, and IMS. Also, with the importance of distributed access to Db2 these days, a course should be made available on that topic, with specific portions that address the technology used by your shop to provide distributed Db2 data access. If this basic level of Db2 education is not required for every Db2 programmer, then Db2 application performance will surely suffer as untrained coders write inefficient and incorrect SQL. This type of training can be delivered as instructor-led training by in-house SQL coding experts, as computer-based training, or as external classes. It should also be augmented with reference material such as books, articles, and blog posts.

The second category of education support is external training for special needs. This support includes education for database administrators, technical support personnel, and performance analysts. Additionally, your organization needs to plan for ongoing education to keep appropriate personnel up-to-date on new versions and releases of DB2. Although IBM offers great courses for new Db2 releases, several third-party vendors such as ProTech, Interskill Learning and KBCE regularly offer in-depth training and release-specific Db2 courses and lectures.

The third category of education is in-house, interactive training in the form of videos, computer-based training, and instructor-led courses. These courses should be used to augment and refresh the formal training given to your Db2 professional staff. 

The fourth, and final, category of support is reference material—for example, IBM’s Db2 manuals, Db2 books (such as DB2 Developer's Guide and A Guide to Db2 Performance for Application Developers), vendor-supplied white papers, and industry publications and periodicals. The current IBM manuals for Db2 are always available on the web, as well.

Of course, you should consider augmenting the standard IBM Db2 manuals with IBM redbooks. IBM redbooks provide in-depth, detailed coverage of a specific technology topic. IBM publishes redbooks on multiple subjects, including Db2, IMS, CICS, z/OS, and many other topics. IBM redbooks can greatly assist Db2 technicians working to understand a feature or nuance of Db2. You can download IBM redbooks for free in Adobe Acrobat format over the Web at http://www.redbooks.ibm.com/

IBM also offers many other useful sources of Db2 information on the Web, including white papers, articles, and book excerpts from IBM Press publications, as does the International Db2 User Group (IDUG).

Independent Software Vendors (ISVs) are another rich source of Db2 information. The major vendors provide in-depth technical papers on features of Db2 that would be difficult for most shops to research in the same detail. Be sure to watch for and request the great Db2 Catalog posters, reference guides, white papers and eBooks available from BMC Software and Broadcom. Other great Db2 ISVs offer educational information, too, such as newsletters, white papers, and webinars; these include InfoTel, SEG, and UBS-Hainer, among others.

All of these educational components—in-house education, external education, and industry publications—are useful for learning how you can use Db2 effectively. You would be wise to have a mix of material that supports more than one of the categories outlined previously. In this way, you provide a varied learning environment that meets the needs of all students. This varied learning environment allows each student to learn in the most conducive way for him or her. Plan to provide an on-site library of educational material addressing the following subjects:
  • Introduction to relational databases
  • Introduction to DB2 and SQL
  • Advanced SQL
  • Programming DB2 using (your languages of choice)
  • Programming DB2 in batch
  • Programming DB2 using TSO, CICS, and IMS
  • Programming DB2 and the Web
  • Creating DB2 stored procedures, triggers, and UDFs
  • Programming DB2 in a Distributed Environment
  • Debugging and Problem Analysis

You also might want to have an introductory Db2 database administration course to train new DBAs. In addition to this basic education library, plan to provide advanced education for technical Db2 users, such as DBAs, technical support personnel, and technical programmers and analysts. Advanced DBA topics (such as Sysplex Data Sharing, performance management, and backup/recovery) should be left to instructor-led training courses because of the complex nature of Db2 database administration. Additional advanced topics to consider include system administration (for systems programmers) and disaster recovery. Many vendors, including the ones mentioned above, offer these classes. Searching for smaller consulting firms and local resources is also prudent; these firms sometimes can provide courses tailored to your installation-specific needs.

The advanced education program should include allocating time to attend area user group meetings, as well as the International Db2 User Group (IDUG) conferences held every year in North America, EMEA, and Asia Pacific.

The bottom line is simple, though. Plan for ongoing Db2 education for your DBAs, programmers, and analysts... or plan on failing.