Tuesday, February 08, 2022

Db2 User-Defined Functions (UDFs)

A user-defined function, or UDF for short, enables you to customize Db2 to your shop's requirements. It is a very powerful feature that can be used to add procedural functionality, coded by the user, to Db2. Once coded and implemented, a UDF extends the functionality of Db2 by enabling users to specify it in SQL statements just like built-in SQL functions.

All that additional functionality and capability aside, UDFs can be complex to deliver and require detailed knowledge, additional application development skills, and extra administrative handling.

User-defined functions are ideal for organizations wishing to use Db2 and SQL to implement specialized corporate routines that perform business logic and data transformation.

Types of UDFs

There are two ways of creating a user-defined function: you can code your own program from scratch using a traditional programming language; or you can use an existing function as a template, of sorts.

There are two types of user-defined functions that can be written from scratch: scalar functions and table functions. A scalar function is applied to a column or expression and operates on a single value. Db2 offers many built-in scalar functions, each of which can be applied to a column value or expression. Examples of built-in scalar functions include LTRIM, SQRT, and SUBSTR. You can read more about Db2 built-in functions in my earlier blog post, A Quick Intro to Db2 SQL Functions.
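
A scalar function, whether built-in or user-defined, is simply referenced wherever an expression is allowed. Here is a minimal illustration using built-in functions; the EMP table and its LASTNAME and FIRSTNME columns are placeholders for this example:

     SELECT SUBSTR(LASTNAME, 1, 10) AS SHORT_NAME,   -- first 10 characters of the last name
            LTRIM(FIRSTNME)         AS FIRST_NAME    -- strip leading blanks
     FROM   EMP;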

Table functions are a different type of function that, when invoked, returns an entire table. A table function is specified in the FROM clause of a SELECT statement, taking the place of a table, view, synonym, or alias.

Scalar and table user-defined functions are referred to as external functions because they are written and developed outside of (that is, external to) Db2. External UDFs must be written in a host programming language, such as Assembler, C, COBOL, or PL/I.

The final type of user-defined function is a sourced function. A sourced function is based on a function that already exists; that is, it can be based on a built-in function (scalar or column) or on another user-defined scalar function that has already been created.

User-defined functions are similar in functionality to application subroutines. But user-defined functions are different because they are used inside of SQL statements. This gives them great power. A user-defined function is not a substitute for an application subroutine, subprogram, or procedure. Instead, user-defined functions are used to extend the functionality of the SQL language.
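
To give a flavor of how an external scalar UDF is registered to Db2, here is a sketch of the DDL; the function name, external program name, and data types are hypothetical, and the option list shown is only representative:

     CREATE FUNCTION SALES_TAX(DECIMAL(9,2))
       RETURNS DECIMAL(9,2)
       EXTERNAL NAME 'SALESTAX'    -- load module containing the (hypothetical) COBOL program
       LANGUAGE COBOL
       PARAMETER STYLE DB2SQL
       NO SQL
       DETERMINISTIC
       NO EXTERNAL ACTION
       FENCED;

Once created, SALES_TAX could be referenced in a SELECT list or WHERE clause just like SUBSTR or SQRT.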

The Schema

User-defined functions, user-defined distinct types, stored procedures, and triggers all are associated with a schema. By default, the schema name is the authid of the process that issues the CREATE FUNCTION, CREATE DISTINCT TYPE, CREATE PROCEDURE, or CREATE TRIGGER statement.

You can specify a schema name when you create a user-defined function, user-defined type, or trigger. If the schema name is not the same as the SQL authorization ID, then the issuer of the statement must have either SYSADM or SYSCTRL authority, or the authid of the issuing process must have the CREATEIN privilege on the schema.

For example, the following statement creates a user-defined function named FUNCX in the schema named MYSCHEMA:

     CREATE FUNCTION MYSCHEMA.FUNCX ...

If the MYSCHEMA component were not included in the CREATE statement, the schema would default to the authid of the person (or process) that executed the CREATE statement; in short, the schema is set to the owner of the function. If the CREATE statement is embedded in a program, the owner is the authid of the owner of the plan or package; if the statement is dynamically prepared, the owner is the authid in the CURRENT SQLID special register.
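
If a different authid needs to create objects in that schema, the CREATEIN privilege mentioned earlier can be granted to it. For example (USER7 is just a placeholder authid):

     GRANT CREATEIN ON SCHEMA MYSCHEMA TO USER7;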

Table Functions

Table functions are different in nature from scalar functions. A table function is designed to return multiple columns and rows; in other words, the output of a table function is itself a table. An example using a table function follows:

SELECT WINNER, WINNER_SCORE,
       LOSER, LOSER_SCORE
FROM   TABLE(GAME_RESULTS(5)) AS G
WHERE  LOSER_SCORE = 0;

In this case, the table function GAME_RESULTS( ) is used to return the win/loss statistics for football games. The table function can be used in SQL statements just like a regular Db2 table. The function program is designed to fill in the rows and columns of the "table." The input parameter is an INTEGER value corresponding to the week the game was played; if 0 is entered, all weeks are considered. The query above would return all results where the losing team was shut out (scored 0 points) during the fifth week of the season.

The following or similar CREATE FUNCTION statement could be used to define the GAME_RESULTS( ) function:

      CREATE FUNCTION GAME_RESULTS(INTEGER)
        RETURNS TABLE (WEEK         INTEGER,
                       WINNER       CHAR(20),
                       WINNER_SCORE INTEGER,
                       LOSER        CHAR(20),
                       LOSER_SCORE  INTEGER)
        EXTERNAL NAME GAMES
        LANGUAGE C
        PARAMETER STYLE DB2SQL
        NO SQL
        DETERMINISTIC
        NO EXTERNAL ACTION
        FENCED
        SCRATCHPAD
        FINAL CALL
        DISALLOW PARALLEL
        CARDINALITY 300;

The clause identifying this as a table function is RETURNS TABLE, which defines the columns of the table the function returns. The function program must create these rows itself, either programmatically or by reading them from another data source, such as a flat file.

The value supplied for the CARDINALITY parameter is only an estimate. It is provided to help Db2 optimize statements using the table function. It is possible to return more or fewer rows than is specified in CARDINALITY.

Sourced Functions 

Sourced functions are created from functions that already exist. A sourced function can be based on any existing function, whether built-in (scalar or column) or user-defined (scalar). The biggest reason to create sourced functions is to enable functions for user-defined distinct data types. This is required because Db2 implements strong typing. Here is an example of creating a sourced UDF:

 CREATE FUNCTION FINDWORD
    (DOCUMENT, VARCHAR(50))
  RETURNS INTEGER
  SPECIFIC FINDWORDDOC
  SOURCE SPECIFIC FINDWORDCLOB;

In this example, a new function, FINDWORD, is created from an existing function whose specific name is FINDWORDCLOB. The function finds the location of the specified word, expressed as a VARCHAR(50) value, in the supplied DOCUMENT. The function returns an INTEGER indicating the location of the word in the DOCUMENT. DOCUMENT is a user-defined distinct type based on a CLOB data type.
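
For illustration, once the sourced function exists it is invoked just like any other scalar function. The table and column names below are hypothetical, with DOC_TEXT assumed to be a column of the DOCUMENT distinct type:

     SELECT DOC_ID,
            FINDWORD(DOC_TEXT, 'compliance') AS WORD_POS
     FROM   DOC_LIBRARY
     WHERE  FINDWORD(DOC_TEXT, 'compliance') > 0;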

Summary

UDFs provide powerful functionality that can be used to customize Db2 SQL and to standardize your operational functions and algorithms within your DBMS environment.

Tuesday, February 01, 2022

Auditing Database Access and Change: A Necessity More Than a Nicety

The increasing burden of complying with government and industry regulations imposes significant, time-consuming requirements on IT projects and applications. And nowhere is the pressure to comply with regulations greater than on data stored in corporate databases.

Organizations must be hyper-vigilant as they implement controls to protect and monitor their data. One of the more useful techniques for protecting your company’s database data is data access auditing, also known simply as database auditing. Database auditing is the process of monitoring access to, and modification of, selected database objects and resources within operational databases and retaining a detailed record of that access, one that can be retrieved and analyzed as needed.

A data access auditing capability enables companies to produce an audit trail of information with regard to their database data. This audit trail should contain information such as what database objects were impacted, who performed the operations, and when the activity occurred. A comprehensive audit trail of database operations, coupled with an analysis engine to review and analyze it, allows data and security professionals, as well as IT auditors, to perform an in-depth analysis of access and modification patterns against data in your database systems. Only when armed with such details is it possible to comply with regulations, pass security audits, and drill down into the details to review potential vulnerabilities for effective issue resolution.

A Look at the Regulations and Requirements

A fine-grained audit trail is necessary to comply with many regulations that apply to organizations of all types.

Many of the PCI Data Security Standard requirements emphasize the importance of real-time monitoring and tracking of access to cardholder data, as well as continuous assessment of database security health status.

HIPAA, the Health Insurance Portability and Accountability Act, directs health care providers to protect individuals’ health care information, going so far as to state that the provider must be able to deliver a list of everyone who even so much as looked at a patient’s information. Could you produce a list of everyone who looked at a specific row or set of rows in any database you manage?

And then there is the Sarbanes-Oxley Act (SOX), which has the goal of reducing fraud and conflicts of interest, as well as improving disclosure and financial reporting. Section 404 of the SOX Act specifies that the CFO must guarantee the accuracy of the processes used to add up the numbers; these are processes that access and manipulate data in a database system. As such, it is important to be able to track who changed database schemata and database data for SOX compliance.

And these are only a few of the pertinent national, international, regional, and industry regulations that must be understood and complied with.

Database Access Auditing Techniques

So now that we understand why database access auditing is important, let’s take a look at how it can be accomplished. There are several popular techniques that can be deployed to audit your database structures.

The first technique is trace-based auditing, which is typically built directly into the native capabilities of the DBMS; one example is the audit trace feature of IBM Db2 for z/OS. When an audit trace is started, the DBMS begins to cut trace records when activity occurs against audited objects (selected by a DDL option). However, Db2 only captures the first read or write per unit of work, which will clearly miss activity because most UOWs encompass more than one read or write. Alternatively, Db2 audit policies can be created for named tables to capture all activity, which improves the data captured but can create an excess of audit records that need to be stored in SMF data sets.
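
As a rough sketch of what the trace-based approach looks like on Db2 for z/OS (the table name and policy name below are placeholders, and your shop's trace classes and destinations may differ):

     -- Flag a table for auditing via DDL (AUDIT ALL audits reads and writes)
     ALTER TABLE PAYROLL.EMP_SALARY AUDIT ALL;

     -- Start the audit trace; classes 4 and 5 capture the first change
     -- and first read of audited objects in each unit of work
     -START TRACE(AUDIT) CLASS(4,5) DEST(SMF)

     -- Or start a trace governed by an audit policy row previously
     -- inserted into SYSIBM.SYSAUDITPOLICIES
     -START TRACE(AUDIT) DEST(SMF) AUDTPLCY(MYPOLICY)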

So, there are problems with this technique, including a high potential for performance degradation when audit tracing is enabled, a high probability that the database schema will need to be modified, and insufficient granularity of audit control, especially for reads.

Another technique is to scan and parse transaction logs. Every DBMS uses transaction logs to capture every database modification for recovery purposes. If you can read the log and interpret the data (which can be challenging as the data is not simple) it is possible to identify what data was changed and by which users. The biggest drawback to this technique is that database reads are not captured on transaction logs.

Additional issues with relying on log analysis for auditing data access include the ability to disable logging, such that modification information never reaches the log and therefore is never captured; the performance cost of scanning volumes and volumes of log files looking for only the specific information to audit; and the difficulty of retaining logs over long periods for auditing when they were designed for short-term retention for database recovery.

And that brings us to the third, and preferred, method of database auditing for organizations that are serious about regulatory compliance: professional software that proactively monitors and intercepts all SQL requests as they are executed by the DBMS. It is important that all SQL access is audited by monitoring for SQL at the database level, not just by sniffing network calls. This is important because not every SQL request goes over the network, especially for the mainframe platform where much of the activity is centralized and many important business transactions never venture over an IP network (e.g., a CICS or IMS transaction accessing Db2).

Proactive intercept-based database audit monitoring does not require transaction logs, does not require database schema modification, should be highly granular in terms of specifying what to audit, and should incur only minimal overhead.

One such product that implements intercept-based auditing for Db2 database access is DBARS, which stands for “Db2 Access Recording Services,” available from ESAI Group.

DBARS

Using DBARS for your Db2 database auditing needs makes sense because it offers a high-speed method for intercepting all Db2 database activities, not just modifications but also all reads. DBARS does not rely on Db2 tracing; instead, it uses a proprietary interface to intercept all Db2 SQL statements, regardless of origin. Therefore, you can use DBARS to create audit reports of all Db2 activity. Furthermore, DBARS provides the ability to block suspicious SQL activity, giving you the ability to prevent fraudulent access attempts based on specific parameters and patterns.

As you look into your mainframe Db2 database auditing needs and compare functionality against the advice proffered in this article, you will see that DBARS offers all of the functions needed for auditing access to sensitive data in Db2 tables.

Important Features for a Database Auditing Solution

As you investigate the database access auditing requirements for your organization, you should make sure that the solutions you examine support your DBMS using intercept-based auditing, instead of the other methods.

You should also compile a list of the types of questions that you want your solution to be able to answer. A good database access auditing solution should be able to provide answers to at least the following questions:

  • Who accessed the data?
  • At what date and time was the data accessed?
  • What program or client software was used to access the data?
  • For batch mainframe users, what was the z/OS job name?
  • From what location was the request issued?
  • For distributed Db2 access, what were the names of the external server, application, and workstation?
  • What SQL was issued to access the data?
  • Was the request successful, and if so, how many rows of data were accessed or modified?
  • If the request was a modification, what data was changed? (A before and after image of the change should be accessible.)

Of course, there are numerous details that must be investigated for each of these questions. You will want to be able to review recent activities, but you will also want to be able to review actions that happened in the past, so a robust database access auditing solution should provide an independent mechanism for the long-term storage and access of audit details. It should be easy to query the audit trail, perhaps even with canned queries provided for the most common requests. Nonetheless, the audit information should be accessible using industry-standard query tools to make it easier for auditors to customize queries as necessary.
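
For instance, if the audit trail were surfaced as a relational table (the AUDIT_TRAIL table and its columns here are purely hypothetical), an auditor could answer "who touched this table last weekend?" with a simple query:

     SELECT AUTHID, PROGRAM_NAME, ACCESS_TS, SQL_TEXT
     FROM   AUDIT_TRAIL
     WHERE  OBJECT_NAME = 'EMP_SALARY'
       AND  DATE(ACCESS_TS) BETWEEN '2022-01-29' AND '2022-01-30'
     ORDER BY ACCESS_TS;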

An alerting capability is also desirable, such that when certain SQL activity is intercepted, an alert is triggered to take further action, such as recording an exception, sending information to a log, or pinging a DBA or security admin.

Advanced auditing solutions also provide the ability to proactively block suspect access to the database. For example, you may want to stop any attempted access outside of normal, scheduled programs over the weekend. At any rate, it is desirable for an auditing solution to be able to block activities based on parameters such as user name, program name, IP address, execution time, type of access, and the like. Such a capability is important because preventing fraudulent access is preferable to allowing it and reporting that it happened!

It is also important for a comprehensive database auditing solution to provide a mechanism to audit privileged users, such as DBAs and SYSADMs. Many privileged users have blanket access to all corporate data. Although they can access and modify it at their discretion, they should not be accessing and modifying production data without due cause. A database auditing solution enables organizations to implement a “trust, but verify” policy with their privileged users. This allows the administrators to retain the authority they need to be able to do their jobs, while at the same time giving the organization the peace of mind that everything the privileged users are doing is tracked for security and compliance purposes. Without a database auditing solution in place, privileged users are a potential compliance problem lurking within every database implementation.

The Benefits of a Professional Database Auditing Solution

The bottom line is that database auditing should be a crucial component of your organization’s data protection strategy. Auditing database activity is a core requirement of compliance with many government and industry regulations, but auditing is also an essential component of securing and protecting the important production data in your database systems.

Be sure to study the auditing and compliance requirements of your organization and to augment your DBMS with the appropriate tools to bolster the auditability of your databases.