Monday, July 01, 2013

DB2 Locking, Part 13: Optimistic Locking

Continuing our series on DB2 locking, let's look into a relatively recent development -- optimistic locking...

IBM added improvements for optimistic locking techniques in DB2 9 for z/OS.  What is optimistic locking? Sometimes referred to as optimistic concurrency control, optimistic locking is basically just what it sounds like. We are optimists and think that usually we will be the only ones with interest in the data. In other words, when optimistic locking is implemented you are assuming that most of the time there will be no other programs that are interested in the page of data that you are planning to modify.

Of course, even in the most optimistic world there will be exceptions, so optimistic locking does not assume that there will never be any concurrent processes that need to access your page(s). Basically, with optimistic locking you can improve performance by minimizing locking. So how do we do that?

When an application uses optimistic locking, locks are obtained immediately before a read operation and then released immediately. Update locks are obtained immediately before an update operation and held until the end of the transaction. Optimistic locking uses the RID (Record IDentifier) and a row change timestamp to test whether data has been changed by another transaction since the last read operation.

DB2 knows when a row was changed and therefore he (I always tend to make DB2 masculine, sorry ladies) can ensure data integrity even as he minimizes the duration of locks. With optimistic locking, DB2 releases the page (or row) locks immediately after a read operation. And if you are using row locks, DB2 releases the row lock after each FETCH, taking a new lock on a row only for a positioned update or a positioned delete.

Careful readers will have noticed that I talked about a “row change timestamp” but you may not have heard that expression before. DB2 V9 added support for automatically generated timestamp columns and if you wish to implement optimistic locking you will need to create (or alter) your tables to have a row change timestamp column, defined as follows:

NOT NULL GENERATED ALWAYS
FOR EACH ROW ON UPDATE
AS ROW CHANGE TIMESTAMP

or

NOT NULL GENERATED BY DEFAULT
FOR EACH ROW ON UPDATE
AS ROW CHANGE TIMESTAMP

For tables having a row change timestamp column, DB2 automatically populates and maintains the timestamp values for each row. Notice how the syntax is similar to the syntax used for other automatically generated DB2 values, such as sequences. DB2 will automatically generate the timestamp value for each row when the row is inserted, and modify the timestamp for each row when any column in that row is updated.

When you add a ROW CHANGE TIMESTAMP column to an existing table, the initial values for existing rows will not be immediately populated. Instead, DB2 places the table space in an advisory-REORG pending state. When you reorganize the table space, DB2 generates the values for the ROW CHANGE TIMESTAMP column for all rows (and, of course, removes the advisory-REORG pending status).
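As a minimal sketch of such an ALTER, assuming an existing table named ACCOUNT and a new column named ACCT_CHANGED (both hypothetical names), using the same column definition syntax shown above:

ALTER TABLE ACCOUNT
  ADD ACCT_CHANGED NOT NULL GENERATED ALWAYS
      FOR EACH ROW ON UPDATE
      AS ROW CHANGE TIMESTAMP;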

OK, but how does this implement optimistic locking? Well, you can use this new column as a condition for making an UPDATE, by specifying it in your WHERE clause. Let’s walk thru a couple of examples.

First of all, when a table contains a ROW CHANGE TIMESTAMP you can use it to find out when its rows were modified. Let’s use the following table as an example:

CREATE TABLE CUSTOMER
 (CUSTNO           CHAR(8)   NOT NULL,
  CUST_INFOCHANGE  NOT NULL GENERATED ALWAYS
                   FOR EACH ROW ON UPDATE
                   AS ROW CHANGE TIMESTAMP,
  CUST_NAME        VARCHAR(50),
  CUST_ADDRESS     VARCHAR(100),
  CUST_CITY        CHAR(20),
  CUST_STATE       CHAR(2),
  CUST_ZIP         CHAR(9),
  CUST_PHONE       CHAR(10),
  PRIMARY KEY (CUSTNO))
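Because CUST_INFOCHANGE is defined as GENERATED ALWAYS, programs never supply a value for it. A simple INSERT (with hypothetical values) just leaves the column out and DB2 generates the timestamp; any subsequent UPDATE of the row refreshes it automatically:

INSERT INTO CUSTOMER
   (CUSTNO, CUST_NAME, CUST_ADDRESS, CUST_CITY,
    CUST_STATE, CUST_ZIP, CUST_PHONE)
VALUES
   ('00001234', 'ACME CORPORATION', '123 MAIN ST',
    'HOUSTON', 'TX', '770010000', '8005551234');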

Now that the table is defined with the ROW CHANGE TIMESTAMP we can use it in our programs and queries to determine change information about the data. For example, if we want to find all of the customer rows that were changed in the past week (i.e., the last 7 days) we could run the following query:

SELECT CUSTNO, CUST_NAME
FROM   CUSTOMER
WHERE  ROW CHANGE TIMESTAMP FOR CUSTOMER <=
       CURRENT TIMESTAMP
AND    ROW CHANGE TIMESTAMP FOR CUSTOMER >=
       CURRENT TIMESTAMP - 7 DAYS;

But what would happen if you issued a statement like this against a table that was altered to include a ROW CHANGE TIMESTAMP? For example, what if we created the CUSTOMER table as shown but without the CUST_INFOCHANGE column, populated the table with data, and then altered the table to include the CUST_INFOCHANGE column? In this case, DB2 will use the time the page was last modified. So the results will not be exactly correct, because the query would return all of the rows on each qualifying page (because at least one row on that page changed). This is why it is important to clear up the advisory-REORG pending state as soon as possible after adding the ROW CHANGE TIMESTAMP.

This is all well and good, and you can probably see the value of having this automagically changing timestamp in some of your tables, but where is the optimistic locking part? Well, for programs that use updateable static scrollable cursors, DB2 can use optimistic locking as long as the program is bound specifying ISOLATION(CS). If you have this situation, DB2 will deploy optimistic locking to reduce the duration of locks between consecutive FETCH operations and between FETCH operations and subsequent positioned UPDATE or DELETE operations.

Without optimistic locking, the lock taken at the first FETCH is held until the next FETCH. The lock taken at the last FETCH is held until COMMIT, ROLLBACK, or the end of transaction.

With optimistic locking, the scenario changes significantly. When the application requests a FETCH to position the cursor on a row, DB2 locks that row, executes the FETCH and releases the lock. When the application requests a positioned UPDATE or DELETE on the row, DB2 locks the row and then re-evaluates the predicate to ensure that the row still qualifies for the result table.
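As a rough sketch of the kind of cursor being described here (the cursor name, host variables, and updated column are hypothetical, and the program would be bound with ISOLATION(CS)):

DECLARE CSR1 SENSITIVE STATIC SCROLL CURSOR FOR
   SELECT CUSTNO, CUST_NAME, CUST_PHONE
   FROM   CUSTOMER
   FOR UPDATE OF CUST_PHONE;

OPEN CSR1;

FETCH NEXT FROM CSR1
   INTO :HV-CUSTNO, :HV-NAME, :HV-PHONE;

UPDATE CUSTOMER
SET    CUST_PHONE = :HV-NEWPHONE
WHERE  CURRENT OF CSR1;

With optimistic locking in effect, the row lock taken for each FETCH is released right away, and the positioned UPDATE re-acquires the lock and re-verifies the row.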

Optimistic locking itself will not happen without some effort on your part. Your table must have a ROW CHANGE TIMESTAMP column and your application must select it first. Then, during a modification, a predicate needs to be added as a condition to tell whether the row has been modified or not (a sketch of this pattern follows below). The static scrollable cursor uses the optimistic locking technique automatically. DB2 cannot use optimistic concurrency control for dynamic scrollable cursors. With dynamic scrollable cursors, the most recently fetched row or page from the base table remains locked to maintain position for a positioned UPDATE or DELETE.
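Here is a minimal sketch of that read-then-verify pattern using the CUSTOMER table (host variable names are hypothetical). The SELECT captures the row change timestamp along with the data; the UPDATE succeeds only if the timestamp is unchanged, so a result of zero rows updated (SQLCODE +100) tells the program that someone else modified the row in the meantime and the change must be retried or rejected:

SELECT CUST_PHONE,
       ROW CHANGE TIMESTAMP FOR CUSTOMER
INTO   :HV-PHONE, :HV-RCT
FROM   CUSTOMER
WHERE  CUSTNO = :HV-CUSTNO;

UPDATE CUSTOMER
SET    CUST_PHONE = :HV-NEWPHONE
WHERE  CUSTNO = :HV-CUSTNO
AND    ROW CHANGE TIMESTAMP FOR CUSTOMER = :HV-RCT;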


So, if you have not taken a look at which applications might benefit from optimistic locking techniques since your shop migrated to DB2 9 for z/OS, now is the time to do so – and then add the ROW CHANGE TIMESTAMP to the appropriate tables.

4 comments:

  1. Abel Martins, 9:07 AM

    Hi, Craig,
    As always, you generously share your knowledge of DB2 with us mere mortals. Thank you very much.
    I have implemented optimistic locking since DB2 V4, because even then it was possible, as it is today, to create all of the updatable tables with a TIMESTAMP column and do the same kind of control that the ROW CHANGE TIMESTAMP provides, but having DB2 control the timestamp is better without any doubt.
    For me, extreme optimistic locking should also include UNCOMMITTED READ reads because, since we assume that we are the only ones accessing the data, there should be no integrity problem. Yes, there may be one, I know, but given how little can change in a span of microseconds, it can be accepted, because when we update the row we must still check the timestamp from the time of the read.
    Please let me know what you think about it.
    Thank you so much once more.
    Abel Martins

  2. This comment has been removed by the author.

  3. Abel, I can see where you might want to use ISOLATION UR to avoid even more locking overhead in an optimistic locking situation.

    Also, yes, it was possible to create the TIMESTAMP columns yourself prior to V9, but the problem is always making sure that everybody who works with the data modifies the timestamp... if anyone fails to do so, problems happen... much better now that DB2 automagically manages the timestamps.

  4. Abel Martins, 2:57 PM

    Thanks Craig, I fully agree with you. Best regards.
