Continuing on Weak Isolation Levels…

Databases, by default, do not protect us from all concurrency issues. Let's understand more of these issues and their solutions.

Alok Kumar Singh
4 min read · Feb 26, 2022

In the last blog, we looked at the Dirty Read, Dirty Write and Read Skew concurrency problems and learnt how the Read Committed and Snapshot Isolation levels solve them.

But there are more problems!

Let’s learn about other concurrency problems that can still occur:

Lost Updates

If two transactions perform a "read, modify and write" cycle concurrently, the change made by one of the transactions can get lost if the other bases its write on an old value of the data object rather than the latest one.

Both tx1 and tx2 read the value of likes as 199. tx1 incremented it and wrote 200, and then tx2 did the same. But tx2 incremented 199, not 200, so the update made by tx1 got lost!

The read that resulted in the lost update is known as a Fuzzy Read. The lost update is a very common problem. A few scenarios where it occurs:

  • Incrementing a counter concurrently, like the likes example above. It can happen within a single program as well: run a multithreaded read-modify-write program several times and you may see a different result each time!
  • Updating JSON documents.
  • Updating lists.
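As a minimal sketch of the counter race in a program (the likes counter and the forced interleaving via a barrier are illustrative, not from the original post): two threads both read the counter before either writes, so one increment vanishes.

```python
import threading

likes = 199
barrier = threading.Barrier(2)  # forces both threads to read before either writes

def read_modify_write():
    global likes
    current = likes      # read: both threads see 199
    barrier.wait()       # wait until the other thread has also read
    likes = current + 1  # write based on a now-stale value

threads = [threading.Thread(target=read_modify_write) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(likes)  # 200, not 201 — one of the two increments was lost
```

Without the barrier the loss would be intermittent rather than guaranteed, which is exactly why this bug is so easy to miss in testing.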

Note: lost updates can happen under both the Read Committed and Snapshot Isolation settings, since they depend on whether the application performs a read-modify-write cycle.

Let’s look at ways to prevent it:

Prevent Lost Update using Atomic Operations

No lost updates with database atomic operations! Mic drop the read-modify-write cycle!

Avoid read-modify-write cycles and use the atomic update operations provided by the database instead. Most databases provide concurrency-safe update operations by taking an exclusive lock on the object being updated.
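As a sketch, using SQLite as a stand-in database and a hypothetical posts(id, likes) table: a single relative UPDATE lets the database do the read and the write as one atomic step, so no increment can be lost.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, likes INTEGER)")
conn.execute("INSERT INTO posts VALUES (1, 199)")

# The database performs the read and the write atomically, holding an
# exclusive lock on the object, instead of the app reading 199 and
# writing back 200 itself:
conn.execute("UPDATE posts SET likes = likes + 1 WHERE id = 1")

(likes,) = conn.execute("SELECT likes FROM posts WHERE id = 1").fetchone()
print(likes)  # 200
```

Two such statements run concurrently always produce 201, because neither one ever reads a stale value.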

Prevent Lost Update using Explicit Locks

The application explicitly locks the data it wants to read, modify and write!

The application can prevent lost updates by explicitly locking the database objects that need to be updated.

BEGIN;
-- lock all the rows returned by the query while reading them
SELECT * FROM notification WHERE `date` >= '2011-05-03' FOR UPDATE;
-- modify the values just read, then write them back
UPDATE ...
COMMIT;

Prevent Lost Update using Automatic Detection

Databases can also detect lost updates in concurrently executing read-modify-write cycles and abort one of the transactions so that the application can retry. Snapshot Isolation in most databases can detect lost updates, but it is very important to know that some implementations, such as MySQL/InnoDB's REPEATABLE READ (its snapshot isolation), do not detect them, and in such databases concurrently executing read-modify-write cycles are not safe.

Prevent Lost Update using Compare and Set

Databases that do not provide transactions often provide a "compare and set" operation to prevent lost updates: the update is applied only if the value has not changed since you last read it. Whether compare and set is actually an atomic, concurrency-safe operation depends on the database implementation and isolation setting, so please check before relying on it.
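A sketch of the same idea expressed in SQL (the posts table is again hypothetical, with SQLite standing in for the database): the WHERE clause carries the "compare", and an affected-row count of zero tells the application that someone else changed the value first, so it must re-read and retry.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, likes INTEGER)")
conn.execute("INSERT INTO posts VALUES (1, 199)")

(old,) = conn.execute("SELECT likes FROM posts WHERE id = 1").fetchone()

# compare-and-set: apply the update only if the value is still what we read
cur = conn.execute(
    "UPDATE posts SET likes = ? WHERE id = 1 AND likes = ?", (old + 1, old)
)
print(cur.rowcount)  # 1 → applied; 0 would mean a concurrent change: re-read and retry
```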

Let’s move on to more problems that can still occur…

Write Skew

One patient got assigned to two doctors due to write skew

An example speaks louder than words. Say we have an online doctor consultation application. Patients come online and start looking for a doctor, and we broadcast them to live doctors. A doctor, on accepting a request to consult, starts a transaction, checks whether the patient is free to consult and, if yes, self-assigns the patient for consultation. In the above example, both Dr Prashant and Dr Jignesh self-assigned to consult Shubham. Really hoping Shubham gets better, now that two excellent doctors are treating him :D. This is the write skew concurrency problem.

There was no dirty write: Dr Prashant updated his own record with Shubham, and Dr Jignesh did the same with his, which is valid under Snapshot Isolation.

Both Prashant and Jignesh read the same object but updated different objects, resulting in write skew. Write skew is also far less obvious and harder to detect, and databases cannot automatically detect it. As they say, if a problem can happen, it will happen. The application developer really needs to be careful about this race condition.
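The pattern can be sketched as follows. This is a single-connection simulation of the doctor example (the assignments schema and the interleaving are illustrative, not an actual concurrent run): both doctors run their check before either writes, so both see the patient as free, and each then inserts a different row — no dirty write, yet the invariant is broken.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assignments (patient_id INTEGER, doctor_id TEXT)")

def patient_is_free(patient_id):
    (n,) = conn.execute(
        "SELECT COUNT(*) FROM assignments WHERE patient_id = ?", (patient_id,)
    ).fetchone()
    return n == 0

# Both checks happen before either write — like two snapshots taken
# at the same moment, both showing the patient unassigned:
prashant_sees_free = patient_is_free(1)
jignesh_sees_free = patient_is_free(1)

# Each transaction then writes a *different* row — no dirty write:
if prashant_sees_free:
    conn.execute("INSERT INTO assignments VALUES (1, 'Dr Prashant')")
if jignesh_sees_free:
    conn.execute("INSERT INTO assignments VALUES (1, 'Dr Jignesh')")

(n,) = conn.execute(
    "SELECT COUNT(*) FROM assignments WHERE patient_id = 1"
).fetchone()
print(n)  # 2 — the patient ended up assigned to two doctors
```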

Preventing Write Skew

Databases cannot automatically detect write skew, so being careful and aware of the issue is the first step. It can also be prevented by:

  • Explicitly locking the rows the transaction depends on, or…
  • Enforcing multi-object constraints using triggers or materialized views, or…
  • Using the Serializable isolation level, if that is an option.
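To illustrate the first option, here is a sketch using SQLite's BEGIN IMMEDIATE as the explicit lock (the assignments schema is made up, and SQLite's whole-database write lock is coarser than the row locks of the SELECT ... FOR UPDATE shown earlier): taking a write lock before the check serializes the whole check-then-act, so the second doctor's transaction cannot even start until the first commits.

```python
import sqlite3, tempfile, os

# Two connections to one on-disk database stand in for the two doctors'
# concurrent sessions:
path = os.path.join(tempfile.mkdtemp(), "clinic.db")
c1 = sqlite3.connect(path, isolation_level=None)
c2 = sqlite3.connect(path, isolation_level=None, timeout=0.1)
c1.execute("CREATE TABLE assignments (patient_id INTEGER, doctor_id TEXT)")

# Dr Prashant grabs a write lock *before* the check, so the whole
# check-then-insert runs alone:
c1.execute("BEGIN IMMEDIATE")
(n,) = c1.execute(
    "SELECT COUNT(*) FROM assignments WHERE patient_id = 1"
).fetchone()
if n == 0:
    c1.execute("INSERT INTO assignments VALUES (1, 'Dr Prashant')")

# Dr Jignesh cannot even begin a writing transaction until c1 commits:
try:
    c2.execute("BEGIN IMMEDIATE")
    blocked = False
except sqlite3.OperationalError:  # "database is locked"
    blocked = True

print(blocked)  # True — the second check-then-act has to wait for the first
c1.commit()
```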

I would like to write on Phantoms and 2-Phase and 3-Phase Locking in future blog(s).
