Chapter 9. Locking and Concurrency

Red Hat Data Grid makes use of multi-versioned concurrency control (MVCC) - a concurrency scheme popular with relational databases and other data stores. MVCC offers many advantages over coarse-grained Java synchronization and even JDK Locks for access to shared data, including:

  • allowing concurrent readers and writers
  • preventing readers and writers from blocking one another
  • detecting and handling write skews
  • striping internal locks

9.1. Locking implementation details

Red Hat Data Grid’s MVCC implementation makes use of minimal locks and synchronizations, leaning heavily towards lock-free techniques such as compare-and-swap and lock-free data structures wherever possible, which helps optimize for multi-CPU and multi-core environments.

In particular, Red Hat Data Grid’s MVCC implementation is heavily optimized for readers. Reader threads do not acquire explicit locks for entries, and instead directly read the entry in question.

Writers, on the other hand, need to acquire a write lock. This ensures only one concurrent writer per entry, causing concurrent writers to queue up to change an entry. To allow concurrent reads, writers make a copy of the entry they intend to modify, by wrapping the entry in an MVCCEntry. This copy isolates concurrent readers from seeing partially modified state. Once a write has completed, MVCCEntry.commit() will flush changes to the data container and subsequent readers will see the changes written.
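
The following sketch illustrates this behaviour from the application's point of view: while one thread updates an entry, a concurrent reader is never blocked and sees either the old value or the fully committed new value, never partially modified state. It is a minimal local-mode example; the cache name and class name are illustrative.

import org.infinispan.Cache;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class MvccReadExample {
    public static void main(String[] args) throws Exception {
        // A local cache is enough to observe the reader/writer behaviour.
        DefaultCacheManager cacheManager = new DefaultCacheManager();
        cacheManager.defineConfiguration("mvcc-demo", new ConfigurationBuilder().build());
        Cache<String, String> cache = cacheManager.getCache("mvcc-demo");

        cache.put("key", "v1");

        // Writer thread: acquires the per-entry write lock, modifies a copy
        // of the entry and commits it to the data container.
        Thread writer = new Thread(() -> cache.put("key", "v2"));

        // Reader thread: does not block on the writer; it reads either "v1"
        // (before the commit) or "v2" (after), never partial state.
        Thread reader = new Thread(() -> System.out.println("read: " + cache.get("key")));

        writer.start();
        reader.start();
        writer.join();
        reader.join();

        cacheManager.stop();
    }
}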

9.1.1. How does it work in clustered caches?

In clustered caches, each key has a node responsible for locking it. This node is called the primary owner.

9.1.1.1. Non-transactional caches

  1. The write operation is sent to the primary owner of the key.
  2. The primary owner tries to lock the key.

    1. If it succeeds, it forwards the operation to the other owners;
    2. Otherwise, an exception is thrown.
Note

If the operation is conditional and the condition fails on the primary owner, the operation is not forwarded to the other owners (see the example below).

Note

If the operation is executed locally on the primary owner, the first step is skipped.
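
The following sketch shows a conditional operation from the application's perspective. It assumes a clustered, non-transactional cache obtained elsewhere; the method and key names are illustrative.

import org.infinispan.Cache;

public class ConditionalWriteExample {
    // 'cache' is assumed to be a clustered (distributed or replicated),
    // non-transactional cache.
    static void conditionalWrite(Cache<String, String> cache) {
        // The operation is routed to the primary owner of "key", which tries
        // to lock it. The write is forwarded to the backup owners only if the
        // conditional check succeeds on the primary owner.
        String previous = cache.putIfAbsent("key", "value");
        if (previous != null) {
            // The condition failed on the primary owner, so nothing was
            // forwarded to the other owners.
            System.out.println("key already mapped to " + previous);
        }
    }
}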

9.1.2. Transactional caches

Transactional caches support optimistic and pessimistic locking modes. See the Transaction Locking section for more information.
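
For example, pessimistic locking can be selected programmatically. This is a sketch assuming the lockingMode() builder method and the LockingMode and TransactionMode enums from the org.infinispan.transaction package:

new ConfigurationBuilder().transaction()
      .transactionMode(TransactionMode.TRANSACTIONAL)
      .lockingMode(LockingMode.PESSIMISTIC);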

9.1.3. Isolation levels

The isolation level determines what a transaction can read when running concurrently with other transactions. See the Isolation Levels section for more details.
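
For example, the isolation level can be set on the locking configuration. This is a sketch assuming the isolationLevel() builder method and the IsolationLevel enum from org.infinispan.util.concurrent:

new ConfigurationBuilder().locking().isolationLevel(IsolationLevel.READ_COMMITTED);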

9.1.4. The LockManager

The LockManager is the component responsible for locking an entry for writing. It uses a LockContainer to locate, hold, and create locks. LockContainers come in two broad flavours: one that supports lock striping and one that holds a separate lock per entry.

9.1.5. Lock striping

Lock striping entails the use of a fixed-size, shared collection of locks for the entire cache, with locks being allocated to entries based on the entry’s key’s hash code. Similar to the way the JDK’s ConcurrentHashMap allocates locks, this allows for a highly scalable, fixed-overhead locking mechanism in exchange for potentially unrelated entries being blocked by the same lock.

The alternative is to disable lock striping - which would mean a new lock is created per entry. This approach may give you greater concurrent throughput, but it will be at the cost of additional memory usage, garbage collection churn, etc.

Default lock striping settings

Lock striping is disabled by default because of potential deadlocks that can occur if locks for different keys end up in the same lock stripe.

The size of the shared lock collection used by lock striping can be tuned using the concurrency-level attribute of the <locking /> configuration element.

Configuration example:

<locking striping="false|true"/>

Or

new ConfigurationBuilder().locking().useLockStriping(false|true);

9.1.6. Concurrency levels

In addition to determining the size of the striped lock container, the concurrency level is also used to tune any related JDK ConcurrentHashMap based collections, such as those used internally by DataContainers. Refer to the JDK ConcurrentHashMap Javadocs for a detailed discussion of concurrency levels, as this parameter is used in exactly the same way in Red Hat Data Grid.

Configuration example:

<locking concurrency-level="32"/>

Or

new ConfigurationBuilder().locking().concurrencyLevel(32);

9.1.7. Lock timeout

The lock timeout specifies the amount of time, in milliseconds, to wait for a contended lock.

Configuration example:

<locking acquire-timeout="10000"/>

Or

new ConfigurationBuilder().locking().lockAcquisitionTimeout(10000);
//alternatively
new ConfigurationBuilder().locking().lockAcquisitionTimeout(10, TimeUnit.SECONDS);

9.1.8. Consistency

The fact that a single owner is locked (as opposed to all owners being locked) does not break the following consistency guarantee: if key K is hashed to nodes {A, B} and transaction TX1 acquires a lock for K, say on A, and another transaction TX2 started on B (or any other node) tries to lock K, then TX2 fails with a timeout because the lock is already held by TX1. The reason for this is that the lock for a key K is always, deterministically, acquired on the same node of the cluster, regardless of where the transaction originates.
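
The following sketch illustrates this guarantee from the application's perspective. It assumes a pessimistic, transactional clustered cache and the javax.transaction JTA API; the class and key names are illustrative.

import javax.transaction.TransactionManager;

import org.infinispan.Cache;

public class KeyLockConsistencyExample {
    // 'cache' is assumed to be a pessimistic, transactional clustered cache.
    static void update(Cache<String, String> cache, String value) throws Exception {
        TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
        tm.begin();
        try {
            // With pessimistic locking, the lock for "K" is acquired on its
            // primary owner during this put. A concurrent transaction started
            // on any other node that writes "K" waits on the same lock and
            // fails with a timeout if it is not released in time.
            cache.put("K", value);
            tm.commit();
        } catch (Exception e) {
            tm.rollback();
            throw e;
        }
    }
}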

9.2. Data Versioning

Red Hat Data Grid supports two forms of data versioning: simple and external. Simple versioning is used in transactional caches for write skew checks. See Write Skews.

External versioning is used to encapsulate an external source of data versioning within Red Hat Data Grid, such as when using Red Hat Data Grid with Hibernate, which in turn gets its data version information directly from a database.

In this scheme, a mechanism to pass in the version becomes necessary, and overloaded versions of put() and putForExternalRead() are provided in AdvancedCache to take in an external data version. This version is then stored in the InvocationContext and applied to the entry at commit time.

Note

Write skew checks cannot and will not be performed in the case of external data versioning.
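
As a hedged sketch of supplying an external version, the following example assumes the Metadata-based put() overload on AdvancedCache together with the EmbeddedMetadata and NumericVersion classes; adapt it to the versioning API of your Red Hat Data Grid version.

import org.infinispan.AdvancedCache;
import org.infinispan.container.versioning.NumericVersion;
import org.infinispan.metadata.EmbeddedMetadata;
import org.infinispan.metadata.Metadata;

public class ExternalVersionExample {
    // Stores a value together with a version obtained from an external
    // source, for example a database row version used by Hibernate.
    static void putWithExternalVersion(AdvancedCache<String, String> cache,
                                       String key, String value, long externalVersion) {
        Metadata metadata = new EmbeddedMetadata.Builder()
              .version(new NumericVersion(externalVersion))
              .build();
        // The version travels with the entry and is applied at commit time.
        // Write skew checks are not performed for externally versioned data.
        cache.put(key, value, metadata);
    }
}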