Chapter 5. Managing Memory

Configure the data container where Data Grid stores entries. Specify the encoding for cache entries, store data in off-heap memory, and use eviction or expiration policies to keep only active entries in memory.

5.1. Configuring Eviction and Expiration

Eviction and expiration are two strategies for cleaning the data container by removing old, unused entries. Although eviction and expiration are similar, they have some important differences.

  • Eviction lets Data Grid control the size of the data container by removing entries when the container becomes larger than a configured threshold.
  • Expiration limits the amount of time entries can exist. Data Grid uses a scheduler to periodically remove expired entries. Entries that are expired but not yet removed are immediately removed on access; in this case, get() calls for expired entries return null.
  • Eviction is local to Data Grid nodes.
  • Expiration takes place across Data Grid clusters.
  • You can use eviction and expiration together or independently of each other, as shown in the sketch after this list.
  • You can configure eviction and expiration declaratively in infinispan.xml to apply cache-wide defaults for entries.
  • You can explicitly define expiration settings for specific entries, but you cannot define eviction on a per-entry basis.
  • You can manually evict entries and manually trigger expiration.
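
The following programmatic configuration is a minimal sketch that shows eviction and expiration combined in a single cache; the values shown are illustrative only.

ConfigurationBuilder cfg = new ConfigurationBuilder();

cfg
  // Evict entries on this node once the cache holds more than 500 entries.
  .memory()
    .maxCount(500)
    .whenFull(EvictionStrategy.REMOVE)
  // Expire entries across the cluster 10 seconds (10000 ms) after creation.
  .expiration()
    .lifespan(10000)
  .build();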

5.1.1. Eviction

Eviction lets you control the size of the data container by removing entries from memory. Eviction drops one entry from the data container at a time and is local to the node on which it occurs.

Important

Eviction removes entries from memory but not from persistent cache stores. To ensure that entries remain available after Data Grid evicts them, and to prevent inconsistencies with your data, you should configure a persistent cache store.

Data Grid eviction relies on two configurations:

  • Maximum size of the data container.
  • Strategy for removing entries.

Data container size

Data Grid lets you store entries either in the Java heap or in native memory (off-heap) and set a maximum size for the data container.

You configure the maximum size of the data container in one of two ways:

  • Total number of entries (max-count).
  • Maximum amount of memory (max-size).

    To perform eviction based on the amount of memory, you define a maximum size in bytes. For this reason, you must encode entries with a binary storage format such as application/x-protostream.

Evicting cache entries

When you configure memory, Data Grid approximates the current memory usage of the data container. When entries are added or modified, Data Grid compares the current memory usage of the data container to the maximum size. If the size exceeds the maximum, Data Grid performs eviction.

Eviction happens immediately in the thread that adds an entry that exceeds the maximum size.

Consider the following configuration as an example:

<memory max-count="50"/>

In this case, the cache can have a total of 50 entries. After the cache reaches 50 entries, write operations trigger Data Grid to perform eviction.
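
As a rough illustration, the following snippet assumes a Cache instance, cache, that uses this configuration; the 51st put() call exceeds the threshold and triggers eviction in the writing thread.

// Insert one entry more than the configured maximum of 50.
for (int i = 0; i < 51; i++) {
  cache.put("key" + i, "value" + i);
}

// The write that exceeds the threshold triggers eviction in this thread,
// so the data container stays at the configured maximum of 50 entries.
System.out.println(cache.size());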

Eviction strategies

Strategies control how Data Grid performs eviction. You can either perform eviction manually or configure Data Grid to do one of the following:

  • Remove old entries to make space for new ones.
  • Throw ContainerFullException and prevent new entries from being created.

    The exception eviction strategy works only with transactional caches that use two-phase commit; it does not work with one-phase commit or synchronization optimizations. See the sketch that follows for an example.
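
For example, the following sketch configures the exception strategy on a transactional cache; the transaction settings are illustrative and must match your environment.

ConfigurationBuilder cfg = new ConfigurationBuilder();

cfg
  .encoding()
    .mediaType("application/x-protostream")
  // The exception strategy requires a transactional cache that uses
  // two-phase commit rather than synchronization optimizations.
  .transaction()
    .transactionMode(TransactionMode.TRANSACTIONAL)
    .useSynchronization(false)
  .memory()
    .maxCount(500)
    .whenFull(EvictionStrategy.EXCEPTION)
  .build();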

Note

Data Grid includes the Caffeine caching library that implements a variation of the Least Frequently Used (LFU) cache replacement algorithm known as TinyLFU. For off-heap storage, Data Grid uses a custom implementation of the Least Recently Used (LRU) algorithm.

5.1.1.1. Configuring the Total Number of Entries for Data Grid Caches

Limit the size of the data container for cache entries to a total number of entries.

Procedure

  1. Configure your Data Grid cache encoding with an appropriate storage format.
  2. Specify the total number of entries that caches can contain before Data Grid performs eviction.

    • Declarative: Set the max-count attribute.
    • Programmatic: Call the maxCount() method.
  3. Configure an eviction strategy to control how Data Grid removes entries.

    • Declarative: Set the when-full attribute.
    • Programmatic: Call the whenFull() method.

Declarative configuration

<local-cache name="maximum_count">
  <encoding media-type="application/x-protostream"/>
  <memory max-count="500" when-full="REMOVE"/>
</local-cache>

Programmatic configuration

ConfigurationBuilder cfg = new ConfigurationBuilder();

cfg
  .encoding()
    .mediaType("application/x-protostream")
  .memory()
    .maxCount(500)
    .whenFull(EvictionStrategy.REMOVE)
  .build();

5.1.1.2. Configuring the Maximum Amount of Memory for Data Grid Caches

Limit the size of the data container for cache entries to a maximum amount of memory.

Procedure

  1. Configure your Data Grid cache to use a storage format that supports binary encoding.

    You must use a binary storage format to perform eviction based on the maximum amount of memory.

  2. Configure the maximum amount of memory, in bytes, that caches can use before Data Grid performs eviction.

    • Declarative: Set the max-size attribute.
    • Programmatic: Use the maxSize() method.
  3. Optionally specify a byte unit of measurement. The default is B (bytes). Refer to the configuration schema for supported units.
  4. Configure an eviction strategy to control how Data Grid removes entries.

    • Declarative: Set the when-full attribute.
    • Programmatic: Use the whenFull() method.

Declarative configuration

<local-cache name="maximum_size">
  <encoding media-type="application/x-protostream"/>
  <memory max-size="1.5GB" when-full="REMOVE"/>
</local-cache>

Programmatic configuration

ConfigurationBuilder cfg = new ConfigurationBuilder();

cfg
  .encoding()
    .mediaType("application/x-protostream")
  .memory()
    .maxSize("1.5GB")
    .whenFull(EvictionStrategy.REMOVE)
  .build();

5.1.1.3. Eviction Examples

Configure eviction as part of your cache definition.

Default memory configuration

Eviction is not enabled, which is the default configuration. Data Grid stores cache entries as objects in the JVM heap.

<distributed-cache name="default_memory">
  <memory />
</distributed-cache>

Eviction based on the total number of entries

Data Grid stores cache entries as objects in the JVM heap. Eviction happens when there are 100 entries in the data container and Data Grid gets a request to create a new entry:

<distributed-cache name="total_number">
  <memory max-count="100"/>
</distributed-cache>

Eviction based on maximum size in bytes

Data Grid stores cache entries as byte[] arrays if you encode entries with a binary storage format, for example, application/x-protostream.

In the following example, Data Grid performs eviction when the data container reaches 500 MB (megabytes) and Data Grid gets a request to create a new entry:

<distributed-cache name="binary_storage">
  <!-- Encodes the cache with a binary storage format. -->
  <encoding media-type="application/x-protostream"/>
  <!-- Bounds the data container to a maximum size in MB (megabytes). -->
  <memory max-size="500 MB"/>
</distributed-cache>

Off-heap storage

Data Grid stores cache entries as bytes in native memory. Eviction happens when there are 100 entries in the data container and Data Grid gets a request to create a new entry:

<distributed-cache name="off_heap">
  <memory storage="OFF_HEAP" max-count="100"/>
</distributed-cache>

Off-heap storage with the exception strategy

Data Grid stores cache entries as bytes in native memory. When there are 100 entries in the data container, and Data Grid gets a request to create a new entry, it throws an exception and does not allow the new entry:

<distributed-cache name="eviction_exception">
  <memory storage="OFF_HEAP" max-count="100" when-full="EXCEPTION"/>
</distributed-cache>

Manual eviction

Data Grid stores cache entries as objects in the JVM heap. Eviction is not enabled; instead, you remove entries manually by calling the evict() method.

Tip

This configuration prevents a warning message when you enable passivation but do not configure eviction.

<distributed-cache name="eviction_manual">
  <memory when-full="MANUAL"/>
</distributed-cache>
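
With this configuration, a minimal sketch of manual eviction might look like the following; the key and value are illustrative.

// Entries stay in memory until you evict them explicitly.
cache.put("k1", "v1");

// Evict k1 from the data container on this node only. If a persistent
// cache store is configured, the entry remains available in the store.
cache.evict("k1");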

Passivation with eviction

Passivation persists data to cache stores when Data Grid evicts entries. You should always enable eviction if you enable passivation, for example:

<distributed-cache name="passivation">
  <persistence passivation="true">
   <!-- Persistence configuration goes here. -->
  </persistence>
  <memory max-count="100"/>
</distributed-cache>

5.1.2. Expiration

Expiration removes entries from caches when they reach one of the following time limits:

Lifespan
Sets the maximum amount of time that entries can exist.

Maximum idle
Specifies how long entries can remain idle. An entry becomes idle when no operations occur for it.

Important

Maximum idle expiration does not currently support cache configurations with persistent cache stores.

When using expiration with an exception-based eviction policy, entries that are expired but not yet removed from the cache count towards the size of the data container.

5.1.2.1. How Expiration Works

When you configure expiration, Data Grid stores keys with metadata that determines when entries expire.

  • Lifespan uses a creation timestamp and the value for the lifespan configuration property.
  • Maximum idle uses a last used timestamp and the value for the max-idle configuration property.

Data Grid checks if lifespan or maximum idle metadata is set and then compares the values with the current time.

If (creation + lifespan < currentTime) or (lastUsed + maxIdle < currentTime) then Data Grid detects that the entry is expired.
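
The following is a minimal sketch of that check; the method and variable names are illustrative, not Data Grid internals.

// Returns true if an entry is expired, given its metadata and the current time.
// A negative lifespan or maxIdle value means that the limit is not set.
boolean isExpired(long created, long lifespan, long lastUsed, long maxIdle, long now) {
  boolean lifespanExpired = lifespan > -1 && created + lifespan < now;
  boolean maxIdleExpired = maxIdle > -1 && lastUsed + maxIdle < now;
  return lifespanExpired || maxIdleExpired;
}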

Expiration occurs whenever entries are accessed or found by the expiration reaper.

For example, k1 reaches the maximum idle time and a client makes a Cache.get(k1) request. In this case, Data Grid detects that the entry is expired and removes it from the data container. The Cache.get(k1) call returns null.

Data Grid also expires entries from cache stores, but only with lifespan expiration. Maximum idle expiration does not work with cache stores. In the case of cache loaders, Data Grid cannot expire entries because loaders can only read from external storage.

Note

Data Grid adds expiration metadata as long primitive data types to cache entries. This can increase the size of keys by as much as 32 bytes.
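
For example, assuming the creation timestamp, last used timestamp, lifespan, and maximum idle values are each stored as an 8-byte long, the metadata accounts for 4 × 8 = 32 bytes.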

5.1.2.2. Expiration Reaper

Data Grid uses a reaper thread that runs periodically to detect and remove expired entries. The expiration reaper ensures that expired entries that are no longer accessed are removed.

The Data Grid ExpirationManager interface handles the expiration reaper and exposes the processExpiration() method.

In some cases, you can disable the expiration reaper and manually expire entries by calling processExpiration(); for instance, if you are using local cache mode with a custom application where a maintenance thread runs periodically.
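
For example, a sketch of such a maintenance task might look like the following; the cache name and the cacheManager variable are illustrative.

// Run periodically by an application maintenance thread when the
// expiration reaper is disabled (interval="-1") in local cache mode.
Cache<String, String> cache = cacheManager.getCache("mycache");
cache.getAdvancedCache().getExpirationManager().processExpiration();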

Important

If you use clustered cache modes, you should never disable the expiration reaper.

Data Grid always uses the expiration reaper when using cache stores. In this case you cannot disable it.

5.1.2.3. Maximum Idle and Clustered Caches

Because maximum idle expiration relies on the last access time for cache entries, it has some limitations with clustered cache modes.

With lifespan expiration, the creation time for cache entries provides a value that is consistent across clustered caches. For example, the creation time for k1 is always the same on all nodes.

For maximum idle expiration with clustered caches, last access time for entries is not always the same on all nodes. To ensure that entries have the same relative access times across clusters, Data Grid sends touch commands to all owners when keys are accessed.

The touch commands that Data Grid sends have the following considerations:

  • Cache.get() requests do not return until all touch commands complete. This synchronous behavior increases latency of client requests.
  • The touch command also updates the "recently accessed" metadata for cache entries on all owners, which Data Grid uses for eviction.
  • With scattered cache mode, Data Grid sends touch commands to all nodes, not just primary and backup owners.

Additional information

  • Maximum idle expiration does not work with invalidation mode.
  • Iteration across a clustered cache can return expired entries that have exceeded the maximum idle time limit. This behavior improves performance because no remote invocations are performed during the iteration. Note also that iteration does not refresh expired entries.

5.1.2.4. Expiration Examples

When you configure Data Grid to expire entries, you can set lifespan and maximum idle times for:

  • All entries in a cache (cache-wide). You can configure cache-wide expiration in infinispan.xml or programmatically using the ConfigurationBuilder.
  • Per entry, which takes priority over cache-wide expiration values. You configure expiration for specific entries when you create them.

Note

When you explicitly define lifespan and maximum idle time values for cache entries, Data Grid replicates those values across the cluster along with the cache entries. Likewise, Data Grid persists expiration values along with the entries if you configure cache stores.

Configuring expiration for all cache entries

Expire all cache entries after 2 seconds:

<expiration lifespan="2000" />

Expire all cache entries 1 second after last access time:

<expiration max-idle="1000" />

Expire all cache entries 1 second after last access time and disable the expiration reaper by setting the interval attribute to -1, so that you must expire entries manually:

<expiration max-idle="1000" interval="-1" />

Expire all cache entries after 5 seconds or 1 second after the last access time, whichever happens first:

<expiration lifespan="5000" max-idle="1000" />

Configuring expiration when creating cache entries

The following example shows how to configure lifespan and maximum idle values when creating cache entries:

// Use the cache-wide expiration configuration.
cache.put("pinot noir", pinotNoirPrice); 1

// Define a lifespan value of 2 seconds.
cache.put("chardonnay", chardonnayPrice, 2, TimeUnit.SECONDS); 2

// Define a lifespan value of -1 (disabled) and a max-idle value of 1 second.
cache.put("pinot grigio", pinotGrigioPrice,
          -1, TimeUnit.SECONDS, 1, TimeUnit.SECONDS); 3

// Define a lifespan value of 5 seconds and a max-idle value of 1 second.
cache.put("riesling", rieslingPrice,
          5, TimeUnit.SECONDS, 1, TimeUnit.SECONDS); 4

If the Data Grid configuration defines a lifespan value of 1000 milliseconds (1 second) for all entries, the preceding Cache.put() requests cause the entries to expire:

1. After 1 second.
2. After 2 seconds.
3. 1 second after last access time.
4. After 5 seconds or 1 second after the last access time, whichever happens first.

5.2. Off-Heap Memory

Data Grid stores cache entries in JVM heap memory by default. You can configure Data Grid to use off-heap storage so that your data occupies native memory outside the managed JVM memory space, which provides the following benefits:

  • Uses less memory than JVM heap memory for the same amount of data.
  • Can improve overall JVM performance by avoiding Garbage Collector (GC) runs.

However, there are some trade-offs with off-heap storage; for example, JVM heap dumps do not show entries stored in off-heap memory.

The following diagram illustrates the memory space for a JVM process where Data Grid is running:

Figure 5.1. JVM memory space

JVM heap memory

The heap is divided into young and old generations that help keep referenced Java objects and other application data in memory. The GC process reclaims space from unreachable objects, running more frequently on the young generation memory pool.

When Data Grid stores cache entries in JVM heap memory, GC runs can take longer to complete as you start adding data to your caches. Because GC is an intensive process, longer and more frequent runs can degrade application performance.

Off-heap memory

Off-heap memory is native system memory that lies outside JVM memory management. The JVM memory space diagram shows the Metaspace memory pool, which holds class metadata and is allocated from native memory. The diagram also represents a section of native memory that holds Data Grid cache entries.

Storing data off-heap

When you add entries to off-heap caches, Data Grid dynamically allocates native memory to your data.

Data Grid hashes the serialized byte[] for each key into buckets that are similar to a standard Java HashMap. Buckets include address pointers that Data Grid uses to locate entries that you store in off-heap memory.

Note

Data Grid determines equality of Java objects in off-heap storage using the serialized byte[] representation of each object instead of the object instance.

The following diagram shows a set of keys with names, the hash for each key, the bucket array of address pointers, and the entries that hold the name and phone number values:

Figure 5.2. Memory address pointers for keys

When key hashes collide, Data Grid links entries. In the Memory address pointers for keys diagram, if the William Clay and Luke Cage keys have the same hash, then the first entry added to the cache is the first element in the bucket.

Note

Even though Data Grid stores cache entries in native memory, run-time operations require JVM heap representations of those objects. For instance, cache.get() operations read objects into heap memory before returning. Likewise, state transfer operations hold subsets of objects in heap memory while they take place.

Memory overhead

Memory overhead is the additional memory that Data Grid uses to store entries. For off-heap storage, Data Grid uses 25 bytes for each entry in the cache.

When you use eviction to create a bounded off-heap data container, memory overhead increases to a total of 61 bytes because Data Grid creates an additional linked list to track entries in the cache and perform eviction.
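
For example, a bounded off-heap cache that holds 1,000,000 entries adds approximately 61 MB of overhead (1,000,000 × 61 bytes), in addition to the memory that the serialized keys and values require.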

Data consistency

Data Grid uses an array of locks to protect off-heap address spaces. The number of locks is twice the number of cores and then rounded to the nearest power of two. This ensures that there is an even distribution of ReadWriteLock instances to prevent write operations from blocking read operations.
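
For example, on a machine with eight available cores, Data Grid creates an array of 16 locks, because 8 × 2 = 16 is already a power of two.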

5.2.1. Using Off-Heap Memory

Configure Data Grid to store cache entries in native memory outside the JVM heap space.

Procedure

  1. Modify your cache configuration to use off-heap memory.

    • Declarative: Add the storage="OFF_HEAP" attribute to the memory element.
    • Programmatic: Call the storage(OFF_HEAP) method in the MemoryConfigurationBuilder class.
  2. Limit the amount of off-heap memory that the cache can use by configuring either max-count or max-size eviction.

Off-heap memory bounded by size

Declarative configuration

<distributed-cache name="off_heap" mode="SYNC">
  <memory storage="OFF_HEAP"
          max-size="1.5GB"
          when-full="REMOVE"/>
</distributed-cache>

Programmatic configuration

ConfigurationBuilder cfg = new ConfigurationBuilder();

cfg
  .memory()
    .storage(StorageType.OFF_HEAP)
    .maxSize("1.5GB")
    .whenFull(EvictionStrategy.REMOVE)
  .build();

Off-heap memory bounded by total number of entries

Declarative configuration

<distributed-cache name="off_heap" mode="SYNC">
  <memory storage="OFF_HEAP"
          max-count="500"
          when-full="REMOVE"/>
</distributed-cache>

Programmatic configuration

ConfigurationBuilder cfg = new ConfigurationBuilder();

cfg
  .memory()
    .storage(StorageType.OFF_HEAP)
    .maxCount(500)
    .whenFull(EvictionStrategy.REMOVE)
  .build();