public interface LockedStream<K,V> extends BaseCacheStream<CacheEntry<K,V>,LockedStream<K,V>>

A stream whose entries are operated on via forEach(BiConsumer), where the BiConsumer is invoked while guaranteeing that the entry being passed is properly locked for the entire duration of the invocation. An attempt is made to acquire the lock for an entry using the default LockingConfiguration.lockAcquisitionTimeout() before invoking any operations on it.
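As a quick illustration, a minimal sketch of obtaining and configuring a locked stream. It assumes an already-running Cache<String, String> named cache; the timeout value and the value transformation are made up for the example.

```java
import java.util.concurrent.TimeUnit;
import org.infinispan.Cache;   // "cache" below is assumed to be a running Cache<String, String>

cache.getAdvancedCache()
     .lockedStream()                 // obtain a LockedStream over all entries
     .timeout(2, TimeUnit.SECONDS)   // override the default lock acquisition timeout
     .forEach((c, entry) ->
           // the entry's key stays locked on its primary owner for this whole invocation
           c.put(entry.getKey(), entry.getValue().toUpperCase()));
```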
Nested classes/interfaces inherited from interface org.infinispan.BaseCacheStream: BaseCacheStream.SegmentCompletionListener
| Modifier and Type | Method and Description |
|---|---|
| LockedStream<K,V> | disableRehashAware() Disables tracking of rehash events that could occur to the underlying cache. |
| LockedStream<K,V> | distributedBatchSize(int batchSize) Controls how many keys are returned from a remote node when using a stream terminal operation with a distributed cache backing this stream. |
| LockedStream<K,V> | filter(Predicate<? super CacheEntry<K,V>> predicate) Returns a locked stream consisting of the elements of this stream that match the given predicate. |
| default LockedStream<K,V> | filter(SerializablePredicate<? super CacheEntry<K,V>> predicate) Same as filter(Predicate) except that the Predicate must also implement Serializable. |
| LockedStream<K,V> | filterKeys(Set<?> keys) Filters which entries are returned by only returning ones that map to the given keys. |
| LockedStream<K,V> | filterKeySegments(IntSet segments) Filters which entries are returned by what segment they are present in. |
| LockedStream<K,V> | filterKeySegments(Set<Integer> segments) Deprecated. This is to be replaced by filterKeySegments(IntSet). |
| void | forEach(BiConsumer<Cache<K,V>,? super CacheEntry<K,V>> biConsumer) Performs an action for each element of this stream on the primary owner of the given key. |
| default void | forEach(SerializableBiConsumer<Cache<K,V>,? super CacheEntry<K,V>> biConsumer) Same as forEach(BiConsumer) except that the BiConsumer must also implement Serializable. |
| <R> Map<K,R> | invokeAll(BiFunction<Cache<K,V>,? super CacheEntry<K,V>,R> biFunction) Performs a BiFunction for each element of this stream on the primary owner of each entry, returning a value. |
| default <R> Map<K,R> | invokeAll(SerializableBiFunction<Cache<K,V>,? super CacheEntry<K,V>,R> biFunction) Same as invokeAll(BiFunction) except that the BiFunction must also implement Serializable. |
| Iterator<CacheEntry<K,V>> | iterator() This method is not supported when using a LockedStream. |
| LockedStream<K,V> | parallelDistribution() Enables sending requests to all other remote nodes when a terminal operator is performed. |
| LockedStream<K,V> | segmentCompletionListener(BaseCacheStream.SegmentCompletionListener listener) This method is not supported when using a LockedStream. |
| LockedStream<K,V> | sequentialDistribution() Disables sending requests to all other remote nodes in parallel, so that nodes are asked one at a time. |
| Spliterator<CacheEntry<K,V>> | spliterator() This method is not supported when using a LockedStream. |
| LockedStream<K,V> | timeout(long time, TimeUnit unit) Sets the timeout for the acquisition of the lock for each entry. |
Methods inherited from interface java.util.stream.BaseStream: close, isParallel, onClose, parallel, sequential, unordered
LockedStream<K,V> filter(Predicate<? super CacheEntry<K,V>> predicate)
This filter is applied after the lock is acquired for the given key, so the filter sees the same value that the consumer is given.
Parameters:
predicate - predicate

default LockedStream<K,V> filter(SerializablePredicate<? super CacheEntry<K,V>> predicate)
Same as filter(Predicate) except that the Predicate must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
Parameters:
predicate - the predicate to filter out unwanted entries
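For illustration only, a short sketch combining filter with forEach; because a lambda is passed, the compiler selects the SerializablePredicate overload described above. The Cache<String, String> named cache and the "default" replacement value are assumptions.

```java
// The lock for each key is acquired first, then the predicate runs, so it sees the locked value;
// only entries whose value is currently null are handed to the consumer.
cache.getAdvancedCache().lockedStream()
     .filter(entry -> entry.getValue() == null)   // lambda, so the Serializable overload is chosen
     .forEach((c, entry) -> c.put(entry.getKey(), "default"));
```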
void forEach(BiConsumer<Cache<K,V>,? super CacheEntry<K,V>> biConsumer)

This method is performed while holding the exclusive lock over the given entry, and the lock is released only after the consumer has completed. In the consumer, entry.setValue(newValue) is equivalent to cache.put(entry.getKey(), newValue).
If using pessimistic transactions, this lock is not held by a transaction, so the user may start a transaction in this consumer; that transaction must also be completed before returning. A transaction started in the consumer will share the same lock used to obtain the key.
Remember that if you use an explicit transaction or an async method, it must be completed before the consumer returns, so that it operates within the scope of the lock for the given key. Failing to do so can lead to inconsistency, as those operations would run without the proper locking.
Some methods on the provided cache may not work as expected. These include Cache.putForExternalRead(Object, Object), AdvancedCache.lock(Object[]), AdvancedCache.lock(Collection), and AdvancedCache.removeGroup(String). If any of these methods is used on the cache inside the consumer, it will throw an IllegalStateException. This is due to possible interactions with locks while using these commands.
Parameters:
biConsumer - the biConsumer to run for each entry under their lock
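A hedged sketch of the semantics described above, assuming a Cache<String, String> named cache; the "-visited" suffix is illustrative. It mutates the entry in place, which is equivalent to putting the new value while the lock is held.

```java
cache.getAdvancedCache().lockedStream().forEach((c, entry) -> {
   // The lock for entry.getKey() is held for the entire body of this consumer.
   String updated = entry.getValue() + "-visited";
   entry.setValue(updated);   // equivalent to c.put(entry.getKey(), updated) under the held lock
   // Any explicit transaction or async operation started here must complete before returning,
   // otherwise it would run outside the scope of this lock.
});
```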
default void forEach(SerializableBiConsumer<Cache<K,V>,? super CacheEntry<K,V>> biConsumer)

Same as forEach(BiConsumer) except that the BiConsumer must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
Parameters:
biConsumer - the biConsumer to run for each entry under their lock

@Experimental
<R> Map<K,R> invokeAll(BiFunction<Cache<K,V>,? super CacheEntry<K,V>,R> biFunction)
This method is currently marked as Experimental since it returns a Map and requires blocking. This operation could take a good deal of time and as such should be done using an asynchronous API. Most likely the return type will be changed to use some sort of asynchronous return value. This method is here until that can be implemented.
This BiFunction is invoked while holding an exclusive lock over the given entry, and the lock is released only after the function has completed. In the function, entry.setValue(newValue) is equivalent to cache.put(entry.getKey(), newValue).
If using pessimistic transactions, this lock is not held by a transaction, so the user may start a transaction in this biFunction; that transaction must also be completed before returning. A transaction started in the biFunction will share the same lock used to obtain the key.
Remember that if you use an explicit transaction or an async method, it must be completed before the function returns, so that it operates within the scope of the lock for the given key. Failing to do so can lead to inconsistency, as those operations would run without the proper locking.
Some methods on the provided cache may not work as expected. These include Cache.putForExternalRead(Object, Object), AdvancedCache.lock(Object[]), AdvancedCache.lock(Collection), and AdvancedCache.removeGroup(String). If any of these methods is used on the cache inside the function, it will throw an IllegalStateException. This is due to possible interactions with locks while using these commands.
Type Parameters:
R - the return type
Parameters:
biFunction - the biFunction to run for each entry under their lock
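A sketch of the blocking invokeAll call, again assuming a Cache<String, String> named cache; computing value lengths is just an illustrative stand-in for real per-entry work. Each entry's result ends up in the returned Map under that entry's key.

```java
import java.util.Map;

// Blocking, Experimental: computes the length of every value while its entry is locked
// on the primary owner, and returns the results keyed by the entries' keys.
Map<String, Integer> valueLengths = cache.getAdvancedCache().lockedStream()
      .invokeAll((c, entry) -> entry.getValue().length());
```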
@Experimental
default <R> Map<K,R> invokeAll(SerializableBiFunction<Cache<K,V>,? super CacheEntry<K,V>,R> biFunction)
Same as invokeAll(BiFunction) except that the BiFunction must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
Type Parameters:
R - the return type
Parameters:
biFunction - the biFunction to run for each entry under their lock

LockedStream<K,V> sequentialDistribution()
Disables sending requests to all other remote nodes in parallel, so that nodes are asked one at a time. Parallel distribution is enabled by default, except for CacheStream.iterator() and CacheStream.spliterator().

Specified by:
sequentialDistribution in interface BaseCacheStream<CacheEntry<K,V>,LockedStream<K,V>>
LockedStream<K,V> parallelDistribution()
Enables sending requests to all other remote nodes when a terminal operator is performed. Parallel distribution is enabled by default, except for CacheStream.iterator() and CacheStream.spliterator().

Specified by:
parallelDistribution in interface BaseCacheStream<CacheEntry<K,V>,LockedStream<K,V>>
LockedStream<K,V> filterKeySegments(Set<Integer> segments)
Deprecated. This is to be replaced by filterKeySegments(IntSet).
Filters which entries are returned by what segment they are present in. This can be more efficient than using the CacheStream.filter(Predicate) method, as it can control what nodes are asked for data and what entries are read from the underlying CacheStore if present.

Specified by:
filterKeySegments in interface BaseCacheStream<CacheEntry<K,V>,LockedStream<K,V>>
Parameters:
segments - The segments to use for this stream operation. Any segments not in this set will be ignored.

LockedStream<K,V> filterKeySegments(IntSet segments)
Filters which entries are returned by what segment they are present in. This can be more efficient than using the CacheStream.filter(Predicate) method, as it can control what nodes are asked for data and what entries are read from the underlying CacheStore if present.

Specified by:
filterKeySegments in interface BaseCacheStream<CacheEntry<K,V>,LockedStream<K,V>>
Parameters:
segments - The segments to use for this stream operation. Any segments not in this set will be ignored.

LockedStream<K,V> filterKeys(Set<?> keys)
Filters which entries are returned by only returning ones that map to the given keys. This will be faster than a regular CacheStream.filter(Predicate) if the filter is holding references to the same keys.

Specified by:
filterKeys in interface BaseCacheStream<CacheEntry<K,V>,LockedStream<K,V>>
Parameters:
keys - The keys that this stream will only operate on.
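For illustration, a sketch that restricts the stream to a known key set before the terminal operation; the key values and the trim() transformation are made up, and cache is again assumed to be a Cache<String, String>.

```java
import java.util.Set;

// Only the listed keys are locked and handed to the consumer; other entries are never touched.
cache.getAdvancedCache().lockedStream()
     .filterKeys(Set.of("user-1", "user-2"))
     .forEach((c, entry) -> c.put(entry.getKey(), entry.getValue().trim()));
```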
LockedStream<K,V> distributedBatchSize(int batchSize)

Controls how many keys are returned from a remote node when using a stream terminal operation with a distributed cache backing this stream. This value is used for key-tracking terminal operators such as CacheStream.iterator(), CacheStream.spliterator(), and CacheStream.forEach(Consumer). Please see those methods for additional information on how this value may affect them.
This value may also be used in the case of a terminal operator that doesn't track keys if an intermediate operation is performed that requires bringing keys locally to do computations. Examples of such intermediate operations are CacheStream.sorted(), CacheStream.sorted(Comparator), CacheStream.distinct(), CacheStream.limit(long), and CacheStream.skip(long).
This value is always ignored when this stream is backed by a cache that is not distributed, as all values are already local.

Specified by:
distributedBatchSize in interface BaseCacheStream<CacheEntry<K,V>,LockedStream<K,V>>
Parameters:
batchSize - The size of each batch. This defaults to the state transfer chunk size.
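A small sketch of tuning the batch size before a key-tracking terminal operation; the value 128 is arbitrary, the consumer body is a placeholder, and the setting only matters for distributed caches.

```java
// Fetch keys from remote nodes in batches of 128 while the forEach traversal runs
// (ignored when the backing cache is not distributed).
cache.getAdvancedCache().lockedStream()
     .distributedBatchSize(128)
     .forEach((c, entry) -> c.put(entry.getKey(), entry.getValue()));
```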
LockedStream<K,V> disableRehashAware()

Disables tracking of rehash events that could occur to the underlying cache. Most terminal operations will run faster with rehash awareness disabled, even without a rehash occurring. However, if a rehash occurs with this disabled, be prepared to possibly receive only a subset of values.

Specified by:
disableRehashAware in interface BaseCacheStream<CacheEntry<K,V>,LockedStream<K,V>>
LockedStream<K,V> timeout(long time, TimeUnit unit)
Sets the timeout for the acquisition of the lock for each entry.

Specified by:
timeout in interface BaseCacheStream<CacheEntry<K,V>,LockedStream<K,V>>

Parameters:
time - the maximum time to wait
unit - the time unit of the timeout argument

LockedStream<K,V> segmentCompletionListener(BaseCacheStream.SegmentCompletionListener listener) throws UnsupportedOperationException
This method is not supported when using a LockedStream.

Specified by:
segmentCompletionListener in interface BaseCacheStream<CacheEntry<K,V>,LockedStream<K,V>>

Parameters:
listener - The listener that will be called back as segments are completed.

Throws:
UnsupportedOperationException
Iterator<CacheEntry<K,V>> iterator() throws UnsupportedOperationException
This method is not supported when using a LockedStream.

Specified by:
iterator in interface BaseStream<CacheEntry<K,V>,LockedStream<K,V>>

Throws:
UnsupportedOperationException
Spliterator<CacheEntry<K,V>> spliterator() throws UnsupportedOperationException
This method is not supported when using a LockedStream.

Specified by:
spliterator in interface BaseStream<CacheEntry<K,V>,LockedStream<K,V>>

Throws:
UnsupportedOperationException