Interface CacheStream<R>
-
- Type Parameters:
R
- The type of the stream
- All Superinterfaces:
AutoCloseable
,BaseCacheStream<R,Stream<R>>
,BaseStream<R,Stream<R>>
,Stream<R>
- All Known Implementing Classes:
AbstractDelegatingCacheStream
public interface CacheStream<R> extends Stream<R>, BaseCacheStream<R,Stream<R>>
A Stream that has additional operations to monitor or control behavior when used from a Cache.

Whenever the iterator or spliterator methods are used, the user must close the Stream that the method was invoked on after completion of its operation. Failure to do so may cause a thread leakage if the iterator or spliterator are not fully consumed.

When using a stream that is backed by a distributed cache, these operations will be performed using remote distribution controlled by the segments that each key maps to. All intermediate operations are lazy, even the special cases described in later paragraphs, and are not evaluated until a final terminal operation is invoked on the stream. Essentially each set of intermediate operations is shipped to each remote node, where they are applied to a local stream there, and finally the terminal operation is completed. If this stream is parallel, the processing on remote nodes is also done using a parallel stream.

Parallel distribution is enabled by default for all operations except for iterator() and spliterator(). Please see sequentialDistribution() and parallelDistribution(). With this disabled, only a single node will process the operation at a time (including locally).

Rehash awareness is enabled by default for all operations. Any intermediate or terminal operation may be invoked multiple times during a rehash, and thus you should ensure they are idempotent. This can be problematic for forEach(Consumer) as it may be difficult to implement with such requirements; please see it for more information. If you wish to disable rehash-aware operations, you can do so by calling disableRehashAware(), which should provide better performance for some operations. The performance is most affected for the key-aware operations iterator(), spliterator() and forEach(Consumer). Disabling rehash awareness can cause incorrect results if the terminal operation is invoked and a rehash occurs before the operation completes. If incorrect results do occur, it is guaranteed that only entries were missed and no entries are duplicated.

Any stateful intermediate operation requires pulling all information up to that point local to operate properly. Each of these methods may have slightly different behavior, so make sure you check the method you are utilizing. An example is the distinct intermediate operation: upon calling the terminal operation, a remote retrieval operation is run using all of the intermediate operations up to distinct, and that retrieval then feeds a local stream where the remaining intermediate operations are performed and the terminal operation is applied as normal. Note that in this case the intermediate iterator still obeys the distributedBatchSize(int) setting irrespective of the terminal operator. A short usage sketch appears below.
- Since:
- 8.0
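A minimal usage sketch (hedged; it assumes a running Cache<String, String> obtained elsewhere, and relies on entrySet().stream() returning a CacheStream as it does for Infinispan caches). It shows the points above: closing the stream with try-with-resources because iterator() is used, and configuring a remote-friendly option before the terminal operation.

import java.util.Iterator;
import java.util.Map;
import org.infinispan.Cache;
import org.infinispan.CacheStream;

public class CacheStreamExample {
   public static void printEntries(Cache<String, String> cache) {
      // iterator() is a key-aware terminal operation, so the stream must be closed;
      // try-with-resources is the preferred pattern described above.
      try (CacheStream<Map.Entry<String, String>> stream = cache.entrySet().stream()) {
         Iterator<Map.Entry<String, String>> it = stream
               .distributedBatchSize(128)   // pull at most 128 keys per remote response
               .iterator();
         while (it.hasNext()) {
            Map.Entry<String, String> entry = it.next();
            System.out.println(entry.getKey() + " -> " + entry.getValue());
         }
      }
   }
}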
-
-
Nested Class Summary
-
Nested classes/interfaces inherited from interface org.infinispan.BaseCacheStream
BaseCacheStream.SegmentCompletionListener
-
Nested classes/interfaces inherited from interface java.util.stream.Stream
Stream.Builder<T>
-
-
Method Summary
default boolean allMatch(SerializablePredicate<? super R> predicate)
  Same as Stream.allMatch(Predicate) except that the Predicate must also implement Serializable.
default boolean anyMatch(SerializablePredicate<? super R> predicate)
  Same as Stream.anyMatch(Predicate) except that the Predicate must also implement Serializable.
default <R1> R1 collect(Supplier<Collector<? super R,?,R1>> supplier)
  Performs a mutable reduction operation on the elements of this stream using a Collector that is lazily created from the Supplier provided.
<R1,A> R1 collect(Collector<? super R,A,R1> collector)
default <R1> R1 collect(SerializableSupplier<Collector<? super R,?,R1>> supplier)
  Performs a mutable reduction operation on the elements of this stream using a Collector that is lazily created from the SerializableSupplier provided.
default <R1> R1 collect(SerializableSupplier<R1> supplier, SerializableBiConsumer<R1,? super R> accumulator, SerializableBiConsumer<R1,R1> combiner)
  Same as Stream.collect(Supplier, BiConsumer, BiConsumer) except that the various arguments must also implement Serializable.
CacheStream<R> disableRehashAware()
  Disables tracking of rehash events that could occur to the underlying cache.
CacheStream<R> distinct()
CacheStream<R> distributedBatchSize(int batchSize)
  Controls how many keys are returned from a remote node when using a stream terminal operation with a distributed cache to back this stream.
CacheStream<R> filter(Predicate<? super R> predicate)
default CacheStream<R> filter(SerializablePredicate<? super R> predicate)
  Same as filter(Predicate) except that the Predicate must also implement Serializable.
CacheStream<R> filterKeys(Set<?> keys)
  Filters which entries are returned by only returning ones that map to the given keys.
CacheStream<R> filterKeySegments(Set<Integer> segments)
  Deprecated. This is to be replaced by filterKeySegments(IntSet).
CacheStream<R> filterKeySegments(IntSet segments)
  Filters which entries are returned by what segment they are present in.
<R1> CacheStream<R1> flatMap(Function<? super R,? extends Stream<? extends R1>> mapper)
default <R1> CacheStream<R1> flatMap(SerializableFunction<? super R,? extends Stream<? extends R1>> mapper)
  Same as flatMap(Function) except that the Function must also implement Serializable.
DoubleCacheStream flatMapToDouble(Function<? super R,? extends DoubleStream> mapper)
default DoubleCacheStream flatMapToDouble(SerializableFunction<? super R,? extends DoubleStream> mapper)
  Same as flatMapToDouble(Function) except that the Function must also implement Serializable.
IntCacheStream flatMapToInt(Function<? super R,? extends IntStream> mapper)
default IntCacheStream flatMapToInt(SerializableFunction<? super R,? extends IntStream> mapper)
  Same as flatMapToInt(Function) except that the Function must also implement Serializable.
LongCacheStream flatMapToLong(Function<? super R,? extends LongStream> mapper)
default LongCacheStream flatMapToLong(SerializableFunction<? super R,? extends LongStream> mapper)
  Same as flatMapToLong(Function) except that the Function must also implement Serializable.
<K,V> void forEach(BiConsumer<Cache<K,V>,? super R> action)
  Same as forEach(Consumer) except that it takes a BiConsumer that provides access to the underlying Cache that is backing this stream.
void forEach(Consumer<? super R> action)
default <K,V> void forEach(SerializableBiConsumer<Cache<K,V>,? super R> action)
default void forEach(SerializableConsumer<? super R> action)
  Same as forEach(Consumer) except that the Consumer must also implement Serializable.
Iterator<R> iterator()
CacheStream<R> limit(long maxSize)
<R1> CacheStream<R1> map(Function<? super R,? extends R1> mapper)
default <R1> CacheStream<R1> map(SerializableFunction<? super R,? extends R1> mapper)
  Same as map(Function) except that the Function must also implement Serializable.
DoubleCacheStream mapToDouble(ToDoubleFunction<? super R> mapper)
default DoubleCacheStream mapToDouble(SerializableToDoubleFunction<? super R> mapper)
  Same as mapToDouble(ToDoubleFunction) except that the ToDoubleFunction must also implement Serializable.
IntCacheStream mapToInt(ToIntFunction<? super R> mapper)
default IntCacheStream mapToInt(SerializableToIntFunction<? super R> mapper)
  Same as mapToInt(ToIntFunction) except that the ToIntFunction must also implement Serializable.
LongCacheStream mapToLong(ToLongFunction<? super R> mapper)
default LongCacheStream mapToLong(SerializableToLongFunction<? super R> mapper)
  Same as mapToLong(ToLongFunction) except that the ToLongFunction must also implement Serializable.
default Optional<R> max(SerializableComparator<? super R> comparator)
  Same as Stream.max(Comparator) except that the Comparator must also implement Serializable.
default Optional<R> min(SerializableComparator<? super R> comparator)
  Same as Stream.min(Comparator) except that the Comparator must also implement Serializable.
default boolean noneMatch(SerializablePredicate<? super R> predicate)
  Same as Stream.noneMatch(Predicate) except that the Predicate must also implement Serializable.
CacheStream<R> onClose(Runnable closeHandler)
CacheStream<R> parallel()
CacheStream<R> parallelDistribution()
  Enables sending requests to all other remote nodes in parallel when a terminal operator is performed.
CacheStream<R> peek(Consumer<? super R> action)
default CacheStream<R> peek(SerializableConsumer<? super R> action)
  Same as peek(Consumer) except that the Consumer must also implement Serializable.
default Optional<R> reduce(SerializableBinaryOperator<R> accumulator)
  Same as Stream.reduce(BinaryOperator) except that the BinaryOperator must also implement Serializable.
default R reduce(R identity, SerializableBinaryOperator<R> accumulator)
  Same as Stream.reduce(Object, BinaryOperator) except that the BinaryOperator must also implement Serializable.
default <U> U reduce(U identity, SerializableBiFunction<U,? super R,U> accumulator, SerializableBinaryOperator<U> combiner)
  Same as Stream.reduce(Object, BiFunction, BinaryOperator) except that the BiFunction and BinaryOperator must also implement Serializable.
CacheStream<R> segmentCompletionListener(BaseCacheStream.SegmentCompletionListener listener)
  Allows registration of a segment completion listener that is notified when a segment has completed processing.
CacheStream<R> sequential()
CacheStream<R> sequentialDistribution()
  Disables sending requests to all other remote nodes in parallel; requests are sent to one node at a time instead.
CacheStream<R> skip(long n)
CacheStream<R> sorted()
CacheStream<R> sorted(Comparator<? super R> comparator)
default CacheStream<R> sorted(SerializableComparator<? super R> comparator)
  Same as sorted(Comparator) except that the Comparator must also implement Serializable.
Spliterator<R> spliterator()
CacheStream<R> timeout(long timeout, TimeUnit unit)
  Sets a given time to wait for a remote operation to respond by.
default <A> A[] toArray(SerializableIntFunction<A[]> generator)
  Same as Stream.toArray(IntFunction) except that the IntFunction must also implement Serializable.
CacheStream<R> unordered()
-
Methods inherited from interface java.util.stream.BaseStream
close, isParallel
-
-
-
-
Method Detail
-
sequentialDistribution
CacheStream<R> sequentialDistribution()
Disables sending requests to all other remote nodes in parallel; requests are sent to one node at a time instead. This can reduce memory pressure on the originator node at the cost of performance. A usage sketch follows this entry.
Parallel distribution is enabled by default except for iterator() and spliterator().
- Specified by:
sequentialDistribution in interface BaseCacheStream<R,Stream<R>>
- Returns:
- a stream with parallel distribution disabled.
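A hedged sketch of when this is useful (assumes a running Cache<String, String> named cache): contacting one node at a time keeps fewer in-flight responses on the originator while still computing a cluster-wide result.

// Count large values while contacting remote nodes one at a time to limit
// memory pressure on the originator node.
long largeValues = cache.values().stream()
      .sequentialDistribution()
      .filter(v -> v.length() > 1024)   // the lambda picks the Serializable overload
      .count();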
-
parallelDistribution
CacheStream<R> parallelDistribution()
Description copied from interface: BaseCacheStream
Enables sending requests to all other remote nodes in parallel when a terminal operator is performed. This requires additional overhead as results must be processed concurrently from various nodes, but it should perform faster in the majority of cases.
Parallel distribution is enabled by default except for iterator() and spliterator().
- Specified by:
parallelDistribution in interface BaseCacheStream<R,Stream<R>>
- Returns:
- a stream with parallel distribution enabled.
-
filterKeySegments
CacheStream<R> filterKeySegments(Set<Integer> segments)
Deprecated. This is to be replaced by filterKeySegments(IntSet).
Filters which entries are returned by what segment they are present in. This method can be substantially more efficient than using a regular filter(Predicate), as it controls which nodes are asked for data and which entries are read from the underlying CacheStore if present.
- Specified by:
filterKeySegments in interface BaseCacheStream<R,Stream<R>>
- Parameters:
segments - The segments to use for this stream operation. Any segments not in this set will be ignored.
- Returns:
- a stream with the segments filtered.
-
filterKeySegments
CacheStream<R> filterKeySegments(IntSet segments)
Filters which entries are returned by what segment they are present in. This method can be substantially more efficient than using a regular filter(Predicate), as it controls which nodes are asked for data and which entries are read from the underlying CacheStore if present.
- Specified by:
filterKeySegments in interface BaseCacheStream<R,Stream<R>>
- Parameters:
segments - The segments to use for this stream operation. Any segments not in this set will be ignored.
- Returns:
- a stream with the segments filtered.
-
filterKeys
CacheStream<R> filterKeys(Set<?> keys)
Filters which entries are returned by only returning ones that map to the given keys. This method will be faster than a regular filter(Predicate) if the filter is holding references to the same keys. A usage sketch follows this entry.
- Specified by:
filterKeys in interface BaseCacheStream<R,Stream<R>>
- Parameters:
keys - The keys that this stream will only operate on.
- Returns:
- a stream with the keys filtered.
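A hedged sketch (assumes a running Cache<String, String> named cache; the key names are illustrative): only the owners of the requested keys are consulted instead of scanning every entry.

// requires java.util.Set, java.util.HashSet and java.util.Arrays
Set<String> keysOfInterest = new HashSet<>(Arrays.asList("user-1", "user-2", "user-3"));
long present = cache.entrySet().stream()
      .filterKeys(keysOfInterest)
      .count();   // only the segments/nodes owning these keys are touched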
-
distributedBatchSize
CacheStream<R> distributedBatchSize(int batchSize)
Controls how many keys are returned from a remote node when using a stream terminal operation with a distributed cache to back this stream. This value is ignored when terminal operators that don't track keys are used. Key tracking terminal operators are iterator(), spliterator() and forEach(Consumer). Please see those methods for additional information on how this value may affect them.
This value may also be used for a terminal operator that doesn't track keys if an intermediate operation is performed that requires bringing keys locally to do computations. Examples of such intermediate operations are sorted(), sorted(Comparator), distinct(), limit(long) and skip(long).
This value is always ignored when this stream is backed by a cache that is not distributed, as all values are already local. A usage sketch follows this entry.
- Specified by:
distributedBatchSize in interface BaseCacheStream<R,Stream<R>>
- Parameters:
batchSize - The size of each batch. This defaults to the state transfer chunk size.
- Returns:
- a stream with the batch size updated
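A hedged sketch (assumes a running distributed Cache<String, String> named cache): a smaller batch size lowers memory usage per remote response at the cost of more round trips, and only matters for key-tracking terminal operations such as iterator().

try (CacheStream<String> stream = cache.keySet().stream()) {
   Iterator<String> keys = stream
         .distributedBatchSize(500)   // at most 500 keys per remote response
         .iterator();
   keys.forEachRemaining(System.out::println);
}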
-
segmentCompletionListener
CacheStream<R> segmentCompletionListener(BaseCacheStream.SegmentCompletionListener listener)
Allows registration of a segment completion listener that is notified when a segment has completed processing. If the terminal operator has a short circuit, this listener may never be called.
This method is designed for the sole purpose of use with iterator(), to allow a user to track completion of segments as they are returned from the iterator. Behavior of other methods is not specified. Please see iterator() for more information.
Multiple listeners may be registered upon multiple invocations of this method. The ordering of notified listeners is not specified.
This is only used if this stream did not invoke BaseCacheStream.disableRehashAware() and has no flat map based operations. If either is done, no segments will be notified. A usage sketch follows this entry.
- Specified by:
segmentCompletionListener in interface BaseCacheStream<R,Stream<R>>
- Parameters:
listener - The listener that will be called back as segments are completed.
- Returns:
- a stream with the listener registered.
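A hedged sketch (assumes a running Cache<String, String> named cache, and that the listener can be supplied as a lambda; the exact type the callback receives to describe the completed segments depends on the Infinispan version, so check the SegmentCompletionListener javadoc for yours).

try (CacheStream<Map.Entry<String, String>> stream = cache.entrySet().stream()) {
   Iterator<Map.Entry<String, String>> it = stream
         // the callback argument describes the segments that have been fully iterated
         .segmentCompletionListener(completed -> System.out.println("Segments done: " + completed))
         .iterator();
   it.forEachRemaining(e -> System.out.println(e.getKey()));
}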
-
disableRehashAware
CacheStream<R> disableRehashAware()
Disables tracking of rehash events that could occur to the underlying cache. If a rehash event occurs while a terminal operation is being performed, it is possible for some values that are in the cache to not be found. Note that you will never have an entry duplicated when rehash awareness is disabled, only lost values.
Most terminal operations will run faster with rehash awareness disabled even without a rehash occurring. However, if a rehash occurs with this disabled, be prepared to possibly receive only a subset of values. A usage sketch follows this entry.
- Specified by:
disableRehashAware in interface BaseCacheStream<R,Stream<R>>
- Returns:
- a stream with rehash awareness disabled.
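A hedged sketch (assumes a running Cache<String, String> named cache): a best-effort count where missing some entries during a concurrent rebalance is acceptable in exchange for less tracking overhead.

long approximateSize = cache.keySet().stream()
      .disableRehashAware()
      .count();   // may undercount if a rehash happens mid-operation, never double-counts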
-
timeout
CacheStream<R> timeout(long timeout, TimeUnit unit)
Sets a given time to wait for a remote operation to respond by. This timeout does nothing if the terminal operation does not go remote.
If a timeout does occur, a TimeoutException is thrown from the terminal operation invoking thread or on the next call to the Iterator or Spliterator.
Note that if a rehash occurs, this timeout value is reset for the subsequent retry if rehash awareness is enabled. A usage sketch follows this entry.
- Specified by:
timeout in interface BaseCacheStream<R,Stream<R>>
- Parameters:
timeout - the maximum time to wait
unit - the time unit of the timeout argument
- Returns:
- a stream with the timeout set
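A hedged sketch (assumes a running Cache<String, String> named cache, and that the TimeoutException referenced above is Infinispan's unchecked org.infinispan.util.concurrent.TimeoutException).

try {
   long size = cache.keySet().stream()
         .timeout(30, TimeUnit.SECONDS)   // java.util.concurrent.TimeUnit
         .count();
   System.out.println("size=" + size);
} catch (org.infinispan.util.concurrent.TimeoutException e) {
   // a remote node did not respond within 30 seconds
}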
-
forEach
void forEach(Consumer<? super R> action)
This operation is performed remotely on the node that is the primary owner for the key tied to the entry(s) in this stream.
NOTE: While this method is rehash aware, it has the lowest consistency of all of the operators. The operation will be performed on every entry at least once in the cluster, as long as the originator doesn't go down while it is being performed. This is due to how the distributed action is performed. Essentially the distributedBatchSize(int) value controls how many elements are processed per node at a time when rehash is enabled. After those are complete, the keys are sent to the originator to confirm that they were processed. If that node goes down during or before the response, those keys will be processed a second time.
It is possible to have the cache local to each node injected into this instance if the provided Consumer also implements the CacheAware interface. The injection will be performed before the consumer's accept() method is invoked.
This method is run distributed by default with a distributed backing cache. However, if you wish for this operation to run locally, you can use stream().iterator().forEachRemaining(action) for a single-threaded variant. If you wish to have a parallel variant, you can use StreamSupport.stream(Spliterator, boolean), passing in the spliterator from the stream. In either case remember you must close the stream after you are done processing the iterator or spliterator. A sketch of both variants follows this entry.
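A hedged sketch of both variants described above (assumes a running Cache<String, String> named cache). The lambda passed to forEach picks the Serializable overload, runs on the primary owner of each entry, and should be idempotent because entries may be re-processed after a rehash.

// Distributed variant: the action executes on the owning nodes, so the output
// appears in their logs, not on the originator.
cache.entrySet().stream()
      .forEach(e -> System.out.println("saw " + e.getKey()));

// Local single-threaded variant from the description above; the stream must be
// closed because iterator() is used.
try (CacheStream<Map.Entry<String, String>> stream = cache.entrySet().stream()) {
   stream.iterator().forEachRemaining(e -> System.out.println("saw " + e.getKey()));
}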
-
forEach
default void forEach(SerializableConsumer<? super R> action)
Same as forEach(Consumer) except that the Consumer must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Parameters:
action
- consumer to be run for each element in the stream
-
forEach
<K,V> void forEach(BiConsumer<Cache<K,V>,? super R> action)
Same as forEach(Consumer) except that it takes a BiConsumer that provides access to the underlying Cache that is backing this stream.
Note that the CacheAware interface is not supported for injection using this method, as the cache is provided in the consumer directly. A usage sketch follows this entry.
- Type Parameters:
K
- key type of the cache
V - value type of the cache
- Parameters:
action - consumer to be run for each element in the stream
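A hedged sketch (assumes a running Cache<String, String> named cache): the BiConsumer receives the node-local Cache, so results can be written back without capturing an outer, non-serializable cache reference; the lambda picks the SerializableBiConsumer overload.

// Upper-case every value, writing through the cache handed to the consumer on each node.
cache.entrySet().stream()
      .forEach((c, e) -> c.put(e.getKey(), e.getValue().toUpperCase()));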
-
forEach
default <K,V> void forEach(SerializableBiConsumer<Cache<K,V>,? super R> action)
- Type Parameters:
K
- key type of the cache
V - value type of the cache
- Parameters:
action - consumer to be run for each element in the stream
-
iterator
Iterator<R> iterator()
Usage of this operator requires closing this stream after you are done with the iterator. The preferred usage is a try-with-resources block on the stream.
This method has special usage with the BaseCacheStream.SegmentCompletionListener in that, as entries are retrieved from the next method, it will complete segments.
This method obeys the distributedBatchSize(int). Note that when using methods such as flatMap(Function) you may have more than one element mapped to a given key, so this doesn't guarantee that exactly that many entries are returned per batch.
Note that the Iterator.remove() method is only supported if no intermediate operations have been applied to the stream and this is not a stream created from a Cache.values() collection. A usage sketch follows this entry.
- Specified by:
iterator in interface BaseStream<R,Stream<R>>
- Returns:
- the element iterator for this stream
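A hedged sketch of the Iterator.remove() support described above (assumes a running Cache<String, String> named cache; the "tmp-" key prefix is illustrative): the keySet stream qualifies because no intermediate operations are applied and it is not a values() stream.

try (CacheStream<String> stream = cache.keySet().stream()) {
   Iterator<String> it = stream.iterator();
   while (it.hasNext()) {
      if (it.next().startsWith("tmp-")) {
         it.remove();   // removes the corresponding entry from the cache
      }
   }
}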
-
spliterator
Spliterator<R> spliterator()
Usage of this operator requires closing this stream after you are done with the spliterator. The preferred usage is a try-with-resources block on the stream.
- Specified by:
spliterator in interface BaseStream<R,Stream<R>>
- Returns:
- the element spliterator for this stream
-
sorted
CacheStream<R> sorted()
This operation is performed entirely on the local node irrespective of the backing cache. This operation will act as an intermediate iterator operation requiring data be brought locally for proper behavior. Beware that this means it will require having all entries of this cache in memory at one time. This is described in more detail in the CacheStream documentation.
Any subsequent intermediate operations and the terminal operation are also performed locally.
-
sorted
CacheStream<R> sorted(Comparator<? super R> comparator)
This operation is performed entirely on the local node irrespective of the backing cache. This operation will act as an intermediate iterator operation requiring data be brought locally for proper behavior. Beware that this means it will require having all entries of this cache in memory at one time. This is described in more detail in the CacheStream documentation.
Any subsequent intermediate operations and the terminal operation are then performed locally.
-
sorted
default CacheStream<R> sorted(SerializableComparator<? super R> comparator)
Same as sorted(Comparator) except that the Comparator must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Parameters:
comparator - a non-interfering, stateless Comparator to be used to compare stream elements
- Returns:
- the new stream
-
limit
CacheStream<R> limit(long maxSize)
This intermediate operation will be performed both remotely and locally to reduce how many elements are sent back from each node. More specifically this operation is applied remotely on each node to only return up to the maxSize value and then the aggregated results are limited once again on the local node.
This operation will act as an intermediate iterator operation requiring data be brought locally for proper behavior. This is described in more detail in the CacheStream documentation.
Any subsequent intermediate operations and the terminal operation are then performed locally.
-
skip
CacheStream<R> skip(long n)
This operation is performed entirely on the local node irrespective of the backing cache. This operation will act as an intermediate iterator operation requiring data be brought locally for proper behavior. This is described in more detail in the CacheStream documentation.
Depending on the terminal operator, this may or may not require all entries, or a subset after skip is applied, to be in memory all at once.
Any subsequent intermediate operations and the terminal operation are then performed locally.
-
peek
CacheStream<R> peek(Consumer<? super R> action)
-
peek
default CacheStream<R> peek(SerializableConsumer<? super R> action)
Same as peek(Consumer) except that the Consumer must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Parameters:
action - a non-interfering action to perform on the elements as they are consumed from the stream
- Returns:
- the new stream
-
distinct
CacheStream<R> distinct()
This operation will be invoked both remotely and locally when used with a distributed cache backing this stream. This operation will act as an intermediate iterator operation requiring data be brought locally for proper behavior. This is described in more detail in the CacheStream documentation.
This intermediate iterator operation will be performed locally and remotely, possibly requiring a subset of all elements to be in memory.
Any subsequent intermediate operations and the terminal operation are then performed locally.
-
collect
<R1,A> R1 collect(Collector<? super R,A,R1> collector)
Note that when using a distributed backing cache for this stream, the collector must be marshallable. This prevents the direct usage of the Collectors class. However, you can use the CacheCollectors static factory methods to create a serializable wrapper, which then creates the actual collector lazily after being deserialized. This is useful to use any method from the Collectors class as you would normally. Alternatively, you can call collect(SerializableSupplier) too. A usage sketch follows this entry.
- Specified by:
collect in interface Stream<R>
- Type Parameters:
R1 - collected type
A - intermediate collected type if applicable
- Parameters:
collector -
- Returns:
- the collected value
- See Also:
CacheCollectors
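A hedged sketch (assumes a running Cache<String, String> named cache and the CacheCollectors.serializableCollector(...) factory from org.infinispan.stream; check that class for the exact factory names in your version): the Collector from Collectors is created lazily on each node, so only the supplier needs to be marshallable.

// import org.infinispan.stream.CacheCollectors; import java.util.stream.Collectors;
Map<Integer, List<String>> byLength = cache.values().stream()
      .collect(CacheCollectors.serializableCollector(
            () -> Collectors.groupingBy(String::length)));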
-
collect
default <R1> R1 collect(SerializableSupplier<Collector<? super R,?,R1>> supplier)
Performs a mutable reduction operation on the elements of this stream using a Collector that is lazily created from the SerializableSupplier provided. This method behaves exactly the same as collect(Collector), with the enhanced capability of working even when the mutable reduction operation has to run in a remote node and the operation is not Serializable or otherwise marshallable. So, this method is specially designed for situations where the user wants to use a Collector instance that has been created by Collectors static factory methods. In this particular case, the function that instantiates the Collector will be marshalled according to the Serializable rules. A usage sketch follows this entry.
- Type Parameters:
R1 - The resulting type of the collector
- Parameters:
supplier - The supplier to create the collector, which is specifically serializable
- Returns:
- the collected value
- Since:
- 9.2
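A hedged sketch (assumes a running Cache<String, String> named cache): the lambda picks the SerializableSupplier overload, so the non-marshallable Collector from Collectors is only instantiated after the supplier reaches each node.

// import java.util.stream.Collectors;
List<String> longValues = cache.values().stream()
      .filter(v -> v.length() > 16)
      .collect(() -> Collectors.toList());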
-
collect
default <R1> R1 collect(Supplier<Collector<? super R,?,R1>> supplier)
Performs a mutable reduction operation on the elements of this stream using a Collector that is lazily created from the Supplier provided. This method behaves exactly the same as collect(Collector), with the enhanced capability of working even when the mutable reduction operation has to run in a remote node and the operation is not Serializable or otherwise marshallable. So, this method is specially designed for situations where the user wants to use a Collector instance that has been created by Collectors static factory methods. In this particular case, the function that instantiates the Collector will be marshalled using the Infinispan Externalizer class or one of its subtypes.
- Type Parameters:
R1 - The resulting type of the collector
- Parameters:
supplier - The supplier to create the collector
- Returns:
- the collected value
- Since:
- 9.2
-
collect
default <R1> R1 collect(SerializableSupplier<R1> supplier, SerializableBiConsumer<R1,? super R> accumulator, SerializableBiConsumer<R1,R1> combiner)
Same as Stream.collect(Supplier, BiConsumer, BiConsumer) except that the various arguments must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable. A usage sketch follows this entry.
- Type Parameters:
R1 - type of the result
- Parameters:
supplier - a function that creates a new result container. For a parallel execution, this function may be called multiple times and must return a fresh value each time. Must be serializable
accumulator - an associative, non-interfering, stateless function for incorporating an additional element into a result, which must be serializable
combiner - an associative, non-interfering, stateless function for combining two values, which must be compatible with the accumulator function and serializable
- Returns:
- the result of the reduction
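A hedged sketch (assumes a running Cache<String, String> named cache): the method references target the Serializable parameter types of this overload, so the classic container-based reduction can be shipped to remote nodes.

// import java.util.ArrayList;
ArrayList<String> collected = cache.values().stream()
      .collect(ArrayList::new, ArrayList::add, ArrayList::addAll);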
-
allMatch
default boolean allMatch(SerializablePredicate<? super R> predicate)
Same as Stream.allMatch(Predicate) except that the Predicate must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Parameters:
predicate - a non-interfering, stateless predicate to apply to elements of this stream that is serializable
- Returns:
true if either all elements of the stream match the provided predicate or the stream is empty, otherwise false
-
noneMatch
default boolean noneMatch(SerializablePredicate<? super R> predicate)
Same as Stream.noneMatch(Predicate) except that the Predicate must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Parameters:
predicate - a non-interfering, stateless predicate to apply to elements of this stream that is serializable
- Returns:
true if either no elements of the stream match the provided predicate or the stream is empty, otherwise false
-
anyMatch
default boolean anyMatch(SerializablePredicate<? super R> predicate)
Same as Stream.anyMatch(Predicate) except that the Predicate must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Parameters:
predicate - a non-interfering, stateless predicate to apply to elements of this stream that is serializable
- Returns:
true if any elements of the stream match the provided predicate, otherwise false
-
max
default Optional<R> max(SerializableComparator<? super R> comparator)
Same as Stream.max(Comparator) except that the Comparator must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Parameters:
comparator - a non-interfering, stateless Comparator to compare elements of this stream that is also serializable
- Returns:
- an Optional describing the maximum element of this stream, or an empty Optional if the stream is empty
-
min
default Optional<R> min(SerializableComparator<? super R> comparator)
Same as Stream.min(Comparator) except that the Comparator must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Parameters:
comparator - a non-interfering, stateless Comparator to compare elements of this stream that is also serializable
- Returns:
- an Optional describing the minimum element of this stream, or an empty Optional if the stream is empty
-
reduce
default Optional<R> reduce(SerializableBinaryOperator<R> accumulator)
Same as Stream.reduce(BinaryOperator) except that the BinaryOperator must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Parameters:
accumulator - an associative, non-interfering, stateless function for combining two values that is also serializable
- Returns:
- an Optional describing the result of the reduction
-
reduce
default R reduce(R identity, SerializableBinaryOperator<R> accumulator)
Same as Stream.reduce(Object, BinaryOperator) except that the BinaryOperator must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Parameters:
identity - the identity value for the accumulating function
accumulator - an associative, non-interfering, stateless function for combining two values that is also serializable
- Returns:
- the result of the reduction
-
reduce
default <U> U reduce(U identity, SerializableBiFunction<U,? super R,U> accumulator, SerializableBinaryOperator<U> combiner)
Same as Stream.reduce(Object, BiFunction, BinaryOperator) except that the BiFunction and BinaryOperator must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Type Parameters:
U - The type of the result
- Parameters:
identity - the identity value for the combiner function
accumulator - an associative, non-interfering, stateless function for incorporating an additional element into a result that is also serializable
combiner - an associative, non-interfering, stateless function for combining two values, which must be compatible with the accumulator function and is also serializable
- Returns:
- the result of the reduction
-
toArray
default <A> A[] toArray(SerializableIntFunction<A[]> generator)
Same as Stream.toArray(IntFunction) except that the IntFunction must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Type Parameters:
A - the element type of the resulting array
- Parameters:
generator - a function which produces a new array of the desired type and the provided length that is also serializable
- Returns:
- an array containing the elements in this stream
-
filter
CacheStream<R> filter(Predicate<? super R> predicate)
-
filter
default CacheStream<R> filter(SerializablePredicate<? super R> predicate)
Same as filter(Predicate) except that the Predicate must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Parameters:
predicate - a non-interfering, stateless predicate to apply to each element to determine if it should be included
- Returns:
- the new cache stream
-
map
<R1> CacheStream<R1> map(Function<? super R,? extends R1> mapper)
-
map
default <R1> CacheStream<R1> map(SerializableFunction<? super R,? extends R1> mapper)
Same as map(Function) except that the Function must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Type Parameters:
R1 - The element type of the new stream
- Parameters:
mapper - a non-interfering, stateless function to apply to each element
- Returns:
- the new cache stream
-
mapToDouble
DoubleCacheStream mapToDouble(ToDoubleFunction<? super R> mapper)
- Specified by:
mapToDouble
in interface Stream<R>
- Parameters:
mapper - a non-interfering, stateless function to apply to each element
- Returns:
- the new double cache stream
-
mapToDouble
default DoubleCacheStream mapToDouble(SerializableToDoubleFunction<? super R> mapper)
Same as mapToDouble(ToDoubleFunction) except that the ToDoubleFunction must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Parameters:
mapper - a non-interfering, stateless function to apply to each element
- Returns:
- the new stream
-
mapToInt
IntCacheStream mapToInt(ToIntFunction<? super R> mapper)
-
mapToInt
default IntCacheStream mapToInt(SerializableToIntFunction<? super R> mapper)
Same as mapToInt(ToIntFunction) except that the ToIntFunction must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Parameters:
mapper - a non-interfering, stateless function to apply to each element
- Returns:
- the new stream
-
mapToLong
LongCacheStream mapToLong(ToLongFunction<? super R> mapper)
-
mapToLong
default LongCacheStream mapToLong(SerializableToLongFunction<? super R> mapper)
Same as mapToLong(ToLongFunction) except that the ToLongFunction must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Parameters:
mapper - a non-interfering, stateless function to apply to each element
- Returns:
- the new stream
-
flatMap
<R1> CacheStream<R1> flatMap(Function<? super R,? extends Stream<? extends R1>> mapper)
-
flatMap
default <R1> CacheStream<R1> flatMap(SerializableFunction<? super R,? extends Stream<? extends R1>> mapper)
Same as flatMap(Function) except that the Function must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Type Parameters:
R1 - The element type of the new stream
- Parameters:
mapper - a non-interfering, stateless function to apply to each element which produces a stream of new values
- Returns:
- the new cache stream
-
flatMapToDouble
DoubleCacheStream flatMapToDouble(Function<? super R,? extends DoubleStream> mapper)
- Specified by:
flatMapToDouble
in interface Stream<R>
- Returns:
- the new cache stream
-
flatMapToDouble
default DoubleCacheStream flatMapToDouble(SerializableFunction<? super R,? extends DoubleStream> mapper)
Same as flatMapToDouble(Function) except that the Function must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Parameters:
mapper - a non-interfering, stateless function to apply to each element which produces a stream of new values
- Returns:
- the new stream
-
flatMapToInt
IntCacheStream flatMapToInt(Function<? super R,? extends IntStream> mapper)
- Specified by:
flatMapToInt
in interface Stream<R>
- Returns:
- the new cache stream
-
flatMapToInt
default IntCacheStream flatMapToInt(SerializableFunction<? super R,? extends IntStream> mapper)
Same as flatMapToInt(Function) except that the Function must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Parameters:
mapper - a non-interfering, stateless function to apply to each element which produces a stream of new values
- Returns:
- the new stream
-
flatMapToLong
LongCacheStream flatMapToLong(Function<? super R,? extends LongStream> mapper)
- Specified by:
flatMapToLong
in interface Stream<R>
- Returns:
- the new cache stream
-
flatMapToLong
default LongCacheStream flatMapToLong(SerializableFunction<? super R,? extends LongStream> mapper)
Same as flatMapToLong(Function) except that the Function must also implement Serializable.
The compiler will pick this overload for lambda parameters, making them Serializable.
- Parameters:
mapper - a non-interfering, stateless function to apply to each element which produces a stream of new values
- Returns:
- the new stream
-
parallel
CacheStream<R> parallel()
- Specified by:
parallel
in interface BaseStream<R,Stream<R>>
- Returns:
- a parallel cache stream
-
sequential
CacheStream<R> sequential()
- Specified by:
sequential
in interface BaseStream<R,Stream<R>>
- Returns:
- a sequential cache stream
-
unordered
CacheStream<R> unordered()
- Specified by:
unordered
in interface BaseStream<R,Stream<R>>
- Returns:
- an unordered cache stream
-
onClose
CacheStream<R> onClose(Runnable closeHandler)
- Specified by:
onClose
in interface BaseStream<R,Stream<R>>
- Parameters:
closeHandler
-
- Returns:
- a cache stream with the handler applied
-
-