public abstract class ParallelFlowable<T> extends Object

Type Parameters: T - the value type

Abstract base class for parallel publishing of events signaled to an array of Subscribers.

Use from(Publisher) to start processing a regular Publisher in 'rails'.
Use runOn(Scheduler) to specify where each 'rail' should run, thread-wise.
Use sequential() to merge the sources back into a single Flowable.

History: 2.0.5 - experimental; 2.1 - beta
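For orientation, a minimal sketch of that pipeline (the scheduler, range and operator choices are illustrative):

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.schedulers.Schedulers;

// Split the source into rails, process them in parallel, then merge back.
Flowable.range(1, 1_000)
        .parallel()                       // one rail per available CPU by default
        .runOn(Schedulers.computation())  // each rail observes its values on its own worker
        .map(v -> v * v)                  // runs concurrently, rail by rail
        .sequential()                     // merge the rails back into a single Flowable
        .blockingSubscribe(System.out::println);
```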
| Constructor and Description | 
|---|
| ParallelFlowable() | 

| Modifier and Type | Method and Description | 
|---|---|
| <A,R> @NonNull Flowable<R> | collect(@NonNull Collector<T,A,R> collector)Reduces all values within a 'rail' and across 'rails' with the callbacks
 of the given Collector into a single sequential value. | 
| <C> @NonNull ParallelFlowable<C> | collect(@NonNull Supplier<? extends C> collectionSupplier,
        @NonNull BiConsumer<? super C,? super T> collector)Collect the elements in each rail into a collection supplied via a collectionSupplier
 and collected into with a collector action, emitting the collection at the end. | 
| <U> @NonNull ParallelFlowable<U> | compose(@NonNull ParallelTransformer<T,U> composer)Allows composing operators, in assembly time, on top of this ParallelFlowable
 and returns another ParallelFlowable with composed features. | 
| <R> @NonNull ParallelFlowable<R> | concatMap(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper)Generates and concatenates Publishers on each 'rail', signalling errors immediately
 and generating 2 publishers upfront. | 
| <R> @NonNull ParallelFlowable<R> | concatMap(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper,
        int prefetch)Generates and concatenates Publishers on each 'rail', signalling errors immediately
 and using the given prefetch amount for generating Publishers upfront. | 
| <R> @NonNull ParallelFlowable<R> | concatMapDelayError(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper,
        boolean tillTheEnd)Generates and concatenates Publishers on each 'rail', optionally delaying errors
 and generating 2 publishers upfront. | 
| <R> @NonNull ParallelFlowable<R> | concatMapDelayError(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper,
        int prefetch,
        boolean tillTheEnd)Generates and concatenates Publishers on each 'rail', optionally delaying errors
 and using the given prefetch amount for generating Publishers upfront. | 
| @NonNull ParallelFlowable<T> | doAfterNext(@NonNull Consumer<? super T> onAfterNext)Call the specified consumer with the current element passing through any 'rail'
 after it has been delivered to downstream within the rail. | 
| @NonNull ParallelFlowable<T> | doAfterTerminated(@NonNull Action onAfterTerminate)Run the specified Action when a 'rail' completes or signals an error. | 
| @NonNull ParallelFlowable<T> | doOnCancel(@NonNull Action onCancel)Run the specified Action when a 'rail' receives a cancellation. | 
| @NonNull ParallelFlowable<T> | doOnComplete(@NonNull Action onComplete)Run the specified Action when a 'rail' completes. | 
| @NonNull ParallelFlowable<T> | doOnError(@NonNull Consumer<Throwable> onError)Call the specified consumer with the exception passing through any 'rail'. | 
| @NonNull ParallelFlowable<T> | doOnNext(@NonNull Consumer<? super T> onNext)Call the specified consumer with the current element passing through any 'rail'. | 
| @NonNull ParallelFlowable<T> | doOnNext(@NonNull Consumer<? super T> onNext,
        @NonNull BiFunction<? super Long,? super Throwable,ParallelFailureHandling> errorHandler)Call the specified consumer with the current element passing through any 'rail' and
 handles errors based on the returned value by the handler function. | 
| @NonNull ParallelFlowable<T> | doOnNext(@NonNull Consumer<? super T> onNext,
        @NonNull ParallelFailureHandling errorHandler)Call the specified consumer with the current element passing through any 'rail' and
 handles errors based on the given ParallelFailureHandling enumeration value. | 
| @NonNull ParallelFlowable<T> | doOnRequest(@NonNull LongConsumer onRequest)Call the specified consumer with the request amount if any rail receives a request. | 
| @NonNull ParallelFlowable<T> | doOnSubscribe(@NonNull Consumer<? super Subscription> onSubscribe)Call the specified callback when a 'rail' receives a Subscription from its upstream. | 
| @NonNull ParallelFlowable<T> | filter(@NonNull Predicate<? super T> predicate)Filters the source values on each 'rail'. | 
| @NonNull ParallelFlowable<T> | filter(@NonNull Predicate<? super T> predicate,
        @NonNull BiFunction<? super Long,? super Throwable,ParallelFailureHandling> errorHandler)Filters the source values on each 'rail' and
 handles errors based on the returned value by the handler function. | 
| @NonNull ParallelFlowable<T> | filter(@NonNull Predicate<? super T> predicate,
        @NonNull ParallelFailureHandling errorHandler)Filters the source values on each 'rail' and
 handles errors based on the given ParallelFailureHandling enumeration value. | 
| <R> @NonNull ParallelFlowable<R> | flatMap(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper)Generates and flattens Publishers on each 'rail'. | 
| <R> @NonNull ParallelFlowable<R> | flatMap(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper,
        boolean delayError)Generates and flattens Publishers on each 'rail', optionally delaying errors. | 
| <R> @NonNull ParallelFlowable<R> | flatMap(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper,
        boolean delayError,
        int maxConcurrency)Generates and flattens Publishers on each 'rail', optionally delaying errors
 and having a total number of simultaneous subscriptions to the inner Publishers. | 
| <R> @NonNull ParallelFlowable<R> | flatMap(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper,
        boolean delayError,
        int maxConcurrency,
        int prefetch)Generates and flattens Publishers on each 'rail', optionally delaying errors,
 having a total number of simultaneous subscriptions to the inner Publishers
 and using the given prefetch amount for the inner Publishers. | 
| <U> @NonNull ParallelFlowable<U> | flatMapIterable(@NonNull Function<? super T,? extends Iterable<? extends U>> mapper)Returns a ParallelFlowable that merges each item emitted by the source on each rail with the values in an Iterable corresponding to that item that is generated by a selector. | 
| <U> @NonNull ParallelFlowable<U> | flatMapIterable(@NonNull Function<? super T,? extends Iterable<? extends U>> mapper,
        int bufferSize)Returns a  ParallelFlowablethat merges each item emitted by the sourceParallelFlowablewith the values in an
 Iterable corresponding to that item that is generated by a selector. | 
| <R> @NonNull ParallelFlowable<R> | flatMapStream(@NonNull Function<? super T,? extends Stream<? extends R>> mapper)Maps each upstream item on each rail into a Stream and emits the Stream's items to the downstream in a sequential fashion. | 
| <R> @NonNull ParallelFlowable<R> | flatMapStream(@NonNull Function<? super T,? extends Stream<? extends R>> mapper,
        int prefetch)Maps each upstream item of each rail into a Stream and emits the Stream's items to the downstream in a sequential fashion. | 
| static <T> @NonNull ParallelFlowable<T> | from(@NonNull Publisher<? extends T> source)Take a Publisher and prepare to consume it on multiple 'rails' (number of CPUs)
 in a round-robin fashion. | 
| static <T> @NonNull ParallelFlowable<T> | from(@NonNull Publisher<? extends T> source,
        int parallelism)Take a Publisher and prepare to consume it on parallelism number of 'rails' in a round-robin fashion. | 
| static <T> @NonNull ParallelFlowable<T> | from(@NonNull Publisher<? extends T> source,
        int parallelism,
        int prefetch)Take a Publisher and prepare to consume it on parallelism number of 'rails',
 possibly ordered in a round-robin fashion, using a custom prefetch amount and queue
 for dealing with the source Publisher's values. | 
| static <T> @NonNull ParallelFlowable<T> | fromArray(Publisher<T>... publishers)Wraps multiple Publishers into a ParallelFlowable which runs them
 in parallel and unordered. | 
| <R> @NonNull ParallelFlowable<R> | map(@NonNull Function<? super T,? extends R> mapper)Maps the source values on each 'rail' to another value. | 
| <R> @NonNull ParallelFlowable<R> | map(@NonNull Function<? super T,? extends R> mapper,
        @NonNull BiFunction<? super Long,? super Throwable,ParallelFailureHandling> errorHandler)Maps the source values on each 'rail' to another value and
 handles errors based on the returned value by the handler function. | 
| <R> @NonNull ParallelFlowable<R> | map(@NonNull Function<? super T,? extends R> mapper,
        @NonNull ParallelFailureHandling errorHandler)Maps the source values on each 'rail' to another value and
 handles errors based on the given ParallelFailureHandling enumeration value. | 
| <R> @NonNull ParallelFlowable<R> | mapOptional(@NonNull Function<? super T,Optional<? extends R>> mapper)Maps the source values on each 'rail' to an optional and emits its value if any. | 
| <R> @NonNull ParallelFlowable<R> | mapOptional(@NonNull Function<? super T,Optional<? extends R>> mapper,
        @NonNull BiFunction<? super Long,? super Throwable,ParallelFailureHandling> errorHandler)Maps the source values on each 'rail' to an optional and emits its value if any and
 handles errors based on the returned value by the handler function. | 
| <R> @NonNull ParallelFlowable<R> | mapOptional(@NonNull Function<? super T,Optional<? extends R>> mapper,
        @NonNull ParallelFailureHandling errorHandler)Maps the source values on each 'rail' to an optional and emits its value if any and
 handles errors based on the given ParallelFailureHandling enumeration value. | 
| abstract int | parallelism()Returns the number of expected parallel Subscribers. | 
| @NonNull Flowable<T> | reduce(@NonNull BiFunction<T,T,T> reducer)Reduces all values within a 'rail' and across 'rails' with a reducer function into a single
 sequential value. | 
| <R> @NonNull ParallelFlowable<R> | reduce(@NonNull Supplier<R> initialSupplier,
        @NonNull BiFunction<R,? super T,R> reducer)Reduces all values within a 'rail' to a single value (with a possibly different type) via
 a reducer function that is initialized on each rail from an initialSupplier value. | 
| @NonNull ParallelFlowable<T> | runOn(@NonNull Scheduler scheduler)Specifies where each 'rail' will observe its incoming values with
 no work-stealing and default prefetch amount. | 
| @NonNull ParallelFlowable<T> | runOn(@NonNull Scheduler scheduler,
        int prefetch)Specifies where each 'rail' will observe its incoming values with
 possibly work-stealing and a given prefetch amount. | 
| @NonNull Flowable<T> | sequential()Merges the values from each 'rail' in a round-robin or same-order fashion and
 exposes it as a regular Publisher sequence, running with a default prefetch value
 for the rails. | 
| @NonNull Flowable<T> | sequential(int prefetch)Merges the values from each 'rail' in a round-robin or same-order fashion and
 exposes it as a regular Publisher sequence, running with a given prefetch value
 for the rails. | 
| @NonNull Flowable<T> | sequentialDelayError()Merges the values from each 'rail' in a round-robin or same-order fashion and
 exposes it as a regular Flowable sequence, running with a default prefetch value
 for the rails and delaying errors from all rails till all terminate. | 
| @NonNull Flowable<T> | sequentialDelayError(int prefetch)Merges the values from each 'rail' in a round-robin or same-order fashion and
 exposes it as a regular Publisher sequence, running with a given prefetch value
 for the rails and delaying errors from all rails till all terminate. | 
| @NonNull Flowable<T> | sorted(@NonNull Comparator<? super T> comparator)Sorts the 'rails' of this ParallelFlowable and returns a Publisher that sequentially
 picks the smallest next value from the rails. | 
| @NonNull Flowable<T> | sorted(@NonNull Comparator<? super T> comparator,
        int capacityHint)Sorts the 'rails' of this ParallelFlowable and returns a Publisher that sequentially
 picks the smallest next value from the rails. | 
| abstract void | subscribe(@NonNull Subscriber<? super T>[] subscribers)Subscribes an array of Subscribers to this ParallelFlowable and triggers
 the execution chain for all 'rails'. | 
| <R> R | to(@NonNull ParallelFlowableConverter<T,R> converter)Calls the specified converter function during assembly time and returns its resulting value. | 
| @NonNull Flowable<List<T>> | toSortedList(@NonNull Comparator<? super T> comparator)Sorts the 'rails' according to the comparator and returns a full sorted list as a Publisher. | 
| @NonNull Flowable<List<T>> | toSortedList(@NonNull Comparator<? super T> comparator,
        int capacityHint)Sorts the 'rails' according to the comparator and returns a full sorted list as a Publisher. | 
| protected boolean | validate(@NonNull Subscriber<?>[] subscribers)Validates the number of subscribers and returns true if their number
 matches the parallelism level of this ParallelFlowable. | 
subscribe

@BackpressureSupport(value=SPECIAL) @SchedulerSupport(value="none")
public abstract void subscribe(@NonNull Subscriber<? super T>[] subscribers)

Subscribes an array of Subscribers to this ParallelFlowable and triggers the execution chain for all 'rails'.
Scheduler: subscribe does not operate by default on a particular Scheduler.
Parameters: subscribers - the subscribers array to run in parallel; the number of items must be equal to the parallelism level of this ParallelFlowable
See Also: parallelism()

parallelism

@CheckReturnValue
public abstract int parallelism()

Returns the number of expected parallel Subscribers.

validate

protected final boolean validate(@NonNull Subscriber<?>[] subscribers)

Validates the number of subscribers and returns true if their number matches the parallelism level of this ParallelFlowable.
Parameters: subscribers - the array of Subscribers

from

@CheckReturnValue @NonNull @SchedulerSupport(value="none") @BackpressureSupport(value=FULL)
public static <T> @NonNull ParallelFlowable<T> from(@NonNull Publisher<? extends T> source)

Take a Publisher and prepare to consume it on multiple 'rails' (number of CPUs) in a round-robin fashion.
Backpressure: the operator requests the Flowable.bufferSize() amount from the upstream, followed by 75% of that amount requested after every 75% received.
Scheduler: from does not operate by default on a particular Scheduler.
Type Parameters: T - the value type
Parameters: source - the source Publisher

from

@CheckReturnValue @NonNull @SchedulerSupport(value="none") @BackpressureSupport(value=FULL)
public static <T> @NonNull ParallelFlowable<T> from(@NonNull Publisher<? extends T> source, int parallelism)

Take a Publisher and prepare to consume it on parallelism number of 'rails' in a round-robin fashion.
Backpressure: the operator requests the Flowable.bufferSize() amount from the upstream, followed by 75% of that amount requested after every 75% received.
Scheduler: from does not operate by default on a particular Scheduler.
Type Parameters: T - the value type
Parameters: source - the source Publisher; parallelism - the number of parallel rails

from

@CheckReturnValue @NonNull @SchedulerSupport(value="none") @BackpressureSupport(value=FULL)
public static <T> @NonNull ParallelFlowable<T> from(@NonNull Publisher<? extends T> source, int parallelism, int prefetch)

Take a Publisher and prepare to consume it on parallelism number of 'rails', possibly ordered in a round-robin fashion, using a custom prefetch amount and queue for dealing with the source Publisher's values.
Backpressure: the operator requests the prefetch amount from the upstream, followed by 75% of that amount requested after every 75% received.
Scheduler: from does not operate by default on a particular Scheduler.
Type Parameters: T - the value type
Parameters: source - the source Publisher; parallelism - the number of parallel rails; prefetch - the number of values to prefetch from the source; values are held until there is a rail ready to process them
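A short sketch of creating rails explicitly (the parallelism and prefetch values are illustrative):

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.parallel.ParallelFlowable;

// Consume the source on 4 rails, prefetching 16 items, handed out round-robin.
ParallelFlowable<Integer> rails =
        ParallelFlowable.from(Flowable.range(1, 100), 4, 16);
```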
map

@CheckReturnValue @NonNull @SchedulerSupport(value="none") @BackpressureSupport(value=PASS_THROUGH)
public final <R> @NonNull ParallelFlowable<R> map(@NonNull Function<? super T,? extends R> mapper)

Maps the source values on each 'rail' to another value.
Note that the same mapper function may be called from multiple threads concurrently.
Scheduler: map does not operate by default on a particular Scheduler.
Type Parameters: R - the output value type
Parameters: mapper - the mapper function turning Ts into Rs

map

@CheckReturnValue @NonNull @SchedulerSupport(value="none") @BackpressureSupport(value=PASS_THROUGH)
public final <R> @NonNull ParallelFlowable<R> map(@NonNull Function<? super T,? extends R> mapper, @NonNull ParallelFailureHandling errorHandler)

Maps the source values on each 'rail' to another value and handles errors based on the given ParallelFailureHandling enumeration value.
Note that the same mapper function may be called from multiple threads concurrently.
Scheduler: map does not operate by default on a particular Scheduler.
History: 2.0.8 - experimental
Type Parameters: R - the output value type
Parameters: mapper - the mapper function turning Ts into Rs; errorHandler - the enumeration that defines how to handle errors thrown from the mapper function

map

@CheckReturnValue @NonNull @SchedulerSupport(value="none") @BackpressureSupport(value=PASS_THROUGH)
public final <R> @NonNull ParallelFlowable<R> map(@NonNull Function<? super T,? extends R> mapper, @NonNull BiFunction<? super Long,? super Throwable,ParallelFailureHandling> errorHandler)

Maps the source values on each 'rail' to another value and handles errors based on the value returned by the handler function.
Note that the same mapper function may be called from multiple threads concurrently.
Scheduler: map does not operate by default on a particular Scheduler.
History: 2.0.8 - experimental
Type Parameters: R - the output value type
Parameters: mapper - the mapper function turning Ts into Rs; errorHandler - the function called with the current repeat count and failure Throwable, which should return one of the ParallelFailureHandling enumeration values to indicate how to proceed
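A sketch of the error-handling overload; ParallelFailureHandling.SKIP drops the offending item so the rail keeps running:

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.parallel.ParallelFailureHandling;

Flowable.range(1, 10)
        .parallel()
        .map(v -> 10 / (v - 5), ParallelFailureHandling.SKIP) // v == 5 throws and is skipped
        .sequential()
        .blockingSubscribe(System.out::println);
```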
filter

@CheckReturnValue @NonNull @SchedulerSupport(value="none") @BackpressureSupport(value=PASS_THROUGH)
public final @NonNull ParallelFlowable<T> filter(@NonNull Predicate<? super T> predicate)

Filters the source values on each 'rail'.
Note that the same predicate may be called from multiple threads concurrently.
Scheduler: filter does not operate by default on a particular Scheduler.
Parameters: predicate - the function returning true to keep a value or false to drop a value

filter

@CheckReturnValue @NonNull @SchedulerSupport(value="none") @BackpressureSupport(value=PASS_THROUGH)
public final @NonNull ParallelFlowable<T> filter(@NonNull Predicate<? super T> predicate, @NonNull ParallelFailureHandling errorHandler)

Filters the source values on each 'rail' and handles errors based on the given ParallelFailureHandling enumeration value.
Note that the same predicate may be called from multiple threads concurrently.
Scheduler: filter does not operate by default on a particular Scheduler.
History: 2.0.8 - experimental
Parameters: predicate - the function returning true to keep a value or false to drop a value; errorHandler - the enumeration that defines how to handle errors thrown from the predicate

filter

@CheckReturnValue @NonNull @SchedulerSupport(value="none") @BackpressureSupport(value=PASS_THROUGH)
public final @NonNull ParallelFlowable<T> filter(@NonNull Predicate<? super T> predicate, @NonNull BiFunction<? super Long,? super Throwable,ParallelFailureHandling> errorHandler)

Filters the source values on each 'rail' and handles errors based on the value returned by the handler function.
Note that the same predicate may be called from multiple threads concurrently.
Scheduler: filter does not operate by default on a particular Scheduler.
History: 2.0.8 - experimental
Parameters: predicate - the function returning true to keep a value or false to drop a value; errorHandler - the function called with the current repeat count and failure Throwable, which should return one of the ParallelFailureHandling enumeration values to indicate how to proceed
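A sketch of filtering on rails; the predicate should be stateless (or thread-safe) because rails may evaluate it concurrently:

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.schedulers.Schedulers;

Flowable.range(1, 1_000)
        .parallel()
        .runOn(Schedulers.computation())
        .filter(v -> v % 3 == 0)  // stateless predicate, safe on any thread
        .sequential()
        .blockingSubscribe(System.out::println);
```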
runOn

@CheckReturnValue @NonNull @BackpressureSupport(value=FULL) @SchedulerSupport(value="custom")
public final @NonNull ParallelFlowable<T> runOn(@NonNull Scheduler scheduler)

Specifies where each 'rail' will observe its incoming values, with no work-stealing and the default prefetch amount.

This operator uses the default prefetch size returned by Flowable.bufferSize().

The operator will call Scheduler.createWorker() as many times as this ParallelFlowable's parallelism level is.

No assumptions are made about the Scheduler's parallelism level; if the Scheduler's parallelism level is lower than the ParallelFlowable's, some rails may end up on the same thread/worker.

This operator doesn't require the Scheduler to be trampolining as it does its own built-in trampolining logic.

Backpressure: the operator requests the Flowable.bufferSize() amount from the upstream, followed by 75% of that amount requested after every 75% received.
Scheduler: runOn drains the upstream rails on the specified Scheduler's Workers.
Parameters: scheduler - the scheduler to use

runOn

@CheckReturnValue @NonNull @BackpressureSupport(value=FULL) @SchedulerSupport(value="custom")
public final @NonNull ParallelFlowable<T> runOn(@NonNull Scheduler scheduler, int prefetch)

Specifies where each 'rail' will observe its incoming values, with possibly work-stealing and a given prefetch amount.

This operator uses the given prefetch size instead of the default returned by Flowable.bufferSize().

The operator will call Scheduler.createWorker() as many times as this ParallelFlowable's parallelism level is.

No assumptions are made about the Scheduler's parallelism level; if the Scheduler's parallelism level is lower than the ParallelFlowable's, some rails may end up on the same thread/worker.

This operator doesn't require the Scheduler to be trampolining as it does its own built-in trampolining logic.

Backpressure: the operator requests the prefetch amount from the upstream, followed by 75% of that amount requested after every 75% received.
Scheduler: runOn drains the upstream rails on the specified Scheduler's Workers.
Parameters: scheduler - the scheduler to use; prefetch - the number of values to request on each 'rail' from the source
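A sketch of assigning rails to workers (parallelism 4 and prefetch 32 are illustrative):

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.schedulers.Schedulers;

Flowable.range(1, 100)
        .parallel(4)                          // 4 rails
        .runOn(Schedulers.computation(), 32)  // one worker per rail, 32 items prefetched each
        .map(v -> v * v)
        .sequential()
        .blockingSubscribe(System.out::println);
```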
reduce

@CheckReturnValue @NonNull @BackpressureSupport(value=UNBOUNDED_IN) @SchedulerSupport(value="none")
public final @NonNull Flowable<T> reduce(@NonNull BiFunction<T,T,T> reducer)

Reduces all values within a 'rail' and across 'rails' with a reducer function into a single sequential value.
Note that the same reducer function may be called from multiple threads concurrently.
Backpressure: the operator honors the backpressure of the downstream and consumes the upstream in an unbounded manner (requesting Long.MAX_VALUE).
Scheduler: reduce does not operate by default on a particular Scheduler.
Parameters: reducer - the function to reduce two values into one

reduce

@CheckReturnValue @NonNull @BackpressureSupport(value=UNBOUNDED_IN) @SchedulerSupport(value="none")
public final <R> @NonNull ParallelFlowable<R> reduce(@NonNull Supplier<R> initialSupplier, @NonNull BiFunction<R,? super T,R> reducer)

Reduces all values within a 'rail' to a single value (with a possibly different type) via a reducer function that is initialized on each rail from an initialSupplier value.
Note that the same reducer function may be called from multiple threads concurrently.
Backpressure: the operator honors the backpressure of the downstream and consumes the upstream in an unbounded manner (requesting Long.MAX_VALUE).
Scheduler: reduce does not operate by default on a particular Scheduler.
Type Parameters: R - the reduced output type
Parameters: initialSupplier - the supplier for the initial value; reducer - the function to reduce a previous output of reduce (or the initial value supplied) with a current source value
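A sketch of the one-argument reduce folding within and across rails into a single total:

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.schedulers.Schedulers;

Flowable.range(1, 100)
        .parallel()
        .runOn(Schedulers.computation())
        .reduce(Integer::sum)  // partial sums per rail, then combined across rails
        .blockingSubscribe(total -> System.out.println("total = " + total)); // 5050
```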
sequential

@BackpressureSupport(value=FULL) @SchedulerSupport(value="none") @CheckReturnValue @NonNull
public final @NonNull Flowable<T> sequential()

Merges the values from each 'rail' in a round-robin or same-order fashion and exposes it as a regular Publisher sequence, running with a default prefetch value for the rails.

This operator uses the default prefetch size returned by Flowable.bufferSize().

Backpressure: the operator requests the Flowable.bufferSize() amount from each rail, then requests from each rail 75% of this amount after 75% received.
Scheduler: sequential does not operate by default on a particular Scheduler.
See Also: sequential(int), sequentialDelayError()

sequential

@BackpressureSupport(value=FULL) @SchedulerSupport(value="none") @CheckReturnValue @NonNull
public final @NonNull Flowable<T> sequential(int prefetch)

Merges the values from each 'rail' in a round-robin or same-order fashion and exposes it as a regular Publisher sequence, running with a given prefetch value for the rails.

Backpressure: the operator requests the prefetch amount from each rail, then requests from each rail 75% of this amount after 75% received.
Scheduler: sequential does not operate by default on a particular Scheduler.
Parameters: prefetch - the prefetch amount to use for each rail
See Also: sequential(), sequentialDelayError(int)

sequentialDelayError

@BackpressureSupport(value=FULL) @SchedulerSupport(value="none") @CheckReturnValue @NonNull
public final @NonNull Flowable<T> sequentialDelayError()

Merges the values from each 'rail' in a round-robin or same-order fashion and exposes it as a regular Flowable sequence, running with a default prefetch value for the rails and delaying errors from all rails till all terminate.

This operator uses the default prefetch size returned by Flowable.bufferSize().

Backpressure: the operator requests the Flowable.bufferSize() amount from each rail, then requests from each rail 75% of this amount after 75% received.
Scheduler: sequentialDelayError does not operate by default on a particular Scheduler.
History: 2.0.7 - experimental
See Also: sequentialDelayError(int), sequential()

sequentialDelayError

@BackpressureSupport(value=FULL) @SchedulerSupport(value="none") @CheckReturnValue @NonNull
public final @NonNull Flowable<T> sequentialDelayError(int prefetch)

Merges the values from each 'rail' in a round-robin or same-order fashion and exposes it as a regular Publisher sequence, running with a given prefetch value for the rails and delaying errors from all rails till all terminate.

Backpressure: the operator requests the prefetch amount from each rail, then requests from each rail 75% of this amount after 75% received.
Scheduler: sequentialDelayError does not operate by default on a particular Scheduler.
History: 2.0.7 - experimental
Parameters: prefetch - the prefetch amount to use for each rail
See Also: sequential(), sequentialDelayError()
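A sketch contrasting the merge styles; sequentialDelayError holds errors back until every rail has terminated (prefetch 16 is illustrative):

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.schedulers.Schedulers;

Flowable<Integer> merged = Flowable.range(1, 100)
        .parallel()
        .runOn(Schedulers.computation())
        .map(v -> 100 / (50 - v))   // the rail processing v == 50 fails
        .sequentialDelayError(16);  // the other rails still deliver all their items first
```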
sorted

@CheckReturnValue @NonNull @BackpressureSupport(value=UNBOUNDED_IN) @SchedulerSupport(value="none")
public final @NonNull Flowable<T> sorted(@NonNull Comparator<? super T> comparator)

Sorts the 'rails' of this ParallelFlowable and returns a Publisher that sequentially picks the smallest next value from the rails.
This operator requires a finite source ParallelFlowable.
Backpressure: the operator honors the backpressure of the downstream and consumes the upstream in an unbounded manner (requesting Long.MAX_VALUE).
Scheduler: sorted does not operate by default on a particular Scheduler.
Parameters: comparator - the comparator to use

sorted

@CheckReturnValue @NonNull @BackpressureSupport(value=UNBOUNDED_IN) @SchedulerSupport(value="none")
public final @NonNull Flowable<T> sorted(@NonNull Comparator<? super T> comparator, int capacityHint)

Sorts the 'rails' of this ParallelFlowable and returns a Publisher that sequentially picks the smallest next value from the rails.
This operator requires a finite source ParallelFlowable.
Backpressure: the operator honors the backpressure of the downstream and consumes the upstream in an unbounded manner (requesting Long.MAX_VALUE).
Scheduler: sorted does not operate by default on a particular Scheduler.
Parameters: comparator - the comparator to use; capacityHint - the expected number of total elements

toSortedList

@CheckReturnValue @NonNull @BackpressureSupport(value=UNBOUNDED_IN) @SchedulerSupport(value="none")
public final @NonNull Flowable<List<T>> toSortedList(@NonNull Comparator<? super T> comparator)

Sorts the 'rails' according to the comparator and returns a full sorted list as a Publisher.
This operator requires a finite source ParallelFlowable.
Backpressure: the operator honors the backpressure of the downstream and consumes the upstream in an unbounded manner (requesting Long.MAX_VALUE).
Scheduler: toSortedList does not operate by default on a particular Scheduler.
Parameters: comparator - the comparator to compare elements

toSortedList

@CheckReturnValue @NonNull @BackpressureSupport(value=UNBOUNDED_IN) @SchedulerSupport(value="none")
public final @NonNull Flowable<List<T>> toSortedList(@NonNull Comparator<? super T> comparator, int capacityHint)

Sorts the 'rails' according to the comparator and returns a full sorted list as a Publisher.
This operator requires a finite source ParallelFlowable.
Backpressure: the operator honors the backpressure of the downstream and consumes the upstream in an unbounded manner (requesting Long.MAX_VALUE).
Scheduler: toSortedList does not operate by default on a particular Scheduler.
Parameters: comparator - the comparator to compare elements; capacityHint - the expected number of total elements
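A sketch of a parallel sort; each rail sorts its share and sorted() merge-picks the smallest head (the capacity hint is illustrative):

```java
import java.util.Comparator;
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.schedulers.Schedulers;

Flowable.fromArray(5, 3, 9, 1, 4, 8, 2, 7)
        .parallel()
        .runOn(Schedulers.computation())
        .sorted(Comparator.<Integer>naturalOrder(), 16)
        .blockingSubscribe(System.out::println); // 1 2 3 4 5 7 8 9
```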
doOnNext

@CheckReturnValue @NonNull @BackpressureSupport(value=PASS_THROUGH) @SchedulerSupport(value="none")
public final @NonNull ParallelFlowable<T> doOnNext(@NonNull Consumer<? super T> onNext)

Call the specified consumer with the current element passing through any 'rail'.
Scheduler: doOnNext does not operate by default on a particular Scheduler.
Parameters: onNext - the callback

doOnNext

@CheckReturnValue @NonNull @BackpressureSupport(value=PASS_THROUGH) @SchedulerSupport(value="none")
public final @NonNull ParallelFlowable<T> doOnNext(@NonNull Consumer<? super T> onNext, @NonNull ParallelFailureHandling errorHandler)

Call the specified consumer with the current element passing through any 'rail' and handles errors based on the given ParallelFailureHandling enumeration value.
Scheduler: doOnNext does not operate by default on a particular Scheduler.
History: 2.0.8 - experimental
Parameters: onNext - the callback; errorHandler - the enumeration that defines how to handle errors thrown from the onNext consumer

doOnNext

@CheckReturnValue @NonNull @BackpressureSupport(value=PASS_THROUGH) @SchedulerSupport(value="none")
public final @NonNull ParallelFlowable<T> doOnNext(@NonNull Consumer<? super T> onNext, @NonNull BiFunction<? super Long,? super Throwable,ParallelFailureHandling> errorHandler)

Call the specified consumer with the current element passing through any 'rail' and handles errors based on the value returned by the handler function.
Scheduler: doOnNext does not operate by default on a particular Scheduler.
History: 2.0.8 - experimental
Parameters: onNext - the callback; errorHandler - the function called with the current repeat count and failure Throwable, which should return one of the ParallelFailureHandling enumeration values to indicate how to proceed

doAfterNext

@CheckReturnValue @NonNull @BackpressureSupport(value=PASS_THROUGH) @SchedulerSupport(value="none")
public final @NonNull ParallelFlowable<T> doAfterNext(@NonNull Consumer<? super T> onAfterNext)

Call the specified consumer with the current element passing through any 'rail' after it has been delivered to downstream within the rail.
Scheduler: doAfterNext does not operate by default on a particular Scheduler.
Parameters: onAfterNext - the callback

doOnError

@CheckReturnValue @NonNull @BackpressureSupport(value=PASS_THROUGH) @SchedulerSupport(value="none")
public final @NonNull ParallelFlowable<T> doOnError(@NonNull Consumer<Throwable> onError)

Call the specified consumer with the exception passing through any 'rail'.
Scheduler: doOnError does not operate by default on a particular Scheduler.
Parameters: onError - the callback

doOnComplete

@CheckReturnValue @NonNull @BackpressureSupport(value=PASS_THROUGH) @SchedulerSupport(value="none")
public final @NonNull ParallelFlowable<T> doOnComplete(@NonNull Action onComplete)

Run the specified Action when a 'rail' completes.
Scheduler: doOnComplete does not operate by default on a particular Scheduler.
Parameters: onComplete - the callback

doAfterTerminated

@CheckReturnValue @NonNull @BackpressureSupport(value=PASS_THROUGH) @SchedulerSupport(value="none")
public final @NonNull ParallelFlowable<T> doAfterTerminated(@NonNull Action onAfterTerminate)

Run the specified Action when a 'rail' completes or signals an error.
Scheduler: doAfterTerminated does not operate by default on a particular Scheduler.
Parameters: onAfterTerminate - the callback

doOnSubscribe

@CheckReturnValue @NonNull @BackpressureSupport(value=PASS_THROUGH) @SchedulerSupport(value="none")
public final @NonNull ParallelFlowable<T> doOnSubscribe(@NonNull Consumer<? super Subscription> onSubscribe)

Call the specified callback when a 'rail' receives a Subscription from its upstream.
Scheduler: doOnSubscribe does not operate by default on a particular Scheduler.
Parameters: onSubscribe - the callback

doOnRequest

@CheckReturnValue @NonNull @BackpressureSupport(value=PASS_THROUGH) @SchedulerSupport(value="none")
public final @NonNull ParallelFlowable<T> doOnRequest(@NonNull LongConsumer onRequest)

Call the specified consumer with the request amount if any rail receives a request.
Scheduler: doOnRequest does not operate by default on a particular Scheduler.
Parameters: onRequest - the callback

doOnCancel

@CheckReturnValue @NonNull @BackpressureSupport(value=PASS_THROUGH) @SchedulerSupport(value="none")
public final @NonNull ParallelFlowable<T> doOnCancel(@NonNull Action onCancel)

Run the specified Action when a 'rail' receives a cancellation.
Scheduler: doOnCancel does not operate by default on a particular Scheduler.
Parameters: onCancel - the callback
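A sketch of per-rail side effects; the thread name shows which worker a value passed through:

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.schedulers.Schedulers;

Flowable.range(1, 20)
        .parallel()
        .runOn(Schedulers.computation())
        .doOnNext(v -> System.out.println(Thread.currentThread().getName() + " -> " + v))
        .doOnComplete(() -> System.out.println("a rail completed"))
        .sequential()
        .blockingSubscribe();
```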
collect

@CheckReturnValue @NonNull @BackpressureSupport(value=UNBOUNDED_IN) @SchedulerSupport(value="none")
public final <C> @NonNull ParallelFlowable<C> collect(@NonNull Supplier<? extends C> collectionSupplier, @NonNull BiConsumer<? super C,? super T> collector)

Collect the elements in each rail into a collection supplied via a collectionSupplier and collected into with a collector action, emitting the collection at the end.
Backpressure: the operator honors the backpressure of the downstream and consumes the upstream in an unbounded manner (requesting Long.MAX_VALUE).
Scheduler: collect does not operate by default on a particular Scheduler.
Type Parameters: C - the collection type
Parameters: collectionSupplier - the supplier of the collection in each rail; collector - the collector, taking the per-rail collection and the current item

fromArray

@CheckReturnValue @NonNull @SafeVarargs @BackpressureSupport(value=PASS_THROUGH) @SchedulerSupport(value="none")
public static <T> @NonNull ParallelFlowable<T> fromArray(@NonNull Publisher<T>... publishers)

Wraps multiple Publishers into a ParallelFlowable which runs them in parallel and unordered.
Scheduler: fromArray does not operate by default on a particular Scheduler.
Type Parameters: T - the value type
Parameters: publishers - the array of publishers

to

@CheckReturnValue @NonNull @BackpressureSupport(value=PASS_THROUGH) @SchedulerSupport(value="none")
public final <R> R to(@NonNull ParallelFlowableConverter<T,R> converter)

Calls the specified converter function during assembly time and returns its resulting value.
This allows fluent conversion to any other type.
Scheduler: to does not operate by default on a particular Scheduler.
History: 2.1.7 - experimental
Type Parameters: R - the resulting object type
Parameters: converter - the function that receives the current ParallelFlowable instance and returns a value
Throws: NullPointerException - if converter is null
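A sketch of collecting each rail into its own container (one list is emitted per rail):

```java
import java.util.ArrayList;
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.schedulers.Schedulers;

Flowable.range(1, 100)
        .parallel(4)
        .runOn(Schedulers.computation())
        .collect(ArrayList<Integer>::new, ArrayList::add) // 4 lists of roughly 25 items
        .sequential()
        .blockingSubscribe(list -> System.out.println(list.size()));
```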
compose

@CheckReturnValue @NonNull @BackpressureSupport(value=PASS_THROUGH) @SchedulerSupport(value="none")
public final <U> @NonNull ParallelFlowable<U> compose(@NonNull ParallelTransformer<T,U> composer)

Allows composing operators, in assembly time, on top of this ParallelFlowable and returns another ParallelFlowable with composed features.
Scheduler: compose does not operate by default on a particular Scheduler.
Type Parameters: U - the output value type
Parameters: composer - the composer function from ParallelFlowable (this) to another ParallelFlowable

flatMap

@CheckReturnValue @NonNull @BackpressureSupport(value=FULL) @SchedulerSupport(value="none")
public final <R> @NonNull ParallelFlowable<R> flatMap(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper)

Generates and flattens Publishers on each 'rail'.
Errors are not delayed and the operator uses unbounded concurrency along with the default inner prefetch.
Backpressure: the operator requests the Flowable.bufferSize() amount from each rail upfront and keeps requesting as many items per rail as many inner sources on that rail completed. The inner sources are requested the Flowable.bufferSize() amount upfront, then 75% of this amount requested after 75% received.
Scheduler: flatMap does not operate by default on a particular Scheduler.
Type Parameters: R - the result type
Parameters: mapper - the function to map each rail's value into a Publisher
flatMap

@CheckReturnValue @NonNull @BackpressureSupport(value=FULL) @SchedulerSupport(value="none")
public final <R> @NonNull ParallelFlowable<R> flatMap(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper, boolean delayError)

Generates and flattens Publishers on each 'rail', optionally delaying errors.
It uses unbounded concurrency along with the default inner prefetch.
Backpressure: the operator requests the Flowable.bufferSize() amount from each rail upfront and keeps requesting as many items per rail as many inner sources on that rail completed. The inner sources are requested the Flowable.bufferSize() amount upfront, then 75% of this amount requested after 75% received.
Scheduler: flatMap does not operate by default on a particular Scheduler.
Type Parameters: R - the result type
Parameters: mapper - the function to map each rail's value into a Publisher; delayError - should the errors from the main and the inner sources be delayed till everybody terminates?

flatMap

@CheckReturnValue @NonNull @BackpressureSupport(value=FULL) @SchedulerSupport(value="none")
public final <R> @NonNull ParallelFlowable<R> flatMap(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper, boolean delayError, int maxConcurrency)

Generates and flattens Publishers on each 'rail', optionally delaying errors and having a total number of simultaneous subscriptions to the inner Publishers.
It uses the default inner prefetch.
Backpressure: the operator requests the maxConcurrency amount from each rail upfront and keeps requesting as many items per rail as many inner sources on that rail completed. The inner sources are requested the Flowable.bufferSize() amount upfront, then 75% of this amount requested after 75% received.
Scheduler: flatMap does not operate by default on a particular Scheduler.
Type Parameters: R - the result type
Parameters: mapper - the function to map each rail's value into a Publisher; delayError - should the errors from the main and the inner sources be delayed till everybody terminates?; maxConcurrency - the maximum number of simultaneous subscriptions to the generated inner Publishers

flatMap

@CheckReturnValue @NonNull @BackpressureSupport(value=FULL) @SchedulerSupport(value="none")
public final <R> @NonNull ParallelFlowable<R> flatMap(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper, boolean delayError, int maxConcurrency, int prefetch)

Generates and flattens Publishers on each 'rail', optionally delaying errors, having a total number of simultaneous subscriptions to the inner Publishers and using the given prefetch amount for the inner Publishers.
Backpressure: the operator requests the maxConcurrency amount from each rail upfront and keeps requesting as many items per rail as many inner sources on that rail completed. The inner sources are requested the prefetch amount upfront, then 75% of this amount requested after 75% received.
Scheduler: flatMap does not operate by default on a particular Scheduler.
Type Parameters: R - the result type
Parameters: mapper - the function to map each rail's value into a Publisher; delayError - should the errors from the main and the inner sources be delayed till everybody terminates?; maxConcurrency - the maximum number of simultaneous subscriptions to the generated inner Publishers; prefetch - the number of items to prefetch from each inner Publisher
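A sketch of the fully parameterized flatMap (the delayError, maxConcurrency and prefetch values are illustrative):

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.schedulers.Schedulers;

Flowable.range(1, 10)
        .parallel()
        .runOn(Schedulers.io())
        .flatMap(v -> Flowable.just(v, v * 10),  // two inner items per upstream item
                 false,                          // signal errors immediately
                 2,                              // at most 2 inner subscriptions per rail
                 8)                              // prefetch 8 from each inner Publisher
        .sequential()
        .blockingSubscribe(System.out::println);
```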
concatMap

@CheckReturnValue @NonNull @BackpressureSupport(value=FULL) @SchedulerSupport(value="none")
public final <R> @NonNull ParallelFlowable<R> concatMap(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper)

Generates and concatenates Publishers on each 'rail', signalling errors immediately and generating 2 publishers upfront.
Errors between the source and the inner Publishers can be handled in one of three modes (immediate, boundary, end); concatMap uses the immediate mode, while the concatMapDelayError variants delay them.
Scheduler: concatMap does not operate by default on a particular Scheduler.
Type Parameters: R - the result type
Parameters: mapper - the function to map each rail's value into a Publisher

concatMap

@CheckReturnValue @NonNull @BackpressureSupport(value=FULL) @SchedulerSupport(value="none")
public final <R> @NonNull ParallelFlowable<R> concatMap(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper, int prefetch)

Generates and concatenates Publishers on each 'rail', signalling errors immediately and using the given prefetch amount for generating Publishers upfront.
Backpressure: the operator requests the prefetch amount from each rail upfront and keeps requesting 75% of this amount after 75% received and the inner sources completed. Requests for the inner sources are determined by the downstream rails' backpressure behavior.
Scheduler: concatMap does not operate by default on a particular Scheduler.
Type Parameters: R - the result type
Parameters: mapper - the function to map each rail's value into a Publisher; prefetch - the number of items to prefetch from each inner Publisher

concatMapDelayError

@CheckReturnValue @NonNull @BackpressureSupport(value=FULL) @SchedulerSupport(value="none")
public final <R> @NonNull ParallelFlowable<R> concatMapDelayError(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper, boolean tillTheEnd)

Generates and concatenates Publishers on each 'rail', optionally delaying errors and generating 2 publishers upfront.
Scheduler: concatMapDelayError does not operate by default on a particular Scheduler.
Type Parameters: R - the result type
Parameters: mapper - the function to map each rail's value into a Publisher; tillTheEnd - if true all errors from the upstream and inner Publishers are delayed till all of them terminate, if false, the error is emitted when an inner Publisher terminates

concatMapDelayError

@CheckReturnValue @NonNull @BackpressureSupport(value=FULL) @SchedulerSupport(value="none")
public final <R> @NonNull ParallelFlowable<R> concatMapDelayError(@NonNull Function<? super T,? extends Publisher<? extends R>> mapper, int prefetch, boolean tillTheEnd)

Generates and concatenates Publishers on each 'rail', optionally delaying errors and using the given prefetch amount for generating Publishers upfront.
Backpressure: the operator requests the prefetch amount from each rail upfront and keeps requesting 75% of this amount after 75% received and the inner sources completed. Requests for the inner sources are determined by the downstream rails' backpressure behavior.
Scheduler: concatMapDelayError does not operate by default on a particular Scheduler.
Type Parameters: R - the result type
Parameters: mapper - the function to map each rail's value into a Publisher; prefetch - the number of items to prefetch from each inner Publisher; tillTheEnd - if true all errors from the upstream and inner Publishers are delayed till all of them terminate, if false, the error is emitted when an inner Publisher terminates
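A sketch of ordered, error-delaying concatenation on rails (prefetch 2 and tillTheEnd=true are illustrative):

```java
import io.reactivex.rxjava3.core.Flowable;

Flowable.range(1, 10)
        .parallel()
        .concatMapDelayError(v -> Flowable.just(v, -v), 2, true) // keeps per-rail order
        .sequential()
        .blockingSubscribe(System.out::println, Throwable::printStackTrace);
```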
flatMapIterable

@CheckReturnValue @NonNull @BackpressureSupport(value=FULL) @SchedulerSupport(value="none")
public final <U> @NonNull ParallelFlowable<U> flatMapIterable(@NonNull Function<? super T,? extends Iterable<? extends U>> mapper)

Returns a ParallelFlowable that merges each item emitted by the source on each rail with the values in an Iterable corresponding to that item that is generated by a selector.

Backpressure: the operator honors backpressure from the downstream rails and the source ParallelFlowable is expected to honor backpressure as well. If the source ParallelFlowable violates the rule, the operator will signal a MissingBackpressureException.
Scheduler: flatMapIterable does not operate by default on a particular Scheduler.
Type Parameters: U - the type of item emitted by the resulting Iterable
Parameters: mapper - a function that returns an Iterable sequence of values for a given item emitted by the source ParallelFlowable
See Also: flatMapStream(Function)

flatMapIterable

@CheckReturnValue @NonNull @BackpressureSupport(value=FULL) @SchedulerSupport(value="none")
public final <U> @NonNull ParallelFlowable<U> flatMapIterable(@NonNull Function<? super T,? extends Iterable<? extends U>> mapper, int bufferSize)

Returns a ParallelFlowable that merges each item emitted by the source ParallelFlowable with the values in an Iterable corresponding to that item that is generated by a selector.

Backpressure: the operator honors backpressure from the downstream rails and the source ParallelFlowable is expected to honor backpressure as well. If the source ParallelFlowable violates the rule, the operator will signal a MissingBackpressureException.
Scheduler: flatMapIterable does not operate by default on a particular Scheduler.
Type Parameters: U - the type of item emitted by the resulting Iterable
Parameters: mapper - a function that returns an Iterable sequence of values for a given item emitted by the source ParallelFlowable; bufferSize - the number of elements to prefetch from each upstream rail
Returns: the new ParallelFlowable instance
See Also: flatMapStream(Function, int)
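A sketch of expanding each item into an Iterable on its rail (the bufferSize of 16 is illustrative):

```java
import java.util.Arrays;
import io.reactivex.rxjava3.core.Flowable;

Flowable.range(1, 5)
        .parallel()
        .flatMapIterable(v -> Arrays.asList(v, v * 10), 16)
        .sequential()
        .blockingSubscribe(System.out::println);
```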
mapOptional

@CheckReturnValue @NonNull @SchedulerSupport(value="none") @BackpressureSupport(value=PASS_THROUGH)
public final <R> @NonNull ParallelFlowable<R> mapOptional(@NonNull Function<? super T,Optional<? extends R>> mapper)

Maps the source values on each 'rail' to an optional and emits its value if any.
Note that the same mapper function may be called from multiple threads concurrently.
Scheduler: mapOptional does not operate by default on a particular Scheduler.
Type Parameters: R - the output value type
Parameters: mapper - the mapper function turning Ts into optional of Rs

mapOptional

@CheckReturnValue @NonNull @SchedulerSupport(value="none") @BackpressureSupport(value=PASS_THROUGH)
public final <R> @NonNull ParallelFlowable<R> mapOptional(@NonNull Function<? super T,Optional<? extends R>> mapper, @NonNull ParallelFailureHandling errorHandler)

Maps the source values on each 'rail' to an optional and emits its value if any, handling errors based on the given ParallelFailureHandling enumeration value.
Note that the same mapper function may be called from multiple threads concurrently.
Scheduler: mapOptional does not operate by default on a particular Scheduler.
History: 2.0.8 - experimental
Type Parameters: R - the output value type
Parameters: mapper - the mapper function turning Ts into optional of Rs; errorHandler - the enumeration that defines how to handle errors thrown from the mapper function

mapOptional

@CheckReturnValue @NonNull @SchedulerSupport(value="none") @BackpressureSupport(value=PASS_THROUGH)
public final <R> @NonNull ParallelFlowable<R> mapOptional(@NonNull Function<? super T,Optional<? extends R>> mapper, @NonNull BiFunction<? super Long,? super Throwable,ParallelFailureHandling> errorHandler)

Maps the source values on each 'rail' to an optional and emits its value if any, handling errors based on the value returned by the handler function.
Note that the same mapper function may be called from multiple threads concurrently.
Scheduler: mapOptional does not operate by default on a particular Scheduler.
History: 2.0.8 - experimental
Type Parameters: R - the output value type
Parameters: mapper - the mapper function turning Ts into optional of Rs; errorHandler - the function called with the current repeat count and failure Throwable, which should return one of the ParallelFailureHandling enumeration values to indicate how to proceed
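A sketch of mapOptional as a fused map-plus-filter (empty Optionals are simply dropped):

```java
import java.util.Optional;
import io.reactivex.rxjava3.core.Flowable;

Flowable.range(1, 10)
        .parallel()
        .mapOptional(v -> v % 2 == 0 ? Optional.of(v * v) : Optional.empty())
        .sequential()
        .blockingSubscribe(System.out::println); // 4 16 36 64 100
```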
flatMapStream

@CheckReturnValue @NonNull @BackpressureSupport(value=FULL) @SchedulerSupport(value="none")
public final <R> @NonNull ParallelFlowable<R> flatMapStream(@NonNull Function<? super T,? extends Stream<? extends R>> mapper)

Maps each upstream item on each rail into a Stream and emits the Stream's items to the downstream in a sequential fashion.

Due to the blocking and sequential nature of Java Streams, the streams are mapped and consumed in a sequential fashion without interleaving (unlike a more general flatMap(Function)). Therefore, flatMapStream and concatMapStream are identical operators and are provided as aliases.

The operator closes the Stream upon cancellation and when it terminates. Exceptions raised when closing a Stream are routed to the global error handler (RxJavaPlugins.onError(Throwable)). If a Stream should not be closed, turn it into an Iterable and use flatMapIterable(Function):

  source.flatMapIterable(v -> createStream(v)::iterator);

Note that Streams can be consumed only once; any subsequent attempt to consume a Stream will result in an IllegalStateException.

Primitive streams are not supported and items have to be boxed manually (e.g., via IntStream.boxed()):

  source.flatMapStream(v -> IntStream.rangeClosed(v + 1, v + 10).boxed());

Also, Stream does not support concurrent usage so creating and/or consuming the same instance multiple times from multiple threads can lead to undefined behavior.

Backpressure: the operator requests Flowable.bufferSize() items of the upstream (then 75% of it after the 75% received) and caches them until they are ready to be mapped into Streams after the current Stream has been consumed.
Scheduler: flatMapStream does not operate by default on a particular Scheduler.
Type Parameters: R - the element type of the Streams and the result
Parameters: mapper - the function that receives an upstream item and should return a Stream whose elements will be emitted to the downstream
See Also: flatMap(Function), flatMapIterable(Function), flatMapStream(Function, int)
flatMapStream

@CheckReturnValue @NonNull @BackpressureSupport(value=FULL) @SchedulerSupport(value="none")
public final <R> @NonNull ParallelFlowable<R> flatMapStream(@NonNull Function<? super T,? extends Stream<? extends R>> mapper, int prefetch)

Maps each upstream item of each rail into a Stream and emits the Stream's items to the downstream in a sequential fashion.

Due to the blocking and sequential nature of Java Streams, the streams are mapped and consumed in a sequential fashion without interleaving (unlike a more general flatMap(Function)). Therefore, flatMapStream and concatMapStream are identical operators and are provided as aliases.

The operator closes the Stream upon cancellation and when it terminates. Exceptions raised when closing a Stream are routed to the global error handler (RxJavaPlugins.onError(Throwable)). If a Stream should not be closed, turn it into an Iterable and use flatMapIterable(Function, int):

  source.flatMapIterable(v -> createStream(v)::iterator, 32);

Note that Streams can be consumed only once; any subsequent attempt to consume a Stream will result in an IllegalStateException.

Primitive streams are not supported and items have to be boxed manually (e.g., via IntStream.boxed()):

  source.flatMapStream(v -> IntStream.rangeClosed(v + 1, v + 10).boxed(), 32);

Also, Stream does not support concurrent usage so creating and/or consuming the same instance multiple times from multiple threads can lead to undefined behavior.

Backpressure: the operator requests the prefetch amount of the upstream (then 75% of it after the 75% received) and caches items until they are ready to be mapped into Streams after the current Stream has been consumed.
Scheduler: flatMapStream does not operate by default on a particular Scheduler.
Type Parameters: R - the element type of the Streams and the result
Parameters: mapper - the function that receives an upstream item and should return a Stream whose elements will be emitted to the downstream; prefetch - the number of upstream items to request upfront, then 75% of this amount after each 75% upstream items received
See Also: flatMap(Function, boolean, int), flatMapIterable(Function, int)
collect

@CheckReturnValue @NonNull @BackpressureSupport(value=UNBOUNDED_IN) @SchedulerSupport(value="none")
public final <A,R> @NonNull Flowable<R> collect(@NonNull Collector<T,A,R> collector)

Reduces all values within a 'rail' and across 'rails' with the callbacks of the given Collector into a single sequential value.

Each parallel rail receives its own Collector.accumulator() and Collector.combiner().

Backpressure: the operator honors the backpressure of the downstream and consumes the upstream in an unbounded manner (requesting Long.MAX_VALUE).
Scheduler: collect does not operate by default on a particular Scheduler.
Type Parameters: A - the accumulator type; R - the output value type
Parameters: collector - the Collector instance
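A sketch of the Collector overload performing a parallel collection into one sequential result:

```java
import java.util.stream.Collectors;
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.schedulers.Schedulers;

// Each rail gets its own accumulator; the combiner merges the per-rail containers.
Flowable.range(1, 1_000)
        .parallel()
        .runOn(Schedulers.computation())
        .collect(Collectors.toList())
        .blockingSubscribe(list -> System.out.println(list.size())); // 1000
```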