pub(crate) struct GroupedHashAggregateStream {
schema: SchemaRef,
input: SendableRecordBatchStream,
mode: AggregateMode,
aggregate_arguments: Vec<Vec<Arc<dyn PhysicalExpr>>>,
filter_expressions: Vec<Option<Arc<dyn PhysicalExpr>>>,
group_by: PhysicalGroupBy,
batch_size: usize,
group_values_soft_limit: Option<usize>,
exec_state: ExecutionState,
input_done: bool,
group_values: Box<dyn GroupValues>,
current_group_indices: Vec<usize>,
accumulators: Vec<Box<dyn GroupsAccumulator>>,
group_ordering: GroupOrdering,
spill_state: SpillState,
skip_aggregation_probe: Option<SkipAggregationProbe>,
reservation: MemoryReservation,
baseline_metrics: BaselineMetrics,
group_by_metrics: GroupByMetrics,
reduction_factor: Option<RatioMetrics>,
}
HashTable based Grouping Aggregator
§Design Goals
This structure is designed so that updating the aggregates can be vectorized (done in a tight loop) without allocations. The accumulator state is not managed by this operator (e.g., it is not stored in the hash table); instead it is delegated to the individual accumulators, which have type-specialized inner loops that perform the aggregation.
§Architecture
Assigns a consecutive group           internally stores aggregate values
index for each unique set                     for all groups
    of group values
┌────────────┐ ┌──────────────┐ ┌──────────────┐
│ ┌────────┐ │ │┌────────────┐│ │┌────────────┐│
│ │ "A" │ │ ││accumulator ││ ││accumulator ││
│ ├────────┤ │ ││ 0 ││ ││ N ││
│ │ "Z" │ │ ││ ┌────────┐ ││ ││ ┌────────┐ ││
│ └────────┘ │ ││ │ state │ ││ ││ │ state │ ││
│ │ ││ │┌─────┐ │ ││ ... ││ │┌─────┐ │ ││
│ ... │ ││ │├─────┤ │ ││ ││ │├─────┤ │ ││
│ │ ││ │└─────┘ │ ││ ││ │└─────┘ │ ││
│ │ ││ │ │ ││ ││ │ │ ││
│ ┌────────┐ │ ││ │ ... │ ││ ││ │ ... │ ││
│ │ "Q" │ │ ││ │ │ ││ ││ │ │ ││
│ └────────┘ │ ││ │┌─────┐ │ ││ ││ │┌─────┐ │ ││
│ │ ││ │└─────┘ │ ││ ││ │└─────┘ │ ││
└────────────┘ ││ └────────┘ ││ ││ └────────┘ ││
│└────────────┘│ │└────────────┘│
└──────────────┘ └──────────────┘
group_values accumulators
For example, given a query like COUNT(x), SUM(y) ... GROUP BY z,
group_values will store the distinct values of z. There will
be one accumulator for COUNT(x), specialized for the data type
of x and one accumulator for SUM(y), specialized for the data
type of y.
§Discussion
group_values does not store any aggregate state inline. It only
assigns “group indices”, one for each (distinct) group value. The
accumulators manage the in-progress aggregate state for each
group, with the group values themselves stored in
group_values at the corresponding group index.
The accumulator state (e.g., partial sums) is managed by and stored
in a [GroupsAccumulator]. There is one accumulator
per aggregate expression (COUNT, AVG, etc.) in the
stream. Internally, each GroupsAccumulator manages the state for
multiple groups and is passed group_indexes during update. Note
that the accumulator state is not managed by this operator (e.g., it
is not stored in the hash table).
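To make this division of labor concrete, the following is a minimal, self-contained sketch of the same pattern using only standard-library types. ToyGroupValues and ToySumAccumulator are hypothetical stand-ins (not DataFusion's GroupValues or GroupsAccumulator): the first only interns group keys into consecutive group indices, and the second keeps per-group state in a flat vector addressed by those indices.

use std::collections::HashMap;

// Hypothetical stand-in for `group_values`: interns each distinct key and
// hands out consecutive group indices (0, 1, 2, ...). It stores no aggregate state.
#[derive(Default)]
struct ToyGroupValues {
    index_of: HashMap<String, usize>,
    keys: Vec<String>,
}

impl ToyGroupValues {
    // Return the group index for `key`, assigning a new one if it is unseen.
    fn intern(&mut self, key: &str) -> usize {
        if let Some(&idx) = self.index_of.get(key) {
            return idx;
        }
        let idx = self.keys.len();
        self.keys.push(key.to_string());
        self.index_of.insert(key.to_string(), idx);
        idx
    }
}

// Hypothetical stand-in for one accumulator: per-group running sums,
// addressed by group index, living entirely outside the "hash table".
#[derive(Default)]
struct ToySumAccumulator {
    sums: Vec<i64>,
}

impl ToySumAccumulator {
    // Vectorized update: one pass over the batch, indexed by group index.
    fn update_batch(&mut self, values: &[i64], group_indices: &[usize]) {
        for (&v, &g) in values.iter().zip(group_indices) {
            if g >= self.sums.len() {
                self.sums.resize(g + 1, 0);
            }
            self.sums[g] += v;
        }
    }
}

fn main() {
    // Roughly `SELECT SUM(y) ... GROUP BY z` with z = ["A", "Z", "A"], y = [1, 2, 3].
    let mut groups = ToyGroupValues::default();
    let mut sum = ToySumAccumulator::default();
    let group_indices: Vec<usize> = ["A", "Z", "A"].iter().map(|&k| groups.intern(k)).collect();
    sum.update_batch(&[1, 2, 3], &group_indices);
    assert_eq!(group_indices, vec![0, 1, 0]);
    assert_eq!(sum.sums, vec![4, 2]); // "A" -> 1 + 3, "Z" -> 2
}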
§Partial Aggregate and multi-phase grouping
As described on Accumulator::state, this operator is used in the context
of “multi-phase” grouping when the mode is AggregateMode::Partial.
An important optimization for multi-phase partial aggregation is to skip partial aggregation when it is not effective enough to warrant the memory or CPU cost, as is often the case for queries with many distinct groups (a high-cardinality GROUP BY). Memory is particularly important because each Partial aggregator must store intermediate state for every group.
If the ratio of the number of groups to the number of input rows exceeds a
threshold, and [GroupsAccumulator::supports_convert_to_state] is
supported, this operator will stop applying Partial aggregation and directly
pass the input rows to the next aggregation phase.
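As a hedged illustration of this heuristic, the decision can be sketched as a simple ratio test once enough input has been probed. The names and thresholds below are made up for illustration and are not DataFusion's SkipAggregationProbe API.

// Hypothetical sketch of the skip-partial-aggregation decision.
struct SkipProbe {
    probe_rows: usize,    // only decide after this many input rows have been seen
    ratio_threshold: f64, // e.g. 0.8: nearly every row is its own group
}

impl SkipProbe {
    fn should_skip(&self, input_rows: usize, num_groups: usize) -> bool {
        input_rows >= self.probe_rows
            && (num_groups as f64) / (input_rows as f64) > self.ratio_threshold
    }
}

fn main() {
    let probe = SkipProbe { probe_rows: 100_000, ratio_threshold: 0.8 };
    // High-cardinality input: partial aggregation barely reduces rows, so skip it.
    assert!(probe.should_skip(200_000, 190_000));
    // Low-cardinality input: keep aggregating partially.
    assert!(!probe.should_skip(200_000, 1_000));
}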
§Spilling (to disk)
The group values and accumulators can grow large. Before this causes the operator to run out of
memory, the hash aggregator either outputs its partial states early (for partial aggregation) or
spills to local disk in the Arrow IPC format (for final aggregation). For every input [RecordBatch],
the memory manager checks whether the additional input fits within the memory configuration; if not,
the operator emits early or spills. When emitting early, the final aggregation phase takes care of
re-grouping. When spilling, a streaming merge sort over the spilled data performs the re-grouping
when it is read back. Note that rows do not stay grouped once spilled to disk: the data read back
must be re-grouped, and since re-grouping could itself exhaust memory, it has to be a sort-based
aggregation. A rough sketch of this decision follows the diagrams below.
Partial Aggregation [batch_size = 2] (max memory = 3 rows)
INPUTS PARTIALLY AGGREGATED (UPDATE BATCH) OUTPUTS
┌─────────┐ ┌─────────────────┐ ┌─────────────────┐
│ a │ b │ │ a │ AVG(b) │ │ a │ AVG(b) │
│---│-----│ │ │[count]│[sum]│ │ │[count]│[sum]│
│ 3 │ 3.0 │ ─▶ │---│-------│-----│ │---│-------│-----│
│ 2 │ 2.0 │ │ 2 │ 1 │ 2.0 │ ─▶ early emit ─▶ │ 2 │ 1 │ 2.0 │
└─────────┘ │ 3 │ 2 │ 7.0 │ │ │ 3 │ 2 │ 7.0 │
┌─────────┐ ─▶ │ 4 │ 1 │ 8.0 │ │ └─────────────────┘
│ 3 │ 4.0 │ └─────────────────┘ └▶ ┌─────────────────┐
│ 4 │ 8.0 │ ┌─────────────────┐ │ 4 │ 1 │ 8.0 │
└─────────┘ │ a │ AVG(b) │ ┌▶ │ 1 │ 1 │ 1.0 │
┌─────────┐ │---│-------│-----│ │ └─────────────────┘
│ 1 │ 1.0 │ ─▶ │ 1 │ 1 │ 1.0 │ ─▶ early emit ─▶ ┌─────────────────┐
│ 3 │ 2.0 │ │ 3 │ 1 │ 2.0 │ │ 3 │ 1 │ 2.0 │
└─────────┘ └─────────────────┘ └─────────────────┘
Final Aggregation [batch_size = 2] (max memory = 3 rows)
PARTIALLY INPUTS FINAL AGGREGATION (MERGE BATCH) RE-GROUPED (SORTED)
┌─────────────────┐ [keep using the partial schema] [Real final aggregation
│ a │ AVG(b) │ ┌─────────────────┐ output]
│ │[count]│[sum]│ │ a │ AVG(b) │ ┌────────────┐
│---│-------│-----│ ─▶ │ │[count]│[sum]│ │ a │ AVG(b) │
│ 3 │ 3 │ 3.0 │ │---│-------│-----│ ─▶ spill ─┐ │---│--------│
│ 2 │ 2 │ 1.0 │ │ 2 │ 2 │ 1.0 │ │ │ 1 │ 4.0 │
└─────────────────┘ │ 3 │ 4 │ 8.0 │ ▼ │ 2 │ 1.0 │
┌─────────────────┐ ─▶ │ 4 │ 1 │ 7.0 │ Streaming ─▶ └────────────┘
│ 3 │ 1 │ 5.0 │ └─────────────────┘ merge sort ─▶ ┌────────────┐
│ 4 │ 1 │ 7.0 │ ┌─────────────────┐ ▲ │ a │ AVG(b) │
└─────────────────┘ │ a │ AVG(b) │ │ │---│--------│
┌─────────────────┐ │---│-------│-----│ ─▶ memory ─┘ │ 3 │ 2.0 │
│ 1 │ 2 │ 8.0 │ ─▶ │ 1 │ 2 │ 8.0 │ │ 4 │ 7.0 │
│ 2 │ 2 │ 3.0 │ │ 2 │ 2 │ 3.0 │ └────────────┘
└─────────────────┘ └─────────────────┘
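The per-batch reaction to memory pressure shown in the diagrams above can be summarized as follows. This is an illustrative outline with hypothetical names, not the operator's actual control flow.

// Illustrative only: Partial aggregations emit their current groups early,
// Final aggregations spill sorted intermediate state to disk.
enum Mode {
    Partial,
    Final,
}

enum Action {
    Continue,  // memory reservation grew successfully
    EmitEarly, // partial: the next aggregation phase re-groups the early output
    Spill,     // final: re-grouped later by a streaming merge sort on read-back
}

fn on_memory_pressure(mode: &Mode, reservation_grew: bool) -> Action {
    if reservation_grew {
        return Action::Continue;
    }
    match mode {
        Mode::Partial => Action::EmitEarly,
        Mode::Final => Action::Spill,
    }
}

fn main() {
    assert!(matches!(on_memory_pressure(&Mode::Partial, false), Action::EmitEarly));
    assert!(matches!(on_memory_pressure(&Mode::Final, false), Action::Spill));
    assert!(matches!(on_memory_pressure(&Mode::Final, true), Action::Continue));
}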
Fields§
schema: SchemaRef
input: SendableRecordBatchStream
mode: AggregateMode
aggregate_arguments: Vec<Vec<Arc<dyn PhysicalExpr>>>
Arguments to pass to each accumulator.
The arguments passed to accumulator[i] are aggregate_arguments[i].
The argument to each accumulator is itself a Vec because some aggregates, such as CORR, can accept more than one argument.
filter_expressions: Vec<Option<Arc<dyn PhysicalExpr>>>
Optional filter expression to evaluate, one for each accumulator. If present, only rows for which the filter evaluates to true should be included in the aggregate results (see the sketch after this field list).
For example, for an aggregate like SUM(x) FILTER (WHERE x >= 100), the filter expression is x >= 100.
group_by: PhysicalGroupBy
GROUP BY expressions
batch_size: usize
max rows in output RecordBatches
group_values_soft_limit: Option<usize>
Optional soft limit on the number of group_values in a batch
If the number of group_values in a single batch exceeds this value, the GroupedHashAggregateStream operation immediately switches to output mode and emits all groups.
exec_state: ExecutionState
Tracks if this stream is generating input or output
input_done: bool
Have we seen the end of the input
group_values: Box<dyn GroupValues>
An interning store of group keys
current_group_indices: Vec<usize>
scratch space for the current input [RecordBatch] being processed. Reused across batches here to avoid reallocations
accumulators: Vec<Box<dyn GroupsAccumulator>>
Accumulators, one for each AggregateFunctionExpr in the query
For example, if the query has aggregates SUM(x) and COUNT(y), there will be two accumulators, each specialized for that particular aggregate and its input types
group_ordering: GroupOrdering
Optional ordering information that might allow groups to be emitted from the hash table prior to seeing the end of the input
spill_state: SpillState
The spill state object
skip_aggregation_probe: Option<SkipAggregationProbe>
Optional probe for skipping data aggregation, if supported by the current stream.
reservation: MemoryReservation
The memory reservation for this grouping
baseline_metrics: BaselineMetrics
Execution metrics
group_by_metrics: GroupByMetrics
Aggregation-specific metrics
reduction_factor: Option<RatioMetrics>
Reduction factor metric, calculated as output_rows / input_rows (only for partial aggregation)
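As referenced above for filter_expressions, here is a small, hedged illustration in plain Rust (not DataFusion's API) of how a per-accumulator filter affects an update: rows for which the filter evaluated to false are simply excluded from that aggregate.

// Hypothetical helper, not DataFusion's API: sum only the rows the filter keeps.
fn filtered_sum(values: &[i64], filter: Option<&[bool]>) -> i64 {
    match filter {
        // No FILTER clause for this aggregate: every row contributes.
        None => values.iter().sum(),
        // FILTER clause present: only rows where it evaluated to true contribute.
        Some(mask) => values
            .iter()
            .zip(mask)
            .filter(|(_, &keep)| keep)
            .map(|(&v, _)| v)
            .sum(),
    }
}

fn main() {
    // SUM(x) FILTER (WHERE x >= 100) over x = [50, 150, 200]
    let x = [50, 150, 200];
    let mask: Vec<bool> = x.iter().map(|&v| v >= 100).collect();
    assert_eq!(filtered_sum(&x, Some(mask.as_slice())), 350);
    assert_eq!(filtered_sum(&x, None), 400);
}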
Implementations§
impl GroupedHashAggregateStream
fn group_aggregate_batch(&mut self, batch: RecordBatch) -> Result<()>
Perform group-by aggregation for the given [RecordBatch].
fn update_memory_reservation(&mut self) -> Result<()>
fn emit(&mut self, emit_to: EmitTo, spilling: bool) -> Result<Option<RecordBatch>>
Create an output RecordBatch with the group keys and accumulator states/values specified in emit_to
fn spill_previous_if_necessary(&mut self, batch: &RecordBatch) -> Result<()>
For simplicity, Self::group_aggregate_batch may optimistically exceed the memory target slightly (by roughly one [RecordBatch]). In such cases, spill the data to disk and clear the memory. Currently only GroupOrdering::None is supported for spilling.
fn spill(&mut self) -> Result<()>
Emit all intermediate aggregation states, sort them, and store them on disk. This process helps in reducing memory pressure by allowing the data to be read back with streaming merge.
fn clear_shrink(&mut self, batch: &RecordBatch)
Clear memory and shrink capacities to the size of the batch.
fn emit_early_if_necessary(&mut self) -> Result<()>
Emit if the used memory exceeds the target for partial aggregation.
Currently only GroupOrdering::None is supported for early emitting.
TODO: support group_ordering for early emitting
fn update_merged_stream(&mut self) -> Result<()>
At this point, all the inputs are read and there are some spills.
Emit the remaining rows and create a batch.
Conduct a streaming merge sort between the batch and the spilled data. Since the resulting stream is fully
sorted, set self.group_ordering to Full so that the output can later be read with [EmitTo::First].
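The reason spilled data can be re-grouped without a hash table is that every spilled run is sorted on the group keys, so equal keys become adjacent in the merged stream and their partial states can be combined incrementally. Below is a simplified, non-streaming sketch of that merge step with hypothetical types, using (key, count, sum) as the AVG intermediate state.

// Illustrative only: merge sorted runs of (group key, count, sum) partial states.
fn merge_sorted_runs(runs: Vec<Vec<(String, u64, f64)>>) -> Vec<(String, u64, f64)> {
    // Concatenate and re-sort for brevity; a real streaming merge keeps one
    // cursor per run and never materializes everything at once.
    let mut rows: Vec<(String, u64, f64)> = runs.into_iter().flatten().collect();
    rows.sort_by(|a, b| a.0.cmp(&b.0));

    let mut out: Vec<(String, u64, f64)> = Vec::new();
    for (key, count, sum) in rows {
        match out.last_mut() {
            // Equal keys are adjacent after the merge: combine their partial states.
            Some(last) if last.0 == key => {
                last.1 += count;
                last.2 += sum;
            }
            _ => out.push((key, count, sum)),
        }
    }
    out
}

fn main() {
    let spilled = vec![
        vec![("a".to_string(), 2, 3.0), ("c".to_string(), 1, 5.0)],
        vec![("a".to_string(), 1, 1.0), ("b".to_string(), 2, 4.0)],
    ];
    // The final AVG per group is the merged sum divided by the merged count.
    assert_eq!(
        merge_sorted_runs(spilled),
        vec![
            ("a".to_string(), 3, 4.0),
            ("b".to_string(), 2, 4.0),
            ("c".to_string(), 1, 5.0),
        ]
    );
}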
fn hit_soft_group_limit(&self) -> bool
Returns true if there is a soft group limit and the number of distinct groups seen so far exceeds it.
fn set_input_done_and_produce_output(&mut self) -> Result<()>
Common function for signalling the end of processing of the input stream.
fn update_skip_aggregation_probe(&mut self, input_rows: usize)
Updates the skip aggregation probe state.
Note: this should only be called during Partial aggregation.
fn switch_to_skip_aggregation(&mut self) -> Result<()>
If the probe indicates that aggregation may be skipped, forces the stream to produce its currently accumulated output.
Note: this should only be called during Partial aggregation.
fn should_skip_aggregation(&self) -> bool
Returns true if the aggregation probe indicates that aggregation should be skipped.
Note: this should only be called during Partial aggregation.
fn transform_to_states(&self, batch: RecordBatch) -> Result<RecordBatch>
Transforms input batch to intermediate aggregate state, without grouping it
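Conceptually, when partial aggregation is skipped each input row becomes a one-row intermediate state that the final phase later groups and merges. A tiny hedged example for AVG(b), using a hypothetical helper rather than DataFusion's GroupsAccumulator API: the per-row state is (count = 1, sum = b).

// Hypothetical helper: turn raw AVG inputs into per-row intermediate states.
fn avg_rows_to_states(b: &[f64]) -> Vec<(u64, f64)> {
    b.iter().map(|&v| (1u64, v)).collect()
}

fn main() {
    // Skipped partial aggregation: rows pass through as 1-row (count, sum) states.
    assert_eq!(avg_rows_to_states(&[2.0, 7.0]), vec![(1, 2.0), (1, 7.0)]);
}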
Trait Implementations§
impl Stream for GroupedHashAggregateStream
type Item = Result<RecordBatch, DataFusionError>
Auto Trait Implementations§
impl Freeze for GroupedHashAggregateStream
impl !RefUnwindSafe for GroupedHashAggregateStream
impl Send for GroupedHashAggregateStream
impl !Sync for GroupedHashAggregateStream
impl Unpin for GroupedHashAggregateStream
impl !UnwindSafe for GroupedHashAggregateStream
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
impl<T> Instrument for T
impl<T> IntoEither for T
impl<T> PolicyExt for T where T: ?Sized
impl<T> StreamExt for T where T: Stream + ?Sized
impl<S, T, E> TryStream for S
impl<S> TryStreamExt for S where S: TryStream + ?Sized