Function spawn_parquet_parallel_serialization_task 

fn spawn_parquet_parallel_serialization_task(
    row_group_writer_factory: ArrowRowGroupWriterFactory,
    data: Receiver<RecordBatch>,
    serialize_tx: Sender<SpawnedTask<Result<(Vec<ArrowColumnChunk>, MemoryReservation, usize)>>>,
    schema: Arc<Schema>,
    writer_props: Arc<WriterProperties>,
    parallel_options: ParallelParquetWriterOptions,
    pool: Arc<dyn MemoryPool>,
) -> SpawnedTask<Result<(), DataFusionError>> 

This task coordinates the serialization of a parquet file in parallel. As the query produces RecordBatches, they are written to the current row group via parallel [ArrowColumnWriter] tasks. Once the configured maximum number of rows per row group is reached, the column writer tasks are joined on a separate task and their output is sent to a concatenation task, while this task immediately begins serializing the next row group. Parquet serialization is therefore parallelized across both columns and row groups, with a theoretical maximum of n_columns * num_row_groups concurrent tasks.
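
To make the fan-out concrete, the following is a minimal, self-contained sketch of the same coordination pattern using plain tokio channels. The names ColumnData, ColumnChunk, coordinate, and max_rows_per_group are hypothetical stand-ins, not DataFusion's internal ArrowColumnWriter / ArrowColumnChunk machinery, and the per-column "serialization" here is a no-op; only the task topology mirrors the description above.

use tokio::sync::mpsc::{Receiver, Sender};
use tokio::task::JoinHandle;

// Toy stand-ins for one column of a batch and a serialized column chunk.
type ColumnData = Vec<u8>;
type ColumnChunk = Vec<u8>;

// Coordinator: fans each incoming batch out to per-column tasks, closes a
// row group once enough rows have accumulated, and hands the join work to
// a separate task so it can start the next row group immediately.
async fn coordinate(
    mut data: Receiver<Vec<ColumnData>>, // one ColumnData per column
    groups_tx: Sender<JoinHandle<Vec<ColumnChunk>>>,
    max_rows_per_group: usize,
) {
    let mut rows_in_group = 0usize;
    let mut column_tasks: Vec<JoinHandle<ColumnChunk>> = Vec::new();

    // Join the finished column tasks on *another* task, so the loop below
    // never blocks waiting for a row group to complete.
    let flush = |tasks: Vec<JoinHandle<ColumnChunk>>| {
        tokio::spawn(async move {
            let mut chunks = Vec::with_capacity(tasks.len());
            for t in tasks {
                chunks.push(t.await.expect("column task panicked"));
            }
            chunks // one chunk per column, ready for concatenation
        })
    };

    while let Some(batch) = data.recv().await {
        rows_in_group += batch.first().map_or(0, |c| c.len());
        for column in batch {
            // Serialize each column of the current row group in parallel.
            // A real writer would encode the column here.
            column_tasks.push(tokio::spawn(async move { column }));
        }
        if rows_in_group >= max_rows_per_group {
            let tasks = std::mem::take(&mut column_tasks);
            let _ = groups_tx.send(flush(tasks)).await;
            rows_in_group = 0; // start the next row group right away
        }
    }
    // Flush a final, partially filled row group, if any.
    if !column_tasks.is_empty() {
        let _ = groups_tx.send(flush(column_tasks)).await;
    }
}

#[tokio::main]
async fn main() {
    let (data_tx, data_rx) = tokio::sync::mpsc::channel(8);
    let (groups_tx, mut groups_rx) = tokio::sync::mpsc::channel(8);
    tokio::spawn(coordinate(data_rx, groups_tx, 2));

    // Four two-column "batches" of one row each -> two row groups.
    for i in 0..4u8 {
        data_tx.send(vec![vec![i], vec![i + 10]]).await.unwrap();
    }
    drop(data_tx); // end of input ends the coordinator loop
    while let Some(join_task) = groups_rx.recv().await {
        let chunks = join_task.await.unwrap();
        println!("row group finished: {} column chunks", chunks.len());
    }
}

Handing the join work to its own spawned task is what keeps the coordinator from blocking: it can already be spawning column writers for row group N+1 while row group N is still being joined and sent downstream.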