async fn concatenate_parallel_row_groups(
parquet_writer: SerializedFileWriter<SharedBuffer>,
merged_buff: SharedBuffer,
serialize_rx: Receiver<SpawnedTask<Result<(Vec<ArrowColumnChunk>, MemoryReservation, usize)>>>,
object_store_writer: Box<dyn AsyncWrite + Send + Unpin>,
pool: Arc<dyn MemoryPool>,
) -> Result<ParquetMetaData>
Consume RowGroups serialized by other parallel tasks and concatenate them into the final Parquet file, while flushing finalized bytes to an [ObjectStore].