Large files can be hard to manage, so our S3 batch export should support splitting a file into individual parts according to a maximum file size provided by the user.
Some questions are still open. When exporting Parquet files we write an entire Arrow RecordBatch at a time, which could itself be larger than the max file size, so the writer needs to handle this limit as well. JSONL S3 batch exports should be easier to implement, since they write rows one-by-one; see the sketch below.
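To illustrate the two cases, here is a minimal Python sketch. The names `PartedJSONLWriter`, `upload_part`, and `split_record_batch` are hypothetical and not part of the actual batch export code: the JSONL writer rolls over to a new part whenever the next row would exceed the limit, while the Parquet helper slices an oversized Arrow RecordBatch into smaller batches before they are written.

```python
# A minimal sketch, assuming a hypothetical `upload_part` callback standing in
# for the S3 multipart upload call. None of these names come from the real
# batch export code; they only illustrate the splitting behaviour.
import json
from typing import Any, Callable, Iterator

import pyarrow as pa


class PartedJSONLWriter:
    """Buffer JSON lines and start a new part once the current one would
    exceed ``max_part_bytes``."""

    def __init__(self, max_part_bytes: int, upload_part: Callable[[int, bytes], None]):
        self.max_part_bytes = max_part_bytes
        self.upload_part = upload_part  # hypothetical stand-in for S3 UploadPart
        self.part_number = 1
        self.buffer = bytearray()

    def write_row(self, row: dict[str, Any]) -> None:
        line = json.dumps(row, default=str).encode("utf-8") + b"\n"
        # Roll over to a new part if this row would push us past the limit.
        if self.buffer and len(self.buffer) + len(line) > self.max_part_bytes:
            self.flush()
        self.buffer.extend(line)

    def flush(self) -> None:
        if self.buffer:
            self.upload_part(self.part_number, bytes(self.buffer))
            self.part_number += 1
            self.buffer = bytearray()


def split_record_batch(batch: pa.RecordBatch, max_bytes: int) -> Iterator[pa.RecordBatch]:
    """Yield slices of ``batch`` whose estimated size stays under ``max_bytes``,
    so the Parquet writer never has to serialize an oversized batch at once."""
    if batch.nbytes <= max_bytes or batch.num_rows <= 1:
        yield batch
        return
    # Estimate rows per slice from the batch's average row size.
    avg_row_bytes = max(batch.nbytes // batch.num_rows, 1)
    rows_per_slice = max(max_bytes // avg_row_bytes, 1)
    for offset in range(0, batch.num_rows, rows_per_slice):
        yield batch.slice(offset, rows_per_slice)
```

The JSONL path flushes a finished part as soon as the size limit is hit, so parts can be uploaded while the export is still running; the Parquet path would call something like `split_record_batch` on each incoming RecordBatch before handing the slices to the writer.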