[DISCUSSION] Make it easier and faster to query remote files (S3, iceberg, etc) #13456

Open · 4 tasks
alamb opened this issue Nov 17, 2024 · 12 comments
Labels: enhancement (New feature or request)
@alamb (Contributor) commented Nov 17, 2024

Is your feature request related to a problem or challenge?

I personally think making it easy to use DataFusion with the "open data lake" stack is very important over the next few months.

@julienledem wrote up a very nice piece describing The advent of the Open Data Lake

The high level idea is to make it really easy for people to build systems that query (quickly!) from parquet files stored on remote object store, including Apache Iceberg, Delta Lake, Hudi, etc.

You can already use DataFusion (and datafusion-cli) to query such data, but it takes non-trivial effort to configure and tune for good performance. My idea is to make it easier to do so / make DataFusion better out of the box.
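For context on what that effort looks like today, here is a rough sketch of the current datafusion-cli workflow for S3-backed Parquet (the bucket, path, and table name below are hypothetical, and credentials are assumed to come from the usual AWS environment variables):

```sql
-- Hypothetical bucket and table name; datafusion-cli picks up AWS credentials
-- from the environment (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION).
CREATE EXTERNAL TABLE events
STORED AS PARQUET
LOCATION 's3://my-bucket/events/';

SELECT count(*) FROM events;
```

Getting this to run is only the first step; getting it to run fast over a network is where the configuration and tuning effort mentioned above comes in.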

With that as a building block, people could build applications and systems targeting specific use cases.

I don't yet fully understand where we currently stand on this goal, but I wanted to start the discussion.

Describe the solution you'd like

In my mind, the specific work this entails includes the following:

Describe alternatives you've considered

One specific item, brought up by @MrPowers would be to try DataFusion with the "10B row challenge" described in https://dataengineeringcentral.substack.com/p/10-billion-row-challenge-duckdb-vs .

I suspect the results would be non-ideal at first, but trying it to figure out what the challenges are would help us focus our efforts.

Additional context

No response

@alamb alamb added the enhancement New feature or request label Nov 17, 2024
@comphead (Contributor) commented:

What about support for remote HDFS files? We have a contribution project, https://github.com/datafusion-contrib/datafusion-objectstore-hdfs, which is supposed to query HDFS, but I'm not sure how far along we are with that.

@alamb (Contributor, Author) commented Nov 19, 2024

Yes I think HDFS would be another good target

Basically I want to make sure that it is as easy as possible to use DataFusion to query data that lives on remote systems (i.e. where the data is not on some local NVMe but must be accessed over the network).

@jonathanc-n (Contributor) commented Nov 21, 2024

I forgot to mention this here, apache/iceberg-rust#700 (write support) is a really nice issue opened up in iceberg-rust. Getting the rust implementation of iceberg up and going would probably help out datafusion a bit on the data lake side of things.

@alamb (Contributor, Author) commented Nov 21, 2024

> I forgot to mention this here, apache/iceberg-rust#700 (write support) is a really nice issue opened up in iceberg-rust. Getting the rust implementation of iceberg up and going would probably help out datafusion a bit on the data lake side of things.

Yes, 100% -- one of my goals is to make it easy for this to "just work" with DataFusion. I think we are a bit away from it at the moment

@matthewmturner (Contributor) commented:

@alamb from a DataFusion perspective, which parts do you think are missing? I ask about just the DataFusion perspective because I am assuming the owners of the relevant table formats will be implementing their spec / protocol.

@matthewmturner (Contributor) commented:

The way I was thinking about it, the thing that makes this interesting / difficult is getting the semantics of each of the formats into DataFusion, which I would presume needs to be done as either SQL extensions and/or custom execution plans. Without knowing much about the specifics, I thought that was already doable - but maybe I'm missing something.

@alamb (Contributor, Author) commented Nov 22, 2024

> @alamb from a datafusion perspective which parts do you think are missing? I ask about just the datafusion perspective because i am assuming the owners of the relevant table formats will be implementing their spec / protocol.

I am thinking mostly of the task list above.

@matthewmturner (Contributor) commented:

Thanks @alamb

I plan to work on the second item - probably in December. I was really hoping to get a dft release out shortly where all the custom table providers (Delta / Iceberg / Hudi - and potentially Lance) were on DF v43 - but that might be wishful thinking.

Depending on how the next week goes, I will make a judgement call on whether or not to wait. If it isn't looking promising that everyone will be on v43, then my next priority will be implementing the FFI.

@blaginin (Contributor) commented Dec 3, 2024

I think one problem with the current implementation of external storages is that it's pretty hard to test properly. For example, the issue I fixed in #13576 happened because, right now, we only test that the external storage parameters are parsed, but we don't even check whether they're parsed correctly.

Maybe we should start mocking aws/iceberg/... and move more towards integration testing? That way, we’d be more confident that our external storage support actually works 😅
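One possible shape for that (a sketch only - the image, ports, credentials, and test invocation below are assumptions, not an agreed-on setup) would be to run a local S3-compatible emulator and point the tests at it:

```shell
# Start MinIO as a local S3 emulator (placeholder credentials).
docker run -d --name df-minio -p 9000:9000 \
  -e MINIO_ROOT_USER=test \
  -e MINIO_ROOT_PASSWORD=testpassword \
  quay.io/minio/minio server /data

# Point the test environment at the emulator instead of real AWS.
export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=testpassword
export AWS_ENDPOINT=http://localhost:9000
export AWS_ALLOW_HTTP=true

# Hypothetical test invocation -- the actual package / filter would
# depend on how the integration tests end up being organized.
cargo test -p datafusion-cli s3
```

This would let CI exercise the full parse-and-connect path rather than only the parameter parsing.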

@comphead (Contributor) commented Dec 3, 2024

@alamb I will try to cover how DataFusion works with remote HDFS files, if that fits.
I'm planning to experiment with datafusion-objectstore-hdfs and add dev/tests/examples/docs so the user can easily query HDFS files from the DataFusion CLI.

@alamb (Contributor, Author) commented Dec 4, 2024

> Maybe we should start mocking aws/iceberg/... and move more towards integration testing? That way, we’d be more confident that our external storage support actually works 😅

Yes, 100% -- maybe we can take a friendly look at the emulators used in the object_store tests:

https://github.com/apache/arrow-rs/blob/9047d99f6bf87582532ee6ed0acb3f2d5f889f11/.github/workflows/object_store.yml#L91-L184

@alamb alamb changed the title [DISCUSSION] Make it easy and fast to query files on remote files (S3, iceberg, etc) [DISCUSSION] Make it easier and faster to query files on remote files (S3, iceberg, etc) Dec 21, 2024
@alamb alamb changed the title [DISCUSSION] Make it easier and faster to query files on remote files (S3, iceberg, etc) [DISCUSSION] Make it easier and faster to query remote files (S3, iceberg, etc) Dec 21, 2024
@timvw (Contributor) commented Jan 9, 2025

In qv I use MinIO to test whether the S3 integration works:

https://github.com/timvw/qv/blob/main/tests/files_on_s3.rs#L9
https://github.com/timvw/qv/blob/main/ci/minio_start.sh

Integration with delta-rs was always pretty easy; iceberg-rust has been more difficult (mostly because you also need to access a catalog to get the correct/current metadata to get started...).

6 participants