diff --git a/docs/assets/img/s3gatewayvsclientdataflow.png b/docs/assets/img/s3gatewayvsclientdataflow.png
new file mode 100644
index 00000000000..9431c18ccb5
Binary files /dev/null and b/docs/assets/img/s3gatewayvsclientdataflow.png differ
diff --git a/docs/understand/architecture.md b/docs/understand/architecture.md
index 3da64fc6f98..7af37d77167 100644
--- a/docs/understand/architecture.md
+++ b/docs/understand/architecture.md
@@ -112,6 +112,15 @@ Using [lakeFSFileSystem][hadoopfs] increases Spark ETL jobs performance by execu
 and all data operations directly through the same underlying object store that lakeFS uses.
 
 
+## How lakeFS Clients and the S3 Gateway Handle Metadata and Data Access
+
+When using the lakeFS Python client, lakectl, or the lakeFS Spark client, the client talks to the lakeFS server only for metadata. For example, it may query lakeFS to resolve which version of an object it needs, or to track changes across branches and commits. These calls carry no object data; they exchange only metadata such as data locations and versions.
+Once the client knows the exact data location from the lakeFS metadata, it reads and writes the data directly in the underlying object storage (optionally using presigned URLs) without routing it through lakeFS. For instance, if the data is stored in S3, the Spark client retrieves the S3 paths from lakeFS and then reads and writes those paths in S3 directly, leaving lakeFS out of the data transfer.
+By contrast, when clients go through the lakeFS S3 gateway (the S3-compatible endpoint), both metadata and data requests are served by the lakeFS server, which accesses the object store on the client's behalf.
+
+![lakeFS Clients vs Gateway Data Flow](../assets/img/s3gatewayvsclientdataflow.png)
+
+
 [data-quality-gates]: {% link understand/use_cases/cicd_for_data.md %}#using-hooks-as-data-quality-gates
 [dynamodb-permissions]: {% link howto/deploy/aws.md %}#grant-dynamodb-permissions-to-lakefs
 [roadmap]: {% link project/index.md %}#roadmap
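
To make the flow described in the new section concrete, here is a minimal sketch of the two-step access pattern using plain HTTP against the lakeFS REST API. The server URL, credentials, repository, branch, and object path are placeholders; the `statObject` endpoint shape, its `presign` query parameter, and the `physical_address` response field are based on the lakeFS OpenAPI spec and should be verified against the API reference for your lakeFS version.

```python
# Sketch only: illustrates "metadata from lakeFS, data directly from the object store".
# All names and values below are placeholders/assumptions, not taken from the patch.
import requests

LAKEFS = "https://lakefs.example.com/api/v1"            # hypothetical lakeFS endpoint
AUTH = ("ACCESS_KEY_ID_PLACEHOLDER", "SECRET_KEY_PLACEHOLDER")  # lakeFS credentials (basic auth)

# 1. Metadata request: ask lakeFS where the object's data lives.
#    No object data is transferred here, only metadata about location and version.
stat = requests.get(
    f"{LAKEFS}/repositories/my-repo/refs/main/objects/stat",
    params={"path": "tables/events/part-0001.parquet", "presign": "true"},
    auth=AUTH,
).json()

# 2. Data request: read the bytes directly from the underlying object store.
#    With presign=true, physical_address is expected to be a presigned URL,
#    so the data transfer does not pass through the lakeFS server.
data = requests.get(stat["physical_address"]).content
print(f"read {len(data)} bytes directly from the object store")
```

This mirrors what the clients described in the section do under the hood: one metadata round-trip to lakeFS, then direct object-store I/O for the data itself.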