Code and actions cleanup (#4568)
nopcoder authored Nov 8, 2022
1 parent 20f2c15 commit 6601b49
Showing 45 changed files with 228 additions and 798 deletions.
2 changes: 1 addition & 1 deletion .github/actions/bootstrap-test-lakefs/action.yaml
@@ -24,7 +24,7 @@ runs:
         shell: bash
         run: tar -xf /tmp/generated.tar.gz
     - name: Configure AWS Credentials
-      uses: aws-actions/configure-aws-credentials@v1
+      uses: aws-actions/configure-aws-credentials@v1-node16
       with:
         aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ID }}
         aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY }}
2 changes: 1 addition & 1 deletion .github/workflows/docker-publish-lakefs-rclone-export.yaml
@@ -32,7 +32,7 @@ jobs:
         id: version

       - name: Configure AWS Credentials
-        uses: aws-actions/configure-aws-credentials@v1
+        uses: aws-actions/configure-aws-credentials@v1-node16
         with:
           aws-region: us-east-1
           role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
4 changes: 2 additions & 2 deletions .github/workflows/docker-publish.yaml
@@ -23,7 +23,7 @@ jobs:
           node-version: '16.17.1'

       - name: Set up Go
-        uses: actions/setup-go@v2
+        uses: actions/setup-go@v3
         with:
           go-version: 1.19.2
         id: go
@@ -42,7 +42,7 @@ jobs:
           GOLANGCI_LINT_FLAGS: --out-format github-actions

       - name: Configure AWS Credentials
-        uses: aws-actions/configure-aws-credentials@v1
+        uses: aws-actions/configure-aws-credentials@v1-node16
         with:
           aws-region: us-east-1
           role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
38 changes: 26 additions & 12 deletions .github/workflows/esti.yaml
@@ -63,12 +63,14 @@ jobs:
       - uses: actions/setup-node@v3
         with:
           node-version: '16.17.1'
+
       - name: Generate code
         run: |
           make -j3 gen-api gen-ui VERSION=${{ steps.version.outputs.tag }}
           tar -cf /tmp/generated.tar.gz .
+
       - name: Store generated code
-        uses: actions/upload-artifact@v2
+        uses: actions/upload-artifact@v3
         with:
           name: generated-code
           path: /tmp/generated.tar.gz
@@ -105,7 +107,7 @@ jobs:
         run: tar -xf /tmp/generated.tar.gz

       - name: Configure AWS Credentials
-        uses: aws-actions/configure-aws-credentials@v1
+        uses: aws-actions/configure-aws-credentials@v1-node16
         with:
           aws-region: us-east-1
           role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
@@ -185,7 +187,7 @@ jobs:
         uses: actions/checkout@v3

       - name: Configure AWS Credentials
-        uses: aws-actions/configure-aws-credentials@v1
+        uses: aws-actions/configure-aws-credentials@v1-node16
         with:
           aws-region: us-east-1
           role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
@@ -265,8 +267,11 @@ jobs:
       - name: Check-out code
         uses: actions/checkout@v3

-      - name: Setup Scala
-        uses: olafurpg/setup-scala@v10
+      - uses: actions/setup-java@v3
+        with:
+          distribution: 'adopt-hotspot'
+          java-version: '8'
+          cache: 'sbt'

       - name: Start lakeFS for Spark tests
         uses: ./.github/actions/bootstrap-test-lakefs
@@ -323,8 +328,11 @@ jobs:
       - name: Check-out code
         uses: actions/checkout@v3

-      - name: Setup Scala
-        uses: olafurpg/setup-scala@v10
+      - uses: actions/setup-java@v3
+        with:
+          distribution: 'adopt-hotspot'
+          java-version: '8'
+          cache: 'sbt'

       - name: Package Spark App
         working-directory: test/spark/app
@@ -399,8 +407,11 @@ jobs:
      - name: Check-out code
        uses: actions/checkout@v3

-      - name: Setup Scala
-        uses: olafurpg/setup-scala@v10
+      - uses: actions/setup-java@v3
+        with:
+          distribution: 'adopt-hotspot'
+          java-version: '8'
+          cache: 'sbt'

       - name: Package Spark App
         working-directory: test/spark/app
@@ -525,7 +536,7 @@ jobs:
         uses: actions/checkout@v3

       - name: Setup Go
-        uses: actions/setup-go@v2
+        uses: actions/setup-go@v3
         with:
           go-version: 1.19.2
         id: go
@@ -589,8 +600,11 @@ jobs:
       - name: Check-out code
         uses: actions/checkout@v3

-      - name: Setup Scala
-        uses: olafurpg/setup-scala@v10
+      - uses: actions/setup-java@v3
+        with:
+          distribution: 'adopt-hotspot'
+          java-version: '8'
+          cache: 'sbt'

       - name: Package Metaclient
         working-directory: clients/spark
2 changes: 1 addition & 1 deletion .github/workflows/goreleaser.yaml
@@ -30,7 +30,7 @@ jobs:
           go-version: 1.19.2

       - name: Configure AWS Credentials
-        uses: aws-actions/configure-aws-credentials@v1
+        uses: aws-actions/configure-aws-credentials@v1-node16
         with:
           aws-region: us-east-1
           role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
2 changes: 1 addition & 1 deletion .github/workflows/publish-hadoop-lakefs.yaml
@@ -49,7 +49,7 @@ jobs:
         run: sed -i.bak 's/<version>.*<\/version><!--MARKER.*/<version>'${{ steps.version.outputs.tag }}'<\/version>/' pom.xml

       - name: Configure AWS Credentials
-        uses: aws-actions/configure-aws-credentials@v1
+        uses: aws-actions/configure-aws-credentials@v1-node16
         with:
           aws-region: us-east-1
           role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
10 changes: 8 additions & 2 deletions .github/workflows/publish-spark-metadata-client.yaml
@@ -9,11 +9,17 @@ jobs:
     steps:
       - name: Check-out code
         uses: actions/checkout@v3
-      - name: Setup Scala
-        uses: olafurpg/setup-scala@v10
+
+      - uses: actions/setup-java@v3
+        with:
+          distribution: 'adopt-hotspot'
+          java-version: '8'
+          cache: 'sbt'
+
       - name: validate format
         working-directory: clients/spark
         run: sbt scalafmtCheck
+
       - name: validate unused
         working-directory: clients/spark
         run: sbt "scalafix --check"
7 changes: 5 additions & 2 deletions .github/workflows/spark.yaml
@@ -14,8 +14,11 @@ jobs:
       - name: Check-out code
         uses: actions/checkout@v3

-      - name: Setup Scala
-        uses: olafurpg/setup-scala@v13
+      - uses: actions/setup-java@v3
+        with:
+          distribution: 'adopt-hotspot'
+          java-version: '8'
+          cache: 'sbt'

       - name: validate format
         working-directory: clients/spark
2 changes: 1 addition & 1 deletion Makefile
@@ -236,7 +236,7 @@ proto: go-install ## Build proto (Protocol Buffers) files
 	$(PROTOC) --proto_path=pkg/graveler/settings --go_out=pkg/graveler/settings --go_opt=paths=source_relative test_settings.proto
 	$(PROTOC) --proto_path=pkg/kv/kvtest --go_out=pkg/kv/kvtest --go_opt=paths=source_relative test_model.proto
 	$(PROTOC) --proto_path=pkg/kv --go_out=pkg/kv --go_opt=paths=source_relative secondary_index.proto
-	$(PROTOC) --proto_path=pkg/gateway/multiparts --go_out=pkg/gateway/multiparts --go_opt=paths=source_relative multipart.proto
+	$(PROTOC) --proto_path=pkg/gateway/multipart --go_out=pkg/gateway/multipart --go_opt=paths=source_relative multipart.proto
 	$(PROTOC) --proto_path=pkg/actions --go_out=pkg/actions --go_opt=paths=source_relative actions.proto
 	$(PROTOC) --proto_path=pkg/auth/model --go_out=pkg/auth/model --go_opt=paths=source_relative model.proto
8 changes: 4 additions & 4 deletions cmd/lakefs/cmd/run.go
@@ -29,7 +29,7 @@ import (
 	"github.com/treeverse/lakefs/pkg/catalog"
 	"github.com/treeverse/lakefs/pkg/config"
 	"github.com/treeverse/lakefs/pkg/gateway"
-	"github.com/treeverse/lakefs/pkg/gateway/multiparts"
+	"github.com/treeverse/lakefs/pkg/gateway/multipart"
 	"github.com/treeverse/lakefs/pkg/gateway/sig"
 	"github.com/treeverse/lakefs/pkg/httputil"
 	"github.com/treeverse/lakefs/pkg/kv"
@@ -132,9 +132,9 @@ var runCmd = &cobra.Command{

 		migrator := kv.NewDatabaseMigrator(kvParams)
 		storeMessage := &kv.StoreMessage{Store: kvStore}
-		multipartsTracker := multiparts.NewTracker(*storeMessage)
+		multipartTracker := multipart.NewTracker(*storeMessage)
 		actionsStore := actions.NewActionsKVStore(*storeMessage)
-		authMetadataManager := auth.NewKVMetadataManager(version.Version, cfg.GetFixedInstallationID(), cfg.GetDatabaseParams().Type, kvStore)
+		authMetadataManager := auth.NewKVMetadataManager(version.Version, cfg.GetFixedInstallationID(), cfg.GetDatabaseType(), kvStore)
 		idGen := &actions.DecreasingIDGenerator{}

 		// initialize auth service
@@ -305,7 +305,7 @@ var runCmd = &cobra.Command{
 		s3gatewayHandler := gateway.NewHandler(
 			cfg.GetS3GatewayRegion(),
 			c,
-			multipartsTracker,
+			multipartTracker,
 			blockStore,
 			authService,
 			cfg.GetS3GatewayDomainNames(),
2 changes: 1 addition & 1 deletion cmd/lakefs/cmd/setup.go
@@ -71,7 +71,7 @@ var setupCmd = &cobra.Command{
 	logger := logging.Default()
 	authLogger := logger.WithField("service", "auth_service")
 	authService = auth.NewKVAuthService(storeMessage, crypt.NewSecretStore(cfg.GetAuthEncryptionSecret()), nil, cfg.GetAuthCacheConfig(), authLogger)
-	metadataManager = auth.NewKVMetadataManager(version.Version, cfg.GetFixedInstallationID(), cfg.GetDatabaseParams().Type, kvStore)
+	metadataManager = auth.NewKVMetadataManager(version.Version, cfg.GetFixedInstallationID(), cfg.GetDatabaseType(), kvStore)

 	cloudMetadataProvider := stats.BuildMetadataProvider(logger, cfg)
 	metadata := stats.NewMetadata(ctx, logger, cfg.GetBlockstoreType(), metadataManager, cloudMetadataProvider)
2 changes: 1 addition & 1 deletion cmd/lakefs/cmd/superuser.go
@@ -57,7 +57,7 @@ var superuserCmd = &cobra.Command{
 	}
 	storeMessage := &kv.StoreMessage{Store: kvStore}
 	authService := auth.NewKVAuthService(storeMessage, crypt.NewSecretStore(cfg.GetAuthEncryptionSecret()), nil, cfg.GetAuthCacheConfig(), logger.WithField("service", "auth_service"))
-	authMetadataManager := auth.NewKVMetadataManager(version.Version, cfg.GetFixedInstallationID(), cfg.GetDatabaseParams().Type, kvStore)
+	authMetadataManager := auth.NewKVMetadataManager(version.Version, cfg.GetFixedInstallationID(), cfg.GetDatabaseType(), kvStore)

 	metadataProvider := stats.BuildMetadataProvider(logger, cfg)
 	metadata := stats.NewMetadata(ctx, logger, cfg.GetBlockstoreType(), authMetadataManager, metadataProvider)
2 changes: 1 addition & 1 deletion design/accepted/metadata_kv/lakefs-kv-execution-plan.md
@@ -31,7 +31,7 @@ The following steps will be required for each package that uses the 'db' layer:
   - authorization information: users, groups, policies, credentials, etc.
 pkg/actions
   - actions information: runs, hooks, status
-pkg/gateway/multiparts
+pkg/gateway/multipart
   - tracking gateway multipart requests

 pkg/graveler/ref
14 changes: 7 additions & 7 deletions design/accepted/metadata_kv/lakefs-on-kv-testing-plan.md
@@ -40,7 +40,7 @@ This document aims to describe the requirements for the testing infrastructure a
   * Need to consider various scales
 ## Per Package DB Testing

-### ```pkg/gateway/multiparts```
+### ```pkg/gateway/multipart```
 * DB is used to track the start and end of a multipart upload. All DB accesses are done via `multiparts.Tracker`. Entries are created once, read-accessed multiple times and deleted upon completion
 * Currently unit tests cover correctness of DB accesses in both good and error paths.
 * 83.8% coverage
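The lifecycle the excerpt above describes (entries created once, read-accessed many times, deleted upon completion) can be sketched as a minimal in-memory stand-in. This `Tracker` and its `MultipartUpload` record are hypothetical illustrations of the pattern, not the actual lakeFS `multipart.Tracker` API:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// MultipartUpload is a hypothetical record for one tracked upload.
type MultipartUpload struct {
	UploadID string
	Path     string
}

// Tracker mimics the create-once / read-many / delete-on-completion
// lifecycle from the testing plan, backed by a map instead of a DB.
type Tracker struct {
	mu      sync.Mutex
	uploads map[string]MultipartUpload
}

func NewTracker() *Tracker {
	return &Tracker{uploads: make(map[string]MultipartUpload)}
}

// Create registers an upload; creating the same ID twice is an error.
func (t *Tracker) Create(u MultipartUpload) error {
	t.mu.Lock()
	defer t.mu.Unlock()
	if _, ok := t.uploads[u.UploadID]; ok {
		return errors.New("upload already tracked")
	}
	t.uploads[u.UploadID] = u
	return nil
}

// Get is the read-many path; it fails for unknown or deleted IDs.
func (t *Tracker) Get(id string) (MultipartUpload, error) {
	t.mu.Lock()
	defer t.mu.Unlock()
	u, ok := t.uploads[id]
	if !ok {
		return MultipartUpload{}, errors.New("upload not found")
	}
	return u, nil
}

// Delete removes the entry once the multipart upload completes.
func (t *Tracker) Delete(id string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	delete(t.uploads, id)
}

func main() {
	tr := NewTracker()
	_ = tr.Create(MultipartUpload{UploadID: "u1", Path: "repo/obj"})
	u, _ := tr.Get("u1")
	fmt.Println(u.Path) // → repo/obj
	tr.Delete("u1")     // entry is gone once the upload completes
}
```

The good-path and error-path unit tests the plan mentions would exercise exactly these three methods, including duplicate `Create` and `Get` after `Delete`.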
@@ -108,21 +108,21 @@ It can be leveraged, however, to extend the cover of ```pkg/graveler/ref``` and
 * Data level migration tests infrastructure
   * Migrate data from Table to KV, extract both and compare
   * Data extraction should be done by listing all objects in the DB, using a designated 'get' function, and compare
-  * Implement ~~dumpers for `gateway_multiparts`~~ `GetAll` for `multiparts.Tracker`, to return a list of `MultipartUpload`
+  * Implement ~~dumpers for `gateway_multiparts`~~ `GetAll` for `multipart.Tracker`, to return a list of `MultipartUpload`
   * Implement comparison of `MultipartUploads` list. Lists are considered identical if objects are identical, but **not necessarily** at the same order
-  * Implement unit tests for `pkg/gateway/multiparts`
-    * Add multipart uploads using `multiparts.Tracker.Create` with Table DB (KV Feature Flag off)
+  * Implement unit tests for `pkg/gateway/multipart`
+    * Add multipart uploads using `multipart.Tracker.Create` with Table DB (KV Feature Flag off)
     * Create an entry with a key representing each of the supported storages:
       * azure, google, s3, local, mem & transient
     * Read all entries using `GetAll` above (Table DB)
     * Run migration for `gateway_multiparts` table
     * Read all entries using `GetAll` (KV Store)
     * Compare the lists and expect equality (up to order)
 * Infrastructure for running migration during a system test execution
-  * System test to run migration during multiparts upload
-  * Currently there is a single simple multiparts system test (single file, 7 parts) - this is also an opportunity to expand that
+  * System test to run migration during multipart upload
+  * Currently there is a single simple multipart system test (single file, 7 parts) - this is also an opportunity to expand that
 * KV Store unit tests
-  * `multiparts.Tracker` benchmark to run on both Table DB and KV Store (use feature flag to toggle) and verify there is no degradation
+  * `multipart.Tracker` benchmark to run on both Table DB and KV Store (use feature flag to toggle) and verify there is no degradation
   * Define sequence(s) of actions to perform (Create/Get/Delete etc.)
   * Run each sequence with feature flag off and on
   * Compare results and fail if KV performance is more than [TBD]% slower
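The plan's requirement that two `MultipartUpload` lists compare equal "up to order" can be met by sorting copies on a stable key before an element-wise comparison. A minimal sketch follows; the struct fields and the choice of `UploadID` as the sort key are assumptions for illustration, not the actual lakeFS types:

```go
package main

import (
	"fmt"
	"sort"
)

// MultipartUpload stands in for the record a hypothetical GetAll would
// return; only the fields needed for the comparison appear here.
type MultipartUpload struct {
	UploadID string
	Path     string
}

// equalUpToOrder reports whether a and b hold the same uploads,
// ignoring order: it sorts copies on UploadID, then compares pairwise.
func equalUpToOrder(a, b []MultipartUpload) bool {
	if len(a) != len(b) {
		return false
	}
	ac := append([]MultipartUpload(nil), a...) // copy, leave inputs intact
	bc := append([]MultipartUpload(nil), b...)
	sort.Slice(ac, func(i, j int) bool { return ac[i].UploadID < ac[j].UploadID })
	sort.Slice(bc, func(i, j int) bool { return bc[i].UploadID < bc[j].UploadID })
	for i := range ac {
		if ac[i] != bc[i] {
			return false
		}
	}
	return true
}

func main() {
	table := []MultipartUpload{{"u2", "b"}, {"u1", "a"}} // pre-migration order
	kv := []MultipartUpload{{"u1", "a"}, {"u2", "b"}}    // post-migration order
	fmt.Println(equalUpToOrder(table, kv))               // → true
}
```

A migration test in the plan's style would call this with the `GetAll` results from the Table DB and the KV store and fail on `false`.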
86 changes: 0 additions & 86 deletions pkg/actions/main_test.go

This file was deleted.
