
Aliyun OSS - S3 compatible storage issue!!! #879

Open
shusaan opened this issue Dec 5, 2023 · 2 comments

Comments


shusaan commented Dec 5, 2023

I have configured a cluster to access Aliyun OSS (S3-compatible) storage, but it shows the error below. Can you please help me resolve this?
Here is the configuration:

---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster1
spec:
  instances: 1
  startDelay: 300
  stopDelay: 300
  replicationSlots:
    highAvailability:
      enabled: true
    updateInterval: 300
  primaryUpdateStrategy: unsupervised
  storage:
    storageClass: sc-ha
    size: 20Gi
  walStorage:
    storageClass: sc-ha
    size: 20Gi
  monitoring:
    enablePodMonitor: true
  backup:
    barmanObjectStore:
      destinationPath: "s3://testbucket/"
      endpointURL: "https://cn-shanghai.oss.aliyuncs.com"
      s3Credentials:
        accessKeyId:
          name: minio-creds
          key: MINIO_ACCESS_KEY
        secretAccessKey:
          name: minio-creds
          key: MINIO_SECRET_KEY
      wal:
        compression: gzip
        maxParallel: 8
    retentionPolicy: "30d"

Here are the logs:

{"level":"info","ts":"2023-10-19T14:55:21Z","logger":"barman-cloud-check-wal-archive","msg":"2023-10-19 14:55:21,576 [1568] ERROR: Barman cloud WAL archive check exception: An error occurred (403) when calling the HeadBucket operation: Forbidden","pipe":"stderr","logging_pod":"cluster1-1"}
{"level":"error","ts":"2023-10-19T14:55:21Z","logger":"wal-archive","msg":"Error invoking barman-cloud-check-wal-archive","logging_pod":"cluster1-1","currentPrimary":"cluster1-1","targetPrimary":"cluster1-1","options":["--endpoint-url","https://cn-shanghai.oss.aliyuncs.com","--cloud-provider","aws-s3","s3://testbucket/","cluster1"],"exitCode":-1,"error":"exit status 4","stacktrace":"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/barman/archiver.(*WALArchiver).CheckWalArchiveDestination\n\tpkg/management/barman/archiver/archiver.go:257\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:373\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:190\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:85\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/[email protected]/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/[email protected]/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/[email protected]/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.21.1/x64/src/runtime/proc.go:267"}
{"level":"error","ts":"2023-10-19T14:55:21Z","msg":"while barman-cloud-check-wal-archive","logging_pod":"cluster1-1","error":"unexpected failure invoking barman-cloud-wal-archive: exit status 4","stacktrace":"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:128\ngithub.com/cloudnative-pg/cloudnative-pg/pkg/management/log.Error\n\tpkg/management/log/log.go:166\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.checkWalArchive\n\tinternal/cmd/manager/walarchive/cmd.go:374\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.run\n\tinternal/cmd/manager/walarchive/cmd.go:190\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/walarchive.NewCmd.func1\n\tinternal/cmd/manager/walarchive/cmd.go:85\ngithub.com/spf13/cobra.(*Command).execute\n\tpkg/mod/github.com/spf13/[email protected]/command.go:940\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tpkg/mod/github.com/spf13/[email protected]/command.go:1068\ngithub.com/spf13/cobra.(*Command).Execute\n\tpkg/mod/github.com/spf13/[email protected]/command.go:992\nmain.main\n\tcmd/manager/main.go:64\nruntime.main\n\t/opt/hostedtoolcache/go/1.21.1/x64/src/runtime/proc.go:267"}

I have also tested with a user that has full permissions, but the error was the same. Is there any custom configuration required for Aliyun OSS storage?
According to the documentation, OSS is compatible with S3.

Please advise.

@mikewallace1979
Contributor

Hi @shusaan - it's quite common for S3-compatible object stores to have different requirements around the endpoint URL.

Currently you have:

      endpointURL: "https://cn-shanghai.oss.aliyuncs.com"

However, no such URL is listed in the documentation for regions and endpoints; the public endpoint listed for Shanghai is oss-cn-shanghai.aliyuncs.com.

Can you retry with:

      endpointURL: "https://oss-cn-shanghai.aliyuncs.com"

and let us know the outcome?
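For reference, with that change the backup stanza of the manifest would read as follows (a sketch; all other fields unchanged from your original Cluster):

```yaml
  backup:
    barmanObjectStore:
      destinationPath: "s3://testbucket/"
      endpointURL: "https://oss-cn-shanghai.aliyuncs.com"
      s3Credentials:
        accessKeyId:
          name: minio-creds
          key: MINIO_ACCESS_KEY
        secretAccessKey:
          name: minio-creds
          key: MINIO_SECRET_KEY
```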

Thanks.

@shusaan
Author

shusaan commented Jul 23, 2024

Hi @mikewallace1979, I have tested it. The endpoint is working fine, but we have to set the following configuration:

s3 =
  addressing_style = virtual

With the default configuration it does not work. However, inside the database container I created a profile, set the values above in it, and then ran barman-cloud-backup (passing the configured profile), and it worked as expected. Now my question is: how can we set this for a CNPG database cluster?
Here is the Aliyun Link
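The manual workaround described above can be sketched like this (the profile name "aliyun" is illustrative, and this assumes the barman-cloud tools' -P/--profile option for selecting an AWS config profile):

```shell
# Inside the PostgreSQL container, create an AWS config profile that
# forces virtual-hosted-style addressing (required by Aliyun OSS):
mkdir -p ~/.aws
cat >> ~/.aws/config <<'EOF'
[profile aliyun]
s3 =
  addressing_style = virtual
EOF

# Then pass the profile to the barman-cloud tools, e.g.:
#   barman-cloud-backup -P aliyun \
#     --endpoint-url https://oss-cn-shanghai.aliyuncs.com \
#     --cloud-provider aws-s3 s3://testbucket/ cluster1
```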
