feat: implement storage management commands #1587

Merged · 16 commits · Oct 24, 2023
82 changes: 82 additions & 0 deletions cmd/storage.go
@@ -0,0 +1,82 @@
package cmd

import (
    "github.com/spf13/afero"
    "github.com/spf13/cobra"
    "github.com/supabase/cli/internal/storage"
    "github.com/supabase/cli/internal/storage/cp"
    "github.com/supabase/cli/internal/storage/ls"
    "github.com/supabase/cli/internal/storage/mv"
    "github.com/supabase/cli/internal/storage/rm"
)

var (
    storageCmd = &cobra.Command{
        GroupID: groupManagementAPI,
        Use:     "storage",
        Short:   "Manage Supabase Storage objects",
    }

    recursive bool

    lsCmd = &cobra.Command{
        Use:     "ls [path]",
        Example: "ls ss:///bucket/docs",
        Short:   "List objects by path prefix",
        Args:    cobra.MaximumNArgs(1),
        RunE: func(cmd *cobra.Command, args []string) error {
            objectPath := storage.STORAGE_SCHEME + ":///"
            if len(args) > 0 {
                objectPath = args[0]
            }
            return ls.Run(cmd.Context(), objectPath, recursive, afero.NewOsFs())
        },
    }

    cpCmd = &cobra.Command{
Use: "cp <src> <dst>",
Example: `cp readme.md ss:///bucket/readme.md
cp -r docs ss:///bucket/docs
cp -r ss:///bucket/docs .
`,
Short: "Copy objects from src to dst path",
Args: cobra.ExactArgs(2),
RunE: func(cmd *cobra.Command, args []string) error {
return cp.Run(cmd.Context(), args[0], args[1], recursive, afero.NewOsFs())
},
}

mvCmd = &cobra.Command{
Use: "mv <src> <dst>",
Short: "Move objects from src to dst path",
Example: "mv -r ss:///bucket/docs ss:///bucket/www/docs",
Args: cobra.ExactArgs(2),
RunE: func(cmd *cobra.Command, args []string) error {
return mv.Run(cmd.Context(), args[0], args[1], recursive, afero.NewOsFs())
},
}

rmCmd = &cobra.Command{
Use: "rm <file> ...",
Short: "Remove objects by file path",
Example: `rm -r ss:///bucket/docs
rm ss:///bucket/docs/example.md ss:///bucket/readme.md
`,
Args: cobra.MinimumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
return rm.Run(cmd.Context(), args, recursive, afero.NewOsFs())
},
}
)

func init() {
Member:
I'm not sure moving a directory should require a recursive flag tbh, as this is not the behavior people are used to. Doing mv foo bar when foo is a directory works in any unix environment. Only cp and rm require this flag.

@sweatybridge (Contributor Author) · Oct 23, 2023:
Agree on the difference. I was a bit reluctant to make mv recursive by default because it would mean at least 2 network calls to move a directory, i.e. the first call tries to move the path as a file and, when that fails, a second call moves it as a directory.

I find adding a -r flag isn't too bad in this case because it's more consistent with the other commands. Also, we are only supporting moves within the same bucket (no local -> remote), so I hope the expectation doesn't need to be exactly like unix.

If it turns out to be a big inconvenience, reverting to unix behaviour would not be a breaking change for those who already use the -r flag. So this is probably ok for now. (A sketch of the unix-style fallback follows this diff.)

    lsCmd.Flags().BoolVarP(&recursive, "recursive", "r", false, "Recursively list a directory.")
    storageCmd.AddCommand(lsCmd)
    cpCmd.Flags().BoolVarP(&recursive, "recursive", "r", false, "Recursively copy a directory.")
    storageCmd.AddCommand(cpCmd)
    rmCmd.Flags().BoolVarP(&recursive, "recursive", "r", false, "Recursively remove a directory.")
    storageCmd.AddCommand(rmCmd)
    mvCmd.Flags().BoolVarP(&recursive, "recursive", "r", false, "Recursively move a directory.")
    storageCmd.AddCommand(mvCmd)
    rootCmd.AddCommand(storageCmd)
}
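On the -r discussion in the thread above: a unix-style fallback for mv, where directories need no flag, could look roughly like the sketch below. This is a minimal sketch, not the PR's implementation; moveAsFile and moveAsDirectory are hypothetical stand-ins for the storage move API.

package main

import (
    "context"
    "errors"
    "fmt"
)

// Hypothetical stand-ins for the storage move API; not part of this PR.
func moveAsFile(ctx context.Context, src, dst string) error {
    return errors.New("source is not a single object") // pretend src is a directory
}

func moveAsDirectory(ctx context.Context, src, dst string) error {
    fmt.Println("moved directory", src, "=>", dst)
    return nil
}

// move mimics unix mv: try the source as a single object first, and fall
// back to a directory move only when that fails.
func move(ctx context.Context, src, dst string) error {
    if err := moveAsFile(ctx, src, dst); err == nil {
        return nil
    }
    return moveAsDirectory(ctx, src, dst)
}

func main() {
    _ = move(context.Background(), "ss:///bucket/docs", "ss:///bucket/www/docs")
}

The fallback keeps single-file moves at one network call and only pays the second call when the first fails, which is the cost the author preferred not to make the default.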
7 changes: 2 additions & 5 deletions internal/storage/client/buckets.go
@@ -34,8 +34,8 @@ func ListStorageBuckets(ctx context.Context, projectRef string) ([]BucketRespons
 }
 
 type CreateBucketRequest struct {
-    Id               string   `json:"id"`                           // "string",
     Name             string   `json:"name"`                         // "string",
+    Id               string   `json:"id,omitempty"`                 // "string",
     Public           bool     `json:"public,omitempty"`             // false,
     FileSizeLimit    int      `json:"file_size_limit,omitempty"`    // 0,
     AllowedMimeTypes []string `json:"allowed_mime_types,omitempty"` // ["string"]
@@ -51,10 +51,7 @@ func CreateStorageBucket(ctx context.Context, projectRef, bucketName string) (*C
     if err != nil {
         return nil, err
     }
-    body := CreateBucketRequest{
-        Id:   bucketName,
-        Name: bucketName,
-    }
+    body := CreateBucketRequest{Name: bucketName}
     return tenant.JsonResponseWithBearer[CreateBucketResponse](ctx, http.MethodPost, url, apiKey.ServiceRole, body)
 }

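Net effect of the struct change above: Name stays required while Id becomes optional, so the minimal create-bucket payload serializes to just the name. A quick standalone check, with the struct copied from the new version of the diff:

package main

import (
    "encoding/json"
    "fmt"
)

// Struct copied from the diff above: only Name always serializes;
// the omitempty fields are dropped when zero-valued.
type CreateBucketRequest struct {
    Id               string   `json:"id,omitempty"`
    Name             string   `json:"name"`
    Public           bool     `json:"public,omitempty"`
    FileSizeLimit    int      `json:"file_size_limit,omitempty"`
    AllowedMimeTypes []string `json:"allowed_mime_types,omitempty"`
}

func main() {
    body, _ := json.Marshal(CreateBucketRequest{Name: "bucket"})
    fmt.Println(string(body)) // {"name":"bucket"}
}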
32 changes: 25 additions & 7 deletions internal/storage/client/objects.go
@@ -7,6 +7,7 @@ import (
     "net/http"
     "os"
     "path"
+    "strings"
 
     "github.com/spf13/afero"
     "github.com/supabase/cli/internal/utils"
@@ -17,9 +18,9 @@ const PAGE_LIMIT = 100
 
 type ListObjectsQuery struct {
     Prefix string `json:"prefix"`
-    Search string `json:"search"`
-    Limit  int    `json:"limit"`
-    Offset int    `json:"offset"`
+    Search string `json:"search,omitempty"`
+    Limit  int    `json:"limit,omitempty"`
+    Offset int    `json:"offset,omitempty"`
 }
 
 type ObjectResponse struct {
@@ -62,21 +63,37 @@ func ListStorageObjects(ctx context.Context, projectRef, bucket, prefix string,
 }
 
 func UploadStorageObject(ctx context.Context, projectRef, remotePath, localPath string, fsys afero.Fs) error {
-    apiKey, err := tenant.GetApiKeys(ctx, projectRef)
+    f, err := fsys.Open(localPath)
     if err != nil {
         return err
     }
-    url := fmt.Sprintf("https://%s/storage/v1/object/%s", utils.GetSupabaseHost(projectRef), remotePath)
-    f, err := fsys.Open(localPath)
+    defer f.Close()
+    // Decode mimetype
+    header := io.LimitReader(f, 512)
+    buf, err := io.ReadAll(header)
     if err != nil {
         return err
     }
-    defer f.Close()
+    mimetype := http.DetectContentType(buf)
+    _, err = f.Seek(0, io.SeekStart)
+    if err != nil {
+        return err
+    }
+    // Prepare request
+    apiKey, err := tenant.GetApiKeys(ctx, projectRef)
+    if err != nil {
+        return err
+    }
+    remotePath = strings.TrimPrefix(remotePath, "/")
+    url := fmt.Sprintf("https://%s/storage/v1/object/%s", utils.GetSupabaseHost(projectRef), remotePath)
     req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, f)
     if err != nil {
         return err
     }
     req.Header.Add("Authorization", "Bearer "+apiKey.ServiceRole)
+    req.Header.Add("Content-Type", mimetype)
+    // Use default value of storage-js: https://github.com/supabase/storage-js/blob/main/src/packages/StorageFileApi.ts#L22
+    req.Header.Add("Cache-Control", "max-age=3600")
     // Sends request
     resp, err := http.DefaultClient.Do(req)
     if err != nil {
@@ -98,6 +115,7 @@ func DownloadStorageObject(ctx context.Context, projectRef, remotePath, localPat
     if err != nil {
         return err
     }
+    remotePath = strings.TrimPrefix(remotePath, "/")
     url := fmt.Sprintf("https://%s/storage/v1/object/%s", utils.GetSupabaseHost(projectRef), remotePath)
     req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
     if err != nil {
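The upload change above sniffs the Content-Type from the first 512 bytes of the file and rewinds before streaming the body. The same pattern in isolation (readme.md is just an example file):

package main

import (
    "fmt"
    "io"
    "net/http"
    "os"
)

func main() {
    f, err := os.Open("readme.md") // example file
    if err != nil {
        panic(err)
    }
    defer f.Close()
    // http.DetectContentType considers at most the first 512 bytes.
    buf, err := io.ReadAll(io.LimitReader(f, 512))
    if err != nil {
        panic(err)
    }
    fmt.Println(http.DetectContentType(buf)) // e.g. "text/plain; charset=utf-8"
    // Rewind so the same handle can stream the whole file as a request body.
    if _, err := f.Seek(0, io.SeekStart); err != nil {
        panic(err)
    }
}

Detection never needs more than 512 bytes, and the Seek back to the start is what lets the diff reuse the same file handle as the upload body.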
138 changes: 138 additions & 0 deletions internal/storage/cp/cp.go
@@ -0,0 +1,138 @@
package cp

import (
    "context"
    "errors"
    "fmt"
    "io/fs"
    "net/url"
    "os"
    "path"
    "path/filepath"
    "strings"

    "github.com/spf13/afero"
    "github.com/supabase/cli/internal/storage"
    "github.com/supabase/cli/internal/storage/client"
    "github.com/supabase/cli/internal/storage/ls"
    "github.com/supabase/cli/internal/utils"
)

var errUnsupportedOperation = errors.New("Unsupported operation")

func Run(ctx context.Context, src, dst string, recursive bool, fsys afero.Fs) error {
    srcParsed, err := url.Parse(src)
    if err != nil {
        return err
    }
    dstParsed, err := url.Parse(dst)
    if err != nil {
        return err
    }
    projectRef, err := utils.LoadProjectRef(fsys)
    if err != nil {
        return err
    }
    if strings.ToLower(srcParsed.Scheme) == storage.STORAGE_SCHEME && dstParsed.Scheme == "" {
        if recursive {
            return DownloadStorageObjectAll(ctx, projectRef, srcParsed.Path, dst, fsys)
        }
        return client.DownloadStorageObject(ctx, projectRef, srcParsed.Path, dst, fsys)
    } else if srcParsed.Scheme == "" && strings.ToLower(dstParsed.Scheme) == storage.STORAGE_SCHEME {
        if recursive {
            return UploadStorageObjectAll(ctx, projectRef, dstParsed.Path, src, fsys)
        }
        return client.UploadStorageObject(ctx, projectRef, dstParsed.Path, src, fsys)
    } else if strings.ToLower(srcParsed.Scheme) == storage.STORAGE_SCHEME && strings.ToLower(dstParsed.Scheme) == storage.STORAGE_SCHEME {
        return errors.New("Copying between buckets is not supported")
    }
    utils.CmdSuggestion = fmt.Sprintf("Run %s to copy between local directories.", utils.Aqua("cp -r <src> <dst>"))
    return errUnsupportedOperation
}

func DownloadStorageObjectAll(ctx context.Context, projectRef, remotePath, localPath string, fsys afero.Fs) error {
    // Prepare local directory for download
    if fi, err := fsys.Stat(localPath); err == nil && fi.IsDir() {
        localPath = filepath.Join(localPath, path.Base(remotePath))
    }
    count := 0
    if err := ls.IterateStoragePathsAll(ctx, projectRef, remotePath, func(objectPath string) error {
        relPath := strings.TrimPrefix(objectPath, remotePath)
        dstPath := filepath.Join(localPath, filepath.FromSlash(relPath))
        fmt.Fprintln(os.Stderr, "Downloading:", objectPath, "=>", dstPath)
        count++
        if strings.HasSuffix(objectPath, "/") {
            return utils.MkdirIfNotExistFS(fsys, dstPath)
        }
        if err := utils.MkdirIfNotExistFS(fsys, filepath.Dir(dstPath)); err != nil {
            return err
        }
        return client.DownloadStorageObject(ctx, projectRef, objectPath, dstPath, fsys)
    }); err != nil {
        return err
    }
    if count == 0 {
        return errors.New("Object not found: " + remotePath)
    }
    return nil
}

func UploadStorageObjectAll(ctx context.Context, projectRef, remotePath, localPath string, fsys afero.Fs) error {
    noSlash := strings.TrimSuffix(remotePath, "/")
    // Check if directory exists on remote
    dirExists := false
    fileExists := false
    if err := ls.IterateStoragePaths(ctx, projectRef, noSlash, func(objectName string) error {
        if objectName == path.Base(noSlash) {
            fileExists = true
        }
        if objectName == path.Base(noSlash)+"/" {
            dirExists = true
        }
        return nil
    }); err != nil {
        return err
    }
    baseName := filepath.Base(localPath)
    return afero.Walk(fsys, localPath, func(filePath string, info fs.FileInfo, err error) error {
        if err != nil {
            return err
        }
        if !info.Mode().IsRegular() {
            return nil
        }
        relPath, err := filepath.Rel(localPath, filePath)
        if err != nil {
            return err
        }
        dstPath := remotePath
        // Copying single file
        if relPath == "." {
            _, prefix := storage.SplitBucketPrefix(dstPath)
            if IsDir(prefix) || (dirExists && !fileExists) {
                dstPath = path.Join(dstPath, info.Name())
            }
        } else {
            if baseName != "." && (dirExists || len(noSlash) == 0) {
                dstPath = path.Join(dstPath, baseName)
            }
            dstPath = path.Join(dstPath, relPath)
        }
        fmt.Fprintln(os.Stderr, "Uploading:", filePath, "=>", dstPath)
        err = client.UploadStorageObject(ctx, projectRef, dstPath, filePath, fsys)
        if err != nil && strings.Contains(err.Error(), `"error":"Bucket not found"`) {
Member:
Tying ourselves to the exact error message is not great imo. Can we at least assert on the HTTP status code instead? It's also available from what I see. Same for the two other places this pattern appears.

Contributor Author:
A 404 could be thrown either because the HTTP path is not found or because the bucket doesn't exist. We can add the status code as an additional check, but eventually we need a more specific error code to distinguish the two. (A status-code sketch follows at the end of this diff.)

Member:
I'm not sure if we should always create a bucket without any additional flag, but from a UX perspective it sounds ok.

            // Retry after creating bucket
            if bucket, prefix := storage.SplitBucketPrefix(dstPath); len(prefix) > 0 {
                if _, err := client.CreateStorageBucket(ctx, projectRef, bucket); err != nil {
                    return err
                }
                err = client.UploadStorageObject(ctx, projectRef, dstPath, filePath, fsys)
            }
        }
        return err
    })
}

func IsDir(objectPrefix string) bool {
    return len(objectPrefix) == 0 || strings.HasSuffix(objectPrefix, "/")
}
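On the error-matching thread above: a status-code-first check could look roughly like the sketch below. This is a sketch only; it assumes the client is changed to surface the response status and body (which this PR does not do), and the payload shape is inferred from the matched `"error":"Bucket not found"` string rather than confirmed by the storage API docs.

package client

import (
    "encoding/json"
    "net/http"
)

// storageError mirrors the error payload the substring match relies on;
// the field name is inferred, not confirmed.
type storageError struct {
    Error string `json:"error"`
}

// isBucketNotFound checks the status code before the decoded payload,
// instead of substring-matching raw error text. A bare 404 is ambiguous,
// as noted in the thread: it can also mean the HTTP path was not found,
// so the body is still inspected to disambiguate.
func isBucketNotFound(status int, body []byte) bool {
    if status != http.StatusNotFound {
        return false
    }
    var se storageError
    if err := json.Unmarshal(body, &se); err != nil {
        return false
    }
    return se.Error == "Bucket not found"
}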