remove s3.region parameter (pingcap#9757)
ran-huang authored Aug 10, 2022
1 parent 26d4427 commit 0b34972
Showing 13 changed files with 29 additions and 38 deletions.
9 changes: 3 additions & 6 deletions br/backup-and-restore-storages.md
@@ -30,7 +30,7 @@ Cloud storages such as S3, GCS and Azblob sometimes require additional configura

```bash
./dumpling -u root -h 127.0.0.1 -P 3306 -B mydb -F 256MiB \
-o 's3://my-bucket/sql-backup?region=us-west-2'
-o 's3://my-bucket/sql-backup'
```

+ Use TiDB Lightning to import data from S3:
@@ -39,7 +39,7 @@ Cloud storages such as S3, GCS and Azblob sometimes require additional configura

```bash
./tidb-lightning --tidb-port=4000 --pd-urls=127.0.0.1:2379 --backend=local --sorted-kv-dir=/tmp/sorted-kvs \
-d 's3://my-bucket/sql-backup?region=us-west-2'
-d 's3://my-bucket/sql-backup'
```

+ Use TiDB Lightning to import data from S3 (using the path-style request mode):
@@ -75,7 +75,6 @@ Cloud storages such as S3, GCS and Azblob sometimes require additional configura
|:----------|:---------|
| `access-key` | The access key |
| `secret-access-key` | The secret access key |
| `region` | Service Region for Amazon S3 (default to `us-east-1`) |
| `use-accelerate-endpoint` | Whether to use the accelerate endpoint on Amazon S3 (default to `false`) |
| `endpoint` | URL of custom endpoint for S3-compatible services (for example, `https://s3.example.com/`) |
| `force-path-style` | Use path style access rather than virtual hosted style access (default to `true`) |
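
As an illustration of the remaining URL parameters, here is a hedged sketch of pointing Dumpling at an S3-compatible service; the bucket and the `https://s3.example.com/` endpoint are placeholders taken from the table above, not values from this commit:

```bash
# Sketch only: combine the `endpoint` and `force-path-style` URL parameters
# for an S3-compatible service. All names are placeholders.
./dumpling -u root -h 127.0.0.1 -P 3306 -B mydb -F 256MiB \
    -o 's3://my-bucket/sql-backup?endpoint=https://s3.example.com/&force-path-style=true'
```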
@@ -138,8 +137,7 @@ In addition to the URL parameters, BR and Dumpling also support specifying these

```bash
./dumpling -u root -h 127.0.0.1 -P 3306 -B mydb -F 256MiB \
-o 's3://my-bucket/sql-backup' \
--s3.region 'us-west-2'
-o 's3://my-bucket/sql-backup'
```

If you have specified URL parameters and command-line parameters at the same time, the URL parameters are overwritten by the command-line parameters.
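
A minimal sketch of that precedence rule, using placeholder endpoints and the `--s3.endpoint` flag from the command-line parameter table below:

```bash
# Sketch: both an `endpoint` URL parameter and `--s3.endpoint` are given;
# per the rule above, the command-line value (https://cli.example.com/) wins.
./dumpling -u root -h 127.0.0.1 -P 3306 -B mydb -F 256MiB \
    -o 's3://my-bucket/sql-backup?endpoint=https://url.example.com/' \
    --s3.endpoint 'https://cli.example.com/'
```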
@@ -148,7 +146,6 @@ If you have specified URL parameters and command-line parameters at the same tim

| Command-line parameter | Description |
|:----------|:------|
| `--s3.region` | Amazon S3's service region, which defaults to `us-east-1`. |
| `--s3.endpoint` | The URL of custom endpoint for S3-compatible services. For example, `https://s3.example.com/`. |
| `--s3.storage-class` | The storage class of the upload object. For example, `STANDARD` or `STANDARD_IA`. |
| `--s3.sse` | The server-side encryption algorithm used to encrypt the upload. The value options are empty, `AES256` and `aws:kms`. |
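
For example, a hedged sketch of one of these command-line parameters in use; `STANDARD_IA` comes from the table entry above and the bucket is a placeholder:

```bash
# Sketch: upload the export with a non-default storage class.
./dumpling -u root -h 127.0.0.1 -P 3306 -B mydb -F 256MiB \
    -o 's3://my-bucket/sql-backup' \
    --s3.storage-class 'STANDARD_IA'
```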
9 changes: 3 additions & 6 deletions br/backup-storage-S3.md
@@ -19,7 +19,7 @@ Before performing backup or restoration using S3, you need to configure the priv

Before backup, configure the following privileges to access the backup directory on S3.

- Minimum privileges for TiKV and BR to access the backup directories of `s3:ListBucket`, `s3:PutObject`, and `s3:AbortMultipartUpload` during backup
- Minimum privileges for TiKV and BR to access the backup directories of `s3:ListBucket`, `s3:PutObject`, and `s3:AbortMultipartUpload` during backup
- Minimum privileges for TiKV and BR to access the backup directories of `s3:ListBucket` and `s3:GetObject` during restoration

If you have not yet created a backup directory, refer to [AWS Official Document](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) to create an S3 bucket in the specified region. If necessary, you can also create a folder in the bucket by referring to [AWS official documentation - Create Folder](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html).
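
A hedged sketch of that preparation step, assuming the AWS CLI is installed and configured (bucket and folder names are placeholders; the console workflow in the linked documents works equally well):

```bash
# Sketch: create the bucket, then an optional "folder" prefix for backups.
aws s3 mb s3://my-bucket
aws s3api put-object --bucket my-bucket --key my-backup-folder/
```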
@@ -33,15 +33,15 @@ It is recommended that you configure access to S3 using either of the following
{{< copyable "shell-regular" >}}

```shell
br backup full --pd "${PDIP}:2379" --storage "s3://${Bucket}/${Folder}" --s3.region "${region}"
br backup full --pd "${PDIP}:2379" --storage "s3://${Bucket}/${Folder}"
```

- Configure `access-key` and `secret-access-key` for accessing S3 in the `br` CLI, and set `--send-credentials-to-tikv=true` to pass the access key from BR to each TiKV.

{{< copyable "shell-regular" >}}

```shell
br backup full --pd "${PDIP}:2379" --storage "s3://${Bucket}/${Folder}?access-key=${accessKey}&secret-access-key=${secretAccessKey}" --s3.region "${region}" --send-credentials-to-tikv=true
br backup full --pd "${PDIP}:2379" --storage "s3://${Bucket}/${Folder}?access-key=${accessKey}&secret-access-key=${secretAccessKey}" --send-credentials-to-tikv=true
```

Because the access key in a command is vulnerable to leakage, you are recommended to associate an IAM role to EC2 instances to access S3.
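
As a hedged sketch of that recommendation, a policy attached to the EC2 instance role could cover exactly the minimum privileges listed at the top of this file; the role, policy, and bucket names below are placeholders, not part of this commit:

```bash
# Sketch: write the minimum S3 privileges for backup and restore into a policy
# and attach it to the instance role that the BR host and TiKV nodes assume.
cat > br-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:AbortMultipartUpload", "s3:GetObject"],
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
EOF
aws iam put-role-policy \
  --role-name my-ec2-br-role \
  --policy-name br-s3-access \
  --policy-document file://br-s3-policy.json
```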
@@ -54,15 +54,13 @@ Because the access key in a command is vulnerable to leakage, you are recommende
br backup full \
--pd "${PDIP}:2379" \
--storage "s3://${Bucket}/${Folder}?access-key=${accessKey}&secret-access-key=${secretAccessKey}" \
--s3.region "${region}" \
--send-credentials-to-tikv=true \
--ratelimit 128 \
--log-file backuptable.log
```

In the preceding command:

- `--s3.region`: specifies the region of S3.
- `--send-credentials-to-tikv`: specifies that access key is passed to the TiKV nodes.

## Restore data from S3
@@ -71,7 +69,6 @@ In the preceding command:
br restore full \
--pd "${PDIP}:2379" \
--storage "s3://${Bucket}/${Folder}?access-key=${accessKey}&secret-access-key=${secretAccessKey}" \
--s3.region "${region}" \
--ratelimit 128 \
--send-credentials-to-tikv=true \
--log-file restorefull.log
2 changes: 1 addition & 1 deletion dm/task-configuration-file-full.md
@@ -108,7 +108,7 @@ loaders:
global: # The configuration name of the processing unit.
pool-size: 16 # The number of threads that concurrently execute dumped SQL files in the load processing unit (16 by default). When multiple instances are migrating data to TiDB at the same time, slightly reduce the value according to the load.
# The directory that stores full data exported from the upstream ("./dumped_data" by default).
# Supports a local filesystem path or an Amazon S3 path. For example, "s3://dm_bucket/dumped_data?region=us-west-2&endpoint=s3-website.us-east-2.amazonaws.com&access_key=s3accesskey&secret_access_key=s3secretkey&force_path_style=true"
# Supports a local filesystem path or an Amazon S3 path. For example, "s3://dm_bucket/dumped_data?endpoint=s3-website.us-east-2.amazonaws.com&access_key=s3accesskey&secret_access_key=s3secretkey&force_path_style=true"
dir: "./dumped_data"
# The import mode during the full import phase. In most cases you don't need to care about this configuration.
# - "sql" (default). Use [TiDB Lightning](/tidb-lightning/tidb-lightning-overview.md) TiDB-backend mode to import data.
5 changes: 1 addition & 4 deletions dumpling-overview.md
@@ -186,8 +186,6 @@ export AWS_SECRET_ACCESS_KEY=${SecretKey}
Dumpling also supports reading credential files from `~/.aws/credentials`. For more Dumpling configuration, see the configuration of [External storages](/br/backup-and-restore-storages.md).
When you back up data using Dumpling, explicitly specify the `--s3.region` parameter, which means the region of the S3 storage (for example, `ap-northeast-1`):
{{< copyable "shell-regular" >}}
```shell
@@ -196,8 +194,7 @@ When you back up data using Dumpling, explicitly specify the `--s3.region` param
-P 4000 \
-h 127.0.0.1 \
-r 200000 \
-o "s3://${Bucket}/${Folder}" \
--s3.region "${region}"
-o "s3://${Bucket}/${Folder}"
```
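
Since the hunk above also mentions reading credentials from `~/.aws/credentials`, here is a hedged sketch of that file with placeholder values only; the environment variables shown earlier work the same way:

```bash
# Sketch: a default profile that Dumpling/BR can pick up automatically.
mkdir -p ~/.aws
cat > ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id = <access-key>
aws_secret_access_key = <secret-access-key>
EOF
```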
### Filter the exported data
4 changes: 2 additions & 2 deletions encryption-at-rest.md
@@ -177,11 +177,11 @@ To enable S3 server-side encryption when backup to S3 using BR, pass `--s3.sse`
To use a custom AWS KMS CMK that you created and owned, pass `--s3.sse-kms-key-id` in addition. In this case, both the BR process and all the TiKV nodes in the cluster would need access to the KMS CMK (for example, via AWS IAM), and the KMS CMK needs to be in the same AWS region as the S3 bucket used to store the backup. It is advised to grant access to the KMS CMK to BR process and TiKV nodes via AWS IAM. Refer to AWS documentation for usage of [IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html). For example:

```
./br backup full --pd <pd-address> --storage "s3://<bucket>/<prefix>" --s3.region <region> --s3.sse aws:kms --s3.sse-kms-key-id 0987dcba-09fe-87dc-65ba-ab0987654321
./br backup full --pd <pd-address> --storage "s3://<bucket>/<prefix>" --s3.sse aws:kms --s3.sse-kms-key-id 0987dcba-09fe-87dc-65ba-ab0987654321
```

When restoring the backup, both `--s3.sse` and `--s3.sse-kms-key-id` should NOT be used. S3 will figure out encryption settings by itself. The BR process and TiKV nodes in the cluster to restore the backup to would also need access to the KMS CMK, or the restore will fail. Example:

```
./br restore full --pd <pd-address> --storage "s3://<bucket>/<prefix> --s3.region <region>"
./br restore full --pd <pd-address> --storage "s3://<bucket>/<prefix>"
```
6 changes: 3 additions & 3 deletions migrate-aurora-to-tidb.md
@@ -57,7 +57,7 @@ Export the schema using Dumpling by running the following command. The command i
{{< copyable "shell-regular" >}}

```shell
tiup dumpling --host ${host} --port 3306 --user root --password ${password} --filter 'my_db1.table[12]' --no-data --output 's3://my-bucket/schema-backup?region=us-west-2' --filter "mydb.*"
tiup dumpling --host ${host} --port 3306 --user root --password ${password} --filter 'my_db1.table[12]' --no-data --output 's3://my-bucket/schema-backup' --filter "mydb.*"
```

The parameters used in the command above are as follows. For more parameters, refer to [Dumpling overview](/dumpling-overview.md).
@@ -110,7 +110,7 @@ sorted-kv-dir = "/mnt/ssd/sorted-kv-dir"
[mydumper]
# The path that stores the snapshot file.
data-source-dir = "${s3_path}" # e.g.: s3://my-bucket/sql-backup?region=us-west-2
data-source-dir = "${s3_path}" # e.g.: s3://my-bucket/sql-backup
[[mydumper.files]]
# The expression that parses the parquet file.
@@ -129,7 +129,7 @@ If you need to enable TLS in the TiDB cluster, refer to [TiDB Lightning Configur
{{< copyable "shell-regular" >}}

```shell
tiup tidb-lightning -config tidb-lightning.toml -d 's3://my-bucket/schema-backup?region=us-west-2'
tiup tidb-lightning -config tidb-lightning.toml -d 's3://my-bucket/schema-backup'
```

2. Start the import by running `tidb-lightning`. If you launch the program directly in the command line, the process might exit unexpectedly after receiving a SIGHUP signal. In this case, it is recommended to run the program using a `nohup` or `screen` tool. For example:
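
A hedged sketch of such a `nohup` invocation; the config file name reuses the one from the earlier step, and the log redirect is an assumption:

```bash
# Sketch: keep tidb-lightning running after the terminal closes.
nohup tiup tidb-lightning -config tidb-lightning.toml > nohup.out 2>&1 &
```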
2 changes: 1 addition & 1 deletion migrate-from-csv-files-to-tidb.md
@@ -55,7 +55,7 @@ sorted-kv-dir = "/mnt/ssd/sorted-kv-dir"

[mydumper]
# Directory of the data source.
data-source-dir = "${data-path}" # A local path or S3 path. For example, 's3://my-bucket/sql-backup?region=us-west-2'.
data-source-dir = "${data-path}" # A local path or S3 path. For example, 's3://my-bucket/sql-backup'.

# Defines CSV format.
[mydumper.csv]
4 changes: 2 additions & 2 deletions migrate-from-sql-files-to-tidb.md
@@ -15,7 +15,7 @@ This document describes how to migrate data from MySQL SQL files to TiDB using T

## Step 1. Prepare SQL files

Put all the SQL files in the same directory, like `/data/my_datasource/` or `s3://my-bucket/sql-backup?region=us-west-2`. TiDB Lightning recursively searches for all `.sql` files in this directory and its subdirectories.
Put all the SQL files in the same directory, like `/data/my_datasource/` or `s3://my-bucket/sql-backup`. TiDB Lightning recursively searches for all `.sql` files in this directory and its subdirectories.
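
A hypothetical layout (file and directory names are made up) that satisfies this step:

```bash
# Sketch: list every .sql file that TiDB Lightning would pick up.
find /data/my_datasource/ -name '*.sql'
# /data/my_datasource/my_db1-schema-create.sql
# /data/my_datasource/my_db1.table1.sql
# /data/my_datasource/archive/my_db1.table2.sql
```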

## Step 2. Define the target table schema

@@ -54,7 +54,7 @@ sorted-kv-dir = "${sorted-kv-dir}"

[mydumper]
# Directory of the data source
data-source-dir = "${data-path}" # Local or S3 path, such as 's3://my-bucket/sql-backup?region=us-west-2'
data-source-dir = "${data-path}" # Local or S3 path, such as 's3://my-bucket/sql-backup'

[tidb]
# The information of target cluster
4 changes: 2 additions & 2 deletions migrate-large-mysql-to-tidb.md
@@ -58,7 +58,7 @@ The target TiKV cluster must have enough disk space to store the imported data.
{{< copyable "shell-regular" >}}

```shell
tiup dumpling -h ${ip} -P 3306 -u root -t 16 -r 200000 -F 256MiB -B my_db1 -f 'my_db1.table[12]' -o 's3://my-bucket/sql-backup?region=us-west-2'
tiup dumpling -h ${ip} -P 3306 -u root -t 16 -r 200000 -F 256MiB -B my_db1 -f 'my_db1.table[12]' -o 's3://my-bucket/sql-backup'
```

Dumpling exports data in SQL files by default. You can specify a different file format by adding the `--filetype` option.
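
For instance, a hedged sketch of the same export in CSV format; the bucket path is unchanged from the command above, and `csv` is the common alternative value:

```bash
# Sketch: same export as above, but emit CSV files instead of SQL files.
tiup dumpling -h ${ip} -P 3306 -u root -t 16 -r 200000 -F 256MiB -B my_db1 \
    -f 'my_db1.table[12]' -o 's3://my-bucket/sql-backup' --filetype csv
```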
@@ -110,7 +110,7 @@ The target TiKV cluster must have enough disk space to store the imported data.
[mydumper]
# The data source directory. The same directory where Dumpling exports data in "Step 1. Export all data from MySQL".
data-source-dir = "${data-path}" # A local path or S3 path. For example, 's3://my-bucket/sql-backup?region=us-west-2'.
data-source-dir = "${data-path}" # A local path or S3 path. For example, 's3://my-bucket/sql-backup'.
[tidb]
# The target TiDB cluster information.
4 changes: 2 additions & 2 deletions sql-statements/sql-statement-backup.md
@@ -105,7 +105,7 @@ BR supports backing up data to S3 or GCS:
{{< copyable "sql" >}}

```sql
BACKUP DATABASE `test` TO 's3://example-bucket-2020/backup-05/?region=us-west-2&access-key={YOUR_ACCESS_KEY}&secret-access-key={YOUR_SECRET_KEY}';
BACKUP DATABASE `test` TO 's3://example-bucket-2020/backup-05/?access-key={YOUR_ACCESS_KEY}&secret-access-key={YOUR_SECRET_KEY}';
```

The URL syntax is further explained in [External Storages](/br/backup-and-restore-storages.md).
@@ -115,7 +115,7 @@ When running on cloud environment where credentials should not be distributed, s
{{< copyable "sql" >}}

```sql
BACKUP DATABASE `test` TO 's3://example-bucket-2020/backup-05/?region=us-west-2'
BACKUP DATABASE `test` TO 's3://example-bucket-2020/backup-05/'
SEND_CREDENTIALS_TO_TIKV = FALSE;
```

4 changes: 2 additions & 2 deletions sql-statements/sql-statement-restore.md
@@ -96,7 +96,7 @@ BR supports restoring data from S3 or GCS:
{{< copyable "sql" >}}

```sql
RESTORE DATABASE * FROM 's3://example-bucket-2020/backup-05/?region=us-west-2';
RESTORE DATABASE * FROM 's3://example-bucket-2020/backup-05/';
```

The URL syntax is further explained in [External Storages](/br/backup-and-restore-storages.md).
@@ -106,7 +106,7 @@ When running on cloud environment where credentials should not be distributed, s
{{< copyable "sql" >}}

```sql
RESTORE DATABASE * FROM 's3://example-bucket-2020/backup-05/?region=us-west-2'
RESTORE DATABASE * FROM 's3://example-bucket-2020/backup-05/'
SEND_CREDENTIALS_TO_TIKV = FALSE;
```

2 changes: 1 addition & 1 deletion sql-statements/sql-statement-show-backups.md
@@ -32,7 +32,7 @@ In one connection, execute the following statement:
{{< copyable "sql" >}}

```sql
BACKUP DATABASE `test` TO 's3://example-bucket/backup-01/?region=us-west-1';
BACKUP DATABASE `test` TO 's3://example-bucket/backup-01';
```

Before the backup completes, run `SHOW BACKUPS` in a new connection:
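
A hedged sketch of that second connection, assuming the standard MySQL client pointed at the default TiDB port:

```bash
# Sketch: poll the progress of the running BACKUP task from another session.
mysql -h 127.0.0.1 -P 4000 -u root -e 'SHOW BACKUPS\G'
```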
12 changes: 6 additions & 6 deletions tidb-cloud/migrate-from-aurora-bulk-import.md
@@ -5,7 +5,7 @@ summary: Learn how to migrate data from Amazon Aurora MySQL to TiDB Cloud in bul

# Migrate from Amazon Aurora MySQL to TiDB Cloud in Bulk

This document describes how to migrate data from Amazon Aurora MySQL to TiDB Cloud in bulk using the import tools on TiDB Cloud console.
This document describes how to migrate data from Amazon Aurora MySQL to TiDB Cloud in bulk using the import tools on TiDB Cloud console.

## Learn how to create an import task on the TiDB Cloud console

@@ -82,7 +82,7 @@ You need to prepare an EC2 to run the following data export task. It's better to
```bash
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
source ~/.bash_profile
tiup install dumpling
tiup install dumpling
```

In the above commands, you need to modify `~/.bash_profile` to the path of your profile file.
@@ -92,7 +92,7 @@ You need to prepare an EC2 to run the following data export task. It's better to
> **Note:**
>
> If you have assigned the IAM role to the EC2, you can skip configuring the access key and security key, and directly run Dumpling on this EC2.

You can grant the write privilege using the access key and security key of your AWS account in the environment. Create a specific key pair for preparing data, and revoke the access key immediately after you finish the preparation.

{{< copyable "shell-regular" >}}
@@ -114,7 +114,7 @@ You need to prepare an EC2 to run the following data export task. It's better to
export_endpoint="<the endpoint for Amazon Aurora MySQL>"
# You will use the s3 url when you create importing task
backup_dir="s3://<bucket name>/<backup dir>"
s3_bucket_region="<bueckt_region>"
s3_bucket_region="<bucket_region>"
# Use `tiup -- dumpling` instead if "flag needs an argument: 'h' in -h" is prompted for TiUP versions earlier than v1.8
tiup dumpling \
@@ -159,7 +159,7 @@ To migrate data from Aurora, you need to back up the schema of the database.
mysqldump -h ${export_endpoint} -u ${export_username} -p --ssl-mode=DISABLED -d${export_database} >db.sql
```

3. Import the schema of the database into TiDB Cloud.
3. Import the schema of the database into TiDB Cloud.

{{< copyable "sql" >}}

@@ -189,7 +189,7 @@ To migrate data from Aurora, you need to back up the schema of the database.

7. Choose the proper IAM role to grant write access to the S3 bucket. Make a note of this role as it will be used later when you import the snapshot to TiDB Cloud.

8. Choose a proper AWS KMS Key and make sure the IAM role has already been added to the KMS Key Users. To add a role, you can select the KMS service, select the key, and then click **Add**.
8. Choose a proper AWS KMS Key and make sure the IAM role has already been added to the KMS Key Users. To add a role, you can select the KMS service, select the key, and then click **Add**.

9. Click **Export Amazon S3**. You can see the progress in the task table.

