[doc] Change the table name from uppercase to lowercase (apache#2841)
herefree authored and harveyyue committed Feb 22, 2024
1 parent a65fcff commit 290381f
Showing 12 changed files with 138 additions and 138 deletions.
6 changes: 3 additions & 3 deletions docs/content/concepts/append-table/append-queue-table.md
@@ -96,7 +96,7 @@ For streaming reads, records are produced in the following order:
 You can define watermark for reading Paimon tables:
 
 ```sql
-CREATE TABLE T (
+CREATE TABLE t (
     `user` BIGINT,
     product STRING,
     order_time TIMESTAMP(3),
@@ -105,7 +105,7 @@ CREATE TABLE T (
 
 -- launch a bounded streaming job to read paimon_table
 SELECT window_start, window_end, COUNT(`user`) FROM TABLE(
-    TUMBLE(TABLE T, DESCRIPTOR(order_time), INTERVAL '10' MINUTES)) GROUP BY window_start, window_end;
+    TUMBLE(TABLE t, DESCRIPTOR(order_time), INTERVAL '10' MINUTES)) GROUP BY window_start, window_end;
 ```
 
 You can also enable [Flink Watermark alignment](https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/datastream/event-time/generating_watermarks/#watermark-alignment-_beta_),
@@ -168,7 +168,7 @@ The following is an example of creating the Append table and specifying the buck
 {{< tab "Flink" >}}
 
 ```sql
-CREATE TABLE MyTable (
+CREATE TABLE my_table (
     product_id BIGINT,
     price DOUBLE,
     sales BIGINT
@@ -101,7 +101,7 @@ The following is an example of creating the Append table and specifying the buck
 {{< tab "Flink" >}}
 
 ```sql
-CREATE TABLE MyTable (
+CREATE TABLE my_table (
     product_id BIGINT,
     price DOUBLE,
     sales BIGINT
26 changes: 13 additions & 13 deletions docs/content/concepts/primary-key-table/merge-engine.md
@@ -76,7 +76,7 @@ So we introduce sequence group mechanism for partial-update tables. It can solve
 See example:
 
 ```sql
-CREATE TABLE T (
+CREATE TABLE t (
     k INT,
     a INT,
     b INT,
@@ -91,17 +91,17 @@ CREATE TABLE T (
     'fields.g_2.sequence-group'='c,d'
 );
 
-INSERT INTO T VALUES (1, 1, 1, 1, 1, 1, 1);
+INSERT INTO t VALUES (1, 1, 1, 1, 1, 1, 1);
 
 -- g_2 is null, c, d should not be updated
-INSERT INTO T VALUES (1, 2, 2, 2, 2, 2, CAST(NULL AS INT));
+INSERT INTO t VALUES (1, 2, 2, 2, 2, 2, CAST(NULL AS INT));
 
-SELECT * FROM T; -- output 1, 2, 2, 2, 1, 1, 1
+SELECT * FROM t; -- output 1, 2, 2, 2, 1, 1, 1
 
 -- g_1 is smaller, a, b should not be updated
-INSERT INTO T VALUES (1, 3, 3, 1, 3, 3, 3);
+INSERT INTO t VALUES (1, 3, 3, 1, 3, 3, 3);
 
-SELECT * FROM T; -- output 1, 2, 2, 2, 3, 3, 3
+SELECT * FROM t; -- output 1, 2, 2, 2, 3, 3, 3
 ```
 
 For `fields.<field-name>.sequence-group`, valid comparative data types include: DECIMAL, TINYINT, SMALLINT, INTEGER, BIGINT, FLOAT, DOUBLE, DATE, TIME, TIMESTAMP, and TIMESTAMP_LTZ.
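
Since TIMESTAMP is among the valid comparative types, a sequence group can also be keyed on an event-time column. An illustrative sketch, not part of this commit (table name `t_ts` and column `ts` are hypothetical):

```sql
-- partial-update table whose columns a and b are only refreshed
-- when the incoming row carries a newer ts
CREATE TABLE t_ts (
    k INT,
    a INT,
    b INT,
    ts TIMESTAMP(3),
    PRIMARY KEY (k) NOT ENFORCED
) WITH (
    'merge-engine' = 'partial-update',
    'fields.ts.sequence-group' = 'a,b'
);
```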
@@ -113,7 +113,7 @@ You can specify aggregation function for the input field, all the functions in t
 See example:
 
 ```sql
-CREATE TABLE T (
+CREATE TABLE t (
     k INT,
     a INT,
     b INT,
@@ -127,13 +127,13 @@ CREATE TABLE T (
     'fields.c.sequence-group' = 'd',
     'fields.d.aggregate-function' = 'sum'
 );
-INSERT INTO T VALUES (1, 1, 1, CAST(NULL AS INT), CAST(NULL AS INT));
-INSERT INTO T VALUES (1, CAST(NULL AS INT), CAST(NULL AS INT), 1, 1);
-INSERT INTO T VALUES (1, 2, 2, CAST(NULL AS INT), CAST(NULL AS INT));
-INSERT INTO T VALUES (1, CAST(NULL AS INT), CAST(NULL AS INT), 2, 2);
+INSERT INTO t VALUES (1, 1, 1, CAST(NULL AS INT), CAST(NULL AS INT));
+INSERT INTO t VALUES (1, CAST(NULL AS INT), CAST(NULL AS INT), 1, 1);
+INSERT INTO t VALUES (1, 2, 2, CAST(NULL AS INT), CAST(NULL AS INT));
+INSERT INTO t VALUES (1, CAST(NULL AS INT), CAST(NULL AS INT), 2, 2);
 
 
-SELECT * FROM T; -- output 1, 2, 1, 2, 3
+SELECT * FROM t; -- output 1, 2, 1, 2, 3
 ```
 
 ## Aggregation
@@ -151,7 +151,7 @@ Each field not part of the primary keys can be given an aggregate function, spec
 {{< tab "Flink" >}}
 
 ```sql
-CREATE TABLE MyTable (
+CREATE TABLE my_table (
     product_id BIGINT,
     price DOUBLE,
     sales BIGINT,
@@ -37,7 +37,7 @@ there will be some cases that lead to data disorder. At this time, you can use a
 {{< tabs "sequence.field" >}}
 {{< tab "Flink" >}}
 ```sql
-CREATE TABLE MyTable (
+CREATE TABLE my_table (
     pk BIGINT PRIMARY KEY NOT ENFORCED,
     v1 DOUBLE,
     v2 BIGINT,
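
The hunk above is cut off before the WITH clause; for context, a table of this shape would look roughly as follows. An illustrative sketch assuming Paimon's `'sequence.field'` option, not part of this commit:

```sql
-- update_time decides which record wins when primary keys collide,
-- instead of the default input order
CREATE TABLE my_table (
    pk BIGINT PRIMARY KEY NOT ENFORCED,
    v1 DOUBLE,
    v2 BIGINT,
    update_time TIMESTAMP(3)
) WITH (
    'sequence.field' = 'update_time'
);
```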
8 changes: 4 additions & 4 deletions docs/content/how-to/altering-tables.md
@@ -280,11 +280,11 @@ The following SQL drops the partitions of the paimon table.
 For flink sql, you can specify the partial columns of partition columns, and you can also specify multiple partition values at the same time.
 
 ```sql
-ALTER TABLE MyTable DROP PARTITION (`id` = 1);
+ALTER TABLE my_table DROP PARTITION (`id` = 1);
 
-ALTER TABLE MyTable DROP PARTITION (`id` = 1, `name` = 'paimon');
+ALTER TABLE my_table DROP PARTITION (`id` = 1, `name` = 'paimon');
 
-ALTER TABLE MyTable DROP PARTITION (`id` = 1), PARTITION (`id` = 2);
+ALTER TABLE my_table DROP PARTITION (`id` = 1), PARTITION (`id` = 2);
 
 ```
 
@@ -295,7 +295,7 @@ ALTER TABLE MyTable DROP PARTITION (`id` = 1), PARTITION (`id` = 2);
 For spark sql, you need to specify all the partition columns.
 
 ```sql
-ALTER TABLE MyTable DROP PARTITION (`id` = 1, `name` = 'paimon');
+ALTER TABLE my_table DROP PARTITION (`id` = 1, `name` = 'paimon');
 ```
 
 {{< /tab >}}
