Search before asking
Paimon version
0.5.0-incubating
Compute Engine
Flink 1.17.1
Minimal reproduce step
CREATE TABLE IF NOT EXISTS paimon_t1min (
    rowtime TIMESTAMP_LTZ(3),
    pdate STRING,
    symbol STRING,
    event_id STRING,
    price_change_t1min DECIMAL(38, 17),
    price_change_percent_t1min DECIMAL(38, 6),
    PRIMARY KEY (symbol, event_id) NOT ENFORCED
) PARTITIONED BY (pdate)
WITH (
    'bucket' = '20',
    'bucket-key' = 'event_id',
    'format' = 'csv',
    'changelog-producer' = 'full-compaction',
    'changelog-producer.compaction-interval' = '5 s',
    'merge-engine' = 'partial-update',
    'partial-update.ignore-delete' = 'true',
    'partition.expiration-time' = '7 d',
    'partition.expiration-check-interval' = '1 d',
    'partition.timestamp-formatter' = 'yyyy-MM-dd',
    'partition.timestamp-pattern' = '$pdate',
    'snapshot.num-retained.max' = '20'
);
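For context, the stack trace below goes through GlobalDynamicBucketSink, the cross-partition update path Paimon takes when the primary key does not contain every partition key. If cross-partition upsert is not actually required for this table, one possible workaround (a sketch only, untested against 0.5.0-incubating) is to add pdate to the primary key so the fixed-bucket path can be used:

```sql
CREATE TABLE IF NOT EXISTS paimon_t1min (
    rowtime TIMESTAMP_LTZ(3),
    pdate STRING,
    symbol STRING,
    event_id STRING,
    price_change_t1min DECIMAL(38, 17),
    price_change_percent_t1min DECIMAL(38, 6),
    -- Including the partition key makes this a regular (non-cross-partition) PK table:
    PRIMARY KEY (pdate, symbol, event_id) NOT ENFORCED
) PARTITIONED BY (pdate)
WITH (
    'bucket' = '20',
    'bucket-key' = 'event_id'
);
```

Whether this fits depends on the upsert semantics needed: with pdate in the primary key, two rows with the same symbol and event_id but different dates are distinct records rather than updates of each other.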
INSERT INTO paimon_t1min
SELECT rowtime, pdate, symbol, event_id, price_change_t1min, price_change_percent_t1min
FROM t1min;
What doesn't meet your expectations?
I developed a Flink streaming job that writes to Paimon and tested it on localhost.
Paimon resolves the partition keys as ["symbol", "pdate"] and the primary keys as ["symbol", "event_id", "rowtime"], and the job fails with the error below.
Is there something wrong with my SQL?
java.lang.IllegalArgumentException: Field names must be unique. Found duplicates: [symbol]
at org.apache.paimon.types.RowType.validateFields(RowType.java:159) ~[paimon-flink-1.17-0.5.0-incubating.jar:0.5.0-incubating]
at org.apache.paimon.types.RowType.&lt;init&gt;(RowType.java:65) ~[paimon-flink-1.17-0.5.0-incubating.jar:0.5.0-incubating]
at org.apache.paimon.types.RowType.&lt;init&gt;(RowType.java:69) ~[paimon-flink-1.17-0.5.0-incubating.jar:0.5.0-incubating]
at org.apache.paimon.schema.TableSchema.projectedLogicalRowType(TableSchema.java:253) ~[paimon-flink-1.17-0.5.0-incubating.jar:0.5.0-incubating]
at org.apache.paimon.flink.sink.index.GlobalDynamicBucketSink.build(GlobalDynamicBucketSink.java:77) ~[paimon-flink-1.17-0.5.0-incubating.jar:0.5.0-incubating]
at org.apache.paimon.flink.sink.FlinkSinkBuilder.buildDynamicBucketSink(FlinkSinkBuilder.java:102) ~[paimon-flink-1.17-0.5.0-incubating.jar:0.5.0-incubating]
at org.apache.paimon.flink.sink.FlinkSinkBuilder.build(FlinkSinkBuilder.java:90) ~[paimon-flink-1.17-0.5.0-incubating.jar:0.5.0-incubating]
at org.apache.paimon.flink.sink.FlinkTableSinkBase.lambda$getSinkRuntimeProvider$0(FlinkTableSinkBase.java:140) ~[paimon-flink-1.17-0.5.0-incubating.jar:0.5.0-incubating]
at org.apache.paimon.flink.PaimonDataStreamSinkProvider.consumeDataStream(PaimonDataStreamSinkProvider.java:41) ~[paimon-flink-1.17-0.5.0-incubating.jar:0.5.0-incubating]
at org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecSink.applySinkProvider(CommonExecSink.java:507) ~[?:?]
at org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecSink.createSinkTransformation(CommonExecSink.java:218) ~[?:?]
at org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.java:176) ~[?:?]
at org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:161) ~[?:?]
at org.apache.flink.table.planner.delegation.StreamPlanner.$anonfun$translateToPlan$1(StreamPlanner.scala:85) ~[?:?]
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) ~[scala-library-2.12.7.jar:?]
at scala.collection.Iterator.foreach(Iterator.scala:937) ~[scala-library-2.12.7.jar:?]
at scala.collection.Iterator.foreach$(Iterator.scala:937) ~[scala-library-2.12.7.jar:?]
at scala.collection.AbstractIterator.foreach(Iterator.scala:1425) ~[scala-library-2.12.7.jar:?]
at scala.collection.IterableLike.foreach(IterableLike.scala:70) ~[scala-library-2.12.7.jar:?]
at scala.collection.IterableLike.foreach$(IterableLike.scala:69) ~[scala-library-2.12.7.jar:?]
at scala.collection.AbstractIterable.foreach(Iterable.scala:54) ~[scala-library-2.12.7.jar:?]
at scala.collection.TraversableLike.map(TraversableLike.scala:233) ~[scala-library-2.12.7.jar:?]
at scala.collection.TraversableLike.map$(TraversableLike.scala:226) ~[scala-library-2.12.7.jar:?]
at scala.collection.AbstractTraversable.map(Traversable.scala:104) ~[scala-library-2.12.7.jar:?]
at org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:84) ~[?:?]
at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:197) ~[?:?]
at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1803) ~[flink-table-api-java-1.17.1.jar:1.17.1]
at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:881) ~[flink-table-api-java-1.17.1.jar:1.17.1]
at org.apache.flink.table.api.internal.StatementSetImpl.execute(StatementSetImpl.java:109) ~[flink-table-api-java-1.17.1.jar:1.17.1]
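The failing check itself is easy to reproduce in isolation. Below is a simplified, hypothetical stand-in for the uniqueness validation in RowType.validateFields (this is an illustration, not Paimon's actual code): if the sink projects a row from the resolved primary keys followed by the partition keys, symbol appears twice and the validation rejects it.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class FieldNameCheck {

    // Simplified stand-in for the uniqueness check in RowType.validateFields:
    // collect every field name that occurs more than once, in first-seen order.
    static List<String> findDuplicates(List<String> fieldNames) {
        Set<String> seen = new HashSet<>();
        List<String> duplicates = new ArrayList<>();
        for (String name : fieldNames) {
            if (!seen.add(name) && !duplicates.contains(name)) {
                duplicates.add(name);
            }
        }
        return duplicates;
    }

    public static void main(String[] args) {
        // Hypothetical projection: resolved primary keys followed by partition keys.
        List<String> projected = new ArrayList<>(List.of("symbol", "event_id", "rowtime"));
        projected.addAll(List.of("symbol", "pdate"));

        List<String> duplicates = findDuplicates(projected);
        if (!duplicates.isEmpty()) {
            // Mirrors the reported message:
            System.out.println("Field names must be unique. Found duplicates: " + duplicates);
        }
    }
}
```

Running this prints `Field names must be unique. Found duplicates: [symbol]`, matching the exception above: the duplicate comes from symbol being in both the resolved primary keys and the resolved partition keys.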
Anything else?
No response
Are you willing to submit a PR?
I'm willing to submit a PR!