[Bug] Non-partitioned table: Failed to write deletion vectors #3891
Comments
Upgraded to paimon-flink-1.20-0.9-20240806.002229-1.jar; same exception.
I hit the same exception. Looks like it only happened with non-partitioned tables.
My table is a non-partitioned PK table.
How do you create your Paimon table? Could you provide its DDL? What streaming job are you running? Could you provide its SQL?
I made a simplified example.
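A minimal sketch of the kind of table involved, with hypothetical table and column names (the actual DDL is in the attached example): a non-partitioned primary-key table with deletion vectors enabled.

-- Hypothetical minimal repro table: non-partitioned, primary-key,
-- with deletion vectors enabled (all names are placeholders).
CREATE TABLE t (
    id INT,
    v  STRING,
    PRIMARY KEY (id) NOT ENFORCED
) WITH (
    'deletion-vectors.enabled' = 'true'
);

-- Writing the same key twice exercises the deletion-vector write path.
INSERT INTO t VALUES (1, 'a'), (2, 'b');
INSERT INTO t VALUES (1, 'a2');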
I did some more testing and found that both streaming and batch jobs trigger the exception.
It looks like this is a problem only on S3. Local disk, HDFS, and OSS are all OK.
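Since the failure appears S3-specific, a sketch of the kind of catalog setup used to cross-check filesystems; the bucket, endpoint, and credentials are placeholders, and the 's3.*' keys follow Paimon's documented S3 options:

-- Hypothetical Flink SQL catalog putting the Paimon warehouse on MinIO/S3.
CREATE CATALOG paimon_s3 WITH (
    'type'                 = 'paimon',
    'warehouse'            = 's3://my-bucket/warehouse',  -- placeholder bucket
    's3.endpoint'          = 'http://minio-host:9000',    -- placeholder MinIO endpoint
    's3.access-key'        = 'minio-access-key',          -- placeholder credentials
    's3.secret-key'        = 'minio-secret-key',
    's3.path.style.access' = 'true'                       -- typically needed for MinIO
);
USE CATALOG paimon_s3;

-- Pointing 'warehouse' at file://..., hdfs://..., or oss://... instead is how
-- the observation "only S3 fails" can be cross-checked.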
Search before asking
Paimon version
paimon-flink-1.19-0.9-20240803.002144-49.jar
paimon-flink-1.20-0.9-20240806.002229-1.jar
Compute Engine
java version "22.0.2" 2024-07-16
Java(TM) SE Runtime Environment Oracle GraalVM 22.0.2+9.1 (build 22.0.2+9-jvmci-b01)
Java HotSpot(TM) 64-Bit Server VM Oracle GraalVM 22.0.2+9.1 (build 22.0.2+9-jvmci-b01, mixed mode, sharing)
flink 1.19.1
flink 1.20.0
minio 2024-08-03T04:33:23Z
Minimal reproduce step
Streaming jobs that run correctly with paimon-flink-1.19-0.8.2.jar fail with the following error when executed on the 0.9 snapshot:
index-9326e638-d025-4e6b-a647-ef39ecee0a45-8.zip
Here is the full Flink TaskExecutor log file:
flink-dict-taskexecutor-0-cmtt-dict-17.log.zip
What doesn't meet your expectations?
Same behavior as Paimon 0.8.2.
Anything else?
No response
Are you willing to submit a PR?