zhuangchong changed the title on Mar 20, 2024 to: "[Bug] Paimon CDC whole-database sync: all job names are duplicated, so Prometheus cannot scrape the monitoring job names"
Search before asking
Paimon version
0.7
Compute Engine
Java API
Minimal reproduce step
Paimon whole-database sync is running in production, but Prometheus cannot scrape the jobs by name, because every Paimon whole-database sync job is submitted with the same job name. The job was submitted as follows:
```shell
./bin/flink run -d -t yarn-per-job \
  -Djobmanager.memory.process.size=2096m \
  -Dtaskmanager.memory.process.size=2096m \
  -Dyarn.application.queue=spark \
  -Dyarn.application.node-label=ingest \
  -Dyarn.application.name=obei_contract \
  /opt/module/flink-1.17.0/lib/paimon-flink-action-0.6.0-incubating.jar \
  mysql-sync-database \
  --warehouse hdfs://mycluster/user/hive/warehouse/ods.db \
  --database ods \
  --mysql-conf hostname=10.xx.xx.x.x.xx \
  --mysql-conf username=xxxxx \
  --mysql-conf password='xxxxxxxx' \
  --mysql-conf database-name=obei-contract \
  --including-tables 't_dcm_contract_detail|t_dcm_contract_head' \
  --catalog_conf metastore=hive \
  --catalog_conf uri=thrift://10.80.29.39:9083 \
  --table-conf bucket=2 \
  --table_prefix t_ods_ \
  --table-conf changelog-producer=input \
  --table-conf sink.parallelism=2
```
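A possible mitigation to sketch (not from this report, and untested against the action jar): Flink exposes a generic `pipeline.name` configuration option that sets the submitted job's name. Whether it takes effect here depends on whether the sync action overrides it with its own hard-coded name when calling `execute()`; the `paimon-sync-obei_contract` value below is a hypothetical example, not from the original command.

```shell
# Hedged sketch: try giving each whole-database sync submission a distinct
# job name via Flink's generic pipeline.name option. The value shown is
# hypothetical; the action jar may still override it internally.
./bin/flink run -d -t yarn-per-job \
  -Dpipeline.name=paimon-sync-obei_contract \
  -Dyarn.application.name=obei_contract \
  /opt/module/flink-1.17.0/lib/paimon-flink-action-0.6.0-incubating.jar \
  mysql-sync-database \
  --warehouse hdfs://mycluster/user/hive/warehouse/ods.db \
  --database ods
```

If the action does honor `pipeline.name`, using a per-database value would give Prometheus distinct job names to scrape.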
What doesn't meet your expectations?
All whole-database sync jobs end up with the same job name, which does not meet expectations: Prometheus cannot distinguish them.
Anything else?
No response
Are you willing to submit a PR?