[Bug] I am using the Hive engine, and I cannot view partitions. #2233

Closed
labixiaoxiaopang opened this issue Nov 1, 2023 · 2 comments
Labels
bug Something isn't working

Comments

@labixiaoxiaopang

Search before asking

  • I searched in the issues and found nothing similar.

Paimon version

0.6-SNAPSHOT

Compute Engine

Hive 2.3.5
Flink 1.17.1

Minimal reproduce step

  1. I used the paimon-flink-action-0.6-SNAPSHOT.jar mysql-sync-database action; it automatically created a table named paimon_ods.ods_passenger_paimon.
  2. I used Flink to consume this table and output it to an append-only table.
  3. I executed these SQL statements.
CREATE TEMPORARY CATALOG my_hive WITH (
    'type' = 'paimon',
    'metastore' = 'hive',
    'uri' = 'thrift://localhost:9083',
    'warehouse' = 'hdfs://localhost:9100/hive/'
);
CREATE TABLE if not exists realtime_paimon.ods_passenger_passenger
(
   database_name STRING,
   table_name    STRING,
   id            bigint NOT NULL,
   user_name     STRING,
   area_code     STRING,
   age           int,
   nation        STRING,
   sex           int,
   create_time   TIMESTAMP(3),
   update_time   TIMESTAMP(3),
   c_dt          STRING
)
   PARTITIONED BY (`c_dt`)
WITH ( 'auto-create' = 'true',
   'bucket' = '-1' ,
   'snapshot.num-retained.max' = '10',
   'snapshot.num-retained.min' = '5',
   'manifest.merge-min-count' = '5' ,
   'path' = 'hdfs://localhost:9100/hive/ods.db/ods_passenger_passenger');
select database_name,
      table_name,
      id,
      user_name,
      area_code,
      age,
      nation,
      sex,
      create_time,
      update_time,
      DATE_FORMAT(update_time, 'yyyy-MM-dd') as c_dt
from paimon_ods.ods_passenger_paimon /*+ OPTIONS('continuous.discovery-interval' = '5' ,'scan.mode'='latest-full') */
  4. I filtered out the retracted data and created a temporary view.
// Keep only INSERT and UPDATE_AFTER rows (drop UPDATE_BEFORE/DELETE retractions)
// and relabel the retained rows as INSERT so they fit the append-only sink table.
DataStream<Row> resultStream =
        tableEnv.toChangelogStream(resultTable)
                .filter(
                        new FilterFunction<Row>() {
                            @Override
                            public boolean filter(Row row) throws Exception {
                                RowKind kind = row.getKind();
                                row.setKind(RowKind.INSERT);
                                return kind == RowKind.INSERT
                                        || kind == RowKind.UPDATE_AFTER;
                            }
                        });
tableEnv.createTemporaryView(
        "ods_passenger_passenger_view", tableEnv.fromDataStream(resultStream));
  5. I executed the SQL statement for writing the data.
insert into realtime_paimon.ods_passenger_passenger(database_name, table_name, id, user_name, area_code, nation, sex,
                                                   create_time, update_time, c_dt)
select database_name,
       table_name,
       id,
       user_name,
       area_code,
       nation,
       sex,
       create_time,
       update_time,
       c_dt
from ods_passenger_passenger_view
  6. In Hive, I executed this SQL statement.
show partitions realtime_paimon.ods_passenger_passenger;
  7. I placed the paimon-hive-connector-2.3-0.6-20231101.045607-49.jar in the auxlib folder of the Hive directory and restarted Hive.
  8. It resulted in an error: Table realtime_paimon.ods_passenger_passenger is not a partitioned table
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Table realtime_paimon.ods_passenger_passenger is not a partitioned table

What doesn't meet your expectations?

SHOW PARTITIONS failed with the error above; I expected it to list the partitions of realtime_paimon.ods_passenger_passenger.

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!
labixiaoxiaopang added the bug label on Nov 1, 2023
@zhuangchong (Contributor)

When creating the table in Flink SQL, you need to specify 'metastore.partitioned-table' = 'true'.

See https://paimon.apache.org/docs/master/maintenance/configurations/

[Screenshot of the configuration reference showing the metastore.partitioned-table option]
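
For reference, a minimal sketch of the original DDL with that option added (column list and other options copied from the issue; it assumes the table is dropped and recreated so that the Hive Metastore registers it as a partitioned table):

CREATE TABLE IF NOT EXISTS realtime_paimon.ods_passenger_passenger
(
    database_name STRING,
    table_name    STRING,
    id            BIGINT NOT NULL,
    user_name     STRING,
    area_code     STRING,
    age           INT,
    nation        STRING,
    sex           INT,
    create_time   TIMESTAMP(3),
    update_time   TIMESTAMP(3),
    c_dt          STRING
)
PARTITIONED BY (`c_dt`)
WITH (
    -- register the table as a partitioned table in the Hive Metastore
    'metastore.partitioned-table' = 'true',
    'auto-create' = 'true',
    'bucket' = '-1',
    'snapshot.num-retained.max' = '10',
    'snapshot.num-retained.min' = '5',
    'manifest.merge-min-count' = '5',
    'path' = 'hdfs://localhost:9100/hive/ods.db/ods_passenger_passenger'
);

After recreating the table and rerunning the insert job, SHOW PARTITIONS realtime_paimon.ods_passenger_passenger in Hive should list the c_dt partitions.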

@labixiaoxiaopang (Author)

Thank you. I solved the problem.
