Custom output directory naming (without topic in it) #544
I think my case is the same as #515, and the customisation I am looking for will break recovery.
In particular, I wonder about this spot: https://github.com/confluentinc/kafka-connect-hdfs/blob/master/src/main/java/io/confluent/connect/hdfs/TopicPartitionWriter.java#L602
Would it be possible to get such a feature? Would it be accepted if it were contributed? A bit more about my use case: I have a master Kafka cluster receiving mirrored traffic from N remote clusters, and each mirror writes to a topic with a prefixed name. I want to dump the N topics under a single directory rather than have N HDFS directories and make each HDFS consumer deal with that.
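For illustration, here is a minimal sketch of what such a partitioner could look like, assuming the `DefaultPartitioner` base class from this repository and a dot-separated mirror prefix convention; the class name `MirrorPrefixPartitioner` is hypothetical, and this does not by itself answer whether offset recovery (which resolves directories by topic name) would still work.

```java
package example.partitioner;

import io.confluent.connect.hdfs.partitioner.DefaultPartitioner;

/**
 * Hypothetical sketch: merges mirrored topics such as "clusterA.events" and
 * "clusterB.events" into a single "events" directory by stripping everything
 * up to and including the first '.' before building the partitioned path.
 */
public class MirrorPrefixPartitioner extends DefaultPartitioner {

  @Override
  public String generatePartitionedPath(String topic, String encodedPartition) {
    int dot = topic.indexOf('.');
    String merged = dot >= 0 ? topic.substring(dot + 1) : topic;
    return merged + "/" + encodedPartition;
  }
}
```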
I have created an MR which should address this feature request. Can someone please review it and let me know whether it is acceptable?
Hello, is it possible / safe to provide a custom directory naming convention where one does not want to use the actual topic name?
I see it is possible via a custom partitioner, where the topic is passed in and can potentially be ignored:
https://github.com/confluentinc/kafka-connect-hdfs/blob/master/src/main/java/io/confluent/connect/hdfs/partitioner/Partitioner.java#L38
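As a concrete illustration of that point, a partitioner is free to ignore the `topic` argument when building the path. The following is only a sketch assuming the `io.confluent.connect.hdfs.partitioner.Partitioner` interface linked above (configure / encodePartition / generatePartitionedPath / partitionFields); `TopiclessPartitioner` is a made-up name, not something provided by the connector.

```java
package example.partitioner;

import io.confluent.connect.hdfs.partitioner.Partitioner;
import org.apache.hadoop.hive.metastore.api.FieldSchema;
import org.apache.kafka.connect.sink.SinkRecord;

import java.util.Collections;
import java.util.List;
import java.util.Map;

/** Hypothetical sketch: builds output paths without the topic name in them. */
public class TopiclessPartitioner implements Partitioner {

  @Override
  public void configure(Map<String, Object> config) {
    // nothing to configure in this sketch
  }

  @Override
  public String encodePartition(SinkRecord sinkRecord) {
    // one directory per Kafka partition, like the default partitioner
    return "partition=" + sinkRecord.kafkaPartition();
  }

  @Override
  public String generatePartitionedPath(String topic, String encodedPartition) {
    // deliberately ignore the topic argument
    return encodedPartition;
  }

  @Override
  public List<FieldSchema> partitionFields() {
    return Collections.emptyList();
  }
}
```

Such a class could be plugged in via the connector's `partitioner.class` setting, but the open question, described next, is whether other code paths such as recovery still assume the topic name is part of the directory.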
However, it seems the partitioner is not used consistently; for example, offset recovery appears to look specifically for the directory named after the topic:
https://github.com/confluentinc/kafka-connect-hdfs/blob/master/src/main/java/io/confluent/connect/hdfs/TopicPartitionWriter.java#L602
The question is whether I can safely (somehow) avoid using the topic name (and possibly also have a custom Hive table name), or whether this is intentionally not possible for some reason.