Thanks for this discussion, @cstmgl - we have a couple of definite takeaways here, especially making the … We can also consider supporting files for the secrets. However, I do wonder how that is more secure than secrets in ENV vars. Wouldn't a remote execution attack also be able to read any files you might have mounted?
Hi all,
We are starting to experiment with the Apicurio Schema Registry and I have a question about our setup and some challenges we are facing. Let me start by explaining our setup.
We run our projects in Kubernetes (OpenShift) and we are using Strimzi as the distribution for Kafka (also running on Kubernetes).
We plan to deploy the Apicurio Registry as part of our Kubernetes setup as well, using Kafka for the persistence of the schemas via the kafkasql storage. We are using this image:
apicurio/apicurio-registry-kafkasql:2.4.1.Final
Just to give more context: we delegate to Strimzi the creation of the Kafka users that will be used for authentication against Kafka.
For those not familiar (and I'm not that familiar either), Strimzi generates a secret with the details of the user and its keystores and certificates, which we can then reference.
Anyway, when I'm doing a setup and defining the Kafka authentication, I do something like this in my deployment YAML:
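The original snippet isn't shown here, but a minimal sketch of this pattern (referencing Strimzi-generated secrets directly as environment variables) might look like the following. The secret name, keys, and env var names are illustrative placeholders, not the actual values from the setup:

```yaml
# Sketch only: env vars populated from a Strimzi KafkaUser secret.
# Secret/key/env names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apicurio-registry
spec:
  template:
    spec:
      containers:
        - name: registry
          image: apicurio/apicurio-registry-kafkasql:2.4.1.Final
          env:
            - name: KAFKA_SSL_KEYSTORE_PASSWORD   # placeholder env var name
              valueFrom:
                secretKeyRef:
                  name: my-kafka-user             # Strimzi-generated secret (placeholder)
                  key: user.password
```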
This setup works fine: the volumes and secrets are generated by Strimzi and everything finds its way of working.
Now, my problem is that my security team frowns upon referencing secrets directly in environment variables, because if for any reason there were some sort of remote code execution, the values could leak by exposing the environment settings.
The recommendation we have, and how we normally fix this, is to reference the mounted secret files instead of the environment settings.
We do something like this with other applications and it works fine:
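As an illustration of that file-based pattern: mount the secret as a volume and point the application's configuration at the file paths rather than at secret values. All names below are placeholders:

```yaml
# Sketch only: mount the Strimzi user secret and reference files, not env values.
spec:
  template:
    spec:
      containers:
        - name: some-app
          volumeMounts:
            - name: kafka-user-creds
              mountPath: /etc/kafka/user
              readOnly: true
          env:
            - name: SSL_KEYSTORE_LOCATION        # the app reads the file at this path
              value: /etc/kafka/user/user.p12
      volumes:
        - name: kafka-user-creds
          secret:
            secretName: my-kafka-user            # placeholder secret name
```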
But somehow it does not work here. To be totally honest, I don't fully understand how this way of reading values from mounted volumes works in the other applications, but it does...
Anyway, I was digging into the code and we realized that most likely we would need to define an application.properties file inside /deployments/config and we would be good to go.
We did manage to do that and now the setup is working.
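A rough sketch of what such a file under /deployments/config might contain. The property names here are illustrative guesses, not verified against the registry; check the actual names consumed by KafkaSqlFactory:

```properties
# Illustrative sketch only - verify exact property names in KafkaSqlFactory.
registry.kafka.common.security.protocol=SSL
registry.kafka.common.ssl.keystore.type=PKCS12
registry.kafka.common.ssl.keystore.location=/etc/kafka/user/user.p12
# placeholder value
registry.kafka.common.ssl.keystore.password=changeit
```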
But I'm a bit confused about some things.
If I look at the code
https://github.com/Apicurio/apicurio-registry/blob/2.4.1.Final/storage/kafkasql/src/main/java/io/apicurio/registry/storage/impl/kafkasql/KafkaSqlFactory.java#L256
I see there that we need to either define all the keystore properties or nothing gets consumed. I was expecting at least the key password to be optional, mainly because with a PKCS12 keystore I cannot define a password per key.
Anyway, I realized that if we define both values with the same content it seems to work, but I found all this a bit complicated, so I'm thinking I'm doing something totally wrong.
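The all-or-nothing behaviour described above can be sketched like this. This is a paraphrase for illustration only, not the actual Apicurio source; the class and method names are hypothetical:

```java
import java.util.Optional;
import java.util.Properties;

// Hypothetical sketch of the "all keystore properties or nothing" pattern
// described above. Not the actual Apicurio KafkaSqlFactory code.
public class KeystoreConfigSketch {

    public static Properties configure(Optional<String> location,
                                       Optional<String> storePassword,
                                       Optional<String> keyPassword) {
        Properties props = new Properties();
        // Only when every keystore property is present is any of them
        // applied to the Kafka client configuration.
        if (location.isPresent() && storePassword.isPresent() && keyPassword.isPresent()) {
            props.put("ssl.keystore.location", location.get());
            props.put("ssl.keystore.password", storePassword.get());
            props.put("ssl.key.password", keyPassword.get());
        }
        // If the key password were optional, a PKCS12 store could fall back
        // to the store password here instead of requiring both to be set.
        return props;
    }
}
```

This matches the symptom reported: omitting the key password yields an empty configuration, while setting both passwords to the same value makes everything get applied.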
I'm curious how people normally define Kafka authentication when using kafkasql for persistence; if anyone is in a similar setup with Strimzi, it would be great to hear about it.
Thanks for any feedback/details.