This repo is a fork of the original simple Kafka module SnuK87/keycloak-kafka, extended with schema-aware producers.
The purpose of this project is to provide a Keycloak module that produces Keycloak events to Kafka in one of the following formats:
- Simple JSON
- AVRO
Support for JSON Schema and Protobuf is in progress.
Tested with:
- Kafka versions: 2.12-2.1.x, 2.12-2.4.x, 2.12-2.5.x, 2.13-2.8, 2.13-3.3.x
- Keycloak versions: 19.0.x, 21.0.x
- Java version: 17
You can simply use Maven to build the jar file. Thanks to the assembly plugin, the build creates a fat jar that includes all dependencies, which makes deployment easy. Run the following command to build the jar:
mvn clean package
First you need to build or download the keycloak-kafka module. To install the module on your Keycloak server, you have to configure it and deploy it. If you deploy the module without configuration, your Keycloak server will fail to start with a NullPointerException.
If you want to install the module manually as described in the initial version, you can follow this guide.
The following properties can be set via environment variables (e.g. `${KAFKA_TOPIC}`) or as parameters when starting Keycloak (e.g. `--spi-events-listener-kafka-topic-events`).
- `topicEvents` (env `KAFKA_TOPIC`): The name of the Kafka topic that the events will be produced to.
- `clientId` (env `KAFKA_CLIENT_ID`): The `client.id` used to identify the client in Kafka.
- `bootstrapServers` (env `KAFKA_BOOTSTRAP_SERVERS`): A comma-separated list of available brokers.
- `events` (env `KAFKA_EVENTS`): The events that will be sent to Kafka.
- `topicAdminEvents` (env `KAFKA_ADMIN_TOPIC`): (Optional) The name of the Kafka topic that the admin events will be produced to. No admin events will be produced when this property isn't set.
- `valueSerializerClass` (env `KAFKA_VALUE_SERIALIZER_CLASS`): (Optional) The serializer class to be used; the default is simple JSON serialization. Set it to Confluent's `io.confluent.kafka.serializers.KafkaAvroSerializer` to serialize events in AVRO format.
- `schemaRegistryUrl` (env `SCHEMA_REGISTRY_URL`): (Only required with Confluent producers) The Schema Registry server URL where the schemas are stored.
- `autoRegisterSchemas` (env `KAFKA_AUTO_REGISTER_SCHEMAS`): (Optional) Controls whether schemas are registered automatically; the default value is `false`. If auto-registration is turned off, an exception will be thrown when the schema cannot be found in the registry.
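For illustration, a configuration via environment variables might look like the sketch below. The broker addresses, topic names, event list, and Schema Registry URL are placeholder assumptions; the two AVRO-related variables are only needed when the Confluent serializer is used.
# hypothetical example values - adjust to your environment
export KAFKA_TOPIC=keycloak-events
export KAFKA_CLIENT_ID=keycloak
export KAFKA_BOOTSTRAP_SERVERS=kafka-1:9092,kafka-2:9092
export KAFKA_EVENTS=LOGIN,REGISTER,LOGOUT
export KAFKA_ADMIN_TOPIC=keycloak-admin-events
# only needed for AVRO with a Confluent Schema Registry
export KAFKA_VALUE_SERIALIZER_CLASS=io.confluent.kafka.serializers.KafkaAvroSerializer
export SCHEMA_REGISTRY_URL=http://schema-registry:8081
./kc.sh start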
A list of available events can be found here.
It's also possible to configure the Kafka client with environment variables or by adding parameters to the Keycloak start command. This makes it possible to connect this module to a Kafka broker that requires SSL/TLS connections. For example, to change how long the producer will block the thread to 10 seconds, just pass the following parameter to the start command.
./kc.sh start --spi-events-listener-kafka-max-block-ms 10000
Or set the following environment variable.
KAFKA_MAX_BLOCK_MS=10000
A full list of available configurations can be found in the official Kafka docs.
Because some environments have difficulties with empty string variables, a workaround for SSL_ENDPOINT_IDENTIFICATION_ALGORITHM was implemented. To disable hostname verification, set the value to `disabled`. The module will translate this value to an empty string when creating the Kafka client.
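For example, assuming the variable follows the same `KAFKA_<PROPERTY>` naming pattern as `KAFKA_MAX_BLOCK_MS` above:
# assumed variable name, derived from the naming pattern shown above
KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM=disabled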
As mentioned above, the Kafka client can be configured by passing parameters to the start command. To make Kafka open an SSL/TLS-secured connection, you can add the following parameters:
./kc.sh start \
--spi-events-listener-kafka-security-protocol SSL \
--spi-events-listener-kafka-ssl-truststore-location kafka.client.truststore.jks \
--spi-events-listener-kafka-ssl-truststore-password test1234
Copy the `keycloak-kafka-<version>-jar-with-dependencies.jar` into the `$KEYCLOAK_HOME/providers` folder. Keycloak will automatically install the module with all its dependencies on startup.
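For example, a minimal deployment could look like this sketch, assuming the jar was built with the Maven command above and `$KEYCLOAK_HOME` points to your Keycloak installation:
# copy the fat jar into the providers folder and start Keycloak
cp target/keycloak-kafka-<version>-jar-with-dependencies.jar $KEYCLOAK_HOME/providers/
$KEYCLOAK_HOME/bin/kc.sh start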
- Open the administration console
- Choose your realm
- Go to Events
- Open the `Config` tab and add `kafka` to the Event Listeners
The simplest way to enable the Kafka module in a Docker container is to create a custom Docker image from the Keycloak base image. A simple example can be found in the Dockerfile.
When you build this image on your local machine with `docker build . -t keycloak-kafka`, you can test everything by running the docker-compose file on your local machine.
This just provides a simple example to show how it works. Please consider reading this documentation and creating your own Dockerfile.
docker-compose build && docker-compose up
The following snippet shows a minimal Spring Boot Kafka client that consumes Keycloak events. Additional properties can be added to the `KeycloakEvent` class.
// Imports required to make the snippet compile (Spring Boot, Spring Kafka and Lombok).
import lombok.Data;
import lombok.extern.log4j.Log4j2;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.converter.StringJsonMessageConverter;

@SpringBootApplication
@Log4j2
public class KafkaConsumerApplication {

    public static void main(String[] args) {
        SpringApplication.run(KafkaConsumerApplication.class, args);
    }

    // Consumes user events from the topic configured via topicEvents / KAFKA_TOPIC.
    @KafkaListener(topics = "keycloak-events", groupId = "event-consumer")
    public void handleKeycloakEvent(KeycloakEvent event) {
        log.info("Consumed event: " + event);
    }

    // Consumes admin events from the topic configured via topicAdminEvents / KAFKA_ADMIN_TOPIC.
    @KafkaListener(topics = "keycloak-admin-events", groupId = "event-consumer")
    public void handleKeycloakAdminEvent(KeycloakAdminEvent event) {
        log.info("Consumed admin event: " + event);
    }

    // Converts the JSON payload produced by the module into the typed event classes below.
    @Bean
    public StringJsonMessageConverter jsonConverter() {
        return new StringJsonMessageConverter();
    }
}

@Data
class KeycloakEvent {
    private String userId;
    private String type;
}

@Data
class KeycloakAdminEvent {
    private String realmId;
    private String operationType;
}
Any kind of contribution is welcome.