This appender lets your application publish its logs directly to Apache Kafka.
Due to a breaking change in the Logback Encoder API, you need at least logback version 1.2.
Add `logback-kafka-appender`, `logback-classic` and `jackson-databind` as library dependencies to your project:
```xml
<!-- [maven pom.xml] -->
<dependency>
  <groupId>com.github.rahulsinghai</groupId>
  <artifactId>logback-kafka-appender</artifactId>
  <version>0.2.2</version>
  <scope>runtime</scope>
</dependency>
<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <version>1.2.12</version>
  <scope>runtime</scope>
</dependency>
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
  <version>2.10.5.1</version>
  <scope>runtime</scope>
</dependency>
```
```scala
// [build.sbt]
libraryDependencies += "com.github.rahulsinghai" % "logback-kafka-appender" % "0.2.2"
libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.2.12"
libraryDependencies += "com.fasterxml.jackson.core" % "jackson-databind" % "2.10.5.1"
```
See the complete configuration example below for how to configure your `logback.xml`.
logback-kafka-appender depends on `org.apache.kafka:kafka-clients:1.0.0`. It can append logs to a Kafka broker with version 0.9.0.0 or higher. The kafka-clients dependency is not shaded and may be upgraded to a higher, API-compatible version through a dependency override.
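For example, a Maven dependency override could pin a newer client (the version shown here is only an illustration; pick any API-compatible release):

```xml
<dependencyManagement>
  <dependencies>
    <!-- Override the transitive kafka-clients version pulled in by logback-kafka-appender -->
    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka-clients</artifactId>
      <version>2.8.2</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```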
Direct logging over the network is not a trivial thing: it may be much less reliable than the local file system, and it has a much bigger impact on application performance when the transport has hiccups.

You need to make an essential decision: is it more important to deliver all logs to the remote Kafka, or is it more important to keep the application running smoothly? Either choice lets you tune this appender accordingly.
| Strategy | Description |
|---|---|
| `AsynchronousDeliveryStrategy` | Dispatches each log message to the Kafka producer. If the delivery fails for some reason, the message is dispatched to the fallback appenders. However, this strategy *does* block if the producer's send buffer is full (which can happen if the connection to the broker gets lost). To avoid even this blocking, set the producerConfig `block.on.buffer.full=false` (newer kafka-clients versions replaced this setting with `max.block.ms`); all log messages that cannot be delivered fast enough will then go immediately to the fallback appenders. |
| `BlockingDeliveryStrategy` | Blocks each calling thread until the log message is actually delivered. This strategy is normally discouraged because it has a huge negative impact on throughput. **Warning: this strategy should not be used together with the producerConfig `linger.ms`.** |
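The strategy is chosen in the appender configuration. A minimal sketch, assuming the `<deliveryStrategy>` element and the `delivery` package follow the project's layout (as suggested by the `DeliveryStrategy` interface mentioned below):

```xml
<appender name="kafkaAppender" class="com.github.rahulsinghai.logback.kafka.KafkaAppender">
  <!-- Assumed element name; switch the class to BlockingDeliveryStrategy for blocking delivery -->
  <deliveryStrategy class="com.github.rahulsinghai.logback.kafka.delivery.AsynchronousDeliveryStrategy" />
  <!-- ... encoder, topic and producerConfig as shown further below ... -->
</appender>
```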
The AsynchronousDeliveryStrategy does not prevent you from being blocked by the Kafka metadata exchange. That means: if no broker is reachable when the logging context starts, or all brokers become unreachable for a longer period (> `metadata.max.age.ms`), your appender will eventually block. This behavior is undesirable in general and can be mitigated with kafka-clients 0.9 (see #16). Until then, you can wrap the KafkaAppender with logback's own AsyncAppender or, for more control, with the LoggingEventAsyncDisruptorAppender from the Logstash Logback Encoder.
An example configuration could look like this:
```xml
<configuration>

  <!-- *** APPENDERS *** -->

  <!-- Also acts as the fallback appender for the KafkaAppender -->
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- This is the kafkaAppender -->
  <appender name="kafkaAppender" class="com.github.rahulsinghai.logback.kafka.KafkaAppender">
    <!-- Kafka Appender configuration -->
    <!-- The appender-ref below adds a fallback appender for when Kafka is not available. -->
    <appender-ref ref="STDOUT" />
  </appender>

  <!-- Logs asynchronously. It acts solely as an event dispatcher and must therefore reference another appender in order to do anything useful. -->
  <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <!-- discardingThreshold is measured in events: when fewer than 20 slots remain in the queue, events of level TRACE, DEBUG and INFO are dropped, keeping only WARN and ERROR -->
    <discardingThreshold>20</discardingThreshold>
    <queueSize>256</queueSize>
    <!-- If neverBlock is set to true, the async appender discards messages when its internal queue is full -->
    <neverBlock>true</neverBlock>
    <!-- Mandatory: AsyncAppender requires another appender-ref to which logs will be forwarded asynchronously -->
    <appender-ref ref="kafkaAppender"/>
  </appender>

  <!-- *** LOGGERS *** -->

  <!-- All logs generated by the example Main class go to the KafkaAppender, with additivity=false (logs are not forwarded to the parent/root logger) -->
  <!-- If these logs should also appear on the console (e.g. in a pod), set additivity to true -->
  <!-- Individual classes can be given their own level (debug here) -->
  <logger name="io.github.rahulsinghai.logback.kafka.appender.example.Main" level="debug" additivity="false">
    <appender-ref ref="ASYNC" />
  </logger>

  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>

</configuration>
```
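For illustration, a class matching the `<logger>` name above could emit logs through this setup. This is a sketch using the slf4j API (which logback-classic implements); the class itself is made up:

```java
package io.github.rahulsinghai.logback.kafka.appender.example;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Main {

    // The logger name equals the fully qualified class name and so matches the <logger> element above
    private static final Logger log = LoggerFactory.getLogger(Main.class);

    public static void main(String[] args) {
        // Passes the logger's debug threshold and travels ASYNC -> kafkaAppender;
        // additivity="false" keeps these events away from the root logger's STDOUT appender.
        log.debug("Hello Kafka!");
        log.info("This message takes the same route.");
    }
}
```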
You may also roll your own delivery strategy; just extend `com.github.rahulsinghai.logback.kafka.delivery.DeliveryStrategy`.
If, for whatever reason, the kafka-producer decides that it cannot publish a log message, the message can still be logged to a fallback appender (a `ConsoleAppender` on STDOUT or STDERR would be a reasonable choice for that). Just add your fallback appender(s) as logback `appender-ref`s to the KafkaAppender section in your `logback.xml`. Every message that cannot be delivered to Kafka is written to all defined `appender-ref`s. Example: `<appender-ref ref="STDOUT" />`, where STDOUT is a defined appender.
Note that the AsynchronousDeliveryStrategy reuses the Kafka producer's I/O thread to write the message to the fallback appenders. Thus all fallback appenders should be reasonably fast so that they do not slow down or break the Kafka producer.
This appender uses the Kafka producer introduced in kafka-0.8.2 with the producer's default configuration. You may override any known Kafka producer config with a `<producerConfig>Name=Value</producerConfig>` block (note that the `bootstrap.servers` config is mandatory). This allows a lot of fine-tuning potential (e.g. with `batch.size`, `compression.type` and `linger.ms`).
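Put together, a tuned appender could look like this sketch (broker addresses, the topic name and the `<topic>` element are illustrative assumptions):

```xml
<appender name="kafkaAppender" class="com.github.rahulsinghai.logback.kafka.KafkaAppender">
  <encoder>
    <pattern>%msg%n</pattern>
  </encoder>
  <topic>application-logs</topic>
  <!-- mandatory -->
  <producerConfig>bootstrap.servers=broker1:9092,broker2:9092</producerConfig>
  <!-- optional fine tuning -->
  <producerConfig>batch.size=16384</producerConfig>
  <producerConfig>compression.type=snappy</producerConfig>
  <producerConfig>linger.ms=100</producerConfig>
</appender>
```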
This module supports any `ch.qos.logback.core.encoder.Encoder`. This allows you to use any encoder that is capable of encoding an `ILoggingEvent` or `IAccessEvent`, like the well-known logback `PatternLayoutEncoder` or, for example, the logstash-logback-encoder's `LogstashEncoder`.
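For example, swapping in the logstash-logback-encoder to produce JSON messages might look like this (a sketch; check that encoder's own documentation for its options):

```xml
<appender name="kafkaAppender" class="com.github.rahulsinghai.logback.kafka.KafkaAppender">
  <!-- Writes each event as a JSON document instead of a pattern-formatted string -->
  <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
  <!-- ... topic and producerConfig as above ... -->
</appender>
```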
If you want to write something other than strings to your Kafka logging topic, you may roll your own encoding mechanism. A use case would be smaller message sizes and/or better serialization/deserialization performance on the producing or consuming side. Useful formats could be BSON, Avro or others.
To roll your own implementation, please refer to the logback documentation. Note that logback-kafka-appender will never call the `headerBytes()` or `footerBytes()` method.
Your encoder should be type-parameterized for any subtype of the type of event you want to support (typically `ILoggingEvent`), like in

```java
public class MyEncoder extends ch.qos.logback.core.encoder.EncoderBase<ILoggingEvent> { /* ... */ }
```

(Since logback 1.2, `Encoder` is an interface; extending `EncoderBase` is the usual way to implement it.)
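A fuller sketch of a custom encoder, assuming logback 1.2's `Encoder` contract (the class name and output format are made up for illustration):

```java
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.encoder.EncoderBase;

import java.nio.charset.StandardCharsets;

public class MyEncoder extends EncoderBase<ILoggingEvent> {

    @Override
    public byte[] headerBytes() {
        return null; // never called by logback-kafka-appender
    }

    @Override
    public byte[] encode(ILoggingEvent event) {
        // Encode only the formatted message; a real encoder might emit Avro or BSON here
        return (event.getFormattedMessage() + "\n").getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public byte[] footerBytes() {
        return null; // never called by logback-kafka-appender
    }
}
```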
Kafka's scalability and ordering guarantees rely heavily on the concept of partitions (see the Kafka documentation for details). For application logging this means that we need to decide how we want to distribute our log messages over the partitions of a Kafka topic. One implication of this decision is how messages are ordered when they are consumed by an arbitrary multi-partition consumer, since Kafka guarantees read order only within a single partition. Another implication is how evenly our log messages are distributed across all available partitions and, therefore, how well they are balanced between multiple brokers.

The order of log messages may or may not be important, depending on the intended consumer audience (e.g. a logstash indexer will reorder all messages by their timestamp anyway).
You can provide a fixed partition for the Kafka appender using the `partition` property, or let the producer use the message key to partition a message. Thus logback-kafka-appender supports the following keying strategies:
| Strategy | Description |
|---|---|
| `NoKeyKeyingStrategy` (default) | Does not generate a message key. Results in a round-robin distribution across partitions if no fixed partition is provided. |
| `HostNameKeyingStrategy` | Uses the HOSTNAME as message key. This is useful because it ensures that all log messages issued by this host will remain in the correct order for any consumer. This strategy can lead to an uneven log distribution for a small number of hosts (compared to the number of partitions). |
| `ContextNameKeyingStrategy` | Uses logback's CONTEXT_NAME as message key. This ensures that all log messages logged by the same logging context will remain in the correct order for any consumer. It can lead to an uneven log distribution for a small number of logging contexts (compared to the number of partitions). Works only for `ILoggingEvent`s. |
| `ThreadNameKeyingStrategy` | Uses the calling thread's name as message key. This ensures that all messages logged by the same thread will remain in the correct order for any consumer. It can lead to an uneven log distribution for a small number of thread names (compared to the number of partitions). Works only for `ILoggingEvent`s. |
| `LoggerNameKeyingStrategy` | Uses the logger name as message key. This ensures that all messages logged by the same logger will remain in the correct order for any consumer. It can lead to an uneven log distribution for a small number of distinct loggers (compared to the number of partitions). Works only for `ILoggingEvent`s. |
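A strategy is picked per appender in `logback.xml`; a minimal sketch, assuming the `<keyingStrategy>` element and the `keying` package shown in the custom example below:

```xml
<appender name="kafkaAppender" class="com.github.rahulsinghai.logback.kafka.KafkaAppender">
  <!-- Key all messages from this host identically so consumers see them in order -->
  <keyingStrategy class="com.github.rahulsinghai.logback.kafka.keying.HostNameKeyingStrategy" />
  <!-- ... encoder, topic and producerConfig as above ... -->
</appender>
```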
If none of the above keying strategies satisfies your requirements, you can easily implement your own custom `KeyingStrategy`:
```java
package foo;

import ch.qos.logback.classic.spi.ILoggingEvent;
import com.github.rahulsinghai.logback.kafka.keying.KeyingStrategy;

import java.nio.ByteBuffer;

/* This is a valid example but does not really make much sense */
public class LevelKeyingStrategy implements KeyingStrategy<ILoggingEvent> {

    @Override
    public byte[] createKey(ILoggingEvent e) {
        // Level is an object; toInt() yields its numeric value
        return ByteBuffer.allocate(4).putInt(e.getLevel().toInt()).array();
    }
}
```
As with most custom logback components, your custom keying strategy may also implement the `ch.qos.logback.core.spi.ContextAware` and `ch.qos.logback.core.spi.LifeCycle` interfaces.
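For instance, a strategy that needs the logging context could combine both. This sketch re-implements something like the built-in ContextNameKeyingStrategy to show the moving parts; only `KeyingStrategy#createKey` is taken from the example above, the rest is assumption:

```java
package foo;

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.spi.ContextAwareBase;
import ch.qos.logback.core.spi.LifeCycle;
import com.github.rahulsinghai.logback.kafka.keying.KeyingStrategy;

import java.nio.charset.StandardCharsets;

public class ContextNameBasedKeyingStrategy extends ContextAwareBase
        implements KeyingStrategy<ILoggingEvent>, LifeCycle {

    private byte[] key;
    private boolean started;

    @Override
    public void start() {
        // ContextAwareBase holds the context that logback injects before start()
        key = context.getName().getBytes(StandardCharsets.UTF_8);
        started = true;
    }

    @Override
    public void stop() {
        started = false;
    }

    @Override
    public boolean isStarted() {
        return started;
    }

    @Override
    public byte[] createKey(ILoggingEvent e) {
        // Same key for every event of this context, so each context maps to one partition
        return key;
    }
}
```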
A custom keying strategy may come in especially handy when you want to use Kafka's log compaction facility.
- Q: I want to log to different/multiple topics!
  A: No problem, create a separate appender for each topic, for example:
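(Topic names and the `<topic>` element are illustrative; encoders are omitted for brevity.)

```xml
<appender name="kafkaAppenderApp" class="com.github.rahulsinghai.logback.kafka.KafkaAppender">
  <topic>app-logs</topic>
  <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
</appender>

<appender name="kafkaAppenderAudit" class="com.github.rahulsinghai.logback.kafka.KafkaAppender">
  <topic>audit-logs</topic>
  <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
</appender>
```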
This project is licensed under the Apache License Version 2.0.