kafka-logger
The `kafka-logger` plugin works as a Kafka client driver for the ngx_lua nginx module. It provides the ability to send log data requests as JSON objects to external Kafka clusters.

This plugin can push log data in batches to your external Kafka topics. If the log data does not show up immediately, give it some time: the logs are sent automatically once the timer in the Batch Processor expires.
For more info on the Batch Processor in Apache APISIX, please refer to Batch-Processor.
Name | Type | Requirement | Default | Valid | Description |
---|---|---|---|---|---|
broker_list | object | required | | | List of Kafka brokers (the key is the broker host, the value is the port). |
kafka_topic | string | required | | | Target topic to push data to. |
key | string | required | | | Key for the message. |
timeout | integer | optional | 3 | [1,...] | Timeout for the upstream to send data. |
name | string | optional | "kafka logger" | | A unique identifier to identify the batch processor. |
meta_format | string | optional | "default" | enum: `default`, `origin` | `default`: collect the request information in the default JSON way. `origin`: collect the request information as the original HTTP request. See the examples of meta_format below. |
batch_max_size | integer | optional | 1000 | [1,...] | Max size of each batch. |
inactive_timeout | integer | optional | 5 | [1,...] | Maximum age in seconds when the buffer will be flushed if inactive. |
buffer_duration | integer | optional | 60 | [1,...] | Maximum age in seconds of the oldest entry in a batch before the batch must be processed. |
max_retry_count | integer | optional | 0 | [0,...] | Maximum number of retries before removing from the processing pipeline. |
retry_delay | integer | optional | 1 | [0,...] | Number of seconds the process execution should be delayed if the execution fails. |
include_req_body | boolean | optional | false | | Whether to include the request body. |
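As a quick reference, the following is a minimal configuration sketch that sets only the three required attributes from the table above; the broker address, topic name, and key are illustrative placeholders, not values mandated by the plugin.

```json
{
    "kafka-logger": {
        "broker_list": { "127.0.0.1": 9092 },
        "kafka_topic": "test2",
        "key": "key1"
    }
}
```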
Examples of meta_format:

- `default`:

      {"upstream":"127.0.0.1:1980","start_time":1602211788041,"client_ip":"127.0.0.1","service_id":"","route_id":"1","request":{"querystring":{"ab":"cd"},"size":90,"uri":"\/hello?ab=cd","url":"http:\/\/localhost:1984\/hello?ab=cd","headers":{"host":"localhost","content-length":"6","connection":"close"},"body":"abcdef","method":"GET"},"response":{"headers":{"content-type":"text\/plain","server":"APISIX\/1.5","connection":"close","transfer-encoding":"chunked"},"status":200,"size":153},"latency":99.000215530396}

- `origin`:

      GET /hello?ab=cd HTTP/1.1
      host: localhost
      content-length: 6
      connection: close

      abcdef
The message will be written to the buffer first. It will be sent to the Kafka server when the buffer exceeds `batch_max_size`, or when `buffer_duration` has elapsed and the buffer is flushed.

In case of success, `true` is returned. In case of errors, `nil` is returned along with a string describing the error (e.g. `buffer overflow`).
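To make the interplay of the batching attributes concrete, here is a configuration sketch with values chosen only for illustration: with the settings below, a batch is sent as soon as 100 entries accumulate, after 5 seconds without new entries, or once the oldest buffered entry is 60 seconds old, whichever comes first; a failed send is retried up to 2 times with a 1 second delay.

```json
{
    "kafka-logger": {
        "broker_list": { "127.0.0.1": 9092 },
        "kafka_topic": "test2",
        "key": "key1",
        "batch_max_size": 100,
        "inactive_timeout": 5,
        "buffer_duration": 60,
        "max_retry_count": 2,
        "retry_delay": 1
    }
}
```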
This plugin supports pushing to more than one broker at a time. Specify the brokers of the external Kafka servers as in the sample below to make use of this functionality.
{
    "127.0.0.1":9092,
    "127.0.0.2":9093
}
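As a sketch of where this broker list goes, the snippet below places it inside the `kafka-logger` plugin configuration; the topic and key are the same placeholder values used elsewhere in this document.

```json
{
    "kafka-logger": {
        "broker_list": {
            "127.0.0.1": 9092,
            "127.0.0.2": 9093
        },
        "kafka_topic": "test2",
        "key": "key1"
    }
}
```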
The following is an example of how to enable the `kafka-logger` for a specific route.
curl http://127.0.0.1:9080/apisix/admin/routes/5 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
"plugins": {
"kafka-logger": {
"broker_list" :
{
"127.0.0.1":9092
},
"kafka_topic" : "test2",
"key" : "key1",
"batch_max_size": 1,
"name": "kafka logger"
}
},
"upstream": {
"nodes": {
"127.0.0.1:1980": 1
},
"type": "roundrobin"
},
"uri": "/hello"
}'
* success:
$ curl -i http://127.0.0.1:9080/hello
HTTP/1.1 200 OK
...
hello, world
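To confirm that the log entries actually reach Kafka, you can consume the configured topic with Kafka's console consumer; the sketch below assumes a local Kafka broker at 127.0.0.1:9092 whose CLI tools are on the PATH, and uses the topic name from the route above.

```shell
# Read the topic the plugin writes to; each APISIX log entry
# should appear as a JSON object.
kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --topic test2 --from-beginning
```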
Remove the corresponding JSON configuration from the plugin configuration to disable the `kafka-logger`. APISIX plugins are hot-reloaded, so there is no need to restart APISIX.
$ curl http://127.0.0.1:9080/apisix/admin/routes/5 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
"methods": ["GET"],
"uri": "/hello",
"plugins": {},
"upstream": {
"type": "roundrobin",
"nodes": {
"127.0.0.1:1980": 1
}
}
}'