Elastic Butler was born at the Oi telecommunications company as an open-source alternative to the Elastic Stack Alerting tool.
With Butler you get notified whenever your data matches a pattern you define.
- Notify me by mail if there are more than 10 failed login attempts in the last 20 minutes
- Notify me if we sell more than 1000 iPhones in the last day
You are free to create your own recipes and notification types.
In addition to Elasticsearch, a MongoDB database is required to run Butler. MongoDB stores all your monitoring recipes.
If you already have MongoDB and Elasticsearch running, just set them up in the config/env.json file. If you don't have them yet, you can use our test sandbox.
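A minimal sketch of what config/env.json might contain is shown below. dateOnIndex is an option referenced later in this README, but the other key names are assumptions, so check the file shipped with the project for the exact schema:
{
  "mongo": {
    "url": "mongodb://localhost:27017/butler",
    "options": {}
  },
  "dateOnIndex": false
}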
Create your recipes, then start Butler:
npm start
Butler uses the @timestamp field to apply the "period" filter, so make sure your index has this field.
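Under the hood, a recipe's "period" is essentially a time-range filter on @timestamp. The sketch below shows roughly what a search for "with love" over the last 10 minutes looks like in plain Elasticsearch terms (illustrative only; the exact query Butler builds may differ):
GET /shakespeare/_search
{
  "query": {
    "bool": {
      "must": [
        { "query_string": { "query": "\"with love\"" } },
        { "range": { "@timestamp": { "gte": "now-10m" } } }
      ]
    }
  }
}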
Butler will search for recipes in your MongoDB database.
A recipe describes a monitoring operation. This is what a recipe looks like:
{
  "name": "test-recipe",
  "application": "test",
  "active": true,
  "elasticsearch": "http://localhost:9200",
  "kibana": "http://localhost:5601",
  "interval": 10,
  "search": {
    "index": "shakespeare",
    "query": "\"with love\"",
    "limit": 10,
    "period": "10 m"
  },
  "action": {
    "type": "gmail",
    "to": "[email protected]",
    "subject": "[#hits#] hits at [#application# #recipe#]",
    "body": "<p>Your recipe results:</p> #detail#"
  }
}
- name: Recipe name
- application: Monitored application name
- elasticsearch: Elasticsearch URL with port information (Ex: http://localhost:9200)
- search: Object that specifies the search
- search.index: Search index name. If you set dateOnIndex in config/env.json, Butler will append the current date (-YYYY-MM-DD) to the index name.
- search.query: Elasticsearch query (Ex: code:"500" && "EXTERNAL API ERROR")
- search.limit: Number of hits that must be reached before the action is executed
- search.period: Period, in minutes, over which occurrences are searched (Ex: "20 m" means "in the last 20 minutes")
- action: Object that specifies the action. By default Butler ships with 2 action types (gmail and twiliosms). Butler looks the type up in the worker/senders folder, and you can create your own sender.
This action is part of Butler's default solution and uses a Gmail account to send the notification.
- action.type: gmail
- action.to: Recipient's email
- action.subject: Mail subject. Tags #hits#, #application# and #recipe# will be replaced with recipe data
- action.body: Mail body. Tag #detail# will be replaced with search result data
{
  "name" : "test-recipe",
  "application" : "test",
  "active" : false,
  "elasticsearch" : "http://localhost:9200",
  "kibana" : "http://localhost:5601",
  "interval" : 10,
  "action" : {
    "body" : "<p>Your recipe results:</p> #detail#",
    "subject" : "[#hits#] hits at [#application# #recipe#]",
    "to" : "[email protected]",
    "type" : "gmail"
  },
  "search" : {
    "index" : "shakespeare",
    "query" : "\"with love\"",
    "limit" : "10",
    "period" : "60 m"
  }
}
This action is part of Butler's default solution and uses a Twilio account to send SMS messages.
- action.type: twiliosms
- action.to: Recipient's phone number (Ex: +5521999998888)
- action.body: SMS body. Tags #hits#, #application# and #recipe# will be replaced with recipe data
{
  "name" : "test-recipe-sms",
  "application" : "test",
  "active" : true,
  "elasticsearch" : "http://localhost:9200",
  "kibana" : "http://localhost:5601",
  "interval" : 10,
  "action" : {
    "body" : "#application# #recipe# => #hits# hits",
    "to" : "+5521999998888",
    "type" : "twiliosms"
  },
  "search" : {
    "index" : "shakespeare",
    "query" : "\"with love\"",
    "limit" : "10",
    "period" : "6000 m"
  }
}
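Since Butler reads its recipes from MongoDB, registering a recipe boils down to inserting a document like the ones above. A minimal mongo shell sketch, assuming the recipes live in a collection named recipes (a hypothetical name; check Butler's worker code for the collection it actually reads):
// "recipes" is a hypothetical collection name -- verify it against Butler's worker code
db.getCollection('recipes').insertOne({
  "name" : "test-recipe-sms",
  "application" : "test",
  "active" : true,
  "elasticsearch" : "http://localhost:9200",
  "kibana" : "http://localhost:5601",
  "interval" : 10,
  "action" : { "type" : "twiliosms", "to" : "+5521999998888", "body" : "#application# #recipe# => #hits# hits" },
  "search" : { "index" : "shakespeare", "query" : "\"with love\"", "limit" : "10", "period" : "60 m" }
})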
For testing purposes you can use our sandbox to create an initial test environment:
cd _sandbox
sudo ./sandbox.sh up -d
Our sandbox uses docker-compose to run MongoDB, Elasticsearch and Kibana containers.
After the containers are running, you need to import some sample data:
Add the sample index mapping at http://localhost:5601/app/kibana#/dev_tools/console
PUT /shakespeare
{
  "mappings": {
    "doc": {
      "properties": {
        "speaker": {
          "type": "keyword"
        },
        "play_name": {
          "type": "keyword"
        },
        "line_id": {
          "type": "integer"
        },
        "speech_number": {
          "type": "integer"
        }
      }
    }
  }
}
Add the @timestamp ingest pipeline to automatically include timestamp information in your data.
PUT _ingest/pipeline/timestamp
{
  "description" : "Adds a timestamp field at the current time",
  "processors" : [
    {
      "set" : {
        "field": "@timestamp",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}
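If you want to check that the pipeline works before importing anything, the standard _simulate endpoint can be used (the sample document below is made up purely for illustration):
POST _ingest/pipeline/timestamp/_simulate
{
  "docs": [
    { "_source": { "speaker": "HAMLET", "play_name": "Hamlet" } }
  ]
}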
Now you can import some sample data into your index:
cd _sandbox/sample_data
curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/shakespeare/doc/_bulk?pretty&pipeline=timestamp' --data-binary @shakespeare_6.0.json
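To make sure the bulk import worked, a quick document count (standard Elasticsearch API, nothing Butler-specific) should return a non-zero number:
curl 'localhost:9200/shakespeare/_count?pretty'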
To run Butler in a Docker container you need to adjust config/env.json:
- Edit the mongo configuration URL and options
- Edit the senders configurations (see the sketch below)
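The credentials for the built-in senders also live in config/env.json. The block below is only an illustrative assumption (the real key names are defined by the gmail and twiliosms senders in worker/senders); the point is that each sender type has its own credentials section:
{
  "senders": {
    "gmail": {
      "user": "your-gmail-address@gmail.com",
      "password": "your-app-password"
    },
    "twiliosms": {
      "accountSid": "your-twilio-sid",
      "authToken": "your-twilio-token",
      "from": "+15550000000"
    }
  }
}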
Then just run:
cd _docker
sudo ./up.sh
If you want to see container logs:
sudo docker logs butler
To stop butler:
cd _docker
sudo ./down.sh
You can create your own Butler sender. To do this, create a sender class in the worker/senders folder, named after your action type:
// worker/senders/sms.js
class SmsSender {
  constructor() {
    this.config = require('../../config');
  }

  // Send recipe result by sms
  send(recipe, searchResult) {
    let message = this._getInfo(recipe.action.body, recipe, searchResult);
    // send the sms notification here...
  }

  // Replace the recipe tags with the recipe and search result data
  _getInfo(text, recipe, searchResult) {
    return text
      .replace('#application#', recipe.application)
      .replace('#recipe#', recipe.name)
      .replace('#hits#', searchResult.hits.total);
  }
}

module.exports = new SmsSender();
Then you can create a recipe using your new sender:
{
  "name": "Error 500 on process recipe",
  "application": "My Application",
  "active": true,
  "elasticsearch": "http://localhost:9200",
  "kibana": "http://localhost:5601",
  "interval": 10,
  "search": {
    "index": "shakespeare",
    "query": "\"process\" && code: \"500\"",
    "limit": 10,
    "period": "10 m"
  },
  "action": {
    "type": "sms",
    "to": "2199999-8888",
    "body": "#application# #recipe# => #hits# hits"
  }
}
Butler stores all execution results in the Mongo "executions" collection.
If you want these executions to expire so that your database does not grow too large, set a TTL index:
db.getCollection('executions').createIndex( { "created_at": 1 }, { expireAfterSeconds: 10800 } )
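To confirm the TTL index was created, list the collection's indexes (a standard MongoDB shell command) and look for the expireAfterSeconds entry:
db.getCollection('executions').getIndexes()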