ElastAlert is using too much CPU and memory #1425
-
I am running ElastAlert on an Ubuntu VM, and nothing else runs on that machine except ElastAlert. I had configured the VM with 8 GB of RAM and 2 CPUs and it started crashing a lot; after increasing it to 16 GB and 4 CPUs it is working better. But the memory usage is still quite high. Is that expected? I know ElastAlert keeps an instance per rule in memory to store data; if so, is there a way to flush that data periodically for any rule category?
Replies: 1 comment 5 replies
-
Please note that if you do not regularly close the indexes accumulated in Elasticsearch, the Java heap memory will be exhausted and Elasticsearch will become unresponsive, even if there is enough disk space.
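As an aside, and purely as a sketch (Curator is not mentioned above, and the index prefix, date format, and 30-day cutoff are placeholder assumptions), one common way to close old indices on a schedule is an Elasticsearch Curator action file along these lines:

```yaml
# close-old-indices.yml -- hypothetical Curator action file.
# Assumes indices are named like logstash-YYYY.MM.dd; adjust the
# prefix, timestring, and unit_count to match your own indices.
actions:
  1:
    action: close
    description: "Close indices older than 30 days to relieve Java heap pressure"
    options:
      ignore_empty_list: True
      continue_if_exception: False
    filters:
      - filtertype: pattern
        kind: prefix
        value: logstash-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 30
```

Running this from cron (for example `curator --config curator.yml close-old-indices.yml`) keeps the number of open indices, and therefore the heap usage, bounded.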
-
The memory usage depends on the rule configurations and the amount of data each rule returns on every query. I've not personally run into a scenario where ElastAlert 2 runs out of memory, and I've had it running for many months at a time. I'm not saying there's no memory leak, but just that I've not observed it and so it will be difficult to find it without seeing it firsthand.
If you can rewrite your rules to avoid querying large result sets, such as switching to `use_count_query` to only return counts instead of the actual documents, that would likely eliminate the problem. Or if you can continue to isolate the problem then perhaps we have a better shot of finding out specifically what's …
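To make that concrete, here is a minimal sketch of a frequency rule using `use_count_query` (the rule name, index pattern, threshold, filter, and alerter below are placeholders, not taken from this thread):

```yaml
# example-count-rule.yml -- hypothetical ElastAlert 2 rule, shown only to
# illustrate use_count_query; adjust index, filter, and thresholds to taste.
name: Example high error volume
type: frequency
index: logstash-*

# Fire if more than 10000 matching documents arrive within 1 hour.
num_events: 10000
timeframe:
  hours: 1

# Ask Elasticsearch for a count per query instead of retrieving the
# documents themselves, which keeps the per-rule memory footprint small.
use_count_query: true

filter:
- query:
    query_string:
      query: "log_level: ERROR"

alert:
- debug
```

The trade-off is that the alert text can no longer include fields from individual matched documents, because only counts come back from each query.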