How to achieve fault tolerance with logstash forwarder? #442
@KwanzooDev Make sure that even archived files match the "paths" pattern. For example, use "impression_logs/impressions.log*", which would match both impressions.log and impressions.log-201504221051. Forwarder will follow the rotation and remember where in the file it was, even if an archive happens while forwarder is not running or Logstash is down. This way, the only time you lose logs is if you delete the archived file and Logstash forwarder restarts (if it is still running, it keeps the deleted file open until it finishes with it). But chances are deletion happens after 7 days at the earliest, and the logs will have been sent and done with by then.
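The fix described above amounts to widening the `paths` glob in the forwarder config so rotated files stay in scope. A sketch, assuming the same Tomcat log directory shown later in the thread:

```json
{
  "files": [
    {
      "paths": [ "/usr/share/tomcat/impression_logs/impressions.log*" ]
    }
  ]
}
```

With this pattern the forwarder continues tracking a file after rotation renames it, resuming from its saved offset.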
@driskell: This issue is random, because for a few test cases it processes all 350 logs correctly.
It could be a bug. But can you explain how you're testing this? Do you stop the application, then wait, then stop forwarder, then create some logs (exactly 350) and then start it again? Due to buffering and such, it may be that when you stopped forwarder it hadn't yet sent everything (there is no graceful shutdown supported; stopping forwarder is destructive). So when you start it again it sends more than expected: what it previously missed plus the new logs.
I stop the forwarder, create some logs using Gatling (a performance testing tool), and then start it again.
@KwanzooDev Can you make sure 0 logs are being generated, and wait a minute before stopping forwarder? As I said, it may be that it hadn't finished sending logs when it was stopped. Then when it started, it was sending the Gatling logs AND what it didn't finish sending before it was stopped.
@driskell I am using Logstash Forwarder to process Tomcat logs.
My logstash-forwarder config file contains:
"files": [
{
"paths": [ "/usr/share/tomcat/impression_logs/impressions.log" ]
}]
I am using log rotation, so the log file is archived to a different folder after some time or when the file size exceeds 1 MB.
If logstash-forwarder is down for a few minutes, logs get archived and it does not process them.
I am using monit to monitor logstash-forwarder, but there is still a chance of losing logs.
Is there a way to achieve fault tolerance with logstash forwarder?
We use log4j to rotate the log file. If the file exceeds a size of 1 MB, it is renamed and replaced by a new impressions.log file.
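The rotation behaviour described above, and why an exact `paths` entry stops matching after a rename, can be sketched with Python's `fnmatch` (filenames are illustrative, following the examples in this thread):

```python
import fnmatch

# Size-based rotation as described: when impressions.log exceeds 1 MB,
# log4j renames it (here with a timestamp suffix) and a fresh
# impressions.log is created.
active = "impressions.log"
archived = "impressions.log-201504221051"

# An exact path only matches the active file, so the forwarder loses
# sight of the archive; a trailing wildcard matches both.
exact_pattern = "impressions.log"
glob_pattern = "impressions.log*"

print(fnmatch.fnmatch(active, exact_pattern))    # True
print(fnmatch.fnmatch(archived, exact_pattern))  # False
print(fnmatch.fnmatch(active, glob_pattern))     # True
print(fnmatch.fnmatch(archived, glob_pattern))   # True
```

This is why the wildcard suggestion earlier in the thread lets the forwarder keep processing a file even after rotation has renamed it.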