
Out of memory errors. #7

Open
KevinJCross opened this issue Feb 28, 2017 · 5 comments

It seems that there is no write-queue limiting in this library.

Once the connection is created and we start logging to the logstash socket, if logstash slows down to a trickle the pending writes back up to such a degree that you run out of heap.

[screenshot: heap growth, 2017-02-28 10:33]

Two possible solutions:

  1. A write queue for the socket, with an async callback fired when the socket has drained some of the queue. The queue would be limited to some size x, and once it is full new logs are dropped. This would also help with batching (an optimisation this library really needs).

  2. A size-limited ring buffer used to write log messages into (LMAX-style disruptors in multi-threaded languages are awesome).

Simplified, untested pseudo-code below. Note that older entries just get stomped on, so if you run out of space only the most recent logs end up getting sent. You could also make it drop new writes instead, but that is slightly more complex.

const buffer = new Array(1024).fill(null); // size-limited ring
let writeIndex = 0;
let readIndex = 0;

write = (x) => {
  // could drop here instead if the slot is non-null (slightly slower and probably not necessary)
  buffer[writeIndex % buffer.length] = x;
  writeIndex++;
};

nextBuffer = () => {
  const x = buffer[readIndex % buffer.length];
  buffer[readIndex % buffer.length] = null;
  readIndex++; // was missing above: without it the same slot is read forever
  return x;
};

// called on async socket empty
fullSocket = () => {
  let x;
  while ((x = nextBuffer()) != null) {
    // or do some batching here
    socket.send(x);
  }
};
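For contrast, option 1 (a size-limited queue that drops new writes once full) could be sketched like this. This is untested too, and BoundedLogQueue and its drop-newest policy are illustrative names, not anything this library provides:

```javascript
// Bounded write queue (option 1): refuses new entries once full, so the heap
// cannot grow without bound while the socket is slow.
class BoundedLogQueue {
  constructor(limit) {
    this.limit = limit;
    this.items = [];
    this.dropped = 0; // count of messages discarded because the queue was full
  }

  push(msg) {
    if (this.items.length >= this.limit) {
      this.dropped++;
      return false; // tells the caller the message was dropped
    }
    this.items.push(msg);
    return true;
  }

  // Called when the socket has drained; `send` is whatever transport you use.
  drain(send) {
    while (this.items.length > 0) {
      send(this.items.shift());
    }
  }
}
```

The trade-off versus the ring buffer is the opposite failure mode: here the oldest logs survive and the newest are lost, which is often what you want for post-mortems.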

I'm unsure whether consecutive writes will end up in the same TCP buffer or be sent in separate ones; I'm not too familiar with Node.js sockets.

I might have to switch to another library because of this ... or try to create a pull request if I have the time.

Also, have you thought about integration with winston logging? The logstash-winston logger seems a bit badly done, to be honest.


simlevesque commented Sep 24, 2018

@KevinJCross

What did you end up doing? This is causing crashes on our prod environment. Guess I should read all GitHub issues before using something in the future...

oroce (Member) commented Sep 25, 2018

You can use a queue on top of this library, like this one: https://www.npmjs.com/package/queue

const queue = require('queue');
const Logstash = require('logstash-client');

const logstash = new Logstash({
  type: 'udp', // udp, tcp, memory
  host: 'logstash.example.org',
  port: 13333
});

const q = queue();

module.exports = function (message) {
  q.push(function (cb) {
    logstash.send(message, cb);
  });
  q.start();
  if (q.length > 1e4) {
    // if the queue's size exceeds 10000 you can drop jobs, e.g. q.splice(0, 5000)
  }
};

I haven't tried the code above, but it's pretty simple to put something like that in place.
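The trimming idea from the comment in the snippet can be sketched on a plain array, independent of the queue package (capQueue and the numbers here are illustrative):

```javascript
// Cap a pending-jobs array: when it grows past maxPending, discard the oldest
// entries so memory stays bounded. The trade-off: those logs are lost.
function capQueue(pending, maxPending, keep) {
  if (pending.length > maxPending) {
    pending.splice(0, pending.length - keep); // drop oldest, keep newest `keep`
  }
  return pending;
}
```

Note that Array.prototype.slice would not work here: it returns a copy without mutating the queue, so splice is the call that actually frees memory.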

@simlevesque

Maybe fix your library instead?


oroce commented Sep 25, 2018

Haha, how can I fix it if logstash is getting slow at receiving the messages over UDP? If you have any ideas, I'm open to them.
Also, OOM means you are pushing the logs too hard; you either scale up the receiver or scale down the number of messages you are pushing.

@simlevesque

Integrating the queue directly into the client instead of crashing the process seems like the best course of action.
