ER_LOCK_DEADLOCK #541
Update: I increased the REDIS_CONCURRENCY of the workers and increased memory a little bit. Now I sometimes get 0 jobs in the queue, and during busy hours it averages around 3k in the queue. However, I still see deadlock issues.
If you have perpetual jobs in the queue you should normally spin up an additional worker to help handle the load; I would generally suggest that approach over increasing the concurrency too high. There will most likely always be jobs in the queue if you have data constantly coming in, though I'm unsure what your setup is. As for the deadlocks, it looks like they are getting retried (jobs all retry) but still failing, which is interesting and leads me to believe whatever is holding the lock may be fairly long running. Could you see if you can get any more information on the deadlocks? https://www.percona.com/blog/how-to-deal-with-mysql-deadlocks/ covers how; I would be curious to see what table is preventing the inserts. Typically inserts don't have problems unless 1) the record already exists (which is most likely not the case here since it still fails on retry) or 2) an index behind the record is being blocked by other inserts or operations.
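For reference, a minimal sketch (not from this issue, and assuming a Knex/mysql2 setup with a `DATABASE_URL` connection string) of pulling the latest deadlock details out of InnoDB, which is roughly the diagnostic the linked article describes:

```typescript
import knex from 'knex'

// Assumed connection setup; adjust to your own configuration.
const db = knex({ client: 'mysql2', connection: process.env.DATABASE_URL })

async function printLatestDeadlock() {
  // SHOW ENGINE INNODB STATUS includes a "LATEST DETECTED DEADLOCK" section
  // listing both transactions, the locks they held, and the locks they were
  // waiting for, which identifies the table and index involved.
  const [rows] = await db.raw('SHOW ENGINE INNODB STATUS')
  const status: string = rows[0].Status

  const start = status.indexOf('LATEST DETECTED DEADLOCK')
  if (start === -1) {
    console.log('No deadlock recorded since the last restart')
    return
  }
  // The next section header in the status output is "TRANSACTIONS".
  const end = status.indexOf('TRANSACTIONS', start)
  console.log(status.slice(start, end === -1 ? undefined : end))
}

printLatestDeadlock().finally(() => db.destroy())
```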
Yeah, I checked some of the deadlock logs.
Interestingly, I ran a count (and the count was 6M). It's weird how long this is taking.
Well, I just increased the AWS RDS instance size and it looks like the problem went away. At least in the first hour I can't see any deadlocks; I will update here after 24h. Maybe the bottleneck was the DB instance itself.
Well, that was too soon. I just got a deadlock. But it has definitely lowered the frequency.
@pushchris looks like the problem is the foreign key
Well, this is a fun one. There is definitely an index missing, but I'm unsure how. All of the hosted versions I have include an index on the
@leobarcellos were you able to try adding the indexes to see if it solved the issue for you?
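For context, adding such an index in a Knex-based codebase would typically be done with a migration. A hypothetical sketch (the table and column names below are placeholders, not taken from this issue):

```typescript
import { Knex } from 'knex'

export async function up(knex: Knex): Promise<void> {
  await knex.schema.alterTable('user_events', (table) => {
    // Without an index on the foreign-key column, InnoDB takes broader locks
    // on the referenced table, which makes concurrent inserts deadlock-prone.
    table.index(['user_id'], 'user_events_user_id_index')
  })
}

export async function down(knex: Knex): Promise<void> {
  await knex.schema.alterTable('user_events', (table) => {
    table.dropIndex(['user_id'], 'user_events_user_id_index')
  })
}
```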
@pushchris Sure! I updated yesterday. Thanks for this fix. I can see that the deadlock errors have decreased noticeably, however I'm still receiving some. Most of them are still on the same table:
I can see that eventually I get a few on
This second one, I could see, is caused by the
Just got another, different deadlock error, now on rule_evaluations.
What is your DB usage looking like? Specifically CPU and memory usage. This many deadlocks makes me think it's under-provisioned, since most of those tables are pretty simple, but there may be something else underlying that is causing it.
Awesome! #551 should hopefully help with the second and third items in the top SQL list you sent over and reduce the total number of deadlocks on those tables.
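As an aside, a "top SQL" list like the one referenced above can usually be pulled from MySQL's sys schema. A minimal sketch (assuming the sys schema and performance_schema are enabled on the instance; not specific to this project):

```typescript
import knex from 'knex'

// Assumed connection setup; adjust to your own configuration.
const db = knex({ client: 'mysql2', connection: process.env.DATABASE_URL })

async function topStatements() {
  // sys.statement_analysis aggregates normalized statements and is ordered
  // by descending total latency by default.
  const [rows] = await db.raw(`
    SELECT query, exec_count, total_latency, lock_latency, rows_examined
      FROM sys.statement_analysis
     LIMIT 10
  `)
  console.table(rows)
}

topStatements().finally(() => db.destroy())
```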
I'm constantly receiving deadlock errors and I don't know what else I can do.
I was working with 5 workers and thought maybe they were competing, but I just went down to a single worker and it's still happening.
I even tried putting some locks (using acquireLock and releaseLock) around these jobs, but even that didn't work (the ER_LOCK_DEADLOCK error comes from AWS RDS Aurora MySQL and I'm not sure how I can solve it).
Deadlock issues are generally solved at the application level, but I don't know if that is the case here. I even tried to use this package:
https://www.npmjs.com/package/@tanjaae/knex-mysql2-deadlock
But I could not get it to work; I think it conflicts with the current knex/mysql2 versions.
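For what it's worth, the application-level fix that package automates can also be hand-rolled. A minimal sketch (not from this project; assumes Knex with the mysql2 driver, which surfaces the error code as `err.code`):

```typescript
import knex, { Knex } from 'knex'

// Assumed connection setup; adjust to your own configuration.
const db = knex({ client: 'mysql2', connection: process.env.DATABASE_URL })

// Deadlocked transactions are rolled back by InnoDB, so the usual mitigation
// is to re-run the whole transaction a few times with a short backoff.
async function withDeadlockRetry<T>(
  work: (trx: Knex.Transaction) => Promise<T>,
  retries = 3
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await db.transaction(work)
    } catch (err: any) {
      if (err?.code !== 'ER_LOCK_DEADLOCK' || attempt > retries) throw err
      await new Promise((resolve) => setTimeout(resolve, 50 * attempt))
    }
  }
}

// Hypothetical usage: wrap the insert that keeps deadlocking.
// await withDeadlockRetry((trx) => trx('user_events').insert({ ... }))
```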
@pushchris don't you suffer from these deadlocks on worker instances?