Adding with_master and release_master! helpers. #209
base: master
Conversation
```ruby
stick_to_master!
yield if block_given?
ensure
release_master!
```
There's something missing here about whether you were already stuck to master before you ran this block. Imagine you were already stuck to master in a request and then called `with_master`: at the end of the block, I would expect to recover the previous context (still stuck to master). I imagine this is why an instance variable was used in `without_sticking` and then consulted in the various methods.
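To illustrate the concern, here is a minimal sketch of a nesting-aware `with_master`, assuming the proxy exposes `stick_to_master!`, `release_master!`, and some `stuck_to_master?` predicate (the predicate is an assumption; in Makara it is currently a private method):

```ruby
def with_master
  # Remember whether the caller was already pinned to master so that nested
  # calls (or an already-sticky request) keep their context afterwards.
  previously_stuck = stuck_to_master?
  stick_to_master! unless previously_stuck
  yield if block_given?
ensure
  # Only release if this block was the one that pinned the connection.
  release_master! unless previously_stuck
end
```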
Good point, I'll update.
We have a similar need and ended up with this:

```ruby
ActiveRecord::Base.class_eval do
  def self.on_master
    previously_stuck = self.connection.send(:stuck_to_master?)
    self.connection.stick_to_master!(true) unless previously_stuck
    yield
  ensure
    unless previously_stuck
      connection_id = self.connection.instance_variable_get(:@id)
      Makara::Context.release(connection_id)
    end
  end
end
```

```ruby
MyModel.on_master do
  # do stuff
end
```

Curious: is there any way to achieve the same thing with
That's exactly what I wanted to achieve, to have `on_master` work with or without the sticky flag. Thanks for sharing your block!
@kigster yeah, sadly my snippet only works when sticky is
Did you manage to get this done for the latest Makara?
Believe it or not, it's a very timely albeit complex question.

### Background

I've been involved with Makara since I was CTO at Wanelo.com, while Brian Leonard was VPE at TaskRabbit. Our offices were close by, and during one of our lunches he told me about Makara. Despite the fact that TaskRabbit were MySQL users and there was no PostgreSQL support, I instantly knew that this was exactly what my team needed, and we weren't afraid to port it to PostgreSQL. The year was 2010-2012. While several other gems claimed the ability to spread the DB load to a replica, upon further investigation we discovered that none other than Makara was written with multi-threading in mind. Many people at the time used Unicorn and Resque, both single-threaded, multi-process models (and highly memory-inefficient). We were already on Puma and Sidekiq, both multi-threaded gems.

Fast forward to literally right now: as part of yet another scaling project at my current work, we are taking advantage of the relatively new native read/write split support available in Rails 6 and 7. Having used Makara extensively, and having played with Rails read/write splitting over the last few months, I feel I can make a meaningful comparison. I know this is not exactly what you asked, but bear with me, because the short answer to your question is "it depends".

### Rails Read/Write Splitting

This approach has some unfortunate limitations:
### Makara

When we first started using Makara at Wanelo in 2011,

### Advantages of Makara
### So, what is Stickiness?
It was this case (very short stickiness, and a small number of very critical queries) that prompted the
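As a concrete illustration of the race stickiness protects against (the model is a placeholder, and the `sticky`/`master_ttl` knobs referenced in the comments are Makara's documented options, to the best of my knowledge):

```ruby
post = Post.create!(title: "hello")  # the write is routed to the primary
Post.find(post.id)                   # without stickiness this read may hit a replica
                                     # that has not replayed the write yet
# With `sticky: true`, the write pins the current context to the primary, so the
# follow-up read is also served by the primary for roughly `master_ttl` seconds.
```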
### Alternatives to Stickiness

While cookies may work on the web, for obvious reasons background jobs have no such luxury. But they have other neat features that more than compensate for clunky stickiness on the web. If you use Sidekiq, you have access to several very relevant features:
For instance, Sidekiq workers can't use stickiness, since they do not support a cookie.
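A sketch of how a background job might opt into the primary explicitly instead, reusing the `on_master` helper shared earlier in this thread (the job, model, and method names are placeholders):

```ruby
class SyncOrderWorker
  include Sidekiq::Worker

  def perform(order_id)
    # No cookie-based sticky context exists in a worker, so force the
    # primary for the read that must not lag behind the enqueuing write.
    Order.on_master do
      order = Order.find(order_id)
      order.sync_to_billing!  # placeholder for the real work
    end
  end
end
```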
### Stickiness and Replication Delay

These two concepts are closely related. If your replicas are able to keep up, with a typical replication delay within fractions of a second, then stickiness may not be needed (or can be extremely short, say 300ms). Replication delay introduces into the architecture the concept of "eventual consistency".

### TL;DR

I am currently pitching to my company to experiment with Makara. If I am successful, I'd be more than happy to submit a proper PR with those helpers. Sorry for the awfully long essay :)
Thank you for this. Really helpful.
Keep in mind you can execute a fast and relatively cheap query against the replica to compute the replication delay. If I wrote this, I'd run it periodically on a single dedicated thread in that Ruby VM. Then, for queries where timing is critical, you can ask the thread for the latest delay and make your decision accordingly.
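A minimal sketch of that idea for PostgreSQL; the lag query is standard, while the class name, the `ReplicaRecord` model (assumed to be bound to a replica), and the threshold in the usage note are illustrative:

```ruby
class ReplicationLagMonitor
  # pg_last_xact_replay_timestamp() is NULL on a primary, hence the COALESCE.
  LAG_SQL = <<~SQL
    SELECT COALESCE(EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())), 0) AS lag
  SQL

  def initialize(model:, interval: 1.0)
    @lag = 0.0
    @thread = Thread.new do
      loop do
        begin
          @lag = model.connection.select_value(LAG_SQL).to_f
        rescue StandardError
          # keep the last known value if the probe itself fails
        end
        sleep interval
      end
    end
  end

  # Timing-critical callers consult this before deciding where to read.
  def lag_seconds
    @lag
  end
end
```

Usage might look like `MONITOR = ReplicationLagMonitor.new(model: ReplicaRecord)` at boot, and `use_primary = MONITOR.lag_seconds > 0.3` at the call site.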
Well, in the API I have no choice: the data needs to be there right away, and sometimes it is simply not there and we cannot afford to wait. It's usually just simple Rails `find()` stuff.
@camol Have you considered adding a database-level statement timeout? Oftentimes when replicas lag, it's because someone is running a long query on a replica that, in order to finish, must push back on WAL transfer and application.
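For illustration, a statement timeout can be pinned to the role the replica readers connect as; the role name and the 5-second limit below are placeholders:

```ruby
# Abort any statement on this role that runs longer than 5 seconds, so a stray
# long read cannot hold back WAL replay on the replica.
ActiveRecord::Base.connection.execute(<<~SQL)
  ALTER ROLE replica_reader SET statement_timeout = '5s';
SQL
```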
The other trick we used was to have a Sidekiq server that was ONLY connected to the master. The rest (a lot more) read only from the replica. If we enqueued a job that must see the current data, we used the queue that was attached to the primary.
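A sketch of that routing with Sidekiq; the queue and job names are placeholders:

```ruby
# Consumed only by the Sidekiq process whose database config points at the
# primary, e.g. started with: bundle exec sidekiq -q primary_only
class RecalculateBalanceJob
  include Sidekiq::Worker
  sidekiq_options queue: "primary_only"

  def perform(account_id)
    # reads here are guaranteed fresh because this process talks to the primary
  end
end
```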
This does not solve the problem for us. The queries are fast and simple, and we cannot afford to wait extra, since it simply means the replica does not yet have the data and the record is not found on the slave at that moment; we absolutely need master for this particular read. This strategy worked for us perfectly well, we do not want to risk any issues, and we are trying to find a way to maintain the exact same behaviour.
This concept, spreading reads to a potentially lagging replica, exists for a very specific reason. It's needed when your traffic grows beyond what the largest database instance, with 1 TB of RAM, 256 CPU cores, and a 15-SSD disk array, can handle. Do NOT use replicas for reads if you need 100% accuracy (as in financial or medical domains). But absolutely DO use them when you can tolerate a slight delay (for example by queuing your jobs), and when it's not mission-critical if a user occasionally sees old data. Great examples of apps that might use Makara are social apps, content delivery, chat, etc. Apps that are inherently asynchronous can scale 10X compared to using a single primary, by using many replicas with Makara routing the traffic. The alternative for 100% accuracy is horizontal partitioning of the data across multiple masters, which is also the only method that works when your scaling problem is not the reads but the write IO. My 2c. --kig
This is just a concept, but I think an important one to have...