Use fetch_multi for multiple cache blocks #421

Open
wants to merge 2 commits into main

Conversation


@sobrinho commented Nov 11, 2017

This will improve performance since we will hit the backend just once
to read all keys, as long as the cache adapter implements fetch_multi
support, like Dalli.

For example:

    json.cache! :x do
      json.x true
    end

    json.cache! :y do
      json.y true
    end

    json.cache! :z do
      json.z true
    end

This example was hitting memcached 6 times on a cache miss:

  1. read x
  2. write x
  3. read y
  4. write y
  5. read z
  6. write z

And 3 times on cache hit:

  1. read x
  2. read y
  3. read z
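
Each json.cache! block currently maps to an individual Rails.cache.fetch
call: one read, plus one write when the key is missing, which is where
the six round trips above come from. A rough sketch (the key names and
render helpers here are hypothetical):

    # One read per key; on a miss the block runs and its result is written back.
    Rails.cache.fetch("jbuilder/x") { render_x_fragment }
    Rails.cache.fetch("jbuilder/y") { render_y_fragment }
    Rails.cache.fetch("jbuilder/z") { render_z_fragment }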

After this change, 4 times on cache miss:

  1. read multi x,y,z
  2. write x
  3. write y
  4. write z

And 1 time on cache hit:

  1. read multi x,y,z
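
For reference, that is the sequence Rails.cache.fetch_multi performs on
its own: one multi-key read, then one write per missing key with the
value computed by the block. A minimal sketch outside of Jbuilder
(compute_fragment is a hypothetical helper):

    # One read_multi covering x, y and z; the block runs only for keys
    # that were not found, and each block result is written back individually.
    Rails.cache.fetch_multi(:x, :y, :z) do |key|
      compute_fragment(key)
    end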

Note that in the case of different options, one read multi will be made
per set of options, e.g.:

    json.cache! :x do
      json.x true
    end

    json.cache! :y do
      json.y true
    end

    json.cache! :z, expires_in: 10.minutes do
      json.z true
    end

    json.cache! :w, expires_in: 10.minutes do
      json.w true
    end

In the case of cache miss:

  1. read multi x,y
  2. write x
  3. write y
  4. read multi z,w
  5. write z
  6. write w

In the case of cache hit:

  1. read multi x,y
  2. read multi z,w

That's because the Rails.cache.fetch_multi signature only accepts a
single set of options applied to all given keys.
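
A minimal sketch of that constraint and the resulting grouping (the
grouping itself is only an illustration, not the PR's actual code;
compute_fragment is a hypothetical helper):

    # fetch_multi takes one options hash that applies to every key, so
    # keys cached with different options need separate calls.
    Rails.cache.fetch_multi(:x, :y) { |key| compute_fragment(key) }
    Rails.cache.fetch_multi(:z, :w, expires_in: 10.minutes) { |key| compute_fragment(key) }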

Lastly, nested cache calls are allowed and are handled recursively to
achieve the same behavior, e.g.:

    json.cache! :x do
      json.x true

      json.cache! :y do
        json.y true
      end

      json.cache! :z do
        json.z true
      end
    end

    json.cache! :w do
      json.w true
    end

In the case of cache miss:

  1. read multi x,w
  2. read multi y,z
  3. write y
  4. write z
  5. write x
  6. write w

In the case of cache hit:

  1. read multi x,w

The same options rule applies: if you have different options, there
will be one read multi per set of options.
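
A rough sketch of the nested batching idea (an illustration of the
approach only, not the PR's implementation; render_fragment is a
hypothetical helper):

    # Outer level: x and w are fetched together ("read multi x,w").
    Rails.cache.fetch_multi(:x, :w) do |key|
      if key == :x
        # Inner level: fetched (and written on miss) only if :x itself missed.
        inner = Rails.cache.fetch_multi(:y, :z) do |nested_key|
          render_fragment(nested_key)               # becomes "write y" / "write z"
        end
        render_fragment(:x, nested: inner)          # becomes "write x"
      else
        render_fragment(:w)                         # becomes "write w"
      end
    end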

This is the result of an investigation into an application that was
spending 15% of its request time hitting memcached multiple times.

We were able to reduce the memcached time to 1% of the request by using
this algorithm.

Thanks to @samflores for helping me with the initial idea.

@rails-bot

Thanks for the pull request, and welcome! The Rails team is excited to review your changes, and you should hear from @rwz (or someone else) soon.

If any changes to this PR are deemed necessary, please add them as extra commits. This ensures that the reviewer can see what has changed since they last reviewed the code. Due to the way GitHub handles out-of-date commits, this should also make it reasonably obvious what issues have or haven't been addressed. Large or tricky changes may require several passes of review and changes.

Please see the contribution instructions for more information.

@sobrinho
Author

We lost the instrumentation with that change, but I think we should focus on instrumentation in the Rails cache adapters themselves instead of in Jbuilder.

Make sense?
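
If the instrumentation moves down to the cache store layer, it could be consumed through ActiveSupport::Notifications; a minimal sketch (event and payload names may vary slightly across Rails versions):

    # Subscribe to the cache store events (cache_read, cache_write,
    # cache_read_multi, ...) emitted by ActiveSupport::Cache.
    ActiveSupport::Notifications.subscribe(/\Acache_/) do |*args|
      event = ActiveSupport::Notifications::Event.new(*args)
      Rails.logger.info "#{event.name}: #{event.payload[:key].inspect} (#{event.duration.round(1)}ms)"
    end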

@sobrinho
Author

Also, I'm not sure why the build failed; it works locally and I don't have permission to rerun the build.

@sobrinho
Author

This is, in a way, a new incarnation of #298.

sobrinho added a commit to sobrinho/abstract_builder that referenced this pull request Nov 29, 2017
This will improve performance since we will hit the backend just once
to read all keys, as long as the cache adapter implements fetch_multi
support, like Dalli.

For example:

    builder.cache! :x do |cache|
      cache.x true
    end

    builder.cache! :y do |cache|
      cache.y true
    end

    builder.cache! :z do |cache|
      cache.z true
    end

This example was hitting memcached 6 times on a cache miss:

  1. read x
  2. write x
  3. read y
  4. write y
  5. read z
  6. write z

And 3 times on cache hit:

  1. read x
  2. read y
  3. read z

After this change, 4 times on cache miss:

  1. read multi x,y,z
  2. write x
  3. write y
  4. write z

And 1 time on cache hit:

  1. read multi x,y,z

Note that in the case of different options, one read multi will be made
per set of options, e.g.:

    builder.cache! :x do |cache|
      cache.x true
    end

    builder.cache! :y do |cache|
      cache.y true
    end

    builder.cache! :z, expires_in: 10.minutes do |cache|
      cache.z true
    end

    builder.cache! :w, expires_in: 10.minutes do |cache|
      cache.w true
    end

In the case of cache miss:

  1. read multi x,y
  2. write x
  3. write y
  4. read multi z,w
  5. write z
  6. write w

In the case of cache hit:

  1. read multi x,y
  2. read multi z,w

That's because the Rails.cache.fetch_multi signature only accepts a
single set of options applied to all given keys.

Lastly, nested cache calls are allowed and are handled recursively to
achieve the same behavior, e.g.:

  builder.cache! :x do |cache_x|
    cache_x.x true

    cache_x.cache! :y do |cache_y|
      cache_y.y true
    end

    cache_x.cache! :z do |cache_z|
      cache_z.z true
    end
  end

  builder.cache! :w do |cache_w|
    cache_w.w true
  end

In the case of cache miss:

  1. read multi x,w
  2. read multi y,z
  3. write y
  4. write z
  5. write x
  6. write w

In the case of cache hit:

  1. read multi x,w

The same options rule applies: if you have different options, there
will be one read multi per set of options.

Pretty much the same as rails/jbuilder#421.