This repository has been archived by the owner on Jan 21, 2022. It is now read-only.

Evaluate a two-level per-request + shared cache #56

Open
raulk opened this issue Jun 28, 2018 · 2 comments

Comments

@raulk
Contributor

raulk commented Jun 28, 2018

ethql currently supports a request-scoped cache, but there is also benefit in shared caching. In a production scenario, the youngest blocks are likely to be the hottest, and heavily used smart contracts will attract a disproportionate share of ethql traffic.

However, there is also risk:

  • Requests from client A can evict entries that are hot for client B, making the runtime cost of a query non-deterministic. Cache pinning is a possibility here, but how to manage the pinning is a different discussion.
  • Misbehaving clients could intentionally thrash the cache to degrade ethql's performance.
  • Chain reorgs can render cache entries stale, so an eviction process is required to remove entries affected by a reorg (we can be notified of reorgs via WSS); see the sketch after this list.
  • In the above scenario, if the WSS connection drops, we'd need to temporarily disable the cache, and start with a cleared cache once the connection is re-established.
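
A minimal sketch of the reorg-eviction and disconnect handling described above, assuming entries are tagged with the block number they were derived from and that new headers arrive over a WSS newHeads subscription. All names here (`ReorgAwareCache`, `onNewHead`, etc.) are hypothetical illustrations, not existing ethql code:

```ts
interface BlockHeader {
  number: number;
  hash: string;
  parentHash: string;
}

class ReorgAwareCache<V> {
  private entries = new Map<string, { value: V; blockNumber: number }>();
  private enabled = true;
  private lastHead?: BlockHeader;

  get(key: string): V | undefined {
    if (!this.enabled) return undefined;
    return this.entries.get(key)?.value;
  }

  set(key: string, value: V, blockNumber: number): void {
    if (!this.enabled) return;
    this.entries.set(key, { value, blockNumber });
  }

  // Called for every header delivered over the WSS newHeads subscription.
  onNewHead(head: BlockHeader): void {
    // A parent-hash mismatch against the last seen head signals a reorg:
    // evict every entry derived from the reorged-out range.
    if (this.lastHead && head.parentHash !== this.lastHead.hash) {
      this.evictFrom(head.number);
    }
    this.lastHead = head;
  }

  // Called when the WSS connection drops: serve nothing until we resync.
  onDisconnect(): void {
    this.enabled = false;
  }

  // Called once the subscription is re-established: start from a clean slate.
  onReconnect(): void {
    this.entries.clear();
    this.lastHead = undefined;
    this.enabled = true;
  }

  private evictFrom(blockNumber: number): void {
    for (const [key, entry] of this.entries) {
      if (entry.blockNumber >= blockNumber) this.entries.delete(key);
    }
  }
}
```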

Ultimately, I see a two-level cache:

  • L1 => request-scoped cache, as already implemented.
  • L2 => shared LRU or LFU cache with pluggable backend implementations (e.g. in-memory, Redis); a rough sketch of the read path follows below.
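
A rough sketch of that read path, under stated assumptions: the `CacheBackend` interface, the `InMemoryLruBackend`, and the promotion of L2 hits into L1 are all illustrative, not the existing ethql implementation. Values are strings for simplicity; a Redis backend would implement the same interface:

```ts
interface CacheBackend {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string): Promise<void>;
}

// One possible L2 backend: an in-memory LRU built on a capacity-bounded Map,
// relying on Map's insertion order to track recency.
class InMemoryLruBackend implements CacheBackend {
  private store = new Map<string, string>();
  constructor(private capacity: number) {}

  async get(key: string): Promise<string | undefined> {
    const value = this.store.get(key);
    if (value !== undefined) {
      // Re-insert to refresh recency.
      this.store.delete(key);
      this.store.set(key, value);
    }
    return value;
  }

  async set(key: string, value: string): Promise<void> {
    if (!this.store.has(key) && this.store.size >= this.capacity) {
      // Evict the least recently used entry (first in insertion order).
      const eldest = this.store.keys().next().value;
      if (eldest !== undefined) this.store.delete(eldest);
    }
    this.store.delete(key);
    this.store.set(key, value);
  }
}

class TwoLevelCache {
  // L1: request-scoped, thrown away when the request completes.
  private l1 = new Map<string, string>();

  constructor(private l2: CacheBackend) {}

  async get(key: string): Promise<string | undefined> {
    const local = this.l1.get(key);
    if (local !== undefined) return local;
    const shared = await this.l2.get(key);
    if (shared !== undefined) this.l1.set(key, shared); // promote L2 hit into L1
    return shared;
  }

  async set(key: string, value: string): Promise<void> {
    this.l1.set(key, value);
    await this.l2.set(key, value);
  }
}
```
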
@ericjuta

ericjuta commented Jul 30, 2018

Plugging this in here too, in reference to #83 and the PR satisfying it, #91.

@raulk
Contributor Author

raulk commented Aug 1, 2018

Thanks, @rej156. Will look into that!
