This repository has been archived by the owner on Jan 21, 2022. It is now read-only.
ethql currently supports a request-scoped cache, but there is benefit to shared caching. In a production scenario, it is likely that the youngest blocks will be hottest, and that highly used smart contracts will garner higher ethql traffic.
However, there are also risks:

- Requests from client A can evict entries that are hot for client B, making the runtime cost of a query non-deterministic. Cache pinning is a possibility here, but how to manage the pinning is a separate discussion.
- Misbehaving clients could intentionally thrash the cache to degrade the performance of ethql.
- Chain reorgs can render cache entries dirty, so an eviction process is required to remove entries affected by a reorg (we can be notified of reorgs via WSS).
- Following on from the above, if the WSS connection is dropped, we'd need to temporarily disable the cache, and start with a cleared cache once the connection is re-established.
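To make the reorg and disconnect handling concrete, here is a minimal sketch of a reorg-aware shared cache. All names here are hypothetical illustrations, not ethql's actual API; in practice the notifications would arrive from a `newHeads`-style WSS subscription rather than direct method calls.

```typescript
// Hypothetical sketch: a shared block cache that evicts on reorg
// notifications and disables itself while the WSS feed is down.
class ReorgAwareBlockCache {
  private entries = new Map<number, object>(); // block number -> cached block
  private enabled = true;

  set(blockNumber: number, block: object): void {
    if (this.enabled) this.entries.set(blockNumber, block);
  }

  get(blockNumber: number): object | undefined {
    return this.enabled ? this.entries.get(blockNumber) : undefined;
  }

  // Reorg notified over WSS: every cached block at or above the common
  // ancestor may now be on a stale fork, so evict those entries.
  onReorg(commonAncestor: number): void {
    for (const n of [...this.entries.keys()]) {
      if (n >= commonAncestor) this.entries.delete(n);
    }
  }

  // WSS dropped: we can no longer trust invalidations, so stop serving
  // from the cache entirely.
  onWssDisconnect(): void {
    this.enabled = false;
  }

  // Reconnected: resume, but only from a cleared cache.
  onWssReconnect(): void {
    this.entries.clear();
    this.enabled = true;
  }
}
```

The key property is that the cache fails closed: with no live reorg feed it returns misses rather than potentially dirty entries.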
Ultimately I see a two-level cache:
- L1 => request-scoped cache, as already implemented.
- L2 => shared LRU or LFU cache with different backend implementations (e.g. in memory, Redis, etc.)
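The two-level lookup could be sketched roughly as follows. This is illustrative only: the names are made up, the L2 tier is shown as a simple in-memory LRU (a Redis backend would sit behind the same interface), and none of this reflects ethql's actual cache code.

```typescript
// Illustrative LRU built on Map's insertion-order iteration.
class LruCache<K, V> {
  private map = new Map<K, V>();
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Re-insert to mark as most recently used.
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    else if (this.map.size >= this.capacity) {
      // Evict the least recently used entry (first in insertion order).
      this.map.delete(this.map.keys().next().value as K);
    }
    this.map.set(key, value);
  }
}

// L2: one instance shared across all requests.
const shared = new LruCache<string, object>(10_000);

// L1 is a fresh Map per request; on a miss in both tiers we fall
// through to the node (represented by the `fetch` callback).
function lookup(l1: Map<string, object>, key: string, fetch: () => object): object {
  let v = l1.get(key);            // L1: request-scoped
  if (v !== undefined) return v;
  v = shared.get(key);            // L2: shared LRU
  if (v === undefined) {
    v = fetch();                  // miss in both tiers
    shared.set(key, v);
  }
  l1.set(key, v);                 // promote into the request scope
  return v;
}
```

One request warming the L2 tier means a later request for the same hot block or contract is served without touching the node, which is exactly the win (and, per the risks above, exactly the surface for thrashing and reorg staleness).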