From e73becbb25091a86d072e06b05e83519aa686f8e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Viktor=20S=C3=B6derqvist?=
Date: Fri, 5 Jul 2024 03:30:45 +0200
Subject: [PATCH] Spelling
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Signed-off-by: Viktor Söderqvist
---
 topics/latency.md   | 2 +-
 topics/lru-cache.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/topics/latency.md b/topics/latency.md
index 83a59ef8..383d1189 100644
--- a/topics/latency.md
+++ b/topics/latency.md
@@ -236,7 +236,7 @@ Fork time in different systems
 
 Modern hardware is pretty fast at copying the page table.
 So are modern hardware-assisted virtualized environments,
-but fork can be really slow in older virtualized environmants without hardware support.
+but fork can be really slow in older virtualized environments without hardware support.
 As of 2024, this is hardly a problem.
 
 You can measure the fork time for a Valkey instance by
diff --git a/topics/lru-cache.md b/topics/lru-cache.md
index 33e2cff0..5886f31e 100644
--- a/topics/lru-cache.md
+++ b/topics/lru-cache.md
@@ -114,7 +114,7 @@ You can see three kind of dots in the graphs, forming three distinct bands.
 In a theoretical LRU implementation we expect that, among the old keys,
 the first half will be evicted.
 The Valkey LRU algorithm will instead only *probabilistically* evicts the older keys.
-As you can see, Redis OSS 3.0 does a reasonalbe job with 5 samples.
+As you can see, Redis OSS 3.0 does a reasonable job with 5 samples.
 Using a sample size of 10, the approximation is very close to an exact LRU implementation.
 
 (The LRU algorithm hasn't changed considerably since this test was performed,
 so the performance of Valkey is similar in this regard.)