Update Key With One Evaluation of the init Future #432

Comments
Hi. Thank you for using moka.

I believe you can use `and_try_compute_with`. Maybe the documentation is a bit confusing, but both methods limit concurrency for the same key: `or_insert_with` coalesces concurrent calls into a single evaluation of the init future, and `and_try_compute_with` executes concurrent calls for the same key serially, so only one task runs its closure at a time while the others wait.
As for `remove`: it should be the latter one; only one call gets the value and all others get `None`.
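As a quick illustration of that answer, here is a minimal sketch (not from this thread; it assumes moka 0.12 with the "future" feature and tokio) in which two tasks race to `remove` the same key and only one of them gets `Some(value)`:

use std::sync::Arc;
use moka::future::Cache;
#[tokio::main]
async fn main() {
    let cache: Cache<String, String> = Cache::new(100);
    cache.insert("key".to_string(), "value".to_string()).await;
    // Start both tasks at the same moment so the two `remove` calls race.
    let barrier = Arc::new(tokio::sync::Barrier::new(2));
    let handles: Vec<_> = (0..2)
        .map(|i| {
            let cache = cache.clone();
            let barrier = barrier.clone();
            tokio::spawn(async move {
                barrier.wait().await;
                // Only the call that actually removes the entry gets `Some("value")`;
                // the losing call observes `None`.
                let removed = cache.remove("key").await;
                println!("task {i}: removed = {removed:?}");
            })
        })
        .collect();
    for handle in handles {
        handle.await.unwrap();
    }
}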
Thanks for your response. So I can either go with my current solution (although it fails on the TTL, and I could lose the key if the update fails), or I can have 'B' resolve immediately. Is there a way that I can tell whether a call is going to block because another task is already updating the same key?
No. Currently, moka does not provide a way to know if the closure is going to block. (The same applies to the init future of `or_insert_with`.)

I am not sure how your refresh ahead cache would work, but I guess that instead of having 'B' block while 'A' is updating the value for the same key, you would want 'B' to return immediately?
I don't really mind if 'B' blocks during steps 3-7, but yes, returning immediately would be the ideal behavior. I've got a separate function for getting data. The update function needs to:

'B' blocking while 'A' is updating isn't ideal, but it's acceptable as long as 'B's function returns early (so it has to know that 'A' just ran). Considering the behavior of `.remove()`, this should be possible as-is, but it would be convenient if there were a ready-made function for this so I can preserve TTLs.
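For reference, a rough sketch of that remove-then-reinsert workaround might look like the following (a reconstruction, not the poster's actual code; it assumes moka 0.12 with the "future" feature and tokio, plus a hypothetical `get_latest_from_upstream` function). `or_insert_with` coalesces the concurrent loaders once the key is gone, but the re-inserted entry starts with a fresh TTL, which is exactly the drawback mentioned above:

use std::time::Duration;
use moka::future::Cache;
type Key = String;
type Value = String;
// Hypothetical upstream call, for illustration only.
async fn get_latest_from_upstream(key: &Key) -> Value {
    format!("fresh value for {key}")
}
// Remove the entry first so that `or_insert_with` sees a missing key and
// re-runs (and coalesces) the init future. Drawback: the new entry gets a
// fresh TTL, so the original expiration schedule is lost.
async fn refresh_by_remove(cache: &Cache<Key, Value>, key: &Key) {
    let _old = cache.remove(key).await;
    let key_for_init = key.clone();
    cache
        .entry(key.clone())
        .or_insert_with(async move { get_latest_from_upstream(&key_for_init).await })
        .await;
}
#[tokio::main]
async fn main() {
    let cache: Cache<Key, Value> = Cache::builder()
        .max_capacity(100)
        .time_to_live(Duration::from_secs(360))
        .build();
    refresh_by_remove(&cache, &"key".to_string()).await;
    println!("{:?}", cache.get(&"key".to_string()).await);
}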
Thank you for explaining. I have a better understanding of the issue now.

OK. So this is why you wanted the update operations from concurrent tasks to be coalesced. I think what you really want is to ensure that the update operation for the same key does not happen more than once in a given time frame (e.g. once every x minutes).

You can achieve this by storing the last update time of the value in the cache and using `and_try_compute_with`:
// Cargo.toml
//
// [dependencies]
// futures-util = "0.3.30"
// moka = { version = "0.12.7", features = ["future"] }
// tokio = { version = "1.38.0", features = ["rt-multi-thread", "macros", "sync", "time"] }
use std::{sync::Arc, time::{Duration, Instant}};
use moka::future::Cache;
pub type Key = String;
pub type Value = String;
#[derive(Clone, Debug)]
pub struct ValueAndUpdatedAt<V> {
/// The value.
#[allow(unused)]
value: V,
/// The time when the value was last updated.
updated_at: Instant,
}
async fn refresh(
cache: &Cache<Key, ValueAndUpdatedAt<Value>>,
key: &Key,
) -> Result<(), Box<String>> {
use moka::ops::compute::{CompResult, Op};
let key = key.clone();
let result = cache
.entry(key.clone())
.and_try_compute_with(move |maybe_entry| async {
if let Some(entry) = maybe_entry {
// Do early return if the entry was updated less than 8 seconds ago.
if entry.into_value().updated_at.elapsed() < Duration::from_secs(8) {
return Ok(Op::Nop) as Result<_, Box<String>>;
}
}
// If we are here, `maybe_entry` should be `None` or the entry was
// updated more than or equal to 8 seconds ago. Get the latest value
// from the upstream service and insert it to this cache.
//
// Since `and_try_compute_with` ensures that only one task can enter
// this async block at a time for the same key, we can be sure that
// the value in the upstream service is retrieved only once.
let value = get_latest_from_upstream(key).await?;
// Update the value in the cache.
Ok(Op::Put(ValueAndUpdatedAt {
value,
updated_at: Instant::now(),
}))
})
.await;
match result {
Err(e) => {
eprintln!("Error: {e:?}");
return Err(e);
}
Ok(CompResult::Unchanged(_entry)) => {
println!(
" Value existed in the cache. Did not update the value \
because not enough time has passed since the last update."
);
}
Ok(CompResult::ReplacedWith(_entry)) => {
println!(
" Value existed in the cache. \
Updated the cached value with the latest one from the upstream service."
);
}
Ok(CompResult::Inserted(_entry)) => {
println!(
" Value did not exist in the cache. \
Inserted the latest value from the upstream service."
);
}
Ok(CompResult::StillNone(_key)) => unreachable!(),
Ok(CompResult::Removed(_entry)) => unreachable!(),
}
Ok(())
}
async fn get_latest_from_upstream(_key: Key) -> Result<Value, Box<String>> {
// Simulate an upstream service that takes some time to respond
tokio::time::sleep(Duration::from_secs(1)).await;
Ok("value".to_string())
}
#[tokio::main]
async fn main() -> Result<(), Box<String>> {
let cache = Cache::builder()
.max_capacity(100)
.time_to_live(Duration::from_secs(360))
.build();
let key = "key".to_string();
println!("Refreshing the value in the cache.");
refresh(&cache, &key).await?;
println!("\nSleeping for 10 seconds...");
tokio::time::sleep(Duration::from_secs(10)).await;
println!("\nRun two async tasks concurrently and try to refresh the value for the same key.");
let barrier = Arc::new(tokio::sync::Barrier::new(2));
let handles = (0..2).into_iter().map(|_| {
let cache = cache.clone();
let key = key.clone();
let barrier = barrier.clone();
tokio::spawn(async move {
barrier.wait().await;
refresh(&cache, &key).await
})
});
// Wait for all tasks to complete.
futures_util::future::join_all(handles).await;
println!("\nAll tasks completed.");
println!("\nSleeping for 5 seconds...");
tokio::time::sleep(Duration::from_secs(5)).await;
println!("\nTry to refresh the value for the key.");
refresh(&cache, &key).await?;
println!("\nSleeping for 5 seconds...");
tokio::time::sleep(Duration::from_secs(5)).await;
println!("\nRefresh the value for the key again.");
refresh(&cache, &key).await?;
Ok(())
}

$ cargo run
Refreshing the value in the cache.
Value did not exist in the cache. Inserted the latest value from the upstream service.
Sleeping for 10 seconds...
Run two async tasks concurrently and try to refresh the value for the same key.
Value existed in the cache. Updated the cached value with the latest one from the upstream service.
Value existed in the cache. Did not update the value because not enough time has passed since the last update.
All tasks completed.
Sleeping for 5 seconds...
Try to refresh the value for the key.
Value existed in the cache. Did not update the value because not enough time has passed since the last update.
Sleeping for 5 seconds...
Refresh the value for the key again.
Value existed in the cache. Updated the cached value with the latest one from the upstream service.

Well, actually the cache already stores the last update time for each key, but it is not exposed to the user. When I have more time, I will expose it via a new method.
In the above code snippet, we have to store `updated_at` in the value ourselves. It would be great if the cache's own last update time could be exposed so that this extra bookkeeping is not needed.
OK. For now, I do not have enough time to implement this, but hopefully in the near future.

EDIT: Created #433. Please subscribe to that issue if you are interested.
This is a bit hard to understand, but please review Caffeine to see if anything it does helps refine your approach. Its refreshAfterWrite will asynchronously reload an entry if it is accessed after a time threshold, returning the currently cached value in the meantime. This way active content stays fresh, inactive content is allowed to expire, and the reload penalty is hidden from cache users. While this works per key, a coalescing loader can batch reloads over a time/space window for a more efficient operation. The in-flight reloads are stored in a secondary mapping, invalidated if the mapping was modified (for linearizability), and deduplicated. A common confusion comes from users who instead want to periodically reload the entire cache contents. That's not really a cache but a warm replica, since no eviction should ever take place; it's simply a scheduled task that recreates an immutable map, so it is clearer to write that code directly without needing a library.
Thank you for the info! I will take a look at refreshAfterWrite.
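As a side note, the refresh-ahead behavior described above can be approximated by hand on top of moka. The following is only a sketch of that idea, not a moka or Caffeine API; it uses a simplified, non-generic version of the `ValueAndUpdatedAt` wrapper from the examples in this thread, a hypothetical `get_latest_from_upstream` function, and assumes moka 0.12 with the "future" feature and tokio. The cached value is returned immediately, and a stale value also triggers a background reload:

use std::time::{Duration, Instant};
use moka::future::Cache;
use moka::ops::compute::Op;
type Key = String;
type Value = String;
#[derive(Clone, Debug)]
struct ValueAndUpdatedAt {
    value: Value,
    updated_at: Instant,
}
// Hypothetical upstream call, for illustration only.
async fn get_latest_from_upstream(_key: Key) -> Value {
    tokio::time::sleep(Duration::from_millis(100)).await;
    "value".to_string()
}
// Return the cached value right away. If it is older than `refresh_after`,
// also spawn a background task that reloads it, so callers never wait for the
// reload. A cache miss is loaded inline; `get_with` coalesces concurrent loads.
async fn get_refreshing(
    cache: &Cache<Key, ValueAndUpdatedAt>,
    key: &Key,
    refresh_after: Duration,
) -> Value {
    if let Some(cached) = cache.get(key).await {
        if cached.updated_at.elapsed() >= refresh_after {
            let cache = cache.clone();
            let key = key.clone();
            // Background reload; concurrent reloads for the same key are
            // serialized by `and_compute_with`.
            tokio::spawn(async move {
                let _ = cache
                    .entry(key.clone())
                    .and_compute_with(move |_maybe_entry| async move {
                        Op::Put(ValueAndUpdatedAt {
                            value: get_latest_from_upstream(key).await,
                            updated_at: Instant::now(),
                        })
                    })
                    .await;
            });
        }
        return cached.value;
    }
    // Miss: load inline and let `get_with` coalesce concurrent loaders.
    let key_for_init = key.clone();
    cache
        .get_with(key.clone(), async move {
            ValueAndUpdatedAt {
                value: get_latest_from_upstream(key_for_init).await,
                updated_at: Instant::now(),
            }
        })
        .await
        .value
}
#[tokio::main]
async fn main() {
    let cache: Cache<Key, ValueAndUpdatedAt> = Cache::builder().max_capacity(100).build();
    let key = "key".to_string();
    println!("{}", get_refreshing(&cache, &key, Duration::from_secs(8)).await);
}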
Here is another example. This one does not use a time frame; it only checks whether the value was already updated by another async task while we were waiting to enter the closure. Like the previous example, it stores the last update time of the value in the cache and uses `and_try_compute_with`:
// Cargo.toml
//
// [dependencies]
// futures-util = "0.3.30"
// moka = { version = "0.12.7", features = ["future"] }
// tokio = { version = "1.38.0", features = ["rt-multi-thread", "macros", "sync", "time"] }
use std::{sync::Arc, time::{Duration, Instant}};
use moka::future::Cache;
pub type Key = String;
pub type Value = String;
#[derive(Clone, Debug)]
pub struct ValueAndUpdatedAt<V> {
/// The value.
#[allow(unused)]
value: V,
/// The time when the value was last updated.
updated_at: Instant,
}
async fn refresh(
cache: &Cache<Key, ValueAndUpdatedAt<Value>>,
key: &Key,
) -> Result<(), Box<String>> {
use moka::ops::compute::{CompResult, Op};
let key = key.clone();
let started_at = Instant::now();
let result = cache
.entry(key.clone())
.and_try_compute_with(move |maybe_entry| async move {
if let Some(entry) = maybe_entry {
// Do early return if we were blocked by another async task
// updating the value for the same key.
//
// This can be checked if our started_at time is earlier than or
// equal to the time that entry was updated by other async task.
if started_at <= entry.into_value().updated_at {
return Ok(Op::Nop) as Result<_, Box<String>>;
}
}
// If we are here, `maybe_entry` should be `None` or the entry was
// updated _before_ we started the computation.
//
// Since `and_try_compute_with` ensures that only one task can enter
// this block at a time for the same key, we can be sure that the
// value for the key in the upstream service is updated only once.
let value = get_latest_from_upstream(key).await?;
// Update the value in the cache.
Ok(Op::Put(ValueAndUpdatedAt {
value,
updated_at: Instant::now(),
}))
})
.await;
match result {
Err(e) => {
eprintln!("Error: {e:?}");
return Err(e);
}
Ok(CompResult::Unchanged(_entry)) => {
println!(
" Value existed in the cache. Did not update the value \
because it was already done by another async task running concurrently."
);
}
Ok(CompResult::ReplacedWith(_entry)) => {
println!(
" Value existed in the cache. \
Updated the cached value with the latest one from the upstream service."
);
}
Ok(CompResult::Inserted(_entry)) => {
println!(
" Value did not exist in the cache. \
Inserted the latest value from the upstream service."
);
}
Ok(CompResult::StillNone(_key)) => unreachable!(),
Ok(CompResult::Removed(_entry)) => unreachable!(),
}
Ok(())
}
async fn get_latest_from_upstream(_key: Key) -> Result<Value, Box<String>> {
// Simulate an upstream service that takes some time to respond
tokio::time::sleep(Duration::from_secs(1)).await;
Ok("value".to_string())
}
#[tokio::main]
async fn main() -> Result<(), Box<String>> {
let cache = Cache::builder()
.max_capacity(100)
.time_to_live(Duration::from_secs(360))
.build();
let key = "key".to_string();
println!("Refreshing the value in the cache.");
refresh(&cache, &key).await?;
println!("\nSleeping for 10 seconds...");
tokio::time::sleep(Duration::from_secs(10)).await;
println!("\nRun two async tasks concurrently and try to refresh the value for the same key.");
let barrier = Arc::new(tokio::sync::Barrier::new(2));
let handles = (0..2).into_iter().map(|_| {
let cache = cache.clone();
let key = key.clone();
let barrier = barrier.clone();
tokio::spawn(async move {
barrier.wait().await;
refresh(&cache, &key).await
})
});
// Wait for all tasks to complete.
futures_util::future::join_all(handles).await;
println!("\nAll tasks completed.");
println!("\nSleeping for 5 seconds...");
tokio::time::sleep(Duration::from_secs(5)).await;
println!("\nRefresh the value for the key.");
refresh(&cache, &key).await?;
println!("\nSleeping for 5 seconds...");
tokio::time::sleep(Duration::from_secs(5)).await;
println!("\nRefresh the value for the key again.");
refresh(&cache, &key).await?;
Ok(())
}

$ cargo run
Refreshing the value in the cache.
Value did not exist in the cache. Inserted the latest value from the upstream service.
Sleeping for 10 seconds...
Run two async tasks concurrently and try to refresh the value for the same key.
Value existed in the cache. Updated the cached value with the latest one from the upstream service.
Value existed in the cache. Did not update the value because it was already done by another async task running concurrently.
All tasks completed.
Sleeping for 5 seconds...
Refresh the value for the key.
Value existed in the cache. Updated the cached value with the latest one from the upstream service.
Sleeping for 5 seconds...
Refresh the value for the key again.
Value existed in the cache. Updated the cached value with the latest one from the upstream service.
Thank you so much. I'll implement this example!
I'm working on a refresh ahead cache that's a wrapper around a moka cache. This cache will need to update its keys every x minutes, usually by calling an API. I don't want there to be multiple requests to update a key; instead, I want the requests to be coalesced into one.
From moka::future::OwnedKeySelector::or_insert_with:
However, this only runs when the key doesn't exist. I want to update its value. To do this, I'd use moka::future::OwnedKeySelector::and_try_compute_with. But it says,
I'm looking for a function that has the same behavior as and_try_compute_with but the same concurrency as or_insert_with. Did I miss something when reading the docs, or is there not a way to do this?

Right now, I'm simply removing the key before running the closure, but this resets the TTL on the value. I don't want to do that. Also, one question about this approach: can multiple calls to remove at the same time return copies of the value, or will only one call get the value and all others get None back?