This repository has been archived by the owner on Jun 6, 2024. It is now read-only.

feat: introduce storage node configuration #69

Merged
xx01cyx merged 5 commits into main from yx/config on May 1, 2024

Conversation

xx01cyx (Member) commented May 1, 2024

We can run the storage node as follows:

cargo run -- --data-store memdisk --disk-cache-size 1000000 --mem-cache-size 1000 --cache-path "my-cache" --max-disk-reader-buffer-size 10000000

See config.rs for more configurable fields.
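
For context, here is a minimal sketch of what a clap-derived config covering these flags could look like. The struct name `ParpulseConfig`, the field types, and the defaults are assumptions inferred from the command above (and it assumes clap v4 with the derive feature); it is not the actual contents of config.rs.

```rust
// Hypothetical sketch only: names, types, and defaults are inferred from the
// CLI invocation above, not taken from the real config.rs.
use clap::Parser;

#[derive(Parser, Debug)]
pub struct ParpulseConfig {
    /// Which data-store backend to run, e.g. "memdisk".
    #[arg(long, default_value = "memdisk")]
    pub data_store: String,

    /// Capacity of the on-disk cache, in bytes.
    #[arg(long, default_value_t = 1_000_000)]
    pub disk_cache_size: usize,

    /// Capacity of the in-memory cache, in bytes.
    #[arg(long, default_value_t = 1_000)]
    pub mem_cache_size: usize,

    /// Directory where cached data is written.
    #[arg(long, default_value = "my-cache")]
    pub cache_path: String,

    /// Upper bound on the disk reader's buffer, in bytes.
    #[arg(long, default_value_t = 10_000_000)]
    pub max_disk_reader_buffer_size: usize,
}

fn main() {
    // Parses flags such as:
    // cargo run -- --data-store memdisk --disk-cache-size 1000000 ...
    let config = ParpulseConfig::parse();
    println!("{config:?}");
}
```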

@xx01cyx xx01cyx requested review from lanlou1554 and unw9527 May 1, 2024 14:18
codecov-commenter commented May 1, 2024

Codecov Report

Attention: Patch coverage is 46.52406%, with 100 lines in your changes missing coverage. Please review.

Project coverage is 85.01%. Comparing base (540db28) to head (b1595de).

Files                                  Patch %   Lines
storage-node/src/server.rs             30.15%    87 Missing and 1 partial ⚠️
storage-node/src/bin/storage_node.rs    0.00%    8 Missing ⚠️
storage-node/src/common/config.rs       0.00%    4 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main      #69      +/-   ##
==========================================
- Coverage   87.17%   85.01%   -2.17%     
==========================================
  Files          22       23       +1     
  Lines        3299     3437     +138     
  Branches     3299     3437     +138     
==========================================
+ Hits         2876     2922      +46     
- Misses        253      348      +95     
+ Partials      170      167       -3     


storage-node/src/bin/storage_node.rs (review thread, resolved)
@@ -105,12 +97,147 @@ pub async fn storage_node_serve(ip_addr: &str, port: u16) -> ParpulseResult<()>
let routes = route.or(heartbeat).or(catch_all);
let ip_addr: IpAddr = ip_addr.parse().unwrap();
warp::serve(routes).run((ip_addr, port)).await;
}

pub async fn storage_node_serve(
Member:
Do we have to explicitly match xxx to LRU or LRU-K for every type of data store...?

xx01cyx (Member Author):

The concrete types have to match when we initialize a variable; a single variable can't be typed as both LRU and LRU-K at the same time. So we have to spell out each case.

Member:

No, I mean: is it possible to modularize the repetitive code?
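
To make that ask concrete, here is a hypothetical sketch (not the PR's code) of how the repeated tail of each match arm could be moved into one generic helper. `DataStoreCache`, `LruCache`, `LruKCache`, and `serve_with_cache` are stand-in names for whatever the crate actually defines.

```rust
// Hypothetical sketch: stand-in types, not the actual parpulse items.
// The match arms still have to name distinct concrete cache types, but
// everything that follows the construction can be shared generically.

trait DataStoreCache {
    fn capacity(&self) -> usize;
}

struct LruCache {
    cap: usize,
}

struct LruKCache {
    cap: usize,
}

impl DataStoreCache for LruCache {
    fn capacity(&self) -> usize {
        self.cap
    }
}

impl DataStoreCache for LruKCache {
    fn capacity(&self) -> usize {
        self.cap
    }
}

// The shared tail of every arm: build the storage manager, start the server,
// and so on. Only the cache construction stays inside the match.
fn serve_with_cache<C: DataStoreCache>(cache: C) {
    println!("serving with a cache of capacity {}", cache.capacity());
}

fn main() {
    let policy = "lru-k"; // would come from the parsed config
    match policy {
        "lru" => serve_with_cache(LruCache { cap: 1000 }),
        _ => serve_with_cache(LruKCache { cap: 1000 }),
    }
}
```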

use bytes::Bytes;
use log::debug;
use parpulse_client::RequestParams;
use tokio::sync::mpsc::Receiver;

#[async_trait]
pub trait StorageManager: Send + Sync {
Member:
Why do we need to extract this out?

xx01cyx (Member Author):

To enable the route to accept a StorageManager<C> where C could be different cache implementations.
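
For illustration, a hypothetical sketch of that motivation (stand-in names such as `StorageManagerLike`, `get`, and `route`, not the PR's actual code; it assumes the async-trait and tokio crates, which already appear in the diff): once the behavior is behind a trait, a route can hold a trait object and work with any `StorageManager<C>` regardless of which cache type `C` is.

```rust
// Hypothetical sketch: stand-in names, not the actual parpulse code.
use std::sync::Arc;

use async_trait::async_trait;

#[async_trait]
trait StorageManagerLike: Send + Sync {
    async fn get(&self, key: String) -> Option<Vec<u8>>;
}

// The concrete manager is still generic over its cache type...
struct StorageManager<C> {
    cache: C,
}

// ...but every instantiation implements the same object-safe trait.
#[async_trait]
impl<C: Send + Sync> StorageManagerLike for StorageManager<C> {
    async fn get(&self, _key: String) -> Option<Vec<u8>> {
        let _ = &self.cache; // real code would consult the cache here
        None
    }
}

// The route only sees the trait object, so `C` never appears in its signature.
async fn route(manager: Arc<dyn StorageManagerLike>, key: String) -> Option<Vec<u8>> {
    manager.get(key).await
}

#[tokio::main]
async fn main() {
    // `()` stands in for any concrete cache type (LRU, LRU-K, ...).
    let manager: Arc<dyn StorageManagerLike> = Arc::new(StorageManager { cache: () });
    let _ = route(manager, "some-key".to_string()).await;
}
```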

@lanlou1554 lanlou1554 (Collaborator) left a comment:

Do we need to also make MAX_DISK_READER_BUFFER_SIZE configurable?
Also, for the client parameters, I think we need to sync with cache2 team!
Others LGTM!

@xx01cyx xx01cyx requested a review from lanlou1554 May 1, 2024 15:51
@xx01cyx xx01cyx merged commit b85f491 into main May 1, 2024
1 check passed
@xx01cyx xx01cyx deleted the yx/config branch May 1, 2024 16:25