Add mesh VPN support to the CDH #763

Open

portersrc wants to merge 1 commit into main from feature/encrypted-mesh

Conversation

@portersrc (Member) commented Oct 23, 2024

RFC issue is here

This PR is meant to hold the main guest functionality for overlay network support with Nebula. The main additions are in confidential-data-hub/overlay-network.

Related items not included in this PR:

  • trustee plugin and nebula support: Add nebula_ca plugin (trustee#539)
  • possible kata-containers support (no PR opened for this yet; may depend on initdata)
  • packaging (e.g. how to build and include nebula itself)
  • integration testing

@portersrc force-pushed the feature/encrypted-mesh branch 3 times, most recently from 00e1f81 to 6fb1d22 on October 28, 2024 at 19:59
@portersrc force-pushed the feature/encrypted-mesh branch 9 times, most recently from 60d552e to 8cdf322 on November 11, 2024 at 14:25
@portersrc force-pushed the feature/encrypted-mesh branch 3 times, most recently from 8947bea to 8062c00 on November 18, 2024 at 14:44
@portersrc marked this pull request as ready for review on November 18, 2024 at 15:41
@portersrc requested a review from a team as a code owner on November 18, 2024 at 15:41

@fitzthum (Member) left a comment

A few comments on the first half of the PR

#[arg(short, long)]
pod_name: String,
#[arg(short, long)]
lighthouse_pub_ip: String,

I think this could maybe come from the CDH config file (with init-data coming soon) rather than be passed from the caller.
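
A minimal sketch of that suggestion, assuming a dedicated overlay-network section in the CDH config file; the struct and field names below are placeholders, not the actual CDH config schema:

    // Hypothetical overlay-network section of the CDH config file; names are
    // placeholders, but the fields mirror the two CLI args above.
    use serde::Deserialize;

    #[derive(Deserialize)]
    pub struct OverlayNetworkConfig {
        pub pod_name: String,
        pub lighthouse_pub_ip: String,
    }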

overlay_network::init(pod_name, lighthouse_pub_ip).await?;
// FIXME remove return value for this interface if we don't need
// anything here.
Ok(Vec::<u8>::new())

Agree with this comment. Just return a Result here, probably.
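
A sketch of what that would look like, assuming anyhow's Result and the overlay_network::init call from this PR; the handler name is a placeholder:

    // Placeholder handler name; the point is just dropping the unused
    // Vec<u8> return value in favor of a plain Result<()>.
    pub async fn handle_overlay_network(pod_name: String, lighthouse_pub_ip: String) -> anyhow::Result<()> {
        overlay_network::init(pod_name, lighthouse_pub_ip).await?;
        Ok(())
    }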

}

// FIXME This should be a shared struct, if possible, with trustee's nebula
// plugin. It's that plugin's custom protocol.

Not really possible. Even our attesters have structs that need to match the verifiers in the AS


// FIXME These should be configurable
const LIGHTHOUSE_IP: Ipv4Addr = Ipv4Addr::new(192, 168, 100, 100);
const LIGHTHOUSE_MASK: Ipv4Addr = Ipv4Addr::new(255, 255, 255, 0);

Yeah, try to get these from the CDH config file. Otherwise we have sort of a single-use feature.
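
One possible shape for that, keeping the current constants as defaults; the struct and field names are assumptions, not the existing CDH config format:

    // Assumed config section; names are placeholders. serde can deserialize
    // Ipv4Addr directly, so the hard-coded values become defaults.
    use std::net::Ipv4Addr;
    use serde::Deserialize;

    #[derive(Deserialize)]
    pub struct NebulaConfig {
        #[serde(default = "default_lighthouse_mesh_ip")]
        pub lighthouse_mesh_ip: Ipv4Addr,
        #[serde(default = "default_lighthouse_mesh_mask")]
        pub lighthouse_mesh_mask: Ipv4Addr,
    }

    fn default_lighthouse_mesh_ip() -> Ipv4Addr {
        Ipv4Addr::new(192, 168, 100, 100)
    }

    fn default_lighthouse_mesh_mask() -> Ipv4Addr {
        Ipv4Addr::new(255, 255, 255, 0)
    }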

/// Initialize a nebula mesh. The general approach is as follows:
/// - Calculate what the mesh IP will be for this worker.
/// - Ask trustee for its nebula credentials
/// - Start the nebula daemon.

nit: put comment before function name

}

// FIXME: kbs hard-coded to localhost is wrong. This should be based on
// ResourceUri? Where does it come from?

config file
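
i.e., the KBS/trustee address would come from the CDH config file as well rather than being hard-coded to localhost. A placeholder sketch (the struct and field names are made up for illustration):

    use serde::Deserialize;

    // Hypothetical config section holding the trustee address; the field
    // name is a placeholder, not the real CDH schema.
    #[derive(Deserialize)]
    pub struct TrusteeConfig {
        pub kbs_host_url: String,
    }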

/// - Ask trustee for its nebula credentials
/// - Start the nebula daemon.
pub async fn init(&self) -> Result<()> {
let is_lighthouse: bool = self.lighthouse_ip.is_empty();

so we're going to use one of the pods as the lighthouse?

pub async fn init(&self) -> Result<()> {
let is_lighthouse: bool = self.lighthouse_ip.is_empty();

let (mesh_ip, which_config);

To be more Rust-like, you could do let (mesh_ip, which_config) = if is_lighthouse { ... } and then have each arm return a tuple.
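
For example, something along these lines; the values in each arm (and the worker_mesh_ip helper) are placeholders for whatever init() currently computes per branch:

    // Expression-style binding instead of assigning inside each branch.
    // LIGHTHOUSE_CONFIG, NODE_CONFIG, and self.worker_mesh_ip() are
    // placeholders, not names from the PR.
    let (mesh_ip, which_config) = if is_lighthouse {
        (LIGHTHOUSE_IP, LIGHTHOUSE_CONFIG)
    } else {
        (self.worker_mesh_ip()?, NODE_CONFIG)
    };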

// FIXME: kbs hard-coded to localhost is wrong. This should be based on
// ResourceUri? Where does it come from?
let prefix_len: u32 = self.netmask_to_prefix_len(LIGHTHOUSE_MASK);
let neb_cred_uri: String = format!(

would it make sense to move this through 85ish into its own method?
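
Roughly, something like the following; the function name and the URI layout here are made up for illustration, not what the PR actually builds (in practice this would likely be a method on the same struct):

    use std::net::Ipv4Addr;

    // Hypothetical helper: keep the credential-URI construction in one place
    // instead of inline in init(). Name and URI shape are placeholders.
    fn nebula_credential_uri(mesh_ip: Ipv4Addr, prefix_len: u32) -> String {
        format!("kbs:///nebula-ca/credential/{mesh_ip}/{prefix_len}")
    }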


It could be cool to make these configs more object-oriented: rather than just storing a string config, you could have a struct that stores a string and knows how to write itself to a certain place and such. Not a requirement, but could be nifty.
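
A sketch of that idea, assuming anyhow for errors; the type and field names are made up for illustration:

    use std::path::PathBuf;
    use anyhow::Result;

    // Illustrative only: a config value that carries its own destination path
    // and knows how to persist itself, instead of a bare String being passed
    // around.
    pub struct NebulaConfigFile {
        contents: String,
        path: PathBuf,
    }

    impl NebulaConfigFile {
        pub fn write(&self) -> Result<()> {
            std::fs::write(&self.path, &self.contents)?;
            Ok(())
        }
    }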

Labels: None yet
Projects: None yet
2 participants