This repository has been archived by the owner on Dec 10, 2024. It is now read-only.

Reference Architecture 22.08

New Components/Features:
- Inclusion of the Cloud RA in the distribution
- Inclusion of OpenTelemetry
- Inclusion of Jaeger
- Inclusion of Linkerd Service Mesh (version 2.12.0)
- Inclusion of standalone cAdvisor
- Inclusion of Scalable I/O Virtualization (SIOV) for 4th Gen Intel® Xeon® Scalable processor
- Inclusion of Intel® Data Streaming Accelerator (Intel® DSA) for 4th Gen Intel® Xeon® Scalable processor
- Inclusion of Intel® Dynamic Load Balancer (Intel® DLB) for 4th Gen Intel® Xeon® Scalable processor
- Inclusion of 5G Core support in the Regional Data Center Configuration Profile
- Inclusion of post-deployment hook for additional automation
- Scale up/down cluster nodes after initial deployment
- Support for Load Balancing on additional interfaces when using Multus CNI
- Support for upgrade/downgrade of network adapter drivers post-deployment
- Support for binding QAT to a new Virtual Function (VF)
- Support for 3rd Gen Intel® Xeon® Scalable processor platforms for FlexRAN
- Support for xRAN Test Mode for FlexRAN
- Support for RHEL 8.6 Realtime as base operating system for FlexRAN
- Support for Rocky Linux 9.0
- Support for RHEL 9.0
- Tech Preview: Support for Application Device Queues (ADQ)

New Platforms/CPUs:
- Intel Coyote Pass with 8360Y 3rd Gen Intel® Xeon® Scalable processor CPUs
- Intel Fox Creek Pass with XCC E3-QS 4th Gen Intel® Xeon® Scalable processor CPUs
- Intel Ruby Pass platform
- 4th Gen Intel® Xeon® Scalable processor CPU SKUs: 8470N, 8471N, 8490H

Updates/Changes:
- Updated Ansible to 5.7.1 and ansible-core to 2.12.5
- Updated Kubernetes to 1.24.3 (min supported 1.22)
- Updated Key Management Reference Application (KMRA) support to 2.2.1
- Updated FlexRAN support to 22.07
- Updated Traffic Analytics Development Kit (TADK) Docker image to 22.3
- Updated Intel device plugins (DPs) to release-0.24
- Updated NGINX image to 1.23.1
- Updated Vector Packet Processing (VPP) to version 22.10
- Updated Trusted Attestation Controller (TAC) to version 0.2.0
- Updated Trusted Certificate Issuer (TCS) to version 0.2.0
- Updated Data Plane Development Kit (DPDK) version to 22.07
- Updated Platform Aware Scheduling (PAS) version to 0.8
- Updated Grafana to 8.5.11
- Updated Prometheus to 2.37.1
- Updated Intel® Ethernet firmware and drivers
- Updated Intel® QuickAssist Technology (Intel® QAT) drivers
- Replaced Barometer Collectd with Containerized Collectd
- Enhanced automatic CPU pinning and isolation for Virtual Machine Reference System Architecture (VMRA)
- VM Cluster expansion with new nodes and/or hosts in VMRA

Removed Support:
- RHEL and Rocky Linux 8 series as base operating systems

Known Limitations/Restrictions:
- Intel® Software Guard Extensions (Intel® SGX) and KMRA are NOT supported on 4th Gen Intel Xeon Scalable processors. These features are automatically disabled on all operating systems
- 4th Gen Intel Xeon Scalable processor Intel DSA and Intel DLB features are NOT supported on Ubuntu 20.04
- Enabling support of Intel QAT on 4th Gen Intel Xeon Scalable processor requires an NDA (see Access to NDA Software Components)
- CRI-O runtime is not supported on RHEL and Rocky Linux 9.0

Co-authored-by: Ali Shah, Syed Faraz <[email protected]>
Co-authored-by: Alshehab, Mustafa <[email protected]>
Co-authored-by: Fiala, Jiri <[email protected]>
Co-authored-by: Gherghe, Calin <[email protected]>
Co-authored-by: Jasiok, Lumir <[email protected]>
Co-authored-by: Kubin, Lukas <[email protected]>
Co-authored-by: Liu, Mark <[email protected]>
Co-authored-by: MacGillivray, Mac <[email protected]>
Co-authored-by: Mika, Dariusz <[email protected]>
Co-authored-by: Mlynek, Krystian <[email protected]>
Co-authored-by: Musial, Michal <[email protected]>
Co-authored-by: Park, Seungweon <[email protected]>
Co-authored-by: Pedersen, Michael <[email protected]>
Co-authored-by: Prokes, Jiri <[email protected]>
Co-authored-by: Puzikov, Dmitrii <[email protected]>
Co-authored-by: Wisniewski, Szymon <[email protected]>
16 people committed Oct 6, 2022
1 parent 9c916a9, commit 76e07c7
Showing 528 changed files with 20,040 additions and 4,038 deletions.
13 changes: 8 additions & 5 deletions README.md
@@ -13,8 +13,7 @@ The software provided here is for reference only and not intended for production
```

2. Decide which configuration profile you want to use and export the environment variable.
For the VM case, the PROFILE environment variable is mandatory.
> **_NOTE:_** For the non-VM case, it is used only to ease execution of the steps listed below.
- For **Kubernetes Basic Infrastructure** deployment:

```bash
...
```

@@ -118,12 +117,13 @@
For the VM case:
- update details relevant for the vm_host (e.g. dataplane_interfaces, ...)
- update the VMs definition in host_vars/host-for-vms-1.yml - use that template for the first vm_host
- update the VMs definition in host_vars/host-for-vms-2.yml - use that template for the second and all other vm_hosts
- update/create host_vars for all defined VMs (e.g. host_vars/vm-ctrl-1.yml and host_vars/vm-work-1.yml)
At a minimum, dataplane_interfaces must be provided (see the illustrative sketch after this list).
For more details see the [VM case configuration guide](docs/vm_config_guide.md)
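
As a rough, illustrative sketch only (not taken from this repository), a host_vars entry for a VM worker might look like the snippet below. Only `dataplane_interfaces` is named in this README; every other field name here is an assumption, so check docs/vm_config_guide.md and the host_vars templates generated for your profile for the authoritative schema.

```yaml
# Hypothetical host_vars/vm-work-1.yml fragment; field names other than
# dataplane_interfaces are assumptions, not the project's documented schema.
dataplane_interfaces:
  - bus_info: "18:00.0"         # PCI address of the interface (assumed key name)
    pf_driver: ice              # physical function driver (assumed key name)
    default_vf_driver: "iavf"   # driver for created VFs (assumed key name)
    sriov_numvfs: 4             # number of VFs to create (assumed key name)
```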
9. **Required:** Apply the bug fix patch for the Kubespray submodule (for RHEL 8+, or Rocky 9 if WireGuard is enabled).
```bash
ansible-playbook -i inventory.ini playbooks/k8s/patch_kubespray.yml
```

@@ -149,11 +149,14 @@ Refer to the documentation linked below to see configuration details for selected capabilities and deployment profiles:
- [SRIOV Network Device Plugin and SRIOV CNI plugin](docs/sriov.md)
- [MinIO Operator](docs/storage.md)
- [Adding and removing worker node(s)](docs/add_remove_nodes.md)
- [VM case configuration guide](docs/vm_config_guide.md)
- [VM multinode setup guide](docs/vm_multinode_setup_guide.md)
- [VM cluster expansion guide](docs/vm_cluster_expansion_guide.md)
## Prerequisites and Requirements
- Required packages on the target servers: **Python3**.
- Required packages on the ansible host (where ansible playbooks are run): **Python 3.8-3.10 and Pip3**.
- Required Python packages on the ansible host: **see requirements.txt**.
- SSH keys copied to all Kubernetes cluster nodes (`ssh-copy-id <user>@<host>` command can be used for that).
7 changes: 4 additions & 3 deletions ansible.cfg
@@ -1,5 +1,5 @@
[ssh_connection]
pipelining = True
ssh_args = -o ServerAliveInterval=60 -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null

[defaults]
@@ -8,10 +8,11 @@ display_skipped_hosts = no
host_key_checking = False
gathering = smart
stdout_callback = debug
callbacks_enabled = timer, profile_tasks, profile_roles

fact_caching = jsonfile
fact_caching_connection = /tmp
fact_caching_timeout = 7200

action_plugins = ./action_plugins:~/.ansible/plugins/action:/usr/share/ansible/plugins/action
library = ./library
126 changes: 126 additions & 0 deletions cloud/README.md
@@ -0,0 +1,126 @@
# Cloud RA

## Prerequisites

- Python 3.8+
- AWS CLI 2+
- Terraform 1.2+
- Docker 20.10.17+
- `pip install -r requirements.txt`
- `aws configure`

## Managed Kubernetes deployment

### Automatic

Create a deployment directory with the `cwdf.yaml` (hardware) and `sw.yaml` (software) configuration files:

```commandline
mkdir deployment
vim cwdf.yaml
vim sw.yaml
```

Example `cwdf.yaml` file:
```yaml
cloudProvider: aws
awsConfig:
  profile: default
  region: eu-central-1
  vpc_cidr_block: "10.21.0.0/16"
  # These tags will be applied to all created resources
  extra_tags:
    Owner: "some_user"
    Project: "CWDF"
  subnets:
    - name: "subnet_a"
      az: eu-central-1a
      cidr_block: "10.21.1.0/24"
    - name: "subnet_b"
      az: eu-central-1b
      cidr_block: "10.21.2.0/24"
  sg_whitelist_cidr_blocks:
    - "0.0.0.0/0"
  eks:
    kubernetes_version: "1.22"
    # AWS EKS requires at least 2 subnets
    subnets: ["subnet_a", "subnet_b"]
    node_groups:
      - name: "default"
        instance_type: "t3.medium"
        vm_count: 3
```
Then create `sw.yaml` for the software configuration; see the [sw_deployment tool README](sw_deployment/README.md) for details.

Example `sw.yaml` file:
```yaml
cloud_settings:
  provider: aws
  region: eu-central-1
controller_ips:
  - 127.0.0.1
# exec_containers can be used to deploy additional containers or workloads.
# It defaults to an empty list, but can be changed as shown in the commented lines
exec_containers: []
#exec_containers:
#- ubuntu/kafka
git_tag: None
git_url: https://github.com/intel/container-experience-kits
github_personal_token: None
ra_config_file: data/node1.yaml
ra_ignore_assert_errors: true
ra_machine_architecture: skl
ra_profile: build_your_own
replicate_from_container_registry: https://registry.hub.docker.com
```

Then run `deployer.py deploy` and pass the deployment directory as an argument:
```commandline
python deployer.py deploy --deployment_dir=deployment
```

Along with the EKS cluster, an additional Ansible instance and an ECR container registry will be created.

The Ansible instance comes with the AWS CLI, Ansible, and kubectl pre-installed.
kubectl is also pre-configured and authorized against the created EKS cluster.
The default user on the Ansible instance is `ubuntu`. The `cwdf_deployment` folder in its home directory contains SSH keys for the EKS worker nodes and connection info for the worker nodes and the ECR registry.

After the deployment, discovery runs on each EKS worker node. The output is written to the `discovery_results` directory on the local machine where `deployer.py` is running and is then copied to the Ansible host's `cwdf_deployment` directory.
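
As a quick post-deployment check (illustrative only; the key path, user, and address below are placeholders rather than values printed by the tool), you can log in to the Ansible instance and confirm that kubectl reaches the cluster and that the connection info is in place:

```commandline
ssh -i <path-to-generated-private-key> ubuntu@<ansible-instance-public-ip>
kubectl get nodes -o wide
ls ~/cwdf_deployment
```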

To clean up the created resources:
```commandline
python deployer.py cleanup --deployment_dir=deployment
```

### Manual

Start by creating a directory for the deployment and generating an SSH key for the instances:
```commandline
mkdir deployment
mkdir deployment/ssh
ssh-keygen -f deployment/ssh/id_rsa
```

1. Create a `cwdf` hardware definition YAML file, e.g. `cwdf.yaml`:
```commandline
cp cwdf_example.yaml deployment/cwdf.yaml
```

2. Then generate the Terraform manifest using `cwdf.py`:
```commandline
python cwdf.py generate-terraform \
--cwdf_config=deployment/cwdf.yaml \
--ssh_public_key=deployment/ssh/id_rsa.pub \
--job_id=manual \
--create_ansible_host=True \
--create_container_registry=True \
> deployment/main.tf
```

3. Initialize Terraform and deploy resources in the deployment directory:
```commandline
terraform init
terraform apply
```
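
When the manually created resources are no longer needed, the standard Terraform teardown applies (this is the generic Terraform workflow, not a project-specific script); run it from the same deployment directory:

```commandline
terraform plan -destroy   # optional: preview what would be removed
terraform destroy
```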
31 changes: 31 additions & 0 deletions cloud/cwdf.py
@@ -0,0 +1,31 @@
import click
from cwdf import compose_terraform


@click.group()
def cli():
    pass


@click.command()
@click.option('--cwdf_config', help='Path to CWDF yaml config file', required=True)
@click.option('--ssh_public_key', help='Path to SSH public key', required=True)
@click.option('--job_id', help='Unique identifier that will be included in resource tags and names', default="manual")
@click.option('--create_ansible_host', help='Will include ansible host in the Terraform manifest', default=True)
@click.option('--create_container_registry', help='Will include managed container registry in the Terraform manifest', default=True)
def generate_terraform(cwdf_config, ssh_public_key, job_id, create_ansible_host, create_container_registry):
    with open(cwdf_config, 'r') as f:
        cwdf_config = f.read()

    with open(ssh_public_key, 'r') as f:
        ssh_public_key = f.read().strip()

    tf_manifest = compose_terraform(cwdf_config, job_id, ssh_public_key, create_ansible_host, create_container_registry)
    click.echo(tf_manifest)


cli.add_command(generate_terraform)


if __name__ == "__main__":
    cli()
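
For reference, the click CLI defined above can be exercised as sketched below; the paths simply reuse the example deployment directory from the Manual section, and omitted options fall back to their defaults (`--job_id=manual`, Ansible host and container registry enabled):

```commandline
python cwdf.py generate-terraform --help
python cwdf.py generate-terraform \
  --cwdf_config=deployment/cwdf.yaml \
  --ssh_public_key=deployment/ssh/id_rsa.pub \
  > deployment/main.tf
```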
1 change: 1 addition & 0 deletions cloud/cwdf/__init__.py
@@ -0,0 +1 @@
from .main import compose_terraform
36 changes: 36 additions & 0 deletions cloud/cwdf/config.py
@@ -0,0 +1,36 @@
from schema import Schema, Or, Optional


config_schema = Schema({
    "cloudProvider": Or("aws"),
    Optional("awsConfig"): {
        Optional("region", default='eu-central-1'): str,
        Optional("profile", default='default'): str,
        Optional("vpc_cidr_block", default='10.0.0.0/16'): str,
        Optional("sg_whitelist_cidr_blocks", default=['0.0.0.0/0']): [str],
        Optional("extra_tags", default={}): {str: str},
        "subnets": [{
            "name": str,
            "cidr_block": str,
            "az": str
        }],
        Optional("instance_profiles"): [{
            "name": str,
            Optional("instance_type", default='t3.medium'): str,
            "ami_id": str,
            "subnet": str,
            Optional("vm_count", default=1): int,
            Optional("root_volume_size", default=16): int,
            Optional("root_volume_type", default='gp2'): str
        }],
        Optional("eks"): {
            Optional("kubernetes_version", default='1.22'): str,
            "subnets": [str],
            "node_groups": [{
                "name": str,
                Optional("instance_type", default='t3.medium'): str,
                Optional("vm_count", default=1): int
            }]
        }
    },
})
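
As a small sketch of how this schema behaves (not repository code; it assumes the `cloud/` directory is the working directory so that `cwdf.config` is importable), validating a minimal document fills in the declared defaults:

```python
import yaml
from schema import SchemaError

from cwdf.config import config_schema  # assumes cloud/ is the working directory

minimal = """
cloudProvider: aws
awsConfig:
  subnets:
    - name: subnet_a
      cidr_block: 10.21.1.0/24
      az: eu-central-1a
"""

try:
    cfg = config_schema.validate(yaml.safe_load(minimal))
    # Optional keys absent from the input are filled with their declared defaults.
    print(cfg["awsConfig"]["region"])          # eu-central-1
    print(cfg["awsConfig"]["vpc_cidr_block"])  # 10.0.0.0/16
except SchemaError as err:
    print(f"Invalid cwdf config: {err}")
```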
95 changes: 95 additions & 0 deletions cloud/cwdf/main.py
@@ -0,0 +1,95 @@
from .config import config_schema
from schema import SchemaError
import yaml
from jinja2 import Template
from os import path
import json


def verify_cwdf_config(config):
    # Verify config file has correct schema
    configuration = yaml.safe_load(config)
    try:
        pop = config_schema.validate(configuration)
        return pop
    except SchemaError as se:
        raise se


def compose_terraform(
        config, job_id, ssh_public_key,
        create_ansible_instance=True,
        create_container_registry=True):
    cwdf_configuration = verify_cwdf_config(config)
    aws_config = cwdf_configuration['awsConfig']

    extra_tags_json = json.dumps(aws_config["extra_tags"])
    aws_config["extra_tags_json"] = extra_tags_json.replace('"', '\\"')

    aws_config['job_id'] = job_id
    aws_config['ssh_pub_key'] = ssh_public_key

    aws_config["will_create_ansible_instance"] = create_ansible_instance
    aws_config["will_create_container_registry"] = create_container_registry

    tf_manifest = ""

    provider_template_path = path.join(
        path.dirname(__file__),
        'templates/terraform/aws/provider.tf.jinja')
    with open(provider_template_path, 'r') as f:
        provider_template = Template(f.read())
        tf_manifest += "### Provider ###\n"
        tf_manifest += provider_template.render(aws_config)
        tf_manifest += "### End of Provider ###\n\n"

    common_template_path = path.join(
        path.dirname(__file__),
        'templates/terraform/aws/common.tf.jinja')
    with open(common_template_path, 'r') as f:
        common_template = Template(f.read())
        tf_manifest += "### Common ###\n"
        tf_manifest += common_template.render(aws_config)
        tf_manifest += "### End of Common ###\n\n"

    if "instance_profiles" in aws_config:
        compute_template_path = path.join(
            path.dirname(__file__),
            'templates/terraform/aws/compute.tf.jinja')
        with open(compute_template_path, 'r') as f:
            compute_template = Template(f.read())
            tf_manifest += "### Bare Metal Compute ###\n"
            tf_manifest += compute_template.render(aws_config)
            tf_manifest += "### End of Bare Metal Compute ###\n\n"

    if "eks" in aws_config:
        eks_template_path = path.join(
            path.dirname(__file__),
            'templates/terraform/aws/eks.tf.jinja')
        with open(eks_template_path, 'r') as f:
            eks_template = Template(f.read())
            tf_manifest += "### Elastic Kubernetes Service ###\n"
            tf_manifest += eks_template.render(aws_config)
            tf_manifest += "### End of Elastic Kubernetes Service ###\n\n"

    if create_ansible_instance:
        ansible_host_template_path = path.join(
            path.dirname(__file__),
            'templates/terraform/aws/ansible_host.tf.jinja')
        with open(ansible_host_template_path, 'r') as f:
            ansible_host_template = Template(f.read())
            tf_manifest += "### Ansible Host ###\n"
            tf_manifest += ansible_host_template.render(aws_config)
            tf_manifest += "### End of Ansible Host ###\n\n"

    if create_container_registry:
        ecr_template_path = path.join(
            path.dirname(__file__),
            'templates/terraform/aws/ecr.tf.jinja')
        with open(ecr_template_path, 'r') as f:
            ecr_template = Template(f.read())
            tf_manifest += "### Elastic Container Registry ###\n"
            tf_manifest += ecr_template.render(aws_config)
            tf_manifest += "### End of Elastic Container Registry ###\n\n"

    return tf_manifest
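
For completeness, the same manifest that the `cwdf.py` CLI prints can be produced programmatically; this is a sketch that reuses the paths from the Manual example and assumes the `cloud/` directory is on `PYTHONPATH`:

```python
from cwdf import compose_terraform

with open("deployment/cwdf.yaml") as cfg_file, \
        open("deployment/ssh/id_rsa.pub") as key_file:
    manifest = compose_terraform(
        config=cfg_file.read(),
        job_id="manual",
        ssh_public_key=key_file.read().strip(),
        create_ansible_instance=True,
        create_container_registry=True,
    )

# Write the rendered Terraform manifest next to the deployment config.
with open("deployment/main.tf", "w") as out:
    out.write(manifest)
```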