SPDX-License-Identifier: Apache-2.0
Copyright (c) 2021 Intel Corporation
- Overview
- Intel Ethernet Operator
- Deploying the Operator
- Installing Go
- Installing Operator SDK
- Applying custom resources
- Webserver for disconnected environment
- Certificate validation
- Updating DDP
- Deploying Flow Configuration Agent
- Creating Trusted VF using SRIOV Network Operator
- Check node status
- Create DCF capable SRIOV Network
- Build UFT image
- Creating FlowConfig Node Agent Deployment CR
- Verifying that FlowConfig Daemon is running on available nodes
- Creating Flow Configuration rules with Intel Ethernet Operator
- Creating Flow Configuration rules with Intel Ethernet Operator (NodeFlowConfig)
- Update a sample Node Flow Configuration rule
- Hardware Validation Environment
- Summary
This document provides instructions for using the Intel Ethernet Operator on supported Kubernetes clusters (Vanilla K8s installed with Intel Container Experience Kits (CEK) or Red Hat's OpenShift Container Platform). This operator was developed with the aid of the Operator SDK project.
Note: This operator is not ready for production environments.
The role of the Intel Ethernet Operator is to orchestrate and manage the configuration of the capabilities exposed by Intel E810 Series network interface cards (NICs). The operator is a state machine that configures certain functions of the card, then monitors their status and acts autonomously based on user interaction. The operator supports the following E810 Series cards:
- Intel® Ethernet Network Adapter E810-CQDA1/CQDA2
- Intel® Ethernet Network Adapter E810-XXVDA4
- Intel® Ethernet Network Adapter E810-XXVDA2
The Intel Ethernet Operator provides functionality for:
- Update of the devices' FW (Firmware) via the NVM Update tool.
- Update of the devices' DDP (Dynamic Device Personalization) profile.
- Flow configuration of traffic handling for the devices, based on the supported DDP profile.
Upon deployment, the operator provides the APIs, controllers, and daemons responsible for management and execution of the supported features. A number of dependencies (the ICE driver, the SRIOV Network Operator, NFD) must be fulfilled before the deployment of this operator; these dependencies are listed in the prerequisites section. The user interacts with the operator by providing CRs (CustomResources). The operator constantly monitors the state of the CRs to detect any changes and acts based on the changes detected. Separate CRs are provided for the FW/DDP update functionality and the Flow Configuration functionality. Once a CR is applied or updated, the operator/daemon checks whether the configuration is already applied, and if it is not, it applies the configuration.
The controller manager pod is the first pod of the operator. It is responsible for deploying the other assets, exposing the APIs, handling the CRs, and running the validation webhook. It contains the logic for accepting the FW/DDP CRs, splitting them into node CRs, and reconciling the status of each CR.
The validation webhook of the controller manager is responsible for checking each CR for invalid arguments.
The CLV-discovery pod is a DaemonSet deployed on each worker node in the cluster. Its responsibility is to check whether supported hardware is present on the platform and to label the node accordingly.
To get all the nodes containing the supported devices run:
kubectl get EthernetNodeConfig -A
NAMESPACE NAME UPDATE
intel-ethernet-operator worker-1 InProgress
intel-ethernet-operator worker-2 InProgress
To get the list of supported devices that the discovery pod looks for, run:
kubectl describe configmap supported-clv-devices -n intel-ethernet-operator
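To see the label that the discovery pod applies to a matching node, the node labels can be listed. The exact label key may vary between operator versions, so the grep below only narrows the output:
kubectl get node <nodename> --show-labels | tr ',' '\n' | grep -i ethernet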
The FW/DDP daemon pod is a DaemonSet deployed as part of the operator. It runs on each node carrying the label that indicates a supported E810 Series NIC was detected on the platform. It is a reconcile loop that monitors changes in each node's EthernetNodeConfig
and acts on them. The logic implemented in this daemon takes care of updating the NIC firmware and DDP profile. It is also responsible for draining the nodes, taking them out of commission, and rebooting them when required by the update.
Once the operator/daemon detects a change to a CR related to the update of the Intel® E810 NIC firmware, it tries to perform an update. The firmware for the Intel® E810 NICs is expected to be provided by the user in the form of a tar.gz
file. The user is also responsible for verifying that the firmware version is compatible with the device. The user is required to place the firmware on an accessible HTTP server and provide its URL in the CR. If the file is provided correctly and the firmware needs to be updated, the Ethernet Configuration Daemon will update the Intel® E810 NICs using the provided NVM utility.
To update the NVM firmware of the Intel® E810 NICs, the user must create a CR containing the information about which card should be programmed. The Physical Functions of the NICs will be updated in logical pairs. The user needs to provide the FW URL and checksum (SHA-1) in the CR.
For a sample CR go to Updating Firmware.
Once the operator/daemon detects a change to a CR related to the update of the Intel® E810 DDP profile, it tries to perform an update. The DDP profile for the Intel® E810 NICs is expected to be provided by the user, who is also responsible for verifying that the DDP version is compatible with the device. The user is required to place the DDP package on an accessible HTTP server and provide its URL in the CR. If the file is provided correctly and the DDP needs to be updated, the Ethernet Configuration Daemon will update the DDP profile of the Intel® E810 NICs by placing it in the correct location on the host filesystem.
To update the DDP profile of an Intel® E810 NIC, the user must create a CR containing the information about which card should be programmed. All Physical Functions of each selected NIC will be updated.
For a sample CR go to Updating DDP.
The Flow Configuration pod is a DaemonSet deployed through the FlowConfigNodeAgentDeployment
CRD provided by the Ethernet operator once the operator is up and running and once the required DCF VF pools and their network attachment definitions
have been created with the SRIOV Network Operator APIs. It is deployed on each node that exposes a DCF VF pool as an extended node resource. It is a reconcile loop that monitors changes in each node's CR and acts on them. The logic implemented in this daemon takes care of updating the NIC traffic flow configuration. It consists of two containers: the Flow Config controller container and the UFT container.
The Node Flow Configuration Controller watches for flow rule changes via a node-specific CRD, NodeFlowConfig,
named the same as the node. Once the operator/daemon detects a change to this CR related to the Intel® E810 Flow Configuration, it tries to create/delete rules via UFT over an internal gRPC API call.
Once a Flow Config change is required, the Flow Config Controller communicates with the UFT container running a DPDK DCF application. The UFT application accepts the configuration as input and programs the device using a trusted VF created for this device (it is the responsibility of the user to provide the trusted VFs as an allocatable K8s resource - see the prerequisites section).
The Intel Ethernet Operator has a number of prerequisites that must be met for the Operator to be fully functional.
In order for the Flow Configuration feature to configure the NICs' traffic flow, the configuration must happen through a trusted Virtual Function (VF) from each Physical Function (PF) in the NIC. Usually it is VF0 of a PF that has trust mode set to on
and is bound to the vfio-pci
driver. This VF pool needs to be created by the user and be allocatable as a K8s resource. This VF pool will be used exclusively by the UFT container and not by any application container.
For user applications, additional VF pools should be created separately as needed.
One way of creating and providing the trusted VFs and application VFs is to configure them through the SRIOV Network Operator. In OCP environments, the SRIOV Network Operator is deployed automatically as a dependency of the Intel Ethernet Operator. The configuration and creation of the trusted VFs and application VFs is out of scope of this Operator and is the user's responsibility.
Install node feature discovery using the following command:
kubectl apply -k https://github.com/kubernetes-sigs/node-feature-discovery/deployment/overlays/default?ref=v0.11.1
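To confirm that NFD is running before continuing, list its pods. The default overlay referenced above deploys into the node-feature-discovery namespace; adjust the namespace if a different overlay is used:
kubectl get pods -n node-feature-discovery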
Building the operator bundle images will require Go and Operator SDK to be installed.
You can install Go by following the steps here.
Note: The Intel Ethernet Operator is not compatible with Go versions below 1.19.
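As an example, a minimal Go installation from the official tarball might look like the following; the exact version below is a placeholder, and any Go release of 1.19 or newer will work:
curl -LO https://go.dev/dl/go1.19.13.linux-amd64.tar.gz
sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.19.13.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
go version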
Please install Operator SDK v1.25.0 following the steps below:
export ARCH=$(case $(uname -m) in x86_64) echo -n amd64 ;; aarch64) echo -n arm64 ;; *) echo -n $(uname -m) ;; esac)
export OS=$(uname | awk '{print tolower($0)}')
export SDK_VERSION=v1.25.0
export OPERATOR_SDK_DL_URL=https://github.com/operator-framework/operator-sdk/releases/download/${SDK_VERSION}
curl -LO ${OPERATOR_SDK_DL_URL}/operator-sdk_${OS}_${ARCH}
chmod +x operator-sdk_${OS}_${ARCH} && sudo mv operator-sdk_${OS}_${ARCH} /usr/local/bin/operator-sdk
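You can verify the installation afterwards:
operator-sdk version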
Depending on the target cluster, please follow one of the deployment steps from the list below.
Once the operator is successfully deployed, the user interacts with it by creating CRs which will be interpreted by the operator.
Note: The example code below uses kubectl
as the client binary. You can substitute kubectl
with oc
if you are operating in an OCP cluster.
If the cluster is running in a disconnected environment, the user has to create a local cache (e.g. a webserver) that will serve the required files. The cache should be created on a machine with access to the Internet. Start by creating a dedicated folder for the webserver.
mkdir webserver
cd webserver
Create an nginx Dockerfile:
echo "
FROM nginx
COPY files /usr/share/nginx/html
" >> Dockerfile
Create a files folder:
mkdir files
cd files
Download the required packages into the files directory:
curl -OjL https://downloadmirror.intel.com/709692/E810_NVMUpdatePackage_v3_10_Linux.tar.gz
Build the image with the packages:
cd ..
podman build -t webserver:1.0.0 .
Push the image to a registry that is available in the disconnected environment (or copy the image to the machine via a USB flash drive by using the podman save
and podman load
commands):
podman push localhost/webserver:1.0.0 $IMAGE_REGISTRY/webserver:1.0.0
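If no registry is reachable from the disconnected environment, the image can instead be transferred as a file using podman save and podman load, for example:
podman save -o webserver.tar localhost/webserver:1.0.0
# copy webserver.tar to the target machine, then:
podman load -i webserver.tar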
Create a Deployment on the cluster that will expose the packages:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ice-cache
  namespace: default
spec:
  selector:
    matchLabels:
      run: ice-cache
  replicas: 1
  template:
    metadata:
      labels:
        run: ice-cache
    spec:
      containers:
        - name: ice-cache
          image: $IMAGE_REGISTRY/webserver:1.0.0
          ports:
            - containerPort: 80
And a Service to make it accessible within the cluster:
apiVersion: v1
kind: Service
metadata:
  name: ice-cache
  namespace: default
  labels:
    run: ice-cache
spec:
  ports:
    - port: 80
      protocol: TCP
  selector:
    run: ice-cache
After that, the package will be available in the cluster under the following path:
http://ice-cache.default.svc.cluster.local/E810_NVMUpdatePackage_v3_10_Linux.tar.gz
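The URL can be verified from inside the cluster, for example with a temporary curl pod (the curlimages/curl image is just one convenient option):
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -sI http://ice-cache.default.svc.cluster.local/E810_NVMUpdatePackage_v3_10_Linux.tar.gz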
To update the FW or DDP on a CLV card, you have to download the corresponding packages. For security reasons, you might want to validate the certificate exposed by the server before downloading; this optional step describes how to add a trusted certificate.
These steps should be done before the operator is installed. If the operator is already installed, you can apply the new configuration by manually restarting the fwddp-daemon
pods.
Prepare a trusted X509 certificate certificate.der
that will be added to the trusted store. It can be either a root CA certificate or an intermediate certificate. It must contain a Subject Alternative Name
or IPAddresses
entry matching the host from which the packages will be downloaded.
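The Subject Alternative Name entries of the prepared certificate can be inspected with openssl before creating the secret, for example:
openssl x509 -in certificate.der -inform der -noout -text | grep -A1 "Subject Alternative Name"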
kubectl create secret generic tls-cert --from-file=tls.crt=certificate.der -n <namespace>
Restart the fwddp-daemon
pods or install the operator.
Check that the certificate is correctly loaded in the fwddp-daemon
pods:
kubectl logs -n <namespace> pod/fwddp-daemon|grep -i "found certificate - using HTTPS client"
{"level":"info","logger":"daemon","msg": "found certificate - using HTTPS client"}
To find the NIC devices belonging to the Intel® E810 Series, run the following command; the device information of the NICs can be read from the output:
# kubectl get enc <nodename> -o jsonpath={.status}
To update the firmware of a supported device, follow these steps:
Note: The Physical Functions of the NICs will be updated in logical pairs. The user needs to provide the FW URL and checksum (SHA-1).
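The SHA-1 checksum can be computed on the machine that hosts the firmware package, for example (the filename matches the package downloaded earlier):
sha1sum E810_NVMUpdatePackage_v3_10_Linux.tar.gz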
Create a CR yaml file:
apiVersion: ethernet.intel.com/v1
kind: EthernetClusterConfig
metadata:
  name: config
  namespace: <namespace>
spec:
  nodeSelectors:
    kubernetes.io/hostname: <hostname>
  deviceSelector:
    pciAddress: "<pci-address>"
  deviceConfig:
    fwURL: "<URL_to_firmware>"
    fwChecksum: "<file_checksum_SHA-1_hash>"
The CR can be applied by running:
# kubectl apply -f <filename>
Once the NVM firmware update is complete, the following status is reported:
# kubectl get enc <nodename> -o jsonpath={.status.conditions}
[
{
"lastTransitionTime": "2021-12-17T15:25:32Z",
"message": "Updated successfully",
"observedGeneration": 3,
"reason": "Succeeded",
"status": "True",
"type": "Updated"
}
]
The user can observe the change of the NIC firmware:
# kubectl get enc <nodename> -o jsonpath={.status.devices[0].firmware}
{
"MAC": "40:a6:b7:67:1f:c0",
"version": "3.00 0x80008271 1.2992.0"
}
If fwURL
points to an external location, you might need to configure a proxy on the cluster. You can configure it by using the OCP cluster-wide proxy
or by setting the HTTP_PROXY, HTTPS_PROXY and NO_PROXY environment variables in the operator's subscription.
Be aware that the operator ignores lowercase http_proxy
variables and accepts only uppercase variables.
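A minimal sketch of setting these variables through the operator's Subscription follows; the Subscription name, namespace, and proxy address below are placeholders and should be adapted to your deployment:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: intel-ethernet-subscription
  namespace: intel-ethernet-operator
spec:
  # ... channel/source fields as used in your deployment ...
  config:
    env:
      - name: HTTP_PROXY
        value: "http://proxy.example.com:3128"
      - name: HTTPS_PROXY
        value: "http://proxy.example.com:3128"
      - name: NO_PROXY
        value: ".cluster.local,.svc"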
To update the DDP profile of a supported device, follow these steps:
Create a CR yaml file:
apiVersion: ethernet.intel.com/v1
kind: EthernetClusterConfig
metadata:
  name: <name>
  namespace: <namespace>
spec:
  nodeSelectors:
    kubernetes.io/hostname: <hostname>
  deviceSelector:
    pciAddress: "<pci-address>"
  deviceConfig:
    ddpURL: "<URL_to_DDP>"
    ddpChecksum: "<file_checksum_SHA-1_hash>"
The CR can be applied by running:
# kubectl apply -f <filename>
Once the DDP profile update is complete, the following status is reported:
# kubectl get enc <nodename> -o jsonpath={.status.conditions}
[
{
"lastTransitionTime": "2021-12-17T15:25:32Z",
"message": "Updated successfully",
"observedGeneration": 3,
"reason": "Succeeded",
"status": "True",
"type": "Updated"
}
]
The user can observe the change of the NIC DDP profile:
# kubectl get enc <nodename> -o jsonpath={.status.devices[0].DDP}|jq
{
"packageName": "ICE COMMS Package",
"trackId": "0xc0000002",
"version": "1.3.30.0"
}
The Flow Configuration Agent Pod runs the Unified Flow Tool (UFT) to configure Flow rules for a PF. UFT requires that trust mode is enabled for the first VF (VF0) of a PF so that it has the capability of creating/modifying flow rules for that PF. This VF also needs to be bound to the vfio-pci
driver. The SRIOV VF pools are K8s extended resources that are exposed via the SRIOV Network Operator.
The VF pool consists of VF0 from all available Intel E810 Series NIC PFs, which, in this context, we call the Admin VF pool. The Admin VF pool is associated with a NetworkAttachmentDefinition that sets these VFs' trust mode to 'on'. The SRIOV Network Operator can be used to create the Admin VF pool and the NetworkAttachmentDefinition needed by UFT. You can find more information on creating VF pools with the SRIOV Network Operator here and on creating a NetworkAttachmentDefinition here.
The following steps will guide you through creating the Admin VF pool and the NetworkAttachmentDefinition needed by the Flow Configuration Agent Pod.
Once the SRIOV Network Operator is up and running, we can examine the SriovNetworkNodeStates
to view the available Intel E810 Series NICs as shown below:
# kubectl get sriovnetworknodestates -n intel-ethernet-operator
NAME AGE
worker-01 1d
# kubectl describe sriovnetworknodestates worker-01 -n intel-ethernet-operator
Name: worker-01
Namespace: intel-ethernet-operator
Labels: <none>
Annotations: <none>
API Version: sriovnetwork.openshift.io/v1
Kind: SriovNetworkNodeState
Metadata:
Spec:
Dp Config Version: 42872603
Status:
Interfaces:
Device ID: 165f
Driver: tg3
Link Speed: 100 Mb/s
Link Type: ETH
Mac: b0:7b:25:de:3f:be
Mtu: 1500
Name: eno8303
Pci Address: 0000:04:00.0
Vendor: 14e4
Device ID: 165f
Driver: tg3
Link Speed: -1 Mb/s
Link Type: ETH
Mac: b0:7b:25:de:3f:bf
Mtu: 1500
Name: eno8403
Pci Address: 0000:04:00.1
Vendor: 14e4
Device ID: 159b
Driver: ice
Link Speed: -1 Mb/s
Link Type: ETH
Mac: b4:96:91:cd:de:38
Mtu: 1500
Name: eno12399
Pci Address: 0000:31:00.0
Vendor: 8086
Device ID: 159b
Driver: ice
Link Speed: -1 Mb/s
Link Type: ETH
Mac: b4:96:91:cd:de:39
Mtu: 1500
Name: eno12409
Pci Address: 0000:31:00.1
Vendor: 8086
Device ID: 1592
Driver: ice
E Switch Mode: legacy
Link Speed: -1 Mb/s
Link Type: ETH
Mac: b4:96:91:aa:d8:40
Mtu: 1500
Name: ens1f0
Pci Address: 0000:18:00.0
Totalvfs: 128
Vendor: 8086
Device ID: 1592
Driver: ice
E Switch Mode: legacy
Link Speed: -1 Mb/s
Link Type: ETH
Mac: b4:96:91:aa:d8:41
Mtu: 1500
Name: ens1f1
Pci Address: 0000:18:00.1
Totalvfs: 128
Vendor: 8086
Sync Status: Succeeded
Events: <none>
By looking at the SriovNetworkNodeStates status we can find NIC information such as PCI addresses and interface names needed to define the SriovNetworkNodePolicy
CRs that create the required VF pools.
For example, the following three SriovNetworkNodePolicy
CRs will create a trusted VF pool with resourceName cvl_uft_admin
along with two additional VF pools for applications.
Please note that the "uft-admin-policy" SriovNetworkNodePolicy below uses
pfNames:
with VF index range selectors to target only VF0 of the Intel E810 Series NICs. More information on using VF partitioning can be found here.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: uft-admin-policy
  namespace: intel-ethernet-operator
spec:
  deviceType: vfio-pci
  nicSelector:
    pfNames:
      - ens1f0#0-0
      - ens1f1#0-0
    vendor: "8086"
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: 'true'
  numVfs: 8
  priority: 99
  resourceName: cvl_uft_admin
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: cvl-vfio-policy
  namespace: intel-ethernet-operator
spec:
  deviceType: vfio-pci
  nicSelector:
    pfNames:
      - ens1f0#1-3
      - ens1f1#1-3
    vendor: "8086"
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: 'true'
  numVfs: 8
  priority: 89
  resourceName: cvl_vfio
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: cvl-iavf-policy
  namespace: intel-ethernet-operator
spec:
  deviceType: netdevice
  nicSelector:
    pfNames:
      - ens1f0#4-7
      - ens1f1#4-7
    vendor: "8086"
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: 'true'
  numVfs: 8
  priority: 79
  resourceName: cvl_iavf
Save the above yaml in a file named sriov-network-policy.yaml
and then apply it to create the VF pools.
The CR can be applied by running:
# kubectl create -f sriov-network-policy.yaml
Check the node status to confirm that the cvl_uft_admin resource pool has registered the DCF-capable VFs of the node:
# kubectl describe node worker-01 | grep -i allocatable -A 20
Allocatable:
bridge.network.kubevirt.io/cni-podman0: 1k
cpu: 108
devices.kubevirt.io/kvm: 1k
devices.kubevirt.io/tun: 1k
devices.kubevirt.io/vhost-net: 1k
ephemeral-storage: 468315972Ki
hugepages-1Gi: 0
hugepages-2Mi: 8Gi
memory: 518146752Ki
openshift.io/cvl_iavf: 8
openshift.io/cvl_uft_admin: 2
openshift.io/cvl_vfio: 6
pods: 250
Next, we will need to create an SRIOV network attachment definition for the DCF VF pool as shown below:
cat <<EOF | kubectl apply -f -
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriov-cvl-dcf
spec:
  trust: 'on'
  networkNamespace: intel-ethernet-operator
  resourceName: cvl_uft_admin
EOF
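The SRIOV Network Operator should render a NetworkAttachmentDefinition from this SriovNetwork; its presence can be checked with:
kubectl get network-attachment-definitions -n intel-ethernet-operator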
Note: if the above does not successfully set trust mode to on for VF 0, you can do it manually using this command:
# ip link set <PF_NAME> vf 0 trust on
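Trust mode for VF 0 can then be confirmed from the host:
# ip link show <PF_NAME> | grep "vf 0"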
# export IMAGE_REGISTRY=<OCP Image registry>
# git clone https://github.com/intel/UFT.git
# cd UFT
# git checkout v22.07
# make dcf-image
# docker tag uft:v22.07 $IMAGE_REGISTRY/uft:v22.07
# docker push $IMAGE_REGISTRY/uft:v22.07
Note: The Admin VF pool prefix in
DCFVfPoolName
should match how it is shown in the node description in the Check node status section.
cat <<EOF | kubectl apply -f -
apiVersion: flowconfig.intel.com/v1
kind: FlowConfigNodeAgentDeployment
metadata:
  labels:
    control-plane: flowconfig-daemon
  name: flowconfig-daemon-deployment
  namespace: intel-ethernet-operator
spec:
  DCFVfPoolName: openshift.io/cvl_uft_admin
  NADAnnotation: sriov-cvl-dcf
EOF
# kubectl get pods -n intel-ethernet-operator
NAME READY STATUS RESTARTS AGE
clv-discovery-kwjkb 1/1 Running 0 44h
clv-discovery-tpqzb 1/1 Running 0 44h
flowconfig-daemon-worker-01 2/2 Running 0 44h
fwddp-daemon-m8d4w 1/1 Running 0 44h
intel-ethernet-operator-controller-manager-79c4d5bf6d-bjlr5 1/1 Running 0 44h
intel-ethernet-operator-controller-manager-79c4d5bf6d-txj5q 1/1 Running 0 44h
# kubectl logs -n intel-ethernet-operator flowconfig-daemon-worker-01 -c uft
Generating server_conf.yaml file...
Done!
server :
ld_lib : "/usr/local/lib64"
ports_info :
- pci : "0000:18:01.0"
mode : dcf
do eal init ...
[{'pci': '0000:18:01.0', 'mode': 'dcf'}]
[{'pci': '0000:18:01.0', 'mode': 'dcf'}]
the dcf cmd line is: a.out -c 0x30 -n 4 -a 0000:18:01.0,cap=dcf -d /usr/local/lib64 --file-prefix=dcf --
EAL: Detected 96 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/dcf/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 1048576 kB hugepages reported
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:18:01.0 (socket 0)
EAL: Releasing PCI mapped resource for 0000:18:01.0
EAL: Calling pci_unmap_resource for 0000:18:01.0 at 0x2101000000
EAL: Calling pci_unmap_resource for 0000:18:01.0 at 0x2101020000
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_ice_dcf (8086:1889) device: 0000:18:01.0 (socket 0)
ice_load_pkg_type(): Active package is: 1.3.30.0, ICE COMMS Package (double VLAN mode)
TELEMETRY: No legacy callbacks, legacy socket not created
grpc server start ...
now in server cycle
With trusted VFs and application VFs ready to be configured, create a sample ClusterFlowConfig CR:
Please see the ClusterFlowConfig Spec for a detailed specification of supported rules. Also, please note that this ClusterFlowConfig CR will create NodeFlowConfig CRs with rules on nodes that meet the podSelector criteria.
cat <<EOF | kubectl apply -f -
apiVersion: flowconfig.intel.com/v1
kind: ClusterFlowConfig
metadata:
  name: pppoes-sample
  namespace: intel-ethernet-operator
spec:
  rules:
    - pattern:
        - type: RTE_FLOW_ITEM_TYPE_ETH
        - type: RTE_FLOW_ITEM_TYPE_IPV4
          spec:
            hdr:
              src_addr: 10.56.217.9
          mask:
            hdr:
              src_addr: 255.255.255.255
        - type: RTE_FLOW_ITEM_TYPE_END
      action:
        - type: to-pod-interface
          conf:
            podInterface: net1
      attr:
        ingress: 1
        priority: 0
  podSelector:
    matchLabels:
      app: vagf
      role: controlplane
EOF
Validate from the UFT logs that the Flow rules have been applied by the controller:
kubectl logs -n intel-ethernet-operator flowconfig-daemon-worker-01 -c uft
Generating server_conf.yaml file...
Done!
server :
ld_lib : "/usr/local/lib64"
ports_info :
- pci : "0000:18:01.0"
mode : dcf
do eal init ...
[{'pci': '0000:18:01.0', 'mode': 'dcf'}]
[{'pci': '0000:18:01.0', 'mode': 'dcf'}]
the dcf cmd line is: a.out -c 0x30 -n 4 -a 0000:18:01.0,cap=dcf -d /usr/local/lib64 --file-prefix=dcf --
EAL: Detected 96 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/dcf/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 1048576 kB hugepages reported
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:18:01.0 (socket 0)
EAL: Releasing PCI mapped resource for 0000:18:01.0
EAL: Calling pci_unmap_resource for 0000:18:01.0 at 0x2101000000
EAL: Calling pci_unmap_resource for 0000:18:01.0 at 0x2101020000
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_ice_dcf (8086:1889) device: 0000:18:01.0 (socket 0)
ice_load_pkg_type(): Active package is: 1.3.30.0, ICE COMMS Package (double VLAN mode)
TELEMETRY: No legacy callbacks, legacy socket not created
grpc server start ...
now in server cycle
flow.rte_flow_attr
flow.rte_flow_item
flow.rte_flow_item
flow.rte_flow_item_ipv4
flow.rte_ipv4_hdr
flow.rte_flow_item_ipv4
flow.rte_ipv4_hdr
flow.rte_flow_item
flow.rte_flow_action
flow.rte_flow_action_vf
flow.rte_flow_action
rte_flow_attr(group=0, priority=0, ingress=1, egress=0, transfer=0, reserved=0) [rte_flow_item(type_=9, spec=None, last=None, mask=None), rte_flow_item(type_=11, spec=rte_flow_item_ipv4(hdr=rte_ipv4_hdr(version_ihl=0, type_of_service=0, total_length=0, packet_id=0, fragment_offset=0, time_to_live=0, next_proto_id=0, hdr_checksum=0, src_addr=171497737, dst_addr=0)), last=None, mask=rte_flow_item_ipv4(hdr=rte_ipv4_hdr(version_ihl=0, type_of_service=0, total_length=0, packet_id=0, fragment_offset=0, time_to_live=0, next_proto_id=0, hdr_checksum=0, src_addr=4294967295, dst_addr=0))), rte_flow_item(type_=0, spec=None, last=None, mask=None)] [rte_flow_action(type_=11, conf=rte_flow_action_vf(reserved=0, original=0, id=2)), rte_flow_action(type_=0, conf=None)]
rte_flow_attr(group=0, priority=0, ingress=1, egress=0, transfer=0, reserved=0)
1
Finish ipv4: {'hdr': {'version_ihl': 0, 'type_of_service': 0, 'total_length': 0, 'packet_id': 0, 'fragment_offset': 0, 'time_to_live': 0, 'next_proto_id': 0, 'hdr_checksum': 0, 'src_addr': 165230602, 'dst_addr': 0}}
Finish ipv4: {'hdr': {'version_ihl': 0, 'type_of_service': 0, 'total_length': 0, 'packet_id': 0, 'fragment_offset': 0, 'time_to_live': 0, 'next_proto_id': 0, 'hdr_checksum': 0, 'src_addr': 4294967295, 'dst_addr': 0}}
rte_flow_action(type_=11, conf=rte_flow_action_vf(reserved=0, original=0, id=2))
rte_flow_action_vf(reserved=0, original=0, id=2)
Action vf: {'reserved': 0, 'original': 0, 'id': 2}
rte_flow_action(type_=0, conf=None)
Validate ok...
flow.rte_flow_attr
flow.rte_flow_item
flow.rte_flow_item
flow.rte_flow_item_ipv4
flow.rte_ipv4_hdr
flow.rte_flow_item_ipv4
flow.rte_ipv4_hdr
flow.rte_flow_item
flow.rte_flow_action
flow.rte_flow_action_vf
flow.rte_flow_action
rte_flow_attr(group=0, priority=0, ingress=1, egress=0, transfer=0, reserved=0) [rte_flow_item(type_=9, spec=None, last=None, mask=None), rte_flow_item(type_=11, spec=rte_flow_item_ipv4(hdr=rte_ipv4_hdr(version_ihl=0, type_of_service=0, total_length=0, packet_id=0, fragment_offset=0, time_to_live=0, next_proto_id=0, hdr_checksum=0, src_addr=171497737, dst_addr=0)), last=None, mask=rte_flow_item_ipv4(hdr=rte_ipv4_hdr(version_ihl=0, type_of_service=0, total_length=0, packet_id=0, fragment_offset=0, time_to_live=0, next_proto_id=0, hdr_checksum=0, src_addr=4294967295, dst_addr=0))), rte_flow_item(type_=0, spec=None, last=None, mask=None)] [rte_flow_action(type_=11, conf=rte_flow_action_vf(reserved=0, original=0, id=2)), rte_flow_action(type_=0, conf=None)]
rte_flow_attr(group=0, priority=0, ingress=1, egress=0, transfer=0, reserved=0)
rte_flow_attr(group=0, priority=0, ingress=1, egress=0, transfer=0, reserved=0)
1
Finish ipv4: {'hdr': {'version_ihl': 0, 'type_of_service': 0, 'total_length': 0, 'packet_id': 0, 'fragment_offset': 0, 'time_to_live': 0, 'next_proto_id': 0, 'hdr_checksum': 0, 'src_addr': 165230602, 'dst_addr': 0}}
Finish ipv4: {'hdr': {'version_ihl': 0, 'type_of_service': 0, 'total_length': 0, 'packet_id': 0, 'fragment_offset': 0, 'time_to_live': 0, 'next_proto_id': 0, 'hdr_checksum': 0, 'src_addr': 4294967295, 'dst_addr': 0}}
rte_flow_action(type_=11, conf=rte_flow_action_vf(reserved=0, original=0, id=2))
rte_flow_action_vf(reserved=0, original=0, id=2)
Action vf: {'reserved': 0, 'original': 0, 'id': 2}
rte_flow_action(type_=0, conf=None)
free attr
free item ipv4
free item ipv4
free list item
free action vf conf
free list action
Flow rule #0 created on port 0
If ClusterFlowConfig does not satisfy your use case, you can use NodeFlowConfig. Create a sample node-specific NodeFlowConfig CR, named the same as the target node, with an empty spec:
cat <<EOF | kubectl apply -f -
apiVersion: flowconfig.intel.com/v1
kind: NodeFlowConfig
metadata:
  name: worker-01
spec:
EOF
Check status of CR:
# kubectl describe nodeflowconfig worker-01
Name: worker-01
Namespace: intel-ethernet-operator
Labels: <none>
Annotations: <none>
API Version: flowconfig.intel.com/v1
Kind: NodeFlowConfig
Metadata:
Status:
Port Info:
Port Id: 0
Port Mode: dcf
Port Pci: 0000:18:01.0
Events: <none>
You can see the DCF port information in the NodeFlowConfig CR status for a node. This port information can be used to identify the port on a node to which the Flow rules should be applied.
Please see the NodeFlowConfig Spec for a detailed specification of supported rules. We can update the Node Flow Configuration with a sample rule for a target port as shown below:
cat <<EOF | kubectl apply -f -
apiVersion: flowconfig.intel.com/v1
kind: NodeFlowConfig
metadata:
  name: worker-01
  namespace: intel-ethernet-operator
spec:
  rules:
    - pattern:
        - type: RTE_FLOW_ITEM_TYPE_ETH
        - type: RTE_FLOW_ITEM_TYPE_IPV4
          spec:
            hdr:
              src_addr: 10.56.217.9
          mask:
            hdr:
              src_addr: 255.255.255.255
        - type: RTE_FLOW_ITEM_TYPE_END
      action:
        - type: RTE_FLOW_ACTION_TYPE_DROP
        - type: RTE_FLOW_ACTION_TYPE_END
      portId: 0
      attr:
        ingress: 1
EOF
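After applying the rule, the stored spec and status can be inspected directly from the CR, for example:
kubectl get nodeflowconfig worker-01 -n intel-ethernet-operator -o yaml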
- Intel® Ethernet Network Adapter E810-XXVDA2
- 3rd Generation Intel® Xeon® processor platform
The Intel Ethernet Operator is a functional tool to manage the update of Intel® E810 NICs' FW and DDP profiles, as well as the programming of the NICs' VF Flow Configuration, autonomously in a Cloud Native OpenShift environment based on user input. It is easy to use, providing simple steps to apply the Custom Resources that configure various aspects of the device.