OpenNESS 20.06 Release
OpenNESS – 20.06
- OpenNESS is now available in two distributions
- Open source (Apache 2.0 license)
- Intel Distribution of OpenNESS (Intel Proprietary License)
- Includes all the code from the open source distribution plus additional features and enhancements to improve the user experience
- Access requires a signed license. A request for access can be made at openness.org by navigating to the “Products” section and selecting “Intel Distribution of OpenNESS”
- Both distributions are hosted at github.com/open-ness
- On premises configuration now optionally supports Kubernetes
- Core Network Feature (5G)
- Policy Authorization Service support in AF and CNCA over the N5 Interface (3GPP TS 29.514, Chapter 5, Npcf_PolicyAuthorization Service API).
- Support in AF for Core Network notifications of the User Plane Path Change event, received through Policy Authorization.
- NEF South Bound Interfaces support to communicate with the Core Network Functions for Traffic Influence and PFD.
- Core Network Test Function (CNTF) microservice added for validating the AF & NEF South Bound Interface communication.
- Flavors added for Core Network control-plane and user-plane.
- Whitepaper on OpenNESS-assisted edge cloud deployment in 5G Non-Standalone (NSA) mode.
- Enablement of OpenNESS 20.06 5G features through the enhanced OpenNESS release (IDO).
- Dataplane
- Support for Calico eBPF as CNI
- Performance baselining of the CNIs
- Visual Compute and Media Analytics
- Intel Visual Cloud Accelerator Card - Analytics (VCAC-A) Kubernetes deployment support (CPU, GPU and VPU)
- Node feature discovery of VCAC-A
- Telemetry support for VCAC-A
- Ansible playbook and Helm support for OVC codecs in Intel Xeon CPU mode - video analytics service (REST API) for developers
- Edge Applications
- Smart City Application Pipeline supporting CPU or VCAC-A mode with Helm chart
- CDN Content Delivery using NGINX with SR-IOV capability for higher performance with Helm chart
- CDN transcode sample application using Intel Xeon CPU optimized media SDK with Helm Chart
- Support for Transcoding Service using Intel Xeon CPU optimized media SDK with Helm chart
- Intel Edge Insights application support with Helm chart
- Edge Network Functions
- FlexRAN DU with Helm Chart (FlexRAN not part of the release)
- xRAN Fronthaul with Helm Chart (xRAN app not part of the release)
- Core Network Function - Application Function with Helm Chart
- Core Network Function - Network Exposure Function With Helm Chart
- Core Network Function - UPF (UPF app not part of the release)
- Core Network support functions - OAM and CNTF
- Helm Chart for Kubernetes enhancements
- NFD, CMK, SRIOV-Device plugin and Multus
- Support for local Docker registry setup
- Support for deployment specific Flavors
- Minimal
- RAN - 4G and 5G
- Core - User plane and Control Plane
- Media Analytics with VCAC-A and with CPU only mode
- CDN - Transcode
- CDN - Content Delivery
- Support for OpenNESS on CSP Cloud
- Azure - Deployment of OpenNESS cluster on Microsoft Azure cloud
- Telemetry Support
- Support for Collectd backend with Intel hardware and custom metrics
- cpu, cpufreq, load, hugepages, intel_pmu, intel_rdt, ipmi, ovs_stats, ovs_pmd_stats
- FPGA - PAC N3000 (collectd): temperature, power draw
- VPU device memory, VPU device thermal, VPU device utilization
- Open Telemetry - support for collector and exporter for metrics (e.g., heartbeat from an application)
- Support for PCM counters for Prometheus and Grafana
- Telemetry Aware Scheduler
- Early Access support for Resource Management Daemon (RMD)
- RMD for cache allocation to the application Pods
- Ability to deploy OpenNESS Master and Node on the same platform
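
The telemetry features above expose collectd and node-exporter metrics through Prometheus, which scrapes them in its text exposition format. As a rough illustration of that data format (not code from the release), the sketch below parses a single sample line; the metric name and labels used are invented for illustration.

```python
# Minimal sketch: parse one line of Prometheus text exposition format,
# the format scraped from exporters such as the node exporter.
# The metric name/labels below are hypothetical examples.
import re

SAMPLE_RE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'  # metric name
    r'(?:\{(?P<labels>[^}]*)\})?'           # optional {label="value",...}
    r'\s+(?P<value>\S+)'                    # sample value
)

def parse_sample(line):
    """Return (name, labels_dict, value) for one exposition-format line.

    Note: the naive comma split below does not handle label values that
    themselves contain commas; it is enough for a sketch.
    """
    m = SAMPLE_RE.match(line.strip())
    if not m:
        raise ValueError("not a valid sample line: %r" % line)
    labels = {}
    if m.group("labels"):
        for pair in m.group("labels").split(","):
            key, _, val = pair.partition("=")
            labels[key.strip()] = val.strip().strip('"')
    return m.group("name"), labels, float(m.group("value"))

# Example: a (hypothetical) collectd CPU metric as Prometheus would scrape it.
name, labels, value = parse_sample('collectd_cpu_total{cpu="0",type="idle"} 42.0')
```

Real deployments would normally query these metrics through the Prometheus HTTP API or view them in Grafana rather than parsing the scrape format directly.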
Changes to Existing Features
- OpenNESS 20.06
- Offline install for Native mode OnPremises has been deprecated
Fixed Issues
- OpenNESS 20.06
- Optimized the Kubernetes based deployment by supporting multiple Flavors
Known Issues and Limitations
- OpenNESS 20.06
- On-Premises edge installation takes approximately 1.5 hours because of the Docker image build for OVS-DPDK
- Network edge installation takes approximately 1.5 hours because of the Docker image build for OVS-DPDK
- The OpenNESS Controller allows management NICs to be included in the pool of configurable interfaces, which could lead to a management NIC being configured by mistake, thereby disconnecting the node from the master
- When using the SRIOV EPA feature added in 20.03 with OVNCNI, the container cannot access the CNI port. This happens because attaching the SRIOV port changes the container's network from the default to a custom network, which overwrites the OVNCNI network settings configured earlier to enable the container to work with OVNCNI. A related issue is that this also overwrites the network configuration for the EAA and EdgeDNS agents, which prevents the SRIOV-enabled container from communicating with those agents.
- An Edge Node cannot be removed from the Controller while it is offline if a traffic policy is configured or an application is deployed on it.
- Legacy OnPremises - traffic rule creation: a field that has been filled in and then cleared cannot be parsed
- There is an issue with using CDI to upload VM images when CMK is enabled, due to a missing CMK taint toleration. The CDI upload pod does not get deployed, and the virtctl plugin command times out waiting for the action to complete. A workaround is to invoke the CDI upload command, edit the taint toleration for the CDI upload pod to tolerate CMK, update the pod, create the PV, and let the pod run to completion.
- There is a known issue with cAdvisor which, in certain scenarios, occasionally fails to expose metrics on the Prometheus endpoint; see GitHub: google/cadvisor#2537
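
For the CDI/CMK workaround above, the upload pod needs a toleration matching the taint that CMK places on the node. The fragment below is a minimal sketch of such a toleration under the pod's `spec`, assuming the default `cmk=true:NoSchedule` taint applied by CPU Manager for Kubernetes; verify the actual taint on the node (e.g., `kubectl describe node <node>`) before applying.

```yaml
# Illustrative toleration for the CDI upload pod (goes under spec.tolerations).
# The key/value "cmk"/"true" assumes the default CMK taint; adjust to match
# the taint actually present on the node.
tolerations:
- key: "cmk"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"
```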
Release Content
- OpenNESS 20.06
- Open Source: Edge node, Controller, Epcforedge, Common, Spec, Applications and Experience kit.
- IDO: IDO Edge node, IDO Controller, IDO Epcforedge, IDO Spec and IDO Experience kit.
Note: The Applications repo is common to the Open Source and IDO distributions.
Hardware and Software Compatibility
OpenNESS Edge Node has been tested using the following hardware specification:
Intel® Xeon® D Processor
- Super Micro 3U form factor chassis server, product SKU code: 835TQ-R920B
- Motherboard type: X11SDV-16C-TP8F
- Intel® Xeon® Processor D-2183IT
2nd Generation Intel® Xeon® Scalable Processors
| Component    | Description                                                    |
| ------------ | -------------------------------------------------------------- |
| CLX-SP       | Compute Node based on CLX-SP (6252N)                           |
| Board        | S2600WFT server board                                          |
| CPU          | 2 x Intel(R) Xeon(R) Gold 6252N CPU @ 2.30GHz, 2 x associated heatsink |
| Memory       | 12x Micron 16GB DDR4 2400MHz DIMMs [2666 for PnP]              |
| Chassis      | 2U Rackmount Server Enclosure                                  |
| Storage      | Intel M.2 SSDSCKJW360H6 360G                                   |
| NIC          | 1x Intel Fortville NIC X710DA4 SFP+ (PCIe card to CPU-0)       |
| QAT          | Intel Quick Assist Adapter Device 37c8 (symmetrical design), LBG integrated |
| NIC on board | Intel Ethernet Controller I210 (for management)                |
| Other cards  | 2x PCIe riser cards                                            |
Intel® Xeon® Scalable Processors
| Component    | Description                                                    |
| ------------ | -------------------------------------------------------------- |
| SKX-SP       | Compute Node based on SKX-SP (6148)                            |
| Board        | WolfPass S2600WFQ server board (symmetrical QAT)               |
| CPU          | 2 x Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz, 2 x associated heatsink |
| Memory       | 12x Micron 16GB DDR4 2400MHz DIMMs [2666 for PnP]              |
| Chassis      | 2U Rackmount Server Enclosure                                  |
| Storage      | Intel M.2 SSDSCKJW360H6 360G                                   |
| NIC          | 1x Intel Fortville NIC X710DA4 SFP+ (PCIe card to CPU-0)       |
| QAT          | Intel Quick Assist Adapter Device 37c8 (symmetrical design), LBG integrated |
| NIC on board | Intel Ethernet Controller I210 (for management)                |
| Other cards  | 2x PCIe riser cards                                            |
| HDDL-R       | Mouser Mustang-V100                                            |
| VCAC-A       | VCAC-A Accelerator for Media Analytics                         |
Supported Operating Systems
OpenNESS was tested on CentOS Linux release 7.6.1810 (Core). Note: OpenNESS is tested with the CentOS 7.6 PREEMPT RT kernel to ensure that VNFs and applications can co-exist; the OpenNESS software itself does not require a PREEMPT RT kernel.
Package Versions
| Package                  | Version    |
| ------------------------ | ---------- |
| telemetry                |            |
| cadvisor                 | 0.36.0     |
| grafana                  | 7.0.3      |
| prometheus               | 2.16.0     |
| prometheus node exporter | 1.0.0-rc.0 |
| tas                      | 0.         |
| golang                   | 1.14.2     |
| docker                   | 19.03.     |
| kubernetes               | 1.18.4     |
| dpdk                     | 18.11.6    |
| ovs                      | 2.11.1     |
| ovn                      | 2.12.0     |
| helm                     | 3.0        |
| kubeovn                  | 1.0.1      |
| flannel                  | 0.12.0     |
| calico                   | 3.14.0     |
| multus                   | 3.4.1      |
| sriov cni                | 2.3        |
| nfd                      | 0.5.0      |
| cmk                      | v1.4.1     |