diff --git a/dev/install-upgrade/install/index.html b/dev/install-upgrade/install/index.html index 00b73df..a6c2ac2 100644 --- a/dev/install-upgrade/install/index.html +++ b/dev/install-upgrade/install/index.html @@ -590,6 +590,24 @@ + + +
  • + + + Steps for Linux + + + +
  • + +
  • + + + Steps for MacOS + + +
  • @@ -1393,6 +1411,24 @@ + + +
  • + + + Steps for Linux + + + +
  • + +
  • + + + Steps for MacOS + + +
  • @@ -1495,7 +1531,13 @@

    Overview of Install Process

    Build Control Node configuration and Installer

    -

    Hedgehog has created a command line utility, called hhfab, that helps generate the wiring diagram and fabric configuration, validate the supplied configurations, and generate an installation image (.img) suitable for writing to a USB flash drive or mounting via IPMI virtual media. The first hhfab command to run is hhfab init. This will generate the main configuration file, fab.yaml. fab.yaml is responsible for almost every configuration of the fabric with the exception of the wiring. Each command and subcommand have usage messages, simply supply the -h flag to your command or sub command to see the available options. For example hhfab vlab -h and hhfab vlab gen -h.

    +

Hedgehog has created a command line utility, called hhfab, that helps generate the wiring diagram and fabric configuration, validate the supplied configurations, and generate an installation image (.img or .iso) suitable for writing to a USB flash drive or mounting via IPMI virtual media. The first hhfab command to run is hhfab init. It generates the main configuration file, fab.yaml, which is responsible for almost every configuration of the fabric with the exception of the wiring. Each command and subcommand has a usage message; simply supply the -h flag to your command or subcommand to see the available options, for example hhfab vlab -h and hhfab vlab gen -h.

    HHFAB commands to make a bootable image

    1. hhfab init --wiring wiring-lab.yaml
    2. @@ -1504,13 +1546,17 @@

  HHFAB commands to make a bootable image

  • hhfab validate
  • -
  • hhfab build
  • +
  • hhfab build --mode iso
      +
1. An ISO is best suited for use with IPMI-based virtual media. If desired, an IMG file suitable for writing to a USB drive can be created by passing the --mode usb option. ISO is the default.
    -

    The installer for the fabric is generated in $CWD/result/. This installation image is named control-1-install-usb.img and is 7.5 GB in size. Once the image is created, you can write it to a USB drive, or mount it via virtual media.

    +
  • + +

    The installer for the fabric is generated in $CWD/result/. This installation image is named control-1-install-usb.iso and is 7.5 GB in size. Once the image is created, you can write it to a USB drive, or mount it via virtual media.

    Write USB Image to Disk

    This will erase data on the USB disk.

    +

    Steps for Linux

1. Insert the USB drive into your machine
    2. Identify the path to your USB stick, for example: /dev/sdc
    3. @@ -1519,6 +1565,14 @@

      Write USB Image to Disk

    +

    Steps for MacOS

    +
      +
    1. Plug the drive into the computer
    2. +
    3. Open the terminal
    4. +
    5. Identify the drive using diskutil list
    6. +
7. Unmount the disk: diskutil unmount disk5 (the disk identifier is specific to your environment)
    8. +
    9. Write the image to the disk: sudo dd if=./control-1-install-usb.img of=/dev/disk5 bs=4k status=progress
    10. +

    There are utilities that assist this process such as etcher.

    Install Control Node

The control node should be given a stable IP address: either a static DHCP lease or a statically assigned address.

    @@ -1556,7 +1610,15 @@

    Install Control Node

    Configure Management Network

    -

    The control node is dual-homed. It has a 10GbE interface that connects to the management network. The other link called external in the fab.yaml file is for the customer to access the control node. The management network is for the command and control of the switches that comprise the fabric. This management network can be a simple broadcast domain with layer 2 connectivity. The control node will run a DHCP and small http servers. The management network is not accessible to machines or devices not associated with the fabric.

    +

    The control node is dual-homed: it connects to two different networks, which are called +management and external, respectively, in the fab.yaml file. +The management network is for controlling the switches that comprise the fabric. It +can be a simple broadcast domain with layer 2 connectivity. The management network is +not accessible to machines or devices not associated with the fabric; it is a private, +exclusive network. The control node connects to the management network via a 10 GbE +interface. It runs a DHCP server, as well as a small HTTP server.

    +

    The external network allows the user to access the control node via their local +IT network. It provides SSH access to the host operating system on the control node.
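The two networks described above map to the control node's network settings in fab.yaml. The excerpt below is only an illustrative sketch: the interface names and exact field names are assumptions, not authoritative, and should be compared against the fab.yaml that hhfab init actually generates.

```yaml
# Illustrative excerpt only: interface names and field names are
# assumptions; compare against your generated fab.yaml.
apiVersion: fabricator.githedgehog.com/v1beta1
kind: ControlNode
metadata:
  name: control-1
spec:
  # 10 GbE link used for command and control of the switches
  management:
    interface: enp2s1
  # link used to reach the control node from the local IT network (SSH, etc.)
  external:
    interface: enp2s0
    ip: dhcp
```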

    Fabric Manages Switches

    Now that the install has finished, you can start interacting with the Fabric using kubectl, kubectl fabric and k9s, all pre-installed as part of the Control Node installer.

    At this stage, the fabric hands out DHCP addresses to the switches via the management network. Optionally, you can monitor this process by going through the following steps: @@ -1587,7 +1649,7 @@

    Fabric Manages Switches

    - October 24, 2024 + December 19, 2024 diff --git a/dev/search/search_index.json b/dev/search/search_index.json index 83ee594..07183c1 100644 --- a/dev/search/search_index.json +++ b/dev/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"

    The Hedgehog Open Network Fabric is an open networking platform that brings the user experience enjoyed by so many in the public cloud to private environments. It comes without vendor lock-in.

    The Fabric is built around the concept of VPCs (Virtual Private Clouds), similar to public cloud offerings. It provides a multi-tenant API to define the user intent on network isolation and connectivity, which is automatically transformed into configuration for switches and software appliances.

    You can read more about its concepts and architecture in the documentation.

    You can find out how to download and try the Fabric on the self-hosted fully virtualized lab or on hardware.

    "},{"location":"architecture/fabric/","title":"Hedgehog Network Fabric","text":"

The Hedgehog Open Network Fabric is an open-source network architecture that provides connectivity between virtual and physical workloads and provides a way to achieve network isolation between different groups of workloads using standard BGP EVPN and VXLAN technology. The fabric provides a standard Kubernetes interface to manage the elements in the physical network and provides a mechanism to configure virtual networks and define attachments to these virtual networks. The Hedgehog Fabric provides isolation between different groups of workloads by placing them in different virtual networks called VPCs. To achieve this, it defines different abstractions starting from the physical network, where a set of Connection objects defines how a physical server on the network connects to a physical switch on the fabric.

    "},{"location":"architecture/fabric/#underlay-network","title":"Underlay Network","text":"

    The Hedgehog Fabric currently supports two underlay network topologies.

    "},{"location":"architecture/fabric/#collapsed-core","title":"Collapsed Core","text":"

A collapsed core topology is just a pair of switches connected in an MCLAG configuration with no other network elements. All workloads attach to these two switches.

The leaves in this setup are configured to be in an MCLAG pair, and servers can either be connected to both switches as an MCLAG port channel or as orphan ports connected to only one switch. Both leaves peer with external networks using BGP and act as gateways for workloads attached to them. The configuration of the underlay in the collapsed core is very simple, making it ideal for very small deployments.

    "},{"location":"architecture/fabric/#spine-leaf","title":"Spine-Leaf","text":"

    A spine-leaf topology is a standard Clos network with workloads attaching to leaf switches and the spines providing connectivity between different leaves.

This kind of topology is useful for bigger deployments and provides all the advantages of a typical Clos network. The underlay network is established using eBGP, where each leaf has a separate ASN and peers with all spines in the network. RFC7938 was used as the reference for establishing the underlay network.

    "},{"location":"architecture/fabric/#overlay-network","title":"Overlay Network","text":"

The overlay network runs on top of the underlay network to create a virtual network. The overlay network isolates control and data plane traffic between different virtual networks and the underlay network. Virtualization is achieved in the Hedgehog Fabric by encapsulating workload traffic over VXLAN tunnels that are sourced and terminated on the leaf switches in the network. The fabric uses BGP-EVPN/VXLAN to enable the creation and management of virtual networks on top of the physical one. The fabric supports multiple virtual networks over the same underlay network to support multi-tenancy. Each virtual network in the Hedgehog Fabric is identified by a VPC. The following subsections contain a high-level overview of how VPCs are implemented in the Hedgehog Fabric and its associated objects.

    "},{"location":"architecture/fabric/#vpc","title":"VPC","text":"

The previous subsections have described what a VPC is, and how to attach workloads to a specific VPC. The following bullet points describe how VPCs are actually implemented in the network to ensure a private view of the network.

    "},{"location":"architecture/fabric/#vpc-peering","title":"VPC Peering","text":"

To enable communication between two different VPCs, one needs to configure a VPC peering policy. The Hedgehog Fabric supports two different peering modes.

    "},{"location":"architecture/overview/","title":"Overview","text":"

    Under construction.

    "},{"location":"concepts/overview/","title":"Concepts","text":""},{"location":"concepts/overview/#introduction","title":"Introduction","text":"

Hedgehog Open Network Fabric is built on top of Kubernetes and uses the Kubernetes API to manage its resources. This means that all user-facing APIs are Kubernetes Custom Resources (CRDs), so you can use standard Kubernetes tools to manage Fabric resources.

    Hedgehog Fabric consists of the following components:

    "},{"location":"concepts/overview/#fabric-api","title":"Fabric API","text":"

All infrastructure is represented as a set of Fabric resources (Kubernetes CRDs) called the Wiring Diagram. With this representation, Fabric defines switches, servers, control nodes, external systems, and the connections between them in a single place, and then uses these definitions to deploy and manage the whole infrastructure. On top of the Wiring Diagram, Fabric provides a set of APIs to manage VPCs, the connections between them, and the connections between VPCs and External systems.

    "},{"location":"concepts/overview/#wiring-diagram-api","title":"Wiring Diagram API","text":"

    Wiring Diagram consists of the following resources:

    "},{"location":"concepts/overview/#user-facing-api","title":"User-facing API","text":""},{"location":"concepts/overview/#fabricator","title":"Fabricator","text":"

    Installer builder and VLAB.

    "},{"location":"concepts/overview/#fabric","title":"Fabric","text":"

    Control plane and switch agent.

    "},{"location":"contribute/docs/","title":"Documentation","text":""},{"location":"contribute/docs/#getting-started","title":"Getting started","text":"

This documentation is done using MkDocs with multiple plugins enabled. It's based on Markdown; you can find a basic syntax overview here.

In order to contribute to the documentation, you'll need to have Git and Docker installed on your machine, as well as any editor of your choice, preferably one supporting Markdown preview. You can run the preview server using the following command:

    just serve\n

Now you can open a continuously updated preview of your edits in your browser at http://127.0.0.1:8000. Pages will be automatically updated while you're editing.

    Additionally you can run

    just build\n

to make sure that your changes will be built correctly and don't break the documentation.

    "},{"location":"contribute/docs/#workflow","title":"Workflow","text":"

If you want to quickly edit any page in the documentation, you can press the Edit this page icon at the top right of the page. It'll open the page in the GitHub editor. You can edit it and create a pull request with your changes.

    Please, never push to the master or release/* branches directly. Always create a pull request and wait for the review.

Each pull request will be automatically built and a preview will be deployed. You can find the link to the preview in the pull request comments.

    "},{"location":"contribute/docs/#repository","title":"Repository","text":"

    Documentation is organized in per-release branches:

The latest release branch is referenced as the latest version of the documentation and is used by default when you open the documentation.

    "},{"location":"contribute/docs/#file-layout","title":"File layout","text":"

    All documentation files are located in docs directory. Each file is a Markdown file with .md extension. You can create subdirectories to organize your files. Each directory can have a .pages file that overrides the default navigation order and titles.

    For example, top-level .pages in this repository looks like this:

    nav:\n  - index.md\n  - getting-started\n  - concepts\n  - Wiring Diagram: wiring\n  - Install & Upgrade: install-upgrade\n  - User Guide: user-guide\n  - Reference: reference\n  - Troubleshooting: troubleshooting\n  - ...\n  - release-notes\n  - contribute\n

Where you can add pages by file name, like index.md, and the page title will be taken from the file (first line with #). Additionally, you can reference a whole directory to create a nested section in the navigation. You can also add custom titles by using the : separator, like Wiring Diagram: wiring, where Wiring Diagram is the title and wiring is a file/directory name.

    More details in the MkDocs Pages plugin.

    "},{"location":"contribute/docs/#abbreviations","title":"Abbreviations","text":"

    You can find abbreviations in includes/abbreviations.md file. You can add various abbreviations there and all usages of the defined words in the documentation will get a highlight.

For example, we have the following in includes/abbreviations.md:

    *[HHFab]: Hedgehog Fabricator - a tool for building Hedgehog Fabric\n

    It'll highlight all usages of HHFab in the documentation and show a tooltip with the definition like this: HHFab.

    "},{"location":"contribute/docs/#markdown-extensions","title":"Markdown extensions","text":"

We're using the MkDocs Material theme with multiple extensions enabled. You can find a detailed reference here, but below are some of the most useful ones.

    To view code for examples, please, check the source code of this page.

    "},{"location":"contribute/docs/#text-formatting","title":"Text formatting","text":"

Text can be deleted and replacement text added. This can also be combined into a single operation. Highlighting is also possible, and comments can be added inline.

    Formatting can also be applied to blocks by putting the opening and closing tags on separate lines and adding new lines between the tags and the content.

    Keyboard keys can be written like so:

    Ctrl+Alt+Del

And inline icons/emojis can be added like this:

    :fontawesome-regular-face-laugh-wink:\n:fontawesome-brands-twitter:{ .twitter }\n

    "},{"location":"contribute/docs/#admonitions","title":"Admonitions","text":"

    Admonitions, also known as call-outs, are an excellent choice for including side content without significantly interrupting the document flow. Different types of admonitions are available, each with a unique icon and color. Details can be found here.

    Lorem ipsum

    Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa.

    "},{"location":"contribute/docs/#code-blocks","title":"Code blocks","text":"

    Details can be found here.

    Simple code block with line nums and highlighted lines:

    bubble_sort.py
    def bubble_sort(items):\n    for i in range(len(items)):\n        for j in range(len(items) - 1 - i):\n            if items[j] > items[j + 1]:\n                items[j], items[j + 1] = items[j + 1], items[j]\n

    Code annotations:

    theme:\n  features:\n    - content.code.annotate # (1)\n
    1. I'm a code annotation! I can contain code, formatted text, images, ... basically anything that can be written in Markdown.
    "},{"location":"contribute/docs/#tabs","title":"Tabs","text":"

    You can use Tabs to better organize content.

    CC++
    #include <stdio.h>\n\nint main(void) {\n  printf(\"Hello world!\\n\");\n  return 0;\n}\n
    #include <iostream>\n\nint main(void) {\n  std::cout << \"Hello world!\" << std::endl;\n  return 0;\n}\n
    "},{"location":"contribute/docs/#tables","title":"Tables","text":"Method Description GET Fetch resource PUT Update resource DELETE Delete resource"},{"location":"contribute/docs/#diagrams","title":"Diagrams","text":"

    You can directly include Mermaid diagrams in your Markdown files. Details can be found here.

    graph LR\n  A[Start] --> B{Error?};\n  B -->|Yes| C[Hmm...];\n  C --> D[Debug];\n  D --> B;\n  B ---->|No| E[Yay!];
    sequenceDiagram\n  autonumber\n  Alice->>John: Hello John, how are you?\n  loop Healthcheck\n      John->>John: Fight against hypochondria\n  end\n  Note right of John: Rational thoughts!\n  John-->>Alice: Great!\n  John->>Bob: How about you?\n  Bob-->>John: Jolly good!
    "},{"location":"contribute/overview/","title":"Overview","text":"

    Under construction.

    "},{"location":"faq/overview/","title":"Frequently Asked Questions (FAQ)","text":""},{"location":"faq/overview/#what-is-the-hedgehog-fabric","title":"What is the Hedgehog Fabric?","text":"

    The Hedgehog Fabric is a topology of routers arranged in a spine-leaf architecture. A spine-leaf architecture is a type of Clos network topology. In a spine-leaf architecture, the leaves are usually placed in racks and connected directly to the servers, whereas spines are connected only to leaves. In a spine-leaf architecture, the fundamental unit of connection is a layer 3 route.

    The Hedgehog Fabric is managed via Kubernetes objects and custom resource definitions.

    "},{"location":"faq/overview/#what-are-the-advantages-of-a-spine-leaf-architecture","title":"What are the advantages of a spine-leaf architecture?","text":"

    A spine-leaf architecture is designed to facilitate traffic that is passing between servers inside of a data center. By contrast, other architectures like core-access-aggregation facilitate traffic moving in and out of the data center. A spine-leaf architecture provides multiple paths between nodes which allows for router maintenance and resilience in the case of failures. The spine-leaf architecture allows for multiple points of egress via border leaf nodes. In a spine-leaf architecture the unit of connection is a layer 3 route. There are robust tools, queueing algorithms and hardware available to manage network traffic at layer 3. To manage the distribution of routes to switches inside the fabric a protocol such as BGP, OSPF, or IS-IS is used.

    "},{"location":"faq/overview/#spine-leaf-architecture-diagram","title":"Spine Leaf Architecture Diagram","text":"

    The following diagram contains Leaf and Spine routers. Servers inside of a virtual private cloud can be attached to any leaf. To allow the servers to communicate, routes are applied to leaf nodes. The traffic passing from leaf 1 to leaf 2 can travel via any spine: the leaf uses ECMP to decide which spine to use. EVPN ensures that servers inside of a VPC are reachable at layer 2 regardless of which leaf they are attached to in the Fabric.

graph TD\n    S1([Spine 1])\n    S2([Spine 2])\n    S3([Spine 3])\n    L1([Leaf 1])\n    L2([Leaf 2])\n    L3([Leaf 3])\n    L4([Leaf 4])\n    WS1[[Workload Servers]]\n    WS2[[Workload Servers]]\n    WS3[[Workload Servers]]\n    WS4[[Workload Servers]]\n\n    S1 & S2 & S3 ---- L1 & L2 & L3 & L4 \n    L1 ---- WS1\n    L2 ---- WS2\n    L3 ---- WS3\n    L4 ---- WS4\n
    "},{"location":"faq/overview/#core-access-aggregation-diagram","title":"Core Access Aggregation Diagram","text":"

In the diagram below, the Access switches are isolated or managed by layer-2 constructs like ACLs, bridging, and VLANs. The Aggregation routers are where layer-2 traffic is promoted to layer 3. The core routers handle layer-3 traffic only. Often some form of Spanning Tree Protocol is used to avoid loops in the layer-2 domain; loops would cripple the network, as layer 2 often relies on broadcast/flooding for discovery. While there are multiple paths from the workload servers out to the core, they are often not passing traffic due to the Spanning Tree Protocol; these disabled links are shown as dotted lines.

graph TD\n    CG1((Core Router 1))\n    CG2((Core Router 2))\n    AG1([Aggregation 1])\n    AG2([Aggregation 2])\n    AG3([Aggregation 3])\n    A1[Access 1]\n    A2[Access 2]\n    A3[Access 3]\n    WS1[[Workload Servers]]\n    WS2[[Workload Servers]]\n    WS3[[Workload Servers]]\n\n    CG1 ---- AG1 & AG2 & AG3\n    CG2 ---- AG1 & AG2 & AG3\n    AG1 ---- A1 \n    AG2 ---- A2 \n    AG3 ---- A3 \n    AG1 -..- A2 & A3\n    AG2 -..- A1 & A3\n    AG3 -..- A1 & A2\n    A1 ---- WS1\n    A2 ---- WS2\n    A3 ---- WS3\n
    "},{"location":"faq/overview/#what-does-it-mean-to-manage-my-network-with-kubernetes","title":"What does it mean to manage my network with Kubernetes?","text":"

    A common way to manage a network is to proceed manually via the command-line interface of the equipment, or with the hardware vendor tools. Managing a small number of switches and routers this way is workable, but cumbersome, and it only gets more painful when the network grows. Managing switches and servers with Kubernetes is similar to managing pods and applications with Kubernetes: it provides assistance for deployment, scaling, and management of the network appliances.

    For example, if the administrator of a Kubernetes cluster wants to create a new Nginx pod, they write down the YAML file describing the pod name, the container image, any ports that the pod needs exposed, and what namespace to run the pod in. After the YAML file is created, a simple kubectl apply -f nginx.yaml is all that the administrator needs to run in order for the pod to be scheduled.
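The nginx example above, written out as the YAML file the administrator would apply (names, namespace, and image tag are illustrative):

```yaml
# nginx.yaml -- illustrative Pod definition; name, namespace, and
# image tag are examples, not prescribed values.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: default
spec:
  containers:
    - name: nginx
      image: nginx:1.27
      ports:
        - containerPort: 80   # port the pod needs exposed
```

Applied with kubectl apply -f nginx.yaml, exactly as described above.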

    With the Hedgehog Fabric, the same principles apply to managing network resources. Administrators create a YAML file to configure a VPC. The YAML file describes the IP address range for the private cloud, for example the 192.168.0.0/16 space. It also describes any VLANs that the private cloud needs. After the desired options are in the file, administrators can push the configuration to the switch with a mere kubectl apply -f vpc1.yaml, and within a few seconds the switch configuration is live.
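As a concrete illustration of the paragraph above, a vpc1.yaml might look roughly like the following. This is a hedged sketch: the VPC name, subnet, gateway, and VLAN values are invented for illustration, and the exact field names should be checked against the Fabric API reference.

```yaml
# Hypothetical VPC definition for illustration; verify field names
# against the Fabric API reference before applying.
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPC
metadata:
  name: vpc-1
spec:
  subnets:
    default:
      subnet: 192.168.1.0/24   # a /24 carved from the 192.168.0.0/16 space
      gateway: 192.168.1.1
      vlan: 1001               # VLAN the private cloud needs
```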

    "},{"location":"faq/overview/#what-is-a-virtual-private-cloud-vpc","title":"What is a Virtual Private Cloud (VPC)?","text":"

    A VPC provides layer 3 logical isolation inside of a network. To isolate the servers, a VRF is used. A VRF allows for multiple routing tables to exist at the same time on a switch. Each VPC is isolated from the others because there is simply no route between them.

    "},{"location":"getting-started/download/","title":"Download","text":"

The main entry point for the software is the Hedgehog Fabricator CLI, named hhfab. It is a command-line tool that allows you to build an installer for the Hedgehog Fabric, upgrade an existing installation, or run the Virtual LAB.

    "},{"location":"getting-started/download/#getting-access","title":"Getting access","text":"

Prior to General Availability, access to the full software is limited and requires a Design Partner Agreement. Please submit a ticket with the request using the Hedgehog Support Portal.

After that, you will be provided with credentials to access the software on GitHub Packages. In order to use the software, log in to the registry using the following command:

    docker login ghcr.io --username provided_user_name --password provided_token_string\n
    "},{"location":"getting-started/download/#downloading-hhfab","title":"Downloading hhfab","text":"

    Currently hhfab is supported on Linux x86/arm64 (tested on Ubuntu 22.04) and MacOS x86/arm64 for building installers/upgraders. It may work on Windows WSL2 (with Ubuntu), but it's not tested. For running VLAB only Linux x86 is currently supported.

All software, including binaries, container images, and Helm charts, is published to the GitHub Packages OCI registry. Download the latest stable hhfab binary using the following command; it requires ORAS to be installed (see below):

    curl -fsSL https://i.hhdev.io/hhfab | bash\n

    Or download a specific version (e.g. beta-1) using the following command:

    curl -fsSL https://i.hhdev.io/hhfab | VERSION=beta-1 bash\n

    Use the VERSION environment variable to specify the version of the software to download. By default, the latest stable release is downloaded. You can pick a specific release series (e.g. alpha-2) or a specific release.

    "},{"location":"getting-started/download/#installing-oras","title":"Installing ORAS","text":"

The download script requires ORAS to be installed. ORAS is used to download the binary from the OCI registry and can be installed using the following command:

    curl -fsSL https://i.hhdev.io/oras | bash\n
    "},{"location":"getting-started/download/#next-steps","title":"Next steps","text":""},{"location":"install-upgrade/build-wiring/","title":"Build Wiring Diagram","text":"

    Under construction.

    "},{"location":"install-upgrade/build-wiring/#overview","title":"Overview","text":"

A wiring diagram is a YAML file that is a digital representation of your network. You can find more YAML-level details in the User Guide sections on switch features and port naming and in the api. It's mandatory for all switches to reference a SwitchProfile in the spec.profile of the Switch object. Only port naming defined by switch profiles can be used in the wiring diagram; NOS (or any other) port names aren't supported.

In the meantime, to have a look at a working wiring diagram for the Hedgehog Fabric, run the sample generator, which produces working wiring diagrams:

    ubuntu@sl-dev:~$ hhfab sample -h\n\nNAME:\n   hhfab sample - generate sample wiring diagram\n\nUSAGE:\n   hhfab sample command [command options]\n\nCOMMANDS:\n   spine-leaf, sl      generate sample spine-leaf wiring diagram\n   collapsed-core, cc  generate sample collapsed-core wiring diagram\n   help, h             Shows a list of commands or help for one command\n\nOPTIONS:\n   --help, -h  show help\n

Or you can generate a wiring diagram for a VLAB environment, with flags to customize the number of switches, links, servers, etc.:

    ubuntu@sl-dev:~$ hhfab vlab gen --help\nNAME:\n   hhfab vlab generate - generate VLAB wiring diagram\n\nUSAGE:\n   hhfab vlab generate [command options]\n\nOPTIONS:\n   --bundled-servers value      number of bundled servers to generate for switches (only for one of the second switch in the redundancy group or orphan switch) (default: 1)\n   --eslag-leaf-groups value    eslag leaf groups (comma separated list of number of ESLAG switches in each group, should be 2-4 per group, e.g. 2,4,2 for 3 groups with 2, 4 and 2 switches)\n   --eslag-servers value        number of ESLAG servers to generate for ESLAG switches (default: 2)\n   --fabric-links-count value   number of fabric links if fabric mode is spine-leaf (default: 0)\n   --help, -h                   show help\n   --mclag-leafs-count value    number of mclag leafs (should be even) (default: 0)\n   --mclag-peer-links value     number of mclag peer links for each mclag leaf (default: 0)\n   --mclag-servers value        number of MCLAG servers to generate for MCLAG switches (default: 2)\n   --mclag-session-links value  number of mclag session links for each mclag leaf (default: 0)\n   --no-switches                do not generate any switches (default: false)\n   --orphan-leafs-count value   number of orphan leafs (default: 0)\n   --spines-count value         number of spines if fabric mode is spine-leaf (default: 0)\n   --unbundled-servers value    number of unbundled servers to generate for switches (only for one of the first switch in the redundancy group or orphan switch) (default: 1)\n   --vpc-loopbacks value        number of vpc loopbacks for each switch (default: 0)\n
    "},{"location":"install-upgrade/build-wiring/#sample-switch-configuration","title":"Sample Switch Configuration","text":"
    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Switch\nmetadata:\n  name: ds3000-02\nspec:\n  boot:\n    serial: ABC123XYZ\n  role: server-leaf\n  description: leaf-2\n  profile: celestica-ds3000\n  portBreakouts:\n    E1/1: 4x10G\n    E1/2: 4x10G\n    E1/17: 4x25G\n    E1/18: 4x25G\n    E1/32: 4x25G\n  redundancy:\n    group: mclag-1\n    type: mclag\n
    "},{"location":"install-upgrade/build-wiring/#design-discussion","title":"Design Discussion","text":"

    This section is meant to help the reader understand how to assemble the primitives presented by the Fabric API into a functional fabric.

    "},{"location":"install-upgrade/build-wiring/#vpc","title":"VPC","text":"

A VPC allows for isolation at layer 3. This is the main building block for users when creating their architecture. Hosts inside of a VPC belong to the same broadcast domain and can communicate with each other; if desired, a single VPC can be configured with multiple broadcast domains. The hosts inside of a VPC will likely need to connect to other VPCs or the outside world; to communicate between two VPCs, a peering will need to be created. A VPC can be a logical separation of workloads. By separating these workloads, additional controls are available. The logical separation doesn't have to be the traditional database, web, and compute layers: it could be development teams who need isolation, tenants inside of an office building, or any separation that allows for better control of the network. Once your VPCs are decided, the rest of the fabric comes together: traffic can be prioritized, security can be put into place, and the wiring can begin. The fabric allows a VPC to span more than one switch, which provides great flexibility.

    graph TD\n    L1([Leaf 1])\n    L2([Leaf 2])\n    S1[\"Server 1\n      10.7.71.1\"]\n    S2[\"Server 2\n      172.16.2.31\"]\n    S3[\"Server 3\n       192.168.18.85\"]\n\n    L1 <--> S1\n    L1 <--> S2\n    L2 <--> S3\n\n    subgraph VPC 1\n    S1\n    S2\n    S3\n    end
    "},{"location":"install-upgrade/build-wiring/#connection","title":"Connection","text":"

Connections represent the physical wires in your data center. They connect switches to other switches, or switches to servers.

    "},{"location":"install-upgrade/build-wiring/#server-connections","title":"Server Connections","text":"

    A server connection is a connection used to connect servers to the fabric. The fabric configures the server-facing port according to the type of the connection (MLAG, Bundle, etc.). The configuration of the actual server needs to be done by the server administrator. Server port names are not validated by the fabric; they are used as metadata to identify the connection. A server connection can be one of:

    graph TD\n    S1([Spine 1])\n    S2([Spine 2])\n    L1([Leaf 1])\n    L2([Leaf 2])\n    L3([Leaf 3])\n    L4([Leaf 4])\n    L5([Leaf 5])\n    L6([Leaf 6])\n    L7([Leaf 7])\n\n    TS1[Server1]\n    TS2[Server2]\n    TS3[Server3]\n    TS4[Server4]\n\n    S1 & S2 ---- L1 & L2 & L3 & L4 & L5 & L6 & L7\n    L1 <-- Bundled --> TS1\n    L1 <-- Bundled --> TS1\n    L1 <-- Unbundled --> TS2\n    L2 <-- MCLAG --> TS3\n    L3 <-- MCLAG --> TS3\n    L4 <-- ESLAG --> TS4\n    L5 <-- ESLAG --> TS4\n    L6 <-- ESLAG --> TS4\n    L7 <-- ESLAG --> TS4\n\n    subgraph VPC 1\n    TS1\n    TS2\n    TS3\n    TS4\n    end\n\n    subgraph MCLAG\n    L2\n    L3\n    end\n\n    subgraph ESLAG\n    L3\n    L4\n    L5\n    L6\n    L7\n    end\n
    "},{"location":"install-upgrade/build-wiring/#fabric-connections","title":"Fabric Connections","text":"

    Fabric connections are the connections between switches; they form the fabric of the network.

    "},{"location":"install-upgrade/build-wiring/#vpc-peering","title":"VPC Peering","text":"

    VPCs need VPC Peerings to talk to each other. VPC Peerings come in two varieties: local and remote.

    graph TD\n    S1([Spine 1])\n    S2([Spine 2])\n    L1([Leaf 1])\n    L2([Leaf 2])\n    TS1[Server1]\n    TS2[Server2]\n    TS3[Server3]\n    TS4[Server4]\n\n    S1 & S2 <--> L1 & L2\n    L1 <--> TS1 & TS2\n    L2 <--> TS3 & TS4\n\n\n    subgraph VPC 1\n    TS1\n    TS2\n    end\n\n    subgraph VPC 2\n    TS3\n    TS4\n    end
    "},{"location":"install-upgrade/build-wiring/#local-vpc-peering","title":"Local VPC Peering","text":"

    When there is no dedicated border/peering switch available in the fabric, local VPC peering can be used. This kind of peering sends traffic between the two VPCs on a switch where either of the VPCs has workloads attached. Due to a limitation in the SONiC network operating system, the bandwidth of this kind of peering is limited by the number of VPC loopbacks selected when initializing the fabric. Traffic between the VPCs uses the loopback interface, so the bandwidth of the connection equals the bandwidth of the port used in the loopback.

    graph TD\n\n    L1([Leaf 1])\n    S1[Server1]\n    S2[Server2]\n    S3[Server3]\n    S4[Server4]\n\n    L1 <-.2,loopback.-> L1;\n    L1 <-.3.-> S1;\n    L1 <--> S2 & S4;\n    L1 <-.1.-> S3;\n\n    subgraph VPC 1\n    S1\n    S2\n    end\n\n    subgraph VPC 2\n    S3\n    S4\n    end
    The dotted line in the diagram shows the traffic flow for local peering. The traffic originates in VPC 2, travels to the switch, travels out the first loopback port, into the second loopback port, and finally out the port destined for VPC 1.

    "},{"location":"install-upgrade/build-wiring/#remote-vpc-peering","title":"Remote VPC Peering","text":"

    Remote peering is used when you need a high-bandwidth connection between VPCs; a switch is dedicated to the peering traffic. This is done either on a border leaf or on a switch where neither of the VPCs is present. This kind of peering allows traffic between different VPCs at line rate and is limited only by fabric bandwidth. Remote peering introduces a few additional hops into the traffic path and may cause a small increase in latency.

    graph TD\n    S1([Spine 1])\n    S2([Spine 2])\n    L1([Leaf 1])\n    L2([Leaf 2])\n    L3([Leaf 3])\n    TS1[Server1]\n    TS2[Server2]\n    TS3[Server3]\n    TS4[Server4]\n\n    S1 <-.5.-> L1;\n    S1 <-.2.-> L2;\n    S1 <-.3,4.-> L3;\n    S2 <--> L1;\n    S2 <--> L2;\n    S2 <--> L3;\n    L1 <-.6.-> TS1;\n    L1 <--> TS2;\n    L2 <--> TS3;\n    L2 <-.1.-> TS4;\n\n\n    subgraph VPC 1\n    TS1\n    TS2\n    end\n\n    subgraph VPC 2\n    TS3\n    TS4\n    end
    The dotted line in the diagram shows the traffic flow for remote peering. The traffic could take a different path because of ECMP. It is important to note that Leaf 3 cannot have any servers from VPC 1 or VPC 2 on it, but it can have a different VPC attached to it.
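A sketch of the peering object for the two varieties above; the apiVersion, field names, and the 'border' switch group name are assumptions, so treat this as illustrative rather than authoritative:

```yaml
# Hypothetical VPCPeering sketch; schema may differ by release.
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCPeering
metadata:
  name: vpc-1--vpc-2
  namespace: default
spec:
  permit:                 # allow traffic between the two VPCs
    - vpc-1: {}
      vpc-2: {}
  # remote: border        # uncomment to pin the peering to a dedicated
  #                       # switch group for remote (line-rate) peering
```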

    "},{"location":"install-upgrade/build-wiring/#vpc-loopback","title":"VPC Loopback","text":"

    A VPC loopback is a physical cable with both ends plugged into the same switch; the two ends are suggested, but not required, to be adjacent ports. This loopback allows two different VPCs to communicate with each other and is needed due to a Broadcom limitation.

    "},{"location":"install-upgrade/config/","title":"Fabric Configuration","text":""},{"location":"install-upgrade/config/#overview","title":"Overview","text":"

    The fab.yaml file is the configuration file for the fabric. It supplies the configuration of users, their credentials, logging, telemetry, and other non-wiring-related settings. The fab.yaml file is composed of multiple YAML documents inside a single file; per the YAML spec, three hyphens (---) on a line by themselves separate the end of one document from the beginning of the next. There are two YAML documents in the fab.yaml file. For more information about how to use hhfab init, run hhfab init --help.
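Abridged to just the document boundaries, the two YAML documents and the --- separator between them look like this (field values are elided here; the Complete Example File section shows them in full):

```yaml
apiVersion: fabricator.githedgehog.com/v1beta1
kind: Fabricator          # first document: fabric-wide configuration
metadata:
  name: default
  namespace: fab
spec:
  config: {}              # control, fabric, and telemetry settings go here
---                       # three hyphens end one document and begin the next
apiVersion: fabricator.githedgehog.com/v1beta1
kind: ControlNode         # second document: the control node itself
metadata:
  name: control-1
  namespace: fab
spec: {}                  # bootstrap disk and interface settings go here
```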

    "},{"location":"install-upgrade/config/#typical-hhfab-workflows","title":"Typical HHFAB workflows","text":""},{"location":"install-upgrade/config/#hhfab-for-vlab","title":"HHFAB for VLAB","text":"

    For a VLAB user, the typical workflow with hhfab is:

    1. hhfab init --dev
    2. hhfab vlab gen
    3. hhfab vlab up

    The above workflow will get a user up and running with a spine-leaf VLAB.

    "},{"location":"install-upgrade/config/#hhfab-for-physical-machines","title":"HHFAB for Physical Machines","text":"

    It's possible to start from scratch:

    1. hhfab init (see the different flags to customize the initial configuration)
    2. Adjust the fab.yaml file to your needs
    3. hhfab validate
    4. hhfab build

    Or import existing config and wiring files:

    1. hhfab init -c fab.yaml -w wiring-file.yaml -w extra-wiring-file.yaml
    2. hhfab validate
    3. hhfab build

    After the above workflow a user will have an installation image suitable for installing the control node and then bringing up the switches that comprise the fabric.

    "},{"location":"install-upgrade/config/#fabyaml","title":"Fab.yaml","text":""},{"location":"install-upgrade/config/#configure-control-node-and-switch-users","title":"Configure control node and switch users","text":"

    Configuring control node and switch users is done either by passing --default-password-hash to hhfab init or by editing the resulting fab.yaml file emitted by hhfab init. You can specify users to be configured on the control node(s) and switches in the following format:

    spec:\n    config:\n      control:\n        defaultUser: # user 'core' on all control nodes\n          password: \"hashhashhashhashhash\" # password hash\n          authorizedKeys:\n            - \"ssh-ed25519 SecREKeyJumblE\"\n\n        fabric:\n          mode: spine-leaf # \"spine-leaf\" or \"collapsed-core\"\n\n          defaultSwitchUsers:\n            admin: # at least one user with name 'admin' and role 'admin'\n              role: admin\n              #password: \"$5$8nAYPGcl4...\" # password hash\n              #authorizedKeys: # optional SSH authorized keys\n              #  - \"ssh-ed25519 AAAAC3Nza...\"\n            op: # optional read-only user\n              role: operator\n              #password: \"$5$8nAYPGcl4...\" # password hash\n              #authorizedKeys: # optional SSH authorized keys\n              #  - \"ssh-ed25519 AAAAC3Nza...\"\n

    The control node user is always named core.

    The operator role provides read-only access to the sonic-cli command on the switches. To avoid conflicts, do not use the following usernames: operator, hhagent, netops.
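The password fields above take a crypt-style hash rather than a plain-text password. As a sketch, one common way to produce a hash in the $5$ (SHA-256 crypt) format shown in the example is the openssl passwd utility, assuming OpenSSL 1.1.1 or newer is installed:

```shell
# Generate a SHA-256 crypt password hash suitable for the 'password' fields.
# The salt is random, so the output differs on each run but always starts with $5$.
openssl passwd -5 'YourStrongPasswordHere'
```

The resulting $5$... string can be pasted into a password field or passed to hhfab init --default-password-hash.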

    "},{"location":"install-upgrade/config/#ntp-and-dhcp","title":"NTP and DHCP","text":"

    By default, the control node uses public NTP servers from Cloudflare and Google. The control node runs a DHCP server on the management network. See the example file.

    "},{"location":"install-upgrade/config/#control-node","title":"Control Node","text":"

    The control node is the host that manages all the switches, runs k3s, and serves images. This is the YAML document that configures the control node:

    apiVersion: fabricator.githedgehog.com/v1beta1\nkind: ControlNode\nmetadata:\n  name: control-1\n  namespace: fab\nspec:\n  bootstrap:\n   disk: \"/dev/sda\" # disk to install OS on, e.g. \"sda\" or \"nvme0n1\"\n  external:\n    interface: enp2s0 # interface for external\n    ip: dhcp # IP address for external interface\n  management:\n    interface: enp2s1 # interface for management\n\n# Currently only one ControlNode is supported\n
    The management interface is used by the control node to manage the fabric switches, not for end-user management of the control node. For end-user access to the control node, specify the external interface name.

    "},{"location":"install-upgrade/config/#forward-switch-metrics-and-logs","title":"Forward switch metrics and logs","text":"

    There is an option to enable Grafana Alloy on all switches to forward metrics and logs to the configured targets using the Prometheus Remote-Write API and the Loki API. If those APIs are reachable from the control node(s) but not from the switches, it's possible to enable an HTTP proxy on the control node(s) that Grafana Alloy running on the switches will use to access the configured targets. This can be done by passing --control-proxy=true to hhfab init.

    Metrics include port speeds, counters, errors, operational status, transceivers, fans, power supplies, temperature sensors, BGP neighbors, LLDP neighbors, and more. Logs include agent logs.

    Configuring the exporters and targets is currently only possible by editing the fab.yaml configuration file. An example configuration is provided below:

    spec:\n  config:\n      ...\n      defaultAlloyConfig:\n        agentScrapeIntervalSeconds: 120\n        unixScrapeIntervalSeconds: 120\n        unixExporterEnabled: true\n        lokiTargets:\n          grafana_cloud: # target name, multiple targets can be configured\n              basicAuth: # optional\n                  password: \"<password>\"\n                  username: \"<username>\"\n              labels: # labels to be added to all logs\n                  env: env-1\n              url: https://logs-prod-021.grafana.net/loki/api/v1/push\n              useControlProxy: true # if the Loki API is not available from the switches directly, use the Control Node as a proxy\n        prometheusTargets:\n          grafana_cloud: # target name, multiple targets can be configured\n              basicAuth: # optional\n                  password: \"<password>\"\n                  username: \"<username>\"\n              labels: # labels to be added to all metrics\n                  env: env-1\n              sendIntervalSeconds: 120\n              url: https://prometheus-prod-36-prod-us-west-0.grafana.net/api/prom/push\n              useControlProxy: true # if the Loki API is not available from the switches directly, use the Control Node as a proxy\n              unixExporterCollectors: # list of node-exporter collectors to enable, https://grafana.com/docs/alloy/latest/reference/components/prometheus.exporter.unix/#collectors-list\n                  - cpu\n                  - filesystem\n                  - loadavg\n                  - meminfo\n              collectSyslogEnabled: true # collect /var/log/syslog on switches and forward to the lokiTargets\n

    For additional options, see the AlloyConfig struct in Fabric repo.

    "},{"location":"install-upgrade/config/#complete-example-file","title":"Complete Example File","text":"
    apiVersion: fabricator.githedgehog.com/v1beta1\nkind: Fabricator\nmetadata:\n  name: default\n  namespace: fab\nspec:\n  config:\n    control:\n      tlsSAN: # IPs and DNS names to access API\n        - \"customer.site.io\"\n\n      ntpServers:\n      - time.cloudflare.com\n      - time1.google.com\n\n      defaultUser: # user 'core' on all control nodes\n        password: \"hash...\" # password hash\n        authorizedKeys:\n          - \"ssh-ed25519 hash...\"\n\n    fabric:\n      mode: spine-leaf # \"spine-leaf\" or \"collapsed-core\"\n      includeONIE: true\n      defaultSwitchUsers:\n        admin: # at least one user with name 'admin' and role 'admin'\n          role: admin\n          password: \"hash...\" # password hash\n          authorizedKeys:\n            - \"ssh-ed25519 hash...\"\n        op: # optional read-only user\n          role: operator\n          password: \"hash...\" # password hash\n          authorizedKeys:\n            - \"ssh-ed25519 hash...\"\n\n      defaultAlloyConfig:\n        agentScrapeIntervalSeconds: 120\n        unixScrapeIntervalSeconds: 120\n        unixExporterEnabled: true\n        collectSyslogEnabled: true\n        lokiTargets:\n          lab:\n            url: http://url.io:3100/loki/api/v1/push\n            useControlProxy: true\n            labels:\n              descriptive: name\n        prometheusTargets:\n          lab:\n            url: http://url.io:9100/api/v1/push\n            useControlProxy: true\n            labels:\n              descriptive: name\n            sendIntervalSeconds: 120\n\n---\napiVersion: fabricator.githedgehog.com/v1beta1\nkind: ControlNode\nmetadata:\n  name: control-1\n  namespace: fab\nspec:\n  bootstrap:\n    disk: \"/dev/sda\" # disk to install OS on, e.g. \"sda\" or \"nvme0n1\"\n  external:\n    interface: eno2 # interface for external\n    ip: dhcp # IP address for external interface\n  management:\n    interface: eno1\n\n# Currently only one ControlNode is supported\n
    "},{"location":"install-upgrade/install/","title":"Install Fabric","text":""},{"location":"install-upgrade/install/#prerequisites","title":"Prerequisites","text":""},{"location":"install-upgrade/install/#overview-of-install-process","title":"Overview of Install Process","text":"

    This section covers the Hedgehog Fabric installation on bare-metal control node(s) and switches, including their preparation and configuration. To install the VLAB, see VLAB Overview.

    Download and install hhfab following the instructions in the Download section.

    The main steps to install Fabric are:

    1. Install hhfab on a machine with access to the Internet
      1. Prepare Wiring Diagram
      2. Select Fabric Configuration
      3. Build Control Node configuration and installer
    2. Install Control Node
      1. Insert USB with control-os image into Fabric Control Node
      2. Boot the node off the USB to initiate the installation
    3. Prepare Management Network
      1. Connect management switch to Fabric control node
      2. Connect 1GbE Management port of switches to management switch
    4. Prepare supported switches
      1. Ensure switch serial numbers and / or first management interface MAC addresses are recorded in wiring diagram
      2. Boot them into ONIE Install Mode to have them automatically provisioned
    "},{"location":"install-upgrade/install/#build-control-node-configuration-and-installer","title":"Build Control Node configuration and Installer","text":"

    Hedgehog has created a command line utility, called hhfab, that helps generate the wiring diagram and fabric configuration, validate the supplied configurations, and generate an installation image (.img or .iso) suitable for writing to a USB flash drive or mounting via IPMI virtual media. The first hhfab command to run is hhfab init. This generates the main configuration file, fab.yaml. fab.yaml is responsible for almost every configuration of the fabric with the exception of the wiring. Each command and subcommand has a usage message; simply supply the -h flag to your command or subcommand to see the available options, for example hhfab vlab -h and hhfab vlab gen -h.

    "},{"location":"install-upgrade/install/#hhfab-commands-to-make-a-bootable-image","title":"HHFAB commands to make a bootable image","text":"
    1. hhfab init --wiring wiring-lab.yaml
    2. The init command generates a fab.yaml file, edit the fab.yaml file for your needs
      1. ensure the correct boot disk (e.g. /dev/sda) and control node NIC names are supplied
    3. hhfab validate
    4. hhfab build

    The installer for the fabric is generated in $CWD/result/. The installation image is named control-1-install-usb.img and is 7.5 GB in size. Once the image is created, you can write it to a USB drive or mount it via virtual media.

    "},{"location":"install-upgrade/install/#write-usb-image-to-disk","title":"Write USB Image to Disk","text":"

    This will erase data on the USB disk.

    1. Insert the USB to your machine
    2. Identify the path to your USB stick, for example: /dev/sdc
    3. Issue the command to write the image to the USB drive

    There are utilities that assist with this process, such as Etcher.
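On Linux, a minimal sketch of the manual route uses lsblk to find the device and dd to write the image; /dev/sdc below is a hypothetical device name, so double-check yours before running dd, since it irreversibly overwrites the target:

```shell
# Identify the USB stick (whole disks only); pick the device that matches
# your USB drive's size and model.
lsblk -d -o NAME,SIZE,MODEL

# Write the installer image and flush caches before dd exits.
# THIS DESTROYS ALL DATA on the target device.
# sudo dd if=result/control-1-install-usb.img of=/dev/sdc bs=4M status=progress conv=fsync
```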

    "},{"location":"install-upgrade/install/#install-control-node","title":"Install Control Node","text":"

    The control node should have a static IP address, either via a static DHCP lease or assigned statically.

    1. Configure the server to use UEFI boot without secure boot

    2. Attach the image to the server, either by inserting the USB drive or by attaching it via virtual media

    3. Boot off of the attached media; the installation process is automated

    4. Once the control node has booted, it logs in automatically and begins the installation process

      1. Optionally use journalctl -f -u flatcar-install.service to monitor progress
    5. Once the installation is complete, the system automatically reboots.

    6. After the system has shut down, but before the boot process reaches the operating system, remove the USB image from the system. Removal during the UEFI boot screen is acceptable.

    7. Upon booting into the freshly installed system, the fabric installation will automatically begin

      1. If the insecure --dev flag was passed to hhfab init, the password for the core user is HHFab.Admin!, and the switches have two users created: admin and op. admin has administrator privileges and password HHFab.Admin!, whereas op is a read-only, non-sudo user with password HHFab.Op!.
      2. Optionally this can be monitored with journalctl -f -u fabric-install.service
    8. The install is complete when the log emits \"Control Node installation complete\". Additionally, systemctl status will show inactive (dead), indicating that the executable has finished.

    "},{"location":"install-upgrade/install/#configure-management-network","title":"Configure Management Network","text":"

    The control node is dual-homed. It has a 10GbE interface that connects to the management network. The other link, called external in the fab.yaml file, is for the customer to access the control node. The management network is used for command and control of the switches that comprise the fabric. It can be a simple broadcast domain with layer 2 connectivity. The control node runs DHCP and small HTTP servers on it. The management network is not accessible to machines or devices not associated with the fabric.

    "},{"location":"install-upgrade/install/#fabric-manages-switches","title":"Fabric Manages Switches","text":"

    Now that the install has finished, you can start interacting with the Fabric using kubectl, kubectl fabric and k9s, all pre-installed as part of the Control Node installer.

    At this stage, the fabric hands out DHCP addresses to the switches via the management network. Optionally, you can monitor this process by going through the following steps:

    1. enter k9s at the command prompt
    2. use the arrow keys to select the pod named fabric-boot
    3. the pod's logs will be displayed, showing the DHCP lease process
    4. to see the switches, type :switches (like a vim command) into k9s
    5. use the heartbeat column of the switches screen to verify the connection between switch and controller

    "},{"location":"install-upgrade/requirements/","title":"System Requirements","text":""},{"location":"install-upgrade/requirements/#out-of-band-management-network","title":"Out of Band Management Network","text":"

    In order to provision and manage the switches that comprise the fabric, an out-of-band management switch must also be installed. This network is used exclusively by the control node and the fabric switches; no other access is permitted. This switch (or switches) is not managed by the fabric. It is recommended that this switch have at least one 10GbE port, and that this port connect to the control node.

    "},{"location":"install-upgrade/requirements/#control-node","title":"Control Node","text":"

    In internal testing Hedgehog uses a server with the following specifications:

    "},{"location":"install-upgrade/requirements/#non-ha-minimal-setup-1-control-node","title":"Non-HA (minimal) setup - 1 Control Node","text":" Minimal Recommended CPU 6 8 RAM 16 GB 32 GB Disk 150 GB 250 GB"},{"location":"install-upgrade/requirements/#future-ha-setup-3-control-nodes-per-node","title":"(Future) HA setup - 3+ Control Nodes (per node)","text":" Minimal Recommended CPU 6 8 RAM 16 GB 32 GB Disk 150 GB 250 GB"},{"location":"install-upgrade/requirements/#reference-control-node-configuration","title":"Reference Control Node Configuration","text":""},{"location":"install-upgrade/requirements/#device-participating-in-the-hedgehog-fabric-eg-switch","title":"Device participating in the Hedgehog Fabric (e.g. switch)","text":"

    The following resources should be available on a device for it to run in the Hedgehog Fabric (after accounting for other software such as SONiC):

    Minimal Recommended CPU 1 2 RAM 1 GB 1.5 GB Disk 5 GB 10 GB"},{"location":"install-upgrade/supported-devices/","title":"Supported Devices","text":"

    You can find detailed information about devices in the Switch Profiles Catalog and in the User Guide switch features and port naming.

    "},{"location":"install-upgrade/supported-devices/#spine","title":"Spine","text":""},{"location":"install-upgrade/supported-devices/#leaf","title":"Leaf","text":"

    (could be used for collapsed-core)

    "},{"location":"install-upgrade/upgrade/","title":"Upgrading from Alpha-7 to Beta-1","text":""},{"location":"install-upgrade/upgrade/#control-node","title":"Control Node","text":"

    Ensure the hardware that is to be used for the control node meets the system requirements. The upgrade process is destructive to the host, so ensure all needed data is removed from the selected server before the upgrade is started.

    "},{"location":"install-upgrade/upgrade/#management-network","title":"Management Network","text":"

    Beta-1 uses the RJ-45 management ports of the switches instead of front panel ports. A simple management network needs to be in place and cabled before the installation of Beta-1. The control node runs a DHCP server on this network and must be the sole DHCP server on it. Do not co-mingle other services or equipment on this network; it is for the exclusive use of the control node and switches.

    "},{"location":"install-upgrade/upgrade/#install-switch-vendor-onie","title":"Install Switch Vendor ONIE","text":"

    Beta-1 uses the switch vendor's ONIE for installation of the NOS. Installing the latest vendor-provided version of ONIE is recommended. Hedgehog ONIE must not be used.

    "},{"location":"install-upgrade/upgrade/#changes-to-the-wiring-diagram","title":"Changes to the Wiring Diagram","text":""},{"location":"install-upgrade/upgrade/#install-the-control-node","title":"Install The Control Node","text":"

    Follow the instructions for installing the Beta-1 Fabric on a control node.

    "},{"location":"install-upgrade/upgrade/#install-nos-using-onie-nos-install-option","title":"Install NOS using ONIE NOS Install Option","text":"

    As the switches boot, select the ONIE option from the GRUB menu, then select the \"NOS Install\" option. The install option causes the switch to begin searching for installation media, which is supplied by the control node.

    "},{"location":"reference/api/","title":"API Reference","text":""},{"location":"reference/api/#packages","title":"Packages","text":""},{"location":"reference/api/#agentgithedgehogcomv1beta1","title":"agent.githedgehog.com/v1beta1","text":"

    Package v1beta1 contains API Schema definitions for the agent v1beta1 API group. This is the internal API group for the switch and control node agents. Not intended to be modified by the user.

    "},{"location":"reference/api/#resource-types","title":"Resource Types","text":""},{"location":"reference/api/#adminstatus","title":"AdminStatus","text":"

    Underlying type: string

    Appears in: - SwitchStateInterface

    Field Description `` up down testing"},{"location":"reference/api/#agent","title":"Agent","text":"

    Agent is an internal API object used by the controller to pass all relevant information to the agent running on a specific switch in order to fully configure it and manage its lifecycle. It is not intended to be used directly by users. The spec of the object isn't user-editable; it is managed by the controller. The status of the object is updated by the agent and is used by the controller to track the state of the agent and the switch it is running on. The name of the Agent object is the same as the name of the switch it runs on, and it is created in the same namespace as the Switch object.

    Field Description Default Validation apiVersion string agent.githedgehog.com/v1beta1 kind string Agent metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. status AgentStatus Status is the observed state of the Agent"},{"location":"reference/api/#agentstatus","title":"AgentStatus","text":"

    AgentStatus defines the observed state of the agent running on a specific switch and includes information about the switch itself as well as the state of the agent and applied configuration.

    Appears in: - Agent

    Field Description Default Validation version string Current running agent version installID string ID of the agent installation, used to track NOS re-installs runID string ID of the agent run, used to track NOS reboots lastHeartbeat Time Time of the last heartbeat from the agent lastAttemptTime Time Time of the last attempt to apply configuration lastAttemptGen integer Generation of the last attempt to apply configuration lastAppliedTime Time Time of the last successful configuration application lastAppliedGen integer Generation of the last successful configuration application state SwitchState Detailed switch state updated with each heartbeat conditions Condition array Conditions of the agent, includes readiness marker for use with kubectl wait"},{"location":"reference/api/#bgpmessages","title":"BGPMessages","text":"

    Appears in: - SwitchStateBGPNeighbor

    Field Description Default Validation received BGPMessagesCounters sent BGPMessagesCounters"},{"location":"reference/api/#bgpmessagescounters","title":"BGPMessagesCounters","text":"

    Appears in: - BGPMessages

    Field Description Default Validation capability integer keepalive integer notification integer open integer routeRefresh integer update integer"},{"location":"reference/api/#bgpneighborsessionstate","title":"BGPNeighborSessionState","text":"

    Underlying type: string

    Appears in: - SwitchStateBGPNeighbor

    Field Description `` idle connect active openSent openConfirm established"},{"location":"reference/api/#bgppeertype","title":"BGPPeerType","text":"

    Underlying type: string

    Appears in: - SwitchStateBGPNeighbor

    Field Description `` internal external"},{"location":"reference/api/#operstatus","title":"OperStatus","text":"

    Underlying type: string

    Appears in: - SwitchStateInterface

    Field Description `` up down testing unknown dormant notPresent lowerLayerDown"},{"location":"reference/api/#switchstate","title":"SwitchState","text":"

    Appears in: - AgentStatus

    Field Description Default Validation nos SwitchStateNOS Information about the switch and NOS interfaces object (keys:string, values:SwitchStateInterface) Switch interfaces state (incl. physical, management and port channels) breakouts object (keys:string, values:SwitchStateBreakout) Breakout ports state (port -> breakout state) bgpNeighbors object (keys:string, values:map[string]SwitchStateBGPNeighbor) State of all BGP neighbors (VRF -> neighbor address -> state) platform SwitchStatePlatform State of the switch platform (fans, PSUs, sensors) criticalResources SwitchStateCRM State of the critical resources (ACLs, routes, etc.)"},{"location":"reference/api/#switchstatebgpneighbor","title":"SwitchStateBGPNeighbor","text":"

    Appears in: - SwitchState

    Field Description Default Validation connectionsDropped integer enabled boolean establishedTransitions integer lastEstablished Time lastRead Time lastResetReason string lastResetTime Time lastWrite Time localAS integer messages BGPMessages peerAS integer peerGroup string peerPort integer peerType BGPPeerType remoteRouterID string sessionState BGPNeighborSessionState shutdownMessage string prefixes object (keys:string, values:SwitchStateBGPNeighborPrefixes)"},{"location":"reference/api/#switchstatebgpneighborprefixes","title":"SwitchStateBGPNeighborPrefixes","text":"

    Appears in: - SwitchStateBGPNeighbor

    Field Description Default Validation received integer receivedPrePolicy integer sent integer"},{"location":"reference/api/#switchstatebreakout","title":"SwitchStateBreakout","text":"

    Appears in: - SwitchState

    Field Description Default Validation mode string nosMembers string array status string"},{"location":"reference/api/#switchstatecrm","title":"SwitchStateCRM","text":"

    Appears in: - SwitchState

    Field Description Default Validation aclStats SwitchStateCRMACLStats stats SwitchStateCRMStats"},{"location":"reference/api/#switchstatecrmacldetails","title":"SwitchStateCRMACLDetails","text":"

    Appears in: - SwitchStateCRMACLInfo

    Field Description Default Validation groupsAvailable integer groupsUsed integer tablesAvailable integer tablesUsed integer"},{"location":"reference/api/#switchstatecrmaclinfo","title":"SwitchStateCRMACLInfo","text":"

    Appears in: - SwitchStateCRMACLStats

    Field Description Default Validation lag SwitchStateCRMACLDetails port SwitchStateCRMACLDetails rif SwitchStateCRMACLDetails switch SwitchStateCRMACLDetails vlan SwitchStateCRMACLDetails"},{"location":"reference/api/#switchstatecrmaclstats","title":"SwitchStateCRMACLStats","text":"

    Appears in: - SwitchStateCRM

    Field Description Default Validation egress SwitchStateCRMACLInfo ingress SwitchStateCRMACLInfo"},{"location":"reference/api/#switchstatecrmstats","title":"SwitchStateCRMStats","text":"

    Appears in: - SwitchStateCRM

    Field Description Default Validation dnatEntriesAvailable integer dnatEntriesUsed integer fdbEntriesAvailable integer fdbEntriesUsed integer ipmcEntriesAvailable integer ipmcEntriesUsed integer ipv4NeighborsAvailable integer ipv4NeighborsUsed integer ipv4NexthopsAvailable integer ipv4NexthopsUsed integer ipv4RoutesAvailable integer ipv4RoutesUsed integer ipv6NeighborsAvailable integer ipv6NeighborsUsed integer ipv6NexthopsAvailable integer ipv6NexthopsUsed integer ipv6RoutesAvailable integer ipv6RoutesUsed integer nexthopGroupMembersAvailable integer nexthopGroupMembersUsed integer nexthopGroupsAvailable integer nexthopGroupsUsed integer snatEntriesAvailable integer snatEntriesUsed integer"},{"location":"reference/api/#switchstateinterface","title":"SwitchStateInterface","text":"

    Appears in: - SwitchState

    Field Description Default Validation enabled boolean adminStatus AdminStatus operStatus OperStatus mac string lastChanged Time speed string counters SwitchStateInterfaceCounters transceiver SwitchStateTransceiver lldpNeighbors SwitchStateLLDPNeighbor array"},{"location":"reference/api/#switchstateinterfacecounters","title":"SwitchStateInterfaceCounters","text":"

    Appears in: - SwitchStateInterface

    Field Description Default Validation inBitsPerSecond float inDiscards integer inErrors integer inPktsPerSecond float inUtilization integer lastClear Time outBitsPerSecond float outDiscards integer outErrors integer outPktsPerSecond float outUtilization integer"},{"location":"reference/api/#switchstatelldpneighbor","title":"SwitchStateLLDPNeighbor","text":"

    Appears in: - SwitchStateInterface

    Field Description Default Validation chassisID string systemName string systemDescription string portID string portDescription string manufacturer string model string serialNumber string"},{"location":"reference/api/#switchstatenos","title":"SwitchStateNOS","text":"

    SwitchStateNOS contains information about the switch and NOS received from the switch itself by the agent

    Appears in: - SwitchState

    Field Description Default Validation asicVersion string ASIC name, such as \"broadcom\" or \"vs\" buildCommit string NOS build commit buildDate string NOS build date builtBy string NOS build user configDbVersion string NOS config DB version, such as \"version_4_2_1\" distributionVersion string Distribution version, such as \"Debian 10.13\" hardwareVersion string Hardware version, such as \"X01\" hwskuVersion string Hwsku version, such as \"DellEMC-S5248f-P-25G-DPB\" kernelVersion string Kernel version, such as \"5.10.0-21-amd64\" mfgName string Manufacturer name, such as \"Dell EMC\" platformName string Platform name, such as \"x86_64-dellemc_s5248f_c3538-r0\" productDescription string NOS product description, such as \"Enterprise SONiC Distribution by Broadcom - Enterprise Base package\" productVersion string NOS product version, empty for Broadcom SONiC serialNumber string Switch serial number softwareVersion string NOS software version, such as \"4.2.0-Enterprise_Base\" uptime string Switch uptime, such as \"21:21:27 up 1 day, 23:26, 0 users, load average: 1.92, 1.99, 2.00 \""},{"location":"reference/api/#switchstateplatform","title":"SwitchStatePlatform","text":"

    Appears in: - SwitchState

    Field Description Default Validation fans object (keys:string, values:SwitchStatePlatformFan) psus object (keys:string, values:SwitchStatePlatformPSU) temperature object (keys:string, values:SwitchStatePlatformTemperature)"},{"location":"reference/api/#switchstateplatformfan","title":"SwitchStatePlatformFan","text":"

    Appears in: - SwitchStatePlatform

    Field Description Default Validation direction string speed float presence boolean status boolean"},{"location":"reference/api/#switchstateplatformpsu","title":"SwitchStatePlatformPSU","text":"

    Appears in: - SwitchStatePlatform

    Field Description Default Validation inputCurrent float inputPower float inputVoltage float outputCurrent float outputPower float outputVoltage float presence boolean status boolean"},{"location":"reference/api/#switchstateplatformtemperature","title":"SwitchStatePlatformTemperature","text":"

    Appears in: - SwitchStatePlatform

    Field Description Default Validation temperature float alarms string highThreshold float criticalHighThreshold float lowThreshold float criticalLowThreshold float"},{"location":"reference/api/#switchstatetransceiver","title":"SwitchStateTransceiver","text":"

    Appears in: - SwitchStateInterface

    Field Description Default Validation description string cableClass string formFactor string connectorType string present string cableLength float operStatus string temperature float voltage float serialNumber string vendor string vendorPart string vendorOUI string vendorRev string"},{"location":"reference/api/#dhcpgithedgehogcomv1beta1","title":"dhcp.githedgehog.com/v1beta1","text":"

    Package v1beta1 contains API Schema definitions for the dhcp v1beta1 API group. It is an internal API group for the Hedgehog DHCP server configuration and for storing leases, making them available to the end user through the API. Not intended to be modified by the user.

    "},{"location":"reference/api/#resource-types_1","title":"Resource Types","text":""},{"location":"reference/api/#dhcpallocated","title":"DHCPAllocated","text":"

    DHCPAllocated is a single allocated IP with expiry time and hostname from a DHCP request; it is effectively a DHCP lease

    Appears in: - DHCPSubnetStatus

    Field Description Default Validation ip string Allocated IP address expiry Time Expiry time of the lease hostname string Hostname from DHCP request"},{"location":"reference/api/#dhcpsubnet","title":"DHCPSubnet","text":"

    DHCPSubnet is the configuration (spec) for the Hedgehog DHCP server and storage for the leases (status). It is an internal API group, but it makes allocated IP / lease information available to the end user through the API. Not intended to be modified by the user.

    Field Description Default Validation apiVersion string dhcp.githedgehog.com/v1beta1 kind string DHCPSubnet metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec DHCPSubnetSpec Spec is the desired state of the DHCPSubnet status DHCPSubnetStatus Status is the observed state of the DHCPSubnet"},{"location":"reference/api/#dhcpsubnetspec","title":"DHCPSubnetSpec","text":"

    DHCPSubnetSpec defines the desired state of DHCPSubnet

    Appears in: - DHCPSubnet

    Field Description Default Validation subnet string Full VPC subnet name (including VPC name), such as \"vpc-0/default\" cidrBlock string CIDR block to use for VPC subnet, such as \"10.10.10.0/24\" gateway string Gateway, such as 10.10.10.1 startIP string Start IP from the CIDRBlock to allocate IPs, such as 10.10.10.10 endIP string End IP from the CIDRBlock to allocate IPs, such as 10.10.10.99 vrf string VRF name to identify specific VPC (will be added to DHCP packets by DHCP relay in suboption 151), such as \"VrfVvpc-1\" as it's named on switch circuitID string VLAN ID to identify specific subnet within the VPC, such as \"Vlan1000\" as it's named on switch pxeURL string PXEURL (optional) to identify the PXE server to use to boot hosts connected to this segment such as http://10.10.10.99/bootfilename or tftp://10.10.10.99/bootfilename, http query strings are not supported dnsServers string array DNSServers (optional) to configure Domain Name Servers for this particular segment such as: 10.10.10.1, 10.10.10.2 timeServers string array TimeServers (optional) NTP server addresses to configure for time servers for this particular segment such as: 10.10.10.1, 10.10.10.2 interfaceMTU integer InterfaceMTU (optional) is the MTU setting that the DHCP server will send to the clients. It is dependent on the client to honor this option. defaultURL string DefaultURL (optional) is the option 114 \"default-url\" to be sent to the clients"},{"location":"reference/api/#dhcpsubnetstatus","title":"DHCPSubnetStatus","text":"
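Assembled from the field descriptions above, a DHCPSubnet object might look like the following sketch. All names and addresses are illustrative, and this object is managed by the Fabric itself rather than created by users:

```yaml
apiVersion: dhcp.githedgehog.com/v1beta1
kind: DHCPSubnet
metadata:
  name: vpc-0--default        # hypothetical object name
spec:
  subnet: vpc-0/default       # full VPC subnet name, including the VPC name
  cidrBlock: 10.10.10.0/24
  gateway: 10.10.10.1
  startIP: 10.10.10.10        # first IP handed out from the CIDR block
  endIP: 10.10.10.99          # last IP handed out from the CIDR block
  vrf: VrfVvpc-0              # VRF name as it appears on the switch
  circuitID: Vlan1000         # VLAN ID as it is named on the switch
```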

    DHCPSubnetStatus defines the observed state of DHCPSubnet

    Appears in: - DHCPSubnet

    Field Description Default Validation allocated object (keys:string, values:DHCPAllocated) Allocated is a map of allocated IPs with expiry time and hostname from DHCP requests"},{"location":"reference/api/#vpcgithedgehogcomv1beta1","title":"vpc.githedgehog.com/v1beta1","text":"

    Package v1beta1 contains API Schema definitions for the vpc v1beta1 API group. It is a public API group for the VPC and External APIs. Intended to be used by the user.

    "},{"location":"reference/api/#resource-types_2","title":"Resource Types","text":""},{"location":"reference/api/#external","title":"External","text":"

    The External object represents an external system connected to the Fabric and available to a specific IPv4Namespace. Users can peer with the external system by specifying the name of the External object, without needing to worry about the details of how the external system is attached to the Fabric.

    Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string External metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalSpec Spec is the desired state of the External status ExternalStatus Status is the observed state of the External"},{"location":"reference/api/#externalattachment","title":"ExternalAttachment","text":"

    ExternalAttachment defines how a specific switch is connected to an external system (External object). Effectively it represents a BGP peering between the switch and the external system, including all needed configuration.

    Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string ExternalAttachment metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalAttachmentSpec Spec is the desired state of the ExternalAttachment status ExternalAttachmentStatus Status is the observed state of the ExternalAttachment"},{"location":"reference/api/#externalattachmentneighbor","title":"ExternalAttachmentNeighbor","text":"

    ExternalAttachmentNeighbor defines the BGP neighbor configuration for the external attachment

    Appears in: - ExternalAttachmentSpec

    Field Description Default Validation asn integer ASN is the ASN of the BGP neighbor ip string IP is the IP address of the BGP neighbor to peer with"},{"location":"reference/api/#externalattachmentspec","title":"ExternalAttachmentSpec","text":"

    ExternalAttachmentSpec defines the desired state of ExternalAttachment

    Appears in: - ExternalAttachment

    Field Description Default Validation external string External is the name of the External object this attachment belongs to connection string Connection is the name of the Connection object this attachment belongs to (essentially the name of the switch/port) switch ExternalAttachmentSwitch Switch is the switch port configuration for the external attachment neighbor ExternalAttachmentNeighbor Neighbor is the BGP neighbor configuration for the external attachment"},{"location":"reference/api/#externalattachmentstatus","title":"ExternalAttachmentStatus","text":"
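Based on the fields above, a hypothetical ExternalAttachment could be sketched as follows. The object name, Connection name, VLAN, and addresses are all illustrative, not taken from this reference:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: ExternalAttachment
metadata:
  name: external-1--leaf-1      # hypothetical object name
spec:
  external: external-1          # External object this attachment belongs to
  connection: leaf-1--external  # Connection object (switch/port) to use
  switch:
    vlan: 100                   # subinterface VLAN; 0 if no VLAN is used
    ip: 192.168.100.2/24        # subinterface IP on the switch port
  neighbor:
    asn: 65100                  # ASN of the BGP neighbor
    ip: 192.168.100.1           # IP of the BGP neighbor to peer with
```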

    ExternalAttachmentStatus defines the observed state of ExternalAttachment

    Appears in: - ExternalAttachment

    "},{"location":"reference/api/#externalattachmentswitch","title":"ExternalAttachmentSwitch","text":"

    ExternalAttachmentSwitch defines the switch port configuration for the external attachment

    Appears in: - ExternalAttachmentSpec

    Field Description Default Validation vlan integer VLAN (optional) is the VLAN ID used for the subinterface on a switch port specified in the connection, set to 0 if no VLAN is used ip string IP is the IP address of the subinterface on a switch port specified in the connection"},{"location":"reference/api/#externalpeering","title":"ExternalPeering","text":"

    ExternalPeering is the Schema for the externalpeerings API

    Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string ExternalPeering metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalPeeringSpec Spec is the desired state of the ExternalPeering status ExternalPeeringStatus Status is the observed state of the ExternalPeering"},{"location":"reference/api/#externalpeeringspec","title":"ExternalPeeringSpec","text":"

    ExternalPeeringSpec defines the desired state of ExternalPeering

    Appears in: - ExternalPeering

    Field Description Default Validation permit ExternalPeeringSpecPermit Permit defines the peering policy - which VPC and External to peer with and which subnets/prefixes to permit"},{"location":"reference/api/#externalpeeringspecexternal","title":"ExternalPeeringSpecExternal","text":"

    ExternalPeeringSpecExternal defines the External-side of the configuration to peer with

    Appears in: - ExternalPeeringSpecPermit

    Field Description Default Validation name string Name is the name of the External to peer with prefixes ExternalPeeringSpecPrefix array Prefixes is the list of prefixes to permit from the External to the VPC"},{"location":"reference/api/#externalpeeringspecpermit","title":"ExternalPeeringSpecPermit","text":"

    ExternalPeeringSpecPermit defines the peering policy - which VPC and External to peer with and which subnets/prefixes to permit

    Appears in: - ExternalPeeringSpec

    Field Description Default Validation vpc ExternalPeeringSpecVPC VPC is the VPC-side of the configuration to peer with external ExternalPeeringSpecExternal External is the External-side of the configuration to peer with"},{"location":"reference/api/#externalpeeringspecprefix","title":"ExternalPeeringSpecPrefix","text":"

    ExternalPeeringSpecPrefix defines the prefix to permit from the External to the VPC

    Appears in: - ExternalPeeringSpecExternal

    Field Description Default Validation prefix string Prefix is the subnet to permit from the External to the VPC, e.g. 0.0.0.0/0 for any route including the default route. It matches any prefix length less than or equal to 32, effectively permitting all prefixes within the specified one."},{"location":"reference/api/#externalpeeringspecvpc","title":"ExternalPeeringSpecVPC","text":"

    ExternalPeeringSpecVPC defines the VPC-side of the configuration to peer with

    Appears in: - ExternalPeeringSpecPermit

    Field Description Default Validation name string Name is the name of the VPC to peer with subnets string array Subnets is the list of subnets to advertise from VPC to the External"},{"location":"reference/api/#externalpeeringstatus","title":"ExternalPeeringStatus","text":"
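Putting the permit structure together, a hedged sketch of an ExternalPeering that advertises one VPC subnet and accepts any route from the External might look like this (all names are illustrative):

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: ExternalPeering
metadata:
  name: vpc-1--external-1   # hypothetical object name
spec:
  permit:
    vpc:
      name: vpc-1
      subnets:
        - default           # advertise vpc-1/default to the External
    external:
      name: external-1
      prefixes:
        - prefix: 0.0.0.0/0 # permit any route, including the default route
```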

    ExternalPeeringStatus defines the observed state of ExternalPeering

    Appears in: - ExternalPeering

    "},{"location":"reference/api/#externalspec","title":"ExternalSpec","text":"

    ExternalSpec describes the IPv4 namespace the External belongs to and the inbound/outbound communities used to filter routes from/to the external system.

    Appears in: - External

    Field Description Default Validation ipv4Namespace string IPv4Namespace is the name of the IPv4Namespace this External belongs to inboundCommunity string InboundCommunity is the inbound community to filter routes from the external system (e.g. 65102:5000) outboundCommunity string OutboundCommunity is the outbound community that all outbound routes will be stamped with (e.g. 50000:50001)"},{"location":"reference/api/#externalstatus","title":"ExternalStatus","text":"
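The spec maps directly to a small object; a sketch built from the fields above, with illustrative name and community values:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: External
metadata:
  name: external-1                # hypothetical object name
spec:
  ipv4Namespace: default          # IPv4Namespace this External belongs to
  inboundCommunity: 65102:5000    # filter routes coming from the external system
  outboundCommunity: 50000:50001  # stamped on all outbound routes
```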

    ExternalStatus defines the observed state of External

    Appears in: - External

    "},{"location":"reference/api/#ipv4namespace","title":"IPv4Namespace","text":"

    IPv4Namespace represents a namespace for VPC subnet allocation. All VPC subnets within a single IPv4Namespace are non-overlapping. Users can create multiple IPv4Namespaces to reuse the same VPC subnets.

    Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string IPv4Namespace metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec IPv4NamespaceSpec Spec is the desired state of the IPv4Namespace status IPv4NamespaceStatus Status is the observed state of the IPv4Namespace"},{"location":"reference/api/#ipv4namespacespec","title":"IPv4NamespaceSpec","text":"

    IPv4NamespaceSpec defines the desired state of IPv4Namespace

    Appears in: - IPv4Namespace

    Field Description Default Validation subnets string array Subnets is the list of subnets to allocate VPC subnets from; they must not overlap with each other or with Fabric reserved subnets MaxItems: 20 MinItems: 1"},{"location":"reference/api/#ipv4namespacestatus","title":"IPv4NamespaceStatus","text":"
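With only one required field, an IPv4Namespace sketch is short; the pool address below is illustrative:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: IPv4Namespace
metadata:
  name: default
spec:
  subnets:
    - 10.0.0.0/16   # pool from which VPC subnets are allocated (1-20 entries)
```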

    IPv4NamespaceStatus defines the observed state of IPv4Namespace

    Appears in: - IPv4Namespace

    "},{"location":"reference/api/#vpc","title":"VPC","text":"

    VPC is a Virtual Private Cloud; similar to a public cloud VPC, it provides an isolated private network for resources, with support for multiple subnets, each with user-provided VLANs and on-demand DHCP.

    Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string VPC metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCSpec Spec is the desired state of the VPC status VPCStatus Status is the observed state of the VPC"},{"location":"reference/api/#vpcattachment","title":"VPCAttachment","text":"

    VPCAttachment is the Schema for the vpcattachments API

    Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string VPCAttachment metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCAttachmentSpec Spec is the desired state of the VPCAttachment status VPCAttachmentStatus Status is the observed state of the VPCAttachment"},{"location":"reference/api/#vpcattachmentspec","title":"VPCAttachmentSpec","text":"

    VPCAttachmentSpec defines the desired state of VPCAttachment

    Appears in: - VPCAttachment

    Field Description Default Validation subnet string Subnet is the full name of the VPC subnet to attach to, such as \"vpc-1/default\" connection string Connection is the name of the connection to attach to the VPC nativeVLAN boolean NativeVLAN is the flag to indicate if the native VLAN should be used for attaching the VPC subnet"},{"location":"reference/api/#vpcattachmentstatus","title":"VPCAttachmentStatus","text":"
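A VPCAttachment ties a VPC subnet to a Connection; a sketch using hypothetical object names (the Connection name format is an assumption for illustration):

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  name: vpc-1--default--server-1   # hypothetical object name
spec:
  subnet: vpc-1/default            # full VPC subnet name to attach
  connection: server-1--mclag--leaf-1--leaf-2  # hypothetical Connection name
```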

    VPCAttachmentStatus defines the observed state of VPCAttachment

    Appears in: - VPCAttachment

    "},{"location":"reference/api/#vpcdhcp","title":"VPCDHCP","text":"

    VPCDHCP defines the on-demand DHCP configuration for the subnet

    Appears in: - VPCSubnet

    Field Description Default Validation relay string Relay is the DHCP relay IP address; if specified, the DHCP server will be disabled enable boolean Enable enables the DHCP server for the subnet range VPCDHCPRange Range (optional) is the DHCP range for the subnet if the DHCP server is enabled options VPCDHCPOptions Options (optional) is the DHCP options for the subnet if the DHCP server is enabled"},{"location":"reference/api/#vpcdhcpoptions","title":"VPCDHCPOptions","text":"

    VPCDHCPOptions defines the DHCP options for the subnet if DHCP server is enabled

    Appears in: - VPCDHCP

    Field Description Default Validation pxeURL string PXEURL (optional) to identify the PXE server to use to boot hosts connected to this segment such as http://10.10.10.99/bootfilename or tftp://10.10.10.99/bootfilename, http query strings are not supported dnsServers string array DNSServers (optional) to configure Domain Name Servers for this particular segment such as: 10.10.10.1, 10.10.10.2 Optional: {} timeServers string array TimeServers (optional) NTP server addresses to configure for time servers for this particular segment such as: 10.10.10.1, 10.10.10.2 Optional: {} interfaceMTU integer InterfaceMTU (optional) is the MTU setting that the DHCP server will send to the clients. It is dependent on the client to honor this option."},{"location":"reference/api/#vpcdhcprange","title":"VPCDHCPRange","text":"

    VPCDHCPRange defines the DHCP range for the subnet if DHCP server is enabled

    Appears in: - VPCDHCP

    Field Description Default Validation start string Start is the start IP address of the DHCP range end string End is the end IP address of the DHCP range"},{"location":"reference/api/#vpcpeer","title":"VPCPeer","text":"

    Appears in: - VPCPeeringSpec

    Field Description Default Validation subnets string array Subnets is the list of subnets to advertise from current VPC to the peer VPC MaxItems: 10 MinItems: 1"},{"location":"reference/api/#vpcpeering","title":"VPCPeering","text":"

    VPCPeering represents a peering between two VPCs with corresponding filtering rules. A minimal example of a VPC peering, showing vpc-1 to vpc-2 peering with all subnets allowed:

    spec:\n  permit:\n  - vpc-1: {}\n    vpc-2: {}\n
    Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string VPCPeering metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCPeeringSpec Spec is the desired state of the VPCPeering status VPCPeeringStatus Status is the observed state of the VPCPeering"},{"location":"reference/api/#vpcpeeringspec","title":"VPCPeeringSpec","text":"

    VPCPeeringSpec defines the desired state of VPCPeering

    Appears in: - VPCPeering

    Field Description Default Validation remote string permit map[string]VPCPeer array Permit defines a list of the peering policies - which VPC subnets will have access to the peer VPC subnets. MaxItems: 10 MinItems: 1"},{"location":"reference/api/#vpcpeeringstatus","title":"VPCPeeringStatus","text":"
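Extending the minimal example above, a permit entry can restrict the peering to specific subnets on each side. A hedged sketch with illustrative names:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCPeering
metadata:
  name: vpc-1--vpc-2        # hypothetical object name
spec:
  permit:
    - vpc-1:
        subnets: [default]  # only vpc-1/default participates in the peering
      vpc-2:
        subnets: [default]  # only vpc-2/default participates in the peering
```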

    VPCPeeringStatus defines the observed state of VPCPeering

    Appears in: - VPCPeering

    "},{"location":"reference/api/#vpcspec","title":"VPCSpec","text":"

    VPCSpec defines the desired state of VPC. At least one subnet is required.

    Appears in: - VPC

    Field Description Default Validation subnets object (keys:string, values:VPCSubnet) Subnets is the list of VPC subnets to configure ipv4Namespace string IPv4Namespace is the name of the IPv4Namespace this VPC belongs to (if not specified, \"default\" is used) vlanNamespace string VLANNamespace is the name of the VLANNamespace this VPC belongs to (if not specified, \"default\" is used) defaultIsolated boolean DefaultIsolated sets the default behavior for isolated mode for the subnets (disabled by default) defaultRestricted boolean DefaultRestricted sets the default behavior for restricted mode for the subnets (disabled by default) permit string array array Permit defines a list of the access policies between the subnets within the VPC - each policy is a list of subnets that have access to each other. It is applied on top of the subnet isolation flag: if a subnet isn't isolated, it isn't required to be in a permit list, while if a subnet is marked as isolated, it must be in a permit list to have access to other subnets. staticRoutes VPCStaticRoute array StaticRoutes is the list of additional static routes for the VPC"},{"location":"reference/api/#vpcstaticroute","title":"VPCStaticRoute","text":"

    VPCStaticRoute defines the static route for the VPC

    Appears in: - VPCSpec

    Field Description Default Validation prefix string Prefix for the static route (mandatory), e.g. 10.42.0.0/24 nextHops string array NextHops for the static route (at least one is required), e.g. 10.99.0.0"},{"location":"reference/api/#vpcstatus","title":"VPCStatus","text":"
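As a fragment of a VPC spec, a static route built from these fields might look like the following (addresses are illustrative):

```yaml
spec:
  staticRoutes:
    - prefix: 10.42.0.0/24  # destination prefix (mandatory)
      nextHops:
        - 10.99.0.1         # at least one next hop is required
```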

    VPCStatus defines the observed state of VPC

    Appears in: - VPC

    "},{"location":"reference/api/#vpcsubnet","title":"VPCSubnet","text":"

    VPCSubnet defines the VPC subnet configuration

    Appears in: - VPCSpec

    Field Description Default Validation subnet string Subnet is the subnet CIDR block, such as \"10.0.0.0/24\", should belong to the IPv4Namespace and be unique within the namespace gateway string Gateway (optional) for the subnet, if not specified, the first IP (e.g. 10.0.0.1) in the subnet is used as the gateway dhcp VPCDHCP DHCP is the on-demand DHCP configuration for the subnet vlan integer VLAN is the VLAN ID for the subnet, should belong to the VLANNamespace and be unique within the namespace isolated boolean Isolated is the flag to enable isolated mode for the subnet which means no access to and from the other subnets within the VPC restricted boolean Restricted is the flag to enable restricted mode for the subnet which means no access between hosts within the subnet itself"},{"location":"reference/api/#wiringgithedgehogcomv1beta1","title":"wiring.githedgehog.com/v1beta1","text":"
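Combining VPCSpec, VPCSubnet, and VPCDHCP, a hedged sketch of a complete VPC with one DHCP-enabled subnet could look like this (names, VLAN, and addresses are illustrative):

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPC
metadata:
  name: vpc-1
spec:
  ipv4Namespace: default    # defaults to "default" if omitted
  vlanNamespace: default    # defaults to "default" if omitted
  subnets:
    default:
      subnet: 10.0.1.0/24   # must belong to the IPv4Namespace
      gateway: 10.0.1.1     # defaults to the first IP of the subnet if omitted
      vlan: 1001            # must belong to the VLANNamespace
      dhcp:
        enable: true
        range:
          start: 10.0.1.10
          end: 10.0.1.99
```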

    Package v1beta1 contains API Schema definitions for the wiring v1beta1 API group. It is a public API group mainly for the underlay definition, including Switches, Servers, and the wiring between them. Intended to be used by the user.

    "},{"location":"reference/api/#resource-types_3","title":"Resource Types","text":""},{"location":"reference/api/#baseportname","title":"BasePortName","text":"

    BasePortName defines the full name of the switch port

    Appears in: - ConnExternalLink - ConnFabricLinkSwitch - ConnStaticExternalLinkSwitch - ServerToSwitchLink - SwitchToSwitchLink

    Field Description Default Validation port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object."},{"location":"reference/api/#connbundled","title":"ConnBundled","text":"

    ConnBundled defines the bundled connection (port channel, single server to a single switch with multiple links)

    Appears in: - ConnectionSpec

    Field Description Default Validation links ServerToSwitchLink array Links is the list of server-to-switch links mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#conneslag","title":"ConnESLAG","text":"

    ConnESLAG defines the ESLAG connection (port channel, single server to 2-4 switches with multiple links)

    Appears in: - ConnectionSpec

    Field Description Default Validation links ServerToSwitchLink array Links is the list of server-to-switch links MinItems: 2 mtu integer MTU is the MTU to be configured on the switch port or port channel fallback boolean Fallback is an optional flag used to indicate that one of the links in the LACP port channel should be used as a fallback link"},{"location":"reference/api/#connexternal","title":"ConnExternal","text":"

    ConnExternal defines the external connection (single switch to a single external device with a single link)

    Appears in: - ConnectionSpec

    Field Description Default Validation link ConnExternalLink Link is the external connection link"},{"location":"reference/api/#connexternallink","title":"ConnExternalLink","text":"

    ConnExternalLink defines the external connection link

    Appears in: - ConnExternal

    Field Description Default Validation switch BasePortName"},{"location":"reference/api/#connfabric","title":"ConnFabric","text":"

    ConnFabric defines the fabric connection (single spine to a single leaf with at least one link)

    Appears in: - ConnectionSpec

    Field Description Default Validation links FabricLink array Links is the list of spine-to-leaf links MinItems: 1"},{"location":"reference/api/#connfabriclinkswitch","title":"ConnFabricLinkSwitch","text":"

    ConnFabricLinkSwitch defines the switch side of the fabric link

    Appears in: - FabricLink

    Field Description Default Validation port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object. ip string IP is the IP address of the switch side of the fabric link (switch port configuration) Pattern: ^((25[0-5]\\|(2[0-4]\\|1\\d\\|[1-9]\\|)\\d)\\.?\\b)\\{4\\}/([1-2]?[0-9]\\|3[0-2])$"},{"location":"reference/api/#connmclag","title":"ConnMCLAG","text":"

    ConnMCLAG defines the MCLAG connection (port channel, single server to pair of switches with multiple links)

    Appears in: - ConnectionSpec

    Field Description Default Validation links ServerToSwitchLink array Links is the list of server-to-switch links MinItems: 2 mtu integer MTU is the MTU to be configured on the switch port or port channel fallback boolean Fallback is an optional flag used to indicate that one of the links in the LACP port channel should be used as a fallback link"},{"location":"reference/api/#connmclagdomain","title":"ConnMCLAGDomain","text":"

    ConnMCLAGDomain defines the MCLAG domain connection, which makes two switches into a single logical switch or redundancy group and allows the use of MCLAG connections to connect servers in a multi-homed way.

    Appears in: - ConnectionSpec

    Field Description Default Validation peerLinks SwitchToSwitchLink array PeerLinks is the list of peer links between the switches, used to pass server traffic between switches MinItems: 1 sessionLinks SwitchToSwitchLink array SessionLinks is the list of session links between the switches, used only to pass MCLAG control plane and BGP traffic between switches MinItems: 1"},{"location":"reference/api/#connstaticexternal","title":"ConnStaticExternal","text":"
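Assuming a SwitchToSwitchLink pairs two BasePortName entries (here called switch1 and switch2, an assumption not shown in this chunk), an MCLAG domain connection could be sketched as:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: leaf-1--mclag-domain--leaf-2   # hypothetical object name
spec:
  mclagDomain:
    peerLinks:                   # carry server traffic between the switches
      - switch1: {port: leaf-1/Ethernet2}
        switch2: {port: leaf-2/Ethernet2}
    sessionLinks:                # carry MCLAG control plane and BGP traffic only
      - switch1: {port: leaf-1/Ethernet3}
        switch2: {port: leaf-2/Ethernet3}
```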

    ConnStaticExternal defines the static external connection (single switch to a single external device with a single link)

    Appears in: - ConnectionSpec

    Field Description Default Validation link ConnStaticExternalLink Link is the static external connection link withinVPC string WithinVPC is the optional VPC name to provision the static external connection within the VPC VRF instead of default one to make resource available to the specific VPC"},{"location":"reference/api/#connstaticexternallink","title":"ConnStaticExternalLink","text":"

    ConnStaticExternalLink defines the static external connection link

    Appears in: - ConnStaticExternal

    Field Description Default Validation switch ConnStaticExternalLinkSwitch Switch is the switch side of the static external connection link"},{"location":"reference/api/#connstaticexternallinkswitch","title":"ConnStaticExternalLinkSwitch","text":"

    ConnStaticExternalLinkSwitch defines the switch side of the static external connection link

    Appears in: - ConnStaticExternalLink

    Field Description Default Validation port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object. ip string IP is the IP address of the switch side of the static external connection link (switch port configuration) Pattern: ^((25[0-5]\\|(2[0-4]\\|1\\d\\|[1-9]\\|)\\d)\\.?\\b)\\{4\\}/([1-2]?[0-9]\\|3[0-2])$ nextHop string NextHop is the next hop IP address for static routes that will be created for the subnets Pattern: ^((25[0-5]\\|(2[0-4]\\|1\\d\\|[1-9]\\|)\\d)\\.?\\b)\\{4\\}$ subnets string array Subnets is the list of subnets that will get static routes using the specified next hop vlan integer VLAN is the optional VLAN ID to be configured on the switch port"},{"location":"reference/api/#connunbundled","title":"ConnUnbundled","text":"
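From the fields above, a static external connection could be sketched as follows; the port, addresses, and subnets are illustrative:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: leaf-1--static-external   # hypothetical object name
spec:
  staticExternal:
    link:
      switch:
        port: leaf-1/Ethernet48
        ip: 172.31.1.1/24         # switch-side IP on the port
        nextHop: 172.31.1.254     # next hop for the generated static routes
        subnets:
          - 10.99.0.0/24          # routed via the next hop above
        vlan: 0                   # 0 means no VLAN is used
```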

    ConnUnbundled defines the unbundled connection (no port channel, single server to a single switch with a single link)

    Appears in: - ConnectionSpec

    Field Description Default Validation link ServerToSwitchLink Link is the server-to-switch link mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#connvpcloopback","title":"ConnVPCLoopback","text":"

    ConnVPCLoopback defines the VPC loopback connection (multiple port pairs on a single switch) that enables an automated workaround named \"VPC Loopback\", which helps avoid switch hardware limitations and traffic going through the CPU in some cases

    Appears in: - ConnectionSpec

    Field Description Default Validation links SwitchToSwitchLink array Links is the list of VPC loopback links MinItems: 1"},{"location":"reference/api/#connection","title":"Connection","text":"

    The Connection object represents logical and physical connections between devices in the Fabric (Switch, Server, and External objects). It is needed to define all physical and logical connections between the devices in the Wiring Diagram. The connection type is defined by the top-level field in the ConnectionSpec; exactly one of them can be used in a single Connection object.

    Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string Connection metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ConnectionSpec Spec is the desired state of the Connection status ConnectionStatus Status is the observed state of the Connection"},{"location":"reference/api/#connectionspec","title":"ConnectionSpec","text":"

    ConnectionSpec defines the desired state of Connection

    Appears in: - Connection

    Field Description Default Validation unbundled ConnUnbundled Unbundled defines the unbundled connection (no port channel, single server to a single switch with a single link) bundled ConnBundled Bundled defines the bundled connection (port channel, single server to a single switch with multiple links) mclag ConnMCLAG MCLAG defines the MCLAG connection (port channel, single server to pair of switches with multiple links) eslag ConnESLAG ESLAG defines the ESLAG connection (port channel, single server to 2-4 switches with multiple links) mclagDomain ConnMCLAGDomain MCLAGDomain defines the MCLAG domain connection which makes two switches into a single logical switch for server multi-homing fabric ConnFabric Fabric defines the fabric connection (single spine to a single leaf with at least one link) vpcLoopback ConnVPCLoopback VPCLoopback defines the VPC loopback connection (multiple port pairs on a single switch) for automated workaround external ConnExternal External defines the external connection (single switch to a single external device with a single link) staticExternal ConnStaticExternal StaticExternal defines the static external connection (single switch to a single external device with a single link)"},{"location":"reference/api/#connectionstatus","title":"ConnectionStatus","text":"

    ConnectionStatus defines the observed state of Connection

    Appears in: - Connection

    "},{"location":"reference/api/#fabriclink","title":"FabricLink","text":"

    FabricLink defines the fabric connection link

    Appears in: - ConnFabric

    Field Description Default Validation spine ConnFabricLinkSwitch Spine is the spine side of the fabric link leaf ConnFabricLinkSwitch Leaf is the leaf side of the fabric link"},{"location":"reference/api/#server","title":"Server","text":"

    Server is the Schema for the servers API

    Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string Server metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ServerSpec Spec is desired state of the server status ServerStatus Status is the observed state of the server"},{"location":"reference/api/#serverfacingconnectionconfig","title":"ServerFacingConnectionConfig","text":"

    ServerFacingConnectionConfig defines any server-facing connection (unbundled, bundled, mclag, etc.) configuration

    Appears in: - ConnBundled - ConnESLAG - ConnMCLAG - ConnUnbundled

    Field Description Default Validation mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#serverspec","title":"ServerSpec","text":"

    ServerSpec defines the desired state of Server

    Appears in: - Server

    Field Description Default Validation description string Description is a description of the server profile string Profile is the profile of the server, name of the ServerProfile object to be used for this server, currently not used by the Fabric"},{"location":"reference/api/#serverstatus","title":"ServerStatus","text":"

    ServerStatus defines the observed state of Server

    Appears in: - Server

    "},{"location":"reference/api/#servertoswitchlink","title":"ServerToSwitchLink","text":"

    ServerToSwitchLink defines the server-to-switch link

    Appears in: - ConnBundled - ConnESLAG - ConnMCLAG - ConnUnbundled

    Field Description Default Validation server BasePortName Server is the server side of the connection switch BasePortName Switch is the switch side of the connection"},{"location":"reference/api/#switch","title":"Switch","text":"

    Switch is the Schema for the switches API

    Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string Switch metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec SwitchSpec Spec is desired state of the switch status SwitchStatus Status is the observed state of the switch"},{"location":"reference/api/#switchboot","title":"SwitchBoot","text":"

    Appears in: - SwitchSpec

    Field Description Default Validation serial string Identify switch by serial number mac string Identify switch by MAC address of the management port"},{"location":"reference/api/#switchgroup","title":"SwitchGroup","text":"

    SwitchGroup is the marker API object to group switches together; a switch can belong to multiple groups

    Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string SwitchGroup metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec SwitchGroupSpec Spec is the desired state of the SwitchGroup status SwitchGroupStatus Status is the observed state of the SwitchGroup"},{"location":"reference/api/#switchgroupspec","title":"SwitchGroupSpec","text":"

    SwitchGroupSpec defines the desired state of SwitchGroup

    Appears in: - SwitchGroup

    "},{"location":"reference/api/#switchgroupstatus","title":"SwitchGroupStatus","text":"

    SwitchGroupStatus defines the observed state of SwitchGroup

    Appears in: - SwitchGroup

    "},{"location":"reference/api/#switchprofile","title":"SwitchProfile","text":"

    SwitchProfile represents switch capabilities and configuration

    Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string SwitchProfile metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec SwitchProfileSpec status SwitchProfileStatus"},{"location":"reference/api/#switchprofileconfig","title":"SwitchProfileConfig","text":"

    Defines switch-specific configuration options

    Appears in: - SwitchProfileSpec

    Field Description Default Validation maxPathsEBGP integer MaxPathsEBGP defines the maximum number of EBGP paths to be configured"},{"location":"reference/api/#switchprofilefeatures","title":"SwitchProfileFeatures","text":"

    Defines the features supported by a specific switch, which are later used for validating roles and Fabric API feature usage

    Appears in: - SwitchProfileSpec

    Field Description Default Validation subinterfaces boolean Subinterfaces defines if switch supports subinterfaces vxlan boolean VXLAN defines if switch supports VXLANs acls boolean ACLs defines if switch supports ACLs"},{"location":"reference/api/#switchprofileport","title":"SwitchProfilePort","text":"

    Defines a switch port configuration. Only one of Profile or Group can be set

    Appears in: - SwitchProfileSpec

    Field Description Default Validation nos string NOSName defines how the port is named in the NOS baseNOSName string BaseNOSName defines the base NOS name that could be used together with the profile to generate the actual NOS name (e.g. breakouts) label string Label defines the physical port label you can see on the actual switch group string If the port isn't directly manageable, group defines the group it belongs to, exclusive with profile profile string If the port is directly configurable, profile defines the profile it belongs to, exclusive with group management boolean Management defines if the port is a management port; it's a special case and it can't have a group or profile oniePortName string OniePortName defines the ONIE port name for management ports only"},{"location":"reference/api/#switchprofileportgroup","title":"SwitchProfilePortGroup","text":"

    Defines a switch port group configuration

    Appears in: - SwitchProfileSpec

    Field Description Default Validation nos string NOSName defines how group is named in the NOS profile string Profile defines the possible configuration profile for the group, could only have speed profile"},{"location":"reference/api/#switchprofileportprofile","title":"SwitchProfilePortProfile","text":"

    Defines a switch port profile configuration

    Appears in: - SwitchProfileSpec

    Field Description Default Validation speed SwitchProfilePortProfileSpeed Speed defines the speed configuration for the profile, exclusive with breakout breakout SwitchProfilePortProfileBreakout Breakout defines the breakout configuration for the profile, exclusive with speed autoNegAllowed boolean AutoNegAllowed defines if configuring auto-negotiation is allowed for the port autoNegDefault boolean AutoNegDefault defines the default auto-negotiation state for the port"},{"location":"reference/api/#switchprofileportprofilebreakout","title":"SwitchProfilePortProfileBreakout","text":"

    Defines a switch port profile breakout configuration

    Appears in: - SwitchProfilePortProfile

    Field Description Default Validation default string Default defines the default breakout mode for the profile supported object (keys:string, values:SwitchProfilePortProfileBreakoutMode) Supported defines the supported breakout modes for the profile with the NOS name offsets"},{"location":"reference/api/#switchprofileportprofilebreakoutmode","title":"SwitchProfilePortProfileBreakoutMode","text":"

    Defines a switch port profile breakout mode configuration

    Appears in: - SwitchProfilePortProfileBreakout

    Field Description Default Validation offsets string array Offsets defines the breakout NOS port name offset from the port NOS Name for each breakout mode"},{"location":"reference/api/#switchprofileportprofilespeed","title":"SwitchProfilePortProfileSpeed","text":"

    Defines a switch port profile speed configuration

    Appears in: - SwitchProfilePortProfile

    Field Description Default Validation default string Default defines the default speed for the profile supported string array Supported defines the supported speeds for the profile"},{"location":"reference/api/#switchprofilespec","title":"SwitchProfileSpec","text":"

    SwitchProfileSpec defines the desired state of SwitchProfile

    Appears in: - SwitchProfile

    Field Description Default Validation displayName string DisplayName defines the human-readable name of the switch otherNames string array OtherNames defines alternative names for the switch features SwitchProfileFeatures Features defines the features supported by the switch config SwitchProfileConfig Config defines the switch-specific configuration options ports object (keys:string, values:SwitchProfilePort) Ports defines the switch port configuration portGroups object (keys:string, values:SwitchProfilePortGroup) PortGroups defines the switch port group configuration portProfiles object (keys:string, values:SwitchProfilePortProfile) PortProfiles defines the switch port profile configuration nosType NOSType NOSType defines the NOS type to be used for the switch platform string Platform is what is expected to be requested by ONIE and displayed in the NOS"},{"location":"reference/api/#switchprofilestatus","title":"SwitchProfileStatus","text":"

    SwitchProfileStatus defines the observed state of SwitchProfile

    Appears in: - SwitchProfile

    "},{"location":"reference/api/#switchredundancy","title":"SwitchRedundancy","text":"

    SwitchRedundancy is the switch redundancy configuration, which includes the name of the redundancy group the switch belongs to and its type, used for both MCLAG and ESLAG connections. It defines how redundancy will be configured and handled on the switch as well as which connection types will be available. If not specified, the switch will not be part of any redundancy group. If the group name isn't empty, the type must be specified as well, and the name should match one of the SwitchGroup objects.

    Appears in: - SwitchSpec

    Field Description Default Validation group string Group is the name of the redundancy group switch belongs to type RedundancyType Type is the type of the redundancy group, could be mclag or eslag"},{"location":"reference/api/#switchrole","title":"SwitchRole","text":"
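As a short sketch, a switch participating in an MCLAG pair would carry a redundancy block like this (the group name must match an existing SwitchGroup object; names are hypothetical):

```yaml
spec:
  redundancy:
    group: mclag-1   # name of a SwitchGroup object
    type: mclag      # or eslag
```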

    Underlying type: string

    SwitchRole is the role of the switch: spine, server-leaf, border-leaf, mixed-leaf, or virtual-edge

    Validation: - Enum: [spine server-leaf border-leaf mixed-leaf virtual-edge]

    Appears in: - SwitchSpec

    Field Description spine server-leaf border-leaf mixed-leaf virtual-edge"},{"location":"reference/api/#switchspec","title":"SwitchSpec","text":"

    SwitchSpec defines the desired state of Switch

    Appears in: - Switch

    Field Description Default Validation role SwitchRole Role is the role of the switch: spine, server-leaf, border-leaf, mixed-leaf, or virtual-edge Enum: [spine server-leaf border-leaf mixed-leaf virtual-edge] Required: {} description string Description is a description of the switch profile string Profile is the profile of the switch, name of the SwitchProfile object to be used for this switch, currently not used by the Fabric groups string array Groups is a list of switch groups the switch belongs to redundancy SwitchRedundancy Redundancy is the switch redundancy configuration, including the name of the redundancy group the switch belongs to and its type, used for both MCLAG and ESLAG connections vlanNamespaces string array VLANNamespaces is a list of VLAN namespaces the switch is part of, their VLAN ranges must not overlap asn integer ASN is the ASN of the switch ip string IP is the IP of the switch that can be used to access it from other switches and control nodes in the Fabric vtepIP string VTEPIP is the VTEP IP of the switch protocolIP string ProtocolIP is used as BGP Router ID for switch configuration portGroupSpeeds object (keys:string, values:string) PortGroupSpeeds is a map of port group speeds, key is the port group name, value is the speed, such as '\"2\": 10G' portSpeeds object (keys:string, values:string) PortSpeeds is a map of port speeds, key is the port name, value is the speed portBreakouts object (keys:string, values:string) PortBreakouts is a map of port breakouts, key is the port name, value is the breakout configuration, such as \"1/55: 4x25G\" portAutoNegs object (keys:string, values:boolean) PortAutoNegs is a map of port auto-negotiation settings, key is the port name, value is true or false boot SwitchBoot Boot is the boot/provisioning information of the switch"},{"location":"reference/api/#switchstatus","title":"SwitchStatus","text":"
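A partial Switch manifest illustrating some of the SwitchSpec fields above might look like this (values are hypothetical, and only a subset of fields is shown):

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Switch
metadata:
  name: leaf-01
spec:
  role: server-leaf
  description: first server leaf
  groups:
    - mclag-1
  redundancy:
    group: mclag-1
    type: mclag
  vlanNamespaces:
    - default
  portGroupSpeeds:
    "1": 10G         # run port group 1 at 10G
  portBreakouts:
    E1/55: 4x25G     # break out port E1/55 into 4x25G
  portAutoNegs:
    E1/1: true       # enable auto-negotiation on E1/1
```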

    SwitchStatus defines the observed state of Switch

    Appears in: - Switch

    "},{"location":"reference/api/#switchtoswitchlink","title":"SwitchToSwitchLink","text":"

    SwitchToSwitchLink defines the switch-to-switch link

    Appears in: - ConnMCLAGDomain - ConnVPCLoopback

    Field Description Default Validation switch1 BasePortName Switch1 is the first switch side of the connection switch2 BasePortName Switch2 is the second switch side of the connection"},{"location":"reference/api/#vlannamespace","title":"VLANNamespace","text":"

    VLANNamespace is the Schema for the vlannamespaces API

    Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string VLANNamespace metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VLANNamespaceSpec Spec is the desired state of the VLANNamespace status VLANNamespaceStatus Status is the observed state of the VLANNamespace"},{"location":"reference/api/#vlannamespacespec","title":"VLANNamespaceSpec","text":"

    VLANNamespaceSpec defines the desired state of VLANNamespace

    Appears in: - VLANNamespace

    Field Description Default Validation ranges VLANRange array Ranges is a list of VLAN ranges to be used in this namespace, couldn't overlap between each other and with Fabric reserved VLAN ranges MaxItems: 20 MinItems: 1"},{"location":"reference/api/#vlannamespacestatus","title":"VLANNamespaceStatus","text":"
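For example, a VLANNamespace covering VLANs 1000-2999 might be defined as follows (assuming each VLANRange is expressed with from/to bounds; the name is hypothetical):

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: VLANNamespace
metadata:
  name: default
spec:
  ranges:            # must not overlap with other namespaces or Fabric reserved VLANs
    - from: 1000
      to: 2999
```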

    VLANNamespaceStatus defines the observed state of VLANNamespace

    Appears in: - VLANNamespace

    "},{"location":"reference/cli/","title":"Fabric CLI","text":"

    Under construction.

    Currently, the Fabric CLI is represented by a kubectl plugin, kubectl-fabric, automatically installed on the Control Node. It is a wrapper around kubectl and the Kubernetes client that allows managing Fabric resources in a more convenient way. The Fabric CLI provides only a subset of the functionality available via the Fabric API and is focused on simplifying object creation and some manipulation of existing objects, while the main get/list/update operations are expected to be done using kubectl.

    core@control-1 ~ $ kubectl fabric\nNAME:\n   kubectl fabric - Hedgehog Fabric API kubectl plugin\n\nUSAGE:\n   kubectl fabric [global options] command [command options]\n\nVERSION:\n   v0.53.1\n\nCOMMANDS:\n   vpc               VPC commands\n   switch, sw        Switch commands\n   connection, conn  Connection commands\n   switchgroup, sg   SwitchGroup commands\n   external, ext     External commands\n   inspect, i        Inspect Fabric API Objects and Primitives\n   help, h           Shows a list of commands or help for one command\n\nGLOBAL OPTIONS:\n   --verbose, -v  verbose output (includes debug) (default: true)\n   --help, -h     show help\n   --version, -V  print the version\n
    "},{"location":"reference/cli/#vpc","title":"VPC","text":"

    Create a VPC named vpc-1 with subnet 10.0.1.0/24 and VLAN 1001, with DHCP enabled and an optional DHCP range starting from 10.0.1.10:

    core@control-1 ~ $ kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10\n

    Attach the previously created VPC to the server server-01 (which is connected to the Fabric using the server-01--mclag--leaf-01--leaf-02 Connection):

    core@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02\n

    To peer vpc-1 with another VPC (e.g., vpc-2), use the following command:

    core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2\n
    "},{"location":"reference/profiles/","title":"Switch Profiles Catalog","text":"

    The following is a list of all supported switches. Please make sure to use the version of the documentation that matches your environment to get an up-to-date list of supported switches, their features, and port naming schemes.

    "},{"location":"reference/profiles/#celestica-ds3000","title":"Celestica DS3000","text":"

    Profile Name (to use in switch.spec.profile): celestica-ds3000

    Supported features:

    Available Ports:

    The Label column shows the port label on the physical switch.

    Port Label Type Group Default Supported M1 Management E1/1 1 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/2 2 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/3 3 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/4 4 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/5 5 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/6 6 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/7 7 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/8 8 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/9 9 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/10 10 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/11 11 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/12 12 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/13 13 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/14 14 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/15 15 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/16 16 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/17 17 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/18 18 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/19 19 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/20 20 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/21 21 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/22 22 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/23 23 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/24 24 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/25 25 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/26 26 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/27 27 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/28 28 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/29 29 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/30 30 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/31 31 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/32 32 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/33 33 Direct 10G 1G, 10G"},{"location":"reference/profiles/#celestica-ds4000","title":"Celestica DS4000","text":"
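For instance, to split one of the DS3000 breakout ports listed above into four 25G ports, a Switch spec could include a snippet like this (sketch):

```yaml
spec:
  profile: celestica-ds3000
  portBreakouts:
    E1/1: 4x25G   # one of the supported modes for this port
```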

    Profile Name (to use in switch.spec.profile): celestica-ds4000

    Supported features:

    Available Ports:

    The Label column shows the port label on the physical switch.

    Port Label Type Group Default Supported M1 Management E1/1 1 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/2 2 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/3 3 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/4 4 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/5 5 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/6 6 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/7 7 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/8 8 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/9 9 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/10 10 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/11 11 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/12 12 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/13 13 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/14 14 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/15 15 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/16 16 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 
1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/17 17 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/18 18 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/19 19 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/20 20 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/21 21 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/22 22 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/23 23 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/24 24 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/25 25 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/26 26 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/27 27 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/28 28 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/29 29 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/30 30 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/31 31 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/32 32 Breakout 
1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/33 33 Direct 10G 1G, 10G"},{"location":"reference/profiles/#dell-s5232f-on","title":"Dell S5232F-ON","text":"

    Profile Name (to use in switch.spec.profile): dell-s5232f-on

    Supported features:

    Available Ports:

    The Label column shows the port label on the physical switch.

    Port Label Type Group Default Supported M1 Management E1/1 1 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/2 2 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/3 3 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/4 4 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/5 5 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/6 6 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/7 7 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/8 8 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/9 9 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/10 10 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/11 11 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/12 12 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/13 13 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/14 14 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/15 15 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/16 16 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/17 17 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/18 18 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/19 19 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/20 20 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/21 21 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/22 22 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/23 23 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/24 24 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/25 25 
Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/26 26 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/27 27 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/28 28 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/29 29 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/30 30 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/31 31 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/32 32 Direct 100G 40G, 100G E1/33 33 Direct 10G 1G, 10G E1/34 34 Direct 10G 1G, 10G"},{"location":"reference/profiles/#dell-s5248f-on","title":"Dell S5248F-ON","text":"

    Profile Name (to use in switch.spec.profile): dell-s5248f-on

    Supported features:

    Available Ports:

    The Label column shows the port label on the physical switch.

    Port Label Type Group Default Supported M1 Management E1/1 1 Port Group 1 25G 10G, 25G E1/2 2 Port Group 1 25G 10G, 25G E1/3 3 Port Group 1 25G 10G, 25G E1/4 4 Port Group 1 25G 10G, 25G E1/5 5 Port Group 2 25G 10G, 25G E1/6 6 Port Group 2 25G 10G, 25G E1/7 7 Port Group 2 25G 10G, 25G E1/8 8 Port Group 2 25G 10G, 25G E1/9 9 Port Group 3 25G 10G, 25G E1/10 10 Port Group 3 25G 10G, 25G E1/11 11 Port Group 3 25G 10G, 25G E1/12 12 Port Group 3 25G 10G, 25G E1/13 13 Port Group 4 25G 10G, 25G E1/14 14 Port Group 4 25G 10G, 25G E1/15 15 Port Group 4 25G 10G, 25G E1/16 16 Port Group 4 25G 10G, 25G E1/17 17 Port Group 5 25G 10G, 25G E1/18 18 Port Group 5 25G 10G, 25G E1/19 19 Port Group 5 25G 10G, 25G E1/20 20 Port Group 5 25G 10G, 25G E1/21 21 Port Group 6 25G 10G, 25G E1/22 22 Port Group 6 25G 10G, 25G E1/23 23 Port Group 6 25G 10G, 25G E1/24 24 Port Group 6 25G 10G, 25G E1/25 25 Port Group 7 25G 10G, 25G E1/26 26 Port Group 7 25G 10G, 25G E1/27 27 Port Group 7 25G 10G, 25G E1/28 28 Port Group 7 25G 10G, 25G E1/29 29 Port Group 8 25G 10G, 25G E1/30 30 Port Group 8 25G 10G, 25G E1/31 31 Port Group 8 25G 10G, 25G E1/32 32 Port Group 8 25G 10G, 25G E1/33 33 Port Group 9 25G 10G, 25G E1/34 34 Port Group 9 25G 10G, 25G E1/35 35 Port Group 9 25G 10G, 25G E1/36 36 Port Group 9 25G 10G, 25G E1/37 37 Port Group 10 25G 10G, 25G E1/38 38 Port Group 10 25G 10G, 25G E1/39 39 Port Group 10 25G 10G, 25G E1/40 40 Port Group 10 25G 10G, 25G E1/41 41 Port Group 11 25G 10G, 25G E1/42 42 Port Group 11 25G 10G, 25G E1/43 43 Port Group 11 25G 10G, 25G E1/44 44 Port Group 11 25G 10G, 25G E1/45 45 Port Group 12 25G 10G, 25G E1/46 46 Port Group 12 25G 10G, 25G E1/47 47 Port Group 12 25G 10G, 25G E1/48 48 Port Group 12 25G 10G, 25G E1/49 49 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/50 50 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/51 51 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/52 52 Breakout 1x100G 
1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/53 53 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/54 54 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/55 55 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/56 56 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G"},{"location":"reference/profiles/#dell-z9332f-on","title":"Dell Z9332F-ON","text":"
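Since ports E1/1-E1/48 on the Dell S5248F-ON are speed-configured per port group, a Switch spec snippet setting a group speed could look like this (sketch):

```yaml
spec:
  profile: dell-s5248f-on
  portGroupSpeeds:
    "1": 10G      # sets ports E1/1-E1/4 (group 1) to 10G
```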

    Profile Name (to use in switch.spec.profile): dell-z9332f-on

    Supported features:

    Available Ports:

    The Label column shows the port label on the physical switch.

    Port         Label  Type        Group  Default  Supported
    M1                  Management
    E1/1-E1/32   1-32   Breakout           1x400G   1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G
    E1/33        M1     Direct             10G      1G, 10G
    E1/34        M2     Direct             10G      1G, 10G

    Edgecore DCS203

    Profile Name (to use in switch.spec.profile): edgecore-dcs203

    Other names: Edgecore AS7326-56X
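    As an illustration of how a profile name is consumed, a Switch object references it in spec.profile. This is a hedged sketch: the switch name, role, and port values are hypothetical, and the API group is assumed to match the wiring.githedgehog.com/v1beta1 group used by Connection objects elsewhere in these docs.

    ```yaml
    apiVersion: wiring.githedgehog.com/v1beta1   # assumed API group
    kind: Switch
    metadata:
      name: leaf-01              # hypothetical switch name
      namespace: default
    spec:
      profile: edgecore-dcs203   # profile name from this section
      role: server-leaf          # assumed role value
    ```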

    Supported features:

    Available Ports:

    Label column is a port label on a physical switch.

    Port         Label  Type        Group  Default  Supported
    M1                  Management
    E1/1-E1/12   1-12   Port Group  1      25G      10G, 25G
    E1/13-E1/24  13-24  Port Group  2      25G      10G, 25G
    E1/25-E1/36  25-36  Port Group  3      25G      10G, 25G
    E1/37-E1/48  37-48  Port Group  4      25G      10G, 25G
    E1/49-E1/55  49-55  Breakout           1x100G   1x100G, 1x40G, 4x10G, 4x25G
    E1/56        56     Direct             100G     40G, 100G
    E1/57-E1/58  57-58  Direct             10G      1G, 10G

    Edgecore DCS204

    Profile Name (to use in switch.spec.profile): edgecore-dcs204

    Other names: Edgecore AS7726-32X

    Supported features:

    Available Ports:

    Label column is a port label on a physical switch.

    Port         Label  Type        Group  Default  Supported
    M1                  Management
    E1/1-E1/31   1-31   Breakout           1x100G   1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G
    E1/32        32     Direct             100G     40G, 100G
    E1/33-E1/34  33-34  Direct             10G      1G, 10G

    Edgecore DCS501

    Profile Name (to use in switch.spec.profile): edgecore-dcs501

    Other names: Edgecore AS7712-32X

    Supported features:

    Available Ports:

    Label column is a port label on a physical switch.

    Port         Label  Type        Group  Default  Supported
    M1                  Management
    E1/1-E1/32   1-32   Breakout           1x100G   1x100G, 1x40G, 4x10G, 4x25G

    Edgecore EPS203

    Profile Name (to use in switch.spec.profile): edgecore-eps203

    Other names: Edgecore AS4630-54NPE

    Supported features:

    Available Ports:

    Label column is a port label on a physical switch.

    Port         Label  Type        Group  Default  Supported
    M1                  Management
    E1/1-E1/36   1-36   Direct             2.5G     1G, 2.5G, AutoNeg supported (default: true)
    E1/37-E1/48  37-48  Direct             10G      1G, 10G, AutoNeg supported (default: true)
    E1/49-E1/52  49-52  Direct             25G      1G, 10G, 25G
    E1/53-E1/54  53-54  Direct             100G     40G, 100G

    Supermicro SSE-C4632SB

    Profile Name (to use in switch.spec.profile): supermicro-sse-c4632sb

    Supported features:

    Available Ports:

    Label column is a port label on a physical switch.

    Port         Label  Type        Group  Default  Supported
    M1                  Management
    E1/1-E1/32   1-32   Breakout           1x100G   1x100G, 1x40G, 4x10G, 4x25G
    E1/33        33     Direct             10G      1G, 10G

    Virtual Switch

    Profile Name (to use in switch.spec.profile): vs

    Supported features:

    Available Ports:

    Label column is a port label on a physical switch.

    Port         Label  Type        Group  Default  Supported
    M1                  Management
    E1/1-E1/48   1-48   Port Group  1-12   25G      10G, 25G (four consecutive ports per group: E1/1-E1/4 in group 1, ..., E1/45-E1/48 in group 12)
    E1/49-E1/56  49-56  Breakout           1x100G   1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G

    Release notes

    Beta-1
    Device support
    SONiC
    Fabric provisioning, management
    API

    Alpha-7
    Device Support

    New devices supported by the fabric:

    "},{"location":"release-notes/#switchprofiles","title":"SwitchProfiles","text":""},{"location":"release-notes/#new-universal-port-naming-scheme","title":"New Universal Port Naming Scheme","text":""},{"location":"release-notes/#improved-per-switch-modelplatform-validation","title":"Improved per switch-model/platform validation","text":""},{"location":"release-notes/#vpc","title":"VPC","text":""},{"location":"release-notes/#inspection-cli","title":"Inspection CLI","text":"

    CLI commands are intended to navigate fabric configuration and state, and to allow introspection of dependencies and cross-domain checks:

    "},{"location":"release-notes/#observability","title":"Observability","text":""},{"location":"release-notes/#bug-fixes","title":"Bug Fixes","text":""},{"location":"release-notes/#alpha-6","title":"Alpha-6","text":""},{"location":"release-notes/#observability_1","title":"Observability","text":""},{"location":"release-notes/#telemetry-prometheus-exporter","title":"Telemetry - Prometheus Exporter","text":""},{"location":"release-notes/#logging","title":"Logging","text":""},{"location":"release-notes/#agent-status-api-enhancements","title":"Agent Status API Enhancements","text":""},{"location":"release-notes/#networking-enhancements","title":"Networking enhancements","text":""},{"location":"release-notes/#other-improvements","title":"Other improvements","text":""},{"location":"release-notes/#bugs-fixed","title":"Bugs fixed","text":""},{"location":"release-notes/#alpha-5","title":"Alpha-5","text":""},{"location":"release-notes/#open-source","title":"Open Source","text":""},{"location":"release-notes/#dhcppxe-boot-support-for-multi-homed-connections","title":"DHCP/PXE boot support for multi-homed connections","text":""},{"location":"release-notes/#improvements","title":"Improvements","text":""},{"location":"release-notes/#alpha-4","title":"Alpha-4","text":""},{"location":"release-notes/#documentation","title":"Documentation","text":""},{"location":"release-notes/#host-connectivity-dual-homing-improvements","title":"Host connectivity dual homing improvements","text":""},{"location":"release-notes/#improved-vpc-security-policy-better-zero-trust","title":"Improved VPC security policy - better Zero Trust","text":""},{"location":"release-notes/#static-external-connection","title":"Static External Connection","text":""},{"location":"release-notes/#internal-improvements","title":"Internal Improvements","text":""},{"location":"release-notes/#known-issues","title":"Known 
Issues","text":""},{"location":"release-notes/#alpha-3","title":"Alpha-3","text":""},{"location":"release-notes/#sonic-support","title":"SONiC support","text":""},{"location":"release-notes/#multiple-ipv4-namespaces","title":"Multiple IPv4 namespaces","text":""},{"location":"release-notes/#hedgehog-fabric-dhcp-and-ipam-service","title":"Hedgehog Fabric DHCP and IPAM Service","text":""},{"location":"release-notes/#hedgehog-fabric-ntp-service","title":"Hedgehog Fabric NTP Service","text":""},{"location":"release-notes/#staticexternal-connections","title":"StaticExternal connections","text":""},{"location":"release-notes/#dhcp-relay-to-3rd-party-dhcp-service","title":"DHCP Relay to 3rd party DHCP service","text":"

    Support for a 3rd-party DHCP server (DHCP Relay configuration) through the API

    "},{"location":"release-notes/#alpha-2","title":"Alpha-2","text":""},{"location":"release-notes/#controller","title":"Controller","text":"

    A single controller. No controller redundancy.

    "},{"location":"release-notes/#controller-connectivity","title":"Controller connectivity","text":"

    For CLOS/LEAF-SPINE fabrics, it is recommended that the controller connects to one or more leaf switches in the fabric on front-facing data ports. Connection to two or more leaf switches is recommended for redundancy and performance. No port break-out functionality is supported for controller connectivity.

    Spine controller connectivity is not supported.

    For Collapsed Core topology, the controller can connect on front-facing data ports, as described above, or on management ports. Note that every switch in the collapsed core topology must be connected to the controller.

    Management port connectivity can also be supported for CLOS/LEAF-SPINE topology, but requires that all switches be connected to the controllers via management ports. No chain booting is possible in this configuration.

    "},{"location":"release-notes/#controller-requirements","title":"Controller requirements","text":""},{"location":"release-notes/#chain-booting","title":"Chain booting","text":"

    Switches not directly connecting to the controllers can chain boot via the data network.

    "},{"location":"release-notes/#topology-support","title":"Topology support","text":"

    CLOS/LEAF-SPINE and Collapsed Core topologies are supported.

    "},{"location":"release-notes/#leaf-roles-for-clos-topology","title":"LEAF Roles for CLOS topology","text":"

    Server leaf, border leaf, and mixed leaf modes are supported.

    "},{"location":"release-notes/#collapsed-core-topology","title":"Collapsed Core Topology","text":"

    Two ToR/LEAF switches with MCLAG server connection.

    "},{"location":"release-notes/#server-multihoming","title":"Server multihoming","text":"

    MCLAG-only.

    "},{"location":"release-notes/#device-support_2","title":"Device support","text":""},{"location":"release-notes/#leafs","title":"LEAFs","text":""},{"location":"release-notes/#spines","title":"SPINEs","text":""},{"location":"release-notes/#underlay-configuration","title":"Underlay configuration:","text":"

    Port speed, port group speed, and port breakouts are configurable through the API.
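    As a hedged sketch of what such configuration might look like on a Switch object's spec (the portGroupSpeeds and portBreakouts field names are assumptions, not confirmed by this page; port and group values are hypothetical):

    ```yaml
    spec:
      portGroupSpeeds:
        "1": 10G          # assumed field: set all ports in port group 1 to 10G
      portBreakouts:
        E1/55: 4x25G      # assumed field: break port E1/55 into four 25G lanes
    ```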

    "},{"location":"release-notes/#vpc-overlay-implementation","title":"VPC (overlay) Implementation","text":"

    VXLAN-based BGP eVPN.

    "},{"location":"release-notes/#multi-subnet-vpcs","title":"Multi-subnet VPCs","text":"

    A VPC consists of subnets, each with a user-specified VLAN for external host/server connectivity.
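    A minimal sketch of a multi-subnet VPC, assuming a vpc.githedgehog.com API group and subnet/vlan field names (both assumptions; the subnet names, prefixes, and VLAN values are hypothetical):

    ```yaml
    apiVersion: vpc.githedgehog.com/v1beta1   # assumed API group
    kind: VPC
    metadata:
      name: vpc-1
    spec:
      subnets:
        default:                # user-chosen subnet name
          subnet: 10.0.1.0/24   # hypothetical prefix
          vlan: 1001            # user-specified VLAN for host/server connectivity
        backend:
          subnet: 10.0.2.0/24
          vlan: 1002
    ```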

    "},{"location":"release-notes/#multiple-ip-address-namespaces","title":"Multiple IP address namespaces","text":"

    Multiple IP address namespaces are supported per fabric. Each VPC belongs to the corresponding IPv4 namespace. There are no subnet overlaps within a single IPv4 namespace. IP address namespaces can mutually overlap.

    "},{"location":"release-notes/#vlan-namespace","title":"VLAN Namespace","text":"

    VLAN Namespaces guarantee the uniqueness of VLANs for a set of participating devices. Each switch belongs to a list of VLAN namespaces with non-overlapping VLAN ranges. Each VPC belongs to a VLAN namespace. There are no VLAN overlaps within a single VLAN namespace.

    This feature is useful when multiple VM-management domains (like separate VMware clusters) connect to the fabric.
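    A VLAN namespace with a non-overlapping range could be sketched roughly as follows (the kind, API group, and ranges field are assumptions; the range values are hypothetical):

    ```yaml
    apiVersion: wiring.githedgehog.com/v1beta1   # assumed API group
    kind: VLANNamespace
    metadata:
      name: default
    spec:
      ranges:          # assumed field: VLANs this namespace may allocate
      - from: 1000
        to: 2999
    ```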

    "},{"location":"release-notes/#switch-groups","title":"Switch Groups","text":"

    Each switch belongs to a list of switch groups used for identifying redundancy groups for things like external connectivity.

    "},{"location":"release-notes/#mutual-vpc-peering","title":"Mutual VPC Peering","text":"

    VPC peering is supported and possible between a pair of VPCs that belong to the same IPv4 and VLAN namespaces.
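    Peering two VPCs from the same IPv4 and VLAN namespaces might be expressed roughly as below (the VPCPeering kind and permit field are assumptions; the VPC names are hypothetical):

    ```yaml
    apiVersion: vpc.githedgehog.com/v1beta1   # assumed API group
    kind: VPCPeering
    metadata:
      name: vpc-1--vpc-2
    spec:
      permit:        # assumed field: which VPC subnets may reach each other
      - vpc-1: {}    # empty selector = all subnets of vpc-1
        vpc-2: {}
    ```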

    "},{"location":"release-notes/#external-vpc-peering","title":"External VPC Peering","text":"

    VPC peering provides the means of peering with external networking devices (edge routers, firewalls, or data center interconnects). VPC egress/ingress is pinned to a specific group of the border or mixed leaf switches. Multiple "external systems" with multiple devices/links in each of them are supported.

    The user controls what subnets/prefixes to import and export from/to the external system.

    No NAT function is supported for external peering.
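    A rough, unverified sketch of controlling imported/exported prefixes for an external system (the ExternalPeering kind and its fields are assumptions; the object names and prefixes are hypothetical):

    ```yaml
    apiVersion: vpc.githedgehog.com/v1beta1   # assumed API group
    kind: ExternalPeering
    metadata:
      name: vpc-1--edge-1
    spec:
      permit:                      # assumed field names throughout
        vpc:
          name: vpc-1
          subnets: [default]       # subnets exported to the external system
        external:
          name: edge-1             # hypothetical External object
          prefixes:
          - prefix: 0.0.0.0/0      # import a default route from the external
    ```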

    "},{"location":"release-notes/#host-connectivity","title":"Host connectivity","text":"

    Servers can be attached as Unbundled, Bundled (LAG), and MCLAG.

    "},{"location":"release-notes/#dhcp-service","title":"DHCP Service","text":"

    Each VPC is provided with an optional DHCP service with simple IPAM.

    "},{"location":"release-notes/#local-vpc-peering-loopbacks","title":"Local VPC peering loopbacks","text":"

    To enable local inter-VPC peering that allows routing of traffic between VPCs, local loopbacks are required to overcome silicon limitations.

    "},{"location":"release-notes/#scale","title":"Scale","text":""},{"location":"release-notes/#software-versions","title":"Software versions","text":""},{"location":"release-notes/#known-limitations","title":"Known Limitations","text":""},{"location":"release-notes/#alpha-1","title":"Alpha-1","text":""},{"location":"troubleshooting/overview/","title":"Troubleshooting","text":"

    Under construction.

    "},{"location":"user-guide/connections/","title":"Connections","text":"

    Connection objects represent logical and physical connections between devices in the Fabric (Switch, Server, and External objects) and are needed to define all the connections in the Wiring Diagram.

    All connections reference switch or server ports. Only port names defined by switch profiles can be used in the wiring diagram for the switches. NOS (or any other) port names aren't supported. Currently, server ports aren't validated by the Fabric API other than for uniqueness. See the Switch Profiles and Port Naming section for more details.

    There are several types of connections.

    "},{"location":"user-guide/connections/#workload-server-connections","title":"Workload server connections","text":"

    Server connections are used to connect workload servers to switches.

    "},{"location":"user-guide/connections/#unbundled","title":"Unbundled","text":"

    Unbundled server connections are used to connect servers to a single switch using a single port.

    apiVersion: wiring.githedgehog.com/v1beta1
    kind: Connection
    metadata:
      name: server-4--unbundled--s5248-02
      namespace: default
    spec:
      unbundled:
        link: # Defines a single link between a server and a switch
          server:
            port: server-4/enp2s1
          switch:
            port: s5248-02/Ethernet3
    "},{"location":"user-guide/connections/#bundled","title":"Bundled","text":"

    Bundled server connections are used to connect servers to a single switch using multiple ports (port channel, LAG).

    apiVersion: wiring.githedgehog.com/v1beta1
    kind: Connection
    metadata:
      name: server-3--bundled--s5248-01
      namespace: default
    spec:
      bundled:
        links: # Defines multiple links between a single server and a single switch
        - server:
            port: server-3/enp2s1
          switch:
            port: s5248-01/Ethernet3
        - server:
            port: server-3/enp2s2
          switch:
            port: s5248-01/Ethernet4
    "},{"location":"user-guide/connections/#mclag","title":"MCLAG","text":"

    MCLAG server connections are used to connect servers to a pair of switches using multiple ports (dual-homing). Switches should be configured as an MCLAG pair, which requires them to be in a single redundancy group of type mclag and a Connection with type mclag-domain between them. MCLAG switches should also have the same spec.ASN and spec.VTEPIP.
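    The redundancy group membership described above might be sketched on each Switch like this (the redundancy field name is an assumption, and the group name is hypothetical):

    ```yaml
    # on both s5248-01 and s5248-02 (Switch objects)
    spec:
      redundancy:
        group: mclag-1   # hypothetical shared redundancy group name
        type: mclag
    ```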

    apiVersion: wiring.githedgehog.com/v1beta1
    kind: Connection
    metadata:
      name: server-1--mclag--s5248-01--s5248-02
      namespace: default
    spec:
      mclag:
        links: # Defines multiple links between a single server and a pair of switches
        - server:
            port: server-1/enp2s1
          switch:
            port: s5248-01/Ethernet1
        - server:
            port: server-1/enp2s2
          switch:
            port: s5248-02/Ethernet1
    "},{"location":"user-guide/connections/#eslag","title":"ESLAG","text":"

    ESLAG server connections are used to connect servers to 2-4 switches using multiple ports (multi-homing). Switches should belong to the same redundancy group with type eslag but, contrary to the MCLAG case, no other configuration is required.

    apiVersion: wiring.githedgehog.com/v1beta1
    kind: Connection
    metadata:
      name: server-1--eslag--s5248-01--s5248-02
      namespace: default
    spec:
      eslag:
        links: # Defines multiple links between a single server and 2-4 switches
        - server:
            port: server-1/enp2s1
          switch:
            port: s5248-01/Ethernet1
        - server:
            port: server-1/enp2s2
          switch:
            port: s5248-02/Ethernet1
    "},{"location":"user-guide/connections/#switch-connections-fabric-facing","title":"Switch connections (fabric-facing)","text":"

    Switch connections are used to connect switches to each other and provide any needed "service" connectivity to implement the Fabric features.

    "},{"location":"user-guide/connections/#fabric","title":"Fabric","text":"

    A Fabric Connection is used between a specific pair of spine and leaf switches, representing all of the wires between them.

    apiVersion: wiring.githedgehog.com/v1beta1
    kind: Connection
    metadata:
      name: s5232-01--fabric--s5248-01
      namespace: default
    spec:
      fabric:
        links: # Defines multiple links between a spine-leaf pair of switches with IP addresses
        - leaf:
            ip: 172.30.30.1/31
            port: s5248-01/Ethernet48
          spine:
            ip: 172.30.30.0/31
            port: s5232-01/Ethernet0
        - leaf:
            ip: 172.30.30.3/31
            port: s5248-01/Ethernet56
          spine:
            ip: 172.30.30.2/31
            port: s5232-01/Ethernet4
    "},{"location":"user-guide/connections/#mclag-domain","title":"MCLAG-Domain","text":"

    MCLAG-Domain connections define a pair of MCLAG switches with the Session and Peer links between them. Switches should be configured as an MCLAG pair, which requires them to be in a single redundancy group of type mclag and to have a Connection of type mclag-domain between them. MCLAG switches should also have the same spec.ASN and spec.VTEPIP.

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: s5248-01--mclag-domain--s5248-02\n  namespace: default\nspec:\n  mclagDomain:\n    peerLinks: # Defines multiple links between a pair of MCLAG switches for Peer link\n    - switch1:\n        port: s5248-01/Ethernet72\n      switch2:\n        port: s5248-02/Ethernet72\n    - switch1:\n        port: s5248-01/Ethernet73\n      switch2:\n        port: s5248-02/Ethernet73\n    sessionLinks: # Defines multiple links between a pair of MCLAG switches for Session link\n    - switch1:\n        port: s5248-01/Ethernet74\n      switch2:\n        port: s5248-02/Ethernet74\n    - switch1:\n        port: s5248-01/Ethernet75\n      switch2:\n        port: s5248-02/Ethernet75\n
    "},{"location":"user-guide/connections/#vpc-loopback","title":"VPC-Loopback","text":"

    VPC-Loopback connections are required in order to implement a workaround for local VPC peering (when both VPCs are attached to the same switch), which is needed due to a hardware limitation of the currently supported switches.

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: s5248-01--vpc-loopback\n  namespace: default\nspec:\n  vpcLoopback:\n    links: # Defines multiple loopbacks on a single switch\n    - switch1:\n        port: s5248-01/Ethernet16\n      switch2:\n        port: s5248-01/Ethernet17\n    - switch1:\n        port: s5248-01/Ethernet18\n      switch2:\n        port: s5248-01/Ethernet19\n
    "},{"location":"user-guide/connections/#connecting-fabric-to-the-outside-world","title":"Connecting Fabric to the outside world","text":"

    Connections in this section provide connectivity to the outside world. For example, they can be connections to the Internet, to other networks, or to some other systems such as DHCP, NTP, LMA, or AAA services.

    "},{"location":"user-guide/connections/#staticexternal","title":"StaticExternal","text":"

    StaticExternal connections provide a simple way to connect things like DHCP servers directly to the Fabric by connecting them to specific switch ports.

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: third-party-dhcp-server--static-external--s5248-04\n  namespace: default\nspec:\n  staticExternal:\n    link:\n      switch:\n        port: s5248-04/Ethernet1 # Switch port to use\n        ip: 172.30.50.5/24 # IP address that will be assigned to the switch port\n        vlan: 1005 # Optional VLAN ID to use for the switch port; if 0, no VLAN is configured\n        subnets: # List of subnets to route to the switch port using static routes and next hop\n          - 10.99.0.1/24\n          - 10.199.0.100/32\n        nextHop: 172.30.50.1 # Next hop IP address to use when configuring static routes for the \"subnets\" list\n

    Additionally, it's possible to configure StaticExternal within the VPC to provide access to the third-party resources within a specific VPC, with the rest of the YAML configuration remaining unchanged.

    ...\nspec:\n  staticExternal:\n    withinVPC: vpc-1 # VPC name to attach the static external to\n    link:\n      ...\n
    "},{"location":"user-guide/connections/#external","title":"External","text":"

    External connections are used to connect to external systems, such as edge/provider routers, using BGP peering. They allow configuring inbound/outbound communities, as well as granularly controlling what gets advertised and which routes are accepted.

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: s5248-03--external--5835\n  namespace: default\nspec:\n  external:\n    link: # Defines a single link between a switch and an external system\n      switch:\n        port: s5248-03/Ethernet3\n
    "},{"location":"user-guide/devices/","title":"Switches and Servers","text":"

    All devices in a Hedgehog Fabric are divided into two groups: switches and servers, represented by the corresponding Switch and Server objects in the API. These objects are needed to define all of the participants of the Fabric and their roles in the Wiring Diagram, together with Connection objects (see Connections).

    "},{"location":"user-guide/devices/#switches","title":"Switches","text":"

    Switches are the main building blocks of the Fabric. They are represented by Switch objects in the API. These objects consist of basic metadata like name, description, role, serial, management port mac, as well as port group speeds, port breakouts, ASN, IP addresses, and more. Additionally, a Switch contains a reference to a SwitchProfile object that defines the switch model and capabilities. More details can be found in the Switch Profiles and Port Naming section.

    In order for the Fabric to manage a switch, either the serial or the mac needs to be defined in the YAML document.

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Switch\nmetadata:\n  name: s5248-01\n  namespace: default\nspec:\n  boot: # at least one of the serial or mac needs to be defined\n    serial: XYZPDQ1234\n    mac: 00:11:22:33:44:55 # Usually the first management port MAC address\n  profile: dell-s5248f-on # Mandatory reference to the SwitchProfile object defining the switch model and capabilities\n  asn: 65101 # ASN of the switch\n  description: leaf-1\n  ip: 172.30.10.100/32 # Switch IP that will be accessible from the Control Node\n  portBreakouts: # Configures port breakouts for the switch, see the SwitchProfile for available options\n    E1/55: 4x25G\n  portGroupSpeeds: # Configures port group speeds for the switch, see the SwitchProfile for available options\n    \"1\": 10G\n    \"2\": 10G\n  portSpeeds: # Configures port speeds for the switch, see the SwitchProfile for available options\n    E1/1: 25G\n  protocolIP: 172.30.11.100/32 # Used as BGP router ID\n  role: server-leaf # Role of the switch, one of server-leaf, border-leaf and mixed-leaf\n  vlanNamespaces: # Defines which VLANs could be used to attach servers\n  - default\n  vtepIP: 172.30.12.100/32\n  groups: # Defines which groups the switch belongs to, by referring to SwitchGroup objects\n  - some-group\n  redundancy: # Optional field to define that switch belongs to the redundancy group\n    group: eslag-1 # Name of the redundancy group\n    type: eslag # Type of the redundancy group, one of mclag or eslag\n

    The SwitchGroup is just a marker at this point and doesn't have any configuration options.

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: SwitchGroup\nmetadata:\n  name: border\n  namespace: default\nspec: {}\n
    "},{"location":"user-guide/devices/#redundancy-groups","title":"Redundancy Groups","text":"

    Redundancy groups are used to define the redundancy between switches. A redundancy group is a regular SwitchGroup used by multiple switches, and currently it can be of type MCLAG or ESLAG (EVPN MH / ESI). A switch can only belong to a single redundancy group.

    MCLAG is only supported for pairs of switches and ESLAG is supported for up to 4 switches. Multiple types of redundancy groups can be used in the fabric simultaneously.

    Connections with types mclag and eslag are used to define the server connections to switches. They are only supported if the switch belongs to a redundancy group with the corresponding type.

    In order to define an MCLAG or ESLAG redundancy group, you need to create a SwitchGroup object and assign it to the switches using the redundancy field.

    Example of switch configured for ESLAG:

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: SwitchGroup\nmetadata:\n  name: eslag-1\n  namespace: default\nspec: {}\n---\napiVersion: wiring.githedgehog.com/v1beta1\nkind: Switch\nmetadata:\n  name: s5248-03\n  namespace: default\nspec:\n  ...\n  redundancy:\n    group: eslag-1\n    type: eslag\n  ...\n

    An example of a switch configured for MCLAG:

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: SwitchGroup\nmetadata:\n  name: mclag-1\n  namespace: default\nspec: {}\n---\napiVersion: wiring.githedgehog.com/v1beta1\nkind: Switch\nmetadata:\n  name: s5248-01\n  namespace: default\nspec:\n  ...\n  redundancy:\n    group: mclag-1\n    type: mclag\n  ...\n

    In the case of MCLAG, a special connection with type mclag-domain is required; it defines the peer and session links between the switches. For more details, see Connections.

    "},{"location":"user-guide/devices/#servers","title":"Servers","text":"

    Regular workload server:

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Server\nmetadata:\n  name: server-1\n  namespace: default\nspec:\n  description: MH s5248-01/E1 s5248-02/E1\n
    "},{"location":"user-guide/external/","title":"External Peering","text":"

    Hedgehog Fabric uses the Border Leaf concept to exchange VPC routes outside the Fabric and provide L3 connectivity. The External Peering feature allows you to set up an external peering endpoint and to enforce several policies between internal and external endpoints.

    Note

    Hedgehog Fabric does not operate Edge side devices.

    "},{"location":"user-guide/external/#overview","title":"Overview","text":"

    Traffic exits from the Fabric on Border Leaves that are connected with Edge devices. Border Leaves are suitable to terminate L2VPN connections, to distinguish VPC L3 routable traffic towards Edge devices, and to land VPC servers. Border Leaves (or Borders) can connect to several Edge devices.

    Note

    External Peering is only available on switch devices that are capable of sub-interfaces.

    "},{"location":"user-guide/external/#connect-border-leaf-to-edge-device","title":"Connect Border Leaf to Edge device","text":"

    In order to distinguish VPC traffic, an Edge device should be able to:

    All other filtering and processing of L3 Routed Fabric traffic should be done on the Edge devices.

    "},{"location":"user-guide/external/#control-plane","title":"Control Plane","text":"

    The Fabric shares VPC routes with Edge devices via BGP. Peering is done over VLAN in IPv4 Unicast AFI/SAFI.

    "},{"location":"user-guide/external/#data-plane","title":"Data Plane","text":"

    VPC L3 routable traffic will be tagged with a VLAN and sent to the Edge device. Further processing of VPC traffic (NAT, PBR, etc.) should happen on the Edge devices.

    "},{"location":"user-guide/external/#vpc-access-to-edge-device","title":"VPC access to Edge device","text":"

    Each VPC within the Fabric can be allowed to access Edge devices. Additional filtering can be applied to the routes that the VPC can export to Edge devices and import from the Edge devices.

    "},{"location":"user-guide/external/#api-and-implementation","title":"API and implementation","text":""},{"location":"user-guide/external/#external","title":"External","text":"

    General configuration starts with the specification of External objects. Each object of type External can represent a set of Edge devices, a single BGP instance on an Edge device, or any other set of Edge entities that can be described with the following configuration:

    Each External should be bound to some VPC IP Namespace, otherwise prefix overlaps may happen.

    apiVersion: vpc.githedgehog.com/v1beta1\nkind: External\nmetadata:\n  name: default--5835\nspec:\n  ipv4Namespace: # VPC IP Namespace\n  inboundCommunity: # BGP Standard Community of routes from Edge devices\n  outboundCommunity: # BGP Standard Community required to be assigned on prefixes advertised from Fabric\n
    "},{"location":"user-guide/external/#connection","title":"Connection","text":"

    A Connection of type external is used to identify the switch port on a Border Leaf that is cabled with an Edge device.

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: # specified or generated\nspec:\n  external:\n    link:\n      switch:\n        port: # SwitchName/EthernetXXX\n
    "},{"location":"user-guide/external/#external-attachment","title":"External Attachment","text":"

    External Attachment defines BGP Peering and traffic connectivity between a Border leaf and External. Attachments are bound to a Connection with type external and they specify an optional vlan that will be used to segregate particular Edge peering.

    apiVersion: vpc.githedgehog.com/v1beta1\nkind: ExternalAttachment\nmetadata:\n  name: #\nspec:\n  connection: # Name of the Connection with type external\n  external: # Name of the External to pick config\n  neighbor:\n    asn: # Edge device ASN\n    ip: # IP address of Edge device to peer with\n  switch:\n    ip: # IP address on the Border Leaf to set up BGP peering\n    vlan: # VLAN (optional) ID to tag control and data traffic, use 0 for untagged\n

    Several External Attachments can be configured for the same Connection, but each must use a different vlan.
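    As a sketch of that rule (the attachment names, External names, neighbor ASNs, and IPs below are illustrative only; the Connection name is taken from the External example above), two attachments sharing one Connection on different VLANs could look like:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: ExternalAttachment
metadata:
  name: s5248-03--edge-a        # hypothetical name
spec:
  connection: s5248-03--external--5835  # same Connection for both attachments
  external: edge-a                      # hypothetical External
  neighbor:
    asn: 65201
    ip: 100.100.10.2
  switch:
    ip: 100.100.10.1/24
    vlan: 100   # each attachment on this Connection needs a distinct VLAN
---
apiVersion: vpc.githedgehog.com/v1beta1
kind: ExternalAttachment
metadata:
  name: s5248-03--edge-b        # hypothetical name
spec:
  connection: s5248-03--external--5835
  external: edge-b                      # hypothetical External
  neighbor:
    asn: 65202
    ip: 100.100.20.2
  switch:
    ip: 100.100.20.1/24
    vlan: 200
```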

    "},{"location":"user-guide/external/#external-vpc-peering","title":"External VPC Peering","text":"

    To allow a specific VPC to have access to Edge devices, bind the VPC to a specific External object. To do so, define an External Peering object.

    apiVersion: vpc.githedgehog.com/v1beta1\nkind: ExternalPeering\nmetadata:\n  name: # Name of ExternalPeering\nspec:\n  permit:\n    external:\n      name: # External Name\n      prefixes: # List of prefixes (routes) to be allowed to pick up from External\n      - # IPv4 prefix\n    vpc:\n      name: # VPC Name\n      subnets: # List of VPC subnets name to be allowed to have access to External (Edge)\n      - # Name of the subnet within VPC\n

    Prefixes is the list of subnets to permit from the External to the VPC. It matches any prefix length less than or equal to 32, effectively permitting all prefixes within the specified one. Use 0.0.0.0/0 for any route, including the default route.

    This example allows any IPv4 prefix that came from External:

    spec:\n  permit:\n    external:\n      name: ###\n      prefixes:\n      - prefix: 0.0.0.0/0 # Any route will be allowed including default route\n

    This example allows all prefixes that belong to the specified 77.0.0.0/8 prefix, with any prefix length:

    spec:\n  permit:\n    external:\n      name: ###\n      prefixes:\n      - prefix: 77.0.0.0/8 # Any route that belongs to the specified prefix is allowed (such as 77.0.0.0/8 or 77.1.2.0/24)\n
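    The matching semantics described above can be sketched in Python (a minimal illustration using the standard library, not Fabric code):

```python
import ipaddress

def prefix_permitted(permit_prefix: str, route: str) -> bool:
    """Return True if `route` falls within `permit_prefix`.

    Mirrors the documented semantics: a permit entry matches any route
    whose network is contained in the permitted prefix, for any prefix
    length up to /32; 0.0.0.0/0 therefore matches every route.
    """
    permit = ipaddress.ip_network(permit_prefix)
    candidate = ipaddress.ip_network(route)
    return candidate.subnet_of(permit)

# 77.1.2.0/24 is inside 77.0.0.0/8, so it is permitted
print(prefix_permitted("77.0.0.0/8", "77.1.2.0/24"))   # True
# 0.0.0.0/0 permits any route, including the default route
print(prefix_permitted("0.0.0.0/0", "10.0.0.0/16"))    # True
print(prefix_permitted("77.0.0.0/8", "78.0.0.0/8"))    # False
```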
    "},{"location":"user-guide/external/#examples","title":"Examples","text":"

    This example shows how to peer with the External object with name HedgeEdge, given a Fabric VPC with name vpc-1 on the Border Leaf switchBorder that has a cable connecting it to an Edge device on the port Ethernet42. Specifying vpc-1 is required to receive any prefixes advertised from the External.

    "},{"location":"user-guide/external/#fabric-api-configuration","title":"Fabric API configuration","text":""},{"location":"user-guide/external/#external_1","title":"External","text":"
    # kubectl fabric external create --name hedgeedge --ipns default --in 65102:5000 --out 5000:65102\n
    - apiVersion: vpc.githedgehog.com/v1beta1\n  kind: External\n  metadata:\n    creationTimestamp: \"2024-11-26T21:24:32Z\"\n    generation: 1\n    labels:\n      fabric.githedgehog.com/ipv4ns: default\n    name: hedgeedge\n    namespace: default\n    resourceVersion: \"57628\"\n    uid: a0662988-73d0-45b3-afc0-0d009cd91ebd\n  spec:\n    inboundCommunity: 65102:5000\n    ipv4Namespace: default\n    outboundCommunity: 5000:65102\n
    "},{"location":"user-guide/external/#connection_1","title":"Connection","text":"

    Connection should be specified in the wiring diagram.

    ###\n### switchBorder--external--HedgeEdge\n###\napiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: switchBorder--external--HedgeEdge\nspec:\n  external:\n    link:\n      switch:\n        port: switchBorder/Ethernet42\n
    "},{"location":"user-guide/external/#externalattachment","title":"ExternalAttachment","text":"

    Specified in the wiring diagram:

    apiVersion: vpc.githedgehog.com/v1beta1\nkind: ExternalAttachment\nmetadata:\n  name: switchBorder--HedgeEdge\nspec:\n  connection: switchBorder--external--HedgeEdge\n  external: HedgeEdge\n  neighbor:\n    asn: 65102\n    ip: 100.100.0.6\n  switch:\n    ip: 100.100.0.1/24\n    vlan: 100\n
    "},{"location":"user-guide/external/#externalpeering","title":"ExternalPeering","text":"
    apiVersion: vpc.githedgehog.com/v1beta1\nkind: ExternalPeering\nmetadata:\n  name: vpc-1--HedgeEdge\nspec:\n  permit:\n    external:\n      name: HedgeEdge\n      prefixes:\n      - prefix: 0.0.0.0/0\n    vpc:\n      name: vpc-1\n      subnets:\n      - default\n
    "},{"location":"user-guide/external/#example-edge-side-bgp-configuration-based-on-sonic-os","title":"Example Edge side BGP configuration based on SONiC OS","text":"

    Warning

    Hedgehog does not recommend using the following configuration for production. It is only provided as an example of Edge Peer configuration.

    Interface configuration:

    interface Ethernet2.100\n encapsulation dot1q vlan-id 100\n description switchBorder--Ethernet42\n no shutdown\n ip vrf forwarding VrfHedge\n ip address 100.100.0.6/24\n

    BGP configuration:

    !\nrouter bgp 65102 vrf VrfHedge\n log-neighbor-changes\n timers 60 180\n !\n address-family ipv4 unicast\n  maximum-paths 64\n  maximum-paths ibgp 1\n  import vrf VrfPublic\n !\n neighbor 100.100.0.1\n  remote-as 65103\n  !\n  address-family ipv4 unicast\n   activate\n   route-map HedgeIn in\n   route-map HedgeOut out\n   send-community both\n !\n

    Route Map configuration:

    route-map HedgeIn permit 10\n match community Hedgehog\n!\nroute-map HedgeOut permit 10\n set community 65102:5000\n!\n\nbgp community-list standard HedgeIn permit 5000:65102\n
    "},{"location":"user-guide/grafana/","title":"Grafana Dashboards","text":"

    To provide monitoring for the most critical metrics from the switches managed by Hedgehog Fabric, several dashboards are available for use in Grafana deployments. Make sure that you've enabled metrics and logs collection for the switches in the Fabric, as described in the Fabric Config section.

    "},{"location":"user-guide/grafana/#variables","title":"Variables","text":"

    A list of common variables used in Hedgehog Grafana dashboards:

    "},{"location":"user-guide/grafana/#switch-critical-resources","title":"Switch Critical Resources","text":"

    This table reports the usage and capacity of the ASIC's programmable resources, such as:

    JSON

    "},{"location":"user-guide/grafana/#fabric","title":"Fabric","text":"

    Fabric underlay and external peering monitoring, including reporting for:

    JSON

    "},{"location":"user-guide/grafana/#interfaces","title":"Interfaces","text":"

    Switch interfaces monitoring visualization that includes:

    JSON

    "},{"location":"user-guide/grafana/#logs","title":"Logs","text":"

    System and fabric logs:

    JSON

    "},{"location":"user-guide/grafana/#platform","title":"Platform","text":"

    Information from PSU, temperature sensors and fan trays:

    JSON

    "},{"location":"user-guide/grafana/#node-exporter","title":"Node Exporter","text":"

    Grafana Node Exporter Full is an open-source Grafana dashboard that provides visualizations for monitoring Linux nodes. In this particular case, Node Exporter is used to track SONiC OS's own stats, such as:

    JSON

    "},{"location":"user-guide/harvester/","title":"Using VPCs with Harvester","text":"

    This section contains an example of how Hedgehog Fabric can be used with Harvester or any hypervisor on the servers connected to Fabric. It assumes that you have already installed Fabric and have some servers running Harvester attached to it.

    You need to define a Server object for each server running Harvester and a Connection object for each server connection to the switches.

    You can have multiple VPCs created and attached to the Connections to the servers to make them available to the VMs in Harvester or any other hypervisor.
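    A hedged sketch of that binding (the connection name harvester-1--bundled--s5248-01 is hypothetical; vpc-1/default follows the VPC examples elsewhere in this guide):

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  name: vpc-1-harvester-1        # hypothetical name
  namespace: default
spec:
  connection: harvester-1--bundled--s5248-01  # hypothetical Connection for the Harvester node's uplinks
  subnet: vpc-1/default  # this subnet's VLAN becomes usable by Harvester VM Networks
```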

    "},{"location":"user-guide/harvester/#configure-harvester","title":"Configure Harvester","text":""},{"location":"user-guide/harvester/#add-a-cluster-network","title":"Add a Cluster Network","text":"

    From the \"Cluster Networks/Configs\" side menu, create a new Cluster Network.

    Here is a cleaned-up version of what the CRD looks like:

    apiVersion: network.harvesterhci.io/v1beta1\nkind: ClusterNetwork\nmetadata:\n  name: testnet\n
    "},{"location":"user-guide/harvester/#add-a-network-config","title":"Add a Network Config","text":"

    Click \"Create Network Config\". Add your connections and select the bonding type.

    The resulting CRD (cleaned up) looks like the following:

    apiVersion: network.harvesterhci.io/v1beta1\nkind: VlanConfig\nmetadata:\n  name: testconfig\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\nspec:\n  clusterNetwork: testnet\n  uplink:\n    bondOptions:\n      miimon: 100\n      mode: 802.3ad\n    linkAttributes:\n      txQLen: -1\n    nics:\n      - enp5s0f0\n      - enp3s0f1\n
    "},{"location":"user-guide/harvester/#add-vlan-based-vm-networks","title":"Add VLAN based VM Networks","text":"

    Browse over to \"VM Networks\" and add one network for each VLAN you want to support. Assign them to the cluster network.

    Here is what the CRDs will look like for both VLANs:

    apiVersion: k8s.cni.cncf.io/v1\nkind: NetworkAttachmentDefinition\nmetadata:\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\n    network.harvesterhci.io/ready: 'true'\n    network.harvesterhci.io/type: L2VlanNetwork\n    network.harvesterhci.io/vlan-id: '1001'\n  name: testnet1001\n  namespace: default\nspec:\n  config: >-\n    {\"cniVersion\":\"0.3.1\",\"name\":\"testnet1001\",\"type\":\"bridge\",\"bridge\":\"testnet-br\",\"promiscMode\":true,\"vlan\":1001,\"ipam\":{}}\n
    apiVersion: k8s.cni.cncf.io/v1\nkind: NetworkAttachmentDefinition\nmetadata:\n  name: testnet1000\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\n    network.harvesterhci.io/ready: 'true'\n    network.harvesterhci.io/type: L2VlanNetwork\n    network.harvesterhci.io/vlan-id: '1000'\n    #  key: string\n  namespace: default\nspec:\n  config: >-\n    {\"cniVersion\":\"0.3.1\",\"name\":\"testnet1000\",\"type\":\"bridge\",\"bridge\":\"testnet-br\",\"promiscMode\":true,\"vlan\":1000,\"ipam\":{}}\n
    "},{"location":"user-guide/harvester/#using-the-vpcs","title":"Using the VPCs","text":"

    Now you can choose the new VM Networks when creating a VM in Harvester, and have them created as part of the VPC.

    "},{"location":"user-guide/overview/","title":"Overview","text":"

    This chapter gives an overview of the main features of Hedgehog Fabric and their usage.

    "},{"location":"user-guide/profiles/","title":"Switch Profiles and Port Naming","text":""},{"location":"user-guide/profiles/#switch-profiles","title":"Switch Profiles","text":"

    All supported switches have a SwitchProfile that defines the switch model, supported features, and available ports with supported configurations, such as port groups and speeds as well as port breakouts. SwitchProfiles are available in-cluster, and the generated documentation can be found in the Reference section.

    Each switch used in the wiring diagram should have a SwitchProfile referenced in the spec.profile of the Switch object.

    The switch profile defines what features and ports are available on the switch. Based on the ports data in the profile, it's possible to set port speeds (for non-breakout and non-group ports), port group speeds, and port breakout modes in the Switch object in the Fabric API.

    "},{"location":"user-guide/profiles/#port-naming","title":"Port Naming","text":"

    Each switch port is named using one of the following formats:

    Examples of port names:

    "},{"location":"user-guide/profiles/#available-ports","title":"Available Ports","text":"

    Each switch profile defines a set of ports available on the switch. Ports can be divided into the following types.

    "},{"location":"user-guide/profiles/#directly-configurable-ports","title":"Directly configurable ports","text":"

    Non-breakout and non-group ports. These ports have a reference to a port profile with the default and available speeds, and can be configured by setting the speed in the Switch object in the Fabric API:

    .spec:\n  portSpeeds:\n    E1/1: 25G\n
    "},{"location":"user-guide/profiles/#port-groups","title":"Port groups","text":"

    Ports that belong to a port group; non-breakout and not directly configurable. These ports have a reference to the port group, which in turn has a reference to a port profile with the default and available speeds. A port can't be configured directly; the speed configuration is applied to the whole group in the Switch object in the Fabric API:

    .spec:\n  portGroupSpeeds:\n    \"1\": 10G\n

    This sets the speed of all ports in group 1 to 10G; e.g., if group 1 contains ports E1/1, E1/2, E1/3 and E1/4, all of them will be set to 10G speed.

    "},{"location":"user-guide/profiles/#breakout-ports","title":"Breakout ports","text":"

    Breakout, non-group ports. These ports have a reference to a port profile with the default and available breakout modes, and can be configured by setting the breakout mode in the Switch object in the Fabric API:

    .spec:\n  portBreakouts:\n    E1/55: 4x25G\n

    Configuring a port breakout mode will make \"breakout\" ports available for use in the wiring diagram. The breakout ports are named as E<asic-or-chassis-number>/<port-number>/<breakout>, e.g. E1/55/1, E1/55/2, E1/55/3, E1/55/4 for the example above. Omitting the breakout number is allowed for the first breakout port, e.g. E1/55 is the same as E1/55/1. The breakout ports are always consecutive numbers independent of the lanes allocation and other implementation details.
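    The consecutive naming rule above can be sketched as follows (an illustration only, not Fabric code):

```python
def breakout_ports(base: str, mode: str) -> list:
    """Expand a breakout mode like "4x25G" into the resulting port names.

    Breakout ports are numbered consecutively, so E1/55 with 4x25G
    yields E1/55/1 .. E1/55/4 (E1/55 alone is accepted as an alias
    for the first breakout port, E1/55/1).
    """
    count = int(mode.split("x", 1)[0])  # "4x25G" -> 4 ports
    return ["{}/{}".format(base, i) for i in range(1, count + 1)]

print(breakout_ports("E1/55", "4x25G"))
# ['E1/55/1', 'E1/55/2', 'E1/55/3', 'E1/55/4']
```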

    "},{"location":"user-guide/shrink-expand/","title":"Fabric Shrink/Expand","text":"

    This section provides a brief overview of how to add or remove switches within the fabric using Hedgehog Fabric API, and how to manage connections between them.

    Manipulating API objects is done with the assumption that target devices are correctly cabled and connected.

    This article uses terms that can be found in the Hedgehog Concepts, the User Guide documentation, and the Fabric API reference.

    "},{"location":"user-guide/shrink-expand/#add-a-switch-to-the-existing-fabric","title":"Add a switch to the existing fabric","text":"

    In order to be added to the Hedgehog Fabric, a switch should have a corresponding Switch object. An example on how to define this object is available in the User Guide.

    Note

    If the Switch will be used in ESLAG or MCLAG groups, the appropriate groups should exist. Redundancy groups should be specified in the Switch object before creation.

    After the Switch object has been created, you can define and create dedicated device Connections. The types of the connections may differ based on the Switch role given to the device. For more details, refer to Connections section.

    Note

    Switch devices should be booted in ONIE installation mode to install SONiC OS and configure the Fabric Agent.

    Ensure the management port of the switch is connected to the fabric management network.

    "},{"location":"user-guide/shrink-expand/#remove-a-switch-from-the-existing-fabric","title":"Remove a switch from the existing fabric","text":"

    Before you decommission a switch from the Hedgehog Fabric, several preparation steps are necessary.

    Warning

    Currently, the wiring diagram used for the initial deployment is saved in /var/lib/rancher/k3s/server/manifests/hh-wiring.yaml on the Control Node. The Fabric will maintain the objects defined in the original wiring diagram. In order to remove any object, first remove the dedicated API objects from this file. It is recommended to reapply hh-wiring.yaml after changing its contents.

    "},{"location":"user-guide/vpcs/","title":"VPCs and Namespaces","text":""},{"location":"user-guide/vpcs/#vpc","title":"VPC","text":"

    A Virtual Private Cloud (VPC) is similar to a public cloud VPC. It provides an isolated private network with support for multiple subnets, each with user-defined VLANs and optional DHCP services.

    apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPC\nmetadata:\n  name: vpc-1\n  namespace: default\nspec:\n  ipv4Namespace: default # Limits which subnets the VPC can use to guarantee non-overlapping IPv4 ranges\n  vlanNamespace: default # Limits which VLAN IDs the VPC can use to guarantee non-overlapping VLANs\n\n  defaultIsolated: true # Sets default behavior for the current VPC subnets to be isolated\n  defaultRestricted: true # Sets default behavior for the current VPC subnets to be restricted\n\n  subnets:\n    default: # Each subnet is named, \"default\" subnet isn't required, but actively used by CLI\n      dhcp:\n        enable: true # On-demand DHCP server\n        range: # Optionally, start/end range could be specified, otherwise all available IPs are used\n          start: 10.10.1.10\n          end: 10.10.1.99\n        options: # Optional, additional DHCP options to enable for DHCP server, only available when enable is true\n          pxeURL: tftp://10.10.10.99/bootfilename # PXEURL (optional) to identify the PXE server to use to boot hosts; HTTP query strings are not supported\n          dnsServers: # (optional) configure DNS servers\n            - 1.1.1.1\n          timeServers: # (optional) configure Time (NTP) Servers\n            - 1.1.1.1\n          interfaceMTU: 1500 # (optional) configure the MTU (default is 9036); doesn't affect the actual MTU of the switch interfaces\n      subnet: 10.10.1.0/24 # User-defined subnet from ipv4 namespace\n      gateway: 10.10.1.1 # User-defined gateway (optional, default is .1)\n      vlan: 1001 # User-defined VLAN from VLAN namespace\n      isolated: true # Makes subnet isolated from other subnets within the VPC (doesn't affect VPC peering)\n      restricted: true # Causes all hosts in the subnet to be isolated from each other\n\n    third-party-dhcp: # Another subnet\n      dhcp:\n        relay: 10.99.0.100/24 # Use third-party DHCP server (DHCP relay configuration), access to it could be enabled using StaticExternal connection\n      subnet: \"10.10.2.0/24\"\n      vlan: 1002\n\n    another-subnet: # Minimal configuration is just a name, subnet and VLAN\n      subnet: 10.10.100.0/24\n      vlan: 1100\n\n  permit: # Defines which subnets of the current VPC can communicate to each other, applied on top of subnets \"isolated\" flag (doesn't affect VPC peering)\n    - [subnet-1, subnet-2, subnet-3] # 1, 2 and 3 subnets can communicate to each other\n    - [subnet-4, subnet-5] # Possible to define multiple lists\n\n  staticRoutes: # Optional, static routes to be added to the VPC\n    - prefix: 10.100.0.0/24 # Destination prefix\n      nextHops: # Next hop IP addresses\n        - 10.200.0.0\n
    "},{"location":"user-guide/vpcs/#isolated-and-restricted-subnets-permit-lists","title":"Isolated and restricted subnets, permit lists","text":"

    Subnets can be isolated and restricted, with the ability to define permit lists to allow communication between specific isolated subnets. The permit list is applied on top of the isolated flag and doesn't affect VPC peering.

    Isolated subnet means that the subnet has no connectivity with other subnets within the VPC, but it could still be allowed by permit lists.

    Restricted subnet means that all hosts in the subnet are isolated from each other within the subnet.

    A permit list contains a list of entries; each entry is a set of subnets that can communicate with each other.
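    A minimal sketch of how the isolated flag and permit lists could combine (an interpretation of the semantics above for illustration, not Fabric code):

```python
def subnets_can_talk(a, b, isolated, permit):
    """Decide intra-VPC reachability between two subnets.

    Non-isolated subnets reach each other by default; an isolated
    subnet only reaches peers it shares a permit-list entry with.
    """
    if a == b:
        return True
    if not isolated.get(a, False) and not isolated.get(b, False):
        return True  # neither subnet is isolated
    # At least one side is isolated: a shared permit entry is required
    return any(a in group and b in group for group in permit)

# Flags and permit lists taken from the VPC example above
isolated = {"subnet-1": True, "subnet-2": True, "subnet-4": True}
permit = [["subnet-1", "subnet-2", "subnet-3"], ["subnet-4", "subnet-5"]]
print(subnets_can_talk("subnet-1", "subnet-2", isolated, permit))  # True
print(subnets_can_talk("subnet-1", "subnet-4", isolated, permit))  # False
```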

    "},{"location":"user-guide/vpcs/#third-party-dhcp-server-configuration","title":"Third-party DHCP server configuration","text":"

    If you use a third-party DHCP server, configured via spec.subnets.<subnet>.dhcp.relay, additional information is added to the DHCP packets forwarded to that server so that the VPC and subnet can be identified. This information is carried in the Relay Agent Information option (option 82) of the DHCP packet. The relay sets two suboptions in the packet:
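    A minimal relay-based subnet (values illustrative, following the VPC example above) could look like this fragment:

    ```yaml
    # Fragment of a VPC spec: forward DHCP to an external server instead of
    # using the on-demand one. Reaching the server may require a StaticExternal
    # connection, as noted in the VPC example above.
    spec:
      subnets:
        external-dhcp:
          dhcp:
            relay: 10.99.0.100/24 # address of the third-party DHCP server
          subnet: 10.10.2.0/24
          vlan: 1002
    ```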

    "},{"location":"user-guide/vpcs/#vpcattachment","title":"VPCAttachment","text":"

    A VPCAttachment assigns a specific VPC subnet to a Connection object, creating a binding between exact server port(s) and a VPC. As a result, the VPC becomes available on those server port(s) on the subnet's VLAN.

    A VPC can only be attached through a switch that is part of the VLAN namespace used by the VPC.

    apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCAttachment\nmetadata:\n  name: vpc-1-server-1--mclag--s5248-01--s5248-02\n  namespace: default\nspec:\n  connection: server-1--mclag--s5248-01--s5248-02 # Connection name representing the server port(s)\n  subnet: vpc-1/default # VPC subnet name\n  nativeVLAN: true # (Optional) if true, the port will be configured as a native VLAN port (untagged)\n
    "},{"location":"user-guide/vpcs/#vpcpeering","title":"VPCPeering","text":"

    A VPCPeering enables VPC-to-VPC connectivity. There are two types of VPC peering: local, implemented on the switches where the VPC subnets are attached, and remote, implemented on a dedicated switch group.

    VPC peering is only possible between VPCs attached to the same IPv4 namespace (see IPv4Namespace)

    "},{"location":"user-guide/vpcs/#local-vpc-peering","title":"Local VPC peering","text":"
    apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-2\n  namespace: default\nspec:\n  permit: # Defines a pair of VPCs to peer\n  - vpc-1: {} # Meaning all subnets of two VPCs will be able to communicate with each other\n    vpc-2: {} # See \"Subnet filtering\" for more advanced configuration\n
    "},{"location":"user-guide/vpcs/#remote-vpc-peering","title":"Remote VPC peering","text":"
    apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-2\n  namespace: default\nspec:\n  permit:\n  - vpc-1: {}\n    vpc-2: {}\n  remote: border # Indicates a switch group to implement the peering on\n
    "},{"location":"user-guide/vpcs/#subnet-filtering","title":"Subnet filtering","text":"

    You can restrict which subnets of the peered VPCs can communicate with each other using the permit field.

    apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-2\n  namespace: default\nspec:\n  permit: # subnet-1 and subnet-2 of vpc-1 could communicate to subnet-3 of vpc-2 as well as subnet-4 of vpc-2 could communicate to subnet-5 and subnet-6 of vpc-2\n  - vpc-1:\n      subnets: [subnet-1, subnet-2]\n    vpc-2:\n      subnets: [subnet-3]\n  - vpc-1:\n      subnets: [subnet-4]\n    vpc-2:\n      subnets: [subnet-5, subnet-6]\n
    "},{"location":"user-guide/vpcs/#ipv4namespace","title":"IPv4Namespace","text":"

    An IPv4Namespace defines a set of (non-overlapping) IPv4 address ranges available for use by VPC subnets. Each VPC belongs to a specific IPv4 namespace. Therefore, its subnet prefixes must be from that IPv4 namespace.

    apiVersion: vpc.githedgehog.com/v1beta1\nkind: IPv4Namespace\nmetadata:\n  name: default\n  namespace: default\nspec:\n  subnets: # List of prefixes that VPCs can pick their subnets from\n  - 10.10.0.0/16\n
    "},{"location":"user-guide/vpcs/#vlannamespace","title":"VLANNamespace","text":"

    A VLANNamespace defines a set of VLAN ranges available for attaching servers to switches. Each switch can belong to one or more disjoint VLANNamespaces.

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: VLANNamespace\nmetadata:\n  name: default\n  namespace: default\nspec:\n  ranges: # List of VLAN ranges that VPCs can pick their subnet VLANs from\n  - from: 1000\n    to: 2999\n
    "},{"location":"vlab/demo/","title":"Demo on VLAB","text":""},{"location":"vlab/demo/#goals","title":"Goals","text":"

    The goal of this demo is to show how to create VPCs, attach them to servers, peer them, and test connectivity between the servers. Examples are based on the default VLAB topology.

    You can find instructions on how to set up VLAB in the Overview and Running VLAB sections.

    "},{"location":"vlab/demo/#default-topology","title":"Default topology","text":"

    The default topology is Spine-Leaf with 2 spines, 2 MCLAG leaves, 2 ESLAG leaves and 1 non-MCLAG leaf. Optionally, you can run the default Collapsed Core topology using the flag --fabric-mode collapsed-core (or -m collapsed-core), which consists of only 2 switches.

    For more details on customizing topologies see the Running VLAB section.

    In the default topology, the following Control Node and switch VMs are created. The Control Node is connected to every switch; these links are omitted from the diagram for clarity:

    graph TD\n    S1([Spine 1])\n    S2([Spine 2])\n\n    L1([MCLAG Leaf 1])\n    L2([MCLAG Leaf 2])\n    L3([ESLAG Leaf 3])\n    L4([ESLAG Leaf 4])\n    L5([Leaf 5])\n\n\n    L1 & L2 & L5 & L3 & L4 --> S1 & S2

    The following test servers are also created; as above, Control Node connections are omitted:

    graph TD\n    S1([Spine 1])\n    S2([Spine 2])\n    L1([MCLAG Leaf 1])\n    L2([MCLAG Leaf 2])\n    L3([ESLAG Leaf 3])\n    L4([ESLAG Leaf 4])\n    L5([Leaf 5])\n\n    TS1[Server 1]\n    TS2[Server 2]\n    TS3[Server 3]\n    TS4[Server 4]\n    TS5[Server 5]\n    TS6[Server 6]\n    TS7[Server 7]\n    TS8[Server 8]\n    TS9[Server 9]\n    TS10[Server 10]\n\n    subgraph MCLAG\n    L1\n    L2\n    end\n    TS3 --> L1\n    TS1 --> L1\n    TS1 --> L2\n\n    TS2 --> L1\n    TS2 --> L2\n\n    TS4 --> L2\n\n    subgraph ESLAG\n    L3\n    L4\n    end\n\n    TS7 --> L3\n    TS5 --> L3\n    TS5 --> L4\n    TS6 --> L3\n    TS6 --> L4\n\n    TS8 --> L4\n    TS9 --> L5\n    TS10 --> L5\n\n    L1 & L2 & L3 & L4 & L5 <----> S1 & S2
    "},{"location":"vlab/demo/#utility-based-vpc-creation","title":"Utility based VPC creation","text":""},{"location":"vlab/demo/#setup-vpcs","title":"Setup VPCs","text":"

    hhfab vlab includes a utility to create VPCs in VLAB, available as the sub-command hhfab vlab setup-vpcs.

    NAME:\n   hhfab vlab setup-vpcs - setup VPCs and VPCAttachments for all servers and configure networking on them\n\nUSAGE:\n   hhfab vlab setup-vpcs [command options]\n\nOPTIONS:\n   --dns-servers value, --dns value [ --dns-servers value, --dns value ]    DNS servers for VPCs advertised by DHCP\n   --force-clenup, -f                                                       start with removing all existing VPCs and VPCAttachments (default: false)\n   --help, -h                                                               show help\n   --interface-mtu value, --mtu value                                       interface MTU for VPCs advertised by DHCP (default: 0)\n   --ipns value                                                             IPv4 namespace for VPCs (default: \"default\")\n   --name value, -n value                                                   name of the VM or HW to access\n   --servers-per-subnet value, --servers value                              number of servers per subnet (default: 1)\n   --subnets-per-vpc value, --subnets value                                 number of subnets per VPC (default: 1)\n   --time-servers value, --ntp value [ --time-servers value, --ntp value ]  Time servers for VPCs advertised by DHCP\n   --vlanns value                                                           VLAN namespace for VPCs (default: \"default\")\n   --wait-switches-ready, --wait                                            wait for switches to be ready before and after configuring VPCs and VPCAttachments (default: true)\n\n   Global options:\n\n   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]\n   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: \"/home/ubuntu/.hhfab-cache\") [$HHFAB_CACHE_DIR]\n   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]\n   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: 
\"/home/ubuntu\") [$HHFAB_WORK_DIR]\n
    "},{"location":"vlab/demo/#setup-peering","title":"Setup Peering","text":"

    hhfab vlab includes a utility to create VPC peerings in VLAB, available as the sub-command hhfab vlab setup-peerings.

    NAME:\n   hhfab vlab setup-peerings - setup VPC and External Peerings per requests (remove all if empty)\n\nUSAGE:\n   Setup test scenario with VPC/External Peerings by specifying requests in the format described below.\n\n   Example command:\n\n   $ hhfab vlab setup-peerings 1+2 2+4:r=border 1~as5835 2~as5835:subnets=sub1,sub2:prefixes=0.0.0.0/0,22.22.22.0/24\n\n   Which will produce:\n   1. VPC peering between vpc-01 and vpc-02\n   2. Remote VPC peering between vpc-02 and vpc-04 on switch group named border\n   3. External peering for vpc-01 with External as5835 with default vpc subnet and any routes from external permitted\n   4. External peering for vpc-02 with External as5835 with subnets sub1 and sub2 exposed from vpc-02 and default route\n      from external permitted as well any route that belongs to 22.22.22.0/24\n\n   VPC Peerings:\n\n   1+2 -- VPC peering between vpc-01 and vpc-02\n   demo-1+demo-2 -- VPC peering between demo-1 and demo-2\n   1+2:r -- remote VPC peering between vpc-01 and vpc-02 on switch group if only one switch group is present\n   1+2:r=border -- remote VPC peering between vpc-01 and vpc-02 on switch group named border\n   1+2:remote=border -- same as above\n\n   External Peerings:\n\n   1~as5835 -- external peering for vpc-01 with External as5835\n   1~ -- external peering for vpc-1 with external if only one external is present for ipv4 namespace of vpc-01, allowing\n     default subnet and any route from external\n   1~:subnets=default@prefixes=0.0.0.0/0 -- external peering for vpc-1 with auth external with default vpc subnet and\n     default route from external permitted\n   1~as5835:subnets=default,other:prefixes=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same but with more details\n   1~as5835:s=default,other:p=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same as above\n\nOPTIONS:\n   --help, -h                     show help\n   --name value, -n value         name of the VM or HW to access\n   --wait-switches-ready, --wait  wait for 
switches to be ready before before and after configuring peerings (default: true)\n\n   Global options:\n\n   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]\n   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: \"/home/ubuntu/.hhfab-cache\") [$HHFAB_CACHE_DIR]\n   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]\n   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: \"/home/ubuntu\") [$HHFAB_WORK_DIR]\n
    "},{"location":"vlab/demo/#test-connectivity","title":"Test Connectivity","text":"

    hhfab vlab includes a utility to test connectivity between servers inside VLAB, available as the sub-command hhfab vlab test-connectivity.

    NAME:\n   hhfab vlab test-connectivity - test connectivity between all servers\n\nUSAGE:\n   hhfab vlab test-connectivity [command options]\n\nOPTIONS:\n   --curls value                  number of curl tests to run for each server to test external connectivity (0 to disable) (default: 3)\n   --help, -h                     show help\n   --iperfs value                 seconds of iperf3 test to run between each pair of reachable servers (0 to disable) (default: 10)\n   --iperfs-speed value           minimum speed in Mbits/s for iperf3 test to consider successful (0 to not check speeds) (default: 7000)\n   --name value, -n value         name of the VM or HW to access\n   --pings value                  number of pings to send between each pair of servers (0 to disable) (default: 5)\n   --wait-switches-ready, --wait  wait for switches to be ready before testing connectivity (default: true)\n\n   Global options:\n\n   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]\n   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: \"/home/ubuntu/.hhfab-cache\") [$HHFAB_CACHE_DIR]\n   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]\n   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: \"/home/ubuntu\") [$HHFAB_WORK_DIR]\n
    "},{"location":"vlab/demo/#manual-vpc-creation","title":"Manual VPC creation","text":""},{"location":"vlab/demo/#creating-and-attaching-vpcs","title":"Creating and attaching VPCs","text":"

    You can create and attach VPCs to the VMs using the kubectl fabric vpc command on the Control Node, or from outside the cluster using the kubeconfig. For example, run the following commands to create 2 VPCs with a single subnet each, DHCP enabled with an optional range start, and attach them to some of the test servers:

    core@control-1 ~ $ kubectl get conn | grep server\nserver-01--mclag--leaf-01--leaf-02   mclag          5h13m\nserver-02--mclag--leaf-01--leaf-02   mclag          5h13m\nserver-03--unbundled--leaf-01        unbundled      5h13m\nserver-04--bundled--leaf-02          bundled        5h13m\nserver-05--unbundled--leaf-03        unbundled      5h13m\nserver-06--bundled--leaf-03          bundled        5h13m\n\ncore@control-1 ~ $ kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10\n06:48:46 INF VPC created name=vpc-1\n\ncore@control-1 ~ $ kubectl fabric vpc create --name vpc-2 --subnet 10.0.2.0/24 --vlan 1002 --dhcp --dhcp-start 10.0.2.10\n06:49:04 INF VPC created name=vpc-2\n\ncore@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02\n06:49:24 INF VPCAttachment created name=vpc-1--default--server-01--mclag--leaf-01--leaf-02\n\ncore@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-2/default --connection server-02--mclag--leaf-01--leaf-02\n06:49:34 INF VPCAttachment created name=vpc-2--default--server-02--mclag--leaf-01--leaf-02\n
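    For reference, the first kubectl fabric vpc create command above corresponds approximately to the following declarative VPC object (field layout follows the VPC examples earlier in this guide; the exact object generated by the CLI may differ in minor details):

    ```yaml
    apiVersion: vpc.githedgehog.com/v1beta1
    kind: VPC
    metadata:
      name: vpc-1
      namespace: default
    spec:
      ipv4Namespace: default
      vlanNamespace: default
      subnets:
        default:
          subnet: 10.0.1.0/24 # --subnet 10.0.1.0/24
          vlan: 1001          # --vlan 1001
          dhcp:
            enable: true      # --dhcp
            range:
              start: 10.0.1.10 # --dhcp-start 10.0.1.10
    ```

    You could apply such an object with kubectl apply -f instead of using the CLI helper.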

    The VPC subnet must belong to an IPv4Namespace; the default one in the VLAB is 10.0.0.0/16:

    core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   5h14m\n

    After you have created the VPCs and VPCAttachments, you can check the status of the agents to make sure that the requested configuration has been applied to the switches:

    core@control-1 ~ $ kubectl get agents\nNAME       ROLE          DESCR           APPLIED   APPLIEDG   CURRENTG   VERSION\nleaf-01    server-leaf   VS-01 MCLAG 1   2m2s      5          5          v0.23.0\nleaf-02    server-leaf   VS-02 MCLAG 1   2m2s      4          4          v0.23.0\nleaf-03    server-leaf   VS-03           112s      5          5          v0.23.0\nspine-01   spine         VS-04           16m       3          3          v0.23.0\nspine-02   spine         VS-05           18m       4          4          v0.23.0\n

    In this example, the values in the columns APPLIEDG and CURRENTG are equal, which means that the requested configuration has been applied.

    "},{"location":"vlab/demo/#setting-up-networking-on-test-servers","title":"Setting up networking on test servers","text":"

    You can use hhfab vlab ssh on the host to SSH into the test servers and configure networking there. For example, server-01 (MCLAG-attached to both leaf-01 and leaf-02) needs a bond with a VLAN on top of it, while server-05 (single-homed, unbundled, attached to leaf-03) needs just a VLAN; both then get an IP address from the DHCP server. You can configure networking on the servers with the ip command, or use hhnet, a small helper pre-installed by Fabricator on the test servers.

    For server-01:

    core@server-01 ~ $ hhnet cleanup\ncore@server-01 ~ $ hhnet bond 1001 enp2s1 enp2s2\n10.0.1.10/24\ncore@server-01 ~ $ ip a\n...\n3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:01\n4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:02\n6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff\n    inet6 fe80::45a:e8ff:fe38:3bea/64 scope link\n       valid_lft forever preferred_lft forever\n7: bond0.1001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff\n    inet 10.0.1.10/24 metric 1024 brd 10.0.1.255 scope global dynamic bond0.1001\n       valid_lft 86396sec preferred_lft 86396sec\n    inet6 fe80::45a:e8ff:fe38:3bea/64 scope link\n       valid_lft forever preferred_lft forever\n

    And for server-02:

    core@server-02 ~ $ hhnet cleanup\ncore@server-02 ~ $ hhnet bond 1002 enp2s1 enp2s2\n10.0.2.10/24\ncore@server-02 ~ $ ip a\n...\n3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:01\n4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:02\n8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff\n    inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link\n       valid_lft forever preferred_lft forever\n9: bond0.1002@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff\n    inet 10.0.2.10/24 metric 1024 brd 10.0.2.255 scope global dynamic bond0.1002\n       valid_lft 86185sec preferred_lft 86185sec\n    inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link\n       valid_lft forever preferred_lft forever\n
    "},{"location":"vlab/demo/#testing-connectivity-before-peering","title":"Testing connectivity before peering","text":"

    You can test connectivity between the servers before peering the VPCs using the ping command:

    core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\nFrom 10.0.1.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2003ms\n
    core@server-02 ~ $ ping 10.0.1.10\nPING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.\nFrom 10.0.2.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.2.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.2.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.1.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms\n
    "},{"location":"vlab/demo/#peering-vpcs-and-testing-connectivity","title":"Peering VPCs and testing connectivity","text":"

    To enable connectivity between the VPCs, peer them using kubectl fabric vpc peer:

    core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2\n07:04:58 INF VPCPeering created name=vpc-1--vpc-2\n

    Make sure to wait until the peering is applied to the switches, using the kubectl get agents command. After that, you can test connectivity between the servers again:

    core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\n64 bytes from 10.0.2.10: icmp_seq=1 ttl=62 time=6.25 ms\n64 bytes from 10.0.2.10: icmp_seq=2 ttl=62 time=7.60 ms\n64 bytes from 10.0.2.10: icmp_seq=3 ttl=62 time=8.60 ms\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2004ms\nrtt min/avg/max/mdev = 6.245/7.481/8.601/0.965 ms\n
    core@server-02 ~ $ ping 10.0.1.10\nPING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.\n64 bytes from 10.0.1.10: icmp_seq=1 ttl=62 time=5.44 ms\n64 bytes from 10.0.1.10: icmp_seq=2 ttl=62 time=6.66 ms\n64 bytes from 10.0.1.10: icmp_seq=3 ttl=62 time=4.49 ms\n^C\n--- 10.0.1.10 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2004ms\nrtt min/avg/max/mdev = 4.489/5.529/6.656/0.886 ms\n

    If you delete the VPC peering by running kubectl delete on the relevant object and wait for the agents to apply the configuration on the switches, you can observe that connectivity is lost again:

    core@control-1 ~ $ kubectl delete vpcpeering/vpc-1--vpc-2\nvpcpeering.vpc.githedgehog.com \"vpc-1--vpc-2\" deleted\n
    core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\nFrom 10.0.1.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms\n

    You may see duplicate packets in the output of the ping command between some of the servers. This is expected behavior, caused by limitations of the VLAB environment.

    core@server-01 ~ $ ping 10.0.5.10\nPING 10.0.5.10 (10.0.5.10) 56(84) bytes of data.\n64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms\n64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms (DUP!)\n64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms\n64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms (DUP!)\n64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.59 ms\n64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.60 ms (DUP!)\n^C\n--- 10.0.5.10 ping statistics ---\n3 packets transmitted, 3 received, +3 duplicates, 0% packet loss, time 2003ms\nrtt min/avg/max/mdev = 6.987/8.720/9.595/1.226 ms\n
    "},{"location":"vlab/demo/#using-vpcs-with-overlapping-subnets","title":"Using VPCs with overlapping subnets","text":"

    First, create a second IPv4Namespace with the same subnet as the default one:

    core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   24m\n\ncore@control-1 ~ $ cat <<EOF > ipns-2.yaml\napiVersion: vpc.githedgehog.com/v1beta1\nkind: IPv4Namespace\nmetadata:\n  name: ipns-2\n  namespace: default\nspec:\n  subnets:\n  - 10.0.0.0/16\nEOF\n\ncore@control-1 ~ $ kubectl apply -f ipns-2.yaml\nipv4namespace.vpc.githedgehog.com/ipns-2 created\n\ncore@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   30m\nipns-2    [\"10.0.0.0/16\"]   8s\n

    Let's assume that vpc-1 already exists and is attached to server-01 (see Creating and attaching VPCs). Now we can create vpc-3 with the same subnet as vpc-1 (but in a different IPv4Namespace) and attach it to server-03:

    core@control-1 ~ $ cat <<EOF > vpc-3.yaml\napiVersion: vpc.githedgehog.com/v1beta1\nkind: VPC\nmetadata:\n  name: vpc-3\n  namespace: default\nspec:\n  ipv4Namespace: ipns-2\n  subnets:\n    default:\n      dhcp:\n        enable: true\n        range:\n          start: 10.0.1.10\n      subnet: 10.0.1.0/24\n      vlan: 2001\n  vlanNamespace: default\nEOF\n\ncore@control-1 ~ $ kubectl apply -f vpc-3.yaml\n

    At this point you can set up networking on server-03 just as you did for server-01 and server-02 in the previous section. Once networking is configured, server-01 and server-03 have IP addresses from the same subnet.

    "},{"location":"vlab/overview/","title":"VLAB Overview","text":"

    It's possible to run Hedgehog Fabric in a fully virtual environment using QEMU/KVM and SONiC Virtual Switch (VS). This is a great way to try out the Fabric and learn its look and feel, API, and capabilities. It is not suitable for any data plane or performance testing, or for production use.

    In the VLAB, all switches start as empty VMs with only the ONIE image on them, and they go through the whole discovery, boot, and installation process just as on real hardware.

    "},{"location":"vlab/overview/#hhfab","title":"HHFAB","text":"

    Hedgehog maintains a utility to install and configure VLAB, called hhfab, aka Fabricator.

    The hhfab CLI provides a special vlab command to manage virtual labs. It runs the set of virtual machines that simulate the Fabric infrastructure, including the control node, switches, and test servers, and automatically runs the installer to get the Fabric up and running.

    You can find more information about getting hhfab in the download section.

    "},{"location":"vlab/overview/#system-requirements","title":"System Requirements","text":"

    Currently, VLAB is only tested on Ubuntu 22.04 LTS, but it should work on any Linux distribution with QEMU/KVM support and fairly up-to-date packages.

    The following packages need to be installed: qemu-kvm and socat. Docker is also required, to log in to the OCI registry.

    By default, the VLAB topology is Spine-Leaf with 2 spines, 2 MCLAG leaves and 1 non-MCLAG leaf. Optionally, you can run the default Collapsed Core topology using the flag --fabric-mode collapsed-core (or -m collapsed-core), which consists of only 2 switches.

    You can calculate the system requirements based on the resources allocated to the VMs using the following table:

    Device        vCPU   RAM     Disk
    Control Node  6      6GB     100GB
    Test Server   2      768MB   10GB
    Switch        4      5GB     50GB

    These numbers give approximately the following requirements for the default topologies:

    Usually, none of the VMs will reach 100% utilization of the allocated resources, but as a rule of thumb you should make sure the host has at least the total allocated RAM and disk space for all VMs.

    NVMe SSD for VM disks is highly recommended.

    "},{"location":"vlab/overview/#installing-prerequisites","title":"Installing Prerequisites","text":"

    To run VLAB, your system needs docker, qemu, kvm, and hhfab. On Ubuntu 22.04 LTS you can install all required packages using the following commands:

    "},{"location":"vlab/overview/#docker","title":"Docker","text":"
    curl -fsSL https://get.docker.com -o install-docker.sh\nsudo sh install-docker.sh\nsudo usermod -aG docker $USER\nnewgrp docker\n
    "},{"location":"vlab/overview/#qemukvm","title":"Qemu/KVM","text":"
    sudo apt install -y qemu-kvm swtpm-tools tpm2-tools socat\nsudo usermod -aG kvm $USER\nnewgrp kvm\nkvm-ok\n

    Successful output of the kvm-ok command looks like this:

    ubuntu@docs:~$ kvm-ok\nINFO: /dev/kvm exists\nKVM acceleration can be used\n
    "},{"location":"vlab/overview/#oras","title":"Oras","text":"

    For convenience Hedgehog provides a script to install oras:

    curl -fsSL https://i.hhdev.io/oras | bash\n
    "},{"location":"vlab/overview/#hhfab_1","title":"Hhfab","text":"

    You need a GitHub access token to download hhfab; please submit a ticket using the Hedgehog Support Portal to obtain one. Once in possession of the credentials, use the provided username and token to log in to the GitHub container registry:

    docker login ghcr.io --username provided_username --password provided_token\n

    Once logged in, download and run the script:

    curl -fsSL https://i.hhdev.io/hhfab | bash\n
    "},{"location":"vlab/overview/#next-steps","title":"Next steps","text":""},{"location":"vlab/running/","title":"Running VLAB","text":"

    Make sure to follow the prerequisites and check system requirements in the VLAB Overview section before running VLAB.

    "},{"location":"vlab/running/#initialize-vlab","title":"Initialize VLAB","text":"

    First, initialize Fabricator by running hhfab init --dev. This command creates the fab.yaml file, which is the main configuration file for the fabric. This command supports several customization options that are listed in the output of hhfab init --help.

    ubuntu@docs:~$ hhfab init --dev\n11:26:52 INF Hedgehog Fabricator version=v0.30.0\n11:26:52 INF Generated initial config\n11:26:52 INF Adjust configs (incl. credentials, modes, subnets, etc.) file=fab.yaml\n11:26:52 INF Include wiring files (.yaml) or adjust imported ones dir=include\n
    "},{"location":"vlab/running/#vlab-topology","title":"VLAB Topology","text":"

    By default, hhfab init creates 2 spines, 2 MCLAG leaves and 1 non-MCLAG leaf with 2 fabric connections (between each spine and leaf), 2 MCLAG peer links and 2 MCLAG session links, as well as 2 loopbacks per leaf for implementing the VPC loopback workaround. To generate the preceding topology, run hhfab vlab gen. You can also configure the number of spines, leaves, connections, and so on. For example, the flags --spines-count and --mclag-leafs-count set the number of spines and MCLAG leaves, respectively. For the complete list of options, run hhfab vlab gen -h.

    ubuntu@docs:~$ hhfab vlab gen\n21:27:16 INF Hedgehog Fabricator version=v0.30.0\n21:27:16 INF Building VLAB wiring diagram fabricMode=spine-leaf\n21:27:16 INF >>> spinesCount=2 fabricLinksCount=2\n21:27:16 INF >>> eslagLeafGroups=2\n21:27:16 INF >>> mclagLeafsCount=2 mclagSessionLinks=2 mclagPeerLinks=2\n21:27:16 INF >>> orphanLeafsCount=1 vpcLoopbacks=2\n21:27:16 INF >>> mclagServers=2 eslagServers=2 unbundledServers=1 bundledServers=1\n21:27:16 INF Generated wiring file name=vlab.generated.yaml\n
    You can jump to the instructions to start VLAB, or see the next section for customizing the topology.

    "},{"location":"vlab/running/#collapsed-core","title":"Collapsed Core","text":"

    If a Collapsed Core topology is desired, after the hhfab init --dev step, edit the resulting fab.yaml file and change mode: spine-leaf to mode: collapsed-core:
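    As an illustration only, the edit looks roughly like the following fragment; the exact field path is an assumption, so check it against your generated fab.yaml rather than copying this verbatim:

    ```yaml
    # fab.yaml (fragment; surrounding structure comes from "hhfab init --dev",
    # and the nesting shown here is an assumption to be verified in your file)
    spec:
      config:
        fabric:
          mode: collapsed-core # was: spine-leaf
    ```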

    ubuntu@docs:~$ hhfab vlab gen\n11:39:02 INF Hedgehog Fabricator version=v0.30.0\n11:39:02 INF Building VLAB wiring diagram fabricMode=collapsed-core\n11:39:02 INF >>> mclagLeafsCount=2 mclagSessionLinks=2 mclagPeerLinks=2\n11:39:02 INF >>> orphanLeafsCount=0 vpcLoopbacks=2\n11:39:02 INF >>> mclagServers=2 eslagServers=2 unbundledServers=1 bundledServers=1\n11:39:02 INF Generated wiring file name=vlab.generated.yaml\n
    "},{"location":"vlab/running/#custom-spine-leaf","title":"Custom Spine Leaf","text":"

    Alternatively, you can generate a custom topology, for example 2 spines, 4 MCLAG leaves and 2 non-MCLAG leaves, using flags:

    ubuntu@docs:~$ hhfab vlab gen --mclag-leafs-count 4 --orphan-leafs-count 2\n11:41:06 INF Hedgehog Fabricator version=v0.30.0\n11:41:06 INF Building VLAB wiring diagram fabricMode=spine-leaf\n11:41:06 INF >>> spinesCount=2 fabricLinksCount=2\n11:41:06 INF >>> eslagLeafGroups=\"\"\n11:41:06 INF >>> mclagLeafsCount=4 mclagSessionLinks=2 mclagPeerLinks=2\n11:41:06 INF >>> orphanLeafsCount=2 vpcLoopbacks=2\n11:41:06 INF >>> mclagServers=2 eslagServers=2 unbundledServers=1 bundledServers=1\n11:41:06 INF Generated wiring file name=vlab.generated.yaml\n

    Additionally, you can pass extra Fabric configuration items using flags on the init command or by passing a configuration file. For more information, refer to the Fabric Configuration section.

    Once you have initialized the VLAB, download the artifacts and build the installer using hhfab build. This command automatically downloads all required artifacts from the OCI registry and builds the installer and all other prerequisites for running the VLAB.

    "},{"location":"vlab/running/#build-the-installer-and-start-vlab","title":"Build the Installer and Start VLAB","text":"

    To build and start the virtual machines, use hhfab vlab up. For successive runs, use the --kill-stale flag to ensure that any virtual machines from a previous run are gone. hhfab vlab up runs in the foreground and does not return, which allows you to stop all VLAB VMs by simply pressing Ctrl + C.

    ubuntu@docs:~$ hhfab vlab up\n11:48:22 INF Hedgehog Fabricator version=v0.30.0\n11:48:22 INF Wiring hydrated successfully mode=if-not-present\n11:48:22 INF VLAB config created file=vlab/config.yaml\n11:48:22 INF Downloader cache=/home/ubuntu/.hhfab-cache/v1 repo=ghcr.io prefix=githedgehog\n11:48:22 INF Building installer control=control-1\n11:48:22 INF Adding recipe bin to installer control=control-1\n11:48:24 INF Adding k3s and tools to installer control=control-1\n11:48:25 INF Adding zot to installer control=control-1\n11:48:25 INF Adding cert-manager to installer control=control-1\n11:48:26 INF Adding config and included wiring to installer control=control-1\n11:48:26 INF Adding airgap artifacts to installer control=control-1\n11:48:36 INF Archiving installer path=/home/ubuntu/result/control-1-install.tgz control=control-1\n11:48:45 INF Creating ignition path=/home/ubuntu/result/control-1-install.ign control=control-1\n11:48:46 INF Taps and bridge are ready count=8\n11:48:46 INF Downloader cache=/home/ubuntu/.hhfab-cache/v1 repo=ghcr.io prefix=githedgehog\n11:48:46 INF Preparing new vm=control-1 type=control\n11:48:51 INF Preparing new vm=server-01 type=server\n11:48:52 INF Preparing new vm=server-02 type=server\n11:48:54 INF Preparing new vm=server-03 type=server\n11:48:55 INF Preparing new vm=server-04 type=server\n11:48:57 INF Preparing new vm=server-05 type=server\n11:48:58 INF Preparing new vm=server-06 type=server\n11:49:00 INF Preparing new vm=server-07 type=server\n11:49:01 INF Preparing new vm=server-08 type=server\n11:49:03 INF Preparing new vm=server-09 type=server\n11:49:04 INF Preparing new vm=server-10 type=server\n11:49:05 INF Preparing new vm=leaf-01 type=switch\n11:49:06 INF Preparing new vm=leaf-02 type=switch\n11:49:06 INF Preparing new vm=leaf-03 type=switch\n11:49:06 INF Preparing new vm=leaf-04 type=switch\n11:49:06 INF Preparing new vm=leaf-05 type=switch\n11:49:06 INF Preparing new vm=spine-01 type=switch\n11:49:06 INF Preparing new 
vm=spine-02 type=switch\n11:49:06 INF Starting VMs count=18 cpu=\"54 vCPUs\" ram=\"49664 MB\" disk=\"550 GB\"\n11:49:59 INF Uploading control install vm=control-1 type=control\n11:53:39 INF Running control install vm=control-1 type=control\n11:53:40 INF control-install: 01:53:39 INF Hedgehog Fabricator Recipe version=v0.30.0 vm=control-1\n11:53:40 INF control-install: 01:53:39 INF Running control node installation vm=control-1\n12:00:32 INF control-install: 02:00:31 INF Control node installation complete vm=control-1\n12:00:32 INF Control node is ready vm=control-1 type=control\n12:00:32 INF All VMs are ready\n
    The message INF Control node is ready vm=control-1 type=control in the installer's output means that the installation has finished. After this line has been displayed, you can get into the control node and other VMs to watch the Fabric coming up and the switches getting provisioned. See Accessing the VLAB.

    "},{"location":"vlab/running/#enable-outside-connectivity-from-vlab-vms","title":"Enable Outside connectivity from VLAB VMs","text":"

    By default, all test server VMs are isolated and have no connectivity to the host or the Internet. You can enable connectivity using hhfab vlab up --restrict-servers=false to allow the test servers to access the Internet and the host. When you enable connectivity, VMs get a default route pointing to the host, which means that in the case of VPC peering you need to configure the test server VMs to use the VPC attachment as a default route (or just for some specific subnets).
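    For example, to keep the host-provided default route but send one peered subnet via the VPC attachment instead, something like the following could be run on a test server (the addresses here are hypothetical, for illustration only):

```shell
# Hypothetical: route a specific peered VPC subnet via the VPC attachment
# gateway, while leaving the host-provided default route in place.
sudo ip route add 10.0.2.0/24 via 10.0.1.1
```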

    "},{"location":"vlab/running/#accessing-the-vlab","title":"Accessing the VLAB","text":"

    The hhfab vlab command provides ssh and serial subcommands to access the VMs. Use ssh to get into the control node and test servers after the VMs are started. Use serial to get into the switch VMs while they are provisioning and installing the software. After the switches are installed, you can use ssh to get into them as well.

    You can select the device you want to access interactively or pass its name using the --vm flag.

    ubuntu@docs:~$ hhfab vlab ssh\nUse the arrow keys to navigate: \u2193 \u2191 \u2192 \u2190  and / toggles search\nSSH to VM:\n  \ud83e\udd94 control-1\n  server-01\n  server-02\n  server-03\n  server-04\n  server-05\n  server-06\n  leaf-01\n  leaf-02\n  leaf-03\n  spine-01\n  spine-02\n\n----------- VM Details ------------\nID:             0\nName:           control-1\nReady:          true\nBasedir:        .hhfab/vlab-vms/control-1\n
    "},{"location":"vlab/running/#default-credentials","title":"Default credentials","text":"

    Fabricator creates default users and keys for you to log in to the control node and test servers, as well as for the SONiC virtual switches.

    The default user with password-less sudo for the control node and test servers is core with password HHFab.Admin!. The admin user with full access and password-less sudo for the switches is admin with password HHFab.Admin!. The read-only, non-sudo user with access to the switch CLI is op with password HHFab.Op!.

    "},{"location":"vlab/running/#use-kubectl-to-interact-with-the-fabric","title":"Use Kubectl to Interact with the Fabric","text":"

    On the control node you have access to kubectl, Fabric CLI, and k9s to manage the Fabric. To view information about the switches, run kubectl get agents -o wide. After the control node is available, it usually takes about 10-15 minutes for the switches to get installed.

    After the switches are provisioned, the command returns something like this:

    core@control-1 ~ $ kubectl get agents -o wide\nNAME       ROLE          DESCR           HWSKU                      ASIC   HEARTBEAT   APPLIED   APPLIEDG   CURRENTG   VERSION   SOFTWARE                ATTEMPT   ATTEMPTG   AGE\nleaf-01    server-leaf   VS-01 MCLAG 1   DellEMC-S5248f-P-25G-DPB   vs     30s         5m5s      4          4          v0.23.0   4.1.1-Enterprise_Base   5m5s      4          10m\nleaf-02    server-leaf   VS-02 MCLAG 1   DellEMC-S5248f-P-25G-DPB   vs     27s         3m30s     3          3          v0.23.0   4.1.1-Enterprise_Base   3m30s     3          10m\nleaf-03    server-leaf   VS-03           DellEMC-S5248f-P-25G-DPB   vs     18s         3m52s     4          4          v0.23.0   4.1.1-Enterprise_Base   3m52s     4          10m\nspine-01   spine         VS-04           DellEMC-S5248f-P-25G-DPB   vs     26s         3m59s     3          3          v0.23.0   4.1.1-Enterprise_Base   3m59s     3          10m\nspine-02   spine         VS-05           DellEMC-S5248f-P-25G-DPB   vs     19s         3m53s     4          4          v0.23.0   4.1.1-Enterprise_Base   3m53s     4          10m\n

    The Heartbeat column shows how long ago the switch last sent a heartbeat to the control node. The Applied column shows how long ago the switch applied its configuration. AppliedG shows the generation of the configuration that has been applied. CurrentG shows the generation of the configuration the switch is supposed to run. Different values for AppliedG and CurrentG mean that the switch is still in the process of applying the configuration.
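    The AppliedG/CurrentG comparison can be sketched as a small shell helper. The input rows below are a simplified version of the sample output above (one row is altered to show a switch mid-apply); parsing real `kubectl get agents` output would need column-aware handling, since DESCR values contain spaces.

```shell
# Flag switches whose AppliedG lags CurrentG, i.e. that are still
# applying a new configuration generation.
check_generations() {
  while read -r name appliedg currentg; do
    if [ "$appliedg" != "$currentg" ]; then
      echo "$name: converging ($appliedg -> $currentg)"
    else
      echo "$name: in sync (generation $appliedg)"
    fi
  done
}

# Simplified sample rows: "name appliedg currentg"
printf 'leaf-01 4 4\nleaf-02 3 4\nspine-01 3 3\n' | check_generations
# leaf-01: in sync (generation 4)
# leaf-02: converging (3 -> 4)
# spine-01: in sync (generation 3)
```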

    At that point, the Fabric is ready and you can use kubectl and kubectl fabric to manage it. You can find more about managing the Fabric in the Running Demo and User Guide sections.

    "},{"location":"vlab/running/#getting-main-fabric-objects","title":"Getting main Fabric objects","text":"

    You can list the main Fabric objects by running kubectl get on the control node. You can find more details about using the Fabric in the User Guide, Fabric API and Fabric CLI sections.

    For example, to get the list of switches, run:

    core@control-1 ~ $ kubectl get switch\nNAME       ROLE          DESCR           GROUPS   LOCATIONUUID                           AGE\nleaf-01    server-leaf   VS-01 MCLAG 1            5e2ae08a-8ba9-599a-ae0f-58c17cbbac67   6h10m\nleaf-02    server-leaf   VS-02 MCLAG 1            5a310b84-153e-5e1c-ae99-dff9bf1bfc91   6h10m\nleaf-03    server-leaf   VS-03                    5f5f4ad5-c300-5ae3-9e47-f7898a087969   6h10m\nspine-01   spine         VS-04                    3e2c4992-a2e4-594b-bbd1-f8b2fd9c13da   6h10m\nspine-02   spine         VS-05                    96fbd4eb-53b5-5a4c-8d6a-bbc27d883030   6h10m\n

    Similarly, to get the list of servers, run:

    core@control-1 ~ $ kubectl get server\nNAME        TYPE      DESCR                        AGE\ncontrol-1   control   Control node                 6h10m\nserver-01             S-01 MCLAG leaf-01 leaf-02   6h10m\nserver-02             S-02 MCLAG leaf-01 leaf-02   6h10m\nserver-03             S-03 Unbundled leaf-01       6h10m\nserver-04             S-04 Bundled leaf-02         6h10m\nserver-05             S-05 Unbundled leaf-03       6h10m\nserver-06             S-06 Bundled leaf-03         6h10m\n

    For connections, use:

    core@control-1 ~ $ kubectl get connection\nNAME                                 TYPE           AGE\nleaf-01--mclag-domain--leaf-02       mclag-domain   6h11m\nleaf-01--vpc-loopback                vpc-loopback   6h11m\nleaf-02--vpc-loopback                vpc-loopback   6h11m\nleaf-03--vpc-loopback                vpc-loopback   6h11m\nserver-01--mclag--leaf-01--leaf-02   mclag          6h11m\nserver-02--mclag--leaf-01--leaf-02   mclag          6h11m\nserver-03--unbundled--leaf-01        unbundled      6h11m\nserver-04--bundled--leaf-02          bundled        6h11m\nserver-05--unbundled--leaf-03        unbundled      6h11m\nserver-06--bundled--leaf-03          bundled        6h11m\nspine-01--fabric--leaf-01            fabric         6h11m\nspine-01--fabric--leaf-02            fabric         6h11m\nspine-01--fabric--leaf-03            fabric         6h11m\nspine-02--fabric--leaf-01            fabric         6h11m\nspine-02--fabric--leaf-02            fabric         6h11m\nspine-02--fabric--leaf-03            fabric         6h11m\n

    For IPv4 and VLAN namespaces, use:

    core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   6h12m\n\ncore@control-1 ~ $ kubectl get vlanns\nNAME      AGE\ndefault   6h12m\n
    "},{"location":"vlab/running/#reset-vlab","title":"Reset VLAB","text":"

    If VLAB is currently running, press Ctrl + C to stop it. To reset VLAB and start over, run hhfab init -f. This option forces the process to overwrite your existing configuration in fab.yaml.
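    A full reset therefore looks roughly like this (commands as introduced earlier on this page):

```shell
hhfab init -f                 # overwrite fab.yaml with a fresh configuration
hhfab vlab gen                # regenerate the wiring diagram
hhfab vlab up --kill-stale    # rebuild and boot, removing any stale VMs
```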

    "},{"location":"vlab/running/#next-steps","title":"Next steps","text":""}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"

    The Hedgehog Open Network Fabric is an open networking platform that brings the user experience enjoyed by so many in the public cloud to private environments. It comes without vendor lock-in.

    The Fabric is built around the concept of VPCs (Virtual Private Clouds), similar to public cloud offerings. It provides a multi-tenant API to define the user intent on network isolation and connectivity, which is automatically transformed into configuration for switches and software appliances.

    You can read more about its concepts and architecture in the documentation.

    You can find out how to download and try the Fabric on the self-hosted fully virtualized lab or on hardware.

    "},{"location":"architecture/fabric/","title":"Hedgehog Network Fabric","text":"

    The Hedgehog Open Network Fabric is an open-source network architecture that provides connectivity between virtual and physical workloads and provides a way to achieve network isolation between different groups of workloads using standard BGP EVPN and VXLAN technology. The fabric provides a standard Kubernetes interface to manage the elements in the physical network and provides a mechanism to configure virtual networks and define attachments to these virtual networks. The Hedgehog Fabric provides isolation between different groups of workloads by placing them in different virtual networks called VPC's. To achieve this, it defines different abstractions starting from the physical network where a set of Connection objects defines how a physical server on the network connects to a physical switch on the fabric.

    "},{"location":"architecture/fabric/#underlay-network","title":"Underlay Network","text":"

    The Hedgehog Fabric currently supports two underlay network topologies.

    "},{"location":"architecture/fabric/#collapsed-core","title":"Collapsed Core","text":"

    A collapsed core topology is just a pair of switches connected in an MCLAG configuration with no other network elements. All workloads attach to these two switches.

    The leaves in this setup are configured as an MCLAG pair, and servers can either be connected to both switches as an MCLAG port channel or as orphan ports connected to only one switch. Both leaves peer with external networks using BGP and act as gateways for the workloads attached to them. The underlay configuration in a collapsed core is very simple, which makes it ideal for very small deployments.

    "},{"location":"architecture/fabric/#spine-leaf","title":"Spine-Leaf","text":"

    A spine-leaf topology is a standard Clos network with workloads attaching to leaf switches and the spines providing connectivity between different leaves.

    This kind of topology is useful for bigger deployments and provides all the advantages of a typical Clos network. The underlay network is established using eBGP, where each leaf has a separate ASN and peers with all spines in the network. RFC 7938 was used as the reference for establishing the underlay network.

    "},{"location":"architecture/fabric/#overlay-network","title":"Overlay Network","text":"

    The overlay network runs on top of the underlay network to create virtual networks. It isolates the control and data plane traffic of different virtual networks from each other and from the underlay network. Virtualization is achieved in the Hedgehog Fabric by encapsulating workload traffic in VXLAN tunnels that are sourced and terminated on the leaf switches in the network. The fabric uses BGP EVPN/VXLAN to enable the creation and management of virtual networks on top of the physical one, and it supports multiple virtual networks over the same underlay network to provide multi-tenancy. Each virtual network in the Hedgehog Fabric is identified by a VPC. The following subsections contain a high-level overview of how VPCs are implemented in the Hedgehog Fabric, along with their associated objects.

    "},{"location":"architecture/fabric/#vpc","title":"VPC","text":"

    The previous subsections have described what a VPC is, and how to attach workloads to a specific VPC. The following bullet points describe how VPCs are actually implemented in the network to ensure a private view of the network.

    "},{"location":"architecture/fabric/#vpc-peering","title":"VPC Peering","text":"

    To enable communication between two different VPCs, one needs to configure a VPC peering policy. The Hedgehog Fabric supports two different peering modes.

    "},{"location":"architecture/overview/","title":"Overview","text":"

    Under construction.

    "},{"location":"concepts/overview/","title":"Concepts","text":""},{"location":"concepts/overview/#introduction","title":"Introduction","text":"

    Hedgehog Open Network Fabric is built on top of Kubernetes and uses the Kubernetes API to manage its resources. This means that all user-facing APIs are Kubernetes Custom Resources (CRDs), so you can use standard Kubernetes tools to manage Fabric resources.

    Hedgehog Fabric consists of the following components:

    "},{"location":"concepts/overview/#fabric-api","title":"Fabric API","text":"

    All infrastructure is represented as a set of Fabric resources (Kubernetes CRDs) called the Wiring Diagram. With this representation, Fabric defines switches, servers, control nodes, external systems, and the connections between them in a single place, and then uses these definitions to deploy and manage the whole infrastructure. On top of the Wiring Diagram, Fabric provides a set of APIs to manage VPCs and the connections between them and between VPCs and external systems.

    "},{"location":"concepts/overview/#wiring-diagram-api","title":"Wiring Diagram API","text":"

    Wiring Diagram consists of the following resources:

    "},{"location":"concepts/overview/#user-facing-api","title":"User-facing API","text":""},{"location":"concepts/overview/#fabricator","title":"Fabricator","text":"

    Installer builder and VLAB.

    "},{"location":"concepts/overview/#fabric","title":"Fabric","text":"

    Control plane and switch agent.

    "},{"location":"contribute/docs/","title":"Documentation","text":""},{"location":"contribute/docs/#getting-started","title":"Getting started","text":"

    This documentation is built using MkDocs with multiple plugins enabled. It's based on Markdown; you can find a basic syntax overview here.

    In order to contribute to the documentation, you'll need Git and Docker installed on your machine, as well as any editor of your choice, preferably one supporting Markdown preview. You can run the preview server using the following command:

    just serve\n

    Now you can open a continuously updated preview of your edits in a browser at http://127.0.0.1:8000. Pages are automatically updated while you're editing.

    Additionally, you can run

    just build\n

    to make sure that your changes build correctly and don't break the documentation.

    "},{"location":"contribute/docs/#workflow","title":"Workflow","text":"

    If you want to quickly edit any page in the documentation, you can press the Edit this page icon at the top right of the page. It opens the page in the GitHub editor, where you can make your changes and create a pull request.

    Please never push to the master or release/* branches directly. Always create a pull request and wait for the review.

    Each pull request is automatically built and a preview is deployed. You can find the link to the preview in the pull request comments.

    "},{"location":"contribute/docs/#repository","title":"Repository","text":"

    Documentation is organized in per-release branches:

    The latest release branch is referenced as the latest version in the documentation and is used by default when you open the documentation.

    "},{"location":"contribute/docs/#file-layout","title":"File layout","text":"

    All documentation files are located in the docs directory. Each file is a Markdown file with the .md extension. You can create subdirectories to organize your files. Each directory can have a .pages file that overrides the default navigation order and titles.

    For example, the top-level .pages file in this repository looks like this:

    nav:\n  - index.md\n  - getting-started\n  - concepts\n  - Wiring Diagram: wiring\n  - Install & Upgrade: install-upgrade\n  - User Guide: user-guide\n  - Reference: reference\n  - Troubleshooting: troubleshooting\n  - ...\n  - release-notes\n  - contribute\n

    You can add pages by file name, like index.md, in which case the page title is taken from the file (the first line starting with #). Additionally, you can reference a whole directory to create a nested section in the navigation. You can also set custom titles by using the : separator, as in Wiring Diagram: wiring, where Wiring Diagram is the title and wiring is a file/directory name.

    More details are available in the MkDocs Pages plugin documentation.

    "},{"location":"contribute/docs/#abbreviations","title":"Abbreviations","text":"

    You can find abbreviations in the includes/abbreviations.md file. You can add abbreviations there, and all usages of the defined words in the documentation will be highlighted.

    For example, we have the following in includes/abbreviations.md:

    *[HHFab]: Hedgehog Fabricator - a tool for building Hedgehog Fabric\n

    It highlights all usages of HHFab in the documentation and shows a tooltip with the definition, like this: HHFab.

    "},{"location":"contribute/docs/#markdown-extensions","title":"Markdown extensions","text":"

    We're using the MkDocs Material theme with multiple extensions enabled. You can find a detailed reference here; below are some of the most useful ones.

    To view the code for these examples, please check the source code of this page.

    "},{"location":"contribute/docs/#text-formatting","title":"Text formatting","text":"

    Text can be deleted and replacement text added. This can also be combined into a single operation. Highlighting is also possible, and comments can be added inline.

    Formatting can also be applied to blocks by putting the opening and closing tags on separate lines and adding new lines between the tags and the content.

    Keyboard keys can be written like so:

    Ctrl+Alt+Del

    And inline icons/emojis can be added like this:

    :fontawesome-regular-face-laugh-wink:\n:fontawesome-brands-twitter:{ .twitter }\n

    "},{"location":"contribute/docs/#admonitions","title":"Admonitions","text":"

    Admonitions, also known as call-outs, are an excellent choice for including side content without significantly interrupting the document flow. Different types of admonitions are available, each with a unique icon and color. Details can be found here.

    Lorem ipsum

    Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa.

    "},{"location":"contribute/docs/#code-blocks","title":"Code blocks","text":"

    Details can be found here.

    A simple code block with line numbers and highlighted lines:

    bubble_sort.py
    def bubble_sort(items):\n    for i in range(len(items)):\n        for j in range(len(items) - 1 - i):\n            if items[j] > items[j + 1]:\n                items[j], items[j + 1] = items[j + 1], items[j]\n

    Code annotations:

    theme:\n  features:\n    - content.code.annotate # (1)\n
    1. I'm a code annotation! I can contain code, formatted text, images, ... basically anything that can be written in Markdown.
    "},{"location":"contribute/docs/#tabs","title":"Tabs","text":"

    You can use Tabs to better organize content.

    C / C++
    #include <stdio.h>\n\nint main(void) {\n  printf(\"Hello world!\\n\");\n  return 0;\n}\n
    #include <iostream>\n\nint main(void) {\n  std::cout << \"Hello world!\" << std::endl;\n  return 0;\n}\n
    "},{"location":"contribute/docs/#tables","title":"Tables","text":"Method Description GET Fetch resource PUT Update resource DELETE Delete resource"},{"location":"contribute/docs/#diagrams","title":"Diagrams","text":"

    You can directly include Mermaid diagrams in your Markdown files. Details can be found here.

    graph LR\n  A[Start] --> B{Error?};\n  B -->|Yes| C[Hmm...];\n  C --> D[Debug];\n  D --> B;\n  B ---->|No| E[Yay!];
    sequenceDiagram\n  autonumber\n  Alice->>John: Hello John, how are you?\n  loop Healthcheck\n      John->>John: Fight against hypochondria\n  end\n  Note right of John: Rational thoughts!\n  John-->>Alice: Great!\n  John->>Bob: How about you?\n  Bob-->>John: Jolly good!
    "},{"location":"contribute/overview/","title":"Overview","text":"

    Under construction.

    "},{"location":"faq/overview/","title":"Frequently Asked Questions (FAQ)","text":""},{"location":"faq/overview/#what-is-the-hedgehog-fabric","title":"What is the Hedgehog Fabric?","text":"

    The Hedgehog Fabric is a topology of routers arranged in a spine-leaf architecture. A spine-leaf architecture is a type of Clos network topology. In a spine-leaf architecture, the leaves are usually placed in racks and connected directly to the servers, whereas spines are connected only to leaves. In a spine-leaf architecture, the fundamental unit of connection is a layer 3 route.

    The Hedgehog Fabric is managed via Kubernetes objects and custom resource definitions.

    "},{"location":"faq/overview/#what-are-the-advantages-of-a-spine-leaf-architecture","title":"What are the advantages of a spine-leaf architecture?","text":"

    A spine-leaf architecture is designed to facilitate traffic that is passing between servers inside of a data center. By contrast, other architectures like core-access-aggregation facilitate traffic moving in and out of the data center. A spine-leaf architecture provides multiple paths between nodes which allows for router maintenance and resilience in the case of failures. The spine-leaf architecture allows for multiple points of egress via border leaf nodes. In a spine-leaf architecture the unit of connection is a layer 3 route. There are robust tools, queueing algorithms and hardware available to manage network traffic at layer 3. To manage the distribution of routes to switches inside the fabric a protocol such as BGP, OSPF, or IS-IS is used.

    "},{"location":"faq/overview/#spine-leaf-architecture-diagram","title":"Spine Leaf Architecture Diagram","text":"

    The following diagram contains Leaf and Spine routers. Servers inside of a virtual private cloud can be attached to any leaf. To allow the servers to communicate, routes are applied to leaf nodes. The traffic passing from leaf 1 to leaf 2 can travel via any spine: the leaf uses ECMP to decide which spine to use. EVPN ensures that servers inside of a VPC are reachable at layer 2 regardless of which leaf they are attached to in the Fabric.

    graph TD\n    S1([Spine 1])\n    S2([Spine 2])\n    S3([Spine 3])\n    L1([Leaf 1])\n    L2([Leaf 2])\n    L3([Leaf 3])\n    L4([Leaf 4])\n    WS1[[Workload Servers]]\n    WS2[[Workload Servers]]\n    WS3[[Workload Servers]]\n    WS4[[Workload Servers]]\n\n    S1 & S2 & S3 ---- L1 & L2 & L3 & L4 \n    L1 ---- WS1\n    L2 ---- WS2\n    L3 ---- WS3\n    L4 ---- WS4\n
    "},{"location":"faq/overview/#core-access-aggregation-diagram","title":"Core Access Aggregation Diagram","text":"

    In the diagram below, the Access switches are isolated or managed by layer-2 constructs like ACLs, bridging, and VLANs. The Aggregation routers are where layer-2 traffic is promoted to layer 3. The Core routers handle layer-3 traffic only. Often some form of Spanning Tree Protocol is used to avoid loops in the layer-2 domain. Loops would cripple the network, as layer 2 often relies on broadcast/flooding for discovery. While there are multiple paths from the workload servers out to the core, most of them are not passing traffic because of the Spanning Tree Protocol; these disabled links are shown as dotted lines.

    graph TD\n    CG1((Core Router 1))\n    CG2((Core Router 2))\n    AG1([Aggregation 1])\n    AG2([Aggregation 2])\n    AG3([Aggregation 3])\n    A1[Access 1]\n    A2[Access 2]\n    A3[Access 3]\n    WS1[[Workload Servers]]\n    WS2[[Workload Servers]]\n    WS3[[Workload Servers]]\n\n    CG1 ---- AG1 & AG2 & AG3\n    CG2 ---- AG1 & AG2 & AG3\n    AG1 ---- A1 \n    AG2 ---- A2 \n    AG3 ---- A3 \n    AG1 -..- A2 & A3\n    AG2 -..- A1 & A3\n    AG3 -..- A1 & A2\n    A1 ---- WS1\n    A2 ---- WS2\n    A3 ---- WS3\n
    "},{"location":"faq/overview/#what-does-it-mean-to-manage-my-network-with-kubernetes","title":"What does it mean to manage my network with Kubernetes?","text":"

    A common way to manage a network is to proceed manually via the command-line interface of the equipment, or with the hardware vendor's tools. Managing a small number of switches and routers this way is workable but cumbersome, and it only gets more painful as the network grows. Managing switches and servers with Kubernetes is similar to managing pods and applications with Kubernetes: it provides assistance for the deployment, scaling, and management of the network appliances.

    For example, if the administrator of a Kubernetes cluster wants to create a new Nginx pod, they write a YAML file describing the pod name, the container image, any ports that the pod needs exposed, and the namespace to run the pod in. After the YAML file is created, a simple kubectl apply -f nginx.yaml is all that the administrator needs to run for the pod to be scheduled.
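    For instance, that nginx example might be written like this (the image tag and namespace are arbitrary choices for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: default        # namespace to run the pod in
spec:
  containers:
    - name: nginx
      image: nginx:1.25     # container image
      ports:
        - containerPort: 80 # port the pod needs exposed
```

Running kubectl apply -f nginx.yaml against this file schedules the pod.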

    With the Hedgehog Fabric, the same principles apply to managing network resources. Administrators create a YAML file to configure a VPC. The YAML file describes the IP address range for the private cloud, for example the 192.168.0.0/16 space. It also describes any VLANs that the private cloud needs. After the desired options are in the file, administrators can push the configuration to the switch with a mere kubectl apply -f vpc1.yaml, and within a few seconds the switch configuration is live.

    "},{"location":"faq/overview/#what-is-a-virtual-private-cloud-vpc","title":"What is a Virtual Private Cloud (VPC)?","text":"

    A VPC provides layer 3 logical isolation inside of a network. To isolate the servers, a VRF is used. A VRF allows for multiple routing tables to exist at the same time on a switch. Each VPC is isolated from the others because there is simply no route between them.

    "},{"location":"getting-started/download/","title":"Download","text":"

    The main entry point for the software is the Hedgehog Fabricator CLI, named hhfab. It is a command-line tool that allows you to build an installer for the Hedgehog Fabric, upgrade an existing installation, or run the Virtual LAB.

    "},{"location":"getting-started/download/#getting-access","title":"Getting access","text":"

    Prior to General Availability, access to the full software is limited and requires a Design Partner Agreement. Please submit a ticket with the request using the Hedgehog Support Portal.

    After that, you will be provided with credentials to access the software on GitHub Packages. In order to use the software, log in to the registry using the following command:

    docker login ghcr.io --username provided_user_name --password provided_token_string\n
    "},{"location":"getting-started/download/#downloading-hhfab","title":"Downloading hhfab","text":"

    Currently, hhfab is supported on Linux x86/arm64 (tested on Ubuntu 22.04) and MacOS x86/arm64 for building installers/upgraders. It may work on Windows WSL2 (with Ubuntu), but it's not tested. For running VLAB, only Linux x86 is currently supported.

    All software is published to the OCI registry on GitHub Packages, including binaries, container images, and Helm charts. Download the latest stable hhfab binary from GitHub Packages using the following command; it requires ORAS to be installed (see below):

    curl -fsSL https://i.hhdev.io/hhfab | bash\n

    Or download a specific version (e.g. beta-1) using the following command:

    curl -fsSL https://i.hhdev.io/hhfab | VERSION=beta-1 bash\n

    Use the VERSION environment variable to specify the version of the software to download. By default, the latest stable release is downloaded. You can pick a specific release series (e.g. alpha-2) or a specific release.

    "},{"location":"getting-started/download/#installing-oras","title":"Installing ORAS","text":"

    The download script requires ORAS to be installed. ORAS is used to download the binary from the OCI registry and can be installed using the following command:

    curl -fsSL https://i.hhdev.io/oras | bash\n
    "},{"location":"getting-started/download/#next-steps","title":"Next steps","text":""},{"location":"install-upgrade/build-wiring/","title":"Build Wiring Diagram","text":"

    Under construction.

    "},{"location":"install-upgrade/build-wiring/#overview","title":"Overview","text":"

    A wiring diagram is a YAML file that is a digital representation of your network. You can find more YAML-level details in the User Guide sections on switch features and port naming and in the API reference. It's mandatory for all switches to reference a SwitchProfile in the spec.profile field of the Switch object. Only port names defined by switch profiles can be used in the wiring diagram; NOS (or any other) port names aren't supported.
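    As an illustration of the SwitchProfile requirement, a Switch entry in a wiring diagram might look roughly like this (the apiVersion, profile name, and field values are assumptions; check the Fabric API reference and the switch profiles catalog for real values):

```yaml
# Hypothetical Switch object from a wiring diagram.
apiVersion: wiring.githedgehog.com/v1alpha2
kind: Switch
metadata:
  name: leaf-01
spec:
  profile: dell-s5248f-on   # must reference an existing SwitchProfile
  role: server-leaf
  description: VS-01
```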

    In the meantime, to have a look at a working wiring diagram for the Hedgehog Fabric, run the sample generator, which produces working wiring diagrams:

    ubuntu@sl-dev:~$ hhfab sample -h

    NAME:
       hhfab sample - generate sample wiring diagram

    USAGE:
       hhfab sample command [command options]

    COMMANDS:
       spine-leaf, sl      generate sample spine-leaf wiring diagram
       collapsed-core, cc  generate sample collapsed-core wiring diagram
       help, h             Shows a list of commands or help for one command

    OPTIONS:
       --help, -h  show help

    Or you can generate a wiring diagram for a VLAB environment with flags to customize number of switches, links, servers, etc.:

    ubuntu@sl-dev:~$ hhfab vlab gen --help
    NAME:
       hhfab vlab generate - generate VLAB wiring diagram

    USAGE:
       hhfab vlab generate [command options]

    OPTIONS:
       --bundled-servers value      number of bundled servers to generate for switches (only for one of the second switch in the redundancy group or orphan switch) (default: 1)
       --eslag-leaf-groups value    eslag leaf groups (comma separated list of number of ESLAG switches in each group, should be 2-4 per group, e.g. 2,4,2 for 3 groups with 2, 4 and 2 switches)
       --eslag-servers value        number of ESLAG servers to generate for ESLAG switches (default: 2)
       --fabric-links-count value   number of fabric links if fabric mode is spine-leaf (default: 0)
       --help, -h                   show help
       --mclag-leafs-count value    number of mclag leafs (should be even) (default: 0)
       --mclag-peer-links value     number of mclag peer links for each mclag leaf (default: 0)
       --mclag-servers value        number of MCLAG servers to generate for MCLAG switches (default: 2)
       --mclag-session-links value  number of mclag session links for each mclag leaf (default: 0)
       --no-switches                do not generate any switches (default: false)
       --orphan-leafs-count value   number of orphan leafs (default: 0)
       --spines-count value         number of spines if fabric mode is spine-leaf (default: 0)
       --unbundled-servers value    number of unbundled servers to generate for switches (only for one of the first switch in the redundancy group or orphan switch) (default: 1)
       --vpc-loopbacks value        number of vpc loopbacks for each switch (default: 0)
    "},{"location":"install-upgrade/build-wiring/#sample-switch-configuration","title":"Sample Switch Configuration","text":"
    apiVersion: wiring.githedgehog.com/v1beta1
    kind: Switch
    metadata:
      name: ds3000-02
    spec:
      boot:
        serial: ABC123XYZ
      role: server-leaf
      description: leaf-2
      profile: celestica-ds3000
      portBreakouts:
        E1/1: 4x10G
        E1/2: 4x10G
        E1/17: 4x25G
        E1/18: 4x25G
        E1/32: 4x25G
      redundancy:
        group: mclag-1
        type: mclag
    "},{"location":"install-upgrade/build-wiring/#design-discussion","title":"Design Discussion","text":"

    This section is meant to help the reader understand how to assemble the primitives presented by the Fabric API into a functional fabric.

    "},{"location":"install-upgrade/build-wiring/#vpc","title":"VPC","text":"

    A VPC allows for isolation at layer 3. This is the main building block for users when creating their architecture. Hosts inside of a VPC belong to the same broadcast domain and can communicate with each other; if desired, a single VPC can be configured with multiple broadcast domains. The hosts inside of a VPC will likely need to connect to other VPCs or the outside world. To communicate between two VPCs, a peering needs to be created. A VPC can be a logical separation of workloads; by separating these workloads, additional controls become available. The logical separation doesn't have to be the traditional database, web, and compute layers: it could be development teams who need isolation, tenants inside of an office building, or any separation that allows for better control of the network. Once your VPCs are decided, the rest of the fabric comes together: traffic can be prioritized, security can be put into place, and the wiring can begin. The fabric allows a VPC to span more than one switch, which provides great flexibility.

    graph TD
        L1([Leaf 1])
        L2([Leaf 2])
        S1["Server 1
          10.7.71.1"]
        S2["Server 2
          172.16.2.31"]
        S3["Server 3
           192.168.18.85"]

        L1 <--> S1
        L1 <--> S2
        L2 <--> S3

        subgraph VPC 1
        S1
        S2
        S3
        end
    "},{"location":"install-upgrade/build-wiring/#connection","title":"Connection","text":"

    A connection represents the physical wires in your data center. They connect switches to other switches or switches to servers.

    "},{"location":"install-upgrade/build-wiring/#server-connections","title":"Server Connections","text":"

    A server connection is a connection used to connect servers to the fabric. The fabric will configure the server-facing port according to the type of the connection (MCLAG, Bundled, etc.). The configuration of the actual server needs to be done by the server administrator. The server port names are not validated by the fabric; they are used as metadata to identify the connection. A server connection can be one of:

    graph TD
        S1([Spine 1])
        S2([Spine 2])
        L1([Leaf 1])
        L2([Leaf 2])
        L3([Leaf 3])
        L4([Leaf 4])
        L5([Leaf 5])
        L6([Leaf 6])
        L7([Leaf 7])

        TS1[Server1]
        TS2[Server2]
        TS3[Server3]
        TS4[Server4]

        S1 & S2 ---- L1 & L2 & L3 & L4 & L5 & L6 & L7
        L1 <-- Bundled --> TS1
        L1 <-- Bundled --> TS1
        L1 <-- Unbundled --> TS2
        L2 <-- MCLAG --> TS3
        L3 <-- MCLAG --> TS3
        L4 <-- ESLAG --> TS4
        L5 <-- ESLAG --> TS4
        L6 <-- ESLAG --> TS4
        L7 <-- ESLAG --> TS4

        subgraph VPC 1
        TS1
        TS2
        TS3
        TS4
        end

        subgraph MCLAG
        L2
        L3
        end

        subgraph ESLAG
        L3
        L4
        L5
        L6
        L7
        end
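To make the connection types concrete, here is a sketch of a Connection object describing the MCLAG attachment of Server3 to Leaf 2 and Leaf 3 from the diagram above. The field layout (spec.mclag.links with server/switch port pairs) and the names used are illustrative assumptions based on the wiring API's conventions, not an authoritative schema; consult the API reference for the exact fields.

```yaml
# Hypothetical MCLAG server connection; verify field names against the API reference.
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: server-3--mclag--leaf-2--leaf-3
spec:
  mclag:
    links: # one link to each switch in the MCLAG redundancy group
      - server:
          port: server-3/enp2s1
        switch:
          port: leaf-2/E1/5
      - server:
          port: server-3/enp2s2
        switch:
          port: leaf-3/E1/5
```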
    "},{"location":"install-upgrade/build-wiring/#fabric-connections","title":"Fabric Connections","text":"

    Fabric connections are the connections between switches; they form the fabric of the network.

    "},{"location":"install-upgrade/build-wiring/#vpc-peering","title":"VPC Peering","text":"

    VPCs need VPC Peerings to talk to each other. VPC Peerings come in two varieties: local and remote.

    graph TD
        S1([Spine 1])
        S2([Spine 2])
        L1([Leaf 1])
        L2([Leaf 2])
        TS1[Server1]
        TS2[Server2]
        TS3[Server3]
        TS4[Server4]

        S1 & S2 <--> L1 & L2
        L1 <--> TS1 & TS2
        L2 <--> TS3 & TS4

        subgraph VPC 1
        TS1
        TS2
        end

        subgraph VPC 2
        TS3
        TS4
        end
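A peering between the two VPCs in the diagram could be declared with a VPCPeering object along these lines. The field names shown (permit, remote) are assumptions based on the VPC API group's naming conventions; check the API reference before use.

```yaml
# Hypothetical VPC peering; verify field names against the API reference.
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCPeering
metadata:
  name: vpc-1--vpc-2
spec:
  permit: # allow traffic between the two VPCs
    - vpc-1: {}
      vpc-2: {}
  # remote: border-1  # hypothetical: dedicate a switch group for remote peering
```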
    "},{"location":"install-upgrade/build-wiring/#local-vpc-peering","title":"Local VPC Peering","text":"

    When there is no dedicated border/peering switch available in the fabric, you can use local VPC peering. This kind of peering sends traffic between the two VPCs on a switch where either of the VPCs has workloads attached. Due to a limitation in the SONiC network operating system, the bandwidth of this kind of peering is limited by the number of VPC loopbacks you selected while initializing the fabric. Traffic between the VPCs uses the loopback interface; the bandwidth of this connection is equal to the bandwidth of the ports used in the loopback.

    graph TD

        L1([Leaf 1])
        S1[Server1]
        S2[Server2]
        S3[Server3]
        S4[Server4]

        L1 <-.2,loopback.-> L1;
        L1 <-.3.-> S1;
        L1 <--> S2 & S4;
        L1 <-.1.-> S3;

        subgraph VPC 1
        S1
        S2
        end

        subgraph VPC 2
        S3
        S4
        end
    The dotted line in the diagram shows the traffic flow for local peering. The traffic originates in VPC 2, travels to the switch, travels out the first loopback port, into the second loopback port, and finally out the port destined for VPC 1.

    "},{"location":"install-upgrade/build-wiring/#remote-vpc-peering","title":"Remote VPC Peering","text":"

    Remote peering is used when you need a high-bandwidth connection between VPCs: you dedicate a switch to the peering traffic. This is done either on a border leaf or on a switch where neither of the VPCs is present. This kind of peering allows traffic between different VPCs at line rate and is limited only by fabric bandwidth. Remote peering introduces a few additional hops in the traffic path and may cause a small increase in latency.

    graph TD
        S1([Spine 1])
        S2([Spine 2])
        L1([Leaf 1])
        L2([Leaf 2])
        L3([Leaf 3])
        TS1[Server1]
        TS2[Server2]
        TS3[Server3]
        TS4[Server4]

        S1 <-.5.-> L1;
        S1 <-.2.-> L2;
        S1 <-.3,4.-> L3;
        S2 <--> L1;
        S2 <--> L2;
        S2 <--> L3;
        L1 <-.6.-> TS1;
        L1 <--> TS2;
        L2 <--> TS3;
        L2 <-.1.-> TS4;

        subgraph VPC 1
        TS1
        TS2
        end

        subgraph VPC 2
        TS3
        TS4
        end
    The dotted line in the diagram shows the traffic flow for remote peering. The traffic could take a different path because of ECMP. It is important to note that Leaf 3 cannot have any servers from VPC 1 or VPC 2 on it, but it can have a different VPC attached to it.

    "},{"location":"install-upgrade/build-wiring/#vpc-loopback","title":"VPC Loopback","text":"

    A VPC loopback is a physical cable with both ends plugged into the same switch; the ports are suggested, but not required, to be adjacent. This loopback allows two different VPCs to communicate with each other; it is required due to a Broadcom limitation.

    "},{"location":"install-upgrade/config/","title":"Fabric Configuration","text":""},{"location":"install-upgrade/config/#overview","title":"Overview","text":"

    The fab.yaml file is the configuration file for the fabric. It supplies the configuration of the users, their credentials, logging, telemetry, and other non-wiring-related settings. The fab.yaml file is composed of multiple YAML documents inside of a single file. Per the YAML spec, three hyphens (---) on a single line separate the end of one document from the beginning of the next. There are two YAML documents in the fab.yaml file. For more information about how to use hhfab init, run hhfab init --help.
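Abbreviated heavily, the two documents inside fab.yaml are shaped like this (the full version appears in the Complete Example File section):

```yaml
apiVersion: fabricator.githedgehog.com/v1beta1
kind: Fabricator        # document 1: fabric-wide configuration
metadata:
  name: default
  namespace: fab
# ...
---
apiVersion: fabricator.githedgehog.com/v1beta1
kind: ControlNode       # document 2: control node configuration
metadata:
  name: control-1
  namespace: fab
# ...
```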

    "},{"location":"install-upgrade/config/#typical-hhfab-workflows","title":"Typical HHFAB workflows","text":""},{"location":"install-upgrade/config/#hhfab-for-vlab","title":"HHFAB for VLAB","text":"

    For a VLAB user, the typical workflow with hhfab is:

    1. hhfab init --dev
    2. hhfab vlab gen
    3. hhfab vlab up

    The above workflow will get a user up and running with a spine-leaf VLAB.

    "},{"location":"install-upgrade/config/#hhfab-for-physical-machines","title":"HHFAB for Physical Machines","text":"

    It's possible to start from scratch:

    1. hhfab init (see the different flags to customize the initial configuration)
    2. Adjust the fab.yaml file to your needs
    3. hhfab validate
    4. hhfab build

    Or import existing config and wiring files:

    1. hhfab init -c fab.yaml -w wiring-file.yaml -w extra-wiring-file.yaml
    2. hhfab validate
    3. hhfab build

    After the above workflow a user will have a .img file suitable for installing the control node, then bringing up the switches which comprise the fabric.

    "},{"location":"install-upgrade/config/#fabyaml","title":"Fab.yaml","text":""},{"location":"install-upgrade/config/#configure-control-node-and-switch-users","title":"Configure control node and switch users","text":"

    Configuring control node and switch users is done either by passing --default-password-hash to hhfab init or by editing the resulting fab.yaml file emitted by hhfab init. You can specify users to be configured on the control node(s) and switches in the following format:

    spec:
        config:
          control:
            defaultUser: # user 'core' on all control nodes
              password: "hashhashhashhashhash" # password hash
              authorizedKeys:
                - "ssh-ed25519 SecREKeyJumblE"

            fabric:
              mode: spine-leaf # "spine-leaf" or "collapsed-core"

              defaultSwitchUsers:
                admin: # at least one user with name 'admin' and role 'admin'
                  role: admin
                  #password: "$5$8nAYPGcl4..." # password hash
                  #authorizedKeys: # optional SSH authorized keys
                  #  - "ssh-ed25519 AAAAC3Nza..."
                op: # optional read-only user
                  role: operator
                  #password: "$5$8nAYPGcl4..." # password hash
                  #authorizedKeys: # optional SSH authorized keys
                  #  - "ssh-ed25519 AAAAC3Nza..."

    The control node user is always named core.

    The operator role provides read-only access to the sonic-cli command on the switches. To avoid conflicts, do not use the following usernames: operator, hhagent, netops.
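The password values are crypt-style hashes, not plain-text passwords; the $5$ prefix in the commented sample above indicates the SHA-256 crypt format. One way to generate such a hash, assuming openssl is available on your workstation:

```shell
# Generate a SHA-256 crypt password hash (the "$5$..." format used in fab.yaml).
# mkpasswd -m sha-256 (from the whois package) produces the same format.
openssl passwd -5 'use-a-strong-password-here'
```

The resulting string (starting with $5$) goes into the password fields above or into the --default-password-hash flag of hhfab init.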

    "},{"location":"install-upgrade/config/#ntp-and-dhcp","title":"NTP and DHCP","text":"

    The control node uses public NTP servers from Cloudflare and Google by default. The control node runs a DHCP server on the management network. See the example file.

    "},{"location":"install-upgrade/config/#control-node","title":"Control Node","text":"

    The control node is the host that manages all the switches, runs k3s, and serves images. This is the YAML document that configures the control node:

    apiVersion: fabricator.githedgehog.com/v1beta1
    kind: ControlNode
    metadata:
      name: control-1
      namespace: fab
    spec:
      bootstrap:
        disk: "/dev/sda" # disk to install OS on, e.g. "sda" or "nvme0n1"
      external:
        interface: enp2s0 # interface for external
        ip: dhcp # IP address for external interface
      management:
        interface: enp2s1 # interface for management

    # Currently only one ControlNode is supported
    The management interface is for the control node to manage the fabric switches, not for end-user management of the control node itself. For end-user management of the control node, specify the external interface name.

    "},{"location":"install-upgrade/config/#forward-switch-metrics-and-logs","title":"Forward switch metrics and logs","text":"

    There is an option to enable Grafana Alloy on all switches to forward metrics and logs to the configured targets using the Prometheus Remote-Write API and the Loki API. If those APIs are available from the control node(s), but not from the switches, it's possible to enable an HTTP proxy on the control node(s) that Grafana Alloy running on the switches will use to access the configured targets. This can be done by passing --control-proxy=true to hhfab init.

    Metrics include port speeds, counters, errors, operational status, transceivers, fans, power supplies, temperature sensors, BGP neighbors, LLDP neighbors, and more. Logs include agent logs.

    Configuring the exporters and targets is currently only possible by editing the fab.yaml configuration file. An example configuration is provided below:

    spec:
      config:
          ...
          defaultAlloyConfig:
            agentScrapeIntervalSeconds: 120
            unixScrapeIntervalSeconds: 120
            unixExporterEnabled: true
            lokiTargets:
              grafana_cloud: # target name, multiple targets can be configured
                  basicAuth: # optional
                      password: "<password>"
                      username: "<username>"
                  labels: # labels to be added to all logs
                      env: env-1
                  url: https://logs-prod-021.grafana.net/loki/api/v1/push
                  useControlProxy: true # if the Loki API is not available from the switches directly, use the Control Node as a proxy
            prometheusTargets:
              grafana_cloud: # target name, multiple targets can be configured
                  basicAuth: # optional
                      password: "<password>"
                      username: "<username>"
                  labels: # labels to be added to all metrics
                      env: env-1
                  sendIntervalSeconds: 120
                  url: https://prometheus-prod-36-prod-us-west-0.grafana.net/api/prom/push
                  useControlProxy: true # if the Prometheus API is not available from the switches directly, use the Control Node as a proxy
                  unixExporterCollectors: # list of node-exporter collectors to enable, https://grafana.com/docs/alloy/latest/reference/components/prometheus.exporter.unix/#collectors-list
                      - cpu
                      - filesystem
                      - loadavg
                      - meminfo
                  collectSyslogEnabled: true # collect /var/log/syslog on switches and forward to the lokiTargets

    For additional options, see the AlloyConfig struct in Fabric repo.

    "},{"location":"install-upgrade/config/#complete-example-file","title":"Complete Example File","text":"
    apiVersion: fabricator.githedgehog.com/v1beta1
    kind: Fabricator
    metadata:
      name: default
      namespace: fab
    spec:
      config:
        control:
          tlsSAN: # IPs and DNS names to access API
            - "customer.site.io"

          ntpServers:
          - time.cloudflare.com
          - time1.google.com

          defaultUser: # user 'core' on all control nodes
            password: "hash..." # password hash
            authorizedKeys:
              - "ssh-ed25519 hash..."

        fabric:
          mode: spine-leaf # "spine-leaf" or "collapsed-core"
          includeONIE: true
          defaultSwitchUsers:
            admin: # at least one user with name 'admin' and role 'admin'
              role: admin
              password: "hash..." # password hash
              authorizedKeys:
                - "ssh-ed25519 hash..."
            op: # optional read-only user
              role: operator
              password: "hash..." # password hash
              authorizedKeys:
                - "ssh-ed25519 hash..."

          defaultAlloyConfig:
            agentScrapeIntervalSeconds: 120
            unixScrapeIntervalSeconds: 120
            unixExporterEnabled: true
            collectSyslogEnabled: true
            lokiTargets:
              lab:
                url: http://url.io:3100/loki/api/v1/push
                useControlProxy: true
                labels:
                  descriptive: name
            prometheusTargets:
              lab:
                url: http://url.io:9100/api/v1/push
                useControlProxy: true
                labels:
                  descriptive: name
                sendIntervalSeconds: 120

    ---
    apiVersion: fabricator.githedgehog.com/v1beta1
    kind: ControlNode
    metadata:
      name: control-1
      namespace: fab
    spec:
      bootstrap:
        disk: "/dev/sda" # disk to install OS on, e.g. "sda" or "nvme0n1"
      external:
        interface: eno2 # interface for external
        ip: dhcp # IP address for external interface
      management:
        interface: eno1

    # Currently only one ControlNode is supported
    "},{"location":"install-upgrade/install/","title":"Install Fabric","text":""},{"location":"install-upgrade/install/#prerequisites","title":"Prerequisites","text":""},{"location":"install-upgrade/install/#overview-of-install-process","title":"Overview of Install Process","text":"

    This section is dedicated to the Hedgehog Fabric installation on bare-metal control node(s) and switches, their preparation and configuration. To install the VLAB see VLAB Overview.

    Download and install hhfab following instructions from the Download section.

    The main steps to install Fabric are:

    1. Install hhfab on a machine with access to the Internet
      1. Prepare Wiring Diagram
      2. Select Fabric Configuration
      3. Build Control Node configuration and installer
    2. Install Control Node
      1. Insert USB with control-os image into Fabric Control Node
      2. Boot the node off the USB to initiate the installation
    3. Prepare Management Network
      1. Connect management switch to Fabric control node
      2. Connect 1GbE Management port of switches to management switch
    4. Prepare supported switches
      1. Ensure switch serial numbers and/or first management interface MAC addresses are recorded in the wiring diagram
      2. Boot them into ONIE Install Mode to have them automatically provisioned
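In the wiring diagram, that identity is recorded in the Switch object's spec.boot section, which can hold the serial number (as in the sample switch configuration earlier) or, alternatively, the MAC address of the first management interface. The mac field name below is an assumption; verify it against the API reference.

```yaml
spec:
  boot:
    serial: ABC123XYZ         # serial number from the switch label
    # mac: 0c:20:12:ff:00:01  # hypothetical alternative: first management interface MAC
```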
    "},{"location":"install-upgrade/install/#build-control-node-configuration-and-installer","title":"Build Control Node configuration and Installer","text":"

    Hedgehog has created a command line utility, called hhfab, that helps generate the wiring diagram and fabric configuration, validate the supplied configurations, and generate an installation image (.img or .iso) suitable for writing to a USB flash drive or mounting via IPMI virtual media. The first hhfab command to run is hhfab init. This generates the main configuration file, fab.yaml. fab.yaml is responsible for almost every configuration of the fabric with the exception of the wiring. Each command and subcommand has a usage message; simply supply the -h flag to your command or subcommand to see the available options, for example hhfab vlab -h and hhfab vlab gen -h.

    "},{"location":"install-upgrade/install/#hhfab-commands-to-make-a-bootable-image","title":"HHFAB commands to make a bootable image","text":"
    1. hhfab init --wiring wiring-lab.yaml
    2. The init command generates a fab.yaml file, edit the fab.yaml file for your needs
      1. ensure the correct boot disk (e.g. /dev/sda) and control node NIC names are supplied
    3. hhfab validate
    4. hhfab build --mode iso
      1. An ISO is best suited for use with IPMI-based virtual media. If desired, an IMG file suitable for writing to a USB drive can be created by passing the --mode usb option. ISO is the default.

    The installer for the fabric is generated in $CWD/result/. This installation image is named control-1-install-usb.iso and is 7.5 GB in size. Once the image is created, you can write it to a USB drive, or mount it via virtual media.

    "},{"location":"install-upgrade/install/#write-usb-image-to-disk","title":"Write USB Image to Disk","text":"

    This will erase data on the USB disk.

    "},{"location":"install-upgrade/install/#steps-for-linux","title":"Steps for Linux","text":"
    1. Insert the USB to your machine
    2. Identify the path to your USB stick, for example: /dev/sdc (lsblk can help identify it)
    3. Issue the command to write the image to the USB drive, for example: sudo dd if=./control-1-install-usb.img of=/dev/sdc bs=4M status=progress (substitute your own device path; this erases the drive)
    "},{"location":"install-upgrade/install/#steps-for-macos","title":"Steps for MacOS","text":"
    1. Plug the drive into the computer
    2. Open the terminal
    3. Identify the drive using diskutil list
    4. Unmount the disk: diskutil unmount disk5 (the disk identifier is specific to your environment)
    5. Write the image to the disk: sudo dd if=./control-1-install-usb.img of=/dev/disk5 bs=4k status=progress

    There are utilities that assist with this process, such as Etcher.

    "},{"location":"install-upgrade/install/#install-control-node","title":"Install Control Node","text":"

    The control node should be given a static IP address, either via a DHCP lease or statically assigned.

    1. Configure the server to use UEFI boot without secure boot

    2. Attach the image to the server, either by inserting a USB drive or by attaching virtual media

    3. Boot off of the attached media; the installation process is automated

    4. Once the control node has booted, it logs in automatically and begins the installation process

      1. Optionally use journalctl -f -u flatcar-install.service to monitor progress
    5. Once the installation is complete, the system automatically reboots.

    6. After the system has shut down, but before the boot process reaches the operating system, remove the USB drive from the system. Removal during the UEFI boot screen is acceptable.

    7. Upon booting into the freshly installed system, the fabric installation will automatically begin

      1. If the insecure --dev flag was passed to hhfab init, the password for the core user is HHFab.Admin!. The switches have two users created: admin and op. admin has administrator privileges and the password HHFab.Admin!, whereas op is a read-only, non-sudo user with the password HHFab.Op!.
      2. Optionally this can be monitored with journalctl -f -u fabric-install.service
    8. The install is complete when the log emits "Control Node installation complete". Additionally, systemctl status will show inactive (dead), indicating that the executable has finished.

    "},{"location":"install-upgrade/install/#configure-management-network","title":"Configure Management Network","text":"

    The control node is dual-homed: it connects to two different networks, which are called management and external, respectively, in the fab.yaml file. The management network is for controlling the switches that comprise the fabric. It can be a simple broadcast domain with layer 2 connectivity. The management network is not accessible to machines or devices not associated with the fabric; it is a private, exclusive network. The control node connects to the management network via a 10 GbE interface. It runs a DHCP server, as well as a small HTTP server.

    The external network allows the user to access the control node via their local IT network. It provides SSH access to the host operating system on the control node.

    "},{"location":"install-upgrade/install/#fabric-manages-switches","title":"Fabric Manages Switches","text":"

    Now that the install has finished, you can start interacting with the Fabric using kubectl, kubectl fabric and k9s, all pre-installed as part of the Control Node installer.

    At this stage, the fabric hands out DHCP addresses to the switches via the management network. Optionally, you can monitor this process by going through the following steps:

    - enter k9s at the command prompt
    - use the arrow keys to select the pod named fabric-boot; the logs of the pod will be displayed, showing the DHCP lease process
    - to see the switches, type :switches (like a vim command) into k9s
    - use the switches screen of k9s to check the heartbeat column, which verifies the connection between switch and controller

    "},{"location":"install-upgrade/requirements/","title":"System Requirements","text":""},{"location":"install-upgrade/requirements/#out-of-band-management-network","title":"Out of Band Management Network","text":"

    In order to provision and manage the switches that comprise the fabric, an out-of-band management switch must also be installed. This network is to be used exclusively by the control node and the fabric switches; no other access is permitted. This switch (or switches) is not managed by the fabric. It is recommended that this switch have at least one 10GbE port and that this port connect to the control node.

    "},{"location":"install-upgrade/requirements/#control-node","title":"Control Node","text":"

    In internal testing Hedgehog uses a server with the following specifications:

    "},{"location":"install-upgrade/requirements/#non-ha-minimal-setup-1-control-node","title":"Non-HA (minimal) setup - 1 Control Node","text":"
    |      | Minimal | Recommended |
    |------|---------|-------------|
    | CPU  | 6       | 8           |
    | RAM  | 16 GB   | 32 GB       |
    | Disk | 150 GB  | 250 GB      |
    "},{"location":"install-upgrade/requirements/#future-ha-setup-3-control-nodes-per-node","title":"(Future) HA setup - 3+ Control Nodes (per node)","text":"
    |      | Minimal | Recommended |
    |------|---------|-------------|
    | CPU  | 6       | 8           |
    | RAM  | 16 GB   | 32 GB       |
    | Disk | 150 GB  | 250 GB      |
    "},{"location":"install-upgrade/requirements/#reference-control-node-configuration","title":"Reference Control Node Configuration","text":""},{"location":"install-upgrade/requirements/#device-participating-in-the-hedgehog-fabric-eg-switch","title":"Device participating in the Hedgehog Fabric (e.g. switch)","text":"

    The following resources should be available on a device to run in the Hedgehog Fabric (in addition to what other software, such as SONiC, already uses):

    |      | Minimal | Recommended |
    |------|---------|-------------|
    | CPU  | 1       | 2           |
    | RAM  | 1 GB    | 1.5 GB      |
    | Disk | 5 GB    | 10 GB       |
    "},{"location":"install-upgrade/supported-devices/","title":"Supported Devices","text":"

    You can find detailed information about devices in the Switch Profiles Catalog and in the User Guide switch features and port naming.

    "},{"location":"install-upgrade/supported-devices/#spine","title":"Spine","text":""},{"location":"install-upgrade/supported-devices/#leaf","title":"Leaf","text":"

    (could be used for collapsed-core)

    "},{"location":"install-upgrade/upgrade/","title":"Upgrading from Alpha-7 to Beta-1","text":""},{"location":"install-upgrade/upgrade/#control-node","title":"Control Node","text":"

    Ensure the hardware that is to be used for the control node meets the system requirements. The upgrade process is destructive to the host, so ensure all needed data is removed from the selected server before the upgrade is started.

    "},{"location":"install-upgrade/upgrade/#management-network","title":"Management Network","text":"

    Beta-1 uses the RJ-45 management ports of the switches instead of front panel ports. A simple management network needs to be in place and cabled before the install of Beta-1. The control node will run a DHCP server on this network and must be the sole DHCP server. Do not co-mingle other services or equipment on this network; it is for the exclusive use of the control node and switches.

    "},{"location":"install-upgrade/upgrade/#install-switch-vendor-onie","title":"Install Switch Vendor ONIE","text":"

    Beta-1 uses the switch vendor's ONIE for installation of the NOS. Installing the latest vendor-provided version of ONIE is recommended. Hedgehog ONIE must not be used.

    "},{"location":"install-upgrade/upgrade/#changes-to-the-wiring-diagram","title":"Changes to the Wiring Diagram","text":""},{"location":"install-upgrade/upgrade/#install-the-control-node","title":"Install The Control Node","text":"

    Follow the instructions for installing the Beta-1 Fabric on a control node.

    "},{"location":"install-upgrade/upgrade/#install-nos-using-onie-nos-install-option","title":"Install NOS using ONIE NOS Install Option","text":"

    As the switches boot up, select the ONIE option from the GRUB screen, then select the "NOS Install" option. The install option causes the switch to begin searching for installation media, which is supplied by the control node.

    "},{"location":"reference/api/","title":"API Reference","text":""},{"location":"reference/api/#packages","title":"Packages","text":""},{"location":"reference/api/#agentgithedgehogcomv1beta1","title":"agent.githedgehog.com/v1beta1","text":"

    Package v1beta1 contains API Schema definitions for the agent v1beta1 API group. This is the internal API group for the switch and control node agents. Not intended to be modified by the user.

    "},{"location":"reference/api/#resource-types","title":"Resource Types","text":""},{"location":"reference/api/#adminstatus","title":"AdminStatus","text":"

    Underlying type: string

    Appears in: - SwitchStateInterface

    | Field | Description |
    |-------|-------------|
    | up | |
    | down | |
    | testing | |
    "},{"location":"reference/api/#agent","title":"Agent","text":"

    Agent is an internal API object used by the controller to pass all relevant information to the agent running on a specific switch in order to fully configure it and manage its lifecycle. It is not intended to be used directly by users. Spec of the object isn't user-editable; it is managed by the controller. Status of the object is updated by the agent and is used by the controller to track the state of the agent and the switch it is running on. Name of the Agent object is the same as the name of the switch it is running on, and it's created in the same namespace as the Switch object.

    | Field | Description | Default | Validation |
    |-------|-------------|---------|------------|
    | apiVersion string | agent.githedgehog.com/v1beta1 | | |
    | kind string | Agent | | |
    | metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata. | | |
    | status AgentStatus | Status is the observed state of the Agent | | |
    "},{"location":"reference/api/#agentstatus","title":"AgentStatus","text":"

    AgentStatus defines the observed state of the agent running on a specific switch and includes information about the switch itself as well as the state of the agent and applied configuration.

    Appears in: - Agent

    | Field | Description | Default | Validation |
    |-------|-------------|---------|------------|
    | version string | Current running agent version | | |
    | installID string | ID of the agent installation, used to track NOS re-installs | | |
    | runID string | ID of the agent run, used to track NOS reboots | | |
    | lastHeartbeat Time | Time of the last heartbeat from the agent | | |
    | lastAttemptTime Time | Time of the last attempt to apply configuration | | |
    | lastAttemptGen integer | Generation of the last attempt to apply configuration | | |
    | lastAppliedTime Time | Time of the last successful configuration application | | |
    | lastAppliedGen integer | Generation of the last successful configuration application | | |
    | state SwitchState | Detailed switch state updated with each heartbeat | | |
    | conditions Condition array | Conditions of the agent, includes readiness marker for use with kubectl wait | | |
    "},{"location":"reference/api/#bgpmessages","title":"BGPMessages","text":"

    Appears in: - SwitchStateBGPNeighbor

    Field Description Default Validation received BGPMessagesCounters sent BGPMessagesCounters"},{"location":"reference/api/#bgpmessagescounters","title":"BGPMessagesCounters","text":"

    Appears in: - BGPMessages

    Field Description Default Validation capability integer keepalive integer notification integer open integer routeRefresh integer update integer"},{"location":"reference/api/#bgpneighborsessionstate","title":"BGPNeighborSessionState","text":"

    Underlying type: string

    Appears in: - SwitchStateBGPNeighbor

    Field Description `` idle connect active openSent openConfirm established"},{"location":"reference/api/#bgppeertype","title":"BGPPeerType","text":"

    Underlying type: string

    Appears in: - SwitchStateBGPNeighbor

    Field Description `` internal external"},{"location":"reference/api/#operstatus","title":"OperStatus","text":"

    Underlying type: string

    Appears in: - SwitchStateInterface

    Field Description `` up down testing unknown dormant notPresent lowerLayerDown"},{"location":"reference/api/#switchstate","title":"SwitchState","text":"

    Appears in: - AgentStatus

    Field Description Default Validation nos SwitchStateNOS Information about the switch and NOS interfaces object (keys:string, values:SwitchStateInterface) Switch interfaces state (incl. physical, management and port channels) breakouts object (keys:string, values:SwitchStateBreakout) Breakout ports state (port -> breakout state) bgpNeighbors object (keys:string, values:map[string]SwitchStateBGPNeighbor) State of all BGP neighbors (VRF -> neighbor address -> state) platform SwitchStatePlatform State of the switch platform (fans, PSUs, sensors) criticalResources SwitchStateCRM State of the critical resources (ACLs, routes, etc.)"},{"location":"reference/api/#switchstatebgpneighbor","title":"SwitchStateBGPNeighbor","text":"

    Appears in: - SwitchState

    Field Description Default Validation connectionsDropped integer enabled boolean establishedTransitions integer lastEstablished Time lastRead Time lastResetReason string lastResetTime Time lastWrite Time localAS integer messages BGPMessages peerAS integer peerGroup string peerPort integer peerType BGPPeerType remoteRouterID string sessionState BGPNeighborSessionState shutdownMessage string prefixes object (keys:string, values:SwitchStateBGPNeighborPrefixes)"},{"location":"reference/api/#switchstatebgpneighborprefixes","title":"SwitchStateBGPNeighborPrefixes","text":"

    Appears in: - SwitchStateBGPNeighbor

    Field Description Default Validation received integer receivedPrePolicy integer sent integer"},{"location":"reference/api/#switchstatebreakout","title":"SwitchStateBreakout","text":"

    Appears in: - SwitchState

    Field Description Default Validation mode string nosMembers string array status string"},{"location":"reference/api/#switchstatecrm","title":"SwitchStateCRM","text":"

    Appears in: - SwitchState

    Field Description Default Validation aclStats SwitchStateCRMACLStats stats SwitchStateCRMStats"},{"location":"reference/api/#switchstatecrmacldetails","title":"SwitchStateCRMACLDetails","text":"

    Appears in: - SwitchStateCRMACLInfo

    Field Description Default Validation groupsAvailable integer groupsUsed integer tablesAvailable integer tablesUsed integer"},{"location":"reference/api/#switchstatecrmaclinfo","title":"SwitchStateCRMACLInfo","text":"

    Appears in: - SwitchStateCRMACLStats

    Field Description Default Validation lag SwitchStateCRMACLDetails port SwitchStateCRMACLDetails rif SwitchStateCRMACLDetails switch SwitchStateCRMACLDetails vlan SwitchStateCRMACLDetails"},{"location":"reference/api/#switchstatecrmaclstats","title":"SwitchStateCRMACLStats","text":"

    Appears in: - SwitchStateCRM

    Field Description Default Validation egress SwitchStateCRMACLInfo ingress SwitchStateCRMACLInfo"},{"location":"reference/api/#switchstatecrmstats","title":"SwitchStateCRMStats","text":"

    Appears in: - SwitchStateCRM

    Field Description Default Validation dnatEntriesAvailable integer dnatEntriesUsed integer fdbEntriesAvailable integer fdbEntriesUsed integer ipmcEntriesAvailable integer ipmcEntriesUsed integer ipv4NeighborsAvailable integer ipv4NeighborsUsed integer ipv4NexthopsAvailable integer ipv4NexthopsUsed integer ipv4RoutesAvailable integer ipv4RoutesUsed integer ipv6NeighborsAvailable integer ipv6NeighborsUsed integer ipv6NexthopsAvailable integer ipv6NexthopsUsed integer ipv6RoutesAvailable integer ipv6RoutesUsed integer nexthopGroupMembersAvailable integer nexthopGroupMembersUsed integer nexthopGroupsAvailable integer nexthopGroupsUsed integer snatEntriesAvailable integer snatEntriesUsed integer"},{"location":"reference/api/#switchstateinterface","title":"SwitchStateInterface","text":"

    Appears in: - SwitchState

    Field Description Default Validation enabled boolean adminStatus AdminStatus operStatus OperStatus mac string lastChanged Time speed string counters SwitchStateInterfaceCounters transceiver SwitchStateTransceiver lldpNeighbors SwitchStateLLDPNeighbor array"},{"location":"reference/api/#switchstateinterfacecounters","title":"SwitchStateInterfaceCounters","text":"

    Appears in: - SwitchStateInterface

    Field Description Default Validation inBitsPerSecond float inDiscards integer inErrors integer inPktsPerSecond float inUtilization integer lastClear Time outBitsPerSecond float outDiscards integer outErrors integer outPktsPerSecond float outUtilization integer"},{"location":"reference/api/#switchstatelldpneighbor","title":"SwitchStateLLDPNeighbor","text":"

    Appears in: - SwitchStateInterface

    Field Description Default Validation chassisID string systemName string systemDescription string portID string portDescription string manufacturer string model string serialNumber string"},{"location":"reference/api/#switchstatenos","title":"SwitchStateNOS","text":"

    SwitchStateNOS contains information about the switch and NOS received from the switch itself by the agent

    Appears in: - SwitchState

    Field Description Default Validation asicVersion string ASIC name, such as \"broadcom\" or \"vs\" buildCommit string NOS build commit buildDate string NOS build date builtBy string NOS build user configDbVersion string NOS config DB version, such as \"version_4_2_1\" distributionVersion string Distribution version, such as \"Debian 10.13\" hardwareVersion string Hardware version, such as \"X01\" hwskuVersion string Hwsku version, such as \"DellEMC-S5248f-P-25G-DPB\" kernelVersion string Kernel version, such as \"5.10.0-21-amd64\" mfgName string Manufacturer name, such as \"Dell EMC\" platformName string Platform name, such as \"x86_64-dellemc_s5248f_c3538-r0\" productDescription string NOS product description, such as \"Enterprise SONiC Distribution by Broadcom - Enterprise Base package\" productVersion string NOS product version, empty for Broadcom SONiC serialNumber string Switch serial number softwareVersion string NOS software version, such as \"4.2.0-Enterprise_Base\" uptime string Switch uptime, such as \"21:21:27 up 1 day, 23:26, 0 users, load average: 1.92, 1.99, 2.00 \""},{"location":"reference/api/#switchstateplatform","title":"SwitchStatePlatform","text":"

    Appears in: - SwitchState

    Field Description Default Validation fans object (keys:string, values:SwitchStatePlatformFan) psus object (keys:string, values:SwitchStatePlatformPSU) temperature object (keys:string, values:SwitchStatePlatformTemperature)"},{"location":"reference/api/#switchstateplatformfan","title":"SwitchStatePlatformFan","text":"

    Appears in: - SwitchStatePlatform

    Field Description Default Validation direction string speed float presence boolean status boolean"},{"location":"reference/api/#switchstateplatformpsu","title":"SwitchStatePlatformPSU","text":"

    Appears in: - SwitchStatePlatform

    Field Description Default Validation inputCurrent float inputPower float inputVoltage float outputCurrent float outputPower float outputVoltage float presence boolean status boolean"},{"location":"reference/api/#switchstateplatformtemperature","title":"SwitchStatePlatformTemperature","text":"

    Appears in: - SwitchStatePlatform

    Field Description Default Validation temperature float alarms string highThreshold float criticalHighThreshold float lowThreshold float criticalLowThreshold float"},{"location":"reference/api/#switchstatetransceiver","title":"SwitchStateTransceiver","text":"

    Appears in: - SwitchStateInterface

    Field Description Default Validation description string cableClass string formFactor string connectorType string present string cableLength float operStatus string temperature float voltage float serialNumber string vendor string vendorPart string vendorOUI string vendorRev string"},{"location":"reference/api/#dhcpgithedgehogcomv1beta1","title":"dhcp.githedgehog.com/v1beta1","text":"

    Package v1beta1 contains API Schema definitions for the dhcp v1beta1 API group. It is the primary internal API group for the Hedgehog DHCP server configuration; it stores leases and makes them available to the end user through the API. Not intended to be modified by the user.

    "},{"location":"reference/api/#resource-types_1","title":"Resource Types","text":""},{"location":"reference/api/#dhcpallocated","title":"DHCPAllocated","text":"

    DHCPAllocated is a single allocated IP with expiry time and hostname from DHCP requests; it is effectively a DHCP lease

    Appears in: - DHCPSubnetStatus

    Field Description Default Validation ip string Allocated IP address expiry Time Expiry time of the lease hostname string Hostname from DHCP request"},{"location":"reference/api/#dhcpsubnet","title":"DHCPSubnet","text":"

    DHCPSubnet is the configuration (spec) for the Hedgehog DHCP server and storage for the leases (status). It is part of a primary internal API group, but it makes allocated IP / lease information available to the end user through the API. Not intended to be modified by the user.
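
    For illustration only (this object is managed by the Fabric, not by users), a DHCPSubnet spec using the example values from its fields:

    spec:\n  subnet: vpc-0/default\n  cidrBlock: 10.10.10.0/24\n  gateway: 10.10.10.1\n  startIP: 10.10.10.10\n  endIP: 10.10.10.99\n  vrf: VrfVvpc-1\n  circuitID: Vlan1000\n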

    Field Description Default Validation apiVersion string dhcp.githedgehog.com/v1beta1 kind string DHCPSubnet metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec DHCPSubnetSpec Spec is the desired state of the DHCPSubnet status DHCPSubnetStatus Status is the observed state of the DHCPSubnet"},{"location":"reference/api/#dhcpsubnetspec","title":"DHCPSubnetSpec","text":"

    DHCPSubnetSpec defines the desired state of DHCPSubnet

    Appears in: - DHCPSubnet

    Field Description Default Validation subnet string Full VPC subnet name (including VPC name), such as \"vpc-0/default\" cidrBlock string CIDR block to use for VPC subnet, such as \"10.10.10.0/24\" gateway string Gateway, such as 10.10.10.1 startIP string Start IP from the CIDRBlock to allocate IPs, such as 10.10.10.10 endIP string End IP from the CIDRBlock to allocate IPs, such as 10.10.10.99 vrf string VRF name to identify specific VPC (will be added to DHCP packets by DHCP relay in suboption 151), such as \"VrfVvpc-1\" as it's named on switch circuitID string VLAN ID to identify specific subnet within the VPC, such as \"Vlan1000\" as it's named on switch pxeURL string PXEURL (optional) to identify the pxe server to use to boot hosts connected to this segment such as http://10.10.10.99/bootfilename or tftp://10.10.10.99/bootfilename, http query strings are not supported dnsServers string array DNSservers (optional) to configure Domain Name Servers for this particular segment such as: 10.10.10.1, 10.10.10.2 timeServers string array TimeServers (optional) NTP server addresses to configure for time servers for this particular segment such as: 10.10.10.1, 10.10.10.2 interfaceMTU integer InterfaceMTU (optional) is the MTU setting that the dhcp server will send to the clients. It is dependent on the client to honor this option. defaultURL string DefaultURL (optional) is the option 114 \"default-url\" to be sent to the clients"},{"location":"reference/api/#dhcpsubnetstatus","title":"DHCPSubnetStatus","text":"

    DHCPSubnetStatus defines the observed state of DHCPSubnet

    Appears in: - DHCPSubnet

    Field Description Default Validation allocated object (keys:string, values:DHCPAllocated) Allocated is a map of allocated IPs with expiry time and hostname from DHCP requests"},{"location":"reference/api/#vpcgithedgehogcomv1beta1","title":"vpc.githedgehog.com/v1beta1","text":"

    Package v1beta1 contains API Schema definitions for the vpc v1beta1 API group. It is a public API group for the VPC and External APIs. Intended to be used by the user.

    "},{"location":"reference/api/#resource-types_2","title":"Resource Types","text":""},{"location":"reference/api/#external","title":"External","text":"

    External object represents an external system connected to the Fabric and available to the specific IPv4Namespace. Users can peer with the external system by specifying the name of the External object, without needing to worry about the details of how the external system is attached to the Fabric.
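
    Minimal example of an External in the default IPv4Namespace (the community values are illustrative):

    spec:\n  ipv4Namespace: default\n  inboundCommunity: 65102:5000\n  outboundCommunity: 50000:50001\n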

    Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string External metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalSpec Spec is the desired state of the External status ExternalStatus Status is the observed state of the External"},{"location":"reference/api/#externalattachment","title":"ExternalAttachment","text":"

    ExternalAttachment defines how a specific switch is connected to an external system (External object). Effectively it represents a BGP peering between the switch and the external system, including all needed configuration.
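
    A sketch of an ExternalAttachment spec (the External name, connection name, VLAN, and addresses are hypothetical):

    spec:\n  external: external-1\n  connection: leaf-1--external\n  switch:\n    vlan: 100\n    ip: 192.168.100.1/24\n  neighbor:\n    asn: 65100\n    ip: 192.168.100.2\n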

    Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string ExternalAttachment metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalAttachmentSpec Spec is the desired state of the ExternalAttachment status ExternalAttachmentStatus Status is the observed state of the ExternalAttachment"},{"location":"reference/api/#externalattachmentneighbor","title":"ExternalAttachmentNeighbor","text":"

    ExternalAttachmentNeighbor defines the BGP neighbor configuration for the external attachment

    Appears in: - ExternalAttachmentSpec

    Field Description Default Validation asn integer ASN is the ASN of the BGP neighbor ip string IP is the IP address of the BGP neighbor to peer with"},{"location":"reference/api/#externalattachmentspec","title":"ExternalAttachmentSpec","text":"

    ExternalAttachmentSpec defines the desired state of ExternalAttachment

    Appears in: - ExternalAttachment

    Field Description Default Validation external string External is the name of the External object this attachment belongs to connection string Connection is the name of the Connection object this attachment belongs to (essentially the name of the switch/port) switch ExternalAttachmentSwitch Switch is the switch port configuration for the external attachment neighbor ExternalAttachmentNeighbor Neighbor is the BGP neighbor configuration for the external attachment"},{"location":"reference/api/#externalattachmentstatus","title":"ExternalAttachmentStatus","text":"

    ExternalAttachmentStatus defines the observed state of ExternalAttachment

    Appears in: - ExternalAttachment

    "},{"location":"reference/api/#externalattachmentswitch","title":"ExternalAttachmentSwitch","text":"

    ExternalAttachmentSwitch defines the switch port configuration for the external attachment

    Appears in: - ExternalAttachmentSpec

    Field Description Default Validation vlan integer VLAN (optional) is the VLAN ID used for the subinterface on a switch port specified in the connection, set to 0 if no VLAN is used ip string IP is the IP address of the subinterface on a switch port specified in the connection"},{"location":"reference/api/#externalpeering","title":"ExternalPeering","text":"

    ExternalPeering is the Schema for the externalpeerings API
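
    Minimal example of an ExternalPeering permitting the default subnet of vpc-1 and any route (including the default route) from a hypothetical External named external-1:

    spec:\n  permit:\n    vpc:\n      name: vpc-1\n      subnets:\n      - default\n    external:\n      name: external-1\n      prefixes:\n      - prefix: 0.0.0.0/0\n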

    Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string ExternalPeering metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalPeeringSpec Spec is the desired state of the ExternalPeering status ExternalPeeringStatus Status is the observed state of the ExternalPeering"},{"location":"reference/api/#externalpeeringspec","title":"ExternalPeeringSpec","text":"

    ExternalPeeringSpec defines the desired state of ExternalPeering

    Appears in: - ExternalPeering

    Field Description Default Validation permit ExternalPeeringSpecPermit Permit defines the peering policy - which VPC and External to peer with and which subnets/prefixes to permit"},{"location":"reference/api/#externalpeeringspecexternal","title":"ExternalPeeringSpecExternal","text":"

    ExternalPeeringSpecExternal defines the External-side of the configuration to peer with

    Appears in: - ExternalPeeringSpecPermit

    Field Description Default Validation name string Name is the name of the External to peer with prefixes ExternalPeeringSpecPrefix array Prefixes is the list of prefixes to permit from the External to the VPC"},{"location":"reference/api/#externalpeeringspecpermit","title":"ExternalPeeringSpecPermit","text":"

    ExternalPeeringSpecPermit defines the peering policy - which VPC and External to peer with and which subnets/prefixes to permit

    Appears in: - ExternalPeeringSpec

    Field Description Default Validation vpc ExternalPeeringSpecVPC VPC is the VPC-side of the configuration to peer with external ExternalPeeringSpecExternal External is the External-side of the configuration to peer with"},{"location":"reference/api/#externalpeeringspecprefix","title":"ExternalPeeringSpecPrefix","text":"

    ExternalPeeringSpecPrefix defines the prefix to permit from the External to the VPC

    Appears in: - ExternalPeeringSpecExternal

    Field Description Default Validation prefix string Prefix is the subnet to permit from the External to the VPC, e.g. 0.0.0.0/0 for any route including the default route. It matches any prefix length less than or equal to 32, effectively permitting all prefixes within the specified one."},{"location":"reference/api/#externalpeeringspecvpc","title":"ExternalPeeringSpecVPC","text":"

    ExternalPeeringSpecVPC defines the VPC-side of the configuration to peer with

    Appears in: - ExternalPeeringSpecPermit

    Field Description Default Validation name string Name is the name of the VPC to peer with subnets string array Subnets is the list of subnets to advertise from VPC to the External"},{"location":"reference/api/#externalpeeringstatus","title":"ExternalPeeringStatus","text":"

    ExternalPeeringStatus defines the observed state of ExternalPeering

    Appears in: - ExternalPeering

    "},{"location":"reference/api/#externalspec","title":"ExternalSpec","text":"

    ExternalSpec describes the IPv4Namespace the External belongs to and the inbound/outbound communities used to filter routes from/to the external system.

    Appears in: - External

    Field Description Default Validation ipv4Namespace string IPv4Namespace is the name of the IPv4Namespace this External belongs to inboundCommunity string InboundCommunity is the inbound community to filter routes from the external system (e.g. 65102:5000) outboundCommunity string OutboundCommunity is the outbound community that all outbound routes will be stamped with (e.g. 50000:50001)"},{"location":"reference/api/#externalstatus","title":"ExternalStatus","text":"

    ExternalStatus defines the observed state of External

    Appears in: - External

    "},{"location":"reference/api/#ipv4namespace","title":"IPv4Namespace","text":"

    IPv4Namespace represents a namespace for VPC subnet allocation. All VPC subnets within a single IPv4Namespace are non-overlapping. Users can create multiple IPv4Namespaces to allocate the same VPC subnets in different namespaces.
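
    Minimal example of an IPv4Namespace with a single (illustrative) pool to allocate VPC subnets from:

    spec:\n  subnets:\n  - 10.10.0.0/16\n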

    Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string IPv4Namespace metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec IPv4NamespaceSpec Spec is the desired state of the IPv4Namespace status IPv4NamespaceStatus Status is the observed state of the IPv4Namespace"},{"location":"reference/api/#ipv4namespacespec","title":"IPv4NamespaceSpec","text":"

    IPv4NamespaceSpec defines the desired state of IPv4Namespace

    Appears in: - IPv4Namespace

    Field Description Default Validation subnets string array Subnets is the list of subnets to allocate VPC subnets from; they must not overlap with each other or with Fabric reserved subnets MaxItems: 20 MinItems: 1"},{"location":"reference/api/#ipv4namespacestatus","title":"IPv4NamespaceStatus","text":"

    IPv4NamespaceStatus defines the observed state of IPv4Namespace

    Appears in: - IPv4Namespace

    "},{"location":"reference/api/#vpc","title":"VPC","text":"

    VPC is a Virtual Private Cloud; similar to a public cloud VPC, it provides an isolated private network for resources, with support for multiple subnets, each with user-provided VLANs and on-demand DHCP.
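
    Minimal example of a VPC with a single DHCP-enabled subnet (names, CIDR, and VLAN are illustrative):

    spec:\n  ipv4Namespace: default\n  vlanNamespace: default\n  subnets:\n    default:\n      subnet: 10.0.0.0/24\n      gateway: 10.0.0.1\n      vlan: 1000\n      dhcp:\n        enable: true\n        range:\n          start: 10.0.0.10\n          end: 10.0.0.99\n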

    Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string VPC metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCSpec Spec is the desired state of the VPC status VPCStatus Status is the observed state of the VPC"},{"location":"reference/api/#vpcattachment","title":"VPCAttachment","text":"

    VPCAttachment is the Schema for the vpcattachments API
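
    Minimal example of a VPCAttachment attaching subnet \"vpc-1/default\" to a connection (the connection name is hypothetical):

    spec:\n  subnet: vpc-1/default\n  connection: server-1--mclag--leaf-1--leaf-2\n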

    Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string VPCAttachment metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCAttachmentSpec Spec is the desired state of the VPCAttachment status VPCAttachmentStatus Status is the observed state of the VPCAttachment"},{"location":"reference/api/#vpcattachmentspec","title":"VPCAttachmentSpec","text":"

    VPCAttachmentSpec defines the desired state of VPCAttachment

    Appears in: - VPCAttachment

    Field Description Default Validation subnet string Subnet is the full name of the VPC subnet to attach to, such as \"vpc-1/default\" connection string Connection is the name of the connection to attach to the VPC nativeVLAN boolean NativeVLAN is the flag to indicate if the native VLAN should be used for attaching the VPC subnet"},{"location":"reference/api/#vpcattachmentstatus","title":"VPCAttachmentStatus","text":"

    VPCAttachmentStatus defines the observed state of VPCAttachment

    Appears in: - VPCAttachment

    "},{"location":"reference/api/#vpcdhcp","title":"VPCDHCP","text":"

    VPCDHCP defines the on-demand DHCP configuration for the subnet

    Appears in: - VPCSubnet

    Field Description Default Validation relay string Relay is the DHCP relay IP address; if specified, the DHCP server will be disabled enable boolean Enable enables the DHCP server for the subnet range VPCDHCPRange Range (optional) is the DHCP range for the subnet if the DHCP server is enabled options VPCDHCPOptions Options (optional) is the DHCP options for the subnet if the DHCP server is enabled"},{"location":"reference/api/#vpcdhcpoptions","title":"VPCDHCPOptions","text":"

    VPCDHCPOptions defines the DHCP options for the subnet if DHCP server is enabled

    Appears in: - VPCDHCP

    Field Description Default Validation pxeURL string PXEURL (optional) to identify the pxe server to use to boot hosts connected to this segment such as http://10.10.10.99/bootfilename or tftp://10.10.10.99/bootfilename, http query strings are not supported dnsServers string array DNSservers (optional) to configure Domain Name Servers for this particular segment such as: 10.10.10.1, 10.10.10.2 Optional: {} timeServers string array TimeServers (optional) NTP server addresses to configure for time servers for this particular segment such as: 10.10.10.1, 10.10.10.2 Optional: {} interfaceMTU integer InterfaceMTU (optional) is the MTU setting that the dhcp server will send to the clients. It is dependent on the client to honor this option."},{"location":"reference/api/#vpcdhcprange","title":"VPCDHCPRange","text":"

    VPCDHCPRange defines the DHCP range for the subnet if DHCP server is enabled

    Appears in: - VPCDHCP

    Field Description Default Validation start string Start is the start IP address of the DHCP range end string End is the end IP address of the DHCP range"},{"location":"reference/api/#vpcpeer","title":"VPCPeer","text":"

    Appears in: - VPCPeeringSpec

    Field Description Default Validation subnets string array Subnets is the list of subnets to advertise from current VPC to the peer VPC MaxItems: 10 MinItems: 1"},{"location":"reference/api/#vpcpeering","title":"VPCPeering","text":"

    VPCPeering represents a peering between two VPCs with corresponding filtering rules. Minimal example of the VPC peering showing vpc-1 to vpc-2 peering with all subnets allowed:

    spec:\n  permit:\n  - vpc-1: {}\n    vpc-2: {}\n
    Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string VPCPeering metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCPeeringSpec Spec is the desired state of the VPCPeering status VPCPeeringStatus Status is the observed state of the VPCPeering"},{"location":"reference/api/#vpcpeeringspec","title":"VPCPeeringSpec","text":"

    VPCPeeringSpec defines the desired state of VPCPeering

    Appears in: - VPCPeering

    Field Description Default Validation remote string permit map[string]VPCPeer array Permit defines a list of the peering policies - which VPC subnets will have access to the peer VPC subnets. MaxItems: 10 MinItems: 1"},{"location":"reference/api/#vpcpeeringstatus","title":"VPCPeeringStatus","text":"

    VPCPeeringStatus defines the observed state of VPCPeering

    Appears in: - VPCPeering

    "},{"location":"reference/api/#vpcspec","title":"VPCSpec","text":"

    VPCSpec defines the desired state of VPC. At least one subnet is required.

    Appears in: - VPC

    Field Description Default Validation subnets object (keys:string, values:VPCSubnet) Subnets is the list of VPC subnets to configure ipv4Namespace string IPv4Namespace is the name of the IPv4Namespace this VPC belongs to (if not specified, \"default\" is used) vlanNamespace string VLANNamespace is the name of the VLANNamespace this VPC belongs to (if not specified, \"default\" is used) defaultIsolated boolean DefaultIsolated sets default behavior for isolated mode for the subnets (disabled by default) defaultRestricted boolean DefaultRestricted sets default behavior for restricted mode for the subnets (disabled by default) permit string array array Permit defines a list of the access policies between the subnets within the VPC - each policy is a list of subnets that have access to each other. It is applied on top of the subnet isolation flag: a subnet that isn't isolated doesn't need to be in a permit list, while a subnet marked as isolated must be in a permit list to have access to other subnets. staticRoutes VPCStaticRoute array StaticRoutes is the list of additional static routes for the VPC"},{"location":"reference/api/#vpcstaticroute","title":"VPCStaticRoute","text":"

    VPCStaticRoute defines the static route for the VPC
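
    For example, as part of a VPCSpec, using the example values from its fields:

    spec:\n  staticRoutes:\n  - prefix: 10.42.0.0/24\n    nextHops:\n    - 10.99.0.0\n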

    Appears in: - VPCSpec

    Field Description Default Validation prefix string Prefix for the static route (mandatory), e.g. 10.42.0.0/24 nextHops string array NextHops for the static route (at least one is required), e.g. 10.99.0.0"},{"location":"reference/api/#vpcstatus","title":"VPCStatus","text":"

    VPCStatus defines the observed state of VPC

    Appears in: - VPC

    "},{"location":"reference/api/#vpcsubnet","title":"VPCSubnet","text":"

    VPCSubnet defines the VPC subnet configuration

    Appears in: - VPCSpec

    Field Description Default Validation subnet string Subnet is the subnet CIDR block, such as \"10.0.0.0/24\", should belong to the IPv4Namespace and be unique within the namespace gateway string Gateway (optional) for the subnet, if not specified, the first IP (e.g. 10.0.0.1) in the subnet is used as the gateway dhcp VPCDHCP DHCP is the on-demand DHCP configuration for the subnet vlan integer VLAN is the VLAN ID for the subnet, should belong to the VLANNamespace and be unique within the namespace isolated boolean Isolated is the flag to enable isolated mode for the subnet which means no access to and from the other subnets within the VPC restricted boolean Restricted is the flag to enable restricted mode for the subnet which means no access between hosts within the subnet itself"},{"location":"reference/api/#wiringgithedgehogcomv1beta1","title":"wiring.githedgehog.com/v1beta1","text":"

    Package v1beta1 contains API Schema definitions for the wiring v1beta1 API group. It is a public API group mainly for the underlay definition, including Switches, Servers, and the wiring between them. Intended to be used by the user.

    "},{"location":"reference/api/#resource-types_3","title":"Resource Types","text":""},{"location":"reference/api/#baseportname","title":"BasePortName","text":"

    BasePortName defines the full name of the switch port

    Appears in: - ConnExternalLink - ConnFabricLinkSwitch - ConnStaticExternalLinkSwitch - ServerToSwitchLink - SwitchToSwitchLink

    Field Description Default Validation port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object."},{"location":"reference/api/#connbundled","title":"ConnBundled","text":"

    ConnBundled defines the bundled connection (port channel, single server to a single switch with multiple links)

    Appears in: - ConnectionSpec

    Field Description Default Validation links ServerToSwitchLink array Links is the list of server-to-switch links mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#conneslag","title":"ConnESLAG","text":"

    ConnESLAG defines the ESLAG connection (port channel, single server to 2-4 switches with multiple links)

    Appears in: - ConnectionSpec

    Field Description Default Validation links ServerToSwitchLink array Links is the list of server-to-switch links MinItems: 2 mtu integer MTU is the MTU to be configured on the switch port or port channel fallback boolean Fallback is an optional flag used to indicate that one of the links in the LACP port channel should be used as a fallback link"},{"location":"reference/api/#connexternal","title":"ConnExternal","text":"

    ConnExternal defines the external connection (single switch to a single external device with a single link)

    Appears in: - ConnectionSpec

    Field Description Default Validation link ConnExternalLink Link is the external connection link"},{"location":"reference/api/#connexternallink","title":"ConnExternalLink","text":"

    ConnExternalLink defines the external connection link

    Appears in: - ConnExternal

    Field Description Default Validation switch BasePortName"},{"location":"reference/api/#connfabric","title":"ConnFabric","text":"

    ConnFabric defines the fabric connection (single spine to a single leaf with at least one link)

    Appears in: - ConnectionSpec

    Field Description Default Validation links FabricLink array Links is the list of spine-to-leaf links MinItems: 1"},{"location":"reference/api/#connfabriclinkswitch","title":"ConnFabricLinkSwitch","text":"

    ConnFabricLinkSwitch defines the switch side of the fabric link

    Appears in: - FabricLink

    Field Description Default Validation port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object. ip string IP is the IP address of the switch side of the fabric link (switch port configuration) Pattern: ^((25[0-5]\\|(2[0-4]\\|1\\d\\|[1-9]\\|)\\d)\\.?\\b)\\{4\\}/([1-2]?[0-9]\\|3[0-2])$"},{"location":"reference/api/#connmclag","title":"ConnMCLAG","text":"

    ConnMCLAG defines the MCLAG connection (port channel, single server to pair of switches with multiple links)

    Appears in: - ConnectionSpec

    Field Description Default Validation links ServerToSwitchLink array Links is the list of server-to-switch links MinItems: 2 mtu integer MTU is the MTU to be configured on the switch port or port channel fallback boolean Fallback is the optional flag used to indicate that one of the links in the LACP port channel should be used as a fallback link"},{"location":"reference/api/#connmclagdomain","title":"ConnMCLAGDomain","text":"

    ConnMCLAGDomain defines the MCLAG domain connection, which makes two switches into a single logical switch or redundancy group and allows the use of MCLAG connections to connect servers in a multi-homed way.

    Appears in: - ConnectionSpec
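As an illustrative sketch only (switch names and ports are hypothetical, not from this document), an MCLAG domain Connection with separate peer and session links between the pair might look like:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: leaf-01--mclag-domain--leaf-02   # hypothetical name
spec:
  mclagDomain:
    peerLinks:                           # carry server traffic between the switches
      - switch1:
          port: leaf-01/Ethernet10
        switch2:
          port: leaf-02/Ethernet10
    sessionLinks:                        # carry only MCLAG control plane and BGP traffic
      - switch1:
          port: leaf-01/Ethernet11
        switch2:
          port: leaf-02/Ethernet11
```

Both lists require at least one link; each entry is a SwitchToSwitchLink with `switch1` and `switch2` sides.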

    Field Description Default Validation peerLinks SwitchToSwitchLink array PeerLinks is the list of peer links between the switches, used to pass server traffic between the switches MinItems: 1 sessionLinks SwitchToSwitchLink array SessionLinks is the list of session links between the switches, used only to pass MCLAG control plane and BGP traffic between the switches MinItems: 1"},{"location":"reference/api/#connstaticexternal","title":"ConnStaticExternal","text":"

    ConnStaticExternal defines the static external connection (single switch to a single external device with a single link)

    Appears in: - ConnectionSpec

    Field Description Default Validation link ConnStaticExternalLink Link is the static external connection link withinVPC string WithinVPC is the optional VPC name to provision the static external connection within the VPC VRF instead of default one to make resource available to the specific VPC"},{"location":"reference/api/#connstaticexternallink","title":"ConnStaticExternalLink","text":"

    ConnStaticExternalLink defines the static external connection link

    Appears in: - ConnStaticExternal

    Field Description Default Validation switch ConnStaticExternalLinkSwitch Switch is the switch side of the static external connection link"},{"location":"reference/api/#connstaticexternallinkswitch","title":"ConnStaticExternalLinkSwitch","text":"

    ConnStaticExternalLinkSwitch defines the switch side of the static external connection link

    Appears in: - ConnStaticExternalLink
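Pulling the ConnStaticExternal and ConnStaticExternalLinkSwitch fields together, a hypothetical static external connection could be sketched as follows (all names, addresses, and ports are illustrative, not from this document):

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: leaf-01--static-external         # hypothetical name
spec:
  staticExternal:
    withinVPC: vpc-1                     # optional; provisions within that VPC's VRF
    link:
      switch:
        port: leaf-01/Ethernet50         # "device/port" using the SONiC port name
        ip: 172.31.1.2/24                # switch-side IP of the link
        vlan: 100                        # optional VLAN ID on the switch port
        nextHop: 172.31.1.1              # next hop for the generated static routes
        subnets:
          - 10.99.0.0/24                 # gets a static route via nextHop
```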

    Field Description Default Validation port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object. ip string IP is the IP address of the switch side of the static external connection link (switch port configuration) Pattern: ^((25[0-5]\\|(2[0-4]\\|1\d\\|[1-9]\\|)\d)\.?\b)\{4\}/([1-2]?[0-9]\\|3[0-2])$ nextHop string NextHop is the next hop IP address for static routes that will be created for the subnets Pattern: ^((25[0-5]\\|(2[0-4]\\|1\d\\|[1-9]\\|)\d)\.?\b)\{4\}$ subnets string array Subnets is the list of subnets that will get static routes using the specified next hop vlan integer VLAN is the optional VLAN ID to be configured on the switch port"},{"location":"reference/api/#connunbundled","title":"ConnUnbundled","text":"

    ConnUnbundled defines the unbundled connection (no port channel, single server to a single switch with a single link)

    Appears in: - ConnectionSpec

    Field Description Default Validation link ServerToSwitchLink Link is the server-to-switch link mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#connvpcloopback","title":"ConnVPCLoopback","text":"

    ConnVPCLoopback defines the VPC loopback connection (multiple port pairs on a single switch) that enables an automated workaround named \"VPC Loopback\", which allows avoiding switch hardware limitations and traffic going through the CPU in some cases

    Appears in: - ConnectionSpec

    Field Description Default Validation links SwitchToSwitchLink array Links is the list of VPC loopback links MinItems: 1"},{"location":"reference/api/#connection","title":"Connection","text":"

    Connection object represents logical and physical connections between any devices in the Fabric (Switch, Server and External objects). It's needed to define all physical and logical connections between the devices in the Wiring Diagram. The connection type is defined by the top-level field in the ConnectionSpec. Exactly one of them can be used in a single Connection object.

    Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string Connection metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ConnectionSpec Spec is the desired state of the Connection status ConnectionStatus Status is the observed state of the Connection"},{"location":"reference/api/#connectionspec","title":"ConnectionSpec","text":"

    ConnectionSpec defines the desired state of Connection

    Appears in: - Connection
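As a minimal sketch of how exactly one connection type is set per object (server, switch, and port names are hypothetical), an unbundled Connection could look like:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: server-01--unbundled--leaf-01    # hypothetical name
spec:
  unbundled:                             # exactly one top-level connection type is set
    link:
      server:
        port: server-01/enp2s1           # hypothetical server NIC name
      switch:
        port: leaf-01/Ethernet1          # SONiC port name on the Switch named leaf-01
```

Swapping `unbundled` for `bundled`, `mclag`, `eslag`, etc. selects a different connection type; the nested fields follow the corresponding Conn* structure.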

    Field Description Default Validation unbundled ConnUnbundled Unbundled defines the unbundled connection (no port channel, single server to a single switch with a single link) bundled ConnBundled Bundled defines the bundled connection (port channel, single server to a single switch with multiple links) mclag ConnMCLAG MCLAG defines the MCLAG connection (port channel, single server to pair of switches with multiple links) eslag ConnESLAG ESLAG defines the ESLAG connection (port channel, single server to 2-4 switches with multiple links) mclagDomain ConnMCLAGDomain MCLAGDomain defines the MCLAG domain connection which makes two switches into a single logical switch for server multi-homing fabric ConnFabric Fabric defines the fabric connection (single spine to a single leaf with at least one link) vpcLoopback ConnVPCLoopback VPCLoopback defines the VPC loopback connection (multiple port pairs on a single switch) for automated workaround external ConnExternal External defines the external connection (single switch to a single external device with a single link) staticExternal ConnStaticExternal StaticExternal defines the static external connection (single switch to a single external device with a single link)"},{"location":"reference/api/#connectionstatus","title":"ConnectionStatus","text":"

    ConnectionStatus defines the observed state of Connection

    Appears in: - Connection

    "},{"location":"reference/api/#fabriclink","title":"FabricLink","text":"

    FabricLink defines the fabric connection link

    Appears in: - ConnFabric

    Field Description Default Validation spine ConnFabricLinkSwitch Spine is the spine side of the fabric link leaf ConnFabricLinkSwitch Leaf is the leaf side of the fabric link"},{"location":"reference/api/#server","title":"Server","text":"

    Server is the Schema for the servers API

    Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string Server metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ServerSpec Spec is desired state of the server status ServerStatus Status is the observed state of the server"},{"location":"reference/api/#serverfacingconnectionconfig","title":"ServerFacingConnectionConfig","text":"

    ServerFacingConnectionConfig defines any server-facing connection (unbundled, bundled, mclag, etc.) configuration

    Appears in: - ConnBundled - ConnESLAG - ConnMCLAG - ConnUnbundled

    Field Description Default Validation mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#serverspec","title":"ServerSpec","text":"

    ServerSpec defines the desired state of Server

    Appears in: - Server

    Field Description Default Validation description string Description is a description of the server profile string Profile is the profile of the server, name of the ServerProfile object to be used for this server, currently not used by the Fabric"},{"location":"reference/api/#serverstatus","title":"ServerStatus","text":"

    ServerStatus defines the observed state of Server

    Appears in: - Server

    "},{"location":"reference/api/#servertoswitchlink","title":"ServerToSwitchLink","text":"

    ServerToSwitchLink defines the server-to-switch link

    Appears in: - ConnBundled - ConnESLAG - ConnMCLAG - ConnUnbundled

    Field Description Default Validation server BasePortName Server is the server side of the connection switch BasePortName Switch is the switch side of the connection"},{"location":"reference/api/#switch","title":"Switch","text":"

    Switch is the Schema for the switches API

    Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string Switch metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec SwitchSpec Spec is desired state of the switch status SwitchStatus Status is the observed state of the switch"},{"location":"reference/api/#switchboot","title":"SwitchBoot","text":"

    Appears in: - SwitchSpec

    Field Description Default Validation serial string Identify switch by serial number mac string Identify switch by MAC address of the management port"},{"location":"reference/api/#switchgroup","title":"SwitchGroup","text":"

    SwitchGroup is the marker API object to group switches together; a switch can belong to multiple groups

    Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string SwitchGroup metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec SwitchGroupSpec Spec is the desired state of the SwitchGroup status SwitchGroupStatus Status is the observed state of the SwitchGroup"},{"location":"reference/api/#switchgroupspec","title":"SwitchGroupSpec","text":"

    SwitchGroupSpec defines the desired state of SwitchGroup

    Appears in: - SwitchGroup

    "},{"location":"reference/api/#switchgroupstatus","title":"SwitchGroupStatus","text":"

    SwitchGroupStatus defines the observed state of SwitchGroup

    Appears in: - SwitchGroup

    "},{"location":"reference/api/#switchprofile","title":"SwitchProfile","text":"

    SwitchProfile represents switch capabilities and configuration

    Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string SwitchProfile metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec SwitchProfileSpec status SwitchProfileStatus"},{"location":"reference/api/#switchprofileconfig","title":"SwitchProfileConfig","text":"

    Defines switch-specific configuration options

    Appears in: - SwitchProfileSpec

    Field Description Default Validation maxPathsEBGP integer MaxPathsEBGP defines the maximum number of EBGP paths to be configured"},{"location":"reference/api/#switchprofilefeatures","title":"SwitchProfileFeatures","text":"

    Defines the features supported by a specific switch, which are later used for validation of roles and Fabric API feature usage

    Appears in: - SwitchProfileSpec

    Field Description Default Validation subinterfaces boolean Subinterfaces defines if switch supports subinterfaces vxlan boolean VXLAN defines if switch supports VXLANs acls boolean ACLs defines if switch supports ACLs"},{"location":"reference/api/#switchprofileport","title":"SwitchProfilePort","text":"

    Defines a switch port configuration. Only one of Profile or Group can be set

    Appears in: - SwitchProfileSpec

    Field Description Default Validation nos string NOSName defines how port is named in the NOS baseNOSName string BaseNOSName defines the base NOS name that could be used together with the profile to generate the actual NOS name (e.g. breakouts) label string Label defines the physical port label you can see on the actual switch group string If port isn't directly manageable, group defines the group it belongs to, exclusive with profile profile string If port is directly configurable, profile defines the profile it belongs to, exclusive with group management boolean Management defines if port is a management port, it's a special case and it can't have a group or profile oniePortName string OniePortName defines the ONIE port name for management ports only"},{"location":"reference/api/#switchprofileportgroup","title":"SwitchProfilePortGroup","text":"

    Defines a switch port group configuration

    Appears in: - SwitchProfileSpec

    Field Description Default Validation nos string NOSName defines how group is named in the NOS profile string Profile defines the possible configuration profile for the group, could only have speed profile"},{"location":"reference/api/#switchprofileportprofile","title":"SwitchProfilePortProfile","text":"

    Defines a switch port profile configuration

    Appears in: - SwitchProfileSpec

    Field Description Default Validation speed SwitchProfilePortProfileSpeed Speed defines the speed configuration for the profile, exclusive with breakout breakout SwitchProfilePortProfileBreakout Breakout defines the breakout configuration for the profile, exclusive with speed autoNegAllowed boolean AutoNegAllowed defines if configuring auto-negotiation is allowed for the port autoNegDefault boolean AutoNegDefault defines the default auto-negotiation state for the port"},{"location":"reference/api/#switchprofileportprofilebreakout","title":"SwitchProfilePortProfileBreakout","text":"

    Defines a switch port profile breakout configuration

    Appears in: - SwitchProfilePortProfile

    Field Description Default Validation default string Default defines the default breakout mode for the profile supported object (keys:string, values:SwitchProfilePortProfileBreakoutMode) Supported defines the supported breakout modes for the profile with the NOS name offsets"},{"location":"reference/api/#switchprofileportprofilebreakoutmode","title":"SwitchProfilePortProfileBreakoutMode","text":"

    Defines a switch port profile breakout mode configuration

    Appears in: - SwitchProfilePortProfileBreakout

    Field Description Default Validation offsets string array Offsets defines the breakout NOS port name offset from the port NOS Name for each breakout mode"},{"location":"reference/api/#switchprofileportprofilespeed","title":"SwitchProfilePortProfileSpeed","text":"

    Defines a switch port profile speed configuration

    Appears in: - SwitchProfilePortProfile

    Field Description Default Validation default string Default defines the default speed for the profile supported string array Supported defines the supported speeds for the profile"},{"location":"reference/api/#switchprofilespec","title":"SwitchProfileSpec","text":"

    SwitchProfileSpec defines the desired state of SwitchProfile

    Appears in: - SwitchProfile

    Field Description Default Validation displayName string DisplayName defines the human-readable name of the switch otherNames string array OtherNames defines alternative names for the switch features SwitchProfileFeatures Features defines the features supported by the switch config SwitchProfileConfig Config defines the switch-specific configuration options ports object (keys:string, values:SwitchProfilePort) Ports defines the switch port configuration portGroups object (keys:string, values:SwitchProfilePortGroup) PortGroups defines the switch port group configuration portProfiles object (keys:string, values:SwitchProfilePortProfile) PortProfiles defines the switch port profile configuration nosType NOSType NOSType defines the NOS type to be used for the switch platform string Platform is what is expected to be requested by ONIE and displayed in the NOS"},{"location":"reference/api/#switchprofilestatus","title":"SwitchProfileStatus","text":"

    SwitchProfileStatus defines the observed state of SwitchProfile

    Appears in: - SwitchProfile

    "},{"location":"reference/api/#switchredundancy","title":"SwitchRedundancy","text":"

    SwitchRedundancy is the switch redundancy configuration, which includes the name of the redundancy group the switch belongs to and its type, used for both MCLAG and ESLAG connections. It defines how redundancy will be configured and handled on the switch as well as which connection types will be available. If not specified, the switch will not be part of any redundancy group. If the name isn't empty, the type must be specified as well, and the name should be the same as one of the SwitchGroup objects.

    Appears in: - SwitchSpec

    Field Description Default Validation group string Group is the name of the redundancy group switch belongs to type RedundancyType Type is the type of the redundancy group, could be mclag or eslag"},{"location":"reference/api/#switchrole","title":"SwitchRole","text":"

    Underlying type: string

    SwitchRole is the role of the switch; it can be spine, server-leaf, border-leaf, or mixed-leaf

    Validation: - Enum: [spine server-leaf border-leaf mixed-leaf virtual-edge]

    Appears in: - SwitchSpec

    Field Description spine server-leaf border-leaf mixed-leaf virtual-edge"},{"location":"reference/api/#switchspec","title":"SwitchSpec","text":"

    SwitchSpec defines the desired state of Switch

    Appears in: - Switch
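To make the field list below concrete, here is a hypothetical sketch of a Switch object (the name, serial number, group name, and port values are illustrative assumptions; the breakout and group-speed key formats follow the examples given in the field descriptions):

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Switch
metadata:
  name: leaf-01                # hypothetical switch name
spec:
  role: server-leaf
  description: left leaf of an MCLAG pair
  profile: dell-s5248f-on      # a SwitchProfile name from the catalog below
  redundancy:
    group: mclag-1             # hypothetical; should match a SwitchGroup object name
    type: mclag
  vlanNamespaces:
    - default
  portGroupSpeeds:
    "2": 10G                   # port group name -> speed
  portBreakouts:
    "1/55": 4x25G              # port name -> breakout configuration
  boot:
    serial: ABC123XYZ          # hypothetical serial number for provisioning
```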

    Field Description Default Validation role SwitchRole Role is the role of the switch, could be spine, server-leaf or border-leaf or mixed-leaf Enum: [spine server-leaf border-leaf mixed-leaf virtual-edge] Required: {} description string Description is a description of the switch profile string Profile is the profile of the switch, name of the SwitchProfile object to be used for this switch, currently not used by the Fabric groups string array Groups is a list of switch groups the switch belongs to redundancy SwitchRedundancy Redundancy is the switch redundancy configuration including name of the redundancy group switch belongs to and its type, used both for MCLAG and ESLAG connections vlanNamespaces string array VLANNamespaces is a list of VLAN namespaces the switch is part of, their VLAN ranges could not overlap asn integer ASN is the ASN of the switch ip string IP is the IP of the switch that could be used to access it from other switches and control nodes in the Fabric vtepIP string VTEPIP is the VTEP IP of the switch protocolIP string ProtocolIP is used as BGP Router ID for switch configuration portGroupSpeeds object (keys:string, values:string) PortGroupSpeeds is a map of port group speeds, key is the port group name, value is the speed, such as '\"2\": 10G' portSpeeds object (keys:string, values:string) PortSpeeds is a map of port speeds, key is the port name, value is the speed portBreakouts object (keys:string, values:string) PortBreakouts is a map of port breakouts, key is the port name, value is the breakout configuration, such as \"1/55: 4x25G\" portAutoNegs object (keys:string, values:boolean) PortAutoNegs is a map of port auto negotiation, key is the port name, value is true or false boot SwitchBoot Boot is the boot/provisioning information of the switch"},{"location":"reference/api/#switchstatus","title":"SwitchStatus","text":"

    SwitchStatus defines the observed state of Switch

    Appears in: - Switch

    "},{"location":"reference/api/#switchtoswitchlink","title":"SwitchToSwitchLink","text":"

    SwitchToSwitchLink defines the switch-to-switch link

    Appears in: - ConnMCLAGDomain - ConnVPCLoopback

    Field Description Default Validation switch1 BasePortName Switch1 is the first switch side of the connection switch2 BasePortName Switch2 is the second switch side of the connection"},{"location":"reference/api/#vlannamespace","title":"VLANNamespace","text":"

    VLANNamespace is the Schema for the vlannamespaces API

    Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string VLANNamespace metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VLANNamespaceSpec Spec is the desired state of the VLANNamespace status VLANNamespaceStatus Status is the observed state of the VLANNamespace"},{"location":"reference/api/#vlannamespacespec","title":"VLANNamespaceSpec","text":"

    VLANNamespaceSpec defines the desired state of VLANNamespace

    Appears in: - VLANNamespace
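A minimal VLANNamespace sketch is shown below; note that the VLANRange entry fields are not detailed in this section, so the `from`/`to` keys here are an assumption:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: VLANNamespace
metadata:
  name: default
spec:
  ranges:            # 1-20 ranges; must not overlap each other or Fabric-reserved VLANs
    - from: 1000     # field names assumed for illustration
      to: 2999
```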

    Field Description Default Validation ranges VLANRange array Ranges is a list of VLAN ranges to be used in this namespace, couldn't overlap between each other and with Fabric reserved VLAN ranges MaxItems: 20 MinItems: 1"},{"location":"reference/api/#vlannamespacestatus","title":"VLANNamespaceStatus","text":"

    VLANNamespaceStatus defines the observed state of VLANNamespace

    Appears in: - VLANNamespace

    "},{"location":"reference/cli/","title":"Fabric CLI","text":"

    Under construction.

    Currently, Fabric CLI is represented by a kubectl plugin, kubectl-fabric, automatically installed on the Control Node. It is a wrapper around kubectl and the Kubernetes client that allows managing Fabric resources in a more convenient way. Fabric CLI provides only a subset of the functionality available via the Fabric API and is focused on simplifying object creation and some manipulation of already existing objects, while the main get/list/update operations are expected to be done using kubectl.

    core@control-1 ~ $ kubectl fabric\nNAME:\n   kubectl fabric - Hedgehog Fabric API kubectl plugin\n\nUSAGE:\n   kubectl fabric [global options] command [command options]\n\nVERSION:\n   v0.53.1\n\nCOMMANDS:\n   vpc               VPC commands\n   switch, sw        Switch commands\n   connection, conn  Connection commands\n   switchgroup, sg   SwitchGroup commands\n   external, ext     External commands\n   inspect, i        Inspect Fabric API Objects and Primitives\n   help, h           Shows a list of commands or help for one command\n\nGLOBAL OPTIONS:\n   --verbose, -v  verbose output (includes debug) (default: true)\n   --help, -h     show help\n   --version, -V  print the version\n
    "},{"location":"reference/cli/#vpc","title":"VPC","text":"

    Create a VPC named vpc-1 with subnet 10.0.1.0/24 and VLAN 1001, with DHCP enabled and an optional DHCP range starting from 10.0.1.10:

    core@control-1 ~ $ kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10\n

    Attach previously created VPC to the server server-01 (which is connected to the Fabric using the server-01--mclag--leaf-01--leaf-02 Connection):

    core@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02\n

    To peer a VPC with another VPC (e.g. vpc-2), use the following command:

    core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2\n
    "},{"location":"reference/profiles/","title":"Switch Profiles Catalog","text":"

    The following is a list of all supported switches. Please make sure to use the version of the documentation that matches your environment to get an up-to-date list of supported switches, their features, and their port naming schemes.

    "},{"location":"reference/profiles/#celestica-ds3000","title":"Celestica DS3000","text":"

    Profile Name (to use in switch.spec.profile): celestica-ds3000

    Supported features:

    Available Ports:

    Label column is a port label on a physical switch.

    Port Label Type Group Default Supported M1 Management E1/1 1 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/2 2 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/3 3 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/4 4 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/5 5 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/6 6 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/7 7 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/8 8 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/9 9 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/10 10 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/11 11 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/12 12 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/13 13 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/14 14 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/15 15 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/16 16 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/17 17 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/18 18 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/19 19 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/20 20 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/21 21 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/22 22 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/23 23 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/24 24 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/25 25 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/26 26 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/27 27 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/28 28 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/29 29 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/30 30 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/31 31 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/32 32 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/33 33 Direct 10G 1G, 10G"},{"location":"reference/profiles/#celestica-ds4000","title":"Celestica DS4000","text":"

    Profile Name (to use in switch.spec.profile): celestica-ds4000

    Supported features:

    Available Ports:

    Label column is a port label on a physical switch.

    Port Label Type Group Default Supported M1 Management E1/1 1 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/2 2 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/3 3 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/4 4 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/5 5 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/6 6 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/7 7 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/8 8 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/9 9 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/10 10 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/11 11 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/12 12 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/13 13 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/14 14 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/15 15 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/16 16 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 
1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/17 17 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/18 18 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/19 19 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/20 20 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/21 21 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/22 22 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/23 23 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/24 24 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/25 25 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/26 26 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/27 27 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/28 28 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/29 29 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/30 30 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/31 31 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/32 32 Breakout 
1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/33 33 Direct 10G 1G, 10G"},{"location":"reference/profiles/#dell-s5232f-on","title":"Dell S5232F-ON","text":"

    Profile Name (to use in switch.spec.profile): dell-s5232f-on

    Supported features:

    Available Ports:

    Label column is a port label on a physical switch.

    Port Label Type Group Default Supported M1 Management E1/1 1 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/2 2 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/3 3 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/4 4 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/5 5 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/6 6 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/7 7 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/8 8 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/9 9 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/10 10 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/11 11 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/12 12 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/13 13 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/14 14 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/15 15 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/16 16 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/17 17 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/18 18 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/19 19 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/20 20 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/21 21 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/22 22 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/23 23 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/24 24 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/25 25 
Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/26 26 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/27 27 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/28 28 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/29 29 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/30 30 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/31 31 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/32 32 Direct 100G 40G, 100G E1/33 33 Direct 10G 1G, 10G E1/34 34 Direct 10G 1G, 10G"},{"location":"reference/profiles/#dell-s5248f-on","title":"Dell S5248F-ON","text":"

    Profile Name (to use in switch.spec.profile): dell-s5248f-on

    Supported features:

    Available Ports:

    Label column is a port label on a physical switch.

    Port Label Type Group Default Supported M1 Management E1/1 1 Port Group 1 25G 10G, 25G E1/2 2 Port Group 1 25G 10G, 25G E1/3 3 Port Group 1 25G 10G, 25G E1/4 4 Port Group 1 25G 10G, 25G E1/5 5 Port Group 2 25G 10G, 25G E1/6 6 Port Group 2 25G 10G, 25G E1/7 7 Port Group 2 25G 10G, 25G E1/8 8 Port Group 2 25G 10G, 25G E1/9 9 Port Group 3 25G 10G, 25G E1/10 10 Port Group 3 25G 10G, 25G E1/11 11 Port Group 3 25G 10G, 25G E1/12 12 Port Group 3 25G 10G, 25G E1/13 13 Port Group 4 25G 10G, 25G E1/14 14 Port Group 4 25G 10G, 25G E1/15 15 Port Group 4 25G 10G, 25G E1/16 16 Port Group 4 25G 10G, 25G E1/17 17 Port Group 5 25G 10G, 25G E1/18 18 Port Group 5 25G 10G, 25G E1/19 19 Port Group 5 25G 10G, 25G E1/20 20 Port Group 5 25G 10G, 25G E1/21 21 Port Group 6 25G 10G, 25G E1/22 22 Port Group 6 25G 10G, 25G E1/23 23 Port Group 6 25G 10G, 25G E1/24 24 Port Group 6 25G 10G, 25G E1/25 25 Port Group 7 25G 10G, 25G E1/26 26 Port Group 7 25G 10G, 25G E1/27 27 Port Group 7 25G 10G, 25G E1/28 28 Port Group 7 25G 10G, 25G E1/29 29 Port Group 8 25G 10G, 25G E1/30 30 Port Group 8 25G 10G, 25G E1/31 31 Port Group 8 25G 10G, 25G E1/32 32 Port Group 8 25G 10G, 25G E1/33 33 Port Group 9 25G 10G, 25G E1/34 34 Port Group 9 25G 10G, 25G E1/35 35 Port Group 9 25G 10G, 25G E1/36 36 Port Group 9 25G 10G, 25G E1/37 37 Port Group 10 25G 10G, 25G E1/38 38 Port Group 10 25G 10G, 25G E1/39 39 Port Group 10 25G 10G, 25G E1/40 40 Port Group 10 25G 10G, 25G E1/41 41 Port Group 11 25G 10G, 25G E1/42 42 Port Group 11 25G 10G, 25G E1/43 43 Port Group 11 25G 10G, 25G E1/44 44 Port Group 11 25G 10G, 25G E1/45 45 Port Group 12 25G 10G, 25G E1/46 46 Port Group 12 25G 10G, 25G E1/47 47 Port Group 12 25G 10G, 25G E1/48 48 Port Group 12 25G 10G, 25G E1/49 49 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/50 50 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/51 51 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/52 52 Breakout 1x100G 
1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/53 53 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/54 54 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/55 55 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/56 56 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G"},{"location":"reference/profiles/#dell-z9332f-on","title":"Dell Z9332F-ON","text":"

    Profile Name (to use in switch.spec.profile): dell-z9332f-on

    Supported features:

    Available Ports:

    Label column is a port label on a physical switch.

    Port Label Type Group Default Supported M1 Management E1/1 1 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/2 2 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/3 3 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/4 4 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/5 5 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/6 6 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/7 7 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/8 8 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/9 9 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/10 10 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/11 11 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/12 12 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/13 13 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/14 14 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 
4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/15 15 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/16 16 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/17 17 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/18 18 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/19 19 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/20 20 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/21 21 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/22 22 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/23 23 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/24 24 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/25 25 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/26 26 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/27 27 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/28 28 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 
4x25G, 8x10G, 8x25G, 8x50G E1/29 29 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/30 30 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/31 31 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/32 32 Breakout 1x400G 1x100G, 1x10G, 1x200G, 1x25G, 1x400G, 1x40G, 1x50G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/33 M1 Direct 10G 1G, 10G E1/34 M2 Direct 10G 1G, 10G"},{"location":"reference/profiles/#edgecore-dcs203","title":"Edgecore DCS203","text":"

    Profile Name (to use in switch.spec.profile): edgecore-dcs203

    Other names: Edgecore AS7326-56X

    Supported features:

    Available Ports:

    Label column is a port label on a physical switch.

    Port Label Type Group Default Supported M1 Management E1/1 1 Port Group 1 25G 10G, 25G E1/2 2 Port Group 1 25G 10G, 25G E1/3 3 Port Group 1 25G 10G, 25G E1/4 4 Port Group 1 25G 10G, 25G E1/5 5 Port Group 1 25G 10G, 25G E1/6 6 Port Group 1 25G 10G, 25G E1/7 7 Port Group 1 25G 10G, 25G E1/8 8 Port Group 1 25G 10G, 25G E1/9 9 Port Group 1 25G 10G, 25G E1/10 10 Port Group 1 25G 10G, 25G E1/11 11 Port Group 1 25G 10G, 25G E1/12 12 Port Group 1 25G 10G, 25G E1/13 13 Port Group 2 25G 10G, 25G E1/14 14 Port Group 2 25G 10G, 25G E1/15 15 Port Group 2 25G 10G, 25G E1/16 16 Port Group 2 25G 10G, 25G E1/17 17 Port Group 2 25G 10G, 25G E1/18 18 Port Group 2 25G 10G, 25G E1/19 19 Port Group 2 25G 10G, 25G E1/20 20 Port Group 2 25G 10G, 25G E1/21 21 Port Group 2 25G 10G, 25G E1/22 22 Port Group 2 25G 10G, 25G E1/23 23 Port Group 2 25G 10G, 25G E1/24 24 Port Group 2 25G 10G, 25G E1/25 25 Port Group 3 25G 10G, 25G E1/26 26 Port Group 3 25G 10G, 25G E1/27 27 Port Group 3 25G 10G, 25G E1/28 28 Port Group 3 25G 10G, 25G E1/29 29 Port Group 3 25G 10G, 25G E1/30 30 Port Group 3 25G 10G, 25G E1/31 31 Port Group 3 25G 10G, 25G E1/32 32 Port Group 3 25G 10G, 25G E1/33 33 Port Group 3 25G 10G, 25G E1/34 34 Port Group 3 25G 10G, 25G E1/35 35 Port Group 3 25G 10G, 25G E1/36 36 Port Group 3 25G 10G, 25G E1/37 37 Port Group 4 25G 10G, 25G E1/38 38 Port Group 4 25G 10G, 25G E1/39 39 Port Group 4 25G 10G, 25G E1/40 40 Port Group 4 25G 10G, 25G E1/41 41 Port Group 4 25G 10G, 25G E1/42 42 Port Group 4 25G 10G, 25G E1/43 43 Port Group 4 25G 10G, 25G E1/44 44 Port Group 4 25G 10G, 25G E1/45 45 Port Group 4 25G 10G, 25G E1/46 46 Port Group 4 25G 10G, 25G E1/47 47 Port Group 4 25G 10G, 25G E1/48 48 Port Group 4 25G 10G, 25G E1/49 49 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/50 50 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/51 51 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/52 52 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/53 53 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/54 54 
Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/55 55 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/56 56 Direct 100G 40G, 100G E1/57 57 Direct 10G 1G, 10G E1/58 58 Direct 10G 1G, 10G"},{"location":"reference/profiles/#edgecore-dcs204","title":"Edgecore DCS204","text":"

    Profile Name (to use in switch.spec.profile): edgecore-dcs204

    Other names: Edgecore AS7726-32X

    Supported features:

    Available Ports:

    Label column is a port label on a physical switch.

    Port Label Type Group Default Supported M1 Management E1/1 1 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/2 2 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/3 3 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/4 4 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/5 5 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/6 6 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/7 7 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/8 8 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/9 9 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/10 10 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/11 11 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/12 12 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/13 13 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/14 14 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/15 15 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/16 16 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/17 17 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/18 18 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/19 19 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/20 20 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/21 21 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/22 22 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/23 23 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/24 24 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/25 25 
Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/26 26 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/27 27 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/28 28 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/29 29 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/30 30 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/31 31 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/32 32 Direct 100G 40G, 100G E1/33 33 Direct 10G 1G, 10G E1/34 34 Direct 10G 1G, 10G"},{"location":"reference/profiles/#edgecore-dcs501","title":"Edgecore DCS501","text":"

    Profile Name (to use in switch.spec.profile): edgecore-dcs501

    Other names: Edgecore AS7712-32X

    Supported features:

    Available Ports:

    Label column is a port label on a physical switch.

    Port Label Type Group Default Supported M1 Management E1/1 1 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/2 2 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/3 3 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/4 4 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/5 5 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/6 6 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/7 7 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/8 8 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/9 9 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/10 10 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/11 11 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/12 12 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/13 13 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/14 14 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/15 15 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/16 16 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/17 17 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/18 18 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/19 19 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/20 20 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/21 21 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/22 22 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/23 23 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/24 24 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/25 25 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/26 26 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/27 27 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/28 28 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/29 29 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/30 30 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/31 31 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/32 32 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G"},{"location":"reference/profiles/#edgecore-eps203","title":"Edgecore EPS203","text":"

    Profile Name (to use in switch.spec.profile): edgecore-eps203

    Other names: Edgecore AS4630-54NPE

    Supported features:

    Available Ports:

    Label column is a port label on a physical switch.

    Port Label Type Group Default Supported M1 Management E1/1 1 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/2 2 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/3 3 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/4 4 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/5 5 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/6 6 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/7 7 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/8 8 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/9 9 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/10 10 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/11 11 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/12 12 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/13 13 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/14 14 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/15 15 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/16 16 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/17 17 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/18 18 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/19 19 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/20 20 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/21 21 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/22 22 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/23 23 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/24 24 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/25 25 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/26 26 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/27 27 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/28 28 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/29 29 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/30 30 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/31 31 
Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/32 32 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/33 33 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/34 34 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/35 35 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/36 36 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/37 37 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/38 38 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/39 39 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/40 40 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/41 41 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/42 42 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/43 43 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/44 44 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/45 45 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/46 46 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/47 47 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/48 48 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/49 49 Direct 25G 1G, 10G, 25G E1/50 50 Direct 25G 1G, 10G, 25G E1/51 51 Direct 25G 1G, 10G, 25G E1/52 52 Direct 25G 1G, 10G, 25G E1/53 53 Direct 100G 40G, 100G E1/54 54 Direct 100G 40G, 100G"},{"location":"reference/profiles/#supermicro-sse-c4632sb","title":"Supermicro SSE-C4632SB","text":"

    Profile Name (to use in switch.spec.profile): supermicro-sse-c4632sb

    Supported features:

    Available Ports:

    Label column is a port label on a physical switch.

    Port Label Type Group Default Supported M1 Management E1/1 1 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/2 2 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/3 3 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/4 4 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/5 5 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/6 6 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/7 7 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/8 8 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/9 9 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/10 10 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/11 11 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/12 12 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/13 13 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/14 14 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/15 15 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/16 16 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/17 17 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/18 18 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/19 19 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/20 20 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/21 21 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/22 22 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/23 23 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/24 24 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/25 25 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/26 26 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/27 27 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/28 28 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/29 29 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/30 30 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/31 31 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/32 32 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/33 33 Direct 10G 1G, 10G"},{"location":"reference/profiles/#virtual-switch","title":"Virtual Switch","text":"

    Profile Name (to use in switch.spec.profile): vs

    Supported features:

    Available Ports:

    Label column is a port label on a physical switch.

    Port Label Type Group Default Supported M1 Management E1/1 1 Port Group 1 25G 10G, 25G E1/2 2 Port Group 1 25G 10G, 25G E1/3 3 Port Group 1 25G 10G, 25G E1/4 4 Port Group 1 25G 10G, 25G E1/5 5 Port Group 2 25G 10G, 25G E1/6 6 Port Group 2 25G 10G, 25G E1/7 7 Port Group 2 25G 10G, 25G E1/8 8 Port Group 2 25G 10G, 25G E1/9 9 Port Group 3 25G 10G, 25G E1/10 10 Port Group 3 25G 10G, 25G E1/11 11 Port Group 3 25G 10G, 25G E1/12 12 Port Group 3 25G 10G, 25G E1/13 13 Port Group 4 25G 10G, 25G E1/14 14 Port Group 4 25G 10G, 25G E1/15 15 Port Group 4 25G 10G, 25G E1/16 16 Port Group 4 25G 10G, 25G E1/17 17 Port Group 5 25G 10G, 25G E1/18 18 Port Group 5 25G 10G, 25G E1/19 19 Port Group 5 25G 10G, 25G E1/20 20 Port Group 5 25G 10G, 25G E1/21 21 Port Group 6 25G 10G, 25G E1/22 22 Port Group 6 25G 10G, 25G E1/23 23 Port Group 6 25G 10G, 25G E1/24 24 Port Group 6 25G 10G, 25G E1/25 25 Port Group 7 25G 10G, 25G E1/26 26 Port Group 7 25G 10G, 25G E1/27 27 Port Group 7 25G 10G, 25G E1/28 28 Port Group 7 25G 10G, 25G E1/29 29 Port Group 8 25G 10G, 25G E1/30 30 Port Group 8 25G 10G, 25G E1/31 31 Port Group 8 25G 10G, 25G E1/32 32 Port Group 8 25G 10G, 25G E1/33 33 Port Group 9 25G 10G, 25G E1/34 34 Port Group 9 25G 10G, 25G E1/35 35 Port Group 9 25G 10G, 25G E1/36 36 Port Group 9 25G 10G, 25G E1/37 37 Port Group 10 25G 10G, 25G E1/38 38 Port Group 10 25G 10G, 25G E1/39 39 Port Group 10 25G 10G, 25G E1/40 40 Port Group 10 25G 10G, 25G E1/41 41 Port Group 11 25G 10G, 25G E1/42 42 Port Group 11 25G 10G, 25G E1/43 43 Port Group 11 25G 10G, 25G E1/44 44 Port Group 11 25G 10G, 25G E1/45 45 Port Group 12 25G 10G, 25G E1/46 46 Port Group 12 25G 10G, 25G E1/47 47 Port Group 12 25G 10G, 25G E1/48 48 Port Group 12 25G 10G, 25G E1/49 49 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/50 50 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/51 51 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/52 52 Breakout 1x100G 
1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/53 53 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/54 54 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/55 55 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/56 56 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G"},{"location":"release-notes/","title":"Release notes","text":""},{"location":"release-notes/#beta-1","title":"Beta-1","text":""},{"location":"release-notes/#device-support","title":"Device support","text":""},{"location":"release-notes/#sonic","title":"SONiC","text":""},{"location":"release-notes/#fabric-provisioning-management","title":"Fabric provisioning, management","text":""},{"location":"release-notes/#api","title":"API","text":""},{"location":"release-notes/#alpha-7","title":"Alpha-7","text":""},{"location":"release-notes/#device-support_1","title":"Device Support","text":"

    New devices supported by the fabric:

    "},{"location":"release-notes/#switchprofiles","title":"SwitchProfiles","text":""},{"location":"release-notes/#new-universal-port-naming-scheme","title":"New Universal Port Naming Scheme","text":""},{"location":"release-notes/#improved-per-switch-modelplatform-validation","title":"Improved per switch-model/platform validation","text":""},{"location":"release-notes/#vpc","title":"VPC","text":""},{"location":"release-notes/#inspection-cli","title":"Inspection CLI","text":"

    CLI commands are intended for navigating fabric configuration and state, and allow introspection of dependencies and cross-domain checks:

    "},{"location":"release-notes/#observability","title":"Observability","text":""},{"location":"release-notes/#bug-fixes","title":"Bug Fixes","text":""},{"location":"release-notes/#alpha-6","title":"Alpha-6","text":""},{"location":"release-notes/#observability_1","title":"Observability","text":""},{"location":"release-notes/#telemetry-prometheus-exporter","title":"Telemetry - Prometheus Exporter","text":""},{"location":"release-notes/#logging","title":"Logging","text":""},{"location":"release-notes/#agent-status-api-enhancements","title":"Agent Status API Enhancements","text":""},{"location":"release-notes/#networking-enhancements","title":"Networking enhancements","text":""},{"location":"release-notes/#other-improvements","title":"Other improvements","text":""},{"location":"release-notes/#bugs-fixed","title":"Bugs fixed","text":""},{"location":"release-notes/#alpha-5","title":"Alpha-5","text":""},{"location":"release-notes/#open-source","title":"Open Source","text":""},{"location":"release-notes/#dhcppxe-boot-support-for-multi-homed-connections","title":"DHCP/PXE boot support for multi-homed connections","text":""},{"location":"release-notes/#improvements","title":"Improvements","text":""},{"location":"release-notes/#alpha-4","title":"Alpha-4","text":""},{"location":"release-notes/#documentation","title":"Documentation","text":""},{"location":"release-notes/#host-connectivity-dual-homing-improvements","title":"Host connectivity dual homing improvements","text":""},{"location":"release-notes/#improved-vpc-security-policy-better-zero-trust","title":"Improved VPC security policy - better Zero Trust","text":""},{"location":"release-notes/#static-external-connection","title":"Static External Connection","text":""},{"location":"release-notes/#internal-improvements","title":"Internal Improvements","text":""},{"location":"release-notes/#known-issues","title":"Known 
Issues","text":""},{"location":"release-notes/#alpha-3","title":"Alpha-3","text":""},{"location":"release-notes/#sonic-support","title":"SONiC support","text":""},{"location":"release-notes/#multiple-ipv4-namespaces","title":"Multiple IPv4 namespaces","text":""},{"location":"release-notes/#hedgehog-fabric-dhcp-and-ipam-service","title":"Hedgehog Fabric DHCP and IPAM Service","text":""},{"location":"release-notes/#hedgehog-fabric-ntp-service","title":"Hedgehog Fabric NTP Service","text":""},{"location":"release-notes/#staticexternal-connections","title":"StaticExternal connections","text":""},{"location":"release-notes/#dhcp-relay-to-3rd-party-dhcp-service","title":"DHCP Relay to 3rd party DHCP service","text":"

    Support for a 3rd-party DHCP server (via DHCP Relay config) through the API

    "},{"location":"release-notes/#alpha-2","title":"Alpha-2","text":""},{"location":"release-notes/#controller","title":"Controller","text":"

    A single controller. No controller redundancy.

    "},{"location":"release-notes/#controller-connectivity","title":"Controller connectivity","text":"

    For CLOS/LEAF-SPINE fabrics, it is recommended that the controller connects to one or more leaf switches in the fabric on front-facing data ports. Connection to two or more leaf switches is recommended for redundancy and performance. No port break-out functionality is supported for controller connectivity.

    Spine controller connectivity is not supported.

    For Collapsed Core topology, the controller can connect on front-facing data ports, as described above, or on management ports. Note that every switch in the collapsed core topology must be connected to the controller.

    Management port connectivity is also supported for the CLOS/LEAF-SPINE topology, but requires all switches to be connected to the controllers via management ports. No chain booting is possible in this configuration.

    "},{"location":"release-notes/#controller-requirements","title":"Controller requirements","text":""},{"location":"release-notes/#chain-booting","title":"Chain booting","text":"

    Switches not directly connected to the controllers can chain boot via the data network.

    "},{"location":"release-notes/#topology-support","title":"Topology support","text":"

    CLOS/LEAF-SPINE and Collapsed Core topologies are supported.

    "},{"location":"release-notes/#leaf-roles-for-clos-topology","title":"LEAF Roles for CLOS topology","text":"

    Server leaf, border leaf, and mixed leaf modes are supported.

    "},{"location":"release-notes/#collapsed-core-topology","title":"Collapsed Core Topology","text":"

    Two ToR/LEAF switches with MCLAG server connections.

    "},{"location":"release-notes/#server-multihoming","title":"Server multihoming","text":"

    MCLAG-only.

    "},{"location":"release-notes/#device-support_2","title":"Device support","text":""},{"location":"release-notes/#leafs","title":"LEAFs","text":""},{"location":"release-notes/#spines","title":"SPINEs","text":""},{"location":"release-notes/#underlay-configuration","title":"Underlay configuration:","text":"

    Port speed, port group speed, and port breakouts are configurable through the API

    "},{"location":"release-notes/#vpc-overlay-implementation","title":"VPC (overlay) Implementation","text":"

    VXLAN-based BGP EVPN.

    "},{"location":"release-notes/#multi-subnet-vpcs","title":"Multi-subnet VPCs","text":"

    A VPC consists of subnets, each with a user-specified VLAN for external host/server connectivity.

    "},{"location":"release-notes/#multiple-ip-address-namespaces","title":"Multiple IP address namespaces","text":"

    Multiple IP address namespaces are supported per fabric. Each VPC belongs to the corresponding IPv4 namespace. There are no subnet overlaps within a single IPv4 namespace. IP address namespaces can mutually overlap.

    "},{"location":"release-notes/#vlan-namespace","title":"VLAN Namespace","text":"

    VLAN Namespaces guarantee the uniqueness of VLANs for a set of participating devices. Each switch belongs to a list of VLAN namespaces with non-overlapping VLAN ranges. Each VPC belongs to a VLAN namespace. There are no VLAN overlaps within a single VLAN namespace.

    This feature is useful when multiple VM-management domains (like separate VMware clusters) connect to the fabric.
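
    For illustration, a VLAN namespace with a dedicated VLAN range might look like the sketch below. This is an assumption-laden example, not taken from this page: the exact `VLANNamespace` field names and the range values shown are hypothetical.

    ```yaml
    apiVersion: wiring.githedgehog.com/v1beta1
    kind: VLANNamespace
    metadata:
      name: cluster-a       # hypothetical namespace for one VMware cluster
      namespace: default
    spec:
      ranges:               # VLAN ranges must not overlap with other namespaces on the same switches
        - from: 1000
          to: 1999
    ```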

    "},{"location":"release-notes/#switch-groups","title":"Switch Groups","text":"

    Each switch belongs to a list of switch groups, used to identify redundancy groups for purposes such as external connectivity.

    "},{"location":"release-notes/#mutual-vpc-peering","title":"Mutual VPC Peering","text":"

    VPC peering is supported between a pair of VPCs that belong to the same IPv4 and VLAN namespaces.

    "},{"location":"release-notes/#external-vpc-peering","title":"External VPC Peering","text":"

    VPC peering provides the means of peering with external networking devices (edge routers, firewalls, or data center interconnects). VPC egress/ingress is pinned to a specific group of the border or mixed leaf switches. Multiple \u201cexternal systems\u201d with multiple devices/links in each of them are supported.

    The user controls which subnets/prefixes to import from and export to the external system.

    No NAT function is supported for external peering.

    "},{"location":"release-notes/#host-connectivity","title":"Host connectivity","text":"

    Servers can be attached as Unbundled, Bundled (LAG), or MCLAG connections.

    "},{"location":"release-notes/#dhcp-service","title":"DHCP Service","text":"

    Each VPC is provided with an optional DHCP service with simple IPAM.

    "},{"location":"release-notes/#local-vpc-peering-loopbacks","title":"Local VPC peering loopbacks","text":"

    To enable local inter-VPC peering that allows routing of traffic between VPCs, local loopbacks are required to overcome silicon limitations.

    "},{"location":"release-notes/#scale","title":"Scale","text":""},{"location":"release-notes/#software-versions","title":"Software versions","text":""},{"location":"release-notes/#known-limitations","title":"Known Limitations","text":""},{"location":"release-notes/#alpha-1","title":"Alpha-1","text":""},{"location":"troubleshooting/overview/","title":"Troubleshooting","text":"

    Under construction.

    "},{"location":"user-guide/connections/","title":"Connections","text":"

    Connection objects represent logical and physical connections between the devices in the Fabric (Switch, Server and External objects) and are needed to define all the connections in the Wiring Diagram.

    All connections reference switch or server ports. Only port names defined by switch profiles can be used in the wiring diagram for the switches. NOS (or any other) port names aren't supported. Currently, server ports aren't validated by the Fabric API other than for uniqueness. See the Switch Profiles and Port Naming section for more details.

    There are several types of connections.

    "},{"location":"user-guide/connections/#workload-server-connections","title":"Workload server connections","text":"

    Server connections are used to connect workload servers to switches.

    "},{"location":"user-guide/connections/#unbundled","title":"Unbundled","text":"

    Unbundled server connections are used to connect servers to a single switch using a single port.

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: server-4--unbundled--s5248-02\n  namespace: default\nspec:\n  unbundled:\n    link: # Defines a single link between a server and a switch\n      server:\n        port: server-4/enp2s1\n      switch:\n        port: s5248-02/Ethernet3\n
    "},{"location":"user-guide/connections/#bundled","title":"Bundled","text":"

    Bundled server connections are used to connect servers to a single switch using multiple ports (port channel, LAG).

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: server-3--bundled--s5248-01\n  namespace: default\nspec:\n  bundled:\n    links: # Defines multiple links between a single server and a single switch\n    - server:\n        port: server-3/enp2s1\n      switch:\n        port: s5248-01/Ethernet3\n    - server:\n        port: server-3/enp2s2\n      switch:\n        port: s5248-01/Ethernet4\n
    "},{"location":"user-guide/connections/#mclag","title":"MCLAG","text":"

    MCLAG server connections are used to connect servers to a pair of switches using multiple ports (dual-homing). Switches should be configured as an MCLAG pair, which requires them to be in a single redundancy group of type mclag and to have a Connection with type mclag-domain between them. MCLAG switches should also have the same spec.ASN and spec.VTEPIP.

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: server-1--mclag--s5248-01--s5248-02\n  namespace: default\nspec:\n  mclag:\n    links: # Defines multiple links between a single server and a pair of switches\n    - server:\n        port: server-1/enp2s1\n      switch:\n        port: s5248-01/Ethernet1\n    - server:\n        port: server-1/enp2s2\n      switch:\n        port: s5248-02/Ethernet1\n
    "},{"location":"user-guide/connections/#eslag","title":"ESLAG","text":"

    ESLAG server connections are used to connect servers to 2-4 switches using multiple ports (multi-homing). Switches should belong to the same redundancy group with type eslag, but contrary to the MCLAG case, no other configuration is required.

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: server-1--eslag--s5248-01--s5248-02\n  namespace: default\nspec:\n  eslag:\n    links: # Defines multiple links between a single server and a 2-4 switches\n    - server:\n        port: server-1/enp2s1\n      switch:\n        port: s5248-01/Ethernet1\n    - server:\n        port: server-1/enp2s2\n      switch:\n        port: s5248-02/Ethernet1\n
    "},{"location":"user-guide/connections/#switch-connections-fabric-facing","title":"Switch connections (fabric-facing)","text":"

    Switch connections are used to connect switches to each other and provide any needed \"service\" connectivity to implement the Fabric features.

    "},{"location":"user-guide/connections/#fabric","title":"Fabric","text":"

    A Fabric Connection is used between a specific pair of spine and leaf switches, representing all of the wires between them.

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: s5232-01--fabric--s5248-01\n  namespace: default\nspec:\n  fabric:\n    links: # Defines multiple links between a spine-leaf pair of switches with IP addresses\n    - leaf:\n        ip: 172.30.30.1/31\n        port: s5248-01/Ethernet48\n      spine:\n        ip: 172.30.30.0/31\n        port: s5232-01/Ethernet0\n    - leaf:\n        ip: 172.30.30.3/31\n        port: s5248-01/Ethernet56\n      spine:\n        ip: 172.30.30.2/31\n        port: s5232-01/Ethernet4\n
    "},{"location":"user-guide/connections/#mclag-domain","title":"MCLAG-Domain","text":"

    MCLAG-Domain connections define a pair of MCLAG switches with Session and Peer links between them. Switches should be configured as an MCLAG pair, which requires them to be in a single redundancy group of type mclag and to have a Connection with type mclag-domain between them. MCLAG switches should also have the same spec.ASN and spec.VTEPIP.

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: s5248-01--mclag-domain--s5248-02\n  namespace: default\nspec:\n  mclagDomain:\n    peerLinks: # Defines multiple links between a pair of MCLAG switches for Peer link\n    - switch1:\n        port: s5248-01/Ethernet72\n      switch2:\n        port: s5248-02/Ethernet72\n    - switch1:\n        port: s5248-01/Ethernet73\n      switch2:\n        port: s5248-02/Ethernet73\n    sessionLinks: # Defines multiple links between a pair of MCLAG switches for Session link\n    - switch1:\n        port: s5248-01/Ethernet74\n      switch2:\n        port: s5248-02/Ethernet74\n    - switch1:\n        port: s5248-01/Ethernet75\n      switch2:\n        port: s5248-02/Ethernet75\n
    "},{"location":"user-guide/connections/#vpc-loopback","title":"VPC-Loopback","text":"

    VPC-Loopback connections are required in order to implement a workaround for local VPC peering (when both VPCs are attached to the same switch), which is needed due to a hardware limitation of the currently supported switches.

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: s5248-01--vpc-loopback\n  namespace: default\nspec:\n  vpcLoopback:\n    links: # Defines multiple loopbacks on a single switch\n    - switch1:\n        port: s5248-01/Ethernet16\n      switch2:\n        port: s5248-01/Ethernet17\n    - switch1:\n        port: s5248-01/Ethernet18\n      switch2:\n        port: s5248-01/Ethernet19\n
    "},{"location":"user-guide/connections/#connecting-fabric-to-the-outside-world","title":"Connecting Fabric to the outside world","text":"

    Connections in this section provide connectivity to the outside world. For example, they can be connections to the Internet, to other networks, or to some other systems such as DHCP, NTP, LMA, or AAA services.

    "},{"location":"user-guide/connections/#staticexternal","title":"StaticExternal","text":"

    StaticExternal connections provide a simple way to connect things like DHCP servers directly to the Fabric by connecting them to specific switch ports.

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: third-party-dhcp-server--static-external--s5248-04\n  namespace: default\nspec:\n  staticExternal:\n    link:\n      switch:\n        port: s5248-04/Ethernet1 # Switch port to use\n        ip: 172.30.50.5/24 # IP address that will be assigned to the switch port\n        vlan: 1005 # Optional VLAN ID to use for the switch port; if 0, no VLAN is configured\n        subnets: # List of subnets to route to the switch port using static routes and next hop\n          - 10.99.0.1/24\n          - 10.199.0.100/32\n        nextHop: 172.30.50.1 # Next hop IP address to use when configuring static routes for the \"subnets\" list\n

    Additionally, it's possible to configure StaticExternal within the VPC to provide access to the third-party resources within a specific VPC, with the rest of the YAML configuration remaining unchanged.

    ...\nspec:\n  staticExternal:\n    withinVPC: vpc-1 # VPC name to attach the static external to\n    link:\n      ...\n
    "},{"location":"user-guide/connections/#external","title":"External","text":"

    External connections are used to connect to external systems, such as edge/provider routers, using BGP peering. They allow configuring inbound/outbound communities as well as granularly controlling what gets advertised and which routes are accepted.

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: s5248-03--external--5835\n  namespace: default\nspec:\n  external:\n    link: # Defines a single link between a switch and an external system\n      switch:\n        port: s5248-03/Ethernet3\n
    "},{"location":"user-guide/devices/","title":"Switches and Servers","text":"

    All devices in a Hedgehog Fabric are divided into two groups: switches and servers, represented by the corresponding Switch and Server objects in the API. These objects are needed to define all of the participants of the Fabric and their roles in the Wiring Diagram, together with Connection objects (see Connections).

    "},{"location":"user-guide/devices/#switches","title":"Switches","text":"

    Switches are the main building blocks of the Fabric. They are represented by Switch objects in the API. These objects consist of basic metadata like name, description, role, serial, management port mac, as well as port group speeds, port breakouts, ASN, IP addresses, and more. Additionally, a Switch contains a reference to a SwitchProfile object that defines the switch model and capabilities. More details can be found in the Switch Profiles and Port Naming section.

    In order for the fabric to manage a switch, either the serial or the mac needs to be defined in the YAML document.

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Switch\nmetadata:\n  name: s5248-01\n  namespace: default\nspec:\n  boot: # at least one of the serial or mac needs to be defined\n    serial: XYZPDQ1234\n    mac: 00:11:22:33:44:55 # Usually the first management port MAC address\n  profile: dell-s5248f-on # Mandatory reference to the SwitchProfile object defining the switch model and capabilities\n  asn: 65101 # ASN of the switch\n  description: leaf-1\n  ip: 172.30.10.100/32 # Switch IP that will be accessible from the Control Node\n  portBreakouts: # Configures port breakouts for the switch, see the SwitchProfile for available options\n    E1/55: 4x25G\n  portGroupSpeeds: # Configures port group speeds for the switch, see the SwitchProfile for available options\n    \"1\": 10G\n    \"2\": 10G\n  portSpeeds: # Configures port speeds for the switch, see the SwitchProfile for available options\n    E1/1: 25G\n  protocolIP: 172.30.11.100/32 # Used as BGP router ID\n  role: server-leaf # Role of the switch, one of server-leaf, border-leaf and mixed-leaf\n  vlanNamespaces: # Defines which VLANs could be used to attach servers\n  - default\n  vtepIP: 172.30.12.100/32\n  groups: # Defines which groups the switch belongs to, by referring to SwitchGroup objects\n  - some-group\n  redundancy: # Optional field to define that switch belongs to the redundancy group\n    group: eslag-1 # Name of the redundancy group\n    type: eslag # Type of the redundancy group, one of mclag or eslag\n

    The SwitchGroup is just a marker at this point and doesn't have any configuration options.

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: SwitchGroup\nmetadata:\n  name: border\n  namespace: default\nspec: {}\n
    "},{"location":"user-guide/devices/#redundancy-groups","title":"Redundancy Groups","text":"

    Redundancy groups are used to define the redundancy between switches. A redundancy group is a regular SwitchGroup used by multiple switches; currently it can be of type MCLAG or ESLAG (EVPN MH / ESI). A switch can only belong to a single redundancy group.

    MCLAG is only supported for pairs of switches and ESLAG is supported for up to 4 switches. Multiple types of redundancy groups can be used in the fabric simultaneously.

    Connections with types mclag and eslag are used to define server connections to switches. They are only supported if the switch belongs to a redundancy group of the corresponding type.

    In order to define a MCLAG or ESLAG redundancy group, you need to create a SwitchGroup object and assign it to the switches using the redundancy field.

    Example of a switch configured for ESLAG:

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: SwitchGroup\nmetadata:\n  name: eslag-1\n  namespace: default\nspec: {}\n---\napiVersion: wiring.githedgehog.com/v1beta1\nkind: Switch\nmetadata:\n  name: s5248-03\n  namespace: default\nspec:\n  ...\n  redundancy:\n    group: eslag-1\n    type: eslag\n  ...\n

    And an example of a switch configured for MCLAG:

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: SwitchGroup\nmetadata:\n  name: mclag-1\n  namespace: default\nspec: {}\n---\napiVersion: wiring.githedgehog.com/v1beta1\nkind: Switch\nmetadata:\n  name: s5248-01\n  namespace: default\nspec:\n  ...\n  redundancy:\n    group: mclag-1\n    type: mclag\n  ...\n

    In case of MCLAG it's required to have a special connection with type mclag-domain that defines the peer and session links between switches. For more details, see Connections.

    "},{"location":"user-guide/devices/#servers","title":"Servers","text":"

    Regular workload server:

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Server\nmetadata:\n  name: server-1\n  namespace: default\nspec:\n  description: MH s5248-01/E1 s5248-02/E1\n
    "},{"location":"user-guide/external/","title":"External Peering","text":"

    Hedgehog Fabric uses the Border Leaf concept to exchange VPC routes outside the Fabric and provide L3 connectivity. The External Peering feature allows you to set up an external peering endpoint and to enforce several policies between internal and external endpoints.

    Note

    Hedgehog Fabric does not operate Edge side devices.

    "},{"location":"user-guide/external/#overview","title":"Overview","text":"

    Traffic exits the Fabric on Border Leaves that are connected to Edge devices. Border Leaves are suitable for terminating L2VPN connections, distinguishing VPC L3 routable traffic towards Edge devices, and landing VPC servers. Border Leaves (or Borders) can connect to several Edge devices.

    Note

    External Peering is only available on switch devices that are capable of sub-interfaces.

    "},{"location":"user-guide/external/#connect-border-leaf-to-edge-device","title":"Connect Border Leaf to Edge device","text":"

    In order to distinguish VPC traffic, an Edge device should be able to:

    All other filtering and processing of L3 Routed Fabric traffic should be done on the Edge devices.

    "},{"location":"user-guide/external/#control-plane","title":"Control Plane","text":"

    The Fabric shares VPC routes with Edge devices via BGP. Peering is done over VLAN in IPv4 Unicast AFI/SAFI.

    "},{"location":"user-guide/external/#data-plane","title":"Data Plane","text":"

    VPC L3 routable traffic will be tagged with a VLAN and sent to the Edge device. Later processing of VPC traffic (NAT, PBR, etc.) should happen on the Edge devices.

    "},{"location":"user-guide/external/#vpc-access-to-edge-device","title":"VPC access to Edge device","text":"

    Each VPC within the Fabric can be allowed to access Edge devices. Additional filtering can be applied to the routes that the VPC can export to Edge devices and import from the Edge devices.

    "},{"location":"user-guide/external/#api-and-implementation","title":"API and implementation","text":""},{"location":"user-guide/external/#external","title":"External","text":"

    General configuration starts with the specification of External objects. Each object of External type can represent a set of Edge devices, a single BGP instance on an Edge device, or any other unified Edge entity that can be described with the following configuration:

    Each External should be bound to some VPC IP Namespace; otherwise prefix overlaps may happen.

    apiVersion: vpc.githedgehog.com/v1beta1\nkind: External\nmetadata:\n  name: default--5835\nspec:\n  ipv4Namespace: # VPC IP Namespace\n  inboundCommunity: # BGP Standard Community of routes from Edge devices\n  outboundCommunity: # BGP Standard Community required to be assigned on prefixes advertised from Fabric\n
    "},{"location":"user-guide/external/#connection","title":"Connection","text":"

    A Connection of type external is used to identify the switch port on Border leaf that is cabled with an Edge device.

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: # specified or generated\nspec:\n  external:\n    link:\n      switch:\n        port: # SwitchName/EthernetXXX\n
    "},{"location":"user-guide/external/#external-attachment","title":"External Attachment","text":"

    External Attachment defines BGP Peering and traffic connectivity between a Border leaf and External. Attachments are bound to a Connection with type external and they specify an optional vlan that will be used to segregate particular Edge peering.

    apiVersion: vpc.githedgehog.com/v1beta1\nkind: ExternalAttachment\nmetadata:\n  name: #\nspec:\n  connection: # Name of the Connection with type external\n  external: # Name of the External to pick config\n  neighbor:\n    asn: # Edge device ASN\n    ip: # IP address of Edge device to peer with\n  switch:\n    ip: # IP address on the Border Leaf to set up BGP peering\n    vlan: # VLAN (optional) ID to tag control and data traffic, use 0 for untagged\n

    Several External Attachments can be configured for the same Connection, but each must use a different vlan.

    "},{"location":"user-guide/external/#external-vpc-peering","title":"External VPC Peering","text":"

    To allow a specific VPC to have access to Edge devices, bind the VPC to a specific External object. To do so, define an External Peering object.

    apiVersion: vpc.githedgehog.com/v1beta1\nkind: ExternalPeering\nmetadata:\n  name: # Name of ExternalPeering\nspec:\n  permit:\n    external:\n      name: # External Name\n      prefixes: # List of prefixes (routes) to be allowed to pick up from External\n      - # IPv4 prefix\n    vpc:\n      name: # VPC Name\n      subnets: # List of VPC subnets name to be allowed to have access to External (Edge)\n      - # Name of the subnet within VPC\n

    Prefixes is the list of subnets to permit from the External to the VPC. Each entry matches any prefix length less than or equal to 32, effectively permitting all more-specific prefixes within the specified one. Use 0.0.0.0/0 for any route, including the default route.

    This example allows any IPv4 prefix that comes from the External:

    spec:\n  permit:\n    external:\n      name: ###\n      prefixes:\n      - prefix: 0.0.0.0/0 # Any route will be allowed including default route\n

    This example allows all prefixes that belong to the specified prefix, with any prefix length:

    spec:\n  permit:\n    external:\n      name: ###\n      prefixes:\n      - prefix: 77.0.0.0/8 # Any route that belongs to the specified prefix is allowed (such as 77.0.0.0/8 or 77.1.2.0/24)\n
    "},{"location":"user-guide/external/#examples","title":"Examples","text":"

    This example shows how to peer with the External object with name HedgeEdge, given a Fabric VPC with name vpc-1 on the Border Leaf switchBorder that has a cable connecting it to an Edge device on the port Ethernet42. Specifying vpc-1 is required to receive any prefixes advertised from the External.

    "},{"location":"user-guide/external/#fabric-api-configuration","title":"Fabric API configuration","text":""},{"location":"user-guide/external/#external_1","title":"External","text":"
    # kubectl fabric external create --name hedgeedge --ipns default --in 65102:5000 --out 5000:65102\n
    - apiVersion: vpc.githedgehog.com/v1beta1\n  kind: External\n  metadata:\n    creationTimestamp: \"2024-11-26T21:24:32Z\"\n    generation: 1\n    labels:\n      fabric.githedgehog.com/ipv4ns: default\n    name: hedgeedge\n    namespace: default\n    resourceVersion: \"57628\"\n    uid: a0662988-73d0-45b3-afc0-0d009cd91ebd\n  spec:\n    inboundCommunity: 65102:5000\n    ipv4Namespace: default\n    outboundCommunity: 5000:65102\n
    "},{"location":"user-guide/external/#connection_1","title":"Connection","text":"

    Connection should be specified in the wiring diagram.

    ###\n### switchBorder--external--HedgeEdge\n###\napiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: switchBorder--external--HedgeEdge\nspec:\n  external:\n    link:\n      switch:\n        port: switchBorder/Ethernet42\n
    "},{"location":"user-guide/external/#externalattachment","title":"ExternalAttachment","text":"

    Specified in the wiring diagram:

    apiVersion: vpc.githedgehog.com/v1beta1\nkind: ExternalAttachment\nmetadata:\n  name: switchBorder--HedgeEdge\nspec:\n  connection: switchBorder--external--HedgeEdge\n  external: HedgeEdge\n  neighbor:\n    asn: 65102\n    ip: 100.100.0.6\n  switch:\n    ip: 100.100.0.1/24\n    vlan: 100\n
    "},{"location":"user-guide/external/#externalpeering","title":"ExternalPeering","text":"
    apiVersion: vpc.githedgehog.com/v1beta1\nkind: ExternalPeering\nmetadata:\n  name: vpc-1--HedgeEdge\nspec:\n  permit:\n    external:\n      name: HedgeEdge\n      prefixes:\n      - prefix: 0.0.0.0/0\n    vpc:\n      name: vpc-1\n      subnets:\n      - default\n
    "},{"location":"user-guide/external/#example-edge-side-bgp-configuration-based-on-sonic-os","title":"Example Edge side BGP configuration based on SONiC OS","text":"

    Warning

    Hedgehog does not recommend using the following configuration for production. It is only provided as an example of Edge Peer configuration.

    Interface configuration:

    interface Ethernet2.100\n encapsulation dot1q vlan-id 100\n description switchBorder--Ethernet42\n no shutdown\n ip vrf forwarding VrfHedge\n ip address 100.100.0.6/24\n

    BGP configuration:

    !\nrouter bgp 65102 vrf VrfHedge\n log-neighbor-changes\n timers 60 180\n !\n address-family ipv4 unicast\n  maximum-paths 64\n  maximum-paths ibgp 1\n  import vrf VrfPublic\n !\n neighbor 100.100.0.1\n  remote-as 65103\n  !\n  address-family ipv4 unicast\n   activate\n   route-map HedgeIn in\n   route-map HedgeOut out\n   send-community both\n !\n

    Route Map configuration:

    route-map HedgeIn permit 10\n match community Hedgehog\n!\nroute-map HedgeOut permit 10\n set community 65102:5000\n!\n\nbgp community-list standard HedgeIn permit 5000:65102\n
    "},{"location":"user-guide/grafana/","title":"Grafana Dashboards","text":"

    To provide monitoring of the most critical metrics from the switches managed by Hedgehog Fabric, several dashboards are available for use in Grafana deployments. Make sure that you've enabled metrics and logs collection for the switches in the Fabric, as described in the Fabric Config section.

    "},{"location":"user-guide/grafana/#variables","title":"Variables","text":"

    List of common variables used in Hedgehog Grafana dashboards:

    "},{"location":"user-guide/grafana/#switch-critical-resources","title":"Switch Critical Resources","text":"

    This table reports usage and capacity of the ASIC's programmable resources, such as:

    JSON

    "},{"location":"user-guide/grafana/#fabric","title":"Fabric","text":"

    Fabric underlay and external peering monitoring, including reporting for:

    JSON

    "},{"location":"user-guide/grafana/#interfaces","title":"Interfaces","text":"

    Switch interfaces monitoring visualization that includes:

    JSON

    "},{"location":"user-guide/grafana/#logs","title":"Logs","text":"

    System and fabric logs:

    JSON

    "},{"location":"user-guide/grafana/#platform","title":"Platform","text":"

    Information from PSU, temperature sensors and fan trays:

    JSON

    "},{"location":"user-guide/grafana/#node-exporter","title":"Node Exporter","text":"

    Grafana Node Exporter Full is an open-source Grafana dashboard that provides visualizations for monitoring Linux nodes. In this particular case, Node Exporter is used to track SONiC OS's own stats, such as:

    JSON

    "},{"location":"user-guide/harvester/","title":"Using VPCs with Harvester","text":"

    This section contains an example of how Hedgehog Fabric can be used with Harvester or any hypervisor on the servers connected to Fabric. It assumes that you have already installed Fabric and have some servers running Harvester attached to it.

    You need to define a Server object for each server running Harvester and a Connection object for each server connection to the switches.

    You can have multiple VPCs created and attached to the Connections to the servers to make them available to the VMs in Harvester or any other hypervisor.
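    VPC subnets are made available on server connections using VPCAttachment objects. A minimal sketch (names are illustrative, assuming a VPC named vpc-1 with a subnet named default and an existing MCLAG server Connection; verify the fields against the Fabric API reference):

    apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCAttachment\nmetadata:\n  name: vpc-1-default--server-1--mclag--s5248-01--s5248-02\n  namespace: default\nspec:\n  connection: server-1--mclag--s5248-01--s5248-02 # Name of the server Connection\n  subnet: vpc-1/default # VPC subnet in the form <vpc-name>/<subnet-name>\n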

    "},{"location":"user-guide/harvester/#configure-harvester","title":"Configure Harvester","text":""},{"location":"user-guide/harvester/#add-a-cluster-network","title":"Add a Cluster Network","text":"

    From the \"Cluster Networks/Configs\" side menu, create a new Cluster Network.

    Here is a cleaned-up version of what the CRD looks like:

    apiVersion: network.harvesterhci.io/v1beta1\nkind: ClusterNetwork\nmetadata:\n  name: testnet\n
    "},{"location":"user-guide/harvester/#add-a-network-config","title":"Add a Network Config","text":"

    Click \"Create Network Config\". Add your connections and select the bonding type.

    The resulting CRD (cleaned up) looks like the following:

    apiVersion: network.harvesterhci.io/v1beta1\nkind: VlanConfig\nmetadata:\n  name: testconfig\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\nspec:\n  clusterNetwork: testnet\n  uplink:\n    bondOptions:\n      miimon: 100\n      mode: 802.3ad\n    linkAttributes:\n      txQLen: -1\n    nics:\n      - enp5s0f0\n      - enp3s0f1\n
    "},{"location":"user-guide/harvester/#add-vlan-based-vm-networks","title":"Add VLAN based VM Networks","text":"

    Browse over to \"VM Networks\" and add one network for each VLAN you want to support. Assign them to the cluster network.

    Here is what the CRDs will look like for both VLANs:

    apiVersion: k8s.cni.cncf.io/v1\nkind: NetworkAttachmentDefinition\nmetadata:\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\n    network.harvesterhci.io/ready: 'true'\n    network.harvesterhci.io/type: L2VlanNetwork\n    network.harvesterhci.io/vlan-id: '1001'\n  name: testnet1001\n  namespace: default\nspec:\n  config: >-\n    {\"cniVersion\":\"0.3.1\",\"name\":\"testnet1001\",\"type\":\"bridge\",\"bridge\":\"testnet-br\",\"promiscMode\":true,\"vlan\":1001,\"ipam\":{}}\n
    apiVersion: k8s.cni.cncf.io/v1\nkind: NetworkAttachmentDefinition\nmetadata:\n  name: testnet1000\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\n    network.harvesterhci.io/ready: 'true'\n    network.harvesterhci.io/type: L2VlanNetwork\n    network.harvesterhci.io/vlan-id: '1000'\n    #  key: string\n  namespace: default\nspec:\n  config: >-\n    {\"cniVersion\":\"0.3.1\",\"name\":\"testnet1000\",\"type\":\"bridge\",\"bridge\":\"testnet-br\",\"promiscMode\":true,\"vlan\":1000,\"ipam\":{}}\n
    "},{"location":"user-guide/harvester/#using-the-vpcs","title":"Using the VPCs","text":"

    Now you can choose the new VM Networks when creating a VM in Harvester, and have them created as part of the VPC.

    "},{"location":"user-guide/overview/","title":"Overview","text":"

    This chapter gives an overview of the main features of Hedgehog Fabric and their usage.

    "},{"location":"user-guide/profiles/","title":"Switch Profiles and Port Naming","text":""},{"location":"user-guide/profiles/#switch-profiles","title":"Switch Profiles","text":"

    All supported switches have a SwitchProfile that defines the switch model, supported features, and available ports with supported configurations, such as port groups and speeds as well as port breakouts. SwitchProfiles are available in-cluster, and generated documentation can be found in the Reference section.

    Each switch used in the wiring diagram should have a SwitchProfile referenced in the spec.profile of the Switch object.

    The switch profile defines what features and ports are available on the switch. Based on the ports data in the profile, it's possible to set port speeds (for non-breakout and non-group ports), port group speeds, and port breakout modes in the Switch object in the Fabric API.

    "},{"location":"user-guide/profiles/#port-naming","title":"Port Naming","text":"

    Each switch port is named using one of the following formats:

    Examples of port names:

    "},{"location":"user-guide/profiles/#available-ports","title":"Available Ports","text":"

    Each switch profile defines a set of ports available on the switch. Ports could be divided into the following types.

    "},{"location":"user-guide/profiles/#directly-configurable-ports","title":"Directly configurable ports","text":"

    Non-breakout and non-group ports. They have a reference to a port profile with default and available speeds, and can be configured by setting the speed in the Switch object in the Fabric API:

    .spec:\n  portSpeeds:\n    E1/1: 25G\n
    "},{"location":"user-guide/profiles/#port-groups","title":"Port groups","text":"

    Ports that belong to a port group; non-breakout and not directly configurable. They have a reference to the port group, which in turn references a port profile with default and available speeds. Such a port can't be configured directly; the speed configuration is applied to the whole group in the Switch object in the Fabric API:

    .spec:\n  portGroupSpeeds:\n    \"1\": 10G\n

    This sets the speed of all ports in group 1 to 10G; e.g., if group 1 contains ports E1/1, E1/2, E1/3, and E1/4, all of them will be set to 10G.

    "},{"location":"user-guide/profiles/#breakout-ports","title":"Breakout ports","text":"

    Ports that are breakouts and non-group ports. They have a reference to a port profile with default and available breakout modes, and can be configured by setting the breakout mode in the Switch object in the Fabric API:

    .spec:\n  portBreakouts:\n    E1/55: 4x25G\n

    Configuring a port breakout mode will make \"breakout\" ports available for use in the wiring diagram. The breakout ports are named as E<asic-or-chassis-number>/<port-number>/<breakout>, e.g. E1/55/1, E1/55/2, E1/55/3, E1/55/4 for the example above. Omitting the breakout number is allowed for the first breakout port, e.g. E1/55 is the same as E1/55/1. The breakout ports are always consecutive numbers independent of the lanes allocation and other implementation details.

    "},{"location":"user-guide/shrink-expand/","title":"Fabric Shrink/Expand","text":"

    This section provides a brief overview of how to add or remove switches within the fabric using Hedgehog Fabric API, and how to manage connections between them.

    Manipulating API objects is done with the assumption that target devices are correctly cabled and connected.

    This article uses terms that can be found in the Hedgehog Concepts, the User Guide documentation, and the Fabric API reference.

    "},{"location":"user-guide/shrink-expand/#add-a-switch-to-the-existing-fabric","title":"Add a switch to the existing fabric","text":"

    In order to be added to the Hedgehog Fabric, a switch needs a corresponding Switch object. An example of how to define this object is available in the User Guide.

    Note

    If the Switch will be used in ESLAG or MCLAG groups, the appropriate redundancy groups should already exist and should be specified in the Switch object before it is created.
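
    For illustration, a minimal Switch object sketch with a redundancy group specified (the switch name, role, and group name here are hypothetical):

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: Switch\nmetadata:\n  name: leaf-03\n  namespace: default\nspec:\n  role: server-leaf\n  description: leaf-03\n  redundancy:\n    group: mclag-1 # must reference an existing redundancy group\n    type: mclag\n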

    After the Switch object has been created, you can define and create the dedicated device Connections. The types of connections may differ based on the Switch role given to the device. For more details, refer to the Connections section.

    Note

    Switch devices should be booted in ONIE installation mode to install SONiC OS and configure the Fabric Agent.

    Ensure the management port of the switch is connected to the fabric management network.

    "},{"location":"user-guide/shrink-expand/#remove-a-switch-from-the-existing-fabric","title":"Remove a switch from the existing fabric","text":"

    Before you decommission a switch from the Hedgehog Fabric, several preparation steps are necessary.

    Warning

    Currently, the wiring diagram used for the initial deployment is saved in /var/lib/rancher/k3s/server/manifests/hh-wiring.yaml on the Control node, and Fabric will sustain the objects from the original wiring diagram. To remove any object, first remove its dedicated API objects from this file. It is recommended to reapply hh-wiring.yaml after changing its contents.
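
    As a sketch, removing a switch from the saved wiring on the Control node could look like this (the editor choice is arbitrary):

    # remove the Switch, its Connections and other related objects from the saved wiring\nsudo vi /var/lib/rancher/k3s/server/manifests/hh-wiring.yaml\n# reapply the updated wiring diagram\nkubectl apply -f /var/lib/rancher/k3s/server/manifests/hh-wiring.yaml\n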

    "},{"location":"user-guide/vpcs/","title":"VPCs and Namespaces","text":""},{"location":"user-guide/vpcs/#vpc","title":"VPC","text":"

    A Virtual Private Cloud (VPC) is similar to a public cloud VPC. It provides an isolated private network with support for multiple subnets, each with user-defined VLANs and optional DHCP services.

    apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPC\nmetadata:\n  name: vpc-1\n  namespace: default\nspec:\n  ipv4Namespace: default # Limits which subnets can the VPC use to guarantee non-overlapping IPv4 ranges\n  vlanNamespace: default # Limits which Vlan Ids can the VPC use to guarantee non-overlapping VLANs\n\n  defaultIsolated: true # Sets default behavior for the current VPC subnets to be isolated\n  defaultRestricted: true # Sets default behavior for the current VPC subnets to be restricted\n\n  subnets:\n    default: # Each subnet is named, \"default\" subnet isn't required, but actively used by CLI\n      dhcp:\n        enable: true # On-demand DHCP server\n        range: # Optionally, start/end range could be specified, otherwise all available IPs are used\n          start: 10.10.1.10\n          end: 10.10.1.99\n        options: # Optional, additional DHCP options to enable for DHCP server, only available when enable is true\n          pxeURL: tftp://10.10.10.99/bootfilename # PXEURL (optional) to identify the PXE server to use to boot hosts; HTTP query strings are not supported\n          dnsServers: # (optional) configure DNS servers\n            - 1.1.1.1\n          timeServers: # (optional) configure Time (NTP) Servers\n            - 1.1.1.1\n          interfaceMTU: 1500 # (optional) configure the MTU (default is 9036); doesn't affect the actual MTU of the switch interfaces\n      subnet: 10.10.1.0/24 # User-defined subnet from ipv4 namespace\n      gateway: 10.10.1.1 # User-defined gateway (optional, default is .1)\n      vlan: 1001 # User-defined VLAN from VLAN namespace\n      isolated: true # Makes subnet isolated from other subnets within the VPC (doesn't affect VPC peering)\n      restricted: true # Causes all hosts in the subnet to be isolated from each other\n\n    thrird-party-dhcp: # Another subnet\n      dhcp:\n        relay: 10.99.0.100/24 # Use third-party DHCP server (DHCP relay configuration), access to it could be enabled using 
StaticExternal connection\n      subnet: \"10.10.2.0/24\"\n      vlan: 1002\n\n    another-subnet: # Minimal configuration is just a name, subnet and VLAN\n      subnet: 10.10.100.0/24\n      vlan: 1100\n\n  permit: # Defines which subnets of the current VPC can communicate to each other, applied on top of subnets \"isolated\" flag (doesn't affect VPC peering)\n    - [subnet-1, subnet-2, subnet-3] # 1, 2 and 3 subnets can communicate to each other\n    - [subnet-4, subnet-5] # Possible to define multiple lists\n\n  staticRoutes: # Optional, static routes to be added to the VPC\n    - prefix: 10.100.0.0/24 # Destination prefix\n      nextHops: # Next hop IP addresses\n        - 10.200.0.0\n
    "},{"location":"user-guide/vpcs/#isolated-and-restricted-subnets-permit-lists","title":"Isolated and restricted subnets, permit lists","text":"

    Subnets can be isolated and restricted, with the ability to define permit lists to allow communication between specific isolated subnets. The permit list is applied on top of the isolated flag and doesn't affect VPC peering.

    An isolated subnet has no connectivity with other subnets within the VPC, though specific connectivity can still be allowed via permit lists.

    A restricted subnet is one in which all hosts within the subnet are isolated from each other.

    A permit list is a list of sets: each set names the subnets that are allowed to communicate with each other.
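
    As a focused sketch (the subnet names are hypothetical): app and db below are isolated from other subnets but re-allowed to communicate with each other by the permit list, while hosts inside db are additionally isolated from each other:

    .spec:\n  subnets:\n    app:\n      subnet: 10.10.1.0/24\n      vlan: 1001\n      isolated: true\n    db:\n      subnet: 10.10.2.0/24\n      vlan: 1002\n      isolated: true\n      restricted: true # hosts within db can't reach each other\n    dmz:\n      subnet: 10.10.3.0/24\n      vlan: 1003\n      isolated: true # not in any permit list, stays fully isolated\n  permit:\n    - [app, db] # app and db can communicate with each other\n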

    "},{"location":"user-guide/vpcs/#third-party-dhcp-server-configuration","title":"Third-party DHCP server configuration","text":"

    If you use a third-party DHCP server, configured via spec.subnets.<subnet>.dhcp.relay, additional information is added to the DHCP packet forwarded to the DHCP server so that the originating VPC and subnet can be identified. This information is carried under RelayAgentInfo (option 82) in the DHCP packet, where the relay sets two suboptions that together identify the VPC and the subnet.

    "},{"location":"user-guide/vpcs/#vpcattachment","title":"VPCAttachment","text":"

    A VPCAttachment represents the assignment of a specific VPC subnet to a Connection object, that is, a binding between exact server port(s) and a VPC. It makes the VPC available on those server port(s) on the subnet's VLAN.

    A VPC can only be attached to a switch that is part of the VLAN namespace used by the VPC.

    apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCAttachment\nmetadata:\n  name: vpc-1-server-1--mclag--s5248-01--s5248-02\n  namespace: default\nspec:\n  connection: server-1--mclag--s5248-01--s5248-02 # Connection name representing the server port(s)\n  subnet: vpc-1/default # VPC subnet name\n  nativeVLAN: true # (Optional) if true, the port will be configured as a native VLAN port (untagged)\n
    "},{"location":"user-guide/vpcs/#vpcpeering","title":"VPCPeering","text":"

    A VPCPeering enables VPC-to-VPC connectivity. There are two types of VPC peering: local, implemented on the switches the VPCs are attached to, and remote, implemented on a dedicated switch group.

    VPC peering is only possible between VPCs attached to the same IPv4 namespace (see IPv4Namespace).

    "},{"location":"user-guide/vpcs/#local-vpc-peering","title":"Local VPC peering","text":"
    apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-2\n  namespace: default\nspec:\n  permit: # Defines a pair of VPCs to peer\n  - vpc-1: {} # Meaning all subnets of two VPCs will be able to communicate with each other\n    vpc-2: {} # See \"Subnet filtering\" for more advanced configuration\n
    "},{"location":"user-guide/vpcs/#remote-vpc-peering","title":"Remote VPC peering","text":"
    apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-2\n  namespace: default\nspec:\n  permit:\n  - vpc-1: {}\n    vpc-2: {}\n  remote: border # Indicates a switch group to implement the peering on\n
    "},{"location":"user-guide/vpcs/#subnet-filtering","title":"Subnet filtering","text":"

    It's possible to specify which subnets of the peering VPCs can communicate with each other using the permit field.

    apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-2\n  namespace: default\nspec:\n  permit: # subnet-1 and subnet-2 of vpc-1 can communicate with subnet-3 of vpc-2, and subnet-4 of vpc-1 can communicate with subnet-5 and subnet-6 of vpc-2\n  - vpc-1:\n      subnets: [subnet-1, subnet-2]\n    vpc-2:\n      subnets: [subnet-3]\n  - vpc-1:\n      subnets: [subnet-4]\n    vpc-2:\n      subnets: [subnet-5, subnet-6]\n
    "},{"location":"user-guide/vpcs/#ipv4namespace","title":"IPv4Namespace","text":"

    An IPv4Namespace defines a set of (non-overlapping) IPv4 address ranges available for use by VPC subnets. Each VPC belongs to a specific IPv4 namespace. Therefore, its subnet prefixes must be from that IPv4 namespace.

    apiVersion: vpc.githedgehog.com/v1beta1\nkind: IPv4Namespace\nmetadata:\n  name: default\n  namespace: default\nspec:\n  subnets: # List of prefixes that VPCs can pick their subnets from\n  - 10.10.0.0/16\n
    "},{"location":"user-guide/vpcs/#vlannamespace","title":"VLANNamespace","text":"

    A VLANNamespace defines a set of VLAN ranges available for attaching servers to switches. Each switch can belong to one or more disjoint VLANNamespaces.

    apiVersion: wiring.githedgehog.com/v1beta1\nkind: VLANNamespace\nmetadata:\n  name: default\n  namespace: default\nspec:\n  ranges: # List of VLAN ranges that VPCs can pick their subnet VLANs from\n  - from: 1000\n    to: 2999\n
    "},{"location":"vlab/demo/","title":"Demo on VLAB","text":""},{"location":"vlab/demo/#goals","title":"Goals","text":"

    The goal of this demo is to show how to use VPCs, attach them, peer them, and test connectivity between the servers. Examples are based on the default VLAB topology.

    You can find instructions on how to set up VLAB in the Overview and Running VLAB sections.

    "},{"location":"vlab/demo/#default-topology","title":"Default topology","text":"

    The default topology is Spine-Leaf with 2 spines, 2 MCLAG leaves, 2 ESLAG leaves, and 1 non-MCLAG leaf. Optionally, you can run the Collapsed Core topology, which consists of only 2 switches, using the flag --fabric-mode collapsed-core (or -m collapsed-core).

    For more details on customizing topologies see the Running VLAB section.

    In the default topology, the following Control Node and switch VMs are created. The Control Node is connected to every switch; these links are omitted for clarity:

    graph TD\n    S1([Spine 1])\n    S2([Spine 2])\n\n    L1([MCLAG Leaf 1])\n    L2([MCLAG Leaf 2])\n    L3([ESLAG Leaf 3])\n    L4([ESLAG Leaf 4])\n    L5([Leaf 5])\n\n\n    L1 & L2 & L5 & L3 & L4 --> S1 & S2

    The following test servers are also created; as above, Control Node connections are omitted:

    graph TD\n    S1([Spine 1])\n    S2([Spine 2])\n    L1([MCLAG Leaf 1])\n    L2([MCLAG Leaf 2])\n    L3([ESLAG Leaf 3])\n    L4([ESLAG Leaf 4])\n    L5([Leaf 5])\n\n    TS1[Server 1]\n    TS2[Server 2]\n    TS3[Server 3]\n    TS4[Server 4]\n    TS5[Server 5]\n    TS6[Server 6]\n    TS7[Server 7]\n    TS8[Server 8]\n    TS9[Server 9]\n    TS10[Server 10]\n\n    subgraph MCLAG\n    L1\n    L2\n    end\n    TS3 --> L1\n    TS1 --> L1\n    TS1 --> L2\n\n    TS2 --> L1\n    TS2 --> L2\n\n    TS4 --> L2\n\n    subgraph ESLAG\n    L3\n    L4\n    end\n\n    TS7 --> L3\n    TS5 --> L3\n    TS5 --> L4\n    TS6 --> L3\n    TS6 --> L4\n\n    TS8 --> L4\n    TS9 --> L5\n    TS10 --> L5\n\n    L1 & L2 & L3 & L4 & L5 <----> S1 & S2
    "},{"location":"vlab/demo/#utility-based-vpc-creation","title":"Utility based VPC creation","text":""},{"location":"vlab/demo/#setup-vpcs","title":"Setup VPCs","text":"

    hhfab vlab includes a utility to create VPCs in VLAB, available as the sub-command hhfab vlab setup-vpcs:

    NAME:\n   hhfab vlab setup-vpcs - setup VPCs and VPCAttachments for all servers and configure networking on them\n\nUSAGE:\n   hhfab vlab setup-vpcs [command options]\n\nOPTIONS:\n   --dns-servers value, --dns value [ --dns-servers value, --dns value ]    DNS servers for VPCs advertised by DHCP\n   --force-clenup, -f                                                       start with removing all existing VPCs and VPCAttachments (default: false)\n   --help, -h                                                               show help\n   --interface-mtu value, --mtu value                                       interface MTU for VPCs advertised by DHCP (default: 0)\n   --ipns value                                                             IPv4 namespace for VPCs (default: \"default\")\n   --name value, -n value                                                   name of the VM or HW to access\n   --servers-per-subnet value, --servers value                              number of servers per subnet (default: 1)\n   --subnets-per-vpc value, --subnets value                                 number of subnets per VPC (default: 1)\n   --time-servers value, --ntp value [ --time-servers value, --ntp value ]  Time servers for VPCs advertised by DHCP\n   --vlanns value                                                           VLAN namespace for VPCs (default: \"default\")\n   --wait-switches-ready, --wait                                            wait for switches to be ready before and after configuring VPCs and VPCAttachments (default: true)\n\n   Global options:\n\n   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]\n   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: \"/home/ubuntu/.hhfab-cache\") [$HHFAB_CACHE_DIR]\n   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]\n   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: 
\"/home/ubuntu\") [$HHFAB_WORK_DIR]\n
    "},{"location":"vlab/demo/#setup-peering","title":"Setup Peering","text":"

    hhfab vlab includes a utility to create VPC peerings in VLAB, available as the sub-command hhfab vlab setup-peerings:

    NAME:\n   hhfab vlab setup-peerings - setup VPC and External Peerings per requests (remove all if empty)\n\nUSAGE:\n   Setup test scenario with VPC/External Peerings by specifying requests in the format described below.\n\n   Example command:\n\n   $ hhfab vlab setup-peerings 1+2 2+4:r=border 1~as5835 2~as5835:subnets=sub1,sub2:prefixes=0.0.0.0/0,22.22.22.0/24\n\n   Which will produce:\n   1. VPC peering between vpc-01 and vpc-02\n   2. Remote VPC peering between vpc-02 and vpc-04 on switch group named border\n   3. External peering for vpc-01 with External as5835 with default vpc subnet and any routes from external permitted\n   4. External peering for vpc-02 with External as5835 with subnets sub1 and sub2 exposed from vpc-02 and default route\n      from external permitted as well any route that belongs to 22.22.22.0/24\n\n   VPC Peerings:\n\n   1+2 -- VPC peering between vpc-01 and vpc-02\n   demo-1+demo-2 -- VPC peering between demo-1 and demo-2\n   1+2:r -- remote VPC peering between vpc-01 and vpc-02 on switch group if only one switch group is present\n   1+2:r=border -- remote VPC peering between vpc-01 and vpc-02 on switch group named border\n   1+2:remote=border -- same as above\n\n   External Peerings:\n\n   1~as5835 -- external peering for vpc-01 with External as5835\n   1~ -- external peering for vpc-1 with external if only one external is present for ipv4 namespace of vpc-01, allowing\n     default subnet and any route from external\n   1~:subnets=default@prefixes=0.0.0.0/0 -- external peering for vpc-1 with auth external with default vpc subnet and\n     default route from external permitted\n   1~as5835:subnets=default,other:prefixes=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same but with more details\n   1~as5835:s=default,other:p=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same as above\n\nOPTIONS:\n   --help, -h                     show help\n   --name value, -n value         name of the VM or HW to access\n   --wait-switches-ready, --wait  wait for 
switches to be ready before before and after configuring peerings (default: true)\n\n   Global options:\n\n   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]\n   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: \"/home/ubuntu/.hhfab-cache\") [$HHFAB_CACHE_DIR]\n   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]\n   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: \"/home/ubuntu\") [$HHFAB_WORK_DIR]\n
    "},{"location":"vlab/demo/#test-connectivity","title":"Test Connectivity","text":"

    hhfab vlab includes a utility to test connectivity between servers inside VLAB, available as the sub-command hhfab vlab test-connectivity:

    NAME:\n   hhfab vlab test-connectivity - test connectivity between all servers\n\nUSAGE:\n   hhfab vlab test-connectivity [command options]\n\nOPTIONS:\n   --curls value                  number of curl tests to run for each server to test external connectivity (0 to disable) (default: 3)\n   --help, -h                     show help\n   --iperfs value                 seconds of iperf3 test to run between each pair of reachable servers (0 to disable) (default: 10)\n   --iperfs-speed value           minimum speed in Mbits/s for iperf3 test to consider successful (0 to not check speeds) (default: 7000)\n   --name value, -n value         name of the VM or HW to access\n   --pings value                  number of pings to send between each pair of servers (0 to disable) (default: 5)\n   --wait-switches-ready, --wait  wait for switches to be ready before testing connectivity (default: true)\n\n   Global options:\n\n   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]\n   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: \"/home/ubuntu/.hhfab-cache\") [$HHFAB_CACHE_DIR]\n   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]\n   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: \"/home/ubuntu\") [$HHFAB_WORK_DIR]\n
    "},{"location":"vlab/demo/#manual-vpc-creation","title":"Manual VPC creation","text":""},{"location":"vlab/demo/#creating-and-attaching-vpcs","title":"Creating and attaching VPCs","text":"

    You can create and attach VPCs to the VMs using the kubectl fabric vpc command on the Control Node, or from outside the cluster using the kubeconfig. For example, run the following commands to create 2 VPCs, each with a single subnet and a DHCP server with an optional IP address range start defined, and to attach them to some of the test servers:

    core@control-1 ~ $ kubectl get conn | grep server\nserver-01--mclag--leaf-01--leaf-02   mclag          5h13m\nserver-02--mclag--leaf-01--leaf-02   mclag          5h13m\nserver-03--unbundled--leaf-01        unbundled      5h13m\nserver-04--bundled--leaf-02          bundled        5h13m\nserver-05--unbundled--leaf-03        unbundled      5h13m\nserver-06--bundled--leaf-03          bundled        5h13m\n\ncore@control-1 ~ $ kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10\n06:48:46 INF VPC created name=vpc-1\n\ncore@control-1 ~ $ kubectl fabric vpc create --name vpc-2 --subnet 10.0.2.0/24 --vlan 1002 --dhcp --dhcp-start 10.0.2.10\n06:49:04 INF VPC created name=vpc-2\n\ncore@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02\n06:49:24 INF VPCAttachment created name=vpc-1--default--server-01--mclag--leaf-01--leaf-02\n\ncore@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-2/default --connection server-02--mclag--leaf-01--leaf-02\n06:49:34 INF VPCAttachment created name=vpc-2--default--server-02--mclag--leaf-01--leaf-02\n

    The VPC subnet should belong to an IPv4Namespace; the default one in the VLAB is 10.0.0.0/16:

    core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   5h14m\n

    After you have created the VPCs and VPCAttachments, you can check the status of the agents to make sure that the requested configuration was applied to the switches:

    core@control-1 ~ $ kubectl get agents\nNAME       ROLE          DESCR           APPLIED   APPLIEDG   CURRENTG   VERSION\nleaf-01    server-leaf   VS-01 MCLAG 1   2m2s      5          5          v0.23.0\nleaf-02    server-leaf   VS-02 MCLAG 1   2m2s      4          4          v0.23.0\nleaf-03    server-leaf   VS-03           112s      5          5          v0.23.0\nspine-01   spine         VS-04           16m       3          3          v0.23.0\nspine-02   spine         VS-05           18m       4          4          v0.23.0\n

    In this example, the values in the APPLIEDG and CURRENTG columns are equal, which means that the requested configuration has been applied.

    "},{"location":"vlab/demo/#setting-up-networking-on-test-servers","title":"Setting up networking on test servers","text":"

    You can use hhfab vlab ssh on the host to SSH into the test servers and configure networking there. For example, on server-01 (MCLAG-attached to both leaf-01 and leaf-02) we need to configure a bond with a VLAN on top of it, while on server-05 (single-homed, unbundled, attached to leaf-03) we need to configure just a VLAN; both will then get an IP address from the DHCP server. You can use the ip command to configure networking on the servers, or use hhnet, a small helper pre-installed by Fabricator on the test servers.

    For server-01:

    core@server-01 ~ $ hhnet cleanup\ncore@server-01 ~ $ hhnet bond 1001 enp2s1 enp2s2\n10.0.1.10/24\ncore@server-01 ~ $ ip a\n...\n3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:01\n4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:02\n6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff\n    inet6 fe80::45a:e8ff:fe38:3bea/64 scope link\n       valid_lft forever preferred_lft forever\n7: bond0.1001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff\n    inet 10.0.1.10/24 metric 1024 brd 10.0.1.255 scope global dynamic bond0.1001\n       valid_lft 86396sec preferred_lft 86396sec\n    inet6 fe80::45a:e8ff:fe38:3bea/64 scope link\n       valid_lft forever preferred_lft forever\n

    And for server-02:

    core@server-02 ~ $ hhnet cleanup\ncore@server-02 ~ $ hhnet bond 1002 enp2s1 enp2s2\n10.0.2.10/24\ncore@server-02 ~ $ ip a\n...\n3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:01\n4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:02\n8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff\n    inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link\n       valid_lft forever preferred_lft forever\n9: bond0.1002@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff\n    inet 10.0.2.10/24 metric 1024 brd 10.0.2.255 scope global dynamic bond0.1002\n       valid_lft 86185sec preferred_lft 86185sec\n    inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link\n       valid_lft forever preferred_lft forever\n
    "},{"location":"vlab/demo/#testing-connectivity-before-peering","title":"Testing connectivity before peering","text":"

    You can test connectivity between the servers before creating any VPC peering using the ping command:

    core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\nFrom 10.0.1.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2003ms\n
    core@server-02 ~ $ ping 10.0.1.10\nPING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.\nFrom 10.0.2.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.2.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.2.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.1.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms\n
    "},{"location":"vlab/demo/#peering-vpcs-and-testing-connectivity","title":"Peering VPCs and testing connectivity","text":"

    To enable connectivity between the VPCs, peer them using kubectl fabric vpc peer:

    core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2\n07:04:58 INF VPCPeering created name=vpc-1--vpc-2\n

    Make sure to wait until the peering is applied to the switches using the kubectl get agents command. After that, you can test connectivity between the servers again:

    core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\n64 bytes from 10.0.2.10: icmp_seq=1 ttl=62 time=6.25 ms\n64 bytes from 10.0.2.10: icmp_seq=2 ttl=62 time=7.60 ms\n64 bytes from 10.0.2.10: icmp_seq=3 ttl=62 time=8.60 ms\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2004ms\nrtt min/avg/max/mdev = 6.245/7.481/8.601/0.965 ms\n
    core@server-02 ~ $ ping 10.0.1.10\nPING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.\n64 bytes from 10.0.1.10: icmp_seq=1 ttl=62 time=5.44 ms\n64 bytes from 10.0.1.10: icmp_seq=2 ttl=62 time=6.66 ms\n64 bytes from 10.0.1.10: icmp_seq=3 ttl=62 time=4.49 ms\n^C\n--- 10.0.1.10 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2004ms\nrtt min/avg/max/mdev = 4.489/5.529/6.656/0.886 ms\n

    If you delete the VPC peering with kubectl delete applied to the relevant object and wait for the agents to apply the configuration on the switches, you can observe that connectivity is lost again:

    core@control-1 ~ $ kubectl delete vpcpeering/vpc-1--vpc-2\nvpcpeering.vpc.githedgehog.com \"vpc-1--vpc-2\" deleted\n
    core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\nFrom 10.0.1.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms\n

    You may see duplicate packets in the output of the ping command between some of the servers. This is expected behavior, caused by limitations of the VLAB environment.

    core@server-01 ~ $ ping 10.0.5.10\nPING 10.0.5.10 (10.0.5.10) 56(84) bytes of data.\n64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms\n64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms (DUP!)\n64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms\n64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms (DUP!)\n64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.59 ms\n64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.60 ms (DUP!)\n^C\n--- 10.0.5.10 ping statistics ---\n3 packets transmitted, 3 received, +3 duplicates, 0% packet loss, time 2003ms\nrtt min/avg/max/mdev = 6.987/8.720/9.595/1.226 ms\n
    "},{"location":"vlab/demo/#using-vpcs-with-overlapping-subnets","title":"Using VPCs with overlapping subnets","text":"

    First, create a second IPv4Namespace with the same subnet as the default one:

    core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   24m\n\ncore@control-1 ~ $ cat <<EOF > ipns-2.yaml\napiVersion: vpc.githedgehog.com/v1beta1\nkind: IPv4Namespace\nmetadata:\n  name: ipns-2\n  namespace: default\nspec:\n  subnets:\n  - 10.0.0.0/16\nEOF\n\ncore@control-1 ~ $ kubectl apply -f ipns-2.yaml\nipv4namespace.vpc.githedgehog.com/ipns-2 created\n\ncore@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   30m\nipns-2    [\"10.0.0.0/16\"]   8s\n

    Let's assume that vpc-1 already exists and is attached to server-01 (see Creating and attaching VPCs). Now we can create vpc-3 with the same subnet as vpc-1 (but in a different IPv4Namespace) and attach it to server-03:

    core@control-1 ~ $ cat <<EOF > vpc-3.yaml\napiVersion: vpc.githedgehog.com/v1beta1\nkind: VPC\nmetadata:\n  name: vpc-3\n  namespace: default\nspec:\n  ipv4Namespace: ipns-2\n  subnets:\n    default:\n      dhcp:\n        enable: true\n        range:\n          start: 10.0.1.10\n      subnet: 10.0.1.0/24\n      vlan: 2001\n  vlanNamespace: default\nEOF\n\ncore@control-1 ~ $ kubectl apply -f vpc-3.yaml\n
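
    The attachment uses the same kubectl fabric vpc attach command as before, with the connection name of server-03 from the earlier listing:

    core@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-3/default --connection server-03--unbundled--leaf-01\n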

    At that point you can set up networking on server-03 the same way you did for server-01 and server-02 in the previous section. Once you have configured networking, server-01 and server-03 will have IP addresses from the same subnet.

    "},{"location":"vlab/overview/","title":"VLAB Overview","text":"

    It's possible to run Hedgehog Fabric in a fully virtual environment using QEMU/KVM and SONiC Virtual Switch (VS). It's a great way to try out Fabric and learn about its look and feel, API, and capabilities. It's not suitable for any data plane or performance testing, or for production use.

    In the VLAB all switches start as empty VMs with only the ONIE image on them, and they go through the whole discovery, boot and installation process like on real hardware.

    "},{"location":"vlab/overview/#hhfab","title":"HHFAB","text":"

    Hedgehog maintains a utility to install and configure VLAB, called hhfab, aka Fabricator.

    The hhfab CLI provides a special command, vlab, to manage virtual labs. It allows you to run sets of virtual machines that simulate the Fabric infrastructure, including the control node, switches, and test servers, and it automatically runs the installer to get the Fabric up and running.

    You can find more information about getting hhfab in the download section.

    "},{"location":"vlab/overview/#system-requirements","title":"System Requirements","text":"

    Currently, it's only tested on Ubuntu 22.04 LTS, but it should work on any Linux distribution with QEMU/KVM support and fairly up-to-date packages.

    The following packages need to be installed: qemu-kvm and socat. Docker is also required, to log in to the OCI registry.

    By default, the VLAB topology is Spine-Leaf with 2 spines, 2 MCLAG leaves, and 1 non-MCLAG leaf. Optionally, you can run the Collapsed Core topology, which consists of only 2 switches, using the flag --fabric-mode collapsed-core (or -m collapsed-core).

    You can calculate the system requirements based on the resources allocated to the VMs using the following table:

    | Device       | vCPU | RAM   | Disk  |
    |--------------|------|-------|-------|
    | Control Node | 6    | 6GB   | 100GB |
    | Test Server  | 2    | 768MB | 10GB  |
    | Switch       | 4    | 5GB   | 50GB  |

    These numbers can be used to estimate the approximate total requirements for the default topologies.

    Usually, none of the VMs reach 100% utilization of the allocated resources, but as a rule of thumb you should make sure that the host has at least the total allocated RAM and disk space available for all VMs.

    NVMe SSD for VM disks is highly recommended.

    "},{"location":"vlab/overview/#installing-prerequisites","title":"Installing Prerequisites","text":"

    To run VLAB, your system needs docker, qemu, kvm, and hhfab. On Ubuntu 22.04 LTS, you can install all the required packages using the following commands:

    "},{"location":"vlab/overview/#docker","title":"Docker","text":"
    curl -fsSL https://get.docker.com -o install-docker.sh\nsudo sh install-docker.sh\nsudo usermod -aG docker $USER\nnewgrp docker\n
    "},{"location":"vlab/overview/#qemukvm","title":"Qemu/KVM","text":"
    sudo apt install -y qemu-kvm swtpm-tools tpm2-tools socat
    sudo usermod -aG kvm $USER
    newgrp kvm
    kvm-ok

    Good output of the kvm-ok command should look like this:

    ubuntu@docs:~$ kvm-ok
    INFO: /dev/kvm exists
    KVM acceleration can be used
    "},{"location":"vlab/overview/#oras","title":"Oras","text":"

    For convenience, Hedgehog provides a script to install oras:

    curl -fsSL https://i.hhdev.io/oras | bash
    "},{"location":"vlab/overview/#hhfab_1","title":"Hhfab","text":"

    You need a GitHub access token to download hhfab; to obtain one, please submit a ticket using the Hedgehog Support Portal. Once in possession of the credentials, use the provided username and token to log into the GitHub container registry:

    docker login ghcr.io --username provided_username --password provided_token

    Once logged in, download and run the script:

    curl -fsSL https://i.hhdev.io/hhfab | bash
    "},{"location":"vlab/overview/#next-steps","title":"Next steps","text":""},{"location":"vlab/running/","title":"Running VLAB","text":"

    Make sure to follow the prerequisites and check system requirements in the VLAB Overview section before running VLAB.

    "},{"location":"vlab/running/#initialize-vlab","title":"Initialize VLAB","text":"

    First, initialize Fabricator by running hhfab init --dev. This command creates the fab.yaml file, which is the main configuration file for the fabric. This command supports several customization options that are listed in the output of hhfab init --help.

    ubuntu@docs:~$ hhfab init --dev
    11:26:52 INF Hedgehog Fabricator version=v0.30.0
    11:26:52 INF Generated initial config
    11:26:52 INF Adjust configs (incl. credentials, modes, subnets, etc.) file=fab.yaml
    11:26:52 INF Include wiring files (.yaml) or adjust imported ones dir=include
    "},{"location":"vlab/running/#vlab-topology","title":"VLAB Topology","text":"

    By default, hhfab init creates 2 spines, 2 MCLAG leaves, and 1 non-MCLAG leaf, with 2 fabric connections (between each spine and leaf), 2 MCLAG peer links, and 2 MCLAG session links, as well as 2 loopbacks per leaf for implementing the VPC loopback workaround. To generate the preceding topology, run hhfab vlab gen. You can also configure the number of spines, leaves, connections, and so on. For example, the flags --spines-count and --mclag-leafs-count allow you to set the number of spines and MCLAG leaves, respectively. For the complete list of options, run hhfab vlab gen -h.

    ubuntu@docs:~$ hhfab vlab gen
    21:27:16 INF Hedgehog Fabricator version=v0.30.0
    21:27:16 INF Building VLAB wiring diagram fabricMode=spine-leaf
    21:27:16 INF >>> spinesCount=2 fabricLinksCount=2
    21:27:16 INF >>> eslagLeafGroups=2
    21:27:16 INF >>> mclagLeafsCount=2 mclagSessionLinks=2 mclagPeerLinks=2
    21:27:16 INF >>> orphanLeafsCount=1 vpcLoopbacks=2
    21:27:16 INF >>> mclagServers=2 eslagServers=2 unbundledServers=1 bundledServers=1
    21:27:16 INF Generated wiring file name=vlab.generated.yaml
    You can jump to the instructions to start VLAB, or see the next section for customizing the topology.

    "},{"location":"vlab/running/#collapsed-core","title":"Collapsed Core","text":"

    If a Collapsed Core topology is desired, after the hhfab init --dev step, edit the resulting fab.yaml file, change mode: spine-leaf to mode: collapsed-core, and re-run hhfab vlab gen:

    ubuntu@docs:~$ hhfab vlab gen
    11:39:02 INF Hedgehog Fabricator version=v0.30.0
    11:39:02 INF Building VLAB wiring diagram fabricMode=collapsed-core
    11:39:02 INF >>> mclagLeafsCount=2 mclagSessionLinks=2 mclagPeerLinks=2
    11:39:02 INF >>> orphanLeafsCount=0 vpcLoopbacks=2
    11:39:02 INF >>> mclagServers=2 eslagServers=2 unbundledServers=1 bundledServers=1
    11:39:02 INF Generated wiring file name=vlab.generated.yaml
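    The fab.yaml edit can also be scripted. A minimal sketch, assuming the file contains the default mode: spine-leaf value verbatim (inspect the file first, since the exact formatting may differ):

    ```shell
    # Flip the fabric mode in fab.yaml from spine-leaf to collapsed-core,
    # then regenerate the wiring diagram (assumes fab.yaml is in the
    # current directory, as produced by hhfab init)
    sed -i 's/mode: spine-leaf/mode: collapsed-core/' fab.yaml
    hhfab vlab gen
    ```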
    "},{"location":"vlab/running/#custom-spine-leaf","title":"Custom Spine Leaf","text":"

    Alternatively, you can generate a custom topology, for example with 2 spines, 4 MCLAG leaves, and 2 non-MCLAG leaves, using flags:

    ubuntu@docs:~$ hhfab vlab gen --mclag-leafs-count 4 --orphan-leafs-count 2
    11:41:06 INF Hedgehog Fabricator version=v0.30.0
    11:41:06 INF Building VLAB wiring diagram fabricMode=spine-leaf
    11:41:06 INF >>> spinesCount=2 fabricLinksCount=2
    11:41:06 INF >>> eslagLeafGroups=""
    11:41:06 INF >>> mclagLeafsCount=4 mclagSessionLinks=2 mclagPeerLinks=2
    11:41:06 INF >>> orphanLeafsCount=2 vpcLoopbacks=2
    11:41:06 INF >>> mclagServers=2 eslagServers=2 unbundledServers=1 bundledServers=1
    11:41:06 INF Generated wiring file name=vlab.generated.yaml

    Additionally, you can pass extra Fabric configuration items using flags on init command or by passing a configuration file. For more information, refer to the Fabric Configuration section.

    Once you have initialized the VLAB, download the artifacts and build the installer using hhfab build. This command automatically downloads all required artifacts from the OCI registry and builds the installer and all other prerequisites for running the VLAB.

    "},{"location":"vlab/running/#build-the-installer-and-start-vlab","title":"Build the Installer and Start VLAB","text":"

    To build and start the virtual machines, use hhfab vlab up. For successive runs, use the --kill-stale flag to ensure that any virtual machines from a previous run are gone. hhfab vlab up runs in the foreground and does not return, which allows you to stop all VLAB VMs by simply pressing Ctrl + C.

    ubuntu@docs:~$ hhfab vlab up
    11:48:22 INF Hedgehog Fabricator version=v0.30.0
    11:48:22 INF Wiring hydrated successfully mode=if-not-present
    11:48:22 INF VLAB config created file=vlab/config.yaml
    11:48:22 INF Downloader cache=/home/ubuntu/.hhfab-cache/v1 repo=ghcr.io prefix=githedgehog
    11:48:22 INF Building installer control=control-1
    11:48:22 INF Adding recipe bin to installer control=control-1
    11:48:24 INF Adding k3s and tools to installer control=control-1
    11:48:25 INF Adding zot to installer control=control-1
    11:48:25 INF Adding cert-manager to installer control=control-1
    11:48:26 INF Adding config and included wiring to installer control=control-1
    11:48:26 INF Adding airgap artifacts to installer control=control-1
    11:48:36 INF Archiving installer path=/home/ubuntu/result/control-1-install.tgz control=control-1
    11:48:45 INF Creating ignition path=/home/ubuntu/result/control-1-install.ign control=control-1
    11:48:46 INF Taps and bridge are ready count=8
    11:48:46 INF Downloader cache=/home/ubuntu/.hhfab-cache/v1 repo=ghcr.io prefix=githedgehog
    11:48:46 INF Preparing new vm=control-1 type=control
    11:48:51 INF Preparing new vm=server-01 type=server
    11:48:52 INF Preparing new vm=server-02 type=server
    11:48:54 INF Preparing new vm=server-03 type=server
    11:48:55 INF Preparing new vm=server-04 type=server
    11:48:57 INF Preparing new vm=server-05 type=server
    11:48:58 INF Preparing new vm=server-06 type=server
    11:49:00 INF Preparing new vm=server-07 type=server
    11:49:01 INF Preparing new vm=server-08 type=server
    11:49:03 INF Preparing new vm=server-09 type=server
    11:49:04 INF Preparing new vm=server-10 type=server
    11:49:05 INF Preparing new vm=leaf-01 type=switch
    11:49:06 INF Preparing new vm=leaf-02 type=switch
    11:49:06 INF Preparing new vm=leaf-03 type=switch
    11:49:06 INF Preparing new vm=leaf-04 type=switch
    11:49:06 INF Preparing new vm=leaf-05 type=switch
    11:49:06 INF Preparing new vm=spine-01 type=switch
    11:49:06 INF Preparing new vm=spine-02 type=switch
    11:49:06 INF Starting VMs count=18 cpu="54 vCPUs" ram="49664 MB" disk="550 GB"
    11:49:59 INF Uploading control install vm=control-1 type=control
    11:53:39 INF Running control install vm=control-1 type=control
    11:53:40 INF control-install: 01:53:39 INF Hedgehog Fabricator Recipe version=v0.30.0 vm=control-1
    11:53:40 INF control-install: 01:53:39 INF Running control node installation vm=control-1
    12:00:32 INF control-install: 02:00:31 INF Control node installation complete vm=control-1
    12:00:32 INF Control node is ready vm=control-1 type=control
    12:00:32 INF All VMs are ready
    The message INF Control node is ready vm=control-1 type=control in the installer's output means that the installer has finished. After this line has been displayed, you can get into the control node and other VMs to watch the Fabric come up and the switches get provisioned. See Accessing the VLAB.

    "},{"location":"vlab/running/#enable-outside-connectivity-from-vlab-vms","title":"Enable Outside connectivity from VLAB VMs","text":"

    By default, all test server VMs are isolated and have no connectivity to the host or the Internet. You can enable connectivity using hhfab vlab up --restrict-servers=false to allow the test servers to access the Internet and the host. When you enable connectivity, the VMs get a default route pointing to the host, which means that, in the case of VPC peering, you need to configure the test server VMs to route traffic through the VPC attachment instead — either as a default route or just for specific subnets.
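    For instance, to send VPC traffic via the VPC attachment instead of the host-pointing default route, you might add a more specific route on the test server. A sketch with illustrative values only — the interface name enp2s1 is hypothetical, and 10.0.0.0/16 is the default IPv4 namespace subnet:

    ```shell
    # Hypothetical: route the whole default IPv4 namespace (10.0.0.0/16)
    # through the VPC attachment interface (enp2s1 is a placeholder;
    # check the actual interface name on your test server first)
    sudo ip route add 10.0.0.0/16 dev enp2s1
    ```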

    "},{"location":"vlab/running/#accessing-the-vlab","title":"Accessing the VLAB","text":"

    The hhfab vlab command provides ssh and serial subcommands to access the VMs. You can use ssh to get into the control node and test servers after the VMs have started. You can use serial to get into the switch VMs while they are provisioning and installing the software. After the switches are installed, you can use ssh to get into them as well.

    You can select the device you want to access interactively, or pass its name using the --vm flag.

    ubuntu@docs:~$ hhfab vlab ssh
    Use the arrow keys to navigate: ↓ ↑ → ←  and / toggles search
    SSH to VM:
      🦔 control-1
      server-01
      server-02
      server-03
      server-04
      server-05
      server-06
      leaf-01
      leaf-02
      leaf-03
      spine-01
      spine-02

    ----------- VM Details ------------
    ID:             0
    Name:           control-1
    Ready:          true
    Basedir:        .hhfab/vlab-vms/control-1
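    Non-interactively, the target can be passed directly with the --vm flag (VM names here are taken from the default topology):

    ```shell
    # SSH into the control node, skipping the interactive picker
    hhfab vlab ssh --vm control-1

    # Attach to a switch's serial console while it is still provisioning
    hhfab vlab serial --vm leaf-01
    ```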
    "},{"location":"vlab/running/#default-credentials","title":"Default credentials","text":"

    Fabricator creates default users and keys for logging into the control node and test servers, as well as into the SONiC virtual switches.

    The default user with password-less sudo for the control node and test servers is core with password HHFab.Admin!. The admin user with full access and password-less sudo for the switches is admin with password HHFab.Admin!. The read-only, non-sudo user with access to the switch CLI is op with password HHFab.Op!.

    "},{"location":"vlab/running/#use-kubectl-to-interact-with-the-fabric","title":"Use Kubectl to Interact with the Fabric","text":"

    On the control node you have access to kubectl, the Fabric CLI, and k9s to manage the Fabric. To view information about the switches, run kubectl get agents -o wide. After the control node is available, it usually takes about 10-15 minutes for the switches to get installed.

    After the switches are provisioned, the command returns something like this:

    core@control-1 ~ $ kubectl get agents -o wide
    NAME       ROLE          DESCR           HWSKU                      ASIC   HEARTBEAT   APPLIED   APPLIEDG   CURRENTG   VERSION   SOFTWARE                ATTEMPT   ATTEMPTG   AGE
    leaf-01    server-leaf   VS-01 MCLAG 1   DellEMC-S5248f-P-25G-DPB   vs     30s         5m5s      4          4          v0.23.0   4.1.1-Enterprise_Base   5m5s      4          10m
    leaf-02    server-leaf   VS-02 MCLAG 1   DellEMC-S5248f-P-25G-DPB   vs     27s         3m30s     3          3          v0.23.0   4.1.1-Enterprise_Base   3m30s     3          10m
    leaf-03    server-leaf   VS-03           DellEMC-S5248f-P-25G-DPB   vs     18s         3m52s     4          4          v0.23.0   4.1.1-Enterprise_Base   3m52s     4          10m
    spine-01   spine         VS-04           DellEMC-S5248f-P-25G-DPB   vs     26s         3m59s     3          3          v0.23.0   4.1.1-Enterprise_Base   3m59s     3          10m
    spine-02   spine         VS-05           DellEMC-S5248f-P-25G-DPB   vs     19s         3m53s     4          4          v0.23.0   4.1.1-Enterprise_Base   3m53s     4          10m

    The Heartbeat column shows how long ago the switch last sent a heartbeat to the control node. The Applied column shows how long ago the switch applied its configuration. AppliedG shows the generation of the configuration that has been applied. CurrentG shows the generation of the configuration the switch is supposed to run. Different values for AppliedG and CurrentG mean that the switch is still in the process of applying the configuration.

    At that point, the Fabric is ready and you can use kubectl and kubectl fabric to manage it. You can find more about managing the Fabric in the Running Demo and User Guide sections.
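    Since switch installation takes 10-15 minutes, it can be convenient to watch the agent view update live using kubectl's standard -w (watch) flag rather than re-running the command:

    ```shell
    # Stream agent status updates until interrupted with Ctrl + C
    kubectl get agents -o wide -w
    ```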

    "},{"location":"vlab/running/#getting-main-fabric-objects","title":"Getting main Fabric objects","text":"

    You can list the main Fabric objects by running kubectl get on the control node. You can find more details about using the Fabric in the User Guide, Fabric API and Fabric CLI sections.

    For example, to get the list of switches, run:

    core@control-1 ~ $ kubectl get switch
    NAME       ROLE          DESCR           GROUPS   LOCATIONUUID                           AGE
    leaf-01    server-leaf   VS-01 MCLAG 1            5e2ae08a-8ba9-599a-ae0f-58c17cbbac67   6h10m
    leaf-02    server-leaf   VS-02 MCLAG 1            5a310b84-153e-5e1c-ae99-dff9bf1bfc91   6h10m
    leaf-03    server-leaf   VS-03                    5f5f4ad5-c300-5ae3-9e47-f7898a087969   6h10m
    spine-01   spine         VS-04                    3e2c4992-a2e4-594b-bbd1-f8b2fd9c13da   6h10m
    spine-02   spine         VS-05                    96fbd4eb-53b5-5a4c-8d6a-bbc27d883030   6h10m

    Similarly, to get the list of servers, run:

    core@control-1 ~ $ kubectl get server
    NAME        TYPE      DESCR                        AGE
    control-1   control   Control node                 6h10m
    server-01             S-01 MCLAG leaf-01 leaf-02   6h10m
    server-02             S-02 MCLAG leaf-01 leaf-02   6h10m
    server-03             S-03 Unbundled leaf-01       6h10m
    server-04             S-04 Bundled leaf-02         6h10m
    server-05             S-05 Unbundled leaf-03       6h10m
    server-06             S-06 Bundled leaf-03         6h10m

    For connections, use:

    core@control-1 ~ $ kubectl get connection
    NAME                                 TYPE           AGE
    leaf-01--mclag-domain--leaf-02       mclag-domain   6h11m
    leaf-01--vpc-loopback                vpc-loopback   6h11m
    leaf-02--vpc-loopback                vpc-loopback   6h11m
    leaf-03--vpc-loopback                vpc-loopback   6h11m
    server-01--mclag--leaf-01--leaf-02   mclag          6h11m
    server-02--mclag--leaf-01--leaf-02   mclag          6h11m
    server-03--unbundled--leaf-01        unbundled      6h11m
    server-04--bundled--leaf-02          bundled        6h11m
    server-05--unbundled--leaf-03        unbundled      6h11m
    server-06--bundled--leaf-03          bundled        6h11m
    spine-01--fabric--leaf-01            fabric         6h11m
    spine-01--fabric--leaf-02            fabric         6h11m
    spine-01--fabric--leaf-03            fabric         6h11m
    spine-02--fabric--leaf-01            fabric         6h11m
    spine-02--fabric--leaf-02            fabric         6h11m
    spine-02--fabric--leaf-03            fabric         6h11m

    For IPv4 and VLAN namespaces, use:

    core@control-1 ~ $ kubectl get ipns
    NAME      SUBNETS           AGE
    default   ["10.0.0.0/16"]   6h12m

    core@control-1 ~ $ kubectl get vlanns
    NAME      AGE
    default   6h12m
    "},{"location":"vlab/running/#reset-vlab","title":"Reset VLAB","text":"

    If VLAB is currently running, press Ctrl + C to stop it. To reset VLAB and start over, run hhfab init -f. The -f flag forces the process to overwrite your existing configuration in fab.yaml.

    "},{"location":"vlab/running/#next-steps","title":"Next steps","text":""}]} \ No newline at end of file diff --git a/dev/sitemap.xml b/dev/sitemap.xml index a1dfbd7..b88724a 100644 --- a/dev/sitemap.xml +++ b/dev/sitemap.xml @@ -2,126 +2,126 @@ https://docs.githedgehog.com/dev/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/architecture/fabric/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/architecture/overview/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/concepts/overview/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/contribute/docs/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/contribute/overview/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/faq/overview/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/getting-started/download/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/install-upgrade/build-wiring/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/install-upgrade/config/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/install-upgrade/install/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/install-upgrade/requirements/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/install-upgrade/supported-devices/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/install-upgrade/upgrade/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/reference/api/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/reference/cli/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/reference/profiles/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/release-notes/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/troubleshooting/overview/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/user-guide/connections/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/user-guide/devices/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/user-guide/external/ - 
2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/user-guide/grafana/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/user-guide/harvester/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/user-guide/overview/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/user-guide/profiles/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/user-guide/shrink-expand/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/user-guide/vpcs/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/vlab/demo/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/vlab/overview/ - 2024-12-18 + 2024-12-19 https://docs.githedgehog.com/dev/vlab/running/ - 2024-12-18 + 2024-12-19 \ No newline at end of file diff --git a/dev/sitemap.xml.gz b/dev/sitemap.xml.gz index 9c97447..772d42b 100644 Binary files a/dev/sitemap.xml.gz and b/dev/sitemap.xml.gz differ