diff --git a/alpha-3/getting-started/download/index.html b/alpha-3/getting-started/download/index.html index 03a90df..6e0d5b0 100755 --- a/alpha-3/getting-started/download/index.html +++ b/alpha-3/getting-started/download/index.html @@ -1226,14 +1226,18 @@

Getting access

Downloading the software

The main entry point for the software is the Hedgehog Fabricator CLI named hhfab. All software is published to the GitHub Package OCI registry, including binaries, container images, helm charts, etc. -The hhfab binary can be downloaded from the GitHub Package using the following command:

-
curl -fsSL https://i.hhdev.io/hhfab | VERSION=alpha-2 bash
+The latest stable hhfab binary can be downloaded from the GitHub Package using the following
+command:

+
curl -fsSL https://i.hhdev.io/hhfab | bash
+
+

Or you can download a specific version using the following command:

+
curl -fsSL https://i.hhdev.io/hhfab | VERSION=alpha-X bash
 

The VERSION environment variable can be used to specify the version of the software to download. If it's not specified, the latest release will be downloaded. You can pick a specific release series (e.g. alpha-2) or a specific release.

It requires ORAS to be installed, which is used to download the binary from the OCI registry and can be installed using the following command:

-
curl -fsSL https://i.hhdev.io/oras | bash
+
curl -fsSL https://i.hhdev.io/oras | bash
 

Currently only Linux x86 is supported for running hhfab.

Next steps

@@ -1249,7 +1253,7 @@

Next steps

Last update: - January 22, 2024 + January 24, 2024
Created: diff --git a/alpha-3/search/search_index.json b/alpha-3/search/search_index.json index 4099f08..98e027b 100755 --- a/alpha-3/search/search_index.json +++ b/alpha-3/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"

Hedgehog Open Network Fabric is an open networking platform that brings the user experience enjoyed by so many in the public cloud to private environments. Without vendor lock-in.

Fabric is built around the concept of VPCs (Virtual Private Clouds), similar to the public clouds, and provides a multi-tenant API to define user intent on network isolation, connectivity, etc., which gets automatically transformed into switch and software appliance configuration.

You can read more about concepts and architecture in the documentation.

You can find out how to download and try Fabric in the self-hosted, fully virtualized lab or on hardware.

"},{"location":"architecture/fabric/","title":"Hedgehog Network Fabric","text":"

The Hedgehog Open Network Fabric is an open source network architecture that provides connectivity between virtual and physical workloads and provides a way to achieve network isolation between different groups of workloads using standard BGP EVPN and VXLAN technology. The fabric provides standard Kubernetes interfaces to manage the elements in the physical network and provides a mechanism to configure virtual networks and define attachments to these virtual networks. The Hedgehog Fabric provides isolation between different groups of workloads by placing them in different virtual networks called VPCs. To achieve this we define different abstractions starting from the physical network, where we define a Connection which describes how a physical server on the network connects to a physical switch on the fabric.

"},{"location":"architecture/fabric/#underlay-network","title":"Underlay Network","text":"

The Hedgehog Fabric currently supports two underlay network topologies.

"},{"location":"architecture/fabric/#collapsed-core","title":"Collapsed Core","text":"

A collapsed core topology is just a pair of switches connected in an MCLAG configuration with no other network elements. All workloads attach to these two switches.

The leaves in this setup are configured to be in an MCLAG pair, and servers can either be connected to both switches as an MCLAG port channel or as orphan ports connected to only one switch. Both leaves peer with external networks using BGP and act as the gateway for workloads attached to them. The configuration of the underlay in the collapsed core is very simple and is ideal for very small deployments.

"},{"location":"architecture/fabric/#spine-leaf","title":"Spine - Leaf","text":"

A spine-leaf topology is a standard Clos network with workloads attaching to leaf switches and spines providing connectivity between different leaves.

This kind of topology is useful for bigger deployments and provides all the advantages of a typical Clos network. The underlay network is established using eBGP, where each leaf has a separate ASN and peers with all spines in the network. We use RFC 7938 as the reference for establishing the underlay network.

"},{"location":"architecture/fabric/#overlay-network","title":"Overlay Network","text":"

The overlay network runs on top of the underlay network to create a virtual network. The overlay network isolates control and data plane traffic between different virtual networks and the underlay network. Virtualization is achieved in the Hedgehog Fabric by encapsulating workload traffic over VXLAN tunnels that are sourced and terminated on the leaf switches in the network. The fabric uses BGP EVPN/VXLAN to enable creation and management of virtual networks on top of the underlay. The fabric supports multiple virtual networks over the same underlay network to support multi-tenancy. Each virtual network in the Hedgehog Fabric is identified by a VPC. In the following sections we will dive a bit deeper into a high-level overview of how VPCs are implemented in the Hedgehog Fabric and their associated objects.

"},{"location":"architecture/fabric/#vpc","title":"VPC","text":"

We know what a VPC is and how to attach workloads to a specific VPC. Let us now take a look at how this is actually implemented on the network to provide the view of a private network.

  • Each VPC is modeled as a VRF on each switch where there are VPC attachments defined for this VPC. The VRF is allocated its own VNI. The VRF is local to each switch and the VNI is global for the entire fabric. By mapping the VRF to a VNI and configuring an EVPN instance in each VRF, we establish a shared L3VNI across the entire fabric. All VRFs participating in this VNI can freely communicate with each other without a need for a policy. A VLAN is allocated for each VRF which functions as an IRB VLAN for the VRF.
  • The VRF created on each switch corresponding to a VPC configures a BGP instance with EVPN to advertise its locally attached subnets and import routes from its peered VPCs. The BGP instance in the tenant VRFs does not establish neighbor relationships and is purely used to advertise locally attached routes into the VPC (all VRFs with the same L3VNI) across leaves in the network.
  • A VPC can have multiple subnets. Each subnet in the VPC is modeled as a VLAN on the switch. The VLAN is only locally significant and a given subnet might have different VLANs on different leaves in the network. We assign a globally significant VNI for each subnet. This VNI is used to extend the subnet across different leaves in the network and provides a view of a single stretched L2 domain if the applications need it.
  • The Hedgehog Fabric has a built-in DHCP server which will automatically assign IP addresses to each workload depending on the VPC it belongs to. This is achieved by configuring a DHCP relay on each of the server-facing VLANs. The DHCP server is accessible through the underlay network and is shared by all VPCs in the fabric. The built-in DHCP server is capable of identifying the source VPC of the request and assigning IP addresses from a pool allocated to the VPC at creation.
  • By default, a VPC cannot communicate with anything outside the VPC; specific peering rules need to be defined to allow communication with external networks or other VPCs.
"},{"location":"architecture/fabric/#vpc-peering","title":"VPC Peering","text":"

To enable communication between two different VPCs, we need to configure a VPC peering policy. The Hedgehog Fabric supports two different peering modes.

  • Local Peering - A local peering directly imports routes from the other VPC locally. This is achieved by a simple import route from the peer VPC. In case there are no locally attached workloads for the peer VPC, the fabric automatically creates a stub VPC for peering and imports routes from it. This allows VPCs to peer with each other without the need for a dedicated peering leaf. If a local peering is done for a pair of VPCs which both have locally attached workloads, the fabric automatically allocates a pair of ports on the switch to route traffic between these VRFs using static routes. This is required because of limitations in the underlying platform. The net result is that the bandwidth between these two VPCs is limited by the bandwidth of the loopback interfaces allocated on the switch.
  • Remote Peering - Remote peering is implemented using a dedicated peering switch/switches used as a rendezvous point for the two VPCs in the fabric. The set of switches to be used for peering is determined by configuration in the peering policy. When a remote peering policy is applied for a pair of VPCs, the VRFs corresponding to these VPCs on the peering switch advertise default routes into their specific VRFs identified by the L3VNI. All traffic that does not belong to the VPC is forwarded to the peering switch, which has routes to the other VPC, and gets forwarded from there. The bandwidth limitation that exists in the local peering solution is solved here, as the bandwidth between the two VPCs is determined by the fabric cross-section bandwidth.
"},{"location":"architecture/overview/","title":"Overview","text":"

Under construction.

"},{"location":"concepts/overview/","title":"Concepts","text":""},{"location":"concepts/overview/#introduction","title":"Introduction","text":"

Hedgehog Open Network Fabric is built on top of Kubernetes and uses the Kubernetes API to manage its resources. This means that all user-facing APIs are Kubernetes Custom Resources (CRDs), so you can use standard Kubernetes tools to manage Fabric resources.

Hedgehog Fabric consists of the following components:

  • Fabricator - special tool that allows installing and configuring Fabric as well as running virtual labs
  • Control Node - one or more Kubernetes nodes in a single cluster running Fabric software
    • Das Boot - set of services providing switch boot and installation
    • Fabric Controller - main control plane component that manages Fabric resources
  • Fabric Kubectl plugin (Fabric CLI) - plugin for kubectl that allows managing Fabric resources in an easy way
  • Fabric Agent - runs on every switch and manages switch configuration
"},{"location":"concepts/overview/#fabric-api","title":"Fabric API","text":"

All infrastructure is represented as a set of Fabric resources (Kubernetes CRDs) named the Wiring Diagram. It allows defining switches, servers, control nodes, external systems and the connections between them in a single place and then using it to deploy and manage the whole infrastructure. On top of that, Fabric provides a set of APIs to manage VPCs and the connections between them and between VPCs and External systems.

"},{"location":"concepts/overview/#wiring-diagram-api","title":"Wiring Diagram API","text":"

Wiring Diagram consists of the following resources:

  • \"Devices\": describes any device in the Fabric
    • Switch: configuration of the switch, like port group speeds, port breakouts, switch IP/ASN, etc.
    • Server: any physical server attached to the Fabric including control nodes
  • Connection: any logical connection for devices
    • usually it's a connection between two or more ports on two different devices
    • incl. MCLAG Peer Link, Unbundled/MCLAG server connections, Fabric connection between spine and leaf etc.
  • VLANNamespace -> non-overlapping VLAN ranges for attaching servers (see the sketch after this list)
  • IPv4Namespace -> non-overlapping IPv4 ranges for VPC subnets
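For illustration only, a minimal VLANNamespace might look like the sketch below. The API group is taken from the Connection and Switch examples elsewhere in this documentation, while the exact spec fields (here a list of VLAN ranges) are an assumption and may differ from the actual API:

apiVersion: wiring.githedgehog.com/v1alpha2
kind: VLANNamespace
metadata:
  name: default
  namespace: default
spec:
  ranges: # assumed field: non-overlapping VLAN ranges available for attaching servers
  - from: 1000
    to: 2999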
"},{"location":"concepts/overview/#user-facing-api","title":"User-facing API","text":"
  • VPC API
    • VPC: Virtual Private Cloud, similar to the public cloud VPC it provides an isolated private network for the resources with support for multiple subnets each with user-provided VLANs and on-demand DHCP
    • VPCAttachment: represents a specific VPC subnet assignment to the Connection object, which means an exact server port to VPC binding (see the sketch after this list)
    • VPCPeering: enables VPC to VPC connectivity (could be Local, where VPCs are used, or Remote peering on the border/mixed leaves)
  • External API
    • External: definition of the \"external system\" to peer with (could be one or multiple devices such as edge/provider routers)
    • ExternalAttachment: configuration for a specific switch (using Connection object) describing how it connects to an external system
    • ExternalPeering: enables VPC to External connectivity by exposing specific VPC subnets to the external system and allowing inbound routes from it
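For illustration, a VPCAttachment object might look roughly like the following sketch. The subnet and connection values mirror the kubectl fabric vpc attach example from the Fabric CLI section; the vpc.githedgehog.com API group and the exact field names are assumptions and may differ from the actual API:

apiVersion: vpc.githedgehog.com/v1alpha2 # assumed API group
kind: VPCAttachment
metadata:
  name: server-01--vpc-1
  namespace: default
spec:
  subnet: vpc-1/default # VPC subnet to attach, same value as the CLI --vpc-subnet flag
  connection: server-01--mclag--leaf-01--leaf-02 # server-facing Connection to bind the subnet to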
"},{"location":"concepts/overview/#fabricator","title":"Fabricator","text":"

Installer builder and VLAB.

  • Installer builder based on a preset (currently vlab for virtual & lab for physical)
  • Main input - wiring diagram
  • All input artifacts coming from OCI registry
  • Always full airgap (everything running from private registry)
  • Flatcar Linux for control node, generated ignition.json
  • Automatic k3s installation and private registry setup
  • All components and their dependencies running in K8s
  • Integrated Virtual Lab (VLAB) management
  • Future:
  • In-cluster (control) Operator to manage all components
  • Upgrades handling for everything starting from the control node OS
  • Installation progress, status and retries
  • Disaster recovery and backups
"},{"location":"concepts/overview/#das-boot","title":"Das Boot","text":"

Switch boot and installation.

  • Seeder
  • Actual switch provisioning
  • ONIE on a switch discovers control node using LLDP
  • It loads and runs our multi-stage installer
    • Network configuration & identity setup
    • Performs device registration
    • Hedgehog identity partition gets created on the switch
    • Downloads SONiC installer and runs it
    • Downloads the Agent and its config and installs them on the switch
  • Registration Controller
  • Device identity and registration
  • Actual SONiC installers
  • Misc: rsyslog/ntp
"},{"location":"concepts/overview/#fabric","title":"Fabric","text":"

Control plane and switch agent.

  • Currently Fabric is basically a single controller manager running in K8s
  • It includes controllers for different CRDs and needs
  • For example, auto-assigning VNIs to VPCs or generating Agent config
  • Additionally, it's running an admission webhook for our CRD APIs
  • Agent is watching for the corresponding Agent CRD in the K8s API
  • It applies the changes and saves the new config locally
  • It reports some status and information back to the API
  • Can perform reinstall and reboot of SONiC
"},{"location":"contribute/docs/","title":"Documentation","text":""},{"location":"contribute/docs/#getting-started","title":"Getting started","text":"

This documentation is done using MkDocs with multiple plugins enabled. It's based on Markdown; you can find a basic syntax overview here.

In order to contribute to the documentation, you'll need to have Git and Docker installed on your machine as well as any editor of your choice, preferably supporting Markdown preview. You can run the preview server using the following command:

make serve\n

Now you can open a continuously updated preview of your edits in the browser at http://127.0.0.1:8000. Pages will be automatically updated while you're editing.

Additionally you can run

make build\n

to make sure that your changes will be built correctly and don't break the documentation.

"},{"location":"contribute/docs/#workflow","title":"Workflow","text":"

If you want to quickly edit any page in the documentation, you can press the Edit this page icon at the top right of the page. It'll open the page in the GitHub editor. You can edit it and create a pull request with your changes.

Please, never push to the master or release/* branches directly. Always create a pull request and wait for the review.

Each pull request will be automatically built and a preview will be deployed. You can find the link to the preview in the pull request comments.
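For bigger changes, a typical flow is to work on a topic branch and open a pull request from it; a rough sketch (branch name and commit message are placeholders):

git checkout -b docs/my-change   # never commit to master or release/* directly
# ... edit files under the docs directory ...
make build                       # verify the documentation still builds
git add docs/
git commit -m "docs: describe my change"
git push origin docs/my-change   # then open a pull request on GitHub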

"},{"location":"contribute/docs/#repository","title":"Repository","text":"

Documentation is organized in per-release branches:

  • master - ongoing development, not released yet, referenced as dev version in the documentation
  • release/alpha-1, release/alpha-2 - alpha releases, referenced as the alpha-1/alpha-2 versions in the documentation; if patches are released for alpha-1, they'll be merged into the release/alpha-1 branch
  • release/v1.0 - first stable release, referenced as the v1.0 version in the documentation; if patches (e.g. v1.0.1) are released for v1.0, they'll be merged into the release/v1.0 branch

The latest release branch is referenced as the latest version in the documentation and will be used by default when you open the documentation.

"},{"location":"contribute/docs/#file-layout","title":"File layout","text":"

All documentation files are located in docs directory. Each file is a Markdown file with .md extension. You can create subdirectories to organize your files. Each directory can have a .pages file that overrides the default navigation order and titles.

For example, top-level .pages in this repository looks like this:

nav:\n  - index.md\n  - getting-started\n  - concepts\n  - Wiring Diagram: wiring\n  - Install & Upgrade: install-upgrade\n  - User Guide: user-guide\n  - Reference: reference\n  - Troubleshooting: troubleshooting\n  - ...\n  - release-notes\n  - contribute\n

Where you can add pages by file name, like index.md, and the page title will be taken from the file (first line with #). Additionally, you can reference a whole directory to create a nested section in navigation. You can also add custom titles by using the : separator, like Wiring Diagram: wiring, where Wiring Diagram is the title and wiring is a file/directory name.

More details in the MkDocs Pages plugin.

"},{"location":"contribute/docs/#abbreaviations","title":"Abbreaviations","text":"

You can find abbreviations in includes/abbreviations.md file. You can add various abbreviations there and all usages of the defined words in the documentation will get a highlight.

For example, we have the following in includes/abbreviations.md:

*[HHFab]: Hedgehog Fabricator - a tool for building Hedgehog Fabric\n

It'll highlight all usages of HHFab in the documentation and show a tooltip with the definition like this: HHFab.

"},{"location":"contribute/docs/#markdown-extensions","title":"Markdown extensions","text":"

We're using the MkDocs Material theme with multiple extensions enabled. You can find a detailed reference here, but below are some of the most useful ones.

To view the code for the examples, please check the source code of this page.

"},{"location":"contribute/docs/#text-formatting","title":"Text formatting","text":"

Text can be deleted and replacement text added. This can also be combined into a single operation. Highlighting is also possible and comments can be added inline.

Formatting can also be applied to blocks by putting the opening and closing tags on separate lines and adding new lines between the tags and the content.

Keyboard keys can be written like so:

Ctrl+Alt+Del

And inline icons/emojis can be added like this:

:fontawesome-regular-face-laugh-wink:\n:fontawesome-brands-twitter:{ .twitter }\n

"},{"location":"contribute/docs/#admonitions","title":"Admonitions","text":"

Admonitions, also known as call-outs, are an excellent choice for including side content without significantly interrupting the document flow. Different types of admonitions are available, each with a unique icon and color. Details can be found here.

Lorem ipsum

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa.

"},{"location":"contribute/docs/#code-blocks","title":"Code blocks","text":"

Details can be found here.

Simple code block with line numbers and highlighted lines:

bubble_sort.py
def bubble_sort(items):\n    for i in range(len(items)):\n        for j in range(len(items) - 1 - i):\n            if items[j] > items[j + 1]:\n                items[j], items[j + 1] = items[j + 1], items[j]\n

Code annotations:

theme:\n  features:\n    - content.code.annotate # (1)\n
  1. I'm a code annotation! I can contain code, formatted text, images, ... basically anything that can be written in Markdown.
"},{"location":"contribute/docs/#tabs","title":"Tabs","text":"

You can use Tabs to better organize content.

CC++
#include <stdio.h>\n\nint main(void) {\n  printf(\"Hello world!\\n\");\n  return 0;\n}\n
#include <iostream>\n\nint main(void) {\n  std::cout << \"Hello world!\" << std::endl;\n  return 0;\n}\n
"},{"location":"contribute/docs/#tables","title":"Tables","text":"Method Description GET Fetch resource PUT Update resource DELETE Delete resource"},{"location":"contribute/docs/#diagrams","title":"Diagrams","text":"

You can directly include Mermaid diagrams in your Markdown files. Details can be found here.

graph LR\n  A[Start] --> B{Error?};\n  B -->|Yes| C[Hmm...];\n  C --> D[Debug];\n  D --> B;\n  B ---->|No| E[Yay!];
sequenceDiagram\n  autonumber\n  Alice->>John: Hello John, how are you?\n  loop Healthcheck\n      John->>John: Fight against hypochondria\n  end\n  Note right of John: Rational thoughts!\n  John-->>Alice: Great!\n  John->>Bob: How about you?\n  Bob-->>John: Jolly good!
"},{"location":"contribute/overview/","title":"Overview","text":"

Under construction.

"},{"location":"getting-started/download/","title":"Download","text":""},{"location":"getting-started/download/#getting-access","title":"Getting access","text":"

Prior to General Availability, access to the full software is limited and requires a Design Partner Agreement. Please submit a ticket with the request using the Hedgehog Support Portal.

After that, you will be provided with credentials to access the software on GitHub Package. In order to use them, you need to log in to the registry using the following command:

docker login ghcr.io\n
"},{"location":"getting-started/download/#downloading-the-software","title":"Downloading the software","text":"

The main entry point for the software is the Hedgehog Fabricator CLI named hhfab. All software is published to the GitHub Package OCI registry, including binaries, container images, helm charts, etc. The hhfab binary can be downloaded from the GitHub Package using the following command:

curl -fsSL https://i.hhdev.io/hhfab | VERSION=alpha-2 bash\n

The VERSION environment variable can be used to specify the version of the software to download. If it's not specified, the latest release will be downloaded. You can pick a specific release series (e.g. alpha-2) or a specific release.

It requires ORAS to be installed, which is used to download the binary from the OCI registry and can be installed using the following command:

curl -fsSL https://i.hhdev.io/oras | bash\n

Currently only Linux x86 is supported for running hhfab.

"},{"location":"getting-started/download/#next-steps","title":"Next steps","text":"
  • Concepts
  • Virtual LAB
  • Installation
  • User guide
"},{"location":"install-upgrade/build-wiring/","title":"Build Wiring Diagram","text":"

Under construction.

In the meantime, to have a look at a working wiring diagram for the Hedgehog Fabric, please run the sample generator that produces VLAB-compatible wiring diagrams:

ubuntu@sl-dev:~$ hhfab wiring sample -h\nNAME:\n   hhfab wiring sample - sample wiring diagram (would work for vlab)\n\nUSAGE:\n   hhfab wiring sample [command options] [arguments...]\n\nOPTIONS:\n   --brief, -b                    brief output (only warn and error) (default: false)\n   --fabric-mode value, -m value  fabric mode (one of: collapsed-core, spine-leaf) (default: \"spine-leaf\")\n   --help, -h                     show help\n   --verbose, -v                  verbose output (includes debug) (default: false)\n\n   wiring generator options:\n\n   --chain-control-link         chain control links instead of all switches directly connected to control node if fabric mode is spine-leaf (default: false)\n   --control-links-count value  number of control links if chain-control-link is enabled (default: 0)\n   --fabric-links-count value   number of fabric links if fabric mode is spine-leaf (default: 0)\n   --mclag-leafs-count value    number of mclag leafs (should be even) (default: 0)\n   --mclag-peer-links value     number of mclag peer links for each mclag leaf (default: 0)\n   --mclag-session-links value  number of mclag session links for each mclag leaf (default: 0)\n   --orphan-leafs-count value   number of orphan leafs (default: 0)\n   --spines-count value         number of spines if fabric mode is spine-leaf (default: 0)\n   --vpc-loopbacks value        number of vpc loopbacks for each switch (default: 0)\n
"},{"location":"install-upgrade/config/","title":"Fabric Configuration","text":"
  • --fabric-mode <mode-name> (collapsed-core or spine-leaf) - Fabric mode to use, default is spine-leaf; in case of collapsed-core mode, there will be no VXLAN configured and only 2 switches will be used
  • --ntp-servers <servers> - Comma-separated list of NTP servers to use, default is time.cloudflare.com,time1.google.com,time2.google.com,time3.google.com,time4.google.com; it'll be used for both control nodes and switches
  • --dhcpd <mode-name> (isc or hedgehog) - DHCP server to use, default is isc; the hedgehog DHCP server enables use of on-demand DHCP for multiple IPv4/VLAN namespaces and overlapping IP ranges as well as adds DHCP leases into the Fabric API

You can find more information about using hhfab init in the help message by running it with the --help flag.
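For example, a collapsed-core Fabric using the built-in Hedgehog DHCP server and custom NTP servers could be initialized roughly like this (the wiring diagram file name is a placeholder):

hhfab init --preset lab \
  --fabric-mode collapsed-core \
  --dhcpd hedgehog \
  --ntp-servers time.cloudflare.com,time1.google.com \
  --wiring wiring.yaml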

"},{"location":"install-upgrade/onie-update/","title":"ONIE Update/Upgrade","text":""},{"location":"install-upgrade/onie-update/#hedgehog-onie-honie-supported-systems","title":"Hedgehog ONIE (HONIE) Supported Systems","text":"
  • DELL

  • S5248F-ON

  • S5232F-ON

  • Edge-Core

  • DCS501 (AS7726-32X)

  • DCS203 (AS7326-56X)

  • EPS203 (AS4630-54NPE)

"},{"location":"install-upgrade/onie-update/#updating-onie","title":"Updating ONIE","text":"
  • Via USB

  • For this example we will be updating a DELL S5248 to Hedgehog ONIE (HONIE)

    • Note: the USB port is on the back of the switch with the Management and Console ports
  • Prepare the USB stick by burning the honie-usb.img to a 4G or larger USB drive

  • Insert the USB drive into the switch

    • For example, to burn the file to disk X of an OSX machine

    • sudo dd if=honie-usb.img of=/dev/rdiskX bs=1m

  • Boot into ONIE Installer


  • ONIE will install the ONIE update and reboot

    • `ONIE: OS Install Mode ...

    Platform\u00a0 : x86_64-dellemc_s5200_c3538-r0

    Version \u00a0 : 3.40.1.1-7 <- Non HONIE version

    Build Date: 2020-03-24T20:44-07:00

    Info: Mounting EFI System on /boot/efi ...

    Info: BIOS mode: UEFI

    Info: Making NOS install boot mode persistent.

    Info: Using eth0 MAC address: 3c:2c:30:66:f0:00

    Info: eth0:\u00a0 Checking link... up.

    Info: Trying DHCPv4 on interface: eth0

    Warning: Unable to configure interface using DHCPv4: eth0

    ONIE: Using link-local IPv4 addr: eth0: 169.254.95.249/16

    Starting: klogd... done.

    Starting: dropbear ssh daemon... done.

    Starting: telnetd... done.

    discover: installer mode detected.\u00a0 Running installer.

    Starting: discover... done.

    Please press Enter to activate this console. Info: eth0:\u00a0 Checking link... up.

    Info: Trying DHCPv4 on interface: eth0

    Warning: Unable to configure interface using DHCPv4: eth0

    ONIE: Using link-local IPv4 addr: eth0: 169.254.6.139/16

    ONIE: Starting ONIE Service Discovery

    Info: Attempting file://dev/sdb1/onie-installer-x86_64-dellemc_s5248f_c3538-r0 ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer-x86_64-dellemc_s5248f_c3538-r0 ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer-x86_64-dellemc_s5248f_c3538-r0.bin ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer-x86_64-dellemc_s5248f_c3538.bin ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer-dellemc_s5248f_c3538 ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer-dellemc_s5248f_c3538.bin ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer-x86_64-bcm ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer-x86_64-bcm.bin ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer-x86_64 ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer-x86_64.bin ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer.bin ...

    ONIE: Executing installer: file://dev/sdb1/onie-installer-x86_64-dellemc_s5248f_c3538-r0

    Verifying image checksum ... OK.

    Preparing image archive ... OK.

    ONIE: Version \u00a0 \u00a0 \u00a0 : 3.40.1.1-8 <- HONIE Version

    ONIE: Architecture\u00a0 : x86_64

    ONIE: Machine \u00a0 \u00a0 \u00a0 : dellemc_s5200_c3538

    ONIE: Machine Rev \u00a0 : 0

    ONIE: Config Version: 1

    ONIE: Build Date\u00a0 \u00a0 : 2023-12-15T23:43+00:00

    Installing ONIE on: /dev/sda

    ONIE: NOS install successful: file://dev/sdb1/onie-installer-x86_64-dellemc_s5248f_c3538-r0

    ONIE: Rebooting...

    discover: installer mode detected.

    Stopping: discover...start-stop-daemon: warning: killing process 665: No such process

    Info: Unmounting kernel filesystems

    umount: can't unmount /: Invalid argument

    The system is going down NOW!

    Sent SIGTERM to all processes

    Sent SIGKILL to all processes

    Requesting system reboot`

  • System is now ready for use

"},{"location":"install-upgrade/overview/","title":"Install Fabric","text":"

Under construction.

"},{"location":"install-upgrade/overview/#prerequisites","title":"Prerequisites","text":"
  • Have a machine with access to the internet to use Fabricator and build the installer
  • Have a machine to install the Fabric Control Node on, with enough NICs to connect to at least one switch using Front Panel ports, enough CPU and RAM (System Requirements), as well as IPMI access to it to install the OS
  • Have enough Supported Switches for your Fabric
"},{"location":"install-upgrade/overview/#main-steps","title":"Main steps","text":"

This chapter is dedicated to the Hedgehog Fabric installation on the bare-metal control node(s) and switches, their preparation and configuration.

Please get hhfab installed following the instructions from the Download section.

Main steps to install Fabric are:

  1. Install hhfab on a machine with access to the internet
    1. Prepare Wiring Diagram
    2. Select Fabric Configuration
    3. Build Control Node configuration and installer
  2. Install Control Node
    1. Install Flatcar Linux on the Control Node
    2. Upload and run Control Node installer on the Control Node
  3. Prepare supported switches
    1. Install Hedgehog ONiE (HONiE) on them
    2. Reboot them into ONiE Install Mode and they will be automatically provisioned
"},{"location":"install-upgrade/overview/#build-control-node-configuration-and-installer","title":"Build Control Node configuration and installer","text":"

It's the only step that requires internet access to download artifacts and build the installer.

Once you've prepared the Wiring Diagram, you can initialize Fabricator by running the hhfab init command and passing optional configuration into it as well as wiring diagram file(s) as flags. Additionally, there are a lot of customizations available as flags, e.g. to set up default credentials, keys, etc.; please refer to hhfab init --help for more.

The --dev option enables development mode, which will enable default credentials and keys for the Control Node and switches:

  • Default user with passwordless sudo for the control node and test servers is core with password HHFab.Admin!.
  • Admin user with full access and passwordless sudo for the switches is admin with password HHFab.Admin!.
  • Read-only, non-sudo user with access only to the switch CLI for the switches is op with password HHFab.Op!.

Alternatively, you can pass your own credentials and keys using the --authorized-key and --control-password-hash flags. A password hash can be generated using the openssl passwd -5 command. Further customizations are available in the config file that can be passed using the --config flag.
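For example, a password hash suitable for --control-password-hash could be generated and passed roughly like this (the password, SSH key and file names are placeholders):

openssl passwd -5 'MySecurePassword'
# prints a $5$... SHA-256 crypt hash, which can then be passed to hhfab init, e.g.:
hhfab init --preset lab --wiring wiring.yaml \
  --authorized-key "ssh-ed25519 AAAA... user@host" \
  --control-password-hash '$5$<generated-hash>'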

hhfab init --preset lab --dev --wiring file1.yaml --wiring file2.yaml\nhhfab build\n

As a result, you will get the following files in the .hhfab directory or the one you've passed using the --basedir flag:

  • control-os/ignition.json - ignition config for the control node to get OS installed
  • control-install.tgz - installer for the control node, it will be uploaded to the control node and run there
"},{"location":"install-upgrade/overview/#install-control-node","title":"Install Control Node","text":"

It's a fully air-gapped installation and doesn't require internet access.

Please download the latest stable Flatcar Container Linux ISO from the link and boot into it (using IPMI attached media, a USB stick or any other way).

Once you've booted into the Flatcar installer, you need to download the ignition.json built in the previous step to it and run the Flatcar installation:
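One possible way to get the file onto the installer, assuming the installer has network access to the machine where you ran hhfab build and that machine accepts SSH connections (host, user and paths are placeholders):

scp user@build-machine:.hhfab/control-os/ignition.json ./ignition.json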

sudo flatcar-install -d /dev/sda -i ignition.json\n

Where /dev/sda is the disk you want to install the Control Node to and ignition.json is the control-os/ignition.json file from the previous step downloaded to the Flatcar installer.

Once the installation is finished, reboot the machine and wait for it to boot into the installed Flatcar Linux.

At that point, you should get into the installed Flatcar Linux using the dev or provided credentials with the user core, and you can now install Hedgehog Open Network Fabric on it. Download control-install.tgz to the freshly installed Control Node (e.g. by using scp as shown below) and run it.
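For example (the control node address is a placeholder and core is the default user mentioned above):

scp control-install.tgz core@<control-node-ip>:~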

tar xzf control-install.tgz && cd control-install && sudo ./hhfab-recipe run\n

It'll output the log of installing the Fabric (including the Kubernetes cluster, OCI registry, misc components, etc.); you should see the following output at the end:

...\n01:34:45 INF Running name=reloader-image op=\"push fabricator/reloader:v1.0.40\"\n01:34:47 INF Running name=reloader-chart op=\"push fabricator/charts/reloader:1.0.40\"\n01:34:47 INF Running name=reloader-install op=\"file /var/lib/rancher/k3s/server/manifests/hh-reloader-install.yaml\"\n01:34:47 INF Running name=reloader-wait op=\"wait deployment/reloader-reloader\"\ndeployment.apps/reloader-reloader condition met\n01:35:15 INF Done took=3m39.586394608s\n

At that point, you can start interacting with the Fabric using kubectl, kubectl fabric and k9s, preinstalled as part of the Control Node installer.
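For example, you could check that switches are registered and explore the Fabric CLI. This is just a sketch; the exact resource names exposed by the Fabric API may differ:

kubectl get switches
kubectl fabric --help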

You can now get HONiE installed on your switches and reboot them into ONiE Install Mode and they will be automatically provisioned from the Control Node.

"},{"location":"install-upgrade/requirements/","title":"System Requirements","text":"
  • Fast SSDs for system/root and the K8s & container runtime folders are required for stable work
  • SSDs are mandatory for Control Nodes
  • Minimal (non-HA) setup is a single Control Node
  • (Future) Full (HA) setup is at least 3 Control Nodes
  • (Future) Extra nodes could be used for things like the Logging, Monitoring, Alerting stack, etc.
"},{"location":"install-upgrade/requirements/#non-ha-minimal-setup-1-control-node","title":"Non-HA (minimal) setup - 1 Control Node","text":"
  • Control Node runs a non-HA K8s Control Plane installation with a non-HA Hedgehog Fabric Control Plane on top of it
  • Not recommended for more than 10 devices participating in the Hedgehog Fabric or for production deployments
Minimal Recommended CPU 4 8 RAM 12 GB 16 GB Disk 100 GB 250 GB"},{"location":"install-upgrade/requirements/#future-ha-setup-3-control-nodes-per-node","title":"(Future) HA setup - 3+ Control Nodes (per node)","text":"
  • Each Control Node runs part of the HA K8s Control Plane installation with the Hedgehog Fabric Control Plane on top of it in HA mode as well
  • Recommended for all cases where more than 10 devices participate in the Hedgehog Fabric
Minimal Recommended CPU 4 8 RAM 12 GB 16 GB Disk 100 GB 250 GB"},{"location":"install-upgrade/requirements/#device-participating-in-the-hedgehog-fabric-eg-switch","title":"Device participating in the Hedgehog Fabric (e.g. switch)","text":"
  • (Future) Each participating device is part of the K8s cluster, so it runs the K8s kubelet
  • Additionally, it runs the Hedgehog Fabric Agent that controls the device's configuration
Minimal Recommended CPU 1 2 RAM 1 GB 1.5 GB Disk 5 GB 10 GB"},{"location":"install-upgrade/supported-devices/","title":"Supported Devices","text":""},{"location":"install-upgrade/supported-devices/#spine","title":"Spine","text":"
  • DELL: S5232F-ON
  • EDGE-CORE: DCS204 (AS7726-32X)
"},{"location":"install-upgrade/supported-devices/#leaf","title":"Leaf","text":"
  • DELL: S5232F-ON, S5248F-ON
  • EDGE-CORE: DCS204 (AS7726-32x), DCS203 (AS7326-56x), EPS203 (AS4630-54NPE)
"},{"location":"reference/api/","title":"Fabric API","text":"

Under construction.

Please refer to the User Guide chapter for examples and instructions, and the VLAB chapter for how to try different APIs in a virtual environment.

"},{"location":"reference/cli/","title":"Fabric CLI","text":"

Under construction.

Currently, the Fabric CLI is represented by a kubectl plugin, kubectl-fabric, automatically installed on the Control Node. It is a wrapper around kubectl and the Kubernetes client which allows managing Fabric resources in a more convenient way. The Fabric CLI only provides a subset of the functionality available via the Fabric API and is focused on simplifying object creation and some manipulation of already existing objects, while the main get/list/update operations are expected to be done using kubectl.

core@control-1 ~ $ kubectl fabric\nNAME:\n   hhfctl - Hedgehog Fabric user client\n\nUSAGE:\n   hhfctl [global options] command [command options] [arguments...]\n\nVERSION:\n   v0.23.0\n\nCOMMANDS:\n   vpc                VPC commands\n   switch, sw, agent  Switch/Agent commands\n   connection, conn   Connection commands\n   switchgroup, sg    SwitchGroup commands\n   external           External commands\n   help, h            Shows a list of commands or help for one command\n\nGLOBAL OPTIONS:\n   --verbose, -v  verbose output (includes debug) (default: true)\n   --help, -h     show help\n   --version, -V  print the version\n
"},{"location":"reference/cli/#vpc","title":"VPC","text":"

Create VPC named vpc-1 with subnet 10.0.1.0/24 and VLAN 1001 with DHCP enabled and DHCP range starting from 10.0.1.10 (optional):

core@control-1 ~ $ kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10\n

Attach previously created VPC to the server server-01 (which is connected to the Fabric using the server-01--mclag--leaf-01--leaf-02 Connection):

core@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02\n

To peer VPC with another VPC (e.g. vpc-2) use the following command:

core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2\n
"},{"location":"release-notes/","title":"Release notes","text":""},{"location":"release-notes/#alpha-3","title":"Alpha-3","text":""},{"location":"release-notes/#sonic-support","title":"SONiC support","text":"

Broadcom Enterprise SONiC 4.2.0 (previously 4.1.1)

"},{"location":"release-notes/#multiple-ipv4-namespaces","title":"Multiple IPv4 namespaces","text":"
  • Support for multiple overlapping IPv4 addresses in the Fabric
  • Integrated with on-demand DHCP Service (see below)
  • All IPv4 addresses within a given VPC must be unique
  • Only VPCs with non-overlapping IPv4 subnets can peer within the Fabric
  • An external NAT device is required for peering of VPCs with overlapping subnets
"},{"location":"release-notes/#hedgehog-fabric-dhcp-and-ipam-service","title":"Hedgehog Fabric DHCP and IPAM Service","text":"
  • Custom DHCP server executing in the controllers
  • Multiple IPv4 namespaces with overlapping subnets
  • Multiple VLAN namespaces with overlapping VLAN ranges
  • DHCP leases exposed through the Fabric API
  • Available for VLAB as well as the Fabric
"},{"location":"release-notes/#hedgehog-fabric-ntp-service","title":"Hedgehog Fabric NTP Service","text":"
  • Custom NTP servers at the controller
  • Switches automatically configured to use control node as NTP server
  • NTP servers can be configured to sync to external time/NTP server
"},{"location":"release-notes/#staticexternal-connections","title":"StaticExternal connections","text":"
  • Directly connect external infrastructure services (such as NTP, DHCP, DNS) to the Fabric
  • No BGP is required, just automatically configured static routes
"},{"location":"release-notes/#dhcp-relay-to-3rd-party-dhcp-service","title":"DHCP Relay to 3rd party DHCP service","text":"

Support for 3rd party DHCP server (DHCP Relay config) through the API

"},{"location":"release-notes/#alpha-2","title":"Alpha-2","text":""},{"location":"release-notes/#controller","title":"Controller","text":"

A single controller. No controller redundancy.

"},{"location":"release-notes/#controller-connectivity","title":"Controller connectivity","text":"

For CLOS/LEAF-SPINE fabrics, it is recommended that the controller connects to one or more leaf switches in the fabric on front-facing data ports. Connection to two or more leaf switches is recommended for redundancy and performance. No port break-out functionality is supported for controller connectivity.

Spine controller connectivity is not supported.

For Collapsed Core topology, the controller can connect on front-facing data ports, as described above, or on management ports. Note that every switch in the collapsed core topology must be connected to the controller.

Management port connectivity can also be supported for CLOS/LEAF-SPINE topology but requires all switches connected to the controllers via management ports. No chain booting is possible for this configuration.

"},{"location":"release-notes/#controller-requirements","title":"Controller requirements","text":"
  • One 1 gig+ port to connect to each controller-attached switch
  • One+ 1 gig+ ports connecting to the external management network.
  • 4 Cores, 12GB RAM, 100GB SSD.
"},{"location":"release-notes/#chain-booting","title":"Chain booting","text":"

Switches not directly connecting to the controllers can chain boot via the data network.

"},{"location":"release-notes/#topology-support","title":"Topology support","text":"

CLOS/LEAF-SPINE and Collapsed Core topologies are supported.

"},{"location":"release-notes/#leaf-roles-for-clos-topology","title":"LEAF Roles for CLOS topology","text":"

server leaf, border leaf, and mixed leaf modes are supported.

"},{"location":"release-notes/#collapsed-core-topology","title":"Collapsed Core Topology","text":"

Two ToR/LEAF switches with MCLAG server connection.

"},{"location":"release-notes/#server-multihoming","title":"Server multihoming","text":"

MCLAG-only.

"},{"location":"release-notes/#device-support","title":"Device support","text":""},{"location":"release-notes/#leafs","title":"LEAFs","text":"
  • DELL:
  • S5248F-ON
  • S5232F-ON

  • Edge-Core:

  • DCS204 (AS7726-32X)
  • DCS203 (AS7326-56X)
  • EPS203 (AS4630-54NPE)
"},{"location":"release-notes/#spines","title":"SPINEs","text":"
  • DELL:
  • S5232F-ON
  • Edge-Core:
  • DCS204 (AS7726-32X)
"},{"location":"release-notes/#underlay-configuration","title":"Underlay configuration:","text":"

Port speed, port group speed, port breakouts are configurable through the API

"},{"location":"release-notes/#vpc-overlay-implementation","title":"VPC (overlay) Implementation","text":"

VXLAN-based BGP eVPN.

"},{"location":"release-notes/#multi-subnet-vpcs","title":"Multi-subnet VPCs","text":"

A VPC consists of subnets, each with a user-specified VLAN for external host/server connectivity.

"},{"location":"release-notes/#multiple-ip-address-namespaces","title":"Multiple IP address namespaces","text":"

Multiple IP address namespaces are supported per fabric. Each VPC belongs to the corresponding IPv4 namespace. There are no subnet overlaps within a single IPv4 namespace. IP address namespaces can mutually overlap.

"},{"location":"release-notes/#vlan-namespace","title":"VLAN Namespace","text":"

VLAN Namespaces guarantee the uniqueness of VLANs for a set of participating devices. Each switch belongs to a list of VLAN namespaces with non-overlapping VLAN ranges. Each VPC belongs to the VLAN namespace. There are no VLAN overlaps within a single VLAN namespace.

This feature is useful when multiple VM-management domains (like separate VMware clusters) connect to the fabric.

"},{"location":"release-notes/#switch-groups","title":"Switch Groups","text":"

Each switch belongs to a list of switch groups used for identifying redundancy groups for things like external connectivity.

"},{"location":"release-notes/#mutual-vpc-peering","title":"Mutual VPC Peering","text":"

VPC peering is supported and possible between a pair of VPCs that belong to the same IPv4 and VLAN namespaces.

"},{"location":"release-notes/#external-vpc-peering","title":"External VPC Peering","text":"

VPC peering provides the means of peering with external networking devices (edge routers, firewalls, or data center interconnects). VPC egress/ingress is pinned to a specific group of the border or mixed leaf switches. Multiple \u201cexternal systems\u201d with multiple devices/links in each of them are supported.

The user controls what subnets/prefixes to import and export from/to the external system.

No NAT function is supported for external peering.

"},{"location":"release-notes/#host-connectivity","title":"Host connectivity","text":"

Servers can be attached as Unbundled, Bundled (LAG) and MCLAG

"},{"location":"release-notes/#dhcp-service","title":"DHCP Service","text":"

VPC is provided with an optional DHCP service with simple IPAM

"},{"location":"release-notes/#local-vpc-peering-loopbacks","title":"Local VPC peering loopbacks","text":"

To enable local inter-VPC peering that allows routing of traffic between VPCs, local loopbacks are required to overcome silicon limitations.

"},{"location":"release-notes/#scale","title":"Scale","text":"
  • Maximum fabric size: 20 LEAF/ToR switches.
  • Routes per switch: 64k
  • [ silicon platform limitation in Trident 3; limits the number of endpoints in the fabric ]
  • Total VPCs per switch: up to 1000
  • [ Including VPCs attached at the given switch and VPCs peered with ]
  • Total VPCs per VLAN namespace: up to 3000
  • [ assuming 1 subnet per VPC ]
  • Total VPCs per fabric: unlimited
  • [ if using multiple VLAN namespaces ]
  • VPC subnets per switch: up to 3000
  • VPC subnets per VLAN namespace up to 3000
  • Subnets per VPC: up to 20
  • [ just a validation; the current design allows up to 100, but it could be increased even more in the future ]
  • VPC Slots per remote peering @ switch: 2
  • Max VPC loopbacks per switch: 500
  • [ VPC loopback workarounds per switch are needed for local peering when both VPCs are attached to the switch or for external peering with VPC attached on the same switch that is peering with external ]
"},{"location":"release-notes/#software-versions","title":"Software versions","text":"
  • Fabric: v0.23.0
  • Das-boot: v0.11.4
  • Fabricator: v0.8.0
  • K3s: v1.27.4-k3s1
  • Zot: v1.4.3
  • SONiC
  • Broadcom Enterprise Base 4.1.1
  • Broadcom Enterprise Campus 4.1.1
"},{"location":"release-notes/#known-limitations","title":"Known Limitations","text":"
  • MTU setting inflexibility:
  • Fabric MTU is 9100 and not configurable right now (A3 planned)
  • Server-facing MTU is 9136 and not configurable right now (A3+)
  • no support for Access VLANs for attaching servers (A3 planned)
  • VPC peering is enabled on all subnets of the participating VPCs. No subnet selection for peering. (A3 planned)
  • peering with external is only possible with a VLAN (by design)
  • If you have VPCs with remote peering on a switch group, you can\u2019t attach those VPCs on that switch group (by definition of remote peering)
  • if a group of VPCs has remote peering on a switch group, any other VPC that will peer with those VPCs remotely will need to use the same switch group (by design)
  • if VPC peers with external, it can only be remotely peered with on the same switches that have a connection to that external (by design)
  • the server-facing connection object is immutable as it\u2019s very easy to get into a deadlock, re-create to change it (A3+)
"},{"location":"release-notes/#alpha-1","title":"Alpha-1","text":"
  • Controller:

    • A single controller connecting to each switch management port. No redundancy.
  • Controller requirements:

    • One 1 gig port per switch
    • One+ 1 gig+ ports connecting to the external management network.
    • 4 Cores, 12GB RAM, 100GB SSD.
  • Seeder:

    • Seeder and Controller functions co-resident on the control node. Switch booting and ZTP on management ports directly connected to the controller.
  • HHFab - the fabricator:

    • An operational tool to generate, initiate, and maintain the fabric software appliance. Allows fabrication of the environment-specific image with all of the required underlay and security configuration baked in.
  • DHCP Service:

    • A simple DHCP server for assigning IP addresses to hosts connecting to the fabric, optimized for use with VPC overlay.
  • Topology:

    • Support for a Collapsed Core topology with 2 switch nodes.
  • Underlay:

    • A simple single-VRF network with a BGP control plane. IPv4 support only.
  • External connectivity:

    • An edge router must be connected to selected ports of one or both switches. IPv4 support only.
  • Dual-homing:

    • L2 Dual homing with MCLAG is implemented to connect servers, storage, and other devices in the data center. NIC bonding and LACP configuration at the host are required.
  • VPC overlay implementation:

    • VPC is implemented as a set of ACLs within the underlay VRF. External connectivity to the VRF is performed via internally managed VLANs. IPv4 support only.
  • VPC Peering:

    • VPC peering is performed via ACLs with no fine-grained control.
  • NAT

    • DNAT + SNAT are supported per VPC. SNAT and DNAT can\u2019t be enabled per VPC simultaneously.
  • Hardware support:

    • Please see the supported hardware list.
  • Virtual Lab:

    • A simulation of the two-node Collapsed Core Topology as a virtual environment. Designed for use as a network simulation, a configuration scratchpad, or a training/demonstration tool. Minimum requirements: 8 cores, 24GB RAM, 100GB SSD
  • Limitations:

    • 40 VPCs max
    • 50 VPC peerings
    • [ 768 ACL entry platform limitation from Broadcom ]
  • Software versions:

    • Fabricator: v0.5.2
    • Fabric: v0.18.6
    • Das-boot: v0.8.2
    • K3s: v1.27.4-k3s1
    • Zot: v1.4.3
    • SONiC: Broadcom Enterprise Base 4.1.1
"},{"location":"troubleshooting/overview/","title":"Troubleshooting","text":"

Under construction.

"},{"location":"user-guide/connections/","title":"Connections","text":"

The Connection object represents logical and physical connections between any devices in the Fabric (Switch, Server and External objects). It's needed to define all connections between devices in the Wiring Diagram.

There are multiple types of connections.

"},{"location":"user-guide/connections/#server-connections-user-facing","title":"Server connections (user-facing)","text":"

Server connections are used to connect workload servers to the switches.

"},{"location":"user-guide/connections/#unbundled","title":"Unbundled","text":"

Unbundled server connections are used to connect servers to a single switch using a single port.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: server-4--unbundled--s5248-02\n  namespace: default\nspec:\n  unbundled:\n    link: # Defines a single link between a server and a switch\n      server:\n        port: server-4/enp2s1\n      switch:\n        port: s5248-02/Ethernet3\n
"},{"location":"user-guide/connections/#bundled","title":"Bundled","text":"

Bundled server connections are used to connect servers to a single switch using multiple ports (port channel, LAG).

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: server-3--bundled--s5248-01\n  namespace: default\nspec:\n  bundled:\n    links: # Defines multiple links between a single server and a single switch\n    - server:\n        port: server-3/enp2s1\n      switch:\n        port: s5248-01/Ethernet3\n    - server:\n        port: server-3/enp2s2\n      switch:\n        port: s5248-01/Ethernet4\n
"},{"location":"user-guide/connections/#mclag","title":"MCLAG","text":"

MCLAG server connections are used to connect servers to a pair of switches using multiple ports (dual-homing).

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: server-1--mclag--s5248-01--s5248-02\n  namespace: default\nspec:\n  mclag:\n    links: # Defines multiple links between a single server and a pair of switches\n    - server:\n        port: server-1/enp2s1\n      switch:\n        port: s5248-01/Ethernet1\n    - server:\n        port: server-1/enp2s2\n      switch:\n        port: s5248-02/Ethernet1\n
"},{"location":"user-guide/connections/#switch-connections-fabric-facing","title":"Switch connections (fabric-facing)","text":"

Switch connections are used to connect switches to each other and provide any needed \"service\" connectivity to implement the Fabric features.

"},{"location":"user-guide/connections/#fabric","title":"Fabric","text":"

Connections between a specific spine and leaf; covers all actual wires between a single pair.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: s5232-01--fabric--s5248-01\n  namespace: default\nspec:\n  fabric:\n    links: # Defines multiple links between a spine-leaf pair of switches with IP addresses\n    - leaf:\n        ip: 172.30.30.1/31\n        port: s5248-01/Ethernet48\n      spine:\n        ip: 172.30.30.0/31\n        port: s5232-01/Ethernet0\n    - leaf:\n        ip: 172.30.30.3/31\n        port: s5248-01/Ethernet56\n      spine:\n        ip: 172.30.30.2/31\n        port: s5232-01/Ethernet4\n
"},{"location":"user-guide/connections/#mclag-domain","title":"MCLAG-Domain","text":"

Used to define a pair of MCLAG switches with Session and Peer links between them.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: s5248-01--mclag-domain--s5248-02\n  namespace: default\nspec:\n  mclagDomain:\n    peerLinks: # Defines multiple links between a pair of MCLAG switches for Peer link\n    - switch1:\n        port: s5248-01/Ethernet72\n      switch2:\n        port: s5248-02/Ethernet72\n    - switch1:\n        port: s5248-01/Ethernet73\n      switch2:\n        port: s5248-02/Ethernet73\n    sessionLinks: # Defines multiple links between a pair of MCLAG switches for Session link\n    - switch1:\n        port: s5248-01/Ethernet74\n      switch2:\n        port: s5248-02/Ethernet74\n    - switch1:\n        port: s5248-01/Ethernet75\n      switch2:\n        port: s5248-02/Ethernet75\n
"},{"location":"user-guide/connections/#vpc-loopback","title":"VPC-Loopback","text":"

Required to implement a workaround for local VPC peering (when both VPCs are attached to the same switch), which is caused by the hardware limitations of the currently supported switches.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: s5248-01--vpc-loopback\n  namespace: default\nspec:\n  vpcLoopback:\n    links: # Defines multiple loopbacks on a single switch\n    - switch1:\n        port: s5248-01/Ethernet16\n      switch2:\n        port: s5248-01/Ethernet17\n    - switch1:\n        port: s5248-01/Ethernet18\n      switch2:\n        port: s5248-01/Ethernet19\n
"},{"location":"user-guide/connections/#management","title":"Management","text":"

Connection to the Control Node.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: control-1--mgmt--s5248-01-front\n  namespace: default\nspec:\n  management:\n    link: # Defines a single link between a control node and a switch\n      server:\n        ip: 172.30.20.0/31\n        port: control-1/enp2s1\n      switch:\n        ip: 172.30.20.1/31\n        port: s5248-01/Ethernet0\n
"},{"location":"user-guide/connections/#connecting-fabric-to-outside-world","title":"Connecting Fabric to outside world","text":"

Provides connectivity to the outside world, e.g. the internet, other networks or other systems such as DHCP, NTP, LMA or AAA services.

"},{"location":"user-guide/connections/#staticexternal","title":"StaticExternal","text":"

A simple way to connect something like a DHCP server directly to the Fabric by attaching it to specific switch ports.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: third-party-dhcp-server--static-external--s5248-04\n  namespace: default\nspec:\n  staticExternal:\n    link:\n      switch:\n        port: s5248-04/Ethernet1 # switch port to use\n        ip: 172.30.50.5/24 # IP address that will be assigned to the switch port\n        vlan: 1005 # Optional VLAN ID to use for the switch port, if 0 - no VLAN is configured\n        subnets: # List of subnets that will be routed to the switch port using static routes and next hop\n          - 10.99.0.1/24\n          - 10.199.0.100/32\n        nextHop: 172.30.50.1 # Next hop IP address that will be used when configuring static routes for the \"subnets\" list\n
"},{"location":"user-guide/connections/#external","title":"External","text":"

Connection to external systems, e.g. edge/provider routers, using BGP peering and configuring inbound/outbound communities, as well as granularly controlling what gets advertised and which routes are accepted.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: s5248-03--external--5835\n  namespace: default\nspec:\n  external:\n    link: # Defines a single link between a switch and an external system\n      switch:\n        port: s5248-03/Ethernet3\n
"},{"location":"user-guide/devices/","title":"Switches and Servers","text":"

All devices in the Hedgehog Fabric are divided into two groups, switches and servers, represented by the corresponding Switch and Server objects in the API. These objects are needed to define all participants of the Fabric and their roles in the Wiring Diagram, as well as the Connections between them.

"},{"location":"user-guide/devices/#switches","title":"Switches","text":"

Switches are the main building blocks of the Fabric. They are represented by Switch objects in the API and consist of basic information like name, description, location, role, etc. as well as port group speeds, port breakouts, ASN, IP addresses and so on.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Switch\nmetadata:\n  name: s5248-01\n  namespace: default\nspec:\n  asn: 65101 # ASN of the switch\n  description: leaf-1\n  ip: 172.30.10.100/32 # Switch IP that will be accessible from the Control Node\n  location:\n    location: gen--default--s5248-01\n  locationSig:\n    sig: <undefined>\n    uuidSig: <undefined>\n  portBreakouts: # Configures port breakouts for the switch\n    1/55: 4x25G\n  portGroupSpeeds: # Configures port group speeds for the switch\n    \"1\": 10G\n    \"2\": 10G\n  protocolIP: 172.30.11.100/32 # Used as BGP router ID\n  role: server-leaf # Role of the switch, one of server-leaf, border-leaf and mixed-leaf\n  vlanNamespaces: # Defines which VLANs could be used to attach servers\n  - default\n  vtepIP: 172.30.12.100/32\n  groups: # Defines which groups the switch belongs to\n  - some-group\n

The SwitchGroup is just a marker at this point and doesn't have any configuration options.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: SwitchGroup\nmetadata:\n  name: border\n  namespace: default\nspec: {}\n
"},{"location":"user-guide/devices/#servers","title":"Servers","text":"

Servers include both control nodes and user workload servers.

Control Node:

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Server\nmetadata:\n  name: control-1\n  namespace: default\nspec:\n  type: control # Type of the server, one of control or \"\" (empty) for regular workload server\n

Regular workload server:

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Server\nmetadata:\n  name: server-1\n  namespace: default\nspec:\n  description: MH s5248-01/E1 s5248-02/E1\n
"},{"location":"user-guide/external/","title":"External Peering","text":"

Hedgehog Fabric uses the Border Leaf concept to exchange VPC routes outside the Fabric and to provide L3 connectivity. The External Peering feature allows you to set up an external peering endpoint and to enforce several policies between internal and external endpoints.

Hedgehog Fabric does not operate Edge side devices.

"},{"location":"user-guide/external/#overview","title":"Overview","text":"

Traffic exits the Fabric on Border Leafs that are connected to Edge devices. Border Leafs are suitable for terminating l2vpn connections and distinguishing VPC L3 routable traffic towards Edge devices, as well as for landing VPC servers. Border Leafs (or Borders) can connect to several Edge devices.

External Peering is only available on switch devices that are capable of sub-interfaces.

"},{"location":"user-guide/external/#connect-border-leaf-to-edge-device","title":"Connect Border Leaf to Edge device","text":"

In order to distinguish VPC traffic, the Edge device should be able to:

  • Set up a BGP IPv4 unicast session to advertise and receive routes from the Fabric
  • Connect to a Fabric Border Leaf over a VLAN
  • Mark egress routes towards the Fabric with BGP communities
  • Filter ingress routes from the Fabric by BGP communities

All other filtering and processing of L3 Routed Fabric traffic should be done on the Edge devices.

"},{"location":"user-guide/external/#control-plane","title":"Control Plane","text":"

The Fabric shares VPC routes with Edge devices via BGP. Peering is done over a VLAN in the IPv4 Unicast AFI/SAFI.

"},{"location":"user-guide/external/#data-plane","title":"Data Plane","text":"

VPC L3 routable traffic will be tagged with the VLAN and sent to the Edge device. Further processing of VPC traffic (NAT, PBR, etc.) should happen on the Edge devices.

"},{"location":"user-guide/external/#vpc-access-to-edge-device","title":"VPC access to Edge device","text":"

Each VPC within the Fabric can be allowed to access Edge devices. Additional filtering can be applied to the routes that the VPC can export to and import from the Edge devices.

"},{"location":"user-guide/external/#api-and-implementation","title":"API and implementation","text":""},{"location":"user-guide/external/#external","title":"External","text":"

General configuration starts with the specification of External objects. Each object of the External type can represent a set of Edge devices, a single BGP instance on an Edge device, or any other grouped Edge entities that can be described with the following config:

  • Name of the External
  • Inbound routes are marked with a dedicated BGP community
  • Outbound routes are required to be marked with a dedicated community

Each External should be bound to some VPC IP Namespace, otherwise prefix overlaps may happen.

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: External\nmetadata:\n  name: default--5835\nspec:\n  ipv4Namespace: # VPC IP Namespace\n  inboundCommunity: # BGP Standard Community of routes from Edge devices\n  outboundCommunity: # BGP Standard Community required to be assigned on prefixes advertised from Fabric\n
"},{"location":"user-guide/external/#connection","title":"Connection","text":"

A Connection of type external is used to identify the switch port on a Border Leaf that is cabled to an Edge device.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: # specified or generated\nspec:\n  external:\n    link:\n      switch:\n        port: # SwitchName/EthernetXXX\n
"},{"location":"user-guide/external/#external-attachment","title":"External Attachment","text":"

An External Attachment is a definition of BGP peering and traffic connectivity between a Border Leaf and an External. Attachments are bound to a Connection with type external and specify the VLAN that will be used to segregate a particular Edge peering.

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: ExternalAttachment\nmetadata:\n  name: #\nspec:\n  connection: # Name of the Connection with type external\n  external: # Name of the External to pick config\n  neighbor:\n    asn: # Edge device ASN\n    ip: # IP address of Edge device to peer with\n  switch:\n    ip: # IP Address on the Border Leaf to set up BGP peering\n    vlan: # Vlan ID to tag control and data traffic\n

Several External Attachments can be configured for the same Connection, but each must use a different VLAN.
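
For example, a second attachment could reuse the same Connection with another VLAN and a different neighbor address. The names and values below are purely illustrative, not taken from a real setup:

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: ExternalAttachment\nmetadata:\n  name: border-1--edge-1--vlan200 # hypothetical name\nspec:\n  connection: border-1--external--edge-1 # same Connection as the first attachment\n  external: edge-1\n  neighbor:\n    asn: 65102\n    ip: 100.100.2.6\n  switch:\n    ip: 100.100.2.1/24\n    vlan: 200 # must differ from the VLAN of the first attachment\n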

"},{"location":"user-guide/external/#external-vpc-peering","title":"External VPC Peering","text":"

To allow a specific VPC to have access to Edge devices, the VPC should be bound to a specific External object. This is done via an ExternalPeering object.

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: ExternalPeering\nmetadata:\n  name: # Name of ExternalPeering\nspec:\n  permit:\n    external:\n      name: # External Name\n      prefixes: # List of prefixes(routes) to be allowed to pick up from External\n      - # IPv4 Prefix\n    vpc:\n      name: # VPC Name\n      subnets: # List of VPC subnets name to be allowed to have access to External (Edge)\n      - # Name of the subnet within VPC\n
Prefixes can be specified as an exact match or with the mask range indicator keywords le and ge. le matches prefix lengths that are less than or equal to the given value, and ge matches prefix lengths that are greater than or equal to it.

Example: allow ANY IPv4 prefix that comes from the External, i.e. allow all prefixes that match the default route with any prefix length:

spec:\n  permit:\n    external:\n      name: ###\n      prefixes:\n      - le: 32\n        prefix: 0.0.0.0/0\n
ge and le can also be combined.

Example:

spec:\n  permit:\n    external:\n      name: ###\n      prefixes:\n      - le: 24\n        ge: 16\n        prefix: 77.0.0.0/8\n
For instance, 77.42.0.0/18 will be matched by the prefix rule above, but 77.128.77.128/25 (prefix length greater than 24) or 77.0.0.0/12 (prefix length less than 16) won't.

"},{"location":"user-guide/external/#examples","title":"Examples","text":"

This example shows peering with an External object named HedgeEdge, given a Fabric VPC named vpc-1, on the Border Leaf switchBorder that has a cable to an Edge device on port Ethernet42. vpc-1 is required to receive any prefixes advertised from the External.

"},{"location":"user-guide/external/#fabric-api-configuration","title":"Fabric API configuration","text":""},{"location":"user-guide/external/#external_1","title":"External","text":"

# hhfctl external create --name HedgeEdge --ipns default --in 65102:5000 --out 5000:65102\n
apiVersion: vpc.githedgehog.com/v1alpha2\nkind: External\nmetadata:\n  name: HedgeEdge\n  namespace: default\nspec:\n  inboundCommunity: 65102:5000\n  ipv4Namespace: default\n  outboundCommunity: 5000:65102\n

"},{"location":"user-guide/external/#connection_1","title":"Connection","text":"

The Connection should be specified in the wiring diagram:

###\n### switchBorder--external--HedgeEdge\n###\napiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: switchBorder--external--HedgeEdge\nspec:\n  external:\n    link:\n      switch:\n        port: switchBorder/Ethernet42\n
"},{"location":"user-guide/external/#externalattachment","title":"ExternalAttachment","text":"

Specified in the wiring diagram:

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: ExternalAttachment\nmetadata:\n  name: switchBorder--HedgeEdge\nspec:\n  connection: switchBorder--external--HedgeEdge\n  external: HedgeEdge\n  neighbor:\n    asn: 65102\n    ip: 100.100.0.6\n  switch:\n    ip: 100.100.0.1/24\n    vlan: 100\n

"},{"location":"user-guide/external/#externalpeering","title":"ExternalPeering","text":"
apiVersion: vpc.githedgehog.com/v1alpha2\nkind: ExternalPeering\nmetadata:\n  name: vpc-1--HedgeEdge\nspec:\n  permit:\n    external:\n      name: HedgeEdge\n      prefixes:\n      - le: 32\n        prefix: 0.0.0.0/0\n    vpc:\n      name: vpc-1\n      subnets:\n      - default\n
"},{"location":"user-guide/external/#example-edge-side-bgp-configuration-based-on-sonic-os","title":"Example Edge side BGP configuration based on SONiC OS","text":"

NOTE: Hedgehog does not recommend using the following configuration in production. It's provided only as an example of an Edge peer config.

Interface config

interface Ethernet2.100\n encapsulation dot1q vlan-id 100\n description switchBorder--Ethernet42\n no shutdown\n ip vrf forwarding VrfHedge\n ip address 100.100.0.6/24\n

BGP Config

!\nrouter bgp 65102 vrf VrfHedge\n log-neighbor-changes\n timers 60 180\n !\n address-family ipv4 unicast\n  maximum-paths 64\n  maximum-paths ibgp 1\n  import vrf VrfPublic\n !\n neighbor 100.100.0.1\n  remote-as 65103\n  !\n  address-family ipv4 unicast\n   activate\n   route-map HedgeIn in\n   route-map HedgeOut out\n   send-community both\n !\n
Route Map configuration
route-map HedgeIn permit 10\n match community Hedgehog\n!\nroute-map HedgeOut permit 10\n set community 65102:5000\n!\n\nbgp community-list standard Hedgehog permit 5000:65102\n

"},{"location":"user-guide/harvester/","title":"Using VPCs with Harvester","text":"

This is an example of how Hedgehog Fabric can be used with Harvester or any other hypervisor on servers connected to the Fabric. It assumes that you have already installed the Fabric and have some servers running Harvester attached to it.

You'll need to define a Server object for each server running Harvester and a Connection object for each server connection to the switches.
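
As a sketch (the host name, NIC names, switch names and ports below are assumptions for illustration, not part of any real wiring diagram), the Server and its MCLAG Connection could look like this:

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Server\nmetadata:\n  name: harvester-1 # hypothetical Harvester host\n  namespace: default\nspec:\n  description: Harvester host, MCLAG to s5248-01/s5248-02\n---\napiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: harvester-1--mclag--s5248-01--s5248-02\n  namespace: default\nspec:\n  mclag:\n    links: # Dual-homed links from the Harvester host to the MCLAG pair\n    - server:\n        port: harvester-1/enp3s0f1\n      switch:\n        port: s5248-01/Ethernet5\n    - server:\n        port: harvester-1/enp5s0f0\n      switch:\n        port: s5248-02/Ethernet5\n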

You can have multiple VPCs created and attached to the Connections to these servers to make them available to the VMs in Harvester or any other hypervisor.
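
For instance, assuming a VPC named vpc-1 and the hypothetical Connection above, a VPCAttachment exposing that VPC's default subnet on the Harvester host ports could look like this:

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPCAttachment\nmetadata:\n  name: vpc-1--default--harvester-1--mclag--s5248-01--s5248-02\n  namespace: default\nspec:\n  connection: harvester-1--mclag--s5248-01--s5248-02 # Connection of the Harvester host\n  subnet: vpc-1/default # VPC subnet made available (on its VLAN) on the host ports\n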

"},{"location":"user-guide/harvester/#congigure-harvester","title":"Congigure Harvester","text":""},{"location":"user-guide/harvester/#add-a-cluster-network","title":"Add a Cluster Network","text":"

From the \"Cluster Network/Confg\" side menu. Create a new Cluster Network.

Here is what the CRD looks like cleaned up:

apiVersion: network.harvesterhci.io/v1beta1\nkind: ClusterNetwork\nmetadata:\n  name: testnet\n
"},{"location":"user-guide/harvester/#add-a-network-config","title":"Add a Network Config","text":"

By clicking \"Create Network Confg\". Add your connections and select bonding type.

The resulting cleaned up CRD:

apiVersion: network.harvesterhci.io/v1beta1\nkind: VlanConfig\nmetadata:\n  name: testconfig\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\nspec:\n  clusterNetwork: testnet\n  uplink:\n    bondOptions:\n      miimon: 100\n      mode: 802.3ad\n    linkAttributes:\n      txQLen: -1\n    nics:\n      - enp5s0f0\n      - enp3s0f1\n
"},{"location":"user-guide/harvester/#add-vlan-based-vm-networks","title":"Add VLAN based VM Networks","text":"

Browse over to \"VM Networks\" and add one for each Vlan you want to support, assigning them to the cluster network.

Here is what the CRDs will look like for both VLANs:

apiVersion: k8s.cni.cncf.io/v1\nkind: NetworkAttachmentDefinition\nmetadata:\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\n    network.harvesterhci.io/ready: 'true'\n    network.harvesterhci.io/type: L2VlanNetwork\n    network.harvesterhci.io/vlan-id: '1001'\n  name: testnet1001\n  namespace: default\nspec:\n  config: >-\n    {\"cniVersion\":\"0.3.1\",\"name\":\"testnet1001\",\"type\":\"bridge\",\"bridge\":\"testnet-br\",\"promiscMode\":true,\"vlan\":1001,\"ipam\":{}}\n
apiVersion: k8s.cni.cncf.io/v1\nkind: NetworkAttachmentDefinition\nmetadata:\n  name: testnet1000\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\n    network.harvesterhci.io/ready: 'true'\n    network.harvesterhci.io/type: L2VlanNetwork\n    network.harvesterhci.io/vlan-id: '1000'\n    #  key: string\n  namespace: default\nspec:\n  config: >-\n    {\"cniVersion\":\"0.3.1\",\"name\":\"testnet1000\",\"type\":\"bridge\",\"bridge\":\"testnet-br\",\"promiscMode\":true,\"vlan\":1000,\"ipam\":{}}\n
"},{"location":"user-guide/harvester/#using-the-vpcs","title":"Using the VPCs","text":"

Now you can choose the created VM Networks when creating a VM in Harvester, and the VM will be attached as part of the VPC.

"},{"location":"user-guide/overview/","title":"Overview","text":"

This chapter is intended to give an overview of the main features of the Hedgehog Fabric and their usage.

"},{"location":"user-guide/vpcs/","title":"VPCs and Namespaces","text":""},{"location":"user-guide/vpcs/#vpc","title":"VPC","text":"

Virtual Private Cloud: similar to a public cloud VPC, it provides an isolated private network for resources, with support for multiple subnets, each with user-provided VLANs and on-demand DHCP.

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPC\nmetadata:\n  name: vpc-1\n  namespace: default\nspec:\n  ipv4Namespace: default # Limits to which subnets could be used by VPC to guarantee non-overlapping IPv4 ranges\n  vlanNamespace: default # Limits to which switches VPC could be attached to guarantee non-overlapping VLANs\n  subnets:\n    default: # Each subnet is named, \"default\" subnet isn't required, but actively used by CLI\n      dhcp:\n        enable: true # On-demand DHCP server\n        range: # Optionally, start/end range could be specified\n          start: 10.10.1.10\n      subnet: 10.10.1.0/24 # User-defined subnet from ipv4 namespace\n      vlan: \"1001\" # User-defined VLAN from vlan namespace\n    thrird-party-dhcp: # Another subnet\n      dhcp:\n        relay: 10.99.0.100/24 # Use third-party DHCP server (DHCP relay configuration), access to it could be enabled using StaticExternal connection\n      subnet: \"10.10.2.0/24\"\n      vlan: \"1002\"\n    another-subnet: # Minimal configuration is just a name, subnet and VLAN\n      subnet: 10.10.100.0/24\n      vlan: \"1100\"\n
"},{"location":"user-guide/vpcs/#vpcattachment","title":"VPCAttachment","text":"

Represents the assignment of a specific VPC subnet to a Connection object, i.e. a binding between exact server ports and a VPC. It basically leads to the VPC being available on the specific server port(s) on the subnet VLAN.

A VPC can be attached to a switch that is part of the VLAN namespace used by the VPC.

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPCAttachment\nmetadata:\n  name: vpc-1-server-1--mclag--s5248-01--s5248-02\n  namespace: default\nspec:\n  connection: server-1--mclag--s5248-01--s5248-02 # Connection name representing the server port(s)\n  subnet: vpc-1/default # VPC subnet name\n
"},{"location":"user-guide/vpcs/#vpcpeering","title":"VPCPeering","text":"

It enables VPC-to-VPC connectivity. There are two types of VPC peering:

  • Local - peering is implemented on the same switches where VPCs are attached
  • Remote - peering is implemented on the border/mixed leafs defined by the SwitchGroup object

VPC peering is only possible between VPCs attached to the same IPv4 namespace.

Local:

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-3\n  namespace: default\nspec:\n  permit: # Defines a pair of VPCs to peer\n  - vpc-1: {} # meaning all subnets of two VPCs will be able to communicate to each other\n    vpc-3: {} # more advanced filtering will be supported in future releases\n

Remote:

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-2\n  namespace: default\nspec:\n  permit:\n  - vpc-1: {}\n    vpc-2: {}\n  remote: border # indicates a switch group to implement the peering on\n
"},{"location":"user-guide/vpcs/#ipv4namespace","title":"IPv4Namespace","text":"

Defines a set of non-overlapping IPv4 ranges for VPC subnets. Each VPC belongs to a specific IPv4 namespace.

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: IPv4Namespace\nmetadata:\n  name: default\n  namespace: default\nspec:\n  subnets: # List of the subnets that VPCs can pick their subnets from\n  - 10.10.0.0/16\n
"},{"location":"user-guide/vpcs/#vlannamespace","title":"VLANNamespace","text":"

Defines non-overlapping VLAN ranges for attaching servers. Each switch belongs to a list of VLAN namespaces with non-overlapping VLAN ranges.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: VLANNamespace\nmetadata:\n  name: default\n  namespace: default\nspec:\n  ranges: # List of VLAN ranges that VPCs can pick their subnet VLANs from\n  - from: 1000\n    to: 2999\n
"},{"location":"vlab/demo/","title":"Demo on VLAB","text":"

You can find instructions on how to set up VLAB in the Overview and Running VLAB sections.

"},{"location":"vlab/demo/#default-topology","title":"Default topology","text":"

The default topology is Spine-Leaf with 2 spines, 2 MCLAG leafs and 1 non-MCLAG leaf. Optionally, you can choose to run the default Collapsed Core topology using the --fabric-mode collapsed-core (or -m collapsed-core) flag, which only consists of 2 switches.

For more details on customizing topologies, see the Running VLAB section.

In the default topology, the following Control Node and Switch VMs are created:

graph TD\n    CN[Control Node]\n\n    S1[Spine 1]\n    S2[Spine 2]\n\n    L1[MCLAG Leaf 1]\n    L2[MCLAG Leaf 2]\n    L3[Leaf 3]\n\n    CN --> L1\n    CN --> L2\n\n    S1 --> L1\n    S1 --> L2\n    S1 --> L3\n    S2 --> L1\n    S2 --> L2\n    S2 --> L3

As well as test servers:

graph TD\n    L1[MCLAG Leaf 1]\n    L2[MCLAG Leaf 2]\n    L3[Leaf 3]\n\n    TS1[Test Server 1]\n    TS2[Test Server 2]\n    TS3[Test Server 3]\n    TS4[Test Server 4]\n    TS5[Test Server 5]\n    TS6[Test Server 6]\n\n    TS1 --> L1\n    TS1 --> L2\n\n    TS2 --> L1\n    TS2 --> L2\n\n    TS3 --> L1\n    TS4 --> L2\n\n    TS5 --> L3\n    TS6 --> L3
"},{"location":"vlab/demo/#creating-and-attaching-vpcs","title":"Creating and attaching VPCs","text":"

You can create and attach VPCs to the VMs using the kubectl fabric vpc command on the control node or outside of the cluster using the kubeconfig. For example, run the following commands to create 2 VPCs with a single subnet each, a DHCP server enabled with an optional IP address range start defined, and attach them to some test servers:

core@control-1 ~ $ kubectl get conn | grep server\nserver-01--mclag--leaf-01--leaf-02   mclag          5h13m\nserver-02--mclag--leaf-01--leaf-02   mclag          5h13m\nserver-03--unbundled--leaf-01        unbundled      5h13m\nserver-04--bundled--leaf-02          bundled        5h13m\nserver-05--unbundled--leaf-03        unbundled      5h13m\nserver-06--bundled--leaf-03          bundled        5h13m\n\ncore@control-1 ~ $ kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10\n06:48:46 INF VPC created name=vpc-1\n\ncore@control-1 ~ $ kubectl fabric vpc create --name vpc-2 --subnet 10.0.2.0/24 --vlan 1002 --dhcp --dhcp-start 10.0.2.10\n06:49:04 INF VPC created name=vpc-2\n\ncore@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02\n06:49:24 INF VPCAttachment created name=vpc-1--default--server-01--mclag--leaf-01--leaf-02\n\ncore@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-2/default --connection server-02--mclag--leaf-01--leaf-02\n06:49:34 INF VPCAttachment created name=vpc-2--default--server-02--mclag--leaf-01--leaf-02\n

A VPC subnet should belong to some IPv4Namespace; the default one in the VLAB is 10.0.0.0/16:

core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   5h14m\n

After you have created the VPCs and VPCAttachments, you can check the status of the agents to make sure that the requested configuration was applied to the switches:

core@control-1 ~ $ kubectl get agents\nNAME       ROLE          DESCR           APPLIED   APPLIEDG   CURRENTG   VERSION\nleaf-01    server-leaf   VS-01 MCLAG 1   2m2s      5          5          v0.23.0\nleaf-02    server-leaf   VS-02 MCLAG 1   2m2s      4          4          v0.23.0\nleaf-03    server-leaf   VS-03           112s      5          5          v0.23.0\nspine-01   spine         VS-04           16m       3          3          v0.23.0\nspine-02   spine         VS-05           18m       4          4          v0.23.0\n

As you can see, the APPLIEDG and CURRENTG columns are equal, which means that the requested configuration has been applied.

"},{"location":"vlab/demo/#setting-up-networking-on-test-servers","title":"Setting up networking on test servers","text":"

You can use hhfab vlab ssh on the host to ssh into the test servers and configure networking there. For example, for server-01 (MCLAG attached to both leaf-01 and leaf-02) we need to configure a bond with a VLAN on top of it, while for server-05 (single-homed, unbundled, attached to leaf-03) we only need to configure a VLAN; both will get an IP address from the DHCP server. You can use the ip command to configure networking on the servers or use the little hhnet helper preinstalled by Fabricator on the test servers.

For server-01:

core@server-01 ~ $ hhnet cleanup\ncore@server-01 ~ $ hhnet bond 1001 enp2s1 enp2s2\n10.0.1.10/24\ncore@server-01 ~ $ ip a\n...\n3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:01\n4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:02\n6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff\n    inet6 fe80::45a:e8ff:fe38:3bea/64 scope link\n       valid_lft forever preferred_lft forever\n7: bond0.1001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff\n    inet 10.0.1.10/24 metric 1024 brd 10.0.1.255 scope global dynamic bond0.1001\n       valid_lft 86396sec preferred_lft 86396sec\n    inet6 fe80::45a:e8ff:fe38:3bea/64 scope link\n       valid_lft forever preferred_lft forever\n

And for server-02:

core@server-02 ~ $ hhnet cleanup\ncore@server-02 ~ $ hhnet bond 1002 enp2s1 enp2s2\n10.0.2.10/24\ncore@server-02 ~ $ ip a\n...\n3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:01\n4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:02\n8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff\n    inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link\n       valid_lft forever preferred_lft forever\n9: bond0.1002@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff\n    inet 10.0.2.10/24 metric 1024 brd 10.0.2.255 scope global dynamic bond0.1002\n       valid_lft 86185sec preferred_lft 86185sec\n    inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link\n       valid_lft forever preferred_lft forever\n
"},{"location":"vlab/demo/#testing-connectivity-before-peering","title":"Testing connectivity before peering","text":"

You can test connectivity between the servers before peering the VPCs using the ping command:

core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\nFrom 10.0.1.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2003ms\n
core@server-02 ~ $ ping 10.0.1.10\nPING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.\nFrom 10.0.2.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.2.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.2.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.1.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms\n
"},{"location":"vlab/demo/#peering-vpcs-and-testing-connectivity","title":"Peering VPCs and testing connectivity","text":"

To enable connectivity between the VPCs, you need to peer them using the kubectl fabric vpc peer command:

core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2\n07:04:58 INF VPCPeering created name=vpc-1--vpc-2\n

Make sure to wait until the peering is applied to the switches (check with the kubectl get agents command). After that you can test connectivity between the servers again:

core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\n64 bytes from 10.0.2.10: icmp_seq=1 ttl=62 time=6.25 ms\n64 bytes from 10.0.2.10: icmp_seq=2 ttl=62 time=7.60 ms\n64 bytes from 10.0.2.10: icmp_seq=3 ttl=62 time=8.60 ms\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2004ms\nrtt min/avg/max/mdev = 6.245/7.481/8.601/0.965 ms\n
core@server-02 ~ $ ping 10.0.1.10\nPING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.\n64 bytes from 10.0.1.10: icmp_seq=1 ttl=62 time=5.44 ms\n64 bytes from 10.0.1.10: icmp_seq=2 ttl=62 time=6.66 ms\n64 bytes from 10.0.1.10: icmp_seq=3 ttl=62 time=4.49 ms\n^C\n--- 10.0.1.10 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2004ms\nrtt min/avg/max/mdev = 4.489/5.529/6.656/0.886 ms\n

If you delete the VPC peering using the following command and wait for the agents to apply the configuration on the switches, you will see that connectivity is lost again:

core@control-1 ~ $ kubectl delete vpcpeering/vpc-1--vpc-2\nvpcpeering.vpc.githedgehog.com \"vpc-1--vpc-2\" deleted\n
core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\nFrom 10.0.1.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms\n

You may see duplicate packets in the output of the ping command between some of the servers. This is expected behavior and is caused by limitations of the VLAB environment.

core@server-01 ~ $ ping 10.0.5.10\nPING 10.0.5.10 (10.0.5.10) 56(84) bytes of data.\n64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms\n64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms (DUP!)\n64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms\n64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms (DUP!)\n64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.59 ms\n64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.60 ms (DUP!)\n^C\n--- 10.0.5.10 ping statistics ---\n3 packets transmitted, 3 received, +3 duplicates, 0% packet loss, time 2003ms\nrtt min/avg/max/mdev = 6.987/8.720/9.595/1.226 ms\n
"},{"location":"vlab/overview/","title":"Overview","text":"

It's possible to run Hedgehog Fabric in a fully virtual environment using QEMU/KVM and SONiC Virtual Switch (VS). It's a great way to try out the Fabric and learn about its look and feel, API, capabilities and so on. It's not suitable for any data plane or performance testing, nor for production use.

In the VLAB all switches start as empty VMs with only an ONIE image on them and go through the whole discovery, boot and installation process just like on real hardware.

"},{"location":"vlab/overview/#overview_1","title":"Overview","text":"

The hhfab CLI provides a special command, vlab, to manage virtual labs. It allows you to run a set of virtual machines to simulate the Fabric infrastructure, including the control node, switches and test servers, and it automatically runs the installer to get the Fabric up and running.

You can find more information about getting hhfab in the download section.

"},{"location":"vlab/overview/#system-requirements","title":"System Requirements","text":"

Currently, it's only tested on Ubuntu 22.04 LTS, but should work on any Linux distribution with QEMU/KVM support and fairly up-to-date packages.

The following packages need to be installed: qemu-kvm, swtpm-tools, tpm2-tools and socat; docker is also required to log in to the OCI registry.

By default, the VLAB topology is Spine-Leaf with 2 spines, 2 MCLAG leafs and 1 non-MCLAG leaf. Optionally, you can choose to run the default Collapsed Core topology using the --fabric-mode collapsed-core (or -m collapsed-core) flag, which only consists of 2 switches.

You can calculate the system requirements based on the resources allocated to the VMs, as follows:

  • Control Node: 6 vCPU, 6GB RAM, 100GB disk
  • Test Server: 2 vCPU, 768MB RAM, 10GB disk
  • Switch: 4 vCPU, 5GB RAM, 50GB disk

Which gives approximately the following requirements for the default topologies:

  • Spine-Leaf: 38 vCPUs, 36352 MB, 410 GB disk
  • Collapsed Core: 22 vCPUs, 19456 MB, 240 GB disk

Usually, none of the VMs will reach 100% utilization of the allocated resources, but as a rule of thumb you should make sure that you have at least the total allocated RAM and disk space available for all VMs.

NVMe SSD for VM disks is highly recommended.

"},{"location":"vlab/overview/#installing-prerequisites","title":"Installing prerequisites","text":"

On Ubuntu 22.04 LTS you can install all required packages using the following commands:

curl -fsSL https://get.docker.com -o install-docker.sh\nsudo sh install-docker.sh\nsudo usermod -aG docker $USER\nnewgrp docker\n
sudo apt install -y qemu-kvm swtpm-tools tpm2-tools socat\nsudo usermod -aG kvm $USER\nnewgrp kvm\nkvm-ok\n

Good output of the kvm-ok command should look like this:

ubuntu@docs:~$ kvm-ok\nINFO: /dev/kvm exists\nKVM acceleration can be used\n
"},{"location":"vlab/overview/#next-steps","title":"Next steps","text":"
  • Running VLAB
"},{"location":"vlab/running/","title":"Running VLAB","text":"

Please make sure to follow the prerequisites and check the system requirements in the VLAB Overview section before running VLAB.

"},{"location":"vlab/running/#initialize-vlab","title":"Initialize VLAB","text":"

As a first step you need to initialize Fabricator for the VLAB by running hhfab init --preset vlab (or -p vlab). It supports a lot of customization options, which you can see by adding --help to the command. If you want to tune the topology used for the VLAB, you can use the --fabric-mode (or -m) flag to choose between the spine-leaf (default) and collapsed-core topologies, and you can also configure the number of spines, leafs, connections and so on. For example, the --spines-count and --mclag-leafs-count flags allow you to set the number of spines and MCLAG leafs respectively.

So, by default you'll get 2 spines, 2 MCLAG leafs and 1 non-MCLAG leaf with 2 fabric connections (between each spine and leaf), 2 MCLAG peer links and 2 MCLAG session links, as well as 2 loopbacks per leaf for implementing the VPC Loopback workaround.

ubuntu@docs:~$ hhfab init -p vlab\n01:17:44 INF Generating wiring from gen flags\n01:17:44 INF Building wiring diagram fabricMode=spine-leaf chainControlLink=false controlLinksCount=0\n01:17:44 INF                     >>> spinesCount=2 fabricLinksCount=2\n01:17:44 INF                     >>> mclagLeafsCount=2 orphanLeafsCount=1\n01:17:44 INF                     >>> mclagSessionLinks=2 mclagPeerLinks=2\n01:17:44 INF                     >>> vpcLoopbacks=2\n01:17:44 WRN Wiring is not hydrated, hydrating reason=\"error validating wiring: ASN not set for switch leaf-01\"\n01:17:44 INF Initialized preset=vlab fabricMode=spine-leaf config=.hhfab/config.yaml wiring=.hhfab/wiring.yaml\n

Or, if you want to run the Collapsed Core topology with 2 MCLAG switches:

ubuntu@docs:~$ hhfab init -p vlab -m collapsed-core\n01:20:07 INF Generating wiring from gen flags\n01:20:07 INF Building wiring diagram fabricMode=collapsed-core chainControlLink=false controlLinksCount=0\n01:20:07 INF                     >>> mclagLeafsCount=2 orphanLeafsCount=0\n01:20:07 INF                     >>> mclagSessionLinks=2 mclagPeerLinks=2\n01:20:07 INF                     >>> vpcLoopbacks=2\n01:20:07 WRN Wiring is not hydrated, hydrating reason=\"error validating wiring: ASN not set for switch leaf-01\"\n01:20:07 INF Initialized preset=vlab fabricMode=collapsed-core config=.hhfab/config.yaml wiring=.hhfab/wiring.yaml\n

Or you can run a custom topology, e.g. with 2 spines, 4 MCLAG leafs and 2 non-MCLAG leafs, using flags:

ubuntu@docs:~$ hhfab init -p vlab --mclag-leafs-count 4 --orphan-leafs-count 2\n01:21:53 INF Generating wiring from gen flags\n01:21:53 INF Building wiring diagram fabricMode=spine-leaf chainControlLink=false controlLinksCount=0\n01:21:53 INF                     >>> spinesCount=2 fabricLinksCount=2\n01:21:53 INF                     >>> mclagLeafsCount=4 orphanLeafsCount=2\n01:21:53 INF                     >>> mclagSessionLinks=2 mclagPeerLinks=2\n01:21:53 INF                     >>> vpcLoopbacks=2\n01:21:53 WRN Wiring is not hydrated, hydrating reason=\"error validating wiring: ASN not set for switch leaf-01\"\n01:21:53 INF Initialized preset=vlab fabricMode=spine-leaf config=.hhfab/config.yaml wiring=.hhfab/wiring.yaml\n

Additionally, you can do extra Fabric configuration using flags on the init command or by passing a config file; more information about it is available in the Fabric Configuration section.

Once you have initialized the VLAB, you need to download all artifacts and build the installer using the hhfab build command. It will automatically download all required artifacts from the OCI registry and build the installer as well as all other prerequisites for running the VLAB.

"},{"location":"vlab/running/#build-the-installer-and-vlab","title":"Build the installer and VLAB","text":"
ubuntu@docs:~$ hhfab build\n01:23:33 INF Building component=base\n01:23:33 WRN Attention! Development mode enabled - this is not secure! Default users and keys will be created.\n...\n01:23:33 INF Building component=control-os\n01:23:33 INF Building component=k3s\n01:23:33 INF Downloading name=m.l.hhdev.io:31000/githedgehog/k3s:v1.27.4-k3s1 to=.hhfab/control-install\nCopying k3s-airgap-images-amd64.tar.gz  187.36 MiB / 187.36 MiB   \u2819   0.00 b/s done\nCopying k3s                               56.50 MiB / 56.50 MiB   \u2819   0.00 b/s done\n01:23:35 INF Building component=zot\n01:23:35 INF Downloading name=m.l.hhdev.io:31000/githedgehog/zot:v1.4.3 to=.hhfab/control-install\nCopying zot-airgap-images-amd64.tar.gz  19.24 MiB / 19.24 MiB   \u2838   0.00 b/s done\n01:23:35 INF Building component=misc\n01:23:35 INF Downloading name=m.l.hhdev.io:31000/githedgehog/fabricator/k9s:v0.27.4 to=.hhfab/control-install\nCopying k9s  57.75 MiB / 57.75 MiB   \u283c   0.00 b/s done\n...\n01:25:40 INF Planned bundle=control-install name=fabric-api-chart op=\"push fabric/charts/fabric-api:v0.23.0\"\n01:25:40 INF Planned bundle=control-install name=fabric-image op=\"push fabric/fabric:v0.23.0\"\n01:25:40 INF Planned bundle=control-install name=fabric-chart op=\"push fabric/charts/fabric:v0.23.0\"\n01:25:40 INF Planned bundle=control-install name=fabric-agent-seeder op=\"push fabric/agent/x86_64:latest\"\n01:25:40 INF Planned bundle=control-install name=fabric-agent op=\"push fabric/agent:v0.23.0\"\n...\n01:25:40 INF Recipe created bundle=control-install actions=67\n01:25:40 INF Creating recipe bundle=server-install\n01:25:40 INF Planned bundle=server-install name=toolbox op=\"file /opt/hedgehog/toolbox.tar\"\n01:25:40 INF Planned bundle=server-install name=toolbox-load op=\"exec ctr\"\n01:25:40 INF Planned bundle=server-install name=hhnet op=\"file /opt/bin/hhnet\"\n01:25:40 INF Recipe created bundle=server-install actions=3\n01:25:40 INF Building done took=2m6.813384532s\n01:25:40 INF Packing bundle=control-install target=control-install.tgz\n01:25:45 INF Packing bundle=server-install target=server-install.tgz\n01:25:45 INF Packing done took=5.67007384s\n

As soon as it's done you can run the VLAB using the hhfab vlab up command. It will automatically start all VMs and run the installers on the control node and test servers. It will take some time for all VMs to come up and for the installer to finish; you will see the progress in the output. If you stop the command, it stops all VMs, and you can re-run it to get the VMs back up and running.

"},{"location":"vlab/running/#run-vms-and-installers","title":"Run VMs and installers","text":"
ubuntu@docs:~$ hhfab vlab up\n01:29:13 INF Starting VLAB server... basedir=.hhfab/vlab-vms vm-size=\"\" dry-run=false\n01:29:13 INF VM id=0 name=control-1 type=control\n01:29:13 INF VM id=1 name=server-01 type=server\n01:29:13 INF VM id=2 name=server-02 type=server\n01:29:13 INF VM id=3 name=server-03 type=server\n01:29:13 INF VM id=4 name=server-04 type=server\n01:29:13 INF VM id=5 name=server-05 type=server\n01:29:13 INF VM id=6 name=server-06 type=server\n01:29:13 INF VM id=7 name=leaf-01 type=switch-vs\n01:29:13 INF VM id=8 name=leaf-02 type=switch-vs\n01:29:13 INF VM id=9 name=leaf-03 type=switch-vs\n01:29:13 INF VM id=10 name=spine-01 type=switch-vs\n01:29:13 INF VM id=11 name=spine-02 type=switch-vs\n01:29:13 INF Total VM resources cpu=\"38 vCPUs\" ram=\"36352 MB\" disk=\"410 GB\"\n...\n01:29:13 INF Preparing VM id=0 name=control-1 type=control\n01:29:13 INF Copying files  from=.hhfab/control-os/ignition.json to=.hhfab/vlab-vms/control-1/ignition.json\n01:29:13 INF Copying files  from=.hhfab/vlab-files/flatcar.img to=.hhfab/vlab-vms/control-1/os.img\n 947.56 MiB / 947.56 MiB [==========================================================] 5.13 GiB/s done\n01:29:14 INF Copying files  from=.hhfab/vlab-files/flatcar_efi_code.fd to=.hhfab/vlab-vms/control-1/efi_code.fd\n01:29:14 INF Copying files  from=.hhfab/vlab-files/flatcar_efi_vars.fd to=.hhfab/vlab-vms/control-1/efi_vars.fd\n01:29:14 INF Resizing VM image (may require sudo password) name=control-1\n01:29:17 INF Initializing TPM name=control-1\n...\n01:29:46 INF Installing VM name=control-1 type=control\n01:29:46 INF Installing VM name=server-01 type=server\n01:29:46 INF Installing VM name=server-02 type=server\n01:29:46 INF Installing VM name=server-03 type=server\n01:29:47 INF Installing VM name=server-04 type=server\n01:29:47 INF Installing VM name=server-05 type=server\n01:29:47 INF Installing VM name=server-06 type=server\n01:29:49 INF Running VM id=0 name=control-1 type=control\n01:29:49 INF Running VM id=1 name=server-01 type=server\n01:29:49 INF Running VM id=2 name=server-02 type=server\n01:29:49 INF Running VM id=3 name=server-03 type=server\n01:29:50 INF Running VM id=4 name=server-04 type=server\n01:29:50 INF Running VM id=5 name=server-05 type=server\n01:29:50 INF Running VM id=6 name=server-06 type=server\n01:29:50 INF Running VM id=7 name=leaf-01 type=switch-vs\n01:29:50 INF Running VM id=8 name=leaf-02 type=switch-vs\n01:29:51 INF Running VM id=9 name=leaf-03 type=switch-vs\n01:29:51 INF Running VM id=10 name=spine-01 type=switch-vs\n01:29:51 INF Running VM id=11 name=spine-02 type=switch-vs\n...\n01:30:41 INF VM installed name=server-06 type=server installer=server-install\n01:30:41 INF VM installed name=server-01 type=server installer=server-install\n01:30:41 INF VM installed name=server-02 type=server installer=server-install\n01:30:41 INF VM installed name=server-04 type=server installer=server-install\n01:30:41 INF VM installed name=server-03 type=server installer=server-install\n01:30:41 INF VM installed name=server-05 type=server installer=server-install\n...\n01:31:04 INF Running installer on VM name=control-1 type=control installer=control-install\n...\n01:35:15 INF Done took=3m39.586394608s\n01:35:15 INF VM installed name=control-1 type=control installer=control-install\n

Once you see VM installed name=control-1, the installer has finished, and you can get into the control node and other VMs to watch the Fabric coming up and the switches getting provisioned.

"},{"location":"vlab/running/#default-credentials","title":"Default credentials","text":"

Fabricator will create default users and keys for you to log in to the control node and test servers, as well as for the SONiC Virtual Switches.

  • Default user with passwordless sudo for the control node and test servers is core with password HHFab.Admin!.
  • Admin user with full access and passwordless sudo for the switches is admin with password HHFab.Admin!.
  • Read-only, non-sudo user with access only to the switch CLI for the switches is op with password HHFab.Op!.

"},{"location":"vlab/running/#accessing-the-vlab","title":"Accessing the VLAB","text":"

The hhfab vlab command provides ssh and serial subcommands to access the VMs. You can use ssh to get into the control node and test servers after the VMs are started. You can use serial to get into the switch VMs while they are provisioning and installing the software. After switches are installed you can use ssh to get into them.

You can select the device you want to access interactively or pass its name using the --vm flag.
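
For example, to skip the interactive menu (assuming the VM names from the default topology):

hhfab vlab ssh --vm server-01\n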

ubuntu@docs:~$ hhfab vlab ssh\nUse the arrow keys to navigate: \u2193 \u2191 \u2192 \u2190  and / toggles search\nSSH to VM:\n  \ud83e\udd94 control-1\n  server-01\n  server-02\n  server-03\n  server-04\n  server-05\n  server-06\n  leaf-01\n  leaf-02\n  leaf-03\n  spine-01\n  spine-02\n\n----------- VM Details ------------\nID:             0\nName:           control-1\nReady:          true\nBasedir:        .hhfab/vlab-vms/control-1\n

On the control node you'll have access to kubectl, the Fabric CLI and k9s to manage the Fabric. You can find information about switch provisioning by running kubectl get agents -o wide. It usually takes about 10-15 minutes for the switches to get installed.

After switches are provisioned you will see something like this:

core@control-1 ~ $ kubectl get agents -o wide\nNAME       ROLE          DESCR           HWSKU                      ASIC   HEARTBEAT   APPLIED   APPLIEDG   CURRENTG   VERSION   SOFTWARE                ATTEMPT   ATTEMPTG   AGE\nleaf-01    server-leaf   VS-01 MCLAG 1   DellEMC-S5248f-P-25G-DPB   vs     30s         5m5s      4          4          v0.23.0   4.1.1-Enterprise_Base   5m5s      4          10m\nleaf-02    server-leaf   VS-02 MCLAG 1   DellEMC-S5248f-P-25G-DPB   vs     27s         3m30s     3          3          v0.23.0   4.1.1-Enterprise_Base   3m30s     3          10m\nleaf-03    server-leaf   VS-03           DellEMC-S5248f-P-25G-DPB   vs     18s         3m52s     4          4          v0.23.0   4.1.1-Enterprise_Base   3m52s     4          10m\nspine-01   spine         VS-04           DellEMC-S5248f-P-25G-DPB   vs     26s         3m59s     3          3          v0.23.0   4.1.1-Enterprise_Base   3m59s     3          10m\nspine-02   spine         VS-05           DellEMC-S5248f-P-25G-DPB   vs     19s         3m53s     4          4          v0.23.0   4.1.1-Enterprise_Base   3m53s     4          10m\n

The HEARTBEAT column shows how long ago the switch sent its last heartbeat to the control node. The APPLIED column shows how long ago the switch applied the configuration. APPLIEDG shows the generation of the applied configuration, and CURRENTG shows the generation of the configuration the switch is supposed to run. If APPLIEDG and CURRENTG differ, the switch is in the process of applying the configuration.

At that point the Fabric is ready and you can use kubectl and kubectl fabric to manage it. You can find more about it in the Running Demo and User Guide sections.

"},{"location":"vlab/running/#getting-main-fabric-objects","title":"Getting main Fabric objects","text":"

You can get the main Fabric objects using the kubectl get command on the control node. You can find more details about using the Fabric in the User Guide, Fabric API and Fabric CLI sections.

For example, to get the list of switches you can run:

core@control-1 ~ $ kubectl get switch\nNAME       ROLE          DESCR           GROUPS   LOCATIONUUID                           AGE\nleaf-01    server-leaf   VS-01 MCLAG 1            5e2ae08a-8ba9-599a-ae0f-58c17cbbac67   6h10m\nleaf-02    server-leaf   VS-02 MCLAG 1            5a310b84-153e-5e1c-ae99-dff9bf1bfc91   6h10m\nleaf-03    server-leaf   VS-03                    5f5f4ad5-c300-5ae3-9e47-f7898a087969   6h10m\nspine-01   spine         VS-04                    3e2c4992-a2e4-594b-bbd1-f8b2fd9c13da   6h10m\nspine-02   spine         VS-05                    96fbd4eb-53b5-5a4c-8d6a-bbc27d883030   6h10m\n

Similarly, for the servers:

core@control-1 ~ $ kubectl get server\nNAME        TYPE      DESCR                        AGE\ncontrol-1   control   Control node                 6h10m\nserver-01             S-01 MCLAG leaf-01 leaf-02   6h10m\nserver-02             S-02 MCLAG leaf-01 leaf-02   6h10m\nserver-03             S-03 Unbundled leaf-01       6h10m\nserver-04             S-04 Bundled leaf-02         6h10m\nserver-05             S-05 Unbundled leaf-03       6h10m\nserver-06             S-06 Bundled leaf-03         6h10m\n

For connections:

core@control-1 ~ $ kubectl get connection\nNAME                                 TYPE           AGE\ncontrol-1--mgmt--leaf-01             management     6h11m\ncontrol-1--mgmt--leaf-02             management     6h11m\ncontrol-1--mgmt--leaf-03             management     6h11m\ncontrol-1--mgmt--spine-01            management     6h11m\ncontrol-1--mgmt--spine-02            management     6h11m\nleaf-01--mclag-domain--leaf-02       mclag-domain   6h11m\nleaf-01--vpc-loopback                vpc-loopback   6h11m\nleaf-02--vpc-loopback                vpc-loopback   6h11m\nleaf-03--vpc-loopback                vpc-loopback   6h11m\nserver-01--mclag--leaf-01--leaf-02   mclag          6h11m\nserver-02--mclag--leaf-01--leaf-02   mclag          6h11m\nserver-03--unbundled--leaf-01        unbundled      6h11m\nserver-04--bundled--leaf-02          bundled        6h11m\nserver-05--unbundled--leaf-03        unbundled      6h11m\nserver-06--bundled--leaf-03          bundled        6h11m\nspine-01--fabric--leaf-01            fabric         6h11m\nspine-01--fabric--leaf-02            fabric         6h11m\nspine-01--fabric--leaf-03            fabric         6h11m\nspine-02--fabric--leaf-01            fabric         6h11m\nspine-02--fabric--leaf-02            fabric         6h11m\nspine-02--fabric--leaf-03            fabric         6h11m\n

For IPv4 and VLAN namespaces:

core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   6h12m\n\ncore@control-1 ~ $ kubectl get vlanns\nNAME      AGE\ndefault   6h12m\n
"},{"location":"vlab/running/#reset-vlab","title":"Reset VLAB","text":"

To reset the VLAB and start over, just remove the .hhfab directory and run hhfab init again.
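
For example (assuming hhfab was initialized in the current working directory):

rm -rf .hhfab\nhhfab init -p vlab\n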

"},{"location":"vlab/running/#next-steps","title":"Next steps","text":"
  • Running Demo
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"

Hedgehog Open Network Fabric is an open networking platform that brings the user experience enjoyed by so many in the public cloud to the private environments. Without vendor lock-in.

Fabric is built around concept of VPCs (Virtual Private Clouds) similar to the public clouds and provides a multi-tenant API to define user intent on network isolation, connectivity and etc which gets automatically transformed into switches and software appliances configuration.

You can read more about concepts and architecture in the documentation.

You can find how to download and try Fabric on the self-hosted fully virtualized lab or on the hardware.

"},{"location":"architecture/fabric/","title":"Hedgehog Network Fabric","text":"

The Hedgehog Open Network Fabric is an open source network architecture that provides connectivity between virtual and physical workloads and provides a way to achieve network isolation between different groups of workloads using standar BGP EVPN and vxlan technology. The fabric provides a standard kubernetes interfaces to manage the elements in the physical network and provides a mechanism to configure virtual networks and define attachments to these virtual networks. The Hedgehog Fabric provides isolation between different groups of workloads by placing them in different virtual networks called VPC's. To achieve this we define different abstractions starting from the physical network where we define Connection which defines how a physical server on the network connects to a physical switch on the fabric.

"},{"location":"architecture/fabric/#underlay-network","title":"Underlay Network","text":"

The Hedgehog Fabric currently support two underlay network topologies.

"},{"location":"architecture/fabric/#collapsed-core","title":"Collapsed Core","text":"

A collapsed core topology is just a pair of switches connected in an MCLAG configuration with no other network elements. All workloads attach to these two switches.

The leafs in this setup are configured as an MCLAG pair, and servers can either be connected to both switches as an MCLAG port channel or as orphan ports connected to only one switch. Both leaves peer with external networks using BGP and act as the gateway for workloads attached to them. The configuration of the underlay in the collapsed core is very simple and is ideal for very small deployments.

"},{"location":"architecture/fabric/#spine-leaf","title":"Spine - Leaf","text":"

A spine-leaf topology is a standard Clos network with workloads attaching to leaf switches and spines providing connectivity between the different leaves.

This kind of topology is useful for bigger deployments and provides all the advantages of a typical Clos network. The underlay network is established using eBGP, where each leaf has a separate ASN and peers with all spines in the network. We used RFC 7938 as the reference for establishing the underlay network.

"},{"location":"architecture/fabric/#overlay-network","title":"Overlay Network","text":"

The overlay network runs on top of the underlay network to create virtual networks. The overlay network isolates control and data plane traffic between different virtual networks and the underlay network. Virtualization is achieved in the Hedgehog Fabric by encapsulating workload traffic in VXLAN tunnels that are sourced and terminated on the leaf switches in the network. The fabric uses BGP EVPN/VXLAN to enable the creation and management of virtual networks on top of the physical underlay. The fabric supports multiple virtual networks over the same underlay network to support multi-tenancy. Each virtual network in the Hedgehog Fabric is identified by a VPC. The following sections dive a bit deeper into a high-level overview of how VPCs and their associated objects are implemented in the Hedgehog Fabric.

"},{"location":"architecture/fabric/#vpc","title":"VPC","text":"

We know what a VPC is and how to attach workloads to a specific VPC. Let us now take a look at how this is actually implemented on the network to provide the view of a private network.

  • Each VPC is modeled as a VRF on each switch where there are VPC attachments defined for this VPC. The VRF is allocated its own VNI. The VRF is local to each switch while the VNI is global for the entire fabric. By mapping the VRF to a VNI and configuring an EVPN instance in each VRF, we establish a shared L3VNI across the entire fabric. All VRFs participating in this VNI can freely communicate with each other without the need for a policy. A VLAN is allocated for each VRF which functions as the IRB VLAN for the VRF.
  • The VRF created on each switch corresponding to a VPC configures a BGP instance with EVPN to advertise its locally attached subnets and import routes from its peered VPCs. The BGP instance in the tenant VRFs does not establish neighbor relationships and is purely used to advertise locally attached routes into the VPC (all VRFs with the same L3VNI) across leafs in the network.
  • A VPC can have multiple subnets. Each subnet in the VPC is modeled as a VLAN on the switch. The VLAN is only locally significant, and a given subnet might have different VLANs on different leaves in the network. We assign a globally significant VNI for each subnet. This VNI is used to extend the subnet across different leaves in the network and provides a view of a single stretched L2 domain if the applications need it.
  • The Hedgehog Fabric has a built-in DHCP server which will automatically assign IP addresses to each workload depending on the VPC it belongs to. This is achieved by configuring a DHCP relay on each of the server-facing VLANs. The DHCP server is accessible through the underlay network and is shared by all VPCs in the fabric. The built-in DHCP server is capable of identifying the source VPC of the request and assigning IP addresses from a pool allocated to the VPC at creation.
  • A VPC by default cannot communicate with anyone outside the VPC, and we need to define specific peering rules to allow communication to external networks or to other VPCs.
"},{"location":"architecture/fabric/#vpc-peering","title":"VPC Peering","text":"

To enable communication between two different VPCs, a VPC peering policy needs to be configured. The Hedgehog Fabric supports two different peering modes; a CLI example follows the list below.

  • Local Peering - A local peering directly imports routes from the other VPC locally. This is achieved by a simple import of routes from the peer VPC. In case there are no locally attached workloads from the peer VPC, the fabric automatically creates a stub VPC for peering and imports routes from it. This allows VPCs to peer with each other without the need for a dedicated peering leaf. If a local peering is done for a pair of VPCs which both have locally attached workloads, the fabric automatically allocates a pair of ports on the switch to route traffic between these VRFs using static routes. This is required because of limitations in the underlying platform. The net result is that the bandwidth between these two VPCs is limited by the bandwidth of the loopback interfaces allocated on the switch.
  • Remote Peering - Remote peering is implemented using a dedicated peering switch or switches which act as a rendezvous point for the two VPCs in the fabric. The set of switches to be used for peering is determined by configuration in the peering policy. When a remote peering policy is applied for a pair of VPCs, the VRFs corresponding to these VPCs on the peering switch advertise default routes into their specific VRFs identified by the L3VNI. All traffic that does not belong to the VPC is forwarded to the peering switch, which has routes to the other VPCs, and gets forwarded from there. The bandwidth limitation that exists in the local peering solution is solved here, as the bandwidth between the two VPCs is determined by the fabric cross-section bandwidth.
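
As an illustration, once two VPCs exist, peering between them can be requested with the Fabric CLI; the same command is shown in the Fabric CLI reference later in this document:

kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2\n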
"},{"location":"architecture/overview/","title":"Overview","text":"

Under construction.

"},{"location":"concepts/overview/","title":"Concepts","text":""},{"location":"concepts/overview/#introduction","title":"Introduction","text":"

Hedgehog Open Network Fabric is built on top of Kubernetes and uses the Kubernetes API to manage its resources. This means that all user-facing APIs are Kubernetes Custom Resources (CRDs), so you can use standard Kubernetes tools to manage Fabric resources.
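
For example, once the Fabric is installed, standard kubectl commands can be used to list Fabric objects; the plural resource names below are assumed from the CRD kinds and API groups shown later in this document and may differ per release:

kubectl get switches.wiring.githedgehog.com # assumed plural of the Switch CRD\nkubectl get connections.wiring.githedgehog.com # assumed plural of the Connection CRD\nkubectl get vpcs.vpc.githedgehog.com # assumed plural of the VPC CRD\n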

Hedgehog Fabric consists of the following components:

  • Fabricator - special tool that allows you to install and configure the Fabric as well as run virtual labs
  • Control Node - one or more Kubernetes nodes in a single cluster running the Fabric software
    • Das Boot - set of services providing switch boot and installation
    • Fabric Controller - main control plane component that manages Fabric resources
  • Fabric Kubectl plugin (Fabric CLI) - plugin for kubectl that allows you to manage Fabric resources in an easy way
  • Fabric Agent - runs on every switch and manages switch configuration
"},{"location":"concepts/overview/#fabric-api","title":"Fabric API","text":"

All infrastructure is represented as a set of Fabric resources (Kubernetes CRDs) named the Wiring Diagram. It allows you to define switches, servers, control nodes, external systems and connections between them in a single place and then use it to deploy and manage the whole infrastructure. On top of that, Fabric provides a set of APIs to manage VPCs and connections between them and between VPCs and External systems.

"},{"location":"concepts/overview/#wiring-diagram-api","title":"Wiring Diagram API","text":"

Wiring Diagram consists of the following resources:

  • \"Devices\": describes any device in the Fabric
    • Switch: configuration of the switch, like port group speeds, port breakouts, switch IP/ASN, etc.
    • Server: any physical server attached to the Fabric including control nodes
  • Connection: any logical connection for devices
    • usually it's a connection between two or more ports on two different devices
    • incl. MCLAG Peer Link, Unbundled/MCLAG server connections, Fabric connection between spine and leaf etc.
  • VLANNamespace -> non-overlapping VLAN ranges for attaching servers
  • IPv4Namespace -> non-overlapping IPv4 ranges for VPC subnets
"},{"location":"concepts/overview/#user-facing-api","title":"User-facing API","text":"
  • VPC API
    • VPC: Virtual Private Cloud, similar to the public cloud VPC it provides an isolated private network for the resources with support for multiple subnets each with user-provided VLANs and on-demand DHCP
    • VPCAttachment: represents a specific VPC subnet assignment to the Connection object which means an exact server port to VPC binding
    • VPCPeering: enables VPC to VPC connectivity (could be Local, where VPCs are used, or Remote peering on the border/mixed leaves)
  • External API
    • External: definition of the \"external system\" to peer with (could be one or multiple devices such as edge/provider routers)
    • ExternalAttachment: configuration for a specific switch (using Connection object) describing how it connects to an external system
    • ExternalPeering: enables VPC to External connectivity by exposing specific VPC subnets to the external system and allowing inbound routes from it
"},{"location":"concepts/overview/#fabricator","title":"Fabricator","text":"

Installer builder and VLAB.

  • Installer builder based on a preset (currently vlab for virtual & lab for physical)
  • Main input - wiring diagram
  • All input artifacts coming from OCI registry
  • Always full airgap (everything running from private registry)
  • Flatcar Linux for control node, generated ignition.json
  • Automatic k3s installation and private registry setup
  • All components and their dependencies running in K8s
  • Integrated Virtual Lab (VLAB) management
  • Future:
  • In-cluster (control) Operator to manage all components
  • Upgrades handling for everything starting with the control node OS
  • Installation progress, status and retries
  • Disaster recovery and backups
"},{"location":"concepts/overview/#das-boot","title":"Das Boot","text":"

Switch boot and installation.

  • Seeder
  • Actual switch provisioning
  • ONIE on a switch discovers control node using LLDP
  • It loads and runs our multi-stage installer
    • Network configuration & identity setup
    • Performs device registration
    • Hedgehog identity partition gets created on the switch
    • Downloads SONiC installer and runs it
    • Downloads the Agent and its config and installs them on the switch
  • Registration Controller
  • Device identity and registration
  • Actual SONiC installers
  • Misc: rsyslog/ntp
"},{"location":"concepts/overview/#fabric","title":"Fabric","text":"

Control plane and switch agent.

  • Currently Fabric is basically a single controller manager running in K8s
  • It includes controllers for different CRDs and needs
  • For example, auto-assigning VNIs to VPCs or generating Agent config
  • Additionally, it's running an admission webhook for our CRD APIs
  • The Agent is watching for the corresponding Agent CRD in the K8s API
  • It applies the changes and saves the new config locally
  • It reports status and information back to the API
  • Can perform reinstall and reboot of SONiC
"},{"location":"contribute/docs/","title":"Documentation","text":""},{"location":"contribute/docs/#getting-started","title":"Getting started","text":"

This documentation is done using MkDocs with multiple plugins enabled. It's based on Markdown; you can find a basic syntax overview here.

In order to contribute to the documentation, you'll need to have Git and Docker installed on your machine as well as any editor of your choice, preferably supporting Markdown preview. You can run the preview server using the following command:

make serve\n

Now you can open a continuously updated preview of your edits in a browser at http://127.0.0.1:8000. Pages will be automatically updated while you're editing.

Additionally you can run

make build\n

to make sure that your changes will be built correctly and don't break the documentation.

"},{"location":"contribute/docs/#workflow","title":"Workflow","text":"

If you want to quickly edit any page in the documentation, you can press the Edit this page icon at the top right of the page. It'll open the page in the GitHub editor. You can edit it and create a pull request with your changes.

Please, never push to the master or release/* branches directly. Always create a pull request and wait for the review.

Each pull request will be automatically built and a preview will be deployed. You can find the link to the preview in the comments in the pull request.
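
If you prefer working from a local clone, a typical flow (the branch name below is just an example) might look like this:

git checkout -b docs/my-change # example branch name\n# edit files under docs/, then verify the build\nmake build\ngit commit -am \"docs: describe my change\"\ngit push origin docs/my-change # then open a pull request against master\n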

"},{"location":"contribute/docs/#repository","title":"Repository","text":"

Documentation is organized in per-release branches:

  • master - ongoing development, not released yet, referenced as dev version in the documentation
  • release/alpha-1/release/alpha-2 - alpha releases, referenced as alpha-1/alpha-2 versions in the documentation, if patches released for alpha-1, they'll be merged into release/alpha-1 branch
  • release/v1.0 - first stable release, referenced as v1.0 version in the documentation, if patches (e.g. v1.0.1) released for v1.0, they'll be merged into release/v1.0 branch

The latest release branch is referenced as the latest version in the documentation and will be used by default when you open the documentation.

"},{"location":"contribute/docs/#file-layout","title":"File layout","text":"

All documentation files are located in the docs directory. Each file is a Markdown file with the .md extension. You can create subdirectories to organize your files. Each directory can have a .pages file that overrides the default navigation order and titles.

For example, top-level .pages in this repository looks like this:

nav:\n  - index.md\n  - getting-started\n  - concepts\n  - Wiring Diagram: wiring\n  - Install & Upgrade: install-upgrade\n  - User Guide: user-guide\n  - Reference: reference\n  - Troubleshooting: troubleshooting\n  - ...\n  - release-notes\n  - contribute\n

Where you can add pages by file name like index.md and the page title will be taken from the file (first line with #). Additionally, you can reference a whole directory to create a nested section in navigation. You can also add custom titles by using the : separator like Wiring Diagram: wiring where Wiring Diagram is a title and wiring is a file/directory name.

More details in the MkDocs Pages plugin.

"},{"location":"contribute/docs/#abbreaviations","title":"Abbreaviations","text":"

You can find abbreviations in includes/abbreviations.md file. You can add various abbreviations there and all usages of the defined words in the documentation will get a highlight.

For example, we have the following in includes/abbreviations.md:

*[HHFab]: Hedgehog Fabricator - a tool for building Hedgehog Fabric\n

It'll highlight all usages of HHFab in the documentation and show a tooltip with the definition like this: HHFab.

"},{"location":"contribute/docs/#markdown-extensions","title":"Markdown extensions","text":"

We're using the MkDocs Material theme with multiple extensions enabled. You can find a detailed reference here; below are some of the most useful ones.

To view code for examples, please, check the source code of this page.

"},{"location":"contribute/docs/#text-formatting","title":"Text formatting","text":"

Text can be deleted and replacement text added. This can also be combined into one single operation. Highlighting is also possible and comments can be added inline.

Formatting can also be applied to blocks by putting the opening and closing tags on separate lines and adding new lines between the tags and the content.

Keyboard keys can be written like so:

Ctrl+Alt+Del

And inline icons/emojis can be added like this:

:fontawesome-regular-face-laugh-wink:\n:fontawesome-brands-twitter:{ .twitter }\n

"},{"location":"contribute/docs/#admonitions","title":"Admonitions","text":"

Admonitions, also known as call-outs, are an excellent choice for including side content without significantly interrupting the document flow. Different types of admonitions are available, each with a unique icon and color. Details can be found here.

Lorem ipsum

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa.

"},{"location":"contribute/docs/#code-blocks","title":"Code blocks","text":"

Details can be found here.

Simple code block with line numbers and highlighted lines:

bubble_sort.py
def bubble_sort(items):\n    for i in range(len(items)):\n        for j in range(len(items) - 1 - i):\n            if items[j] > items[j + 1]:\n                items[j], items[j + 1] = items[j + 1], items[j]\n

Code annotations:

theme:\n  features:\n    - content.code.annotate # (1)\n
  1. I'm a code annotation! I can contain code, formatted text, images, ... basically anything that can be written in Markdown.
"},{"location":"contribute/docs/#tabs","title":"Tabs","text":"

You can use Tabs to better organize content.

C / C++
#include <stdio.h>\n\nint main(void) {\n  printf(\"Hello world!\\n\");\n  return 0;\n}\n
#include <iostream>\n\nint main(void) {\n  std::cout << \"Hello world!\" << std::endl;\n  return 0;\n}\n
"},{"location":"contribute/docs/#tables","title":"Tables","text":"Method Description GET Fetch resource PUT Update resource DELETE Delete resource"},{"location":"contribute/docs/#diagrams","title":"Diagrams","text":"

You can directly include Mermaid diagrams in your Markdown files. Details can be found here.

graph LR\n  A[Start] --> B{Error?};\n  B -->|Yes| C[Hmm...];\n  C --> D[Debug];\n  D --> B;\n  B ---->|No| E[Yay!];
sequenceDiagram\n  autonumber\n  Alice->>John: Hello John, how are you?\n  loop Healthcheck\n      John->>John: Fight against hypochondria\n  end\n  Note right of John: Rational thoughts!\n  John-->>Alice: Great!\n  John->>Bob: How about you?\n  Bob-->>John: Jolly good!
"},{"location":"contribute/overview/","title":"Overview","text":"

Under construction.

"},{"location":"getting-started/download/","title":"Download","text":""},{"location":"getting-started/download/#getting-access","title":"Getting access","text":"

Prior to General Availability, access to the full software is limited and requires a Design Partner Agreement. Please submit a ticket with the request using the Hedgehog Support Portal.

After that you will be provided with the credentials to access the software on GitHub Package. In order to use it, you need to log in to the registry using the following command:

docker login ghcr.io\n
"},{"location":"getting-started/download/#downloading-the-software","title":"Downloading the software","text":"

The main entry point for the software is the Hedgehog Fabricator CLI named hhfab. All software is published into the GitHub Package OCI registry, including binaries, container images, Helm charts, etc. The latest stable hhfab binary can be downloaded from the GitHub Package using the following command:

curl -fsSL https://i.hhdev.io/hhfab | bash\n

Or you can download a specific version using the following command:

curl -fsSL https://i.hhdev.io/hhfab | VERSION=alpha-X bash\n

The VERSION environment variable can be used to specify the version of the software to download. If it's not specified, the latest release will be downloaded. You can pick a specific release series (e.g. alpha-2) or a specific release.

It requires ORAS to be installed, which is used to download the binary from the OCI registry and can be installed using the following command:

curl -fsSL https://i.hhdev.io/oras | bash\n

Currently only Linux x86 is supported for running hhfab.

"},{"location":"getting-started/download/#next-steps","title":"Next steps","text":"
  • Concepts
  • Virtual LAB
  • Installation
  • User guide
"},{"location":"install-upgrade/build-wiring/","title":"Build Wiring Diagram","text":"

Under construction.

In the meantime, to have a look at a working wiring diagram for the Hedgehog Fabric, please run the sample generator that produces VLAB-compatible wiring diagrams:

ubuntu@sl-dev:~$ hhfab wiring sample -h\nNAME:\n   hhfab wiring sample - sample wiring diagram (would work for vlab)\n\nUSAGE:\n   hhfab wiring sample [command options] [arguments...]\n\nOPTIONS:\n   --brief, -b                    brief output (only warn and error) (default: false)\n   --fabric-mode value, -m value  fabric mode (one of: collapsed-core, spine-leaf) (default: \"spine-leaf\")\n   --help, -h                     show help\n   --verbose, -v                  verbose output (includes debug) (default: false)\n\n   wiring generator options:\n\n   --chain-control-link         chain control links instead of all switches directly connected to control node if fabric mode is spine-leaf (default: false)\n   --control-links-count value  number of control links if chain-control-link is enabled (default: 0)\n   --fabric-links-count value   number of fabric links if fabric mode is spine-leaf (default: 0)\n   --mclag-leafs-count value    number of mclag leafs (should be even) (default: 0)\n   --mclag-peer-links value     number of mclag peer links for each mclag leaf (default: 0)\n   --mclag-session-links value  number of mclag session links for each mclag leaf (default: 0)\n   --orphan-leafs-count value   number of orphan leafs (default: 0)\n   --spines-count value         number of spines if fabric mode is spine-leaf (default: 0)\n   --vpc-loopbacks value        number of vpc loopbacks for each switch (default: 0)\n
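
For example, the following invocation uses flags from the help output above to generate a spine-leaf diagram with two spines and a pair of MCLAG leaves; redirecting the output to a file is an assumption about how the generated diagram is consumed:

hhfab wiring sample --fabric-mode spine-leaf --spines-count 2 --fabric-links-count 2 --mclag-leafs-count 2 --mclag-peer-links 2 --mclag-session-links 2 --vpc-loopbacks 2 > sample-wiring.yaml\n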
"},{"location":"install-upgrade/config/","title":"Fabric Configuration","text":"
  • --fabric-mode <mode-name> (collapsed-core or spine-leaf) - Fabric mode to use, default is spine-leaf; in case of collapsed-core mode, there will be no VXLAN configured and only 2 switches will be used
  • --ntp-servers <servers> - Comma-separated list of NTP servers to use, default is time.cloudflare.com,time1.google.com,time2.google.com,time3.google.com,time4.google.com; it'll be used for both control nodes and switches
  • --dhcpd <mode-name> (isc or hedgehog) - DHCP server to use, default is isc; the hedgehog DHCP server enables use of on-demand DHCP for multiple IPv4/VLAN namespaces and overlapping IP ranges as well as adding DHCP leases into the Fabric API

You can find more information about using hhfab init in the help message by running it with the --help flag.
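
For example, assuming these flags are passed to hhfab init as described above (the wiring file name is a placeholder), a collapsed-core Fabric with the Hedgehog DHCP server could be initialized like this:

hhfab init --preset lab --wiring wiring.yaml --fabric-mode collapsed-core --dhcpd hedgehog\n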

"},{"location":"install-upgrade/onie-update/","title":"ONIE Update/Upgrade","text":""},{"location":"install-upgrade/onie-update/#hedgehog-onie-honie-supported-systems","title":"Hedgehog ONIE (HONIE) Supported Systems","text":"
  • DELL

  • S5248F-ON

  • S5232F-ON

  • Edge-Core

  • DCS501 (AS7726-32X)

  • DCS203 (AS7326-56X)

  • EPS203 (AS4630-54NPE)

"},{"location":"install-upgrade/onie-update/#updating-onie","title":"Updating ONIE","text":"
  • Via USB

  • For this example we will be updating a DELL S5248 to Hedgehog ONIE (HONIE)

    • Note: the USB port is on the back of the switch, next to the Management and Console ports
  • Prepare the USB stick by burning the honie-usb.img to a 4G or larger USB drive

  • Insert the USB drive into the switch

    • For example, to burn the file to disk X of an OSX machine

    • sudo dd if=honie-usb.img of=/dev/rdiskX bs=1m

  • Boot into ONIE Installer


  • ONIE will install the ONIE update and reboot

    • `ONIE: OS Install Mode ...

    Platform  : x86_64-dellemc_s5200_c3538-r0

    Version   : 3.40.1.1-7 <- Non HONIE version

    Build Date: 2020-03-24T20:44-07:00

    Info: Mounting EFI System on /boot/efi ...

    Info: BIOS mode: UEFI

    Info: Making NOS install boot mode persistent.

    Info: Using eth0 MAC address: 3c:2c:30:66:f0:00

    Info: eth0:\u00a0 Checking link... up.

    Info: Trying DHCPv4 on interface: eth0

    Warning: Unable to configure interface using DHCPv4: eth0

    ONIE: Using link-local IPv4 addr: eth0: 169.254.95.249/16

    Starting: klogd... done.

    Starting: dropbear ssh daemon... done.

    Starting: telnetd... done.

    discover: installer mode detected.\u00a0 Running installer.

    Starting: discover... done.

    Please press Enter to activate this console. Info: eth0:\u00a0 Checking link... up.

    Info: Trying DHCPv4 on interface: eth0

    Warning: Unable to configure interface using DHCPv4: eth0

    ONIE: Using link-local IPv4 addr: eth0: 169.254.6.139/16

    ONIE: Starting ONIE Service Discovery

    Info: Attempting file://dev/sdb1/onie-installer-x86_64-dellemc_s5248f_c3538-r0 ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer-x86_64-dellemc_s5248f_c3538-r0 ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer-x86_64-dellemc_s5248f_c3538-r0.bin ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer-x86_64-dellemc_s5248f_c3538.bin ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer-dellemc_s5248f_c3538 ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer-dellemc_s5248f_c3538.bin ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer-x86_64-bcm ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer-x86_64-bcm.bin ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer-x86_64 ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer-x86_64.bin ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer ...

    Info: Attempting file://dev/mmcblk0p1/onie-installer.bin ...

    ONIE: Executing installer: file://dev/sdb1/onie-installer-x86_64-dellemc_s5248f_c3538-r0

    Verifying image checksum ... OK.

    Preparing image archive ... OK.

    ONIE: Version       : 3.40.1.1-8 <- HONIE Version

    ONIE: Architecture  : x86_64

    ONIE: Machine       : dellemc_s5200_c3538

    ONIE: Machine Rev   : 0

    ONIE: Config Version: 1

    ONIE: Build Date    : 2023-12-15T23:43+00:00

    Installing ONIE on: /dev/sda

    ONIE: NOS install successful: file://dev/sdb1/onie-installer-x86_64-dellemc_s5248f_c3538-r0

    ONIE: Rebooting...

    discover: installer mode detected.

    Stopping: discover...start-stop-daemon: warning: killing process 665: No such process

    Info: Unmounting kernel filesystems

    umount: can't unmount /: Invalid argument

    The system is going down NOW!

    Sent SIGTERM to all processes

    Sent SIGKILL to all processes

    Requesting system reboot`

  • System is now ready for use

"},{"location":"install-upgrade/overview/","title":"Install Fabric","text":"

Under construction.

"},{"location":"install-upgrade/overview/#prerequisites","title":"Prerequisites","text":"
  • Have a machine with access to the internet to use Fabricator and build the installer
  • Have a machine to install the Fabric Control Node on, with enough NICs to connect to at least one switch using front panel ports, enough CPU and RAM (System Requirements), as well as IPMI access to it to install the OS
  • Have enough Supported Switches for your Fabric
"},{"location":"install-upgrade/overview/#main-steps","title":"Main steps","text":"

This chapter is dedicated to the Hedgehog Fabric installation on the bare-metal control node(s) and switches, their preparation and configuration.

Please get hhfab installed following the instructions from the Download section.

Main steps to install Fabric are:

  1. Install hhfab on the machines with access to internet
    1. Prepare Wiring Diagram
    2. Select Fabric Configuration
    3. Build Control Node configuration and installer
  2. Install Control Node
    1. Install Flatcar Linux on the Control Node
    2. Upload and run Control Node installer on the Control Node
  3. Prepare supported switches
    1. Install Hedgehog ONiE (HONiE) on them
    2. Reboot them into ONiE Install Mode and they will be automatically provisioned
"},{"location":"install-upgrade/overview/#build-control-node-configuration-and-installer","title":"Build Control Node configuration and installer","text":"

It's the only step that requires internet access to download artifacts and build the installer.

Once you've prepared the Wiring Diagram, you can initialize Fabricator by running the hhfab init command, passing optional configuration as well as wiring diagram file(s) as flags. Additionally, there are a lot of customizations available as flags, e.g. to set up default credentials, keys, etc.; please refer to hhfab init --help for more.

The --dev option enables development mode, which sets default credentials and keys for the Control Node and switches:

  • Default user with passwordless sudo for the control node and test servers is core with password HHFab.Admin!.
  • Admin user with full access and passwordless sudo for the switches is admin with password HHFab.Admin!.
  • Read-only, non-sudo user with access only to the switch CLI for the switches is op with password HHFab.Op!.

Alternatively, you can pass your own credentials and keys using the --authorized-key and --control-password-hash flags. The password hash can be generated using the openssl passwd -5 command. Further customizations are available in the config file that can be passed using the --config flag.

hhfab init --preset lab --dev --wiring file1.yaml --wiring file2.yaml\nhhfab build\n
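
Alternatively, a sketch of a non-dev initialization with your own credentials might look like this; the SSH key is a placeholder and openssl passwd -5 will prompt for the password interactively:

hhfab init --preset lab --wiring file1.yaml --authorized-key \"ssh-ed25519 AAAA... user@host\" --control-password-hash \"$(openssl passwd -5)\"\nhhfab build\n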

As a result, you will get the following files in the .hhfab directory or the one you've passed using the --basedir flag:

  • control-os/ignition.json - ignition config for the control node to get OS installed
  • control-install.tgz - installer for the control node, it will be uploaded to the control node and run there
"},{"location":"install-upgrade/overview/#install-control-node","title":"Install Control Node","text":"

It's a fully air-gapped installation and doesn't require internet access.

Please download the latest stable Flatcar Container Linux ISO from the link and boot into it (attaching media using IPMI, a USB stick or any other way).

Once you've booted into the Flatcar installer, you need to download the ignition.json built in the previous step to it and run the Flatcar installation:

sudo flatcar-install -d /dev/sda -i ignition.json\n

Where /dev/sda is the disk you want to install the Control Node to and ignition.json is the control-os/ignition.json file from the previous step downloaded to the Flatcar installer.

Once the installation is finished, reboot the machine and wait for it to boot into the installed Flatcar Linux.

At that point, you should log into the installed Flatcar Linux using the dev or provided credentials with the user core, and you can now install Hedgehog Open Network Fabric on it. Download control-install.tgz to the just installed Control Node (e.g. by using scp) and run it.
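
For example, assuming the Control Node is reachable at 192.168.88.10 (a placeholder address):

scp control-install.tgz core@192.168.88.10:~ # the address is a placeholder for your Control Node IP\n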

tar xzf control-install.tgz && cd control-install && sudo ./hhfab-recipe run\n

It'll output the log of installing the Fabric (including the Kubernetes cluster, OCI registry and misc components); you should see the following output in the end:

...\n01:34:45 INF Running name=reloader-image op=\"push fabricator/reloader:v1.0.40\"\n01:34:47 INF Running name=reloader-chart op=\"push fabricator/charts/reloader:1.0.40\"\n01:34:47 INF Running name=reloader-install op=\"file /var/lib/rancher/k3s/server/manifests/hh-reloader-install.yaml\"\n01:34:47 INF Running name=reloader-wait op=\"wait deployment/reloader-reloader\"\ndeployment.apps/reloader-reloader condition met\n01:35:15 INF Done took=3m39.586394608s\n

At that point, you can start interacting with the Fabric using kubectl, kubectl fabric and k9s preinstalled as part of the Control Node installer.
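
For example, a quick sanity check could be to confirm that the cluster is up and the Fabric CLI plugin is available; these are standard commands, and the exact output will depend on your setup:

kubectl get nodes # the control node should be in Ready state\nkubectl fabric --help # lists the available Fabric CLI commands\n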

You can now get HONiE installed on your switches and reboot them into ONiE Install Mode and they will be automatically provisioned from the Control Node.

"},{"location":"install-upgrade/requirements/","title":"System Requirements","text":"
  • Fast SSDs for the system/root and K8s & container runtime folders are required for stable work
  • SSDs are mandatory for Control Nodes
  • Minimal (non-HA) setup is a single Control Node
  • (Future) Full (HA) setup is at least 3 Control Nodes
  • (Future) Extra nodes could be used for things like the Logging, Monitoring and Alerting stack, etc.
"},{"location":"install-upgrade/requirements/#non-ha-minimal-setup-1-control-node","title":"Non-HA (minimal) setup - 1 Control Node","text":"
  • The Control Node runs a non-HA K8s Control Plane installation with a non-HA Hedgehog Fabric Control Plane on top of it
  • Not recommended for more than 10 devices participating in the Hedgehog Fabric or for production deployments
CPU: 4 minimal / 8 recommended; RAM: 12 GB minimal / 16 GB recommended; Disk: 100 GB minimal / 250 GB recommended"},{"location":"install-upgrade/requirements/#future-ha-setup-3-control-nodes-per-node","title":"(Future) HA setup - 3+ Control Nodes (per node)","text":"
  • Each Control Node runs part of the HA K8s Control Plane installation with the Hedgehog Fabric Control Plane on top of it in HA mode as well
  • Recommended for all cases where more than 10 devices participate in the Hedgehog Fabric
CPU: 4 minimal / 8 recommended; RAM: 12 GB minimal / 16 GB recommended; Disk: 100 GB minimal / 250 GB recommended"},{"location":"install-upgrade/requirements/#device-participating-in-the-hedgehog-fabric-eg-switch","title":"Device participating in the Hedgehog Fabric (e.g. switch)","text":"
  • (Future) Each participating device is part of the K8s cluster, so it runs the K8s kubelet
  • Additionally it runs the Hedgehog Fabric Agent that controls the device configuration
CPU: 1 minimal / 2 recommended; RAM: 1 GB minimal / 1.5 GB recommended; Disk: 5 GB minimal / 10 GB recommended"},{"location":"install-upgrade/supported-devices/","title":"Supported Devices","text":""},{"location":"install-upgrade/supported-devices/#spine","title":"Spine","text":"
  • DELL: S5232F-ON
  • EDGE-CORE: DCS204 (AS7726-32X)
"},{"location":"install-upgrade/supported-devices/#leaf","title":"Leaf","text":"
  • DELL: S5232F-ON, S5248F-ON
  • EDGE-CORE: DCS204 (AS7726-32x), DCS203 (AS7326-56x), EPS203 (AS4630-54NPE)
"},{"location":"reference/api/","title":"Fabric API","text":"

Under construction.

Please, refer to the User Guide chapter for examples and instructions and VLAB chapter for how to try different APIs in a virtual environment.

"},{"location":"reference/cli/","title":"Fabric CLI","text":"

Under construction.

Currently the Fabric CLI is represented by a kubectl plugin kubectl-fabric automatically installed on the Control Node. It is a wrapper around kubectl and the Kubernetes client which allows you to manage Fabric resources in a more convenient way. The Fabric CLI only provides a subset of the functionality available via the Fabric API and is focused on simplifying object creation and some manipulation of already existing objects, while the main get/list/update operations are expected to be done using kubectl.

core@control-1 ~ $ kubectl fabric\nNAME:\n   hhfctl - Hedgehog Fabric user client\n\nUSAGE:\n   hhfctl [global options] command [command options] [arguments...]\n\nVERSION:\n   v0.23.0\n\nCOMMANDS:\n   vpc                VPC commands\n   switch, sw, agent  Switch/Agent commands\n   connection, conn   Connection commands\n   switchgroup, sg    SwitchGroup commands\n   external           External commands\n   help, h            Shows a list of commands or help for one command\n\nGLOBAL OPTIONS:\n   --verbose, -v  verbose output (includes debug) (default: true)\n   --help, -h     show help\n   --version, -V  print the version\n
"},{"location":"reference/cli/#vpc","title":"VPC","text":"

Create VPC named vpc-1 with subnet 10.0.1.0/24 and VLAN 1001 with DHCP enabled and DHCP range starting from 10.0.1.10 (optional):

core@control-1 ~ $ kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10\n

Attach previously created VPC to the server server-01 (which is connected to the Fabric using the server-01--mclag--leaf-01--leaf-02 Connection):

core@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02\n

To peer VPC with another VPC (e.g. vpc-2) use the following command:

core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2\n
"},{"location":"release-notes/","title":"Release notes","text":""},{"location":"release-notes/#alpha-3","title":"Alpha-3","text":""},{"location":"release-notes/#sonic-support","title":"SONiC support","text":"

Broadcom Enterprise SONiC 4.2.0 (previously 4.1.1)

"},{"location":"release-notes/#multiple-ipv4-namespaces","title":"Multiple IPv4 namespaces","text":"
  • Support for multiple overlapping IPv4 addresses in the Fabric
  • Integrated with on-demand DHCP Service (see below)
  • All IPv4 addresses within a given VPC must be unique
  • Only VPCs with non-overlapping IPv4 subnets can peer within the Fabric
  • An external NAT device is required for peering of VPCs with overlapping subnets
"},{"location":"release-notes/#hedgehog-fabric-dhcp-and-ipam-service","title":"Hedgehog Fabric DHCP and IPAM Service","text":"
  • Custom DHCP server executing in the controllers
  • Multiple IPv4 namespaces with overlapping subnets
  • Multiple VLAN namespaces with overlapping VLAN ranges
  • DHCP leases exposed through the Fabric API
  • Available for VLAB as well as the Fabric
"},{"location":"release-notes/#hedgehog-fabric-ntp-service","title":"Hedgehog Fabric NTP Service","text":"
  • Custom NTP servers at the controller
  • Switches automatically configured to use control node as NTP server
  • NTP servers can be configured to sync to external time/NTP server
"},{"location":"release-notes/#staticexternal-connections","title":"StaticExternal connections","text":"
  • Directly connect external infrastructure services (such as NTP, DHCP, DNS) to the Fabric
  • No BGP is required, just automatically configured static routes
"},{"location":"release-notes/#dhcp-relay-to-3rd-party-dhcp-service","title":"DHCP Relay to 3rd party DHCP service","text":"

Support for 3rd party DHCP server (DHCP Relay config) through the API

"},{"location":"release-notes/#alpha-2","title":"Alpha-2","text":""},{"location":"release-notes/#controller","title":"Controller","text":"

A single controller. No controller redundancy.

"},{"location":"release-notes/#controller-connectivity","title":"Controller connectivity","text":"

For CLOS/LEAF-SPINE fabrics, it is recommended that the controller connects to one or more leaf switches in the fabric on front-facing data ports. Connection to two or more leaf switches is recommended for redundancy and performance. No port break-out functionality is supported for controller connectivity.

Spine controller connectivity is not supported.

For Collapsed Core topology, the controller can connect on front-facing data ports, as described above, or on management ports. Note that every switch in the collapsed core topology must be connected to the controller.

Management port connectivity can also be supported for CLOS/LEAF-SPINE topology but requires all switches connected to the controllers via management ports. No chain booting is possible for this configuration.

"},{"location":"release-notes/#controller-requirements","title":"Controller requirements","text":"
  • One 1 gig+ port to connect to each controller-attached switch
  • One+ 1 gig+ ports connecting to the external management network.
  • 4 Cores, 12GB RAM, 100GB SSD.
"},{"location":"release-notes/#chain-booting","title":"Chain booting","text":"

Switches not directly connecting to the controllers can chain boot via the data network.

"},{"location":"release-notes/#topology-support","title":"Topology support","text":"

CLOS/LEAF-SPINE and Collapsed Core topologies are supported.

"},{"location":"release-notes/#leaf-roles-for-clos-topology","title":"LEAF Roles for CLOS topology","text":"

server leaf, border leaf, and mixed leaf modes are supported.

"},{"location":"release-notes/#collapsed-core-topology","title":"Collapsed Core Topology","text":"

Two ToR/LEAF switches with MCLAG server connection.

"},{"location":"release-notes/#server-multihoming","title":"Server multihoming","text":"

MCLAG-only.

"},{"location":"release-notes/#device-support","title":"Device support","text":""},{"location":"release-notes/#leafs","title":"LEAFs","text":"
  • DELL:
  • S5248F-ON
  • S5232F-ON

  • Edge-Core:

  • DCS204 (AS7726-32X)
  • DCS203 (AS7326-56X)
  • EPS203 (AS4630-54NPE)
"},{"location":"release-notes/#spines","title":"SPINEs","text":"
  • DELL:
  • S5232F-ON
  • Edge-Core:
  • DCS204 (AS7726-32X)
"},{"location":"release-notes/#underlay-configuration","title":"Underlay configuration:","text":"

Port speed, port group speed, port breakouts are configurable through the API

"},{"location":"release-notes/#vpc-overlay-implementation","title":"VPC (overlay) Implementation","text":"

VXLAN-based BGP eVPN.

"},{"location":"release-notes/#multi-subnet-vpcs","title":"Multi-subnet VPCs","text":"

A VPC consists of subnets, each with a user-specified VLAN for external host/server connectivity.

"},{"location":"release-notes/#multiple-ip-address-namespaces","title":"Multiple IP address namespaces","text":"

Multiple IP address namespaces are supported per fabric. Each VPC belongs to the corresponding IPv4 namespace. There are no subnet overlaps within a single IPv4 namespace. IP address namespaces can mutually overlap.

"},{"location":"release-notes/#vlan-namespace","title":"VLAN Namespace","text":"

VLAN Namespaces guarantee the uniqueness of VLANs for a set of participating devices. Each switch belongs to a list of VLAN namespaces with non-overlapping VLAN ranges. Each VPC belongs to the VLAN namespace. There are no VLAN overlaps within a single VLAN namespace.

This feature is useful when multiple VM-management domains (like separate VMware clusters) connect to the fabric.

"},{"location":"release-notes/#switch-groups","title":"Switch Groups","text":"

Each switch belongs to a list of switch groups used for identifying redundancy groups for things like external connectivity.

"},{"location":"release-notes/#mutual-vpc-peering","title":"Mutual VPC Peering","text":"

VPC peering is supported and possible between a pair of VPCs that belong to the same IPv4 and VLAN namespaces.

"},{"location":"release-notes/#external-vpc-peering","title":"External VPC Peering","text":"

VPC peering provides the means of peering with external networking devices (edge routers, firewalls, or data center interconnects). VPC egress/ingress is pinned to a specific group of border or mixed leaf switches. Multiple \"external systems\" with multiple devices/links in each of them are supported.

The user controls what subnets/prefixes to import and export from/to the external system.

No NAT function is supported for external peering.

"},{"location":"release-notes/#host-connectivity","title":"Host connectivity","text":"

Servers can be attached as Unbundled, Bundled (LAG) and MCLAG

"},{"location":"release-notes/#dhcp-service","title":"DHCP Service","text":"

VPC is provided with an optional DHCP service with simple IPAM

"},{"location":"release-notes/#local-vpc-peering-loopbacks","title":"Local VPC peering loopbacks","text":"

To enable local inter-vpc peering that allows routing of traffic between VPCs, local loopbacks are required to overcome silicon limitations.

"},{"location":"release-notes/#scale","title":"Scale","text":"
  • Maximum fabric size: 20 LEAF/ToR switches.
  • Routes per switch: 64k
  • [ silicon platform limitation in Trident 3; limits the number of endpoints in the fabric ]
  • Total VPCs per switch: up to 1000
  • [ Including VPCs attached at the given switch and VPCs peered with ]
  • Total VPCs per VLAN namespace: up to 3000
  • [ assuming 1 subnet per VPC ]
  • Total VPCs per fabric: unlimited
  • [ if using multiple VLAN namespaces ]
  • VPC subnets per switch: up to 3000
  • VPC subnets per VLAN namespace up to 3000
  • Subnets per VPC: up to 20
  • [ just a validation; the current design allows up to 100, but it could be increased even more in the future ]
  • VPC Slots per remote peering @ switch: 2
  • Max VPC loopbacks per switch: 500
  • [ VPC loopback workarounds per switch are needed for local peering when both VPCs are attached to the switch or for external peering with VPC attached on the same switch that is peering with external ]
"},{"location":"release-notes/#software-versions","title":"Software versions","text":"
  • Fabric: v0.23.0
  • Das-boot: v0.11.4
  • Fabricator: v0.8.0
  • K3s: v1.27.4-k3s1
  • Zot: v1.4.3
  • SONiC
  • Broadcom Enterprise Base 4.1.1
  • Broadcom Enterprise Campus 4.1.1
"},{"location":"release-notes/#known-limitations","title":"Known Limitations","text":"
  • MTU setting inflexibility:
  • Fabric MTU is 9100 and not configurable right now (A3 planned)
  • Server-facing MTU is 9136 and not configurable right now (A3+)
  • no support for Access VLANs for attaching servers (A3 planned)
  • VPC peering is enabled on all subnets of the participating VPCs. No subnet selection for peering. (A3 planned)
  • peering with external is only possible with a VLAN (by design)
  • If you have VPCs with remote peering on a switch group, you can't attach those VPCs on that switch group (by definition of remote peering)
  • if a group of VPCs has remote peering on a switch group, any other VPC that will peer with those VPCs remotely will need to use the same switch group (by design)
  • if VPC peers with external, it can only be remotely peered with on the same switches that have a connection to that external (by design)
  • the server-facing connection object is immutable as it's very easy to get into a deadlock; re-create it to change it (A3+)
"},{"location":"release-notes/#alpha-1","title":"Alpha-1","text":"
  • Controller:

    • A single controller connecting to each switch management port. No redundancy.
  • Controller requirements:

    • One 1 gig port per switch
    • One+ 1 gig+ ports connecting to the external management network.
    • 4 Cores, 12GB RAM, 100GB SSD.
  • Seeder:

    • Seeder and Controller functions co-resident on the control node. Switch booting and ZTP on management ports directly connected to the controller.
  • HHFab - the fabricator:

    • An operational tool to generate, initiate, and maintain the fabric software appliance. Allows fabrication of the environment-specific image with all of the required underlay and security configuration baked in.
  • DHCP Service:

    • A simple DHCP server for assigning IP addresses to hosts connecting to the fabric, optimized for use with VPC overlay.
  • Topology:

    • Support for a Collapsed Core topology with 2 switch nodes.
  • Underlay:

    • A simple single-VRF network with a BGP control plane. IPv4 support only.
  • External connectivity:

    • An edge router must be connected to selected ports of one or both switches. IPv4 support only.
  • Dual-homing:

    • L2 Dual homing with MCLAG is implemented to connect servers, storage, and other devices in the data center. NIC bonding and LACP configuration at the host are required.
  • VPC overlay implementation:

    • VPC is implemented as a set of ACLs within the underlay VRF. External connectivity to the VRF is performed via internally managed VLANs. IPv4 support only.
  • VPC Peering:

    • VPC peering is performed via ACLs with no fine-grained control.
  • NAT

    • DNAT + SNAT are supported per VPC. SNAT and DNAT can't be enabled per VPC simultaneously.
  • Hardware support:

    • Please see the supported hardware list.
  • Virtual Lab:

    • A simulation of the two-node Collapsed Core Topology as a virtual environment. Designed for use as a network simulation, a configuration scratchpad, or a training/demonstration tool. Minimum requirements: 8 cores, 24GB RAM, 100GB SSD
  • Limitations:

    • 40 VPCs max
    • 50 VPC peerings
    • [ 768 ACL entry platform limitation from Broadcom ]
  • Software versions:

    • Fabricator: v0.5.2
    • Fabric: v0.18.6
    • Das-boot: v0.8.2
    • K3s: v1.27.4-k3s1
    • Zot: v1.4.3
    • SONiC: Broadcom Enterprise Base 4.1.1
"},{"location":"troubleshooting/overview/","title":"Troubleshooting","text":"

Under construction.

"},{"location":"user-guide/connections/","title":"Connections","text":"

The Connection object represents logical and physical connections between any devices in the Fabric (Switch, Server and External objects). It's needed to define all connections between the devices in the Wiring Diagram.

There are multiple types of connections.

"},{"location":"user-guide/connections/#server-connections-user-facing","title":"Server connections (user-facing)","text":"

Server connections are used to connect workload servers to the switches.

"},{"location":"user-guide/connections/#unbundled","title":"Unbundled","text":"

Unbundled server connections are used to connect servers to a single switch using a single port.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: server-4--unbundled--s5248-02\n  namespace: default\nspec:\n  unbundled:\n    link: # Defines a single link between a server and a switch\n      server:\n        port: server-4/enp2s1\n      switch:\n        port: s5248-02/Ethernet3\n
"},{"location":"user-guide/connections/#bundled","title":"Bundled","text":"

Bundled server connections are used to connect servers to a single switch using multiple ports (port channel, LAG).

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: server-3--bundled--s5248-01\n  namespace: default\nspec:\n  bundled:\n    links: # Defines multiple links between a single server and a single switch\n    - server:\n        port: server-3/enp2s1\n      switch:\n        port: s5248-01/Ethernet3\n    - server:\n        port: server-3/enp2s2\n      switch:\n        port: s5248-01/Ethernet4\n
"},{"location":"user-guide/connections/#mclag","title":"MCLAG","text":"

MCLAG server connections are used to connect servers to a pair of switches using multiple ports (dual-homing).

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: server-1--mclag--s5248-01--s5248-02\n  namespace: default\nspec:\n  mclag:\n    links: # Defines multiple links between a single server and a pair of switches\n    - server:\n        port: server-1/enp2s1\n      switch:\n        port: s5248-01/Ethernet1\n    - server:\n        port: server-1/enp2s2\n      switch:\n        port: s5248-02/Ethernet1\n
"},{"location":"user-guide/connections/#switch-connections-fabric-facing","title":"Switch connections (fabric-facing)","text":"

Switch connections are used to connect switches to each other and provide any needed \"service\" connectivity to implement the Fabric features.

"},{"location":"user-guide/connections/#fabric","title":"Fabric","text":"

Connections between a specific spine and leaf; covers all actual wires between a single pair.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: s5232-01--fabric--s5248-01\n  namespace: default\nspec:\n  fabric:\n    links: # Defines multiple links between a spine-leaf pair of switches with IP addresses\n    - leaf:\n        ip: 172.30.30.1/31\n        port: s5248-01/Ethernet48\n      spine:\n        ip: 172.30.30.0/31\n        port: s5232-01/Ethernet0\n    - leaf:\n        ip: 172.30.30.3/31\n        port: s5248-01/Ethernet56\n      spine:\n        ip: 172.30.30.2/31\n        port: s5232-01/Ethernet4\n
"},{"location":"user-guide/connections/#mclag-domain","title":"MCLAG-Domain","text":"

Used to define a pair of MCLAG switches with Session and Peer link between them.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: s5248-01--mclag-domain--s5248-02\n  namespace: default\nspec:\n  mclagDomain:\n    peerLinks: # Defines multiple links between a pair of MCLAG switches for Peer link\n    - switch1:\n        port: s5248-01/Ethernet72\n      switch2:\n        port: s5248-02/Ethernet72\n    - switch1:\n        port: s5248-01/Ethernet73\n      switch2:\n        port: s5248-02/Ethernet73\n    sessionLinks: # Defines multiple links between a pair of MCLAG switches for Session link\n    - switch1:\n        port: s5248-01/Ethernet74\n      switch2:\n        port: s5248-02/Ethernet74\n    - switch1:\n        port: s5248-01/Ethernet75\n      switch2:\n        port: s5248-02/Ethernet75\n
"},{"location":"user-guide/connections/#vpc-loopback","title":"VPC-Loopback","text":"

Required to implement a workaround for local VPC peering (when both VPCs are attached to the same switch), which is caused by a hardware limitation of the currently supported switches.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: s5248-01--vpc-loopback\n  namespace: default\nspec:\n  vpcLoopback:\n    links: # Defines multiple loopbacks on a single switch\n    - switch1:\n        port: s5248-01/Ethernet16\n      switch2:\n        port: s5248-01/Ethernet17\n    - switch1:\n        port: s5248-01/Ethernet18\n      switch2:\n        port: s5248-01/Ethernet19\n
"},{"location":"user-guide/connections/#management","title":"Management","text":"

Connection to the Control Node.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: control-1--mgmt--s5248-01-front\n  namespace: default\nspec:\n  management:\n    link: # Defines a single link between a control node and a switch\n      server:\n        ip: 172.30.20.0/31\n        port: control-1/enp2s1\n      switch:\n        ip: 172.30.20.1/31\n        port: s5248-01/Ethernet0\n
"},{"location":"user-guide/connections/#connecting-fabric-to-outside-world","title":"Connecting Fabric to outside world","text":"

Provides connectivity to the outside world, e.g. internet, other networks or some other systems such as DHCP, NTP, LMA, AAA services.

"},{"location":"user-guide/connections/#staticexternal","title":"StaticExternal","text":"

A simple way to connect things like a DHCP server directly to the Fabric by connecting it to specific switch ports.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: third-party-dhcp-server--static-external--s5248-04\n  namespace: default\nspec:\n  staticExternal:\n    link:\n      switch:\n        port: s5248-04/Ethernet1 # switch port to use\n        ip: 172.30.50.5/24 # IP address that will be assigned to the switch port\n        vlan: 1005 # Optional VLAN ID to use for the switch port, if 0 - no VLAN is configured\n        subnets: # List of subnets that will be routed to the switch port using static routes and next hop\n          - 10.99.0.1/24\n          - 10.199.0.100/32\n        nextHop: 172.30.50.1 # Next hop IP address that will be used when configuring static routes for the \"subnets\" list\n
"},{"location":"user-guide/connections/#external","title":"External","text":"

Connection to the external systems, e.g. edge/provider routers using BGP peering and configuring Inbound/Outbound communities as well as granularly controlling what's getting advertised and which routes are accepted.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: s5248-03--external--5835\n  namespace: default\nspec:\n  external:\n    link: # Defines a single link between a switch and an external system\n      switch:\n        port: s5248-03/Ethernet3\n
"},{"location":"user-guide/devices/","title":"Switches and Servers","text":"

All devices in the Hedgehog Fabric are divided into two groups, switches and servers, represented by the corresponding Switch and Server objects in the API. They are needed to define all participants of the Fabric and their roles in the Wiring Diagram as well as the Connections between them.

"},{"location":"user-guide/devices/#switches","title":"Switches","text":"

Switches are the main building blocks of the Fabric. They are represented by Switch objects in the API and consist of basic information like name, description, location, role, etc. as well as port group speeds, port breakouts, ASN, IP addresses, etc.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Switch\nmetadata:\n  name: s5248-01\n  namespace: default\nspec:\n  asn: 65101 # ASN of the switch\n  description: leaf-1\n  ip: 172.30.10.100/32 # Switch IP that will be accessible from the Control Node\n  location:\n    location: gen--default--s5248-01\n  locationSig:\n    sig: <undefined>\n    uuidSig: <undefined>\n  portBreakouts: # Configures port breakouts for the switch\n    1/55: 4x25G\n  portGroupSpeeds: # Configures port group speeds for the switch\n    \"1\": 10G\n    \"2\": 10G\n  protocolIP: 172.30.11.100/32 # Used as BGP router ID\n  role: server-leaf # Role of the switch, one of server-leaf, border-leaf and mixed-leaf\n  vlanNamespaces: # Defines which VLANs could be used to attach servers\n  - default\n  vtepIP: 172.30.12.100/32\n  groups: # Defines which groups the switch belongs to\n  - some-group\n

The SwitchGroup is just a marker at that point and doesn't have any configuration options.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: SwitchGroup\nmetadata:\n  name: border\n  namespace: default\nspec: {}\n
"},{"location":"user-guide/devices/#servers","title":"Servers","text":"

It includes both control nodes and user's workload servers.

Control Node:

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Server\nmetadata:\n  name: control-1\n  namespace: default\nspec:\n  type: control # Type of the server, one of control or \"\" (empty) for regular workload server\n

Regular workload server:

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Server\nmetadata:\n  name: server-1\n  namespace: default\nspec:\n  description: MH s5248-01/E1 s5248-02/E1\n
"},{"location":"user-guide/external/","title":"External Peering","text":"

Hedgehog Fabric uses the Border Leaf concept to exchange VPC routes outside the Fabric and provide L3 connectivity. The External Peering feature allows you to set up an external peering endpoint and to enforce several policies between internal and external endpoints.

Hedgehog Fabric does not operate Edge side devices.

"},{"location":"user-guide/external/#overview","title":"Overview","text":"

Traffic exits the Fabric on Border Leafs that are connected to Edge devices. Border Leafs are suitable for terminating L2VPN connections, distinguishing VPC L3 routable traffic towards Edge devices, as well as landing VPC servers. Border Leafs (or Borders) can connect to several Edge devices.

External Peering is only available on switch devices that are capable of sub-interfaces.

"},{"location":"user-guide/external/#connect-border-leaf-to-edge-device","title":"Connect Border Leaf to Edge device","text":"

In order to distinguish VPC traffic, an Edge device should be capable of:

  • Setting up BGP IPv4 to advertise and receive routes from the Fabric
  • Connecting to a Fabric Border Leaf over VLAN
  • Marking egress routes towards the Fabric with BGP Communities
  • Filtering ingress routes from the Fabric by BGP Communities

All other filtering and processing of L3 Routed Fabric traffic should be done on the Edge devices.

"},{"location":"user-guide/external/#control-plane","title":"Control Plane","text":"

The Fabric shares VPC routes with Edge devices via BGP. Peering is done over VLAN in the IPv4 Unicast AFI/SAFI.

"},{"location":"user-guide/external/#data-plane","title":"Data Plane","text":"

VPC L3 routable traffic will be tagged with a VLAN and sent to the Edge device. Later processing of VPC traffic (NAT, PBR, etc.) should happen on the Edge devices.

"},{"location":"user-guide/external/#vpc-access-to-edge-device","title":"VPC access to Edge device","text":"

Each VPC within the Fabric can be allowed to access Edge devices. Additional filtering can be applied to the routes that a VPC can export to Edge devices and import from them.

"},{"location":"user-guide/external/#api-and-implementation","title":"API and implementation","text":""},{"location":"user-guide/external/#external","title":"External","text":"

General configuration starts with the specification of External objects. Each object of the External type can represent a set of Edge devices, a single BGP instance on an Edge device, or any other united Edge entity that can be described with the following config:

  • Name of External
  • Inbound routes are marked with dedicated BGP community
  • Outbound routes are required to be marked with dedicated community

Each External should be bound to some VPC IP Namespace, otherwise prefix overlaps may happen.

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: External\nmetadata:\n  name: default--5835\nspec:\n  ipv4Namespace: # VPC IP Namespace\n  inboundCommunity: # BGP Standard Community of routes from Edge devices\n  outboundCommunity: # BGP Standard Community required to be assigned on prefixes advertised from Fabric\n
"},{"location":"user-guide/external/#connection","title":"Connection","text":"

A Connection of type external is used to identify the switch port on a Border Leaf that is cabled to an Edge device.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: # specified or generated\nspec:\n  external:\n    link:\n      switch:\n        port: # SwitchName/EthernetXXX\n
"},{"location":"user-guide/external/#external-attachment","title":"External Attachment","text":"

An External Attachment is a definition of BGP peering and traffic connectivity between a Border Leaf and an External. Attachments are bound to a Connection with type external and specify the VLAN that will be used to segregate the particular Edge peering.

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: ExternalAttachment\nmetadata:\n  name: #\nspec:\n  connection: # Name of the Connection with type external\n  external: # Name of the External to pick config\n  neighbor:\n    asn: # Edge device ASN\n    ip: # IP address of Edge device to peer with\n  switch:\n    ip: # IP Address on the Border Leaf to set up BGP peering\n    vlan: # Vlan ID to tag control and data traffic\n

Several External Attachments can be configured for the same Connection, each using a different VLAN.
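For example, a minimal sketch of two attachments that share one Connection but use different VLANs (the Connection and External names, IPs, VLANs and ASN below are all hypothetical):

apiVersion: vpc.githedgehog.com/v1alpha2
kind: ExternalAttachment
metadata:
  name: border-01--edge-01--vlan100
spec:
  connection: border-01--external--edge-01
  external: edge-01
  neighbor:
    asn: 65099
    ip: 192.168.100.2
  switch:
    ip: 192.168.100.1/24
    vlan: 100
---
apiVersion: vpc.githedgehog.com/v1alpha2
kind: ExternalAttachment
metadata:
  name: border-01--edge-01--vlan200
spec:
  connection: border-01--external--edge-01
  external: edge-01
  neighbor:
    asn: 65099
    ip: 192.168.200.2
  switch:
    ip: 192.168.200.1/24
    vlan: 200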

"},{"location":"user-guide/external/#external-vpc-peering","title":"External VPC Peering","text":"

To allow a specific VPC to have access to Edge devices, the VPC should be bound to a specific External object. This is done via an ExternalPeering object.

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: ExternalPeering\nmetadata:\n  name: # Name of ExternalPeering\nspec:\n  permit:\n    external:\n      name: # External Name\n      prefixes: # List of prefixes(routes) to be allowed to pick up from External\n      - # IPv4 Prefix\n    vpc:\n      name: # VPC Name\n      subnets: # List of VPC subnets name to be allowed to have access to External (Edge)\n      - # Name of the subnet within VPC\n
Prefixes can be specified as an exact match or with the mask range indicator keywords le and ge. le matches prefix lengths that are less than or equal to the given value, and ge matches prefix lengths that are greater than or equal to it.

Example: Allow ANY IPv4 prefix that comes from the External, i.e. allow all prefixes that match the default route with any prefix length:

spec:\n  permit:\n    external:\n      name: ###\n      prefixes:\n      - le: 32\n        prefix: 0.0.0.0/0\n
ge and le can also be combined.

Example:

spec:\n  permit:\n    external:\n      name: ###\n      prefixes:\n      - le: 24\n        ge: 16\n        prefix: 77.0.0.0/8\n
For instance, 77.42.0.0/18 will be matched by the prefix rule above, but 77.128.77.128/25 (prefix length greater than 24) or 78.10.0.0/16 (outside 77.0.0.0/8) won't.
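An exact match is simply a prefix entry without le or ge; for instance, a sketch permitting only two specific routes from the External (the prefix values here are made up):

spec:
  permit:
    external:
      name: ###
      prefixes:
      - prefix: 10.1.0.0/16
      - prefix: 192.168.100.0/24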

"},{"location":"user-guide/external/#examples","title":"Examples","text":"

This example shows peering between the External object named HedgeEdge and the Fabric VPC named vpc-1, using the Border Leaf switchBorder that is cabled to an Edge device on port Ethernet42. vpc-1 is required to receive any prefixes advertised from the External.

"},{"location":"user-guide/external/#fabric-api-configuration","title":"Fabric API configuration","text":""},{"location":"user-guide/external/#external_1","title":"External","text":"

# hhfctl external create --name HedgeEdge --ipns default --in 65102:5000 --out 5000:65102\n
apiVersion: vpc.githedgehog.com/v1alpha2\nkind: External\nmetadata:\n  name: HedgeEdge\n  namespace: default\nspec:\n  inboundCommunity: 65102:5000\n  ipv4Namespace: default\n  outboundCommunity: 5000:65102\n

"},{"location":"user-guide/external/#connection_1","title":"Connection","text":"

The Connection should be specified in the wiring diagram.

###\n### switchBorder--external--HedgeEdge\n###\napiVersion: wiring.githedgehog.com/v1alpha2\nkind: Connection\nmetadata:\n  name: switchBorder--external--HedgeEdge\nspec:\n  external:\n    link:\n      switch:\n        port: switchBorder/Ethernet42\n
"},{"location":"user-guide/external/#externalattachment","title":"ExternalAttachment","text":"

Specified in the wiring diagram:

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: ExternalAttachment\nmetadata:\n  name: switchBorder--HedgeEdge\nspec:\n  connection: switchBorder--external--HedgeEdge\n  external: HedgeEdge\n  neighbor:\n    asn: 65102\n    ip: 100.100.0.6\n  switch:\n    ip: 100.100.0.1/24\n    vlan: 100\n

"},{"location":"user-guide/external/#externalpeering","title":"ExternalPeering","text":"
apiVersion: vpc.githedgehog.com/v1alpha2\nkind: ExternalPeering\nmetadata:\n  name: vpc-1--HedgeEdge\nspec:\n  permit:\n    external:\n      name: HedgeEdge\n      prefixes:\n      - le: 32\n        prefix: 0.0.0.0/0\n    vpc:\n      name: vpc-1\n      subnets:\n      - default\n
"},{"location":"user-guide/external/#example-edge-side-bgp-configuration-based-on-sonic-os","title":"Example Edge side BGP configuration based on SONiC OS","text":"

NOTE: Hedgehog does not recommend using the following configuration in production. It's just an example of an Edge peer config.

Interface config

interface Ethernet2.100\n encapsulation dot1q vlan-id 100\n description switchBorder--Ethernet42\n no shutdown\n ip vrf forwarding VrfHedge\n ip address 100.100.0.6/24\n

BGP Config

!\nrouter bgp 65102 vrf VrfHedge\n log-neighbor-changes\n timers 60 180\n !\n address-family ipv4 unicast\n  maximum-paths 64\n  maximum-paths ibgp 1\n  import vrf VrfPublic\n !\n neighbor 100.100.0.1\n  remote-as 65103\n  !\n  address-family ipv4 unicast\n   activate\n   route-map HedgeIn in\n   route-map HedgeOut out\n   send-community both\n !\n
Route Map configuration
route-map HedgeIn permit 10\n match community HedgeIn\n!\nroute-map HedgeOut permit 10\n set community 65102:5000\n!\n\nbgp community-list standard HedgeIn permit 5000:65102\n

"},{"location":"user-guide/harvester/","title":"Using VPCs with Harvester","text":"

This is an example of how Hedgehog Fabric can be used with Harvester or any other hypervisor on servers connected to the Fabric. It assumes that you have already installed the Fabric and have some servers running Harvester attached to it.

You'll need to define a Server object for each server running Harvester and a Connection object for each server connection to the switches.

You can create multiple VPCs and attach them to the Connections of these servers to make them available to the VMs in Harvester or any other hypervisor.

"},{"location":"user-guide/harvester/#congigure-harvester","title":"Congigure Harvester","text":""},{"location":"user-guide/harvester/#add-a-cluster-network","title":"Add a Cluster Network","text":"

From the \"Cluster Network/Confg\" side menu. Create a new Cluster Network.

Here is what the CRD looks like cleaned up:

apiVersion: network.harvesterhci.io/v1beta1\nkind: ClusterNetwork\nmetadata:\n  name: testnet\n
"},{"location":"user-guide/harvester/#add-a-network-config","title":"Add a Network Config","text":"

By clicking \"Create Network Confg\". Add your connections and select bonding type.

The resulting cleaned up CRD:

apiVersion: network.harvesterhci.io/v1beta1\nkind: VlanConfig\nmetadata:\n  name: testconfig\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\nspec:\n  clusterNetwork: testnet\n  uplink:\n    bondOptions:\n      miimon: 100\n      mode: 802.3ad\n    linkAttributes:\n      txQLen: -1\n    nics:\n      - enp5s0f0\n      - enp3s0f1\n
"},{"location":"user-guide/harvester/#add-vlan-based-vm-networks","title":"Add VLAN based VM Networks","text":"

Browse over to \"VM Networks\" and add one for each Vlan you want to support, assigning them to the cluster network.

Here is what the CRDs will look like for both VLANs:

apiVersion: k8s.cni.cncf.io/v1\nkind: NetworkAttachmentDefinition\nmetadata:\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\n    network.harvesterhci.io/ready: 'true'\n    network.harvesterhci.io/type: L2VlanNetwork\n    network.harvesterhci.io/vlan-id: '1001'\n  name: testnet1001\n  namespace: default\nspec:\n  config: >-\n    {\"cniVersion\":\"0.3.1\",\"name\":\"testnet1001\",\"type\":\"bridge\",\"bridge\":\"testnet-br\",\"promiscMode\":true,\"vlan\":1001,\"ipam\":{}}\n
apiVersion: k8s.cni.cncf.io/v1\nkind: NetworkAttachmentDefinition\nmetadata:\n  name: testnet1000\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\n    network.harvesterhci.io/ready: 'true'\n    network.harvesterhci.io/type: L2VlanNetwork\n    network.harvesterhci.io/vlan-id: '1000'\n    #  key: string\n  namespace: default\nspec:\n  config: >-\n    {\"cniVersion\":\"0.3.1\",\"name\":\"testnet1000\",\"type\":\"bridge\",\"bridge\":\"testnet-br\",\"promiscMode\":true,\"vlan\":1000,\"ipam\":{}}\n
"},{"location":"user-guide/harvester/#using-the-vpcs","title":"Using the VPCs","text":"

Now you can choose the created VM Networks when creating a VM in Harvester and have the VM attached as part of the corresponding VPC.
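On the Fabric side, the corresponding objects are a VPC whose subnets use the same VLANs, plus a VPCAttachment per subnet binding it to the Harvester server Connection. A rough sketch, assuming a hypothetical connection name and made-up subnets (see the VPCs and Namespaces section for the full API):

apiVersion: vpc.githedgehog.com/v1alpha2
kind: VPC
metadata:
  name: vpc-harvester
spec:
  ipv4Namespace: default
  vlanNamespace: default
  subnets:
    testnet1000:
      subnet: 10.10.10.0/24
      vlan: "1000"
    testnet1001:
      subnet: 10.10.11.0/24
      vlan: "1001"
---
apiVersion: vpc.githedgehog.com/v1alpha2
kind: VPCAttachment
metadata:
  name: vpc-harvester--testnet1000--harvester-01
spec:
  connection: harvester-01--mclag--leaf-01--leaf-02
  subnet: vpc-harvester/testnet1000

A second VPCAttachment for vpc-harvester/testnet1001 on the same connection would expose the other VLAN as well.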

"},{"location":"user-guide/overview/","title":"Overview","text":"

The chapter is intended to give an overview of the main features of the Hedgehog Fabric and their usage.

"},{"location":"user-guide/vpcs/","title":"VPCs and Namespaces","text":""},{"location":"user-guide/vpcs/#vpc","title":"VPC","text":"

A Virtual Private Cloud, similar to a public cloud VPC, provides an isolated private network for resources, with support for multiple subnets, each with user-provided VLANs and on-demand DHCP.

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPC\nmetadata:\n  name: vpc-1\n  namespace: default\nspec:\n  ipv4Namespace: default # Limits to which subnets could be used by VPC to guarantee non-overlapping IPv4 ranges\n  vlanNamespace: default # Limits to which switches VPC could be attached to guarantee non-overlapping VLANs\n  subnets:\n    default: # Each subnet is named, \"default\" subnet isn't required, but actively used by CLI\n      dhcp:\n        enable: true # On-demand DHCP server\n        range: # Optionally, start/end range could be specified\n          start: 10.10.1.10\n      subnet: 10.10.1.0/24 # User-defined subnet from ipv4 namespace\n      vlan: \"1001\" # User-defined VLAN from vlan namespace\n    third-party-dhcp: # Another subnet\n      dhcp:\n        relay: 10.99.0.100/24 # Use third-party DHCP server (DHCP relay configuration), access to it could be enabled using StaticExternal connection\n      subnet: \"10.10.2.0/24\"\n      vlan: \"1002\"\n    another-subnet: # Minimal configuration is just a name, subnet and VLAN\n      subnet: 10.10.100.0/24\n      vlan: \"1100\"\n

If you're using a third-party DHCP server (configured via spec.subnets.<subnet>.dhcp.relay), additional information is added to the DHCP packet forwarded to the DHCP server to make it possible to identify the VPC and subnet. The information is added under the RelayAgentInfo option (82) of the DHCP packet. The relay sets two suboptions in the packet:

  • VirtualSubnetSelection -- (suboption 151) is populated with the VRF which uniquely identifies a VPC on the Hedgehog Fabric and will be in VrfV<VPC-name> format, e.g. VrfVvpc-1 for the VPC named vpc-1 in the Fabric API
  • CircuitID -- (suboption 1) identifies the VLAN which, together with the VRF (VPC) name, maps to a specific VPC subnet
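For example, for the vpc-1 manifest above, a DHCP request relayed from the default subnet would carry VrfVvpc-1 in suboption 151 and VLAN 1001 in suboption 1, which a third-party DHCP server can use to pick the right scope.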
"},{"location":"user-guide/vpcs/#vpcattachment","title":"VPCAttachment","text":"

Represents a specific VPC subnet assignment to a Connection object, i.e. an exact server port to VPC binding. It basically makes the VPC available on the specific server port(s) on the subnet VLAN.

A VPC can be attached to a switch that is part of the VLAN namespace used by the VPC.

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPCAttachment\nmetadata:\n  name: vpc-1-server-1--mclag--s5248-01--s5248-02\n  namespace: default\nspec:\n  connection: server-1--mclag--s5248-01--s5248-02 # Connection name representing the server port(s)\n  subnet: vpc-1/default # VPC subnet name\n
"},{"location":"user-guide/vpcs/#vpcpeering","title":"VPCPeering","text":"

It enables VPC-to-VPC connectivity. There are two types of VPC peering:

  • Local - peering is implemented on the same switches where VPCs are attached
  • Remote - peering is implemented on the border/mixed leafs defined by the SwitchGroup object

VPC peering is only possible between VPCs attached to the same IPv4 namespace.

Local:

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-3\n  namespace: default\nspec:\n  permit: # Defines a pair of VPCs to peer\n  - vpc-1: {} # meaning all subnets of two VPCs will be able to communicate to each other\n    vpc-3: {} # more advanced filtering will be supported in future releases\n

Remote:

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-2\n  namespace: default\nspec:\n  permit:\n  - vpc-1: {}\n    vpc-2: {}\n  remote: border # indicates a switch group to implement the peering on\n
"},{"location":"user-guide/vpcs/#ipv4namespace","title":"IPv4Namespace","text":"

Defines non-overlapping IPv4 ranges for VPC subnets. Each VPC belongs to a specific IPv4 namespace.

apiVersion: vpc.githedgehog.com/v1alpha2\nkind: IPv4Namespace\nmetadata:\n  name: default\n  namespace: default\nspec:\n  subnets: # List of the subnets that VPCs can pick their subnets from\n  - 10.10.0.0/16\n
"},{"location":"user-guide/vpcs/#vlannamespace","title":"VLANNamespace","text":"

Defines non-overlapping VLAN ranges for attaching servers. Each switch belongs to a list of VLAN namespaces with non-overlapping VLAN ranges.

apiVersion: wiring.githedgehog.com/v1alpha2\nkind: VLANNamespace\nmetadata:\n  name: default\n  namespace: default\nspec:\n  ranges: # List of VLAN ranges that VPCs can pick their subnet VLANs from\n  - from: 1000\n    to: 2999\n
"},{"location":"vlab/demo/","title":"Demo on VLAB","text":"

You can find instructions on how to set up VLAB in the Overview and Running VLAB sections.

"},{"location":"vlab/demo/#default-topology","title":"Default topology","text":"

The default topology is Spine-Leaf with 2 spines, 2 MCLAG leafs and 1 non-MCLAG leaf. Optionally, you can choose to run the default Collapsed Core topology using the --fabric-mode collapsed-core (or -m collapsed-core) flag, which only consists of 2 switches.

For more details on customizing topologies, see the Running VLAB section.

In the default topology, the following Control Node and Switch VMs are created:

graph TD\n    CN[Control Node]\n\n    S1[Spine 1]\n    S2[Spine 2]\n\n    L1[MCLAG Leaf 1]\n    L2[MCLAG Leaf 2]\n    L3[Leaf 3]\n\n    CN --> L1\n    CN --> L2\n\n    S1 --> L1\n    S1 --> L2\n    S2 --> L2\n    S2 --> L3

As well as test servers:

graph TD\n    L1[MCLAG Leaf 1]\n    L2[MCLAG Leaf 2]\n    L3[Leaf 3]\n\n    TS1[Test Server 1]\n    TS2[Test Server 2]\n    TS3[Test Server 3]\n    TS4[Test Server 4]\n    TS5[Test Server 5]\n    TS6[Test Server 6]\n\n    TS1 --> L1\n    TS1 --> L2\n\n    TS2 --> L1\n    TS2 --> L2\n\n    TS3 --> L1\n    TS4 --> L2\n\n    TS5 --> L3\n    TS6 --> L3
"},{"location":"vlab/demo/#creating-and-attaching-vpcs","title":"Creating and attaching VPCs","text":"

You can create and attach VPCs using the kubectl fabric vpc command on the control node or outside of the cluster using the kubeconfig. For example, run the following commands to create 2 VPCs with a single subnet each, the DHCP server enabled with an optional IP address range start, and attach them to some test servers:

core@control-1 ~ $ kubectl get conn | grep server\nserver-01--mclag--leaf-01--leaf-02   mclag          5h13m\nserver-02--mclag--leaf-01--leaf-02   mclag          5h13m\nserver-03--unbundled--leaf-01        unbundled      5h13m\nserver-04--bundled--leaf-02          bundled        5h13m\nserver-05--unbundled--leaf-03        unbundled      5h13m\nserver-06--bundled--leaf-03          bundled        5h13m\n\ncore@control-1 ~ $ kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10\n06:48:46 INF VPC created name=vpc-1\n\ncore@control-1 ~ $ kubectl fabric vpc create --name vpc-2 --subnet 10.0.2.0/24 --vlan 1002 --dhcp --dhcp-start 10.0.2.10\n06:49:04 INF VPC created name=vpc-2\n\ncore@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02\n06:49:24 INF VPCAttachment created name=vpc-1--default--server-01--mclag--leaf-01--leaf-02\n\ncore@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-2/default --connection server-02--mclag--leaf-01--leaf-02\n06:49:34 INF VPCAttachment created name=vpc-2--default--server-02--mclag--leaf-01--leaf-02\n

A VPC subnet should belong to some IPv4Namespace; the default one in the VLAB is 10.0.0.0/16:

core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   5h14m\n

After you have created the VPCs and VPCAttachments, you can check the status of the agents to make sure that the requested configuration was applied to the switches:

core@control-1 ~ $ kubectl get agents\nNAME       ROLE          DESCR           APPLIED   APPLIEDG   CURRENTG   VERSION\nleaf-01    server-leaf   VS-01 MCLAG 1   2m2s      5          5          v0.23.0\nleaf-02    server-leaf   VS-02 MCLAG 1   2m2s      4          4          v0.23.0\nleaf-03    server-leaf   VS-03           112s      5          5          v0.23.0\nspine-01   spine         VS-04           16m       3          3          v0.23.0\nspine-02   spine         VS-05           18m       4          4          v0.23.0\n

As you can see, the APPLIEDG and CURRENTG columns are equal, which means that the requested configuration was applied.
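If you want to watch the agents until the generations converge, standard kubectl watching works (this is plain kubectl behavior, not a Fabric-specific command):

core@control-1 ~ $ kubectl get agents -w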

"},{"location":"vlab/demo/#setting-up-networking-on-test-servers","title":"Setting up networking on test servers","text":"

You can use hhfab vlab ssh on the host to ssh into the test servers and configure networking there. For example, for server-01 (MCLAG attached to both leaf-01 and leaf-02) we need to configure a bond with a VLAN on top of it, while for server-05 (single-homed, unbundled, attached to leaf-03) we only need to configure a VLAN; both will get an IP address from the DHCP server. You can use the ip command to configure networking on the servers or use the little hhnet helper preinstalled by Fabricator on the test servers.

For server-01:

core@server-01 ~ $ hhnet cleanup\ncore@server-01 ~ $ hhnet bond 1001 enp2s1 enp2s2\n10.0.1.10/24\ncore@server-01 ~ $ ip a\n...\n3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:01\n4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:02\n6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff\n    inet6 fe80::45a:e8ff:fe38:3bea/64 scope link\n       valid_lft forever preferred_lft forever\n7: bond0.1001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff\n    inet 10.0.1.10/24 metric 1024 brd 10.0.1.255 scope global dynamic bond0.1001\n       valid_lft 86396sec preferred_lft 86396sec\n    inet6 fe80::45a:e8ff:fe38:3bea/64 scope link\n       valid_lft forever preferred_lft forever\n

And for server-02:

core@server-02 ~ $ hhnet cleanup\ncore@server-02 ~ $ hhnet bond 1002 enp2s1 enp2s2\n10.0.2.10/24\ncore@server-02 ~ $ ip a\n...\n3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:01\n4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:02\n8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff\n    inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link\n       valid_lft forever preferred_lft forever\n9: bond0.1002@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff\n    inet 10.0.2.10/24 metric 1024 brd 10.0.2.255 scope global dynamic bond0.1002\n       valid_lft 86185sec preferred_lft 86185sec\n    inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link\n       valid_lft forever preferred_lft forever\n
"},{"location":"vlab/demo/#testing-connectivity-before-peering","title":"Testing connectivity before peering","text":"

You can test connectivity between the servers before peering the VPCs using the ping command:

core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\nFrom 10.0.1.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2003ms\n
core@server-02 ~ $ ping 10.0.1.10\nPING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.\nFrom 10.0.2.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.2.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.2.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.1.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms\n
"},{"location":"vlab/demo/#peering-vpcs-and-testing-connectivity","title":"Peering VPCs and testing connectivity","text":"

To enable connectivity between the VPCs, you need to peer them using the kubectl fabric vpc peer command:

core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2\n07:04:58 INF VPCPeering created name=vpc-1--vpc-2\n

Make sure to wait until the peering is applied to the switches (check with the kubectl get agents command). After that you can test connectivity between the servers again:

core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\n64 bytes from 10.0.2.10: icmp_seq=1 ttl=62 time=6.25 ms\n64 bytes from 10.0.2.10: icmp_seq=2 ttl=62 time=7.60 ms\n64 bytes from 10.0.2.10: icmp_seq=3 ttl=62 time=8.60 ms\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2004ms\nrtt min/avg/max/mdev = 6.245/7.481/8.601/0.965 ms\n
core@server-02 ~ $ ping 10.0.1.10\nPING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.\n64 bytes from 10.0.1.10: icmp_seq=1 ttl=62 time=5.44 ms\n64 bytes from 10.0.1.10: icmp_seq=2 ttl=62 time=6.66 ms\n64 bytes from 10.0.1.10: icmp_seq=3 ttl=62 time=4.49 ms\n^C\n--- 10.0.1.10 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2004ms\nrtt min/avg/max/mdev = 4.489/5.529/6.656/0.886 ms\n

If you delete the VPC peering using the following command and wait for the agents to apply the configuration on the switches, you will see that connectivity is lost again:

core@control-1 ~ $ kubectl delete vpcpeering/vpc-1--vpc-2\nvpcpeering.vpc.githedgehog.com \"vpc-1--vpc-2\" deleted\n
core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\nFrom 10.0.1.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms\n

You may see duplicate packets in the output of the ping command between some of the servers. This is expected behavior caused by limitations of the VLAB environment.

core@server-01 ~ $ ping 10.0.5.10\nPING 10.0.5.10 (10.0.5.10) 56(84) bytes of data.\n64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms\n64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms (DUP!)\n64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms\n64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms (DUP!)\n64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.59 ms\n64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.60 ms (DUP!)\n^C\n--- 10.0.5.10 ping statistics ---\n3 packets transmitted, 3 received, +3 duplicates, 0% packet loss, time 2003ms\nrtt min/avg/max/mdev = 6.987/8.720/9.595/1.226 ms\n
"},{"location":"vlab/overview/","title":"Overview","text":"

It's possible to run Hedgehog Fabric in a fully virtual environment using QEMU/KVM and SONiC Virtual Switch (VS). It's a great way to try out the Fabric and learn about its look and feel, API, capabilities, etc. It's not suitable for any data plane or performance testing, nor for production use.

In the VLAB all switches start as empty VMs with only the ONIE image on them and go through the whole discovery, boot and installation process just like on real hardware.

"},{"location":"vlab/overview/#overview_1","title":"Overview","text":"

The hhfab CLI provides a special vlab command to manage virtual labs. It runs a set of virtual machines to simulate the Fabric infrastructure, including the control node, switches and test servers, and automatically runs the installer to get the Fabric up and running.

You can find more information about getting hhfab in the download section.

"},{"location":"vlab/overview/#system-requirements","title":"System Requirements","text":"

Currently, it's only tested on Ubuntu 22.04 LTS, but should work on any Linux distribution with QEMU/KVM support and fairly up-to-date packages.

The following packages need to be installed: qemu-kvm, swtpm-tools, tpm2-tools and socat. Additionally, docker is required to log in to the OCI registry.

By default, the VLAB topology is Spine-Leaf with 2 spines, 2 MCLAG leafs and 1 non-MCLAG leaf. Optionally, you can choose to run the default Collapsed Core topology using the --fabric-mode collapsed-core (or -m collapsed-core) flag, which only consists of 2 switches.

You can calculate the system requirements based on the resources allocated to the VMs using the following table:

Device | vCPU | RAM | Disk
Control Node | 6 | 6GB | 100GB
Test Server | 2 | 768MB | 10GB
Switch | 4 | 5GB | 50GB

Which gives approximately the following requirements for the default topologies:

  • Spine-Leaf: 38 vCPUs, 36352 MB, 410 GB disk
  • Collapsed Core: 22 vCPUs, 19456 MB, 240 GB disk
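These totals are just the per-VM numbers from the table multiplied by the number of VMs in each topology; for example, the default Spine-Leaf VLAB runs 1 control node, 6 test servers and 5 switches, so 6 + 6×2 + 5×4 = 38 vCPUs, 6144 + 6×768 + 5×5120 = 36352 MB of RAM and 100 + 6×10 + 5×50 = 410 GB of disk.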

Usually, none of the VMs will reach 100% utilization of the allocated resources, but as a rule of thumb you should make sure that you have at least the allocated amount of RAM and disk space available for all VMs.

NVMe SSD for VM disks is highly recommended.

"},{"location":"vlab/overview/#installing-prerequisites","title":"Installing prerequisites","text":"

On Ubuntu 22.04 LTS you can install all required packages using the following commands:

curl -fsSL https://get.docker.com -o install-docker.sh\nsudo sh install-docker.sh\nsudo usermod -aG docker $USER\nnewgrp docker\n
sudo apt install -y qemu-kvm swtpm-tools tpm2-tools socat\nsudo usermod -aG kvm $USER\nnewgrp kvm\nkvm-ok\n

Good output of the kvm-ok command should look like this:

ubuntu@docs:~$ kvm-ok\nINFO: /dev/kvm exists\nKVM acceleration can be used\n
"},{"location":"vlab/overview/#next-steps","title":"Next steps","text":"
  • Running VLAB
"},{"location":"vlab/running/","title":"Running VLAB","text":"

Please make sure to install the prerequisites and check the system requirements in the VLAB Overview section before running VLAB.

"},{"location":"vlab/running/#initialize-vlab","title":"Initialize VLAB","text":"

As a first step you need to initialize Fabricator for the VLAB by running hhfab init --preset vlab (or -p vlab). It supports a lot of customization options, which you can list by adding --help to the command. If you want to tune the topology used for the VLAB, you can use the --fabric-mode (or -m) flag to choose between spine-leaf (default) and collapsed-core topologies, and you can configure the number of spines, leafs, connections, etc. For example, the --spines-count and --mclag-leafs-count flags set the number of spines and MCLAG leafs respectively.

So, by default you'll get 2 spines, 2 MCLAG leafs and 1 non-MCLAG leaf with 2 fabric connections (between each spine and leaf), 2 MCLAG peer links and 2 MCLAG session links, as well as 2 loopbacks per leaf for implementing the VPC Loopback workaround.

ubuntu@docs:~$ hhfab init -p vlab\n01:17:44 INF Generating wiring from gen flags\n01:17:44 INF Building wiring diagram fabricMode=spine-leaf chainControlLink=false controlLinksCount=0\n01:17:44 INF                     >>> spinesCount=2 fabricLinksCount=2\n01:17:44 INF                     >>> mclagLeafsCount=2 orphanLeafsCount=1\n01:17:44 INF                     >>> mclagSessionLinks=2 mclagPeerLinks=2\n01:17:44 INF                     >>> vpcLoopbacks=2\n01:17:44 WRN Wiring is not hydrated, hydrating reason=\"error validating wiring: ASN not set for switch leaf-01\"\n01:17:44 INF Initialized preset=vlab fabricMode=spine-leaf config=.hhfab/config.yaml wiring=.hhfab/wiring.yaml\n

Or if you want to run Collapsed Core topology with 2 MCLAG switches:

ubuntu@docs:~$ hhfab init -p vlab -m collapsed-core\n01:20:07 INF Generating wiring from gen flags\n01:20:07 INF Building wiring diagram fabricMode=collapsed-core chainControlLink=false controlLinksCount=0\n01:20:07 INF                     >>> mclagLeafsCount=2 orphanLeafsCount=0\n01:20:07 INF                     >>> mclagSessionLinks=2 mclagPeerLinks=2\n01:20:07 INF                     >>> vpcLoopbacks=2\n01:20:07 WRN Wiring is not hydrated, hydrating reason=\"error validating wiring: ASN not set for switch leaf-01\"\n01:20:07 INF Initialized preset=vlab fabricMode=collapsed-core config=.hhfab/config.yaml wiring=.hhfab/wiring.yaml\n

Or you can run custom topology with 2 spines, 4 MCLAG leafs and 2 non-MCLAG leafs using flags:

ubuntu@docs:~$ hhfab init -p vlab --mclag-leafs-count 4 --orphan-leafs-count 2\n01:21:53 INF Generating wiring from gen flags\n01:21:53 INF Building wiring diagram fabricMode=spine-leaf chainControlLink=false controlLinksCount=0\n01:21:53 INF                     >>> spinesCount=2 fabricLinksCount=2\n01:21:53 INF                     >>> mclagLeafsCount=4 orphanLeafsCount=2\n01:21:53 INF                     >>> mclagSessionLinks=2 mclagPeerLinks=2\n01:21:53 INF                     >>> vpcLoopbacks=2\n01:21:53 WRN Wiring is not hydrated, hydrating reason=\"error validating wiring: ASN not set for switch leaf-01\"\n01:21:53 INF Initialized preset=vlab fabricMode=spine-leaf config=.hhfab/config.yaml wiring=.hhfab/wiring.yaml\n

Additionally, you can do extra Fabric configuration using flags on the init command or by passing a config file; more information is available in the Fabric Configuration section.

Once you have initialized the VLAB, you need to download all artifacts and build the installer using the hhfab build command. It will automatically download all required artifacts from the OCI registry and build the installer as well as all other prerequisites for running the VLAB.

"},{"location":"vlab/running/#build-the-installer-and-vlab","title":"Build the installer and VLAB","text":"
ubuntu@docs:~$ hhfab build\n01:23:33 INF Building component=base\n01:23:33 WRN Attention! Development mode enabled - this is not secure! Default users and keys will be created.\n...\n01:23:33 INF Building component=control-os\n01:23:33 INF Building component=k3s\n01:23:33 INF Downloading name=m.l.hhdev.io:31000/githedgehog/k3s:v1.27.4-k3s1 to=.hhfab/control-install\nCopying k3s-airgap-images-amd64.tar.gz  187.36 MiB / 187.36 MiB   \u2819   0.00 b/s done\nCopying k3s                               56.50 MiB / 56.50 MiB   \u2819   0.00 b/s done\n01:23:35 INF Building component=zot\n01:23:35 INF Downloading name=m.l.hhdev.io:31000/githedgehog/zot:v1.4.3 to=.hhfab/control-install\nCopying zot-airgap-images-amd64.tar.gz  19.24 MiB / 19.24 MiB   \u2838   0.00 b/s done\n01:23:35 INF Building component=misc\n01:23:35 INF Downloading name=m.l.hhdev.io:31000/githedgehog/fabricator/k9s:v0.27.4 to=.hhfab/control-install\nCopying k9s  57.75 MiB / 57.75 MiB   \u283c   0.00 b/s done\n...\n01:25:40 INF Planned bundle=control-install name=fabric-api-chart op=\"push fabric/charts/fabric-api:v0.23.0\"\n01:25:40 INF Planned bundle=control-install name=fabric-image op=\"push fabric/fabric:v0.23.0\"\n01:25:40 INF Planned bundle=control-install name=fabric-chart op=\"push fabric/charts/fabric:v0.23.0\"\n01:25:40 INF Planned bundle=control-install name=fabric-agent-seeder op=\"push fabric/agent/x86_64:latest\"\n01:25:40 INF Planned bundle=control-install name=fabric-agent op=\"push fabric/agent:v0.23.0\"\n...\n01:25:40 INF Recipe created bundle=control-install actions=67\n01:25:40 INF Creating recipe bundle=server-install\n01:25:40 INF Planned bundle=server-install name=toolbox op=\"file /opt/hedgehog/toolbox.tar\"\n01:25:40 INF Planned bundle=server-install name=toolbox-load op=\"exec ctr\"\n01:25:40 INF Planned bundle=server-install name=hhnet op=\"file /opt/bin/hhnet\"\n01:25:40 INF Recipe created bundle=server-install actions=3\n01:25:40 INF Building done took=2m6.813384532s\n01:25:40 INF Packing bundle=control-install target=control-install.tgz\n01:25:45 INF Packing bundle=server-install target=server-install.tgz\n01:25:45 INF Packing done took=5.67007384s\n

As soon as it's done, you can run the VLAB using the hhfab vlab up command. It will automatically start all VMs and run the installers on the control node and test servers. It will take some time for all VMs to come up and for the installer to finish; you will see the progress in the output. If you stop the command, it'll stop all VMs, and you can re-run it to get the VMs back up and running.

"},{"location":"vlab/running/#run-vms-and-installers","title":"Run VMs and installers","text":"
ubuntu@docs:~$ hhfab vlab up\n01:29:13 INF Starting VLAB server... basedir=.hhfab/vlab-vms vm-size=\"\" dry-run=false\n01:29:13 INF VM id=0 name=control-1 type=control\n01:29:13 INF VM id=1 name=server-01 type=server\n01:29:13 INF VM id=2 name=server-02 type=server\n01:29:13 INF VM id=3 name=server-03 type=server\n01:29:13 INF VM id=4 name=server-04 type=server\n01:29:13 INF VM id=5 name=server-05 type=server\n01:29:13 INF VM id=6 name=server-06 type=server\n01:29:13 INF VM id=7 name=leaf-01 type=switch-vs\n01:29:13 INF VM id=8 name=leaf-02 type=switch-vs\n01:29:13 INF VM id=9 name=leaf-03 type=switch-vs\n01:29:13 INF VM id=10 name=spine-01 type=switch-vs\n01:29:13 INF VM id=11 name=spine-02 type=switch-vs\n01:29:13 INF Total VM resources cpu=\"38 vCPUs\" ram=\"36352 MB\" disk=\"410 GB\"\n...\n01:29:13 INF Preparing VM id=0 name=control-1 type=control\n01:29:13 INF Copying files  from=.hhfab/control-os/ignition.json to=.hhfab/vlab-vms/control-1/ignition.json\n01:29:13 INF Copying files  from=.hhfab/vlab-files/flatcar.img to=.hhfab/vlab-vms/control-1/os.img\n 947.56 MiB / 947.56 MiB [==========================================================] 5.13 GiB/s done\n01:29:14 INF Copying files  from=.hhfab/vlab-files/flatcar_efi_code.fd to=.hhfab/vlab-vms/control-1/efi_code.fd\n01:29:14 INF Copying files  from=.hhfab/vlab-files/flatcar_efi_vars.fd to=.hhfab/vlab-vms/control-1/efi_vars.fd\n01:29:14 INF Resizing VM image (may require sudo password) name=control-1\n01:29:17 INF Initializing TPM name=control-1\n...\n01:29:46 INF Installing VM name=control-1 type=control\n01:29:46 INF Installing VM name=server-01 type=server\n01:29:46 INF Installing VM name=server-02 type=server\n01:29:46 INF Installing VM name=server-03 type=server\n01:29:47 INF Installing VM name=server-04 type=server\n01:29:47 INF Installing VM name=server-05 type=server\n01:29:47 INF Installing VM name=server-06 type=server\n01:29:49 INF Running VM id=0 name=control-1 type=control\n01:29:49 INF Running VM id=1 name=server-01 type=server\n01:29:49 INF Running VM id=2 name=server-02 type=server\n01:29:49 INF Running VM id=3 name=server-03 type=server\n01:29:50 INF Running VM id=4 name=server-04 type=server\n01:29:50 INF Running VM id=5 name=server-05 type=server\n01:29:50 INF Running VM id=6 name=server-06 type=server\n01:29:50 INF Running VM id=7 name=leaf-01 type=switch-vs\n01:29:50 INF Running VM id=8 name=leaf-02 type=switch-vs\n01:29:51 INF Running VM id=9 name=leaf-03 type=switch-vs\n01:29:51 INF Running VM id=10 name=spine-01 type=switch-vs\n01:29:51 INF Running VM id=11 name=spine-02 type=switch-vs\n...\n01:30:41 INF VM installed name=server-06 type=server installer=server-install\n01:30:41 INF VM installed name=server-01 type=server installer=server-install\n01:30:41 INF VM installed name=server-02 type=server installer=server-install\n01:30:41 INF VM installed name=server-04 type=server installer=server-install\n01:30:41 INF VM installed name=server-03 type=server installer=server-install\n01:30:41 INF VM installed name=server-05 type=server installer=server-install\n...\n01:31:04 INF Running installer on VM name=control-1 type=control installer=control-install\n...\n01:35:15 INF Done took=3m39.586394608s\n01:35:15 INF VM installed name=control-1 type=control installer=control-install\n

After you see VM installed name=control-1, it means that the installer has finished and you can get into the control node and other VMs to watch the Fabric coming up and switches getting provisioned.

"},{"location":"vlab/running/#default-credentials","title":"Default credentials","text":"

Fabricator will create default users and keys for you to log in to the control node and test servers, as well as to the SONiC Virtual Switches.

The default user with passwordless sudo for the control node and test servers is core with password HHFab.Admin!. The admin user with full access and passwordless sudo for the switches is admin with password HHFab.Admin!. The read-only, non-sudo user with access only to the switch CLI is op with password HHFab.Op!.

"},{"location":"vlab/running/#accessing-the-vlab","title":"Accessing the VLAB","text":"

The hhfab vlab command provides ssh and serial subcommands to access the VMs. You can use ssh to get into the control node and test servers after the VMs are started. You can use serial to get into the switch VMs while they are provisioning and installing the software. After switches are installed you can use ssh to get into them.

You can select the device you want to access interactively or pass its name using the --vm flag.

ubuntu@docs:~$ hhfab vlab ssh\nUse the arrow keys to navigate: \u2193 \u2191 \u2192 \u2190  and / toggles search\nSSH to VM:\n  \ud83e\udd94 control-1\n  server-01\n  server-02\n  server-03\n  server-04\n  server-05\n  server-06\n  leaf-01\n  leaf-02\n  leaf-03\n  spine-01\n  spine-02\n\n----------- VM Details ------------\nID:             0\nName:           control-1\nReady:          true\nBasedir:        .hhfab/vlab-vms/control-1\n
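Alternatively, you can skip the interactive menu and pass the VM name directly, for example (a short sketch; ssh to switches only works after they are fully installed):

ubuntu@docs:~$ hhfab vlab ssh --vm server-01
ubuntu@docs:~$ hhfab vlab serial --vm leaf-01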

On the control node you'll have access to kubectl, the Fabric CLI and k9s to manage the Fabric. You can find information about switch provisioning by running kubectl get agents -o wide. It usually takes about 10-15 minutes for the switches to get installed.

After switches are provisioned you will see something like this:

core@control-1 ~ $ kubectl get agents -o wide\nNAME       ROLE          DESCR           HWSKU                      ASIC   HEARTBEAT   APPLIED   APPLIEDG   CURRENTG   VERSION   SOFTWARE                ATTEMPT   ATTEMPTG   AGE\nleaf-01    server-leaf   VS-01 MCLAG 1   DellEMC-S5248f-P-25G-DPB   vs     30s         5m5s      4          4          v0.23.0   4.1.1-Enterprise_Base   5m5s      4          10m\nleaf-02    server-leaf   VS-02 MCLAG 1   DellEMC-S5248f-P-25G-DPB   vs     27s         3m30s     3          3          v0.23.0   4.1.1-Enterprise_Base   3m30s     3          10m\nleaf-03    server-leaf   VS-03           DellEMC-S5248f-P-25G-DPB   vs     18s         3m52s     4          4          v0.23.0   4.1.1-Enterprise_Base   3m52s     4          10m\nspine-01   spine         VS-04           DellEMC-S5248f-P-25G-DPB   vs     26s         3m59s     3          3          v0.23.0   4.1.1-Enterprise_Base   3m59s     3          10m\nspine-02   spine         VS-05           DellEMC-S5248f-P-25G-DPB   vs     19s         3m53s     4          4          v0.23.0   4.1.1-Enterprise_Base   3m53s     4          10m\n

The Heartbeat column shows how long ago the switch sent its last heartbeat to the control node. The Applied column shows how long ago the switch applied the configuration. AppliedG shows the generation of the applied configuration, and CurrentG shows the generation of the configuration the switch is supposed to run. If AppliedG and CurrentG differ, it means that the switch is in the process of applying the configuration.

At that point Fabric is ready and you can use kubectl and kubectl fabric to manage the Fabric. You can find more about it in the Running Demo and User Guide sections.

"},{"location":"vlab/running/#getting-main-fabric-objects","title":"Getting main Fabric objects","text":"

You can get the main Fabric objects using kubectl get command on the control node. You can find more details about using the Fabric in the User Guide, Fabric API and Fabric CLI sections.

For example, to get the list of switches you can run:

core@control-1 ~ $ kubectl get switch\nNAME       ROLE          DESCR           GROUPS   LOCATIONUUID                           AGE\nleaf-01    server-leaf   VS-01 MCLAG 1            5e2ae08a-8ba9-599a-ae0f-58c17cbbac67   6h10m\nleaf-02    server-leaf   VS-02 MCLAG 1            5a310b84-153e-5e1c-ae99-dff9bf1bfc91   6h10m\nleaf-03    server-leaf   VS-03                    5f5f4ad5-c300-5ae3-9e47-f7898a087969   6h10m\nspine-01   spine         VS-04                    3e2c4992-a2e4-594b-bbd1-f8b2fd9c13da   6h10m\nspine-02   spine         VS-05                    96fbd4eb-53b5-5a4c-8d6a-bbc27d883030   6h10m\n

Similar for the servers:

core@control-1 ~ $ kubectl get server\nNAME        TYPE      DESCR                        AGE\ncontrol-1   control   Control node                 6h10m\nserver-01             S-01 MCLAG leaf-01 leaf-02   6h10m\nserver-02             S-02 MCLAG leaf-01 leaf-02   6h10m\nserver-03             S-03 Unbundled leaf-01       6h10m\nserver-04             S-04 Bundled leaf-02         6h10m\nserver-05             S-05 Unbundled leaf-03       6h10m\nserver-06             S-06 Bundled leaf-03         6h10m\n

For connections:

core@control-1 ~ $ kubectl get connection\nNAME                                 TYPE           AGE\ncontrol-1--mgmt--leaf-01             management     6h11m\ncontrol-1--mgmt--leaf-02             management     6h11m\ncontrol-1--mgmt--leaf-03             management     6h11m\ncontrol-1--mgmt--spine-01            management     6h11m\ncontrol-1--mgmt--spine-02            management     6h11m\nleaf-01--mclag-domain--leaf-02       mclag-domain   6h11m\nleaf-01--vpc-loopback                vpc-loopback   6h11m\nleaf-02--vpc-loopback                vpc-loopback   6h11m\nleaf-03--vpc-loopback                vpc-loopback   6h11m\nserver-01--mclag--leaf-01--leaf-02   mclag          6h11m\nserver-02--mclag--leaf-01--leaf-02   mclag          6h11m\nserver-03--unbundled--leaf-01        unbundled      6h11m\nserver-04--bundled--leaf-02          bundled        6h11m\nserver-05--unbundled--leaf-03        unbundled      6h11m\nserver-06--bundled--leaf-03          bundled        6h11m\nspine-01--fabric--leaf-01            fabric         6h11m\nspine-01--fabric--leaf-02            fabric         6h11m\nspine-01--fabric--leaf-03            fabric         6h11m\nspine-02--fabric--leaf-01            fabric         6h11m\nspine-02--fabric--leaf-02            fabric         6h11m\nspine-02--fabric--leaf-03            fabric         6h11m\n

For IPv4 and VLAN namespaces:

core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   6h12m\n\ncore@control-1 ~ $ kubectl get vlanns\nNAME      AGE\ndefault   6h12m\n
"},{"location":"vlab/running/#reset-vlab","title":"Reset VLAB","text":"

To reset the VLAB and start over, just remove the .hhfab directory and run hhfab init again.
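For example, assuming the default working directory layout:

ubuntu@docs:~$ rm -rf .hhfab
ubuntu@docs:~$ hhfab init -p vlab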

"},{"location":"vlab/running/#next-steps","title":"Next steps","text":"
  • Running Demo
"}]} \ No newline at end of file diff --git a/alpha-3/sitemap.xml b/alpha-3/sitemap.xml index 3a2c8c7..f6c1706 100755 --- a/alpha-3/sitemap.xml +++ b/alpha-3/sitemap.xml @@ -2,132 +2,132 @@ https://docs.githedgehog.com/alpha-3/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/architecture/fabric/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/architecture/overview/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/concepts/overview/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/contribute/docs/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/contribute/overview/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/getting-started/download/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/install-upgrade/build-wiring/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/install-upgrade/config/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/install-upgrade/onie-update/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/install-upgrade/overview/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/install-upgrade/requirements/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/install-upgrade/supported-devices/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/reference/api/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/reference/cli/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/release-notes/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/troubleshooting/overview/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/user-guide/connections/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/user-guide/devices/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/user-guide/external/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/user-guide/harvester/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/user-guide/overview/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/user-guide/vpcs/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/vlab/demo/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/vlab/overview/ - 2024-01-22 + 2024-01-24 daily https://docs.githedgehog.com/alpha-3/vlab/running/ - 2024-01-22 + 2024-01-24 daily \ No newline at end of file diff --git a/alpha-3/sitemap.xml.gz b/alpha-3/sitemap.xml.gz index 7d7e5ea..5a531e1 100755 Binary files a/alpha-3/sitemap.xml.gz and b/alpha-3/sitemap.xml.gz differ diff --git a/alpha-3/user-guide/vpcs/index.html b/alpha-3/user-guide/vpcs/index.html index 212ab5e..45e4d73 100755 --- a/alpha-3/user-guide/vpcs/index.html +++ b/alpha-3/user-guide/vpcs/index.html @@ -1272,6 +1272,14 @@

subnet: 10.10.100.0/24 vlan: "1100"

+

If you're using a third-party DHCP server by configuring spec.subnets.<subnet>.dhcp.relay, additional information +will be added to the DHCP packet it forwards to the DHCP server to make it possible to identify the VPC and subnet. The +information is added under the RelayAgentInfo option (82) on the DHCP packet. The relay sets two suboptions in the packet

+
    +
  • VirtualSubnetSelection -- (suboption 151) is populated with the VRF which uniquely identifies a VPC on the Hedgehog + Fabric and will be in VrfV<VPC-name> format, e.g. VrfVvpc-1 for the VPC named vpc-1 in the Fabric API
  • +
  • CircuitID -- (suboption 1) identifies the VLAN which together with VRF (VPC) name maps to a specific VPC subnet
  • +

VPCAttachment

Represents a specific VPC subnet assignment to the Connection object which means exact server port to a VPC binding. It basically leads to the VPC being available on the specific server port(s) on a subnet VLAN.

@@ -1345,7 +1353,7 @@

VLANNamespace

Last update: - January 22, 2024 + January 24, 2024
Created: