From eb91d9b3c25d4e1ff43255f5f73db8b0ff5d4685 Mon Sep 17 00:00:00 2001
From: Yuliia Horbenko <31223054+yuliiiah@users.noreply.github.com>
Date: Fri, 15 Mar 2024 19:43:11 +0100
Subject: [PATCH] docs: Backport Azure AKS updates to older docs versions
(#2411)
* docs: Backport Azure AKS updates to older docs versions
* chore: Fix broken anchor
* chore: Fix another broken anchor
* docs: Implement peer review
---
.../clusters/public-cloud/azure/aks.md | 433 +++++++-----------
.../clusters/public-cloud/azure/windows.md | 4 +-
.../integrations/kubernetes-generic.md | 28 +-
docs/docs-content/integrations/kubernetes.md | 28 +-
4 files changed, 185 insertions(+), 308 deletions(-)
diff --git a/docs/docs-content/clusters/public-cloud/azure/aks.md b/docs/docs-content/clusters/public-cloud/azure/aks.md
index 0e3adb8561..c5e7cf6a72 100644
--- a/docs/docs-content/clusters/public-cloud/azure/aks.md
+++ b/docs/docs-content/clusters/public-cloud/azure/aks.md
@@ -1,354 +1,241 @@
---
sidebar_label: "Create and Manage Azure AKS Cluster"
title: "Create and Manage Azure AKS Cluster"
-description: "The methods of creating clusters for a speedy deployment on any CSP"
+description: "Learn how to deploy Azure Kubernetes Service clusters in Palette."
hide_table_of_contents: false
-tags: ["public cloud", "azure"]
+tags: ["public cloud", "azure", "aks"]
sidebar_position: 30
---
-Palette supports creating and managing Kubernetes clusters deployed to an Azure subscription. This section guides you on
-how to create an IaaS Kubernetes cluster in Azure that is managed by Palette.
-
-Azure clusters can be created under the following scopes:
-
-- Tenant admin
-
-- Project Scope - This is the recommended scope.
-
-Be aware that clusters that are created under the **Tenant Admin** scope are not visible under Project scope .
+Palette supports creating and managing Azure Kubernetes Service (AKS) clusters deployed to an Azure account. This guide
+explains how you can create an Azure AKS cluster managed by Palette.
## Prerequisites
-These prerequisites must be met before deploying an AKS workload cluster:
-
-1. You need an active Azure cloud account with sufficient resource limits and permissions to provision compute, network,
- and security resources in the desired regions.
-
-2. You will need to have permissions to deploy clusters using the AKS service on Azure.
-
-3. Register your Azure cloud account in Palette as described in the [Creating an Azure Cloud Account](./azure-cloud.md)
- section below.
-
-4. You should have a cluster profile created in Palette for AKS.
-
-5. Associate an SSH key pair to the cluster worker node.
-
-
-
-## Additional Prerequisites
-
-There are additional prerequisites if you want to set up Azure Active Directory integration for the AKS cluster:
-
-1. A Tenant Name must be provided as part of the Azure cloud account creation in Palette.
-
-2. For the Azure client used in the Azure cloud account, these API permissions have to be provided:
-
- | | |
- | --------------- | ------------------------------------- |
- | Microsoft Graph | Group.Read.All (Application Type) |
- | Microsoft Graph | Directory.Read.All (Application Type) |
-
-3. You can configure these permissions from the Azure cloud console under **App registrations** > **API permissions**
- for the specified application.
-
-:::info
-
-Palette **also** enables the provisioning of private AKS clusters via a private cloud gateway (Self Hosted PCGs). The
-Self-Hosted PCG is an AKS cluster that needs to be launched manually and linked to an Azure cloud account in Palette
-Management Console. [Click here for more..](gateways.md)
-
-:::
-
-
-
-To create an Azure cloud account you need the following Azure account information:
-
-- Client ID
-- Tenant ID
-- Client Secret
-- Tenant Name (optional)
-- Toggle `Connect Private Cloud Gateway` option and select the [Self-Hosted PCG](gateways.md) already created from the
- drop-down menu to link it to the cloud account.
-
-**Note:**
-
-For existing cloud account go to `Edit` and toggle the `Connect Private Cloud Gateway` option to select the created
-gateway from the drop down menu.
-
-For Azure cloud account creation, we first need to create an Azure Active Directory (AAD) application that can be used
-with role-based access control. Follow the steps below to create a new AAD application, assign roles, and create the
-client secret:
-
-
-
-1. Follow the steps described
- [here](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#create-an-azure-active-directory-application)
- to create a new Azure Active Directory application. Note down your ClientID and TenantID.
-
-2. On creating the application, assign a minimum required
- [ContributorRole](https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#contributor). To
- assign any type of role, the user must have a minimum role of
- [UserAccessAdministrator](https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#user-access-administrator).
- Follow the
- [Assign Role To Application](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#assign-a-role-to-the-application)
- link learn more about roles.
+- An active Azure cloud account integrated with Palette. Review
+ [Register and Manage Azure Cloud Account](./azure-cloud.md) for guidance.
-3. Follow the steps described in the
- [Create an Application Secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#create-a-new-application-secret)
- section to create the client application secret. Store the Client Secret safely as it will not be available as plain
- text later.
+- A Secure Shell (SSH) key that you have pre-configured in your Azure environment. Refer to the
+ [SSH Keys](../../cluster-management/ssh-keys.md) guide for more information about creating and managing SSH keys in
+ Palette.
-## Deploy an AKS Cluster
+- An infrastructure cluster profile for Azure. Review
+ [Create an Infrastructure Profile](../../../profiles/cluster-profiles/create-cluster-profiles/create-infrastructure-profile.md)
+ for guidance.
-
+- To use custom storage accounts or containers, you must create them before you create your cluster. For information
+ about use cases for custom storage, review [Azure Storage](./architecture.md#azure-storage).
-
+ :::tip
-The following steps need to be performed to provision a new cluster:
+ If you need help creating a custom storage account or container, check out the
+ [Create a Storage Account](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal)
+ and the [Manage Blob Containers](https://learn.microsoft.com/en-us/azure/storage/blobs/blob-containers-portal) guides.
-
+ :::
-1. If you already have a profile to use, go to **Cluster** > **Add a New Cluster** > **Deploy New Cluster** and select
- an Azure cloud. If you do not have a profile to use, review the
- [Creating a Cluster Profile](../../../profiles/cluster-profiles/create-cluster-profiles/create-cluster-profiles.md)
- page for guidance on profile types to create.
+- To enable OIDC with Microsoft Entra ID, you need to configure Entra ID with Palette. Review the
+ [Enable SSO with Microsoft Entra ID](../../../user-management/saml-sso/palette-sso-with-entra-id.md) guide for more
+ information.
-2. Fill the basic cluster profile information such as **Name**, **Description**, **Tags** and **Cloud Account**.
+- Optionally, a Virtual Network (VNet). If you do not provide a VNet, Palette creates one for you, along with the
+  required compute, network, and storage resources in Azure, when it provisions Kubernetes clusters.
-3. In the **Cloud Account** dropdown list, select the Azure Cloud account or create a new one. Refer to the
- [Creating an Azure Cloud Account](azure-cloud.md) section above.
+ To use a VNet that Palette creates, ensure there is sufficient capacity in your preferred Azure region to create the
+ following resources:
-4. Next, in the **Cluster profile** tab from the **Managed Kubernetes** list, pick **AKS**, and select the AKS cluster
- profile definition.
+ - Virtual CPU (vCPU)
+ - VNet
+ - Static Public IP addresses
+ - Virtual Network Interfaces
+ - Load Balancers
+ - Virtual Hard Disk (VHD)
+ - Managed Disks
+ - Virtual Network Address Translation (NAT) Gateway
-5. Review the **Parameters** for the selected cluster profile definitions. By default, parameters for all packs are set
- with values defined in the cluster profile.
+## Deploy an Azure AKS Cluster
-6. Complete the **Cluster config** section with the information for each parameter listed below.
+1. Log in to [Palette](https://console.spectrocloud.com).
- | **Parameter** | **Description** |
- | -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
- | **Subscription** | Select the subscription which is to be used to access Azure Services. |
- | **Region** | Select a region in Azure in where the cluster should be deployed. |
- | **Resource Group** | Select the resource group in which the cluster should be deployed. |
- | **SSH Key** | The public SSH key for connecting to the nodes. Review Microsoft's [supported SSH](https://learn.microsoft.com/en-us/azure/virtual-machines/linux/mac-create-ssh-keys#supported-ssh-key-formats) formats. |
- | **Static Placement** | By default, Palette uses dynamic placement. This creates a new VNet for the cluster that contains two subnets in different Availability Zones (AZs). Palette places resources in these clusters, manages the resources, and deletes them when the corresponding cluster is deleted.
If you want to place resources into a pre-existing VNet, enable the **Static Placement** option, and fill out the input values listed in the [Static Placement](#static-placement-settings) table below. |
+2. Ensure you are in the correct project scope.
- #### Static Placement Settings
+3. From the left **Main Menu**, select **Clusters** > **Add New Cluster** > **Deploy New Cluster**.
- Each subnet allows you to specify the CIDR range and a security group.
+4. Under **Cloud**, select **Azure** and click **Start Azure Configuration**.
- | **Parameter** | **Description** |
- | -------------------------- | ----------------------------------------------------------- |
- | **Network Resource Group** | The logical container for grouping related Azure resources. |
- | **Virtual Network** | Select the VNet. |
- | **CIDR Block** | Select the IP address CIDR range. |
- | **Security Group Name** | Select the security group name. |
- | **Control Plane Subnet** | Select the control plane subnet. |
- | **Worker Subnet** | Select the worker network. |
+5. Fill out the following basic information and click **Next**.
- :::warning
+ | **Field** | **Description** |
+ | ----------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+ | **Cluster Name** | A custom name for the cluster. |
+ | **Description** | Use the description to provide context about the cluster. |
+ | **Tags** | Assign any desired cluster tags. Tags on a cluster are propagated to the Virtual Machines (VMs) deployed to the target environments. Example: `region:us-west`. |
+ | **Cloud Account** | If you have already added your Azure account in Palette, select it from the **drop-down Menu**. Otherwise, click **Add New Account** and add your Azure account information. |
- If you enable the **Disable Properties** setting when
- [registering an Azure cloud account](./azure-cloud.md#add-azure-cloud-account), Palette cannot create network
- resources on your behalf. In this case, every time you deploy a cluster, you must manually specify their virtual
- network subnets and security groups,
+6. Under **Managed Kubernetes**, select **Azure AKS** and select your Azure AKS cluster profile. Click **Next** to
+ continue.
- :::
+7. Palette displays the cluster profile layers. Review the profile layers and customize parameters as desired in the
+ YAML files that display when you select a layer.
-7. Click **Next** to configure the node pools.
+ You can configure custom OpenID Connect (OIDC) for Azure clusters at the Kubernetes layer. Check out
+ [Configure OIDC Identity Provider](../../../integrations/kubernetes.md#configure-oidc-identity-provider) for more
+ information.
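+
+ As a point of reference, custom OIDC configuration in Kubernetes typically comes down to pointing the API server at
+ your identity provider with the standard kube-apiserver OIDC flags. The snippet below is a minimal, illustrative
+ sketch only; the wrapper keys shown (`kubeadmconfig`, `apiServer`, `extraArgs`) and the placeholder values may not
+ match the schema of your Kubernetes pack, so follow the linked guide for the authoritative structure.
+
+ ```yaml
+ # Illustrative sketch only. The wrapper keys and values are placeholders;
+ # the flags themselves are standard kube-apiserver OIDC options.
+ kubeadmconfig:
+   apiServer:
+     extraArgs:
+       oidc-issuer-url: "https://login.microsoftonline.com/<tenant-id>/v2.0"
+       oidc-client-id: "<client-id>"
+       oidc-username-claim: "email"
+       oidc-groups-claim: "groups"
+ ```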
-
+ :::warning
-The [maximum number](https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni#maximum-pods-per-node) of pods per
-node in an AKS cluster is 250. If you don't specify maxPods when creating new node pools, then the default value of 30
-is applied. You can edit this value from the Kubernetes configuration file at any time by editing the `maxPodPerNode`
-value. Refer to the snippet below:
+ All OIDC options require you to map a set of users or groups to a Kubernetes RBAC role. To learn how to map a
+ Kubernetes role to users and groups, refer to
+ [Create Role Bindings](../../cluster-management/cluster-rbac.md#create-role-bindings).
-
+ :::
-```
-managedMachinePool:
- maxPodPerNode: 30
-```
+8. Click **Next** to continue.
-## Node Pools
+9. Configure your Azure AKS cluster using the following table for reference.
-This section guides you to through configuring Node Pools. As you set up the cluster, the **Nodes config** section will
-allow you to customize node pools. AKS Clusters are comprised of System and User node pools, and all pool types can be
-configured to use the Autoscaler, which scales out pools horizontally based on per node workload counts.
+ :::warning
-A complete AKS cluster contains the following:
+ If you enable the **Disable Properties** setting when
+ [registering an Azure cloud account](./azure-cloud.md#add-azure-cloud-account), Palette cannot create network
+ resources on your behalf. In this case, every time you deploy a cluster, you must manually specify its virtual
+ network subnets and security groups.
-
+ :::
-1. As a mandatory primary **System Node Pool**, this pool will host the pods necessary to operate a Kubernetes cluster,
- like the control plane and etcd. All system pools must have at least a single node for a development cluster; one
- node is enough for high availability production clusters, and three nodes or more is recommended.
+ | **Parameter** | **Description** |
+ | -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+ | **Subscription** | Use the **drop-down Menu** to select the subscription that will be used to access Azure services. |
+ | **Region** | Use the **drop-down Menu** to choose the Azure region where you would like to provision the cluster. |
+ | **Resource Group** | Select the name of the resource group that contains the Azure resources you will be accessing. |
+ | **Storage Account** | Optionally, if you have a custom storage account available, you can use the **drop-down Menu** to select the storage account name. For information about use cases for custom storage, review [Azure Storage](../azure/architecture.md#azure-storage). |
+ | **Storage Container** | Optionally, if you are using a custom storage container, use the **drop-down Menu** to select it. For information about use cases for custom storage, review [Azure Storage](../azure/architecture.md#azure-storage). |
+ | **SSH Key** | The public SSH key for connecting to the nodes. SSH key pairs must be pre-configured in your Azure environment. The key you select is inserted into the provisioned VMs. For more information, review Microsoft's [Supported SSH key formats](https://learn.microsoft.com/en-us/azure/virtual-machines/linux/mac-create-ssh-keys#supported-ssh-key-formats). |
+ | **Enable Private Cluster** | Whether the control plane or API server should have internal IP addresses. Refer to the [Create a private AKS cluster](https://learn.microsoft.com/en-us/azure/aks/private-clusters?tabs=azure-portal) guide for more information. |
+ | **Static Placement** | By default, Palette uses dynamic placement. This creates a new VNet for clusters with two subnets in different Availability Zones (AZs). Palette places resources in these clusters, manages the resources, and deletes them when the corresponding cluster is deleted. <br> If you want to place resources into a pre-existing VNet, enable the **Static Placement** option and fill out the input values listed in the [Static Placement](#static-placement-settings) table below. |
-2. **Worker Node** pools consist of one (1) or more per workload requirements. Worker node pools can be sized to zero
- (0) nodes when not in use.
+ #### Static Placement Settings
-
+ Each subnet allows you to specify the CIDR range and a security group.
-## Create and Remove Node Pools
+ | **Parameter** | **Description** |
+ | -------------------------- | ----------------------------------------------------------- |
+ | **Network Resource Group** | The logical container for grouping related Azure resources. |
+ | **Virtual Network** | Select the VNet. |
+ | **CIDR Block** | Select the IP address CIDR range. |
+ | **Security Group Name** | Select the security group name. |
+ | **Control Plane Subnet** | Select the control plane subnet. |
+ | **Worker Subnet** | Select the worker network. |
-During cluster creation, you will default to a single pool.
+10. Click **Next** to continue.
-
+11. Provide the following node pool and cloud configuration information. To learn more about node pools, review the
+ [Node Pool](../../cluster-management/node-pool.md) guide.
-1. To add additional pools, click **Add Node Pool**.
+ #### System Node Pool
-2. Provide any additional Kubernetes labels to assign to each node in the pool. This section is optional, and you can
- use a `key:value` structure, press your space bar to add additional labels, and click the **X** with your mouse to
- remove unwanted labels.
+ To deploy an AKS cluster, you need at least one system node pool. System node pools host the critical system pods,
+ such as CoreDNS and metrics-server, that the cluster needs to operate. To add a system node pool, add a worker node
+ pool and select the **System Node Pool** checkbox.
-3. To remove a pool, click **Remove** across from the title for each pool.
+ :::info
-
+ A system pool must have at least one node, which is sufficient for development purposes. For high availability in
+ production environments, we recommend three or more nodes. You can configure a static node count with the **Number
+ of nodes in the pool** parameter or a dynamic node count with the **Enable Autoscaler** parameter.
-## Create a System Node Pool
+ :::
-1. Each cluster requires at least one (1) system node pool. To define a pool as a system pool, check the box labeled
- **System Node Pool**.
-
+ The following table describes how to configure a system node pool.
-:::info
+ | **Parameter** | **Description** |
+ | ------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+ | **Node pool name** | A descriptive name for the node pool. |
+ | **Enable Autoscaler** | Whether Palette should scale the pool horizontally based on its per-node workload counts. If enabled, instead of the **Number of nodes in the pool** parameter, you will have to configure the **Minimum size** and **Maximum size** parameters, which will allow AKS to adjust the node pool size based on the workload. You can set the node count to a minimum of zero and a maximum of 1000. Setting both parameters to the same value results in a static node count. |
+ | **System Node Pool** | Sets the pool to be a system node pool. |
+ | **Number of nodes in the pool** | A statically defined number of nodes in the system pool. |
+ | **Additional Labels** | Optional node labels in the key-value format. To learn more, review [Apply Labels to Nodes](../../cluster-management/taints.md#labels). Example: `environment:production`. |
-Identifying a Node Pool as a System Pool will deactivate taints, and the operating system options within the cluster.
-You can not to taint or change the node OS from Linux. Refer to the
-[Azure AKS Documentation](https://docs.microsoft.com/en-us/azure/aks/use-system-pools?tabs=azure-cli#system-and-user-node-pools")
-for more details on pool limitations.
+ #### System Node Pool Cloud Configuration
-:::
+ The following table describes how to configure the Azure Cloud for a system node pool.
-
+ | **Parameter** | **Description** |
+ | ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+ | **Instance Type** | Select the instance type to use for all nodes in the system node pool. |
+ | **Managed disk** | Choose a storage option. For more information, refer to Microsoft's [Storage Account Overview](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview) reference. For information about Solid State Drive (SSD) disks, refer to the [Standard SSD Disks for Azure Virtual Machine Workloads](https://azure.microsoft.com/en-us/blog/preview-standard-ssd-disks-for-azure-virtual-machine-workloads/) reference. |
+ | **Disk size** | Choose a disk size based on your requirements. The default size is **60** GB. |
-2. Provide a name in the **Node pool name** text box. When creating a node, it is good practice to include an
- identifying name that matches the node in Azure.
+ #### Worker Node Pool
-3. Add the **Desired size**. You can start with three for multiple nodes.
+ The following table describes how to configure a worker node pool.
-4. Include **Additional Labels**. This is optional.
+ | **Parameter** | **Description** |
+ | ------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+ | **Node pool name** | A descriptive name for the node pool. |
+ | **Enable Autoscaler** | Whether Palette should scale the pool horizontally based on its per-node workload counts. If enabled, instead of the **Number of nodes in the pool** parameter, you will have to configure the **Minimum size** and **Maximum size** parameters, which will allow AKS to adjust the node pool size based on the workload. You can set the node count to a minimum of zero and a maximum of 1000. Setting both parameters to the same value results in a static node count. |
+ | **System Node Pool** | Sets the pool to be a system node pool. |
+ | **Number of nodes in the pool** | A statically defined number of nodes in the worker pool. |
+ | **Additional Labels** | Optional node labels in the key-value format. To learn more, review [Apply Labels to Nodes](../../cluster-management/taints.md#labels). Example: `environment:production`. |
+ | **Taints** | You can apply optional taint labels to a worker node pool. Review the [Node Pool](../../cluster-management/node-pool.md) and [Apply Taints to Nodes](../../cluster-management/taints.md#apply-taints-to-nodes) guides to learn more. <br> Toggle the **Taint** button to create a taint label. When tainting is enabled, you need to provide a custom key-value pair. Use the **drop-down Menu** to choose one of the following **Effect** options: <br> - **NoSchedule**—Pods are not scheduled onto nodes with this taint. <br> - **PreferNoSchedule**—Kubernetes attempts to avoid scheduling pods onto nodes with this taint, but scheduling is not prohibited. <br> - **NoExecute**—Existing pods on nodes with this taint are evicted. |
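+
+ To make the **Taints** setting concrete, a workload that should run on a tainted worker pool needs a matching
+ toleration in its Pod spec. The manifest below is a generic Kubernetes sketch with hypothetical taint key and value
+ names, not output generated by Palette.
+
+ ```yaml
+ # Generic sketch: a Pod tolerating a hypothetical taint (key "workload", value "gpu")
+ # applied to a worker node pool with the NoSchedule effect.
+ apiVersion: v1
+ kind: Pod
+ metadata:
+   name: gpu-sample
+ spec:
+   tolerations:
+     - key: "workload"
+       operator: "Equal"
+       value: "gpu"
+       effect: "NoSchedule"
+   containers:
+     - name: app
+       image: nginx:1.25
+ ```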
-5. In the **Azure Cloud Configuration** section, add the **Instance type**. The cost details are present for review.
+ #### Worker Node Pool Cloud Configuration
-6. Enter the **Managed Disk** information and its size.
+ The following table describes how to configure the Azure Cloud for a worker node pool.
- :::info
+ | **Parameter** | **Description** |
+ | ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+ | **Instance Type** | Select the instance type to use for all nodes in the worker node pool. You must allocate at least 2 vCPUs and 4 GB RAM across all worker nodes. |
+ | **Managed disk** | Choose a storage option. For more information, refer to Microsoft's [Storage Account Overview](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview) reference. For information about Solid State Drive (SSD) disks, refer to the [Standard SSD Disks for Azure Virtual Machine Workloads](https://azure.microsoft.com/en-us/blog/preview-standard-ssd-disks-for-azure-virtual-machine-workloads/) reference. |
+ | **Disk size** | Choose a disk size based on your requirements. The default size is **60** GB. |
- You can add more worker node pools after creating the system node pool to customize specific worker nodes for
- specialized workloads. For example, you can configure the system worker pool with the _Standard_D2_v2_ instance type
- for general-purpose workloads, and another worker pool with the _Standard_NC12s_v3_ instance type for GPU workloads.
+12. Click **Next** to continue.
- You can also select **OS Type** as **Windows** to create a worker pool specifically for Windows workloads.
+13. Specify your preferred **OS Patching Schedule**.
- :::
+14. Enable any scan options you want Palette to perform, and select a scan schedule. Palette provides support for
+ Kubernetes configuration security, penetration testing, and conformance testing.
-7. If you require additional or multiple node pools for different types of workloads, click the **Add Worker Pool**
- button to create the next node pool.
+15. Schedule any backups you want Palette to perform. Review
+ [Backup and Restore](../../cluster-management/backup-restore/backup-restore.md) for more information.
-## Configure Node Pools
+16. If you're using custom OIDC, configure the Role-Based Access Control (RBAC). You must map a set of users or groups
+ to a Kubernetes RBAC role. To learn how to map a Kubernetes role to users and groups, refer to
+ [Create Role Bindings](../../cluster-management/cluster-rbac.md#create-role-bindings). Refer to
+ [Use RBAC with OIDC](../../../integrations/kubernetes.md#use-rbac-with-oidc) for an example.
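+
+ For orientation, the mapping described above is conceptually equivalent to a standard Kubernetes role binding such
+ as the sketch below. The group name is a placeholder for whatever identifier your identity provider emits; the
+ linked guides remain the authoritative reference.
+
+ ```yaml
+ # Conceptual sketch only: bind an OIDC group to the built-in cluster-admin ClusterRole.
+ # Replace the group name with the group ID or name emitted by your identity provider.
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRoleBinding
+ metadata:
+   name: oidc-admins
+ subjects:
+   - kind: Group
+     name: "<idp-group-id>"
+     apiGroup: rbac.authorization.k8s.io
+ roleRef:
+   kind: ClusterRole
+   name: cluster-admin
+   apiGroup: rbac.authorization.k8s.io
+ ```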
-In all types of node pools, configure the following.
+17. Click **Validate** and review the cluster configuration and settings summary.
-
+18. Click **Finish Configuration** to deploy the cluster. Provisioning Azure AKS clusters can take several minutes.
-1. Provide a name in the **Node pool name** text box. When creating a node, it is good practice to include an
- identifying name.
+The cluster details page contains the status and details of the deployment. Use this page to track the deployment
+progress.
-**Note:** Windows clusters have a name limitation of six (6) characters.
+To learn how to remove a cluster and what to do if a force delete is necessary so you do not incur unexpected costs,
+refer to [Cluster Removal](../../cluster-management/remove-clusters.md).
-2. Provide how many nodes the pool will contain by adding the count to the box labeled **Number of nodes in the pool**.
- Configure each pool to use the autoscaler controller. There are more details on how to configure that below.
+## Validate
-3. Alternative to a static node pool count, you can enable the autoscaler controller, click **Enable Autoscaler** to
- change to the **Minimum size** and **Maximum size** fields which will allow AKS to increase or decrease the size of
- the node pool based on workloads. The smallest size of a dynamic pool is zero (0), and the maximum is one thousand
- (1000); setting both to the same value is identical to using a static pool size.
+1. Log in to [Palette](https://console.spectrocloud.com).
-4. Provide any additional Kubernetes labels to assign to each node in the pool. This section is optional; you can use a
- `key:value` structure. Press your space bar to add additional labels and click the **X** with your mouse to remove
- unwanted labels.
+2. Ensure you are in the correct project scope.
-5. In the **Azure Cloud Configuration** section:
+3. From the left **Main Menu**, select **Clusters**. The **Clusters** page lists all available clusters that Palette
+ manages.
-- Provide instance details for all nodes in the pool with the **Instance type** dropdown. The cost details are present
- for review.
+4. Select the Azure AKS cluster you deployed to review its details. Ensure the **Cluster Status** field displays the
+ value **Running**.
-
-
-:::info
-
-You can add new worker pools to customize specific worker nodes for specialized workloads. As an example, you can
-configure the default worker pool with the _Standard_D2_v2_ instance type for general-purpose workloads, and another
-worker pool with the _Standard_NC12s_v3_ instance type for GPU workloads.
-
-:::
-
-
-
-- Provide the disk type via the **Managed Disk** dropdown and the size in Gigabytes (GB) in the **Disk size** field.
-
-:::info
-
-A minimum allocation of two (2) CPU cores is required across all worker nodes.
-
-A minimum allocation of 4Gi of memory is required across all worker nodes.
-
-:::
-
-
-
-- When are done setting up all node pools, click **Next** to go to the **Settings** page to **Validate** and finish the
- cluster deployment wizard.
-
-**Note**: Keep an eye on the **Cluster Status** once you click **Finish Configuration** as it will start as
-_Provisioning_. Deploying an AKS cluster does take a considerable amount of time to complete, and the **Cluster Status**
-in Palette will say _Ready_ when it is complete and ready to use.
-
-
-
-## Configure an Azure Active Directory
-
-The Azure Active Directory (AAD) could be enabled while creating and linking the Azure Cloud account for the Palette
-Platform, using a simple check box. Once the cloud account is created, you can create the Azure AKS cluster. The
-AAD-enabled AKS cluster will have its Admin _kubeconfig_ file created and can be downloaded from our Palette UI as the
-'Kubernetes config file'. You need to manually create the user's _kubeconfig_ file to enable AAD completely. The
-following are the steps to create the custom user _kubeconfig_ file:
-
-
-
-1. Go to the Azure console to create the Groups in Azure AD to access the Kubernetes RBAC and Azure AD control access to
- cluster resources.
-
-2. After you create the groups, create users in the Azure AD.
-
-3. Create custom Kubernetes roles and role bindings for the created users and apply the roles and role bindings, using
- the Admin _kubeconfig_ file.
-
-
-
-:::info
-
-The above step can also be completed using Spectro RBAC pack available under the Authentication section of Add-on Packs.
-
-:::
-
-
-
-4. Once the roles and role bindings are created, these roles can be linked to the Groups created in Azure AD.
+## Resources
-5. The users can now access the Azure clusters with the complete benefits of AAD. To get the user-specific _kubeconfig_
- file, please issue the following command:
+- [Register and Manage Azure Cloud Account](azure-cloud.md)
-`az aks get-credentials --resource-group --name `
+- [Create an Infrastructure Profile](../../../profiles/cluster-profiles/create-cluster-profiles/create-infrastructure-profile.md)
-
+- [Azure Storage](../azure/architecture.md#azure-storage)
-## Resources
+- [Configure OIDC Identity Provider](../../../integrations/kubernetes.md#configure-oidc-identity-provider)
-- [Use Kubernetes RBAC with Azure AD integration](https://learn.microsoft.com/en-us/azure/aks/azure-ad-rbac?tabs=portal)
+- [Create Role Bindings](../../cluster-management/cluster-rbac.md#create-role-bindings)
-- [Azure Kubernetes Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/)
+- [Use RBAC with OIDC](../../../integrations/kubernetes.md#use-rbac-with-oidc)
diff --git a/docs/docs-content/clusters/public-cloud/azure/windows.md b/docs/docs-content/clusters/public-cloud/azure/windows.md
index e86c568ec2..be94c22a95 100644
--- a/docs/docs-content/clusters/public-cloud/azure/windows.md
+++ b/docs/docs-content/clusters/public-cloud/azure/windows.md
@@ -18,7 +18,6 @@ application to be deployed to that node pool.
- An AKS cluster created as described in the [Create and Manage Azure AKS Cluster](./aks.md) guide.
- A Linux-based node pool configured as the system node pool as described in the
- [Create a System Node Pool](../azure/aks.md#create-a-system-node-pool) section of the
[Create and Manage Azure AKS Cluster](../azure/aks.md) guide.
- A Windows node pool configured as described in the [Create a Windows Node Pool](#create-a-windows-node-pool) section.
@@ -33,8 +32,7 @@ Follow the steps below to create a Windows node pool within an existing AKS clus
:::info
Palette also allows you to add a Windows node pool during the creation of an AKS cluster. Refer to the
-[Create and Manage Azure AKS CLuster - Create and Remove Node Pools](./aks#create-and-remove-node-pools) page to learn
-more.
+[Node Pool](../../cluster-management/node-pool.md) guide to learn more.
:::
diff --git a/docs/docs-content/integrations/kubernetes-generic.md b/docs/docs-content/integrations/kubernetes-generic.md
index ed6b2767fe..2e6358abd4 100644
--- a/docs/docs-content/integrations/kubernetes-generic.md
+++ b/docs/docs-content/integrations/kubernetes-generic.md
@@ -180,8 +180,7 @@ to a Kubernetes role that is available in the cluster. The Kubernetes role can b
#### Configure Custom OIDC
The custom method to configure OIDC and apply RBAC for an OIDC provider can be used for all cloud services except Amazon
-Elastic Kubernetes Service (EKS) and
-[Azure-AKS](../clusters/public-cloud/azure/aks.md#configure-an-azure-active-directory).
+Elastic Kubernetes Service (EKS) and [Azure-AKS](../clusters/public-cloud/azure/aks.md).
@@ -190,8 +189,8 @@ Elastic Kubernetes Service (EKS) and
Follow these steps to configure a third-party OIDC IDP. You can apply these steps to all the public cloud providers
except Azure AKS and Amazon EKS clusters. Azure AKS and Amazon EKS require different configurations. AKS requires you to
use Azure Active Directory (AAD) to enable OIDC integration. Refer to
-[Azure-AKS](../clusters/public-cloud/azure/aks.md#configure-an-azure-active-directory) to learn more. Click the **Amazon
-EKS** tab for steps to configure OIDC for EKS clusters.
+[Enable OIDC in Kubernetes Clusters With Entra ID](../user-management/saml-sso/palette-sso-with-entra-id.md#enable-oidc-in-kubernetes-clusters-with-entra-id)
+to learn more. Click the **Amazon EKS** tab for steps to configure OIDC for EKS clusters.
1. Add the following parameters to your Kubernetes YAML file when creating a cluster profile. Replace the
`identityProvider` value with your OIDC provider name.
@@ -413,8 +412,7 @@ to a Kubernetes role that is available in the cluster. The Kubernetes role can b
#### Configure Custom OIDC
The custom method to configure OIDC and apply RBAC for an OIDC provider can be used for all cloud services except Amazon
-Elastic Kubernetes Service (EKS) and
-[Azure-AKS](../clusters/public-cloud/azure/aks.md#configure-an-azure-active-directory).
+Elastic Kubernetes Service (EKS) and [Azure-AKS](../clusters/public-cloud/azure/aks.md).
@@ -423,8 +421,8 @@ Elastic Kubernetes Service (EKS) and
Follow these steps to configure a third-party OIDC IDP. You can apply these steps to all the public cloud providers
except Azure AKS and Amazon EKS clusters. Azure AKS and Amazon EKS require different configurations. AKS requires you to
use Azure Active Directory (AAD) to enable OIDC integration. Refer to
-[Azure-AKS](../clusters/public-cloud/azure/aks.md#configure-an-azure-active-directory) to learn more. Click the **Amazon
-EKS** tab for steps to configure OIDC for EKS clusters.
+[Enable OIDC in Kubernetes Clusters With Entra ID](../user-management/saml-sso/palette-sso-with-entra-id.md#enable-oidc-in-kubernetes-clusters-with-entra-id)
+to learn more. Click the **Amazon EKS** tab for steps to configure OIDC for EKS clusters.
1. Add the following parameters to your Kubernetes YAML file when creating a cluster profile. Replace the
`identityProvider` value with your OIDC provider name.
@@ -647,8 +645,7 @@ to a Kubernetes role that is available in the cluster. The Kubernetes role can b
#### Configure Custom OIDC
The custom method to configure OIDC and apply RBAC for an OIDC provider can be used for all cloud services except Amazon
-Elastic Kubernetes Service (EKS) and
-[Azure-AKS](../clusters/public-cloud/azure/aks.md#configure-an-azure-active-directory).
+Elastic Kubernetes Service (EKS) and [Azure-AKS](../clusters/public-cloud/azure/aks.md).
@@ -657,8 +654,8 @@ Elastic Kubernetes Service (EKS) and
Follow these steps to configure a third-party OIDC IDP. You can apply these steps to all the public cloud providers
except Azure AKS and Amazon EKS clusters. Azure AKS and Amazon EKS require different configurations. AKS requires you to
use Azure Active Directory (AAD) to enable OIDC integration. Refer to
-[Azure-AKS](../clusters/public-cloud/azure/aks.md#configure-an-azure-active-directory) to learn more. Click the **Amazon
-EKS** tab for steps to configure OIDC for EKS clusters.
+[Enable OIDC in Kubernetes Clusters With Entra ID](../user-management/saml-sso/palette-sso-with-entra-id.md#enable-oidc-in-kubernetes-clusters-with-entra-id)
+to learn more. Click the **Amazon EKS** tab for steps to configure OIDC for EKS clusters.
1. Add the following parameters to your Kubernetes YAML file when creating a cluster profile. Replace the
`identityProvider` value with your OIDC provider name.
@@ -878,8 +875,7 @@ to a Kubernetes role that is available in the cluster. The Kubernetes role can b
#### Configure Custom OIDC
The custom method to configure OIDC and apply RBAC for an OIDC provider can be used for all cloud services except Amazon
-Elastic Kubernetes Service (EKS) and
-[Azure-AKS](../clusters/public-cloud/azure/aks.md#configure-an-azure-active-directory).
+Elastic Kubernetes Service (EKS) and [Azure-AKS](../clusters/public-cloud/azure/aks.md).
@@ -888,8 +884,8 @@ Elastic Kubernetes Service (EKS) and
Follow these steps to configure a third-party OIDC IDP. You can apply these steps to all the public cloud providers
except Azure AKS and Amazon EKS clusters. Azure AKS and Amazon EKS require different configurations. AKS requires you to
use Azure Active Directory (AAD) to enable OIDC integration. Refer to
-[Azure-AKS](../clusters/public-cloud/azure/aks.md#configure-an-azure-active-directory) to learn more. Click the **Amazon
-EKS** tab for steps to configure OIDC for EKS clusters.
+[Enable OIDC in Kubernetes Clusters With Entra ID](../user-management/saml-sso/palette-sso-with-entra-id.md#enable-oidc-in-kubernetes-clusters-with-entra-id)
+to learn more. Click the **Amazon EKS** tab for steps to configure OIDC for EKS clusters.
1. Add the following parameters to your Kubernetes YAML file when creating a cluster profile. Replace the
`identityProvider` value with your OIDC provider name.
diff --git a/docs/docs-content/integrations/kubernetes.md b/docs/docs-content/integrations/kubernetes.md
index 2b94124d91..6b5d021779 100644
--- a/docs/docs-content/integrations/kubernetes.md
+++ b/docs/docs-content/integrations/kubernetes.md
@@ -324,8 +324,7 @@ virtual clusters. For guidance, refer to
### Configure Custom OIDC
The custom method to configure OIDC and apply RBAC for an OIDC provider can be used for all cloud services except Amazon
-Elastic Kubernetes Service (EKS) and
-[Azure-AKS](../clusters/public-cloud/azure/aks.md#configure-an-azure-active-directory).
+Elastic Kubernetes Service (EKS) and [Azure-AKS](../clusters/public-cloud/azure/aks.md).
@@ -334,8 +333,8 @@ Elastic Kubernetes Service (EKS) and
Follow these steps to configure a third-party OIDC IDP. You can apply these steps to all the public cloud providers
except Azure AKS and Amazon EKS clusters. Azure AKS and Amazon EKS require different configurations. AKS requires you to
use Azure Active Directory (AAD) to enable OIDC integration. Refer to
-[Azure-AKS](../clusters/public-cloud/azure/aks.md#configure-an-azure-active-directory) to learn more. Click the **Amazon
-EKS** tab for steps to configure OIDC for EKS clusters.
+[Enable OIDC in Kubernetes Clusters With Entra ID](../user-management/saml-sso/palette-sso-with-entra-id.md#enable-oidc-in-kubernetes-clusters-with-entra-id)
+to learn more. Click the **Amazon EKS** tab for steps to configure OIDC for EKS clusters.
1. Add the following parameters to your Kubernetes YAML file when creating a cluster profile.
@@ -665,8 +664,7 @@ virtual clusters. For guidance, refer to
### Configure Custom OIDC
The custom method to configure OIDC and apply RBAC for an OIDC provider can be used for all cloud services except Amazon
-Elastic Kubernetes Service (EKS) and
-[Azure-AKS](../clusters/public-cloud/azure/aks.md#configure-an-azure-active-directory).
+Elastic Kubernetes Service (EKS) and [Azure-AKS](../clusters/public-cloud/azure/aks.md).
@@ -675,8 +673,8 @@ Elastic Kubernetes Service (EKS) and
Follow these steps to configure a third-party OIDC IDP. You can apply these steps to all the public cloud providers
except Azure AKS and Amazon EKS clusters. Azure AKS and Amazon EKS require different configurations. AKS requires you to
use Azure Active Directory (AAD) to enable OIDC integration. Refer to
-[Azure-AKS](../clusters/public-cloud/azure/aks.md#configure-an-azure-active-directory) to learn more. Click the **Amazon
-EKS** tab for steps to configure OIDC for EKS clusters.
+[Enable OIDC in Kubernetes Clusters With Entra ID](../user-management/saml-sso/palette-sso-with-entra-id.md#enable-oidc-in-kubernetes-clusters-with-entra-id)
+to learn more. Click the **Amazon EKS** tab for steps to configure OIDC for EKS clusters.
1. Add the following parameters to your Kubernetes YAML file when creating a cluster profile.
@@ -1008,8 +1006,7 @@ for virtual clusters. For guidance, refer to
### Configure Custom OIDC
The custom method to configure OIDC and apply RBAC for an OIDC provider can be used for all cloud services except Amazon
-Elastic Kubernetes Service (EKS) and
-[Azure-AKS](../clusters/public-cloud/azure/aks.md#configure-an-azure-active-directory).
+Elastic Kubernetes Service (EKS) and [Azure-AKS](../clusters/public-cloud/azure/aks.md).
@@ -1018,8 +1015,8 @@ Elastic Kubernetes Service (EKS) and
Follow these steps to configure a third-party OIDC IDP. You can apply these steps to all the public cloud providers
except Azure AKS and Amazon EKS clusters. Azure AKS and Amazon EKS require different configurations. AKS requires you to
use Azure Active Directory (AAD) to enable OIDC integration. Refer to
-[Azure-AKS](../clusters/public-cloud/azure/aks.md#configure-an-azure-active-directory) to learn more. Click the **Amazon
-EKS** tab for steps to configure OIDC for EKS clusters.
+[Enable OIDC in Kubernetes Clusters With Entra ID](../user-management/saml-sso/palette-sso-with-entra-id.md#enable-oidc-in-kubernetes-clusters-with-entra-id)
+to learn more. Click the **Amazon EKS** tab for steps to configure OIDC for EKS clusters.
1. Add the following parameters to your Kubernetes YAML file when creating a cluster profile.
@@ -1348,8 +1345,7 @@ for virtual clusters. For guidance, refer to
### Configure Custom OIDC
The custom method to configure OIDC and apply RBAC for an OIDC provider can be used for all cloud services except Amazon
-Elastic Kubernetes Service (EKS) and
-[Azure-AKS](../clusters/public-cloud/azure/aks.md#configure-an-azure-active-directory).
+Elastic Kubernetes Service (EKS) and [Azure-AKS](../clusters/public-cloud/azure/aks.md).
@@ -1358,8 +1354,8 @@ Elastic Kubernetes Service (EKS) and
Follow these steps to configure a third-party OIDC IDP. You can apply these steps to all the public cloud providers
except Azure AKS and Amazon EKS clusters. Azure AKS and Amazon EKS require different configurations. AKS requires you to
use Azure Active Directory (AAD) to enable OIDC integration. Refer to
-[Azure-AKS](../clusters/public-cloud/azure/aks.md#configure-an-azure-active-directory) to learn more. Click the **Amazon
-EKS** tab for steps to configure OIDC for EKS clusters.
+[Enable OIDC in Kubernetes Clusters With Entra ID](../user-management/saml-sso/palette-sso-with-entra-id.md#enable-oidc-in-kubernetes-clusters-with-entra-id)
+to learn more. Click the **Amazon EKS** tab for steps to configure OIDC for EKS clusters.
1. Add the following parameters to your Kubernetes YAML file when creating a cluster profile.