PLT-708: Import support for clusters. #357

Merged: 4 commits, Oct 19, 2023
39 changes: 39 additions & 0 deletions RELEASE.md
@@ -0,0 +1,39 @@
# Releasing the Custom Terraform Provider

This guide outlines the steps to release the custom Terraform provider, ensuring it is published on GitHub and verified on the Terraform Registry.

## 1. Prepare for Release

### Ensure Code Quality and Functionality
- Perform thorough testing to ensure all functionality works as expected.
- Ensure that the code adheres to best practices and is well-documented.

### Update Release Notes
- Ensure that the release notes reflect the changes in the release. Check with the engineering manager if there is any doubt.
- Ensure that the latest documentation is merged to the branch. Work with the engineering manager to sign off that the documentation is up to date.

## 2. GitHub Release

### Draft a New Release
- Navigate to the [Releases](https://github.com/spectrocloud/terraform-provider-spectrocloud/releases) section of the GitHub repository.
- Click "Draft a new release" and fill in the tag version, release title, and a description of the changes. By default the release is named and tagged following the `v0.15.0` pattern and cut from the `main` branch. If a custom branch is used, it should be specified in the release ticket.

### Attach Binary Files
- Binary files are built by a GitHub Action. Monitor the progress of the build and attach the binary files to the release once the build is complete.
- If the build fails and a retry is needed, a new version should be released.
- Binaries are uploaded to the Terraform Registry automatically; see the next steps.

## 3. Verify on Terraform Registry

### Ensure New Version is Available
- After publishing the release on GitHub, check the [Terraform Registry page](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest) to ensure the new version is available. Publishing to the Terraform Registry happens automatically.
- It may take some time, usually around 20 minutes, for the new version to appear on the Terraform Registry after the release is published.

### Verify Documentation and Usage Examples
- Ensure that the documentation on the Terraform Registry is accurate and reflects the latest changes.

### Test the Provider
- Reference the provider in a Terraform configuration and initialize it using `terraform init`, for example with the minimal configuration sketched below.
- Ensure that the provider is fetched from the Terraform Registry and works as expected in a sanity-test scenario.
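
A minimal sanity-test configuration might look like the following sketch. The version constraint, endpoint, project name, and variable name are illustrative assumptions and should be adjusted to the release being verified.

```hcl
terraform {
  required_providers {
    spectrocloud = {
      source  = "spectrocloud/spectrocloud"
      version = ">= 0.16.0" # pin to (or above) the newly released version
    }
  }
}

variable "sc_api_key" {
  description = "Spectro Cloud API key"
  sensitive   = true
}

provider "spectrocloud" {
  host         = "api.spectrocloud.com" # SaaS endpoint
  api_key      = var.sc_api_key
  project_name = "Default"
}
```

Running `terraform init` should download the new provider version from the registry; a follow-up `terraform plan` is a quick way to confirm the provider configures correctly.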

### Publish the Release Notification
- Post the release notification on the Slack channel.
10 changes: 10 additions & 0 deletions examples/e2e/import/import_block.tf
@@ -0,0 +1,10 @@
# Terraform configuration
/*import {
  id = "652ce1d0a7296c6b3184555f:project"
  to = spectrocloud_cluster_edge_native.my_cluster
}

import {
  id = "65202c68d160c64b49a34985:tenant"
  to = spectrocloud_cluster_aks.my_cluster
}*/
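
These commented-out blocks use Terraform's config-driven import, introduced in Terraform 1.5. A minimal sketch of how they are intended to be used once uncommented (the cluster UID below is the one from the example above, and the resource stub mirrors `import_cli.tf` in this directory):

```hcl
# Requires Terraform >= 1.5 for the import block.
import {
  id = "652ce1d0a7296c6b3184555f:project" # <cluster-uid>:<scope>
  to = spectrocloud_cluster_edge_native.my_cluster
}

resource "spectrocloud_cluster_edge_native" "my_cluster" {
  # Fill in (or generate) the cluster attributes to match the imported state;
  # alternatively, omit this stub and run
  # `terraform plan -generate-config-out=generated.tf` to have Terraform draft it.
}
```

With the blocks in place, `terraform plan` previews the import and `terraform apply` records the clusters in state.
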
4 changes: 4 additions & 0 deletions examples/e2e/import/import_cli.tf
@@ -0,0 +1,4 @@

resource "spectrocloud_cluster_aks" "my_cluster" {
}
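
This intentionally empty resource block is the target for CLI-driven import: a command of the form `terraform import spectrocloud_cluster_aks.my_cluster 65202c68d160c64b49a34985:tenant` (the cluster UID is illustrative, borrowed from `import_block.tf`; the suffix is the scope, `project` or `tenant`) pulls the existing cluster into state, after which the configuration can be filled in to match.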

28 changes: 28 additions & 0 deletions examples/e2e/import/providers.tf
@@ -0,0 +1,28 @@
terraform {
  required_providers {
    spectrocloud = {
      version = ">= 0.16.0"
      source  = "spectrocloud/spectrocloud"
    }
  }
}

variable "sc_host" {
  description = "Spectro Cloud Endpoint"
  default     = "api.spectrocloud.com"
}

variable "sc_api_key" {
  description = "Spectro Cloud API key"
}

variable "sc_project_name" {
  description = "Spectro Cloud Project (e.g: Default)"
  default     = "edge-sites"
}

provider "spectrocloud" {
  host         = var.sc_host
  api_key      = var.sc_api_key
  project_name = var.sc_project_name
}
4 changes: 4 additions & 0 deletions examples/e2e/import/terraform.template.tfvars
@@ -0,0 +1,4 @@
# Spectro Cloud credentials
sc_host = "{Enter Spectro Cloud API Host}" #e.g: api.spectrocloud.com (for SaaS)
sc_api_key = "{Enter Spectro Cloud API Key}"
sc_project_name = "{Enter Spectro Cloud Project Name}" #e.g: Default
90 changes: 89 additions & 1 deletion spectrocloud/resource_cluster_aks.go
@@ -22,7 +22,10 @@ func resourceClusterAks() *schema.Resource {
ReadContext: resourceClusterAksRead,
UpdateContext: resourceClusterAksUpdate,
DeleteContext: resourceClusterDelete,
Description: "Resource for managing AKS clusters in Spectro Cloud through Palette.",
Importer: &schema.ResourceImporter{
StateContext: resourceClusterAksImport,
},
Description: "Resource for managing AKS clusters in Spectro Cloud through Palette.",

Timeouts: &schema.ResourceTimeout{
Create: schema.DefaultTimeout(60 * time.Minute),
@@ -302,10 +305,19 @@ func resourceClusterAksRead(_ context.Context, d *schema.ResourceData, m interfa
if err := d.Set("cloud_config_id", configUID); err != nil {
return diag.FromErr(err)
}
if err := ReadCommonAttributes(d); err != nil {
return diag.FromErr(err)
}
ClusterContext := d.Get("context").(string)
if config, err := c.GetCloudConfigAks(configUID, ClusterContext); err != nil {
return diag.FromErr(err)
} else {
if err := d.Set("cloud_account_id", config.Spec.CloudAccountRef.UID); err != nil {
return diag.FromErr(err)
}
if err := d.Set("cloud_config", flattenClusterConfigsAks(config)); err != nil {
return diag.FromErr(err)
}
mp := flattenMachinePoolConfigsAks(config.Spec.MachinePoolConfig)
mp, err := flattenNodeMaintenanceStatus(c, d, c.GetNodeStatusMapAks, mp, configUID, ClusterContext)
if err != nil {
@@ -324,6 +336,80 @@ func resourceClusterAksRead(_ context.Context, d *schema.ResourceData, m interfa
return diags
}

func ReadCommonAttributes(d *schema.ResourceData) error {
    ForceDelete := d.Get("force_delete").(bool)
    if err := d.Set("force_delete", ForceDelete); err != nil {
        return err
    }

    ForceDeleteDelay := d.Get("force_delete_delay").(int)
    if ForceDeleteDelay == 0 {
        ForceDeleteDelay = 20 // set default value
    }
    if err := d.Set("force_delete_delay", ForceDeleteDelay); err != nil {
        return err
    }

    OsPatchOnBoot := d.Get("os_patch_on_boot").(bool)
    if err := d.Set("os_patch_on_boot", OsPatchOnBoot); err != nil {
        return err
    }

    SkipCompletion := d.Get("skip_completion").(bool)
    if err := d.Set("skip_completion", SkipCompletion); err != nil {
        return err
    }

    ApplySetting := d.Get("apply_setting").(string)
    if ApplySetting == "" {
        ApplySetting = "DownloadAndInstall" // set default value
    }
    if err := d.Set("apply_setting", ApplySetting); err != nil {
        return err
    }

    return nil
}

func flattenClusterConfigsAks(config *models.V1AzureCloudConfig) []interface{} {
    if config == nil || config.Spec == nil || config.Spec.ClusterConfig == nil {
        return make([]interface{}, 0)
    }

    m := make(map[string]interface{})

    if config.Spec.ClusterConfig.SubscriptionID != nil {
        m["subscription_id"] = config.Spec.ClusterConfig.SubscriptionID
    }
    if config.Spec.ClusterConfig.ResourceGroup != "" {
        m["resource_group"] = config.Spec.ClusterConfig.ResourceGroup
    }
    if config.Spec.ClusterConfig.Location != nil {
        m["region"] = *config.Spec.ClusterConfig.Location
    }
    if config.Spec.ClusterConfig.SSHKey != nil {
        m["ssh_key"] = *config.Spec.ClusterConfig.SSHKey
    }
    m["private_cluster"] = config.Spec.ClusterConfig.APIServerAccessProfile.EnablePrivateCluster
    if config.Spec.ClusterConfig.VnetName != "" {
        m["vnet_name"] = config.Spec.ClusterConfig.VnetName
    }
    if config.Spec.ClusterConfig.VnetResourceGroup != "" {
        m["vnet_resource_group"] = config.Spec.ClusterConfig.VnetResourceGroup
    }
    if config.Spec.ClusterConfig.VnetCidrBlock != "" {
        m["vnet_cidr_block"] = config.Spec.ClusterConfig.VnetCidrBlock
    }
    if config.Spec.ClusterConfig.WorkerSubnet != nil {
        m["worker_subnet_name"] = config.Spec.ClusterConfig.WorkerSubnet.Name
        m["worker_cidr"] = config.Spec.ClusterConfig.WorkerSubnet.CidrBlock
    }

    return []interface{}{m}
}

func flattenMachinePoolConfigsAks(machinePools []*models.V1AzureMachinePoolConfig) []interface{} {
if machinePools == nil {
return make([]interface{}, 0)
Expand All @@ -341,6 +427,8 @@ func flattenMachinePoolConfigsAks(machinePools []*models.V1AzureMachinePoolConfi

oi["name"] = machinePool.Name
oi["count"] = int(machinePool.Size)
oi["min"] = int(machinePool.MinSize)
oi["max"] = int(machinePool.MaxSize)
flattenUpdateStrategy(machinePool.UpdateStrategy, oi)

oi["instance_type"] = machinePool.InstanceType
29 changes: 29 additions & 0 deletions spectrocloud/resource_cluster_aks_import.go
@@ -0,0 +1,29 @@
package spectrocloud

import (
    "context"
    "fmt"

    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    "github.com/spectrocloud/palette-sdk-go/client"
)

func resourceClusterAksImport(ctx context.Context, d *schema.ResourceData, m interface{}) ([]*schema.ResourceData, error) {
    // m is the client, which can be used to make API requests to the infrastructure.
    c := m.(*client.V1Client)

    err := GetCommonCluster(d, c)
    if err != nil {
        return nil, err
    }

    diags := resourceClusterAksRead(ctx, d, m)
    if diags.HasError() {
        return nil, fmt.Errorf("could not read cluster for import: %v", diags)
    }

    // Return the resource data. In most cases, this method is only used to
    // import one resource at a time, so you should return the resource data
    // in a slice with a single element.
    return []*schema.ResourceData{d}, nil
}
32 changes: 31 additions & 1 deletion spectrocloud/resource_cluster_edge_native.go
@@ -22,7 +22,10 @@ func resourceClusterEdgeNative() *schema.Resource {
ReadContext: resourceClusterEdgeNativeRead,
UpdateContext: resourceClusterEdgeNativeUpdate,
DeleteContext: resourceClusterDelete,
Description: "Resource for managing Edge Native clusters in Spectro Cloud through Palette.",
Importer: &schema.ResourceImporter{
StateContext: resourceClusterEdgeNativeImport,
},
Description: "Resource for managing Edge Native clusters in Spectro Cloud through Palette.",

Timeouts: &schema.ResourceTimeout{
Create: schema.DefaultTimeout(60 * time.Minute),
@@ -314,9 +317,16 @@ func flattenCloudConfigEdgeNative(configUID string, d *schema.ResourceData, c *c
if err := d.Set("cloud_config_id", configUID); err != nil {
return diag.FromErr(err)
}
if err := ReadCommonAttributes(d); err != nil {
return diag.FromErr(err)
}

if config, err := c.GetCloudConfigEdgeNative(configUID, ClusterContext); err != nil {
return diag.FromErr(err)
} else {
if err := d.Set("cloud_config", flattenClusterConfigsEdgeNative(config)); err != nil {
return diag.FromErr(err)
}
mp := flattenMachinePoolConfigsEdgeNative(config.Spec.MachinePoolConfig)
mp, err := flattenNodeMaintenanceStatus(c, d, c.GetNodeStatusMapEdgeNative, mp, configUID, ClusterContext)
if err != nil {
@@ -330,6 +340,26 @@ func flattenCloudConfigEdgeNative(configUID string, d *schema.ResourceData, c *c
return diag.Diagnostics{}
}

func flattenClusterConfigsEdgeNative(config *models.V1EdgeNativeCloudConfig) []interface{} {
    if config == nil || config.Spec == nil || config.Spec.ClusterConfig == nil {
        return make([]interface{}, 0)
    }

    m := make(map[string]interface{})

    if config.Spec.ClusterConfig.SSHKeys != nil {
        m["ssh_keys"] = config.Spec.ClusterConfig.SSHKeys
    }
    if config.Spec.ClusterConfig.ControlPlaneEndpoint.Host != "" {
        m["vip"] = config.Spec.ClusterConfig.ControlPlaneEndpoint.Host
    }
    if config.Spec.ClusterConfig.NtpServers != nil {
        m["ntp_servers"] = config.Spec.ClusterConfig.NtpServers
    }

    return []interface{}{m}
}

func flattenMachinePoolConfigsEdgeNative(machinePools []*models.V1EdgeNativeMachinePoolConfig) []interface{} {

if machinePools == nil {
71 changes: 71 additions & 0 deletions spectrocloud/resource_cluster_edge_native_import.go
@@ -0,0 +1,71 @@
package spectrocloud

import (
    "context"
    "fmt"
    "strings"

    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    "github.com/spectrocloud/palette-sdk-go/client"
)

func resourceClusterEdgeNativeImport(ctx context.Context, d *schema.ResourceData, m interface{}) ([]*schema.ResourceData, error) {
    // m is the client, which can be used to make API requests to the infrastructure.
    c := m.(*client.V1Client)

    err := GetCommonCluster(d, c)
    if err != nil {
        return nil, err
    }

    diags := resourceClusterEdgeNativeRead(ctx, d, m)
    if diags.HasError() {
        return nil, fmt.Errorf("could not read cluster for import: %v", diags)
    }

    // Return the resource data. In most cases, this method is only used to
    // import one resource at a time, so you should return the resource data
    // in a slice with a single element.
    return []*schema.ResourceData{d}, nil
}

func GetCommonCluster(d *schema.ResourceData, c *client.V1Client) error {
    // d.Id() contains the ID of the resource to import. This ID is provided by the user
    // during the import command and is parsed here to find the existing resource.
    // Example: `terraform import spectrocloud_cluster.my_cluster [id]`

    // The ID is the cluster UID followed by the context/scope (`project` or `tenant`),
    // e.g. "cluster456:project" or "cluster456:tenant". Resolving it may involve API
    // requests to the infrastructure with the client `c`.
    parts := strings.Split(d.Id(), ":")
    // Exactly two parts are expected, and the second must be `project` or `tenant`.
    scope := "invalid"
    clusterID := ""
    if len(parts) == 2 && (parts[1] == "tenant" || parts[1] == "project") {
        clusterID, scope = parts[0], parts[1]
    }
    if scope == "invalid" {
        return fmt.Errorf("invalid cluster ID format specified for import %s", d.Id())
    }

    // Use the parsed IDs to retrieve the cluster data from the API.
    cluster, err := c.GetCluster(scope, clusterID)
    if err != nil {
        return fmt.Errorf("unable to retrieve cluster data: %s", err)
    }

    err = d.Set("name", cluster.Metadata.Name)
    if err != nil {
        return err
    }
    err = d.Set("context", cluster.Metadata.Annotations["scope"])
    if err != nil {
        return err
    }

    // Set the ID of the resource in the state. This ID is used to track the
    // resource and must be set in the state during the import.
    d.SetId(clusterID)
    return nil
}