diff --git a/docs/resources/cluster_gke.md b/docs/resources/cluster_gke.md
new file mode 100644
index 00000000..60671dc1
--- /dev/null
+++ b/docs/resources/cluster_gke.md
@@ -0,0 +1,301 @@
+---
+page_title: "spectrocloud_cluster_gke Resource - terraform-provider-spectrocloud"
+subcategory: ""
+description: |-
+ Resource for managing GKE clusters through Palette.
+---
+
+# spectrocloud_cluster_gke (Resource)
+
+ Resource for managing GKE clusters through Palette.
+
+## Example Usage
+
+
+```terraform
+
+data "spectrocloud_cloudaccount_gcp" "account" {
+ name = var.gcp_cloud_account_name
+}
+
+data "spectrocloud_cluster_profile" "profile" {
+ name = var.gke_cluster_profile_name
+}
+
+
+resource "spectrocloud_cluster_gke" "cluster" {
+ name = var.cluster_name
+ description = "GKE Cluster"
+ tags = ["dev", "department:pax"]
+ cloud_account_id = data.spectrocloud_cloudaccount_gcp.account.id
+ context = "project"
+
+ cluster_profile {
+ id = data.spectrocloud_cluster_profile.profile.id
+ }
+
+ cloud_config {
+ project = var.gcp_project
+ region = var.gcp_region
+ }
+ update_worker_pool_in_parallel = true
+ machine_pool {
+ name = "worker-basic"
+ count = 3
+ instance_type = "n2-standard-4"
+ }
+}
+
+```
+
+## Import
+
+In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import)
+to import the resource spectrocloud_cluster_gke by using its `id` with the Palette `context` separated by a colon. For example:
+
+```terraform
+import {
+ to = spectrocloud_cluster_gke.example
+ id = "example_id:context"
+}
+```
+
+Using `terraform import`, import the cluster by specifying its `id` and `context`, separated by a colon. For example:
+
+```console
+% terraform import spectrocloud_cluster_gke.example example_id:project
+```
+
+Refer to the [Import section](/docs#import) to learn more.
+
+
+## Schema
+
+### Required
+
+- `cloud_account_id` (String)
+- `cloud_config` (Block List, Min: 1, Max: 1) The GKE environment configuration settings such as project parameters and region parameters that apply to this cluster. (see [below for nested schema](#nestedblock--cloud_config))
+- `machine_pool` (Block List, Min: 1) The machine pool configuration for the cluster. (see [below for nested schema](#nestedblock--machine_pool))
+- `name` (String) The name of the cluster.
+
+### Optional
+
+- `apply_setting` (String) The setting to apply the cluster profile. `DownloadAndInstall` will download and install packs in one action. `DownloadAndInstallLater` will only download the artifacts and postpone the installation for later. Default value is `DownloadAndInstall`.
+- `backup_policy` (Block List, Max: 1) The backup policy for the cluster. If not specified, no backups will be taken. (see [below for nested schema](#nestedblock--backup_policy))
+- `cluster_meta_attribute` (String) `cluster_meta_attribute` can be used to set additional cluster metadata information, e.g., `{'nic_name': 'test', 'env': 'stage'}`.
+- `cluster_profile` (Block List) (see [below for nested schema](#nestedblock--cluster_profile))
+- `cluster_rbac_binding` (Block List) The RBAC binding for the cluster. (see [below for nested schema](#nestedblock--cluster_rbac_binding))
+- `context` (String) The context of the GKE cluster. Allowed values are `project` or `tenant`. Default is `project`. If the `project` context is specified, the project name will be sourced from the provider configuration parameter [`project_name`](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs#schema).
+- `description` (String) The description of the cluster. Default value is empty string.
+- `force_delete` (Boolean) If set to `true`, the cluster will be force deleted and the user must manually clean up any provisioned cloud resources.
+- `force_delete_delay` (Number) Delay duration in minutes before invoking cluster force delete. The default and minimum value is 20.
+- `host_config` (Block List) The host configuration for the cluster. (see [below for nested schema](#nestedblock--host_config))
+- `namespaces` (Block List) The namespaces for the cluster. (see [below for nested schema](#nestedblock--namespaces))
+- `os_patch_after` (String) The date and time after which to patch the cluster. Must be in RFC3339 format, for example `2006-01-02T15:04:05Z07:00`.
+- `os_patch_on_boot` (Boolean) Whether to apply OS patch on boot. Default is `false`.
+- `os_patch_schedule` (String) Cron schedule for OS patching. This must be in the form of `0 0 * * *`.
+- `pause_agent_upgrades` (String) The pause agent upgrades setting allows you to control the automatic upgrade of the Palette component and agent for an individual cluster. The default value is `unlock`, meaning upgrades occur automatically. Setting it to `lock` pauses automatic agent upgrades for the cluster.
+- `review_repave_state` (String) To authorize the cluster repave, set the value to `Approved` for approval and `""` to decline. Default value is `""`.
+- `scan_policy` (Block List, Max: 1) The scan policy for the cluster. (see [below for nested schema](#nestedblock--scan_policy))
+- `skip_completion` (Boolean) If `true`, the cluster will be created asynchronously. Default value is `false`.
+- `tags` (Set of String) A list of tags to be applied to the cluster. Tags must be in the form of `key:value`.
+- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
+- `update_worker_pool_in_parallel` (Boolean) If `true`, the worker machine pools are updated in parallel. Default value is `true`.
+
+### Read-Only
+
+- `admin_kube_config` (String) Admin kubeconfig for the cluster. This can be used to connect to the cluster using `kubectl` with admin privileges.
+- `cloud_config_id` (String, Deprecated) ID of the cloud config used for the cluster. This cloud config must be of type `gke`.
+- `id` (String) The ID of this resource.
+- `kubeconfig` (String) Kubeconfig for the cluster. This can be used to connect to the cluster using `kubectl`.
+- `location_config` (List of Object) The location of the cluster. (see [below for nested schema](#nestedatt--location_config))
+
+
+### Nested Schema for `cloud_config`
+
+Required:
+
+- `project` (String) GCP project name.
+- `region` (String)
+
+
+
+### Nested Schema for `machine_pool`
+
+Required:
+
+- `count` (Number) Number of nodes in the machine pool.
+- `instance_type` (String)
+- `name` (String)
+
+Optional:
+
+- `additional_labels` (Map of String)
+- `disk_size_gb` (Number)
+- `node` (Block List) (see [below for nested schema](#nestedblock--machine_pool--node))
+- `taints` (Block List) (see [below for nested schema](#nestedblock--machine_pool--taints))
+- `update_strategy` (String) Update strategy for the machine pool. Valid values are `RollingUpdateScaleOut` and `RollingUpdateScaleIn`.
+
+
+### Nested Schema for `machine_pool.node`
+
+Required:
+
+- `action` (String) The action to perform on the node. Valid values are: `cordon`, `uncordon`.
+- `node_id` (String) The node_id of the node. For example, `i-07f899a33dee624f7`.
+
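+A minimal sketch of a node maintenance action inside a `machine_pool` block; the `node_id` value is a placeholder:
+
+```terraform
+node {
+  node_id = "example-node-id" # placeholder node ID
+  action  = "cordon"
+}
+```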
+
+
+### Nested Schema for `machine_pool.taints`
+
+Required:
+
+- `effect` (String) The effect of the taint. Allowed values are: `NoSchedule`, `PreferNoSchedule` or `NoExecute`.
+- `key` (String) The key of the taint.
+- `value` (String) The value of the taint.
+
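+A minimal sketch of a `machine_pool` block with a taint; the key and value are hypothetical:
+
+```terraform
+machine_pool {
+  name          = "worker-tainted"
+  count         = 1
+  instance_type = "n2-standard-4"
+
+  taints {
+    key    = "dedicated" # hypothetical taint key
+    value  = "backend"   # hypothetical taint value
+    effect = "NoSchedule"
+  }
+}
+```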
+
+
+
+### Nested Schema for `backup_policy`
+
+Required:
+
+- `backup_location_id` (String) The ID of the backup location to use for the backup.
+- `expiry_in_hour` (Number) The number of hours after which the backup will be deleted. For example, if the expiry is set to 24, the backup will be deleted after 24 hours.
+- `prefix` (String) Prefix for the backup name. The full backup name is generated from this prefix.
+- `schedule` (String) The schedule for the backup. The schedule is specified in cron format. For example, to run the backup every day at 1:00 AM, the schedule should be set to `0 1 * * *`.
+
+Optional:
+
+- `cluster_uids` (Set of String) The list of cluster UIDs to include in the backup. If `include_all_clusters` is set to `true`, then all clusters will be included.
+- `include_all_clusters` (Boolean) Whether to include all clusters in the backup. If set to false, only the clusters specified in `cluster_uids` will be included.
+- `include_cluster_resources` (Boolean) Whether to include the cluster resources in the backup. If set to false, only the cluster configuration and disks will be backed up.
+- `include_disks` (Boolean) Whether to include the disks in the backup. If set to false, only the cluster configuration will be backed up.
+- `namespaces` (Set of String) The list of Kubernetes namespaces to include in the backup. If not specified, all namespaces will be included.
+
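+A sketch of a daily backup policy; the backup location UID and prefix are placeholders:
+
+```terraform
+backup_policy {
+  backup_location_id        = "example-backup-location-uid" # placeholder UID
+  prefix                    = "prod-backup"
+  expiry_in_hour            = 7200
+  schedule                  = "0 1 * * *" # daily at 1:00 AM
+  include_disks             = true
+  include_cluster_resources = true
+}
+```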
+
+
+### Nested Schema for `cluster_profile`
+
+Required:
+
+- `id` (String) The ID of the cluster profile.
+
+Optional:
+
+- `pack` (Block List) For packs of type `spectro`, `helm`, and `manifest`, at least one pack must be specified. (see [below for nested schema](#nestedblock--cluster_profile--pack))
+
+
+### Nested Schema for `cluster_profile.pack`
+
+Required:
+
+- `name` (String) The name of the pack. The name must be unique within the cluster profile.
+
+Optional:
+
+- `manifest` (Block List) (see [below for nested schema](#nestedblock--cluster_profile--pack--manifest))
+- `registry_uid` (String) The registry UID of the pack. The registry UID is the unique identifier of the registry. This attribute is required if there is more than one registry that contains a pack with the same name.
+- `tag` (String) The tag of the pack. The tag is the version of the pack. This attribute is required if the pack type is `spectro` or `helm`.
+- `type` (String) The type of the pack. Allowed values are `spectro`, `manifest` or `helm`. The default value is `spectro`.
+- `uid` (String) The unique identifier of the pack. The value can be looked up using the [`spectrocloud_pack`](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs/data-sources/pack) data source. This value is required if the pack type is `spectro`.
+- `values` (String) The values of the pack. The values are the configuration values of the pack. The values are specified in YAML format.
+
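+A sketch of a profile reference with a pack override; the names and values are hypothetical, and `spectro` packs additionally need a `uid`, which can be looked up with the `spectrocloud_pack` data source:
+
+```terraform
+cluster_profile {
+  id = data.spectrocloud_cluster_profile.profile.id
+
+  pack {
+    name   = "example-pack"                           # hypothetical pack name
+    tag    = "1.0.x"                                  # hypothetical version tag
+    uid    = data.spectrocloud_pack.example.id        # hypothetical data source reference
+    values = file("config/example-pack-values.yaml")  # hypothetical values file
+  }
+}
+```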
+
+### Nested Schema for `cluster_profile.pack.manifest`
+
+Required:
+
+- `content` (String) The content of the manifest. The content is the YAML content of the manifest.
+- `name` (String) The name of the manifest. The name must be unique within the pack.
+
+Read-Only:
+
+- `uid` (String)
+
+
+
+
+
+### Nested Schema for `cluster_rbac_binding`
+
+Required:
+
+- `type` (String) The type of the RBAC binding. Can be one of the following values: `RoleBinding` or `ClusterRoleBinding`.
+
+Optional:
+
+- `namespace` (String) The Kubernetes namespace of the RBAC binding. Required if `type` is set to `RoleBinding`.
+- `role` (Map of String) The role of the RBAC binding. Required if `type` is set to `RoleBinding`.
+- `subjects` (Block List) (see [below for nested schema](#nestedblock--cluster_rbac_binding--subjects))
+
+
+### Nested Schema for `cluster_rbac_binding.subjects`
+
+Required:
+
+- `name` (String) The name of the subject. Required if `type` is set to `User` or `Group`.
+- `type` (String) The type of the subject. Can be one of the following values: `User`, `Group`, or `ServiceAccount`.
+
+Optional:
+
+- `namespace` (String) The Kubernetes namespace of the subject. Required if `type` is set to `ServiceAccount`.
+
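+A sketch of a `ClusterRoleBinding`, assuming the `role` map takes `kind` and `name` keys and using a hypothetical group:
+
+```terraform
+cluster_rbac_binding {
+  type = "ClusterRoleBinding"
+
+  role = {
+    kind = "ClusterRole"
+    name = "cluster-admin"
+  }
+
+  subjects {
+    type = "Group"
+    name = "platform-admins" # hypothetical group name
+  }
+}
+```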
+
+
+
+### Nested Schema for `host_config`
+
+Optional:
+
+- `external_traffic_policy` (String) The external traffic policy for the cluster.
+- `host_endpoint_type` (String) The type of endpoint for the cluster. Can be either `Ingress` or `LoadBalancer`. The default is `Ingress`.
+- `ingress_host` (String) The host for the Ingress endpoint. Required if `host_endpoint_type` is set to `Ingress`.
+- `load_balancer_source_ranges` (String) The source ranges for the load balancer. Required if `host_endpoint_type` is set to `LoadBalancer`.
+
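+A minimal sketch of an Ingress-based host configuration; the hostname is a placeholder:
+
+```terraform
+host_config {
+  host_endpoint_type = "Ingress"
+  ingress_host       = "*.dev.example.com" # placeholder ingress host
+}
+```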
+
+
+### Nested Schema for `namespaces`
+
+Required:
+
+- `name` (String) Name of the namespace. This is the name of the Kubernetes namespace in the cluster.
+- `resource_allocation` (Map of String) Resource allocation for the namespace. This is a map containing the resource type and the resource value. For example, `{cpu_cores: '2', memory_MiB: '2048'}`
+
+Optional:
+
+- `images_blacklist` (List of String) List of images to disallow for the namespace. For example, `['nginx:latest', 'redis:latest']`
+
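+A sketch of a namespace quota using the `resource_allocation` keys shown above; the namespace name is hypothetical:
+
+```terraform
+namespaces {
+  name = "team-ns" # hypothetical namespace name
+  resource_allocation = {
+    cpu_cores  = "2"
+    memory_MiB = "2048"
+  }
+}
+```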
+
+
+### Nested Schema for `scan_policy`
+
+Required:
+
+- `configuration_scan_schedule` (String) The schedule for configuration scan.
+- `conformance_scan_schedule` (String) The schedule for conformance scan.
+- `penetration_scan_schedule` (String) The schedule for penetration scan.
+
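+A sketch of a scan policy; the cron schedules are illustrative:
+
+```terraform
+scan_policy {
+  configuration_scan_schedule = "0 11 * * *"
+  penetration_scan_schedule   = "0 11 * * *"
+  conformance_scan_schedule   = "0 0 1 * *"
+}
+```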
+
+
+### Nested Schema for `timeouts`
+
+Optional:
+
+- `create` (String)
+- `delete` (String)
+- `update` (String)
+
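+A sketch that raises the operation timeouts above the 60-minute defaults:
+
+```terraform
+timeouts {
+  create = "90m"
+  delete = "90m"
+}
+```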
+
+
+### Nested Schema for `location_config`
+
+Read-Only:
+
+- `country_code` (String)
+- `country_name` (String)
+- `latitude` (Number)
+- `longitude` (Number)
+- `region_code` (String)
+- `region_name` (String)
\ No newline at end of file
diff --git a/examples/resources/spectrocloud_cluster_gke/providers.tf b/examples/resources/spectrocloud_cluster_gke/providers.tf
new file mode 100644
index 00000000..7bbf584f
--- /dev/null
+++ b/examples/resources/spectrocloud_cluster_gke/providers.tf
@@ -0,0 +1,16 @@
+terraform {
+ required_providers {
+ spectrocloud = {
+ version = ">= 0.1"
+ source = "spectrocloud/spectrocloud"
+ }
+ }
+}
+
+provider "spectrocloud" {
+ host = var.sc_host
+ api_key = var.sc_api_key
+ project_name = var.sc_project_name
+ trace = true
+}
+
diff --git a/examples/resources/spectrocloud_cluster_gke/resource.tf b/examples/resources/spectrocloud_cluster_gke/resource.tf
new file mode 100644
index 00000000..ccec0d1b
--- /dev/null
+++ b/examples/resources/spectrocloud_cluster_gke/resource.tf
@@ -0,0 +1,31 @@
+data "spectrocloud_cloudaccount_gcp" "account" {
+ name = var.gcp_cloud_account_name
+}
+
+data "spectrocloud_cluster_profile" "profile" {
+ name = var.gke_cluster_profile_name
+}
+
+
+resource "spectrocloud_cluster_gke" "cluster" {
+ name = var.cluster_name
+ description = "GKE Cluster"
+ tags = ["dev", "department:pax"]
+ cloud_account_id = data.spectrocloud_cloudaccount_gcp.account.id
+ context = "project"
+
+ cluster_profile {
+ id = data.spectrocloud_cluster_profile.profile.id
+ }
+
+ cloud_config {
+ project = var.gcp_project
+ region = var.gcp_region
+ }
+ update_worker_pool_in_parallel = true
+ machine_pool {
+ name = "worker-basic"
+ count = 3
+ instance_type = "n2-standard-4"
+ }
+}
diff --git a/examples/resources/spectrocloud_cluster_gke/terraform.template.tfvars b/examples/resources/spectrocloud_cluster_gke/terraform.template.tfvars
new file mode 100644
index 00000000..b5e5d7e1
--- /dev/null
+++ b/examples/resources/spectrocloud_cluster_gke/terraform.template.tfvars
@@ -0,0 +1,19 @@
+# Spectro Cloud credentials
+sc_host = "{Enter Spectro Cloud API Host}" #e.g: api.spectrocloud.com (for SaaS)
+sc_api_key = "{Enter Spectro Cloud API Key}"
+sc_project_name = "{Enter Spectro Cloud Project Name}" #e.g: Default
+
+# Palette resource names used by the example
+gcp_cloud_account_name   = "{enter Palette GCP cloud account name}"
+gke_cluster_profile_name = "{enter GKE cluster profile name}"
+cluster_name             = "{enter cluster name}"
+
+# GCP Cluster Placement properties
+gcp_project = "{enter GCP project}"
+gcp_region  = "{enter GCP region}" #e.g: us-west3
diff --git a/examples/resources/spectrocloud_cluster_gke/variables.tf b/examples/resources/spectrocloud_cluster_gke/variables.tf
new file mode 100644
index 00000000..4b5d31ea
--- /dev/null
+++ b/examples/resources/spectrocloud_cluster_gke/variables.tf
@@ -0,0 +1,19 @@
+variable "sc_host" {
+ description = "Spectro Cloud Endpoint"
+ default = "api.spectrocloud.com"
+}
+
+variable "sc_api_key" {
+ description = "Spectro Cloud API key"
+}
+
+variable "sc_project_name" {
+ description = "Spectro Cloud Project (e.g: Default)"
+ default = "Default"
+}
+
+variable "gcp_cloud_account_name" {}
+variable "gke_cluster_profile_name" {}
+variable "gcp_project" {}
+variable "gcp_region" {}
+variable "cluster_name" {}
diff --git a/go.mod b/go.mod
index 4d7cd248..ae57d3c0 100644
--- a/go.mod
+++ b/go.mod
@@ -13,7 +13,7 @@ require (
github.com/robfig/cron v1.2.0
github.com/spectrocloud/gomi v1.14.1-0.20240214074114-c19394812368
github.com/spectrocloud/hapi v1.14.1-0.20240214071352-81f589b1d86d
- github.com/spectrocloud/palette-sdk-go v0.0.0-20240228000639-b48e0cefe460
+ github.com/spectrocloud/palette-sdk-go v0.0.0-20240403123806-17f698cadf11
github.com/stretchr/testify v1.8.4
gotest.tools v2.2.0+incompatible
k8s.io/api v0.23.5
diff --git a/go.sum b/go.sum
index 82c3d4a9..61dc4707 100644
--- a/go.sum
+++ b/go.sum
@@ -735,10 +735,8 @@ github.com/spectrocloud/gomi v1.14.1-0.20240214074114-c19394812368 h1:eY0BOyEbGu
github.com/spectrocloud/gomi v1.14.1-0.20240214074114-c19394812368/go.mod h1:LlZ9We4kDaELYi7Is0SVmnySuDhwphJLS6ZT4wXxFIk=
github.com/spectrocloud/hapi v1.14.1-0.20240214071352-81f589b1d86d h1:OMRbHxMJ1a+G1BYzvUYuMM0wLkYJPdnEOFx16faQ/UY=
github.com/spectrocloud/hapi v1.14.1-0.20240214071352-81f589b1d86d/go.mod h1:MktpRPnSXDTHsQrFSD+daJFQ1zMLSR+1gWOL31jVvWE=
-github.com/spectrocloud/palette-sdk-go v0.0.0-20240219044936-eaefe25f027d h1:Fb6WylMx5PptaZGh8t6T8AQ9ABajDVwFSsJzSadEIJA=
-github.com/spectrocloud/palette-sdk-go v0.0.0-20240219044936-eaefe25f027d/go.mod h1:MvZHrcVf03fcAEcy9Xvp2zWUcLgiAaVQIPSgtfU3pMQ=
-github.com/spectrocloud/palette-sdk-go v0.0.0-20240228000639-b48e0cefe460 h1:zdQlg23MRZeDcn6BV6XQpcyAcUZuY8k5XFgR9yUfy1U=
-github.com/spectrocloud/palette-sdk-go v0.0.0-20240228000639-b48e0cefe460/go.mod h1:MvZHrcVf03fcAEcy9Xvp2zWUcLgiAaVQIPSgtfU3pMQ=
+github.com/spectrocloud/palette-sdk-go v0.0.0-20240403123806-17f698cadf11 h1:b/jhTYB6w2GmvRk4UdVCXAmk5R5jqPD3vRQ9XYBTmeI=
+github.com/spectrocloud/palette-sdk-go v0.0.0-20240403123806-17f698cadf11/go.mod h1:MvZHrcVf03fcAEcy9Xvp2zWUcLgiAaVQIPSgtfU3pMQ=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/afero v1.6.0/go.mod h1:Ai8FlHk4v/PARR026UzYexafAt9roJ7LcLMAmO6Z93I=
diff --git a/spectrocloud/cluster_common_hash.go b/spectrocloud/cluster_common_hash.go
index cb194257..bb3190f4 100644
--- a/spectrocloud/cluster_common_hash.go
+++ b/spectrocloud/cluster_common_hash.go
@@ -178,6 +178,16 @@ func resourceMachinePoolEksHash(v interface{}) int {
return int(hash(buf.String()))
}
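+
+// resourceMachinePoolGkeHash hashes the GKE machine pool attributes that
+// should trigger a pool update: the common pool fields plus the disk size
+// and instance type.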
+func resourceMachinePoolGkeHash(v interface{}) int {
+ m := v.(map[string]interface{})
+ buf := CommonHash(m)
+ if _, ok := m["disk_size_gb"]; ok {
+ buf.WriteString(fmt.Sprintf("%d-", m["disk_size_gb"].(int)))
+ }
+ buf.WriteString(fmt.Sprintf("%s-", m["instance_type"].(string)))
+ return int(hash(buf.String()))
+}
+
func eksLaunchTemplate(v interface{}) string {
var buf bytes.Buffer
if len(v.([]interface{})) > 0 {
diff --git a/spectrocloud/provider.go b/spectrocloud/provider.go
index db5133a3..5402f67c 100644
--- a/spectrocloud/provider.go
+++ b/spectrocloud/provider.go
@@ -96,7 +96,9 @@ func New(_ string) func() *schema.Provider {
"spectrocloud_cluster_aks": resourceClusterAks(),
"spectrocloud_cloudaccount_gcp": resourceCloudAccountGcp(),
- "spectrocloud_cluster_gcp": resourceClusterGcp(),
+
+ "spectrocloud_cluster_gcp": resourceClusterGcp(),
+ "spectrocloud_cluster_gke": resourceClusterGke(),
"spectrocloud_cloudaccount_openstack": resourceCloudAccountOpenstack(),
"spectrocloud_cluster_openstack": resourceClusterOpenStack(),
diff --git a/spectrocloud/resource_cluster_gke.go b/spectrocloud/resource_cluster_gke.go
new file mode 100644
index 00000000..fedffe18
--- /dev/null
+++ b/spectrocloud/resource_cluster_gke.go
@@ -0,0 +1,498 @@
+package spectrocloud
+
+import (
+ "context"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
+ "github.com/spectrocloud/gomi/pkg/ptr"
+ "github.com/spectrocloud/hapi/models"
+ "github.com/spectrocloud/palette-sdk-go/client"
+ "github.com/spectrocloud/terraform-provider-spectrocloud/spectrocloud/schemas"
+ "github.com/spectrocloud/terraform-provider-spectrocloud/types"
+ "log"
+ "time"
+)
+
+func resourceClusterGke() *schema.Resource {
+ return &schema.Resource{
+ CreateContext: resourceClusterGkeCreate,
+ ReadContext: resourceClusterGkeRead,
+ UpdateContext: resourceClusterGkeUpdate,
+ DeleteContext: resourceClusterDelete,
+ Importer: &schema.ResourceImporter{
+ StateContext: resourceClusterGkeImport,
+ },
+ Description: "Resource for managing GKE clusters through Palette.",
+
+ Timeouts: &schema.ResourceTimeout{
+ Create: schema.DefaultTimeout(60 * time.Minute),
+ Update: schema.DefaultTimeout(60 * time.Minute),
+ Delete: schema.DefaultTimeout(60 * time.Minute),
+ },
+
+ SchemaVersion: 1,
+ Schema: map[string]*schema.Schema{
+ "name": {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ Description: "The name of the cluster.",
+ },
+ "context": {
+ Type: schema.TypeString,
+ Optional: true,
+ Default: "project",
+ ValidateFunc: validation.StringInSlice([]string{"", "project", "tenant"}, false),
+ Description: "The context of the GKE cluster. Allowed values are `project` or `tenant`. " +
+ "Default is `project`. " + PROJECT_NAME_NUANCE,
+ },
+ "description": {
+ Type: schema.TypeString,
+ Optional: true,
+ Default: "",
+ Description: "The description of the cluster. Default value is empty string.",
+ },
+ "tags": {
+ Type: schema.TypeSet,
+ Optional: true,
+ Set: schema.HashString,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ Description: "A list of tags to be applied to the cluster. Tags must be in the form of `key:value`.",
+ },
+ "cluster_meta_attribute": {
+ Type: schema.TypeString,
+ Optional: true,
+ Description: "`cluster_meta_attribute` can be used to set additional cluster metadata information, e.g., `{'nic_name': 'test', 'env': 'stage'}`",
+ },
+ "cluster_profile": schemas.ClusterProfileSchema(),
+ "apply_setting": {
+ Type: schema.TypeString,
+ Optional: true,
+ Default: "DownloadAndInstall",
+ ValidateFunc: validation.StringInSlice([]string{"DownloadAndInstall", "DownloadAndInstallLater"}, false),
+ Description: "The setting to apply the cluster profile. `DownloadAndInstall` will download and install packs in one action. " +
+ "`DownloadAndInstallLater` will only download the artifacts and postpone the installation for later. " +
+ "Default value is `DownloadAndInstall`.",
+ },
+ "cloud_account_id": {
+ Type: schema.TypeString,
+ Required: true,
+ ForceNew: true,
+ },
+ "cloud_config_id": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "ID of the cloud config used for the cluster. This cloud config must be of type `gke`.",
+ Deprecated: "This field is deprecated and will be removed in the future. Use `cloud_config` instead.",
+ },
+
+ "cloud_config": {
+ Type: schema.TypeList,
+ ForceNew: true,
+ Required: true,
+ MaxItems: 1,
+ Description: "The GKE environment configuration settings such as project parameters and region parameters that apply to this cluster.",
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "project": {
+ Type: schema.TypeString,
+ ForceNew: true,
+ Required: true,
+ Description: "GCP project name.",
+ },
+ "region": {
+ Type: schema.TypeString,
+ ForceNew: true,
+ Required: true,
+ },
+ },
+ },
+ },
+ "update_worker_pool_in_parallel": {
+ Type: schema.TypeBool,
+ Optional: true,
+ Default: true,
+ Description: "If `true`, the worker machine pools are updated in parallel. Default value is `true`.",
+ },
+ "machine_pool": {
+ Type: schema.TypeList,
+ Required: true,
+ Description: "The machine pool configuration for the cluster.",
+ Elem: &schema.Resource{
+ Schema: map[string]*schema.Schema{
+ "name": {
+ Type: schema.TypeString,
+ Required: true,
+ },
+ "count": {
+ Type: schema.TypeInt,
+ Required: true,
+ Description: "Number of nodes in the machine pool.",
+ },
+ "disk_size_gb": {
+ Type: schema.TypeInt,
+ Optional: true,
+ Default: 60,
+ },
+ "additional_labels": {
+ Type: schema.TypeMap,
+ Optional: true,
+ Elem: &schema.Schema{
+ Type: schema.TypeString,
+ },
+ },
+ "instance_type": {
+ Type: schema.TypeString,
+ Required: true,
+ },
+ "update_strategy": {
+ Type: schema.TypeString,
+ Optional: true,
+ Default: "RollingUpdateScaleOut",
+ Description: "Update strategy for the machine pool. Valid values are `RollingUpdateScaleOut` and `RollingUpdateScaleIn`.",
+ },
+ "node": schemas.NodeSchema(),
+ "taints": schemas.ClusterTaintsSchema(),
+ },
+ },
+ },
+ "pause_agent_upgrades": {
+ Type: schema.TypeString,
+ Optional: true,
+ Default: "unlock",
+ ValidateFunc: validation.StringInSlice([]string{"lock", "unlock"}, false),
+ Description: "The pause agent upgrades setting allows you to control the automatic upgrade of the Palette component and agent for an individual cluster. The default value is `unlock`, meaning upgrades occur automatically. Setting it to `lock` pauses automatic agent upgrades for the cluster.",
+ },
+ "review_repave_state": {
+ Type: schema.TypeString,
+ Default: "",
+ Optional: true,
+ ValidateFunc: validation.StringInSlice([]string{"", "Approved", "Pending"}, false),
+ Description: "To authorize the cluster repave, set the value to `Approved` for approval and `\"\"` to decline. Default value is `\"\"`.",
+ },
+ "os_patch_on_boot": {
+ Type: schema.TypeBool,
+ Optional: true,
+ Default: false,
+ Description: "Whether to apply OS patch on boot. Default is `false`.",
+ },
+ "os_patch_schedule": {
+ Type: schema.TypeString,
+ Optional: true,
+ ValidateDiagFunc: validateOsPatchSchedule,
+ Description: "Cron schedule for OS patching. This must be in the form of `0 0 * * *`.",
+ },
+ "os_patch_after": {
+ Type: schema.TypeString,
+ Optional: true,
+ ValidateDiagFunc: validateOsPatchOnDemandAfter,
+ Description: "The date and time after which to patch the cluster. Must be in RFC3339 format, for example `2006-01-02T15:04:05Z07:00`.",
+ },
+ "kubeconfig": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Kubeconfig for the cluster. This can be used to connect to the cluster using `kubectl`.",
+ },
+ "admin_kube_config": {
+ Type: schema.TypeString,
+ Computed: true,
+ Description: "Admin kubeconfig for the cluster. This can be used to connect to the cluster using `kubectl` with admin privileges.",
+ },
+ "backup_policy": schemas.BackupPolicySchema(),
+ "scan_policy": schemas.ScanPolicySchema(),
+ "cluster_rbac_binding": schemas.ClusterRbacBindingSchema(),
+ "namespaces": schemas.ClusterNamespacesSchema(),
+ "host_config": schemas.ClusterHostConfigSchema(),
+ "location_config": schemas.ClusterLocationSchemaComputed(),
+ "skip_completion": {
+ Type: schema.TypeBool,
+ Optional: true,
+ Default: false,
+ Description: "If `true`, the cluster will be created asynchronously. Default value is `false`.",
+ },
+ "force_delete": {
+ Type: schema.TypeBool,
+ Optional: true,
+ Default: false,
+ Description: "If set to `true`, the cluster will be force deleted and the user must manually clean up any provisioned cloud resources.",
+ },
+ "force_delete_delay": {
+ Type: schema.TypeInt,
+ Optional: true,
+ Default: 20,
+ Description: "Delay duration in minutes before invoking cluster force delete. The default and minimum value is 20.",
+ ValidateDiagFunc: validation.ToDiagFunc(validation.IntAtLeast(20)),
+ },
+ },
+ }
+}
+
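+// resourceClusterGkeCreate builds the GKE cluster payload, submits the
+// create request to Palette, and waits for the cluster to be provisioned
+// before reading the state back.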
+func resourceClusterGkeCreate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
+ c := m.(*client.V1Client)
+
+ // Warning or errors can be collected in a slice type
+ var diags diag.Diagnostics
+ cluster, err := toGkeCluster(c, d)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ ClusterContext := d.Get("context").(string)
+ uid, err := c.CreateClusterGke(cluster, ClusterContext)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ diagnostics, isError := waitForClusterCreation(ctx, d, ClusterContext, uid, diags, c, true)
+ if isError {
+ return diagnostics
+ }
+
+ resourceClusterGkeRead(ctx, d, m)
+ return diags
+}
+
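+// resourceClusterGkeRead refreshes the Terraform state from Palette. If the
+// cluster no longer exists, the ID is cleared so Terraform plans a recreate.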
+func resourceClusterGkeRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
+ c := m.(*client.V1Client)
+
+ var diags diag.Diagnostics
+ cluster, err := resourceClusterRead(d, c, diags)
+ if err != nil {
+ return diag.FromErr(err)
+ } else if cluster == nil {
+ // Deleted - Terraform will recreate it
+ d.SetId("")
+ return diags
+ }
+
+ configUID := cluster.Spec.CloudConfigRef.UID
+ if err := d.Set("cloud_config_id", configUID); err != nil {
+ return diag.FromErr(err)
+ }
+
+ diagnostics, done := readCommonFields(c, d, cluster)
+ if done {
+ return diagnostics
+ }
+
+ return flattenCloudConfigGke(cluster.Spec.CloudConfigRef.UID, d, c)
+}
+
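+// resourceClusterGkeUpdate reconciles machine pools by name: pools new in
+// the plan are created, pools whose hash changed are updated (applying any
+// node maintenance actions), and pools absent from the plan are deleted.
+// Common cluster fields are updated afterwards.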
+func resourceClusterGkeUpdate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
+ c := m.(*client.V1Client)
+
+ var diags diag.Diagnostics
+ err := validateSystemRepaveApproval(d, c)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ cloudConfigId := d.Get("cloud_config_id").(string)
+ ClusterContext := d.Get("context").(string)
+ CloudConfig, err := c.GetCloudConfigGke(cloudConfigId, ClusterContext)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ if d.HasChange("machine_pool") {
+ oraw, nraw := d.GetChange("machine_pool")
+ if oraw == nil {
+ oraw = new(schema.Set)
+ }
+ if nraw == nil {
+ nraw = new(schema.Set)
+ }
+
+ os := oraw.([]interface{})
+ ns := nraw.([]interface{})
+
+ osMap := make(map[string]interface{})
+ for _, mp := range os {
+ machinePool := mp.(map[string]interface{})
+ osMap[machinePool["name"].(string)] = machinePool
+ }
+ nsMap := make(map[string]interface{})
+ for _, mp := range ns {
+ machinePoolResource := mp.(map[string]interface{})
+ nsMap[machinePoolResource["name"].(string)] = machinePoolResource
+ // since known issue in TF SDK: https://github.com/hashicorp/terraform-plugin-sdk/issues/588
+ if machinePoolResource["name"].(string) != "" {
+ name := machinePoolResource["name"].(string)
+ hash := resourceMachinePoolGkeHash(machinePoolResource)
+ var err error
+
+ machinePool, err := toMachinePoolGke(machinePoolResource)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ if oldMachinePool, ok := osMap[name]; !ok {
+ log.Printf("Create machine pool %s", name)
+ err = c.CreateMachinePoolGke(cloudConfigId, ClusterContext, machinePool)
+ } else if hash != resourceMachinePoolGkeHash(oldMachinePool) {
+ log.Printf("Change in machine pool %s", name)
+ err = c.UpdateMachinePoolGke(cloudConfigId, ClusterContext, machinePool)
+ // Node Maintenance Actions
+ err := resourceNodeAction(c, ctx, nsMap[name], c.GetNodeMaintenanceStatusGke, CloudConfig.Kind, ClusterContext, cloudConfigId, name)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+ }
+
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ // Processed (if exists)
+ delete(osMap, name)
+ }
+ }
+
+ // Deleted old machine pools
+ for _, mp := range osMap {
+ machinePool := mp.(map[string]interface{})
+ name := machinePool["name"].(string)
+ log.Printf("Deleted machine pool %s", name)
+ if err := c.DeleteMachinePoolGke(cloudConfigId, name, ClusterContext); err != nil {
+ return diag.FromErr(err)
+ }
+ }
+ }
+ diagnostics, done := updateCommonFields(d, c)
+ if done {
+ return diagnostics
+ }
+
+ resourceClusterGkeRead(ctx, d, m)
+
+ return diags
+}
+
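+// flattenCloudConfigGke fetches the GKE cloud config from Palette and writes
+// cloud_account_id, cloud_config, and machine_pool (including the node
+// maintenance status) back into the Terraform state.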
+func flattenCloudConfigGke(configUID string, d *schema.ResourceData, c *client.V1Client) diag.Diagnostics {
+ ClusterContext := d.Get("context").(string)
+ if err := d.Set("cloud_config_id", configUID); err != nil {
+ return diag.FromErr(err)
+ }
+ if config, err := c.GetCloudConfigGke(configUID, ClusterContext); err != nil {
+ return diag.FromErr(err)
+ } else {
+ if err := d.Set("cloud_account_id", config.Spec.CloudAccountRef.UID); err != nil {
+ return diag.FromErr(err)
+ }
+ if err := d.Set("cloud_config", flattenClusterConfigsGke(config)); err != nil {
+ return diag.FromErr(err)
+ }
+ mp := flattenMachinePoolConfigsGke(config.Spec.MachinePoolConfig)
+ mp, err := flattenNodeMaintenanceStatus(c, d, c.GetNodeStatusMapGke, mp, configUID, ClusterContext)
+ if err != nil {
+ return diag.FromErr(err)
+ }
+
+ if err := d.Set("machine_pool", mp); err != nil {
+ return diag.FromErr(err)
+ }
+ }
+
+ return diag.Diagnostics{}
+}
+
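+// flattenClusterConfigsGke maps the project and region from the GCP cloud
+// config spec into the single-element cloud_config list.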
+func flattenClusterConfigsGke(config *models.V1GcpCloudConfig) []interface{} {
+ if config == nil || config.Spec == nil || config.Spec.ClusterConfig == nil {
+ return make([]interface{}, 0)
+ }
+ m := make(map[string]interface{})
+
+ if config.Spec.ClusterConfig.Project != nil {
+ m["project"] = config.Spec.ClusterConfig.Project
+ }
+ if ptr.String(config.Spec.ClusterConfig.Region) != "" {
+ m["region"] = ptr.String(config.Spec.ClusterConfig.Region)
+ }
+ return []interface{}{m}
+}
+
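+// flattenMachinePoolConfigsGke converts the API machine pool configs back
+// into the Terraform machine_pool block representation.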
+func flattenMachinePoolConfigsGke(machinePools []*models.V1GcpMachinePoolConfig) []interface{} {
+
+ if machinePools == nil {
+ return make([]interface{}, 0)
+ }
+
+ ois := make([]interface{}, len(machinePools))
+
+ for i, machinePool := range machinePools {
+ oi := make(map[string]interface{})
+
+ FlattenAdditionalLabelsAndTaints(machinePool.AdditionalLabels, machinePool.Taints, oi)
+ oi["name"] = machinePool.Name
+ oi["count"] = int(machinePool.Size)
+ flattenUpdateStrategy(machinePool.UpdateStrategy, oi)
+
+ oi["instance_type"] = *machinePool.InstanceType
+
+ oi["disk_size_gb"] = int(machinePool.RootDeviceSize)
+ ois[i] = oi
+ }
+
+ return ois
+}
+
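+// toGkeCluster builds the V1SpectroGcpClusterEntity create payload from the
+// resource data. GKE reuses the GCP cloud config models, with the managed
+// cluster location derived from the region.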
+func toGkeCluster(c *client.V1Client, d *schema.ResourceData) (*models.V1SpectroGcpClusterEntity, error) {
+ cloudConfig := d.Get("cloud_config").([]interface{})[0].(map[string]interface{})
+
+ clusterContext := d.Get("context").(string)
+ profiles, err := toProfiles(c, d, clusterContext)
+ if err != nil {
+ return nil, err
+ }
+ cluster := &models.V1SpectroGcpClusterEntity{
+ Metadata: getClusterMetadata(d),
+ Spec: &models.V1SpectroGcpClusterEntitySpec{
+ CloudAccountUID: types.Ptr(d.Get("cloud_account_id").(string)),
+ Profiles: profiles,
+ Policies: toPolicies(d),
+ CloudConfig: &models.V1GcpClusterConfig{
+ Project: types.Ptr(cloudConfig["project"].(string)),
+ Region: types.Ptr(cloudConfig["region"].(string)),
+ ManagedClusterConfig: &models.V1GcpManagedClusterConfig{
+ Location: cloudConfig["region"].(string),
+ },
+ },
+ },
+ }
+
+ machinePoolConfigs := make([]*models.V1GcpMachinePoolConfigEntity, 0)
+ for _, machinePool := range d.Get("machine_pool").([]interface{}) {
+ mp, err := toMachinePoolGke(machinePool)
+ if err != nil {
+ return nil, err
+ }
+ machinePoolConfigs = append(machinePoolConfigs, mp)
+ }
+ cluster.Spec.Machinepoolconfig = machinePoolConfigs
+ cluster.Spec.ClusterConfig = toClusterConfig(d)
+ return cluster, err
+}
+
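+// toMachinePoolGke converts a machine_pool block into a GCP machine pool
+// entity. GKE has no user-managed control plane pools, so every pool is
+// labeled "worker".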
+func toMachinePoolGke(machinePool interface{}) (*models.V1GcpMachinePoolConfigEntity, error) {
+ m := machinePool.(map[string]interface{})
+
+ mp := &models.V1GcpMachinePoolConfigEntity{
+ CloudConfig: &models.V1GcpMachinePoolCloudConfigEntity{
+ InstanceType: types.Ptr(m["instance_type"].(string)),
+ RootDeviceSize: int64(m["disk_size_gb"].(int)),
+ },
+ PoolConfig: &models.V1MachinePoolConfigEntity{
+ AdditionalLabels: toAdditionalNodePoolLabels(m),
+ Taints: toClusterTaints(m),
+ Name: types.Ptr(m["name"].(string)),
+ Size: types.Ptr(int32(m["count"].(int))),
+ UpdateStrategy: &models.V1UpdateStrategy{
+ Type: getUpdateStrategy(m),
+ },
+ },
+ }
+ if !mp.PoolConfig.IsControlPlane {
+ mp.PoolConfig.Labels = []string{"worker"}
+ }
+ return mp, nil
+}
diff --git a/spectrocloud/resource_cluster_gke_import.go b/spectrocloud/resource_cluster_gke_import.go
new file mode 100644
index 00000000..d470c971
--- /dev/null
+++ b/spectrocloud/resource_cluster_gke_import.go
@@ -0,0 +1,28 @@
+package spectrocloud
+
+import (
+ "context"
+ "fmt"
+
+ "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
+ "github.com/spectrocloud/palette-sdk-go/client"
+)
+
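+// resourceClusterGkeImport loads the common cluster state for the imported
+// "id:context" pair and then runs a full read to populate the remaining
+// attributes.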
+func resourceClusterGkeImport(ctx context.Context, d *schema.ResourceData, m interface{}) ([]*schema.ResourceData, error) {
+ c := m.(*client.V1Client)
+
+ err := GetCommonCluster(d, c)
+ if err != nil {
+ return nil, err
+ }
+
+ diags := resourceClusterGkeRead(ctx, d, m)
+ if diags.HasError() {
+ return nil, fmt.Errorf("could not read cluster for import: %v", diags)
+ }
+
+ // Return the resource data. In most cases, this method is only used to
+ // import one resource at a time, so you should return the resource data
+ // in a slice with a single element.
+ return []*schema.ResourceData{d}, nil
+}
diff --git a/spectrocloud/resource_cluster_gke_test.go b/spectrocloud/resource_cluster_gke_test.go
new file mode 100644
index 00000000..a06bb22d
--- /dev/null
+++ b/spectrocloud/resource_cluster_gke_test.go
@@ -0,0 +1,113 @@
+package spectrocloud
+
+import (
+ "github.com/spectrocloud/hapi/models"
+ "github.com/spectrocloud/terraform-provider-spectrocloud/types"
+ "github.com/stretchr/testify/assert"
+ "testing"
+)
+
+func TestToMachinePoolGke(t *testing.T) {
+ // Simulate input data
+ machinePool := map[string]interface{}{
+ "name": "pool1",
+ "count": 3,
+ "instance_type": "n1-standard-2",
+ "disk_size_gb": 100,
+ }
+ mp, err := toMachinePoolGke(machinePool)
+
+ // Assertions
+ assert.NoError(t, err)
+ assert.NotNil(t, mp)
+
+ // Check the CloudConfig fields
+ assert.NotNil(t, mp.CloudConfig)
+ assert.Equal(t, "n1-standard-2", *mp.CloudConfig.InstanceType)
+ assert.Equal(t, int64(100), mp.CloudConfig.RootDeviceSize)
+
+ // Check the PoolConfig fields
+ assert.NotNil(t, mp.PoolConfig)
+ assert.Equal(t, "pool1", *mp.PoolConfig.Name)
+ assert.Equal(t, int32(3), *mp.PoolConfig.Size)
+ assert.Equal(t, []string{"worker"}, mp.PoolConfig.Labels)
+}
+
+func TestToGkeCluster(t *testing.T) {
+ // Simulate input data
+ cloudConfig := map[string]interface{}{
+ "project": "my-project",
+ "region": "us-central1",
+ }
+ machinePool := map[string]interface{}{
+ "name": "pool1",
+ "count": 3,
+ "instance_type": "n1-standard-2",
+ "disk_size_gb": 100,
+ }
+ d := resourceClusterGke().TestResourceData()
+ d.Set("cloud_config", []interface{}{cloudConfig})
+ d.Set("context", "project")
+ d.Set("cloud_account_id", "cloud-account-id")
+ d.Set("machine_pool", []interface{}{machinePool})
+
+ // Call the toGkeCluster function with the simulated input data
+ cluster, err := toGkeCluster(nil, d)
+
+ // Assertions
+ assert.NoError(t, err)
+ assert.NotNil(t, cluster)
+
+ // Check the Metadata
+ assert.NotNil(t, cluster.Metadata)
+ // Check other fields similarly
+ assert.NotNil(t, cluster.Spec.CloudConfig)
+ assert.Equal(t, "my-project", *cluster.Spec.CloudConfig.Project)
+ assert.Equal(t, "us-central1", *cluster.Spec.CloudConfig.Region)
+
+ // Check machine pool configuration
+ assert.Len(t, cluster.Spec.Machinepoolconfig, 1)
+ assert.Equal(t, "pool1", *cluster.Spec.Machinepoolconfig[0].PoolConfig.Name)
+ assert.Equal(t, int32(3), *cluster.Spec.Machinepoolconfig[0].PoolConfig.Size)
+ assert.Equal(t, "n1-standard-2", *cluster.Spec.Machinepoolconfig[0].CloudConfig.InstanceType)
+ assert.Equal(t, int64(100), cluster.Spec.Machinepoolconfig[0].CloudConfig.RootDeviceSize)
+}
+
+func TestFlattenMachinePoolConfigsGke(t *testing.T) {
+ // Simulate input data
+ machinePools := []*models.V1GcpMachinePoolConfig{
+ {
+ InstanceType: types.Ptr("n1-standard-2"),
+ Name: "pool1",
+ RootDeviceSize: 100,
+ Size: 3,
+ },
+ {
+ InstanceType: types.Ptr("n1-standard-4"),
+ Name: "pool2",
+ Size: 2,
+ RootDeviceSize: 200,
+ },
+ }
+
+ // Call the flattenMachinePoolConfigsGke function with the simulated input data
+ result := flattenMachinePoolConfigsGke(machinePools)
+
+ // Assertions
+ assert.NotNil(t, result)
+ assert.Len(t, result, 2)
+
+ // Check the first machine pool
+ pool1 := result[0].(map[string]interface{})
+ assert.Equal(t, "pool1", pool1["name"])
+ assert.Equal(t, 3, pool1["count"])
+ assert.Equal(t, "n1-standard-2", pool1["instance_type"])
+ assert.Equal(t, 100, pool1["disk_size_gb"])
+
+ // Check the second machine pool
+ pool2 := result[1].(map[string]interface{})
+ assert.Equal(t, "pool2", pool2["name"])
+ assert.Equal(t, 2, pool2["count"])
+ assert.Equal(t, "n1-standard-4", pool2["instance_type"])
+ assert.Equal(t, 200, pool2["disk_size_gb"])
+}
diff --git a/templates/resources/cluster_gke.md.tmpl b/templates/resources/cluster_gke.md.tmpl
new file mode 100644
index 00000000..1cec0cb6
--- /dev/null
+++ b/templates/resources/cluster_gke.md.tmpl
@@ -0,0 +1,71 @@
+---
+page_title: "{{.Name}} {{.Type}} - {{.ProviderName}}"
+subcategory: ""
+description: |-
+{{ .Description | plainmarkdown | trimspace | prefixlines " " }}
+---
+
+# {{.Name}} ({{.Type}})
+
+{{ .Description | plainmarkdown | trimspace | prefixlines " " }}
+
+## Example Usage
+
+
+```terraform
+
+data "spectrocloud_cloudaccount_gcp" "account" {
+ name = var.gcp_cloud_account_name
+}
+
+data "spectrocloud_cluster_profile" "profile" {
+ name = var.gke_cluster_profile_name
+}
+
+
+resource "spectrocloud_cluster_gke" "cluster" {
+ name = var.cluster_name
+ description = "GKE Cluster"
+ tags = ["dev", "department:pax"]
+ cloud_account_id = data.spectrocloud_cloudaccount_gcp.account.id
+ context = "project"
+
+ cluster_profile {
+ id = data.spectrocloud_cluster_profile.profile.id
+ }
+
+ cloud_config {
+ project = var.gcp_project
+ region = var.gcp_region
+ }
+ update_worker_pool_in_parallel = true
+ machine_pool {
+ name = "worker-basic"
+ count = 3
+ instance_type = "n2-standard-4"
+ }
+}
+
+```
+
+## Import
+
+In Terraform v1.5.0 and later, use an [`import` block](https://developer.hashicorp.com/terraform/language/import)
+to import the resource {{ .Name }} by using its `id` with the Palette `context` separated by a colon. For example:
+
+```terraform
+import {
+ to = {{ .Name }}.example
+ id = "example_id:context"
+}
+```
+
+Using `terraform import`, import the cluster by specifying its `id` and `context`, separated by a colon. For example:
+
+```console
+% terraform import {{ .Name }}.example example_id:project
+```
+
+Refer to the [Import section](/docs#import) to learn more.
+
+{{ .SchemaMarkdown | trimspace }}
\ No newline at end of file