tfp-automation is a framework designed to test various Rancher2 Terraform provider resources with Terratest + Go. While it is not meant to provide 1:1 parity with the existing test cases in rancher/rancher, the overall structure of the tests is kept consistent with them. This is to ensure that adoption of the framework is as seamless as possible.
When testing locally, it is required to set RANCHER2_PROVIDER_VERSION, as type string, formatted without a leading v (e.g. 3.1.1).
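As an illustration only (not part of the framework), a minimal Go check such as the one below could be used to confirm the variable is set and formatted as expected; the function name and error messages are hypothetical.

package validate

import (
	"fmt"
	"os"
	"strings"
)

// checkProviderVersion is a hypothetical helper that verifies
// RANCHER2_PROVIDER_VERSION is set as a plain string without a leading "v",
// e.g. "3.1.1" rather than "v3.1.1".
func checkProviderVersion() (string, error) {
	version := os.Getenv("RANCHER2_PROVIDER_VERSION")
	if version == "" {
		return "", fmt.Errorf("RANCHER2_PROVIDER_VERSION must be set when testing locally")
	}
	if strings.HasPrefix(version, "v") {
		return "", fmt.Errorf("RANCHER2_PROVIDER_VERSION must not include a leading v, got %q", version)
	}
	return version, nil
}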
rancher:
# define rancher specific configs here
terraform:
# define module specific configs here
terratest:
# define test specific configs here
Back to top
The rancher configurations in the cattle-config.yaml will remain consistent across all modules and tests.
rancher:
host: url-to-rancher-server.com
adminToken: token-XXXXX:XXXXXXXXXXXXXXX
insecure: true
cleanup: true
Back to top
The terraform configurations in the cattle-config.yaml are module specific; fields to configure vary per module. The following generic fields apply regardless of module:
terraform:
etcd: # This is an optional block.
disableSnapshot: false
snapshotCron: "0 */5 * * *"
snapshotRetention: 6
s3:
bucket: ""
cloudCredentialName: ""
endpoint: ""
endpointCA: ""
folder: ""
region: ""
skipSSLVerify: true
etcdRKE1: # This is an optional block
backupConfig:
enabled: true
intervalHours: 12
safeTimestamp: true
timeout: 120
s3BackupConfig:
accessKey: ""
bucketName: ""
endpoint: ""
folder: ""
region: ""
secretKey: ""
retention: "72h"
snapshot: false
cloudCredentialName: ""
defaultClusterRoleForProjectMembers: "true" # Can be "true" or "false"
enableNetworkPolicy: false # Can be true or false
hostnamePrefix: ""
machineConfigName: "" # RKE2/K3S specific
networkPlugin: "" # RKE1 specific
nodeTemplateName: "" # RKE1 specific
privateRegistries: # This is an optional block. You must already have a private registry stood up
engineInsecureRegistry: "" # RKE1 specific
url: ""
systemDefaultRegistry: "" # RKE2/K3S specific
username: "" # RKE1 specific
password: "" # RKE1 specific
insecure: true
authConfigSecretName: "" # RKE2/K3S specific. Secret must be created in the fleet-default namespace already
Note: At this time, private registries for RKE2/K3s MUST be used with provider version 3.1.1. This is due to issue rancher/terraform-provider-rancher2#1305.
Back to top
terraform:
module: aks
cloudCredentialName: tf-aks
azureCredentials:
clientId: ""
clientSecret: ""
environment: "AzurePublicCloud"
subscriptionId: ""
tenantId: ""
azureConfig:
availabilityZones:
- '1'
- '2'
- '3'
image: ""
location: ""
managedDisks: false
mode: "System"
name: "agentpool"
networkDNSServiceIP: ""
networkDockerBridgeCIDR: ""
networkServiceCIDR: ""
noPublicIp: false
osDiskSizeGB: 128
outboundType: "loadBalancer"
resourceGroup: ""
resourceLocation: ""
subnet: ""
taints: ["none:PreferNoSchedule"]
vmSize: Standard_DS2_v2
vnet: ""
Back to top
terraform:
module: eks
cloudCredentialName: tf-eks
hostnamePrefix: tfp
awsCredentials:
awsAccessKey: ""
awsSecretKey: ""
awsConfig:
awsInstanceType: t3.medium
region: us-east-2
awsSubnets:
- ""
- ""
awsSecurityGroups:
- ""
publicAccess: true
privateAccess: true
Back to top
terraform:
module: gke
cloudCredentialName: tf-creds-gke
hostnamePrefix: tfp
googleCredentials:
authEncodedJson: |-
{
"type": "service_account",
"project_id": "",
"private_key_id": "",
"private_key": "",
"client_email": "",
"client_id": "",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": ""
}
googleConfig:
region: us-central1-c
projectID: ""
network: default
subnetwork: default
Back to top
terraform:
module: azure_rke1
networkPlugin: canal
nodeTemplateName: tf-rke1-template
hostnamePrefix: tfp
azureCredentials:
clientId: ""
clientSecret: ""
environment: "AzurePublicCloud"
subscriptionId: ""
tenantId: ""
azureConfig:
availabilitySet: "docker-machine"
subscriptionId: ""
customData: ""
diskSize: "100"
dockerPort: "2376"
faultDomainCount: "3"
image: "Canonical:0001-com-ubuntu-server-jammy:22_04-lts:latest"
location: "westus2"
managedDisks: false
noPublicIp: false
openPort: ["6443/tcp","2379/tcp","2380/tcp","8472/udp","4789/udp","9796/tcp","10256/tcp","10250/tcp","10251/tcp","10252/tcp"]
privateIpAddress: ""
resourceGroup: ""
size: "Standard_D2_v2"
sshUser: "azureuser"
staticPublicIp: false
storageType: "Standard_LRS"
updateDomainCount: "5"
Back to top
terraform:
module: ec2_rke1
networkPlugin: canal
nodeTemplateName: tf-rke1-template
hostnamePrefix: tfp
awsCredentials:
awsAccessKey: ""
awsSecretKey: ""
awsConfig:
ami:
awsInstanceType: t3.medium
region: us-east-2
awsSecurityGroupNames:
- security-group-name
awsSubnetID: subnet-xxxxxxxx
awsVpcID: vpc-xxxxxxxx
awsZoneLetter: a
awsRootSize: 80
Back to top
terraform:
module: linode_rke1
networkPlugin: canal
nodeTemplateName: tf-rke1-template
hostnamePrefix: tfp
linodeCredentials:
linodeToken: ""
linodeConfig:
region: us-east
linodeRootPass: ""
Back to top
terraform:
module: vsphere_rke1
networkPlugin: canal
nodeTemplateName: tf-rke1-template
hostnamePrefix: tfp
vsphereCredentials:
password: ""
username: ""
vcenter: ""
vcenterPort: "443"
vsphereConfig:
cfgparam: ["disk.enableUUID=TRUE"]
cloneFrom: ""
cloudConfig: ""
contentLibrary: ""
cpuCount: "4"
creationType: "template"
datacenter: ""
datastore: ""
datastoreCluster: ""
diskSize: "40000"
folder: ""
hostsystem: ""
memorySize: "8192"
network: [""]
pool: ""
sshPassword: "tcuser"
sshPort: "22"
sshUser: "docker"
sshUserGroup: "staff"
Back to top
terraform:
module: azure_k3s
networkPlugin: canal
nodeTemplateName: tf-rke1-template
hostnamePrefix: tfp
azureCredentials:
clientId: ""
clientSecret: ""
environment: "AzurePublicCloud"
subscriptionId: ""
tenantId: ""
azureConfig:
availabilitySet: "docker-machine"
customData: ""
diskSize: "100"
dockerPort: "2376"
faultDomainCount: "3"
image: "Canonical:0001-com-ubuntu-server-jammy:22_04-lts:latest"
location: "westus2"
managedDisks: false
noPublicIp: false
openPort: ["6443/tcp","2379/tcp","2380/tcp","8472/udp","4789/udp","9796/tcp","10256/tcp","10250/tcp","10251/tcp","10252/tcp"]
privateIpAddress: ""
resourceGroup: ""
size: "Standard_D2_v2"
sshUser: ""
staticPublicIp: false
storageType: "Standard_LRS"
updateDomainCount: "5"
Back to top
terraform:
module: ec2_rke2
cloudCredentialName: tf-creds-rke2
machineConfigName: tf-rke2
enableNetworkPolicy: false
defaultClusterRoleForProjectMembers: user
awsCredentials:
awsAccessKey: ""
awsSecretKey: ""
awsConfig:
ami:
region: us-east-2
awsSecurityGroupNames:
- my-security-group
awsSubnetID: subnet-xxxxxxxx
awsVpcID: vpc-xxxxxxxx
awsZoneLetter: a
Back to top
terraform:
module: linode_k3s
cloudCredentialName: tf-linode-creds
machineConfigName: tf-k3s
enableNetworkPolicy: false
defaultClusterRoleForProjectMembers: user
linodeCredentials:
linodeToken: ""
linodeConfig:
linodeImage: linode/ubuntu20.04
region: us-east
linodeRootPass: xxxxxxxxxxxx
Back to top
terraform:
module: vsphere_k3s
networkPlugin: canal
nodeTemplateName: tf-rke1-template
hostnamePrefix: tfp
vsphereCredentials:
password: ""
username: ""
vcenter: ""
vcenterPort: ""
vsphereConfig:
cfgparam: ["disk.enableUUID=TRUE"]
cloneFrom: ""
cloudConfig: ""
contentLibrary: ""
cpuCount: "4"
creationType: "template"
datacenter: ""
datastore: ""
datastoreCluster: ""
diskSize: "40000"
folder: ""
hostsystem: ""
memorySize: "8192"
network: [""]
pool: ""
sshPassword: "tcuser"
sshPort: "22"
sshUser: "docker"
sshUserGroup: "staff"
Back to top
The terratest configurations in the cattle-config.yaml are test specific; fields to configure vary per test. The nodepools field in the configurations below will vary depending on the module. I will outline what each module expects first, then show the whole test-specific configurations.
Back to top
type: []Nodepool
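For orientation, the sketch below shows one way such a nodepools block could be modelled in Go and unmarshalled with gopkg.in/yaml.v3. The struct and field names are illustrative assumptions derived from the YAML keys shown in this section; the actual types live in tfp-automation and may differ.

package terratest

import (
	"os"

	"gopkg.in/yaml.v3"
)

// Nodepool mirrors the YAML keys used in the nodepool examples below; this is
// a hypothetical illustration, not the framework's actual type. Hosted
// providers (AKS/EKS/GKE) and node-driver clusters (RKE1/RKE2/K3S) use
// different subsets of these fields.
type Nodepool struct {
	Quantity         int64  `yaml:"quantity"`
	Etcd             bool   `yaml:"etcd"`
	Controlplane     bool   `yaml:"controlplane"`
	Worker           bool   `yaml:"worker"`
	InstanceType     string `yaml:"instanceType"`     // EKS
	DesiredSize      int64  `yaml:"desiredSize"`      // EKS
	MaxSize          int64  `yaml:"maxSize"`          // EKS
	MinSize          int64  `yaml:"minSize"`          // EKS
	MaxPodsContraint int64  `yaml:"maxPodsContraint"` // GKE
}

// config wraps the terratest block of cattle-config.yaml for this sketch.
type config struct {
	Terratest struct {
		Nodepools []Nodepool `yaml:"nodepools"`
	} `yaml:"terratest"`
}

// loadNodepools reads cattle-config.yaml and returns the nodepools list.
func loadNodepools(path string) ([]Nodepool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var cfg config
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	return cfg.Terratest.Nodepools, nil
}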
Back to top
AKS nodepools only need the quantity of nodes per pool to be provided, of type int64. The below example will create a cluster with three node pools, each with a single node.
nodepools:
- quantity: 1
- quantity: 1
- quantity: 1
Back to top
EKS nodepools require the instanceType, as type string, along with the desiredSize, maxSize, and minSize of the node pool, each as type int64. The minimum requirement for an EKS nodepool's desiredSize is 2; this must be respected or the cluster will fail to provision.
nodepools:
- instanceType: t3.medium
desiredSize: 3
maxSize: 3
minSize: 0
Back to top
GKE nodepools require the quantity of the node pool, as type int64, and the maxPodsContraint, as type int64.
nodepools:
- quantity: 2
maxPodsContraint: 110
Back to top
For these modules, the required nodepool fields are the quantity, as type int64, as well as the roles to be assigned, each toggled via a boolean: etcd, controlplane, and worker. The following example will create three node pools, each with individual roles, and one node per pool.
nodepools:
- quantity: 1
etcd: true
controlplane: false
worker: false
- quantity: 1
etcd: false
controlplane: true
worker: false
- quantity: 1
etcd: false
controlplane: false
worker: true
That wraps up the sub-section on nodepools; circling back to the test-specific configs now. Test-specific fields to configure in this section are as follows:
Back to top
terratest:
nodepools:
- quantity: 1
etcd: true
controlplane: false
worker: false
- quantity: 1
etcd: false
controlplane: true
worker: false
- quantity: 1
etcd: false
controlplane: false
worker: true
kubernetesVersion: ""
nodeCount: 3
# Below are the expected formats for all module kubernetes versions
# AKS without leading v
# e.g. '1.28.5'
# EKS without leading v or any tail ending
# e.g. '1.28'
# GKE without leading v but with tail ending included
# e.g. 1.28.7-gke.1026000
# RKE1 with leading v and -rancher1-1 tail
# e.g. v1.28.7-rancher1-1
# RKE2 with leading v and +rke2r# tail
# e.g. v1.28.7+rke2r1
# K3S with leading v and +k3s# tail
# e.g. v1.28.7+k3s1
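As a purely illustrative aid, the formats documented in the comments above could be expressed as Go regular expressions to sanity-check a kubernetesVersion value before kicking off a run; the map and function below are not part of the framework.

package terratest

import "regexp"

// kubernetesVersionFormats captures the documented per-distribution version
// formats. Illustrative only; always confirm the versions your Rancher
// instance actually supports.
var kubernetesVersionFormats = map[string]*regexp.Regexp{
	"aks":  regexp.MustCompile(`^\d+\.\d+\.\d+$`),                 // e.g. 1.28.5
	"eks":  regexp.MustCompile(`^\d+\.\d+$`),                      // e.g. 1.28
	"gke":  regexp.MustCompile(`^\d+\.\d+\.\d+-gke\.\d+$`),        // e.g. 1.28.7-gke.1026000
	"rke1": regexp.MustCompile(`^v\d+\.\d+\.\d+-rancher\d+-\d+$`), // e.g. v1.28.7-rancher1-1
	"rke2": regexp.MustCompile(`^v\d+\.\d+\.\d+\+rke2r\d+$`),      // e.g. v1.28.7+rke2r1
	"k3s":  regexp.MustCompile(`^v\d+\.\d+\.\d+\+k3s\d+$`),        // e.g. v1.28.7+k3s1
}

// validVersionFormat reports whether version matches the documented format
// for the given distribution key.
func validVersionFormat(distro, version string) bool {
	re, ok := kubernetesVersionFormats[distro]
	return ok && re.MatchString(version)
}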
Note: In this test suite, Terraform explicitly cleans up resources after each test case; otherwise, Terraform can experience caching issues that cause tests to fail.
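For context on that cleanup behavior, the generic Terratest pattern is to defer a destroy immediately after building the options, so resources are torn down even when a test case fails. The sketch below uses the real github.com/gruntwork-io/terratest terraform module but is not the framework's actual test code; the directory path is a placeholder.

package terratest

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/terraform"
)

// TestProvisioningSketch shows the generic apply-then-always-destroy flow
// that prevents Terraform state from leaking between test cases.
func TestProvisioningSketch(t *testing.T) {
	terraformOptions := &terraform.Options{
		// Placeholder path to the generated Terraform configuration.
		TerraformDir: "./modules/example",
	}

	// Ensure resources are destroyed even if the test fails partway through.
	defer terraform.Destroy(t, terraformOptions)

	terraform.InitAndApply(t, terraformOptions)

	// Assertions against the provisioned cluster would go here.
}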
Back to top
terratest:
kubernetesVersion: ""
nodeCount: 3
scaledUpNodeCount: 8
scaledDownNodeCount: 6
nodepools:
- quantity: 1
etcd: true
controlplane: false
worker: false
- quantity: 1
etcd: false
controlplane: true
worker: false
- quantity: 1
etcd: false
controlplane: false
worker: true
scalingInput:
scaledUpNodepools:
- quantity: 3
etcd: true
controlplane: false
worker: false
- quantity: 2
etcd: false
controlplane: true
worker: false
- quantity: 3
etcd: false
controlplane: false
worker: true
scaledDownNodepools:
- quantity: 3
etcd: true
controlplane: false
worker: false
- quantity: 2
etcd: false
controlplane: true
worker: false
- quantity: 1
etcd: false
controlplane: false
worker: true
Note: In this test suite, Terraform explicitly cleans up resources after each test case; otherwise, Terraform can experience caching issues that cause tests to fail.
Back to top
terratest:
nodepools:
- quantity: 1
etcd: true
controlplane: false
worker: false
- quantity: 1
etcd: false
controlplane: true
worker: false
- quantity: 1
etcd: false
controlplane: false
worker: true
nodeCount: 3
kubernetesVersion: ""
upgradedKubernetesVersion: ""
Note: In this test suite, Terraform explicitly cleans up resources after each test case; otherwise, Terraform can experience caching issues that cause tests to fail.
Back to top
terratest:
snapshotInput:
snapshotRestore: "none"
upgradeKubernetesVersion: ""
controlPlaneConcurrencyValue: "15%"
workerConcurrencyValue: "20%"
Note: In this test suite, Terraform explicitly cleans up resources after each test case; otherwise, Terraform can experience caching issues that cause tests to fail.
Back to top
The build module test may be run to create a main.tf Terraform configuration file for the desired module. The file is logged to the output for future reference and use. Testing configurations for this are the same as those outlined in the provisioning test above; please review the provisioning test configurations for more details.
Back to top
The cleanup test may be used to clean up resources in situations where the rancher config has cleanup set to false, which may be helpful when debugging. This test expects the same configurations that were used to initially create the environment in order to properly clean it up.