EKS can be used to create a highly available cluster, with multiple master nodes spread across multiple availability zones.
Create a Kubernetes cluster using the following command. Run it in the "bash" terminal tab at the bottom of the Cloud9 IDE. This creates the EKS control plane (master nodes):
$ aws eks create-cluster \
  --name k8s-workshop \
  --role-arn $EKS_SERVICE_ROLE \
  --resources-vpc-config subnetIds=${EKS_SUBNET_IDS},securityGroupIds=${EKS_SECURITY_GROUPS} \
  --kubernetes-version 1.10
The EKS_SERVICE_ROLE, EKS_SUBNET_IDS, and EKS_SECURITY_GROUPS environment variables should have been set by the Cloud9 environment setup script.
Cluster provisioning usually takes less than 10 minutes. You can query the status of your cluster with the following command. When your cluster status is ACTIVE, you can proceed.
$ aws eks describe-cluster --name k8s-workshop --query cluster.status --output text
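Optionally, if your AWS CLI version includes the EKS waiters, you can block until the cluster becomes active instead of polling:
$ aws eks wait cluster-active --name k8s-workshop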
Note: If your 'create cluster' fails with an error like:
aws: error: argument --role-arn: expected one argument
Please confirm the following environment variables are set before executing the 'create cluster' command:
$ echo $EKS_SERVICE_ROLE
$ echo $EKS_SUBNET_IDS
$ echo $EKS_SECURITY_GROUPS
If any of those environment variables are blank, please re-run the "Build Script" Cloud9 environment setup step that runs lab-ide-setup.sh.
If you receive an UnsupportedAvailabilityZoneException error during EKS cluster creation, your account is using an AZ that is currently resource constrained. This occurs mostly in the N. Virginia (us-east-1) region.
An error occurred (UnsupportedAvailabilityZoneException) when calling the CreateCluster operation: Cannot create cluster 'k8s-workshop' because us-east-1c, the targeted availability zone, does not currently have sufficient capacity to support the cluster. Retry and choose from these availability zones: us-east-1a, us-east-1b, us-east-1d
If you receive this error, you need to remove the constrained AZ (us-east-1c in this example) from the EKS_SUBNET_IDS environment variable. Follow these steps to update it.
Save the EKS-recommended AZs from your CLI output in an environment variable. Note: you only need two AZs to create an EKS cluster.
$ export EKS_VALID_AZS=us-east-1a,us-east-1b
Run the command below to determine the subnet IDs:
$ aws ec2 describe-subnets --filters "Name=vpc-id,Values=$EKS_VPC_ID" "Name=availabilityZone,Values=$EKS_VALID_AZS" --query 'Subnets[*].[SubnetId]' --output text
subnet-6e672524
subnet-18b10e44
Save this output in the EKS_SUBNET_IDS environment variable:
$ export EKS_SUBNET_IDS=subnet-6e672524,subnet-18b10e44
Re-run the EKS create-cluster command; it should now succeed. The output should look similar to this:
{ "cluster": { "status": "CREATING", "name": "k8s-workshop", "certificateAuthority": {}, "roleArn": "arn:aws:iam::123456789012:role/k8s-workshop-EksServiceRo-AWSServiceRoleForAmazonE-1PCSJFFFAF4BL", "resourcesVpcConfig": { "subnetIds": [ "subnet-6e672524", "subnet-18b10e44" ], "vpcId": "vpc-a779b4dd", "securityGroupIds": [ "sg-d093de9a" ] }, "version": "1.10", "arn": "arn:aws:eks:us-east-1:123456789012:cluster/k8s-workshop", "createdAt": 1532734869.147 } }
In order to access the cluster locally, use a configuration file (sometimes referred to as a kubeconfig file). This configuration file can be created automatically.
Once the cluster has moved to the ACTIVE state, download and run the create-kubeconfig.sh script.
aws s3 cp s3://aws-kubernetes-artifacts/v0.5/create-kubeconfig.sh . && \
chmod +x create-kubeconfig.sh && \
. ./create-kubeconfig.sh
This will create a configuration file at $HOME/.kube/config and update the necessary environment variable for default access.
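As a quick sanity check (assuming the script exports the KUBECONFIG variable, which is how kubectl locates a non-default config file), you can confirm both:
$ ls -l $HOME/.kube/config
$ echo $KUBECONFIG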
You can test your kubectl configuration using 'kubectl get service'
$ kubectl get service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   8m
Now that your EKS master nodes are created, you can launch and configure your worker nodes.
To launch your worker nodes, run the following CloudFormation CLI command:
$ aws cloudformation create-stack \
  --stack-name k8s-workshop-worker-nodes \
  --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-nodegroup.yaml \
  --capabilities "CAPABILITY_IAM" \
  --parameters "[{\"ParameterKey\": \"KeyName\", \"ParameterValue\": \"${AWS_STACK_NAME}\"}, {\"ParameterKey\": \"NodeImageId\", \"ParameterValue\": \"${EKS_WORKER_AMI}\"}, {\"ParameterKey\": \"ClusterName\", \"ParameterValue\": \"k8s-workshop\"}, {\"ParameterKey\": \"NodeGroupName\", \"ParameterValue\": \"k8s-workshop-nodegroup\"}, {\"ParameterKey\": \"ClusterControlPlaneSecurityGroup\", \"ParameterValue\": \"${EKS_SECURITY_GROUPS}\"}, {\"ParameterKey\": \"VpcId\", \"ParameterValue\": \"${EKS_VPC_ID}\"}, {\"ParameterKey\": \"Subnets\", \"ParameterValue\": \"${EKS_SUBNET_IDS}\"}]"
The AWS_STACK_NAME, EKS_WORKER_AMI, EKS_VPC_ID, EKS_SUBNET_IDS, and EKS_SECURITY_GROUPS environment variables should have been set during the Cloud9 Environment Setup.
Node provisioning usually takes less than 5 minutes. You can query the status of the worker node stack with the following command. When the stack status is CREATE_COMPLETE, you can proceed.
$ aws cloudformation describe-stacks --stack-name k8s-workshop-worker-nodes --query 'Stacks[0].StackStatus' --output text
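Alternatively, the AWS CLI provides a waiter that blocks until stack creation finishes:
$ aws cloudformation wait stack-create-complete --stack-name k8s-workshop-worker-nodes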
To enable the worker nodes to join your cluster, download and run the aws-auth-cm.sh script.
aws s3 cp s3://aws-kubernetes-artifacts/v0.5/aws-auth-cm.sh . && \
chmod +x aws-auth-cm.sh && \
. ./aws-auth-cm.sh
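For context, worker nodes join an EKS cluster through the aws-auth ConfigMap, which maps the workers' IAM instance role to Kubernetes node groups so the kubelets are allowed to register. The script presumably applies something similar to the sketch below; the role ARN is a placeholder that the script resolves from the worker node stack's outputs:
# placeholder: replace <NodeInstanceRole ARN> with the NodeInstanceRole output of the worker node stack
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <NodeInstanceRole ARN>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOF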
Watch the status of your nodes and wait for them to reach the Ready status.
$ kubectl get nodes --watch
NAME                                            STATUS     ROLES     AGE       VERSION
ip-192-168-223-116.us-west-2.compute.internal   NotReady   <none>    0s        v1.10.3
ip-192-168-223-116.us-west-2.compute.internal   NotReady   <none>    0s        v1.10.3
ip-192-168-223-116.us-west-2.compute.internal   NotReady   <none>    0s        v1.10.3
ip-192-168-147-168.us-west-2.compute.internal   NotReady   <none>    0s        v1.10.3
ip-192-168-147-168.us-west-2.compute.internal   NotReady   <none>    0s        v1.10.3
ip-192-168-102-172.us-west-2.compute.internal   NotReady   <none>    0s        v1.10.3
ip-192-168-102-172.us-west-2.compute.internal   NotReady   <none>    0s        v1.10.3
ip-192-168-223-116.us-west-2.compute.internal   NotReady   <none>    10s       v1.10.3
ip-192-168-147-168.us-west-2.compute.internal   NotReady   <none>    10s       v1.10.3
ip-192-168-102-172.us-west-2.compute.internal   NotReady   <none>    10s       v1.10.3
ip-192-168-223-116.us-west-2.compute.internal   Ready      <none>    20s       v1.10.3
ip-192-168-147-168.us-west-2.compute.internal   Ready      <none>    20s       v1.10.3
ip-192-168-102-172.us-west-2.compute.internal   Ready      <none>    20s       v1.10.3
You can manage multiple Kubernetes clusters with kubectl, the Kubernetes CLI. We will look more deeply at kubectl in the next section. The configuration for each cluster is stored in a configuration file, referred to as the “kubeconfig file”. By default, kubectl looks for a file named config in the directory ~/.kube. The kubectl CLI uses the kubeconfig file to find the information it needs to choose a cluster and communicate with that cluster's API server.
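To inspect the kubeconfig entries that apply to the current context (cluster endpoint, user, and namespace), you can run:
$ kubectl config view --minify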
This allows you to deploy your applications to different environments by just changing the context (a minimal sketch follows the list below). For example, here is a typical flow for application development:
- Build your application using a development environment (perhaps even locally on your laptop)
- Change the context to a test cluster created on AWS
- Use the same command to deploy to the test environment
- Once satisfied, change the context again to a production cluster on AWS
- Once again, use the same command to deploy to the production environment
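A minimal sketch of that flow, assuming hypothetical context names dev-cluster and prod-cluster and an application manifest app.yaml:
$ kubectl config use-context dev-cluster
$ kubectl apply -f app.yaml     # deploy to the test environment
$ kubectl config use-context prod-cluster
$ kubectl apply -f app.yaml     # the same command deploys to production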
Get a summary of available contexts:
$ kubectl config get-contexts
CURRENT   NAME      CLUSTER      AUTHINFO   NAMESPACE
*         aws       kubernetes   aws
The output shows the different contexts, one per cluster, that are available to kubectl. The NAME column shows the context name. * indicates the current context.
View the current context:
$ kubectl config current-context
aws
If multiple clusters exist, then you can change the context:
$ kubectl config use-context <context-name>
You are now ready to continue on with the workshop!
The sections below provide information on other capabilities of Kubernetes clusters. You are welcome to read and refer to them should you need to use those capabilities.
This section will walk you through how to install a Kubernetes cluster on AWS using kops.
kops, short for Kubernetes Operations, is a set of tools for installing, operating, and deleting Kubernetes clusters. kops can also perform rolling upgrades from older versions of Kubernetes to newer ones, and manage the cluster add-ons.
kops can be used to create a highly available cluster, with multiple master and worker nodes spread across multiple availability zones. The master and worker nodes within the cluster can use either DNS or the Weave Mesh gossip protocol for name resolution. For this workshop, we will use the gossip protocol. A gossip-based cluster is easier and quicker to set up, and does not require a domain, subdomain, or Route53 hosted zone to be registered. Instructions for creating a DNS-based cluster are provided as an appendix at the bottom of this page.
To create a cluster using the gossip protocol, simply use a cluster name with a suffix of .k8s.local. In the following steps, we will use example.cluster.k8s.local as a sample gossip cluster name. You may choose a different name as long as it ends with .k8s.local.
(Note) Before running the commands below, check whether you already have 5 VPCs in the us-east-1 region (the default maximum number of VPCs per region). If you do, delete one by deleting its CloudFormation stack (DO NOT DELETE the Cloud9 IDE-related CloudFormation stack, because you will still need it). The following error will occur if you have reached the maximum number of VPCs:
error running task "VPC/example.cluster.k8s.local" (9m59s remaining to succeed): error creating VPC: VpcLimitExceeded: The maximum number of VPCs has been reached.
The command below creates a cluster in a multi-master, multi-node, and multi-AZ configuration.
Run it in the "bash" terminal tab at the bottom of the Cloud9 IDE.
We can create and build the cluster in one step by passing the --yes flag.
$ kops create cluster \
  --name example.cluster.k8s.local \
  --master-count 3 \
  --node-count 5 \
  --zones $AWS_AVAILABILITY_ZONES \
  --yes
A multi-master cluster can be created by using the --master-count option and specifying the number of master nodes. An odd value is recommended. By default, the master nodes are spread across the AZs specified using the --zones option. Alternatively, you can use the --master-zones option to explicitly specify the zones for the master nodes.
The --zones option is also used to distribute the worker nodes. The number of workers is specified using the --node-count option.
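As an illustration (the zone names here are placeholders, not values from the workshop environment), pinning the masters to explicit zones might look like this:
$ kops create cluster \
  --name example.cluster.k8s.local \
  --master-count 3 \
  --master-zones us-east-1a,us-east-1b,us-east-1c \
  --node-count 5 \
  --zones us-east-1a,us-east-1b,us-east-1c \
  --yes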
If you receive a KOPS_STATE_STORE error like the one below during kops cluster creation, you need to export the KOPS_STATE_STORE environment variable with the name of the S3 bucket that was created by the CloudFormation stack (it starts with s3://k8s-workshop-kopsstatestore-xxx). After that, you can run the create command again.
Please use a valid s3 bucket uri when setting --state or KOPS_STATE_STORE env var. A valid value follows the format s3://<bucket>.
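For example, you can look up the bucket name and export the variable before retrying (the bucket suffix below is a placeholder; keep your own):
$ aws s3 ls | grep kopsstatestore
$ export KOPS_STATE_STORE=s3://k8s-workshop-kopsstatestore-xxxxxxx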
Alternatively, you can run the 'create cluster' command with the --state option, as shown below (replace the S3 bucket name with your own).
$ kops create cluster \
  --name example.cluster.k8s.local \
  --master-count 3 \
  --node-count 5 \
  --zones $AWS_AVAILABILITY_ZONES \
  --state s3://k8s-workshop-kopsstatestore-xxxxxxx \
  --yes
It will take 5-8 minutes for the cluster to be created. Validate the cluster:
$ kops validate cluster
Using cluster from kubectl context: example.cluster.k8s.local
Validating cluster example.cluster.k8s.local
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-eu-central-1a Master m3.medium 1 1 eu-central-1a
master-eu-central-1b Master m3.medium 1 1 eu-central-1b
master-eu-central-1c Master c4.large 1 1 eu-central-1c
nodes Node t2.medium 5 5 eu-central-1a,eu-central-1b,eu-central-1c
NODE STATUS
NAME ROLE READY
ip-172-20-101-97.ec2.internal node True
ip-172-20-119-53.ec2.internal node True
ip-172-20-124-138.ec2.internal master True
ip-172-20-35-15.ec2.internal master True
ip-172-20-63-104.ec2.internal node True
ip-172-20-69-241.ec2.internal node True
ip-172-20-84-65.ec2.internal node True
ip-172-20-93-167.ec2.internal master True
Your cluster example.cluster.k8s.local is ready
Note that all masters are spread across different AZs.
Your output may differ slightly from the one shown here based on the type of cluster you created.
If you don’t need the cluster anymore, you can delete it with the following command.
$ kops delete cluster --name example.cluster.k8s.local --yes
Great job! You have now covered the basics of Kubernetes and cluster operations. Continue on to the next, more advanced topics.