From ec330033daf35c01de6b59eabe4be23418bb19a3 Mon Sep 17 00:00:00 2001 From: Jagdeep <70941399+jagpk@users.noreply.github.com> Date: Tue, 2 Nov 2021 23:33:17 +0800 Subject: [PATCH 1/2] Jenkins Upgrade to EKS MNG --- .../080_jenkins/_index.md | 2 +- .../080_jenkins/autoscaling_nodes.md | 39 ++++--- .../080_jenkins/jenkins_cleanup.md | 2 +- .../080_jenkins/running_jobs.md | 36 +++--- .../080_jenkins/setup_agents.md | 109 +++++++++++------- .../080_jenkins/setup_jenkins.md | 76 +++++++++++- 6 files changed, 178 insertions(+), 86 deletions(-) diff --git a/content/using_ec2_spot_instances_with_eks/080_jenkins/_index.md b/content/using_ec2_spot_instances_with_eks/080_jenkins/_index.md index f6b56d55..c81dd715 100644 --- a/content/using_ec2_spot_instances_with_eks/080_jenkins/_index.md +++ b/content/using_ec2_spot_instances_with_eks/080_jenkins/_index.md @@ -6,4 +6,4 @@ weight: 80 # Running Jenkins jobs - optional module -In this section, we will deploy a Jenkins master server into our cluster, and configure build jobs that will launch Jenkins agents inside Kubernetes pods. The Kubernetes pods will run on a dedicated Spot nodegroup with the optimized configuration for this type of workload, and we will demonstrate automatically restarting jobs that could potentially fail due to EC2 Spot Interruptions, that occur when EC2 needs the capacity back. +In this section, we will deploy a Jenkins master server into our cluster, and configure build jobs that will launch Jenkins agents inside Kubernetes pods. The Kubernetes pods will run on a dedicated EKS managed node group with Spot capacity. We will demonstrate automatically restarting jobs that could potentially fail due to EC2 Spot Interruptions, that occur when EC2 needs the capacity back. 
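The automatic-restart behavior described above (provided in this module by a Jenkins retry plugin) boils down to a simple retry loop. A minimal, illustrative sketch only — `run_build` is a hypothetical stand-in for a Jenkins job that fails twice (as it might when its Spot-backed agent is interrupted) before succeeding:

```shell
# Illustrative only: not the plugin's actual implementation.
# run_build is a hypothetical stub that fails twice, then succeeds.
fails_left=2
run_build() {
  if [ "$fails_left" -gt 0 ]; then
    fails_left=$((fails_left - 1))
    return 1          # simulate a failed build
  fi
  return 0            # build succeeded
}

attempts=1
max_attempts=4
until run_build; do
  attempts=$((attempts + 1))
  if [ "$attempts" -gt "$max_attempts" ]; then
    break             # give up after max_attempts tries
  fi
done
echo "build succeeded on attempt $attempts"
```

The retry plugin configured later in this module automates exactly this kind of loop inside Jenkins.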
diff --git a/content/using_ec2_spot_instances_with_eks/080_jenkins/autoscaling_nodes.md b/content/using_ec2_spot_instances_with_eks/080_jenkins/autoscaling_nodes.md index 83e4436a..8edc2c6e 100644 --- a/content/using_ec2_spot_instances_with_eks/080_jenkins/autoscaling_nodes.md +++ b/content/using_ec2_spot_instances_with_eks/080_jenkins/autoscaling_nodes.md @@ -4,7 +4,7 @@ date: 2018-08-07T08:30:11-07:00 weight: 80 --- -In a previous module in this workshop, we saw that we can use Kubernetes cluster-autoscaler to automatically increase the size of our nodegroups (EC2 Auto Scaling groups) when our Kubernetes deployment scaled out, and some of the pods remained in `pending` state due to lack of resources on the cluster. Let's check the same concept applies for our Jenkins worker nodes and see this in action. +In a previous module in this workshop, we saw that we can use Kubernetes cluster-autoscaler to automatically increase the size of our node groups (EC2 Auto Scaling groups) when our Kubernetes deployment scaled out, and some of the pods remained in `pending` state due to lack of resources on the cluster. Let's check that the same concept applies for our Jenkins worker nodes and see this in action. If you recall, Cluster Autoscaler was configured to Auto-Discover Auto Scaling groups created with the tags : k8s.io/cluster-autoscaler/enabled, and k8s.io/cluster-autoscaler/eksworkshop-eksctl. You can find out in the AWS Console section for **EC2 -> Auto Scaling Group**, that the new jenkins node group does indeed have the right tags defined. @@ -16,24 +16,27 @@ CI/CD workloads can benefit of Cluster Autoscaler ability to scale down to 0! 
#### Running multiple Jenkins jobs to reach a Pending pods state If we replicate our existing Sleep-2m job and run it 5 times, that should be enough for the EC2 Instance in the Jenkins dedicated nodegroup to run out of resources (CPU/Mem), triggering a Scale Up activity from cluster-autoscaler to increase the size of the EC2 Auto Scaling group.\ -1\. On the Jenkins dashboard, in the left pane, click **New Item**\ -2\. Under **Enter an item name**, enter `sleep-2m-2`\ -3\. At the bottom of the page, in the **Copy from** field, start typing Sleep-2m until the job name is auto completed, click **OK**\ -4\. In the job configuration page, click **Save**\ -5\. Repeat steps 1-4 until you have 5 identical jobs with different names\ -6\. In the Jenkins main dashboard page, click the "**Schedule a build for Sleep-2m-***" on all 5 jobs, to schedule all our jobs at the same time\ -7\. Monitor `kubectl get pods -w` and see pods with `jenkins-agent-abcdef` name starting up, until some of them are stuck in `pending` state. You can also use the Kube-ops-view for that purpose.\ -8\. Check the cluster-autoscaler log by running `kubectl logs -f deployment/cluster-autoscaler -n kube-system`\ -9\. The following lines would indicate that cluster-autoscaler successfully identified the pending Jenkins agent pods, detremined that the nodegroups that we created in the previous workshop module are not suitable due to the node selectors, and finally increased the size of the Jenkins dedicated nodegroup in order to have the kube-scheduler schedule these pending pods on new EC2 Instances in our EC2 Auto Scaling group.\ +1. On the Jenkins dashboard, in the left pane, click **New Item**. +2. Under **Enter an item name**, enter `sleep-2m-2`. +3. At the bottom of the page, in the **Copy from** field, start typing Sleep-2m until the job name is auto-completed, then click **OK**. +4. In the job configuration page, click **Save**. +5. 
Repeat steps 1-4 until you have 5 identical jobs with different names. +6. In the Jenkins main dashboard page, click the "**Schedule a build for Sleep-2m-***" on all 5 jobs, to schedule all our jobs at the same time. +7. Monitor `kubectl get pods -w` and see pods with `jenkins-agent-abcdef` name starting up, until some of them are stuck in `pending` state. You can also use the Kube-ops-view for that purpose. +8. Check the cluster-autoscaler log by running `kubectl logs -f deployment/cluster-autoscaler -n kube-system`. +9. The following lines would indicate that cluster-autoscaler successfully identified the pending Jenkins agent pods, determined that the nodegroups that we created in the previous workshop module are not suitable due to the node selectors, and finally increased the size of the Jenkins dedicated nodegroup in order to have the kube-scheduler schedule these pending pods on new EC2 Instances in our EC2 Auto Scaling group. + ``` -Pod default/default-5tb2v is unschedulable -Pod default-5tb2v can't be scheduled on eksctl-eksworkshop-eksctl10-nodegroup-dev-8vcpu-32gb-spot-NodeGroup-16XJ6GMZCT3XQ, predicate failed: GeneralPredicates predicate mismatch, reason: node(s) didn't match node selector -Pod default-5tb2v can't be scheduled on eksctl-eksworkshop-eksctl10-nodegroup-dev-4vcpu-16gb-spot-NodeGroup-1RBXH0I6585MX, predicate failed: GeneralPredicates predicate mismatch, reason: node(s) didn't match node selector -Best option to resize: eksctl-eksworkshop-eksctl10-nodegroup-jenkins-agents-2vcpu-8gb-spot-2-NodeGroup-7GE4LS6B34DK -Estimated 1 nodes needed in eksctl-eksworkshop-eksctl10-nodegroup-jenkins-agents-2vcpu-8gb-spot-2-NodeGroup-7GE4LS6B34DK -Final scale-up plan: [{eksctl-eksworkshop-eksctl10-nodegroup-jenkins-agents-2vcpu-8gb-spot-2-NodeGroup-7GE4LS6B34DK 1->2 (max: 5)}] -Scale-up: setting group eksctl-eksworkshop-eksctl10-nodegroup-jenkins-agents-2vcpu-8gb-spot-2-NodeGroup-7GE4LS6B34DK size to 2 +I1102 14:49:02.645241 1 scale_up.go:300] Pod 
jenkins-agent-pk7cj can't be scheduled on eksctl-eksworkshop-eksctl-nodegroup-ng-spot-8vcpu-32gb-NodeGroup-1DRVQJ43PHZUK, predicate checking error: node(s) didn't match Pod's node affinity/selector; predicateName=NodeAffinity; reasons: node(s) didn't match Pod's node affinity/selector; debugInfo= +I1102 14:49:02.645257 1 scale_up.go:449] No pod can fit to eksctl-eksworkshop-eksctl-nodegroup-ng-spot-8vcpu-32gb-NodeGroup-1DRVQJ43PHZUK +I1102 14:49:02.645416 1 scale_up.go:468] Best option to resize: eks-jenkins-agents-mng-spot-2vcpu-8gb-8abe6f97-53a9-a62a-63f3-a92e6310750c +I1102 14:49:02.645424 1 scale_up.go:472] Estimated 1 nodes needed in eks-jenkins-agents-mng-spot-2vcpu-8gb-8abe6f97-53a9-a62a-63f3-a92e6310750c +I1102 14:49:02.645485 1 scale_up.go:586] Final scale-up plan: [{eks-jenkins-agents-mng-spot-2vcpu-8gb-8abe6f97-53a9-a62a-63f3-a92e6310750c 1->2 (max: 5)}] +I1102 14:49:02.645498 1 scale_up.go:675] Scale-up: setting group eks-jenkins-agents-mng-spot-2vcpu-8gb-8abe6f97-53a9-a62a-63f3-a92e6310750c size to 2 +I1102 14:49:02.645519 1 auto_scaling_groups.go:219] Setting asg eks-jenkins-agents-mng-spot-2vcpu-8gb-8abe6f97-53a9-a62a-63f3-a92e6310750c size to 2 + ``` -10\. The end result, which you can see via `kubectl get pods` or Kube-ops-view, is that all pods were eventually scheduled, and in the Jenkins dashboard, you will see that all 5 jobs have completed successfully. + +10. The end result, which you can see via `kubectl get pods` or Kube-ops-view, is that all pods were eventually scheduled, and in the Jenkins dashboard, you will see that all 5 jobs have completed successfully. Great result! Let's move to the next step and clean up the Jenkins module. 
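The claim above that five concurrent Sleep-2m jobs overflow a single node can be checked with quick arithmetic. A back-of-the-envelope sketch, under stated assumptions (roughly 1930m allocatable CPU on an m5.large after kubelet/system reservations — the exact figure varies by AMI and configuration; each Jenkins agent pod requests 512m CPU as described earlier in this module):

```shell
# Assumed allocatable CPU on one 2-vCPU node after system reservations (millicores)
allocatable_mcpu=1930
# Assumed CPU request of each Jenkins agent pod (millicores)
agent_request_mcpu=512
# Integer division: how many agent pods fit on one node by CPU request alone
agents_per_node=$(( allocatable_mcpu / agent_request_mcpu ))
echo "agents per node: $agents_per_node"
```

With 3 agents fitting per node, 5 concurrent jobs leave pods pending, which is what triggers the 1->2 scale-up shown in the log above.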
diff --git a/content/using_ec2_spot_instances_with_eks/080_jenkins/jenkins_cleanup.md b/content/using_ec2_spot_instances_with_eks/080_jenkins/jenkins_cleanup.md index 9d18c68e..45ba8e3d 100644 --- a/content/using_ec2_spot_instances_with_eks/080_jenkins/jenkins_cleanup.md +++ b/content/using_ec2_spot_instances_with_eks/080_jenkins/jenkins_cleanup.md @@ -16,5 +16,5 @@ helm delete cicd ### Removing the Jenkins nodegroup ``` -eksctl delete nodegroup -f spot_nodegroup_jenkins.yml --approve +eksctl delete nodegroup -f add-mng-spot-jenkins.yml --approve ``` diff --git a/content/using_ec2_spot_instances_with_eks/080_jenkins/running_jobs.md b/content/using_ec2_spot_instances_with_eks/080_jenkins/running_jobs.md index 8dd7236d..d7e96562 100644 --- a/content/using_ec2_spot_instances_with_eks/080_jenkins/running_jobs.md +++ b/content/using_ec2_spot_instances_with_eks/080_jenkins/running_jobs.md @@ -4,9 +4,9 @@ date: 2018-08-07T08:30:11-07:00 weight: 70 --- -We now have a dedicated Spot nodegroup with the capacity-optimized allocation strategy that should decrease the chances of Spot Instances being interrupted, and we configured Jenkins to run jobs on those EC2 Spot Instances. We also installed the Naginator plugin which will allow us to retry failed jobs. +We now have a dedicated managed node group with Spot capacity. We also installed the Naginator plugin which will allow us to retry failed jobs. -#### Creating a Jenkins job +#### Create a Jenkins job 1. On the Jenkins dashboard, in the left pane, click **New Item** 2. Enter an item name: **Sleep-2m**, select **Freestyle project** and click **OK** 3. Scroll down to the **Build** section, and click **Add build step** -> **Execute shell** @@ -21,25 +21,23 @@ Since this workshop module focuses on resilience and cost optimization for Jenki {{% /notice %}} #### Running the Jenkins job -1\. On the project page for Sleep-2m, in the left pane, click the **Build Now** button\ -2\. 
Browse to the Kube-ops-view tool, and check that a new pod was deployed with a name that starts with `jenkins-agent-`\ - +1. On the project page for Sleep-2m, in the left pane, click the **Build Now** button. +2. Browse to the Kube-ops-view tool, and check that a new pod was deployed with a name that starts with `jenkins-agent-`. {{%expand "Show me how to get kube-ops-view url" %}} Execute the following command on Cloud9 terminal ``` kubectl get svc kube-ops-view | tail -n 1 | awk '{ print "Kube-ops-view URL = http://"$4 }' ``` {{% /expand %}} - -3\. Check the node on which the pod is running - is the nodegroup name jenkins-agents-2vcpu-8gb-spot? If so, it means that our labeling and Node Selector were configured successfully. \ -4\. Run `kubectl get pods`, and find the name of the Jenkins master pod (i.e cicd-jenkins-123456789-abcde)\ -5\. Run `kubectl logs -f `\ -6\. Do you see log lines that show your job is being started? for example "Started provisioning Kubernetes Pod Template from kubernetes with 1 executors. Remaining excess workload: 0"\ -7\. Back on the Jenkins Dashboard, In the left pane, click **Build History** and click the console icon next to the latest build. When the job finishes, you should see the following console output:\ +3. Check the node on which the pod is running - is the nodegroup name `jenkins-agents-mng-spot-2vcpu-8gb`? If so, it means that our labeling and Node Selector were configured successfully. +4. Run `kubectl get pods`, and find the name of the Jenkins controller pod (i.e. `cicd-jenkins-*`). +5. Run `kubectl logs -f <jenkins-controller-pod-name> -c jenkins`. +6. Do you see log lines that show your job is being started? For example: "jenkins-agent-* provisioning successfully completed. We have now 2 computer(s)". +7. Back on the Jenkins Dashboard, in the left pane, click **Build History** and click the console icon next to the latest build. 
When the job finishes, you should see the following console output: ``` -Building remotely on jenkins-agent-bwtmp (cicd-jenkins-slave) in workspace /home/jenkins/agent/workspace/Sleep-2m -[Sleep-2m] $ /bin/sh -xe /tmp/jenkins365818066752916558.sh +Building remotely on jenkins-agent-nkz2z (cicd-jenkins-agent) in workspace /home/jenkins/agent/workspace/Sleep-2m +[Sleep-2m] $ /bin/sh -xe /tmp/jenkins7588311786413895922.sh + sleep 2m + echo Job finished successfully Job finished successfully @@ -47,13 +45,13 @@ Finished: SUCCESS ``` #### Job failure and automatic retry -Now that we ran our job successfully on Spot Instances, let's test the failure scenario. Since we cannot simulate an EC2 Spot Interruption on instances that are running in an EC2 Auto Scaling group, we will demonstrate a similar effect by simply terminating the instance that our job/pod is running on. +Now that we ran our job successfully on Spot Instances, let's test the failure scenario. We will demonstrate a failure by simply terminating the instance that our job/pod is running on. -1. Go back to the Sleep-2m project page in Jenkins, and click **Build Now** -2. Run `kubectl get po --selector jenkins/cicd-jenkins-slave=true -o wide` to find the Jenkins agent pod and the node on which it is running -3. Run `kubectl describe node ` to find the node's EC2 Instance ID under the `alpha.eksctl.io/instance-id` label -4. Run `aws ec2 terminate-instances --instance-ids ` -5. Back in the Jenkins dashboard, under the **Build History** page, you should now see the Sleep-2m job as broken. You can click the Console button next to the failed run, to see the JNLP errors that indicate that the Jenkins agent was unable to communicate to the Master, due to the termination of the EC2 Instance. +1. Go back to the Sleep-2m project page in Jenkins, and click **Build Now**. +2. Run `kubectl get po --selector jenkins/cicd-jenkins-agent=true -o wide` to find the Jenkins agent pod and the node on which it is running. +3. 
Run `kubectl describe node <node-name>` to find the node's EC2 Instance ID under `ProviderID: aws:///*/i-xxxxx`. +4. Run `aws ec2 terminate-instances --instance-ids <instance-id>`. +5. Back in the Jenkins dashboard, under the **Build History** page, you should now see the Sleep-2m job as broken. You can click the Console button next to the failed run, to see the JNLP errors that indicate that the Jenkins agent was unable to communicate with the Controller, due to the termination of the EC2 Instance. 6. Within 1-3 minutes, the EC2 Auto Scaling group will launch a new replacement instance, and once it has joined the cluster, the sleep-2m job will be retried on the new node. You should see the sleep-2m job succeed in the Build History page or Project page. Now that we successfully ran a job on a Spot Instance, and automatically restarted a job due to a simulated node failure, let's move to the next step in the workshop and autoscale our Jenkins nodes. diff --git a/content/using_ec2_spot_instances_with_eks/080_jenkins/setup_agents.md b/content/using_ec2_spot_instances_with_eks/080_jenkins/setup_agents.md index 7326c5d7..357e70e9 100644 --- a/content/using_ec2_spot_instances_with_eks/080_jenkins/setup_agents.md +++ b/content/using_ec2_spot_instances_with_eks/080_jenkins/setup_agents.md @@ -1,69 +1,94 @@ --- -title: "Setting up Jenkins agents" +title: "Create Spot workers for Jenkins" date: 2018-08-07T08:30:11-07:00 -weight: 40 +weight: 10 --- -We now have our Jenkins Master running inside our EKS cluster, and we can reach the Jenkins dashboard via an ELB. We can create jobs which will be executed by Jenkins agents in pods within our cluster, but before we do that, let's create a dedicated Spot based nodegroup for our Jenkins agents, which will be slightly different from our existing nodegroups. 
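Steps 3 and 4 of the failure scenario above can be scripted: a node's `ProviderID` has the form `aws:///<az>/<instance-id>`, so the instance ID is just the last path segment. A minimal sketch with a made-up `ProviderID` value (in practice you would fetch it with `kubectl get node <node-name> -o jsonpath='{.spec.providerID}'`):

```shell
# Hypothetical example value; a real one comes from the node object via kubectl
PROVIDER_ID="aws:///us-east-1a/i-0123456789abcdef0"

# Strip everything up to and including the last '/' to get the instance ID
INSTANCE_ID="${PROVIDER_ID##*/}"
echo "$INSTANCE_ID"   # i-0123456789abcdef0
```

From there, `aws ec2 terminate-instances --instance-ids "$INSTANCE_ID"` performs the termination in step 4.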
+#### Create EKS managed node group with Spot capacity for Jenkins agents -#### Creating a new Spot Instances nodegroup for our Jenkins agent pods -Earlier in the workshop, in the **Adding Spot Workers with eksctl** step, we created nodegroups that run a diversified set of Spot Instances to run our applications.Let's create a new eksctl nodegroup configuration file called `spot_nodegroup_jenkins.yml`. The Jenkins default resource requirements (Request and Limit CPU/Memory) are 512m (~0.5 vCPU) and 512Mi (~0.5 GB RAM), and since we are not going to perform any large build jobs in this workshop, we can stick to the defaults and also choose relatively small instance types that can accommodate the Jenkins agent pods. +Earlier in the workshop, in the **Add EKS managed Spot workers** chapter, we created node groups that run a diversified set of Spot Instances to run our applications. Let's create a new eksctl nodegroup configuration file called `add-mng-spot-jenkins.yml`. + +The Jenkins default resource requirements (Request and Limit CPU/Memory) are 512m (~0.5 vCPU) and 512Mi (~0.5 GB RAM), and since we are not going to perform any large build jobs in this workshop, we can stick to the defaults and also choose relatively small instance types that can accommodate the Jenkins agent pods. 
``` -cat <<EoF > ~/environment/spot_nodegroup_jenkins.yml +cat <<EoF > ~/environment/add-mng-spot-jenkins.yml apiVersion: eksctl.io/v1alpha5 kind: ClusterConfig metadata: name: eksworkshop-eksctl region: $AWS_REGION -nodeGroups: - - name: jenkins-agents-2vcpu-8gb-spot - minSize: 0 - maxSize: 5 - desiredCapacity: 1 - instancesDistribution: - instanceTypes: ["m5.large", "m5d.large", "m4.large","t3.large","t3a.large","m5a.large","t2.large"] - onDemandBaseCapacity: 0 - onDemandPercentageAboveBaseCapacity: 0 - spotAllocationStrategy: capacity-optimized - labels: - lifecycle: Ec2Spot - intent: jenkins-agents - aws.amazon.com/spot: "true" - tags: - k8s.io/cluster-autoscaler/node-template/label/lifecycle: Ec2Spot - k8s.io/cluster-autoscaler/node-template/label/intent: jenkins-agents - k8s.io/cluster-autoscaler/node-template/label/aws.amazon.com/spot: "true" - iam: - withAddonPolicies: - autoScaler: true + +managedNodeGroups: +- name: jenkins-agents-mng-spot-2vcpu-8gb + amiFamily: AmazonLinux2 + desiredCapacity: 1 + minSize: 0 + maxSize: 3 + spot: true + instanceTypes: + - m4.large + - m5.large + - m5a.large + - m5ad.large + - m5d.large + - t2.large + - t3.large + - t3a.large + iam: + withAddonPolicies: + autoScaler: true + cloudWatch: true + albIngress: true + privateNetworking: true + labels: + alpha.eksctl.io/cluster-name: eksworkshop-eksctl + alpha.eksctl.io/nodegroup-name: jenkins-agents-mng-spot-2vcpu-8gb + intent: jenkins-agents + tags: + alpha.eksctl.io/nodegroup-name: jenkins-agents-mng-spot-2vcpu-8gb + alpha.eksctl.io/nodegroup-type: managed + k8s.io/cluster-autoscaler/node-template/label/intent: jenkins-agents + EoF ``` -This will create a `spot_nodegroup_jenkins.yml` file that we will use to instruct eksctl to create one nodegroup (EC2 Auto Scaling group), with the labels `intent: jenkins-agents` and `lifecycle: Ec2Spot`. 
The ASG will also have a custom tag key `k8s.io/cluster-autoscaler/node-template/label/intent` with the value `jenkins-agents` - This is in order for Kubernetes cluster-autoscaler to respect the node selector configuration that we will apply later in the module. - -Since Jenkins job oriented workloads are not fault-tolerant and an EC2 Spot interruption would cause the build job to fail, we can choose the **capacity-optimized** allocation strategy which will provision Spot Instances for us from the capacity pools that have the lowest chances of being interrupted. This way, we increase the chances of successfully completing our Jenkins jobs when running on Spot Instances. +Create the new EKS managed node group with Spot capacity for Jenkins agents. ``` -eksctl create nodegroup -f spot_nodegroup_jenkins.yml +eksctl create nodegroup -f add-mng-spot-jenkins.yml ``` {{% notice note %}} The creation of the workers will take about 3 minutes. {{% /notice %}} +{{% notice note %}} +Since version 0.41, eksctl integrates with the instance selector! This can create more convenient configurations that apply instance diversification in a concise way. +As an exercise, [read the eksctl instance selector documentation](https://eksctl.io/usage/instance-selector/) and figure out which changes you would need to apply to this configuration to use the instance selector. +At the time of writing this workshop, we have not included this functionality, as it depends on a pending feature that would let us exclude a few instance types. [Read more about this here](https://github.com/weaveworks/eksctl/issues/3718) +{{% /notice %}} -#### Instructing Jenkins to run jobs on the new, Spot dedicated nodegroup -1. In the Jenkins dashboard, browse to **Manage Jenkins** -> **Manage Node and Clouds** -1. On the left hand side click on the **Configure Clouds** link. That will take you to the cloud configuration where Kubernetes. -1. 
Click on the **Pod Templates...** button to expand the default pod template definition and then click again on **Pod Template Details...*** -1. Change the default pod name, attribute **Name** from `defualt` to `jenkins-agent`. We want to be able to identify the pods that are running in our clusters by name. -![Jenkins Pod Setup 1](/images/using_ec2_spot_instances_with_eks/jenkins/jenkinslabels-1.png) -1. At the bottom of the page, near the end of the Pod template section, for the **Node Selector** parameter , add the following: `intent=jenkins-agents,lifecycle=Ec2Spot` in order to instruct the Jenkins agent pods to run on the dedicated node group. -![Jenkins Pod Setup 2](/images/using_ec2_spot_instances_with_eks/jenkins/jenkinslabels-2.png) -1. Click **Save** +There are a few things to note in the configuration that we just used to create this node group. -Now, when Jenkins creates new pods (=agents), these will be created with a Node Selector that instructs the kube-scheduler to only deploy the pods on nodes with the above mentioned labels, which only exist in the dedicated Jenkins nodegroup. + * Node group configurations are set under the **managedNodeGroups** section, which indicates that the node groups are managed by EKS. + * The node group has **large** (2 vCPU and 8 GB) instance types with **minSize** 0, **maxSize** 3 and **desiredCapacity** 1. + * The configuration **spot: true** indicates that the node group being created is an EKS managed node group with Spot capacity. + * Notice that we added 3 node labels per node: + * **alpha.eksctl.io/cluster-name**, to indicate the nodes belong to the **eksworkshop-eksctl** cluster. + * **alpha.eksctl.io/nodegroup-name**, to indicate the nodes belong to the **jenkins-agents-mng-spot-2vcpu-8gb** node group. + * **intent**, to allow you to deploy Jenkins agents on nodes that have been labeled with value **jenkins-agents**. 
+ + * Notice that we added 1 cluster autoscaler related tag to the node group: + * **k8s.io/cluster-autoscaler/node-template/label/intent** is used by cluster autoscaler when node groups scale down to 0 (and scale up from 0). Cluster autoscaler acts on Auto Scaling groups belonging to node groups, therefore it requires the same tags on the ASG as well. Currently managed node groups do not automatically propagate tags to the ASG; see this [open issue](https://github.com/aws/containers-roadmap/issues/1524). Therefore, we will be adding these tags to the ASG manually. + + +Let's add these tags to the Auto Scaling group of the node group using the AWS CLI. + ``` +ASG_JENKINS_2VCPU_8GB=$(eksctl get nodegroup -n jenkins-agents-mng-spot-2vcpu-8gb --cluster eksworkshop-eksctl -o json | jq -r '.[].AutoScalingGroupName') -Move to the next step in the workshop to learn how to increase the resilience of your Jenkins jobs. +aws autoscaling create-or-update-tags --tags \ +ResourceId=$ASG_JENKINS_2VCPU_8GB,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/node-template/label/intent,Value=jenkins-agents,PropagateAtLaunch=true + ``` \ No newline at end of file diff --git a/content/using_ec2_spot_instances_with_eks/080_jenkins/setup_jenkins.md b/content/using_ec2_spot_instances_with_eks/080_jenkins/setup_jenkins.md index dada13c3..6a7f1077 100644 --- a/content/using_ec2_spot_instances_with_eks/080_jenkins/setup_jenkins.md +++ b/content/using_ec2_spot_instances_with_eks/080_jenkins/setup_jenkins.md @@ -1,27 +1,93 @@ --- -title: "Setting up Jenkins master " +title: "Setting up Jenkins" date: 2018-08-07T08:30:11-07:00 weight: 30 --- #### Install Jenkins +Let's add the Jenkins Helm chart repository so we have something to start with: + +``` +helm repo add jenkins https://charts.jenkins.io +``` + +You can then run `helm search repo` to see the charts. + +``` +helm search repo jenkins +``` + +Let's create a `values.yaml` file to declare the configuration of our Jenkins installation. 
We will use nodeSelector (`intent: control-apps, eks.amazonaws.com/capacityType: ON_DEMAND`) to deploy jenkins-controller on On-Demand nodes and nodeSelector (`intent: jenkins-agents, eks.amazonaws.com/capacityType: SPOT`) to deploy Jenkins agents on Spot nodes. + +``` +cat << EOF > values.yaml +--- +controller: + componentName: "jenkins-controller" + image: "jenkins/jenkins" + tag: "2.303.2-lts-jdk11" + resources: + requests: + cpu: "1024m" + memory: "4Gi" + limits: + cpu: "4096m" + memory: "8Gi" + + servicePort: 80 + serviceType: LoadBalancer + + nodeSelector: + intent: control-apps + eks.amazonaws.com/capacityType: ON_DEMAND + +serviceAccountAgent: + create: false + +agent: + enabled: true + image: "jenkins/inbound-agent" + tag: "4.11-1" + workingDir: "/home/jenkins/agent" + componentName: "jenkins-agent" + resources: + requests: + cpu: "512m" + memory: "512Mi" + limits: + cpu: "1024m" + memory: "1Gi" + + nodeSelector: + intent: jenkins-agents + eks.amazonaws.com/capacityType: SPOT + connectTimeout: 300 + # Pod name + podName: "jenkins-agent" +EOF + ``` -helm install cicd jenkinsci/jenkins --set rbac.create=true,master.servicePort=80,master.serviceType=LoadBalancer,master.JCasC.enabled=false,master.enableXmlConfig=true ``` + +Now we'll use the Helm CLI to create the Jenkins server as we've declared it in the `values.yaml` file. + ``` +helm install cicd jenkins/jenkins -f values.yaml +``` + The output of this command will give you some additional information such as the `admin` password and the way to get the host name of the ELB that was provisioned. -Let's give this some time to provision and while we do let's watch for Jenkins master pod +Let's give this some time to provision, and while we do, let's watch for the Jenkins controller pod to boot. ``` kubectl get pods -w ``` -You should see a pod that starts with the name **cicd-jenkins-** in `init`, `pending` or `running` state. 
+You should see a pod that starts with the name **cicd-jenkins-** in `Pending`, `Init`, `PodInitializing` or `Running` state. Once the pod status changes to `running`, we can get the load balancer address which will allow us to login to the Jenkins dashboard. @@ -62,4 +128,4 @@ printf $(kubectl get secret --namespace default cicd-jenkins -o jsonpath="{.data The output of this command will give you the default password for your `admin` user. Log into the Jenkins login screen using these credentials. Make note of this password, because you will need to use it several times throughout the workshop. -Now that our Jenkins master is working, move to the next step in the workshop to set up Jenkins agents. +Now that our Jenkins Controller is working, move to the next step in the workshop to set up Jenkins agents. From b251efea272e80d4da796346b623faa033d9a4af Mon Sep 17 00:00:00 2001 From: "Carlos Manzanedo Rueda (ruecarlo@)" Date: Sun, 7 Nov 2021 17:03:23 +0000 Subject: [PATCH 2/2] changing a few entries for eks jenkins update --- content/using_ec2_spot_instances_with_eks/080_jenkins/_index.md | 2 +- .../080_jenkins/setup_jenkins.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/content/using_ec2_spot_instances_with_eks/080_jenkins/_index.md b/content/using_ec2_spot_instances_with_eks/080_jenkins/_index.md index c81dd715..8631d696 100644 --- a/content/using_ec2_spot_instances_with_eks/080_jenkins/_index.md +++ b/content/using_ec2_spot_instances_with_eks/080_jenkins/_index.md @@ -6,4 +6,4 @@ weight: 80 # Running Jenkins jobs - optional module -In this section, we will deploy a Jenkins master server into our cluster, and configure build jobs that will launch Jenkins agents inside Kubernetes pods. The Kubernetes pods will run on a dedicated EKS managed node group with Spot capacity. We will demonstrate automatically restarting jobs that could potentially fail due to EC2 Spot Interruptions, that occur when EC2 needs the capacity back. 
+In this section, we will deploy a Jenkins server into our cluster, and configure build jobs that will launch Jenkins agents inside Kubernetes pods. The Kubernetes pods will run on a dedicated EKS managed node group with Spot capacity. We will demonstrate automatically restarting jobs that could potentially fail due to EC2 Spot Interruptions, that occur when EC2 needs the capacity back. diff --git a/content/using_ec2_spot_instances_with_eks/080_jenkins/setup_jenkins.md b/content/using_ec2_spot_instances_with_eks/080_jenkins/setup_jenkins.md index 6a7f1077..8b8d33ca 100644 --- a/content/using_ec2_spot_instances_with_eks/080_jenkins/setup_jenkins.md +++ b/content/using_ec2_spot_instances_with_eks/080_jenkins/setup_jenkins.md @@ -1,5 +1,5 @@ --- -title: "Setting up Jenkins" +title: "Setting up Jenkins server" date: 2018-08-07T08:30:11-07:00 weight: 30 ---