
new: Cluster Access Management API on Security #919

Merged May 29, 2024 · 35 commits
5d5f6dd
Adding `prepare-environment` script
rodrigobersa Apr 30, 2024
dfd3e26
Adding workflow image
rodrigobersa Apr 30, 2024
dc6f57b
Reordering position
rodrigobersa Apr 30, 2024
7d4b0d4
Adding Index and Understanding
rodrigobersa Apr 30, 2024
4fd3ad0
Adding managing section
rodrigobersa May 1, 2024
3d57c4f
Adding cluster-admin section
rodrigobersa May 1, 2024
8a6b37c
Adding migration section
rodrigobersa May 1, 2024
d42e2a1
Adding combining section
rodrigobersa May 1, 2024
a592500
Replacing accoung ID with variable
rodrigobersa May 1, 2024
0a2985b
Adding cleanup script
rodrigobersa May 1, 2024
17c4732
Fix spelling
rodrigobersa May 1, 2024
9fa56c3
Adding cleanup scripts
rodrigobersa May 1, 2024
a0e0dd0
Fix indexing
rodrigobersa May 1, 2024
2160c30
Fix spelling
rodrigobersa May 1, 2024
eda07ea
Fixing fenced blocks
rodrigobersa May 1, 2024
c503d67
Running pre-commit
rodrigobersa May 1, 2024
d4afe88
Fixing table format
rodrigobersa May 1, 2024
158b72c
Formatting accessconfig options
rodrigobersa May 1, 2024
90a7bf0
Adding note about access config migration
rodrigobersa May 1, 2024
647b1d5
Minor fixes on managing and cluster-creator files
rodrigobersa May 2, 2024
86c8f56
Minor fixes on migrating and combining files
rodrigobersa May 2, 2024
a3e8b7a
Fix typo
rodrigobersa May 2, 2024
34f5eb6
Fixing prepare-environment script
rodrigobersa May 2, 2024
d67adf0
Fixing prepare-environment
rodrigobersa May 3, 2024
c34ed20
e2e validation for managing section
rodrigobersa May 3, 2024
7ec8aff
e2e validation for cluster-creator section
rodrigobersa May 3, 2024
90014de
Commenting rolebinding from prepare-env. e2e validation for migration…
rodrigobersa May 3, 2024
765ab0d
Adjusting rolebinding from prepare-env. e2e validation for combining …
rodrigobersa May 3, 2024
8d07d00
Running pre-commit
rodrigobersa May 3, 2024
3993ad6
Restructured sections
niallthomson May 24, 2024
02094fb
Some clarifications in rbac section
niallthomson May 25, 2024
e0da115
Tweaked migration section
niallthomson May 28, 2024
48238a6
Fixed flow and tests
niallthomson May 28, 2024
60bd763
Adding final step on associating
rodrigobersa May 29, 2024
77faef4
Fixed ARNs
niallthomson May 29, 2024
16 changes: 16 additions & 0 deletions manifests/modules/security/cam/.workshop/cleanup.sh
@@ -0,0 +1,16 @@
#!/bin/bash

set -e

# Redefining Cluster Creator Cluster Admin Access
# Getting Cluster Creator Role from CloudFormation Stack
RESOURCE_ID=$(aws cloudformation list-stack-resources --stack-name workshop-stack --query 'StackResourceSummaries[?ResourceType==`AWS::IAM::Role`].LogicalResourceId' | awk -F '"' '/CodeBuildRole/{print$2}')

ROLE_NAME=$(aws cloudformation describe-stack-resource --stack-name workshop-stack --logical-resource-id $RESOURCE_ID --query 'StackResourceDetail.PhysicalResourceId' --output text)

# Getting Role ARN
ROLE_ARN=$(aws iam get-role --role-name $ROLE_NAME --query 'Role.Arn' --output text)

# Granting Cluster Admin Access
aws eks create-access-entry --cluster-name $EKS_CLUSTER_NAME --principal-arn $ROLE_ARN
aws eks associate-access-policy --cluster-name $EKS_CLUSTER_NAME --principal-arn $ROLE_ARN --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy --access-scope type=cluster
153 changes: 153 additions & 0 deletions manifests/modules/security/cam/.workshop/terraform/main.tf
@@ -0,0 +1,153 @@
locals {
arn_base = join(":", slice(split(":", data.aws_eks_cluster.eks_cluster.arn), 0, 5))
map_accounts = try(yamldecode(data.kubernetes_config_map_v1.aws_auth.data.mapAccounts), [])
map_users = try(yamldecode(data.kubernetes_config_map_v1.aws_auth.data.mapUsers), [])
map_roles = yamldecode(data.kubernetes_config_map_v1.aws_auth.data.mapRoles)
add_roles = concat([{
rolearn = aws_iam_role.eks_developers.arn
username = "developer"
groups = [
"developers"
]
}])
}

data "aws_iam_policy_document" "assume_role" {

statement {
sid = "AssumeRole"
actions = ["sts:AssumeRole"]

principals {
type = "AWS"
identifiers = [data.aws_caller_identity.current.arn]
}
}
}

data "aws_iam_policy_document" "view_only" {
statement {
sid = "List"
actions = [
"eks:ListFargateProfiles",
"eks:ListNodegroups",
"eks:ListUpdates",
"eks:ListAddons",
"eks:ListAccessEntries",
"eks:ListAssociatedAccessPolicies",
"eks:ListIdentityProviderConfigs",
"eks:ListInsights",
"eks:ListPodIdentityAssociations",
]
resources = [
data.aws_eks_cluster.eks_cluster.arn,
"${local.arn_base}:nodegroup/*/*/*",
"${local.arn_base}:addon/*/*/*",
"arn:aws:eks::aws:cluster-access-policy",
]
}

statement {
sid = "ListDescribeAll"
actions = [
"eks:DescribeAddonConfiguration",
"eks:DescribeAddonVersions",
"eks:ListClusters",
"eks:ListAccessPolicies",
]
resources = ["*"]
}

statement {
sid = "Describe"
actions = [
"eks:DescribeNodegroup",
"eks:DescribeFargateProfile",
"eks:ListTagsForResource",
"eks:DescribeUpdate",
"eks:AccessKubernetesApi",
"eks:DescribeCluster",
"eks:DescribeAddon",
"eks:DescribeAccessEntry",
"eks:DescribeIdentityProviderConfig",
"eks:DescribeInsight",
"eks:DescribePodIdentityAssociation",
]
resources = [
data.aws_eks_cluster.eks_cluster.arn,
"${local.arn_base}:fargateprofile/*/*/*",
"${local.arn_base}:nodegroup/*/*/*",
"${local.arn_base}:addon/*/*/*",
]
}
}


resource "aws_iam_role" "eks_developers" {
name = "EKSDevelopers"
path = "/"
assume_role_policy = data.aws_iam_policy_document.assume_role.json
}

resource "aws_iam_policy" "eks_developers" {
name = "EKSDevelopers"
policy = data.aws_iam_policy_document.view_only.json
}

resource "aws_iam_role_policy_attachment" "eks_developers" {
policy_arn = aws_iam_policy.eks_developers.arn
role = aws_iam_role.eks_developers.name
}

data "kubernetes_config_map_v1" "aws_auth" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
}

resource "kubernetes_config_map_v1_data" "aws_auth" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data = {
mapRoles = yamlencode(concat(local.add_roles, local.map_roles))
mapUsers = yamlencode(local.map_users)
mapAccounts = yamlencode(local.map_accounts)
}
force = true
}

resource "kubernetes_cluster_role_binding_v1" "view" {
metadata {
name = "view"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "view"
}
subject {
kind = "Group"
name = "developers"
api_group = "rbac.authorization.k8s.io"
}
}

resource "kubernetes_role_binding_v1" "developers" {
metadata {
name = "app1_dev"
namespace = "default"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "edit"
}
subject {
kind = "Group"
name = "app1_dev"
api_group = "rbac.authorization.k8s.io"
}
}
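
For reference, after `kubernetes_config_map_v1_data.aws_auth` is applied, the merged `mapRoles` data would contain an entry for the new role roughly like the following sketch (the account ID is illustrative):

```yaml
- rolearn: arn:aws:iam::111122223333:role/EKSDevelopers
  username: developer
  groups:
    - developers
```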
9 changes: 9 additions & 0 deletions manifests/modules/security/cam/.workshop/terraform/outputs.tf
@@ -0,0 +1,9 @@
output "developers_role" {
description = "AWS IAM Role created for EKS Developers access"
value       = aws_iam_role.eks_developers
}

output "aws_auth" {
description = "Merged content for `aws-auth` configMap"
value = kubernetes_config_map_v1_data.aws_auth
}
35 changes: 35 additions & 0 deletions manifests/modules/security/cam/.workshop/terraform/vars.tf
@@ -0,0 +1,35 @@
# tflint-ignore: terraform_unused_declarations
variable "eks_cluster_id" {
description = "EKS cluster name"
type = string
}

# tflint-ignore: terraform_unused_declarations
variable "eks_cluster_version" {
description = "EKS cluster version"
type = string
}

# tflint-ignore: terraform_unused_declarations
variable "cluster_security_group_id" {
description = "EKS cluster security group ID"
type = any
}

# tflint-ignore: terraform_unused_declarations
variable "addon_context" {
description = "Addon context that can be passed directly to blueprints addon modules"
type = any
}

# tflint-ignore: terraform_unused_declarations
variable "tags" {
description = "Tags to apply to AWS resources"
type = any
}

# tflint-ignore: terraform_unused_declarations
variable "resources_precreated" {
description = "Have expensive resources been created already"
type = bool
}
177 changes: 177 additions & 0 deletions website/docs/security/cluster-access-management/cluster-creator.md
@@ -0,0 +1,177 @@
---
title: "Cluster Admin Access"
sidebar_position: 13
---

## Managing the Cluster Admin access

As explained earlier, with the Cluster Access Management API it is possible to remove the cluster-admin permissions granted to the Cluster Creator at cluster creation time. Let's do that, since cluster-admin permissions should be reserved for troubleshooting or break-glass situations.

> Remember to replace the principalArn with the one that exists in your cluster.

```bash
$ CLUSTER_CREATOR=$(aws eks list-access-entries --cluster $EKS_CLUSTER_NAME --output text | awk '/CodeBuild/ {print $2}')
$ aws eks delete-access-entry --cluster-name $EKS_CLUSTER_NAME --principal-arn $CLUSTER_CREATOR
$ aws eks list-access-entries --cluster $EKS_CLUSTER_NAME
{
"accessEntries": [
"arn:aws:iam::143095623777:role/eksctl-eks-workshop-nodegroup-defa-NodeInstanceRole-wtZ9gonWSMRn"
]
}
```

Test your access to the cluster.

```bash
$ kubectl -n kube-system get configmap aws-auth
NAME DATA AGE
aws-auth 3 4h28m
$ kubectl get clusterrole cluster-admin
NAME CREATED AT
cluster-admin 2024-04-29T17:37:43Z
```

You still have cluster-admin access, right? That's because the IAM Role you are using is not the one that created the cluster; that was done by an infrastructure pipeline.

Now, in the `aws-auth` configMap, there is a mapping for your AWS STS identity with the `system:masters` group, similar to the one below.

```yaml
- "groups":
- "system:masters"
"rolearn": "arn:aws:iam::$AWS_ACCOUNT_ID:role/workshop-stack-Cloud9Stack-1UEGQA-EksWorkshopC9Role-0GSFxRAwfFG1"
"username": "admin"
```

Check the AWS STS Identity you are using.

```bash
$ aws sts get-caller-identity --query 'Arn'
"arn:aws:sts::$AWS_ACCOUNT_ID:assumed-role/workshop-stack-Cloud9Stack-1UEGQA-EksWorkshopC9Role-0GSFxRAwfFG1/i-06b2ef4cc8104bd8a"
```

That name matches the entry! The only difference is that the entry is mapped to the source AWS IAM Role rather than the AWS STS identity, so the ARN prefix is slightly different.
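
The conversion from the assumed-role STS ARN back to the source IAM role ARN is plain string manipulation, similar to the `cut` usage used later in this section. A minimal sketch; the sample ARN, account ID, and role name below are illustrative, and it assumes the role uses the default `/` path:

```shell
# Sample assumed-role ARN, as returned by `aws sts get-caller-identity` (illustrative)
STS_ARN="arn:aws:sts::111122223333:assumed-role/EksWorkshopC9Role/i-06b2ef4cc8104bd8a"

# Account ID is the 5th colon-separated field; role name is the 2nd slash-delimited field
ACCOUNT_ID=$(echo "$STS_ARN" | cut -d: -f5)
ROLE_NAME=$(echo "$STS_ARN" | cut -d/ -f2)

# Reconstruct the source IAM role ARN (assumes the default "/" role path)
ROLE_ARN="arn:aws:iam::${ACCOUNT_ID}:role/${ROLE_NAME}"
echo "$ROLE_ARN"
```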

So let's go ahead and remove that entry as well.

```bash
$ ROLE_NAME=$(aws sts get-caller-identity --query 'Arn' | cut -d/ -f2)
$ eksctl delete iamidentitymapping --cluster $EKS_CLUSTER_NAME --arn arn:aws:iam::$AWS_ACCOUNT_ID:role/$ROLE_NAME
2024-04-29 21:50:20 [ℹ] removing identity "arn:aws:iam::$AWS_ACCOUNT_ID:role/workshop-stack-Cloud9Stack-1UEGQA-EksWorkshopC9Role-0GSFxRAwfFG1" from auth ConfigMap (username = "admin", groups = ["system:masters"])
```

Test your access to the cluster again.

```bash
$ kubectl -n kube-system get configmap aws-auth
error: You must be logged in to the server (Unauthorized)
$ kubectl get clusterrole cluster-admin
error: You must be logged in to the server (Unauthorized)
```

Not authorized, right? The cluster no longer recognizes your identity. You have now removed your cluster-admin access, and you have no access to the cluster at all!
If this happened on a cluster configured with the `CONFIG_MAP`-only authentication mode, and no other cluster admins were set, you would have completely lost access to the cluster, because you can't even list or read the `aws-auth` configMap to add your identity back.

Now, with the Cluster Access Management API, it's possible to regain that access with simple AWS CLI commands. First, get the ARN of your IAM Role.

```bash
$ ROLE_NAME=$(aws sts get-caller-identity --query 'Arn' | cut -d/ -f2)
$ echo $ROLE_NAME
workshop-stack-Cloud9Stack-1UEGQA-EksWorkshopC9Role-0GSFxRAwfFG1
$ ROLE_ARN=$(aws iam list-roles --query "Roles[?RoleName=='"$ROLE_NAME"'].Arn" --output text)
$ echo $ROLE_ARN
arn:aws:iam::$AWS_ACCOUNT_ID:role/workshop-stack-Cloud9Stack-1UEGQA-EksWorkshopC9Role-0GSFxRAwfFG1
```

Create the Access Entry.

```bash
$ aws eks create-access-entry --cluster-name $EKS_CLUSTER_NAME --principal-arn $ROLE_ARN
{
"accessEntry": {
"clusterName": "eks-workshop",
"principalArn": "arn:aws:iam::$AWS_ACCOUNT_ID:role/workshop-stack-Cloud9Stack-1UEGQA-EksWorkshopC9Role-0GSFxRAwfFG1",
"kubernetesGroups": [],
"accessEntryArn": "arn:aws:eks:us-west-2:$AWS_ACCOUNT_ID:access-entry/eks-workshop/role/$AWS_ACCOUNT_ID/workshop-stack-Cloud9Stack-1UEGQA-EksWorkshopC9Role-0GSFxRAwfFG1/26c79603-ae69-f3ad-a51d-693d6d004af5",
"createdAt": "2024-04-29T22:43:51.181000+00:00",
"modifiedAt": "2024-04-29T22:43:51.181000+00:00",
"tags": {},
"username": "arn:aws:sts::$AWS_ACCOUNT_ID:assumed-role/workshop-stack-Cloud9Stack-1UEGQA-EksWorkshopC9Role-0GSFxRAwfFG1/{{SessionName}}",
"type": "STANDARD"
}
}
```

Test your access to the cluster again.

```bash
$ kubectl -n kube-system get configmap aws-auth
Error from server (Forbidden): clusterroles.rbac.authorization.k8s.io is forbidden: User "arn:aws:sts::$AWS_ACCOUNT_ID:assumed-role/workshop-stack-Cloud9Stack-1UEGQA-EksWorkshopC9Role-0GSFxRAwfFG1/i-06b2ef4cc8104bd8a" cannot list resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope
$ kubectl get clusterrole cluster-admin
Error from server (Forbidden): clusterroles.rbac.authorization.k8s.io is forbidden: User "arn:aws:sts::$AWS_ACCOUNT_ID:assumed-role/workshop-stack-Cloud9Stack-1UEGQA-EksWorkshopC9Role-0GSFxRAwfFG1/i-06b2ef4cc8104bd8a" cannot list resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope
```

Still no access, but with a different error. That's because the Access Entry was not associated with any of the Access Policies covered in the previous section, so you are authenticated to the cluster but no authorization scope has been defined.

Validate that with the command below.

```bash
$ aws eks list-associated-access-policies --cluster-name $EKS_CLUSTER_NAME --principal-arn $ROLE_ARN
{
"associatedAccessPolicies": [],
"clusterName": "eks-workshop",
"principalArn": "arn:aws:iam::$AWS_ACCOUNT_ID:role/workshop-stack-Cloud9Stack-1UEGQA-EksWorkshopC9Role-0GSFxRAwfFG1"
}
```

Now, run the following command to associate the newly created Access Entry with the `AmazonEKSClusterAdminPolicy` Access Policy.

```bash
$ aws eks associate-access-policy --cluster-name $EKS_CLUSTER_NAME --principal-arn $ROLE_ARN --policy-arn "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy" --access-scope type=cluster
{
"clusterName": "eks-workshop",
"principalArn": "arn:aws:iam::$AWS_ACCOUNT_ID:role/workshop-stack-Cloud9Stack-1UEGQA-EksWorkshopC9Role-0GSFxRAwfFG1",
"associatedAccessPolicy": {
"policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy",
"accessScope": {
"type": "cluster",
"namespaces": []
},
"associatedAt": "2024-04-29T22:50:22.564000+00:00",
"modifiedAt": "2024-04-29T22:50:22.564000+00:00"
}
}
```

Notice the `policyArn` and `accessScope` values. Validate the policy association again.

```bash
$ aws eks list-associated-access-policies --cluster-name $EKS_CLUSTER_NAME --principal-arn $ROLE_ARN
{
"associatedAccessPolicies": [
{
"policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy",
"accessScope": {
"type": "cluster",
"namespaces": []
},
"associatedAt": "2024-04-29T22:50:22.564000+00:00",
"modifiedAt": "2024-04-29T22:50:22.564000+00:00"
}
],
"clusterName": "eks-workshop",
"principalArn": "arn:aws:iam::$AWS_ACCOUNT_ID:role/workshop-stack-Cloud9Stack-1UEGQA-EksWorkshopC9Role-0GSFxRAwfFG1"
}
```

Test your access to the cluster one more time.

```bash
$ kubectl -n kube-system get configmap aws-auth
NAME DATA AGE
aws-auth 3 5h8m
$ kubectl get clusterrole cluster-admin
NAME CREATED AT
cluster-admin 2024-04-29T17:37:43Z
```

You have now regained cluster-admin access to the cluster! **Use it responsibly!**
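
Cluster-wide admin is not the only option: access policies can also be associated with a namespace scope. A hedged sketch, shown as a dry run that only prints the command; `AmazonEKSEditPolicy` is one of the EKS managed access policies, and the cluster name, role ARN, and namespace below are illustrative:

```shell
# Dry-run sketch: print the command that would grant namespace-scoped edit access.
# Remove the leading `echo` to actually run it against your cluster.
EKS_CLUSTER_NAME="eks-workshop"
ROLE_ARN="arn:aws:iam::111122223333:role/EKSDevelopers"
POLICY_ARN="arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy"

echo aws eks associate-access-policy \
  --cluster-name "$EKS_CLUSTER_NAME" \
  --principal-arn "$ROLE_ARN" \
  --policy-arn "$POLICY_ARN" \
  --access-scope type=namespace,namespaces=default
```

With a namespace scope, the principal's Kubernetes permissions are limited to the listed namespaces instead of the whole cluster.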