
Releases: datarockets/infrastructure

v0.3.0-rc2

18 Jan 05:30
Pre-release
  1. EKS nodes can now connect to the public internet over HTTP. For example, cert-manager needs this: while issuing a new certificate, its self-check requests the ACME challenge from its own public endpoint over HTTP (see the sketch after this list).
  2. Enable nginx ingress snippets (they are disabled by default in newer versions of the nginx-ingress controller).
  3. Remove redundant security group rules that had been added due to a misunderstanding of how they work.
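
The HTTP egress change in item 1 roughly corresponds to a rule of the following shape. This is only an illustrative sketch, not the module's actual resource; the resource name is an assumption:

resource "aws_security_group_rule" "node_egress_http" {
  # Allow worker nodes to reach the public internet over HTTP, e.g. for
  # cert-manager's ACME self-check against its own public endpoint.
  type              = "egress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = module.eks.node_security_group_id
}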

v0.3.0-rc1

18 Jan 00:40
Pre-release

Migration from 0.2.x

This version contains major changes that require extra care so that you do not recreate cloud resources you don't want to recreate. Some of the steps might make your services temporarily unavailable, so plan for a possible disruption.

  1. Update the version of the eks module:
module "eks" {
  source = "[email protected]:datarockets/infrastructure.git//aws/eks?ref=v0.3.0"
}
  2. Use a separate module to create ECR repositories:
 module "eks" {
   source = "[email protected]:datarockets/infrastructure.git//aws/eks?ref=v0.3.0"
 
-  ecr_repositories = ["api", "web"]
 }
module "ecr" {
  source = "[email protected]:datarockets/infrastructure.git//aws/ecr?ref=v0.3.0"

  app = var.app
  environment = var.environment
  repositories = ["api", "web"]
}
  3. Use a separate module to create the cicd user:
module "eks_cicd_auth" {
  source = "[email protected]:datarockets/infrastructure.git//aws/eks/auth/cicd?ref=v0.3.0"

  app = var.app
  environment = var.environment
  cluster_arn = module.eks.cluster_arn
  ecr_repository_arns = module.ecr.repository_arns
  kubernetes_app_namespace = module.kubernetes.app_namespace
}
  4. Use a separate module to manage the aws-auth ConfigMap, since the eks module no longer manages it:
 module "eks" {
   source = "[email protected]:datarockets/infrastructure.git//aws/eks?ref=v0.3.0"
 
-  masters_aws_groups = ["Ops"]
 }
module "eks_aws_auth" {
  source = "[email protected]:datarockets/infrastructure.git//aws/eks/auth/aws?ref=v0.3.0"

  app = var.app
  environment = var.environment

  node_group_iam_role_arn = module.eks.eks_managed_node_group_default_iam_role_arn
  masters_aws_groups = ["Ops"]
  users = [
    {
      userarn = module.eks_cicd_auth.iam_user.arn
      username = module.eks_cicd_auth.iam_user.name
      groups = [module.eks_cicd_auth.kubernetes_group]
    }
  ]
}
  5. Update the kubernetes module to the latest version:
module "kubernetes" {
  source = "[email protected]:datarockets/infrastructure.git//k8s/basic?ref=v0.3.0"
}
  6. If you used to create the kubernetes app namespace in the eks module, that is no longer needed; let the kubernetes module create it and drop these options:
  module "kubernetes" {
    source = "[email protected]:datarockets/infrastructure.git//k8s/basic?ref=v0.3.0"
  
-   create_app_namespace = false
-   app_namespace = module.eks.app_namespace
  }
  7. If you use snippets in your ingress configuration, enable them when installing the nginx-ingress helm chart:
module "kubernetes" {

  nginx_ingress_helm_chart_options = [
    {
      name = "controller.enableSnippets"
      value = true
    }
  ]
}
  8. Remove the kubernetes-alpha provider entirely (see the sketch below).
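
A sketch of what the removal typically looks like, assuming the provider was declared in required_providers and configured with a config_path; your configuration may differ:

 terraform {
   required_providers {
-    kubernetes-alpha = {
-      source = "hashicorp/kubernetes-alpha"
-    }
   }
 }

-provider "kubernetes-alpha" {
-  config_path = "~/.kube/config"
-}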

  9. Update the database module:

  module "database" {
-   source = "[email protected]:datarockets/infrastructure.git//aws/postgresql?ref=v0.2.2"
+   source = "[email protected]:datarockets/infrastructure.git//aws/postgresql?ref=v0.3.0"

    app = var.app
    environment = var.environment
    region = var.region
    vpc_id = module.eks.vpc_id
-   eks_private_subnets_cidr_blocks = module.eks.private_cidr_blocks
+   allow_security_group_ids = [module.eks.node_security_group_id]
    database_subnets = {
      "10.0.21.0/24" = "ca-central-1a"
      "10.0.22.0/24" = "ca-central-1b"
    }
  }
  10. Update the redis module in the same way (see the sketch below).
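
A sketch of the redis change, assuming the redis module adopts the same allow_security_group_ids argument as the database module; check the module's variables before applying:

  module "redis" {
-   source = "[email protected]:datarockets/infrastructure.git//aws/redis?ref=v0.2.2"
+   source = "[email protected]:datarockets/infrastructure.git//aws/redis?ref=v0.3.0"

    app = var.app
    environment = var.environment

    vpc_id = module.eks.vpc_id
-   eks_private_subnets_cidr_blocks = module.eks.private_cidr_blocks
+   allow_security_group_ids = [module.eks.node_security_group_id]
    redis_subnets = {
      "10.0.31.0/24" = "ca-central-1a"
      "10.0.32.0/24" = "ca-central-1b"
    }
  }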

  11. Run terraform init -upgrade to download the newer versions of modules and providers.

  12. Run terraform validate and fix the errors. They will mostly be related to output variables that moved from one module to another, e.g. module.eks.app_namespace is no longer available and you should use module.kubernetes.app_namespace instead.

An incomplete list of changed output variables:

  • module.eks.app_namespace -> module.kubernetes.app_namespace
  • module.eks.cicd_key_id -> module.eks_cicd_auth.iam_user_key_id
  • module.eks.cicd_key_secret -> module.eks_cicd_auth.iam_user_key_secret
  • module.eks.ecr_repository_urls -> module.ecr.repository_urls
  13. Move resources from the eks module to the ecr module.

List the ECR-related resources with terraform state list | grep module.eks.aws_ecr_repository:

module.eks.aws_ecr_repository.ecr_repository["api"]
module.eks.aws_ecr_repository.ecr_repository["web"]

And the lifecycle policies with terraform state list | grep module.eks.aws_ecr_lifecycle_policy:

module.eks.aws_ecr_lifecycle_policy.keep_last_10["api"]
module.eks.aws_ecr_lifecycle_policy.keep_last_10["web"]

Move:

terraform state mv 'module.eks.aws_ecr_repository.ecr_repository["api"]' 'module.ecr.aws_ecr_repository.repository["api"]'
terraform state mv 'module.eks.aws_ecr_repository.ecr_repository["web"]' 'module.ecr.aws_ecr_repository.repository["web"]'
terraform state mv 'module.eks.aws_ecr_lifecycle_policy.keep_last_10["api"]' 'module.ecr.aws_ecr_lifecycle_policy.keep_last_10["api"]'
terraform state mv 'module.eks.aws_ecr_lifecycle_policy.keep_last_10["web"]' 'module.ecr.aws_ecr_lifecycle_policy.keep_last_10["web"]'
  14. Move resources from the eks module to the eks_cicd_auth module.

terraform state list | grep "module\.eks\..*\.cicd":

module.eks.aws_iam_access_key.cicd
module.eks.aws_iam_policy.cicd
module.eks.aws_iam_user.cicd
module.eks.aws_iam_user_policy_attachment.cicd
module.eks.kubernetes_role.cicd
module.eks.kubernetes_role_binding.cicd

Move:

terraform state mv module.eks.aws_iam_access_key.cicd module.eks_cicd_auth.aws_iam_access_key.cicd
terraform state mv module.eks.aws_iam_policy.cicd module.eks_cicd_auth.aws_iam_policy.cicd
terraform state mv module.eks.aws_iam_user.cicd module.eks_cicd_auth.aws_iam_user.cicd
terraform state mv module.eks.aws_iam_user_policy_attachment.cicd module.eks_cicd_auth.aws_iam_user_policy_attachment.cicd
terraform state mv module.eks.kubernetes_role.cicd module.eks_cicd_auth.kubernetes_role.cicd
terraform state mv module.eks.kubernetes_role_binding.cicd module.eks_cicd_auth.kubernetes_role_binding.cicd
  15. Move the ConfigMap resource, previously managed by the eks module internals, to eks_aws_auth:

terraform state list | grep aws_auth:

module.eks.module.eks.kubernetes_config_map.aws_auth[0]

Move:

terraform state mv 'module.eks.module.eks.kubernetes_config_map.aws_auth[0]' 'module.eks_aws_auth.kubernetes_config_map.aws_auth'
  16. Move the app namespace kubernetes resource from the eks module to the kubernetes module.

terraform state list | grep module.eks.kubernetes_namespace:

module.eks.kubernetes_namespace.app

Move:

terraform state mv module.eks.kubernetes_namespace.app module.kubernetes.kubernetes_namespace.app
  17. Move the EKS cluster IAM role, its policy attachments, and the node group from one internal resource address to another in order to avoid recreation:
terraform state mv 'module.eks.module.eks.aws_iam_role.cluster[0]' 'module.eks.module.eks.aws_iam_role.this[0]'
terraform state mv 'module.eks.module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy[0]' 'module.eks.module.eks.aws_iam_role_policy_attachment.this["arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"]'
terraform state mv 'module.eks.module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSVPCResourceControllerPolicy[0]' 'module.eks.module.eks.aws_iam_role_policy_attachment.this["arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"]'
terraform state mv 'module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["default"]' 'module.eks.module.eks.module.eks_managed_node_group["default"].aws_eks_node_group.this[0]'
  18. Extract the cluster's legacy role name and specify it as an argument for the eks module.

Run a plan for the eks module: terraform plan -target module.eks. There will be a lot of changes, including EKS cluster recreation. You might notice that the cluster is recreated because the cluster's IAM role is recreated, and the IAM role is recreated because its name changed. We can't edit the role's name in place:

  # module.eks.module.eks.aws_iam_role.this[0] must be replaced
+/- resource "aws_iam_role" "this" {
      ~ arn                   = "arn:aws:iam::123456789012:role/app-staging20210526165212002100000002" -> (known after apply)
      ~ assume_role_policy    = jsonencode( # whitespace changes
            {
                Statement = [
                    {
                        Action    = "sts:AssumeRole"
                        Effect    = "Allow"
                        Principal = {
                            Service = "eks.amazonaws.com"
                        }
                        Sid       = "EKSClusterAssumeRole"
                    },
                ]
                Version   = "2012-10-17"
            }
        )
      ~ create_date           = "2021-05-26T16:52:25Z" -> (known after apply)
      ~ id                    = "app-staging20210526165212002100000002" -> (known after apply)
      ~ managed_policy_arns   = [
          - "arn:aws:iam::123456789012:policy/app-staging-elb-sl-role-creation20210526165212000700000001",
          - "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy",
          - "arn:aws:iam::aws:policy/AmazonEKSServicePolicy",
          - "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController",
        ] -> (known after apply)
      ~ name                  = "app-staging20210526165212002100000002" -> "app-staging-cluster" # forces replacement
      ~ name_prefix           = "app-staging" -> (known after apply)
      - tags                  = {} -> null
        # (4 unchanged attributes hidden)
    }

We're interested in the line that forces the replacement:

~ name = "app-staging20210526165212002100000002" -> "app-staging-cluster" # forces replacement

In order to avoid recreation of this role we can add legacy_iam_role_name as an argument to the eks module:

  module "eks" {
  
+   # We can't rename cluster's role name w/o recreation of cluster. The algorithm
...
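
The snippet above is truncated in this release note. A minimal sketch of the idea, assuming legacy_iam_role_name simply takes the generated role name shown in the plan output; verify the exact argument against the module before applying:

  module "eks" {

+   # Keep the historical, auto-generated role name so the cluster's IAM role
+   # (and therefore the cluster itself) is not recreated.
+   legacy_iam_role_name = "app-staging20210526165212002100000002"
  }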

v0.2.2

25 Jul 22:50

nginx-ingress upgraded to 0.10.0 (nginx version 1.21.1)
cert-manager upgraded to 1.4.1

k8s/basic

  • It is now possible to use the service_accounts parameter of the k8s/basic module to configure annotations for service accounts. This is useful for AWS-managed clusters, to link service accounts with IAM roles and policies (see the sketch after this list).
  • cert-manager no longer creates separate ingresses to serve the ACME challenge when acquiring a TLS certificate, since nginx-ingress can't merge different ingresses for the same host without a special setup.
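
A sketch of the service_accounts usage mentioned above. The exact shape of the value is not shown in these notes, so the structure below (a map of service account names to annotation maps) and the role ARN are assumptions; eks.amazonaws.com/role-arn is the standard annotation for linking a service account to an IAM role on EKS:

module "kubernetes" {
  # ...
  service_accounts = {
    app = {
      annotations = {
        # Hypothetical IAM role created for the app's pods.
        "eks.amazonaws.com/role-arn" = "arn:aws:iam::123456789012:role/app-staging-app"
      }
    }
  }
}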

v0.2.1

24 Jun 01:18

k8s/basic

You can mount a secret as a pod container volume using the mount_secrets map in a service definition:

services = {
  app = {
    # ...
    mount_secrets = {
      apple-keys = "/app/keys/apple"
    }
  }
}

secrets = {
  apple-keys = {
    "private_key.p8" = data.aws_secretsmanager_secret_version.apple_private_key.secret_string
  }
}

v0.2.0

06 Jun 21:30

k8s/basic

A number of changes were made to make it possible to set up a cluster step by step and to support clusters managed by AWS EKS.

  • 177a264 and bd28e49
    dcr_credentials, ingresses, and secrets are now optional variables.
  • 12e4151
    Use disable_tls in the ingress configuration to disable TLS and certificate acquisition for a particular ingress (see the sketch after this list).
  • 86e8fa2
    Use nginx_ingress_helm_chart_options to pass options to the nginx-ingress helm chart.
  • c6e6653
    A new output variable host was added. It contains the host of the nginx-ingress deployment's load balancer.
  • 05f3992
    It is possible to create the application namespace outside of the kubernetes module and pass it in as a variable:
module "kubernetes" {
  create_app_namespace = false
  app_namespace = module.eks.app_namespace
}
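
A sketch of the disable_tls flag mentioned in the list above. The overall shape of the ingresses variable is not shown in these notes, so treat the structure below as a hypothetical illustration:

module "kubernetes" {
  # ...
  ingresses = {
    internal = {
      # ...
      disable_tls = true # skip TLS and certificate acquisition for this ingress
    }
  }
}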

aws/eks (new)

You can create kubernetes cluster managed by AWS EKS:

module "eks" {
  source = "[email protected]:datarockets/infrastructure.git//aws/eks?ref=v0.2.0"

  cluster_version = "1.20"

  app = var.app
  environment = var.environment
  azs = ["${var.region}a", "${var.region}b"]

  masters_aws_groups = ["Ops"] # "Ops" is the AWS IAM group. Every user from this group will be added to "system:masters" kubernetes group

  ecr_repositories = ["api", "app"] # AWS ECR repositories
}

Output variables:

  • app_namespace - the namespace created for the app. We need to create it in the aws/eks module since we add a kubernetes role for "cicd" that is able to manage deployments.
  • vpc_id - the EKS cluster is placed in a newly created VPC, and the id of that VPC is returned.
  • cluster_id - the name of the cluster, equal to "${var.app}-${var.environment}". You can use it later in a data aws_eks_cluster_auth block in order to pull an authentication token for the kubernetes provider (see the sketch after this list).
  • private_cidr_blocks and public_cidr_blocks - the EKS cluster VPC has 2 private and 2 public subnets. Private subnets have access to the internet via NAT gateways placed in the public subnets. Public subnets have an Internet Gateway for accessing the internet. You or kubernetes controllers may put pods in either subnet group.
  • ecr_repository_urls - a map of container registry repository names to repository urls.
  • cicd_key_id - the AWS_ACCESS_KEY_ID of the newly created cicd user. The cicd user has policies allowing it to change deployments in the app namespace. It is recommended to use the cicd user's credentials on your CI/CD server.
  • cicd_key_secret - the AWS_SECRET_ACCESS_KEY of the cicd user.
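
A sketch of using cluster_id to configure the kubernetes provider, as mentioned in the cluster_id item above. The data sources and provider arguments are standard Terraform ones; the wiring itself is an illustration, not taken from the module's documentation:

data "aws_eks_cluster" "this" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "this" {
  # Pulls a short-lived authentication token for the cluster.
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}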

aws/postgresql (new)

You can create a single RDS PostgreSQL database instance:

module "database" {
  source = "[email protected]:datarockets/infrastructure.git//aws/postgresql?ref=v0.2.0"

  eks = {
    cluster_name = "${var.app}-${var.environment}"
  }

  app = var.app
  environment = var.environment
  region = var.region
  vpc_id = module.eks.vpc_id
  eks_private_subnets_cidr_blocks = module.eks.private_cidr_blocks # database instance will be placed in private subnets
  database_subnets = {
    "10.0.21.0/24" = "ca-central-1a" # subnet cidr blocks and availability zones for subnets database instance will be placed to
    "10.0.22.0/24" = "ca-central-1b"
  }
}

A new security group is added that allows pods from EKS private subnets to access the database server.

Output variables:

  • database - an object containing the host, port, username, and database keys. A new regular user is created for the app; this user will have full access only to the single newly created database (see the sketch after this list).
  • database_password
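
A sketch of wiring these outputs into the k8s/basic secrets map, in the spirit of the secrets examples elsewhere in these notes; the environment variable names are assumptions:

module "kubernetes" {
  # ...
  secrets = {
    database = {
      DB_HOST     = module.database.database.host
      DB_PORT     = module.database.database.port
      DB_USERNAME = module.database.database.username
      DB_NAME     = module.database.database.database
      DB_PASSWORD = module.database.database_password
    }
  }
}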

aws/redis (new)

You can create a single ElastiCache Redis instance:

module "redis" {
  source = "[email protected]:datarockets/infrastructure.git//aws/redis?ref=v0.2.0"

  app = var.app
  environment = var.environment

  vpc_id = module.eks.vpc_id # Redis instance will be placed in particular VPC and particular private subnets
  eks_private_subnets_cidr_blocks = module.eks.private_cidr_blocks
  redis_subnets = {
    "10.0.31.0/24" = "ca-central-1a"
    "10.0.32.0/24" = "ca-central-1b"
  }
}

A new security group is added that allows pods from EKS private subnets to access the Redis server.

Migration from 0.1.0

Migrating namespace:

It is now possible to create the namespace outside of the k8s/basic module. We changed the address of the kubernetes_namespace resource, so you have to move the existing record in the state to its new address:

terraform state mv module.kubernetes.kubernetes_namespace.application 'module.kubernetes.kubernetes_namespace.app[0]'

Migrating other resources:

terraform state mv module.kubernetes.kubernetes_namespace.cert-manager module.kubernetes.module.dependencies.kubernetes_namespace.cert-manager

terraform state mv module.kubernetes.helm_release.cert-manager module.kubernetes.module.dependencies.helm_release.cert-manager

terraform state mv module.kubernetes.helm_release.nginx-ingress module.kubernetes.module.dependencies.helm_release.nginx-ingress

terraform state mv module.kubernetes.kubernetes_secret.docker-config 'module.kubernetes.module.cluster.kubernetes_secret.docker-config["default"]'

terraform state mv module.kubernetes.kubernetes_manifest.cert-issuer-letsencrypt module.kubernetes.module.cluster.kubernetes_manifest.cert-issuer-letsencrypt

terraform state mv 'module.kubernetes.kubernetes_deployment.deployment' 'module.kubernetes.module.cluster.kubernetes_deployment.deployment'

terraform state mv 'module.kubernetes.kubernetes_service.service' 'module.kubernetes.module.cluster.kubernetes_service.service'

terraform state mv 'module.kubernetes.kubernetes_secret.secret' 'module.kubernetes.module.cluster.kubernetes_secret.secret'

terraform state mv 'module.kubernetes.kubernetes_service_account.service_account' 'module.kubernetes.module.cluster.kubernetes_service_account.service_account'

Migrating ingresses:

Run terraform state list | grep module.kubernetes.kubernetes_ingress. For each ingress run:

terraform state mv 'module.kubernetes.kubernetes_ingress.ingress["<ingress_name>"]' 'module.kubernetes.module.ingress["<ingress_name>"].kubernetes_ingress.ingress'

v0.1.0

29 Apr 18:57

do/k8s

  • Output the database cluster port; it is a custom one, not 5432.

Example:

module "digitalocean" {
  source = "[email protected]:datarockets/infrastructure//do/k8s?ref=v0.1.0"
  # ...
}

module "kubernetes" {
  source = "[email protected]:datarockets/infrastructure//k8s/basic?ref=v0.1.0"
  # ...
  secrets = {
    database = {
      # ...
      DB_PORT = module.digitalocean.db_port
    }
  }
}

k8s/basic

  • FIX 3d5acfc
    The Kubernetes cluster is now able to pull images from private registries, such as the one created by the do/k8s module.
  • BREAKING 33a1050
    Add the ability to deploy multiple apps to a cluster: the project attribute is renamed to app and we add it as a label to the namespace, deployments, pods, services, and ingresses. E.g. you can now select all the pods by label: kubectl get pods -l app=app_name.
  • BREAKING ff55503
    init_command is renamed to init_container. We no longer set the same environment variables for init containers; now they need to be listed separately. This is helpful for setting a different value for the number of database connections in the pool, since DigitalOcean database clusters limit the number of connections to 22 by default. Also, we can customize the image of the init container if it differs from the main one.
  • 7584dd2
    It's possible to specify custom labels for deployments, pods, and services.
  • d3d75ca
    We can specify a custom service account assigned to pods, so you can bind a role to this service account, making it possible for code in the pod to make queries to the kubernetes API (see the sketch below).
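
A sketch of binding a role to such a custom service account with the Terraform kubernetes provider; the namespace, service account name, and permissions are assumptions chosen for illustration:

resource "kubernetes_role" "pod_reader" {
  metadata {
    name      = "pod-reader"
    namespace = "app"
  }

  rule {
    api_groups = [""]
    resources  = ["pods"]
    verbs      = ["get", "list", "watch"]
  }
}

resource "kubernetes_role_binding" "app_pod_reader" {
  metadata {
    name      = "app-pod-reader"
    namespace = "app"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Role"
    name      = kubernetes_role.pod_reader.metadata[0].name
  }

  # Bind the role to the custom service account assigned to the pods.
  subject {
    kind      = "ServiceAccount"
    name      = "app"
    namespace = "app"
  }
}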