# terraform-opensearch

Terraform module to set up all resources needed for an AWS OpenSearch Service domain.

Name | Version |
---|---|
terraform | >= 1.3.9, < 1.6.0 |
aws | ~> 5.0 |

Name | Version |
---|---|
aws | ~> 5.0 |

No modules.

Name | Type |
---|---|
aws_cloudwatch_log_group.cwl_application | resource |
aws_cloudwatch_log_group.cwl_index | resource |
aws_cloudwatch_log_group.cwl_search | resource |
aws_cloudwatch_log_resource_policy.cwl_resource_policy | resource |
aws_elasticsearch_domain.es | resource |
aws_security_group.sg | resource |
aws_iam_policy_document.cwl_policy | data source |
aws_region.current | data source |
aws_subnet.private | data source |

Name | Description | Type | Default | Required |
---|---|---|---|---|
instance_type | Instance type to use for the OpenSearch domain | string | n/a | yes |
name | Name to use for the OpenSearch domain | string | n/a | yes |
volume_size | EBS volume size (in GB) to use for the OpenSearch domain | number | n/a | yes |
application_logging_enabled | Whether to enable OpenSearch application logs (error) in Cloudwatch | bool | false | no |
availability_zone_count | Number of Availability Zones for the domain to use with `zone_awareness_enabled`. Valid values: 2 or 3. Automatically configured through number of instances/subnets available if not set | number | null | no |
cognito_enabled | Whether to enable Cognito for authentication in Kibana | bool | false | no |
cognito_identity_pool_id | Required when `cognito_enabled` is enabled: ID of the Cognito Identity Pool to use | string | null | no |
cognito_role_arn | Required when `cognito_enabled` is enabled: ARN of the IAM role that has the `AmazonESCognitoAccess` policy attached | string | null | no |
cognito_user_pool_id | Required when `cognito_enabled` is enabled: ID of the Cognito User Pool to use | string | null | no |
custom_endpoint | The domain name to use as custom endpoint for Elasticsearch | string | null | no |
custom_endpoint_certificate_arn | ARN of the ACM certificate to use for the custom endpoint. Required when custom endpoint is set along with enabling `endpoint_enforce_https` | string | null | no |
dedicated_master_count | Number of dedicated master nodes in the domain (can be 3 or 5) | number | 3 | no |
dedicated_master_enabled | Whether dedicated master nodes are enabled for the domain. Automatically enabled when `warm_enabled = true` | bool | false | no |
dedicated_master_type | Instance type of the dedicated master nodes in the domain | string | "t3.small.search" | no |
encrypt_at_rest | Whether to enable encryption at rest for the cluster. Changing this on an existing cluster will force a new resource! | bool | true | no |
encrypt_at_rest_kms_key_id | The KMS key id to encrypt the OpenSearch domain with. If not specified then it defaults to using the `aws/es` service KMS key | string | null | no |
endpoint_enforce_https | Whether or not to require HTTPS | bool | true | no |
endpoint_tls_security_policy | The name of the TLS security policy that needs to be applied to the HTTPS endpoint. Valid values: `Policy-Min-TLS-1-0-2019-07` and `Policy-Min-TLS-1-2-2019-07` | string | "Policy-Min-TLS-1-2-2019-07" | no |
ephemeral_list | m3 and r3 are supported by aws using ephemeral storage but are a legacy instance type | list(string) | [ | no |
instance_count | Size of the OpenSearch domain | number | 1 | no |
logging_enabled | Whether to enable OpenSearch slow logs (index & search) in Cloudwatch | bool | false | no |
logging_retention | How many days to retain OpenSearch logs in Cloudwatch | number | 30 | no |
node_to_node_encryption | Whether to enable node-to-node encryption. Changing this on an existing cluster will force a new resource! | bool | true | no |
options_indices_fielddata_cache_size | Sets the `indices.fielddata.cache.size` advanced option. Specifies the percentage of heap space that is allocated to fielddata | number | null | no |
options_indices_query_bool_max_clause_count | Sets the `indices.query.bool.max_clause_count` advanced option. Specifies the maximum number of allowed boolean clauses in a query | number | 1024 | no |
options_override_main_response_version | Whether to enable compatibility mode when creating an OpenSearch domain. Because certain Elasticsearch OSS clients and plugins check the cluster version before connecting, compatibility mode sets OpenSearch to report its version as 7.10 so these clients continue to work | bool | true | no |
options_rest_action_multi_allow_explicit_index | Sets the `rest.action.multi.allow_explicit_index` advanced option. When set to `false`, OpenSearch will reject requests that have an explicit index specified in the request body | bool | true | no |
search_version | Version of the OpenSearch domain | string | "OpenSearch_1.1" | no |
security_group_ids | Extra security group IDs to attach to the OpenSearch domain. Note: a default SG is already created and exposed via outputs | list(string) | [] | no |
snapshot_start_hour | Hour during which an automated daily snapshot is taken of the OpenSearch indices | number | 3 | no |
subnet_ids | Required if `vpc_id` is specified: Subnet IDs for the VPC enabled OpenSearch domain endpoints to be created in | list(string) | [] | no |
tags | Optional tags | map(string) | {} | no |
volume_iops | Required if `volume_type="io1"` or `"gp3"`: Amount of provisioned IOPS for the EBS volume | number | 0 | no |
volume_type | EBS volume type to use for the OpenSearch domain | string | "gp2" | no |
vpc_id | VPC ID where to deploy the OpenSearch domain. If set, you also need to specify `subnet_ids`. If not set, the module creates a public domain | string | null | no |
warm_count | Number of warm nodes (2 - 150) | number | 2 | no |
warm_enabled | Whether to enable warm storage | bool | false | no |
warm_type | Instance type of the warm nodes | string | "ultrawarm1.medium.search" | no |
zone_awareness_enabled | Whether to enable zone awareness or not. If not set, multi-AZ is enabled by default and configured through number of instances/subnets available | bool | null | no |

Name | Description |
---|---|
arn | ARN of the OpenSearch domain |
domain_id | ID of the OpenSearch domain |
domain_name | Name of the OpenSearch domain |
domain_region | Region of the OpenSearch domain |
endpoint | DNS endpoint of the OpenSearch domain |
kibana_endpoint | DNS endpoint of Kibana |
sg_id | ID of the OpenSearch security group |

```hcl
module "opensearch" {
  source = "github.com/skyscrapers/terraform-opensearch//opensearch?ref=11.3.0"

  name           = "logs-${terraform.workspace}-es"
  instance_count = 3
  instance_type  = "m5.large.elasticsearch"
  volume_size    = 100
  vpc_id         = data.terraform_remote_state.networking.outputs.vpc_id
  subnet_ids     = data.terraform_remote_state.networking.outputs.private_db_subnets
}
```

```hcl
data "aws_iam_policy_document" "opensearch" {
  statement {
    effect = "Allow"

    principals {
      type        = "AWS"
      identifiers = [aws_iam_user.es_user.arn]
    }

    actions   = ["es:*"]
    resources = ["${module.opensearch.arn}/*"]
  }
}

resource "aws_elasticsearch_domain_policy" "opensearch" {
  domain_name     = module.opensearch.domain_name
  access_policies = data.aws_iam_policy_document.opensearch.json
}
```
This module by default creates CloudWatch Log Groups & IAM permissions for Elasticsearch slow logging (search & index), but these logs are not enabled by default. You can control the logging behavior via the `logging_enabled` and `logging_retention` parameters. When enabling this, make sure you also enable it on the Elasticsearch side, following the AWS documentation. You can also enable Elasticsearch error logs via `application_logging_enabled = true`.
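As a sketch, enabling both slow logs and error logs with a shorter retention might look like this (the values shown are illustrative, building on the usage example above):

```hcl
module "opensearch" {
  source = "github.com/skyscrapers/terraform-opensearch//opensearch?ref=11.3.0"

  name           = "logs-example"
  instance_type  = "m5.large.search"
  instance_count = 3
  volume_size    = 100

  # Publish slow logs (index & search) and error logs to CloudWatch,
  # keeping them for 14 days
  logging_enabled             = true
  application_logging_enabled = true
  logging_retention           = 14
}
```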
For a CloudWatch based solution, check out our `terraform-cloudwatch` modules. For a Kubernetes & Prometheus based solution, see the `elasticsearch_k8s_monitoring` module below.
This module will not work without the ES default service-linked role `AWSServiceRoleForAmazonElasticsearchService`. This role needs to be created per AWS account, so you will need to add it if it's not present yet (just once per account). Here is a code sample you can use:

```hcl
resource "aws_iam_service_linked_role" "es" {
  aws_service_name = "es.amazonaws.com"
}
```
This module can be used to create your own snapshots of OpenSearch to S3, using Snapshot Management. It can also deploy a PrometheusRule for monitoring snapshot success.

**Important**: this requires OpenSearch >= 2.5!

Name | Version |
---|---|
terraform | >= 1.3.9 |
aws | ~> 5.0 |
kubernetes | ~> 2.23 |
opensearch | ~> 2.2 |

Name | Version |
---|---|
aws | ~> 5.0 |
kubernetes | ~> 2.23 |
opensearch | ~> 2.2 |

Name | Source | Version |
---|---|---|
s3_snapshot | terraform-aws-modules/s3-bucket/aws | ~> 3.15 |

Name | Type |
---|---|
aws_iam_role.snapshot_create | resource |
aws_iam_role_policy.snapshot_create | resource |
kubernetes_manifest.prometheusrule | resource |
opensearch_sm_policy.snapshot | resource |
opensearch_snapshot_repository.repo | resource |
aws_iam_policy_document.s3_snapshot_bucket | data source |
aws_iam_policy_document.snapshot_create | data source |
aws_iam_policy_document.snapshot_create_assume | data source |

Name | Description | Type | Default | Required |
---|---|---|---|---|
name | Name for the snapshot system, S3 bucket, etc. | string | n/a | yes |
aws_kms_key_arn | ARN of the CMK used for S3 Server Side Encryption. When specified, we'll use the `aws:kms` SSE algorithm. When not specified, falls back to using AES256 | string | null | no |
bucket_key_enabled | Whether to use Amazon S3 Bucket Keys for encryption, which reduces API costs | bool | false | no |
create_cron_expression | The cron schedule used to create snapshots | string | "0 0 * * *" | no |
create_time_limit | Sets the maximum time to wait for snapshot creation to finish. If `time_limit` is longer than the scheduled time interval for taking snapshots, no scheduled snapshots are taken until `time_limit` elapses. For example, if `time_limit` is set to 35 minutes and snapshots are taken every 30 minutes starting at midnight, the snapshots at 00:00 and 01:00 are taken, but the snapshot at 00:30 is skipped | string | "1h" | no |
custom_sm_policy | Set this variable when you want to override the generated SM policy JSON with your own. Make sure to correctly set `snapshot_config.repository` to the same value as `var.name` (the bucket name) | string | null | no |
delete_cron_expression | The cron schedule used to delete snapshots | string | "0 2 * * *" | no |
delete_time_limit | Sets the maximum time to wait for snapshot deletion to finish | string | "1h" | no |
extra_bucket_policy | Extra bucket policy to attach to the S3 bucket (JSON string formatted) | string | null | no |
indices | The names of the indexes in the snapshot. Multiple index names are separated by `,`. Supports wildcards (`*`) | string | "*" | no |
max_age | The maximum time a snapshot is retained in S3 | string | "14d" | no |
max_count | The maximum number of snapshots retained in S3 | number | 400 | no |
min_count | The minimum number of snapshots retained in S3 | number | 1 | no |
prometheusrule_alert_labels | Additional labels to add to the PrometheusRule alert | map(string) | {} | no |
prometheusrule_enabled | Whether to deploy a PrometheusRule for monitoring the snapshots. Requires the prometheus-operator and elasticsearch-exporter to be deployed | bool | true | no |
prometheusrule_labels | Additional K8s labels to add to the PrometheusRule | map(string) | { | no |
prometheusrule_namespace | Namespace where to deploy the PrometheusRule | string | "infrastructure" | no |
prometheusrule_query_period | Period to apply to the PrometheusRule queries. Make sure this is bigger than the `create_cron_expression` interval | string | "32h" | no |
prometheusrule_severity | Severity of the PrometheusRule alert. Usual values are: `info`, `warning` and `critical` | string | "warning" | no |
s3_force_destroy | Whether to force-destroy and empty the S3 bucket when destroying this Terraform module. WARNING: Not recommended! | bool | false | no |
s3_replication_configuration | Replication configuration block for the S3 bucket. See https://github.com/terraform-aws-modules/terraform-aws-s3-bucket/tree/v3.15.1/examples/s3-replication for an example | any | {} | no |

No outputs.

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
    opensearch = {
      source = "opensearch-project/opensearch"
    }
  }
}

provider "opensearch" {
  url                 = module.opensearch.endpoint
  aws_region          = var._aws_provider_region
  aws_profile         = var._aws_provider_profile
  aws_assume_role_arn = "arn:aws:iam::${var._aws_provider_account_id}:role/${var._aws_provider_assume_role}"
}

module "opensearch_snapshots" {
  source = "github.com/skyscrapers/terraform-opensearch//opensearch-backup?ref=11.3.0"

  name = "${module.opensearch.domain_name}-snapshots"
}
```
This module deploys our `elasticsearch/monitoring` chart on Kubernetes.
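A minimal invocation might look like the sketch below; the `//elasticsearch_k8s_monitoring` source path and the IAM role ARN are assumptions, so adjust them to your setup:

```hcl
module "elasticsearch_monitoring" {
  source = "github.com/skyscrapers/terraform-opensearch//elasticsearch_k8s_monitoring?ref=11.3.0"

  elasticsearch_domain_name   = module.opensearch.domain_name
  elasticsearch_domain_region = module.opensearch.domain_region
  elasticsearch_endpoint      = module.opensearch.endpoint
  kubernetes_namespace        = "infrastructure"

  # Placeholder ARN: point this at a role the CloudWatch exporter can use
  # via IRSA or kube2iam (see var.irsa_enabled)
  cloudwatch_exporter_role_arn = "arn:aws:iam::123456789012:role/cloudwatch-exporter"
}
```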

Name | Version |
---|---|
terraform | >= 1.3.9, < 1.6.0 |
aws | ~> 5.0 |
helm | ~> 2.11 |
kubernetes | ~> 2.23 |

Name | Version |
---|---|
helm | ~> 2.11 |

No modules.

Name | Type |
---|---|
helm_release.elasticsearch_monitoring | resource |

Name | Description | Type | Default | Required |
---|---|---|---|---|
cloudwatch_exporter_role_arn | IAM role ARN to use for the CloudWatch exporter. Used via either IRSA or kube2iam (see `var.irsa_enabled`) | string | n/a | yes |
elasticsearch_domain_name | Domain name of the AWS Elasticsearch domain | string | n/a | yes |
elasticsearch_domain_region | Region of the AWS Elasticsearch domain | string | n/a | yes |
elasticsearch_endpoint | Endpoint of the AWS Elasticsearch domain | string | n/a | yes |
kubernetes_namespace | Kubernetes namespace where to deploy the `skyscrapers/elasticsearch-monitoring` chart | string | n/a | yes |
cw_exporter_memory | Memory request and limit for the prometheus-cloudwatch-exporter pod | string | "160Mi" | no |
elasticsearch_monitoring_chart_version | elasticsearch-monitoring Helm chart version to deploy | string | "1.11.2" | no |
es_exporter_memory | Memory request and limit for the prometheus-elasticsearch-exporter pod | string | "48Mi" | no |
force_helm_update | Modify this variable to trigger an update on all Helm charts (you can set any value). Due to current limitations of the Helm provider, it doesn't detect drift on the deployed values | string | "1" | no |
irsa_enabled | Whether to use IAM Roles for Service Accounts. When `true`, the Cloudwatch exporter's SA is appropriately annotated. If `false` a kube2iam Pod annotation is set instead | bool | true | no |
sla | SLA of the monitored Elasticsearch cluster. Will default to the k8s cluster SLA if omitted | string | null | no |
system_nodeSelector | nodeSelector to add to the kubernetes pods. Set to `null` to disable | map(map(string)) | { | no |
system_tolerations | Tolerations to add to the kubernetes pods. Set to `null` to disable | any | { | no |

No outputs.
This module deploys an Ingress with external authentication on Kubernetes to reach the AWS Elasticsearch Kibana endpoint.
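For illustration, wiring up the required inputs could look like the following sketch; the `//kibana_ingress` source path and all hostnames/URLs are placeholders, not values confirmed by this repo:

```hcl
module "kibana_ingress" {
  # Module sub-path is an assumption; adjust to the actual path in this repo
  source = "github.com/skyscrapers/terraform-opensearch//kibana_ingress?ref=11.3.0"

  elasticsearch_domain_name = module.opensearch.domain_name
  elasticsearch_endpoint    = module.opensearch.endpoint
  kubernetes_namespace      = "infrastructure"
  ingress_host              = "kibana.example.com"

  # External auth endpoints (e.g. oauth2-proxy); these URLs are placeholders
  ingress_auth_url    = "https://auth.example.com/oauth2/auth"
  ingress_auth_signin = "https://auth.example.com/oauth2/start?rd=$escaped_request_uri"
}
```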

Name | Version |
---|---|
terraform | >= 0.12 |

Name | Version |
---|---|
kubernetes | n/a |

Name | Description | Type | Default | Required |
---|---|---|---|---|
elasticsearch_domain_name | Domain name of the AWS Elasticsearch domain | string | n/a | yes |
elasticsearch_endpoint | Endpoint of the AWS Elasticsearch domain | string | n/a | yes |
ingress_auth_signin | Value to set for the `nginx.ingress.kubernetes.io/auth-signin` annotation | string | n/a | yes |
ingress_auth_url | Value to set for the `nginx.ingress.kubernetes.io/auth-url` annotation | string | n/a | yes |
ingress_host | Hostname to use for the Ingress | string | n/a | yes |
kubernetes_namespace | Kubernetes namespace where to deploy the Ingress | string | n/a | yes |
ingress_configuration_snippet | Value to set for the `nginx.ingress.kubernetes.io/configuration-snippet` annotation | string | null | no |

No outputs.
**This module is no longer maintained!**

This module deploys keycloak-gatekeeper as OIDC proxy on Kubernetes to reach the AWS Elasticsearch Kibana endpoint.
Name | Description | Type | Default | Required |
---|---|---|---|---|
elasticsearch_endpoint | Endpoint of the AWS Elasticsearch domain | string | n/a | yes |
elasticsearch_domain_name | Domain name of the AWS Elasticsearch domain | string | n/a | yes |
kubernetes_namespace | Kubernetes namespace where to deploy the keycloak-gatekeeper proxy chart | string | n/a | yes |
gatekeeper_image | Docker image to use for the keycloak-gatekeeper deployment | string | "keycloak/keycloak-gatekeeper:6.0.1" | no |
gatekeeper_ingress_host | Hostname to use for the Ingress | string | n/a | yes |
gatekeeper_discovery_url | URL for OpenID autoconfiguration | string | n/a | yes |
gatekeeper_client_id | Client ID for OpenID server | string | n/a | yes |
gatekeeper_client_secret | Client secret for OpenID server | string | n/a | yes |
gatekeeper_oidc_groups | Groups that will be granted access. When using Dex with GitHub, teams are defined in the form `<gh_org>:<gh_team>`, for example `skyscrapers:k8s-admins` | string | n/a | yes |
gatekeeper_timeout | Upstream timeouts to use for the proxy | string | "500s" | no |
gatekeeper_extra_args | Additional keycloak-gatekeeper command line arguments | list(string) | [] | no |

Name | Description |
---|---|
callback_uri | Callback URI. You might need to register this to your OIDC provider (like CoreOS Dex) |
We removed the custom S3 backup mechanism (via Lambda) from the `opensearch` module. As an alternative, we now offer a new `opensearch-backup` module, which relies on the OpenSearch Snapshot Management API to create snapshots to S3.

If you want to upgrade without destroying your old S3 snapshot bucket, we recommend removing the bucket from Terraform's state and re-importing it into the new backup module. For example, consider code like this:

```hcl
module "opensearch" {
  source = "github.com/skyscrapers/terraform-opensearch//opensearch?ref=11.3.0"
  ...
}

module "opensearch_backup" {
  source = "github.com/skyscrapers/terraform-opensearch//opensearch-backup?ref=11.3.0"

  name = "${module.opensearch.domain_name}-snapshot"
}
```
Then you can migrate your snapshots S3 bucket like this:

```shell
terraform state rm 'module.opensearch.aws_s3_bucket.snapshot[0]'
terraform import 'module.opensearch_backup.module.s3_snapshot.aws_s3_bucket.this[0]' "<opensearch_domain_name>-snapshot"
```
Also make sure to set `var.name` of this module to `<opensearch_domain_name>-snapshot`! Alternatively, you can just let the module create a new bucket.
In the `elasticsearch_k8s_monitoring` module, the variables `system_tolerations` and `system_nodeSelector` have been added to isolate the monitoring on a dedicated system node pool. If you don't want this, you can override these variables to `null` to disable them.
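To opt out, a sketch could look like this (the module source path is an assumption and the other required inputs are elided):

```hcl
module "elasticsearch_monitoring" {
  # Source path is an assumption; see the elasticsearch_k8s_monitoring section above
  source = "github.com/skyscrapers/terraform-opensearch//elasticsearch_k8s_monitoring?ref=11.3.0"

  # ...required inputs as documented in the elasticsearch_k8s_monitoring section...

  # Don't pin the monitoring pods to a dedicated system node pool
  system_nodeSelector = null
  system_tolerations  = null
}
```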
In the `opensearch` module, the `s3_snapshots_schedule_expression` variable has been replaced with `s3_snapshots_schedule_period`. Instead of a cron expression, we only allow specifying a period in hours, which will be used as a `rate(x hours)`.
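For example (illustrative values; the old cron expression shown is hypothetical):

```hcl
# Before (6.x style, removed):
# s3_snapshots_schedule_expression = "cron(0 3 * * ? *)"

# After: snapshots are scheduled as rate(12 hours)
s3_snapshots_schedule_period = 12
```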
This change migrates the `elasticsearch` module to `opensearch`. This is mostly a cosmetic change, however there are several breaking things to note:
- Security Group description is updated, which would normally trigger a destroy/recreate. However, existing setups won't be affected due to an ignore lifecycle
- Variables `project` and `environment` have been removed. Only the `name` variable is now used. For existing setups, you can set `name = "<myproject>-<myenvironment>-<oldname>"` to retain the original "name".
- CloudWatch Log Groups will be destroyed and recreated using the new name. If you wish to keep your older logs, it's best to remove the existing Log Groups from the TF state:

  ```shell
  terraform state rm module.elasticsearch.aws_cloudwatch_log_group.cwl_index
  terraform state rm module.elasticsearch.aws_cloudwatch_log_group.cwl_search
  terraform state rm module.elasticsearch.aws_cloudwatch_log_group.cwl_application
  ```
- Variable `elasticsearch_version` has been renamed to `search_version`, with default value `OpenSearch_1.1`
- We no longer merge the `tags` variable with our own hardcoded defaults (`Environment`, `Project`, `Name`); all tags need to be passed through the `tags` variable and/or through the `default_tags` provider setting
- Updated list of instance types with NVMe SSD storage
The backup behavior of this module changed considerably between versions 6.0.0 and 7.0.0:

- Replace the `snapshot_bucket_enabled` variable with `s3_snapshots_enabled`
  - Note: this will also enable the Lambda for automated backups
  - If you just want to keep the bucket, you can remove it from the Terraform state and manage it outside the module:

    ```shell
    terraform state rm 'aws_s3_bucket.snapshot[0]'
    ```
- The IAM role for taking snapshots has been renamed. If you want to keep the old role too, you should remove it from the Terraform state; otherwise just let the module destroy the old role and create a new one:

  ```shell
  terraform state rm 'module.registrations.aws_iam_role.role[0]'
  ```

Also note that some default values for variables have been changed, mostly related to encryption. If this triggers an unwanted change, you can override it by explicitly setting the variable with its old value.