---
page_title: "Sample Project"
---

# Sample Project
Use the Confluent Cloud Terraform provider to automate the workflow for creating a Service Account, a Confluent Cloud environment, a Kafka cluster, and Topics. Also, you can use this provider to assign permissions (ACLs) that enable access to the topics you create.
In this guide, you will:
- Get a Confluent Cloud API Key
- Run Terraform to create your Kafka cluster
- Inspect your automatically created resources
- Run Terraform to create a Kafka topic and ACLs
- Clean up and delete resources
-> Note: The Confluent Cloud Terraform provider is available in a Preview Program for early adopters. Preview features are introduced to gather customer feedback. This feature should be used only for evaluation and non-production testing purposes, or to provide feedback to Confluent, particularly as it becomes more widely available in follow-on editions.
Preview Program features are intended for evaluation use in development and testing environments only, and not for production use. The warranty, SLA, and Support Services provisions of your agreement with Confluent do not apply to Preview Program features. Preview Program features are considered to be a Proof of Concept as defined in the Confluent Cloud Terms of Service. Confluent may discontinue providing preview releases of the Preview Program features at any time in Confluent’s sole discretion.
!> Warning: Early Access versions of the Confluent Cloud Terraform Provider (versions 0.1.0 and 0.2.0) are deprecated.
## Prerequisites

- A Confluent Cloud account. Sign up here.
## Get a Confluent Cloud API key

The following steps show how to get the Confluent Cloud API key and secret that you need to access Confluent Cloud programmatically.
-> Note: When you create the Cloud API key, you must select Global access.
-  Create a Cloud API key and secret by using the Confluent Cloud Console or the Confluent CLI. They're required for creating any Confluent Cloud resources.

   If you're using the Confluent CLI, the following command creates your API key and secret:

   ```shell
   confluent api-key create --resource "cloud"
   ```

   Save your API key and secret in a secure location.
-  Run the following commands to set the `CONFLUENT_CLOUD_API_KEY` and `CONFLUENT_CLOUD_API_SECRET` environment variables:

   ```shell
   export CONFLUENT_CLOUD_API_KEY="<cloud_api_key>"
   export CONFLUENT_CLOUD_API_SECRET="<cloud_api_secret>"
   ```

   -> Note: Quotation marks are required around the API key and secret strings.

   The provider uses these environment variables to authenticate to Confluent Cloud.
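A quick sanity check that both variables are actually set can save a failed Terraform run. A minimal, POSIX-shell sketch (the placeholder values and error message are ours, not from the provider; skip the `export` lines if you already set real values):

```shell
# Sketch: export placeholder credentials, then verify both are non-empty
# before invoking Terraform. Replace the placeholders with your real values.
export CONFLUENT_CLOUD_API_KEY="<cloud_api_key>"
export CONFLUENT_CLOUD_API_SECRET="<cloud_api_secret>"

if [ -z "$CONFLUENT_CLOUD_API_KEY" ] || [ -z "$CONFLUENT_CLOUD_API_SECRET" ]; then
  echo "error: Confluent Cloud credentials are not set" >&2
  exit 1
fi
echo "credentials present"
```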
## Create a service account, an environment, and a Kafka cluster

Run Terraform to create a service account and an environment that has a Kafka cluster.
-  Create a new file named `main.tf` and copy the following Terraform template into it.

   ```hcl
   # Example for using Confluent Cloud https://docs.confluent.io/cloud/current/api.html
   # that creates multiple resources: a service account, an environment,
   # a basic cluster, a topic, and 2 ACLs.

   # Configure Confluent Cloud provider
   terraform {
     required_providers {
       confluentcloud = {
         source  = "confluentinc/confluentcloud"
         version = "0.4.0"
       }
     }
   }

   provider "confluentcloud" {}

   resource "confluentcloud_service_account" "test-sa" {
     display_name = "test_sa"
     description  = "description for test_sa"
   }

   resource "confluentcloud_environment" "test-env" {
     display_name = "test_env"
   }

   resource "confluentcloud_kafka_cluster" "test-basic-cluster" {
     display_name = "test_cluster"
     availability = "SINGLE_ZONE"
     cloud        = "GCP"
     region       = "us-central1"
     basic {}

     environment {
       id = confluentcloud_environment.test-env.id
     }
   }
   ```
-  Run the following command to initialize the Confluent Cloud Terraform provider:

   ```shell
   terraform init
   ```
-  Run the following command to create the plan:

   ```shell
   terraform plan -out=tfplan_add_sa_env_and_cluster
   ```
-  Run the following command to apply the plan and create the cloud resources:

   ```shell
   terraform apply tfplan_add_sa_env_and_cluster
   ```

   Your output should resemble:

   ```
   confluentcloud_service_account.test-sa: Creating...
   confluentcloud_environment.test-env: Creating...
   confluentcloud_environment.test-env: Creation complete after 1s [id=env-***] <--- the environment's ID
   confluentcloud_kafka_cluster.test-basic-cluster: Creating...
   confluentcloud_service_account.test-sa: Creation complete after 1s [id=sa-***] <--- the service account's ID
   confluentcloud_kafka_cluster.test-basic-cluster: Still creating... [10s elapsed]
   confluentcloud_kafka_cluster.test-basic-cluster: Creation complete after 14s [id=lkc-***] <--- the Kafka cluster's ID

   Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
   ```
Terraform creates a `test_sa` service account and a `test_env` environment that has a Kafka cluster named `test_cluster`.

You can find the created resources (and their IDs, `sa-***`, `env-***`, and `lkc-***`, from the Terraform output) in both the Cloud Console and the Confluent CLI.
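The resource IDs in the apply log can also be pulled out with standard shell tools, which is handy for scripting later steps. A sketch against a saved copy of the sample log (the IDs below are placeholders we made up, not real resources):

```shell
# Extract every [id=...] value from a saved `terraform apply` log.
# The log text mirrors the sample output in this guide; IDs are placeholders.
log='confluentcloud_environment.test-env: Creation complete after 1s [id=env-abc12]
confluentcloud_service_account.test-sa: Creation complete after 1s [id=sa-xyz99]
confluentcloud_kafka_cluster.test-basic-cluster: Creation complete after 14s [id=lkc-odgpo]'

# sed keeps only the text between "[id=" and the closing "]".
printf '%s\n' "$log" | sed -n 's/.*\[id=\([^]]*\)\].*/\1/p'
```

Against a real run, you would feed the captured apply output into the same `sed` filter instead of the inline sample.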
## Create a Kafka topic and ACLs

In previous steps, you used the Confluent Cloud Provider to create a service account, an environment, and a Kafka cluster, but your cluster doesn't have any topics. In this step, you create an API key to access the Kafka cluster, and you use it in a plan to create a Kafka topic and related ACLs that authorize access.
-> Important: You must manually provision API keys for the service account so that it can authenticate with the cluster. ACL management covers only authorization, not authentication, and is a manual step after the Kafka cluster is created.
The following steps show how to create a Kafka API key and use it in a Terraform file to create a topic and its related ACLs.
Create an API key and secret for the Kafka cluster by using the Confluent Cloud Console or the Confluent CLI. The Kafka API key is distinct from the Cloud API key and is required for creating Kafka topics and ACLs.
The following steps show how to create a Kafka API key by using the Cloud Console.
-  Click into the test_env environment, and then click test_cluster.

-  In the navigation menu on the left, click Data integration, and then click API keys.

-  Click Add key, and on the Create Key page, click Global access and then Next. Your API key is generated and displayed for easy copying.

-  Copy and save your API key and secret in a secure location. When you're done, click I have saved my API key, and then click Save.
If you're using the Confluent CLI, the following command creates your API key and secret. Replace `<cluster_id>` and `<env_id>` with your cluster ID and environment ID, respectively.

```shell
confluent api-key create --resource <cluster_id> --environment <env_id>
```

Save your Kafka API key and secret in a secure location.
-  Create a new file named `variables.tf` and copy the following template into it.

   ```hcl
   variable "kafka_api_key" {
     type        = string
     description = "Kafka API Key"
   }

   variable "kafka_api_secret" {
     type        = string
     description = "Kafka API Secret"
     sensitive   = true
   }
   ```
-  Create a new file named `terraform.tfvars`, copy the following template into it, and substitute the correct values for each variable.

   -> Important: Do not store production secrets in a `.tfvars` file. Instead, use environment variables, encrypted files, or a secret store.

   ```hcl
   kafka_api_key    = "<key>"
   kafka_api_secret = "<secret>"
   ```

   -> Note: Quotation marks are required around the API key and secret strings.
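One alternative to a `.tfvars` file, in line with the warning above, is Terraform's standard behavior of reading input variables from `TF_VAR_`-prefixed environment variables, which keeps the secret out of files on disk. A sketch with placeholder values:

```shell
# Terraform maps TF_VAR_<name> to the input variable <name>, so these two
# exports replace the terraform.tfvars entries. Values are placeholders.
export TF_VAR_kafka_api_key="<key>"
export TF_VAR_kafka_api_secret="<secret>"

# Terraform now picks these up automatically on the next plan/apply, e.g.:
#   terraform plan -out=tfplan_add_topic_and_2_acls
echo "Kafka credentials exported for Terraform"
```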
-  Append the following resource definitions to `main.tf` and save the file.

   ```hcl
   resource "confluentcloud_kafka_topic" "orders" {
     kafka_cluster    = confluentcloud_kafka_cluster.test-basic-cluster.id
     topic_name       = "orders"
     partitions_count = 4
     http_endpoint    = confluentcloud_kafka_cluster.test-basic-cluster.http_endpoint
     config = {
       "cleanup.policy"    = "compact"
       "max.message.bytes" = "12345"
       "retention.ms"      = "6789000"
     }
     credentials {
       key    = var.kafka_api_key
       secret = var.kafka_api_secret
     }
   }

   resource "confluentcloud_kafka_acl" "describe-orders" {
     kafka_cluster = confluentcloud_kafka_cluster.test-basic-cluster.id
     resource_type = "TOPIC"
     resource_name = confluentcloud_kafka_topic.orders.topic_name
     pattern_type  = "LITERAL"
     principal     = "User:${confluentcloud_service_account.test-sa.id}"
     operation     = "DESCRIBE"
     permission    = "ALLOW"
     http_endpoint = confluentcloud_kafka_cluster.test-basic-cluster.http_endpoint
     credentials {
       key    = var.kafka_api_key
       secret = var.kafka_api_secret
     }
   }

   resource "confluentcloud_kafka_acl" "describe-test-basic-cluster" {
     kafka_cluster = confluentcloud_kafka_cluster.test-basic-cluster.id
     resource_type = "CLUSTER"
     resource_name = "kafka-cluster"
     pattern_type  = "LITERAL"
     principal     = "User:${confluentcloud_service_account.test-sa.id}"
     operation     = "DESCRIBE"
     permission    = "ALLOW"
     http_endpoint = confluentcloud_kafka_cluster.test-basic-cluster.http_endpoint
     credentials {
       key    = var.kafka_api_key
       secret = var.kafka_api_secret
     }
   }
   ```
-  Run the following command to create the plan:

   ```shell
   terraform plan -out=tfplan_add_topic_and_2_acls
   ```
-  Run the following command to apply the plan:

   ```shell
   terraform apply tfplan_add_topic_and_2_acls
   ```

   Your output should resemble:

   ```
   confluentcloud_kafka_acl.describe-test-basic-cluster: Creating...
   confluentcloud_kafka_topic.orders: Creating...
   confluentcloud_kafka_acl.describe-test-basic-cluster: Creation complete after 1s [id=lkc-odgpo/CLUSTER#kafka-cluster#LITERAL#User:sa-l7v772#*#DESCRIBE#ALLOW]
   confluentcloud_kafka_topic.orders: Creation complete after 2s [id=lkc-odgpo/orders]
   confluentcloud_kafka_acl.describe-orders: Creating...
   confluentcloud_kafka_acl.describe-orders: Creation complete after 0s [id=lkc-odgpo/TOPIC#orders#LITERAL#User:sa-l7v772#*#DESCRIBE#ALLOW]

   Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
   ```
-  Inspect the created ACLs:

   ```shell
   confluent kafka acl list --cluster lkc-odgpo --environment env-31dgj
   ```

   Your output should resemble:

   ```
       Principal    | Permission | Operation | ResourceType | ResourceName  | PatternType
   -----------------+------------+-----------+--------------+---------------+--------------
    User:sa-l7v772  | ALLOW      | DESCRIBE  | TOPIC        | orders        | LITERAL
    User:sa-l7v772  | ALLOW      | DESCRIBE  | CLUSTER      | kafka-cluster | LITERAL
   ```
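On a cluster with many ACLs, a listing like the one above can be filtered with ordinary text tools. A sketch against a saved copy of the sample table (the principal ID is the guide's placeholder):

```shell
# Filter a saved `confluent kafka acl list` table for TOPIC-scoped entries
# and print the resource names. The text mirrors the sample output above.
acls=' User:sa-l7v772 | ALLOW | DESCRIBE | TOPIC   | orders        | LITERAL
 User:sa-l7v772 | ALLOW | DESCRIBE | CLUSTER | kafka-cluster | LITERAL'

# Split on "|"; field 4 is ResourceType, field 5 is ResourceName.
printf '%s\n' "$acls" | awk -F'|' '$4 ~ /TOPIC/ { print $5 }' | tr -d ' '
```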
## Clean up

To clean up and remove the resources you've created, run the following command:

```shell
terraform destroy --auto-approve
```
Your output should resemble:

```
confluentcloud_service_account.test-sa: Destroying... [id=sa-l7v772]
confluentcloud_kafka_acl.describe-orders: Destroying... [id=lkc-odgpo/TOPIC#orders#LITERAL#User:sa-l7v772#*#DESCRIBE#ALLOW]
confluentcloud_kafka_acl.describe-test-basic-cluster: Destroying... [id=lkc-odgpo/CLUSTER#kafka-cluster#LITERAL#User:sa-l7v772#*#DESCRIBE#ALLOW]
confluentcloud_kafka_acl.describe-orders: Destruction complete after 2s
confluentcloud_kafka_acl.describe-test-basic-cluster: Destruction complete after 2s
confluentcloud_kafka_topic.orders: Destroying... [id=lkc-odgpo/orders]
confluentcloud_service_account.test-sa: Destruction complete after 2s
confluentcloud_kafka_topic.orders: Destruction complete after 0s
confluentcloud_kafka_cluster.test-basic-cluster: Destroying... [id=lkc-odgpo]
confluentcloud_kafka_cluster.test-basic-cluster: Still destroying... [id=lkc-odgpo, 10s elapsed]
confluentcloud_kafka_cluster.test-basic-cluster: Still destroying... [id=lkc-odgpo, 20s elapsed]
confluentcloud_kafka_cluster.test-basic-cluster: Destruction complete after 23s
confluentcloud_environment.test-env: Destroying... [id=env-31dgj]
confluentcloud_environment.test-env: Destruction complete after 1s

Apply complete! Resources: 0 added, 0 changed, 7 destroyed.
```
-> Next steps: Explore examples in the Confluent Cloud Provider repo