Add support for persistent data volume (#2)
* add basic readme file
* also fix deploy checks when a branch lands on master, since they fail
  because the comment script lacks a reference to the open PR number
hellais authored Feb 14, 2024
1 parent a9d6924 commit d4e22fc
Showing 5 changed files with 87 additions and 27 deletions.
15 changes: 0 additions & 15 deletions .github/workflows/check_deploy.yml
@@ -3,9 +3,6 @@
# * https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request
# * https://docs.github.com/en/webhooks/webhook-events-and-payloads?actionType=synchronize#pull_request
on:
push:
branches:
- main
pull_request:
types:
- opened
@@ -193,18 +190,6 @@ jobs:
</details>
#### Apply 📖\`${{ steps.apply.outcome }}\`
* **${terraformApplyPlanLine}**
* **${terraformApplyApplyLine}**
<details><summary>Show Apply</summary>
\`\`\`\n
${terraformApplyOutput}
\`\`\`
</details>
| | |
|-------------------|------------------------------------|
| Pusher | @${{ github.actor }} |
52 changes: 52 additions & 0 deletions Readme.md
@@ -0,0 +1,52 @@
# OONI Devops

This repository contains the code for managing the OONI infrastructure as
code, together with the tooling needed for its day-to-day operations.

## Setup

* Install [terraform](https://developer.hashicorp.com/terraform/install)
* Install [ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html)
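
Before running anything below, it can help to confirm both tools are actually on `PATH`. A minimal sketch (the tool names come from the list above; everything else is illustrative):

```
# Report whether each prerequisite from the setup list is installed.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: missing"
    return 1
  fi
}

for tool in terraform ansible; do
  check_tool "$tool" || true
done
```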

## Usage

For the most up-to-date information, always refer to the GitHub workflows.

You should have set up the following environment variables:
```
AWS_ACCESS_KEY_ID=XXXX
AWS_SECRET_ACCESS_KEY=YYYY
TF_VAR_aws_access_key_id=XXX
TF_VAR_aws_secret_access_key=YYYY
TF_VAR_datadog_api_key=ZZZZ
```
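
Rather than exporting these by hand each time, one option is to keep them in a git-ignored env file and load them in one step. A sketch (the `.env` filename is an assumption, not something this repo prescribes; values are placeholders):

```
# Export every VAR=value line of the given env file into the current shell.
load_env() {
  set -a   # auto-export any variable assigned while this is active
  . "$1"
  set +a
}

# usage (same variables as above, placeholder values):
#   load_env .env
#   terraform plan
```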

### Deploying IaC

```
cd tf/environments/production/
terraform plan
```

Check that the plan looks good, then apply it:

```
terraform apply
```

This will update the Ansible inventory file.
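
To guarantee that what gets applied is exactly the plan that was reviewed, the plan can be saved to a file and that file applied. These are standard `terraform` flags; `tfplan` is just an arbitrary local filename:

```
cd tf/environments/production/
terraform plan -out tfplan
terraform apply tfplan
```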

### Deploying Configuration

You can now run:
```
ansible-playbook -i inventory.ini --check --diff playbook.yml
```

And then apply it with:

```
ansible-playbook -i inventory.ini playbook.yml
```

1 change: 1 addition & 0 deletions tf/environments/production/ansible/inventory.ini
@@ -5,3 +5,4 @@ clickhouse.tier1.prod.ooni.nu

[clickhouse_servers]
clickhouse.tier1.prod.ooni.nu

42 changes: 32 additions & 10 deletions tf/environments/production/main.tf
@@ -11,10 +11,6 @@ terraform {
}
}

# You cannot create a new backend by simply defining this and then
# immediately proceeding to "terraform apply". The S3 backend must
# be bootstrapped according to the simple yet essential procedure in
# https://github.com/cloudposse/terraform-aws-tfstate-backend#usage
@@ -166,16 +162,42 @@ resource "aws_instance" "clickhouse_server_prod_tier1" {
)
}

# We care to ensure this data volume is not destroyed across re-applies. To do
# that you can either run first an apply with this commented out and then
# specify the data volume below. You can also just create a data volume with the
# appropriate tag manually and then edit the section below to indicate the name.
# If you do that, you will then have to manually also run:
# $ terraform state rm aws_ebs_volume.clickhouse_data_volume
#resource "aws_ebs_volume" "clickhouse_data_volume" {
# availability_zone = aws_instance.clickhouse_server_prod_tier1.availability_zone
# size = 1024 # 1 TB
# type = "gp3" # SSD-based volume type, provides up to 16,000 IOPS and 1,000 MiB/s throughput
# tags = merge(local.tags, {
# Name = "ooni-tier1-prod-clickhouse-vol1"
# })
#
# lifecycle {
# prevent_destroy = true
# }
#}
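
If one instead kept the resource block above uncommented, Terraform 1.5+ offers an `import` block as an alternative to the manual `terraform state rm` dance described in the comment: it adopts the manually created, tagged volume into state on the next apply. A sketch (the volume ID is a placeholder, not a real volume):

```
import {
  # placeholder ID of the manually created, tagged data volume
  id = "vol-0123456789abcdef0"
  to = aws_ebs_volume.clickhouse_data_volume
}
```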

data "aws_ebs_volume" "clickhouse_data_volume" {
most_recent = true

filter {
name = "tag:Name"
values = ["ooni-tier1-prod-clickhouse-vol1"]
}

filter {
name = "availability-zone"
values = [aws_instance.clickhouse_server_prod_tier1.availability_zone]
}
}

resource "aws_volume_attachment" "clickhouse_data_volume_attachment" {
device_name = local.clickhouse_device_name
volume_id = data.aws_ebs_volume.clickhouse_data_volume.id
instance_id = aws_instance.clickhouse_server_prod_tier1.id
force_detach = true
}
4 changes: 2 additions & 2 deletions tf/environments/production/templates/clickhouse-setup.sh
@@ -4,9 +4,9 @@ sudo hostnamectl set-hostname --static ${hostname}
# Install datadog agent
DD_API_KEY=${datadog_api_key} DD_SITE="datadoghq.eu" bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script_agent7.sh)"

# This only needs to be run the first time to initialize the volume
# sudo mkfs.ext4 -q -F ${device_name}
sudo mkdir -p /var/lib/clickhouse
sudo mount ${device_name} /var/lib/clickhouse
echo "${device_name} /var/lib/clickhouse ext4 defaults,nofail 0 2" | sudo tee -a /etc/fstab
sudo chown -R clickhouse:clickhouse /var/lib/clickhouse
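
Since the script now relies on a human commenting the `mkfs` line back in for first boot, a guarded variant is also possible: format only when the device carries no filesystem signature yet. This is a sketch, not the script's current behavior; it leans on the assumption that `blkid` exits non-zero for an unformatted device:

```
# Format the device only if blkid finds no existing filesystem signature.
maybe_mkfs() {
  if blkid "$1" >/dev/null 2>&1; then
    echo "filesystem already present on $1, skipping mkfs"
  else
    sudo mkfs.ext4 -q -F "$1"
  fi
}
# maybe_mkfs ${device_name}
```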
