You must obtain AWS CLI credentials by going to Company AWS Console > IAM > Users > {your username} > Create access key
Place the Access key ID and Secret access key in a file on your system called:
~/.aws/credentials
in the following format:
[company]
region=us-east-1
aws_access_key_id = {Access key ID}
aws_secret_access_key = {Secret access key}
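To confirm the profile is set up correctly, you can check which identity the credentials resolve to (a quick sanity check, assuming the [company] profile above):
aws sts get-caller-identity --profile company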
You must have a file ~/.company_vault.txt
containing the vault password (the same password is used for each vault.yml file in this repo).
The password is stored in AWS > us-east-1 > Systems Manager > Parameter Store > company_vault.txt
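If you prefer the CLI over the console, the password can be pulled from Parameter Store straight into the expected file; a sketch, assuming the parameter is named company_vault.txt and stored as a SecureString:
aws ssm get-parameter --profile company --region us-east-1 --name company_vault.txt --with-decryption --query Parameter.Value --output text > ~/.company_vault.txt
chmod 600 ~/.company_vault.txt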
pip install ansible boto3 botocore dnspython
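If you want to keep these packages isolated from your system Python, you can install them into a virtual environment first (optional; a minimal sketch):
python3 -m venv .venv
source .venv/bin/activate
pip install ansible boto3 botocore dnspython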
There is a simple wrapper script to do a full deploy.
sh deploy-stack.sh <name of environment>
sh deploy-stack.sh stage
If you have only edited the vars.yml file (for example, to change the list of whitelisted IP addresses), you only need to run the following to update the security groups. This is relatively safe, but it's recommended to try it on a test system first:
ansible-playbook deploy-docker.yml -i "environments/stage" --tags "cloud-formation-deploy"
First create a folder by copying prod:
INSTANCE_NAME=newname
cp -R environments/prod environments/$INSTANCE_NAME
Edit the vars.yml file to match your instance (see the example sketch after this list):
env
- make the same as $INSTANCE_NAME above
machine_name
- set to {{ stack_name }} (prod is special in that it's the only one that isn't named company-$INSTANCE_NAME)
vpc_cidr
- If connecting to an LD instance, this is important to get from Schrodinger as they reserve a specific range for VPC peering connections, which are configured manually
livedesign_fqdn, livedesign_private_ip, schrodinger_peering_cidr
- These should all be obtained from Schrodinger and are only required if hooking the instance up to Live Design
- Required if using ldap authentication for acas_authstrategy
livedesign_fqdn
- Used to provide links for users to reach the Live Design server (the "Open in Live Design" button)
livedesign_private_ip
- Used for direct database access over the peering connection for "Open in Live Design" scripts
- Used for the direct LDAP connection over the peering connection
schrodinger_peering_cidr
- Currently used just for record-keeping purposes
- Stored with the CloudFormation stack variables but not currently used
pub_subnet_cidr
- We reserve the first 16 addresses of the VPC CIDR range for the public subnet, so this is the /28 at the start of the VPC CIDR
- e.g. if vpc_cidr is 10.100.60.128/25 (10.100.60.128 - 10.100.60.255), then pub_subnet_cidr should be 10.100.60.128/28 (10.100.60.128 - 10.100.60.143)
acas_private_ip
- AWS reserves the first few addresses of a subnet (the network address plus the next three) for internal usage, therefore we make the instance private IP the address at offset +4 in the pub_subnet_cidr
- e.g. the first address of 10.100.60.128/25 is 10.100.60.128, so +4 = 10.100.60.132
acas_authstrategy
- Allowed values - acas, ldap
acas
- Use ACAS internal authentication instead of reaching out to Live Design LDAP
- This can be useful if you want a standalone instance of ACAS without having to connect it to Live Design.
ldap
- This sets the appropriate flags in ACAS to reach out to Live Design's LDAP system located on the Live Design server
run_backups, backups_retention
- For non-prod systems:
  run_backups: false
  backups_retention: {must still be an integer but won't be used}
- For prod systems:
  run_backups: true
  backups_retention: 7
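Putting the variables above together, a hypothetical environments/$INSTANCE_NAME/vars.yml might look roughly like the sketch below. The values are illustrative only (the env name, Live Design addresses, and CIDRs are made up); use the copied prod file as the authoritative reference for the exact keys.
env: mytest                          # same as $INSTANCE_NAME above
machine_name: "{{ stack_name }}"     # prod is the only one not named company-$INSTANCE_NAME
vpc_cidr: 10.100.60.128/25           # from Schrodinger if peering to Live Design
pub_subnet_cidr: 10.100.60.128/28    # first /28 (16 addresses) of the VPC CIDR
acas_private_ip: 10.100.60.132       # first address of pub_subnet_cidr + 4
acas_authstrategy: acas              # acas or ldap
livedesign_fqdn: livedesign.example.com
livedesign_private_ip: 10.30.0.10
schrodinger_peering_cidr: 10.30.0.0/16
run_backups: false                   # true for prod
backups_retention: 7                 # integer; only used when run_backups is true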
Get the SSH key from AWS > us-east-1 > Systems Manager > Parameter Store > acas.pem
ssh -i "~/.ssh/acas.pem" [email protected]
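The key can also be fetched with the CLI; a sketch, assuming the private key is stored as a SecureString parameter named acas.pem (ssh will refuse a key that isn't locked down to your user, hence the chmod):
aws ssm get-parameter --profile company --region us-east-1 --name acas.pem --with-decryption --query Parameter.Value --output text > ~/.ssh/acas.pem
chmod 600 ~/.ssh/acas.pem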
Changing a password in a vault file:
First remove the original password by opening the vault file and deleting the encrypted lines for that variable
Add a new password:
cd environments/stage/group_vars/all/
echo -n 'TheNewPassword' | ansible-vault encrypt_string --stdin-name 'livedesign_db_password' >> vault.yml
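If Ansible is not already configured to find the vault password (e.g. via ansible.cfg), the encrypt and decrypt commands in this section accept an explicit password file; a sketch using the ~/.company_vault.txt file from above:
echo -n 'TheNewPassword' | ansible-vault encrypt_string --vault-password-file ~/.company_vault.txt --stdin-name 'livedesign_db_password' >> vault.yml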
Viewing a password:
Get the text from the vault file (spaces don't matter)
ciphertext=' $ANSIBLE_VAULT;1.1;AES256
11111111111111111111111111111111111111111111111111111111111111111111111111111111
1111111111111111111111111111111111111111111111111a111111111111111111111111111111
11111111111111111111111111111111111111111111111111111111111111111111111111111111
1111111111111111111a111111111111111111111111111111111111111111111111111111111111
11111111111111111111111111111111111111111111111111111111111111111111'
printf "%s\n" $ciphertext | ansible-vault decrypt /dev/stdin --output=/dev/stderr > /dev/null
This is helpful when debugging why an EC2 instance may not respond to CloudFormation after being created:
sudo tail -f /var/log/cloud-init.log
sudo tail -f /var/log/cfn-init.log
- We are not doing the following lockdown in the CloudFormation template because we found that ports 1024-65535 are required as ephemeral ports
- We haven't sorted out locking down inbound ports 1024-65535; something in this range is required for the CloudFormation user function to run, so we allow it in the template
- For any system with real data, it is easy to manually lock this down after the CloudFormation stack creation process completes:
- Navigate to your new stack in CloudFormation
- Click on the resources tab and scroll down to find the resource name "NetworkAcl"
- Click on the link to the physical id. This will open the ACL configuration in a new tab
- Select the Inbound Rules tab and then click "Edit Inbound Rules"
- Delete rule 103 and save
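The same rule can be removed with the AWS CLI once you know the ACL id from the stack's resources; a sketch with a placeholder ACL id:
aws ec2 delete-network-acl-entry --profile company --network-acl-id acl-0123456789abcdef0 --ingress --rule-number 103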
aws cloudformation deploy \
--profile company \
--template-file ./ACAS-Template.yaml \
--parameter-overrides \
InstanceType=r3.large \
KeyName=ansible-testing \
ImageId=ami-9887c6e7 \
--stack-name acas-stage
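After a manual deploy you can watch the stack status from the CLI as well, e.g.:
aws cloudformation describe-stacks --profile company --stack-name acas-stage --query "Stacks[0].StackStatus"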
aws cloudformation delete-stack --profile company --stack-name acas-tmp
This currently deletes the data volume!
aws cloudformation update-termination-protection --profile company --enable-termination-protection --stack-name acas-jam-tmp
aws cloudformation update-termination-protection --profile company --no-enable-termination-protection --stack-name acas-jam-tmp