While testing #11, I encountered an issue when re-running `terraform apply`.
`provision_master` ran once and succeeded up to the bug reported in #11. That failure required re-running Terraform. Unfortunately, on the second run `provision_master` reported that the specified volume already has a filesystem and prompted whether to use it anyway:
null_resource.provision_master (remote-exec): /dev/vdb contains a ext4 file system
null_resource.provision_master (remote-exec): last mounted on /var/lib/docker on Wed Sep 5 16:29:05 2018
null_resource.provision_master (remote-exec): Proceed anyway? (y,n)
After waiting 30+ minutes, nothing further happens; `provision_master` can never complete without first running `terraform destroy` and starting over from scratch.
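This suggests the format step in the provisioning script is not idempotent: on a re-run, mkfs finds the ext4 filesystem left by the first attempt and blocks on a prompt that a remote-exec provisioner can never answer. A minimal sketch of a guard, assuming the script formats /dev/vdb directly (the actual script is not shown here):

```hcl
provisioner "remote-exec" {
  inline = [
    # blkid exits non-zero when the device carries no recognizable
    # filesystem, so the volume is formatted only on the first run and
    # a re-run never reaches the interactive mkfs prompt.
    "sudo blkid /dev/vdb || sudo mkfs.ext4 /dev/vdb",
  ]
}
```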
Verbose output can be found below. The first section is the output from the failure caused by #11, followed by the output from re-running Terraform:
...
null_resource.provision_master (remote-exec): ==> v1beta1/ClusterRoleBinding
null_resource.provision_master (remote-exec): NAME AGE
null_resource.provision_master (remote-exec): support-nginx-ingress 6s
null_resource.provision_master (remote-exec): /tmp/terraform_1092063292.sh: line 6: /etc/bash_completion.d/kubectl: Permission denied
Error: Error applying plan:
3 error(s) occurred:
* module.compute_storage_nodes.openstack_compute_instance_v2.worker: 1 error(s) occurred:
* openstack_compute_instance_v2.worker: Error creating OpenStack server: Bad request with: [POST https://compute.cloud.sdsc.edu:8774/v2.1/servers], error message: {"badRequest": {"message": "Network 1d53ab5a-7584-4de0-9147-6ae7eceb5eee requires a subnet in order to boot instances on.", "code": 400}}
* module.compute_worker_nodes.openstack_compute_instance_v2.worker: 1 error(s) occurred:
* openstack_compute_instance_v2.worker: Error creating OpenStack server: Bad request with: [POST https://compute.cloud.sdsc.edu:8774/v2.1/servers], error message: {"badRequest": {"message": "Network 1d53ab5a-7584-4de0-9147-6ae7eceb5eee requires a subnet in order to boot instances on.", "code": 400}}
* null_resource.provision_master: error executing "/tmp/terraform_1092063292.sh": Process exited with status 1
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
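The Permission denied at line 6 of the generated script is the #11 bug: the completion file is written to /etc/bash_completion.d/ as the unprivileged ubuntu user. A hedged sketch of one privileged-write fix, assuming the script pipes the completions into the file itself:

```hcl
provisioner "remote-exec" {
  inline = [
    # Write the completion file as root; redirecting tee's stdout
    # keeps the provisioner log quiet.
    "kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null",
  ]
}
```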
ubuntu@lambert8-dev:~/kubeadm-terraform$ ^C
ubuntu@lambert8-dev:~/kubeadm-terraform$ ^C
ubuntu@lambert8-dev:~/kubeadm-terraform$ ^C
ubuntu@lambert8-dev:~/kubeadm-terraform$ git status
On branch develop
Your branch is up-to-date with 'origin/develop'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: variables.tf
no changes added to commit (use "git add" and/or "git commit -a")
ubuntu@lambert8-dev:~/kubeadm-terraform$ git stash
Saved working directory and index state WIP on develop: 114ffd4 Merge pull request #9 from nds-org/bash-completion
HEAD is now at 114ffd4 Merge pull request #9 from nds-org/bash-completion
ubuntu@lambert8-dev:~/kubeadm-terraform$ git fetch --all
Fetching origin
remote: Counting objects: 4, done.
remote: Compressing objects: 100% (1/1), done.
remote: Total 4 (delta 3), reused 4 (delta 3), pack-reused 0
Unpacking objects: 100% (4/4), done.
From https://github.com/nds-org/kubeadm-terraform
* [new branch] 11-completions-fails -> origin/11-completions-fails
ubuntu@lambert8-dev:~/kubeadm-terraform$ git checkout 11-completions-fails
Branch 11-completions-fails set up to track remote branch 11-completions-fails from origin.
Switched to a new branch '11-completions-fails'
ubuntu@lambert8-dev:~/kubeadm-terraform$ git stash^C
ubuntu@lambert8-dev:~/kubeadm-terraform$ ^C
ubuntu@lambert8-dev:~/kubeadm-terraform$ git stash pop
On branch 11-completions-fails
Your branch is up-to-date with 'origin/11-completions-fails'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: variables.tf
no changes added to commit (use "git add" and/or "git commit -a")
Dropped refs/stash@{0} (190d87077e8fb0509766a56f10abd469cb666861)
ubuntu@lambert8-dev:~/kubeadm-terraform$ terraform apply
openstack_networking_floatingip_v2.masterip: Refreshing state... (ID: e36e2bc8-1b60-4cd1-b60e-c9b9af96e9c2)
openstack_compute_keypair_v2.k8s: Refreshing state... (ID: mltest-key_pair)
openstack_blockstorage_volume_v2.storage_node: Refreshing state... (ID: 7ec520ae-4ac5-4a29-b4c6-2fd684bc75e6)
openstack_networking_router_v2.router_1: Refreshing state... (ID: a32a110a-2b2c-4bad-92c4-8edade4b5aa0)
openstack_compute_secgroup_v2.bastion: Refreshing state... (ID: 60465620-6975-40e6-b643-2433d761dfe5)
openstack_blockstorage_volume_v2.worker_docker: Refreshing state... (ID: 154fb2c1-5d6b-4eb2-a7a3-222e514fb218)
openstack_compute_secgroup_v2.k8s: Refreshing state... (ID: 6d74880b-196e-4a5d-9af8-49d86b39e1fd)
openstack_networking_network_v2.network_1: Refreshing state... (ID: 1d53ab5a-7584-4de0-9147-6ae7eceb5eee)
openstack_networking_floatingip_v2.workerip: Refreshing state... (ID: 96809865-a2eb-4f3b-8c25-b95d71d86526)
openstack_blockstorage_volume_v2.storage_docker: Refreshing state... (ID: 8fdcf18a-a64b-464a-afc6-f5c59bd2511e)
openstack_compute_secgroup_v2.k8s_master: Refreshing state... (ID: ccd8327d-3f93-43e1-b7e4-f0601c128fd1)
openstack_blockstorage_volume_v2.master_docker: Refreshing state... (ID: 7a31b965-14b6-4f4a-a3cf-b0cea6f08305)
openstack_networking_subnet_v2.subnet_1: Refreshing state... (ID: 2a15c132-7bdc-4b7a-9608-793890760750)
openstack_networking_router_interface_v2.router_interface_1: Refreshing state... (ID: e17b1014-f6fd-4906-81c8-20907f60bb07)
openstack_compute_instance_v2.master: Refreshing state... (ID: dc3d7771-133f-417c-b20b-bd111286420c)
openstack_compute_volume_attach_v2.master-docker: Refreshing state... (ID: dc3d7771-133f-417c-b20b-bd111286420c/7a31b965-14b6-4f4a-a3cf-b0cea6f08305)
openstack_compute_floatingip_associate_v2.masterip: Refreshing state... (ID: 132.249.238.74/dc3d7771-133f-417c-b20b-bd111286420c/192.168.0.11)
null_resource.provision_master: Refreshing state... (ID: 5824377709132871726)
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
-/+ destroy and then create replacement
<= read (data resources)
Terraform will perform the following actions:
<= data.external.k8s_join_response
id: <computed>
program.#: "1"
program.0: "assets/get-token.sh"
query.%: "2"
query.host: "132.249.238.74"
query.private_key: "~/.ssh/id_rsa"
result.%: <computed>
+ null_resource.install_nfs
id: <computed>
+ null_resource.label_external_ip_nodes
id: <computed>
+ null_resource.label_storage_nodes
id: <computed>
-/+ null_resource.provision_master (tainted) (new resource required)
id: "5824377709132871726" =><computed> (forces new resource)
+ null_resource.provision_storage_mounts
id: <computed>
+ openstack_compute_floatingip_associate_v2.workerip
id: <computed>
fixed_ip: "${module.compute_worker_nodes.worker-instance-fixed-ips[count.index]}"
floating_ip: "132.249.238.116"
instance_id: "${module.compute_worker_nodes.worker-instance-ids[count.index]}"
region: <computed>
wait_until_associated: "false"
+ openstack_compute_volume_attach_v2.storage_volume
id: <computed>
device: <computed>
instance_id: "${module.compute_storage_nodes.worker-instance-ids[count.index]}"
region: <computed>
volume_id: "7ec520ae-4ac5-4a29-b4c6-2fd684bc75e6"
+ module.compute_storage_nodes.openstack_compute_instance_v2.worker
id: <computed>
access_ip_v4: <computed>
access_ip_v6: <computed>
all_metadata.%: <computed>
availability_zone: <computed>
flavor_id: <computed>
flavor_name: "m1.large"
force_delete: "false"
image_id: <computed>
image_name: "Ubuntu 16.04 LTS x86_64"
key_pair: "mltest-key_pair"
name: "mltest-storage0"
network.#: "1"
network.0.access_network: "false"
network.0.fixed_ip_v4: <computed>
network.0.fixed_ip_v6: <computed>
network.0.floating_ip: <computed>
network.0.mac: <computed>
network.0.name: "mltest-net"
network.0.port: <computed>
network.0.uuid: <computed>
power_state: "active"
region: <computed>
security_groups.#: "2"
security_groups.3321732881: "mltest-k8s"
security_groups.3814588639: "default"
stop_before_destroy: "false"
+ module.compute_storage_nodes.openstack_compute_volume_attach_v2.worker-docker
id: <computed>
device: <computed>
instance_id: "${element(openstack_compute_instance_v2.worker.*.id, count.index)}"
region: <computed>
volume_id: "8fdcf18a-a64b-464a-afc6-f5c59bd2511e"
+ module.compute_worker_nodes.openstack_compute_instance_v2.worker
id: <computed>
access_ip_v4: <computed>
access_ip_v6: <computed>
all_metadata.%: <computed>
availability_zone: <computed>
flavor_id: <computed>
flavor_name: "m1.large"
force_delete: "false"
image_id: <computed>
image_name: "Ubuntu 16.04 LTS x86_64"
key_pair: "mltest-key_pair"
name: "mltest-worker0"
network.#: "1"
network.0.access_network: "false"
network.0.fixed_ip_v4: <computed>
network.0.fixed_ip_v6: <computed>
network.0.floating_ip: <computed>
network.0.mac: <computed>
network.0.name: "mltest-net"
network.0.port: <computed>
network.0.uuid: <computed>
power_state: "active"
region: <computed>
security_groups.#: "2"
security_groups.3321732881: "mltest-k8s"
security_groups.3814588639: "default"
stop_before_destroy: "false"
+ module.compute_worker_nodes.openstack_compute_volume_attach_v2.worker-docker
id: <computed>
device: <computed>
instance_id: "${element(openstack_compute_instance_v2.worker.*.id, count.index)}"
region: <computed>
volume_id: "154fb2c1-5d6b-4eb2-a7a3-222e514fb218"
+ module.provision_storage_nodes.null_resource.provision_worker
id: <computed>
triggers.%: <computed>
+ module.provision_storage_nodes.null_resource.worker_join
id: <computed>
+ module.provision_storage_nodes.null_resource.worker_node
id: <computed>
+ module.provision_worker_nodes.null_resource.provision_worker
id: <computed>
triggers.%: <computed>
+ module.provision_worker_nodes.null_resource.worker_join
id: <computed>
+ module.provision_worker_nodes.null_resource.worker_node
id: <computed>
Plan: 17 to add, 0 to change, 1 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
null_resource.provision_master: Destroying... (ID: 5824377709132871726)
null_resource.provision_master: Destruction complete after 0s
null_resource.provision_master: Creating...
null_resource.provision_master: Provisioning with 'remote-exec'...
null_resource.provision_master (remote-exec): Connecting to remote host via SSH...
null_resource.provision_master (remote-exec): Host: 132.249.238.74
null_resource.provision_master (remote-exec): User: ubuntu
null_resource.provision_master (remote-exec): Password: false
null_resource.provision_master (remote-exec): Private key: true
null_resource.provision_master (remote-exec): SSH Agent: false
null_resource.provision_master (remote-exec): Checking Host Key: false
module.compute_worker_nodes.openstack_compute_instance_v2.worker: Creating...
access_ip_v4: "" =>"<computed>"
access_ip_v6: "" =>"<computed>"
all_metadata.%: "" =>"<computed>"
availability_zone: "" =>"<computed>"
flavor_id: "" =>"<computed>"
flavor_name: "" =>"m1.large"
force_delete: "" =>"false"
image_id: "" =>"<computed>"
image_name: "" =>"Ubuntu 16.04 LTS x86_64"
key_pair: "" =>"mltest-key_pair"
name: "" =>"mltest-worker0"
network.#: "" => "1"
network.0.access_network: "" =>"false"
network.0.fixed_ip_v4: "" =>"<computed>"
network.0.fixed_ip_v6: "" =>"<computed>"
network.0.floating_ip: "" =>"<computed>"
network.0.mac: "" =>"<computed>"
network.0.name: "" =>"mltest-net"
network.0.port: "" =>"<computed>"
network.0.uuid: "" =>"<computed>"
power_state: "" =>"active"
region: "" =>"<computed>"
security_groups.#: "" => "2"
security_groups.3321732881: "" =>"mltest-k8s"
security_groups.3814588639: "" =>"default"
stop_before_destroy: "" =>"false"
module.compute_storage_nodes.openstack_compute_instance_v2.worker: Creating...
access_ip_v4: "" =>"<computed>"
access_ip_v6: "" =>"<computed>"
all_metadata.%: "" =>"<computed>"
availability_zone: "" =>"<computed>"
flavor_id: "" =>"<computed>"
flavor_name: "" =>"m1.large"
force_delete: "" =>"false"
image_id: "" =>"<computed>"
image_name: "" =>"Ubuntu 16.04 LTS x86_64"
key_pair: "" =>"mltest-key_pair"
name: "" =>"mltest-storage0"
network.#: "" => "1"
network.0.access_network: "" =>"false"
network.0.fixed_ip_v4: "" =>"<computed>"
network.0.fixed_ip_v6: "" =>"<computed>"
network.0.floating_ip: "" =>"<computed>"
network.0.mac: "" =>"<computed>"
network.0.name: "" =>"mltest-net"
network.0.port: "" =>"<computed>"
network.0.uuid: "" =>"<computed>"
power_state: "" =>"active"
region: "" =>"<computed>"
security_groups.#: "" => "2"
security_groups.3321732881: "" =>"mltest-k8s"
security_groups.3814588639: "" =>"default"
stop_before_destroy: "" =>"false"
null_resource.provision_master (remote-exec): Connected!
null_resource.provision_master (remote-exec): 127.0.0.1 mltest-master
null_resource.provision_master: Provisioning with 'file'...
null_resource.provision_master: Provisioning with 'remote-exec'...
null_resource.provision_master (remote-exec): Connecting to remote host via SSH...
null_resource.provision_master (remote-exec): Host: 132.249.238.74
null_resource.provision_master (remote-exec): User: ubuntu
null_resource.provision_master (remote-exec): Password: false
null_resource.provision_master (remote-exec): Private key: true
null_resource.provision_master (remote-exec): SSH Agent: false
null_resource.provision_master (remote-exec): Checking Host Key: false
null_resource.provision_master (remote-exec): Connected!
null_resource.provision_master (remote-exec): mke2fs 1.42.13 (17-May-2015)
null_resource.provision_master (remote-exec): /dev/vdb contains a ext4 file system
null_resource.provision_master (remote-exec): last mounted on /var/lib/docker on Wed Sep 5 16:29:05 2018
null_resource.provision_master (remote-exec): Proceed anyway? (y,n)
null_resource.provision_master: Still creating... (10s elapsed)
module.compute_worker_nodes.openstack_compute_instance_v2.worker: Still creating... (10s elapsed)
module.compute_storage_nodes.openstack_compute_instance_v2.worker: Still creating... (10s elapsed)
module.compute_worker_nodes.openstack_compute_instance_v2.worker: Creation complete after 15s (ID: b6befed5-7b2b-4cc7-a8e4-44c4fc70f1a6)
module.compute_worker_nodes.openstack_compute_volume_attach_v2.worker-docker: Creating...
device: "" =>"<computed>"
instance_id: "" =>"b6befed5-7b2b-4cc7-a8e4-44c4fc70f1a6"
region: "" =>"<computed>"
volume_id: "" =>"154fb2c1-5d6b-4eb2-a7a3-222e514fb218"
openstack_compute_floatingip_associate_v2.workerip: Creating...
fixed_ip: "" =>"192.168.0.7"
floating_ip: "" =>"132.249.238.116"
instance_id: "" =>"b6befed5-7b2b-4cc7-a8e4-44c4fc70f1a6"
region: "" =>"<computed>"
wait_until_associated: "" =>"false"
module.compute_storage_nodes.openstack_compute_instance_v2.worker: Creation complete after 16s (ID: 59f4149d-f035-4fe4-b159-b2d52abc2ad6)
module.compute_storage_nodes.openstack_compute_volume_attach_v2.worker-docker: Creating...
device: "" =>"<computed>"
instance_id: "" =>"59f4149d-f035-4fe4-b159-b2d52abc2ad6"
region: "" =>"<computed>"
volume_id: "" =>"8fdcf18a-a64b-464a-afc6-f5c59bd2511e"
openstack_compute_volume_attach_v2.storage_volume: Creating...
device: "" =>"<computed>"
instance_id: "" =>"59f4149d-f035-4fe4-b159-b2d52abc2ad6"
region: "" =>"<computed>"
volume_id: "" =>"7ec520ae-4ac5-4a29-b4c6-2fd684bc75e6"
openstack_compute_floatingip_associate_v2.workerip: Creation complete after 3s (ID: 132.249.238.116/b6befed5-7b2b-4cc7-a8e4-44c4fc70f1a6/192.168.0.7)
null_resource.provision_master: Still creating... (20s elapsed)
module.compute_worker_nodes.openstack_compute_volume_attach_v2.worker-docker: Still creating... (10s elapsed)
module.compute_storage_nodes.openstack_compute_volume_attach_v2.worker-docker: Still creating... (10s elapsed)
openstack_compute_volume_attach_v2.storage_volume: Still creating... (10s elapsed)
null_resource.provision_master: Still creating... (30s elapsed)
module.compute_worker_nodes.openstack_compute_volume_attach_v2.worker-docker: Still creating... (20s elapsed)
module.compute_storage_nodes.openstack_compute_volume_attach_v2.worker-docker: Still creating... (20s elapsed)
openstack_compute_volume_attach_v2.storage_volume: Still creating... (20s elapsed)
null_resource.provision_master: Still creating... (40s elapsed)
module.compute_worker_nodes.openstack_compute_volume_attach_v2.worker-docker: Still creating... (30s elapsed)
module.compute_storage_nodes.openstack_compute_volume_attach_v2.worker-docker: Still creating... (30s elapsed)
openstack_compute_volume_attach_v2.storage_volume: Still creating... (30s elapsed)
module.compute_worker_nodes.openstack_compute_volume_attach_v2.worker-docker: Creation complete after 31s (ID: b6befed5-7b2b-4cc7-a8e4-44c4fc70f1a6/154fb2c1-5d6b-4eb2-a7a3-222e514fb218)
openstack_compute_volume_attach_v2.storage_volume: Creation complete after 30s (ID: 59f4149d-f035-4fe4-b159-b2d52abc2ad6/7ec520ae-4ac5-4a29-b4c6-2fd684bc75e6)
module.compute_storage_nodes.openstack_compute_volume_attach_v2.worker-docker: Creation complete after 31s (ID: 59f4149d-f035-4fe4-b159-b2d52abc2ad6/8fdcf18a-a64b-464a-afc6-f5c59bd2511e)
null_resource.provision_master: Still creating... (50s elapsed)
null_resource.provision_master: Still creating... (1m0s elapsed)
null_resource.provision_master: Still creating... (1m10s elapsed)
null_resource.provision_master: Still creating... (1m20s elapsed)
null_resource.provision_master: Still creating... (1m30s elapsed)
null_resource.provision_master: Still creating... (1m40s elapsed)
null_resource.provision_master: Still creating... (1m50s elapsed)
null_resource.provision_master: Still creating... (2m0s elapsed)
null_resource.provision_master: Still creating... (2m10s elapsed)
null_resource.provision_master: Still creating... (2m20s elapsed)
null_resource.provision_master: Still creating... (2m30s elapsed)
null_resource.provision_master: Still creating... (2m40s elapsed)
null_resource.provision_master: Still creating... (2m50s elapsed)
null_resource.provision_master: Still creating... (3m0s elapsed)
null_resource.provision_master: Still creating... (3m10s elapsed)
null_resource.provision_master: Still creating... (3m20s elapsed)
null_resource.provision_master: Still creating... (3m30s elapsed)
null_resource.provision_master: Still creating... (3m40s elapsed)
null_resource.provision_master: Still creating... (3m50s elapsed)
null_resource.provision_master: Still creating... (4m0s elapsed)
null_resource.provision_master: Still creating... (4m10s elapsed)
null_resource.provision_master: Still creating... (4m20s elapsed)
null_resource.provision_master: Still creating... (4m30s elapsed)
null_resource.provision_master: Still creating... (4m40s elapsed)
null_resource.provision_master: Still creating... (4m50s elapsed)
null_resource.provision_master: Still creating... (5m0s elapsed)
null_resource.provision_master: Still creating... (5m10s elapsed)
null_resource.provision_master: Still creating... (5m20s elapsed)
null_resource.provision_master: Still creating... (5m30s elapsed)
null_resource.provision_master: Still creating... (5m40s elapsed)
null_resource.provision_master: Still creating... (5m50s elapsed)
null_resource.provision_master: Still creating... (6m0s elapsed)
null_resource.provision_master: Still creating... (6m10s elapsed)
null_resource.provision_master: Still creating... (6m20s elapsed)
null_resource.provision_master: Still creating... (6m30s elapsed)
null_resource.provision_master: Still creating... (6m40s elapsed)
null_resource.provision_master: Still creating... (6m50s elapsed)
null_resource.provision_master: Still creating... (7m0s elapsed)
null_resource.provision_master: Still creating... (7m10s elapsed)
null_resource.provision_master: Still creating... (7m20s elapsed)
null_resource.provision_master: Still creating... (7m30s elapsed)
null_resource.provision_master: Still creating... (7m40s elapsed)
null_resource.provision_master: Still creating... (7m50s elapsed)
null_resource.provision_master: Still creating... (8m0s elapsed)
null_resource.provision_master: Still creating... (8m10s elapsed)
null_resource.provision_master: Still creating... (8m20s elapsed)
null_resource.provision_master: Still creating... (8m30s elapsed)
null_resource.provision_master: Still creating... (8m40s elapsed)
null_resource.provision_master: Still creating... (8m50s elapsed)
null_resource.provision_master: Still creating... (9m0s elapsed)
null_resource.provision_master: Still creating... (9m10s elapsed)
null_resource.provision_master: Still creating... (9m20s elapsed)
null_resource.provision_master: Still creating... (9m30s elapsed)
null_resource.provision_master: Still creating... (9m40s elapsed)
null_resource.provision_master: Still creating... (9m50s elapsed)
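For completeness, the subnet Bad Request in the first run looks like an ordering race: Nova was asked to boot the workers before subnet_1 existed. One way to force the ordering in this Terraform version is to thread the subnet's id through the worker module, which creates an implicit dependency; a sketch, assuming the module declares a matching variable (subnet_id and the source path are hypothetical):

```hcl
module "compute_worker_nodes" {
  source = "./modules/compute"  # hypothetical path

  # Referencing the subnet's id makes Terraform create the subnet
  # before it boots any instance in this module.
  subnet_id = "${openstack_networking_subnet_v2.subnet_1.id}"
}
```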