Adding a disk to an existing VM does not propagate the disk uuid properly #2296

Open
4 tasks done
jtognazzi opened this issue Nov 5, 2024 · 3 comments
Labels
area/vm Area: Virtual Machines bug Type: Bug needs-triage Status: Issue Needs Triage
Milestone

Comments

jtognazzi commented Nov 5, 2024

Community Guidelines

  • I have read and agree to the HashiCorp Community Guidelines.
  • Vote on this issue by adding a 👍 reaction to the initial issue description to help the maintainers prioritize.
  • Do not leave "+1" or other comments that do not add relevant information or questions.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.

Terraform

1.9.8

Terraform Provider

2.10.0

VMware vSphere

8.0.3

Description

When creating a VM with one or more disks, you can use the disk uuid in other resources or in local values. The value is only known after apply, but it works properly.

Now, if you add additional disks to a VM that already exists, it no longer works: the disk uuid is not shown as known after apply, and any resource relying on it fails because the value is null.

Affected Resources or Data Sources

resource/vsphere_virtual_machine

Terraform Configuration

resource "vsphere_virtual_machine" "myVM" {
  name                    = "myvm"

  folder                  = var.vsphere_folder

  num_cpus                = 2
  memory                  = 2024

  firmware                = "efi"
  efi_secure_boot_enabled = "true"
  guest_id                = module.vsphere.template.guest_id
  datastore_id            = module.vsphere.datastores[var.vm_datastore].id
  resource_pool_id        = module.vsphere.pool[var.vsphere_cluster].id

  memory_hot_add_enabled = true
  cpu_hot_add_enabled    = true
  cpu_hot_remove_enabled = true
  enable_disk_uuid       = true

  sync_time_with_host    = false
  #wait_for_guest_net_timeout = 0
  #wait_for_guest_ip_timeout  = 0

  extra_config_reboot_required = false

  network_interface {
    network_id = module.vsphere.networks[var.vm_network_default].id
  }
  disk {
    label            = "root"
    size             = module.vsphere.template.disks.0.size
    eagerly_scrub    = module.vsphere.template.disks.0.eagerly_scrub
    thin_provisioned = module.vsphere.template.disks.0.thin_provisioned
  }
  dynamic "disk" {
    for_each = { for index, name in var.datadisks : index => name }
     content {
      datastore_id = module.vsphere.datastores[var.vm_datastore].id
      label = disk.value
      unit_number = disk.key + 1   
      size = 10   
    }
  }

  clone {
    template_uuid = module.vsphere.template.id
  }
  lifecycle {
    ignore_changes = [
      clone[0].template_uuid,
      annotation
    ]
  }

}

locals {
  uuid_test = [
    for disk in vsphere_virtual_machine.myVM.disk: [
      disk.uuid
    ]
  ]
}

resource "terraform_data" "uuid" {
  input = local.uuid_test
}

output "uuid" {
  value = local.uuid_test
}

Debug Output

https://gist.github.com/jtognazzi/10f79a78405566dc1512e7a620e4b505

Panic Output

No response

Expected Behavior

The uuid of every attached disk should appear in the output; for disks added to an existing VM the value should be marked as known after apply, just as it is when the VM is first created.

Actual Behavior

It fails with the following error:
terraform apply -var 'datadisks=["disk1"]'

module.vsphere.data.vsphere_datacenter.datacenter: Reading...
module.vsphere.data.vsphere_datacenter.datacenter: Read complete after 0s [id=datacenter-2]
module.vsphere.data.vsphere_network.networks["ELS Testnetzwerk"]: Reading...
module.vsphere.data.vsphere_network.networks["VM Network 2"]: Reading...
module.vsphere.data.vsphere_network.networks["VM Network Docker"]: Reading...
module.vsphere.data.vsphere_datastore.datastores["msa2052_lun2"]: Reading...
module.vsphere.data.vsphere_virtual_machine.template: Reading...
module.vsphere.data.vsphere_datastore.datastores["msa2052_lun1"]: Reading...
module.vsphere.data.vsphere_compute_cluster.cluster["Cluster AMD"]: Reading...
module.vsphere.data.vsphere_datastore.datastores["MSA2060_A_VOL1"]: Reading...
module.vsphere.data.vsphere_network.networks["ELS Testnetzwerk"]: Read complete after 0s [id=network-24]
module.vsphere.data.vsphere_network.networks["VM Network Docker"]: Read complete after 0s [id=network-70]
module.vsphere.data.vsphere_network.networks["VM Network 2"]: Read complete after 0s [id=network-13]
module.vsphere.data.vsphere_datastore.datastores["MSA2060_A_VOL1"]: Read complete after 0s [id=datastore-59065]
module.vsphere.data.vsphere_datastore.datastores["msa2052_lun1"]: Read complete after 0s [id=datastore-32263]
module.vsphere.data.vsphere_virtual_machine.template: Read complete after 0s [id=4221e9e4-860c-e9d2-18b1-88140d3defc1]
module.vsphere.data.vsphere_datastore.datastores["msa2052_lun2"]: Read complete after 0s [id=datastore-32157]
module.vsphere.data.vsphere_compute_cluster.cluster["Cluster AMD"]: Read complete after 0s [id=domain-c2243]
module.vsphere.data.vsphere_resource_pool.pool["Cluster AMD"]: Reading...
module.vsphere.data.vsphere_resource_pool.pool["Cluster AMD"]: Read complete after 0s [id=resgroup-2244]
vsphere_virtual_machine.myVM: Refreshing state... [id=422100c3-67bd-0142-8f63-cbeee204ca43]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform planned the following actions, but then encountered a problem:

  # vsphere_virtual_machine.myVM will be updated in-place
  ~ resource "vsphere_virtual_machine" "myVM" {
        id                                      = "422100c3-67bd-0142-8f63-cbeee204ca43"
        name                                    = "myvm"
        tags                                    = []
        # (73 unchanged attributes hidden)

      + disk {
          + attach           = true
          + controller_type  = "scsi"
          + datastore_id     = "datastore-59065"
          + disk_mode        = "persistent"
          + disk_sharing     = "sharingNone"
          + eagerly_scrub    = false
          + io_limit         = -1
          + io_reservation   = 0
          + io_share_count   = 0
          + io_share_level   = "normal"
          + keep_on_remove   = false
          + key              = 0
          + label            = "disk1"
          + path             = "disk1.vmdk"
          + thin_provisioned = true
          + unit_number      = 1
          + write_through    = false
        }

        # (3 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
╷
│ Error: Invalid function argument
│ 
│   on main.tf line 62, in locals:
│   62:       lower(disk.uuid)
│     ├────────────────
│     │ disk.uuid is null
│ 
│ Invalid value for "str" parameter: argument must not be null.

Edit: I added the terraform_data resource and now the error is:

╷
│ Error: Provider produced inconsistent final plan
│ 
│ When expanding the plan for terraform_data.uuid to include new values learned so far during apply, provider "terraform.io/builtin/terraform" produced an invalid new value for .input[1][0]: was null, but now
│ cty.StringVal("6000C29a-13b1-08b4-fa53-3b96cced6e01").
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵

Steps to Reproduce

var.datadisks is an empty list by default, so running terraform apply creates a new VM with only one disk attached.
Then running terraform apply -var 'datadisks=["disk1"]' attaches another disk to the VM, but the apply fails.

If I destroy everything and run the same command directly (so the VM is created with both disks attached), it works properly.
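
The declaration of var.datadisks is not included in the configuration above; a declaration consistent with these steps would look roughly like this (assumed, not taken from the report):

variable "datadisks" {
  # Labels for the additional data disks; empty by default so the VM is
  # first created with only the root disk.
  type    = list(string)
  default = []
}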

Environment Details

No response

Screenshots

No response

References

No response

@jtognazzi jtognazzi added bug Type: Bug needs-triage Status: Issue Needs Triage labels Nov 5, 2024

github-actions bot commented Nov 5, 2024

Hello, jtognazzi! 🖐

Thank you for submitting an issue for this provider. The issue will now enter into the issue lifecycle.

If you want to contribute to this project, please review the contributing guidelines and information on submitting pull requests.

@tenthirtyam tenthirtyam added the area/vm Area: Virtual Machines label Nov 5, 2024
@tenthirtyam tenthirtyam added this to the Backlog milestone Nov 5, 2024
@Keltirion

Also encountered this issue today when attaching drives created with vsphere_virtual_disk in a different module to existing VMs.
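
For reference, the attach-an-existing-VMDK pattern described here looks roughly like the sketch below; the resource names, vmdk_path, datacenter, and datastore values are hypothetical placeholders, not taken from this report:

resource "vsphere_virtual_disk" "data1" {
  # Hypothetical standalone VMDK created in a separate module.
  size       = 10
  vmdk_path  = "myvm/data1.vmdk"
  datacenter = "dc-01"
  datastore  = "datastore1"
  type       = "thin"
}

resource "vsphere_virtual_machine" "myVM" {
  # ... existing VM arguments as in the configuration above ...

  disk {
    # Attach the pre-existing VMDK instead of creating a disk inline; the
    # uuid exported for this block is what fails to propagate when the disk
    # is added after the VM already exists.
    attach       = true
    label        = "data1"
    path         = vsphere_virtual_disk.data1.vmdk_path
    datastore_id = module.vsphere.datastores[var.vm_datastore].id
    unit_number  = 1
  }
}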

@TallFurryMan

We observed the same in our infrastructure, in the context of one VM repeatedly provisioning with its disks enumerated in an "unexpected" order (e.g. the system disk ending up on /dev/sde). We worked around the issue by assuming that the PCI enumeration is more stable, and by using /dev/disk/by-path/pci-(...)-scsi-0:0:<index>:0-part1 as the device name in our cloud-config, which matches the order in which the disks are declared in the VM resource.
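
A rough Terraform-side sketch of that workaround follows; the local value name and the var.scsi_pci_address placeholder for the controller's PCI address are hypothetical, not taken from this comment:

locals {
  # Stable guest device paths derived from the SCSI unit numbers declared in
  # the VM resource (unit 0 is the root disk, data disks start at unit 1),
  # usable in cloud-config in place of the disk uuid.
  datadisk_devices = [
    for i, name in var.datadisks :
    "/dev/disk/by-path/pci-${var.scsi_pci_address}-scsi-0:0:${i + 1}:0-part1"
  ]
}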
