
Error: A specified parameter was not correct: spec.deviceChange.device.port.switchUuid #1737

Open
4 tasks done
revog opened this issue Aug 25, 2022 · 9 comments
Labels
area/networking Area: Networking area/vm Area: Virtual Machines bug Type: Bug needs-triage Status: Issue Needs Triage
Milestone

Comments

@revog

revog commented Aug 25, 2022

Community Guidelines

  • I have read and agree to the HashiCorp Community Guidelines .
  • Vote on this issue by adding a 👍 reaction to the initial description of the issue to help the maintainers prioritize.
  • Do not leave "+1" or other comments that do not add relevant information or questions.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.

Terraform

v1.0.5

Terraform Provider

v2.1.1

VMware vSphere

v7.0.3

Description

Within a cluster migration we have to change the VM's cluster, datastore (SAN to vSAN), and network (vSwitch to DVS).
It seems that the network change does not work correctly.

Error in TF:

│ Error: A specified parameter was not correct: spec.deviceChange.device.port.switchUuid
│ 
│   with module.vsphere.vsphere_virtual_machine.node["xxxxx"],
│   on ../modules/vsphere/vm.tf line 1, in resource "vsphere_virtual_machine" "node":
│    1: resource "vsphere_virtual_machine" "node" {

Error shown in vCenter:
A specified parameter was not correct: spec.deviceChange.device.port.switchUuid Host esx01.local is not a member of VDS vdswitch_cld_appl

esx01.local is a node in the old cluster, in which the networks are configured with standard vSwitches. The new cluster consists of vSAN nodes, where we use distributed vSwitches.

Affected Resources or Data Sources

resource/vsphere_virtual_machine

Terraform Configuration

resource "vsphere_virtual_machine" "node" {
  for_each                    = merge(local.infoblox_cp_nodes, local.infoblox_worker_nodes)
  name                        = split(".", each.key)[0]
  num_cpus                    = (replace(each.key, "cpl", "") != each.key) ? var.vsphere_vm_cp_cpu : var.vsphere_vm_worker_cpu
  memory                      = (replace(each.key, "cpl", "") != each.key) ? var.vsphere_vm_cp_mem : var.vsphere_vm_worker_mem
  guest_id                    = var.vsphere_vm_guest_id
  firmware                    = var.vsphere_vm_firmware
  scsi_type                   = data.vsphere_virtual_machine.template.scsi_type
  resource_pool_id            = data.vsphere_resource_pool.pool.id
  datastore_id                = (var.vsphere_datastore == null ? null : data.vsphere_datastore.datastore[0].id)
  datastore_cluster_id        = (var.vsphere_datastore_cluster == null ? null : data.vsphere_datastore_cluster.datastore[0].id)
  storage_policy_id           = data.vsphere_storage_policy.policy.id
  annotation                  = (replace(each.key, "cpl", "") != each.key) ? "${local.desc_prefix} Control Plane Node ${upper(local.stage)} (managed by Terraform)" : "${local.desc_prefix} Worker Node ${upper(local.stage)} (managed by Terraform)"
  custom_attributes           = local.vm_default_attributes
  folder                      = var.vsphere_cpi_enable == true ? vsphere_folder.folder[0].path : null
  hardware_version            = var.vsphere_hardware_version
  wait_for_guest_net_routable = var.vsphere_wait_for_guest_net_routable
  wait_for_guest_net_timeout  = var.vsphere_wait_for_guest_net_timeout
  tags                        = concat(local.vm_default_tags, replace(substr(split(".", each.key)[0], -1, -1) % 2, 1, 1) == "1" ? [data.vsphere_tag.ts.id] : [data.vsphere_tag.rm.id])

  lifecycle {
    ignore_changes = [num_cpus, memory, memory_reservation, tags, custom_attributes, extra_config, pci_device_id, host_system_id, change_version, guest_ip_addresses, clone]
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.template.id
  }

  disk {
    label             = "system_disk"
    size              = (replace(each.key, "cpl", "") != each.key) ? var.vsphere_vm_cp_disk : var.vsphere_vm_worker_disk
    eagerly_scrub     = data.vsphere_virtual_machine.template.disks.0.eagerly_scrub
    thin_provisioned  = data.vsphere_virtual_machine.template.disks.0.thin_provisioned
    storage_policy_id = data.vsphere_storage_policy.policy.id
  }
  disk {
    label             = "container_disk"
    unit_number       = 1
    size              = (replace(each.key, "cpl", "") != each.key) ? var.vsphere_vm_cp_disk_data : var.vsphere_vm_worker_disk_data
    eagerly_scrub     = data.vsphere_virtual_machine.template.disks.0.eagerly_scrub
    thin_provisioned  = data.vsphere_virtual_machine.template.disks.0.thin_provisioned
    storage_policy_id = data.vsphere_storage_policy.policy.id
  }

  enable_disk_uuid = var.vsphere_cpi_enable == true ? true : false

  network_interface {
    network_id     = data.vsphere_network.network.id
    use_static_mac = true
    mac_address    = lower(each.value[1])
  }

  extra_config = {
    "guestinfo.metadata" = base64gzip(templatefile("${path.module}/cloud-init/metadata.yaml.tpl", {
      instance_id    = each.key
      network_config = base64gzip(file("${path.module}/cloud-init/network-config.yaml.tpl"))
    }))
    "guestinfo.metadata.encoding" = "gzip+base64"
    "guestinfo.userdata" = base64gzip(templatefile("${path.module}/cloud-init/userdata.yaml.tpl", {
      authorized_keys = join("\n", formatlist("  - %s", var.authorized_keys))
      hostname        = split(".", each.key)[0]
      fqdn            = each.key
      ip              = each.value[0]
      data_disk       = file("${path.module}/cloud-init/data-disk.yaml.tpl")
      cmd_suma_reg = templatefile("${path.module}/cloud-init/register-suma.yaml.tpl", {
        suma_server_name = var.suma_url
        bootstrap_file   = var.suma_bootstrap_file
      })
      cmd_cleanup = file("${path.module}/cloud-init/cleanup.yaml.tpl")
      cmd_nvidia  = (replace(each.key, "wkr", "") != each.key) ? file("${path.module}/cloud-init/nvidia.yaml.tpl") : ""
    }))
    "guestinfo.userdata.encoding" = "gzip+base64"
    # GPU PCI Passthru options for worker nodes
    "pciPassthru.use64bitMMIO"    = (replace(each.key, "wkr", "") != each.key) ? "TRUE" : null
    "pciPassthru.64bitMMIOSizeGB" = (replace(each.key, "wkr", "") != each.key) ? "64" : null
  }

  depends_on = [local.infoblox_cp_nodes, local.infoblox_worker_nodes, vsphere_folder.folder, null_resource.suma]
}

Debug Output

https://gist.github.com/revog/82b79a5104f4924e84beb10a4d2f5336

Panic Output

No response

Expected Behavior

The provider migrates the VM to the new cluster, changing the datastore and network as well.

Actual Behavior

VM migration and Terraform execution fail

Steps to Reproduce

  • Change resource pool, datastore and network of existing VM object
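
The reproduction amounts to editing all three references in a single plan/apply. A minimal sketch (the data source names and IDs below are hypothetical, not taken from the configuration above):

```hcl
resource "vsphere_virtual_machine" "node" {
  # Before: old cluster's resource pool, SAN datastore, vSwitch port group.
  # After: all three changed at once in the same apply.
  resource_pool_id = data.vsphere_resource_pool.new_pool.id # new cluster
  datastore_id     = data.vsphere_datastore.vsan.id         # new vSAN datastore

  network_interface {
    network_id = data.vsphere_network.dvs_portgroup.id      # new VDS port group
  }
}
```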

Environment Details

No response

Screenshots

No response

References

No response

@revog revog added bug Type: Bug needs-triage Status: Issue Needs Triage labels Aug 25, 2022
@github-actions github-actions bot removed the bug Type: Bug label Aug 25, 2022
@github-actions

Hello, revog! 🖐

Thank you for submitting an issue for this provider. The issue will now enter into the issue lifecycle.

If you want to contribute to this project, please review the contributing guidelines and information on submitting pull requests.

@tenthirtyam tenthirtyam added bug Type: Bug area/vm Area: Virtual Machines area/networking Area: Networking labels Aug 25, 2022
@tenthirtyam tenthirtyam changed the title VM Migration between clusters does not work (vSwitch / VDS issue) Error: A specified parameter was not correct: spec.deviceChange.device.port.switchUuid Aug 25, 2022
@haydenseitz

haydenseitz commented Nov 4, 2022

👋 I'm having the same issue: I receive Error: A specified parameter was not correct: spec.deviceChange.device.port.switchUuid when attempting to migrate a VM from a host cluster using a standard vSwitch network to a host cluster using a VDS port group.

EDIT:

  • Terraform v1.2.2
  • provider[registry.terraform.io/hashicorp/vsphere] 2.0.2
  • vSphere Client version 7.0.3.00700

@rethridge-lbi

rethridge-lbi commented Nov 29, 2022

I have the same issue.
vSphere provider 2.2.0
Terraform 1.2.3

Our network was moved to a VDS port group, and now Terraform fails when trying to reconfigure the VM:

error reconfiguring virtual machine: A specified parameter was not correct: spec.deviceChange.device.port.switchUuid

@jcpowermac
Contributor

I just ran into this as well - it was self-inflicted, but it may provide some insight.
We provide managed object IDs to network_id to avoid duplicate network names.

Do the vSphere environments where this occurs have multiple vCenter clusters with identically named port groups?
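
If duplicate port group names across clusters are a contributing factor, one possible mitigation is to scope the network lookup to a specific distributed switch so the provider resolves the port group unambiguously. A minimal sketch; the VDS and port group names are hypothetical:

```hcl
# Resolve the VDS that backs the target cluster's port groups.
data "vsphere_distributed_virtual_switch" "dvs" {
  name          = "vdswitch_cld_appl" # hypothetical VDS name
  datacenter_id = data.vsphere_datacenter.dc.id
}

# Scope the port group lookup to that VDS so an identically named
# port group on another switch cannot be matched by accident.
data "vsphere_network" "network" {
  name                            = "pg-app" # hypothetical port group name
  datacenter_id                   = data.vsphere_datacenter.dc.id
  distributed_virtual_switch_uuid = data.vsphere_distributed_virtual_switch.dvs.id
}
```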

@tenthirtyam tenthirtyam added this to the Backlog milestone May 4, 2023
@GarinKartes

GarinKartes commented Oct 6, 2023

Currently experiencing this issue after adding the option to target two vSphere hosts in the same plan. What makes this happen? Is there a fix or workaround?

Terraform version 1.5.7
vSphere provider 2.4.3

@TobiPeterG

I also experience this issue when creating a VM with a manual MAC address for its network interface. It happens about 50% of the time; the behaviour is not deterministic.
I'd be happy to help solve this issue :)
Using vSphere provider 2.8.2

@TobiPeterG

We found the issue on our side: some hosts were not members of the network on which I wanted to create the VM with a network interface.
That led to the error; the vSphere log was more helpful:
A specified parameter was not correct: spec.deviceChange.device.port.switchUuid Host VSPHERE_HOST is not a member of the VDS NETWORK

@dmpopoff

I also encountered this error. We are trying to migrate VMs from one cluster to another using Terraform, simultaneously changing the cluster and the port group.
Terraform version v1.8.5
vSphere Terraform provider v2.8.3
vSphere version 7.0.3, build 23794027

The migration takes place from one distributed switch to another. The storage does not change.

It seems the problem is that the provider applies one change first - either the cluster change, leaving the switch change for later, or vice versa. If the migration of both the cluster and the network were done in one transaction, it would presumably succeed.
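
One workaround sometimes reported for this ordering problem (not verified against this provider version; the attribute name mirrors the resource's schema) is to split the move into two applies: first let the VM migrate to the new cluster while the NIC change is suppressed, then remove the override and apply again to switch the port group:

```hcl
resource "vsphere_virtual_machine" "node" {
  # ... cluster, datastore, and disk settings unchanged ...

  lifecycle {
    # Phase 1: keep the NIC on the old port group while the VM is
    # vMotioned to the new cluster. For phase 2, remove this entry
    # and apply again so the NIC moves to the new VDS port group.
    ignore_changes = [network_interface]
  }
}
```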

@hervedevos

hervedevos commented Nov 19, 2024

I am also encountering this issue while migrating a VM from one cluster to another.

When I look at the vSphere logs, they show that the provider tries to move the NICs to a target cluster port group before actually moving the VM to the target cluster, resulting in the following error:

Host A-host-from-source-cluster is not a member of VDS one-VDS-from-target-cluster

9 participants