
network_interface in the wrong order of IP configuration for r/virtual_machine #1147

Closed
skydion opened this issue Jul 28, 2020 · 5 comments
Assignees
Labels
acknowledged Status: Issue or Pull Request Acknowledged area/guest Area: Guest Operating System bug Type: Bug not-reproduced Status: Not Reproduced size/m Relative Sizing: Medium

Comments

skydion commented Jul 28, 2020

Terraform Version

Terraform v0.12.29

vSphere Provider Version

provider.vsphere v1.19.0

Affected Resource(s)

vsphere_virtual_machine

Terraform Configuration Files

> var.cp_ip_addresses
{
  "0public" = [
    "69.168.x.y",
  ]
  "1mgmt" = [
    "192.168.16.5",
  ]
  "3appliance" = [
    "192.168.32.5",
  ]
  "4provisioning" = [
    "192.168.40.5",
  ]
  "5provisioning" = [
    "192.168.40.100",
  ]
  "6appliance" = [
    "192.168.32.100",
  ]
}
locals {
  cp_name  = ["tfcp10"]
  cp_count = contains(keys(var.cp_ip_addresses), "1mgmt") == true ? length(var.cp_ip_addresses["1mgmt"]) : 0

  cp_available_networks = {
    "0public"       = local.netids["0public"]
    "1mgmt"         = local.netids["1mgmt"]
    "3appliance"    = local.netids["3appliance"]
    "4provisioning" = local.netids["4provisioning"]
    "5provisioning" = local.netids["4provisioning"]
    "6provisioning" = local.netids["3appliance"]
  }

  cp_public_net = lookup(local.cp_available_networks, "0public", "")
}

resource "vsphere_virtual_machine" "cp" {
  count            = local.cp_count
  name             = local.cp_name[count.index]
  num_cpus         = var.num_cpus["control_panel"][count.index]
  memory           = var.memory["control_panel"][count.index]
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastore.id
  guest_id         = data.vsphere_virtual_machine.template.guest_id
  scsi_type        = data.vsphere_virtual_machine.template.scsi_type

  dynamic "network_interface" {
    for_each = {
      for key, value in local.cp_available_networks :
      key => value
      if value != ""
    }

    content {
      network_id   = network_interface.value
      adapter_type = data.vsphere_virtual_machine.template.network_interface_types[0]
    }
  }

  disk {
    label            = "disk0"
    size             = data.vsphere_virtual_machine.template.disks.0.size
    thin_provisioned = data.vsphere_virtual_machine.template.disks.0.thin_provisioned
  }

  clone {
    template_uuid = data.vsphere_virtual_machine.template.id

    customize {
      linux_options {
        host_name = local.cp_name[count.index]
        domain    = var.domain
      }

      dynamic "network_interface" {
        for_each = {
          for key, value in var.cp_ip_addresses :
          key => value[count.index]
        }

        content {
          ipv4_address = network_interface.value
          ipv4_netmask = 24
        }
      }

      ipv4_gateway    = local.cp_public_net != "" ? var.gateways["public"] : var.gateways["mgmt"]
      dns_server_list = var.dns
    }
  }
}

Debug Output

Panic Output

Expected Behavior

I expect that the interfaces inside the VM will have IP addresses assigned in the order described in the Terraform output:

            network_interface {
                ipv4_address = "69.168.x.y"
                ipv4_netmask = 24
                ipv6_netmask = 0
            }
            network_interface {
                ipv4_address = "192.168.16.5"
                ipv4_netmask = 24
                ipv6_netmask = 0
            }
            network_interface {
                ipv4_address = "192.168.32.5"
                ipv4_netmask = 24
                ipv6_netmask = 0
            }
            network_interface {
                ipv4_address = "192.168.40.5"
                ipv4_netmask = 24
                ipv6_netmask = 0
            }
            network_interface {
                ipv4_address = "192.168.40.100"
                ipv4_netmask = 24
                ipv6_netmask = 0
            }
            network_interface {
                ipv4_address = "192.168.32.100"
                ipv4_netmask = 24
                ipv6_netmask = 0
            }

Actual Behavior

Instead, the addresses are assigned in this order:

192.168.40.5
69.168.x.y
192.168.40.100
192.168.16.5
192.168.32.100
192.168.32.5
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:9f:66:e9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.40.5/24 brd 192.168.40.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe9f:66e9/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:9f:e1:98 brd ff:ff:ff:ff:ff:ff
    inet 69.168.x.y/24 brd 69.168.x.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe9f:e198/64 scope link 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:9f:db:2f brd ff:ff:ff:ff:ff:ff
    inet 192.168.40.100/24 brd 192.168.40.255 scope global noprefixroute eth2
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe9f:db2f/64 scope link 
       valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:9f:17:11 brd ff:ff:ff:ff:ff:ff
    inet 192.168.16.5/24 brd 192.168.16.255 scope global noprefixroute eth3
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe9f:1711/64 scope link 
       valid_lft forever preferred_lft forever
6: eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:9f:8d:e4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.32.100/24 brd 192.168.32.255 scope global noprefixroute eth4
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe9f:8de4/64 scope link 
       valid_lft forever preferred_lft forever
7: eth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:9f:97:c7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.32.5/24 brd 192.168.32.255 scope global noprefixroute eth5
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe9f:97c7/64 scope link 
       valid_lft forever preferred_lft forever

Steps to Reproduce

Important Factoids

References

  • #0000

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@skydion skydion added the bug Type: Bug label Jul 28, 2020
@bill-rich bill-rich added acknowledged Status: Issue or Pull Request Acknowledged size/m Relative Sizing: Medium labels Jul 28, 2020
@tenthirtyam tenthirtyam added the area/guest Area: Guest Operating System label Feb 22, 2022

tenthirtyam commented Feb 26, 2022

@skydion - are you still seeing this issue?

The network_interface blocks should retain their ordering, as they are a TypeList.
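As an aside, Terraform iterates map keys in lexicographic order (for example, in a dynamic block's `for_each`), which appears to be why the configuration above prefixes each key with a digit to force a deterministic order. A minimal Python sketch of that ordering, using the keys from the reporter's `var.cp_ip_addresses`:

```python
# Keys from the reported configuration, listed in arbitrary order.
keys = ["1mgmt", "6appliance", "0public", "4provisioning",
        "5provisioning", "3appliance"]

# Terraform emits dynamic blocks in lexicographic key order;
# sorted() reproduces that order here.
print(sorted(keys))
# → ['0public', '1mgmt', '3appliance', '4provisioning',
#    '5provisioning', '6appliance']
```

If the plan shows the blocks in this order but the guest OS assigns them differently, the reordering is happening at or below the vSphere layer rather than in Terraform's configuration ordering.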

Can you provide a redacted but reusable version of your configuration for reproduction?

Ryan

@tenthirtyam tenthirtyam added the waiting-response Status: Waiting on a Response label Feb 26, 2022
@tenthirtyam tenthirtyam self-assigned this Feb 26, 2022
@tenthirtyam tenthirtyam added this to the Research milestone Mar 1, 2022
@tenthirtyam tenthirtyam changed the title vsphere_virtual_machine network_interface wrong order of IP configuration inside VM network_interface in the wrong order of IP configuration for r/virtual_machine Mar 1, 2022

saintdle commented Mar 1, 2022

If an OVA has two or more networks, the wrong networks may be assigned to the adapters when it is deployed. Confirmed with multiple provider versions, including the latest 2.1.0.

Example OVA used - VMware Data Management for VMware Tanzu - DMS Provider OVA - dms-provider-va-1.1.0.1577-18978276.ova

Example code can be found here

In the code below, the wrong network labels are configured for the two networks mapped in my OVA file. When the VM is powered on, it cannot be reached over the network (ping fails), but if I manually edit the VM properties and swap the two VM networks, the VM responds to ping.

data "vsphere_ovf_vm_template" "ovf" {

  name             = "${var.name}"
  resource_pool_id = "${var.resource_pool_id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"
  host_system_id   = "${data.vsphere_host.host.id}"
  local_ovf_path   = "${var.local_ovf_path}"
  ovf_network_map = {
    "Management Network": "${data.vsphere_network.mgmt_network.id}"
    "Control Plane Network": "${data.vsphere_network.control_plane_network.id}"
    }
  }

resource "vsphere_virtual_machine" "vm" {
  name             = "${var.name}"
  num_cpus         = 8
  memory           = 16384
  resource_pool_id = "${var.resource_pool_id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"
  folder           = "${var.folder}"
  wait_for_guest_net_timeout = 0
  wait_for_guest_ip_timeout  = 0
  datacenter_id    = "${data.vsphere_datacenter.dc.id}"
  host_system_id   = "${data.vsphere_host.host.id}"

  dynamic "network_interface" {
    for_each = "${data.vsphere_ovf_vm_template.ovf.ovf_network_map}"
    content {
      network_id = network_interface.value
    }
  }

  ovf_deploy {
    ovf_network_map   = "${data.vsphere_ovf_vm_template.ovf.ovf_network_map}"
    local_ovf_path    = "${data.vsphere_ovf_vm_template.ovf.local_ovf_path}"
    disk_provisioning = "thin"
  }
}
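One possible explanation for the mis-mapping (an assumption on my part, not confirmed against the provider source): `ovf_network_map` is a map keyed by network name, so a dynamic `for_each` over it yields NICs in lexicographic key order, which need not match the order the networks are declared in the OVA descriptor. For the two names in the example above:

```python
# Network names from the ovf_network_map in the example above.
# If the OVA declares "Management Network" first, lexicographic
# iteration reverses the pair, which would attach each adapter
# to the other port group.
ovf_networks = ["Management Network", "Control Plane Network"]
print(sorted(ovf_networks))
# → ['Control Plane Network', 'Management Network']
```

Under that assumption, the order the NIC blocks are generated in would not line up with the OVA's adapter order, matching the observed swap.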

@github-actions github-actions bot removed the waiting-response Status: Waiting on a Response label Mar 1, 2022
@tenthirtyam tenthirtyam removed their assignment Mar 21, 2022
@tenthirtyam tenthirtyam self-assigned this Jun 15, 2024
@tenthirtyam tenthirtyam removed their assignment Aug 20, 2024
@burnsjared0415 burnsjared0415 self-assigned this Oct 8, 2024
@burnsjared0415 burnsjared0415 modified the milestones: Backlog, On Deck Oct 8, 2024
@burnsjared0415
Collaborator

I ran through a test and this worked for me as written:

data "vsphere_datacenter" "datacenter" {
  name = "dc01"
}

data "vsphere_compute_cluster" "cluster" {
  name          = "cluster-01"
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_network" "network1" {
  name          = "vlan_10_0"
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_network" "network2" {
  name          = "vlan_10_1"
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_network" "network3" {
  name          = "vlan_private"
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_network" "network4" {
  name          = "vlan_10_2"
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_datastore" "datastore" {
  name          = "terraform-cl01-ds01"
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

data "vsphere_virtual_machine" "template" {
  name          = "linux-temp"
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

resource "vsphere_virtual_machine" "vm" {
  name             = "foo-lin"
  resource_pool_id = data.vsphere_compute_cluster.cluster.resource_pool_id
  datastore_id     = data.vsphere_datastore.datastore.id
  num_cpus         = 2
  memory           = 2048
  guest_id         = data.vsphere_virtual_machine.template.guest_id
  firmware         = data.vsphere_virtual_machine.template.firmware
  scsi_type        = data.vsphere_virtual_machine.template.scsi_type
  network_interface {
    network_id = data.vsphere_network.network1.id
  }

  network_interface {
    network_id = data.vsphere_network.network2.id
  }

  network_interface {
    network_id = data.vsphere_network.network3.id
  }

  network_interface {
    network_id = data.vsphere_network.network4.id
  }

  disk {
    label            = "disk0"
    size             = data.vsphere_virtual_machine.template.disks.0.size
    thin_provisioned = data.vsphere_virtual_machine.template.disks.0.thin_provisioned
  }
  clone {
    template_uuid = data.vsphere_virtual_machine.template.id

    customize {
      linux_options {
        host_name = "foo-lin"
        domain    = "example.com"
      }

      network_interface {
        ipv4_address = "10.10.0.10"
        ipv4_netmask = 24
      }

      network_interface {
        ipv4_address = "10.10.1.10"
        ipv4_netmask = 24
      }

      network_interface {
        ipv4_address = "192.168.20.11"
        ipv4_netmask = 24
      }

      network_interface {
        ipv4_address = "10.10.2.10"
        ipv4_netmask = 24
      }

      ipv4_gateway = "10.10.0.1"
    }
  }
}

@burnsjared0415 burnsjared0415 added waiting-response Status: Waiting on a Response not-reproduced Status: Not Reproduced labels Oct 8, 2024
@tenthirtyam
Collaborator

Marking as closed. Please open a new issue and reference #1147 if the problem persists.

@tenthirtyam tenthirtyam closed this as not planned Won't fix, can't repro, duplicate, stale Oct 16, 2024
@github-actions github-actions bot removed the waiting-response Status: Waiting on a Response label Oct 16, 2024
@tenthirtyam tenthirtyam removed this from the On Deck milestone Oct 25, 2024

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 25, 2024