
azurerm_storage_container: migrating from storage_account_name to storage_account_id forces replacement #27942

Open
SanderBlom opened this issue Nov 8, 2024 · 36 comments


@SanderBlom

Is there an existing issue for this?

  • I have searched the existing issues

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave comments along the lines of "+1", "me too" or "any updates", they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment and review the contribution guide to help.

Terraform Version

1.9.8

AzureRM Provider Version

4.9.0

Affected Resource(s)/Data Source(s)

azurerm_storage_container

Terraform Configuration Files

resource "azurerm_storage_account" "app_assets" {
  name                     = "random_name"
  resource_group_name      = data.azurerm_resource_group.workload.name
  location                 = data.azurerm_resource_group.workload.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
  account_kind             = "StorageV2"
  tags                     = var.tags
}

resource "azurerm_storage_container" "app_files" {
  name                  = "files"
  storage_account_id    = azurerm_storage_account.app_assets.id
  container_access_type = "private"
}

Debug Output/Panic Output

# azurerm_storage_container.app_files must be replaced
-/+ resource "azurerm_storage_container" "app_files" {
      ~ default_encryption_scope          = "$account-encryption-key" -> (known after apply)
      ~ has_immutability_policy           = false -> (known after apply)
      ~ has_legal_hold                    = false -> (known after apply)
      ~ id                                = "https://random_name.blob.core.windows.net/files" -> (known after apply)
      ~ metadata                          = {} -> (known after apply)
        name                              = "files"
      ~ resource_manager_id               = "/subscriptions/x/resourceGroups/y/providers/Microsoft.Storage/storageAccounts/z/blobServices/default/containers/files" -> (known after apply)
      + storage_account_id                = "/subscriptions/x/resourceGroups/y/providers/Microsoft.Storage/storageAccounts/z" # forces replacement
      - storage_account_name              = "random_name" -> null # forces replacement
        # (2 unchanged attributes hidden)
    }

Expected Behaviour

I expect that when we are prompted with the warning that "the storage_account_name property has been deprecated in favour of storage_account_id and will be removed in version 5.0 of the Provider", changing from storage_account_name to storage_account_id will not try to recreate the storage container, but instead plan with 0 changes.

Actual Behaviour

When running a plan, I get the message that the azurerm_storage_container has to be recreated because we have added the storage_account_id property and removed the storage_account_name property.

Steps to Reproduce

Create a new storage account and a new storage container where you use the storage_account_name property to reference the storage account. After the resources have been created, try to change from using the storage_account_name to the storage_account_id and observe the plan output.

Important Factoids

No response

References

No response

@Fatalo83

The same issue occurs with the azurerm_storage_share resource. The following example also forces a replacement.

resource "azurerm_storage_share" "file_share" {
  name               = "example"
  #storage_account_name = azurerm_storage_account.example.name
  storage_account_id = azurerm_storage_account.example.id
  quota              = 100
}

@radurobot

radurobot commented Nov 11, 2024

Here's a quick fix until this is resolved:

  1. Option 1: Add a lifecycle block with ignore_changes for storage_account_name.

  2. Option 2: Update the azurerm_storage_container resource by removing storage_account_name and adding storage_account_id:

    resource "azurerm_storage_container" "containerName" {
      name                  = "mycontainer"
      storage_account_id    = azurerm_storage_account.example.id
      container_access_type = "private"
    }

    Then, run the following commands:

    terraform state rm azurerm_storage_container.containerName
    terraform import azurerm_storage_container.containerName <resource_manager_id>
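Option 1 above is not shown in code; a rough sketch might look like the following (the resource and account names are illustrative, and this keeps the deprecated property in place rather than migrating):

```hcl
resource "azurerm_storage_container" "containerName" {
  name                  = "mycontainer"
  storage_account_name  = azurerm_storage_account.example.name
  container_access_type = "private"

  # Suppress the diff on the deprecated property until a proper migration
  lifecycle {
    ignore_changes = [storage_account_name]
  }
}
```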

@eric-mark

We have also run into the same issue (from the original post) and noticed that the container was destroyed in our lower environment.

@enorlando
Copy link

enorlando commented Nov 12, 2024

We are also seeing azurerm_role_assignment resources being re-created when using azurerm_storage_share.xx.resource_manager_id as the scope.

@gerhard-strauss-jambit

You can re-import those resources so that you don't have to re-create them:

terraform state rm azurerm_storage_container.containerName
terraform import azurerm_storage_container.containerName <resource_manager_id>

The important part is to use the resource_manager_id, not the URL, e.g.:

terraform import azurerm_storage_container.containerName /subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/example-resource-group/providers/Microsoft.Storage/storageAccounts/storageAccountName/blobServices/default/containers/ContainerName

If you run a plan beforehand, you can see the current resource_manager_id and use it, at least in the case of blobs. In my tests, the resource_manager_id shown for file shares was incorrect: the tenth path segment was printed as "fileshares" instead of "shares".
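The file share fix described above could also be sketched with a config-driven import block instead of the CLI (all path segments are placeholders); note the corrected "shares" segment where the plan printed "fileshares":

```hcl
import {
  to = azurerm_storage_share.example
  # The plan may print ".../fileServices/default/fileshares/<share>"; use "shares" instead:
  id = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account>/fileServices/default/shares/<share>"
}
```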

@peruzzof

I have too many repos to adjust state in all environments, and the other workaround was not enough, since adding the account ID is by itself enough to trigger a replacement. I will postpone this version. Is there any plan to implement a proper fix in the code?

@shacal

shacal commented Nov 13, 2024

@peruzzof - Same here... this "migration" will be pain in the ass.

│ This property has been deprecated and will be replaced by storage_account_id in version 5.0 of the provider.

│ (and 3463 more similar warnings elsewhere)

@tw-sematell

Seriously, this should just work.

@Pesticles

I can confirm that re-importing using the resource_manager_id attribute value rather than the URL as specified in the documentation does correctly import the resource and clears the replacement.

@babuga365

Is the only option we have now to remove the storage container and storage share resources from the state file and re-import them?
Or do we have any other solutions for this case?

@WodansSon
Collaborator

Linking the PR that deprecated storage_account_name, for context: #27733

@watarukura

terraform state rm and terraform import did not solve azurerm_storage_container force replacement.

  # azurerm_storage_container.tfstate must be replaced
  # (imported from "https://example.blob.core.windows.net/tfstate")
  # Warning: this will destroy the imported resource
-/+ resource "azurerm_storage_container" "tfstate" {
        container_access_type             = "private"
      ~ default_encryption_scope          = "$account-encryption-key" -> (known after apply)
        encryption_scope_override_enabled = true
      ~ has_immutability_policy           = false -> (known after apply)
      ~ has_legal_hold                    = false -> (known after apply)
      ~ id                                = "https://example.blob.core.windows.net/tfstate" -> (known after apply)
      ~ metadata                          = {} -> (known after apply)
        name                              = "tfstate"
      ~ resource_manager_id               = "/subscriptions/******resourceGroups/tfstate/providers/Microsoft.Storage/storageAccounts/example/blobServices/default/containers/tfstate" -> (known after apply)
      + storage_account_id                = "/subscriptions/******/resourceGroups/tfstate/providers/Microsoft.Storage/storageAccounts/example" # forces replacement
      - storage_account_name              = "example" -> null # forces replacement
    }

Plan: 1 to import, 1 to add, 0 to change, 1 to destroy.

@Pesticles

When you do the import, use the value from "resource_manager_id" instead of the ID/URL (disregard the documentation)

@watarukura

When you do the import, use the value from "resource_manager_id" instead of the ID/URL (disregard the documentation)

Oh, thanks! I'll try it.

@DT-BMA

DT-BMA commented Nov 19, 2024

Adding to this issue: I am running terraform fmt in both local (Ubuntu 24.04) and GitHub Actions workflows, and the checks fail and interrupt deployments because my environments cannot agree on which property they prefer.

storage_account_id is too new for my Terraform on Ubuntu 24.04/22.04 (WSL), but storage_account_name is too old on the GitHub Actions side, where terraform fmt -check doesn't just throw a passable warning but errors out. This is apparently new behavior and, unlike other 'deprecated' configs, is causing issues.

@enorlando

enorlando commented Nov 19, 2024

Since the resource_manager_id property is now deprecated in both resources (in particular azurerm_storage_share), and this value is available as id in resources using storage_account_id, after making the suggested changes we are still seeing all our azurerm_role_assignment resources wanting to be re-created:

  # azurerm_role_assignment.xx must be replaced
-/+ resource "azurerm_role_assignment" "xx" {
      ~ id                                     = "/subscriptions/11111111-1111-1111-1111-11111111111/resourceGroups/xx/providers/Microsoft.Storage/storageAccounts/xx/fileServices/default/fileshares/xx/providers/Microsoft.Authorization/roleAssignments/11111111-1111-1111-1111-11111111111" -> (known after apply)
      ~ name                                   = "11111111-1111-1111-1111-11111111111" -> (known after apply)
      ~ principal_type                         = "Group" -> (known after apply)
      ~ role_definition_id                     = "/subscriptions/11111111-1111-1111-1111-11111111111/providers/Microsoft.Authorization/roleDefinitions/11111111-1111-1111-1111-11111111111" -> (known after apply)
      ~ scope                                  = "/subscriptions/11111111-1111-1111-1111-11111111111/resourceGroups/xx/providers/Microsoft.Storage/storageAccounts/xx/fileServices/default/fileshares/xx" -> "/subscriptions/11111111-1111-1111-1111-11111111111/resourceGroups/xx/providers/Microsoft.Storage/storageAccounts/xx/fileServices/default/shares/xx" # forces replacement
      + skip_service_principal_aad_check       = (known after apply)
        # (6 unchanged attributes hidden)
    }

Plan: 1 to add, 0 to change, 1 to destroy.

REF: #28023

@patsmitty

Here's my effort to consolidate the full workaround solution until this is properly fixed:

  1. Run the terraform state rm command to remove the container from your state file.
  2. Refactor the azurerm_storage_container resource to use storage_account_id in place of storage_account_name.
  3. Temporarily add the following code to the azurerm_storage_container resource:
  lifecycle {
    ignore_changes = [
      storage_account_name,
    ]
  }
  4. Run the terraform import command to re-add the container to your state file, using the resource_manager_id and not the URL (kudos to @gerhard-strauss-jambit).
  5. Apply your Terraform; it should not force a replacement.
  6. Remove the temporary lifecycle block.
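The refactored resource from these steps might look like the following sketch (resource names are illustrative; the lifecycle block is the temporary one to remove at the end):

```hcl
resource "azurerm_storage_container" "containerName" {
  name                  = "mycontainer"
  storage_account_id    = azurerm_storage_account.example.id
  container_access_type = "private"

  # Temporary: remove once the re-import has been applied cleanly
  lifecycle {
    ignore_changes = [storage_account_name]
  }
}
```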

@jackofallops
Member

Hi folks 👋

Just wanted to clarify this change, and the reason for the addition of the new property. The azurerm_storage_container and azurerm_storage_share resources were developed on the Data Plane APIs and not, as the Provider's name suggests, the Resource Manager APIs. The addition of storage_account_id allows users to optionally start using the Resource Manager implementation in place of the Data Plane one.

These two approaches are not directly interchangeable. The IDs for the respective implementations are very different: Data Plane uses an https URL, while Resource Manager uses the familiar /subscriptions/... reference. Both storage_account_name and storage_account_id are ForceNew properties (storage_account_name has always been ForceNew). Since these properties control which API is used, and therefore which ID type Terraform uses to track the resource, they are not interchangeable without removing and re-importing the resource.

This deprecation work has been done to give all users as much time as possible to plan and execute the migration before the release of 5.0 (which we anticipate will be mid to late 2025), at which time the storage_account_name property will be removed. We are investigating options to make it easier for users to adopt new major versions and to reduce the load of updating configuration and managing state, which we hope will be available by then.

For broader context: this is part of an effort to separate out, and reduce the use of, Data Plane APIs in the Resource Manager Provider. We understand the value of Resources and Data Sources that are only available via Data Plane and have no comparable RM alternative, so we are continuing to support these where possible and splitting the Data Plane functionality out of RM-based resources. For example, the new azurerm_storage_account_queue_properties and azurerm_storage_account_static_website resources were introduced to replace the now-deprecated queue_properties and static_website blocks of azurerm_storage_account.

I'll take a look at adding some additional guidance on this to the resource(s) documentation, since it's apparent this was not sufficiently well explained.

Hope that helps.

@Pesticles

@jackofallops that's all fine and a laudable goal, however you haven't addressed the issue at the core of this ticket which is that the new attribute appears to be mandatory, and forces a resource replacement.

It seems to me that a change of this type would have been better implemented as a new resource, whilst deprecating the old one.

@rseoaneg

rseoaneg commented Nov 25, 2024

For basic use cases it can be done in a single step with HCL as follows:

# resource "azurerm_storage_container" "original" {  # The original code needs to be commented.
#   name                  = var.container_name
#   storage_account_name  = var.sa_name
#   container_access_type = "private"
# }

removed {
  from = azurerm_storage_container.original
  lifecycle {
    destroy = false
  }
}

resource "azurerm_storage_container" "new" {
  name                  = var.container_name
  storage_account_id    = format("/subscriptions/%s/resourceGroups/%s/providers/Microsoft.Storage/storageAccounts/%s", var.subs_id, var.rg_name, var.sa_name)
  container_access_type = "private"
  lifecycle {
    ignore_changes = [storage_account_name]
  }
}

import {
  to = azurerm_storage_container.new
  id = format("/subscriptions/%s/resourceGroups/%s/providers/Microsoft.Storage/storageAccounts/%s/blobServices/default/containers/%s", var.subs_id, var.rg_name, var.sa_name, var.container_name)
}

After applying the changes, remove the lifecycle block as well as the import and removed blocks.

@CorrenSoft
Contributor

@jackofallops that's all fine and a laudable goal, however you haven't addressed the issue at the core of this ticket which is that the new attribute appears to be mandatory, and forces a resource replacement.

It seems to me that a change of this type would have been better implemented as a new resource, whilst deprecating the old one.

It's not yet mandatory; it will be in version 5.x. You can still continue using the storage account name, and you will receive a warning.

@jackofallops I think it will be necessary to track down the places where the "old" container ID may be used, and figure out what to do about them.
As an example, I ran into a problem with the "hdinsight_kafka_cluster" resource, which breaks completely once you use a storage container created with the new implementation and reference its id attribute.
PS: Let me know if you want me to create a new issue for this scenario.

@jackofallops
Member

@jackofallops that's all fine and a laudable goal, however you haven't addressed the issue at the core of this ticket which is that the new attribute appears to be mandatory, and forces a resource replacement.
It seems to me that a change of this type would have been better implemented as a new resource, whilst deprecating the old one.

It's not yet mandatory; it will be in version 5.x. You can still continue using the storage account name, and you will receive a warning.

@jackofallops I think that it will be necessary to track down places in where the "old" container id may be used, and figure it out what to do. As example, i found myself with a problem with "hdinsight_kafka_cluster" resource, which breaks completely once you are using a storage container created with the new implementation and using the id attribute. PS: Let me know if you want me to create a new issue with this scenario.

Thanks @CorrenSoft - Issues with details are always welcome ❤️. We're working on this as an extended piece of work and dealing with the downstream complications as we uncover them (some we are already aware of). The initial work was shipped to unblock users who couldn't use the resource at all in its previous state, and we're running along behind it, updating things to be compatible or to offer the option of using either until 5.0.

Thanks again!

@gygitlab

gygitlab commented Dec 4, 2024

It might be useful to know that AWS tried something very similar a while back, and it was not received well at all by the community; they eventually had to implement a more graceful solution.

State migrations are a painful action to take for a lot of folks and can have numerous consequences downstream. If possible, I would recommend a more graceful solution is explored here.

@snooops

snooops commented Dec 5, 2024

Why not simply allow both and store both in the state file? Make it OK to supply the ID AND the name, but internally use the ID when you need to. It would make the transition smoother.

@pumacln

pumacln commented Dec 5, 2024

@patsmitty
Thanks for summarizing this. While not ideal, at least it allows us to move forward.

Let's hope a permanent fix is implemented soon. There are too many instances where this needs to be updated, and the workaround takes a lot of work.

@magodo
Collaborator

magodo commented Dec 7, 2024

Tried to compose the information and provide a guide for the manual migration: https://gist.github.com/magodo/386c3ff5a9acd4cf6eb76540d39b2e84

@ColinM9991

Joy, another series of terraform import commands to run. Getting a bit fed up of constantly having to hand-hold and guide Terraform when it could just as easily handle these scenarios automatically. There's no reason for the tool, nor the AzureRM provider, to be this brittle.

@andersro93

As others have already pointed out, the simple solution here is to:

  1. Run: terraform state rm azurerm_storage_container.your_container_name
  2. Add the following import block to your code:

locals {
  // Insert your values here
  subscription_id = ""
  resource_group_name = ""
  storage_account_name = ""
  storage_container_name = ""
}

import {
  id = "/subscriptions/${local.subscription_id}/resourceGroups/${local.resource_group_name}/providers/Microsoft.Storage/storageAccounts/${local.storage_account_name}/blobServices/default/containers/${local.storage_container_name}"
  to = azurerm_storage_container.your_container_name
}

resource "azurerm_storage_container" "your_container_name" {
  // ...
}

  3. Verify in the plan that it shows only an import and no replacement, then apply.

@adelinn

adelinn commented Dec 13, 2024

Why not simply allowing both and storing both in the statefile? Making it ok to supply the id AND the name, but internally you can use the id if you need to. Would make the transition more smoooth.

+1 for this.
I think a phased migration to storage_account_id would be best.
I don't know if it's technically possible, but why not make storage_account_id a computed value for now, keep it that way for a few versions so that the majority of people adopt it, and then switch to storage_account_name being computed and storage_account_id being required (maybe in v5.x)?

@maximveksler

The quality of the azurerm provider is below acceptable. This change, unless carefully observed, is 100% data loss.

@franklouwers

There might be good reasons to do this, underlying API constraints and so on, we get that.

The problem is:

  • non-graceful data destruction when doing exactly what the warning tells you to do
  • non-graceful data destruction when trying to be smart, doing the import dance and using the latest docs (which are wrong) to do the import.

This is not a good thing...

@maximveksler

@jackofallops

Some attention to this would be wonderful.

@eklesel

eklesel commented Jan 9, 2025

Just want to add this here in case anyone trips up at the same place I did.

The documentation states to use the https endpoint of the container (in my case) to import the resource:

terraform import azurerm_storage_container.container1 https://example.blob.core.windows.net/container

This will result in storage_account_name being populated rather than the ID, and you'll be in the same position. You need to use the resource ID, the /subscriptions/<sub_id>/bla/bla/bla address, in order to populate the storage_account_id property and avoid a forced replacement.
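To illustrate the difference with a config-driven import block (placeholders throughout; the commented URL form is the one the documentation suggests and the one to avoid here):

```hcl
# Wrong for this migration: importing by the blob endpoint URL populates storage_account_name
#   terraform import azurerm_storage_container.container1 https://example.blob.core.windows.net/container
# Instead, import by the Resource Manager ID so storage_account_id is populated:
import {
  to = azurerm_storage_container.container1
  id = "/subscriptions/<sub_id>/resourceGroups/<resource_group>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default/containers/container"
}
```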

@franklouwers

@eklesel Yes, it's really two problems:

  1. The "happy path" causes data destruction.
  2. Following the docs is wrong and will again lead to data destruction if you're not careful.

@priyankar-dutta

The happy path causing data destruction when proper controls and monitoring are not in place is plain bad and unacceptable. I wonder how much damage it will cause. This really should be fixed.

@tbosnjak

I can confirm that the issue is present in version 4.15.0 of the provider.
