Terraform Bigquery create tables replace table instead of edit - google-cloud-platform

I added a JSON file that contains all the tables I want to create:
tables.json
"tables": {
"table1": {
"dataset_id": "dataset1",
"table_id": "table1",
"schema_path": "folder/table1.json"
},
"table2": {
"dataset_id": "dataset2",
"table_id": "table2",
"schema_path": "folder/table2.json"
}
}
Then, with a for_each on a Terraform resource, I want to create these tables dynamically:
local.tf file
locals {
  tables = jsondecode(file("${path.module}/resource/tables.json"))["tables"]
}
variables.tf file
variable "project_id" {
description = "Project ID, used to enforce providing a project id."
type = string
}
variable "time_partitioning" {
description = "Configures time-based partitioning for this table."
type = map(string)
default = {
type = "DAY"
field = "my_field"
}
}
main.tf file
resource "google_bigquery_table" "tables" {
for_each = local.tables
project = var.project_id
dataset_id = each.value["dataset_id"]
table_id = each.value["table_id"]
dynamic "time_partitioning" {
for_each = [
var.time_partitioning
]
content {
type = try(each.value["partition_type"], time_partitioning.value["type"])
field = try(each.value["partition_field"], time_partitioning.value["field"])
expiration_ms = try(time_partitioning.value["expiration_ms"], null)
require_partition_filter = try(time_partitioning.value["require_partition_filter"], null)
}
}
schema = file("${path.module}/resource/schema/${each.value["schema_path"]}")
}
The schema files contain a classic BigQuery schema, for example:
[
  {
    "name": "field",
    "type": "STRING",
    "mode": "NULLABLE",
    "description": "My field"
  }
]
Creating the tables works well, but when I add a new nullable field to a schema, Terraform proposes to "replace table" (destroy and recreate) instead of "update table".
The normal behaviour in this case, for both native BigQuery and Terraform, is to update the table.
When I do the same test with the same Terraform resource but without for_each, Terraform has the expected behaviour and proposes to update the table.
An example of the Terraform plan output with for_each:
# google_bigquery_table.tables["table1"] must be replaced
-/+ resource "google_bigquery_table" "tables" {
~ creation_time = 1616764894477 -> (known after apply)
dataset_id = "dataset1"
deletion_protection = true
~ etag = "G9qwId8jgQS8nN4N61zqcA==" -> (known after apply)
~ expiration_time = 0 -> (known after apply)
~ id = "projects/my-project/datasets/dataset1/tables/table1" -> (known after apply)
- labels = {} -> null
~ last_modified_time = 1617075251337 -> (known after apply)
~ location = "EU" -> (known after apply)
~ num_bytes = 0 -> (known after apply)
~ num_long_term_bytes = 0 -> (known after apply)
~ num_rows = 0 -> (known after apply)
project = "project"
~ schema = jsonencode(
~ [ # forces replacement
{
description = "Field"
mode = "NULLABLE"
name = "field"
type = "STRING"
}
.....
+ {
+ description = "Field"
+ mode = "NULLABLE"
+ name = "newField"
+ type = "STRING"
}
Terraform correctly detects and displays the new column to add to the table, but proposes a replacement instead of an update.
To repeat: the exact same test with the same Terraform resource, without for_each and on a single BigQuery table, works well (same schema, same change). I create the table, and when I add a new nullable column, Terraform proposes an in-place update (the expected behaviour).
I checked the Terraform documentation and searched the web, but I didn't find any examples managing a list of tables with Terraform.
Is it not possible to create and update tables from a table configuration with for_each?
Thanks for your help.

This sounded like a provider bug. I found this issue in the terraform-provider-google repository that seems related to yours. The fix was merged just 13 hours ago (at the time of writing). So maybe you can wait for the next release (v3.63.0) and see if it fixes your issue.
Just FYI: you might want to verify that the fix commit was actually included in the next release. It has happened to me before that something merged into master before a release was not actually part of that release.

Thanks so much @Alessandro, the problem was indeed due to the version of the Google Terraform provider.
I was using v3.62.0 of the Google provider, and you pointed me in the right direction.
I also found this link: https://github.com/hashicorp/terraform-provider-google/issues/8503
There is a very useful comment by "tpolekhin" (thanks to him):
Hopefully I'm not beating a dead horse commenting on the closed issue, but I did some testing with various versions of the provider, and it behaves VERY differently each time.
So, our terraform code change was pretty simple: add 2 new columns to existing BigQuery table SCHEDULE
Nothing changes between runs - only provider version
v3.52.0
Plan: 0 to add, 19 to change, 0 to destroy.
Mostly adds + mode = "NULLABLE" to fields in bunch of tables, and adds 2 new fields in SCHEDULE table
v3.53.0
Plan: 0 to add, 2 to change, 0 to destroy.
Adds 2 new fields to SCHEDULE table, and moves one field in another table in a different place (sorting?)
v3.54.0
Plan: 1 to add, 1 to change, 1 to destroy.
Adds 2 new fields to SCHEDULE table, and moves one field in another table in a different place (sorting?) but now with table re-creation for some reason
v3.55.0
Plan: 0 to add, 2 to change, 0 to destroy.
Adds 2 new fields to SCHEDULE table, and moves one field in another table in a different place (sorting?)
behaves exactly like v3.53.0
v3.56.0
Plan: 1 to add, 0 to change, 1 to destroy.
In this comment, we can see that some versions have the problem; for example, it works with v3.55.0 but not with v3.56.0.
I temporarily downgraded the provider to v3.55.0, and when a future release solves this issue, I will upgrade again.
provider.tf:
provider "google" {
  version = "= 3.55.0"
}
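As a side note, on Terraform 0.13 and later the provider version is usually pinned through a required_providers block rather than the version argument inside the provider block; a minimal sketch of an equivalent pin (assuming the hashicorp/google source):
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "= 3.55.0"
    }
  }
}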

Related

Terraform separate input variables via IF statement according to values of another input variable

I have two Elasticsearch services managed with Terraform, but one is version 6.8 while the other is 7.10. The problem is that I had to define the ebs_options block because of the instance size I am using. However, when I run the terraform plan command after defining this, I get the following output:
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# module.aws-opensearch.aws_elasticsearch_domain.elastic-domains[1] will be updated in-place
~ resource "aws_elasticsearch_domain" "elastic-domains" {
id = "arn:aws:es:eu-central-1:xxx:domain/new-elastic"
tags = {
"Environment" = "test"
"Name" = "new-elastic"
"Namespace" = "test"
}
# (9 unchanged attributes hidden)
~ ebs_options {
- iops = 3000 -> null
# (4 unchanged attributes hidden)
}
# (13 unchanged blocks hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
Even though I apply it, I get the same output every time I run the terraform apply command.
When I researched this a bit: Elasticsearch 7.10 uses gp3 storage, but version 6.8 uses gp2, and there are some differences in the defaults between the two; iops is one of them.
How can I overcome this problem? Since I defined everything under a single module, I cannot set it separately per domain.
I have the Terraform configuration below:
main.tf
resource "aws_elasticsearch_domain" "elastic-domains" {
count = length(var.domain_names)
domain_name = var.domain_names[count.index].domain_name
elasticsearch_version = var.domain_names[count.index].elasticsearch_version
...
ebs_options {
ebs_enabled = true
volume_size = 50
}
}
variables.tf
variable "domain_names" {
  type = list(object({
    domain_name           = string
    elasticsearch_version = number
  }))
}
terraform.tfvars
domain_names = [
  {
    domain_name           = "elastic"
    elasticsearch_version = "6.8"
  },
  {
    domain_name           = "new-elastic"
    elasticsearch_version = "7.10"
  }
]
You can conditionally set iops to null depending on the version, e.g.:
ebs_options {
  ebs_enabled = true
  volume_size = 50
  iops        = startswith(var.domain_names[count.index].elasticsearch_version, "7") ? 3000 : null
}
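Note that startswith is only available in recent Terraform versions (it was added in 1.3). If that function is not available to you, a numeric comparison should work as well, since elasticsearch_version is declared as a number in variables.tf; a rough sketch:
ebs_options {
  ebs_enabled = true
  volume_size = 50
  # gp3 (and therefore an explicit iops value) only applies to the 7.x domain in this setup
  iops        = var.domain_names[count.index].elasticsearch_version >= 7 ? 3000 : null
}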

Planning to use reusable TF module to create GCP resource using .tfvars file and need post destroy of middle resource next resource shouldn't recreate

We are testing a reusable Terraform module to create GCP resources, which we are able to achieve using count. Now we have a challenge: decommissioning/destroying one or two of the resources created in the middle of the list. Destroying those previously created resources causes other resources to be recreated.
Terraform will perform the following actions:
# google_service_account.sa_npe_policy[1] must be replaced
-/+ resource "google_service_account" "sa_npe_policy" {
~ account_id = "sa-test13" -> "sa-test11" # forces replacement
~ display_name = "sa-test13" -> "sa-test11"
~ email = "sa-test13@gcp-prj-npe.iam.gserviceaccount.com" -> (known after apply)
~ id = "projects/gcp-prj-npe/serviceAccounts/sa-test13@gcp-prj-npe.iam.gserviceaccount.com" -> (known after apply)
~ name = "projects/gcp-prj-npe/serviceAccounts/sa-test13@gcp-prj-npe.iam.gserviceaccount.com" -> (known after apply)
project = "gcp-prj-npe"
~ unique_id = "111295737867502004228" -> (known after apply)
}
# google_service_account.sa_npe_policy[2] will be created
+ resource "google_service_account" "sa_npe_policy" {
+ account_id = "sa-test13"
+ display_name = "sa-test13"
+ email = (known after apply)
+ id = (known after apply)
+ name = (known after apply)
+ project = "gcp-prj-npe"
+ unique_id = (known after apply)
}
Plan: 2 to add, 0 to change, 1 to destroy.
Here we are trying to remove sa-test11, which causes the next resource, sa-test13, to be replaced with sa-test11.
What we are looking for is a way to delete any one of the resources in the middle without recreating/replacing the already created resources.
Despite the fact that the output you posted suggests you are trying to add a third resource instead of deleting one, I will try to explain the general approach you could take.
Assuming your initial code looks similar to the following and you now want to remove satest12:
variable "sa_name" {
type = list(string)
default = ["satest11", "satest12", "satest13"]
}
resource "google_service_account" "sa_npe_policy" {
count = length(var.sa_name)
account_id = var.sa_name[count.index]
display_name = var.sa_name[count.index]
project = "gcp-prj-npe"
}
If you just remove "satest12" from the list, Terraform will suggest deleting satest12 and satest13 and afterwards recreating satest13.
Why is that?
Terraform internally stores the state of your resources and each of your resources will be assigned an internal address. satest12 has the address google_service_account.sa_npe_policy[1] and satest13 has the address google_service_account.sa_npe_policy[2]. Now if you remove "satest12" the resource list only comprises two elements and thus satest13 will get the address google_service_account.sa_npe_policy[1].
Terraform, for whatever reason, is not capable of recognizing that the resource already exists at the other address, so it suggests deleting two resources and creating another.
How could you circumvent that?
Fortunately, Terraform gives us the means to manipulate its internal state. So after removing "satest12" do not execute terraform apply immediately.
Instead execute
terraform state mv 'google_service_account.sa_npe_policy[1]' 'google_service_account.choose_an_unused_name'
terraform state mv 'google_service_account.sa_npe_policy[2]' 'google_service_account.sa_npe_policy[1]'
This way you
readdress satest12 to an unused address
readdress satest13 to the address previously used by satest12
If you now run terraform apply, Terraform will recognize that there is no need to recreate satest13 and will only destroy satest12.
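As an aside, a minimal sketch (reusing the same variable, converted to a set) of how keying the resources with for_each instead of count avoids this re-indexing problem in the first place, because each instance is addressed by its name rather than by its position in the list:
resource "google_service_account" "sa_npe_policy" {
  # each.key is the account name, so removing a name from the list
  # destroys only that instance and leaves the others untouched
  for_each     = toset(var.sa_name)
  account_id   = each.key
  display_name = each.key
  project      = "gcp-prj-npe"
}
Switching an existing count-based resource to for_each would itself require one-off terraform state mv commands, so this mainly pays off for new configurations.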

How to update google_bigquery_table_iam_member to point to new table on recreation due to an updated schema

I have a BigQuery table and I add a service account as an IAM member to this table:
resource "google_bigquery_table" "table" {
dataset_id = dataset
table_id = table
project = project
schema = "jsonSchema.json"
}
resource "google_bigquery_table_iam_member" "access_right" {
project = google_bigquery_table.table.project
dataset_id = google_bigquery_table.table.dataset_id
table_id = google_bigquery_table.table.table_id
role = "roles/bigquery.dataEditor"
member = "serviceAccount:serviceAccount#GCPserviceAccount.com"
}
Removing a column from jsonSchema.json and applying the changes forces the destruction of the table and the creation of a new one:
Terraform will perform the following actions:
# module.module.google_bigquery_table.table must be replaced
-/+ resource "google_bigquery_table" "table" {
...
~ schema = jsonencode(
~ [ # forces replacement
# (8 unchanged elements hidden)
{
mode = "REQUIRED"
name = "column1"
type = "TIMESTAMP"
},
- {
- mode = "REQUIRED"
- name = "column2"
- type = "STRING"
},
]
)
...
Plan: 1 to add, 0 to change, 1 to destroy.
At this point, the google_bigquery_table_iam_member resource in the state is still pointing to the old table. However, in GCP the service account no longer has access (the table it pointed to no longer exists) and no new access has been given to the newly created table.
Running terraform apply a second time notices the missing access:
Terraform will perform the following actions:
# module.module.google_bigquery_table_iam_member.access_rights will be created
+ resource "google_bigquery_table_iam_member" "access_rights" {
+ dataset_id = "dataset"
+ etag = (known after apply)
+ id = (known after apply)
+ member = "serviceAccount:serviceAccount@GCPserviceAccount.com"
+ project = "project"
+ role = "roles/bigquery.dataEditor"
+ table_id = "table"
}
Plan: 1 to add, 0 to change, 0 to destroy.
Is it possible to achieve this in a single step (a single terraform apply)?
i.e.
The table gets destroyed and recreated.
The access_right resource updates so the SA has access to the new table.
As mentioned in the comment section by the OP, the solution was to upgrade the Terraform and provider versions. However, as I performed a few tests, I wanted to share their output.
Another option to solve this issue is to force re-creation of the resource. The example below shows how it works.
main.tf
### Creating Service Account
resource "google_service_account" "bigquerytest" {
  project      = "<MyProjectID>"
  account_id   = "bigquery-table"
  display_name = "bigquery-table-test"
  provider     = google
}

### Re-Create table
resource "google_bigquery_table" "table" {
  dataset_id          = "test"
  table_id            = "bqtable"
  project             = "<MyProjectID>"
  schema              = file("/home/<myuser>/terrabq/jsonSchema.json")
  deletion_protection = false
}

### DataEditor binding
resource "google_bigquery_table_iam_member" "access-right" {
  project    = "<MyProjectID>"
  dataset_id = "<YourDataset_id>"
  table_id   = "bqtable"
  role       = "roles/bigquery.dataEditor"
  member     = "serviceAccount:${google_service_account.bigquerytest.email}"
}
jsonSchema.json
[
  {
    "mode": "NULLABLE",
    "name": "source",
    "type": "STRING"
  },
  {
    "mode": "NULLABLE",
    "name": "status",
    "type": "STRING"
  },
  {
    "mode": "NULLABLE",
    "name": "test",
    "type": "STRING"
  },
  {
    "mode": "NULLABLE",
    "name": "test4",
    "type": "STRING"
  }
]
Scenario:
Create a new table, bqtable, with a specific schema, and create a service account and the proper IAM member permission for this table.
Output:
...
Plan: 3 to add, 0 to change, 0 to destroy.
...
google_service_account.bigquerytest: Creating...
google_bigquery_table.table: Creating...
google_bigquery_table.table: Creation complete after 1s [id=projects/<myproject>/datasets/test/tables/bqtable]
google_service_account.bigquerytest: Creation complete after 1s [id=projects/<myproject>/serviceAccounts/bigquery-table@<myproject>.iam.gserviceaccount.com]
google_bigquery_table_iam_member.access-right: Creating...
google_bigquery_table_iam_member.access-right: Creation complete after 4s [id=projects/<myproject>/datasets/test/tables/bqtable/roles/bigquery.dataEditor/serviceAccount:bigquery-table@<myproject>.iam.gserviceaccount.com]
The next step is to change the schema in jsonSchema.json.
NOTE
When you are adding a column to the schema, the table won't be recreated. The table will just be updated, and the new column will contain NULL values in all existing rows.
# google_bigquery_table.table will be updated in-place
~ resource "google_bigquery_table" "table" {
When you are removing a column from the schema:
# google_bigquery_table.table must be replaced
-/+ resource "google_bigquery_table" "table" {
Please keep in mind that if the table is recreated, all data in it will be purged.
The issued scenario:
If you just change the schema (remove a column), the table is recreated but the IAM binding is not updated. The plan output was probably like this: Plan: 1 to add, 0 to change, 1 to destroy.
The OP was able to solve this issue with an updated version of Terraform and the provider.
However, if you still have issues, you can use the -replace flag to re-create the resource:
$ terraform apply -replace=google_bigquery_table_iam_member.access-right
The actions taken by Terraform were:
# google_bigquery_table.table must be replaced
-/+ resource "google_bigquery_table" "table" {
...
# google_bigquery_table_iam_member.access-right will be replaced, as requested
-/+ resource "google_bigquery_table_iam_member" "access-right" {
~ etag = "<randomString>" -> (known after apply)
~ id = "projects/<myproject>/datasets/<mydataset>/tables/bqtable/roles/bigquery.dataEditor/serviceAccount:bigquery-table@<myproject>.iam.gserviceaccount.com" -> (known after apply)
~ table_id = "projects/<myproject>/datasets/<mydataset>/tables/bqtable" -> "bqtable"
# (4 unchanged attributes hidden)
}
Plan: 2 to add, 0 to change, 2 to destroy.
In addition, depends_on in Terraform is mainly used for ordering, i.e. to postpone the creation of dependent resources.
To sum up:
The solution which worked for the OP was updating the Terraform and provider versions.
Another solution is to use terraform apply -replace=[resource.resourcename]
Running this setup with:
terraform version 1.1.4
google provider version 4.9.0
Rather than:
terraform version 1.1.3
google provider version 4.5.0
eliminated the issue.
However, if you for some reason cannot do that, @PjoterS has posted an alternative solution.
This is weird behavior, maybe caused by the provider knowing the result of the resource fields beforehand and confusing Terraform's implicit dependency detection.
You can try to force the dependency by adding an explicit depends_on to the IAM resource to ensure recreation:
resource "google_bigquery_table_iam_member" "access_right" {
project = google_bigquery_table.table.project
dataset_id = google_bigquery_table.table.dataset_id
table_id = google_bigquery_table.table.table_id
role = "roles/bigquery.dataEditor"
member = "serviceAccount:serviceAccount#GCPserviceAccount.com"
depends_on = [google_bigquery_table.table]
}
In this case, Terraform should be able to detect the change, as you now also depend on changing fields like etag that are only known after apply.
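On Terraform 1.2 and later, another option worth trying is the replace_triggered_by lifecycle argument, which forces the IAM member to be replaced whenever the table itself is replaced; a rough sketch, not verified against this exact setup:
resource "google_bigquery_table_iam_member" "access_right" {
  project    = google_bigquery_table.table.project
  dataset_id = google_bigquery_table.table.dataset_id
  table_id   = google_bigquery_table.table.table_id
  role       = "roles/bigquery.dataEditor"
  member     = "serviceAccount:serviceAccount@GCPserviceAccount.com"

  lifecycle {
    # re-create this binding whenever the referenced table is replaced
    replace_triggered_by = [google_bigquery_table.table.id]
  }
}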
To better understand the issue, the plan output of an initial apply would help.

What happens if running "terraform apply" twice? (Terraform)

What happens if running "terraform apply" twice?
Does it create all the resources twice?
I'm assuming that when you say "terraform deploy" here you mean running the terraform apply command.
The first time you run terraform apply against an entirely new configuration, Terraform will propose to create new objects corresponding with each of the resource instances you declared in the configuration. If you accept the plan and thus allow Terraform to really apply it, Terraform will create each of those objects and record information about them in the Terraform state.
If you then run terraform apply again, Terraform will compare your configuration with the state to see if there are any differences. This time, Terraform will propose changes only if the configuration doesn't match the existing objects that are recorded in the state. If you accept that plan then Terraform will take each of the actions it proposed, which can be a mixture of different action types: update, create, destroy.
This means that in order to use Terraform successfully you need to make sure to keep the state snapshots safe between Terraform runs. With no special configuration at all Terraform will by default save the state in a local file called terraform.tfstate, but when you are using Terraform in production you'll typically use remote state, which is a way to tell Terraform to store state snapshots in a remote data store separate from the computer where you are running Terraform. By storing the state in a location that all of your coworkers can access, you can collaborate together.
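For illustration, a minimal sketch of such a remote state configuration using the GCS backend (the bucket name is a placeholder and the bucket must already exist):
terraform {
  backend "gcs" {
    # state snapshots are written to this bucket instead of a local terraform.tfstate
    bucket = "my-terraform-state-bucket"
    prefix = "prod"
  }
}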
If you use Terraform Cloud, a complementary hosted service provided by HashiCorp, you can configure Terraform to store the state snapshots in Terraform Cloud itself. Terraform Cloud has various other capabilities too, such as running Terraform in a remote execution environment so that everyone who uses that environment can be sure to run Terraform with a consistent set of environment variables stored remotely.
If you run the terraform apply command the first time, it will create the necessary resources shown in the terraform plan.
If you run the terraform apply command a second time, it will check whether those resources already exist; if they are found, it will not create any duplicate resources.
Before running terraform apply the second time, you can run terraform plan to get the list of changes (create/update/delete).
Apr 2022 update:
The first run of "terraform apply" creates (adds) resources.
The second or later runs of "terraform apply" create (add), update (change) or delete (destroy) existing resources if there are changes for them. In general, when you change a mutable value of an existing resource, the resource is updated in place rather than deleted and re-created, and when you change an immutable value of an existing resource, the resource is deleted and re-created rather than updated.
*A mutable value is a value which can be changed after creating a resource.
*An immutable value is a value which cannot be changed after creating a resource.
For example, I create(add) the Cloud Storage bucket "kai_bucket" with the Terraform code below:
resource "google_storage_bucket" "bucket" {
name = "kai_bucket"
location = "ASIA-NORTHEAST1"
force_destroy = true
uniform_bucket_level_access = true
}
So, do the first run of the command below:
terraform apply -auto-approve
Then, one resource "kai_bucket" is created(added) as shown below:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# google_storage_bucket.bucket will be created
+ resource "google_storage_bucket" "bucket" {
+ force_destroy = true
+ id = (known after apply)
+ location = "ASIA-NORTHEAST1"
+ name = "kai_bucket"
+ project = (known after apply)
+ self_link = (known after apply)
+ storage_class = "STANDARD"
+ uniform_bucket_level_access = true
+ url = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
google_storage_bucket.bucket: Creating...
google_storage_bucket.bucket: Creation complete after 1s [id=kai_bucket]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Now, I change the mutable value "uniform_bucket_level_access" from "true" to "false":
resource "google_storage_bucket" "bucket" {
name = "kai_bucket"
location = "ASIA-NORTHEAST1"
force_destroy = true
uniform_bucket_level_access = false # Here
}
Then, do the second run of the command below:
terraform apply -auto-approve
Then, "uniform_bucket_level_access" is updated(changed) from "true" to "false" as shown below:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# google_storage_bucket.bucket will be updated in-place
~ resource "google_storage_bucket" "bucket" {
id = "kai_bucket"
name = "kai_bucket"
~ uniform_bucket_level_access = true -> false
# (9 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
google_storage_bucket.bucket: Modifying... [id=kai_bucket]
google_storage_bucket.bucket: Modifications complete after 1s [id=kai_bucket]
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
Now, I change the immutable value "location" from "ASIA-NORTHEAST1" to "US-EAST1":
resource "google_storage_bucket" "bucket" {
name = "kai_bucket"
location = "US-EAST1" # Here
force_destroy = true
uniform_bucket_level_access = false
}
Then, do the third run of the command below:
terraform apply -auto-approve
Then, one resource "kai_bucket" with "ASIA-NORTHEAST1" is deleted(destroyed) then one resource "kai_bucket" with "US-EAST1" is created(added) as shown below:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
# google_storage_bucket.bucket must be replaced
-/+ resource "google_storage_bucket" "bucket" {
- default_event_based_hold = false -> null
~ id = "kai_bucket" -> (known after apply)
- labels = {} -> null
~ location = "ASIA-NORTHEAST1" -> "US-EAST1" # forces replacement
name = "kai_bucket"
~ project = "myproject-272234" -> (known after apply)
- requester_pays = false -> null
~ self_link = "https://www.googleapis.com/storage/v1/b/kai_bucket" -> (known after apply)
~ url = "gs://kai_bucket" -> (known after apply)
# (3 unchanged attributes hidden)
}
Plan: 1 to add, 0 to change, 1 to destroy.
google_storage_bucket.bucket: Destroying... [id=kai_bucket]
google_storage_bucket.bucket: Destruction complete after 1s
google_storage_bucket.bucket: Creating...
google_storage_bucket.bucket: Creation complete after 1s [id=kai_bucket]
Apply complete! Resources: 1 added, 0 changed, 1 destroyed.

Terraform wants to replace Google compute engine if its start/stop scheduler is modified

First of all, I am surprised that I have found very few resources on Google that mention this issue with Terraform.
This is an essential feature for optimizing the cost of cloud instances though, so I'm probably missing out on a few things, thanks for your tips and ideas!
I want to create an instance and manage its start and stop daily, programmatically.
The resource "google_compute_resource_policy" seems to meet my use case. However, when I change the stop or start time, Terraform plans to destroy and recreate the instance... which I absolutely don't want!
The resource "google_compute_resource_policy" is attached to the instance via the argument resource_policies where it is specified: "Modifying this list will cause the instance to recreate."
I don't understand why Terraform handles this simple update so badly. It is true that a schedule cannot be updated in place, but it is perfectly possible to detach it manually from the instance, destroy it, recreate it with the new stop/start schedule, and then attach it to the instance again.
Is there a workaround without going through a null resource to run a gcloud script to do these steps?
I tried to add an "ignore_changes" lifecycle on the "resource_policies" argument of my instance, Terraform no longer wants to destroy my instance, but it gives me the following error:
Error when reading or editing ResourcePolicy: googleapi: Error 400: The resource_policy resource 'projects/my-project-id/regions/europe-west1/resourcePolicies/my-instance-schedule' is already being used by 'projects/my-project-id/zones/europe-west1-b/instances/my-instance', resourceInUseByAnotherResource"
Here is my Terraform code
resource "google_compute_resource_policy" "instance_schedule" {
name = "my-instance-schedule"
region = var.region
description = "Start and stop instance"
instance_schedule_policy {
vm_start_schedule {
schedule = var.vm_start_schedule
}
vm_stop_schedule {
schedule = var.vm_stop_schedule
}
time_zone = "Europe/Paris"
}
}
resource "google_compute_instance" "my-instance" {
// ******** This is my attempted workaround ********
lifecycle {
ignore_changes = [resource_policies]
}
name = "my-instance"
machine_type = var.machine_type
zone = "${var.region}-b"
allow_stopping_for_update = true
resource_policies = [
google_compute_resource_policy.instance_schedule.id
]
boot_disk {
device_name = local.ref_name
initialize_params {
image = var.boot_disk_image
type = var.disk_type
size = var.disk_size
}
}
network_interface {
network = data.google_compute_network.default.name
access_config {
nat_ip = google_compute_address.static.address
}
}
}
If it is useful, here is what terraform apply returns:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
- destroy
-/+ destroy and then create replacement
Terraform will perform the following actions:
# google_compute_resource_policy.instance_schedule must be replaced
-/+ resource "google_compute_resource_policy" "instance_schedule" {
~ id = "projects/my-project-id/regions/europe-west1/resourcePolicies/my-instance-schedule" -> (known after apply)
name = "my-instance-schedule"
~ project = "my-project-id" -> (known after apply)
~ region = "https://www.googleapis.com/compute/v1/projects/my-project-id/regions/europe-west1" -> "europe-west1"
~ self_link = "https://www.googleapis.com/compute/v1/projects/my-project-id/regions/europe-west1/resourcePolicies/my-instance-schedule" -> (known after apply)
# (1 unchanged attribute hidden)
~ instance_schedule_policy {
# (1 unchanged attribute hidden)
~ vm_start_schedule {
~ schedule = "0 9 * * *" -> "0 8 * * *" # forces replacement
}
# (1 unchanged block hidden)
}
}
Plan: 1 to add, 0 to change, 1 to destroy.
Do you want to perform these actions in workspace "prd"?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
google_compute_resource_policy.instance_schedule: Destroying... [id=projects/my-project-id/regions/europe-west1/resourcePolicies/my-instance-schedule]
Error: Error when reading or editing ResourcePolicy: googleapi: Error 400: The resource_policy resource 'projects/my-project-id/regions/europe-west1/resourcePolicies/my-instance-schedule' is already being used by 'projects/my-project-id/zones/europe-west1-b/instances/my-instance', resourceInUseByAnotherResource
NB: I am working with Terraform 0.14.7 and I am using google provider version 3.76.0
An instance inside GCP can be powered off without destroying it by using the desired_status argument of the google_compute_instance resource; keep in mind that if you are creating the instance for the first time, this argument needs to be set to "RUNNING". The resource can be used as follows.
resource "google_compute_instance" "default" {
name = "test"
machine_type = "f1-micro"
zone = "us-west1-a"
desired_status = "RUNNING"
}
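To power the instance off later without destroying it, the same resource can be re-applied with the status flipped; a minimal sketch reusing the example above:
resource "google_compute_instance" "default" {
  name           = "test"
  machine_type   = "f1-micro"
  zone           = "us-west1-a"
  # changing this from "RUNNING" to "TERMINATED" stops the VM instead of destroying it
  desired_status = "TERMINATED"
}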
You can also modify your "main.tf" file if you need to stop the VM first and then start it again, by creating a dependency in Terraform with depends_on.
As you can see in the following example, the service account will be created first, and the key will only be created once the first resource is done.
resource "google_service_account" "service_account" {
account_id = "terraform-test"
display_name = "Service Account"
}
resource "google_service_account_key" "mykey" {
service_account_id = google_service_account.service_account.id
public_key_type = "TYPE_X509_PEM_FILE"
depends_on = [google_service_account.service_account]
}
If the first component already exists, Terraform only deploys the dependent resource.
I faced the same problem with a snapshot policy.
I controlled the resource policy creation using a boolean flag input variable and count. The first time, I created the policy resource with the flag set to 'true'. When I want to change the schedule time, I set the flag to 'false' and apply the plan; this detaches and destroys the policy (a sketch of this pattern is shown below).
I then set the flag back to 'true' and apply the plan with the new time.
This worked for me for the snapshot policy; hopefully it can solve yours too.
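A minimal sketch of that flag approach, with illustrative variable and resource names rather than the answerer's exact code:
variable "enable_schedule" {
  type    = bool
  default = true
}

resource "google_compute_resource_policy" "instance_schedule" {
  # set enable_schedule = false and apply once to detach and destroy the policy,
  # then set it back to true with the new schedule and apply again
  count  = var.enable_schedule ? 1 : 0
  name   = "my-instance-schedule"
  region = var.region

  instance_schedule_policy {
    vm_start_schedule {
      schedule = var.vm_start_schedule
    }
    vm_stop_schedule {
      schedule = var.vm_stop_schedule
    }
    time_zone = "Europe/Paris"
  }
}

resource "google_compute_instance" "my-instance" {
  # ... other arguments as in the question ...
  resource_policies = var.enable_schedule ? [google_compute_resource_policy.instance_schedule[0].id] : []
}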
I solved the "resourceInUseByAnotherResource" error by adding the following lifecycle to the google_compute_resource_policy resource:
lifecycle {
create_before_destroy = true
}
Also, this requires a unique name with each change; otherwise the new resource can't be created, because a resource with the same name already exists. So I appended a random ID to the end of the schedule name:
resource "random_pet" "schedule" {
keepers = {
start_schedule = "${var.vm_start_schedule}"
stop_schedule = "${var.vm_stop_schedule}"
}
}
...
resource "google_compute_resource_policy" "schedule" {
name = "schedule-${random_pet.schedule.id}"
...
lifecycle {
create_before_destroy = true
}
}