I have custom resource defined in my terraform module:
resource "aws_alb_target_group" "whatever"
{
....
}
It turns out whatever is not a good name, and I need to update it.
The classic way of doing this would be to log in to each environment and run terraform state mv; however, I have lots of environments and no automation for such an action.
How can I change the name of the resource without manually moving state (only by editing Terraform modules and applying plans)?
Based on the explanation in the question, I guess your best bet would be to use the moved block [1]. So for example, in your case that would be:
resource "aws_alb_target_group" "a_much_better_whatever"
{
....
}
moved {
from = aws_alb_target_group.whatever
to = aws_alb_target_group.a_much_better_whatever
}
EDIT: As @Matt Schuchard noted, the moved block is available only in Terraform versions >= 1.1.0.
EDIT 2: As per @Martin Atkins' comments, changed the resource block to use the name being moved to instead of the name being moved from.
[1] https://www.terraform.io/language/modules/develop/refactoring#moved-block-syntax
I'm in the same situation.
My plan is to create the new resource group in Terraform, apply it, move the resources in the Azure portal to the new resource group, and then do terraform state mv to move the resources in terraform.
Yes, if you have a lot of resources it's tedious, but I guess I won't break anything this way.
As in the title of my post: is it possible? I don't see any options.
Also, the job details include advanced properties that include the name of my script, for example job-name.py. When the Python script is renamed, will the job name also change? I'm afraid I might mess something up after the change.
Thanks in advance
The CloudFormation docs (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-glue-job.html) state that changing the name parameter would replace the job, so I would say no.
Expanding on the answer from @zoran2709:
The CloudFormation docs (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-glue-job.html) state that changing the name parameter would replace the job, so I would say no.
In the console, it doesn't look like you can. If you defined this job in CloudFormation, you can update the name of the job, but it will replace the resource with a new one. I would recommend setting the UpdateReplacePolicy on the resource to Retain just to be safe. Check out the docs here:
If you update a resource property that requires that the resource be replaced, CloudFormation recreates the resource during the update. Recreating the resource generates a new physical ID. CloudFormation creates the replacement resource first, and then changes references from other dependent resources to point to the replacement resource. By default, CloudFormation then deletes the old resource. Using the UpdateReplacePolicy, you can specify that CloudFormation retain or, in some cases, create a snapshot of the old resource.
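For illustration, here is a hedged sketch of where UpdateReplacePolicy sits on the resource in a CloudFormation template; the logical ID, role, and script location below are placeholders, not values from the question:

MyGlueJob:
  Type: AWS::Glue::Job
  UpdateReplacePolicy: Retain   # keep the old job instead of deleting it when a replacement is created
  Properties:
    Name: my-renamed-job        # changing Name is what triggers the replacement
    Role: !Ref MyGlueJobRole    # hypothetical IAM role resource
    Command:
      Name: glueetl
      ScriptLocation: s3://my-bucket/scripts/job-name.py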
I have a secret stored in AWS Secrets Manager and am trying to integrate it within Terraform at runtime. We are using Terraform 0.11.13, and updating to the latest Terraform is on the roadmap.
We would all like to use jsondecode(), available as part of the latest Terraform, but we need to get a few things integrated before we upgrade.
We tried to use the helper external data program below, suggested as part of https://github.com/terraform-providers/terraform-provider-aws/issues/4789.
data "external" "helper" {
program = ["echo", "${replace(data.aws_secretsmanager_secret_version.map_example.secret_string, "\\\"", "\"")}"]
}
But we ended up getting this error now.
data.external.helper: can't find external program "echo"
Google search didn't help much.
Any help will be much appreciated.
OS: Windows 10
It sounds like you want to use a data source for the aws_secretsmanager_secret.
Resources in Terraform create and manage new infrastructure. Data sources in Terraform reference values from resources that already exist.
data "aws_secretsmanager_secret" "example" {
arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:example-123456"
}
data "aws_secretsmanager_secret_version" "example" {
secret_id = data.aws_secretsmanager_secret.example.id
version_stage = "example"
}
Note: you can also use the secret name
Docs: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/secretsmanager_secret
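For example, a name-based lookup might look like this (the secret name here is hypothetical):

data "aws_secretsmanager_secret" "example" {
  name = "example-secret-name"
}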
Then you can use the value from this like so:
output "MySecretJsonAsString" {
  value = data.aws_secretsmanager_secret_version.example.secret_string
}
Per the docs, the secret_string property of this resource is:
The decrypted part of the protected secret information that was originally provided as a string.
You should also be able to pass that value into jsondecode and then access the properties of the json body individually.
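For instance, a minimal sketch (this requires Terraform 0.12+ for jsondecode, and assumes the secret stores JSON with a username key, which is purely illustrative):

locals {
  secret = jsondecode(data.aws_secretsmanager_secret_version.example.secret_string)
}

output "db_username" {
  value = local.secret["username"]   # hypothetical key inside the secret JSON
}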
But you asked for a Terraform 0.11.13 solution. If the secret value is defined by Terraform, you can use the terraform_remote_state data source to get the value. This does trust that nothing other than Terraform is updating the secret. The best answer is still to upgrade your Terraform, but this could be a useful stopgap until then.
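A rough 0.11-style sketch of that stopgap, assuming the configuration that defines the secret uses an S3 backend and exports the value as an output named my_secret (all names here are illustrative):

data "terraform_remote_state" "secrets" {
  backend = "s3"

  config {
    bucket = "my-terraform-state"
    key    = "secrets/terraform.tfstate"
    region = "us-east-1"
  }
}

# In 0.11, remote state outputs are referenced directly, e.g.
# "${data.terraform_remote_state.secrets.my_secret}"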
As a recommendation, you can pin the version of Terraform per module rather than for your whole organization. I do this with Docker containers that run specific versions of the Terraform binary. A script in the root of every module wraps the terraform commands so they run under the version of Terraform meant for that project. Just a tip.
I'm trying to create a Terraform script to launch the fastai instance from the marketplace.
I'm adding the image name as:
boot_disk {
  initialize_params {
    image = "<image name>"
  }
}
When I add
click-to-deploy-images/deeplearning
from the URL
https://console.cloud.google.com/marketplace/details/click-to-deploy-images/deeplearning
I get this error:
Error: Error resolving image name 'click-to-deploy-images/deeplearning': Could not find image or family click-to-deploy-images/deeplearning
on fastai.tf line 13, in resource "google_compute_instance" "default":
13: resource "google_compute_instance" "default" {
If I use
debian-cloud/debian-9
from the URL
https://console.cloud.google.com/marketplace/details/debian-cloud/debian-stretch?project=<>
it works.
Can we deploy the fastai image through Terraform?
I made a deployment from the deep learning marketplace VM instance you shared and reviewed the source image [1]; you should be able to use that URL to deploy with Terraform. I also noticed a warning stating that the image is deprecated and that there is a newer version [2].
Hope this helps!
[1]sourceImage: https://www.googleapis.com/compute/v1/projects/click-to-deploy-images/global/images/tf2-2-1-cu101-20200109
[2]https://www.googleapis.com/compute/v1/projects/click-to-deploy-images/global/images/tf2-2-1-cu101-20200124
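If it helps, the newer image from [2] can be referenced directly in the boot_disk from the question; a sketch (only the image line changes):

boot_disk {
  initialize_params {
    image = "https://www.googleapis.com/compute/v1/projects/click-to-deploy-images/global/images/tf2-2-1-cu101-20200124"
  }
}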
In this particular case, the name was "deeplearning-platform-release/pytorch-latest-gpu":
boot_disk {
  initialize_params {
    image = "deeplearning-platform-release/pytorch-latest-gpu"
    ...
  }
}
Now I'm able to create the instance.
To other newbies like me:
Apparently GCP Marketplace uses Deployment Manager, which is Google's own declarative tool for managing infrastructure. (I think modules are the closest abstraction to it in Terraform.)
Hence, there is no simple/single answer to the question in the title.
In my opinion, if you are starting from scratch and/or can afford the effort and time, the best option is to use Terraform modules instead of GCP Marketplace solutions, if such modules exist.
However, chances are that you are importing existing infrastructure and cannot just replace it immediately (or there is no such module).
In this case, I think the best you can do is go to Deployment Manager in the Google console and open the particular deployment you need to import.
At this point you can see what resources make up the deployment. Probably there will be vm template(s), vm(s), firewall rule(s), etc...
Clicking on the VM instance and the template will show you a lot of useful details.
Most importantly, you can deduce which image was used.
E.g.:
In my case it showed:
sourceImage https://www.googleapis.com/compute/v1/projects/openvpn-access-server-200800/global/images/aspub275
From this I could define (based on an answer on issue #7319)
data "google_compute_image" "openvpn_server" {
name = "aspub275"
project = "openvpn-access-server-200800"
}
I could then use this in a google_compute_instance resource.
This will force a recreation of the VM though.
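For example, a minimal sketch of wiring that data source into the instance; the machine type, zone, and names here are illustrative, not taken from the original deployment:

resource "google_compute_instance" "openvpn" {
  name         = "openvpn-server"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = data.google_compute_image.openvpn_server.self_link
    }
  }

  network_interface {
    network = "default"
  }
}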
Seems it's common practice to make use of count on a resource to conditionally create it in Terraform using a ternary statement.
I'd like to conditionally update an AWS Route 53 entry based on a push_to_prod variable. Meaning, I don't want to delete the resource if I'm not pushing to production; I only want to update it, or leave the CNAME value as it is.
Has anyone done something like this before in Terraform?
Currently, as it stands, interpolation syntax isn't supported in lifecycle blocks (you can read more here), which makes this harder because otherwise you could use prevent_destroy. However, without more specifics I am going to take my best guess on how to get you there.
I would use the allow_overwrite property on the Route 53 record and set it based on your flag. That way, if you are pushing to prod, you can set it to false, which should trigger creating a new one. I haven't tested that.
Also note that if you don't make any changes to the Route 53 resource, it shouldn't trigger any changes for Terraform to apply. Updating any part of the record, on the other hand, will trigger a deployment.
You may want to combine this with some lifecycle events, but I don't have enough time to dig into that specific resource and how it happens.
Two examples I can think of are:
type = "${var.push_to_prod == "true" ? "CNAME" : var.other_value}" - this will have a fixed other_value, there is no way to have terraform "ignore" the resource once it's being managed by terraform.
or
type = "${var.aws_route53_record_type}" and you can have dev.tfvars and prod.tfvars, with aws_route53_record_type defined as whatever you want for dev and CNAME for prod.
The thing is, with what you're trying to do ("I only want to update it, or leave the CNAME value as it is"), that's not how Terraform works. Terraform either manages the resource for you or it doesn't. If it's managing it, it'll update the resource based on the config you've defined in your .tf file. If it's not managing the resource, it won't modify it. It sounds like what you're really after is the second solution, where you pass two different configs from your .tfvars files into your .tf file and, based on the different configs, different resources are created. You can couple this with count to determine whether a resource should be created at all.
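For illustration, a sketch of that count pattern in the same 0.11-style syntax as above; the zone, record name, and target values are placeholders:

resource "aws_route53_record" "app" {
  count   = "${var.push_to_prod == "true" ? 1 : 0}"   # only managed when pushing to prod
  zone_id = "${var.zone_id}"
  name    = "app.example.com"
  type    = "CNAME"
  ttl     = 300
  records = ["${var.cname_target}"]
}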
New to Terraform, so perhaps it's just not supposed to work this way. I want to use aws_s3_bucket_object to upload a package to a bucket; this is part of an app deploy. I'm going to be changing the package for each deploy, and I want to keep the old versions.
resource "aws_s3_bucket_object" "object" {
bucket = "mybucket-app-versions"
key = "version01.zip"
source = "version01.zip"
}
But for a future deploy I will want to upload version02, then version03, and so on. Terraform replaces the old zip with the new one, which is expected behavior.
But is there a way to have terraform not destroy the old version? Is this a supported use case here or is this not how I'm supposed to use terraform? I wouldn't want to force this with an ugly hack if terraform doesn't have official support for doing something like what I'm trying to do here.
I could of course just call the S3 api via script, but it would be great to have this defined with the rest of the terraform definition for this app.
When using Terraform for application deployment, the recommended approach is to separate the build step from the deploy step and use Terraform only for the latter.
The responsibility of the build step -- which is implemented using a separate tool, depending on the method of deployment -- is to produce some artifact (an archive, a docker container, a virtual machine image, etc), publish it somewhere, and then pass its location or identifier to Terraform for deployment.
This separation between build and deploy allows for more complex situations, such as rolling back to an older artifact (without rebuilding it) if the new version has problems.
In simple scenarios it is possible to pass the artifact location to Terraform using Input Variables. For example, in your situation where the build process would write a zip file to S3, you might define a variable like this:
variable "archive_name" {
}
This can then be passed to whatever resource needs it using ${var.archive_name} interpolation syntax. To deploy a particular artifact, pass its name on the command line using -var:
$ terraform apply -var="archive_name=version01.zip"
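For illustration, one hedged way the variable could be referenced, in keeping with the idea that the build step has already uploaded the artifact, is an aws_s3_bucket_object data source that merely looks up the existing object (the bucket name is taken from the question):

data "aws_s3_bucket_object" "package" {
  bucket = "mybucket-app-versions"
  key    = "${var.archive_name}"
}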
Some organizations prefer to keep a record of the "current" version of each application in some kind of data store, such as HashiCorp Consul, and read it using a data source. This approach can be easier to orchestrate in an automated build pipeline, since it allows this separate data store to be used to indirectly pass the archive name between the build and deploy steps, without needing to pass any unusual arguments to Terraform itself.
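A rough sketch of that Consul pattern using the Consul provider's consul_keys data source (the key path here is illustrative):

data "consul_keys" "app" {
  key {
    name = "archive_name"
    path = "apps/myapp/current_archive"
  }
}

# The deploy step then reads "${data.consul_keys.app.var.archive_name}"
# instead of taking -var on the command line.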
Currently, you tell terraform to manage one aws_s3_bucket_object and terraform takes care of its whole life-cycle, meaning terraform will also replace the file if it sees any changes to it.
What you are maybe looking for is the null_resource. You can use it to run a local-exec provisioner to upload the file you need with a script. That way, the old file won't be deleted, as it is not directly managed by terraform. You'd still be calling the API via a script then, but the whole process of uploading to s3 would still be included in your terraform apply step.
Here is an outline of the null_resource:
resource "null_resource" "upload_to_s3" {
depends_on = ["<any resource that should already be created before upload>"]
...
triggers = ["<A resource change that must have happened so terraform starts the upload>"]
provisioner "local-exec" {
command = "<command to upload local package to s3>"
}
}
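For example, a concrete (untested) sketch of that outline, assuming the AWS CLI is available locally and reusing the bucket name from the question together with the archive_name variable idea from the earlier answer:

resource "null_resource" "upload_to_s3" {
  triggers = {
    archive_name = "${var.archive_name}"   # re-runs the upload when the package name changes
  }

  provisioner "local-exec" {
    command = "aws s3 cp ${var.archive_name} s3://mybucket-app-versions/${var.archive_name}"
  }
}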