Is there a way to use a Terraform data source to look up a bucket (perhaps created and stored in a different state file) and then, in the event the data source returns nothing, create the resource by setting a count?
I've been doing some experiments and continually get the following:
Error: Failed getting S3 bucket (example_random_bucket_name): NotFound: Not Found
status code: 404, request id: <ID here>, host id: <host ID here>
Sample code to test (this has been modified from the original code which generated this error):
variable "bucket_name" {
default = "example_random_bucket_name"
}
data "aws_s3_bucket" "new" {
bucket = var.bucket_name
}
resource "aws_s3_bucket" "s3_bucket" {
count = try(1, data.aws_s3_bucket.new.id == "" ? 1 : 0 )
bucket = var.bucket_name
}
I feel like rather than generating an error I should get an empty result, but that's not the case.
Terraform is a desired-state system, so you can only describe what result you want, not the steps/conditions to get there.
If Terraform did allow you to decide whether to declare a bucket based on whether there is already a bucket of that name, you would create a configuration that could never converge: on the first run, it would not exist and so your configuration would declare it. But on the second run, the bucket would then exist and therefore your configuration would not declare it anymore, and so Terraform would plan to destroy it. On the third run, it would propose to create it again, and so on.
Instead, you must decide as part of your system design which Terraform configuration (or other system) is responsible for managing each object:
If you decide that a particular Terraform configuration is responsible for managing this S3 bucket then you can declare it with an unconditional aws_s3_bucket resource.
If you decide that some other system ought to manage the bucket then you'll write your configuration to somehow learn about the bucket name from elsewhere, such as via an input variable or the aws_s3_bucket data source. Both options are sketched below.
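A minimal sketch of the two options, purely for illustration (the labels owned and external, and the example bucket name, are placeholders rather than anything from your configuration):

# Option 1: this configuration owns the bucket and declares it unconditionally.
resource "aws_s3_bucket" "owned" {
  bucket = "example_random_bucket_name"
}

# Option 2: another system owns the bucket; this configuration only reads it.
variable "bucket_name" {
  type = string
}

data "aws_s3_bucket" "external" {
  bucket = var.bucket_name
}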
Sadly you can't do this. The object a data source refers to must exist, otherwise the data source errors out. There is no built-in way in TF to check whether a resource exists or not; there is nothing in between, in the sense that a resource either exists or it doesn't.
If you require such functionality, you have to program it yourself using an External Data Source. Or, maybe simpler, provide an input variable bucket_exist, so that you set it explicitly during apply (a sketch follows below).
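A minimal sketch of that input-variable approach, assuming you are willing to set the flag by hand at apply time:

variable "bucket_exist" {
  type    = bool
  default = false
}

# Create the bucket only when the operator says it does not already exist,
# e.g. terraform apply -var="bucket_exist=false"
resource "aws_s3_bucket" "s3_bucket" {
  count  = var.bucket_exist ? 0 : 1
  bucket = var.bucket_name
}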
Data sources are designed to fail this way.
However, if you use a state file from an external configuration, it's possible to declare an output in that external state based on whether the S3 bucket is managed there, and use it as a condition on the s3_bucket resource.
For example, the output in the external state could be an empty string (not managed) or the value of whatever property is useful to you; a boolean is another choice. Remove the data source from this configuration and add a condition to the resource based on that output, as sketched below.
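A rough sketch of that idea, assuming the external configuration keeps its state in S3 and exposes an output named managed_bucket_name (the output name, the resource label this, and the backend settings are assumptions for illustration):

# In the external configuration:
output "managed_bucket_name" {
  value = aws_s3_bucket.this.bucket   # or "" if that configuration does not manage it
}

# In this configuration:
data "terraform_remote_state" "external" {
  backend = "s3"
  config = {
    bucket = "my-tfstate-bucket"
    key    = "external/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_s3_bucket" "s3_bucket" {
  count  = data.terraform_remote_state.external.outputs.managed_bucket_name == "" ? 1 : 0
  bucket = var.bucket_name
}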
It's your call if any such workarounds complicate or simplify your configuration.
Related
Terraform fails on terraform apply because of "already exists" errors.
I think this happened because I manually deleted the tfstate and the DynamoDB MD5 entries, which left Terraform in a wacky state.
Now when I do init, plan and apply, I am getting quite a few errors, for example:
Error: error creating SSM parameter: ParameterAlreadyExists: The parameter already exists. To overwrite this value, set the overwrite option in the request to true.
......
Error: error creating SSM parameter: ParameterAlreadyExists: The parameter already exists. To overwrite this value, set the overwrite option in the request to true.
Error: Error creating DB Parameter Group: DBParameterGroupAlreadyExists: Parameter group abc already exists
I have taken a look into the import option, but it's too messy.
Is there an easier or cleaner approach to tackling this?
Thank you, any advice will be helpful.
The short answer is: it depends.
Each resource has its own functionality; some allow you to overwrite existing resources and some don't.
For example, for SSM parameters, you can add an overwrite flag to the resource.
resource "aws_ssm_parameter" "foo" {
name = "foo"
type = "String"
value = "bar"
overwrite = true
}
Official reference: ssm_parameter
Now, a good way to avoid the issue of losing the tfstate is to store it in S3 in a bucket that has versioning enabled, for example with a backend configuration like the sketch below.
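A minimal sketch of such a remote backend (all names here are assumed placeholders, not values from the question):

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"      # bucket with versioning enabled
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"         # optional: state locking
    encrypt        = true
  }
}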
I have a simple Terraform configuration where I manage an application's code in S3.
I want to manage multiple versions of this code in S3.
My code is as follows:
main.tf
resource "aws_s3_bucket" "caam_test_bucket" {
bucket = "caam-test-bucket"
versioning {
enabled = true
}
}
resource "aws_s3_bucket_object" "caam_test_bucket_obj" {
bucket = aws_s3_bucket.caam_test_bucket.id
key = "${var.env}/v-${var.current_version}/app.zip"
source = "app.zip"
}
Every time I update the code, I export it to app.zip, increment the variable current_version and push the terraform code.
The issue here is that instead of keeping multiple version folders in the S3 bucket, it deletes the existing one and creates another.
I want Terraform to keep any paths and files it has created and not delete them.
For example, if the path dev/v-1.0/app.zip already exists and I increment the current version to 2.0 and push the code, I want Terraform to keep dev/v-1.0/app.zip and also add dev/v-2.0/app.zip to the bucket.
Is there a way to do that ?
TF deletes your object, because that is how it works:
Destroy resources that exist in the state but no longer exist in the configuration.
One way to overcome this is to keep all your objects in the configuration, through for_each. This way you would keep adding new versions to a map of existing objects, rather than keep replacing them (a sketch follows below). This can be problematic if you are creating lots of versions, as you have to keep them all.
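A rough sketch of that for_each idea, where every version stays in a map and you only ever add entries (the variable name app_versions is an assumption; the rest reuses names from the question):

variable "app_versions" {
  type = map(string)            # version => local source file
  default = {
    "1.0" = "app-1.0.zip"
    "2.0" = "app-2.0.zip"
  }
}

resource "aws_s3_bucket_object" "caam_test_bucket_obj" {
  for_each = var.app_versions

  bucket = aws_s3_bucket.caam_test_bucket.id
  key    = "${var.env}/v-${each.key}/app.zip"
  source = each.value
}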
Probably an easier way is to use local-exec, which will use the AWS CLI to upload the object. This happens "outside" of TF, so TF will not delete pre-existing objects, as it won't be aware of them. A sketch follows below.
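A hedged sketch of the local-exec variant, assuming the AWS CLI is installed and configured on the machine running Terraform:

resource "null_resource" "upload_app" {
  # Re-run the upload whenever the version changes.
  triggers = {
    version = var.current_version
  }

  provisioner "local-exec" {
    command = "aws s3 cp app.zip s3://${aws_s3_bucket.caam_test_bucket.id}/${var.env}/v-${var.current_version}/app.zip"
  }
}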
I am using variables with sensitive = true, but the state file still stores the ID and password. Is there any way to avoid this?
variable "rs_master_pass" {
type = string
sensitive = true
}
In the state file:
"master_password": "password"
Even if I take it out of the state manually, it comes back on each apply.
There is no "easy" way to avoid that. You must simply not hard-code the values in your TF files. Setting sensitive = true does not protect against having the secrets in plain text as you noticed.
The general ways for properly handling secrets in TF are:
use specialized, external secret stores, such as HashiCorp Vault, AWS Systems Manager Parameter Store or AWS Secrets Manager. They have to be populated separately so that, again, their secrets are not available in the TF state file.
use local-exec to set up the secrets outside of TF. Whatever you do in local-exec does not get stored in the TF state file. This is often done to change dummy secrets that may be required in your TF code (e.g. an RDS password) to the actual values outside of TF's knowledge (a sketch follows after this list).
if the above solutions are not accessible, then you have to protect your state file (it's good practice anyway). This is often done by storing it remotely in S3 under strict access policies.
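As an illustration of the local-exec point above, a sketch that rotates a dummy RDS master password to the real value, assuming an aws_db_instance.db is already declared with a placeholder password and the real one is exported as the environment variable REAL_DB_PASSWORD on the machine running Terraform (the resource name and the environment variable are both assumptions):

resource "null_resource" "rotate_db_password" {
  # The real password never appears in the .tf files or in the state file;
  # only the dummy placeholder used at creation time does.
  provisioner "local-exec" {
    command = "aws rds modify-db-instance --db-instance-identifier ${aws_db_instance.db.identifier} --master-user-password \"$REAL_DB_PASSWORD\" --apply-immediately"
  }
}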
In the Terraform state file, the following is a section of the .tfstate file against OpenStack (which exposes AWS-compatible APIs, hence AWS is the provider):
"aws_instance.7.3"
"primary": {
"attributes": {
"id": "6b646e50-..."
Say I delete an instance/resource manually in the console.
My Terraform configuration can be changed in such a way that terraform plan triggers either
i) a re-create (+/-), or
ii) a destroy (-) followed by a create/add (+).
The question is: in either case, is there a possibility that the newly created node has the same "id" attribute as it had before in the state file?
In other words, will the "id" attribute ("instance_id" in the case of GCP) always be unique throughout the infrastructure's lifecycle?
(This is so that I know for sure a new node was created/re-created when comparing the old tfstate file with the new one against the "id" or "instance_id" attribute, and can ensure the .tfstate reflects what the plan said would happen.)
The reason I am checking whether the .tfstate reflects the plan (the exact number of creates/re-creates/destroys) is that, although the apply happens according to the plan, sometimes the .tfstate DOES NOT reflect that.
The reason for this is that after apply, Terraform seems to do a GET call to the provider to update the .tfstate file, and this GET call sometimes returns an inconsistent view of the infrastructure (i.e. it may not return a node's details even though it was created and is part of the infrastructure!).
In that case I have to signal in our automated tool that the .tfstate does not match the plan, so there is a possible corruption/inconsistency in the .tfstate file to fix with a manual import.
It seems to be common practice to use count on a resource, with a ternary expression, to conditionally create it in Terraform.
I'd like to conditionally update an AWS Route 53 entry based on a push_to_prod variable. Meaning I don't want to delete the resource if I'm not pushing to production; I only want to update it, or leave the CNAME value as it is.
Has anyone done something like this before in Terraform?
Currently, as it stands, interpolation syntax isn't supported in lifecycle blocks (you can read more here), which makes this harder because otherwise you could use prevent_destroy. However, without more specifics, I am going to take my best guess at how to get you there.
I would use the allow_overwrite property on the Route53 record and set it based on your flag. That way, if you are pushing to prod, you can set it to false, which should trigger creating a new one. I haven't tested that; a rough sketch follows below.
Also note that if you don't make any changes to the Route53 resource, it shouldn't trigger any changes for Terraform to apply; only updating some part of the record will trigger a deployment.
You may want to combine this with some lifecycle settings, but I don't have enough time to dig into that specific resource and how it behaves.
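A rough, untested sketch of that allow_overwrite idea (the zone, record name, and the variables zone_id and cname_target are assumed placeholders):

resource "aws_route53_record" "app" {
  zone_id = var.zone_id
  name    = "app.example.com"
  type    = "CNAME"
  ttl     = 300
  records = [var.cname_target]

  # Per the suggestion above: allow_overwrite is false when pushing to prod.
  allow_overwrite = var.push_to_prod == "true" ? false : true
}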
Two examples I can think of are:
type = "${var.push_to_prod == "true" ? "CNAME" : var.other_value}" - this will have a fixed other_value, there is no way to have terraform "ignore" the resource once it's being managed by terraform.
or
type = "${var.aws_route53_record_type}" and you can have dev.tfvars and prod.tfvars, with aws_route53_record_type defined as whatever you want for dev and CNAME for prod.
The thing is, with what you're trying to do ("I only want to update it, or leave the CNAME value as it is"), that's not how Terraform works. Terraform either manages the resource for you or it doesn't. If it's managing it, it'll update the resource based on the config you've defined in your .tf file. If it's not managing the resource, it won't modify it. It sounds like what you're really after is the second solution, where you pass two different configs from your .tfvars files into your .tf file and, based on the different configs, different resources are created. You can couple this with count to determine whether a resource should be created or not, as sketched below.
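A minimal sketch combining the tfvars approach with count (the zone, record name, and the helper variables zone_id and cname_target are assumptions for illustration):

# variables.tf
variable "push_to_prod" {
  type    = string
  default = "false"
}

variable "aws_route53_record_type" {
  type    = string
  default = "A"
}

# prod.tfvars:  push_to_prod = "true",  aws_route53_record_type = "CNAME"
# dev.tfvars:   push_to_prod = "false"

resource "aws_route53_record" "app" {
  count   = var.push_to_prod == "true" ? 1 : 0
  zone_id = var.zone_id
  name    = "app.example.com"
  type    = var.aws_route53_record_type
  ttl     = 300
  records = [var.cname_target]
}

You would then pick the configuration at apply time, e.g. terraform apply -var-file=prod.tfvars.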