Whenever I run a terraform plan using the following:
resource "google_sql_user" "users" {
name = "me"
instance = "${google_sql_database_instance.master.name}"
host = "me.com"
password = "changeme"
}
This only happens when I run it against a Postgres instance on Google Cloud SQL.
Terraform always reports that the plan will create the user, even though it has already been created. I'm using Terraform version 0.11.1. Is there something I'm missing? I tried setting the id value, however it still recreates itself.
It turns out that the user was never being added to the Terraform state. The cause seemed to be that, when using Postgres, the value host = "me.com" is not required, but causes problems if it's left there.
Once that was removed, the Terraform state was correct.
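For reference, the working configuration is simply the original snippet with the host argument removed (nothing else changed):
resource "google_sql_user" "users" {
  name     = "me"
  instance = "${google_sql_database_instance.master.name}"
  password = "changeme"
}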
Is it possible to create a terraform module that updates a specific resource which is created by another module?
Currently, I have two modules...
linux-system: which creates a linux vm with boot disks
disk-updater: which I'm planning to use to update the disks I created from the first module
The reason behind this is that I want to create a pipeline that will perform disk operation tasks, like disk resizing, via Terraform.
data "google_compute_disk" "boot_disk" {
name = "linux-boot-disk"
zone = "europe-west2-b"
}
resource "google_compute_disk" "boot_disk" {
name = data.google_compute_disk.boot_disk.name
zone = data.google_compute_disk.boot_disk.zone
size = 25
}
I tried using a data block to retrieve the existing disk's details and passing them to a resource block, hoping to update the same disk, but it seems it just tries to create a new disk with the same name, which is why I'm getting this error.
Error creating Disk: googleapi: Error 409: The resource ... already exists, alreadyExists
I think I'm doing it wrong; can someone advise me on how to proceed without using the first module I built? By the way, I'm a newbie when it comes to Terraform.
updates a specific resource which is created by another module?
No. You have to update the resource using its original definition.
The only way to update it from another module is to import it into that module, which is bad design, as you will then have two definitions for the same resource, resulting in out-of-sync state files.
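As a sketch of the usual approach instead: expose the size as an input variable on the module that owns the disk, and have the pipeline change only that value (the module path, variable name, and default below are hypothetical):
# inside the linux-system module (hypothetical variable name)
variable "boot_disk_size" {
  type    = number
  default = 20 # hypothetical starting size
}

resource "google_compute_disk" "boot_disk" {
  name = "linux-boot-disk"
  zone = "europe-west2-b"
  size = var.boot_disk_size
}

# in the root configuration, the pipeline only changes this value
module "linux_system" {
  source         = "./modules/linux-system"
  boot_disk_size = 25
}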
I'm learning how to use Terraform to manage my AWS Infrastructure.
On Monday I created it all from scratch with my terraform apply.
Tuesday (the next day) I wanted to update my app with some code changes (nothing that would affect the rest of the infrastructure, just my image in ECS) and got this error message in my terraform apply output:
Error: Error modifying DB Instance foo-staging-db: InvalidParameterCombination: Cannot upgrade postgres from 11.8 to 11.4
When I double checked my terraform database.tf I saw this:
resource "aws_db_instance" "main" {
...
engine = "postgres"
engine_version = "11.4"
...
}
Does anybody have an idea of what could have happened here?
This is not the first time that I have updated my databases like this, since I destroy my infrastructure every weekend to limit my AWS costs.
I solved the issue by changing my Terraform Postgres version to 11.8, but I still want to understand why the error happened in the first place.
AWS uses the default setting auto_minor_version_upgrade = true and tries to upgrade your database automatically.
You can do one of the following to solve it:
Method 1
Set the flag to false explicitly with auto_minor_version_upgrade = false
Method 2
Use only the major version number: engine_version = "11"
For more information, see https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance#engine_version
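As a sketch, applied to the resource from the question, either variant looks like this (pick one; everything else stays as it was):
# Method 1: stop AWS from bumping the minor version
# (match engine_version to the instance's current version, here 11.8)
resource "aws_db_instance" "main" {
  ...
  engine                     = "postgres"
  engine_version             = "11.8"
  auto_minor_version_upgrade = false
  ...
}

# Method 2: track only the major version and let AWS manage minor upgrades
resource "aws_db_instance" "main" {
  ...
  engine         = "postgres"
  engine_version = "11"
  ...
}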
I have a simple site set up on AWS and have a terraform script working to deploy it (at least from my local machine).
When I have a successful deployment through terraform apply, quite often if I then run terraform plan again (immediately after the apply) I will see changes like this:
# aws_route53_record.alias_route53_record_portal will be updated in-place
~ resource "aws_route53_record" "alias_route53_record_portal" {
      fqdn    = "mysite.co.uk"
      id      = "Z12345678UR1K1IFUBA_mysite.co.uk_A"
      name    = "mysite.co.uk"
      records = []
      ttl     = 0
      type    = "A"
      zone_id = "Z12345678UR1K1IFUBA"

    - alias {
        - evaluate_target_health = false -> null
        - name                   = "d12345mkpmx9ii.cloudfront.net" -> null
        - zone_id                = "Z2FDTNDATAQYW2" -> null
      }
    + alias {
        + evaluate_target_health = true
        + name                   = "d12345mkpmx9ii.cloudfront.net"
        + zone_id                = "Z2FDTNDATAQYW2"
      }
  }
Why is terraform saying that some parts of resources need recreating when nothing has changed?
EDIT: My actual tf resource...
resource "aws_route53_record" "alias_route53_record_portal" {
zone_id = data.aws_route53_zone.sds_zone.zone_id
name = "mysite.co.uk"
type = "A"
alias {
name = aws_cloudfront_distribution.s3_distribution.domain_name
zone_id = aws_cloudfront_distribution.s3_distribution.hosted_zone_id
evaluate_target_health = true
}
}
You have changed evaluate_target_health from false to true. Terraform will just update the fields that have changed. The reason it shows the change like this is that AWS often doesn't provide separate APIs for each field. Since Terraform is showing that this resource will be updated in-place, it will touch the minimum number of resources needed to make this change.
The "plan" operation in Terraform first synchronizes the Terraform state with remote objects (by making calls to the remote API), and then it compares the updated state with the configuration.
Terraform (or, more accurately, the relevant Terraform provider) then generates a planned update or replace for any case where the state and the configuration disagree.
If you see a planned update for a resource whose configuration you know you haven't changed, then by process of elimination that suggests that the remote system is what has changed.
Sometimes that can happen if some other process (or a human in the admin console) changes an object that Terraform believes itself to be responsible for. In that case, the typical resolution is to ensure that each object is only managed by one system and that no-one is routinely making changes to Terraform-managed objects outside of Terraform.
One way to diagnose this would be to consult the remote system and see whether its current settings agree with your Terraform configuration. If not, that would suggest that something other than Terraform has changed the value.
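For example, on Terraform v0.15.4 or later you can ask Terraform itself to show only the drift between the state and the remote objects, without comparing against the configuration:
terraform plan -refresh-only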
A less common reason this can arise is due to a bug in the provider itself. There are two variations of this class of bug:
When creating the object, the provider doesn't correctly translate the given configuration to a remote API call, and so it ends up creating an object that doesn't match the configuration. A subsequent Terraform plan will then notice that inconsistency and plan an update to fix it. If the provider's update operation has a similar bug then this will never converge, causing the provider to repeatedly plan the same update.
Conversely, the create/update may be implemented correctly but the "refresh" operation (updating the state to match the remote system) may inaccurately translate the remote object data back to Terraform state data, causing the state to not accurately reflect the remote system. In that case, the provider will probably then find that the configuration doesn't match the state anymore, even though the state was correct after the initial create.
Both of these bugs are typically nicknamed "permadiff" by provider developers, because the symptom is Terraform seeming to plan the same change indefinitely, no matter how many times you apply it. If you think you've encountered a "permadiff" bug then usually the path forward is to report a bug in the provider's development repository so that the maintainers can investigate.
One specific variation of "permadiff" is a situation where the remote system does some sort of normalization of your given values which the provider doesn't take into account. For example, some remote systems will accept strings containing uppercase letters but will convert them to lowercase for permanent storage. If a provider doesn't take that into account, it will probably incorrectly plan to change the value back to the one containing uppercase letters again in order to try to make the state match the configuration. This subclass of bug is a normalization permadiff, which provider teams will typically address by re-implementing the remote system's normalization logic in the provider itself.
If you find a normalization permadiff then you can often work around it until the bug is fixed by figuring out what normalization the remote system expects and then manually normalizing your configuration to match it, so that the provider will then see the configuration as matching the remote system.
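As a purely hypothetical illustration of that workaround: if the remote system lowercases a name on storage, writing the value already normalized (or forcing it with lower()) keeps the configuration, the state, and the remote object in agreement:
resource "example_thing" "this" {
  # hypothetical resource type and attribute; the point is pre-normalizing the value
  name = lower("MyMixedCaseName") # stored remotely as "mymixedcasename"
}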
I am writing a small script that takes a small file from my local machine and puts it into an AWS S3 bucket.
My terraform.tf:
provider "aws" {
region = "us-east-1"
version = "~> 1.6"
}
terraform {
backend "s3" {
bucket = "${var.bucket_testing}"
kms_key_id = "arn:aws:kms:us-east-1:12345678900:key/12312313ed-34sd-6sfa-90cvs-1234asdfasd"
key = "testexport/exportFile.tfstate"
region = "us-east-1"
encrypt = true
}
}
data "aws_s3_bucket" "pr-ip" {
bucket = "${var.bucket_testing}"
}
resource "aws_s3_bucket_object" "put_file" {
bucket = "${data.aws_s3_bucket.pr-ip.id}"
key = "${var.file_path}/${var.file_name}"
source = "src/Datafile.txt"
etag = "${md5(file("src/Datafile.txt"))}"
kms_key_id = "arn:aws:kms:us-east-1:12345678900:key/12312313ed-34sd-6sfa-90cvs-1234asdfasd"
server_side_encryption = "aws:kms"
}
However, when I init:
terraform init
#=>
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working with Terraform immediately by creating Terraform configuration files.
and then try to apply:
terraform apply
#=>
Error: No configuration files found!
Apply requires configuration to be present. Applying without a configuration would mark everything for destruction, which is normally not what is desired. If you would like to destroy everything, please run 'terraform destroy' instead which does not require any configuration files.
I get the error above. Also, I have set up my default AWS Access Key ID and value.
What can I do?
This error means that you have run the command in the wrong place. You have to be in the directory that contains your configuration files, so before running init or apply you have to cd to your Terraform project folder.
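For example (assuming your configuration lives in a folder named my-terraform-project, a name made up here):
cd my-terraform-project
terraform init
terraform apply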
Error: No configuration files found!
The above error arises when you are not in the folder that contains your configuration file.
To remediate the situation, create a .tf file in the project folder you will be working in.
Note - an empty .tf file will also eliminate the error, but it will be of limited use as it does not contain provider info.
See the example below:
provider "aws" {
region = "us-east" #Below value will be asked when the terraform apply command is executed if not provided here
}
So, in order for the terraform apply command to execute successfully, you need to make sure of the points below:
You need to be in your Terraform project folder (this can be any directory).
It must contain a .tf file, which should preferably contain the Terraform provider info.
Execute terraform init to initialize the backend & provider plugin.
You are now good to execute terraform apply (without the no-configuration error).
In case anyone comes across this now: I ran into an issue where my TF_WORKSPACE env var was set to a different workspace than the one expected for the directory I was in. Double-check your current workspace with
terraform workspace show
show your available workspaces with
terraform workspace list
and switch to one of the listed workspaces with
terraform workspace select <workspace name>
If the TF_WORKSPACE env var is set when you try to use terraform workspace select, Terraform will print a message telling you about the potential issue:
The selected workspace is currently overridden using the TF_WORKSPACE
environment variable.
To select a new workspace, either update this environment variable or unset
it and then run this command again.
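As the message says, either point the variable at the workspace you actually want or clear the override; on a Unix-like shell, for example:
unset TF_WORKSPACE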
I had the same error as you. In my case it was not a VPN error but incorrect file naming, and I was in the project folder. To remedy the situation, I created a .tf file with the vim editor (vi aws.tf), then populated the file with the defined variables. Mine is working.
I too had the same issue; remember that a Terraform file name should end with the .tf extension.
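For example (file names made up), renaming a configuration file that was saved with the wrong extension is enough:
mv main.tf.txt main.tf
terraform init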
Another possible reason could be if you are using modules where the URL is incorrect.
When I had:
source = "git::ssh://git#git.companyname.com/observability.git//modules/ec2?ref=v2.0.0"
instead of:
source = "git::ssh://git#git.companyname.com/observability.git//terraform/modules/ec2?ref=v2.0.0"
I was seeing the same error message as you.
I got this error this morning when deploying to production, on a project which has been around for years and in which nothing had changed. We finally traced it down to the fact that the person who created the production deploy ticket had pasted this command into an email using Outlook:
terraform init --reconfigure
Microsoft, in its infinite wisdom, combined the two hyphens into one, and that one hyphen wasn't even the standard ASCII hyphen character (I think it's called an "en-dash"):
terraform init –reconfigure
This caused Terraform 0.12.31 to give the helpful error message:
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working
with Terraform immediately by creating Terraform configuration files.
It took us half an hour and another pair of eyes to notice that the hyphens were incorrect and needed to be re-typed! (I think terraform thought "reconfigure" was the name of the directory we wanted to run the init in, which of course didn't exist. Perhaps terraform could be improved to name the directory it's looking in when it reports this error?)
Thanks Microsoft for always being helpful (not)!
I attempted to manage my application versions in my Terraform template by parameterising the name. This was an attempt to have a new application version created by our CI process whenever the contents of the application changed. This way, in Elastic Beanstalk, I could keep a list of historic application versions so that I could roll back, etc. This didn't work: the same application version was constantly updated, and in effect I lost the history of all application versions.
resource "aws_elastic_beanstalk_application_version" "default" {
name = "${var.eb-app-name}-${var.build-number}"
application = "${var.eb-app-name}"
description = "application version created by terraform"
bucket = "${aws_s3_bucket.default.id}"
key = "${aws_s3_bucket_object.default.id}"
}
I then tried to parameterise the logical resource reference name, but this isn't supported by Terraform.
resource "aws_elastic_beanstalk_application_version" "${var.build-number}" {
name = "${var.eb-app-name}-${var.build-number}"
application = "${var.eb-app-name}"
description = "application version created by terraform"
bucket = "${aws_s3_bucket.default.id}"
key = "${aws_s3_bucket_object.default.id}"
}
Currently my solution is to manage my application versions outside of Terraform, which is disappointing, as there are other associated resources such as the S3 bucket and permissions to worry about.
Am I missing something?
As far as Terraform is concerned, you are just updating a single EB application version resource there. If you wanted to keep the previous versions around, then you might need to increment the count of resources that Terraform is managing.
Off the top of my head you could try something like this:
variable "builds" = {
type = list
}
resource "aws_elastic_beanstalk_application_version" "default" {
count = "${length(var.builds)}"
name = "${var.eb-app-name}-${element(builds, count.index)}"
application = "${var.eb-app-name}"
description = "application version created by terraform"
bucket = "${aws_s3_bucket.default.id}"
key = "${aws_s3_bucket_object.default.id}"
}
Then if you have a list of builds it should create a new application version for each build.
Of course, that could be made dynamic in that the variable could instead be a data source that returns a list of all your builds. If a data source doesn't already exist for that, you could write a small script that is used as an external data source.
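For example (the build numbers here are made up), supplying a list such as
builds = ["100", "101", "102"]
would keep three aws_elastic_beanstalk_application_version resources, named "${var.eb-app-name}-100", "${var.eb-app-name}-101" and "${var.eb-app-name}-102", so older versions stay around instead of being overwritten.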