Is there a way to export the Google Cloud configuration for an object, such as a load balancer, in the same form one would use to set it up via the API?
I can quickly configure what I need in the console, but I am spending tons of time trying to replicate that with Terraform. It would be great if I could generate Terraform files, or at least the Google API output, from the system I already have configured.
If you have something that was already created outside of Terraform and you want Terraform to manage it, or you want to work out how best to configure it with Terraform, you can use Terraform's import command for any resource that supports it.
So if you have created a forwarding rule called terraform-test via the Google Cloud console and want to know how that maps to Terraform's google_compute_forwarding_rule resource, you could run terraform import google_compute_forwarding_rule.default terraform-test to import it into Terraform's state file.
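For reference, the general form is terraform import <resource address in your config> <identifier of the existing resource>; a minimal sketch for this example, assuming the google provider block is already configured with the right project and region:

# address used in your .tf code, then the name of the existing rule
terraform import google_compute_forwarding_rule.default terraform-test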
If you then run a plan, Terraform will tell you that it has google_compute_forwarding_rule.default in its state but that the resource is not defined in your code, and as such it will want to remove it.
If you add the minimal config needed to make the plan work:
resource "google_compute_forwarding_rule" "default" {
name = "terraform-test"
}
And run the plan again, Terraform will tell you what it needs to change to make your imported forwarding rule look like the config you have defined. Assuming you've done something like set a description on the forwarding rule via the console, Terraform's plan will show something like this:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  ~ google_compute_forwarding_rule.default
      description: "This is the description I added via the console" => ""

Plan: 0 to add, 1 to change, 0 to destroy.
This tells you that Terraform wants to remove the description on the forwarding rule to make it match the config.
If you then update your resource definition to something like:
resource "google_compute_forwarding_rule" "default" {
name = "terraform-test"
description = "This is the description I added via the console"
}
Terraform's plan will then show an empty change set:
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
At this point your Terraform code is aligned with the reality of the resource in Google Cloud, and you can easily see what needs to be set on the Terraform side to make things happen as expected in the Google Cloud console.
Related
Well, I have some resources in AWS that were created via a Terraform module. Now I have to switch the source module to an almost identical module that differs only in a few things, such as the names of some resources, and I need to make that switch without causing replacements. At the moment I only have problems with the names of 4 resources. Here is an example:
KMS alias: before: kms-alias-s3bucket; after the module change: kms-alias-s3bucket-dev. How can I avoid replacement without changing the resource names? I have heard about terraform state mv but I don't know how to use it properly for this.
Changing the Terraform state to add the -dev suffix to the resource names will make Terraform diverge from your cloud environment, and any later update on those resources will force a replacement unless you never touch those resources again.
If your cloud environment has a bucket named xyz, you want your state to hold the bucket name xyz. So what changing those names involves depends on what those 4 resources are; a bucket name change forces replacement, for instance. If you really want the environment suffix, you can create another bucket with the desired name <bucket-name>-dev, move everything from the old bucket to the new one, and then import the new one into your state using terraform import; Terraform will then no longer force a replacement.
terraform import aws_s3_bucket.bucket <new-bucket-name>-dev
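For the "move everything" step, a hedged sketch using the AWS CLI (the bucket names are placeholders):

# Copy all objects from the old bucket into the new, suffixed bucket
aws s3 sync s3://<bucket-name> s3://<bucket-name>-dev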
Additional Info
Modifying your state directly is usually for structural changes, and the resource's local name is one of the things that can be changed that way.
resource "aws_s3_bucket" "bucket" { #bucket = resource local name
bucket = "my-tf-test-bucket" # my-tf-test-bucket = bucket name itself, it is unique and could not be changed without creating another bucket. AWS api does not allow that. That's why always depends on which resource.
tags = {
Name = "My bucket"
Environment = "Dev"
}
}
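Since the question mentions terraform state mv: for a purely structural move, where the same remote object should simply live at a new address in your configuration, a minimal sketch looks like the following (the module and resource addresses are hypothetical and need to match your own code):

# Move the existing state entry to the address used by the new module,
# so Terraform does not plan a destroy/create for the same remote object.
terraform state mv \
  'module.old_module.aws_kms_alias.this' \
  'module.new_module.aws_kms_alias.this'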
In summary, I would say: if Terraform tries to replace a resource when you change some argument (such as the bucket name), that replacement genuinely needs to be applied in your remote system (cloud environment). If the AWS API / AWS Console lets you make the change without recreating the resource (even when Terraform says a replacement is needed, which can happen sometimes), you can then import the resource into your state instead of editing the state.
I have run a terraform script to create some resources, including a VPC with private subnets, an RDS instance, and Kinesis/Firehose. This is working fine.
When I went to re-run Terraform and add some new resources (Elasticsearch in this case), Terraform started outputting a plan that included adding AWS tags to many of my previously existing resources, the text of which looks like "map-migrated" = "d-server-01uw80xeqs2083". Here is a snippet from the plan:
  # module.rds.aws_db_instance.etl_metastore_rds_dbinstance will be updated in-place
  ~ resource "aws_db_instance" "rds_dbinstance" {
        id       = "MyRDSId"
        name     = "etldb"
      ~ tags     = {
          - "map-migrated" = "d-server-01uw80xeqs2083" -> null
            # (2 unchanged elements hidden)
        }
      ~ tags_all = {
          - "map-migrated" = "d-server-01uw80xeqs2083" -> null
            # (2 unchanged elements hidden)
        }
        # (48 unchanged attributes hidden)
    }
I don't know why these tags are being added. Neither Google nor the Terraform docs have been any help on this issue. Is this something I can safely ignore? I'm worried that somehow I have crossed versions of Terraform and it's doing a migration that I don't want. As far as I know I am using the same version of Terraform before and after (1.0.1).
Terraform proposes to remove those tags.
Resource actions are indicated with the following symbols:
+ create
~ update in-place
- destroy
The reason for this is that Terraform compares what you have running on AWS with what you have defined in your Terraform files (docs).
It will then take all the actions needed so that your AWS infrastructure matches your Terraform configuration (i.e. create, change, and destroy AWS resources).
Here Terraform detected that you have those "map-migrated" tags on various AWS resources, so it proposes to delete them, since they are not defined in your Terraform files.
Now why and how were those tags added - and can you safely remove/ignore them?
Why?
These tags are used for the AWS Migration Acceleration Program (MAP), which is why adding them is called "MAP tagging".
This is a cloud migration program in which AWS helps companies get into the cloud by providing methods, tools, training, and money (you can get AWS Credits).
Now a requirement for that program is tagging your migrated resources with a "map-migrated" tag - or no credits for you. (credits are based on the cost and usage of tagged resources)
How?
Maybe your team is using the AWS Application Migration Service, which offers a setting to automatically apply MAP tags to launched instances.
Or someone added the tags manually in the AWS account.
Can you safely remove/ignore them?
Technically you can, it won't break anything.
But your project manager might get really angry, since you will lose out on funding. And the person who added those tags in AWS won't be happy either, if you override them with Terraform.
Solution
So the solution will be to ask your manager for the "List of all MAP Included Services" and incorporate the MAP tags in the Terraform code for all appropriate resources:
tags = {
  ...
  "map-migrated" = "d-server-01uw80xeqs2083"
}
(Or - dirty solution - copy the tags from the terraform apply output into your code, until there are no such tag differences between AWS and your terraform code anymore.)
Note: this tag is always the same for one management (payer) account and all its member accounts. The unique value is called Server Id.
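If many resources need the same tag, one hedged option, assuming a reasonably recent AWS provider (default_tags was added in provider version 3.38), is to set it once at the provider level instead of repeating it in every resource block:

provider "aws" {
  region = "us-east-1"

  # Applied to every taggable resource this provider creates, so the
  # MAP tag does not have to be repeated per resource.
  default_tags {
    tags = {
      "map-migrated" = "d-server-01uw80xeqs2083"
    }
  }
}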
I have a number of ECS services. Is it possible to pass a variable to ecs-container-definition.json to deploy to a specific one?
e.g. terraform apply -var 'deploy=aws-ecs-backend'
Firstly, there are a few overlapping concepts here. You don't specify what version of Terraform you are using so I'll assume it's relatively recent.
The Terraform resource you are likely referring to is an aws_ecs_task_definition. This resource takes the following format (from the docs):
resource "aws_ecs_task_definition" "service" {
family = "service"
container_definitions = jsonencode([
//ommitted for brevity
])
}
It appears you are instead using the file or templatefile function to embed the contents of a file called ecs-container-definition.json into your resource. From Terraform's perspective, you do not apply changes to this file; you apply changes to the resource that refers to it (the aws_ecs_task_definition).
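If the goal is to pass a variable into that JSON, a minimal sketch using templatefile could look like the following; the family name, variable name, and file path are assumptions for illustration:

resource "aws_ecs_task_definition" "backend" {
  family = "aws-ecs-backend"

  # Renders ecs-container-definition.json, substituting the variables passed
  # in the map; inside the file you would reference them as ${image_tag}.
  container_definitions = templatefile("${path.module}/ecs-container-definition.json", {
    image_tag = var.image_tag
  })
}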
The process you are trying to undertake is resource targeting, and it is invoked (for example) by specifying the target resource in the terraform apply command. Here is an example: if your aws_ecs_task_definition resource is called myservice you would have the below:
resource "aws_ecs_task_definition" "myservice" {
family = "service"
container_definitions = file('./templates/aws_ecs_task_definition')
.....
}
You would then plan and apply changes to this resource using the below commands (run from the same directory as the containing .tf file):
terraform plan -target="aws_ecs_task_definition.myservice" (to see planned changes)
terraform apply -target="aws_ecs_task_definition.myservice" (to apply changes)
Be aware that this is something of a Terraform anti-pattern, as the documentation states in the opening paragraph:
Occasionally you may want to only apply part of a plan, such as
situations where Terraform's state has become out of sync with your
resources due to a network failure, a problem with the upstream cloud
platform, or a bug in Terraform or its providers.
Generally, best practice is to have Terraform "cleanly apply", meaning that all of the state is in sync with your repository content.
My Terraform state file is messed up. The resources already exist on AWS. When I run the terraform apply command I am getting multiple "Already Exists" errors like the one below.
aws_autoscaling_group.mysql-asg: Error creating AutoScaling Group: AlreadyExists: AutoScalingGroup by this name already exists - A group with the name int-mysql-asg already exists
When I do a terraform import the error goes away, but I have hundreds of resources that give this error. What is the best way to sync the Terraform state and make terraform apply succeed?
You may want to look at Terraforming
It's a Ruby project that states "Export existing AWS resources to Terraform style (tf, tfstate)"
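As a sketch of how it is typically used (the exact subcommands for each resource type are listed in the project's README, so treat these as illustrative):

# Print Terraform HCL for the existing S3 buckets in the account
terraforming s3

# Print the corresponding tfstate-style JSON instead
terraforming s3 --tfstate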
I have the AWS CLI installed on my Windows computer, and running this command "works" exactly like I want it to.
aws ec2 describe-images
I get the following output, which is exactly what I want to see, because although I have access to AWS through my corporation (e.g. to check code into CodeCommit), I can see in the AWS web console for EC2 that I don't have permission to list running instances:
An error occurred (UnauthorizedOperation) when calling the DescribeImages operation: You are not authorized to perform this operation.
I've put terraform.exe onto my computer as well, and I've created a file "example.tf" that contains the following:
provider "aws" {
region = "us-east-1"
}
I'd like to issue some sort of Terraform command that would yell at me, explaining that my AWS account is not allowed to list Amazon instances.
Most Hello World examples involve using terraform plan against a resource to do an "almost-write" against AWS.
Personally, however, I always feel more comfortable knowing that things are behaving as expected with something a bit more "truly read-only." That way, I really know the round-trip to AWS worked but I didn't modify any of my corporation's state.
There's a bunch of stuff on the internet about "data sources" and their "aws_ami" or "aws_instances" flavors, but I can't find anything that tells me how to actually use it with a Terraform command for a simple print()-type interaction (the way it's obvious that, say, "resources" go with the "terraform plan" and "terraform apply" commands).
Is there something I can do with Terraform commands to "hello world" an attempt at listing all my organization's EC2 servers and, accordingly, watching AWS tell me to buzz off because I'm not authorized?
You can use the data source for AWS instances. You create a data source similar to the below:
data "aws_instances" "test" {
instance_tags = {
Role = "HardWorker"
}
filter {
name = "instance.group-id"
values = ["sg-12345678"]
}
instance_state_names = ["running", "stopped"]
}
This will attempt a read action that lists the EC2 instances matching the filters you put in the config, using the IAM permissions of the identity you run terraform plan with. Since that identity lacks authorization, the plan will fail with the error you described, which is your stated goal. You should modify the filters to target your organization's EC2 instances.
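For the print()-style interaction described in the question, one small sketch (the output name is arbitrary; ids is the list of instance IDs exported by the aws_instances data source) is to add an output and run a plan:

output "instance_ids" {
  value = data.aws_instances.test.ids
}

Running terraform init followed by terraform plan will then make the read-only round trip to AWS; with the permissions described above, it fails with the UnauthorizedOperation error rather than modifying anything.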