In my Terraform script I defined a load balancer plus two listeners and two target groups, each of which has two targets assigned. This all works fine. When any of these items is removed manually from within the AWS console, it is added again by the TF script once it is run again.
The script makes use of these resources:
aws_alb
aws_lb_target_group
aws_lb_listener
aws_lb_target_group_attachment
But when I manually add a new listener plus a target group with its own targets, this change isn't detected by the Terraform script. I would expect these manual additions to be removed, since they are linked to the aws_alb that is created with TF. Is this the expected behavior?
Yes, this is expected. Terraform is declarative: you define your infrastructure and it figures out what the diffs are in order to determine what changes it needs to make. It can only diff against and change resources it controls (unless you use data sources to look up AWS resources). Manually created resources won't be managed by Terraform; however, you can write the Terraform config for them and import them if you want to manage them with Terraform (see the docs for import).
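As a rough sketch of what bringing the manually created listener under Terraform's control could look like: first write a matching resource block, then import it. The resource names, port, and ARN below are placeholders, not values from your setup.
resource "aws_lb_listener" "manual" {
  load_balancer_arn = aws_alb.example.arn # assumes your ALB resource is named "example"
  port              = 8080
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.manual.arn # assumes a matching target group block exists
  }
}
Then associate the block with the existing listener (placeholder ARN):
terraform import aws_lb_listener.manual arn:aws:elasticloadbalancing:eu-west-1:123456789012:listener/app/example/1234567890abcdef/1234567890abcdef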
Related
Could you please suggest an easy way to copy/move already-created AWS resources from the current VPC to another?
There is no "easy" way but you can use different tools to import existing resources and deploy them to different VPCs.
There is a free tool called Former2 (https://github.com/iann0036/former2) which can be used to scan existing resources and produce outputs. These outputs can then be used to deploy new resources. I tested this tool and it is quite intuitive for gathering information about existing resources and producing outputs in different template languages (CloudFormation, Terraform, CDK, Troposphere, Pulumi).
Terraform can be used to import existing resources into the current state, and after that you copy the resources into the configuration yourself. Future versions of Terraform will be able to update the configuration as well. To use Terraform this way you must know every resource you want to import and run the import command with their IDs. Terraform does not support nested imports, so importing a VPC does not add its subnets or other resources to the state; they have to be imported separately.
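As a sketch of what that looks like, each resource is imported with its own command (the resource addresses and IDs here are placeholders, and each assumes a matching resource block already exists in the configuration):
terraform import aws_vpc.main vpc-0a1b2c3d4e5f67890
terraform import aws_subnet.private_a subnet-0a1b2c3d4e5f67890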
I think CloudFormation also has an import feature, but a template for the specific resources must be written beforehand and submitted during the import. This is not an easy or fast way to copy resources, but it should work, and as an end product there should be a template which can be used to deploy the resources to other VPCs.
I will start a new Terraform project on AWS. The VPC is already created and I want to know the best way to integrate it into my code. Do I have to create it again, and will Terraform detect it and not override it? Or do I have to use a data source for that? Or is there another, better way, like terraform import?
I also want to be able to deploy the entire infrastructure in another region or account in the future.
Thanks.
When it comes to integrating with existing objects, you first have to decide between two options: you can either import these objects into Terraform and use Terraform to manage them moving forward, or you can leave them managed by whatever existing system and use them in Terraform by reference.
If you wish to use Terraform to manage these existing objects, you must first write a configuration for the object as if Terraform were going to create it itself:
resource "aws_vpc" "example" {
# fill in here all the same settings that the existing object already has
cidr_block = "10.0.0.0/16"
}
# Can then use that vpc's id in other resources using:
# aws_vpc.example.id
But then rather than running terraform apply immediately, you can first run terraform import to instruct Terraform to associate this resource block with the existing VPC using its id assigned by AWS:
terraform import aws_vpc.example vpc-abcd1234
If you then run terraform plan you should see that no changes are required, because Terraform detected that the configuration matches the existing object. If Terraform does propose some changes, you can either accept them by running terraform apply or continue to update the configuration until it matches the existing object.
Once you have done this, Terraform will consider itself the owner of the VPC and will thus plan to update it or destroy it on future runs if the configuration suggests it should do so. If any other system was previously managing this VPC, it's important to stop it doing so or else this other system is likely to conflict with Terraform.
If you'd prefer to keep whatever existing system is managing the VPC, you can also use the Data Sources feature to look up the existing VPC without putting Terraform in charge of it.
In this case, you might use the aws_vpc data source, which can look up VPCs by various attributes. A common choice is to look up a VPC by its tags, assuming your environment has a predictable tagging scheme that allows you to describe the single VPC you are looking for:
data "aws_vpc" "example" {
tags = {
Name = "example-VPC-name"
}
}
# Can then use that vpc's id in other resources using:
# data.aws_vpc.example.id
In some cases users will introduce additional indirection to find the VPC some other way than by querying the AWS VPC APIs directly. That is a more advanced configuration and the options here are quite broad, but for example, if you are using SSM Parameter Store you could place the VPC ID into a Parameter Store parameter and retrieve it using the aws_ssm_parameter data source.
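A rough sketch of that last approach, assuming the VPC ID had previously been written to a hypothetical parameter named /shared/vpc-id:
data "aws_ssm_parameter" "vpc_id" {
  name = "/shared/vpc-id" # hypothetical parameter name
}
# The value can then be referenced elsewhere as:
#   data.aws_ssm_parameter.vpc_id.value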
If the existing system managing the VPC is CloudFormation, you could also use aws_cloudformation_export or aws_cloudformation_stack to retrieve the information from the CloudFormation API.
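For example, if the CloudFormation stack exports the VPC ID, a sketch using aws_cloudformation_export might look like this (the export name is hypothetical):
data "aws_cloudformation_export" "vpc_id" {
  name = "shared-vpc-id" # hypothetical export name
}
# The value can then be referenced elsewhere as:
#   data.aws_cloudformation_export.vpc_id.value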
If you are happy to manage it via terraform moving forward then you can import existing resources into your terraform state. Here is the usage page for it https://www.terraform.io/docs/import/usage.html
You will have to define a resource block inside of your configuration for the vpc first. You could do something like:
resource "aws_vpc" "existing" {
cidr_block = "172.16.0.0/16"
tags = {
Name = "prod"
}
}
and then on the CLI run the command
terraform import aws_vpc.existing <vpc-id>
Make sure you run a terraform plan afterwards, because terraform may try to make changes to it. You kind of have to reverse engineer it a bit, by adding all the necessary configuration to the aws_vpc resource. Once it is aligned, terraform will not attempt to change it. You can then re-use this to deploy to other accounts and regions.
As you suggested, you could use a data source for the VPC. This can be useful if you want to manage it outside of Terraform, and it removes the risk of an inexperienced user accidentally destroying the VPC when running the code.
Some customers I've worked with prefer to manage resources like vpcs/subnets (and other core infrastructure) in separate terraform scripts that only senior engineers have access to. This can avoid the disaster scenarios where people destroy the underlying infrastructure by accident.
I personally prefer managing all my terraform code in a git repository that is then deployed using a CI/CD tool, even if it's just myself working on it. Some people may not see the value in spending the time creating the pipeline though and may stick with running it locally.
This post has some great recommendations on running Terraform in an automated environment: https://learn.hashicorp.com/terraform/development/running-terraform-in-automation
I am trying to create a Terraform script which will create a VPC and other resources. I am passing the parameters for the script from a .tfvars file. I have successfully created the VPC and resources by executing the script. Now I want to create another VPC with the same set of resources, but with a different set of parameter values. I created a new .tfvars file with the new values and tried to execute it with the old main.tf file. When I execute the 'terraform plan' command, it shows that it will delete the VPC and resources created during my first run and will create a new VPC with the new values.
Is there any method to create resources using the same Terraform main.tf file, changing only the .tfvars file?
You are running into a state-based issue. When you define a resource you give it a name. Those names are used in the state file, and that is what makes Terraform think you are trying to alter an existing resource. You have a couple of ways to address this, and it depends on what you are really doing.
Terraform Workspaces
You could use a workspace in Terraform for each VPC you are creating; this would keep the state separated. However, workspaces are really intended to separate environments, not multiple resources in the same environment. You can read more here.
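If you do go the workspace route, the workflow is roughly the following (a sketch, assuming a second variables file named vpc-b.tfvars; the names are hypothetical):
terraform workspace new vpc-b
terraform plan -var-file=vpc-b.tfvars
terraform apply -var-file=vpc-b.tfvars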
Terraform Modules
What it sounds like to me is that you really want to create a Terraform module for your VPC configuration, and then create each VPC using your module in the same main.tf. That way you will have uniquely named resources, which will not confuse the state management. You can read more about modules here. A good resource for information about them can be found in this blog post.
The way to do this is by creating a module. You should be able to pretty much cut and paste your current code into the module; you may only need to remove the provider definition from it. Then, in your new main code (the root module), reference the module for each set of resources you want to create.
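As a rough sketch, assuming you move your VPC code into a local module at ./modules/vpc that exposes hypothetical name and cidr_block variables, the root module could then call it once per VPC:
module "vpc_a" {
  source     = "./modules/vpc"
  name       = "vpc-a"
  cidr_block = "10.0.0.0/16"
}

module "vpc_b" {
  source     = "./modules/vpc"
  name       = "vpc-b"
  cidr_block = "10.1.0.0/16"
}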
Ah, the reason TF is trying to remove the resources you already created is that they've been captured in its state.
When you create the module, add the resources you already created back in. TF will always try to configure things as per the code; if the resources are removed from the code, it will try to destroy them.
Create a module in terraform
This is because you are working on the same tfstate file.
You could do the following:
1. If you are working with local state: copy the whole code into a different directory, use the new tfvars file there, and work from that directory. This will start a new, clean tfstate.
2. If you are working with remote state:
a. Configure a different remote state (see the backend sketch after this list) and then use the new tfvars file, or
b. Create a different directory, symlink your code into this directory, and replace the old backend config and tfvars file with the new ones.
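For option 2a, a minimal sketch with an S3 backend might look like this; the bucket, key, and region are placeholders, and the point is simply that each copy of the code writes to a different state key:
terraform {
  backend "s3" {
    bucket = "my-terraform-state"      # placeholder bucket
    key    = "vpc-b/terraform.tfstate" # a different key per VPC/environment
    region = "eu-west-1"
  }
}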
I have sample code for working with multiple environments: https://github.com/pradeepbhadani/tf-course/tree/master/Lesson5
Create a Terraform module of your VPC code and then call it from a separate directory.
I'm using a tool named kops that generates a terraform file to set up some infrastructure for kubernetes. After that, we want to use terraform to create parts of our infrastructure specific to our application. e.g., a queue, a proxy, elasticache, etc.
The terraform file that kops generates has a lot of information in it that I'd like to refer to when creating the queue/proxy/elasticache. e.g., the subnet ranges to use, the cidr blocks, the availability zones, etc. But, I don't want to modify the kops generated terraform file because whenever there's a kops upgrade, I'll have to re-generate it then re-modify it.
The terraform file that kops generates doesn't provide any output variables. I could append my queue/proxy/elasticache configurations to the bottom of the file that kops generates. Then I'd be able to refer to the kops generated variables. But I consider this to be a modification to the kops generated file and would like to avoid this for the reasons above.
How can I make my custom terraform reference the parts of a generated terraform file?
If there are no output variables in the generated terraform files and you do not want to change them, how about using data sources?
https://www.terraform.io/docs/configuration/data-sources.html
Data sources allow data to be fetched or computed for use elsewhere in Terraform configuration. Use of data sources allows a Terraform configuration to build on information defined outside of Terraform, or defined by another separate Terraform configuration.
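In this case, that could mean looking up the kops-created networking by tag rather than touching the generated file at all. A minimal sketch, assuming the VPC carries a Name tag matching your cluster name (the tag value here is hypothetical):
data "aws_vpc" "kops" {
  tags = {
    Name = "cluster.example.com" # hypothetical tag on the kops-created VPC
  }
}
# The queue/proxy/elasticache resources can then reference, for example:
#   data.aws_vpc.kops.id
#   data.aws_vpc.kops.cidr_block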
Is there a way to create the Terraform state file from existing infrastructure? For example, an AWS account comes with some services already in place (for example, the default VPC).
But Terraform seems to know only about the resources it creates. So:
What is the best way to migrate existing AWS infrastructure to Terraform code?
Is it possible to add a resource manually and modify the state file manually (any bad effects?)
Update
Terraform 0.7.0 supports importing a single resource.
For relatively small things I've had some success in manually mangling a state file to add stubbed resources that I then proceeded to Terraform over the top (particularly with pre-existing VPCs and subnets and then using Terraform to apply security groups etc).
For anything more complicated there is an unofficial tool called terraforming which I've heard is pretty good at generating Terraform state files but also merging with pre-existing state files. I've not given it a go but it might be worth looking into.
Update
Since Terraform 0.7, Terraform now has first class support for importing pre-existing resources using the import command line tool.
As of 0.7.4 this will import the pre-existing resource into the state file but not generate any configuration for the resource. Of course, if you then attempt a plan (or an apply), Terraform will show that it wants to destroy this orphaned resource. Before running the apply you would then need to create the configuration to match the resource, and then any future plans (and applies) should show no changes to the resource and happily keep the imported resource.
Use Terraforming (https://github.com/dtan4/terraforming). To date it can generate most of the *.tfstate and *.tf files, except for VPC peering.