We have provisioned Amazon resources (EC2 instances, load balancers, target groups, ...) using Terragrunt. When we re-apply the EC2 instances script, it removes the target groups associated with the load balancer.
This is due to the dependencies we create in the target group scripts, but we would like to understand the best practices for writing loosely coupled Terraform/Terragrunt scripts. In other words, re-applying one .hcl file should not impact the other related resources.
Please suggest.
The way Terraform/Terragrunt knows what to destroy is by referencing the state file (local or remote). When you run terraform apply or terragrunt apply inside a folder, Terraform looks at what is in AWS, what is in the tfstate file, and what your scripts ask for; it diffs all three, figures out the delta, and decides what to do. An important thing to know is that Terraform is directory specific: whichever directory you run it in, it creates a state file in that directory. There is also a concept of remote state, using S3 along with DynamoDB for locking, so that multiple developers can share the state without stepping on each other's toes.
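As an illustration, a minimal remote state backend block looks roughly like this; the bucket, key, region, and table names below are placeholders you would choose yourself:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"    # placeholder bucket name
    key            = "ec2/terraform.tfstate" # placeholder key, one per component
    region         = "us-east-1"             # your region
    dynamodb_table = "terraform-locks"       # placeholder DynamoDB table used for state locking
    encrypt        = true
  }
}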
I have simplified everything down to the basics to demonstrate the following: create two VPC structures, one for test and one for development, and then try to use exactly the same code (from the same folder) to place a security group into each environment (test-vpc and dev-vpc).
Each VPC deployment uses a unique Amazon S3 backend, via a unique key within the AWS S3 bucket, to store its remote state file.
The security_group.tf is using a variable to point at a different S3 key for the Terraform remote state file (key = var.vpc_choice), where vpc_choice equals the key value for the S3 backend.
I then execute the terraform apply command twice from the same folder: "terraform apply -var-file=test.tfvars" and then once again with a different variable file, "terraform apply -var-file=dev.tfvars".
My expectation is that the security group is provisioned into a different VPC because the variable points to a different backend state.
However, the local terraform state in that folder is getting in my way. It doesn't matter that I'm pointing at a remote state, the local state file knows the security group was already provisioned and wants to destroy that security group and create the security group in the other VPC.
It works if I copy the code to another folder like "groups2". The first terraform apply provisions into test-vpc and the second terraform apply (as long as the code is in a different folder) provisions into dev-vpc. So while the code is exactly the same, and does provision into two different VPCs because of the variable answered with a .tfvars file, I have not achieved the ability to provision from the same folder.
The BIG question is: is that possible? Have I missed something, like a way to ignore the local state file, so I can provision to different VPCs by using a variable?
You will find a copy of my code at https://github.com/surfingjoe/Proposed_Terraform_Modules
Mark B commented on my question, but in fact, answered the question. Thank you Mark!
Using Terraform Workspaces works perfectly!
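For reference, a rough sketch of the workspace flow (the workspace names and var files are illustrative): each workspace keeps its own state, so applying in one does not touch the other.

terraform workspace new test
terraform workspace new dev

terraform workspace select test
terraform apply -var-file=test.tfvars

terraform workspace select dev
terraform apply -var-file=dev.tfvars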
One environment = one remote backend (one tfstate file)
So, if you have two environments, you have to open each folder, set a unique name for the tfstate in the remote backend, and run terraform apply.
A contractor built an application's AWS infrastructure on his local laptop, never committed his code, then left (wiping the notebook's hard drive). But he did create the infrastructure with Terraform and stored the remote state in an S3 bucket, s3://analytics-nonprod/analytics-dev.tfstate.
This state file includes all of the VPC, subnets, igw, nacl, ec2, ecs, sqs, sns, lambda, firehose, kinesis, redshift, neptune, glue connections, glue jobs, alb, route53, s3, etc. for the application.
I am able to run CloudFormer to generate CloudFormation for the entire infrastructure, and I have also tried to import the infrastructure using terraformer, but terraformer does not cover the Neptune and Lambda components.
What is the best way/process to recreate a somewhat usable terraform just from the remote state?
Should I generate a generic stub for each resource, such as:
resource "aws_glue_connection" "dev" { }
then run terraform import aws_glue_connection.dev <id>, followed by terraform show, for each resource?
Terraform doesn't have a mechanism specifically for turning existing state into configuration, and indeed doing so would be lossy in the general case because the Terraform configuration likely contained expressions connecting resources to one another that are not captured in the state snapshots.
However, you might be able to get a starting point -- possibly not 100% valid but hopefully a better starting point than nothing at all -- by configuring Terraform just enough to find the remote state you have access to, running terraform init to make Terraform read it, and then running terraform show to see the information from the state in a human-oriented way that is designed to resemble (but not necessarily exactly match) the configuration language.
For example, you could write a backend configuration like this:
terraform {
  backend "s3" {
    bucket = "analytics-nonprod"
    key    = "analytics-dev.tfstate"
  }
}
If you run terraform init with appropriate AWS credentials available then Terraform should read that state snapshot, install the providers that the resource instances within it belong to, and then leave you in a situation where you can run Terraform commands against that existing state. As long as you don't take any actions that modify the state, you should be able to inspect it with commands like terraform show.
You could then copy the terraform show output into another file in your new Terraform codebase as a starting point. The output is aimed at human consumption and is not necessarily all parsable by Terraform itself, but the output style is similar enough to the configuration language that hopefully it won't take too much effort to massage it into a usable shape.
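For example, you could dump that output into a file to work from (the file name here is just an illustration):

terraform show -no-color > extracted-resources.txt

and then copy the relevant blocks into .tf files, fixing up anything Terraform rejects.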
One important detail to watch out for is the handling of Terraform modules. If the configuration that produced this state contained any module "foo" blocks then in your terraform show output you will see some things like this:
# module.foo.aws_instance.bar
resource "aws_instance" "bar" {
  # ...
}
In order to replicate the configuration for that, it is not sufficient to paste the entire output into one file. Instead, any resource block that has a comment above it indicating that it belongs to a module will need to be placed in a configuration file belonging to that module, or else Terraform will not understand that block as relating to the object it can see in the state.
I'd strongly suggest taking a backup copy of the state object you have before you begin, and you should be very careful not to apply any plans while you're in this odd state of having only a backend configuration, because Terraform might (if it's able to pick up enough provider configuration from the execution environment) plan to destroy all of the objects in the state in order to match the configuration.
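One simple way to take that backup, assuming you have the AWS CLI (or Terraform itself, once the backend is initialized) configured with read access to the bucket:

aws s3 cp s3://analytics-nonprod/analytics-dev.tfstate ./analytics-dev.tfstate.backup

# or, after terraform init:
terraform state pull > analytics-dev.tfstate.backup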
I have created an infrastructure with Terraform and now I need to set up CI... I'm thinking of using Terraform there too. Is it possible to extract a certain part of the tf code into the pipeline in order to update the ECS tasks while ignoring the rest of the infrastructure?
As ydaetskcoR suggested in a comment, if you really want to run parts of your Terraform configuration independently of the rest, you're better off splitting it up.
What I'd suggest is several Terraform projects grouped the way you'd organize responsibility and releases (e.g. each application might have its own, the VPC might be on its own, shared infrastructure in its own), and then use Terraform remote state to connect them all.
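As a sketch of how the split projects can be wired together, a downstream project (for example the ECS one) could read outputs published by the VPC project through a terraform_remote_state data source; the bucket, key, and output names here are placeholders, and the syntax shown is Terraform 0.12+ style:

data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"    # placeholder bucket
    key    = "vpc/terraform.tfstate" # placeholder key of the VPC project's state
    region = "us-east-1"
  }
}

# e.g. reference an output exported by the VPC project:
# subnet_ids = data.terraform_remote_state.vpc.outputs.private_subnet_ids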
I manage an established AWS ECS application with terraform. The terraform also manages all other aspects of each of 4 AWS environments, including VPCs, subnets, bastion hosts, RDS databases, security groups and so on.
We manage our 4 environments by putting all the common configuration in modules which are parameterised with variables derived from the environment specific terraform files.
Now, we are trying to migrate to using Kubernetes instead of Amazon ECS for container orchestration and I am trying to do this incrementally rather than with a big bang approach. In particular, I'd like to use terraform to provision the Kubernetes cluster and link it to the other AWS resources.
What I'd initially hoped to do was capture the terraform output from kops create cluster, generalise it by parameterising it with environment specific variables and then use this one kubernetes module across all 4 environments.
However, I now realise this isn't going to work, because the k8s nodes and masters all reference the kops state bucket (in S3), and it seems like I am going to have to clone that bucket and rewrite the files contained therein. This seems like a rather fragile way to manage the Kubernetes environment: if I recreate the Terraform environment, the related kops state bucket is going to be inconsistent with the AWS environment.
It seems to me that kops-generated Terraform may be useful for managing a single instance of an environment, but it isn't easily applied to multiple environments. You effectively need one kops-generated Terraform per environment, and there is no way to reuse the Terraform to establish a new environment; for that you must fall back from a declarative approach and resort to an imperative kops create cluster command.
Am I missing a good way to manage the definition of multiple similar kubernetes environments with a single terraform module?
I'm not sure how you reached either conclusion: using the generated Terraform (which will cause more trouble than it will ever solve; that's definitely a tool to get rid of ASAP), or having to duplicate S3 buckets.
I'm pretty sure you'd be interested in kops's cluster template feature.
You won't need to generate, hack, and launch (and debug...) Terraform, and kops templates are just as easy if not significantly easier (and more specific...) to maintain than Terraform.
When kops releases new versions, you won't have to re-generate and re-hack your Terraform scripts either!
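A rough sketch of that workflow, with the file names and cluster name as placeholders (check the exact flags against your kops version):

# render an environment-specific cluster spec from a shared template
kops toolbox template --template cluster.tpl.yaml --values dev-values.yaml --output dev-cluster.yaml

# load the rendered spec into the kops state store and roll it out
kops replace -f dev-cluster.yaml --force
kops update cluster --name dev.k8s.example.com --yes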
Hope this helps!
Is there a way to create the Terraform state file from existing infrastructure? For example, an AWS account comes with some services already in place (e.g. the default VPC).
But Terraform seems to know only about the resources it creates itself. So:
What is the best way to migrate an existing AWS infrastructure to Terraform code?
Is it possible to add a resource manually and modify the state file manually (any bad effects?)
Update
Terraform 0.7.0 supports importing a single resource.
For relatively small things I've had some success in manually mangling a state file to add stubbed resources that I then proceeded to Terraform over the top (particularly with pre-existing VPCs and subnets and then using Terraform to apply security groups etc).
For anything more complicated there is an unofficial tool called terraforming which I've heard is pretty good at generating Terraform state files but also merging with pre-existing state files. I've not given it a go but it might be worth looking into.
Update
Since Terraform 0.7, Terraform now has first class support for importing pre-existing resources using the import command line tool.
As of 0.7.4 this will import the pre-existing resource into the state file but not generate any configuration for the resource. Of course, if you then attempt a plan (or an apply), Terraform will show that it wants to destroy this orphaned resource. Before running the apply you would then need to create the configuration to match the resource, and then any future plans (and applies) should show no changes to the resource and happily keep the imported resource.
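A rough example of that workflow for a single resource (the resource address and ID are hypothetical):

# 1. write a stub configuration first, e.g. in main.tf:
resource "aws_vpc" "main" {
  # arguments filled in after inspecting the imported state
}

# 2. import the existing object into state:
terraform import aws_vpc.main vpc-0a1b2c3d

# 3. flesh out the configuration until this shows no changes:
terraform plan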
Use Terraforming (https://github.com/dtan4/terraforming). To date it can generate most of the *.tfstate and *.tf files, except for VPC peering.
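For example (subcommands and flags are as documented in the project's README; verify against the version you install):

terraforming ec2 > ec2.tf                        # generate .tf for existing EC2 instances
terraforming ec2 --tfstate > terraform.tfstate   # generate the matching state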