I'm using a tool named kops that generates a terraform file to set up some infrastructure for kubernetes. After that, we want to use terraform to create parts of our infrastructure specific to our application. e.g., a queue, a proxy, elasticache, etc.
The terraform file that kops generates has a lot of information in it that I'd like to refer to when creating the queue/proxy/elasticache. e.g., the subnet ranges to use, the cidr blocks, the availability zones, etc. But, I don't want to modify the kops generated terraform file because whenever there's a kops upgrade, I'll have to re-generate it then re-modify it.
The terraform file that kops generates doesn't provide any output variables. I could append my queue/proxy/elasticache configurations to the bottom of the file that kops generates. Then I'd be able to refer to the kops generated variables. But I consider this to be a modification to the kops generated file and would like to avoid this for the reasons above.
How can I make my custom terraform reference the parts of a generated terraform file?
If there are no output variables in the generated terraform files and you do not want to change them, how about using data sources?
https://www.terraform.io/docs/configuration/data-sources.html
Data sources allow data to be fetched or computed for use elsewhere in Terraform configuration. Use of data sources allows a Terraform configuration to build on information defined outside of Terraform, or defined by another separate Terraform configuration.
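For example, here's a minimal sketch of pulling the kops-managed network into your own config via data sources. The tag key/value and resource names are assumptions (kops tags the resources it manages with the cluster name, so adjust the filter to your cluster):

    # Look up the VPC that kops created, by tag (assumed tag; adjust to your cluster)
    data "aws_vpc" "kops" {
      tags = {
        KubernetesCluster = "cluster.example.com"
      }
    }

    # All subnet IDs in that VPC
    data "aws_subnet_ids" "kops" {
      vpc_id = "${data.aws_vpc.kops.id}"
    }

    # Your own resources can now reference the kops-managed network
    resource "aws_elasticache_subnet_group" "app" {
      name       = "app-cache-subnets"
      subnet_ids = ["${data.aws_subnet_ids.kops.ids}"]
    }

This keeps the kops-generated file untouched: your queue/proxy/elasticache config lives in its own files and only reads from the generated infrastructure.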
Related
In my Terraform script I defined a load balancer plus two listeners and two target groups, each of which has two targets attached. This all works okay. When any of these defined items is removed manually from within the AWS console, it is added again by the TF script once it is run again.
The script makes use of these resources:
aws_alb
aws_lb_target_group
aws_lb_listener
aws_lb_target_group_attachment
But when I manually add a new listener plus a target group with its own targets, this change isn't detected by the Terraform script. I would expect these manual additions to be removed, as they are linked to the aws_alb that is created with TF. Is this the expected behavior?
Yes, this is expected. Terraform is declarative: you define your infrastructure and it figures out the diffs to determine what changes it needs to make. It can only diff against, and change, resources it controls, unless you use data sources to look up AWS resources. Manually created resources won't be managed by Terraform; however, you can create the Terraform config for them and import them if you want to manage them with Terraform (see the docs for import).
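For example, to bring a manually created listener under Terraform's management, a rough sketch (the resource names, port, and ARN are placeholders, and aws_alb.main / aws_lb_target_group.manual are assumed to be defined elsewhere in your config):

    # 1. Write a resource block matching what was created by hand in the console
    resource "aws_lb_listener" "manual" {
      load_balancer_arn = "${aws_alb.main.arn}"
      port              = 8080
      protocol          = "HTTP"

      default_action {
        type             = "forward"
        target_group_arn = "${aws_lb_target_group.manual.arn}"
      }
    }

    # 2. Then import the existing listener into state by its ARN, e.g.:
    #    terraform import aws_lb_listener.manual <listener-arn>

After the import, a plan should show no changes if the config matches what actually exists in AWS.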
I had a general question in regards to data sources in Terraform. Can you specify a data source for a particular resource even if that resource is not present in your environment, and expect it to retrieve information about that resource? Or, when specifying a data source, does it create the resource and then just return the information in the data source block? I hope this makes sense. Thank you for any insight.
A Terraform data source allows you to refer to other data, configuration, or infrastructure defined in another Terraform configuration or outside source. Referencing a resource defined in a data source won't create the resource itself, and your plan will fail if you reference nonexistent data or infrastructure.
One example to help understand this is the aws_ami data source: if you reference a nonexistent AWS AMI in an aws_ami data source block, your Terraform plan will fail. That is, it won't try to create an AMI; it can only reference an existing one.
From the documentation:
Data sources allow data to be fetched or computed for use elsewhere in Terraform configuration. Use of data sources allows a Terraform configuration to make use of information defined outside of Terraform, or defined by another separate Terraform configuration.
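To make the aws_ami example concrete, a small sketch (the owner ID and name filter target the public Canonical Ubuntu AMIs):

    data "aws_ami" "ubuntu" {
      most_recent = true
      owners      = ["099720109477"] # Canonical's AWS account ID

      filter {
        name   = "name"
        values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
      }
    }

If no AMI matches the filter, terraform plan fails with a "query returned no results" style error; Terraform never attempts to create an AMI for you.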
I am trying to create a Terraform script which will create a VPC and other resources. I am passing the parameters for the script from a .tfvars file. I have successfully created the VPC and resources by executing the script. Now I want to create another VPC with the same set of resources but a different set of parameter values. I created a new .tfvars file with the new values and tried to execute it with the old main.tf file. When I execute the 'terraform plan' command, it shows that it will delete the VPC and resources created during my first run and create a new VPC with the new values.
Is there any method to create resources using the same Terraform main.tf file, just by changing the .tfvars file?
You are running into a state-based issue. When you define a resource you give it a name. Those names are used in the state file, and that is what makes Terraform think you are trying to alter an existing resource. You have a couple of ways to address this, depending on what you are really doing.
Terraform Workspaces
You could use workspaces in Terraform for each VPC you are creating; this would keep the states separated. However, workspaces are really intended to separate environments, not multiple copies of resources in the same environment. You can read more here.
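The workflow would look roughly like this (the workspace and tfvars file names are made up):

    terraform workspace new second-vpc
    terraform plan -var-file=second-vpc.tfvars
    terraform apply -var-file=second-vpc.tfvars

Each workspace gets its own state, so the second VPC no longer collides with the first.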
Terraform Modules
What it sounds like to me is that you really want to create a Terraform module for your VPC configuration, then create each VPC using your module in the same main.tf. That way you will have uniquely named resources, which will not confuse the state management. You can read more about modules here. A good resource for information about them can be found in this blog post.
The way to do this is by creating a module. You should be able to pretty much cut/paste your current code into your module; you may only need to remove the provider definition from the module. Then, in your new main code (the root module), reference the module for each set of resources you want to create.
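For instance, assuming you've moved your VPC code into ./modules/vpc with variables for the values that differ (all names and values here are hypothetical):

    module "vpc_one" {
      source     = "./modules/vpc"
      name       = "vpc-one"
      cidr_block = "10.0.0.0/16"
    }

    module "vpc_two" {
      source     = "./modules/vpc"
      name       = "vpc-two"
      cidr_block = "10.1.0.0/16"
    }

Each module instance gets its own address in the state (module.vpc_one, module.vpc_two), so creating the second VPC no longer deletes the first.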
Ah, the reason TF is trying to remove the resources you already created is that they've been captured in its state.
When you create the module, add the resources you already created back in. TF will always try to configure things as per the code; if the resources are removed from the code, it will try to destroy them.
Create a module in terraform
This is because you are working on the same tfstate file.
You could do the following:
1. If you are working with local state: copy the whole code into a different directory with a new tfvars file and work there. This will start a new, clean tfstate.
2. If you are working with remote state, either:
a. configure a different remote state and then use the new tfvars file (a minimal backend sketch follows this list), or
b. create a different directory, symlink your code into this directory, and replace the old backend config and tfvars file with new ones.
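A minimal sketch of such a per-VPC backend config (bucket, key, and region are placeholders):

    terraform {
      backend "s3" {
        bucket = "my-terraform-state"
        key    = "vpc-two/terraform.tfstate"
        region = "us-east-1"
      }
    }

Each copy of the code points at its own key, so each VPC gets its own tfstate.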
I have sample code for working with multiple environments: https://github.com/pradeepbhadani/tf-course/tree/master/Lesson5
Create a Terraform module of your VPC code and then call it from a separate directory.
I manage an established AWS ECS application with terraform. The terraform also manages all other aspects of each of 4 AWS environments, including VPCs, subnets, bastion hosts, RDS databases, security groups and so on.
We manage our 4 environments by putting all the common configuration in modules which are parameterised with variables derived from the environment-specific terraform files.
Now, we are trying to migrate to using Kubernetes instead of Amazon ECS for container orchestration and I am trying to do this incrementally rather than with a big bang approach. In particular, I'd like to use terraform to provision the Kubernetes cluster and link it to the other AWS resources.
What I'd initially hoped to do was capture the terraform output from kops create cluster, generalise it by parameterising it with environment-specific variables, and then use this one kubernetes module across all 4 environments.
However, I now realise this isn't going to work because the k8s nodes and masters all reference the kops state bucket (in S3), and it seems like I am going to have to clone that bucket and rewrite the files contained therein. This seems like a rather fragile way to manage the kubernetes environment: if I recreate the terraform environment, the related kops state bucket is going to be inconsistent with the AWS environment.
It seems to me that kops-generated terraform may be useful for managing a single instance of an environment, but it isn't easily applied to multiple environments: you effectively need one kops-generated terraform per environment, and there is no way to reuse the terraform to establish a new environment. For that, you must fall back from a declarative approach and resort to an imperative kops create cluster command.
Am I missing a good way to manage the definition of multiple similar kubernetes environments with a single terraform module?
I'm not sure how you reached either conclusion: needing Terraform (which will cause more trouble than it will ever solve; that's definitely a tool to get rid of ASAP), or having to duplicate S3 buckets.
I'm pretty sure you'd be interested in kops's cluster template feature.
You won't need to generate, hack, and launch (and debug...) Terraform, and kops templates are just as easy, if not significantly easier (and more specific), to maintain than Terraform.
When kops releases new versions, you won't have to re-generate and re-hack your Terraform scripts either!
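Roughly, the workflow is to render a cluster spec from a shared template plus per-environment values and feed the result to kops; something like this (all file and cluster names here are hypothetical):

    kops toolbox template \
      --template cluster.tmpl.yaml \
      --values values-dev.yaml \
      > cluster-dev.yaml

    kops replace -f cluster-dev.yaml
    kops update cluster --name dev.example.com --yes

One template plus one small values file per environment replaces the per-environment generated Terraform.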
Hope this helps!
Is there a way to create the Terraform state file from the existing infrastructure? For example, an AWS account comes with some services already in place (e.g., the default VPC).
But Terraform seems to know only about the resources it creates. So:
1. What is the best way to migrate an existing AWS infrastructure to Terraform code?
2. Is it possible to add a resource manually and modify the state file by hand (any bad effects?)
Update
Terraform 0.7.0 supports importing single resources.
For relatively small things I've had some success in manually mangling a state file to add stubbed resources that I then Terraformed over the top of (particularly with pre-existing VPCs and subnets, then using Terraform to apply security groups etc.).
For anything more complicated there is an unofficial tool called terraforming, which I've heard is pretty good at generating Terraform state files as well as merging with pre-existing state files. I've not given it a go, but it might be worth looking into.
Update
Since 0.7, Terraform has first-class support for importing pre-existing resources using the import command line tool.
As of 0.7.4 this will import the pre-existing resource into the state file but will not generate any configuration for the resource. Of course, if you then attempt a plan (or an apply), Terraform will show that it wants to destroy this orphaned resource. Before running the apply, you would need to create the configuration to match the resource; any future plans (and applies) should then show no changes to the resource and happily keep the imported resource.
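For example, to adopt a pre-existing VPC (the resource name, VPC ID, and CIDR are placeholders):

    terraform import aws_vpc.default vpc-abcd1234

Then add matching configuration so the next plan shows no changes:

    resource "aws_vpc" "default" {
      cidr_block = "172.31.0.0/16" # must match the real VPC's CIDR
    }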
Use Terraforming (https://github.com/dtan4/terraforming). To date it can generate most of the *.tfstate and *.tf files, except for VPC peering.