I have an AWS WAF Classic setup that I would like to upgrade to WAFv2 without having to run a Terraform script that builds WAFv2 from scratch. How can I upgrade the current WAF Classic to WAFv2 using Terraform without disturbing the existing Classic configuration?
If we are talking about migrating a cloud resource from one service to another (WAF Classic to WAFv2 in your case), you have to use new Terraform resource blocks for the new service. There is no way around writing or refactoring the Terraform code yourself: you have to provide new Terraform code and create the new resources from it. Terraform import is also not applicable, since WAFv2 is completely different from WAF Classic, and of course the Terraform resources are different as well, e.g. "aws_waf_web_acl" vs. "aws_wafv2_web_acl".
I can say that it is not a big issue; just start from scratch. The major change is that there is no aws_waf_rule resource anymore: the rules are defined inline in the web ACL itself, as sketched below.
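For illustration, a minimal sketch of a WAFv2 web ACL with one inline rule; the names, scope and managed rule group here are placeholders, not taken from the question:

resource "aws_wafv2_web_acl" "example" {
  name  = "example-web-acl" # placeholder
  scope = "REGIONAL"        # "CLOUDFRONT" for CloudFront distributions

  default_action {
    allow {}
  }

  # Rules now live inside the ACL instead of separate aws_waf_rule resources.
  rule {
    name     = "aws-common-rules"
    priority = 1

    override_action {
      none {}
    }

    statement {
      managed_rule_group_statement {
        name        = "AWSManagedRulesCommonRuleSet"
        vendor_name = "AWS"
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "aws-common-rules"
      sampled_requests_enabled   = true
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "example-web-acl"
    sampled_requests_enabled   = true
  }
}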
I'm trying to replicate existing AWS WAF and ACL configuration into Terraform so that going forward, the config of the WAF rules etc can be controlled and monitored via Terraform.
The idea being that further configuration can be added via a Terraform Repo's deployment.
I've looked at the import options but I haven't been able to locate any specific resources to allow the WAF config to be exported. I've mainly come across EC2 examples.
Is there a tool within Terraform, or another tool, that will allow me to pull the current WAF data as Terraform code so that I can begin editing from there? Or do I have to replicate the configuration manually first and then run the "terraform plan" command to check that nothing is due to be changed? (This would confirm that the code matches the current config.)
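For reference, the manual approach I'm describing would look roughly like the sketch below; all values are placeholders that would have to match the existing web ACL, and this assumes the aws_waf_web_acl resource supports import by ID:

# Hand-written resource mirroring the existing web ACL (placeholder values).
resource "aws_waf_web_acl" "existing" {
  name        = "existing-web-acl"
  metric_name = "existingWebACL"

  default_action {
    type = "ALLOW"
  }
}

# Then associate it with the real object and verify nothing would change:
#   terraform import aws_waf_web_acl.existing <web-acl-id>
#   terraform plan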
Thanks in advance
I am setting up a CI/CD environment for a GCP project that involves Cloud Run. While setting up everything via Terraform is pretty straightforward, I cannot figure out how to update the environment when the code changes.
The documentation says:
Make a change to the configuration file.
But that couples the application deployment to the Terraform configuration, which should be responsible only for infrastructure deployment.
Ideally, I use terraform to provision the infrastructure, and another CI step to build and deploy the container.
Is there a best-practice here?
I ended up separating Cloud Run service creation (which is still done in Terraform) and code deployment into two different workflows.
The key component was to make Terraform ignore the actually deployed image, so that when the code deployment workflow is done, Terraform won't complain that the Cloud Run image is different from the one it manages. I achieved this by setting ignore_changes = [template[0].spec[0].containers[0].image] on the google_cloud_run_service resource.
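In Terraform that looks roughly like the sketch below; the service name, location and image are placeholders, not the actual values from my project:

resource "google_cloud_run_service" "app" {
  name     = "app"
  location = "europe-west1"

  template {
    spec {
      containers {
        # Initial image only; later deployments replace it outside Terraform.
        image = "gcr.io/my-project/app:initial"
      }
    }
  }

  lifecycle {
    # Let the CI/CD pipeline manage the deployed image without Terraform
    # trying to revert it on the next apply.
    ignore_changes = [
      template[0].spec[0].containers[0].image,
    ]
  }
}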
We are using CodePipeline to deploy our application onto AWS EC2 nodes.
However, CodePipeline is not supported in all AWS Regions, which causes our Terraform deployment to fail.
I would like to use a user data script on the EC2 nodes in the AWS Regions that lack support for AWS CodePipeline.
Is there any way for me to detect/find out through Terraform whether the CodePipeline service is supported in the targeted region?
AWS lists the endpoints for CodePipeline in this documentation - https://docs.aws.amazon.com/general/latest/gr/codepipeline.html
My logical/hypothetical solution here is below:
1. Run a curl command via local-exec, or use the http data source, to hit the endpoint in the targeted region; the endpoints follow the pattern https://codepipeline.<InsertTargetedRegion>.amazonaws.com
2. Based on the result of step 1, make a logical decision: if the endpoint is reachable, create AWS CodePipeline and the downstream resources; if it is not reachable, create an EC2 launch configuration with the user data script and drop AWS CodePipeline.
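A hypothetical sketch of the endpoint-probing idea from step 1, assuming the hashicorp/http provider v3+ (which exposes status_code); note that if the hostname does not resolve at all in an unsupported region, the data source errors out instead of returning a code, so this is best-effort only:

variable "region" {
  default = "eu-west-1"
}

data "http" "codepipeline_probe" {
  url = "https://codepipeline.${var.region}.amazonaws.com"
}

locals {
  # Any HTTP response at all suggests the endpoint exists in this region.
  # Use this flag to gate the CodePipeline resources, e.g.
  #   count = local.codepipeline_available ? 1 : 0
  codepipeline_available = data.http.codepipeline_probe.status_code < 500
}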
The other solution (which is a little clumsy) I can think of is to maintain a Terraform list of the regions that do not support CodePipeline as a service and make the logical decision based on that.
However, this clumsy solution requires human effort (checking whether a region supports AWS CodePipeline and updating the list) and updating the Terraform configuration every now and then.
I am wondering if there is any other way to know whether the targeted region supports CodePipeline or not.
Thank You.
I think that having a static list of supported regions is simply the easiest and most direct way of knowing where the script can run. Then the logic is quite easy: if the current region is supported, continue; if not, error out and stop. Any other logic will be cumbersome and unnecessary. A minimal sketch is below.
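Something along these lines, where the region list is illustrative rather than an authoritative list of CodePipeline regions:

variable "region" {
  default = "eu-west-1"
}

locals {
  # Maintain this list by hand; it is a made-up subset for the example.
  codepipeline_regions = [
    "us-east-1",
    "us-west-2",
    "eu-west-1",
  ]
  use_codepipeline = contains(local.codepipeline_regions, var.region)
}

# Gate the pipeline (or the user-data fallback) on the flag, e.g.
#   count = local.use_codepipeline ? 1 : 0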
Yes, you can use a static list, but is it a scalable solution? How can you track when a new region is added? I think this link will help you:
https://aws.amazon.com/blogs/aws/new-query-for-aws-regions-endpoints-and-more-using-aws-systems-manager-parameter-store/
With the AWS CLI you can query service availability by region.
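The same public parameters described in that blog post can also be read from Terraform; here is a sketch assuming the aws_ssm_parameters_by_path data source and the /aws/service/global-infrastructure/... parameter hierarchy:

variable "region" {
  default = "eu-west-1"
}

data "aws_ssm_parameters_by_path" "codepipeline_regions" {
  path = "/aws/service/global-infrastructure/services/codepipeline/regions"
}

locals {
  # Each parameter value is a region name where CodePipeline is available.
  codepipeline_available = contains(
    data.aws_ssm_parameters_by_path.codepipeline_regions.values,
    var.region,
  )
}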
I would like to understand the need for tools like Terraform. When we have CloudFormation templates available and one can create/update all AWS services with them, what is the point of using a tool like Terraform?
Please suggest.
CloudFormation (CFN) and Terraform (TF) are both Infrastructure as Code (IaC) development tools.
However, CFN is only for AWS. You can't use it with Azure, GCP or anything else outside of the AWS ecosystem. In contrast, TF is cloud agnostic. You can use it not only across multiple cloud providers, but also to work with non-cloud products, such as Docker, various databases and even Domino's Pizza if you want.
So the main advantage of TF is that you learn it once and can apply it to a number of cloud providers. CFN is only useful in AWS; once you stop using it, you have to learn something new to work with another cloud.
There are also differences in how TF and CFN work. Both have their strengths and weaknesses. For example:
When you deploy using CFN, all resources are available to view in one central location in AWS, along with the template's source code. With TF there is no such place: if you log in to the AWS console, you have no idea what was created by TF, what source code was used, etc.
TF has loops, complex data structures and conditionals, while CFN does not (a short sketch follows this list).
CFN has creation policies and update policies; TF does not.
You can control access to CFN using stack policies and IAM policies. You can't do the same with TF, as it "lives" outside of AWS.
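As a tiny illustration of the loops/conditionals point, where the environments and bucket names are made up for the example:

variable "environments" {
  default = ["dev", "staging", "prod"]
}

resource "aws_s3_bucket" "logs" {
  # One bucket per environment via for_each (a loop over a set).
  for_each = toset(var.environments)
  bucket   = "example-logs-${each.key}"
}

resource "aws_s3_bucket" "audit" {
  # Conditional creation: only exists when "prod" is in the list.
  count  = contains(var.environments, "prod") ? 1 : 0
  bucket = "example-audit-prod"
}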
There are a couple of reasons why you might choose Terraform over CloudFormation:
Vendor Agnostic: There might be a point in the future where you need to migrate your cloud infrastructure. This could be for several reasons (e.g. costs, regulatory compliance, etc.). With Terraform you are still able to use the same tool to deploy the new infrastructure. With smart use of Terraform modules you can even leave large parts of your infrastructure-as-code repository intact.
Support for other tools: This also builds a bit on the previous point, but Terraform can deploy a lot more than just AWS resources. For example, you can use Terraform to orchestrate the deployment of an EC2 machine that is then configured with Ansible. Or you could use Terraform to deploy applications on top of your Kubernetes cluster. While CloudFormation supports custom resources via the creation of custom Lambdas, that is quite a lot of work to maintain.
Wider ecosystem: Due to the Open Source nature of Terraform, there is a huge ecosystem of tools that help you solve all kinds of issues, such as testing the infrastructure as code or building in compliance in a continuous fashion.
Arguably a better language: Personally I think Terraform is way more suited for Infrastructure as Code than CloudFormation. Terraform has a lot more flexibility built into the language (HCL), and its module system allows for a lot more composability than what can be achieved in CloudFormation.
I will start a new Terraform project on AWS. The VPC is already created and I want to know what's the best way to integrate it into my code. Do I have to create it again, and will Terraform detect it and not override it? Or do I have to use a data source for that? Or is there another best way, like Terraform import?
I also want to be able, in the future, to deploy the entire infrastructure in another Region or another Account.
Thanks.
When it comes to integrating with existing objects, you first have to decide between two options: you can either import these objects into Terraform and use Terraform to manage them moving forward, or you can leave them managed by whatever existing system and use them in Terraform by reference.
If you wish to use Terraform to manage these existing objects, you must first write a configuration for the object as if Terraform were going to create it itself:
resource "aws_vpc" "example" {
# fill in here all the same settings that the existing object already has
cidr_block = "10.0.0.0/16"
}
# Can then use that vpc's id in other resources using:
# aws_vpc.example.id
But then rather than running terraform apply immediately, you can first run terraform import to instruct Terraform to associate this resource block with the existing VPC using its id assigned by AWS:
terraform import aws_vpc.example vpc-abcd1234
If you then run terraform plan you should see that no changes are required, because Terraform detected that the configuration matches the existing object. If Terraform does propose some changes, you can either accept them by running terraform apply or continue to update the configuration until it matches the existing object.
Once you have done this, Terraform will consider itself the owner of the VPC and will thus plan to update it or destroy it on future runs if the configuration suggests it should do so. If any other system was previously managing this VPC, it's important to stop it doing so or else this other system is likely to conflict with Terraform.
If you'd prefer to keep whatever existing system is managing the VPC, you can also use the Data Sources feature to look up the existing VPC without putting Terraform in charge of it.
In this case, you might use the aws_vpc data source, which can look up VPCs by various attributes. A common choice is to look up a VPC by its tags, assuming your environment has a predictable tagging scheme that allows you to describe the single VPC you are looking for:
data "aws_vpc" "example" {
tags = {
Name = "example-VPC-name"
}
}
# Can then use that vpc's id in other resources using:
# data.aws_vpc.example.id
In some cases users will introduce additional indirection to find the VPC some other way than by querying the AWS VPC APIs directly. That is a more advanced configuration and the options here are quite broad, but for example if you are using SSM Parameter Store you could place the VPC id into a Parameter Store parameter and retrieve it using the aws_ssm_parameter data source.
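For example (the parameter name here is an assumption about your own naming scheme):

data "aws_ssm_parameter" "vpc_id" {
  name = "/network/prod/vpc-id"
}

# Other resources can then refer to:
#   data.aws_ssm_parameter.vpc_id.value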
If the existing system managing the VPC is CloudFormation, you could also use aws_cloudformation_export or aws_cloudformation_stack to retrieve the information from the CloudFormation API.
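A similar sketch for the CloudFormation case; the export name is hypothetical and must match whatever the owning stack actually exports:

data "aws_cloudformation_export" "vpc_id" {
  name = "prod-vpc-id"
}

# Other resources can then refer to:
#   data.aws_cloudformation_export.vpc_id.value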
If you are happy to manage it via terraform moving forward then you can import existing resources into your terraform state. Here is the usage page for it https://www.terraform.io/docs/import/usage.html
You will have to define a resource block inside of your configuration for the vpc first. You could do something like:
resource "aws_vpc" "existing" {
cidr_block = "172.16.0.0/16"
tags = {
Name = "prod"
}
}
and then on the cli run the command
terraform import aws_vpc.existing <vpc-id>
Make sure you run a terraform plan afterwards, because terraform may try to make changes to it. You kind of have to reverse engineer it a bit, by adding all the necessary configuration to the aws_vpc resource. Once it is aligned, terraform will not attempt to change it. You can then re-use this to deploy to other accounts and regions.
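As a hedged illustration of that re-use, assuming your configuration is wrapped in a module (the "network" module path, region and profile below are assumptions, not something from this answer):

provider "aws" {
  alias   = "secondary"
  region  = "eu-west-1"
  profile = "other-account"
}

module "network_secondary" {
  source = "./modules/network"

  providers = {
    aws = aws.secondary
  }
}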
As you suggested, you could use a data source for the VPC instead. This can be useful if you want to manage it outside of Terraform, and it avoids the potential for the VPC to be destroyed when Terraform is run by an inexperienced user.
Some customers I've worked with prefer to manage resources like vpcs/subnets (and other core infrastructure) in separate terraform scripts that only senior engineers have access to. This can avoid the disaster scenarios where people destroy the underlying infrastructure by accident.
I personally prefer managing all my terraform code in a git repository that is then deployed using a CI/CD tool, even if it's just myself working on it. Some people may not see the value in spending the time creating the pipeline though and may stick with running it locally.
This post has some great recommendations on running terraform in an automated environment https://learn.hashicorp.com/terraform/development/running-terraform-in-automation