I have two modules with 30+ resources in each. I want to destroy all the resources in one particular region and nothing in the other region. How can I destroy the complete module with Terraform instead of destroying each resource individually?
module "mumbai" {
  source    = "./site-to-site-vpn-setup"
  providers = { aws = aws.mumbai }
}

module "seoul" {
  source    = "./site-to-site-vpn-setup"
  providers = { aws = aws.seoul }
}
You could either remove the relevant module (or comment it out), then run terraform plan/apply. Because Terraform is infrastructure-as-code, when you change anything in the code, it will reflect those changes in your infrastructure.
You can specify the target when running terraform destroy.
For example, if you want to delete only the Mumbai module:
terraform destroy -target module.mumbai
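If you are on Terraform v1.7 or later, a removed block is an alternative worth considering, since HashiCorp recommends -target only for exceptional situations. A sketch, assuming you delete the module "mumbai" block from your configuration and then declare its removal:

```hcl
# Requires Terraform >= 1.7. With destroy = true, the next
# terraform apply plans the destruction of everything that
# was tracked under module.mumbai.
removed {
  from = module.mumbai

  lifecycle {
    destroy = true
  }
}
```

Unlike -target, this approach records the intent in code, so the destruction shows up in a normal plan for review.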
I have a Terraform script which provisions a Kubernetes deployment and a few ClusterRoles and ClusterRoleBindings via Helm.
But now I need to edit one of the provisioned ClusterRoles via Terraform and add another block of permissions. Is there a way to do this, or would I need to recreate a similar resource from scratch?
This is my block to create the respective deployment for efs-csi-driver.
resource "helm_release" "aws-efs-csi-driver" {
  name       = "aws-efs-csi-driver"
  chart      = "aws-efs-csi-driver"
  repository = "https://kubernetes-sigs.github.io/aws-efs-csi-driver/"
  version    = "2.x.x"
  namespace  = "kube-system"
  timeout    = 3600

  values = [
    file("${path.module}/config/values.yaml"),
  ]
}
Somehow I need to modify https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/45c5e752d2256558170100138de835b82d54b8af/deploy/kubernetes/base/controller-serviceaccount.yaml#L11 by adding a couple more permission blocks. Is there a way that I can patch it (or completely overlay it)?
I'm creating three EKS clusters using this module. Everything works fine, just that when I try to add the configmap to the clusters using map_roles, I face an issue.
My configuration, which I have within all three clusters, looks like this:
map_roles = [
  {
    rolearn  = "arn:aws:iam::${var.account_no}:role/argo-${var.environment}-${var.aws_region}"
    username = "system:node:{{EC2PrivateDNSName}}"
    groups   = ["system:bootstrappers", "system:nodes"]
  },
  {
    rolearn  = "arn:aws:sts::${var.account_no}:assumed-role/${var.assumed_role_1}"
    username = "admin"
    groups   = ["system:masters", "system:nodes", "system:bootstrappers"]
  },
  {
    rolearn  = "arn:aws:sts::${var.account_no}:assumed-role/${var.assumed_role_2}"
    username = "admin"
    groups   = ["system:masters", "system:nodes", "system:bootstrappers"]
  }
]
The problem occurs while applying the template. It says:
configmaps "aws-auth" already exists
When I studied the error further, I realised that when applying the template, the module creates three configmap resources of the same name, like these:
resource "kubernetes_config_map" "aws_auth" {
# ...
}
resource "kubernetes_config_map" "aws_auth" {
# ...
}
resource "kubernetes_config_map" "aws_auth" {
# ...
}
This obviously is a problem. How do I fix this issue?
The aws-auth ConfigMap is created by EKS when you create a managed node group. It holds the configuration required for nodes to register with the control plane. If you want to control the contents of the ConfigMap with Terraform, you have two options:
Either make sure you create the ConfigMap before the managed node group resource, or import the existing ConfigMap into the Terraform state manually.
I've now tested my solution, which expands on @pst's "import aws-auth" answer. It looks like this: break the terraform apply operation in your main EKS project into three steps, which completely isolate the EKS resources from the Kubernetes resources, so that you can manage the aws-auth ConfigMap from Terraform workflows.
terraform apply -target=module.eks
This creates just the eks cluster and anything else the module creates.
The eks module design now guarantees this will NOT include anything from the kubernetes provider.
terraform import kubernetes_config_map.aws-auth kube-system/aws-auth
This brings the aws-auth map, generated by the creation of the eks cluster in the previous step, into the remote terraform state.
This is only necessary when the map isn't already in the state, so we first check, with something like:
if terraform state show kubernetes_config_map.aws-auth ; then
  echo "aws-auth ConfigMap already exists in Remote Terraform State."
else
  echo "aws-auth ConfigMap does not exist in Remote Terraform State. Importing..."
  terraform import -var-file="${TFVARS_FILE}" kubernetes_config_map.aws-auth kube-system/aws-auth
fi
terraform apply
This is a "normal" apply which acts exactly like before, but will have nothing to do for module.eks. Most importantly, this call will not encounter the "aws-auth ConfigMap already exists" error since terraform is aware of its existence, and instead the proposed plan will update aws-auth in place.
NB:
Using a Makefile to wrap your terraform workflows makes this simple to implement.
Using a monolithic root module with -target is a little ugly, and as your use of the kubernetes provider grows, it makes sense to break out all the kubernetes terraform objects into a separate project. But the above gets the job done.
The separation of eks/k8s resources is best practice anyway, and is advised to prevent known race conditions between aws and k8s providers. Follow the trail from here for details.
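For illustration, a minimal Makefile wrapping the three steps above might look like this (target names and the tfvars variable are just placeholders, not from the original answer):

```make
# Hypothetical Makefile wrapping the three-step workflow.
.PHONY: cluster import-aws-auth apply

# Step 1: create only the EKS cluster and related AWS resources.
cluster:
	terraform apply -target=module.eks

# Step 2: import aws-auth into state, but only if it is not there yet.
import-aws-auth:
	terraform state show kubernetes_config_map.aws-auth >/dev/null 2>&1 || \
		terraform import kubernetes_config_map.aws-auth kube-system/aws-auth

# Step 3: a normal apply for everything else.
apply: cluster import-aws-auth
	terraform apply
```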
I know it's late, but I'm sharing a solution that I found.
We should use kubernetes_config_map_v1_data instead of kubernetes_config_map_v1. This resource allows Terraform to manage data within a pre-existing ConfigMap.
For example:
resource "kubernetes_config_map_v1_data" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    "mapRoles" = data.template_file.aws_auth_template.rendered
  }

  force = true
}

data "template_file" "aws_auth_template" {
  template = "${file("${path.module}/aws-auth-template.yml")}"

  vars = {
    cluster_admin_arn = "${local.accounts["${var.env}"].cluster_admin_arn}"
  }
}
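As a side note, on Terraform 0.12+ the template_file data source can be replaced by the built-in templatefile() function, which avoids the extra provider. A sketch of the same ConfigMap data using it (same template file and local as above):

```hcl
resource "kubernetes_config_map_v1_data" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # templatefile() renders the file with the given variables inline,
    # replacing the separate data "template_file" block.
    "mapRoles" = templatefile("${path.module}/aws-auth-template.yml", {
      cluster_admin_arn = local.accounts[var.env].cluster_admin_arn
    })
  }

  force = true
}
```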
Is there a way for Terraform to make changes to an existing S3 bucket without affecting the creation or deletion of the bucket?
For example, I want to use Terraform to enable S3 replication across several AWS accounts. The S3 buckets already exist, and I simply want to enable a replication rule (via a pipeline) without recreating, deleting, or emptying the bucket.
My code looks like this:
data "aws_s3_bucket" "test" {
  bucket = "example_bucket"
}

data "aws_iam_role" "s3_replication" {
  name = "example_role"
}

resource "aws_s3_bucket" "source" {
  bucket = data.aws_s3_bucket.test.id

  versioning {
    enabled = true
  }

  replication_configuration {
    role = data.aws_iam_role.s3_replication.arn

    rules {
      id     = "test"
      status = "Enabled"

      destination {
        bucket = "arn:aws:s3:::dest1"
      }
    }

    rules {
      id     = "test2"
      status = "Enabled"

      destination {
        bucket = "arn:aws:s3:::dest2"
      }
    }
  }
}
When I try to do it this way, terraform apply tries to delete the existing bucket and create a new one instead of just updating the configuration. I don't mind trying terraform import, but my concern is that this will destroy the bucket when I run terraform destroy as well. I would like to simply apply and destroy the replication configuration, not the already existing bucket.
I would like to simply apply and destroy the replication configuration, not the already existing bucket.
Sadly, you can't do this. Your bucket must be imported to TF so that it can be managed by it.
I don't mind trying terraform import, but my concern is that this will destroy the bucket when I run terraform destroy as well.
To protect against this, you can use prevent_destroy:
This meta-argument, when set to true, will cause Terraform to reject with an error any plan that would destroy the infrastructure object associated with the resource, as long as the argument remains present in the configuration.
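For example, after importing the bucket you could guard it like this (bucket name taken from the question; the rest of the configuration stays as before):

```hcl
resource "aws_s3_bucket" "source" {
  bucket = "example_bucket"

  lifecycle {
    # Any plan that would destroy this bucket is rejected with an error
    # for as long as this argument remains in the configuration.
    prevent_destroy = true
  }

  # ... versioning and replication_configuration as before ...
}
```

Note that prevent_destroy only protects the bucket while the argument is present; a plain terraform destroy will fail with an error rather than silently skipping the bucket.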
I created an AWS environment using Terraform.
After that, some resources (SES, SNS, Lambda) were created via the console; they were not provisioned by Terraform.
I'm now writing the Terraform code for these resources (SES, SNS, Lambda) that were created by the console.
If I already have these resources running in my account, is it possible to bring them under Terraform management without removing them?
Or how should I proceed in this case?
Welcome to the world of IaC, you're in for a treat. :)
You can import all resources that were created without Terraform (using a CLI or provisioned manually, i.e. resources which are not part of the Terraform state) into your Terraform state. Once these resources are imported, you can start managing their lifecycle with Terraform:
Define the resource in your .tf files
Import existing resources
As an example:
In order to import an existing, non-Terraform-managed Lambda, you first define the resource for it in your .tf files:
main.tf:
resource "aws_lambda_function" "test_lambda" {
  filename      = "lambda_function_payload.zip"
  function_name = "lambda_function_name"
  role          = "${aws_iam_role.iam_for_lambda.arn}"
  handler       = "exports.test"

  # The filebase64sha256() function is available in Terraform 0.11.12 and later.
  # For Terraform 0.11.11 and earlier, use the base64sha256() and file() functions:
  # source_code_hash = "${base64sha256(file("lambda_function_payload.zip"))}"
  source_code_hash = "${filebase64sha256("lambda_function_payload.zip")}"

  runtime = "nodejs12.x"

  environment {
    variables = {
      foo = "bar"
    }
  }
}
Then you can execute terraform import, in order to import the existing lambda:
terraform import aws_lambda_function.test_lambda my_test_lambda_function
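If you are on Terraform v1.5 or later, you can alternatively declare the import in configuration instead of running the CLI command, which makes the import show up in a reviewable plan (same resource address and function name as above):

```hcl
# Requires Terraform >= 1.5. On the next terraform apply, the existing
# function is imported into state instead of being created.
import {
  to = aws_lambda_function.test_lambda
  id = "my_test_lambda_function"
}
```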
I'm setting up some Terraform to manage a lambda and s3 bucket with versioning on the contents of the s3. Creating the first version of the infrastructure is fine. When releasing a second version, terraform replaces the zip file instead of creating a new version.
I've tried adding versioning to the s3 bucket in terraform configuration and moving the api-version to a variable string.
data "archive_file" "lambda_zip" {
  type        = "zip"
  source_file = "main.js"
  output_path = "main.zip"
}

resource "aws_s3_bucket" "lambda_bucket" {
  bucket = "s3-bucket-for-tft-project"

  versioning {
    enabled = true
  }
}

resource "aws_s3_bucket_object" "lambda_zip_file" {
  bucket = "${aws_s3_bucket.lambda_bucket.bucket}"
  key    = "v${var.api-version}-${data.archive_file.lambda_zip.output_path}"
  source = "${data.archive_file.lambda_zip.output_path}"
}

resource "aws_lambda_function" "lambda_function" {
  s3_bucket     = "${aws_s3_bucket.lambda_bucket.bucket}"
  s3_key        = "${aws_s3_bucket_object.lambda_zip_file.key}"
  function_name = "lambda_test_with_s3_version"
  role          = "${aws_iam_role.lambda_exec.arn}"
  handler       = "main.handler"
  runtime       = "nodejs8.10"
}
I would expect the output to be another zip file but with the lambda now pointing at the new version, with the ability to change back to the old version if var.api-version was changed.
Terraform isn't designed for creating this sort of "artifact" object where each new version should be separate from the ones before it.
The data.archive_file data source was added to Terraform in the early days of AWS Lambda when the only way to pass values from Terraform into a Lambda function was to retrieve the intended zip artifact, amend it to include additional files containing those settings, and then write that to Lambda.
Now that AWS Lambda supports environment variables, that pattern is no longer recommended. Instead, deployment artifacts should be created by some separate build process outside of Terraform and recorded somewhere that Terraform can discover them. For example, you could use SSM Parameter Store to record your current desired version and then have Terraform read that to decide which artifact to retrieve:
data "aws_ssm_parameter" "lambda_artifact" {
  name = "lambda_artifact"
}

locals {
  # Let's assume that this SSM parameter contains a JSON
  # string describing which artifact to use, like this:
  # {
  #   "bucket": "s3-bucket-for-tft-project",
  #   "key": "v2.0.0/example.zip"
  # }
  lambda_artifact = jsondecode(data.aws_ssm_parameter.lambda_artifact.value)
}

resource "aws_lambda_function" "lambda_function" {
  s3_bucket     = local.lambda_artifact.bucket
  s3_key        = local.lambda_artifact.key
  function_name = "lambda_test_with_s3_version"
  role          = aws_iam_role.lambda_exec.arn
  handler       = "main.handler"
  runtime       = "nodejs8.10"
}
This build/deploy separation allows for three different actions, whereas doing it all in Terraform only allows for one:
To release a new version, you can run your build process (in a CI system, perhaps) and have it push the resulting artifact to S3 and record it as the latest version in the SSM parameter, and then trigger a Terraform run to deploy it.
To change other aspects of the infrastructure without deploying a new function version, just run Terraform without changing the SSM parameter and Terraform will leave the Lambda function untouched.
If you find that a new release is defective, you can write the location of an older artifact into the SSM parameter and run Terraform to deploy that previous version.
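For example, a build pipeline might publish the new artifact location (or roll back to an older one) with the AWS CLI before triggering Terraform; the parameter name and JSON shape here are just the ones assumed in the sketch above:

```sh
# Point the SSM parameter at the artifact to deploy, then run Terraform.
aws ssm put-parameter \
  --name lambda_artifact \
  --type String \
  --overwrite \
  --value '{"bucket": "s3-bucket-for-tft-project", "key": "v2.0.0/example.zip"}'
```

Rolling back is the same command with an older key, followed by another Terraform run.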
A more complete description of this approach is in the Terraform guide Serverless Applications with AWS Lambda and API Gateway, which uses a Lambda web application as an example but can be applied to many other AWS Lambda use-cases too. Using SSM is just an example; any data that Terraform can retrieve using a data source can be used as an intermediary to decouple the build and deploy steps from one another.
This general idea can apply to all sorts of code build artifacts as well as Lambda zip files. For example: custom AMIs created with HashiCorp Packer, Docker images created using docker build. Separating the build process, the version selection mechanism, and the deployment process gives a degree of workflow flexibility that can support both the happy path and any exceptional paths taken during incidents.