I am facing an issue in Terraform where I want to read the details of an existing resource (r1) that was created via the AWS web console.
I am using those details in the creation of a new resource (r2) via Terraform.
The problem is that Terraform tries to destroy and recreate r1, which is not desired because the deletion will fail. How can I keep Terraform from destroying and recreating r1 when I run terraform apply?
Here is how I am doing it :
main.tf
resource "aws_lb" "r1"{
}
...
resource "aws_api_gateway_integration" "r2" {
type = "HTTP"
uri = "${aws_lb.r1.dns_name}}/o/v1/multi/get/m/content"
}
First I import the resource:
terraform import aws_lb.r1 {my_arn}
Next I run terraform apply:
terraform apply
Error:
aws_lb.r1: Error deleting LB: ResourceInUse: Load balancer 'my_arn' cannot be deleted because it is currently associated with another service
The import command is meant for bringing existing resources under Terraform's control.
If your only intention is to read information about existing resources (outside of your Terraform control), data sources are designed specifically for this need:
data "aws_lb" "r1" {
name = "lb_foo"
arn = "some_specific_arn" #you can use any selector you wish to query the correct LB
}
resource "aws_api_gateway_integration" "r2" {
type = "HTTP"
uri = "${data.aws_lb.r1.dns_name}/o/v1/multi/get/m/content"
}
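Since you have already imported the load balancer, you will also want to remove it from the state before switching to the data source, otherwise Terraform will keep trying to manage (and eventually delete) it. Something like this should do:
terraform state rm aws_lb.r1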
You can add a lifecycle configuration block in the resource "aws_lb" "r1" (see: https://www.terraform.io/docs/configuration/resources.html#lifecycle) to tell Terraform to ignore changes in the resource.
I guess something like this should work:
resource "aws_lb" "r1"{
lifecycle {
ignore_changes = ["*"]
}
}
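Note that the ["*"] wildcard is the Terraform 0.11 syntax; on 0.12 and later the equivalent is the all keyword, roughly like this:
resource "aws_lb" "r1" {
  lifecycle {
    ignore_changes = all
  }
}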
I need to create several IAM policies from JSON files.
So I have a file called iam_policies.tf with many blocks like this:
resource "aws_iam_policy" "name" {
name = "policy-name"
description = "Policy desc xxx"
path = "/"
policy = file("${path.module}/_/iam_policies/policy.json")
}
In a module I would like to use these policies as the value of a variable, but when I try to attach them...
resource "aws_iam_role_policy_attachment" "me" {
for_each = toset(var.policies)
role = aws_iam_role.me.name
policy_arn = each.value
}
I get the error: The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the for_each depends on.
This is the call to the module that creates the attachments and other resources, passing it the policy ARNs:
module "admin" {
source = "./repo/module_name"
policies = [
aws_iam_policy.common.arn,
aws_iam_policy.ses_sending.arn,
aws_iam_policy.athena_readonly.arn,
aws_iam_policy.s3_deploy.arn,
]
...
}
I've tried depends_on but it doesn't work.
I'm using Terraform Cloud, so I can't use apply -target.
What am I doing wrong, and how can I fix it?
Thank you
If you can't use -target, you have to separate your configuration into two deployments: first deploy the policies, then their ARNs become inputs of the main deployment.
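A minimal sketch of the consuming side, assuming the policies live in their own Terraform Cloud workspace called iam-policies in an organization called my-org, and that the policy workspace exposes the ARNs in an output named policy_arns (all three names are placeholders):
data "terraform_remote_state" "policies" {
  backend = "remote"

  config = {
    organization = "my-org"
    workspaces = {
      name = "iam-policies"
    }
  }
}

module "admin" {
  source   = "./repo/module_name"
  policies = data.terraform_remote_state.policies.outputs.policy_arns
}
Because the ARNs are read from an already-applied state, the for_each inside the module can be evaluated at plan time.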
I am trying to get the vpc_id of the default VPC in my AWS account using Terraform.
This is what I tried, but it gives an error:
Error: Invalid data source
Here is my configuration:
data "aws_default_vpc" "default" {
}
# vpc
resource "aws_vpc" "kubernetes-vpc" {
cidr_block = "${var.vpc_cidr_block}"
enable_dns_hostnames = true
tags = {
Name = "kubernetes-vpc"
}
}
aws_default_vpc is indeed not a valid data source. But the aws_vpc data source does have a boolean default argument you can use to select the default VPC:
data "aws_vpc" "default" {
default = true
}
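You can then read the ID you were after from that data source, for example through an output (the output name is just an illustration):
output "default_vpc_id" {
  value = data.aws_vpc.default.id
}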
For completeness, I'll add that an aws_default_vpc resource also exists. It manages the default VPC and implements the resource lifecycle without actually creating the VPC*, but it will make changes to the resource, such as changing tags (including its Name).
* Unless you forcefully destroy the default VPC
From the docs:
This is an advanced resource and has special caveats to be aware of when using it. Please read this document in its entirety before using this resource.
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/default_vpc
This
resource "aws_default_vpc" "default" {
}
will do.
I think this is convenient for Terraform projects that manage a whole AWS account, but I would advise against it whenever multiple Terraform projects are deployed in a single organization account. In that case you are better off staying with #blokje5's answer.
I'm brand new to Terraform, so I'm sure I'm missing something, but the answers I'm finding don't seem to address the same question I have.
I have an existing AWS VPC/security group that our EC2 instances need to be created under. To create an EC2 instance, Terraform seems to require that, since I don't have a default VPC, I must import my own. But once I import it and apply my plan, terraform destroy tries to destroy my VPC as well. How do I structure my resources so that terraform apply creates an EC2 instance in my imported VPC, but terraform destroy only destroys the EC2 instance?
In case anyone wants to mention, I understand that:
lifecycle {
  prevent_destroy = true
}
is not what I'm looking for.
Here is my current practice code.
resource "aws_vpc" "my_vpc" {
cidr_block = "xx.xx.xx.xx/24"
}
provider "aws" {
region = "us-west-2"
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "web" {
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t3.nano"
vpc_security_group_ids = ["sg-0e27d851dxxxxxxxxxx"]
subnet_id = "subnet-0755c2exxxxxxxx"
tags = {
Name = "HelloWorld"
}
}
Terraform should not require you to deploy or import a VPC in order to deploy an EC2 instance into it. You should be able to reference the VPC, subnets and security groups by ID, so Terraform is aware of your existing network infrastructure, just as you've already done for the security group and subnet. All you should need to deploy the aws_instance is an existing subnet ID in the existing VPC, which you already have. Why do you say deploying or importing a VPC is required by Terraform? What error or issue do you get when you deploy without the VPC resource and just use the existing one?
You can protect the VPC through AWS if you really want to, but I don't think you want to import the VPC into your Terraform state and let Terraform manage it here. It sounds like you want the VPC to serve other resources, perhaps applications deployed manually or through other Terraform stacks, and to live independently of any one application deployment.
To answer the original question, you can use a data source and match your VPC by ID or tag name:
data "aws_vpc" "main" {
tags = {
Name = "main_vpc"
}
}
Or
data "aws_vpc" "main" {
id = "vpc-nnnnnnnn"
}
Then refer to it with data.aws_vpc.main.
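For example, a sketch of looking up an existing security group inside that VPC and using it for the instance (the group name is just a placeholder for whatever your SG is called):
data "aws_security_group" "web" {
  vpc_id = data.aws_vpc.main.id
  name   = "my-existing-sg" # placeholder name
}

resource "aws_instance" "web" {
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = "t3.nano"
  vpc_security_group_ids = [data.aws_security_group.web.id]
  subnet_id              = "subnet-0755c2exxxxxxxx"
}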
Also, if you have already imported your VPC but would like to remove it from your state without destroying it, you can do that with the terraform state rm command: https://www.terraform.io/docs/commands/state/index.html
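For the configuration in the question that would be something along these lines (the address must match your actual resource):
terraform state rm aws_vpc.my_vpc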
I have created a setup with a main and a disaster recovery website architecture in AWS using Terraform.
The main website is in region1 and the disaster recovery site is in region2. The scripts are organized as separate plans in separate directories.
For region1, I created one directory that contains only the main website's Terraform script, which launches the main website infrastructure.
For region2, I created another directory that contains only the disaster recovery website's Terraform script, which launches the disaster recovery infrastructure.
In my main website script, I need some values from the disaster recovery website, such as the VPC peering connection ID, DMS endpoint ARNs, etc.
How can I reference these values from the disaster recovery directory in the main website directory?
One option is to use the terraform_remote_state data source to fetch outputs from the other state file like this:
vpc/main.tf
resource "aws_vpc" "foo" {
cidr_block = "10.0.0.0/16"
}
output "vpc_id" {
value = "${aws_vpc.foo.id}"
}
route/main.tf
data "terraform_remote_state" "vpc" {
backend = "s3"
config {
bucket = "mybucket"
key = "path/to/my/key"
region = "us-east-1"
}
}
resource "aws_route_table" "rt" {
vpc_id = "${data.terraform_remote_state.vpc.vpc_id}"
}
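On Terraform 0.12 and later the syntax differs slightly: the config block takes an equals sign (config = { ... }) and remote state outputs are read through the outputs attribute, so the reference becomes:
vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id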
However, it's nearly always better to just use the native data sources of the provider as long as they exist for the resource you need.
So in your case you will need to use data sources such as the aws_vpc_peering_connection data source to establish cross-VPC routing, with something like this:
data "aws_vpc_peering_connection" "pc" {
vpc_id = "${data.aws_vpc.foo.id}"
peer_cidr_block = "10.0.0.0/16"
}
resource "aws_route_table" "rt" {
vpc_id = "${aws_vpc.foo.id}"
}
resource "aws_route" "r" {
route_table_id = "${aws_route_table.rt.id}"
destination_cidr_block = "${data.aws_vpc_peering_connection.pc.peer_cidr_block}"
vpc_peering_connection_id = "${data.aws_vpc_peering_connection.pc.id}"
}
You'll need to do similar things for any other IDs or things you need to reference in your DR region.
It's worth noting that there aren't any data sources for the DMS resources, so you would either need to use the terraform_remote_state data source to fetch any IDs (such as the source and target endpoint ARNs needed to set up the aws_dms_replication_task), or you could structure things so that all of the DMS work happens in the DR region, in which case you only need to refer to the other region's VPC ID, database names and potentially KMS key IDs, which can all be done via data sources.
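If you go the remote state route for DMS, a minimal sketch would be to export the endpoint ARNs from the DR configuration (the resource names source and target are placeholders for whatever your endpoints are called):
output "dms_source_endpoint_arn" {
  value = "${aws_dms_endpoint.source.endpoint_arn}"
}

output "dms_target_endpoint_arn" {
  value = "${aws_dms_endpoint.target.endpoint_arn}"
}
The main region can then read these via a terraform_remote_state data source exactly as in the vpc_id example above.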
I'm trying to import an aws_iam_policy that gets added automatically by automation I don't own. The import seems to work, but once I run terraform plan I get the following error:
* aws_iam_policy.mypolicy1: "policy": required field is not set
I'm running the terraform import as follows:
terraform import aws_iam_policy.mypolicy1 <myarn>
Here is my relevant Terraform config:
resource "aws_iam_policy" "mypolicy1" {
}
resource "aws_iam_role_policy_attachment" "mypolicy1_attachment`" {
role = "${aws_iam_role.myrole1.name}"
policy_arn = "${aws_iam_policy.mypolicy1.arn}"
}
resource "aws_iam_role" "myrole1" {
name = "myrole1"
assume_role_policy = "${file("../policies/ecs-role.json")}"
}
I double-checked that the terraform.tfstate includes the policy I'm trying to import. Is there something else I'm missing here?
You still need to provide the required fields in the Terraform configuration for the plan to work.
If you remove the aws_iam_policy resource from your configuration and run a plan after importing the policy you should see that Terraform wants to destroy the policy because it is in the state file but not in the configuration.
Simply set up your aws_iam_policy resource to match the imported policy, and then a plan should show no changes.
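For example, something along these lines, where the name and the JSON file path are assumptions you would replace with the imported policy's actual name and document:
resource "aws_iam_policy" "mypolicy1" {
  name   = "mypolicy1"                        # must match the imported policy's name
  policy = file("../policies/mypolicy1.json") # placeholder path to a copy of the policy document
}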
I finally found a relatively elegant and universal workaround for Amazon's poor implementation of the IAM policy import capability. The solution does NOT require you to reverse-engineer Amazon's, or anybody else's, implementation of the aws_iam_policy resource that you want to import.
There are two steps.
Create an aws_iam_policy resource definition that has a lifecycle block with an ignore_changes list. There are three fields in the aws_iam_policy resource that will trigger a replacement: policy, description and path. Add these three fields to the ignore_changes list.
Import the external IAM policy, and attach it to the resource definition that you created in your resource file.
Resource file (ex: static-resources.tf)
resource "aws_iam_policy" "MyLambdaVPCAccessExecutionRole" {
lifecycle {
prevent_destroy = true
ignore_changes = [policy, path, description]
}
policy = jsonencode({})
}
Import statement: using the ARN of the IAM policy that you want to import, import the policy and attach it to your resource definition.
terraform import aws_iam_policy.MyLambdaVPCAccessExecutionRole arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
The magic is in the fields that you add to the ignore_changes list, plus the placeholder for the required "policy" argument. Since policy is a required field, Terraform won't let you proceed without it, even though it is one of the fields you told Terraform to ignore changes to.
Note: if you use modules, you will need to add "module.<module_name>." to the front of your resource reference. For example:
terraform import module.static.aws_iam_policy.MyLambdaVPCAccessExecutionRole arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole