How to block Terraform from deleting an imported resource?

I'm brand new to Terraform, so I'm sure I'm missing something, but the answers I'm finding don't seem to address the question I have.
I have an existing AWS VPC/security group that our EC2 instances need to be created under. To create an EC2 instance, Terraform requires that if I don't have a default VPC, I must import my own. But once I import the VPC and apply my plan, when I wish to destroy the instance, Terraform tries to destroy my VPC as well. How do I structure my resources so that when I run "terraform apply" I can create an EC2 instance with my imported VPC, but when I run "terraform destroy" I only destroy my EC2 instance?
In case anyone wants to mention, I understand that:
lifecycle {
  prevent_destroy = true
}
is not what I'm looking for.
Here is my current practice code.
resource "aws_vpc" "my_vpc" {
cidr_block = "xx.xx.xx.xx/24"
}
provider "aws" {
region = "us-west-2"
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "web" {
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t3.nano"
vpc_security_group_ids = ["sg-0e27d851dxxxxxxxxxx"]
subnet_id = "subnet-0755c2exxxxxxxx"
tags = {
Name = "HelloWorld"
}
}

Terraform should not require you to deploy or import a VPC in order to deploy an EC2 instance into it. You should be able to reference the VPC, subnets, and security groups by ID so Terraform is aware of your existing network infrastructure, just like you've already done for the SG and subnet. All the "aws_instance" needs is an existing subnet ID in the existing VPC, which you already supply. Why do you say deploying or importing a VPC is required by Terraform? What error or issue do you get when deploying without the VPC resource and just using the existing one?
You can protect the VPC through AWS if you really want to, but I don't think you really want to import the VPC into your Terraform state and let Terraform manage it here. It sounds like you want the VPC to serve other resources, perhaps applications deployed manually or through other Terraform stacks, and to live independently of any one application deployment.

To answer the original question, you can use a data source and match your VPC by ID or tag name:
data "aws_vpc" "main" {
tags = {
Name = "main_vpc"
}
}
Or
data "aws_vpc" "main" {
id = "vpc-nnnnnnnn"
}
Then refer to it with data.aws_vpc.main (for example, data.aws_vpc.main.id).
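For example, here is a minimal sketch that reuses the data source above to look up a pre-existing security group and launch the instance into the existing network. Only aws_instance.web is managed by Terraform, so terraform destroy removes just the instance and leaves the VPC alone (the security group name "web-sg" is a hypothetical placeholder, and the subnet ID is the elided value from the question):
data "aws_security_group" "web" {
  vpc_id = data.aws_vpc.main.id # the existing VPC looked up above
  name   = "web-sg"             # hypothetical name of the existing security group
}

resource "aws_instance" "web" {
  ami                    = "${data.aws_ami.ubuntu.id}"
  instance_type          = "t3.nano"
  vpc_security_group_ids = [data.aws_security_group.web.id]
  subnet_id              = "subnet-0755c2exxxxxxxx"

  tags = {
    Name = "HelloWorld"
  }
}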
Also, if you have already imported your VPC but would like to remove it from your state without destroying it, you can do that with the terraform state command (for example, terraform state rm aws_vpc.my_vpc): https://www.terraform.io/docs/commands/state/index.html

Related

Assign pre-existing static (elastic) IP to EC2 instance

Assuming I have an existing Elastic IP on my AWS account.
For reasons beyond the scope of this question, this EIP is not (and cannot be) managed via Terraform.
I now want to assign this EIP (say 11.22.33.44) to an EC2 instance I create via TF.
The traditional approach would of course be to create both the EIP and the EC2 instance via TF:
resource "aws_eip" "my_instance_eip" {
instance = "my_instance.id"
vpc = true
}
resource "aws_eip_association" "my_eip_association" {
instance_id = "my_instance.id"
allocation_id = "aws_eip.my_instance_eip.id"
}
Is there a way, however, to let the EC2 instance know via TF that it should be assigned the EIP 11.22.33.44, which lives outside of the TF lifecycle?
You can use the aws_eip data source to get info about your existing EIP and then use that in your aws_eip_association:
data "aws_eip" "my_instance_eip" {
public_ip = "11.22.33.44"
}
resource "aws_eip_association" "my_eip_association" {
instance_id = aws_instance.my_instance.id
allocation_id = data.aws_eip.my_instance_eip.id
}
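Here aws_instance.my_instance is assumed to be an instance you already manage elsewhere in the same configuration; a minimal hypothetical stand-in, just to make the reference resolvable, could look like:
resource "aws_instance" "my_instance" {
  ami           = "ami-0d1cd67c26f5fca19" # hypothetical AMI ID
  instance_type = "t3.nano"               # hypothetical instance type
}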

Does Terraform allow overriding resources from modules?

I have a Terraform module where I wanted to refactor the EFS so it is managed in the module rather than in the Terraform configuration I presently have. Presently my configuration includes two VPCs sharing an EFS via a VPC peering connection.
Eventually I want to be rid of the old VPC, but the EFS is still held by it. EFS does not allow creating an aws_efs_mount_target on a different VPC.
resource "aws_efs_mount_target" "main" {
for_each = toset(module.docker-swarm.subnets)
file_system_id = aws_efs_file_system.main.id
subnet_id = each.key
}
So I was wondering, is it possible to set something along the lines of
disable {
  module.common.aws_efs_mount_target.main
}
or
module "common" {
exclude = [ "aws_efs_mount_target.main " ]
}
This is how I solved it. Basically, I created a new variable and used it as a condition to produce an empty set.
resource "aws_efs_mount_target" "main" {
for_each = var.skip_efs_mount_target ? [] : toset(module.docker-swarm.subnets)
file_system_id = aws_efs_file_system.main.id
subnet_id = each.key
}
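For completeness, the conditional above assumes a boolean variable declared in the module; a minimal sketch of that declaration (description and default are illustrative) would be:
variable "skip_efs_mount_target" {
  description = "When true, skip creating the EFS mount targets in this module." # illustrative
  type        = bool
  default     = false
}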

How to get the default vpc id with terraform

I am trying to get the vpc_id of the default VPC in my AWS account using Terraform.
I tried the following, but it gives an error:
Error: Invalid data source
This is my configuration:
data "aws_default_vpc" "default" {
}
# vpc
resource "aws_vpc" "kubernetes-vpc" {
cidr_block = "${var.vpc_cidr_block}"
enable_dns_hostnames = true
tags = {
Name = "kubernetes-vpc"
}
}
aws_default_vpc is indeed not a valid data source. But the aws_vpc data source does have a boolean default argument you can use to select the default VPC:
data "aws_vpc" "default" {
default = true
}
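You can then use data.aws_vpc.default.id wherever the VPC ID is needed. As a quick check, a simple (illustrative) output of the looked-up value:
output "default_vpc_id" {
  value = data.aws_vpc.default.id
}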
For completeness, I'll add that an aws_default_vpc resource also exists. It manages the default VPC and implements the resource life-cycle without actually creating the VPC*, but it will make changes to the resource, such as changing its tags (and that includes its Name).
* Unless you forcefully destroy the default VPC
From the docs:
This is an advanced resource and has special caveats to be aware of when using it. Please read this document in its entirety before using this resource.
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/default_vpc
This
resource "aws_default_vpc" "default" {
}
will do.
I think this is convenient for Terraform projects managing a whole AWS account, but I would advise against using it whenever multiple Terraform projects are deployed in a single organization account. You are better off staying with #blokje5's answer in that case.

How to launch an EC2 instance without key pair via Terraform?

I would like to launch an EC2 instance without a key pair using my Terraform configuration. I could not find any info on the internet indicating how to specify "no key pair" in Terraform. Has anyone configured Terraform to be used this way?
Here's a Terraform script that creates an EC2 instance of type t2.micro without a key pair and outputs its IP address.
terraform.tf:
provider "aws" {
profile = "default"
region = "us-west-2"
}
variable "instance_type" {
default = "t2.micro"
}
resource "aws_instance" "ec2_instance" {
ami = "ami-0d1cd67c26f5fca19"
instance_type = "var.instance_type"
}
output "ip" {
value = "aws_instance.ec2_instance.public_ip"
}
Put it in a directory and run it with terraform apply.
You can use terraform plan to preview the changes first.
Note: Don't forget to add your access_key and secret_key to your local aws configuration (aws configure) in order for it to work. You can also use aws-vault to avoid mistakenly exposing your credentials.

Is AWS ECS with Terraform broken?

I am trying to spin up an ECS cluster with Terraform, but cannot make the EC2 instances register as container instances in the cluster.
I first tried the verified module from Terraform, but it seems outdated (ecs-instance-profile has the wrong path).
Then I tried another module from anrim, but still no container instances. Here is the script I used:
provider "aws" {
region = "us-east-1"
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "2.21.0"
name = "ecs-alb-single-svc"
cidr = "10.10.10.0/24"
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
private_subnets = ["10.10.10.0/27", "10.10.10.32/27", "10.10.10.64/27"]
public_subnets = ["10.10.10.96/27", "10.10.10.128/27", "10.10.10.160/27"]
tags = {
Owner = "user"
Environment = "me"
}
}
module "ecs_cluster" {
source = "../../modules/cluster"
name = "ecs-alb-single-svc"
vpc_id = module.vpc.vpc_id
vpc_subnets = module.vpc.private_subnets
tags = {
Owner = "user"
Environment = "me"
}
}
I then created a new ECS cluster (from the AWS console) on the same VPC and carefully compared the resources. I found some small differences, fixed them, and tried again. But still no container instances!
A fork of the module is available here.
Can you see instances being created in the autoscaling group? If so, I'd suggest SSHing into one of them (either directly or via a bastion host, e.g. see this module) and checking the ECS agent logs. In my experience these problems are usually related to IAM policies, and that's pretty visible in the logs, but YMMV.
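As a rough guide to what the agent needs, the container instances must be launched with an instance profile whose role carries the ECS container-instance permissions. A minimal sketch of that wiring (resource names are illustrative, not taken from the module) looks like this:
resource "aws_iam_role" "ecs_instance" {
  name = "ecs-instance-role" # illustrative name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ecs_instance" {
  role       = aws_iam_role.ecs_instance.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}

resource "aws_iam_instance_profile" "ecs_instance" {
  name = "ecs-instance-profile" # illustrative name
  role = aws_iam_role.ecs_instance.name
}
It is also worth checking that the instances' user data writes ECS_CLUSTER=<your cluster name> to /etc/ecs/ecs.config; otherwise the agent tries to register with the "default" cluster.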