I have a Terraform module, and I want to refactor the EFS so it is managed in the module rather than in the Terraform configuration I presently have. Currently my Terraform includes two VPCs sharing an EFS over a VPC peering connection.
Eventually I want to get rid of the old VPC, but the EFS is still held by the old VPC, and EFS does not allow creating an aws_efs_mount_target in a different VPC.
resource "aws_efs_mount_target" "main" {
for_each = toset(module.docker-swarm.subnets)
file_system_id = aws_efs_file_system.main.id
subnet_id = each.key
}
So I was wondering, is it possible to set something along the lines of
disable {
  module.common.aws_efs_mount_target.main
}
or
module "common" {
exclude = [ "aws_efs_mount_target.main " ]
}
This is how I solved it: create a new variable and use it as a condition to produce an empty set for for_each.
resource "aws_efs_mount_target" "main" {
for_each = var.skip_efs_mount_target ? [] : toset(module.docker-swarm.subnets)
file_system_id = aws_efs_file_system.main.id
subnet_id = each.key
}
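For reference, here is a minimal sketch of the variable declaration and the call site (the module source path is hypothetical; only the variable name comes from the code above):
variable "skip_efs_mount_target" {
  description = "When true, do not create EFS mount targets in this module."
  type        = bool
  default     = false
}

module "common" {
  source                = "./modules/common" # hypothetical path to the module
  skip_efs_mount_target = true               # old VPC: skip the mount targets
}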
I want to create an EKS cluster using Terraform, build custom Docker images, and then perform Kubernetes deployments on the created cluster, also via Terraform. I want to perform all of these tasks with a single terraform apply. But I see that the Kubernetes provider needs the cluster details at initialization. Is there a way I can achieve both cluster creation and deployment with a single terraform apply, so that once the cluster is created, the cluster details are passed to the Kubernetes provider and the pods are then deployed?
Please let me know how I can achieve this.
I am doing this without issues. Below is the pseudo code; you just need to be careful with how you use the depends_on attribute on resources, and try to encapsulate as much as possible.
Put the Kubernetes provider in a separate file, e.g. kubernetes.tf:
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
  }
}
Assuming you've already got the networking set up, here I am relying on implicit dependencies rather than explicit ones:
module "eks" {
source = "./modules/ekscluster"
clustername = module.clustername.cluster_name
eks_version = var.eks_version
private_subnets = module.networking.private_subnets_id
vpc_id = module.networking.vpc_id
environment = var.environment
instance_types = var.instance_types
}
Creating k8s resources using the depends_on attribute.
resource "kubernetes_namespace_v1" "app" {
metadata {
annotations = {
name = var.org_name
}
name = var.org_name
}
depends_on = [module.eks]
}
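For completeness, a deployment can then target that namespace in the same way. This is only a rough sketch with hypothetical names (hello-app, the nginx image), not taken from the setup above:
resource "kubernetes_deployment_v1" "app" {
  metadata {
    name      = "hello-app" # hypothetical application name
    namespace = kubernetes_namespace_v1.app.metadata[0].name
  }

  spec {
    replicas = 2

    selector {
      match_labels = {
        app = "hello-app"
      }
    }

    template {
      metadata {
        labels = {
          app = "hello-app"
        }
      }

      spec {
        container {
          name  = "hello-app"
          image = "nginx:1.25" # placeholder image
        }
      }
    }
  }

  depends_on = [module.eks]
}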
I am trying to get the vpc_id of the default VPC in my AWS account using Terraform.
This is what I tried, but it gives an error:
Error: Invalid data source
data "aws_default_vpc" "default" {
}
# vpc
resource "aws_vpc" "kubernetes-vpc" {
  cidr_block           = var.vpc_cidr_block
  enable_dns_hostnames = true

  tags = {
    Name = "kubernetes-vpc"
  }
}
The aws_default_vpc is indeed not a valid data source, but the aws_vpc data source does have a boolean default argument you can use to select the default VPC:
data "aws_vpc" "default" {
default = true
}
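You can then reference the ID as data.aws_vpc.default.id, for example:
output "default_vpc_id" {
  value = data.aws_vpc.default.id
}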
For completeness, I'll add that there is also an aws_default_vpc resource that manages the default VPC: it implements the resource lifecycle without actually creating the VPC*, but it will make changes to the resource, such as changing tags (and that includes its name).
* Unless you forcefully destroy the default VPC.
From the docs:
This is an advanced resource and has special caveats to be aware of when using it. Please read this document in its entirety before using this resource.
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/default_vpc
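For instance, a minimal sketch of adopting the default VPC just to manage its tags (the tag value here is only illustrative):
resource "aws_default_vpc" "default" {
  tags = {
    Name = "default-vpc-managed-by-terraform" # illustrative value
  }
}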
This
resource "aws_default_vpc" "default" {
}
will do.
I think this is convenient for Terraform projects that manage a whole AWS account, but I would advise against using it whenever multiple Terraform projects are deployed in a single organization account. In that case you are better off staying with #blokje5's answer.
I am trying to spin up an ECS cluster with Terraform, but I cannot make the EC2 instances register as container instances in the cluster.
I first tried the verified module from Terraform, but it seems outdated (ecs-instance-profile has the wrong path).
Then I tried another module from anrim, but still no container instances. Here is the script I used:
provider "aws" {
region = "us-east-1"
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "2.21.0"
name = "ecs-alb-single-svc"
cidr = "10.10.10.0/24"
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
private_subnets = ["10.10.10.0/27", "10.10.10.32/27", "10.10.10.64/27"]
public_subnets = ["10.10.10.96/27", "10.10.10.128/27", "10.10.10.160/27"]
tags = {
Owner = "user"
Environment = "me"
}
}
module "ecs_cluster" {
source = "../../modules/cluster"
name = "ecs-alb-single-svc"
vpc_id = module.vpc.vpc_id
vpc_subnets = module.vpc.private_subnets
tags = {
Owner = "user"
Environment = "me"
}
}
I then created a new ECS cluster (from the AWS console) in the same VPC and carefully compared the differences in resources. I managed to find some small differences, fixed them, and tried again. But still no container instances!
A fork of the module is available here.
Can you see instances being created in the autoscaling group? If so, I'd suggest SSHing into one of them (either directly or via a bastion host, e.g. see this module) and checking the ECS agent logs. In my experience these problems are usually related to IAM policies, and that tends to be pretty visible in the logs, but YMMV.
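For reference, the pieces that most often turn out to be missing are an instance profile with the ECS managed policy attached and user data that tells the agent which cluster to join. A rough sketch, with illustrative names that are not taken from the modules above:
resource "aws_iam_role" "ecs_instance" {
  name = "ecs-instance-role" # illustrative name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ecs_instance" {
  role       = aws_iam_role.ecs_instance.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}

resource "aws_iam_instance_profile" "ecs_instance" {
  name = "ecs-instance-profile" # illustrative name
  role = aws_iam_role.ecs_instance.name
}

# In the launch template/configuration, point the agent at your cluster:
# user_data = <<-EOF
#   #!/bin/bash
#   echo "ECS_CLUSTER=ecs-alb-single-svc" >> /etc/ecs/ecs.config
# EOF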
I'm brand new to Terraform, so I'm sure I'm missing something, but the answers I'm finding don't seem to be asking the same question I have.
I have an AWS VPC/security group that our EC2 instances need to be created under, and this VPC/SG is already created. To create an EC2 instance, Terraform requires that if I don't have a default VPC, I must import my own. But once I import it and apply my plan, when I wish to destroy the stack, Terraform tries to destroy my VPC as well. How do I encapsulate my resources so that when I run terraform apply I can create an EC2 instance in my imported VPC, but when I run terraform destroy I only destroy my EC2 instance?
In case anyone wants to mention, I understand that:
lifecycle {
  prevent_destroy = true
}
is not what I'm looking for.
Here is my current practice code.
resource "aws_vpc" "my_vpc" {
cidr_block = "xx.xx.xx.xx/24"
}
provider "aws" {
region = "us-west-2"
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "web" {
ami = "${data.aws_ami.ubuntu.id}"
instance_type = "t3.nano"
vpc_security_group_ids = ["sg-0e27d851dxxxxxxxxxx"]
subnet_id = "subnet-0755c2exxxxxxxx"
tags = {
Name = "HelloWorld"
}
}
Terraform should not require you to deploy or import a VPC in order to deploy an EC2 instance into it. You should be able to reference the VPC, subnets, and security groups by ID so Terraform is aware of your existing network infrastructure, just as you've already done for the SG and subnet. All you should need to deploy the aws_instance is an existing subnet ID in the existing VPC, which you already supply. Why do you say deploying or importing a VPC is required by Terraform? What error or issue do you get when deploying without the VPC and just using the existing one?
You can protect the VPC through AWS if you really want to, but I don't think you want to import the VPC into your Terraform state and let Terraform manage it here. It sounds like you want the VPC to serve other resources, perhaps applications deployed manually or through other Terraform stacks, and to live independently of any one application deployment.
To answer the original question, you can use a data source and match your VPC by ID or tag name:
data "aws_vpc" "main" {
tags = {
Name = "main_vpc"
}
}
Or
data "aws_vpc" "main" {
id = "vpc-nnnnnnnn"
}
Then refer to it with data.aws_vpc.main.
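If you also want to stop hard-coding the subnet ID, here is a hedged sketch (assuming AWS provider v4+ for the aws_subnets data source):
data "aws_subnets" "main" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.main.id]
  }
}

resource "aws_instance" "web" {
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = "t3.nano"
  vpc_security_group_ids = ["sg-0e27d851dxxxxxxxxxx"]
  subnet_id              = data.aws_subnets.main.ids[0] # picks one subnet in the existing VPC
}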
Also, if you already included your VPC but would like to remove it from your state without destroying it, you can do that with the terraform state rm command (for example, terraform state rm aws_vpc.my_vpc): https://www.terraform.io/docs/commands/state/index.html
I have created a setup with a main website and a disaster recovery website architecture in AWS using Terraform.
The main website is in region1 and the disaster recovery site is in region2. These are written as separate plans in separate directories.
For region1, I created one directory which contains only the main website's Terraform script, to launch the main website infrastructure.
For region2, I created another directory which contains only the disaster recovery website's Terraform script, to launch the disaster recovery infrastructure.
In my main website script, I need some values from the disaster recovery website, such as the VPC peering connection ID, DMS endpoint ARNs, etc.
How can I reference these values from the disaster recovery directory in the main website directory?
One option is to use the terraform_remote_state data source to fetch outputs from the other state file like this:
vpc/main.tf
resource "aws_vpc" "foo" {
cidr_block = "10.0.0.0/16"
}
output "vpc_id" {
value = "${aws_vpc.foo.id}"
}
route/main.tf
data "terraform_remote_state" "vpc" {
backend = "s3"
config {
bucket = "mybucket"
key = "path/to/my/key"
region = "us-east-1"
}
}
resource "aws_route_table" "rt" {
vpc_id = "${data.terraform_remote_state.vpc.vpc_id}"
}
However, it's nearly always better to just use the provider's native data sources, as long as they exist for the resource you need.
So in your case you will want to use data sources such as the aws_vpc_peering_connection data source to establish cross-VPC routing, with something like this:
data "aws_vpc_peering_connection" "pc" {
vpc_id = "${data.aws_vpc.foo.id}"
peer_cidr_block = "10.0.0.0/16"
}
resource "aws_route_table" "rt" {
vpc_id = "${aws_vpc.foo.id}"
}
resource "aws_route" "r" {
route_table_id = "${aws_route_table.rt.id}"
destination_cidr_block = "${data.aws_vpc_peering_connection.pc.peer_cidr_block}"
vpc_peering_connection_id = "${data.aws_vpc_peering_connection.pc.id}"
}
You'll need to do similar things for any other IDs you need to reference in your DR region.
It's worth noting that there are no data sources for the DMS resources, so you would either need to use the terraform_remote_state data source to fetch any IDs (such as the source and target endpoint ARNs needed to set up the aws_dms_replication_task), or you could structure things so that all of the DMS resources live in the DR region, and then you only need to refer to the other region's VPC ID, database names, and potentially KMS key IDs, all of which can be done via data sources.
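If you do go the remote-state route for DMS, a minimal sketch could look like the following; the output names, bucket, key, and region are hypothetical:
# In the DR configuration: export the endpoint ARNs.
output "dms_source_endpoint_arn" {
  value = aws_dms_endpoint.source.endpoint_arn
}

output "dms_target_endpoint_arn" {
  value = aws_dms_endpoint.target.endpoint_arn
}

# In the main configuration: read them back from the DR state.
data "terraform_remote_state" "dr" {
  backend = "s3"

  config = {
    bucket = "mybucket"             # hypothetical bucket
    key    = "dr/terraform.tfstate" # hypothetical key
    region = "us-east-2"            # hypothetical DR region
  }
}

resource "aws_dms_replication_task" "main" {
  replication_task_id      = "main-to-dr"
  replication_instance_arn = aws_dms_replication_instance.main.replication_instance_arn
  source_endpoint_arn      = data.terraform_remote_state.dr.outputs.dms_source_endpoint_arn
  target_endpoint_arn      = data.terraform_remote_state.dr.outputs.dms_target_endpoint_arn
  migration_type           = "full-load-and-cdc"
  table_mappings           = file("table-mappings.json") # hypothetical mappings file
}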