Terraform modules: EC2 and VPC on AWS

I have a question about how to use modules in Terraform. See my code below:
module "aws_vpc"{
source = "../modules/vpc"
vpc_cidr_block = "192.168.0.0/16"
name_cidr = "ec2-eks"
name_subnet = "ec2-eks-subnet"
subnet_cidr = ["192.168.1.0/25"]
}
module "ec2-eks" {
source = "../modules/ec2"
ami_id = "ami-07c8bc5c1ce9598c3"
subnet_id = module.aws_vpc.aws_subnet[0]
count_server = 1
}
output "aws_vpc" {
value = module.aws_vpc.aws_subnet[0]
}
I'm creating a VPC, and as the next step I want the EC2 instance attached to the subnet I created. But Terraform attaches it to the default VPC instead.
What do I need to do to attach the EC2 instance to my VPC (subnet)?
Thank you for your answers.

What do I need to do to attach the EC2 instance to my VPC (subnet)?
aws_instance has a subnet_id attribute, so to place your instance in a given subnet you have to set subnet_id.
Since you are using a module to create your VPC, the module will most likely output the subnet IDs as well. Without the details of the module it's difficult to give an exact answer, but in a general scenario you would do something along these lines (example):
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = "t3.micro"
subnet_id = module.aws_vpc.subnet_id
tags = {
Name = "HelloWorld"
}
}
Obviously, the above depends on the implementation of your module.
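For illustration, here is a minimal sketch of what the modules/ec2 implementation could look like so that the subnet actually gets wired through. The variable names ami_id, subnet_id, and count_server are taken from the module call in the question; the instance type and everything else are assumptions:
variable "ami_id" {}
variable "subnet_id" {}
variable "count_server" {
  default = 1
}

resource "aws_instance" "this" {
  count         = var.count_server
  ami           = var.ami_id
  instance_type = "t2.micro" # assumed; the question does not show it
  # Without this line, AWS places the instance in the default VPC.
  subnet_id     = var.subnet_id
}
If the aws_instance inside the module never references var.subnet_id, then passing subnet_id from the calling code has no effect, which matches the behavior described in the question.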

Thank you.
The resources were created successfully in AWS. I had forgotten to set the subnet_id parameter inside the ec2 module.

Related

Filter out Subnet IDs based on sufficient capacity in availability zones in Terraform

I'm trying to deploy an EKS cluster and everything seems to be fine except for one thing!
The facade module looks like this:
module "eks" {
source = "../../../../infrastructure_modules/eks"
## EKS ##
create_eks = var.create_eks
cluster_version = var.cluster_version
cluster_name = local.cluster_name
vpc_id = data.aws_vpc.this.id
subnets = data.aws_subnet_ids.this.ids
# note: either pass worker_groups or node_groups
# this is for (EKSCTL API) unmanaged node group
worker_groups = var.worker_groups
# this is for (EKS API) managed node group
node_groups = var.node_groups
## Common tag metadata ##
env = var.env
app_name = var.app_name
tags = local.eks_tags
region = var.region
}
The VPC ID is retrieved through the following block:
data "aws_vpc" "this" {
tags = {
Name = "tagName"
}
}
which is then used to retrieve the subnet IDs as follows:
data "aws_subnet_ids" "this" {
vpc_id = data.aws_vpc.this.id
}
Nevertheless, deploying this results in an error stating:
Error: error creating EKS Cluster (data-layer-eks):
UnsupportedAvailabilityZoneException: Cannot create cluster
'data-layer-eks' because us-east-1e, the targeted availability zone,
does not currently have sufficient capacity to support the cluster.
This is a well-known error; you can run into it even with plain EC2 instances.
I could work around it by simply hardcoding the subnet values, but that is really undesirable and hardly maintainable.
So the question is: how can I filter out subnet IDs based on availability zones that have sufficient capacity?
First you need to collect the subnets with all of their attributes:
data "aws_subnets" "this" {
filter {
name = "vpc-id"
values = [data.aws_vpc.this.id]
}
}
data "aws_subnet" "this" {
for_each = toset(data.aws_subnets.this.ids)
id = each.value
}
data.aws_subnet.this is now a map(object) with all of the subnets and their attributes. You can now filter by availability zone accordingly:
subnets = [for subnet in data.aws_subnet.this : subnet.id if subnet.availability_zone != "us-east-1e"]
You can also filter with an allow-list condition if that is easier for you:
subnets = [for subnet in data.aws_subnet.this : subnet.id if contains(["us-east-1a", "us-east-1b", "us-east-1c", "us-east-1d"], subnet.availability_zone)]
It depends on your personal use case.
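To tie this back to the facade module in the question, you could put the filtered list in a local value and pass it to the eks module instead of data.aws_subnet_ids.this.ids. A sketch (usable_subnet_ids is a made-up name; "us-east-1e" is the AZ from the error message):
locals {
  # Drop the AZ that reported insufficient EKS capacity.
  usable_subnet_ids = [
    for subnet in data.aws_subnet.this : subnet.id
    if subnet.availability_zone != "us-east-1e"
  ]
}
Then, inside the module "eks" block, set subnets = local.usable_subnet_ids.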

Terraform outputs from a resource called via a for_each

I'm wondering if anyone can help me with the following. I have a base resource to create AWS subnets:
resource "aws_subnet" "subnet" {
  vpc_id     = var.vpc_id
  cidr_block = var.cidr_block
}

output "subnetId" {
  value = aws_subnet.subnet.id
}
module "private_subnet" {
  source     = "linktoresourcedetailedabove"
  for_each   = var.privateSubnet
  vpc_id     = var.vpc_id
  cidr_block = each.value.cidr_block
}
I have a module that is called with a for_each loop based on a variable passed in. My question: this resource might be called 10 times, and I want to store each ID and then access them from another module, but I keep hitting issues here. I tried changing aws_subnet.subnet.id to aws_subnet.subnet.*.id, but I'm still not having any luck and can't seem to find anything out there that helps.
If your private_subnet module has the output
output "subnetId" {
  value = aws_subnet.subnet.id
}
then once you create your private_subnet modules, you can get the list of all created subnetId values as:
values(module.private_subnet)[*].subnetId
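For example, to feed those IDs into another module (the route_tables module name and its subnet_ids variable are hypothetical):
module "route_tables" {
  source     = "./modules/route_tables" # hypothetical module
  subnet_ids = values(module.private_subnet)[*].subnetId
}
This works because for_each turns module.private_subnet into a map of module instances keyed by each.key; values() flattens that map back into a list, and the splat expression collects each instance's subnetId output.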

Correct design pattern for single server in AWS

I have customized cluster software that runs in a single AZ (subnet). One of the servers is the "controller"; there can only be one of these running at a time. I need to be able to have it in the local DNS, and it needs to automatically rebuild itself if it fails for any reason. I do not believe I need an ELB/ALB/NLB setup for this. However, when I set the system up with an autoscaling group, I am not able to get to the private IP address to update the Route 53 record. Is there a correct design pattern for this in AWS?
Here is the stub code, which does work in rebuilding the server from scratch if it is stopped or becomes unhealthy.
resource "aws_launch_configuration" "example" {
image_id = "${lookup(var.AmiLinux, var.region)}"
instance_type = "t2.micro"
security_groups = [aws_security_group.ingress-all-test.id]
key_name = "akeyname"
lifecycle {
create_before_destroy = true
}
}
data "aws_availability_zones" "all" {}
resource "aws_autoscaling_group" "example" {
launch_configuration = aws_launch_configuration.example.id
min_size = 1
max_size = 1
health_check_grace_period = 60
vpc_zone_identifier = ["${aws_subnet.subnetTest.id}"]
tag {
key = "Name"
value = "tf-asg-example"
propagate_at_launch = true
}
}
I do like the above, as it does maintain a single server in an AZ. However, the ASG makes it rather hard to get to the IP. I am not looking for a user-data "hack" that changes the record on boot. Since the software can only run in a single subnet (AZ), I cannot use an ELB. Thanks in advance for any design pattern for this type of setup.

Terraform modules and providers

I have a module that defines a provider as follows:
provider "aws" {
region = "${var.region}"
shared_credentials_file = "${module.global_variables.shared_credentials_file}"
profile = "${var.profile}"
}
and an EC2 instance as follows:
resource "aws_instance" "node" {
ami = "${lookup(var.ami, var.region)}"
key_name = "ib-us-east-2-production"
instance_type = "${var.instance_type}"
count = "${var.count}"
security_groups = "${var.security_groups}"
tags {
Name = "${var.name}"
}
root_block_device {
volume_size = 100
}
In the Terraform script that calls this module, I would now like to create an ELB and point it at the instance, with something along the lines of:
resource "aws_elb" "node_elb" {
name = "${var.name}-elb"
.........
However, Terraform keeps prompting me for the AWS region that is already defined in the module. The only way around this I have found is to copy the provider block into the file calling the module. Is there a cleaner way to approach this?
The only way around this is to copy the provider block into the file calling the module.
The provider block should actually be in your file calling the module and you can remove it from your module.
From the docs:
For convenience in simple configurations, a child module automatically inherits default (un-aliased) provider configurations from its parent. This means that explicit provider blocks appear only in the root module, and downstream modules can simply declare resources for that provider and have them automatically associated with the root provider configurations.
https://www.terraform.io/docs/configuration/modules.html#implicit-provider-inheritance
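As a minimal sketch of that layout (the module path is an assumption; the provider arguments are the ones from the question):
# main.tf in the root module - the only place a provider block appears
provider "aws" {
  region                  = "${var.region}"
  shared_credentials_file = "${module.global_variables.shared_credentials_file}"
  profile                 = "${var.profile}"
}

# The child module declares no provider of its own and
# inherits the default (un-aliased) one above.
module "node" {
  source = "./modules/node" # assumed path
}
With this arrangement, resources declared next to the module call, such as the aws_elb, use the same inherited provider, and Terraform no longer prompts for the region.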

Can Terraform be used simply to create resources in different AWS regions?

I have the following deploy.tf file:
provider "aws" {
region = "us-east-1"
}
provider "aws" {
alias = "us_west_1"
region = "us-west-2"
}
resource "aws_us_east_1" "my_test" {
# provider = "aws.us_east_1"
count = 1
ami = "ami-0820..."
instance_type = "t2.micro"
}
resource "aws_us_west_1" "my_test" {
provider = "aws.us_west_1"
count = 1
ami = "ami-0d74..."
instance_type = "t2.micro"
}
I am trying to use it to deploy 2 servers, one in each region. I keep getting errors like:
aws_us_east_1.narc_test: Provider doesn't support resource: aws_us_east_1
I have tried setting aliases for both provider blocks and referring to the correct region in a number of different ways. I've read up on multi-region support, and some answers suggest this can be accomplished with modules; however, this is a simple test, and I'd like to keep it simple. Is this currently possible?
Yes, it can be used to create resources in different regions, even inside just one file. There is no need to use modules for your test scenario.
Your error is probably caused by a typo. If you want to launch an EC2 instance, the resource you want to create is aws_instance, not aws_us_west_1 or aws_us_east_1.
Sure enough, Terraform does not know this kind of resource, since it simply does not exist. Change it to aws_instance and you should be good to go! Additionally, you should name the two resources differently, to avoid using my_test for both.
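For reference, a corrected version of the question's file might look like this (the AMI IDs are left truncated exactly as in the question and would need to be completed):
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "us_west_1"
  region = "us-west-2"
}

resource "aws_instance" "my_test_east" {
  # uses the default (us-east-1) provider
  ami           = "ami-0820..." # truncated in the question
  instance_type = "t2.micro"
}

resource "aws_instance" "my_test_west" {
  provider      = aws.us_west_1
  ami           = "ami-0d74..." # truncated in the question
  instance_type = "t2.micro"
}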
Step 1
Add the provider aliases in the main.tf file where you are going to execute terraform plan:
provider "aws" {
region = "eu-west-1"
alias = "main"
}
provider "aws" {
region = "us-east-1"
alias = "useast1"
}
Step 2
Add a providers block inside your module definition block:
module "lambda_edge_rule" {
providers = {
aws = aws.useast1
}
source = "../../../terraform_modules/lambda"
tags = var.tags
}
Step 3
Declare "aws" in required_providers inside your module (source = "../../../terraform_modules/lambda"):
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 2.7.0"
    }
  }
}

resource "aws_lambda_function" "lambda" {
  function_name = "blablabla"
  # ...
}
Note: this applies to Terraform v1.0.5 as of this writing.