I am trying to set up AWS SFTP Transfer in VPC endpoint mode, but there is one thing I can't figure out.
The problem I have is how to get the target IPs for the NLB target group.
The only output I found:
output "vpc_endpoint_transferserver_network_interface_ids" {
description = "One or more network interfaces for the VPC Endpoint for transferserver"
value = flatten(aws_vpc_endpoint.transfer_server.*.network_interface_ids)
}
gives network interface IDs, which cannot be used as targets:
Outputs:
api_url = https://12345.execute-api.eu-west-1.amazonaws.com/prod
vpc_endpoint_transferserver_network_interface_ids = [
"eni-12345",
"eni-67890",
"eni-abcde",
]
I went through:
terraform get subnet integration ips from vpc endpoint subnets tab
and
Terraform how to get IP address of aws_lb
but neither of them seems to work. The latter approach fails with:
on modules/sftp/main.tf line 134, in data "aws_network_interface" "ifs":
134: count = "${length(local.nlb_interface_ids)}"
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
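For context, the data lookup from that second approach looks roughly like this in my module (a sketch; the local and output names here are mine):
locals {
  nlb_interface_ids = flatten(aws_vpc_endpoint.transfer_server.*.network_interface_ids)
}

data "aws_network_interface" "ifs" {
  count = length(local.nlb_interface_ids)
  id    = local.nlb_interface_ids[count.index]
}

output "endpoint_private_ips" {
  value = data.aws_network_interface.ifs.*.private_ip
}
It is this count that Terraform cannot evaluate at plan time, because the endpoint's network interface IDs are only known after apply.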
You can create Elastic IPs for the load balancer:
resource "aws_eip" "example1" {
  vpc = true
}

resource "aws_eip" "example2" {
  vpc = true
}
Then specify the Elastic IPs while creating the Network Load Balancer:
resource "aws_lb" "example" {
  name               = "example"
  load_balancer_type = "network"

  subnet_mapping {
    subnet_id     = aws_subnet.example1.id
    allocation_id = aws_eip.example1.id
  }

  subnet_mapping {
    subnet_id     = aws_subnet.example2.id
    allocation_id = aws_eip.example2.id
  }
}
I have created a transit gateway using the terraform tgw module as shown below.
module "transit-gateway" {
source = "terraform-aws-modules/transit-gateway/aws"
version = "1.4.0"
name = "tgw-nprod"
description = "My TGW shared with several other AWS accounts"
amazon_side_asn = 64532
enable_auto_accept_shared_attachments = true
vpc_attachments = {
vpc1 = {
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
dns_support = true
ipv6_support = false
transit_gateway_default_route_table_association = false
transit_gateway_default_route_table_propagation = false
}
}
ram_allow_external_principals = true
ram_principals = [1234567890, 0987654321]
tags = {
Purpose = "tgw-testing"
}
}
I have created the VPC using the terraform vpc module.
When I run the above Terraform I am getting the error "Error: error creating EC2 Transit Gateway VPC Attachment: DuplicateSubnetsInSameZone: Duplicate Subnets for same AZ".
I have 2 private subnets in ap-south-1 and 1 public subnet in ap-south-1.
The AWS docs state that you can have your gateway in only one subnet per AZ:
You must select at least one subnet. You can select only one subnet per Availability Zone.
Your error message suggests that your module.vpc.private_subnets are in the same AZ. You have to redefine your VPC so that module.vpc.private_subnets are in two different AZs, or just use one subnet in your subnet_ids.
To use one subnet:
subnet_ids = [module.vpc.private_subnets[0]]
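For the two-AZ option, a minimal sketch of the VPC module configuration (the name and CIDRs here are placeholders, not taken from your question):
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "tgw-test-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["ap-south-1a", "ap-south-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"] # one private subnet per AZ
  public_subnets  = ["10.0.101.0/24"]
}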
I have created two VPCs using Terraform:
resource "aws_vpc" "alpha" {
cidr_block = "10.16.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
tags = {
Name = "Alpha"
}
}
resource "aws_subnet" "alpha_private_a" {
vpc_id = aws_vpc.alpha.id
cidr_block = "10.16.192.0/24"
availability_zone = "${var.aws_region}a"
tags = {
Name = "Alpha Private A"
}
}
resource "aws_subnet" "alpha_private_b" {
vpc_id = aws_vpc.alpha.id
cidr_block = "10.16.224.0/24"
availability_zone = "${var.aws_region}b"
tags = {
Name = "Alpha Private B"
}
}
resource "aws_route_table" "alpha_private" {
vpc_id = aws_vpc.alpha.id
tags = {
Name = "Alpha Private"
}
}
resource "aws_route_table_association" "alpha_private_a" {
route_table_id = aws_route_table.alpha_private.id
subnet_id = aws_subnet.alpha_private_a.id
}
resource "aws_route_table_association" "alpha_private_b" {
route_table_id = aws_route_table.alpha_private.id
subnet_id = aws_subnet.alpha_private_b.id
}
# The same again for VPC "Bravo"
I also have an RDS in VPC "Alpha":
resource "aws_db_subnet_group" "alpha_rds" {
subnet_ids = [ aws_subnet.alpha_private_a.id, aws_subnet.alpha_private_b.id ]
tags = {
Name = "Alpha RDS"
}
}
resource "aws_db_instance" "alpha" {
identifier = "alpha"
allocated_storage = 20
max_allocated_storage = 1000
storage_type = "gp2"
engine = "postgres"
engine_version = "11.8"
publicly_accessible = false
db_subnet_group_name = aws_db_subnet_group.alpha_rds.name
performance_insights_enabled = true
vpc_security_group_ids = [ aws_security_group.alpha_rds.id ]
lifecycle {
prevent_destroy = true
}
}
Then I have an Elastic Beanstalk instance inside VPC "Bravo".
What I want to achieve:
- alpha_rds is accessible to my Elastic Beanstalk instance inside Bravo VPC
- Nothing else inside Alpha VPC is accessible to Bravo VPC
- Nothing else inside Bravo VPC is accessible to Alpha VPC
I think VPC Peering is required for this?
How can I implement this in Terraform?
Related but not Terraform:
Access Private RDS DB From Another VPC
AWS Fargate connection to RDS in a different VPC
You should be able to set it up like this:
- Create a VPC Peering Connection between Alpha and Bravo
- In the route table for Alpha, add a route for the CIDR range of Bravo and set the destination to the peering connection (pcx-XXXXXX) to Bravo
- In the route table for Bravo, add a route for the IP address(es) of the database and point it to the peering connection to Alpha
This setup guarantees that resources in Bravo can only communicate with the database in Alpha; every other packet to that VPC can't be routed.
The inverse is a little tougher: right now this setup should stop TCP connections from Alpha to Bravo from being established, because there is no return path except for the database. UDP traffic could still go through, although its response will be dropped unless it comes from the database.
At this point you could set up Network Access Control lists in the Subnets in Bravo to Deny traffic from Alpha except for the database IPs. This depends on your level of paranoia or your requirements in terms of isolation - personally I wouldn't do it, but it's Friday afternoon and I'm in a lazy mood ;-).
Update
As Mark B correctly pointed out in the comments, there is a risk that the private IP addresses of your RDS cluster may change on failover if the underlying host can't be recovered.
To address these concerns, you could create separate subnets in Alpha for your database node(s) and substitute the database IPs in my description above with the CIDRs of these subnets. That should allow for slightly more flexibility and lets you get around the NACL problem as well, because you can just edit the routing table of the new database subnet(s) and only add the peering connection there.
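A rough Terraform sketch of the peering and routing described above (the Bravo-side resource names are assumptions, since only the Alpha side was posted):
resource "aws_vpc_peering_connection" "alpha_bravo" {
  vpc_id      = aws_vpc.alpha.id
  peer_vpc_id = aws_vpc.bravo.id
  auto_accept = true # both VPCs are in the same account and region
}

# Alpha side: return traffic towards the whole Bravo CIDR
resource "aws_route" "alpha_to_bravo" {
  route_table_id            = aws_route_table.alpha_private.id
  destination_cidr_block    = aws_vpc.bravo.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.alpha_bravo.id
}

# Bravo side: only route towards the database subnets in Alpha
resource "aws_route" "bravo_to_alpha_db_a" {
  route_table_id            = aws_route_table.bravo_private.id
  destination_cidr_block    = aws_subnet.alpha_private_a.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.alpha_bravo.id
}

resource "aws_route" "bravo_to_alpha_db_b" {
  route_table_id            = aws_route_table.bravo_private.id
  destination_cidr_block    = aws_subnet.alpha_private_b.cidr_block
  vpc_peering_connection_id = aws_vpc_peering_connection.alpha_bravo.id
}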
I have a typical problem with Terraform and AWS. I have to deploy 26 instances via Terraform, but they should all have IP addresses in incremental order.
for example
instance 1: 0.0.0.1
instance 2: 0.0.0.2
instance 3: 0.0.0.3
Is it somehow possible to achieve this in Terraform?
Below you can find an example of how to do it. It creates four instances with IPs from 172.31.64.100 to 172.31.64.103 (you can't use the first few addresses of a subnet, as they are reserved by AWS).
You will have to adjust the subnet ID and the initial IP range which I used in my example to your subnets. You also must ensure that these IP addresses are not already in use. AWS could already be using them for its load balancers in your VPC, existing instances or other services. If any IP address in this range is already taken, it will fail.
locals {
  ip_range = [for val in range(100, 104) : "172.31.64.${val}"]
}

resource "aws_network_interface" "foo" {
  for_each = toset(local.ip_range)

  subnet_id   = "subnet-b64b8988"
  private_ips = [each.key]

  tags = {
    Name = "primary_network_interface"
  }
}

resource "aws_instance" "web" {
  for_each = toset(local.ip_range)

  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"

  network_interface {
    network_interface_id = aws_network_interface.foo[each.key].id
    device_index         = 0
  }

  tags = {
    Name = "HelloWorld"
  }
}
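The example references data.aws_ami.ubuntu without showing it; a typical lookup (my assumption, not part of the original answer) would be:
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
}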
How can I create a target group for a network load balancer containing a VPC endpoint in Terraform?
In the AWS console, I would have done the following steps:
1. Create a VPC endpoint in two subnets to an endpoint service in another VPC
2. Create a target group of type IP and register the IP addresses of the endpoints created in step 1
In Terraform, I can create target groups and endpoints, but I don't know how to assign the endpoints' IPs to the target group. Where can I find instructions or an example of how to do this? (Creating target groups of type instance is no problem; my question is specifically about type IP.)
Late to the party! But this is what I did.
Created a null resource that gets the IP addresses of the VPC endpoint and stores them in a file:
resource "null_resource" "nlb" {
triggers = {
always_run = "${timestamp()}"
}
provisioner "local-exec" {
command = "dig +short ${lookup(tomap(element(aws_vpc_endpoint.api-gw.dns_entry, 0)), "dns_name", "")} > /tmp/entry"
}
}
and then read the file entries
resource "aws_lb_target_group_attachment" "nlb" {
  depends_on = [
    null_resource.nlb
  ]

  for_each = toset(slice(split("\n", file("/tmp/entry")), 0, 2))

  target_group_arn = aws_lb_target_group.nlb.arn
  target_id        = each.value
  port             = 443
}
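The aws_lb_target_group.nlb referenced above isn't shown in the answer; a minimal definition for IP targets (the name, port and VPC reference are my assumptions) could look like:
resource "aws_lb_target_group" "nlb" {
  name        = "vpce-ip-targets"
  port        = 443
  protocol    = "TCP"
  target_type = "ip"
  vpc_id      = aws_vpc.main.id # assumed VPC reference
}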
I need help in writing a Terraform script for AWS as follows:
I have a list of security groups in multiple regions, for example,
- us-east-2
- us-west-1
- etc.
Now when I add a new instance in any of the regions, I attach an EIP to it.
I need to add that EIP as an allowed source (all traffic) in every region's security group.
So far, this is what I have tried:
- Save the EIP to a file called node_ips.txt
- Read that file
- Apply it to the security group
Here is the script sample:
variable "list_eips" { type=list" }
resource "aws_eip_association" "eip_assoc" {
count = "${local.number_of_instances}"
instance_id = "${element(aws_instance.ec2_instance.*.id, count.index)}"
allocation_id = "${element(data.aws_eip.db_ip.*.id, count.index)}"
provisioner "local-exec" {
command = "echo ${self.public_ip} >> node_ips.txt"
}
}
data "template_file" "read_node_ips" {
template = "${file("${path.cwd}/node_ips.txt")}"
}
resource "aws_security_group_rule" "allow_db_communication" {
type = "ingress"
from_port = 0
to_port = 65535
protocol = "tcp"
cidr_blocks = ["${split(",", "${join("/32,", concat(compact(split("\n",data.template_file.read_node_ips.rendered)),var.list_eips) )}/32")}"]
security_group_id = "${data.aws_security_group.cassandra_sg.id}"
}
This is not working for me. It is adding rules only for list_eips.
Also, when I add a new instance in a different region, the security group is different, so I am not able to know what my security group was in the previous region.
Please advise.
Thanks.
Terraform has the idea of "Multiple Provider Instances". You can create a provider for each region that needs to access and manipulate resources.
# West coast region
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

resource "aws_instance" "foo" {
  provider = "aws.west"

  # ...
}
https://www.terraform.io/docs/configuration/providers.html
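Applied to the question above, that could look roughly like this (the aliases, variable name and security group IDs are placeholders of mine, written in current Terraform syntax):
variable "node_eips" {
  type = list(string)
}

provider "aws" {
  alias  = "use2"
  region = "us-east-2"
}

provider "aws" {
  alias  = "usw1"
  region = "us-west-1"
}

# One rule per region, each created through that region's provider alias
resource "aws_security_group_rule" "allow_nodes_use2" {
  provider          = aws.use2
  type              = "ingress"
  from_port         = 0
  to_port           = 65535
  protocol          = "tcp"
  cidr_blocks       = [for ip in var.node_eips : "${ip}/32"]
  security_group_id = "sg-0123456789abcdef0" # placeholder: us-east-2 security group
}

resource "aws_security_group_rule" "allow_nodes_usw1" {
  provider          = aws.usw1
  type              = "ingress"
  from_port         = 0
  to_port           = 65535
  protocol          = "tcp"
  cidr_blocks       = [for ip in var.node_eips : "${ip}/32"]
  security_group_id = "sg-0fedcba9876543210" # placeholder: us-west-1 security group
}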