Terraform - DB and security group are in different VPCs

What am I trying to achieve:
Create an RDS Aurora cluster and place it in the same VPC as the EC2 instances I start, so they can communicate.
I'm trying to create an SG named "RDS_DB_SG" and make it part of the VPC I'm creating in the process.
I also create an SG named "BE_SG" and make it part of the same VPC.
I'm doing this so I can get access between the two (the RDS cluster and the BE server).
What I did so far:
Wrote the .tf code below and started everything up.
What I got:
It starts OK if I don't attach the RDS SG to the RDS cluster - the RDS cluster then ends up in the default VPC.
When I attach the SG I want to the RDS cluster, the cluster can't start and I get an error.
Error I got:
"The DB instance and EC2 security group are in different VPCs. The DB instance is in vpc-5a***63c and the EC2 security group is in vpc-0e5391*****273b3d"
Workaround for now:
I started the infrastructure without specifying a VPC for the RDS cluster, so it was placed in the default VPC.
I then manually created VPC peering between the VPC created for the EC2 instances and the VPC the RDS ended up in.
But I want them in the same VPC so I don't have to create the VPC peering manually.
My .tf code:
variable "vpc_cidr" {
description = "CIDR for the VPC"
default = "10.0.0.0/16"
}
resource "aws_vpc" "vpc" {
cidr_block = "${var.vpc_cidr}"
tags = {
Name = "${var.env}_vpc"
}
}
resource "aws_subnet" "vpc_subnet" {
vpc_id = "${aws_vpc.vpc.id}"
cidr_block = "${var.vpc_cidr}"
availability_zone = "eu-west-1a"
tags = {
Name = "${var.env}_vpc"
}
}
resource "aws_db_subnet_group" "subnet_group" {
name = "${var.env}-subnet-group"
subnet_ids = ["${aws_subnet.vpc_subnet.id}"]
}
resource "aws_security_group" "RDS_DB_SG" {
name = "${var.env}-rds-sg"
vpc_id = "${aws_vpc.vpc.id}"
ingress {
from_port = 3396
to_port = 3396
protocol = "tcp"
security_groups = ["${aws_security_group.BE_SG.id}"]
}
}
resource "aws_security_group" "BE_SG" {
name = "${var.env}_BE_SG"
vpc_id = "${aws_vpc.vpc.id}"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "BE" {
ami = "ami-*********************"
instance_type = "t2.large"
associate_public_ip_address = true
key_name = "**********"
tags = {
Name = "WEB-${var.env}"
Porpuse = "Launched by Terraform"
ENV = "${var.env}"
}
subnet_id = "${aws_subnet.vpc_subnet.id}"
vpc_security_group_ids = ["${aws_security_group.BE_SG.id}", "${aws_security_group.ssh.id}"]
}
resource "aws_rds_cluster" "rds-cluster" {
cluster_identifier = "${var.env}-cluster"
database_name = "${var.env}-rds"
master_username = "${var.env}"
master_password = "PASSWORD"
backup_retention_period = 5
vpc_security_group_ids = ["${aws_security_group.RDS_DB_SG.id}"]
}
resource "aws_rds_cluster_instance" "rds-instance" {
count = 1
cluster_identifier = "${aws_rds_cluster.rds-cluster.id}"
instance_class = "db.r4.large"
engine_version = "5.7.12"
engine = "aurora-mysql"
preferred_backup_window = "04:00-22:00"
}
Any suggestions on how to achieve my first goal?
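The usual cause of this error, sketched against the code above (a likely fix, not an accepted answer from the thread): the aws_rds_cluster never references the aws_db_subnet_group, so RDS falls back to the account's default VPC while RDS_DB_SG lives in the new VPC - hence the mismatch. Adding db_subnet_group_name pins the cluster to the VPC created above. Note that a DB subnet group must cover at least two availability zones, so a second subnet is needed and the original subnet (which currently takes the whole /16) would have to be narrowed to make room for it; Aurora MySQL may also reject a database_name containing a hyphen.
# Narrow the original subnet (e.g. cidr_block = "10.0.1.0/24") so this second one fits.
resource "aws_subnet" "vpc_subnet_b" {
  vpc_id            = "${aws_vpc.vpc.id}"
  cidr_block        = "10.0.2.0/24"
  availability_zone = "eu-west-1b"
}
resource "aws_db_subnet_group" "subnet_group" {
  name       = "${var.env}-subnet-group"
  subnet_ids = ["${aws_subnet.vpc_subnet.id}", "${aws_subnet.vpc_subnet_b.id}"]
}
resource "aws_rds_cluster" "rds-cluster" {
  cluster_identifier      = "${var.env}-cluster"
  database_name           = "${var.env}rds"                               # alphanumeric only for Aurora MySQL
  master_username         = "${var.env}"
  master_password         = "PASSWORD"
  backup_retention_period = 5
  db_subnet_group_name    = "${aws_db_subnet_group.subnet_group.name}"    # keeps the cluster in the VPC above
  vpc_security_group_ids  = ["${aws_security_group.RDS_DB_SG.id}"]
}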

Related

How to launch multiple AWS EC2 instances from a single VPC using Terraform?

Is it possible to launch multiple EC2 instances from Terraform using a single VPC? I'm building something which requires multiple instances to be launched from the same region, and I'm doing all this using Terraform. But there's a limit in AWS: by default only 5 VPCs are allowed per region. What I've been doing until now is create a separate VPC in Terraform each time I need to launch an instance. Below is the code for reference:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
}
# Configure the AWS Provider
provider "aws" {
region = "us-east-2"
access_key = "XXXXXXXXXXXXXXXXX"
secret_key = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
}
# 1. Create vpc
resource "aws_vpc" "prod-vpc" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "production"
}
}
# 2. Create Internet Gateway
resource "aws_internet_gateway" "gw" {
vpc_id = aws_vpc.prod-vpc.id
}
# 3. Create Custom Route Table
resource "aws_route_table" "prod-route-table" {
vpc_id = aws_vpc.prod-vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.gw.id
}
route {
ipv6_cidr_block = "::/0"
gateway_id = aws_internet_gateway.gw.id
}
tags = {
Name = "Prod"
}
}
# 4. Create a Subnet
resource "aws_subnet" "subnet-1" {
vpc_id = aws_vpc.prod-vpc.id
cidr_block = "10.0.1.0/24"
availability_zone = "us-east-2a"
tags = {
Name = "prod-subnet"
}
}
# 5. Associate subnet with Route Table
resource "aws_route_table_association" "a" {
subnet_id = aws_subnet.subnet-1.id
route_table_id = aws_route_table.prod-route-table.id
}
# 6. Create Security Group to allow port 22,80,443
resource "aws_security_group" "allow_web" {
name = "allow_web_traffic"
description = "Allow Web inbound traffic"
vpc_id = aws_vpc.prod-vpc.id
ingress {
description = "HTTPS"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "self"
from_port = 8000
to_port = 8000
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "HTTP"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "SSH"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "allow_web"
}
}
# 7. Create a network interface with an ip in the subnet that was created in step 4
resource "aws_network_interface" "web-server-nic" {
subnet_id = aws_subnet.subnet-1.id
private_ips = ["10.0.1.50"]
security_groups = [aws_security_group.allow_web.id]
}
# 8. Assign an elastic IP to the network interface created in step 7
resource "aws_eip" "one" {
vpc = true
network_interface = aws_network_interface.web-server-nic.id
associate_with_private_ip = "10.0.1.50"
depends_on = [aws_internet_gateway.gw]
}
output "server_public_ip" {
value = aws_eip.one.public_ip
}
# 9. Create Ubuntu server and install/enable apache2
resource "aws_instance" "web-server-instance" {
ami = var.AMI_ID
instance_type = "g4dn.xlarge"
availability_zone = "us-east-2a"
key_name = "us-east-2"
network_interface {
device_index = 0
network_interface_id = aws_network_interface.web-server-nic.id
}
root_block_device {
volume_size = "200"
}
iam_instance_profile = aws_iam_instance_profile.training_profile.name
depends_on = [aws_eip.one]
user_data = <<-EOF
#!/bin/bash
python3 /home/ubuntu/setting_instance.py
EOF
tags = {
Name = var.INSTANCE_NAME
}
}
The only downside to this code is that it creates a separate VPC every time I create an instance. I read in a Stack Overflow post that we can import an existing VPC using the terraform import command. Along with the VPC, I had to import the Internet Gateway and Route Table as well (it was throwing an error otherwise). But then I wasn't able to access the instance using SSH, and the commands in the user_data part didn't execute either (setting_instance.py sends a Firebase notification once the instance starts; that's its only purpose).
It's not only the VPC: I'd also like to know whether I can reuse the other resources to the fullest extent possible.
I'm new to Terraform and AWS. Any suggestions on the above code are welcome.
EDIT: Instances are created one at a time according to need, i.e., whenever there is a need for a new instance I use this code. In the current setup, if there are already 5 instances (and therefore 5 VPCs) running in a region, I won't be able to use this code to create a 6th instance in that region when the demand arises.
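One way to avoid both the import dance and a new VPC per instance is to reference the existing network through data sources, so the configuration never tries to create it again. A rough sketch, assuming the existing VPC and subnet keep the Name tags from the code above:
# Look up the already-existing network instead of declaring it as resources.
data "aws_vpc" "prod" {
  filter {
    name   = "tag:Name"
    values = ["production"]  # assumed tag, matching the VPC above
  }
}
data "aws_subnet" "prod_subnet" {
  vpc_id = data.aws_vpc.prod.id
  filter {
    name   = "tag:Name"
    values = ["prod-subnet"] # assumed tag, matching the subnet above
  }
}
# The security group, ENI and instance can then point at data.aws_vpc.prod.id and
# data.aws_subnet.prod_subnet.id instead of aws_vpc.prod-vpc.id / aws_subnet.subnet-1.id.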
If, as you say, they would be exactly the same, the easiest way would be to use count, which indicates how many instances you want to have. For that you can introduce a new variable:
variable "number_of_instance" {
default = 1
}
and then
resource "aws_instance" "web-server-instance" {
count = var.number_of_instance
ami = var.AMI_ID
instance_type = "g4dn.xlarge"
availability_zone = "us-east-2a"
key_name = "us-east-2"
network_interface {
device_index = 0
network_interface_id = aws_network_interface.web-server-nic.id
}
root_block_device {
volume_size = "200"
}
iam_instance_profile = aws_iam_instance_profile.training_profile.name
depends_on = [aws_eip.one]
user_data = <<-EOF
#!/bin/bash
python3 /home/ubuntu/setting_instance.py
EOF
tags = {
Name = var.INSTANCE_NAME
}
}
All this must be managed by the same state file, not fully separate state files, as otherwise you will again end up with duplicates of the VPC. You only change number_of_instance to whatever you want. For a more resilient solution, you would have to use an autoscaling group for the instances, as sketched below.
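One caveat with the counted instances above: they all reference the same aws_network_interface, and an ENI can only be attached to a single instance at device_index 0, so for count > 1 the network_interface block would need to be swapped for subnet_id/vpc_security_group_ids (or the ENI counted as well). For the autoscaling-group route, a rough sketch reusing the subnet and security group from the question (resource names are illustrative):
resource "aws_launch_template" "web_server" {
  name_prefix            = "web-server-"
  image_id               = var.AMI_ID
  instance_type          = "g4dn.xlarge"
  key_name               = "us-east-2"
  vpc_security_group_ids = [aws_security_group.allow_web.id]

  iam_instance_profile {
    name = aws_iam_instance_profile.training_profile.name
  }

  # Launch template user_data must be base64-encoded.
  user_data = base64encode(<<-EOF
    #!/bin/bash
    python3 /home/ubuntu/setting_instance.py
  EOF
  )
}
resource "aws_autoscaling_group" "web_servers" {
  desired_capacity    = var.number_of_instance
  min_size            = 1
  max_size            = 5
  vpc_zone_identifier = [aws_subnet.subnet-1.id]

  launch_template {
    id      = aws_launch_template.web_server.id
    version = "$Latest"
  }

  tag {
    key                 = "Name"
    value               = var.INSTANCE_NAME
    propagate_at_launch = true
  }
}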

Can't connect to Terraform-created instance with Private Key, but CAN connect when I create instance in Console

I've created the following key pair and EC2 instance using Terraform. I'll leave the SG config out of it, but it allows SSH from the internet.
When I try to SSH into this instance I get the errors "Server Refused our Key" and "No supported authentication methods available (server sent: publickey)".
However, I am able to log in when I create a separate EC2 instance in the console and assign it the same key pair assigned in the TF script.
Has anyone seen this behavior? What causes it?
# Create Dev VPC
resource "aws_vpc" "dev_vpc" {
cidr_block = "10.0.0.0/16"
instance_tenancy = "default"
enable_dns_hostnames = "true"
tags = {
Name = "dev"
}
}
# Create an Internet Gateway Resource
resource "aws_internet_gateway" "igw" {
vpc_id = aws_vpc.dev_vpc.id
tags = {
Name = "dev-engineering-igw"
}
}
# Create a Route Table
resource "aws_route_table" " _dev_public_routes" {
vpc_id = aws_vpc. _dev.id
tags = {
name = " _dev_public_routes"
}
}
# Create a Route
resource "aws_route" " _dev_internet_access" {
route_table_id = aws_route_table. _dev_public_routes.id
destination_cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.igw.id
}
# Associate the Route Table to our Public Subnet
resource "aws_route_table_association" " _dev_public_subnet_assoc" {
subnet_id = aws_subnet. _dev_public.id
route_table_id = aws_route_table. _dev_public_routes.id
}
# Create public subnet for hosting customer-facing Django app
resource "aws_subnet" " _dev_public" {
vpc_id = aws_vpc. _dev.id
cidr_block = "10.0.0.0/17"
availability_zone = "us-west-2a"
tags = {
Env = "dev"
}
}
resource "aws_security_group" "allow_https" {
name = "allow_https"
description = "Allow http and https inbound traffic"
vpc_id = aws_vpc. _dev.id
ingress {
description = "HTTP and HTTPS into VPC"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "HTTP and HTTPS into VPC"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "SSH"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
description = "HTTP and HTTPS out of VPC for Session Manager"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "allow_https"
}
}
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu20.id
instance_type = "t3.micro"
subnet_id = aws_subnet. _dev_public.id
associate_public_ip_address = "true"
vpc_security_group_ids = ["${aws_security_group.allow_https.id}"]
key_name = "key_name"
metadata_options { #Enabling IMDSv2
http_endpoint = "disabled"
http_tokens = "required"
}
tags = {
Env = "dev"
}
}
As specified in the comments, removing the metadata_options block from the instance resource resolves the issue.
Alternatively, the fix is to update the metadata_options to be:
metadata_options { #Enabling IMDSv2
http_endpoint = "enabled"
http_tokens = "required"
}
Looking at the Terraform documentation for metadata_options shows that:
http_endpoint = "disabled" means that the metadata service is unavailable.
http_tokens = "required" means that the metadata service requires session tokens (i.e., IMDSv2).
This is an invalid configuration, as specified in the AWS docs:
You can opt in to require that IMDSv2 is used when requesting instance metadata. Use the modify-instance-metadata-options CLI command and set the http-tokens parameter to required. When you specify a value for http-tokens, you must also set http-endpoint to enabled.
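For an instance that already exists, the same change can also be applied outside Terraform with the CLI command those docs refer to (the instance ID below is a placeholder):
aws ec2 modify-instance-metadata-options \
    --instance-id i-0123456789abcdef0 \
    --http-endpoint enabled \
    --http-tokens required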

How do I use terraform to connect a replica Postgres RDS and its Source in a different VPC?

I'm trying to set up a dev environment: 1x private subnet and 1x public subnet in the dev VPC; a Postgres RDS instance in the private subnet; each subnet's resources are in their own security group. The source RDS instance is in the prod VPC. I have created a peering connection, and the CIDRs of the two VPCs do not overlap.
I am getting
Error: Error creating DB Instance: InvalidParameterCombination: The DB instance and EC2 security group are in different VPCs. The DB instance is in prod-vpc and the EC2 security group is in dev-vpc
Here are my Terraform definitions. I have also added the other peer's relevant CIDRs to the route tables of each peer VPC. The source RDS and the prod VPC were both created in a separate process and already exist outside of this Terraform run.
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "2.77.0"
name = "dev-vpc"
cidr = "192.168.0.0/16"
azs = ["us-west-2a"]
enable_dns_hostnames = true
enable_dns_support = true
}
module "keypair" {
source = "git::https://github.com/rhythmictech/terraform-aws-secretsmanager-keypair"
name_prefix = "ec2-ssh"
description = "SSH keypair for instances"
}
resource "aws_security_group" "dev-sg-pub" {
vpc_id = module.vpc.vpc_id
ingress {
from_port = 5432 # testing
to_port = 5432 # testing
protocol = "tcp"
cidr_blocks = ["192.168.1.0/28","192.168.2.0/24"]
self = true
}
egress {
from_port = 5432 # testing
to_port = 5432 # testing
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_security_group" "dev-sg-priv" {
vpc_id = module.vpc.vpc_id
ingress {
from_port = 5432 # testing
to_port = 5432 # testing
protocol = "tcp"
cidr_blocks = ["192.168.1.0/28", "192.168.2.0/24"]
security_groups = ["sg-xxxxxxxxxxxxxxx"] # the pub subnet's sg
self = true
}
egress {
from_port = 5432 # testing
to_port = 5432 # testing
protocol = "tcp"
cidr_blocks = ["192.168.1.0/28", "192.168.2.0/24"]
}
}
resource "aws_subnet" "dev-subnet-pub" {
vpc_id = module.vpc.vpc_id
cidr_block = "192.168.1.0/28"
tags = {
Name = "dev-subnet-pub"
Terraform = "true"
Environment = "dev"
}
}
resource "aws_subnet" "dev-subnet-priv" {
vpc_id = module.vpc.vpc_id
cidr_block = "192.168.2.0/24"
tags = {
Name = "dev-subnet-priv"
Terraform = "true"
Environment = "dev"
}
}
resource "aws_vpc_peering_connection" "dev-peer-conn" {
peer_vpc_id = "vpc-xxxxxxxxxxxxxxa"
vpc_id = module.vpc.vpc_id
auto_accept = true
}
resource "aws_db_instance" "dev-replica" {
name = "dev-replica"
identifier = "dev-replica"
replicate_source_db = "arn:aws:rds:us-west-2:9999999999:db:tf-xxxxxxxx"
instance_class = "db.t3.small"
apply_immediately = false
publicly_accessible = false
skip_final_snapshot = true
vpc_security_group_ids = [aws_security_group.dev-sg-priv.id, "sg-xxxxxxxxxxx"]
depends_on = [aws_vpc_peering_connection.dev-peer-conn]
}
You can't do this. Security groups are VPC-scoped, and your RDS instance must use an SG from the VPC it is located in.
Since you peered your VPCs, you can only reference an SG across VPCs inside the rules of your aws_security_group.dev-sg-priv - roughly as sketched below.
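In practice that means something like the sketch below (the placeholder SG ID is the one from the question): a same-region read replica is created in the source's (prod) VPC, so vpc_security_group_ids may only contain prod-VPC security groups, and the dev side is let in through a rule that references its SG across the peering connection.
resource "aws_db_instance" "dev-replica" {
  identifier             = "dev-replica"
  replicate_source_db    = "arn:aws:rds:us-west-2:9999999999:db:tf-xxxxxxxx"
  instance_class         = "db.t3.small"
  apply_immediately      = false
  publicly_accessible    = false
  skip_final_snapshot    = true
  # Only SGs that live in the prod VPC (where the replica ends up) can go here:
  vpc_security_group_ids = ["sg-xxxxxxxxxxx"]
  depends_on             = [aws_vpc_peering_connection.dev-peer-conn]
}
# The dev-VPC SG is referenced only inside a rule on that prod-VPC SG, which
# works across a same-region peering connection:
resource "aws_security_group_rule" "allow_dev_into_prod_db" {
  type                     = "ingress"
  from_port                = 5432
  to_port                  = 5432
  protocol                 = "tcp"
  security_group_id        = "sg-xxxxxxxxxxx"                  # prod-VPC SG attached to the replica
  source_security_group_id = aws_security_group.dev-sg-priv.id # dev-VPC SG, rule reference only
}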

Terraform ec2 - Permission denied (publickey)

I'm trying to learn Terraform.
I want to install some stuff on an EC2 instance and connect over SSH.
I have created a new SSH key pair for this.
When I try ssh -i ssh-keys/id_rsa_aws ubuntu@52.47.123.18 I get the error
Permission denied (publickey).
Here is a sample of my .tf script.
resource "aws_instance" "airflow" {
ami = "ami-0d3f551818b21ed81"
instance_type = "t3a.xlarge"
key_name = "admin"
vpc_security_group_ids = [aws_security_group.ssh-group.id]
tags = {
"Name" = "airflow"
}
subnet_id = aws_subnet.ec2_subnet.id
}
resource "aws_key_pair" "admin" {
key_name = "admin"
public_key = "ssh-rsa ........" # I cat my key.pub for this
}
EDIT
Thanks to @GrzegorzOledzki I see that the issue comes from my subnet work. Here are the files.
gateway.tf
resource "aws_internet_gateway" "my_gateway" {
vpc_id = aws_vpc.my_vpc.id
tags = {
Name = "my-gateway"
}
}
network.tf
resource "aws_vpc" "my_vpc" {
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = true
enable_dns_support = true
tags = {
Name = "my-vpc"
}
}
resource "aws_eip" "airflow_ip" {
instance = aws_instance.airflow.id
vpc = true
}
security_group.tf
resource "aws_security_group" "ssh-group" {
name = "ssh-group"
vpc_id = aws_vpc.my_vpc.id
ingress {
# TLS (change to whatever ports you need)
from_port = 22
to_port = 22
protocol = "tcp"
# Please restrict your ingress to only necessary IPs and ports.
# Opening to 0.0.0.0/0 can lead to security vulnerabilities.
cidr_blocks = ["my.ip.from.home/32"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
subnet.tf
resource "aws_subnet" "ec2_subnet" {
cidr_block = cidrsubnet(aws_vpc.my_vpc.cidr_block, 3, 1)
vpc_id = aws_vpc.my_vpc.id
availability_zone = "eu-west-3c"
}
resource "aws_route_table" "my_route_table" {
vpc_id = aws_vpc.my_vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.my_gateway.id
}
tags = {
Name = "my_route_table"
}
}
resource "aws_route_table_association" "subnet_association" {
route_table_id = aws_route_table.my_route_table.id
subnet_id = aws_subnet.ec2_subnet.id
}
EDIT 2
I destroyed everything, then rebuilt it, and now it's working. I'm not sure, but I had created my EC2 instance before creating and linking the custom VPC, subnet and security group. It is as if something (what?) went wrong and it couldn't reassign everything to my instance.
Currently that pem key has read permission only for the user (chmod 400 *.pem). In this case, you need to enable read permission for others as well (chmod 404 / chmod o=r *.pem).
I ran into the same issue and the fix for me too was to just terraform destroy and then rebuild the infrastructure.
The root cause of my issue was that I had first created and set the key pair for the EC2 instance, but I didn't output or save the private key at all. After creating the EC2 instance with the key pair, I added output + save functionality for the generated SSH keys, but what got saved was not the key pair that had been set on the EC2 instance - it was just a newly generated one. Once I generated the key pair that was set for the newly created instance and saved the private key output, I could SSH into the instance successfully.
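The "generate, save and actually use the same key pair" steps can be kept entirely inside Terraform so they can't drift apart; a minimal sketch using the tls and local providers against the question's resources (the file path and resource names are illustrative):
# Generate the key pair in Terraform...
resource "tls_private_key" "admin" {
  algorithm = "RSA"
  rsa_bits  = 4096
}
# ...register its public half with AWS...
resource "aws_key_pair" "admin" {
  key_name   = "admin"
  public_key = tls_private_key.admin.public_key_openssh
}
# ...and save the matching private half locally, so ssh -i uses the right key.
resource "local_file" "admin_private_key" {
  content         = tls_private_key.admin.private_key_pem
  filename        = "${path.module}/ssh-keys/id_rsa_aws"
  file_permission = "0400"
}
resource "aws_instance" "airflow" {
  ami                    = "ami-0d3f551818b21ed81"
  instance_type          = "t3a.xlarge"
  key_name               = aws_key_pair.admin.key_name # ties the instance to this exact key pair
  vpc_security_group_ids = [aws_security_group.ssh-group.id]
  subnet_id              = aws_subnet.ec2_subnet.id
  tags = {
    "Name" = "airflow"
  }
}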

Want to create two ec2 instance under a vpc using terraform

I want to create two EC2 instances in a VPC via Terraform. With my code I can easily deploy the VPC, security group and subnet, but the instance section fails with an error. Can you help me resolve this issue? The 'terraform plan' command executes successfully, though.
provider "aws" {
region = "us-west-2"
access_key = "xxxxxxxx"
secret_key = "xxxxxxxxxxxxx"
}
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
instance_tenancy = "default"
}
resource "aws_subnet" "public" {
vpc_id = "${aws_vpc.main.id}"
cidr_block = "10.0.1.0/24"
}
resource "aws_security_group" "allow_ssh" {
name = "allow_ssh"
description = "Allow ssh inbound traffic"
vpc_id = "${aws_vpc.main.id}"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "instance"{
ami = "ami-0fc025e3171c5a1bf"
instance_type = "t2.micro"
vpc_security_group_ids = ["${aws_security_group.allow_ssh.id}"]
subnet_id = "${aws_subnet.public.id}"
}
output "vpc_id" {
value = "${aws_vpc.main.id}"
}
output "subnet_id" {
value = "${aws_subnet.public.id}"
}
output "vpc_security_group_id" {
value = "${aws_security_group.allow_ssh.id}"
}
Error
ami-0fc025e3171c5a1bf is for the arm64 architecture and will not work with a t2.micro. If you need an ARM platform, you will need to use an instance type in the a1 family.
Otherwise, you can use the x86-64 equivalent of Ubuntu Server 18.04 LTS (HVM), SSD Volume Type, which is ami-0d1cd67c26f5fca19.
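To avoid hard-coding an AMI ID of the wrong architecture in the first place, the image can also be looked up by name and architecture; a sketch for the x86-64 Ubuntu 18.04 image mentioned above (the Canonical owner ID and name filter are the commonly used ones, worth double-checking):
data "aws_ami" "ubuntu_1804_x86" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
  }

  filter {
    name   = "architecture"
    values = ["x86_64"]
  }
}
resource "aws_instance" "instance" {
  ami                    = "${data.aws_ami.ubuntu_1804_x86.id}"
  instance_type          = "t2.micro"
  vpc_security_group_ids = ["${aws_security_group.allow_ssh.id}"]
  subnet_id              = "${aws_subnet.public.id}"
}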