Unable to connect to s3 from Lambda after creating VPC endpoint

We have a Lambda function in our VPC so that it can connect to our RDS instance. This Lambda also needs to connect to S3. It seems that in order to connect to S3 from a VPC, you need to set up a VPC endpoint of the Gateway type. With the config below we are able to connect to our database, but we are still unable to get_object from S3:
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
  version = "~> 3.12.0"
  name = var.name
  cidr = var.vpc_cidr
  azs = ["${var.region}a", "${var.region}b", "${var.region}c"]
  public_subnets = var.vpc_public_subnets
  private_subnets = var.vpc_private_subnets
  database_subnets = var.vpc_database_subnets
  create_database_subnet_group = true
  create_database_subnet_route_table = true
  enable_nat_gateway = true
  single_nat_gateway = true
  one_nat_gateway_per_az = false
  enable_dns_hostnames = true
  tags = local.default_tags
}
module "endpoints_us_east_1" {
  source = "terraform-aws-modules/vpc/aws//modules/vpc-endpoints"
  version = "3.10.0"
  vpc_id = module.vpc.vpc_id
  security_group_ids = [module.security_group_allow_all.security_group_id]
  endpoints = {
    s3 = {
      service = "s3"
      service_type = "Gateway"
      route_table_ids = flatten([module.vpc.private_route_table_ids])
      tags = { Name = "s3-vpc-endpoint" }
    },
  }
  tags = local.default_tags
}
module "security_group_allow_all" {
  source = "terraform-aws-modules/security-group/aws"
  name = "${var.name}-allow-all"
  vpc_id = module.vpc.vpc_id
  ingress_cidr_blocks = [var.vpc_cidr]
  ingress_rules = ["all-all"]
  egress_cidr_blocks = [var.vpc_cidr]
  egress_rules = ["all-all"]
}
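For reference, the endpoints module above should produce a Gateway endpoint roughly equivalent to this plain resource (a sketch only; the region-based service name is an assumption):
resource "aws_vpc_endpoint" "s3" {
  vpc_id = module.vpc.vpc_id
  service_name = "com.amazonaws.${var.region}.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids = module.vpc.private_route_table_ids
  tags = { Name = "s3-vpc-endpoint" }
}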
The Lambda function (created using the Terraform module) has these settings applied to it:
vpc_subnet_ids = data.terraform_remote_state.foundation.outputs.vpc_private_subnets
vpc_security_group_ids = [data.terraform_remote_state.foundation.outputs.security_group_allow_all_id]
attach_network_policy = true
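For context, these settings sit inside a Lambda module block roughly like the following (a sketch that assumes the terraform-aws-modules/lambda/aws module; the function name, handler, runtime and source path are made up):
module "lambda" {
  source = "terraform-aws-modules/lambda/aws"
  function_name = "${var.name}-function"
  handler = "index.handler"
  runtime = "python3.9"
  source_path = "${path.module}/src"
  vpc_subnet_ids = data.terraform_remote_state.foundation.outputs.vpc_private_subnets
  vpc_security_group_ids = [data.terraform_remote_state.foundation.outputs.security_group_allow_all_id]
  attach_network_policy = true
}
Here attach_network_policy = true is what attaches the EC2 network-interface permissions the function needs in order to run inside the VPC.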

Related

GCP Terraform cloud_router module -- how can I reference the self_link of a single subnetwork in my NAT configuration?

I have been struggling with this for hours and can't make it work.
I am using external modules in my main.tf to deploy a GCP VPC with a public and private subnet, and I want to configure NAT for one of the subnets (the private one).
My VPC is configured like this:
module "vpc" {
  source = "github.com/terraform-google-modules/terraform-google-network"
  project_id = var.project_id
  network_name = var.network_name
  routing_mode = "REGIONAL"
  subnets = [
    {
      subnet_name = var.pub_subnet
      subnet_ip = var.pub_cidr
      subnet_region = var.region
    },
    {
      subnet_name = var.priv_subnet
      subnet_ip = var.priv_cidr
      subnet_region = var.region
      subnet_private_access = true
      subnet_flow_logs = true
    },
  ]
  routes = [
    {
      name = "egress-internet"
      description = "route through IGW to access internet"
      destination_range = "0.0.0.0/0"
      tags = "egress-inet"
      next_hop_internet = "true"
    },
  ]
}
After the VPC module block, I have a "cloud_router" block that is meant to configure NAT for the private subnet, but I cannot get the "name" value right. From the docs I have read, it expects the self_link of the subnetwork. How can I get this working?
module "cloud_router" {
  source = "terraform-google-modules/cloud-router/google"
  version = "~> 0.4"
  name = var.cloud_router
  project = var.project_id
  region = var.region
  network = module.vpc.network_self_link
  nats = [{
    name = var.cloud_nat
    source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS"
    subnetwork = {
      name = "${module.vpc.subnets_self_links[0]}"
      // <self_link>
      // {{API base url}}/projects/{{your project}}/{{location type}}/{{location}}/{{resource type}}/{{name}}
      source_ip_ranges_to_nat = ["ALL_IP_RANGES"]
    }
  }]
}
This was resolved by moving the NAT configuration into its own resource definition instead of nesting it inside the cloud_router module:
resource "google_compute_router_nat" "nat_manual" {
  name = var.cloud_nat
  router = module.cloud_router.router.name
  region = module.cloud_router.router.region
  nat_ip_allocate_option = "AUTO_ONLY"
  //nat_ips = google_compute_address.address.*.self_link
  source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS"
  subnetwork {
    name = "${module.vpc.subnets_self_links[0]}"
    source_ip_ranges_to_nat = ["ALL_IP_RANGES"]
  }
}
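If it is unclear which entry in subnets_self_links is the private subnet, one quick check (not part of the original answer) is to expose the whole list as an output and pick the right index from there:
output "subnet_self_links" {
  description = "Self links of all subnets created by the VPC module"
  value = module.vpc.subnets_self_links
}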

Elastic Beanstalk setup with public ALB and EC2 on private subnet failing health check

I am trying to set up a sample Elastic Beanstalk app in Terraform, with the ALB in public subnets (internet-facing) and the EC2 instances in private subnets. If I put the EC2 instances in public subnets, the Elastic Beanstalk app gets created successfully, but in private subnets I get the following error:
The EC2 instances failed to communicate with AWS Elastic Beanstalk, either because of configuration problems with the VPC or a failed EC2 instance. Check your VPC configuration and try launching the environment again.
The relevant aws_elastic_beanstalk_environment settings:
setting {
  namespace = "aws:ec2:vpc"
  name = "Subnets"
  value = join(",", module.vpc.private_subnets)
}
setting {
  namespace = "aws:ec2:vpc"
  name = "DBSubnets"
  value = join(",", module.vpc.private_subnets)
}
setting {
  namespace = "aws:ec2:vpc"
  name = "ELBSubnets"
  value = join(",", module.vpc.public_subnets)
}
setting {
  namespace = "aws:ec2:vpc"
  name = "AssociatePublicIpAddress"
  value = "false"
}
I have also set up VPC endpoints as described in https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-instance-failure/
module "endpoints" {
  source = "terraform-aws-modules/vpc/aws//modules/vpc-endpoints"
  vpc_id = module.vpc.vpc_id
  security_group_ids = [data.aws_security_group.default.id]
  endpoints = {
    dynamodb = {
      service = "dynamodb",
      service_type = "Gateway"
      route_table_ids = module.vpc.private_route_table_ids
      tags = { Name = "dynamodb-vpc-endpoint" }
    },
    s3 = {
      service = "s3",
      service_type = "Gateway"
      route_table_ids = module.vpc.private_route_table_ids
      tags = { Name = "s3-vpc-endpoint" }
    },
    elasticbeanstalk-app = {
      # interface endpoint
      service_name = aws_vpc_endpoint_service.elasticbeanstalk.service_name
      subnet_ids = module.vpc.private_subnets
      tags = { Name = "elasticbeanstalk-app-vpc-endpoint" }
    },
    elasticbeanstalk = {
      # interface endpoint
      service_name = "com.amazonaws.${var.aws_region}.elasticbeanstalk"
      subnet_ids = module.vpc.private_subnets
      private_dns_enabled = true
      tags = { Name = "elasticbeanstalk-${var.aws_region}-elasticbeanstalk-vpc-endpoint" }
    }
    elasticbeanstalk-hc = {
      # interface endpoint
      service_name = "com.amazonaws.${var.aws_region}.elasticbeanstalk-health"
      subnet_ids = module.vpc.private_subnets
      private_dns_enabled = true
      tags = { Name = "elasticbeanstalk-${var.aws_region}-elasticbeanstalk-health-vpc-endpoint" }
    },
    sqs = {
      # interface endpoint
      service_name = "com.amazonaws.${var.aws_region}.sqs"
      subnet_ids = module.vpc.private_subnets
      private_dns_enabled = true
      tags = { Name = "elasticbeanstalk-${var.aws_region}-sqs-vpc-endpoint" }
    },
    cloudformation = {
      # interface endpoint
      service_name = "com.amazonaws.${var.aws_region}.cloudformation"
      subnet_ids = module.vpc.private_subnets
      private_dns_enabled = true
      tags = { Name = "elasticbeanstalk-${var.aws_region}-cloudformation-vpc-endpoint" }
    },
    ec2 = {
      # interface endpoint
      service_name = "com.amazonaws.${var.aws_region}.ec2"
      subnet_ids = module.vpc.private_subnets
      private_dns_enabled = true
      tags = { Name = "elasticbeanstalk-${var.aws_region}-ec2-vpc-endpoint" }
    },
    ec2messages = {
      # interface endpoint
      service_name = "com.amazonaws.${var.aws_region}.ec2messages"
      subnet_ids = module.vpc.private_subnets
      private_dns_enabled = true
      tags = { Name = "elasticbeanstalk-${var.aws_region}-ec2messages-vpc-endpoint" }
    },
  }
}
I have a VPC endpoint even for the elasticbeanstalk-app. The setup is based on the question "AWS beanstalk PrivateLink not connecting".
Security group
data "aws_security_group" "default" {
  name = "default"
  vpc_id = module.vpc.vpc_id
}
data "aws_vpc_endpoint_service" "dynamodb" {
  service = "dynamodb"
  filter {
    name = "service-type"
    values = ["Gateway"]
  }
}
data "aws_vpc_endpoint_service" "s3" {
  service = "s3"
  filter {
    name = "service-type"
    values = ["Gateway"]
  }
}
In order to be able to connect to service endpoints such as com.amazonaws.[aws_region].elasticbeanstalk or com.amazonaws.[aws_region].elasticbeanstalk-health, you need a security group that allows inbound HTTP/HTTPS connections.
My assumption is that aws_security_group.default, which is referenced from a data block, is the VPC's default security group and does not allow inbound HTTP/HTTPS connectivity.
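A dedicated endpoint security group that allows HTTPS from inside the VPC might look roughly like this (a sketch; the resource name is made up, and module.vpc.vpc_cidr_block is the VPC module's CIDR output):
resource "aws_security_group" "vpc_endpoints" {
  name = "vpc-endpoints-https"
  vpc_id = module.vpc.vpc_id

  ingress {
    description = "HTTPS from inside the VPC"
    from_port = 443
    to_port = 443
    protocol = "tcp"
    cidr_blocks = [module.vpc.vpc_cidr_block]
  }
}
Passing this group's ID in security_group_ids of the endpoints module, instead of the default group, should let the instances reach the Elastic Beanstalk and health endpoints.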

When using Terraform, why does my RDS instance tear down and stand back up when I make a change to an EC2 Instance in its in/egress rules?

I have an EC2 resource (shown) with its own security group (not shown):
resource "aws_instance" "outpost" {
  ami = "ami-0469d1xxxxxxxx"
  instance_type = "t2.micro"
  key_name = module.secretsmanager.key_name
  vpc_security_group_ids = [module.ec2_security_group.security_group_id]
  subnet_id = module.vpc.public_subnets[0]
  tags = {
    Name = "${var.env}-${var.user}-ec2-outpost"
    Terraform = "true"
    Environment = var.env
    Created = "${timestamp()}"
  }
}
A security group for an RDS instance, with ingress and egress rules that reference that EC2 instance's security group:
module "db_security_group" {
  source = "terraform-aws-modules/security-group/aws"
  version = "~> 4"
  name = "${var.env}-${var.user}-${local.db_name}"
  vpc_id = module.vpc.vpc_id
  ingress_with_source_security_group_id = [
    {
      rule = "postgresql-tcp"
      source_security_group_id = module.ec2_security_group.security_group_id
    }
  ]
  egress_with_source_security_group_id = [
    {
      rule = "postgresql-tcp"
      source_security_group_id = module.ec2_security_group.security_group_id
    }
  ]
}
And the RDS instance that uses db_security_group:
module "rds" {
  source = "terraform-aws-modules/rds/aws"
  version = "~> 3.4.0"
  identifier = "${var.env}-${var.user}-${local.db_name}"
  engine = var.postgres.engine
  engine_version = var.postgres.engine_version
  family = var.postgres.family
  major_engine_version = var.postgres.major_engine_version
  instance_class = var.postgres.instance_class
  allocated_storage = var.postgres.allocated_storage
  max_allocated_storage = var.postgres.max_allocated_storage
  storage_encrypted = var.postgres.storage_encrypted
  name = var.postgres.name
  username = var.postgres.username
  password = var.rds_password
  port = var.postgres.port
  multi_az = var.postgres.multi_az
  subnet_ids = module.vpc.private_subnets
  vpc_security_group_ids = [module.db_security_group.security_group_id]
  maintenance_window = var.postgres.maintenance_window
  backup_window = var.postgres.backup_window
  enabled_cloudwatch_logs_exports = var.postgres.enabled_cloudwatch_logs_exports
  backup_retention_period = var.postgres.backup_retention_period
  skip_final_snapshot = var.postgres.skip_final_snapshot
  deletion_protection = var.postgres.deletion_protection
  performance_insights_enabled = var.postgres.performance_insights_enabled
  performance_insights_retention_period = var.postgres.performance_insights_retention_period
  create_monitoring_role = var.postgres.create_monitoring_role
  monitoring_role_name = "${var.env}-${var.user}-${var.postgres.monitoring_role_name}"
  monitoring_interval = var.postgres.monitoring_interval
  snapshot_identifier = var.postgres.snapshot_identifier
}
When I change something on the EC2 instance (like, say, iam_instance_profile), or anything about the instances referenced in the in/outbound rules for module.db_security_group.security_group_id, why does the RDS instance get destroyed and recreated by Terraform?
It seems that, in addition to the username and password behavior seen when snapshot_identifier is given (here and here), Terraform will also mark the RDS instance for deletion and recreation when either of these parameters is set. You will see this happen when re-applying the plan in question, because the initial username and/or password is never actually set by Terraform; it thinks there is a change.
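For reference, with a plain aws_db_instance (outside the wrapper module) this kind of perceived-but-never-applied change can be excluded from the diff with ignore_changes; this is only a sketch of the mechanism, not a drop-in fix for the module:
resource "aws_db_instance" "restored" {
  identifier = "example-restored"
  instance_class = "db.t3.micro"
  snapshot_identifier = var.postgres.snapshot_identifier
  skip_final_snapshot = true

  lifecycle {
    # username/password come from the snapshot, so ignore the configured values
    # rather than letting Terraform treat them as a change that forces replacement.
    ignore_changes = [username, password]
  }
}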

InvalidParameterCombination: The DB instance and EC2 security group are in different VPCs

I have two VPCs. One is the blue VPC (vpc_id = vpc-0067ff2ab41cc8a3e) and the other is the shared VPC (vpc_id = vpc-076a4c26ec2217f9d); VPC peering connects the two. I provision MariaDB in the shared VPC, but I get the error below.
Error: Error creating DB Instance: InvalidParameterCombination: The DB instance and EC2 security group are in different VPCs. The DB instance is in vpc-076a4c26ec2217f9d and the EC2 security group is in vpc-0067ff2ab41cc8a3e
status code: 400, request id: 75954d06-375c-4680-b8fe-df9a67f2574d
Below is the code. Can someone help?
module "master" {
  source = "terraform-aws-modules/rds/aws"
  version = "2.20.0"
  identifier = var.master_identifier
  engine = var.engine
  engine_version = var.engine_version
  instance_class = var.instance_class
  allocated_storage = var.allocated_storage
  storage_type = var.storage_type
  storage_encrypted = var.storage_encrypted
  name = var.mariadb_name
  username = var.mariadb_username
  password = var.mariadb_password
  port = var.mariadb_port
  vpc_security_group_ids = [
    data.terraform_remote_state.vpc-shared.outputs.default_security_group_id,
    data.terraform_remote_state.vpc-blue.outputs.default_security_group_id,
    data.terraform_remote_state.eks-blue.outputs.worker_group_general_security_group_id,
    data.terraform_remote_state.eks-blue.outputs.worker_group_gitea_security_group_id,
    data.terraform_remote_state.eks-blue.outputs.all_workers_security_group_id,
    data.terraform_remote_state.eks-blue.outputs.cluster_security_group_id
  ]
  maintenance_window = var.maintenance_window_master
  backup_window = var.backup_window_master
  multi_az = true
  tags = {
    Owner = "MariaDB"
    Environment = "blue-green"
  }
  enabled_cloudwatch_logs_exports = ["audit", "general"]
  subnet_ids = data.terraform_remote_state.vpc-shared.outputs.database_subnets
  create_db_option_group = true
  apply_immediately = true
  family = var.family
  major_engine_version = var.major_engine_version
  final_snapshot_identifier = var.final_snapshot_identifier
  deletion_protection = false
  parameters = [
    {
      name = "character_set_client"
      value = "utf8"
    },
    {
      name = "character_set_server"
      value = "utf8"
    }
  ]
  options = [
    {
      option_name = "MARIADB_AUDIT_PLUGIN"
      option_settings = [
        {
          name = "SERVER_AUDIT_EVENTS"
          value = "CONNECT"
        },
        {
          name = "SERVER_AUDIT_FILE_ROTATIONS"
          value = "7"
        },
      ]
    },
  ]
}
module "replica" {
  source = "terraform-aws-modules/rds/aws"
  version = "2.20.0"
  identifier = var.replica_identifier
  replicate_source_db = module.master.this_db_instance_id
  engine = var.engine
  engine_version = var.engine_version
  instance_class = var.instance_class
  allocated_storage = var.allocated_storage
  username = ""
  password = ""
  port = var.mariadb_port
  vpc_security_group_ids = [
    data.terraform_remote_state.vpc-shared.outputs.default_security_group_id,
    data.terraform_remote_state.vpc-blue.outputs.default_security_group_id,
    data.terraform_remote_state.eks-blue.outputs.worker_group_general_security_group_id,
    data.terraform_remote_state.eks-blue.outputs.worker_group_gitea_security_group_id,
    data.terraform_remote_state.eks-blue.outputs.all_workers_security_group_id,
    data.terraform_remote_state.eks-blue.outputs.cluster_security_group_id
  ]
  maintenance_window = var.maintenance_window_replica
  backup_window = var.backup_window_replica
  multi_az = false
  backup_retention_period = 0
  create_db_subnet_group = false
  create_db_option_group = false
  create_db_parameter_group = false
  major_engine_version = var.major_engine_version
}
Normally, vpc_security_group_ids should contain only security groups from the VPC where your RDS instance lives. In your case that is the shared VPC:
vpc_security_group_ids = [data.terraform_remote_state.vpc-shared.outputs.default_security_group_id]
With this single SG attached, you would then add rules to it that allow ingress traffic from the other security groups. So your RDS instance would have one SG with multiple ingress rules, and each ingress rule would reference another security group as the allowed source.
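As a sketch (the resource names and the vpc_id output are assumptions), that pattern is one security group created in the shared VPC plus rules that reference the other groups. Note that referencing a security group across a peering connection only works when both VPCs are in the same region; otherwise use CIDR-based rules instead:
resource "aws_security_group" "rds" {
  name = "mariadb-shared"
  vpc_id = data.terraform_remote_state.vpc-shared.outputs.vpc_id
}

resource "aws_security_group_rule" "mariadb_from_eks_workers" {
  type = "ingress"
  from_port = var.mariadb_port
  to_port = var.mariadb_port
  protocol = "tcp"
  security_group_id = aws_security_group.rds.id
  source_security_group_id = data.terraform_remote_state.eks-blue.outputs.all_workers_security_group_id
}
The RDS modules would then reference only [aws_security_group.rds.id] in vpc_security_group_ids.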

Terraform, ElasticSearch, module, cloudposse/terraform-aws-elasticsearch

I use the module https://github.com/cloudposse/terraform-aws-elasticsearch to provision Elasticsearch. I set kibana_hostname_enabled = false and domain_hostname_enabled = false. Per the documentation, dns_zone_id is not required, but it still asks for a DNS zone ID when I run terraform plan.
terraform plan
var.dns_zone_id
  Route53 DNS Zone ID to add hostname records for Elasticsearch domain and Kibana
  Enter a value:
I prefer not to use Route53. How can I avoid dns_zone_id? Below is the code:
module "elasticsearch" {
  source = "git::https://github.com/cloudposse/terraform-aws-elasticsearch.git?ref=tags/0.24.1"
  security_groups = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
  vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
  zone_awareness_enabled = var.zone_awareness_enabled
  subnet_ids = slice(data.terraform_remote_state.vpc.outputs.private_subnets, 0, 2)
  elasticsearch_version = var.elasticsearch_version
  instance_type = var.instance_type
  instance_count = var.instance_count
  encrypt_at_rest_enabled = var.encrypt_at_rest_enabled
  dedicated_master_enabled = var.dedicated_master_enabled
  create_iam_service_linked_role = var.create_iam_service_linked_role
  kibana_subdomain_name = var.kibana_subdomain_name
  ebs_volume_size = var.ebs_volume_size
  #dns_zone_id = var.dns_zone_id
  kibana_hostname_enabled = false
  domain_hostname_enabled = false
  iam_role_arns = ["*"]
  iam_actions = ["es:*"]
  enabled = var.enabled
  vpc_enabled = var.vpc_enabled
  name = var.name
  tags = var.tags
  advanced_options = {
    "rest.action.multi.allow_explicit_index" = "true"
  }
}
In your code you have the following:
#dns_zone_id = var.dns_zone_id
So the plan is prompting for var.dns_zone_id, the variable you declared yourself in your configuration; the prompt does not come from the module.
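If you want to keep your own variable declaration around, giving it a default stops the interactive prompt (a sketch; alternatively, simply delete your variable "dns_zone_id" block along with the commented-out line):
variable "dns_zone_id" {
  type = string
  description = "Route53 zone ID; unused while kibana/domain hostname records are disabled"
  default = ""
}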
By setting kibana_hostname_enabled = false, will Terraform provide a random URL/endpoint for accessing Kibana?