I use the module https://github.com/cloudposse/terraform-aws-elasticsearch to provision Elasticsearch. I set kibana_hostname_enabled = false and domain_hostname_enabled = false. Per the documentation, dns_zone_id is not required. But Terraform still asks for a DNS zone ID when I run terraform plan:
terraform plan
var.dns_zone_id
Route53 DNS Zone ID to add hostname records for Elasticsearch domain and Kibana
Enter a value:
I prefer not to use Route53. How can I avoid dns_zone_id? Below is the code:
module "elasticsearch" {
source = "git::https://github.com/cloudposse/terraform-aws-elasticsearch.git?ref=tags/0.24.1"
security_groups = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
zone_awareness_enabled = var.zone_awareness_enabled
subnet_ids = slice(data.terraform_remote_state.vpc.outputs.private_subnets, 0, 2)
elasticsearch_version = var.elasticsearch_version
instance_type = var.instance_type
instance_count = var.instance_count
encrypt_at_rest_enabled = var.encrypt_at_rest_enabled
dedicated_master_enabled = var.dedicated_master_enabled
create_iam_service_linked_role = var.create_iam_service_linked_role
kibana_subdomain_name = var.kibana_subdomain_name
ebs_volume_size = var.ebs_volume_size
#dns_zone_id = var.dns_zone_id
kibana_hostname_enabled = false
domain_hostname_enabled = false
iam_role_arns = ["*"]
iam_actions = ["es:*"]
enabled = var.enabled
vpc_enabled = var.vpc_enabled
name = var.name
tags = var.tags
advanced_options = {
"rest.action.multi.allow_explicit_index" = "true"
}
}
In your code you have the following:
#dns_zone_id = var.dns_zone_id
Commenting out the module argument is not enough: the prompt comes from your own var.dns_zone_id declaration, not from the module. Terraform asks for a value for every declared variable that has no default, whether or not it is referenced, so remove the declaration or give it a default.
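A minimal sketch of the fix, assuming the variable is declared in your root module roughly as below: either delete the declaration, or give it a default so terraform plan stops prompting.

variable "dns_zone_id" {
  type        = string
  description = "Route53 DNS Zone ID to add hostname records for Elasticsearch domain and Kibana"
  default     = "" # any default suppresses the interactive prompt; remove the variable entirely if it is unused
}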
By setting kibana_hostname_enabled = false, will Terraform provide an auto-generated URL/endpoint for accessing Kibana?
Related
We have a Lambda function in our VPC so that it can connect to our RDS instance. This Lambda also needs to connect to S3. It seems that in order to connect to S3 from a VPC, you need to set up a VPC endpoint of the Gateway type. Given the config below, we are able to connect to our database, but are still unable to get_object from S3:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 3.12.0"
name = var.name
cidr = var.vpc_cidr
azs = ["${var.region}a", "${var.region}b", "${var.region}c"]
public_subnets = var.vpc_public_subnets
private_subnets = var.vpc_private_subnets
database_subnets = var.vpc_database_subnets
create_database_subnet_group = true
create_database_subnet_route_table = true
enable_nat_gateway = true
single_nat_gateway = true
one_nat_gateway_per_az = false
enable_dns_hostnames = true
tags = local.default_tags
}
module "endpoints_us_east_1" {
source = "terraform-aws-modules/vpc/aws//modules/vpc-endpoints"
version = "3.10.0"
vpc_id = module.vpc.vpc_id
security_group_ids = [module.security_group_allow_all.security_group_id]
endpoints = {
s3 = {
service = "s3"
service_type = "Gateway"
route_table_ids = flatten([module.vpc.private_route_table_ids])
tags = { Name = "s3-vpc-endpoint" }
},
}
tags = local.default_tags
}
module "security_group_allow_all" {
source = "terraform-aws-modules/security-group/aws"
name = "${var.name}-allow-all"
vpc_id = module.vpc.vpc_id
ingress_cidr_blocks = [var.vpc_cidr]
ingress_rules = ["all-all"]
egress_cidr_blocks = [var.vpc_cidr]
egress_rules = ["all-all"]
}
The Lambda function (created using the Terraform Lambda module) has these settings applied to it:
vpc_subnet_ids = data.terraform_remote_state.foundation.outputs.vpc_private_subnets
vpc_security_group_ids = [data.terraform_remote_state.foundation.outputs.security_group_allow_all_id]
attach_network_policy = true
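One possible cause worth checking (an assumption based on the config above, not something confirmed in the thread): security groups only apply to Interface endpoints, so the security_group_ids passed to the endpoint module have no effect on the S3 Gateway endpoint, and a Gateway endpoint routes requests to Amazon's public S3 IP ranges. Because the allow-all group limits egress to var.vpc_cidr, HTTPS traffic from the Lambda to S3 would be blocked. A minimal sketch of an additional egress rule for the regional S3 prefix list (resource names are illustrative):

# Managed prefix list for S3 in the region
data "aws_prefix_list" "s3" {
  name = "com.amazonaws.${var.region}.s3"
}

# Allow HTTPS egress from the Lambda's security group to S3
resource "aws_security_group_rule" "lambda_s3_egress" {
  type              = "egress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  prefix_list_ids   = [data.aws_prefix_list.s3.id]
  security_group_id = module.security_group_allow_all.security_group_id
}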
Could you please let me know why I'm not able to create an EC2 instance using a security group module that I built?
I'm getting the following error:
Error: creating EC2 Instance: VPCIdNotSpecified: No default VPC for this user. GroupName is only supported for EC2-Classic and default VPC.
│ status code: 400, request id: e91aa79f-0d8f-44ec-84df-ba22cd3307d8
Indeed, I don't want to use a default VPC. Below is my main code:
module "vpc" {
source = "../modules/vpc/"
region = var.region
awsprofile = var.awsprofile
vpcname = var.vpcname
subnetaz1 = var.subnetaz1
subnetaz2 = var.subnetaz2
subnetaz3 = var.subnetaz3
private1_cidr = var.private1_cidr
private2_cidr = var.private2_cidr
private3_cidr = var.private3_cidr
public1_cidr = var.public1_cidr
public2_cidr = var.public2_cidr
public3_cidr = var.public3_cidr
vpc_cidr = var.vpc_cidr
}
module "security" {
source = "../modules/security/"
public_sg_name = var.public_sg_name
ingress_ports = var.ingress_ports
internet_access = var.internet_access
vpc_id = module.vpc.aws_vpc_id
}
data "aws_ami" "ubuntu" {
most_recent = true
owners = ["099720109477"] # Canonical
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
}
resource "aws_instance" "web" {
count = var.instance_count
ami = data.aws_ami.ubuntu.id
instance_type = var.instance_size
associate_public_ip_address = "true"
key_name = var.key_name
security_groups = [module.security.sg_name]
tags = {
Name = "${var.instance_name}-1"
}
}
In the "security_groups" I'm trying to get the output from security group module however unsucessfully.
output "sg_id" {
value = aws_security_group.PublicSG.id
}
output "sg_name" {
value = aws_security_group.PublicSG.name
}
Does anyone have any idea why it is not working?
Instead of security_groups you should be using vpc_security_group_ids. The security_groups argument takes security group names and is only honored for EC2-Classic or the default VPC; in any other VPC you must pass security group IDs:
resource "aws_instance" "web" {
count = var.instance_count
ami = data.aws_ami.ubuntu.id
instance_type = var.instance_size
associate_public_ip_address = "true"
key_name = var.key_name
vpc_security_group_ids = [module.security.sg_id]
tags = {
Name = "${var.instance_name}-1"
}
}
I have an EC2 resource (shown) with its own security group (not shown):
resource "aws_instance" "outpost" {
ami = "ami-0469d1xxxxxxxx"
instance_type = "t2.micro"
key_name = module.secretsmanager.key_name
vpc_security_group_ids = [module.ec2_security_group.security_group_id]
subnet_id = module.vpc.public_subnets[0]
tags = {
Name = "${var.env}-${var.user}-ec2-outpost"
Terraform = "true"
Environment = var.env
Created = "${timestamp()}"
}
}
A security group for an RDS instance that has ingress and egress rules for that ec2's security group:
module "db_security_group" {
source = "terraform-aws-modules/security-group/aws"
version = "~> 4"
name = "${var.env}-${var.user}-${local.db_name}"
vpc_id = module.vpc.vpc_id
ingress_with_source_security_group_id = [
{
rule = "postgresql-tcp"
source_security_group_id = module.ec2_security_group.security_group_id
}
]
egress_with_source_security_group_id = [
{
rule = "postgresql-tcp"
source_security_group_id = module.ec2_security_group.security_group_id
}
]
}
And here is the RDS instance that uses db_security_group:
module "rds" {
source = "terraform-aws-modules/rds/aws"
version = "~> 3.4.0"
identifier = "${var.env}-${var.user}-${local.db_name}"
engine = var.postgres.engine
engine_version = var.postgres.engine_version
family = var.postgres.family
major_engine_version = var.postgres.major_engine_version
instance_class = var.postgres.instance_class
allocated_storage = var.postgres.allocated_storage
max_allocated_storage = var.postgres.max_allocated_storage
storage_encrypted = var.postgres.storage_encrypted
name = var.postgres.name
username = var.postgres.username
password = var.rds_password
port = var.postgres.port
multi_az = var.postgres.multi_az
subnet_ids = module.vpc.private_subnets
vpc_security_group_ids = [module.db_security_group.security_group_id]
maintenance_window = var.postgres.maintenance_window
backup_window = var.postgres.backup_window
enabled_cloudwatch_logs_exports = var.postgres.enabled_cloudwatch_logs_exports
backup_retention_period = var.postgres.backup_retention_period
skip_final_snapshot = var.postgres.skip_final_snapshot
deletion_protection = var.postgres.deletion_protection
performance_insights_enabled = var.postgres.performance_insights_enabled
performance_insights_retention_period = var.postgres.performance_insights_retention_period
create_monitoring_role = var.postgres.create_monitoring_role
monitoring_role_name = "${var.env}-${var.user}-${var.postgres.monitoring_role_name}"
monitoring_interval = var.postgres.monitoring_interval
snapshot_identifier = var.postgres.snapshot_identifier
}
When I change something about the EC2 instance (like, say, iam_instance_profile), or anything about instances referenced in the inbound/outbound rules for module.db_security_group.security_group_id, why does the RDS instance get destroyed and recreated by Terraform?
It seems that, in addition to the username and password behavior seen when snapshot_identifier is given (here and here), Terraform will also mark the RDS instance for deletion and recreation when either of these parameters is set. You will see this happening when re-applying the plan in question, because the initial username and/or password is never actually set by Terraform on a snapshot restore, so Terraform thinks there is a change.
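If you manage the instance directly with aws_db_instance (rather than through the module), one common workaround is to tell Terraform to ignore the perpetual diff on those attributes. A minimal sketch, assuming the credentials really do come from the snapshot:

resource "aws_db_instance" "example" {
  # ... engine, instance_class, subnet group, and the rest of the configuration ...
  snapshot_identifier = var.snapshot_identifier

  lifecycle {
    # The username/password are taken from the snapshot, so suppress the
    # spurious change Terraform keeps detecting on them.
    ignore_changes = [snapshot_identifier, username, password]
  }
}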
I have two VPCs. One is the blue VPC (vpc_id = vpc-0067ff2ab41cc8a3e); the other is a shared VPC (vpc_id = vpc-076a4c26ec2217f9d). VPC peering connects these two VPCs. I provision MariaDB in the shared VPC, but I get the errors below.
Error: Error creating DB Instance: InvalidParameterCombination: The DB instance and EC2 security group are in different VPCs. The DB instance is in vpc-076a4c26ec2217f9d and the EC2 security group is in vpc-0067ff2ab41cc8a3e
status code: 400, request id: 75954d06-375c-4680-b8fe-df9a67f2574d
Below is the code. Can someone help?
module "master" {
source = "terraform-aws-modules/rds/aws"
version = "2.20.0"
identifier = var.master_identifier
engine = var.engine
engine_version = var.engine_version
instance_class = var.instance_class
allocated_storage = var.allocated_storage
storage_type = var.storage_type
storage_encrypted = var.storage_encrypted
name = var.mariadb_name
username = var.mariadb_username
password = var.mariadb_password
port = var.mariadb_port
vpc_security_group_ids = [data.terraform_remote_state.vpc-shared.outputs.default_security_group_id,
data.terraform_remote_state.vpc-blue.outputs.default_security_group_id,
data.terraform_remote_state.eks-blue.outputs.worker_group_general_security_group_id,
data.terraform_remote_state.eks-blue.outputs.worker_group_gitea_security_group_id,
data.terraform_remote_state.eks-blue.outputs.all_workers_security_group_id,
data.terraform_remote_state.eks-blue.outputs.cluster_security_group_id]
maintenance_window = var.maintenance_window_master
backup_window = var.backup_window_master
multi_az = true
tags = {
Owner = "MariaDB"
Environment = "blue-green"
}
enabled_cloudwatch_logs_exports = ["audit", "general"]
subnet_ids = data.terraform_remote_state.vpc-shared.outputs.database_subnets
create_db_option_group = true
apply_immediately = true
family = var.family
major_engine_version = var.major_engine_version
final_snapshot_identifier = var.final_snapshot_identifier
deletion_protection = false
parameters = [
{
name = "character_set_client"
value = "utf8"
},
{
name = "character_set_server"
value = "utf8"
}
]
options = [
{
option_name = "MARIADB_AUDIT_PLUGIN"
option_settings = [
{
name = "SERVER_AUDIT_EVENTS"
value = "CONNECT"
},
{
name = "SERVER_AUDIT_FILE_ROTATIONS"
value = "7"
},
]
},
]
}
module "replica" {
source = "terraform-aws-modules/rds/aws"
version = "2.20.0"
identifier = var.replica_identifier
replicate_source_db = module.master.this_db_instance_id
engine = var.engine
engine_version = var.engine_version
instance_class = var.instance_class
allocated_storage = var.allocated_storage
username = ""
password = ""
port = var.mariadb_port
vpc_security_group_ids = [data.terraform_remote_state.vpc-shared.outputs.default_security_group_id,
data.terraform_remote_state.vpc-blue.outputs.default_security_group_id,
data.terraform_remote_state.eks-blue.outputs.worker_group_general_security_group_id,
data.terraform_remote_state.eks-blue.outputs.worker_group_gitea_security_group_id,
data.terraform_remote_state.eks-blue.outputs.all_workers_security_group_id,
data.terraform_remote_state.eks-blue.outputs.cluster_security_group_id]
maintenance_window = var.maintenance_window_replica
backup_window = var.backup_window_replica
multi_az = false
backup_retention_period = 0
create_db_subnet_group = false
create_db_option_group = false
create_db_parameter_group = false
major_engine_version = var.major_engine_version
}
Normally, what you should do is take vpc_security_group_ids from the VPC where your RDS instance lives. In your case that is the shared VPC:
vpc_security_group_ids = [data.terraform_remote_state.vpc-shared.outputs.default_security_group_id]
With this one SG attached, you then add ingress rules to it that allow traffic from the other security groups. So basically, your RDS instance has one SG with multiple ingress rules, and each ingress rule names one of the other security groups as the allowed source.
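A minimal sketch of one such rule (the resource name is illustrative, and this assumes both VPCs are in the same region, since security group references across a VPC peering connection only work intra-region; otherwise fall back to CIDR-based rules):

resource "aws_security_group_rule" "mariadb_from_eks_workers" {
  type                     = "ingress"
  from_port                = 3306 # default MariaDB port; use var.mariadb_port if it differs
  to_port                  = 3306
  protocol                 = "tcp"
  # the SG in the shared VPC that is attached to the RDS instance
  security_group_id        = data.terraform_remote_state.vpc-shared.outputs.default_security_group_id
  # the peered-VPC SG whose members may connect
  source_security_group_id = data.terraform_remote_state.eks-blue.outputs.all_workers_security_group_id
}

Repeat this rule for each of the other source security groups in your current list.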
I got this error:
Error creating ElasticSearch domain: ValidationException: You must specify exactly two subnets because you've set zone count to two
But how do I specify exactly two subnets?
Here is the code:
main.tf:
module "elasticsearch" {
source = "git::https://github.com/cloudposse/terraform-aws-elasticsearch.git?ref=tags/0.24.1"
security_groups = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
subnet_ids = data.terraform_remote_state.vpc.outputs.private_subnets
zone_awareness_enabled = var.zone_awareness_enabled
elasticsearch_version = var.elasticsearch_version
instance_type = var.instance_type
instance_count = var.instance_count
encrypt_at_rest_enabled = var.encrypt_at_rest_enabled
dedicated_master_enabled = var.dedicated_master_enabled
create_iam_service_linked_role = var.create_iam_service_linked_role
kibana_subdomain_name = var.kibana_subdomain_name
ebs_volume_size = var.ebs_volume_size
dns_zone_id = var.dns_zone_id
kibana_hostname_enabled = var.kibana_hostname_enabled
domain_hostname_enabled = var.domain_hostname_enabled
advanced_options = {
"rest.action.multi.allow_explicit_index" = "true"
}
context = module.this.context
}
terraform.tfvars:
enabled = true
region = "us-west-2"
namespace = "dev"
stage = "pkow"
name = "pkow"
instance_type = "m5.xlarge.elasticsearch"
elasticsearch_version = "7.7"
instance_count = 2
zone_awareness_enabled = true
encrypt_at_rest_enabled = false
dedicated_master_enabled = false
elasticsearch_subdomain_name = "pkow"
kibana_subdomain_name = "pkow"
ebs_volume_size = 250
create_iam_service_linked_role = false
dns_zone_id = "Z080ZFJGLSKFJGLJDLKFGJ"
kibana_hostname_enabled = true
domain_hostname_enabled = true
vpc:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "2.63.0"
name = var.vpc_name
cidr = var.cidr_blocks_vpc
azs = data.aws_availability_zones.available.names
private_subnets = var.private_subnets
public_subnets = var.public_subnets
database_subnets = var.database_subnets
elasticache_subnets = var.elasticache_subnets
redshift_subnets = var.redshift_subnets
......
If you don't have a particular preference about which subnets are chosen, you can take the first two private ones using slice (the end index is exclusive, so this returns the elements at indexes 0 and 1):
subnet_ids = slice(data.terraform_remote_state.vpc.outputs.private_subnets, 0, 2)
As long as the two subnets are in different AZs, that should be enough.
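If you would rather guarantee the two subnets land in different AZs instead of relying on the ordering of the list, here is a sketch (the data-source lookup and local names are illustrative, and it assumes the remote state exposes the subnet IDs as shown above):

# Look up each private subnet so we can read its availability zone
data "aws_subnet" "private" {
  for_each = toset(data.terraform_remote_state.vpc.outputs.private_subnets)
  id       = each.value
}

locals {
  # group subnet IDs by availability zone, then keep one subnet per AZ
  subnet_ids_by_az = { for s in data.aws_subnet.private : s.availability_zone => s.id... }
  two_subnets      = slice([for az in sort(keys(local.subnet_ids_by_az)) : local.subnet_ids_by_az[az][0]], 0, 2)
}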