My Terraform script is as follows (everything is in a VPC):
resource "aws_security_group" "cacheSecurityGroup" {
name = "${var.devname}-${var.namespace}-${var.stage}-RedisCache-SecurityGroup"
vpc_id = var.vpc.vpc_id
tags = var.default_tags
ingress {
protocol = "tcp"
from_port = 6379
to_port = 6379
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
egress {
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
}
resource "aws_elasticache_parameter_group" "usagemonitorCacheParameterGroup" {
name = "${var.devname}${var.namespace}${var.stage}-usagemonitor-cache-parameterGroup"
family = "redis6.x"
}
resource "aws_elasticache_subnet_group" "redis_subnet_group" {
name = "${var.devname}${var.namespace}${var.stage}-usagemonitor-cache-subnetGroup"
subnet_ids = var.vpc.database_subnets
}
resource "aws_elasticache_replication_group" "replication_group_usagemonitor" {
replication_group_id = "${var.devname}${var.namespace}${var.stage}-usagemonitor-cache"
replication_group_description = "Replication group for Usagemonitor"
node_type = "cache.t2.micro"
number_cache_clusters = 2
parameter_group_name = aws_elasticache_parameter_group.usagemonitorCacheParameterGroup.name
subnet_group_name = aws_elasticache_subnet_group.redis_subnet_group.name
#security_group_names = [aws_elasticache_security_group.bar.name]
automatic_failover_enabled = true
at_rest_encryption_enabled = true
port = 6379
}
If I uncomment the line
#security_group_names = [aws_elasticache_security_group.bar.name]
I get the following error:
Error: Error creating Elasticache Replication Group: InvalidParameterCombination: Use of cache security groups is not permitted along with cache subnet group and/or security group Ids.
status code: 400, request id: 4e70e86d-b868-45b3-a1d2-88ab652dc85e
I read that we don't have to use aws_elasticache_security_group if all resources are inside a VPC. What is the correct way to assign security groups to aws_elasticache_replication_group? Using subnets? How?
I do something like this; I believe this is the best way to assign the required configuration:
resource "aws_security_group" "redis" {
name_prefix = "${var.name_prefix}-redis-"
vpc_id = var.vpc_id
lifecycle {
create_before_destroy = true
}
}
resource "aws_elasticache_replication_group" "redis" {
...
engine = "redis"
subnet_group_name = aws_elasticache_subnet_group.redis.name
security_group_ids = concat(var.security_group_ids, [aws_security_group.redis.id])
}
Your subnet group basically includes all private or public subnets from your VPC where the elasticache replication group is going to be created.
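For reference, a minimal sketch of the subnet group that the snippet above refers to as aws_elasticache_subnet_group.redis could look like this; the name prefix and the private_subnet_ids variable are assumptions on my part, not part of the original snippet:

resource "aws_elasticache_subnet_group" "redis" {
  # Subnets the replication group's nodes may be placed in
  name       = "${var.name_prefix}-redis"
  subnet_ids = var.private_subnet_ids
}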
In general, use security group ids instead of names.
I have written a Terraform module that definitely works; if you are interested, it is available with examples at https://github.com/umotif-public/terraform-aws-elasticache-redis.
Related
I'm trying to create the instances according to the following code:
resource "aws_instance" "ec2" {
ami = "ami-0fe0b2cf0e1f25c8a"
instance_type = var.ec2_instance_type
count = var.number_of_instances
tags = {
Name = "ec2_instance_${count.index}"
}
}
resource "aws_eip" "lb" {
vpc = true
count = var.number_of_instances
}
resource "aws_eip_association" "eic_assoc" {
instance_id = aws_instance.ec2[count.index].id
allocation_id = aws_eip.lb[count.index].id
count = var.number_of_instances
}
resource "aws_security_group" "allow_tls" {
name = "first-security-group-created-by-terraform"
count = var.number_of_instances
ingress {
from_port = var.security_group_port
to_port = var.security_group_port
protocol = var.security_group_protocol
cidr_blocks = ["${aws_eip.lb[count.index].public_ip}/32"]
}
}
And I got the following error:
Error: creating Security Group (first-security-group-created-by-terraform): InvalidGroup.Duplicate: The security group 'first-security-group-created-by-terraform' already exists for VPC 'vpc-0fb3457c89d86e916'
Probably because this is not the right way to create the aws_security_group when there are multiple instances of eip and ec2.
What is the right way to do that?
You are creating multiple security groups, but giving them all exactly the same name. The name needs to be unique for each security group. You could fix it like this:
resource "aws_security_group" "allow_tls" {
count = var.number_of_instances
name = "first-security-group-created-by-terraform-${count.index}"
ingress {
from_port = var.security_group_port
to_port = var.security_group_port
protocol = var.security_group_protocol
cidr_blocks = ["${aws_eip.lb[count.index].public_ip}/32"]
}
}
One way to create a single security group with multiple ingress rules is to create the group without any ingress blocks, and then create the ingress rules separately:
resource "aws_security_group" "allow_tls" {
name = "first-security-group-created-by-terraform"
}
resource "aws_security_group_rule" "allow_tls_rules" {
count = var.number_of_instances
type = "ingress"
from_port = var.security_group_port
to_port = var.security_group_port
protocol = var.security_group_protocol
cidr_blocks = ["${aws_eip.lb[count.index].public_ip}/32"]
security_group_id = aws_security_group.allow_tls.id
}
I am trying to get the following Terraform script to run:
provider "aws" {
region = "us-west-2"
}
provider "random" {}
resource "random_pet" "name" {}
resource "aws_instance" "web" {
ami = "ami-a0cfeed8"
instance_type = "t2.micro"
user_data = file("init-script.sh")
subnet_id = "subnet-0422e48590002d10d"
vpc_security_group_ids = [aws_security_group.web-sg.id]
tags = {
Name = random_pet.name.id
}
}
resource "aws_security_group" "web-sg" {
name = "${random_pet.name.id}-sg"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
output "domain-name" {
value = aws_instance.web.public_dns
}
output "application-url" {
value = "${aws_instance.web.public_dns}/index.php"
}
However, it errors with the following:
Error: Error authorizing security group egress rules: InvalidGroup.NotFound: The security group 'sg-0c181b93d98173b0f' does not exist status code: 400, request id: 6cd8681b-ee70-4ec0-8509-6239c56169a1
The SG gets created with the correct name, but it claims it does not exist.
I am unsure how to resolve this.
Typically, I worked it out straight after posting. I had neglected to add the vpc_id property to the aws_security_group, which meant it was an EC2-Classic SG, which cannot have egress rules.
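For anyone hitting the same error, a rough sketch of the fix is below; looking the VPC up from the hard-coded subnet via a data source is my own assumption, and any other way of passing the VPC ID works just as well:

data "aws_subnet" "selected" {
  id = "subnet-0422e48590002d10d"
}

resource "aws_security_group" "web-sg" {
  name   = "${random_pet.name.id}-sg"
  vpc_id = data.aws_subnet.selected.vpc_id # the missing property; without it the SG lands in EC2 Classic

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}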
I'm trying to deploy a docker image via terraform and AWS ECS using Fargate. Using terraform, I've created a VPC, two private and two public subnets, a ECR repository to store the image, an ECS cluster, ECS task, ECS service, and a load balancer with a target group.
These resources are created successfully, but the target group is constantly:
varying in the number of targets shown; for instance, refreshing will sometimes show 3 registered targets, sometimes 4
usually showing a status of "draining" with details that say "Target deregistration in progress"; sometimes one of them has a status of "initial" with details that say "Target registration in progress"
Additionally, visiting the URL of the load balancer returns a "503 Service Temporarily Unavailable"
I came across this post, which led me to this article, which helped me better understand how Fargate works, but I'm having trouble translating this into the terraform + aws method I'm trying to implement.
I suspect the issue could be in how the security groups are allowing/disallowing traffic, but I'm still a novice with DevOps, so I appreciate in advance any help offered.
Here is the terraform main.tf that I've used to create the resources. Most of it is gathered from different tutorials and adjusted with updates whenever terraform screamed at me about a deprecation.
So, which parts of the following configuration are wrong and causing the target groups to constantly be in a draining state?
Again, thanks in advance for any help or insights provided!
# ..terraform/main.tf
# START CREATE VPC
resource "aws_vpc" "vpc" {
cidr_block = "10.0.0.0/16"
instance_tenancy= "default"
enable_dns_hostnames = true
enable_dns_support = true
enable_classiclink = false
tags = {
Name = "vpc"
}
}
# END CREATE VPC
# START CREATE PRIVATE AND PUBLIC SUBNETS
resource "aws_subnet" "public_subnet_1" {
vpc_id = aws_vpc.vpc.id
cidr_block = "10.0.1.0/24"
map_public_ip_on_launch = true
availability_zone = "us-east-1a"
tags = {
Name = "public-subnet-1"
}
}
resource "aws_subnet" "public_subnet_2" {
vpc_id = aws_vpc.vpc.id
cidr_block = "10.0.2.0/24"
map_public_ip_on_launch = true
availability_zone = "us-east-1b"
tags = {
Name = "public-subnet-2"
}
}
resource "aws_subnet" "private_subnet_1" {
vpc_id = aws_vpc.vpc.id
cidr_block = "10.0.3.0/24"
map_public_ip_on_launch = false
availability_zone = "us-east-1a"
tags = {
Name = "private-subnet-1"
}
}
resource "aws_subnet" "private_subnet_2" {
vpc_id = aws_vpc.vpc.id
cidr_block = "10.0.4.0/24"
map_public_ip_on_launch = false
availability_zone = "us-east-1b"
tags = {
Name = "private-subnet-1"
}
}
# END CREATE PRIVATE AND PUBLIC SUBNETS
# START CREATE GATEWAY
resource "aws_internet_gateway" "vpc_gateway" {
vpc_id = aws_vpc.vpc.id
tags = {
Name = "vpc-gateway"
}
}
# END CREATE GATEWAY
# START CREATE ROUTE TABLE AND ASSOCIATIONS
resource "aws_route_table" "public_route_table" {
vpc_id = aws_vpc.vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.vpc_gateway.id
}
tags = {
Name = "public-route-table"
}
}
resource "aws_route_table_association" "route_table_association_1" {
subnet_id = aws_subnet.public_subnet_1.id
route_table_id = aws_route_table.public_route_table.id
}
resource "aws_route_table_association" "route_table_association_2" {
subnet_id = aws_subnet.public_subnet_2.id
route_table_id = aws_route_table.public_route_table.id
}
# END CREATE ROUTE TABLE AND ASSOCIATIONS
# START CREATE ECR REPOSITORY
resource "aws_ecr_repository" "api_ecr_repository" {
name = "api-ecr-repository"
}
# END CREATE ECR REPOSITORY
# START CREATE ECS CLUSTER
resource "aws_ecs_cluster" "api_cluster" {
name = "api-cluster"
}
# END CREATE ECS CLUSTER
# START CREATE ECS TASK AND DESIGNATE 'FARGATE'
resource "aws_ecs_task_definition" "api_cluster_task" {
family = "api-cluster-task"
container_definitions = <<DEFINITION
[
{
"name": "api-cluster-task",
"image": "${aws_ecr_repository.api_ecr_repository.repository_url}",
"essential": true,
"portMappings": [
{
"containerPort": 4000,
"hostPort": 4000
}
],
"memory": 512,
"cpu": 256
}
]
DEFINITION
requires_compatibilities = ["FARGATE"]
network_mode = "awsvpc"
memory = 512
cpu = 256
execution_role_arn = aws_iam_role.ecs_task_execution_role.arn
}
# END CREATE ECS TASK AND DESIGNATE 'FARGATE'
# START CREATE TASK POLICIES
data "aws_iam_policy_document" "assume_role_policy" {
version = "2012-10-17"
statement {
sid = ""
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ecs-tasks.amazonaws.com"]
}
}
}
resource "aws_iam_role" "ecs_task_execution_role" {
name = "ecs-take-execution-role"
assume_role_policy = data.aws_iam_policy_document.assume_role_policy.json
}
resource "aws_iam_role_policy_attachment" "ecs_task_execution_role_attachment" {
role = aws_iam_role.ecs_task_execution_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}
# END CREATE TASK POLICIES
# START CREATE ECS SERVICE
resource "aws_ecs_service" "api_cluster_service" {
name = "api-cluster-service"
cluster = aws_ecs_cluster.api_cluster.id
task_definition = aws_ecs_task_definition.api_cluster_task.arn
launch_type = "FARGATE"
desired_count = 1
load_balancer {
target_group_arn = aws_lb_target_group.api_lb_target_group.arn
container_name = aws_ecs_task_definition.api_cluster_task.family
container_port = 4000
}
network_configuration {
security_groups = [aws_security_group.ecs_tasks.id]
subnets = [
aws_subnet.public_subnet_1.id,
aws_subnet.public_subnet_2.id
]
assign_public_ip = true
}
depends_on = [aws_lb_listener.api_lb_listener, aws_iam_role_policy_attachment.ecs_task_execution_role_attachment]
}
resource "aws_security_group" "api_cluster_security_group" {
vpc_id = aws_vpc.vpc.id
ingress {
from_port = 0
to_port = 0
protocol = -1
security_groups = [aws_security_group.load_balancer_security_group.id]
}
egress {
from_port = 0
to_port = 0
protocol = -1
cidr_blocks = ["0.0.0.0/0"]
}
}
# END CREATE ECS SERVICE
# CREATE LOAD BALANCER
resource "aws_alb" "api_load_balancer" {
name = "api-load-balancer"
load_balancer_type = "application"
subnets = [
aws_subnet.public_subnet_1.id,
aws_subnet.public_subnet_2.id
]
security_groups = [aws_security_group.load_balancer_security_group.id]
}
resource "aws_security_group" "load_balancer_security_group" {
name = "allow-load-balancer-traffic"
vpc_id = aws_vpc.vpc.id
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
# END CREATE LOAD BALANCER
# CREATE ECS TASK SECURITY GROUP
resource "aws_security_group" "ecs_tasks" {
name = "ecs-tasks-sg"
description = "allow inbound access from the ALB only"
vpc_id = aws_vpc.vpc.id
ingress {
protocol = "tcp"
from_port = 4000
to_port = 4000
cidr_blocks = ["0.0.0.0/0"]
security_groups = [aws_security_group.load_balancer_security_group.id]
}
egress {
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
}
}
# END ECS TASK SECURITY GROUP
# START CREATE LOAD BALANCER TARGET GROUP
resource "aws_lb_target_group" "api_lb_target_group" {
name = "api-lb-target-group"
vpc_id = aws_vpc.vpc.id
port = 80
protocol = "HTTP"
target_type = "ip"
health_check {
healthy_threshold= "3"
interval = "90"
protocol = "HTTP"
matcher = "200-299"
timeout = "20"
path = "/"
unhealthy_threshold = "2"
}
}
# END CREATE LOAD BALANCER TARGET GROUP
# START CREATE LOAD BALANCER LISTENER
resource "aws_lb_listener" "api_lb_listener" {
load_balancer_arn = aws_alb.api_load_balancer.arn
port = 80
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.api_lb_target_group.arn
}
}
# END CREATE LOAD BALANCER LISTENER
You are not using api_cluster_security_group at all in your setup, so it's not clear what its purpose is. Also, in your aws_security_group.ecs_tasks you are allowing only port 4000. However, due to dynamic port mapping between the ALB and ECS services, you should allow all ports, not only 4000.
There could be other issues, which are not apparent yet.
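If you want to follow that suggestion, a sketch of an adjusted task security group might look like the following; the full TCP port range and dropping the open CIDR block are assumptions on my part, not something taken from your original config:

resource "aws_security_group" "ecs_tasks" {
  name        = "ecs-tasks-sg"
  description = "allow inbound access from the ALB only"
  vpc_id      = aws_vpc.vpc.id

  ingress {
    protocol        = "tcp"
    from_port       = 0
    to_port         = 65535
    security_groups = [aws_security_group.load_balancer_security_group.id] # only the ALB, no 0.0.0.0/0
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}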
I've been trying to create an RDS instance using Terraform. The problem I've been struggling with is that every time I create a new instance, it's not reachable.
I'm creating it in a subnet group containing public and private subnets, the security group has a rule allowing access from my IP, and there is an internet gateway in this VPC.
The strangest thing is that to fix it, I just need to change the instance class to any other one using the AWS console, e.g. from db.t2.small to db.t2.micro, and it suddenly starts working.
Here is a fragment of my code:
resource "aws_db_subnet_group" "dbSubnetGroup" {
name = "${var.prefix}-db-subnet-group"
subnet_ids = concat(aws_subnet.publicSubnet.*.id, aws_subnet.privateSubnet.*.id)
tags = var.defaultTags
}
resource "aws_security_group" "rdsSecurityGroup" {
name = "${var.prefix}-rds-sg"
vpc_id = aws_vpc.vpc.id
ingress {
from_port = 1433
to_port = 1433
protocol = "tcp"
security_groups = [aws_eks_cluster.eksCluster.vpc_config[0].cluster_security_group_id]
}
ingress {
from_port = 1433
to_port = 1433
protocol = "tcp"
cidr_blocks = [var.myIP]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
tags = var.defaultTags
}
resource "random_password" "rdsPassword" {
length = 32
special = true
override_special = "!#$%&*()-_=+[]{}<>:?"
}
resource "aws_db_instance" "dbInstance" {
allocated_storage = 20
storage_type = "gp2"
engine = var.dbInstanceEngine
license_model = "license-included"
instance_class = var.dbInstanceType
identifier = "${var.prefix}-db-instance"
username = var.dbUserName
password = random_password.rdsPassword.result
tags = var.defaultTags
db_subnet_group_name = aws_db_subnet_group.dbSubnetGroup.name
vpc_security_group_ids = [aws_security_group.rdsSecurityGroup.id]
skip_final_snapshot = true
allow_major_version_upgrade = true
copy_tags_to_snapshot = true
performance_insights_enabled = true
max_allocated_storage = 1000
enabled_cloudwatch_logs_exports = ["error"]
publicly_accessible = true
}
Am I doing something wrong, or could it be a bug in the AWS provider?
If you want RDS to be connectable, the DB subnet group must contain public subnets only.
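Applied to the configuration above, that would mean building the subnet group from the public subnets only, roughly like this (assuming aws_subnet.publicSubnet is the same resource referenced in your dbSubnetGroup):

resource "aws_db_subnet_group" "dbSubnetGroup" {
  name       = "${var.prefix}-db-subnet-group"
  subnet_ids = aws_subnet.publicSubnet.*.id # public subnets only, as suggested above
  tags       = var.defaultTags
}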
So I created the autoscaling group using the following Terraform files:
Autoscaling group:
resource "aws_autoscaling_group" "orbix-mvp" {
desired_capacity = 1
launch_configuration = "${aws_launch_configuration.orbix-mvp.id}"
max_size = 1
min_size = 1
name = "${var.project}-${var.stage}"
vpc_zone_identifier = ["${aws_subnet.orbix-mvp.*.id}"]
tag {
key = "Name"
value = "${var.project}-${var.stage}"
propagate_at_launch = true
}
tag {
key = "kubernetes.io/cluster/${var.project}-${var.stage}"
value = "owned"
propagate_at_launch = true
}
}
Launch configuration:
# This data source is included for ease of sample architecture deployment
# and can be swapped out as necessary.
data "aws_region" "current" {}
# EKS currently documents this required userdata for EKS worker nodes to
# properly configure Kubernetes applications on the EC2 instance.
# We utilize a Terraform local here to simplify Base64 encoding this
# information into the AutoScaling Launch Configuration.
# More information: https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html
locals {
  orbix-mvp-node-userdata = <<USERDATA
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh --apiserver-endpoint '${aws_eks_cluster.orbix-mvp.endpoint}' --b64-cluster-ca '${aws_eks_cluster.orbix-mvp.certificate_authority.0.data}' '${var.project}-${var.stage}'
USERDATA
}

resource "aws_launch_configuration" "orbix-mvp" {
  associate_public_ip_address = true
  iam_instance_profile        = "${aws_iam_instance_profile.orbix-mvp-node.name}"
  image_id                    = "${data.aws_ami.eks-worker.id}"
  instance_type               = "c5.large"
  name_prefix                 = "${var.project}-${var.stage}"
  security_groups             = ["${aws_security_group.orbix-mvp-node.id}"]
  user_data_base64            = "${base64encode(local.orbix-mvp-node-userdata)}"
  key_name                    = "devops"

  lifecycle {
    create_before_destroy = true
  }
}
So I've added the already-generated SSH key under the name devops to the launch configuration. I can SSH into manually created EC2 instances with that key; however, I can't seem to SSH into instances created by this config.
Any help is appreciated, thanks :)
EDIT:
Node Security Group Terraform:
resource "aws_security_group" "orbix-mvp-node" {
name = "${var.project}-${var.stage}-node"
description = "Security group for all nodes in the ${var.project}-${var.stage} cluster"
vpc_id = "${aws_vpc.orbix-mvp.id}"
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = "${
map(
"Name", "${var.project}-${var.stage}-node",
"kubernetes.io/cluster/${var.project}-${var.stage}", "owned",
)
}"
}
resource "aws_security_group_rule" "demo-node-ingress-self" {
description = "Allow node to communicate with each other"
from_port = 0
protocol = "-1"
security_group_id = "${aws_security_group.orbix-mvp-node.id}"
source_security_group_id = "${aws_security_group.orbix-mvp-node.id}"
to_port = 65535
type = "ingress"
}
resource "aws_security_group_rule" "demo-node-ingress-cluster" {
description = "Allow worker Kubelets and pods to receive communication from the cluster control plane"
from_port = 1025
protocol = "tcp"
security_group_id = "${aws_security_group.orbix-mvp-node.id}"
source_security_group_id = "${aws_security_group.orbix-mvp-cluster.id}"
to_port = 65535
type = "ingress"
}
resource "aws_security_group_rule" "demo-node-port-22" {
description = "Add SSH access"
from_port = 22
protocol = "tcp"
security_group_id = "${aws_security_group.orbix-mvp-node.id}"
cidr_blocks = ["0.0.0.0/0"]
to_port = 22
type = "ingress"
}