I am trying to get the following Terraform script to run:
provider "aws" {
region = "us-west-2"
}
provider "random" {}
resource "random_pet" "name" {}
resource "aws_instance" "web" {
ami = "ami-a0cfeed8"
instance_type = "t2.micro"
user_data = file("init-script.sh")
subnet_id = "subnet-0422e48590002d10d"
vpc_security_group_ids = [aws_security_group.web-sg.id]
tags = {
Name = random_pet.name.id
}
}
resource "aws_security_group" "web-sg" {
name = "${random_pet.name.id}-sg"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
output "domain-name" {
value = aws_instance.web.public_dns
}
output "application-url" {
value = "${aws_instance.web.public_dns}/index.php"
}
However, it errors with the following:
Error: Error authorizing security group egress rules: InvalidGroup.NotFound: The security group 'sg-0c181b93d98173b0f' does not exist
status code: 400, request id: 6cd8681b-ee70-4ec0-8509-6239c56169a1
The security group gets created with the correct name, but the error claims it does not exist.
I am unsure how to resolve this.
Typically, I worked it out straight after posting: I had neglected to add the vpc_id argument to the aws_security_group, which meant it was created as an EC2-Classic security group, and EC2-Classic security groups cannot have egress rules.
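For anyone hitting the same error, here is a minimal sketch of the corrected security group. It assumes the VPC ID is looked up from the existing subnet via the aws_subnet data source (data.aws_subnet.selected is my name, not from the original post); a hard-coded vpc_id or a variable works just as well:

# Look up the subnet used by the instance so its VPC ID can be reused.
data "aws_subnet" "selected" {
  id = "subnet-0422e48590002d10d"
}

resource "aws_security_group" "web-sg" {
  name   = "${random_pet.name.id}-sg"
  vpc_id = data.aws_subnet.selected.vpc_id # the argument that was missing

  # ingress/egress blocks unchanged from the original config above
}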
I'm trying to create the instances with the following code:
resource "aws_instance" "ec2" {
ami = "ami-0fe0b2cf0e1f25c8a"
instance_type = var.ec2_instance_type
count = var.number_of_instances
tags = {
Name = "ec2_instance_${count.index}"
}
}
resource "aws_eip" "lb" {
vpc = true
count = var.number_of_instances
}
resource "aws_eip_association" "eic_assoc" {
instance_id = aws_instance.ec2[count.index].id
allocation_id = aws_eip.lb[count.index].id
count = var.number_of_instances
}
resource "aws_security_group" "allow_tls" {
name = "first-security-group-created-by-terraform"
count = var.number_of_instances
ingress {
from_port = var.security_group_port
to_port = var.security_group_port
protocol = var.security_group_protocol
cidr_blocks = ["${aws_eip.lb[count.index].public_ip}/32"]
}
}
And got the following error:
Error: creating Security Group (first-security-group-created-by-terraform): InvalidGroup.Duplicate: The security group 'first-security-group-created-by-terraform' already exists for VPC 'vpc-0fb3457c89d86e916'
This is probably because it is not the right way to create the aws_security_group when there are multiple EIP and EC2 instances.
What is the right way to do that?
You are creating multiple security groups, but giving them all exactly the same name. The name needs to be unique for each security group. You could fix it like this:
resource "aws_security_group" "allow_tls" {
count = var.number_of_instances
name = "first-security-group-created-by-terraform-${count.index}"
ingress {
from_port = var.security_group_port
to_port = var.security_group_port
protocol = var.security_group_protocol
cidr_blocks = ["${aws_eip.lb[count.index].public_ip}/32"]
}
}
One way to create a single security group with multiple ingress rules is to create the group without any ingress blocks, and then create the ingress rules separately:
resource "aws_security_group" "allow_tls" {
name = "first-security-group-created-by-terraform"
}
resource "aws_security_group_rule" "allow_tls_rules" {
count = var.number_of_instances
type = "ingress"
from_port = var.security_group_port
to_port = var.security_group_port
protocol = var.security_group_protocol
cidr_blocks = ["${aws_eip.lb[count.index].public_ip}/32"]
security_group_id = aws_security_group.allow_tls.id
}
I am trying to deploy EC2 instances using Terraform and I can see the following error:
Error: Error launching source instance: InvalidGroup.NotFound: The security group 'prod-web-servers-sg' does not exist in VPC 'vpc-db3a3cb3'
Here is the Terraform template I'm using:
resource "aws_default_vpc" "default" {
}
resource "aws_security_group" "prod-web-servers-sg" {
name = "prod-web-servers-sg"
description = "security group for production grade web servers"
vpc_id = "${aws_default_vpc.default.id}"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
#Subnet
resource "aws_subnet" "private_subnet" {
vpc_id = "${aws_default_vpc.default.id}"
cidr_block = "172.31.0.0/24"
availability_zone = "ap-south-1a"
}
resource "aws_instance" "prod-web-server" {
ami = "ami-04b1ddd35fd71475a"
count = 2
key_name = "test_key"
instance_type = "r5.large"
security_groups = ["prod-web-servers-sg"]
subnet_id = "${aws_subnet.private_subnet.id}"
}
You have a race condition there because Terraform doesn't know to wait until the security group is created before creating the instance.
To fix this, you should interpolate aws_security_group.prod-web-servers-sg.id into the aws_instance.prod-web-server resource so that Terraform can work out the dependency chain between the resources. You should also use vpc_security_group_ids instead of security_groups, as mentioned in the aws_instance resource documentation:
security_groups - (Optional, EC2-Classic and default VPC only) A list of security group names (EC2-Classic) or IDs (default VPC) to associate with.
NOTE:
If you are creating Instances in a VPC, use vpc_security_group_ids instead.
So you should have something like the following:
resource "aws_default_vpc" "default" {}
resource "aws_security_group" "prod-web-servers-sg" {
name = "prod-web-servers-sg"
description = "security group for production grade web servers"
vpc_id = aws_default_vpc.default.id
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
#Subnet
resource "aws_subnet" "private_subnet" {
vpc_id = aws_default_vpc.default.id
cidr_block = "172.31.0.0/24"
availability_zone = "ap-south-1a"
}
resource "aws_instance" "prod-web-server" {
ami = "ami-04b1ddd35fd71475a"
count = 2
key_name = "test_key"
instance_type = "r5.large"
vpc_security_group_ids = [aws_security_group.prod-web-servers-sg.id]
subnet_id = aws_subnet.private_subnet.id
}
My Terraform script is as follows; everything is in a VPC:
resource "aws_security_group" "cacheSecurityGroup" {
name = "${var.devname}-${var.namespace}-${var.stage}-RedisCache-SecurityGroup"
vpc_id = var.vpc.vpc_id
tags = var.default_tags
ingress {
protocol = "tcp"
from_port = 6379
to_port = 6379
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
egress {
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
}
resource "aws_elasticache_parameter_group" "usagemonitorCacheParameterGroup" {
name = "${var.devname}${var.namespace}${var.stage}-usagemonitor-cache-parameterGroup"
family = "redis6.x"
}
resource "aws_elasticache_subnet_group" "redis_subnet_group" {
name = "${var.devname}${var.namespace}${var.stage}-usagemonitor-cache-subnetGroup"
subnet_ids = var.vpc.database_subnets
}
resource "aws_elasticache_replication_group" "replication_group_usagemonitor" {
replication_group_id = "${var.devname}${var.namespace}${var.stage}-usagemonitor-cache"
replication_group_description = "Replication group for Usagemonitor"
node_type = "cache.t2.micro"
number_cache_clusters = 2
parameter_group_name = aws_elasticache_parameter_group.usagemonitorCacheParameterGroup.name
subnet_group_name = aws_elasticache_subnet_group.redis_subnet_group.name
#security_group_names = [aws_elasticache_security_group.bar.name]
automatic_failover_enabled = true
at_rest_encryption_enabled = true
port = 6379
}
If I uncomment the line

security_group_names = [aws_elasticache_security_group.bar.name]

I get the following error:
Error: Error creating Elasticache Replication Group: InvalidParameterCombination: Use of cache security groups is not permitted along with cache subnet group and/or security group Ids.
status code: 400, request id: 4e70e86d-b868-45b3-a1d2-88ab652dc85e
I read that we don't have to use aws_elasticache_security_group if all resources are inside a VPC. What is the correct way to assign security groups to aws_elasticache_replication_group? Using subnets? How?
I do something like this; I believe this is the best way to assign the required configuration:
resource "aws_security_group" "redis" {
name_prefix = "${var.name_prefix}-redis-"
vpc_id = var.vpc_id
lifecycle {
create_before_destroy = true
}
}
resource "aws_elasticache_replication_group" "redis" {
...
engine = "redis"
subnet_group_name = aws_elasticache_subnet_group.redis.name
security_group_ids = concat(var.security_group_ids, [aws_security_group.redis.id])
}
Your subnet group basically includes all of the private or public subnets from your VPC in which the ElastiCache replication group is going to be created.
In general, use security group ids instead of names.
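For completeness, here is a minimal sketch of the subnet group that the replication group above references; var.private_subnet_ids is an assumed variable (not from the original answer) holding the private subnet IDs of the same VPC:

# Assumed: var.private_subnet_ids lists the private subnets of the VPC above.
resource "aws_elasticache_subnet_group" "redis" {
  name       = "${var.name_prefix}-redis"
  subnet_ids = var.private_subnet_ids
}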
I have written a Terraform module that definitely works; if you are interested, it is available with examples at https://github.com/umotif-public/terraform-aws-elasticache-redis.
So I created the autoscaling group using the following Terraform files:
Autoscaling group:
resource "aws_autoscaling_group" "orbix-mvp" {
desired_capacity = 1
launch_configuration = "${aws_launch_configuration.orbix-mvp.id}"
max_size = 1
min_size = 1
name = "${var.project}-${var.stage}"
vpc_zone_identifier = ["${aws_subnet.orbix-mvp.*.id}"]
tag {
key = "Name"
value = "${var.project}-${var.stage}"
propagate_at_launch = true
}
tag {
key = "kubernetes.io/cluster/${var.project}-${var.stage}"
value = "owned"
propagate_at_launch = true
}
}
Launch configuration:
# This data source is included for ease of sample architecture deployment
# and can be swapped out as necessary.
data "aws_region" "current" {}

# EKS currently documents this required userdata for EKS worker nodes to
# properly configure Kubernetes applications on the EC2 instance.
# We utilize a Terraform local here to simplify Base64 encoding this
# information into the AutoScaling Launch Configuration.
# More information: https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html
locals {
  orbix-mvp-node-userdata = <<USERDATA
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh --apiserver-endpoint '${aws_eks_cluster.orbix-mvp.endpoint}' --b64-cluster-ca '${aws_eks_cluster.orbix-mvp.certificate_authority.0.data}' '${var.project}-${var.stage}'
USERDATA
}

resource "aws_launch_configuration" "orbix-mvp" {
  associate_public_ip_address = true
  iam_instance_profile        = "${aws_iam_instance_profile.orbix-mvp-node.name}"
  image_id                    = "${data.aws_ami.eks-worker.id}"
  instance_type               = "c5.large"
  name_prefix                 = "${var.project}-${var.stage}"
  security_groups             = ["${aws_security_group.orbix-mvp-node.id}"]
  user_data_base64            = "${base64encode(local.orbix-mvp-node-userdata)}"
  key_name                    = "devops"

  lifecycle {
    create_before_destroy = true
  }
}
So I've added the already generated SSH key, under the name devops, to the launch configuration. I can SSH into manually created EC2 instances with that key; however, I can't seem to SSH into instances created by this config.
Any help is appreciated, thanks :)
EDIT:
Node Security Group Terraform:
resource "aws_security_group" "orbix-mvp-node" {
name = "${var.project}-${var.stage}-node"
description = "Security group for all nodes in the ${var.project}-${var.stage} cluster"
vpc_id = "${aws_vpc.orbix-mvp.id}"
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = "${
map(
"Name", "${var.project}-${var.stage}-node",
"kubernetes.io/cluster/${var.project}-${var.stage}", "owned",
)
}"
}
resource "aws_security_group_rule" "demo-node-ingress-self" {
description = "Allow node to communicate with each other"
from_port = 0
protocol = "-1"
security_group_id = "${aws_security_group.orbix-mvp-node.id}"
source_security_group_id = "${aws_security_group.orbix-mvp-node.id}"
to_port = 65535
type = "ingress"
}
resource "aws_security_group_rule" "demo-node-ingress-cluster" {
description = "Allow worker Kubelets and pods to receive communication from the cluster control plane"
from_port = 1025
protocol = "tcp"
security_group_id = "${aws_security_group.orbix-mvp-node.id}"
source_security_group_id = "${aws_security_group.orbix-mvp-cluster.id}"
to_port = 65535
type = "ingress"
}
resource "aws_security_group_rule" "demo-node-port-22" {
description = "Add SSH access"
from_port = 22
protocol = "tcp"
security_group_id = "${aws_security_group.orbix-mvp-node.id}"
cidr_blocks = ["0.0.0.0/0"]
to_port = 22
type = "ingress"
}
Whenever I add key_name to my amazon resource, I can never actually connect to the resulting instance:
provider "aws" {
"region" = "us-east-1"
"access_key" = "**"
"secret_key" = "****"
}
resource "aws_instance" "api_server" {
ami = "ami-013f1e6b"
instance_type = "t2.micro"
"key_name" = "po"
tags {
Name = "API_Server"
}
}
output "API IP" {
value = "${aws_instance.api_server.public_ip}"
}
When I do
ssh -i ~/Downloads/po.pem bitnami@IP
I just get a blank line in my terminal, as if I were putting in a wrong IP. However, checking the Amazon console, I can see the instance is running. I'm not getting any errors from Terraform either.
By default, no network access to the instance is allowed. You need to explicitly allow network access by attaching a security group that permits it.
provider "aws" {
"region" = "us-east-1"
"access_key" = "**"
"secret_key" = "****"
}
resource "aws_instance" "api_server" {
ami = "ami-013f1e6b"
instance_type = "t2.micro"
key_name = "po"
security_groups = ["${aws_security_group.api_server.id}"]
tags {
Name = "API_Server"
}
}
resource "aws_security_group" "api_server" {
name = "api_server"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["XXX.XXX.XXX.XXX/32"] // Allow SSH from your global IP
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
output "API IP" {
value = "${aws_instance.api_server.public_ip}"
}