AWS SG self-reference across different environments

I want to make this more modular for different environments (Dev, UAT, PROD). In that case, should I reference the security group by its 'name' ("App${local.environment}sec_group") or just by sec_group? Will Terraform be able to resolve source_security_group_id here?
main.tf file:
resource "aws_security_group" "sec_group" {
name = "App${local.environment}sec_group"
vpc_id = "${local.vpc_id}"
}
resource "aws_security_group_rule" "sec_group_allow_1865" {
type = "ingress"
from_port = 1865
to_port = 1865
protocol = "tcp"
security_group_id = "${aws_security_group.sec_group.id}"
source_security_group_id = "${aws_security_group.App${local.environment}sec_group.id}" '''
}
variable.tf file:
locals {
environment = "${lookup(var.ws_to_environment_map, terraform.workspace, var.default_environment)}"
vpc_id = "${lookup(var.ws_to_vpc_map, terraform.workspace, var.default_environment)}"
}
variable "default_environment" {
default = "dev"
}
variable "ws_to_vpc_map" {
type = "map"
default = {
dev = "vpc-03a05d67831e1ff035"
uat = ""
prod = ""
}
}
variable "ws_to_environment_map" {
type = "map"
default = {
dev = "DEV"
uat = "UAT"
prod = "PROD"
}
}

Here you could use
source_security_group_id = "${aws_security_group.sec_group.id}"
instead of
source_security_group_id = "${aws_security_group.App${local.environment}sec_group.id}"
aws_security_group.sec_group refers to the security group resource by its Terraform resource label "sec_group" (resource "aws_security_group" "sec_group"), and aws_security_group.sec_group.id returns its ID. References always use this label, not the name argument, so the same expression resolves correctly in every workspace even though the name changes per environment.
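For completeness, a minimal sketch of the full rule with that reference in place (same resource labels as above); since the rule points the group back at itself, self = true is an equivalent alternative to source_security_group_id:

resource "aws_security_group_rule" "sec_group_allow_1865" {
  type              = "ingress"
  from_port         = 1865
  to_port           = 1865
  protocol          = "tcp"
  security_group_id = "${aws_security_group.sec_group.id}"

  # Reference the resource by its Terraform label, not by its name argument
  source_security_group_id = "${aws_security_group.sec_group.id}"

  # Alternatively, drop source_security_group_id and use:
  # self = true
}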

Related

vpc_id argument is not expected here

I’m new to using modules in Terraform and I’m getting an error from the main.tf in my root module saying “an argument vpc_id is not expected here”. The error occurs on the “sg” module block at the bottom.
Here is the main.tf in my root module:
provider "aws" {
access_key = var.my_access_key
secret_key = var.my_secret_key
region = var.region
}
provider "random" {
}
resource "random_id" "prefix" {
byte_length = 8
}
module "ec2" {
source = "./modules/ec2"
infra_env = var.infra_env
public_ssh_key = var.public_ssh_key
allow_rdp = module.sg.allow_rdp.id
allow_winrm = module.sg.allow_winrm.id
}
module "iam" {
source = "./modules/iam"
infra_env = var.infra_env
}
module "s3" {
source = "./modules/s3"
infra_env = var.infra_env
}
module "sg" {
source = "./modules/sg"
infra_env = var.infra_env
vpc_id = module.vpc.vpc_1.id
}
module "vpc" {
source = "./modules/vpc"
infra_env = var.infra_env
}
Here is the main.tf of my “SG” module. I thought I only had to put “module.vpc.vpc_1.id” to get the input from that module:
terraform {
required_version = ">= 1.1.5"
}
module "vpc" {
source = "../vpc"
infra_env = var.infra_env
}
# Allow WinRM to set the administrator password
resource "aws_security_group" "allow_winrm" {
name = "allow_winrm"
description = "Allow access the instances via WinRM over HTTP and HTTPS"
vpc_id = module.vpc.vpc_1.id
ingress {
description = "Access the instances via WinRM over HTTP and HTTPS"
from_port = 5985
to_port = 5986
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "${var.infra_env}-allow-winrm"
}
}
# Allow RDP connectivity to EC2 instances
resource "aws_security_group" "allow_rdp" {
name = "allow_rdp"
description = "Allow access the instances via RDP"
vpc_id = module.vpc.vpc_1.id
ingress {
description = "Allow access the instances via RDP"
from_port = 3389
to_port = 3389
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "${var.infra_env}-allow-rdp"
}
}
Here are the outputs defined in my VPC module:
output "subnet_1" {
value = aws_subnet.subnet_1
}
output "vpc_1" {
value = aws_vpc.vpc_1.id
}
output "gw_1" {
value = aws_internet_gateway.gw_1
}
There are a couple of problems with your code. In the outputs, you have written this:
output "vpc_1" {
value = aws_vpc.vpc_1.id
}
This means that the output value will already provide the VPC ID you want. It is enough to reference it only by name: module.vpc.vpc_1. More information about referencing module outputs is available in [1].
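For example, given that output, the root module can pass the value straight through (a minimal sketch):

vpc_id = module.vpc.vpc_1   # already the VPC ID string
# module.vpc.vpc_1.id would fail here, because the output value is a plain string, not the resource object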
The second issue is that you are trying to reference an output of one child module (VPC) from inside another child module (SG):
vpc_id = module.vpc.vpc_1.id
Instead, it is enough to declare an input variable in the child module (i.e., SG). For example, you could define a variable like this:
variable "vpc_id" {
description = "VPC ID."
type = string
}
Then, you would just write the above as:
vpc_id = var.vpc_id
When calling the SG module, you would use the following:
module "sg" {
source = "./modules/sg"
infra_env = var.infra_env
vpc_id = module.vpc.vpc_1
}
Note that the same change with var.vpc_id should be done everywhere in the SG module where module.vpc.vpc_1.id was referenced.
[1] https://www.terraform.io/language/values/outputs#accessing-child-module-outputs
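Putting the pieces together, the top of modules/sg/main.tf could then look roughly like this sketch. Note that once vpc_id is passed in as a variable, the nested module "vpc" block inside the SG module is no longer needed; as written it would instantiate the VPC module a second time:

terraform {
  required_version = ">= 1.1.5"
}

# Input variable replacing the nested VPC module reference
variable "vpc_id" {
  description = "VPC ID."
  type        = string
}

resource "aws_security_group" "allow_winrm" {
  name        = "allow_winrm"
  description = "Allow access the instances via WinRM over HTTP and HTTPS"
  vpc_id      = var.vpc_id
  # ... ingress/egress rules and tags as before ...
}

resource "aws_security_group" "allow_rdp" {
  name        = "allow_rdp"
  description = "Allow access the instances via RDP"
  vpc_id      = var.vpc_id
  # ... ingress rule and tags as before ...
}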

Terraform error creating user pool domain

Following this GitHub repo, the user pool domain farm_users is created, yet terraform apply returns this error. I tried destroy. I also tried deleting the user pool domain in the AWS console and repeating apply.
╷
│ Error: Error creating Cognito User Pool Domain: InvalidParameterException: Domain already associated with another user pool.
│
│ with module.api.aws_cognito_user_pool_domain.farm_users_pool_domain,
│ on modules/api/main.tf line 55, in resource "aws_cognito_user_pool_domain" "farm_users_pool_domain":
│ 55: resource "aws_cognito_user_pool_domain" "farm_users_pool_domain" {
│
After running apply:
$ aws cognito-idp describe-user-pool-domain --domain "fupdomain"
An error occurred (ResourceNotFoundException) when calling the DescribeUserPoolDomain operation: User pool domain fupdomain does not exist in this account.
main.tf
provider "aws" {
version = "~> 2.31"
region = var.region
}
data "aws_caller_identity" "current" {}
resource "random_string" "build_id" {
length = 16
special = false
upper = false
number = false
}
module "network" {
source = "./modules/network"
availability_zone = var.availability_zone
vpc_cidr = var.vpc_cidr
}
module "node_iam_role" {
source = "./modules/node_iam_role"
}
resource "aws_s3_bucket" "render_bucket" {
bucket = "${random_string.build_id.result}-render-data"
acl = "private"
}
# Stores server-side code bundles. i.e. Worker node and lambda layer
resource "aws_s3_bucket" "code_bundles_bucket" {
bucket = "${random_string.build_id.result}-code-bundles"
acl = "private"
}
# Stores and serves javascript client
resource "aws_s3_bucket" "client_bucket" {
bucket = "${random_string.build_id.result}-client-bucket"
acl = "public-read"
website {
index_document = "index.html"
error_document = "error.html"
}
}
# Code bundles
data "archive_file" "worker_node_code" {
type = "zip"
source_dir = "${path.root}/src/farm_worker"
output_path = "${path.root}/src/bundles/farm_worker.zip"
}
resource "aws_s3_bucket_object" "worker_code_bundle" {
bucket = aws_s3_bucket.code_bundles_bucket.id
key = "farm_worker.zip"
source = "${path.root}/src/bundles/farm_worker.zip"
depends_on = [data.archive_file.worker_node_code]
}
# Security groups for the worker nodes
resource "aws_security_group" "ssh" {
name = "allow_ssh"
vpc_id = module.network.vpc_id
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_security_group" "nfs" {
name = "NFS"
vpc_id = module.network.vpc_id
ingress {
from_port = 2049
to_port = 2049
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
# Build queues for project init and frame rendering
resource "aws_sqs_queue" "frame_render_deadletter" {
name = "frame_render_deadletter_queue"
}
resource "aws_sqs_queue" "frame_render_queue" {
name = "frame_render_queue"
visibility_timeout_seconds = 7000
redrive_policy = "{\"deadLetterTargetArn\":\"${aws_sqs_queue.frame_render_deadletter.arn}\",\"maxReceiveCount\":5}"
}
resource "aws_sqs_queue" "project_init_queue" {
name = "project_init_queue"
visibility_timeout_seconds = 7000
}
# EFS for shared storage during baking and rendering
resource "aws_efs_file_system" "shared_render_vol" {
tags = {
Name = "SharedRenderEFS"
}
}
resource "aws_efs_mount_target" "shared_mount" {
file_system_id = aws_efs_file_system.shared_render_vol.id
subnet_id = module.network.subnet_id
security_groups = [aws_security_group.nfs.id]
}
module "worker_node" {
source = "./modules/worker_node"
key_name = var.node_key_name
image_id = var.blender_node_image_id
vpc_security_group_ids = [aws_security_group.ssh.id, aws_security_group.nfs.id]
iam_instance_profile = module.node_iam_role.worker_iam_profile_name
build_id = random_string.build_id.result
region = var.region
render_bucket = aws_s3_bucket.render_bucket.id
code_bucket = aws_s3_bucket.code_bundles_bucket.id
frame_queue_url = aws_sqs_queue.frame_render_queue.id
project_init_queue_url = aws_sqs_queue.project_init_queue.id
shared_file_system_id = aws_efs_file_system.shared_render_vol.id
instance_types = var.instance_types
asg_name = var.worker_asg_name
asg_subnets = [module.network.subnet_id]
asg_max_workers = var.worker_node_max_count
asg_min_workers = 0
cloudwatch_namespace = var.cloudwatch_namespace
}
module "bpi_emitter" {
source = "./modules/bpi_emitter"
cloudwatch_namespace = var.cloudwatch_namespace
asg_name = module.worker_node.asg_name
frame_queue = aws_sqs_queue.frame_render_queue.id
project_init_queue = aws_sqs_queue.project_init_queue.id
frame_queue_bpi = var.frame_queue_bpi
project_init_queue_bpi = var.project_init_queue_bpi
}
# module "bucket_upload_listener" {
# source = "./modules/bucket_upload_listener"
# bucket_name = aws_s3_bucket.render_bucket.id
# bucket_arn = aws_s3_bucket.render_bucket.arn
# project_init_queue = aws_sqs_queue.project_init_queue.id
# }
resource "aws_dynamodb_table" "projects_table" {
name = "FarmProjects"
billing_mode = "PAY_PER_REQUEST"
hash_key = "ProjectId"
attribute {
name = "ProjectId"
type = "S"
}
}
resource "aws_dynamodb_table" "application_settings" {
name = "FarmApplicationSettings"
billing_mode = "PAY_PER_REQUEST"
hash_key = "SettingName"
attribute {
name = "SettingName"
type = "S"
}
}
module "api" {
source = "./modules/api"
region = var.region
bucket = aws_s3_bucket.render_bucket.id
frame_queue = aws_sqs_queue.frame_render_queue.id
project_init_queue = aws_sqs_queue.project_init_queue.id
client_endpoint = "https://${aws_s3_bucket.client_bucket.website_endpoint}"
dynamo_tables = {
projects = aws_dynamodb_table.projects_table.name,
application_settings = aws_dynamodb_table.application_settings.name
}
}
The domain name has to be globally unique. This means that if the same domain is already used in another account, you can't use it. Try, for example:
aws cognito-idp create-user-pool-domain --domain fupdomain --user-pool-id <pool-id>
The output will be:
An error occurred (InvalidParameterException) when calling the
CreateUserPoolDomain operation: Domain already associated with another
user pool.
This makes sense, as the domain name is used to build a url of the form:
https://{domain}.auth.us-east-1.amazoncognito.com
This is the endpoint users are authenticated against.
You need to edit the template and pick another domain name.
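One way to avoid the collision is to make the prefix unique per deployment, for example by reusing the random_string.build_id the root module already creates. A rough sketch; the domain_suffix variable and the aws_cognito_user_pool.farm_users reference are assumptions about the api module's internals:

# Root module: pass the build id into the api module (hypothetical variable name)
module "api" {
  source        = "./modules/api"
  # ... existing arguments ...
  domain_suffix = random_string.build_id.result
}

# modules/api: declare the variable and use it in the domain prefix
variable "domain_suffix" {
  type = string
}

resource "aws_cognito_user_pool_domain" "farm_users_pool_domain" {
  domain       = "fupdomain-${var.domain_suffix}"      # globally unique prefix
  user_pool_id = aws_cognito_user_pool.farm_users.id   # assumed pool resource name
}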

My Lambda can't connect to my RDS instance

I'm trying to create both services within the same VPC and give them appropriate security groups, but I can't make it work.
variable "vpc_cidr_block" {
default = "10.1.0.0/16"
}
variable "cidr_block_subnet_public" {
default = "10.1.1.0/24"
}
variable "cidr_block_subnets_private" {
default = ["10.1.2.0/24", "10.1.3.0/24", "10.1.4.0/24"]
}
data "aws_availability_zones" "available" {
state = "available"
}
resource "aws_vpc" "vpc" {
cidr_block = var.vpc_cidr_block
}
resource "aws_subnet" "private" {
count = length(var.cidr_block_subnets_private)
cidr_block = var.cidr_block_subnets_private[count.index]
vpc_id = aws_vpc.vpc.id
availability_zone = data.aws_availability_zones.available.names[count.index]
}
resource "aws_security_group" "lambda" {
vpc_id = aws_vpc.vpc.id
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_security_group" "rds" {
vpc_id = aws_vpc.vpc.id
ingress {
description = "PostgreSQL"
from_port = 5432
protocol = "tcp"
to_port = 5432
// security_groups = [aws_security_group.lambda.id]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_lambda_function" "event" {
function_name = "ServerlessExampleEvent"
timeout = 30
s3_bucket = "mn-lambda"
s3_key = "mn/v1.0.0/lambda-1.0.0-all.jar"
handler = "dk.fitfit.handler.EventRequestHandler"
runtime = "java11"
memory_size = 256
role = aws_iam_role.event.arn
vpc_config {
security_group_ids = [aws_security_group.lambda.id]
subnet_ids = [for s in aws_subnet.private: s.id]
}
environment {
variables = {
JDBC_DATABASE_URL = "jdbc:postgresql://${aws_db_instance.rds.address}:${aws_db_instance.rds.port}/${aws_db_instance.rds.identifier}"
DATABASE_USERNAME = aws_db_instance.rds.username
DATABASE_PASSWORD = aws_db_instance.rds.password
}
}
}
resource "aws_db_subnet_group" "db" {
subnet_ids = aws_subnet.private.*.id
}
resource "aws_db_instance" "rds" {
allocated_storage = 10
engine = "postgres"
engine_version = "11.5"
instance_class = "db.t2.micro"
username = "postgres"
password = random_password.password.result
skip_final_snapshot = true
apply_immediately = true
vpc_security_group_ids = [aws_security_group.rds.id]
db_subnet_group_name = aws_db_subnet_group.db.name
}
resource "random_password" "password" {
length = 32
special = false
}
To avoid cluttering the question, I only posted the relevant part of my HCL. Please let me know if I missed anything important.
The biggest issue is the commented-out security_groups parameter in the ingress block of the rds security group. Uncommenting it allows PostgreSQL traffic from the Lambda security group:
resource "aws_security_group" "rds" {
vpc_id = aws_vpc.vpc.id
ingress {
description = "PostgreSQL"
from_port = 5432
protocol = "tcp"
to_port = 5432
security_groups = [aws_security_group.lambda.id]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
On top of that, your JDBC string currently resolves to something like jdbc:postgresql://terraform-20091110230000000000000001.xxxx.us-east-1.rds.amazonaws.com:5432/terraform-20091110230000000000000001, because you aren't specifying an identifier for the RDS instance, so Terraform generates one prefixed with terraform- plus a timestamp and a counter. The important part is that your RDS instance doesn't yet contain a database named terraform-20091110230000000000000001 for your application to connect to, because you haven't specified one.
You can have RDS create a database on the RDS instance by using the name parameter. You can then update your JDBC connection string to specify the database name as well:
resource "aws_db_instance" "rds" {
allocated_storage = 10
engine = "postgres"
engine_version = "11.5"
instance_class = "db.t2.micro"
username = "postgres"
password = random_password.password.result
skip_final_snapshot = true
apply_immediately = true
name = "foo"
vpc_security_group_ids = [aws_security_group.rds.id]
db_subnet_group_name = aws_db_subnet_group.db.name
}
resource "aws_lambda_function" "event" {
function_name = "ServerlessExampleEvent"
timeout = 30
s3_bucket = "mn-lambda"
s3_key = "mn/v1.0.0/lambda-1.0.0-all.jar"
handler = "dk.fitfit.handler.EventRequestHandler"
runtime = "java11"
memory_size = 256
role = aws_iam_role.event.arn
vpc_config {
security_group_ids = [aws_security_group.lambda.id]
subnet_ids = [for s in aws_subnet.private : s.id]
}
environment {
variables = {
JDBC_DATABASE_URL = "jdbc:postgresql://${aws_db_instance.rds.address}:${aws_db_instance.rds.port}/${aws_db_instance.rds.name}"
DATABASE_USERNAME = aws_db_instance.rds.username
DATABASE_PASSWORD = aws_db_instance.rds.password
}
}
}
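Note that on newer AWS provider releases (v4 and later) the name argument on aws_db_instance was renamed to db_name (name was removed entirely in v5), so on a current provider the equivalent sketch would be:

resource "aws_db_instance" "rds" {
  # ... same arguments as above ...
  db_name = "foo"   # replaces name on AWS provider v4+
}

and the JDBC string would reference aws_db_instance.rds.db_name instead.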

Key Files in Terraform

Whenever I add key_name to my AWS instance resource, I can never actually connect to the resulting instance:
provider "aws" {
"region" = "us-east-1"
"access_key" = "**"
"secret_key" = "****"
}
resource "aws_instance" "api_server" {
ami = "ami-013f1e6b"
instance_type = "t2.micro"
"key_name" = "po"
tags {
Name = "API_Server"
}
}
output "API IP" {
value = "${aws_instance.api_server.public_ip}"
}
When I do
ssh -i ~/Downloads/po.pem bitnami@IP
I just get a blank line in my terminal, as if I had entered the wrong IP. However, checking the Amazon console, I can see the instance is running. I'm not getting any errors from Terraform either.
By default, inbound network access is not allowed. You need to explicitly allow it by attaching a security group.
provider "aws" {
"region" = "us-east-1"
"access_key" = "**"
"secret_key" = "****"
}
resource "aws_instance" "api_server" {
ami = "ami-013f1e6b"
instance_type = "t2.micro"
key_name = "po"
security_groups = ["${aws_security_group.api_server.id}"]
tags {
Name = "API_Server"
}
}
resource "aws_security_group" "api_server" {
name = "api_server"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["XXX.XXX.XXX.XXX/32"] // Allow SSH from your global IP
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
output "API IP" {
value = "${aws_instance.api_server.public_ip}"
}

terraform : Error creating Security Group: UnauthorizedOperation: You are not authorized to perform this operation

I have the below Terraform script, which works fine when I use it from the terminal.
provider "aws" {
region = "${var.aws_region}"
}
resource "aws_instance" "jenkins-poc" {
count = "2"
ami = "${var.aws_ami}"
instance_type = "${var.instance_type}"
key_name = "${var.key_name}"
availability_zone = "${var.aws_region}${element(split(",",var.zones),count.index)}"
vpc_security_group_ids = ["${aws_security_group.jenkins-poc.id}"]
subnet_id = "${element(split(",",var.subnet_id),count.index)}"
user_data = "${file("userdata.sh")}"
tags {
Name = "jenkins-poc${count.index + 1}"
Owner = "Shailesh"
}
}
resource "aws_security_group" "jenkins-poc" {
vpc_id = "${var.vpc_id}"
name = "${var.security_group_name}"
description = "Allow http,httpd and SSH"
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["10.0.0.0/8"]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["10.0.0.0/8"]
}
egress {
from_port = "0"
to_port = "0"
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_elb" "jenkins-poc-elb" {
name = "jenkins-poc-elb"
subnets = ["subnet-","subnet-"]
listener {
instance_port = 80
instance_protocol = "http"
lb_port = "80"
lb_protocol = "http"
}
health_check {
healthy_threshold = "2"
unhealthy_threshold = "3"
timeout = "3"
target = "tcp:80"
interval = 30
}
instances = ["${aws_instance.jenkins-poc.*.id}"]
}
and variables file is as given below.
variable "aws_ami" {
default = "ami-"
}
variable "zones"{
default = "a,b"
}
variable "aws_region" {
default = "us-east-1"
}
variable "key_name" {
default = "test-key"
}
variable "instance_type" {
default = "t2.micro"
}
variable "count" {
default = "2"
}
variable "security_group_name" {
default = "jenkins-poc"
}
variable "vpc_id" {
default = "vpc-"
}
variable "subnet_id" {
default = "subnet-,subnet"
}
Everything works fine when I run it from the terminal using terraform apply, but the same code gives me the error below when I run it through Jenkins.
aws_security_group.jenkins-poc: Error creating Security Group: UnauthorizedOperation: You are not authorized to perform this operation
Note: this is a non-default VPC in which I am performing this operation.
I would highly appreciate any comments. I didn't include the sensitive values.
Just make sure you are using the right AWS profile; the default AWS profile may not allow you to create these resources. You can set the profile explicitly on the provider:
provider "aws" {
region = "${var.aws_region}"
shared_credentials_file = "~/.aws/credentials"
profile = "xxxxxxx"
}
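If it is unclear which credentials the Jenkins job is actually picking up, one quick check from within Terraform itself is to print the caller identity (a minimal sketch using the standard aws_caller_identity data source):

data "aws_caller_identity" "whoami" {}

output "running_as_arn" {
  value = data.aws_caller_identity.whoami.arn
}

If the ARN printed on the Jenkins run differs from the one you see locally, the job is using a different profile or role, which would explain the UnauthorizedOperation error.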