I'm trying to create both services within the same VPC and give them appropriate security groups, but I can't make it work.
variable "vpc_cidr_block" {
default = "10.1.0.0/16"
}
variable "cidr_block_subnet_public" {
default = "10.1.1.0/24"
}
variable "cidr_block_subnets_private" {
default = ["10.1.2.0/24", "10.1.3.0/24", "10.1.4.0/24"]
}
data "aws_availability_zones" "available" {
state = "available"
}
resource "aws_vpc" "vpc" {
cidr_block = var.vpc_cidr_block
}
resource "aws_subnet" "private" {
count = length(var.cidr_block_subnets_private)
cidr_block = var.cidr_block_subnets_private[count.index]
vpc_id = aws_vpc.vpc.id
availability_zone = data.aws_availability_zones.available.names[count.index]
}
resource "aws_security_group" "lambda" {
vpc_id = aws_vpc.vpc.id
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_security_group" "rds" {
vpc_id = aws_vpc.vpc.id
ingress {
description = "PostgreSQL"
from_port = 5432
protocol = "tcp"
to_port = 5432
// security_groups = [aws_security_group.lambda.id]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_lambda_function" "event" {
function_name = "ServerlessExampleEvent"
timeout = 30
s3_bucket = "mn-lambda"
s3_key = "mn/v1.0.0/lambda-1.0.0-all.jar"
handler = "dk.fitfit.handler.EventRequestHandler"
runtime = "java11"
memory_size = 256
role = aws_iam_role.event.arn
vpc_config {
security_group_ids = [aws_security_group.lambda.id]
subnet_ids = [for s in aws_subnet.private: s.id]
}
environment {
variables = {
JDBC_DATABASE_URL = "jdbc:postgresql://${aws_db_instance.rds.address}:${aws_db_instance.rds.port}/${aws_db_instance.rds.identifier}"
DATABASE_USERNAME = aws_db_instance.rds.username
DATABASE_PASSWORD = aws_db_instance.rds.password
}
}
}
resource "aws_db_subnet_group" "db" {
subnet_ids = aws_subnet.private.*.id
}
resource "aws_db_instance" "rds" {
allocated_storage = 10
engine = "postgres"
engine_version = "11.5"
instance_class = "db.t2.micro"
username = "postgres"
password = random_password.password.result
skip_final_snapshot = true
apply_immediately = true
vpc_security_group_ids = [aws_security_group.rds.id]
db_subnet_group_name = aws_db_subnet_group.db.name
}
resource "random_password" "password" {
length = 32
special = false
}
I tried to not clutter the question by only posting the relevant part of my HCL. Please let me know if I missed anything important.
The biggest issue is the commented-out security_groups parameter in the ingress block of the rds security group. Uncommenting that should allow PostgreSQL traffic from the lambda security group:
resource "aws_security_group" "rds" {
vpc_id = aws_vpc.vpc.id
ingress {
description = "PostgreSQL"
from_port = 5432
protocol = "tcp"
to_port = 5432
security_groups = [aws_security_group.lambda.id]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
As well as that, your JDBC string is resolving to something like jdbc:postgresql://terraform-20091110230000000000000001.xxxx.us-east-1.rds.amazonaws.com:5432/terraform-20091110230000000000000001, because you aren't specifying an identifier for the RDS instance, so it defaults to a generated identifier prefixed with terraform- plus a timestamp and a counter. The important part to note here is that your RDS instance doesn't yet contain a database named terraform-20091110230000000000000001 for your application to connect to, because you haven't specified one.
You can have RDS create a database on the RDS instance by using the name parameter. You can then update your JDBC connection string to specify the database name as well:
resource "aws_db_instance" "rds" {
allocated_storage = 10
engine = "postgres"
engine_version = "11.5"
instance_class = "db.t2.micro"
username = "postgres"
password = random_password.password.result
skip_final_snapshot = true
apply_immediately = true
name = "foo"
vpc_security_group_ids = [aws_security_group.rds.id]
db_subnet_group_name = aws_db_subnet_group.db.name
}
resource "aws_lambda_function" "event" {
function_name = "ServerlessExampleEvent"
timeout = 30
s3_bucket = "mn-lambda"
s3_key = "mn/v1.0.0/lambda-1.0.0-all.jar"
handler = "dk.fitfit.handler.EventRequestHandler"
runtime = "java11"
memory_size = 256
role = aws_iam_role.event.arn
vpc_config {
security_group_ids = [aws_security_group.lambda.id]
subnet_ids = [for s in aws_subnet.private : s.id]
}
environment {
variables = {
JDBC_DATABASE_URL = "jdbc:postgresql://${aws_db_instance.rds.address}:${aws_db_instance.rds.port}/${aws_db_instance.rds.name}"
DATABASE_USERNAME = aws_db_instance.rds.username
DATABASE_PASSWORD = aws_db_instance.rds.password
}
}
}
I am struggling to understand CIDR blocks in the way I am using them. My understanding (probably wrong) is that they are a way of reserving a range of IP addresses for your environment, which you can then apportion across applications. But I can't get it working in my case. I am using Terraform to manage a simple environment: a VPC containing a Lambda and an RDS instance. The RDS instance will not be publicly accessible; the Lambda will be invoked by an HTTP trigger. The Lambda and the RDS instance each need their own subnets, and the RDS instance needs two. I have this configuration in Terraform, which keeps failing with this and similar errors:
The new Subnets are not in the same Vpc as the existing subnet group
The terraform set up is:
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "vpc"
}
}
resource "aws_subnet" "rds_subnet_1a" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.1.0/24"
availability_zone = "eu-west-1a"
tags = {
Name = "rds_subnet_1a"
}
}
resource "aws_subnet" "rds_subnet_1b" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.2.0/24"
availability_zone = "eu-west-1b"
tags = {
Name = "rds_subnet_1b"
}
}
resource "aws_subnet" "lambda_subnet_1a" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.3.0/24"
availability_zone = "eu-west-1a"
tags = {
Name = "lambda_subnet_1a"
}
}
resource "aws_db_subnet_group" "default" {
name = "main"
subnet_ids = [aws_subnet.rds_subnet_1a.id, aws_subnet.rds_subnet_1b.id]
tags = {
Name = "My DB subnet group"
}
}
resource "aws_security_group" "rds" {
name = "rds-sg"
vpc_id = aws_vpc.main.id
ingress {
from_port = 5432
to_port = 5432
protocol = "tcp"
cidr_blocks = ["10.0.0.0/16"]
}
egress {
from_port = 5432
to_port = 5432
protocol = "tcp"
cidr_blocks = ["10.0.0.0/16"]
}
tags = {
Name = "rds-sg"
}
}
resource "aws_security_group" "lambda" {
name = "lambda_sg"
vpc_id = aws_vpc.main.id
ingress {
protocol = -1
self = true
from_port = 0
to_port = 0
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["10.0.0.0/16"]
}
tags = {
Name = "lambda_sg"
}
}
I know this is basic, but I think that getting answers for my specific situation may help me understand the concepts better.
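As a side note on apportioning the range: the /24 subnet blocks above can also be derived from the VPC's /16 with Terraform's built-in cidrsubnet function instead of hard-coding them. A minimal sketch (the local names here are illustrative, not part of the original config):

```hcl
# cidrsubnet(prefix, newbits, netnum) carves a smaller block out of a
# larger one; adding 8 new bits to a /16 yields a /24, and netnum picks
# which /24 within the /16.
locals {
  vpc_cidr       = "10.0.0.0/16"
  rds_1a_cidr    = cidrsubnet(local.vpc_cidr, 8, 1) # "10.0.1.0/24"
  rds_1b_cidr    = cidrsubnet(local.vpc_cidr, 8, 2) # "10.0.2.0/24"
  lambda_1a_cidr = cidrsubnet(local.vpc_cidr, 8, 3) # "10.0.3.0/24"
}
```

These locals could then be referenced from the aws_subnet resources, which guarantees every subnet actually sits inside the VPC's block.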
EDIT - lambda config:
resource "aws_lambda_function" "api_uprn" {
function_name = "api-uprn"
s3_bucket = aws_s3_bucket.lambdas_bucket.id
s3_key = "api-uprn/function_0.0.8.zip"
runtime = "python3.9"
handler = "app.main.handler"
role = aws_iam_role.lambda_exec.arn
vpc_config {
subnet_ids = [aws_subnet.subnet_1a.id]
security_group_ids = [aws_security_group.lambda.id]
}
}
resource "aws_cloudwatch_log_group" "api_uprn" {
name = "/aws/lambda/${aws_lambda_function.api_uprn.function_name}"
retention_in_days = 30
}
resource "aws_iam_role" "lambda_exec" {
name = "api_uprn"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "lambda.amazonaws.com"
}
}
]
})
}
resource "aws_iam_role_policy_attachment" "lambda_policy" {
role = aws_iam_role.lambda_exec.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
resource "aws_iam_role_policy_attachment" "rds_read" {
role = aws_iam_role.lambda_exec.name
policy_arn = "arn:aws:iam::aws:policy/AmazonRDSReadOnlyAccess"
}
resource "aws_iam_role_policy_attachment" "lambda_vpc_access" {
role = aws_iam_role.lambda_exec.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
}
Can you please post the full error here? It will make it easier to understand which resource is throwing the error.
My tip is that you need to change the subnet_ids in your Lambda configuration. From what I understand, it should look like this:
resource "aws_lambda_function" "api_uprn" {
function_name = "api-uprn"
s3_bucket = aws_s3_bucket.lambdas_bucket.id
s3_key = "api-uprn/function_0.0.8.zip"
runtime = "python3.9"
handler = "app.main.handler"
role = aws_iam_role.lambda_exec.arn
vpc_config {
subnet_ids = [aws_subnet.lambda_subnet_1a.id]
security_group_ids = [aws_security_group.lambda.id]
}
}
I am currently learning Terraform and I need help with the code below. I want to create a simple architecture: an autoscaling group of EC2 instances behind an Application Load Balancer. The setup completes, but when I try to access the application endpoint, the request times out. When I tried to access the EC2 instances directly, I was unable to (because the EC2 instances were in a security group allowing access only from the ALB security group). I changed the instance security group's ingress rules and ran the user_data script manually, after which I reverted the changes to the instance security group to complete my setup.
My question is why is my setup not working via the below code? Is it because the access is being restricted by the load balancer security group or is my launch configuration block incorrect?
data "aws_ami" "amazon-linux" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amzn2-ami-kernel-5.10-hvm-2.0.20220426.0-x86_64-gp2"]
}
}
data "aws_availability_zones" "available" {
state = "available"
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.14.0"
name = "main-vpc"
cidr = "10.0.0.0/16"
azs = data.aws_availability_zones.available.names
public_subnets = ["10.0.4.0/24","10.0.5.0/24","10.0.6.0/24"]
enable_dns_hostnames = true
enable_dns_support = true
}
resource "aws_launch_configuration" "TestLC" {
name_prefix = "Lab-Instance-"
image_id = data.aws_ami.amazon-linux.id
instance_type = "t2.nano"
key_name = "CloudformationKeyPair"
user_data = file("./user_data.sh")
security_groups = [aws_security_group.TestInstanceSG.id]
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "TestASG" {
min_size = 1
max_size = 3
desired_capacity = 2
launch_configuration = aws_launch_configuration.TestLC.name
vpc_zone_identifier = module.vpc.public_subnets
}
resource "aws_lb_listener" "TestListener"{
load_balancer_arn = aws_lb.TestLB.arn
port = "80"
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.TestTG.arn
}
}
resource "aws_lb" "TestLB" {
name = "Lab-App-Load-Balancer"
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.TestLoadBalanceSG.id]
subnets = module.vpc.public_subnets
}
resource "aws_lb_target_group" "TestTG" {
name = "LabTargetGroup"
port = "80"
protocol = "HTTP"
vpc_id = module.vpc.vpc_id
}
resource "aws_autoscaling_attachment" "TestAutoScalingAttachment" {
autoscaling_group_name = aws_autoscaling_group.TestASG.id
lb_target_group_arn = aws_lb_target_group.TestTG.arn
}
resource "aws_security_group" "TestInstanceSG" {
name = "LAB-Instance-SecurityGroup"
ingress{
from_port = 80
to_port = 80
protocol = "tcp"
security_groups = [aws_security_group.TestLoadBalanceSG.id]
}
ingress{
from_port = 22
to_port = 22
protocol = "tcp"
security_groups = [aws_security_group.TestLoadBalanceSG.id]
}
egress{
from_port = 0
to_port = 0
protocol = "-1"
security_groups = [aws_security_group.TestLoadBalanceSG.id]
}
vpc_id = module.vpc.vpc_id
}
resource "aws_security_group" "TestLoadBalanceSG" {
name = "LAB-LoadBalancer-SecurityGroup"
ingress{
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress{
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress{
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
vpc_id = module.vpc.vpc_id
}
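One detail worth checking here: the egress of TestInstanceSG is restricted to the ALB security group, so the instances have no outbound internet access while user_data runs, which matches the observation that the bootstrap script only worked after the rules were loosened. A possible sketch of a more permissive egress rule, assuming the bootstrap needs to reach package repositories on the internet:

```hcl
# Sketch only: a replacement for the egress block of
# aws_security_group.TestInstanceSG, so instances can reach the
# internet during bootstrap (tighten again later if needed).
egress {
  from_port   = 0
  to_port     = 0
  protocol    = "-1"
  cidr_blocks = ["0.0.0.0/0"]
}
```

The forward path (ALB on port 80 to the instances) is already allowed by the existing ingress rule, so once the application is running the listener should be able to reach it.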
I have a question regarding AWS API Gateway forwarding traffic to an internal-facing ALB. This is configured with Terraform:
resource "aws_lb" "foo-load-balancer" {
name = "foo-dev"
load_balancer_type = "application"
internal = true
subnets = local.private_subnet_ids
security_groups = [aws_security_group.lb_security_group.id]
}
resource "aws_security_group" "lb_security_group" {
name = "${local.name}-lb-security-group"
vpc_id = local.vpc_id
ingress {
from_port = 8080
to_port = 8383
protocol = "tcp"
cidr_blocks = local.private_subnet_cidrs
}
ingress {
from_port = 5432
to_port = 5432
protocol = "tcp"
cidr_blocks = local.private_subnet_cidrs
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = local.private_subnet_cidrs
}
egress {
from_port = 0 # allow any port
to_port = 0
protocol = "-1" # allow any protocol
cidr_blocks = ["0.0.0.0/0"] # allow traffic out to all IP addresses
}
tags = local.tags
}
resource "aws_apigatewayv2_vpc_link" "alb-vpc-link" {
name = local.name
security_group_ids = [
aws_security_group.lb_security_group.id]
subnet_ids = local.private_subnet_ids
tags = local.tags
}
resource "aws_apigatewayv2_api" "foo" {
name = "${local.name}-foo"
protocol_type = "HTTP"
}
resource "aws_apigatewayv2_integration" "foo" {
api_id = aws_apigatewayv2_api.foo.id
integration_type = "HTTP_PROXY"
integration_method = "ANY"
connection_type = "VPC_LINK"
connection_id = aws_apigatewayv2_vpc_link.alb-vpc-link.id
integration_uri = module.foo-backend-service.aws_lb_listener.arn
depends_on = [
module.foo-backend-service]
}
resource "aws_apigatewayv2_route" "foo" {
api_id = aws_apigatewayv2_api.foo.id
route_key = "ANY /{proxy+}"
target = "integrations/${aws_apigatewayv2_integration.foo.id}"
}
resource "aws_apigatewayv2_deployment" "foo" {
api_id = aws_apigatewayv2_api.foo.id
depends_on = [
aws_apigatewayv2_route.foo]
}
resource "aws_apigatewayv2_stage" "foo" {
name = "$default"
deployment_id = aws_apigatewayv2_deployment.foo.id
api_id = aws_apigatewayv2_api.foo.id
}
resource "aws_apigatewayv2_domain_name" "foo" {
domain_name = aws_acm_certificate.foo.domain_name
domain_name_configuration {
certificate_arn = aws_acm_certificate.foo.arn
endpoint_type = "REGIONAL"
security_policy = "TLS_1_2"
}
depends_on = [
aws_acm_certificate_validation.foo]
}
resource "aws_apigatewayv2_api_mapping" "foo" {
api_id = aws_apigatewayv2_api.foo.id
domain_name = aws_apigatewayv2_domain_name.foo.id
stage = aws_apigatewayv2_stage.foo.id
}
resource "aws_ecs_task_definition" "foo-service-task" {
family = "${var.application_name}-${var.name}-service-task"
container_definitions = jsonencode(
[
{
name: "${var.application_name}-${var.name}-service-task",
image: var.image,
essential: true,
portMappings: [
{
containerPort: 8282,
hostPort: var.port
}
],
memory: var.memory,
cpu: var.cpu,
environment: var.environment,
logConfiguration: {
logDriver: "awslogs",
options: {
awslogs-group: var.cloudwatch_group,
awslogs-region: var.aws_region,
awslogs-stream-prefix: var.name
}
}
}
])
network_mode = "awsvpc"
tags = merge(var.tags, {
service = var.name
})
depends_on = [var.depends_on_relation]
}
resource "aws_ecs_service" "foo-service" {
name = "${var.application_name}-${var.name}"
cluster = data.aws_ecs_cluster.ecs.id
task_definition = aws_ecs_task_definition.foo-service-task.arn
desired_count = var.desired_count
load_balancer {
target_group_arn = aws_lb_target_group.target-group.arn
container_name = aws_ecs_task_definition.foo-service-task.family
container_port = 8282
}
network_configuration {
subnets = var.subnet_ids
security_groups = [aws_security_group.securitygroup.id]
}
depends_on = [aws_lb_listener.listener]
}
resource "aws_security_group" "securitygroup" {
name = "${var.application_name}-${var.name}"
vpc_id = var.vpc_id
ingress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = local.subnet_cidrs
}
egress {
from_port = 0 # allow any port
to_port = 0
protocol = "-1" # allow any protocol
cidr_blocks = ["0.0.0.0/0"] # allow traffic out to all IP addresses
}
tags = merge(var.tags, {
service = var.name
})
}
resource "aws_lb_target_group" "target-group" {
name = "${local.stage}-${var.name}"
port = var.port
protocol = var.protocol
target_type = "ip"
vpc_id = var.vpc_id
health_check {
enabled = var.health_check_enabled
matcher = var.protocol == "TCP" ? null : "200,301,302"
path = var.health_check_protocol == "TCP" ? null : var.health_check_path
port = var.health_check_port
protocol = var.health_check_protocol
interval = var.health_check_interval
timeout = var.protocol == "TCP" ? null : var.health_check_timeout
healthy_threshold = var.health_check_healthy_threshold
unhealthy_threshold = var.health_check_unhealthy_threshold
}
slow_start = var.protocol == "TCP" ? null : var.health_check_slow_start
tags = merge(var.tags, {
service = var.name
})
depends_on = [
data.aws_lb.loadbalancer]
}
resource "aws_lb_listener" "listener" {
load_balancer_arn = data.aws_lb.loadbalancer.arn
port = var.port
protocol = var.protocol
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.target-group.arn
}
depends_on = [
aws_lb_target_group.target-group]
}
resource "aws_route53_record" "foo" {
name = aws_apigatewayv2_domain_name.foo.domain_name
type = "A"
zone_id = data.aws_route53_zone.hosted_zone.zone_id
alias {
name = aws_apigatewayv2_domain_name.foo.domain_name_configuration[0].target_domain_name
zone_id = aws_apigatewayv2_domain_name.foo.domain_name_configuration[0].hosted_zone_id
evaluate_target_health = false
}
}
resource "aws_acm_certificate" "foo" {
domain_name = local.application_domain
validation_method = "DNS"
}
resource "aws_route53_record" "foo_acm" {
for_each = {
for dvo in aws_acm_certificate.foo.domain_validation_options : dvo.domain_name => {
name = dvo.resource_record_name
record = dvo.resource_record_value
type = dvo.resource_record_type
zone_id = data.aws_route53_zone.hosted_zone.zone_id
}
}
allow_overwrite = true
name = each.value.name
records = [each.value.record]
ttl = 60
type = each.value.type
zone_id = each.value.zone_id
}
resource "aws_acm_certificate_validation" "foo" {
certificate_arn = aws_acm_certificate.foo.arn
validation_record_fqdns = [for record in aws_route53_record.foo_acm : record.fqdn]
}
resource "aws_route53_record" "application_load_balancer" {
name = "alb.${local.application_domain}"
type = "CNAME"
zone_id = data.aws_route53_zone.hosted_zone.zone_id
ttl = "60"
records = [aws_lb.foo-load-balancer.dns_name]
}
The Service/Task, which I am starting is a simple Spring-Boot app ("Greeting-Controller") with an enabled health-check. The service is up and running and healthy.
The problem is that the ALB doesn't receive any traffic (I can't see any requests at the ALB in the AWS console), and when I try to access the registered Route 53 DNS name I get a "Service unavailable" JSON response.
Any ideas what to change?
I have a VPC with public and private subnets.
How do I create a bastion host in a public subnet?
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 2.0"
name = "${local.name}-vpc"
cidr = "10.1.0.0/16"
azs = ["us-east-2a", "us-east-2b", "us-east-2c"]
private_subnets = ["10.1.1.0/24", "10.1.2.0/24", "10.1.3.0/24"]
public_subnets = ["10.1.101.0/24", "10.1.102.0/24", "10.1.103.0/24"]
single_nat_gateway = true
enable_nat_gateway = true
enable_vpn_gateway = false
enable_dns_hostnames = true
public_subnet_tags = {
Name = "public"
}
private_subnet_tags = {
Name = "private"
}
public_route_table_tags = {
Name = "public-RT"
}
private_route_table_tags = {
Name = "private-RT"
}
tags = {
Environment = local.environment
Name = local.name
}
}
Edit
I add this to the code above:
resource "aws_security_group" "bastion-sg" {
name = "bastion-security-group"
vpc_id = "${module.vpc.vpc_id}"
ingress {
protocol = "tcp"
from_port = 22
to_port = 22
cidr_blocks = ["0.0.0.0/0"]
}
egress {
protocol = -1
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "bastion" {
ami = "ami-0d5d9d301c853a04a"
key_name = "key"
instance_type = "t2.micro"
vpc_security_group_ids = ["${aws_security_group.bastion-sg.id}"]
associate_public_ip_address = true
}
But when I run terraform apply I get this error:
Error: Error launching source instance: InvalidParameter: Security group sg-0e3d05f76119af726 and subnet subnet-4b0c1123 belong to different networks.
status code: 400, request id: ddce7fc3-3ef9-407d-b0cd-0dda640bb3a9
on vpc.tf line 108, in resource "aws_instance" "bastion":
108: resource "aws_instance" "bastion" {
The error means the instance is being launched into your default VPC, because aws_instance.bastion has no subnet_id, while the security group lives in the module's VPC. Give the instance a subnet from the module's public subnets and attach the security group by ID:
resource "aws_security_group" "bastion-sg" {
name = "bastion-security-group"
vpc_id = module.vpc.vpc_id
ingress {
protocol = var.bastion_ingress_protocol
from_port = var.bastion_ingress_from_port
to_port = var.bastion_ingress_to_port
cidr_blocks = var.bastion_ingress_cidr
}
egress {
protocol = var.bastion_egress_protocol
from_port = var.bastion_egress_from_port
to_port = var.bastion_egress_to_port
cidr_blocks = var.bastion_egress_cidr
}
}
resource "aws_instance" "bastion" {
ami = var.bastion_ami
key_name = var.key
instance_type = var.bastion_instance_type
subnet_id = module.vpc.public_subnets[0]
vpc_security_group_ids = [aws_security_group.bastion-sg.id]
associate_public_ip_address = true
}
I have the below Terraform script, which works fine when I use it from the terminal.
provider "aws" {
region = "${var.aws_region}"
}
resource "aws_instance" "jenkins-poc" {
count = "2"
ami = "${var.aws_ami}"
instance_type = "${var.instance_type}"
key_name = "${var.key_name}"
availability_zone = "${var.aws_region}${element(split(",",var.zones),count.index)}"
vpc_security_group_ids = ["${aws_security_group.jenkins-poc.id}"]
subnet_id = "${element(split(",",var.subnet_id),count.index)}"
user_data = "${file("userdata.sh")}"
tags {
Name = "jenkins-poc${count.index + 1}"
Owner = "Shailesh"
}
}
resource "aws_security_group" "jenkins-poc" {
vpc_id = "${var.vpc_id}"
name = "${var.security_group_name}"
description = "Allow http,httpd and SSH"
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["10.0.0.0/8"]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["10.0.0.0/8"]
}
egress {
from_port = "0"
to_port = "0"
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_elb" "jenkins-poc-elb" {
name = "jenkins-poc-elb"
subnets = ["subnet-","subnet-"]
listener {
instance_port = 80
instance_protocol = "http"
lb_port = "80"
lb_protocol = "http"
}
health_check {
healthy_threshold = "2"
unhealthy_threshold = "3"
timeout = "3"
target = "tcp:80"
interval = 30
}
instances = ["${aws_instance.jenkins-poc.*.id}"]
}
and variables file is as given below.
variable "aws_ami" {
default = "ami-"
}
variable "zones"{
default = "a,b"
}
variable "aws_region" {
default = "us-east-1"
}
variable "key_name" {
default = "test-key"
}
variable "instance_type" {
default = "t2.micro"
}
variable "count" {
default = "2"
}
variable "security_group_name" {
default = "jenkins-poc"
}
variable "vpc_id" {
default = "vpc-"
}
variable "subnet_id" {
default = "subnet-,subnet"
}
Everything works fine when I run it from the terminal using terraform apply, but the same code gives me the below error when I run it through Jenkins.
aws_security_group.jenkins-poc: Error creating Security Group: UnauthorizedOperation: You are not authorized to perform this operation
Note: I am performing this operation in a non-default VPC.
I would highly appreciate any comments. I didn't mention sensitive values.
Just make sure you are in the right AWS profile; the default profile's permissions could be restricting you from creating these resources. You can pin the profile explicitly:
provider "aws" {
region = "${var.aws_region}"
shared_credentials_file = "~/.aws/credentials"
profile = "xxxxxxx"
}
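To confirm which IAM principal Terraform actually runs as under Jenkins, one option (a sketch, not part of the original config) is to surface the caller identity as an output:

```hcl
# Sketch: the aws_caller_identity data source returns the account,
# user ID, and ARN of the credentials Terraform is using, so the
# Jenkins log will show which principal made the API calls.
data "aws_caller_identity" "current" {}

output "caller_arn" {
  value = data.aws_caller_identity.current.arn
}
```

If the ARN printed in the Jenkins run differs from the one you see locally, the UnauthorizedOperation error is almost certainly a credentials/profile mismatch rather than a problem with the security group configuration.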