Terraformed AWS API Gateway Custom Domain Name throws 403 Forbidden

I am trying to expose all the stages of my Regional API Gateway through a regional Custom Domain.
Problem
If I curl my API Gateway directly (i.e. https://xx.execute-api.eu-west-3.amazonaws.com/default/users), it works, but I get a 403 if I curl the domain name (i.e. https://api.acme.com/default/users).
Configuration
My Terraform files look like this:
data "aws_route53_zone" "acme" {
name = "acme.com."
}
resource "aws_api_gateway_rest_api" "backend" {
name = "acme-backend-api"
description = "Backend API"
body = "SOMETHING"
endpoint_configuration {
types = ["REGIONAL"]
}
}
resource "aws_api_gateway_deployment" "backend" {
rest_api_id = aws_api_gateway_rest_api.backend.id
stage_name = "default"
lifecycle {
create_before_destroy = true
}
}
resource "aws_api_gateway_domain_name" "backend" {
domain_name = "api.acme.com"
regional_certificate_arn = "arn:aws:acm:xx:certificate/xx"
endpoint_configuration {
types = ["REGIONAL"]
}
}
resource "aws_route53_record" "backend" {
name = aws_api_gateway_domain_name.backend.domain_name
type = "A"
zone_id = data.aws_route53_zone.acme.id
alias {
evaluate_target_health = true
name = aws_api_gateway_domain_name.backend.regional_domain_name
zone_id = aws_api_gateway_domain_name.backend.regional_zone_id
}
}
resource "aws_api_gateway_base_path_mapping" "backend" {
api_id = aws_api_gateway_rest_api.backend.id
domain_name = aws_api_gateway_domain_name.backend.domain_name
# No stage_name: expose all stages
}
According to the Terraform api_gateway_domain_name and api_gateway_base_path_mapping examples, this should be OK.
I have also followed many how-tos, and I have all of these elements in place:
The certificate
The A record to the API custom domain
The mapping to the deployed stage (which works if you call it directly)
What am I missing or doing wrong?

This is an API Gateway v2 example that works for me as of today. The "aws_apigatewayv2_api_mapping" resource is the key piece: without it you get either "port 80: Connection refused"
or {"message":"Forbidden"} errors, which is what you are seeing and what I struggled with myself.
// ACM
resource "aws_acm_certificate" "cert_api" {
  domain_name       = var.api_domain
  validation_method = "DNS"

  tags = {
    Name = var.api_domain
  }
}

resource "aws_acm_certificate_validation" "cert_api" {
  certificate_arn = aws_acm_certificate.cert_api.arn
}

// API Gateway V2
resource "aws_apigatewayv2_api" "lambda" {
  name          = "serverless_lambda_gw"
  protocol_type = "HTTP"
}

resource "aws_apigatewayv2_stage" "lambda" {
  api_id      = aws_apigatewayv2_api.lambda.id
  name        = "serverless_lambda_stage"
  auto_deploy = true

  access_log_settings {
    destination_arn = aws_cloudwatch_log_group.api_gw.arn
    format = jsonencode({
      requestId               = "$context.requestId"
      sourceIp                = "$context.identity.sourceIp"
      requestTime             = "$context.requestTime"
      protocol                = "$context.protocol"
      httpMethod              = "$context.httpMethod"
      resourcePath            = "$context.resourcePath"
      routeKey                = "$context.routeKey"
      status                  = "$context.status"
      responseLength          = "$context.responseLength"
      integrationErrorMessage = "$context.integrationErrorMessage"
    })
  }
}

resource "aws_apigatewayv2_integration" "testimonials" {
  api_id             = aws_apigatewayv2_api.lambda.id
  integration_uri    = aws_lambda_function.testimonials.invoke_arn
  integration_type   = "AWS_PROXY"
  integration_method = "POST"
}

resource "aws_apigatewayv2_route" "testimonials" {
  api_id    = aws_apigatewayv2_api.lambda.id
  route_key = "GET /testimonials"
  target    = "integrations/${aws_apigatewayv2_integration.testimonials.id}"
}

resource "aws_cloudwatch_log_group" "api_gw" {
  name              = "/aws/api_gw/${aws_apigatewayv2_api.lambda.name}"
  retention_in_days = 30
}

resource "aws_lambda_permission" "api_gw" {
  statement_id  = "AllowExecutionFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.testimonials.function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_apigatewayv2_api.lambda.execution_arn}/*/*"
}

resource "aws_apigatewayv2_domain_name" "api" {
  domain_name = var.api_domain

  domain_name_configuration {
    certificate_arn = aws_acm_certificate.cert_api.arn
    endpoint_type   = "REGIONAL"
    security_policy = "TLS_1_2"
  }
}

resource "aws_apigatewayv2_api_mapping" "api" {
  api_id      = aws_apigatewayv2_api.lambda.id
  domain_name = aws_apigatewayv2_domain_name.api.id
  stage       = aws_apigatewayv2_stage.lambda.id
}

// Route53
resource "aws_route53_zone" "api" {
  name = var.api_domain
}

resource "aws_route53_record" "cert_api_validations" {
  allow_overwrite = true
  count           = length(aws_acm_certificate.cert_api.domain_validation_options)

  zone_id = aws_route53_zone.api.zone_id
  name    = element(aws_acm_certificate.cert_api.domain_validation_options.*.resource_record_name, count.index)
  type    = element(aws_acm_certificate.cert_api.domain_validation_options.*.resource_record_type, count.index)
  records = [element(aws_acm_certificate.cert_api.domain_validation_options.*.resource_record_value, count.index)]
  ttl     = 60
}

resource "aws_route53_record" "api-a" {
  name    = aws_apigatewayv2_domain_name.api.domain_name
  type    = "A"
  zone_id = aws_route53_zone.api.zone_id

  alias {
    name                   = aws_apigatewayv2_domain_name.api.domain_name_configuration[0].target_domain_name
    zone_id                = aws_apigatewayv2_domain_name.api.domain_name_configuration[0].hosted_zone_id
    evaluate_target_health = false
  }
}
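If you want to stay on the REST (v1) resources from the question and are happy to pin the custom domain to a single stage, the equivalent key piece is the base path mapping with an explicit stage_name. A minimal sketch, reusing the backend resources and the default stage from the question (note that the stage then disappears from the path, i.e. https://api.acme.com/users):
resource "aws_api_gateway_base_path_mapping" "backend" {
  api_id      = aws_api_gateway_rest_api.backend.id
  domain_name = aws_api_gateway_domain_name.backend.domain_name
  stage_name  = "default" # the stage created by aws_api_gateway_deployment.backend
}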

Related

How to figure out why health checks aren't passing in ECS Fargate with an ALB?

I'm quite new to DevOps, and I have been struggling to set up a test project for a couple of weeks now.
I have written a Terraform file that is supposed to set up most of the project:
# Get subnets
data "aws_subnets" "subnets" {
filter {
name = "vpc-id"
values = [var.vpc_id]
}
}
# Get security groups
data "aws_security_groups" "security_groups" {
filter {
name = "vpc-id"
values = [var.vpc_id]
}
}
resource "aws_s3_bucket" "lb_logs" {
bucket = "${var.app_name}-load-balancer-${var.env}-logs"
}
resource "aws_s3_bucket_server_side_encryption_configuration" "encryption" {
bucket = aws_s3_bucket.lb_logs.bucket
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
resource "aws_s3_bucket_versioning" "versioning" {
bucket = aws_s3_bucket.lb_logs.bucket
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_acl" "acl" {
bucket = aws_s3_bucket.lb_logs.bucket
acl = "private"
}
data "aws_iam_policy_document" "lb_logs_s3_put_object" {
statement {
effect = "Allow"
principals {
type = "AWS"
identifiers = ["arn:aws:iam::156460612806:root"]
}
actions = ["s3:PutObject"]
resources = ["${aws_s3_bucket.lb_logs.arn}/*"]
}
}
resource "aws_s3_bucket_policy" "lb_logs_s3_put_object" {
bucket = aws_s3_bucket.lb_logs.id
policy = data.aws_iam_policy_document.lb_logs_s3_put_object.json
}
# Create load balancer
resource "aws_lb" "load_balancer" {
name = "${var.app_name}-load-balancer-${var.env}"
subnets = data.aws_subnets.subnets.ids
security_groups = data.aws_security_groups.security_groups.ids
load_balancer_type = "application"
access_logs {
bucket = aws_s3_bucket.lb_logs.bucket
enabled = true
}
tags = {
Environment = "${var.env}"
}
}
resource "aws_lb_target_group" "blue_target" {
name = "${var.app_name}-blue-target-${var.env}"
protocol = "HTTPS"
port = var.port
target_type = "ip"
vpc_id = var.vpc_id
health_check {
healthy_threshold = 5
interval = 30
matcher = 200
path = "${var.health_check_path}"
protocol = "HTTPS"
timeout = 10
unhealthy_threshold = 2
}
}
resource "aws_lb_target_group" "green_target" {
name = "${var.app_name}-green-target-${var.env}"
protocol = "HTTPS"
port = var.port
target_type = "ip"
vpc_id = var.vpc_id
health_check {
healthy_threshold = 5
interval = 30
matcher = 200
path = "${var.health_check_path}"
protocol = "HTTPS"
timeout = 10
unhealthy_threshold = 2
}
}
data "aws_acm_certificate" "cert" {
domain = var.domain
statuses = ["ISSUED"]
most_recent = true
}
resource "aws_lb_listener" "listener" {
load_balancer_arn = aws_lb.load_balancer.arn
port = var.port
protocol = "HTTPS"
ssl_policy = "ELBSecurityPolicy-2016-08"
certificate_arn = data.aws_acm_certificate.cert.arn
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.blue_target.arn
}
}
# ECS
resource "aws_ecs_cluster" "cluster" {
name = "${var.app_name}-cluster-${var.env}"
}
data "aws_ecr_repository" "ecr_repository" {
name = var.image_repo_name
}
resource "aws_iam_role" "ecs_task_role" {
name = "EcsTaskRole"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ecs-tasks.amazonaws.com"
}
},
]
})
}
resource "aws_iam_policy" "secrets_manager_read_policy" {
name = "SecretsManagerRead"
description = "Read only access to secrets manager"
policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Sid = "",
Effect = "Allow",
Action = [
"secretsmanager:GetRandomPassword",
"secretsmanager:GetResourcePolicy",
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret",
"secretsmanager:ListSecretVersionIds",
"secretsmanager:ListSecrets"
],
Resource = "*"
}
]
})
}
resource "aws_iam_role_policy_attachment" "attach_secrets_manager_read_to_task_role" {
role = aws_iam_role.ecs_task_role.name
policy_arn = aws_iam_policy.secrets_manager_read_policy.arn
}
resource "aws_iam_role_policy_attachment" "attach_s3_read_to_task_role" {
role = aws_iam_role.ecs_task_role.name
policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}
resource "aws_iam_role_policy_attachment" "attach_ses_to_task_role" {
role = aws_iam_role.ecs_task_role.name
policy_arn = "arn:aws:iam::aws:policy/AmazonSESFullAccess"
}
resource "aws_iam_role" "ecs_exec_role" {
name = "EcsExecRole"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ecs-tasks.amazonaws.com"
}
},
]
})
}
resource "aws_iam_policy" "log_groups_write_policy" {
name = "LogGroupsWrite"
description = "Read only access to secrets manager"
policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Sid = "",
Effect = "Allow",
Action = [
"logs:CreateLogGroup"
],
Resource = "*"
}
]
})
}
resource "aws_iam_role_policy_attachment" "attach_secrets_manager_read_to_exec_role" {
role = aws_iam_role.ecs_exec_role.name
policy_arn = aws_iam_policy.log_groups_write_policy.arn
}
resource "aws_iam_role_policy_attachment" "attach_ecs_task_exec_to_exec_role" {
role = aws_iam_role.ecs_exec_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}
resource "aws_iam_role_policy_attachment" "attach_fault_injection_simulator_to_exec_role" {
role = aws_iam_role.ecs_exec_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSFaultInjectionSimulatorECSAccess"
}
resource "aws_ecs_task_definition" "task_def" {
family = "${var.app_name}-task-def-${var.env}"
network_mode = "awsvpc"
task_role_arn = aws_iam_role.ecs_task_role.arn
execution_role_arn = aws_iam_role.ecs_exec_role.arn
requires_compatibilities = ["FARGATE"]
cpu = "256"
memory = "512"
runtime_platform {
cpu_architecture = "X86_64"
operating_system_family = "LINUX"
}
container_definitions = jsonencode([
{
name = "${var.app_name}-container-${var.env}"
image = "${data.aws_ecr_repository.ecr_repository.repository_url}:latest"
cpu = 0
essential = true
portMappings = [
{
containerPort = var.port
hostPort = var.port
},
]
environment = [
{
name = "PORT",
value = tostring("${var.port}")
},
{
name = "NODE_ENV",
value = var.env
}
]
logConfiguration = {
logDriver = "awslogs"
options = {
"awslogs-create-group" = "true"
"awslogs-group" = "${var.app_name}-task-def-${var.env}"
"awslogs-region" = "${var.region}"
"awslogs-stream-prefix" = "ecs"
}
}
},
])
}
resource "aws_ecs_service" "service" {
lifecycle {
ignore_changes = [
task_definition,
load_balancer,
]
}
cluster = aws_ecs_cluster.cluster.arn
name = "${var.app_name}-service-${var.env}"
task_definition = aws_ecs_task_definition.task_def.arn
load_balancer {
target_group_arn = aws_lb_target_group.blue_target.arn
container_name = "${var.app_name}-container-${var.env}"
container_port = var.port
}
capacity_provider_strategy {
capacity_provider = "FARGATE"
base = 0
weight = 1
}
scheduling_strategy = "REPLICA"
deployment_controller {
type = "CODE_DEPLOY"
}
platform_version = "1.4.0"
network_configuration {
assign_public_ip = true
subnets = data.aws_subnets.subnets.ids
security_groups = data.aws_security_groups.security_groups.ids
}
desired_count = 1
}
# DEPLOYMENT
resource "aws_codedeploy_app" "codedeploy_app" {
name = "${var.app_name}-application-${var.env}"
compute_platform = "ECS"
}
resource "aws_iam_role" "codedeploy_role" {
name = "CodedeployRole"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "codedeploy.amazonaws.com"
}
},
]
})
}
resource "aws_iam_role_policy_attachment" "attach_codedeploy_role" {
role = aws_iam_role.codedeploy_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole"
}
resource "aws_iam_role_policy_attachment" "attach_codedeploy_role_for_ecs" {
role = aws_iam_role.codedeploy_role.name
policy_arn = "arn:aws:iam::aws:policy/AWSCodeDeployRoleForECS"
}
resource "aws_codedeploy_deployment_group" "deployment_group" {
app_name = aws_codedeploy_app.codedeploy_app.name
deployment_config_name = "CodeDeployDefault.ECSAllAtOnce"
auto_rollback_configuration {
enabled = true
events = ["DEPLOYMENT_FAILURE"]
}
blue_green_deployment_config {
deployment_ready_option {
action_on_timeout = "CONTINUE_DEPLOYMENT"
wait_time_in_minutes = 0
}
terminate_blue_instances_on_deployment_success {
action = "TERMINATE"
termination_wait_time_in_minutes = 5
}
}
deployment_group_name = "${var.app_name}-deployment-group-${var.env}"
deployment_style {
deployment_option = "WITH_TRAFFIC_CONTROL"
deployment_type = "BLUE_GREEN"
}
load_balancer_info {
target_group_pair_info {
prod_traffic_route {
listener_arns = [aws_lb_listener.listener.arn]
}
target_group {
name = aws_lb_target_group.blue_target.name
}
target_group {
name = aws_lb_target_group.green_target.name
}
}
}
service_role_arn = aws_iam_role.codedeploy_role.arn
ecs_service {
service_name = aws_ecs_service.service.name
cluster_name = aws_ecs_cluster.cluster.name
}
}
resource "aws_appautoscaling_target" "scalable_target" {
service_namespace = "ecs"
resource_id = "service/${aws_ecs_cluster.cluster.name}/${aws_ecs_service.service.name}"
scalable_dimension = "ecs:service:DesiredCount"
min_capacity = 1
max_capacity = 5
}
resource "aws_appautoscaling_policy" "cpu_scaling_policy" {
name = "${var.app_name}-cpu-scaling-policy-${var.env}"
service_namespace = "ecs"
resource_id = "service/${aws_ecs_cluster.cluster.name}/${aws_ecs_service.service.name}"
scalable_dimension = "ecs:service:DesiredCount"
policy_type = "TargetTrackingScaling"
target_tracking_scaling_policy_configuration {
target_value = 70
predefined_metric_specification {
predefined_metric_type = "ECSServiceAverageCPUUtilization"
}
scale_out_cooldown = 300
scale_in_cooldown = 300
disable_scale_in = false
}
}
resource "aws_appautoscaling_policy" "memory_scaling_policy" {
name = "${var.app_name}-memory-scaling-policy-${var.env}"
service_namespace = "ecs"
resource_id = "service/${aws_ecs_cluster.cluster.name}/${aws_ecs_service.service.name}"
scalable_dimension = "ecs:service:DesiredCount"
policy_type = "TargetTrackingScaling"
target_tracking_scaling_policy_configuration {
target_value = 70
predefined_metric_specification {
predefined_metric_type = "ECSServiceAverageMemoryUtilization"
}
scale_out_cooldown = 300
scale_in_cooldown = 300
disable_scale_in = false
}
}
I've previously created a project without HTTPS and without a custom domain (started small, built it up step by step, at first without auto-scaling, logging, and other fancy stuff). It works fine: health checks pass, I can connect, and so on.
I then decided to create the exact same thing, just with HTTPS, and to assign a custom domain instead of calling the API through the ALB's DNS name.
Now the load balancer is constantly creating and destroying tasks because the health checks are failing.
I did some research and couldn't find a way to debug why this is happening. All I can tell from the container logs is that the container starts fine, with no errors, but the tasks are terminated because the health checks fail. I can't find any logs explaining why; I can only see that there are unhealthy targets.
Because everything sits in a VPC, the tasks don't have static IP addresses, and everything is set to HTTPS, the path from the load balancer down to the containers feels like a black box that is impossible to debug.
Out of other ideas, I opened my security group to allow all requests on all ports to check whether I could call the health check endpoint myself.
It turns out I can, but it returns a 502. Here is a more detailed log entry from the load balancer:
type https
time 2023-02-10T14:37:00.099726Z
elb app/myapp-load-balancer-staging/c6aabdb240600ca8
client:port myip:38255
target:port targetip:3000
request_processing_time -1
target_processing_time -1
response_processing_time -1
elb_status_code 502
target_status_code -
received_bytes 360
sent_bytes 277
"request" "GET https://api.myapp.com:3000/rest/health HTTP/1.1"
"user_agent" "PostmanRuntime/7.29.0"
ssl_cipher <some text>-SHA256
ssl_protocol TLSv1.2
target_group_arn arn:aws:elasticloadbalancing:eu-west-1:myaccountnumber:targetgroup/myapp-blue-target-staging/id
"trace_id" "Root=1-63e6568c-7c78be0f1e967e59370fbb80"
"domain_name" "api.myapp.com"
"chosen_cert_arn" "arn:aws:acm:eu-west-1:myaccountnumber:certificate/certid"
matched_rule_priority 0
request_creation_time 2023-02-10T14:37:00.096000Z
"actions_executed" "forward"
"redirect_url" "-"
"error_reason" "-"
"target:port_list" "172.31.2.112:3000"
"target_status_code_list" "-"
"classification" "Ambiguous"
"classification_reason" "UndefinedContentLengthSemantics"
All I could find is this guide on the topic, but it only explains the problem and doesn't show a solution.
Spotting what I'm doing wrong would already help a lot, but I'd also really appreciate guidance on how to debug these things between the load balancer and the containers, since they are locked down so tightly with VPCs and everything that even the admins cannot access them.
This is because you are using var.port for all of the port settings: the load balancer listener, the target group traffic port, and the container port. You have also configured the target group to use the HTTPS protocol. However, SSL traffic is terminated at the load balancer. Only the load balancer has an SSL certificate, so only the load balancer can handle HTTPS traffic; the traffic from the load balancer to the container is still plain HTTP.
You need to separate your port and protocol settings so that only the load balancer listener uses port 443 with HTTPS. Everything else should stay on HTTP, just as it was before you enabled SSL, when everything was working.
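A minimal sketch of that split, keeping the question's resource names and assuming var.port is the container port (3000 in your logs); green_target would be adjusted the same way as blue_target:
resource "aws_lb_listener" "listener" {
  load_balancer_arn = aws_lb.load_balancer.arn
  port              = 443 # TLS terminates here
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = data.aws_acm_certificate.cert.arn

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.blue_target.arn
  }
}

resource "aws_lb_target_group" "blue_target" {
  name        = "${var.app_name}-blue-target-${var.env}"
  protocol    = "HTTP"   # plain HTTP behind the load balancer
  port        = var.port # the container port
  target_type = "ip"
  vpc_id      = var.vpc_id

  health_check {
    protocol            = "HTTP" # the health check also talks plain HTTP to the container
    path                = var.health_check_path
    matcher             = 200
    healthy_threshold   = 5
    unhealthy_threshold = 2
    interval            = 30
    timeout             = 10
  }
}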

Why does deploying an ECR image to ECS use the first (and not the latest) task definition?

I use CircleCI with the aws/ecr and aws/ecs orbs, which upload the new Docker image and are supposed to update the ECS service.
My issue is that it uses the first, not the latest, task definition, and I cannot find a way to make the deployment use the latest one.
I use Terraform to manage my infrastructure, which looks like this:
# Get subnets
data "aws_subnets" "subnets" {
filter {
name = "vpc-id"
values = [var.vpc_id]
}
}
# Get security groups
data "aws_security_groups" "security_groups" {
filter {
name = "vpc-id"
values = [var.vpc_id]
}
}
# Create load balancer
resource "aws_lb" "load_balancer" {
name = "${var.app_name}-load-balancer-${var.env}"
subnets = data.aws_subnets.subnets.ids
security_groups = data.aws_security_groups.security_groups.ids
load_balancer_type = "application"
tags = {
Environment = "${var.env}"
}
}
resource "aws_lb_target_group" "blue_target" {
name = "${var.app_name}-blue-target-${var.env}"
protocol = "HTTPS"
port = var.port
target_type = "ip"
vpc_id = var.vpc_id
health_check {
healthy_threshold = 5
interval = 30
matcher = 200
path = "${var.health_check_path}"
protocol = "HTTPS"
timeout = 10
unhealthy_threshold = 2
}
}
resource "aws_lb_target_group" "green_target" {
name = "${var.app_name}-green-target-${var.env}"
protocol = "HTTPS"
port = var.port
target_type = "ip"
vpc_id = var.vpc_id
health_check {
healthy_threshold = 5
interval = 30
matcher = 200
path = "${var.health_check_path}"
protocol = "HTTPS"
timeout = 10
unhealthy_threshold = 2
}
}
data "aws_acm_certificate" "cert" {
domain = var.domain
statuses = ["ISSUED"]
most_recent = true
}
resource "aws_lb_listener" "listener" {
load_balancer_arn = aws_lb.load_balancer.arn
port = var.port
protocol = "HTTPS"
ssl_policy = "ELBSecurityPolicy-2016-08"
certificate_arn = data.aws_acm_certificate.cert.arn
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.blue_target.arn
}
}
# ECS
resource "aws_ecs_cluster" "cluster" {
name = "${var.app_name}-cluster-${var.env}"
}
data "aws_ecr_repository" "ecr_repository" {
name = var.image_repo_name
}
resource "aws_iam_role" "ecs_task_role" {
name = "EcsTaskRole"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ecs-tasks.amazonaws.com"
}
},
]
})
}
resource "aws_iam_policy" "secrets_manager_read_policy" {
name = "SecretsManagerRead"
description = "Read only access to secrets manager"
policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Sid = "",
Effect = "Allow",
Action = [
"secretsmanager:GetRandomPassword",
"secretsmanager:GetResourcePolicy",
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret",
"secretsmanager:ListSecretVersionIds",
"secretsmanager:ListSecrets"
],
Resource = "*"
}
]
})
}
resource "aws_iam_role_policy_attachment" "attach_secrets_manager_read_to_task_role" {
role = aws_iam_role.ecs_task_role.name
policy_arn = aws_iam_policy.secrets_manager_read_policy.arn
}
resource "aws_iam_role_policy_attachment" "attach_s3_read_to_task_role" {
role = aws_iam_role.ecs_task_role.name
policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}
resource "aws_iam_role_policy_attachment" "attach_ses_to_task_role" {
role = aws_iam_role.ecs_task_role.name
policy_arn = "arn:aws:iam::aws:policy/AmazonSESFullAccess"
}
resource "aws_iam_role" "ecs_exec_role" {
name = "EcsExecRole"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ecs-tasks.amazonaws.com"
}
},
]
})
}
resource "aws_iam_role_policy_attachment" "attach_ecs_task_exec_to_exec_role" {
role = aws_iam_role.ecs_exec_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}
resource "aws_iam_role_policy_attachment" "attach_fault_injection_simulator_to_exec_role" {
role = aws_iam_role.ecs_exec_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSFaultInjectionSimulatorECSAccess"
}
resource "aws_ecs_task_definition" "task_def" {
family = "${var.app_name}-task-def-${var.env}"
network_mode = "awsvpc"
task_role_arn = aws_iam_role.ecs_task_role.arn
execution_role_arn = aws_iam_role.ecs_exec_role.arn
requires_compatibilities = ["FARGATE"]
cpu = "256"
memory = "512"
runtime_platform {
cpu_architecture = "X86_64"
operating_system_family = "LINUX"
}
container_definitions = jsonencode([
{
name = "${var.app_name}-container-${var.env}"
image = "${data.aws_ecr_repository.ecr_repository.repository_url}:latest"
cpu = 0
essential = true
portMappings = [
{
containerPort = var.port
hostPort = var.port
},
]
environment = [
{
name = "PORT",
value = tostring("${var.port}")
},
{
name = "NODE_ENV",
value = var.env
}
]
logConfiguration = {
logDriver = "awslogs"
options = {
"awslogs-create-group" = "true"
"awslogs-group" = "${var.app_name}-task-def-${var.env}"
"awslogs-region" = "${var.region}"
"awslogs-stream-prefix" = "ecs"
}
}
},
])
}
resource "aws_ecs_service" "service" {
lifecycle {
ignore_changes = [
task_definition,
load_balancer,
]
}
cluster = aws_ecs_cluster.cluster.arn
name = "${var.app_name}-service-${var.env}"
task_definition = aws_ecs_task_definition.task_def.arn
load_balancer {
target_group_arn = aws_lb_target_group.blue_target.arn
container_name = "${var.app_name}-container-${var.env}"
container_port = var.port
}
capacity_provider_strategy {
capacity_provider = "FARGATE"
base = 0
weight = 1
}
scheduling_strategy = "REPLICA"
deployment_controller {
type = "CODE_DEPLOY"
}
platform_version = "1.4.0"
network_configuration {
assign_public_ip = true
subnets = data.aws_subnets.subnets.ids
security_groups = data.aws_security_groups.security_groups.ids
}
desired_count = 1
}
# DEPLOYMENT
resource "aws_codedeploy_app" "codedeploy_app" {
name = "${var.app_name}-application-${var.env}"
compute_platform = "ECS"
}
resource "aws_iam_role" "codedeploy_role" {
name = "CodedeployRole"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "codedeploy.amazonaws.com"
}
},
]
})
}
resource "aws_iam_role_policy_attachment" "attach_codedeploy_role" {
role = aws_iam_role.codedeploy_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole"
}
resource "aws_iam_role_policy_attachment" "attach_codedeploy_role_for_ecs" {
role = aws_iam_role.codedeploy_role.name
policy_arn = "arn:aws:iam::aws:policy/AWSCodeDeployRoleForECS"
}
resource "aws_codedeploy_deployment_group" "deployment_group" {
app_name = aws_codedeploy_app.codedeploy_app.name
deployment_config_name = "CodeDeployDefault.ECSAllAtOnce"
auto_rollback_configuration {
enabled = true
events = ["DEPLOYMENT_FAILURE"]
}
blue_green_deployment_config {
deployment_ready_option {
action_on_timeout = "CONTINUE_DEPLOYMENT"
wait_time_in_minutes = 0
}
terminate_blue_instances_on_deployment_success {
action = "TERMINATE"
termination_wait_time_in_minutes = 5
}
}
deployment_group_name = "${var.app_name}-deployment-group-${var.env}"
deployment_style {
deployment_option = "WITH_TRAFFIC_CONTROL"
deployment_type = "BLUE_GREEN"
}
load_balancer_info {
target_group_pair_info {
prod_traffic_route {
listener_arns = [aws_lb_listener.listener.arn]
}
target_group {
name = aws_lb_target_group.blue_target.name
}
target_group {
name = aws_lb_target_group.green_target.name
}
}
}
service_role_arn = aws_iam_role.codedeploy_role.arn
ecs_service {
service_name = aws_ecs_service.service.name
cluster_name = aws_ecs_cluster.cluster.name
}
}
resource "aws_appautoscaling_target" "scalable_target" {
service_namespace = "ecs"
resource_id = "service/${aws_ecs_cluster.cluster.name}/${aws_ecs_service.service.name}"
scalable_dimension = "ecs:service:DesiredCount"
min_capacity = 1
max_capacity = 5
}
resource "aws_appautoscaling_policy" "cpu_scaling_policy" {
name = "${var.app_name}-cpu-scaling-policy-${var.env}"
service_namespace = "ecs"
resource_id = "service/${aws_ecs_cluster.cluster.name}/${aws_ecs_service.service.name}"
scalable_dimension = "ecs:service:DesiredCount"
policy_type = "TargetTrackingScaling"
target_tracking_scaling_policy_configuration {
target_value = 70
predefined_metric_specification {
predefined_metric_type = "ECSServiceAverageCPUUtilization"
}
scale_out_cooldown = 300
scale_in_cooldown = 300
disable_scale_in = false
}
}
resource "aws_appautoscaling_policy" "memory_scaling_policy" {
name = "${var.app_name}-memory-scaling-policy-${var.env}"
service_namespace = "ecs"
resource_id = "service/${aws_ecs_cluster.cluster.name}/${aws_ecs_service.service.name}"
scalable_dimension = "ecs:service:DesiredCount"
policy_type = "TargetTrackingScaling"
target_tracking_scaling_policy_configuration {
target_value = 70
predefined_metric_specification {
predefined_metric_type = "ECSServiceAverageMemoryUtilization"
}
scale_out_cooldown = 300
scale_in_cooldown = 300
disable_scale_in = false
}
}
And my CircleCI config (the relevant parts):
deploy:
  executor: aws-ecr/default
  working_directory: ~/code
  parameters:
    env:
      type: string
  steps:
    - attach_workspace:
        at: ~/
    - aws-ecr/build-and-push-image:
        # attach-workspace: true
        # create-repo: true
        repo: "test-<< parameters.env >>"
        tag: "latest,${CIRCLE_BUILD_NUM}"
    - aws-ecs/update-service: # orb built-in job
        family: "test-task-def-<< parameters.env >>"
        service-name: "test-service-<< parameters.env >>"
        cluster: "test-cluster-<< parameters.env >>"
        container-image-name-updates: "container=test-container-<< parameters.env >>,tag=${CIRCLE_BUILD_NUM}"
        deployment-controller: "CODE_DEPLOY"
        codedeploy-application-name: "test-application-<< parameters.env >>"
        codedeploy-deployment-group-name: "test-deployment-group-<< parameters.env >>"
        codedeploy-load-balanced-container-name: "test-container-<< parameters.env >>"
        codedeploy-load-balanced-container-port: 3000
Can anyone spot why my service isn't using the latest task definition, and why the deployments keep using the old task definition for the service?
Some additional info:
The deployment created by CircleCI uses the latest task definition, and that is visible in the deployment revision (appspec) as well.
I made a mistake when I first created the task definition: I used an ECR image's ID (a SHA digest) as container_definitions.image, so it cannot pull images.
The issue is that the ECS service keeps using that task definition, constantly trying to create tasks and failing during the CodeDeploy deployment, which is somewhat unexpected, as it should use the new task definition to create the new tasks.
So I have a service with a wrongly configured task definition, and when I try to fix it, the deployment cannot apply the fix, because it keeps using the wrong task definition to create new tasks, and the deployment gets stuck.

Terraform error updating CloudFront Distribution InvalidLambdaFunctionAssociation: The function cannot have environment variables

I am trying to build a Terraform template that creates an AWS S3 bucket, a CloudFront distribution, and a Lambda function that should be associated with the CloudFront distribution.
As soon as I add "lambda_function_association" to the CloudFront resource, I get the following error:
Error: error updating CloudFront Distribution (XXXXXXXXXXXXXXX): InvalidLambdaFunctionAssociation: The function cannot have environment variables. Function: arn:aws:lambda:us-east-1:XXXXXXXXXXXXX:function:testtools:4
status code: 400, request id: 3ce25af1-8341-41c0-8d35-4c3c91c2c001
with aws_cloudfront_distribution.testtools,
on main.tf line 42, in resource "aws_cloudfront_distribution" "testtools":
42: resource "aws_cloudfront_distribution" "testtools" {
lambda_function_association {
  event_type   = "origin-response"
  lambda_arn   = "${aws_lambda_function.testtools.qualified_arn}"
  include_body = false
}
I think it is related to the lambda_arn that is used inside the function association.
resource "aws_cloudfront_distribution" "testtools" {
depends_on = [aws_s3_bucket.testtools, aws_lambda_function.testtools]
origin {
domain_name = aws_s3_bucket.testtools.bucket_regional_domain_name
origin_id = var.s3_origin_id
s3_origin_config {
origin_access_identity = aws_cloudfront_origin_access_identity.testtools.cloudfront_access_identity_path
}
}
enabled = true
is_ipv6_enabled = true
comment = "testtools"
default_root_object = "index.html"
provider = aws
logging_config {
include_cookies = false
bucket = "testtools.s3.amazonaws.com"
prefix = "testtools"
}
aliases = ["testtools.int.test.net"]
default_cache_behavior {
allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
cached_methods = ["GET", "HEAD"]
target_origin_id = var.s3_origin_id
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
viewer_protocol_policy = "allow-all"
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
lambda_function_association {
event_type = "origin-response"
lambda_arn = "${aws_lambda_function.testtools.qualified_arn}"
include_body = false
}
}
price_class = "PriceClass_200"
restrictions {
geo_restriction {
restriction_type = "whitelist"
locations = ["DE", "AU", "CH", "BG"]
}
}
tags = {
Environment = "production"
}
viewer_certificate {
acm_certificate_arn = var.ssl_cert_arn
ssl_support_method = "sni-only"
minimum_protocol_version = "TLSv1"
}
}
resource "aws_lambda_function" "testtools" {
filename = "lambda_function_payload.zip"
function_name = "testtools"
role = aws_iam_role.testtools.arn
handler = "index.test"
publish = true
provider = aws.useast1
source_code_hash = filebase64sha256("lambda_function_payload.zip")
runtime = "nodejs12.x"
environment {
variables = {
foo = "bar"
}
}
}
When using Lambda@Edge, your Lambda function has a lot more restrictions it has to adhere to. Some of the restrictions also depend on whether you're attaching the function to the origin request/response or the viewer request/response.
One of these restrictions is that you can't use environment variables. You can find more info on this page: Lambda@Edge function restrictions
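In Terraform terms that means publishing the function without an environment block. A minimal sketch, based on the testtools function from the question:
resource "aws_lambda_function" "testtools" {
  filename         = "lambda_function_payload.zip"
  function_name    = "testtools"
  role             = aws_iam_role.testtools.arn
  handler          = "index.test"
  publish          = true        # Lambda@Edge requires a published version
  provider         = aws.useast1 # and the function must live in us-east-1
  source_code_hash = filebase64sha256("lambda_function_payload.zip")
  runtime          = "nodejs12.x"

  # No environment {} block: Lambda@Edge functions cannot have environment variables.
  # Bake the values into the deployment package or fetch them at runtime instead.
}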

AWS API Gateway and static HTML: "Execution failed due to configuration error: statusCode should be an integer which defined in request template"

I am trying to serve static content using AWS API Gateway.
When I attempt to invoke the URL, both from the test page and from curl, I get the error:
"Execution failed due to configuration error: statusCode should be an integer which defined in request template".
This is my Terraform configuration:
resource "aws_api_gateway_rest_api" "raspberry_api" {
name = "raspberry_api"
}
resource "aws_acm_certificate" "raspberry_alexa_mirko_io" {
domain_name = "raspberry.alexa.mirko.io"
validation_method = "DNS"
lifecycle {
create_before_destroy = true
}
}
resource "aws_route53_record" "raspberry_alexa_mirko_io_cert_validation" {
name = aws_acm_certificate.raspberry_alexa_mirko_io.domain_validation_options.0.resource_record_name
type = aws_acm_certificate.raspberry_alexa_mirko_io.domain_validation_options.0.resource_record_type
zone_id = var.route53_zone_id
records = [aws_acm_certificate.raspberry_alexa_mirko_io.domain_validation_options.0.resource_record_value]
ttl = 60
}
resource "aws_route53_record" "raspberry_alexa_mirko_io" {
zone_id = var.route53_zone_id
name = aws_acm_certificate.raspberry_alexa_mirko_io.domain_name
type = "A"
alias {
name = aws_api_gateway_domain_name.raspberry_alexa_mirko_io.cloudfront_domain_name
zone_id = aws_api_gateway_domain_name.raspberry_alexa_mirko_io.cloudfront_zone_id
evaluate_target_health = true
}
}
resource "aws_acm_certificate_validation" "raspberry_alexa_mirko_io" {
certificate_arn = aws_acm_certificate.raspberry_alexa_mirko_io.arn
validation_record_fqdns = [aws_route53_record.raspberry_alexa_mirko_io_cert_validation.fqdn]
provider = aws.useast1
}
resource "aws_api_gateway_domain_name" "raspberry_alexa_mirko_io" {
certificate_arn = aws_acm_certificate_validation.raspberry_alexa_mirko_io.certificate_arn
domain_name = aws_acm_certificate.raspberry_alexa_mirko_io.domain_name
}
resource "aws_api_gateway_base_path_mapping" "raspberry_alexa_mirko_io_base_path_mapping" {
api_id = aws_api_gateway_rest_api.raspberry_api.id
domain_name = aws_api_gateway_domain_name.raspberry_alexa_mirko_io.domain_name
}
resource "aws_api_gateway_resource" "home" {
rest_api_id = aws_api_gateway_rest_api.raspberry_api.id
parent_id = aws_api_gateway_rest_api.raspberry_api.root_resource_id
path_part = "login"
}
resource "aws_api_gateway_method" "login" {
rest_api_id = aws_api_gateway_rest_api.raspberry_api.id
resource_id = aws_api_gateway_resource.home.id
http_method = "GET"
authorization = "NONE"
}
resource "aws_api_gateway_integration" "integration" {
rest_api_id = aws_api_gateway_rest_api.raspberry_api.id
resource_id = aws_api_gateway_resource.subscribe_raspberry.id
http_method = aws_api_gateway_method.subscribe.http_method
integration_http_method = "POST"
type = "AWS_PROXY"
uri = aws_lambda_function.raspberry_lambda.invoke_arn
# This was just a failed attempt. It did not fix anything
request_templates = {
"text/html" = "{\"statusCode\": 200}"
}
}
resource "aws_api_gateway_integration" "login_page" {
rest_api_id = aws_api_gateway_rest_api.raspberry_api.id
resource_id = aws_api_gateway_resource.home.id
http_method = aws_api_gateway_method.login.http_method
type = "MOCK"
timeout_milliseconds = 29000
}
resource "aws_api_gateway_method_response" "response_200" {
rest_api_id = aws_api_gateway_rest_api.raspberry_api.id
resource_id = aws_api_gateway_resource.home.id
http_method = aws_api_gateway_method.login.http_method
status_code = "200"
}
resource "aws_api_gateway_integration_response" "login_page" {
rest_api_id = aws_api_gateway_rest_api.raspberry_api.id
resource_id = aws_api_gateway_resource.home.id
http_method = aws_api_gateway_method.login.http_method
status_code = aws_api_gateway_method_response.response_200.status_code
response_templates = {
"text/html" = data.template_file.login_page.rendered
}
}
resource "aws_api_gateway_deployment" "example" {
depends_on = [
aws_api_gateway_integration.login_page
]
rest_api_id = aws_api_gateway_rest_api.raspberry_api.id
stage_name = "production"
}
I have followed the instructions in this blog, with no success.
"200" (with quotes) is considered a string, not an integer
try status_code = 200 (without quotes)
Just to repost the excellent answer of TheClassic here, the format seems to be:
request_templates = {
  "application/json" = jsonencode(
    {
      statusCode = 200
    }
  )
}
I also had this same problem, and this works for me.
I had the same error because my code looked like this beforehand, inspired by the Terraform docs:
resource "aws_api_gateway_integration" "api_gateway" {
http_method = aws_api_gateway_method.api_gateway.http_method
resource_id = aws_api_gateway_resource.api_gateway.id
rest_api_id = aws_api_gateway_rest_api.api_gateway.id
type = "MOCK"
}
After reading this thread, it now works looking like this:
resource "aws_api_gateway_integration" "api_gateway" {
http_method = aws_api_gateway_method.api_gateway.http_method
resource_id = aws_api_gateway_resource.api_gateway.id
rest_api_id = aws_api_gateway_rest_api.api_gateway.id
type = "MOCK"
request_templates = {
"application/json" = jsonencode(
{
statusCode = 200
}
)
}
}
As per Bernie's comment, this status code needs to be explicitly provided in the request_templates attribute of the aws_api_gateway_integration Terraform resource.
After adding it, I finally got a 200 for the OPTIONS methods that are integrated via the MOCK endpoint.
For others who might see this: when you use MOCK as your integration type, this error can also be caused by a mismatch. Verify that your request templates contain a statusCode and that its value equals the status code of one of your integration responses (IntegrationResponses/ResponseTemplates/StatusCode).
Something like:
requestTemplates: {
  "application/json": "{\"statusCode\": 200}"
}
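Expressed in Terraform, the same requirement is that the MOCK integration's request template and the integration response agree on the status code. A rough sketch with hypothetical resource names (it assumes a matching aws_api_gateway_method_response with status_code "200" exists):
resource "aws_api_gateway_integration" "mock" {
  rest_api_id = aws_api_gateway_rest_api.example.id
  resource_id = aws_api_gateway_resource.example.id
  http_method = aws_api_gateway_method.example.http_method
  type        = "MOCK"

  request_templates = {
    "application/json" = jsonencode({ statusCode = 200 }) # must match the integration response below
  }
}

resource "aws_api_gateway_integration_response" "mock_200" {
  rest_api_id = aws_api_gateway_rest_api.example.id
  resource_id = aws_api_gateway_resource.example.id
  http_method = aws_api_gateway_method.example.http_method
  status_code = "200" # the same code the request template returns
}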

Using Terraform modules for multiple regional API Gateways

I am using Terraform to create AWS infrastructure with 4 regional API Gateways, each with a corresponding DynamoDB table in its region.
I want to create one module consisting of (API + DynamoDB) with configurable region-specific values. Is this possible with Terraform, or would I have to create 4 separate API and 4 separate DynamoDB resources?
Any links or documentation would be helpful as well.
This is what I currently have working for one regional API Gateway and the corresponding DynamoDB tables:
variable "access_key" {}
variable "secret_key" {}
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
alias = "us-east-1"
region = "us-east-1"
}
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
alias = "us-west-2"
region = "us-west-2"
}
resource "aws_dynamodb_table" "us-east-1" {
provider = "aws.us-east-1"
hash_key = "test_tf"
name = "test_tf"
stream_enabled = true
stream_view_type = "NEW_AND_OLD_IMAGES"
read_capacity = 1
write_capacity = 1
attribute {
name = "test_tf"
type = "S"
}
}
resource "aws_dynamodb_table" "us-west-2" {
provider = "aws.us-west-2"
hash_key = "test_tf"
name = "test_tf"
stream_enabled = true
stream_view_type = "NEW_AND_OLD_IMAGES"
read_capacity = 1
write_capacity = 1
attribute {
name = "test_tf"
type = "S"
}
}
resource "aws_dynamodb_global_table" "test_tf" {
depends_on = ["aws_dynamodb_table.us-east-1", "aws_dynamodb_table.us-west-2"]
provider = "aws.us-east-1"
name = "test_tf"
replica {
region_name = "us-east-1"
}
replica {
region_name = "us-west-2"
}
}
resource "aws_api_gateway_rest_api" "test-us-east-1" {
name = "test-us-east-1"
endpoint_configuration {
types = ["REGIONAL"]
}
}
resource "aws_api_gateway_resource" "sample_test" {
rest_api_id = "${aws_api_gateway_rest_api.test-us-east-1.id}"
parent_id = "${aws_api_gateway_rest_api.test-us-east-1.root_resource_id}"
path_part = "{testid}"
}
resource "aws_api_gateway_method" "sample_get" {
rest_api_id = "${aws_api_gateway_rest_api.test-us-east-1.id}"
resource_id = "${aws_api_gateway_resource.sample_test.id}"
http_method = "GET"
authorization = "NONE"
}
resource "aws_api_gateway_deployment" "Deployment" {
depends_on = ["aws_api_gateway_method.sample_get"]
rest_api_id = "${aws_api_gateway_rest_api.test-us-east-1.id}"
stage_name = "test"
}
resource "aws_api_gateway_integration" "test" {
rest_api_id = "${aws_api_gateway_rest_api.test-us-east-1.id}"
resource_id = "${aws_api_gateway_resource.sample_test.id}"
http_method = "${aws_api_gateway_method.sample_get.http_method}"
integration_http_method = "POST"
type = "AWS"
uri = "arn:aws:apigateway:us-east-1:dynamodb:action/GetItem"
credentials = "${aws_iam_role.apiGatewayDynamoDbAccessRole.arn}"
passthrough_behavior = "WHEN_NO_TEMPLATES"
request_templates = {
"application/json" = <<EOF
{
"TableName": "test_tf",
"Key":
{
"test_tf":
{
"S": "$input.params('testid')"
}
}
}
EOF
}
}
resource "aws_iam_policy" "api_dbaccess_policy" {
name = "api_dbaccess_policy"
policy = "${file("api-dynamodb-policy.json")}"
depends_on = [
"aws_dynamodb_table.us-east-1"
]
}
resource "aws_iam_role" "apiGatewayDynamoDbAccessRole" {
name = "apiGatewayDynamoDbAccessRole"
assume_role_policy = "${file("assume-role-policy.json")}"
depends_on = [
"aws_dynamodb_table.us-east-1"
]
}
resource "aws_iam_policy_attachment" "api-dbaccess-policy-attach" {
name = "api-dbaccess-policy-attachment"
roles = ["${aws_iam_role.apiGatewayDynamoDbAccessRole.name}"]
policy_arn = "${aws_iam_policy.api_dbaccess_policy.arn}"
}
resource "aws_api_gateway_method_response" "200" {
rest_api_id = "${aws_api_gateway_rest_api.test-us-east-1.id}"
resource_id = "${aws_api_gateway_resource.sample_test.id}"
http_method = "${aws_api_gateway_method.sample_get.http_method}"
status_code = "200"
}
resource "aws_api_gateway_integration_response" "us-east-1-response" {
rest_api_id = "${aws_api_gateway_rest_api.test-us-east-1.id}"
resource_id = "${aws_api_gateway_resource.sample_test.id}"
http_method = "${aws_api_gateway_method.sample_get.http_method}"
status_code = "${aws_api_gateway_method_response.200.status_code}"
response_templates = {
"application/json" = <<EOF
{
#set($sampletest = $input.path('Item.test_tf.S'))
"test": #if ($sampletest && $sampletest != '')
true
#else
false
#end
}
EOF
}
}
Yes, this is possible with Terraform.
In the root module you define 4 AWS providers, giving an alias to each one:
provider "aws" {
alias = "oregon"
region = "us-west-2"
}
provider "aws" {
alias = "virginia"
region = "us-east-1"
}
Then, when you instantiate your modules, instead of relying on provider inheritance you pass the provider explicitly by alias:
module "api_gateway" {
source = "./api_gateway"
providers = {
aws = "aws.oregon"
}
}
Rinse and repeat 4 times for each region.
You can find the docs here: https://www.terraform.io/docs/modules/usage.html
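On Terraform 0.12 and later, the provider reference in the providers map is written without quotes. A sketch of a second instantiation for another region, assuming the same ./api_gateway module (the module input shown is hypothetical):
module "api_gateway_virginia" {
  source = "./api_gateway"

  providers = {
    aws = aws.virginia
  }

  # Region-specific inputs for the module go here, e.g. a hypothetical:
  # table_name = "test_tf"
}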