Issue with Creating Application Auto Scaling with AWS Lambda using Terraform

I'm converting some CloudFormation into Terraform that creates a Lambda and then sets up Provisioned Concurrency and Application Auto Scaling for it. When Terraform runs the aws_appautoscaling_target resource, it fails with the following message:
Error: Error creating application autoscaling target: ValidationException: Unsupported service namespace, resource type or scalable dimension
I haven't found too many examples of the aws_appautoscaling_target resource being used with Lambdas. Is this no longer supported? For reference, I'm running Terraform version 1.0.11 and I'm using AWS provider version 3.66.0. I'm posting my Terraform below. Thanks.
data "archive_file" "foo_create_dist_pkg" {
source_dir = var.lambda_file_location
output_path = "foo.zip"
type = "zip"
}
resource "aws_lambda_function" "foo" {
function_name = "foo"
description = "foo lambda"
handler = "foo.main"
runtime = "python3.8"
publish = true
role = "arn:aws:iam::${local.account_id}:role/serverless-role"
memory_size = 256
timeout = 900
depends_on = [data.archive_file.foo_create_dist_pkg]
source_code_hash = data.archive_file.foo_create_dist_pkg.output_base64sha256
filename = data.archive_file.foo_create_dist_pkg.output_path
}
resource "aws_lambda_provisioned_concurrency_config" "foo_provisioned_concurrency" {
function_name = aws_lambda_function.foo.function_name
provisioned_concurrent_executions = 15
qualifier = aws_lambda_function.foo.version
}
resource "aws_appautoscaling_target" "autoscale_foo" {
max_capacity = var.PCMax
min_capacity = var.PCMin
resource_id = "function:${aws_lambda_function.foo.function_name}"
scalable_dimension = "lambda:function:ProvisionedConcurrency"
service_namespace = "lambda"
}

You need to publish your Lambda to get a numeric version, which you already do with publish = true in the aws_lambda_function resource. The missing piece is that the resource_id of the aws_appautoscaling_target must include that qualifier, in the form function:<function-name>:<version-or-alias>:
resource "aws_appautoscaling_target" "autoscale_foo" {
max_capacity = var.PCMax
min_capacity = var.PCMin
resource_id = "function:${aws_lambda_function.foo.function_name}:${aws_lambda_function.foo.version}"
scalable_dimension = "lambda:function:ProvisionedConcurrency"
service_namespace = "lambda"
}
Alternatively, you can create an aws_lambda_alias and use it in the aws_appautoscaling_target instead of the Lambda version. This still requires the function to be published, since an alias must point at a published version.
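A minimal sketch of the alias approach, assuming a hypothetical alias named "live":

resource "aws_lambda_alias" "foo_live" {
  name             = "live"
  function_name    = aws_lambda_function.foo.function_name
  function_version = aws_lambda_function.foo.version
}

resource "aws_appautoscaling_target" "autoscale_foo" {
  max_capacity = var.PCMax
  min_capacity = var.PCMin
  # The qualifier is now the alias name rather than the numeric version.
  resource_id        = "function:${aws_lambda_function.foo.function_name}:${aws_lambda_alias.foo_live.name}"
  scalable_dimension = "lambda:function:ProvisionedConcurrency"
  service_namespace  = "lambda"
}

One advantage of the alias is that the resource_id stays stable across deployments, since the alias name does not change when new versions are published.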

Related

AWS Lambda not updating using Terraform source_code_hash property

I have a Terraform configuration for creating a Lambda resource and am using the source_code_hash property to detect changes to the zip. Alongside the zip, I also upload to S3 a separate file containing the zip's SHA256 hash.
I am able to deploy once, but the running Lambda is not updated after I upload a new zip; in the build log I just see "Still creating...".
How can I see the value of the source_code_hash property? I only see + source_code_hash = (known after apply) in both the plan and apply output, so I don't know whether the value is being updated or not.
My code is below:
data "aws_s3_object" "source_hash" {
bucket = "dap-bucket-2"
key = "lambda.zip.sha256"
}
resource "aws_lambda_function" "lambda" {
function_name = "lambda_function_name"
s3_bucket = "dap-bucket-2"
s3_key = "lambda.zip"
handler = "template.handleRequest"
runtime = "java11"
role = aws_iam_role.lambda_exec.arn
source_code_hash = "${data.aws_s3_object.source_hash.body}"
publish = true
}
For S3 objects, you would usually use the etag rather than the body:
source_code_hash = data.aws_s3_object.source_hash.etag
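A rough sketch of how that would look, with a hypothetical output block added so you can inspect the resolved hash after apply. One caveat: the etag is only the MD5 of the object for single-part uploads without SSE-KMS encryption, but any change in its value is still enough to trigger a redeploy:

resource "aws_lambda_function" "lambda" {
  function_name = "lambda_function_name"
  s3_bucket     = "dap-bucket-2"
  s3_key        = "lambda.zip"
  handler       = "template.handleRequest"
  runtime       = "java11"
  role          = aws_iam_role.lambda_exec.arn
  # The etag changes whenever the hash file in S3 changes, forcing an update.
  source_code_hash = data.aws_s3_object.source_hash.etag
  publish          = true
}

# Hypothetical output, added only to make the resolved hash visible after apply.
output "lambda_source_code_hash" {
  value = aws_lambda_function.lambda.source_code_hash
}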

How to avoid Terraform's repeated in-place updates of AWS autoscaling policy?

More specifically, adjustment_type keeps getting (seemingly) updated on each terraform plan. Why is that and can it be avoided?
Terraform will perform the following actions:
  # aws_autoscaling_policy.this will be updated in-place
  ~ resource "aws_autoscaling_policy" "this" {
      + adjustment_type = "ChangeInCapacity"
Here's the autoscaling policy definition:
resource "aws_autoscaling_policy" "this" {
name = var.service_name # Same between `terraform` invocations.
autoscaling_group_name = aws_autoscaling_group.this.name
adjustment_type = "ChangeInCapacity"
policy_type = "TargetTrackingScaling"
# ASG merely serves as an EC2-ready-to-be-made-scalable, hence `false`.
enabled = false
target_tracking_configuration {
predefined_metric_specification {
predefined_metric_type = "ASGAverageCPUUtilization"
}
target_value = 99.0
}
}
terraform 1.3.1
terragrunt 0.40.2
From the info in the question there is no obvious reason why it would change. It's possible that you are experiencing "ghost" changes, a known, long-standing Terraform issue. You can try using ignore_changes:
resource "aws_autoscaling_policy" "this" {
name = var.service_name # Same between `terraform` invocations.
autoscaling_group_name = aws_autoscaling_group.this.name
adjustment_type = "ChangeInCapacity"
policy_type = "TargetTrackingScaling"
# ASG merely serves as an EC2-ready-to-be-made-scalable, hence `false`.
enabled = false
target_tracking_configuration {
predefined_metric_specification {
predefined_metric_type = "ASGAverageCPUUtilization"
}
target_value = 99.0
}
lifecycle {
ignore_changes = [
adjustment_type
]
}
}
The hashicorp/aws provider uses a single resource type aws_autoscaling_policy to represent both Target Tracking Scaling Policies and Simple/Step Scaling Policies. Unfortunately each of these policy types expects a different subset of arguments, and (as discussed in provider bug #18853) there are some missing validation rules to catch when you use an argument that isn't appropriate for your selected type.
You are using policy_type = "TargetTrackingScaling", but Scaling Adjustment Types are for step/simple scaling policies. Therefore I believe the AWS API is silently ignoring your adjustment_type value when creating/updating. When the AWS provider reads your policy back from the remote API during the refresh step, the API indicates that adjustment_type isn't set and so the AWS provider thinks it's changed outside of Terraform.
The best fix here would be for the provider to return an error if you use an argument that isn't appropriate for your policy_type, but in the meantime I suspect you can make this problem go away by leaving adjustment_type unset in your configuration, which should then make the desired state described by your configuration match the actual state of the remote object.
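For illustration, a sketch of the same policy with the step-scaling argument left unset:

resource "aws_autoscaling_policy" "this" {
  name                   = var.service_name
  autoscaling_group_name = aws_autoscaling_group.this.name
  # adjustment_type is omitted: it only applies to simple/step scaling policies.
  policy_type = "TargetTrackingScaling"
  enabled     = false

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 99.0
  }
}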

Modularising Terraform IaC across microservice environments

I am trying to refactor my Terraform IaC setup to repeat less code and make changes more quickly. I am working on a serverless microservice application, so, for example, I am running a few instances of aws-ecs-autoscaling and aws-ecs. I have develop and production environments, and within each one a modules folder where each microservice module is defined. Please see the image for a mock folder structure.
As you can see there are many repeated folders. In the main.tf of the dev and prod environments each module is called and vars assigned.
E.g.:
ecs-autoscaling-microservice-A main.tf
resource "aws_appautoscaling_target" "dev_ecs_autoscaling_microservice_A_target" {
max_capacity = 2
min_capacity = 1
resource_id = "service/${var.ecs_cluster.name}/${var.ecs_service.name}"
scalable_dimension = "ecs:service:DesiredCount"
service_namespace = "ecs"
}
resource "aws_appautoscaling_policy" "dev_ecs_autoscaling_microservice_A_memory" {
name = "dev_ecs_autoscaling_microservice_A_memory"
policy_type = "TargetTrackingScaling"
resource_id = aws_appautoscaling_target.dev_ecs_autoscaling_microservice_A_target.resource_id
scalable_dimension = aws_appautoscaling_target.dev_ecs_autoscaling_microservice_A_target.scalable_dimension
service_namespace = aws_appautoscaling_target.dev_ecs_autoscaling_microservice_A_target.service_namespace
target_tracking_scaling_policy_configuration {
predefined_metric_specification {
predefined_metric_type = "ECSServiceAverageMemoryUtilization"
}
target_value = 80
}
}
resource "aws_appautoscaling_policy" "dev_ecs_autoscaling_microservice_A_cpu" {
name = "dev_ecs_autoscaling_microservice_A_cpu"
policy_type = "TargetTrackingScaling"
resource_id = aws_appautoscaling_target.dev_ecs_autoscaling_microservice_A_target.resource_id
scalable_dimension = aws_appautoscaling_target.dev_ecs_autoscaling_microservice_A_target.scalable_dimension
service_namespace = aws_appautoscaling_target.dev_ecs_autoscaling_microservice_A_target.service_namespace
target_tracking_scaling_policy_configuration {
predefined_metric_specification {
predefined_metric_type = "ECSServiceAverageCPUUtilization"
}
target_value = 60
}
}
DEVELOP main.tf
module "ecs_autoscaling_microservice_A" {
source = "./modules/ecs-autoscaling-microservice-A"
ecs_cluster = module.ecs_autoscaling_microservice_A.ecs_cluster_A
ecs_service = module.ecs_autoscaling_microservice_A.ecs_service_A
}
My question is: what is the best way to remove all of these duplicated modules, so that instead of an ECS module per microservice in both the prod and dev environments, I can have a single ECS module that can be reused for any microservice in any environment? See the image for the desired folder structure. Is this possible, or am I wasting my time? I was thinking of using some kind of for_each where each microservice is defined beforehand with its own mapped variables, but I would like some guidance please. Thanks in advance!
I suggest you read Yevgeniy Brikman's excellent series of blog posts on Terraform, which clarified my own understanding of it:
https://blog.gruntwork.io/a-comprehensive-guide-to-terraform-b3d32832baca
This exact question seems to be touched on in this one: https://blog.gruntwork.io/how-to-create-reusable-infrastructure-with-terraform-modules-25526d65f73d
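As a rough sketch of the for_each idea from the question (the module path, input names, and per-service settings below are assumptions, not a prescribed layout):

# environments/dev/main.tf
locals {
  # Each microservice is defined once, with its own mapped variables.
  microservices = {
    microservice_a = { cpu_target = 60, memory_target = 80, max_capacity = 2 }
    microservice_b = { cpu_target = 60, memory_target = 80, max_capacity = 4 }
  }
}

module "ecs_autoscaling" {
  # One generic module, instantiated once per microservice.
  source   = "../../modules/ecs-autoscaling"
  for_each = local.microservices

  service_name     = each.key
  ecs_cluster_name = var.ecs_cluster_name # hypothetical input
  ecs_service_name = each.key
  cpu_target       = each.value.cpu_target
  memory_target    = each.value.memory_target
  max_capacity     = each.value.max_capacity
}

Inside the module, the resources from your example would then reference var.service_name and friends instead of the hard-coded dev_ecs_autoscaling_microservice_A_* names. Note that for_each on module blocks requires Terraform 0.13 or later.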

Setting up ssl cert for load balancer terraform

I have a cert set up in the London region and attached to a load balancer listener, which works perfectly. I am attempting to create another cert from the same Route53 domain and attach it to a listener, but this time in the Ireland region.
My Terraform looks like this:
resource "aws_acm_certificate" "default" {
count = var.prod ? 1 : 0
domain_name = "www.example.uk"
subject_alternative_names = [
"example.uk",
]
validation_method = "DNS"
}
resource "aws_route53_record" "validation" {
count = var.prod ? 1 : 0
name = aws_acm_certificate.default[count.index].domain_validation_options[count.index].resource_record_name
type = aws_acm_certificate.default[count.index].domain_validation_options[count.index].resource_record_type
zone_id = "Z0725470IF9R8J77LPTU"
records = [
aws_acm_certificate.default[count.index].domain_validation_options[count.index].resource_record_value]
ttl = "60"
}
resource "aws_route53_record" "validation_alt1" {
count = var.prod ? 1 : 0
name = aws_acm_certificate.default[count.index].domain_validation_options[count.index + 1].resource_record_name
type = aws_acm_certificate.default[count.index].domain_validation_options[count.index + 1].resource_record_type
zone_id = "Z0725470IF9R8J77LPTU"
records = [
aws_acm_certificate.default[count.index].domain_validation_options[count.index + 1].resource_record_value]
ttl = 60
}
resource "aws_acm_certificate_validation" "default" {
count = var.prod ? 1 : 0
certificate_arn = aws_acm_certificate.default[count.index].arn
validation_record_fqdns = [
aws_route53_record.validation[count.index].fqdn,
aws_route53_record.validation_alt1[count.index].fqdn,
]
}
This worked perfectly the first time I set it up in the London region; when I try to run it in the Ireland region I get the following errors:
I'm not 100% sure why the cert validation seems to bring back no records.
There is a change in the domain_validation_options attribute in version 3 of the AWS provider: previously it was a list, and it has been changed to a set. So you have two options:
Version-lock the AWS provider to version 2:
provider "aws" {
version = "~>2"
}
Update the code to work with the new provider version. For that you need to replace count with for_each and make updates similar to those shown below:
resource "aws_route53_record" "existing" {
for_each = {
for dvo in aws_acm_certificate.existing.domain_validation_options : dvo.domain_name => {
name = dvo.resource_record_name
record = dvo.resource_record_value
type = dvo.resource_record_type
}
}
allow_overwrite = true
name = each.value.name
records = [each.value.record]
ttl = 60
type = each.value.type
zone_id = data.aws_route53_zone.public_root_domain.zone_id
}
resource "aws_acm_certificate_validation" "existing" {
certificate_arn = aws_acm_certificate.existing.arn
validation_record_fqdns = [for record in aws_route53_record.existing : record.fqdn]
}
You can check this for more details: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/version-3-upgrade#resource-aws_acm_certificate
It looks like the validation record is no longer an array. I'm guessing you upgraded the AWS Terraform provider at some point since you last ran this (if you don't have the version pinned, it could have updated automatically). There have been some breaking changes around the aws_acm_certificate_validation Terraform resource. I suggest you look at the latest example usage in the documentation and refactor your Terraform accordingly.

Creating RDS instances from not the recent snapshot using Terraform

In a Terraform project I am creating an RDS instance from a snapshot that is not the most recent one (the fifth before the last). My script is here:
data "aws_db_snapshot" "db_snapshot" {
db_instance_identifier = "production-db-intern"
db_snapshot_arn = "arn:aws:rds:eu-central-1:123114111478:snapshot:rds:production-db-intern-2019-05-09-16-10"
}
resource "aws_db_instance" "db_intern" {
skip_final_snapshot = true
identifier = "db-intern"
auto_minor_version_upgrade = false
instance_class = "db.m4.4xlarge"
deletion_protection = false
vpc_security_group_ids = ["${var.security_group_id}"]
db_subnet_group_name = "${var.subnet_group_name}"
timeouts {
create = "3h"
delete = "2h"
}
lifecycle {
prevent_destroy = false
}
snapshot_identifier = "${data.aws_db_snapshot.db_snapshot.id}"
}
I did a "terraform plan" and
I got the next error:
Error: data.aws_db_snapshot.db_snapshot: "db_snapshot_arn": this field cannot be set
db_snapshot_arn is not a valid argument of the aws_db_snapshot data source. Did you mean db_snapshot_identifier?
Also, you can't pass the ARN to this data source; pass the snapshot identifier instead, e.g. rds:production-db-intern-2019-05-09-16-10 (the part of the ARN after snapshot:).
Besides that, the data source only expects either db_instance_identifier or db_snapshot_identifier to be set. See the documentation for the aws rds describe-db-snapshots CLI command for more details on these parameters; the data source calls the same underlying DescribeDBSnapshots API.
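A sketch of the corrected data source, using the identifier taken from the ARN in your config:

data "aws_db_snapshot" "db_snapshot" {
  # The identifier is the part of the ARN after "snapshot:".
  db_snapshot_identifier = "rds:production-db-intern-2019-05-09-16-10"
}

The rest of the configuration can keep referencing data.aws_db_snapshot.db_snapshot.id unchanged.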