Terraform: Is it possible to define an empty image-based lambda? - amazon-web-services

When defining a lambda with package_type = Zip, it's possible to create a dummy temp.zip file and set it as the lambda's filename.
When created, the lambda will essentially contain an empty zip, which can later be replaced by something like a continuous delivery pipeline pushing a real artifact to it.
I've noticed this pattern used at work.
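Roughly, that Zip-based pattern looks like this (a minimal sketch; the names, runtime, and role reference are illustrative):

data "archive_file" "dummy" {
  type        = "zip"
  output_path = "${path.module}/temp.zip"

  source {
    content  = "placeholder"
    filename = "bootstrap.txt"
  }
}

resource "aws_lambda_function" "function" {
  function_name = "my-function" # illustrative name
  role          = aws_iam_role.lambda.arn # illustrative role
  handler       = "index.handler"
  runtime       = "nodejs18.x"
  filename      = data.archive_file.dummy.output_path

  lifecycle {
    # Let the CD pipeline replace the code without Terraform reverting it.
    ignore_changes = [filename, source_code_hash]
  }
}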
However, I'm playing with lambda container images for something personal.
I set package_type = Image and the other required arguments (per the Terraform docs), but when I run terraform apply, I get an error saying the lambda's image_uri argument must be set.
What if I don't have an image built yet? Is there some equivalent technique to satisfy the image_uri requirement, to essentially create an "empty" lambda, which I later plan to update via a CD pipeline?
I've been looking around but haven't found a solution yet.

What if I don't have an image built yet?
Then you can't create a container lambda. You have to provide some image URI. It can be a dummy image that does nothing, but it must exist before you can create such a lambda function.
Later, you can update the dummy image with something else.

Yes you can! This question has already been answered here; I've copied the answer below:
data "aws_ecr_authorization_token" "token" {}
resource "aws_ecr_repository" "repository" {
name = "lambda-${local.name}-${local.environment}"
image_tag_mutability = "MUTABLE"
tags = local.common_tags
image_scanning_configuration {
scan_on_push = true
}
lifecycle {
ignore_changes = all
}
provisioner "local-exec" {
# This is a 1-time execution to put a dummy image into the ECR repo, so
# terraform provisioning works on the lambda function. Otherwise there is
# a chicken-egg scenario where the lambda can't be provisioned because no
# image exists in the ECR
command = <<EOF
docker login ${data.aws_ecr_authorization_token.token.proxy_endpoint} -u AWS -p ${data.aws_ecr_authorization_token.token.password}
docker pull alpine
docker tag alpine ${aws_ecr_repository.repository.repository_url}:SOME_TAG
docker push ${aws_ecr_repository.repository.repository_url}:SOME_TAG
EOF
}
}
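To complete the picture, the function itself can then point at that dummy tag; ignoring later image_uri changes lets a CD pipeline take over deployments. A sketch, with the IAM role as a placeholder:

resource "aws_lambda_function" "function" {
  function_name = "lambda-${local.name}-${local.environment}"
  package_type  = "Image"
  image_uri     = "${aws_ecr_repository.repository.repository_url}:SOME_TAG"
  role          = aws_iam_role.lambda.arn # placeholder role

  lifecycle {
    # The CD pipeline owns deployments from here on; Terraform won't revert
    # the image back to the dummy tag on subsequent applies.
    ignore_changes = [image_uri]
  }
}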

Related

Terraform handle multiple lambda functions

I have a requirement to create AWS lambda functions dynamically based on some input parameters like name, docker image, etc.
I have been able to build this using terraform (triggered using gitlab pipelines).
Now the problem is that for every unique name I want a new lambda function to be created/updated, i.e. if I trigger the pipeline 5 times with 5 different names then there should be 5 lambda functions; instead, what I get is the older function being destroyed and a new one being created.
How do I achieve this?
I am using the aws_lambda_function resource.
Terraform code:
resource "aws_lambda_function" "executable" {
function_name = var.RUNNER_NAME
image_uri = var.DOCKER_PATH
package_type = "Image"
role = role.arn
architectures = ["x86_64"]
}
I think there is a misunderstanding of how terraform works.
Terraform maps 1 resource to 1 item in state, and the state file is used to manage all created resources.
The reason your function keeps getting destroyed and recreated with the new values is that you have only 1 resource in your terraform configuration.
This is the correct and expected behavior from terraform.
Now, as mentioned by some people above, you could use count or for_each to add new lambda functions without deleting the previous ones, as long as you keep track of the previously passed values, always adding the new values to the list (see the sketch below).
Or, if there is no need to keep track of the state of the lambda functions you have created, terraform may not be the best tool for the job. The result you are looking for can be easily implemented in Python or even shell with AWS CLI commands.
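To illustrate the for_each approach, a minimal sketch (the lambda_configs variable and its shape, plus the role reference, are made up for the example):

variable "lambda_configs" {
  # One entry per function; keep previously used names in the map so their
  # functions are not destroyed on the next apply.
  type = map(object({
    image_uri = string
  }))
}

resource "aws_lambda_function" "executable" {
  for_each = var.lambda_configs

  function_name = each.key
  image_uri     = each.value.image_uri
  package_type  = "Image"
  role          = aws_iam_role.runner.arn # placeholder for your role
  architectures = ["x86_64"]
}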

"Force" docker image creation in Terraform with docker_registry_image (kreuzwerker/docker)

I am developing a series of lambdas that use docker images. The first step is to create the images and register them in AWS ECR (not sure if everything I am doing is OK, so any advice is welcome :-) ):
terraform {
  ...
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = ">= 2.12"
    }
  }
}

resource "aws_ecr_repository" "lambda_repo" {
  name = "lambda"
}

resource "docker_registry_image" "lambda_image" {
  name = "<account_id>.dkr.ecr.<region>.amazonaws.com/lambda:latest"

  build {
    context = "./code/lambda"
  }

  depends_on = [
    aws_ecr_repository.lambda_repo
  ]

  keep_remotely = true
}

resource "aws_lambda_function" "lambda" {
  ...
  image_uri        = "<account_id>.dkr.ecr.<region>.amazonaws.com/lambda:latest"
  source_code_hash = docker_registry_image.lambda_image.sha256_digest
  ...
}
So with this code:
docker_registry_image > lambda_image: builds the image and uploads it to AWS.
aws_lambda_function > lambda: if the image "lambda:latest" changes, the lambda is updated with the new code.
The problem I have is how to "force" docker_registry_image > lambda_image to rebuild the image and update "lambda:latest" when the Dockerfile or app.py (the main code that is added to the image) has changed. Also, I am not sure if this is the right way to build the images.
Thanks!!
I was stuck with the exact same problem, and was disappointed to find your question hadn't been answered. I struggled a good bit, but it just clicked late tonight and I got mine working.
The problem is incorrect thinking based on bad Docker habits (guilty of the same here!):
latest is a bad habit: it relies on tag mutability, which isn't how docker was designed, and pulling latest is non-deterministic anyway - you never know what you're going to get. Usually, latest will pull the most recent version on a docker pull.
More on tag immutability: as developers, when encountering a small bug, we often quickly rebuild and overwrite the existing image with the same tag because "nobody will know, and it's just a small fix".
Thinking of the Lambda code files as something with state that should trigger a Terraform replace is also wrong - the code files are not a resource, they're an asset for creating the Lambda.
Here is the better way to think about this:
docker_registry_image and the kreuzwerker/docker provider are based on tag immutability.
docker_registry_image gets "replaced" in Terraform state (you'll see that in the Terraform plan when you try it), but the effect in your ECR repository is to add a new image with the next sequential tag number, not to replace the actual image as one usually expects with Terraform.
Each change to your Lambda source code files requires a new image build with a new tag number.
If you find your images piling up on you, then you need a lifecycle policy to automate managing that.
Looking at your code, here is the solution:
Create a new variable called image_tag:
variable "image_tag" {
default = 1
}
Modify your docker_registry_image so that it uses the image_tag variable (also touching up the docker_registry_image name so you're not doing too much error-prone string building):
resource "docker_registry_image" "lambda_image" {
  name = "${aws_ecr_repository.lambda_repo.repository_url}:${var.image_tag}"

  build {
    context = "./code/lambda"
  }
  ...
}
Modify your aws_lambda_function. Change the image_uri to the name of the docker_registry_image so that those two are never out of sync:
resource "aws_lambda_function" "lambda" {
  image_uri        = docker_registry_image.lambda_image.name
  source_code_hash = docker_registry_image.lambda_image.sha256_digest
}
Each time you change the code in the Lambda's source file(s), increment the image_tag variable by 1. Then run a terraform plan and you'll see that the docker_registry_image and aws_lambda_function will be replaced. A good exercise would be to look at your ECR repo and Lambda function in the console while you do this. You'll see the images appearing in your ECR repo, and the Lambda function's image URI being updated with the new image_tag.
If you don't like how your image versions are piling up in your ECR repo, look into implementing an aws_ecr_lifecycle_policy.
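A sketch of such a policy, assuming the aws_ecr_repository.lambda_repo resource from above (the 10-image cutoff is arbitrary):

resource "aws_ecr_lifecycle_policy" "lambda_repo" {
  repository = aws_ecr_repository.lambda_repo.name

  policy = jsonencode({
    rules = [{
      rulePriority = 1
      description  = "Keep only the 10 most recent images"
      selection = {
        tagStatus   = "any"
        countType   = "imageCountMoreThan"
        countNumber = 10
      }
      action = {
        type = "expire"
      }
    }]
  })
}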
Hope this helps. I sure feel a whole lot better tonight!

Updating Lambda using CDK doesn't deploy latest image

Using the AWS C# CDK.
I get a docker image from an ECR repository & then create a lambda function using it.
The problem is that when I run the CDK, it clearly creates CloudFormation that updates the function. Within the AWS console, the latest image is then shown under "Image > Image URI". However, the behaviour of my lambda clearly shows that the latest image has NOT been deployed.
If I click "Deploy New Image", leave everything as normal & click Save, my Lambda then shows that it is updating, and then the behaviour of my lambda is as expected (latest image).
Unsure where I'm going wrong:
var dockerImageCode = DockerImageCode.FromEcr(ecrRepositoryContainingImage);
var dockerImageFunction = new DockerImageFunction(this,
    Constants.LAMBDA_ID,
    new DockerImageFunctionProps()
    {
        Code = dockerImageCode,
        Description = versionString,
        Vpc = foundationStackVpc,
        SecurityGroups = new ISecurityGroup[]
        {
            securityStackVpcSecurityGroup
        },
        Timeout = Duration.Seconds(30),
        MemorySize = 512
    });
It is almost as if my lambda gets updated and shows that it is pointing at the correct image within ECR, when in reality that image has not actually been deployed.
Edit: A temporary fix is to ensure that rather than pushing a new image:latest image to ECR, I now call it image:buildnumber. It seems that even if the image in ECR is different under the hood, and cdk has supposedly updated the lambda image reference to the newly uploaded one in ECR, it doesn't consider a change worthy of redeployment when the old image tag and new image tag have the same name, in this case latest. Since the build number will always be different, and thus the new image tag will always differ from the previous one, this is deemed enough of a change for the lambda to be redeployed properly.
When using the fromEcr API, you can specify EcrImageCodeProps with a specific image tag.
See the docs for details.
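For example, something along these lines (a sketch: versionString stands in for whatever unique per-build tag you push, and the property is TagOrDigest in CDK v2, Tag in older versions):

var dockerImageCode = DockerImageCode.FromEcr(
    ecrRepositoryContainingImage,
    new EcrImageCodeProps
    {
        // Pin to a unique, per-build tag instead of the mutable "latest".
        TagOrDigest = versionString
    });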
The tag latest did not work for me either. I think an easy way is to store the resolved tag in SSM from CodeBuild:
'aws ssm put-parameter --name FhrEcrImageTagDemo --type String --value ${CODEBUILD_RESOLVED_SOURCE_VERSION} --overwrite'
Then in CDK lambda
code: aws_lambda.Code.fromEcrImage(
  aws_ecr.Repository.fromRepositoryName(
    this,
    'id',
    'ecrRepositoryName',
  ),
  {
    tag: aws_ssm.StringParameter.valueForStringParameter(
      this,
      'parameterName'
    )
  }
)
Another potential solution is using the exported variables and override parameters as shown in the example class TagParameterContainerImage. It works for ECS, but I'm not sure about lambda and ECR.

Terraform does not update AWS canary code

I have been changing an AWS canary's code.
After running terraform apply, I see the updates in the new zip file, but in the AWS console the code is the old one.
What have I done wrong?
My terraform code:
resource "aws_synthetics_canary" "canary" {
depends_on = [time_sleep.wait_5_minutes]
name = var.name
artifact_s3_location = "s3://${local.artifacts_bucket_and_path}"
execution_role_arn = aws_iam_role.canary_role.arn
handler = "apiCanary.handler"
start_canary = true
zip_file = data.archive_file.source_zip.output_path
runtime_version = "syn-nodejs-puppeteer-3.3"
tags = {
Description = var.description
Entity = var.entity
Service = var.service
}
run_config {
timeout_in_seconds = 300
}
schedule {
expression = "rate(${var.rate_in_minutes} ${var.rate_in_minutes == 1 ? "minute" : "minutes"})"
}
}
I read this but it didn't help me.
I agree with #mjd2, but in the meantime I worked around it by manually hashing the lambda source and embedding that hash in the source file name:
locals {
  source_code      = <whatever your source is>
  source_code_hash = sha256(local.source_code)
}

data "archive_file" "canary_lambda" {
  type        = "zip"
  output_path = "/tmp/canary_lambda_${local.source_code_hash}.zip"

  source {
    content  = local.source_code
    filename = "nodejs/node_modules/heartbeat.js"
  }
}
This way, anytime the source_code is edited a new output filename will be used, triggering a replacement of the archive resource.
This could be a permission issue with your deployment role. Your role must have permission to modify the lambda behind the canary in order to apply the new layer that your zip file change creates.
Unfortunately any errors that occur when applying changes to the lambda are not communicated via terraform, or anywhere in the AWS console, but if it fails then your canary will continue to point to an old version of the lambda, without your code changes.
You should be able to see which version of the lambda your canary is using by checking the "Script location" field on the Configuration tab for your canary. Additionally, if you click in to the script location you will be able to see if you have newer, unpublished versions of the lambda layer available with your code changes in it.
To verify if the failure is a permission issue you need to query your canary via the AWS CLI.
Run aws synthetics get-canary --name <your canary name> and check the Status.StateReason.
If there was a permission issue when attempting to apply your change, you should see something along the lines of:
<user> is not authorized to perform: lambda:UpdateFunctionConfiguration on resource: <lambda arn>
Based on the above, you should be able to add any missing permissions to your deployment role's IAM policy and try your deployment again.
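For example, the missing permission could be granted with something like the following sketch (the deployment role reference is a placeholder; in practice, scope Resource down to your canary's lambda and layer ARNs):

resource "aws_iam_role_policy" "canary_deploy" {
  name = "allow-canary-lambda-update"
  role = aws_iam_role.deployment.id # placeholder deployment role

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "lambda:UpdateFunctionConfiguration",
        "lambda:PublishLayerVersion"
      ]
      Resource = "*" # scope down to the canary's lambda/layer ARNs
    }]
  })
}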
Hit the same issue. It seems like the canary itself is a beta project that made it to production, and the terraform resource that manages it also leaves much to be desired. There is no source_code_hash attribute like with lambda, so you need to taint the entire canary resource so that it gets recreated with any updated code (see the commands below). AWS Canary as of Nov 2022 is not mature at all. It should support integration with Slack, or at least AWS Chatbot, out of the box, but it doesn't. Hopefully the AWS team gives it some love, because right now it's terrible in comparison to New Relic, Dynatrace, and most other monitoring services that support synthetics.
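For reference, the taint flow looks like this, using the resource address from the question:

terraform taint aws_synthetics_canary.canary
terraform apply

# Or, on Terraform 0.15.2+, in a single step:
terraform apply -replace="aws_synthetics_canary.canary"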

Exporting AWS Data Pipeline as CloudFormation template to use it in Terraform

I'm trying to export an existing AWS Data Pipeline task to Terraform infrastructure somehow.
According to this issue, there is no direct support for Data Pipelines, but it still seems achievable using CloudFormation templates (terraform resource).
The problem is that I cannot find a way to export an existing pipeline into a CloudFormation template.
Exporting the pipeline with its specific definition syntax won't work, as I've not found a way to include this definition in CloudFormation. CloudFormer does not support exporting pipelines either.
Does anybody know how to export a pipeline to CloudFormation or any other way to get AWS Data Pipeline automated with Terraform?
Thank you for your help!
UPD [Jul. 2019]: Some progress has been made in the terraform repository. The aws_datapipeline_pipeline resource has been implemented, but it is not yet clear how to use it. Merged pull request
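Presumably usage is something like this minimal sketch (the name is illustrative, and the resource appears to create only the empty pipeline shell; the pipeline definition still has to be supplied separately):

resource "aws_datapipeline_pipeline" "example" {
  name        = "my-pipeline"
  description = "Pipeline managed by Terraform"
}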
Original answer:
As a solution to this problem, I've come up with a node.js script, which covers my use case. In addition, I've created a Terraform module to be used in Terraform configuration.
Here is the link to the gist with the code
I'll copy the usage examples here.
Command Line:
node converter-cli.js ./template.json "Data Pipeline Cool Name" "Data Pipeline Cool Description" "true" >> cloudformation.json
Terraform:
module "some_cool_pipeline" {
source = "./pipeline"
name = "cool-pipeline"
description = "The best pipeline!"
activate = true
template = "${file("./cool-pipeline-template.json")}"
values = {
myDatabase = "some_database",
myUsername = "${var.db_user}",
myPassword = "${var.db_password}",
myTableName = "some_table",
}
}