I am trying to run a Python function on an AWS Lambda layer, but I can't find any Terraform documentation on using an AWS-provided Lambda layer. How do I use the AWS-provided layer AWSLambda-Python27-SciPy1x with the Python 2.7 runtime?
#----compute/lambda.tf----
data "archive_file" "lambda_zip" {
  type        = "zip"
  source_file = "index.py"
  output_path = "check_foo.zip"
}

resource "aws_lambda_function" "check_foo" {
  filename         = "check_foo.zip"
  function_name    = "checkFoo"
  role             = "${aws_iam_role.iam_for_lambda_tf.arn}"
  handler          = "index.handler"
  source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
  # i want to use lambda layer - AWSLambda-Python27-SciPy1x and run this function on it
  runtime          = "python2.7"
}
You have to specify Lambda layers as ARNs in Terraform using the layers parameter:
layers - (Optional) List of Lambda Layer Version ARNs (maximum of 5) to attach to your Lambda Function.
Use the following syntax in Terraform:
layers = ["layer-arn"]
For example, the ARN for AWSLambda-Python27-SciPy1x in us-east-1 region is:
arn:aws:lambda:us-east-1:668099181075:layer:AWSLambda-Python27-SciPy1x:24
If you are not sure what your ARN is, you can create a dummy Python 2.7 Lambda function in the console, add the AWS-provided AWSLambda-Python27-SciPy1x layer to it, and the console will show you the layer's ARN.
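Putting it together with the function from the question, a minimal sketch might look like this (the :24 version suffix changes over time and per region, so verify the current ARN before using it):

resource "aws_lambda_function" "check_foo" {
  filename         = "check_foo.zip"
  function_name    = "checkFoo"
  role             = "${aws_iam_role.iam_for_lambda_tf.arn}"
  handler          = "index.handler"
  source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
  runtime          = "python2.7"

  # AWS-provided SciPy layer for Python 2.7 (us-east-1 ARN shown above)
  layers = ["arn:aws:lambda:us-east-1:668099181075:layer:AWSLambda-Python27-SciPy1x:24"]
}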
Related
AWS manages a layer called AWSDataWrangler-Python38. How do I import it into my Terraform code? I tried using the aws_lambda_layer_version resource:
resource "aws_lambda_layer_version" "lambda_layer" {
layer_name = "AWSDataWrangler-Python39"
compatible_runtimes = ["python3.9"]
}
It throws an error asking me to specify a filename, but there is no file for this layer since it is managed by AWS and is not a custom layer.
You cannot import a resource that is not managed by you.
Since this is a layer managed by AWS, there is a public list of all the available ARNs for this layer: https://aws-data-wrangler.readthedocs.io/en/stable/layers.html
If you want to use this layer for a Lambda in your Terraform code, you will have to take an ARN from this list and simply hard-code it (or provide it externally with a variable). For example:
resource "aws_lambda_function" "lambda" {
function_name = "MyFunction"
...
layers = [
"arn:aws:lambda:${var.region}:336392948345:layer:AWSDataWrangler-Python39:6"
]
}
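If you would rather not hard-code the account ID and layer version in the function itself, a small sketch of the variable-based approach mentioned above (the variable name is an assumption):

variable "wrangler_layer_arn" {
  description = "ARN of the AWS-managed AWSDataWrangler layer for your region"
  type        = string
  default     = "arn:aws:lambda:us-east-1:336392948345:layer:AWSDataWrangler-Python39:6"
}

# Then reference it in the function definition shown above:
#   layers = [var.wrangler_layer_arn]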
I'm writing an AWS Lambda function, which is basically Java code doing some stuff. We're using Terraform to create the resources in AWS, and we've created the IAM role/policies through TF. I need the ARN of the IAM role created through TF in my Java (Lambda) code, to use in an AWS STS AssumeRole call.
Is there any way to get the role's ARN in the Java code?
Even if I define a variable in output.tf, how can I access it in the code? Am I missing something here? Thanks in advance.
You can pass any value into your Lambda function through the environment argument in Terraform.
resource "aws_lambda_function" "lambda" {
count = var.create_lambda_function ? 1 : 0
filename = "${var.lambda_function_name}.${var.archive_type}"
function_name = "${var.lambda_function_name}-${lower(var.environment)}"
role = var.role_arn
handler = var.handler
timeout = var.timeout
runtime = var.runtime
layers = var.add_layers ? aws_lambda_layer_version.pymysql_lambda_layer.*.arn : []
source_code_hash = one(data.archive_file.lambda_function_zip.*.output_base64sha256)
environment {
variables = {
ROLE_ARN = var.role_arn
}
}
}
I assume you already know how to get the IAM role ARN through an input variable (var.role_arn). If you are creating the IAM role in a different module, you can access it through a module output (see the sketch below), or you can refer to the role ARN as aws_iam_role.example.arn if you are creating the role in the same place as the Lambda function code.
Once this is done, you can read the environment variable ROLE_ARN inside your Java code.
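For completeness, a minimal sketch of the module-output approach mentioned above (the module layout and names are assumptions, not taken from the question):

# In the IAM module, expose the role ARN as an output:
output "lambda_role_arn" {
  value = aws_iam_role.iam_for_lambda.arn
}

# In the root module, pass it into the Lambda module as var.role_arn:
module "lambda" {
  source   = "./modules/lambda"
  role_arn = module.iam.lambda_role_arn
}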
If I manually add an Integration Request of type Lambda Function, an API Gateway trigger is automatically added to the Lambda function.
If I do it via Terraform, everything looks correct, but when I look at the Lambda function it has no trigger.
If I then manually update the Integration Request (change it to Mock and back to Lambda Function), the trigger is added to the Lambda function, and everything works after that.
What am I missing?
resource "aws_api_gateway_integration" "integration" {
count = var.lambda_definition.apigateway ? 1 : 0
rest_api_id = "${data.terraform_remote_state.apigateway.outputs.apigateway_id}"
resource_id = aws_api_gateway_resource.api_proxy_resource[count.index].id
http_method = "${aws_api_gateway_method.method[count.index].http_method}"
integration_http_method = "ANY"
type = "AWS_PROXY"
uri = aws_lambda_function.lambda.invoke_arn
}
Since you haven't mentioned whether you specified the proper permissions for your function, my guess is that you are missing an aws_lambda_permission resource. This explicitly gives the API permission to invoke your function.
The resource would be (example only):
resource "aws_lambda_permission" "allow_api" {
statement_id = "AllowAPIgatewayInvokation"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.lambda.invoke_arn
principal = "apigateway.amazonaws.com"
}
When you do it manually in the console, AWS sets up all these permissions in the background.
Make sure that integration_http_method is set to POST and not to ANY as in your sample:
integration_http_method = "POST"
See the AWS docs, about midway down the page, in the red box that says '! Important':
For Lambda integrations, you must use the HTTP method of POST for the integration request, according to the specification of the Lambda service action for function invocations. The IAM role of apigAwsProxyRole must have policies allowing the apigateway service to invoke Lambda functions. For more information about IAM permissions, see API Gateway permissions model for invoking an API.
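Putting both answers together against the question's resources, a corrected sketch might look like this (the count/conditional logic from the question is omitted for brevity, and apigateway_execution_arn is an assumed remote-state output name; adjust it to whatever your API Gateway state actually exports):

resource "aws_api_gateway_integration" "integration" {
  rest_api_id             = data.terraform_remote_state.apigateway.outputs.apigateway_id
  resource_id             = aws_api_gateway_resource.api_proxy_resource.id
  http_method             = aws_api_gateway_method.method.http_method
  integration_http_method = "POST" # must be POST for Lambda integrations
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.lambda.invoke_arn
}

resource "aws_lambda_permission" "allow_api" {
  statement_id  = "AllowAPIGatewayInvocation"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.lambda.function_name
  principal     = "apigateway.amazonaws.com"
  # Optional but recommended: scope the permission to this API's execution ARN
  source_arn = "${data.terraform_remote_state.apigateway.outputs.apigateway_execution_arn}/*/*"
}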
I created an AWS environment using Terraform.
After that, some resources (SES, SNS, Lambda) were created through the console; they were not provisioned by Terraform.
I'm now writing the Terraform code for these resources (SES, SNS, Lambda) that were created via the console.
Since I already have these resources running in my account, is it possible to bring them under Terraform without removing them?
Or, how should I proceed in this case?
Welcome to the world of IaC, you're in for a treat. :)
You can import any resources that were created outside Terraform (via the CLI or provisioned manually, i.e. resources that are not part of the tf state) into your Terraform state. Once these resources are imported, you can start managing their lifecycle using Terraform.
Define the resource in your .tf files
Import existing resources
As an example:
In order to import an existing non terraform managed lambda, you first define the resource for it in your .tf files:
main.tf:
resource "aws_lambda_function" "test_lambda" {
filename = "lambda_function_payload.zip"
function_name = "lambda_function_name"
role = "${aws_iam_role.iam_for_lambda.arn}"
handler = "exports.test"
# The filebase64sha256() function is available in Terraform 0.11.12 and later
# For Terraform 0.11.11 and earlier, use the base64sha256() function and the file() function:
# source_code_hash = "${base64sha256(file("lambda_function_payload.zip"))}"
source_code_hash = "${filebase64sha256("lambda_function_payload.zip")}"
runtime = "nodejs12.x"
environment {
variables = {
foo = "bar"
}
}
}
Then you can execute terraform import to import the existing Lambda:
terraform import aws_lambda_function.test_lambda my_test_lambda_function
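The same two-step pattern applies to the other console-created resources mentioned in the question. As a hedged example for SNS (the topic name and ARN below are hypothetical placeholders), first define a matching resource, then import it by its topic ARN:

resource "aws_sns_topic" "alerts" {
  name = "my-existing-alerts-topic" # must match the topic created in the console
}

# terraform import aws_sns_topic.alerts arn:aws:sns:us-east-1:123456789012:my-existing-alerts-topic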
I'm setting up some Terraform to manage a Lambda and an S3 bucket with versioning on the bucket's contents. Creating the first version of the infrastructure is fine. When releasing a second version, Terraform replaces the zip file instead of creating a new version.
I've tried adding versioning to the S3 bucket in the Terraform configuration and moving the api-version into a string variable.
data "archive_file" "lambda_zip" {
type = "zip"
source_file = "main.js"
output_path = "main.zip"
}
resource "aws_s3_bucket" "lambda_bucket" {
bucket = "s3-bucket-for-tft-project"
versioning {
enabled = true
}
}
resource "aws_s3_bucket_object" "lambda_zip_file" {
bucket = "${aws_s3_bucket.lambda_bucket.bucket}"
key = "v${var.api-version}-${data.archive_file.lambda_zip.output_path}"
source = "${data.archive_file.lambda_zip.output_path}"
}
resource "aws_lambda_function" "lambda_function" {
s3_bucket = "${aws_s3_bucket.lambda_bucket.bucket}"
s3_key = "${aws_s3_bucket_object.lambda_zip_file.key}"
function_name = "lambda_test_with_s3_version"
role = "${aws_iam_role.lambda_exec.arn}"
handler = "main.handler"
runtime = "nodejs8.10"
}
I would expect a new zip object to be created, with the Lambda now pointing at the new version, and with the ability to switch back to the old version if var.api-version were changed.
Terraform isn't designed for creating this sort of "artifact" object where each new version should be separate from the ones before it.
The data.archive_file data source was added to Terraform in the early days of AWS Lambda when the only way to pass values from Terraform into a Lambda function was to retrieve the intended zip artifact, amend it to include additional files containing those settings, and then write that to Lambda.
Now that AWS Lambda supports environment variables, that pattern is no longer recommended. Instead, deployment artifacts should be created by some separate build process outside of Terraform and recorded somewhere that Terraform can discover them. For example, you could use SSM Parameter Store to record your current desired version and then have Terraform read that to decide which artifact to retrieve:
data "aws_ssm_parameter" "lambda_artifact" {
name = "lambda_artifact"
}
locals {
# Let's assume that this SSM parameter contains a JSON
# string describing which artifact to use, like this
# {
# "bucket": "s3-bucket-for-tft-project",
# "key": "v2.0.0/example.zip"
# }
lambda_artifact = jsondecode(data.aws_ssm_parameter.lambda_artifact)
}
resource "aws_lambda_function" "lambda_function" {
s3_bucket = local.lambda_artifact.bucket
s3_key = local.lambda_artifact.key
function_name = "lambda_test_with_s3_version"
role = aws_iam_role.lambda_exec.arn
handler = "main.handler"
runtime = "nodejs8.10"
}
This build/deploy separation allows for three different actions, whereas doing it all in Terraform only allows for one:
To release a new version, you can run your build process (in a CI system, perhaps) and have it push the resulting artifact to S3 and record it as the latest version in the SSM parameter, and then trigger a Terraform run to deploy it.
To change other aspects of the infrastructure without deploying a new function version, just run Terraform without changing the SSM parameter and Terraform will leave the Lambda function untouched.
If you find that a new release is defective, you can write the location of an older artifact into the SSM parameter and run Terraform to deploy that previous version.
A more complete description of this approach is in the Terraform guide Serverless Applications with AWS Lambda and API Gateway, which uses a Lambda web application as an example but can be applied to many other AWS Lambda use-cases too. Using SSM is just an example; any data that Terraform can retrieve using a data source can be used as an intermediary to decouple the build and deploy steps from one another.
This general idea can apply to all sorts of code build artifacts as well as Lambda zip files. For example: custom AMIs created with HashiCorp Packer, Docker images created using docker build. Separating the build process, the version selection mechanism, and the deployment process gives a degree of workflow flexibility that can support both the happy path and any exceptional paths taken during incidents.