I need to enable "CloudWatch Lambda Insights" for a Lambda function using Terraform, but could not find any documentation for it. How can I do it in Terraform?
Note: This question How to add CloudWatch Lambda Insights to serverless config? may be relevant.
There is no "boolean switch" in the aws_lambda_function resource of the AWS Terraform provider that you can set to true to enable CloudWatch Lambda Insights.
Fortunately, it is possible to do this yourself. The following Terraform definitions are based on this AWS documentation: Using the AWS CLI to enable Lambda Insights on an existing Lambda function
The process involves two steps:
Add a layer to your Lambda
Attach an AWS policy to your Lambda's role.
The Terraform definitions would look like this:
resource "aws_lambda_function" "insights_example" {
[...]
layers = [
"arn:aws:lambda:us-east-1:580247275435:layer:LambdaInsightsExtension:14"
]
}
resource "aws_iam_role_policy_attachment" "insights_policy" {
role = aws_iam_role.insights_example.id
policy_arn = "arn:aws:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy"
}
Important: The ARN of the layer is different for each region. The documentation linked above contains a link to the full list. Furthermore, there is an additional step required if your Lambda is in a VPC, which you can read about in the documentation. That "VPC step" can be expressed in Terraform as well; a sketch follows below.
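As a rough sketch of that VPC step, based on my reading of the linked documentation: a function in private subnets without internet access needs an interface endpoint for CloudWatch Logs so the Insights extension can deliver its telemetry. The aws_vpc.example, aws_subnet.example, and aws_security_group.example references are placeholders for your own networking resources, not part of the original example.

data "aws_region" "current" {}

resource "aws_vpc_endpoint" "cloudwatch_logs" {
  # Placeholder references; replace with your own VPC, subnets and security group.
  vpc_id              = aws_vpc.example.id
  service_name        = "com.amazonaws.${data.aws_region.current.name}.logs"
  vpc_endpoint_type   = "Interface"
  private_dns_enabled = true
  subnet_ids          = aws_subnet.example[*].id
  security_group_ids  = [aws_security_group.example.id]
}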
For future readers: The version of that layer in my example is 14. This will change over time, so please do not just copy and paste that part. Follow the provided links and look up the current version of that layer.
Minimal, Complete, and Verifiable example
Tested with:
Terraform v0.14.4
+ provider registry.terraform.io/hashicorp/archive v2.0.0
+ provider registry.terraform.io/hashicorp/aws v3.24.0
Create the following two files (handler.py and main.tf) in a folder. Then run the following commands:
terraform init
terraform plan
terraform apply
Besides deploying the required resources, it also creates a zip archive containing handler.py, which is the deployment artifact used by the aws_lambda_function resource. So this is an all-in-one example with no further zipping needed.
handler.py
def lambda_handler(event, context):
    return {
        'message': 'CloudWatch Lambda Insights Example'
    }
main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_lambda_function" "insights_example" {
  function_name = "insights-example"
  runtime       = "python3.8"
  handler       = "handler.lambda_handler"
  role          = aws_iam_role.insights_example.arn
  filename      = "${path.module}/lambda.zip"

  layers = [
    "arn:aws:lambda:us-east-1:580247275435:layer:LambdaInsightsExtension:14"
  ]

  depends_on = [
    data.archive_file.insights_example
  ]
}

resource "aws_iam_role" "insights_example" {
  name               = "InsightsExampleLambdaRole"
  assume_role_policy = data.aws_iam_policy_document.lambda_assume.json
}

resource "aws_iam_role_policy_attachment" "insights_example" {
  role       = aws_iam_role.insights_example.id
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

resource "aws_iam_role_policy_attachment" "insights_policy" {
  role       = aws_iam_role.insights_example.id
  policy_arn = "arn:aws:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy"
}

data "aws_iam_policy_document" "lambda_assume" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

data "archive_file" "insights_example" {
  type        = "zip"
  source_file = "${path.module}/handler.py"
  output_path = "${path.module}/lambda.zip"
}
In case you are using container images as the deployment package for your Lambda function, the required steps to enable CloudWatch Lambda Insights are slightly different (since Lambda layers can't be used here):
Attach the arn:aws:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy to your function's role, as described by Jens
Add the Lambda Insights extension to your container image
FROM public.ecr.aws/lambda/nodejs:12

RUN curl -O https://lambda-insights-extension.s3-ap-northeast-1.amazonaws.com/amazon_linux/lambda-insights-extension.rpm && \
    rpm -U lambda-insights-extension.rpm && \
    rm -f lambda-insights-extension.rpm

COPY app.js /var/task/
See the documentation for details.
Based on Jens' answer, here's a snippet that will automatically supply the correct LambdaInsightsExtension layer based on the current region:
data "aws_region" "current" {}
locals {
aws_region = data.aws_region.current.name
# List taken from https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Lambda-Insights-extension-versionsx86-64.html
lambdaInsightsLayers = {
"us-east-1" : "arn:aws:lambda:us-east-1:580247275435:layer:LambdaInsightsExtension:18",
"us-east-2" : "arn:aws:lambda:us-east-2:580247275435:layer:LambdaInsightsExtension:18",
"us-west-1" : "arn:aws:lambda:us-west-1:580247275435:layer:LambdaInsightsExtension:18",
"us-west-2" : "arn:aws:lambda:us-west-2:580247275435:layer:LambdaInsightsExtension:18",
"af-south-1" : "arn:aws:lambda:af-south-1:012438385374:layer:LambdaInsightsExtension:11",
"ap-east-1" : "arn:aws:lambda:ap-east-1:519774774795:layer:LambdaInsightsExtension:11",
"ap-south-1" : "arn:aws:lambda:ap-south-1:580247275435:layer:LambdaInsightsExtension:18",
"ap-northeast-3" : "arn:aws:lambda:ap-northeast-3:194566237122:layer:LambdaInsightsExtension:1",
"ap-northeast-2" : "arn:aws:lambda:ap-northeast-2:580247275435:layer:LambdaInsightsExtension:18",
"ap-southeast-1" : "arn:aws:lambda:ap-southeast-1:580247275435:layer:LambdaInsightsExtension:18",
"ap-southeast-2" : "arn:aws:lambda:ap-southeast-2:580247275435:layer:LambdaInsightsExtension:18",
"ap-northeast-1" : "arn:aws:lambda:ap-northeast-1:580247275435:layer:LambdaInsightsExtension:25",
"ca-central-1" : "arn:aws:lambda:ca-central-1:580247275435:layer:LambdaInsightsExtension:18",
"eu-central-1" : "arn:aws:lambda:eu-central-1:580247275435:layer:LambdaInsightsExtension:18",
"eu-west-1" : "arn:aws:lambda:eu-west-1:580247275435:layer:LambdaInsightsExtension:18",
"eu-west-2" : "arn:aws:lambda:eu-west-2:580247275435:layer:LambdaInsightsExtension:18",
"eu-south-1" : "arn:aws:lambda:eu-south-1:339249233099:layer:LambdaInsightsExtension:11",
"eu-west-3" : "arn:aws:lambda:eu-west-3:580247275435:layer:LambdaInsightsExtension:18",
"eu-north-1" : "arn:aws:lambda:eu-north-1:580247275435:layer:LambdaInsightsExtension:18",
"me-south-1" : "arn:aws:lambda:me-south-1:285320876703:layer:LambdaInsightsExtension:11",
"sa-east-1" : "arn:aws:lambda:sa-east-1:580247275435:layer:LambdaInsightsExtension:18"
}
}
resource "aws_lambda_function" "my_lambda" {
...
layers = [
local.lambdaInsightsLayers[local.aws_region]
]
}
resource "aws_iam_role_policy_attachment" "insights_policy" {
role = aws_iam_role.my_lambda.id
policy_arn = "arn:aws:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy"
}
I am trying to build a simple EventBridge -> SNS -> AWS Chatbot pipeline to notify a Slack channel of any ECS deployment events. Below is my code:
resource "aws_cloudwatch_event_rule" "ecs_deployment" {
name = "${var.namespace}-${var.environment}-infra-ecs-deployment"
description = "This rule sends notification on the all app ECS Fargate deployments with respect to the environment."
event_pattern = <<EOF
{
"source": ["aws.ecs"],
"detail-type": ["ECS Deployment State Change"],
"detail": {
"clusterArn": [
{
"prefix": "arn:aws:ecs:<REGION>:<ACCOUNT>:cluster/${var.namespace}-${var.environment}-"
}
]
}
}
EOF
tags = {
Environment = "${var.environment}"
Origin = "terraform"
}
}
resource "aws_cloudwatch_event_target" "ecs_deployment" {
rule = aws_cloudwatch_event_rule.ecs_deployment.name
target_id = "${var.namespace}-${var.environment}-infra-ecs-deployment"
arn = aws_sns_topic.ecs_deployment.arn
}
resource "aws_sns_topic" "ecs_deployment" {
name = "${var.namespace}-${var.environment}-infra-ecs-deployment"
display_name = "${var.namespace} ${var.environment}"
}
resource "aws_sns_topic_policy" "default" {
arn = aws_sns_topic.ecs_deployment.arn
policy = data.aws_iam_policy_document.sns_topic_policy.json
}
data "aws_iam_policy_document" "sns_topic_policy" {
statement {
effect = "Allow"
actions = ["SNS:Publish"]
principals {
type = "Service"
identifiers = ["events.amazonaws.com"]
}
resources = [aws_sns_topic.ecs_deployment.arn]
}
}
Based on the above code, Terraform will create an EventBridge rule with an SNS target. From there, I create the AWS Chatbot in the console and subscribe it to the SNS topic.
The problem is that when I remove the detail block, it works. But what I want is to filter the events to only those coming from clusters with the mentioned prefix.
Is this possible? Or did I do it the wrong way?
Any help is appreciated.
I'm trying to create an Elasticsearch cluster using Terraform.
Using Terraform 0.11.13.
Can someone please point out why I'm not able to create the log group? What is the Resource Access Policy? Is it the same as the data "aws_iam_policy_document" I'm creating?
Note: I'm using elasticsearch_version = "7.9"
code:
resource "aws_cloudwatch_log_group" "search_test_log_group" {
name = "/aws/aes/domains/test-es7/index-logs"
}
resource "aws_elasticsearch_domain" "amp_search_test_es7" {
domain_name = "es7"
elasticsearch_version = "7.9"
.....
log_publishing_options {
cloudwatch_log_group_arn = "${aws_cloudwatch_log_group.search_test_log_group.arn}"
log_type = "INDEX_SLOW_LOGS"
enabled = true
}
access_policies = "${data.aws_iam_policy_document.elasticsearch_policy.json}"
}
data "aws_iam_policy_document" "elasticsearch_policy" {
version = "2012-10-17"
statement {
effect = "Allow"
principals {
identifiers = ["*"]
type = "AWS"
}
actions = ["es:*"]
resources = ["arn:aws:es:us-east-1:xxx:domain/test_es7/*"]
}
statement {
effect = "Allow"
principals {
identifiers = ["es.amazonaws.com"]
type = "Service"
}
actions = [
"logs:PutLogEvents",
"logs:PutLogEventsBatch",
"logs:CreateLogStream",
]
resources = ["arn:aws:logs:*"]
}
}
I'm getting this error:
aws_elasticsearch_domain.test_es7: Error creating ElasticSearch domain: ValidationException: The Resource Access Policy specified for the CloudWatch Logs log group /aws/aes/domains/test-es7/index-logs does not grant sufficient permissions for Amazon Elasticsearch Service to create a log stream. Please check the Resource Access Policy.
For ElasticSearch (ES) to be able to write to CloudWatch (CW) Logs, you have to provide a resource-based policy on your CW logs.
This is achieved using aws_cloudwatch_log_resource_policy which is missing from your code.
In fact, the Terraform docs have a ready-to-use example of how to do it for ES, so you should be able to just copy and paste it; a sketch based on that example follows.
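A minimal sketch closely following the docs example (the policy name is a placeholder, and references are written with 0.11-style interpolation to match your version; adjust to your setup):

data "aws_iam_policy_document" "elasticsearch_log_publishing" {
  statement {
    effect = "Allow"

    actions = [
      "logs:CreateLogStream",
      "logs:PutLogEvents",
      "logs:PutLogEventsBatch",
    ]

    resources = ["arn:aws:logs:*"]

    principals {
      type        = "Service"
      identifiers = ["es.amazonaws.com"]
    }
  }
}

# Resource-based policy on CloudWatch Logs that lets the ES service create log
# streams and publish log events.
resource "aws_cloudwatch_log_resource_policy" "elasticsearch_log_publishing" {
  policy_name     = "elasticsearch-log-publishing-policy"
  policy_document = "${data.aws_iam_policy_document.elasticsearch_log_publishing.json}"
}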
ES access policies are different from CW log policies, as they determine who can do what on your ES domain. Thus, you would have to adjust that part of your code to meet your requirements.
I am trying to create and attach S3 bucket policies to S3 buckets with Terraform. Terraform is throwing BucketRegionError and AccessDenied errors. It is saying the bucket I am trying to attach the policy to is not in the specified region, even though it is deployed in that region. Any advice on how I can attach this policy would be helpful. Below are the errors, how I am creating the buckets and the bucket policy, and how I am attaching it. Thanks!
resource "aws_s3_bucket" "dest_buckets" {
provider = aws.dest
for_each = toset(var.s3_bucket_names)
bucket = "${each.value}-replica"
acl = "private"
force_destroy = "true"
versioning {
enabled = true
}
}
resource "aws_s3_bucket_policy" "dest_policy" {
provider = aws.dest
for_each = aws_s3_bucket.dest_buckets
bucket = each.key
policy = data.aws_iam_policy_document.dest_policy.json
}
data "aws_iam_policy_document" "dest_policy" {
statement {
actions = [
"s3:GetBucketVersioning",
"s3:PutBucketVersioning",
]
resources = [
for bucket in aws_s3_bucket.dest_buckets : bucket.arn
]
principals {
type = "AWS"
identifiers = [
"arn:aws:iam::${data.aws_caller_identity.source.account_id}:role/${var.replication_role}"
]
}
}
statement {
actions = [
"s3:ReplicateObject",
"s3:ReplicateDelete",
]
resources = [
for bucket in aws_s3_bucket.dest_buckets : "${bucket.arn}/*"
]
}
}
Errors:
Error: Error putting S3 policy: AccessDenied: Access Denied
status code: 403, request id: 7F17A032D84DE672, host id: EjX+cDYt57caooCIlGX9wPf5s8B2JlXqAZpG8mA5KZtuw/4varoutQfxlkC/9JstdMdjN8EYBtg=
on main.tf line 36, in resource "aws_s3_bucket_policy" "dest_policy":
36: resource "aws_s3_bucket_policy" "dest_policy" {
Error: Error putting S3 policy: BucketRegionError: incorrect region, the bucket is not in 'us-east-2' region at endpoint ''
status code: 301, request id: , host id:
on main.tf line 36, in resource "aws_s3_bucket_policy" "dest_policy":
36: resource "aws_s3_bucket_policy" "dest_policy" {
The buckets are created with no issue; I'm just having trouble attaching this policy.
UPDATE:
Below is the provider block for aws.dest, the variables I have defined, and also my .aws/config file.
provider "aws" {
alias = "dest"
profile = var.dest_profile
region = var.dest_region
}
variable "dest_region" {
default = "us-east-2"
}
variable "dest_profile" {
default = "replica"
}
[profile replica]
region = us-east-2
output = json
I managed to execute your configuration and noticed some issues:
In your policy, the principals block is missing from the second statement.
statement {
  actions = [
    "s3:ReplicateObject",
    "s3:ReplicateDelete",
  ]

  resources = [
    for bucket in aws_s3_bucket.dest_buckets : "${bucket.arn}/*"
  ]
}
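As a hedged sketch, that statement could grant the replicate actions to the same replication role used in the first statement (adjust the principal to whatever actually performs the replication in your setup):

statement {
  actions = [
    "s3:ReplicateObject",
    "s3:ReplicateDelete",
  ]

  resources = [
    for bucket in aws_s3_bucket.dest_buckets : "${bucket.arn}/*"
  ]

  # Assumed principal: reuses the replication role from the first statement.
  principals {
    type = "AWS"
    identifiers = [
      "arn:aws:iam::${data.aws_caller_identity.source.account_id}:role/${var.replication_role}"
    ]
  }
}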
This block is creating the bucket correctly (with -replica appended to the name):
resource "aws_s3_bucket" "dest_buckets" {
  provider = aws.dest

  for_each = toset(var.s3_bucket_names)

  bucket        = "${each.value}-replica"
  acl           = "private"
  force_destroy = "true"

  versioning {
    enabled = true
  }
}
However, after enabling debug logging, I noticed that for this resource each.key references the bucket name without -replica, so I was receiving a 404.
resource "aws_s3_bucket_policy" "dest_policy" {
provider = aws.dest
for_each = aws_s3_bucket.dest_buckets
bucket = each.key
policy = data.aws_iam_policy_document.dest_policy.json
}
Changing it to the same pattern as the bucket creation, it worked:
resource "aws_s3_bucket_policy" "dest_policy" {
provider = aws.dest
for_each = aws_s3_bucket.dest_buckets
bucket = "${each.key}-replica"
policy = data.aws_iam_policy_document.dest_policy.json
}
Regarding the 403, it may be a lack of permissions for the user that is creating this resource.
Let me know if this helps you.
I believe you need to add provider = aws.dest to your data "aws_iam_policy_document" "dest_policy" data object.
The provider directive also works with data sources; a quick sketch follows.
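For illustration, assuming the aws.dest provider alias from your configuration:

data "aws_iam_policy_document" "dest_policy" {
  provider = aws.dest

  # ... statements unchanged from the question ...
}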
I am using an OpenAPI 3.0 spec to deploy an AWS API Gateway. I am not able to figure out how to enable CloudWatch logs for the deployment.
Here is the terraform code:
data "template_file" "test_api_swagger" {
template = file(var.api_spec_path)
vars = {
//ommitted
}
}
resource "aws_api_gateway_rest_api" "test_api_gateway" {
name = "test_backend_api_gateway"
description = "API Gateway for some x"
body = data.template_file.test_api_swagger.rendered
endpoint_configuration {
types = ["REGIONAL"]
}
}
resource "aws_api_gateway_deployment" "test_lambda_gateway" {
rest_api_id = aws_api_gateway_rest_api.test_api_gateway.id
stage_name = var.env
}
I checked the Amazon OpenAPI extensions and none seem to have this option. The only way I see is using api_gateway_method_settings, which I cannot use in this case.
I think it is not supported in Terraform. I'm currently using a Terraform provisioner to run an AWS CLI command after the deployment is created, like in the example below.
The example I'm providing enables X-Ray tracing. You'll need to research the correct path and value to be used for CloudWatch logs (a hedged variant follows the example). You can find more information in the docs.
resource "aws_api_gateway_deployment" "test_lambda_gateway" {
rest_api_id = aws_api_gateway_rest_api.test_api_gateway.id
stage_name = var.env
provisioner "local-exec" {
command = "aws apigateway update-stage --region ${data.aws_region.current.name} --rest-api-id ${aws_api_gateway_rest_api.test_api_gateway.id} --stage-name ${var.env} --patch-operations op=replace,path=/tracingEnabled,value=true"
}
}
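As an untested variant for execution logging, the provisioner could look something like this (the /*/*/logging/loglevel patch path is taken from the update-stage CLI documentation; verify it against the current docs before relying on it):

provisioner "local-exec" {
  # Assumed patch path for method-level execution logging on all methods.
  command = "aws apigateway update-stage --region ${data.aws_region.current.name} --rest-api-id ${aws_api_gateway_rest_api.test_api_gateway.id} --stage-name ${var.env} --patch-operations op=replace,path=/*/*/logging/loglevel,value=INFO"
}

Note that execution logging also requires a CloudWatch Logs role to be configured at the API Gateway account level (the aws_api_gateway_account resource in Terraform).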
You just need to add a reference to the aws_region data source in your Terraform template:
data "aws_region" "current" {}
Even though you're creating the gateway with OpenAPI import, you can still use api_gateway_method_settings to reference the stage, assuming you're using a stage as recommended. See the AWS documentation. You would just specify "*/*" as the method_path, as in the example below.
resource "aws_api_gateway_stage" "example" {
deployment_id = aws_api_gateway_deployment.test_lambda_gateway.id
rest_api_id = aws_api_gateway_rest_api.test_api_gateway.id
stage_name = "example"
}
resource "aws_api_gateway_method_settings" "all" {
rest_api_id = aws_api_gateway_rest_api.test_api_gateway.id
stage_name = aws_api_gateway_stage.example.stage_name
method_path = "*/*"
settings {
logging_level = "INFO"
}
}
This should set up INFO-level logging for all requests on the gateway, just as if you had enabled it on the stage in the console.
I'm trying to create data roles in three environments in AWS using Terraform.
One is a role in the root account. This role is used to log in to AWS and can assume the data roles in production and staging. This works fine and uses a separate module.
I have problems when trying to create the roles in prod and staging from a module.
My module's main.tf looks like this:
resource "aws_iam_role" "this" {
name = "${var.name}"
description = "${format("%s (managed by Terraform)", var.policy_description)}"
assume_role_policy = "${length(var.custom_principals) == 0 ? data.aws_iam_policy_document.assume_role.json : data.aws_iam_policy_document.assume_role_custom_principals.json}"
}
resource "aws_iam_policy" "this" {
name = "${var.name}"
description = "${format("%s (managed by Terraform)", var.policy_description)}"
policy = "${var.policy}"
}
data "aws_iam_policy_document" "assume_role" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "AWS"
identifiers = ["arn:aws:iam::${var.account_id}:root"]
}
}
}
data "aws_iam_policy_document" "assume_role_custom_principals" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "AWS"
identifiers = [
"${var.custom_principals}",
]
}
}
}
resource "aws_iam_role_policy_attachment" "this" {
role = "${aws_iam_role.this.name}"
policy_arn = "${aws_iam_policy.this.arn}"
}
I also have the following in output.tf:
output "role_name" {
value = "${aws_iam_role.this.name}"
}
Next I try to use the module to create two roles in prod and staging.
main.tf:
module "data_role" {
source = "../tf_data_role"
account_id = "${var.account_id}"
name = "data"
policy_description = "Role for data engineers"
custom_principals = [
"arn:aws:iam::${var.master_account_id}:root",
]
policy = "${data.aws_iam_policy_document.data_access.json}"
}
Then I'm trying to attach AWS managed policies like this:
resource "aws_iam_role_policy_attachment" "data_readonly_access" {
role = "${module.data_role.role_name}"
policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}
resource "aws_iam_role_policy_attachment" "data_redshift_full_access" {
role = "${module.data_role.role_name}"
policy_arn = "arn:aws:iam::aws:policy/AmazonRedshiftFullAccess"
}
The problem I encounter here is that when I run this module, the above two policies are attached not in staging but in the root account. How can I fix this so that the policies are attached in staging?
I'll assume from your question that staging is its own AWS account, separate from your root account. From the Terraform docs:
You can define multiple configurations for the same provider in order to support multiple regions, multiple hosts, etc.
This also applies to creating resources in multiple AWS accounts. To create Terraform resources in two AWS accounts, follow these steps.
In your entrypoint main.tf, define aws providers for the accounts you'll be targeting:
# your normal provider targeting your root account
provider "aws" {
  version = "1.40"
  region  = "us-east-1"
}

provider "aws" {
  version = "1.40"
  region  = "us-east-1"
  alias   = "staging" # define custom alias

  # either use an assumed role or allowed_account_ids to target another account
  assume_role {
    role_arn = "arn:aws:iam::STAGINGACCOUNTNUMBER:role/Staging"
  }
}
(Note: the role ARN must already exist and your current AWS credentials must have permission to assume it; a sketch of its trust policy follows.)
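For reference, a hedged sketch of the trust policy that the pre-existing Staging role (created in the staging account, outside this configuration) would need, where var.master_account_id stands in for your root account ID:

data "aws_iam_policy_document" "staging_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type = "AWS"
      # Placeholder: the root account that is allowed to assume the Staging role.
      identifiers = ["arn:aws:iam::${var.master_account_id}:root"]
    }
  }
}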
To use them in your module, call your module like this:
module "data_role" {
source = "../tf_data_role"
providers = {
aws.staging = "aws.staging"
aws = "aws"
}
account_id = "${var.account_id}"
name = "data"
... remainder of module
}
and define the providers within your module like this
provider "aws" {
alias = "staging"
}
provider "aws" {}
Now when declaring resources within your module, you can dictate which AWS provider (and hence which account) to create the resources in, e.g.:
resource "aws_iam_role" "this" {
provider = "aws.staging" # this aws_iam_role will be created in your staging account
name = "${var.name}"
description = "${format("%s (managed by Terraform)", var.policy_description)}"
assume_role_policy = "${length(var.custom_principals) == 0 ? data.aws_iam_policy_document.assume_role.json : data.aws_iam_policy_document.assume_role_custom_principals.json}"
}
resource "aws_iam_policy" "this" {
# no explicit provider is set here so it will use the "default" (un-aliased) aws provider and create this aws_iam_policy in your root account
name = "${var.name}"
description = "${format("%s (managed by Terraform)", var.policy_description)}"
policy = "${var.policy}"
}