Terraform: Workaround for AWS S3 RTC

I am using Terraform version 1.0.5. I would like to enable AWS S3 Replication Time Control (RTC) for S3 buckets created by my Terraform script. As Terraform does not yet offer the ability to enable/disable RTC (https://github.com/hashicorp/terraform-provider-aws/issues/10974), I was thinking of using a local-exec provisioner to update the replication configuration of the created buckets (see below).
resource "null_resource" "s3_bucket" {
depends_on = [
# a module that creates S3 buckets
]
triggers = {
# below statement makes sure the local-exec provisioner is invoked on every run
always_run = timestamp()
encoded_replication_config = local.replication_config
}
provisioner "local-exec" {
command = "aws s3api put-bucket-replication --bucket '${primary_bucket_name}' --replication-configuration '${self.triggers.encoded_replication_config}'"
}
}
locals {
  replication_config = jsonencode({
    "Role" : role_arn,
    "Rules" : [
      {
        "ID" : "replication-id",
        "Status" : "Enabled",
        "Priority" : 1,
        "DeleteMarkerReplication" : { "Status" : "Disabled" },
        "Filter" : { "Prefix" : "" },
        "Destination" : {
          "Bucket" : replica_bucket_arn,
          "ReplicationTime" : {
            "Status" : "Enabled",
            "Time" : {
              "Minutes" : 15
            }
          },
          "Metrics" : {
            "Status" : "Enabled",
            "EventThreshold" : {
              "Minutes" : 15
            }
          }
        }
      }
    ]
  })
}
While this works fine for a few buckets, it does not scale to a large number of S3 buckets (say 500): because the replication configuration is not stored in the tfstate, the local-exec provisioner that updates it runs for every bucket on each terraform apply, even when only a brand-new bucket has been added.
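One partial mitigation I have considered (a rough, untested sketch of the same resource): drop the always_run trigger, so the triggers map is keyed only to the encoded replication configuration. Since triggers are persisted in state, the provisioner would then re-run only when the configuration changes or the resource is new. The downside is that drift applied outside Terraform would no longer be corrected on every apply.

resource "null_resource" "s3_bucket" {
  depends_on = [
    # a module that creates S3 buckets
  ]

  triggers = {
    # no always_run trigger: triggers are stored in state, so the provisioner
    # re-runs only when the encoded configuration changes
    encoded_replication_config = local.replication_config
  }

  provisioner "local-exec" {
    command = "aws s3api put-bucket-replication --bucket '${primary_bucket_name}' --replication-configuration '${self.triggers.encoded_replication_config}'"
  }
}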
I would really appreciate it if anyone could suggest other workarounds for this problem.

Related

How to Configure CloudWatch Lambda Insights in Terraform

I need to enable "CloudWatch Lambda Insights" for a Lambda function using Terraform, but could not find the documentation for it. How can I do this in Terraform?
Note: This question How to add CloudWatch Lambda Insights to serverless config? may be relevant.
There is no "boolean switch" in the aws_lambda_function resource of the AWS Terraform provider that you could set to true to enable CloudWatch Lambda Insights.
Fortunately, it is possible to do this yourself. The following Terraform definitions are based on this AWS documentation: Using the AWS CLI to enable Lambda Insights on an existing Lambda function
The process involves two steps:
Add a layer to your Lambda
Attach an AWS policy to your Lambda's role.
The Terraform definitions would look like this:
resource "aws_lambda_function" "insights_example" {
[...]
layers = [
"arn:aws:lambda:us-east-1:580247275435:layer:LambdaInsightsExtension:14"
]
}
resource "aws_iam_role_policy_attachment" "insights_policy" {
role = aws_iam_role.insights_example.id
policy_arn = "arn:aws:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy"
}
Important: The ARN of the layer is different for each region. The documentation I linked above has a link to a list of them. Furthermore, there is an additional step required if your Lambda is in a VPC, which you can read about in the documentation. The described "VPC step" can be put into Terraform as well.
For future readers: The version of that layer in my example is 14. This will change over time. So please do not just copy & paste that part. Follow the provided links and look for the current version of that layer.
Minimal, Complete, and Verifiable example
Tested with:
Terraform v0.14.4
+ provider registry.terraform.io/hashicorp/archive v2.0.0
+ provider registry.terraform.io/hashicorp/aws v3.24.0
Create the following two files (handler.py and main.tf) in a folder. Then run the following commands:
terraform init
terraform plan
terraform apply
Besides deploying the required resources, it also creates a zip archive containing handler.py, which is the deployment artifact used by the aws_lambda_function resource. So this is an all-in-one example, with no further zipping needed.
handler.py
def lambda_handler(event, context):
    return {
        'message': 'CloudWatch Lambda Insights Example'
    }
main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_lambda_function" "insights_example" {
  function_name = "insights-example"
  runtime       = "python3.8"
  handler       = "handler.lambda_handler"
  role          = aws_iam_role.insights_example.arn
  filename      = "${path.module}/lambda.zip"

  layers = [
    "arn:aws:lambda:us-east-1:580247275435:layer:LambdaInsightsExtension:14"
  ]

  depends_on = [
    data.archive_file.insights_example
  ]
}

resource "aws_iam_role" "insights_example" {
  name               = "InsightsExampleLambdaRole"
  assume_role_policy = data.aws_iam_policy_document.lambda_assume.json
}

resource "aws_iam_role_policy_attachment" "insights_example" {
  role       = aws_iam_role.insights_example.id
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

resource "aws_iam_role_policy_attachment" "insights_policy" {
  role       = aws_iam_role.insights_example.id
  policy_arn = "arn:aws:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy"
}

data "aws_iam_policy_document" "lambda_assume" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

data "archive_file" "insights_example" {
  type        = "zip"
  source_file = "${path.module}/handler.py"
  output_path = "${path.module}/lambda.zip"
}
In case you are using container images as the deployment package for your Lambda function, the required steps to enable CloudWatch Lambda Insights are slightly different (since Lambda layers can't be used there):
Attach the arn:aws:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy to your function's role, as described by Jens.
Add the Lambda Insights extension to your container image:
FROM public.ecr.aws/lambda/nodejs:12

RUN curl -O https://lambda-insights-extension.s3-ap-northeast-1.amazonaws.com/amazon_linux/lambda-insights-extension.rpm && \
    rpm -U lambda-insights-extension.rpm && \
    rm -f lambda-insights-extension.rpm

COPY app.js /var/task/
See the documentation for details.
Based on @jens' answer, here's a snippet that will automatically supply the correct LambdaInsightsExtension layer for the current region:
data "aws_region" "current" {}
locals {
aws_region = data.aws_region.current.name
# List taken from https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Lambda-Insights-extension-versionsx86-64.html
lambdaInsightsLayers = {
"us-east-1" : "arn:aws:lambda:us-east-1:580247275435:layer:LambdaInsightsExtension:18",
"us-east-2" : "arn:aws:lambda:us-east-2:580247275435:layer:LambdaInsightsExtension:18",
"us-west-1" : "arn:aws:lambda:us-west-1:580247275435:layer:LambdaInsightsExtension:18",
"us-west-2" : "arn:aws:lambda:us-west-2:580247275435:layer:LambdaInsightsExtension:18",
"af-south-1" : "arn:aws:lambda:af-south-1:012438385374:layer:LambdaInsightsExtension:11",
"ap-east-1" : "arn:aws:lambda:ap-east-1:519774774795:layer:LambdaInsightsExtension:11",
"ap-south-1" : "arn:aws:lambda:ap-south-1:580247275435:layer:LambdaInsightsExtension:18",
"ap-northeast-3" : "arn:aws:lambda:ap-northeast-3:194566237122:layer:LambdaInsightsExtension:1",
"ap-northeast-2" : "arn:aws:lambda:ap-northeast-2:580247275435:layer:LambdaInsightsExtension:18",
"ap-southeast-1" : "arn:aws:lambda:ap-southeast-1:580247275435:layer:LambdaInsightsExtension:18",
"ap-southeast-2" : "arn:aws:lambda:ap-southeast-2:580247275435:layer:LambdaInsightsExtension:18",
"ap-northeast-1" : "arn:aws:lambda:ap-northeast-1:580247275435:layer:LambdaInsightsExtension:25",
"ca-central-1" : "arn:aws:lambda:ca-central-1:580247275435:layer:LambdaInsightsExtension:18",
"eu-central-1" : "arn:aws:lambda:eu-central-1:580247275435:layer:LambdaInsightsExtension:18",
"eu-west-1" : "arn:aws:lambda:eu-west-1:580247275435:layer:LambdaInsightsExtension:18",
"eu-west-2" : "arn:aws:lambda:eu-west-2:580247275435:layer:LambdaInsightsExtension:18",
"eu-south-1" : "arn:aws:lambda:eu-south-1:339249233099:layer:LambdaInsightsExtension:11",
"eu-west-3" : "arn:aws:lambda:eu-west-3:580247275435:layer:LambdaInsightsExtension:18",
"eu-north-1" : "arn:aws:lambda:eu-north-1:580247275435:layer:LambdaInsightsExtension:18",
"me-south-1" : "arn:aws:lambda:me-south-1:285320876703:layer:LambdaInsightsExtension:11",
"sa-east-1" : "arn:aws:lambda:sa-east-1:580247275435:layer:LambdaInsightsExtension:18"
}
}
resource "aws_lambda_function" "my_lambda" {
...
layers = [
local.lambdaInsightsLayers[local.aws_region]
]
}
resource "aws_iam_role_policy_attachment" "insights_policy" {
role = aws_iam_role.my_lambda.id
policy_arn = "arn:aws:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy"
}

terraform provisioner variable type listofmaps doesn't work for tags

In Terraform, I am using the null_resource provider to create an AWS EventBridge custom event bus, since Terraform does not provide a built-in resource type to create a custom event bus.
# cat main.tf
resource "null_resource" "event_bus" {
  triggers = {
    event_bus_name = var.event_bus_name
  }

  provisioner "local-exec" {
    command = "aws events create-event-bus --name ${var.event_bus_name} --tags ${var.event_bus_tags}"
  }

  provisioner "local-exec" {
    when    = destroy
    command = "aws events delete-event-bus --name ${self.triggers.event_bus_name}"
  }
}
I am defining the variable for tags as below
# cat variable.tf
variable "event_bus_tags" {
  type = list(map(any))
}

variable "event_bus_name" {
  type = string
}
and I am setting the variables in auto.tfvars as below
# cat var.auto.tfvars
event_bus_tags = [
  {
    "Key" : "environment", "Value" : "dev"
  },
  {
    "Key" : "type", "Value" : "custom"
  }
]

event_bus_name = "my-event-bus"
but I am getting the below error.
# terraform apply --auto-approve
null_resource.event_bus: Creating...
Error: Invalid template interpolation value: Cannot include the given value in a string template: string required.
The same command works just fine when run directly on the command line:
aws events create-event-bus --name test --tags '[ { "Key": "env", "Value": "dev" } ]'
I am not sure what the appropriate variable type for the tags would be, given that "Key" and "Value" are strings.
You need to convert the Terraform value into a JSON string; Terraform provides the jsonencode function for this. Wrap the result in single quotes so the shell passes the JSON as a single argument, just like in your working CLI example:
command = "aws events create-event-bus --name ${var.event_bus_name} --tags '${jsonencode(var.event_bus_tags)}'"

Passing event data from Amazon EventBridge into an AWS Fargate task

Objective
I'd like to pass event data from Amazon EventBridge directly into an AWS Fargate task. However, it doesn't seem like this is currently possible.
Workaround
As a workaround, I've inserted an extra resource between EventBridge and AWS Fargate. AWS Step Functions allows you to specify ContainerOverrides, in which the Environment property lets you set environment variables that are passed into the Fargate task from the EventBridge event.
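A rough sketch of what the relevant state in such a Step Functions workaround could look like, using the ECS RunTask integration (the ARNs, container name, and event path are illustrative, not my actual configuration):

"RunFargateTask": {
  "Type": "Task",
  "Resource": "arn:aws:states:::ecs:runTask.sync",
  "Parameters": {
    "LaunchType": "FARGATE",
    "Cluster": "arn:aws:ecs:us-east-1:123456789012:cluster/mycluster",
    "TaskDefinition": "arn:aws:ecs:us-east-1:123456789012:task-definition/mytaskdefinition",
    "NetworkConfiguration": {
      "AwsvpcConfiguration": {
        "Subnets": ["subnet-1", "subnet-2"],
        "SecurityGroups": ["sg-group-id"]
      }
    },
    "Overrides": {
      "ContainerOverrides": [
        {
          "Name": "containername",
          "Environment": [
            { "Name": "S3_BUCKET_NAME", "Value.$": "$.detail.bucket.name" }
          ]
        }
      ]
    }
  },
  "End": true
}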
Unfortunately, this workaround increases the solution complexity and cost unnecessarily.
Question: Is there a way to pass event data from EventBridge directly into an AWS Fargate (ECS) task, that I am simply unaware of?
To pass data from an EventBridge event to an ECS task (e.g. with launch type FARGATE), you can use an input transformer. For example, let's say we have an S3 bucket configured to send all event notifications to EventBridge, and an EventBridge rule that looks like this:
{
  "detail": {
    "bucket": {
      "name": ["mybucket"]
    }
  },
  "detail-type": ["Object Created"],
  "source": ["aws.s3"]
}
Now let's say we would like to pass the bucket name, object key, and object version ID to our ECS task running on Fargate. You can create an aws_cloudwatch_event_target resource in Terraform with the input transformer below.
resource "aws_cloudwatch_event_target" "EventBridgeECSTaskTarget"{
target_id = "EventBridgeECSTaskTarget"
rule = aws_cloudwatch_event_rule.myeventbridgerule.name
arn = "arn:aws:ecs:us-east-1:123456789012:cluster/myecscluster"
role_arn = aws_iam_role.EventBridgeRuleInvokeECSTask.arn
ecs_target {
task_count = 1
task_definition_arn = "arn:aws:ecs:us-east-1:123456789012:task-definition/mytaskdefinition"
launch_type = "FARGATE"
network_configuration {
subnets = ["subnet-1","subnet-2","subnet-3"]
security_groups = ["sg-group-id"]
}
}
input_transformer {
input_paths = {
bucketname = "$.detail.bucket.name",
objectkey = "$.detail.object.key",
objectversionid = "$.detail.object.version-id",
}
input_template = <<EOF
{
"containerOverrides": [
{
"name": "containername",
"environment" : [
{
"name" : "S3_BUCKET_NAME",
"value" : <bucketname>
},
{
"name" : "S3_OBJECT_KEY",
"value" : <objectkey>
},
{
"name" : "S3_OBJ_VERSION_ID",
"value": <objectversionid>
}
]
}
]
}
EOF
}
}
Once your ECS task is running, you can easily access these environment variables to see which bucket the object was created in, what the object key and version are, and then do a GetObject.
For example, in Go it can be done as follows (snippet only; imports and client setup are omitted, but you get the idea):
// Assumes ctx is a context.Context and s3svc is an initialized S3 client
// from the AWS SDK for Go v2 (s3.NewFromConfig(cfg)).
filename := aws.String(os.Getenv("S3_OBJECT_KEY"))
bucketname := aws.String(os.Getenv("S3_BUCKET_NAME"))
versionId := aws.String(os.Getenv("S3_OBJ_VERSION_ID"))
// You can print and verify the values in CloudWatch.

// Prepare the S3 GetObjectInput.
s3goi := &s3.GetObjectInput{
	Bucket:    bucketname,
	Key:       filename,
	VersionId: versionId,
}

s3goo, err := s3svc.GetObject(ctx, s3goi)
if err != nil {
	log.Fatalf("Error retrieving object: %v", err)
}

b, err := ioutil.ReadAll(s3goo.Body)
if err != nil {
	log.Fatalf("Error reading file: %v", err)
}
There's currently no direct invocation between EventBridge and Fargate. You can find the list of supported targets at https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-targets.html
The workaround is to use an intermediary that supports calling Fargate (like Step Functions) or to send the message to compute (like Lambda [the irony]) before sending it downstream.

How do I trigger SSM Run Command/document on two different schedules using Terraform

I am trying to run a command that creates backups of drives on an EC2 Windows instance. This command should run on two different schedules: one hourly, and the other once every 24 hours. Currently I pass a schedule expression to my association, which runs hourly. I want to be able to create two schedules which I will be able to view on the AWS Systems Manager dashboard. I am not sure if I can leverage a Maintenance Window here.
resource "aws_ssm_document" "backup_script" {
  name          = "backup_script"
  document_type = "Command"

  content = <<DOC
{
  "schemaVersion": "1.2",
  "description": "execute snapshot script",
  "parameters": {},
  "runtimeConfig": {
    "aws:runPowerShellScript": {
      "properties": [
        {
          "id": "0.aws:runPowerShellScript",
          "runCommand": ["powershell.exe -ExecutionPolicy Bypass -file 'C:\\backup.ps1'"]
        }
      ]
    }
  }
}
DOC
}

resource "aws_ssm_association" "backup_script" {
  name = aws_ssm_document.backup_script.name

  targets {
    key    = "tag:backup"
    values = ["${var.backup}"]
  }

  schedule_expression = "rate(60 minutes)"
}
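One possible approach (a sketch only, not verified here): a single SSM document can have multiple associations, so a second aws_ssm_association pointing at the same document with its own schedule_expression should give you two schedules that both appear in the Systems Manager console. The resource name and the daily rate below are illustrative.

# Illustrative second association for the same document, running daily.
resource "aws_ssm_association" "backup_script_daily" {
  name = aws_ssm_document.backup_script.name

  targets {
    key    = "tag:backup"
    values = ["${var.backup}"]
  }

  schedule_expression = "rate(1 day)"
}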

Storing elasticsearch snapshots in amazon s3 repository. How does it work

I have Elasticsearch 2.3 installed on my local Linux machine.
I have Amazon S3 storage: I know the region, bucket name, access key, and secret key.
I want to make a snapshot of my Elasticsearch indices in S3. There is documentation about it here, but it doesn't explain much to me (I am totally new to this).
So, for example, I am trying to execute this command:
curl -XPUT 'localhost:9200/_snapshot/my_s3_repository?pretty' -H 'Content-Type: application/json' -d '{
  "type": "s3",
  "settings": { "bucket": "ilyabackuptest1", "region": "us-east-1" }
}'
And I get a response:
{
  "error" : {
    "root_cause" : [ {
      "type" : "repository_exception",
      "reason" : "[my_s3_repository] failed to create repository"
    } ],
    "type" : "repository_exception",
    "reason" : "[my_s3_repository] failed to create repository",
    "caused_by" : {
      "type" : "illegal_argument_exception",
      "reason" : "Unknown [repository] type [s3]"
    }
  },
  "status" : 500
}
So how does it work?
UPDATE:
After installing repository-s3, I use the same command and get the following. How should this work?
{
  "error" : {
    "root_cause" : [ {
      "type" : "process_cluster_event_timeout_exception",
      "reason" : "failed to process cluster event (put_repository [my_s3_repository]) within 30s"
    } ],
    "type" : "process_cluster_event_timeout_exception",
    "reason" : "failed to process cluster event (put_repository [my_s3_repository]) within 30s"
  },
  "status" : 503
}
You simply need to install the S3 repository plugin first:
bin/plugin install repository-s3
Then you can run your command again to create the S3 repo.
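Regarding the timeout in the update: one thing worth checking (an assumption based on how Elasticsearch 2.x plugins generally behave, not something confirmed in this thread) is that the plugin was installed on every node and the node(s) restarted afterwards, since plugins are only loaded at startup. After a restart, the repository can be created and verified like this:

# register the S3 repository again
curl -XPUT 'localhost:9200/_snapshot/my_s3_repository?pretty' -H 'Content-Type: application/json' -d '{
  "type": "s3",
  "settings": { "bucket": "ilyabackuptest1", "region": "us-east-1" }
}'

# verify that the repository was registered
curl -XGET 'localhost:9200/_snapshot/my_s3_repository?pretty'

# take a snapshot of all indices
curl -XPUT 'localhost:9200/_snapshot/my_s3_repository/snapshot_1?wait_for_completion=true&pretty'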