Terraform filter_suffix multiple values - amazon-web-services

Using AWS and Terraform, is it possible to add multiple values to the filter_suffix? Looking at the documentation, they have an example here:
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = "${aws_s3_bucket.bucket.id}"
topic {
topic_arn = "${aws_sns_topic.topic.arn}"
events = ["s3:ObjectCreated:*"]
filter_suffix = ".dcm"
}
I have tried
filter_suffix = [".dcm",".DCM"]
with no success.

To compile the comments into an answer:
It is not possible to have multiple prefixes or suffixes per S3 bucket notification. This is not specific to Terraform; it is also not possible in the AWS management console. A workaround is to define multiple notifications.
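For example, a minimal sketch of that workaround (resource names are taken from the documentation example above; the second suffix is illustrative) defines one notification block per suffix inside the single aws_s3_bucket_notification resource:

resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = "${aws_s3_bucket.bucket.id}"

  # One topic block per suffix; the prefix/suffix combinations must not
  # overlap for the same event type.
  topic {
    topic_arn     = "${aws_sns_topic.topic.arn}"
    events        = ["s3:ObjectCreated:*"]
    filter_suffix = ".dcm"
  }

  topic {
    topic_arn     = "${aws_sns_topic.topic.arn}"
    events        = ["s3:ObjectCreated:*"]
    filter_suffix = ".DCM"
  }
}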

Related

FirehoseDestination - Could not assume IAM role

I am getting the following error while setting up a Kinesis Data Firehose event destination for Amazon SES event publishing using Terraform. Terraform creates the IAM role, but then throws the error while creating the Firehose event destination that references that role.
However, I am able to attach the same Terraform-created IAM role to the Firehose event destination from the AWS console.
If I manually create the same IAM role using the AWS console and then pass the ARN of the role to Terraform, it works. However, if I try to create the role using Terraform and then create the event destination, it does not work. Can someone please help me with this?
Error creating SES configuration set event destination: InvalidFirehoseDestination: Could not assume IAM role <arn:aws:iam::<AWS account name >:role/<AWS IAM ROLE NAME>>.
data "aws_iam_policy_document" "ses_configuration_set_assume_role" {
statement {
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["ses.amazonaws.com"]
}
}
}
data "aws_iam_policy_document" "ses_firehose_destination_policy" {
statement {
effect = "Allow"
actions = [
"firehose:PutRecord",
"firehose:PutRecordBatch"
]
resources = [
"<ARN OF AWS FIREHOSE DELIVERY STREAM >"
]
}
}
resource "aws_iam_policy" "ses_firehose_destination_iam_policy" {
name = "SesfirehosedestinationPolicy"
policy = data.aws_iam_policy_document.ses_firehose_destination_policy.json
}
resource "aws_iam_role" "ses_firehose_destination_role" {
name = "SesfirehosedestinationRole"
assume_role_policy = data.aws_iam_policy_document.ses_configuration_set_assume_role.json
}
resource "aws_iam_role_policy_attachment" "ses_firehose_destination_role_att" {
role = aws_iam_role.ses_firehose_destination_role.name
policy_arn = aws_iam_policy.ses_firehose_destination_iam_policy.arn
}
resource "aws_ses_configuration_set" "create_ses_configuration_set" {
name = var.ses_config_set_name
}
resource "aws_ses_event_destination" "ses_firehose_destination" {
name = "event-destination-kinesis"
configuration_set_name = aws_ses_configuration_set.create_ses_configuration_set.name
enabled = true
matching_types = ["send", "reject", "bounce", "complaint", "delivery", "renderingFailure"]
depends_on = [aws_iam_role.ses_firehose_destination_role]
kinesis_destination {
stream_arn = "<ARN OF AWS FIREHOSE DELIVERY STREAM>"
role_arn = aws_iam_role.ses_firehose_destination_role.arn
}
}
You might need to look at your Firehose data source. If it is a Kinesis Data Stream, it will not work; it only works when the Kinesis Firehose uses Direct PUT or another data source. I ran into this issue while setting up my Kinesis Firehose to Datadog as well. I hope this helps.
I found the same issue and was able to resolve it with a slight workaround.
The issue is likely due to the time it takes AWS to propagate the IAM role to all regions. As an IAM role is global, it is created first in one region and then propagated to the others. So if it is not created first in your region, it may take some time to propagate, and you will get this error if the SES event destination is created before the IAM role has propagated.
It does not help to add a depends_on clause, as Terraform (correctly?) considers the IAM role created; it has just not been propagated to your region yet.
The solution that worked for me was to create an IAM role that grants the SES service the "sts:AssumeRole" action and allows the "firehose:PutRecordBatch" action on the Firehose. When applying the Terraform, I first did a targeted apply for only this role, waited a minute (to allow the IAM role to propagate), and then ran a normal terraform apply to complete.
In your example the commands will look something like this:
terraform apply --target aws_iam_role_policy_attachment.ses_firehose_destination_role_att
terraform apply
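If you prefer to avoid the two-step apply, an alternative sketch (not from the answer above; it assumes the hashicorp/time provider) inserts an explicit delay between the role and the event destination:

resource "time_sleep" "wait_for_role_propagation" {
  # Give IAM time to propagate the role before SES tries to assume it.
  depends_on      = [aws_iam_role_policy_attachment.ses_firehose_destination_role_att]
  create_duration = "60s"
}

Then point the depends_on of aws_ses_event_destination.ses_firehose_destination at time_sleep.wait_for_role_propagation instead of the role, so a single terraform apply waits out the propagation delay.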

S3 notification configuration is ambiguously defined?

I am trying to use Terraform to connect two AWS lambda functions to an S3 event.
My configuration looks like this:
resource "aws_s3_bucket_notification" "collection_write" {
bucket = aws_s3_bucket.fruit.id
lambda_function {
lambda_function_arn = aws_lambda_function.apple.arn
events = [ "s3:ObjectCreated:*" ]
filter_prefix = "foo/"
}
lambda_function {
lambda_function_arn = aws_lambda_function.banana.arn
events = [ "s3:ObjectCreated:*" ]
filter_prefix = "foo/"
}
}
But AWS / Terraform doesn't like this:
Error: Error putting S3 notification configuration: InvalidArgument: Configuration is ambiguously defined. Cannot have overlapping suffixes in two rules if the prefixes are overlapping for the same event type.
status code: 400
How should I write this in Terraform?
Your Terraform is not wrong; S3 simply does not allow two notification rules with overlapping prefixes and suffixes for the same event type. It is better to send the S3 event to an SNS topic, which then triggers both Lambdas to achieve the same functionality, as in the sketch below.
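A minimal sketch of that fan-out, reusing the bucket and function names from the question (the topic name and policy are illustrative):

resource "aws_sns_topic" "collection_write" {
  name = "collection-write-events"

  # Allow the bucket to publish to this topic.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "s3.amazonaws.com" }
      Action    = "SNS:Publish"
      Resource  = "arn:aws:sns:*:*:collection-write-events"
      Condition = { ArnLike = { "aws:SourceArn" = aws_s3_bucket.fruit.arn } }
    }]
  })
}

resource "aws_s3_bucket_notification" "collection_write" {
  bucket = aws_s3_bucket.fruit.id

  topic {
    topic_arn     = aws_sns_topic.collection_write.arn
    events        = ["s3:ObjectCreated:*"]
    filter_prefix = "foo/"
  }
}

# Subscribe both Lambdas to the topic and allow SNS to invoke them.
resource "aws_sns_topic_subscription" "apple" {
  topic_arn = aws_sns_topic.collection_write.arn
  protocol  = "lambda"
  endpoint  = aws_lambda_function.apple.arn
}

resource "aws_sns_topic_subscription" "banana" {
  topic_arn = aws_sns_topic.collection_write.arn
  protocol  = "lambda"
  endpoint  = aws_lambda_function.banana.arn
}

resource "aws_lambda_permission" "apple" {
  statement_id  = "AllowSNSInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.apple.function_name
  principal     = "sns.amazonaws.com"
  source_arn    = aws_sns_topic.collection_write.arn
}

resource "aws_lambda_permission" "banana" {
  statement_id  = "AllowSNSInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.banana.function_name
  principal     = "sns.amazonaws.com"
  source_arn    = aws_sns_topic.collection_write.arn
}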
You can also trigger multiple Lambda functions via AWS SQS. SQS is a powerful and easy-to-use messaging queue for such use cases.

Variably create 1 or more AWS Buckets across multiple regions

I’m looking for recommendations and help with an issue I am having setting up and managing bucket and bucket-policy creation for multiple environments, and for multiple regions within a single environment.
I have 4 AWS accounts (dev, stg, prod1, and prod2, which is a copy of prod1). In prod1 we have two Kubernetes clusters, aws-us-prod1 and aws-eu-prod1. These two clusters are completely independent of one another; they merely serve customers in their respective regions.
I have applications running on these two different clusters (aws-us-prod1 and aws-eu-prod1) that need to write content to an S3 bucket, but the two clusters share an AWS account (prod1).
I’m trying to write some Terraform resource automation to manage this, and I haven’t been able to variably control which region a bucket gets put in. The latest doc shows a region attribute, but it doesn’t work, because the provider implementation takes the region from the aws provider’s own region attribute.
What I’d like to do is something like this:
variable "buckets" {
type = map(string) # e.g. buckets='{"a-us-prod1": "us-west-2", "a-eu-prod1":"eu-west-2"}'
}
resource "aws_s3_bucket" "my_buckets" {
for_each = var.buckets
bucket = each.key
region = each.value
}
resource "aws_s3_bucket_policy" "my_buckets_policy" {
for_each = aws_s3_bucket.my_buckets
bucket = each.value.id
policy = ...
}
I’ve tried using multiple providers with aliases, but you can’t programmatically select a provider based on the value of a variable you are iterating over. What’s the proper way to organize this project and its resources to accomplish this?
The issues I have come across are related to these:
https://github.com/hashicorp/terraform/issues/3656
https://github.com/terraform-providers/terraform-provider-aws/issues/5999
The region attribute was removed from aws_s3_bucket in terraform-provider-aws v3.0.0, released July 31, 2020. Before then you could set the region for a bucket and it would be respected, and the bucket would be created in that region. However, that is not how any other resource is managed; it was probably only there because S3 is globally scoped and the bucket ARN contains no region. All other services use the region of the provider itself (as it should be).
I would recommend creating a provider for each region you want to support, splitting var.buckets by region, and then creating one resource "aws_s3_bucket" "this_region" { } for each region:
variable "buckets" {
type = map(string) # e.g. buckets='{"a-us-prod1": "us-west-2", "a-eu-prod1":"eu-west-2"}'
}
provider "aws" {
region = "eu-west-2"
alias = "eu-west-2"
}
locals {
eu_west_2_buckets = [for name, region in var.buckets: name if region == "eu-west-2"]
}
resource "aws_s3_bucket" "eu_west_2_buckets" {
count = length(local.eu_west_2_buckets)
bucket = eu_west_2_buckets[count.index]
provider = aws.eu-west-2
}
If you want to roll out only the buckets that match the current deployment region, you can do that by simply changing the bucket filtering logic:
variable "buckets" {
type = map(string) # e.g. buckets='{"a-us-prod1": "us-west-2", "a-eu-prod1":"eu-west-2"}'
}
locals {
buckets = [for name, region in var.buckets: name if region == data.aws_region.current.name]
}
data "aws_region" "current" { }
resource "aws_s3_bucket" "buckets" {
count = length(local.buckets)
bucket = buckets[count.index]
}
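As a usage sketch (assuming each region keeps its own state, e.g. a separate workspace or backend key, and the provider region comes from the environment), the same configuration can then be applied once per region:

AWS_REGION=us-west-2 terraform apply -var='buckets={"a-us-prod1": "us-west-2", "a-eu-prod1": "eu-west-2"}'
AWS_REGION=eu-west-2 terraform apply -var='buckets={"a-us-prod1": "us-west-2", "a-eu-prod1": "eu-west-2"}'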

How to integrate terraform with atlassian/localstack?

Terraform can be configured with custom S3 endpoints, and it seems that localstack can create local stacks for S3, SES, CloudFormation and a few other services.
The question is: what should I write in the Terraform configuration to use localstack's S3 endpoint?
Terraform does not officially support "AWS-workalike" systems, since they often have subtle quirks and differences relative to AWS itself. However, they are supported on a best-effort basis, and this may work if localstack is able to provide a sufficiently realistic impression of S3 for Terraform's purposes.
According to the localstack docs, by default the S3 API is exposed at http://localhost:4572, so setting the custom endpoint this way may work:
provider "aws" {
endpoints {
s3 = "http://localhost:4572"
}
}
Depending on the capabilities of localstack, you may need to set some other settings:
s3_force_path_style to use a path-based addressing scheme for buckets and objects.
skip_credentials_validation, since localstack seems to lack an implementation of the AWS token service.
skip_metadata_api_check if IAM-style credentials will not be used, to prevent Terraform from trying to get credentials from the EC2 metadata API.
Building off Martin Atkins' answer, here's a sample Terraform file that works with Localstack:
provider "aws" {
region = "us-east-1"
access_key = "anaccesskey"
secret_key = "asecretkey"
skip_credentials_validation = true
skip_metadata_api_check = true
s3_force_path_style = true
endpoints {
s3 = "http://localhost:4572"
}
}
resource "aws_s3_bucket" "b" {
bucket = "my-tf-test-bucket"
acl = "public-read"
}

Terraform state locking using DynamoDB

Our Terraform layout is such that we run Terraform for many (100+) AWS accounts and save the Terraform state files remotely to a central S3 bucket.
The new locking feature sounds useful and I wish to implement it, but I am unsure whether I can use a central DynamoDB table in the same account as our S3 bucket, or whether I need to create a DynamoDB table in each of the AWS accounts.
You can use a single DynamoDB table to control the locking for the state file for all of the accounts. This would work even if you had multiple S3 buckets to store state in.
The DynamoDB table is keyed on LockID, which is set to bucketName/path. So as long as you have a unique combination of those, you will be fine (and you should, or you have bigger problems with your state management).
Obviously you will need to set up cross-account IAM policies to allow users creating things in one account to manage items in the DynamoDB table.
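As a reference for those policies, a minimal sketch of the DynamoDB permissions the S3 backend needs on the lock table (the table name and the account ID placeholder are illustrative) could look like this:

data "aws_iam_policy_document" "terraform_lock_table_access" {
  statement {
    effect = "Allow"
    actions = [
      "dynamodb:DescribeTable",
      "dynamodb:GetItem",
      "dynamodb:PutItem",
      "dynamodb:DeleteItem"
    ]
    # The lock table lives in the central account alongside the state bucket.
    resources = ["arn:aws:dynamodb:*:<CENTRAL ACCOUNT ID>:table/terraform-lock"]
  }
}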
To use Terraform's DynamoDB locking, follow the steps below.
1. Create a DynamoDB table with Terraform to lock the terraform.tfstate.
provider "aws" {
region = "us-east-2"
}
resource "aws_dynamodb_table" "dynamodb-terraform-lock" {
name = "terraform-lock"
hash_key = "LockID"
read_capacity = 20
write_capacity = 20
attribute {
name = "LockID"
type = "S"
}
tags {
Name = "Terraform Lock Table"
}
}
2. Execute Terraform to create the DynamoDB table on AWS:
terraform apply
Usage Example
1. Use the DynamoDB table to lock terraform.tfstate creation on AWS, using an EC2 instance as an example:
terraform {
  backend "s3" {
    bucket         = "terraform-s3-tfstate"
    region         = "us-east-2"
    key            = "ec2-example/terraform.tfstate"
    dynamodb_table = "terraform-lock"
    encrypt        = true
  }
}

provider "aws" {
  region = "us-east-2"
}

resource "aws_instance" "ec2-example" {
  ami           = "ami-a4c7edb2"
  instance_type = "t2.micro"
}
The dynamodb_table value must match the name of the DynamoDB table we created.
2. Initialize the Terraform S3 and DynamoDB backend:
terraform init
3. Execute Terraform to create the EC2 server:
terraform apply
To see the code, go to the Github DynamoDB Locking Example.