Create S3 bucket and Lambda policies with Terraform

I'm having some trouble accessing my S3 bucket from a Lambda. I create and configure my bucket and Lambdas using Terraform (Terraform newbie here).
Here is the module that creates the S3 bucket:
module "create-my-bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
bucket = "my-bucket"
acl = "private"
versioning = {
enabled = true
}
block_public_acls = true
block_public_policy = true
restrict_public_buckets = true
ignore_public_acls = true
attach_deny_insecure_transport_policy = true
server_side_encryption_configuration = {
rule = {
apply_server_side_encryption_by_default = {
sse_algorithm = "AES256"
}
}
}
}
Here is the module that configures the policies for the Lambda:
module "my_lambda_policy" {
source = "terraform-aws-modules/iam/aws//modules/iam-policy"
name = "validate_lambda_policy"
path = "/"
description = "Validate Policy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"s3:Put*",
"s3:Get*",
"s3:List*",
"ses:SendEmail"
],
"Resource": [
"arn:aws:s3:::my-bucket/",
"arn:aws:s3:::my-bucket/*",
"arn:aws:ses:..."
]
}]
}
EOF
}
Terraform properly creates my bucket and configures my Lambda; however, when my Lambda tries to perform a "ListObjectsV2" or a "GetObject" operation, it gets an "Access Denied".
I have also set up some SES permissions alongside my S3 policies. Those are applied correctly (my Lambda sends mails), so I expect the S3 policies to be applied as well. Am I missing something in the bucket configuration? What should I do to correct this (without making my bucket fully public, of course)?

This ARN is wrong for the S3 bucket:
"arn:aws:s3:::my-bucket/",
The trailing / makes it not match the bucket ARN. This set of documentation is the best place I know of to determine exactly what an ARN looks like for a given resource.
So you should change it to
"arn:aws:s3:::my-bucket",
Without the slash. Keep "arn:aws:s3:::my-bucket/*" as well, because that will match the object ARNs for GetObject and PutObject.
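For clarity, here is the policy module from the question with just that one Resource entry corrected (everything else unchanged):
module "my_lambda_policy" {
  source = "terraform-aws-modules/iam/aws//modules/iam-policy"

  name        = "validate_lambda_policy"
  path        = "/"
  description = "Validate Policy"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "s3:Put*",
      "s3:Get*",
      "s3:List*",
      "ses:SendEmail"
    ],
    "Resource": [
      "arn:aws:s3:::my-bucket",
      "arn:aws:s3:::my-bucket/*",
      "arn:aws:ses:..."
    ]
  }]
}
EOF
}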


Terraform AWS Cloudwatch Rule with Event Pattern not working [duplicate]

I would like to run an AWS lambda function every five minutes. In the AWS Management Console this is easy to set up, under the lambda function's "Event Sources" tab, but how do I set it up with Terraform?
I tried to use an aws_lambda_event_source_mapping resource, but it turns out that the API it uses only supports events from Kinesis and DynamoDB. When I try to use it with a scheduled event source, creation times out.
You can use an aws_cloudwatch_event_target resource to tie the scheduled event source (event rule) to your lambda function. You need to grant it permission to invoke your lambda function; you can use an aws_lambda_permission resource for this.
Example:
resource "aws_lambda_function" "check_foo" {
filename = "check_foo.zip"
function_name = "checkFoo"
role = "arn:aws:iam::424242:role/something"
handler = "index.handler"
}
resource "aws_cloudwatch_event_rule" "every_five_minutes" {
name = "every-five-minutes"
description = "Fires every five minutes"
schedule_expression = "rate(5 minutes)"
}
resource "aws_cloudwatch_event_target" "check_foo_every_five_minutes" {
rule = aws_cloudwatch_event_rule.every_five_minutes.name
target_id = "check_foo"
arn = aws_lambda_function.check_foo.arn
}
resource "aws_lambda_permission" "allow_cloudwatch_to_call_check_foo" {
statement_id = "AllowExecutionFromCloudWatch"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.check_foo.function_name
principal = "events.amazonaws.com"
source_arn = aws_cloudwatch_event_rule.every_five_minutes.arn
}
Verbjorns Ljosa's answer only includes the permission for CloudWatch to invoke the Lambda. Have you specified the proper policy and IAM role that allow the Lambda to perform its actions?
resource "aws_iam_role" "check_foo_role" {
name="check-foo-assume-role"
assume_role_policy="assume_role_policy.json"
}
with assume_role_policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
and a policy referencing the IAM role above, i.e. something like:
resource "iam_role_policy" "check-foo-policy" {
name="check-foo-lambda-policy"
# referencing the iam role above
role="${aws_iam_role.check_foo_role.id}"
policy="check-foo-policy.json"
}
and finally the JSON specifying the policy, check-foo-policy.json:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "abc:SomeAction",
        "abc:AnotherAction"
      ],
      "Resource": "some-arn-matching-the-actions"
    }
  ]
}
Do note that you cannot specify a Resource restriction for the logs-related actions. "abc:SomeAction" might be, for example, "ssm:GetParameter", with an accompanying resource ARN like "arn:aws:ssm:us-east-1:${your-aws-account-id}:parameter/some/parameter/path/*".
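For example, an extra statement in check-foo-policy.json along those lines might look like this (the account ID and parameter path are hypothetical):
{
  "Effect": "Allow",
  "Action": ["ssm:GetParameter"],
  "Resource": "arn:aws:ssm:us-east-1:123456789012:parameter/some/parameter/path/*"
}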
As an addition to the accepted answer: oftentimes one would want the zip file for the Lambda to be created by Terraform as well. To do so, one can use the archive_file data source:
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "src"
output_path = "check_foo.zip"
}
resource "aws_lambda_function" "check_foo" {
filename = "check_foo.zip"
function_name = "checkFoo"
role = "arn:aws:iam::424242:role/something"
handler = "index.handler"
}
# then the rest from the accepted answer to trigger this
That is particularly helpful if the code is under version control, because you can then add check_foo.zip to the .gitignore and there can never be a mismatch between the zip file and the source code it is based on.

Terraform Failure configuring LB attributes

I've followed the first answer on this post on StackOverflow, but I get this error:
Failure configuring LB attributes: InvalidConfigurationRequest: Access Denied for bucket: myproject-log. Please check S3bucket permission status code: 400
This is my code:
s3_bucket
data "aws_elb_service_account" "main" {}
resource "aws_s3_bucket" "bucket_log" {
bucket = "${var.project}-log"
acl = "log-delivery-write"
policy = <<POLICY
{
"Id": "Policy",
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:PutObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::${var.project}-log/AWSLogs/*",
"Principal": {
"AWS": [
"${data.aws_elb_service_account.main.arn}"
]
}
}
]
}
POLICY
}
load balancer
resource "aws_lb" "vm_stage" {
name = "${var.project}-lb-stg"
internal = false
load_balancer_type = "application"
subnets = [aws_subnet.subnet_1.id, aws_subnet.subnet_2.id, aws_subnet.subnet_3.id]
security_groups = [aws_security_group.elb_project_stg.id]
access_logs {
bucket = aws_s3_bucket.bucket_log.id
prefix = "lb-stg"
enabled = true
}
tags = {
Name = "${var.project}-lb-stg"
}
}
Just going to drop this here, since it cross-applies to another question that was asked.
This took me a while to figure out, but the S3 bucket has two requirements per the documentation:
The bucket must be located in the same Region as the load balancer.
Amazon S3-Managed Encryption Keys (SSE-S3) is required. No other encryption options are supported.
Source: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html
While the error message makes it seem like a permissions issue, it may actually be an issue with the bucket having the wrong encryption type. In my case the issue was that my bucket was unencrypted.
I updated the bucket to SSE-S3 encryption and I no longer received the error:
resource "aws_s3_bucket" "s3_access_logs_bucket" {
bucket = var.access_logs_bucket_name
acl = "private"
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
versioning {
enabled = true
}
}
And just because, here's the policy I used:
data "aws_elb_service_account" "main" {}
data "aws_iam_policy_document" "s3_lb_write" {
statement {
principals {
identifiers = ["${data.aws_elb_service_account.main.arn}"]
type = "AWS"
}
actions = ["s3:PutObject"]
resources = [
"${aws_s3_bucket.s3_access_logs_bucket.arn}/*"
]
}
}
resource "aws_s3_bucket_policy" "load_balancer_access_logs_bucket_policy" {
bucket = aws_s3_bucket.s3_access_logs_bucket.id
policy = data.aws_iam_policy_document.s3_lb_write.json
}
Official AWS Docs
https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html
Solution
Reference the docs above and change your bucket policy to reflect what the documentation states. The logging is actually done by AWS, not by your roles or IAM users, so you need to give AWS permission to do this. That's why the docs show statements in the policy that specify the delivery.logs.amazonaws.com principal: that principal is the AWS log delivery service. Even though your bucket is hosted on AWS, AWS does not give its own services access to your bucket by default. You have to explicitly grant access to AWS if you want those services to work.
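A minimal Terraform sketch of such a statement (the bucket name here is a placeholder, and the exact statements your load balancer type and region require are in the docs linked above):
data "aws_iam_policy_document" "allow_log_delivery" {
  statement {
    effect    = "Allow"
    actions   = ["s3:PutObject"]
    resources = ["arn:aws:s3:::my-log-bucket/AWSLogs/*"]

    principals {
      type        = "Service"
      identifiers = ["delivery.logs.amazonaws.com"]
    }
  }
}

resource "aws_s3_bucket_policy" "allow_log_delivery" {
  bucket = "my-log-bucket" # placeholder: usually a reference to your aws_s3_bucket resource
  policy = data.aws_iam_policy_document.allow_log_delivery.json
}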
As per this post, I was able to resolve this issue by disabling KMS and using SSE-S3 for bucket encryption. Also, there are additional permissions listed in the AWS docs.
I struggled with this as well; the entire Terraform bucket policy that worked for me is below.
data "aws_iam_policy_document" "elb_bucket_policy" {
statement {
effect = "Allow"
resources = [
"arn:aws:s3:::unique-bucket-name/${local.prefix}/AWSLogs/${local.application_account_id}/*",
]
actions = ["s3:PutObject"]
principals {
type = "AWS"
identifiers = ["arn:aws:iam::${local.elb_account_id}:root"]
}
}
statement {
effect = "Allow"
resources = [
"arn:aws:s3:::unique-bucket-name/${local.prefix}/AWSLogs/${local.application_account_id}/*",
]
actions = ["s3:PutObject"]
principals {
type = "Service"
identifiers = ["logdelivery.elb.amazonaws.com"]
}
}
statement {
effect = "Allow"
resources = [
"arn:aws:s3:::unique-bucket-name/${local.prefix}/AWSLogs/${local.application_account_id}/*",
]
actions = ["s3:PutObject"]
principals {
type = "Service"
identifiers = ["logdelivery.elb.amazonaws.com"]
}
condition {
test = "StringEquals"
variable = "s3:x-amz-acl"
values = ["bucket-owner-full-control"]
}
}
}

Terraform error setting S3 bucket tags: InvalidTag: The TagValue you have provided is invalid status code: 400

I have managed to make my Terraform loop through all of my buckets, creating an IAM user and a bucket:
resource "aws_s3_bucket" "aws_s3_buckets" {
count = "${length(var.s3_bucket_name)}"
bucket = "${var.s3_bucket_name[count.index]}"
acl = "private"
tags = {
Name = "${var.s3_bucket_name[count.index]}"
Environment = "live"
policy = <<POLICY
{
"Id": "Policy1574607242703",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1574607238413",
"Action": [
"s3:PutObject"
],
"Effect": "Allow",
"Resource": {
"arn:aws:s3:::"."${var.s3_bucket_name[count.index]}"."/*"}
},
"Principal": {
"AWS": [
"${var.s3_bucket_name[count.index]}"}
]}
}
]
}
POLICY
}
}
I'm getting "error setting S3 bucket tags: InvalidTag: The TagValue you have provided is invalid status code: 400". Is there a way to create policies like this, or have I done something incorrect in my code?
The error is because the policy section is not part of the tags argument; it is a separate argument within the aws_s3_bucket resource. You can also use the aws_s3_bucket_policy resource to create the bucket policy (see the sketch after the list below).
Note: There are quite a few issues with the policy itself. You would have to fix them for the policy to go through fine. Some of the issues are:
"arn:aws:s3:::"."${var.s3_bucket_name[count.index]}"."/*"} -- string concatenation like this should not be inside the JSON; interpolate the variable directly into a single string.
There are some curly braces that are not aligned properly (some extra curly braces).
The principal should be an IAM resource (an IAM user, an IAM role, an account, or *), not a bucket name.
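A rough sketch of what the corrected configuration could look like, with the policy moved out of tags and attached through aws_s3_bucket_policy (the principal ARN is a placeholder; it must be an IAM user, role, or account):
resource "aws_s3_bucket" "aws_s3_buckets" {
  count  = length(var.s3_bucket_name)
  bucket = var.s3_bucket_name[count.index]
  acl    = "private"

  tags = {
    Name        = var.s3_bucket_name[count.index]
    Environment = "live"
  }
}

resource "aws_s3_bucket_policy" "aws_s3_bucket_policies" {
  count  = length(var.s3_bucket_name)
  bucket = aws_s3_bucket.aws_s3_buckets[count.index].id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid      = "AllowPutObject"
      Effect   = "Allow"
      Action   = ["s3:PutObject"]
      Resource = "arn:aws:s3:::${var.s3_bucket_name[count.index]}/*"
      Principal = {
        # placeholder principal: replace with the IAM user/role that should write to the bucket
        AWS = "arn:aws:iam::123456789012:user/example-user"
      }
    }]
  })
}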

AWS IAM Policies to connect AWS Cloudwatch Logs, Kinesis Firehose, S3 and ElasticSearch

I am trying to stream AWS CloudWatch logs to ES via Kinesis Firehose. The Terraform code below is giving an error. Any suggestions?
The error is:
aws_cloudwatch_log_subscription_filter.test_kinesis_logfilter: 1 error(s) occurred:
aws_cloudwatch_log_subscription_filter.test_kinesis_logfilter: InvalidParameterException: Could not deliver test message to specified Firehose stream. Check if the given Firehose stream is in ACTIVE state.
resource "aws_s3_bucket" "bucket" {
bucket = "cw-kinesis-es-bucket"
acl = "private"
}
resource "aws_iam_role" "firehose_role" {
name = "firehose_test_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "firehose.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_elasticsearch_domain" "es" {
domain_name = "firehose-es-test"
elasticsearch_version = "1.5"
cluster_config {
instance_type = "t2.micro.elasticsearch"
}
ebs_options {
ebs_enabled = true
volume_size = 10
}
advanced_options {
"rest.action.multi.allow_explicit_index" = "true"
}
access_policies = <<CONFIG
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "es:*",
"Principal": "*",
"Effect": "Allow",
"Condition": {
"IpAddress": {"aws:SourceIp": ["xxxxx"]}
}
}
]
}
CONFIG
snapshot_options {
automated_snapshot_start_hour = 23
}
tags {
Domain = "TestDomain"
}
}
resource "aws_kinesis_firehose_delivery_stream" "test_stream" {
name = "terraform-kinesis-firehose-test-stream"
destination = "elasticsearch"
s3_configuration {
role_arn = "${aws_iam_role.firehose_role.arn}"
bucket_arn = "${aws_s3_bucket.bucket.arn}"
buffer_size = 10
buffer_interval = 400
compression_format = "GZIP"
}
elasticsearch_configuration {
domain_arn = "${aws_elasticsearch_domain.es.arn}"
role_arn = "${aws_iam_role.firehose_role.arn}"
index_name = "test"
type_name = "test"
}
}
resource "aws_iam_role" "iam_for_lambda" {
name = "iam_for_lambda"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_cloudwatch_log_subscription_filter" "test_kinesis_logfilter" {
name = "test_kinesis_logfilter"
role_arn = "${aws_iam_role.iam_for_lambda.arn}"
log_group_name = "loggorup.log"
filter_pattern = ""
destination_arn = "${aws_kinesis_firehose_delivery_stream.test_stream.arn}"
}
In this configuration you are directing Cloudwatch Logs to send log records to Kinesis Firehose, which is in turn configured to write the data it receives to both S3 and ElasticSearch. Thus the AWS services you are using talk to each other like this: Cloudwatch Logs → Kinesis Firehose → S3 and ElasticSearch.
In order for one AWS service to talk to another the first service must assume a role that grants it access to do so. In IAM terminology, "assuming a role" means to temporarily act with the privileges granted to that role. An AWS IAM role has two key parts:
The assume role policy, that controls which services and/or users may assume the role.
The policies controlling what the role grants access to. This decides what a service or user can do once it has assumed the role.
Two separate roles are needed here. One role will grant Cloudwatch Logs access to talk to Kinesis Firehose, while the second will grant Kinesis Firehose access to talk to both S3 and ElasticSearch.
For the rest of this answer, I will assume that Terraform is running as a user with full administrative access to an AWS account. If this is not true, it will first be necessary to ensure that Terraform is running as an IAM principal that has access to create and pass roles.
Access for Cloudwatch Logs to Kinesis Firehose
In the example given in the question, the aws_cloudwatch_log_subscription_filter has a role_arn referring to a role whose assume_role_policy is for AWS Lambda, so Cloudwatch Logs does not have access to assume this role.
To fix this, the assume role policy can be changed to use the service name for Cloudwatch Logs:
resource "aws_iam_role" "cloudwatch_logs" {
name = "cloudwatch_logs_to_firehose"
assume_role_policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "logs.us-east-1.amazonaws.com"
},
"Effect": "Allow",
"Sid": "",
},
],
})
}
The above permits the Cloudwatch Logs service to assume the role. Now the role needs an access policy that permits writing to the Firehose Delivery Stream:
resource "aws_iam_role_policy" "cloudwatch_logs" {
role = aws_iam_role.cloudwatch_logs.name
policy = jsonencode({
"Statement": [
{
"Effect": "Allow",
"Action": ["firehose:*"],
"Resource": [aws_kinesis_firehose_delivery_stream.test_stream.arn],
},
],
})
}
The above grants the Cloudwatch Logs service access to call into any Kinesis Firehose action as long as it targets the specific delivery stream created by this Terraform configuration. This is more access than is strictly necessary; for more information, see Actions and Condition Context Keys for Amazon Kinesis Firehose.
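If you want to narrow that down, the same role policy could grant only the record-writing actions instead of firehose:* (a sketch; verify the exact actions required against the documentation linked above):
resource "aws_iam_role_policy" "cloudwatch_logs" {
  role = aws_iam_role.cloudwatch_logs.name

  policy = jsonencode({
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "firehose:PutRecord",
          "firehose:PutRecordBatch"
        ],
        "Resource": [aws_kinesis_firehose_delivery_stream.test_stream.arn]
      }
    ]
  })
}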
To complete this, the aws_cloudwatch_log_subscription_filter resource must be updated to refer to this new role:
resource "aws_cloudwatch_log_subscription_filter" "test_kinesis_logfilter" {
name = "test_kinesis_logfilter"
role_arn = aws_iam_role.cloudwatch_logs.arn
log_group_name = "loggorup.log"
filter_pattern = ""
destination_arn = aws_kinesis_firehose_delivery_stream.test_stream.arn
# Wait until the role has required access before creating
depends_on = aws_iam_role_policy.cloudwatch_logs
}
Unfortunately due to the internal design of AWS IAM, it can often take several minutes for a policy change to come into effect after Terraform submits it, so sometimes a policy-related error will occur when trying to create a new resource using a policy very soon after the policy itself was created. In this case, it's often sufficient to simply wait 10 minutes and then run Terraform again, at which point it should resume where it left off and retry creating the resource.
Access for Kinesis Firehose to S3 and Amazon ElasticSearch
The example given in the question already has an IAM role with a suitable assume role policy for Kinesis Firehose:
resource "aws_iam_role" "firehose_role" {
name = "firehose_test_role"
assume_role_policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "firehose.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
})
}
The above grants Kinesis Firehose access to assume this role. As before, this role also needs an access policy to grant users of the role access to the target S3 bucket:
resource "aws_iam_role_policy" "firehose_role" {
role = aws_iam_role.firehose_role.name
policy = jsonencode({
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:*"],
"Resource": [aws_s3_bucket.bucket.arn]
},
{
"Effect": "Allow",
"Action": ["es:ESHttpGet"],
"Resource": ["${aws_elasticsearch_domain.es.arn}/*"]
},
{
"Effect": "Allow",
"Action": [
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:*:*:log-group:*:log-stream:*"
]
},
],
})
}
The above policy allows Kinesis Firehose to perform any action on the created S3 bucket, any action on the created ElasticSearch domain, and to write log events into any log stream in Cloudwatch Logs. The final part of this is not strictly necessary, but is important if logging is enabled for the Firehose Delivery Stream, or else Kinesis Firehose is unable to write logs back to Cloudwatch Logs.
Again, this is more access than strictly necessary. For more information on the specific actions supported, see the following references:
Action and Context Keys for Amazon S3
Grant Firehose Access to an Amazon Elasticsearch Service Destination
Since this single role has access to write to both S3 and to ElasticSearch, it can be specified for both of these delivery configurations in the Kinesis Firehose delivery stream:
resource "aws_kinesis_firehose_delivery_stream" "test_stream" {
name = "terraform-kinesis-firehose-test-stream"
destination = "elasticsearch"
s3_configuration {
role_arn = aws_iam_role.firehose_role.arn
bucket_arn = aws_s3_bucket.bucket.arn
buffer_size = 10
buffer_interval = 400
compression_format = "GZIP"
}
elasticsearch_configuration {
domain_arn = aws_elasticsearch_domain.es.arn
role_arn = aws_iam_role.firehose_role.arn
index_name = "test"
type_name = "test"
}
# Wait until access has been granted before creating the firehose
# delivery stream.
depends_on = [aws_iam_role_policy.firehose_role]
}
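As a side note on the logging mentioned earlier: if you want Firehose to report delivery errors back to Cloudwatch Logs, that is enabled with a cloudwatch_logging_options block inside the destination configuration. A sketch of how the elasticsearch_configuration block above might look with it enabled (the log group and stream names are hypothetical and must exist separately):
  elasticsearch_configuration {
    domain_arn = aws_elasticsearch_domain.es.arn
    role_arn   = aws_iam_role.firehose_role.arn
    index_name = "test"
    type_name  = "test"

    cloudwatch_logging_options {
      enabled         = true
      log_group_name  = "/aws/kinesisfirehose/terraform-kinesis-firehose-test-stream"
      log_stream_name = "ElasticsearchDelivery"
    }
  }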
With all of the above wiring complete, the services should have the access they need to connect the parts of this delivery pipeline.
This same general pattern applies to any connection between two AWS services. The important information needed for each case is:
The service name for the service that will initiate the requests, such as logs.us-east-1.amazonaws.com or firehose.amazonaws.com. These are unfortunately generally poorly documented and hard to find, but can usually be found in policy examples within each service's user guide.
The names of the actions that need to be granted. The full set of actions for each service can be found in AWS Service Actions and Condition Context Keys for Use in IAM Policies. Unfortunately again the documentation for specifically which actions are required for a given service-to-service integration is generally rather lacking, but in simple environments (notwithstanding any hard regulatory requirements or organizational policies around access) it usually suffices to grant access to all actions for a given service, using the wildcard syntax used in the above examples.

Terraform ELB S3 Permissions Issue

I am having an issue using Terraform (v0.9.2) adding services to an ELB (I'm using: https://github.com/segmentio/stack/blob/master/s3-logs/main.tf).
When I run terraform apply I get this error:
* module.solr.module.elb.aws_elb.main: 1 error(s) occurred:
* aws_elb.main: Failure configuring ELB attributes:
    InvalidConfigurationRequest: Access Denied for bucket: my-service-logs. Please check S3bucket permission
    status code: 409, request id: xxxxxxxxxx-xxxx-xxxx-xxxxxxxxx
My service looks like this:
module "solr" {
source = "github.com/segmentio/stack/service"
name = "${var.prefix}-${terraform.env}-solr"
environment = "${terraform.env}"
image = "123456789876.dkr.ecr.eu-west-2.amazonaws.com/my-docker-image"
subnet_ids = "${element(split(",", module.vpc_subnets.private_subnets_id), 3)}"
security_groups = "${module.security.apache_solr_group}"
port = "8983"
cluster = "${module.ecs-cluster.name}"
log_bucket = "${module.s3_logs.id}"
iam_role = "${aws_iam_instance_profile.ecs.id}"
dns_name = ""
zone_id = "${var.route53_zone_id}"
}
My s3-logs bucket looks like this:
module "s3_logs" {
source = "github.com/segmentio/stack/s3-logs"
name = "${var.prefix}"
environment = "${terraform.env}"
account_id = "123456789876"
}
I checked in S3 and the bucket policy looks like this:
{
  "Version": "2012-10-17",
  "Id": "log-bucket-policy",
  "Statement": [
    {
      "Sid": "log-bucket-policy",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789876:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-service-logs/*"
    }
  ]
}
As far as I can see ELB should have access to the S3 bucket to store the logs (it's running in the same AWS account).
The bucket and the ELB are all in eu-west-2.
Any ideas on what the problem could be would be much appreciated.
The docs for ELB access logs say that you need to allow a specific Amazon-owned account to write to S3, not your own account.
As such you want something like:
{
  "Id": "Policy1429136655940",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1429136633762",
      "Action": [
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-loadbalancer-logs/my-app/AWSLogs/123456789012/*",
      "Principal": {
        "AWS": [
          "652711504416"
        ]
      }
    }
  ]
}
In Terraform you can use the aws_elb_service_account data source to automatically fetch the account ID used for writing logs as can be seen in the example in the docs:
data "aws_elb_service_account" "main" {}
resource "aws_s3_bucket" "elb_logs" {
bucket = "my-elb-tf-test-bucket"
acl = "private"
policy = <<POLICY
{
"Id": "Policy",
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:PutObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-elb-tf-test-bucket/AWSLogs/*",
"Principal": {
"AWS": [
"${data.aws_elb_service_account.main.arn}"
]
}
}
]
}
POLICY
}
resource "aws_elb" "bar" {
name = "my-foobar-terraform-elb"
availability_zones = ["us-west-2a"]
access_logs {
bucket = "${aws_s3_bucket.elb_logs.bucket}"
interval = 5
}
listener {
instance_port = 8000
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}
}
Even when I had everything set up per the docs, I kept getting the "Access Denied for bucket" error. Removing the encryption from the bucket worked for me.
In the bucket policy, the account number must NOT be yours. Instead it belongs to AWS, and for each region the account numbers you should use in your bucket policy are listed at: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html#attach-bucket-policy
For instance, for us-east-1 region the account number is 127311923021.
Although the question is about Terraform, I'll post a CloudFormation snippet that creates a bucket for the ELB's access logs along with its bucket policy:
MyAccessLogsBucket:
  Type: AWS::S3::Bucket
  DeletionPolicy: Retain

MyAllowELBAccessBucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: !Ref MyAccessLogsBucket
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: "Allow"
          Principal:
            AWS: "arn:aws:iam::127311923021:root"
          Action:
            - "s3:PutObject"
          Resource: !Sub "arn:aws:s3:::${MyAccessLogsBucket}/AWSLogs/*"
In the Principal, 127311923021 is used because that is the AWS account number which should be used for us-east-1.
Bucket permissions
When you enable access logging, you must specify an S3 bucket for the access logs. The bucket must meet the following requirements.
Requirements
The bucket must be located in the same Region as the load balancer.
Amazon S3-Managed Encryption Keys (SSE-S3) is required. No other encryption options are supported.
The bucket must have a bucket policy that grants Elastic Load Balancing permission to write the access logs to your bucket. Bucket policies are a collection of JSON statements written in the access policy language to define access permissions for your bucket. Each statement includes information about a single permission and contains a series of elements.
Use one of the following options to prepare an S3 bucket for access logging.
Amazon S3-Managed Encryption Keys (SSE-S3) is required. No other encryption options are supported.
So the AWS docs say KMS is not supported...
In my case, it was the request_payer option set to "Requester". It needs to be set to "BucketOwner" to work.
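For reference, a minimal sketch of that setting in Terraform (on older AWS provider versions request_payer is an argument on the bucket itself; newer provider versions expose it through the separate aws_s3_bucket_request_payment_configuration resource):
resource "aws_s3_bucket" "access_logs" {
  bucket        = "my-access-logs-bucket" # placeholder name
  request_payer = "BucketOwner"           # "Requester" breaks ELB/ALB log delivery
}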