InsufficientS3BucketPolicyFault when enabling AWS Redshift audit logging through Terraform

Problem
I'm trying to enable audit logging on an AWS Redshift cluster. I've been following the instructions provided by AWS here: https://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html#db-auditing-enable-logging
Current Configuration
I've defined the relevant IAM role as follows:
resource "aws_iam_role" "example-role" {
  name = "example-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "redshift.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
And have granted the following IAM permissions to the example-role role:
{
  "Sid": "AllowAccessForAuditLogging",
  "Effect": "Allow",
  "Action": [
    "s3:PutObject",
    "s3:GetBucketAcl"
  ],
  "Resource": [
    "arn:aws:s3:::example-bucket",
    "arn:aws:s3:::example-bucket/*"
  ]
},
The relevant portion of the redshift cluster configuration is as follows:
resource "aws_redshift_cluster" "example-cluster-name" {
  cluster_identifier = "example-cluster-name"
  ...

  # redshift audit logging to S3
  logging {
    enable      = true
    bucket_name = "example-bucket-name"
  }

  master_username = var.master_username
  iam_roles       = [aws_iam_role.example-role.arn]
  ...
}
Error
terraform plan runs correctly, and produces the expected plan based on the above configuration. However, when running terraform apply the following error occurs:
Error: error enabling Redshift Cluster (example-cluster-name) logging: InsufficientS3BucketPolicyFault: Cannot read ACLs of bucket example-bucket-name. Please ensure that your IAM permissions are set up correctly.
Note: I've replaced all resource names and identifiers with example-* placeholders.

@shimo's answer is correct. I'm just adding detail for anyone like me.
Redshift has full access to S3, but you also need to add a bucket policy (the permission on the S3 side):
{
  "Sid": "Statement1",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::361669875840:user/logs"
  },
  "Action": [
    "s3:GetBucketAcl",
    "s3:PutObject"
  ],
  "Resource": [
    "arn:aws:s3:::<your-bucket>",
    "arn:aws:s3:::<your-bucket>/*"
  ]
}
- `361669875840` is the Redshift service account ID and must match your bucket's region; check the mapping [here][1]
[1]: https://github.com/finos/compliant-financial-infrastructure/blob/main/aws/redshift/redshift_template_public.yml
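If the log bucket is also managed with Terraform, a minimal sketch of attaching that bucket policy could look like the following (the bucket name reuses the example-bucket-name placeholder from the question, and 361669875840 must be swapped for the Redshift service account of your region):

resource "aws_s3_bucket_policy" "redshift_audit_logs" {
  bucket = "example-bucket-name" # assumed: the audit log bucket from the question

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowRedshiftAuditLogging",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::361669875840:user/logs"
      },
      "Action": [
        "s3:GetBucketAcl",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::example-bucket-name",
        "arn:aws:s3:::example-bucket-name/*"
      ]
    }
  ]
}
EOF
}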

Related

AWS AccessDenied when calling sts:AssumeRole

I'm trying to allow a set of users in a group access to a role through which they can upload objects to an S3 bucket.
The group has the policy:
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::ACCOUNTID:role/Clinic_Sync"
}
}
The role "Clinic_Sync" has the policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "SyncReqs",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject",
"s3:PutObjectAcl"
],
"Resource": "arn:aws:s3:::*/*"
},
{
"Sid": "SyncReqs2",
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::*"
}
]
}
The bucket has the policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::ACCOUNTID:role/Clinic_Sync"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::mydata"
},
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::ACCOUNTID:role/Clinic_Sync"
},
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::mydata/*"
},
{
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::mydata",
"arn:aws:s3:::mydata/*"
],
"Condition": {
"StringNotLike": {
"aws:userId": [
"ADMINUSERID:*",
"ACCOUNTNO"
]
}
}
}
]
}
The idea being that no one can access the bucket except through assuming this role (other than the admin). I have created the credentials file as follows:
[default]
aws_access_key_id = ACCESSID1
aws_secret_access_key = SECRETKEY1
[csync]
role_arn = arn:aws:iam::ACCOUNTID:role/Clinic_Sync
source_profile = default
And the config file:
[default]
output = json
region = eu-west-2
[profile csync]
role_arn = arn:aws:iam::ACCOUNTID:role/Clinic_Sync
source_profile = default
The bucket policy seems to work, as running the command "aws s3 cp hello.txt s3://mydata" gives the error: Upload failed. An error occurred when calling the PutObject operation: Access Denied.
But when I try to use the role, using the command "aws s3 cp hello.txt s3://mydata --profile csync", it gives this error:
upload failed: .\hello.txt to s3://mydata/hello.txt An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::ACCOUNTID:user/TestAcc2 is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::ACCOUNTID:role/Clinic_Sync
I've been searching the web for an answer for ages and can't find one. The AWS documentation is frankly unintelligible to me. If anyone can help me find a solution, I'd much appreciate it, as I'm tearing my hair out here.
To reiterate, I just want the users in a particular group to have access to a role that grants them permission to use an s3 bucket, but block all other access to the bucket.
Your bucket policy seems to say: "Deny access to the bucket unless aws:userId is a given Admin User ID or Account Number." It does not reference the Role.
Therefore, accessing the bucket via the Role will be denied. This is because Deny always overrides Allow.
Writing policies with Deny can be quite difficult, as seen in this situation.
If you really want to keep a bucket secure, it is easier to put the bucket in a separate AWS Account and only grant cross-account access to the entities that should have access. This way, no Deny policy is required.
If you receive a not authorised to perform sts:AssumeRole error, make sure the Trust Policy grants access to users by selecting the Another AWS account option when creating the role. The policy should look similar to:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::111111111111:root"
},
"Action": "sts:AssumeRole",
"Condition": {}
}
]
}
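If you do want to keep the Deny statement, one common pattern is to add the role's unique principal ID to the StringNotLike exception list, since sessions of the role appear in aws:userId as "AROA...:session-name". A Terraform sketch of just that Deny statement, under the assumption that the role and bucket names from the question are used (the original Allow statements would sit alongside it unchanged):

data "aws_iam_role" "clinic_sync" {
  name = "Clinic_Sync"
}

resource "aws_s3_bucket_policy" "mydata" {
  bucket = "mydata" # assumed: the bucket from the question

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:*"
        Resource  = ["arn:aws:s3:::mydata", "arn:aws:s3:::mydata/*"]
        Condition = {
          StringNotLike = {
            "aws:userId" = [
              "${data.aws_iam_role.clinic_sync.unique_id}:*", # sessions of the Clinic_Sync role
              "ADMINUSERID:*",
              "ACCOUNTNO"
            ]
          }
        }
      }
    ]
  })
}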

Copy S3 object to another S3 location: Elastic Beanstalk SSH setup error

Getting this Elastic Beanstalk permission error when trying to do:
eb ssh --setup
2020-07-06 07:36:50 INFO Environment update is starting.
2020-07-06 07:36:53 ERROR Service:Amazon S3, Message:You don't have permission to copy an Amazon S3 object to another S3 location. Source: bucket = 'tempsource', key = 'xxx'. Destination: bucket = 'tempdest', key = 'yyy'.
2020-07-06 07:36:53 ERROR Failed to deploy configuration.
Is there a specific policy that I should be adding to my IAM permissions? I've tried adding full S3 access to my IAM user, but the error remains. Or is it a permissions error associated with the source bucket?
Some more details:
Both buckets are in the same AWS account. The copy also fails when using AWS CLI copy commands.
Bucket Policies
Source Bucket
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::XXXXXXXXXXXX:role/aws-elasticbeanstalk-ec2-role"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::SOURCE_BUCKET/*"
},
{
"Sid": "Stmt2",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::XXXXXXXXXXXX:role/aws-elasticbeanstalk-ec2-role"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::SOURCE_BUCKET"
}
]
}
Destination Bucket (elasticbeanstalk-us-west-2-XXXXXXXXXXXX)
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "eb-ad78f54a-f239-4c90-adda-49e5f56cb51e",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::XXXXXXXXXXXX:role/aws-elasticbeanstalk-ec2-role"
},
"Action": "s3:PutObject",
"Resource": [
"arn:aws:s3:::elasticbeanstalk-us-west-2-XXXXXXXXXXXX/*",
"arn:aws:s3:::elasticbeanstalk-us-west-2-XXXXXXXXXXXX/resources/environments/logs/*"
]
},
{
"Sid": "eb-af163bf3-d27b-4712-b795-d1e33e331ca4",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::XXXXXXXXXXXX:role/aws-elasticbeanstalk-ec2-role"
},
"Action": [
"s3:ListBucket",
"s3:ListBucketVersions",
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::elasticbeanstalk-us-west-2-XXXXXXXXXXXX",
"arn:aws:s3:::elasticbeanstalk-us-west-2-XXXXXXXXXXXX/resources/environments/*"
]
},
{
"Sid": "eb-58950a8c-feb6-11e2-89e0-0800277d041b",
"Effect": "Deny",
"Principal": {
"AWS": "*"
},
"Action": "s3:DeleteBucket",
"Resource": "arn:aws:s3:::elasticbeanstalk-us-west-2-XXXXXXXXXXXX"
}
]
}
I've tried adding full S3 access to my IAM User, but the error remains.
The error is not about your IAM permissions (i.e. your IAM user). It is about the role that EB is using for the instance (i.e. the instance role/profile):
Managing Elastic Beanstalk instance profiles
The default role used on the instances is aws-elasticbeanstalk-ec2-role. You can locate it in the IAM console and add the required S3 permissions. Depending on your setup, you may be using a different role.
Or is a permissions error associated with the source bucket?
If you have bucket policies that deny the access, that could also be the reason.
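If that instance role happens to be managed with Terraform, a hedged sketch of granting it read access to the source bucket could look like this (the policy name is made up, SOURCE_BUCKET is the placeholder from the question, and the actions should be scoped to whatever the deployment actually needs):

resource "aws_iam_role_policy" "eb_ec2_s3_access" {
  name = "eb-ec2-s3-access"              # assumed policy name
  role = "aws-elasticbeanstalk-ec2-role" # default EB instance role

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = ["s3:GetObject", "s3:ListBucket"]
        Resource = [
          "arn:aws:s3:::SOURCE_BUCKET",
          "arn:aws:s3:::SOURCE_BUCKET/*"
        ]
      }
    ]
  })
}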

Terraform: Issue with assume_role

I've been trying to solve this mystery for a few days now, but no joy. Basically, Terraform cannot assume the role and fails with:
Initializing the backend...
2019/10/28 09:13:09 [DEBUG] New state was assigned lineage "136dca1a-b46b-1e64-0ef2-efd6799b4ebc"
2019/10/28 09:13:09 [INFO] Setting AWS metadata API timeout to 100ms
2019/10/28 09:13:09 [INFO] Ignoring AWS metadata API endpoint at default location as it doesn't return any instance-id
2019/10/28 09:13:09 [INFO] AWS Auth provider used: "SharedCredentialsProvider"
2019/10/28 09:13:09 [INFO] Attempting to AssumeRole arn:aws:iam::72xxxxxxxxxx:role/terraform-admin-np (SessionName: "terra_cnp", ExternalId: "", Policy: "")
Error: The role "arn:aws:iam::72xxxxxxxxxx:role/terraform-admin-np" cannot be assumed.
There are a number of possible causes of this - the most common are:
* The credentials used in order to assume the role are invalid
* The credentials do not have appropriate permission to assume the role
* The role ARN is not valid
In AWS:
I have a role terraform-admin-np with two AWS managed policies, AmazonS3FullAccess and AdministratorAccess, and a trust relationship with this:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::72xxxxxxxxxx:root"
},
"Action": "sts:AssumeRole"
}
]
}
Then I have a user with this policy document attached:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "TfFullAccessSts",
"Effect": "Allow",
"Action": [
"sts:AssumeRole",
"sts:DecodeAuthorizationMessage",
"sts:AssumeRoleWithSAML",
"sts:AssumeRoleWithWebIdentity"
],
"Resource": "*"
},
{
"Sid": "TfFullAccessAll",
"Effect": "Allow",
"Action": "*",
"Resource": [
"*",
"arn:aws:ec2:region:account:network-interface/*"
]
}
]
}
and an S3 bucket txxxxxxxxxxxxxxte with this policy document attached:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "TFStateListBucket",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::72xxxxxxxxxx:root"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::txxxxxxxxxxxxxxte"
},
{
"Sid": "TFStateGetPutObject",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::72xxxxxxxxxx:root"
},
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::txxxxxxxxxxxxxxte/*"
}
]
}
In Terraform:
The snippet from the provider.tf:
###---- Default Backend and Provider config values -----------###
terraform {
required_version = ">= 0.12"
backend "s3" {
encrypt = true
}
}
provider "aws" {
region = var.region
version = "~> 2.20"
profile = var.profile
assume_role {
role_arn = var.role_arn
session_name = var.session_name
}
}
Snippet from tgw_cnp.tfvars backend config:
## S3 backend config
key = "backend/tgw_cnp_state"
bucket = "txxxxxxxxxxxxxxte"
region = "us-east-2"
profile = "local-tgw"
role_arn = "arn:aws:iam::72xxxxxxxxxx:role/terraform-admin-np"
session_name = "terra_cnp"
and then running it this way:
TF_LOG=debug terraform init -backend-config=tgw_cnp.tfvars
With this, I can assume role using AWS CLI without any issue:
# aws --profile local-tgw sts assume-role --role-arn "arn:aws:iam::72xxxxxxxxxx:role/terraform-admin-np" --role-session-name AWSCLI
{
"Credentials": {
"AccessKeyId": "AXXXXXXXXXXXXXXXXXXA",
"SecretAccessKey": "UixxxxxxxxxxxxxxxxxxxxxxxxxxxxMt",
"SessionToken": "FQoGZXIvYXdzEJb//////////wEaD......./5LFwNWf6riiNw9vtBQ==",
"Expiration": "2019-10-28T13:39:41Z"
},
"AssumedRoleUser": {
"AssumedRoleId": "AROA2P7ZON5TSWMOBQEBC:AWSCLI",
"Arn": "arn:aws:sts::72xxxxxxxxxx:assumed-role/terraform-admin-np/AWSCLI"
}
}
but Terraform fails with the above error. Any idea what I'm doing wrong?
Okay, answering my own question...
It works now. I had made a silly mistake: the region in tgw_cnp.tfvars was wrong, and I kept missing it. With the AWS CLI I didn't have to specify the region (it came from the profile), so it worked without any issue, but in Terraform I specified the region and the value was wrong, hence the failure. The suggestions in the error message were a bit misleading.
I can confirm the above config works fine. It's all good now.
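For reference, the fix amounts to making the region in tgw_cnp.tfvars match the region the state bucket actually lives in, for example (us-east-1 here is only an assumption; use your bucket's real region):

## S3 backend config
key          = "backend/tgw_cnp_state"
bucket       = "txxxxxxxxxxxxxxte"
region       = "us-east-1" # assumed: must match the state bucket's actual region
profile      = "local-tgw"
role_arn     = "arn:aws:iam::72xxxxxxxxxx:role/terraform-admin-np"
session_name = "terra_cnp"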

Terraform AWS Athena to use Glue catalog as db

I'm confused as to how I should use terraform to connect Athena to my Glue Catalog database.
I use
resource "aws_glue_catalog_database" "catalog_database" {
name = "${var.glue_db_name}"
}
resource "aws_glue_crawler" "datalake_crawler" {
database_name = "${var.glue_db_name}"
name = "${var.crawler_name}"
role = "${aws_iam_role.crawler_iam_role.name}"
description = "${var.crawler_description}"
table_prefix = "${var.table_prefix}"
schedule = "${var.schedule}"
s3_target {
path = "s3://${var.data_bucket_name[0]}"
}
s3_target {
path = "s3://${var.data_bucket_name[1]}"
}
}
to create a Glue DB and the crawler to crawl S3 buckets (here only two), but I don't know how to link the Athena query service to the Glue DB. In the Terraform documentation for Athena, there doesn't appear to be a way to connect Athena to a Glue catalog, only to an S3 bucket. Clearly, however, Athena can be integrated with Glue.
How can I terraform an Athena database to use my Glue catalog as its data source rather than an S3 bucket?
Our current basic setup for having Glue crawl one S3 bucket and create/update a table in a Glue DB, which can then be queried in Athena, looks like this:
Crawler role and role policy:
- The assume_role_policy of the IAM role needs only Glue as principal
- The IAM role policy allows actions for Glue, S3, and logs
- The Glue actions and resources can probably be narrowed down to the ones really needed
- The S3 actions are limited to those needed by the crawler
resource "aws_iam_role" "glue_crawler_role" {
name = "analytics_glue_crawler_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "glue.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role_policy" "glue_crawler_role_policy" {
name = "analytics_glue_crawler_role_policy"
role = "${aws_iam_role.glue_crawler_role.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"glue:*"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:ListBucket",
"s3:GetBucketAcl",
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::analytics-product-data",
"arn:aws:s3:::analytics-product-data/*"
],
},
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:*:*:/aws-glue/*"
]
}
]
}
EOF
}
S3 Bucket, Glue Database and Crawler:
resource "aws_s3_bucket" "product_bucket" {
bucket = "analytics-product-data"
acl = "private"
}
resource "aws_glue_catalog_database" "analytics_db" {
name = "inventory-analytics-db"
}
resource "aws_glue_crawler" "product_crawler" {
database_name = "${aws_glue_catalog_database.analytics_db.name}"
name = "analytics-product-crawler"
role = "${aws_iam_role.glue_crawler_role.arn}"
schedule = "cron(0 0 * * ? *)"
configuration = "{\"Version\": 1.0, \"CrawlerOutput\": { \"Partitions\": { \"AddOrUpdateBehavior\": \"InheritFromTable\" }, \"Tables\": {\"AddOrUpdateBehavior\": \"MergeNewColumns\" } } }"
schema_change_policy {
delete_behavior = "DELETE_FROM_DATABASE"
}
s3_target {
path = "s3://${aws_s3_bucket.product_bucket.bucket}/products"
}
}
I had many things wrong in my Terraform code. To start with:
- The S3 bucket argument in the aws_athena_database code refers to the bucket for query output, not the data the table should be built from.
- I had set up my aws_glue_crawler to write to a Glue database rather than an Athena db. Indeed, as Martin suggested above, once correctly set up, Athena was able to see the tables in the Glue db.
- I did not have the correct policies attached to my crawler. Initially, the only policy attached to the crawler role was
resource "aws_iam_role_policy_attachment" "crawler_attach" {
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole"
role = "${aws_iam_role.crawler_iam_role.name}"
}
After setting a second policy that explicitly allowed all S3 access to all of the buckets I wanted to crawl, and attaching that policy to the same crawler role, the crawler ran and updated tables successfully.
The second policy:
resource "aws_iam_policy" "crawler_bucket_policy" {
name = "crawler_bucket_policy"
path = "/"
description = "Gives crawler access to buckets"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1553807998309",
"Action": "*",
"Effect": "Allow",
"Resource": "*"
},
{
"Sid": "Stmt1553808056033",
"Action": "s3:*",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucket0"
},
{
"Sid": "Stmt1553808078743",
"Action": "s3:*",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucket1"
},
{
"Sid": "Stmt1553808099644",
"Action": "s3:*",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucket2"
},
{
"Sid": "Stmt1553808114975",
"Action": "s3:*",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucket3"
},
{
"Sid": "Stmt1553808128211",
"Action": "s3:*",
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucket4"
}
]
}
EOF
}
I'm confident that I can get away from hardcoding the bucket names in this policy, but I don't yet know how to do that.
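One way to avoid hardcoding the bucket names is to drive the policy from a list variable with a for expression and jsonencode (Terraform 0.12+). A sketch, where var.crawl_buckets is an assumed variable that does not exist in the original code:

variable "crawl_buckets" {
  type    = list(string)
  default = ["bucket0", "bucket1", "bucket2", "bucket3", "bucket4"]
}

resource "aws_iam_policy" "crawler_bucket_policy" {
  name        = "crawler_bucket_policy"
  path        = "/"
  description = "Gives crawler access to buckets"

  # Build one Allow statement per bucket instead of repeating them by hand.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      for bucket in var.crawl_buckets : {
        Effect   = "Allow"
        Action   = "s3:*"
        Resource = ["arn:aws:s3:::${bucket}", "arn:aws:s3:::${bucket}/*"] # bucket and object ARNs
      }
    ]
  })
}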

AWS CloudWatch Cross-Account Logging with EC2 Instance Profile

When I originally set up CloudWatch, I created an EC2 Instance Profile to automatically grant access to write to the account's own CloudWatch service. Now, I would like to consolidate the logs from several accounts into a central account.
I'd like to implement a simplified architecture that is based on Centralized Logging on AWS. However, these logs will feed an on-premise ELK stack, so I'm only trying to implement the components outlined in red. I would like to solve this without the use of Kinesis.
Either the CloudWatch Agent (CWAgent) doesn't support assuming a role or I can't wrap my mind around how to craft the EC2 Instance Profile to allow the CWAgent to assume a role in a different account.
Logging Target (AWS Account 111111111111)
IAM LogStreamerRole:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::999999999999:role/EC2CloudWatchLoggerRole"
]
},
"Action": "sts:AssumeRole",
"Condition": {}
}
]
}
The permissions policy attached to LogStreamerRole:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:*:*:*"
]
}
]
}
Logging Source (AWS Account 999999999999)
IAM Instance Profile Role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
The attached policy allowing it to assume LogStreamerRole:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::111111111111:role/LogStreamerRole"
}
]
}
The CWAgent is producing the following error:
/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log:
2018-02-12T23:27:43Z E! CreateLogStream / CreateLogGroup with log group name Linux/var/log/messages stream name i-123456789abcdef has errors. Will retry the request: AccessDeniedException: User: arn:aws:sts::999999999999:assumed-role/EC2CloudWatchLoggerRole/i-123456789abcdef is not authorized to perform: logs:CreateLogStream on resource: arn:aws:logs:us-west-2:999999999999:log-group:Linux/var/log/messages:log-stream:i-123456789abcdef
status code: 400, request id: 53271811-1234-11e8-afe1-a3c56071215e
It is still trying to write to its own CloudWatch service, instead of to the central CloudWatch service.
From the logs, I see that the instance profile is used.
arn:aws:sts::999999999999:assumed-role/EC2CloudWatchLoggerRole/i-123456789abcdef
Just add the following to /etc/awslogs/awscli.conf to assume the LogStreamerRole role:
role_arn = arn:aws:iam::111111111111:role/LogStreamerRole
credential_source = Ec2InstanceMetadata
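For completeness, if the target-account role were managed with Terraform, a minimal sketch of LogStreamerRole mirroring the JSON above might be (resource and policy names are assumptions):

resource "aws_iam_role" "log_streamer" {
  name = "LogStreamerRole"

  # Trust policy: allow the source account's EC2CloudWatchLoggerRole to assume this role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect    = "Allow"
        Principal = { AWS = "arn:aws:iam::999999999999:role/EC2CloudWatchLoggerRole" }
        Action    = "sts:AssumeRole"
      }
    ]
  })
}

resource "aws_iam_role_policy" "log_streamer_logs" {
  name = "log-streamer-logs" # assumed policy name
  role = aws_iam_role.log_streamer.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"]
        Resource = "arn:aws:logs:*:*:*"
      }
    ]
  })
}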