Condition in a bucket policy to only allow a specific service

I'm looking for a bucket policy in which a whole account, identified by the principal 'arn:aws:iam::000000000000:root', is allowed to write to my bucket.
I now want to add a condition that gives only Firehose, as a service, the ability to write to my bucket.
My current ideas were:
{
  "Sid": "AllowWriteViaFirehose",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::000000000000:root"
  },
  "Action": "s3:Put*",
  "Resource": "arn:aws:s3:::my-bucket/*",
  "Condition": {
    #*#
  }
}
where #*# stands for the specific condition.
I already tried a few things, such as:
{"IpAddress": {"aws:SourceIp": "firehose.amazonaws.com"}}
I thought the requests would come from a Firehose endpoint at AWS, but apparently they don't :-/
"Condition": {"StringLike": {"aws:PrincipalArn": "*Firehose*"}}
I thought this would work, since the role Firehose uses to write files should have a session name containing something like 'firehose'. But it didn't work either.
Any idea how to get this working?
Thanks
Ben

Do not create a bucket policy.
Instead, assign the desired permission to an IAM Role and assign the role to your Kinesis Firehose.
See: Controlling Access with Amazon Kinesis Data Firehose - Amazon Kinesis Data Firehose
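For reference, a minimal sketch of the S3 portion of such a role policy, assuming a bucket named my-bucket (the full permission list is in the documentation linked above):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "FirehoseS3Delivery",
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
The role's trust policy must allow firehose.amazonaws.com to assume it, so the bucket never needs to trust the whole account.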

This answer is for the situation where the destination S3 bucket is in a different account.
From AWS Developer Forums: Kinesis Firehose cross account write to S3, the method is:
Create cross-account roles in Account B and enable a trust relationship so that Account A can assume Account B's role.
Add a bucket policy in Account B that allows Account A to write records into Account B's bucket (a sketch follows the JSON file below).
Map Account B's S3 bucket to the Firehose; the Firehose had to be created pointing to a temporary bucket, and AWS CLI commands were then used from Account A to update the destination to the bucket in Account B.
CLI Command:
aws firehose update-destination --delivery-stream-name MyDeliveryStreamName --current-delivery-stream-version-id 1 --destination-id destinationId-000000000001 --extended-s3-destination-update file://MyFileName.json
MyFileName.json looks like the one below:
{
  "BucketARN": "arn:aws:s3:::MyBucketname",
  "Prefix": ""
}
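For the bucket-policy step above, a minimal sketch of what the Account B policy might look like, assuming a hypothetical Firehose delivery role in Account A named firehose-delivery-role (account ID is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountFirehoseDelivery",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:role/firehose-delivery-role"
      },
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::MyBucketname",
        "arn:aws:s3:::MyBucketname/*"
      ]
    }
  ]
}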

Related

Allowing permission to Generate a policy based on CloudTrail events where the selected Trail logs events in an S3 bucket in another account

I have an AWS account (Account A) with CloudTrail enabled and logging management events to an S3 'logs' bucket in another, dedicated logs account (Account B, which I also own).
The logging part works fine, but I'm now trying (and failing) to use the 'Generate policy based on CloudTrail events' tool in the IAM console (under the Users > Permissions tab) in Account A.
This is supposed to read the CloudTrail logs for a given user/region/no. of days, identify all of the actions the user performed, then generate a sample IAM security policy to allow only those actions, which is great for setting up least privilege policies etc.
When I first ran the generator, it created a new service role to assume in the same account (Account A): AccessAnalyzerMonitorServiceRole_ABCDEFGHI
When I selected the CloudTrail trail to analyse, it (correctly) identified that the trail logs are stored in an S3 bucket in another account, and displayed this warning message:
Important: Verify cross-account access is configured for the selected trail. The selected trail logs events in an S3 bucket in another account. The role you choose or create must have read access to the bucket in that account to generate a policy. Learn more.
Attempting to run the generator at this stage fails after a short amount of time, and if you hover over the 'Failed' status in the console you see the message:
Incorrect permissions assigned to access CloudTrail S3 bucket. Please fix before trying again.
Makes sense, but actually giving read access to the S3 bucket to the automatically generated AccessAnalyzerMonitorServiceRole_ABCDEFGHI is where I'm now stuck!
I'm relatively new to AWS, so I might have done something dumb or be missing something obvious, but I'm trying to give the automatically generated role in Account A permission to the S3 bucket by adding to the bucket policy attached to the S3 logs bucket in Account B. I've added the extract below to the existing bucket policy (which is just the standard policy for a CloudTrail logs bucket, extended to allow CloudTrail in Account A to write logs to it as well):
"Sid": "IAMPolicyGeneratorRead",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::1234567890:role/service-role/AccessAnalyzerMonitorServiceRole_ABCDEFGHI"
},
"Action": [
"s3:GetObject",
"s3:GetObjectVersion",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::aws-cloudtrail-logs-ABCDEFGHI",
"arn:aws:s3:::aws-cloudtrail-logs-ABCDEFGHI/*"
]
}
Any suggestions how I can get this working?
Turns out I just needed to follow the steps described here: https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-generation.html#access-analyzer-policy-generation-cross-account in the section 'Generate a policy using AWS CloudTrail data in another account', specifically the 'Object Ownership' settings, in addition to changing my bucket policy to match the example.
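In case it helps anyone else, the Object Ownership setting can also be changed from the CLI. A sketch using the bucket name from above (the exact ownership value to use is spelled out in the linked documentation; BucketOwnerPreferred here is illustrative):
aws s3api put-bucket-ownership-controls \
  --bucket aws-cloudtrail-logs-ABCDEFGHI \
  --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerPreferred}]'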

Unable to view results in S3 bucket after executing Athena query in different account?

I have two accounts: Account A and Account B.
I'm executing an Athena query in Account A and want to have the query results populated in an S3 bucket in Account B.
I've tested the script that does this countless times within a single account, so I know there are no issues with my code. The query history in Athena also indicates that my code ran successfully, so it must be a permissions issue.
I'm able to see an object containing a CSV file with the query results in Account B (as expected) but for some reason cannot open or download it to view the contents. When I attempt to do so, I only see XML code that says:
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
Within the file properties, I see "Unknown Error" under the server-side encryption settings, and "You don't have permission to get object ACL", with a message about the s3:GetObjectAcl action not being allowed.
I've tried to give both Account A and Account B full S3 permissions as follows via the bucket policy in Account B:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "This is for Account A",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::iam-number-account-a:root"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket-name",
        "arn:aws:s3:::my-bucket-name/*"
      ]
    },
    {
      "Sid": "This is for Account B",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::iam-number-account-b:root"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket-name",
        "arn:aws:s3:::my-bucket-name/*"
      ]
    }
  ]
}
Some other bucket (Account B) configuration settings that may be contributing to my issue:
Default encryption: Disabled
Block public access: Off for everything
Object ownership: Bucket owner preferred
Access control list:
Bucket Owner - Account B: Objects (List, Write), Bucket ACL (Read, Write)
External Account - Account A: Objects (Write), Bucket ACL (Write)
If anyone can help identify my issue and what I need to fix, that'd be greatly appreciated. I've been struggling to find a solution for this for a few hours.
A common problem when creating objects in an Amazon S3 bucket belonging to a different AWS Account is that the object 'owner' remains the original Account. When copying objects in Amazon S3, this can be resolved by specifying ACL=bucket-owner-full-control.
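For example, a copy that hands ownership to the bucket owner might look like this (bucket names and key are placeholders):
aws s3 cp s3://account-a-bucket/results/file.csv s3://account-b-bucket/results/file.csv --acl bucket-owner-full-control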
However, this probably isn't possible when creating the file with Amazon Athena.
See other similar StackOverflow questions:
How to ensure that Athena result S3 object with bucket-owner-full-control - Stack Overflow
AWS Athena: cross account write of CTAS query result - Stack Overflow
A few workarounds might be:
Write to an S3 bucket in Account A and use a Bucket Policy to grant Read access to Account B, or
Write to an S3 bucket in Account A and have S3 trigger an AWS Lambda function that copies the object to the bucket in Account B, while specifying ACL=bucket-owner-full-control (see the sketch after this list), or
Grant access to the source data to an IAM User or Role in Account B, and run the Athena query from Account B, so that it is Account B writing to the 'output' bucket
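A minimal sketch of the Lambda in the second workaround, assuming it is subscribed to S3 object-created events on the Account A bucket and that the destination bucket name is hypothetical:
import urllib.parse

import boto3

s3 = boto3.client("s3")
DESTINATION_BUCKET = "account-b-bucket"  # hypothetical bucket in Account B

def handler(event, context):
    # Invoked by S3 ObjectCreated events on the staging bucket in Account A
    for record in event["Records"]:
        source_bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Copy to Account B, handing object ownership to the bucket owner
        s3.copy_object(
            CopySource={"Bucket": source_bucket, "Key": key},
            Bucket=DESTINATION_BUCKET,
            Key=key,
            ACL="bucket-owner-full-control",
        )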
Note that CTAS queries have the bucket-owner-full-control ACL by default for cross-account writes via Athena.

Need help to deny S3 bucket creation without specific Tags

I want to create an IAM policy that only allows the "Test" user to create an S3 bucket when "Name" and "Bucket" tags are supplied at creation time, but I'm not able to get it working.
I have tried the policy below, but even with the specified condition, the user is not able to create a bucket at all.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Deny",
      "Action": "s3:CreateBucket",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestTag/Name": "Bucket"
        }
      }
    }
  ]
}
Thanks in advance.
The Actions, resources, and condition keys for Amazon S3 - Service Authorization Reference documentation page lists the conditions that can be applied to the CreateBucket command.
Tags are not included in this list. Therefore, it is not possible to restrict the CreateBucket command based on tags being specified with the command.
@pop I believe you can't do this using an IAM policy or an SCP, because by design the S3 tagging call is made as a subsequent API call after CreateBucket. So your IAM policy would prevent creation of the S3 bucket itself even if you had added the tag. This is by design for the S3 service, compared to other AWS services.
The only option, in my opinion, would be a post-deployment action, i.e. an event-driven model where you use S3 events to take actions (delete the bucket, add an access-blocking bucket policy, etc.) based on how a bucket was created.
As John Rotenstein pointed out, it is not possible (yet, at least) to explicitly deny this, but there are a few options people use for it, since this type of tagging policy is common in many organizations.
Compliance Reports
You can use the AWS Config service to detect S3 bucket resources that are out-of-compliance. You can define your tagging policy for S3 Buckets with a Config rule.
This will not prevent users from creating buckets but it will provide a way to audit your accounts and also be proactively notified.
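A sketch of such a rule using the AWS managed rule required-tags, scoped to S3 buckets (the rule name and tag keys here are assumptions):
aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "s3-bucket-required-tags",
  "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
  "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
  "InputParameters": "{\"tag1Key\": \"Name\", \"tag2Key\": \"Bucket\"}"
}'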
Auto-remediation
If you want a bucket to be auto-deleted or flagged, you can create a Lambda function that is triggered by CloudTrail events when buckets are created.
The Lambda could be implemented to check the tags and, if the bucket is non-compliant, try and delete the bucket or mark it for deletion via some other process you define.
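A rough sketch of that check, assuming the function is wired to the CloudTrail CreateBucket event via EventBridge and that the required tag keys are Name and Bucket:
import boto3

s3 = boto3.client("s3")
REQUIRED_TAGS = {"Name", "Bucket"}  # assumed required tag keys

def handler(event, context):
    # EventBridge delivers the CloudTrail event; the bucket name is in the request parameters
    bucket = event["detail"]["requestParameters"]["bucketName"]
    try:
        tag_keys = {tag["Key"] for tag in s3.get_bucket_tagging(Bucket=bucket)["TagSet"]}
    except s3.exceptions.ClientError:
        tag_keys = set()  # GetBucketTagging fails when the bucket has no tags
    if not REQUIRED_TAGS.issubset(tag_keys):
        # Non-compliant: delete the (still empty) bucket, or flag it instead
        s3.delete_bucket(Bucket=bucket)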

AWS Firehose delivery to Cross Account Elasticsearch in VPC

I have an Elasticsearch domain inside a VPC, running in Account A.
I want to deliver logs from a Firehose in Account B to the Elasticsearch domain in Account A.
Is it possible?
When I try to create the delivery stream from the AWS CLI, I get the exception below:
$: /usr/local/bin/aws firehose create-delivery-stream --cli-input-json file://input.json --profile devops
An error occurred (InvalidArgumentException) when calling the CreateDeliveryStream operation: Verify that the IAM role has access to the ElasticSearch domain.
The same IAM role and the same input.json work when modified to point to the Elasticsearch domain in Account B. I have Transit Gateway connectivity enabled between the AWS accounts, and I can connect via telnet to the Elasticsearch domain in Account A from an EC2 instance in Account B.
Here is my complete Terraform code (I got the same exception from the AWS CLI and from Terraform):
https://gist.github.com/karthikeayan/a67e93b4937a7958716dfecaa6ff7767
It looks like you haven't granted sufficient permissions to the role that is used when creating the stream (from the CLI example provided, I'm guessing it's a role named 'devops'). At a minimum you will need firehose:CreateDeliveryStream.
I suggest adding the permissions below to your role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:CreateDeliveryStream",
        "firehose:UpdateDestination"
      ],
      "Resource": "*"
    }
  ]
}
https://forums.aws.amazon.com/message.jspa?messageID=943731
I have been informed on the AWS forums that this feature is currently not supported for domains inside a VPC:
You can set up Kinesis Data Firehose and its dependencies, such as Amazon Simple Storage Service (Amazon S3) and Amazon CloudWatch, to stream across different accounts. Streaming data delivery works for publicly accessible OpenSearch Service clusters, whether or not fine-grained access control (FGAC) is enabled.
https://aws.amazon.com/premiumsupport/knowledge-center/kinesis-firehose-cross-account-streaming/
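For the publicly accessible case, part of that setup is letting the destination domain trust the Firehose delivery role from the other account. A minimal sketch of such a domain access policy, with placeholder region, account IDs, domain, and role name:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::222222222222:role/firehose-delivery-role"
      },
      "Action": "es:ESHttp*",
      "Resource": "arn:aws:es:us-east-1:111111111111:domain/my-domain/*"
    }
  ]
}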

Give a Redshift Cluster access to S3 bucket owned by another account

I am trying to unload data from Redshift to S3 using an iam_role. The UNLOAD command works fine as long as I am unloading data to an S3 bucket owned by the same account as the Redshift cluster.
However, if I try to unload data into an S3 bucket owned by another account, it doesn't work. I have tried the approaches mentioned in these tutorials:
Tutorial: Delegate Access Across AWS Accounts Using IAM Roles
Example: Bucket Owner Granting Cross-Account Bucket Permissions
However, I always get S3ServiceException:Access Denied,Status 403,Error AccessDenied,Rid
Has anyone done this before?
I got it to work. Here's what I did:
Created an IAM Role in Account A that has AmazonS3FullAccess policy (for testing)
Launched an Amazon Redshift cluster in Account A
Loaded data into the Redshift cluster
Test 1: Unload to a bucket in Account A -- success
Test 2: Unload to a bucket in Account B -- fail
Added a bucket policy to the bucket in Account B (see below)
Test 3: Unload to a bucket in Account B -- success!
This is the bucket policy I used:
{
  "Id": "Policy11",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PermitRoleAccess",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ],
      "Principal": {
        "AWS": [
          "arn:aws:iam::123456789012:role/Redshift-loader"
        ]
      }
    }
  ]
}
The Redshift-loader role was already associated with my Redshift cluster. This policy grants the role (that lives in a different AWS account) access to this S3 bucket.
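For reference, the UNLOAD itself keeps the usual iam_role form once the bucket policy is in place; a sketch with placeholder table, bucket, and role names:
UNLOAD ('SELECT * FROM my_table')
TO 's3://my-bucket/unload/'
IAM_ROLE 'arn:aws:iam::123456789012:role/Redshift-loader';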
I solved it using access_key_id and secret_access_key instead of iam_role.