I created the policy below to prevent users from downloading files from a specific Amazon S3 bucket, but it also stopped them from running queries in Amazon Athena, which failed with a "Permission denied on S3 path: ..." error. Once I removed the policy, they were immediately able to run the query again. On the other hand, they can still read files in EMR Notebooks (PySpark), which is desirable.
How can I block files from being downloaded without compromising Athena's usage?
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket-name/*"
        }
    ]
}
The challenge here is in how Athena works. According to the Access to Amazon S3 docs:
users must have permission to access Amazon S3 buckets in order to query them with Athena.
When users interact with Athena, their permissions pass through Athena to determine what Athena can access. So the user fundamentally needs to have GetObject permission in order for Athena to be able to read the objects.
That said, one option would be to modify your S3 bucket policy to deny access if the caller is not actually Athena. You can do that using the aws:CalledVia condition key, which is present and contains athena.amazonaws.com when Athena makes requests on behalf of the IAM user (or role). For example:
"Condition": {
"StringNotEquals": {"aws:CalledVia": "athena.amazonaws.com"}
}
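A minimal sketch of what the full bucket-policy statement might look like (the Sid and the bucket name are placeholders; test before relying on it):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyGetObjectUnlessCalledViaAthena",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket-name/*",
            "Condition": {
                "StringNotEquals": {"aws:CalledVia": "athena.amazonaws.com"}
            }
        }
    ]
}

Because aws:CalledVia is absent on direct requests, StringNotEquals matches them and the deny blocks plain downloads, while requests made through Athena carry the key and escape the deny.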
We have been asked to upload a file to a client's S3 bucket; however, we do not have an AWS account (nor do we plan on getting one). What is the easiest way for the client to grant us access to their S3 bucket?
My recommendation would be for your client to create an IAM user for you that is used for the upload. Then, you will need to install the AWS CLI. On your client's side there will be a user whose only permission is to write to their bucket. This can be done pretty simply and will look something like:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::the-bucket-name/*",
                "arn:aws:s3:::the-bucket-name"
            ]
        }
    ]
}
I have not thoroughly tested the above permissions!
Then, on your side, after you install the AWS CLI, you need two files. They both live in the home directory of the user that runs your script. The first is $HOME/.aws/config, which contains something like:
[default]
output=json
region=us-west-2
You will need to ask them what AWS region the bucket is in. Next is $HOME/.aws/credentials. This will contain something like:
[default]
aws_access_key_id=the-access-key
aws_secret_access_key=the-secret-key-they-give-you
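Alternatively, instead of creating the two files by hand, running aws configure will prompt for the same four values and write both files for you:

aws configure
# AWS Access Key ID [None]: the-access-key
# AWS Secret Access Key [None]: the-secret-key-they-give-you
# Default region name [None]: us-west-2
# Default output format [None]: json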
They must give you the region, the access key, the secret key, and the bucket name. With all of this you can now run something like:
aws s3 cp local-file-name.ext s3://the-client-bucket/destination-file-name.ext
This will transfer the local file local-file-name.ext to the bucket the-client-bucket with the file name there of destination-file-name.ext. They may have a different path in the bucket.
To recap:
Client creates an IAM user that has very limited permissions. Only programmatic (API) access is needed, not console access.
You install the AWS CLI.
Client gives you the access key and secret key.
You configure the machine that does the transfers with the credentials.
You can now push files to the bucket.
You do not need an AWS account to do this.
I have two accounts: Account A and Account B.
I'm executing an Athena query in Account A and want to have the query results populated in an S3 bucket in Account B.
I've tested the script that does this countless times within a single account, so I know there are no issues with my code. The query history in Athena also indicates that my code has run successfully, so it must be a permissions issue.
I'm able to see an object containing a CSV file with the query results in Account B (as expected), but for some reason I cannot open or download it to view the contents. When I attempt to do so, I only see XML code that says:
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
Within the file properties, I see "Unknown Error" under the server-side encryption settings, and "You don't have permission to get object ACL" with a message about the s3:GetObjectAcl action not being allowed.
I've tried to give both Account A and Account B full S3 permissions as follows via the bucket policy in Account B:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "This is for Account A",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::iam-number-account-a:root"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-bucket-name",
                "arn:aws:s3:::my-bucket-name/*"
            ]
        },
        {
            "Sid": "This is for Account B",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::iam-number-account-b:root"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-bucket-name",
                "arn:aws:s3:::my-bucket-name/*"
            ]
        }
    ]
}
Some other bucket (Account B) configuration settings that may be contributing to my issue:
Default encryption: Disabled
Block public access: Off for everything
Object ownership: Bucket owner preferred
Access control list:
Bucket Owner - Account B: Objects (List, Write), Bucket ACL (Read, Write)
External Account - Account A: Objects (Write), Bucket ACL (Write)
If anyone can help identify my issue and what I need to fix, that'd be greatly appreciated. I've been struggling to find a solution for this for a few hours.
A common problem when creating objects in an Amazon S3 bucket belonging to a different AWS Account is that the object 'owner' remains the original Account. When copying objects in Amazon S3, this can be resolved by specifying ACL=bucket-owner-full-control.
However, this probably isn't possible when creating the file with Amazon Athena.
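For a plain S3 copy, that looks something like the following, run with credentials that can read the source and write to the destination (bucket names and keys are placeholders):

aws s3 cp s3://account-a-bucket/results/query-results.csv s3://account-b-bucket/results/query-results.csv --acl bucket-owner-full-control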
See these similar Stack Overflow questions:
How to ensure that Athena result S3 object with bucket-owner-full-control - Stack Overflow
AWS Athena: cross account write of CTAS query result - Stack Overflow
A few workarounds might be:
Write to an S3 bucket in Account A and use a Bucket Policy to grant Read access to Account B (a sketch of such a policy follows this list), or
Write to an S3 bucket in Account A and have S3 trigger an AWS Lambda function that copies the object to the bucket in Account B, while specifying ACL=bucket-owner-full-control, or
Grant access to the source data to an IAM User or Role in Account B, and run the Athena query from Account B, so that it is Account B writing to the 'output' bucket
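A minimal sketch of the bucket policy for the first workaround, granting Account B read access to a results bucket in Account A (the account ID and bucket name are placeholders):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAccountBReadResults",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::account-b-id:root"
            },
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::account-a-results-bucket",
                "arn:aws:s3:::account-a-results-bucket/*"
            ]
        }
    ]
}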
Note: CTAS queries have the bucket-owner-full-control ACL by default for cross-account writes via Athena.
I am trying to unload data from Redshift to S3 using iam_role. The UNLOAD command works fine as long as I am unloading data to an S3 bucket owned by the same account as the Redshift cluster.
However, if I try to unload data into an S3 bucket owned by another account, it doesn't work. I have tried the approaches mentioned in these tutorials:
Tutorial: Delegate Access Across AWS Accounts Using IAM Roles
Example: Bucket Owner Granting Cross-Account Bucket Permissions
However, I always get S3ServiceException:Access Denied,Status 403,Error AccessDenied,Rid
Has anyone done this before?
I got it to work. Here's what I did:
Created an IAM Role in Account A that has AmazonS3FullAccess policy (for testing)
Launched an Amazon Redshift cluster in Account A
Loaded data into the Redshift cluster
Test 1: Unload to a bucket in Account A -- success
Test 2: Unload to a bucket in Account B -- fail
Added a bucket policy to the bucket in Account B (see below)
Test 3: Unload to a bucket in Account B -- success!
This is the bucket policy I used:
{
    "Id": "Policy11",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PermitRoleAccess",
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::my-bucket",
                "arn:aws:s3:::my-bucket/*"
            ],
            "Principal": {
                "AWS": [
                    "arn:aws:iam::123456789012:role/Redshift-loader"
                ]
            }
        }
    ]
}
The Redshift-loader role was already associated with my Redshift cluster. This policy grants the role (that lives in a different AWS account) access to this S3 bucket.
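To double-check which roles are actually associated with the cluster, something like this should work (the cluster identifier is a placeholder):

aws redshift describe-clusters --cluster-identifier my-cluster --query 'Clusters[0].IamRoles'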
I solved it using access_key_id and secret_access_key instead of iam_role.
I uploaded several .zip files to my AWS S3 bucket a while back using the AWS CLI. Now I can't seem to download those files when using the following command:
aws s3 cp s3://mybucket/test.zip test2.zip
because it yields the following error:
A client error (403) occurred when calling the HeadObject operation: Forbidden
Completed 1 part(s) with ... file(s) remaining
How can I resolve this issue?
Edit:
Running the following command shows the existing bucket policy:
aws s3api get-bucket-policy --bucket mybucket
{
"Policy": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"Example permissions\",\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"arn:aws:iam::221234567819:root\"},\"Action\":[\"s3:ListBucket\",\"s3:GetBucketLocation\"],\"Resource\":\"arn:aws:s3:::mybucket\"},{\"Sid\":\"Examplepermissions\",\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"arn:aws:iam::221234567819:root\"},\"Action\":[\"s3:PutObject\",\"s3:AbortMultipartUpload\",\"s3:PutObjectAcl\",\"s3:GetObject\",\"s3:DeleteObject\",\"s3:GetObjectAcl\",\"s3:ListMultipartUploadParts\",\"s3:PutObjectAcl\"],\"Resource\":\"arn:aws:s3:::mybucket/*\"}]}"
}
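The policy comes back as an escaped JSON string; piping it through jq (if installed) makes it readable:

aws s3api get-bucket-policy --bucket mybucket --query Policy --output text | jq .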
This is most likely one of three causes:
1. One of your policies is not permitting you to read the resources (yes, it's possible to have write permissions but not read permissions), or
2. your client environment is no longer set up with the correct credentials, or
3. you don't have ListBucket permission and the file is not present (this returns 403 instead of 404, because without ListBucket you shouldn't be able to tell whether a file exists or not).
For 1, S3 objects can be secured by Bucket Policies, User Policies or ACLs, and there are complex interactions between the three. See http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html for more details.
If you update your question with details of relevant user polices, bucket policies and ACLs I could take a look and see if anything explains the symptom.
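In the meantime, two quick command-line checks may help narrow down causes 2 and 3 (the second requires ListBucket permission):

# Confirm which credentials the CLI is actually using (cause 2)
aws sts get-caller-identity

# Confirm the object key exists in the bucket (cause 3)
aws s3api list-objects-v2 --bucket mybucket --prefix test.zip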
Edit:
Reviewing the included bucket policy, it appears to be tied to the root principal. Are you using root credentials with the AWS CLI? If you are using an IAM user, you will need to modify the Principal (see http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-bucket-user-policy-specifying-principal-intro.html).
Add "arn:aws:s3:::your-bucket-name" together with "arn:aws:s3:::your-bucket-name/*" to Recourses in your policy. Also, you may need non-obvious "s3:ListBucket" permission and maybe some other permissions.
My policy that works for downloading files in lambda function:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::XXXXX-pics",
                "arn:aws:s3:::XXXXX-pics/*"
            ]
        }
    ]
}
It is attached to the Lambda function role. No bucket policy attached to the bucket was needed.
I wish to achieve the following
Create S3 bucket that contains EMR bootstrap script and config file
Apply policy to this bucket so that only EMR default roles can access it along with specific admin users
EMR bootstrap action runs when cluster starts that accesses S3 bucket to retrieve script and config file and execute on EMR nodes
Here is the policy I have applied to the S3 bucket. I am using a NotPrincipal element so it will deny access to everyone except the listed ARNs:
{
    "Id": "policy1",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "NotPrincipal": {
                "AWS": [
                    "arn:aws:iam::xxxxxxxxxxxx:user/user1@mydomain.com",
                    "arn:aws:iam::xxxxxxxxxxxx:user/user2@mydomain.com",
                    "arn:aws:iam::xxxxxxxxxxxx:root",
                    "arn:aws:iam::xxxxxxxxxxxx:role/EMR_DefaultRole",
                    "arn:aws:iam::xxxxxxxxxxxx:role/EMR_EC2_DefaultRole"
                ]
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::bucket-restricted-access",
                "arn:aws:s3:::bucket-restricted-access/*"
            ]
        }
    ]
}
I am then trying to create an EMR cluster via the C# AWS SDK, including a bootstrap action that runs a script from the following location:
s3://bucket-restricted-access/config/runscript.sh
However, as soon as the cluster starts, I get an error:
Terminated with errors - Access denied trying to read bootstrap action
file 's3://bucket-restricted-access/config/runscript.sh'
Is the EMR cluster using the assumed permissions from the EMR_EC2_DefaultRole role to try and retrieve the bootstrap action?
If not, is there another user/role that I need to add to the S3 bucket policy to fix the permissions issue?
The EMR cluster is launched with the security groups ElasticMapReduce-master and ElasticMapReduce-slave.
The access key and secret key that you use to create the EMR cluster should have permission to access the S3 bucket that contains your bootstrap script.