I want to move a huge amount of data from another organization into my organization.
I created an S3 bucket named srikanth-poc-can-be-deleted.
Under the Access column, this bucket shows as "Public", while all my other buckets show "Bucket and objects not public" (i.e. I disabled the "Block all public access" option under "Block public access"). I also defined the bucket policy below.
Question: Under the bucket I have one folder, 'upload_here', and I want to share this folder's URL so that anybody can upload files into it. However, it's not working as expected. When I enter the folder URL in the browser, an empty file with the name of the folder is downloaded and nothing else happens. I was expecting it to open the folder so that others could place their files in there. Could you please let me know what the issue is?
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::srikanth-poc-can-be-deleted/*"
        },
        {
            "Sid": "Statement2",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::srikanth-poc-can-be-deleted"
        }
    ]
}
If you want to copy data in Amazon S3 between AWS accounts, you should use one of the methods below. They ensure that your buckets, and your data, are kept private at all times.
Using source credentials
If you are using credentials from the source account:
Grant the IAM User permission to read from the source bucket and write to the destination bucket
Add a Bucket Policy on the destination bucket in the other account that grants access to the IAM User from the source account (similar to your policy above, but specifying the source IAM User as the Principal)
In the destination bucket, make sure ACLs are disabled so that the destination account 'owns' the objects
Use the AWS CLI to copy the objects, using the IAM User credentials
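For example, assuming a CLI profile named source-account that holds the source IAM User's credentials (the profile and bucket names here are placeholders), the copy could look like:
aws s3 sync s3://source-bucket s3://destination-bucket --profile source-account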
Using destination credentials
If you are using credentials from the destination account:
Grant the IAM User permission to read from the source bucket and write to the destination bucket
Add a Bucket Policy on the source bucket that grants access to the IAM User from the destination account (similar to your policy above, but specifying the destination IAM User as the Principal)
Use the AWS CLI to copy the objects, using the IAM User credentials
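As a sketch of the last two steps: the bucket policy on the source bucket might look like the following (the account ID, user name, and bucket names are placeholders):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::DESTINATION-ACCOUNT-ID:user/copy-user"
            },
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::source-bucket",
                "arn:aws:s3:::source-bucket/*"
            ]
        }
    ]
}
The copy itself then runs under the destination credentials, e.g.:
aws s3 sync s3://source-bucket s3://destination-bucket --profile destination-account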
Related
I have created an S3 bucket, which we will call mytest-bucket, where I am trying to grant access to the bucket and its objects to an IAM user at a different company, not within my organization. The user, whom we will call Bob, has given me their account ID, IAM username, and canonical ID. I've done the following to attempt to grant Bob access:
1) I have set the bucket policy for mytest-bucket as follows:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111111111111:root",
                    "arn:aws:iam::111111111111:user/Bob"
                ]
            },
            "Action": [
                "s3:*",
                "s3:ListBucket",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::mytest-bucket",
                "arn:aws:s3:::mytest-bucket/*"
            ]
        }
    ]
}
2) I have configured the individual Block Public Access settings for this bucket.
3) I have also granted List and Write object permissions to the external account using the canonical ID provided, as well as Read and Write bucket ACL permissions. For Object Ownership, ACLs are enabled and can be used to grant access to this bucket and its objects.
Yet Bob is still unable to 1) see the bucket listed under his account, and 2) access any objects or the bucket itself, due to an Access Denied error.
Is there something I can change in the above configuration to provide Bob access to this one bucket and its objects?
How can I help him get access?
Edit: Bob will not be uploading objects, but only reading and downloading objects from this bucket.
You say that "Bob is unable to see the bucket listed under his account". This is normal: the bucket does not belong to his account, so it will not be listed when he uses the S3 management console. However, Bob should be able to access it when using the AWS CLI, such as:
aws s3 ls s3://mytest-bucket
If Bob really wants to see it in the console, he can 'cheat' by using a URL that will show the bucket, but Bob would need to paste the URL directly rather than going through the bucket hierarchy. To demonstrate, here is a URL that would normally show a bucket:
https://us-east-1.console.aws.amazon.com/s3/buckets/mytest-bucket
You can change the bucket name at the end to 'jump' directly to a desired bucket.
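Since the edit notes that Bob will only be reading and downloading, he can also fetch individual objects with the CLI once the bucket policy is in place (the object key here is just an example):
aws s3 cp s3://mytest-bucket/example.jpg .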
I am looking for an S3 bucket policy that will allow all IAM roles and users from a different account to download files from a bucket located in my AWS account.
Thanks for the help.
You can apply object-level permissions to another account via a bucket policy.
By using the root of the account as the principal, every IAM entity in that account is able to interact with the bucket using the permissions in your bucket policy.
An example bucket policy using the root of the account is below.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Example permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AccountB-ID:root"
            },
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::awsexamplebucket1"
            ]
        }
    ]
}
More information is available in the Bucket owner granting cross-account bucket permissions documentation.
For that, you would need to provide cross-account access to the objects in your buckets by giving the IAM role or user in the second account permission to download (GetObject) objects from the bucket in question.
The following AWS post provides details on how to define the IAM policy:
https://aws.amazon.com/premiumsupport/knowledge-center/cross-account-access-s3/
In your case, you just need the GetObject permission.
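Putting that together, a minimal read-only bucket policy for every IAM role and user in the other account could look like this (AccountB-ID and the bucket name are placeholders):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAccountBGetObject",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AccountB-ID:root"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::awsexamplebucket1/*"
        }
    ]
}
Because the principal is the account root, the IAM roles and users in the other account also need an identity policy on their side that allows s3:GetObject on this bucket.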
First, I have full access to all my S3 buckets (I have administrator permissions).
After playing with my S3 bucket policy, I now have a problem: I cannot view or edit anything in my bucket, and I get an "Access Denied" error message.
It sounds like you have added a Deny rule on a Bucket Policy, which is overriding your Admin permissions. (Yes, it is possible to block access even for Administrators!)
In such a situation:
Log on as the "root" login (the one using an email address)
Delete the Bucket Policy
Fortunately, the account's "root" user always has full permissions. This is also why it should be used infrequently and its access should be well-protected (e.g. using Multi-Factor Authentication).
I hope you have S3 full access in your IAM role policies. Along with that, you need to set the access control list and the bucket policy to public.
A bucket policy like the one below:
{
    "Version": "2012-10-17",
    "Id": "Policy159838074858",
    "Statement": [
        {
            "Sid": "S3access",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::your-bucket-name/*"
        }
    ]
}
Here I just added read (GetObject) and update (PutObject) access to my S3 bucket in the Action section; if you need create and delete access, add those actions there.
You can try with
aws s3api delete-bucket-policy --bucket s3-bucket-name
Otherwise, sign in with root access and modify the policy.
I have two accounts (acc-1 and acc-2).
acc-1 hosts an API that handles file uploads into a bucket of acc-1 (let's call it upload). An upload triggers an SNS notification that kicks off image conversion or video transcoding. The resulting files are placed into another bucket in acc-1 (output), which again triggers an SNS notification. I then copy the files (as user api from acc-1) to their final bucket in acc-2 (content).
content bucket policy in acc-2
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<ACC_1_ID>:user/api"
            },
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::content/*"
        }
    ]
}
api user policy in acc-1
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::upload/*",
                "arn:aws:s3:::output/*",
                "arn:aws:s3:::content/*"
            ]
        }
    ]
}
I copy the files using the AWS SDK for Node.js, setting the ACL to bucket-owner-full-control so that users from acc-2 can access the copied files in content, although the api user from acc-1 is still the owner of the files.
This all works fine - files are stored in the content bucket with access for bucket-owner and the api user.
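For illustration only (the question uses the AWS SDK for Node.js), an equivalent copy with boto3 would look roughly like this, with the object keys made up for the example:
import boto3

# Runs under acc-1's api user; the ACL gives the acc-2 bucket owner
# full control of the copied object
s3 = boto3.client('s3')
s3.copy_object(
    Bucket='content',
    Key='folder1/test1.jpg',
    CopySource={'Bucket': 'output', 'Key': 'test1.jpg'},
    ACL='bucket-owner-full-control',
)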
Files from the content bucket are private for everyone else and should be served through a CloudFront distribution.
I created a new CloudFront distribution for web and used the following settings:
Origin Domain Name: content
Origin Path: /folder1
Restrict Bucket Access: yes
Origin Access Identity: create new identity
Grant Read Permissions on Bucket: yes, update bucket policy
This created a new Origin Access Identity and changed the bucket policy to:
content bucket policy afterwards
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<ACC_1_ID>:user/api"
            },
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::content/*"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <OAI_ID>"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::content/*"
        }
    ]
}
But trying to access files from the content bucket inside the folder1 folder isn't working when I use the CloudFront URL:
❌ https://abcdef12345.cloudfront.net/test1.jpg
This returns a 403 'Access denied'.
If I upload a file (test2.jpg) from acc-2 directly to content/folder1 and try to access it, it works ...!?
✅ https://abcdef12345.cloudfront.net/test2.jpg
Other than having different owners, test1.jpg and test2.jpg seem completely identical.
What am I doing wrong?
Unfortunately, this is the expected behavior. OAIs can't access objects owned (created) by a different account because bucket-owner-full-control uses an unusual definition of "full" that excludes bucket policy grants to principals outside your own AWS account -- and the OAI's canonical user is, technically, outside your AWS account.
If another AWS account uploads files to your bucket, that account is the owner of those files. Bucket policies only apply to files that the bucket owner owns. This means that if another account uploads files to your bucket, the bucket policy that you created for your OAI will not be evaluated for those files.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#private-content-granting-permissions-to-oai
As @Michael - sqlbot pointed out in his answer, this is the expected behavior.
A possible solution is to perform the copy to the final bucket using credentials from the acc-2 account, so the owner of the objects will be always the acc-2. There are at least 2 options for doing that:
1) Use Temporary Credentials and AssumeRole AWS STS API: you create an IAM Role in acc-2 with enough permissions to perform the copy to the content bucket (PutObject and PutObjectAcl), then from the acc-1 API you call AWS STS AssumeRole for getting temporary credentials by assuming the IAM Role, and perform the copy using these temporary access keys.
This is the most secure approach.
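A minimal sketch of option 1 with boto3, run from acc-1 (the role name and object keys are hypothetical; the role in acc-2 needs s3:PutObject on content plus read access to the source bucket in acc-1):
import boto3

# Exchange acc-1 credentials for temporary acc-2 credentials
sts = boto3.client('sts')
creds = sts.assume_role(
    RoleArn='arn:aws:iam::<ACC_2_ID>:role/content-copy-role',  # hypothetical role
    RoleSessionName='copy-to-content',
)['Credentials']

# This client acts as acc-2, so acc-2 owns whatever it writes
s3 = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
s3.copy_object(
    Bucket='content',
    Key='folder1/test1.jpg',
    CopySource={'Bucket': 'output', 'Key': 'test1.jpg'},
)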
2) Use Access Keys: you could create an IAM User in acc-2, generate regular access keys for it, and hand those keys over to acc-1, so that acc-1 uses those "permanent" credentials to perform the copy.
Distributing access keys across AWS accounts is not a good idea from a security standpoint, and AWS discourages you from doing so, but it's certainly possible. It can also be a problem from a maintainability point of view, as acc-1 must store the access keys very safely and acc-2 should rotate them somewhat frequently.
The solution to this has two steps.
Run the command below using the source account credentials:
aws s3api put-object-acl --bucket bucket_name --key object_name --acl bucket-owner-full-control
Run the command below using the destination account credentials:
aws s3 cp s3://object_path s3://object_path --metadata-directive COPY
My solution uses the S3 PutObject event and a Lambda function.
When acc-1 puts an object, the S3 PutObject event fires and the object is overwritten by acc-2's Lambda function, which makes acc-2 the owner.
This is my program (Python3).
import boto3
from urllib.parse import unquote_plus

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        # Object keys in S3 event records are URL-encoded
        key = unquote_plus(record['s3']['object']['key'])
        filename = '/tmp/tmpfile'
        # Download and re-upload the object so that acc-2 (the account
        # running this Lambda) becomes the object's owner.
        # Note: the re-upload fires another PutObject event, so guard
        # against an infinite loop in production.
        s3_client.download_file(bucket, key, filename)
        s3_client.upload_file(filename, bucket, key)
I have made my Amazon S3 bucket public by going to its Permissions tab and setting public access for Everyone to:
List objects
Write objects
List bucket permissions
Write bucket permissions
There is now an orange "Public" label on the bucket.
But when I go into the bucket, click on one of the images stored there, and click on the Link it provides, I get Access Denied. The link looks like this:
https://s3.eu-central-1.amazonaws.com/[bucket-name]/images/36d03456fcfaa06061f.jpg
Why is it still unavailable despite setting the bucket's permissions to public?
You either need to set object-level permissions ("Read object") on each object that you want to be available to the internet,
or you can use a bucket policy to grant this more broadly, so you don't have to reset the permissions on each upload:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::www.example.com/*"
        }
    ]
}
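A policy like this can be applied from the command line as well (the bucket name and policy file path are placeholders):
aws s3api put-bucket-policy --bucket www.example.com --policy file://policy.json
Note that the bucket's "Block all public access" setting must be disabled first, otherwise S3 rejects a public policy like this one.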