I was able to restrict access to private content in my bucket using CloudFront, but now Elemental MediaConvert is unable to read from the bucket. Is there any way to allow only the MediaConvert service and restrict everything else?
Here is my bucket policy:
{
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3U7X28UWXXXXX"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::myawsbucket5696/*"
        }
    ]
}
Any help is appreciated. Thank you.
The 3403 error is 'HTTP Access Forbidden'. MediaConvert cannot read that file. Is it perhaps owned by a user other than the bucket owner? The role within your account that MediaConvert assumes when running jobs on your behalf is subject to whatever access restrictions exist on objects in your source S3 bucket.
You can test and debug this file access outside of MediaConvert by assuming the designated role in your AWS console and then using the CloudShell prompt. Use the s3api command to attempt to get metadata about the object in question; this should succeed if your role has permission to touch the object. For example:
aws s3api head-object --bucket mynewbucket --key myfile.mov
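If you prefer scripting that check, here is a minimal boto3 sketch of the same idea (the role ARN, bucket and key are placeholders for your own values):

import boto3

# Assume the role that MediaConvert uses for jobs, then try to read the
# object's metadata with those temporary credentials.
sts = boto3.client('sts')
creds = sts.assume_role(
    RoleArn='arn:aws:iam::111111111111:role/MediaConvertJobRole',  # placeholder
    RoleSessionName='mediaconvert-access-check',
)['Credentials']

s3 = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)

# Raises a ClientError (403/404) if the role cannot read the object.
print(s3.head_object(Bucket='mynewbucket', Key='myfile.mov'))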
FYI you can see all MediaConvert error codes at https://docs.aws.amazon.com/mediaconvert/latest/ug/mediaconvert_error_codes.html
First, I have full access to all my S3 buckets (I have administrator permissions).
After playing with my S3 bucket policy, I'm getting a problem where I cannot view or edit anything in my bucket, and I get an "Access Denied" error message.
It sounds like you have added a Deny rule on a Bucket Policy, which is overriding your Admin permissions. (Yes, it is possible to block access even for Administrators!)
In such a situation:
Log on as the "root" login (the one using an email address)
Delete the Bucket Policy
Fortunately, the account's "root" user always has full permissions. This is also why it should be used infrequently and access should be well-protected (eg using Multi-Factor Authentication).
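If you prefer the CLI or an SDK over the console, removing the policy is a single call. A minimal boto3 sketch (it must be run with credentials that are still allowed to manage the bucket, e.g. the root user; the bucket name is a placeholder):

import boto3

s3 = boto3.client('s3')

# Deletes the bucket policy outright so your normal IAM permissions apply again.
s3.delete_bucket_policy(Bucket='my-bucket')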
I hope you have S3 bucket full access in your IAM role policies. Along with that, you need to set the Access Control List and the bucket policy to public.
Use a bucket policy like the one below:
{
    "Version": "2012-10-17",
    "Id": "Policy159838074858",
    "Statement": [
        {
            "Sid": "S3access",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::your-bucket-name/*"
        }
    ]
}
Here I just added read and update access to my S3 bucket in the Action section; if you need create and delete access, add those actions there.
You can try:
aws s3api delete-bucket-policy --bucket s3-bucket-name
Otherwise, log in with root access and modify the policy.
I have two accounts (acc-1 and acc-2).
acc-1 hosts an API that handles file uploads into a bucket of acc-1 (let's call it upload). An upload triggers an SNS notification to convert images or transcode videos. The resulting files are placed into another bucket in acc-1 (output), which again triggers an SNS notification. I then copy the files (as user api from acc-1) to their final bucket in acc-2 (content).
content bucket policy in acc-2
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<ACC_1_ID>:user/api"
            },
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::content/*"
        }
    ]
}
api user policy in acc-1
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::upload/*",
                "arn:aws:s3:::output/*",
                "arn:aws:s3:::content/*"
            ]
        }
    ]
}
I copy the files using the AWS SDK for Node.js, setting the ACL to bucket-owner-full-control so that users from acc-2 can access the copied files in content, although the api user from acc-1 is still the owner of the files.
This all works fine - files are stored in the content bucket with access for bucket-owner and the api user.
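(The copy itself boils down to something like the following; my real code uses the Node.js SDK, so take this boto3 version as an equivalent sketch with example key names.)

import boto3

s3 = boto3.client('s3')  # credentials of the acc-1 'api' user

# Copy from the output bucket in acc-1 to the content bucket in acc-2,
# granting the bucket owner (acc-2) full control over the new object.
s3.copy_object(
    Bucket='content',
    Key='folder1/test1.jpg',
    CopySource={'Bucket': 'output', 'Key': 'folder1/test1.jpg'},
    ACL='bucket-owner-full-control',
)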
Files from the content bucket are private for everyone else and should be served through a CloudFront distribution.
I created a new CloudFront distribution for web and used the following settings:
Origin Domain Name: content
Origin Path: /folder1
Restrict Bucket Access: yes
Origin Access Identity: create new identity
Grant Read Permissions on Bucket: yes, update bucket policy
This created a new Origin Access Identity and changed the bucket policy to:
content bucket policy afterwards
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<ACC_1_ID>:user/api"
            },
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::content/*"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <OAI_ID>"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::content/*"
        }
    ]
}
But trying to access files inside the folder1 folder of the content bucket doesn't work when I use the CloudFront URL:
❌ https://abcdef12345.cloudfront.net/test1.jpg
This returns a 403 'Access denied'.
If I upload a file (test2.jpg) from acc-2 directly to content/folder1 and try to access it, it works ...!?
✅ https://abcdef12345.cloudfront.net/test2.jpg
Other than having different owners, test1.jpg and test2.jpg seem completely identical.
What am I doing wrong?
Unfortunately, this is the expected behavior. OAIs can't access objects owned (created) by a different account because bucket-owner-full-control uses an unusual definition of "full" that excludes bucket policy grants to principals outside your own AWS account -- and the OAI's canonical user is, technically, outside your AWS account.
If another AWS account uploads files to your bucket, that account is the owner of those files. Bucket policies only apply to files that the bucket owner owns. This means that if another account uploads files to your bucket, the bucket policy that you created for your OAI will not be evaluated for those files.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#private-content-granting-permissions-to-oai
As @Michael - sqlbot pointed out in his answer, this is the expected behavior.
A possible solution is to perform the copy to the final bucket using credentials from the acc-2 account, so the owner of the objects will be always the acc-2. There are at least 2 options for doing that:
1) Use temporary credentials and the AWS STS AssumeRole API: you create an IAM role in acc-2 with enough permissions to perform the copy to the content bucket (PutObject and PutObjectAcl), then from the acc-1 API you call AWS STS AssumeRole to get temporary credentials by assuming that IAM role, and perform the copy using those temporary access keys (a boto3 sketch of this approach follows after option 2).
This is the most secure approach.
2) Use access keys: you could create an IAM user in acc-2, generate regular access keys for it, and hand those keys over to acc-1, so that acc-1 uses those "permanent" credentials to perform the copy.
Distributing access keys across AWS accounts is not a good idea from a security standpoint, and AWS discourages you from doing so, but it's certainly possible. It can also be a problem from a maintainability point of view: acc-1 must store the access keys very safely, and acc-2 should rotate them somewhat frequently.
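For illustration, here is a rough boto3 sketch of option 1. The role name and object keys are made up, and the acc-2 role would also need read access to the source bucket in acc-1 (which acc-1 can grant through the output bucket's policy):

import boto3

# acc-1 credentials (e.g. the api user) assume a role that lives in acc-2.
sts = boto3.client('sts')
creds = sts.assume_role(
    RoleArn='arn:aws:iam::<ACC_2_ID>:role/ContentWriter',  # hypothetical role name
    RoleSessionName='copy-to-content',
)['Credentials']

# This client now acts as acc-2, so acc-2 ends up owning the copied object.
s3_as_acc2 = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
s3_as_acc2.copy_object(
    Bucket='content',
    Key='folder1/test1.jpg',
    CopySource={'Bucket': 'output', 'Key': 'folder1/test1.jpg'},
)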
The solution to this consists of two steps.
Run the command below using the source account's credentials:
aws s3api put-object-acl --bucket bucket_name --key object_name --acl bucket-owner-full-control
Then run the command below using the destination account's credentials:
aws s3 cp s3://object_path s3://object_path --metadata-directive COPY
My solution uses an S3 PutObject event and a Lambda function.
When acc-1 puts an object, the S3 PutObject event fires and the object is overwritten by acc-2's Lambda, so acc-2 becomes the owner.
This is my program (Python 3):
import boto3
from urllib.parse import unquote_plus

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = unquote_plus(record['s3']['object']['key'])
        # Download and re-upload the object so that this account (acc-2,
        # which runs the Lambda) becomes the owner of the object.
        filename = '/tmp/tmpfile'
        s3_client.download_file(bucket, key, filename)
        s3_client.upload_file(filename, bucket, key)
This is the bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "statement1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::xxxxxxxxxxxx:user/userName"
            },
            "Action": "*",
            "Resource": "arn:aws:s3:::my-super-awesome-bucket-name-test/*"
        }
    ]
}
Using AWS CLI I am able to list the contents of the bucket:
aws s3 ls s3://my-super-awesome-bucket-name-test
2017-06-28 19:50:42 97 testFile.csv
However, I can't upload files:
aws s3 cp csv_sum.js s3://my-super-awesome-bucket-name-test/
upload failed: ./csv_sum.js to s3://my-super-awesome-bucket-name-test/csv_sum.js An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Is there something else I need to do to grant my IAM user access? I added the required credentials via aws configure.
This doesn't answer your specific question, but...
If you wish to grant Amazon S3 access to a specific IAM User, it is much better to assign a policy directly to the IAM User rather than adding them as a special-case on the S3 bucket policy.
You can similarly assign permissions to IAM Groups, and then any User who is assigned to that Group will inherit the permissions. You can even assign permissions for multiple S3 buckets this way, rather than having to modify several bucket policies.
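For example, attaching an inline policy straight to the user is a single call (a boto3 sketch; the policy name is made up and the actions are just the ones this question needs):

import json
import boto3

iam = boto3.client('iam')
iam.put_user_policy(
    UserName='userName',
    PolicyName='SuperAwesomeBucketAccess',  # hypothetical policy name
    PolicyDocument=json.dumps({
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Action': ['s3:ListBucket', 's3:GetObject', 's3:PutObject'],
            'Resource': [
                'arn:aws:s3:::my-super-awesome-bucket-name-test',
                'arn:aws:s3:::my-super-awesome-bucket-name-test/*'
            ]
        }]
    }),
)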
I have a static website hosted on S3, I have set all files to be public.
Also, I have an EC2 instance with nginx that acts as a reverse proxy and can access the static website, so S3 plays the role of the origin.
What I would like to do now is set all files on S3 to be private, so that the website can only be accessed by traffic coming from the nginx (EC2).
So far I have tried the following. I created and attached a new IAM role to the EC2 instance with
Policies Granting Permission: AmazonS3ReadOnlyAccess
And have rebooted the EC2 instance.
I then created a policy in my S3 bucket console > Permissions > Bucket Policy
{
    "Version": "xxxxx",
    "Id": "xxxxxxx",
    "Statement": [
        {
            "Sid": "xxxxxxx",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::XXXXXXXXXX:role/MyROLE"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::XXX-bucket/*"
        }
    ]
}
As principal I have set the ARN I got when I created the role for the EC2 instance.
"Principal": {
"AWS": "arn:aws:iam::XXXXXXXXXX:role/MyROLE"
},
However, this does not work; any help is appreciated.
If the Amazon EC2 instance with nginx is merely making generic web requests to Amazon S3, then the question becomes how to identify requests coming from nginx as 'permitted', while rejecting all other requests.
One method is to use a VPC Endpoint for S3, which allows direct communication from a VPC to Amazon S3 (rather than going out an Internet Gateway).
A bucket policy can then restrict access to the bucket such that it can only be accessed via that endpoint.
Here is a bucket policy from Example Bucket Policies for VPC Endpoints for Amazon S3:
The following is an example of an S3 bucket policy that allows access to a specific bucket, examplebucket, only from the VPC endpoint with the ID vpce-1a2b3c4d. The policy uses the aws:sourceVpce condition key to restrict access to the specified VPC endpoint.
{
    "Version": "2012-10-17",
    "Id": "Policy",
    "Statement": [
        {
            "Sid": "Access-to-specific-VPCE-only",
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::examplebucket",
                "arn:aws:s3:::examplebucket/*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:sourceVpce": "vpce-1a2b3c4d"
                }
            },
            "Principal": "*"
        }
    ]
}
So, the complete design would be:
Object ACL: Private only (remove any current public permissions)
Bucket Policy: As above
IAM Role: Not needed
Route Table configured for VPC Endpoint
Permissions in Amazon S3 can be granted in several ways:
Directly on an object (known as an Access Control List or ACL)
Via a Bucket Policy (which applies to the whole bucket, or a directory)
To an IAM User/Group/Role
If any of the above grants access, then the object can be accessed.
Your scenario requires the following configuration:
The ACL on each object should not permit public access
There should be no Bucket Policy
You should assign permissions in the Policy attached to the IAM Role
Whenever you have permissions relating to a User/Group/Role, it is better to assign the permission in IAM rather than on the Bucket. Use Bucket Policies for general access to all users.
The policy on the Role would be:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowBucketAccess",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket/*"
            ]
        }
    ]
}
This policy is directly applied to the IAM Role, so there is no need for a principal field.
Please note that this policy only allows GetObject -- it does not permit listing of buckets, uploading objects, etc.
You also mention that "I have set all files to be public". If you did this by making each individual object publicly readable, then anyone will still be able to access the objects. There are two ways to prevent this -- either remove the permissions from each object, or create a Bucket Policy with a Deny statement that stops access, but still permits the Role to get access.
That's starting to get a bit tricky and hard to maintain, so I'd recommend removing the permissions from each object. This can be done via the management console by editing the permissions on each object, or by using the AWS Command-Line Interface (CLI) with a command like:
aws s3 cp s3://my-bucket s3://my-bucket --recursive --acl private
This copies the files in-place but changes the access settings.
(I'm not 100% sure whether to use --acl private or --acl bucket-owner-full-control, so play around a bit.)
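If you'd rather not copy the objects onto themselves, a rough boto3 alternative is to reset each object's ACL directly (the bucket name is a placeholder):

import boto3

s3 = boto3.client('s3')

# Walk every object in the bucket and make its ACL private.
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='my-bucket'):
    for obj in page.get('Contents', []):
        s3.put_object_acl(Bucket='my-bucket', Key=obj['Key'], ACL='private')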
I have an EC2 instance attached with an IAM Role. That role has full s3 access. The aws cli work perfectly, and so does the meta-data curl check to get the temporary Access and Secret keys.
I have also read that when the Access and Secret keys are missing from the settings module, boto will automatically get the temporary keys from the meta-data url.
I however cannot access the css/js files stored on the bucket via the browser. When I add a bucket policy allowing a principal of *, everything works.
I tried the following policy:
{
    "Version": "2012-10-17",
    "Id": "PolicyNUM",
    "Statement": [
        {
            "Sid": "StmtNUM",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::account-id:role/my-role"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}
But all css/js are still getting 403's. What can I change to make it work?
Requests from your browser don't have the ability to send the required authorization headers, which boto is handling for you elsewhere. The bucket policy cannot determine the principal and is correctly denying the request.
Add another Sid that allows Principal * access to everything under /public, for instance.
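As a boto3 sketch of that suggestion (the bucket name and prefix are examples, and note that put_bucket_policy replaces the whole policy, so merge this statement into your existing one):

import json
import boto3

s3 = boto3.client('s3')

# Allow anyone to read objects under the public/ prefix only.
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Sid': 'PublicReadForPublicPrefix',
        'Effect': 'Allow',
        'Principal': '*',
        'Action': 's3:GetObject',
        'Resource': 'arn:aws:s3:::my-bucket/public/*',
    }],
}
s3.put_bucket_policy(Bucket='my-bucket', Policy=json.dumps(policy))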
The reason is that AWS is setting your files to binary/octet-stream.
Check this solution to handle it.