S3 Bucket Policy not accepting .io domain - amazon-web-services

I was trying to make an alias for my bucket, but I can't get the configuration right because S3 won't accept my bucket policy.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "qweewfewr",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mydomain.io/*"
        }
    ]
}
Error message:
Add a new policy or edit an existing bucket policy in the text area below. Learn more.
Policy has invalid resource - arn:aws:s3:::mcommerce.io/*

The error message seems pretty clear to me. The resource name you are using does not exist. You need to replace mydomain.io with the name of the bucket you are actually trying to open access to.
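For illustration only, if the bucket were actually named my-real-bucket (a placeholder, not a name taken from your account), the policy would read:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "qweewfewr",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-real-bucket/*"
        }
    ]
}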
As far as how to make an alias for the bucket, that has absolutely nothing to do with the bucket policy. The bucket policy simply tells S3 that it may serve the objects in the bucket to whoever asks for them. Making an alias is a different process altogether.
To make an alias, you need to open the Route 53 console and add a new A record for the domain name you are trying to use. Then you can point it at your bucket address https://s3.<region>.amazonaws.com/<bucketname>/ and voilà, you have an alias web address tied to the bucket. For a full tutorial check out this article.
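As a rough sketch only (the hosted zone ID, region, and endpoint below are illustrative placeholders, not values from your account), such an alias record can be created with aws route53 change-resource-record-sets using a change batch like this:
{
    "Changes": [
        {
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "mydomain.io",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z3AQBSTGFYJSTF",
                    "DNSName": "s3-website-us-east-1.amazonaws.com",
                    "EvaluateTargetHealth": false
                }
            }
        }
    ]
}
The HostedZoneId is the fixed zone ID that AWS publishes for each region's S3 website endpoint (the value shown is an example; look up the one for your region). Note that for an S3 static website alias, the bucket name has to match the record name exactly (mydomain.io in this sketch).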

Related

S3 Bucket access denied, even for Administrator

First, I have full access to all my S3 buckets (I have administrator permissions).
After playing with my S3 bucket policy, I now cannot view or edit anything in my bucket, and I get an "Access Denied" error message.
It sounds like you have added a Deny rule on a Bucket Policy, which is overriding your Admin permissions. (Yes, it is possible to block access even for Administrators!)
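For example, a statement along these lines (a hypothetical sketch, with my-bucket as a placeholder name) would lock out every principal, administrators included:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyEveryone",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-bucket",
                "arn:aws:s3:::my-bucket/*"
            ]
        }
    ]
}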
In such a situation:
Log on as the "root" login (the one using an email address)
Delete the Bucket Policy
Fortunately, the account's "root" user always has full permissions. This is also why it should be used infrequently and why access to it should be well-protected (e.g. with Multi-Factor Authentication).
I hope you have S3 full access in your IAM role policies. In addition, you need to set the Access Control List and the bucket policy so that the bucket is public. Use a bucket policy like the one below:
{
    "Version": "2012-10-17",
    "Id": "Policy159838074858",
    "Statement": [
        {
            "Sid": "S3access",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::your-bucket-name/*"
        }
    ]
}
Here I just added read and update access to my S3 bucket in the Action section; if you need create and delete access, add those actions there (a sketch follows below).
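For instance, a sketch of the same policy with object deletion added (your-bucket-name is still a placeholder; creating objects is already covered by s3:PutObject, while deleting them maps to s3:DeleteObject):
{
    "Version": "2012-10-17",
    "Id": "Policy159838074858",
    "Statement": [
        {
            "Sid": "S3access",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::your-bucket-name/*"
        }
    ]
}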
You can try:
aws s3api delete-bucket-policy --bucket s3-bucket-name
Otherwise, log in with root access and modify the policy.

Bucket policy to prevent bucket delete except by a specific role [duplicate]

I am looking for a bucket policy that allows only the root account user and the bucket creator to delete the bucket, something like the one below. Please suggest. How can I restrict deletion to only the bucket creator and root?
{
    "Version": "2012-10-17",
    "Id": "PutObjBucketPolicy",
    "Statement": [
        {
            "Sid": "Prevent bucket delete",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::xxxxxxx:root"
            },
            "Action": "s3:DeleteBucket",
            "Resource": "arn:aws:s3:::test-bucket-s3"
        },
        {
            "Sid": "Prevent bucket delete",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:DeleteBucket",
            "Resource": "arn:aws:s3:::test-bucket-s3"
        }
    ]
}
A Deny always beats an Allow. Therefore, with this policy, nobody would be allowed to delete the bucket. (I assume, however, that the root user would be able to do so, since it exists outside of IAM.)
There is no need to assign permissions to the root user, since it can always do anything.
Also, there is no concept of a "bucket creator". The bucket belongs to the account, not to a particular user.
Therefore:
Remove the Allow section (it does nothing); a sketch of the resulting policy follows this list
Test whether the policy prevents non-root users from deleting it
Test whether the policy still permits the root user to delete it
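A sketch of the simplified, Deny-only policy (the bucket name is kept from the question; the Sid is renamed because Sid values may only contain letters and numbers):
{
    "Version": "2012-10-17",
    "Id": "PutObjBucketPolicy",
    "Statement": [
        {
            "Sid": "PreventBucketDelete",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:DeleteBucket",
            "Resource": "arn:aws:s3:::test-bucket-s3"
        }
    ]
}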
There are two different types of permissions in S3:
Resource-based policies
User policies
Bucket policies and access control lists (ACLs) are resource-based, and they are attached to the bucket.
If all users are in the same AWS account, you can consider a user policy, which is attached to a user or role.
If you are dealing with multiple AWS accounts, bucket policies or ACLs are better.
The main difference is that bucket policies let you grant or deny access and apply to all objects in the bucket, while an ACL grants basic read or write permissions and cannot include conditional checks.
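For example, a minimal user (identity-based) policy sketch that could be attached to a user or role in the same account (my-bucket is a placeholder):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadWriteOnMyBucket",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}
Notice there is no Principal element; the principal is implied by whichever user or role the policy is attached to.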

File in Amazon S3 bucket denied after making bucket public

I have made my Amazon S3 bucket public by going to its Permissions tab and setting public access to everyone:
List objects
Write objects
List bucket permissions
Write bucket permissions
There is now an orange "Public" label on the bucket.
But when I go into the bucket, click on one of the images stored there, and click on the Link it provides, I get Access Denied. The link looks like this:
https://s3.eu-central-1.amazonaws.com/[bucket-name]/images/36d03456fcfaa06061f.jpg
Why is it still unavailable despite setting the bucket's permissions to public?
You either need to set object-level permissions (Read object) on each object that you want to be available to the internet,
or you can use a bucket policy to grant this more broadly, so you don't have to reset the permissions on each upload:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::www.example.com/*"
        }
    ]
}

AWS Bucket Policy Invalid Resource

I'm having some trouble with AWS bucket policies. I followed the instructions, but it doesn't let me set the policy, so I can't get my domain to work with the buckets.
Here is a picture. The tutorial told me to replace example.com with my bucket name.
I've been trying to set up my buckets with my domain for over a month now and I just can't seem to get it going. I already purchased my domain, and it's the exact domain name I want, so I don't want to be forced to go to Bluehost with a new domain.
It is quite simple:
Your bucket is called www.justdiditonline.com
Your bucket policy is attempting to create a rule for a bucket named justdiditonline.com
The bucket names do not match
Solution: Use a policy with the correct bucket name:
{
    "Id": "Policy1",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1",
            "Action": [
                "s3:GetObject"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::www.justdiditonline.com/*",
            "Principal": "*"
        }
    ]
}
I notice you have another bucket called justdiditonline.com. Your existing policy would work on that bucket.
The Setting Up a Static Website Using a Custom Domain instructions detail what to do, and they work fine with an external DNS service using a CNAME to point to the static website URL. The main steps are:
Create a bucket with the domain name www.justdiditonline.com
Add a bucket policy to make content public, or make sure the individual objects you want to serve are publicly readable
Activate Static Website Hosting on the bucket, which will return a website endpoint like: www.justdiditonline.com.s3-website-<region>.amazonaws.com (a configuration sketch follows this list)
Create a DNS entry for www.justdiditonline.com with a CNAME pointing to the Static Website Hosting URL
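For the static website hosting step, a sketch of the website configuration you could pass to aws s3api put-bucket-website (the index.html and error.html document names are assumptions; use your own pages):
{
    "IndexDocument": {
        "Suffix": "index.html"
    },
    "ErrorDocument": {
        "Key": "error.html"
    }
}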

Django Storage S3 bucket Access with IAM Role

I have an EC2 instance with an IAM role attached. That role has full S3 access. The AWS CLI works perfectly, and so does the metadata curl check to get the temporary access and secret keys.
I have also read that when the access and secret keys are missing from the settings module, boto will automatically get the temporary keys from the metadata URL.
I however cannot access the css/js files stored on the bucket via the browser. When I add a bucket policy allowing a principal of *, everything works.
I tried the following policy:
{
    "Version": "2012-10-17",
    "Id": "PolicyNUM",
    "Statement": [
        {
            "Sid": "StmtNUM",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::account-id:role/my-role"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}
But all CSS/JS files are still getting 403s. What can I change to make it work?
Requests from your browser don't have the ability to send the required authorization headers, which boto handles for you elsewhere. Since the bucket policy cannot determine the principal, it correctly denies the request.
Add another statement that allows Principal "*" access to everything under /public, for instance (a sketch follows below).
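A sketch of the policy with such a public-read statement added (the public/ prefix and the Sid names are assumptions; adjust them to wherever your static files actually live):
{
    "Version": "2012-10-17",
    "Id": "PolicyNUM",
    "Statement": [
        {
            "Sid": "StmtNUM",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::account-id:role/my-role"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::my-bucket/*"
        },
        {
            "Sid": "PublicReadStaticFiles",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket/public/*"
        }
    ]
}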
The reason is that AWS is setting your files' content type to binary/octet-stream. Check this solution to handle it.