Can't upload files to S3 with the AWS CLI - Access Denied

Here are the steps I've taken:
Create an S3 bucket and copy in the permission policy for public reads (see below)
Enable static website hosting and set the index document to index.html (which hasn't been uploaded yet)
Try to use the web console to upload a folder, but that's not supported on Linux
Run aws configure and enter my access key ID, secret access key, and region
Edit ~/.aws/config and add signature_version = s3v4 (leaving this out produces an error)
Try aws s3 sync . s3://my-music
See this error:
An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
Completed 1 part(s) with ... file(s) remaining
With no other info.
The bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-music/*"
    }
  ]
}
I have to say I don't know much about policies/IAM on AWS. I really just want to upload a static website and visit it; that shouldn't be too hard, right? Except the website has a lot of files, and I need to bulk-upload them somehow. If it makes any difference, I did create an IAM user, and those are the credentials I'm using for the access/secret keys.

I had failed to attach a policy to my new IAM user.
I visited https://console.aws.amazon.com/iam/home
and clicked the new user, which brought up a button to attach a policy.
I added the first policy there (AdministratorAccess), and that was all I needed to do.
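AdministratorAccess solves it, but it is very broad. For what it's worth, the ListObjects failure specifically means the user lacked s3:ListBucket, which aws s3 sync needs along with s3:PutObject. A minimal sketch of a narrower inline policy, attached with boto3 (the user name is hypothetical; the bucket is the one from the question):

import json
import boto3

iam = boto3.client("iam")

# Least-privilege alternative to AdministratorAccess for `aws s3 sync`:
# list the bucket, and write objects into it.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-music",
        },
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-music/*",
        },
    ],
}

iam.put_user_policy(
    UserName="my-upload-user",  # hypothetical user name
    PolicyName="s3-sync-my-music",
    PolicyDocument=json.dumps(policy),
)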

Related

aws-cdk s3:PutBucketPolicy Access Denied when deploying bucket with public read access

I am trying to set up a static website using an S3 bucket with the CDK. However, when I deploy the stack I receive the error API: s3:PutBucketPolicy Access Denied. The CLI user I am using has administrator permissions.
I have tried to manually create a bucket with the "Static website hosting" property configured, but when I add the following bucket policy, I receive an Access Denied error, even though I am the root user.
{
  "Id": "PolicyId",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Sid",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::BUCKET_NAME",
      "Principal": "*"
    }
  ]
}
Something similar to here.
I have deselected all the public access settings as suggested, but I still receive an Access Denied.
I believe the problem when deploying the CDK code may be related to the problem when creating the bucket manually, but I don't know how to debug it.
This worked for me:
// Imports (assumed CDK v1-style module layout; adjust to your CDK version)
import { Bucket } from '@aws-cdk/aws-s3';
import { BucketDeployment, Source } from '@aws-cdk/aws-s3-deployment';

// Create the web bucket and give it public read access
this.webBucket = new Bucket(this, 'WebBucket', {
  websiteIndexDocument: 'index.html',
  publicReadAccess: true
});

// Deploy the frontend to the web bucket
new BucketDeployment(this, 'DeployFrontend', {
  source: Source.asset('../ui/dist'),
  destinationBucket: this.webBucket
});
Also, make sure the "Block public access (account settings)" is turned off in the S3 Console.
For folks struggling with this error using aws-cdk and an already existing bucket:
Check whether you are trying to modify the bucket policy while you have blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL set in the Bucket properties.
You have to turn it off or remove that property if you want to modify the policy. After deploying (modifying) the policy, you can set the blockPublicAccess property back again.
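If you need to script that toggle outside the CDK, a boto3 sketch of the same dance (the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # placeholder: substitute your bucket name

# Lift the block so the bucket policy can be modified...
s3.delete_public_access_block(Bucket=bucket)

# ... deploy / modify the policy here ...

# ... then restore the block afterwards.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)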

Access denied when putting a bucket policy on an AWS S3 bucket as the root user (= bucket owner)

I have an AWS root user which I used to create an S3 bucket on Amazon. Now I want to make this bucket public by adding the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<my bucket name>/*"
    }
  ]
}
Where <my bucket name> is the name of the bucket. When I try to save this policy, I get a 403 Access Denied.
I tried explicitly setting the s3:PutBucketPolicy permission, but it still gives a 403. Does anybody know why?
Uncheck the two "Block public access" rows that apply to bucket policies to fix the Access Denied error. But please read the settings carefully and consider them before you create a new bucket; permissions are really important.
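If you'd rather script the policy change, here is a boto3 sketch of the same PutBucketPolicy call (the bucket name is a placeholder); it keeps failing with AccessDenied until those public-policy settings are unchecked:

import json
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # placeholder: substitute your bucket name

public_read = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}

# This is the call the console makes on Save; it fails with AccessDenied
# while "Block public access" still has BlockPublicPolicy enabled.
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(public_read))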
I tried creating a new bucket, and with those permission parameters unchecked (false), the bucket policy could be adjusted to make the bucket objects public. Afterwards I ticked the four previous checkboxes again, and it still works.
For folks struggling with this error using aws-cdk and an already existing bucket:
Check whether you are trying to modify the bucket policy while you have blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL (or any other blocking s3.BlockPublicAccess setting) set in the Bucket properties.
You have to turn it off or remove that property if you want to modify the policy. After deploying (modifying) the policy, you can set the blockPublicAccess property back again.

Admin level user denied access to S3 objects

I'm really struggling to gain access to objects in an S3 bucket.
Things I've done:
IAM user has admin level privileges already
Granted AmazonS3FullAccess
Set the bucket policy to allow public GetObject:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::images.bucketname.com/*"
    }
  ]
}
I don't want GetObject to be public, but I'm just trying to get this to work right now. I had set up a new IAM user for the application itself that will fetch objects as the principal, but again, for some reason that didn't work.
I'm uploading the images with putObject in Node.
Am I missing something here? I'm getting access denied on everything in S3. I can't open an image even when logged in as the root user. I can't download an object. There is no viable way for me to view the images I'm uploading.
All of these buttons within the console either throw a blank error or route to the standard AWS Access Denied XML page.
On the other hand, I can successfully, programmatically, upload files to the bucket using the root user's credentials.
What am I missing here? Thanks for the help.
If you just want to access the bucket for some MVP or hobby project and you don't care about security, then I would recommend switching off the default "Block public access" settings of the bucket.
To reiterate: only do this in development, as it may not be recommended for production.
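Alternatively, since the question says GetObject shouldn't really be public: presigned URLs are the standard way to serve private objects without opening the bucket. A boto3 sketch (the key is a placeholder; the bucket is the one from the question):

import boto3

s3 = boto3.client("s3")

# Time-limited URL for a private object; no public bucket policy needed.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "images.bucketname.com", "Key": "uploads/photo.jpg"},
    ExpiresIn=3600,  # one hour
)
print(url)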

Getting 403 forbidden from s3 when attempting to download a file

I have a bucket on s3, and a user given full access to that bucket.
I can perform an ls command and see the files in the bucket, but downloading them fails with:
A client error (403) occurred when calling the HeadObject operation: Forbidden
I also attempted this with a user granted full S3 permissions through the IAM console. Same problem.
For reference, here is the IAM policy I have:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::mybucket",
        "arn:aws:s3:::mybucket/*"
      ]
    }
  ]
}
I also tried adding a bucket policy, even making the bucket public, and still no go. Also, from the console, I tried to set individual permissions on the files in the bucket and got an error saying I cannot view the bucket, which is strange, since I was viewing it from the console when the message appeared and can ls anything in the bucket.
EDIT: the files in my bucket were copied there from another bucket belonging to a different account, using credentials from my account. May or may not be relevant...
2nd EDIT: I just tried to upload, download, and copy my own files to and from this bucket from other buckets, and it works fine. The issue is specifically with the files placed there from another account's bucket.
Thanks!
I think you need to make sure that the permissions are applied to objects when moving/copying them between buckets, using the "bucket-owner-full-control" ACL.
Here are the details about how to do this when moving or copying files, as well as retroactively:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-owner-access/
Also, you can read about the various predefined grants here:
http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
The problem here stems from how you got the files into the bucket, specifically the credentials you had and/or the privileges you granted at the time of upload. I ran into a similar permissions issue when I had multiple AWS accounts, even though my bucket policy was quite open (as yours is here). I had accidentally used credentials from one account (call it A1) when uploading to a bucket owned by a different account (A2). Because of this, A1 kept the permissions on the object and the bucket owner did not get them. There are at least three possible ways to fix this at the time of upload:
Switch accounts. Run export AWS_DEFAULT_PROFILE=A2 or, for a more permanent change, modify ~/.aws/credentials and ~/.aws/config to move the correct credentials and configuration under [default]. Then re-upload.
Specify the other profile at time of upload: aws s3 cp foo s3://mybucket --profile A2
Open up the permissions to bucket owner (doesn't require changing profiles): aws s3 cp foo s3://mybucket --acl bucket-owner-full-control
Note that the first two ways involve having a separate AWS profile. If you want to keep two sets of account credentials available to you, this is the way to go. You can set up a profile with your keys, region, etc. by running aws configure --profile Foo. See here for more info on Named Profiles.
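For boto3 users, the equivalent of --profile is a Session; a small sketch reusing the names from the examples above:

import boto3

# Use the A2 profile from ~/.aws/credentials instead of the default one.
session = boto3.Session(profile_name="A2")
s3 = session.client("s3")
s3.upload_file("foo", "mybucket", "foo")  # same upload, now as account A2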
There are also slightly more involved ways to do this retroactively (post upload) which you can read about here.
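For completeness, here is a sketch of one retroactive approach in boto3, assuming you run it with the uploading account's (A1) credentials: copy each object onto itself with the bucket-owner-full-control ACL. Note that an in-place copy requires MetadataDirective="REPLACE", which resets user metadata and Content-Type unless you pass them along explicitly.

import boto3

s3 = boto3.client("s3")  # assumed to run with the uploader's (A1) credentials
bucket = "mybucket"      # placeholder: substitute your bucket name

# Copy every object onto itself, granting the bucket owner full control.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        s3.copy_object(
            Bucket=bucket,
            Key=obj["Key"],
            CopySource={"Bucket": bucket, "Key": obj["Key"]},
            ACL="bucket-owner-full-control",
            # Required for an in-place copy; resets user metadata/Content-Type.
            MetadataDirective="REPLACE",
        )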
To correctly set the appropriate permissions for newly added files, add this bucket policy:
[...]
{
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::123456789012:user/their-user"
  },
  "Action": [
    "s3:PutObject",
    "s3:PutObjectAcl"
  ],
  "Resource": "arn:aws:s3:::my-bucket/*"
}
Your bucket policy is even more open, so that's not what's blocking you.
However, the uploader needs to set the ACL for newly created files. Python example:
import boto3

client = boto3.client('s3')

local_file_path = '/home/me/data.csv'
bucket_name = 'my-bucket'
bucket_file_path = 'exports/data.csv'

# Grant the bucket owner full control of the new object at upload time.
client.upload_file(
    local_file_path,
    bucket_name,
    bucket_file_path,
    ExtraArgs={'ACL': 'bucket-owner-full-control'}
)
source: https://medium.com/artificial-industry/how-to-download-files-that-others-put-in-your-aws-s3-bucket-2269e20ed041 (disclaimer: written by me)

Django Storage S3 bucket Access with IAM Role

I have an EC2 instance with an IAM role attached. That role has full S3 access. The AWS CLI works perfectly, and so does the metadata curl check to get the temporary access and secret keys.
I have also read that when the access and secret keys are missing from the settings module, boto will automatically get the temporary keys from the metadata URL.
However, I cannot access the CSS/JS files stored in the bucket via the browser. When I add a bucket policy allowing a principal of *, everything works.
I tried the following policy:
{
  "Version": "2012-10-17",
  "Id": "PolicyNUM",
  "Statement": [
    {
      "Sid": "StmtNUM",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::account-id:role/my-role"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
But all CSS/JS requests are still getting 403s. What can I change to make it work?
Requests from your browser don't have the ability to send the required authorization headers, which boto is handling for you elsewhere. The bucket policy cannot determine the principal and is correctly denying the request.
Add another Sid to allow Principal * access to everything under /public, for instance.
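For instance, an extra statement along these lines (sketched as a Python dict of the kind boto3's put_bucket_policy serializes; the bucket name and prefix are assumptions) grants anonymous reads only under public/:

# Hypothetical additional statement for the bucket policy above:
# anonymous read access, restricted to keys under the public/ prefix.
public_statement = {
    "Sid": "PublicReadUnderPublicPrefix",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-bucket/public/*",
}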
The reason is that AWS is setting your files to binary/octet-stream.
Check this solution to handle it.
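If the binary/octet-stream Content-Type is the culprit, you can set the type explicitly at upload time. A boto3 sketch (the file path, bucket, and key are placeholders):

import boto3

s3 = boto3.client("s3")

# Upload a stylesheet with an explicit Content-Type so the browser
# doesn't receive binary/octet-stream.
s3.upload_file(
    "dist/styles.css",        # placeholder local path
    "my-bucket",              # placeholder bucket
    "static/styles.css",      # placeholder key
    ExtraArgs={"ContentType": "text/css"},
)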