Access denied on S3 PUT request with pre-signed URL - amazon-web-services

I'm trying to upload a file directly to an S3 bucket with a pre-signed URL, but I'm getting an AccessDenied (403 Forbidden) error on the PUT request.
The PUT method is allowed in the bucket's CORS configuration.
Do I also need to update the bucket policy to allow the s3:PutObject and s3:PutObjectAcl actions?
P.S. Forgot to add: I already tried adding s3:PutObject and s3:PutObjectAcl with Principal: *, and in that case uploading works just fine, but how do I restrict access for uploading? It should only be possible via pre-signed URLs, right?

OK, I figured out how to fix it. Here are the steps:
Replace Principal: * with "Principal": {"AWS":"arn:aws:iam::USER-ID:user/username"}. In place of USER-ID:user/username, put the account ID and user name of the desired user, which you can find in the IAM section of the console. Read more about Principal here: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html.
Be sure that the user you specified in Principal has the s3:PutObject and s3:PutObjectAcl permissions for the bucket in question.
Check your Lambda function's permissions. It should also have s3:PutObject and s3:PutObjectAcl for the bucket. You can check this on the IAM Roles page (if you created a separate role for the Lambda function) or through the function's Designer page (read-only). A minimal sketch of the resulting flow follows these steps.
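For reference, here is a minimal sketch of that flow in Python with boto3 (the bucket name, object key, and file name are placeholders; the question does not show the original code): the Lambda generates the pre-signed PUT URL under its own role, and the client then uploads with a plain HTTP PUT.

import boto3
import requests  # client side only; any HTTP library would do

# --- Lambda side: generate the pre-signed URL ---
# The identity that signs the URL (the Lambda's execution role here)
# must itself have s3:PutObject / s3:PutObjectAcl on the bucket.
s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "put_object",
    Params={
        "Bucket": "my-upload-bucket",     # placeholder bucket name
        "Key": "uploads/report.csv",      # placeholder object key
        "ACL": "bucket-owner-full-control",
    },
    ExpiresIn=300,  # URL expires after 5 minutes
)

# --- Client side: upload with a plain HTTP PUT ---
# Because ACL was included when signing, the client has to send the matching
# x-amz-acl header, otherwise S3 rejects the signature.
with open("report.csv", "rb") as f:
    resp = requests.put(url, data=f, headers={"x-amz-acl": "bucket-owner-full-control"})
resp.raise_for_status()

With this flow there is no need for a Principal: * statement at all; the PUT is authorized as the IAM identity that signed the URL.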

In my case (and maybe this helps others), the problem was that, due to a typo in the SAM template, the proper policy was not being applied TO THE LAMBDA that creates the signed URL.
It took me some time because I thought the problem was in the actual uploading, while the real problem was in creating the URL (though S3 didn't say anything about permission problems...).
So check whether the S3CrudPolicy is applied to the correct bucket; that may fix the issue for you.

Related

Does the age of an IAM account affect object-level permissions in AWS S3?

I am working with Terraform and cannot initialise the working directory. For context, the bucket and state file were made by someone who has since left the company.
I have granted myself permission to Write and List objects and to Read and Write the bucket ACL. The debug log shows that I am able to ListObjects from the bucket, but I fail at GetObject with an AccessDenied error. Attempting to download the state file with the AWS CLI returns the same error, as expected. I am an admin, and I am able to download the state file from the S3 console.
My co-worker, who has the same permission set as me (admin), is able to download the state file via the AWS CLI without issue, and her IAM account was made before the Terraform state bucket. Does the age of our IAM accounts affect access?
No, the age of an account does not affect the permissions attached to it in any way. You can't access the S3 bucket because either the role used by Terraform does not have the necessary permissions or the bucket policy explicitly denies access; chances are it is the role itself that is missing permissions.
For Terraform to work with remote state in S3, the following permissions are required (source); a minimal policy sketch follows the list:
s3:ListBucket on arn:aws:s3:::mybucket
s3:GetObject on arn:aws:s3:::mybucket/path/to/my/key
s3:PutObject on arn:aws:s3:::mybucket/path/to/my/key
s3:DeleteObject on arn:aws:s3:::mybucket/path/to/my/key
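As a rough illustration, the inline policy below encodes exactly those four statements and attaches them to the role Terraform runs under; the bucket name, key path, and role name are placeholders, and attaching it with boto3 is just one of several ways to apply it.

import json
import boto3

# Minimal Terraform S3-backend policy (placeholder bucket/key from the list above).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::mybucket",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::mybucket/path/to/my/key",
        },
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="terraform-runner",          # placeholder: the role Terraform uses
    PolicyName="terraform-s3-backend",
    PolicyDocument=json.dumps(policy),
)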

Writing to an Amazon S3 bucket from Lambda results in "InvalidARN: ARN accountID does not match regex "[0-9]{12}""

I've been digging through tutorials for days, but they all say the same thing, and it seems like I should be in slam-dunk territory here, yet I get the above error whenever I try to read from or write to my Amazon S3 bucket.
I only have one AWS account, so my Lambda function should be owned by the same account as my Amazon S3 bucket. I have given my Lambda role s3:GetObject and s3:PutObject permissions, as well as just s3:*, and I have verified that my S3 bucket policy is not explicitly denying access, but nothing changes the message.
I am new to AWS policies and permissions, and Google isn't turning up a lot of other people getting this message. I don't know where I am supposed to be supplying my account ID or why it isn't already there. I would be grateful for any insights.
EDIT: I have added AmazonS3FullAccess to my policies and removed my previous policy, which only allowed GetObject and PutObject specifically. Sadly, the behavior has not changed.
Here are a couple of screenshots:
And since my roles seem to be correct, here is my code. Any chance there is anything here that could be causing my problem?
You should use the bucket name only, not the full ARN.
You can solve this issue by ensuring that the IAM role associated with your Lambda function has the correct permissions. For example, here is the IAM role I use to invoke Amazon S3 operations from a Lambda function:
Also make sure that in the Lambda console you select the proper IAM role, as shown here:
I had this issue, but I later realized I had provided the S3 ARN instead of the bucket name as an environment variable.
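To make the bucket-name point concrete, here is a minimal Python sketch (the question uses the JavaScript SDK, and the bucket name here is a placeholder): a Bucket value starting with arn: is parsed as an S3 ARN and validated, which appears to be where the accountID regex error comes from, so pass the bare name instead.

import boto3

s3 = boto3.client("s3")

# Wrong: the Bucket parameter expects a name, not an ARN; a value starting with
# "arn:" is run through ARN validation and fails for a plain bucket ARN.
# s3.put_object(Bucket="arn:aws:s3:::my-upload-bucket", Key="hello.txt", Body=b"hi")

# Right: pass the bare bucket name.
s3.put_object(Bucket="my-upload-bucket", Key="hello.txt", Body=b"hi")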
I got this problem when I had an incorrect REGION in the S3Client initialization. Here is the correct code example (change the region to yours):
const REGION = "eu-central-1"; //e.g. "us-east-1"
const s3Client = new S3Client({ region: REGION });
Source: step 2 in AWS Getting started in Node.js Tutorial

Boto3 access denied when calling the ListObjects operation on an S3 bucket directory

I'm trying to access a bucket via a cross-account reference. The connection is established, but the put/list permissions are granted only on a specific directory (folder), i.e. bucketname/folder_name/*
import boto3

s3 = boto3.client('s3')
s3.upload_file(
    "filename.csv",
    "bucketname",
    "folder_name/file.csv",
    ExtraArgs={'ACL': 'bucket-owner-full-control'},
)
I'm not sure how to allow the same thing via code; it throws access denied on both list and put. There's nothing wrong with the permissions as such; I have verified the access via the AWS CLI and it works.
Let me know if I'm missing something here, thanks!
There was an issue with the assumed role. I followed the documentation mentioned here https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-api.html along with the code mentioned above.
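For anyone hitting the same thing, here is a rough boto3 sketch of that pattern (the role ARN, bucket, and key are placeholders): assume the cross-account role with STS first, then build the S3 client from the temporary credentials before uploading into the allowed prefix.

import boto3

# Assume the role in the bucket owner's account (placeholder ARN).
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/cross-account-writer",
    RoleSessionName="upload-session",
)["Credentials"]

# Build an S3 client from the temporary credentials of the assumed role.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# Upload only under the prefix the role is allowed to write to.
s3.upload_file(
    "filename.csv",
    "bucketname",
    "folder_name/file.csv",
    ExtraArgs={"ACL": "bucket-owner-full-control"},
)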

Cloud IAM Conditions for GCS objects

I'm having problems using Cloud IAM Conditions to limit my service account to only have permissions to read certain files in a GCS bucket.
I'm trying to use the following condition:
resource.name.startsWith("projects/_/buckets/some-bucket/objects/fooItems%2f12345")
where I want to allow the service account to ONLY have READ access to files with prefix fooItems/12345 inside the bucket some-bucket
I.e. The following files should not be authorized:
gs://some-bucket/fooItems/555/f.txt
gs://some-bucket/fooItems/555/foo/s.log
while the following files should be authorized:
gs://some-bucket/fooItems/1234/f.txt
gs://some-bucket/fooItems/1234/foo/s.log
The problem I'm having is that even files such as gs://some-bucket/fooItems/555/* are readable.
I tried both with and without encoded /, i.e.:
resource.name.startsWith("projects/_/buckets/some-bucket/objects/fooItems/12345")
and
resource.name.startsWith("projects/_/buckets/some-bucket/objects/fooItems%2f12345")
You should use ACLs instead of IAM permissions if you want to grant permission over only some of the files in a bucket. This is called fine-grained access control.
In order to achieve this, you should change the access control of your bucket to fine-grained, and then you can navigate to each object you want the service account to access and add it as documented.
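As a rough sketch of that approach with the google-cloud-storage Python client (the bucket, prefix, and service-account address are placeholders), once the bucket is on fine-grained access control you can grant the service account read access object by object:

from google.cloud import storage

client = storage.Client()

# With fine-grained (ACL-based) access control enabled on the bucket,
# grant the service account READ on every object under the allowed prefix.
service_account = "my-sa@my-project.iam.gserviceaccount.com"  # placeholder
for blob in client.list_blobs("some-bucket", prefix="fooItems/12345/"):
    blob.acl.user(service_account).grant_read()
    blob.acl.save()

Note that this only covers existing objects; objects uploaded later would need the same grant (or a suitable default object ACL on the bucket).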

AccessDeniedException: 403 Forbidden on GCS using owner account

I have tried to access files in a bucket and I keep getting access denied on the files. I can see them in the GCS console but cannot access them through it, and I cannot access them through gsutil either, running the command below.
gsutil cp gs://my-bucket/folder-a/folder-b/mypdf.pdf files/
But all this returns is AccessDeniedException: 403 Forbidden
I can list all the files and such, but not actually access them. I've tried adding my user to the ACL, but that still had no effect. All the files were uploaded from a VM through a FUSE mount, which worked perfectly, and then I just lost all access.
I've checked these posts, but none seem to have a solution that's helped me:
Can't access resource as OWNER despite the fact I'm the owner
gsutil copy returning "AccessDeniedException: 403 Insufficient Permission" from GCE
gsutil cors set command returns 403 AccessDeniedException
Although this is quite an old question, I had a similar issue recently. After trying many of the options suggested here without success, I carefully re-examined my script and discovered I was getting the error as a result of a mistake in my bucket address gs://my-bucket. I fixed it and it worked perfectly!
This is quite possible. Owning a bucket grants FULL_CONTROL permission to that bucket, which includes the ability to list objects within that bucket. However, bucket permissions do not automatically imply any sort of object permissions, which means that if some other account is uploading objects and sets ACLs to be something like "private," the owner of the bucket won't have access to it (although the bucket owner can delete the object, even if they can't read it, as deleting objects is a bucket permission).
I'm not familiar with the default FUSE settings, but if I had to guess, you're using your project's system account to upload the objects, and they're set to private. That's fine. The easiest way to test that would be to run gsutil from a GCE host, where the default credentials will be the system account. If that works, you could use gsutil to switch the ACLs to something more permissive, like "project-private."
The command to do that would be:
gsutil acl set -R project-private gs://myBucketName/
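If you prefer to do it programmatically, a rough equivalent with the Python client looks like this (bucket and object names are placeholders): first inspect the object's ACL to confirm how it was uploaded, then apply the projectPrivate predefined ACL.

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")                 # placeholder bucket name
blob = bucket.blob("folder-a/folder-b/mypdf.pdf")   # placeholder object name

# Inspect the object's current ACL entries to see what the uploader granted.
blob.acl.reload()
for entry in blob.acl:
    print(entry)

# Relax the object ACL to the "projectPrivate" predefined ACL, the per-object
# equivalent of the gsutil command above.
blob.acl.save_predefined("projectPrivate")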
tl;dr The Owner (basic) role has only a subset of the GCS permissions present in the Storage Admin (predefined) role—notably, Owners cannot access bucket metadata, list/read objects, etc. You would need to grant the Storage Admin (or another, less privileged) role to provide the needed permissions.
NOTE: This explanation applies to GCS buckets using uniform bucket-level access.
In my case, I had enabled uniform bucket-level access on an existing bucket, and found I could no longer list objects, despite being an Owner of its GCP project.
This seemed to contradict how GCP IAM permissions are inherited (organization → folder → project → resource / GCS bucket), since I expected to have Owner access at the bucket level as well.
But as it turns out, the Owner permissions were being inherited as expected; they were simply insufficient for listing GCS objects.
The Storage Admin role has the following permissions which are not present in the Owner role: [1]
storage.buckets.get
storage.buckets.getIamPolicy
storage.buckets.setIamPolicy
storage.buckets.update
storage.multipartUploads.abort
storage.multipartUploads.create
storage.multipartUploads.list
storage.multipartUploads.listParts
storage.objects.create
storage.objects.delete
storage.objects.get
storage.objects.getIamPolicy
storage.objects.list
storage.objects.setIamPolicy
storage.objects.update
This explained the seemingly strange behavior. And indeed, after granting the Storage Admin role (whereby my user was both Owner and Storage Admin), I was able to access the GCS bucket.
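For completeness, a rough sketch of that fix with the Python client (the bucket name and member are placeholders): add a roles/storage.admin binding to the bucket's IAM policy, which supplies the object-level permissions the basic Owner role lacks.

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")  # placeholder bucket name

# Read the bucket IAM policy (version 3 is needed when conditions are involved),
# append a Storage Admin binding for the user, and write the policy back.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {
        "role": "roles/storage.admin",
        "members": {"user:me@example.com"},  # placeholder member
    }
)
bucket.set_iam_policy(policy)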
Footnotes
Though the documentation page Understanding roles omits the list of permissions for Owner (and other basic roles), it's possible to see this information in the GCP console:
Go to "IAM & Admin"
Go to "Roles"
Filter for "Owner"
Go to "Owner"
(See list of permissions)