Unable to edit S3 object metadata in console or Cyberduck - amazon-web-services

I am trying to edit the metadata for S3 objects in both Cyberduck and the AWS Console. Cyberduck does not give me an error, but it does not apply the settings. When I try to edit the data in the AWS Console I get the following error message:
Sorry! You were denied access to do that.
I can upload and do everything else in the bucket, and I have the AmazonS3FullAccess policy applied to my account.

I figured it out. The files had been uploaded as an anonymous user, therefore I could not edit anything about them. I re-uploaded everything with the correct credentials and was able to change the metadata.
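In case it helps, here is a minimal sketch of that re-upload with the AWS CLI (the bucket name, key, and metadata key below are placeholders):
# re-upload the file under your own credentials and set the metadata in the same step
aws s3 cp ./report.pdf s3://my-bucket/docs/report.pdf --metadata department=finance
# or rewrite an object you can read in place, replacing its metadata
aws s3 cp s3://my-bucket/docs/report.pdf s3://my-bucket/docs/report.pdf --metadata-directive REPLACE --metadata department=finance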

Related

AccessDenied application using S3 in AWS EC2 Instance

I have hosted a Meteor-Angular application using S3 on an AWS EC2 instance.
Now when I run the application I receive the error message below.
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>8WK30J2DDCZRTWXK</RequestId>
<HostId>6+bSjn5yWA5olpHZb7pcYCBAIlCzPjN8MxxBs3kTuGfuNuNk+CgHDjDeBpXCIjpd5WDVoFnc5Zw=</HostId>
</Error>
I have searched a lot for a suitable answer and also to understand what might be the issue.
When I run aws s3 ls in the terminal I am able to see all the S3 buckets, and I have also added the AmazonS3FullAccess permissions policy to the IAM role. But still, the issue remains.
When I go to my bucket permissions it says that "objects can be public" and the public access is not blocked.
Here is an example of an object permission screenshot.
Can anyone help me to fix this error?
On your S3 bucket, check the security rules and whether the objects are available to the public. It looks like you're trying to access it in a web browser; if so, you'll need to make it public.
I believe you can do this either for the entire bucket or for each object; the concept should be the same either way.
Go to your bucket and select the object using the checkbox.
Click on Actions and select Make public.
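If you prefer the CLI, a rough equivalent of those console steps (the bucket and key below are placeholders, and this assumes the bucket's public access block settings allow ACLs):
# make a single existing object publicly readable
aws s3api put-object-acl --bucket my-bucket --key index.html --acl public-read
# or upload the object with a public-read ACL in the first place
aws s3 cp ./index.html s3://my-bucket/index.html --acl public-read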

How to access AWS S3 current bucket list content info

I have been provided with the access key and secret key for an Amazon S3 bucket. No more details were provided other than to drop some files into some specific folder.
I downloaded the AWS CLI and also the AWS SDK. So far, there seems to be no way for me to check the bucket name or list the folders where I'm supposed to drop my files. Every single command seems to require knowledge of a bucket name.
Trying to list with aws s3 ls gives me the error:
An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied
Is there a way to list the content of my current location (I'm guessing the credentials I was given are linked directly to a bucket?). I'd like to see at least the folders where I'm supposed to drop my files, but the SDK client for the console app I'm building seems to always require a bucket name.
Was I provided incomplete info or limited rights?
Do you know the bucket name or not? If you don't, and you don't have permission for ListAllMyBuckets and GetBucketLocation on * and ListBucket on the bucket in question, then you can't get the bucket name. That's how it is supposed to work. If you know the bucket, then you can run aws s3 ls s3://bucket-name/ to list the objects in the bucket.
Note that S3 buckets don't have the concept of a "folder". It's user-interface "sugar" to make it look like folders and files. Internally, it's just the key and the object.
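For example, once you do have the bucket name (the bucket and prefix below are placeholders):
# list the contents under a known prefix (no ListAllMyBuckets permission needed)
aws s3 ls s3://bucket-name/incoming/
# drop a file into that prefix
aws s3 cp ./data.csv s3://bucket-name/incoming/data.csv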
Looks like it was just not possible without enhanced rights or the actual bucket name. I was able to procure both later on from the client and complete the task. Thanks for the comments.

How to check permissions on folders in S3?

I want to simply check the permissions that I have on buckets/folders/files in AWS S3. Something like:
ls -l
Sounds like it should be pretty easy, but I cannot find any information on the subject. I just want to know if I have read access to some content, or whether I can load a file locally, without actually trying to load the data only to have an "Error Code: 403 Forbidden" thrown at me.
Note: I am using databricks and want to check the permission from there.
Thanks!
You can check the permissions using the command:
aws s3api get-object-acl --bucket my-bucket --key index.html
The ACL for each object can vary across your bucket.
More documentation at:
https://docs.aws.amazon.com/cli/latest/reference/s3api/get-object-acl.html
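Since the question mentions checking read access without actually loading the data, another lightweight check is a head request (bucket and key are placeholders); it returns the object's metadata on success and fails with a 403 if you lack read permission:
aws s3api head-object --bucket my-bucket --key index.html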
Hope it helps.
There are several different ways to grant access to objects in Amazon S3.
Permissions can be granted on a whole bucket, or a path within a bucket, via a Bucket Policy.
Permissions can also be granted to an IAM User or Role, giving that specific user/role permissions similar to a bucket policy.
Then there are permissions on the object itself, such as making it publicly readable.
So, there is no simple way to say "what are the permissions on this particular object" because it depends on who you are. Also, the policies can restrict by IP address and time of day, so there isn't always one answer.
You could use the IAM Policy Simulator to test whether a certain call (e.g. PutObject or GetObject) would work for a given user.
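The simulator can also be driven from the CLI via simulate-principal-policy; a sketch, where the user ARN, account ID, bucket, and key are all placeholders:
aws iam simulate-principal-policy --policy-source-arn arn:aws:iam::123456789012:user/some-user --action-names s3:GetObject s3:PutObject --resource-arns arn:aws:s3:::my-bucket/index.html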
Some commands in the AWS Command-Line Interface (CLI) come with a --dryrun option that will simply test whether the command would have worked, without actually executing the command.
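For instance, with placeholders for the bucket and key:
aws s3 cp s3://my-bucket/index.html . --dryrun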
Or, sometimes it is just easiest to try to access the object and see what happens!

AWS user can access stuff I didn't grant them access to

I created an AWS user and granted them access to just a few things as you can see on this screenshot:
And then they logged in and told me they can access my S3 and a few other services. I even had them test uploading a file to my S3.
So what did I do wrong?
Thanks.
Look at the policy for AWSLambdaFullAccess. Under "Action" you will see "s3:*". This gives the user full access to all S3 commands. Under "Resource" you will see "*". This gives the user full access to all resources (in this case, all S3 commands on all S3 resources).
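If you want to confirm exactly what a managed policy grants, one way (sketched here with the policy named above; the version id is whatever the first command returns) is to pull its default version with the CLI:
# find the default version of the managed policy
aws iam get-policy --policy-arn arn:aws:iam::aws:policy/AWSLambdaFullAccess --query Policy.DefaultVersionId --output text
# then dump the policy document for that version
aws iam get-policy-version --policy-arn arn:aws:iam::aws:policy/AWSLambdaFullAccess --version-id <version-from-above> --query PolicyVersion.Document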

AccessDeniedException: 403 Forbidden on GCS using owner account

I have tried to access files in a bucket and I keep getting access denied on the files. I can see them in the GCS console but cannot access them through it, and I cannot access them through gsutil either, running the command below.
gsutil cp gs://my-bucket/folder-a/folder-b/mypdf.pdf files/
But all this returns is AccessDeniedException: 403 Forbidden
I can list all the files and such but not actually access them. I've tried adding my user to the ACL, but that still had no effect. All the files were uploaded from a VM through a FUSE mount, which worked perfectly until access was just lost.
I've checked these posts but none seem to have a solution that's helped me:
Can't access resource as OWNER despite the fact I'm the owner
gsutil copy returning "AccessDeniedException: 403 Insufficient Permission" from GCE
gsutil cors set command returns 403 AccessDeniedException
Although this is quite an old question, I had a similar issue recently. After trying many options suggested here without success, I carefully re-examined my script and discovered I was getting the error as a result of a mistake in my bucket address, gs://my-bucket. I fixed it and it worked perfectly!
This is quite possible. Owning a bucket grants FULL_CONTROL permission to that bucket, which includes the ability to list objects within that bucket. However, bucket permissions do not automatically imply any sort of object permissions, which means that if some other account is uploading objects and sets ACLs to be something like "private," the owner of the bucket won't have access to it (although the bucket owner can delete the object, even if they can't read it, as deleting objects is a bucket permission).
I'm not familiar with the default FUSE settings, but if I had to guess, you're using your project's system account to upload the objects, and they're set to private. That's fine. The easiest way to test that would be to run gsutil from a GCE host, where the default credentials will be the system account. If that works, you could use gsutil to switch the ACLs to something more permissive, like "project-private."
The command to do that would be:
gsutil acl set -R project-private gs://myBucketName/
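Before changing anything, you could also check what ACL an object currently has, using the path from the question as an example:
gsutil acl get gs://my-bucket/folder-a/folder-b/mypdf.pdf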
tl;dr The Owner (basic) role has only a subset of the GCS permissions present in the Storage Admin (predefined) role—notably, Owners cannot access bucket metadata, list/read objects, etc. You would need to grant the Storage Admin (or another, less privileged) role to provide the needed permissions.
NOTE: This explanation applies to GCS buckets using uniform bucket-level access.
In my case, I had enabled uniform bucket-level access on an existing bucket, and found I could no longer list objects, despite being an Owner of its GCP project.
This seemed to contradict how GCP IAM permissions are inherited (organization → folder → project → resource / GCS bucket), since I expected to have Owner access at the bucket level as well.
But as it turns out, the Owner permissions were being inherited as expected; they were simply insufficient for listing GCS objects.
The Storage Admin role has the following permissions which are not present in the Owner role: [1]
storage.buckets.get
storage.buckets.getIamPolicy
storage.buckets.setIamPolicy
storage.buckets.update
storage.multipartUploads.abort
storage.multipartUploads.create
storage.multipartUploads.list
storage.multipartUploads.listParts
storage.objects.create
storage.objects.delete
storage.objects.get
storage.objects.getIamPolicy
storage.objects.list
storage.objects.setIamPolicy
storage.objects.update
This explained the seemingly strange behavior. And indeed, after granting the Storage Admin role (whereby my user was both Owner and Storage Admin), I was able to access the GCS bucket.
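For reference, a minimal sketch of granting that role on a single bucket with gsutil (the user email and bucket name are placeholders):
gsutil iam ch user:someone@example.com:roles/storage.admin gs://my-bucket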
Footnotes
Though the documentation page Understanding roles omits the list of permissions for Owner (and other basic roles), it's possible to see this information in the GCP console:
Go to "IAM & Admin"
Go to "Roles"
Filter for "Owner"
Go to "Owner"
(See list of permissions)