AWS - S3 - Creating a bucket that already exists - through the CLI

Through the AWS Console, if you try to create a bucket that already exists, the console will not allow you to create it again.
But through the CLI it will allow it - when you execute the make-bucket command with an existing bucket name, it just shows a success message.
It's really confusing that the CLI doesn't show an error, and that the two processes behave differently.
Any idea why this is the behavior, and why the CLI doesn't throw an error for the same?

In a distributed system, a create request is most of the time treated as an upsert; detecting the conflict and throwing an error back is a costly process.
If you want to check whether a bucket exists, and whether you have the appropriate privileges on it, use the following command.
aws s3api head-bucket --bucket my-bucket
Documentation:
http://docs.aws.amazon.com/cli/latest/reference/s3api/head-bucket.html
This operation is useful to determine if a bucket exists and you have
permission to access it.
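As a minimal sketch of how you might use this (my-bucket is a placeholder), head-bucket exits with code 0 when the bucket exists and is accessible, and non-zero otherwise, so it can guard the make-bucket call in a shell script:
# check first, create only if the bucket is missing or inaccessible
if aws s3api head-bucket --bucket my-bucket 2>/dev/null; then
    echo "bucket already exists and is accessible"
else
    aws s3 mb s3://my-bucket
fi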
Hope it helps.

Related

Trying to create AWS spot datafeed, getting error: InaccessibleStorageLocation

I am following the 'documentation' here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-data-feeds.html
with the goal of creating an EC2 spot instance price datafeed.
I use this command:
aws ec2 create-spot-datafeed-subscription --region us-east-1 --bucket mybucketname-spot-instance-price-data-feed
And get this response:
An error occurred (InaccessibleStorageLocation) when calling the CreateSpotDatafeedSubscription operation: The specified bucket does not exist or does not have enough permissions
The bucket exists, I am able to upload files into it.
I don't have any idea what to do - there's a blizzard of AWS options for granting permissions, and the documentation makes only vague statements, nothing concrete about what might need to be done.
Can anyone suggest please what I can do to get past this error? thanks!
What worked for me on this issue was enabling ACLs. I initially had ACLs disabled and was running into the same permissions issue. I updated the S3 bucket to enable ACLs, and then the message went away and I was able to create and see the spot pricing feed.
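As a sketch of doing the same from the CLI (the bucket name is the one from the question, and the exact ownership setting is an assumption): ACLs are disabled when the bucket's object ownership is BucketOwnerEnforced, so switching it to BucketOwnerPreferred (or ObjectWriter) re-enables them:
aws s3api put-bucket-ownership-controls \
    --bucket mybucketname-spot-instance-price-data-feed \
    --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerPreferred}]'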

Difference between the APIs `aws s3` and `aws s3api` when granting permissions to a canonical ID

In order to upload a file to an s3 bucket, we use the following CLI command (and this command works fine):
aws s3api put-object --grant-full-control id=e2cxxxxxxxxx --bucket my_bucket --key folder/filename --body filename
The aws s3api put-object command doesn't support uploading files larger than 5 GB, which we will have to start supporting soon. I tried doing the upload using aws s3 (the high-level API) instead. The command now looks like:
aws s3 cp filename s3://my_bucket/folder/filename --grants full=id=e2cxxxxxxxxx
However, this command throws An error occurred (AccessDenied) when calling the UploadPart operation: Access Denied
I'm trying to understand how the two are different and why one throws AccessDenied while the other doesn't.
The s3api set of commands is a 1:1 mapping to the low-level S3 API. The s3 set of commands adds some higher-level functionality, like syncing. To do so, it often requires multiple API-level permissions: behind the scenes, a simple aws s3 cp command might use several low-level APIs, and the caller needs permission for each of them.
You need to add all of those low-level API actions to the "Action" section of the IAM policy.
There is no easy way to list which low-level APIs a high-level command uses. In your example, it looks like UploadPart is the failing call: for files this large, aws s3 cp switches to multipart upload, and UploadPart is authorized by the s3:PutObject action (there is no separate s3:UploadPart IAM action). Add the missing actions to the user/role/group policy and retry until you have all of them ("Action": ["s3:PutObject", ...]).
You can also try with a user having full S3 permissions ("Action": "s3:*") and configure CloudTrail to analyse from the logs all the S3 APIs used by one specific command.
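As an illustrative sketch of such a policy statement (the bucket name comes from the question; the exact action list is an assumption until CloudTrail confirms what's actually needed), it might look like:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": "arn:aws:s3:::my_bucket/*"
        }
    ]
}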

How to check permissions on folders in S3?

I want to simply check the permissions that I have on a buckets/folders/files in AWS S3. Something like:
ls -l
Sounds like it should be pretty easy, but I cannot find any information on the subject. I just want to know whether I have read access to some content, or whether I can download a file, without actually trying to load the data and having an "Error Code: 403 Forbidden" thrown at me.
Note: I am using databricks and want to check the permission from there.
Thanks!
You can check an object's permissions using the command:
aws s3api get-object-acl --bucket my-bucket --key index.html
The ACL can vary from object to object across your bucket.
More documentation at,
https://docs.aws.amazon.com/cli/latest/reference/s3api/get-object-acl.html
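There's a bucket-level equivalent as well, if you want the ACL on the bucket itself rather than on a single object:
aws s3api get-bucket-acl --bucket my-bucket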
Hope it helps.
There are several different ways to grant access to objects in Amazon S3.
Permissions can be granted on a whole bucket, or a path within a bucket, via a Bucket Policy.
Permissions can also be granted to an IAM User or Role, giving that specific user/role permissions similar to a bucket policy.
Then there are permissions on the object itself, such as making it publicly readable.
So, there is no simple way to say "what are the permissions on this particular object" because it depends who you are. Also, the policies can restrict by IP address and time of day, so there isn't always one answer.
You could use the IAM Policy Simulator to test whether a certain call (e.g. PutObject or GetObject) would work for a given user.
Some commands in the AWS Command-Line Interface (CLI) come with a --dryrun option that will simply test whether the command would have worked, without actually executing the command.
Or, sometimes it is just easiest to try to access the object and see what happens!
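As a sketch of driving the Policy Simulator from the CLI (the user ARN, bucket, and key are placeholders), simulate-principal-policy reports whether the given principal's IAM policies would allow a call:
aws iam simulate-principal-policy \
    --policy-source-arn arn:aws:iam::123456789012:user/my-user \
    --action-names s3:GetObject \
    --resource-arns arn:aws:s3:::my-bucket/index.html
Note that this evaluates the principal's identity-based policies; bucket policies and object ACLs can still change the real outcome.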

How does S3 get permission from Lambda trigger?

I'm working out the security details for working with Lambda. One thing I can't find out is how S3 gets permission to push to Lambda when you add a trigger from the Lambda console or via S3 - Properties - Events. I know how it works using the CLI, and I know you could do it via the SDK, but I also noticed it isn't always necessary. Mostly the trigger just 'works' without me adding any permissions. Does anybody know why?
And is there a way to find out what permissions S3/an S3 bucket has? I know there's a 'Permissions' tab, but that's not giving me any information. I also know about Trusted Advisor, but that's just telling me there's no explicit problem with the permissions. Is there a way to get a list of permissions, though?
I hope someone can help me out, thanks in advance!
Adding a trigger in the console is the equivalent of assigning permissions and setting a bucket notification. You can see the policy associated with a particular lambda function by using the get-policy cli command:
aws lambda get-policy --function-name <name>
This will tell you what the policy is for your function, including the resources with rights to invoke it. This policy isn't applied to the S3 bucket; it's attached to your Lambda function.
You can also see what your bucket is set up to notify in the console under Properties > Events or review this with the cli using the get-bucket-notification command:
aws s3api get-bucket-notification --bucket <bucket>
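For reference, this is roughly what the console sets up behind the scenes when you add the trigger (a sketch; the function name, statement id, bucket, and account id are placeholders):
aws lambda add-permission \
    --function-name my-function \
    --statement-id s3-invoke \
    --action lambda:InvokeFunction \
    --principal s3.amazonaws.com \
    --source-arn arn:aws:s3:::my-bucket \
    --source-account 123456789012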

S3 download works from console, but not from commandline

Can anyone explain this behaviour:
When I try to download a file from S3, I get the following error:
An error occurred (403) when calling the HeadObject operation: Forbidden.
Commandline used:
aws s3 cp s3://bucket/raw_logs/my_file.log .
However, when I use the S3 console website, I'm able to download the file without issues.
The access key used by the command line is correct. I verified this, and other AWS operations via the command line work fine. The access key is tied to the same user account I use in the AWS console.
So I assume you're sure about your user's IAM policy and that the file exists in your bucket.
If you have set a default region in your configuration but the bucket was not created in that region (yes, S3 buckets are created in a specific region), the CLI will not find it. Make sure to add the region flag to the command:
aws s3 cp s3://bucket/raw_logs/my_file.log . --region <region of the bucket>
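If you're not sure which region the bucket lives in, you can look it up first (assuming you're allowed to call this):
aws s3api get-bucket-location --bucket bucket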
Other notes:
make sure to upgrade to the latest version of the CLI
a 403 can also be caused by an unsynchronized system clock: requests are signed with a timestamp that S3 compares against its own clock, so if your clock is significantly out of sync, requests can be rejected
I had a similar issue due to having multi-factor authentication enabled on my account. Check out how to configure MFA for the AWS CLI here: https://aws.amazon.com/premiumsupport/knowledge-center/authenticate-mfa-cli/
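As a sketch of the flow that article describes (the MFA device ARN and token code are placeholders), you request temporary credentials with your MFA code and use those for subsequent calls:
aws sts get-session-token \
    --serial-number arn:aws:iam::123456789012:mfa/my-user \
    --token-code 123456
The returned AccessKeyId, SecretAccessKey, and SessionToken then go into your environment or a CLI profile before retrying the download.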