When I list the buckets using the AWS CLI, I see a bunch of buckets. The same buckets are also visible in the AWS S3 Management Console.
But when I try to remove a bucket from the CLI, it throws an error as shown below. There seems to be some inconsistency in the S3 state. I am not able to delete them from the AWS S3 Management Console either.
Why is this happening, and how do I get around it?
Due to the consistency model of S3, you sometimes have to wait a few hours before you can delete a just-created bucket.
In which region are you trying to do this?
This behavior relates to the fact that deleting an S3 bucket can cause static hosting issues.
Let's say that you have a static S3 website (whose bucket name has to be the same as the domain name), say www.example.com. If you delete the www.example.com S3 bucket and then someone else in another account happens to create a bucket with that same name then you have lost the bucket name and consequently you have lost the ability to host an S3 static website with your own domain name www.example.com.
So, AWS gives you a grace period after deleting an S3 bucket. During this grace period, only your account can create an S3 bucket with the same name (and it has to be in the same AWS region). The grace period is typically of the order of a few hours.
If you intend to re-use an S3 bucket, the best advice is not to delete it, but to simply delete its contents.
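For example, here is a minimal sketch of emptying a bucket without deleting it, using Python and boto3 (the bucket name is a placeholder taken from the example above):

import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("www.example.com")  # placeholder bucket name

# Delete every current object; for versioned buckets, also delete old versions
# and delete markers so the bucket is truly empty.
bucket.objects.all().delete()
bucket.object_versions.all().delete()  # harmless no-op if versioning was never enabled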
I have one AWS S3 and Redshift question:
A company uses two AWS accounts for accessing various AWS services. The analytics team has just configured an Amazon S3 bucket in AWS account A for writing data from the Amazon Redshift cluster provisioned in AWS account B. The team has noticed that the files created in the S3 bucket using UNLOAD command from the Redshift cluster are not accessible to the bucket owner user of the AWS account A that created the S3 bucket.
What could be the reason for this denial of permission for resources belonging to the same AWS account?
I tried to reproduce the scenario from the question, but I can't.
I don't understand the difference between S3 Object Ownership and bucket ownership.
You are not the only person confused by Amazon S3 object ownership. When writing files from one AWS account to a bucket owned by a different AWS account, it is possible for the 'ownership' of the objects to remain with the 'sending' account. This causes all types of problems.
Fortunately, AWS has introduced a feature into S3 called Object Ownership that avoids all of these issues:
By setting "ACLs disabled" for an S3 Bucket, objects will always be owned by the AWS Account that owns the bucket.
So, you should configure this option on the S3 bucket in AWS account A (the bucket owner's account) and it should all work nicely.
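As an illustration, a minimal sketch of turning that setting on via the API, using Python and boto3 (the bucket name is a placeholder; this is the API equivalent of choosing ACLs disabled in the console):

import boto3

s3 = boto3.client("s3")

# "ACLs disabled" in the console corresponds to the BucketOwnerEnforced rule,
# which makes the bucket owner the owner of every object written to the bucket.
s3.put_bucket_ownership_controls(
    Bucket="analytics-bucket-account-a",  # placeholder bucket name
    OwnershipControls={"Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]},
)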
The problem is that the bucket owner in account A does not have access to files that were uploaded by account B. Usually that is solved by specifying the ACL parameter when uploading files (--acl bucket-owner-full-control). Since the upload is done via Redshift, you need to tell Redshift to assume a role in account A for the UNLOAD command, so the files are written by account A and keep account A as the owner. Check the following page for more examples of configuring cross-account LOAD/UNLOAD: https://aws.amazon.com/premiumsupport/knowledge-center/redshift-s3-cross-account/
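As a rough sketch of that approach (all role ARNs and the bucket name below are placeholders, and the cross-account trust setup from the linked article is assumed), the UNLOAD statement chains from the cluster's role in account B to a role in account A; the Python below only builds and prints the SQL:

# Role chaining: the Redshift role in account B assumes a role in account A
# that has write access to the bucket (comma-separated ARNs, no spaces).
role_chain = (
    "arn:aws:iam::111111111111:role/RedshiftRoleAccountB,"
    "arn:aws:iam::222222222222:role/S3WriteRoleAccountA"
)

unload_sql = f"""
UNLOAD ('SELECT * FROM sales')
TO 's3://analytics-bucket-account-a/unload/sales_'
IAM_ROLE '{role_chain}'
FORMAT AS PARQUET;
"""

print(unload_sql)  # run this statement against the cluster with your SQL client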
The problem:
I have an old S3 bucket: bucket A and a new S3 bucket: bucket B. These buckets are in separate accounts. Up until now, I have been serving assets from bucket A. Moving forward, I want to serve assets from bucket B. I must still support pushing to bucket A. However, those assets pushed to bucket A must be retrievable from bucket B.
Possible solutions:
On every new push to bucket A (PutObject), I must sync that object from bucket A to bucket B. As I understand it, there are two ways to achieve this:
Using AWS Lambda with Amazon S3 (a rough sketch of this option appears after the list)
Using DataSync <-- preferred solution
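For option 1, a minimal sketch of what such a Lambda function might look like (Python, assuming the function runs in account A, is triggered by s3:ObjectCreated:* events on bucket A, and that bucket B's bucket policy grants the function's role s3:PutObject; bucket names are placeholders):

from urllib.parse import unquote_plus
import boto3

s3 = boto3.client("s3")
DESTINATION_BUCKET = "bucket-b"  # placeholder name for the bucket in the other account

def handler(event, context):
    # Copy each newly created object from bucket A into bucket B.
    for record in event["Records"]:
        source_bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])  # event keys are URL-encoded
        s3.copy_object(
            Bucket=DESTINATION_BUCKET,
            Key=key,
            CopySource={"Bucket": source_bucket, "Key": key},
            ACL="bucket-owner-full-control",  # grant bucket B's owner full control of the copy
        )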
Issue with solution 2:
I have a feeling the path using DataSync will be less complex. However, it's not clear to me how to accomplish this, or if it is even possible. The examples I see in the documentation (granted there is a lot to sift through) are not quite the same as this use-case. In the console, it does not seem to allow a task across multiple AWS accounts.
The disconnect I'm seeing here is that the documentation implies it is possible. However, when you navigate to DataSync Locations in the AWS Console, there is only the option to add locations from your own AWS account's S3 bucket list.
I have an Amazon S3 bucket that is being used by CloudTrail.
However, the S3 bucket is not visible in S3.
When I click on the bucket in CloudTrail, it links to S3 but I get access denied.
The bucket is currently in use by CloudTrail, and based on the icons that seems to be working fine.
So, it seems this is an existing bucket but I cannot access it!
I also tried to access the S3 bucket with the root account, but the same issue occurs there.
Please advise on how I would regain access.
Just because CloudTrail has access to the bucket doesn't mean your account does too.
You would need to talk to whoever manages your security and request access. Or, if this is your account, make sure you are logged in with credentials that have the proper access.
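If you do have administrative credentials, one quick (hedged) check is to pull the bucket policy and look for explicit Deny statements; a minimal sketch in Python with boto3 follows (the bucket name is a placeholder, and this call itself can fail with AccessDenied if a deny covers it):

import boto3

s3 = boto3.client("s3")

# Print the bucket policy so you can look for Deny statements or conditions
# that exclude your principal.
policy = s3.get_bucket_policy(Bucket="my-cloudtrail-logs-bucket")
print(policy["Policy"])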
Unlike other AWS services, S3 resource ARNs do not contain the AWS account number.
A few sample ARNs are:
-- Elastic Beanstalk application version --
arn:aws:elasticbeanstalk:us-east-1:123456789012:environment/My App/MyEnvironment
-- IAM user name --
arn:aws:iam::123456789012:user/David
-- Amazon RDS instance used for tagging --
arn:aws:rds:eu-west-1:123456789012:db:mysql-db
On the other hand, an S3 ARN looks like:
arn:aws:s3:::my_corporate_bucket/exampleobject.png
S3 Bucket ARNs do not require an account number or region since bucket names are unique across all accounts/regions.
The question is "Why does the S3 bucket ARN not contain an AWS account number?" and the answer is that S3 was one of the first AWS services to be launched and many things have changed since then. S3 has not yet added the account number to bucket ARNs, and we don't know why. It could be that it's technically challenging, or that it's simply not a priority for the service team.
One way to validate that the bucket you are uploading objects to actually belongs to you, and so avoid accidentally leaking data into other people's buckets, is to use the recently released bucket owner condition:
https://aws.amazon.com/about-aws/whats-new/2020/09/amazon-s3-bucket-owner-condition-helps-validate-correct-bucket-ownership
https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-owner-condition.html
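For illustration, a minimal sketch of the bucket owner condition in Python with boto3 (bucket name, key, and account ID are placeholders); the upload fails unless the bucket is owned by the expected account:

import boto3

s3 = boto3.client("s3")

# The request is rejected if the bucket is not owned by account 111122223333.
s3.put_object(
    Bucket="my-output-bucket",
    Key="reports/example.csv",
    Body=b"some,data\n",
    ExpectedBucketOwner="111122223333",
)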
Another way (where supported) is to use S3 Access Points: https://aws.amazon.com/s3/features/access-points/
The problem with this, however, is that it is not possible to write a policy that restricts actions to only buckets in my account. The risk is that some user in my account may leak data out by pushing it to another account's bucket.
I recently created an AWS free tier account and created an S3 bucket for an experimental project using Rails, deployed on Heroku for production. But I am getting an error telling me that something went wrong.
Through my Heroku logs, I received this description:
<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>AuthorizationHeaderMalformed</Code><Message>The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-east-2'</Message><Region>us-east-2</Region><RequestId>08B714808971C8B8</RequestId><HostId>lLQ+li2yctuI/sTI5KQ74icopSLsLVp8gqGFoP8KZG9wEnX6somkKj22cA8UBmOmDuDJhmljy/o=</HostId></Error>
I had set my S3 location to US East (Ohio) instead of US Standard (I think) while creating the bucket. Is it because of this?
How can I resolve this error? Is there any way to change the properties of my S3 bucket? If not, should I build a fresh bucket and set up a new policy allowing access to that bucket?
Please let me know if there is anything else you need from me regarding this question.
The preferred authentication mechanism for AWS services, known as Signature Version 4, derives a different signing key for each user, for each service, in each region, for each day. When a request is signed, it is signed with a signing key specific to that user, date, region, and service.
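To make that concrete, here is a minimal sketch of the Signature Version 4 signing-key derivation using only the Python standard library (the secret key, date, and region values are placeholders):

import hashlib
import hmac

def hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    # Each step folds one scope component into the key, so the final key is
    # only valid for that user's secret, that date, that region, and that service.
    k_date = hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = hmac_sha256(k_date, region)
    k_service = hmac_sha256(k_region, service)
    return hmac_sha256(k_service, "aws4_request")

# Placeholder values: a key derived for us-east-1 will never match a request
# that S3 expects to be signed for us-east-2.
print(signing_key("wJalrEXAMPLESECRETKEY", "20230101", "us-east-2", "s3").hex())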
the region 'us-east-1' is wrong; expecting 'us-east-2'
This error means that a request was sent to us-east-2 using the credentials for us-east-1.
The 'region' that is wrong, here, refers to the region of the credentials.
You should be able to specify the correct region in your code, and resolve the issue. For legacy reasons, S3 is a little different than most AWS services, because if you specify the wrong region in your code (or the default region isn't the same as the region of the bucket) then your request is still automatically routed to the correct region... but the credentials don't match. (Most other services will not route to the correct region automatically, so the request will typically fail in a different way if the region your code is using is incorrect.)
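For example, a minimal sketch of pinning the client to the bucket's region, shown in Python with boto3 for illustration (bucket name and key are placeholders; in a Rails app you would set the equivalent region option on the AWS SDK for Ruby client or in your Heroku config):

import boto3

# The bucket in this question lives in us-east-2 (Ohio), so the client must
# sign its requests for that region.
s3 = boto3.client("s3", region_name="us-east-2")

s3.put_object(
    Bucket="my-rails-assets-bucket",  # placeholder bucket name
    Key="uploads/example.txt",
    Body=b"hello",
)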
Otherwise, you'll need to create a new bucket in us-east-1, because buckets cannot be moved between regions.
You can keep the same bucket name for the new bucket if you delete the old bucket, first, but there is typically a delay of a few minutes between the time you delete a bucket and the time that the service allows you to reuse the same name to create a new bucket, because the bucket directory is a global resource and it takes some time for directory changes (the bucket deletion) to propagate to all regions. Before you can delete a bucket, it needs to be empty.
Yup, you nailed the solution to your problem. Just create a bucket in the correct region and use that. If you want it to be called the same thing as your original bucket, you'll need to delete it in us-east-2 and then create it in us-east-1, as bucket names are globally unique.