AWS S3 Bucket name already exists

I know this question has been asked before, but replies didn't really help my case.
I am trying to create a bucket in S3 and I always receive the 'Bucket name already exists' error. I have tried every possible combination with no luck, and the format complies with the docs.
Any idea of what I am doing wrong?
Thanks

So, I finally solved my issue.
As I stated in the description of my question, my problem was not the formatting or the validity of the bucket name; the name was also unique and not in use. Weirdly enough, I could not create the bucket when signed in using Chrome, but I succeeded using Edge.
One note for the happy down-voters: maybe you can share some of your wisdom explaining the reason for down-voting, so we can all learn from it.

According to the AWS docs:
An Amazon S3 bucket name is globally unique, and the namespace is shared by all AWS accounts. This means that after a bucket is created, the name of that bucket cannot be used by another AWS account in any AWS Region until the bucket is deleted.
Someone else has already created a bucket with this name.

S3 buckets require a globally unique name. The reason it says the name already exists is that another AWS account has already used "t1-bucket" to name their S3 bucket.
One thing you can do is come up with your own naming convention. If you have a company, maybe use that to name your resources, for example "t1-bucket-myCompany".
If you aren't using AWS for a company, try your name or initials.
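If you want to check whether a name is already taken before trying to create anything, one quick sketch from the CLI (reusing the name from the question) is:
aws s3api head-bucket --bucket t1-bucket
A 404 (Not Found) means the name is free; a 403 (Forbidden) means the bucket exists but belongs to another account.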

I had this exact same problem. I am the sole user/owner of my AWS account, and I kept getting this message. There is exactly one bucket on my account, created by an example project I've already forgotten about. I was attempting to add a bucket named "t1-bucket" and was told over and over that it already exists. I knew this not to be true, so I tried the same thing using Edge instead of Chrome and got the exact same error. I changed the name to "t1-bucket-force" and it worked...
I don't want to imply that adding "-force" actually forced anything.
I think there's something going on with the name enforcement rules that AWS is not telling us beyond "Bucket name must be between 3 and 63 characters long".

Related

AWS SageMaker GroundTruth permissions issue (can't read manifest)

I'm trying to run a simple GroundTruth labeling job with a public workforce. I upload my images to S3, start creating the labeling job, generate the manifest using their tool automatically, and explicitly specify a role that most certainly has permissions on both S3 bucket (input and output) as well as full access to SageMaker. Then I create the job (standard rest of stuff -- I just wanted to be clear that I'm doing all of that).
At first, everything looks fine. All green lights, it says it's in progress, and the images are properly showing up in the bottom where the dataset is. However, after a few minutes, the status changes to Failure and I get this: ClientError: Access Denied. Cannot access manifest file: arn:aws:sagemaker:us-east-1:<account number>:labeling-job/<job name> using roleArn: null in the reason for failure.
I also get the error underneath (where there used to be images but now there are none):
The specified key <job name>/manifests/output/output.manifest isn't present in the S3 bucket <output bucket>.
I'm very confused for a couple of reasons. First of all, this is a super simple job. I'm just trying to do the most basic bounding box example I can think of. So this should be a very well-tested path. Second, I'm explicitly specifying a role arn, so I have no idea why it's saying it's null in the error message. Is this an Amazon glitch or could I be doing something wrong?
The role must include SageMakerFullAccess and access to the S3 bucket, so it looks like you've got that covered :)
Please check that:
the user creating the labeling job has Cognito permissions: https://docs.aws.amazon.com/sagemaker/latest/dg/sms-getting-started-step1.html
the manifest exists and is at the right S3 location.
the bucket is in the same region as SageMaker.
the bucket doesn't have any bucket policy restricting access.
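For the manifest and region items, a quick CLI check is possible (bucket and key names below are placeholders):
aws s3 ls s3://your-bucket/path/to/your.manifest
aws s3api get-bucket-location --bucket your-bucket
The first command should list the manifest object; the second should print the region you are running SageMaker in.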
If that still doesn't fix it, I'd recommend opening a support ticket with the labeling job id, etc.
Julien (AWS)
There's a bug whereby sometimes the console will say something like 401 ValidationException: The specified key s3prefix/smgt-out/yourjobname/manifests/output/output.manifest isn't present in the S3 bucket yourbucket. Request ID: a08f656a-ee9a-4c9b-b412-eb609d8ce194 but that's not the actual problem. For some reason the console is displaying the wrong error message. If you use the API (or AWS CLI) to DescribeLabelingJob like
aws sagemaker describe-labeling-job --labeling-job-name yourjobname
you will see the actual problem. In my case, one of the S3 files that define the UI instructions was missing.
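If you only want the failure reason, the FailureReason field of the response can be pulled out directly with the CLI's --query option (same placeholder job name as above):
aws sagemaker describe-labeling-job --labeling-job-name yourjobname --query FailureReason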
I had the same issue when I tried to write to a different bucket from the one that had been used successfully before.
Apparently, the IAM role can be granted permissions for a particular bucket only.
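If that's what is happening to you, the fix is to extend the role's policy to cover the new bucket as well. A rough sketch of such a statement (bucket names are placeholders, and your action list may differ):
{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
    "Resource": [
        "arn:aws:s3:::old-bucket",
        "arn:aws:s3:::old-bucket/*",
        "arn:aws:s3:::new-bucket",
        "arn:aws:s3:::new-bucket/*"
    ]
}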
I would suggest referring to the CloudWatch logs: look under CloudWatch >> CloudWatch Logs >> Log groups for the /aws/sagemaker/LabelingJobs group. I had ticked all the points from another answer, but my pre-processing Lambda function had the wrong ID for my region, and the error was obvious in the logs.
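If you prefer the CLI over the console, AWS CLI v2 can tail that same log group:
aws logs tail /aws/sagemaker/LabelingJobs --follow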

Domain bucket name taken - Google Cloud Platform

I have a project on Google Cloud and I am trying to create a bucket to store the web files for my website. The only problem is that I have a CNAME pointing from my website to 'c.storage.googleapis.com', so my bucket name has to be the same as my website name, which is 'plains.cc'. When I try to create the bucket, however, it says the name is already in use. I used this bucket name on a previous account but deleted it, so I don't understand why I can't reuse it.
Are you still unable to create it? As per the docs, if you deleted the bucket from your previous project, then I guess this is a timing issue. But if you deleted the previous project directly, without first deleting the bucket contained within it, it could take a month or more for the associated data to eventually be deleted. Read the documentation on this here.
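In the meantime, you can probe whether the name has been released yet, for example with gsutil (using the bucket name from the question):
gsutil ls -b gs://plains.cc
A 404 (bucket not found) means the name is free again; a 403 means it is still attached to some project.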

Connecting DMS to S3

We are trying to get DMS set up with an S3 Source however we are unable to connect the replication instance to the Source S3 endpoint.
When we run a connection test on the source endpoint, the error we receive is:
Error Details: [errType=ERROR_RESPONSE, status=1020414, errMessage= Failed to connect to database., errDetails=]
We have followed the documentation; however, we are still unable to get the connection to work. The bucket is within the VPC that the replication instance has access to, and the IAM role has the GetObject, ListBucket and dms* permissions. I'm 95% sure that the JSON mapping file is set up correctly, with schema and table names pointing to the right place.
Due to the lack of error messages or detailed reasons why we can't connect to the source database (the S3 bucket/CSV file), debugging this feels a tad hit and miss. We are using the Amazon Console and not the CLI, if that makes much of a difference.
I had this same error.
Check this troubleshooting guide. It covers the basic configuration problems you might run into.
My answer wasn't there, though, and I couldn't find it anywhere, not even by asking in the official forums.
In my case, for some reason I thought I should use the full bucket ARN in the "Bucket Name" field, like "arn:aws:s3:::my-bucket", probably because I had to use an ARN for the role in the previous field.
The error message when you try to connect won't be clear either; it only says it couldn't connect to the bucket. Anyway, you don't need to provide an ARN, just the bucket's name, as in "my-bucket".
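For reference, here is a sketch of creating the source endpoint from the CLI, with the bucket given as a plain name and only the role as an ARN (all identifiers are placeholders):
aws dms create-endpoint \
    --endpoint-identifier s3-source \
    --endpoint-type source \
    --engine-name s3 \
    --s3-settings ServiceAccessRoleArn=arn:aws:iam::123456789012:role/my-dms-role,BucketName=my-bucket,BucketFolder=data
An S3 source endpoint also needs ExternalTableDefinition with your table-mapping JSON; it is omitted here for brevity.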

BitMovin - Unable to connect Amazon S3 Output

I am setting up an Amazon S3 output on BitMovin and it is telling me my values are incorrect. I don't know which ones, because they have all been copied and pasted over. It may be another issue with my bucket.
I have set up a bucket in Oregon (us-west-2), and copied and pasted in the name, access key, and access secret. My policies match what they have in this document too:
Tutorial: Policies for BitMovin
Your copy & paste went wrong, but just a bit :)
In your second statement, you have to remove the "/*" part from the string "arn:aws:s3:::test-bitmovin/*" within the "Resource" array.
The allowed actions of the second statement apply to the bucket itself, but not to the objects within it. Therefore the stated resource should refer to the bucket.
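So, roughly, the corrected pair of statements would look like this (a sketch; the exact action lists come from the tutorial above):
{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:PutObject"],
    "Resource": ["arn:aws:s3:::test-bitmovin/*"]
},
{
    "Effect": "Allow",
    "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
    "Resource": ["arn:aws:s3:::test-bitmovin"]
}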
Then it should work as expected!

AWS S3 folder deletion history

Is there a way to get the deletion history of an AWS S3 bucket?
Problem statement:
Some S3 folders got deleted. Is there a way to figure out when they were deleted?
There are at least two ways to accomplish what you want to do, but both are disabled by default.
The first one is to enable server access logging on your bucket(s), and the second one is to use AWS CloudTrail.
You might be out of luck if this already happened and you had no auditing set up, though.
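Going forward, you can at least enable server access logging now so the next incident is traceable. A minimal sketch with the CLI (bucket names are placeholders, and the target bucket must allow S3 log delivery to write to it):
aws s3api put-bucket-logging --bucket my-bucket --bucket-logging-status '{"LoggingEnabled":{"TargetBucket":"my-log-bucket","TargetPrefix":"s3-access/"}}'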