I recently created an AWS free tier account and created an S3 bucket for an experimental project using Rails, deployed on Heroku for production. But I am getting an error telling me that something went wrong.
In my Heroku logs, I found this description:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>AuthorizationHeaderMalformed</Code>
  <Message>The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-east-2'</Message>
  <Region>us-east-2</Region>
  <RequestId>08B714808971C8B8</RequestId>
  <HostId>lLQ+li2yctuI/sTI5KQ74icopSLsLVp8gqGFoP8KZG9wEnX6somkKj22cA8UBmOmDuDJhmljy/o=</HostId>
</Error>
I had set my S3 bucket's location to US East (Ohio) instead of US Standard (I think) while creating the bucket. Is it because of this?
How can I resolve this error? Is there any way to change the properties of my S3 bucket? If not, should I build a fresh bucket and set up a new policy allowing access to it?
Please let me know if there is anything else you need from me regarding this question.
The preferred authentication mechanism for AWS services, known as Signature Version 4, derives a different signing key for each user, for each service, in each region, for each day. When a request is signed, it is signed with a signing key specific to that user, date, region, and service.
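As an illustration, the Signature Version 4 signing key is an HMAC chain over the date, region, and service name, which is why a key derived for us-east-1 cannot validate a request that S3 attributes to us-east-2. A minimal Python sketch of the documented derivation:

```python
import hashlib
import hmac

def derive_sigv4_signing_key(secret_key, date_stamp, region, service):
    """Derive the SigV4 signing key (date_stamp is 'YYYYMMDD', e.g. '20240115')."""
    k_date = hmac.new(("AWS4" + secret_key).encode(), date_stamp.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    # The final step binds the key to SigV4 requests specifically.
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()

# A key derived for us-east-1 differs from one derived for us-east-2,
# so the signatures it produces cannot match.
key_east_1 = derive_sigv4_signing_key("EXAMPLE_SECRET", "20240115", "us-east-1", "s3")
key_east_2 = derive_sigv4_signing_key("EXAMPLE_SECRET", "20240115", "us-east-2", "s3")
assert key_east_1 != key_east_2
```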
the region 'us-east-1' is wrong; expecting 'us-east-2'
This error means that a request was sent to us-east-2 using credentials scoped to us-east-1.
The 'region' that is wrong here refers to the region of the credentials (the signing key), not the region of the bucket.
You should be able to specify the correct region in your code and resolve the issue. For legacy reasons, S3 is a little different from most AWS services: if you specify the wrong region in your code (or the default region isn't the same as the region of the bucket), your request is still automatically routed to the correct region... but the credentials don't match. (Most other services will not route to the correct region automatically, so if the region your code uses is incorrect, the request will typically fail in a different way.)
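The exact setting depends on your client library; as an illustration (in Python/boto3 rather than the asker's Rails stack, with a placeholder bucket name), the idea is to pin the client to the region the bucket actually lives in:

```python
import boto3

# Construct the S3 client with the bucket's real region so the SigV4
# signature is scoped to us-east-2. The bucket name is a placeholder.
s3 = boto3.client("s3", region_name="us-east-2")
response = s3.list_objects_v2(Bucket="my-example-bucket")
print(response.get("KeyCount"))
```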
Otherwise, you'll need to create a new bucket in us-east-1, because buckets cannot be moved between regions.
You can keep the same bucket name for the new bucket if you delete the old bucket first, but there is typically a delay of a few minutes between the time you delete a bucket and the time the service allows you to reuse the same name for a new bucket, because the bucket directory is a global resource and it takes some time for directory changes (the bucket deletion) to propagate to all regions. Also note that a bucket must be empty before you can delete it.
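If you go the delete-and-recreate route, a rough boto3 sketch (the bucket name is a placeholder, and you should expect the propagation delay mentioned above between the delete and the create):

```python
import boto3

# Empty and delete the old bucket in us-east-2. A bucket must be empty
# before it can be deleted.
s3 = boto3.resource("s3")
old_bucket = s3.Bucket("my-example-bucket")
old_bucket.objects.all().delete()
old_bucket.delete()

# Recreate the same name in us-east-1. us-east-1 is the default region,
# so no LocationConstraint is passed here.
boto3.client("s3", region_name="us-east-1").create_bucket(Bucket="my-example-bucket")
```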
Yup, you nailed the solution to your problem. Just create a bucket in the correct region and use that. If you want it to have the same name as your original bucket, you'll need to delete the old one in us-east-2 and then create it in us-east-1, since bucket names are globally unique.
Related
The problem:
I have an old S3 bucket: bucket A and a new S3 bucket: bucket B. These buckets are in separate accounts. Up until now, I have been serving assets from bucket A. Moving forward, I want to serve assets from bucket B. I must still support pushing to bucket A. However, those assets pushed to bucket A must be retrievable from bucket B.
Possible solutions:
On every new push to bucket A (PutObject), I must sync that object from bucket A to bucket B. As I understand it, there are two ways to achieve this:
Using AWS Lambda with Amazon S3 (a rough sketch of this follows the list)
Using DataSync <-- preferred solution
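For option 1, here is roughly what I have in mind; this is an untested sketch, and the destination bucket name is a placeholder:

```python
import urllib.parse

import boto3

s3 = boto3.client("s3")
DESTINATION_BUCKET = "bucket-b"  # placeholder for the bucket in the other account

def handler(event, context):
    """Copy each newly put object from bucket A into bucket B.

    Assumes the Lambda execution role has s3:GetObject on bucket A and
    s3:PutObject on bucket B, and that bucket B's bucket policy (in the
    other account) allows this role to write.
    """
    for record in event["Records"]:
        source_bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        s3.copy_object(
            CopySource={"Bucket": source_bucket, "Key": key},
            Bucket=DESTINATION_BUCKET,
            Key=key,
        )
```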
Issue with solution 2:
I have a feeling the path using DataSync will be less complex. However, it's not clear to me how to accomplish this, or whether it is even possible. The examples I see in the documentation (granted, there is a lot to sift through) are not quite the same as this use case. In the console, it does not seem possible to create a task across multiple AWS accounts.
The disconnect I'm seeing is that the documentation implies it is possible, but when you navigate to DataSync Locations in the AWS Console, there is only the option to add locations from your own AWS account's S3 bucket list.
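From the CreateLocationS3 API documentation, I would expect a cross-account location to be created with something like the call below (untested; the bucket ARN and role ARN are placeholders), but the console does not seem to expose anything equivalent:

```python
import boto3

# Untested sketch: create a DataSync S3 location by ARN rather than by
# picking a bucket from this account's list. Both ARNs are placeholders.
datasync = boto3.client("datasync")
location = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::bucket-b",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::222222222222:role/datasync-s3-access"},
)
print(location["LocationArn"])
```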
I followed a really simple guide to create an S3 bucket and put CloudFront in front of it.
See here [1]. If I create the S3 bucket in us-east-1, everything works as expected: after I upload a file, I can see it via e.g. the xyz.cloudfront.net/myExampleFile.txt link.
But when I create the S3 bucket in e.g. eu-west-1 or eu-central-1, then as soon as I open the xyz.cloudfront.net/myExampleFile.txt link, my browser gets redirected to the direct S3 bucket link xyz.s3.amazonaws.com/myExampleFile.txt, which of course does not work.
--
I have no clue what I could possibly be doing wrong. And since I am not able to submit a support request to AWS directly ("Technical support is unavailable under Basic Support Plan"), I thought I would ask the community here whether anybody else has experienced the same strange behavior or has any hints about what is going wrong.
Thank you in advance for any help
Phenix
[1] Steps 1, 2, and 4 under "Using a REST API endpoint as the origin, with access restricted by an OAI" at https://aws.amazon.com/de/premiumsupport/knowledge-center/cloudfront-serve-static-website/
You are probably encountering the issue described here.
If you're using an Amazon CloudFront distribution with an Amazon S3 origin, CloudFront forwards requests to the default S3 endpoint (s3.amazonaws.com), which is in the us-east-1 Region. If you must access Amazon S3 within the first 24 hours of creating the bucket, you can change the Origin Domain Name of the distribution to include the regional endpoint of the bucket. For example, if the bucket is in us-west-2, you can change the Origin Domain Name from bucketname.s3.amazonaws.com to bucketname.s3-us-west-2.amazonaws.com.
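If you would rather script that change than edit it in the console, a hedged boto3 sketch (the distribution ID and regional endpoint are placeholders) could look like this:

```python
import boto3

cloudfront = boto3.client("cloudfront")

DISTRIBUTION_ID = "E1EXAMPLE"  # placeholder distribution ID
REGIONAL_ORIGIN = "bucketname.s3.eu-central-1.amazonaws.com"  # bucket's regional endpoint

# Fetch the current distribution config together with its ETag, point the
# first origin at the bucket's regional endpoint, and push the change back
# using the ETag as IfMatch.
resp = cloudfront.get_distribution_config(Id=DISTRIBUTION_ID)
config = resp["DistributionConfig"]
config["Origins"]["Items"][0]["DomainName"] = REGIONAL_ORIGIN

cloudfront.update_distribution(
    Id=DISTRIBUTION_ID,
    DistributionConfig=config,
    IfMatch=resp["ETag"],
)
```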
When I list the buckets using the AWS CLI, I see a bunch of buckets. The same buckets are visible from the AWS S3 Management Console also.
But when I try to remove a bucket from the CLI, it throws an error as shown below. There seems to be some inconsistency in the S3 state. I am not able to delete them from the AWS S3 Management Console either.
Why is this happening, and how can I get around it?
Due to the consistency model of S3, sometimes you have to wait a few hours to delete a just-created bucket.
In which region are you trying to do this?
This behavior relates to the fact that deleting an S3 bucket can cause static hosting issues.
Let's say that you have a static S3 website (whose bucket name has to be the same as the domain name), say www.example.com. If you delete the www.example.com S3 bucket and someone else in another account then happens to create a bucket with that same name, you have lost the bucket name and, consequently, the ability to host an S3 static website on your own domain name www.example.com.
So, AWS gives you a grace period after deleting an S3 bucket. During this grace period, only your account can create an S3 bucket with the same name (and it has to be in the same AWS region). The grace period is typically of the order of a few hours.
If you intend to re-use an S3 bucket, the best advice is not to delete it, but to simply delete its contents.
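A minimal boto3 sketch of emptying a bucket without deleting it (the bucket name is a placeholder; versioned buckets would also need their object versions removed):

```python
import boto3

# Delete every object so the bucket itself, and its globally unique name,
# stay in your account.
s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-example-bucket"):
    objects = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
    if objects:
        s3.delete_objects(Bucket="my-example-bucket", Delete={"Objects": objects})
```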
I tried, and succeeded, to upload a file using the AWS Amplify quick start doc, and I used this example to set up my GraphQL schema, resolvers, and data sources correctly: https://github.com/aws-samples/aws-amplify-graphql.
I was stuck for a long time because of an "Access Denied" error response when my image was being uploaded to the S3 bucket. I finally went to my S3 console, selected the right bucket, went to the Authorization tab, clicked on "Everyone", and selected "Write Object". With that done, everything works fine.
But I don't really understand why it works, and Amazon now shows me a big, scary alert in my S3 console saying "We don't recommend at all making an S3 bucket public".
I used an Amazon Cognito user pool with AppSync, and, if I understood correctly, it is inside my resolvers that the image is uploaded to my S3 bucket.
So what is the right configuration to make the upload of an image work?
I already tried putting my users in a group with access to the S3 bucket, but it did not work (I guess because the users don't really interact directly with my S3 bucket; my resolvers do).
I would like my users to be able to upload an image and then display it in the app for everybody to see (very classic), so I'm just looking for the right way to do that, since the big alert in my S3 console seems to tell me that making a bucket public is dangerous.
Thanks!
I'm guessing you're using an IAM role to upload files to S3. You can set the bucket policy to allow that role certain permissions, whether that is read-only, write-only, etc.
Take a look here: https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
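For example, a hedged sketch of attaching such a policy with boto3 (the role ARN and bucket name are placeholders):

```python
import json

import boto3

s3 = boto3.client("s3")

# Grant a specific IAM role permission to put and get objects, rather than
# opening the bucket to "Everyone". ARNs and names are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUploadRole",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/upload-role"},
            "Action": ["s3:PutObject", "s3:GetObject"],
            "Resource": "arn:aws:s3:::my-example-bucket/*",
        }
    ],
}

s3.put_bucket_policy(Bucket="my-example-bucket", Policy=json.dumps(policy))
```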
OK, I found where it was going wrong. I was uploading my image to the S3 bucket address given by aws-exports.js.
BUT, when you go to IAM and check the policy attached to the authorized-user role of your Cognito pool, you can see the different statements, and the one that allows putting objects into your S3 bucket uses the folders "public", "protected" and "private".
So you either have to change those paths in the policy or add one of those folders to the end of the bucket path you use in your front-end app.
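To illustrate (in boto3 rather than the Amplify JS client, with placeholder names), the object key just needs to start with one of those allowed prefixes:

```python
import boto3

# Upload under the "public/" prefix so the key matches what the role
# policy allows. Bucket name and file are placeholders.
s3 = boto3.client("s3")
s3.upload_file("avatar.png", "my-amplify-bucket", "public/avatar.png")
```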
Hope it will help someone!
Unlike other AWS services' resources, S3 ARNs do not contain the AWS account number.
A few sample ARNs:
-- Elastic Beanstalk application version --
arn:aws:elasticbeanstalk:us-east-1:123456789012:environment/My App/MyEnvironment
-- IAM user name --
arn:aws:iam::123456789012:user/David
-- Amazon RDS instance used for tagging --
arn:aws:rds:eu-west-1:123456789012:db:mysql-db
On the other hand s3 bucket ARN looks like:
arn:aws:s3:::my_corporate_bucket/exampleobject.png
S3 Bucket ARNs do not require an account number or region since bucket names are unique across all accounts/regions.
The question is "Why does the S3 bucket ARN not contain the AWS account number?" and the answer is that S3 was the first AWS service to be launched, and many things have changed since then. S3 simply hasn't gotten around to including the account in its bucket ARNs. We don't know why that is; it could be that it's technically challenging, or that it's just not being prioritized by the service team.
One way to validate that the bucket you are uploading objects to actually belongs to you, and thereby avoid accidentally leaking data into other people's buckets, is to use the recently released bucket owner condition:
https://aws.amazon.com/about-aws/whats-new/2020/09/amazon-s3-bucket-owner-condition-helps-validate-correct-bucket-ownership
https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-owner-condition.html
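For example, boto3's S3 calls accept an ExpectedBucketOwner parameter; a small sketch (the bucket name and account ID are placeholders):

```python
import boto3

# S3 rejects the request with an access error if the bucket is not owned
# by the expected account, so a mistyped or hijacked bucket name cannot
# silently receive your data.
s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-example-bucket",
    Key="report.csv",
    Body=b"some data",
    ExpectedBucketOwner="123456789012",
)
```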
Another way (where supported) is to use S3 Access Points: https://aws.amazon.com/s3/features/access-points/
The problem with this, however, is that it is not possible to write a policy that restricts actions to only buckets in my account. The risk is that some user in my account may leak data out by pushing it to another account's bucket.