AWS CloudFront behaving differently (wrongly?) with S3 in EU vs. in US

I followed a really simple guide to create an S3 bucket and put CloudFront in front of it.
See here [1]. If I create the S3 bucket in us-east-1, everything works as expected: after I upload a file, I can see it via the xyz.cloudfront.net/myExampleFile.txt link.
But when I create the S3 bucket in, for example, eu-west-1 or eu-central-1, then as soon as I open the xyz.cloudfront.net/myExampleFile.txt link, my browser gets redirected to the direct S3 bucket link xyz.s3.amazonaws.com/myExampleFile.txt, which of course does not work.
--
I have no clue what I could possibly be doing wrong. And since I am not able to submit a support request to AWS directly ("Technical support is unavailable under Basic Support Plan"), I thought I would ask the community here whether anybody else has experienced the same strange behavior, or has any hints about what is going wrong.
Thank you in advance for any help
Phenix
[1] Steps 1, 2, and 4 under "Using a REST API endpoint as the origin, with access restricted by an OAI" at https://aws.amazon.com/de/premiumsupport/knowledge-center/cloudfront-serve-static-website/

You are probably encountering the issue described here.
If you're using an Amazon CloudFront distribution with an Amazon S3 origin, CloudFront forwards requests to the default S3 endpoint (s3.amazonaws.com), which is in the us-east-1 Region. If you must access Amazon S3 within the first 24 hours of creating the bucket, you can change the Origin Domain Name of the distribution to include the regional endpoint of the bucket. For example, if the bucket is in us-west-2, you can change the Origin Domain Name from bucketname.s3.amazonaws.com to bucketname.s3-us-west-2.amazonaws.com.
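If you prefer to script that change rather than click through the console, here is a minimal boto3 sketch; the distribution ID and regional origin domain below are placeholders you would substitute:

import boto3

# Placeholders: your distribution ID and your bucket's regional endpoint.
DISTRIBUTION_ID = "E1EXAMPLE"
REGIONAL_ORIGIN = "bucketname.s3-eu-west-1.amazonaws.com"

cf = boto3.client("cloudfront")

# Fetch the current config together with its ETag; the ETag must be
# passed back as IfMatch when updating the distribution.
resp = cf.get_distribution_config(Id=DISTRIBUTION_ID)
config = resp["DistributionConfig"]
etag = resp["ETag"]

# Point the (single) S3 origin at the regional endpoint instead of the
# default global endpoint s3.amazonaws.com.
config["Origins"]["Items"][0]["DomainName"] = REGIONAL_ORIGIN

cf.update_distribution(Id=DISTRIBUTION_ID, DistributionConfig=config, IfMatch=etag)

As the quoted documentation implies, simply waiting out the DNS propagation period (up to roughly 24 hours after bucket creation) also makes the redirect go away.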

Related

Connecting GovCloud S3 resource with CloudFront

Can we connect a non-public S3 bucket sitting in AWS GovCloud to a CloudFront distribution in a non-GovCloud AWS account? There are not many docs or steps given anywhere.
We did try connecting it using the canonical account ID and a CloudFront origin in the S3 bucket policy, but nothing has worked so far.
Is this not possible, or is there a way to achieve this?
Edit:
I ask this because there is a section of the AWS docs that talks about tips for having GovCloud S3 content on CloudFront, but it has no details on how to do it.
Link: https://docs.aws.amazon.com/govcloud-us/latest/UserGuide/setting-up-cloudfront-tips.html
It is hard to see how that would be possible.
AWS GovCloud regions are physically isolated, including logical network isolation from all other AWS regions, except for very specific service endpoints.
Here is another solution, provided in the official AWS documentation:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/
For Origin Custom Headers, under Header Name, enter Referer. Under Value, enter a custom header that you want to forward to the origin (S3 bucket). To restrict access to the origin, you can enter a random or secret value that only you know.
Basically, CloudFront will send a custom header containing a secret value, and your S3 bucket on GovCloud will have public read access with a custom policy that only allows requests carrying this secret value in the header.
Don't forget to force HTTPS between CloudFront and your GovCloud S3 bucket by setting OriginProtocolPolicy to https-only.
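As a sketch of that bucket policy, applied with boto3 (the bucket name and secret value are placeholders; the secret must match the value you entered as the Referer header in CloudFront):

import json
import boto3

BUCKET = "my-govcloud-bucket"        # placeholder bucket name
SECRET = "some-long-random-string"   # same value as the CloudFront custom header

# Allow public GetObject, but only when the request carries the secret Referer value.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowGetWithSecretReferer",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        "Condition": {"StringEquals": {"aws:Referer": SECRET}},
    }],
}

s3 = boto3.client("s3", region_name="us-gov-west-1")
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))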

How to hide AWS S3 Bucket URL with custom CNAME

I want to connect a CDN to an AWS S3 bucket, but the AWS documentation indicates that the bucket name must be the same as the CNAME. Therefore, it is very easy for others to guess the real S3 bucket URL.
For example,
- My domain: example.com
- My S3 Bucket name: image.example.com
- My CDN CNAME (image.example.com) will point to image.example.com.s3.amazonaws.com
After that, people can access the CDN URL -> http://image.example.com to obtain the resources from my S3 bucket. However, under this restriction, people can easily guess my real S3 bucket URL from the CNAME (CNAME + s3.amazonaws.com).
So my question is: how can I hide my real S3 bucket URL? I don't want to expose my real S3 URL to anyone, in order to prevent any attacks.
I am not sure I understand what you are asking for or what you are trying to do (hiding your bucket does not really help anything), but I will attempt to answer your question regarding "hiding" your bucket name. Before I answer, I would like to ask these two questions:
Why do you want to hide your S3 bucket url?
What kind of attacks are you trying to prevent?
You are correct that the S3 bucket name used to have to match your URL. This is no longer a requirement, because you can mask the S3 bucket using CloudFront, which, as you know, is the CDN from AWS. The bucket name can therefore be anything (for example, a random string).
You can restrict access to the bucket so that only CloudFront can access it. Data in the bucket is then replicated to edge locations and served from there. Even if someone knows the S3 URL, it will not do them any good, because access to the S3 bucket is restricted: a policy grants CloudFront access and no one else.
Access restriction is done via an origin access identity, and while you can manually configure this using a bucket policy, you can also set a flag in CloudFront to do this on your behalf. More information is available here.
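As a rough sketch of the kind of policy CloudFront sets up for you when you let it update the bucket (the OAI ID and bucket name are placeholders):

import json
import boto3

BUCKET = "randomstring-bucket"  # placeholder bucket name
OAI_ID = "E2EXAMPLEOAI"         # placeholder origin access identity ID

# Grant read access to the CloudFront origin access identity only;
# with no other Allow statements, direct S3 requests are denied by default.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {OAI_ID}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))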
Use the CloudFront name in Route53. Do not use a CNAME; rather, use an A record set up as an alias. For more information, see this document.
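A minimal boto3 sketch of that alias record (your hosted zone ID, record name, and distribution domain are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID used for all CloudFront alias targets):

import boto3

r53 = boto3.client("route53")

r53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # placeholder: your domain's hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "cdn.example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",    # constant for CloudFront
                    "DNSName": "d12345.cloudfront.net",  # placeholder distribution domain
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)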
If you are using a different DNS provider, AWS aliases will naturally not be available. I suggest moving the zone file from your other provider to AWS. If you cannot do this, then you can still use a CNAME. Again see here for more information.
I suggest using your own domain name for CloudFront and setting up HTTPS. AWS offers certificates at no additional cost for services within AWS. You can request a certificate for your domain name, validated either by a DNS entry or by email. To set this up, please see this document.
If you want to restrict access to specific files within AWS, you can use signed URLs. More information about that is provided here.
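For illustration, a CloudFront signed URL can be generated with botocore's CloudFrontSigner. This sketch assumes you have already created a CloudFront key pair; the key ID, key file, and URL are placeholders:

import datetime
import rsa  # third-party package, used here for RSA signing
from botocore.signers import CloudFrontSigner

KEY_ID = "KEXAMPLE123"  # placeholder: the ID of your CloudFront public key

def rsa_signer(message):
    # Sign with the private key matching KEY_ID; CloudFront expects SHA-1 RSA.
    with open("private_key.pem", "rb") as f:
        key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, key, "SHA-1")

signer = CloudFrontSigner(KEY_ID, rsa_signer)

# Generate a URL that is valid for one hour.
url = signer.generate_presigned_url(
    "https://d12345.cloudfront.net/private/file.txt",  # placeholder URL
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(url)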

Unable to upload files to my S3 bucket

I recently created an AWS free tier account and created an S3 bucket for an experimental project using Rails, deployed on Heroku for production. But I am getting an error telling me that something went wrong.
Through my Heroku logs, I received this description:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>AuthorizationHeaderMalformed</Code>
  <Message>The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-east-2'</Message>
  <Region>us-east-2</Region>
  <RequestId>08B714808971C8B8</RequestId>
  <HostId>lLQ+li2yctuI/sTI5KQ74icopSLsLVp8gqGFoP8KZG9wEnX6somkKj22cA8UBmOmDuDJhmljy/o=</HostId>
</Error>
While creating the bucket, I had set my S3 location to US East (Ohio) instead of US Standard (I think). Is it because of this?
How can I resolve this error? Is there any way to change the properties of my S3 bucket? If not, should I build a fresh bucket and set up a new policy allowing access to that bucket?
Please let me know if there is anything else you need from me regarding this question.
The preferred authentication mechanism for AWS services, known as Signature Version 4, derives a different signing key for each user, for each service, in each region, for each day. When a request is signed, it is signed with a signing key specific to that user, date, region, and service.
the region 'us-east-1' is wrong; expecting 'us-east-2'
This error means that a request was sent to us-east-2 using the credentials for us-east-1.
The 'region' that is wrong, here, refers to the region of the credentials.
You should be able to specify the correct region in your code, and resolve the issue. For legacy reasons, S3 is a little different than most AWS services, because if you specify the wrong region in your code (or the default region isn't the same as the region of the bucket) then your request is still automatically routed to the correct region... but the credentials don't match. (Most other services will not route to the correct region automatically, so the request will typically fail in a different way if the region your code is using is incorrect.)
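The asker is on Rails, but the idea is the same across SDKs; in boto3 terms, a minimal sketch (bucket and file names are placeholders):

import boto3

# The client's region must match the bucket's region so the SigV4
# signature is computed for the right regional scope.
s3 = boto3.client("s3", region_name="us-east-2")
s3.upload_file("local.txt", "my-bucket-in-ohio", "remote.txt")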
Otherwise, you'll need to create a new bucket in us-east-1, because buckets cannot be moved between regions.
You can keep the same bucket name for the new bucket if you delete the old bucket first, but there is typically a delay of a few minutes between the time you delete a bucket and the time the service allows you to reuse the name, because the bucket directory is a global resource and it takes some time for the deletion to propagate to all regions. Also, before you can delete a bucket, it needs to be empty.
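A sketch of that empty-delete-recreate sequence in boto3 (the bucket name is a placeholder, and the create call may need to be retried for a few minutes while the deletion propagates):

import boto3

BUCKET = "my-bucket"  # placeholder name

# A bucket must be empty before it can be deleted.
s3 = boto3.resource("s3")
bucket = s3.Bucket(BUCKET)
bucket.objects.all().delete()
bucket.delete()

# Recreate the bucket in us-east-1; the same name may be unavailable
# for a few minutes while the deletion propagates globally.
client = boto3.client("s3", region_name="us-east-1")
client.create_bucket(Bucket=BUCKET)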
Yup, you nailed the solution to your problem. Just create a bucket in the correct region and use that. If you want it to be called the same thing as your original bucket, you'll need to delete the old one in us-east-2 first and then create it in us-east-1, as bucket names are globally unique.

S3 bucket name with dots returns "must be addressed using the specified endpoint"

I've been using the node s3-cli library for a while to upload files into my S3 buckets. This worked, for example:
s3-cli sync --delete-removed dist s3://domain-admin-dev
But when I run this
s3-cli sync --delete-removed dist s3://sudomain.domain.com
it returns this error:
Error: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
What should I do?
Note: I found some posts on the web saying the S3 bucket might not belong to the right region; however, s3://sudomain.domain.com belongs to the same region as s3://domain-admin-dev, so it doesn't make sense for that to be the problem.
It turns out that this whole approach was wrong. I simply had to add a CloudFront distribution (i.e. a CDN) and then point its origin to the S3 bucket (the name of which I changed back from s3://sudomain.domain.com to s3://domain-admin-dev, thus making my CLI command work just fine). I then created a CNAME record at my GoDaddy DNS pointing the subdomain to the CloudFront distribution's domain.
Note: since CloudFront is a CDN, the cache must be invalidated every time the content is updated.
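For example, an invalidation can be issued with boto3 like this (the distribution ID is a placeholder):

import time
import boto3

cf = boto3.client("cloudfront")

# Invalidate every cached path; CallerReference just needs to be unique.
cf.create_invalidation(
    DistributionId="E1EXAMPLE",  # placeholder distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),
    },
)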

Can we set weighted policy on s3

Can we set a weighted routing policy on S3? If yes, what is the step-by-step process?
I tried that and have a problem: traffic is routed to one endpoint only.
I did some research on that and found it might be a problem with the CNAME configured in CloudFront.
Please also suggest correct values for that.
S3 objects are only stored in a single region, meaning that in order to access a particular object, you must go through that region's API endpoint.
For example, if you had "image.jpg" stored in a bucket "s3-images" that was created in the eu-west-1 region, then in order to download that file you must go through the appropriate S3 endpoint for the eu-west-1 region:
s3-eu-west-1.amazonaws.com
If you try to use another endpoint, you will get an error pointing out that you are using the wrong endpoint.
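To make that concrete, here is a boto3 sketch that addresses the bucket through its regional endpoint explicitly (using the example bucket and key from above):

import boto3

# Address the bucket through the eu-west-1 regional endpoint explicitly.
s3 = boto3.client(
    "s3",
    region_name="eu-west-1",
    endpoint_url="https://s3-eu-west-1.amazonaws.com",
)
s3.download_file("s3-images", "image.jpg", "image.jpg")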
If your question relates to using CloudFront in front of S3: you need to set your DNS CNAME to resolve to your CloudFront distribution's CNAME in order for your users to be routed through CloudFront, rather than hitting S3 directly:
[cdn.example.com] -CNAME-> [d12345.cloudfront.net] -> s3://some-bucket