Can we set a weighted routing policy on S3? If yes, what is the step-by-step process?
I tried that and ran into a problem: traffic is routed to only one endpoint.
I did some research and found that it might be a problem with the CNAME configured in CloudFront.
Please also suggest the correct values for that.
S3 objects are stored in a single region only, meaning that in order to access a particular object, you must go through that region's API endpoint.
For example, if you had "image.jpg" stored in a bucket "s3-images" that was created in the eu-west-1 region, then in order to download that file you must go through the appropriate S3 endpoint for the eu-west-1 region:
s3-eu-west-1.amazonaws.com
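As a quick sketch, the regional endpoint can be composed from the bucket and region names like this (the bucket/region values are just the examples from above; note that newer buckets use the dotted `s3.<region>` form, while `s3-<region>` is the legacy style):

```python
def s3_regional_endpoint(bucket: str, region: str) -> str:
    """Build the virtual-hosted-style URL for a bucket's regional endpoint.

    Uses the modern dotted form (s3.<region>); the dashed form
    (s3-<region>) shown above is the legacy equivalent.
    """
    return f"https://{bucket}.s3.{region}.amazonaws.com"

# The eu-west-1 bucket from the example:
print(s3_regional_endpoint("s3-images", "eu-west-1"))
# https://s3-images.s3.eu-west-1.amazonaws.com
```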
If you try to use another endpoint, you will get an error pointing out that you are using the wrong endpoint.
If your question relates to using CloudFront in front of S3, you need to set your DNS CNAME to resolve to your CloudFront distribution's domain name in order for your users to be routed through CloudFront, rather than hitting S3 directly:
[cdn.example.com] -CNAME-> [d12345.cloudfront.net] -> s3://some-bucket
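For reference, the CNAME step in that chain could be expressed as a Route 53 record-set change of roughly this shape (the domain and distribution hostname are the placeholders from the diagram above):

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "cdn.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "d12345.cloudfront.net" }
        ]
      }
    }
  ]
}
```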
Related
I followed a really really simple manual to create S3 bucket and put CloudFront in front of it.
See here [1]. If I create the S3 bucket in us-east-1 everything is working as expected: After I uploaded a file, I can see it via e.g. xyz.cloudfront.net/myExampleFile.txt link.
But when I create the S3 bucket in e.g. eu-west-1 or eu-central-1, then as soon as I open the xyz.cloudfront.net/myExampleFile.txt link, my browser gets redirected to the direct S3 bucket link xyz.s3.amazonaws.com/myExampleFile.txt which of course is not working.
--
I have no clue what I could possibly be doing wrong. And since I am not able to submit a support request to AWS directly ("Technical support is unavailable under Basic Support Plan"), I thought I might ask the community here, whether anybody else experiences the same strange behavior or has any hints as to what is going wrong.
Thank you in advance for any help
Phenix
[1] Step 1,2 and 4 under Using a REST API endpoint as the origin, with access restricted by an OAI on https://aws.amazon.com/de/premiumsupport/knowledge-center/cloudfront-serve-static-website/
You are probably encountering the issue described here.
If you're using an Amazon CloudFront distribution with an Amazon S3 origin, CloudFront forwards requests to the default S3 endpoint (s3.amazonaws.com), which is in the us-east-1 Region. If you must access Amazon S3 within the first 24 hours of creating the bucket, you can change the Origin Domain Name of the distribution to include the regional endpoint of the bucket. For example, if the bucket is in us-west-2, you can change the Origin Domain Name from bucketname.s3.amazonaws.com to bucketname.s3-us-west-2.amazonaws.com.
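The endpoint rewrite described in that quote is just a string substitution; here is a minimal sketch (bucket and region names are the examples from the quote):

```python
def regionalize_origin(origin_domain: str, region: str) -> str:
    """Rewrite a default S3 origin domain to its region-specific form,
    e.g. bucketname.s3.amazonaws.com -> bucketname.s3-us-west-2.amazonaws.com."""
    suffix = ".s3.amazonaws.com"
    if not origin_domain.endswith(suffix):
        raise ValueError("not a default S3 origin domain")
    bucket = origin_domain[: -len(suffix)]
    return f"{bucket}.s3-{region}.amazonaws.com"

print(regionalize_origin("bucketname.s3.amazonaws.com", "us-west-2"))
# bucketname.s3-us-west-2.amazonaws.com
```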
I want to connect a CDN to an AWS S3 bucket, but the AWS documentation indicates that the bucket name must be the same as the CNAME. Therefore, it is very easy for others to guess the real S3 bucket URL.
For example,
- My domain: example.com
- My S3 Bucket name: image.example.com
- My CDN CNAME(image.example.com) will point to image.example.com.s3.amazonaws.com
After that, people can access the CDN URL -> http://image.example.com to obtain the resources from my S3 bucket. However, under this restriction, people can easily guess my real S3 bucket URL from the CNAME (CNAME + s3.amazonaws.com).
So, my question is: how can I hide my real S3 bucket URL? I don't want to expose it to anyone, in order to prevent any attacks.
I am not sure I understand what you are asking for or what you are trying to do [hiding your bucket does not really help anything]; however, I will attempt to answer your question regarding "hiding" your bucket name. Before I answer, I would like to ask these two questions:
Why do you want to hide your S3 bucket url?
What kind of attacks are you trying to prevent?
You are correct that the S3 bucket name used to have to match your URL. This is no longer a requirement, as you can mask the S3 bucket using CloudFront. CloudFront, as you know, is AWS's CDN. Thus the bucket name can be anything (a random string).
You can restrict access to the bucket such that only CloudFront can access it. Data in the bucket is then replicated to edge locations and served from there. Even if someone knows the S3 URL, it will not do them any good, because access to the S3 bucket is restricted: a bucket policy grants CloudFront access and no one else.
Access restriction is done via an origin access identity, and while you can configure the bucket policy manually yourself, you can also set a flag in CloudFront to do this on your behalf. More information is available here.
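The bucket policy that grants the origin access identity read access looks roughly like this (the account-specific OAI ID and bucket name here are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontOAIReadOnly",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::some-bucket/*"
    }
  ]
}
```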
Use the CloudFront name in Route 53. Do not use a CNAME record; rather, use an A record set up as an alias. For more information see this document.
If you are using a different DNS provider, AWS aliases will naturally not be available. I suggest moving the zone file from your other provider to AWS. If you cannot do this, then you can still use a CNAME. Again see here for more information.
I suggest using your own domain name for CloudFront and setting up HTTPS. AWS offers certificates at no additional cost for use with services within AWS. You can request a certificate for your domain name, validated either by a DNS entry or by email. To set this up, please see this document.
If you want to restrict access to specific files within AWS, you can use signed URLs. More information about that is provided here.
So currently I have a CNAME mapped to my Amazon S3 bucket so that I can access my files like subdomain.domain.com/file.js
The problem is SSL doesn't work on this.
Now I could add Cloudfront, however, that creates a cache for the files and the files need to update dynamically.
How do I go about doing this?
CloudFront is a complementary service to S3, and is the only way AWS provides to combine a custom domain name and SSL with an S3 bucket.
If you really don't want CloudFront to cache any content, you can set the Minimum, Default, and Maximum TTL to 0.
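In the distribution's default cache behavior, those three settings correspond to a fragment of roughly this shape (other required fields omitted):

```json
{
  "DefaultCacheBehavior": {
    "MinTTL": 0,
    "DefaultTTL": 0,
    "MaxTTL": 0
  }
}
```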
I have two buckets a and b with static websites enabled that redirect to original buckets A and B. I created two Route 53 record sets (A records), slave-1 and slave-2, pointing to buckets a and b respectively. I then created a master record set (A record) with failover, with slave-1 as primary and slave-2 as secondary. When I try to access the S3 contents using the master record, I get a 404 'NoSuchBucket' error. Is there a way I can get this setup to work? Or are there any workarounds for configurations like this?
S3 only supports accessing a bucket directly either via one of the endpoint hostnames (such as example-bucket.s3.amazonaws.com), or via a DNS record pointing to the bucket endpoint when the name of the bucket matches the entire hostname presented in the Host: header (the hostname my-bucket.example.com works only with a bucket named exactly "my-bucket.example.com").
If your tool will be signing requests for the bucket, there is no simple and practical workaround, since the signatures will not match on the request. (This technically could be done with a proxy that has knowledge of the keys and secrets, validates the original signature, strips it, then re-signs the request, but this is a complex solution.)
If you simply need to fetch content from the buckets, then use CloudFront. When CloudFront is configured in front of a bucket, you can point a domain name to CloudFront, and specify one or more buckets to handle the requests, based on pattern matching in the request paths. In this configuration, the bucket names and regions are unimportant and independent of the hostname associated with the CloudFront distribution.
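The path-based routing described above can be sketched as follows; the behavior list and bucket hostnames are hypothetical, chosen only to mirror how CloudFront matches cache behaviors in order, with a `*` default catch-all:

```python
import fnmatch

# Hypothetical cache behaviors: most specific patterns first, then the
# default behavior ("*"), mirroring CloudFront's ordered matching.
BEHAVIORS = [
    ("images/*", "bucket-a.s3.amazonaws.com"),
    ("videos/*", "bucket-b.s3.amazonaws.com"),
    ("*",        "bucket-a.s3.amazonaws.com"),  # default behavior
]

def pick_origin(path: str) -> str:
    """Return the origin whose path pattern matches the request path first."""
    for pattern, origin in BEHAVIORS:
        if fnmatch.fnmatch(path.lstrip("/"), pattern):
            return origin
    raise RuntimeError("unreachable: '*' matches everything")

print(pick_origin("/videos/intro.mp4"))  # bucket-b.s3.amazonaws.com
```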
The default endpoint & URL for an AWS hosted website is http://*username*.s3-website-us-east-1.amazonaws.com/index.html
But I have seen some like this https://s3.amazonaws.com/username/index.html
How do you do this?
You can always access an S3-hosted resource at:
http://s3.amazonaws.com/<bucket name>/<resource key (usually a filename)>
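The two URL shapes in this question are S3's path-style and virtual-hosted-style addressing; a small sketch of the difference (the bucket and key are the examples from the question):

```python
def path_style(bucket: str, key: str) -> str:
    # The form in the question: https://s3.amazonaws.com/<bucket>/<key>
    return f"https://s3.amazonaws.com/{bucket}/{key}"

def virtual_hosted_style(bucket: str, key: str) -> str:
    # The bucket name becomes part of the hostname instead.
    return f"https://{bucket}.s3.amazonaws.com/{key}"

print(path_style("username", "index.html"))
# https://s3.amazonaws.com/username/index.html
print(virtual_hosted_style("username", "index.html"))
# https://username.s3.amazonaws.com/index.html
```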
+1 for free-dom's answer...
...but even cooler is that you can use your own host/domain to access the content of the bucket.
Step 1. Create a bucket named, say, img.example.com, where you control the DNS for that domain.
Step 2. Add a CNAME entry in DNS that maps img.example.com to s3.amazonaws.com.
Step 3. There is no Step 3.
You can then access the files in that S3 bucket using http://img.example.com/FILENAME
This protects you in the event you want to change where your files are hosted to somewhere other than S3 (e.g., Amazon CloudFront CDN or another provider).