I'm currently using AWS CloudFront to serve assets for a company's website under the subdomain content.companyname.com. The current distribution points to companyname.s3.amazonaws.com.
With this setup we lack some of the static website hosting features, such as a custom index.html page and an error page. I understand from this answer that I need to point instead to http://companyname.s3-website-eu-west-1.amazonaws.com (the bucket's website hosting endpoint) to get this to work.
My question is: can I just update the Origin without having to create a new CloudFront distribution? Or is it better to create a new CloudFront distribution, then change the DNS and associate our custom CNAME with it?
We are ideally looking to have zero downtime if possible.
Create a second Origin on the CloudFront distribution, pointing to the new endpoint.
Then you can update each Cache Behavior, select the new origin from the drop-down, and save the changes.
If your bucket is correctly configured, this change will take effect over the course of a few minutes, without any downtime.
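If you'd rather script this change than click through the console, here's a minimal boto3 sketch of the same idea (the distribution ID, origin ID and website endpoint are placeholders, and error handling is omitted):

# Rough sketch: add a website-endpoint origin to an existing distribution and
# repoint the default cache behavior at it. Values below are placeholders.
import boto3

cf = boto3.client("cloudfront")

dist_id = "E1234567890ABC"  # placeholder distribution ID
resp = cf.get_distribution_config(Id=dist_id)
config, etag = resp["DistributionConfig"], resp["ETag"]

# The website endpoint only speaks HTTP, so it is added as a custom origin.
new_origin = {
    "Id": "S3-Website-companyname",
    "DomainName": "companyname.s3-website-eu-west-1.amazonaws.com",
    "CustomOriginConfig": {
        "HTTPPort": 80,
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "http-only",
    },
}
config["Origins"]["Items"].append(new_origin)
config["Origins"]["Quantity"] = len(config["Origins"]["Items"])

# Point the default cache behavior at the new origin.
config["DefaultCacheBehavior"]["TargetOriginId"] = new_origin["Id"]

cf.update_distribution(Id=dist_id, DistributionConfig=config, IfMatch=etag)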
I have tried several tutorials on how to set up https via CloudFront, but nothing is working. I am hosting from an S3 bucket and the app works fine via the http protocol, but I need it to be https.
Does anyone have a very thorough tutorial on how to make this work?
Some tutorials explain how to set up a certificate, others explain how to use CloudFront to serve it, and I even found a CloudFront tutorial explaining that not requesting the certificate through the link in the CloudFront setup can cause it to be created in the wrong region, so I tried that as well.
I have not found anything that explains exactly what needs to be done for this very common setup, so I am hoping that someone here has some helpful resources.
I think the main issue I had when setting up a CloudFront distribution for an S3 static website hosting bucket was the Origin Domain Name.
When you create a new distribution, under Origin Settings, the Origin Domain Name field works as a drop-down menu and lists your buckets. However, picking a bucket from that list doesn't work for static website hosting. You need to specifically put the bucket's website endpoint there, for example:
mywebhostingbucket.com.s3-website-sa-east-1.amazonaws.com
And for custom domains, you must set up the CNAMEs under Distribution Settings, Alternate Domain Names (CNAMEs), and then make sure you have your custom SSL certificate in the us-east-1 region.
Then you can configure the alias record set for the CloudFront distribution.
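For the certificate piece, a minimal boto3 sketch might look like this, assuming DNS validation and placeholder domain names; the key point is that the ACM client must target us-east-1 for CloudFront to use the certificate:

# Sketch: request a certificate in us-east-1 covering the apex and www names.
# Domain names are placeholders; the DNS validation records still need to be
# created afterwards before the certificate is issued.
import boto3

acm = boto3.client("acm", region_name="us-east-1")  # must be us-east-1 for CloudFront

response = acm.request_certificate(
    DomainName="example.com",
    SubjectAlternativeNames=["www.example.com"],
    ValidationMethod="DNS",
)
print(response["CertificateArn"])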
Here is a complete answer for setting up a site with https.
I had everything in this document completed:
https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html
And it worked to get the site live via http, but in order to add https, I needed to do the following:
I had requested a certificate for whatever.com, and tried several suggestions after that. But there were a couple of things missing.
To route traffic for the domain (whatever.com) to the CloudFront distribution, you will need to clear the current value of the A record and fill in the distribution's domain name.
Several documents I viewed said to point the whatever.com S3 bucket to the www.whatever.com S3 bucket and use the second one to drive the site. Since CloudFront can serve multiple domain names, you can set both as CNAMEs on the distribution, but you will need to set the A record for both to the distribution AND request an ACM certificate that covers both domain names (with and without the www). Also, I did ask about this: if you already have a certificate, you can't edit it to add a name, which means you'll need to request a new one that covers both whatever.com and www.whatever.com.
After all of this, I still got "Access Denied" when I went to my site. To fix that, I had to create a new origin in CloudFront with 'Origin Domain Name' set to the full address of the S3 bucket (without the http), and then set the Default (*) Behavior to the S3-Website-.....whatever.com origin.
After all of this, my site was accessible via http AND https. I hope this helps anyone who experienced this challenge.
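As a rough sketch of the Route 53 step in code (the hosted zone ID and distribution domain are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID CloudFront uses for alias targets):

# Sketch: upsert alias A records for both the apex and the www name, pointing
# at the CloudFront distribution. IDs and domain names are placeholders.
import boto3

r53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z111111QQQQQQQ"                   # placeholder: the whatever.com zone
DISTRIBUTION_DOMAIN = "d123example.cloudfront.net"  # placeholder distribution domain

changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z2FDTNDATAQYW2",   # CloudFront alias zone ID
                "DNSName": DISTRIBUTION_DOMAIN,
                "EvaluateTargetHealth": False,
            },
        },
    }
    for name in ("whatever.com", "www.whatever.com")
]

r53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Comment": "Point apex and www at CloudFront", "Changes": changes},
)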
I want to host my website on S3 (because it seems cheaper and I don't have any server-side scripts). I have a domain, and I want that domain to point to my S3 website. So far, what I've done is enable static website hosting on my S3 bucket and set a Route 53 record set's Alias Target to my S3 website endpoint. It's working, but it's not good enough: I want it to work across multiple regions.
I know that Transfer Acceleration can automatically sync files to other regions so it's faster for users in other regions, but I don't know how to make it work with Route 53. I hear that some people use CloudFront to do that, but I don't quite understand how. And I don't want to manually create buckets in several regions and set each one up by hand.
Do you have any suggestions for me?
If your goal is to reduce latency for users worldwide, then Amazon CloudFront is definitely the way to go.
Amazon CloudFront has over 100 edge locations globally, so it has more coverage than merely using AWS regions.
You simply create a CloudFront distribution, point it to your S3 bucket and then point your domain name to CloudFront.
Whenever somebody accesses your content, CloudFront will retrieve it from S3 and cache it in the edge location closest to that user. Then, other users who want the data will receive it from the local cache. Thus, your website appears fast for many users.
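If you prefer to do this in code, a hedged boto3 sketch of creating such a distribution might look like the following (the bucket and origin names are placeholders, and most optional settings such as aliases, TLS certificates and logging are omitted):

# Sketch: create a distribution whose single origin is the S3 bucket, using
# legacy cache settings for brevity. Names below are placeholders.
import time
import boto3

cf = boto3.client("cloudfront")

bucket_origin = "my-site-bucket.s3.amazonaws.com"  # placeholder bucket

cf.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),   # any unique string
    "Comment": "Static site served from S3",
    "Enabled": True,
    "DefaultRootObject": "index.html",
    "Origins": {
        "Quantity": 1,
        "Items": [{
            "Id": "S3-my-site-bucket",
            "DomainName": bucket_origin,
            "S3OriginConfig": {"OriginAccessIdentity": ""},  # public bucket assumed
        }],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "S3-my-site-bucket",
        "ViewerProtocolPolicy": "redirect-to-https",
        "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
        "MinTTL": 0,
    },
})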
See also: Amazon CloudFront pricing
I'm using Vue.js and almost everything I do is on the client side, but for one thing I need to call the server side: to check whether a URL exists or not.
I don't want to make these requests from the browser, because fetching a different website from my client-side scripts would look like calling some arbitrary site in the background without the user knowing. So I need to call a cloud function (GCE) or an AWS Lambda (I don't want to host the site on a server just for this, since it has only one API call).
What would be the best way to accomplish this? I'm looking for something like: the website is www.webapp.com and the cloud function is called at www.webapp.com/checkUrl.
If you choose the AWS platform, you can use S3, CloudFront, Route 53, API Gateway and Lambda to accomplish your goal.
Step 01
Create an S3 bucket and upload your frontend Vue.js code
Enable static website hosting on your bucket from the S3 properties
Create a CloudFront distribution
Create a CloudFront origin pointing to your S3 bucket URL (you have to use the static website URL of the S3 bucket; see the sketch after this list)
Set the default behaviour to point to the S3 origin ID
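A minimal sketch of Step 01 in boto3, assuming a placeholder bucket name and region; the important part is using the website endpoint (not the plain bucket name) as the CloudFront origin domain:

# Sketch: enable static website hosting and build the website endpoint that
# the CloudFront origin should use. Bucket name and region are placeholders.
import boto3

BUCKET = "my-vue-app-bucket"     # placeholder
REGION = "us-east-1"             # placeholder

s3 = boto3.client("s3", region_name=REGION)

# Turn on static website hosting with index/error documents.
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "index.html"},  # SPA: route errors back to the app
    },
)

# This is the endpoint to use as the CloudFront origin domain name
# (not the plain bucket name the console drop-down offers).
website_endpoint = f"{BUCKET}.s3-website-{REGION}.amazonaws.com"
print(website_endpoint)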
Step 02
Create your Lambda function (a sample handler sketch follows this list)
Create an API Gateway
Add a new resource (GET/POST) pointing to your Lambda
Deploy your API
Go back to the CloudFront distribution and add an origin pointing to your API Gateway
In the behaviours tab, create a new behaviour (e.g. /checkUrl) and point it to the origin ID of the API Gateway
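As a sample of what the Lambda behind /checkUrl could look like, here is a hypothetical handler; it assumes a Lambda proxy integration and a url query string parameter, both of which are my own choices rather than anything mandated above:

# Hypothetical handler: HEAD-requests the URL passed as ?url=... and reports
# whether it resolves. The parameter name and response shape are assumptions.
import json
import urllib.request
import urllib.error


def handler(event, context):
    # With a Lambda proxy integration, API Gateway passes query params here.
    params = event.get("queryStringParameters") or {}
    url = params.get("url")
    if not url:
        return {"statusCode": 400, "body": json.dumps({"error": "missing url"})}

    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=5) as resp:
            exists = resp.status < 400
    except (urllib.error.URLError, ValueError):
        exists = False

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"url": url, "exists": exists}),
    }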
Step 03
Go to Route 53 and create a new hosted zone
Set the NS records of the hosted zone in your domain configuration
Create a new record set (e.g. www.webapp.com) and point it to the domain name of your CloudFront distribution
Update your CloudFront distribution's Alternate Domain Name to www.webapp.com
I have a single-page application which is hosted on S3 and has a CloudFront distribution pointing to it. The app is multi-tenant, so the "tenants" need to be able to point their domain to a subdirectory.
Example: sometenant.com should point to app.domain.com/sometenant
Is this possible? Fairly hard to test with deployment/propagation/etc...
Also wondering if I could keep pushstate working, if it's possible that is...
This can be done, but not with a single CloudFront distribution -- a single CloudFront distribution can accept and handle requests for multiple subdomains, but unless the origin web server behind CloudFront can vary the response based on the hostname (and this won't work with S3 as the origin server), the basic assumption is that all of the domains pointing to a distribution result in the same behavior.
You'll need a CloudFront distribution for each domain. Not a problem, because the default limit of 200 distributions per account can be increased on request... and distributions are easily created in automation... and you don't pay a fee for each distribution -- just the bandwidth and requests, which will be essentially the same.
Each CloudFront distribution can then be configured with a default root object -- this is the page fetched from the back-end when the root / page is requested. Set this to sometenant and, for this distribution, the root page will be requested from the bucket as GET /sometenant. Any other page/object request (images, css, etc.) is forwarded straight through to the bucket.
Each tenant site needs its domain added as an alternate domain name for its distribution, and then, in DNS, you point the tenant's domain to the distribution's assigned dxxxexample.cloudfront.net endpoint.
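A short boto3 sketch of the per-tenant configuration described above (the distribution ID and tenant name are placeholders):

# Sketch: set the per-tenant default root object so a request for / is fetched
# from the bucket as /sometenant, and add the tenant's domain as an alias.
import boto3

cf = boto3.client("cloudfront")

dist_id = "E2TENANT000000"   # placeholder: this tenant's distribution
tenant = "sometenant"

resp = cf.get_distribution_config(Id=dist_id)
config, etag = resp["DistributionConfig"], resp["ETag"]

config["DefaultRootObject"] = tenant
# Aliases hold the tenant's own domain (e.g. sometenant.com).
config["Aliases"] = {"Quantity": 1, "Items": [f"{tenant}.com"]}

cf.update_distribution(Id=dist_id, DistributionConfig=config, IfMatch=etag)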
I'm trying to host a simple static website from my AWS account with S3. I had an old, dusty account with lots of strange settings from testing over the years, including an S3 setup with a 'mypersonaldomain.com' and a 'www.mypersonaldomain.com' bucket. Anyway, I wanted to start fresh, so I closed that account and opened a new one.
Now when I go to create 'mypersonaldomain.com' and 'www.mypersonaldomain.com' buckets, it says the bucket name is taken, even though the account was deleted a while ago. I had assumed that Amazon would release the bucket name back to the public. However, when I deleted the account, I didn't explicitly delete the buckets beforehand.
I'm under the impression that to use S3 for static website hosting, the bucket names need to match the domain name for the DNS to work. If I can't create a bucket with the proper name, is there any way I can still use S3 for static hosting? It's just a simple, low-traffic website that doesn't need to be on an EC2 instance.
FYI I'm using Route 53 for my DNS.
[Note: 'mypersonaldomain.com' is not the actual domain name]
One way to solve your problem would be to store your website on S3 but serve it through CloudFront:
You create a bucket on S3 with whatever name you like (no need to match the bucket's name to your domain name); see the sketch after this list.
You create a distribution on CloudFront.
You add an origin to your distribution pointing to your bucket.
You make the default behavior of your distribution grab content from the origin created in the previous step.
You change your distribution's settings to make it respond to your domain name by adding it as an alternate domain name (CNAME).
You create a hosted zone on Route 53 and create ALIAS entries pointing to your distribution.
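As a small sketch of the first step, with placeholder names: the bucket name is unrelated to the domain, which only shows up later as a CloudFront alternate domain name and a Route 53 ALIAS record.

# Sketch: create an arbitrarily named bucket and upload the site. Region,
# bucket name and file name are placeholders.
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

bucket = "any-unique-name-i-can-get"     # placeholder, unrelated to the domain
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# Upload the site; CloudFront's origin will point at this bucket, while the
# real domain (mypersonaldomain.com) is configured only on CloudFront/Route 53.
s3.upload_file("index.html", bucket, "index.html", ExtraArgs={"ContentType": "text/html"})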
That should work. By doing this, you have the added benefit of potentially improving performance for your end users.
Also note that CloudFront was included in the Free Tier a couple of months ago (http://aws.amazon.com/free).
In my opinion, it is a tremendous design flaw that S3 requires bucket names to be universally unique across all users.
I could, for example, create buckets for any well-known company name (assuming they weren't taken), and the legitimate users of those domains would then be blocked from ever using them for a static S3 website.
I've never liked this requirement that S3 bucket names be unique across all users. It's a major design flaw (which I am sure had a legitimate reason when it was first designed), but I can't imagine that AWS wouldn't rethink it now if they could go back to the drawing board.
In your case, with a deleted account, it is probably worth dropping a note to S3 tech support; they may be able to help you out quite easily.
I finally figured out a solution that worked for me. For my apex domain name I am using CloudFront, so it isn't an issue that someone already has my bucket name. The problem was the www redirect: I needed a server to rewrite the URL and issue a permanent redirect. Since I am using Elastic Beanstalk for my API, I leveraged Nginx to accomplish the redirect. You can read all the details in my post here: http://brettmathe.com/aws/aws-apex-redirect-using-elastic-beanstalk/