Our site is just a web app, so we host it with CloudFront and S3. Now I want to do a canary release that redirects 5% of users to a new version for some testing first, but it seems AWS can't do that, or at least I can't figure out how to approach it.
For example, in the screenshot, I need an SSL certificate bound to CloudFront distribution A. But one certificate can only be bound to one CloudFront distribution, which is an AWS limitation, so the certificate can't also be bound to CloudFront distribution B.
I have no idea how to resolve this. I'm not sure whether I'm misunderstanding the AWS services or my whole approach is wrong.
Any comments would be much appreciated.
P.S. One solution I've thought about is writing a proxy, or an API Gateway/Lambda function, that accepts the requests and redirects a percentage of them.
Although CloudFront doesn't support this natively, you can implement a canary release using AWS Lambda@Edge, which runs at CloudFront edge locations. You would need to write the routing logic that forwards a certain percentage of requests to a specific bucket.
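As a rough illustration, here is a minimal sketch of what that routing logic could look like as a Python origin-request handler, assuming the distribution's default origin is an S3 bucket; the canary bucket name and the 5% weight are placeholders, not anything from your setup:

```python
import random

# Placeholder -- substitute the endpoint of the bucket holding the new version.
CANARY_BUCKET = 'my-site-canary.s3.amazonaws.com'
CANARY_WEIGHT = 0.05  # roughly 5% of requests

def handler(event, context):
    """Lambda@Edge origin-request handler: send ~5% of requests to the canary bucket."""
    request = event['Records'][0]['cf']['request']

    if random.random() < CANARY_WEIGHT:
        # Rewrite the origin (and the Host header) so CloudFront fetches from the canary.
        request['origin']['s3']['domainName'] = CANARY_BUCKET
        request['headers']['host'] = [{'key': 'Host', 'value': CANARY_BUCKET}]

    return request
```

Note that because the choice is made per request, a given user won't stick to one version unless you also set a cookie (or hash something stable like the client IP) and route on that.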
The term canary release doesn't really fit front-end development; it relates to your backing services and is usually applied at the REST API level. In a canary configuration a user doesn't consistently hit either the canary release or the normal release: each request has a chance of hitting either one, so one request could hit the canary and the next could hit the old release.
For the front end, you may instead want users to opt in to beta features, or host an entirely separate site at www.beta.yoursite.com, whose DNS resolves to a bucket containing snapshot releases while www.yoursite.com resolves to the normal site. You can then choose beta users at random and send them an email suggesting they try the new site at its beta location. In your application you can give these users beta credentials, so that only beta users have access to the beta site if you wish.
Note that even if you could do what you are proposing (I think there is a way with CloudFront), it would be a bad user experience: a user might access your site from two different devices, get two different experiences, and not know what is going on.
EDIT (answering a comment): Like I say, I really don't think you want to do that, but what you would do is resolve your domain to an API Gateway/proxy/load balancer instead of a bucket, which would then route traffic to either the beta site or the old site based on the authenticated user. That way users won't see a different domain. AFAIK there is no way to do DNS resolution based on the logged-in user, in Route 53 or in DNS in general (I could be wrong; somebody correct me if so). API Gateway is probably the simplest, with a Lambda to route the traffic to the correct site.
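For what it's worth, here is a very rough Python sketch of that proxy idea: one Lambda behind API Gateway fetches the page from either the beta bucket or the normal bucket, so the user never sees a different domain. The bucket endpoints and the is_beta_user() lookup are placeholders, and this only handles text responses with no caching, so treat it as a starting point at most.

```python
import urllib.request

# Placeholder website endpoints -- substitute your real buckets.
MAIN_ORIGIN = 'http://main-bucket.s3-website-eu-west-1.amazonaws.com'
BETA_ORIGIN = 'http://beta-bucket.s3-website-eu-west-1.amazonaws.com'

def is_beta_user(user_id):
    # Placeholder: look the user up in your own beta list (DynamoDB, a config file, etc.).
    return False

def handler(event, context):
    # With a Cognito/JWT authorizer, API Gateway exposes the caller's claims here.
    claims = event.get('requestContext', {}).get('authorizer', {}).get('claims', {})
    origin = BETA_ORIGIN if is_beta_user(claims.get('sub')) else MAIN_ORIGIN

    path = event.get('path', '/')
    with urllib.request.urlopen(origin + path) as resp:
        body = resp.read().decode('utf-8')
        content_type = resp.headers.get('Content-Type', 'text/html')

    return {
        'statusCode': 200,
        'headers': {'Content-Type': content_type},
        'body': body,
    }
```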
Related
I have a Cloud Function that I want to secure so that it only accepts calls coming from my domain, for all users. I have been exploring this for days.
Google seems to limit many options, and instead you are pushed to buy and use more products; for example, for this you apparently need a Load Balancer, which is a great product but a monster for smaller businesses, and not everyone needs it (or wants to pay for it).
So, how do you secure a Function from the console, without IAM (no sign-in needed), so that only calls from a certain domain are allowed, before you move up to a Load Balancer?
I do see that Google has something called organization policies for a project, which are supposed to restrict by domain, but the docs are unclear and outdated (they refer to UI that doesn't exist).
I know that Firebase has anonymous users, which let a Function check the Google ID of an anonymous user, but everything online is Firebase-specific, and there is no explanation anywhere of how to do this with a normal Cloud Function in Python.
EDIT
I do use Firebase Hosting, but my Function is written in Python and managed from GCP; it is not a Firebase Function.
Solved: you can use API Gateway with an API key, restrict the key to your domain only, and upload a config with your Function's URL. You then call the function via the API URL plus the key, and nobody else can just run it.
See here Cloud API Gateway doesn't allow with CORS
I wish I could connect it to a custom domain as well, but you can't; Google seems to want everyone to use the expensive Load Balancer, or Firebase (which in this case would charge for a Function invocation on every website visit).
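If you also want a check inside the Python Function itself, a minimal sketch using the Functions Framework is below. It only inspects the Origin header, which browsers send but other clients can fake, so treat it as a convenience layered on top of the API Gateway/API key setup, not as real access control; ALLOWED_ORIGIN is a placeholder.

```python
import functions_framework
from flask import abort

# Placeholder -- the only site allowed to call this function from a browser.
ALLOWED_ORIGIN = 'https://www.example.com'

@functions_framework.http
def my_function(request):
    # Reject browser requests that do not originate from the allowed domain.
    if request.headers.get('Origin', '') != ALLOWED_ORIGIN:
        abort(403)

    # Echo the CORS header back so the browser accepts the response.
    headers = {'Access-Control-Allow-Origin': ALLOWED_ORIGIN}
    return ('hello from the function', 200, headers)
```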
I have a domain (www.example.com) connected to an AWS EC2 instance and it works fine. But when I tried to create a subdomain (test.example.com) pointing at the same EC2 instance as the parent domain, I created an A record for the subdomain and still get 403 Forbidden.
Check your user permissions at https://console.aws.amazon.com/iam/home#/users/<your-aws-username>. You must have one of the policies below attached to your user:
AmazonRoute53AutoNamingFullAccess
AmazonRoute53AutoNamingRegistrantAccess
AmazonRoute53DomainsFullAccess
AmazonRoute53FullAccess
If you don't have permission to attach one of these policies to your account, ask your administrator to do that.
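If you prefer to check from a script rather than the console, a small boto3 sketch (the username is a placeholder):

```python
import boto3

# List the managed policies attached to the user, so you can see whether one of
# the Route 53 policies above is already there. Replace the username with yours.
iam = boto3.client('iam')
attached = iam.list_attached_user_policies(UserName='your-aws-username')

for policy in attached['AttachedPolicies']:
    print(policy['PolicyName'])
```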
I was facing this issue in a slightly different case, but the cause of the problem was probably the same. After fixing my problem, I reproduced your case as well.
It may be because of the CloudFront distribution you are using.
You may need to add all the hostnames you are going to serve.
Please check that you have provided values for the CNAMEs (alternate domain names) in the CloudFront distribution settings. In your case you need to add two entries there: one for the main domain example.com and one for test.example.com.
Without this setting, the CloudFront distribution cannot be accessed via the new A record.
If you serve HTTPS, your certificate must also cover the subdomain, and maybe that is why CloudFront needs this setting (to check it against the certificate).
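To verify this without clicking through the console, you could list the distribution's current alternate domain names with boto3, roughly like this (the distribution ID is a placeholder):

```python
import boto3

cf = boto3.client('cloudfront')
config = cf.get_distribution_config(Id='E1234EXAMPLEID')  # your distribution ID

# Both example.com and test.example.com should appear here.
aliases = config['DistributionConfig']['Aliases']
print(aliases.get('Items', []))
```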
I have set up an S3 bucket to redirect all traffic for example.com to www.example.com with HTTPS, following this very poor AWS guide. It works for example.com and http://example.com.
But when I access https://example.com it hangs for a little while and then routes to a blank page. Why is it so difficult to redirect a URL I own to another one in AWS and how do I fix this?
Edit:
I am now configuring CloudFront distributions and trying to find one decent tutorial explaining how to perform this seemingly simple task.
Did you miss this line in the link you provided:
Note: The sites must use HTTP, because the redirect can't connect to Amazon S3 over HTTPS.
You are trying to do something that is explicitly called out in the docs as not being possible.
BTW: if you want to serve static S3 websites over HTTPS, using CloudFront is often the easiest and quickest way to do that.
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-https-requests-s3/
So I finally figured this out, and I'm going to describe here what worked for me. E.J.'s answer above was a helpful pointer, but it wasn't specific enough to make this the absolutely trivial task I would hope it to be, even for a first-timer.
Here are the exact steps required, with some prior notes.
Two notes:
You HAVE to set up an SSL certificate with AWS to redirect over HTTPS. As an organisation, AWS has not yet reached the place where automatic certificate management is... well... automatic. You have to use what I might call AWS's "Extremely Manual" ACM.
You need an AWS S3 bucket (give it the name of the domain you are redirecting FROM).
Steps:
Follow this guide to set up an S3 bucket that will redirect (without HTTPS) from example.com to www.example.com (or vice versa, I guess).
Navigate to the absolute eye-sore that is Amazon CloudFront
Click everywhere until you find a button to "create distribution"
Set "Origin Domain Name" to the link for the bucket created in step 1. DO NOT use the one AWS recommends, you have to go to the bucket and copy the end-point manually, the one AWS fills-in automatically will not work. It should look like this: example.com.s3-website-eu-west-1.amazonaws.com but location and stuff will be different obviously. Not sure why AWS recommends the wrong end-point but that is the least of my concerns about this process.
This guide works for the rest of the CloudFront distribution creation, but it is not super specific and points to this mess at one important part. The other steps are okay, but when creating an SSL certificate just click the "Request or Import a Certificate with ACM" button (you will have to refresh after creating a certificate, because Ajax didn't exist when the AWS console was made 200 years ago).
And the most important step: take the domain name of your CloudFront distribution (which will look like this: d328r8fyg.cloudfront.net; this one is fake because apparently you're not supposed to share them), and make the A record for example.com created in step 1 point to that CloudFront distribution instead of pointing directly to your bucket.
And voila, only took about 3 hours to get a URL to redirect somewhere securely. Not sure why people expect us to make it to Mars when the largest company in the world can't point one url to another and Microsoft Image Editor still can't crop to a specific pixel dimension.
Anyway. I'm glad this is over.
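For anyone who would rather script that last step than click through Route 53, this is roughly what it looks like with boto3; the hosted zone ID and the d328r8fyg.cloudfront.net domain are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS uses for all CloudFront alias targets:

```python
import boto3

route53 = boto3.client('route53')
route53.change_resource_record_sets(
    HostedZoneId='ZEXAMPLE12345',  # the hosted zone for example.com
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'example.com',
                'Type': 'A',
                'AliasTarget': {
                    'HostedZoneId': 'Z2FDTNDATAQYW2',       # CloudFront's own zone ID
                    'DNSName': 'd328r8fyg.cloudfront.net',  # your distribution's domain
                    'EvaluateTargetHealth': False,
                },
            },
        }]
    },
)
```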
Problem
I want to host multiple websites, each with SSL (HTTPS), without spending more than I have to.
I would also like to keep using Route 53 if possible (but it isn't essential), because I understand how to use it and it only costs me about $0.50/month.
Background
My backend/server understanding is very limited.
I've created some React websites (I think I'm using the word static correctly; there is some content that JavaScript changes) and currently I'm hosting each one on its own EC2 instance. I used Certbot (Let's Encrypt) to enable HTTPS for my websites. The domain names are handled through Route 53 and Namecheap.
S3 & Cloudfront
I want to move my sites onto S3 to save costs, but I need HTTPS. Most tutorials I look at talk about using CloudFront. It looks like CloudFront is going to cost me something similar to what my EC2 instances cost anyway, so it doesn't look like a solution to me. Maybe I'm wrong? Will the costs be insignificant?
Route53 & NGINX
It looks like I might be able to do this with Route 53? There's an answer from Gianluca Casati, but it doesn't provide enough detail for me to work with.
Some other tutorials explain it, but talk about setting up an NGINX server, and I don't really know what that is. I'd like to avoid NGINX if possible but I'll use it if I have to.
This is starting to get very complicated, so I wanted to know if there is an easier way. If not, what are all the steps involved?
Side note (if you can answer this as well it would be helpful, but it isn't necessary):
I would also like good SEO. For at least one of the websites, it looks like this will involve dynamic rendering using Rendertron or Puppeteer or something similar. Not all of my websites need this, but one will. It would be nice to know whether this is possible or not.
Summary:
I'm looking for a cost-effective method to host multiple static websites.
It looks like that method is storing each one on Amazon S3.
I want each website to have SSL (HTTPS).
It looks like CloudFront can do this, but it won't really save me any money anyway.
It looks like there's a method of doing this with Route 53.
The Route 53 method may require an NGINX proxy server, and I have no idea what that is, for the most part.
I'll try to provide answers to your questions in order.
Cloudfront: will the cost be insignificant?
Yes, incredibly so. Below is a screenshot of the Cloudfront pricing breakdown from a random bill on my AWS account:
Route53: can I do this with Route53?
Yes, and Route53 will be necessary. NGINX is not necessary.
The 4 AWS services you will need are:
S3: create a bucket with "web hosting" enabled and with the same name as your desired domain url. For example: "www.jacobswebsite.com" or "jacobswebsite.com"
Cloudfront: because you can't install an SSL cert on an S3 bucket directly, but you can install SSL on a Cloudfront distribution.
Route53: what you use to point your DNS records at your site when users type in your URL. Put simply, you will point the "www.mysite.com" DNS record to Cloudfront, and Cloudfront will point to your S3 bucket.
AWS Certificate Manager: with this you can generate free, auto-renewing, browser-compatible SSL certs. If Route53 is handling DNS for your domains, SSL generation is a one-click setup with about a 30-minute wait (a rough scripted version of that request is sketched below). You can then install these SSL certs into a CloudFront distribution from a dropdown menu.
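For reference, here is roughly what that certificate request looks like with boto3; note that certificates used with CloudFront must be requested in us-east-1, and the domain names are just the example ones from above:

```python
import boto3

# Certificates for CloudFront have to live in us-east-1.
acm = boto3.client('acm', region_name='us-east-1')

response = acm.request_certificate(
    DomainName='www.jacobswebsite.com',
    SubjectAlternativeNames=['jacobswebsite.com'],
    ValidationMethod='DNS',  # with Route53 you can add the validation records in one click
)
print(response['CertificateArn'])
```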
This is an excellent guide for the basic procedure:
https://medium.com/@sbuckpesch/setup-aws-s3-static-website-hosting-using-ssl-acm-34d41d32e394
You can skip the parts about SES for the basic setup, but if you ever need to set up a form that submits to your email inbox, then you might want to look into it.
In short, the answer is yes. You can use S3 and Cloudfront to accomplish your goal without a bunch of backend/server knowledge and for pennies per month. The method and services described above are exactly how you do it.
Jacob, here is the official guide to set up HTTPS with CloudFront. More specifically, you'd want to enable HTTPS for communication between viewers and CloudFront.
Although the above links give you general information and an understanding of the setup, as you said, the key thing is that you need to serve multiple domains from CloudFront over HTTPS. This Knowledge Center page tells you exactly what you need: https://aws.amazon.com/premiumsupport/knowledge-center/multiple-domains-https-cloudfront/
In a nutshell:
Configure CNAMEs in Distribution Settings (CloudFront).
Configure SSL certificate(s) for your domains (the video explains how to create them via AWS's ACM, but you can buy wildcard or multi-domain certificates elsewhere as well; here is a nice post explaining the difference between them).
Point CNAMEs to your CloudFront domain (done via Route53 or whatever DNS provider you use).
Note that if you can't use SNI due to old browser incompatibility, it will cost you more money, as you'll need one IP per domain -- see image below. SNI is the most cost-effective option and I think is what your question is all about. You might want to take a look here for more details.
Hope it helps!
You can use Cloudflare for this purpose. Here is an example that may be helpful for you:
https://www.engaging.io/easy-way-to-configure-ssl-for-amazon-s3-bucket-via-cloudflare/
If you are looking for a better way to host your website with S3 and CloudFront without the pain of using a Lambda function, I have created a step-by-step tutorial for that very purpose; please have a look at https://www.youtube.com/watch?v=94DyGswSY6k&t=1475s
A client of mine has his website domain and hosting with DreamHost. We'd like to use Amazon CloudFront as a CDN, but we don't want to use S3 – we'd like to keep the site files where they are on DreamHost's servers.
I'm pretty sure this is possible, since CloudFront does allow custom origins, and I signed up for CloudFront, but I am unsure how to fill out the form (what to put for origin name, etc...) even after reading the pop-up help. We are on the bellfountain server of DreamHost.
What I've Tried
I did see the "create amazon cloudfront distribution not using amazon S3 bucket" question, and that is basically what I am after, but it wasn't specific enough for my needs.
I have also tried posting on the CloudFront forum, but that was less than helpful (no one responded after almost a month).
I've scoured Amazon's documentation (which is very thorough, I'll admit), but the most detailed information is for users of S3, and the stuff about using a custom domain again wasn't specific enough for me to figure it out. We do not have a paid support plan.
I tried chatting with DreamHost support, but they didn't even know what Amazon CloudFront was, and couldn't help me fill in the CloudFront information form. I looked around DreamHost's settings, etc. for things with similar names as what was being requested on the CloudFront form, but couldn't find anything.
Pretty much, if you just put in http://www.yourdomain.com, CloudFront figures out the rest, and you can customize from there if you need or want to. Just doing that one entry and creating the distribution will set up a CloudFront endpoint that serves the files from your external web server. Just make sure you include the 'http://' in front of the URL so CloudFront can figure out the rest.
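If it helps to see the same thing outside the console, here is roughly what that minimal distribution looks like when created with boto3; the domain name, origin ID, and comment are placeholders, and the settings mirror the "http://" origin advice above:

```python
import time
import boto3

cf = boto3.client('cloudfront')
cf.create_distribution(DistributionConfig={
    'CallerReference': str(time.time()),  # any unique string
    'Comment': 'CDN in front of an external (non-S3) origin',
    'Enabled': True,
    'Origins': {
        'Quantity': 1,
        'Items': [{
            'Id': 'external-origin',
            'DomainName': 'www.yourdomain.com',       # your existing web server
            'CustomOriginConfig': {
                'HTTPPort': 80,
                'HTTPSPort': 443,
                'OriginProtocolPolicy': 'http-only',  # fetch from the origin over http
            },
        }],
    },
    'DefaultCacheBehavior': {
        'TargetOriginId': 'external-origin',
        'ViewerProtocolPolicy': 'allow-all',
        'ForwardedValues': {'QueryString': False, 'Cookies': {'Forward': 'none'}},
        'MinTTL': 0,
    },
})
```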