HTTPS, AWS ELB, CloudFront & S3 - amazon-web-services

Background: My division of bigcorp.com was sold off and now we are lilcorp.com. We have a fleet of appliances deployed that will be looking for software updates on https://updates.bigcorp.com/, but since we no longer control bigcorp.com, we need to update our appliances to check https://updates.lilcorp.com. bigcorp has given us a cert for updates.bigcorp.com and has a DNS CNAME in place that forwards traffic for updates.bigcorp.com to server.lilcorp.com.
I'm trying to configure things like this:
              HTTPS                  HTTPS
Appliance -----------> ELB -----------> CloudFront ----------> S3
                  (cert for            (cert for
             updates.bigcorp.com)  updates.lilcorp.com)
I've got the following DNS records in place:
updates.bigcorp.com CNAME to server.lilcorp.com
server.lilcorp.com CNAME to ELB
updates.lilcorp.com CNAME to CloudFront.net address
CloudFront is configured to use an S3 bucket as its origin.
Status: Things work if I hit CloudFront directly, but that doesn't help since the appliances are hitting the updates.bigcorp.com address.
Questions:
Can an ELB forward to a CloudFront deployment? I'm not seeing how to make it a "target".
Do I need to put a web server in the middle of this to handle the redirect/forward?
Thanks in advance.

Can an ELB forward to a CloudFront deployment? I'm not seeing how to make it a "target".
No, it cannot. An ALB target can only be a private IP address, a Lambda function, or an instance ID.
Do I need to put a web server in the middle of this to handle the redirect/forward?
Yes, you would need some kind of proxy. With an ALB, you could use a Lambda function as the target: the ALB invokes the Lambda function, and the function queries the external CloudFront distribution and returns the results.
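For illustration, here is a minimal sketch of such a Lambda proxy, assuming an ALB target group of type "lambda" and assuming the CloudFront distribution is reachable at updates.lilcorp.com (that hostname and the code below are illustrative, not a confirmed setup):

```python
# A minimal sketch of the proxy idea: an ALB-invoked Lambda function that
# relays the request to the CloudFront distribution and returns its response.
import base64
import urllib.request

CLOUDFRONT_BASE = "https://updates.lilcorp.com"  # assumed CloudFront alias

def handler(event, context):
    """Forward an ALB request to the CloudFront distribution and relay the response."""
    url = CLOUDFRONT_BASE + event.get("path", "/")
    req = urllib.request.Request(url, method=event.get("httpMethod", "GET"))
    with urllib.request.urlopen(req) as resp:
        status = resp.status
        content_type = resp.headers.get("Content-Type", "application/octet-stream")
        payload = resp.read()
    # ALB expects binary bodies to be base64 encoded.
    return {
        "statusCode": status,
        "statusDescription": f"{status} OK",
        "isBase64Encoded": True,
        "headers": {"Content-Type": content_type},
        "body": base64.b64encode(payload).decode("ascii"),
    }
```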

Related

Connect cloudfront with Elastic Beanstalk Application

I am trying to connect CloudFront with Elastic Beanstalk (EB).
What's the setup?
EB is hosting a Node.js application.
The CloudFront origin is set to the Elastic Load Balancer and accepts HTTPS only [CloudFront config].
All alternate domain names are added correctly.
An ACM certificate is added to CloudFront [region US East (N. Virginia)].
The EC2 instance / EB environment is in the Asia Pacific region. The ACM certificates installed on the load balancer are from Asia Pacific too.
I am also redirecting HTTP traffic with the help of load balancer listeners.
The security group allows traffic to port 443.
No AWS WAF is set.
Origin settings:
It's been a day now and I am trying continuously.
I am able to set DNS A & AAAA records pointing to CloudFront using Route 53, and I get the alias dropdown value as well.
I am able to point domains directly to EB and they work over HTTPS properly.
I am getting a 502 error: "The request could not be satisfied."
I have already tried https://aws.amazon.com/premiumsupport/knowledge-center/resolve-cloudfront-bad-request-error/
I suspect the issue is that my ELB has an ACM certificate from Asia Pacific while the ACM certificate used in CloudFront is from US East; is that causing the problem? I can't change the EB region now.
It seems that you have not set up HTTPS on your EB environment, but your CloudFront origin is configured as HTTPS only. That's why it does not work. You either have to use an HTTP-only origin, or properly set up HTTPS on your EB environment.
Thanks to #Marcin for finding my mistake!
It was because the ELB was not accepting the "HTTPS only" origin config from CloudFront, even though I had set listeners to redirect all HTTP traffic to HTTPS, with HTTPS traffic pointing to my instance.
How I solved the problem after finding the reason mentioned by #Marcin:
Changed the CloudFront origin protocol policy to HTTP only.
Set a custom origin header.
Added a new listener rule to the ELB which forwards to my instance when the header matches; this lets EB differentiate requests from CloudFront from other traffic (see the sketch after the link below).
I was still getting the same response, then found out the response was coming from the cache. I just had to invalidate the CloudFront cache. And it's done! 😎
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html
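As an illustration of those steps, here is a minimal boto3 sketch, assuming a hypothetical header name/value and placeholder ARNs and IDs (not the poster's actual configuration):

```python
# Minimal sketch: a listener rule keyed on a custom origin header, plus a
# CloudFront cache invalidation. All ARNs, IDs, and header values are placeholders.
import time
import boto3

elbv2 = boto3.client("elbv2")
cloudfront = boto3.client("cloudfront")

# 1. Listener rule: forward to the instance target group only when the
#    custom header sent by CloudFront matches.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:region:123456789012:listener/app/my-alb/xxxx/yyyy",
    Priority=10,
    Conditions=[{
        "Field": "http-header",
        "HttpHeaderConfig": {
            "HttpHeaderName": "X-From-CloudFront",   # hypothetical header name
            "Values": ["some-long-random-secret"],   # hypothetical secret value
        },
    }],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:region:123456789012:targetgroup/my-tg/zzzz",
    }],
)

# 2. Invalidate the CloudFront cache so stale error responses are dropped.
cloudfront.create_invalidation(
    DistributionId="E1234567890ABC",  # placeholder distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),
    },
)
```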

Is it possible to store a website on AWS EC2 with ssl using certificate manager

Is it possible even if my DNS records are not pointing to AWS Route 53?
Has someone tried this before?
What records do I need to add in addition to the A record?
Unfortunately you cannot use public ACM certificates directly on your instance, as Amazon only allows them to be deployed to Amazon-managed resources such as:
Elastic Load Balancer
API Gateway
CloudFront
Without one of these services sitting in front of your EC2 instance, you would need to rely on another solution. One of the following would be applicable:
A free service such as Certbot lets you generate a valid SSL certificate, which needs to be renewed every 90 days.
Buy an SSL certificate and deploy it to the hosts.
Use the paid ACM Private CA (this can become quite expensive).
Route 53 is a DNS configuration service, so its responsibility is to control DNS resolution, i.e. example.com resolves to 1.2.3.4. HTTPS is a Layer 6/7 concern that comes into play after DNS has been resolved and you're trying to connect to the application.
You simply need to create the DNS records for your application (be that an A, Alias, or CNAME record). In addition, when issuing the SSL certificate, the provider will likely ask you to perform either email validation or DNS validation (creating a record they provide) to prove ownership of the domain.
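For example, a minimal boto3 sketch of creating such a record, assuming a hypothetical hosted zone ID, domain, and instance IP:

```python
# Minimal sketch: create an A record for the application in Route 53.
# The hosted zone ID, domain, and IP are hypothetical placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # placeholder hosted zone ID
    ChangeBatch={
        "Comment": "Point example.com at the EC2 instance",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],  # instance public IP
            },
        }],
    },
)
```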

AWS ELB DNS failover to S3

I am trying to stay sane here, but this config has been checked several times now against several tutorials and it is just not working.
I have a public-facing ELB for my website with an EC2 instance behind it. What I need is to set up a maintenance website, hosted from an S3 bucket.
What I did: I created a DNS A alias record for the health check, xxx-healthcheck.xxx.com, pointing to the ELB's internal AWS domain name.
I created an A alias for my website, xxx.xxx.com, pointing to my ELB's internal address, marked the record as failover (primary), attached the health check above, and enabled "evaluate target health".
I added the next record, for my S3 bucket (the bucket name is xxx.xxx.com), again as an A alias. The alias name is the same as the primary address, xxx.xxx.com. I marked it as failover, secondary.
I turned the service off on both instances and the health check is marked as unhealthy, but when I try to access the website it just times out; the maintenance site is never served.
Please help.
Are you using TLS/SSL? If your website behind the ELB is served over HTTPS, the browser will try to use HTTPS with S3 even after the failover (this is normal; it's called HSTS). Your content will still be served if you use HTTP. To verify that, use a different browser or clear all history related to your domain (if that doesn't work, google "chrome delete hsts domain" if you're on Chrome), and open your domain using http://<domain>.
If you enabled static website hosting on S3 and added that static S3 URL (via an A record alias) as your failover secondary route in Route 53, this will not work, since S3 is not configured to receive HTTPS traffic for your domain.
The solution is to create a CloudFront distribution with your S3 bucket as the origin, add your domain name as a CNAME (alternate domain name) on the distribution, and attach/configure SSL certificates on CloudFront. Then, in Route 53, add the CloudFront distribution's URL as the failover entry (A record alias).
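As a sketch of that last step, assuming placeholder zone and distribution values, the failover secondary alias record could be created like this with boto3:

```python
# Minimal sketch: a failover SECONDARY alias record in Route 53 pointing the
# domain at a CloudFront distribution. Zone ID and distribution are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "xxx.xxx.com",
                "Type": "A",
                "SetIdentifier": "maintenance-secondary",
                "Failover": "SECONDARY",
                "AliasTarget": {
                    # Z2FDTNDATAQYW2 is the fixed hosted zone ID used for CloudFront aliases.
                    "HostedZoneId": "Z2FDTNDATAQYW2",
                    "DNSName": "d111111abcdef8.cloudfront.net",  # placeholder distribution
                    "EvaluateTargetHealth": False,
                },
            },
        }],
    },
)
```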

Routing example.at to S3 bucket and *.example.at to load balancer with HTTPS

I have set up a multi tenant application which should be available to clients via a subdomain (e.g. https://client1.example.at). Requests to *.example.at are routed to a load balancer via Route 53. The load balancer has an AWS signed wildcard certificate (e.g. supporting example.at and *.example.at). From this side, everything is working as expected and I can access https://client1.example.at, https://client2.example.at, etc.
Based on this setup, I wanted to route specific requests without a subdomain (except www), such as https://www.example.at or https://example.at, to a bucket (which is also named www.example.com) rather than to the load balancer (I just want to serve a static site for requests to the "main domain"). It works, but I can only access www.example.at and example.at without using HTTPS. My setup can be seen below:
I then found out that I have to use Cloudfront in order to use HTTPS for a custom domain with S3 buckets (if that is correct?). Now I have a few questions:
Is it necessary to use Cloudfront to serve content from my S3 bucket for www.example.at and example.at via HTTPS?
If Cloudfront is necessary then I have to request a new certificate for www.example.at and example.at in region US EAST according to the official AWS docs. Is it possible to create two certificates for the same domain with AWS certificate manager or can I get some conflicts with this setup?
Is it ok to use *.example.at as A type record with alias to the load balancer at all?
Generally speaking, is my Route 53 setup valid at all?
I wanted to route specific request without subdomain (except www) such as https://www.example.com or https://example.com to a bucket (which is also named www.example.com)
Each of those "domains" must route to a different bucket. Unless you are using a proxy in front of S3 (one that rewrites the hostname passed from the browser), the domain name must match the bucket name. If they don't match, your requests go to whichever bucket matches the DNS name you routed from; the routing has nothing to do with the hostname of the S3 bucket endpoint.
In other words, let's say your hostname was www.example.com, and you set the CNAME to example.com.s3.amazonaws.com (you could also use the website endpoint; it doesn't matter for this example).
When a request hits the DNS name www.example.com, it is sent to the S3 server behind that S3 hostname. The request from the browser is for the hostname "www.example.com"; the CNAME target pointing at the S3 endpoint is irrelevant, because S3 never sees which CNAME your browser followed to reach it. So S3 will attempt to pull the requested object from the www.example.com bucket.
URL -> S3 Bucket
https://www.example.com -> s3://www.example.com
https://example.com -> s3://example.com
It works but I can only access www.example.at and example.at without using HTTPS.
CNAME DNS routing like this does not work when using SSL to an S3 bucket. The reason is that the S3 wildcard certificate is one level deep (*.s3.amazonaws.com), so your bucket hostname www.example.com.s3.amazonaws.com fails to match it, because it has two extra levels above the wildcard. Your browser therefore rejects the certificate as invalid for the hostname.
To accomplish this you must use a proxy of some sort in front of S3 with your own certificates for the domain in question.
Is it necessary to use Cloudfront to serve content from my S3 bucket for www.example.at and example.at via HTTPS?
CloudFront is an excellent option for addressing the HTTPS-over-a-CNAME-to-an-S3-bucket issue we just mentioned.
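As an illustration, here is a minimal boto3 sketch of such a distribution, assuming an S3 website endpoint as the origin and placeholder certificate ARN, region, and caller reference:

```python
# Minimal sketch: a CloudFront distribution in front of the S3 website endpoint,
# serving example.at and www.example.at over HTTPS. Certificate ARN, endpoint
# region, and caller reference are hypothetical placeholders.
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(DistributionConfig={
    "CallerReference": "www-example-at-static-site",  # any unique string
    "Comment": "Static site for example.at / www.example.at",
    "Enabled": True,
    "Aliases": {"Quantity": 2, "Items": ["example.at", "www.example.at"]},
    "Origins": {
        "Quantity": 1,
        "Items": [{
            "Id": "s3-website-origin",
            # S3 *website* endpoints only speak HTTP, so use a custom origin.
            "DomainName": "www.example.com.s3-website.eu-central-1.amazonaws.com",
            "CustomOriginConfig": {
                "HTTPPort": 80,
                "HTTPSPort": 443,
                "OriginProtocolPolicy": "http-only",
            },
        }],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "s3-website-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
        "MinTTL": 0,
    },
    "ViewerCertificate": {
        # Must be an ACM certificate issued in us-east-1.
        "ACMCertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/abcd-1234",
        "SSLSupportMethod": "sni-only",
        "MinimumProtocolVersion": "TLSv1.2_2021",
    },
})
```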
If Cloudfront is necessary then I have to request a new certificate for www.example.at and example.at in region US EAST according to the official AWS docs. Is it possible to create two certificates for the same domain with AWS certificate manager or can I get some conflicts with this setup?
I can't answer that one; I can only suggest you try it and find out what happens. If it doesn't work, then it's not an option. It shouldn't take much time to figure this one out.
Is it ok to use *.example.at as A type record with alias to the load balancer at all?
To clarify: an A record can only ever be an IP address; an A alias is similar to a CNAME (but is Route 53 specific).
I highly recommend CNAMEs (or aliases; they are similar). Pointing directly at one of S3's A records is a bad idea because you don't know if or when that IP will be removed from service. By referencing the hostname with a CNAME/alias you don't have to worry about that. Unless you can be 100% sure that the IP will remain available, you shouldn't reference it.
Generally speaking, is my Route 53 setup valid at all?
I don't see any issues with it; based on what you described, it sounds like things are working as expected.
If Cloudfront is necessary then I have to request a new certificate for www.example.at and example.at in region US EAST according to the official AWS docs. Is it possible to create two certificates for the same domain with AWS certificate manager or can I get some conflicts with this setup?
As suggested by #JoshuaBriefman, I simply tried to create another certificate for the same domain in another region, and it worked. I could use that certificate (the additional one created in US East) for the CloudFront distribution, and everything now works without any problems so far.
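For reference, a minimal boto3 sketch of requesting that additional certificate in us-east-1 (DNS validation assumed; domain names are from the question):

```python
# Minimal sketch: request an ACM certificate in us-east-1 for use with
# CloudFront, validated via DNS.
import boto3

# CloudFront only accepts ACM certificates issued in us-east-1.
acm = boto3.client("acm", region_name="us-east-1")

response = acm.request_certificate(
    DomainName="example.at",
    SubjectAlternativeNames=["www.example.at"],
    ValidationMethod="DNS",
)
print(response["CertificateArn"])
# ACM then provides CNAME validation records (visible via describe_certificate)
# that must be created in Route 53 to complete validation.
```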

How to use AWS WAF with Application ELB

I need to use AWS WAF for my web application hosted on AWS to provide additional rule-based security. I couldn't find any way to use WAF directly with the ELB; WAF needed CloudFront in order to attach a Web ACL and block actions based on rules.
So I added my Application ELB's CNAME to CloudFront (only the domain name), attached a Web ACL with an IP block rule, and set the protocol to HTTPS in CloudFront; everything else was left at the defaults. Once WAF and CloudFront with the ELB CNAME were in place, I tried to access the ELB CNAME from one of the IP addresses in the WAF block rule. I am still able to access my web application from that IP address. I also checked the CloudWatch metrics for the Web ACL, and it is not even being hit.
First, is there a good way to achieve what I am doing, and second, is there a specific way to add the ELB CNAME in CloudFront?
Thanks and Regards,
Jay
Service update: The original, extended answer below was correct at the time it was written, but is now primarily applicable to Classic ELB, because -- as of 2016-12-07 -- Application Load Balancers (elbv2) can be directly integrated with Web Application Firewall (Amazon WAF).
Starting [2016-12-07] AWS WAF (Web Application Firewall) is available on the Application Load Balancer (ALB). You can now use AWS WAF directly on Application Load Balancers (both internal and external) in a VPC, to protect your websites and web services. With this launch customers can now use AWS WAF on both Amazon CloudFront and Application Load Balancer.
https://aws.amazon.com/about-aws/whats-new/2016/12/AWS-WAF-now-available-on-Application-Load-Balancer/
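With the current WAFv2 API, that direct association is a single call; here is a minimal boto3 sketch with placeholder ARNs:

```python
# Minimal sketch (current WAFv2 API): attach an existing Web ACL directly to an
# Application Load Balancer. Both ARNs and the region are placeholders; the
# Web ACL must be REGIONAL-scope and in the same region as the ALB.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:123456789012:regional/webacl/my-acl/abcd1234",
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/50dc6c495c0c9188",
)
```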
It seems like you do need some clarification on how these pieces fit together.
So let's say your actual site that you want to secure is app.example.com.
It sounds as if you have a CNAME elb.example.com pointing to the assigned hostname of the ELB, which is something like example-123456789.us-west-2.elb.amazonaws.com. If you access either of these hostnames, you're connecting directly to the ELB -- regardless of what's configured in CloudFront or WAF. These machines are still accessible over the Internet.
The trick here is to route the traffic to CloudFront, where it can be firewalled by WAF, which means a couple of additional things have to happen: first, this means an additional hostname is needed, so you configure app.example.com in DNS as a CNAME (or Alias, if you're using Route 53) pointing to the dxxxexample.cloudfront.net hostname assigned to your distribution.
You can also access your site directly using the assigned CloudFront hostname, for testing. Accessing this endpoint from the blocked IP address should now indeed result in the request being denied.
So, the CloudFront endpoint is where you need to send your traffic -- not directly to the ELB.
Doesn't that leave your ELB still exposed?
Yes, it does... so the next step is to plug that hole.
If you're using a custom origin, you can use custom headers to prevent users from bypassing CloudFront and requesting content directly from your origin.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/forward-custom-headers.html
The idea here is that you will establish a secret value known only to your servers and CloudFront. CloudFront will send this in the headers along with every request, and your servers will require that value to be present or else they will play dumb and throw an error -- such as 503 Service Unavailable or 403 Forbidden or even 404 Not Found.
So, you make up a header name, like X-My-CloudFront-Secret-String and a random string, like o+mJeNieamgKKS0Uu0A1Fqk7sOqa6Mlc3 and configure this as a Custom Origin Header in CloudFront. The values shown here are arbitrary examples -- this can be anything.
Then configure your application web server to deny any request where this header and the matching value are not present -- because this is how you know the request came from your specific CloudFront distribution. Anything else (other than ELB health checks, for which you need to make an exception) is not from your CloudFront distribution, and is therefore unauthorized by definition, so your server needs to deny it with an error, but without explaining too much in the error message.
This header and its expected value remains a secret because it will not be sent back to the browser by CloudFront -- it's only sent in the forward direction, in the requests that CloudFront sends to your ELB.
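As an illustration of that server-side check, here is a minimal sketch using a hypothetical Flask application and the arbitrary example header/value above:

```python
# Minimal sketch (hypothetical Flask app): reject any request that does not
# carry the secret header CloudFront was configured to add. Header name and
# value match the arbitrary examples above.
from flask import Flask, request, abort

app = Flask(__name__)

CF_SECRET_HEADER = "X-My-CloudFront-Secret-String"
CF_SECRET_VALUE = "o+mJeNieamgKKS0Uu0A1Fqk7sOqa6Mlc3"

@app.before_request
def require_cloudfront_secret():
    # Allow ELB health checks through; they hit a dedicated path and
    # do not come via CloudFront.
    if request.path == "/health":
        return None
    if request.headers.get(CF_SECRET_HEADER) != CF_SECRET_VALUE:
        abort(403)  # deny without explaining too much
```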
Note that you should get an SSL cert for your ELB (for the elb.example.com hostname) and configure CloudFront to forward all requests to your ELB using HTTPS. The likelihood of interception of traffic between CloudFront and the ELB is low, but this is a protection you should consider implementing.
You can optionally also reduce (but not eliminate) most unauthorized access by allowing only the CloudFront IP address ranges in the ELB security group. The CloudFront address ranges are published: search the JSON for blocks designated CLOUDFRONT, and allow only these in the ELB security group (see the sketch below). Note that if you do this, you still need the custom origin header configuration discussed above, because if you only filter at the IP level, you're still technically allowing anybody's CloudFront distribution to access your ELB. Your CloudFront distribution shares a pool of IP addresses with other CloudFront distributions, so the fact that a request arrives from CloudFront is not a sufficient guarantee that it is from your distribution. Note also that you should subscribe to change notifications so that if new address ranges are added to CloudFront, you'll know to add them to your security group.
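As a sketch of how to pull those CloudFront ranges from the published JSON (the URL is the standard AWS ip-ranges feed):

```python
# Minimal sketch: pull the published AWS IP ranges and print the CIDRs
# designated CLOUDFRONT, which are the ones to allow in the ELB security group.
import json
import urllib.request

IP_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

with urllib.request.urlopen(IP_RANGES_URL) as resp:
    data = json.load(resp)

cloudfront_cidrs = [
    p["ip_prefix"] for p in data["prefixes"] if p["service"] == "CLOUDFRONT"
]
for cidr in cloudfront_cidrs:
    print(cidr)
# These CIDRs could then be added as HTTPS ingress rules on the ELB's
# security group (e.g. via ec2.authorize_security_group_ingress).
```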