I am setting up Amazon CloudFront with a load balancer as the origin.
I have been able to set up CloudFront, but the one issue I am facing is that when I access the CloudFront endpoint, the URL just changes to the load balancer endpoint.
I have even whitelisted the Host header in the cache behavior. When I whitelist only Host, the CloudFront endpoint does not work, meaning I am not able to access the web application behind the ELB.
When I whitelist all three headers (Origin, Referer, and Host), I am also not able to access the web application from a web browser.
Finally, I tried whitelisting only (Origin, Referer); then I am able to access the web site, but it just changes the URL and shows the ELB endpoint.
Can anyone help me with this?
I have set up a GCP load balancer following the steps as displayed on https://cloud.google.com/load-balancing/docs/https/ext-http-lb-tf-module-examples#with_a_backend . I have created an A record at my DNS provider and I am successfully able to reach my service through the domain name. I have also created an HTTP-to-HTTPS redirecting load balancer, which only redirects when visiting the domain name.
However, my problem is that I can also still directly access my load balancer's IP address over HTTP, which in turn redirects to my backend service, thus allowing insecure access to my service. I am not sure what steps there are to debug my configuration, or whether anyone has experienced something similar.
The simplest method is to redirect HTTP to HTTPS at your backend. That method provides you with more options and control.
Tip: if the client arrives at an IP address, you most likely want to discard that traffic. That traffic is typically hackers, trolls, etc.
You can also set up a redirect in the load balancer:
Set up an HTTP-to-HTTPS redirect for global external HTTP(S) load balancer
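As a rough illustration of the backend-redirect approach, here is a minimal sketch for a Node/Express backend behind the GCP external HTTP(S) load balancer (which reports the original scheme in the X-Forwarded-Proto header). The example.com hostname and the port are placeholders, not values from your setup:

```typescript
// Minimal sketch: Node/Express backend behind the GCP external HTTP(S) load
// balancer. "example.com" and port 8080 are placeholders.
import express from "express";

const app = express();

app.use((req, res, next) => {
  // Drop traffic that arrived via the bare IP address instead of the domain.
  if (req.headers.host !== "example.com") {
    return res.status(403).send("Forbidden");
  }
  // The load balancer reports the original scheme in X-Forwarded-Proto;
  // bounce plain HTTP over to HTTPS.
  if (req.headers["x-forwarded-proto"] === "http") {
    return res.redirect(301, `https://${req.headers.host}${req.originalUrl}`);
  }
  next();
});

app.get("/", (_req, res) => res.send("hello over https"));
app.listen(8080);
```

The Host check is what handles the "client arrives at an IP address" case from the tip above: such requests never reach your application routes.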
My app structure is: default traffic goes to S3, and traffic to /api goes to the Application Load Balancer for my Node.js API backend. My application has been set up and it works when I test it directly, like: myapplicationloadbalancerDNS:5000.
I have created a CloudFront distribution with Alternate domain names set to my domain name, added one origin (S3), and created a Default (*) behavior for this S3 origin. It works when I test with my domain mydomainname.com.
I'm trying to create another distribution for /api pointing to my ALB. On this setting, my origin domain is the ALB and the protocol is HTTPS only. In its Behaviors settings, I created Path pattern: /api, origin and origin groups is the ALB, viewer protocol policy is Redirect HTTP to HTTPS, and Cache key and origin requests is Legacy cache settings with Include the following headers and Host added as a header.
Then, when I test my domain on the API path, mydomainname.com/api, I get a 503 error. Even with the link mydomainname.com/api:5000, I get an AccessDenied error.
On this setting, my origin domain is the ALB and the protocol is HTTPS only
You have to properly set up HTTPS on the ALB. First, HTTPS works on port 443, not 5000. Then you also need a valid public SSL certificate and your own domain name that you associate with the ALB.
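If you manage the listener programmatically, a hedged sketch with the AWS SDK for JavaScript (v3) might look like the following; all ARNs are placeholders for your own ALB, ACM certificate, and target group:

```typescript
import {
  ElasticLoadBalancingV2Client,
  CreateListenerCommand,
} from "@aws-sdk/client-elastic-load-balancing-v2";

const elbv2 = new ElasticLoadBalancingV2Client({ region: "us-east-1" });

// Placeholder ARNs: substitute your own ALB, ACM certificate and target group.
await elbv2.send(
  new CreateListenerCommand({
    LoadBalancerArn: "arn:aws:elasticloadbalancing:...:loadbalancer/app/my-alb/...",
    Protocol: "HTTPS",
    Port: 443,
    Certificates: [{ CertificateArn: "arn:aws:acm:...:certificate/..." }],
    DefaultActions: [
      {
        Type: "forward",
        TargetGroupArn: "arn:aws:elasticloadbalancing:...:targetgroup/my-api/...",
      },
    ],
  })
);
```

The target group can still forward to your containers on port 5000 internally; the point is that CloudFront talks to the ALB over HTTPS on 443 with a certificate that matches the origin domain name.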
My REST API (Node) is set up in AWS ECS behind a load balancer - super-long-aws-lb-url
I also have a domain registered and a subdomain for my backend which is set up as an A-record aliased to the load balancer; I access my rest API at something like data.mydomain.com/api/resource/{:id} - this is working as expected.
There's one endpoint that serves as a reverse proxy for accessing user-generated content - it's public and currently I can access it via
data.mydomain.com/api/content/public/{:id}
What I'd like to do is create a "pretty" url to just that endpoint in route53 so that the public endpoint becomes available via content.mydomain.com/{:content-id}
So far I've tried setting up this subdomain as a CNAME pointing directly to the string value composed of ALB URL + endpoint
content.mydomain.com -> super-long-aws-lb-url/api/content/public/
I expect that this will allow me to access that content at http://content.mydomain.com/{:content-id}, but I get a Server Not Found error.
Next I tried setting it up as an A-Record with an alias, but since it needs a resource with an IP address, I'm forced to select an AWS resource from a dropdown, and I'm back to using the load balancer without bypassing the global prefix (api) and the resource URL (content/public)
Is there a way to point a subdomain directly to an API endpoint in AWS?
Amazon Route 53 is a Domain Name Service (DNS).
DNS is used to resolve a domain name (eg data.mydomain.com) to an IP address, which allows traffic to be sent to a specific computer.
DNS only covers the domain name. It does not include anything after the slash.
Therefore, you can not use Amazon Route 53 to point to a 'path' (eg /api/content/public/).
Such redirection would be the job of any software running on the target computer. You can likely configure this in your web server software.
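One hypothetical sketch, if the "software running on the target computer" is the Node API itself: rewrite the path for requests that arrive on the pretty hostname before normal routing runs. This assumes content.mydomain.com is pointed at the same load balancer as data.mydomain.com, and the route names mirror the ones in the question:

```typescript
// Hypothetical sketch: host-based path rewriting inside the Node/Express API.
import express from "express";

const app = express();

// When a request arrives on the "pretty" hostname, map /{:content-id} onto the
// real public-content route before the normal routing runs.
app.use((req, _res, next) => {
  if (req.headers.host === "content.mydomain.com") {
    req.url = `/api/content/public${req.url}`;
  }
  next();
});

app.get("/api/content/public/:id", (req, res) => {
  res.send(`content ${req.params.id}`);
});

app.listen(5000);
```

With that in place, content.mydomain.com can simply be another A-record alias to the load balancer; the application, not DNS, takes care of the path.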
First time here but have used help from here a lot.
I managed to find some answers from this thread
Cloudfront and EC2
But as it is mentioned in answer, this issue is happening for me
“Be sure, when you connect through CloudFront, that the server doesn't redirect you back to the EC2 hostname or IP (the address bar in the browser will change, if it does, and you'll want to fix your web server's config if that happens).”
So for this, do I need to change anything in httpd.conf?
Or in EC2's firewall? I am using the Amazon AMI with LAMP.
Thanks
Pramit
It means that when your application points to another page in the app (eg index.html pointing to about.html), you should use relative references (/about.html rather than http://1.2.3.4/about.html).
This way, traffic coming in through CloudFront will continue to come in through CloudFront rather than be redirected elsewhere.
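As a minimal sketch of the same idea for server-side redirects (shown in Node/Express purely for illustration; the same principle applies to links in your HTML and to an Apache/PHP app), use a relative path rather than a hard-coded origin address:

```typescript
// Minimal sketch: redirect with a relative path so the browser stays on
// whatever hostname it used to reach you (the CloudFront domain).
import express from "express";

const app = express();

app.get("/", (_req, res) => {
  res.redirect("/about.html"); // stays on the CloudFront hostname
  // res.redirect("http://1.2.3.4/about.html"); // would leak the origin address
});

app.listen(8080);
```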
Update:
Let's say your configuration is:
A single Amazon EC2 instance with an Elastic IP address
A CloudFront distribution
Your own domain name that you'd like to point to CloudFront
In this case, you would:
Configure a CNAME record (eg www.example.com) on your Domain (on Route 53 or your DNS provider) to point to the CloudFront distribution URL
Configure Alternate Domain Names (CNAMEs) in CloudFront with your CNAME (www.example.com) -- this is so that it knows what domain name is being used to send requests to CloudFront
Set origin to the Elastic IP address of your EC2 instance -- this is where CloudFront obtains the information that it should cache and serve
If you want CloudFront to fetch data from a sub-path (sub-directory) of the origin, then set origin path to that path. For example, you might want to serve content from /dev or /prod.
See: Values That You Specify When You Create or Update a Web Distribution - Amazon CloudFront
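As an illustration of the DNS step above, here is a hedged sketch using the AWS SDK for JavaScript (v3); the hosted zone ID, record name, and CloudFront hostname are placeholders for your own values:

```typescript
import {
  Route53Client,
  ChangeResourceRecordSetsCommand,
} from "@aws-sdk/client-route-53";

const route53 = new Route53Client({ region: "us-east-1" });

// Placeholder hosted zone ID and distribution hostname.
await route53.send(
  new ChangeResourceRecordSetsCommand({
    HostedZoneId: "Z0000000000000",
    ChangeBatch: {
      Changes: [
        {
          Action: "UPSERT",
          ResourceRecordSet: {
            Name: "www.example.com",
            Type: "CNAME",
            TTL: 300,
            ResourceRecords: [{ Value: "d1234abcdefgh.cloudfront.net" }],
          },
        },
      ],
    },
  })
);
```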
I need to use AWS WAF for my web application hosted on AWS to provide additional rule-based security. I couldn't find any way to use WAF directly with an ELB, and WAF needs CloudFront to attach a web ACL that blocks actions based on rules.
So I added my application ELB CNAME to CloudFront (only the domain name), and configured a web ACL with an IP block rule and the HTTPS protocol on the distribution. Everything else was left at the defaults. Once both WAF and CloudFront with the ELB CNAME were in place, I tried to access the ELB CNAME from one of the IP addresses that is in the block-IP rule in WAF. I am still able to access my web application from that IP address. I also checked the CloudWatch metrics for the web ACL I created, and I see it is not even being hit.
First, is there a good way to achieve what I am doing, and second, is there a specific way to add an ELB CNAME on CloudFront?
Thanks and Regards,
Jay
Service update: The original, extended answer below was correct at the time it was written, but is now primarily applicable to Classic ELB, because -- as of 2016-12-07 -- Application Load Balancers (elbv2) can now be directly integrated with AWS WAF (Web Application Firewall).
Starting [2016-12-07] AWS WAF (Web Application Firewall) is available on the Application Load Balancer (ALB). You can now use AWS WAF directly on Application Load Balancers (both internal and external) in a VPC, to protect your websites and web services. With this launch customers can now use AWS WAF on both Amazon CloudFront and Application Load Balancer.
https://aws.amazon.com/about-aws/whats-new/2016/12/AWS-WAF-now-available-on-Application-Load-Balancer/
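That announcement predates the current WAFv2 API; with today's AWS SDK for JavaScript (v3), associating a regional web ACL with an ALB looks roughly like this (both ARNs are placeholders):

```typescript
import { WAFV2Client, AssociateWebACLCommand } from "@aws-sdk/client-wafv2";

const wafv2 = new WAFV2Client({ region: "us-east-1" });

// Placeholder ARNs: substitute your own regional web ACL and ALB.
await wafv2.send(
  new AssociateWebACLCommand({
    WebACLArn: "arn:aws:wafv2:us-east-1:123456789012:regional/webacl/my-acl/...",
    ResourceArn:
      "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/...",
  })
);
```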
It seems like you do need some clarification on how these pieces fit together.
So let's say your actual site that you want to secure is app.example.com.
It sounds as if you have a CNAME elb.example.com pointing to the assigned hostname of the ELB, which is something like example-123456789.us-west-2.elb.amazonaws.com. If you access either of these hostnames, you're connecting directly to the ELB -- regardless of what's configured in CloudFront or WAF. These machines are still accessible over the Internet.
The trick here is to route the traffic to CloudFront, where it can be firewalled by WAF. That means a couple of additional things have to happen: first, an additional hostname is needed, so you configure app.example.com in DNS as a CNAME (or Alias, if you're using Route 53) pointing to the dxxxexample.cloudfront.net hostname assigned to your distribution.
You can also access your site directly using the assigned CloudFront hostname, for testing. Accessing this endpoint from the blocked IP address should now indeed result in the request being denied.
So, the CloudFront endpoint is where you need to send your traffic -- not directly to the ELB.
Doesn't that leave your ELB still exposed?
Yes, it does... so the next step is to plug that hole.
If you're using a custom origin, you can use custom headers to prevent users from bypassing CloudFront and requesting content directly from your origin.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/forward-custom-headers.html
The idea here is that you will establish a secret value known only to your servers and CloudFront. CloudFront will send this in the headers along with every request, and your servers will require that value to be present or else they will play dumb and throw an error -- such as 503 Service Unavailable or 403 Forbidden or even 404 Not Found.
So, you make up a header name, like X-My-CloudFront-Secret-String and a random string, like o+mJeNieamgKKS0Uu0A1Fqk7sOqa6Mlc3 and configure this as a Custom Origin Header in CloudFront. The values shown here are arbitrary examples -- this can be anything.
Then configure your application web server to deny any request where this header and the matching value are not present -- because this is how you know the request came from your specific CloudFront distribution. Anything else (other than ELB health checks, for which you need to make an exception) is not from your CloudFront distribution, and is therefore unauthorized by definition, so your server needs to deny it with an error, but without explaining too much in the error message.
This header and its expected value remains a secret because it will not be sent back to the browser by CloudFront -- it's only sent in the forward direction, in the requests that CloudFront sends to your ELB.
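A minimal sketch of that check, assuming a Node/Express app behind the ELB; the header name and secret mirror the arbitrary examples above, and the /health path for the ELB health-check exception is hypothetical:

```typescript
// Minimal sketch: deny any request that does not carry the CloudFront custom
// origin header. Keep the real secret out of source control (e.g. an env var).
import express from "express";

const app = express();
const CLOUDFRONT_SECRET =
  process.env.CLOUDFRONT_SECRET ?? "o+mJeNieamgKKS0Uu0A1Fqk7sOqa6Mlc3";

app.use((req, res, next) => {
  // Let the ELB health check path through without the secret header.
  if (req.path === "/health") return next();

  // Node lower-cases incoming header names. Anything without the expected
  // value did not come through your CloudFront distribution: deny it,
  // without explaining too much.
  if (req.headers["x-my-cloudfront-secret-string"] !== CLOUDFRONT_SECRET) {
    return res.status(403).send("Forbidden");
  }
  next();
});

app.get("/health", (_req, res) => res.send("ok"));
app.get("/", (_req, res) => res.send("hello via CloudFront"));
app.listen(3000);
```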
Note that you should get an SSL certificate for your ELB (for the elb.example.com hostname) and configure CloudFront to forward all requests to your ELB using HTTPS. The likelihood of interception of traffic between CloudFront and the ELB is low, but this is a protection you should consider implementing.
You can optionally also reduce (but not eliminate) most unauthorized access by blocking all requests that don't arrive from CloudFront, allowing only the CloudFront IP address ranges in the ELB security group. The CloudFront address ranges are documented (search the JSON for blocks designated as CLOUDFRONT, and allow only these in the ELB security group). Note, however, that if you do this, you still need to set up the custom origin header configuration discussed above, because if you only block at the IP level, you're still technically allowing anybody's CloudFront distribution to access your ELB. Your CloudFront distribution shares IP addresses in a pool with other CloudFront distributions, so the fact that a request arrives from CloudFront is not a sufficient guarantee that it is from your CloudFront distribution. Note also that you need to sign up for change notifications so that if new address ranges are added to CloudFront, you'll know to add them to your security group.
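As a sketch only, the published ranges can be pulled from https://ip-ranges.amazonaws.com/ip-ranges.json and filtered to the CLOUDFRONT blocks; keeping the security group in sync (and the change-notification subscription) is still up to you. This assumes Node 18+ for the built-in fetch:

```typescript
// Sketch: list the CLOUDFRONT CIDRs that would need to be allowed in the
// ELB security group.
interface IpRanges {
  prefixes: { ip_prefix: string; region: string; service: string }[];
}

const res = await fetch("https://ip-ranges.amazonaws.com/ip-ranges.json");
const data = (await res.json()) as IpRanges;

const cloudfrontCidrs = data.prefixes
  .filter((p) => p.service === "CLOUDFRONT")
  .map((p) => p.ip_prefix);

console.log(cloudfrontCidrs);
```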