I'm using AWS S3 for front-end web hosting and AWS EC2 for back-end hosting.
The EC2 instance is behind an ELB and has scheduled maintenance windows, and I want to display a maintenance page while the instance is under maintenance.
The way I set it up is to have index.html request some files on EC2; if the server is unavailable, those requests return an HTTP 503 error. There is a 503.html in S3, and I want to display it when the 503 error happens.
I've tried creating a new CloudFront Error Page and creating S3 redirection rules, but neither of them works. What is the best way to set up the maintenance page?
I've been searching for a quick way to do this. We need to return a 503 error to the world during DB upgrade, but white list a few IPs of developers so they can test it before opening back up to public.
I found a one-stop solution:
Go to the Load Balancers section in EC2 and select the load balancer you would like to target. Below, you should see Listeners. Click on a listener and edit its rules. Create a rule like this:
Now everyone gets a pretty maintenance page returned with a 503 error code, and only the two IP addresses in the first rule will be able to browse the site. Order is important: the two IP exceptions go on top, and evaluation proceeds down the list. The last item, the default rule, is always there.
https://aws.amazon.com/about-aws/whats-new/2018/07/elastic-load-balancing-announces-support-for-redirects-and-fixed-responses-for-application-load-balancer/
Listener Rules for Your Application Load Balancer:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-update-rules.html
Note: There is a maximum of 1024 characters for the fixed-response body.
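For reference, the same setup can be built programmatically. Below is a hedged sketch of the two pieces as boto3-style parameter dicts: one rule forwarding the allow-listed developer IPs to the real target group, and a fixed-response 503 as the default action. All ARNs and IP addresses are placeholders, not real values.

```python
# Sketch: allow-listed IPs reach the app; everyone else gets a fixed 503.
# ARNs and IPs below are placeholders.

MAINTENANCE_HTML = "<html><body><h1>Down for maintenance</h1></body></html>"
assert len(MAINTENANCE_HTML) <= 1024  # fixed-response body limit

# Rule 1 (highest priority): developer IPs are forwarded to the real target group.
allow_rule = {
    "ListenerArn": "arn:aws:elasticloadbalancing:<region>:<acct>:listener/app/my-alb/...",
    "Priority": 1,
    "Conditions": [{
        "Field": "source-ip",
        "SourceIpConfig": {"Values": ["203.0.113.10/32", "203.0.113.11/32"]},
    }],
    "Actions": [{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:<region>:<acct>:targetgroup/my-app/...",
    }],
}

# Default rule (the "always there" last item): fixed 503 + maintenance page.
default_actions = [{
    "Type": "fixed-response",
    "FixedResponseConfig": {
        "StatusCode": "503",
        "ContentType": "text/html",
        "MessageBody": MAINTENANCE_HTML,
    },
}]

# To apply them (requires boto3 and AWS credentials; not run here):
#   import boto3
#   elbv2 = boto3.client("elbv2")
#   elbv2.create_rule(**allow_rule)
#   elbv2.modify_listener(ListenerArn=allow_rule["ListenerArn"],
#                         DefaultActions=default_actions)
```

Because the allow rule has a lower priority number, it is evaluated first; everyone who doesn't match it falls through to the default fixed response.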
It sounds like you are looking for the origin failover functionality (https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html), which lets you fail over to a second origin if requests to the first origin start failing. Alternatively, you could configure Route 53 health checks and do the failover at the DNS level (https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html).
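The origin-failover route boils down to an origin group in the distribution config that names a primary and a secondary origin plus the status codes that trigger failover. A rough sketch of that structure, with hypothetical origin IDs (`primary-alb` and `maintenance-s3` are made up for illustration):

```python
# Sketch of a CloudFront origin group that fails over from the primary
# origin (an ALB) to a secondary origin (an S3 static site) on 5xx errors.
# Origin IDs are hypothetical; this fragment would sit inside a full
# DistributionConfig under "OriginGroups".
origin_group = {
    "Id": "maintenance-failover",
    "FailoverCriteria": {
        # CloudFront retries the request against the second member when the
        # first returns one of these status codes.
        "StatusCodes": {"Quantity": 3, "Items": [500, 502, 503]},
    },
    "Members": {
        "Quantity": 2,
        "Items": [
            {"OriginId": "primary-alb"},     # normal traffic
            {"OriginId": "maintenance-s3"},  # served when the ALB errors
        ],
    },
}
```

Note the caveat discussed later in this thread: CloudFront origins must be reachable from CloudFront, so this only works if the ALB is not locked inside a private subnet.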
I am attempting to set up MWAA in AWS, and the UI web server needs to be inside a private subnet. Based on the documentation, setting up access to the web server VPC endpoints requires a VPN/bastion/load balancer, and I would ideally like to use the load balancer to grant users access.
I am able to see the created VPC endpoint, and it is associated with an IP in each of the two subnets chosen during the initial environment setup.
Target groups were set up for these IP addresses with HTTPS:443.
An internal ALB was created with the target groups above.
The UI is presented in the MWAA console as a pop-out link. When accessing it, I am sent to a page that says "This site can't be reached", and the URL has a syntax similar to
https://####-vpce.c71.us-east-1.airflow.amazonaws.com/aws_mwaa/aws-console-sso?login=true<MWAA_WEB_TOKEN>
If I replace the beginning of the URL with the one below, I am able to get to the proper MWAA webpage. There are some HTTPS certificate issues, which I can figure out later, but this seems to be the proper landing page I need to reach.
https://<INTERNAL_ALB_A_RECORD>/aws_mwaa/aws-console-sso?login=true<MWAA_WEB_TOKEN>
If I access just the internal ALB A record in my browser
https://<INTERNAL_ALB_A_RECORD>
I get redirected to a login page for MWAA; after clicking the login button, I get redirected to the URL below, which shows the "This site can't be reached" page.
https://####-vpce.c71.us-east-1.airflow.amazonaws.com/aws_mwaa/aws-console-sso?login=true<MWAA_WEB_TOKEN>
I am not sure exactly where the issue is, but it seems that I am not being redirected to where I need to go.
Should I try an NLB pointing to the ALB as a target group? Additionally, I read that accessing an internal ALB requires access to the VPC. What does this mean exactly?
I am still unsure what the root cause is for the redirect not taking place.
It turns out our network does not need an ALB or NLB in this scenario, since we have a Direct Connect setup that lets us resolve VPC endpoints while on our corporate VPN. I still end up at a "This site can't be reached" error page, and to get to the proper landing page I just have to hit Enter again in my browser's URL bar.
If anyone else comes across this, make sure you have the login token appended to the end of your URL before hitting Enter again.
I have a case open with AWS on this so I will update if they are able to figure out the root cause.
Our current setup: the corporate network is connected to AWS via VPN, and a Route 53 entry points to an ELB, which points to an ECS service (both inside a private VPC subnet).
=> When you request the URL (from inside the corporate network) you see the web application. ✅
Now, what we want is that when the ECS service is not running (maintenance, error, ...), users are shown a maintenance page directly.
At the moment they see the default AWS 503 error page. We want to provide a simple static HTML page with some maintenance information.
What we tried so far:
Using Route 53 with failover to a CloudFront distribution serving an S3 bucket with the HTML
This does work, but:
Route 53 does not fail over very fast => until it switches to CloudFront, users still see the default AWS 503 page.
since this is a DNS failover and browsers (and proxies, local DNS caches, ...) cache resolved entries, users will still see the default AWS 503 page even after Route 53 has switched. Only after the new IP address is resolved (which may take minutes, or until a browser or OS restart) will they see the maintenance page.
the same as the two points above, but the other way around: when the service is back up, users will see the maintenance page far longer than they should.
As this is not what we were looking for, we next tried:
Using CloudFront with two origins (our ELB and the failover S3 bucket) and a custom error page for 503.
This does not work, as CloudFront needs its origins to be publicly accessible and our ELB is in a private VPC subnet ❌
We could reconfigure our entire network environment to make it public and restrict access to CloudFront IPs. While this would probably work, we see the following drawbacks:
Security is decreased: someone else could set up a CloudFront distribution with our web application as the target and have full access to it from outside our corporate network.
To overcome this security issue, we would have to verify a secret header (sent from CloudFront to the application), which means having security code inside our application => why should our application handle that security? What if the code has a bug?
Our current environment is already up and running. We would have to change a lot just for an error page, and end up with reduced security overall!
Using a second ECS service (e.g. HAProxy, nginx, Apache, ...) in front of our application, with an errorfile for our maintenance page.
While this will work like expected, it also comes with some drawbacks:
The service is a single point of failure: when it is down, you cannot access the web application. To overcome this, you have to put it behind an ELB, run it in at least two AZs, and (optionally) make it horizontally scalable to handle larger request volumes.
The service costs money! Maybe you only need one small instance with little memory and CPU, but it (probably) has to scale together with your web application when you have a lot of requests!
It feels like we are back in the 2000s, not in a cloud environment.
So, long story short: Are there any other ways to implement a f*****g simple maintenance page while keeping our web application private and secure?
The goal is that when the service is unavailable, requests are redirected to a Lambda function.
Requests go through an AWS Elastic Load Balancer to a service in ECS. Say the service is paused (i.e. task count = 0); then we get a "503 service temporarily unavailable" message on the web.
When this happens, I would like to redirect the request to a Lambda or anywhere else.
addition1:
I am using Route 53 as my DNS.
I am filtering requests at the ALB level: in Route 53 I have a record *.example.com pointing to my ALB, and on the ALB I filter using host-header and send to the target group containing the target.
addition2: (based on my research so far)
I have two approaches in mind; comments would be helpful:
Approach 1: two target groups, one forwarding to the target, the other to a Lambda function; based on trigger events I can change the weights for these target groups.
Approach 2: two targets in a single target group; based on the event I would toggle between targets.
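Approach 1 can be sketched as a single weighted "forward" action on the ALB rule: flipping the weights moves traffic between the ECS target group and the Lambda target group. The ARNs below are placeholders.

```python
# Sketch of Approach 1: one forward action with two weighted target groups.
# Target group ARNs are placeholders.

def weighted_forward(service_weight: int) -> dict:
    """Build an ALB forward action sending `service_weight` out of 100
    to the ECS target group and the remainder to the Lambda target group."""
    return {
        "Type": "forward",
        "ForwardConfig": {
            "TargetGroups": [
                {"TargetGroupArn": "arn:aws:elasticloadbalancing:<region>:<acct>:targetgroup/ecs-service/...",
                 "Weight": service_weight},
                {"TargetGroupArn": "arn:aws:elasticloadbalancing:<region>:<acct>:targetgroup/maintenance-lambda/...",
                 "Weight": 100 - service_weight},
            ],
        },
    }

normal = weighted_forward(100)     # all traffic to the ECS service
maintenance = weighted_forward(0)  # all traffic to the Lambda target group

# A trigger event would apply the switch like (boto3, not run here):
#   elbv2.modify_rule(RuleArn=rule_arn, Actions=[maintenance])
```

The switch is a single `modify_rule` call either way, which makes it easy to drive from an EventBridge rule or a deployment script.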
You have two choices for how to handle this type of issue.
The first is to use Route 53. If you're not already using Route 53 as your DNS provider, you would first need to migrate to it.
Once migrated, you can update the record for the host to a failover record, which automates failing over to the secondary value in the event of an issue.
However, not everyone wants to migrate to Route 53, or can. In that case there is another solution: put a CloudFront distribution in front of your endpoint.
By doing this you are presented with a couple of solutions:
CloudFront custom error pages for displaying a nice friendly error if something occurs (this is how Amazon used to display their "Dogs of Amazon" error pages during Prime Day).
Use Lambda@Edge to modify the behaviour if an error code is detected.
The advantage of the CloudFront solutions is that both apply instantly when an error occurs, whereas Route 53 potentially takes a few health checks (roughly 30-60 seconds) before failover.
This also means that whenever your service becomes available again, service returns instantly with CloudFront.
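A minimal Lambda@Edge handler for the second option might look like the sketch below: attached to the origin-response event, it swaps any 5xx coming back from the origin for a static maintenance page. The event shape follows CloudFront's Lambda@Edge event structure; the page body is a made-up example.

```python
# Hedged sketch of a Lambda@Edge origin-response handler that replaces any
# 5xx from the origin with a friendly maintenance page.

MAINTENANCE_BODY = "<html><body><h1>Be right back</h1></body></html>"

def handler(event, context):
    # Lambda@Edge delivers the origin's response under Records[0].cf.response.
    response = event["Records"][0]["cf"]["response"]
    if response["status"].startswith("5"):
        response["status"] = "503"
        response["statusDescription"] = "Service Unavailable"
        response["body"] = MAINTENANCE_BODY
        response["headers"]["content-type"] = [
            {"key": "Content-Type", "value": "text/html"}
        ]
    return response
```

Non-error responses pass through untouched, so the function only costs you anything on the error path.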
You can't do it directly on ELB alone.
However, if you can use Route53, you could configure DNS failover with failover records.
Using failover records, you define primary and secondary records. The primary would point to the ELB, while the secondary points, for example, to an S3 static website. The health checks on the primary record would automatically fail your users over to the secondary record if the ELB becomes unhealthy (e.g. your ECS task count goes to zero).
When the ELB becomes healthy again, failover will automatically start routing traffic to it again.
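The primary/secondary pair described above can be sketched as a Route 53 change batch. The hosted zone IDs and DNS names below are placeholders; the primary alias uses `EvaluateTargetHealth` (a dedicated `HealthCheckId` could be attached instead).

```python
# Sketch of failover alias records for Route 53. Zone IDs and DNS names
# are placeholders, not real values.
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": "primary-elb",
                "Failover": "PRIMARY",
                "AliasTarget": {
                    "HostedZoneId": "<elb-hosted-zone-id>",
                    "DNSName": "my-elb-123456789.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,  # drives the failover
                },
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": "secondary-s3",
                "Failover": "SECONDARY",
                "AliasTarget": {
                    "HostedZoneId": "<s3-website-hosted-zone-id>",
                    "DNSName": "s3-website-us-east-1.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        },
    ]
}

# Applied via (boto3, not run here):
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="<zone-id>", ChangeBatch=change_batch)
```

Keep in mind the DNS-caching caveat raised earlier in this thread: even after Route 53 switches, clients holding a cached answer will keep hitting the old endpoint until their TTL expires.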
I can code but I don't understand anything about servers or DNS settings.
I found I could easily use Beanstalk to launch applications I'm working on, but I do not fully understand how to properly create DNS entries and enable HTTPS.
In Route 53 I was able to successfully create a "Hosted Zone", point my domain to Amazon's four name servers, and create two A records. I created an A record for "domain.tld" and "www.domain.tld", and for each I selected "yes" for alias and "yes" for evaluate target health. For each A record's alias I entered the long "environment.key.region2.elasticbeanstalk.com" URL given for the application I created in Beanstalk.
To my surprise everything worked and visiting domain.tld or www.domain.tld goes to the root index.php file. Is there a better way to do this? I'm not sure if what I did is the correct way to do this.
Also, part two of my question: how do I set up HTTPS? I watched a YouTube video where the presenter goes to Services > Certificate Manager and enters "*.domain.tld", which I did. I selected DNS validation, created the CNAME record as requested, and the status updated to "issued". I then went back to Beanstalk > Configuration > Load Balancer and, under "secure ELB listener", selected HTTPS for the protocol and "*.domain.tld" for my certificate.
Now when I go to www.domain.tld or domain.tld nothing happens. If I go to https://www.domain.tld it shows the certificate, but if I go to https://domain.tld I get a "connection not private NET::ERR_CERT_COMMON_NAME_INVALID" click-to-continue type message.
Generally speaking, I'd like everything to automatically go to https://domain.tld without anyone typing in https://.
I had to change my environment type to "load balanced" to see the HTTPS/certificate settings, but I want to be able to use HTTPS on a "single instance" as well.
Also, when making changes to Route 53 (the DNS stuff), do I need to restart my application?
1) Your Route53 -> EB setup sounds fine.
2) The certificate problem you're seeing when trying to browse to https://domain.tld is because your cert only covers *.domain.tld, which does not include domain.tld. You can reissue the certificate to cover both.
3) If you want to redirect http://domain.tld to https://domain.tld, you'll need logic in your web server (apache, nginx, etc) to do that, as DNS does not operate at the protocol level.
4) If you want to use a certificate directly on an EB instance rather than on a load balancer, then you'll have to install the certificate and configure your web server appropriately. If you can afford the expense of keeping the load balancer, that'd be a much easier solution.
5) You shouldn't need to restart your app after making DNS changes.
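The redirect logic from point 3 is usually a one-line rule in nginx or Apache; written out as a tiny WSGI middleware, the same idea looks like this sketch (`X-Forwarded-Proto` is the header a load balancer uses to pass along the original scheme):

```python
# Sketch: redirect plain-HTTP requests to HTTPS at the application layer.

def https_redirect(app):
    """Wrap a WSGI app so that non-HTTPS requests get a 301 to https://."""
    def middleware(environ, start_response):
        # Behind a load balancer the original scheme arrives in
        # X-Forwarded-Proto rather than wsgi.url_scheme.
        proto = environ.get("HTTP_X_FORWARDED_PROTO",
                            environ.get("wsgi.url_scheme", "http"))
        if proto != "https":
            host = environ.get("HTTP_HOST", "domain.tld")
            location = "https://" + host + environ.get("PATH_INFO", "/")
            start_response("301 Moved Permanently", [("Location", location)])
            return [b""]
        return app(environ, start_response)
    return middleware
```

In nginx the equivalent is a `return 301 https://$host$request_uri;` in the port-80 server block; the middleware above just shows where that decision lives when DNS can't make it for you.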
I need to use AWS WAF for my web application hosted on AWS to provide additional rule-based security. I couldn't find any way to use WAF directly with ELB; WAF needs CloudFront to attach a web ACL that blocks actions based on rules.
So, I added my application ELB's CNAME to CloudFront (only the domain name), along with a web ACL containing an IP-block rule, and enabled the HTTPS protocol on CloudFront. Everything else was left at defaults. Once both WAF and CloudFront with the ELB CNAME were in place, I tried to access the ELB CNAME from one of the IP addresses in the WAF block rule. I am still able to access my web application from that IP address. I also checked the CloudWatch metrics for the web ACL and see it is not even being hit.
First, is there a good way to achieve what I'm trying to do, and second, is there a specific way to add an ELB CNAME to CloudFront?
Thanks and Regards,
Jay
Service update: The original, extended answer below was correct at the time it was written, but is now primarily applicable to Classic ELB, because -- as of 2016-12-07 -- Application Load Balancers (elbv2) can be directly integrated with AWS WAF (Web Application Firewall).
Starting [2016-12-07] AWS WAF (Web Application Firewall) is available on the Application Load Balancer (ALB). You can now use AWS WAF directly on Application Load Balancers (both internal and external) in a VPC, to protect your websites and web services. With this launch customers can now use AWS WAF on both Amazon CloudFront and Application Load Balancer.
https://aws.amazon.com/about-aws/whats-new/2016/12/AWS-WAF-now-available-on-Application-Load-Balancer/
It seems like you do need some clarification on how these pieces fit together.
So let's say your actual site that you want to secure is app.example.com.
It sounds as if you have a CNAME elb.example.com pointing to the assigned hostname of the ELB, which is something like example-123456789.us-west-2.elb.amazonaws.com. If you access either of these hostnames, you're connecting directly to the ELB -- regardless of what's configured in CloudFront or WAF. These machines are still accessible over the Internet.
The trick here is to route the traffic to CloudFront, where it can be firewalled by WAF, which means a couple of additional things have to happen: first, this means an additional hostname is needed, so you configure app.example.com in DNS as a CNAME (or Alias, if you're using Route 53) pointing to the dxxxexample.cloudfront.net hostname assigned to your distribution.
You can also access your site directly using the assigned CloudFront hostname, for testing. Accessing this endpoint from the blocked IP address should now indeed result in the request being denied.
So, the CloudFront endpoint is where you need to send your traffic -- not directly to the ELB.
Doesn't that leave your ELB still exposed?
Yes, it does... so the next step is to plug that hole.
If you're using a custom origin, you can use custom headers to prevent users from bypassing CloudFront and requesting content directly from your origin.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/forward-custom-headers.html
The idea here is that you will establish a secret value known only to your servers and CloudFront. CloudFront will send this in the headers along with every request, and your servers will require that value to be present or else they will play dumb and throw an error -- such as 503 Service Unavailable or 403 Forbidden or even 404 Not Found.
So, you make up a header name, like X-My-CloudFront-Secret-String and a random string, like o+mJeNieamgKKS0Uu0A1Fqk7sOqa6Mlc3 and configure this as a Custom Origin Header in CloudFront. The values shown here are arbitrary examples -- this can be anything.
Then configure your application web server to deny any request where this header and the matching value are not present -- because this is how you know the request came from your specific CloudFront distribution. Anything else (other than ELB health checks, for which you need to make an exception) is not from your CloudFront distribution, and is therefore unauthorized by definition, so your server needs to deny it with an error, but without explaining too much in the error message.
This header and its expected value remains a secret because it will not be sent back to the browser by CloudFront -- it's only sent in the forward direction, in the requests that CloudFront sends to your ELB.
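On the application side, the check can be as small as the sketch below. The header name and secret are the arbitrary examples from above; a constant-time comparison (`hmac.compare_digest`) avoids leaking information through timing.

```python
# Sketch: verify the secret custom origin header sent by CloudFront.
# Header name and secret value are the arbitrary examples from the text.
import hmac

EXPECTED_SECRET = "o+mJeNieamgKKS0Uu0A1Fqk7sOqa6Mlc3"

def is_from_our_cloudfront(headers: dict) -> bool:
    """Return True only if the secret custom origin header matches.
    Remember to exempt ELB health-check requests before applying this."""
    supplied = headers.get("X-My-CloudFront-Secret-String", "")
    return hmac.compare_digest(supplied, EXPECTED_SECRET)
```

A request failing this check would then get the deliberately uninformative 403/404/503 described above.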
Note that you should get an SSL cert for your ELB (for the elb.example.com hostname) and configure CloudFront to forward all requests to your ELB using HTTPS. The likelihood of interception of traffic between CloudFront and ELB is low, but this is a protection you should consider implementing.
You can optionally also reduce (but not eliminate) most unauthorized access by allowing only the CloudFront IP address ranges in the ELB security group. The CloudFront address ranges are documented (search the JSON for blocks designated as CLOUDFRONT, and allow only these in the ELB security group). Note, however, that if you do this, you still need the custom origin header configuration discussed above: blocking at the IP level alone still technically allows anybody's CloudFront distribution to access your ELB, because your CloudFront distribution shares IP addresses in a pool with other CloudFront distributions. The fact that a request arrives from CloudFront is not a sufficient guarantee that it is from your distribution. Note also that you need to sign up for change notifications so that if new address ranges are added to CloudFront, you'll know to add them to your security group.