I have been tasked with upgrading the servers/endpoints that our AWS API Gateway uses to respond to requests. We are using a VPC link for the integration request. These servers are hosted on AWS Elastic Beanstalk.
Only two resources/methods in our API, /middleware and middleware-dev-4, go through this VPC link.
As our customers rely heavily on our API, I cannot easily swap out the servers. I can create new servers and point the APIs to those, but I do not want any downtime in our API service. Can you recommend a way to make this API change without impacting our customers?
I've seen several examples using canary releases, but they seem to pertain to Lambda functions rather than VPC links with EC2 servers as endpoints.
Edit --
AWS responded and agreed that I should register the new servers with the target group on the Network Load Balancer and then deregister the old ones.
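For reference, a minimal sketch of that swap with boto3 (the target group ARN and instance IDs below are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/my-nlb-targets/abc123"

# Register the new instances; the NLB starts health-checking them immediately.
elbv2.register_targets(
    TargetGroupArn=TARGET_GROUP_ARN,
    Targets=[{"Id": "i-0newserver1"}, {"Id": "i-0newserver2"}],
)

# Wait until the new targets pass health checks before removing the old ones.
waiter = elbv2.get_waiter("target_in_service")
waiter.wait(
    TargetGroupArn=TARGET_GROUP_ARN,
    Targets=[{"Id": "i-0newserver1"}, {"Id": "i-0newserver2"}],
)

# Deregistration respects the target group's deregistration delay
# (connection draining), so in-flight requests are allowed to finish.
elbv2.deregister_targets(
    TargetGroupArn=TARGET_GROUP_ARN,
    Targets=[{"Id": "i-0oldserver1"}, {"Id": "i-0oldserver2"}],
)
```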
Do you need to update servers or endpoints?
If endpoints: API Gateway has stages, and your customers use the endpoints published on some stage. Make your changes and publish a new stage; API Gateway will expose the new endpoints within seconds.
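A minimal sketch of publishing a deployment to a stage with boto3, assuming a REST API (the API ID and stage name are placeholders):

```python
import boto3

apigw = boto3.client("apigateway")

REST_API_ID = "a1b2c3d4e5"  # hypothetical REST API ID

# Deploying the current API configuration to a stage makes the changes
# live at https://{REST_API_ID}.execute-api.{region}.amazonaws.com/{stage}.
apigw.create_deployment(
    restApiId=REST_API_ID,
    stageName="prod",  # creates the stage if it does not exist yet
    description="Point integrations at the new targets",
)
```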
If servers: then API Gateway has little to do with it; how you run the servers is up to you. Check AWS Elastic Beanstalk blue/green deployments:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html
Because AWS Elastic Beanstalk performs an in-place update when you
update your application versions, your application might become
unavailable to users for a short period of time. To avoid this,
perform a blue/green deployment. To do this, deploy the new version to
a separate environment, and then swap the CNAMEs of the two
environments to redirect traffic to the new version instantly.
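A minimal sketch of the swap with boto3, assuming two hypothetical environment names:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# "blue" serves production traffic today; "green" runs the new version.
# After the swap, the production CNAME points at the green environment.
eb.swap_environment_cnames(
    SourceEnvironmentName="myapp-blue",
    DestinationEnvironmentName="myapp-green",
)
```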
By the way, check your DNS cache settings; DNS caching can lead to real issues and downtime. If you change the CNAME value in DNS, make sure clients are not caching the old value for a long time. First check the current TTL on that CNAME, because you will need that time period. Then lower the TTL to a minimum, such as 1 or 5 minutes, and wait out the original TTL so every resolver has picked up the new, short value. Only then do your blue/green swap, update the CNAME, and wait for the DNS cache to expire.
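One way to check the current TTL, sketched with dnspython (the record name is a placeholder):

```python
import dns.resolver  # pip install dnspython

# The TTL on the CNAME is how long resolvers may keep serving
# the old value after you change it.
answer = dns.resolver.resolve("myapp.example.com", "CNAME")  # hypothetical name
print(f"CNAME -> {answer[0].target}, TTL {answer.rrset.ttl} seconds")
```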
Related
Our current setup: the corporate network is connected via VPN to AWS; a Route 53 entry points to an ELB, which points to an ECS service (both inside a private VPC subnet).
=> When you request the URL (from inside the corporate network) you see the web application. ✅
Now, what we want is that when the ECS service is not running (maintenance, error, ...), users are shown a maintenance page directly.
At the moment you will see the default AWS 503 error page. We want to provide a simple static HTML page with some maintenance information.
What we tried so far:
Using Route 53 with failover to a CloudFront distribution serving an S3 bucket with the HTML
This does work, but:
Route 53 does not fail over very fast => until it switches to CloudFront, users will still see the default AWS 503 page.
since this is a DNS failover and browsers (and proxies, local DNS caches, ...) cache resolved entries, users will still see the default AWS 503 page even after Route 53 has switched. Only once the new IP address is resolved (which may take minutes, or until a browser or OS restart) will the user see the maintenance page.
the same as the two points above, but the other way around: when the service is back up, users will see the maintenance page for much longer than they should.
As this is not what we were looking for, we next tried:
Using CloudFront with two origins (our ELB and the failover S3 bucket) and a custom error page for 503.
This does not work, as CloudFront needs its origins to be publicly reachable and our ELB is in a private VPC subnet ❌
We could reconfigure our entire network environment to make the ELB public and restrict access to CloudFront IP ranges. While this would probably work, we see the following drawbacks:
Security is decreased: someone else could set up a CloudFront distribution with our web application as the target and would have full access to it from outside our corporate network.
To overcome this security issue, we would have to implement a secret header (which CloudFront would send to the application), which means having security code inside our application => why should our application handle that security? What if that code has a bug? (See the sketch after this list.)
Our current environment is already up and running. We would have to change a lot just for an error page, and it comes with reduced security overall!
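For illustration, a hypothetical version of that header check, assuming a Flask application and a made-up header name:

```python
import hmac
import os

from flask import Flask, abort, request  # assuming a Flask app for illustration

app = Flask(__name__)

# Shared secret that CloudFront would inject as a custom origin header
# (e.g. "X-Origin-Secret"); both names here are hypothetical.
ORIGIN_SECRET = os.environ["ORIGIN_SECRET"]

@app.before_request
def require_cloudfront_header():
    supplied = request.headers.get("X-Origin-Secret", "")
    # Constant-time comparison to avoid leaking the secret via timing.
    if not hmac.compare_digest(supplied, ORIGIN_SECRET):
        abort(403)
```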
Use a second ECS service (e.g. HAProxy, nginx, Apache, ...) in front of our application with an errorfile for our maintenance page (see the sketch after this list).
While this would work as expected, it also comes with some drawbacks:
The service is a single point of failure: when it is down, you cannot access the web application. To overcome this, you have to put it behind an ELB, run it in at least two AZs, and (optionally) make it horizontally scalable to handle larger request volumes.
The service costs money! Maybe you only need one small instance with little memory and CPU, but it (probably) has to scale together with your web application when you have a lot of requests.
It feels like we are back in the 2000s rather than in a cloud environment.
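For completeness, a rough sketch of what that errorfile approach might look like with nginx (upstream address and paths are placeholders):

```nginx
# Minimal reverse-proxy sketch: serve a static maintenance page whenever
# the upstream application is unreachable or erroring.
server {
    listen 80;

    location / {
        proxy_pass http://app-backend:8080;   # hypothetical upstream
        proxy_intercept_errors on;            # let nginx handle upstream errors
        error_page 502 503 504 /maintenance.html;
    }

    location = /maintenance.html {
        root /usr/share/nginx/html;           # maintenance page baked into the image
        internal;                             # only reachable via error_page
    }
}
```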
So, long story short: Are there any other ways to implement a f*****g simple maintenance page while keeping our web application private and secure?
My environment is a single instance, with no load balancer.
I cannot seem to add a load balancer to my existing app instance.
Other recommendations mention an Elastic Load Balancer, but they seem obsolete - I can't find a service under that name in AWS anymore.
I do not need caching or edge delivery - my application is entirely transactional APIs, so I probably don't need CloudFront.
I have a domain name and a name server (external to AWS). I have a certificate (generated in Certificate Manager).
How do I enable HTTPS for my Elastic Beanstalk Java application?
CloudFront is the easiest and cheapest way to add SSL termination, because AWS handles it all for you through its integration with Certificate Manager.
If you add an ELB, you have to run it 24/7, and it will roughly double the cost of a single-instance server.
If you want to terminate SSL on the server itself, you'll have to set that up yourself (using your web container: Apache, nginx, Tomcat, or whatever you're running). It's not easy to set up.
Even if you don't need caching, CloudFront is worth it just for handling your certificate (which is as simple as selecting the certificate from a drop-down).
I ended up using CloudFront.
That created a problem: cookies were not being passed through.
I created a custom cache policy to allow the cookies, and in doing so I also set the caching TTLs very low. This served my purposes.
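A minimal sketch of creating such a cache policy with boto3 (the policy name and TTL values are illustrative, not the exact ones I used):

```python
import boto3

cf = boto3.client("cloudfront")

# Forward all cookies to the origin and keep TTLs very low, so the
# transactional API responses are barely cached at the edge.
cf.create_cache_policy(
    CachePolicyConfig={
        "Name": "api-low-ttl-with-cookies",  # hypothetical name
        "MinTTL": 0,
        "DefaultTTL": 1,
        "MaxTTL": 5,
        "ParametersInCacheKeyAndForwardedToOrigin": {
            "EnableAcceptEncodingGzip": True,
            "HeadersConfig": {"HeaderBehavior": "none"},
            "CookiesConfig": {"CookieBehavior": "all"},
            "QueryStringsConfig": {"QueryStringBehavior": "all"},
        },
    }
)
```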
I want to use a GCP load balancer to terminate HTTPS and automatically manage HTTPS certificate renewal with Let's Encrypt.
The pricing calculator gives me $21.90/month for a single rule. Is this how much it would cost to do HTTPS termination for a single domain? Are there cheaper managed options on GCP?
Before looking at the price, and at other solutions, look at what you need. Are you aware of the Global Load Balancer's capabilities?
It offers you a single IP address reachable all over the globe and routes each request to the region closest to your user to reduce latency. If that region is down, or your backend there is at capacity (health checks failing), the request is routed to the next closest region.
It lets you rewrite URLs, manage SSL certificates, cache files on a CDN, scale with your traffic, deploy a security layer on top (such as IAP), and absorb DDoS attacks without impacting your backends.
And that price is for 5 forwarding rules, not only one.
Now, of course, you can do differently.
You can use a regional solution. These are often free or affordable, but you don't get all the Global Load Balancer features.
If your backend is on Cloud Run or App Engine, HTTPS is already managed for you. Cloud Endpoints is a solution for Cloud Functions (and other API endpoints).
You can deploy and set up your own nginx with your SSL certificate on a Compute Engine instance.
If you only want to serve static files, have a look at Firebase Hosting.
I have an application that needs to handle huge traffic. The previous version of the application hit nearly 2,000,000 requests in 15 minutes. That version had no CDN, so I needed to deploy nearly 50 containers each for the frontend and backend. Now I have added a CDN in front of the application; I chose AWS CloudFront because the application is hosted on AWS.
Right now I need to load test this new application. If I run the load test against the CloudFront URL, will it show the exact result that real users will get when served through CloudFront?
If I load test against the load balancer URL and work out the number of servers required to handle the load, will that be over-provisioning? Since CloudFront serves my application from nearly 189 edge locations (per the AWS docs), are that many origin servers really required?
How can I find a relation between the amount of traffic that can be handled with and without Cloudfront?
Load testing CloudFront itself is not the best idea; according to the CloudFront main page:
The Amazon CloudFront content delivery network (CDN) is massively scaled and globally distributed.
However, you could test the performance of your website with and without the CDN to see whether there is a benefit/ROI to using CloudFront. It doesn't come for free, so make sure it's worthwhile; it may turn out that your application's performance is sufficient without the CDN integration.
Check out 6 CDN Load Testing Best Practices for more details.
Also make sure to add a DNS Cache Manager to your test plan so that each JMeter thread (virtual user) independently resolves the ELB's underlying server addresses; otherwise all threads may end up hitting the same IP address.
You can conduct the load test with the CloudFront URL, as that is the real user scenario.
Please check that auto scaling is enabled for the servers. You also need to monitor the load balancer during test execution to validate the traffic.
Also check the settings of any security software/filters for compression and caching headers on the requests. Sometimes these security filters strip or ignore those headers, which impacts application performance in the AWS cloud.
Use AWS CloudWatch to monitor the servers.
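A minimal sketch of pulling load balancer metrics with boto3 during the test (the LoadBalancer dimension value is a placeholder):

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

# Request counts for the load balancer over the last 15 minutes,
# in 1-minute buckets.
now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/ApplicationELB",
    MetricName="RequestCount",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/my-alb/0123456789abcdef"}],
    StartTime=now - timedelta(minutes=15),
    EndTime=now,
    Period=60,
    Statistics=["Sum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])
```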
I am in the process of setting up a medium-sized AWS infrastructure for a web project. I may be overthinking a couple of things and therefore wanted to ask the community for opinions. Any input is appreciated.
Please see the graphic: here
Explanation (from left to right):
My domain is hosted on GoDaddy and will simply route to CloudFront in order to cache static content globally.
CloudFront will point to Route 53, which is responsible for routing the user to the closest region based on geoproximity and/or latency.
Each region will have a load balancer pointing to EC2 instances (in different availability zones for disaster fallback).
From there, each EC2 instance writes to a single MySQL database. Static content is loaded from an S3 bucket.
This MySQL database replicates/synchronizes itself across availability zones and regions and creates read replicas.
If an EC2 instance has a read request, it contacts another Route 53 router that forwards the read request to a load balancer (in each region) based on where the request comes from (geoproximity/latency). The only alternative I see here would be to point read requests from a European EC2 instance directly at a European load balancer (and vice versa for the US).
The load balancer in each region will then decide which database to read from based on health or request volume.
Each EC2 instance can also trigger a Lambda function through an API Gateway.
What am I missing? Is this too much? What are the ups and downs of this construct?
Thank you all so much!
Things look reasonable up to step 6. There's no need to find the nearest MySQL database, because your instances already know where it is -- it's the one in the local region.
Step 7 is problematic because ELBs can't be used to balance RDS instances. However, with Aurora/MySQL, you get a single cluster "reader" endpoint hostname with a short TTL that load balances across your Aurora replicas. If a replica dies, it's removed from DNS automatically.
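A minimal sketch of the read/write split that gives you, assuming PyMySQL and hypothetical cluster endpoints:

```python
import os

import pymysql  # pip install pymysql

# Hypothetical Aurora MySQL endpoints: the cluster endpoint always points
# at the writer; the "-ro" reader endpoint DNS-balances across replicas.
WRITER = "mycluster.cluster-abc123.eu-west-1.rds.amazonaws.com"
READER = "mycluster.cluster-ro-abc123.eu-west-1.rds.amazonaws.com"

def connect(host):
    return pymysql.connect(
        host=host,
        user="app",
        password=os.environ["DB_PASSWORD"],
        database="appdb",
    )

# Reads need no Route 53 or ELB layer: the reader endpoint does the balancing.
with connect(READER) as conn, conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())
```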
Step 8: it's not strictly necessary to use API Gateway -- instances can invoke Lambda functions directly through the Lambda API.
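A minimal sketch of a direct invocation with boto3 (the function name and payload are hypothetical):

```python
import json

import boto3

lam = boto3.client("lambda")

# Synchronous invocation straight from the instance; no API Gateway involved.
response = lam.invoke(
    FunctionName="my-processing-function",
    InvocationType="RequestResponse",  # use "Event" for async fire-and-forget
    Payload=json.dumps({"orderId": 42}).encode(),
)
result = json.loads(response["Payload"].read())
```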
Additionally, there's Lambda@Edge, which allows triggering Lambda functions directly from CloudFront. If the Lambda function you need is large (dependencies) or needs to run inside a VPC, you have to cascade two of them -- the edge function (not in a VPC) invokes the regional function (large, or in a VPC) -- but this is still typically less expensive than API Gateway. Edge functions automatically replicate globally and run in the region closest to the CloudFront edge handling the individual request; within any given invocation, that region can be identified by inspecting process.env.AWS_REGION. Edge functions can also be used to change the origin serving the content -- so, e.g., if your function sees that it's been invoked in an EU region, it can rewrite the request so that CloudFront sends S3 requests to an EU bucket.
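A rough sketch of such an origin-request handler, written here in Python rather than Node.js (bucket and region names are placeholders):

```python
import os

def handler(event, context):
    """Lambda@Edge origin-request handler that picks a regional S3 origin."""
    request = event["Records"][0]["cf"]["request"]

    # The function runs in the region closest to the edge that handled the
    # request, so the executing region is a rough proxy for viewer location.
    if os.environ["AWS_REGION"].startswith("eu-"):
        domain = "my-bucket-eu.s3.eu-west-1.amazonaws.com"  # hypothetical bucket
        request["origin"]["s3"]["domainName"] = domain
        request["origin"]["s3"]["region"] = "eu-west-1"
        request["headers"]["host"] = [{"key": "Host", "value": domain}]

    return request
```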
If your site is at the apex of a domain, e.g. example.com rather than, say, www.example.com, your domain will need to be hosted in Route 53, not GoDaddy, because constraints in the DNS standards do not allow the dynamic behavior CloudFront requires at the apex. You can still have your domain registered with GoDaddy, but not hosted by them.