Setting up Latency Routing in AWS

I've been digging through the AWS docs for ages and am at my wits' end trying to find examples beyond the official AWS ones.
How do I decide whether I should have failover routing, latency routing, or both? I currently have the site on Elastic Beanstalk with both a dev and a production version, but I get 500 or 502 errors at least a couple of times a month: if you refresh the page it eventually loads, but then the CSS is missing or the page doesn't load, and sometimes the page is just slow to load even with caching. How am I supposed to know whether this calls for failover routing, latency routing, or both? The AWS notifications only say “Environment health has transitioned from Degraded to Severe”. How do I log which AWS server Route 53 had serve the page?
Are you supposed to have multiple EC2 instances for latency-based routing? I'm confused about why the docs say to create a latency record for each of my EC2 instances.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/TutorialTransitionToLBR.html
I currently have CodePipeline connected to my GitHub, so that changes are automatically deployed to the dev site, and then I manually approve changes to production. If I have multiple EC2 instances, do I need to set up a pipeline for each instance, connected to my GitHub, and then manually approve changes for all of them? That is, would I just have multiple copies of the site hosted in different regions in this situation? How do people manage this? I'm assuming there's some way to approve a production launch for all instances at once if this is what is done, but I don't know what to google.

Related

How to restrict random (unidentified) requests to a DRF based API hosted on an AWS EC2 instance?

I'm running a DRF-based API deployed with Docker on an EC2 instance. A few days after deploying, the API stopped responding properly, and that's when I noticed unidentified requests hitting my application at huge volumes.
Although all these requests ultimately return 503 because the requested paths don't actually exist, I want to take steps to restrict such requests.
FYR, I have a frontend app (React-based) running on AWS Amplify which consumes this API. I was looking for ways to restrict inbound requests to this EC2 instance to only the Amplify app, but realised that Amplify doesn't offer a static IP of its own. Any solution to that would also be appreciated.
UPDATE:
I have also put my domain name in the ALLOWED_HOSTS setting of my DRF project, but I am still receiving such hits.
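A quick sanity check of that setting (a sketch; the IP and host names are placeholders): ALLOWED_HOSTS only makes Django reject requests whose Host header doesn't match, answering with HTTP 400 (DisallowedHost). The requests still reach the box, so they will still show up in your nginx/Docker logs even when the setting works.
# Replace 203.0.113.10 and the host names with your own values.
curl -i -H "Host: mydomain.com" http://203.0.113.10/api/   # allowed host: normal response
curl -i -H "Host: bogus.invalid" http://203.0.113.10/api/  # disallowed host: expect HTTP 400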

Need Assistance Hosting on AWS

So I’ve just finished working on my first big personal project, bought a domain name, created an AWS account, watched a lot of AWS tutorials, but I still can’t figure out how to host my web app on AWS. The whole AWS thing is a mystery to me. No tutorial online seems to teach exactly what I need.
What I’m trying to do is this:
Host my dynamic web app on a secure https connection.
Host the web app using the personalized domain name I purchased.
Link my git repo to AWS so I can easily commit and push changes when needed.
Please assist me by pointing me to a resource that can help me achieve the above 3 tasks.
For now, the web app is still hosted on Heroku’s free service; feel free to take a look at the application, and provide some feedback if you can.
Link to web app: my web app
You mentioned that the web app is still hosted on Heroku's free service.
So, if you want the same thing in AWS, use Elastic Beanstalk.
First Question: How to host my web app on AWS?
There can be multiple options for hosting your web app:
An S3 bucket hosting your website as a static site
Elastic Beanstalk
ECS, using containers
A single EC2 server hosting your website
EKS (Kubernetes)
By the way, there are quite a few things you need to take care of before starting.
Second question: host the web app using the personalized domain name you purchased.
If you have used S3, the hosted URL will be plain HTTP, and you can create a route entry in your purchased domain's settings. If the domain is registered with AWS, create a new record in Route 53.
If you host your website on EC2, you will get a public IP address. Make a route entry with that public IP.
If you have used ECS or EKS, you will likely need a load balancer, which gives you a load balancer DNS name. Make a route entry with that DNS name. Then the question arises of which kind of load balancer you want to use (Application, Classic, or Network Load Balancer).
If you use Elastic Beanstalk, it's a managed service: when you host your app, you directly get an endpoint. Make a route entry with that endpoint.
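For example, with Elastic Beanstalk the route entry can be scripted (a sketch; the hosted zone ID, domain, and EB endpoint below are placeholders):
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1234567890ABC \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "my-env.us-east-1.elasticbeanstalk.com"}]
      }
    }]
  }'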
Third: link my git repo to AWS so I can easily commit and push changes when needed.
For this, you have to use CodeBuild and connect GitHub as a source while creating the CodeBuild project.
For CI/CD, there are multiple moving pieces again.
Heroku is a PaaS, which provides you the platform; AWS, by contrast, is an IaaS. You get the infrastructure, and once it is provisioned there are many things you need to take care of: you have to think like an architect, prepare the architecture, and then proceed. It also requires knowledge of other areas such as networking and security.
To answer your question, the best way to host a web app in AWS is Elastic Beanstalk.
But what is AWS Elastic Beanstalk and what does it do?
AWS Elastic Beanstalk covers the deployment of web apps into the cloud environment, as well as their scaling.
Elastic Beanstalk automates the deployment by provisioning the required capacity, balancing the load, autoscaling, and monitoring software health and performance. All that is left for a developer to do is supply the code. The application owner keeps overall control over the capacity that AWS provisions for the software and can access it at any time.
So this is the best way to deploy the app and let’s follow the steps.
Open the Elastic Beanstalk console and find the management page of your environment.
Select “Upload and Deploy”.
Select “Choose File” and pick the source bundle in the dialog box.
Deploy and select the URL to open the new website.
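If you would rather script those steps than click through the console, roughly the same flow with the EB CLI looks like this (a sketch; the app name, environment name, and platform are placeholders):
eb init my-app --platform python-3.8 --region us-east-1   # one-time project setup
eb create my-env                                          # one-time environment creation
eb deploy my-env                                          # package the current commit and deploy it
eb open my-env                                            # open the environment URL in a browser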
You can use CodeDeploy to connect your GitHub and deploy your code.
Conclusion
I have taken a simplistic approach and told you exactly what you need to do for the tasks at hand, without going into all the fuss of AWS. That said, there is still a lot that can be done to bring out the real value of your application in terms of balancing the load, scaling, or improving performance.

Which AWS services to pick for the right architecture?

AWS seems a little daunting with too many overlapping services so I'm looking for some advice and direction.
We have a mobile app for which we've developed a sync server (i.e. users sign up and sync their data, which is kept on AWS). Currently we've set up an EC2 instance with a web server, Django endpoints, and a Postgres server. However, we need the following:
Ensure the service is available from different regions of the world for faster access.
If that requires putting the Postgres server outside of the EC2 instance, what service do we need, and how would replication work?
We will have larger file attachments stored separately on S3, but need to do this securely and encrypt the files.
Eventually we will host a web app (i.e. an Angular 2 app) that would connect to the same database.
We also need to do all this in the most economical way and then scale up as the load increases.
Any guidance would be appreciated; I'm struggling with the terminology at the moment. We also set up an Amazon SSL certificate, but that requires an Elastic Load Balancer and we only have one EC2 instance. What do we do to get this all working securely?
Based on the information provided, I would recommend you start with AWS Elastic Beanstalk, which will manage autoscaling and load balancing while providing you with a DNS URL for external domain mapping.
To ensure that the service is available from different regions for faster access, you can cache the static Angular app using CloudFront. You will then be able to add the SSL certificate to CloudFront instead of an ELB. If you plan to create multiple environments for different regions, you can use Route 53 for geo-based routing.
To take the Postgres server outside EC2, you can use AWS RDS. It supports synchronous replication with failover for Multi-AZ deployments, and Postgres on RDS also supports cross-region replication if you plan to set up deployment environments in multiple regions. You can also create read replicas, which are replicated asynchronously, to improve read speeds.
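A sketch of the read-replica part (the instance identifiers are placeholders; a cross-region replica would need the source instance's full ARN instead of its name):
aws rds create-db-instance-read-replica \
  --db-instance-identifier myapp-db-replica \
  --source-db-instance-identifier myapp-db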
You can encrypt the files in S3 with AES-256, using keys from KMS or keys supplied by your client. I would also recommend using signed URLs with CloudFront in front of S3 to serve these files, so that clients can access them securely and directly while benefiting from distributed caching.
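A sketch of the encryption side from the CLI (bucket, key alias, and paths are placeholders). Note the second command produces an S3 presigned URL, the CLI-friendly cousin of CloudFront signed URLs, which require a CloudFront key pair instead:
aws s3 cp attachment.pdf s3://my-app-files/attachments/attachment.pdf \
  --sse aws:kms --sse-kms-key-id alias/my-app-key    # server-side encryption with a KMS key
aws s3 presign s3://my-app-files/attachments/attachment.pdf \
  --expires-in 900                                   # time-limited download link (15 minutes)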
You can host the Angular app in S3 and cache it with CloudFront for faster access. Another option is to put just the static asset path behind CloudFront, so that subsequent requests for static assets are served from CloudFront.
FAQs from Amazon
Who should use AWS Elastic Beanstalk?
Those who want to deploy and manage their applications within minutes in the AWS Cloud. You don’t need experience with cloud computing to get started. AWS Elastic Beanstalk supports Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker web applications.
Your current environment isn't scalable (either in response to load or to another region). If you need scalability, it should be rearranged. It is difficult to give details because the required environment depends on the application's architecture, but here are some suggestions:
DB: for better stability, a Multi-AZ RDS setup is recommended for the DB. The benefit is that RDS is a fully managed service, so you don't need to worry about replication, maintenance, etc.
Web/app servers: you can deploy a copy in any region you want and connect to the same DB.
S3: you can enable cross-region replication as well as encryption, but make sure it is used wisely (e.g. files are served to the client from the bucket in the closest region).
You can set up your own SSL on the server, and that does not require an ELB. However, you can also use an ELB with a single web node.
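One common way to do "your own SSL on the server" is a free Let's Encrypt certificate (an assumption on my part, since the ACM certificate mentioned in the question can only be attached to managed services like an ELB or CloudFront, not installed on the instance directly):
# Assumes nginx is the web server and certbot is installed; the domain is a placeholder.
sudo certbot --nginx -d example.com -d www.example.com
# certbot rewrites the nginx config for HTTPS and, on most distros, installs a renewal timer.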
I do NOT suggest using Beanstalk: although it really does make the first steps easier, you may have trouble trying to configure anything non-standard later on (unless you're already very familiar with EB, of course).
To add efficiency, you may want to add a CDN (either AWS or another vendor).
Make sure your environment configuration is really secure. You may need someone on your team who is familiar with AWS, because every one of these topics could fill a separate article.

How can I get useful load testing data for my AWS server?

I have a system set up on AWS where I have a set of EC2 instances (as application servers from an Elastic Beanstalk) running in an auto-scaling, load-balanced environment. All this works fine.
I would like to load test this setup in order to obtain results that help me figure out what more needs to be done for the system to handle, potentially, millions of users. I have used a tool called Locust (http://locust.io) so far to do this. It allows me to send requests to my instance(s?) through a proxy as desired. However, I cannot tell whether the requests are being routed to multiple instances or constantly to the same one; and if they are being load balanced appropriately, I can't see how many requests each EC2 instance is receiving or what its health is under load. (I have a feeling that the requests are not being properly load balanced, as the failure rate always seems to increase drastically at a similar point in every test run.)
Is there a way to get this information from inside the AWS EC2 or Elastic Beanstalk consoles, or is there a better distributed, web-based load testing tool that can provide the data I need?
There are two ways to get this information:
1) Create an S3 bucket and save the ELB access logs to it. You can filter these logs to check which instance served each request.
2) Retrieve application-level logs: if Apache/Nginx is installed on your EC2 instances to serve the requests, filter the Apache/Nginx logs on every machine.
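For option 1, a sketch of enabling the access logs on a classic ELB (the load balancer and bucket names are placeholders, and the bucket needs a policy that lets the ELB service write to it):
aws elb modify-load-balancer-attributes \
  --load-balancer-name my-load-balancer \
  --load-balancer-attributes '{"AccessLog":{"Enabled":true,"S3BucketName":"my-elb-logs","EmitInterval":5}}'
# Each log line records the backend instance IP:port that handled the request.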
Hope it helps !!
There is a way to get this data from the AWS console.
Inside the Elastic Beanstalk console there is a tab titled Health. This tab (in the enhanced health overview) shows the number of requests per second, the responses to those requests, the latency, the load average, and the CPU utilisation for each EC2 instance being run by the Elastic Beanstalk environment.
This data lets the system manager see which back-end instances are receiving requests, and how many each is being sent, through the load balancer and proxy.
This can also be attained from the AWS CLI using:
eb health environment_name
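If you want that view to update live, the EB CLI also takes a refresh flag (assuming a reasonably recent CLI version):
eb health environment_name --refresh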

How do you put up a maintenance page for AWS when your instances are behind an ELB?

How do you put up a maintenance page in AWS when you want to deploy new versions of your application behind an ELB? We want to have the ELB route traffic to the maintenance instance while the new auto-scaled instances are coming up, and only "flip over" to the new instances once they're fully up. We use auto-scaling to bring existing instances down and new instances, which have the new code, up.
The scenario we're trying to avoid is having the ELB serve traffic to new EC2 instances while also serving up the maintenance page. Since we don't have sticky sessions enabled, we want to prevent users from being flipped back and forth between the maintenance-mode page and the application deployed on an EC2 instance. We also can't just scale up (say, from 2 to 4 instances and then back to 2) to introduce the new instances, because the code changes might involve database changes which would be breaking changes for the old code.
I realise this is an old question but after facing the same problem today (December 2018), it looks like there is another way to solve this problem.
Earlier this year, AWS introduced support for redirects and fixed responses to Application Load Balancers. In a nutshell:
Locate your ELB in the console.
View the rules for the appropriate listener.
Add a fixed 503 response rule for your application's host name.
Optionally provide a text/plain or text/html response (i.e. your maintenance page HTML).
Save changes.
Once the rule propagates to the ELB (took ~30 seconds for me), when you try to visit your host in your browser, you'll be shown the 503 maintenance page.
When your deployment completes, simply remove the rule you added.
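For the script-minded, the same fixed-response rule can be created with the CLI (a sketch; the listener ARN, host name, and message body are placeholders):
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/0123456789abcdef/0123456789abcdef \
  --priority 1 \
  --conditions Field=host-header,Values=www.example.com \
  --actions 'Type=fixed-response,FixedResponseConfig={StatusCode=503,ContentType=text/html,MessageBody="<h1>Down for maintenance</h1>"}'
# When the deployment completes, delete the rule again with: aws elbv2 delete-rule --rule-arn ...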
The simplest way on AWS is to use Route 53, their DNS service.
You can use the feature of Weighted Round Robin.
"You can use WRR to bring servers into production, perform A/B testing,
or balance your traffic across regions or data centers of varying
sizes."
More information in AWS documentations on this feature
EDIT: Route 53 recently added a new feature that allows DNS Failover to S3. Check their documentation for more details: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
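A sketch of what the weighted records might look like (the zone ID, names, and targets are placeholders); flipping the weights to 0/100 and back moves traffic to the maintenance target and back:
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1234567890ABC \
  --change-batch '{
    "Changes": [
      {"Action": "UPSERT", "ResourceRecordSet": {
        "Name": "www.example.com", "Type": "CNAME", "SetIdentifier": "app",
        "Weight": 100, "TTL": 60,
        "ResourceRecords": [{"Value": "app-prod.elasticbeanstalk.com"}]}},
      {"Action": "UPSERT", "ResourceRecordSet": {
        "Name": "www.example.com", "Type": "CNAME", "SetIdentifier": "maintenance",
        "Weight": 0, "TTL": 60,
        "ResourceRecords": [{"Value": "maintenance.example.com"}]}}
    ]
  }'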
Came up with another solution that's working great for us. Here are the required steps to get a simple 503 http response:
Replicate your EB environment to create another one, call it something like app-environment-maintenance, for instance.
Change the configuration for autoscaling and set the min and max servers both to zero. This won't cost you any EC2 servers and the environment will turn grey and sit in your list.
Finally, you can use the AWS CLI to now swap the environment CNAME to take your main environment into maintenance mode. For instance:
aws elasticbeanstalk swap-environment-cnames \
--profile "$awsProfile" \
--region "$awsRegion" \
--output text \
--source-environment-name app-prod \
--destination-environment-name app-prod-maintenance
This would swap your app-prod environment into maintenance mode. It would cause the ELB to throw a 503 since there aren't any running EC2 instances and then Cloudfront can catch the 503 and return your custom 503 error page, should you wish, as described below.
Bonus configuration for custom error pages using Cloudfront:
We use CloudFront, as many people will for HTTPS, etc. CloudFront has error pages, and this setup requires them.
Create a new S3 website-hosting bucket with your error pages. Consider creating separate files for the response codes, 503, etc. See the error-page step below for the directory requirements and routes.
Add the S3 bucket to your Cloudfront distribution.
Add a new behavior to your Cloudfront distribution for a route like /error/*.
Set up an error page in CloudFront to handle 503 response codes and point it at your S3 bucket route, like /error/503-error.html.
Now, when your ELB throws a 503, your custom error page will be displayed.
And that's it. I know there are quite a few steps to get the custom error pages and I tried a lot of the suggested options out there including Route53, etc. But all of these have issues with how they work with ELBs and Cloudfront, etc.
Note that after you swap the hostnames for the environments, it takes about a minute or so to propagate.
Route53 is not a good solution for this problem. It takes a significant amount of time for DNS entries to expire before the maintenance page shows up (and then it takes that same amount of time before they update after maintenance is complete). I realize that Lambda and CodeDeploy triggers did not exist at the time this question was asked, but I wanted to let others know that Lambda can be used to create a relatively clean solution for this, which I have detailed in a blog post:
http://blog.ajhodges.com/2016/04/aws-lambda-setting-temporary.html
The gist of the solution is to subscribe a Lambda function to CodeDeploy events; during deployments it replaces your ASG in the load balancer with a micro instance serving a static page.
As far as I could see, we were in a situation where the above answers didn't apply or weren't ideal.
We have a Rails application running Puma with Ruby 2.3 on 64bit Amazon Linux/2.9.0, which seems to come with a (classic) ELB.
So ALB 503 handling wasn't an option.
We also have a variety of hardware clients that I wouldn't trust to always respect DNS TTLs, so Route 53 felt risky.
What did seem to work nicely is a secondary port on the nginx that comes with the platform.
I added this as .ebextensions/maintenance.config
files:
  "/etc/nginx/conf.d/maintenance.conf":
    content: |
      server {
        listen 81;
        server_name _ localhost;
        root /var/app/current/public/maintenance;
      }
container_commands:
  restart_nginx:
    command: service nginx restart
And dropped a copy of https://gist.github.com/pitch-gist/2999707 into public/maintenance/index.html
Now, to enter maintenance, I just switch my ELB listeners to point to port 81 instead of the default 80. No extra instances, no S3 buckets, and no waiting for clients to refresh DNS.
It only takes maybe ~15s or so for Beanstalk (probably mostly waiting for CloudFormation on the back end) to apply the change.
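If you flip the listener directly on the classic ELB rather than through the Beanstalk configuration, a sketch could look like this (the load balancer name is a placeholder; classic ELB listeners can't be edited in place, hence the delete-and-recreate; swap 81 back to 80 to leave maintenance):
aws elb delete-load-balancer-listeners \
  --load-balancer-name my-elb --load-balancer-ports 80
aws elb create-load-balancer-listeners \
  --load-balancer-name my-elb \
  --listeners Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=81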
Our deployment process first runs a CloudFormation template to spin up an EC2 micro instance (the maintenance instance), which copies a pre-defined static page from S3 onto itself. The CloudFormation stack is supplied with the ELBs to which the micro instance is attached. Then a script (PowerShell or CLI) is run to remove the web instances (EC2) from the ELBs, leaving only the maintenance instance.
This way we switch to the maintenance instance during the deployment process.
In our case we have two ELBs, one external and the other internal. The internal ELB is not updated during this process, and that is how our post-production-deployment smoke test is done.
Once testing is done, we run another script to attach the web instances back to the ELBs and delete the maintenance stack.
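The remove/re-attach steps described above map onto two CLI calls for a classic ELB (a sketch; the load balancer name and instance IDs are placeholders):
aws elb deregister-instances-from-load-balancer \
  --load-balancer-name my-external-elb \
  --instances i-0abc1234567890def i-0fed0987654321cba   # take the web nodes out
# ... deploy and smoke test ...
aws elb register-instances-with-load-balancer \
  --load-balancer-name my-external-elb \
  --instances i-0abc1234567890def i-0fed0987654321cba   # put them back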