Deployment: Amazon Web Services - Taking too long to respond - django

I've just finished setting up my site on a free Amazon Web Services EC2 Ubuntu server.
I'm not very knowledgeable in deployment, and I'm not 100% clear on what Nginx or gunicorn even is, but I'm following a tutorial to launch a Django project.
Even though I did things the exact same way and had no errors, I have noticed that sometimes I go to my site and get 'refused to connect' or 'taking too long to respond.'
One of my previous projects had no issue, one of them never loaded the page, and the last one I did gave me this same problem, which was cured by rebooting the server.
I've rebooted the server several times and deactivated and reactivated the venv (as a classmate suggested), but it isn't working. I also noticed that last night my terminal kept taking forever to load, and the Amazon Web Services site itself was slow as well.
Is this just Amazon's fault? Is there anything I can do?

You are spinning up your own server, so you are responsible for managing it.
There are a couple of things you need to check: the service may not be listening on the port (or IP address) you expect, and the inbound and outbound security group rules might not be configured correctly.
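If you want a quick way to confirm both of those from your own machine, here is a minimal sketch. It assumes Nginx is on port 80 and gunicorn is bound to a typical port like 8000 (both are assumptions based on common Django tutorial setups), and the IP address and security group ID are placeholders:

    # Sketch: check whether anything answers on ports 80/8000 and list the
    # inbound rules of a security group. The IP, ports, and the security
    # group ID below are placeholders; substitute your own values.
    import socket
    import boto3

    SERVER_IP = "203.0.113.10"   # your EC2 public IP (placeholder)
    PORTS = [80, 8000]           # Nginx and a typical gunicorn bind port

    for port in PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(5)
            reachable = s.connect_ex((SERVER_IP, port)) == 0
        print(f"port {port}: {'open' if reachable else 'closed or filtered'}")

    # Inspect the inbound rules of the instance's security group (placeholder ID).
    ec2 = boto3.client("ec2")
    sg = ec2.describe_security_groups(GroupIds=["sg-0123456789abcdef0"])
    for rule in sg["SecurityGroups"][0]["IpPermissions"]:
        print(rule.get("FromPort"), rule.get("ToPort"), rule.get("IpRanges"))

If port 80 shows as closed or filtered even though the service is running on the instance, the security group rules (or the server's bind address) are the usual culprit.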
Amazon is not responsible for what you do with their resources; it is a company that provides infrastructure to simplify your business.
You can read the AWS SLA here:
https://aws.amazon.com/s3/sla/

Related

EC2 server loses internet connection and application fails to send email, SMS and even yum updates

I have 5 EC2 servers in the same VPC, and all of a sudden yesterday all of my applications started failing to send email and SMS. I then tried doing a git pull of my project and it also timed out. I tried to install telnet using yum, and that failed with a timeout as well. I have checked almost everything, including Network ACLs, Security Groups, Subnets, iptables, etc., and everything is correct. I am not sure why this is happening.
The weird thing is that if I reboot the server, the internet comes back for a brief amount of time and then it disconnects again.
Attaching below are the errors I am facing:
Error while Generating the Tiny URL. Error: {"errno":-110,"code":"ETIMEDOUT","syscall":"connect","address":"XXX.XX.XXX.XX","port":443}
Error SendEmail UnknownEndpoint: Inaccessible host: `email.ap-south-1.amazonaws.com'. This service may not be available in the `ap-south-1' region.
I am also attaching screenshots of my Network ACLs, Security Groups, Subnets, and iptables.
Please help me figure out what I am doing wrong, or whether this is an issue with AWS EC2. My goal is to make sure my application works without timeouts and that git and yum start working again.
Did you try terminating and reprovisioning the instances, rather than rebooting them? There may be some problem with the underlying hardware. When you terminate and recreate an instance, it will likely end up in a different rack in the datacenter, which may solve the problem.
If the above helps, you should consider setting up an application load balancer with an auto scaling group, with health checks enabled for both, so that the auto scaling group terminates unhealthy instances and replaces them with new ones automatically.
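If you go down that road, a rough boto3 sketch of the health-check side might look like the following; the names, VPC and subnet IDs, launch template, and thresholds are all placeholders or example values, not anything from your setup:

    # Sketch: a target group with health checks plus an auto scaling group
    # that uses ELB health checks, so unhealthy instances get replaced.
    # All IDs and names below are placeholders.
    import boto3

    elbv2 = boto3.client("elbv2")
    autoscaling = boto3.client("autoscaling")

    tg = elbv2.create_target_group(
        Name="app-tg",
        Protocol="HTTP",
        Port=80,
        VpcId="vpc-0123456789abcdef0",
        HealthCheckPath="/health",
        HealthCheckIntervalSeconds=30,
        HealthyThresholdCount=2,
        UnhealthyThresholdCount=3,
    )
    tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="app-asg",
        LaunchTemplate={"LaunchTemplateName": "app-template", "Version": "$Latest"},
        MinSize=2,
        MaxSize=4,
        VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
        TargetGroupARNs=[tg_arn],
        HealthCheckType="ELB",          # replace instances the ALB marks unhealthy
        HealthCheckGracePeriod=300,
    )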
You may also consider using Simple Notification Service and stop worrying about the underlying compute for email and SMS distribution altogether!
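For what it's worth, publishing through SNS from Python is only a few lines. This is a minimal sketch with a placeholder region, phone number, and topic ARN:

    # Sketch: send an SMS directly, and publish to a topic whose subscribers
    # (email addresses, queues, etc.) receive the message.
    # The phone number and topic ARN are placeholders.
    import boto3

    sns = boto3.client("sns", region_name="ap-south-1")

    sns.publish(PhoneNumber="+911234567890", Message="Your OTP is 123456")

    sns.publish(
        TopicArn="arn:aws:sns:ap-south-1:123456789012:app-notifications",
        Subject="Order confirmation",
        Message="Your order has been placed.",
    )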

Setting up Latency Routing in AWS

I've been digging through the AWS docs for ages and am at my wits' end trying to find examples that aren't official AWS documentation.
How do I decide whether I should have failover routing, latency routing, or both? I currently have the site on Elastic Beanstalk with both a dev and a production version, but I get 500 or 502 errors at least a couple of times a month: if you refresh the page it eventually loads, but then the CSS is missing, or the page doesn't load at all, and sometimes the page is just slow to load even with caching. How am I supposed to know whether this calls for failover routing, latency routing, or both? The AWS notifications only say "Environment health has transitioned from Degraded to Severe". How do I log which AWS server Route 53 had serve the page?
Are you supposed to have multiple EC2 instances for latency based routing? I’m confused why the docs say to create a latency record for each of my EC2 instances.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/TutorialTransitionToLBR.html
I currently have CodePipeline connected to my GitHub, so that changes are automatically deployed to the dev site, and then I manually approve changes to production. If I have multiple EC2 instances, do I need to set up a pipeline for each EC2 instance, each connected to my GitHub, and then manually approve changes for every instance? In other words, would I just have multiple copies of the site hosted in different regions in this situation? How do people manage this? I'm assuming there's some way to approve the production launch for all of them at once if this is what is done, but I don't know what to google.
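For reference, the tutorial linked above essentially boils down to creating one latency record per endpoint, each tagged with its region. A rough boto3 sketch (the hosted zone ID, domain name, and IP addresses are placeholders) would be:

    # Sketch: two latency records for the same name, one per region.
    # Hosted zone ID, domain, and IP addresses are placeholders.
    import boto3

    route53 = boto3.client("route53")

    def upsert_latency_record(ip, region, set_id):
        route53.change_resource_record_sets(
            HostedZoneId="Z0123456789ABCDEFGHIJ",
            ChangeBatch={
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "www.example.com",
                        "Type": "A",
                        "SetIdentifier": set_id,   # must be unique per record
                        "Region": region,          # enables latency routing
                        "TTL": 60,
                        "ResourceRecords": [{"Value": ip}],
                    },
                }]
            },
        )

    upsert_latency_record("203.0.113.10", "us-east-1", "us-east-1-instance")
    upsert_latency_record("203.0.113.20", "eu-west-1", "eu-west-1-instance")

Route 53 then answers each DNS query with the record whose region has the lowest measured latency from the resolver, which is why the docs want one record per instance or endpoint.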

How do I block calls to a specific endpoint in EC2?

I have an open port for a server I am hosting, and I get lots of spurious calls to "/ws/v1/cluster/apps/new-application" which seems to be for some Hadoop botnet (all it does is pollute my logs with lots of invalid URL errors). How do I block calls to this URL? I could change my port to a less common one but I would prefer not to.
The only way to "block" such requests before they reach your server would be to put an AWS Web Application Firewall (AWS WAF) in front of it and configure appropriate rules.
AWS WAF only works in conjunction with Amazon CloudFront or an Elastic Load Balancer, so the extra effort (and expense) might not be worth the benefit of simply avoiding some lines in a log file.
One day I took a look at my home router's logs and was utterly amazed to see the huge number of bot attempts to gain access to random systems. You should be thankful if this is the only one getting through to your server!
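If you do decide it's worth the effort, a rough sketch of such a rule using the WAFv2 API in boto3 might look like this. It assumes an Application Load Balancer sits in front of the instance, and the names, region, and ARNs are placeholders:

    # Sketch: a regional web ACL whose single rule blocks requests whose
    # URI path starts with the probed endpoint. Names and ARNs are placeholders.
    import boto3

    wafv2 = boto3.client("wafv2", region_name="us-east-1")

    acl = wafv2.create_web_acl(
        Name="block-hadoop-probes",
        Scope="REGIONAL",                       # use "CLOUDFRONT" for CloudFront
        DefaultAction={"Allow": {}},
        VisibilityConfig={
            "SampledRequestsEnabled": False,
            "CloudWatchMetricsEnabled": False,
            "MetricName": "block-hadoop-probes",
        },
        Rules=[{
            "Name": "block-new-application-endpoint",
            "Priority": 0,
            "Action": {"Block": {}},
            "Statement": {
                "ByteMatchStatement": {
                    "SearchString": b"/ws/v1/cluster/apps/new-application",
                    "FieldToMatch": {"UriPath": {}},
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                    "PositionalConstraint": "STARTS_WITH",
                },
            },
            "VisibilityConfig": {
                "SampledRequestsEnabled": False,
                "CloudWatchMetricsEnabled": False,
                "MetricName": "block-new-application-endpoint",
            },
        }],
    )

    # Attach the web ACL to the load balancer sitting in front of the instance.
    wafv2.associate_web_acl(
        WebACLArn=acl["Summary"]["ARN"],
        ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                    "loadbalancer/app/my-alb/0123456789abcdef",
    )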

AWS EC2 is running but website is showing connection time out

I am running Bitnami WordPress on an AWS server. The website had been working for two days, but suddenly it stopped showing anything and now a connection timeout appears. The EC2 instance is running perfectly fine, and I have also looked through the IP logs, and nothing suspicious has come up.
Based on the comments above, I'd guess the issue is with the web server running inside the instance.
Make sure that the web server itself is running fine. I do not mean just checking the EC2 instance state: it is possible for the EC2 instance to be running while the web server is down, and that would cause exactly this issue.
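One quick way to check this, directly on the instance over SSH, is to see whether anything is even listening locally on port 80. A minimal sketch:

    # Sketch: run this on the instance itself (e.g. over SSH).
    # If nothing answers on 127.0.0.1:80, the web server process is down,
    # even though the EC2 instance state is "running".
    import socket
    import urllib.request

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(3)
        listening = s.connect_ex(("127.0.0.1", 80)) == 0
    print("something is listening on port 80:", listening)

    if listening:
        try:
            with urllib.request.urlopen("http://127.0.0.1/", timeout=10) as resp:
                print("local HTTP status:", resp.status)
        except Exception as exc:
            print("server is listening but the request failed:", exc)

If nothing is listening, restart the web service itself rather than rebooting the whole instance.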

Usefulness of Amazon ELB (Elastic Load Balancing)

We're considering implementing an ELB in our production Amazon environment. It seems it will require that the production server instances be synced by a nightly script. Also, there is a Solr search engine which will need to be replicated and maintained for each paired server. There's also the issue of debugging: which server is the request going to? If there's a crash, do you have to search both logs? If a production app isn't behaving, how do you isolate which one it is, or do you just deploy debugging code to both instances?
We aren't having issues with response time or server load. This seems like added complexity in exchange for a limited upside. It seems like it may be overkill to me. Thoughts?
You're enumerating the problems that arise when you need high availability :)
You need to consider how critical the availability of the service is, and take that into account when deciding what is the right solution and what is just over-engineering :)
Solutions to some caveats:
To avoid nightly syncs: use an EC2 instance running an NFS server and mount the share on both EC2 instances. (Or use Amazon EFS when it's available.)
Debugging problem: You can configure the EC2 instances behind the ELB to have public IPs, limited in the Security Groups just to the PCs of the developers, and when debugging point your /etc/hosts (or Windows equivalent) to one particular server.
Logs: store the logs in S3 (or on the NFS server mentioned above); see the sketch below.
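For that last point, a minimal sketch of shipping a rotated log file to S3 with boto3 (the bucket name, log path, and key layout are placeholders):

    # Sketch: upload one rotated log file to S3, keyed by hostname and date,
    # so logs from every instance behind the ELB end up in one place.
    # The bucket name and local path are placeholders.
    import datetime
    import socket
    import boto3

    s3 = boto3.client("s3")
    hostname = socket.gethostname()
    today = datetime.date.today().isoformat()

    s3.upload_file(
        Filename="/var/log/nginx/access.log.1",   # rotated log (placeholder path)
        Bucket="my-app-logs",                     # placeholder bucket
        Key=f"nginx/{hostname}/{today}/access.log",
    )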