Connect user to socket server based on IP location region

I am hosting three Node.js socket.io servers on the same AWS EC2 instance. On that instance I am using NGINX to load balance them, so that no single Node server gets overloaded depending on the workload and the requests it receives.
Depending on the user's location, the data is slower to arrive. For example, this EC2 instance is hosted in the us-east-2 region; if you are in Europe, it takes a little more time for the data to reach you due to the distance. Is there an easy way to do this with NGINX, so that if I spin up an instance in an AWS region in Europe, users whose geolocation is in Europe connect to that server? If not with NGINX, is there a way in AWS to connect users to the best available location based on speed?
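For reference, my current setup is roughly the following NGINX configuration (a minimal sketch; the upstream name and ports are placeholders, not my real config):

    # Sketch of the load-balancing setup described above; ports are assumed.
    upstream socket_nodes {
        ip_hash;                                   # pin each client to one node, which socket.io needs
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
        server 127.0.0.1:3003;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://socket_nodes;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;    # pass WebSocket upgrade headers through
            proxy_set_header Connection "upgrade";
        }
    }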
Thanks

Related

Run multiple servers with interconnection on Amazon AWS

We are developing applications and devices that communicate with our servers. We have one "main" Java Spring server which handles almost all the HTTP requests, including user authentication, storing relevant user data and serving that data to the applications. Furthermore, we have a few smaller HTTP servers (written in golang) which are used by the "main" server to perform certain tasks, but which also expose some public APIs that apps and devices use directly.
In our current non-production setup we run all the servers locally on one machine, with an apache2 in front which directs the requests. Users can reach each server through apache2 via its respective subdomain, but the servers also communicate with each other. When doing so, we currently just send the request to localhost:{PORT}, since they all run on the same machine. They also all use the same mysql-server running on that same machine.
We are now looking to make this more production-ready and want to deploy it to AWS. The servers are currently not containerized, so a solution that requires containerization (ECS? K8s?) would most likely mean more work. What would be the most straightforward way to do the following:
Deploy a number of servers on AWS where they are exposed publicly with their respective domains but can also communicate internally with one another (or would they just communicate with one another using their public domains?)
Deploy a managed SQL database (Amazon RDS?) which is accessible for all the servers.
Set up the routing of the requests. We currently run our own configured apache2, but I assume we can add a managed API Gateway in AWS and configure it for our servers.
Q. Deploy a number of servers on AWS where they are exposed publicly with their respective domains but can also communicate internally with one another (or would they just communicate with one another using their public domains?)
On AWS you create a VPC (a default VPC is created for your account in each region).
You can deploy a number of EC2 instances (virtual servers) with only private IP addresses and no public access, and put them behind an ELB (Elastic Load Balancer). The ELB takes all the incoming traffic and distributes the load across the servers based on the endpoint.
The EC2 instances won't have public IPs, however. A VPC (Virtual Private Cloud) allows your services to communicate with each other via private IPs (something like 172.31.xx.xx), and you can also give domain/sub-domain names to these private IP addresses using AWS's Route 53 service.
For example, you launch 2 servers:
Your Java application on 172.31.1.1 (which you name xyz.myjavaapp.something.com in Route 53)
Your Angular application on 172.31.1.2
The Angular application can then reach your Java application at 172.31.1.1:8080 or xyz.myjavaapp.something.com:8080.
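A minimal sketch of creating such a private record with the AWS CLI (the hosted-zone ID is a placeholder):

    # Sketch: point a Route 53 record at the Java server's private IP.
    aws route53 change-resource-record-sets \
      --hosted-zone-id Z123EXAMPLE \
      --change-batch '{
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "xyz.myjavaapp.something.com",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{"Value": "172.31.1.1"}]
          }
        }]
      }'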
Q. Deploy a managed SQL database (Amazon RDS?) which is accessible for all the servers.
Yes, you can deploy an SQL database on RDS and it will be available to the EC2 instances. Just make sure you create proper security groups so that only your servers can access it, and don't leave it open to the public internet.
An example of a VPC-only security group entry is 172.31.0.0/16. This allows only the servers in your VPC to connect to the RDS DB, given that your VPC subnet has the range 172.31.x.x.
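A minimal sketch of such a rule with the AWS CLI (the security-group ID is a placeholder, and port 3306 assumes MySQL):

    # Sketch: allow MySQL traffic into the RDS security group from the VPC CIDR only.
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp \
      --port 3306 \
      --cidr 172.31.0.0/16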
Q. Set up the routing of the requests. We currently run our own configured apache2, but I assume we can add a managed API Gateway in AWS and configure it for our servers.
You can set up public/private APIs and manage different endpoints using API Gateway.
Another way is to put your application servers behind an Application Load Balancer (ALB). The ALB can take care of load balancing as well as endpoint management.
For example, if you decide to deploy 2 servers for /getData and 1 server for /doSomethingElse, the ALB can manage that easily with path-based routing rules, as in the sketch below.
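A minimal sketch of such a path-based rule with the AWS CLI (all ARNs and the priority are placeholders):

    # Sketch: forward /getData requests to their own target group on the ALB.
    aws elbv2 create-rule \
      --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc123/def456 \
      --priority 10 \
      --conditions Field=path-pattern,Values='/getData*' \
      --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/getdata-tg/xyz789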
For a production environment I would suggest using at least two servers for each critical service and load balancing them behind an ELB.
On another note, containerizing and deploying to Kubernetes is not that difficult or time-consuming. Yes, it has a learning curve, but the benefits outweigh it.
Feel free to ask questions.

AWS - EC2 and RDS in different regions is very slow

I'm currently in Sydney and I have the following scenario:
1 RDS on N. Virginia.
1 EC2 on Sydney
1 EC2 on N. Virginia
I need this for redundancy, and this is the simplified scenario.
When my app on the Sydney EC2 connects to RDS in N. Virginia, it takes almost 2.5 seconds to give me the result. We might think: OK, that's just the latency.
BUT when I send the request to the N. Virginia EC2, I get the result in less than 500 ms.
Why is the connection so slow when you access RDS from outside its region?
I mean: I experience this slow connection when I run the application on my own computer too. But when the application is in the same region as RDS, it works more quickly than on my own computer.
Most likely your request to RDS requires multiple round trips to complete, i.e. first your EC2 instance requests something from RDS, then something else based on the first result, and so on. Without seeing your database code, it's hard to say exactly what the cause might be.
You say that when you talk to the remote EC2 instance instead, you get the response in less than 500 ms. That suggests that setting up a TCP connection and sending a single request with its reply takes about 500 ms. Based on that, my guess is that your database conversation involves at least five round trips (5 × 500 ms ≈ 2.5 s).
There is no additional penalty for using RDS out of region, but most database protocols are not optimized for high-latency links. You might be much better off setting up a read replica in Sydney.
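A minimal sketch of creating such a cross-region read replica with the AWS CLI (all identifiers are placeholders; the command runs against the destination region):

    # Sketch: create a Sydney read replica of the N. Virginia database.
    aws rds create-db-instance-read-replica \
      --db-instance-identifier mydb-replica-sydney \
      --source-db-instance-identifier arn:aws:rds:us-east-1:123456789012:db:mydb \
      --region ap-southeast-2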
If you are connecting to RDS over the public-facing network, that may be why it is slow. AWS has launched cross-region VPC peering: peer the VPCs of all the regions involved (making sure their IP ranges do not conflict) and try connecting over the private connection instead.
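A minimal sketch of inter-region peering with the AWS CLI (VPC and peering-connection IDs are placeholders; the accept step runs in the peer's region):

    # Sketch: request a peering connection from the Sydney VPC to the N. Virginia VPC.
    aws ec2 create-vpc-peering-connection \
      --vpc-id vpc-11111111 \
      --peer-vpc-id vpc-22222222 \
      --peer-region us-east-1
    # Accept it from the N. Virginia side.
    aws ec2 accept-vpc-peering-connection \
      --region us-east-1 \
      --vpc-peering-connection-id pcx-33333333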

Amazon EC2 Penetration Request Form

I am filling out the Penetration Testing Request Form for Amazon Web Services. I need help with the form: I have a dynamic IP, so what should I enter for the destination IP and the source IP?
The server is deployed on AWS EC2 and runs Ubuntu 12.04.
The DB is on another instance using AWS RDS. There is no load balancer - just a single server instance for the app and a single instance for the DB.
Source: the IP(s) of the machines that will initiate the scan (usually machines external to AWS, such as a third-party pentest service).
Destination: your server IPs. It is better to assign Elastic IPs and use those on the form. You can use the dynamic IPs, but if your instances are stopped and started before your scan begins, the scan may fail because your servers will have new IPs.
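A minimal sketch of assigning an Elastic IP with the AWS CLI (the instance and allocation IDs are placeholders):

    # Sketch: allocate an Elastic IP and attach it to the instance,
    # so the destination IP on the form stays stable across stop/start.
    aws ec2 allocate-address --domain vpc
    aws ec2 associate-address \
      --instance-id i-0123456789abcdef0 \
      --allocation-id eipalloc-0123456789abcdef0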

How can I get useful load testing data for my AWS server?

I have a system set up on AWS with a set of EC2 instances (acting as application servers for an Elastic Beanstalk) running in an auto-scaling, load-balanced environment. All this works fine.
I would like to load test this setup in order to obtain results that help me figure out what more needs to be done for it to handle, potentially, millions of users. So far I have used a tool called Locust (http://locust.io) to do this, which lets me send requests to my instance(s?) through a proxy as desired. However, I cannot tell whether the requests are being routed to multiple instances or constantly to the same one; and if they are being load balanced appropriately, I cannot see how many requests each EC2 instance receives or how healthy each one is under load. (I have a feeling the requests are not being properly load balanced, as the failure rate always seems to increase drastically at a similar point in every test run.)
Is there a way to get this information from the AWS EC2 or Elastic Beanstalk consoles, or is there a better distributed web-based load testing tool that can provide the data I need?
There are two ways to get this information:
1) Create an S3 bucket and save the ELB access logs there. You can filter these logs to check which instance served each request (see the sketch after this list).
2) Retrieve application-level logs: if apache/nginx is installed on your EC2 instances to serve the requests, filter the apache/nginx logs on every machine.
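A minimal sketch of enabling those ELB access logs with the AWS CLI, assuming a classic ELB (the load balancer and bucket names are placeholders):

    # Sketch: write access logs for a classic ELB to an S3 bucket every 5 minutes.
    aws elb modify-load-balancer-attributes \
      --load-balancer-name my-load-balancer \
      --load-balancer-attributes '{
        "AccessLog": {
          "Enabled": true,
          "S3BucketName": "my-elb-logs",
          "S3BucketPrefix": "prod",
          "EmitInterval": 5
        }
      }'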
Hope it helps!
There is a way to get this data from the AWS console.
Inside the Elastic Beanstalk console there is a tab titled Health. This tab (in the enhanced health overview) shows the number of requests per second, the responses to those requests, the latency, the load average and the CPU utilisation for each EC2 instance run by the Elastic Beanstalk.
This data lets the system manager see which back-end instances are receiving requests, and how many each is being sent, through a load balancer and a proxy.
This can also be obtained from the Elastic Beanstalk CLI (EB CLI) using:
eb health environment_name

Adobe Media Server scaling on Amazon Web Services (AWS)

I need to serve a live stream to more than 10K users. The Adobe website says that one EC2 instance of type m2.2xlarge is able to serve just 10K users, so I have some questions:
Does CloudFront allow more users to connect than the 10K allowed by the EC2 instance acting as a multiplexer of the original stream?
And based on the answer to the above question:
If CloudFront allows more users to connect, why would anyone need an m2.2xlarge EC2 instance if one with lower specs could do the same job and let CloudFront multiplex the live stream?
If CloudFront doesn't allow more users to connect than those 10K, what kind of architecture do I need? CloudFront + ELB + 2 or more EC2 instances with AMS installed, connected to another small EC2 instance with AMS installed which gets the stream from the live event?
CloudFront acts as a caching layer for each edge location. If the content is not available at an edge location, CloudFront connects to the EC2 origin, retrieves the data and passes it on. So as far as I know, if you use CloudFront you shouldn't need such a large EC2 instance.
I've tested this extensively with static resources; I haven't needed it for live streaming yet, but the same principles should apply.
This post on the AWS website from 2012 seems to confirm my hypothesis: http://aws.amazon.com/about-aws/whats-new/2012/03/29/amazon-cloudfront-improves-live-streaming-support-with-adobe-fms/
So basically, as long as the EC2 instance is strong enough to stream to all CloudFront edge locations simultaneously, you should be fine.