Amazon AWS routing from presentation layer to application layer

I have the following scenario: a cluster of Amazon EC2 servers acts as the presentation layer, and these servers pass requests to another cluster of EC2 servers (the business layer) through an Amazon Elastic Load Balancer.
The new requirement is that the business layer's servers will each be responsible for some tasks, not all tasks. For example, servers of type one will serve requests of types 1, 2, and 3; servers of type two will serve requests of types 4, 5, and 6; and so on.
What is the best way to implement this logic in Amazon AWS? Do I need an Elastic Load Balancer for each type? Can I put routing logic in one load balancer, or do I have to do something else?
Thank you

ELB doesn't let you inspect your traffic like that. Either create multiple ELBs, or handle it yourself with something like nginx+haproxy.

It is best to use different clusters for different functionality.
Each cluster will have a different endpoint URL, so you can reach the one you need from the presentation layer.
For certain types of jobs (mainly long-running ones), you will have to use SQS and post the messages from the presentation layer. The clusters can then pick up the jobs they are interested in and execute them. You can separate the different jobs by posting different SQS messages.
When you set up these "task"-based clusters, it is easy to manage them as Auto Scaling groups: cost-effective and easy to scale (from 1 to many instances based on need). Read more here: http://aws.amazon.com/autoscaling/
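A minimal sketch of the SQS approach in Python with boto3, assuming one queue per task-type cluster (the queue name, region, and message format here are hypothetical):

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")

    # Hypothetical queue for the cluster that serves request types 1-3.
    queue_url = sqs.get_queue_url(QueueName="tasks-types-1-2-3")["QueueUrl"]

    # Presentation layer: enqueue a request of type 2.
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody='{"request_type": 2, "payload": "..."}',
    )

    def process(body):
        """Placeholder for the business-layer work on one message."""
        print("processing", body)

    # Business layer: long-poll only the queue this cluster is responsible for.
    resp = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])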

Related

HAProxy vs ALB or any other load balancer: which one to use?

We are looking to move our blog platform to a separate EC2 server (running Nginx) for better performance and scalability.
Scenario is:
Web request (www.example.com) -> Load Balancer/Route -> Current EC2 Server
Blog request (www.example.com/blog) -> Load Balancer/Route -> New Separate EC2 Server for blog
Please advise what the best option to use is in this case:
HAProxy
ALB - AWS
Any other solution?
Also, is it possible to have the load balancer or routing mechanism in a different AWS region? We are currently hosted in AWS.
HAProxy
You would have to set this up on an EC2 server and manage everything yourself. You would be responsible for scaling this correctly to handle all the traffic it gets. You would be responsible for deploying it to multiple availability zones to provide high availability. You would be responsible for installing all security updates on the operating system.
ALB - AWS
Amazon will automatically scale this out to handle any amount of traffic you get. Amazon will handle all security patches of the underlying system. Amazon provides free SSL certificates for ALBs. Amazon will deploy this automatically across multiple availability zones to provide high availability.
Any other solution?
I think AWS Global Accelerator would work here as well, but you would have to weigh the differences between Global Accelerator and ALB to decide which fits your use case and budget the best.
You could also look at placing a CDN in front of everything, like CloudFront or Cloudflare.
Also, is it possible to have the load balancer or routing mechanism in a different AWS region?
AWS Global Accelerator would be the thing to look at if load balancing across different regions is a concern for you. Given the details you have provided, however, I'm not sure why you would want this.
Probably what you really need is a CDN in front of your websites, with or without the ALB.
Scenario is:
Web request (www.example.com) -> Load Balancer/Route -> Current EC2 Server
Blog request (www.example.com/blog) -> Load Balancer/Route -> New Separate EC2 Server for blog
In my view you can use an ALB deployed in multiple AZs for high availability, for the following reasons:
AWS ALB allows you to route traffic based on various attributes, and the path in the URL is one of them:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#rule-condition-types
With an AWS ALB you can have two target groups, each with instances handling traffic: one for the first path (www.example.com) and a second target group for the other path (www.example.com/blog).
ALB supports SNI (which allows it to handle multiple certificates behind a single ALB for multiple domains), so all you need to do is set up a single HTTPS listener and upload your certificates: https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/
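As a rough illustration, a path-based rule like the one described can be created with boto3 (the ARNs below are placeholders; in practice you would create the target groups first and use your own listener ARN):

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-1")

    # Placeholder ARNs for an existing HTTPS listener and the blog target group.
    listener_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc"
    blog_tg_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/blog/def"

    # Forward www.example.com/blog* to the blog target group; all other
    # paths fall through to the listener's default action (the main site).
    elbv2.create_rule(
        ListenerArn=listener_arn,
        Priority=10,
        Conditions=[{"Field": "path-pattern", "Values": ["/blog*"]}],
        Actions=[{"Type": "forward", "TargetGroupArn": blog_tg_arn}],
    )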
I have answered on [something similar]; it might help you as well.
This is my opinion, take it as that. I am sure a lot of people won't agree.
If your project is small or personal, you can go with HAProxy (cheap: USD 4 or less if you get a t3a as a spot instance), or free if you place it inside another EC2 instance of yours, maybe using Docker.
If your project is not personal or not small, go with ALB (expensive, but simpler and better integrated with other AWS services).
HAProxy can handle tons of connections, but you have to do more things yourself. ALB can also handle tons of connections, and AWS will do most of the work.
I think HAProxy is more suitable for personal/small projects because if your project doesn't grow, then you don't have to touch HAProxy. It is set-and-forget, the same as ALB, but costs less.
You usually won't worry about availability zones or disaster tolerance in a personal project, so HAProxy should be easy to configure.
Another consideration: AWS offers a free tier on ALB, so if your project will run for less than a year, ALB is the way to go.
If you are learning, then ALB should be considered, because real clients usually love to stick to AWS in all aspects; HAProxy is your call and also your risk (you would just be reducing costs for a company that usually pays a lot more for your salary, so it's not worth the risk).

Scalable server hosting

I have a simple server now (some Xeon CPU hosted somewhere) running Apache/PHP/MySQL (no Docker, but it's a possibility), and I'm expecting some heavy traffic that I need my server to handle.
Currently the server can handle about 100 users at once; I need it to handle a couple of thousand, possibly.
What would be the easiest and fastest solution to move my app to some scalable hosting?
I have no experience with AWS or anything like that.
I was reading about AWS and similar services, but I'm mostly confused and not sure what I should choose.
The basic choice is:
Scale vertically by using a bigger computer. However, you will eventually hit a limit, and you will have a single point of failure (one server!), or
Scale horizontally by adding more servers and spreading the traffic across the servers. This has the added advantage of handling failure because, if one server fails, the others can continue serving traffic.
A benefit of doing horizontal scaling in the cloud is the ability to add/remove servers based on workload. When things are busy, add more servers. When things are quiet, remove servers. This also allows you to lower costs when things are quiet (which is not possible on-premises when you own your own equipment).
The architecture involves putting multiple servers behind a Load Balancer:
Traffic comes into a Load Balancer
The Load Balancer sends the request to a server (often based upon some measure of how "busy" each server is)
The server processes the request and sends a response back to the Load Balancer
The Load Balancer sends the response to the original requester
AWS has several Load Balancers available, which vary by need. If you are simply sending traffic to a single application that is installed on all servers, a Network Load Balancer should be sufficient. For situations where different parts of the application are on different servers (eg mobile interface vs web interface), you could use an Application Load Balancer.
AWS also assists with horizontal scaling by providing the Amazon EC2 Auto Scaling service. This allows you to specify details of the servers to launch (disk image, instance type, network settings) and Auto Scaling can then automatically launch new servers when required and terminate ones that aren't required. (Note that they launch and terminate, not start and stop.)
You can further define scaling policies that tell Auto Scaling when to launch/terminate instances by measuring metrics such as CPU Utilization. This way, the number of servers can approximately match the volume of traffic.
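As a sketch, a target-tracking scaling policy like that can be attached with boto3 (the group name here is hypothetical and assumes the Auto Scaling group already exists):

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Launch or terminate instances as needed to hold average CPU near 50%.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",  # hypothetical group name
        PolicyName="keep-cpu-near-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,
        },
    )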
It should be mentioned that if you have a database, it should be stored separately from the application servers so that it does not get terminated. You could use the Amazon Relational Database Service (RDS) to run a database for you, or you could run one on a separate Amazon EC2 instance.
If you want to find out more about any of the above technologies, there are plenty of talks on YouTube or blog posts that can explain and demonstrate their use.

How to browse to a specific instance behind an AWS load balancer

I have a monitor, JavaMelody, installed with my application. The application is running on 7 different instances in AWS, in an Auto Scaling group behind a load balancer. When I go to myapp.com/monitoring, I get statistics from JavaMelody. However, it only gives me specifics for the node that the load balancer happens to direct me to. Is there a way I can specify which node I am browsing to in a web browser?
The Load Balancer will send you to an Amazon EC2 instance based upon a least open connections algorithm.
It is not possible to specify which instance you wish to be sent to.
You will need to connect specifically to each instance, or have the instances push their data to some central store.
You should use CloudWatch custom metrics to write data from your instances and their monitoring agent, and then use CloudWatch dimensions to aggregate this data for the relevant instances.
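For example, each instance could publish its own readings with boto3, keyed by an InstanceId dimension so the data can be viewed per node or aggregated (the namespace and metric name are made up for illustration):

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Each instance reports under its own InstanceId dimension.
    cloudwatch.put_metric_data(
        Namespace="MyApp/Monitoring",  # hypothetical namespace
        MetricData=[{
            "MetricName": "ActiveSessions",  # hypothetical metric
            "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
            "Value": 42.0,
            "Unit": "Count",
        }],
    )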
I have not tried this myself, but you could create several listeners on your load balancer, with a different listening port and a different target server for each listener. So the monitoring reports of instance #1 might be available at http://...:81/monitoring, and so on for #2 through #n.
Otherwise, I think that there are other solutions such as:
host- or path-based load balancing rules (path-based rules would require adding net.bull.javamelody.ReportServlet to your webapp to listen on different paths)
use a JavaMelody collector server to collect the data on a separate server and have monitoring reports for each instance, or aggregated across all instances
send some of the JavaMelody metrics to AWS CloudWatch or to Graphite

AWS Redundancy for a Single Instance

Small stateful app on AWS. Statefulness is the issue, and for the sake of argument assume that the app must remain stateful. How can we create redundancy across multiple AZs?
Half Baked Ideas:
1) Mirrored setup in AZ1 and AZ2. Use ELB to route all traffic to AZ1. If there are health problems, stop routing to AZ1 and route to AZ2. Is this even possible? Isn't that like anti-load-balancing?
2) Use Lambda to "turn on" an instance already created in AZ2 when AZ1 has health issues. It would also turn off the instance in AZ1. If so, could you point me towards some Lambda documentation?
3) Something way better and probably easier than 1 or 2
p.s. I know how to easily accomplish this if the app were not stateful. Unfortunately, the statefulness cannot be adjusted.
By statefulness I am assuming you need a central store for data.
Choosing technology to store state data:
1) If the nature of state retrieval is not chatty and the performance requirements are not extreme (i.e., no more than one round trip to the server to retrieve state), you should choose DynamoDB to store the state. It is multi-AZ by default; you don't have to do anything to make it HA.
2) In a high-performance scenario, or when multiple round trips are needed to retrieve state (try to avoid this), you can choose Memcached, which is available as an option in AWS ElastiCache, where you can deploy nodes in multiple AZs for HA.
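A minimal sketch of the DynamoDB option in Python with boto3 (the table name and item shape are hypothetical; the table is assumed to exist with session_id as its key):

    import boto3

    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

    # DynamoDB replicates the table across AZs automatically, so any
    # front-end server in any AZ can read and write the same state.
    table = dynamodb.Table("app-session-state")

    table.put_item(Item={"session_id": "abc123", "cart": ["sku-1", "sku-2"]})
    state = table.get_item(Key={"session_id": "abc123"}).get("Item")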
For HA of front end servers:
1) Add servers in multiple AZs and attach them to the load balancer. In order to do this:
a) Add two subnets using the VPC service, in more than one AZ.
b) Create a load balancer and give it both of these subnets (this makes your LB HA); make sure to enable "Cross-Zone Load Balancing", which will distribute traffic evenly to the servers attached to it across zones.
c) Create two (minimum) app servers and add one to each subnet in a different AZ; add both of them to the LB created above.
2) Make sure that your servers retrieve and store state from a central data layer (which is also HA now).
You are good to go. You can optionally set up auto scaling in AWS using launch configurations and Auto Scaling groups if that's a requirement, which I'm guessing it will be.
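A rough boto3 sketch of steps (a)-(c) for a classic ELB (the subnet and instance IDs are placeholders):

    import boto3

    elb = boto3.client("elb", region_name="us-east-1")

    # One subnet per AZ makes the load balancer itself highly available.
    elb.create_load_balancer(
        LoadBalancerName="frontend-lb",
        Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80,
                    "InstanceProtocol": "HTTP", "InstancePort": 80}],
        Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholder IDs
    )

    # Distribute traffic evenly across zones, not just across subnets.
    elb.modify_load_balancer_attributes(
        LoadBalancerName="frontend-lb",
        LoadBalancerAttributes={"CrossZoneLoadBalancing": {"Enabled": True}},
    )

    # One app server per AZ, both attached to the LB.
    elb.register_instances_with_load_balancer(
        LoadBalancerName="frontend-lb",
        Instances=[{"InstanceId": "i-0aaaa1111"}, {"InstanceId": "i-0bbbb2222"}],
    )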

Load balancer for PHP application

Questions about load balancers if you have time.
So I've been using AWS for some time now. Super basic instances, using them to do some tasks whenever I needed something done.
I have a task that needs to be load balanced now. It's not a public service though. It's pretty much a giant cron job that I don't want running on the same servers as my website.
I set up an AWS load balancer, but it doesn't do what I expected it to do.
It gets stuck on one server and doesn't load balance at all. I've read why it does this, and that's all fine and well, but I need it to be a serious round-robin load balancer.
edit:
I've set up the instances in different zones, but no matter how many instances I add to the ELB, it just uses one. If I take that instance down, it switches to a different one, so I know it's working. But I really would like it to always use a different one under every circumstance.
I know there are alternatives. Here's my question(s):
Would a custom PHP load balancer be an OK option for now?
I.e.: have a list of servers and have PHP randomly select an EC2 instance (see the sketch at the end of this question). It wouldn't be scalable at all, but at least I could set this up in 2 minutes and it would work for now.
or
Should I take the time to learn how HAProxy works, and set that up in place of the AWS ELB?
or
Am I doing it wrong, and AWS's ELB does do round robin, and I just have something configured wrong?
edit:
Structure:
1) Web server finds a task to do.
2) If it's too large, it sends it off to AWS (to the load balancer).
3) Do the job on EC2
4) Report back via curl to an API
5) Rinse and repeat
Everything works great. But because the connection always comes from my server (one IP), it gets stuck to a single EC2 machine.
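For reference, the "random selection" idea from the question could look something like this (shown in Python rather than PHP just for illustration; the worker URLs are hypothetical). Because each job picks a worker at random, it sidesteps the source-IP stickiness entirely:

    import random
    import urllib.request

    # Hypothetical pool of worker instances (could also be the instances'
    # private DNS names fetched from the EC2 API).
    WORKERS = [
        "http://10.0.1.10:8080/run-task",
        "http://10.0.2.11:8080/run-task",
        "http://10.0.3.12:8080/run-task",
    ]

    def dispatch(payload):
        # Pick a worker at random per job instead of relying on the ELB.
        url = random.choice(WORKERS)
        req = urllib.request.Request(url, data=payload, method="POST")
        urllib.request.urlopen(req, timeout=10)

    dispatch(b'{"task": "resize-images"}')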
ELB works well for sites whose load increases gradually. If you are expecting an uncommon and sudden increase in load, you can ask AWS to pre-warm it for you.
I can tell you I have used ELB in different scenarios and it has always worked well for me. As you didn't provide much information about your architecture, I would bet that ELB will work for you. For the case where all connections are hitting only one server, I would ask you:
1) Did you check the ELB to see how many instances are behind it?
2) Are the instances that you have behind the ELB all alive?
3) Are you accessing your application through the ELB DNS?
Anyway, here is an excerpt from an excellent article that makes a very good comparison between ELB and HAProxy: http://harish11g.blogspot.com.br/2012/11/amazon-elb-vs-haproxy-ec2-analysis.html
ELB provides Round Robin and Session Sticky algorithms based on EC2 instance health status. HAProxy provides a variety of algorithms like Round Robin, Static-RR, Least Connection, source, uri, url_param, etc.
Hope this helps.
This point comes as a surprise to many users of Amazon ELB. Amazon ELB behaves a little strangely when incoming traffic originates from a single or specific IP range: it does not efficiently round robin, and it sticks the requests. Under such conditions, Amazon ELB starts favoring a single EC2 instance, or EC2 instances in a single Availability Zone in Multi-AZ deployments. For example: if you have Application A (a customer company) and Application B, and Application B is deployed inside AWS infrastructure with an ELB front end, all the traffic generated from Application A (a single host) is sent to Application B in AWS. In this case, the ELB of Application B will not efficiently round robin the traffic to the Web/App EC2 instances deployed under it. This is because the entire incoming traffic from Application A comes from a single firewall/NAT or a specific range of server IPs, and ELB will start unevenly sticking the requests to a single EC2 instance, or to EC2 instances in a single AZ.
Note: Users usually encounter this during load testing, so it is ideal to load test AWS infrastructure from multiple distributed agents.
More info at Point 9 in the following article: http://harish11g.blogspot.in/2012/07/aws-elastic-load-balancing-elb-amazon.html
HAProxy is not hard to learn and is tremendously lightweight yet flexible. I actually use HAProxy behind ELB for the best of both worlds: the hardened, managed, hands-off reliability of ELB facing the Internet and unwrapping SSL, and the flexible configuration of HAProxy to allow me to fine-tune how things hit my servers. I've never lost an HAProxy instance yet, but if I do, ELB will just take that one out of rotation... as I have seen happen when the back-end servers all became inaccessible, which (because of the way it's configured) makes ELB think the HAProxy is unhealthy; but that's by design in my setup.