How to browse to a specific instance behind an AWS load balancer - amazon-web-services

I have a monitoring tool, JavaMelody, installed with my application. The application runs on 7 different instances in AWS, in an Auto Scaling group behind a load balancer. When I go to myapp.com/monitoring, I get statistics from JavaMelody. However, it only gives me the specifics of whichever node the load balancer happens to direct me to. Is there a way I can specify which node I am browsing to in a web browser?

The load balancer will send you to an Amazon EC2 instance based on a least outstanding requests algorithm.
It is not possible to specify which instance you wish to be sent to.
You will need to connect specifically to each instance, or have the instances push their data to some central store.
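A rough boto3 sketch of the first option, connecting to each instance directly. The Auto Scaling group name, port, and path here are assumptions to adapt to your setup:

```python
def get_instance_ips(asg_name):
    """Return the private IPs of the instances in an Auto Scaling group."""
    import boto3  # requires AWS credentials to actually run

    asg = boto3.client("autoscaling")
    ec2 = boto3.client("ec2")
    groups = asg.describe_auto_scaling_groups(AutoScalingGroupNames=[asg_name])
    ids = [i["InstanceId"]
           for g in groups["AutoScalingGroups"]
           for i in g["Instances"]]
    reservations = ec2.describe_instances(InstanceIds=ids)["Reservations"]
    return [inst["PrivateIpAddress"]
            for r in reservations
            for inst in r["Instances"]]

def monitoring_urls(ips, port=8080, path="/monitoring"):
    """Build one direct JavaMelody URL per node (port/path are assumed)."""
    return ["http://{}:{}{}".format(ip, port, path) for ip in ips]
```

You would then open each URL in turn (from a host that can reach the instances' private IPs, e.g. over a VPN or bastion).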

You should use CloudWatch custom metrics to write data from your instances and their monitoring agent, and then use CloudWatch dimensions to aggregate this data across the relevant instances.
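A minimal boto3 sketch of that approach, tagging each datum with an InstanceId dimension; the namespace and metric name are made up for illustration:

```python
def build_metric(instance_id, count):
    """Build one CloudWatch datum tagged with an InstanceId dimension."""
    return {
        "MetricName": "HttpRequests",  # hypothetical metric name
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Value": float(count),
        "Unit": "Count",
    }

def publish_request_count(instance_id, count):
    """Push the datum to CloudWatch (needs AWS credentials to run)."""
    import boto3

    boto3.client("cloudwatch").put_metric_data(
        Namespace="MyApp/Monitoring",  # hypothetical namespace
        MetricData=[build_metric(instance_id, count)],
    )
```

Querying the same metric name without the InstanceId dimension (or across all its values) then gives you the fleet-wide aggregate.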

I have not tried this myself, but you could create several listeners in your load balancer, each with a different listening port and a different target server. The monitoring report of instance #1 would then be available at http://...:81/monitoring, and similarly on other ports for instances #2 through #n.
Otherwise, I think there are other solutions, such as:
host- or path-based load balancing rules (path-based rules would require adding net.bull.javamelody.ReportServlet to your webapp to listen on different paths)
use a JavaMelody collector server to gather the data on a separate server, with monitoring reports per instance or aggregated across all instances
send some of the JavaMelody metrics to AWS CloudWatch or to Graphite
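The port-per-listener idea above boils down to a fixed mapping from listener port to instance, roughly like this (addresses and starting port are placeholders):

```python
def build_port_map(instance_ips, first_port=81):
    """Map one load-balancer listener port to each instance:
    port 81 -> node 1, 82 -> node 2, and so on."""
    return {first_port + i: ip for i, ip in enumerate(instance_ips)}
```

Note the mapping has to be updated whenever the Auto Scaling group replaces an instance, which is the main drawback of this workaround.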

Related

Running multiple web services on a single ECS cluster

If I have an ECS cluster with N distinct websites running as N services on said cluster - how do I go about setting up the load balancers?
The way I've done it currently is for each website X,
I create a new target group spanning all instances in the cluster
I create a new application load balancer
I attach the ALB to the service using the target group
It seems to work... but I want to make sure this is the correct way to do this.
Thanks!
The way you are doing it is of course one way to do it and how most people accomplish this.
Application Load Balancers also support two other types of routing: host-based and path-based.
http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#host-conditions
Host-based routing lets you route based on the Host header of the incoming request. So, for instance, if you have website1.com and website2.com you could send them both through the same ALB and route accordingly.
http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#path-conditions
Similarly, you can do the same thing with the path. If your websites were website1.com/site1/index.html and website1.com/site2/index.html, you could put both of those on the same ALB and route accordingly.
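A small sketch of how those two rule types pick a target group; the hosts, path prefixes, and target-group names are made up for illustration:

```python
# Hypothetical rule tables mirroring ALB host and path conditions.
HOST_RULES = {"website1.com": "tg-site1", "website2.com": "tg-site2"}
PATH_RULES = [("/site1/", "tg-site1"), ("/site2/", "tg-site2")]

def route(host, path, default="tg-default"):
    """First try a host rule, then a path-prefix rule, else the default."""
    if host in HOST_RULES:
        return HOST_RULES[host]
    for prefix, target in PATH_RULES:
        if path.startswith(prefix):
            return target
    return default
```

One ALB with rules like these replaces the N separate load balancers in the original setup.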

AWS ECS handling DNS subdomains across multiple instances

So I am trying to get my AWS setup working with DNS.
I have 2 instances (currently).
I have 4 task definitions. 3 of these need to run on port 80/443, however all on separate subdomains.
Currently if I stop/start a task, it can end up on either of my instances. This causes issues with the subdomain DNS potentially being pointed in the wrong places.
I imagine I need to setup some kind of load balancer to point the DNS at, but unsure how to get that to route through to the correct tasks.
So my questions:
Do I need a single load balancer, or one per 'task / subdomain'?
How do I handle the ports to go from a set source port to one of any number of destination ports (if I end up having multiple containers running the same task)?
Am I over complicating this massively, or is there a simpler way to achieve this?
Do I need a single load balancer, or one per 'task / subdomain'?
You can have a single Application Load Balancer and three target groups for the Api, Site and Web App. Then you can do rule-based routing in the load balancer listener, as shown in the following screenshot.
Ref: http://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-update-rules.html
You can then map your domains www.domain.com and app.domain.com to the load balancer.
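The listener evaluates such rules in priority order (lowest number first) and falls back to a default action; a sketch with made-up priorities, conditions, and target-group names:

```python
# Hypothetical listener rules, as (priority, condition, target group).
RULES = [
    (10, {"host-header": "app.domain.com"}, "tg-webapp"),
    (20, {"path-pattern": "/api/*"},        "tg-api"),
]
DEFAULT_TARGET = "tg-site"

def evaluate(host, path):
    """Return the target group of the first matching rule, by priority."""
    for _priority, condition, target in sorted(RULES):
        if condition.get("host-header") == host:
            return target
        pattern = condition.get("path-pattern")
        if pattern and path.startswith(pattern.rstrip("*")):
            return target
    return DEFAULT_TARGET
```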
How do I handle the ports to go from a set source port to one of any number of destination ports (if I end up having multiple containers running the same task)?
When you create services for your task definitions in ECS you can configure load balancing using the target groups you created.
Ref: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html (Check on "Configuring Your Service to Use a Load Balancer")

How can I get useful load testing data for my AWS server?

I have a system set up on AWS where I have a set of EC2 instances (as application servers from an Elastic Beanstalk) running in an auto-scaling, load-balanced environment. All this works fine.
I would like to load test this environment in order to obtain results that help me figure out what more needs to be done for the system to handle, potentially, millions of users. I have used a tool called Locust (http://locust.io) so far to do this. It allows me to send requests to my instance(s?) through a proxy as desired. However, I cannot tell whether the requests are being routed to multiple instances or constantly to the same one; and if they are being load balanced appropriately, I can't see how many requests each of the EC2 instances receives or their health under load. (I have a feeling the requests are not being properly load balanced, as the failure rate always seems to increase drastically at a similar point in every test run.)
Is there a way to get this information inside from the AWS ec2 or elastic beanstalk consoles, or is there a better distributed web based load testing tool that can provide the data I need?
There are two ways to get this information:
1) Create an S3 bucket and save the ELB access logs to it. You can filter these logs to check which instance served each request.
2) Retrieve application-level logs: if Apache/Nginx is installed on your EC2 instances to serve the requests, filter the Apache/Nginx logs on every machine.
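A sketch for option 1: in Classic ELB access-log lines the fourth space-separated field is backend:port, so tallying requests per instance is a short parse (the sample addresses below are placeholders):

```python
from collections import Counter

def count_backends(log_lines):
    """Tally requests by backend instance IP from ELB access-log lines."""
    counts = Counter()
    for line in log_lines:
        fields = line.split()
        # fields: timestamp, elb name, client:port, backend:port, ...
        if len(fields) > 3 and ":" in fields[3]:
            backend_ip = fields[3].split(":")[0]
            counts[backend_ip] += 1
    return counts
```

A heavily skewed count here would confirm the suspicion that the load is not being balanced evenly.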
Hope it helps!
There is a way to get this data from the AWS console.
Inside the Elastic Beanstalk console there is a tab titled Health. This tab (in the enhanced health overview) shows the number of requests per second, the responses to those requests, the latency, the load average and the CPU utilisation for each EC2 instance run by the Elastic Beanstalk environment.
An example of this data is shown in the following image.
This data lets the system manager see which back-end instances are receiving requests, and how many requests each receives through the load balancer and proxy.
This can also be attained from the AWS CLI using:
eb health environment_name

Amazon AWS routing from presentation layer to application layer

I have the following scenario: a cluster of Amazon EC2 servers works as the presentation layer, and these servers pass requests to another cluster of EC2 servers (the business layer) through an Amazon Elastic Load Balancer.
The new requirement is: the business layer's servers will be responsible for some tasks, not all tasks. For example, servers of type one will serve requests of types 1, 2 and 3; servers of type two will serve requests of types 4, 5 and 6; and so on.
What is the best way to implement this logic in Amazon AWS? Do I need an Elastic Load Balancer for each type, can I put routing logic in one load balancer, or do I have to do something else?
Thank you
ELB doesn't let you inspect your traffic like that. Either create multiple ELBs, or handle it yourself with something like nginx+haproxy.
Best is to use different clusters for different functionality.
Each cluster will have a different endpoint URL, so you can reach the one you need from the presentation layer.
For certain types of jobs (mainly long-running ones), you will have to use SQS and post the messages from the presentation layer. The clusters can then pick the jobs they are interested in and execute them. You can separate the different jobs by posting different SQS messages.
When you set up these "task"-based clusters, it is easy to manage them as auto-scaling clusters - cost-effective and easy to scale (1 to many, based on need). Read more here: http://aws.amazon.com/autoscaling/
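The per-cluster-endpoint approach amounts to a routing table in the presentation layer, roughly like this (the endpoint names are placeholders):

```python
# Hypothetical mapping from request type to business-layer cluster endpoint.
CLUSTER_FOR_TYPE = {
    1: "http://cluster-one.internal", 2: "http://cluster-one.internal",
    3: "http://cluster-one.internal", 4: "http://cluster-two.internal",
    5: "http://cluster-two.internal", 6: "http://cluster-two.internal",
}

def endpoint_for(request_type):
    """Pick the business-layer endpoint that handles this request type."""
    try:
        return CLUSTER_FOR_TYPE[request_type]
    except KeyError:
        raise ValueError("no cluster handles request type %r" % request_type)
```

Each endpoint would be the DNS name of that cluster's own ELB, so each cluster still scales independently behind it.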

Can I specify different set of upstream directives for different routes in Amazon ELB

I am currently using an Nginx server as my load balancer, but in order to use Amazon's load balancing features I want to move to Amazon ELB. The problem is that my application has different routes or locations (same domain name with different sub-urls) that are handled by different EC2 instances. For example, abc.com/ is handled by one set of EC2 instances while abc.com/xyz/* is handled by another set. For now I use Nginx to specify different upstream lists and the locations they handle. I looked for that in Amazon ELB but didn't find it. So is it possible to do that in Amazon ELB, or is there any way around it?
Sorry - other than supporting sticky sessions, there is no request-based routing logic in ELB.
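For reference, the per-location split being asked about looks roughly like this in nginx, which ELB cannot replicate (upstream names and addresses are placeholders):

```nginx
# One upstream pool per sub-url; ELB has no equivalent of this split.
upstream root_pool {
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
}
upstream xyz_pool {
    server 10.0.2.10:8080;
}
server {
    listen 80;
    server_name abc.com;

    location / {
        proxy_pass http://root_pool;
    }
    location /xyz/ {
        proxy_pass http://xyz_pool;
    }
}
```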