Is the overhead introduced by an NLB similar to an ALB in AWS? - amazon-web-services

I have measured the round-trip time from an EC2 instance to a web server (also an EC2 instance) with a load balancer in the path. When I compare the round-trip times after changing the load balancer type (Network Load Balancer versus Application Load Balancer), the results are quite similar, even though the Network Load Balancer is said to be faster and to offer ultra-low latencies.
What could be the reason for this?
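For reference, a timing loop along these lines is one way to reproduce the comparison. The endpoint hostnames are hypothetical placeholders for your own NLB- and ALB-fronted services, and each sample opens a fresh connection, so connection setup is measured along with the request itself; this is a sketch, not the exact setup used above.

```python
# Minimal latency-comparison sketch. The endpoint names below are
# hypothetical placeholders; substitute the DNS names of your own
# NLB- and ALB-fronted services.
import http.client
import statistics
import time

ENDPOINTS = {
    "nlb": "nlb.example.com",   # hypothetical NLB DNS name
    "alb": "alb.example.com",   # hypothetical ALB DNS name
}
SAMPLES = 50

def measure(host: str, path: str = "/") -> float:
    """Time one full request/response (including TCP setup) in milliseconds."""
    conn = http.client.HTTPConnection(host, 80, timeout=5)
    start = time.perf_counter()
    conn.request("GET", path)
    conn.getresponse().read()
    elapsed = (time.perf_counter() - start) * 1000
    conn.close()
    return elapsed

for name, host in ENDPOINTS.items():
    timings = sorted(measure(host) for _ in range(SAMPLES))
    print(f"{name}: median={statistics.median(timings):.2f} ms "
          f"p95={timings[int(SAMPLES * 0.95)]:.2f} ms")
```

Comparing medians and p95s over many samples matters here, since a single-digit-millisecond difference between the two load balancer types can easily disappear inside normal network jitter.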

Related

EC2: Huge spike in incoming network traffic

I am seeing a huge spike in NetworkIn traffic to my application, which runs on an EC2 instance hosted through Elastic Beanstalk. The app is served through a Classic Load Balancer.
The instance has a public IP, but HTTP(S) access to that public IP is restricted to the load balancer only.
The load balancer is publicly accessible.
Given this, I cannot see how the instance could receive more NetworkIn than the load balancer. I am planning to enable WAF and Shield on the load balancer, which should rectify the issue assuming the traffic is coming through the load balancer. But if that is the case:
Why doesn't the load balancer chart show the spikes?
In which cases would the underlying instance receive more traffic than the load balancer? Please see the charts included below.
[Chart: EC2 NetworkIn]
[Chart: Load Balancer Processed Bytes]
Any help or indications will be highly appreciated.
Maybe some crawlers are trying to access your instance directly. Generally it would be better to place your instance in a private subnet and restrict access with the security group.
It turned out to be the network drive (EFS), which was hitting its maximum I/O, and AWS was capping the throughput. When the app couldn't access it, the app server went down.
The app was flushing a large number of updates to the network file system, so increasing the throughput by changing the Throughput mode seems to have fixed the issue.
The lesson for me here was that EC2 NetworkIn actually includes all I/O on the EC2 network interface, including internal app traffic to EFS.
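A quick way to confirm this kind of discrepancy is to pull both metrics from CloudWatch over the same window and compare the totals; a large gap points at traffic that never passes through the load balancer, such as EFS. The sketch below assumes boto3 credentials are configured, and the instance ID and load balancer name are hypothetical placeholders.

```python
# Sketch: compare EC2 NetworkIn with the load balancer's processed bytes
# over the same window. Instance and load balancer names are hypothetical;
# assumes boto3 credentials and region are already configured.
from datetime import datetime, timedelta, timezone

import boto3

cw = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(hours=6)

def total_bytes(namespace, metric, dimensions):
    """Sum a byte-count metric over the window in 5-minute buckets."""
    resp = cw.get_metric_statistics(
        Namespace=namespace,
        MetricName=metric,
        Dimensions=dimensions,
        StartTime=start,
        EndTime=end,
        Period=300,
        Statistics=["Sum"],
    )
    return sum(p["Sum"] for p in resp["Datapoints"])

instance_in = total_bytes(
    "AWS/EC2", "NetworkIn",
    [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical
)
# Classic ELBs publish EstimatedProcessedBytes; ALBs publish ProcessedBytes
# in the AWS/ApplicationELB namespace instead.
elb_bytes = total_bytes(
    "AWS/ELB", "EstimatedProcessedBytes",
    [{"Name": "LoadBalancerName", "Value": "my-classic-elb"}],  # hypothetical
)

print(f"EC2 NetworkIn:       {instance_in / 1e9:.2f} GB")
print(f"ELB processed bytes: {elb_bytes / 1e9:.2f} GB")
```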

Consolidating multiple AWS Classic Load Balancers into a single load balancer

We currently use a single AWS Classic Load Balancer per EC2 instance. This was cost-effective with only a few instances, but now that the project is growing we have 8 Classic Load Balancers, which is starting to cost more than we'd like.
What could I do to consolidate these multiple load balancers into a single load balancer?
The current load balancers are only used to forward HTTP/HTTPS traffic to the single EC2 instance registered against each of them.
I have DNS A records set up to route to the load balancers.
Without knowing all the details, you might be better off creating a single Application Load Balancer with multiple target groups. That way you pay for only one load balancer, and the segregation happens at the target-group level rather than the load-balancer level.
If you need HTTP(S) access to some pieces of infrastructure and non-HTTP application access to others, then you might consider one Network Load Balancer and one Application Load Balancer.
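As a rough illustration of that consolidation, the sketch below adds host-based rules to an existing ALB listener so each DNS name is forwarded to its own target group. The ARNs, hostnames, and priorities are hypothetical placeholders, not a prescribed layout.

```python
# Sketch: route each hostname on one shared ALB to its own target group.
# All ARNs and hostnames below are hypothetical placeholders.
import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = "arn:aws:elasticloadbalancing:region:acct:listener/app/shared-alb/abc/def"  # hypothetical

apps = [
    # (host header, target group ARN, rule priority)
    ("app1.example.com", "arn:aws:elasticloadbalancing:region:acct:targetgroup/app1/111", 10),
    ("app2.example.com", "arn:aws:elasticloadbalancing:region:acct:targetgroup/app2/222", 20),
]

for host, tg_arn, priority in apps:
    elbv2.create_rule(
        ListenerArn=LISTENER_ARN,
        Priority=priority,
        Conditions=[{"Field": "host-header", "Values": [host]}],
        Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )
```

The existing DNS records would then point at the single ALB (as Route 53 ALIAS records or CNAMEs, since load balancer IPs can change) instead of at eight separate Classic Load Balancers.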

Load balancer in EC2 AWS

I am working on AWS. I have a question about how many applications a load balancer can support.
If I have an application whose traffic is routed and managed by one load balancer, can I use that LB for another application as well?
And if I can use that ELB for other applications, how will the ELB know which traffic should be routed to Application A's servers and which to Application B's servers?
Thanks
I think you may be misunderstanding the role of the load balancer. The whole point of a load balancer is that any of the servers behind it can provide any of the services. By setting it up this way you ensure that the failure of any one server will not affect availability of the service.
You can load balance any TCP service such as HTTP just by adding it as a "listener" for the ELB. The ELB can therefore support as many applications as you want to forward to the servers behind it.
If you set up an image of a server that provides all the services you need, you can even pair the ELB with Auto Scaling so that the number of servers grows and shrinks, launching or terminating instances from that image as the load varies.
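For example, on a Classic ELB each application would get its own listener port, since a Classic ELB distinguishes traffic by port rather than by hostname (host-based routing needs an ALB, as in the previous question). The load balancer name and ports below are hypothetical placeholders.

```python
# Sketch: add listeners for two applications to one Classic ELB.
# The load balancer name and ports are hypothetical placeholders.
import boto3

elb = boto3.client("elb")

elb.create_load_balancer_listeners(
    LoadBalancerName="my-shared-elb",  # hypothetical
    Listeners=[
        # Application A: public port 80 -> instance port 8080
        {"Protocol": "HTTP", "LoadBalancerPort": 80,
         "InstanceProtocol": "HTTP", "InstancePort": 8080},
        # Application B: public port 8081 -> instance port 8081
        {"Protocol": "HTTP", "LoadBalancerPort": 8081,
         "InstanceProtocol": "HTTP", "InstancePort": 8081},
    ],
)
```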

Amazon web service load balancer unable to balance the load equally

I am running a couple of instances under my AWS Elastic Load Balancer. Say I have 6 large Ubuntu instances running under the ELB. The problem I am facing right now is that load is not evenly distributed across the Availability Zones. I am running 3 large instances in ap-southeast-1a and 3 in ap-southeast-1b, but the ELB is sending more load to 1b, and those instances stop responding once they hit 100% CPU, so the ELB automatically takes them out of service, which causes downtime. DNS is parked at Godaddy.com.
How do I make sure that the ELB distributes traffic equally across the Availability Zones?
Kindly help me!!!
There could be a number of reasons for this. Without doing more digging, it's hard to know which one you are experiencing.
Sticky sessions can result in instance traffic becoming unbalanced, although this depends heavily on usage patterns and your application.
Cached DNS resolution. Part of how the ELB works is to direct traffic round-robin at the DNS level. If a large number of users are all using the same DNS resolver provided by an ISP, they might all get sent to the same zone. Couple this with sticky sessions and you will end up with a bunch of traffic that will never switch. Using Route 53 with ALIAS records may reduce this somewhat.
If you can't get the ELB to balance your traffic better, you can set up something similar with Varnish Cache or another software load balancer. It's not as convenient, but you will ultimately have more control.
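One ELB-side setting worth checking before moving off the ELB is cross-zone load balancing, which lets each load balancer node spread requests over instances in every enabled zone rather than only its own. A minimal sketch of enabling it on a Classic ELB with boto3, assuming a hypothetical load balancer name:

```python
# Sketch: enable cross-zone load balancing on a Classic ELB so each
# load balancer node distributes requests across instances in all
# enabled Availability Zones. The name below is a hypothetical placeholder.
import boto3

elb = boto3.client("elb")

elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-elb",  # hypothetical
    LoadBalancerAttributes={
        "CrossZoneLoadBalancing": {"Enabled": True},
    },
)
```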

Unusual behavior of AWS Elastic Load balancer

I am running an e-commerce Ruby on Rails application on an AWS stack. I am running Ubuntu 10.04 EC2 instances behind an Elastic Load Balancer, and I have maintained an equal number of instances in both Availability Zones, 1a and 1b. But from my observation, the ELB seems to be pushing more traffic to 1a rather than dividing it equally, even though the health of the instances running in 1b is good and I have disabled sticky sessions on the ELB. I have 2 large and 1 medium instances running in each of the Availability Zones.
What is the cause of the unequal distribution of the load?
In my experience, this can happen if a disproportionate amount of traffic is coming from a single network or IP address.
ELB uses different layers of balancing. DNS load balancing will send it to a set of IP addresses in one of the two zones, and the software load balancer will distribute traffic between instances in the zone.
If you have a lot of traffic coming from the same network, it's likely that a lot of users are getting the same DNS resolution for your load balancer and ending up in the same zone.
If the source traffic is coming from a single network/IP range or a single IP address, the ELB might load balance the traffic disproportionately to the backend. I have discussed this point, as well as a couple of other details to note about ELB, in my blog post "Dissecting ELB". I have also noticed this behavior in some popular open-source load balancer implementations: you can have the balancing algorithm set to "source" combined with session stickiness, and if a session ID is not sent over HTTP, it will load balance based on "source".
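To see the DNS-level behavior described above for yourself, you can resolve the ELB's DNS name repeatedly (ideally from different networks) and note which addresses, and therefore which zone's load balancer nodes, a client is handed. A small sketch, assuming a hypothetical ELB DNS name:

```python
# Sketch: resolve an ELB DNS name repeatedly to see which load balancer
# node IPs a client is handed. The hostname is a hypothetical placeholder.
import socket
from collections import Counter

ELB_DNS = "my-elb-1234567890.ap-southeast-1.elb.amazonaws.com"  # hypothetical

seen = Counter()
for _ in range(20):
    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
        ELB_DNS, 80, proto=socket.IPPROTO_TCP
    ):
        seen[sockaddr[0]] += 1

for ip, count in seen.most_common():
    print(f"{ip}: returned {count} times")
```

If one client network keeps getting the same small set of IPs, its traffic will keep landing on the same zone's nodes, which matches the uneven distribution described above.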