I read this in the FAQ section of WSO2's ELB documentation (https://docs.wso2.com/display/ELB211/FAQ#FAQ-HowdoImaketheELBhighlyavailable):
How do I make the ELB highly available? Create an ELB cluster and
route the requests to them through a hardware load balancer.
How exactly do I set up an ELB cluster? I looked at the other clustering docs on wso2.com (https://docs.wso2.com/display/CLUSTER420) and didn't see how to cluster the ELB itself. My best guess is that I just get one ELB working, set up a second instance with an identical configuration on a different host or port, and let the hardware load balancer do the rest. Is that all that's required?
Just found this, but haven't yet tried it:
http://wso2.com/library/tutorials/2013/09/make-wso2-elb-highly-available-through-aws-elb/
You must specify a different port value for each ELB in the cluster, and apply this configuration to both ELBs. The point of the configuration is to make the two ELBs aware of each other: if both of your well-known members are ELBs, then each ELB has to know about the other in order to form the ELB cluster, so in each ELB's cluster configuration you have to list the other ELB. This matters especially when one ELB restarts, because it needs to contact the other ELB to retrieve the existing cluster state before it can serve requests again.
Related
In AWS, when configuring CLB and ALB load balancers, it is mandatory to associate a Security Group, which limits the kind of traffic that can reach the load balancer. Why is a Security Group not required for an NLB? Isn't that a security risk? I know the easy answer could be "AWS designed it this way", but their documentation does not seem to explain the reasoning behind, or the advantage of, omitting security group configuration for the NLB.
The NLB is not an exception: a NAT gateway does not have SGs either.
The major difference between ALB, CLB and NLB (and NAT) is that their network interfaces (ENIs) have different Source/dest. check settings.
For ALB and CLB, the Source/dest. check is true. For NLB and NAT gateway, the option is false. Although I don't know the technical reasons why there are no SGs for NLB and NAT, I think part of the reason could be the Source/dest. check setting:
Indicates whether source/destination checks are performed, where the instance must be the source or destination of any traffic it sends or receives.
Thus, in my view the reason lies in the intended purpose of NAT and NLB rather than in a technical inability of AWS to provide SGs for them. Their main purpose is to act as a pass-through proxy: neither the NLB nor the NAT gateway generally interferes with the traffic; they mostly just pass it along. It is up to the destinations to determine whether the traffic is allowed or not, so neither NAT nor NLB uses SGs. The only way to block incoming traffic to them is through NACLs.
In contrast, ALB and CLB take an active part in the transfer of traffic, as they inspect all requests. Therefore, they also have the ability to decide whether the traffic is allowed or not.
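If you want to check the Source/dest. check setting yourself, here is a minimal boto3 sketch (the region is a placeholder, and the description filter is an assumption based on the "ELB ..." naming AWS uses for load balancer ENIs):

```python
import boto3

# Placeholder region; adjust to your environment.
ec2 = boto3.client("ec2", region_name="eu-west-1")

# ENIs created by ELB carry a Description starting with "ELB"
# (e.g. "ELB app/my-alb/..." or "ELB net/my-nlb/...").
resp = ec2.describe_network_interfaces(
    Filters=[{"Name": "description", "Values": ["ELB *"]}]
)

for eni in resp["NetworkInterfaces"]:
    print(eni["NetworkInterfaceId"],
          eni["Description"],
          "SourceDestCheck =", eni["SourceDestCheck"])
```

If the reasoning above is right, ALB/CLB interfaces should show True and NLB interfaces False.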
I guess a security group is not required for a Network Load Balancer (NLB) because it behaves transparently by preserving the source IP for the associated target instances. That is, you can still specify security groups, but directly at the target level instead of at the load balancer. So conceptually, it does not make much of a difference (when using EC2 instances behind an NLB) where the SGs are specified. Some people do point out that it can be tricky to restrict the IP range for the NLB health checks. [1] Moreover, I think it might be more convenient to specify security group rules once (centrally) at the load balancer instead of attaching a specific security group to each EC2 instance that is a target of the NLB. These two points can be seen as shortcomings of the NLB compared to the other two load balancers.
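As a rough sketch of what "SGs at the target level" looks like in practice (the security group ID, port and CIDR ranges below are placeholders), you would allow the client range plus the VPC range, the latter to cover the NLB health checks:

```python
import boto3

ec2 = boto3.client("ec2")

TARGET_SG = "sg-0123456789abcdef0"   # SG attached to the EC2 targets (placeholder)
CLIENT_CIDR = "203.0.113.0/24"       # allowed client range (placeholder)
VPC_CIDR = "10.0.0.0/16"             # covers health checks from the NLB nodes (placeholder)

ec2.authorize_security_group_ingress(
    GroupId=TARGET_SG,
    IpPermissions=[
        # Application traffic arrives with the original client IP preserved.
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": CLIENT_CIDR, "Description": "clients via NLB"}]},
        # Health checks come from the NLB nodes' private addresses inside the VPC.
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": VPC_CIDR, "Description": "NLB health checks"}]},
    ],
)
```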
Technically, the NLB is built on a completely new technology compared to the ALB/CLB. Some of the differences are pointed out on reddit by an AWS employee [2]:
At a high level, Classic (CLB) and Application (ALB) Load Balancers are a collection of load balancing resources connected to your VPC by a collection of Elastic Network Interfaces (ENIs). They have listeners that accept requests from clients and route them to your targets (ALB & NLB) / backends (CLB). In the same vein, a Network Load Balancer (NLB) is a similar grouping of load balancing resources connected to your VPC, but using an AWS Hyperplane ENI, instead of a regular ENI. A Hyperplane ENI is a distributed construct that integrates with EC2's Software Defined Network (SDN) to transparently connect multiple underlying load balancing resources via a single IP address.
If you haven't heard the term Hyperplane before, feel free to check out the corresponding re:Invent session. [3] Hyperplane is used for the NAT Gateway, PrivateLink and Lambda's improved VPC networking [4].
Given how much Hyperplane is capable of doing, and given that it is built on EC2, I see no reason why AWS could not have implemented SGs for NLBs if they had wanted to. I agree with @Marcin that this is probably by design.
[1] https://forums.aws.amazon.com/thread.jspa?threadID=263245
[2] https://www.reddit.com/r/aws/comments/cwbkw4/behind_the_scenes_what_is_an_aws_load_balancer/#t1_eyb2gji
[3] https://www.youtube.com/watch?v=8gc2DgBqo9U#t=33m40s
[4] https://aws.amazon.com/de/blogs/compute/announcing-improved-vpc-networking-for-aws-lambda-functions/
An NLB works at the fourth layer of the OSI model: the communication goes through the network load balancer and the original connection details reach the target. In this case the EC2 instances receive the client IP, so the instances' security group has to allow the source clients' IPs.
An ALB works at the seventh layer of the OSI model: the communication terminates at the ALB listener, which then opens a new connection to the targets. The EC2 instances therefore receive the ALB's IPs instead of the clients' IPs.
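Here is a hedged sketch of what that difference means for the targets' security group rules (all security group IDs and CIDRs below are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

NLB_TARGET_SG = "sg-0123456789abcdef0"  # SG of targets behind the NLB (placeholder)
ALB_TARGET_SG = "sg-0fedcba9876543210"  # SG of targets behind the ALB (placeholder)
ALB_SG = "sg-0a1b2c3d4e5f67890"         # SG attached to the ALB itself (placeholder)

# Behind an NLB: the targets see the client IP, so allow the client range directly.
ec2.authorize_security_group_ingress(
    GroupId=NLB_TARGET_SG,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
                    "IpRanges": [{"CidrIp": "203.0.113.0/24"}]}],
)

# Behind an ALB: the targets see the ALB's IPs, so reference the ALB's security group.
ec2.authorize_security_group_ingress(
    GroupId=ALB_TARGET_SG,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
                    "UserIdGroupPairs": [{"GroupId": ALB_SG}]}],
)
```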
For more details:
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html
My applications run on Elastic Beanstalk and communicate purely with internal services like Kinesis and DynamoDB; there is no web traffic needed. Do I need an Elastic Load Balancer in order to scale my instances up and down? I want to add and remove instances purely based on some CloudWatch metrics. And do I need the ELB for managed updates etc.?
If there is no traffic to the service then there is no need to have a load balancer.
In fact, a load balancer's primary purpose is to distribute inbound traffic such as web requests.
Autoscaling can still be accomplished without a load balancer, with scaling based on whichever CloudWatch metric you want to use. In fact, this is generally how consumer-based (worker) applications tend to work.
To create this without a load balancer you would want to configure your environment as a worker environment.
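As a rough illustration of metric-based scaling without a load balancer (not specific to Beanstalk; the Auto Scaling group name, queue name and target value are placeholders), a target-tracking policy on a custom CloudWatch metric could look like this with boto3:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Placeholder names: scale the group so that the number of visible messages
# in the work queue stays around the target value.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-worker-asg",
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "Namespace": "AWS/SQS",
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Dimensions": [{"Name": "QueueName", "Value": "my-work-queue"}],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,
    },
)
```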
@Chris already answered, but I would like to complement his answer with the following:
There is no web traffic needed.
Even if you communicate with Kinesis and DynamoDB only, your instances still need to be able to reach the internet to talk to those AWS services, so outbound web traffic from your instances is still required. What is not needed is direct inbound traffic to your instances.
To fully separate your EB environment from the internet, you should have a look at the following:
Using Elastic Beanstalk with Amazon VPC
The document describes what can and can't be done when using private subnets.
I was going through the article https://github.com/Netflix/eureka/wiki/Eureka-at-a-glance#how-different-is-eureka-from-aws-elb about Eureka when I came across this term. I'm also quite confused about what the paragraph means (EC2-Classic and AWS security groups). It said:
AWS Elastic Load Balancer is a load balancing solution for edge services exposed to end-user web traffic. Eureka fills the need for mid-tier load balancing. While you can theoretically put your mid-tier services behind the AWS ELB, in EC2 classic you expose them to the outside world and thereby losing all the usefulness of the AWS security groups.
I'm completely new to microservice architecture and am reading articles from whatever sources I can find. Any help would be appreciated!
A mid-tier load balancer is a load balancer that isn't exposed to the Internet, but instead is intended to distribute internally-generated traffic between components in your stack.
An example would be the "order placement" (micro)service verifying prices by sending requests to the "catalog item details" (micro)service -- you need a mid-tier load balancer in front of the multiple nodes providing the "catalog item details" service so that the request is routed to a healthy endpoint for that service, without "order placement" needing to be responsible for somehow finding a healthy "catalog item details" endpoint on its own.
Eureka was first committed to GitHub in 2012. Back then, much of EC2 was still running inside "EC2 Classic" -- in simple terms, this is the old way EC2 worked, before VPC. It was a much more primitive environment compared to today.
With EC2-Classic, your instances run in a single, flat network that you share with other customers. With Amazon VPC, your instances run in a virtual private cloud (VPC) that's logically isolated to your AWS account.
The EC2-Classic platform was introduced in the original release of Amazon EC2. If you created your AWS account after 2013-12-04, it does not support EC2-Classic, so you must launch your Amazon EC2 instances in a VPC.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-classic-platform.html
EC2 Classic supported security groups for securing access to EC2 instances, but Elastic Load Balancers (ELB) inside EC2 Classic did not.
VPC became generally available in August, 2011.
Elastic Load Balancer -- originally the only type, this type was later rebranded as "ELB Classic," and is not recommended for new environments -- was released for VPC in November, 2011 but only in the Internet-facing variety. Before this, as noted above, ELB only worked in EC2 Classic, only faced the Internet, and accepted HTTP and HTTPS traffic from everywhere. You couldn't control access with security groups.
ELB Classic learned a new trick in June 2012, with the release of Internal Elastic Load Balancers -- accessible only from services inside the VPC. These could be used securely for mid-tier traffic, but they were very limited because they could not make routing decisions based on hostname or path. ELB Classic was a very barebones load balancer with very little flexibility. You'd essentially need a different balancer for each service. One common configuration was to use HAProxy behind ELB Classic to fill in some of the feature gaps.
AWS didn't have a solid, managed, mid-tier load balancer offering until August, 2016, when the new Application Load Balancer was announced -- with the ability to send traffic to different backend target groups based on pattern matching in the request path sent to the balancer... and with support for deploying in an Internet-facing or internal-only scheme.
In April, 2017, Application Load Balancers were enhanced with the ability to also select a back-end target group based on pattern-matching the HTTP Host header, in addition to the path as before.
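For illustration, here is a hedged boto3 sketch of such a routing rule (all ARNs, hostnames, paths and priorities are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs; the listener and the target group would already exist.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:region:123456789012:listener/app/my-alb/abc/def",
    Priority=10,
    Conditions=[
        # Route by Host header and/or by path pattern.
        {"Field": "host-header", "Values": ["catalog.internal.example.com"]},
        {"Field": "path-pattern", "Values": ["/items/*"]},
    ],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:region:123456789012:targetgroup/catalog/xyz",
    }],
)
```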
At this point, VPC and ALB fill many (but, in some cases, not all) of the needs that seem to have driven the development of Eureka.
I would assume that this middle tier is something that can act as a barrier protecting your AWS ELB -- take, for example, people trying to run an SQL injection attack against it or spamming it. Also, an SG in AWS lets you specify which traffic is allowed to reach the ALB, or any other AWS resource, when you create it. So, for example, you can set up an SG that only accepts traffic from your middle-tier server as an additional level of security.
Hope this helps with a better understanding.
I'm having a hard time figuring out how to set the correct SecurityGroup rules for my LoadBalancer. I have made a diagram to try to illustrate the problem; please take a look at the image below:
I have an internet-facing LoadBalancer ("Service A LoadBalancer" in the diagram) that is requested from in-house and from one of our ECS services ("Task B" in the diagram). For the in-house requests, I can configure a SecurityGroup rule for "Service A LoadBalancer" that allows incoming requests to the LoadBalancer on port 80 from the CIDR for our in-house IPs. No problem there. But for the other ECS service, Task B, how would I go about adding a rule (to "Service A SecurityGroup" in the diagram) that only allows requests from Task B (or only from tasks in the ECS cluster)? Since it is an internet-facing LoadBalancer, requests are made from the EC2 machine's public IP, not its private IP (as far as I can tell).
I can obviously make a rule that allows requests on port 80 from 0.0.0.0/0, and that would work, but that's far from restrictive enough. And since it is an internet-facing LoadBalancer, adding a rule that allows requests from the "Cluster SecurityGroup" (in the diagram) will not cut it. I assume that is because the LB cannot infer from which SecurityGroup the request originated, as it is internet-facing, and that this would work if it were an internal LoadBalancer. But I cannot use an internal LoadBalancer, as it is also requested from outside AWS (in-house).
Any help would be appreciated.
Thanks
Frederik
We solve this by running separate Internet-facing and internal Load Balancers. You can have multiple ELBs or ALBs (ELBv2) for the same cluster. Assuming your ECS cluster runs on an IP range such as 10.X.X.X, you can open 10.X.0.0/16 for internal access on the internal ELB. Just make sure the ECS cluster SG is also open to the ELB. Task B can then reach Task A over the internal ELB, assuming you use the internal ELB's DNS name when making the request. If you hit the IP behind the public DNS name, it will always be treated as a public request.
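A rough sketch of the corresponding ingress rule on the internal load balancer's security group (the group IDs, port and CIDR are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

INTERNAL_LB_SG = "sg-0123456789abcdef0"   # SG attached to the internal ELB/ALB (placeholder)
ECS_CLUSTER_SG = "sg-0fedcba9876543210"   # SG of the ECS cluster instances/tasks (placeholder)

# Allow in-VPC callers to reach the internal load balancer, either by
# VPC CIDR or, more narrowly, by the ECS cluster's security group.
ec2.authorize_security_group_ingress(
    GroupId=INTERNAL_LB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "in-VPC callers"}],
        "UserIdGroupPairs": [{"GroupId": ECS_CLUSTER_SG}],
    }],
)
```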
However, you may want to think about whether, long term, you really need a public ELB at all. Instead of IP restrictions, the next step is usually to run a VPN such as OpenVPN so you can connect into the VPC and access everything on the private network. We generally only ever run Internet-facing ELBs if we truly want something on the internet, such as for external customers.
I am working on AWS and have a question about how many applications a load balancer can support.
For example, if I have an application whose traffic is routed and managed by one load balancer, can I use that LB for another application as well?
And if I can use that ELB for other applications too, how will the ELB know which traffic should be routed to Application A's servers and which to Application B's servers?
Thanks
I think you may be misunderstanding the role of the load balancer. The whole point of a load balancer is that any of the servers behind it can provide any of the services. By setting it up this way you ensure that the failure of any one server will not affect the availability of the service.
You can load balance any TCP service such as HTTP just by adding it as a "listener" for the ELB. The ELB can therefore support as many applications as you want to forward to the servers behind it.
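As a hedged illustration with the Classic ELB API (the load balancer name and ports are placeholders), adding more than one listener to a single ELB looks like this in boto3:

```python
import boto3

elb = boto3.client("elb")  # Classic ELB API

# Forward two different front-end ports to two different back-end ports,
# so one load balancer can carry more than one application/service.
elb.create_load_balancer_listeners(
    LoadBalancerName="my-elb",  # placeholder
    Listeners=[
        {"Protocol": "HTTP", "LoadBalancerPort": 80,
         "InstanceProtocol": "HTTP", "InstancePort": 8080},   # application A
        {"Protocol": "TCP", "LoadBalancerPort": 5000,
         "InstanceProtocol": "TCP", "InstancePort": 5000},    # application B
    ],
)
```

With an ALB you could instead keep a single listener and split traffic into different target groups using host- or path-based rules, as described earlier.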
If you set up an image of a server that provides all the services you need, you can even combine the ELB with Auto Scaling so that the number of servers scales up and down, launching or terminating instances from that image as the load varies.