This seems like it should be fairly easy, but I'm having some trouble finding a specific answer.
What is the best way to route traffic from an Amazon EC2 load balancer to specific instances?
For example, I want traffic on port 22 to always go to instance 1, and traffic on 443 and 80 to go to instance 2. Can this be done using the load balancer? Or must it be done another way?
Thanks.
No, you can't do that with an ELB. ELBs do allow you to have multiple listeners, but pinning a specific listener port to a specific server rather defeats the purpose of a load balancer.
To do what you want really requires two ELBs: set one up for your HTTP traffic and another for your SSH traffic. I find it a bit odd that you would be using a load balancer for SSH; if it is simply to have a gateway into a VPC, you could instead use an actual gateway server for that purpose.
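As a rough sketch with the modern AWS CLI (all IDs, names, and ARNs below are placeholders, not from your setup), the two load balancers could look something like this, using an Application Load Balancer for web traffic and a Network Load Balancer for TCP 22:

$ aws elbv2 create-load-balancer --name web-lb --type application \
      --subnets subnet-aaaa subnet-bbbb --security-groups sg-web
$ aws elbv2 create-target-group --name web-tg --protocol HTTP --port 80 --vpc-id vpc-1234
$ aws elbv2 register-targets --target-group-arn <web-tg-arn> --targets Id=<instance-2-id>
$ aws elbv2 create-listener --load-balancer-arn <web-lb-arn> --protocol HTTP --port 80 \
      --default-actions Type=forward,TargetGroupArn=<web-tg-arn>

$ aws elbv2 create-load-balancer --name ssh-lb --type network --subnets subnet-aaaa subnet-bbbb
$ aws elbv2 create-target-group --name ssh-tg --protocol TCP --port 22 --vpc-id vpc-1234
$ aws elbv2 register-targets --target-group-arn <ssh-tg-arn> --targets Id=<instance-1-id>
$ aws elbv2 create-listener --load-balancer-arn <ssh-lb-arn> --protocol TCP --port 22 \
      --default-actions Type=forward,TargetGroupArn=<ssh-tg-arn>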
I have the cert applied on the load balancer, and HTTPS works fine, but I am wondering if I need to add the certs to nginx itself as well, which seems like overkill, but I am not sure.
No. One of the benefits of using a load balancer is that you can hide your EC2 instances from the public internet, making them less exposed and more secure.
Therefore, it is normal practice to use plain HTTP between your EC2 instances and the load balancer, since they are in the same AWS region (a safe, trusted internal environment).
By doing this you also improve performance, because the HTTPS/TLS overhead is incurred only once, at the load balancer, rather than twice. Your EC2 instances can spend their CPU on application logic instead.
The load balancer is also highly available and can be configured to work with CloudFront and WAF for security and anti-DDoS controls.
No, you don't have to do this. The reason is that your load balancer (LB) is going to terminate the HTTPS connection, decrypt it using an SSL certificate you've deployed on it, and then forward a plain HTTP connection to your EC2 instance(s).
Therefore, typical connections for LB with HTTPS have the following form:
client ---(HTTPS)---> LB ---(HTTP)---> EC2 instance
This configuration suits most use cases, as the HTTP traffic happens within the AWS private network, not over the internet.
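As a minimal sketch of that setup with the AWS CLI (the ARNs and IDs below are placeholders): the certificate is attached to the HTTPS :443 listener on the load balancer, while the target group speaks plain HTTP to the instances.

$ aws elbv2 create-target-group --name app-http --protocol HTTP --port 80 --vpc-id vpc-1234
$ aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTPS --port 443 \
      --certificates CertificateArn=<acm-certificate-arn> \
      --default-actions Type=forward,TargetGroupArn=<app-http-tg-arn>

With this in place, nginx on the instances only needs to listen on plain HTTP; no certificate is required there.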
I have set up an AWS Network Load Balancer with no problems, and I have a number of different services running on their assigned ports. This all works perfectly.
Then I was asked to host a number of different Node apps on their own ports, each accessed via its own domain. After I realised I couldn't get this to work correctly on the NLB, I looked at the Application Load Balancer and its host-based routing rules.
app1.example.com
app2.example.com
What I did
I set up the Application Load Balancer listener on HTTPS :443 and a host-based routing rule that forwards app1.example.com traffic to a target group, which sends the traffic to the correct instance on port 3000. The security group is also set up with port 3000 open.
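For reference, a host-based rule like the one described can be expressed with the AWS CLI roughly as follows (ARNs and IDs are placeholders, not taken from the setup above):

$ aws elbv2 create-target-group --name app1-tg --protocol HTTP --port 3000 --vpc-id vpc-1234
$ aws elbv2 create-rule --listener-arn <https-443-listener-arn> --priority 10 \
      --conditions Field=host-header,Values=app1.example.com \
      --actions Type=forward,TargetGroupArn=<app1-tg-arn>

A second rule with a different priority and host header would cover app2.example.com.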
So I thought... all I had to do was add the load balancer IP to the subdomain A records at the external domain registrar... but I can't find the IP anywhere! I'm missing something fundamental here and the AWS docs are killing me.
The above steps aren't too different from setting up a Network Load Balancer without the Host-Based Routing rules.
Could anyone point out where I can find the ALB IP, or where I'm going off track?
The IP might change, so it's better to use another option such as a CNAME, or an A record + alias (the latter might save you some money, if I remember correctly).
(Route 53 setup)
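If the domain is hosted in Route 53, an alias A record pointing at the ALB's DNS name does the job; a rough sketch (zone IDs and names below are placeholders):

$ aws route53 change-resource-record-sets --hosted-zone-id <your-hosted-zone-id> --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app1.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<alb-canonical-hosted-zone-id>",
          "DNSName": "my-alb-123456789.eu-west-1.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'

At an external registrar, the usual workaround is a CNAME on the subdomain (or an ANAME/ALIAS-capable DNS provider) pointing at the ALB's DNS name, since the ALB does not expose a fixed IP.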
I have a website running on Apache httpd. The two web nodes are added under a Network Load Balancer. If I open port 80 on the two web nodes only to the default security group or the VPC CIDR range, the website does not work. However, if I open port 80 fully (0.0.0.0/0), it works.
This was not the case with the Classic ELB. Am I doing something wrong here, or is this the default behaviour of the Network Load Balancer?
Due to the nature of the Network Load Balancer, traffic is passed directly to the target instances, retaining its source IP address. Thus to the target instances (and their Security Groups) the traffic looks like it is coming straight from the Internet. So you would need to configure the Security Group of the target instances to allow public access.
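For example (the group ID is a placeholder), the target instances' security group would need a rule like:

$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 80 --cidr 0.0.0.0/0

because, behind an NLB, the packets arrive with the original client's source IP rather than the load balancer's.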
I've seen lots of people complain about this, and I wouldn't be surprised if Amazon updated NLBs in the future to work like the other load balancers, but it may not be possible due to the way NLBs are designed.
If you are using port 80 then I assume this is for HTTP? Why not use an Application Load Balancer instead and get the Security Group feature you are looking for as well as things like SSL termination?
How can I assign a static IP address to an ELB? It seems like I cannot.
Some articles online suggest creating a Route 53 record, but this requires changing the domain's CNAME, which also redirects email traffic. I just want to change the A record, not the CNAME.
Some articles also mention that I can use an EC2 instance as a reverse proxy. But will a single proxy be able to handle a lot of traffic?
Any solution for this?
AWS' Elastic Load Balancer is actually elastic on two levels as described here:
http://shlomoswidler.com/2009/07/elastic-in-elastic-load-balancing-elb.html
The first level is the load balancer itself. In order to make sure that ELB can scale to whatever volume you have and burst to whatever volume you suddenly encounter, AWS assigns a 'static' DNS hostname (e.g. MyDomainELB-918273645.us-east-1.elb.amazonaws.com). That hostname points to multiple IP addresses. You can see that (from a command line) by running
$ host MyDomainELB-918273645.us-east-1.elb.amazonaws.com
MyDomainELB-918273645.us-east-1.elb.amazonaws.com has address 172.31.7.2
MyDomainELB-918273645.us-east-1.elb.amazonaws.com has address 172.31.11.33
The second form of elasticity within the ELB is then the ELB directing each query to one of your EC2 instances in the pool.
So, you can see that trying to assign a static IP address to the load balancer would be self-defeating.
Using an EC2 instance as a reverse proxy would also seem self-defeating, as you would then create a bottleneck before even getting to the ELB. You might as well just run your own load balancer.
The recommended solution (which you've pointed out) is to create a CNAME that points to the ELB hostname (which won't change).
i.e. my-app.mycompany.com -> MyDomainELB-918273645.us-east-1.elb.amazonaws.com
This would allow you to integrate your scalable application, behind the ELB within your domain.
I'm not sure I fully understand why you cannot create a CNAME in your DNS, or what that has to do with directing email traffic; can you explain?
A new feature in AWS (I believe it was announced at Re:Invent 2017) allows for static IPs with Network Load Balancers (NLB). NLB can only handle layer 4 (TCP) and not HTTP specifics (layer 7).
You can assign one Elastic IP address per availability zone.
For details see the AWS blog post or the NLB documentation.
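A minimal sketch of creating such an NLB with the AWS CLI, assuming pre-allocated Elastic IPs and placeholder subnet IDs (one Elastic IP is mapped per subnet/AZ):

$ aws ec2 allocate-address --domain vpc        # once per AZ; note each AllocationId
$ aws elbv2 create-load-balancer --name static-ip-nlb --type network \
      --subnet-mappings SubnetId=subnet-aaaa,AllocationId=eipalloc-1111 \
                        SubnetId=subnet-bbbb,AllocationId=eipalloc-2222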
The "Classic Load Balancer" and "Application Load Balancer" do not support static IPs. If you need a feature only provided by those, you have to fall back to the CNAME solution described above.
A blog post was recently published by AWS support on this topic, leveraging an NLB to provide static IPs for Classic and Application Load Balancers: https://aws.amazon.com/blogs/networking-and-content-delivery/using-static-ip-addresses-for-application-load-balancers/
Summary of the solution as described by the post:
We end up with a TCP listener on a NLB that accepts traffic and forwards it to an internal ALB. The ALB terminates TLS, examines HTTP headers, and routes requests based on your configured rules to target groups with your instances, servers, or containers. The AWS Lambda function keeps everything in sync by watching the ALB for IP address changes and updating the NLB target group. In the end we’ll have a few static IP addresses that are easy for whitelisting, and we won’t lose any of the benefits of ALB. Note that we will be sending all of the traffic through two load balancers.
I found setting up AWS Global Accelerator very straightforward and simple. It created 2 static IP addresses and a static DNS name pointing to my Application Load Balancer.
Configuring Global Accelerator
Set the listeners to TCP ports 80 and 443
Select your load balancer endpoint (AWS Global Accelerator Configuration)
Add a CNAME record for your DNS pointing to the static DNS name it created (mywebsite.com > globalacceleratorDNS.com). If any client needs to whitelist, give them the 2 static IPs it created (see the sketch below).
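A rough CLI sketch of those steps (names and ARNs are placeholders; note that the Global Accelerator API is called in us-west-2 regardless of where the ALB lives):

$ aws globalaccelerator create-accelerator --name my-accelerator --region us-west-2
$ aws globalaccelerator create-listener --accelerator-arn <accelerator-arn> \
      --protocol TCP --port-ranges FromPort=80,ToPort=80 FromPort=443,ToPort=443 --region us-west-2
$ aws globalaccelerator create-endpoint-group --listener-arn <ga-listener-arn> \
      --endpoint-group-region eu-west-1 \
      --endpoint-configurations EndpointId=<alb-arn>,Weight=128 --region us-west-2

The two static IPs and the accelerator's DNS name are returned by the create-accelerator call.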
Pricing is $18 per month plus a few pennies per GB of data transfer.
I'm pretty sure it's cheaper than the NLB + NAT Gateway + Elastic IP setup.
https://docs.aws.amazon.com/global-accelerator/latest/dg/about-accelerators.html
For low traffic, one option is to set up an EC2 instance running Nginx as a forwarding proxy.
You can then use the EC2 instance's static IP address and have it forward traffic to the ALB by resolving the ALB's DNS name.
It's a kind of a hack, but using a Global Accelerator or an NLB seems to me like a hack as well :-)
Unlike the Network Load Balancer, the Application Load Balancer (ALB) does not support Elastic IPs, but that's not the worst part. If you use Route 53 together with the ALB, the DNS record automatically gets a TTL of 60 seconds. This appears to cause problems for our institutional (mainly government) customers running older Windows DNS servers. They just can't keep up with the ALB changing its public-facing IPs on such short notice. Older DNS infrastructure either does not respect or cannot handle such an aggressive TTL.
While I don't like it, AWS recommends to put a Network Load Balancer in front of the Application Load Balancer, per here: https://aws.amazon.com/blogs/networking-and-content-delivery/using-static-ip-addresses-for-application-load-balancers/
I have a set-up running on the Amazon cloud with a couple of EC2 instances behind a load balancer.
It is important that the site has a unique (static) IP or set of IPs, as I'm plugging in third-party APIs which only accept requests made from IPs that have been added to their whitelist.
So basically, unless we can give these third parties a static IP or range of IPs that requests from the site will always come from, we would be unable to make any calls to them.
Does anyone know how to achieve this? I know that Elastic IPs are not compatible with load balancers.
If I were to look up the IP of the load balancer's DNS name (e.g. dualstack.awseb-BAMobile-ENV-xxxxxxxxx.eu-west-1.elb.amazonaws.com resolves to 200.200.200.200), would that IP be static?
Any help/advice is greatly appreciated, guys.
The IP addresses of your load balancer are not static. In any event, your incoming load balancer IP wouldn't be used for outgoing connections.
You could assign Elastic IPs to the actual instances behind the load balancer, which would then be used for outgoing requests. You get five Elastic IPs per region by default, and I believe you can apply for more if you need them.
Additionally, if you are using a VPC and your instances are in a private subnet, then they will only be able to access the internet via the NAT instance(s) you set up, and you can of course assign an Elastic IP to the NAT instances.
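As a sketch (instance and allocation IDs are placeholders), giving an instance a fixed outbound address is just an Elastic IP association:

$ aws ec2 allocate-address --domain vpc
$ aws ec2 associate-address --instance-id i-0abc1234 --allocation-id eipalloc-0abc1234

That Elastic IP (or the one on the NAT instance) is what you would give the third parties to whitelist.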
This is an old question, but things have changed now.
Now you can create a Network Load Balancer to get an LB with a static IP.
From https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html:
Support for static IP addresses for the load balancer. You can also assign one Elastic IP address per subnet enabled for the load balancer.
https://aws.amazon.com/blogs/aws/new-network-load-balancer-effortless-scaling-to-millions-of-requests-per-second/
You can attach an additional ENI (Elastic Network Interface) to an instance in your VPC. This way the ELB (Elastic Load Balancer) routes the incoming internet requests to the web server, and the additional ENI is used for your third-party (or internal) requests (a management network).
You can see more details about this in the VPC documentation.
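A sketch of attaching such an interface with the AWS CLI (IDs are placeholders); the ENI becomes eth1 on the instance and carries the management/outbound traffic:

$ aws ec2 create-network-interface --subnet-id subnet-aaaa --groups sg-mgmt \
      --description "management interface"
$ aws ec2 attach-network-interface --network-interface-id eni-0abc1234 \
      --instance-id i-0abc1234 --device-index 1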
Really, the only way I am aware of doing this is by setting up your instances within a VPC and having dedicated NAT instances through which all outbound traffic is routed.
Here is a link to the AWS documentation on how to set up NAT instances:
http://docs.amazonwebservices.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html
You CAN attach an Elastic IP to the instances, BUT NOT to the ELB (which is what the client sees).
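For completeness, a NAT instance sketch (IDs are placeholders): source/destination checking has to be disabled on the NAT instance, and the private subnet's route table sends internet-bound traffic through it; the Elastic IP attached to that instance is then the fixed outbound address.

$ aws ec2 modify-instance-attribute --instance-id i-nat01234 --no-source-dest-check
$ aws ec2 create-route --route-table-id rtb-private01 \
      --destination-cidr-block 0.0.0.0/0 --instance-id i-nat01234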
You could use a full reverse-proxy Layer 7 load balancer like HAProxy, or a commercial implementation like Loadbalancer.org or Riverbed (Zeus).
They are both available in the AWS Marketplace.
Your outbound requests to your third-party APIs will NOT go out via the ELB/ALB; that is for incoming connections. If you need an inbound static IP, you'll probably need to forgo the load balancer (or figure out how to implement Anshu's suggestion to attach an Elastic IP to the load balancers; the doc is light on details). Update: I found some documentation stating that ALBs use static addresses (and I just tried binding an Elastic IP to one to be sure, and that failed).
If you're talking about outbound connections, see below:
If your server is deployed in a public subnet, you can attach an Elastic IP to that host. Outbound communications will go out over that address.
If your server is deployed in a private subnet, there's a NAT gateway attached to it. All outbound traffic from your private subnet will go out over that interface.
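Today the managed equivalent of that NAT instance is a NAT gateway; a sketch with placeholder IDs, where the gateway's Elastic IP becomes the source address for all outbound traffic from the private subnet:

$ aws ec2 create-nat-gateway --subnet-id subnet-public01 --allocation-id eipalloc-0abc1234
$ aws ec2 create-route --route-table-id rtb-private01 \
      --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0abc1234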
You could use, as already mentioned, the Loadbalancer.org appliance in AWS. It would replace the AWS NAT instance and give greater functionality, covering both Layer 4 and Layer 7, along with SSL termination and a WAF.
Best of all, you get free support during your 30-day trial in AWS to help you get up and running.
Yes, I am biased, as I work for Loadbalancer.org; however, I would say nothing ventured, nothing gained.
You can use a DNS service like DNS Made Easy that allows "ANAME" records. These act like an A record but can be pointed at an FQDN or IP, so in this case you can point it at the ELB DNS name.
Dave