AWS Network Load Balancer and HTTPS endpoint [closed] - amazon-web-services

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 1 year ago.
I'm learning about Network Load Balancers in AWS and I'm stuck trying to set up secure (TLS) connections.
I created:
- the load balancer, of type network
- one target group for my application (port 3000 / TCP)
- one listener on port 443 with protocol TLS, whose default action forwards to the previous target group; I also added the certificate for my domain
- an alias to the load balancer in Route 53
What I'd expect is that if I open https://www.this-is-an-example.com:443/home, it proxies to my application (running on port 3000) while keeping a secure connection, i.e. using HTTPS. But it doesn't work.
When I do curl https://www.this-is-an-example.com:443/home I receive the following response: curl: (52) Empty reply from server
If I try Postman, I get: Error: socket hang up
I understand that Network Load Balancers don't care about HTTPS. Still, how can I use HTTPS with my domain, hit the listener, and keep the connection encrypted from the client to the load balancer?

The problem is your security group. You cannot associate a security group with a network load balancer, and since NLBs operate at layer 4, you have to make sure your target instances have a security group that allows access from the NLB/client to the target. The details depend on whether you use the instance target type or the IP target type. With the instance target type, the source is the actual client, and the port should be the target port (3000), not 443. With the IP target type, the source is the NLB's IP, and the port is again your target port. You can get more detailed information here
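To illustrate the instance-target-type case, here is a minimal CloudFormation sketch of such a security group. The resource names, VPC reference, and CIDR are placeholders, not part of the original question:

```yaml
# Sketch: security group for NLB targets using the *instance* target type.
# With instance targets, the NLB preserves the client IP, so the source
# here is the client range (0.0.0.0/0 for a public service), and the port
# is the target port (3000), not the listener port (443).
AppTargetSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow NLB-forwarded traffic to the app port
    VpcId: !Ref MyVpc            # placeholder
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 3000
        ToPort: 3000
        CidrIp: 0.0.0.0/0        # client range; tighten as needed
```

With the IP target type, the `CidrIp` would instead cover the NLB's private IPs.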


Application Load Balancer bypass SSL Termination [closed]

Closed. This question is not about programming or software development. It is not currently accepting answers.
Closed 2 months ago.
I have an nginx that is configured for SSL termination and works as expected for my application.
For disaster-recovery purposes I want to set up an AWS Application Load Balancer in front of my HTTPS nginx. The ALB will be exposed through a Network Load Balancer that will handle the region switch.
The issue is that when I call my application, the AWS Application Load Balancer does the SSL termination and the client certificates never reach nginx:
400 No required SSL certificate was sent
400 Bad request
Since I would like to keep SSL termination at the nginx level, can I configure the AWS Application Load Balancer listener to forward the certificates to nginx as well?
ALBs are layer 7 load balancers that only support HTTP/HTTPS listeners. SSL passthrough has to happen before any layer 7 action, so it's not possible to configure an ALB for SSL passthrough. However, you should be able to do this with a Network Load Balancer, using TCP listeners.
This AWS blog outlines a similar setup, but for ECS - https://aws.amazon.com/blogs/compute/maintaining-transport-layer-security-all-the-way-to-your-container-using-the-network-load-balancer-with-amazon-ecs/
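As a sketch of that NLB approach: a TCP (not TLS) listener forwards raw bytes, so the TLS handshake — including any client certificate — terminates at nginx rather than at AWS. The resource references below are placeholders:

```yaml
# Sketch: NLB TCP listener for SSL passthrough to nginx.
# Because the protocol is TCP rather than TLS, the NLB never decrypts
# the traffic; nginx keeps doing the SSL termination itself.
PassthroughListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref MyNetworkLoadBalancer   # placeholder
    Port: 443
    Protocol: TCP                                 # TCP, not TLS
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref NginxTargetGroup     # placeholder
```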

AWS Network Load Balancer Redirect Port [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 1 year ago.
I am trying to deploy a Kafka cluster on AWS (using CloudFormation). My advertised listeners are (using a private DNS namespace to resolve the internal IP):
INTERNAL://kafka-${id}.local:9092
EXTERNAL://<public-ip>:9092
However, Kafka complains that two listeners cannot share the same port. The problem is I'm using a load balancer for external traffic, and I'm not sure if there's a way to redirect that traffic to a different port.
My desired configuration would be:
INTERNAL://kafka-${id}:9092
EXTERNAL://<public-ip>:19092
But the load balancer takes the incoming request and passes it to the internal IP at the same port. Ultimately I'd like to have the load balancer take connections on port 19092 and pass them to 9092, but I don't see any way to configure that.
If there are any recommendations on alternative ways to do this, I'm happy to hear them. Currently, I need services that are on other VPCs to be able to communicate with these brokers, and I'd prefer to use a load balancer to handle these requests.
Based on the comments.
The NLB does not support redirect rules in its listeners; it only has forward rules. But a listener can use a different port than its targets, which are defined by a target group. So a possible setup could be:
Client ---> Listener on port 19092 ---> NLB ---> Target group with port 9092
@Marcin answered this for me. See comments for details.
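The listener/target-group split above can be sketched in CloudFormation like this (resource names and the VPC reference are placeholders):

```yaml
# Sketch: NLB listener on 19092 forwarding to a target group on 9092.
ExternalKafkaListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref KafkaNlb            # placeholder
    Port: 19092                               # port external clients connect to
    Protocol: TCP
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref KafkaTargetGroup

KafkaTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    VpcId: !Ref MyVpc                         # placeholder
    Port: 9092                                # port the brokers listen on
    Protocol: TCP
    TargetType: instance
```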

Amazon EC2 Load Balancer [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 days ago.
I want to know the major differences between the Amazon Application Load Balancer (ALB) and the Classic Load Balancer (CLB). What I've found so far only gives examples, such as that a Classic Load Balancer serves the same content everywhere, while an Application Load Balancer can serve different content, grouped by target group.
ALB has some features (e.g., host-based routing, path-based routing), but my question is why we would use an ALB instead of a Classic Load Balancer. Please provide use cases for both.
ALB
The Application Load Balancer (Elastic Load Balancer v2) lets you direct traffic to EC2 instances based on more complex criteria/rules, specifically URL paths. You can have users trying to access "/signup" go to one group of instances and users trying to access "/homepage" go to another. In comparison to the classic ELB, an ALB inspects traffic at the application level (OSI layer 7). It's at this level that URL paths can be parsed, for example.
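Such path-based routing is configured with listener rules. A minimal CloudFormation sketch (listener and target-group references are placeholders):

```yaml
# Sketch: ALB listener rule sending /signup* requests to a dedicated
# target group; lower Priority numbers are evaluated first.
SignupRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref HttpsListener              # placeholder
    Priority: 10
    Conditions:
      - Field: path-pattern
        PathPatternConfig:
          Values:
            - /signup*
    Actions:
      - Type: forward
        TargetGroupArn: !Ref SignupTargetGroup   # placeholder
```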
ELB/Classic
The Classic Load Balancer routes traffic uniformly using information at the TCP level (OSI layer 4, transport). It either sends requests to each instance "round-robin" style or uses sticky sessions to send each user/client to the same instance they initially landed on.
Why ALB over ELB?
You could use an ALB if you decided to architect your system in such a way that each path had its own set of instances or its own service. So /signup, /login, /info, etc. all go through one load balancer that is pinned to your domain name https://mysite.com, but a different EC2 instance services each one. ALBs only support HTTP/HTTPS; if your system uses another protocol you would have to use an ELB/Classic Load Balancer. WebSockets and HTTP/2 are currently only supported on ALB.
Another reason you might choose ALB over ELB is that there are some newer features that have not yet been added to ELB, or may never be. As Michael points out below, AWS WAF is not supported on the Classic Load Balancer but is on ALB. I expand on other features further down.
Why ELB over ALB?
Architecturally speaking, it's much simpler to send every request to the same set of instances and then, internally within your application, delegate requests for certain paths to certain functions/classes/methods, etc. This is essentially the monolith design most applications start out as. For most workloads, dedicating traffic to certain instance groups (the ALB approach) would be a waste of EC2 power: you would have some instances doing lots of work and others barely used.
Similar to the ALB, there are features of the classic ELB that have not yet arrived on ALB. I expand on that below.
Update - More on Feature Differences
From a product perspective they differ in other ways that aren't really related to how they operate and more about some features not being present yet.
HTTP to HTTPS Redirection - For example, in ALB each target group (the group of instances you're assigning a specific route) can currently handle only one protocol, so implementing HTTP to HTTPS redirects requires a minimum of two instances. With ELB you can handle HTTP to HTTPS redirection on a single instance. I imagine ALB will have this feature soon.
https://forums.aws.amazon.com/thread.jspa?threadID=247546
Multiple SSL Certificates on One Load Balancer - With an ALB you can assign multiple SSL certificates for different domains. This is not possible on a Classic Load Balancer, though the feature has been requested. For a Classic Load Balancer you can use a wildcard certificate, but that is not the same thing. An ALB uses SNI (Server Name Indication) to make this possible, whereas SNI has not been added to the classic ELB feature set.
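Attaching an extra certificate to an existing ALB HTTPS listener can be sketched like this in CloudFormation (the listener and certificate ARN references are placeholders):

```yaml
# Sketch: additional certificate (served via SNI) on an ALB HTTPS
# listener; the listener's default certificate is configured separately.
ExtraCertificate:
  Type: AWS::ElasticLoadBalancingV2::ListenerCertificate
  Properties:
    ListenerArn: !Ref HttpsListener                  # placeholder
    Certificates:
      - CertificateArn: !Ref SecondDomainCertArn     # placeholder ACM ARN
```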

How to connect two Ec2 Instance so that they can Communicate with each other [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
The community reviewed whether to reopen this question 15 days ago and left it closed:
I want to connect two EC2 instances so that they can communicate with each other.
One will have WordPress installed, and the second will have a database configured (e.g. MySQL/MariaDB).
My problem is how to connect the two EC2 instances with each other using private IPs.
To keep it very simple: for any two programs to communicate with each other over a network, you need two things:
IP Address
Port Number
Consider you have two EC2 instances. Let's name them
Instance1
Instance2
On each of these instances you will have programs between which you want the communication to take place. Each of these programs listens on a PORT of its instance; for example, Tomcat runs on port 8080 by default. Let's name our programs:
Program1 (program running on Instance1), running on port 1000
Program2 (program running on Instance2), running on port 2000
Let us first talk about Program1 running on port 1000 of Instance1.
Log onto AWS Console
Click on EC2 Service
In the left panel, click on Security Groups
Click on the button Create Security Group
An overlay will open.
Enter a name and description of your choosing
Click on the tab Inbound and click on Add Rule
Here, you are adding which port should accept connections.
Set the following details:
- Type: Custom TCP Rule
- Protocol: TCP
- Port Range: 1000 [or any other port on which your program runs]
- Source: the external IP from which Program1 can be accessed. It can be "Everywhere", "My IP" or a "Custom IP"
Click on the tab Outbound and click on Add Rule
Repeat step 9 if you want outbound communication.
Repeat these steps on Instance2 and you will be good to go.
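For the WordPress-plus-database case in the question, the same idea can be sketched in CloudFormation. Referencing the source security group instead of an IP lets the instances talk over their private addresses; the group names below are placeholders:

```yaml
# Sketch: allow the WordPress instance's security group to reach MySQL
# (port 3306) on the database instance, by referencing the source group
# rather than a fixed IP address.
DbIngressFromWordpress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref DatabaseSecurityGroup                 # placeholder
    IpProtocol: tcp
    FromPort: 3306
    ToPort: 3306
    SourceSecurityGroupId: !Ref WordpressSecurityGroup  # placeholder
```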
Well, you can launch the instances in an Amazon VPC, then in front of your app server you can place a load balancer for traffic. The VPC must have an internet gateway attached to it as well.
To access the whole VPC, you can create a jumpbox/bastion host.
Based on your "EC2 Scenario" image, you can add your "application server" and "backend server" under their respective load balancers and have them communicate with each other using the LB name/endpoint URL. This ensures that even if the underlying EC2 instance is shut down or replaced, the communication won't break.

NGINX: using multiple node as load balancer [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 7 years ago.
I have an application deployed on Tomcat servers on machines A, B, C and D.
I want to load balance using nginx with two load balancer nodes, LB1 and LB2.
All the configuration I've found uses only one node as the load balancer. Is this possible using nginx?
We have a critical application running on these servers that requires zero downtime. If we go with one LB and the LB itself fails for some reason, there will be an issue.
We initially set this up using an AWS load balancer, but recently we started using WebSockets, and they do not work correctly behind the EC2 load balancer.
If someone has a better option, please suggest it.
Use Amazon ELB and forward TCP:80/443 instead of HTTP:80/443.
The only downside of balancing TCP is that your app servers have to serve the SSL certificates themselves if you use HTTPS.
If you want to run the load balancer yourself without a single point of failure, you can use HAProxy to fall back to a standby machine when the primary balancer fails.
http://www.loadbalancer.org/blog/transparent-load-balancing-with-haproxy-on-amazon-ec2
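If you do run nginx yourself as in the question, the balancing itself is a plain upstream block. A minimal sketch, assuming the Tomcat machines sit at placeholder private IPs; the same config would run on both LB1 and LB2, with DNS failover or a keepalived-managed floating IP in front:

```nginx
# Sketch: nginx balancing across the four Tomcat machines A-D.
upstream tomcat_backend {
    server 10.0.0.11:8080;   # machine A (placeholder IPs)
    server 10.0.0.12:8080;   # machine B
    server 10.0.0.13:8080;   # machine C
    server 10.0.0.14:8080;   # machine D
}

server {
    listen 80;
    location / {
        proxy_pass http://tomcat_backend;
        # Upgrade headers so WebSocket connections are proxied correctly
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```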