Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 5 years ago.
I have some EC2 instances running an Express application listening on port 3000. An Elastic Load Balancer forwards requests from its port 80 to these EC2 instances. Every time I bring down one of the Express servers running on an EC2 instance and try to bring it back up, I get an "address in use" error for port 3000. I cannot find any process actually using this port (lsof, netstat, etc.). Is the ELB still connected on port 3000? If so, what is the workflow for restarting applications behind an ELB?
Take a look at "Connection Draining" and either disable it or reduce the time.
It sounds like the process isn't exiting until all connections have closed.
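As a hedged sketch of that suggestion (the load balancer name is a placeholder), connection draining on a Classic ELB can be shortened from the AWS CLI so lingering connections release the port sooner:

```shell
# Sketch only: "my-elb" is a placeholder name.
# Shorten connection draining from the default 300 s to 30 s,
# so in-flight connections are cut off sooner when an instance
# is deregistered or becomes unhealthy:
aws elb modify-load-balancer-attributes \
  --load-balancer-name my-elb \
  --load-balancer-attributes '{"ConnectionDraining":{"Enabled":true,"Timeout":30}}'
```

Disabling draining entirely (`"Enabled":false`) closes connections immediately, at the cost of dropping in-flight requests.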
I am trying to deploy a Kafka cluster on AWS (using CloudFormation). My advertised listeners are (using a private DNS namespace to resolve the internal IP):
INTERNAL://kafka-${id}.local:9092
EXTERNAL://<public-ip>:9092
However, Kafka complains that two listeners cannot share the same port. The problem is I'm using a load balancer for external traffic, and I'm not sure if there's a way to redirect that traffic to a different port.
My desired configuration would be:
INTERNAL://kafka-${id}:9092
EXTERNAL://<public-ip>:19092
But the load balancer takes the incoming request and passes it to the internal IP at the same port. Ultimately I'd like to have the load balancer take connections on port 19092 and pass them to 9092, but I don't see any way to configure that.
If there are any recommendations on alternative ways to do this, I'm happy to hear them. Currently, I need services that are on other VPCs to be able to communicate with these brokers, and I'd prefer to use a load balancer to handle these requests.
Based on the comments.
The NLB does not support redirection rules in its listeners; it only has forwarding rules. However, a listener can use a different port than the one defined on its target group's targets. So a possible setup could be:
Client ---> Listener on port 19092 ---> NLB ---> Target group with port 9092
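As a hedged sketch (the VPC ID, ARNs, and names below are placeholders), that setup maps onto the AWS CLI roughly like this:

```shell
# Sketch only: VPC ID and ARNs are placeholders.
# The target group points at the brokers on their real Kafka port:
aws elbv2 create-target-group \
  --name kafka-brokers \
  --protocol TCP --port 9092 \
  --vpc-id vpc-0123456789abcdef0

# The NLB listener accepts external connections on 19092 and
# forwards them to the 9092 target group:
aws elbv2 create-listener \
  --load-balancer-arn <nlb-arn> \
  --protocol TCP --port 19092 \
  --default-actions Type=forward,TargetGroupArn=<tg-arn>
```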
@Marcin answered this for me. See comments for details.
I accidentally unchecked the Public profile for port 3389 in Windows Firewall on my RDP server. Now I can't connect over RDP, even though I've allowed public access to the port in the AWS security group.
Can you please let me know how to re-enable the port in Windows Firewall and connect over RDP again?
I have solved my problem and can now connect over RDP.
Steps to resolve:
1. Log in to AWS and go to SSM (Systems Manager).
2. Use Run Command.
3. Disable the firewall (because I had disabled my public port in the firewall).
4. Connect over RDP (it works without the firewall; firewall status is False).
5. Once logged in over RDP, re-enable the public port, then turn the firewall back on.
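Step 3 can be done from SSM Run Command with a PowerShell one-liner. A hedged sketch (the instance ID is a placeholder, and note this temporarily turns the firewall off entirely):

```shell
# Sketch only: the instance ID is a placeholder.
# Disable all Windows Firewall profiles via SSM Run Command:
aws ssm send-command \
  --instance-ids i-0123456789abcdef0 \
  --document-name "AWS-RunPowerShellScript" \
  --parameters 'commands=["Set-NetFirewallProfile -Profile Domain,Public,Private -Enabled False"]'
```

After reconnecting over RDP, re-enable the RDP rule and then the firewall from PowerShell, e.g. `Enable-NetFirewallRule -DisplayGroup "Remote Desktop"` followed by `Set-NetFirewallProfile -All -Enabled True`.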
On my backend instance, a service is running that has to accept multiple connections per second, but the TCP load balancer does not seem to allow multiple connections at a time.
Please help me raise the LB connection limit to the maximum.
Where did you get the information that it only allows one connection at a time?
The Network Load Balancer (also known as the TCP load balancer) lets you balance the load of your systems based on incoming IP protocol data, such as address, port, and protocol type. As long as your instance's service has the resources to handle the requests, the load balancer will forward the traffic.
You can read more about it in this official document from Google.
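One quick, hedged way to verify concurrency yourself (the hostname below is a placeholder) is to open many connections at once and tally the results:

```shell
# Sketch only: replace lb.example.com with your LB's address.
# Open 50 concurrent HTTP connections and count the status codes:
seq 50 | xargs -P 50 -I{} \
  curl -s -o /dev/null -w '%{http_code}\n' http://lb.example.com/ \
  | sort | uniq -c
```

If all 50 requests report a success code, the balancer is clearly accepting more than one connection at a time, and any bottleneck is in the backend service itself.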
I have an application deployed on Tomcat servers on machines A, B, C, and D.
I want to load-balance them with Nginx using two load balancer nodes, LB1 and LB2.
All the configurations I have found use only one node as the load balancer.
Is this possible with Nginx?
We have a critical application running that requires zero downtime. If we go with one LB and the LB itself fails for some reason, there will be an outage.
We initially set this up with an AWS load balancer, but we recently started using WebSockets, and WebSockets do not work correctly through the EC2 load balancer.
If someone has a better option, please suggest it.
Use Amazon ELB and forward TCP:80/443 instead of HTTP:80/443.
The only downside of balancing at the TCP level is that your app servers have to serve the SSL certificates themselves if you use HTTPS.
If you want to run the load balancer yourself without a single point of failure, you can use HAProxy with a standby machine that takes over when the primary balancer fails.
http://www.loadbalancer.org/blog/transparent-load-balancing-with-haproxy-on-amazon-ec2
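A hedged sketch of the ELB TCP-listener suggestion via the AWS CLI (the name and availability zones are placeholders); because the listeners are plain TCP, WebSocket upgrade traffic passes through unmodified:

```shell
# Sketch only: name and availability zones are placeholders.
# Classic ELB forwarding raw TCP on 80 and 443:
aws elb create-load-balancer \
  --load-balancer-name my-tcp-elb \
  --listeners "Protocol=TCP,LoadBalancerPort=80,InstancePort=80" \
              "Protocol=TCP,LoadBalancerPort=443,InstancePort=443" \
  --availability-zones us-east-1a us-east-1b
```

With TCP:443, TLS terminates on the Tomcat nodes themselves, as the answer points out.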
I have recently signed up for the Amazon Web Service Free Tier and started an EC2 instance. I installed Nginx on this server and started the service. The problem is that whenever I try to navigate to the public DNS provided by Amazon's EC2 Management Console, I receive "This page can't be displayed".
I have added a new security group within the EC2 Management Console providing access to ports 80, 22, and 443 (to 0.0.0.0/0).
I have verified nginx is running by
ps -ef | grep nginx
and it returned
I verified it is listening on port 80 by running
netstat -pant | grep :80
and it returned
I verified the default site is enabled in the .conf file and contains the "Welcome to Nginx" message.
Any ideas what could be blocking the site?
Turns out I did not need to create a new security group. I had to edit an existing one, opening the HTTP port. I'm not sure what the security group I created did, but it's gone now.
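For the record, opening HTTP on the group the instance actually uses can also be done from the CLI; a hedged sketch with a placeholder group ID:

```shell
# Sketch only: the group ID is a placeholder; use the group
# that is actually attached to the instance.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
```

A newly created security group has no effect until it is associated with the instance's network interface, which is why editing the existing group worked while creating a new one did not.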