I have created a Rust TCP server framework.
(https://github.com/HasinZaman/SingleThreadServer)
The framework works without issue on my local machine, and over localhost and the private network on the AWS Lightsail instance.
The TCP listener on the instance currently binds to "0.0.0.0:80".
However, I am unable to connect to the web server using the public IP address on port 80.
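A quick way to separate an application problem from a firewall problem is to probe the port from outside the instance; a timeout while the server is running usually points at the Lightsail firewall (the instance's Networking tab, which is separate from EC2 security groups) or an OS-level firewall rather than at the code. A sketch, using a placeholder public IP:

```shell
# On the instance: confirm the listener is actually bound on all interfaces.
ss -ltn | grep ':80'

# From a machine outside AWS (203.0.113.10 is a placeholder public IP):
# -z tests the connection without sending data, -w 5 is a 5-second timeout.
nc -vz -w 5 203.0.113.10 80
```

If the first command shows the listener on 0.0.0.0:80 but the second times out, check that TCP port 80 is listed under the instance's Networking tab in the Lightsail console.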
I have an HTTP server running on my local computer, and I would like to be able to connect to it remotely. Because my local computer can't be reached directly, I would like to open a connection from my computer to an AWS instance, connect to a port on that instance, and have the incoming connection forwarded back to my computer over that connection.
Edit: I am able to curl the website from the AWS instance, so the port forwarding works, but I can't connect to that port from outside.
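That symptom (reachable from the instance itself but not from outside) is exactly what happens when sshd binds the remote-forwarded port to the loopback interface, which it does by default. A sketch of the reverse tunnel with placeholder names; the `0.0.0.0` bind address only takes effect if the instance's /etc/ssh/sshd_config sets `GatewayPorts yes` (or `clientspecified`):

```shell
# Run on the local machine behind NAT. -N: forward only, no remote command.
# Connections to <instance-public-ip>:8080 are carried back over the SSH
# connection to localhost:80 on this machine. 8080 is a placeholder port.
ssh -N -R 0.0.0.0:8080:localhost:80 user@instance-public-ip
```

After changing GatewayPorts, restart sshd and make sure the forwarded port (8080 here) is also open in the instance's security group.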
I am trying to set up a Jenkins agent running on a Mac Mini that will connect to the master instance in AWS using JNLP. I have created a Network Load Balancer that listens on port 50000 and forwards the traffic to the master instance. The security group attached to the master instance allows traffic from the public IP of the Mac Mini. In the node configuration I specified the tunnel connection with the DNS name of the NLB.
While trying to connect I receive an IOException saying the operation timed out. When I run tcpdump on the master instance I can see traffic coming from the Mac Mini.
If I connect the Mac Mini over a VPN, skipping the NLB, the connection works, so I believe something is wrong with how I set up the NLB. Is there a way to increase the connection timeout, or to set the TCP keepalive interval?
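Two things worth checking, sketched below with a placeholder target-group ARN: first, whether the NLB considers the JNLP target healthy (an unhealthy target would explain a client-side timeout even though tcpdump sees inbound packets); second, note that the NLB's TCP idle timeout defaults to 350 seconds, so keepalives on either end need to fire more often than that to hold a long-lived JNLP connection open.

```shell
# Placeholder ARN: substitute the target group behind the port-50000 listener.
# "healthy" targets receive traffic; anything else explains dropped connects.
aws elbv2 describe-target-health \
    --target-group-arn "$TARGET_GROUP_ARN" \
    --query 'TargetHealthDescriptions[].TargetHealth.State'
```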
I'm trying to launch a socket server on Amazon EC2. It binds the socket to 0.0.0.0:5000 and then listens on this port, printing every incoming message.
However, when I simply try to connect to ec2-18-x-x-x.us-east-2.compute.amazonaws.com:5000 in Chrome, it throws ERR_CONNECTION_TIMED_OUT. A client using socket.connect() is likewise unable to establish a connection.
I've already added the port to the security group inbound rules like this:
netstat -a shows that this port is open:
I use the public IPv4 DNS address stated in the instance description.
Since I launched the server and ran netstat on the instance itself, I can clearly connect to the EC2 instance via RDP.
Guided by this troubleshooting manual, I've also checked the route table and the network ACL for the subnet, and everything was OK.
What's the problem then?
netstat simply shows that some process is listening on that port, not that the port is "open" to the outside. You still need to open port 5000 in the Windows Firewall.
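On Windows Server this can be done from an elevated Command Prompt on the instance; the rule name below is arbitrary:

```shell
:: Allow inbound TCP connections to port 5000 through the Windows Firewall.
:: Run in an elevated (Administrator) Command Prompt.
netsh advfirewall firewall add rule name="Socket server 5000" dir=in action=allow protocol=TCP localport=5000
```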
I'm studying AWS, but I have a problem: I cannot connect to the web server on my AWS instance.
The AWS instance is Windows Server 2012.
This is my instance status:
I want to connect to my web server via the public IP 52.78.47.142 from outside the AWS instance.
If I connect to the private IP 172.31.15.206 from inside the instance, the home page shows up.
But if I connect to the public IP 52.78.47.142, the home page doesn't show up, either inside or outside the AWS instance.
Although I set a new rule for the HTTP protocol on port 80, it doesn't work.
Any help? Thanks a lot...
Open up TCP port 80 in the security group (inbound) for the IP address of the machine from which you're accessing the website (in this case, your desktop or laptop's IP address).
If you are trying to access the web server from another AWS instance, open the port for that particular instance's IP. For troubleshooting you can open it up to all traffic and test (though this is not recommended).
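The same rule can be added from the AWS CLI; the security group ID and the CIDR below are placeholders:

```shell
# Allow inbound HTTP from one machine only. Replace the group ID with your
# instance's security group and the CIDR with your own public IP.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 80 \
    --cidr 203.0.113.10/32
```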
I installed memcached on an AWS EC2 Ubuntu instance, and I can connect to it via telnet on the server:
telnet localhost 11211
But how can I connect to it from another machine? I know the internal IP is 172.31.17.208, but when I try to connect from another EC2 instance with:
telnet 172.31.17.208 11211
the response is
Could not open connection to the host, on port 11211: connect failed.
You will need a public IP / Elastic IP if you want to access your memcached server from outside of AWS.
Your internal IP will work within the VPC, but not outside of it. I am guessing the other instance you are trying to connect from is not in the same VPC. Try pinging your memcached server from the other instance and check whether the internal IP resolves.
Edit:
Apart from this, you might need to check your security group and make sure the ports are open for incoming connections.
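One more thing worth ruling out: Ubuntu's memcached package listens on 127.0.0.1 by default (the `-l` line in /etc/memcached.conf), which produces exactly this symptom even when the security group is open. A sketch, assuming the stock config file location and using the internal IP from the question:

```shell
# Check the bind address; the Ubuntu default is "-l 127.0.0.1".
grep '^-l' /etc/memcached.conf

# If you trust the network, bind to the private interface instead
# (172.31.17.208 is the internal IP from the question), then restart.
sudo sed -i 's/^-l .*/-l 172.31.17.208/' /etc/memcached.conf
sudo systemctl restart memcached

# Confirm the new listening address.
ss -ltn | grep 11211
```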