I am not able to connect to my Redis server from a remote AWS instance (both instances are in the same VPC)...
I have launched a CentOS 6 instance and started a Redis server. I can confirm that the server is running:
tcp 0 0 *:6379 *:* LISTEN 891/redis-server *
I have set AWS security group to be:
Custom TCP | port 6379 | 0.0.0.0/0
I am able to connect to the Redis server from the same instance using redis-cli, but when I try to do it from another AWS instance I get:
Could not connect to Redis at ec2-*.compute.amazonaws.com:6379: No route to host
It seems you are binding to the 127.0.0.1 IP instead of 0.0.0.0. Open your /etc/redis.conf and check the bind option.
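For reference, a minimal sketch of the relevant part of /etc/redis.conf (depending on the Redis version, the default is either commented out or restricted to loopback):

# /etc/redis.conf
# Default on many distributions: accept connections from the local machine only
# bind 127.0.0.1
# Listen on all interfaces so other hosts in the VPC can reach Redis
bind 0.0.0.0

Restart Redis after changing it (e.g. sudo service redis restart on CentOS 6) so the new bind address takes effect.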
Turns out the firewall was on, so it wasn't possible to connect from outside. So to wrap it up:
1. Set Redis to allow remote connections by setting bind 0.0.0.0 in redis.conf
2. Make sure the firewall is not preventing you from connecting to your server. On a CentOS instance you can turn it off with the commands below (a narrower per-port rule is sketched after them):
sudo service iptables save
sudo service iptables stop
sudo chkconfig iptables off
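If you would rather keep iptables running, opening only the Redis port is enough; a sketch, assuming the stock CentOS 6 INPUT chain:

sudo iptables -I INPUT -p tcp --dport 6379 -j ACCEPT   # accept incoming Redis connections
sudo service iptables save                             # persist the rule across reboots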
My setup is a Docker container with a Node app running on port 3000, started using
docker run -d -p 3000:3000 <IMAGE> node dist/src/main.js
ubuntu@ip-172-31-8-192:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
da3fdeb2b843 image "docker-entrypoint.s…" 18 minutes ago Up 18 minutes 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp
From inside the EC2 instance, netstat shows:
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN
tcp6 0 0 [::]:3000 [::]:* LISTEN
My EC2 Instance has public access.
AWS Security Group Rules
When I SSH into my EC2 instance I can run curl localhost:3000/status and see a response from the node app.
ubuntu@ip-172-31-8-192:~$ curl localhost:3000/status
{"statusCode":404,"message":"Cannot GET /status","error":"Not Found"}
Unfortunately, from my local terminal curl {EC2-Public IPv4 DNS}:3000/status times out; verbose output:
Jamess-MBP-2:~ jamesspaniak$ curl -v <EC2-Public IPv4 DNS>:3000/status
* Trying <EC2-Public IPv4>:3000...
* connect to <EC2-Public IPv4> port 3000 failed: Operation timed out
* Failed to connect to <EC2-Public IPv4 DNS> port 3000 after 75008 ms: Couldn't connect to server
* Closing connection 0
curl: (28) Failed to connect to <EC2-Public IPv4 DNS> port 3000 after 75008 ms: Couldn't connect to server
I've also tried opening all ports and using port 80 with docker run -p 80:3000, but with the same result.
I've also added an inbound rule to allow ICMP Echo Request and can successfully ping my public IP.
What other things can I look at to resolve this? I expected to be able to make a request to the running Docker container from outside the EC2 instance. Appreciated.
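One way to narrow this down from the local machine is a raw TCP probe: a timeout usually means the traffic is dropped before it reaches the instance (security group, network ACL, routing, or no public IP on the subnet), while "connection refused" means the host was reached but nothing answered on that port. A minimal check, assuming nc is available locally:

nc -vz <EC2-Public IPv4 DNS> 3000   # "timed out" => filtered on the way in; "refused" => the host was reached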
Running an AWS EC2 instance with Ubuntu 22.04. I am also running a Jupyter server for Python development there and connecting to it from my local Ubuntu laptop with SSH tunneling.
#!/usr/bin/env bash
# encoding:utf-8
SERVER=98.209.63.973 # My EC2 instance
# Tunnel the jupyter service
nohup ssh -N -L localhost:8081:localhost:8888 $SERVER & # local port 8081 -> remote port 8888
However, I never opened port 8888 of the EC2 instance with a security group rule. How come the port forwarding works in that case? Shouldn't it be blocked?
When using ssh -L, ssh will listen to local port 8081 and will send that traffic across the SSH connection (port 22) to the destination computer. The ssh daemon that receives the traffic will then forward the traffic to localhost:8888.
There is no need to permit port 8888 in the EC2 instance security group because it is receiving this traffic via port 22.
An SSH connection does more than just sending the keystrokes you type. It is a full protocol that can pass traffic across multiple logical channels.
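The same tunnel can also be declared in ~/.ssh/config instead of on the command line; a minimal sketch, assuming a host alias of jupyter-ec2 and the default Ubuntu login user:

Host jupyter-ec2
    HostName 98.209.63.973            # the EC2 address from the script above
    User ubuntu                       # assumed login user
    LocalForward 8081 localhost:8888  # same local:remote mapping as the -L flag

With that in place, nohup ssh -N jupyter-ec2 & opens the identical tunnel, still carried entirely over port 22.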
I am running nginx in a Docker container in an EC2 instance. I started nginx using docker run --name mynginx1 -p 80:80 -d nginx and can access it via curl http://localhost from inside the EC2 instance. Now when I try to access it from outside through my browser, my request always times out. I have set the security rules on my EC2 instance to allow all traffic, all protocols, from any IP address for the purpose of testing.
I have verified that nginx is listening on any IP address using ss -tuln | grep 80
tcp LISTEN 0 4096 0.0.0.0:80 0.0.0.0:*
tcp LISTEN 0 4096 [::]:80 [::]:*
Any ideas?
Note: When I install nginx on EC2 directly and run it using sudo systemctl start nginx, I am able to go to http://<ec2_dns> and see the nginx welcome page. So I believe this is an issue specific to running docker containers on EC2 and not a problem with the instance security rules.
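Two checks that can help separate Docker networking from the rest of the setup, assuming the default Docker bridge networking: hit the published port on the instance's private IP (not just loopback), and confirm Docker's NAT rule for the published port exists:

curl http://$(hostname -I | awk '{print $1}')/   # port 80 on the instance's private IP, from inside the instance
sudo iptables -t nat -L DOCKER -n                # should show a DNAT rule forwarding dpt:80 to the container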
Edit 1: Subnet network ACLs inbound rules are as follows:
Can't connect to a dotnet app running in an AWS EC2 instance on port 7070.
I've added the port to the security group and when I check if the port is open (netstat -ntlp) I get the output below:
tcp 0 0 127.0.0.1:7070 0.0.0.0:* LISTEN 27021/dotnet
Is there anything I'm missing?
I was able to fix the issue by installing nginx on my EC2 instance as a reverse proxy (https://gist.github.com/soheilhy/8b94347ff8336d971ad0) and forwarding my custom port to port 80.
Hopefully this will help!
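A minimal sketch of such a reverse-proxy server block, assuming the app stays bound to 127.0.0.1:7070 as in the netstat output above (the file name is hypothetical):

# /etc/nginx/conf.d/dotnet-app.conf
server {
    listen 80;                            # nginx accepts outside traffic on port 80
    location / {
        proxy_pass http://127.0.0.1:7070; # and hands it to the locally bound dotnet app
    }
}

Port 80 then needs to be open in the security group, while 7070 can stay local to the instance.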
I'm developing an application which will use AWS's SNS service to receive notifications over HTTP.
As I am developing the application locally and have no control of our company firewall, I am attempting to tunnel HTTP connections from an external EC2 host to my local machine for the purposes of testing.
Everything looks fine when verifying the connection from the EC2 host itself, however the port is closed when examined externally.
My local application is on port 2222. I have executed the following command on my local machine to establish the proxy:
ssh -i myCredentials.pem ec2-user@myserver.com -R 2222:localhost:2222
Where myserver.com points to an EC2 instance. SSH'ing to the EC2 instance, I can successfully connect to my application via the tunnel, and nmap displays the following:
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00055s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
2222/tcp open EtherNet/IP-1
However when I run nmap against the EC2 instance from my local machine, the port is closed:
Nmap scan report for xxxxxx
Host is up (0.24s latency).
Not shown: 998 filtered ports
PORT STATE SERVICE
22/tcp open ssh
2222/tcp closed EtherNet/IP-1
The security group assigned to the server allows TCP traffic on port 2222 from 0.0.0.0/0, and iptables isn't running on the server.
What do I need to do on the EC2 end to make this port open to the outside world?
The tunnelling command is correct; however, in order for SSH to bind to the wildcard address, the following setting is required in /etc/ssh/sshd_config on the remote server:
GatewayPorts yes
Once this is added, restart sshd and the tunnelling will work as desired, provided no firewalls are in the way.
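For completeness, a sketch of the whole flow once the setting is in place (the explicit 0.0.0.0 bind address in -R is optional with GatewayPorts yes, but makes the intent visible):

# /etc/ssh/sshd_config on the EC2 instance
GatewayPorts yes

# apply the change (init-style restart; adjust for systemd-based distributions)
sudo service sshd restart

# from the local machine: expose local port 2222 on all interfaces of the EC2 host
ssh -i myCredentials.pem -R 0.0.0.0:2222:localhost:2222 ec2-user@myserver.com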