Closed port when tunneling HTTP over ssh - amazon-web-services

I'm developing an application which will use AWS's SNS service to receive notifications over HTTP.
As I am developing the application locally and have no control over our company firewall, I am attempting to tunnel HTTP connections from an external EC2 host to my local machine for testing purposes.
Everything looks fine when verifying the connection from the EC2 host itself; however, the port appears closed when examined externally.
My local application is on port 2222. I have executed the following command on my local machine to establish the proxy:
ssh -i myCredentials.pem ec2-user@myserver.com -R 2222:localhost:2222
Where myserver.com points to an EC2 instance. SSH'ing to the EC2 instance, I can successfully connect to my application via the tunnel, and nmap displays the following:
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00055s latency).
Not shown: 997 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
25/tcp   open  smtp
2222/tcp open  EtherNet/IP-1
However when I run nmap against the EC2 instance from my local machine, the port is closed:
Nmap scan report for xxxxxx
Host is up (0.24s latency).
Not shown: 998 filtered ports
PORT     STATE  SERVICE
22/tcp   open   ssh
2222/tcp closed EtherNet/IP-1
The security group assigned to the server allows TCP traffic on port 2222 from 0.0.0.0/0, and iptables isn't running on the server.
What do I need to do on the EC2 end to make this port open to the outside world?

The tunnelling command is correct; however, for sshd to bind the remote end of the tunnel to the wildcard address rather than loopback, the following setting is required in /etc/ssh/sshd_config on the remote server:
GatewayPorts yes
Once this is added, restart sshd and the tunnelling will work as desired provided no firewalls are in the way.
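On the EC2 instance that might look like this (a sketch; it assumes GatewayPorts isn't already set elsewhere in the file, and the service name may be "ssh" rather than "sshd" on some distros):

echo 'GatewayPorts yes' | sudo tee -a /etc/ssh/sshd_config
sudo systemctl restart sshd
# re-establish the tunnel, then confirm the listener is on the wildcard address:
sudo ss -tln | grep 2222   # expect 0.0.0.0:2222, not 127.0.0.1:2222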

Related

How Is Port Forwarding Working on AWS without Security Group Rules?

I am running an AWS EC2 instance with Ubuntu 22.04. I am also running a Jupyter server there for Python development and connecting to it from my local Ubuntu laptop with ssh tunneling.
#!/usr/bin/env bash
# encoding:utf-8
SERVER=98.209.63.973 # My EC2 instance
# Tunnel the jupyter service
nohup ssh -N -L localhost:8081:localhost:8888 $SERVER & # local port 8081 -> remote port 8888
However, I never opened port 8888 of the EC2 instance with a security group rule. How come the port forwarding works in that case? Shouldn't it be blocked?
When using ssh -L, ssh will listen on local port 8081 and send that traffic across the SSH connection (port 22) to the destination computer. The ssh daemon that receives the traffic will then forward it to localhost:8888.
There is no need to permit port 8888 in the EC2 instance security group because it is receiving this traffic via port 22.
An SSH connection does more than just send the keystrokes you type. It is a full protocol that can pass traffic across multiple logical channels.
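You can see this from the laptop side (a sketch; the port numbers are the ones from the question):

ss -tln | grep 8081   # the tunnel's listener is local: 127.0.0.1:8081
ss -tnp | grep ssh    # the only connection to the server is on port 22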

Unable to access Kibana on AWS EC2 instance using url

I have Elasticsearch and Kibana installed on an EC2 instance, where I am able to access Elasticsearch at http://public-ip:9200. But I am unable to access Kibana at http://public-ip:5601.
I have configured kibana.yml and added certain fields.
server.port: 5601
server.host: 0.0.0.0
elasticsearch.url: 0.0.0.0:9200
On doing wget http://localhost:5601 I am getting below output:
--2022-06-10 11:23:37-- http://localhost:5601/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:5601... connected.
HTTP request sent, awaiting response... 200 OK
Length: 83731 (82K) [text/html]
Saving to: ‘index.html’
What am I doing wrong?
server.host set to 0.0.0.0 means it should be accessible from outside localhost, but double check that the listener is actually listening for external connections on that port using netstat -nltpu. The server is also accessible on its public IP on port 9200, so try the following:
EC2 Security Group should allow inbound TCP traffic on port 5601 from your IP address.
Network ACLs should allow inbound/outbound TCP traffic on port 5601.
OS firewall (e.g. ufw or firewalld) should allow traffic on that port. You can run iptables -L -nxv to check the firewall rules.
Try connecting to that port from a different EC2 instance in the same VPC. It is possible that whatever internet connection you are using has a firewall blocking connections on that port. This is common with corporate firewalls.
If these fail, next you want to check if the packets are reaching your EC2 instance so you can run a packet capture on that port using tcpdump -ni any port 5601 and check if you have any packets coming in/out on that port.
If you don't see any packets in tcpdump, use VPC Flow Logs to see if packets are coming in/out on that port (a combined sketch of the on-instance checks follows this list).
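Run together on the instance, the local checks from this list look roughly like this (same commands as above):

sudo netstat -nltpu | grep 5601   # the listener should show 0.0.0.0:5601, not 127.0.0.1:5601
sudo tcpdump -ni any port 5601    # retry from the browser and watch for inbound SYNs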
Considering the Kibana port (5601) is open via security groups, I was able to resolve the issue by updating the config from server.host: localhost to server.host: 0.0.0.0 and setting elasticsearch.hosts: ["http://localhost:9200"] (in my case Kibana and ES are both running on the same machine) in kibana.yml.
https://discuss.elastic.co/t/kibana-url-gives-connection-refused-from-outside-machine/122067/8
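For reference, the resulting kibana.yml would contain lines like these (a sketch using the values above):

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]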

Google Cloud expose application after SSH

I have a web application running on a private server. I use ssh port tunnelling to map the private server's port to port 8080 on the Google Cloud VM, and when I do
curl http://localhost:8080
in the GCP VM shell, it returns a valid response. However, when I try to access it from outside (in a browser) using the external IP (or do curl http://[external_IP]:8080 in a shell), it returns "the IP refused to connect".
My firewall settings allow TCP traffic on 8080, such that when I run another application on port 8080 directly in the VM without ssh (say, a Docker hello-world app), it is accessible from outside using the same link and works well. Is there additional configuration I must do?
Check if your application is binding to 127.0.0.1 or localhost. If yes, change to 0.0.0.0.
To accept traffic from the VPC requires binding to a network interface connected to the VPC. The address 0.0.0.0 means bind to all network interfaces.
The network 127.x.x.x aka localhost or loopback address is an internal-only (Class A) network. If your application only binds to the internal network, external applications cannot connect to your application.
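The difference shows up in the listener table (sketch output; addresses are illustrative):

ss -tln
# 127.0.0.1:8080 -> reachable only from the VM itself
# 0.0.0.0:8080   -> reachable on every interface, including the external IP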
If instead your goal is to bind to localhost and use SSH port forwarding to access the loopback address, then start SSH like this:
ssh -L 8080:127.0.0.1:8080 IP_ADDRESS_OF_VM
You can then access port 8080 on the VM this way:
curl http://127.0.0.1:8080
The curl command is connecting to port 8080 on your local machine. SSH then forwards that connection to port 8080 on the remote machine.
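If instead the goal is the reverse direction from the question (exposing the private server's app on the VM's external IP), the GatewayPorts caveat from the first answer above applies; a sketch, with user@VM_EXTERNAL_IP assumed:

# on the private server: publish local port 8080 on the VM's wildcard address
ssh -R 0.0.0.0:8080:localhost:8080 user@VM_EXTERNAL_IP
# requires GatewayPorts yes (or clientspecified) in the VM's /etc/ssh/sshd_config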

Docker in VM in AWS

I have created a VM in AWS and assigned it a security group with ports 8080-8089 open.
Inside my VM I am running a Docker container of a server, mapping VM port 8081 to the container's port 8080, using:
docker run --name mynameddocker -d -p 0.0.0.0:8081:8080 webapp
Now, inside my VM I can access localhost:8081 using a web browser. But the issue is accessing it from outside the VM.
My assumption was that I could access it using AWS_Instance_Public_IP:8081.
But nothing worked. I have a security rule that opens all TCP ports, but still no access.
I have tried the same on Google Cloud Platform, but no progress.
Any idea?
Upon checking that the first step (testing your container image locally) is already covered, you just need to ensure that the ports are mapped correctly and opened, so that connections can flow from outside to your container. We were able to reproduce the issue on GCP, using an Nginx image which by default has port 80/tcp open; the port was mapped using port 8081 (as yours).
Here is the command we used:
docker run --name nginx-new -d -p 8081:80 nginx
Meaning that 80 is the container's port and 8081 is the port mapped on the host VM in GCP.
In a firewall rule we opened port 8081, which is the one opened on the host to receive connections and relay them to the container's port 80.
Basically, outside connections will go like:
Browser: http://host-ip:8081 >> GCP project firewall >> instance port 8081 >> container port 80 >> successful connection!
Troubleshooting steps:
1. Check the ports opened on the container.
2. Test through the container port and IP.
3. Check the ports opened on the VM.
4. Test through the VM port using the instance internal IP.
5. Test through the VM port using the instance external IP.
6. Test in a browser using the instance external IP.
It would be useful to know your error message, but I would suggest you follow the above steps to validate that the used ports are mapped and opened both in the container and on the VM instance.
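To validate the mapping from inside the VM, something like this works (a sketch using the container from the example above):

docker port nginx-new                 # expect: 80/tcp -> 0.0.0.0:8081
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8081   # expect 200
sudo ss -tlnp | grep 8081             # docker-proxy listening on 0.0.0.0:8081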

Telnet to a specific port on my ec2 instance

I need to telnet to my EC2 instance on port 2222. I have included it in the security groups with source Anywhere and a Custom TCP rule. It is a 64-bit Linux machine. I am able to connect via port 22, but when I try 2222 it shows telnet: Unable to connect to remote host: Connection refused. Also, I need to skip the login/password prompt if connected successfully.
Connection refused generally means nothing is listening on port 2222 on the instance. Try changing the port the Telnet server is listening on in its configuration file.
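How that looks depends on how telnetd is launched. With xinetd it might be something like this (a sketch; the file name and settings are assumptions, so verify against your distro):

# /etc/xinetd.d/telnet2222 (hypothetical)
service telnet2222
{
    # UNLISTED means the port is taken from this file, not /etc/services
    type        = UNLISTED
    port        = 2222
    socket_type = stream
    wait        = no
    user        = root
    server      = /usr/sbin/in.telnetd
    disable     = no
}

Then restart xinetd (sudo systemctl restart xinetd) and retry telnet <ec2-host> 2222.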