Can't connect to remote host from AWS EC2 instance

I have a database on a remote Google Cloud (GCP) machine. On GCP, I edited the firewall rules to allow access from my desktop and from an AWS EC2 instance. However, the following happens:
From desktop:
netcat -zv 35.198.56.213 27017
Connection to 35.198.56.213 27017 port [tcp/*] succeeded!
From EC2:
netcat -zv 35.198.56.213 27017
netcat: connect to 35.198.56.213 port 27017 (tcp) failed: Connection timed out
I don't understand why I can connect from my desktop but not from the EC2 instance. Both IPs are allowed in the firewall rule (for the EC2, I used the instance's public address). The outbound rules for the EC2 instance allow all traffic.
Any tips?
Edit: I am trying to connect to a MongoDB instance that is running on port 27017. The bindIp in /etc/mongod.conf is correctly set to 0.0.0.0.
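One thing worth ruling out (a hedged suggestion, not part of the original post): confirm that outbound traffic from the EC2 instance really leaves with the public address that was allowed in the GCP firewall rule. A quick check from the instance, using AWS's own IP-echo endpoint:
# Prints the public IP that remote hosts (including the GCP firewall) see
curl -s https://checkip.amazonaws.com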

Related

How Is Port Forwarding Working on AWS without Security Group Rules?

I am running an AWS EC2 instance with Ubuntu 22.04. I am also running a Jupyter server there for Python development and connecting to it from my local Ubuntu laptop with SSH tunneling.
#!/usr/bin/env bash
# encoding:utf-8
SERVER=98.209.63.973 # My EC2 instance
# Tunnel the jupyter service
nohup ssh -N -L localhost:8081:localhost:8888 $SERVER &  # forward local port 8081 to remote port 8888
However, I never opened port 8888 of the EC2 instance with a security group rule. How is the port forwarding working in that case? Shouldn't it be blocked?
When using ssh -L, ssh listens on local port 8081 and sends that traffic across the SSH connection (port 22) to the destination computer. The ssh daemon that receives the traffic then forwards it to localhost:8888.
There is no need to permit port 8888 in the EC2 instance security group because it is receiving this traffic via port 22.
An SSH connection does more than just sending the keystrokes you type. It is a full protocol that can pass traffic across multiple logical channels.
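As an illustrative sketch (assuming the script above is running and the port numbers from the question), the notebook is reached entirely through the local end of the tunnel, so the instance's security group only ever needs to allow port 22:
# All of this traffic travels inside the existing SSH session on port 22;
# remote port 8888 is never contacted directly over the network.
curl http://localhost:8081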

Can't access AWS EC2 public IP from the browser

I have a dockerized application on EC2, and it is running fine.
I have a security group policy like the following, and here are my instance's details.
If I hit https://54.167.118.150/, http://54.167.118.150/, https://54.167.118.150:8080, or http://54.167.118.150:8080 in the browser, it says the connection was refused.
Check in your Dockerfile whether port 8080 is exposed or not. Port 8080 should be exposed to the host; add the line below at the bottom of the Dockerfile:
EXPOSE 8080
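Note that EXPOSE on its own only documents the port; the container's port still has to be published on the host when it is started. A minimal sketch, assuming the application listens on 8080 inside the container and the image name "myapp" is a placeholder:
# Publish container port 8080 on host port 8080 so the EC2 public IP can serve it
docker run -d -p 8080:8080 myapp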

Telnet using internal IP from the command line in Amazon AWS

On one of my Amazon AWS servers, a memcached server is installed on port 11211.
I SSH to that server and run this command:
telnet 127.0.0.1 11211
I get connected and can access the memcached data.
If I use the private or public IP instead of 127.0.0.1:
telnet <private ip> 11211
I get this:
telnet: Unable to connect to remote host: Connection refused
Let's call this server the master server, where memcached is installed.
If I now SSH to another app server and run this command:
telnet <private ip> 11211
I get the same error. But the master server's security group has this inbound rule:
All traffic All All sg-xxxxxx (app server)
Shouldn't we be able to access all the services running on the master server from the app servers?
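"Connection refused" on the private IP while 127.0.0.1 works usually points at the daemon's bind address rather than the security group. A hedged check, assuming a Debian/Ubuntu-style memcached install (paths and option names may differ on other distributions):
sudo ss -tlnp | grep 11211   # shows which address memcached is listening on
# If this reports 127.0.0.1:11211, change the -l line in /etc/memcached.conf
# (e.g. -l 0.0.0.0), restart memcached, and retry telnet against the private IP.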

HTTPS on an EC2 instance

I have an EC2 instance on Amazon (AWS). The instance is behind an ELB (Elastic Load Balancer). I want to allow HTTPS connections to reach the EC2 instance.
Is it necessary to have the load balancer configured for HTTPS, i.e., to check the certificates etc., or can this just be done traditionally within the EC2 instance and its virtual host SSL configuration?
I'm asking because I have allowed traffic from the ELB to the EC2 instance for ports 80 and 443, but only port 80 reaches the instance.
EDIT
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00021s latency).
Not shown: 996 closed ports
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
443/tcp open https
3306/tcp open mysql
EDIT 2
Here is my other Stack Overflow question explaining the bigger problem I have, hence why I opened this question: HTTPS only works on localhost
Check whether any application is running on port 443.
Use this command to check:
nmap -sT -O localhost
EDIT
Place the certificate files on the server and then upload them to IAM using this command:
aws iam upload-server-certificate --server-certificate-name my-server-cert \
  --certificate-body file://my-certificate.pem --private-key file://my-private-key.pem \
  --certificate-chain file://my-certificate-chain.pem
For more info check this:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/ssl-server-cert.html
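As a follow-up sketch (the load balancer name and certificate ARN are placeholders, not from the original answer), the uploaded certificate can then be attached to a classic ELB HTTPS listener on port 443, which terminates SSL at the load balancer:
aws elb create-load-balancer-listeners --load-balancer-name my-load-balancer \
  --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/my-server-cert"
If you instead want SSL terminated on the instance's virtual host, the listener would be TCP 443 to TCP 443 so the encrypted traffic is passed through unchanged.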

Closed port when tunneling HTTP over ssh

I'm developing an application which will use AWS's SNS service to receive notifications over HTTP.
As I am developing the application locally and have no control over our company firewall, I am attempting to tunnel HTTP connections from an external EC2 host to my local machine for testing purposes.
Everything looks fine when verifying the connection from the EC2 host itself, however the port is closed when examined externally.
My local application is on port 2222. I have executed the following command on my local machine to establish the proxy:
ssh -i myCredentials.pem ec2-user@myserver.com -R 2222:localhost:2222
Where myserver.com points to an EC2 instance. SSH'ing to the EC2 instance, I can successfully connect to my application via the tunnel, and nmap displays the following:
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00055s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
2222/tcp open EtherNet/IP-1
However when I run nmap against the EC2 instance from my local machine, the port is closed:
Nmap scan report for xxxxxx
Host is up (0.24s latency).
Not shown: 998 filtered ports
PORT STATE SERVICE
22/tcp open ssh
2222/tcp closed EtherNet/IP-1
The security group assigned to the server allows TCP traffic on port 2222 from 0.0.0.0/0, and iptables isn't running on the server.
What do I need to do on the EC2 end to make this port open to the outside world?
The tunnelling command is correct; however, for sshd to bind the forwarded port to the wildcard address, the following setting is required in /etc/ssh/sshd_config on the remote server:
GatewayPorts yes
Once this is added, restart sshd and the tunnelling will work as desired, provided no firewalls are in the way.
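A minimal sketch of the two steps, assuming an Amazon Linux-style instance where the SSH service is named sshd (the service name differs on some distributions):
# On the EC2 instance: enable GatewayPorts and restart the SSH daemon
echo "GatewayPorts yes" | sudo tee -a /etc/ssh/sshd_config
sudo systemctl restart sshd
# From the local machine: re-establish the reverse tunnel, binding the remote
# end explicitly to all interfaces so external hosts can reach port 2222
ssh -i myCredentials.pem -R 0.0.0.0:2222:localhost:2222 ec2-user@myserver.com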