I have created a VM in AWS and assigned to it a security group with ports 8080-8089 open.
Inside my VM I am running a Docker container of a server, mapping my VM port 8081 to the container port 8080,
using "docker run --name mynameddocker -d -p 0.0.0.0:8081:8080 webapp"
Now, inside my VM I can access localhost:8081 using a web browser. But the issue is trying to access it from outside the VM.
My assumption was that I could access it using AWS_Instance_Public_IP:8081.
But nothing worked. I have a security rule that opens all TCP ports, but still no access.
I have tried the same in Google Cloud Platform, but no progress.
Any idea?
Upon checking, the first step (testing your container image locally) is already covered; you just need to make sure the ports are mapped correctly and open so connections can flow from outside to your container. We were able to reproduce your setup on GCP using an Nginx image, which by default listens on port 80/tcp, and we mapped it to host port 8081 (as in your case).
Here is the command we used:
docker run --name nginx-new -d -p 8081:80 nginx
Meaning that 80 is my container's port and 8081 is the port mapped on the host VM in GCP.
In a firewall rule we opened port 8081, which is the port exposed on my host to receive connections and forward them to the container's port 80.
Basically, outside connections flow like this:
Browser: http://host-ip:8081 >> GCP project firewall >> instance port 8081 >> container port 80 >> successful connection!
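For reference, that kind of host-port rule can be created from the CLI with something like the following (the rule name allow-8081 is just an example, and 0.0.0.0/0 is only for testing):
gcloud compute firewall-rules create allow-8081 --allow=tcp:8081 --source-ranges=0.0.0.0/0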
Troubleshooting (please refer to the attached images for reference):
Checked ports opened on my container (container-troubleshoot.png)
Test through the container port and IP (image1)
Checked ports opened on my VM (VM-ports.png)
Test through the VM port using instance internal IP (image2)
Test through the VM port using instance external IP (image3)
Test using browser using instance external IP (image4)
It would be useful to know your error message, but I would suggest you follow the above steps to validate that the ports you use are mapped and open both in the container and in the VM instance; the commands sketched below cover the same checks.
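A rough command-line sketch of those checks, assuming the container is named nginx-new as above and EXTERNAL_IP stands for the instance's external IP:
docker port nginx-new
should print 80/tcp -> 0.0.0.0:8081, then
sudo ss -tlnp | grep 8081
confirms the VM is listening on 0.0.0.0:8081, and finally
curl http://localhost:8081
from inside the VM and
curl http://EXTERNAL_IP:8081
from your workstation exercise the same path a browser would.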
I have a web application running on a private server. I use SSH port tunnelling to map the private server's port to port 8080 on a Google Cloud VM, and when I do
curl http://localhost:8080
on the GCP VM shell, it returns a valid response. However, when I try to access it from outside (in a browser) using the external IP (or do curl http://[external_IP]:8080 in a shell), it returns "the IP refused to connect".
My firewall settings allow TCP traffic on 8080, such that when I run another application on port 8080 directly in the VM without SSH (say a Docker hello-world app), it is accessible from outside using the same link and works well. Is there additional configuration I must do?
Check if your application is binding to 127.0.0.1 or localhost. If yes, change to 0.0.0.0.
Accepting traffic from the VPC requires binding to a network interface connected to the VPC. The address 0.0.0.0 means bind to all network interfaces.
The 127.x.x.x network, aka localhost or the loopback address, is an internal-only (Class A) network. If your application only binds to the loopback network, external applications cannot connect to your application.
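A quick way to see which address an application is bound to (a sketch; ss may need sudo to show process names):
sudo ss -tlnp | grep 8080
If the local address column shows 127.0.0.1:8080, only the VM itself can reach it; 0.0.0.0:8080 (or *:8080) means all interfaces, which is what external clients need.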
If instead your goal is to bind to localhost and use SSH port forwarding to access the loopback address, then start SSH like this:
ssh -L 8080:127.0.0.1:8080 IP_ADDRESS_OF_VM
You can then access port 8080 on the VM this way:
curl http://127.0.0.1:8080
The curl command connects to port 8080 on your local machine. SSH then forwards that connection to port 8080 on the remote machine.
I just started a new AWS EC2 instance. In the instance's security group I added a new rule to open port 8080 as well as port 80.
I created a docker image and container that runs an apache server as per the aws tutorial.
When I run docker run -p 80:80 hello-world (where hello-world is the apache container image), everything works fine and I can access the server from the public network (using a web browser, or a curl command).
However, when I run docker run -p 8080:80 hello-world and I try to send a GET request (web browser, or curl) I get a connection timeout.
If I login to the host that is running the docker container, the curl command works fine. This tells me that port 8080 isn't really open to the public network, and something is blocking it, what could that be?
I tried to reproduce the issue and wasn't able to (it worked for me), so here are the things you should check:
1) Check that the security group has indeed opened ports 80 and 8080 to your IP (or to 0.0.0.0/0 if this is just a test, to confirm that this is not a firewall issue).
2) Check that the container is running (see also the docker port note after this list):
docker ps -a
You should see 0.0.0.0:8080->80/tcp under PORTS.
3) Check that when you send the GET request you are specifying port 8080, so your browser URL should look something like:
http://your.ip:8080
or curl:
curl http://your.ip:8080
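As a complement to step 2, docker port shows just the mapping for a single container (the container name here is hypothetical):
docker port my-container
which should print something like 80/tcp -> 0.0.0.0:8080.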
Warning: just for testing.
For testing, adjusting the security group can solve the problem:
SecurityGroups > Inbound > Edit inbound rules > Add new rules > All TCP
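For reference, the equivalent for just the port in question with the AWS CLI would be something like this (the security group ID is a placeholder, and 0.0.0.0/0 is only for testing):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8080 --cidr 0.0.0.0/0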
I am missing a piece of the puzzle. Running a docker image (say on a Linux EC2 instance) through
> sudo docker run -p 80:xyzw webapp_image:version
makes the container reachable at port 80, which means (via EXPOSE xyzw in the Dockerfile) that the container has affected its host.
Does that not contradict the premise of Docker containers? The idea, just like virtualization, appeared to be that a container runs in a sandbox and is unable to affect its host. Yet here it is able to expose itself on the host's port. Does the fact that this is doable not breach the supposed isolation? Should the mapping not be done on the host's command line, not from within the container? Suppose you have two containers on the same host, and both try to expose through the same port, then, potentially, some race would occur to see who'll get there first. Or is the idea that sandboxing is indeed observed, except that here the mapping occurs on the command line of the host?
How do I route port 80 of the EC2 instance to port 80 of the container?
If I understood the context, you can do this by running the command below:
$ sudo docker run -p 80:80 webapp_image:version
This routes port 80 of the host machine to port 80 of the container.
After this you need to open port 80 in the security group of the EC2 instance, and if it still doesn't work, go through these checkpoints (example commands follow the list):
1. Firewalls on the host machine like iptables, selinux etc.
2. Validate EC2 instance security groups in AWS.
3. Validate NACL rules for subnets in AWS.
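A rough sketch for checkpoint 1 on the instance itself (commands depend on the distro; both are only for inspection):
sudo iptables -L INPUT -n --line-numbers
lists the host firewall rules, and
getenforce
(on SELinux-enabled systems) shows whether SELinux is enforcing.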
Did you add the line EXPOSE 80 in your Dockerfile? If yes, then run docker run -p 80:80 webapp_image:version. The -p 80:80 option maps the exposed port 80 on the container to port 80 on the host machine (EC2).
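Regarding the question's concern about two containers racing for the same host port: the publish is requested from the host's command line via -p, and a second publish of an already-taken host port simply fails rather than silently winning. A quick sketch, assuming an nginx image:
docker run -d --name web1 -p 80:80 nginx
docker run -d --name web2 -p 80:80 nginx
The second command exits with a bind error along the lines of "port is already allocated", so it is the host, not the container, that decides who gets the port.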
I'm running Bitnami MEAN on an EC2 instance. I can host my app just fine on port 3000 or 8080. Currently, if I don't specify a port I'm taken to the Bitnami MEAN homepage. I'd like to be able to access my app directly from my EC2 public DNS without specifying a port in the URL. How can I accomplish this?
The simple way to do that is port forwarding, using the command below:
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
After logging in to the instance using PuTTY with your private key and the username "bitnami", type the above command and press Enter.
Then you will automatically be redirected to your application.
Note: I am assuming you have already opened port 8080 in the security group on AWS.
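To verify the rule took effect you can list the NAT table (note the rule will not survive a reboot unless saved with your distro's persistence mechanism):
sudo iptables -t nat -L PREROUTING -n --line-numbers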
You'll have to open port 80 on the server's firewall, and either run your server on port 80 or forward port 80 to port 8080. You'll need to look up the instructions for doing that based on what version of Linux you are running, but it is probably going to be an iptables command.
You'll also need to open port 80 on the EC2 server's security group.
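If iptables is what manages the host firewall, the open-and-forward step could look roughly like this (a sketch only; adapt it to your distro and existing rules):
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080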
I'm developing an application which will use AWS's SNS service to receive notifications over HTTP.
As I am developing the application locally and have no control of our company firewall, I am attempting to tunnel HTTP connections from an external EC2 host to my local machine for the purposes of testing.
Everything looks fine when verifying the connection from the EC2 host itself, however the port is closed when examined externally.
My local application is on port 2222. I have executed the following command on my local machine to establish the proxy:
ssh -i myCredentials.pem ec2-user@myserver.com -R 2222:localhost:2222
Where myserver.com points to an EC2 instance. SSH'ing to the EC2 instance, I can successfully connect to my application via the tunnel, and nmap displays the following:
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00055s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
2222/tcp open EtherNet/IP-1
However when I run nmap against the EC2 instance from my local machine, the port is closed:
Nmap scan report for xxxxxx
Host is up (0.24s latency).
Not shown: 998 filtered ports
PORT STATE SERVICE
22/tcp open ssh
2222/tcp closed EtherNet/IP-1
The security group assigned to the server allows TCP traffic on port 2222 from 0.0.0.0/0, and iptables isn't running on the server.
What do I need to do on the EC2 end to make this port open to the outside world?
The tunnelling command is correct, however in order for SSH to bind to the wildcard address, the following setting is required in /etc/ssh/sshd_config on the remote server:
GatewayPorts yes
Once this is added, restart sshd and the tunnelling will work as desired provided no firewalls are in the way.
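A sketch of the full sequence after editing sshd_config, assuming a systemd-based distro:
sudo systemctl restart sshd
then re-establish the tunnel, optionally requesting the wildcard bind explicitly:
ssh -i myCredentials.pem ec2-user@myserver.com -R 0.0.0.0:2222:localhost:2222
and on the EC2 host,
sudo ss -tlnp | grep 2222
should now show the listener on 0.0.0.0:2222 (or *:2222) rather than 127.0.0.1:2222.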