Does Docker's EXPOSE not breach the supposed sandboxing?

I am missing a piece of the puzzle. Running a Docker image (say, on a Linux EC2 instance) with
> sudo docker run -p 80:xyzw webapp_image:version
makes the container reachable at port 80, which means (via EXPOSE xyzw in the Dockerfile) that the container has affected its host.
Does that not contradict the premise of Docker containers? The idea, just as with virtualization, appeared to be that a container runs in a sandbox and is unable to affect its host. Yet here it is able to expose itself on one of the host's ports. Does the fact that this is doable not breach the supposed isolation? Should the mapping not be done on the host's command line rather than from within the container? Suppose two containers on the same host both try to expose themselves through the same port; potentially, some race would occur over who gets there first. Or is the idea that sandboxing is indeed observed, and the mapping does in fact occur on the command line of the host?
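A minimal sketch of the two-container scenario described above (image names are hypothetical). In practice the second docker run fails, because the bind on the host port happens when the host operator runs docker run -p, not inside the container:
$ docker run -d -p 80:8080 webapp_image:version   # succeeds: binds host port 80 to container port 8080
$ docker run -d -p 80:8080 other_image:version    # fails: host port 80 is already allocated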

How do I route port 80 of the EC2 instance to port 80 of the
container?
If I understood the context correctly, you can do this by running the command below:
$ sudo docker run -p 80:80 webapp_image:version
This routes port 80 of the host machine to port 80 of the container.
After this you need to open port 80 in the security group of the EC2 instance (see the CLI sketch after the checklist); if it still doesn't work, go through these checkpoints:
1. Firewalls on the host machine, such as iptables or SELinux.
2. Validate the EC2 instance's security groups in AWS.
3. Validate the NACL rules for the subnets in AWS.
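A hedged sketch of opening port 80 with the AWS CLI; the security group ID is a placeholder, and 0.0.0.0/0 is world-open, so restrict the CIDR for anything beyond a quick test:
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 \
    --cidr 0.0.0.0/0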

Did you add the line EXPOSE 80 in your Dockerfile? If yes, then run docker run -p 80:80 webapp_image:version. The -p 80:80 option maps the exposed port 80 in the container to port 80 on the host machine (EC2).
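To make the division of labor concrete, a minimal sketch using the question's image name: EXPOSE in the Dockerfile is metadata documenting which port the application listens on, while the host-side binding only happens when the host runs docker run -p:
# Dockerfile (fragment)
EXPOSE 80    # documents the listening port; creates no binding on the host by itself
$ docker run -p 80:80 webapp_image:version    # the host opts in to the mapping here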

Related

Tomcat in a Docker container on Linux mapped to anything other than 8080 is not accessible from the internet

I tested on AWS EC2 with Amazon Linux and Ubuntu 18.04.
Tomcat is reachable from localhost:8081, but not from the outside network.
After pulling the Tomcat image:
docker pull tomcat
Then running a container with port mapping:
docker run -d --name container-test -p 8081:8080 tomcat
The Tomcat web page is not accessible; the browser says:
This site can’t be reached 13.49.148.112:8081 took too long to respond.
But done this way, it works fine:
docker run -d --name container-test2 -p 8080:8080 tomcat
I opened ALL/ALL/ALL in the AWS security groups.
netstat shows that the ports are listening correctly.
The ACLs are at the default rule 100, allowing everything.
I also ran nmap against it and found that the port is filtered:
$ nmap -p8081 172.217.27.174
PORT STATE SERVICE
8081/tcp filtered blackice-icecap
I tried adding a rule to iptables, but no luck:
iptables -I INPUT 3 -s 0.0.0.0/0 -d 0.0.0.0/0 -p tcp --dport 8081 -m state --state NEW -j ACCEPT
What can be done?
UPDATE:
I spent two good days trying to solve the issue on Amazon Linux 2, with no success at all; I switched to Ubuntu 22.04 and it works. The same setup also works with a different AMI in the Mumbai region, so there is a high chance the image is faulty in the Stockholm region specifically.
It could be one of these (some verification commands follow the list):
check the port mappings of the container in your task definition
check the entries of the NACL (network access control list) of your subnet (check whether it is public)
check whether you allowed the traffic in the security group for your IP or 0.0.0.0/0
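A few hedged commands to verify the mapping on the host itself before blaming AWS networking (container name taken from the question):
$ docker port container-test            # expect: 8080/tcp -> 0.0.0.0:8081
$ sudo ss -tlnp | grep 8081             # confirm something is listening on the host port
$ curl -s http://localhost:8081/        # confirm Tomcat answers locally before testing externally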

Docker in VM in AWS

I have created a VM in AWS and assigned it a security group with ports 8080-8089 open.
Inside my VM I am running a dockerized server, mapping VM port 8081 to the Docker port 8080, using:
docker run --name mynameddocker -d -p 0.0.0.0:8081:8080 webapp
Now, inside my VM I can access localhost:8081 using a web browser, but the issue is trying to access it from outside the VM.
My assumption was that I could access it using AWS_Instance_Public_IP:8081, but nothing worked. I have a security rule that opens all TCP ports, but still no access.
I have tried the same on Google Cloud Platform, but no progress.
Any idea?
Upon checking that the first step (testing your container image locally) is already covered, you just need to ensure the ports are mapped correctly and opened, so that connections can flow from outside to your container. We were able to reproduce the issue on GCP, using an 'nginx' image which by default opens port 80/tcp; the port was mapped using port 8081 (as yours).
Here is the command we used:
docker run --name nginx-new -d -p 8081:80 nginx
Meaning that 80 is my container's port and 8081 is the port mapped on the host VM in GCP.
In a firewall rule we opened port 8081, which is the one opened on the host to receive connections and forward them to the container's port 80.
Basically, outside connections will flow like this:
Browser: http://host-ip:8081 >> GCP project firewall >> instance port 8081 >> container port 80 >> successful connection!
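A hedged sketch of creating that firewall rule with the gcloud CLI (the rule name is hypothetical; narrow the source range outside of testing):
$ gcloud compute firewall-rules create allow-8081 \
    --direction=INGRESS --allow=tcp:8081 \
    --source-ranges=0.0.0.0/0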
Troubleshooting (please refer to the attached images for reference):
Checked the ports opened on my container (container-troubleshoot.png)
Tested through the container port and IP (image1)
Checked the ports opened on my VM (VM-ports.png)
Tested through the VM port using the instance internal IP (image2)
Tested through the VM port using the instance external IP (image3)
Tested in a browser using the instance external IP (image4)
It would be useful to know your error message, but I suggest following the steps above to validate that the ports used are mapped and opened in both the container and the VM instance.
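Hedged command-line equivalents of those checks (container name from the example above; ss may need to be installed inside the image):
$ docker exec nginx-new ss -tln          # ports listening inside the container
$ sudo ss -tln | grep 8081               # ports listening on the VM
$ curl -s http://localhost:8081/         # local test on the VM before testing externally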

SSH to a Docker container hosted on EC2

I want to run a Docker container on EC2, and I also need to SSH into the container for debugging purposes. I have two ports open for SSH, 22 and 8022, on my EC2 instance (security group applied). The problem is that when I try to bind port 22 of my Docker container to port 8022, it says the address is already in use, and the address is used by the sshd program. If I kill that process, I can't SSH into the instance from my localhost. How can I overcome this deadlock?
As mentioned in the comments, you don't need to start ssh inside the container in order to get into the container. After you SSH into the EC2 instance, you can go inside the container with the docker exec command by running:
docker exec -it <container-name> bash
If you still want to SSH into the container directly, then you need to do the following:
Start the container and map port 22 inside to a free port outside:
docker run -p 2222:22 ...
After starting the container, exec into it, install ssh if it is not yet installed, and start the ssh service with something like systemctl start sshd.
SSH into the container using the EC2 instance IP and the mapped port:
ssh <container-user>@<ec2-instance-ip> -p 2222
This connects to the EC2 instance and redirects you into the container due to the port mapping.
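A consolidated sketch of the above, assuming a Debian/Ubuntu-based image (the image and container names are hypothetical):
$ docker run -d --name webapp-ssh -p 2222:22 webapp_image
$ docker exec -it webapp-ssh bash
# inside the container:
apt-get update && apt-get install -y openssh-server
service ssh start        # or systemctl start sshd on images that run systemd
# then, from your local machine:
$ ssh <container-user>@<ec2-instance-ip> -p 2222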

Can't access port 8080 on AWS EC2

I just started a new AWS EC2 instance. In the instance's security group I added a new rule to open port 8080 as well as port 80.
I created a Docker image and container that runs an Apache server, as per the AWS tutorial.
When I run docker run -p 80:80 hello-world (where hello-world is the apache container image), everything works fine and I can access the server from the public network (using a web browser, or a curl command).
However, when I run docker run -p 8080:80 hello-world and I try to send a GET request (web browser, or curl) I get a connection timeout.
If I log in to the host that is running the Docker container, the curl command works fine. This tells me that port 8080 isn't really open to the public network and something is blocking it. What could that be?
I tried to reproduce the issue and wasn't able to (it worked for me), so here are things you should check:
1) Check that the security group has indeed opened ports 80 and 8080 to your IP (or 0.0.0.0/0 if this is just a test, to confirm that this is not a firewall issue).
2) Check that the container is running:
docker ps -a
You should see 0.0.0.0:8080->80/tcp under PORTS.
3) Check that when you send the GET request, you specify port 8080 in the request, so your browser address should look something like:
http://your.ip:8080
or curl:
curl http://your.ip:8080
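A hedged alternative for step 2: docker port prints just the mappings of a container (the name is a placeholder):
$ docker port <container-name>
80/tcp -> 0.0.0.0:8080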
Warning: just for testing.
For testing, opening up the security group can solve the problem:
Security Groups > Inbound > Edit inbound rules > Add new rule > All TCP

Docker Container/AWS EC2 Public DNS Refusing to Connect

I am unable to connect to my EC2 instance via its public DNS in a browser, even though for the security groups "default" and "launch-wizard-1" port 80 is open for inbound and outbound traffic.
It may be important to note that I have a Docker image running in the instance, one I launched with:
docker run -d -p 80:80 elasticsearch
I'm under the impression this forwards port 80 of the container to port 80 of the EC2 instance, correct?
The problem was that Elasticsearch serves HTTP over port 9200.
So the correct command was:
docker run -d -p 80:9200 elasticsearch
The command was run under root.
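A quick hedged check, run on the EC2 host once the container is up; Elasticsearch answers with a JSON banner on its HTTP port, so the mapped port should return it too:
$ curl -s http://localhost:80/    # should return Elasticsearch's JSON info document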