ssh to a Docker container hosted on EC2

I want to run a Docker container on EC2, and I also need to SSH into the container for debugging. Ports 22 and 8022 are open on my EC2 instance (via its security group). The problem is that when I try to bind port 22 of my Docker container to host port 8022, I get an "address already in use" error, and the address is held by the sshd process. If I kill that process, I can no longer SSH into the instance from my local machine. How can I get out of this deadlock?

As mentioned in the comments, you don't need to start ssh inside the container in order to go inside the container. You can use the docker exec command to go inside the container after you ssh into the EC2 instance by running:
docker exec -it <container-name> bash
If you still want to ssh into the container directly, then you need to do the following:
Start the container, mapping container port 22 to a free host port:
docker run -p 2222:22 ...
After starting the container, exec into it, install an SSH server if one is not already present, and start it (e.g. service ssh start; most container images do not run systemd, so systemctl is unlikely to work).
SSH into the container by using the EC2 instance's IP and the mapped port:
ssh <container-user>@<ec2-instance-ip> -p 2222
This will connect to the ec2 instance and redirect you into the container due to the port mapping.
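Putting the steps together, a minimal sketch (the image name, user, and IP are placeholders; assumes a Debian/Ubuntu-based image, so package and service names may differ for other bases):

```shell
# On the EC2 instance: start the container with container port 22 published on host port 2222.
docker run -d --name debug-target -p 2222:22 <image>

# Install and start an SSH server inside the container. Most images ship without one,
# and without systemd, so use the service script rather than systemctl. You may also
# need to set a password or authorized key for the user you intend to log in as.
docker exec -it debug-target bash -c \
  'apt-get update && apt-get install -y openssh-server && service ssh start'

# From your workstation: SSH to the EC2 instance's public IP on the mapped port.
ssh <container-user>@<ec2-instance-ip> -p 2222
```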

Related

Cannot connect to RDS from inside a docker container, I can from the host, and I can from a local docker container

I have an EC2 instance with Docker Compose installed, running a single container. I have the same setup replicated locally on my machine.
Running nc -v <host> 5432 (where <host> is the RDS endpoint) gives:
From my machine: success
From inside a Docker container running on my machine: success
From the EC2 instance itself: success
From inside a Docker container running on the EC2 instance: "Host is unreachable"
I'm guessing there's something I'm missing in the EC2 instance's Docker configuration; can anyone point me in the right direction?
This is the docker_boot.service file (note that Description= and After= must sit under a [Unit] header for systemd to accept the file):
[Unit]
Description=docker boot
After=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/home/ubuntu/metabase
ExecStart=sudo /usr/bin/docker-compose -f /home/ubuntu/metabase/docker-compose.yml up -d --remove-orphans
[Install]
WantedBy=multi-user.target
The reason behind this is that your Docker container is trying to resolve the RDS DNS name using a public DNS server rather than the VPC's private one.
As a quick workaround, you can nslookup your RDS DNS name, take one of the returned IPv4 addresses, and use that single IPv4 address as your host.
nslookup <ID>.rds.amazonaws.com
For a cleaner fix, point your Docker container's DNS configuration at your VPC's internal DNS IPv4 address. With --dns you can set this quickly, and you can add more DNS servers if your application needs to reach other services as well.
docker run --name app --dns=169.254.169.253 -p 80:5000 -d app
References:
https://docs.docker.com/config/containers/container-networking/
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html
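Since the question runs the container through docker-compose, the same DNS override can be expressed in the compose file instead of on the command line; a minimal sketch (the service name and image are placeholders guessed from the question's working directory, not taken from the actual compose file):

```yaml
# docker-compose.yml sketch: point the container at the Amazon-provided VPC resolver.
services:
  metabase:                    # placeholder service name
    image: metabase/metabase   # placeholder image
    dns:
      - 169.254.169.253        # Amazon-provided DNS server (link-local address)
```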

I cannot reach an EC2 instance in AWS from a browser

I wrote a very simple spring-boot application and packed it in Docker.
The content of docker file is:
FROM openjdk:13
ADD target/HelloWorld-1.0-SNAPSHOT.jar HelloWorld.jar
EXPOSE 8085
ENTRYPOINT ["java", "-jar", "HelloWorld.jar"]
I pushed it to Docker Hub.
I created a new EC2 instance on AWS, then connected to it and typed the following commands:
sudo yum update -y
sudo yum install docker -y
sudo service docker start
sudo docker run -p 80:8085 ****/docker-hello-world
The last command printed many messages indicating that the Spring Boot application is running.
Looks great. However, when I opened my browser and went to "http://ec2-54-86-87-68.compute-1.amazonaws.com/" (the EC2 machine's public DNS),
I got "This site can’t be reached".
Do you know what I did wrong?
Edit: the security groups that apply to this machine are "default" and the following group that I defined:
Inside the EC2 machine, I typed:"curl localhost:8085" and got:
"curl: (52) Empty reply from server"
Ensure that inbound traffic on the relevant port is allowed for your local IP address in your EC2 instance's security group configuration:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html#adding-security-group-rule
Have you allowed inbound traffic for port 8085 in your security group configuration? That should be the first thing to check.
I found the solution.
It was a port issue.
Instead of running
sudo docker run -p 80:8085 ****/docker-hello-world
I had to run:
sudo docker run -p 8085:8080 ****/docker-hello-world
This command says: "take the application that listens on port 8080 inside the container and publish it on port 8085 on the host".
I opened the browser and browsed to: "http://ec2-18-207-188-57.compute-1.amazonaws.com:8085/hello" and got the response I expected.
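The -p flag always reads <host-port>:<container-port>, which is what the fix above relies on; a sketch (the Docker Hub user is redacted in the question, so the image name is a placeholder):

```shell
# Publish container port 8080 (Spring Boot's default) as host port 8085.
sudo docker run -d -p 8085:8080 <dockerhub-user>/docker-hello-world

# On the instance, the app now answers on the host port...
curl http://localhost:8085/hello

# ...and from outside, on the same port of the public DNS name,
# provided the security group allows inbound TCP 8085.
```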

Unable to connect JupyterHub on EMR

I have created an EMR cluster (5.23.0) with JupyterHub and set up an SSH tunnel to port 9443 on the master node. However, I am not able to connect to JupyterHub; the page does not resolve. Any ideas what is missing?
I assume you have your security groups configured correctly. Double check them just to be sure.
As for the JupyterHub, have you checked that the JupyterHub docker container is running?
If you SSH into the master node and run:
sudo docker ps
You will be given a list of running docker containers. If the list is empty, try starting the container manually:
sudo docker start jupyterhub
The web interface should then be available at port 9443 on your EMR master node.
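For reference, the tunnel itself can be set up like this (the key file and host name are placeholders; EMR's JupyterHub serves HTTPS with a self-signed certificate):

```shell
# Forward local port 9443 to port 9443 on the EMR master node.
# "hadoop" is the default SSH user on EMR master nodes.
ssh -i <key.pem> -N -L 9443:localhost:9443 hadoop@<emr-master-public-dns>

# Then browse to https://localhost:9443 and accept the self-signed certificate.
```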

Does Docker's EXPOSE not breach the supposed sandboxing?

I am missing a piece of the puzzle. Running a docker image (say on a Linux EC2 instance) through
> sudo docker run -p 80:xyzw webapp_image:version
makes the container reachable at port 80, which means (via EXPOSE xyzw in the Dockerfile) that the container has affected its host.
Does that not contradict the premise of Docker containers? The idea, just like virtualization, appeared to be that a container runs in a sandbox and is unable to affect its host. Yet here it is able to expose itself on the host's port. Does the fact that this is doable not breach the supposed isolation? Should the mapping not be done on the host's command line, not from within the container? Suppose you have two containers on the same host, and both try to expose through the same port, then, potentially, some race would occur to see who'll get there first. Or is the idea that sandboxing is indeed observed, except that here the mapping occurs on the command line of the host?
How do I route port 80 of the EC2 instance to port 80 of the container?
If I understood the context correctly, you can do this by running the command below:
$ sudo docker run -p 80:80 webapp_image:version
This routes port 80 of the host machine to port 80 of the container.
After this you need to open port 80 in the security group of the EC2 instance, and if it still doesn't work, check the following:
1. Firewalls on the host machine (iptables, SELinux, etc.)
2. The EC2 instance's security groups in AWS.
3. NACL rules for the subnets in AWS.
Did you add the line EXPOSE 80 in your Dockerfile? If yes, then run docker run -p 80:80 webapp_image:version. The -p 80:80 option maps the exposed port 80 in the container to port 80 on the host machine (EC2).
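To the original sandboxing question: EXPOSE by itself is only metadata recorded in the image; it binds nothing on the host. A host port is claimed only when the operator running the container asks for it with -p or -P, so the mapping is indeed decided on the host's command line, and two containers can only collide on a host port if the operator maps both to the same one. A sketch of the three cases:

```shell
docker run -d webapp_image:version             # EXPOSEd port stays private to the container
docker run -d -P webapp_image:version          # -P publishes all EXPOSEd ports on random free host ports
docker run -d -p 80:xyzw webapp_image:version  # explicit host-side mapping (xyzw being the
                                               # container port from the question's Dockerfile)
```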

Docker Container/AWS EC2 Public DNS Refusing to Connect

I am unable to connect to my EC2 instance via its public DNS in a browser, even though the security groups "default" and "launch-wizard-1" allow inbound and outbound traffic on port 80.
It may be important I note that I have a docker image that is running in the instance, one I launched with:
docker run -d -p 80:80 elasticsearch
I'm under the impression this forwards port 80 of the container to port 80 of the EC2 instance, correct?
The problem was that Elasticsearch serves HTTP on port 9200, not 80.
So the correct command was:
docker run -d -p 80:9200 elasticsearch
The command was run under root.
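With the corrected mapping, the fix can be verified from the instance itself before trying the public DNS; a sketch:

```shell
# Publish Elasticsearch's HTTP port (9200 inside the container) as host port 80.
docker run -d -p 80:9200 elasticsearch

# From the EC2 instance: Elasticsearch should now answer on the host's port 80.
curl http://localhost/
```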