Call Container running on AWS EC2

I have a Linux EC2 instance with Docker running on it.
Docker is running 3 services: two are workers and one is an API.
I want to be able to call that API from outside the system but I am getting "This site can't be reached".
The service is running, and the call is a simple ping which works locally through VS, so I don't believe that is the issue.
My security group has all traffic allowed with 0.0.0.0/0.
I have attempted the following URLs with no luck:
http://ec2-{public-ip}.ap-southeast-2.compute.amazonaws.com/ping
http://{public-ip}/ping
http://{public-ip}/172.17.0.2/ping (the container's IP address)

Based on the comments:
EXPOSE does not actually "expose" a port:
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
Thus, to publish port 80 from your container to the instance, you have to use the -p 80:80 option.
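For example (a minimal sketch; my-api-image stands in for whatever your API image is called):

    # Publish container port 80 on host port 80 so the API is reachable
    # through the instance's public IP or public DNS name.
    docker run -d -p 80:80 my-api-image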

Related

Access AWS SSM Port inside Docker container

I am trying to access some AWS resources from inside a Docker container. To that end, I have a port-forwarding SSM session running on the host, and everything works fine when I access the resources via localhost:<port>.
However, inside a Docker container I cannot access these same resources via 172.17.0.1:<port>. Host communication per se seems to work just fine, as I can communicate with a local web server via 172.17.0.1:8000. Only the combination of SSM and Docker seems to be a problem.
nmap inside the container also shows the port as closed.
Is there any way to get the combination of SSM and Docker up and running?
I suspect that what is happening is that AWS SSM is port forwarding to localhost and is bound to the loopback adapter.
When I run AWS SSM port forwarding, I can reach the port via localhost but not via my machine's IP:port.
So when Docker tries to access the port via its own NATted IP, it is unable to connect.
I have the same issue with minikube: since my ports are only reachable via localhost on my system, minikube is unable to access my local ports.
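One workaround worth trying (my own suggestion, not from the thread; port 9999 is an assumed example) is to relay the loopback-only tunnel onto the docker0 gateway address so containers can reach it:

    # Assumes the SSM session forwards to 127.0.0.1:9999 on the host.
    # socat listens on 172.17.0.1:9999 (the host as seen from bridge-mode
    # containers) and relays each connection to the loopback-bound tunnel.
    socat TCP-LISTEN:9999,bind=172.17.0.1,fork,reuseaddr TCP:127.0.0.1:9999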
If I understand correctly, you can connect to a web server from your container host, but you cannot reach it from inside the Docker container itself?
If this is what you meant, it could be related to the fact that containers have a different network interface from the host and thus may be covered by different security group rules. If the receiving server's security group is configured to allow traffic from the host, but not from the containers running on that host, that would be a possible explanation for what you experienced.
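If that turns out to be the cause, a quick way to test it (a sketch with hypothetical group IDs and port) is to allow the receiving server's security group to accept traffic from the group the container traffic actually originates from:

    # Hypothetical IDs: the first group belongs to the receiving server,
    # the second to the instance hosting the containers.
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0aaa1bbb2ccc3ddd4 \
      --protocol tcp \
      --port 8000 \
      --source-group sg-4ddd3ccc2bbb1aaa0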

Dynamic port setting for my dockerized microservice

I want to deploy several instances of my microservice, which uses a certain port, but make it scalable and not fix the port in the task definition / Dockerfile. My microservice can listen on a port provided in an environment variable or on the command line.
At the moment all microservices are described in AWS ECS task definitions and have static port assignment.
Every microservice registers itself with Eureka server and now I can run multiple service instances only on different EC2 instances.
I want to be able to run several containers on the same EC2 instance, but each new service instance should get some free port to listen on.
What is the standard way of implementing this?
Just set the host port to 0 in the task definition:
If using containers in a task with the EC2 launch type, you can specify a non-reserved host port for your container port mapping (this is referred to as static host port mapping), or you can omit the hostPort (or set it to 0) while specifying a container port and your container automatically receives a port (this is referred to as dynamic host port mapping) in the ephemeral port range for your container instance operating system and Docker version.
The default ephemeral port range is 49153–65535, and this range is used for Docker versions before 1.6.0. For Docker version 1.6.0 and later, the Docker daemon tries to read the ephemeral port range from /proc/sys/net/ipv4/ip_local_port_range (which is 32768–61000 on the latest Amazon ECS-optimized AMI).
So you will need an Application Load Balancer in that case to route traffic to the dynamic ports.
See the article dynamic-port-mapping-in-ecs-with-application-load-balancer for a walkthrough.
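A minimal sketch of the relevant fragment of such a task definition (the container port 8080 is a placeholder for whatever your microservice listens on):

    "portMappings": [
        {
            "containerPort": 8080,
            "hostPort": 0,
            "protocol": "tcp"
        }
    ]

With hostPort set to 0, Docker assigns a free ephemeral port on the instance, and the Application Load Balancer's target group tracks that port when the ECS service is registered with it.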

Container instance network

I am having trouble connecting one ECS container (www, python) to another container (redis).
I am getting a "connecting to 0.0.0.0:6379. Connection refused" error from the www container.
Both containers are running on the same host and were created using two task definitions, each containing one Docker image.
Both use bridge networking mode. Each task is executed by means of a service.
I also set up service discovery for both services.
Things I did and tried:
- Assured that Redis is bound to 0.0.0.0 and not 127.0.0.1
- Added port mappings for the www (80) and redis (6379) containers
- SSH'ed into the EC2 instance to confirm the port mappings are OK; I can telnet to both port 80 and port 6379
- Connected to the www container and tested from the Python console whether 0.0.0.0:6379 was reachable. It wasn't. I also tried the redis container's Docker IP address (172.17.0.3) and the .local service discovery name of the redis container, without luck; the service discovery name did not resolve
- Resolved the service discovery name from the EC2 host (using dig): that did work but returned a 10.0.* address
I am a bit out of options as to why this is the case. Obviously things do work on a local development machine.
Update 10/5: I changed the container networking to type "host", which appears to be working. I still don't understand why "bridge" won't work.
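One likely explanation for the bridge-mode failure (my reading, not confirmed in the thread): 0.0.0.0 is a bind address, not a destination, and in bridge mode each container has its own network namespace. Since the redis port is mapped onto the host, the www container should instead reach it through the docker0 gateway address:

    # Run from a shell inside the www container (requires redis-cli;
    # nc or telnet work too). 172.17.0.1 is the default docker0 gateway,
    # i.e. the EC2 host as seen from a bridge-mode container.
    redis-cli -h 172.17.0.1 -p 6379 ping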

Full path of a docker image file required in NGINX repository

I am new to Docker and working with AWS. I am supposed to create a container and add it to an ECS cluster. It is asking for 2 parameters:
- IMAGE: this should have the format repository-url/image:tag. I am not able to work out the full path of the image within the NGINX repository. Please suggest a very simple image so that running it as a task on an EC2 container instance is easy.
- PORT MAPPINGS and CONTAINER PORT: I am confused about which port to give. Is it 80? Regarding HOST, I can give the public IPv4 address of the 4 EC2 container instances present within the cluster.
See "Couchbase Docker Container on Amazon ECS" as an example:
In ECS, Docker workloads are defined as tasks. A task can contain multiple containers. All containers for a task are co-located on the same machine.
...
And finally the port mappings (-p on Docker CLI). Port 8091 is needed for Couchbase administration.
The container port is certainly 80 for your NGINX, and you can map it to any host port you want (typically 80).
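For a very simple image (one option among many, not mandated by the thread), the official nginx image on Docker Hub needs no repository URL prefix:

    # In the ECS console, IMAGE can simply be "nginx:latest";
    # Docker Hub is the default registry. The equivalent docker run,
    # mapping container port 80 to host port 80:
    docker run -d -p 80:80 nginx:latest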

Hyperledger Multinode setup

I am trying to set up a blockchain network using 4 VMs. Each of the VMs has the fabric-peer and fabric-membersrvc Docker images, and that seems to work successfully. I have set up passwordless SSH among all VMs for a normal (non-root) user. But the Docker containers are unable to communicate with each other.
Do I need passwordless SSH for the root user among the VMs? Are there any other requirements?
The membersrvc Docker image is not required on all VMs; currently (v0.6) there can be only 1 membersrvc.
If all your peers are Docker containers, they talk to each other via their advertised address, which you can set through an environment variable when you start the peer containers:
-e "CORE_PEER_ADDRESS=<ip of docker host>:7051"
Make sure you don't use the IP of the container: since you don't have a Swarm cluster running (for overlay networking), containers cannot talk to the private IPs of containers on other hosts.
In order to get peers running in Docker to talk to each other:
- Make sure that the gRPC ports are mapped from the Docker VM to the host
- Set CORE_PEER_ADDRESS to <IP of host running docker>:<grpc port>
- Make sure you use the IP of the host for the gRPC communication addresses, such as the membersrvc address, discovery root node, etc.
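Putting both answers together, a minimal sketch of starting a peer this way (the host IP placeholder is yours to fill in; image and command are from the v0.6 era):

    # Map the gRPC port to the host and advertise the host's IP so that
    # peers on the other VMs connect to the host, not to the container.
    docker run -d -p 7051:7051 \
      -e "CORE_PEER_ADDRESS=<ip of docker host>:7051" \
      hyperledger/fabric-peer peer node start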