Docker Swarm on AWS: Expose service port not working on AWS virtual machine - amazon-web-services

I have 2 AWS virtual machine instances, running on 2 IPv4 public IPs, A.B.C.D and X.Y.Z.W.
I installed Docker on both machines and launched Docker Swarm with node A.B.C.D as manager and X.Y.Z.W as worker. When I launched the Swarm, I used A.B.C.D as the advertise-addr, like so:
docker swarm init --advertise-addr A.B.C.D
The Swarm initialized successfully.
The problem occurred when I created a service from the image jwilder/whoami and exposed the service on port 8000:
docker service create -d -p 8000:8000 jwilder/whoami
I expected to be able to access the service on port 8000 from both nodes, according to the Swarm routing mesh documentation. However, in fact I can only access the service from one node, the node the container is running on.
I also tried this experiment on Azure virtual machines and it also failed, so I guess this is a problem with Swarm on these cloud providers, maybe some networking misconfiguration.
Does anyone know how to fix this? Thanks in advance :D

One main thing that you have not mentioned is the Security Groups. You expose port 8000, but by default the Security Group only opens port 22 for SSH. Please check the Security Group and make sure you open the necessary ports.
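As a minimal sketch of what that could look like with the AWS CLI (the security group ID below is a placeholder; adjust to your setup):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8000 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 4789 --source-group sg-0123456789abcdef0
Note that Docker's documentation also lists 2377/tcp (cluster management) and 7946/tcp+udp (node discovery) as ports the nodes must be able to reach each other on; without those and 4789/udp (the overlay network), the routing mesh itself cannot forward a request to the node the container is running on.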

Related

Call Container running on AWS EC2

I have a Linux EC2 with docker running inside of it.
Docker is running 3 services. 2 are workers and one is an API.
I want to be able to call that API from outside the system but I am getting "This site can't be reached".
The service is running and the call is a simple ping which works locally through VS so I don't believe that is the issue.
My security group has all traffic allowed with 0.0.0.0/0.
I have attempted the following urls but no luck:
http://ec2-{public-ip}.ap-southeast-2.compute.amazonaws.com/ping
http://{public-ip}/ping
http://{public-ip}/172.17.0.2/ping (container's IP address)
Based on the comments.
EXPOSE does not actually "expose" a port:
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
Thus, to expose port 80 from your container to the instance, you have to use the -p 80:80 option.
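For instance, a minimal sketch (my-api stands in for whatever image the API service is built from):
docker run -d -p 80:80 my-api
After that, http://{public-ip}/ping should reach the container on the instance's port 80, provided the security group allows inbound traffic on that port.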

Unable to access REST service deployed in docker swarm in AWS

I used the CloudFormation template provided by Docker for AWS (setup & prerequisites) to set up a Docker swarm.
I created a REST service using Tibco BusinessWorks Container Edition and deployed it into the swarm by creating a docker service.
docker service create --name aka-swarm-demo --publish 8087:8085 akamatibco/docker_swarm_demo:part1
The service starts successfully but the CloudWatch logs show the below exception:
I have tried passing the JVM environment variable in the Dockerfile as :
ENV JAVA_OPTS= "-Dbw.rest.docApi.port=7778"
but it doesn't help.
The interesting fact is that at the end the log says:
com.tibco.thor.frwk.Application - TIBCO-THOR-FRWK-300006: Started BW Application [SFDemo:1.0]
So I tried to access the application using CURL -
curl -X GET --header 'Accept: application/json' 'URL of AWS load balancer : port which I exposed while creating the service/resource URI'
But I am getting the below message:
The REST service works fine when I do docker run.
I have checked the Security Groups of the manager and load-balancer. The load-balancer has inbound open to all traffic and for the manager I opened HTTP connections.
I am not able to figure out what I have missed. Can anyone please help?
As mentioned in Deploy services to a swarm, if you read further you will find the following:
PUBLISH A SERVICE’S PORTS DIRECTLY ON THE SWARM NODE
Using the routing mesh may not be the right choice for your application if you need to make routing decisions based on application state or you need total control of the process for routing requests to your service’s tasks. To publish a service’s port directly on the node where it is running, use the mode=host option to the --publish flag.
Note: If you publish a service’s ports directly on the swarm node using mode=host and also set published= this creates an implicit limitation that you can only run one task for that service on a given swarm node. In addition, if you use mode=host and you do not use the --mode=global flag on docker service create, it will be difficult to know which nodes are running the service in order to route work to them.
Publishing ports for services works differently than for regular containers. The problem was that the image does not expose the port after running service create --publish, and hence the swarm routing layer cannot reach the REST service. To resolve this, use mode=host.
So I used the below command to create a service:
docker service create --name tuesday --publish mode=host,target=8085,published=8087 akamatibco/docker_swarm_demo:part1
This eventually removed the exception.
Also make sure to configure the firewall settings of your load balancer so that it allows communication over the desired protocols, in order to reach the applications deployed inside the container.
In my case it was the HTTP protocol; enabling port 8087 on the load balancer served the purpose.
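If your stack came from the Docker for AWS CloudFormation template, the external load balancer it creates is a classic ELB, so a rough sketch of adding the listener with the AWS CLI (the load balancer name is a placeholder) might look like:
aws elb create-load-balancer-listeners --load-balancer-name my-swarm-elb --listeners "Protocol=HTTP,LoadBalancerPort=8087,InstanceProtocol=HTTP,InstancePort=8087"
together with a matching inbound rule on the nodes' security group.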

Full path of a docker image file required in NGINX repository

I am new to Docker and working with AWS. I am supposed to create a container and add it to an ECS cluster. It is asking for 2 parameters:
IMAGE: which should have the format repository-url/image:tag. I am not able to find the full path of the file within the NGINX repository. Please suggest a very simple file so that running it as a task on an EC2 container instance is easy.
PORT MAPPINGS and CONTAINER PORT: I am confused about what port to give. Is it 80? Regarding HOST, I can give the public IPv4 address of the 4 EC2 container instances present within the cluster.
See "Couchbase Docker Container on Amazon ECS" as an example:
In ECS, Docker workloads are defined as tasks. A task can contain multiple containers. All containers for a task are co-located on the same machine.
...
And finally the port mappings (-p on Docker CLI). Port 8091 is needed for Couchbase administration.
It is certainly 80 for your NGINX, and you can map it to any port you want (typically 80) on your host.
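For a quick test, the official NGINX image from Docker Hub is simple enough; a possible set of values (assuming the defaults of the official image, not anything specific to your cluster) would be:
Image: nginx:latest
Container port: 80, host port: 80
That is the task-definition equivalent of docker run -p 80:80 nginx:latest, making NGINX reachable on port 80 of whichever container instance the task is placed on.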

Problems trying to run a custom SFTP service on port 22 on Amazon ECS

I have built a Node.js app that implements an SFTP server; when files are put to the server, the app parses the data and loads it into a Salesforce instance via API as it arrives. The app runs in a Docker container and listens on port 9001. I'd like it to run on Amazon's EC2 Container Service, listening on the standard port 22. I can run it locally, remapping 9001 to host port 22, and it works fine. But because 22 is also used by SSH, I'm not having any luck running it on ECS. Here are the steps I've taken so far:
Created an EC2 instance using the AMI amzn-ami-2016.03.j-amazon-ecs-optimized (ami-562cf236).
Assigned the instance to a Security Group that allows port 22 (was already present).
Created an ECR registry and pushed my Docker image up to it.
Created an ECS Task Definition for the image, which contains a port mapping from host port 22 to container port 9001
Created a service for the task and associated to the Default ECS Cluster, which contains my EC2 instance.
At this point, when viewing the "Events" tab of the Service view, I see the following error:
service sfsftp_test_service was unable to place a task because no container instance met all of its requirements. The closest matching container-instance XXXX is already using a port required by your task. For more information, see the Troubleshooting section.
I assumed that this is because my Task Definition is trying to map host port 22 which is reserved for SSH, so I tried creating a new version of the Task Definition that maps 9001 to 9001. I also updated my security group to allow port 9001 access. This task was started on my instance, and I was able to connect and upload files. So at this point I know that my Node.js app and the Docker instance are correct. It's a port mapping issue.
In trying to resolve the port mapping issue, I found this stackoverflow question about running SSH on an alternate port on EC2, and used the answer there to change my sshd to run on port 9022. I also updated the Security Group to allow traffic on port 9022. This worked; I can now SSH to port 9022, and I can no longer ssh to port 22.
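(For reference, the change described there boils down to setting Port 9022 in /etc/ssh/sshd_config and restarting the daemon; on the Amazon Linux AMI mentioned above that would be something like:
echo 'Port 9022' | sudo tee -a /etc/ssh/sshd_config
sudo service sshd restart
with port 9022 also opened in the Security Group, as noted.)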
However, I'm still getting the The closest matching container-instance XXXX is already using a port required error. I also tried editing the Security Group, changing the default port 22 grant from "SSH" to "Custom TCP Rule", but that change doesn't stick; I'm also not convinced that it's anything but a quick way to pick the right port.
When I view the Container instance from the Cluster screen, I can see that 5 ports are "registered", including port 22:
According to this resolved ECS Agent github issue, those ports are "reserved by default" by the ECS Agent. I'm guessing this is why ECS refuses to start my Docker image on the EC2 Instance. So is this configurable? Can I "unreserve" port 22 to allow my Docker image to run?
Edit to add: After reviewing this ECS Agent documentation, I've opened an issue on the ECS Agent Github as well.
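For what it's worth, the ports shown as "registered" correspond to the agent's ECS_RESERVED_PORTS setting, which defaults to ["22","2375","2376","51678"]. A rough sketch of overriding it in /etc/ecs/ecs.config on the ECS-optimized AMI, assuming you really do want tasks to be able to bind host port 22 now that sshd is on 9022:
echo 'ECS_RESERVED_PORTS=["2375","2376","51678"]' | sudo tee -a /etc/ecs/ecs.config
sudo stop ecs && sudo start ecs
The agent may need to re-register the container instance before task placement stops treating port 22 as in use.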

Hyperledger Multinode setup

I am trying to set up a blockchain network using 4 VMs. Each of the VMs has the fabric-peer and fabric-membersrvc Docker images, and that seems to work successfully. I have set up passwordless SSH among all VMs for a normal user (non-root). But the Docker containers are unable to communicate with each other.
Do I require passwordless SSH for "root" users among the VMs? Are there any other requirements?
The membersrvc Docker image is not required on all VMs; currently (v0.6) there can be only 1 membersrvc.
If all your peers are Docker containers, they talk to each other via their advertised address, which you can set through an environment variable when you start the peer containers:
-e "CORE_PEER_ADDRESS=<ip of docker host>:7051"
Make sure you don't use the IP of the container: since you don't have a swarm cluster running (for overlay networking), containers cannot talk to the private IPs of containers on other hosts.
In order to get peers running in Docker to talk to each other:
Make sure that the grpc ports are mapped from the Docker VM to the host.
Set CORE_PEER_ADDRESS to <IP of host running docker>:<grpc port>.
Make sure you use the IP of the host for the grpc communication addresses, such as the membersrvc address, discovery root node, etc.
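A rough example of what that looks like when starting a peer container on one host (the image tag, peer ID and the discovery variable are illustrative for Fabric v0.6; the <...> placeholders are yours to fill in):
docker run -d -p 7051:7051 \
  -e CORE_PEER_ID=vp1 \
  -e CORE_PEER_ADDRESS=<ip of this docker host>:7051 \
  -e CORE_PEER_DISCOVERY_ROOTNODE=<ip of the root peer's host>:7051 \
  hyperledger/fabric-peer:x86_64-0.6.1-preview peer node start
Here 7051 is the peer's grpc port, published to the host so that peers on the other VMs can reach it at the host's IP.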