I am unable to connect my Docker worker to the Docker Swarm manager.
I have created multiple AWS EC2 instances, made one of them a manager with docker swarm init --listen-addr 0.0.0.0:2377, and am trying to join the other EC2 instances as workers with docker swarm join 0.0.0.0:2377. But it gives me an error:
"Error response from daemon: Timeout was reached before node joined`.
The attempt to join the swarm will continue in the background".
I need docker node ls on my Docker Swarm manager to list all the nodes, including the manager and the workers.
To resolve this problem I needed to open the respective ports on both the Docker worker and Docker manager instances. Here is some information I discovered while resolving this question:
TCP port 2377 is the default port used for swarm management communication, so add a custom TCP rule for port 2377 in the security group of your AWS EC2 instances.
TCP port 2376 for secure Docker client communication. This port is required for Docker Machine to work. Docker Machine is used to orchestrate Docker hosts.
TCP port 2377 is used for communication between the nodes of a Docker Swarm or cluster. It only needs to be opened on manager nodes.
TCP and UDP port 7946 for communication among nodes (container network discovery).
UDP port 4789 for overlay network traffic (container ingress networking).
Note: aside from those ports, port 22 (for SSH traffic) and any other ports needed for specific services to run on the cluster have to be open.
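If you manage the security group with the AWS CLI, the rules could be added roughly as follows. This is just a sketch: the security group ID and the CIDR range are placeholders for your own values.
# swarm management traffic (manager nodes)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 2377 --cidr 172.31.0.0/16
# container network discovery among nodes
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 7946 --cidr 172.31.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 7946 --cidr 172.31.0.0/16
# overlay network traffic
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 4789 --cidr 172.31.0.0/16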
You need to use a real IP address in the docker swarm join command.
0.0.0.0 is not a real IP address; it is an alias for "all (local) IP addresses", not something you can connect to.
1. Run this command on the manager node:
docker swarm join-token worker
2. Then run the command obtained from the step above on each worker node.
Example:
root@ubuntu:~# docker swarm join-token worker
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-0akniaryx9xg8mmb08rbd42kwntigfkyk33vt7ac0wrehn58mk-5voo7jfl3kl40yl4cmvf16lgt 10.0.10.4:2377
root@ubuntu:~#
Run this on the worker node:
docker swarm join --token SWMTKN-1-0akniaryx9xg8mmb08rbd42kwntigfkyk33vt7ac0wrehn58mk-5voo7jfl3kl40yl4cmvf16lgt 10.0.10.4:2377
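Once each worker has joined, you can verify from the manager:
# run on the manager; it should list the manager plus every worker,
# each with STATUS Ready and the manager row marked as Leader
docker node ls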
I have created a VM in AWS and assigned it a security group with ports 8080-8089 open.
Inside my VM I am running a dockerized server, mapping VM port 8081 to container port 8080, using:
docker run --name mynameddocker -d -p 0.0.0.0:8081:8080 webapp
Now, inside my VM I can access localhost:8081 using a web browser, but the issue is accessing it from outside the VM.
My assumption was that I could access it using AWS_Instance_Public_IP:8081, but nothing worked. I have a security rule that opens all TCP ports, but still no access.
I have tried the same on Google Cloud Platform, but no progress.
Any ideas?
Given that the first step (testing your container image locally) is already covered, you just need to make sure the ports are mapped correctly and open, so connections can flow from outside to your container. We were able to reproduce the issue on GCP, using an nginx image which by default listens on port 80/tcp, mapped to port 8081 (as in your case).
1. Here is the command we used:
docker run --name nginx-new -d -p 8081:80 nginx
Meaning that 80 is the container's port and 8081 is the port mapped on the host VM in GCP.
In a firewall rule we opened port 8081, which is the port open on the host to receive connections and map them to the container's port 80.
Basically, outside connections will flow like this:
Browser: http://host-ip:8081 >> GCP project firewall >> instance port 8081 >> container port 80 >> successful connection!
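If you use the gcloud CLI, the firewall rule could be created along these lines (the rule name is illustrative):
# allow inbound TCP traffic on the host port mapped to the container
gcloud compute firewall-rules create allow-tcp-8081 --allow tcp:8081 --source-ranges 0.0.0.0/0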
Troubleshooting (please refer to the attached images for a better reference):
Checked the ports open on the container (container-troubleshoot.png)
Tested through the container port and IP (image1)
Checked the ports open on the VM (VM-ports.png)
Tested through the VM port using the instance internal IP (image2)
Tested through the VM port using the instance external IP (image3)
Tested in a browser using the instance external IP (image4)
It would be useful to know your error message, but I suggest following the steps above to validate that the ports used are mapped and open both in the container and in the VM instance.
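As a quick sketch of those checks from inside the VM (container name taken from the example above):
# show how Docker published the container's port; expect 80/tcp -> 0.0.0.0:8081
docker port nginx-new
# test the mapped port locally before testing from outside
curl http://localhost:8081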
I have created an EMR cluster (5.23.0) with JupyterHub and created an SSH tunnel to port 9443 on the master node. However, I am not able to connect to JupyterHub; the page does not resolve. Any ideas what is missing?
I assume you have your security groups configured correctly. Double check them just to be sure.
As for the JupyterHub, have you checked that the JupyterHub docker container is running?
If you SSH into the master node and run:
sudo docker ps
You will be given a list of running docker containers. If the list is empty, try starting the container manually:
sudo docker start jupyterhub
The web interface should then be available at port 9443 on your EMR master node.
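For reference, a typical tunnel looks like this (the key path and master node DNS are placeholders for your own values):
# forward local port 9443 to JupyterHub on the EMR master node
ssh -i ~/mykey.pem -N -L 9443:localhost:9443 hadoop@ec2-xx-xx-xx-xx.compute.amazonaws.com
With the tunnel open, browse to https://localhost:9443.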
What's the best way to access Memorystore from local machines during development? Is there something like the Cloud SQL Proxy that I can use to set up a tunnel?
You can spin up a Compute Engine instance and use port forwarding to connect to your Redis machine.
For example if your Redis machine has internal IP address 10.0.0.3 you'd do:
gcloud compute instances create redis-forwarder --machine-type=f1-micro
gcloud compute ssh redis-forwarder -- -N -L 6379:10.0.0.3:6379
As long as you keep the SSH tunnel open, you can connect to localhost:6379.
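For example, with the tunnel open (assuming redis-cli is installed locally):
# should reply PONG if the tunnel reaches the Memorystore instance
redis-cli -h 127.0.0.1 -p 6379 ping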
Update: this is now officially documented:
https://cloud.google.com/memorystore/docs/redis/connecting-redis-instance#connecting_from_a_local_machine_with_port_forwarding
I created a VM on Google Cloud:
gcloud compute instances create redis-forwarder --machine-type=f1-micro
then SSHed into it and installed haproxy:
sudo su
apt-get install haproxy
then updated the config file
/etc/haproxy/haproxy.cfg
....existing file contents
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server [MEMORYSTORE IP]:6379 check
and restarted haproxy:
/etc/init.d/haproxy restart
I was then able to connect to memory store from my local machine for development
You can spin up a Compute Engine instance and set up haproxy using the official haproxy Docker image; haproxy will then forward your TCP requests to Memorystore.
For example, I want to access a Memorystore instance with IP 10.0.0.12, so I added the following haproxy config:
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server 10.0.0.12:6379 check
Now you can access Memorystore from your local machine using the following command:
redis-cli -h <your-haproxy-public-ipaddress> -p 6379
Note: replace <your-haproxy-public-ipaddress> with your actual haproxy public IP address.
Hope that helps you solve your problem.
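For completeness, a minimal sketch of running the official haproxy image with that config (the container name and the host config path are placeholders):
# the official image reads its config from /usr/local/etc/haproxy/haproxy.cfg
docker run -d --name redis-haproxy -p 6379:6379 \
  -v /path/to/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro haproxy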
This post builds on earlier ones and should help you bypass firewall issues.
Create a virtual machine in the same region (and zone, to be safe) as your Memorystore instance. On this machine:
Add a network tag with which we will create a firewall rule to allow traffic on port 6379
Add an external IP with which you will access this VM
SSH into this machine and install haproxy:
sudo su
apt-get install haproxy
Add the following below the existing config in the /etc/haproxy/haproxy.cfg file:
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server [MEMORYSTORE IP]:6379 check
Restart haproxy:
/etc/init.d/haproxy restart
Now create a firewall rule that allows traffic on port 6379 on the VM. Ensure:
It has the same target tag as the network tag we created on the VM.
It allows traffic on port 6379 for the TCP protocol.
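For example, with the gcloud CLI (the rule name and tag are illustrative; in practice, tighten --source-ranges to your own IP):
gcloud compute firewall-rules create allow-redis-6379 --allow tcp:6379 --target-tags redis-forwarder --source-ranges 0.0.0.0/0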
Now you should be able to connect remotely like so:
redis-cli -h [VM IP] -p 6379
Memorystore does not allow connections from local machines, and other routes such as Compute Engine or GAE are expensive, especially if your project is small or still in development. I suggest you create a Cloud Function to access Memorystore; it is a serverless service, which means a lower fee to execute. I wrote a small tool for this, and the result is similar to running on a local machine. You can check whether it helps you.
As @Christiaan answered above, it almost worked for me, but I needed to check a few other things to make it work.
Firstly, in my case my Redis instance runs in a specific network other than the default one, so I had to create the jump box inside the same network (let's call it my-network).
Secondly, I needed a firewall rule to open port 22 in that network.
Putting all the needed commands together, it looks like this:
gcloud compute firewall-rules create default-allow-ssh --project=my-project --network my-network --allow tcp:22 --source-ranges 0.0.0.0/0
gcloud compute instances create jump-box --machine-type=f1-micro --project my-project --zone europe-west1-b --network my-network
gcloud compute ssh jump-box --project my-project --zone europe-west1-b -- -N -L 6379:10.177.174.179:6379
Then I had access to Redis locally on port 6379.
The problem is that after pushing the service's Docker image to the Amazon container registry, the image does not run after adding it as a task.
See (https://aws.amazon.com/getting-started/tutorials/deploy-docker-containers/)
Step 1: Push to the AWS Container Service private image registry:
docker push 734122228327.dkr.ecr.us-east-2.amazonaws.com/joethecoder2:latest
Step 2: SSH into the running Docker instance:
ssh -i "containerservice.pem" ec2-user#ec2-18-217-248-112.us-east-2.compute.amazonaws.com
The authenticity of host 'ec2-18-217-248-112.us-east-2.compute.amazonaws.com (18.217.248.112)' can't be established.
ECDSA key fingerprint is SHA256:wCeAUed36nKeQjEbSDsYjzq8Z5mpNY4pbcahw2mSozs.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-18-217-248-112.us-east-2.compute.amazonaws.com,18.217.248.112' (ECDSA) to the list of known hosts.
(Amazon ECS-Optimized Amazon Linux AMI 2017.09.d login banner)
For documentation visit, http://aws.amazon.com/documentation/ecs
docker ps shows the running instances:
[ec2-user@ip-10-0-0-102 ~]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c90a2116f3ab amazon/amazon-ecs-agent:latest "/agent" About an hour ago Up About an hour ecs-agent
Result: this does not show that the joethecoder2 image is running. Why?
[ec2-user@ip-10-0-0-102 ~]$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
amazon/amazon-ecs-agent latest 2d99efccdfef 3 weeks ago 26.8MB
amazon/amazon-ecs-pause 0.1.0 c846030090b6 3 weeks ago 964kB
[ec2-user@ip-10-0-0-102 ~]$
Problem conclusion:
The Docker image that was pushed was not included in the running container service, even though the task was added following the example instructions for deploying Docker containers (I configured the task in steps 2 and 3, and then set up the cluster in step 4). See (https://aws.amazon.com/getting-started/tutorials/deploy-docker-containers/)
Test data:
When I try to curl the service, it does not connect:
curl ec2-18-217-248-112.us-east-2.compute.amazonaws.com:8080
curl: (7) Failed to connect to ec2-18-217-248-112.us-east-2.compute.amazonaws.com port 8080: Connection refused
Further inspection:
It shows that the Docker container that should be running for joethecoder2 is not running in the Docker instance on the container service node ec2-18-217-248-112.us-east-2.compute.amazonaws.com.
Run Task had to be clicked after setting up the cluster, to associate the task with the cluster. Once the task is running, port 8080 opens up successfully for the task. Host and container were both mapped to port 8080.
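The same step can also be done with the AWS CLI, roughly like this (the cluster name is a placeholder, and the task definition name is assumed to match the image name from the example above):
# launch one instance of the task on the cluster
aws ecs run-task --cluster default --task-definition joethecoder2 --count 1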
I am unable to connect to my EC2 instance via its public DNS in a browser, even though for the security groups "default" and "launch-wizard-1" port 80 is open for inbound and outbound traffic.
It may be important to note that I have a Docker image running in the instance, one I launched with:
docker run -d -p 80:80 elasticsearch
I'm under the impression this forwards port 80 of the container to port 80 of the EC2 instance, correct?
The problem was that Elasticsearch serves HTTP on port 9200. So the correct command was:
docker run -d -p 80:9200 elasticsearch
The command was run as root.
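With that mapping in place, a quick check from outside the instance (substitute your instance's actual public DNS) would be:
# should return Elasticsearch's JSON status banner over port 80
curl http://<your-ec2-public-dns>/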