Is docker swarm a container aware load balancer? - amazon-web-services

A 6-node Docker swarm (cluster): 3 managers, 3 workers.
After running the command below:
docker service create --name psight -p 8080:8080 --replicas 5 <image>
we see that mgr3 does not run a task (shown below):
$ docker service ps psight
ID NAME IMAGE NODE DESIRED_STATE CURRENT_STATE ERROR PORTS
yoj psight.1 image wrk2 Running Running 19 minutes ago
sjb psight.2 image wrk3 Running Running 19 minutes ago
vv6 psight.3 image mgr1 Running Running 19 minutes ago
scf psight.4 image mgr2 Running Running 19 minutes ago
7i2 psight.5 image wrk1 Running Running 19 minutes ago
But can the service still be reached via mgr3, given the actual state above?

As long as mgr3 is reachable as a manager (see Monitor swarm health), it should be able to perform the usual tasks of a manager.
If your instances are exposed to the wide area network with a public IP, with SSH open to the world (e.g. 0.0.0.0/0, ::/0), and you have your SSH key, then you should be able to connect to the instance.
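As for the title question: with the default ingress routing mesh, publishing -p 8080:8080 makes the port answer on every swarm node, whether or not that node runs a task, so the service should still be reachable through mgr3 (assuming the default publish mode and a healthy ingress network). A quick check, where <mgr3-ip> is a placeholder for mgr3's address:
curl http://<mgr3-ip>:8080/
# the routing mesh forwards the request to one of the five replicas on the other nodes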

Related

Nginx inside docker container not responding

My Dockerfile is:
FROM nginx
I start a container on AWS with docker run -d --name ng_ex -p 8082:80 nginx and see:
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6489cbb430b9 nginx "nginx -g 'daemon of…" 22 minutes ago Up 22 minutes 0.0.0.0:8082->80/tcp ng_ex
And inside a container:
service nginx status
[ ok ] nginx is running.
But when I try to send a request through the browser to my.ip.address:8082, I get a timeout error instead of the Nginx welcome page. What is my mistake and how do I fix it?
If you're on a VM on AWS, you must set up your security group to allow connections on port 8082, either from the whole internet or only from your IP/proxy IP (the timeout most likely comes from this).
Then my.ip.address:8082 should work.
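For reference, the same inbound rule can be added with the AWS CLI (a sketch; the security group ID here is a made-up placeholder):
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 8082 \
    --cidr 0.0.0.0/0
# 0.0.0.0/0 opens the port to the whole internet; narrow the CIDR for production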
If you're inside your VM, get the container IP:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
Then curl <container IP>:80 (note: the container itself listens on 80; 8082 is only the host-side port).
If it's still not working, confirm that the image was built with EXPOSE 80.

Hyperledger Fabric: Peer nodes fail to restart with byfn script when machine is shut down while network is running

I have a Hyperledger Fabric network running on a single AWS instance using the default byfn script.
ERROR: The orderer, cli, and CA Docker containers show "Up" status, but the peers show "Exited" status.
The error occurs when:
The byfn network is running and the machine is rebooted (not under my control, due to some external reason).
The network is left running overnight without shutting the machine down; it shows the same status the next morning.
Error shown:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b0523a7b1730 hyperledger/fabric-tools:latest "/bin/bash" 23 seconds ago Up 21 seconds cli
bfab227eb4df hyperledger/fabric-peer:latest "peer node start" 28 seconds ago Exited (2) 23 seconds ago peer1.org1.example.com
6fd7e818fab3 hyperledger/fabric-peer:latest "peer node start" 28 seconds ago Exited (2) 19 seconds ago peer1.org2.example.com
1287b6d93a23 hyperledger/fabric-peer:latest "peer node start" 28 seconds ago Exited (2) 22 seconds ago peer0.org2.example.com
2684fc905258 hyperledger/fabric-orderer:latest "orderer" 28 seconds ago Up 26 seconds 0.0.0.0:7050->7050/tcp orderer.example.com
93d33b51d352 hyperledger/fabric-peer:latest "peer node start" 28 seconds ago Exited (2) 25 seconds ago peer0.org1.example.com
Attaching docker log: https://hastebin.com/ahuyihubup.cs
Only the peers fail to start up.
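(For anyone reproducing this, per-container output like the linked log can be pulled with docker logs, using a container name from the docker ps -a listing above:)
docker logs peer0.org1.example.com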
Steps I have tried to solve the issue:
docker start $(docker ps -aq), or starting individual peers manually.
byfn down, generate, and then up again; shows the same result as above.
Rolled back to previous versions of the Fabric binaries; same result on 1.1, 1.2 and 1.4. With the older binaries the error does not recur if the network is left running overnight, but it does when the machine is restarted.
Used older docker images such as 1.1 and 1.2.
Tried starting up only one peer, orderer and cli.
Changed network name and domain name.
Uninstalled docker, docker-compose and reinstalled.
Changed port numbers of all nodes.
Tried restarting without mounting any volumes.
The only thing that works is reformatting the AWS instance and reinstalling everything from scratch. Also, I am NOT using the AWS blockchain template.
Any help would be appreciated. I have been stuck on this issue for a month now.
The error was resolved by adding the following lines to peer-base.yaml:
GODEBUG=netdns=go
dns_search: .
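For context, here is roughly where those lines land in the stock byfn base/peer-base.yaml (a sketch showing only the additions; the rest of the peer-base service definition is unchanged):
services:
  peer-base:
    environment:
      - GODEBUG=netdns=go  # force Go's built-in DNS resolver
    dns_search: .  # keep resolv.conf search domains from breaking in-network lookups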
Thanks to @gari-singh for the answer:
https://stackoverflow.com/a/49649678/5248781

ElasticBeanstalk Docker, one Container or multiple Containers?

We are working on a new REST API that will be deployed on AWS ElasticBeanstalk using Docker. It uses Python Celery for scheduled jobs, which means separate worker processes need to run; our current Docker configuration has three containers...
Multicontainer Docker:
09c3182122f7 sso "gunicorn --reload --" 18 hours ago Up 26 seconds sso-api
f627c5391ee8 sso "celery -A sso worker" 18 hours ago Up 27 seconds sso-worker
f627c5391ee8 sso "celery beat -A sso -" 18 hours ago Up 27 seconds sso-beat
Conventional wisdom would suggest we should use a Multi-container configuration on ElasticBeanstalk but since all containers use the same code, using a single container configuration with Supervisord to manage processes might be more efficient and simpler from an OPS point of view.
Single Container w/ Supervisord:
[program:api]
command=gunicorn --reload --bind 0.0.0.0:80 --pythonpath '/var/sso' sso.wsgi:application
directory=/var/sso
[program:worker]
command=celery -A sso worker -l info
directory=/var/sso
numprocs=2
; supervisord requires process_name to include process_num whenever numprocs > 1
process_name=%(program_name)s_%(process_num)02d
[program:beat]
command=celery beat -A sso -S djcelery.schedulers.DatabaseScheduler
directory=/var/sso
When setting up a multi-container configuration on AWS, memory is allocated to each container explicitly. My thinking is that it's more efficient to let the container OS handle memory allocation internally than to set it explicitly for each container, but I do not know enough about how Multicontainer Docker runs under the hood on ElasticBeanstalk to intelligently recommend one way or the other.
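For reference, the per-container memory split in a Multicontainer Docker environment lives in Dockerrun.aws.json (version 2); a sketch with made-up memory values, showing two of the three containers (ElasticBeanstalk requires memory or memoryReservation on each container definition):
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "sso-api",
      "image": "sso",
      "essential": true,
      "memory": 512,
      "portMappings": [{ "hostPort": 80, "containerPort": 80 }]
    },
    {
      "name": "sso-worker",
      "image": "sso",
      "essential": false,
      "memory": 256,
      "command": ["celery", "-A", "sso", "worker", "-l", "info"]
    }
  ]
}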
What is the optimal configuration for this situation?

Docker - Can't access docker port from outside

So I created a new EC2 instance and installed Docker on it.
I deployed code from https://github.com/commonsearch/cosr-front/blob/master/INSTALL.md and followed the install instructions.
The install was successful and I started the server:
[ec2-user@ip-172-30-0-127 cosr-front]$ make docker_devserver
docker run -e DOCKER_HOST --rm -v "/home/ec2-user/cosr-front:/go/src/github.com/commonsearch/cosr-front:rw" -w /go/src/github.com/commonsearch/cosr-front -p 9700:9700 -i -t commonsearch/local-front make devserver
mkdir -p build
go build -o build/cosr-front.bin ./server
GODEBUG=gctrace=1 COSR_DEBUG=1 ./build/cosr-front.bin
2016/05/28 16:32:38 Using Docker host IP: 172.17.0.1
2016/05/28 16:32:38 Server listening on 127.0.0.1:9700 - You should open http://127.0.0.1:9700 in your browser!
Well, now when I want to access it from outside, I can't! I can't even curl the server locally.
When I run docker ps, it shows the correct port forwarding:
[ec2-user@ip-172-30-0-127 ~]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1a9f77e1eeb1 commonsearch/local-front "make devserver" 4 minutes ago Up 4 minutes 0.0.0.0:9700->9700/tcp stoic_hopper
9ff00fe3e70d commonsearch/local-elasticsearch-devindex "/docker-entrypoint.s" 4 minutes ago Up 4 minutes 0.0.0.0:39200->9200/tcp, 0.0.0.0:39300->9300/tcp kickass_wilson
These are my docker images:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> 3e205118cd3f 17 minutes ago 853.3 MB
<none> <none> 1d233da1fa59 2 hours ago 955.7 MB
debian jessie ce58426c830c 4 days ago 125.1 MB
commonsearch/local-front latest 30de7ab48d43 7 weeks ago 1.024 GB
commonsearch/local-elasticsearch-devindex latest b1156ada5a24 11 weeks ago 383.2 MB
commonsearch/local-elasticsearch latest 808e72f49b4a 3 months ago 355.2 MB
I have tried disabling IPv6 and all kinds of things Google offered me, but without success.
Any ideas?
EDIT:
Also, if I enter the frontend's Docker container (using docker exec), then I CAN ping and curl the frontend.
But I can't from the outside (neither over SSH on the instance, nor from my home PC using a browser).
Also my docker version:
Client:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5/1.9.1
Built:
OS/Arch: linux/amd64
Server:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5/1.9.1
Built:
OS/Arch: linux/amd64
I made an issue on GitHub as well, and one guy saved the day.
Here's his response:
Server listening on 127.0.0.1:9700
Your application is listening on localhost. localhost is scoped to the container itself. Thus, to be able to connect to it, you would have to be inside the container.
To fix this, you need to get your application to listen on 0.0.0.0 instead.
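A quick way to confirm what the server actually binds to is to check from inside the running container (assuming net-tools is available in the image; the container name stoic_hopper comes from the docker ps output above):
docker exec -it stoic_hopper netstat -tlnp
# 127.0.0.1:9700 means container-only; 0.0.0.0:9700 means reachable through the published port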
127.0.0.1 is the loopback address for the local (EC2) instance. I just recreated your problem following the same instructions and setting up the server in a docker container on an EC2 instance.
If you open another SSH session to your EC2 instance, you CAN curl the loopback address, which just spits out the HTML shown below.
<!DOCTYPE html><html lang="en"><head><title>Common Search</title> ... </body></html>
(The full Common Search homepage markup is trimmed here for brevity.)
However, I doubt this is what you actually want.
If you want to be able to access the hosted server from your (or any other) computer, you need to edit the security group for your EC2 instance.
From the nav bar on the left side of the AWS console, select Network & Security -> Security Groups. Select the security group that applies to your current EC2 instance (assuming you made it with the launch wizard, it will have a name like 'launch-wizard-1 created 2016-05-28T12:57:23.487-04:00'). In the lower half of the console, select the Inbound tab and add a new rule to allow TCP on port 9700 from any (or a specific range of) IP(s).
My TCP rule is set up to allow inbound traffic from ANY IP address on that port; you may want to configure it differently for security purposes.
Once the rule is set up, you should be able to access the web server at the public IP of your EC2 instance (which can be found on the Instances page of the AWS console), i.e. at <public IP>:9700.
Hope this helps!

Serving Jupyter Notebook from within docker container on AWS not working?

I've set up an Ubuntu 14.04 AWS instance. My security group has port 8888 open (tcp), and port 22 open for ssh.
I can ssh into the instance just fine, then in the instance I start a docker container:
docker run -it --name="test" -p 8888:9999 b.gcr.io/tensorflow/tensorflow:latest-devel
This container has Jupyter Notebook in it; then in the container I run jupyter notebook and I see the expected output:
[I 14:49:43.788 NotebookApp] The Jupyter Notebook is running at: http://[all ip addresses on your system]:8888/
[I 14:49:43.788 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
And if I run docker ps from another SSH connection, I see:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8414f19fcd5f b.gcr.io/tensorflow/tensorflow:latest-devel "/bin/bash" 38 minutes ago Up 23 minutes 6006/tcp, 8888/tcp, 0.0.0.0:8888->9999/tcp test
So everything seems correct, but I do not see the Jupyter notebook at http://PUBLICIP:8888.
Instead of:
docker run -it --name="test" -p 8888:9999 b.gcr.io/tensorflow/tensorflow:latest-devel
The trick was to use:
docker run -it --name="test" -p 8888:8888 b.gcr.io/tensorflow/tensorflow:latest-devel
Edit, thanks to DDW for the explanation:
"-p 8888:9999 doesn't stand for a range, it means port 9999 of your docker container is mapped to port 8888. 8888 is probably your standard notebook port, so it is logical that 8888:8888 works."
If you want to have two ports open, the command would be:
docker run -it --name="test" -p 8888:8888 -p 9999:9999 b.gcr.io/tensorflow/tensorflow:latest-devel
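Either way, you can confirm what actually got published with docker port (the name test comes from the run commands above):
docker port test
# e.g. 8888/tcp -> 0.0.0.0:8888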
In my case the solution was to add --network="host" to the docker run command.
However, this comes with some other side effects to be aware of.
You can read about them at:
https://docs.docker.com/network/host/#:~:text=If%20you%20use%20the%20host,its%20own%20IP%2Daddress%20allocated
https://docs.docker.com/network/bridge/#enable-forwarding-from-docker-containers-to-the-outside-world
I suggest this because I think it is a Docker network problem.
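For completeness, with host networking the -p flags are ignored (the container shares the host's network namespace outright), so the run command becomes something like:
docker run -it --name="test" --network="host" b.gcr.io/tensorflow/tensorflow:latest-devel
# the notebook is then reachable directly on whatever host port it binds, e.g. PUBLICIP:8888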