Connect to docker container using IP - django

I'm running the following commands:
$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
default   *        virtualbox   Running   tcp://192.168.99.100:2376           v17.06.2-ce
$ docker-machine ip default
192.168.99.100
And I have some docker containers.
One of them is running Django, another Postgres, and a third Elasticsearch.
When I'm running my django container with:
# python manage.py runserver 0:8000
And the site works when I open:
localhost:8000/
But it is not available when I use: 192.168.99.100:8000
There, I have the browser message:
This site can’t be reached
192.168.99.100 refused to connect. Try: Checking the connection
The same happens for the postgres and elasticsearch containers.
Why is my Docker container not reachable at the default docker-machine IP?

It sounds like you have not told Docker to talk to your VM, and that your containers are running on your host machine.
To resolve:
Bring down your containers
Run $ eval "$(docker-machine env default)" which sets the following environment variables:
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://x.x.x.x:2376"
export DOCKER_CERT_PATH="/Users/<yourusername>/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
This tells Docker how to talk to your VM.
Spin containers back up
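For example, a minimal session might look like this (container and image names are placeholders, not from the question):
$ docker stop django postgres elastic          # hypothetical names; these were created on the host daemon
$ eval "$(docker-machine env default)"         # point the docker CLI at the VM's daemon
$ docker info | grep Name                      # sanity check: should now report the VM, not your host
$ docker run -d -p 8000:8000 my-django-image   # re-create each container against the VM's daemon
After that, the site should answer at http://192.168.99.100:8000/ (note that runserver 0:8000 already binds all interfaces inside the container; you still need -p to publish the port).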

Related

'Could not SQLConnect' error when connecting dockerised model and dockerised database

I'm trying to connect a dockerised C++ application to a dockerised database so that I can get it running and get some outputs; the configuration can be found in this question.
When I try to run the model (which is inside the application container) against the dockerised database:
>docker run --net xxxxx-network -it xxxxxrun:localbase
root@xxxxxxxx:/run# isql xxx.x.x.x user=root
[ISQL]ERROR: Could not SQLConnect
I'm new to ODBC and Docker. Can someone give me a hint? Many thanks.
I am assuming that you're running each Docker container separately.
In that case, for your C++ application container to be able to connect to
the MySQL container, they will need to be on the same network.
Create a Docker network: docker network create mysql-network
Run the C++ application container like so: docker run -it --network mysql-network xxxxxrun:localbase (xxxxxrun should be the name of the image and localbase the image tag you want to run)
Run the MySQL database with a command similar to: docker run --network mysql-network -e MYSQL_ROOT_PASSWORD=password -d mysql:5.7
In this situation the two containers should be able to communicate freely with each other across the network.
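As a concrete sketch of that sequence (db is a hypothetical container name; on a user-defined network, containers can resolve each other by name, so no hard-coded IP is needed):
$ docker network create mysql-network
$ docker run --network mysql-network --name db -e MYSQL_ROOT_PASSWORD=password -d mysql:5.7
$ docker run --network mysql-network -it xxxxxrun:localbase
# inside the application container, point the ODBC/MySQL client at host "db", port 3306,
# instead of an IP address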

Docker: How to rely on a local db rather than a db docker container?

Overall: I don't want my django app to rely on a docker container as my db. In other words, when I run an image
docker run -p 8000:8000 -d imagename
I want this to know to connect to my local db. I have my settings.py configured to connect to a pg db, so everything is fine when I do something like
python manage.py runserver
Feel free to call out any incorrect use of terms or gaps in my overall understanding of Docker. From all the tutorials I've seen, they usually create a docker-compose file that relies on a db image spun up in a container separate from the web app. Examples of things I've gone through:
https://docs.docker.com/compose/django/#connect-the-database, http://ruddra.com/2016/08/14/docker-django-nginx-postgres/, etc.
At this point I'm extremely lost, since I don't know whether to do this in my Dockerfile, in my project's settings.py, or in docker-compose.yml (I'm guessing I shouldn't even have the latter, since it is for multi-container apps, which I'm trying to avoid).
[Aside: can one run a Django app reliant on Celery and RabbitMQ in just one container? As with my pg example, I only see instances of them all in separate containers.]
As for my Dockerfile, it's pretty much this:
FROM python:3
ENV APP 'http://githubproj'
RUN git clone $APP \
&& cd $APP/projectitself \
&& pip install -r requirements.txt
CMD cd $APP_DIR/mydjangoproject && gunicorn mydjangoproject.wsgi:application --bind 0.0.0.0:8000
In order to allow your containerized Django application to connect to a local database running on the host machine, you need to enable incoming connections from your Docker interface. You do that by adding the following rule to iptables on your local machine:
$ sudo iptables -A INPUT -i docker0 -j ACCEPT
Next, you have to configure your Postgres server to listen on multiple addresses. Open /etc/postgresql/<version>/main/postgresql.conf, search for the line containing listen_addresses='localhost', and change it to:
listen_addresses='*'
After these changes, you should be able to connect to your local postgres database from inside the container.
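Depending on your pg_hba.conf, you may also need to allow the Docker subnet explicitly, or Postgres will still reject the connection; a sketch, assuming the default docker0 subnet 172.17.0.0/16:
# add to /etc/postgresql/<version>/main/pg_hba.conf:
host    all    all    172.17.0.0/16    md5
$ sudo service postgresql restart   # pick up both configuration changes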
This answer might give you further clarifications on how to connect to your local machine from your container.
To connect from the container to the host, you can use the IP address of the docker0 bridge. Run ifconfig on the host to find the docker0 IP address (the default is 172.17.0.1, I believe). Then connect to that from inside your container.
This is obviously not very portable, as that IP might change between machines, so a wrapper script might be useful to find and inject that IP into the container, as sketched below.
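A minimal sketch of such a wrapper, assuming your settings.py reads a hypothetical DATABASE_HOST environment variable:
#!/bin/sh
# discover the docker0 bridge IP on this host and hand it to the container
HOST_IP=$(ip -4 addr show docker0 | awk '/inet /{print $2}' | cut -d/ -f1)
docker run -p 8000:8000 -e DATABASE_HOST="$HOST_IP" -d imagename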
Better still, postgres in a container! :p
Also, if connecting to a remote postgres then just provide the IP of the remote instance (no different to regular inter-connectivity here).

Cannot launch interactive session in Windows IIS Docker container

I'm using the AWS "Windows Server 2016 Base with Containers" image (ami-5e6bce3e).
Using docker info I can confirm I have the latest (Server Version: 1.12.2-cs-ws-beta).
From Powershell (running as Admin) I can successfully run the "microsoft/windowsservercore" container in interactive mode, connecting to CMD in the container:
docker run -it microsoft/windowsservercore cmd
When I attempt to run the "microsoft/iis" container in interactive mode, although I am able to connect to IIS (via browser), I am never connected to the interactive CMD session in the container.
docker run -it -p 80:80 microsoft/iis cmd
Instead, I simply get:
Service 'w3svc' started
Using another Powershell window, I can:
docker container ls
...and see my container running.
Attempting to attach locks up and never returns.
I have since switched regions and found that there are different AMIs in each region:
us-east-1: ami-d08edfc7
us-west-2: ami-5e6bce3e
...both of these have the same result.
Relevant links used:
AWS announcement and simple Docker example
MSDN simple Docker example
MSDN IIS Docker example
Update
Using the following link I was able to create my own Dockerfile, based off the server base image and installing IIS myself, and this seems to work fine.
custom Dockerfile
This is not an issue with the AWS AMIs; it was due to the way the Microsoft IIS Dockerfile is written (and to my being new to Docker).
Link to Microsoft's IIS DockerFile
The last line (line 7):
ENTRYPOINT ["C:\\ServiceMonitor.exe", "w3svc"]
Difference between CMD and ENTRYPOINT
So since this Dockerfile uses ENTRYPOINT, to launch an interactive powershell session, use the following command:
docker run --entrypoint powershell -it -p 80:80 microsoft/iis
Note that the --entrypoint flag needs to come before the image name (anything after the image is passed as arguments rather than treated as flags), so this won't work:
docker run -it -p 80:80 microsoft/iis --entrypoint powershell
Here is another reference link regarding ENTRYPOINT and CMD differences
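Alternatively, if the container is already running with its default entrypoint, you can open a separate interactive shell in it with docker exec rather than overriding the entrypoint (a sketch; take the container ID from docker container ls):
docker container ls
docker exec -it <container-id> powershell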

How do I provide credentials to the docker awslogs driver using Docker for Mac?

I'm trying to use the docker awslogs driver and getting the following error:
"docker: Error response from daemon: Failed to initialize logging
driver: NoCredentialProviders: no valid providers in chain.
Deprecated."
According to this GitHub comment, I need to set the AWS_SHARED_CREDENTIALS_FILE environment variable for the docker daemon, but I'm not sure how to do that when using Docker for Mac.
The command I'm using to start the container is:
docker run -d \
--log-driver=awslogs \
--log-opt awslogs-region=us-east-1 \
--log-opt awslogs-group=my-log-group \
my-image
Version information:
Docker for Mac 1.12.1-rc1-beta23 build 11375
OS X El Capitan 10.11.6
but I'm not sure how to do that when using Docker for Mac.
With boot2docker, you would need to modify /var/lib/boot2docker/profile in order to add this variable.
See "Docker daemon config file on boot2docker".
It does persist across reboots of the TinyCore-based VM, and the docker daemon will then take it into account.
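For instance, a sketch of that boot2docker change (assuming a machine named default; the credentials path inside the VM is an assumption, and the file must exist in the VM's filesystem, not just on the Mac):
$ docker-machine ssh default
$ sudo vi /var/lib/boot2docker/profile
# add a line such as:
#   export AWS_SHARED_CREDENTIALS_FILE=/path/to/credentials
$ sudo /etc/init.d/docker restart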
With the new xhyve-based Docker for Mac, the idea should be the same.
/var/lib/boot2docker/profile does exist as well, as shown in this answer.
The official docker daemon doc points to:
--config-file=/etc/docker/daemon.json Daemon configuration file
So try and modify this file.
By default, the comments mention:
~/Library/Containers/com.docker.docker/Data/database/com.docker.driver.amd64-linux/etc/docker/daemon.json
Using information taken from this answer: Docker daemon config path under macOS
You can connect to the VM layer that runs the docker daemon using:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
And you can modify /etc/docker/daemon.json to add the needed variables there.
Once you make your changes, you can just run:
service docker restart
from within the moby terminal to restart the docker daemon.
Do note that if you restart Docker from your Mac, the changes will not persist.
On a side note, if you encounter a login screen when connecting with the screen command, try username: root to access the system.
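Putting it together, once the daemon is back up you can detach from the screen session with Ctrl-A then D, and re-run the original command from the Mac side to check that the driver now initializes:
$ docker run -d --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=my-log-group my-image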

Getting ember to run under docker on Windows Quickstart

Working through this tutorial on setting up ember-cli in a Docker container:
http://www.rkblog.rk.edu.pl/w/p/setting-ember-cli-development-environment-ember-21/
Here are my steps:
Created docker-compose.yml in an empty folder on the host machine
Launched Docker Quickstart to get a terminal
Changed to the folder with the .yml
Ran the two docker-compose commands below from the terminal (added -d because without that you get a message that interactive mode is not supported)
Ran docker ps -a to verify that the container was running
Ran docker inspect CONTAINER_ID to find the ip address of the running container
Found the IP address at an odd location (172.17.0.2)
Attempted to access port 4200 on that IP from the host Windows machine's browser, and also from the Docker command line via curl, but without success.
Ran docker ps -a and found that both containers that had been instantiated had exited.
Now if I try to start the container again, it just exits immediately:
docker-compose run -d --rm ember init
docker-compose run -d --rm ember server
What am I missing to get up and running? Do I need to open ports on the default VM running in VirtualBox? How do I diagnose why the container keeps exiting?
First I would suggest using docker-compose up; that is most likely what you want.
To see the logs for a detached container you can run docker logs <container name>. If there are any errors you'll see them there.
A likely cause of the "container exit" is that the process goes into the background. Docker requires a process to stay in the foreground, but many serve commands background themselves by default. To keep the process in the foreground you can sometimes use a flag like --foreground or --no-daemon, but I'm not sure if one exists for ember.
If that flag doesn't exist, it's likely that ember server is just checking if stdin/stdout are connected to a tty. By default they are not. You can add these lines to your docker-compose.yml to fix it:
stdin_open: true
tty: true
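To apply this and see what actually happened, using the ember service name from the compose commands above:
$ docker-compose up -d
$ docker-compose logs ember   # shows the service output and why it exited, if it did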
OK, finally resolved it. The issue with module resolution may have been long-filename resolution on Windows, because after I moved the source folder to the root of the host I was able to get ember serve running under Windows.
Then from the terminal window I ran the commands to init and launch the ember server:
docker-compose run -d --rm ember init
docker-compose run -d --rm ember server
Then did:
docker-compose up -d
which launched the containers successfully, and then I was able to access the Ember page served at the IP:port specified earlier in the comments:
http://192.168.99.100:4200/