'Could not SQLConnect' error when connecting a dockerised model to a dockerised database - C++

I'm trying to connect a dockerised C++ application to a dockerised database so that I can get it running and produce some output; the configuration can be found in this question.
When I try to run the model (which is inside the application container) against the dockerised database:
>docker run --net xxxxx-network -it xxxxxrun:localbase
root@xxxxxxxx:/run# isql xxx.x.x.x user=root
[ISQL]ERROR: Could not SQLConnect
I'm new to ODBC and Docker; can someone give me a hint? Many thanks.

I am assuming that you're running each docker container separately.
In that case, for your C++ application container to be able to connect to the MySQL container, they will need to be on the same network.
Create a Docker network: docker network create mysql-network
Run the C++ application container like so: docker run -it --network mysql-network xxxxxrun:localbase (xxxxxrun should be the name of the image and localbase the image tag you want to run)
Run the MySQL database with a command similar to: docker run --network mysql-network -e MYSQL_ROOT_PASSWORD=password -d mysql:5.7
In this situation the two containers should be able to communicate freely with each other across the network.
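As a consolidated sketch (the container name mysql-db and the DSN mysqldsn below are assumptions for illustration, not taken from the question):

docker network create mysql-network

docker run -d --network mysql-network --name mysql-db \
  -e MYSQL_ROOT_PASSWORD=password mysql:5.7

docker run -it --network mysql-network xxxxxrun:localbase

# Inside the app container, point the ODBC DSN at the MySQL container's
# name rather than an IP address; e.g. an /etc/odbc.ini entry such as:
#   [mysqldsn]
#   Driver = MySQL
#   Server = mysql-db
#   Port   = 3306
# and then test the connection with unixODBC's isql:
isql -v mysqldsn root password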

Related

How to run a docker image from within a docker image?

I run a dockerized Django-celery app which takes some user input/data from a webpage and (is supposed to) run a unix binary on the host system for subsequent data analysis. The data analysis takes a bit of time, so I use celery to run it asynchronously. The data analysis software is dockerized as well, so my django-celery worker should do os.system('docker run ...'). However, celery says docker: command not found, obviously because docker is not installed within my Django docker image. What is the best solution to this problem? I don't want to run docker within docker, because my analysis software should be allowed to use all system resources and not just the resources assigned to the Django image.
I don't want to run docker within docker, because my analysis software should be allowed to use all system resources and not just the resources assigned to the Django image.
I don't follow the causal relationship here. In fact, we just need to add two steps to your Django setup:
Follow Install client binaries on Linux to download the prebuilt docker client binary; your Django image will then have the docker command.
When starting the Django container, bind-mount /var/run/docker.sock; this allows the Django container to talk directly to the docker daemon on the host machine and start the data-analysis container on the host. As the analysis container does not start inside the Django container, the two have separate system resources; in other words, the analysis container's resources do not depend on those of the Django container.
A sample with a docker image which already has the docker client in it:
root@pie:~# ls /dev/fuse
/dev/fuse
root@pie:~# docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker /bin/sh
/ # ls /dev/fuse
ls: /dev/fuse: No such file or directory
/ # docker run --rm -it -v /dev:/dev alpine ls /dev/fuse
/dev/fuse
You can see that although the initial container does not have access to the host's /dev folder, a container launched from inside it (via the host's docker daemon) does get its own access to host resources.
If the above is what you need, then it's the right solution for you. Otherwise, you will have to install the analysis tool in your Django image.
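A rough sketch of those two steps (the image name and the docker version in the URL are placeholders, not taken from the question):

# Step 1: add the static docker client binary to the Django image.
# In the Django Dockerfile, something like:
#   RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz \
#       | tar xz --strip-components=1 -C /usr/local/bin docker/docker

# Step 2: start the Django container with the host's docker socket
# bind-mounted, so `docker run` inside it talks to the host daemon:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock my-django-image

# Any container the celery worker then starts via os.system('docker run ...')
# runs as a sibling on the host, with the host's resources available to it.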

Sawtooth transaction processor not responding to ping

I created a transaction processor using the JavaScript sawtooth-sdk. When I run it locally (by locally I mean running the JavaScript file with node index.js), it works successfully and gives me this message:
Connecting to Sawtooth validator at tcp://localhost:4004
Connected to tcp://localhost:4004
Registration of [myTP 1.0] succeeded
Then I dockerized it, and when I start the container, it doesn't connect. It only prints the
Connecting to Sawtooth validator at tcp://localhost:4004
message. When I check the Sawtooth docker logs, there are no entries.
My docker base image is FROM ubuntu:bionic and I expose the port with EXPOSE 4004/tcp. What might be the problem? I know it's coming from the validator; what I can't understand is why this works locally but not in Docker.
It looks like the application container and the docker-compose containers are residing in two different networks.
Find your network (its name will probably be based on the project directory):
docker network ls
Then connect the application container to the same network used by the compose
docker network connect <network> <app container>
If you need to do this at start-up of the app container:
docker run -itd --network=<network name> <app image>
Then, from the application, you can reach the validator by its container name:
tcp://sawtooth-validator-default:4004
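Putting those together (the network and container names below are assumptions based on a typical sawtooth-default compose setup, and I'm assuming index.js accepts the validator URL as an argument):

# Find the network created by docker-compose
docker network ls

# Either attach the already-running TP container to it...
docker network connect sawtooth-default my-tp-container

# ...or start the TP on that network directly, pointing it at the
# validator's container name instead of localhost:
docker run -itd --network sawtooth-default my-tp-image \
  node index.js tcp://sawtooth-validator-default:4004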

Connect to docker container using IP

I'm running the following commands:
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default * virtualbox Running tcp://192.168.99.100:2376 v17.06.2-ce
$ docker-machine ip default
192.168.99.100
And I have some docker containers.
One of them is running Django, another one postgres, and a third one elasticsearch.
When I run my Django container with:
# python manage.py runserver 0:8000
And it works when I use the site:
localhost:8000/
But it is not available when I use: 192.168.99.100:8000
There, I have the browser message:
This site can’t be reached
192.168.99.100 refused to connect. Try: Checking the connection
The same happens for the postgres and elasticsearch containers.
Why is my docker container not reachable at the default docker-machine IP?
It sounds like you have not told Docker to talk to your VM, and that your containers are running on your host machine.
To resolve:
Bring down your containers
Run $ eval "$(docker-machine env default)" which sets the following environment variables:
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://x.x.x.x:2376"
export DOCKER_CERT_PATH="/Users/<yourusername>/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
This tells Docker how to talk to your VM.
Spin containers back up
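The full round trip then looks something like this (the image name and port are examples):

# Point the docker CLI at the VM for this shell session
eval "$(docker-machine env default)"

# Re-create the containers; they now run inside the VM
docker run -d -p 8000:8000 my-django-image

# The app is now reachable on the machine's IP rather than localhost
curl "http://$(docker-machine ip default):8000/"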

Docker: How to rely on a local db rather than a db docker container?

Overall: I don't want my Django app to rely on a docker container as my db. In other words, when I run an image
docker run -p 8000:8000 -d imagename
I want this to know to connect to my local db. I have my settings.py configured to connect to a pg db, so everything is fine when I do something like
python manage.py runserver
Feel free to call out any incorrect use of terms or gaps in my overall understanding of docker. From all the tutorials I've seen, they usually make a docker-compose file reliant on a db image that is spun up in a container separate from the web app. Examples of things I've gone through:
https://docs.docker.com/compose/django/#connect-the-database, http://ruddra.com/2016/08/14/docker-django-nginx-postgres/, etc. At this point I'm extremely lost, since I don't know if I handle this in my Dockerfile, in settings.py in my project, or in docker-compose.yml (I'm guessing I shouldn't even have the latter, since it's for multi-container apps, which I'm trying to avoid). [Aside: lastly, can one run a Django app reliant on celery & rabbitmq in just one container? Like my pg example, I only see instances of having them all in separate containers.] As for my Dockerfile, it's pretty much this:
FROM python:3
ENV APP 'http://githubproj'
RUN git clone $APP \
&& cd $APP/projectitself \
&& pip install -r requirements.txt
CMD cd $APP_DIR/mydjangoproject && gunicorn mydjangoproject.wsgi:application --bind 0.0.0.0:8000
In order to allow your containerized Django application to connect to a local database running on the host machine, you need to enable incoming connections from your docker interface. You do that by adding the following rule to iptables on your local machine:
$ sudo iptables -A INPUT -i docker0 -j ACCEPT
Next, you have to configure your postgres server to listen on multiple addresses. Open /etc/postgresql/<version>/main/postgresql.conf, find the line containing listen_addresses = 'localhost', and change it to:
listen_addresses='*'
After these changes, you should be able to connect to your local postgres database from inside the container.
This answer might give you further clarifications on how to connect to your local machine from your container.
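To sanity-check the setup, something like the following should work (172.17.0.0/16 and 172.17.0.1 are the usual docker0 defaults; verify yours):

# On the host: confirm postgres is now listening on all interfaces
sudo ss -tlnp | grep 5432

# pg_hba.conf will likely also need a rule admitting the docker subnet:
#   host  all  all  172.17.0.0/16  md5
# followed by a reload: sudo systemctl reload postgresql

# From a container: test that the host's postgres is reachable
docker run --rm postgres pg_isready -h 172.17.0.1 -p 5432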
To connect from the container to the host, you can use the IP address of the docker0 bridge. Run ifconfig on the host to find the docker0 IP address (the default is 172.17.0.1, I believe), then connect to that from inside your container.
This is obviously not super host-portable as that IP might change between machines, so a wrapper script might be useful to find and inject that IP into the container.
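For example, a wrapper along these lines (the variable name, env var, and image name are illustrative):

#!/bin/sh
# Discover the docker0 bridge IP on this host and hand it to the
# container as an environment variable the app can use as its DB host.
BRIDGE_IP=$(ip -4 addr show docker0 | awk '/inet /{sub(/\/.*$/, "", $2); print $2}')
exec docker run -p 8000:8000 -e DB_HOST="$BRIDGE_IP" my-app-image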
Better still, postgres in a container! :p
Also, if connecting to a remote postgres then just provide the IP of the remote instance (no different to regular inter-connectivity here).

Cannot launch interactive session in Windows IIS Docker container

I'm using the AWS "Windows Server 2016 Base with Containers" image (ami-5e6bce3e).
Using docker info I can confirm I have the latest (Server Version: 1.12.2-cs-ws-beta).
From Powershell (running as Admin) I can successfully run the "microsoft/windowsservercore" container in interactive mode, connecting to CMD in the container:
docker run -it microsoft/windowsservercore cmd
When I attempt to run the "microsoft/iis" container in interactive mode, although I am able to connect to IIS (via browser), I am never connected to the interactive CMD session in the container.
docker run -it -p 80:80 microsoft/iis cmd
Instead, I simply get:
Service 'w3svc' started
Using another Powershell window, I can:
docker container ls
...and see my container running.
Attempting to attach locks up and never returns.
I have since switched regions and found that there are different AMIs in each region:
us-east-1: ami-d08edfc7
us-west-2: ami-5e6bce3e
...both of these have the same result.
Relevant links used:
AWS announcement and simple Docker example
MSDN simple Docker example
MSDN IIS Docker example
Update
Using the following link I was able to create my own Dockerfile based on the server base image, installing IIS myself, and this seems to work fine:
custom Dockerfile
This was not an issue with the AWS AMIs; it was due to the way the Microsoft IIS Dockerfile is written (and to my being new to Docker).
Link to Microsoft's IIS DockerFile
The last line (line 7):
ENTRYPOINT ["C:\\ServiceMonitor.exe", "w3svc"]
Difference between CMD and ENTRYPOINT
So since this Dockerfile uses ENTRYPOINT, to launch an interactive powershell session, use the following command:
docker run --entrypoint powershell -it -p 80:80 microsoft/iis
Note that the --entrypoint flag needs to come before the image name, as this won't work:
docker run -it -p 80:80 microsoft/iis --entrypoint powershell
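To make the failure mode concrete: anything placed after the image name is passed to the container's ENTRYPOINT as arguments rather than being parsed as docker flags.

docker run -it -p 80:80 microsoft/iis --entrypoint powershell
# effectively executes: C:\ServiceMonitor.exe w3svc --entrypoint powershell

docker run --entrypoint powershell -it -p 80:80 microsoft/iis
# effectively executes: powershell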
Here is another reference link regarding ENTRYPOINT and CMD differences