How do I run docker-compose using Docker.DotNet?

I would like to use Docker.DotNet to run docker-compose with multiple interconnected containers. Is this possible, and is there an example of how to do it?

Related

Run command conditionally while deploying with Amazon ECS

I have a Django app deployed on ECS. On my first deployment I need to load fixtures, a management command that has to be run inside the container.
I want to be able to run fixtures conditionally, not only on the first deployment. One approach I'm considering is to maintain a variable in my env and have entrypoint.sh run fixtures accordingly.
Is this a good way to go about it? Also, what are some other standard ways to do the same?
Let me know if I have missed any details you might need to understand my problem.
You probably need to handle it in your entrypoint.sh script. As far as my experience goes, you won't be able to run commands conditionally on ECS without a script; a sketch of one follows.
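Here is a minimal entrypoint.sh sketch along those lines. The RUN_FIXTURES variable and the initial_data.json fixture file are placeholders; you would set the variable in your ECS task definition:
#!/bin/sh
# run fixtures only when the (hypothetical) RUN_FIXTURES variable is set to 1
if [ "$RUN_FIXTURES" = "1" ]; then
  # loaddata is Django's management command for installing fixtures
  python manage.py loaddata initial_data.json
fi
# hand control over to the container's main process
exec "$@"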

How to execute a docker run command from a C++ program?

I want to execute "docker run -it Image_name" from a C++ program. Is there any way to achieve this?
Try it as you would any simple system command from C++:
system("docker run -it Image_name");
I can think of two ways you could achieve this.
For a quick-and-dirty approach, you can run commands straight from your C++ code. There seem to be a few ways to run commands in C++, but the system() function is an easy one if you just want to run the command:
#include <cstdlib>

int main() {
    // launch the container via the shell; requires docker on PATH
    system("docker run -it Image_name");
}
Bear in mind you will need to make sure the docker executable is in your PATH environment variable. You will also need to consider which operating systems you want to support; a system call on Linux might not behave the same as on Windows. It can be tricky to get system calls right.
Another method is to use the Docker Engine's API directly; docker commands are sent to this API under the hood. You could connect to this API yourself and make the same calls the docker run -it Image_name command would. The Engine API is documented here: https://docs.docker.com/engine/api/v1.24/ . In API terms, docker run creates a container and then starts it.
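For illustration, here is a rough sketch of those two calls made with curl against the local Engine socket; a C++ program could issue the same HTTP requests with any HTTP library. The image name and container ID are placeholders:
# create a container from the image; the response body contains an "Id"
curl --unix-socket /var/run/docker.sock \
  -H "Content-Type: application/json" \
  -d '{"Image": "Image_name"}' \
  http://localhost/v1.24/containers/create
# start the container using the returned Id
curl --unix-socket /var/run/docker.sock \
  -X POST http://localhost/v1.24/containers/<container-id>/start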
The shell command will be the easiest approach. The Engine API approach would take more effort up front but would result in cleaner, more robust code. The correct approach will depend on your situation.

Setting up jupyterhub docker using one of the jupyter stacks

I'm trying to get JupyterHub up and running. Python 2.7 kernels are required, so basically anything in the docker-stacks repo would be great. The documentation mentions that it can work with JupyterHub using DockerSpawner, but I can't quite see how it all fits together. Is anyone aware of a simple step-by-step guide to get this working?
To use any docker image, first pull it from Docker Hub:
docker pull jupyter/scipy-notebook
Now install DockerSpawner:
pip install dockerspawner
Then add the necessary lines to jupyterhub_config.py, as described in https://github.com/jupyterhub/dockerspawner/blob/master/README.md . To make the hub spawn a specific docker image, this line does the magic:
c.DockerSpawner.image = 'jupyter/scipy-notebook'
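Putting it together, a minimal jupyterhub_config.py sketch (Python), assuming the docker-stacks image pulled above:
# spawn each user's notebook server in its own docker container
c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'
# the single-user image to spawn; swap in whichever stack you need
c.DockerSpawner.image = 'jupyter/scipy-notebook'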

Simple docker example only appears to expose db container and not web

I clone this repo (it's pretty much based on the docker docs here) and run docker-compose up. Docker builds the 2 containers and I see the output from db_1 (psql looks to be completely ready), but nothing at all from web_1, no output whatsoever.
I go to my host IP on port 8000 and nothing is running there. I am using Docker Toolbox for Mac. It's pretty much the simplest possible example of using Docker; any idea why I'm not seeing anything from my Django container?
Thanks in advance.
It might be that STDOUT of the web_1 container is mapped to display only WARN and ERROR levels. You say you're using Docker Toolbox for Mac? Have you tried reaching the website via the IP of the Docker Toolbox VM instead of the host IP? I'm not that familiar with Docker Toolbox, since there is a native Mac client (https://docs.docker.com/engine/installation/mac/). Try the Docker Toolbox IP, not the host IP; the command below shows how to find it. I would also recommend using Docker for Mac natively, since I had problems with the Toolbox but none with the native client.
Hope I could help.
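For reference, a quick way to find the Toolbox VM's IP (assuming the default machine name):
# print the IP of the Docker Toolbox VM created by docker-machine
docker-machine ip default
# then browse to http://<that-ip>:8000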
After taking a better look at the documentation, I was able to start your containers.
After the git clone:
cd sane-django-docker
docker-compose up -d
This is the output:
Starting sanedjangodocker_db_1
Starting sanedjangodocker_web_1
[root@localhost sane-django-docker]# docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS         PORTS                    NAMES
cde9e93c1a70   sanedjangodocker_web   "python3 manage.py ru"   19 seconds ago   Up 1 seconds   0.0.0.0:8000->8000/tcp   sanedjangodocker_web_1
73ad8cafe798   postgres:9.4           "/docker-entrypoint.s"   20 seconds ago   Up 1 seconds   5432/tcp                 sanedjangodocker_db_1
When I just performed docker-compose up (running in the foreground), I saw this issue:
LOG: shutting down
LOG: database system is shut down
After taking a better look at the documentation, I saw the problem:
Django will complain about the postgres database not existing, so we'll create one:
docker exec sanedjangodocker_db_1 createdb -Upostgres webapp
Now Postgres is fine, but I had to restart the web app so it finds the db:
docker restart sanedjangodocker_web_1
Now I'm able to access it on IP:8000:
It worked!
Congratulations on your first Django-powered page.
I don't know how the Django app really works, but the setup is pretty strange.

Multiple docker compose environments for same code base

I'm using the Cookiecutter scaffold for my Django project and I follow the same workflow documented for local docker environments. I have a dev.yml compose file for the local setup. I also have a testing env setup called test.yml, which is very different from the local setup (it installs test dependencies and has a different set of services specific to testing). I'm not able to spin up docker compose envs for both local development and testing simultaneously. When I do a:
$ docker-compose -f dev.yml up -d
All the dev containers spin up fine.
After this I do a:
$ docker-compose -f test.yml up -d
It just recreates all the above containers. Should I use a different network? Or should I give different names to the apps and services in test.yml? What is the best practice for running different docker compose envs for the same codebase simultaneously?
Currently, I check out the code in a different path and spin up the test env there, which seems to work.
Use docker-compose --project-name (or -p) with a different name for each environment. Compose prefixes container names with the project name, so the two environments can run side by side.
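For example (the project names here are arbitrary placeholders):
# give each environment its own project name so the container names don't collide
docker-compose --project-name myapp_dev -f dev.yml up -d
docker-compose --project-name myapp_test -f test.yml up -d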