So I am following the official documentation for Django with Postgres in Docker:
https://docs.docker.com/compose/django/
I created another database (not using the default postgres db), but when I shut down the server and re-run it, the database is gone. How can I create the database so that it doesn't vanish when I shut down my Docker server?
All data in a container is deleted when the container is destroyed or removed. To keep the data, you should store it in a volume mounted into the container. That volume lives on your host machine, so any data created by a process running in the container will be stored on your machine. For this, you will have to understand Docker's volume API.
Create a volume like this
docker volume create hello
And use that volume in your container like this
docker run -d -v hello:/world busybox ls /world
You can get further help from here.
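For the Postgres case specifically, the path that has to be persisted is the database's data directory. A minimal sketch with plain docker run (the volume name pgdata and the database name myproject are placeholders; this assumes the standard postgres image, which keeps its data under /var/lib/postgresql/data):
docker volume create pgdata
docker run -d --name db -e POSTGRES_DB=myproject -e POSTGRES_PASSWORD=secret -v pgdata:/var/lib/postgresql/data postgres
With Compose, the same idea applies: declare a named volume and mount it at /var/lib/postgresql/data on the db service. The database files then survive stopping or removing the container, as long as you don't delete the volume itself.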
I run a dockerized Django-celery app which takes some user input/data from a webpage and (is supposed to) run a unix binary on the host system for subsequent data analysis. The data analysis takes a bit of time, so I use celery to run it asynchronously. The data analysis software is dockerized as well, so my django-celery worker should do os.system('docker run ...'). However, celery says docker: command not found, obviously because docker is not installed within my Django docker image. What is the best solution to this problem? I don't want to run docker within docker, because my analysis software should be allowed to use all system resources and not just the resources assigned to the Django image.
I don't want to run docker within docker, because my analysis software should be allowed to use all system resources and not just the resources assigned to the Django image.
I don't see the causal relationship here. In fact, we just need to add two steps to your Django image:
Follow "Install client binaries on Linux" to download a prebuilt docker client binary; your Django image will then have the docker command.
When starting the Django container, add a bind mount for /var/run/docker.sock. This allows the Django container to talk directly to the docker daemon on the host machine and start the data-analysis container on the host. Because the analysis container is not started inside the Django container, the two can have separate system resources; the analysis container's resources do not depend on those assigned to the Django container.
A sample with a docker image which already has the docker client in it:
root@pie:~# ls /dev/fuse
/dev/fuse
root@pie:~# docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker /bin/sh
/ # ls /dev/fuse
ls: /dev/fuse: No such file or directory
/ # docker run --rm -it -v /dev:/dev alpine ls /dev/fuse
/dev/fuse
You can see that although the intermediate container does not have access to the host's /dev folder, the container it launches through the host daemon does: the two have separate resources.
If the above is what you need, then it's the right solution for you. Otherwise, you will have to install the analysis tool in your Django image.
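Applied to your setup, the start command for the Django/celery container would look something like the sketch below (my-django-celery is a placeholder for your own image, which must already contain the docker client binary from step 1):
docker run -d -v /var/run/docker.sock:/var/run/docker.sock my-django-celery
The os.system('docker run ...') call in the celery task then talks to the host's docker daemon, so the analysis container it launches runs as a sibling of the Django container on the host rather than inside it.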
I'm trying to connect a dockerised C++ application to a dockerised database so that I can get it running and get some outputs; the configuration can be found in this question.
When I try to run the model (which is inside the application container) against the dockerised database:
>docker run --net xxxxx-network -it xxxxxrun:localbase
root@xxxxxxxx:/run# isql xxx.x.x.x user=root
[ISQL]ERROR: Could not SQLConnect
I'm new to ODBC and Docker; can someone give me a hint? Many thanks.
I am assuming that you're running each docker container separately.
In this case, in order for your C++ application container to be able to connect to the MySQL container, they will need to be on the same network.
Create a Docker network: docker network create mysql-network
Run the C++ application container like so: docker run -it --network mysql-network xxxxxrun:localbase (xxxxxrun should be the name of the image and localbase the image tag you want to run)
Run the MySQL database with a command similar to: docker run --network mysql-network -e MYSQL_ROOT_PASSWORD=password -d mysql:5.7
In this situation the two containers should be able to communicate freely with each other across the network.
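One detail worth adding: give the MySQL container a name and use that name as the hostname in your connection settings instead of a hard-coded IP, because Docker's embedded DNS resolves container names on a user-defined network. A rough sketch (the container name db is a placeholder, and the image/tag names are carried over from above):
docker network create mysql-network
docker run --name db --network mysql-network -e MYSQL_ROOT_PASSWORD=password -d mysql:5.7
docker run -it --network mysql-network xxxxxrun:localbase
Inside the application container you would then point isql (for the MySQL ODBC driver this is typically the SERVER entry in odbc.ini) at the hostname db and port 3306 rather than at an IP address.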
I currently have Dynamodb-local running in a Docker container using the amazon/dynamodb-local image.
The container starts up and I can manually create the necessary tables via AWS CLI.
At this point, however, I need to have the tables created when the container initially starts.
I was hoping to get thoughts on the best approach to handle this - I'm thinking I will still need to use the AWS CLI to create the tables.
If I use a Dockerfile, it's my understanding that I will need to create an image that has the following:
- Python (for using PIP to install AWS CLI)
- PIP
- AWS CLI
- DynamoDB Local
I could also create the tables and then create an image of dynamodb-local at that point to use as my base image, but that would require creating a new image every time I had a new table.
Instead, I was hoping to build an image when I need to start the db and (using the AWS CLI) read JSON files describing the necessary tables and create them.
Any advice on how others are currently handling this scenario?
Thanks.
I've extended dynamodb-local with a UI to manage tables:
docker run -p 8000:8000 -p 80:80 -v storage-volume:/storage -d awspilotcom/dynamodb-ui
Check out the dynamodb-ui docker image, and here is a UI demo.
It supports CloudFormation templates too.
You could use a docker volume or a shared folder for the dynamodb-local data folder:
docker run -p 8000:8000 -v my-volume:/dbstore amazon/dynamodb-local -jar DynamoDBLocal.jar -sharedDb -dbPath /dbstore
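To get the tables created automatically at startup, one common pattern is a small init script (run alongside the container, or from an entrypoint wrapper) that waits for dynamodb-local to answer and then calls the AWS CLI against the local endpoint. A rough sketch, assuming the container above is reachable on localhost:8000 and using a placeholder table name and key schema (dummy credentials and any region are enough for DynamoDB Local):
export AWS_ACCESS_KEY_ID=dummy AWS_SECRET_ACCESS_KEY=dummy AWS_DEFAULT_REGION=us-east-1
until aws dynamodb list-tables --endpoint-url http://localhost:8000 >/dev/null 2>&1; do sleep 1; done
aws dynamodb create-table --endpoint-url http://localhost:8000 --table-name MyTable --attribute-definitions AttributeName=id,AttributeType=S --key-schema AttributeName=id,KeyType=HASH --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
Combined with the volume above, you only pay the creation cost once; on later starts the tables are already in the persisted database file.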
I'm trying to test a Django app managed by Docker. Since it's a development project only used by me, I'm using a sqlite3 database backend. However, because I'll be populating this test database with a lot of generated data, and because I don't fully trust Docker, I want to store this sqlite3 db file outside of the container in my home directory, to ensure it doesn't get deleted or lost.
However, by design, Docker makes it difficult for programs inside containers to access files outside of those containers. How do I update my Docker configuration to allow access to this one specific db file in my home directory?
You can mount a host directory into your docker container using the -v flag.
For details see this answer: https://stackoverflow.com/a/23455537/7695859.
docker run -v /host/directory:/container/directory -other -options image_name command_to_run
For a more detailed understanding, see these official docs:
Use volumes
Manage data in Docker
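Applied to your case, something like the following should work (the paths and the image name my-django-image are placeholders; ~/djangodata is just a directory in your home folder that will hold the sqlite3 file):
mkdir -p ~/djangodata
docker run -v ~/djangodata:/app/data my-django-image
Inside the container, point the sqlite3 NAME setting in DATABASES in settings.py at /app/data/db.sqlite3, so the file Django writes is the one that lives in your home directory and survives the container being removed.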
As I am working with Docker, I need help taking a container or image from an existing AWS box. In my AWS box, our application is installed and initialized.
Our application takes a long time to initialize, so I want to deploy this container (with the application already installed) at box launch time itself. As I understand it, when I take a docker container it will already have my application initialized, so I can save the application initialization time.
I am launching the machine through Ansible in an AWS VPC, so I can call the docker container there.
Can anyone help with how to do this?
With Thanks,
Ezhilmurugan M I
If you docker commit your changes into an image with a tag, you can then push it to a registry and pull the image down on another server.
$ docker commit <hash or name> yourusername/red_panda
$ docker push yourusername/red_panda
On other host
$ docker pull yourusername/red_panda
You could also export the container's filesystem (docker export works on a container rather than an image), transfer it however you want, and then import it on the new server.
$ docker export red_panda > latest.tar
$ cat latest.tar | docker import - exampleimagelocal:new
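If you want to keep the image's layers, tags and metadata instead of flattening a container's filesystem, docker save / docker load is the usual alternative; a minimal sketch using the image name from the example above:
$ docker save yourusername/red_panda > red_panda.tar
Then, after copying red_panda.tar to the other host:
$ docker load < red_panda.tar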