Giving Docker access to db file outside container - django

I'm trying to test a Django app managed by Docker. Since it's a development project only used by me, I'm using a sqlite3 database backend. However, because I'll be populating this test database with a lot of generated data, and because I don't fully trust Docker, I want to store this sqlite3 db file outside of the container in my home directory, to ensure it doesn't get deleted or lost.
However, by design, Docker makes it difficult for programs inside containers to access files outside of those containers. How do I update my Docker configuration to allow access to this one specific db file in my home directory?

You can mount a host directory into your docker container using the -v flag.
For details see this answer: https://stackoverflow.com/a/23455537/7695859.
docker run -v /host/directory:/container/directory -other -options image_name command_to_run
For a more detailed understanding, see these official docs:
Use volumes
Manage data in Docker
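For the specific case in the question, a minimal sketch might look like this; the host path, the container path /app/db.sqlite3, and the image name my-django-image are assumptions, not names from the question:
# Bind-mount a single sqlite3 file from the home directory into the container
docker run -v ~/data/db.sqlite3:/app/db.sqlite3 -p 8000:8000 my-django-image
Django's DATABASES['default']['NAME'] should then point at the container-side path (/app/db.sqlite3) so writes go through the mount. Note that if the host file does not exist yet, Docker creates a directory with that name instead, so create an empty file first or mount the parent directory.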

Related

How to run a docker image from within a docker image?

I run a dockerized Django-celery app which takes some user input/data from a webpage and (is supposed to) run a unix binary on the host system for subsequent data analysis. The data analysis takes a bit of time, so I use celery to run it asynchronously. The data analysis software is dockerized as well, so my django-celery worker should do os.system('docker run ...'). However, celery says docker: command not found, obviously because docker is not installed within my Django docker image. What is the best solution to this problem? I don't want to run docker within docker, because my analysis software should be allowed to use all system resources and not just the resources assigned to the Django image.
I don't want to run docker within docker, because my analysis software should be allowed to use all system resources and not just the resources assigned to the Django image.
I don't quite follow the causal relationship here. In fact, we just need to add two steps to your Django setup:
Follow Install client binaries on Linux to download the prebuilt docker client binary and add it to your Django image, so the image has the docker command available.
When starting the Django container, add a bind mount for /var/run/docker.sock. This lets the Django container talk directly to the docker daemon on the host machine and start the data-analysis container on the host. Since the analysis container does not start inside the Django container, the two have separate system resources; in other words, the analysis container's resources do not depend on the resources assigned to the Django container.
A sample session with a docker image that already has the docker client in it:
root@pie:~# ls /dev/fuse
/dev/fuse
root@pie:~# docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker /bin/sh
/ # ls /dev/fuse
ls: /dev/fuse: No such file or directory
/ # docker run --rm -it -v /dev:/dev alpine ls /dev/fuse
/dev/fuse
As you can see, although the first container does not have access to the host's /dev folder, a container it launches through the host's docker daemon runs directly on the host and gets its own access to the host's resources.
If the above is what you need, then it's the right solution for you. Otherwise, you will have to install the analysis tool in your Django image.
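As a sketch of both steps together (the client version, the paths, and the image name django-celery-image below are assumptions):
# Step 1, in the Django image's Dockerfile: add the static docker client binary
RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz \
    | tar -xz --strip-components=1 -C /usr/local/bin docker/docker
# Step 2, at run time: bind-mount the host's docker socket into the Django container
docker run -d -v /var/run/docker.sock:/var/run/docker.sock django-celery-image
With the socket mounted, the worker's docker run ... call is executed by the host's daemon, so the analysis container is a sibling of the Django container rather than a child and is not constrained by the Django container's resource limits.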

MQ Custom Docker Image - MQM Group Not Found

Description: I'm getting the following error when running a docker build. I thought the mqm group would be created automatically by default; the site linked below doesn't mention otherwise. Can someone else try this?
System notes: VS Code (Docker build) on a Windows machine.
Error:
useradd: group 'mqm' does not exist
Reference site for instructions:
IBM MQ Custom Docker Image Instructions
Docker File:
FROM ibmcom/mq
USER root
RUN useradd alice -G mqm && \
    echo alice:passw0rd | chpasswd
USER mqm
COPY 20-config.mqsc /etc/mqm/
Duplicate of ibmcom/mq docker image backward compatibility issue
From 9.1.5, the container does not use OS-based users or groups; this is to conform to cloud best practices. Instead a file-based system is used, so that when you roll the container out to production in a cloud you can switch to an LDAP-based system.
The 9.1.5 container uses htpasswd, with the relevant file in /etc/mqm/.
For development, if you are not going to create new users, then you can use the 9.1.5 container. If you want to create new users, then you can use 9.1.4 or earlier, or use htpasswd with bcrypt to create the users.
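For example, with the development image you could generate a bcrypt entry using htpasswd; the file name mq.htpasswd below is an assumption, so check the mq-container documentation for the exact file your version expects:
# -B selects bcrypt, -b takes the password on the command line, -c creates the file
htpasswd -Bbc mq.htpasswd alice passw0rd
# copy it into the image alongside the MQSC configuration
COPY mq.htpasswd /etc/mqm/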
Apparently I was using a deprecated site that is still linked from the Docker Hub repo; I guess it's a problem on Docker's side and they can't remove it. Follow the instructions here instead; I had no issue:
https://github.com/ibm-messaging/mq-container

Same Application in a docker compose configuration mapping different ports on AWS EC2 instance

The app has the following containers
php-fpm
nginx
local mysql
app's API
datadog container
In the dev process, many feature branches are created to add new features, such as:
app-feature1
app-feature2
app-feature3
...
I have one AWS EC2 instance per feature branch, running Docker Engine v18 and docker compose to build and run the docker stack that makes up the PHP app.
To save operating costs, one AWS EC2 instance could host 3 feature branches at the same time. I was thinking that there should be a custom docker-compose file with a special port mapping and docker image tag for each feature branch.
The goal of this configuration is to be able to test 3 feature branches and access the app through different ports while saving money.
I also thought about using docker networks, keeping the same ports and using nginx to redirect traffic to the different docker network ports.
What recommendations do you give?
One straightforward way I can think of in this case is to use a .env file for your docker-compose.
Your docker-compose.yaml file will look something like this:
...
ports:
- ${NGINX_PORT}:80
...
ports:
- ${API_PORT}:80
The .env file for each stack will look something like this:
NGINX_PORT=30000
API_PORT=30001
and
NGINX_PORT=30100
API_PORT=30101
for different projects.
Note:
.env must be in the same folder as your docker-compose.yaml.
Make sure that the ports inside the different .env files do not conflict with each other. You can adopt a convention, such as a port prefix per feature: feature1 gets ports starting with 301, i.e. 301xx.
In this way, your docker-compose.yaml can be as generic as you may like.
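A possible way to wire this up (the directory layout and project names here are only an illustration) is to keep one checkout per feature branch, each with its own .env, and give every stack a distinct project name:
# inside the feature1 checkout, where .env sets NGINX_PORT=30000 and API_PORT=30001
docker-compose --project-name app-feature1 up -d --build
# inside the feature2 checkout, where .env sets NGINX_PORT=30100 and API_PORT=30101
docker-compose --project-name app-feature2 up -d --build
Distinct project names keep the container, network, and volume names of the three stacks from colliding on the shared EC2 instance.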
You're making things harder than they have to be. Your app is containerized, so use a container orchestration service.
ECS is very easy to get going with. A JSON task definition file describes your deployment, basically analogous to docker-compose (they actually supported compose files at some point; not sure whether that feature stayed around). You can deploy an arbitrary number of services with different container images. We like to use a Terraform module with the image tag as a parameter, but it's easy enough to write a shell script or whatever.
Since you're trying to save money, create a single Application Load Balancer: each app gets a hostname, and each container gets a subpath. For short-lived feature branch deployments, you can even deploy on Fargate and avoid an ongoing server cost.
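As a rough illustration of what such a task definition looks like (all names, the account ID, the region, and the sizes below are placeholders, not values from this setup):
{
  "family": "app-feature1",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app-nginx:feature1",
      "portMappings": [{ "containerPort": 80 }]
    }
  ]
}
Each feature branch gets its own task definition (or revision) with a different image tag, and the load balancer routes each hostname or path to the corresponding service.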
It turns out the solution involved built-in capabilities of docker-compose. In the docker docs the concept is called "multiple isolated environments on a single host".
To achieve this:
I used a .env file with a number of env vars. The main one is CONTAINER_IMAGE_TAG, which holds the git branch ID that identifies the stack.
A separate docker-compose-dev file defines ports, image tags, and other dev-related metadata.
Finally, using --project-name in the docker-compose command makes it possible to run different stacks side by side.
An example Bash wrapper function around the docker-compose command:
docker_compose() {
    docker-compose -f docker/docker-compose.yaml -f docker/docker-compose-dev.yaml --project-name "project${CONTAINER_IMAGE_TAG}" --project-directory . "$@"
}
The separation should be done in the image tags, container names, network names, volume names and project name.
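Usage then looks like this, with CONTAINER_IMAGE_TAG set per feature branch (the branch names are only examples):
# each feature branch exports its own tag before calling the wrapper
export CONTAINER_IMAGE_TAG=feature1
docker_compose up -d --build
Because the project name embeds the tag, running docker_compose down for one branch only removes that branch's containers and networks.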

Can I provide AWS credentials via mounted directory to local Docker container built by sbt-native-packager

We have some docker images we build with sbt-native-packager that need to interact with AWS services. When running them outside of AWS, we need to explicitly provide credentials.
I know we can explicitly pass environment variables containing the AWS credentials. Doing this complicates keeping our credentials secret. One option is to provide them via the command line, typically storing them in our shell history (yes, I know this can be avoided by adding a space to the start of the command, but that is easy to forget) and putting them at higher risk of accidental copy/paste sharing. Alternatively, we can provide them via an env-file. But this exposes us to possibly checking them into version control or pushing them to another server unintentionally.
We've found that the ideal practice is to mount our local ~/.aws/ directory into the running user's home directory for the docker container. However, our attempts at getting this to work with the sbt-native-packager images have been unsuccessful.
One unique detail of sbt-native-packager images (compared to our others) is that they are built using docker's ENTRYPOINT instead of CMD to start the application. I don't know if this has any bearing on the problem.
So the question: Is it possible to provide AWS credentials to a docker container created by sbt-native-packager by mounting the AWS credentials folder via command line parameters at startup?
The problem I was running into was related to permissions. The .aws files have very restricted access on my machine, and the default user within the sbt-native-packager image is daemon. This user does not have access to read my files when mounted into the container.
I am able to obtain the behavior I desire by adding the following flags to my docker run command: -v ~/.aws/:/root/.aws/ --user=root
I was able to discover this by running with the --entrypoint=ash flag, looking at the HOME environment variable (the location to mount the .aws/ folder), and attempting to cat the contents of the mounted folder.
Now I just need to understand what security vulnerabilities I'm opening myself up to by running docker containers in this way.
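For reference, the full command looks something like this; the image name my-sbt-image is a placeholder:
docker run --user=root -v ~/.aws/:/root/.aws/ my-sbt-image
Mounting the directory read-only (-v ~/.aws/:/root/.aws/:ro) at least prevents the container from modifying the credentials files.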
I'm not entirely sure why mounting ~/.aws would be a problem - typically it could be related to read permissions on that directory and the different UID between the host system and the container.
That said, I can suggest a couple of workarounds:
Use an environment variable file instead of explicitly specifying the credentials on the command line. With docker run, you can do this by specifying --env-file. To me this sounds like the simplest approach.
Mount a different credentials file and provide the AWS_CONFIG_FILE environment variable to specify its location.
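A sketch of the second workaround; AWS_SHARED_CREDENTIALS_FILE is the companion variable for the credentials file itself, and the host path and image name below are assumptions:
# keep a separate, container-only copy of the config and credentials with relaxed permissions
docker run \
  -v ~/docker-aws/:/aws/:ro \
  -e AWS_CONFIG_FILE=/aws/config \
  -e AWS_SHARED_CREDENTIALS_FILE=/aws/credentials \
  my-sbt-image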

How to copy data from docker container to ECS on startup (AWS)?

I have two containers: one is a web server based on Node.js with an assets directory. The other container is nginx, which proxies page requests to the web server and serves static files from the assets directory.
I created an AWS cluster and EC2 instance, built and pushed docker images to the registry, and made tasks to deploy my applications, but I can't share the assets directory with nginx because the directory is not part of that container.
So to solve my problem I figured I would create an EFS volume and attach it, add permissions for ec2-user, and make the directory available at the path /var/html/assets.
Cool, but how do I copy the assets content from my web-server docker container to /var/html/assets?
I want to make it public / shared because soon I will add more servers which should also place assets in this common directory.
The process should be automated and work on each deployment. Any suggestions? Thanks!
To copy the assets content from your web-server docker container to your host machine,
say you want to save the assets from the container to /var/html/assets on the host machine, use this command to run your container:
docker run --name=nginx -d -v /var/html/assets:[Your Container path] -p 5000:80 nginx
-v /var/html/assets:[Your Container path] sets up a bind-mount volume that links the [Your Container path] directory inside the nginx container to the /var/html/assets directory on the host machine. Docker uses a : to split the host path from the container path, and the host path always comes first.
Hope it will help!
I solved the problem by making the host directory writable with chmod 777 /var/html/assets, then adding a volume that points to the host directory and applying it to both the web and nginx containers. When the web container starts, it runs a cp instruction to copy the assets into the mounted directory (the host directory). Nginx then sees the populated directory and can use it.
Note: this is a temporary workaround; giving everyone rwx access to the directory is not a good approach for security reasons.
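A compose-style sketch of that workaround, with the image names, paths, and start command as placeholders:
services:
  web:
    image: my-web-app
    volumes:
      - /var/html/assets:/shared/assets
    # copy the baked-in assets into the shared host directory, then start the app
    command: sh -c "cp -r /app/assets/. /shared/assets/ && node server.js"
  nginx:
    image: nginx
    volumes:
      - /var/html/assets:/usr/share/nginx/html/assets:ro
Because the copy runs on every container start, new assets land in the shared directory on each deployment. A less permissive alternative to chmod 777 is to chown the host directory to the UID the web container runs as.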