Making a directory inside a Docker container accessible from another container - amazon-web-services

I have a Docker container named "data-container" which has a directory named /var/data. This directory is in fact an ObjectiveFS volume which is stored in AWS S3.
In addition, I have a second container named "app-container" which needs access to the /var/data directory in "data-container".
Is there a pure-Docker way of allowing "app-container" to read/write data to the directory in "data-container"? I could probably do that using NFS but I'm not sure that's the right way to go.

It's simple!
First, create a data volume container:
docker create -v /data --name mystore ubuntu /bin/true
Then you can mount this /data volume into other containers with the --volumes-from flag, like so:
docker run -d --volumes-from mystore --name db1 postgres
You can find this described in the Docker docs:
https://docs.docker.com/engine/userguide/containers/dockervolumes/
Chapter: Creating and mounting a data volume container
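Applied to the containers in the question, a minimal sketch might look like this (it assumes /var/data is declared as a Docker volume on "data-container", and the app image name is a placeholder):
# app-container picks up every volume defined on data-container,
# including /var/data, and can read and write it directly
docker run -d --volumes-from data-container --name app-container my-app-image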

Related

Uploading fixtures to django running within docker container in an AWS EC2 instance

So I have a few operations I need to perform on a running instance - kind of a one-off thing. That involves uploading fixtures to it. The setup is an AWS EC2 instance on which a few Docker containers run, including a Django container. This is the Docker container into which I need to upload the fixtures. It is all part of a CI/CD pipeline, with a Postgres RDS database, an nginx proxy, and the Django container.
I can scp the fixtures into the AWS EC2 instance. Then I can SSH into the EC2 instance, and from there log into the running container (using docker exec).
That's all fine, but....
How can I access the fixtures that I uploaded to the EC2 instance (the Docker host) from within the Docker container?
Note: I'm aware I could commit the fixtures to git, in which case they would find their way into the Docker image and be readily available there. If possible, I'd rather scp them directly to EC2 and manage them from the command line (because small changes might be needed to the fixtures, and committing them means I have to run the whole pipeline and wait a few minutes each time...)
You would want to attach a volume to your Docker container. By attaching a volume, you are essentially sharing a folder from the host machine with the running Docker container.
You can attach a volume when you start the container:
docker run -d \
--name=my-container-name \
-v /host/path:/container/path \
myimage:latest
With Docker Compose, you can add a volume like this:
version: "3.9"
services:
myservice:
image: myimage
volumes:
- "/host/path:/container/path"
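Once the host path is mounted, a minimal sketch of getting the fixtures loaded could look like this (paths, file name, and container name are placeholders; it assumes a standard Django project where manage.py sits in the container's working directory):
# Copy the fixtures from your machine into the host directory that is mounted into the container
scp fixtures.json ec2-user@<ec2-host>:/host/path/
# Load them inside the running Django container straight from the mounted path
docker exec -it my-container-name python manage.py loaddata /container/path/fixtures.json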

Source files of GCE instance from an image

I have created a VM instance using a docker image stored in Google container registry. The instance is working fine and I'm happy. What I am trying to figure out is where the source files are stored.
When I access the VM via SSH and navigate to /usr/src/app I don't see anything. In my Dockerfile I specified that directory as the app directory.
WORKDIR /usr/src/app
Where can I see the source code?
The source files are located at the path you specified, but inside the container in question, not on the VM's own filesystem.
After you access the VM via SSH, you can run docker ps to get the list of containers running on your instance. You'll then be able to open a shell in your running container with the following command:
docker exec -it <container name> /bin/bash
Once in there you should be able to see your source code in /usr/src/app.
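For a quick check without an interactive shell, something like the following should also work (the container name is whatever docker ps reports):
# List the app directory directly inside the running container
docker exec <container name> ls -la /usr/src/app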

How to check if EFS is mounted on a docker container?

I have created a task definition and mounted EFS, but I'm not sure if it works.
I want to verify that my EFS file system is actually mounted in the running container.
How can I verify this?
One way is to add a file to the folder inside your container:
ssh into the underlying ECS EC2 instance with ssh -i "somekey.pem" ec2-user@ec2-xxxxxxxxx.eu-west-1.compute.amazonaws.com
Run docker ps to get the id of your container.
Run docker exec -it CONTAINERID /bin/bash to get a shell inside the container. Inside the container, create or copy a file into the EFS folder.
Now go to the EFS console and verify that the metered size is greater than 0, which means your file has landed in EFS.
Alternatively, you can run df -h, which lists the mounted filesystems on Linux machines.
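A minimal sketch of that check from inside the container (CONTAINERID is a placeholder; the exact source depends on how the volume was mounted, but an EFS mount typically shows up as an NFS filesystem named after the file system's DNS name, e.g. fs-xxxxxxxx.efs.<region>.amazonaws.com):
# Show mounted filesystems inside the container and look for the EFS entry
docker exec -it CONTAINERID df -h
# Or filter for NFS mounts specifically
docker exec -it CONTAINERID sh -c "mount | grep nfs"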

How to share folder between host Ubuntu and already running Docker container on it?

I am using Amazon ECS and launched an EC2 instance with the ECS Linux AMI. On that EC2 instance I have multiple Docker containers, each running a different application.
I want to create a folder on the host where I can add/edit anything and have the changes reflected in a shared folder inside the Docker container. I have tried the following command:
docker run --name {new_container_name} -d -p 80:80 -v /{source folder}:/{destination folder}:ro {container name}
I referred to this link.
Actual result: it creates a new container similar to {container name}.
But what I want is a way to share a folder with the currently running Docker container. How can I do that?

How to mount file from host to docker container on ECS

I want to mount a file from my EC2 host into a container running on ECS. Every change made to the file on the host should be reflected in the file inside the container.
What I have tried is as follows:
Dockerfile:
FROM nginx
COPY conf/default.conf /etc/nginx/conf.d/default.conf
VOLUME /etc/nginx/conf.d
RUN ln -sf conf/default.conf /etc/nginx/conf.d/
EXPOSE 80
Then I pushed the image to an ECR repo, created a task with a volume (source path: conf) and a mount point (/etc/nginx/conf.d), and created a service to run the container. However, changes I make on the host server in /conf/default.conf do not show up in the container's /etc/nginx/conf.d/default.conf.
I know there is docker run -v, but since the container is launched by ECS as part of this setup, I can't start it manually.
Any suggestion would be appreciated.
Just a suggestion: in your build system, copy the file to S3, and then in your Docker container run a script on startup that copies the latest file back from S3 into place. Then you won't have to rebuild/redeploy your Docker container constantly.
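A minimal sketch of that approach, assuming the image has the AWS CLI installed and the task role allows s3:GetObject on a hypothetical bucket named my-config-bucket:
#!/bin/sh
# entrypoint.sh - pull the latest config from S3, then start nginx in the foreground
# (bucket name and object key are placeholders)
aws s3 cp s3://my-config-bucket/default.conf /etc/nginx/conf.d/default.conf
exec nginx -g "daemon off;"
In the Dockerfile you would COPY this script into the image and set it as the ENTRYPOINT, so every new task picks up the current version of the file without an image rebuild.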