How to check if EFS is mounted on a Docker container? - amazon-web-services

I have created a task definition and mounted EFS, but I'm not sure if it works.
So I wanted to verify that my EFS file system is mounted to the running container.
How can I verify this?

One way is to add a file to the EFS folder inside your container (sketched after these steps):
SSH into the underlying ECS EC2 instance with ssh -i "somekey.pem" ec2-user@ec2-xxxxxxxxx.eu-west-1.compute.amazonaws.com
Run docker ps to get the ID of your container.
Run docker exec -it CONTAINERID /bin/bash to get a shell inside the container. Inside the container, create or copy a file to the EFS folder.
Now go to the EFS console and verify that the Metered size is greater than 0, meaning your file is in EFS.
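Put together, the check looks roughly like this (a sketch; the key name, host, container ID, the mount path /mnt/efs, and the file system ID fs-c98c345 are placeholders):

ssh -i "somekey.pem" ec2-user@ec2-xxxxxxxxx.eu-west-1.compute.amazonaws.com
docker ps                                               # note your container's ID
docker exec CONTAINERID touch /mnt/efs/efs-write-test   # write a test file to the EFS folder
aws efs describe-file-systems --file-system-id fs-c98c345 \
    --query 'FileSystems[0].SizeInBytes.Value'          # metered size; should now be > 0

Note that the metered size is updated periodically, so it can lag a few minutes behind the write.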

Alternatively, you can run df -h, which lists the mounted volumes on Linux machines; an EFS mount shows up as an NFS file system.
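For example, from inside the container (output abbreviated; the file system ID and the mount path /mnt/efs are hypothetical):

df -h /mnt/efs
# fs-c98c345.efs.eu-west-1.amazonaws.com:/  8.0E  0  8.0E  0%  /mnt/efs
mount | grep nfs4    # EFS is mounted as an nfs4 file system

If nothing NFS-like shows up, the volume was not mounted into the container.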

Related

AWS ECS Fargate - specify file path in host volumes

I am trying to create a cluster in ECS Fargate with a Docker Hub image. To spin up the container I have to upload a file, config.json, to the host so that its path can be mounted as -v /configfile/path:/etc/selenium
Locally, we can specify the path of the JSON file like below.
docker run -d --name selenium -p 4444:4444 -v /E/config.json:/etc/selenium/
But I am not sure where to upload the config.json file in ECS Fargate, or how to use that path in a volume.
It is not possible to do anything related to the host when using Fargate. The whole point of Fargate is that it is a "serverless" container runtime, so you have no access to the underlying server at all.
You need to look into other ways of achieving your requirement in Fargate. For example, you could first copy the file to S3, and then configure the entrypoint script in your Docker container to download the file from S3 before starting Selenium.
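A minimal entrypoint sketch along those lines (the bucket name and the Selenium entrypoint path are assumptions; the image needs the AWS CLI, and the task role needs s3:GetObject):

#!/bin/sh
set -e
# Pull the config from S3 at startup instead of mounting it from the host.
aws s3 cp s3://my-config-bucket/config.json /etc/selenium/config.json
# Hand off to the image's original entrypoint (path assumed here).
exec /opt/bin/entry_point.sh "$@"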

Need to create the directory on EFS with kubernetes pod

What should the definition file be if I need to mount my EFS in a pod and then create a directory with the correct permissions?
EFS ID, let's say: fs-c98c345
I need to run the commands below after the EFS is mounted on the pod.
mkdir <efs mount path>/prometheus
chown -R 1000:2000 <efs mount path>/prometheus
To manage and use EFS from Kubernetes (on AWS) you can use the efs-provisioner. You can install and configure the efs-provisioner through the Helm chart available here: https://github.com/helm/charts/tree/master/stable/efs-provisioner.
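Once the EFS volume is mounted into the pod through the provisioner's PersistentVolumeClaim, the directory and ownership from the question can be set up with a one-off exec; a sketch, assuming a hypothetical pod name and a mount path of /mnt/efs:

kubectl exec -it my-pod -- sh -c 'mkdir -p /mnt/efs/prometheus && chown -R 1000:2000 /mnt/efs/prometheus'

In practice this kind of setup is usually put in an initContainer so it runs before the main container starts.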

How to share folder between host Ubuntu and already running Docker container on it?

I am using Amazon ECS and launched the EC2 instance with the ECS Linux AMI. On that EC2 instance I have multiple Docker containers, each running a different application.
I want to create a folder on the host where I can add/edit anything and have it reflected in a shared folder inside the Docker container. I have tried using the following command:
docker run --name {new_container_name} -d -p 80:80 -v /{source folder}:/{destination folder}:ro {container name}
I referred to this link.
Actual result: it creates a new container similar to {container name}.
But what I want is a way to create a shared folder with the currently running Docker container. How can I do that?
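Docker cannot attach a new bind mount to a container that is already running, so the usual workaround is to commit the running container and start a new one with the -v flag. A sketch (container, image, and folder names are hypothetical):

docker commit running_container snapshot_image    # preserve the container's current state as an image
docker run -d --name new_container -p 80:80 -v /host/shared:/container/shared:ro snapshot_image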

How to mount file from host to docker container on ECS

I want to mount a file from my EC2 host into a container running on ECS. Every change to the file on the host should be reflected in the file inside the container.
What I have tried is as follows:
Dockerfile:
FROM nginx
COPY conf/default.conf /etc/nginx/conf.d/default.conf
VOLUME /etc/nginx/conf.d
RUN ln -sf conf/default.conf /etc/nginx/conf.d/
EXPOSE 80
Then I pushed the image to an ECR repo, created a task with a volume (source path: conf) and a mount point (/etc/nginx/conf.d), and created a service to run the container. However, changes I make on the host server in /conf/default.conf are not reflected in the container at /etc/nginx/conf.d/default.conf.
I know there is docker run -v, but since ECS starts the container itself, I have no way to pass that flag.
Any suggestion would be appreciated.
Just a suggestion: in your build system, copy the file to S3, and then in your Docker container run a script on startup that copies the latest file back from S3 into place. Then you won't have to rebuild/redeploy your Docker container constantly.
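Since the question wants ongoing changes to propagate, the startup script could also re-sync periodically. A sketch (the bucket, key, and 60-second interval are assumptions; the image needs the AWS CLI and the task role needs s3:GetObject):

#!/bin/sh
set -e
CONF=/etc/nginx/conf.d/default.conf
aws s3 cp s3://my-config-bucket/default.conf "$CONF"
# Background loop: re-download and reload nginx only when the file changes.
(
  while sleep 60; do
    if aws s3 cp s3://my-config-bucket/default.conf /tmp/default.conf \
        && ! cmp -s /tmp/default.conf "$CONF"; then
      cp /tmp/default.conf "$CONF"
      nginx -s reload
    fi
  done
) &
exec nginx -g 'daemon off;'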

Making a directory inside a Docker container accessible from another container

I have a Docker container named "data-container" which has a directory named /var/data. This directory is in fact an ObjectiveFS volume which is stored in AWS S3.
In addition, I have a second container named "app-container" which needs access to the /var/data directory in "data-container".
Is there a pure-Docker way of allowing "app-container" to read/write data to the directory in "data-container"? I could probably do that using NFS but I'm not sure that's the right way to go.
It's simple!
First, create a data volume container:
docker create -v /data --name mystore ubuntu /bin/true
Then you can mount this /data volume into another container via the --volumes-from parameter, like so:
docker run -d --volumes-from mystore --name db1 postgres
You can find this described in the Docker docs:
https://docs.docker.com/engine/userguide/containers/dockervolumes/
Chapter: Creating and mounting a data volume container
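Applied to the containers in the question, that looks like this (a sketch; it assumes "data-container" already declares /var/data as a volume, e.g. via a VOLUME instruction in its Dockerfile, and that my-app is a hypothetical image):

docker run -d --volumes-from data-container --name app-container my-app
docker exec app-container ls /var/data    # verify the shared directory is visible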