How to copy data from docker container to ECS on startup (AWS)?

I have two containers: one is a web server based on Node.js with an assets directory. The other container is nginx, which proxies page requests to the web server and serves static files from the assets directory.
I created an AWS cluster and an EC2 instance, built and pushed docker images to the registry, and made tasks to deploy my applications, but I can't share the assets directory with nginx because the directory is not part of that container.
To solve my problem I figured I could create an EFS volume and attach it, add permissions for ec2-user, and make the directory available at the path /var/html/assets.
So how do I copy the assets content from my web-server docker container to /var/html/assets?
I want to make it public / shared because soon I will add more servers which should also place assets in this common directory.
The process should be automated and run on each deployment. Any suggestions? Thanks!

To copy the assets content from your web-server docker container to your host machine,
say you want to save the assets from the container to /var/html/assets on the host machine, use this command to run your container:
docker run --name=nginx -d -v /var/html/assets:[Your Container path] -p 5000:80 nginx
-v /var/html/assets:[Your Container path] sets up a bind mount that links the [Your Container path] directory inside the nginx container to the /var/html/assets directory on the host machine. Docker uses a : to split the host path from the container path, and the host path always comes first.
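For the setup in the question, the same host directory can be mounted into both containers so nginx serves whatever the web server writes into it. A minimal sketch, assuming the web-server image is called my-web-image and listens on port 3000 (both assumptions):
# Host directory shared by both containers
mkdir -p /var/html/assets

# Web-server container: mount the shared directory at the same path inside the container
docker run -d --name=web -v /var/html/assets:/var/html/assets -p 3000:3000 my-web-image

# Nginx container: mount the same directory (read-only) where nginx expects its static files
docker run -d --name=nginx -v /var/html/assets:/usr/share/nginx/html/assets:ro -p 5000:80 nginx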
Hope it will help!

I solved the problem by making the host directory writable with chmod 777 /var/html/assets, then adding a volume that points to the host directory and applying it to both the web and nginx containers. When the web container starts, it runs a cp instruction to copy the assets into the mounted (host) directory. Nginx then sees the populated directory and can use it.
Note: this is a temporary workaround; giving rwx access to the directory for everyone is not a good idea for security reasons.
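For reference, the copy-on-start step described above can live in the web container's entrypoint. A minimal sketch of such a script, assuming the image bundles its assets under /app/assets and starts with node server.js (both paths are assumptions):
#!/bin/sh
# Copy the bundled assets into the shared (host-mounted) directory so nginx can serve them,
# then hand control to the Node.js server.
cp -r /app/assets/. /var/html/assets/
exec node server.js
Setting this script as the image's ENTRYPOINT (or as the container's entryPoint in the ECS task definition) makes the copy run on every deployment.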

Related

How to create folder on Elastic Beanstalk server to install LetsEncrypt SSL certificate with AcmePHP

I have a site running on an Elastic Beanstalk single instance server and want to add automated SSL certificate generation from LetsEncrypt using the AcmePHP library.
The library tries to store the certificates in ~/.acmephp, which the server responds to with an error
Failed to create "/home/webapp/.acmephp": mkdir(): Permission denied.
The AcmePHP library doesn't have a built-in option to change the path, and rather than forking and modifying the script, I'd like to be able to store the files in the default directory.
Does anyone know how I can give the app permission to create this directory, outside of the web root, or how I can make the server create it automatically and have it be available to the app?
It looks like, since it's being run by the webapp user, when AcmePHP tries to store the certificate under that user's home directory it fails because that directory doesn't exist (afaik the webapp user only runs httpd and definitely doesn't have a home directory).
A very dirty workaround could be creating that home folder from a config file in the .ebextensions folder of your project. The file would be .ebextensions/create_home.config and it would contain something like this:
files:
  "/tmp/create-home.sh":
    mode: "000755"
    content: |
      #!/usr/bin/env bash
      mkdir -p /home/webapp
      chown webapp:webapp -R /home/webapp

commands:
  01_create:
    command: "/tmp/create-home.sh"
That script is run by the root user; it creates /home/webapp and then changes ownership of the folder to the webapp user and group. Hope it helps.
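After the next deployment, a quick way to verify the result from an SSH session (user and path as in the config above):
# The home directory should now exist and be owned by webapp:webapp
ls -ld /home/webapp
# A write as the webapp user should succeed, so AcmePHP can create ~/.acmephp
sudo -u webapp touch /home/webapp/.write-test && echo "webapp can write"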

Starting NginX with my modified nginx.conf on ECS

I have an environment in AWS with an ECS cluster, an EFS source and some services running on this cluster.
One of my services is the NginX web server, which I use to serve our site and our services. To keep some sensitive and static configuration files we have chosen the EFS service, so each service creates a volume from this EFS and mounts it every time a container starts.
The problem is with NginX. I want to store my nginx.conf file in an EFS folder, and after the NginX service starts, the container should copy this file to the /etc/nginx/ folder so that my NginX server starts with my configuration.
I've successfully built my own image that includes my configuration, but this is not what we want: it would mean building a new image every time we want to change a line in nginx.conf.
I've tried to create a script that runs every time the container starts and copies my configuration, but I didn't manage to make it work on ECS. Either NginX failed to reload, or the syntax was wrong, or the file was not available.
#!/bin/bash
cp /efs/nginx.conf /etc/nginx/
nginx -s reload
I considered creating a cron job to run every X minutes and copy my nginx.conf to /etc/nginx, but this seems like a stupid approach.
I made something like 60 different task definition revisions trying to figure out how this CMD Environment option works on ECS. Most of them had to do with the syntax, and I got back errors like "invalid option: bash" or "invalid option: /tmp/1.sh" etc.
Samples:
1. Command: ["cp","/efs/nginx.conf /etc/nginx/"]
2. Entry point: ["nginx","-g","daemon off;"]
   Command: ["cp /efs/nginx.conf /etc/nginx/"]
3. Entry point: ["nginx","-g","daemon off"]
   Command: ["/bin/sh","cp","/efs/nginx.conf/","/etc/nginx/"]
4. Command: ["[\"cp\"","\"/efs/nginx.conf\"","\"/etc/nginx/\"]","[\"nginx\"","\"-g\"","\"daemon off;\"]"]
5. Command: ["cp /efs/nginx.conf /etc/nginx/","nginx -g daemon off;"]
6. Command: ["cp","/efs/nginx.conf /etc/nginx/","nginx -g daemon off;"]
Does anyone know how to do this, or has anyone already implemented this solution on ECS?
That is, to replace /etc/nginx/nginx.conf with a modified one from a mounted volume?
Thanks in advance
SOLUTION:
As I mentioned in my question above, I'd like to use a static nginx.conf file, stored in an EFS folder, inside my nginx service container.
My Dockerfile is as simple as this:
FROM nginx
EXPOSE 80
RUN mkdir /etc/nginx/html
Through the ECS task definition I create a volume and then a mount point, which is an easy process and works fine. The problem was in the entrypoint field, which is supposed to contain the path to my script.
In the ECS task definition Environment entrypoint field I put
sh,-c,/efs/docker-cmd-nginx.sh
and my script is just the following
#!/bin/dash
cp /efs/nginx.conf /etc/nginx/ &&
nginx -g "daemon off;"
PS: The problem was probably one of the following:
- in my script I wasn't wrapping daemon off; in double quotes, but instead I was quoting the whole nginx -g daemon off; line;
- my script was trying to reload nginx, which was not even running yet;
- my attempt to put the commands separately in my task's entrypoint was wrong, syntax-wise for sure and maybe strategy-wise as well.
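For anyone debugging the same thing, the entrypoint override can be sanity-checked locally before wiring it into the task definition. A sketch, assuming the stock nginx image and a local directory standing in for the EFS mount at /efs (both assumptions):
# Simulate the EFS mount with a local directory that contains nginx.conf
docker run -d --name nginx-test \
  -v /path/to/local/efs:/efs \
  --entrypoint sh \
  nginx -c "cp /efs/nginx.conf /etc/nginx/ && exec nginx -g 'daemon off;'"
This mirrors the sh,-c,... entrypoint above: copy the file first, then start nginx in the foreground.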

Giving Docker access to db file outside container

I'm trying to test a Django app managed by Docker. Since it's a development project only used by me, I'm using a sqlite3 database backend. However, because I'll be populating this test database with a lot of generated data, and because I don't fully trust Docker, I want to store this sqlite3 db file outside of the container in my home directory, to ensure it doesn't get deleted or lost.
However, by design, Docker makes it difficult for programs inside containers to access files outside of those containers. How do I update my Docker configuration to allow access to this one specific db file in my home directory?
You can mount a host directory into your docker container using the -v flag.
For details see this answer: https://stackoverflow.com/a/23455537/7695859.
docker run -v /host/directory:/container/directory -other -options image_name command_to_run
For a more detailed understanding, see these official docs:
Use volumes
Manage data in Docker
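Applied to the question, a minimal sketch that keeps the sqlite3 file in a host directory under your home folder (the image name, the /data mount point and the settings snippet are assumptions):
# Directory on the host that will hold the sqlite3 file
mkdir -p ~/django-data

# Mount it into the container; the db file now survives container removal
docker run -d \
  -v ~/django-data:/data \
  -p 8000:8000 \
  my-django-image \
  python manage.py runserver 0.0.0.0:8000

# In settings.py, point the database at the mounted path, e.g.
# DATABASES["default"]["NAME"] = "/data/db.sqlite3"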

Mount host system /app as read only inside docker container

How would you mount the host systems /app directory as read only within the docker container?
Use case: developing a Django application running inside a docker container, and you want Django to reload each time changes are made to /app code on the host system.
Creating a read-only bind mount is documented under Use a read-only bind mount.
In your case the command will look something like:
docker run -v $(pwd)/app:/app:ro ...
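For the Django use case described above, the full command might look like the following; the image name is an assumption, and the :ro suffix is what makes the mount read-only:
# Mount the host's ./app read-only; Django's autoreloader still sees edits made on the host
docker run --rm -it \
  -v "$(pwd)/app:/app:ro" \
  -w /app \
  -p 8000:8000 \
  my-django-image \
  python manage.py runserver 0.0.0.0:8000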

Sharing directories in a Docker container both with a Dockerfile and after the container is running

Sharing data between a running docker container and my host (on AWS) seems overly complicated. From the docker documentation it seems as if I need to specify volumes when I start the container.
I found this: https://github.com/synack/docker-rsync
But this watches recursively and copies only from the host machine to the docker container.
I'm looking for a way to create (preferably in a Dockerfile) a folder visible on my host machine on AWS, so that I can scp files into that folder and have them visible in my docker container. I also want my docker container to be able to write to that folder so that if the container is stopped I won't lose those files.
As a side note, I already declared in my Dockerfile
VOLUME /Training-master
but I don't know how to access it from my machine, and when I stopped the container I lost the data.
Does anyone know how to do this or can they point me in the right direction?
What you are looking for is provided by docker runtime options, documented here: http://docs.docker.com/engine/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume
At the end of it, it's clearly mentioned:
Note: The host directory is, by its nature, host-dependent.
For this reason, you can’t mount a host directory from Dockerfile
because built images should be portable. A host directory wouldn’t
be available on all potential hosts.
Like Raghav said, a host-mounted folder cannot be created and shared from a Dockerfile because of image portability.
But after you create the image you can run this command, and it will create a shared folder between the host and the container. Be careful, because you can shadow an existing directory in the docker container if the mount path has the same name:
$ sudo docker run -itd -v /home/ubuntu/Sharing:/Share dockeruser/imageID:version bash
/home/ubuntu/Sharing -- path to the sharing folder on the host computer
/Share -- path to the sharing folder inside the container
dockeruser/imageID:version -- the name of your image
-v -- specifies that you are mounting a volume
-d -- runs the container detached, in the background
bash -- the command for the container to execute
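With that in place, files copied onto the host show up in the container, and files the container writes survive a stop; a short sketch (host address and container name are placeholders):
# From your workstation: copy files into the shared host directory on the AWS instance
scp ./data.csv ec2-user@<ec2-host>:/home/ubuntu/Sharing/

# On the host: the same files are visible inside the running container
docker exec <container-name> ls /Share

# Anything the container writes under /Share stays in /home/ubuntu/Sharing
# on the host, even after the container is stopped or removed.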
Just for reference for Windows users:
1) You can mount a host folder into a container by
docker run -ti -v C:\local_folder\:c:\container_folder container1
2) Alternatively, you can create a volume:
docker volume create --name temp_volume
See the absolute path of the volume by:
docker volume inspect temp_volume
The mountpoint is the absolute path of the volume. You can add/remove files from that path. Then you can mount it to the container by:
docker run -ti -v temp_volume:c:\tmploc container1
Notice that both host and container are Windows machines.