AWS: CodeDeploy for a Docker Compose project? - django

My current objective is to have Travis deploy our Django+Docker-Compose project upon successful merge of a pull request to our Git master branch. I have done some work setting up AWS CodeDeploy, since Travis has built-in support for it. When I got to the AppSpec and actual deployment part, at first I tried to have an AfterInstall script do docker-compose build and then have an ApplicationStart script do docker-compose up. The containers that have images pulled from the web are our PostgreSQL container (named db, image aidanlister/postgres-hstore, which is the usual postgres image plus the hstore extension), the Redis container (uses the redis image), and the Selenium container (image selenium/standalone-firefox). The other two containers, web and worker, which are the Django server and Celery worker respectively, use the same Dockerfile to build an image. The main command is:
CMD paver docker_run
which uses a pavement.py file:
from paver.easy import task
from paver.easy import sh

@task
def docker_run():
    migrate()
    collectStatic()
    updateRequirements()
    startServer()

@task
def migrate():
    sh('./manage.py makemigrations --noinput')
    sh('./manage.py migrate --noinput')

@task
def collectStatic():
    sh('./manage.py collectstatic --noinput')

# find any updates to existing packages, install any new packages
@task
def updateRequirements():
    sh('pip install --upgrade -r requirements.txt')

@task
def startServer():
    sh('./manage.py runserver 0.0.0.0:8000')
Here is what I (think I) need to make happen each time a pull request is merged:
Have Travis deploy changes using CodeDeploy, based on the deploy section in .travis.yml tailored to our CodeDeploy setup (a rough sketch of such a section follows this list)
Start our Docker containers on AWS after successful deployment using our docker-compose.yml
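A deploy section for CodeDeploy looks roughly like the sketch below. This is only an illustration: the bucket, key, application, and deployment group names are placeholders rather than real values, and the AWS credentials are assumed to be set as encrypted Travis environment variables.
deploy:
  provider: codedeploy
  access_key_id: $AWS_ACCESS_KEY_ID
  secret_access_key: $AWS_SECRET_ACCESS_KEY
  bucket: my-codedeploy-bucket            # S3 bucket holding the revision bundle (placeholder)
  key: latest/app.zip                     # object key of the uploaded bundle (placeholder)
  bundle_type: zip
  application: my-app                     # CodeDeploy application name (placeholder)
  deployment_group: my-deployment-group   # CodeDeploy deployment group (placeholder)
  region: us-east-1
  on:
    branch: master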
How do I get this second step to happen? I'm pretty sure ECS is not what is needed here. My current status is that I can get Docker started with sudo service docker start, but I cannot get docker-compose up to succeed. Though deployments are reported as "successful", this is only because the docker-compose up command is run in the background in the Validate Service section script. In fact, when I try to run docker-compose up manually while ssh'd into the EC2 instance, I get stuck building one of the containers, right before the CMD paver docker_run part of the Dockerfile.

This took a long time to work out, but I finally figured out a way to deploy a Django+Docker-Compose project with CodeDeploy without Docker-Machine or ECS.
One thing that was important was to make an alternate docker-compose.yml that excluded the selenium container--all it did was cause problems and was only useful for local testing. In addition, it was important to choose an instance type that could handle building containers. The reason why containers couldn't be built from our Dockerfile was that the instance simply did not have the memory to complete the build. Instead of a t1.micro instance, an m3.medium is what worked. It is also important to have sufficient disk space--8GB is far too small. To be safe, 256GB would be ideal.
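A rough sketch of such an alternate file, in the version 1 compose syntax, with the services described in the question and the selenium service left out; the port mapping and link wiring are illustrative:
db:
  image: aidanlister/postgres-hstore
redis:
  image: redis
web:
  build: .                  # Dockerfile whose CMD is "paver docker_run"
  ports:
    - "8000:8000"
  links:
    - db
    - redis
worker:
  build: .                  # same Dockerfile; override the command with your Celery worker invocation
  links:
    - db
    - redis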
It is important to have an After Install script run service docker start when doing the necessary Docker installation and setup (including installing Docker-Compose). This is to explicitly start running the Docker daemon--without this command, you will get the error Could not connect to Docker daemon. When installing Docker-Compose, it is important to place it in /opt/bin/ so that the binary is used via /opt/bin/docker-compose. There are problems with placing it in /usr/local/bin (I don't exactly remember what problems, but it's related to the particular Linux distribution for the Amazon Linux AMI). The After Install script needs to be run as root (runas: root in the appspec.yml AfterInstall section).
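A rough sketch of that After Install script; the package manager assumes the Amazon Linux AMI, and the Compose version number is just an example:
#!/bin/bash
# scripts/after-install.sh (sketch) -- run as root via "runas: root" in appspec.yml
yum install -y docker                      # assumes the Amazon Linux AMI
service docker start                       # avoids "Could not connect to Docker daemon"
mkdir -p /opt/bin
curl -L "https://github.com/docker/compose/releases/download/1.8.0/docker-compose-$(uname -s)-$(uname -m)" \
    -o /opt/bin/docker-compose             # 1.8.0 is only an example version
chmod +x /opt/bin/docker-compose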
Additionally, the final phase of deployment, which is starting up the containers with docker-compose up (more specifically /opt/bin/docker-compose -f docker-compose-aws.yml up), needs to be run in the background with stdin, stdout, and stderr redirected to /dev/null:
/opt/bin/docker-compose -f docker-compose-aws.yml up -d > /dev/null 2> /dev/null < /dev/null &
Otherwise, once the server is started, the deployment will hang because the final script command (in the ApplicationStart section of my appspec.yml in my case) doesn't exit. This will probably result in a deployment failure after the default deployment timeout of 1 hour.
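Tying the hooks together, the appspec.yml ends up looking roughly like the sketch below; the destination path and script names are illustrative, and start-containers.sh stands in for whatever script runs the backgrounded command above:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/app          # wherever the revision should land (illustrative)
hooks:
  AfterInstall:
    - location: scripts/after-install.sh     # Docker and Docker-Compose setup
      runas: root
  ApplicationStart:
    - location: scripts/start-containers.sh  # runs the backgrounded docker-compose command
      runas: root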
If all goes well, then the site can finally be accessed at the instance's public DNS and port in your browser.

Related

How to retrieve the docker image of a deployment on heroku via circleci

I have a Django application running locally and I've set up the project on CircleCI with Python and Postgres images.
If I understand correctly what is happening, CircleCI uses those images to build a container in which it tests my application code against a database.
Then I'm using the job heroku/deploy-via-git to deploy it to Heroku when the tests pass.
Now I think Heroku also uses some images to run the application.
I would like to get the image Heroku uses so I can run my site locally on another machine.
That is: pull the image, push it to Docker Hub, and finally pull it down onto my other computer so that all I have to run is docker compose up.
Here is my CircleCI configuration file:
version: 2.1
docker-auth: &docker-auth
  auth:
    username: $DOCKERHUB_USERNAME
    password: $DOCKERHUB_PASSWORD
orbs:
  python: circleci/python@1.5.0
  heroku: circleci/heroku@0.0.10
jobs:
  build-and-test:
    docker:
      - image: cimg/python:3.10.2
      - image: cimg/postgres:14.1
        environment:
          POSTGRES_USER: theophile
    steps:
      - checkout
      - run:
          command: pip install -r requirements.txt
          name: Install Deps
      - run:
          name: Run MIGRATE
          command: python manage.py migrate
      - run:
          name: Run loaddata from Json
          command: python manage.py loaddata datadump.json
      - run:
          name: Run tests
          command: pytest
workflows:
  heroku_deploy:
    jobs:
      - build-and-test
      - heroku/deploy-via-git:
          requires:
            - build-and-test
I don't know if this is possible; if not, what would be the best way to proceed? (I assume there are a lot of possibilities.)
I was considering building an image from my local directory with docker compose up, using that image directly on CircleCI, and then reusing it on another computer. But building images inside images with CircleCI seems really messy and I'm not sure how I should proceed.
I've tried to pull images from Heroku, but it seems I can only pull the code or get/modify the database; I can't get the image builds themselves.
I hope this question is relevant and clear, as the CircleCI and Heroku documentation isn't very clear and this is my first post on Stack Overflow!
Thanks in advance
Heroku's platform is proprietary, so we can't be sure how it works internally.
We know that their stacks are based on Ubuntu LTS releases, and we know that they use open-source buildpacks to compile application slugs from source code, but details about the underlying infrastructure are murky. They certainly don't provide base images like heroku/python:3.11.0 for you to download.
If you want to use the same image locally, on CircleCI, and Heroku, a better option would be to start deploying with Heroku's Container Registry instead of Git. This allows you to build an image locally, push it into the container registry, and release it as the next version of your application.
I suggest you read the entire documentation page linked above, but the short version is:
Log into the container registry using the Heroku CLI:
heroku container:login
Assuming you already have a Dockerfile for your application, build and push an image:
heroku container:push web
In this case we are building from the Dockerfile and pushing the resulting image to be used as the web process.
Release your application:
heroku container:release web
That's a basic Docker deployment from your local machine, and even if that's not your final plan I suggest you start by getting that working.
From there, you have options. One option would be to move this flow to CircleCI—continue to build images there, but have CircleCI push the resulting container to Heroku's Container Registry.
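A rough sketch of what that CircleCI job could look like, sitting under jobs: alongside build-and-test and wired into the workflow in place of heroku/deploy-via-git. The job name, the HEROKU_API_KEY and HEROKU_APP_NAME environment variables, and the presence of a Dockerfile in the repository root are all assumptions here, not part of your existing config:
  build-and-push-image:
    docker:
      - image: cimg/python:3.10.2
    steps:
      - checkout
      - setup_remote_docker                 # gives the job a Docker engine to build with
      - run:
          name: Build image
          command: docker build -t registry.heroku.com/$HEROKU_APP_NAME/web .
      - run:
          name: Push to Heroku Container Registry
          command: |
            echo "$HEROKU_API_KEY" | docker login --username=_ --password-stdin registry.heroku.com
            docker push registry.heroku.com/$HEROKU_APP_NAME/web
      - run:
          name: Release
          command: |
            curl https://cli-assets.heroku.com/install.sh | sh
            heroku container:release web --app "$HEROKU_APP_NAME"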
Another option might be as you suggest in your question: to build images locally and use them with both CircleCI and Heroku.

How to run a docker image from within a docker image?

I run a dockerized Django-celery app which takes some user input/data from a webpage and (is supposed to) run a unix binary on the host system for subsequent data analysis. The data analysis takes a bit of time, so I use celery to run it asynchronously. The data analysis software is dockerized as well, so my django-celery worker should do os.system('docker run ...'). However, celery says docker: command not found, obviously because docker is not installed within my Django docker image. What is the best solution to this problem? I don't want to run docker within docker, because my analysis software should be allowed to use all system resources and not just the resources assigned to the Django image.
I don't want to run docker within docker, because my analysis software should be allowed to use all system resources and not just the resources assigned to the Django image.
I didn't catch the causal relationship here. In fact, we just need to add two things to your Django setup:
Follow "Install client binaries on Linux" in the Docker docs to add the prebuilt docker client binary to your Django image, so the image has the docker command (the client only, not the daemon).
When starting the Django container, add a /var/run/docker.sock bind mount. This lets the Django container talk directly to the Docker daemon on the host machine and start the data-analysis container on the host. Because the analysis container does not run inside the Django container, the two have separate system resources; in other words, the analysis container's resources are not limited by those of the Django container.
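A minimal sketch of that pattern (image names are placeholders):
# start the Django/Celery container with the host's Docker socket mounted,
# so the docker client inside it talks to the host's daemon
docker run -d \
    -v /var/run/docker.sock:/var/run/docker.sock \
    my-django-image

# a Celery task can then launch the analysis tool as a sibling container on the host,
# e.g. via os.system("docker run --rm my-analysis-image ...")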
A sample using the official docker image, which already has the docker client in it:
root@pie:~# ls /dev/fuse
/dev/fuse
root@pie:~# docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker /bin/sh
/ # ls /dev/fuse
ls: /dev/fuse: No such file or directory
/ # docker run --rm -it -v /dev:/dev alpine ls /dev/fuse
/dev/fuse
You can see that although the first container does not have access to the host's /dev folder, the container launched from inside it runs directly on the host's daemon and can really have separate resources (here, the host's /dev bind-mounted in).
If the above is what you need, then it's the right solution for you. Otherwise, you will have to install the analysis tool in your Django image.

Elastic Beanstalk deleting generated files on config changes

On Elastic Beanstalk, with an AWS Linux 2 based environment, updating the Environment Properties (i.e. environment variables) of an environment causes all generated files to be deleted. It also doesn't run container_commands as part of this update.
So, for example, I have a Django project with collectstatic in the container commands:
05_collectstatic:
  command: |
    source $PYTHONPATH/activate
    python manage.py collectstatic --noinput --ignore *.scss
This collects static files to a folder called staticfiles as part of deploy. But when I do an environment variable update, staticfiles is deleted. This causes all static files on the application to be broken until I re-deploy, which is extremely undesirable.
This behavior did not occur on AWS Linux 1 based environments. The difference appears to be that AWS Linux 2 based environments replace the /var/app/current folder during environment variable changes, where AWS Linux 1 based environments did not do this.
How do I fix this?
Research
I can verify that the container commands are not being run during an environment variable change by monitoring /var/log/cfn-init.log; no new entries are added to this log.
This happens with both rolling update type "disabled" and "immutable".
This happens even if I convert the environment command to be a platform hook, despite the fact that hooks are listed as running when environment properties are updated.
It seems to me like there are two potential solutions, but I don't know of an Elastic Beanstalk setting for either:
Have environment variable changes leave /var/app/current rather than replacing it.
Have environment variable changes run container commands.
The Elastic Beanstalk docs on container commands say "Leader-only container commands are only executed during environment creation and deployments, while other commands and server customization operations are performed every time an instance is provisioned or updated." Is this a bug in Elastic Beanstalk?
Related question: EB: Trigger container commands / deploy scripts on configuration change
The solution is to use a Configuration deployment platform hook for any commands that change the files in the deployment directory. Note that this is different from an Application deployment platform hook.
Using the example of the collectstatic command, the best thing to do is to move it from a container command to a pair of hooks, one for standard deployments and one for configuration changes.
To do this, remove the collectstatic container command. Then, make two identical files:
.platform/confighooks/predeploy/predeploy.sh
.platform/hooks/predeploy/predeploy.sh
Each file should have the following code:
#!/bin/bash
source $PYTHONPATH/activate
python manage.py collectstatic --noinput --ignore *.scss
You need two seemingly redundant files because different hooks have different trigger conditions. Scripts in hooks run when you deploy the app whereas scripts in confighooks run when you change the configuration of the app.
Make sure to make both of these files executable according to git or else you will run into a "permission denied" error when you try to deploy. You can check if they are executable via git ls-files -s .platform; you should see 100755 before any shell files in the output of this command. If you see 100644 before any of your shell files, run git add --chmod=+x -- .platform/*/*/*.sh to make them executable.

Amazon EC2 | CodeDeploy [React] - Deployment succeeds but build folder not populated

TL;DR The command npm run build takes forever to run on the Amazon EC2 [Ubuntu] instance when I try running it explicitly over SSH. Meanwhile, when I create a deployment using CodeDeploy, the deployment takes a good hour and "succeeds", but the build folder doesn't get populated, hence I am unable to view my website at the public URL of the EC2 instance. Also, the instance reachability check fails every time after I try to run the command explicitly, and then I have to stop and start the EC2 instance again! Woof!
Hello everyone, I am trying to deploy my MERN Stack application to AWS but I am stuck now!
Current Progress:
Added both Nginx configs. [Attaching images below]
Nginx is running and there is no problem there!
Added build-app.sh and referenced it from appspec.yml in the root directory. [View code below]
#!/bin/bash
#clear build directory
cd /home/ubuntu/badlav-app/badlav-client
sudo rm -rf build
sudo mkdir build
#client (Generates a new `build` directory)
cd /home/ubuntu/badlav-app/badlav-client
sudo sh set-prod-env-aws.sh
sudo rm -rf node_modules
sudo npm i
sudo npm run build
#server
cd /home/ubuntu/badlav-app/badlav-server
sudo sh set-prod-env.sh
#back to root
cd /home/ubuntu
appspec.yml
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu/badlav-app
hooks:
  BeforeInstall:
    - location: scripts/build-app.sh
      runas: root
Using the above appspec.yml file, the deployment using CodeDeploy succeeds but doesn't populate the build folder within /home/ubuntu/badlav-app/badlav-client/build.
So I tried to debug on my own and started running the commands one by one myself after SSH(ing :P) into the EC2 instance. But when I reach npm run build, the instance just hangs forever. Exhausted and with no option left, I terminate the task. Now, when I view my instance on the AWS Console, it has gone berserk! The instance reachability check fails! The only way I get my instance back is by stopping it and starting it again.
Since I am new to CI/CD, please don't judge my appspec.yml. It'd be great if any of you could suggest a better way; thanks for that! :)
To sum up, I want to be able to create a deployment using AWS CodeDeploy, but due to this npm run build taking so much time and hanging my server (instance reachability check fails!), I am unable to do so. Moreover, I am not even sure whether npm run build is a problem at all!
I would be more than happy to share any further details/screenshots in order to support my question. Please ask over.
Thanks in advance!
[Images: /etc/nginx/nginx.conf and /etc/nginx/conf.d/default.conf]
If you're using the EC2 free tier, chances are the instance has a low spec and little memory (a t2.nano has 0.5 GB and a t2.micro has 1 GB of memory).
Maybe npm run build consumes all of the available memory.
I often face the same problem with my Vue project.
Solution: do NOT use the free tier for medium and large projects. Upgrade your plan and use a better instance, e.g. a t2.medium.
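If you want to confirm that memory is the culprit before resizing, two quick checks on the instance with standard Linux tools:
free -m                                   # how much memory is actually available
sudo dmesg | grep -i "out of memory"      # OOM-killer messages if the build process got killed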

Getting ember to run under docker on Windows Quickstart

Working through this tutorial on setting up ember-cli in a Docker container:
http://www.rkblog.rk.edu.pl/w/p/setting-ember-cli-development-environment-ember-21/
Here are my steps:
Created docker-compose.yml in an empty folder on the host machine
Launched Docker Quickstart to get a terminal
Changed to the folder with the .yml
Ran the two docker-compose commands below from the terminal (added -d because without that you get a message that interactive mode is not supported)
Ran docker ps -a to verify that the container was running
Ran docker inspect CONTAINER_ID to find the ip address of the running container
Found the IP address at an odd location (172.17.0.2)
Attempted to access port 4200 on that IP from the host Windows machine's browser and also from the Docker command line via curl, but without success.
Ran docker ps -a and found that both containers that had been instantiated had exited.
Now if I try to start the container again it just exits immediately
docker-compose run -d --rm ember init
docker-compose run -d --rm ember server
What am I missing to get up and running? Do I need to open ports on the Default VM running in Virtualbox? How do I diagnose why the container keeps exiting?
First, I would suggest using docker-compose up; that is most likely what you want.
To see the logs for a detached container you can run docker logs <container name>. If there are any errors you'll see them there.
A likely cause of the "container exit" is that the process goes into the background. Docker requires a process to stay in the foreground, but many serve commands background themselves by default. To keep the process in the foreground you can sometimes use a flag like --foreground or --no-daemon, but I'm not sure if one exists for ember.
If that flag doesn't exist, it's likely that ember server is just checking if stdin/stdout are connected to a tty. By default they are not. You can add these lines to your docker-compose.yml to fix it:
stdin_open: True
tty: True
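For placement, those two lines go under the service definition in docker-compose.yml. A minimal sketch, assuming the service is named ember (as in the docker-compose run commands above) and serves on the tutorial's port 4200; the build and ports entries are illustrative:
ember:
  build: .              # or an image, matching the tutorial's compose file
  ports:
    - "4200:4200"
  stdin_open: True
  tty: True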
OK, finally resolved it. The issue with the module resolution may have been long-filename resolution on Windows, because after I moved the source folder to the root of the host I was able to get ember serve running under Windows.
Then from the terminal window I ran the commands to init and launch the ember server:
docker-compose run -d --rm ember init
docker-compose run -d --rm ember server
Then did:
docker-compose up -d
which launched the containers successfully and then I was able to access the Ember page served up at the IP:Port specified earlier in the comments
http://192.168.99.100:4200/