Reuse a Cloud Foundry app without having to rebuild from scratch - Django

I deploy a Django application with Cloud Foundry. Building the app takes some time, but I need to launch the application with different start commands, and the only solution I have today is to fully rebuild the application each time.
With Docker, changing the start command is very easy and doesn't require rebuilding the whole container, so there must be a more efficient way to do this.
Here are the applications launched:
FrontEndApp-Prod: The Django App using gunicorn
OrchesterApp-Prod: The Django Celery Camera & Heartbeat
WorkerApp-Prod: The Django Celery Workers
All these apps are basically identical; they just use different routes, configurations and start commands.
Below is the manifest.yml file I use:
defaults: &defaults
  timeout: 120
  memory: 768M
  disk_quota: 2G
  path: .
  stack: cflinuxfs2
  buildpack: https://github.com/cloudfoundry/buildpack-python.git
  services:
    - PostgresDB-Prod
    - RabbitMQ-Prod
    - Redis-Prod

applications:
  - name: FrontEndApp-Prod
    <<: *defaults
    routes:
      - route: www.myapp.com
    instances: 2
    command: chmod +x ./launch_server.sh && ./launch_server.sh
  - name: OrchesterApp-Prod
    <<: *defaults
    memory: 1G
    instances: 1
    command: chmod +x ./launch_orchester.sh && ./launch_orchester.sh
    health-check-type: process
    no-route: true
  - name: WorkerApp-Prod
    <<: *defaults
    instances: 3
    command: chmod +x ./launch_worker.sh && ./launch_worker.sh
    health-check-type: process
    no-route: true

Two options I can think of for this:
You can use some of the new v3 API features and take advantage of their support for multiple processes in a Procfile. With that, you'd essentially have a Procfile like this:
web: ./launch_server.sh
orchester: ./launch_orchester.sh
worker: ./launch_worker.sh
The platform should then stage your app once, but deploy it three times based on the droplet that is produced from staging. It's slick because you end up with only one application that has multiple processes running off of it. The drawback is that this is an experimental API at the time of writing, so it still has some rough edges, plus the exact support you get could vary depending on how quickly your CF provider installs new versions of the Cloud Controller API.
You can read all the details about this here:
https://www.cloudfoundry.org/blog/build-cf-push-learn-procfiles/
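As a rough sketch of that workflow (the exact commands depend on your CLI version; at the time they were the experimental v3-prefixed commands, and MyDjangoApp below is just a placeholder name), you push once and then scale each Procfile process type independently:
cf v3-push MyDjangoApp
cf v3-scale MyDjangoApp --process web -i 2
cf v3-scale MyDjangoApp --process orchester -i 1
cf v3-scale MyDjangoApp --process worker -i 3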
You can use cf local. This is a cf CLI plugin which allows you to build a droplet locally (staging occurs in a Docker container on your local machine). You can then take that droplet and deploy it as many times as you want.
The process would look roughly like this; you'll just need to fill in some options/flags (hint: run cf local -h to see all the options):
cf local stage
cf local push FrontEndApp-Prod
cf local push OrchesterApp-Prod
cf local push WorkerApp-Prod
The first command will create a file ending in .droplet in your current directory; the three subsequent push commands will deploy that droplet to your provider and run it. The net result is that you should end up with three applications, like you have now, all deployed from the same droplet.
The drawback is that your droplet is local, so you're uploading it three times, once for each app.
I suppose you also have a third option, which is to just use a Docker container. That has its own advantages and drawbacks though.
Hope that helps!

Related

How to launch a Docker container in Gitlab CI/CD

I'm new to GitLab (and I only know the basic features of git: pull, push, merge, branch...).
I'm using a local DynamoDB database launched with docker run -p 8000:8000 amazon/dynamodb-local to do unit testing on my Python project, so I have to launch this Docker container in the GitLab CI/CD pipeline for my unit tests to work.
I have already read the GitLab documentation on this subject without finding an answer to my problem, and I know that I have to modify my gitlab-ci.yml file in order to launch the Docker container.
When using GitLab you can use Docker-in-Docker.
At the top of your .gitlab-ci.yml file:
image: docker:stable

services:
  - docker:dind
Then in your stage for tests, you can start up the database and use it.
unit_tests:
  stage: tests
  script:
    - export CONTAINER_ID=$(docker run -d -p 8000:8000 amazon/dynamodb-local)
    ## You might need to wait a few seconds with `sleep X` for the container to start up.
    ## Your database is now reachable at docker:8000
    ## Run your tests here. Database host=docker and port=8000
This is the best way I have found to achieve it, and the easiest to understand.
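A unit test can then point its DynamoDB client at that host and port. A minimal sketch, assuming boto3 and not part of the original answer (the region and dummy credentials are placeholders that DynamoDB Local accepts):
import boto3

# DynamoDB Local started by the job above is reachable at host "docker", port 8000
dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="http://docker:8000",
    region_name="us-east-1",        # arbitrary; DynamoDB Local ignores the region
    aws_access_key_id="dummy",      # placeholder credentials
    aws_secret_access_key="dummy",
)
print(list(dynamodb.tables.all()))  # an empty list on a fresh container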

docker-compose same config for dev and production but enable code sharing between host and container only in development

As the most important benefit of using Docker is to keep dev and prod environments the same, let's rule out the option of using two different docker-compose.yml files.
Let's say we have a Django application, we use gunicorn to serve it in production, and we have a dedicated apache2 as a reverse proxy (this apache2 is outside Docker by design). So this application (docker-compose) has only two parts, web (Django) and db (MySQL). There's nothing wrong with the db part.
For the Django part, the dev routine without Docker would be using a venv and python3 manage.py runserver, or whatever shortcut an IDE provides. We can happily change our code, and the dev server is smart enough to pick up the change and reflect it in no time.
Things get tricky when Docker comes in, since all source code should be packed into the image; this gives our dev workflow the big overhead of recreating the image and container again and again. One might have the following solutions (which I find inelegant):
In docker-compose.yml, use a volume to mount the source code folder into the container, so that all changes in the host source code folder are automatically reflected in the container and gunicorn picks them up. This does remove most of the container-recreation overhead, but we can't use the same docker-compose.yml in production, as it introduces a dependency on the source code being present on the host server.
I know there is a command-line option to mount a host folder into the container, but to my knowledge this option only exists for docker run, not docker-compose. So using a different command to bring the service up in different environments is another dead end. (I am not 100% sure about this as I'm still quite new to Docker; please correct me if I'm wrong.)
TLDR;
How can I set up my env so that
I use only a single docker-compose.yml for both dev and prod
I'm able to develop with live changes easily, without recreating the Docker container
Thanks a lot!
Define your Django service in docker-compose.yml as:
services:
  backend:
    image: backend
Then add a file for dev, docker-compose.dev.yml:
services:
  backend:
    extends:
      file: docker-compose.yml
      service: backend
    volumes:
      - local_path:path
To launch for prod, just run:
docker-compose up
To launch for dev:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
To hot reload the dev Django app, just reload gunicorn:
ps aux | grep gunicorn | grep greencar_proj | awk '{ print $2 }' | xargs kill -HUP
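A small aside not in the original answer: gunicorn can also watch the mounted code itself if started with its --reload flag, which avoids the manual kill -HUP during development (myproject.wsgi below is a placeholder module path):
gunicorn myproject.wsgi:application --bind 0.0.0.0:8000 --reload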
I have also liked to jam as much functionality as possible into a single docker-compose.yml file. A few strategies I would consider:
Define different services for prod and dev, so you'll run docker-compose up dev, docker-compose up prod, or docker-compose run dev. There is some copying here but usually not a lot.
Use multiple docker-compose.yml files and merge them, e.g. docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d. More details here: https://docs.docker.com/compose/extends/
I usually just comment out my volumes section, but that's probably not the best solution.

How to serve a Java application as Docker container and .war file?

Currently our company is creating individual software for B2B customers.
Some applications can be used for multiple customers.
Usually we can host the application in the cloud and deploy everything with Docker.
Running a GitLab pipeline and deploying etc. is fine for that.
Now we have some customers who rely on an external installation.
Since some of them still use Windows Server (2008, though), I cannot install a proper Docker environment there, and we need to install an Apache Tomcat and run the application inside Tomcat.
Question: how to deal with that? I would need a pipeline to create a Docker image and a .war file.
Simply create two completely independent pipelines?
Handle everything in a single pipeline?
Our current gitlab-ci.yml file for the .war:
image: maven:latest

variables:
  MAVEN_CLI_OPTS: "-s settings.xml -q -B"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

cache:
  paths:
    - .m2/repository/
    - target/

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS compile

test:
  stage: test
  script:
    - mvn $MAVEN_CLI_OPTS test

install:
  stage: deploy
  script:
    - mvn $MAVEN_CLI_OPTS install
  artifacts:
    name: "datahub-$CI_COMMIT_REF_SLUG"
    paths:
      - target/*.war
Using two separate delivery pipelines is preferable: you are dealing with two very different installation processes, and you need to be sure which one is running for a given client.
Having two separate GitLab pipelines allows said client to choose the right one.
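For illustration, a Docker image job could sit next to the existing .war jobs and be triggered on demand; this is only a sketch (the job name, tag and the GitLab container registry variables are assumptions, not from the original answer):
build_image:
  stage: deploy
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  when: manual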

Issues with Docker Swarm running TeamCity using rexray/ebs for drive persistence in AWS EBS

I'm quite new to Docker but have started thinking about production set-ups, hence needing to crack the challenge of data persistence when using Docker Swarm. I decided to start by creating my deployment infrastructure (TeamCity for builds and NuGet plus the "registry" [https://hub.docker.com/_/registry/] for storing images).
I've started with TeamCity. Obviously this needs data persistence in order to work. I am able to run TeamCity in a container with an EBS drive and everything looks like it is working just fine: TeamCity works through the set-up steps and my TeamCity drives appear in AWS EBS, but then the worker node TeamCity has been allocated to shuts down and the install process stops.
Here are all the steps I'm following:
Phase 1 - Machine Setup:
Create one AWS instance for master
Create two AWS instances for workers
All are 64-bit Ubuntu t2.micro instances
Create three elastic IPs for convenience and assign them to the above machines.
Install Docker on all nodes using this: https://docs.docker.com/install/linux/docker-ce/ubuntu/
Install Docker Machine on all nodes using this: https://docs.docker.com/machine/install-machine/
Install Docker Compose on all nodes using this: https://docs.docker.com/compose/install/
Phase 2 - Configure Docker Remote on the Master:
$ sudo docker run -p 2375:2375 --rm -d -v /var/run/docker.sock:/var/run/docker.sock jarkt/docker-remote-api
Phase 3 - install the rexray/ebs plugin on all machines:
$ sudo docker plugin install --grant-all-permissions rexray/ebs REXRAY_PREEMPT=true EBS_ACCESSKEY=XXX EBS_SECRETKEY=YYY
[I lifted the correct values from AWS for XXX and YYY]
I test this using:
$ sudo docker volume create --driver=rexray/ebs --name=delete --opt=size=2
$ sudo docker volume rm delete
All three nodes are able to create and delete drives in AWS EBS with no issue.
Phase 4 - Setup the swarm:
Run this on the master:
$ sudo docker swarm init --advertise-addr eth0:2377
This gives the command to run on each of the workers, which looks like this:
$ sudo docker swarm join --token XXX 1.2.3.4:2377
These execute fine on the worker machines.
Phase 5 - Set up visualisation using Remote Powershell on my local machine:
$ $env:DOCKER_HOST="{master IP address}:2375"
$ docker stack deploy --with-registry-auth -c viz.yml viz
viz.yml looks like this:
version: '3.1'
services:
  viz:
    image: dockersamples/visualizer
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    ports:
      - "8080:8080"
    deploy:
      placement:
        constraints:
          - node.role==manager
This works fine and allows me to visualise my swarm.
Phase 6 - Install TeamCity using Remote Powershell on my local machine:
$ docker stack deploy --with-registry-auth -c docker-compose.yml infra
docker-compose.yml looks like this:
version: '3'
services:
  teamcity:
    image: jetbrains/teamcity-server:2017.1.2
    volumes:
      - teamcity-server-datadir:/data/teamcity_server/datadir
      - teamcity-server-logs:/opt/teamcity/logs
    ports:
      - "80:8111"
volumes:
  teamcity-server-datadir:
    driver: rexray/ebs
  teamcity-server-logs:
    driver: rexray/ebs
[Incorporating NGINX as a proxy is a later step on my to do list.]
I can see both the required drives appear in AWS EBS and the container appear in my swarm visualisation.
However, after a while of seeing the progress screen in TeamCity the worker machine containing the TeamCity instance shuts down and the process abruptly ends.
I'm at a loss as to what to do next. I'm not even sure where to look for logs.
Any help gratefully received!
Cheers,
Steve.
I found a way to get logs for my service. First do this to list the services the stack creates:
$ sudo docker service ls
Then do this to see logs for the service:
$ sudo docker service logs --details {service name}
Now I just need to wade through the logs and see what went wrong...
Update
I found the following error in the logs:
infra_teamcity.1.bhiwz74gnuio#ip-172-31-18-103 | [2018-05-14 17:38:56,849] ERROR - r.configs.dsl.DslPluginManager - DSL plugin compilation failed
infra_teamcity.1.bhiwz74gnuio#ip-172-31-18-103 | exit code: 1
infra_teamcity.1.bhiwz74gnuio#ip-172-31-18-103 | stdout: #
infra_teamcity.1.bhiwz74gnuio#ip-172-31-18-103 | # There is insufficient memory for the Java Runtime Environment to continue.
infra_teamcity.1.bhiwz74gnuio#ip-172-31-18-103 | # Native memory allocation (mmap) failed to map 42012672 bytes for committing reserved memory.
infra_teamcity.1.bhiwz74gnuio#ip-172-31-18-103 | # An error report file with more information is saved as:
infra_teamcity.1.bhiwz74gnuio#ip-172-31-18-103 | # /opt/teamcity/bin/hs_err_pid125.log
infra_teamcity.1.bhiwz74gnuio#ip-172-31-18-103 |
infra_teamcity.1.bhiwz74gnuio#ip-172-31-18-103 | stderr: Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000e2dfe000, 42012672, 0) failed; error='Cannot allocate memory' (errno=12)
This makes me think it is a memory problem. I'm going to try this again with a bigger AWS instance and see how I get on.
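One way to make the memory requirement explicit to the scheduler would be a resource reservation in the stack file; this is only a sketch, and the 1G value is an assumption rather than a TeamCity recommendation:
services:
  teamcity:
    deploy:
      resources:
        reservations:
          memory: 1G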
Update 2
Using a larger AWS instance solved the issue. :)
I then discovered that rexray/ebs doesn't like it when a container switches between hosts in my swarm - it duplicates the EBS volumes so that it keeps one per machine. My solution to this was to use an EFS drive in AWS and mount it to each possible host. I then updated the fstab file so that the drive is remounted on every reboot. Job done. Now to look into using a reverse proxy...
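For reference, an EFS mount entry in /etc/fstab typically looks something like this (the filesystem ID, region and mount point below are placeholders, not the actual values from my setup):
fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0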

AWS: CodeDeploy for a Docker Compose project?

My current objective is to have Travis deploy our Django+Docker-Compose project upon successful merge of a pull request to our Git master branch. I have done some work setting up our AWS CodeDeploy, since Travis has built-in support for it. When I got to the AppSpec and actual deployment part, at first I tried to have an AfterInstall script do docker-compose build and then have an ApplicationStart script do docker-compose up. The containers that have images pulled from the web are our PostgreSQL container (named db, image aidanlister/postgres-hstore, which is the usual postgres image plus the hstore extension), the Redis container (uses the redis image), and the Selenium container (image selenium/standalone-firefox). The other two containers, web and worker, which are the Django server and Celery worker respectively, use the same Dockerfile to build an image. The main command is:
CMD paver docker_run
which uses a pavement.py file:
from paver.easy import task
from paver.easy import sh

@task
def docker_run():
    migrate()
    collectStatic()
    updateRequirements()
    startServer()

@task
def migrate():
    sh('./manage.py makemigrations --noinput')
    sh('./manage.py migrate --noinput')

@task
def collectStatic():
    sh('./manage.py collectstatic --noinput')

# find any updates to existing packages, install any new packages
@task
def updateRequirements():
    sh('pip install --upgrade -r requirements.txt')

@task
def startServer():
    sh('./manage.py runserver 0.0.0.0:8000')
Here is what I (think I) need to make happen each time a pull request is merged:
Have Travis deploy changes using CodeDeploy, based on deploy section in .travis.yml tailored to our CodeDeploy setup
Start our Docker containers on AWS after successful deployment using our docker-compose.yml
How do I get this second step to happen? I'm pretty sure ECS is actually not what is needed here. My current status is that I can get Docker started with sudo service docker start, but I cannot get docker-compose up to succeed. Though deployments are reported as "successful", this is only because the docker-compose up command is run in the background in the Validate Service section script. In fact, when I try to run docker-compose up manually when ssh'd into the EC2 instance, I get stuck building one of the containers, right before the CMD paver docker_run part of the Dockerfile.
This took a long time to work out, but I finally figured out a way to deploy a Django+Docker-Compose project with CodeDeploy without Docker-Machine or ECS.
One thing that was important was to make an alternate docker-compose.yml that excluded the selenium container; all it did was cause problems and it was only useful for local testing. In addition, it was important to choose an instance type that could handle building containers. The reason the containers couldn't be built from our Dockerfile was that the instance simply did not have the memory to complete the build. Instead of a t1.micro instance, an m3.medium is what worked. It is also important to have sufficient disk space: 8GB is far too small. To be safe, 256GB would be ideal.
It is important to have an AfterInstall script run service docker start when doing the necessary Docker installation and setup (including installing Docker-Compose). This explicitly starts the Docker daemon; without this command, you will get the error Could not connect to Docker daemon. When installing Docker-Compose, it is important to place it in /opt/bin/ so that the binary is used via /opt/bin/docker-compose. There are problems with placing it in /usr/local/bin (I don't exactly remember what problems, but it's related to the particular Linux distribution of the Amazon Linux AMI). The AfterInstall script needs to be run as root (runas: root in the appspec.yml AfterInstall section).
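For illustration, such an AfterInstall script can look roughly like this (a sketch; the package manager, docker-compose version and paths are examples and may need adjusting for your AMI):
#!/bin/bash
# Install Docker and Docker-Compose, then start the daemon (this script runs as root via runas: root)
yum install -y docker
service docker start
mkdir -p /opt/bin
curl -L "https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m)" -o /opt/bin/docker-compose
chmod +x /opt/bin/docker-compose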
Additionally, the final phase of deployment, which is starting up the containers with docker-compose up (more specifically /opt/bin/docker-compose -f docker-compose-aws.yml up), needs to be run in the background with stdin and stdout redirected to /dev/null:
/opt/bin/docker-compose -f docker-compose-aws.yml up -d > /dev/null 2> /dev/null < /dev/null &
Otherwise, once the server is started, the deployment will hang because the final script command (in the ApplicationStart section of my appspec.yml in my case) doesn't exit. This will probably result in a deployment failure after the default deployment timeout of 1 hour.
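Putting the hooks together, the appspec.yml for this kind of setup ends up shaped roughly like this (a sketch; the destination path and script names are placeholders, not the exact ones from the project):
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/app
hooks:
  AfterInstall:
    - location: scripts/install_docker.sh
      timeout: 600
      runas: root
  ApplicationStart:
    - location: scripts/start_containers.sh
      timeout: 900
      runas: root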
If all goes well, then the site can finally be accessed at the instance's public DNS and port in your browser.