Can CircleCI use docker-compose to build the environment - django

I currently have a few services, such as db and web, in a Django application, and docker-compose is used to string them together.
The web service looks like this:
web:
  restart: always
  build: ./web
  expose:
    - "8000"
The Dockerfile in web is based on python:2.7-onbuild, so it uses the requirements.txt file to install all the necessary dependencies.
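For reference, an onbuild Dockerfile is essentially just the base image; a minimal sketch (the EXPOSE/CMD details here are assumed, not from the project):
# python:2.7-onbuild copies the build context and runs
# `pip install -r requirements.txt` automatically via ONBUILD triggers
FROM python:2.7-onbuild
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]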
I am now using CircleCI for integration and have a circle.yml file like this:
....
dependencies:
  pre:
    - pip install -r web/requirements.txt
....
Is there any way I could avoid the dependencies clause in the circle.yml file?
Instead, I would like CircleCI to use the docker-compose.yml, if that makes sense.

Yes, using docker-compose in the circle.yml file can be a nice way to run tests, because it can mirror one's dev environment very closely. This is an extract from our working tests on an AngularJS project:
---
machine:
  services:
    - docker
dependencies:
  override:
    - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
    - sudo pip install --upgrade docker-compose==1.3.0
test:
  pre:
    - docker-compose pull
    - docker-compose up -d
    - docker-compose run npm install
    - docker-compose run bower install --allow-root --config.interactive=false
  override:
    # grunt runs our karma tests
    - docker-compose run grunt deploy-build compile
Notes:
The docker login is only needed if you have private images on Docker Hub.
When we wrote our circle.yml file, only docker-compose 1.3 was available. This has probably been updated by now.
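Translated to a Django project like the one in the question, the same pattern might look like this (a sketch; the web service name and Django's test runner are assumptions):
test:
  pre:
    - docker-compose pull
    - docker-compose up -d
  override:
    - docker-compose run web python manage.py test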

I haven't tried this myself, but based on the info at https://circleci.com/docs/docker I guess it may work:
# circle.yml
machine:
  services:
    - docker
dependencies:
  pre:
    - pip install docker-compose
test:
  pre:
    - docker-compose up -d

Unfortunately, CircleCI by default installs an old version of Docker (1.9.1), which is not compatible with the latest version of docker-compose. In order to get the more recent Docker 1.10.0, you should:
machine:
  pre:
    - curl -sSL https://s3.amazonaws.com/circle-downloads/install-circleci-docker.sh | bash -s -- 1.10.0
    - pip install docker-compose
  services:
    - docker
test:
  pre:
    - docker-compose up -d
Read more: https://discuss.circleci.com/t/docker-1-10-0-is-available-beta/2100
UPD: CircleCI version 2 has native Docker support.
Read more about how to switch to the new CircleCI version here: https://circleci.com/docs/2.0/migrating-from-1-2/
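For reference, a minimal CircleCI 2.0 config that runs docker-compose on the machine executor might look like this (a sketch; the job layout and web service name are assumptions):
# .circleci/config.yml
version: 2
jobs:
  build:
    machine: true
    steps:
      - checkout
      - run: docker-compose up -d
      - run: docker-compose run web python manage.py test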

Related

CI/CD with BitBucket, Docker Image and Azure

EDITED
I am learning CICD and Docker. So far I have managed to successfully create a docker image using the code below:
Dockerfile
# Docker Operating System
FROM python:3-slim-buster
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
#App folder on Slim OS
WORKDIR /app
# Install pip requirements
COPY requirements.txt requirements.txt
RUN python -m pip install --upgrade pip && pip install -r requirements.txt
#Copy Files to App folder
COPY . /app
docker-compose.yml
version: '3.4'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
My code is on BitBucket and I have a pipeline file as follows:
bitbucket-pipelines.yml
image: atlassian/default-image:2
pipelines:
  branches:
    master:
      - step:
          name: Build And Publish To Azure
          services:
            - docker
          script:
            - docker login -u $AZURE_USER -p $AZURE_PASS xxx.azurecr.io
            - docker build -t xxx.azurecr.io .
            - docker push xxx.azurecr.io
Here xxx is the container registry on Azure. When the pipeline job runs, I am getting a "denied: requested access to the resource is denied" error on BitBucket.
What did I not do correctly?
Thanks.
The Edit
Changes in docker-compose.yml and bitbucket-pipelines.yml:
docker-compose.yml
version: '3.4'
services:
  web:
    build: .
    image: xx.azurecr.io/myticket
    container_name: xx
    command: python manage.py runserver 0.0.0.0:80
    ports:
      - 80:80
bitbucket-pipelines.yml
image: atlassian/default-image:2
pipelines:
  branches:
    master:
      - step:
          name: Build And Publish To Azure
          services:
            - docker
          script:
            - docker login -u $AZURE_USER -p $AZURE_PASS xx.azurecr.io
            - docker build -t xx.azurecr.io/xx .
            - docker push xx.azurecr.io/xx
You didn't specify a CMD or ENTRYPOINT in your Dockerfile.
There are stages when building a Dockerfile: first you pull a base image, then you package your requirements, etc. Those stages are executed while the image is being built. You are missing the last piece: the command that is executed inside the container once it is up.
An ENTRYPOINT or CMD is what tells Docker which process to run when the container starts.
For it to work, you must add a CMD or ENTRYPOINT at the bottom of your Dockerfile.
Change your files accordingly and try again.
Dockerfile
# Docker Operating System
FROM python:3-slim-buster
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
#App folder on Slim OS
WORKDIR /app
# Install pip requirements
COPY requirements.txt requirements.txt
RUN python -m pip install --upgrade pip && pip install -r requirements.txt
#Copy Files to App folder
COPY . /app
# Execute commands inside the container
CMD python manage.py runserver 0.0.0.0:8000
Check that you are able to build and run the image by going to its directory and running:
docker build -t app .
docker run -d -p 80:80 app
docker ps
See if your container is running.
Next
Update the image property in the docker-compose file.
Prefix the image name with the login server name of your Azure container registry, <registry-name>.azurecr.io. For example, if your registry is named myregistry, the login server name is myregistry.azurecr.io (all lowercase), and the image property becomes myregistry.azurecr.io/azure-vote-front.
Change the ports mapping to 80:80. Save the file.
The updated file should look similar to the following:
docker-compose.yml
version: '3'
services:
  foo:
    build: .
    image: foo.azurecr.io/atlassian/default-image:2
    container_name: foo
    ports:
      - "80:80"
By making these substitutions, the image you build is tagged for your Azure container registry, and the image can be pulled to run in Azure Container Instances.
More in the documentation.
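With the image property set this way, the build-and-push flow reduces to something like this (a sketch using the placeholder names from above):
docker login foo.azurecr.io -u $AZURE_USER -p $AZURE_PASS
docker-compose build   # tags the image as foo.azurecr.io/...
docker-compose push    # pushes the tagged image to the Azure registry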

GitLab CI fails to run docker-compose for Django app

I am setting up a gitlab pipeline that I want to use to deploy a Django app on AWS using Terraform.
At the moment I am just setting up the pipeline so that it validates the Terraform, runs the tests (pytest), and does linting.
The pipeline uses Docker-in-Docker and looks like this:
image:
  name: hashicorp/terraform:1.0.5
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
stages:
  - Test and Lint
Test and Lint:
  image: docker:20.10.9
  services:
    - docker:20.10.9-dind
  stage: Test and Lint
  script:
    - apk add --update docker-compose
    - apk add python3
    - apk add py3-pip
    - docker-compose run --rm app sh -c "pytest && flake8"
  rules:
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME =~ /^(master|production)$/ || $CI_COMMIT_BRANCH =~ /^(master|production)$/'
The pipeline fails to run the tests due to a database error, I think, which is weird, as I am using pytest to mock the Django database.
If I just run:
docker-compose run --rm app sh -c "pytest && flake8"
on the terminal of my local machine all tests pass.
Any idea how I can debug this?
P.S. Let me know if I need to add more info.
I don't think you are able to run Docker in the CI directly. You can specify which image to use in each step and then run the commands. For instance:
image: "python:3.7"
before_script:
  - python --version
  - pip install -r requirements.txt
stages:
  - Static Analysis
  - Test
unit_test:
  stage: Test
  script:
    - pytest
In this pipeline, I used the python:3.7 image. You can upload your Docker image to some registry and use it in the pipeline.
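For example, a pre-built image from a registry can be referenced the same way (a sketch; the image name is hypothetical):
image: registry.example.com/mygroup/django-app:latest
unit_test:
  script:
    - pytest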
I managed to solve it, and the tests now pass in CI, using:
script:
  - apk add --update docker-compose
  - docker-compose up -d --build && docker-compose run --rm app sh -c "pytest && flake8"
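Presumably this works because docker-compose up -d --build rebuilds the images and brings up the whole stack, including the database service, before docker-compose run executes pytest; running the app service alone in a fresh CI environment apparently left the database unreachable.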

Duplicate images on docker-compose build. How to properly push two services of docker-compose.yml to Docker Hub registry?

I have a docker-compose.yml defined as follows with two services (the database and the app):
version: '3'
services:
  db:
    build: .
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=(adminname)
      - POSTGRES_PASSWORD=(adminpassword)
      - CLOUDINARY_URL=(cloudinarykey)
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
The reason I have build: . in both services is that you can't do docker-compose push unless every service has a build entry. However, this means that both services refer to the same Dockerfile, which builds the entire app. So after I run docker-compose build and look at the images available, I see this:
$ docker images
REPOSITORY    TAG      IMAGE ID       CREATED          SIZE
mellon_app    latest   XXXXXXXXXXXX   27 seconds ago   1.14GB
postgres      latest   XXXXXXXXXXXX   27 seconds ago   1.14GB
The IMAGE ID is exactly the same for both images, and so is the size. This makes me think I've done some unnecessary duplication, as they're both just built from the same Dockerfile. I don't want to take up any unnecessary space, so how do I do this properly?
This is my Dockerfile:
FROM (MY FRIENDS ACCOUNT)/django-npm:latest
RUN mkdir usr/src/mprova
WORKDIR /usr/src/mprova
COPY frontend ./frontend
COPY backend ./backend
WORKDIR /usr/src/mprova/frontend
RUN npm install
RUN npm run build
WORKDIR /usr/src/mprova/backend
ENV DJANGO_PRODUCTION=True
RUN pip3 install -r requirements.txt
EXPOSE 8000
CMD python3 manage.py collectstatic && \
    python3 manage.py makemigrations && \
    python3 manage.py migrate && \
    gunicorn mellon.wsgi --bind 0.0.0.0:8000
What is the proper way to push the images to my Docker hub registry without this duplication?
The proper way is to:
1. docker build -f {path-to-dockerfile} -t {desired-docker-image-name} .
2. docker tag {desired-docker-image-name}:latest {desired-remote-image-name}:latest (or not latest, but whatever you want, like a datetime in int format)
3. docker push {desired-remote-image-name}:latest
and then clean up:
4. docker rmi {desired-docker-image-name}:latest {desired-remote-image-name}:latest
The whole purpose of docker-compose is to help with local development: it makes it easier to start several containers and combine them in a local docker-compose network, etc.
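Alternatively, you can fix this in the compose file itself: only app actually needs build:, while db can reference the stock postgres image directly, which removes the duplicate tag. A sketch based on the asker's file (the Hub repository name myuser/mellon_app is hypothetical):
version: '3'
services:
  db:
    image: postgres                    # pulled from Docker Hub, not built locally
    environment:
      - POSTGRES_DB=postgres
  app:
    build: .
    image: myuser/mellon_app:latest    # the tag that docker-compose push will use
    ports:
      - "8000:8000"
    depends_on:
      - db
You can then push only the app image with docker-compose push app.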

Setting up Docker for Django, Vue.js, RabbitMQ

I'm trying to add Docker support to my project.
My structure looks like this:
front/Dockerfile
back/Dockerfile
docker-compose.yml
My Dockerfile for django:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y python-software-properties software-properties-common
RUN add-apt-repository ppa:ubuntugis/ubuntugis-unstable
RUN apt-get update && apt-get install -y python3 python3-pip binutils libproj-dev gdal-bin python3-gdal
ENV APPDIR=/code
WORKDIR $APPDIR
ADD ./back/requirements.txt /tmp/requirements.txt
RUN ./back/pip3 install -r /tmp/requirements.txt
RUN ./back/rm -f /tmp/requirements.txt
CMD $APPDIR/run-django.sh
My Dockerfile for Vue.js:
FROM node:9.11.1-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
and my docker-compose.yml:
version: '2'
services:
  rabbitmq:
    image: rabbitmq
  api:
    build:
      context: ./back
    environment:
      - DJANGO_SECRET_KEY=${SECRET_KEY}
    volumes:
      - ./back:/app
  rabbit1:
    image: "rabbitmq:3-management"
    hostname: "rabbit1"
    ports:
      - "15672:15672"
      - "5672:5672"
    labels:
      NAME: "rabbitmq1"
    volumes:
      - "./enabled_plugins:/etc/rabbitmq/enabled_plugins"
  django:
    extends:
      service: api
    command:
      ./back/manage.py runserver
      ./back/uwsgi --http :8081 --gevent 100 --module websocket --gevent-monkey-patch --master --processes 4
    ports:
      - "8000:8000"
    volumes:
      - ./backend:/app
  vue:
    build:
      context: ./front
    environment:
      - HOST=localhost
      - PORT=8080
    command:
      bash -c "npm install && npm run dev"
    volumes:
      - ./front:/app
    ports:
      - "8080:8080"
    depends_on:
      - django
Running docker-compose fails with:
ERROR: for chatapp2_django_1 Cannot start service django: b'OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \\"./back/manage.py\\": stat ./back/manage.py: no such file or directory": unknown'
ERROR: for rabbit1 Cannot start service rabbit1: b'driver failed programming external connectivity on endpoint chatapp2_rabbit1_1 (05ff4e8c0bc7f24216f2fc960284ab8471b47a48351731df3697c6d041bbbe2f): Error starting userland proxy: listen tcp 0.0.0.0:15672: bind: address already in use'
ERROR: for django Cannot start service django: b'OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \\"./back/manage.py\\": stat ./back/manage.py: no such file or directory": unknown'
ERROR: Encountered errors while bringing up the project.
I don't understand what this 'unknown' directory it's trying to get is. Have I set this all up right for my project structure?
For the django part, you're missing a copy of your code for the Django app, which I'm assuming is in back. You'll need to add ADD /back /code. You probably also want to use a Python Alpine docker build instead of Ubuntu, as it will significantly reduce build times and container size.
This is what I would do:
# change this to whatever python version your app is targeting (mine is typically 3.6)
FROM python:3.6-alpine
ADD /back /code
# whatever other dependencies you'll need; I run with the psycopg2-binary build, so I need these (the nice part of the python-alpine image is that you don't need any of those Python-specific packages you were installing before)
RUN apk add --virtual .build-deps gcc musl-dev postgresql-dev
RUN pip install -r /code/requirements.txt
# expose whatever port you need for your Django app (default is 8000, we use non-default but you can do whatever you need)
EXPOSE 8000
WORKDIR /code
# don't need /code here since WORKDIR is effectively a change directory
RUN chmod +x /run-django.sh
RUN apk add --no-cache bash postgresql-libs
CMD ["/run-django.sh"]
We have a similar run-django.sh script, in which we call python manage.py makemigrations and python manage.py migrate. I'm assuming yours is similar.
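If it helps, such a script is usually no more than this (a sketch of what run-django.sh presumably contains):
#!/bin/sh
# apply migrations, then start the server
python manage.py makemigrations
python manage.py migrate
python manage.py runserver 0.0.0.0:8000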
Long story short, you weren't copying in the code from back to /code.
Also, in your docker-compose you don't have a build context for it like you do for the vue service.
As for your rabbitmq container failure, you need to stop the service associated with RabbitMQ on your computer. I get this error when I'm trying to expose a postgresql or redis container and have to run /etc/init.d/postgresql stop or /etc/init.d/redis stop to stop the service running on the machine, in order to avoid collisions on that service's default port.
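For the RabbitMQ case specifically, that would be something like this (assuming a Debian/Ubuntu host with the rabbitmq-server service installed):
# see what is already listening on the management port
sudo ss -ltnp | grep 15672
# stop the host service so the container can bind the port
sudo /etc/init.d/rabbitmq-server stop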

docker-compose not downloading additions to requirements.txt file

I have a Django project running in docker.
When I add some packages to my requirements.txt file, they don't get downloaded when I run docker-compose up.
Here are the relevant commands from my Dockerfile:
ADD ./evdc/requirements.txt /opt/evdc-venv/
ADD ./env-requirements.txt /opt/evdc-venv/
# Activate venv
RUN . /opt/evdc-venv/bin/activate && pip install -r /opt/evdc-venv/requirements.txt
RUN . /opt/evdc-venv/bin/activate && pip install -r /opt/evdc-venv/env-requirements.txt
It seems docker is using a cached version of my requirements.txt file, as when I shell into the container, the requirements.txt file in /opt/evdc-venv/requirements.txt does not include the new packages.
Is there some way I can delete this cached version of requirements.txt?
Dev OS: Windows 10
Docker: 17.03.0-ce
docker-compose: 1.11.2
docker-compose up doesn't build a new image unless you have the build section defined with your Dockerfile and you pass it the --build parameter. Without that, it will reuse the existing image.
If your docker-compose.yml does not include the build section, and you build your images with docker build ..., then after you recreate your image, a docker-compose up will recreate the impacted containers.
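In other words, force a rebuild so the updated requirements.txt is re-ADDed into the image, for example:
# rebuild the images defined under build: and recreate the containers
docker-compose up --build
# or, in two steps:
docker-compose build
docker-compose up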