GitLab CI fails to run docker-compose for Django app

I am setting up a gitlab pipeline that I want to use to deploy a Django app on AWS using Terraform.
At the moment I am just setting up the pipeline so that it validates the Terraform and runs tests (pytest) and linting.
The pipeline uses docker in docker and it looks like this:
image:
  name: hashicorp/terraform:1.0.5
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
stages:
  - Test and Lint
Test and Lint:
  image: docker:20.10.9
  services:
    - docker:20.10.9-dind
  stage: Test and Lint
  script:
    - apk add --update docker-compose
    - apk add python3
    - apk add py3-pip
    - docker-compose run --rm app sh -c "pytest && flake8"
  rules:
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME =~ /^(master|production)$/ || $CI_COMMIT_BRANCH =~ /^(master|production)$/'
The pipeline fails to run the tests, apparently due to a database error, which is odd because I am using pytest to mock the Django database.
If I just run:
docker-compose run --rm app sh -c "pytest && flake8"
on the terminal of my local machine all tests pass.
Any idea how I can debug this?
P.S. Let me know if I need to add more info.

I don't think you are able to run docker in the CI directly. You can specify which image to use in each step and then run the commands. For instance:
image: "python:3.7"
before_script:
- python --version
- pip install -r requirements.txt
stages:
- Static Analysis
- Test
unit_test:
stage: Test
script:
- pytest
In this pipeline I used the python:3.7 image. You can also upload your own Docker image to a registry and use it in the pipeline.
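For instance, if you push an image that already contains your project's dependencies to the GitLab Container Registry, a job could reference it directly (the registry path below is a placeholder, not a real image):
# Hypothetical job using a prebuilt image from the GitLab Container Registry
unit_test:
  stage: Test
  image: registry.gitlab.com/your-group/your-project/ci-image:latest
  script:
    - pytest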

I managed to solve it, and the tests now pass in CI, using:
script:
  - apk add --update docker-compose
  - docker-compose up -d --build && docker-compose run --rm app sh -c "pytest && flake8"
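One plausible reading of why this helped, assuming a compose file roughly like the sketch below (service names, image versions, and credentials are illustrative, not taken from the question):
# Hypothetical docker-compose.yml for illustration only
version: "3"
services:
  app:
    build: .
    depends_on:
      - db
    environment:
      - DB_HOST=db
      - DB_USER=postgres
      - DB_PASS=supersecret
  db:
    image: postgres:13-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=supersecret
With a layout like this, docker-compose up -d --build starts and keeps the db service running, so Postgres has usually finished initialising by the time docker-compose run --rm app ... executes; docker-compose run does start dependencies too, but on a slower dind runner the database may not yet be accepting connections.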

Related

docker compose failing on gitlab-ci build stage

I am trying to build with GitLab CI, but one of the stages is failing the build: I get stuck on the build stage. It does not recognise python, and I am trying to install it so I can build the image and get it tested with Robot Framework.
gitlab-ci.yaml
image: python:latest
services:
  - name: docker:dind
    entrypoint: ["env", "-u", "DOCKER_HOST"]
    command: ["dockerd-entrypoint.sh"]

stages:
  - compile
  - build
  - test
  - deploy

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  MOUNT_POINT: /builds/$CI_PROJECT_PATH/mnt
  REPOSITORY_URL: $AWS_ACCOUNT_ID.dkr.ecr.eu-west-2.amazonaws.com/apps_web
  TASK_DEFINITION_NAME: apps_8000
  CLUSTER_NAME: QA-2
  SERVICE_NAME: apps_demo
  ARTIFACT_REPORT_PATH: "app/reports/"

before_script:
  - docker info
  - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
  - export WEB_IMAGE=$IMAGE:web
  - apk add --no-cache openssh-client bash
  - chmod +x ./setup_env.sh
  - bash ./setup_env.sh
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY

unittests:
  stage: test
  before_script:
    - python -m venv env
    - source env/bin/activate
    - python -m pip install --upgrade pip
    - pip install -r app/app-requirements.txt
  variables:
    DOCKER_IMAGE_TAG: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG}
  image: ${DOCKER_IMAGE_TAG}
  script:
    - source env/bin/activate
    - python app/manage.py jenkins --enable-coverage
  artifacts:
    reports:
      junit: app/reports/junit.xml
    paths:
      - $ARTIFACT_REPORT_PATH
    expire_in: 30 days
    when: on_success
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "qa"

migrations:
  stage: compile
  before_script:
    - python -m venv env
    - source env/bin/activate
    - pip install -r app/app-requirements.txt
  script:
    - python app/manage.py makemigrations
  artifacts:
    paths:
      - "app/*/migrations/*.py"
    expire_in: 1 day
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "qa"

build:
  image:
    name: docker/compose:1.25.4
    entrypoint: [ "" ]
  stage: build
  variables:
    DOCKER_IMAGE_TAG: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}
  before_script:
    - apt-get install python3
    - python -m venv env
    - source env/bin/activate
    - python -m pip install --upgrade pip
    - pip install -r app/app-requirements.txt
    - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
    - export WEB_IMAGE=$IMAGE:web
  script:
    - apk add --no-cache bash
    - chmod +x ./setup_env.sh
    - bash ./setup_env.sh
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $IMAGE:web || true
    - docker-compose -f docker-compose.ci.yml build
    - docker push $IMAGE:web
    - docker tag app
    - docker build -t ${DOCKER_IMAGE_TAG} .
  after_script:
    - docker push ${DOCKER_IMAGE_TAG}
    - docker logout
  dependencies:
    - migrations
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "qa"

deploy_qa:
  stage: deploy
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-ecs:latest
  before_script:
    - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
    - export WEB_IMAGE=$IMAGE:web
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
  script:
    - echo $IMAGE
    - echo $WEB_IMAGE
    - docker pull $WEB_IMAGE
  environment:
    name: qa
    url: https://app.domain.com
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "qa"
It is failing with the error /bin/sh: eval: line 153: apt-get: not found
As @slauth said in his comment, the docker/compose image is based on Alpine Linux, which uses the apk package manager, not apt. However, you most likely wouldn't be able to use a Debian image, since you need the functionality of docker/compose. In that case, you can use apk to install python instead of apt-get, just like you're installing bash in the script section of this job:
apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python
(This comes from a related answer here).
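Applied to the failing build job, the before_script would then start roughly like this (a sketch only; the rest of the job stays unchanged, and the py3-pip package is an assumption in case pip is also needed):
build:
  image:
    name: docker/compose:1.25.4
    entrypoint: [""]
  stage: build
  before_script:
    # Alpine-based image, so use apk instead of apt-get
    - apk add --update --no-cache python3 py3-pip && ln -sf python3 /usr/bin/python
    - python -m venv env
    - source env/bin/activate
    - python -m pip install --upgrade pip
    - pip install -r app/app-requirements.txt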
However, installing and updating packages in a CI/CD pipeline is generally a bad practice, since depending on the number of pipelines you run it can significantly slow down your development process. Instead, you can create your own Docker images based on whichever image you need, and install your packages there. For example, you can create a new image based on docker/compose, install python, bash, etc. there, then push the new image either to Docker Hub, GitLab's built-in container registry, or another registry you might have available. Finally, in your .gitlab-ci.yml file, you simply change docker/compose to your new image.
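With such a prebuilt image pushed to, say, the GitLab Container Registry, the job header would just swap the image and the install lines disappear (the registry path below is a placeholder):
build:
  image:
    # Hypothetical image based on docker/compose with python and bash preinstalled
    name: registry.gitlab.com/your-group/ci-images/compose-python:latest
    entrypoint: [""]
  stage: build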
For more information on this part, you can see another answer I wrote for a similar question here.

Authentication Postgres error while using Docker Compose, Python Django and GitLab CI

I use GitLab CI to make a pipeline that builds a Docker image of my Django app. I saved some .env variables as GitLab variables. They are successfully picked up and working, but there is
psycopg2.OperationalError: FATAL: password authentication failed for user
I have checked all passwords and variables, they are correct.
.gitlab-ci.yml
image: docker:stable
services:
  - docker:18.09.7-dind
before_script:
  - apk add py-pip python3-dev libffi-dev openssl-dev gcc libc-dev make
  - pip3 install docker-compose
stages:
  - test
test:
  stage: test
  script:
    - docker build -t myapp:$CI_COMMIT_SHA .
    - docker-compose -f docker-compose.test.yml run --rm myapp ./manage.py test
    - docker-compose -f docker-compose.test.yml run --rm myapp ./manage.py check
    - docker-compose -f docker-compose.test.yml down -v
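The docker-compose.test.yml itself is not shown in the question, but for this kind of setup it usually looks something like the sketch below (all names and variables here are assumptions). FATAL: password authentication failed almost always means the credentials Django connects with do not match the ones the postgres container was initialised with, for example because one of the GitLab variables is not actually available to docker-compose in this job (protected variables are only injected on protected branches).
# Hypothetical docker-compose.test.yml; service names and variables are illustrative only
version: "3"
services:
  db:
    image: postgres:11-alpine
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
  myapp:
    image: myapp:${CI_COMMIT_SHA}
    depends_on:
      - db
    environment:
      # Django must use exactly the credentials the db container was created with
      - DATABASE_URL=postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}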

What is the best way to do CI/CD with AWS CDK (python) using GitLab CI?

I am using AWS CDK (with Python) for a containerized application that runs on Fargate. I would like to run cdk deploy in a GitLab CI process and pass the git tag as an environment variable that replaces the container running in Fargate. I am currently doing something similar with CloudFormation (aws cloudformation update-stack ...). Is anyone else doing CI/CD with AWS CDK in this way? Is there a better way to do it?
Also, what should I use for my base image for this job? I was thinking that I can either start with a python container and install node or vice versa. Or maybe there is prebuilt container somewhere that I haven't been able to find yet.
Here is a start that seems to be working well:
CDK:
  image: python:3.8
  stage: deploy
  before_script:
    - apt-get -qq update && apt-get -y install nodejs npm
    - node -v
    - npm i -g aws-cdk
    - cd awscdk
    - pip3 install -r requirements.txt
  script:
    - cdk diff
    - cdk deploy --require-approval never
Edit 2020-05-04:
CDK can build docker images during cdk deploy, but it needs access to docker. If you don't need docker, the above CI job definition should be fine. Here's the current CI job I'm using:
cdk deploy:
  image: docker:19.03.1
  services:
    - docker:19.03.5-dind
  stage: deploy
  only:
    - master
  before_script:
    - apk add --no-cache python3
    - python3 -V
    - pip3 -V
    - apk add nodejs-current npm
    - node -v
    - npm i -g aws-cdk
    - cd awscdk
    - pip3 install -r requirements.txt
  script:
    - cdk bootstrap aws://$AWS_ACCOUNT_ID/$AWS_DEFAULT_REGION
    - cdk deploy --require-approval never
The cdk bootstrap is needed because I am using assets in my cdk code:
self.backend_task.add_container(
    "DjangoBackend",
    image=ecs.AssetImage(
        "../backend",
        file="scripts/prod/Dockerfile",
        target="production",
    ),
    logging=ecs.LogDrivers.aws_logs(stream_prefix="Backend"),
    environment=environment_variables,
    command=["/start_prod.sh"],
)
Here's more information on cdk bootstrap: https://github.com/aws/aws-cdk/blob/master/design/cdk-bootstrap.md
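The question also asks about passing the git tag through to the Fargate container. One way to do that (a sketch, not part of the answer above; image_tag is a made-up context key) is to hand the tag to the CDK app as context on the command line:
# Sketch: passing the git tag into the CDK app as context
script:
  - cdk deploy --require-approval never -c image_tag=$CI_COMMIT_TAG
Inside the stack, the value can then be read with self.node.try_get_context("image_tag") and used when building the container definition.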
You definitely have to use cdk deploy inside the CI/CD pipeline if you have Lambda or ECS assets; otherwise, you could run cdk synth and pass the resulting CloudFormation to AWS CodeDeploy. That means a lot of your CI/CD time will be spent deploying, which might drain your free-tier build minutes or just mean you pay more (AWS CodeDeploy is free).
I do something similar with Go in CircleCI. I use the Go base image and install Node.js and the CDK. I use this base image to build all my Go binaries and the Vue.js frontend, and to compile the CDK TypeScript and deploy it.
FROM golang:1.13
RUN go get -u -d github.com/magefile/mage
WORKDIR $GOPATH/src/github.com/magefile/mage
RUN go run bootstrap.go
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt-get install -y nodejs
RUN npm i -g aws-cdk@1.36.x
RUN npm i -g typescript
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt update && apt install yarn
I hope that helps.
Also, what should I use for my base image for this job? I was thinking that I can either start with a python container and install node or vice versa. Or maybe there is prebuilt container somewhere that I haven't been able to find yet.
For anyone looking for how to implement CI/CD with AWS CDK Python in 2022, here's a tested solution:
Use python:3.10.8 as the base image in your CI/CD
(or any image with Debian 11)
Install Node.js 16 from NodeSource: curl -fsSL https://deb.nodesource.com/setup_16.x | bash - && apt-get install -y nodejs
Install aws-cdk: npm i -g aws-cdk
You can add the two latter steps as inline scripts in your CI/CD pipeline so you do not need to build your own Docker image.
Here's a full example for Bitbucket Pipelines:
image: python:3.10.8
run-tests: &run-tests
  step:
    name: Run tests
    script:
      # Node 16
      - curl -fsSL https://deb.nodesource.com/setup_16.x | bash - && apt-get install -y nodejs
      - npm i -g aws-cdk
      - pip install -r requirements-dev.txt
      - pytest
pipelines:
  pull-requests:
    "**":
      - <<: *run-tests
  branches:
    master:
      - <<: *run-tests
Note that the above instructions do not install Docker engine. In Bitbucket Pipelines, Docker can be used simply by adding
services:
  - docker
in the configuration file.
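Since the original question is about GitLab CI, the same steps translate roughly to the job below (a sketch; the job name, requirements file, and rules are assumptions, the commands mirror the Bitbucket example above):
# Hypothetical GitLab CI equivalent of the Bitbucket step above
run-tests:
  image: python:3.10.8
  script:
    - curl -fsSL https://deb.nodesource.com/setup_16.x | bash - && apt-get install -y nodejs
    - npm i -g aws-cdk
    - pip install -r requirements-dev.txt
    - pytest
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH == "master"'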
If cdk deploy is giving you the error:
/usr/lib/node_modules/aws-cdk/lib/index.js:12422
home = path.join((os.userInfo().homedir ?? os.homedir()).trim(), ".cdk");
then the Node version is out of date. This can be fixed by updating the Docker image, which also requires installing pip3:
cdk deploy:
  image: docker:20.10.21
  services:
    - docker:20.10.21-dind
  stage: deploy
  only:
    - master
  before_script:
    - apk add --no-cache python3
    - python3 -V
    - apk add py3-pip
    - pip3 -V

Bitbucket Pipelines to build Java app, Docker image and push it to AWS ECR?

I am setting up Bitbucket Pipelines for my Java app, and what I want to achieve is that whenever I merge something into the master branch, Bitbucket fires the pipeline, which in the first step builds and tests my application, and in the second step builds a Docker image from it and pushes it to ECR. The problem is that in the second step it isn't possible to use the JAR file made in the first step, because every step runs in a separate, fresh Docker container. Any ideas how to solve this?
My current files are:
1) Bitbucket-pipelines.yaml
pipelines:
  branches:
    master:
      - step:
          name: Build and test application
          services:
            - docker
          image: openjdk:11
          caches:
            - gradle
          script:
            - apt-get update
            - apt-get install -y python-pip
            - pip install --no-cache-dir docker-compose
            - bash ./gradlew clean build test testIntegration
      - step:
          name: Build and push image
          services:
            - docker
          image: atlassian/pipelines-awscli
          caches:
            - gradle
          script:
            - echo $(aws ecr get-login --no-include-email --region us-west-2) > login.sh
            - sh login.sh
            - docker build -f Dockerfile -t my-application .
            - docker tag my-application:latest 212234103948.dkr.ecr.us-west-2.amazonaws.com/my-application:latest
            - docker push 212234103948.dkr.ecr.us-west-2.amazonaws.com/my-application:latest
2) Dockerfile:
FROM openjdk:11
VOLUME /tmp
EXPOSE 8080
COPY build/libs/*.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
And the error I receive:
Step 4/5 : COPY build/libs/*.jar app.jar
COPY failed: no source files were specified
I have found the solution, and it's quite simple: we should just use the "artifacts" feature, so in the first step the additional lines:
artifacts:
  - build/libs/*.jar
solve the problem.
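In context, the first step would then look like this (a sketch based on the pipeline above, with the docker-compose installation lines omitted for brevity); the declared artifact is downloaded automatically into the next step's clone directory, so the Dockerfile's COPY build/libs/*.jar can find the JAR:
- step:
    name: Build and test application
    image: openjdk:11
    caches:
      - gradle
    script:
      - bash ./gradlew clean build test testIntegration
    artifacts:
      - build/libs/*.jar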

Why isn't Kaniko able to push multi-stage Docker Image?

Building the following Dockerfile on GitLab CI using Kaniko results in the error: error pushing image: failed to push to destination eu.gcr.io/stritzke-enterprises/eliah-speech-server:latest: Get https://eu.gcr.io/...: exit status 1
If I remove the first FROM, RUN and COPY --from statements from the Dockerfile, the Docker image is built and pushed as expected. If I execute the Kaniko build using Docker on my local machine, everything works as expected. I execute other Kaniko builds and pushes on the same GitLab CI runner with the same GCE service account credentials.
What is going wrong with the GitLab CI based Kaniko build?
Dockerfile
FROM alpine:latest as alpine
RUN apk add -U --no-cache ca-certificates
FROM scratch
COPY --from=alpine /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY binaries/speech-server /speech-server
EXPOSE 8080
ENTRYPOINT ["/speech-server"]
CMD ["serve", "-t", "$GOOGLE_ACCESS_TOKEN"]
GitLab CI build stage
buildDockerImage:
  stage: buildImage
  dependencies:
    - build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  variables:
    GOOGLE_APPLICATION_CREDENTIALS: /secret.json
  script:
    - echo "$GCR_SERVICE_ACCOUNT_KEY" > /secret.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $DOCKER_IMAGE:latest -v debug
  only:
    - branches
  except:
    - master
As tdensmore pointed out, this was most likely an authentication issue.
So for everyone who has come here, the following Dockerfile and Kaniko call work just fine.
FROM ubuntu:latest as ubuntu
RUN echo "Foo" > /foo.txt
FROM ubuntu:latest
COPY --from=ubuntu /foo.txt /
CMD ["/bin/cat", "/foo.txt"]
The Dockerfile can be built by running
docker run -v $(pwd):/workspace gcr.io/kaniko-project/executor:latest --context /workspace --no-push