Deploy django to GAE standard from cloud build - django

I run the following bash commands from my local machine to deploy a Django project to App Engine.
python manage.py migrate
python manage.py collectstatic --noinput
gsutil rsync -R static/ gs://xyz4/static
gcloud config set project project_name_1
gcloud app deploy --quiet
I would like to set this up on Cloud Build. I have enabled push triggers on Cloud Build; I need help creating the cloudbuild.yaml file.

Cloud Build doesn't support VPC, hence it cannot be used to run migrations against a private Cloud SQL instance - link
The following are the steps I use when deploying code from a GitHub repo to App Engine standard. Each step depends on the previous step completing successfully.
Create a Python venv & install all dependencies
Install the Cloud SQL proxy & make it executable
Start the proxy, activate the venv, run tests; if the tests pass, run migrations and collect static files
Upload static files to a public bucket
Deploy code to GAE standard
cloudbuild.yaml:
steps:
  - id: setup-venv
    name: python:3.8-slim
    timeout: 100s
    entrypoint: sh
    args:
      - -c
      - '(python -m venv my_venv && . my_venv/bin/activate && pip install -r requirements.txt && ls)'
    waitFor: [ '-' ]
  - id: proxy-install
    name: 'alpine:3.10'
    entrypoint: sh
    args:
      - -c
      - 'wget -O /workspace/cloud_sql_proxy https://storage.googleapis.com/cloudsql-proxy/v1.21.0/cloud_sql_proxy.linux.amd64 && chmod +x /workspace/cloud_sql_proxy'
    waitFor: [ 'setup-venv' ]
  - id: run-tests-with-proxy
    name: python:3.8-slim
    entrypoint: sh
    args:
      - -c
      - '(/workspace/cloud_sql_proxy -dir=/workspace -instances="<instance_name>=tcp:3306" & sleep 2) && (. my_venv/bin/activate && python manage.py test --noinput && python manage.py migrate && python manage.py collectstatic --noinput )'
    waitFor: [ 'proxy-install' ]
    env:
      - 'CLOUD_BUILD=1'
      - 'PYTHONPATH=/workspace'
  # if tests fail, the steps below won't execute because they waitFor the tests step
  - id: upload-static-to-bucket
    name: 'gcr.io/cloud-builders/gsutil'
    entrypoint: 'bash'
    args: [ '-c', 'gsutil rsync -R ./static/ gs://<bucket_name>/static' ]
    waitFor: [ 'run-tests-with-proxy' ]
  - id: deploy
    name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'bash'
    args: [ '-c', 'gcloud app deploy --quiet' ]
    waitFor: [ 'upload-static-to-bucket' ]
Scope for improvement:
How to have args broken into multiple lines instead of everything being on one line (see the sketches after the postgres snippet below)
It would be nice if the tests ran against a local postgres instance inside Cloud Build instead of the production Cloud SQL instance. I was able to create a postgres instance, but it did not run in the background, so when running the tests my code could not connect to it.
postgres in cloudbuild.yaml:
  - id: setup-postgres
    name: postgres
    timeout: 500s
    waitFor: [ '-' ]
    env:
      - 'POSTGRES_PASSWORD=password123'
      - 'POSTGRES_DB=aseem'
      - 'POSTGRES_USER=aseem'
      - 'PGPORT=5432'
      - 'PGHOST=127.0.0.1'
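On the two improvement points above, here are hedged sketches rather than tested configs.
For breaking args into multiple lines, a YAML literal block scalar (|) lets the shell command span several lines; with sh -c, adding set -e preserves the fail-fast behaviour of the && chain. Applied to the run-tests step (same ids, paths and placeholders as above):
  - id: run-tests-with-proxy
    name: python:3.8-slim
    entrypoint: sh
    args:
      - -c
      - |
        set -e
        /workspace/cloud_sql_proxy -dir=/workspace -instances="<instance_name>=tcp:3306" &
        sleep 2
        . my_venv/bin/activate
        python manage.py test --noinput
        python manage.py migrate
        python manage.py collectstatic --noinput
    waitFor: [ 'proxy-install' ]
    env:
      - 'CLOUD_BUILD=1'
      - 'PYTHONPATH=/workspace'
For a local postgres that keeps running in the background, one option is to start the container detached with the docker builder; Cloud Build steps share a Docker network named cloudbuild, so later steps can reach the container by name (the name test-postgres below is illustrative, not from the original setup):
  - id: setup-postgres
    name: 'gcr.io/cloud-builders/docker'
    args:
      - run
      - -d
      - --network=cloudbuild
      - --name=test-postgres
      - -e
      - POSTGRES_PASSWORD=password123
      - -e
      - POSTGRES_DB=aseem
      - -e
      - POSTGRES_USER=aseem
      - postgres
    waitFor: [ '-' ]
Then point Django's database HOST at test-postgres in the test step (for example via an environment variable read by settings.py) instead of the Cloud SQL proxy.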

Cloud Build supports running on a custom VPC with worker pools.
You can create a worker pool to access resources in your VPC. If you need to access external resources while building, you must enable "Assign external IPs" in your worker pool settings, and you must use the block below in your cloudbuild.yaml:
options:
  pool:
    name: "projects/${PROJECT_ID}/locations/${_REGION}/workerPools/${_WORKER_POOL}"
substitutions:
  _REGION: 'your_region'
  _WORKER_POOL: 'your_pool_name'
If you need to know your outbound IP, you can find it in Cloud NAT, and if you have firewall rules for outbound traffic you have to assign a dedicated IP.
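For completeness, a private worker pool can be created with gcloud before referencing it in the options block above; a rough sketch where all names are placeholders:
gcloud builds worker-pools create your_pool_name \
  --region=your_region \
  --peered-network=projects/your_project/global/networks/your_vpc_network
The "Assign external IPs" setting mentioned above can be toggled when creating or editing the pool in the console.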

Related

How to pull and use existing image from Azure ACR through Dockerfile

I am performing an AWS to Azure services migration.
I am using a CentOS VM and am trying to pull an existing image from ACR and create a container from it using a Dockerfile. I have created an image on Azure ACR and need help pulling this image and creating a container on the CentOS VM.
Earlier, I was doing this with images on AWS ECR (not sure if by using AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY) as below. But I am not sure how this can be done using Azure ACR. How do I give the application containing the Dockerfile and docker-compose.yml below access to Azure? Do I need to use an access and secret key pair similar to AWS? If so, how do I create this pair on Azure?
Below are the files I was using when dealing with container creation on CentOS using the AWS image.
Dockerfile:
FROM 12345.ecrImageUrl/latestImages/pk-image-123:latest
RUN yum update -y
docker-compose.yml:
version: '1.2'
services:
  initn:
    <<: *mainnode
    entrypoint: ""
    command: "npm i"
  bldn:
    <<: *mainnode
    entrypoint: ""
    command: "npm run watch:build"
  runn:
    <<: *mainnode
    entrypoint: ""
    command: "npm run watch:run"
    links:
      - docker-host
    ports:
      - "8011:8080"
    environment:
      - AWS_ACCESS_KEY=${AWS_ACCESS_KEY}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
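Not an exact answer from this thread, but a common approach is to create an Azure service principal with pull rights on the registry and use its appId/password much like the AWS key pair; a sketch where the registry and principal names (myregistry, acr-pull-sp) are placeholders:
# service principal scoped to the registry, pull-only
ACR_ID=$(az acr show --name myregistry --query id --output tsv)
az ad sp create-for-rbac --name acr-pull-sp --role acrpull --scopes $ACR_ID
# on the CentOS VM, log in with the appId/password returned above
docker login myregistry.azurecr.io --username <appId> --password <password>
After the login, the Dockerfile's FROM line can reference the ACR image directly, e.g. FROM myregistry.azurecr.io/latestImages/pk-image-123:latest.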

Docker-compose in GCP Cloud build

I'm trying to build and deploy an app to GCP Cloud Run using GCP Cloud Build.
I can already build, push and deploy the service with the Dockerfile alone, but I need to use the project's docker-compose setup. My Dockerfile runs perfectly in Docker Desktop, but I am not finding documentation for using docker-compose with GCP Artifact Registry.
My dockerfile:
FROM python:3.10.5-slim-bullseye
#docker build -t cloud_app .
#docker image ls
#docker run -p 81:81 cloud_app
RUN mkdir wd
WORKDIR /wd
RUN apt-get update
RUN apt-get install ffmpeg libsm6 libxext6 -y
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY ./ ./
CMD python3 main.py
My docker-compose:
version: "3.3"
services:
  web:
    build:
      context: ./destripa_frame/
      dockerfile: ./Docker_files/Dockerfile
    image: bank_anon_web_dev_build
    restart: always
    expose:
      - 8881
      - 80
      - 2222
      - 22
    ports:
      - "2222:2222"
      - "80:80"
    environment:
      - TZ=America/Chicago
My cloud-build configuration:
steps:
  - name: 'docker/compose:1.28.2'
    args: ['up', '--build', '-f', './cloud_run/docker-compose.devapp.yml', '-d']
  - name: 'docker/compose:1.28.2'
    args: ['-f', './cloud_run/docker-compose.devapp.yml', 'up', 'docker-build']
images: ['us-central1-docker.pkg.dev/${PROJECT_ID}/app-destripador/job_app:$COMMIT_SHA']
The Cloud Build execution for the commit succeeds:
(screenshot: Cloud Build execution)
How can I modify the Cloud Build config so that the docker-compose image is pushed to Artifact Registry?
EDIT: I found the correct method to push the image to Artifact Registry using Cloud Build and docker-compose.
I modified my cloud-build.yml configuration to build the image and then retag the docker-compose image with the Artifact Registry image name.
Cloud Build automatically pushes the image to the repository (if the image name is not a registry URL, it pushes it to Docker.io).
My new Cloud-build.yml:
steps:
  - name: 'docker/compose:1.28.2'
    args: [
      '-p', 'us-central1-docker.pkg.dev/${PROJECT_ID}/app-destripador',
      '-f', './cloud_run/docker-compose.devapp.yml',
      'up', '--build', 'web'
    ]
  - name: 'gcr.io/cloud-builders/docker'
    args: [
      'tag',
      'bank_anon_web_dev_build',
      'us-central1-docker.pkg.dev/${PROJECT_ID}/app-destripador/bank_anon_web_dev_build'
    ]
images: ['us-central1-docker.pkg.dev/${PROJECT_ID}/app-destripador/bank_anon_web_dev_build']
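As far as I can tell from this config, the compose step builds the image locally under the name given by the image: key in the compose file (bank_anon_web_dev_build), the docker tag step retags it with the Artifact Registry path, and the images: field is what makes Cloud Build push that tag to the registry when the build finishes.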
Hopefully this helps anyone who needs to understand GCP Cloud Build with docker-compose, because the guides on the web don't explain this last part.

Run django on background for action

I'm trying to test my React app with Yarn using GitHub Actions. I need to have Django running for some tests; however, Django prints the
Watching for file changes with StatReloader
Performing system checks...
System check identified no issues (0 silenced).
October 15, 2021 - 03:32:25
Django version 3.2.6, using settings 'controller.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
message, which blocks the shell. Any ideas how I can start Django inside the action so that the action continues to run?
This is currently my action
name: CI
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  workflow_dispatch:
jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Installing dependencies
        run: ./install.sh
        shell: bash
      - name: Testing backend
        run: ./backend-test.sh
        shell: bash
      - name: Starting backend
        run: ./backend.sh > /dev/null 2>&1
        shell: bash
      - name: Testing frontend
        run: ./frontend-test.sh
        shell: bash
And backend.sh does this
# Backend test script
cd backend/ && \
sudo service postgresql start && \
source .venv/bin/activate && \
python3 manage.py test
You should add & at the end of the command running the backend.sh script to run it in the background. The process ID will be printed to the screen and you can run other scripts/commands:
      - name: Starting backend
        run: ./backend.sh > /dev/null 2>&1 &
        shell: bash
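If the frontend tests sometimes start before Django has finished booting, the step can also wait for the port to answer before continuing; a sketch assuming the dev server listens on 127.0.0.1:8000 as in the log above:
      - name: Starting backend
        run: |
          ./backend.sh > /dev/null 2>&1 &
          # poll the dev server for up to ~30s, then move on
          for i in $(seq 1 30); do
            curl -sf http://127.0.0.1:8000/ > /dev/null && break
            sleep 1
          done
        shell: bash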

Gitlab ci fails to run docker-compose for django app

I am setting up a GitLab pipeline that I want to use to deploy a Django app on AWS using Terraform.
At the moment I am just setting up the pipeline so that it validates the Terraform and runs tests (pytest) and linting.
The pipeline uses docker-in-docker and it looks like this:
image:
  name: hashicorp/terraform:1.0.5
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
stages:
  - Test and Lint
Test and Lint:
  image: docker:20.10.9
  services:
    - docker:20.10.9-dind
  stage: Test and Lint
  script:
    - apk add --update docker-compose
    - apk add python3
    - apk add py3-pip
    - docker-compose run --rm app sh -c "pytest && flake8"
  rules:
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME =~ /^(master|production)$/ || $CI_COMMIT_BRANCH =~ /^(master|production)$/'
The pipeline fails to run the tests due to what I think is a database error, which is weird as I am using pytest to mock the Django database.
If I just run:
docker-compose run --rm app sh -c "pytest && flake8"
on the terminal of my local machine, all tests pass.
Any idea how I can debug this?
P.S. Let me know if I need to add more info.
I don't think you are able to run docker in the CI directly. You can specify which image to use in each step and then run the commands. For instance:
image: "python:3.7"
before_script:
- python --version
- pip install -r requirements.txt
stages:
- Static Analysis
- Test
unit_test:
stage: Test
script:
- pytest
See, in this pipeline, I used the python:3.7 image. You can upload your docker image to some Registry and use it in the pipeline.
I managed to solve it and the tests in CI now pass using:
script:
  - apk add --update docker-compose
  - docker-compose up -d --build && docker-compose run --rm app sh -c "pytest && flake8"
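A plausible explanation (the compose file isn't shown, so this is an assumption): up -d --build brings up the database service in the background before run executes the tests, so the app container can actually reach it. If the database still needs a moment to accept connections, a healthcheck plus depends_on in the compose file makes the wait explicit; a sketch with placeholder service names, supported by recent docker-compose versions:
services:
  app:
    build: .
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:13-alpine
    environment:
      - POSTGRES_PASSWORD=devpassword
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 2s
      timeout: 3s
      retries: 15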

Getting error when trying to execute cloud build to deploy application to cloud run

I deployed an application to Cloud Run in GCP, and it executed successfully when built from the Dockerfile. Now I am setting up CI/CD using cloudbuild.yaml. I mirrored the repo to CSR, set up Cloud Build, and placed cloudbuild.yaml in my repository. When executed from Cloud Build, it throws the following error:
Status: Downloaded newer image for gcr.io/google.com/cloudsdktool/cloud-sdk:latest
gcr.io/google.com/cloudsdktool/cloud-sdk:latest
Deploying...
Creating Revision...failed
Deployment failed
ERROR: (gcloud.run.deploy) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable.
The Dockerfile is attached below:
#pulls python 3.7’s image from the docker hub
FROM python:alpine3.7
#copies the flask app into the container
COPY . /app
#sets the working directory
WORKDIR /app
#install each library written in requirements.txt
RUN pip install -r requirements.txt
#exposes port 8080
EXPOSE 8080
#Entrypoint and CMD together just execute the command
#python app.py which runs this file
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
cloudbuild.yaml:
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/projectid/servicename', '.']
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/projectid/servicename']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - 'run'
      - 'deploy'
      - 'phase-2'
      - '--image'
      - 'gcr.io/projectid/servicename'
      - '--region'
      - 'us-central1'
      - '--platform'
      - 'managed'
images:
  - 'gcr.io/projectid/servicename'
OP got the issue resolved, as seen in the comments:
Got the issue resolved. It was because of a Python compatibility issue: I should use pip3 and python3 in the Dockerfile. I think the gcr.io image is compatible with python3.
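For reference, a sketch of what the corrected Dockerfile might look like under that explanation (the pip3/python3 switch is from the comment; binding to 0.0.0.0 and the PORT variable inside app.py is an extra assumption, since that is what the Cloud Run error complains about):
FROM python:alpine3.7
COPY . /app
WORKDIR /app
# use pip3/python3 explicitly, per the fix described above
RUN pip3 install -r requirements.txt
EXPOSE 8080
# app.py should bind to 0.0.0.0 and the port Cloud Run provides, e.g.
# app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
ENTRYPOINT [ "python3" ]
CMD [ "app.py" ]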