I'm trying to test my React app with yarn using GitHub Actions. I need Django running for some of the tests, but Django prints the
Watching for file changes with StatReloader
Performing system checks...
System check identified no issues (0 silenced).
October 15, 2021 - 03:32:25
Django version 3.2.6, using settings 'controller.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
message, which blocks bash. Any ideas how I can start Django inside the action so that the action continues to run?
This is currently my action
name: CI
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  workflow_dispatch:
jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Installing dependencies
        run: ./install.sh
        shell: bash
      - name: Testing backend
        run: ./backend-test.sh
        shell: bash
      - name: Starting backend
        run: ./backend.sh > /dev/null 2>&1
        shell: bash
      - name: Testing frontend
        run: ./frontend-test.sh
        shell: bash
And backend.sh does this
# Backend test script
cd backend/ && \
sudo service postgresql start && \
source .venv/bin/activate && \
python3 manage.py test
You should add & at the end of the command running the backend.sh script to run it in the background. The process ID will be printed to the screen, and you can then run other scripts/commands:
- name: Starting backend
  run: ./backend.sh > /dev/null 2>&1 &
  shell: bash
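If the frontend tests occasionally start before Django is listening, a bare & may not be enough on its own. Below is a minimal sketch of backgrounding a server and polling its port before moving on; `python3 -m http.server` and port 8123 are stand-ins for `./backend.sh` and port 8000, chosen only so the sketch is self-contained:

```shell
# Start a server in the background; the shell gets its prompt back immediately.
nohup python3 -m http.server 8123 > /dev/null 2>&1 &
server_pid=$!

# Poll the port for up to 10 seconds instead of sleeping a fixed amount.
up=""
for i in $(seq 1 10); do
  if curl -fsS http://127.0.0.1:8123/ > /dev/null 2>&1; then
    up=yes
    break
  fi
  sleep 1
done

echo "server up: ${up:-no}"
kill "$server_pid"
```

In the workflow, the same loop can follow the backgrounded command so the next step only starts once the server actually responds.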
Related
I am working on a Django project and I am new to GitHub Actions.
I have set up this GitHub Actions file:
name: Django CI
on:
  push:
    branches: [ master ]
jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v3
      - name: Install Dependencies
        run: |
          virtualenv -p python3 env
          . env/bin/activate
          pip3 install -r requirements.txt
My files get uploaded, but the issue is that I have to restart the Nginx server.
How can I restart the Nginx server?
I run the following bash commands from my local machine to deploy a Django project to App Engine:
python manage.py migrate
python manage.py collectstatic --noinput
gsutil rsync -R static/ gs://xyz4/static
gcloud config set project project_name_1
gcloud app deploy --quiet
I would like to set this up on Cloud Build. I have enabled push triggers on Cloud Build. I need help creating the cloudbuild.yaml file.
Note: Cloud Build does not support VPC, hence it cannot be used for migrations against a private Cloud SQL instance - link
Following are the steps I use when deploying code from a GitHub repo to App Engine standard. Each step is dependent on the previous step running successfully.
Create python venv & install all dependencies
Install gcloud proxy & make it executable
Turn on proxy, activate venv, run tests, if test pass then make migrations, collect static files
Upload static files to public bucket
deploy code to GAE standard
cloudbuild.yaml:
steps:
- id: setup-venv
  name: python:3.8-slim
  timeout: 100s
  entrypoint: sh
  args:
    - -c
    - '(python -m venv my_venv && . my_venv/bin/activate && pip install -r requirements.txt && ls)'
  waitFor: [ '-' ]
- id: proxy-install
  name: 'alpine:3.10'
  entrypoint: sh
  args:
    - -c
    - 'wget -O /workspace/cloud_sql_proxy https://storage.googleapis.com/cloudsql-proxy/v1.21.0/cloud_sql_proxy.linux.amd64 && chmod +x /workspace/cloud_sql_proxy'
  waitFor: [ 'setup-venv' ]
- id: run-tests-with-proxy
  name: python:3.8-slim
  entrypoint: sh
  args:
    - -c
    - '(/workspace/cloud_sql_proxy -dir=/workspace -instances="<instance_name>=tcp:3306" & sleep 2) && (. my_venv/bin/activate && python manage.py test --noinput && python manage.py migrate && python manage.py collectstatic --noinput )'
  waitFor: [ 'proxy-install' ]
  env:
    - 'CLOUD_BUILD=1'
    - 'PYTHONPATH=/workspace'
# if tests fail, the following sections won't execute because they waitFor the tests section
- id: upload-static-to-bucket
  name: 'gcr.io/cloud-builders/gsutil'
  entrypoint: 'bash'
  args: [ '-c', 'gsutil rsync -R ./static/ gs://<bucket_name>/static' ]
  waitFor: [ 'run-tests-with-proxy' ]
- id: deploy
  name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: 'bash'
  args: [ '-c', 'gcloud app deploy --quiet' ]
  waitFor: [ 'upload-static-to-bucket' ]
Scope for improvement:
How to have args broken into multiple lines instead of everything being on one line.
It would be nice if tests ran against a local Postgres instance on Cloud Build instead of the production Cloud SQL instance. I was able to create a Postgres instance, but it did not run in the background, so my code could not connect to this local instance when running tests.
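On the first point, YAML block scalars allow args to span multiple lines. A sketch of the setup-venv step rewritten that way (same names as above; `set -e` is added so a failing command aborts the step, since the block scalar replaces the && chaining):

```yaml
- id: setup-venv
  name: python:3.8-slim
  timeout: 100s
  entrypoint: sh
  args:
    - -c
    - |
      set -e
      python -m venv my_venv
      . my_venv/bin/activate
      pip install -r requirements.txt
      ls
  waitFor: [ '-' ]
```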
postgres in cloudbuild.yaml:
- id: setup-postgres
  name: postgres
  timeout: 500s
  waitFor: [ '-' ]
  env:
    - 'POSTGRES_PASSWORD=password123'
    - 'POSTGRES_DB=aseem'
    - 'POSTGRES_USER=aseem'
    - 'PGPORT=5432'
    - 'PGHOST=127.0.0.1'
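One possible workaround for the background problem (an assumption based on Cloud Build's docker builder, not verified here): build steps are attached to the `cloudbuild` Docker network, so Postgres can be started detached as a sibling container and reached by the hostname `postgres` from later steps:

```yaml
- id: setup-postgres
  name: gcr.io/cloud-builders/docker
  args:
    - run
    - -d                      # detached: the step returns while Postgres keeps running
    - --network=cloudbuild    # network shared with subsequent build steps
    - --name=postgres
    - -e
    - POSTGRES_PASSWORD=password123
    - -e
    - POSTGRES_DB=aseem
    - -e
    - POSTGRES_USER=aseem
    - postgres:13
  waitFor: [ '-' ]
```

Later steps would then point Django at host `postgres`, port 5432, instead of 127.0.0.1.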
Cloud Build supports running on a custom VPC with worker pools.
You can create a worker pool to access resources in the VPC. If you need outbound access while building, you must select "Assign external IPs" in your worker pool settings, and you must use the code below in your cloudbuild.yaml:
options:
  pool:
    name: "projects/${PROJECT_ID}/locations/${_REGION}/workerPools/${_WORKER_POOL}"
substitutions:
  _REGION: 'your_region'
  _WORKER_POOL: 'your_pool_name'
If you need to know your outbound IP, you can find it in Cloud NAT, and if you have firewall rules for outbound traffic, you have to assign a dedicated IP.
1. Currently, I'm building a Flask project and I also wrote some unit tests. I would like to run the unit tests in a GitHub Action, but it gets stuck at the ./run stage (./run serves the app at http://127.0.0.1:5000/) and never runs the pytest command. I know pytest is not executed because the GitHub Action is blocked serving http://127.0.0.1:5000/, so no commands after ./run can execute. I was wondering: can I run pytest in another terminal in a GitHub Action? Is this possible?
Here is the output of my github action:
Run cd Flask-backend
cd Flask-backend
./run
pytest
shell: /usr/bin/bash -e {0}
env:
pythonLocation: /opt/hostedtoolcache/Python/3.8.10/x64
* Serving Flask app 'app.py' (lazy loading)
* Environment: development
* Debug mode: on
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 404-425-256
2. Here is my yml file:
name: UnitTesting
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install Python 3
        uses: actions/setup-python@v1
        with:
          python-version: 3.8
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirement.txt
      - name: Run tests with pytest
        run: |
          cd Flask-backend
          ./run
          pytest
You can use the nohup command to run the Flask server in the background instead of running it in a different terminal.
nohup python app.py
Wait for some time after running this command by using the sleep command, and then run your tests.
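A sketch of that pattern; note that nohup alone does not background a process, so an & is still needed. Here `sleep 30` stands in for `python app.py`, since any long-running command behaves the same way once backgrounded:

```shell
# nohup + & + a short wait: `sleep 30` is a stand-in for `python app.py`.
nohup sleep 30 > /dev/null 2>&1 &
app_pid=$!

# Give the server a moment to start, as the answer suggests.
sleep 1

# The shell has already moved on; confirm the background process is still alive.
alive=""
kill -0 "$app_pid" && alive=yes
echo "background process alive: ${alive:-no}"

# A CI job would run pytest here; this sketch just cleans up the stand-in.
kill "$app_pid"
```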
I tried it several times without any success, but there is a simple workaround to test your API locally. I achieved it with a pytest script using test_client() from the flask package. This client simulates your Flask app.
from api import app
# init test_client
app.config['TESTING'] = True
client = app.test_client()
# to test your app
def test_api():
r = client.get('/your_endpoint')
assert r.status_code == 200
Note that every request is now made directly through the client and the methods that belong to it.
You can find more information here: https://flask.palletsprojects.com/en/2.0.x/testing/
1. I was trying to run unit tests in a GitHub Action. After I run the script that serves http://127.0.0.1:5000/, I cannot input any more commands (unless I open another terminal). I was wondering if there is any way to input and execute a command after the server binds http://127.0.0.1:5000/? Or does GitHub Actions support running in different terminals?
2. Here is my yml code:
name: UnitTesting
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install Python 3
        uses: actions/setup-python@v1
        with:
          python-version: 3.8
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirement.txt
      - name: Run server
        run: |
          cd Flask-backend
          nohup python3 app.py
          sleep
      - name: Run Unit test
        run: |
          pytest
Your command does not run in the background.
You need to change it to
nohup python3 app.py &> /dev/null &
Also, I believe sleep requires an argument, like sleep 3.
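Putting both fixes together, the last two steps of the workflow might look like this (a sketch; the redirection target and the 3-second sleep are arbitrary choices, and on hosted runners a process backgrounded with nohup in one run step generally keeps running into later steps):

```yaml
- name: Run server
  run: |
    cd Flask-backend
    nohup python3 app.py > /dev/null 2>&1 &
    sleep 3
- name: Run Unit test
  run: |
    cd Flask-backend
    pytest
```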
Tests using gitlab-ci in Docker fail, as the Postgres service is not accessible.
In my dev environment, I run tests successfully with: $docker-compose -f local.yaml run web py.test
But in GitLab, the command - docker run --env-file=.env_dev $CONTAINER_TEST_IMAGE py.test -p no:sugar in the gitlab-ci.yml file fails with:
9bfe10de3baf: Pull complete
a137c036644b: Pull complete
8ad45b31cc3c: Pull complete
Digest: sha256:0897b57e12bd2bd63bdf3d9473fb73a150dc4f20cc3440822136ca511417762b
Status: Downloaded newer image for registry.gitlab.com/myaccount/myapp:gitlab_ci
$ docker run --env-file=.env $CONTAINER_TEST_IMAGE py.test -p no:sugar
Postgres is unavailable - sleeping
Postgres is unavailable - sleeping
Postgres is unavailable - sleeping
Postgres is unavailable - sleeping
Postgres is unavailable - sleeping
Basically, it cannot see the Postgres service. The text Postgres is unavailable - sleeping comes from the entrypoint.sh file referenced by the Dockerfile.
Below are some relevant files:
gitlab-ci.yml
image: docker:latest
services:
  - docker:dind
stages:
  - build
  - test
variables:
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
build:
  stage: build
  script:
    - docker build --pull -t $CONTAINER_TEST_IMAGE --file compose/local/django/Dockerfile .
    - docker push $CONTAINER_TEST_IMAGE
pytest:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run --env-file=.env_dev $CONTAINER_TEST_IMAGE py.test -p no:sugar
  when: on_success
Dockerfile:
# ... other configs here
ENTRYPOINT ["compose/local/django/entrypoint.sh"]
entrypoint.sh:
# ..... other configs here
export DATABASE_URL=postgres://$POSTGRES_USER:$POSTGRES_PASSWORD@postgres:5432/$POSTGRES_USER
function postgres_ready(){
python << END
import sys
import psycopg2
try:
conn = psycopg2.connect(dbname="$POSTGRES_USER", user="$POSTGRES_USER", password="$POSTGRES_PASSWORD", host="postgres")
except psycopg2.OperationalError:
sys.exit(-1)
sys.exit(0)
END
}
until postgres_ready; do
>&2 echo "Postgres is unavailable - sleeping"
sleep 1
done
>&2 echo "Postgres is up - continuing..."
exec $cmd
The above setup & configuration is inspired by django-cookie-cutter
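The excerpt ends without a resolution; a hedged sketch of one common fix follows. A container started with docker run inside docker:dind cannot see the job's services, so Postgres has to run as a sibling container on a shared Docker network. The network name, Postgres image tag, and variables here are assumptions; naming the container `postgres` matches the host that entrypoint.sh polls:

```yaml
pytest:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker network create ci
    - docker run -d --name postgres --network ci -e POSTGRES_USER=$POSTGRES_USER -e POSTGRES_PASSWORD=$POSTGRES_PASSWORD postgres:11
    - docker run --network ci --env-file=.env_dev $CONTAINER_TEST_IMAGE py.test -p no:sugar
  when: on_success
```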