How to run two commands on a GitHub Actions instance one after another? - django

So the question seems easy, but let me start with this: ";" and "&" do not work.
The two commands to be run on the GitHub Actions instance in the CI/CD pipeline:
python3 manage.py runserver
python3 abc.py
After putting the commands in the YAML file, only the first command runs; the workflow then gets stuck there and never executes the second command.
I have tried putting them in two separate blocks in the workflow YAML file, but no luck.

There are two ways to run commands one after another on GitHub Actions.
In the same step:
steps:
  - name: Run both python files
    run: |
      python manage.py runserver
      python abc.py
In different steps (which run in sequence):
steps:
  - name: Run first python file
    run: python manage.py runserver
  - name: Run second python file
    run: python abc.py
Also, you don't need to call python3 explicitly; python is enough, since the setup-python action pins the version first.
Therefore, your whole workflow would probably look like this:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository content
        uses: actions/checkout@v2.3.4
      - name: Setup Python Version
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Install Python dependencies
        run: python -m pip install --upgrade pip [...] # if necessary
      - name: Execute Python scripts
        run: |
          python manage.py runserver
          python abc.py
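One caveat for this particular pair of commands: `manage.py runserver` is a blocking command, so run in sequence it never hands control to the next line. If the server has to keep running while the second script executes, the first command must be backgrounded with `&`. A minimal sketch of that pattern, using `sleep 30` as a stand-in for any blocking command:

```shell
# `sleep 30` stands in for a blocking command such as `python manage.py runserver`.
# Backgrounding it with `&` lets the next command run immediately.
sleep 30 &
SERVER_PID=$!
# The second command runs while the first is still alive:
kill -0 "$SERVER_PID" && echo "second command ran while the server was up"
# Clean up the background process so the step can finish.
kill "$SERVER_PID"
```

Without the `&`, the step simply waits for the first command to exit, which is exactly the "stuck workflow" described in the question.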

Related

Is it possible to run a command in a different terminal on a GitHub Actions runner?

1. Currently, I'm building a Flask project and I have also written some unit tests. Now I would like to run the unit tests on GitHub Actions, but the workflow gets stuck at the ./run stage (./run starts the server on http://127.0.0.1:5000/) and never runs the pytest command. I know why pytest is not executed: the GitHub Actions runner is busy serving http://127.0.0.1:5000/, so it cannot execute any command after ./run. I was wondering, can I run pytest in another terminal on GitHub Actions? Is this possible?
Here is the output of my GitHub Action:
Run cd Flask-backend
cd Flask-backend
./run
pytest
shell: /usr/bin/bash -e {0}
env:
pythonLocation: /opt/hostedtoolcache/Python/3.8.10/x64
* Serving Flask app 'app.py' (lazy loading)
* Environment: development
* Debug mode: on
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 404-425-256
2. Here is my yml file:
name: UnitTesting
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install Python 3
        uses: actions/setup-python@v1
        with:
          python-version: 3.8
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirement.txt
      - name: Run tests with pytest
        run: |
          cd Flask-backend
          ./run
          pytest
You can use the nohup command to run the Flask server in the background instead of running it in a different terminal:
nohup python app.py &
Wait for some time after running this command (using the sleep command) and then run your tests.
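The combined step might then look like the sketch below. Python's built-in `http.server` stands in for the Flask app here, and port 8000 is an arbitrary choice for this example:

```shell
# Start the stand-in server in the background, detached from the step's shell.
nohup python3 -m http.server 8000 >/dev/null 2>&1 &
SERVER_PID=$!
# Give the server a moment to bind to the port before testing it.
sleep 3
# Smoke-check that the server answers; in the real workflow this is where
# `pytest` would run.
python3 -c "import urllib.request; urllib.request.urlopen('http://127.0.0.1:8000/')" \
  && echo "server is up, tests can run"
kill "$SERVER_PID"
```

A fixed `sleep` is the simplest wait; polling the URL in a loop is more robust if server startup time varies.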
I tried it several times without any success. But there is a simple workaround to test your API locally: I achieved it with a pytest script using test_client() from the flask package. This client simulates your Flask app.
from api import app

# init test_client
app.config['TESTING'] = True
client = app.test_client()

# to test your app
def test_api():
    r = client.get('/your_endpoint')
    assert r.status_code == 200
Note that every request is now made directly through the client and the methods that belong to it.
You can find more information here: https://flask.palletsprojects.com/en/2.0.x/testing/

How can I run a command after connecting to port 5000?

1. I was trying to run unit tests on GitHub Actions. After running the script that binds to http://127.0.0.1:5000/, I cannot input any more commands (unless I open another terminal). Is there any way to input and execute a command after connecting to port 5000? Or does GitHub Actions support running in different terminals?
2. Here is my yml code:
name: UnitTesting
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install Python 3
        uses: actions/setup-python@v1
        with:
          python-version: 3.8
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirement.txt
      - name: Run server
        run: |
          cd Flask-backend
          nohup python3 app.py
          sleep
      - name: Run Unit test
        run: |
          pytest
Your command does not run in the background.
You need to change it to
nohup python3 app.py >/dev/null 2>&1 &
and I believe sleep requires an argument, like sleep 3

Coveralls is not being submitted on a Django app with Docker

I'm working on a Django project using Docker. I have configured Travis CI and I want to submit test coverage to Coveralls. However, it is not working as expected. Any help will be highly appreciated.
Here is the error I'm getting:
Submitting coverage to coveralls.io...
No source for /mwibutsa/mwibutsa/settings.py
No source for /mwibutsa/mwibutsa/urls.py
No source for /mwibutsa/user/admin.py
No source for /mwibutsa/user/migrations/0001_initial.py
No source for /mwibutsa/user/models.py
No source for /mwibutsa/user/tests/test_user_api.py
No source for /mwibutsa/user/tests/test_user_model.py
Could not submit coverage: 422 Client Error: Unprocessable Entity for url: https://coveralls.io/api/v1/jobs
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/coveralls/api.py", line 177, in wear
response.raise_for_status()
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 422 Client Error: Unprocessable Entity for url: https://coveralls.io/api/v1/jobs
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/coveralls/cli.py", line 77, in main
result = coverallz.wear()
File "/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/coveralls/api.py", line 180, in wear
raise CoverallsException('Could not submit coverage: {}'.format(e))
coveralls.exception.CoverallsException: Could not submit coverage: 422 Client Error: Unprocessable Entity for url: https://coveralls.io/api/v1/jobs
Here is my Travis.yml file
language: python
python:
  - "3.7"
services: docker
before_script: pip install docker-compose
script:
  - docker-compose run web sh -c "coverage run manage.py test && flake8 && coverage report"
after_success:
  - coveralls
My Dockerfile
FROM python:3.7-alpine
LABEL description="Mwibutsa Floribert"
ENV PYTHONUNBUFFERED 1
RUN mkdir /mwibutsa
WORKDIR /mwibutsa
COPY requirements.txt /mwibutsa/
RUN apk add --update --no-cache postgresql-client jpeg-dev
RUN apk add --update --no-cache --virtual .tmp-build-deps gcc libc-dev linux-headers postgresql-dev musl-dev zlib zlib-dev
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
RUN apk del .tmp-build-deps
COPY . /mwibutsa/
My docker-compose.yml
version: '3.7'
services:
  web:
    build: .
    command: >
      sh -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    environment:
      - DB_HOST=db
      - DB_NAME=postgres
      - DB_PASSWORD=password
      - DB_USER=postgres
      - DB_PORT=5432
    volumes:
      - .:/mwibutsa
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:12-alpine
    environment:
      - POSTGRES_NAME=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_USER=postgres
      - POSTGRES_PORT=5432
To understand why the coverage is not being submitted, you have to understand how docker containers operate.
The container is created to mimic a separate and independent unit. This means that commands being run in the global context are different from those being run inside the container context.
In your case, you are running the tests and generating the coverage report inside the container's context, then trying to submit the report to Coveralls from the global context.
Since the report file is in the container, the coveralls command cannot find it, and hence nothing gets submitted.
You may refer to the answer provided here to solve this:
Coveralls: Error- No source for in my application using Docker container
Or check out the documentation provided by travis on how to submit to coveralls from travis using docker:
https://docs.travis-ci.com/user/coveralls/#using-coveralls-with-docker-builds
You have to run coveralls inside the container so it can send the data file generated by coverage to coveralls.io. You have to run coverage again in the after_success command so the .coverage data file is present in the container when coveralls runs. You also have to pass the coveralls repo token in as an environment variable that you set in travis https://docs.travis-ci.com/user/environment-variables#defining-variables-in-repository-settings.
.travis.yml
language: python
python:
  - "3.7"
services: docker
before_script: pip install docker-compose
script:
  - docker-compose run web sh -c "coverage run manage.py test && flake8 && coverage report"
after_success:
  - docker-compose run web sh -c "coverage run manage.py test && TRAVIS_JOB_ID=$TRAVIS_JOB_ID TRAVIS_BRANCH=$TRAVIS_BRANCH COVERALLS_REPO_TOKEN=$COVERALLS_REPO_TOKEN coveralls"
You need to make sure your git repo files are copied into the container for coveralls to accurately report the branch and have the badge work. You might also need to install git in the container.
Dockerfile:10
RUN apk add --update --no-cache postgresql-client jpeg-dev git

Travis detecting Ruby default settings in a Django project

I have a Django project that I want to test continuously using Travis CI. The problem is that every time I run a build in Travis, it fails because of Ruby's rake command.
I have already changed my travis.yml a hundred times, but it's not working. Here is my latest travis.yml, which is in the same directory as my requirements.txt:
language: python
python:
  - "3.5"
  - "3.6"
  - "3.7"
cache: pip
services:
  - sqlite3
env:
  - DJANGO=2.2.4 DB=sqlite3
install:
  - pip install -r requirements.txt
before_script:
  - sqlite3 - e 'create database test;' -u root
script:
  - python manage.py makemigrations
  - python manage.py migrate
  - python manage.py test
The output I get from travis is this:
rvm
$ git clone --depth=50 --branch=master ...
1.01s$ rvm use default
ruby.versions
$ ruby --version
No Gemfile found, skipping bundle install
0.21s$ rake
rake aborted!
No Rakefile found (looking for: rakefile, Rakefile, rakefile.rb, Rakefile.rb)
/home/travis/.rvm/gems/ruby-2.5.3@global/gems/rake-12.3.2/exe/rake:27:in `<top (required)>'
/home/travis/.rvm/gems/ruby-2.5.3@global/bin/ruby_executable_hooks:24:in `eval'
/home/travis/.rvm/gems/ruby-2.5.3@global/bin/ruby_executable_hooks:24:in `<main>'
(See full trace by running task with --trace)
The command "rake" exited with 1.
Done. Your build exited with 1.
Rename travis.yml to .travis.yml.
See more at Getting started and Building a Python project.
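As a shell sketch of that fix (using a temporary directory as a stand-in for the repository root):

```shell
# Travis CI only picks up the configuration when the file is named
# `.travis.yml` (note the leading dot) at the repository root; otherwise it
# falls back to the default Ruby build, which is why `rake` runs.
cd "$(mktemp -d)"           # stand-in for the repository root in this sketch
touch travis.yml            # the misnamed config file
mv travis.yml .travis.yml   # rename it so Travis can find it
```

After committing the rename, the next push triggers a Python build instead of the Ruby default.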

Can CircleCI use docker-compose to build the environment?

I currently have a few services, such as db and web, in a Django application, and docker-compose is used to string them together.
The web service looks like this:
web:
  restart: always
  build: ./web
  expose:
    - "8000"
The Dockerfile in web uses python:2.7-onbuild, so it installs all the necessary dependencies from the requirements.txt file.
I am now using CircleCI for integration and have a circle.yml file like this:
....
dependencies:
  pre:
    - pip install -r web/requirements.txt
....
Is there any way I could avoid the dependencies clause in the circle.yml file? I would like CircleCI to use docker-compose.yml instead, if that makes sense.
Yes, using docker-compose in the circle.yml file can be a nice way to run tests, because it mirrors one's dev environment very closely. This is an extract from our working tests on an AngularJS project:
---
machine:
  services:
    - docker
dependencies:
  override:
    - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
    - sudo pip install --upgrade docker-compose==1.3.0
test:
  pre:
    - docker-compose pull
    - docker-compose up -d
    - docker-compose run npm install
    - docker-compose run bower install --allow-root --config.interactive=false
  override:
    # grunt runs our karma tests
    - docker-compose run grunt deploy-build compile
Notes:
The docker login is only needed if you have private images in docker hub.
When we wrote our circle.yml file, only docker-compose 1.3 was available; this has probably been updated since.
I haven't tried this myself, but based on the info at https://circleci.com/docs/docker I guess it may work:
# circle.yml
machine:
  services:
    - docker
dependencies:
  pre:
    - pip install docker-compose
test:
  pre:
    - docker-compose up -d
Unfortunately, CircleCI by default installs the old Docker version 1.9.1, which is not compatible with the latest version of docker-compose. In order to get the more recent Docker version 1.10.0, you should:
machine:
  pre:
    - curl -sSL https://s3.amazonaws.com/circle-downloads/install-circleci-docker.sh | bash -s -- 1.10.0
    - pip install docker-compose
  services:
    - docker
test:
  pre:
    - docker-compose up -d
Read more: https://discuss.circleci.com/t/docker-1-10-0-is-available-beta/2100
UPD: There is native Docker support in CircleCI version 2.
Read more about how to switch to the new CircleCI version here: https://circleci.com/docs/2.0/migrating-from-1-2/