How to set up GitHub Actions for Celery and Django?

I have some Celery tasks in my Django app. In the test suite, several Celery tasks do database work through the Django ORM.
In my local environment pytest works fine, but in GitHub Actions the following error is raised:
kombu.exceptions.OperationalError
In my pytest conftest.py I use the following fixture (taken from the Celery docs):
@pytest.fixture(scope="session")
def celery_config():
    return {"broker_url": "amqp://", "result_backend": "redis://"}
but the exception is still thrown. So, how can I properly create a GitHub workflow that tests Celery tasks without hitting this exception?
My GitHub workflow:
name: CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [ '3.10' ]
    services:
      # Label used to access the service container
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
        env:
          discovery.type: single-node
        options: >-
          --health-cmd "curl http://localhost:9200/_cluster/health"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 10
        ports:
          - 9400:9200
      postgres:
        # Docker Hub image
        image: postgres:10.8
        # Provide the password for postgres
        env:
          POSTGRES_USER: django
          POSTGRES_PASSWORD: django
          POSTGRES_DB: django
        ports:
          - 5432:5432
        # Set health checks to wait until postgres has started
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
      redis:
        # Docker Hub image
        image: redis
        # Set health checks to wait until redis has started
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v1
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Test with pytest
        env:
          ENV: TEST
        run: pytest
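A likely cause of the kombu.exceptions.OperationalError is that the workflow never starts an AMQP broker: the fixture's broker_url of amqp:// points at localhost:5672, but only elasticsearch, postgres, and redis services are defined. A sketch of a fix, assuming RabbitMQ as the broker (the service name and image tag here are my choices, not from the original workflow):

```yaml
    services:
      rabbitmq:
        image: rabbitmq:3
        ports:
          - 5672:5672
        options: >-
          --health-cmd "rabbitmq-diagnostics -q ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
```

Because the job runs directly on the runner rather than in a container, mapped service ports are reachable on localhost, so the fixture can return broker_url "amqp://localhost:5672//" and result_backend "redis://localhost:6379/0". Alternatively, setting task_always_eager = True in the test settings runs tasks in-process and avoids needing a broker at all.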

Related

whitelist AWS RDS on CircleCI

I have a CircleCI configuration that runs my tests before merging to master. I start my server for the tests, and it needs to connect to my RDS database, which is protected by security groups. I tried to whitelist the CircleCI IP to allow this, but with no luck.
version: 2.1
orbs:
  aws-white-list-circleci-ip: configure/aws-white-list-circleci-ip@1.0.0
  aws-cli: circleci/aws-cli@0.1.13
jobs:
  aws_setup:
    docker:
      - image: cimg/python:3.11.0
    steps:
      - aws-cli/install
      - aws-white-list-circleci-ip/add
  build:
    docker:
      - image: cimg/node:18.4
    steps:
      - checkout
      - run: node --version
      - restore_cache:
          name: Restore Npm Package Cache
          keys:
            # Find a cache corresponding to this specific package-lock.json checksum
            # when this file is changed, this key will fail
            - v1-npm-deps-{{ checksum "package-lock.json" }}
            # Find the most recently generated cache used from any branch
            - v1-npm-deps-
      - run: npm install
      - run:
          name: start the server
          command: npm start
          background: true
      - save_cache:
          name: Save Npm Package Cache
          key: v1-npm-deps-{{ checksum "package-lock.json" }}
          paths:
            - ./node_modules
      - run:
          name: run tests
          command: npm run test
      - aws-white-list-circleci-ip/remove
workflows:
  build-workflow:
    jobs:
      - aws_setup:
          context: aws_context
      - build:
          requires:
            - aws_setup
          context: aws_context
My context environment variables:
AWS_ACCESS_KEY_ID
AWS_DEFAULT_REGION
AWS_SECRET_ACCESS_KEY
GROUPID
The error: (screenshot not reproduced here)
The orb I am using:
https://circleci.com/developer/orbs/orb/configure/aws-white-list-circleci-ip
I figured it out:
version: 2.1
orbs:
  aws-cli: circleci/aws-cli@0.1.13
jobs:
  build:
    docker:
      - image: cimg/python:3.11.0-node
    steps:
      - checkout
      - run: node --version
      - restore_cache:
          name: Restore Npm Package Cache
          keys:
            # Find a cache corresponding to this specific package-lock.json checksum
            # when this file is changed, this key will fail
            - v1-npm-deps-{{ checksum "package-lock.json" }}
            # Find the most recently generated cache used from any branch
            - v1-npm-deps-
      - run: npm install
      - aws-cli/install
      - run:
          command: |
            public_ip_address=$(wget -qO- http://checkip.amazonaws.com)
            echo "this computers public ip address is $public_ip_address"
            aws ec2 authorize-security-group-ingress --region $AWS_DEFAULT_REGION --group-id $GROUPID --ip-permissions "[{\"IpProtocol\": \"tcp\", \"FromPort\": 22, \"ToPort\": 7000, \"IpRanges\": [{\"CidrIp\": \"${public_ip_address}/32\",\"Description\":\"CircleCi\"}]}]"
      - save_cache:
          name: Save Npm Package Cache
          key: v1-npm-deps-{{ checksum "package-lock.json" }}
          paths:
            - ./node_modules
      - run:
          name: run tests
          command: npm run test
# Invoke jobs via workflows
# See: https://circleci.com/docs/2.0/configuration-reference/#workflows
workflows:
  build-workflow:
    jobs:
      - build:
          context: aws_context
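One gap in this config: it authorizes the ingress rule but never revokes it, so a new rule accumulates on the security group with every build (the original orb-based version had a remove step). A sketch of a cleanup step that could run after the tests (the step name and the when: always placement are my assumptions):

```yaml
      - run:
          name: remove CircleCI IP from security group
          when: always  # run even if the tests fail, so the rule does not linger
          command: |
            public_ip_address=$(wget -qO- http://checkip.amazonaws.com)
            aws ec2 revoke-security-group-ingress --region $AWS_DEFAULT_REGION --group-id $GROUPID --ip-permissions "[{\"IpProtocol\": \"tcp\", \"FromPort\": 22, \"ToPort\": 7000, \"IpRanges\": [{\"CidrIp\": \"${public_ip_address}/32\"}]}]"
```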

How can I solve a syntax error in a YAML file when pushing to GitHub?

I'm using PostgreSQL with Django. I set up a GitHub Action that verifies my code whenever I push or open a pull request, and I get the following error:
You have an error in your yaml syntax on line 19
Here is my yaml:
# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: Python application
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:14
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: github_actions
        ports:
          - 5433:5432
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.9.7
        uses: actions/setup-python@v2
        with:
          python-version: "3.9.7"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Test with Unittest
        env:
          SECRET_KEY: ${{ secrets.SECRET_KEY }}
          EMAIL_FROM_USER: ${{ secrets.EMAIL_FROM_USER }}
          EMAIL_HOST_PASSWORD: ${{ secrets.EMAIL_HOST_PASSWORD }}
          DB_NAME: ${{ secrets.DB_NAME }}
          DB_USER: ${{ secrets.DB_USER }}
          DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
          DB_HOST: ${{ secrets.DB_HOST }}
          DB_ENGINE: ${{ secrets.DB_ENGINE }}
          DB_PORT: ${{ secrets.DB_PORT }}
        run: |
          python3 manage.py test
Line 19 corresponds to image: postgres:14, but I can't see any syntax error there. I've looked at some examples and mine looks exactly the same.
For GitHub Actions, configuring a Django web app's Postgres service container from the Docker Hub image works fine with just this:
image: postgres
You can check whether that works for your particular case.
To answer my question, I followed these two posts that are up to date:
https://www.hacksoft.io/blog/github-actions-in-action-setting-up-django-and-postgres
https://www.digitalocean.com/community/tutorials/how-to-use-postgresql-with-your-django-application-on-ubuntu-14-04
Make sure you install all the dependencies.
I also set the port to 5432 and the image to postgres:14.2.
(To find your PostgreSQL version, run /usr/lib/postgresql/14/bin/postgres -V)
See final yml file:
name: Python application
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:14.2
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: github_action
        ports:
          - 5432:5432
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.10
        uses: actions/setup-python@v2
        with:
          python-version: "3.10"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Test with Unittest
        env:
          SECRET_KEY: ${{ secrets.SECRET_KEY }}
          EMAIL_FROM_USER: ${{ secrets.EMAIL_FROM_USER }}
          EMAIL_HOST_PASSWORD: ${{ secrets.EMAIL_HOST_PASSWORD }}
          DB_NAME: ${{ secrets.DB_NAME }}
          DB_USER: ${{ secrets.DB_USER }}
          DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
          DB_HOST: ${{ secrets.DB_HOST }}
          DB_ENGINE: ${{ secrets.DB_ENGINE }}
          DB_PORT: ${{ secrets.DB_PORT }}
        run: |
          python3 manage.py test
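For the DB_* variables the workflow exports to matter, settings.py has to read them. A minimal sketch of the database section (the helper name database_config and the default values are my own, chosen to match the service container above):

```python
import os

# Hypothetical settings.py fragment: read the DB_* variables the workflow
# exports, falling back to the service-container defaults for local runs.
def database_config(env=os.environ):
    return {
        "ENGINE": env.get("DB_ENGINE", "django.db.backends.postgresql"),
        "NAME": env.get("DB_NAME", "github_action"),
        "USER": env.get("DB_USER", "postgres"),
        "PASSWORD": env.get("DB_PASSWORD", "postgres"),
        "HOST": env.get("DB_HOST", "localhost"),
        "PORT": env.get("DB_PORT", "5432"),
    }

DATABASES = {"default": database_config()}
```

With this shape, CI sets the secrets while local runs fall back to the defaults, so the same settings file works in both places.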

Use a pre-existing network without modifying inbound ports

I have configured the following compose.yml:
version: "3.9"
x-aws-cluster: cluster
x-aws-vpc: vpc-A
x-aws-loadbalancer: balancerB
services:
  backend:
    image: backend:1.0
    build:
      context: ./backend
      dockerfile: ./dockerfile
    command: >
      sh -c "python manage.py makemigrations &&
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8002"
    networks:
      - default
      - allowedips
    ports:
      - "8002:8002"
  frontend:
    tty: true
    image: frontend:1.0
    build:
      context: ./frontend
      dockerfile: ./dockerfile
    command: >
      sh -c "npm i --save-dev vue-loader-v16
             npm install
             npm run serve"
    ports:
      - "8080:8080"
    networks:
      - default
      - allowedips
    depends_on:
      - backend
networks:
  default:
    external: true
    name: sg-1
  allowedips:
    external: true
    name: sg-2
My intent was:
sg-1: default security group
sg-2: allowed-IPs access
If I run
docker compose up -d
it runs well without any problem and I can use the app.
My problem is that the process creates:
Allowedips8002Ingress
Allowedips8080Ingress
Default8002Ingress
Default8080Ingress
I don't want this; I will manage the allowed-IP inbound rules in sg-2 myself. How can I avoid this?

Docker Compose npm install took hours

I am using an AWS EC2 free-tier instance running Ubuntu 20.04. I am deploying nginx, Jenkins, and a Nest.js application using Docker, but while building the Nest.js image, npm install glob rimraf took literally hours. Is it because the AWS free-tier instance is too small? When I monitored CPU usage from the EC2 console, it was almost 100%. However, glob and rimraf are not huge packages; they are only 56 kB and 70 kB... yet downloading images like node is really fast.
Is there any way to speed up this process?
Here are my Dockerfile and docker-compose.yaml:
FROM node:12.19.0-alpine3.9 as development
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install glob rimraf
RUN npm install --only=development
COPY . .
RUN npm run build

FROM node:12.19.0-alpine3.9 as production
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY . .
COPY --from=development /usr/src/app/dist ./dist
CMD ["node", "dist/main"]
services:
  proxy:
    image: nginx:latest # use the latest version of Nginx
    container_name: proxy # the container name is proxy
    ports:
      - '80:80' # map port 80 between host and container
    networks:
      - nestjs-network
    volumes:
      - ./proxy/nginx.conf:/etc/nginx/nginx.conf # volume-map the nginx config file
    restart: 'unless-stopped' # restart if the container dies from an internal error
  dev:
    container_name: nestjs_api_dev
    image: nestjs-api-dev:1.0.0
    build:
      context: .
      target: development
      dockerfile: ./Dockerfile
    command: node dist/main
    # ports:
    #   - 3000:3000
    expose:
      - '3000' # open port 3000 to other containers
    networks:
      - nestjs-network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped
  prod:
    container_name: nestjs_api_prod
    image: nestjs-api-prod:1.0.0
    build:
      context: .
      target: production
      dockerfile: ./Dockerfile
    command: npm run start:prod
    ports:
      - 3000:3000
      - 9229:9229
    networks:
      - nestjs-network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped
  jenkins:
    build:
      context: .
      dockerfile: ./jenkins/Dockerfile
    image: jenkins/jenkins
    restart: always
    container_name: jenkins
    user: root
    environment: # merged: the original file repeated the environment key, which is invalid YAML
      - JENKINS_OPTS="--prefix=/jenkins"
      - TZ=Asia/Seoul
    ports:
      - 8080:8080
    expose:
      - '8080'
    networks:
      - nestjs-network
    volumes:
      - ./jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
networks:
  nestjs-network:
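The hours-long npm install is consistent with the t2.micro being CPU- and memory-starved (npm's dependency resolution is CPU-heavy even for small packages) and with every build re-downloading packages from scratch. A sketch of a faster development stage, assuming BuildKit is available and a package-lock.json exists (this is a hypothetical rewrite, not the original file):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:12.19.0-alpine3.9 as development
WORKDIR /usr/src/app
COPY package*.json ./
# npm ci installs straight from the lockfile (skipping dependency resolution),
# and the BuildKit cache mount keeps the npm download cache between builds
RUN --mount=type=cache,target=/root/.npm \
    npm ci
COPY . .
RUN npm run build
```

Dropping the separate "npm install glob rimraf" step also helps: installing everything from the lockfile in one layer avoids invalidating the cache twice. Building with BuildKit enabled (DOCKER_BUILDKIT=1) is required for the cache mount.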

How to set up Prometheus with Django REST Framework and Docker

I want to monitor my database using Prometheus, Django REST Framework, and Docker, all on my local machine. The problem is with the URL http://127.0.0.1:9000/metrics; http://127.0.0.1:9000 is the root of my API, and I don't know what the problem is. My configuration is below.
my requirements.txt
django-prometheus
My Docker file, docker-compose-monitoring.yml:
version: '2'
services:
  prometheus:
    image: prom/prometheus:v2.14.0
    volumes:
      - ./prometheus/:/etc/prometheus/
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - 9090:9090
  grafana:
    image: grafana/grafana:6.5.2
    ports:
      - 3060:3060
My prometheus/prometheus.yml:
global:
  scrape_interval: 15s
rule_files:
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets:
          - 127.0.0.1:9090
  - job_name: monitoring_api
    static_configs:
      - targets:
          - 127.0.0.1:9000
My settings.py:
INSTALLED_APPS = [
    ...
    'django_prometheus',
]
MIDDLEWARE = [
    'django_prometheus.middleware.PrometheusBeforeMiddleware',
    ...
    'django_prometheus.middleware.PrometheusAfterMiddleware',
]
My models.py:
from django_prometheus.models import ExportModelOperationMixin

class MyModel(ExportModelOperationMixin('mymodel'), models.Model):
    """all my fields in here"""
My urls.py:
url('', include('django_prometheus.urls')),
The application runs well when I visit 127.0.0.1:9090/metrics, but it only monitors that same URL, and I need to monitor a different URL. I think the problem is in prometheus.yml, because I don't know how to reference my table or my API. Please help me.
Bye.
You need to change your Prometheus config and add a Python image to docker-compose, like this.
Prometheus config (prometheus.yml):
global:
  scrape_interval: 15s # how often Prometheus pulls data from exporters etc.
  evaluation_interval: 30s # time between each evaluation of Prometheus' alerting rules
scrape_configs:
  - job_name: django_project # your project name
    metrics_path: /metrics
    static_configs:
      - targets:
          - web:8000
docker-compose file for Prometheus and Django (you can also include a Grafana image; I have installed Grafana locally):
version: '3.7'
services:
  web:
    build:
      context: . # context is the path of your Dockerfile (Dockerfile in the root dir)
    command: sh -c "python3 manage.py migrate &&
             gunicorn webapp.route.wsgi:application --pythonpath webapp --bind 0.0.0.0:8000"
    volumes:
      - .:/app
    ports:
      - "8000:8000"
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml # prometheus.yml in the root dir
Dockerfile:
FROM python:3.8
COPY ./webapp/django /app
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
For the Prometheus settings in Django, see:
https://pypi.org/project/django-prometheus/
1. Hit the Django app APIs.
2. Hit the localhost:8000/metrics API.
3. Hit localhost:9090/ and search for the required metric in the dropdown; click Execute and it will show the result in the console and draw a graph.
4. To show the graph in Grafana, hit localhost:3000 and create a new dashboard.