Heroku app crashed after pushing and releasing via GitLab CI - Django

I have deployed a Django application to Heroku. I first pushed my project manually with the Heroku CLI, without migrations or a database, and it worked. Then, pushing via GitLab CI (which runs the migrations before the push) gives me an "app crashed" error on the Heroku side. I have run into many "app crashed" errors before and could solve them by inspecting the logs, but this time I cannot understand what the issue is.
I am sure that files are completely pushed and the application is released.
Here is my Procfile:
web: gunicorn --pythonpath Code Code.wsgi --log-file -
My project is in the "Code" folder and my Django project name is Code.
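For context, gunicorn's --pythonpath option adds the given directory to the front of sys.path before importing the WSGI module, which is why Code.wsgi resolves even though everything lives in the Code subfolder. A rough sketch of the equivalent effect (paths illustrative):

```python
import sys

# Roughly what `gunicorn --pythonpath Code Code.wsgi` does before importing
# the WSGI module: the directory is prepended to sys.path, so the import
# system can find Code/Code/wsgi.py as the module "Code.wsgi".
sys.path.insert(0, "Code")
print(sys.path[0])  # prints: Code
```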
Error part of the heroku logs:
2019-06-08T08:02:50.549319+00:00 app[api]: Deployed web (c1f5c903bedb) by user arminbehnamnia@gmail.com
2019-06-08T08:02:50.549319+00:00 app[api]: Release v6 created by user arminbehnamnia@gmail.com
2019-06-08T08:02:51.268875+00:00 heroku[web.1]: Restarting
2019-06-08T08:02:51.277247+00:00 heroku[web.1]: State changed from up to starting
2019-06-08T08:02:52.494158+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2019-06-08T08:02:52.517991+00:00 app[web.1]: [2019-06-08 08:02:52 +0000] [4] [INFO] Handling signal: term
2019-06-08T08:02:52.519983+00:00 app[web.1]: [2019-06-08 08:02:52 +0000] [11] [INFO] Worker exiting (pid: 11)
2019-06-08T08:02:52.529529+00:00 app[web.1]: [2019-06-08 08:02:52 +0000] [10] [INFO] Worker exiting (pid: 10)
2019-06-08T08:02:52.823141+00:00 app[web.1]: [2019-06-08 08:02:52 +0000] [4] [INFO] Shutting down: Master
2019-06-08T08:02:52.958760+00:00 heroku[web.1]: Process exited with status 0
2019-06-08T08:03:09.777009+00:00 heroku[web.1]: Starting process with command `python3`
2019-06-08T08:03:11.647048+00:00 heroku[web.1]: State changed from starting to crashed
2019-06-08T08:03:11.654524+00:00 heroku[web.1]: State changed from crashed to starting
2019-06-08T08:03:11.625687+00:00 heroku[web.1]: Process exited with status 0
...
2019-06-08T08:17:16.898569+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=makan-system.herokuapp.com request_id=de4fbb8e-cd14-4263-bb6f-a8d0f956a519 fwd="69.55.54.121" dyno= connect= service= status=503 bytes= protocol=https
My .gitlab-ci.yml file:
stages:
  - test
  - build
  - push

tests:
  image: docker:latest
  services:
    - docker:dind
  stage: test
  before_script:
    - docker login -u armin_gm -p $PASSWORD registry.gitlab.com
  script:
    - docker build . -t test_django
    - docker ps
    - docker run --name=testDjango test_django python /makanapp/Code/manage.py test registration
  when: on_success

build:
  image: docker:latest
  services:
    - docker:dind
  stage: build
  before_script:
    - docker login -u armin_gm -p $PASSWORD registry.gitlab.com
  script:
    - docker build -t registry.gitlab.com/armin_gm/asd_project_98_6 .
    - docker push registry.gitlab.com/armin_gm/asd_project_98_6

push_to_heroku:
  image: docker:latest
  stage: push
  services:
    - docker:dind
  script:
    # This is for gitlab
    - docker login -u armin_gm -p $PASSWORD registry.gitlab.com
    #- docker pull registry.gitlab.com/armin_gm/asd_project_98_6:latest
    - docker build . -t push_to_django
    - docker ps
    # This is for heroku
    - docker login --username=arminbehnamnia@gmail.com --password=$AUTH_TOKEN registry.heroku.com
    - docker tag push_to_django:latest registry.heroku.com/makan-system/web:latest
    - docker push registry.heroku.com/makan-system/web:latest
    - docker run --rm -e HEROKU_API_KEY=$AUTH_TOKEN wingrunr21/alpine-heroku-cli container:release web --app makan-system
My Dockerfile:
# Official Python image
FROM python:latest
ENV PYTHONUNBUFFERED 1
# create root directory for project, set the working directory and move all files
RUN mkdir /makanapp
WORKDIR /makanapp
ADD . /makanapp/
# Web server will listen to this port
EXPOSE 8000
# Install all libraries we saved to requirements.txt file
#RUN apt-get -y update
#RUN apt-get -y install python3-dev python3-setuptools
RUN pip install -r requirements.txt
RUN python ./Code/manage.py makemigrations
RUN python ./Code/manage.py migrate --run-syncdb
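One detail worth noting from the logs above: the web process was started with the command python3, not gunicorn. When an image is pushed through the Heroku container registry and released with container:release, Heroku runs the image's CMD and does not read the Procfile; this Dockerfile declares no CMD, so the default CMD of the python base image (python3, an interactive interpreter) is inherited and exits as soon as its stdin closes. A hypothetical fix, reusing the gunicorn invocation from the Procfile:

```
# Hypothetical CMD (not in the original Dockerfile): container deploys run
# the image CMD, not the Procfile. Shell form so Heroku's $PORT is expanded.
CMD gunicorn --pythonpath Code Code.wsgi --log-file - --bind 0.0.0.0:$PORT
```

Separately, note that the RUN makemigrations/migrate steps execute at image build time, against whatever database is reachable during the build rather than the Heroku database.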

Related

Django/AWS EB: [app-deploy] - [RunAppDeployPreBuildHooks] error (no such file or directory), BUT the file does exist?

I have a Django application and I'm trying to deploy to my AWS EB Environment.
It's a project that my friend and I are working on.
I am trying to run the command eb deploy, but I get this:
Alert: The platform version that your environment is using isn't recommended. There's a recommended version in the same platform branch.
Creating application version archive "app-3d19-210731_133226".
Uploading Prod Rest API/app-3d19-210731_133226.zip to S3. This may take a while.
Upload Complete.
2021-07-31 17:32:29 INFO Environment update is starting.
2021-07-31 17:32:33 INFO Deploying new version to instance(s).
2021-07-31 17:32:36 ERROR Instance deployment failed. For details, see 'eb-engine.log'.
2021-07-31 17:32:39 ERROR [Instance: i-05761282d68083a51] Command failed on instance. Return code: 1 Output: Engine execution has encountered an error..
2021-07-31 17:32:39 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2021-07-31 17:32:39 ERROR Unsuccessful command execution on instance id(s) 'i-05761282d68083a51'. Aborting the operation.
2021-07-31 17:32:39 ERROR Failed to deploy application.
ERROR: ServiceError - Failed to deploy application.
I checked the eb-engine.log and this is what I get:
2021/07/31 17:19:47.104584 [INFO] Executing instruction: StageApplication
2021/07/31 17:19:47.111204 [INFO] extracting /opt/elasticbeanstalk/deployment/app_source_bundle to /var/app/staging/
2021/07/31 17:19:47.111230 [INFO] Running command /bin/sh -c /usr/bin/unzip -q -o /opt/elasticbeanstalk/deployment/app_source_bundle -d /var/app/staging/
2021/07/31 17:19:47.154001 [INFO] finished extracting /opt/elasticbeanstalk/deployment/app_source_bundle to /var/app/staging/ successfully
2021/07/31 17:19:47.156956 [INFO] Executing instruction: RunAppDeployPreBuildHooks
2021/07/31 17:19:47.156982 [INFO] Executing platform hooks in .platform/hooks/prebuild/
2021/07/31 17:19:47.157046 [INFO] Following platform hooks will be executed in order: [install_supervisor.sh]
2021/07/31 17:19:47.157060 [INFO] Running platform hook: .platform/hooks/prebuild/install_supervisor.sh
2021/07/31 17:19:47.157342 [ERROR] An error occurred during execution of command [app-deploy] - [RunAppDeployPreBuildHooks]. Stop running the command. Error: Command .platform/hooks/prebuild/install_supervisor.sh failed with error fork/exec .platform/hooks/prebuild/install_supervisor.sh: no such file or directory
2021/07/31 17:19:47.157349 [INFO] Executing cleanup logic
2021/07/31 17:19:47.157565 [INFO] CommandService Response: {"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"Engine execution has encountered an error.","returncode":1,"events":[{"msg":"Instance deployment failed. For details, see 'eb-engine.log'.","timestamp":1627751987,"severity":"ERROR"}]}]}
2021/07/31 17:19:47.159200 [INFO] Platform Engine finished execution on command: app-deploy
I'm really confused about why this is occurring, because my friend, who has the same repo, is able to eb deploy fine, but I can't for some reason.
This is my file structure:
my-app/
├─ .ebextensions/
│ ├─ 01_django.config
├─ .elasticbeanstalk/
│ ├─ config.yml
├─ .platform/
│ ├─ files/
│ │ ├─ supervisor.ini
│ ├─ hooks/
│ │ ├─ prebuild/
│ │ │ ├─ install_supervisor.sh
├─ other files/
where my other files are my actual Django code itself.
01_django.config
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: RestAPI.wsgi:application
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /static: static
config.yml
branch-defaults:
  main:
    environment: Prodrestapi-env
environment-defaults:
  Prodrestapi-env:
    branch: null
    repository: null
global:
  application_name: Prod Rest API
  default_ec2_keyname: aws-eb
  default_platform: Python 3.8 running on 64bit Amazon Linux 2
  default_region: us-east-2
  include_git_submodules: true
  instance_profile: null
  platform_name: null
  platform_version: null
  profile: eb-cli
  sc: git
  workspace_type: Application
supervisor.ini
# Create celery configuration script
[program:celeryd-worker]
; Set full path to celery program if using virtualenv
command=sh /var/app/current/scripts/worker.sh
directory=/var/app/current
; user=nobody
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=60
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 60
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
install_supervisor.sh
#!/bin/sh
sudo amazon-linux-extras enable epel
sudo yum install -y epel-release
sudo yum -y update
sudo yum -y install supervisor
sudo systemctl start supervisord
sudo systemctl enable supervisord
sudo cp .platform/files/supervisor.ini /etc/supervisord.d/celery.ini
sudo supervisorctl reread
sudo supervisorctl update
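As an aside, a "fork/exec ...: no such file or directory" error on a script that plainly exists is commonly caused by Windows CRLF line endings: the kernel reads the shebang as /bin/sh followed by a carriage return and reports that interpreter as missing. A small, hypothetical reproduction and fix (reusing the hook's file name from the tree above):

```shell
# Simulate a hook saved with CRLF line endings, then repair it.
printf '#!/bin/sh\r\necho ok\r\n' > install_supervisor.sh
sed -i 's/\r$//' install_supervisor.sh   # strip trailing carriage returns
chmod +x install_supervisor.sh           # the hook must also be executable
./install_supervisor.sh                  # now runs and prints: ok
```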
I'm just really confused about why it's not deploying properly, because the install_supervisor.sh file is clearly in the folder.
Any and all help will be really appreciated.
Thank you so much!

Can't start gunicorn.service on VirtualBox with CentOS 8, Nginx and Django-Rest-Framework

Trying to deploy a Django application using gunicorn and nginx on CentOS, following the DigitalOcean tutorial:
https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-centos-7
But I have CentOS 8.
I can run my application locally from virtualenv using:
python manage.py runserver
gunicorn --bind 0.0.0.0:8000 app.wsgi:application
but when I try to run gunicorn.service, its status is failed.
In systemctl status gunicorn.service I have
started gunicorn daemon
gunicorn.service: main process exited, code=exited, status=203/EXEC
gunicorn.service: failed with result 'exit-code'
Without this service running I can't bind the app.sock file, as it is not being created.
My gunicorn.service looks like this
app is a fictional application name :)
admin is a real user on this system
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=admin
Group=nginx
WorkingDirectory=/home/admin/app/portal/app
ExecStart=/home/admin/app/env/bin/gunicorn --workers 3 --bind unix:/home/admin/app/portal/app/app.sock app.wsgi:application
[Install]
WantedBy=multi-user.target
There is a tree of my project:
app
- env
- portal
-- client
-- app
--- documents
--- fixtures
--- images
--- app
---- __init__.py
---- settings.py
---- urls.py
---- wsgi.py
--- app_project
---- ...
--- manage.py
--- requirements.txt
What can be done to make it work, and what can I check to find more clues about why it doesn't work?
Any input is welcome.
Thanks
in journalctl -xe
I've noticed
SELinux is preventing file/path from execute access on the file
so I changed
SELINUX=permissive
in /etc/selinux/config
rebooted, and now I have a different error in systemctl status gunicorn.service:
gunicorn: error: unrecognized arguments: app.wsgi:application
Oh, that was because I added an additional flag to ExecStart. Removed it, and it works now.
So the issue was with SELinux.
Will keep it here in case it is useful to anyone.

Dockerfile Django not opening in browser

I'm using Django 2.x and dockerizing the application.
I have the following Dockerfile content.
FROM python:3.7-alpine
# Install Rabbit-mq server
RUN echo @testing http://nl.alpinelinux.org/alpine/edge/testing >> /etc/apk/repositories
RUN apk --update add \
libxml2-dev \
libxslt-dev \
libffi-dev \
gcc \
musl-dev \
libgcc curl \
jpeg-dev \
zlib-dev \
freetype-dev \
lcms2-dev \
openjpeg-dev \
tiff-dev \
tk-dev \
tcl-dev \
mariadb-connector-c-dev \
supervisor \
nginx \
--no-cache bash
# Set environment variable
ENV PYTHONUNBUFFERED 1
# Set locale variables
ENV LC_ALL C.UTF-8
ENV LANG C.UTF-8
# -- Install Application into container:
RUN set -ex && mkdir /app
# -- Adding dependencies:
ADD . /app/
# Copy environment variable file
ADD src/production.env /app/src/.env
COPY scripts/docker/entrypoint.sh /app/
# Switch to the working directory
WORKDIR /app
RUN chmod ug+x /app/entrypoint.sh
# Install Pipenv first to use pipenv module
RUN pip install pipenv
# -- Adding Pipfiles
ONBUILD COPY Pipfile Pipfile
ONBUILD COPY Pipfile.lock Pipfile.lock
RUN pipenv install --system --deploy
RUN mkdir -p /etc/supervisor.d
COPY configs/docker/supervisor/supervisor.conf /etc/supervisor.d/supervisord.ini
EXPOSE 80 8000
CMD ["/app/entrypoint.sh"]
and the entrypoint.sh has
#!/usr/bin/env bash
exec gunicorn --pythonpath src qcg.wsgi:application -w 3 -b 0.0.0.0:8000 -t 300 --max-requests=100
I use the command to build the image
docker build . -t qcg_new
and running it using
docker run qcg_new
It runs the container, and the gunicorn server is running on port 8000:
[2019-09-16 09:03:31 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2019-09-16 09:03:31 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
[2019-09-16 09:03:31 +0000] [1] [INFO] Using worker: sync
[2019-09-16 09:03:31 +0000] [10] [INFO] Booting worker with pid: 10
[2019-09-16 09:03:31 +0000] [11] [INFO] Booting worker with pid: 11
[2019-09-16 09:03:31 +0000] [12] [INFO] Booting worker with pid: 12
But when I visit http://127.0.0.1:8000 or http://localhost:8000 in the browser, it does not open.
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
see it here
so you need to add -p 8000:8000 to your run command, e.g. docker run -p 8000:8000 qcg_new

No resources found error after deploying Django app

I'm trying to deploy my app to Heroku using this tutorial:
https://devcenter.heroku.com/articles/getting-started-with-django#deploy-to-heroku
I managed to push my app to Heroku, but I keep getting this error when I try to make sure I have at least one dyno running:
(tapeworm_django)Christophers-MacBook-Pro-2:tapeworm christopherspears$ heroku ps:scale web=1
Scaling dynos... failed
! No app specified.
! Run this command from an app folder or specify which app to use with --app APP.
(tapeworm_django)Christophers-MacBook-Pro-2:tapeworm christopherspears$ heroku ps:scale web=1 --app tapeworm
Scaling dynos... failed
! Resource not found
I ran the command inside the same directory as my Procfile:
/Users/christopherspears/PyDevel/tapeworm_django
(tapeworm_django)Christophers-MacBook-Pro-2:tapeworm_django christopherspears$ ls *
README.md requirements.txt
tapeworm:
Procfile drawings/ manage.py* tapeworm/ templates/
Any hints?
UPDATE:
I can get it to run locally:
(tapeworm_django)Christophers-MacBook-Pro-2:tapeworm christopherspears$ foreman start
16:43:17 web.1 | started with pid 2366
16:43:17 web.1 | 2014-03-29 16:43:17 [2366] [INFO] Starting gunicorn 18.0
16:43:17 web.1 | 2014-03-29 16:43:17 [2366] [INFO] Listening at: http://0.0.0.0:5000 (2366)
16:43:17 web.1 | 2014-03-29 16:43:17 [2366] [INFO] Using worker: sync
16:43:17 web.1 | 2014-03-29 16:43:17 [2369] [INFO] Booting worker with pid: 2369
I managed to get this to work. First, I moved my Procfile up a level, so my project is structured like so:
tapeworm_django/
Procfile
README.md
requirements.txt
tapeworm/
drawings/ <- app
manage.py
tapeworm/ <- project configuration folder
templates/
I moved the Procfile up a level because I see that most developers seem to place the file in the root directory. Am I wrong?
Then I changed the contents of the file from
web: gunicorn tapeworm.wsgi
to
web: python tapeworm/manage.py runserver 0.0.0.0:$PORT --noreload
I am unsure if that solution is considered "proper" because it seems to clash with the Getting Started With Django tutorial:
https://devcenter.heroku.com/articles/getting-started-with-django
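Running the development server via runserver in production is generally discouraged. A sketch of a Procfile that keeps gunicorn but accounts for the nested layout above, assuming the WSGI module lives at tapeworm/tapeworm/wsgi.py, might be:

```
web: gunicorn --pythonpath tapeworm tapeworm.wsgi
```

This uses the same --pythonpath approach as the first question's Procfile, with the Procfile kept in the repository root.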

Django on Heroku - how can I get a celery worker to run correctly?

I am trying to deploy the simplest possible "hello world" celery configuration on heroku for my Django app. My Procfile is as follows:
web: gunicorn myapp.wsgi
worker: celery -A myapp worker -l info -B -b amqp://XXXXX:XXXXX@red-thistle-3.bigwig.lshift.net:PPPP/XXXXX
This is the RABBITMQ_BIGWIG_RX_URL that I'm giving to the celery worker. I have the corresponding RABBITMQ_BIGWIG_TX_URL in my settings file as the BROKER_URL.
If I use these broker URLs in my local dev environment, everything works fine and I can actually use the Heroku RabbitMQ system. However, when I deploy my app to Heroku it isn't working.
This Procfile seems to work (although Celery is giving me memory leak issues).
web: gunicorn my_app.wsgi
celery: celery worker -A my_app -l info --beat -b amqp://XXXXXXXX:XXXXXXXXXXXXXXXXXXXX@red-thistle-3.bigwig.lshift.net:PPPP/XXXXXXXXX
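For comparison, since the post says the BROKER_URL is already set in the settings file, a hedged sketch of a Procfile that omits the hardcoded -b broker URL and lets Celery pick up the broker from the Django settings:

```
web: gunicorn myapp.wsgi
worker: celery -A myapp worker -l info -B
```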