I am running a Django application on Heroku.
When I make a request that takes longer than 30 seconds, it gets cut off and returns 503 Service Unavailable.
These are the Heroku logs:
2021-10-20T07:11:14.726613+00:00 heroku[router]: at=error code=H12 desc="Request timeout" method=POST path="/ajax/download_yt/" host=yeti-mp3.herokuapp.com request_id=293289be-6484-4f11-a62d-bda6d0819351 fwd="79.3.90.79" dyno=web.1 connect=0ms service=30000ms status=503 bytes=0 protocol=https
I did some research and found that I could increase the Gunicorn timeout (I am using Gunicorn 20.0.4) in the Procfile. I tried increasing it to 60 seconds, but the request still stops after 30.
This is my current Procfile:
web: gunicorn yetiMP3.wsgi --timeout 60 --log-file -
These are the buildpacks I'm using on my Heroku app:
heroku/python
https://github.com/jonathanong/heroku-buildpack-ffmpeg-latest.git
How can I increase the request timeout?
Did I miss some Heroku configuration?
Eventually I found that this is a limit imposed by the Heroku platform, not by Gunicorn.
The Heroku router will drop any request after 30 seconds, no matter how high you set the Gunicorn timeout.
More details in this official Heroku article.
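The usual way around the limit is to return a response quickly and push the slow work (here, the download) to a background job that the client can poll. A rough sketch of that pattern, assuming a Celery task called download_yt_task and made-up view names (none of these come from the original project):

# Illustrative only: task, view, and parameter names are assumptions.
from celery.result import AsyncResult
from django.http import JsonResponse

from .tasks import download_yt_task  # a shared task that does the slow download

def start_download(request):
    job = download_yt_task.delay(request.POST["url"])
    # Respond in milliseconds, well under the router's 30-second cut-off.
    return JsonResponse({"task_id": job.id}, status=202)

def download_status(request, task_id):
    # The client polls this endpoint until the state becomes SUCCESS.
    return JsonResponse({"state": AsyncResult(task_id).state})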
My Django API app has a view class that defines a post and a patch method. The post method creates a reservation resource with a user-defined expiration time and creates a Celery task which changes said resource once the user-defined expiration time has expired. The patch method gives the user the ability to update the expiration time. Changing the expiration time also requires changing the Celery task. In my patch method, I use app.control.revoke(task_id=task_id, terminate=True, signal="SIGKILL") to revoke the existing task, followed by creating a new task with the amended expiration time.
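Roughly, the patch flow described above might look like this (the task and field names are my own placeholders, not taken from the project):

# Sketch only: expire_reservation and the model field names are assumed.
from celery import current_app as celery_app

from .tasks import expire_reservation  # the task that flips the boolean later

def reschedule_expiration(reservation, new_expiration_time):
    # Revoke the task that was scheduled by the POST handler...
    celery_app.control.revoke(reservation.task_id, terminate=True, signal="SIGKILL")
    # ...then schedule a replacement at the new expiration time.
    result = expire_reservation.apply_async(args=[reservation.id], eta=new_expiration_time)
    reservation.task_id = result.id
    reservation.expiration_time = new_expiration_time
    reservation.save(update_fields=["task_id", "expiration_time"])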
All of this works just fine in my local Docker setup. But once I deploy to Heroku, the Celery worker appears to be terminated after the above app.control.revoke(...) executes, which causes a request timeout with an H12 error code.
2021-11-21T16:39:35.509103+00:00 heroku[router]: at=error code=H12 desc="Request timeout" method=PATCH path="/update-reservation-unauth/30/emailh#email.com/" host=secure-my-spot-api.herokuapp.com request_id=e0a304f8-3901-4ca1-a5eb-ba3dc06df999 fwd="108.21.217.120" dyno=web.1 connect=0ms service=30001ms status=503 bytes=0 protocol=https
2021-11-21T16:39:35.835796+00:00 app[web.1]: [2021-11-21 16:39:35 +0000] [3] [CRITICAL] WORKER TIMEOUT (pid:5)
2021-11-21T16:39:35.836627+00:00 app[web.1]: [2021-11-21 16:39:35 +0000] [5] [INFO] Worker exiting (pid: 5)
I would like to emphasise that the timeout cannot possibly be caused by calculation complexity as the Celery task only changes a boolean field on the reservation resource.
For reference, the Celery service in my docker-compose.yml looks like this:
worker:
  container_name: celery
  build:
    context: .
    dockerfile: Dockerfile
  command: celery -A secure_my_spot.celeryconf worker --loglevel=INFO
  depends_on:
    - db
    - broker
    - redis
  restart: always
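For comparison, on Heroku this compose service would normally map to a separate worker dyno declared in the Procfile with the same command (whether the deployed app has such an entry is an assumption on my part):

worker: celery -A secure_my_spot.celeryconf worker --loglevel=INFO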
I have looked everywhere for a solution but have not found anything that helps and I am drawing a blank as to what could possibly cause this issue with the deployed Heroku app whilst the local development containers work just fine.
I'm trying to deploy my Django REST Framework app on Heroku. I have read many other similar questions, but I'm still confused: either my app structure is not right or I'm missing something.
This is my structure on git:
.gitignore
requirements.txt
src
|
--authorization
--core
--static
--staticfiles
--Procfile
--manage.py
--suacm
  |
  ---asgi.py
  ---settings.py
  ---urls.py
  ---wsgi.py
authorization and core are apps under my Django project. There was no static or staticfiles directory before the Heroku deploy, but Heroku automatically created staticfiles. Then I also created static and followed instructions to make it work via changes in settings.py. It'd be awesome if someone could help me figure out my problem on Heroku and why it doesn't work.
This is my Procfile:
web: gunicorn suacm.wsgi
web: gunicorn suacm:app
When I run the app locally with this command, it works fine:
gunicorn suacm.wsgi:application
But I couldn't solve the error in the deployed app.
With
heroku logs --tail
I receive errors that start like the ones below:
2021-05-21T14:32:20.000000+00:00 app[api]: Build succeeded
2021-05-21T14:32:26.505788+00:00 heroku[router]: at=error code=H14 desc="No web processes running" method=GET path="/" host=suacm.herokuapp.com request_id=dfeba2ff-fa7d-4dfb-9337-706f50d286dc fwd="82.222.237.15" dyno= connect= service= status=503 bytes= protocol=https
2021-05-21T14:32:26.882608+00:00 heroku[router]: at=error code=H14 desc="No web processes running" method=GET path="/favicon.ico" host=suacm.herokuapp.com request_id=292525ae-bc4e-49c7-b979-e455bbfd6b95 fwd="82.222.237.15" dyno= connect= service= status=503 bytes= protocol=https
When I run this
heroku ps:scale web=1 --app suacm
I get this:
Scaling dynos... !
▸ Couldn't find that process type (web).
And finally, when I try to run the app locally under the src folder with this command:
src % heroku local web
I get this response:
[INFO] Starting gunicorn 20.1.0
[INFO] Listening at: http://0.0.0.0:5000 (35246)
[INFO] Using worker: sync
[INFO] Booting worker with pid: 35247
Failed to find attribute 'app' in 'suacm'.
[INFO] Worker exiting (pid: 35247)
.
.
If needed, this is my requirements.txt file:
django_environ==0.4.5
djangorestframework_simplejwt==4.6.0
django_filter==2.4.0
Django==3.1.3
djangorestframework==3.12.4
environ==1.0
PyJWT==2.1.0
gunicorn==20.1.0
django-on-heroku==1.1.2
whitenoise==5.2.0
This is my first time deploying a Django app. I hope I gave enough information. Just ask if more is needed.
Put your Procfile in the root directory, not in src, and
change this
web: gunicorn suacm.wsgi
to
web: gunicorn src.suacm.wsgi
Then why does it work on a local server?
In your local setup, you are launching the server from the src directory, which is not the root directory!
In production, however, Heroku launches the application from the root directory; since there is no Procfile there, you are getting these errors.
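If the dotted path src.suacm.wsgi does not resolve (for instance because src is not a Python package), another option worth trying, still with the Procfile in the root, is Gunicorn's --chdir flag:

web: gunicorn --chdir src suacm.wsgi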
I have a Flask app deployed on Heroku using Gunicorn. Some processes take longer than the default 30 seconds, so I need to make use of the --timeout setting.
When I run gunicorn --workers=4 --timeout=300 app:app on my computer, there's no problem. But when I deploy it on Heroku with a Procfile consisting of web: gunicorn --workers=4 --timeout=300 app:app the app works fine but the timeout still defaults to 30 seconds.
When I check the Heroku logs, this is the error message I get after 30 seconds. Note the 503 status.
2021-01-15T01:57:01.545230+00:00 heroku[router]: at=error code=H12 desc="Request timeout" method=POST path="/api" host=counterpoint-server.herokuapp.com request_id=0865b365-e8de-48f1-bf94-6325206ec7df fwd="73.165.51.85" dyno=web.1 connect=1ms service=30000ms status=503 bytes=0 protocol=https
Any idea why my Gunicorn settings are being ignored in production?
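As the first answer above explains, the router enforces the 30-second limit no matter what --timeout is set to, so the usual pattern is to respond quickly and do the slow work in the background. A minimal Flask sketch, with made-up endpoint names and an in-memory job store (both assumptions, not the asker's API):

import threading
import time
import uuid

from flask import Flask, jsonify

app = Flask(__name__)
results = {}  # job_id -> result; a real deployment would use Redis or a database

def slow_task(job_id):
    time.sleep(120)           # stand-in for the real long-running work
    results[job_id] = "done"

@app.route("/api", methods=["POST"])
def start_job():
    job_id = str(uuid.uuid4())
    threading.Thread(target=slow_task, args=(job_id,), daemon=True).start()
    return jsonify({"job_id": job_id}), 202   # returns well under 30 seconds

@app.route("/api/<job_id>")
def poll_job(job_id):
    return jsonify({"status": "done" if results.get(job_id) else "pending"})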
I have a Django app which works when I run it locally with heroku local. When I push my code with git push heroku main it pushes without any problem, but when I open my site it shows a 503 error in the console:
1 GET https://XXX.herokuapp.com/ 503 (Service Unavailable)
Inside my logs I get this:
at=error code=H14 desc="No web processes running" method=GET path="/" host=omylibrary.herokuapp.com request_id=lotofnumbers fwd="somenumbers" dyno= connect= service= status=503 bytes= protocol=https
From this question I assume that the problem is in the Procfile, or more precisely in the web process. My Procfile looks like this:
web: gunicorn --chdir ./Lib library.wsgi --log-file -
I also tried this
heroku ps:scale web=1
from a previous answer. What is wrong here?
Also, heroku ps returns No dynos on mysite.
The problem was that I had named the file ProcFile instead of Procfile. I found the solution here. I checked the Heroku logs while pushing my app from the terminal and got
remote: Procfile declares types -> (none)
So if you don't have a web process running, it's probably because of the Procfile or the code inside it.
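If you run into the same thing, renaming the file through Git usually sorts it out, even on a case-insensitive filesystem:

git mv ProcFile Procfile
git commit -m "Rename ProcFile to Procfile"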
So I am building a Dockerized Django project and I want to deploy it to Heroku, but I am having a lot of issues. My issues are exactly the same as in this post:
Docker + Django + Postgres Add-on + Heroku
Except I cannot use CMD python3 manage.py runserver 0.0.0.0:$PORT since I receive an invalid port pair error.
I'm just running
heroku container:push web
heroku container:release web
heroku open
After going to the site, the page keeps loading until it reports that an error occurred. My log shows the following:
System check identified no issues (0 silenced).
2019-05-03T11:38:47.708761+00:00 app[web.1]: May 03, 2019 - 11:38:47
2019-05-03T11:38:47.709011+00:00 app[web.1]: Django version 2.2.1, using settings 'loan_app.settings.heroku'
2019-05-03T11:38:47.709012+00:00 app[web.1]: Starting development server at http://0.0.0.0:8000/
2019-05-03T11:38:47.709014+00:00 app[web.1]: Quit the server with CONTROL-C.
2019-05-03T11:38:55.505334+00:00 heroku[router]: at=error code=H20 desc="App boot timeout" method=GET path="/" host=albmej-loan-application.herokuapp.com request_id=9037f839-8421-46f2-943a-599ec3cc6cb6 fwd="129.161.215.240" dyno= connect= service= status=503 bytes= protocol=https
2019-05-03T11:39:45.091840+00:00 heroku[web.1]: State changed from starting to crashed
2019-05-03T11:39:45.012262+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2019-05-03T11:39:45.012440+00:00 heroku[web.1]: Stopping process with SIGKILL
2019-05-03T11:39:45.082756+00:00 heroku[web.1]: Process exited with status 137
The app works locally both in a virtual environment and with Docker, but not on Heroku. I'm not sure what else to try. You can find my code at: https://github.com/AlbMej/Online-Loan-Application
Maybe I have some glaring problems in my Dockerfile or docker-compose.yml.
The accepted answer is not correct.
If you use a container with a Dockerfile, you do not need any Procfile.
Just use the $PORT variable to let Heroku determine which port to use.
https://help.heroku.com/PPBPA231/how-do-i-use-the-port-environment-variable-in-container-based-apps
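For example, the binding might look like this at the end of the Dockerfile, using the shell form of CMD so that $PORT is expanded at runtime (the loan_app.wsgi module path is a guess based on the settings module shown in the logs):

CMD gunicorn loan_app.wsgi --bind 0.0.0.0:$PORT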
A quick solution is to change your Procfile to this:
web: python subfolder/manage.py runserver 0.0.0.0:$PORT
That would work, but keep in mind that you would be using the development server in production, which is a really bad idea! But if you are just toying around, that's fine.
However, if you're using this as a production app with real data, you should use a real production server. Then your Procfile would look like this:
web: gunicorn yourapp.wsgi --log-file -
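If, as in the quick fix above, manage.py lives in a subfolder, the WSGI module may not be importable from the repository root; assuming that layout, Gunicorn's --chdir flag is one way to handle it:

web: gunicorn --chdir subfolder yourapp.wsgi --log-file -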