My Django API app has a view class that defines a post and a patch method. The post method creates a reservation resource with a user-defined expiration time and schedules a Celery task that changes the resource once that expiration time has passed. The patch method lets the user update the expiration time, which also requires replacing the Celery task. In my patch method, I use app.control.revoke(task_id=task_id, terminate=True, signal="SIGKILL") to revoke the existing task, then create a new task with the amended expiration time.
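In sketch form (with `expire_task` and the `task_id` attribute as placeholder names, not my actual identifiers), the patch flow is roughly:

```python
# Sketch of the PATCH flow described above. `expire_task` and the
# `task_id` field are placeholder names, not the actual code.
from datetime import datetime, timedelta, timezone

def reschedule_expiration(app, expire_task, reservation, minutes):
    # Revoke the task that was scheduled with the old expiration time.
    app.control.revoke(task_id=reservation.task_id,
                       terminate=True, signal="SIGKILL")
    # Enqueue a replacement task with the amended expiration time.
    eta = datetime.now(timezone.utc) + timedelta(minutes=minutes)
    result = expire_task.apply_async(args=[reservation.id], eta=eta)
    reservation.task_id = result.id
    return reservation
```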
All of this works just fine in my local Docker setup. But once I deploy to Heroku, the worker appears to be terminated after app.control.revoke(...) executes, and the request times out with an H12 error code:
2021-11-21T16:39:35.509103+00:00 heroku[router]: at=error code=H12 desc="Request timeout" method=PATCH path="/update-reservation-unauth/30/emailh#email.com/" host=secure-my-spot-api.herokuapp.com request_id=e0a304f8-3901-4ca1-a5eb-ba3dc06df999 fwd="108.21.217.120" dyno=web.1 connect=0ms service=30001ms status=503 bytes=0 protocol=https
2021-11-21T16:39:35.835796+00:00 app[web.1]: [2021-11-21 16:39:35 +0000] [3] [CRITICAL] WORKER TIMEOUT (pid:5)
2021-11-21T16:39:35.836627+00:00 app[web.1]: [2021-11-21 16:39:35 +0000] [5] [INFO] Worker exiting (pid: 5)
I would like to emphasise that the timeout cannot be caused by computational complexity, as the Celery task only changes a boolean field on the reservation resource.
For reference, the Celery service in my docker-compose.yml looks like this:
worker:
  container_name: celery
  build:
    context: .
    dockerfile: Dockerfile
  command: celery -A secure_my_spot.celeryconf worker --loglevel=INFO
  depends_on:
    - db
    - broker
    - redis
  restart: always
I have looked everywhere for a solution but have not found anything that helps, and I am drawing a blank as to what could cause this issue in the deployed Heroku app while the local development containers work just fine.
I am trying to host a Django project with a Postgres database in a Docker container. The project is a practice e-commerce site with a database for product info. I was able to get it working with docker-compose up and accessed the site running in the container at localhost:8000, but when I tried hosting it on AWS it didn't work. I uploaded the image to ECR and started a cluster. When I tried running a task with the image, it showed PENDING, but as soon as I refreshed, the task was gone. I tried setting up CloudWatch logs, but they were empty since the task stopped immediately after starting. After that I tried hosting on Heroku. I was able to deploy the image, but when I tried to open the app it showed an error (shown below).
It feels like the image is just failing immediately whenever I try hosting it anywhere, but it works fine when I use docker-compose up. I think I'm making a very basic mistake (I'm a total beginner at all this) but not sure what it is. Thanks for taking the time to help.
I'll also add my Dockerfile and docker-compose.yml
Error Message from Heroku
2022-11-25T05:13:31.719689+00:00 heroku[router]: at=error code=H14 desc="No web processes running" method=GET path="/" host=hk-comic-app.herokuapp.com request_id=ea683b1d-e869-4ea9-98aa-2b9ed08f7219 fwd="98.116.68.242" dyno= connect= service= status=503 bytes= protocol=https
2022-11-25T05:22:36.083750+00:00 app[api]: Scaled to app#1:Free by user
2022-11-25T05:22:39.300239+00:00 heroku[app.1]: Starting process with command `python3`
2022-11-25T05:22:39.895200+00:00 heroku[app.1]: State changed from starting to up
2022-11-25T05:22:40.178736+00:00 heroku[app.1]: Process exited with status 0
2022-11-25T05:22:40.228638+00:00 heroku[app.1]: State changed from up to crashed
2022-11-25T05:22:40.232742+00:00 heroku[app.1]: State changed from crashed to starting
2022-11-25T05:22:43.937389+00:00 heroku[app.1]: Starting process with command `python3`
2022-11-25T05:22:44.610097+00:00 heroku[app.1]: State changed from starting to up
2022-11-25T05:22:45.130636+00:00 heroku[app.1]: Process exited with status 0
2022-11-25T05:22:45.180808+00:00 heroku[app.1]: State changed from up to crashed
2022-11-25T05:23:09.462805+00:00 heroku[router]: at=error code=H14 desc="No web processes running" method=GET path="/" host=hk-comic-app.herokuapp.com request_id=f4cc3e04-0257-4336-94b3-7e48094cabd4 fwd="98.116.68.242" dyno= connect= service= status=503 bytes= protocol=https
Dockerfile
FROM python:3.9-slim-buster
ENV PYTHONUNBUFFERED=1
WORKDIR /django
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
docker-compose.yml
version: "3"
services:
  app:
    build: .
    volumes:
      - .:/django
      - ./wait-for-it.sh:/wait-for-it.sh
    ports:
      - 8000:8000
    image: app:django
    container_name: django_container
    command: /wait-for-it.sh db:5432 -- python3 manage.py runserver 0.0.0.0:8000
    depends_on:
      - db
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=comic_db
      - POSTGRES_USER=comic_user
      - POSTGRES_PASSWORD=password
    container_name: postgres_db
Heroku doesn't use docker-compose.yml. You'll need to make a few changes:
Update your Dockerfile to include a command that should be used to run your application, e.g.
CMD gunicorn project_name.wsgi
This shouldn't impact your development environment since your docker-compose.yml overrides the command. You'll need to make sure Gunicorn (or whatever WSGI server you choose to use) is declared as a dependency.
Update your Django application to get its PostgreSQL connection string from the DATABASE_URL environment variable that Heroku provides. A common way to do that is by adding a dependency on dj-database-url and then changing your DATABASES setting accordingly:
DATABASES["default"] = dj_database_url.config()
I suggest you read the documentation for that library as there's more than one way to use it.
For example, you can optionally set a default connection for development via a default argument here. Or, if you prefer, you could set your own DATABASE_URL environment variable in your docker-compose.yml.
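To illustrate what dj-database-url does, here is a rough stdlib-only sketch of how a Heroku-style DATABASE_URL maps onto Django's DATABASES dict (the library handles many more cases, such as other engines and query options, so use it rather than this):

```python
# Rough, illustrative equivalent of dj_database_url.config() for a
# postgres:// URL. Not a replacement for the library.
from urllib.parse import urlparse

def parse_database_url(url):
    parts = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parts.path.lstrip("/"),
        "USER": parts.username,
        "PASSWORD": parts.password,
        "HOST": parts.hostname,
        "PORT": parts.port or 5432,
    }

# Example with a made-up local URL; Heroku injects the real one at runtime.
config = parse_database_url("postgres://comic_user:password@db:5432/comic_db")
print(config["NAME"])  # comic_db
```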
Provision a PostgreSQL database for your application. Make sure to do the first step to check whether you already have a database.
Then redeploy.
I am running a Django application on Heroku.
When I make a request that takes longer than 30 seconds, it stops and returns 503 Service Unavailable.
These are the Heroku logs:
2021-10-20T07:11:14.726613+00:00 heroku[router]: at=error code=H12 desc="Request timeout" method=POST path="/ajax/download_yt/" host=yeti-mp3.herokuapp.com request_id=293289be-6484-4f11-a62d-bda6d0819351 fwd="79.3.90.79" dyno=web.1 connect=0ms service=30000ms status=503 bytes=0 protocol=https
I did some research and found that I could increase the Gunicorn timeout (I am using Gunicorn 20.0.4) in the Procfile. I tried increasing it to 60 seconds, but the request still stops after 30.
This is my current Procfile:
web: gunicorn yetiMP3.wsgi --timeout 60 --log-file -
These are the buildpacks I'm using on my Heroku app:
heroku/python
https://github.com/jonathanong/heroku-buildpack-ffmpeg-latest.git
How can I increase the request timeout?
Did I miss some Heroku configuration?
Eventually I found that it's a Heroku router limitation, not a dyno configuration.
The Heroku router drops any request after 30 seconds, no matter how high you set the Gunicorn timeout.
More details in this official Heroku article.
I have a Flask app deployed on Heroku using Gunicorn. Some processes take longer than the default 30 seconds, so I need to make use of the --timeout setting.
When I run gunicorn --workers=4 --timeout=300 app:app on my computer, there's no problem. But when I deploy it on Heroku with a Procfile consisting of web: gunicorn --workers=4 --timeout=300 app:app the app works fine but the timeout still defaults to 30 seconds.
When I check the Heroku logs, this is the error message I get after 30 seconds. Note the 503 status.
2021-01-15T01:57:01.545230+00:00 heroku[router]: at=error code=H12 desc="Request timeout" method=POST path="/api" host=counterpoint-server.herokuapp.com request_id=0865b365-e8de-48f1-bf94-6325206ec7df fwd="73.165.51.85" dyno=web.1 connect=1ms service=30000ms status=503 bytes=0 protocol=https
Any idea why my Gunicorn settings are being ignored in production?
So I am building a Dockerized Django project and I want to deploy it to Heroku, but I am having a lot of issues. My issues are exactly the same as this post:
Docker + Django + Postgres Add-on + Heroku
Except I cannot use CMD python3 manage.py runserver 0.0.0.0:$PORT since I receive an invalid port pair error.
I'm just running
heroku container:push web
heroku container:release web
heroku open
After opening the site, it keeps loading until it reports that an error occurred. My log shows the following:
System check identified no issues (0 silenced).
2019-05-03T11:38:47.708761+00:00 app[web.1]: May 03, 2019 - 11:38:47
2019-05-03T11:38:47.709011+00:00 app[web.1]: Django version 2.2.1, using settings 'loan_app.settings.heroku'
2019-05-03T11:38:47.709012+00:00 app[web.1]: Starting development server at http://0.0.0.0:8000/
2019-05-03T11:38:47.709014+00:00 app[web.1]: Quit the server with CONTROL-C.
2019-05-03T11:38:55.505334+00:00 heroku[router]: at=error code=H20 desc="App boot timeout" method=GET path="/" host=albmej-loan-application.herokuapp.com request_id=9037f839-8421-46f2-943a-599ec3cc6cb6 fwd="129.161.215.240" dyno= connect= service= status=503 bytes= protocol=https
2019-05-03T11:39:45.091840+00:00 heroku[web.1]: State changed from starting to crashed
2019-05-03T11:39:45.012262+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2019-05-03T11:39:45.012440+00:00 heroku[web.1]: Stopping process with SIGKILL
2019-05-03T11:39:45.082756+00:00 heroku[web.1]: Process exited with status 137
The app works locally through a virtual environment and using Docker but just not on Heroku. Not sure what else to try. You can find my code at: https://github.com/AlbMej/Online-Loan-Application
Maybe I have some glaring problems in my Dockerfile or docker-compose.yml
The answer is not correct.
If you use a container with a Dockerfile, you do not need any Procfile.
Just use the $PORT variable to let Heroku determine which port to use.
https://help.heroku.com/PPBPA231/how-do-i-use-the-port-environment-variable-in-container-based-apps
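For example, the Dockerfile's CMD could use shell form so that $PORT is expanded at container start (Gunicorn and the loan_app.wsgi module path are assumptions based on the settings shown in the logs):

```dockerfile
# Shell form: the shell substitutes $PORT at runtime, avoiding the
# "invalid port pair" error that the literal string $PORT causes.
CMD gunicorn loan_app.wsgi --bind 0.0.0.0:$PORT
```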
Quick solution is to change your Procfile to this:
web: python subfolder/manage.py runserver 0.0.0.0:$PORT
That would work, but keep in mind that you are using the development server on production, which is a really bad idea! But if you are just toying around, that's ok.
However, if you're using this as a production app with real data, you should use a real production server. Then your Procfile would look like this:
web: gunicorn yourapp.wsgi --log-file -
I have a periodic task that I am implementing on Heroku using a worker in my Procfile:
Procfile
web: gunicorn voltbe2.wsgi --log-file - --log-level debug
worker: celery -A voltbe2 worker --beat --events --loglevel INFO
tasks.py
from datetime import timedelta

from celery.task import PeriodicTask

class PullXXXActivityTask(PeriodicTask):
    """
    A periodic task that fetches data every 1 min.
    """
    run_every = timedelta(minutes=1)

    def run(self, **kwargs):
        # MyModel is the app's Django model; its import is omitted here
        abc = MyModel.objects.all()
        for rk in abc:
            rk.pull()
        logger = self.get_logger(**kwargs)
        logger.info("Running periodic task for XXX.")
        return True
For this periodic task, I need the --beat flag (I checked by turning it off, and the task does not repeat). So, in some way, --beat does the work of a clock (https://devcenter.heroku.com/articles/scheduled-jobs-custom-clock-processes)
My concern is: if I scale the worker to two dynos with heroku ps:scale worker=2, I can see from the logs that two beats are running, on worker.1 and worker.2:
Aug 25 09:38:11 emstaging app/worker.2: [2014-08-25 16:38:11,580: INFO/Beat] Scheduler: Sending due task apps.notification.tasks.SendPushNotificationTask (apps.notification.tasks.SendPushNotificationTask)
Aug 25 09:38:20 emstaging app/worker.1: [2014-08-25 16:38:20,239: INFO/Beat] Scheduler: Sending due task apps.notification.tasks.SendPushNotificationTask (apps.notification.tasks.SendPushNotificationTask)
The log shown is for a different periodic task, but the key point is that both worker dynos receive signals to run the same task from their respective clocks, when in fact there should be a single clock that ticks, decides every XX seconds what to do, and hands the task to the least loaded worker.n dyno.
More on why a single clock is essential is here : https://devcenter.heroku.com/articles/scheduled-jobs-custom-clock-processes#custom-clock-processes
Is this a problem, and if so, how can I avoid it?
You should have a separate worker for the beat process.
web: gunicorn voltbe2.wsgi --log-file - --log-level debug
worker: celery -A voltbe2 worker --events --loglevel info
beat: celery -A voltbe2 beat
Now you can scale the worker process without affecting the beat one.
Alternatively, if you won't always need the extra process, you can keep -B in the worker process but also define a second process type, say extra_worker, which is normally scaled to 0 dynos but which you can scale up as necessary. The important thing is to always keep the process that runs beat at exactly 1 instance.
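Under that alternative, the Procfile might look like this (a sketch; extra_worker is just a name you choose):

```
web: gunicorn voltbe2.wsgi --log-file - --log-level debug
worker: celery -A voltbe2 worker -B --events --loglevel INFO
extra_worker: celery -A voltbe2 worker --events --loglevel INFO
```

Then heroku ps:scale worker=1 extra_worker=0 keeps a single beat running, and you scale extra_worker up only when you need more task throughput.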