Django web app deployment issue with Gunicorn + AWS ECS

I have a working Django REST API Docker image with the following dependencies:
python 3.5.2, django 1.10.6, djangorestframework 3.6.2, gevent 1.2.2
In my Dockerfile, port 5000 is exposed.
Docker command:
/usr/local/bin/gunicorn --log-level=DEBUG --worker-class gevent --timeout=300 config.wsgi -w 4 -b :5000
In the ECS task definition, container port 5000 is mapped to port 80 on the host. The security group has an inbound rule allowing traffic from anywhere on port 80.
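For reference, the relevant part of the task definition looks roughly like this (trimmed to just the port mapping):
"portMappings": [
    {
        "containerPort": 5000,
        "hostPort": 80,
        "protocol": "tcp"
    }
]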
When I run the ECS task with this task definition, the application logs look fine:
[2017-09-13 16:45:34 +0000] [9] [INFO] Starting gunicorn 19.6.0
[2017-09-13 16:45:34 +0000] [9] [INFO] Listening at: http://0.0.0.0:5000 (9)
[2017-09-13 16:45:34 +0000] [9] [INFO] Using worker: gevent
[2017-09-13 16:45:34 +0000] [12] [INFO] Booting worker with pid: 12
[2017-09-13 16:45:34 +0000] [13] [INFO] Booting worker with pid: 13
[2017-09-13 16:45:35 +0000] [15] [INFO] Booting worker with pid: 15
[2017-09-13 16:45:35 +0000] [16] [INFO] Booting worker with pid: 16
But I am unable to access the service endpoints using the EC2 instance's public IP / public DNS address.
I tried getting into the running container and curling the application URL with curl localhost:5000. These are the logs I see (the connections are just closed):
[2017-09-13 17:42:42 +0000] [14] [DEBUG] GET /
[2017-09-13 17:42:42 +0000] [14] [DEBUG] Closing connection.
[2017-09-13 17:42:56 +0000] [12] [DEBUG] GET /
[2017-09-13 17:42:56 +0000] [12] [DEBUG] Closing connection.
[2017-09-13 17:53:20 +0000] [14] [DEBUG] GET /users/get_mfatype/
[2017-09-13 17:53:20 +0000] [14] [DEBUG] Closing connection.
The same Docker image works as expected when I run it locally. I even tried running it directly on the EC2 instance, and it works fine there as well.
I am not able to find the root cause of why the application is unreachable when running as an ECS task.
Am I missing anything?

Related

Heroku app is deployed but yields "takes too long to respond" error in browser

I've deployed a few Heroku web apps, but for some reason this one is giving me pain.
With
git push heroku main
my app starts and is deployed, no issues whatsoever. But when I click the link, I get a "This site can't be reached. ____ took too long to respond" error.
Upon checking the logs, there seem to be no errors:
2022-09-01T17:46:08.937248+00:00 heroku[web.1]: State changed from down to starting
2022-09-01T17:46:12.101512+00:00 heroku[web.1]: Starting process with command `gunicorn wsgi:app`
2022-09-01T17:46:13.204452+00:00 app[web.1]: [2022-09-01 17:46:13 +0000] [4] [INFO] Starting gunicorn 20.1.0
2022-09-01T17:46:13.204789+00:00 app[web.1]: [2022-09-01 17:46:13 +0000] [4] [INFO] Listening at: http://0.0.0.0:42097 (4)
2022-09-01T17:46:13.204842+00:00 app[web.1]: [2022-09-01 17:46:13 +0000] [4] [INFO] Using worker: sync
2022-09-01T17:46:13.207772+00:00 app[web.1]: [2022-09-01 17:46:13 +0000] [9] [INFO] Booting worker with pid: 9
2022-09-01T17:46:13.268559+00:00 app[web.1]: [2022-09-01 17:46:13 +0000] [10] [INFO] Booting worker with pid: 10
2022-09-01T17:46:13.737219+00:00 heroku[web.1]: State changed from starting to up
What is going on?
It runs completely fine when I run the wsgi.py file locally. I'm confused about why it's taking so long to respond, because it's just rendering an HTML template.
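For completeness, the Procfile just contains the start command that shows up in the log:
web: gunicorn wsgi:app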

Django (DRF) fails silently

I have been working on a side project using Django and Django REST Framework. There is even an active version currently running in the cloud without any issues. I wanted to make some minor changes to the code base, but while using the Django admin page the server crashes silently. I haven't changed anything in the code yet.
[2022-08-31 13:39:10 +0200] [8801] [INFO] Starting gunicorn 20.1.0
[2022-08-31 13:39:10 +0200] [8801] [INFO] Listening at: http://127.0.0.1:8000 (8801)
[2022-08-31 13:39:10 +0200] [8801] [INFO] Using worker: sync
[2022-08-31 13:39:10 +0200] [8802] [INFO] Booting worker with pid: 8802
[2022-08-31 13:39:18 +0200] [8801] [WARNING] Worker with pid 8802 was terminated due to signal 11
[2022-08-31 13:39:18 +0200] [8810] [INFO] Booting worker with pid: 8810
[2022-08-31 13:39:23 +0200] [8801] [WARNING] Worker with pid 8810 was terminated due to signal 11
[2022-08-31 13:39:23 +0200] [8814] [INFO] Booting worker with pid: 8814
The same thing happens with the python manage.py runserver command.
I'm using a virtual environment with Python 3.9.
Signal 11 (SIGSEGV, also known as a segmentation violation) means that the program accessed a memory location that was not assigned to it.
That's usually a bug in the program, so if you're writing your own program, that's the most likely cause.
It can also occur with certain hardware malfunctions.
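Since pure Python code can't normally segfault, the crash most likely comes from a native extension used by one of your dependencies. One way to see where it happens (assuming CPython 3.3+) is to enable the built-in faulthandler module, which prints a Python traceback when the process receives SIGSEGV:
# enable faulthandler for the dev server
PYTHONFAULTHANDLER=1 python manage.py runserver
# equivalent interpreter flag
python -X faulthandler manage.py runserver
# same idea for gunicorn (replace yourproject.wsgi with your actual WSGI module)
PYTHONFAULTHANDLER=1 gunicorn yourproject.wsgi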

How to keep a Gunicorn Flask app running after exiting an SSH session on an AWS EC2 instance

I am new to servers and apps.
Currently, I have created an app on my AWS instance and start it with:
gunicorn --threads 4 -b 0.0.0.0:5000 --access-logfile server.log --timeout 60 server:app
But I want to keep it running after I exit the SSH session on the instance. How can I achieve this?
[2018-09-24 17:45:28 +0000] [8318] [INFO] Starting gunicorn 19.9.0
[2018-09-24 17:45:28 +0000] [8318] [INFO] Listening at: http://0.0.0.0:5000 (8318)
[2018-09-24 17:45:28 +0000] [8318] [INFO] Using worker: threads
[2018-09-24 17:45:28 +0000] [8321] [INFO] Booting worker with pid: 8321
At the moment I also have to use Ctrl+C to exit it.
Add --daemon to your command line, or use screen (https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-gunicorn-and-nginx-on-ubuntu-14-04).
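For example, reusing the command from the question (only --daemon added), or keeping it inside a screen session that survives logging out:
# detach gunicorn from the terminal
gunicorn --daemon --threads 4 -b 0.0.0.0:5000 --access-logfile server.log --timeout 60 server:app

# or: run it inside a named screen session
screen -S myapp        # session name is arbitrary
gunicorn --threads 4 -b 0.0.0.0:5000 --access-logfile server.log --timeout 60 server:app
# detach with Ctrl+A then D; reattach later with: screen -r myapp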

Django + Gunicorn incorrect log timestamps

In my Django settings and on my machine the time is configured as UTC+3, so I expected all logs to be in UTC+3, but it turns out they are actually pretty messy:
[2017-08-08 10:29:22 +0000] [1] [INFO] Starting gunicorn 19.7.1
[2017-08-08 10:29:22 +0000] [1] [DEBUG] Arbiter booted
[2017-08-08 10:29:22 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000
[2017-08-08 10:29:22 +0000] [1] [INFO] Using worker: sync
[2017-08-08 10:29:22 +0000] [7] [INFO] Booting worker with pid: 7
[2017-08-08 10:29:23 +0000] [1] [DEBUG] 1 worker
[2017-08-08 13:29:26 +0300] [7] [INFO] [dashboard.views:9] Displaying menu
Settings:
TIME_ZONE = 'Europe/Moscow'
USE_TZ = True
Can you provide some hints or information on how to configure or debug this?
For a moment I thought this was a Gunicorn problem, but it uses the Django settings, so I have no idea what's wrong :/
Gunicorn's log timestamps do not rely on the Django timezone but on the local machine's, so to get the right timezone you should configure your local machine; how to do that depends on which OS it runs.
For Debian/Ubuntu:
sudo dpkg-reconfigure tzdata
Follow the directions in the terminal.
The timezone info is saved in /etc/timezone, which can also be edited directly.
If you are using CentOS, you can check this article.
For other options, search online for your OS.
Hope that helps.
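If the app runs in a container (the PIDs 1 and 7 in your log look like Docker), setting the TZ environment variable for the container is usually the simplest fix, assuming the tzdata package is installed in the image:
# at run time (image name is just a placeholder)
docker run -e TZ=Europe/Moscow my-django-image
# or baked into the Dockerfile
ENV TZ=Europe/Moscow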
So, the timestamps were correct, but they looked different because of my company's proxy settings. It also turned out that the best way to handle different time zones is to just use UTC everywhere except when presenting times to the user.

Gunicorn timeout and streaming dynamic content

I have a Django app using StreamingHttpResponse that fails when the Gunicorn worker times out. Extending the timeout is basically not an option, as the streaming can take longer depending on network speed. The web server won't actually time out, since it is doing real work, but the Gunicorn workers don't seem to recognize that.
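Roughly, the streaming view looks like this (simplified sketch; a dummy generator stands in for the real, network-bound data source):
# views.py -- simplified sketch, not the actual application code
import time
from django.http import StreamingHttpResponse

def download(request, pk):
    def generate():
        # Producing/sending each chunk is slow, so the whole response
        # easily outlives the gunicorn worker timeout.
        for _ in range(1000):
            time.sleep(1)  # stands in for slow network I/O
            yield b"chunk of data\n"

    return StreamingHttpResponse(generate(), content_type="application/octet-stream")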
I am aware of the choice between the sync and async workers supported by Gunicorn, and I am using gevent.
It starts with:
gunicorn -D -p /path/to/django.pid --bind 127.0.0.1:8000 --workers 2 -k gevent --worker-connections 10 --max-requests 100 myapp.wsgi:application
gunicorn log:
[2016-01-21 15:12:34 +0000] [30333] [INFO] Listening at: ...
[2016-01-21 15:12:34 +0000] [30333] [INFO] Using worker: gevent
[2016-01-21 15:14:22 +0000] [30338] [DEBUG] GET /url/1/
[2016-01-21 15:14:22 +0000] [30338] [DEBUG] Closing connection.
[2016-01-21 15:14:24 +0000] [30343] [DEBUG] GET /download/1/
...
[2016-01-21 15:15:43 +0000] [30333] [CRITICAL] WORKER TIMEOUT (pid:30343)
[2016-01-21 15:15:43 +0000] [30343] [DEBUG] Closing connection.
[2016-01-21 15:15:43 +0000] [30343] [INFO] Worker exiting (pid: 30343)
[2016-01-21 15:15:43 +0000] [31203] [INFO] Booting worker with pid: 31203
nginx log
2016/01/21 15:15:43 [error] 23160#0: *10849 upstream prematurely closed connection while reading upstream...
Why did deploying the same app using FastCGI and flup never expose this problem? Can anyone advise?