nginx, gunicorn and django timing out - django

I'm so confused!
I set everything up, my site was working for two days, and then suddenly today it stops working.
The only thing I changed was yesterday I was trying to serve PHP files so I installed PHP and uwsgi. It was late and I didn't realize what I was doing. It was from this website: http://uwsgi-docs.readthedocs.org/en/latest/PHP.html
# Add ppa with libphp5-embed package
sudo add-apt-repository ppa:l-mierzwa/lucid-php5
# Update to use package from ppa
sudo apt-get update
# Install needed dependencies
sudo apt-get install php5-dev libphp5-embed libonig-dev libqdbm-dev
# Compile uWSGI PHP plugin
python uwsgiconfig.py --plugin plugins/php
But I didn't change any settings, and even after doing that, everything was still fine. The next day, however, my site just wouldn't load.
I tried a few things which didn't work. In my settings:
ALLOWED_HOSTS = ['*']
In my gunicorn.sh, I set TIMEOUT=60. However, when I try to access my site (lewischi.com), nothing even happens. But when I go to http://127.0.0.1:8000, I do see workers doing stuff and get a 404 error.
Using the URLconf defined in django_project.urls,
Django tried these URL patterns, in this order:
I'm not sure what's going on! The nginx error log isn't very helpful, but the access log seems more useful.
From my nginx-access.log (it works, then stops working):
50.156.86.221 - - [25/Sep/2015:00:25:43 -0700] "GET /codeWindow.html
HTTP/1.1" 200 2081 "http://lewischi.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36"
50.156.86.221 - - [25/Sep/2015:00:25:58 -0700] "GET /test.jpg HTTP/1.1"
404 208 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36
(KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36"
192.168.2.6 - - [25/Sep/2015:16:42:19 -0700] "GET / HTTP/1.1" 200 9596 "-" "-"
192.168.2.6 - - [25/Sep/2015:17:24:44 -0700] "GET / HTTP/1.1" 200 9596 "-" "-"
192.168.2.6 - - [25/Sep/2015:23:28:51 -0700] "GET / HTTP/1.1" 200 9596 "-" "-"
192.168.2.6 - - [25/Sep/2015:23:29:02 -0700] "GET / HTTP/1.1" 200 9596 "-" "-"
From my supervisor log file:
supervisor: couldn't exec /home/lewischi/projects/active/django_project/gunicorn.sh: ENOEXEC
supervisor: child process was not spawned
ANY HELP would be greatly appreciated!!!! I feel like I should just uninstall uwsgi, but I don't want to break anything, so I'm asking for advice before I go messing things up.
I'm pretty new to this so I may be overlooking something obvious. My gunicorn debug mode output:
"Starting "djangotut" as lewischi"
[2015-09-26 17:50:28 +0000] [2316] [DEBUG] Current configuration:
proxy_protocol: False
worker_connections: 1000
statsd_host: None
max_requests_jitter: 0
post_fork: <function post_fork at 0x7faf049ec848>
pythonpath: None
enable_stdio_inheritance: False
worker_class: sync
ssl_version: 3
suppress_ragged_eofs: True
syslog: False
syslog_facility: user
when_ready: <function when_ready at 0x7faf049ec578>
pre_fork: <function pre_fork at 0x7faf049ec6e0>
cert_reqs: 0
preload_app: False
keepalive: 2
accesslog: None
group: 1000
graceful_timeout: 30
do_handshake_on_connect: False
spew: False
workers: 3
proc_name: "djangotut"
sendfile: True
pidfile: None
umask: 0
on_reload: <function on_reload at 0x7faf049ec410>
pre_exec: <function pre_exec at 0x7faf049ecde8>
worker_tmp_dir: None
post_worker_init: <function post_worker_init at 0x7faf049ec9b0>
limit_request_fields: 100
on_exit: <function on_exit at 0x7faf049f2500>
config: None
secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
proxy_allow_ips: ['127.0.0.1']
pre_request: <function pre_request at 0x7faf049ecf50>
post_request: <function post_request at 0x7faf049f20c8>
user: 1000
forwarded_allow_ips: ['127.0.0.1']
worker_int: <function worker_int at 0x7faf049ecb18>
threads: 1
max_requests: 1
limit_request_line: 4094
access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
certfile: None
worker_exit: <function worker_exit at 0x7faf049f2230>
chdir: /home/lewischi/projects/active/django_project
paste: None
default_proc_name: django_project.wsgi:application
errorlog: -
loglevel: DEBUG
logconfig: None
syslog_addr: udp://localhost:514
syslog_prefix: None
daemon: False
ciphers: TLSv1
on_starting: <function on_starting at 0x7faf049ec2a8>
worker_abort: <function worker_abort at 0x7faf049ecc80>
bind: ['0.0.0.0:8000']
raw_env: []
reload: False
check_config: False
limit_request_field_size: 8190
nworkers_changed: <function nworkers_changed at 0x7faf049f2398>
timeout: 60
ca_certs: None
django_settings: None
tmp_upload_dir: None
keyfile: None
backlog: 2048
logger_class: gunicorn.glogging.Logger
statsd_prefix:
[2015-09-26 17:50:28 +0000] [2316] [INFO] Starting gunicorn 19.3.0
[2015-09-26 17:50:28 +0000] [2316] [DEBUG] Arbiter booted
[2015-09-26 17:50:28 +0000] [2316] [INFO] Listening at: http://0.0.0.0:8000 (2316)
[2015-09-26 17:50:28 +0000] [2316] [INFO] Using worker: sync
[2015-09-26 17:50:28 +0000] [2327] [INFO] Booting worker with pid: 2327
[2015-09-26 17:50:28 +0000] [2328] [INFO] Booting worker with pid: 2328
[2015-09-26 17:50:28 +0000] [2329] [INFO] Booting worker with pid: 2329
[2015-09-26 17:50:29 +0000] [2316] [DEBUG] 3 workers
[2015-09-26 17:50:30 +0000] [2316] [DEBUG] 3 workers

The problem is not with supervisord itself. A few things to consider when dealing with Nginx, Gunicorn and Django in general:
Make sure the user running the app process (at minimum one non-root user, not counting the users created by default for e.g. Nginx or PostgreSQL; this varies with the stack) has the permissions and ownership it needs to do its job.
When adding another app to your stack, first check the port it runs on by default and change it to prevent port conflicts. Keep in mind the difference between internal and external ports, since you use Nginx as a proxy to Gunicorn (this is what causes most timeouts; it has happened to me several times during late-night work). You can use Nginx as a proxy server and run many apps, each with its own unique internal port.
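As a minimal sketch of that internal/external port split, an nginx server block proxying to Gunicorn typically looks like this (the upstream address matches the 0.0.0.0:8000 bind in the debug output; everything else here is illustrative, not taken from your config):

```nginx
server {
    listen 80;                     # external port clients connect to
    server_name lewischi.com;

    location / {
        # internal port Gunicorn listens on
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```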
Given the supervisor error log you provided, it seems you're running your gunicorn.sh either as a user that doesn't have sufficient permissions or ownership, or with the wrong command.
Please provide the supervisor config file relevant to your app.
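Worth noting: ENOEXEC specifically means the kernel refused to execute the file, which almost always comes down to a missing shebang line or a missing execute bit on gunicorn.sh. A minimal sketch (the path and script body here are placeholders, not your actual script):

```shell
# ENOEXEC usually means no shebang or no execute permission.
# Recreate a minimal script with both and run it:
cat > /tmp/gunicorn.sh <<'EOF'
#!/bin/bash
echo "gunicorn would start here"
EOF
chmod +x /tmp/gunicorn.sh   # without the execute bit, supervisor cannot exec it
/tmp/gunicorn.sh
```

Alternatively, supervisor can be pointed at the interpreter explicitly (command=/bin/bash /path/to/gunicorn.sh), which sidesteps ENOEXEC entirely.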
Update: seems his ip address changed.

Ah, never mind. Thanks for your time.
It turned out that my IP address somehow changed, which should not have happened... Rookie mistake.

Related

Heroku app won't restart after error 14 (Memory quota exceeded)

We have a Django app deployed on Heroku with the following Procfile:
release: python manage.py migrate
web: gunicorn RDHQ.wsgi:application --log-file - --log-level debug
celery: celery -A RDHQ worker -l info
Yesterday the app was down and accessing the site returned ERR_CONNECTION_TIMED_OUT.
When I looked at the logs, I saw that the celery process was showing an R14 (Memory usage exceeded) error:
2022-12-24T07:14:46.771299+00:00 heroku[celery.1]: Process running mem=526M(102.7%)
2022-12-24T07:14:46.772983+00:00 heroku[celery.1]: Error R14 (Memory quota exceeded)
I restarted the dynos a couple of times, but the celery dyno immediately throws the same error after restart.
I then removed the celery process entirely from my Procfile:
release: python manage.py migrate
web: gunicorn RDHQ.wsgi:application --log-file - --log-level debug
After I pushed the new Procfile to Heroku, the app is still down!
I tried manually scaling down the web dyno and then scaling it up again - nothing.
This is what the logs show:
2022-12-24T07:57:26.757537+00:00 app[web.1]: [2022-12-24 07:57:26 +0000] [12] [DEBUG] Closing connection.
2022-12-24T07:57:53.000000+00:00 app[heroku-postgres]: source=HEROKU_POSTGRESQL_SILVER addon=postgresql-reticulated-80597 sample#current_transaction=796789 sample#db_size=359318383bytes sample#tables=173 sample#active-connections=12 sample#waiting-connections=0 sample#index-cache-hit-rate=0.99972 sample#table-cache-hit-rate=0.99943 sample#load-avg-1m=0.01 sample#load-avg-5m=0.005 sample#load-avg-15m=0 sample#read-iops=0 sample#write-iops=0.076923 sample#tmp-disk-used=543600640 sample#tmp-disk-available=72435191808 sample#memory-total=8038324kB sample#memory-free=3006824kB sample#memory-cached=4357424kB sample#memory-postgres=25916kB sample#wal-percentage-used=0.06576949341778418
2022-12-24T07:59:26.615551+00:00 app[web.1]: [2022-12-24 07:59:26 +0000] [12] [DEBUG] GET /us/first-aid-cover/california/
2022-12-24T07:59:28.421560+00:00 app[web.1]: 10.1.23.217 - - [24/Dec/2022:07:59:28 +0000] "GET /us/first-aid-cover/california/?order_by=title HTTP/1.1" 200 4380 "-" "Mozilla/5.0 (compatible; AhrefsBot/7.0; +http://ahrefs.com/robot/)"
2022-12-24T07:59:28.420775+00:00 heroku[router]: at=info method=GET path= "/us/first-aid-cover/california/?order_by=title" host=www.racedirectorshq.com request_id=c0d1bc3f-90ad-4b5d-ac2b-edaf4d777e26 fwd="54.36.149.50" dyno=web.1 connect=0ms service=1806ms status=200 bytes=4991 protocol=https
2022-12-24T07:59:34.952805+00:00 app[web.1]: [2022-12-24 07:59:34 +0000] [12] [DEBUG] Closing connection.
I'm totally at a loss as to how to fix this and our app has been down for almost a day now. Please help.
EDIT: It was a DNS issue in the end. Fixed.
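For the R14 itself (separate from the DNS issue that turned out to be the cause of the downtime), a common mitigation is to recycle celery worker children once they cross a memory threshold. A sketch of the Procfile change, assuming the growth happens inside task processes; the 200000 KiB (~200 MB) threshold is an assumed value chosen to stay under the 512 MB dyno limit:

```
release: python manage.py migrate
web: gunicorn RDHQ.wsgi:application --log-file - --log-level debug
celery: celery -A RDHQ worker -l info --max-memory-per-child=200000
```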

I get a 502 Bad Gateway error with django app on AppEngine

I'm deploying a Django app on Google App Engine (flexible environment). The app works fine locally, and deployment (using gcloud app deploy) goes well. The homepage loads fine, but I get a 502 Bad Gateway nginx error when I load some binary data (about 40 MB) using pickle from a directory in the same app directory (through a POST request). I've tried many of the proposed solutions (changing the PORT to 8080, adding a gunicorn timeout or --preload, changing the number of workers...), but I still have the problem. I think the problem comes from loading such a heavy file, since I can still access the Django admin on the deployed version.
I'm not really knowledgeable about gunicorn/nginx (this is the first time I've deployed an app). I'd be very thankful if you have some ideas after so much time spent on this!
The log file doesn't show any error:
2021-10-30 14:38:46 default[20211030t141946] [2021-10-30 14:38:46 +0000] [1] [INFO] Starting gunicorn 19.9.0
2021-10-30 14:38:46 default[20211030t141946] [2021-10-30 14:38:46 +0000] [1] [DEBUG] Arbiter booted
2021-10-30 14:38:46 default[20211030t141946] [2021-10-30 14:38:46 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)
2021-10-30 14:38:46 default[20211030t141946] [2021-10-30 14:38:46 +0000] [1] [INFO] Using worker: sync
2021-10-30 14:38:46 default[20211030t141946] [2021-10-30 14:38:46 +0000] [10] [INFO] Booting worker with pid: 10
2021-10-30 14:38:46 default[20211030t141946] [2021-10-30 14:38:46 +0000] [1] [DEBUG] 1 workers
2021-10-30 14:39:04 default[20211030t133157] "GET /nginx_metrics" 200
2021-10-30 14:39:31 default[20211030t141946] [2021-10-30 14:39:31 +0000] [10] [DEBUG] GET /
2021-10-30 14:39:47 default[20211030t141946] "GET /nginx_metrics" 200
2021-10-30 14:40:04 default[20211030t141946] [2021-10-30 14:40:04 +0000] [10] [DEBUG] POST /
2021-10-30 14:40:04 default[20211030t141946] POST REQUEST (I click here)
2021-10-30 14:40:20 default[20211030t133157] [2021-10-30 14:40:20 +0000] [1] [INFO] Handling signal: term
2021-10-30 14:40:20 default[20211030t133157] [2021-10-30 14:40:20 +0000] [14] [INFO] Worker exiting (pid: 14)
2021-10-30 14:40:21 default[20211030t133157] [2021-10-30 14:40:21 +0000] [1] [INFO] Shutting down: Master
2021-10-30 14:40:47 default[20211030t141946] "GET /nginx_metrics" 200
My app.yaml file :
runtime: python
env: flex

env_variables:
  SECRET_KEY: 'DJANGO-SECRET-KEY'
  DEBUG: 'False'
  DB_HOST: '/cloudsql/django-naimai:europe-west1:naimai-sql'
  DB_PORT: '5432' # PostgreSQL port
  DB_NAME: 'postgres'
  DB_USER: 'postgres'
  DB_PASSWORD: 'DB_PASSWORD'

entrypoint: gunicorn -b :$PORT --log-level=debug --timeout=120 django_naimai.wsgi

manual_scaling:
  instances: 1

beta_settings:
  cloud_sql_instances: django-naimai-west1:naimai-sql

runtime_config:
  python_version: 3

resources:
  cpu: 2
  memory_gb: 2.3
  disk_size_gb: 20
  volumes:
  - name: ramdisk1
    volume_type: tmpfs
    size_gb: 2
My settings.py file :
DEBUG = os.environ['DEBUG']
ALLOWED_HOSTS = ["django-naimai.oa.r.appspot.com", "127.0.0.1"]
DATABASES = {
    "default": {
        'ENGINE': 'django.db.backends.postgresql',
        'HOST': os.environ['DB_HOST'],
        'PORT': os.environ['DB_PORT'],
        'NAME': os.environ['DB_NAME'],
        'USER': os.environ['DB_USER'],
        'PASSWORD': os.environ['DB_PASSWORD']
    }
}
if os.getenv("USE_CLOUD_SQL_AUTH_PROXY", None):
    DATABASES["default"]["HOST"] = "127.0.0.1"
    DATABASES["default"]["PORT"] = 5432
GS_BUCKET_NAME="naimai_bucket"
STATIC_URL = "/static/"
DEFAULT_FILE_STORAGE = "storages.backends.gcloud.GoogleCloudStorage"
STATICFILES_STORAGE = "storages.backends.gcloud.GoogleCloudStorage"
GS_DEFAULT_ACL = "publicRead"
As @GAEfan suggested, I needed to give the instance more memory! I set memory_gb to 10 in the yaml file and it worked.
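Beyond adding memory, a pattern that often helps in this situation is to unpickle the heavy file once per worker and cache the result, instead of reloading it inside the POST handler on every request (combined with gunicorn --preload, the load can even happen before workers fork). A minimal sketch; load_model and the path argument are hypothetical names, not from the question:

```python
import functools
import pickle

# Cache the unpickled object so only the first call per worker pays the
# cost of reading the large file; later requests reuse the same object.
@functools.lru_cache(maxsize=1)
def load_model(path):
    with open(path, "rb") as f:
        return pickle.load(f)
```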

Django Telegram bot not responding on Heroku

I deployed my bot to Heroku but it is not responding
I have a doubt regarding the Procfile, is this correct?:
web: gunicorn tlgrmbot.wsgi
worker: python bot.py
This is what I see in the logs:
2021-07-27T21:25:56.080317+00:00 heroku[web.1]: Starting process with command `gunicorn tlgrmbot.wsgi`
2021-07-27T21:26:00.590684+00:00 heroku[web.1]: State changed from starting to up
2021-07-27T21:26:00.092958+00:00 app[web.1]: [2021-07-27 21:26:00 +0000] [4] [INFO] Starting gunicorn 20.1.0
2021-07-27T21:26:00.093896+00:00 app[web.1]: [2021-07-27 21:26:00 +0000] [4] [INFO] Listening at: http://0.0.0.0:54428 (4)
2021-07-27T21:26:00.094088+00:00 app[web.1]: [2021-07-27 21:26:00 +0000] [4] [INFO] Using worker: sync
2021-07-27T21:26:00.106699+00:00 app[web.1]: [2021-07-27 21:26:00 +0000] [9] [INFO] Booting worker with pid: 9
2021-07-27T21:26:00.157630+00:00 app[web.1]: [2021-07-27 21:26:00 +0000] [10] [INFO] Booting worker with pid: 10
2021-07-27T21:25:59.000000+00:00 app[api]: Build succeeded
2021-07-27T21:26:06.266840+00:00 app[web.1]: 10.43.228.207 - - [27/Jul/2021:21:26:06 +0000] "POST / HTTP/1.1" 200 10697 "-" "-"
2021-07-27T21:26:06.267235+00:00 heroku[router]: at=info method=POST path="/" host=tlgrmbotgym.herokuapp.com request_id=a7b1fd84-93d2-4fdb-88cd-941dd581b4c1 fwd="91.108.6.98" dyno=web.1 connect=0ms service=37ms status=200 bytes=10924 protocol=https
and this is how I set up the webhook in bot.py
mode = os.environ.get("MODE", "polling")
if mode == 'webhook':
# enable webhook
updater.start_webhook(listen="0.0.0.0",
port=PORT,
url_path=TOKEN)
updater.bot.setWebhook('https://tlgrmbotgym.herokuapp.com/'+TOKEN)
else:
# enable polling
updater.start_polling()
updater.idle()
UPDATE
I updated my webhook:
mode = os.environ.get("MODE", "polling")
if mode == 'webhook':
    # enable webhook
    updater.start_webhook(listen="0.0.0.0",
                          port=PORT,
                          url_path=TOKEN,
                          webhook_url="https://tlgrmbotgym.herokuapp.com/" + TOKEN)
else:
    # enable polling
    updater.start_polling()
updater.idle()
and the bot is still not responding, but I see that the logs have changed and show new information:
2021-07-28T16:25:09.856073+00:00 heroku[web.1]: State changed from down to starting
2021-07-28T16:25:18.088650+00:00 heroku[web.1]: Starting process with command `gunicorn tlgrmbot.wsgi`
2021-07-28T16:25:21.429559+00:00 app[web.1]: [2021-07-28 16:25:21 +0000] [4] [INFO] Starting gunicorn 20.1.0
2021-07-28T16:25:21.430458+00:00 app[web.1]: [2021-07-28 16:25:21 +0000] [4] [INFO] Listening at: http://0.0.0.0:43093 (4)
2021-07-28T16:25:21.430624+00:00 app[web.1]: [2021-07-28 16:25:21 +0000] [4] [INFO] Using worker: sync
2021-07-28T16:25:21.437623+00:00 app[web.1]: [2021-07-28 16:25:21 +0000] [9] [INFO] Booting worker with pid: 9
2021-07-28T16:25:21.445747+00:00 app[web.1]: [2021-07-28 16:25:21 +0000] [10] [INFO] Booting worker with pid: 10
2021-07-28T16:25:22.355994+00:00 heroku[web.1]: State changed from starting to up
2021-07-28T16:25:25.573630+00:00 heroku[router]: at=info method=POST path="/" host=tlgrmbotgym.herokuapp.com request_id=cbab72e0-bded-4bb1-9e10-b5e820de9871 fwd="91.108.6.98" dyno=web.1 connect=1ms service=1629ms status=200 bytes=10924 protocol=https
2021-07-28T16:25:25.567572+00:00 app[web.1]: 10.41.182.161 - - [28/Jul/2021:16:25:25 +0000] "POST / HTTP/1.1" 200 10697 "-" "-"
2021-07-28T16:27:45.534385+00:00 heroku[router]: at=info method=POST path="/" host=tlgrmbotgym.herokuapp.com request_id=92bcee23-c40c-4111-9e2f-dab8d6a3faa8 fwd="91.108.6.98" dyno=web.1 connect=1ms service=17ms status=200 bytes=10924 protocol=https
2021-07-28T16:27:45.534074+00:00 app[web.1]: 10.45.182.145 - - [28/Jul/2021:16:27:45 +0000] "POST / HTTP/1.1" 200 10697 "-" "-"
logging code
import logging

logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
                    level=logging.INFO)
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
this is what I see in the logs now:
2021-07-28T18:45:28.807525+00:00 app[web.1]: Not Found: /{Token}
2021-07-28T18:45:28.811974+00:00 app[web.1]: 10.181.143.206 - - [28/Jul/2021:18:45:28 +0000] "POST /{Token} HTTP/1.1" 404 2230 "-" "-"
2021-07-28T18:46:28.977729+00:00 heroku[router]: at=info method=POST path="/{Token}" host=tlgrmbotgym.herokuapp.com request_id=ad50bc1b-5e0e-4e98-b599-315e425c56e7 fwd="91.108.6.98" dyno=web.1 connect=0ms service=87ms status=404 bytes=2463 protocol=https
2021-07-28T18:46:28.982557+00:00 app[web.1]: Not Found: /{Token}
Updated Logs:
2021-07-29T01:02:04.000000+00:00 app[api]: Build succeeded
2021-07-29T01:02:06.949623+00:00 app[worker.1]: 2021-07-29 01:02:06,949 - telegram.ext.dispatcher - DEBUG - Setting singleton dispatcher as <telegram.ext.dispatcher.Dispatcher object at 0x7faed70b0130>
2021-07-29T01:02:06.959355+00:00 app[worker.1]: 2021-07-29 01:02:06,959 - apscheduler.scheduler - INFO - Scheduler started
2021-07-29T01:02:06.960775+00:00 app[worker.1]: 2021-07-29 01:02:06,960 - apscheduler.scheduler - DEBUG - Looking for jobs to run
2021-07-29T01:02:06.961328+00:00 app[worker.1]: 2021-07-29 01:02:06,961 - apscheduler.scheduler - DEBUG - No jobs; waiting until a job is added
2021-07-29T01:02:06.961409+00:00 app[worker.1]: 2021-07-29 01:02:06,960 - telegram.bot - DEBUG - Entering: get_me
2021-07-29T01:02:07.248949+00:00 app[worker.1]: 2021-07-29 01:02:07,247 - telegram.bot - DEBUG - {'supports_inline_queries': False, 'username': 'CKBXFbot', 'can_join_groups': True, 'first_name': 'gym_bot', 'is_bot': True, 'can_read_all_group_messages': False, 'id': 1810662496}
2021-07-29T01:02:07.252111+00:00 app[worker.1]: 2021-07-29 01:02:07,248 - telegram.bot - DEBUG - Exiting: get_me
2021-07-29T01:02:07.252115+00:00 app[worker.1]: 2021-07-29 01:02:07,248 - telegram.ext.updater - DEBUG - Bot:1810662496:dispatcher - started
2021-07-29T01:02:07.252123+00:00 app[worker.1]: 2021-07-29 01:02:07,250 - telegram.ext.updater - DEBUG - Bot:1810662496:updater - started
2021-07-29T01:02:07.252123+00:00 app[worker.1]: 2021-07-29 01:02:07,250 - telegram.ext.updater - DEBUG - Updater thread started (webhook)
2021-07-29T01:02:07.252126+00:00 app[worker.1]: 2021-07-29 01:02:07,251 - telegram.ext.updater - DEBUG - Start network loop retry bootstrap set webhook
2021-07-29T01:02:07.252126+00:00 app[worker.1]: 2021-07-29 01:02:07,251 - telegram.ext.updater - DEBUG - Setting webhook
2021-07-29T01:02:07.252127+00:00 app[worker.1]: 2021-07-29 01:02:07,251 - telegram.bot - DEBUG - Entering: set_webhook
2021-07-29T01:02:07.256003+00:00 app[worker.1]: 2021-07-29 01:02:07,254 - telegram.ext.updater - DEBUG - Waiting for Dispatcher and Webhook to start
2021-07-29T01:02:07.257154+00:00 app[worker.1]: 2021-07-29 01:02:07,256 - telegram.ext.dispatcher - DEBUG - Dispatcher started
2021-07-29T01:02:07.344474+00:00 app[worker.1]: 2021-07-29 01:02:07,343 - telegram.bot - DEBUG - True
2021-07-29T01:02:07.344478+00:00 app[worker.1]: 2021-07-29 01:02:07,343 - telegram.bot - DEBUG - Exiting: set_webhook
2021-07-29T01:02:07.377616+00:00 app[worker.1]: 2021-07-29 01:02:07,376 - asyncio - DEBUG - Using selector: EpollSelector
2021-07-29T01:02:07.377620+00:00 app[worker.1]: 2021-07-29 01:02:07,376 - telegram.ext.utils.webhookhandler - DEBUG - Webhook Server started.
2021-07-29T01:03:04.550955+00:00 heroku[router]: at=info method=POST path="/{token}" host=tlgrmbotgym.herokuapp.com request_id=6faf2fcc-a84c-410c-8ad2-f0455dfe6121 fwd="91.108.6.98" dyno=web.1 connect=0ms service=30ms status=404 bytes=2463 protocol=https
2021-07-29T01:03:04.487802+00:00 app[web.1]: Not Found: /1810662496:AAFyVWyOr5K9CVM6XQcTWiVIG05qgSxmNDk
2021-07-29T01:03:04.488421+00:00 app[web.1]: 10.45.67.217 - - [29/Jul/2021:01:03:04 +0000] "POST /{token} HTTP/1.1" 404 2230 "-" "-"
It looks like what is not being found is the URL "https://tlgrmbotgym.herokuapp.com/" + TOKEN, which in the Procfile is served by the web process, and that it somehow conflicts when the worker (bot.py) runs.
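The 404s are consistent with that: Telegram delivers each update as a POST to the path portion of the webhook URL, so the web process (the Django app) must have a URL pattern matching /<TOKEN>, otherwise Django returns Not Found. A tiny sketch of the invariant; the token value is a placeholder:

```python
from urllib.parse import urlparse

TOKEN = "123456:PLACEHOLDER"  # placeholder, not a real bot token
webhook_url = "https://tlgrmbotgym.herokuapp.com/" + TOKEN

# Telegram will POST updates to this exact path on the web dyno, so the
# Django URLconf needs a matching pattern (or the bot must use polling).
update_path = urlparse(webhook_url).path
```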
If you're on PTB 13.4+, you'll have to change how you set the webhook. See https://t.me/pythontelegrambotchannel/100 and also the wiki page on webhooks.
This is how I solved the problem:
Procfile
web: gunicorn tlgrmbot.wsgi
worker: python manage.py bot
I'm using polling at the moment; if I find a way to make it work with a webhook, I will update my answer.

Servicing concurrent JAX-RS requests with WebLogic 12.2.1

I wrote a JAX-RS web service method to run on WebLogic 12.2.1, to test how many concurrent requests it can handle. I purposely make the method take 5 minutes to execute.
@Singleton
@Path("Services")
@ApplicationPath("resources")
public class Services extends Application {

    // logger declaration (omitted in the original post)
    private static final Logger logger = Logger.getLogger(Services.class.getName());

    private static int count = 0;

    private static synchronized int addCount(int a) {
        count = count + a;
        return count;
    }

    @GET
    @Path("Ping")
    public Response ping(@Context HttpServletRequest request) {
        int c = addCount(1);
        logger.log(INFO, "Method entered, total running requests: [{0}]", c);
        try {
            Thread.sleep(300000); // hold the request for 5 minutes
        } catch (InterruptedException exception) {
        }
        c = addCount(-1);
        logger.log(INFO, "Exiting method, total running requests: [{0}]", c);
        return Response.ok().build();
    }
}
I also wrote a stand-alone client program to send 500 concurrent requests to this service. The client uses one thread for each request.
From what I understand, WebLogic has a default maximum of 400 threads, which means it can handle 400 requests concurrently. This figure is confirmed by my test result below. As you can see, within the first 5 minutes, starting from 10:46:31, only 400 requests were serviced.
23/08/2016 10:46:31.393 [132] [INFO] [Services.ping] - Method entered, total running requests: [1]
23/08/2016 10:46:31.471 [204] [INFO] [Services.ping] - Method entered, total running requests: [2]
23/08/2016 10:46:31.471 [66] [INFO] [Services.ping] - Method entered, total running requests: [3]
23/08/2016 10:46:31.471 [210] [INFO] [Services.ping] - Method entered, total running requests: [4]
23/08/2016 10:46:31.471 [206] [INFO] [Services.ping] - Method entered, total running requests: [5]
23/08/2016 10:46:31.487 [207] [INFO] [Services.ping] - Method entered, total running requests: [6]
23/08/2016 10:46:31.487 [211] [INFO] [Services.ping] - Method entered, total running requests: [7]
23/08/2016 10:46:31.487 [267] [INFO] [Services.ping] - Method entered, total running requests: [8]
23/08/2016 10:46:31.487 [131] [INFO] [Services.ping] - Method entered, total running requests: [9]
23/08/2016 10:46:31.502 [65] [INFO] [Services.ping] - Method entered, total running requests: [10]
23/08/2016 10:46:31.518 [265] [INFO] [Services.ping] - Method entered, total running requests: [11]
23/08/2016 10:46:31.565 [266] [INFO] [Services.ping] - Method entered, total running requests: [12]
23/08/2016 10:46:35.690 [215] [INFO] [Services.ping] - Method entered, total running requests: [13]
23/08/2016 10:46:35.690 [269] [INFO] [Services.ping] - Method entered, total running requests: [14]
23/08/2016 10:46:35.690 [268] [INFO] [Services.ping] - Method entered, total running requests: [15]
23/08/2016 10:46:35.690 [214] [INFO] [Services.ping] - Method entered, total running requests: [16]
23/08/2016 10:46:35.690 [80] [INFO] [Services.ping] - Method entered, total running requests: [17]
23/08/2016 10:46:35.690 [79] [INFO] [Services.ping] - Method entered, total running requests: [18]
23/08/2016 10:46:35.690 [152] [INFO] [Services.ping] - Method entered, total running requests: [19]
23/08/2016 10:46:37.674 [158] [INFO] [Services.ping] - Method entered, total running requests: [20]
23/08/2016 10:46:37.674 [155] [INFO] [Services.ping] - Method entered, total running requests: [21]
23/08/2016 10:46:39.674 [163] [INFO] [Services.ping] - Method entered, total running requests: [22]
23/08/2016 10:46:39.705 [165] [INFO] [Services.ping] - Method entered, total running requests: [23]
23/08/2016 10:46:39.705 [82] [INFO] [Services.ping] - Method entered, total running requests: [24]
23/08/2016 10:46:39.705 [166] [INFO] [Services.ping] - Method entered, total running requests: [25]
23/08/2016 10:46:41.690 [84] [INFO] [Services.ping] - Method entered, total running requests: [26]
23/08/2016 10:46:41.690 [160] [INFO] [Services.ping] - Method entered, total running requests: [27]
23/08/2016 10:46:43.690 [226] [INFO] [Services.ping] - Method entered, total running requests: [28]
23/08/2016 10:46:43.705 [162] [INFO] [Services.ping] - Method entered, total running requests: [29]
....
....
23/08/2016 10:50:52.008 [445] [INFO] [Services.ping] - Method entered, total running requests: [398]
23/08/2016 10:50:52.008 [446] [INFO] [Services.ping] - Method entered, total running requests: [399]
23/08/2016 10:50:54.008 [447] [INFO] [Services.ping] - Method entered, total running requests: [400]
23/08/2016 10:51:31.397 [132] [INFO] [Services.ping] - Exiting method, total running requests: [399]
23/08/2016 10:51:31.475 [207] [INFO] [Services.ping] - Exiting method, total running requests: [398]
23/08/2016 10:51:31.475 [207] [INFO] [Services.ping] - Method entered, total running requests: [399]
....
....
But what I don't understand is how come the first 400 requests were not serviced at the same time by the service method? As you can see from the test result, the first request was serviced at 10:46:31.393, but the 400th request was serviced at 10:50:54.008, which is more than 4 minutes later.
If we look at access.log, we can see that all 500 requests were received by WebLogic between 10:46:31 and 10:46:35. So it seems that even though WebLogic received the requests within a very short period of time, it doesn't allocate threads and call the service method that fast.
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:31 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
....
....
10.204.133.176 - - [23/Aug/2016:10:46:35 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:35 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:35 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
10.204.133.176 - - [23/Aug/2016:10:46:35 +0800] "GET /Test/Services/Ping HTTP/1.1" 200 0
EDITED
Added work manager to define a minimum of 400 threads.
weblogic.xml
<wls:work-manager>
<wls:name>HighPriorityWorkManager</wls:name>
<wls:fair-share-request-class>
<wls:name>HighPriority</wls:name>
<wls:fair-share>100</wls:fair-share>
</wls:fair-share-request-class>
<wls:min-threads-constraint>
<wls:name>MinThreadsCount</wls:name>
<wls:count>400</wls:count>
</wls:min-threads-constraint>
</wls:work-manager>
web.xml
<init-param>
<param-name>wl-dispatch-policy</param-name>
<param-value>HighPriorityWorkManager</param-value>
</init-param>
That's how WebLogic scales its thread pools (they are "self-tuning"): it does not start 400 threads immediately, but instead increases the thread count gradually to maximize throughput.
https://docs.oracle.com/cd/E24329_01/web.1211/e24432/self_tuned.htm#CNFGD113

Logging stdout to gunicorn access log?

When I wrap my Flask application in gunicorn writing to stdout no longer seems to go anywhere (simple print statements don't appear). Is there someway to either capture the stdout into the gunicorn access log, or get a handle to the access log and write to it directly?
Use the logging module: set the handler's stream to stdout
import logging
import sys

app.logger.addHandler(logging.StreamHandler(sys.stdout))
app.logger.setLevel(logging.DEBUG)
app.logger.debug("Hello World")
Two solutions to this problem. They are probably longer than others, but ultimately they tap into how logging is done under the hood in Python.
1. set logging configuration in the Flask app
The official Flask documentation on logging works for gunicorn. https://flask.palletsprojects.com/en/1.1.x/logging/#basic-configuration
some example code to try out:
from logging.config import dictConfig
from flask import Flask

dictConfig(
    {
        "version": 1,
        "disable_existing_loggers": False,
        "formatters": {
            "default": {
                "format": "[%(asctime)s] [%(process)d] [%(levelname)s] in %(module)s: %(message)s",
                "datefmt": "%Y-%m-%d %H:%M:%S %z"
            }
        },
        "handlers": {
            "wsgi": {
                "class": "logging.StreamHandler",
                "stream": "ext://flask.logging.wsgi_errors_stream",
                "formatter": "default",
            }
        },
        "root": {"level": "DEBUG", "handlers": ["wsgi"]},
    }
)

app = Flask(__name__)

@app.route("/")
def hello():
    app.logger.debug("this is a DEBUG message")
    app.logger.info("this is an INFO message")
    app.logger.warning("this is a WARNING message")
    app.logger.error("this is an ERROR message")
    app.logger.critical("this is a CRITICAL message")
    return "hello world"
run with gunicorn
gunicorn -w 2 -b 127.0.0.1:5000 --access-logfile - app:app
request it using curl
curl http://127.0.0.1:5000
this would generate the following logs
[2020-09-04 11:24:43 +0200] [2724300] [INFO] Starting gunicorn 20.0.4
[2020-09-04 11:24:43 +0200] [2724300] [INFO] Listening at: http://127.0.0.1:5000 (2724300)
[2020-09-04 11:24:43 +0200] [2724300] [INFO] Using worker: sync
[2020-09-04 11:24:43 +0200] [2724311] [INFO] Booting worker with pid: 2724311
[2020-09-04 11:24:43 +0200] [2724322] [INFO] Booting worker with pid: 2724322
[2020-09-04 11:24:45 +0200] [2724322] [DEBUG] in flog: this is a DEBUG message
[2020-09-04 11:24:45 +0200] [2724322] [INFO] in flog: this is an INFO message
[2020-09-04 11:24:45 +0200] [2724322] [WARNING] in flog: this is a WARNING message
[2020-09-04 11:24:45 +0200] [2724322] [ERROR] in flog: this is an ERROR message
[2020-09-04 11:24:45 +0200] [2724322] [CRITICAL] in flog: this is a CRITICAL message
127.0.0.1 - - [04/Sep/2020:11:24:45 +0200] "GET / HTTP/1.1" 200 11 "-" "curl/7.68.0"
2. set logging configuration in Gunicorn
same application code as above but without the dictConfig({...}) section
create a logging.ini file
[loggers]
keys=root
[handlers]
keys=consoleHandler
[formatters]
keys=simpleFormatter
[logger_root]
level=DEBUG
handlers=consoleHandler
[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)
[formatter_simpleFormatter]
format=[%(asctime)s] [%(process)d] [%(levelname)s] - %(module)s - %(message)s
datefmt=%Y-%m-%d %H:%M:%S %z
run gunicorn with --log-config logging.ini option, i.e gunicorn -w 2 -b 127.0.0.1:5000 --access-logfile - --log-config logging.ini app:app
The solution from John mee works, but it duplicates log entries in the stdout from gunicorn.
I used this:
import logging
from flask import Flask

app = Flask(__name__)

if __name__ != '__main__':
    gunicorn_logger = logging.getLogger('gunicorn.error')
    app.logger.handlers = gunicorn_logger.handlers
    app.logger.setLevel(gunicorn_logger.level)
and I got this from: https://medium.com/@trstringer/logging-flask-and-gunicorn-the-manageable-way-2e6f0b8beb2f
You can redirect standard output to the error log file, which is enough for me.
Note that:
capture_output
--capture-output
False
Redirect stdout/stderr to specified file in errorlog
My config file gunicorn.config.py setting
accesslog = 'gunicorn.log'
errorlog = 'gunicorn.error.log'
capture_output = True
Then run with gunicorn app_py:myapp -c gunicorn.config.py
The equivalent command line would be
gunicorn app_py:myapp --error-logfile gunicorn.error.log --access-logfile gunicorn.log --capture-output