Server closed the connection unexpectedly sometimes - django

Project background:
I have a Django project that I run with gunicorn. Inside the project I use python-socketio to process some events.
PostgreSQL is configured like this:
DATABASES['default'] = {
    'ENGINE': 'django.db.backends.postgresql_psycopg2',
    'NAME': xxx,
    'USER': xxx,
    'PASSWORD': xxx,
    'HOST': 'xxx',
    'PORT': '5432',
    'CONN_MAX_AGE': 60 * 10,  # seconds
    'OPTIONS': {
        'connect_timeout': 20,
    },
}
python-socketio keeps a worker thread to process events, so that thread holds its own PostgreSQL connection. Sometimes it throws a database connection error like the one below, and I don't understand why the connection isn't closed cleanly:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/engineio/server.py", line 520, in _trigger_event
return self.handlers[event](*args)
File "/usr/local/lib/python2.7/site-packages/socketio/server.py", line 590, in _handle_eio_message
self._handle_event(sid, pkt.namespace, pkt.id, pkt.data)
File "/usr/local/lib/python2.7/site-packages/socketio/server.py", line 526, in _handle_event
self._handle_event_internal(self, sid, data, namespace, id)
File "/usr/local/lib/python2.7/site-packages/socketio/server.py", line 529, in _handle_event_internal
r = server._trigger_event(data[0], namespace, sid, *data[1:])
File "/usr/local/lib/python2.7/site-packages/socketio/server.py", line 558, in _trigger_event
return self.handlers[namespace][event](*args)
File "/usr/src/app/async_worker/controllers/server/event_handler.py", line 61, in on_jira_handle_from_client_retrieve
ticket = Ticket.get_ticket_by_id(ticket_id)
File "/usr/src/app/review/models/ticket.py", line 144, in get_ticket_by_id
return Ticket.objects.filter(id=ticket_id).first()
File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py", line 564, in first
objects = list((self if self.ordered else self.order_by('pk'))[:1])
File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py", line 250, in __iter__
self._fetch_all()
File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py", line 1118, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py", line 53, in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch)
File "/usr/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 899, in execute_sql
raise original_exception
OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
This is what I get in the PostgreSQL log file:
LOG: could not receive data from client: Connection reset by peer
Any ideas about the problem? My understanding is that once a connection is older than CONN_MAX_AGE it should be closed and set to None, but in practice the connection object is not None, it is just already closed.

The error message you see is from the PostgreSQL client library.
Now both the PostgreSQL server and the client complain that the other side suddenly went away, so this is almost certainly a network problem.
My money is on an ill-configured firewall that drops idle connections after a while. I've always wondered if there is any valid use case to have a firewall do this...
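If the firewall theory holds, one mitigation is to enable client-side TCP keepalives so the connection never looks idle to the firewall. A minimal sketch for the settings above, assuming the psycopg2 backend (these are standard libpq keepalive parameters; the values are only illustrative):
DATABASES['default']['OPTIONS'] = {
    'connect_timeout': 20,
    # TCP keepalives so a quiet connection is not silently dropped
    'keepalives': 1,            # enable keepalive probes
    'keepalives_idle': 60,      # seconds of idleness before the first probe
    'keepalives_interval': 10,  # seconds between probes
    'keepalives_count': 5,      # failed probes before the connection is declared dead
}
Lowering CONN_MAX_AGE below the firewall's idle timeout is another option, but note that Django only enforces it between requests, not inside a long-lived worker thread.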

Related

How to recover from Django Channels connection limit?

I'm running a server on Heroku where I have intermittent channel layer communication between instances of AsyncConsumer. Running Heroku locally, I have no problem with Redis channel connections since I can have as many of them as I want, but once I upload my server to Heroku, it gives me a limit of 20 Redis connections. If I use more than that, my server errors out, so I tried lowering the expiry time so that inactive Redis connections would close. But if a Redis connection expires and I then try to use the channel name with self.channel_layer.send(), I get the error below. How do I recover from this error without having to make external calls to create another AsyncConsumer instance?
ERROR Exception inside application: Reader at end of file
File "/app/.heroku/python/lib/python3.6/site-packages/channels/consumer.py", line 59, in __call__
[receive, self.channel_receive], self.dispatch
File "/app/.heroku/python/lib/python3.6/site-packages/channels/utils.py", line 51, in await_many_dispatch
await dispatch(result)
File "/app/.heroku/python/lib/python3.6/site-packages/channels/consumer.py", line 73, in dispatch
await handler(message)
File "./myapp/webhook.py", line 133, in http_request
await self.channel_layer.send(userDB.backEndChannelName,{"type": "device.query"})
File "/app/.heroku/python/lib/python3.6/site-packages/channels_redis/core.py", line 296, in send
if await connection.llen(channel_key) >= self.get_capacity(channel):
File "/app/.heroku/python/lib/python3.6/site-packages/aioredis/commands/list.py", line 70, in llen
return self.execute(b'LLEN', key)
File "/app/.heroku/python/lib/python3.6/site-packages/aioredis/commands/__init__.py", line 51, in execute
return self._pool_or_conn.execute(command, *args, **kwargs)
File "/app/.heroku/python/lib/python3.6/site-packages/aioredis/connection.py", line 322, in execute
raise ConnectionClosedError(msg)
Reader at end of file
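One way to recover, sketched here as an assumption rather than taken from the thread: treat a connection-closed failure during send() as a sign that the stored channel name is stale, catch it, and fall back instead of crashing the consumer. This assumes channels_redis with aioredis 1.x, where the failure surfaces as aioredis.ConnectionClosedError; send_or_flag_stale is a hypothetical helper name.
from aioredis import ConnectionClosedError  # aioredis 1.x, as in the traceback above

async def send_or_flag_stale(channel_layer, channel_name, message):
    # Returns False when the underlying redis connection has gone away, so the
    # caller can discard the stale channel name (e.g. clear
    # userDB.backEndChannelName) and wait for the consumer to re-register.
    try:
        await channel_layer.send(channel_name, message)
        return True
    except ConnectionClosedError:
        return False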

Issue with django/postgresql when django started from cron

Edit to ask who thought that this question had anything to do with the "possibly related" one.
I have a fairly simple Django (1.11) project using rest_framework that works fine when I start it from the command line on CentOS, typing
nohup python manage.py runserver 0.0.0.0:4448 &
It connects to a PostgreSQL database, with
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'my_database',
        'USER': 'my_user',
        #'PASSWORD': 'mypassword',
        'HOST': '127.0.0.1',
        'PORT': '5432',
    }
}
in my settings.py file. However, if I set up the runserver command to run at boot time from cron, I get the following when I send a request to the application:
... lot of stuff
django.db.utils.OperationalError: could not connect to server: Connection
refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
and in my pg_hba file I have
# IPv4 local connections:
host    all    all        127.0.0.1/32     trust    # local host
host    all    all        10.2.11.53/32    trust
host    all    my_user    0.0.0.0/0        trust
and also I have
netstat -plunt | grep post
tcp    0    0    10.2.11.53:5432    0.0.0.0:*    LISTEN    867/postmaster
tcp    0    0    127.0.0.1:5432     0.0.0.0:*    LISTEN    867/postmaster
Any suggestion?
Thanks,
a
PS The full traceback:
Unhandled exception in thread started by <function wrapper at 0x1a11230>
Performing system checks...
[<RegexURLPattern batch-batch-done ^batch/batch-done/$>, <RegexURLPattern batch-load-urls ^batch/load-urls/$>, <RegexURLPattern batch-request-batch ^batch/request-batch/$>, <RegexURLPattern batch-schedule-job ^batch/schedule-job/$>]
System check identified no issues (0 silenced).
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/django/utils/autoreload.py", line 227, in wrapper
fn(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 128, in inner_run
self.check_migrations()
File "/usr/lib/python2.7/site-packages/django/core/management/base.py", line 422, in check_migrations
executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
File "/usr/lib/python2.7/site-packages/django/db/migrations/executor.py", line 20, in __init__
self.loader = MigrationLoader(self.connection)
File "/usr/lib/python2.7/site-packages/django/db/migrations/loader.py", line 52, in __init__
self.build_graph()
File "/usr/lib/python2.7/site-packages/django/db/migrations/loader.py", line 209, in build_graph
self.applied_migrations = recorder.applied_migrations()
File "/usr/lib/python2.7/site-packages/django/db/migrations/recorder.py", line 65, in applied_migrations
self.ensure_schema()
File "/usr/lib/python2.7/site-packages/django/db/migrations/recorder.py", line 52, in ensure_schema
if self.Migration._meta.db_table in self.connection.introspection.table_names(self.connection.cursor()):
File "/usr/lib/python2.7/site-packages/django/db/backends/base/base.py", line 254, in cursor
return self._cursor()
File "/usr/lib/python2.7/site-packages/django/db/backends/base/base.py", line 229, in _cursor
self.ensure_connection()
File "/usr/lib/python2.7/site-packages/django/db/backends/base/base.py", line 213, in ensure_connection
self.connect()
File "/usr/lib/python2.7/site-packages/django/db/utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/usr/lib/python2.7/site-packages/django/db/backends/base/base.py", line 213, in ensure_connection
self.connect()
File "/usr/lib/python2.7/site-packages/django/db/backends/base/base.py", line 189, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/lib/python2.7/site-packages/django/db/backends/postgresql/base.py", line 176, in get_new_connection
connection = Database.connect(**conn_params)
File "/usr/lib64/python2.7/site-packages/psycopg2/__init__.py", line 130, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
Could you post the full traceback, as well as the cron job itself?
Also, if I understand your question correctly, you are trying to use cron to start your server on boot? If this is the case, you would be better served using something like supervisord to manage your server processes IMO.
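For reference, a supervisord program entry for the command in the question might look roughly like this (a sketch with placeholder paths, user and log file, not a tested config):
[program:myproject]
command=/usr/bin/python /path/to/project/manage.py runserver 0.0.0.0:4448
directory=/path/to/project
user=myuser
autostart=true
autorestart=true
startretries=10
stdout_logfile=/var/log/myproject.log
redirect_stderr=true
This also sidesteps one likely cause of the "Connection refused" above: a boot-time cron job can fire before PostgreSQL is accepting connections, whereas a process manager can be told to keep retrying.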

SSL SYSCALL error: Bad file descriptor on Heroku with postgres and Celery

I've been using Celery successfully with a Django site on Heroku but it's just started generating the error below, which stops it running. It looks like it's having trouble with postgres, but I'm stumped as to how to fix it, given it's Celery rather than my code that's having the problem (I assume...).
I'm using CloudAMQP as a broker, and my Django settings include:
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
Here's the traceback from the Heroku logs:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.5/site-packages/kombu/utils/__init__.py", line 323, in __get__
return obj.__dict__[self.__name__]
KeyError: 'scheduler'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
psycopg2.OperationalError: SSL SYSCALL error: Bad file descriptor
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.5/site-packages/billiard/process.py", line 292, in _bootstrap
self.run()
File "/app/.heroku/python/lib/python3.5/site-packages/celery/beat.py", line 553, in run
self.service.start(embedded_process=True)
File "/app/.heroku/python/lib/python3.5/site-packages/celery/beat.py", line 470, in start
humanize_seconds(self.scheduler.max_interval))
File "/app/.heroku/python/lib/python3.5/site-packages/kombu/utils/__init__.py", line 325, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "/app/.heroku/python/lib/python3.5/site-packages/celery/beat.py", line 512, in scheduler
return self.get_scheduler()
File "/app/.heroku/python/lib/python3.5/site-packages/celery/beat.py", line 507, in get_scheduler
lazy=lazy)
File "/app/.heroku/python/lib/python3.5/site-packages/celery/utils/imports.py", line 53, in instantiate
return symbol_by_name(name)(*args, **kwargs)
File "/app/.heroku/python/lib/python3.5/site-packages/djcelery/schedulers.py", line 151, in __init__
Scheduler.__init__(self, *args, **kwargs)
File "/app/.heroku/python/lib/python3.5/site-packages/celery/beat.py", line 185, in __init__
self.setup_schedule()
File "/app/.heroku/python/lib/python3.5/site-packages/djcelery/schedulers.py", line 158, in setup_schedule
self.install_default_entries(self.schedule)
File "/app/.heroku/python/lib/python3.5/site-packages/djcelery/schedulers.py", line 251, in schedule
self._schedule = self.all_as_schedule()
File "/app/.heroku/python/lib/python3.5/site-packages/djcelery/schedulers.py", line 164, in all_as_schedule
for model in self.Model.objects.enabled():
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/models/query.py", line 258, in __iter__
self._fetch_all()
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/models/query.py", line 1074, in _fetch_all
self._result_cache = list(self.iterator())
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/models/query.py", line 52, in __iter__
results = compiler.execute_sql()
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/models/sql/compiler.py", line 848, in execute_sql
cursor.execute(sql, params)
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/app/.heroku/python/lib/python3.5/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/app/.heroku/python/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
django.db.utils.OperationalError: SSL SYSCALL error: Bad file descriptor
I've resolved the issue now... there was a line of my Django code which had caused an Internal Server Error in the past -- I think, early on in Django starting up, it was trying to access a model before the migrations that created the model had run.
I'd resolved that but noticed these "SSL SYSCALL error"s started about the same time. So I removed that line of code, and Celery has started up again.
It could be coincidence. And I don't understand why this fixed things.
Ideally I'd still like to understand what the error above actually means so I'd have a better chance of fixing such a thing in the future.
I had the same error and I solved it.
In my case the problem was a custom AppConfig in one of my Django apps.
my folder
my_django_app/
    __init__.py
    apps.py
    ...
I have a file my_django_app/__init__.py like this
default_app_config="my_django_app.apps.MyAppConfig"
and the file my_django_app/apps.py like this
from django.apps import AppConfig

class ChallengeConfig(AppConfig):
    name = 'appConfig'
    verbose_name = "djangoAppConfig"

    def ready(self):
        ...  # custom logic
After removing the line default_app_config="my_django_app.apps.MyAppConfig", Celery works again.
If you have something like this, remove your custom AppConfig and use a Celery periodic task instead (a sketch follows the requirements list below). django_celery_beat does something strange with AppConfig, so a custom AppConfig triggers the problem.
My python requirements:
Django==1.11.29
celery==4.4.7
django_celery_beat==1.6.0
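A minimal sketch of the suggested replacement (task name and body are placeholders, not code from this answer): move whatever ready() was doing with the ORM into a task, then schedule it with django_celery_beat so it runs inside a worker that owns a live database connection instead of at import time.
from celery import shared_task

@shared_task
def run_former_ready_logic():
    # ...the ORM work that used to live in ChallengeConfig.ready() goes here...
    pass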
I know this was asked years ago, but this was about the only question out there which related to my problem.
In my case, it was down to CONN_MAX_AGE being set to 600. I reduced it to 0, which goes back to the default behaviour of closing the connection at the end of each request (note that unlimited persistent connections would be CONN_MAX_AGE = None, not 0).
From the docs:
Persistent connections avoid the overhead of re-establishing a connection to the database in each request. They’re controlled by the CONN_MAX_AGE parameter which defines the maximum lifetime of a connection. It can be set independently for each database.
The default value is 0, preserving the historical behavior of closing the database connection at the end of each request. To enable persistent connections, set CONN_MAX_AGE to a positive integer of seconds. For unlimited persistent connections, set it to None.
Django opens a connection to the database when it first makes a database query. It keeps this connection open and reuses it in subsequent requests. Django closes the connection once it exceeds the maximum age defined by CONN_MAX_AGE or when it isn’t usable any longer.
In detail, Django automatically opens a connection to the database whenever it needs one and doesn’t have one already — either because this is the first connection, or because the previous connection was closed.
At the beginning of each request, Django closes the connection if it has reached its maximum age. If your database terminates idle connections after some time, you should set CONN_MAX_AGE to a lower value, so that Django doesn’t attempt to use a connection that has been terminated by the database server. (This problem may only affect very low traffic sites.)
At the end of each request, Django closes the connection if it has reached its maximum age or if it is in an unrecoverable error state. If any database errors have occurred while processing the requests, Django checks whether the connection still works, and closes it if it doesn’t. Thus, database errors affect at most one request; if the connection becomes unusable, the next request gets a fresh connection.
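The catch for the setups in this thread (Celery beat, socket.io worker threads) is that Django only runs this close-if-too-old check via the request_started/request_finished signals, which never fire outside the request cycle. A minimal sketch of the usual workaround in a long-running worker (function and argument names are illustrative):
from django.db import close_old_connections

def handle_background_event(ticket_id):
    # Drop connections that are past CONN_MAX_AGE or flagged unusable after an
    # error; Django then opens a fresh one on the next ORM query.
    close_old_connections()
    ...  # ORM work goes here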

GAE and Highwinds FTP Server

I'm running a GAE Python application with 3 modules. One module runs every half hour and is in charge of fetching a file from an FTP server, processing it, and saving it to Cloud Storage.
Recently the communication between my application and the FTP server started failing more and more often, until it became unusable.
The FTP server is Highwinds; they made some changes recently and asked me for things like ping and traceroute output from the instance running the code to their server, but I'm not able to provide that given the GAE restrictions.
I was able to overcome some errors by just repeating the same operation multiple times.
I'm including some error traces at the end of this message.
I would appreciate some help troubleshooting this issue.
files = ftp_service.nlst()
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/ftplib.py", line 509, in nlst
self.retrlines(cmd, files.append)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/ftplib.py", line 432, in retrlines
conn = self.transfercmd(cmd)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/ftplib.py", line 371, in transfercmd
return self.ntransfercmd(cmd, rest)[0]
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/ftplib.py", line 330, in ntransfercmd
conn = socket.create_connection((host, port), self.timeout)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/socket.py", line 569, in create_connection
raise err
error: [Errno 110] Connection timed out
ftp_service.retrbinary("RETR %s" % file_path, callback=handle_binary)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/ftplib.py", line 409, in retrbinary
conn = self.transfercmd(cmd, rest)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/ftplib.py", line 371, in transfercmd
return self.ntransfercmd(cmd, rest)[0]
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/ftplib.py", line 334, in ntransfercmd
resp = self.sendcmd(cmd)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/ftplib.py", line 244, in sendcmd
return self.getresp()
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/ftplib.py", line 217, in getresp
raise error_temp, resp
error_temp: 425 Rejected data connection from foreign address 74.125.183.23:53274.
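The "repeating the same operation" workaround mentioned above can be wrapped in a small retry helper. A sketch using the same ftplib calls as the traces (attempt counts and delays are illustrative, and it will not help if the 425 error is really the server rejecting GAE's outbound addresses):
import ftplib
import socket
import time

def retrbinary_with_retry(ftp_service, file_path, handle_binary, attempts=3, delay=5):
    for attempt in range(1, attempts + 1):
        try:
            ftp_service.retrbinary("RETR %s" % file_path, callback=handle_binary)
            return
        except (ftplib.error_temp, socket.error):
            if attempt == attempts:
                raise  # give up after the last attempt
            time.sleep(delay)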

Django Database Improperly Configured when function called outside of Django

I'm trying to call a python function that makes some queries into my django database from GNU mailman.
When mailman tries to deliver a message, it imports my python script. It later calls a function in my script to modify the message object. The error I'm getting is:
ImproperlyConfigured: settings.DATABASES is improperly configured. Please supply the ENGINE value. Check settings documentation \
for more details.
Here's how I'm configuring the settings, at the very top of my file:
from django.core.management import setup_environ
from mysite import settings
setup_environ(settings)
When I run python manage.py syncdb, it seems to create the database fine. Here's my database configuration:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',  # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
        'NAME': 'django_db',                   # Or path to database file if using sqlite3.
        'USER': 'root',                        # Not used with sqlite3.
        'PASSWORD': 'root',                    # Not used with sqlite3.
        'HOST': '',                            # Set to empty string for localhost. Not used with sqlite3.
        'PORT': '',                            # Set to empty string for default. Not used with sqlite3.
    }
}
Further, I've commented out the entirety of my function such that it now looks like:
def f():
    return
So I don't think this has to do with the function call.
Further, I've tested the setup_environ lines in the python console and everything works as expected.
Further, when I restart GNU mailman, I believe it has to load all its scripts, which means it necessarily has to import my file. This means that these "setup_environ" lines run when I restart mailman. And it's fine -- I get no errors.
It's only when GNU mailman tries to deliver a message that I have problems.
So I'm pretty stumped. I do run the mailman restart command as sudo with additional PYTHONPATH and DJANGO_SETTINGS_MODULE environment variables, but I've also manually added the relevant entries to my sys.path and os.environ dict, and that doesn't fix the problem either. Besides, the error doesn't suggest it's a problem with the path or with being unable to find the settings module.
The full stack trace is:
Jun 04 12:06:11 2012 (5249) Uncaught runner exception: settings.DATABASES is improperly configured. Please supply the ENGINE val\
ue. Check settings documentation for more details.
Jun 04 12:06:11 2012 (5249) Traceback (most recent call last):
File "/var/lib/mailman/Mailman/Queue/Runner.py", line 100, in _oneloop
msg, msgdata = self._switchboard.dequeue(filebase)
File "/var/lib/mailman/Mailman/Queue/Switchboard.py", line 173, in dequeue
redirect_list(msg, data)
File "/home/ubuntu/djcode/mysite/mysite/apps/mailman/redirect.py", line 32, in redirect_list
File "/home/ubuntu/djcode/mysite/mysite/apps/mailman/redirect.py", line 45, in _get_real_listname
from mysite.apps.common.models import CustomUser
File "/home/ubuntu/djcode/mysite/mysite/apps/common/custom_user_manager.py", line 54, in get
email_object = Email.objects.get(email=kwargs['email'])
File "/usr/local/lib/python2.6/dist-packages/django/db/models/manager.py", line 131, in get
return self.get_query_set().get(*args, **kwargs)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 361, in get
num = len(clone)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 85, in __len__
self._result_cache = list(self.iterator())
File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 291, in iterator
for row in compiler.results_iter():
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/compiler.py", line 763, in results_iter
for rows in self.execute_sql(MULTI):
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/compiler.py", line 808, in execute_sql
sql, params = self.as_sql()
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/compiler.py", line 71, in as_sql
out_cols = self.get_columns(with_col_aliases)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/compiler.py", line 218, in get_columns
col_aliases)
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/compiler.py", line 306, in get_default_columns
r = '%s.%s' % (qn(alias), qn2(field.column))
File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/compiler.py", line 49, in quote_name_unless_alias
r = self.connection.ops.quote_name(name)
File "/usr/local/lib/python2.6/dist-packages/django/db/backends/dummy/base.py", line 15, in complain
raise ImproperlyConfigured("settings.DATABASES is improperly configured. "
ImproperlyConfigured: settings.DATABASES is improperly configured. Please supply the ENGINE value. Check settings documentation \
for more details.
It seems you have not specified anything in your settings.py file's DATABASES dictionary. Specify the following to connect to the database successfully (as per the docs):
1) ENGINE -- Either 'django.db.backends.postgresql_psycopg2', 'django.db.backends.mysql', 'django.db.backends.sqlite3' or 'django.db.backends.oracle'. Other backends are also available.
2) NAME -- The name of your database. If you're using SQLite, the database will be a file on your computer; in that case, NAME should be the full absolute path, including filename, of that file. If the file doesn't exist, it will automatically be created when you synchronize the database for the first time (see below).
3) USER -- Your database username (not used for SQLite).
4) PASSWORD -- Your database password (not used for SQLite).
5) HOST -- The host your database is on. Leave this as an empty string if your database server is on the same physical machine (not used for SQLite).
(Or see the Django documentation on the DATABASES setting.)
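Since the script here is imported by Mailman rather than run through manage.py, it also has to bootstrap Django itself. On the old version in the question that is what setup_environ() does; on modern Django (1.7+) the equivalent standalone setup is roughly this sketch (mysite.settings is the settings module from the question):
import os
import django

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
django.setup()

# Import models only after setup(), so settings (including DATABASES) are loaded.
from mysite.apps.common.models import CustomUser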