I'm intermittently, but fairly often, getting an unhandled exception in cursor.execute (django1.1/db/models/sql/query.py, line 2369), using psycopg2 with PostgreSQL.
It looks like the database drops the connection somehow, so Django crashes. There is a ticket for the unhandled exception in Django's bug tracker (#11015), but I'm more interested in why the database drops the connection than in why Django doesn't catch it.
This error never happens under Django's development server (which runs db requests serially, so there is no concurrency), which makes me suspect it has something to do with concurrent db requests.
I have no access to the PostgreSQL configuration or logs.
Any suggestions are welcome: perhaps some PostgreSQL tweaking, or thoughts on how to debug this issue.
upd: it looks like this question, Django + FastCGI - randomly raising OperationalError, addresses the same problem, but no solution was provided :-(
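In the meantime I've been considering a workaround along these lines. This is only a sketch: retry_once and its wiring are my own, not Django API; in a real project exc_type would be django.db.utils.OperationalError and reset would be django.db.connection.close.

```python
# Sketch: retry a query once when the server has dropped the connection.
# None of this is Django API; you would pass
# django.db.utils.OperationalError as exc_type and
# django.db.connection.close as reset.
def retry_once(fn, exc_type, reset):
    try:
        return fn()
    except exc_type:
        reset()  # discard the dead connection so the next call opens a fresh one
        return fn()
```

Usage would be something like retry_once(lambda: list(Entry.objects.all()), OperationalError, connection.close), where Entry is a placeholder model. It doesn't explain the dropped connections, but it keeps the view from crashing.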
In my case the problem was mainly with imports; at least that's what happened to me. I wrote my own solution after finding nothing on the web. Please check my blog post here: Simple Python Utility to check all Imports in your project
Of course, this will only help you get to the root of the original issue quickly; it is not the actual solution to your problem by itself.
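For the curious, a minimal version of such a checker might look like this. This is my own sketch, not the code from the blog post: it walks a project tree and tries to import every module, reporting the ones that fail.

```python
# Sketch (not the blog post's code): try to import every .py file under a
# project root and collect the modules that fail to import.
import importlib
import os

def check_imports(project_root):
    failures = []
    for dirpath, _dirnames, filenames in os.walk(project_root):
        for name in filenames:
            if not name.endswith(".py") or name == "__init__.py":
                continue
            rel = os.path.relpath(os.path.join(dirpath, name), project_root)
            module = rel[:-3].replace(os.sep, ".")
            try:
                importlib.import_module(module)
            except Exception as exc:  # record the failure, don't stop at the first one
                failures.append((module, repr(exc)))
    return failures
```

The project root needs to be on sys.path for the imports to resolve.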
I am using Django with Google Cloud Datastore via Djangae (https://djangae.org/).
I am new to this tech stack and am currently facing one strange issue.
When I persist data by calling Model.save(commit=True), the data gets saved into Cloud Datastore, but after 4-5 minutes it gets reverted.
To test further, I tried changing the value directly in the database, but that also got reverted after some time.
I am confused because I see no error or exception. I am using an atomic transaction and have wrapped my code in try/except to catch any exception, but no luck.
Could someone please advise me on how to debug further here?
I got a lead here: I was pointing multiple versions of my code at the datastore, and a few of them were stuck in an infinite loop hitting the same Kind. Killing all the stale versions made the DB consistent with my changes. I wanted to post an update so that others can get an idea if something similar happens.
I'm in the middle of migrating a Django project to Python 3 and updating several dependencies along the way.
Right now, I'm using Django 2.2.3.
After putting the code on the staging server I noticed that all responses are returned as bytestring literals:
b'<html>\n...'
This was very hard to narrow down because I first noticed it only on the staging server. Luckily, I found out that it has nothing to do with NGINX or Gunicorn; DEBUG=True is actually the culprit.
The question is: what does DEBUG=True trigger that is messing up the response?
It took me a several-hour train ride, but I finally found the root cause:
While going through my settings file looking for anything whose behavior changes drastically between DEBUG=False and DEBUG=True, django-pipeline's MinifyHTMLMiddleware caught my eye. Disabling it does indeed help.
An issue about it was opened back in May, but I couldn't find it via Google. Hopefully this answer will help someone out there.
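For anyone wanting the concrete change: disabling it just means removing (or commenting out) the middleware entry in settings.py. The surrounding entries below are illustrative; only the pipeline line matters.

```python
# settings.py (illustrative): comment out django-pipeline's HTML minifier,
# which was producing the bytestring responses in my setup.
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "django.middleware.common.CommonMiddleware",
    # "pipeline.middleware.MinifyHTMLMiddleware",  # disabled
]
```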
I am running a local development server and working on the project until it is ready for deployment. I checked the Postgres admin page and noticed that I have a lot of transactions running in the background.
I am not using the site or making queries, and I am wondering what is causing this. (I am also the only user.)
Why is this?
You'll need to find out what is going on yourself.
First, you can try SELECT * FROM pg_stat_activity (doc). It shows the last statement executed on each connection.
With some luck, you'll find out what is going on.
If that is not enough, use pg_stat_statements.
It is a little more complicated to install (add it to shared_preload_libraries in postgresql.conf, then CREATE EXTENSION pg_stat_statements), but you will not miss any of the queries.
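If you'd rather poll this from Python, here is a small psycopg2 sketch. The DSN is a placeholder, and note that older PostgreSQL versions expose current_query instead of the state/query columns used here.

```python
# Sketch: list non-idle backends via pg_stat_activity (PostgreSQL 9.2+ columns).
ACTIVITY_SQL = """
    SELECT pid, state, query_start, query
    FROM pg_stat_activity
    WHERE state <> 'idle'
    ORDER BY query_start
"""

def show_activity(dsn):
    import psycopg2  # imported here so the sketch loads even without the driver
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(ACTIVITY_SQL)
            for pid, state, started, query in cur.fetchall():
                print(pid, state, started, query)

# show_activity("dbname=mydb user=me")  # placeholder DSN
```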
I was adding and installing modules yesterday, and now I get this error: "The website encountered an unexpected error. Please try again later." I can't log in at all or access anything from the URL. I tried deleting the modules, but it still isn't working. Any suggestions?
To expose what's going wrong, add the following line to your settings.php:
$config['system.logging']['error_level'] = 'verbose';
What's happening is that the code is failing fairly deep into the execution, and it may not even be a syntax error. In my case, the MySQL user was not associated with the MySQL database, so the connection was failing as far as the application was concerned.
The modules are probably still registered in the database even though you deleted them from the code.
If you have access to drush, you can uninstall them via drush. If not, you need to change the entries in the database tables.
I have a small python script that pushes data to my django postgres db.
It imports the relevant model from a django project and uses the .save function to save the data to the db without issue.
Yesterday the system was running fine. I started and stopped both my django project and the python script many times over the course of the day, but never rebooted or powered off my computer, until the end of the day.
Today I have discovered that the data is no longer in the db!
This seems silly, as I have probably forgotten something obvious, but I thought that when save() is called on a model instance, the data is committed to the db.
So this answer is about where to start troubleshooting problems like this, since the question is quite vague and we don't have enough info to troubleshoot effectively.
If this ever happens again, the first thing to do is to turn on statement logging for PostgreSQL and look at the statements as they come in. This should show you begin and commit statements as well as the queries. It's virtually impossible to troubleshoot this sort of problem without access to the queries. Things to look for include missing COMMITs, and missing statements.
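Assuming you can edit postgresql.conf (the two settings below are the standard logging knobs; reload the server after changing them), a minimal statement-logging setup looks like:

```ini
# postgresql.conf: log every statement, including BEGIN/COMMIT
log_statement = 'all'
# prefix each line with timestamp, backend pid, user and database
log_line_prefix = '%m [%p] %u@%d '
```

With this in place, a missing COMMIT shows up as a session whose statements simply stop after the last query.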
After that, the next thing to do is to look at the circumstances under which your computer rebooted. Is it possible it did so before an expected commit? Or did it lose power and not have the transaction log flushed to disk in time?
Those two should rule out just about all possible causes on the db side in a development environment. In a production environment running old versions of PostgreSQL, you also want to verify that autovacuum is running properly and that you aren't getting warnings about xid wraparound. In newer versions this is not a problem, because PostgreSQL will refuse to accept queries when approaching xid wraparound.