django flush query caches

I have 2 instances of django application.
One is frontend - a normal wsgi app.
Another is backend - a twisted daemon running with ./manage.py rundaemon.
They share django settings and models.
Now, when one of them runs a query, the result gets cached.
And when the other updates the database, the cache is not flushed.
That's obviously because they have no clue about the other instance accessing the same database.
Is there a way to disable caching, or to flush it manually and force the query to be re-executed?
(I guess the admin app does flush query caches somehow)

I'm not sure if this is the best solution, but it worked for me when I faced the same problem.
from django.db import connection
connection.close()
The connection will automatically get reopened the next time it is needed.
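For a long-running process like the rundaemon loop, here is a minimal sketch of where you might call this; MyModel, the app path, and the polling function are illustrative, not from the question:

from django.db import connection
from myapp.models import MyModel  # hypothetical app and model

def poll_database():
    # Close the worker's connection before each polling pass. Django
    # reopens it lazily on the next query, so the queryset below sees
    # the current database state instead of the snapshot held open by
    # the previous connection/transaction.
    connection.close()
    for obj in MyModel.objects.all():
        print(obj)  # do the real work here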

Related

About scaling the database in docker swarm

So I created a Docker Swarm with Django, Apache, and PostgreSQL all running in an overlay network called internal_network. When I scale the Apache and Django services, it works perfectly fine, except that for Django I have to wait a little longer for it to do the migration. But when I scale the PostgreSQL service, Django just breaks: the first time I go to my Django admin page it's okay, then I reload the page and Django prints out
relation "django_session" does not exist
LINE 1: ...ession_data", "django_session"."expire_date" FROM "django_se...
If I reload again, the Django admin is back to normal, and this cycle keeps looping as I reload the page.
I think Django is getting confused by having 2 PostgreSQL tasks running on my swarm, so it just won't work properly.
Okay, I just found out that scaling the database takes more steps than that: each extra PostgreSQL task is an independent server with its own (empty) data volume, and the swarm load-balances connections between them, which is why every other request fails. It also depends on the database technology and how it handles replicas; it's not as simple as running another worker node or container.

Can django background tasks be deleted directly from the database?

We use django-background-tasks in our application, and I have had no issues deleting tasks from the admin page when running the app on my machine. But when I tried to delete some tasks from the production version (hosted on GCP), I kept getting internal server (500) errors. What could be the reason for this?
Alternatively, is it advisable/safe to delete the tasks directly from the background_task table in the database?
I deleted the tasks from the background_task table and it hasn't caused any issues.
The 500 could be due to some access issue (maybe some settings have to be configured to allow table modifications via the admin page over ssh?).
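If the admin keeps returning 500s, you can also do the same delete through the ORM instead of raw SQL. A sketch assuming django-background-tasks' Task model; the task_name value is an example:

from background_task.models import Task

# Delete every queued run of one registered task; use the dotted name
# you see in the admin or in the background_task table.
Task.objects.filter(task_name="myapp.tasks.send_report").delete()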

Using DB without modifying it - Django

I need to connect to a PostgreSQL server in my Django project, but I'm under strict instructions NOT to make any modifications to the database or tables.
If I'm not wrong, simply adding this DB to settings.py and running syncdb would create and add some Django-specific tables to the database.
How can I get around this issue? If I add another (local) sqlite DB to settings.py as default and my remote PostgreSQL server separately, can I be sure Django won't create any tables on my server? I couldn't find this info in the documentation.
Thanks
Configure two databases in settings.py, and create a database router to route your queries to the appropriate database.
Route all the Django admin / users stuff to your sqlite database (make it the default database, and make sure that is indeed the one your router returns by default), and single out your application queries that need to go to the Postgres database.
Routers also have a method to locate a DB for writes and one for reads, so you can use this as a failsafe: just make sure db_for_write never returns your Postgres database, and you're good to go.
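A minimal router sketch along those lines; "legacy" (the alias of the read-only Postgres database) and myapp (the app whose models live there) are placeholders for your own names:

class LegacyRouter(object):
    """Reads for myapp go to Postgres; everything else stays on default."""

    def db_for_read(self, model, **hints):
        if model._meta.app_label == "myapp":
            return "legacy"
        return "default"

    def db_for_write(self, model, **hints):
        # Failsafe: writes never reach the read-only Postgres server.
        return "default"

    def allow_syncdb(self, db, model):
        # Keep syncdb from creating Django's tables on "legacy".
        # (On Django >= 1.7 this hook is called allow_migrate.)
        return db == "default"

# In settings.py, point DATABASE_ROUTERS at wherever you put the class:
# DATABASE_ROUTERS = ["myproject.routers.LegacyRouter"]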

Tricky issue with django sessions: sometimes session information is erased

I have a weird bug with django sessions in my app: sometimes (about 10 times per ~20,000 requests per day) the session information for a user is erased. I traced it via log files: at page A there is information in the user's session, then he submits the form, and at the next page his session is empty. I tried two types of storage: memcached+db and db only, and this problem occurs with both of them. I tried to reproduce these scenarios, but everything works as expected; as I said, it happens very rarely. I also checked that this problem exists for different users, and for them it doesn't reproduce every time. I don't have any ideas how to catch the root cause, and I don't know what else to post here as a description. If someone has any ideas, please let me know. If it is important, I'm running my app with django 1.2 + FastCGI.
Thanks!
UPD: I checked and saw that the user's session key does not change between two sequential requests; on the first request there is the actual session state, and on the second the session variables are replaced with empty ones.
As a way to debug this problem, I would subclass the standard Django session middleware (or whatever you're currently using):
django.contrib.sessions.middleware.SessionMiddleware
and wrap process_request and (probably more importantly) process_response in some extra logging. Then install your subclassed session middleware in the MIDDLEWARE_CLASSES, rather than the stock Django one.
You could also validate that session.save() has actually committed its changes by attempting to read it back. It could be that the problem lies in session-state serialisation, and it's failing on a particular key or value that you're attempting to store.
None of this will fix your problem, but it might help you to establish what's going on.
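A sketch of that logging subclass, written against the old-style middleware API that Django 1.2 uses; the logger name is arbitrary, and note that this logs session contents, so keep it out of production logs:

import logging
from django.contrib.sessions.middleware import SessionMiddleware

logger = logging.getLogger("session_debug")

class LoggingSessionMiddleware(SessionMiddleware):
    def process_request(self, request):
        super(LoggingSessionMiddleware, self).process_request(request)
        logger.info("req %s key=%s items=%r", request.path,
                    request.session.session_key,
                    dict(request.session.items()))

    def process_response(self, request, response):
        response = super(LoggingSessionMiddleware, self).process_response(
            request, response)
        if hasattr(request, "session"):
            logger.info("resp %s key=%s modified=%s", request.path,
                        request.session.session_key,
                        request.session.modified)
        return response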
As @Steve Mayne mentioned, it would be good to do some logging in the sessions middleware and the session model's save method. That's something I'd start with.
In addition I'd like to say that this could be a database-related issue, especially if you're using the MySQL backend for sessions. You can check the log for database locks and other concurrency issues. I had to deal with similar issues before, and the solution was optimization and better performance.
If you have some specific application middleware, you can check for functionality that interferes with Django sessions. Such parallel operations can cause problems, if not implemented properly.
Another thing I would do is to upgrade to the latest stable release of Django and migrate to a mod_wsgi setup.

session issue with django+apache+mod_wsgi

I've written a django application and put it on a CentOS server. It is definitely okay when I use the django development web server:
I start it with "python ./manage.py runserver" and access the server from a browser on another computer. I can sign in once and access all the pages without issues.
However, when I run it with apache+mod_wsgi, I found I have to log in with my user and password again and again. I think there may be some problem with the session middleware, so how can I find the root cause and fix it?
There are a couple of different options for this.
In order of likelihood (imho):
The session backend uses the cache system to store the sessions and you're using the locmem cache backend
The session backend isn't storing the cookies (secure cookies enabled? cookie timeouts? incorrect date on the server?)
The session middleware might not be loaded (custom settings for production server?)
Storing the session in the cache is only a good solution if you use memcached as the cache backend. So if you're storing the sessions in cache, make sure you use memcache :)
Either way, check if SESSION_ENGINE is set to django.contrib.sessions.backends.db
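For reference, a settings.py sketch of the checks above; the memcached LOCATION is an example address, and the CACHES dict syntax assumes Django 1.3+:

# Database-backed sessions survive restarts, unlike the locmem cache.
SESSION_ENGINE = "django.contrib.sessions.backends.db"

# Cookie settings worth double-checking on the production vhost:
SESSION_COOKIE_SECURE = False   # True silently drops the cookie over plain http
SESSION_COOKIE_DOMAIN = None    # must match the host users actually browse

# If you do keep sessions in the cache, back it with memcached, not locmem:
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.MemcachedCache",
        "LOCATION": "127.0.0.1:11211",
    }
}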