Can django background tasks be deleted directly from the database?

We use django-background-tasks in our application, and I have had no issues deleting tasks from the admin page when running the app on my own machine. But when I try to delete some tasks from the production version (hosted on GCP), I keep getting internal server (500) errors. What could be the reason for this?
Alternatively, is it advisable/safe to delete the tasks directly from the background_task table in the database?

I deleted the tasks directly from the background_task table and it hasn't caused any issues.
The 500 error could be due to an access issue (maybe some settings have to be configured to allow table modifications via the admin page over ssh?).
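If you do delete directly, going through the ORM rather than raw SQL keeps things consistent. A minimal sketch, assuming django-background-tasks' standard Task and CompletedTask models (the task_name value is a placeholder):

from background_task.models import Task, CompletedTask

# Remove pending tasks registered under a given task name (placeholder name)
Task.objects.filter(task_name='myapp.tasks.my_task').delete()

# Optionally clear out the records of already-completed tasks
CompletedTask.objects.all().delete()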

Related

Django transfer database to different Django project without messing up the logic

I built a Django project with a few models. Now I have created a second server with the same setup; this one is meant to be the deployment server. The databases are separate from the dev server's.
Can you tell me whether I can simply copy the databases from the dev server to the deployment server, or whether that will mess up the Django logic? I mean including the user models, permissions, etc.
The tables I created myself are no problem to transfer to the new server. However, I am wondering if Django gets confused when I also transfer something like the auth_user table.
Should this work, since I also copied the backend logic as well?
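For reference, a common alternative to copying database files is Django's own dumpdata/loaddata, which moves data between identical schemas, auth_user included. A minimal sketch, assuming both servers have run the same migrations (data.json is a placeholder filename):

from django.core.management import call_command

# On the dev server: dump everything except contenttypes and permissions,
# which Django recreates from the code itself
with open('data.json', 'w') as f:
    call_command('dumpdata', '--natural-foreign', '--natural-primary',
                 '--exclude', 'contenttypes', '--exclude', 'auth.Permission',
                 stdout=f)

# On the deployment server, after running migrate:
call_command('loaddata', 'data.json')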

About scaling the database in docker swarm

So I created a Docker Swarm with Django, Apache, and PostgreSQL all running in an overlay network called internal_network. When I scale the Apache and Django services, it works perfectly fine, except that for Django I have to wait a little longer for it to run its migrations. But when I scale the PostgreSQL service, Django breaks: the first time I go to my Django admin page it's okay, then I reload the page and Django prints out
relation "django_session" does not exist
LINE 1: ...ession_data", "django_session"."expire_date" FROM "django_se...
If I reload again, the Django admin is back to normal, and this cycle keeps repeating as I reload the page.
I think Django is getting confused by having 2 PostgreSQL tasks running on my swarm, so it just won't work properly.
Okay, I just found out that scaling the database takes more steps than scaling a stateless service. It also depends on the database technology and how it handles replicas; it's not as simple as adding another worker node or container.
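For example, on the Django side one of those extra steps is usually a database router, so reads and writes go to the right instance. A minimal sketch, assuming a 'default' primary plus 'replica1'/'replica2' aliases in DATABASES; the replication between them has to be configured in PostgreSQL itself, since scaled replicas of a single Postgres service don't replicate to each other:

import random

class PrimaryReplicaRouter:
    def db_for_read(self, model, **hints):
        # Spread reads across the replicas
        return random.choice(['replica1', 'replica2'])

    def db_for_write(self, model, **hints):
        # All writes go to the primary
        return 'default'

    def allow_relation(self, obj1, obj2, **hints):
        # Primary and replicas hold the same data, so relations are fine
        return True

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        # Run migrations only against the primary
        return db == 'default'

# settings.py: DATABASE_ROUTERS = ['myproject.routers.PrimaryReplicaRouter']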

Heroku: Django database file lost on every dyno restart

I deployed a Django app on Heroku. I have some models and inserted some data into the database (a SQLite database). But when I tried to access the data after a certain time, it showed an empty database. I found a problem similar to my issue here -> django heroku media files 404 error, and I understood that I should keep my media files somewhere else. Here my problem is with the database, and my question is: can I prevent my SQLite database from this data loss?
There is nothing you can do about this, short of storing the database on some other service like Amazon S3. Heroku has ephemeral storage: https://devcenter.heroku.com/articles/sqlite3#disk-backed-storage
However, Heroku also comes with free persistent PostgreSQL, so I would advise you to use that instead.
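Switching is usually a small settings change. A minimal sketch, assuming the Heroku Postgres add-on is attached (it sets the DATABASE_URL environment variable) and the dj-database-url package is installed:

import dj_database_url

# Reads the DATABASE_URL environment variable set by the add-on;
# conn_max_age enables persistent connections
DATABASES = {
    'default': dj_database_url.config(conn_max_age=600),
}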

Flask Login Sessions Not Working

I'm having an issue with Flask-Login where for some reason it seems to clear the data from my session. This issue only seems to happen when I run my application on AWS within a Docker container; there doesn't seem to be any issue when it is run locally within a Docker container. The container kick-starts the application using supervisord to launch the nginx and gunicorn servers.
I'm using Flask-Login and SQLAlchemy to handle my user logins. I'm creating a custom token using the get_auth_token() method in my User model, which stores the token with some session data in my database. I use the token_loader and user_loader callbacks to retrieve my User data from the database, which works fine.
However, if I don't actively use my application for a few minutes, the session data seems to disappear when I navigate to a page that requires a login. My session cookie remains unchanged, and my token_loader and user_loader callbacks never seem to be called. To work out what might be happening with the session, I attached an @app.before_request handler (sketched below) to print the session contents:
[2015-09-29 14:47:21,348] DEBUG in __init__: <SecureCookieSession {u'csrf_token': '51b5b253c55ac954c1bc61dd2dca513e18c4d790', u'_fresh': True, u'user_id': 3, u'_id': 'd3adbd2ed3905986d515aeb04cd1ff7d'}>
[2015-09-29 14:47:21,382] DEBUG in __init__: <SecureCookieSession {u'_flashes': [('message', u'Please log in to access this page.')]}>
It appeared that all of the user information was there for me to load my user, but Flask-Login bailed out and redirected to the login page with its flash message. This redirect happens before it even touches my callbacks that load the user from the database.
Is this possibly just a setup issue with my server configs that is causing a problem with domains? I'm not really sure what I need to look at to debug this further.
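For reference, a minimal version of the debugging handler described above, assuming a standard Flask app object:

from flask import Flask, session

app = Flask(__name__)

@app.before_request
def dump_session():
    # Log the raw session contents before Flask-Login processes the request
    app.logger.debug(session)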
This is a known bug in Flask-Login that was fixed around release 0.2.10 (by me). The bug reappeared in release 0.3.0 of Flask-Login, which as of today is the most current release. I submitted a new fix, plus a unit test to prevent this from ever happening again. The fix was merged a few days ago, but a 0.3.1 release has not been made yet.
Bug report: https://github.com/maxcountryman/flask-login/issues/231
My pull request with the fix: https://github.com/maxcountryman/flask-login/pull/237
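Until a 0.3.1 release appears, one possible workaround is to install straight from the repository's master branch, which should contain the merged fix:

pip install git+https://github.com/maxcountryman/flask-login.git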

django flush query caches

I have 2 instances of a Django application.
One is the frontend - a normal WSGI app.
The other is the backend - a Twisted daemon running with ./manage.py rundaemon.
They share Django settings and models.
Now, when one of them runs a query, the result gets cached.
And when the other updates the database, that cache is not flushed.
That's obviously because neither instance has any clue that the other is accessing the same database.
Is there a way to disable this caching, or to flush it manually and force the query to be re-executed?
(I guess the admin app does flush query caches somehow.)
I'm not sure if this is the best solution, but it worked for me when I faced the same problem.
from django.db import connection

# Closing the connection ends any open transaction, so the next query sees fresh data
connection.close()
The connection will automatically get reopened the next time it is needed.