I created a Docker Swarm with Django, Apache, and PostgreSQL all running in an overlay network called internal_network. Scaling the Apache and Django services works perfectly fine, except that for Django I have to wait a little longer while it runs its migrations. But when I scale the PostgreSQL service, Django breaks intermittently: the first time I open my Django admin page it loads fine, then I reload the page and Django prints out
relation "django_session" does not exist
LINE 1: ...ession_data", "django_session"."expire_date" FROM "django_se...
If I reload again, the Django admin comes back to normal, and this cycle keeps repeating as I keep reloading the page.
It seems like Django is confused by having two PostgreSQL tasks running on my swarm, and it just won't work properly.
Okay, I just found out that scaling the database takes more steps than that. It also depends on the database technology and how it handles replicas; it's not as simple as spinning up another worker node or container.
I am trying to configure my Django site to use two databases. My site is deployed on Heroku, and I have a 'follower' database, which is a read-only copy of my main database. As I understand it from the Heroku docs, any changes to my main database are streamed live to the follower database.
What I am trying to achieve is routing all read operations to the follower database and all write operations to the main database.
Any help would be amazing.
Cheers,
Tom
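The usual Django mechanism for this kind of read/write split is a database router. A minimal sketch, assuming the follower connection is configured under a hypothetical 'follower' alias in DATABASES (the module and class names are illustrative):

# routers.py
class PrimaryFollowerRouter:
    def db_for_read(self, model, **hints):
        # Send all reads to the read-only follower copy
        return "follower"

    def db_for_write(self, model, **hints):
        # Send all writes to the primary database
        return "default"

    def allow_relation(self, obj1, obj2, **hints):
        # Both aliases see the same data, so cross-alias relations are fine
        return True

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        # Only run migrations against the primary; the follower is read-only
        return db == "default"

The router would then be activated with DATABASE_ROUTERS = ["routers.PrimaryFollowerRouter"] in settings.py.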
We use django-background-tasks in our application, and I have had no issues deleting tasks from the admin page when running the app locally. But when I try to delete some tasks from the production version (hosted on GCP), I keep getting internal server (500) errors. What could be the reason for this?
Alternatively, is it advisable/safe to delete the tasks directly from the background_task table in the database?
I deleted the tasks from the background_task table and it hasn't caused any issues.
The 500 could be due to some access issue (maybe some settings have to be configured to allow table modifications via the admin page, over SSH?).
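If you would rather go through the ORM than raw SQL, django-background-tasks stores queued tasks in its Task model, so the delete can also be done from a Django shell. A minimal sketch (the filter criterion is illustrative):

from background_task.models import Task

# Delete queued tasks for one task function directly through the ORM
Task.objects.filter(task_name="myapp.tasks.some_task").delete()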
I'm building a webapp using Gunicorn, Flask, and Plotly's Dash. I'm using Gunicorn's --reload option, which automatically reloads or resets the workers if any code is modified. I have observed that this basically restarts my entire web app. At the start of my webapp I initialize a client connection and a cursor to documents inside a MongoDB database. Then the webapp starts graphing things. If I modify the HTML of the webapp, I want Gunicorn to reload only the HTML side of things, and not reinitialize the MongoDB connection each time. Is there any way I can avoid reloading everything with Gunicorn's reload? Or maybe some other alternative?
Gunicorn only reloads the Python code. It will not reload your HTML code.
Your HTML code should be read each time a request is made, unless a cached version is being served.
Try disabling caching on the page you are trying to reload.
These links should point you towards a solution:
https://pythonhosted.org/Flask-Caching/
https://gist.github.com/arusahni/9434953
Disable cache on a specific page using Flask
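Along the lines of that last link, a minimal sketch of disabling client-side caching for a single Flask route by setting response headers (the route and template names are illustrative):

from flask import Flask, make_response, render_template

app = Flask(__name__)

@app.route("/dashboard")
def dashboard():
    # Render as usual, then tell browsers and proxies not to cache the result
    response = make_response(render_template("dashboard.html"))
    response.headers["Cache-Control"] = "no-cache, no-store, must-revalidate"
    response.headers["Pragma"] = "no-cache"
    response.headers["Expires"] = "0"
    return response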
I deployed a Django app on Heroku. I have some models and inserted some data into the database (an SQLite database). But when I tried to access the data after a certain time, it showed an empty database. I found a problem similar to my issue here -> django heroku media files 404 error, and I understood from it that I should keep my media files somewhere else. Here my problem is with the database, and my question is: can I prevent my SQLite database from suffering this data loss?
There is nothing you can do about this, short of storing the database on some other service like Amazon S3. Heroku has ephemeral storage: https://devcenter.heroku.com/articles/sqlite3#disk-backed-storage
However, Heroku also comes with free persistent PostgreSQL, so I would advise you to use that instead.
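A minimal sketch of pointing Django at the Heroku Postgres add-on, assuming the dj-database-url package is installed and DATABASE_URL is set by the add-on (the SQLite fallback is just for local development):

# settings.py
import dj_database_url

DATABASES = {
    # Use Heroku's DATABASE_URL in production, local SQLite otherwise
    "default": dj_database_url.config(default="sqlite:///db.sqlite3", conn_max_age=600)
}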
I have 2 instances of a Django application.
One is the frontend - a normal WSGI app.
The other is the backend - a Twisted daemon running with ./manage.py rundaemon.
They share Django settings and models.
Now, when one of them runs a query, the result gets cached.
And when the other one updates the database, that cache is not flushed.
That's obviously because neither instance has any clue that the other is accessing the same database.
Is there a way to disable this caching, or to flush it manually and force the query to be re-executed?
(I guess the admin app does flush query caches somehow.)
I'm not sure if this is the best solution, but it worked for me when I faced the same problem.
# Close the current database connection; Django will open a fresh one on the next query
from django.db import connection
connection.close()
The connection will automatically get reopened the next time it is needed.
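For example, in the daemon's polling loop you could drop the connection just before each query, so the next read comes from a fresh connection rather than the old one's view of the data (the model and function names here are hypothetical):

from django.db import connection
from myapp.models import Job  # hypothetical model that the daemon polls

def poll_for_updates():
    # Discard the stale connection; Django reopens one transparently on the next query
    connection.close()
    return list(Job.objects.filter(processed=False))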