Admin shows entries of a model even though the database is empty - Django

How is the following possible?
I cloned a Django project into a dir, ran syncdb, check_permissions, migrate, and createsuperuser, and started my development server. I then went to the admin, and there I saw 2 entries of a model, even though there are no fixtures that could create them. When I looked at my SQLite database file, there were no rows for that model, yet the admin shows these two entries (I also did python manage.py dumpdata > my_db and searched that file case-insensitively: no result). It's also not browser-related, because the problem also occurs in a freshly installed browser.
These 2 entries are ones I made on the development server of the project at another location. But that server is not running, and I'm using a different port.
I also grepped from my root folder for the names of those entries (grep -r -n entry_name), without result.
When I select these entries in the admin and click delete, nothing happens; the two entries remain.
Edit 1
@Hedde:
I ran ps ffaux | grep python. There is only one Django process running: /home/my_username/.virtualenvs/project_name/bin/python manage.py runserver 8044
I gave each of my settings files a print line to identify which one runs. Only the development settings file runs, as expected and as has always been the case.
I am using sqlite3 for the database, and there is only one my_database.db file in my project dir (find . -iname "*db*" returns only one database file).
Edit 2
When I clone the project from another PC and go through the whole procedure listed above (syncdb, check_permissions, migrate, createsuperuser, and run the development server), everything works as expected: the entries are gone.

The problem was related to Redis: simply flushing the Redis database solved it.
$ redis-cli
redis> flushall
redis> exit
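If you want to confirm that the phantom rows are coming from a Redis-backed cache before wiping everything, you can list the cached keys first. A sketch, assuming the project uses django-redis with Django's default key format (an empty KEY_PREFIX and cache version 1, which yields keys starting with ':1:'):
$ redis-cli --scan --pattern ':1:*'
Note that flushall clears every database on the Redis instance, so sessions, queues, and anything else stored there go with it; flushdb limits the damage to the currently selected database.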

Related

Django's --fake-initial doesn't work when migrating with existing tables

I am migrating a project from Django 1.1 to Django 3.0, and the conversion itself is done. But when I load the production dump into my local database for the newly converted project, I get "Table already exists".
Here's what I am doing.
mysql> create database xyx;
docker exec -i <container-hash> mysql -u<user> -p<password> xyx < dbdump.sql
Then I run the migrations, since I had to make some changes to the previously given models.
./manage.py migrate --fake-initial
This is the output I get:
_mysql.connection.query(self, query)
django.db.utils.OperationalError: (1050, "Table 'city' already exists")
So, what should I do?
Alright, boys and girls, here's the approach I followed to solve this problem.
First, I loaded the entire database dump:
docker exec -i <container-hash> mysql -u<username> -p<password> <dbname> < dump.sql
Next, I listed all the migrations I had made, using:
./manage.py showmigrations <app-name>
This gives the list of all the migrations I have applied. Inspecting them, I realized that my changes were in the 7th through the 30th migration.
Here's the tedious part, which any sysadmin could script in under four lines of bash (a sketch of that script follows below). You can generate the raw SQL of any migration with this command:
./manage.py sqlmigrate <app-name> <migration-name> > changes-i-made.sql
Now that the changes-i-made.sql file exists, I need to run this command 22 more times, but with >> instead of >: every run with a single > overwrites the changes file.
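Rather than running it by hand two dozen times, a short loop does it. A sketch, assuming the relevant migrations are numbered 0007 through 0030 and relying on sqlmigrate accepting a unique numeric prefix in place of the full migration name:
# collect the SQL for migrations 0007..0030 into one file
for n in $(seq -w 7 30); do
    ./manage.py sqlmigrate <app-name> "00$n" >> changes-i-made.sql
done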
Now, once all of your migration changes are recorded in the file, open up your SQL shell, connect to the database, and start pasting the changes in, or do some SQL magic to apply them all directly from the file (sketched below).
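For the "SQL magic" route, the MySQL client will happily read the file from stdin; a sketch reusing the container invocation from above:
docker exec -i <container-hash> mysql -u<username> -p<password> <dbname> < changes-i-made.sql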
Once you're done, go ahead and fake all the migrations, because you don't need Django to apply them; you already did:
./manage.py migrate --fake
Then log in to your production instance, ready to prove wrong the senior team lead who said you couldn't do it.
I checked that this approach works and that future migrations still apply: I created a new one and everything works like a breeze.

Django migrations not persisting

My Django app is containerized alongside PostgreSQL. The problem is that migrations do not seem to persist in the migrations directory. Whenever I run docker exec -it <container_id> python manage.py makemigrations forum, the same migrations are detected. If I spin the stack down, spin it back up, and run makemigrations again, I see the same migrations detected. Changes to fields, adding models, deleting models: none ever get detected. The migrations that do appear seem to be written to the database, because when I try to migrate I get an error that the fields already exist. But if I look at my migrations folder, only __init__.py is present; none of the migrate commands add anything to the migrations folder.
I also tried unregistering the Post model from the admin and spinning the stack up again, yet I still see it in the admin. Same with changes to the templates. No change I make from inside Docker seems to stick.
*Note: this problem started after I switched to WSL 2 and enabled it in Docker Desktop (Windows).
**Update: migrations can be made from bash inside the Docker container.
I found out what the problem was: my docker-stack.yml file pointed to a directory that did not exist in the Dockerfile.
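A quick way to spot this kind of mismatch (the paths here are assumptions; substitute the WORKDIR from your Dockerfile and the mount target from your docker-stack.yml): list the migrations directory on the host and inside the container, and compare.
$ ls forum/migrations
$ docker exec -it <container_id> ls /app/forum/migrations
If the container writes migrations to a path the host mount doesn't cover, the two listings will differ and nothing persists.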

Postgres database of a cloned project not showing up when I run psql

I was under the assumption that Django's models would automatically create the database when I ran migrations; however, I cannot find the database listed in psql, despite the fact that the project runs perfectly, with all the database content populating the website.
I don't know exactly what to look for in the settings.py file to see where the database is actually being stored, but is there any way to find out?
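One way to see which database the project is actually using (a minimal sketch; diffsettings prints only the settings that differ from Django's defaults):
$ python manage.py diffsettings | grep DATABASES
The NAME, HOST, and PORT values in that output tell you which server and database to look for in psql.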

Configuring postgresql database for local development in Django while using Heroku

I know there are a lot of questions floating around relating to similar issues, but I think mine has a specific flavor which hasn't been addressed yet. I'm attempting to create my local PostgreSQL database so that I can do local development in addition to pushing to Heroku.
I have found basic answers on how to do this, for example (which I think is a wee bit outdated):
DATABASES = {'default': dj_database_url.config(default='postgres://fooname:barpass@localhost/dbname')}
This solves the "ENGINE" is not configured error. However, when I run python manage.py syncdb I get the following error:
OperationalError: FATAL: password authentication failed for user "foo"
FATAL: password authentication failed for user "foo"
This happens for every conceivable combination of username/password: my Ubuntu username/password, my Heroku username/password, etc. It also happens if I take the Heroku component out and build it locally, as if I were using PostgreSQL while following the tutorial. Since I don't have a database yet, what do those username/password values refer to? Is the problem exactly that, that I need to create a database first? If so, how?
As a side note, I know I could get the db from Heroku using the process outlined here: Should I have my Postgres directory right next to my project folder? If so, how?
But assuming I were to do so, where would the new db live, how would Django know how to access it, and would I have the same user/password problems?
Thanks a bunch.
Assuming you have Postgres installed, connect via pgAdmin or psql and create a new user, then create a new database with that new user as the owner (see the sketch below for the shell commands). Make sure you can connect via psql to the database as the new user. You will then need to set up an env variable in the postactivate file in your virtualenv's bin folder and save it. Here is what I have for the database:
export DATABASE_URL='postgres://{{username}}:{{password}}@localhost:5432/{{database}}'
Just a note: adding this value to your postactivate doesn't take effect on its own; the file is not run upon saving. You will either need to run the export at the $ prompt, or simply deactivate and reactivate your virtualenv.
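As for actually creating the user and database mentioned above, a minimal sketch using Postgres's stock client tools (assuming a typical Ubuntu install where they are run as the postgres superuser):
$ sudo -u postgres createuser -P fooname
$ sudo -u postgres createdb -O fooname dbname
The -P flag prompts for the new user's password; that username/password pair is what goes into DATABASE_URL.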
Your settings.py should read from this env var:
DATABASES = {'default': dj_database_url.config()}
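For completeness, the import that line relies on; a sketch assuming the dj-database-url package is installed, whose config() reads the DATABASE_URL environment variable by default:
import dj_database_url
DATABASES = {'default': dj_database_url.config()}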
You will then configure Heroku with their CLI tool to use your production database when deployed. Something like:
heroku config:set DATABASE_URL={{production value here}}
(if you don't have Heroku's CLI tool installed, you'll need to install it)
If you need to figure out exactly what value your production database needs, you can get it by logging into Heroku's PostgreSQL dashboard (at the time of writing, https://postgres.heroku.com/), selecting the db from the list, and looking at the "Connection Settings : URL" value.
This way the same settings.py value works for both local and production, and you keep your usernames/passwords out of version control; they are just env config values.

manage.py not updating database name

I have a Django application, and I recently changed the name of the database it is supposed to use. However, manage.py doesn't seem to be using the new database.
I've double-checked the settings.py file, and I've even added a "print settings.DATABASE_NAME" to the manage.py file. It prints out the correct name, but still connects to the old database.
For example, using ./manage.py dbshell:
NewDB
Password for user :
Welcome to psql 8.1.11, the PostgreSQL interactive terminal.
OldDB=>
So as far as I can see, it's completely ignoring what's in the settings file.
What could be causing this?
I figured out what the problem was.
While manage.py does take the settings file, django.conf.settings is loaded from sys.path rather than from what is passed via manage.py.
Apparently the old location was on sys.path, so the old settings file was being loaded.
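A quick way to see which settings module actually gets imported (a sketch; assumes an old-style layout where the settings module is importable as plain settings):
$ python -c "import settings; print(settings.__file__)"
If that prints a path in the old location, that directory is shadowing the new one on sys.path.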