Table is Not Dropping in PostgreSQL database - django

I wanted to drop a table in my Postgres database on Heroku. But I haven't yet wrapped my head around South, so I decided to write a function:
from django.db import connection
from django.http import HttpResponse
from django.utils import simplejson

def dropTable(request):
    cursor = connection.cursor()
    cursor.execute("DROP TABLE books_request CASCADE;")
    success = simplejson.dumps({"success": "success"})
    return HttpResponse(success, mimetype="application/json")
So now my books_request table has been dropped.
Then I went into the bash for my app on Heroku, and I did python manage.py syncdb hoping it would recreate the table anew, but it didn't seem to. Why?
Is there any way to force sync just that app?
I also got this error message which might be part of the problem while doing the sync:
Problem installing fixture '/app/bookstore/fixtures/initial_data.json': Traceback (most recent call last):
DatabaseError: Could not load sites.Site(pk=1): value too long for type character varying(50)
UPDATE:
I think the issue is the table isn't dropping in the first place because when I dump the data, I can still see the table. Why isn't it dropping the table properly?

If you're using south you may need to run the migrate command. Additionally, if it sees the migration as already applied, it won't re-run it. Try running the command below and then adding the output to your question:
heroku run python manage.py migrate

Yeah, you probably didn't drop the table; you just think you have.
Make sure you are referencing the connection correctly, first by:
from django.db import connection
Use django-debug-toolbar to see the SQL that actually got executed. Or use a debugger like pudb in the Heroku shell to see what's going on.
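One way to see whether the DROP really happened is to check the schema catalogue afterwards instead of trusting the call. Here is a minimal sketch of that idea using stdlib sqlite3 as a stand-in for the Postgres connection (the cursor.execute pattern is the same as in the view above); the commit is the detail most often missed, since depending on transaction settings an uncommitted DROP can be rolled back at the end of the request:

```python
# Illustrative sketch only: stdlib sqlite3 stands in for the Postgres
# connection used in the question.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE books_request (id INTEGER PRIMARY KEY)")

cur.execute("DROP TABLE books_request")
conn.commit()  # without a commit, some transaction setups roll the DROP back

# Don't trust the call; ask the schema catalogue whether the table is gone:
cur.execute(
    "SELECT name FROM sqlite_master WHERE type='table' AND name='books_request'"
)
print(cur.fetchone())  # None once the table is really gone
```

On Postgres you would query pg_tables (or run \dt in psql via heroku pg:psql) instead of sqlite_master.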

Related

Django's --fake-initial doesn't work when migrating with existing tables

I am migrating a project from Django 1.1 to Django 3.0, and the conversion is done. When I load the production dump into my local database for the newly converted project, I get "Table already exists".
Here's what I am doing.
mysql> create database xyx;
docker exec -i <container-hash> mysql -u<user> -p<password> xyx < dbdump.sql
Then I run the migration, as I had to make some changes to the previously given models.
./manage.py migrate --fake-initial
This is the output I get:
_mysql.connection.query(self, query)
django.db.utils.OperationalError: (1050, "Table 'city' already exists")
So, what should I do?
Alright boys and girls, here's the approach I followed to solve this problem.
I dumped the entire database.
docker exec -i <container-hash> mysql -u<username> -p<password> <dbname> < dump.sql
Now I listed all the migrations I made using
./manage.py showmigrations <app-name>
This gives me a list of all the migrations I have applied. Inspecting them, I realized that my changes were in the 7th through the 30th migrations.
Here's the tedious part, which any sysadmin could script in less than 4 lines of bash. You can generate the raw SQL of any migration with this command:
./manage.py sqlmigrate <app-name> <migration-name> > changes-i-made.sql
Now that I have created my changes-i-made.sql file, I'll need to run this command 22 more times, but with >>: every time you run it with a single >, it overwrites the changes file.
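For what it's worth, that repetition is easy to script. A hypothetical sketch (the app name, migration numbers, and helper names here are placeholders, not from the question; it also assumes the default NNNN_name migration numbering, though sqlmigrate accepts any unambiguous name prefix):

```python
# Hypothetical helper: dump the raw SQL of migrations 0007-0030 into one
# file, replacing the manual `sqlmigrate ... >> changes-i-made.sql` loop.
import subprocess

def sqlmigrate_cmd(app, number):
    """Build the manage.py invocation for one numbered migration (e.g. 0007)."""
    return ["./manage.py", "sqlmigrate", app, "%04d" % number]

def dump_migration_sql(app, first, last, outfile):
    # Open once in "w" mode, so repeated runs don't append stale output.
    with open(outfile, "w") as out:
        for n in range(first, last + 1):
            result = subprocess.run(
                sqlmigrate_cmd(app, n),
                capture_output=True, text=True, check=True,
            )
            out.write(result.stdout)

# dump_migration_sql("myapp", 7, 30, "changes-i-made.sql")
```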
Once all of your migration changes are recorded in one file, open your SQL shell, connect to the database, and start pasting the changes, or do some SQL magic to pull all the changes directly from the file.
Once you're done, go ahead and fake all the migrations, because you don't need Django to apply them; you already did.
./manage.py migrate --fake
Then log in to your production instance and show the senior team lead who said you couldn't do it.
I checked that this approach works and that future migrations keep working: I created one more migration, and everything ran like a breeze.

Can't install auth.group fixture when testing - django 1.7

I have a django project and I am trying to write some tests for it. However, my initial_data fixtures cause an error when running the test.
The error that I am getting is:
django.db.utils.ProgrammingError: Problem installing fixture 'accounts/fixtures/initial_data.json': Could not load auth.Group(pk=1): relation "auth_group" does not exist
LINE 1: UPDATE "auth_group" SET "name" = '...
If I rename my fixture to something other than initial_data so that it doesn't get loaded by default, it works; but I don't want to rename my fixtures, because then I could no longer run loaddata without arguments.
I have found this bug, but my project does not have any initial migrations. Also, I have other fixtures which are loaded just fine.
So far, I have tried:
flushing my development database, as well as deleting any possible migration files
deleting and re-creating my virtual env
changing the order of my apps in INSTALLED_APPS
calling the flush command in the .setUp() method.
I should mention that I am using the APITestCase from django-rest-framework.
Any suggestions are welcomed. Thanks.
Ok, so finally, it seems that the problem wasn't just when I was testing. When I changed back to running my server, I noticed I was getting the same error.
Every single similar problem I found had something to do with migrations, but I didn't even have those, because running ./manage.py makemigrations was not generating them.
So I ended up running ./manage.py makemigrations <app_name> for each of my apps, and everything started working again.
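That per-app loop can be sketched like this (the app names and helper are hypothetical placeholders, not from the question):

```python
# Hypothetical sketch: run `makemigrations <app>` once per local app, as the
# answer above did by hand. The app names are placeholders.
import subprocess

LOCAL_APPS = ["accounts", "books"]  # replace with your own apps

def makemigrations_cmd(app):
    """Build one manage.py makemigrations invocation for a single app."""
    return ["./manage.py", "makemigrations", app]

for app in LOCAL_APPS:
    print(" ".join(makemigrations_cmd(app)))
    # subprocess.run(makemigrations_cmd(app), check=True)  # uncomment to run
```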

Restoring Django dump

I've been running a daily dump of a production Django application as follows:
./manage.py dumpdata --exclude=contenttypes --exclude=auth.Permission -e sessions -e admin --all > data.json
Normally, restoring this to another installation for development hasn't caused a problem, but recently attempts to restore the data have caused this:
./manage.py loaddata -i data.json
django.db.utils.IntegrityError: Problem installing fixtures: The row in table 'reversion_version' with primary key '1' has an invalid foreign key: reversion_version.content_type_id contains a value '14' that does not have a corresponding value in django_content_type.id.
This suggests to me that the problem has been caused by the recent addition of django-reversion to the codebase, but I am not sure why and I have not been able to find any means of importing the backup. Some posts suggest that using natural keys may work, but then I get errors like:
django.core.serializers.base.DeserializationError: Problem installing fixture 'data.json': [u"'maintainer' value must be an integer."]
"maintainer" is in this case a reference to this bit of code in a model definition in models.py:
maintainer = models.ForeignKey(Organization,related_name="maintainer",blank=True,null=True)
Does anyone have any suggestions as to how I might get this dump installed, or modify the dump procedure to make a reproducible dump?
I note that the production site is using Postgres and the test site has SQLite, but this has never been a problem before.
On your local machine clone your project and do something like this:
Checkout the project at state that was used to create the dump.
Create a new database and tables.
Load the dump.
Update the code to current state.
Run migrations.
That was rather painful. It seems the way to fix it was to dump django_content_type as CSV from the production Postgres database, delete the IDs from the resulting CSV file, and then do the following on the SQLite database for the test version:
CREATE TABLE temp_table(a, b, c);
.mode csv
.import content_type.csv temp_table
DELETE FROM sqlite_sequence WHERE name = 'django_content_type';
DELETE FROM django_content_type;
INSERT INTO django_content_type(name, app_label, model) SELECT * FROM temp_table;
That had the effect of setting the ids of the entries in the django_content_type table to match those in the dump, allowing the restore to proceed.
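The same fix can be scripted. Here is a rough stdlib-only sketch of the import step, assuming the id column has already been stripped from the CSV so SQLite reassigns ids in production order (the inline CSV rows are made-up examples; in practice you would read content_type.csv):

```python
# Rough sketch of the temp-table import above, stdlib only.
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")  # in practice, your test database file
cur = conn.cursor()
cur.execute(
    "CREATE TABLE django_content_type "
    "(id INTEGER PRIMARY KEY, name TEXT, app_label TEXT, model TEXT)"
)

# Stand-in for content_type.csv with the id column already removed:
csv_text = "request,books,request\nsite,sites,site\n"  # name,app_label,model
rows = list(csv.reader(io.StringIO(csv_text)))

# Replace whatever content types exist, letting SQLite assign fresh ids
# in file order so they line up with the ids referenced by the dump:
cur.execute("DELETE FROM django_content_type")
cur.executemany(
    "INSERT INTO django_content_type(name, app_label, model) VALUES (?, ?, ?)",
    rows,
)
conn.commit()
print(cur.execute("SELECT count(*) FROM django_content_type").fetchone()[0])  # 2
```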

django syncdb does not appear to run my custom hook

I have some existing SQL statements that I'd like to use as a custom hook after the CREATE TABLE command, in [myapp]/sql/[model-name].sql.
My app is in INSTALLED_APPS; I see it listed if I run ./manage.py sql.
My custom hook is found; I see the SQL statements output if I run any of the following:
./manage.py sqlall <myapp>
./manage.py sqlcustom <myapp>
./manage.py sql <myapp>
I'm using postgres 9.x on my mac.
If I psql into that same database (with no user), copy the statements from the .sql file, and paste them into the psql prompt, they all work... so I believe they're valid SQL that postgres understands. These are all pretty simple INSERT statements (fixtures addressed below).
However, if I run ./manage.py syncdb, those statements either are not run or fail silently; all I know is that the new rows do not appear in the database. I am tailing the postgres log file and nothing is logged when I run syncdb, so I don't know if it's failing to find my .sql file, or parsing it and hitting some error before anything reaches the database.
I have created a .json file, for fixtures, with the equivalent of those statements, and ./manage.py loaddata <path-to-json-file> works correctly: my site now shows those values in the database. This makes me believe that my settings file is correct and the database I'm writing to inside postgres is set correctly, and I have write permissions when I run ./manage.py.
I saw in some other post that the django documentation is wrong and I should put the custom hook in the 'models' directory, but I don't know if that's right; if sqlall and sqlcustom find my hook, shouldn't syncdb find it? Also I don't (yet) have a models directory and may not need it.
For various reasons I'd rather not use JSON format, but if I have to I will... however I've invested so much time in the .sql format I really want to know what's going on (and I've seen enough existing related questions that this might help others).
I believe I found it, although this is based on observed behavior rather than any real research. I simply changed 'tile' to 'tilexx' everywhere and it worked. This django-project post indicates that if there is some sort of Python class-name conflict, the custom SQL won't be executed... and 'tile' is a pretty common name.
So the answer is to change the name of my class to something a bit more unique.
I've been searching for an answer to a similar problem, trying to initialize an sqlite database with data I dumped from a Flask application I'm porting to Django. Like the OP, all of the following did the right thing:
./manage.py sqlall <myapp>
./manage.py sqlcustom <myapp>
./manage.py sql <myapp>
However, the insert statements in myapp/sql/myapp.sql were not being run. A careful reading of the documentation revealed this clue:
Django provides a hook for passing the database arbitrary SQL that’s executed just after the CREATE TABLE statements when you run syncdb.
(emphasis added)
The issue is that I had already registered my models and run syncdb, so the table in question already existed in the database, although it held no data yet. I deduce that because of this, the CREATE TABLE statement was not being run on subsequent executions of syncdb, and therefore, my custom sql could not be run after that statement. The solution was to DROP table table_name and then run syncdb again, at which point my custom sql was run.

newbie difficulty using south with pycharm - DatabaseError: no such table: south_migrationhistory

I'm using sqlite3 and pycharm to learn more about django, and googled to find that south is recommended to make it easier to modify models after they have been created.
I'm trying to follow the advice on http://south.aeracode.org/docs/tutorial/part1.html#starting-off.
The most success I've had so far is to create a simple model and run syncdb before adding south to INSTALLED_APPS. That way the initial tables are created and I get a chance to create a superuser. (Django admin seems to fret if there are no users.)
Then I add south to installed_apps, and run django_manage.py schemamigration bookmarks --initial
It seems to work fine. A new directory called migrations is created in my app folder, with a couple of files in it, and an encouraging message:
"Created 0001_initial.py. You can now apply this migration with: ./manage.py migrate bookmarks"
The next step, django_manage.py migrate bookmarks, generates the following error message:
django.db.utils.DatabaseError: no such table: south_migrationhistory.
I thought that table would be created in the first schemamigration step. What am I missing? Can anyone help?
Marg
South uses a table of its own to keep track of which migrations have been applied. Before you can apply any migrations, this table must have been created, using python ./manage.py syncdb.
As well as being needed to set up South, you will find syncdb is sometimes necessary for non-South apps in your project, such as the very common django.contrib.auth.
Note that, as a convenience, you can run both in one go like this:
python ./manage.py syncdb --migrate
My latest (unsuccessful) effort was the following:
Create application - sync db - superuser created.
Test run - admin screen shows basic tables.
Add south, and syncdb from the command line with manage.py syncdb - south_migrationhistory table created. Add a basic vanilla model.
Tried various combinations of manage.py syncdb --migrate and schemamigration from PyCharm (if run from within PyCharm, a migrations directory is created within the app; if run from the command line, the directory does not seem to be created).
Django admin screen shows the table - but if I try to edit the table it says that it doesn't exist.
Check database structure using SQLite browser - the table for the newly created model doesn't exist.
I'm starting to think that the whole thing is not worth the time-wasting hassle - maybe I'm better off just modifying the tables in SQLite browser.
Answer from a similar question:
Run syncdb to add the Django and South tables to the database.