Django South migration: reset the schema for only a few tables - django

I am new to Django South migrations. I have a main application, and I built most of its additional functionality as sub-applications of that main application. What I want to do now is reset the tables that are specific to one sub-application of the main application. I don't want to lose any data from the other tables.
This is how my database look like:
public | tos_agreement | table | g_db_admin
public | tos_agreementversion | table | g_db_admin
public | tos_signature | table | g_db_admin
public | userclickstream_click | table | g_db_admin
public | userclickstream_stream | table | g_db_admin
public | vote | table | g_db_admin
(80 rows)
I only want to re-build (dropping all data from)
public | userclickstream_click | table | g_db_admin
public | userclickstream_stream | table | g_db_admin
How can I do this using South migrations?
In my south_migrationhistory table I have following:
15 | userclickstream | 0001_initial | 2013-12-10 13:26:15.684678-06
16 | userclickstream | 0002_auto__del_field_stream_auth_user | 2013-12-10 13:26:15.693485-06
17 | userclickstream | 0003_auto__del_field_stream_username__add_field_stream_user | 2013-12-10 13:26:15.721449-06
I assume these records were created when I initially wired the app up with South.
I was also thinking: what if I delete the above records from south_migrationhistory and re-run the migrations for this app, which will regenerate the tables?
./manage.py schemamigration userclickstream --initial
./manage.py migrate userclickstream

Do it this way:
Open up your terminal and run manage.py dumpdata > backup.json. It will create a JSON fixture with all the data currently in the database. That way, if you mess anything up, you can always re-load the data with manage.py loaddata backup.json (note that all tables need to be empty for this to work).
Optional: load the data into a new development DB using the aforementioned loaddata command.
Write your own migration, and don't worry about breaking anything because, hey, you have a backup. It might take some learning, but the basic idea is that you create a migration class with two functions, forwards and backwards. Check out the South documentation and pick it up slowly from there.
Come back to SO with any more specific questions and troubles you have along the way.
This isn't a coded "here's the solution" answer, but I hope it helps nonetheless.
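The backup-then-rebuild idea above can also be sketched at the command level with South itself. This is only a sketch under assumptions (the fixture file name is arbitrary, and migrating an app to zero relies on that app's backwards migrations being correct); it unapplies only the one app's migrations, so the other tables keep their data:

```shell
# Back up just this app's data first (fixture name is arbitrary)
./manage.py dumpdata userclickstream > userclickstream_backup.json

# Unapply all of the app's migrations; South runs them backwards,
# dropping only the userclickstream_* tables
./manage.py migrate userclickstream zero

# Re-apply the migrations to recreate the tables, now empty
./manage.py migrate userclickstream

# Optionally restore the old rows
./manage.py loaddata userclickstream_backup.json
```

This avoids deleting rows from south_migrationhistory by hand, since South updates that table itself as it unapplies and re-applies each migration.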

Related

DynamoDB one-to-many relation denormalization or adjacency?

I am designing a table for a data structure that represents a business operation that can be performed either ad hoc or as part of a batch. Operations performed together as a batch must be linked and queryable, and there is metadata on the batch that will be persisted.
The table must support two queries: retrieving the history of both ad hoc and batch instances.
Amazon suggests two approaches: adjacency lists and denormalization.
I am not sure which approach is best. Speed will be a priority, cost secondary.
This will be a multi-tenant database for multiple organizations with a million-plus operations. (Orgs will be part of the partition key to segregate these across nodes.)
Here are the ideas I've come up with:
Denormalized, non-adjacency - a single root wrapper object with one (ad hoc) or more (batch) operation data items.
Denormalized, adjacency - top-level keys consist of operation instances (ad hoc) as well as parent objects containing a collection of operation instances (batch).
Normalized, non-adjacency, duplicated data - top level consists of operation instances, with or without a batch key, and batch information duplicated among all members of the batch.
Is there a standard best practice? Any advice on setting up/generating keys?
Honestly, these terms confuse me in NoSQL, most specifically in DynamoDB. For me it is hard to design a DynamoDB table piece by piece rather than from the whole business process. And frankly, I worry more about data sizes than about request speed in DynamoDB, since we have a 1MB limit per request. In other words, I should forget everything about relational DB concepts and see the data as JSON objects when working with DynamoDB.
But well, for a very simple one-to-many relationship (i.e. a person loves some fruits), my best schema choice is a String PartitionKey. So my table will look like this:
|---------------------|---------------------------------------|
| PartitionKey | Infos |
|---------------------|---------------------------------------|
| PersonID | {name:String, age:Number, loveTo:map} |
|---------------------|---------------------------------------|
| FruitID | {name:String, otherProps} |
|---------------------|---------------------------------------|
The sample data:
|---------------------|---------------------------------------|
| PartitionKey | Infos |
|---------------------|---------------------------------------|
| person123 | { |
| | name:"Andy", |
| | age:24, |
| | loveTo:["fruit123","fruit432"] |
| | } |
|---------------------|---------------------------------------|
| personABC | { |
| | name:"Liza", |
| | age:20, |
| | loveTo:["fruit432"] |
| | } |
|---------------------|---------------------------------------|
| fruit123 | { |
| | name:"Apple", |
| | ... |
| | } |
|---------------------|---------------------------------------|
| fruit432 | { |
| | name:"Manggo", |
| | ... |
| | } |
|---------------------|---------------------------------------|
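A minimal sketch of this layout in plain Python (dicts standing in for DynamoDB items; the make_person/make_fruit helpers are my own, not a DynamoDB API) shows how the one-to-many side is resolved by following the IDs stored in loveTo:

```python
# Plain-dict stand-ins for the items in the table above.
# make_person / make_fruit are hypothetical helpers, not a DynamoDB API.
def make_person(person_id, name, age, love_to):
    return {"PartitionKey": person_id,
            "Infos": {"name": name, "age": age, "loveTo": love_to}}

def make_fruit(fruit_id, name):
    return {"PartitionKey": fruit_id, "Infos": {"name": name}}

items = [
    make_person("person123", "Andy", 24, ["fruit123", "fruit432"]),
    make_person("personABC", "Liza", 20, ["fruit432"]),
    make_fruit("fruit123", "Apple"),
    make_fruit("fruit432", "Manggo"),
]
# A dict keyed by PartitionKey plays the role of GetItem lookups
table = {item["PartitionKey"]: item for item in items}

# "What does Andy love?" is one lookup per stored ID; with real
# DynamoDB this would be a single BatchGetItem on the fruit keys.
andy = table["person123"]
loved = [table[fid]["Infos"]["name"] for fid in andy["Infos"]["loveTo"]]
print(loved)  # ['Apple', 'Manggo']
```

The trade-off to notice is that the relationship lives entirely inside the person item, so fetching the related fruits always costs extra reads after the first one.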
But let's look at a more complex case: a sample chat app. Each channel allows many users, and each user can join any number of channels. Should it be one-to-many or many-to-many, and how do we model the relation? I would say I don't care about that. If we think the way we do in a relational DB, what a headache! In this case I would use a composite SortKey and even a secondary index to speed up the specific queries.
So, the question of what whole business process you are working on is what will help us design the table, not piece by piece.
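The composite-sort-key idea for the chat case can be sketched the same way. The PK/SK attribute names and the CHANNEL#/USER# prefixes below are my own illustrative convention, not from the question; the point is that one item per membership makes each direction of the many-to-many a single query:

```python
# One item per channel membership; PK is the partition key and SK the
# sort key of a hypothetical composite-key table.
def membership(channel_id, user_id):
    return {"PK": f"CHANNEL#{channel_id}", "SK": f"USER#{user_id}"}

items = [membership("general", "liza"),
         membership("general", "andy"),
         membership("random", "andy")]

# "Who is in #general?" -> Query on PK == CHANNEL#general
in_general = sorted(i["SK"] for i in items if i["PK"] == "CHANNEL#general")

# "Which channels has andy joined?" is the inverted question; in real
# DynamoDB this needs a GSI whose partition key is the SK attribute.
andys_channels = sorted(i["PK"] for i in items if i["SK"] == "USER#andy")

print(in_general)      # ['USER#andy', 'USER#liza']
print(andys_channels)  # ['CHANNEL#general', 'CHANNEL#random']
```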

Why am I not able to execute an SQL query on my existing Postgres database table?

For learning purposes, I recently migrated my database from SQLite to Postgres within my Django project, and the migration was successful.
I am able to connect to the DB through the command below:
sudo -u <username> psql -d <DB_name>;
I am able to list the tables, including the schema, with:
\d
But when I try a simple SELECT query, it gives the error below:
select * from public.AUTHENTICATION_userprofile;
ERROR: relation "public.authentication_userprofile" does not exist
LINE 1: select * from public.AUTHENTICATION_userprofile;
Table details:
Schema | Name | Type | Owner
--------+-----------------------------------+----------+----------
public | AUTHENTICATION_userprofile | table | postgres
public | AUTHENTICATION_userprofile_id_seq | sequence | postgres
Any suggestions please.
Thank you
Because you created the table with capital letters in its name, Postgres treats the identifier as case-sensitive (unquoted identifiers are folded to lowercase), so you have to put double quotes around the name in the query:
select * from public."AUTHENTICATION_userprofile";

Visual C++: how to find the name of a column in MySQL

I am currently using the following code to fill a combo box with column information from a MySQL database:
private: void Fillcombo1(void) {
    // Build the connection and query for the combinations table
    String^ constring = L"datasource=localhost;port=3307;username=root;password=root";
    MySqlConnection^ conDataBase = gcnew MySqlConnection(constring);
    MySqlCommand^ cmdDataBase = gcnew MySqlCommand("select * from database.combinations;", conDataBase);
    MySqlDataReader^ myReader;
    try {
        conDataBase->Open();
        myReader = cmdDataBase->ExecuteReader();
        while (myReader->Read()) {
            // Read the "OD" column of each row into the combo box
            String^ vName = myReader->GetString("OD");
            comboBox1->Items->Add(vName);
        }
    }
    catch (Exception^ ex) {
        MessageBox::Show(ex->Message);
    }
}
Is there any simple method for finding the name of a column and placing it in a combo box?
Also, I am adding small details to my app, such as a news feed, which would need updating every so often. Will I have to dedicate a whole new database table to this single news feed's text so that I can update it, or is there a simpler alternative?
Thanks.
An alternative is to use the DESCRIBE statement:
mysql> describe rcp_categories;
+---------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------+------------------+------+-----+---------+----------------+
| ID_Category | int(10) unsigned | NO | PRI | NULL | auto_increment |
| Category_Text | varchar(32) | NO | UNI | NULL | |
+---------------+------------------+------+-----+---------+----------------+
2 rows in set (0.20 sec)
There may be an easier way without issuing another query, but you could also use the SHOW COLUMNS MySQL statement:
SHOW COLUMNS FROM combinations FROM database
or
SHOW COLUMNS FROM database.combinations
Both will work.
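There is also a driver-side route that skips the extra DESCRIBE/SHOW COLUMNS round trip: most database APIs expose the column names of the last result set directly. Here is the idea in Python's stdlib sqlite3, using a hypothetical stand-in table (only the "OD" column name comes from the question); in Connector/NET the analogous members should be the reader's FieldCount and GetName(i):

```python
import sqlite3

# Hypothetical stand-in for the "combinations" table from the question;
# only the "OD" column name is taken from the original code.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE combinations (OD TEXT, DS TEXT)")

# After any SELECT, the cursor's description holds one 7-tuple per
# column; the first element of each tuple is the column name.
cur = conn.execute("SELECT * FROM combinations")
column_names = [d[0] for d in cur.description]
print(column_names)  # ['OD', 'DS']
```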

Django CharField not being inserted correctly in MySQL

I would like to save (US) phone numbers in the database via Django. I have:
from django.db import models
class Number(models.Model):
phone_number = models.CharField("Phone Number", max_length=10, unique=True)
When I ran:
python manage.py sql myapp
I got
BEGIN;
CREATE TABLE `demo_number` (
`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY,
`phone_number` varchar(10) NOT NULL UNIQUE
)
;
When I validated it, there were no errors:
python manage.py validate
0 errors found
So I ran:
python manage.py syncdb
In MySQL console, I see:
mysql> select * from myapp_number;
Empty set (0.00 sec)
mysql> describe myapp_number;
+--------------+-------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------------+-------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| phone_number | varchar(10) | NO | UNI | NULL | |
+--------------+-------------+------+-----+---------+----------------+
2 rows in set (0.03 sec)
Then in python manage.py shell, I do
from demo.models import Message, Number, Relationship, SmsLog
n=Number(phone_number='1234567890')
n.save()
When I check again in MySQL console, I see:
mysql> select * from myapp_number;
+------------+--------------+
| id | phone_number |
+------------+--------------+
| 2147483647 | 1234567890 |
+------------+--------------+
1 row in set (0.01 sec)
Why is the id such a big number? In fact, because of it I cannot insert phone numbers anymore. For example, in python manage.py shell:
n=Number(phone_number='0987654321')
n.save()
IntegrityError: (1062, "Duplicate entry '2147483647' for key 'PRIMARY'")
I am new to Django (using Django 1.5 and MySQL server version 5.1.63). If someone could point out the obvious mistake I'm making, I would very much appreciate it. On a side note, if I wanted to extend the max_length of the CharField to 15, what is the simplest and cleanest way (that is, one that doesn't screw up the existing setup) to accomplish that? Thank you.
I can't see any mistakes in your code. If you don't have any data in the table, I would try dropping the demo_number table and running syncdb again.
If you don't have any data in the table, the easiest way to change to max length 15 is to change the model, drop the table in the db shell, then run syncdb again.
If you do have data in the table, you can change the model, then update the column in the db. In your case (MySQL-specific):
ALTER TABLE demo_number CHANGE phone_number phone_number varchar(15) NOT NULL UNIQUE;
For more complex migrations in Django, use South.
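As an aside, the suspicious id value is itself a clue: it is exactly the largest signed 32-bit integer, i.e. the upper bound of MySQL's INT type, which suggests the AUTO_INCREMENT counter was somehow pushed to its maximum rather than starting at 1. A quick check:

```python
# 2147483647 is INT's ceiling in MySQL (signed 32-bit), so once a row
# takes that id, every later insert collides on the primary key.
int_max = 2**31 - 1
print(int_max)  # 2147483647
```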
Turns out @Alasdair is right. I had to reset the app. In case anyone is wondering how to do it (I searched Stack Overflow, but might as well post it here since it's relevant), this answer https://stackoverflow.com/a/15444900/1330974 will work for Django > 1.3.
My follow-up question is if the AutoField ID is incremented in case of error. For example, I did this in the shell:
from demo.models import Number
n=Number(phone_number='1234567890')
n.save()
n=Number(phone_number='1234567890')
n.save()
I got an IntegrityError as expected. So I tried a new number:
n=Number(phone_number='0987654321')
n.save()
Now when I check MySQL console, I see:
mysql> select * from demo_number;
+----+--------------+
| id | phone_number |
+----+--------------+
| 3 | 0987654321 |
| 1 | 1234567890 |
+----+--------------+
2 rows in set (0.00 sec)
Is that normal for Django to skip an ID in AutoField if there is an error? Thank you.

Error: "Index '' does not exist on table" when trying to create entities in Doctrine 2.0 CLI

I have a MySQL database. I am trying to get Doctrine 2 to create entities from the MySQL schema. I tried this with our production database and got the following error:
[Doctrine\DBAL\Schema\SchemaException] Index '' does not exist on table user
I then created a simple test database with only one table and only four fields: an auto-increment primary key field and three varchar fields. When attempting to have Doctrine create entities from this database, I got the same error.
Here is the table that I was trying to create an entity for (it should have been simple):
mysql> desc user;
+-----------+-------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------+-------------+------+-----+---------+----------------+
| iduser | int(11) | NO | PRI | NULL | auto_increment |
| firstname | varchar(45) | YES | | NULL | |
| lastname | varchar(45) | YES | | NULL | |
| username | varchar(45) | YES | | NULL | |
+-----------+-------------+------+-----+---------+----------------+
4 rows in set (0.00 sec)
Here is the command that I used in an attempt to get said entities created:
./doctrine orm:convert-mapping --from-database test ../models/test
I am running:
5.1.49-1ubuntu8.1 (Ubuntu)
mysql Ver 14.14 Distrib 5.1.49, for debian-linux-gnu (i686) using readline 6.1
Doctrine 2.0.1
I am facing the same problem right now. I have traced it back to the primary key not being identified / set correctly. The default value is boolean(false), which is cast to the string ''. Doctrine subsequently fails to locate an index for this attribute. ;-)
Solution: Define a PRIMARY KEY.