Django inspectdb ORA-00942: table or view does not exist

I'm trying to import existing Oracle tables into Django.
I installed cx_Oracle and did all the steps for Django to communicate with my Oracle DB.
import cx_Oracle
con = cx_Oracle.connect("PYTHON","PYTHON", "METDBR")
cur = con.cursor()
cur.execute("select * from ICUSTOMER")
res = cur.fetchall()
for row in res:
    print(row)
works fine....
When I try to inspect the table with the command
python manage.py inspectdb icustomer
I get
Unable to inspect table 'icustomer'
The error was: ORA-00942: table or view does not exist

Usually, it is about letter case.
By default, Oracle stores table names in UPPERCASE. If you (or whoever created the table) used mixed case and enclosed the table name in double quotes, you have to reference the table exactly as it was created: same letter case, enclosed in double quotes.
Therefore, check the exact table name.
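If in doubt, the data dictionary shows how the name and owner are actually stored (standard Oracle views; ICUSTOMER here just mirrors the question):
SELECT owner, table_name
FROM all_tables
WHERE UPPER(table_name) = 'ICUSTOMER';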

Check the USER that is configured in your default ENGINE; this is very probably not the user PYTHON.
You must then qualify the inspected table as PYTHON.ICUSTOMER
and grant access on it to the engine user (while connected as PYTHON):
GRANT SELECT ON PYTHON.ICUSTOMER TO <engine user>;

Related

Oracle Apex Data Loading Wizard - The transformation rule failed

I am working with Application Express 4.2.6.00.03 and setting up an import app.
I am using a CSV file and have set the correct separators and delimiters.
I need to insert a CSV field with values like 1,987456321 or 0 into a db table field of type NUMBER(9,2). To do so I am trying different transformation rules, but none is working. If I don't import this CSV field, the import succeeds.
I set the rule in this way:
Rule Type : PLSQL Expression
Expression1 : round(:myFiled,2) or to_number(:myFiled,'9.99')
Any help or workaround will be appreciated
EDIT1
As stated in the comments, I successfully resolved the import of the field.
Querying the database table to check the whole import, I noticed that ALL fields imported as numbers are missing the decimal part: e.g. a currency field with value 159,63 is imported as 15963, even though the "Data Validation" step shows all these fields correctly.
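For reference, one expression that is sometimes suggested for a comma decimal separator is to normalize it before converting (a sketch only; the field name is the one from the question and the result still depends on your NLS settings):
Rule Type : PLSQL Expression
Expression1 : to_number(replace(:myFiled, ',', '.'))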

Poor performance of Django ORM with Oracle

I am building a Django website with an Oracle backend, and I observe very slow performance even when doing simple lookups on the primary key. The same code works very fast when the same data are loaded in MySQL.
What could be the reason for the poor performance? I have a suspicion that the problem is related to the use of Oracle bind parameters, but this may not be the case.
Django model (a test table with ~6,200,000 rows)
from django.db import models
class Mytable(models.Model):
    upi = models.CharField(primary_key=True, max_length=13)

    class Meta:
        db_table = 'mytable'
Django ORM (takes ~ 1s)
from myapp.models import *
r = Mytable.objects.get(upi='xxxxxxxxxxxxx')
Raw query with bind parameters (takes ~ 1s)
cursor.execute("SELECT * FROM mytable WHERE upi = %s", ['xxxxxxxxxxxxx'])
row = cursor.fetchone()
print row
Raw query with no bind parameters (instantaneous)
cursor.execute("SELECT * FROM mytable WHERE upi = 'xxxxxxxxxxxxx'")
row = cursor.fetchone()
print row
My environment
Python 2.6.6
Django 1.5.4
cx-Oracle 5.1.2
Oracle 11g
When connecting to the Oracle database I specify:
'OPTIONS': {
    'threaded': True,
}
Any help will be greatly appreciated.
[Update]
I did some further testing using the debugsqlshell tool from the Django debug toolbar.
# takes ~1s
>>>Mytable.objects.get(upi='xxxxxxxxxxxxx')
SELECT "Mytable"."UPI"
FROM "Mytable"
WHERE "Mytable"."UPI" = :arg0 [2.70ms]
This suggests that Django uses the Oracle bind parameters, and the query itself is very fast, but creating the corresponding Python object takes a very long time.
Just to confirm, I ran the same query using cx_Oracle (note that the cursor in my original question is the Django cursor).
import cx_Oracle
db= cx_Oracle.connect('connection_string')
cursor = db.cursor()
# instantaneous
cursor.execute('SELECT * from mytable where upi = :upi', {'upi':'xxxxxxxxxxxxx'})
cursor.fetchall()
What could be slowing down Django ORM?
[Update 2] We looked at the database performance from the Oracle side, and it turns out that the index is not used when the query comes from Django. Any ideas why this might be the case?
Using TO_CHAR(character) should solve the performance issue:
cursor.execute("SELECT * FROM mytable WHERE upi = TO_CHAR(%s)", ['xxxxxxxxxxxxx'])
After working with our DBAs, it turned out that for some reason the Django get(upi='xxxxxxxxxxxx') queries didn't use the database index.
When the same query was rewritten using filter(upi='xxxxxxxxxxxx')[:1].get(), the query was fast.
The get query was fast only with integer primary keys (it was string in the original question).
FINAL SOLUTION
create index index_name on Mytable(SYS_OP_C2C(upi));
There seems to be some mismatch between the character sets used by cx_Oracle and Oracle. Adding the C2C index fixes the problem.
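To double-check on the database side which plan a statement gets, the standard EXPLAIN PLAN / DBMS_XPLAN tooling can be used (a sketch; note that EXPLAIN PLAN assumes plain VARCHAR2 binds, so the plan of the actual cx_Oracle cursor, visible via DBMS_XPLAN.DISPLAY_CURSOR, is the more reliable check):
EXPLAIN PLAN FOR
SELECT * FROM mytable WHERE upi = :upi;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);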
UPDATE:
Also, switching to NVARCHAR2 from VARCHAR2 in Oracle has the same effect and can be used instead of the functional index.
Here are some useful discussion threads that helped me:
http://comments.gmane.org/gmane.comp.python.db.cx-oracle/3049
http://comments.gmane.org/gmane.comp.python.db.cx-oracle/2940

In Django, how to implement foreign key relations between tables in different mysql dbs

In MySQL, we can have foreign key relationships between tables in different databases. I am finding it difficult to translate this relationship into the respective Django models.
I have read in the docs that cross-db relationships are not supported, but can we override some property/function so that tables are identified as DB.table rather than table?
For example, there is a table table1 in DB1 that gets referenced in some table2 in DB2. Django tries (unsuccessfully) to find table1 in DB2 and raises a DatabaseError:
Variable  Value
charset   'latin1'
exc       <class '_mysql_exceptions.ProgrammingError'>
self      <MySQLdb.cursors.Cursor object at 0x2a87ed0>
args      (195,)
db        <weakproxy at 0x2a95208 to Connection at 0xdad0>
value     ProgrammingError(1146, "Table 'DB2.table1' doesn't exist")
query     'SELECT (1) AS `a` FROM `table1` WHERE `table1`.`ndx` = 195 LIMIT 1'
Almost everything works, except the save method. A push in the right direction would help a lot!
You need to manually select a database.
Looking at the error you gave, you should do something like this:
qs = table1.objects.using('DB1').filter(pk=id)
# just an example
In this example we are explicitly telling Django to locate table1 in DB1.
It seems we cannot do anything to get relationships working between two tables in different MySQL databases. This is by design; ticket 17875 has some info. We need to write code that works around this.
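As an illustration of such a workaround (a sketch only, with hypothetical model and database-alias names, since a real cross-db ForeignKey is not supported), you can store the raw id and resolve it manually against the other database:
from django.db import models

class Table1(models.Model):
    class Meta:
        db_table = 'table1'   # lives in the database aliased as 'DB1'

class Table2(models.Model):
    table1_id = models.IntegerField()   # plain integer column instead of a ForeignKey

    class Meta:
        db_table = 'table2'   # lives in the database aliased as 'DB2'

    @property
    def table1(self):
        # Resolve the relation by hand against the other database
        return Table1.objects.using('DB1').get(pk=self.table1_id)
This keeps saves working in each database, at the cost of losing JOINs and referential integrity on the Django side.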

Django AutoField not returning new primary_key

We've got a small problem with a Django project we're working on and our PostgreSQL database.
The project is a site/DB conversion from a PHP site to a Django site, so we used inspectdb to generate the models from the current PHP backend.
It gave us this, and we added primary_key and unique set to True:
class Company(models.Model):
    companyid = models.IntegerField(primary_key=True, unique=True)
    ...
    ...
That didn't seem to be working when we finally got to saving a new Company entry. It would return a not-null constraint error, so we migrated to an AutoField like below:
class Company(models.Model):
    companyid = models.AutoField(primary_key=True)
    ...
    ...
This saves the Company entry fine but the problem is when we do
result = form.save()
We can't do
result.pk or result.companyid
to get the newly assigned primary key (yet we can see that it has been given a proper companyid in the database).
We are at a loss for what is happening. Any ideas or answers would be greatly appreciated, thanks!
I just ran into the same thing, but during a django upgrade of a project with a lot of history. What a pain...
Anyway, the problem seems to result from the way django's postgresql backend gets the primary key for a newly created object: it uses pg_get_serial_sequence to resolve the sequence for a table's primary key. In my case, the id column wasn't created with a serial type, but rather with an integer, which means that my sequence isn't properly connected to the table.column.
The following is based on a table with this create statement; you'll have to adjust your table names, columns and sequence names according to your situation:
CREATE TABLE "mike_test" (
"id" integer NOT NULL PRIMARY KEY,
"somefield" varchar(30) NOT NULL UNIQUE
);
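Given such a table, you can confirm whether the sequence is attached; the standard pg_get_serial_sequence function returns NULL when no sequence is linked to the column:
SELECT pg_get_serial_sequence('mike_test', 'id');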
The solution if you're using postgresql 8.3 or later is pretty easy:
ALTER SEQUENCE mike_test_id_seq OWNED BY mike_test.id;
If you're using 8.1 though, things are a little muckier. I recreated my column with the following (simplest) case:
ALTER TABLE mike_test ADD COLUMN temp_id serial NOT NULL;
UPDATE mike_test SET temp_id = id;
ALTER TABLE mike_test DROP COLUMN id;
ALTER TABLE mike_test ADD COLUMN id serial NOT NULL PRIMARY KEY;
UPDATE mike_test SET id = temp_id;
ALTER TABLE mike_test DROP COLUMN temp_id;
SELECT setval('mike_test_id_seq', (SELECT MAX(id) FROM mike_test));
If your column is involved in any other constraints, you'll have even more fun with it.
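Once the sequence is attached, the original code from the question should behave as expected (a sketch reusing the question's form object):
result = form.save()
print(result.pk)         # now populated from the sequence
print(result.companyid)  # same value, since companyid is the primary key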

Automate the generation of natural keys

I'm studying a way to serialize part of the data in database A and deserialize it in database B (a sort of save/restore between different installations), and I've had a look at Django natural keys to avoid problems due to duplicated IDs.
The only issue is that I would have to add a custom manager and a new method to all my models. Is there a way to make Django automatically generate natural keys by looking at unique=True or unique_together fields?
Please note this answer has nothing to do with Django, but hopefully it gives you another alternative to think about.
You didn't mention your database, however, in SQL Server there is a BINARY_CHECKSUM() keyword you can use to give you a unique value for the data held in the row. Think of it as a hash against all the fields in the row.
This checksum method can be used to update a database from another by checking if local row checksum <> remote row checksum.
This SQL below will update a local database from a remote database. It won't insert new rows; for that you use insert ... where id > @MaxLocalID.
SELECT delivery_item_id, BINARY_CHECKSUM(*) AS bc
INTO #DI
FROM [REMOTE.NETWORK.LOCAL].YourDatabase.dbo.delivery_item di
SELECT delivery_item_id, BINARY_CHECKSUM(*) AS bc
INTO #DI_local
FROM delivery_item di
-- Get rid of items that already match
DELETE FROM #DI_local
WHERE delivery_item_id IN (SELECT l.delivery_item_id
FROM #DI x, #DI_local l
WHERE l.delivery_item_id = x.delivery_item_id
AND l.bc = x.bc)
DROP TABLE #DI
UPDATE DI
SET engineer_id = X.engineer_id,
... -- Set other fields here
FROM delivery_item DI,
[REMOTE.NETWORK.LOCAL].YourDatabase.dbo.delivery_item x,
#DI_local L
WHERE x.delivery_item_id = L.delivery_item_id
AND DI.delivery_item_id = L.delivery_item_id
DROP TABLE #DI_local
For the above to work, you will need a linked server between your local database and the remote database:
-- Create linked server if you don't have one already
IF NOT EXISTS ( SELECT srv.name
                FROM sys.servers srv
                WHERE srv.server_id != 0
                  AND srv.name = N'REMOTE.NETWORK.LOCAL' )
BEGIN
    EXEC master.dbo.sp_addlinkedserver @server = N'REMOTE.NETWORK.LOCAL',
        @srvproduct = N'SQL Server'
    EXEC master.dbo.sp_addlinkedsrvlogin
        @rmtsrvname = N'REMOTE.NETWORK.LOCAL',
        @useself = N'False', @locallogin = NULL,
        @rmtuser = N'your user name',
        @rmtpassword = 'your password'
END
GO
In that case you should use a GUID as your key. The database can automatically generate these for you; Google uniqueidentifier. We have 50+ warehouses all inserting data remotely and sending their data up to our primary database using SQL Server replication. They all use a GUID as the primary key, as this is guaranteed to be unique. It works very well.
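On the Django side, the equivalent idea would be a UUID primary key (a sketch; models.UUIDField requires Django 1.8+, and the model name is illustrative):
import uuid
from django.db import models

class MyModel(models.Model):
    # Application-generated, globally unique key, analogous to a GUID/uniqueidentifier column
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)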
My solution has nothing to do with natural keys but uses pickle/unpickle.
It's not the most efficient way, but it's simple and easy to adapt to your code. I don't know if it works with a complex DB structure, but if this is not your case, give it a try!
When connected to db A:
import pickle
records_a = your_model.objects.filter(...)
f = open("pickled.records_a.txt", 'wb')
pickle.dump(records_a, f)
f.close()
Then move the file and, when connected to db B, run:
import pickle
records_a = pickle.load(open('pickled.records_a.txt', 'rb'))
for r in records_a:
    r.id = None
    r.save()
Hope this helps
Make a custom base model by extending the models.Model class, write your generic manager inside it along with a custom .save() method, then edit your models to extend the custom base model. This will have no side effect on your DB table structure or old saved data, except when you update some old rows. If you have old data, try to make a fake update to all your records.
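A minimal sketch of such a base model, assuming each model declares a single unique_together group in Meta to be used as its natural key (the class names here are illustrative, not part of the original answer):
from django.db import models

class NaturalKeyManager(models.Manager):
    def get_by_natural_key(self, *args):
        # Look the object up by the first unique_together group declared in Meta
        fields = self.model._meta.unique_together[0]
        return self.get(**dict(zip(fields, args)))

class NaturalKeyModel(models.Model):
    objects = NaturalKeyManager()

    class Meta:
        abstract = True

    def natural_key(self):
        # Return the values of the same fields, in the same order
        fields = self._meta.unique_together[0]
        return tuple(getattr(self, f) for f in fields)
Concrete models then only need to inherit from NaturalKeyModel and declare unique_together in their own Meta.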