Django with Oracle database 11g

I am new to Python and Django. I need to connect Django to an Oracle 11g database; I have installed the cx_Oracle library and I am using Instant Client to connect Oracle with Django, but when I run the command manage.py inspectdb > models.py I get the error "Invalid column identifier" in models.py. How can I solve it? The schema I am connecting to has only 2 tables.

"Invalid column" suggests that you specified column name that doesn't exist in any of those tables, or you misspelled its name.
For example:
SQL> desc dept
Name
-----------------------------------------
DEPTNO
DNAME
LOC
SQL> select ndame from dept; --> misspelled column name
select ndame from dept
*
ERROR at line 1:
ORA-00904: "NDAME": invalid identifier
SQL> select imaginary_column from dept; --> non-existent column name
select imaginary_column from dept
*
ERROR at line 1:
ORA-00904: "IMAGINARY_COLUMN": invalid identifier
SQL>
Also, pay attention to letter case, especially if you created the tables/columns using mixed case and enclosed those names in double quotes. If so, I'd suggest you drop the tables and recreate them without double quotes; if you can't do that, you'll have to reference them in double quotes with exactly the same letter case.
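For illustration, a small made-up example of that quoting behaviour:
CREATE TABLE demo ("MyCol" NUMBER);
SELECT mycol FROM demo;    -- fails with ORA-00904: "MYCOL": invalid identifier
SELECT "MyCol" FROM demo;  -- works, because the case matches exactly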
So - check column names and compare them to your query. If you still can't make it work, post some more information - table description and your code.

I've faced the same problem. The problem is that Django expects your table to have a primary key (ID), so when your table has no primary key, it returns "Invalid column identifier".
https://docs.djangoproject.com/en/2.1/topics/db/models/#automatic-primary-key-fields
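If your tables really have no primary key, one option is to add one on the Oracle side and rerun inspectdb - a minimal sketch, assuming a table named MY_TABLE with a suitable ID column (adjust the names to your schema):
ALTER TABLE my_table ADD CONSTRAINT my_table_pk PRIMARY KEY (id);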

Related

Failed to parse SQL query - column invalid identifier

I am on Application Express 21.1.0.
I added a column to a db table, and tried to add that column to the Form region based on that table.
I got this error:
ORA-20999: Failed to parse SQL query! ORA-06550: line 4, column 15: ORA-00904: "NEEDED_EXAMS": invalid identifier
And I cannot find the column in any "source > column" attribute of any page item of that form.
I can query the new column in "SQL COMMANDS".
The new column's name is "NEEDED_EXAMS". It's a varchar2(500).
Don't do it manually; use the built-in feature: right-click the region and select "Synchronize columns" from the menu, and it will do everything for you. It works for reports and forms.
Solved. I have many parsing schemas, and I was creating the tables through Object Browser in a different schema than my app's parsing schema.
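If you hit the same thing, one way to check which schema actually owns the new column is to query the data dictionary (a sketch using the standard ALL_TAB_COLUMNS view):
SELECT owner, table_name, column_name
FROM all_tab_columns
WHERE column_name = 'NEEDED_EXAMS';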

How to RENAME struct/array nested columns using ALTER TABLE in BigQuery?

Suppose we have the following table in BigQuery:
CREATE TABLE sample_dataset.sample_table (
  id INT,
  struct_geo STRUCT<
    country STRING,
    state STRING,
    city STRING
  >,
  array_info ARRAY<
    STRUCT<
      key STRING,
      value STRING
    >
  >
);
I want to rename the columns inside the STRUCT and the ARRAY using an ALTER TABLE command. It's possible to follow the Google documentation available here for normal ("non-nested") columns:
ALTER TABLE sample_dataset.sample_table
RENAME COLUMN id TO str_id
But when I try to run the same command for nested columns, I get errors from BigQuery.
Running the command for a column inside a STRUCT gives me the following message:
ALTER TABLE sample_dataset.sample_table
RENAME COLUMN `struct_geo.country` TO `struct_geo.str_country`
Error: ALTER TABLE RENAME COLUMN not found: struct_geo.country.
The exact same message appears when I run the same statement, but targeting a column inside an ARRAY:
ALTER TABLE sample_dataset.sample_table
RENAME COLUMN `array_info.key` TO `array_info.str_key`
Error: ALTER TABLE RENAME COLUMN not found: array_info.key
I got stuck since the BigQuery documentation about nested columns (available here) lacks examples of ALTER TABLE statements and refers directly to the default documentation for non-nested columns.
I understand that I can rename the columns by simply creating a new table using CREATE TABLE new_table AS SELECT ... and passing the new column names as aliases, but this would run a query over the whole table, which I'd rather avoid since my original table weighs well over 10 TB...
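For reference, the kind of workaround I mean would look something like this (the new table name is made up), rebuilding the STRUCT and the ARRAY with the new field names:
CREATE TABLE sample_dataset.sample_table_renamed AS
SELECT
  id AS str_id,
  STRUCT(
    struct_geo.country AS str_country,
    struct_geo.state,
    struct_geo.city
  ) AS struct_geo,
  ARRAY(
    SELECT AS STRUCT info.key AS str_key, info.value
    FROM UNNEST(array_info) AS info
  ) AS array_info    -- note: a NULL array_info becomes an empty array here
FROM sample_dataset.sample_table;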
Thanks in advance for any tips or solutions!

Django Migrate - row has an invalid foreign key but row does not exist

When migrating my database, I get the following error:
The row in table 'project_obicase' with primary key '2325' has an invalid foreign key: project_obicase.ckId_id contains a value '2443' that does not have a corresponding value in project_pupiladdressck.id.
Looking in my /admin/ site, I cannot find record '2325'; the IDs skip from 2324 to 2333.
Is there any way to resolve this foreign key mishap if I cannot locate the object? I'd be happy to remove record 2325 if I can find it.
Thanks
I solved this problem by deleting the offending records manually from the DB shell (they did not appear on the front end):
manage.py dbshell
DELETE FROM child_table
WHERE NOT EXISTS (SELECT 1 FROM parent_table p WHERE child_table.foreign_key = p.id);
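For the tables in this question that would be something like the following (a sketch only - depending on the database backend the mixed-case column may need double quotes, and check which rows you are removing before running it):
DELETE FROM project_obicase
WHERE NOT EXISTS (
  SELECT 1 FROM project_pupiladdressck p
  WHERE project_obicase.ckId_id = p.id
);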

After updating table IDs via CSV file, adding a new record gives: duplicate key value violates unique constraint

Problem: after a successful data migration from CSV files to a Django/Postgres application, when I try to add a new record via the application interface I get "duplicate key value violates unique constraint". (Since I had ids in my CSV files, I used them as keys.)
Basically the app tries to generate ids that were already migrated.
After each attempt the ID increments by one, so if I have 160 records I would hit this error 160 times, and on attempt 161 the record finally saves OK.
Any ideas how to solve it?
PostgreSQL doesn't have an actual AUTO_INCREMENT column, at least not in the way that MySQL does. Instead it has the SERIAL pseudo-type. This creates a four-byte INT column whose DEFAULT pulls values from a sequence created alongside it. Behind the scenes, if PostgreSQL sees that no value was supplied for that ID column, it takes the next value from that sequence.
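Roughly, a SERIAL column is shorthand for something like this (hypothetical table name t):
CREATE SEQUENCE t_id_seq;
CREATE TABLE t (
  id integer NOT NULL DEFAULT nextval('t_id_seq')
);
ALTER SEQUENCE t_id_seq OWNED BY t.id;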
You can see this by:
SELECT
  TABLE_NAME, COLUMN_NAME, COLUMN_DEFAULT
FROM
  INFORMATION_SCHEMA.COLUMNS
WHERE
  TABLE_NAME = '<your-table>' AND COLUMN_NAME = '<your-id-column>';
You should see something like:
table_name | column_name | column_default
--------------+---------------------------+-------------------------------------
<your-table> | <your-id-column> | nextval('<table-name>_<your-id-column>_seq'::regclass)
(1 row)
To resolve your particular issue, you're going to need to reset the value of the sequence (named <table-name>_<your-id-column>_seq) to reflect the current maximum id.
ALTER SEQUENCE your_name_your_id_column_seq RESTART WITH 161;
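If you don't want to hard-code 161, you can instead set the sequence from the current maximum id (a sketch - the table and column names are placeholders):
SELECT setval(
  pg_get_serial_sequence('your_table', 'your_id_column'),
  COALESCE((SELECT MAX(your_id_column) FROM your_table), 1)
);
pg_get_serial_sequence() looks up the sequence attached to the column, and setval() moves it to the current maximum, so the next insert gets max + 1.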

db2 cannot drop foreign key with lowercase name

I am trying to drop a foreign key in DB2 through the command line. I have succeeded in this many times and I am sure that I am using the correct syntax:
db2 "alter table TABLENAME drop constraint fk_keyname"
Output:
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL0204N "FK_KEYNAME" is an undefined name. SQLSTATE=42704
All my foreign keys were created with an uppercase name, except for the key I now want to drop. I don't know how it got created with a lowercase name, but it seems the command will not drop keys whose names are lowercase.
When I try to add this foreign key (while it still exists) I get the following message:
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL0601N The name of the object to be created is identical to the existing
name "fk_keyname" of type "FOREIGN KEY". SQLSTATE=42710
Does anyone know how to drop foreign keys that have a lowercase name?
The answer by mustaccio worked. I tried all kinds of quotes, but this way did the trick:
db2 'alter table TABLENAME drop constraint "fk_keyname"'
DB2 will convert object names to uppercase, unless they are quoted. Generally it's not a very good idea to create objects with lower- or mixed-case names. If your foreign key is actually "fk_keyname" (all lowercase), run db2 "alter table TABLENAME drop constraint \"fk_keyname\"" or db2 'alter table TABLENAME drop constraint "fk_keyname"'
This behaviour is not unique to DB2, by the way.
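If you're not sure how a constraint name is actually stored, you can check the catalog first (a sketch using the SYSCAT.TABCONST view; adjust the table name):
SELECT constname, tabname, type
FROM syscat.tabconst
WHERE tabname = 'TABLENAME' AND type = 'F';
TYPE = 'F' marks foreign-key constraints, and CONSTNAME shows the exact stored case you need to quote.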