Django inspectdb for Oracle - django

I am doing a project and need to retrieve tables/models from an Oracle database (version 19c), so I am trying to use Django's inspectdb to do this.
– My settings.py looks like:
import cx_Oracle
cx_Oracle.init_oracle_client(lib_dir="/opt/oracle/instantclient_21_7")

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.oracle',
        'NAME': 'service name',
        'USER': 'XXX',
        'PASSWORD': 'XXX',
        'HOST': '',
        'PORT': '',
        'OPTIONS': {
            'threaded': True,
            'use_returning_into': False,
        },
    }
}
– Running the following in a shell all works fine:
import cx_Oracle

dsn_tns = cx_Oracle.makedsn('Host Name', 'Port Number', service_name='Service Name')
conn = cx_Oracle.connect(user='User Name', password='Personal Password', dsn=dsn_tns)
c = conn.cursor()
c.execute('select * from schema.table')
for row in c:
    print(row)
Issue: but when I run python3 manage.py inspectdb schema.table, it returns this error:
"The error was: ORA-00942: table or view does not exist"
Could anyone help me? Thanks a lot!
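Two things may be worth checking here, given that the raw cx_Oracle connection works. First, as far as I know Django's Oracle introspection only lists tables owned by the connected user, so a table in another schema may need a synonym (or a connection as that schema's user) before inspectdb can see it. Second, since HOST and PORT are empty in the settings above, Django's Oracle backend treats NAME as the full DSN, so the same host/port/service used in the working makedsn() call can go there as an Easy Connect string. A sketch of that alternative DATABASES entry (host, port, and service name below are hypothetical placeholders):

```python
# Sketch of an alternative DATABASES entry. When HOST and PORT are blank,
# Django's Oracle backend passes NAME through as the DSN, so an Easy Connect
# string "host:port/service_name" can be used. Values are hypothetical.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.oracle',
        'NAME': 'dbhost.example.com:1521/my_service',  # hypothetical DSN
        'USER': 'XXX',
        'PASSWORD': 'XXX',
        'HOST': '',
        'PORT': '',
    }
}
```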

Related

Not being able to use 'from airflow.providers.google.cloud.operators.bigquery import BigQueryOperator' in Airflow 2.0

I am learning Cloud Composer and Airflow in Google Cloud Platform. I am trying to do some transformations and load into another table. from airflow.providers.google.cloud.operators.bigquery import BigQueryOperator gives me an error, and I have looked through the Airflow documentation and can't see whether it has been changed. This is my code:
from airflow.providers.google.cloud.operators.bigquery import BigQueryOperator
bq_to_bq = BigQueryOperator(
    task_id="bq_to_bq",
    sql="SELECT count(*) as count FROM `raw_bikesharing.stations`",
    destination_dataset_table='dwh_bikesharing.temporary_stations_count',
    write_disposition='WRITE_TRUNCATE',
    create_disposition='CREATE_IF_NEEDED',
    use_legacy_sql=False,
    priority='BATCH'
)
No name 'BigQueryOperator' in module 'airflow.providers.google.cloud.operators.bigquery'
There is another, up-to-date operator in Airflow to execute a query and create a job: BigQueryInsertJobOperator.
I think you used an operator that no longer exists; try this instead:
import airflow
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

execute_query = BigQueryInsertJobOperator(
    task_id='execute_query_task_id',
    configuration={
        "query": {
            "query": "select …",
            "useLegacySql": False,
        }
    },
    location='EU'
)
You can check this example from my Github repository.
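To carry over the other BigQueryOperator arguments from the question (destination table, write/create disposition, priority), they map onto fields of the BigQuery job configuration dict that BigQueryInsertJobOperator accepts. A sketch of that mapping, shown as a plain dict so it can be read without an Airflow environment (the project ID is hypothetical; the field names follow the BigQuery jobs API):

```python
# Sketch: job configuration that would be passed as
# BigQueryInsertJobOperator(configuration=...). "my-project" is hypothetical.
configuration = {
    "query": {
        "query": "SELECT count(*) as count FROM `raw_bikesharing.stations`",
        "useLegacySql": False,
        "destinationTable": {
            "projectId": "my-project",  # hypothetical project ID
            "datasetId": "dwh_bikesharing",
            "tableId": "temporary_stations_count",
        },
        "writeDisposition": "WRITE_TRUNCATE",
        "createDisposition": "CREATE_IF_NEEDED",
        "priority": "BATCH",
    }
}
# execute_query = BigQueryInsertJobOperator(
#     task_id='bq_to_bq', configuration=configuration, location='EU')
```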

SQLAlchemy Select as Struct to resolve error "Subquery of type IN must have only one output column"

How can I get the following SQLAlchemy expression to work with BigQuery (similarly to how it works with SQLite)?
conn.execute(
    data.update().
    where(sqlalchemy.tuple_(data.c.person_id, data.c.person_name).
          not_in(sqlalchemy.sql.select(people.c.id, people.c.name))).
    values(invalid=True))
This results in the error Subquery of type IN must have only one output column when run with BigQuery:
sqlalchemy.exc.DatabaseError: (google.cloud.bigquery.dbapi.exceptions.DatabaseError) 400 Subquery of type IN must have only one output column at [1:98]
Location: US
Job ID: 4331d568-e06f-41fa-813b-9754574587d7
[SQL: UPDATE `data` SET `invalid`=%(invalid:BOOL)s WHERE ((`data`.`person_id`, `data`.`person_name`) NOT IN (SELECT `people`.`id`, `people`.`name`
FROM `people`))]
[parameters: {'invalid': True}]
(Background on this error at: https://sqlalche.me/e/14/4xp6)
Using SELECT AS STRUCT in the subquery resolves this error -- how can I do this with SQLAlchemy (and why doesn't SQLAlchemy automatically do this)? For example this works in BigQuery:
UPDATE `jdimatteo-v.scratch.data`
SET `invalid`=True
WHERE (person_id, person_name)
NOT IN (SELECT AS STRUCT id, name
FROM `jdimatteo-v.scratch.people`
)
Here is a full working example with SQLite, with the connection to BigQuery that results in the above error commented out:
#!/usr/bin/env python3
import sqlalchemy  # pip install SQLAlchemy==1.4.27

engine = sqlalchemy.create_engine('sqlite://')
# engine = sqlalchemy.create_engine('bigquery://jdimatteo-v/scratch') # pip install sqlalchemy-bigquery==1.4.3

metadata_obj = sqlalchemy.MetaData()

people = sqlalchemy.Table(
    'people', metadata_obj,
    sqlalchemy.Column('id', sqlalchemy.Integer),
    sqlalchemy.Column('name', sqlalchemy.String),
)

data = sqlalchemy.Table(
    'data', metadata_obj,
    sqlalchemy.Column('person_id', sqlalchemy.Integer),
    sqlalchemy.Column('person_name', sqlalchemy.String),
    sqlalchemy.Column('data_foo', sqlalchemy.String),
    sqlalchemy.Column('invalid', sqlalchemy.Boolean),
)

metadata_obj.create_all(engine)
conn = engine.connect()

def create_records():
    conn.execute(people.delete().where(True == True))
    conn.execute(people.insert().values(id=1, name='Mary'))
    conn.execute(people.insert().values(id=2, name='James'))
    conn.execute(data.delete().where(True == True))
    conn.execute(data.insert().values(person_id=1, person_name='Mary', data_foo='good foo', invalid=None))
    conn.execute(data.insert().values(person_id=42, person_name='Bob', data_foo='chop suey', invalid=None))
    conn.execute(data.insert().values(person_id=1, person_name='James', data_foo='mixed up', invalid=None))

def dynamic_update():
    conn.execute(
        data.update().
        where(sqlalchemy.tuple_(data.c.person_id, data.c.person_name).
              not_in(sqlalchemy.sql.select(people.c.id, people.c.name))).
        values(invalid=True))

def print_records(msg):
    print(f'{msg}:')
    print(" people:")
    for person in conn.execute(sqlalchemy.sql.select(people)):
        print(' ', person)
    print(" data:")
    for datum in conn.execute(sqlalchemy.sql.select(data)):
        print(' ', datum)
    print()

create_records()
print_records('initial values')
dynamic_update()
print_records('after update')
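Since core SQLAlchemy has no spelling for BigQuery's SELECT AS STRUCT, one possible workaround (a sketch, not verified against BigQuery) is to bypass the tuple_().not_in() construct for this one statement and send the BigQuery-specific SQL through sqlalchemy.text():

```python
# Sketch: hand-written BigQuery SQL using SELECT AS STRUCT, intended to be
# executed via sqlalchemy.text() instead of the tuple_().not_in() construct.
# Table names mirror the example above; not verified against BigQuery.
update_sql = """
UPDATE `data`
SET `invalid` = true
WHERE (person_id, person_name)
NOT IN (SELECT AS STRUCT id, name FROM `people`)
"""
# conn.execute(sqlalchemy.text(update_sql))  # with the BigQuery engine
```

The trade-off is that this statement is no longer portable to SQLite, so it would need to be guarded by dialect if both backends must keep working.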

Python 2.7 Selenium unable to extract data

I am trying to extract data, but it returns an error:
NoSuchElementException: Message: u'Unable to locate element: {"method":"xpath","selector":"//*[@id=\'searchpopbox\']"}' ; Stacktrace:
    at FirefoxDriver.findElementInternal_ (file:///tmp/tmpjVcHQR/extensions/fxdriver@googlecode.com/components/driver_component.js:8444)
    at FirefoxDriver.findElement (file:///tmp/tmpjVcHQR/extensions/fxdriver@googlecode.com/components/driver_component.js:8453)
    at DelayedCommand.executeInternal_/h (file:///tmp/tmpjVcHQR/extensions/fxdriver@googlecode.com/components/command_processor.js:10456)
    at DelayedCommand.executeInternal_ (file:///tmp/tmpjVcHQR/extensions/fxdriver@googlecode.com/components/command_processor.js:10461)
    at DelayedCommand.execute/< (file:///tmp/tmpjVcHQR/extensions/fxdriver@googlecode.com/components/command_processor.js:10401)
My code is below; I am trying to get the list from the link:
import time

from selenium import webdriver
from selenium.webdriver.common.keys import Keys

profile = webdriver.FirefoxProfile()
profile.set_preference('browser.download.folderList', 2)
profile.set_preference('browser.download.manager.showWhenStarting', False)
browser = webdriver.Firefox(profile)
url = 'https://www.bursamarketplace.com/index.php?tpl=th001_search_ajax'
browser.get(url)
time.sleep(15)
a = browser.find_element_by_xpath("//*[@id='searchpopbox']")
print a
I am seeking your help to get the right xpath for the url.
This gets all the listing for that table.
import time

from webdriver_manager.chrome import ChromeDriverManager
from selenium import webdriver

driver = webdriver.Chrome(ChromeDriverManager().install())
driver.get("https://www.bursamarketplace.com/index.php?tpl=th001_search_ajax")
time.sleep(15)
a = driver.find_element_by_xpath("//*[@id='searchpopbox']")
print(a.text)
Or, without ChromeDriverManager, the same thing applies with an explicit driver path (and likewise for Firefox):
.Chrome(executable_path='absolutepathofchromedriver.exe')

Psycopg2 is not working

I am trying to connect to a Postgres database using psycopg2 in a Django app, but I am unable to connect. It is not throwing any exception whatsoever, so I am not even able to debug it.
I used -
db_settings = settings.DATABASES['default']
conn = psycopg2.connect("dbname=" + db_settings['NAME'] +
                        " user=" + db_settings['USER'] + " host=" +
                        db_settings['HOST'] + " password=" + db_settings['PASSWORD'])
cur = conn.cursor()
Here settings is the settings file in my Django app. Any suggestions for how I can move forward and debug this so that I can find what I might be doing wrong?
To debug, open a Django shell:
./manage.py shell
Then, in the console, import all you need and test:
import psycopg2
from django.conf import settings
# have fun
Or use pdb in your code.
Hope this helps.
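Separately, the connect() call in the question builds its DSN by string concatenation, which is easy to get wrong; passing keyword arguments to psycopg2.connect sidesteps that whole class of bug. A sketch, with a stand-in dict (hypothetical values) in place of settings.DATABASES['default'] so it reads without a Django project:

```python
# Stand-in for settings.DATABASES['default'] (hypothetical values).
db_settings = {'NAME': 'mydb', 'USER': 'app', 'HOST': 'localhost', 'PASSWORD': 'secret'}

# Keyword arguments avoid fragile "dbname=... user=..." string building.
connect_kwargs = dict(
    dbname=db_settings['NAME'],
    user=db_settings['USER'],
    host=db_settings['HOST'],
    password=db_settings['PASSWORD'],
)
# conn = psycopg2.connect(**connect_kwargs)
```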

How do I inspectdb 1 table from database which Contains 1000 tables

I have a schema which contains 1000 tables, and many of them I don't need.
How can I inspectdb just the tables that I need?
You can generate the model of a single table by running this command:
python manage.py inspectdb TableName > output.py
This also works if you want to generate the model of a view.
You can do it in the Python console, or in a *.py file:
from django.core.management.commands.inspectdb import Command
from django.conf import settings
from your_project_dir.settings import DATABASES # replace `your_project_dir`
settings.configure()
settings.DATABASES = DATABASES
Command().execute(table_name_filter=lambda table_name: table_name in ('table_what_you_need_1', 'table_what_you_need_2', ), database='default')
https://github.com/django/django/blob/master/django/core/management/commands/inspectdb.py#L32
You can do it with the following command in Django 2.2 or above:
python manage.py inspectdb --database=[dbname] [table_name] > output.py
You can get the models of the tables you want by doing:
python manage.py inspectdb table1 table2 tableN > output.py
This way you can select only the tables you want.
You can generate the model's Python code and write it to the console programmatically:
from django.core.management.commands.inspectdb import Command
command = Command()
command.execute(
    database='default',
    force_color=True,
    no_color=False,
    include_partitions=True,
    include_views=True,
    table=[
        'auth_group',
        'django_session',
    ]
)
Set table=[] (an empty list) to get all tables.
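Command().execute() writes the generated models to stdout; to get them into a string or file programmatically, contextlib.redirect_stdout can capture the output. A sketch with a stand-in print() in place of the real command, so it runs without a configured Django project:

```python
import contextlib
import io

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    # In a configured project this would be the real call, e.g.:
    # Command().execute(database='default', table=['auth_group'], ...)
    print("class AuthGroup(models.Model):")  # stand-in for inspectdb output

generated = buf.getvalue()
# with open('output.py', 'w') as f:
#     f.write(generated)
```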