I have created a stored procedure in SSMS for the query SELECT * FROM TABLE and now I want to create a Django API and test it. What is the entire procedure?
My SQL stored procedure script:
USE [test]
GO
/****** Object: StoredProcedure [dbo].[spGetAll] ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author: <Author,,Name>
-- Create date: <Create Date,,>
-- Description: <Description,,>
-- =============================================
CREATE PROCEDURE [dbo].[spGetAll]
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for procedure here
SELECT * from app_comment
END
GO
To call a stored procedure from Django, do the following:
from django.db import connection

def dictfetchall(cursor):
    # return all rows from a cursor as a list of dicts keyed by column name
    columns = [col[0] for col in cursor.description]
    return [
        dict(zip(columns, row))
        for row in cursor.fetchall()
    ]

with connection.cursor() as cursor:
    cursor.callproc("stored_procedure", [arg1, arg2, arg3])
    data = dictfetchall(cursor)
For reference, see the Django docs on executing custom SQL directly.
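Applied to the spGetAll procedure from the question (which takes no arguments), a minimal view might look like the sketch below, assuming a SQL Server backend such as mssql-django; the view name is illustrative, not something from the question:

# views.py -- a minimal sketch, assuming a SQL Server database backend
from django.db import connection
from django.http import JsonResponse

def dictfetchall(cursor):
    # map each row to a dict keyed by column name
    columns = [col[0] for col in cursor.description]
    return [dict(zip(columns, row)) for row in cursor.fetchall()]

def comment_list(request):
    with connection.cursor() as cursor:
        cursor.execute("EXEC [dbo].[spGetAll]")  # spGetAll takes no parameters
        data = dictfetchall(cursor)
    return JsonResponse(data, safe=False)

Wire the view up in urls.py as usual and hit it with the browser or curl to test.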
I could not get alamshafi2263's answer to work; it gave various syntax errors. My procedure takes a parameter but returns no value.
Here is what I used successfully:
from django.db import connection
local_cursor = connection.cursor()
call_definition = f'call public.ap_update_holding_value({transaction_id})'
local_cursor.execute(call_definition)
In the above, transaction_id is the parameter value.
Tested with Django==3.1.13, psycopg2==2.9.1
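If transaction_id ever comes from user input, a parameterized form of the same call is safer, since psycopg2 then handles the quoting (a small variation on the code above):

from django.db import connection

with connection.cursor() as cursor:
    # %s placeholder: psycopg2 substitutes and quotes transaction_id safely
    cursor.execute('call public.ap_update_holding_value(%s)', [transaction_id])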
import mysql.connector
connection = mysql.connector.connect(user="REMOVED",
password="REMOVED",
host="REMOVED",
database="REMOVED")
cur = connection.cursor()
# Latitude - remove letter A
cur.execute("UPDATE tau._inm_exportados_test_csv SET latitud = REPLACE (latitud, 'a=','');")
print("Latitude change remove letter A - executed!")
# Longitude - remove letter A
cur.execute("UPDATE tau._inm_exportados_test_csv SET longitud = REPLACE (longitud, 'a=','');")
print("Longitude change remove letter A - executed!")
# Latitude - MODIFY COLUMN
cur.execute("ALTER TABLE tau._inm_exportados_test_csv MODIFY COLUMN latitud DECIMAL(10,6);")
print("Latitude - MODIFY COLUMN - executed!")
# Longitude - MODIFY COLUMN
cur.execute("ALTER TABLE tau._inm_exportados_test_csv MODIFY COLUMN longitud DECIMAL(10,6);")
print("Longitude - MODIFY COLUMN - executed!")
# Post Code data type change
cur.execute("ALTER TABLE tau._inm_exportados_test_csv MODIFY COLUMN codigo_postal varchar(255);)")
print("Post Code data type change to varchar(255) - executed!")
connection.commit()
cur.close()
connection.close()
I'm trying to make this simple list of statements work, without success. What makes it more confusing is that the first four statements work, whereas the final one fails even when I comment out the rest! The final statement gets the following response:
mysql.connector.errors.InterfaceError: Use multi=True when executing multiple statements
The data type for codigo_postal is int(11), unlike latitud and longitud, which are varchar.
I have tried creating new connections, new cursors, and new connections AND cursors. I have tried adding multi="True" and combining the statements into one operation. I have tried adding multi="True" to each cur.execute() as both the second and third parameter. I have run the statement in Workbench to ensure it is valid, and it works.
No success with it here though...
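One thing worth checking before reaching for multi=True: the failing statement above contains a stray ) after the semicolon inside the string (varchar(255);)), and mysql.connector treats anything after a ; as a second statement, which is exactly what triggers that InterfaceError. If that typo is the culprit, the plain execute should work:

# the original string ended in ";)" -- the ")" after the ";" makes the
# connector think a second statement follows the semicolon
cur.execute("ALTER TABLE tau._inm_exportados_test_csv MODIFY COLUMN codigo_postal varchar(255);")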
You can commit after executing DML (Data Manipulation Language) commands. Using multi=True can also be convenient for this job, but then you need to run the generator created by execute. See the doc.
Ordinary method:
cur = connection.cursor()

def alter(state, msg):
    try:
        cur.execute(state)
        connection.commit()
    except Exception as e:
        connection.rollback()
        raise e
    print(msg)

alter("ALTER TABLE address MODIFY COLUMN id int(15);", "done")
alter("ALTER TABLE address MODIFY COLUMN email varchar(35);", "done")
alter("ALTER TABLE address MODIFY COLUMN person_id int(35);", "done")
With multi=True:
cur = connection.cursor()

def alter(state, msg):
    result = cur.execute(state, multi=True)
    result.send(None)
    print(msg, result)

try:
    alter("ALTER TABLE address MODIFY COLUMN id int(45)", "done")
    alter("ALTER TABLE address MODIFY COLUMN email varchar(25)", "done")
    alter("ALTER TABLE address MODIFY COLUMN person_id int(25);", "done")
    connection.commit()
except Exception as e:
    connection.rollback()
    raise e
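One caveat about result.send(None): it advances the generator just one step, so only the first statement in state would run if the string contained several. Draining the generator instead covers that case (a small variation on the helper above, not part of the original answer):

def alter(state, msg):
    # iterate the generator so every statement in `state` actually executes
    for _ in cur.execute(state, multi=True):
        pass
    print(msg)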
I had the same problem.
I wanted my code to be clean and to have all my commands in a list, running them in sequence.
I found this link and this link, and finally was able to write this code:
import mysql.connector as sql
from mysql.connector import Error

commands = [
    '''
    USE sakila;
    SELECT * FROM actor;
    ''',
    '''
    USE sakila;
    SELECT * FROM actor WHERE actor_id < 10;
    '''
]

connection_config_dict = {
    'user': 'username',
    'password': 'password',
    'host': '127.0.0.1',
}

try:
    connection = sql.connect(**connection_config_dict)
    if connection.is_connected():
        db_Info = connection.get_server_info()
        print("Connected to MySQL Server version ", db_Info, '\n')
        cursor = connection.cursor()
        for command in commands:
            # each statement runs as the generator is advanced
            for result in cursor.execute(command, multi=True):
                if result.with_rows:
                    print("Rows produced by statement '{}':".format(result.statement))
                    print(result.fetchall())
                else:
                    print("Number of rows affected by statement '{}': {}".format(result.statement, result.rowcount), '\n')
except Error as e:
    print("Error while connecting to MySQL", e, '\n')
finally:
    if connection.is_connected():
        cursor.close()
        connection.close()
        print("MySQL connection is closed", '\n')
I wrote a query for one of my BigQuery tables, historical, and I would like to copy the result of this query into a new BigQuery table called historical_recent. I am having difficulty figuring out how to do this operation with Python. Right now, I am able to execute my query and get the expected result:
SELECT * FROM gcp-sandbox.dailydev.historical WHERE (date BETWEEN '2015-11-05 00:00:00' AND '2015-11-07 23:00:00')
I am also able to copy my BigQuery table without making any changes with this script:
from google.cloud import bigquery
client = bigquery.Client()
job = client.copy_table(
'gcp-sandbox.dailydev.historical',
'gcp-sandbox.dailydev.historical_copy')
How can I combine both using Python?
You can use an INSERT statement, as in the example below:
INSERT `gcp-sandbox.dailydev.historical_recent`
SELECT *
FROM `gcp-sandbox.dailydev.historical`
WHERE date BETWEEN '2015-11-05 00:00:00' AND '2015-11-07 23:00:00'
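The same INSERT can be run from Python with the client library; this is a minimal sketch using the table names from the question:

from google.cloud import bigquery

client = bigquery.Client()
sql = """
INSERT `gcp-sandbox.dailydev.historical_recent`
SELECT *
FROM `gcp-sandbox.dailydev.historical`
WHERE date BETWEEN '2015-11-05 00:00:00' AND '2015-11-07 23:00:00'
"""
client.query(sql).result()  # result() blocks until the job finishes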
Alternatively, you can save the query result by giving the query a destination table:
from google.cloud import bigquery
client = bigquery.Client()
# Target table to save results
table_id = "gcp-sandbox.dailydev.historical_recent"
job_config = bigquery.QueryJobConfig(
    allow_large_results=True,
    destination=table_id,
    use_legacy_sql=False
)
sql = """
SELECT * FROM `gcp-sandbox.dailydev.historical`
WHERE (date BETWEEN '2015-11-05 00:00:00' AND '2015-11-07 23:00:00')
"""
query = client.query(sql, job_config=job_config)
query.result()
print("Query results loaded to the table {}".format(table_id))
This example is based on the Google documentation.
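If historical_recent might already exist, QueryJobConfig also accepts a write_disposition; WRITE_TRUNCATE (shown here as an assumption about the desired behavior) overwrites the table instead of failing:

job_config = bigquery.QueryJobConfig(
    destination=table_id,
    write_disposition="WRITE_TRUNCATE",  # replace any existing rows
)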
I am using
Python 2.7
cx_Oracle 6.0.2
I am doing something like this in my code
import cx_Oracle
connection_string = "%s:%s/%s" % ("192.168.8.168", "1521", "xe")
connection = cx_Oracle.connect("system", "oracle", connection_string)
cur = connection.cursor()
print "Connection Version: {}".format(connection.version)
query = "select * from product_information"
cur.execute(query)
result = cur.fetchone()
print result
I got the output like this
Connection Version: 11.2.0.2.0
(1, u'????????????', 'test')
I am using following query to create table in oracle database
CREATE TABLE product_information
( product_id NUMBER(6)
, product_name NVARCHAR2(100)
, product_description VARCHAR2(1000));
I used the following query to insert data
insert into product_information values(2, 'दुःख', 'teting');
Edit 1
Query: SELECT * from NLS_DATABASE_PARAMETERS WHERE parameter IN ( 'NLS_LANGUAGE', 'NLS_TERRITORY', 'NLS_CHARACTERSET');
Result:
NLS_LANGUAGE: AMERICAN, NLS_TERRITORY: AMERICA, NLS_CHARACTERSET: AL32UTF8
I solved the problem.
First, I added NLS_LANG=.AL32UTF8 as an environment variable on the system where Oracle is installed.
Second, I passed the encoding and nencoding parameters to cx_Oracle's connect function, as below.
cx_Oracle.connect(username, password, connection_string,
encoding="UTF-8", nencoding="UTF-8")
This issue is also discussed at https://github.com/oracle/python-cx_Oracle/issues/157
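For what it's worth, cx_Oracle 8 and later default to UTF-8, so the encoding arguments are only needed on older versions such as the 6.0.2 used in the question; on a modern install the plain connect call should behave the same (an observation, not part of the original answer):

# cx_Oracle 8+: UTF-8 is the default encoding, no extra parameters needed
connection = cx_Oracle.connect(username, password, connection_string)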
MyModel.objects.filter(name="my name", date__lte=datetime.now()).query
Outputs:
SELECT "mymodel"."id", "mymodel"."name", "mymodel"."date"
FROM "mymodel"
WHERE ("mymodel"."name" = my name AND "mymodel"."date" <= 2016-02-24 20:24:00.456974+00:00)
Which is not valid SQL (the filter parameters are unquoted).
How can I get the exact SQL that will be executed, which is:
SELECT "mymodel"."id", "mymodel"."name", "mymodel"."date"
FROM "mymodel"
WHERE ("mymodel"."name" = 'my name' AND "mymodel"."date" <= '2016-02-24 20:24:00.456974+00:00'::timestamptz)
This isn't a full answer, but for my purposes I wanted to be able to rerun the query as raw SQL. I was able to get the sql and params using:
queryset = MyModel.objects.filter(name="my name", date__lte=datetime.now())
sql, sql_params = queryset.query.get_compiler(using=queryset.db).as_sql()
Then I could use these values to rerun the query as raw:
MyModel.objects.raw(sql, sql_params)
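On PostgreSQL there is also a shortcut for getting the fully bound SQL: the psycopg2 cursor underneath Django's wrapper has a mogrify method that returns the query with the parameters quoted. This is a sketch for that backend only, since mogrify is psycopg2-specific:

from datetime import datetime
from django.db import connection

queryset = MyModel.objects.filter(name="my name", date__lte=datetime.now())
sql, sql_params = queryset.query.get_compiler(using=queryset.db).as_sql()
with connection.cursor() as cursor:
    # .cursor reaches the raw psycopg2 cursor behind Django's wrapper
    print(cursor.cursor.mogrify(sql, sql_params))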
Here's how I can do it when MySQL is the backend,
cursor.execute('show tables')
rows = cursor.fetchall()
for row in rows:
    cursor.execute('drop table %s; ' % row[0])
But how can I do it when postgresql is the backend?
cursor.execute("""SELECT table_name FROM information_schema.tables WHERE table_schema='public' AND table_type != 'VIEW' AND table_name NOT LIKE 'pg_ts_%%'""")
rows = cursor.fetchall()
for row in rows:
try:
cursor.execute('drop table %s cascade ' % row[0])
print "dropping %s" % row[0]
except:
print "couldn't drop %s" % row[0]
Courtesy of http://www.siafoo.net/snippet/85
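If any of those table names need quoting (mixed case, reserved words), psycopg2's sql module can build the DROP safely; this is a sketch assuming psycopg2 >= 2.7, where that module was added, and a psycopg2-backed cursor:

from psycopg2 import sql

for row in rows:
    # sql.Identifier double-quotes the table name correctly
    cursor.execute(sql.SQL("DROP TABLE {} CASCADE").format(sql.Identifier(row[0])))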
You can use select * from pg_tables; to get a list of tables, although you probably want to filter with where schemaname <> 'pg_catalog'...
Based on another one of your recent questions, if you're trying to just drop all your django stuff, but don't have permission to drop the DB, can you just DROP the SCHEMA that Django has everything in?
Also on your drop, use CASCADE.
EDIT: Can you select * from information_schema.tables; ?
EDIT: Your column should be row[2] instead of row[0] and you need to specify which schema to look at with a WHERE schemaname = 'my_django_schema_here' clause.
EDIT: Or just SELECT table_name from pg_tables where schemaname = 'my_django_schema_here'; and row[0]
The documentation says that ./manage.py sqlclear prints the DROP TABLE SQL statements for the given app name(s).
I use this script to clear the tables. I put it in a script called phoenixdb.sh because it burns the DB down and a new one rises from the ashes. I use it to avoid accumulating lots of migrations in the early dev portion of a project.
set -e
python manage.py dbshell <<EOF
DROP SCHEMA public CASCADE;
CREATE SCHEMA public;
EOF
python manage.py migrate
This wipes the tables from the db without deleting the db itself. Your Django user will need to own the schema, though, which you can set up with:
alter schema public owner to django-db-user-name;
And you might want to change the owner of the db as well
alter database django-db-name owner to django-db-user-name;
\dt is the equivalent command in Postgres to list tables. Each row will contain values for (Schema, Name, Type, Owner), so you have to use the second (row[1]) value.
Anyway, your solution will break (in MySQL and PostgreSQL) when foreign-key constraints are involved, and even if there aren't any, you might get into trouble with the sequences. So the best way, in my opinion, is to simply drop the whole database and call initdb again (which is also the more efficient solution).