Python + MySQL DB dynamic insert query based on number of columns to insert - python-2.7

I'm a pretty novice programmer (recently learned functions), and have found myself re-writing the same "insert into MySQL table" function (below) from script to script, mainly just to modify these two sections: (name,insert_ts) and VALUES (%s, %s).
Is there a good way to re-write the function below so it accepts ANY number of values, based on the length of the tuple that holds the values, and also fills in the column headers from the 'labels' list? In other words, generate both the (name,insert_ts) part and the VALUES (%s, %s) part dynamically.
list_of_tuples = []  # list of records to be inserted

# take a list of dictionaries and build a list of tuples in the proper format/order
for dict1 in output:
    one_list = []
    one_list.extend((dict1['name'], dict1['insert_ts']))
    list_of_tuples.append(tuple(one_list))

labels = ['name', 'insert_ts']

# db_write accepts the table name as a str, labels as a list of str, and output as a list of tuples
def db_write(table, labels, output):
    local_cursor.executemany("""INSERT INTO my_table
                                (name,insert_ts)   -- this should be pulled from 'labels'
                                VALUES (%s, %s)    -- number of %s should come from len(labels)
                             """, list_of_tuples)
    local_db.commit()
    local_db.close()
    # print 'done posting!'
Or, is there a better way to accomplish what I'm trying to do, using mysqldb?
Thank you all in advance!
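One way to generalize this (a minimal sketch, assuming MySQLdb and that the row tuples are built in the same order as the labels list; the table/column names and the local_db connection are taken from the snippet above):

import MySQLdb

def db_write(db, table, labels, rows):
    # build "name, insert_ts" and "%s, %s" from the labels list
    columns = ', '.join(labels)
    placeholders = ', '.join(['%s'] * len(labels))
    sql = 'INSERT INTO %s (%s) VALUES (%s)' % (table, columns, placeholders)
    cursor = db.cursor()
    cursor.executemany(sql, rows)
    db.commit()

labels = ['name', 'insert_ts']
# build the tuples in the same order as labels
rows = [tuple(d[k] for k in labels) for d in output]
db_write(local_db, 'my_table', labels, rows)

Note that only the row values go through %s bind parameters; the table and column names are pasted into the SQL string, so they must come from your own code, never from user input.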

After a bit of experience (3 months, heh), wanted to update everyone on the solution that seems to work pretty well!
Instead of using MySQLdb, I spent some time learning the SQLAlchemy Python package, and would recommend everyone do the same!
SQL Alchemy allows you to:
1) Define a table within Python code (I used Excel to come up with column names, etc.).
2) Most important! You can pass a dictionary to SQLAlchemy, and as long as the dictionary's key names match the table's column names, everything will magically get posted to your SQL table. If you have 60 columns in your SQL table but your dict has only two keys - BAM, SQLAlchemy will take care of everything, post just the two values, and leave the other columns in MySQL as NULL. MAGIC!
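For reference, a rough sketch of that SQLAlchemy approach (1.x-style API; the connection URL, column types, and values here are illustrative):

import datetime
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String, DateTime

engine = create_engine('mysql://user:password@localhost/mydb')   # illustrative URL
metadata = MetaData()

# define the table in Python code
my_table = Table('my_table', metadata,
                 Column('id', Integer, primary_key=True),
                 Column('name', String(64)),
                 Column('insert_ts', DateTime))
metadata.create_all(engine)   # creates the table if it does not already exist

# keys that match column names get inserted; all other columns stay NULL
engine.execute(my_table.insert(), {'name': 'foo', 'insert_ts': datetime.datetime.now()})

Passing a list of such dicts instead of a single dict makes SQLAlchemy do an executemany-style batch insert.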

Related

Python Cx_Oracle; How Can I Execute a SQL Insert using a list as a parameter

I generate a list of ID numbers. I want to execute an insert statement that grabs all records from one table where the ID value is in my list and insert those records into another table.
Instead of running through multiple execute statements (which I know is possible), I found this cx_Oracle function that can supposedly execute everything with a single statement and a list parameter. (It also avoids clunky string formatting of the SQL statement before passing in the parameters.) But I think I need to alter my list before passing it in as a parameter; I'm just not sure how.
I referenced this web page:
https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-executemany.html
ids = getIDs()
print(ids)
[('12345',),('24567',),('78945',),('65423',)]
sql = """insert into scheme.newtable
select id, data1, data2, data3
from scheme.oldtable
where id in (%s)"""
cursor.prepare(sql)
cursor.executemany(None, ids)
I expected the SQL statement to execute as follows:
Insert into scheme.newtable
select id, data1, data2, data3 from scheme.oldtable where id in ('12345','24567','78945','65423')
Instead I get the following error:
ORA-01036: illegal variable name/number
Edit:
I found this StackOverflow: How can I do a batch insert into an Oracle database using Python?
I updated my code to prepare the statement beforehand and changed the list items to tuples, and I'm still getting the same error.
You use executemany() for batch DML, e.g. when you want to insert a large number of values into a table as an efficient equivalent of running multiple insert statements. There are cx_Oracle examples discussed in https://blogs.oracle.com/opal/efficient-and-scalable-batch-statement-execution-in-python-cx_oracle
However what you are doing with
insert into scheme.newtable
select id, data1, data2, data3
from scheme.oldtable
where id in (%s)
is a different thing - you are trying to execute one INSERT statement using multiple values in an IN clause. You would use a normal execute() for this.
Since Oracle keeps bind data distinct from SQL, you can't pass in multiple values to a single bind parameter because the data is treated as a single SQL entity, not a list of values. You could use the %s string substitution syntax you already have, but this is open to SQL injection attacks.
There are various generic techniques that are common to Oracle language interfaces, see https://oracle.github.io/node-oracledb/doc/api.html#sqlwherein for solutions that you can rewrite to Python syntax.
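One of those techniques, sketched in Python for this case (the :id0, :id1, ... bind names are just an illustrative convention):

# ids as plain strings rather than 1-tuples
ids = ['12345', '24567', '78945', '65423']

# build one named bind variable per value: :id0, :id1, ...
bind_names = [':id%d' % i for i in range(len(ids))]
sql = ('insert into scheme.newtable '
       'select id, data1, data2, data3 from scheme.oldtable '
       'where id in (%s)' % ', '.join(bind_names))

binds = dict(('id%d' % i, v) for i, v in enumerate(ids))
cursor.execute(sql, binds)
connection.commit()

Note that Oracle limits an IN list to 1000 expressions, so for very large ID lists the temporary-table approach below scales better.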
Using a temporary table to hold the ids (batch insert):
cursor.prepare('insert into temp_table (id) values (:1)')
dictList = [{'1': x} for x in ids]
cursor.executemany(None, dictList)
Then insert the selected rows into the new table:
sql = """insert into scheme.newtable
         select o.id, o.data1, o.data2, o.data3
         from scheme.oldtable o
         inner join temp_table t on o.id = t.id"""
cursor.execute(sql)
The script to create the temporary table in Oracle:
CREATE GLOBAL TEMPORARY TABLE temp_table
(
  ID number
);
commit
I hope this is useful.
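Putting those steps together end to end, a rough sketch (assuming the global temporary table above already exists and ids is the list of 1-tuples printed in the question):

# load the ids into the temporary table
cursor.prepare('insert into temp_table (id) values (:1)')
cursor.executemany(None, ids)   # ids is a list of 1-tuples, e.g. [('12345',), ...]

# copy the matching rows across in a single statement
cursor.execute("""insert into scheme.newtable
                  select o.id, o.data1, o.data2, o.data3
                  from scheme.oldtable o
                  inner join temp_table t on o.id = t.id""")
connection.commit()

Keep in mind that a global temporary table defaults to ON COMMIT DELETE ROWS, so don't commit between loading temp_table and running the INSERT ... SELECT (or create the table with ON COMMIT PRESERVE ROWS).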

Python dictionary map to SQL string with list comprehension

I have a python dictionary that maps column names from a source table to a destination table.
Note: this question was answered in a previous thread for a different query string, but this query string is more complicated and I'm not sure if it can be generated using the same list comprehension method.
Dictionary:
tablemap_computer = {
'ComputerID' : 'computer_id',
'HostName' : 'host_name',
'Number' : 'number'
}
I need to dynamically produce the following query string, such that it will update properly when new column name pairs are added to the dictionary.
(ComputerID, HostName, Number) VALUES (%(computer_id)s, %(host_name)s, %(number)s)
I started with a list comprehension, but so far I have only been able to generate the first part of the query string with this technique.
queryStrInsert = '('+','.join([tm_val for tm_key, tm_val in tablemap_computer.items()])+')'
print(queryStrInsert)
#Output
#(computer_id,host_name,number)
#Still must generate the remaining part of the query string parameterized VALUES
If I understand what you're trying to get at, you can get it done this way:
holder = list(zip(*tablemap_computer.items()))
"insert into mytable ({0}) values ({1})".format(",".join(holder[0]), ",".join(["%({})s".format(x) for x in holder[1]]))
This should yield:
# 'insert into mytable (HostName,Number,ComputerID) values (%(host_name)s,%(number)s,%(computer_id)s)'
I hope this helps.
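As a small usage sketch (assuming a driver that uses the pyformat parameter style, such as MySQLdb or psycopg2; the row values here are illustrative), each row can then be passed as a dict:

holder = list(zip(*tablemap_computer.items()))
sql = "insert into mytable ({0}) values ({1})".format(
    ",".join(holder[0]),
    ",".join(["%({})s".format(x) for x in holder[1]]))

# each row is a dict keyed by the destination column names
row = {'computer_id': 42, 'host_name': 'web01', 'number': 7}
cursor.execute(sql, row)

The columns may come out in a different order than the dictionary was written because dict iteration order is arbitrary before Python 3.7, but zip(*items()) keeps each key aligned with its value, so the generated query is still consistent.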

How can I SELECT records using a select list made of foreign keys?

I have a table, DEBTOR, with a structure like this:
and a second table, DEBTOR.INFO structured like this:
I have a select list made of record IDs from the DEBTOR.INFO table. How can I
select * from DEBTOR WHERE 53 IN (name of select list)?
Is this even possible?
I realize this query looks more like SQL than RetrieVe but I wrote it that way for an easier understanding of what I'm trying to accomplish.
Currently, I accomplish this query by writing
SELECT DEBTOR WITH 53 EQ [paste list of DEBTOR.INFO record IDs]
but obviously this is unwieldy for large lists.
It looks to me that you can't do that. Even if you use an I-descriptor, it only works in one direction. TRANS("DEBTOR.INFO",53,0,"X") works from the DEBTOR file, but not the other way around, so TRANS("DEBTOR",#ID,53,"X") from DEBTOR.INFO will return nothing.
See this article on U2's site for a possible solution.
Would something like this work (two steps):
SELECT DEBTOR.INFO SAVING PACKET
LIST DEBTOR ....
This creates a select list of the data in the PACKET field in the DEBTOR.INFO file and makes it active. (If you get duplicate values that way, you can add the keyword UNIQUE after SAVING.)
Then the subsequent LIST command uses that active select list which contains values found in the #ID field of the file DEBTOR.
Not sure if you are still looking at this, but there is a simple option that will not require a lot of programming.
I did it with a program, a subroutine and a dictionary item.
First I set a named common variable to contain the list of DEBTOR.INFO ids:
SETLIST
*
* Use named common to hold list of keys
COMMON /MYKEYS/ KEYLIST
*
* Note for this example I am reading the list from SAVEDLISTS
OPEN "SAVEDLISTS" TO FILE ELSE STOP "CAN NOT OPEN SAVEDLISTS"
READ KEYLIST FROM FILE, "MIKE000" ELSE STOP "NO MIKE000 ITEM"
Now, I can create a subroutine that checks for a value in that list
CHECKLIST
SUBROUTINE CHECKLIST( RVAL, IVAL)
COMMON /MYKEYS/ KEYLIST
LOCATE IVAL IN KEYLIST <1> SETTING POS THEN
RVAL = 1
END ELSE RVAL = 0
RETURN
Lastly, I use a dictionary item to call the subroutine with the field I am looking for:
INLIST:
I
SUBR("CHECKLIST", FK)
IN LIST
10R
S
Now all I have to do is put the correct criteria on my list statement:
LIST DEBTOR WITH INLIST = 1 ACCOUNT STATUS FK
I'd use the very powerful EVAL with an XLATE:
SELECT DEBTOR WITH EVAL \XLATE('DEBTOR.INFO',#RECORD<53>,'-1','X')\ NE ""

Empty blob insert query in ODBC c ++ (oracle)

I need to insert a BLOB into an Oracle database. I am using C++ and the ODBC library.
I am stuck at the insert and update queries. It is unclear to me how to build a BLOB insert query.
I know how to write a query for a non-BLOB column.
My table structure is :
CREATE TABLE t_testblob (
filename VARCHAR2(30) DEFAULT NULL NULL,
apkdata BLOB NULL
)
I found an example of an insert and an update:
INSERT INTO table_name VALUES (memberlist,?,memberlist)
UPDATE table_name SET ImageFieldName = ? WHERE ID=yourId
But the structure of these queries is abstract to me. What should memberlist be? Why is there a "?"? Where are the values to be inserted?
Those question marks mean that it is a prepared statement. Such statements are good for both the server and the client: the server has less work because such a statement is easier to parse, and the client does not need to worry about SQL injection. The client prepares the query, builds a buffer of input values, and executes it.
Such a statement also executes very quickly compared to "normal" queries, especially in loops, when importing data from CSV files, etc.
I don't know which ODBC C++ library you use, since ODBC is strictly a C library. Other languages like Java or Python can use it too. I think the easiest example is in Python:
cursor = connection.cursor()
for txt in ('a', 'b', 'c'):
cursor.execute('SELECT * FROM test WHERE txt=?', (txt,))
Of course such PreparedStatement can be used in INSERT or UPDATE statements too, and for your example it can look like:
cursor.execute("INSERT INTO t_testblob (filename, apkdata) VALUE (?, ?)", filename, my_binary_data)

From a one to many SQL dataset Can I return a comma delimited list in SSRS?

I am returning a SQL dataset in SSRS (Microsoft SQL Server Reporting Services) with a one to many relationship like this:
ID REV Event
6117 B FTG-06a
6117 B FTG-06a PMT
6117 B GTI-04b
6124 A GBI-40
6124 A GTI-04b
6124 A GTD-04c
6136 M GBI-40
6141 C GBI-40
I would like to display it as a comma-delimited field in the last column [Event] like so:
ID REV Event
6117 B FTG-06a,FTG-06a PMT,GTI-04b
6124 A GBI-40, GTI-04b, GTD-04c
6136 M GBI-40
6141 C GBI-40
Is there a way to do this on the SSRS side of things?
You want to concatenate on the SQL side, not the SSRS side; that way you can combine these results in, say, a stored procedure and then send them to the reporting layer.
Remember, databases are there to work with the data. The report should just be the presentation layer, so there is no need to tire yourself out trying to get a report function to parse this data.
The best thing to do is handle this at the sproc level and push the data from the sproc to the report.
Based on your edit this is how you would do it:
To concat fields take a look at COALESCE.
You will then get a string concat of all the values you have listed.
Here's an example:
use Northwind
declare @CategoryList varchar(1000)
select @CategoryList = coalesce(@CategoryList + ', ', '') + CategoryName from Categories
select 'Results = ' + @CategoryList
Now, because you have an additional field, namely the ID value, you cannot just tack more values onto that query; you will need to use a CURSOR here, otherwise you will get the notorious error about including additional fields in a calculated query.
Take a look here for more help; make sure you look at the comment at the bottom, specifically the one posted by 'Alberto'. He has a similar issue to yours and you should be able to figure it out using his comment.