SQL code (all in one file that is eventually saved in the python variable "query"):
select @dtmax:=DATE_FORMAT(max(dt), '%Y%m') from table_A;
delete from table_B where DATE_FORMAT(dt, '%Y%m')=@dtmax;
Does mysql-connector allow the use of variable assignment like I've done in the query above, i.e. take the value of max(dt) from table_A and delete everything with that date from table_B?
python code:
c = conn.cursor(buffered=True)
c.execute(query, multi=True)
conn.commit()
conn.close()
All I know is that the 2nd SQL statement doesn't execute.
I can copy and paste the SQL code into Toad and run it there without any problems, but not through the mysql.connector library. I would have used pandas, but this is a legacy script written by someone else and I don't have time to re-write everything.
I'd kindly appreciate some help.
When you use multi=True, execute() will return a generator. You need to iterate over that generator to actually advance the processing to the next SQL statement in your multi-statement query:
c = conn.cursor(buffered=True)
results = c.execute(query, multi=True)
for cur in results:
    print('cursor:', cur)
    if cur.with_rows:
        print('result:', cur.fetchall())
conn.commit()
conn.close()
cur.with_rows will be True if there are results to fetch for the current statement.
This is all described in the documentation of MySQLCursor.execute()
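As a side note (not part of the original answer), MySQL user variables live for the whole connection session, so another option is to skip multi=True and run the two statements separately on the same connection. A minimal sketch, assuming conn is the same mysql-connector connection and the table/column names from the question:

c = conn.cursor(buffered=True)
# @dtmax is a session variable, so it survives between separate execute() calls
c.execute("SELECT @dtmax := DATE_FORMAT(MAX(dt), '%Y%m') FROM table_A")
c.fetchall()  # consume the single-row result of the SELECT
c.execute("DELETE FROM table_B WHERE DATE_FORMAT(dt, '%Y%m') = @dtmax")
conn.commit()
conn.close()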
Firstly, I have a table in SQLite3 with two fields: car (TEXT NOT NULL) and checkout (TEXT NOT NULL).
car    checkout
red    %d%d/%m%m/%Y (for example 27/09/2021)
Second, I wrote a script whose first step is: when I run it, all the entries whose checkout date is less than or equal to the current date are deleted.
Third, in the same script, a SELECT checks whether a car is in the list with a checkout later than the current date, and if so it is excluded from my available cars.
The code snippet that implements the first step is the following:
try:
    con = lite.connect(DB)
    with con:
        paper=[]
        cur=con.cursor()
        cur.execute("DELETE FROM CHECK_TABLE WHERE DATE(substr(checkout,7,4)||substr(checkout,4,2)||substr(checkout,1,2))<=DATE(strftime('%Y%m%d',date('now')))")
        con.commit()
        print('Entries with old dates deleted.')
except lite.Error as e:
    print('Error connection: ',e)
The problem is that it is not deleting anything. The first strange behaviour is that the SQL query works in DB Browser,
Image: Proof DB Browser in Windows 10 - Python 2.7 - SQLite3
the second is that no error is raised, and the third is that I tested it two days ago and it worked normally! I really need your thoughts.
The same logic is in the following code snippet, which is the third step I described above with the SELECT command.
def ReadDateAndCar(car):
    try:
        con = lite.connect(DB)
        with con:
            paper=[]
            cur=con.cursor()
            cur.execute("SELECT DISTINCT car FROM CHECK_TABLE WHERE car='"+car+"' AND DATE(substr(checkout,7,4)||substr(checkout,4,2)||substr(checkout,1,2))<=DATE(strftime('%Y%m%d',date('now')))")
            free_cars=cur.fetchall()
            return free_cars
    except lite.Error as e:
        print('Error connection: ',e)
        return 0
Exactly the same problems: the SQL query works fine, no Python error is raised, and it worked a few days ago. Can someone enlighten me?
Both your queries are wrong and they don't work in DB Browser either.
What you should do is store the dates with the ISO format YYYY-MM-DD, because this is the only text date format compatible with SQLite's datetime functions like date() and strftime() and it is comparable.
If you use any other format the result of these functions is null and this is what happens in your case.
The expressions substr(checkout,7,4)||substr(checkout,4,2)||substr(checkout,1,2) and strftime('%Y%m%d',date('now')) return dates in the format YYYYMMDD and if you use them inside date() or strftime() the result is null.
Since both sides of the inequality yield dates in the format YYYYMMDD, they are directly comparable and you should not wrap them in date().
The condition should be:
substr(checkout, -4) || substr(checkout, 4, 2) || substr(checkout, 1, 2) <= strftime('%Y%m%d', 'now')
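Applied to the deletion snippet from the question, a minimal sketch of the corrected statement (assuming DB is the database path used in the question):

import sqlite3 as lite

con = lite.connect(DB)
with con:  # the with-block commits automatically on success
    con.execute(
        "DELETE FROM CHECK_TABLE "
        "WHERE substr(checkout, -4) || substr(checkout, 4, 2) || substr(checkout, 1, 2) "
        "<= strftime('%Y%m%d', 'now')"
    )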
I'm pretty new to programming (recently learned functions), and have found myself re-writing the same "insert into MySQL table" function (below) from script to script, mainly just to modify two sections: (name,insert_ts) and VALUES (%s, %s).
Is there a good way to re-write the below to accept ANY number of values, based on the length of the tuple that contains the values, and to insert the column names based on the 'labels' list, i.e. the VALUES (%s, %s) part and the (name,insert_ts) part?
list_of_tuples = []  # list of records to be inserted

# take a list of dictionaries and create a list of tuples in the proper format/order
for dict1 in output:
    one_list = []
    one_list.extend((dict1['name'], dict1['insert_ts']))
    list_of_tuples.append(tuple(one_list))

labels = ['name', 'insert_ts']

# db_write accepts table name as str, labels as str, and output as list of tuples
def db_write(table, labels, output):
    local_cursor.executemany(""" INSERT INTO my_table
                                 (name,insert_ts)  #this is pulled from 'labels'
                                 VALUES (%s, %s)   #number of %s comes from len(labels)
                             """, list_of_tuples)
    local_db.commit()
    local_db.close()
    #print 'done posting!'
Or, is there a better way to accomplish what I'm trying to do, using mysqldb?
Thank you all in advance!
After a bit of experience (3 months, heh), I wanted to update everyone on the solution that seems to work pretty well!
Instead of using mysqldb, I spent some time learning how to use the SQLAlchemy Python package, and would recommend everyone do the same!
SQLAlchemy allows you to:
1) Define a table within Python code (I used Excel to come up with column names, etc.).
2) Most important! You can pass a dictionary to SQLAlchemy, and as long as the dictionary's keys match the table's column names, everything will magically get posted to your SQL table. If you have 60 columns in your SQL table but your dict has only two keys, BAM, SQLAlchemy will take care of everything, post just the two values, and leave the other columns in MySQL blank. MAGIC!
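For anyone curious, a minimal sketch of that dictionary-based insert, assuming SQLAlchemy 1.4+; the connection string, table name and column types are illustrative, not taken from the original script:

from datetime import datetime
from sqlalchemy import create_engine, MetaData, Table, Column, String, DateTime, insert

engine = create_engine("mysql+mysqldb://user:pwd@localhost/mydb")  # hypothetical credentials
metadata = MetaData()

# only the columns you care about need to be listed; unspecified columns stay NULL on insert
my_table = Table(
    "my_table", metadata,
    Column("name", String(64)),
    Column("insert_ts", DateTime),
)

rows = [{"name": "foo", "insert_ts": datetime.now()}]  # dict keys match column names

with engine.begin() as conn:  # begin() commits automatically on success
    conn.execute(insert(my_table), rows)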
I have a problem when I try to use pyodbc's executemany function.
I have an Oracle database and I want to extract data for multiple days.
I cannot use BETWEEN in my request, because the database is not indexed on the date field and it's taking forever.
I want to query each day manually and process the answers.
I cannot thread this part, so I wanted to use executemany to get the rows more quickly.
The problem is that when I use executemany I only get the result for the last argument.
Here is my code:
import pyodbc
conn = pyodbc.connect('DRIVER={Oracle in instantclient_11_2};DBQ=dbname;UID=uid;PWD=pwd')
cursor = conn.cursor()
query = "SELECT date FROM table WHERE date = TO_DATE(?, 'DD/MM/YYYY')"
query_args = (
    ('29/04/2016',),
    ('28/04/2016',),
)
cursor.executemany(query, query_args)
rows = cursor.fetchall()
In rows, I can only find rows with (datetime.datetime(2016, 4, 28, 0, 0), ).
Always the last argument.
I am using Python 2.7.9 from WinPython against an Oracle database with an 11.0.2 client.
Except this query, every other query is perfectly fine.
I cannot use the IN () syntax for 2 reasons:
I want to limit operations on the database side and do most of the work on the script side (I've tried, but it's way too long)
I might have more than 1000 different dates in the request.
(Right now I'm using IN() OR IN() OR IN()... but if anyone finds something better, that would be wonderful!)
Am I doing something wrong?
Thanks for helping.
executemany runs your query once per argument, but only the last result set remains available, so it is not suited to fetching rows. If you want to run the query for multiple dates, either use an IN clause (this requires modifying query_args a bit):
"SELECT date FROM table WHERE date in (TO_DATE(?, 'DD/MM/YYYY'), TO_DATE(?, 'DD/MM/YYYY'))"
query_args = (
    ('29/04/2016', '28/04/2016'),
)
or loop through each date argument and accumulate the results:
rows = []
for query_arg in query_args:
    cursor.execute(query, query_arg)
    rows.extend(cursor.fetchall())
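If the list of dates can grow past Oracle's 1000-item limit on a single IN list (as the question hints), the placeholders can also be generated in chunks. A rough sketch, with an illustrative helper name and chunk size, not taken from the original answer:

def fetch_dates(cursor, dates, chunk_size=1000):
    """Run the SELECT once per chunk of dates and collect all rows."""
    rows = []
    for i in range(0, len(dates), chunk_size):
        chunk = dates[i:i + chunk_size]
        placeholders = ", ".join("TO_DATE(?, 'DD/MM/YYYY')" for _ in chunk)
        sql = "SELECT date FROM table WHERE date IN ({})".format(placeholders)
        cursor.execute(sql, chunk)
        rows.extend(cursor.fetchall())
    return rows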
I need to insert a BLOB into an Oracle database. I am using C++ and the ODBC library.
I am stuck at the insert and update queries; it is unclear to me how to build a BLOB insert query.
I know how to make a query for a non-BLOB column.
My table structure is :
CREATE TABLE t_testblob (
    filename VARCHAR2(30) DEFAULT NULL NULL,
    apkdata BLOB NULL
)
I found an example of an insert and an update:
INSERT INTO table_name VALUES (memberlist,?,memberlist)
UPDATE table_name SET ImageFieldName = ? WHERE ID=yourId
But the structure of these queries is abstract to me. What should memberlist be? Why is there a "?"? Where are the values to be inserted?
Those question marks mean that it is a prepared statement. Such statements are good for both server and client: the server has less work because such a statement is easier to parse, and the client does not need to worry about SQL injection. The client prepares the query, builds a buffer for the input values and executes it.
Such a statement also executes very quickly compared to "normal" queries, especially in loops, when importing data from CSV files, etc.
I don't know what ODBC C++ library you use, since ODBC is strictly a C library. Other languages like Java or Python can use it too. I think the easiest example is in Python:
cursor = connection.cursor()
for txt in ('a', 'b', 'c'):
    cursor.execute('SELECT * FROM test WHERE txt=?', (txt,))
Of course such prepared statements can be used in INSERT or UPDATE statements too, and for your example it could look like:
cursor.execute("INSERT INTO t_testblob (filename, apkdata) VALUES (?, ?)", filename, my_binary_data)
I have the following SQL statement:
USE "ws_results_db_2011_09_11_09_06_24";SELECT table_name FROM INFORMATION_SCHEMA.Tables WHERE table_name like 'NET_%_STAT' order by table_name
I am using the following C++ code to execute it:
IDBCreateCommandPtr spDBCreateCommand = GetTheDBCreateCommandPointer();
ICommandTextPtr spCommandText;
spDBCreateCommand->CreateCommand(NULL, IID_ICommandText, reinterpret_cast<IUnknown **>(&spCommandText));
spCommandText->SetCommandText(DBGUID_SQL, GetTheQueryText());
IRowsetPtr spRowset;
spCommandText->Execute(NULL, IID_IRowset, NULL, NULL, reinterpret_cast<IUnknown **>(&spRowset));
RowHandles hRows(spRowset, 0);
ULONG rowCount;
ULONG maxRowCount = 1;
spRowset->GetNextRows(DB_NULL_HCHAPTER, 0, maxRowCount, &rowCount, hRows.get_addr());
Two notes:
Error handling is omitted for brevity
RowHandles implements the RAII concept for HROW *
Anyway, I fail to execute the two SQL statements. What happens is that spCommandText->Execute returns S_OK, but sets spRowset to NULL.
If I execute the same spCommandText->Execute the second time (by moving back the instruction pointer during the debugging session), then a valid IRowset pointer is returned - I successfully obtain the correct column information using it. But spRowset->GetNextRows sets rowCount to 0 and returns DB_S_ENDOFROWSET - no luck.
The code is working fine when I execute a single SQL statement.
What am I doing wrong?
Thanks.
It is up to the client to split the SQL commands; isql does this on the ';'. In other words, you are asking for two commands: the USE and the SELECT.
So the fix is to run the two commands with separate CreateCommand/Execute calls.
Also note that in this case you can do the commands as one SQL statement:
SELECT table_name FROM ws_results_db_2011_09_11_09_06_24.INFORMATION_SCHEMA.Tables
WHERE table_name like 'NET_%_STAT'
order by table_name