Empty blob insert query in ODBC C++ (Oracle)

I need to insert a BLOB into an Oracle database. I am using C++ and the ODBC library.
I am stuck on the INSERT and UPDATE queries. It is unclear to me how to build a BLOB insert query.
I know how to write a query for a non-BLOB column.
My table structure is:
CREATE TABLE t_testblob (
  filename VARCHAR2(30) DEFAULT NULL NULL,
  apkdata BLOB NULL
)
I found an example of an insert and an update:
INSERT INTO table_name VALUES (memberlist,?,memberlist)
UPDATE table_name SET ImageFieldName = ? WHERE ID=yourId
But the structure of these queries is unclear to me. What should memberlist be? Why is there a "?"? Where are the values to be inserted?

Those question marks mean that it is a prepared statement. Such statements are good for both the server and the client: the server has less work because the statement is easier to parse, and the client does not need to worry about SQL injection. The client prepares the query, builds a buffer for the input values, and executes it.
Such a statement also executes very quickly compared to "normal" queries, especially in loops, when importing data from a CSV file, etc.
I don't know which ODBC C++ library you use, since ODBC itself is strictly a C library. Other languages like Java or Python can use it too. I think the easiest example is in Python:
cursor = connection.cursor()
for txt in ('a', 'b', 'c'):
    cursor.execute('SELECT * FROM test WHERE txt=?', (txt,))
Of course such a prepared statement can be used in INSERT or UPDATE statements too, and for your example it could look like:
cursor.execute("INSERT INTO t_testblob (filename, apkdata) VALUES (?, ?)", filename, my_binary_data)

Related

How do I create a table for all tables in MySQL

I'm using MySQL with C++ and I want to create a new table for all the tables in my second database. The code I have now is:
CREATE TABLE new_table LIKE original_table;
INSERT INTO new_table SELECT * FROM original_table;
I want this to work like a loop, where all the tables and the data in those tables are created for every table and piece of data in my second database. Can someone help me?
We can use a stored procedure to do the job in a loop. I just wrote the code and tested it in Workbench, copying all my tables (excluding views) from the sakila database into my sakila_copy database:
use testdb;
delimiter //
drop procedure if exists copy_tables //
create procedure copy_tables(old_db varchar(20), new_db varchar(20))
begin
  declare tb_name varchar(30);
  declare fin bool default false;
  declare c cursor for select table_name from information_schema.tables where table_schema=old_db and table_type='BASE TABLE';
  declare continue handler for not found set fin=true;
  open c;
  lp: loop
    fetch c into tb_name;
    if fin=true then
      leave lp;
    end if;
    set @create_stmt=concat('create table ',new_db,'.',tb_name,' like ',old_db,'.',tb_name,';');
    prepare ddl from @create_stmt;
    execute ddl;
    deallocate prepare ddl;
    set @insert_stmt=concat('insert into ',new_db,'.',tb_name,' select * from ',old_db,'.',tb_name,';');
    prepare dml from @insert_stmt;
    execute dml;
    deallocate prepare dml;
  end loop lp;
  close c;
end //
delimiter ;

create database sakila_copy;
call testdb.copy_tables('sakila','sakila_copy');

-- after the call, check the tables in sakila_copy to find the new tables
show tables in sakila_copy;
Note: As I stated before, only base tables are copied. I deliberately skipped views, as they provide logical access to tables and hold no data themselves.

Python + MySQL DB dynamic insert query based on number of columns to insert

I'm pretty novice at programming (recently learned functions), and have found myself re-writing the same "insert into mysql table" function (below) from script to script, mainly just to modify these two sections: (name,insert_ts) and VALUES (%s, %s).
Is there a good way to rewrite the below to accept ANY number of values, based on the length of a tuple that contains the values, as well as inserting the column headers based on the 'labels' list? That is, this part: VALUES (%s, %s), and this part: (name,insert_ts).
list_of_tuples = []  # list of records to be inserted

# take a list of dictionaries and create a list of tuples in proper format/order
for dict1 in output:
    one_list = []
    one_list.extend((dict1['name'], dict1['insert_ts']))
    list_of_tuples.append(tuple(one_list))

labels = ['name', 'insert_ts']

# db_write accepts table name as str, labels as a list of str, and output as list of tuples
def db_write(table, labels, output):
    local_cursor.executemany(""" INSERT INTO my_table
        (name,insert_ts)   # this is pulled from 'labels'
        VALUES (%s, %s)    # number of %s comes from len(labels)
        """, list_of_tuples)
    local_db.commit()
    local_db.close()
    # print 'done posting!'
Or, is there a better way to accomplish what I'm trying to do, using mysqldb?
Thank you all in advance!
After a bit of experience (3 months, heh), I wanted to update everyone on the solution that seems to work pretty well!
Instead of using mysqldb, I spent some time learning how to use the SQLAlchemy Python package, and would recommend everyone do the same!
SQLAlchemy allows you to:
1) Define a table within Python code (I used Excel to come up with column names, etc.).
2) Most important! You can pass a dictionary to SQLAlchemy, and as long as the dictionary's key names match the table's column names, everything will magically get posted to your SQL table. If you have 60 columns in your SQL table but your dict has only two keys, BAM, SQLAlchemy will take care of everything, post just the two values, and leave the other columns in MySQL blank. MAGIC!
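A minimal sketch of that dictionary-driven insert with SQLAlchemy Core (the connection URL, table name, and columns here are assumptions for illustration):

from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String, DateTime

# The connection URL is a placeholder; adjust the driver and credentials.
engine = create_engine('mysql+pymysql://user:password@localhost/mydb')
metadata = MetaData()

# 1) Define the table within Python code.
my_table = Table('my_table', metadata,
                 Column('id', Integer, primary_key=True),
                 Column('name', String(64)),
                 Column('insert_ts', DateTime))
metadata.create_all(engine)

# 2) Pass dictionaries whose keys match column names; columns that are
#    missing from the dicts are simply left NULL.
rows = [{'name': 'a'}, {'name': 'b'}]
with engine.begin() as conn:
    conn.execute(my_table.insert(), rows)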

Python cx_Oracle: How can I execute a SQL INSERT using a list as a parameter

I generate a list of ID numbers. I want to execute an insert statement that grabs all records from one table where the ID value is in my list and insert those records into another table.
Instead of running through multiple execute statements (which I know is possible), I found this cx_Oracle function that supposedly can execute everything with a single statement and a list parameter. (It also avoids the clunky formatting of the SQL statement before passing in the parameters.) But I think I need to alter my list before passing it in as a parameter; I'm just not sure how.
I referenced this web page:
https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-executemany.html
ids = getIDs()
print(ids)
[('12345',),('24567',),('78945',),('65423',)]
sql = """insert into scheme.newtable
select id, data1, data2, data3
from scheme.oldtable
where id in (%s)"""
cursor.prepare(sql)
cursor.executemany(None, ids)
I expected the SQL statement to execute as follows:
Insert into scheme.newtable
select id, data1, data2, data3 from scheme.oldtable where id in ('12345','24567','78945','65423')
Instead I get the following error:
ORA-01036: illegal variable name/number
Edit:
I found this StackOverflow: How can I do a batch insert into an Oracle database using Python?
I updated my code to prepare the statement beforehand and updated the list items to tuples, and I'm still getting the same error.
You use executemany() for batch DML, e.g. when you want to insert a large number of values into a table as an efficient equivalent of running multiple insert statements. There are cx_Oracle examples discussed in https://blogs.oracle.com/opal/efficient-and-scalable-batch-statement-execution-in-python-cx_oracle
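For illustration, a minimal batch DML sketch with executemany() (the column list is an assumption based on the question's table; cursor and connection come from the question's context):

# One parameterized INSERT, executed once for the whole batch of rows.
rows = [('12345', 'x'), ('24567', 'y'), ('78945', 'z')]
cursor.executemany(
    "insert into scheme.newtable (id, data1) values (:1, :2)",
    rows)
connection.commit()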
However what you are doing with
insert into scheme.newtable
select id, data1, data2, data3
from scheme.oldtable
where id in (%s)
is a different thing - you are trying to execute one INSERT statement using multiple values in an IN clause. You would use a normal execute() for this.
Since Oracle keeps bind data distinct from SQL, you can't pass in multiple values to a single bind parameter; the data is treated as a single SQL entity, not a list of values. You could use the %s string substitution syntax you already have, but this is open to SQL injection attacks.
There are various generic techniques that are common to Oracle language interfaces, see https://oracle.github.io/node-oracledb/doc/api.html#sqlwherein for solutions that you can rewrite to Python syntax.
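One of those techniques, sketched in Python (the bind-name scheme is an assumption): generate one named bind variable per ID, so the values stay out of the SQL text and are passed to a single execute() call.

ids = ['12345', '24567', '78945', '65423']

# Build ":id0,:id1,:id2,:id3" - one bind variable per value.
bind_names = ','.join(':id%d' % i for i in range(len(ids)))
sql = ("insert into scheme.newtable "
       "select id, data1, data2, data3 from scheme.oldtable "
       "where id in (%s)" % bind_names)

cursor.execute(sql, ids)  # the list binds positionally to :id0..:id3
connection.commit()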
Another option is to use a temporary table to save the IDs (batch insert):
cursor.prepare('insert into temp_table values (:1)')
dictList = [{'1': x[0]} for x in ids]  # each element of ids is a 1-tuple
cursor.executemany(None, dictList)
Then insert the selected rows into the new table:
sql = "insert into scheme.newtable select id, data1, data2, data3 from scheme.oldtable inner join temp_table on scheme.oldtable.id = temp_table.id"
cursor.execute(sql)
The script to create the temporary table in Oracle:
CREATE GLOBAL TEMPORARY TABLE temp_table
(
  ID number
);
commit;
I hope this is useful.

How to delete a row from a CSV file on Data Lake Store without using U-SQL?

I am writing a unit test for appending data to a CSV file on a data lake. I want to test it by finding my test data appended to the same file, and once I find it, delete the row I inserted. Basically, once I find the test data my test will pass, but since the tests run in production I have to search for my test data, i.e. find the row I inserted in the file, and delete it after the test has run.
I want to do this without using U-SQL, in order to avoid the cost factor involved in using U-SQL. What are the other possible ways to do it?
You cannot delete a row (or any other part) from a file. Azure Data Lake Store is an append-only file system; data, once committed, cannot be erased or updated. If you're testing in production, your application needs to be aware of the test rows and ignore them appropriately.
The other choice is to read all the rows in U-SQL and then write an output excluding the test rows.
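The "rewrite without the test rows" idea can also be sketched outside U-SQL; here it is in plain Python with local files (the marker string and file names are assumptions, and the download/upload steps against the lake itself would go through the Data Lake Store SDK or PowerShell):

TEST_MARKER = 'unit-test-row'  # hypothetical value identifying test data

# Read every row, keep only the rows that are not test data...
with open('data.csv') as src:
    kept_rows = [line for line in src if TEST_MARKER not in line]

# ...and write the filtered content out as the replacement file.
with open('data_clean.csv', 'w') as dst:
    dst.writelines(kept_rows)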
Like other big data analytics platforms, ADLA / U-SQL does not support appending to files per se. What you can do is take an input file, append some content to it (e.g. via U-SQL) and write it out as another file. A simple example:
DECLARE @inputFilepath string = "input/input79.txt";
DECLARE @outputFilepath string = "output/output.txt";

@input =
    EXTRACT col1 int,
            col2 DateTime,
            col3 string
    FROM @inputFilepath
    USING Extractors.Csv(skipFirstNRows : 1);

@output =
    SELECT *
    FROM @input
    UNION ALL
    SELECT *
    FROM (VALUES (2, DateTime.Now, "some string")) AS x (col1, col2, col3);

OUTPUT @output
TO @outputFilepath
USING Outputters.Csv(quoting : false, outputHeader : true);
If you want further control, you can do some things via the PowerShell SDK, e.g. test whether an item exists:
Test-AdlStoreItem -Account $adls -Path "/data.csv"
or move an item with Move-AzureRmDataLakeStoreItem. More details here:
Manage Azure Data Lake Analytics using Azure PowerShell

How to prepare a C++ string for an SQL query

I have to prepare strings to be suitable for queries, because these strings will be used in the queries as field values. If they contain a ' or similar, the SQL query fails to execute.
I therefore want to replace ' with ''. I have seen code to find and replace a substring with another substring, but I guess the problem is a little tricky here, because the replacement string also contains two single quotes '' in place of the one quote ', so when I search for the next occurrence I would encounter a ' that was intentionally inserted by the replacement.
I am using the SQLite C API, and an example query might look like this:
select * from persons where name = 'John D'oe'
Since the name contains a ', the query will fail, so I want all occurrences of ' in the name to be replaced with ''.
Any ideas how you prepare your field values to be used in SQL queries? Maybe it's a basic thing, but I am not too experienced in C/C++.
Your help would be much appreciated.
Use queries with bound arguments instead of replacing characters yourself, which could lead to several problems (like SQL injection vulnerabilities).
MySQL example:
sql::Connection *con = ...;
string query = "SELECT * FROM TABLE WHERE ID = ?";
sql::PreparedStatement *prep_stmt = con->prepareStatement(query);
prep_stmt->setInt(1, 1); // Bind the value 1 to the first placeholder
prep_stmt->execute();
This will execute SELECT * FROM TABLE WHERE ID = 1.
EDIT: more info for SQLite prepared statements here and here.
It depends on the SQL library you are using. Some of them have the concept of a prepared statement, in which you use question marks in place of the variables; when you then set those variables on the statement, it internally ensures that you cannot inject SQL commands.
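For SQLite specifically, the same idea is available; here is a minimal sketch using Python's sqlite3 module (the table and data are assumptions), showing that a value containing a quote needs no escaping at all when it is bound through a placeholder:

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE persons (name TEXT)")

# The quote inside the name is handled by the binding layer, not by escaping.
name = "John D'oe"
conn.execute("INSERT INTO persons (name) VALUES (?)", (name,))

rows = conn.execute("SELECT * FROM persons WHERE name = ?", (name,)).fetchall()
print(rows)  # [("John D'oe",)]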