MySQL, C++: Need BLOB size to read BLOB data

How do I get the size of data in a BLOB field in the Result Set? (Using C++ and MySQL Connector C++)
In order to read the data from the result set, I have to allocate memory for it first. In order to allocate memory, I need to know the size of the BLOB data in the result set.
Searching the web and Stack Overflow, I have found two methods: OCTET_LENGTH() and the BLOB stream.
One method to find the BLOB size is to use the OCTET_LENGTH() function, which requires a new query and produces a new result set. I would rather not use this method.
Another method is to use the BLOB stream, seek to the end, and take the stream position. However, I don't know whether the stream can be rewound to the beginning in order to read the data, and this method requires an extra pass over the entire stream.
The ResultSet and ResultSetMetaData interfaces of MySQL Connector C++ 1.0.5 do not provide a method for obtaining the size of the data in a field (column).
Is there a process for obtaining the size of the data in a BLOB field given only the result set and a field name?
I am using MySQL Connector C++ 1.0.5, C++, Visual Studio 2008, Windows Vista / XP and "Server version: 5.1.41-community MySQL Community Server (GPL)".

You could do a select like:
SELECT LENGTH(content), content FROM mytable WHERE id = 123;
where content is the BLOB column and mytable stands in for your table name.
Regards.
see: LENGTH(str)
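For illustration, here is a rough sketch of how that single query could be consumed with Connector/C++. The table name mytable, the column names id and content, and the readBlob helper are invented for the example; it assumes the getInt64/getBlob accessors of your Connector version, and std::auto_ptr is used only because the question targets Visual Studio 2008.

#include <cstddef>
#include <memory>
#include <vector>
#include <istream>
#include <cppconn/connection.h>
#include <cppconn/statement.h>
#include <cppconn/resultset.h>

// mytable, id and content are invented names for this sketch.
std::vector<char> readBlob(sql::Connection &con)
{
    std::auto_ptr<sql::Statement> stmt(con.createStatement());
    std::auto_ptr<sql::ResultSet> res(stmt->executeQuery(
        "SELECT LENGTH(content) AS blob_len, content "
        "FROM mytable WHERE id = 123"));

    std::vector<char> buffer;
    if (res->next()) {
        // The size arrives as an ordinary column in the same result set.
        std::size_t len = static_cast<std::size_t>(res->getInt64("blob_len"));
        buffer.resize(len);

        // Assumes getBlob() hands back a newly allocated istream; adjust the
        // ownership handling if your Connector version behaves differently.
        std::auto_ptr<std::istream> blob(res->getBlob("content"));
        if (len > 0)
            blob->read(&buffer[0], static_cast<std::streamsize>(len));
    }
    return buffer;
}

Because LENGTH(content) comes back in the same result set as the BLOB, no second query and no stream seeking is needed.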

Related

Optimal access of NUMBER(14) column via ODBC instant client 32-bit driver

I am working on an application written in C++ that uses the 32-bit Instant Client drivers to access an Oracle database. The application uses the Record Field Exchange (RFX) methods to update the columns in the database tables. The database schema cannot be modified.
The C++ code was originally written to handle OID values as doubles because the OID column in the database is NUMBER(14), so a regular int won't be big enough. However, this leads to the database occasionally selecting a bad execution plan where it takes the OID values sent from the application and uses the to_binary_double function on them, rather than converting them to BIGINT. If this happens, the database does not do an index search over the data and instead does a full table scan.
We tried switching the OIDs to be type __int64 in the application, but there was an issue with the ODBC driver not supporting the BigInt type (or long long in C++). Similarly, when we tried to make the OIDs into longs, the database or the driver gave an error that the values sent to the database were too big for the column.
Working with the OIDs as Strings in C++ will work, but the database will never use the optimal index search because it has to convert the String to an integer before it can do any data retrieval. Because of this, we're just better off using the doubles we already have.
Does anyone have an idea of what we can do next? It is not the end of the world if we have to keep using doubles as before, but we were hoping to eliminate the chance for the database to run slowly.
We actually went with the "convert all the OIDs to Strings in the C++ code" option. It turns out the database was still able to run an indexed search after converting the OIDs from Strings to integers. It would have been better if we had switched to an ODBC driver that could handle BigInt, but that wasn't really an option for us, so this will suffice.
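For what it's worth, a sketch of what the "OIDs as Strings" binding might look like in RFX terms; the class, table and column names are hypothetical and not from the original code, and the maximum length is simply sized for a NUMBER(14) value.

#include <afxdb.h>

// Hypothetical CRecordset-derived class; names are illustrative only.
class COidRecordset : public CRecordset
{
public:
    CString m_oid;     // previously a double; now fetched/sent as text
    CString m_name;

    COidRecordset(CDatabase* pDb = NULL) : CRecordset(pDb) { m_nFields = 2; }

    virtual void DoFieldExchange(CFieldExchange* pFX)
    {
        pFX->SetFieldType(CFieldExchange::outputColumn);
        // NUMBER(14) needs at most 14 digits plus a sign.
        RFX_Text(pFX, _T("OID"), m_oid, 15);
        RFX_Text(pFX, _T("NAME"), m_name, 64);
    }
};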

Fetching Large data from Sql Server using ODBC - varchar, varbinary

How can I read varchar data from SQL Server using native C++ code? Do you have any advice? What is the best way to do it?
My application (native C++) allows users to write SQL queries in an external file and uses that file to fetch data from the database for use inside our application.
Now the problem is that customers can write queries which include both small (int, for example) and large data type (for example varchar and varbinary) columns, ordered arbitrarily.
I am using SQLBindCol to bind to application variables. I cannot use this method to bind large data types unless I allocate a buffer large enough to store the data, and the length is unknown while binding. The data can be of any size (up to the maximum allowed by the SQL Server column), and I have memory concerns about allocating large buffers. So I thought I could use SQLGetData, but I see the below from Microsoft:
The SQL Server Native Client ODBC driver does not support using SQLGetData to retrieve data in random column order. All unbound columns processed with SQLGetData must have higher column ordinals than the bound columns in the result set.
see: SQLGetData
And also from this link it is said:
"For a generic application, you may use SQLFetch + SQLGetData to obtain the maximum length and data. Actually, for a very long column (image datatype), it is recommended to use SQLGetData since you can get the data in chunks, instead of pre-allocating a large buffer. However, if the amount of data is not so large, you may use the datatype varbinary(2048)."
I could reorder the columns in the SELECT and use both SQLBindCol and SQLGetData, but I am not very convinced about doing this.
From this Stack Overflow link: pre-determining-the-length
it is suggested to either allocate more memory than you need, or issue two SELECT statements: one query that returns the large data type column, and another that gets the actual size of the column using DATALENGTH, something like
select name, description from users where username = 'johnce'
select DATALENGTH(description) from users where username = 'johnce'
I am very much confused by all this information. I could not find any example covering my scenario. Could someone advise which is the best approach in my scenario?
Thanks in advance.
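As a sketch of the pattern the quoted advice describes (bind the small columns, keep the large column last and unbound, then stream it with SQLGetData in chunks), something along these lines could work; the column ordinals, the 8 KB chunk size and the FetchLargeColumn name are illustrative.

#include <vector>
#include <windows.h>
#include <sql.h>
#include <sqlext.h>

// Illustrative: column 1 is a small bound int; column 2 is a large unbound
// varbinary placed last in the SELECT, as the driver requires for SQLGetData.
std::vector<char> FetchLargeColumn(SQLHSTMT hstmt)
{
    SQLINTEGER id = 0;
    SQLLEN     idInd = 0;
    SQLBindCol(hstmt, 1, SQL_C_SLONG, &id, 0, &idInd);

    std::vector<char> blob;
    if (SQL_SUCCEEDED(SQLFetch(hstmt)))
    {
        char      chunk[8192];
        SQLLEN    indicator = 0;
        SQLRETURN rc;

        // SQLGetData keeps returning data until SQL_NO_DATA; with SQL_C_BINARY
        // every full chunk arrives without a terminator.
        while ((rc = SQLGetData(hstmt, 2, SQL_C_BINARY,
                                chunk, sizeof(chunk), &indicator)) != SQL_NO_DATA)
        {
            if (!SQL_SUCCEEDED(rc) || indicator == SQL_NULL_DATA)
                break;

            SQLLEN got = (indicator == SQL_NO_TOTAL ||
                          indicator > static_cast<SQLLEN>(sizeof(chunk)))
                             ? static_cast<SQLLEN>(sizeof(chunk))
                             : indicator;
            blob.insert(blob.end(), chunk, chunk + got);
        }
    }
    return blob;
}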

Failed to insert BLOB object using ODBC driver

I'm trying to store a file in SQL Server, using an ODBC driver, into a column defined as varbinary(max). When I use the SQL Server driver I get:
the text, ntext, or image pointer value conflicts with the column name specified; the insert/update of a text or image column did not succeed
When I use native client driver I get
string data right truncation
Both are symptoms of the same problem, which is well documented on MSDN: inserting BLOBs bigger than 400 KB triggers this error. Any suggested fix?
Migrating to OleDB is not an option.
The sqlsrv32.dll installed on my machine is file version 6.1.7601.17514.
Finally I managed to find the right way.
All you have to do in your DoFieldExchange function is:
m_rgODBCFieldInfos[6].m_nSQLType = -4; // -4 is SQL_LONGVARBINARY
BLOB columns should always be placed at the end of your query, so m_rgODBCFieldInfos[x] refers to position m_nFields - 1 in this array; if you have more than one BLOB you have to work out which index is which.
In my case this solution solved both exceptions:
from Native Client 11:
"String data, right truncation."
from SQL Server:
"the text, ntext, or image pointer value conflicts with the column name specified, the insert/updtae of a text or image column did not succeed"
Cheers :)
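For context, here is a sketch of where that assignment could sit, assuming a CRecordset-derived class whose SELECT puts the varbinary(max) column last; the class and column names are invented for the example.

#include <afxdb.h>

// Illustrative recordset; the BLOB column is selected last, so it sits at
// index m_nFields - 1 of m_rgODBCFieldInfos.
class CFileSet : public CRecordset
{
public:
    long        m_id;
    CLongBinary m_content;   // the varbinary(max) payload

    CFileSet(CDatabase* pDb = NULL) : CRecordset(pDb), m_id(0) { m_nFields = 2; }

    virtual void DoFieldExchange(CFieldExchange* pFX)
    {
        pFX->SetFieldType(CFieldExchange::outputColumn);
        RFX_Long(pFX, _T("Id"), m_id);
        RFX_LongBinary(pFX, _T("Content"), m_content);

        // Force the column to be described as SQL_LONGVARBINARY (-4), which is
        // what avoids the truncation / text-pointer errors quoted above.
        if (m_rgODBCFieldInfos != NULL)
            m_rgODBCFieldInfos[m_nFields - 1].m_nSQLType = SQL_LONGVARBINARY;
    }
};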

Visual C++, CMap object save to blob column

I have a Microsoft Foundation Class (MFC) CMap object where each object instance stores roughly 160K entries of long data.
I need to store it on Oracle SQL.
We decided to save it as a BLOB since we do not want to create an additional table. We thought about saving it as a local file and pointing the SQL column to that file, but we'd rather just keep it as a BLOB on the server and clear the table every couple of weeks.
The table has a sequential key ID, and 2 columns of date/time. I need to add the BLOB column in order to store the CMap object.
Can you recommend a guide for this? How do I create a BLOB column in Oracle, and how can I read and write my CMap object to the BLOB? Perhaps using a CLOB?
A CMap cannot be inserted into a BLOB/CLOB directly, since it uses pointers internally.
First of all, use a CLOB,
and store an array/vector of the entries instead of the CMap.
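A minimal sketch of the "store an array/vector instead of the CMap" idea, assuming long keys and long values (a guess based on "entries of long data"); the FlattenMap name is made up for the example.

#include <afxtempl.h>
#include <vector>

// Flatten the map's key/value pairs into one contiguous buffer.
std::vector<long> FlattenMap(const CMap<long, long, long, long>& map)
{
    std::vector<long> flat;
    flat.reserve(static_cast<size_t>(map.GetCount()) * 2);

    POSITION pos = map.GetStartPosition();
    while (pos != NULL)
    {
        long key = 0, value = 0;
        map.GetNextAssoc(pos, key, value);
        flat.push_back(key);    // written as key, value, key, value, ...
        flat.push_back(value);
    }
    // &flat[0] and flat.size() * sizeof(long) give the pointer/length pair to
    // bind as the BLOB parameter (or to hex-encode if a CLOB is used instead).
    return flat;
}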

Increasing mass import speed to MS SQL 2008 database from client application

I have a Qt application that reads a special text file, parses it, and inserts about 100000 rows into a temporary table in a Firebird database. Then it starts a stored procedure to process this temporary table and apply some changes to permanent tables. Inserting 100000 rows into the in-memory temporary table takes about 8 seconds on Firebird.
Now I need to implement the same behavior using MS SQL Server 2008. If I use simple serial inserts it takes about 76 seconds for 100000 rows. Unfortunately, that is too slow. I looked at the following options:
Temporary tables (# and ##). These are stored on disk in the tempdb database, so there is no speed increase.
Bulk Insert. Very nice insertion speed, but there is a need for a client- or server-side shared folder.
Table variables. MSDN says: "Do not use table variables to store large amounts of data (more than 100 rows)."
So, please tell me: what is the right way to increase insertion speed from a client application to MS SQL Server 2008?
Thank you.
You can use the bulk copy operations available through OLE DB or ODBC interfaces.
This MSDN article seems to hold your hand through the process, for ODBC:
Allocate an environment handle and a connection handle.
Set the SQL_COPT_SS_BCP connection attribute to SQL_BCP_ON to enable bulk copy operations.
Connect to SQL Server.
Call bcp_init to set the following information: the name of the table or view to bulk copy from or to; NULL for the name of the data file; the name of a data file to receive any bulk copy error messages (specify NULL if you do not want a message file); and the direction of the copy: DB_IN from the application to the view or table, or DB_OUT to the application from the table or view.
Call bcp_bind for each column in the bulk copy to bind the column to a program variable.
Fill the program variables with data, and call bcp_sendrow to send a row of data.
After several rows have been sent, call bcp_batch to checkpoint the rows already sent. It is good practice to call bcp_batch at least once per 1000 rows.
After all rows have been sent, call bcp_done to complete the operation.
If you need a cross platform implementation of the bulk copy functions, take a look at FreeTDS.
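A condensed sketch of that sequence; the table dbo.TempRows with two INT columns is invented for the example, and the header and link library assume the SQL Server Native Client BCP extensions (older drivers expose the same functions via odbcss.h).

#include <windows.h>
#include <tchar.h>
#include <sql.h>
#include <sqlext.h>
#include <sqlncli.h>   // bcp_* functions and SQL_COPT_SS_BCP

// Assumes SQL_COPT_SS_BCP was set to SQL_BCP_ON on hdbc *before* connecting:
//   SQLSetConnectAttr(hdbc, SQL_COPT_SS_BCP, (SQLPOINTER)SQL_BCP_ON, SQL_IS_INTEGER);
bool BulkInsert(SQLHDBC hdbc, const long* ids, const long* values, int rowCount)
{
    // Target table, no data file, no error file, direction DB_IN (app -> table).
    if (bcp_init(hdbc, _T("dbo.TempRows"), NULL, NULL, DB_IN) != SUCCEED)
        return false;

    DBINT id = 0, value = 0;

    // Bind each column (1-based ordinals) to a program variable.
    if (bcp_bind(hdbc, (LPCBYTE)&id,    0, sizeof(id),    NULL, 0, SQLINT4, 1) != SUCCEED ||
        bcp_bind(hdbc, (LPCBYTE)&value, 0, sizeof(value), NULL, 0, SQLINT4, 2) != SUCCEED)
        return false;

    for (int i = 0; i < rowCount; ++i)
    {
        id = ids[i];
        value = values[i];
        if (bcp_sendrow(hdbc) != SUCCEED)
            return false;
        if ((i + 1) % 1000 == 0)
            bcp_batch(hdbc);           // checkpoint every 1000 rows, as advised above
    }

    return bcp_done(hdbc) != -1;       // bcp_done returns rows committed, -1 on error
}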