Failed to insert BLOB object using ODBC driver - C++

I'm trying to store a file in SQL Server using the ODBC driver, into a column defined as varbinary(max). When I use the SQL Server driver I get:
The text, ntext, or image pointer value conflicts with the column name specified. The insert/update of a text or image column did not succeed.
When I use the Native Client driver I get:
String data, right truncation
Both are symptoms of the same problem, well documented on MSDN: inserting BLOBs bigger than 400 KB triggers this error. Any suggested fix?
Migrating to OLE DB is not an option.
The sqlsrv32.dll installed on my machine has file version 6.1.7601.17514.

Finally I managed to find the right way.
All you have to do in your DoFieldExchange function is:
m_rgODBCFieldInfos[6].m_nSQLType = -4;
BLOBs should always be placed at the end of your query, so m_rgODBCFieldInfos[x] refers to position m_nFields - 1 in this array; if you have more than one BLOB, you will have to work out which entry is which.
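For reference, here is a minimal sketch of the idea (the recordset name and the column bindings are assumptions, not from the original post; -4 is SQL_LONGVARBINARY):

void CMyRecordset::DoFieldExchange(CFieldExchange* pFX)
{
    // Force the last bound column (the BLOB) to SQL_LONGVARBINARY (-4) so the
    // driver treats it as a long binary instead of a limited text/image type.
    if (m_rgODBCFieldInfos != NULL && m_nFields > 0)
        m_rgODBCFieldInfos[m_nFields - 1].m_nSQLType = SQL_LONGVARBINARY;

    pFX->SetFieldType(CFieldExchange::outputColumn);
    RFX_Text(pFX, _T("[FileName]"), m_FileName);       // m_FileName: CString
    RFX_LongBinary(pFX, _T("[FileData]"), m_FileData); // m_FileData: CLongBinary, the varbinary(max) column, bound last
}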
In my case this solution solved both exceptions:
from Native Client 11:
"String data, right truncation."
from SQL Server:
"the text, ntext, or image pointer value conflicts with the column name specified, the insert/updtae of a text or image column did not succeed"
Cheers :)

Related

Optimal access of NUMBER(14) column via ODBC Instant Client 32-bit driver

I am working on an application written in C++ that uses the 32-bit Instant Client drivers to access an Oracle database. The application uses the Record Field Exchange (RFX) methods to update the columns in the database tables. The database schema cannot be modified.
The C++ code was originally written to handle OID values as doubles because the OID column in the database is NUMBER(14), so a regular int won't be big enough. However, this leads to the database occasionally selecting a bad execution plan where it takes the OID values sent from the application and uses the to_binary_double function on them, rather than converting them to BIGINT. If this happens, the database does not do an index search over the data and instead does a full table scan.
We tried switching the OIDs to be type __int64 in the application, but there was an issue with the ODBC driver not supporting the BigInt type (or long long in C++). Similarly, when we tried to make the OIDs into longs, the database or the driver gave an error that the values sent to the database were too big for the column.
Working with the OIDs as strings in C++ works, but the database will never use the optimal index search because it has to convert the string to an integer before it can do any data retrieval. Because of this, we're better off just using the doubles we already have.
Does anyone have an idea of what we can do next? It is not the end of the world if we have to keep using doubles as before, but we were hoping to eliminate the chance for the database to run slowly.
We actually went with the "convert all the OIDs to strings in the C++ code" option. It turns out the database was still able to run an indexed search after converting the OIDs from strings to integers. It would have been better to switch to an ODBC driver that could handle BigInt, but that wasn't really an option for us, so this will suffice.
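As an illustration of that approach, a minimal sketch of binding the OID as text (the class, member, and column names here are assumptions):

void COidRecordset::DoFieldExchange(CFieldExchange* pFX)
{
    pFX->SetFieldType(CFieldExchange::outputColumn);
    // Bind the NUMBER(14) OID as text; the 32-bit driver never has to
    // represent it as SQL_BIGINT, and Oracle converts it back to a number.
    RFX_Text(pFX, _T("OID"), m_strOid); // m_strOid: CString holding the decimal digits
}

// Application-side 64-bit value converted to the bound string:
void COidRecordset::SetOid(__int64 oid)
{
    m_strOid.Format(_T("%I64d"), oid);
}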

CRecordset fails on adding a new row using the free ODBC driver for SQLite

I have a C++ application using MFC CRecordset to add rows to a table of an SQLite database, using the free ODBC driver by Ch. Werner. For this, I use the usual sequence of rs.Open(), rs.AddNew(), setting values, and finally rs.Update().
This works on a small example, but with my actual database rs.Update() fails with error -1 and the following error message: unrecognized token: ""RedFaktorFly" (1). The 'token' is a truncated name of column 14 of the table, whose full name is "RedFaktorFlyt".
In some runs, it appends seemingly random characters, so the message becomes, for example, unrecognized token: ""RedFaktorFlyH" (1).
Interestingly, when I add "LongNames=true" to the ODBC connection string, which prepends table names to the column names and therefore makes the SQL query longer, the error becomes (for example) unrecognized token: ""K_Noder.MaxKompresjox" (1) - where "MaxKompresjonsFaktor" is the name of column 10 of the table.
This seems to suggest that there is a limit on the length of a SQL query accepted by the driver - but it seems strange that such a limit would be so small that it would fail already with 14 columns.
I do not think that the limit is in the C++ part, since the same code works fine both with the (commercial) SQLite driver from Devart and with Microsoft's ODBC driver for Access.
I tried adding a TraceFile option to the ODBC connection string, but it does not seem to do anything, so I do not know what exactly gets sent to the ODBC driver.
I see the same behaviour both with 32- and 64-bit builds, using Visual Studio 2015 on Windows 10.
Any suggestions what to try next?
I'll give you a solution that resolved this issue completely for me. I don't know if it's suitable for you, but it definitely works; I use it successfully myself.
For any operation except listing records, use CDatabase, not CRecordset: to insert, update, or delete rows in any table, use CDatabase; to retrieve records from the SQLite database, use CRecordset. A minimal sketch follows.
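For example (the DSN is an assumption; the table and column names are taken from the question):

CDatabase db;
db.OpenEx(_T("DSN=MySQLiteDsn;"), CDatabase::noOdbcDialog);

CString sql;
sql.Format(_T("INSERT INTO K_Noder (RedFaktorFlyt, MaxKompresjonsFaktor) VALUES (%g, %g)"),
           0.5, 0.9);
db.ExecuteSQL(sql); // throws CDBException on failure
db.Close();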

Rails 4 ActiveRecord Sql Server - Unable to save binary into image column

We are working to upgrade our application to a more current version of Ruby and Rails. Our app integrates with a legacy database (SQL Server 2008 R2) that has a table with a column of the image data type (we are unable to change this column to varbinary(max)). Previously we were able to save a binary into the image column; however, now we are getting conversion errors.
We are working to upgrade to the following (among others):
Rails 4.2.1
ActiveRecord_SQLServer_Adapter (4.2.4)
tiny_tds (0.6.3.rc1)
freeTDS (v0.91.112)
When we now attempt to save into the image column, we get errors similar to:
TinyTds::Error: Unclosed quotation mark after the character string
Researching various issues within tiny_tds and activerecord_sqlserver_adapter, we decided to create a second table that matched the first but changed the data type from image to varbinary(max). We can save a binary into that column.
The code causing the challenge is in a background job where we grab images from s3, store them locally and then push the image into the database. Again, we don't control the legacy database and thus can't change the data type (or confront the issue of why we are storing the image in the db in the first place).
...
d = Doc.new
...
open("#{Rails.root}/cache/pictures/image.png", "wb") do |file|
  file << open(r.image.url).read
end
d.document = File.binread("#{Rails.root}/cache/pictures/image.png")
d.save!
Given the upgrade has broken our saving images, we are trying to figure out how best to determine a fix. We could obviously roll back until we find a version that works. However we hope to find a fix. Anyone have any ideas?
Update:
We added the following configuration, as we had triggers on the table being inserted into:
ActiveRecord::ConnectionAdapters::SQLServerAdapter.use_output_inserted = true
When we remove this configuration we get the following error:
TinyTds::Error: The target table 'doc' of the DML statement cannot have any enabled triggers if the statement contains an OUTPUT clause without INTO clause.
Note: We are unable to make any modifications to the triggers.
Per feedback on the ActiveRecord_SQLServer_Adapter site, we rolled back to 4.1.11 and we are now able to save into the image column.
We also had to add this snippet to overcome the issue with the triggers.

ODBC SQL Server Unicode Bug?

Background:
We have an application that uses the ODBC API to interact with Access and SQL Server (dynamically, depending on user's configuration).
I have discovered a bug which might be in the ODBC SQL driver, or may be a misconfiguration issue with the ODBC DSN we create, or may be a bug somehow in our code.
When a document is edited and saved, we query the database to see if this file has a corresponding record in the database - if so, we update the record with the updated data from the document; if not, we do an insert to create the necessary record for it.
We use the filename as the unique primary key on our table, and this works fine normally.
The bug is that if the filename contains characters outside of the current ANSI code page, then the select indicates no matches:
SQL: SELECT * FROM "My Designs" WHERE "PATHNAME" = '\\FILE-SERVER\Home Folders\User Files\狭すぎて丸め処理が出来ません!!.foo' [# matches = 0]
However, when the insert is attempted, we get a unique key violation (of course) - since there already is a record with that filename.
Database error: Violation of PRIMARY KEY constraint 'PK__My Desig__1B3D5B4BF643706B'. Cannot insert duplicate key in object 'dbo.My Designs'. The duplicate key value is (\\FILE-SERVER\Home Folders\User Files\狭すぎて丸め処理が出来ません!!.foo).
The statement has been terminated.
I've been over the code with a fine-tooth comb, and I can see nothing wrong. :(
The SQL statement that is being generated produces the correct Unicode output of the filename. Our application is compiled for Unicode. The column is SQL_WVARCHAR in ODBC speak.
I've tried adding AutoTranslate=no to the DSN configuration string, but that appears to have no effect.
I've tried logging the database connection from ODBC control panel. Sadly, that interface produces an ANSI log file - so I cannot verify UNICODE / ANSI issues using that tool.
Questions:
1. Is there a tool I can use to verify that these statements are being created and issued correctly by the ODBC driver to the SQL Server database?
2. Is there a better way to use ODBC so that the driver doesn't get canoodled by a simple UNICODE string in a SELECT query vs. an INSERT request?
3. Any other ideas for how to approach this problem (short of replacing our technology)?
In the SELECT statement, make sure you prefix the WHERE clause string with N to tell SQL Server it's Unicode:
..."PATHNAME" = N'\\FILE-SERVER\Home Folders\User Files\狭すぎて丸め処理が出来ません!!.foo'
Also, MFC converts the data to MBCS or Unicode depending on your configuration. Make sure you use CStringT in your recordset.
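If you are issuing the statement through the ODBC API directly, another option is to bind the path as a wide-character parameter rather than splicing it into the SQL text; a minimal sketch (handle setup omitted; the table and column names are taken from the question):

#include <windows.h>
#include <sql.h>
#include <sqlext.h>

void SelectByPath(SQLHSTMT hStmt, const wchar_t* path)
{
    SQLLEN cbPath = SQL_NTS; // value is a null-terminated string
    // Bind as SQL_C_WCHAR -> SQL_WVARCHAR so no ANSI code-page conversion happens.
    SQLBindParameter(hStmt, 1, SQL_PARAM_INPUT, SQL_C_WCHAR, SQL_WVARCHAR,
                     wcslen(path), 0, (SQLPOINTER)path, 0, &cbPath);
    SQLExecDirectW(hStmt,
        (SQLWCHAR*)L"SELECT * FROM \"My Designs\" WHERE \"PATHNAME\" = ?",
        SQL_NTS);
}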

MySQL, C++: Need Blob size to read blob data

How do I get the size of data in a BLOB field in the Result Set? (Using C++ and MySQL Connector C++)
In order to read the data from the result set, I have to allocate memory for it first. In order to allocate memory, I need to know the size of the BLOB data in the result set.
Searching the web and Stack Overflow, I have found two methods: OCTET_LENGTH() and the BLOB stream.
One method to find the BLOB size is to use the OCTET_LENGTH() function, but this requires a new query and produces a new result set. I would rather not use this method.
Another method is to use the BLOB stream: seek to the end and take the stream position as the size. However, I don't know whether the stream can then be rewound to the beginning in order to read the data, and this method requires an additional pass over the entire stream.
The ResultSet and ResultSetMetaData interfaces of MySQL Connector C++ 1.0.5 do not provide a method for obtaining the size of the data in a field (column).
Is there a process for obtaining the size of the data in a BLOB field given only the result set and a field name?
I am using MySQL Connector C++ 1.0.5, C++, Visual Studio 2008, Windows Vista / XP and "Server version: 5.1.41-community MySQL Community Server (GPL)".
You could do a select like:
SELECT LENGTH(content), content FROM your_table WHERE id = 123;
where content is the BLOB field and your_table is your table's name.
Regards.
See: LENGTH(str) in the MySQL reference manual.
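Tying this back to Connector/C++, a minimal sketch that fetches the length and the BLOB in one query (the table name docs and column name content are assumptions):

#include <istream>
#include <memory>
#include <vector>
#include <cppconn/connection.h>
#include <cppconn/prepared_statement.h>
#include <cppconn/resultset.h>

std::vector<char> ReadBlob(sql::Connection& con, int id)
{
    std::unique_ptr<sql::PreparedStatement> ps(con.prepareStatement(
        "SELECT LENGTH(content) AS len, content FROM docs WHERE id = ?"));
    ps->setInt(1, id);
    std::unique_ptr<sql::ResultSet> res(ps->executeQuery());

    std::vector<char> buf;
    if (res->next()) {
        // Size is known up front, so the buffer can be allocated exactly once.
        buf.resize(static_cast<size_t>(res->getInt64("len")));
        std::unique_ptr<std::istream> blob(res->getBlob("content"));
        blob->read(buf.data(), static_cast<std::streamsize>(buf.size()));
    }
    return buf;
}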