Background:
We have an application that uses the ODBC API to interact with Access and SQL Server (dynamically, depending on the user's configuration).
I have discovered a bug that might be in the ODBC SQL Server driver, might be a misconfiguration of the ODBC DSN we create, or might somehow be a bug in our own code.
When a document is edited and saved, we query the database to see whether the file has a corresponding record; if so, we update the record with the new data from the document; if not, we do an insert to create the necessary record for it.
We use the filename as the unique primary key on our table, and this works fine normally.
The bug is that if the filename contains characters outside of the current ANSI code page, then the select indicates no matches:
SQL: SELECT * FROM "My Designs" WHERE "PATHNAME" = '\\FILE-SERVER\Home Folders\User Files\狭すぎて丸め処理が出来ません!!.foo' [# matches = 0]
However, when the insert is attempted, we get a unique key violation (of course) - since there already is a record with that filename.
Database error: Violation of PRIMARY KEY constraint 'PK__My Desig__1B3D5B4BF643706B'. Cannot insert duplicate key in object 'dbo.My Designs'. The duplicate key value is (\\FILE-SERVER\Home Folders\User Files\狭すぎて丸め処理が出来ません!!.foo).
The statement has been terminated.
I've been over the code with a fine-tooth comb, and I can see nothing wrong. :(
The SQL statement that is being generated produces the correct Unicode output of the filename. Our application is compiled for Unicode. The column is SQL_WVARCHAR in ODBC speak.
I've tried adding AutoTranslate=no to the DSN configuration string, but that appears to have no effect.
I've tried logging the database connection from ODBC control panel. Sadly, that interface produces an ANSI log file - so I cannot verify UNICODE / ANSI issues using that tool.
Questions:
Is there a tool I can use to verify that these statements are being created / issued correctly by the ODBC driver to the SQL Server database?
Is there a better way to use ODBC so that the driver doesn't get canoodled by a simple UNICODE string in a SELECT query vs. an INSERT request?
Any other ideas for how to approach this problem (short of replacing our technology)?
In the SELECT statement, make sure you prefix the string literal in the WHERE clause with N to tell SQL Server it's Unicode:
..."PATHNAME" = N'\\FILE-SERVER\Home Folders\User Files\狭すぎて丸め処理が出来ません!!.foo'
Also, MFC converts the data to MBCS or Unicode depending on your build configuration. Make sure you use CStringT in your recordset.
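Beyond fixing the literal, a more robust option is to bind the path as a wide-character parameter, so it never passes through an ANSI conversion at all. Here is a minimal sketch against the raw ODBC API (it assumes an already-connected hdbc handle and a 260-character column; error handling is omitted and the path is just illustrative):

#include <windows.h>
#include <sql.h>
#include <sqlext.h>

// Assumes an open connection handle `hdbc`; error checking omitted.
SQLHSTMT hstmt = SQL_NULL_HSTMT;
SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);

// Illustrative path; in the real application this comes from the document.
const wchar_t *path = L"\\\\FILE-SERVER\\Home Folders\\User Files\\狭すぎて丸め処理が出来ません!!.foo";
SQLLEN cbPath = SQL_NTS;

// Bind as SQL_C_WCHAR -> SQL_WVARCHAR so the UTF-16 data travels untouched.
SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_WCHAR, SQL_WVARCHAR,
                 260 /* assumed column width */, 0, (SQLPOINTER)path, 0, &cbPath);

SQLExecDirectW(hstmt,
               (SQLWCHAR *)L"SELECT * FROM \"My Designs\" WHERE \"PATHNAME\" = ?",
               SQL_NTS);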
I have a C++ application using MFC CRecordset to add rows to a table of an SQLite database, using the free ODBC driver by Ch. Werner. For this I use the usual sequence of rs.Open(), rs.AddNew(), set values, and finally rs.Update().
This works on a small example, but with my actual database rs.Update() fails with error -1 and the following error message: unrecognized token: ""RedFaktorFly" (1). The 'token' is a truncated name of column 14 of the table, whose full name is "RedFaktorFlyt".
In some runs it appends seemingly random characters, so the message becomes, for example, unrecognized token: ""RedFaktorFlyH" (1).
Interestingly, when I add "LongNames=true" to the ODBC connection string, which prepends table names to the column names and therefore makes the SQL query longer, the error becomes (for example) unrecognized token: ""K_Noder.MaxKompresjox" (1) - where "MaxKompresjonsFaktor" is the name of column 10 of the table.
This seems to suggest that there is a limit on the length of a SQL query accepted by the driver - but it seems strange that such a limit would be so small that it would fail already with 14 columns.
I do not think that the limit is in the C++ part, since the same code works fine both with the (commercial) SQLite driver from Devart and with Microsoft's ODBC driver for Access.
I tried adding a TraceFile option to the ODBC connection string, but it does not seem to do anything, so I do not know what exactly gets sent to the ODBC driver.
I see the same behaviour both with 32- and 64-bit builds, using Visual Studio 2015 on Windows 10.
Any suggestions what to try next?
I'll give you a solution that solves your issue completely. I don't know if it's suitable for you, but it definitely works; I use it successfully myself.
For any operation except listing records, use CDatabase, not CRecordset. So, to insert rows into any table, to update records, or to delete records, use CDatabase. To retrieve records from the SQLite database, use CRecordset.
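For instance, here is a minimal sketch of that split (the DSN name is a placeholder; the table and column names are borrowed from your error messages):

#include <afxdb.h>

CDatabase db;
// Placeholder DSN; substitute your own SQLite ODBC data source.
db.OpenEx(_T("DSN=MySQLiteDsn;"), CDatabase::noOdbcDialog);

// Writes go through CDatabase::ExecuteSQL, bypassing CRecordset entirely.
db.ExecuteSQL(_T("INSERT INTO K_Noder (RedFaktorFlyt, MaxKompresjonsFaktor) ")
              _T("VALUES (1.5, 2.0)"));

// Reads can still use CRecordset, opened read-only and forward-only.
CRecordset rs(&db);
rs.Open(CRecordset::forwardOnly, _T("SELECT * FROM K_Noder"), CRecordset::readOnly);
rs.Close();

db.Close();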
I am trying to insert a Persian string like "سلام" into a SQL Server 12 database with Poco ODBC, but in the database I see characters like this: "ط³ظ„ط§ظ…". The column data type is varchar (I tried nvarchar too) and I tested it with different collations like Arabic_CI_AS and Persian_100_CI_AS.
There is no problem with the data as stored in the database; it is exactly what I inserted.
But when I view the database with Microsoft SQL Server Management Studio, and with another application with a Qt interface, both of them show me "ط³ظ„ط§ظ…".
Does anyone have any idea how to fix it?
std::string updated = "سلام";
Statement select(session);
select << "INSERT INTO Employee VALUES( ?)",
use(updated),now;
Please change 'سلام' to N'سلام' so that SQL Server treats the literal as Unicode.
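In Poco terms that would look roughly like the sketch below, assuming the column is nvarchar and an open session as in your snippet; the N prefix makes SQL Server treat the literal as Unicode rather than code-page text:

#include "Poco/Data/Session.h"
#include "Poco/Data/Statement.h"
using namespace Poco::Data::Keywords;
using Poco::Data::Statement;

// `session` is the open Poco::Data::Session from the original snippet.
Statement insert(session);
insert << "INSERT INTO Employee VALUES (N'سلام')", now;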
I am currently developing server software in C++ with a MySQL data backend. I am using the official MySQL Connector library from Oracle to work with MySQL. The connection itself is working and I'm not having any issues with that.
My problem is that the database and the table schemas tend to change every once in a while because new tables and columns keep getting added. Existing columns may also be changed for the same reason. To make sure I recognize outdated server software quickly, I wanted to add a warning for when the database has changed.
My first idea was to hardcode how the database (and its tables and so on) should look and then check whether the current database matches that hardcoded description. But I have no clue how to achieve that.
In summary I want to be able to detect whether
A table has been added or removed
A column in a table has been altered
A column in a table has been added or removed
with as little C++ code as possible. Also it should be quite easy to maintain.
Additional information will be added when required.
I would suggest the following approach:
1) Fork and execute the mysql command-line client, and set up a pair of pipes connected to mysql's standard input and output.
2) At this point you should be able to execute simple commands by piping them to mysql via the standard input pipe, and read the output from the standard output pipe.
You will need to make careful notes as to the output format of each mysql command, so that you know when you finished reading its output, and you can send the next command.
3) As the first order of business, execute:
show tables;
The output that comes back will list all tables in the database. Parsing the output into a list of table names is trivial. Then, for each table, execute:
show create table <tablename>;
The resulting output shows all fields in the table, its keys, and constraints. Pretty much all of this table's schema. Lather, rinse, repeat, for every table.
4) In this manner you can capture a basic schema of the entire database, for comparison purposes. If necessary, use the same approach to capture the triggers, and other objects. You'll likely need to do some minor massaging of the data, and exclude a few bits. "show create table", for example, will include the current AUTO_INCREMENT values, which you can ignore.
This general approach, of driving a mysql process via its standard input and output, is a bit wobbly, of course. With a little more work you can use mysql's native client library to execute all of these commands and capture their results directly, which should be more reliable.
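A rough sketch of that direct route using libmysqlclient (the connection parameters are placeholders; error handling is kept minimal):

#include <mysql/mysql.h>
#include <cstdio>
#include <string>
#include <vector>

int main() {
    MYSQL *conn = mysql_init(nullptr);
    // Placeholder credentials; substitute your own.
    if (!mysql_real_connect(conn, "localhost", "user", "pass", "mydb",
                            0, nullptr, 0))
        return 1;

    // Step 3a: collect the table names.
    std::vector<std::string> tables;
    if (mysql_query(conn, "SHOW TABLES") == 0) {
        MYSQL_RES *res = mysql_store_result(conn);
        while (MYSQL_ROW row = mysql_fetch_row(res))
            tables.push_back(row[0]);
        mysql_free_result(res);
    }

    // Steps 3b/4: capture each table's schema for comparison. Remember to
    // strip volatile bits such as AUTO_INCREMENT=... before diffing.
    for (const std::string &t : tables) {
        std::string q = "SHOW CREATE TABLE `" + t + "`";
        if (mysql_query(conn, q.c_str()) == 0) {
            MYSQL_RES *res = mysql_store_result(conn);
            if (MYSQL_ROW row = mysql_fetch_row(res))
                std::printf("%s\n", row[1]); // column 1 is the CREATE TABLE text
            mysql_free_result(res);
        }
    }
    mysql_close(conn);
    return 0;
}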
The ColdFusion 10 documentation on Updating Your Database has a section on Database-related enhancements in ColdFusion 10. That page mentions that there is now support for CF_SQL_NVARCHAR among others, but with no details about them. Additionally, the cfqueryparam documentation hasn't been updated to include their existence.
The ColdFusion 9 documentation for cfqueryparam mentions that CF_SQL_VARCHAR maps to varchar in MSSQL. This is true unless the String Format setting is enabled in the ColdFusion Administrator datasource settings, in which case CF_SQL_VARCHAR maps to nvarchar. This poorly documented feature is a hack which can cause performance issues within ColdFusion.
So it's great that they have introduced CF_SQL_NVARCHAR, but it would be good to understand how it works. Is it simply an alias for CF_SQL_VARCHAR, making it pointless? Does it always send strings as nvarchar? If so, does CF_SQL_VARCHAR always send varchar?
I would hope that for backward compatibility's sake it is implemented as such:
If String Format is enabled CF_SQL_VARCHAR and CF_SQL_NVARCHAR both map to nvarchar.
If String Format is disabled then CF_SQL_VARCHAR maps to varchar and CF_SQL_NVARCHAR maps to nvarchar.
This would mean any pre-CF10 sites can move to CF10 and work, with the same performance considerations as pre-CF10.
New sites, or sites that rewrite all queries to match CF_SQL_VARCHAR and CF_SQL_NVARCHAR with the database design will not get the performance penalty that is unavoidable pre-CF10.
Can anyone confirm if this is the case; even better if with something official?
While you are waiting for something more official, I will throw in my $0.02 ...
I did some digging and based on my observations (with an MS SQL datasource) I believe that:
CF_SQL_NVARCHAR is not just an alias for CF_SQL_VARCHAR. It maps to the newer NVARCHAR JDBC type, which lets you handle Unicode values at a more granular level.
CF_SQL_NVARCHAR values are always treated as nvarchar.
The handling of CF_SQL_VARCHAR depends on the String Format setting, same as in previous versions.
CF_SQL_NVARCHAR Test/Results:
If you enable datasource logging, you can see the driver invokes the special setNString method whenever CF_SQL_NVARCHAR is used. So ultimately the value is sent to the database as nvarchar. (You can confirm this with a SQL Profiler)
// Query
SELECT ID
FROM Test
WHERE NVarcharColumn = <cfqueryparam value="#form.value#" cfsqltype="cf_sql_nvarchar">
// Log
spy(...)>> PreparedStatement[9].setNString(int parameterIndex, String value)
// Profiler
exec sp_prepexec @p1 output,N'@P1 nvarchar(4000)',N'SELECT ID
FROM Test
WHERE NVarcharColumn = @P1 ',N'Стоял он, дум великих полн'
CF_SQL_VARCHAR Test/Results:
In the case of CF_SQL_VARCHAR, it is technically flagged as varchar. However, the String Format setting ultimately controls how it is handled by the database. When the setting is enabled, it is handled as nvarchar. When it is disabled, it is treated as varchar. Again, you can verify this with a SQL Profiler.
Bottom line, everything I have seen so far says you are right on target about the implementation.
// Query
SELECT ID
FROM Test
WHERE PlainVarcharColumn = <cfqueryparam value="#form.value#" cfsqltype="cf_sql_varchar">
// Log
spy(..)>> PreparedStatement[8].setObject(int parameterIndex, Object x, int targetSqlType)
spy(..)>> parameterIndex = 1
spy(..)>> x = ????? ??, ??? ??????? ????
spy(..)>> targetSqlType = 12 (ie CF_SQL_VARCHAR)
// Profiler (Setting ENABLED)
exec sp_prepexec @p1 output,N'@P1 nvarchar(4000)',N'SELECT ID
FROM Test
WHERE PlainVarcharColumn = @P1 ',N'Стоял он, дум великих полн'
// Profiler (Setting DIS-abled)
exec sp_prepexec @p1 output,N'@P1 varchar(8000)',N'SELECT ID
FROM Test
WHERE PlainVarcharColumn = @P1 ','????? ??, ??? ??????? ????'
This may seem an odd question, as I have a solution; I just don't understand why it works, and that limits me.
I am copying data from various sources into SQL and am using an ADO connection in C++ Builder XE2.
When the data is from MSAccess or MSExcel the code is similar to the following:
//SetupADO..
ADOConn->ConnectionString="Provider=Microsoft.Jet.OLEDB.4.0;Data Source=c:/temp/testdb.mdb";
//Then open it..
ADOConn->Connected = true;
//Build SQL
UnicodeString sSQL = "SELECT * INTO [ODBC;DSN=PostgreSQL30;DATABASE=admin_db;SERVER=192.168.1.10;PORT=5432;UID=user1;PWD=pass1;SSLmode=disable;ReadOnly=0;Protocol=7.4;].[table1] FROM [accesstb]";
//And finally I use the Execute() function of the ADO Connection
ADOConn->Execute(sSQL, iRA, TExecuteOptions() << TExecuteOption::eoExecuteNoRecords);
This works fine for Excel too, but not for CSV files. I'm using the same driver but can only get it working by changing the syntax around.
//SetupADO..
ADOConn->ConnectionString="Provider=Microsoft.Jet.OLEDB.4.0;Data Source=c:\\temp;Extended Properties=\"Text;HDR=Yes;\";Persist Security Info=False";
//Then open it..
ADOConn->Connected = true;
//Build SQL with the IN keyword and start internal ODBC connection with 2 single quotes
UnicodeString sSQL = "SELECT * INTO [table1] IN '' [ODBC;DSN=PostgreSQL30;DATABASE=admin_db;SERVER=192.168.1.10;PORT=5432;UID=user1;PWD=pass1;SSLmode=disable;ReadOnly=0;Protocol=7.4;] FROM [test.csv]";
//And finally Execute() again
ADOConn->Execute(sSQL, iRA, TExecuteOptions() << TExecuteOption::eoExecuteNoRecords);
When using the same SQL syntax as in the Access query, the error "Query input must contain at least one table or query" was returned.
Interestingly, one escaped quote, i.e. \', fails when used in place of the two single ones. I have also tried writing to another Access database in case the problem was with PG, but I had the same results.
Can someone tell me why the IN keyword is required and what the single quotes do?
Extended Properties=\"Text;HDR=Yes;\" specifies text as the datasource, so the connection string is different. The IN '' part designates the external destination database: in Jet SQL the IN clause takes a database path plus an optional type/connect string, and because the destination here is fully described by the bracketed ODBC connect string, the path argument is left as an empty pair of single quotes. That is also why the escaped quote fails: in C++ source, \' produces just a single quote character in the SQL text, i.e. an unterminated literal, instead of the complete (empty) string the parser expects.
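To make the contrast concrete, here are the two forms side by side (the connect string is abbreviated here; the full one is in the question):

// Form 1: destination named inline - works when the source is a real
// database table (Access, Excel):
UnicodeString sFromAccess =
    "SELECT * INTO [ODBC;DSN=PostgreSQL30;...].[table1] FROM [accesstb]";

// Form 2: IN clause - needed when the primary datasource is the Text driver.
// The '' is the (empty) database-path part of the IN clause; the bracketed
// ODBC connect string that follows names the destination instead:
UnicodeString sFromCsv =
    "SELECT * INTO [table1] IN '' [ODBC;DSN=PostgreSQL30;...] FROM [test.csv]";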
References
Importing CSV Data and saving it in database - CodeProject