Wrong characters showing when inserting a Persian string into SQL Server using Poco ODBC - C++

I am trying to insert a Persian string such as "سلام" into a SQL Server 12 database with Poco ODBC, but in the database I see characters like "ط³ظ„ط§ظ…". The column data type is varchar (I tried nvarchar too), and I tested different collations such as Arabic_CI_AS and Persian_100_CI_AS.
There is no problem with the data stored in the database; it is exactly what I inserted.
But when I view the database with Microsoft SQL Server Management Studio, or with another application that has a Qt interface, both show me "ط³ظ„ط§ظ…".
Does anyone have an idea how to fix this?
// The UTF-8 literal is bound as a narrow std::string through Poco's ODBC connector.
std::string updated = "سلام";
Statement select(session);
select << "INSERT INTO Employee VALUES(?)",
    use(updated), now;

Please change 'سلام' to N'سلام'. The N prefix tells SQL Server to treat the literal as Unicode (nvarchar) rather than converting it through the ANSI code page first.
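A minimal sketch of that suggestion applied to the Poco code above, assuming the column is nvarchar and the source file is saved as UTF-8. Whether the characters survive still depends on how the driver converts the statement text, so treat this as an illustration of the N prefix rather than a guaranteed fix:
#include <Poco/Data/Session.h>
#include <Poco/Data/Statement.h>

using namespace Poco::Data::Keywords;
using Poco::Data::Session;
using Poco::Data::Statement;

// Assumes `session` is an already-connected ODBC session and the Employee
// column is nvarchar; the literal carries the N prefix so SQL Server parses
// it as Unicode instead of narrowing it to the database code page.
void insertGreeting(Session& session)
{
    Statement insert(session);
    insert << "INSERT INTO Employee VALUES(N'سلام')", now;
}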

Related

Microsoft Access ODBC Driver Manager Function sequence error

I am trying to use Qt to query a table in an MS Access database with a QSqlQuery. I am able to query all tables, except for one. The one table returns the error:
[Microsoft][ODBC Driver Manager] Function sequence error
Here is the code I use to query the table.
// Run the query on the default connection, hand it to the model,
// then pull in all remaining rows.
QSqlQueryModel *tempModel = new QSqlQueryModel();
QSqlQuery *qry = new QSqlQuery();
qry->prepare("SELECT * FROM table_name;");
qry->exec();
tempModel->setQuery(*qry);
while (tempModel->canFetchMore())
{
    tempModel->fetchMore();
}
I've tried the answer from this SO question, but no change.
QSqlQuery causing ODBC Function sequence error
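For what it's worth, a minimal sketch of pulling the full ODBC diagnostic text out of Qt before and after handing the query to the model; exec(), lastError() and setQuery() are standard Qt SQL API, and the variable names match the snippet above:
#include <QSqlQuery>
#include <QSqlQueryModel>
#include <QSqlError>
#include <QDebug>

// Runs the query and prints the driver's error text from both the query
// and the model, which usually carries more detail than the short
// "Function sequence error" summary.
void runAndReport(QSqlQuery *qry, QSqlQueryModel *tempModel)
{
    if (!qry->exec())
        qDebug() << qry->lastError().text();
    tempModel->setQuery(*qry);
    if (tempModel->lastError().isValid())
        qDebug() << tempModel->lastError().text();
    while (tempModel->canFetchMore())
        tempModel->fetchMore();
}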
I too encountered this issue, with a DAO connector to a MySQL backend. My pass-through queries were working, but attempts to read from the table using DAO received the ODBC function sequence error. The recordset connector was fine: no problem with MoveFirst, MoveLast, the record count, or enumerating the field names. The program only failed when attempting to read record data, and again my pass-through queries had no problem.
The issue was easy to resolve: I had forgotten to refresh my ODBC link after making table schema changes. I refreshed the link, and now everything works normally again.
To simplify my life, I added a program link so end-users can automatically refresh the ODBC links.
The issue seemed to be with the Date/Time datatype of one of the columns.
One of my columns had a data type of "Date/Time" with a property of "IME Sentence Mode" set to "Phrase Predict".
Changing this from "Phrase Predict" to "None" allowed me to query the MS Access table from my Qt application.

ODBC SQL Server Unicode Bug?

Background:
We have an application that uses the ODBC API to interact with Access and SQL Server (dynamically, depending on user's configuration).
I have discovered a bug which might be in the SQL Server ODBC driver, might be a misconfiguration of the ODBC DSN we create, or might somehow be a bug in our own code.
When a document is edited and saved, we query the database to see if this file has a corresponding record in the database - if so, we update the record with the updated data from the document; if not, we do an insert to create the necessary record for it.
We use the filename as the unique primary key on our table, and this works fine normally.
The bug is that if the filename contains characters outside of the current ANSI code page, then the select indicates no matches:
SQL: SELECT * FROM "My Designs" WHERE "PATHNAME" = '\\FILE-SERVER\Home Folders\User Files\狭すぎて丸め処理が出来ません!!.foo' [# matches = 0]
However, when the insert is attempted, we get a unique key violation (of course) - since there already is a record with that filename.
Database error: Violation of PRIMARY KEY constraint 'PK__My Desig__1B3D5B4BF643706B'. Cannot insert duplicate key in object 'dbo.My Designs'. The duplicate key value is (\\FILE-SERVER\Home Folders\User Files\狭すぎて丸め処理が出来ません!!.foo).
The statement has been terminated.
I've been over the code with a fine-tooth comb, and I can see nothing wrong. :(
The SQL statement that is being generated produces the correct Unicode output of the filename. Our application is compiled for Unicode. The column is SQL_WVARCHAR in ODBC speak.
I've tried adding AutoTranslate=no to the DSN configuration string, but that appears to have no effect.
I've tried logging the database connection from ODBC control panel. Sadly, that interface produces an ANSI log file - so I cannot verify UNICODE / ANSI issues using that tool.
Questions:
Is there a tool I can use to verify that these statements are being created / issued correctly by the ODBC driver to the SQL Server database?
Is there a better way to use ODBC so that the driver doesn't get tripped up by a simple Unicode string in a SELECT query vs. an INSERT request?
Any other ideas for how to approach this problem (short of replacing our technology)?
In the SELECT statement, make sure you prefix the WHERE clause string literal with N to tell SQL Server it's Unicode:
..."PATHNAME" = N'\\FILE-SERVER\Home Folders\User Files\狭すぎて丸め処理が出来ません!!.foo'
Also, MFC converts the data to MBCS or UNICODE depending on your configuration. Make sure you use CStringT in your recordset.
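Another angle, sketched under the assumption that the path can be passed as a bound parameter instead of being spliced into the SQL text: binding it as SQL_C_WCHAR through the wide ODBC entry points keeps the string in UTF-16 end to end, so the ANSI code page never gets a chance to mangle it. The table and column names below come from the question; handle allocation and error handling are omitted.
#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <cwchar>

// Assumes hstmt is an already-allocated statement handle on a live connection.
void findByPathname(SQLHSTMT hstmt, const wchar_t* pathname)
{
    const wchar_t* sql = L"SELECT * FROM \"My Designs\" WHERE \"PATHNAME\" = ?";
    SQLLEN lenInd = SQL_NTS;

    SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT,
                     SQL_C_WCHAR,       // C type of the buffer: wide chars
                     SQL_WVARCHAR,      // SQL type of the column: nvarchar
                     wcslen(pathname), 0,
                     (SQLPOINTER)pathname, 0, &lenInd);

    SQLExecDirectW(hstmt, (SQLWCHAR*)sql, SQL_NTS);
    // SQLFetch / SQLGetData as usual...
}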

How to read SQL Server column collation metadata using ODBC API?

Our C++ application is able to get collation-related column metadata from SQL Server using the OLE DB APIs (DBCOLUMN_COLLATINGSEQUENCE, DBCOLUMN_TDSCOLLATION, etc.), but I need to use ODBC as our application has to be cross-platform. We are using the ODBC API SQLColAttribute to read rowset metadata, but this API does not have any identifiers which can return the collation name.
I tried using SQL_CA_SS_COLUMN_COLLATION (defined in sqlncli.h) as an identifier, but SQLColAttribute only returns “Collation Name” as the collation.
I also tried using SQLGetStmtAttr followed by SQLGetDescField, using the same identifier, and I got "Collation Name" back.
I have scoured all of MSDN for answers, but haven’t been able to find any. I can get the collation name from INFORMATION_SCHEMA.COLUMNS, but that will not work for calculated columns returned by queries.
I am looking for a clean way to get collation information from result set metadata using ODBC. Any ideas?
This query will return the collation_name for each column present in the current database.
SELECT o.name AS ObjectName, c.name AS ColumnName, c.collation_name
FROM sys.columns c
INNER JOIN sys.objects o ON c.object_id = o.object_id
INNER JOIN sys.types ty ON c.system_type_id = ty.system_type_id
WHERE o.is_ms_shipped = 0
AND ty.collation_name IS NOT NULL
AND ty.name <> 'sysname';
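If it helps, here is a rough sketch of reading that information back through plain ODBC calls; SQLExecDirect, SQLBindCol and SQLFetch are standard API, the handle setup and error handling are left out, and the query is a slightly condensed variant of the one above (filtering on c.collation_name directly):
#include <sql.h>
#include <sqlext.h>
#include <cstdio>

// Assumes hstmt is an allocated statement handle on an open connection to the
// target database. Prints object, column and collation names.
void dumpCollations(SQLHSTMT hstmt)
{
    SQLCHAR query[] =
        "SELECT o.name, c.name, c.collation_name "
        "FROM sys.columns c "
        "INNER JOIN sys.objects o ON c.object_id = o.object_id "
        "WHERE o.is_ms_shipped = 0 AND c.collation_name IS NOT NULL";

    SQLCHAR objectName[256], columnName[256], collation[256];
    SQLLEN len1 = 0, len2 = 0, len3 = 0;

    SQLExecDirect(hstmt, query, SQL_NTS);
    SQLBindCol(hstmt, 1, SQL_C_CHAR, objectName, sizeof(objectName), &len1);
    SQLBindCol(hstmt, 2, SQL_C_CHAR, columnName, sizeof(columnName), &len2);
    SQLBindCol(hstmt, 3, SQL_C_CHAR, collation, sizeof(collation), &len3);

    while (SQL_SUCCEEDED(SQLFetch(hstmt)))
        std::printf("%s.%s -> %s\n", (char*)objectName, (char*)columnName, (char*)collation);
}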

MySQL C++ Connector: Get the insert_id

I am using MySQL Connector/C++. There is an auto_increment column in my table, and I want to get the insert id when I perform an insert. Does someone know how to get it? Thanks.
My code is something like:
// Disable autocommit, run the prepared INSERT, then commit the transaction.
conn->setAutoCommit(0);
pstmt.reset(conn->prepareStatement(insertStr.c_str()));
int updateCount = pstmt->executeUpdate();
conn->commit();
If the API of the library you are using does not provide a method to retrieve the last_insert_id (which seems to be the case for the C++ Connector), you can always do a query
SELECT LAST_INSERT_ID();
which gives you the "value representing the first automatically generated value successfully inserted for an AUTO_INCREMENT column as a result of the most recently executed INSERT statement." See MySQL's documentation for the full explanation.
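A minimal sketch of that approach with Connector/C++, reusing the conn from the question; createStatement, executeQuery and getUInt64 are the connector's ordinary JDBC-style API:
#include <cstdint>
#include <memory>
#include <cppconn/connection.h>
#include <cppconn/statement.h>
#include <cppconn/resultset.h>

// Call this on the same connection, right after the INSERT has run.
uint64_t lastInsertId(sql::Connection *conn)
{
    std::unique_ptr<sql::Statement> stmt(conn->createStatement());
    std::unique_ptr<sql::ResultSet> res(stmt->executeQuery("SELECT LAST_INSERT_ID()"));
    return res->next() ? res->getUInt64(1) : 0;
}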
UPDATE:
I found this post from a user who says that if you do not use auto_increment on your field you can use
SELECT @@identity AS id;

Using a single ADO Query to copy data from a text file into another ODBC source

This may seem an odd question as I already have a solution; I just don't understand why it works, and that limits me.
I am copying data from various sources into SQL databases and am using an ADO connection in C++ Builder XE2.
When the data is from MSAccess or MSExcel the code is similar to the following:
//SetupADO..
ADOConn->ConnectionString="Provider=Microsoft.Jet.OLEDB.4.0;Data Source=c:/temp/testdb.mdb";
//Then open it..
ADOConn->Connected = true;
//Build SQL
UnicodeString sSQL = "SELECT * INTO [ODBC;DSN=PostgreSQL30;DATABASE=admin_db;SERVER=192.168.1.10;PORT=5432;UID=user1;PWD=pass1;SSLmode=disable;ReadOnly=0;Protocol=7.4;].[table1] FROM [accesstb]";
//And finally I use the Execute() function of the ADO Connection
ADOConn->Execute(sSQL, iRA, TExecuteOptions() << TExecuteOption::eoExecuteNoRecords);
This works fine for Excel too, but not for CSV files. I'm using the same driver but can only get it working by changing the syntax around.
//SetupADO..
ADOConn->ConnectionString="Provider=Microsoft.Jet.OLEDB.4.0;Data Source=c:\\temp;Extended Properties=\"Text;HDR=Yes;\";Persist Security Info=False";
//Then open it..
ADOConn->Connected = true;
//Build SQL with the IN keyword and start internal ODBC connection with 2 single quotes
UnicodeString sSQL = "SELECT * INTO [table1] IN '' [ODBC;DSN=PostgreSQL30;DATABASE=admin_db;SERVER=192.168.1.10;PORT=5432;UID=user1;PWD=pass1;SSLmode=disable;ReadOnly=0;Protocol=7.4;] FROM [test.csv]";
//And finally Execute() again
ADOConn->Execute(sSQL, iRA, TExecuteOptions() << TExecuteOption::eoExecuteNoRecords);
When using the same SQL as in the Access query, the error "Query input must contain at least one table or query" was returned.
Interestingly, one escaped quote, i.e. \', fails when used in place of the two single ones. I have also tried writing to another Access database in case the problem was with PG, but I had the same results.
Can someone tell me why the IN keyword is required and what the single quotes do?
Extended Properties=\"Text;HDR=Yes;\" specifies text as the data source, so the connection string is different. The IN clause names the external database in which [table1] is created: the empty string '' is the database-path argument and the bracketed ODBC connect string that follows identifies the destination, so the destination table ends up in the PostgreSQL database while [test.csv] is read through the text driver.
References
Importing CSV Data and saving it in database - CodeProject