MySQL Connector/C or MySQL Connector/C++: which to use? - c++

I use Visual Studio 2012 and have a console application that should connect to a MySQL server. I use MySQL Connector/C++, but when I read from a table, instead of 'word' I get four unknown symbols, then 'word', then many more unknown symbols (some of them words), and it ends with a fatal error. What is the problem? Should I use Connector/C instead?
This is my code:
sql::mysql::MySQL_Driver *driver;
sql::Connection *con;
sql::Statement *stmt;

driver = sql::mysql::get_mysql_driver_instance();
con = driver->connect("localhost", "root", "pass");
stmt = con->createStatement();
stmt->execute("USE mail_members");
sql::ResultSet *res = stmt->executeQuery("SELECT id FROM messages");
res->next();
std::cout << res->getString("id").asStdString();
delete res;
delete stmt;
delete con;
Does anyone use MySQL Connector/C++?
P.S. This is my messages table:
+---------+---------+------+-----+---------+-------+
| Field   | Type    | Null | Key | Default | Extra |
+---------+---------+------+-----+---------+-------+
| id      | int(11) | YES  |     | NULL    |       |
| message | text    | YES  |     | NULL    |       |
+---------+---------+------+-----+---------+-------+
I use the MySQL connector for C# in another application and it works correctly. Maybe this doesn't work because my application is a console application (with stdafx files)?

Is your code throwing any exceptions?
I put the field name into a std::string and then pass it to the resultset methods:
const std::string name = fri.get_field_name();
unsigned int result_value = m_resultset.getUInt(name);
In my tables, the record ID field is an integer not a string.
+----------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------+------------------+------+-----+---------+----------------+
| ID_Title | int(10) unsigned | NO | PRI | NULL | auto_increment |
| Title | varchar(64) | NO | UNI | NULL | |
+----------+------------------+------+-----+---------+----------------+
2 rows in set (0.08 sec)
Is your ID field a string?


Selecting column with UUID type returns syntax error in C++

I am trying to select a column whose type is UUID:
const std::wstring Select = L"SELECT OptionID FROM OptionsSet WHERE OptionID = ?";
// Create query object
query->addUUID(some_option_id);
query->execute();
After execution I am getting
Error number SqlState=42000 NativeError=102 ErrorMsg=[Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near '#P1'.
I suppose '?' is being replaced with the UUID like this:
OptionID = 70fc92c2-6fd9-4727-8b29-f55b8d6fb07c
but instead I guess it should have been
OptionID = '70fc92c2-6fd9-4727-8b29-f55b8d6fb07c'
I have tried adding that punctuation mark: L"SELECT OptionID FROM OptionsSet WHERE OptionID = '?'";, but it didn't help.
Well, I found the issue: I forgot to put a semicolon at the end of the query. So it should be:
const std::wstring Select = L"SELECT OptionID FROM OptionsSet WHERE OptionID = ?;";

Finding start of compressed data for items in a zip with zip4j

I'm trying to find the start of the compressed data for each zip entry using zip4j. It's a great library for returning the local header offset, which Java's ZipFile does not do. However, I'm wondering if there is a more reliable way than what I'm doing below to get to the start of the compressed data? Thanks in advance.
offset = header.getOffsetLocalHeader();
offset += 30; // add fixed local file header size
offset += header.getFileNameLength(); // add filename field length
offset += header.getExtraFieldLength(); // add extra field length
// not quite the right number, sometimes have to add 4;
// seems to be some header data that is outside the extra field value
offset += 4;
Edit
Here is a sample zip:
https://alexa-public.s3.amazonaws.com/test.zip
The code below decompresses each item properly but won't work without the +4.
String path = "/Users/test/Desktop/zip test/test.zip";
List<FileHeader> fileHeaders = new ZipFile(path).getFileHeaders();
for (FileHeader header : fileHeaders) {
long offset = 30 + header.getOffsetLocalHeader() + header.getFileNameLength() + header.getExtraFieldLength();
//fudge factor!
offset += 4;
RandomAccessFile f = new RandomAccessFile(path, "r");
byte[] buffer = new byte[(int) header.getCompressedSize()];
f.seek(offset);
f.read(buffer, 0, (int) header.getCompressedSize());
f.close();
Inflater inf = new Inflater(true);
inf.setInput(buffer);
byte[] inflatedContent = new byte[(int) header.getUncompressedSize()];
inf.inflate(inflatedContent);
inf.end();
FileOutputStream fos = new FileOutputStream(new File("/Users/test/Desktop/" + header.getFileName()));
fos.write(inflatedContent);
fos.close();
}
The reason you have to add 4 to the offset in your example is that the size of the extra data field in the central directory for this entry (= file header) differs from the one in the local file header. This is perfectly legal per the zip specification: the central directory and the local header may have different extra data field sizes. In fact, the extra data field in question, the Extended Timestamp extra field (signature 0x5455), has an official definition with different lengths in the two places.
Extended Timestamp extra field (signature 0x5455)
Local-header version:
| Value | Size | Description |
| ------------- |---------------|---------------------------------------|
| 0x5455 | Short | tag for this extra block type ("UT") |
| TSize | Short | total data size for this block |
| Flags | Byte | info bits |
| (ModTime) | Long | time of last modification (UTC/GMT) |
| (AcTime) | Long | time of last access (UTC/GMT) |
| (CrTime) | Long | time of original creation (UTC/GMT) |
Central-header version:
| Value | Size | Description |
| ------------- |---------------|---------------------------------------|
| 0x5455 | Short | tag for this extra block type ("UT") |
| TSize | Short | total data size for this block |
| Flags | Byte | info bits |
| (ModTime) | Long | time of last modification (UTC/GMT) |
In the sample zip file you have attached, the tool that created the zip file writes 4 bytes of additional information into this extra field in the local header compared to the central directory.
Relying on the extra field length from the central directory to reach the start of data can therefore be error prone. A more reliable way to achieve what you want is to read the extra field length from the local header itself. I have modified your code slightly to read the extra field length from the local header instead of the central directory:
import net.lingala.zip4j.ZipFile;
import net.lingala.zip4j.model.FileHeader;
import net.lingala.zip4j.util.RawIO;
import org.junit.Test;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.List;
import java.util.zip.DataFormatException;
import java.util.zip.Inflater;
public class ZipTest {
private static final int OFFSET_TO_EXTRA_FIELD_LENGTH_SIZE = 28;
private RawIO rawIO = new RawIO();
@Test
public void testExtractWithDataOffset() throws IOException, DataFormatException {
String basePath = "/Users/slingala/Downloads/test/";
String path = basePath + "test.zip";
List<FileHeader> fileHeaders = new ZipFile(path).getFileHeaders();
for (FileHeader header : fileHeaders) {
RandomAccessFile f = new RandomAccessFile(path, "r");
byte[] buffer = new byte[(int) header.getCompressedSize()];
f.seek(header.getOffsetLocalHeader() + OFFSET_TO_EXTRA_FIELD_LENGTH_SIZE);
int extraFieldLength = rawIO.readShortLittleEndian(f);
f.skipBytes(header.getFileNameLength() + extraFieldLength);
f.read(buffer, 0, (int) header.getCompressedSize());
f.close();
Inflater inf = new Inflater(true);
inf.setInput(buffer);
byte[] inflatedContent = new byte[(int) header.getUncompressedSize()];
inf.inflate(inflatedContent);
inf.end();
FileOutputStream fos = new FileOutputStream(new File(basePath + header.getFileName()));
fos.write(inflatedContent);
fos.close();
}
}
}
On a side note, I wonder why you want to read the data, deal with the inflater, and extract the content yourself? With zip4j you can extract all entries with ZipFile.extractAll(), or you can extract each entry in the zip file with streams if you wish, via ZipFile.getInputStream(). A skeleton example is:
ZipFile zipFile = new ZipFile("filename.zip");
FileHeader fileHeader = zipFile.getFileHeader("entry_name_in_zip.txt");
InputStream inputStream = zipFile.getInputStream(fileHeader);
Once you have the InputStream, you can read the content and write it to any OutputStream. This way you can extract each entry in the zip file without having to deal with the inflaters yourself.

How to update\insert data into a NVARCHAR column using ODBC API

I am trying to update a Sybase table via Microsoft's ODBC API. The following is the basics of the C++ I am trying to execute. In table TableNameXXX, ColumnNameXXX has a type of NVARCHAR(200).
SQLWCHAR updateStatement[ 1024 ] = L"UPDATE TableNameXXX SET ColumnNameXXX = N 'Executive Chair эюя' WHERE PKEYXXX = 'VALUE'";
if( ( ret = SQLExecDirect( hstmt, updateStatement, SQL_NTS ) ) != SQL_SUCCESS )
{
// Handle Error
}
The Sybase database has a CatalogCollation of 1252LATIN1, CharSet of windows-1252, Collation of 1252LATIN1, NcharCharSet of UTF-8 and an NcharCollation of UCA.
Once this works for the Sybase ODBC connection I need to get it to work in various other ODBC drivers for other databases.
The error I get is "[Sybase][ODBC Driver][SQL Anywhere]Syntax error near 'Executive Chair ' on line 1".
If I take out the Unicode characters and remove the N, it will update.
Does anyone know how to get this to work? What am I missing?
I wrote a C# .NET project using an OdbcConnection to a SQL Server database and am getting "sort of" the same error. I mean "sort of" because this error contains the Unicode text in the message, whereas the Sybase ODBC error has "lost" the Unicode.
static void Main(string[] args)
{
using (OdbcConnection odbc = new OdbcConnection("Dsn=UnicodeTest;UID=sa;PWD=password")) // ;stmt=SET NAMES 'utf8';CharSet=utf16"
//using (OdbcConnection odbc = new OdbcConnection("Dsn=Conversion;CharSet=utf8")) // ;stmt=SET NAMES 'utf8';CharSet=utf8
{
try
{
odbc.Open();
string queryString = "UPDATE TableNameXXX SET ColumnNameXXX = N 'Executive Chair эюя' WHERE PKEYXXX = 'AS000008'";
System.Console.Out.WriteLine(queryString);
OdbcCommand command = new OdbcCommand(queryString);
command.Connection = odbc;
int result = command.ExecuteNonQuery();
if( result == 1)
{
System.Diagnostics.Debug.WriteLine("Success");
}
}
catch(Exception ex)
{
System.Diagnostics.Debug.WriteLine(ex.StackTrace);
System.Diagnostics.Debug.WriteLine(ex.Message);
}
}
}
"ERROR [42000] [Microsoft][SQL Server Native Client 11.0][SQL Server]Incorrect syntax near 'Executive Chair эюя'."
There should be no space between the N and the Unicode enclosed string.
"UPDATE TableNameXXX SET ColumnNameXXX = N 'Executive Chair эюя' WHERE PKEYXXX = 'AS000008'"
should be
"UPDATE TableNameXXX SET ColumnNameXXX = N'Executive Chair эюя' WHERE PKEYXXX = 'AS000008'"

Delphi: Define a static list of SQL server types

I'm working on a program that uses different types of SQL database servers.
Inside this program I work a lot with these items.
This is how I define the static list now:
type
TServerType = (stNone, stMsSQL, stMySQL, stSQLite);
Now I would like to associate the short name and the long name of each server with this type, like this:
stNone   | (null) | (null)
stMsSQL  | mssql  | Microsoft SQL
stMySQL  | mysql  | MySQL
stSQLite | sqlite | SQLite
Is it possible to do this directly using this type (without a var or const array)?
Or what would you recommend as the best way to define this associated list?
A static array will work for that:
type
TServerType = (stNone, stMsSQL, stMySQL, stSQLite);
TServerNames = record
ShortName: String;
LongName: String;
end;
const
ServerNames: array[TServerType] of TServerNames = (
(ShortName: ''; LongName: ''),
(ShortName: 'mssql'; LongName: 'Microsoft SQL'),
(ShortName: 'mysql'; LongName: 'MySQL'),
(ShortName: 'sqlite'; LongName: 'SQLite')
);
var
ServerType: TServerType;
ShortName: String;
LongName: String;
begin
ServerType := ...;
ShortName := ServerNames[ServerType].ShortName;
LongName := ServerNames[ServerType].LongName;
...
end;

Querying a MySQL BINARY(8) column, using a std::vector<unsigned char>

I am currently teaching myself C++ (C++0x), so my apologies for any idiocy in this question.
I have a std::vector of length 8 that I want to query in a database to see if it is listed, and if not I will insert it into the database. It's a vector of unsigned char, as that is what the base64_decode implementation I found returns.
This is my query & insert method as it currently stands:
int MySQLConnection::insertByteHash(std::vector<unsigned char> hashSection) {
sql::PreparedStatement *prep_stmt;
sql::ResultSet *res;
prep_stmt = con->prepareStatement("SELECT byteHash_id FROM byteHash WHERE byteHash_content = ? LIMIT 1;");
prep_stmt->setBlob(1, &hashSection);
res = prep_stmt->executeQuery();
if(res->next()){
return res->getInt("byteHash_id");
}
/*
prep_stmt = con->prepareStatement("INSERT INTO byteHash SET byteHash_content = ?;");
prep_stmt->setBlob(1, &str);
res = prep_stmt->executeQuery();
*/
return 1;
}
This is the table in question:
CREATE TABLE `byteHash` (
`byteHash_id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`byteHash_content` binary(8) NOT NULL,
PRIMARY KEY (`byteHash_id`),
UNIQUE KEY `UNIQUE_byteHash_content` (`byteHash_content`) USING HASH
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8 COLLATE=utf8_bin
The binary data as hex is 8FDA7E1F0B56593F.
I was looking into std::strstream, but I keep reading about it being deprecated (not that I managed to get it working anyway), so I am presuming there is a better way to do this.