Whenever I try to convert a std::string into a QString with this letter in it ('ß'), the QString will turn into something like "Ã" or some other really strange letters. What's wrong? I used this code and it didn't cause any errors or warnings!
std::string content = "Heißes Teil.";
ui->txtFind_lang->setText(QString::fromStdString(content));
The std::string has no problem with this character. I even wrote it into a text file without problems. So what am I doing wrong?
You need to set the codec to UTF-8:
QTextCodec::setCodecForTr(QTextCodec::codecForName("UTF-8"));
QTextCodec::setCodecForCStrings(QTextCodec::codecForName("UTF-8"));
QTextCodec::setCodecForLocale(QTextCodec::codecForName("UTF-8"));
By default, Qt uses the Latin-1 encoding, which is limited. By adding this code, you set the default encoding to UTF-8, which allows you to use many more characters.
Though antoyo's answer works, I wasn't too sure why. So, I decided to investigate.
All of my documents are encoded in UTF-8, as are most web pages. The ß character has the Unicode code point U+00DF.
Since UTF-8 is a variable-length encoding, ß is encoded in binary as 11000011 10011111, or C3 9F. Because Qt defaults to the Latin-1 encoding, it reads ß as two different characters: the first byte, C3, maps to Ã, and the second, 9F, maps to nothing, as Latin-1 does not recognize bytes between 128 and 159 (in decimal).
That's why ß appears as à when using Latin1 encoding.
Side note: you might want to brush up on how UTF-8 encoding works, because otherwise it seems a little unintuitive that ß takes two bytes even though its code point, DF, is less than FF and looks like it should fit in just one byte.
Related
I'm creating a wrapper for boost::filesystem for my application. I'm investigating what's going to happen if I have some non-ASCII characters in the file names.
On Windows, the documentation says that all characters are wchar_t. That's very understandable and coherent.
But on Linux, the documentation says that all characters are char! So, 1-byte characters. I was wondering: will this even work and read non-ASCII characters? So I created a directory with an Arabic name, تجريب (a 5-letter word), and read it with boost::filesystem. I printed it in the terminal, and it worked fine (apart from the terminal, Terminator, rendering it incorrectly as left-to-right). The printed result on the terminal was:
/mnt/hgfs/D/تجريب
Something doesn't add up there. How could this be 1-byte char string, and still print Arabic names? So I did the following:
std::for_each(path.string().begin(), path.string().end(), [](char c) {
    std::cout << c << std::endl;
});
Running this, where path is the directory I mentioned above, gave:
/
m
n
t
/
h
g
f
s
/
D
/
�
�
�
�
�
�
�
�
�
�
And at this point, I really, really got lost. The Arabic word is 10 bytes, yet it forms a 5-letter word.
Here comes my question: part of the characters are 1 byte and part are 2 bytes. How does Linux know that those two bytes form a single 2-byte character? Does this mean that I never need a 2-byte character type on Linux for its file system, and char is good for all languages?
Could someone please explain how this works?
OK. The answer is that this is UTF-8 encoding, which is variable-length by design. The Wikipedia article answers my question of how Linux knows that two bytes form a single 2-byte character.
The answer is quoted from there:
Since ASCII bytes do not occur when encoding non-ASCII code points into UTF-8, UTF-8 is safe to use within most programming and document languages that interpret certain ASCII characters in a special way, such as end of string.
So, there's no ambiguity when interpreting the letters.
I face one little problem. I am from a country whose language uses an extended character set (specifically Latin Extended-A, due to characters like š, č, ť, ý, á, ...).
I have an ini file containing these characters and I would like to read them into my program. Unfortunately, it is not working with GetPrivateProfileStringW or ...A.
Here is part of the source code. I hope it will help someone find a solution, because I am getting a little desperate. :-)
SOURCE CODE:
char pcMyExtendedString[200];
GetPrivateProfileStringA(
    "CATEGORY_NAME",
    "SECTION_NAME",
    "error",
    pcMyExtendedString,
    200,
    PATH_TO_INI_FILE
);
INI FILE:
[CATEGORY_NAME]
SECTION_NAME= ľščťžýáíé
Characters ý, á, í, é are read correctly - they are from the Latin-1 Supplement character set. Their hex values are correct (0xFD, 0xE1, 0xED, ...).
Characters ľ, š, č, ť, ž are read incorrectly - they are from the Latin Extended-A character set. Their hex values are incorrect (0xBE, 0x9A, 0xE8, ...). Expected are values like 0x013E, 0x0161, 0x010D, ...
How could be this done? Is it possible or should I avoid these characters at all?
GetPrivateProfileString doesn't do any character conversion. If the call succeeds, it gives you exactly what is in the file.
Since you want Unicode characters, your file is probably UTF-8 or UTF-16. If your file is UTF-8, you should be able to read it with GetPrivateProfileStringA, but it will give you a char array containing the raw UTF-8 bytes (that is, not 0x013E, because 0x013E is not a UTF-8 byte).
If your file is UTF-16, then GetPrivateProfileStringW should work, and give you the UTF-16 codes (0x013E, 0x0161, 0x010D, ...) in a wchar_t array.
Edit: Actually your file is encoded in Windows-1250. This is a single-byte encoding, so GetPrivateProfileStringA works fine, and you can convert the result to UTF-16 if you want by using MultiByteToWideChar with 1250 as the code page parameter.
Try saving the file in UTF-8 (code page 65001); most likely your file is currently in Western European (Windows), code page 1252.
I am trying to create a SOAP call with Japanese string. The problem I faced is that when I encode this string to UTF8 encoded string, it has many control characters in it (e.g. 0x1B (Esc)). If I remove all such control characters to make it a valid SOAP call then the Japanese content appears as garbage on server side.
How can I create a valid SOAP request for Japanese characters? Any suggestion is highly appreciated.
I am using C++ with MS-DOM.
With Best Regards.
If I remember correctly, it's true: the first 32 Unicode code points are not allowed as characters in XML documents, even escaped with &#. I'm not sure whether they're allowed in HTML or not, but the server certainly thinks they're not allowed in your requests, and it gets the only meaningful vote.
I notice that your document claims to be encoded in iso-2022-jp, not utf-8. And indeed, the sequence of characters ESC $ B that appears in your document is valid iso-2022-jp. It indicates that the data is switching encodings (from ASCII to a 2-byte Japanese encoding called JIS X 0208-1983).
But somewhere in the process of constructing your request, something has seen that 0x1B byte and interpreted it as a character U+001B, not realising that it's intended as one byte in data that's already encoded in the document encoding. So, it has XML-escaped it as a "best effort", even though that's not valid XML.
Probably, whatever is serializing your XML document doesn't know that the encoding is supposed to be iso-2022-jp. I imagine it thinks it's supposed to be serializing the document as ASCII, ISO-Latin-1, or UTF-8, and the <meta> element means nothing to it (that's an HTML way of specifying the encoding anyway, it has no particular significance in XML). But I don't know MS-DOM, so I don't know how to correct that.
If you just remove the ESC characters from iso-2022-jp data, then you conceal the fact that the data has switched encodings, and so the decoder will continue to interpret all that 7nMK stuff as ASCII, when it's supposed to be interpreted as JIS X 0208-1983. Hence, garbage.
Something else strange -- the iso-2022-jp code to switch back to ASCII is ESC ( B, but I see |(B</font> in your data, when I'd expect the same thing to happen to the second ESC character as happened to the first: &#x1B;(B</font>. Similarly, $B#M#S(B and $BL#D+(B are mangled attempts to switch from ASCII to JIS X 0208-1983 and back, and again the ESC characters have just disappeared rather than being escaped.
I have no explanation for why some ESC characters have disappeared and one has been escaped, but it cannot be coincidence that what you generate looks almost, but not quite, like valid iso-2022-jp. I think iso-2022-jp is a 7 bit encoding, so part of the problem might be that you've taken iso-2022-jp data, and run it through a function that converts ISO-Latin-1 (or some other 8 bit encoding for which the lower half matches ASCII, for example any Windows code page) to UTF-8. If so, then this function leaves 7 bit data unchanged, it won't convert it to UTF-8. Then when interpreted as UTF-8, the data has ESC characters in it.
If you want to send the data as UTF-8, then first of all you need to actually convert it out of iso-2022-jp (to wide characters or to UTF-8, whichever your SOAP or XML library expects). Secondly you need to label it as UTF-8, not as iso-2022-jp. Finally you need to serialize the whole document as UTF-8, although as I've said you might already be doing that.
As pointed out by Steve Jessop, it looks like you have encoded the text as iso-2022-jp, not UTF-8. So the first thing to do is to check that and ensure that you have proper UTF-8.
If the problem still persists, consider encoding the text.
The simplest option is "hex encoding", where you just write the hex value of each byte as ASCII digits, e.g. the byte 0x1B becomes "1B", i.e. the bytes 0x31, 0x42.
If you want to be fancy you could use MIME or even UUENCODE.
I'm trying to make a base64 coder/decoder and visualize results in Qt (4.7.3) in Ubuntu.
I'm using QPlainTextEdit widgets both to paste the input and to present the results. I have no problem decoding, because the result is correct, but when I try to encode, the result is Chinese characters and non-readable chars.
I think my error is in the encoding of the widget or of the QString, because the encoding algorithm is correct.
Some ideas?
Thanks!
If the encoder works on 8-bit data, it may by chance produce byte sequences that, interpreted as UTF-8, represent Chinese characters (or characters from other languages, for that matter). This also depends on the default QString encoding you selected, etc., but base64 output will display correctly under any encoding. For the encoded string, try base64-encoding it before showing it in the widget.
Problem is categorized in two steps:
Problem Step 1. Access 97 db containing XML strings that are encoded in UTF-8.
The problem boils down to this: the Access 97 db contains XML strings that are encoded in UTF-8. So I created a patch tool to separately convert the XML strings from UTF-8 to Unicode. In order to convert a UTF-8 string to Unicode, I have used the function
MultiByteToWideChar(CP_UTF8, 0, PChar(OriginalName), -1, @newName, Size); (where newName is declared as "newName: Array[0..2048] of WideChar;").
This function works well in most cases; I have checked it with Spanish and Arabic characters. But it chokes on Greek and Chinese characters.
For some Greek characters like "Ευγ. ΚαÏαβιά" (as stored in Access 97), the resulting string contains null characters in between, and when it is stored to a wide string the characters get clipped.
For some Chinese characters like "?¢»?µ?" (as stored in Access 97), the result is totally absurd, like "?¢»?µ?".
Problem Step 2. Access 97 db text strings: the application GUI takes Unicode input and saves it in Access 97.
First I checked with Arabic and Spanish characters; it seemed then that no explicit character encoding was required. But again the problem comes with Greek and Chinese characters.
I tried the same function mentioned above for the text conversion (is it correct?), and the result was again disappointing. The Spanish characters, which are OK without conversion, get their Unicode characters either lost or converted to regular ASCII letters.
The Greek and Chinese characters show behaviour similar to that mentioned in step 1.
Please guide me. Am I taking the right approach? Is there some other way around it?
Well Right now I am confused and full of Questions :)
There is no special requirement for working with Greek characters. The real problem is that the characters were stored in an encoding that Access doesn't recognize in the first place. When the application stored the UTF-8 values in the database, it tried to convert every single byte to the equivalent byte in the database's code page. Every character that had no correspondence in that encoding was replaced with '?'. That may mean that the Greek text is OK, while the Chinese text may be gone.
In order to convert the data to something readable, you have to know the code page it is stored in. Using this you can get the actual bytes and then convert them to Unicode.