I have written a parser that, it turns out, works incorrectly with UTF-8 text.
The parser is very simple:
while(pos < end) {
    // find some ASCII char
    if (text.at(pos) == '#') {
        // Check some conditions and if the syntax is wrong...
        if (...)
            createDiagnostic(pos);
    }
    pos++;
}
So you can see I am creating a diagnostic at pos. But that pos is wrong if there were any UTF-8 characters before it (because a UTF-8 character can consist of more than one char). How do I correctly skip UTF-8 characters as if each were a single character?
I need this because the diagnostics are sent to UTF-8-aware VSCode.
I tried reading some articles on UTF-8 in C++, but all the material I found is huge, and I only need to skip over the UTF-8 sequences.
If the code point is less than 128, then UTF-8 encodes it as ASCII (no highest bit set). If the code point is equal to or larger than 128, all of its encoded bytes have the highest bit set. So this will work:
unsigned char b = <...>; // b is a byte from a UTF-8 string
if (b & 0x80) {
    // ignore it, as b is part of a >=128 code point
} else {
    // use b as an ASCII code
}
Note: if you want to calculate the number of UTF-8 code points in a string, you have to count the bytes for which either:
!(b & 0x80): the byte is an ASCII character, or
(b & 0xc0) == 0xc0: the byte is the first byte of a multi-byte UTF-8 sequence.
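For instance, a minimal sketch of that counting rule (the function name is just illustrative): a byte starts a new code point exactly when it is not a continuation byte (10xxxxxx), which combines the two conditions above.

#include <cstddef>
#include <string>

std::size_t countUtf8Codepoints(const std::string& s)
{
    std::size_t count = 0;
    for (unsigned char b : s)
        if ((b & 0xC0) != 0x80) // ASCII byte or lead byte, i.e. not a continuation byte
            ++count;
    return count;
}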
I'm trying to understand how to handle basic UTF-8 operations in C++.
Let's say we have this scenario: a user inputs a name, it's limited to 10 letters (symbols in the user's language, not bytes), and it's stored.
In ASCII it can be done this way:
// ASCII
char * input;      // user's input
char buf[11];      // 10 letters + zero
snprintf(buf, 11, "%s", input); buf[10] = 0;
int len = strlen(buf); // returns 10 (correct)
Now, how do I do it in UTF-8? Let's assume it's a charset that needs up to 4 bytes per character (like Chinese).
// UTF-8
char * input;      // user's input
char buf[41];      // 10 letters * 4 bytes + zero
snprintf(buf, 41, "%s", input); // ?? makes no sense, it limits by number of bytes, not letters
int len = strlen(buf); // returns the number of bytes, not letters (incorrect)
Can it be done with standard sprintf/strlen? Are there any replacements for those functions for use with UTF-8 (in PHP there was an mb_ prefix for such functions, IIRC)? If not, do I need to write them myself? Or do I need to approach it another way?
Note: I would prefer to avoid a wide-character solution...
EDIT: Let's limit it to the Basic Multilingual Plane only.
I would prefer to avoid a wide-character solution...
Wide characters are just not enough, because if you need 4 bytes for a single glyph, that glyph is likely to be outside the Basic Multilingual Plane, and it will not be representable by a single 16-bit wchar_t character (assuming wchar_t is 16 bits wide, which is just the common size).
You will have to use a true Unicode library to convert the input to a list of Unicode characters in their Normal Form C (canonical composition) or the compatibility equivalent (NFKC)(*), depending on whether, for example, you want to count one or two characters for the ligature ﬀ (U+FB00). AFAIK, your best bet is ICU.
(*) Unicode allows multiple representations of the same glyph, notably the normal composed form (NFC) and the normal decomposed form (NFD). For example, the French é character can be represented in NFC as U+00E9 (LATIN SMALL LETTER E WITH ACUTE) or in NFD as U+0065 U+0301 (LATIN SMALL LETTER E followed by COMBINING ACUTE ACCENT, also displayed as é).
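To make that concrete, here are the two forms as C string literals showing their raw UTF-8 bytes (just an illustration, not part of the original answer):

const char* e_nfc = "\xC3\xA9";    // NFC: U+00E9, one code point, two UTF-8 bytes
const char* e_nfd = "e\xCC\x81";   // NFD: U+0065 U+0301, two code points, three UTF-8 bytes

Both display as é, but the byte counts and code point counts differ, which is why normalizing before counting matters.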
References and other examples on Unicode equivalence
strlen only counts the bytes in the input string, until the terminating NUL.
On the other hand, you seem interested in the glyph count (what you called "symbols in user's language").
The process is complicated by UTF-8 being a variable-length encoding (as is, to a lesser extent, UTF-16), so a code point can be encoded using one to four bytes. And there are also Unicode combining characters to consider.
To my knowledge, there's nothing like that in the standard C++ library. However, you may have better luck using third party libraries like ICU.
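For what it's worth, a rough sketch of counting user-perceived characters (grapheme clusters) with ICU's BreakIterator might look like this; the function name is illustrative and error handling is minimal:

#include <unicode/brkiter.h>
#include <unicode/unistr.h>
#include <memory>
#include <string>

// Counts grapheme clusters ("user-perceived characters") in a UTF-8 string.
int graphemeCount(const std::string& utf8)
{
    UErrorCode status = U_ZERO_ERROR;
    std::unique_ptr<icu::BreakIterator> it(
        icu::BreakIterator::createCharacterInstance(icu::Locale::getRoot(), status));
    if (U_FAILURE(status))
        return -1;

    icu::UnicodeString text = icu::UnicodeString::fromUTF8(utf8);
    it->setText(text);

    int count = 0;
    it->first();                                   // position at the start of the text
    while (it->next() != icu::BreakIterator::DONE) // each step advances one grapheme cluster
        ++count;
    return count;
}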
std::strlen indeed considers only one-byte characters. To compute the length of a NUL-terminated wide (wchar_t) string, one can use std::wcslen instead.
Example:
#include <iostream>
#include <cwchar>
#include <clocale>
int main()
{
    const wchar_t* str = L"爆ぜろリアル!弾けろシナプス!パニッシュメントディス、ワールド!";
    std::setlocale(LC_ALL, "en_US.utf8");
    std::wcout.imbue(std::locale("en_US.utf8"));
    std::wcout << "The length of \"" << str << "\" is " << std::wcslen(str) << '\n';
}
If you do not want to count UTF-8 characters yourself, you can use a temporary conversion to wide characters to cut your input string. You do not need to store the intermediate values:
#include <iostream>
#include <codecvt>
#include <string>
#include <locale>
std::string cutString(const std::string& in, size_t len)
{
    std::wstring_convert<std::codecvt_utf8<wchar_t>> cvt;
    auto wstring = cvt.from_bytes(in);
    if (len < wstring.length())
    {
        wstring = wstring.substr(0, len);
        return cvt.to_bytes(wstring);
    }
    return in;
}

int main(){
    std::string test = "你好世界這是演示樣本";
    std::string res = cutString(test, 5);
    std::cout << test << '\n' << res << '\n';
    return 0;
}
/****************
Output
$ ./test
你好世界這是演示樣本
你好世界這
*/
I just finished creating a function to cache any loaded font in the small game engine I'm building with SDL2. The following function works flawlessly, and rendering text is about 12 times faster than creating a new SDL_Surface each time I need text. However, as you can see, it caches only ANSI characters; this is fine for English, but not if I ever want to translate my game (German umlauts or Cyrillic glyphs are not available in ANSI).
void cacheFonts(){
    for(unsigned int i = 0; i < GlobalFontAssets.size; i++){
        SDL_Colour color_font = {255, 255, 255, 255};
        std::vector<SDL_Texture*> tempVector;
        for(int j = 32; j < 128; j++){
            char temp[2];
            temp[0] = j;
            temp[1] = 0;
            SDL_Surface* glyph = TTF_RenderUTF8_Blended(GlobalFontAssets.fonts[i], temp, color_font);
            SDL_Texture* texture = SDL_CreateTextureFromSurface(renderer, glyph);
            tempVector.push_back(texture);
            SDL_FreeSurface(glyph);
        }
        GlobalFontAssets.cache.push_back(tempVector);
    }
    printf("Global Fonts Cached!\n");
}
I have tried using wchar_t and looping from 0 to 256², but I cannot get any characters to print, even using printf, wprintf, cout, and wcout. However, if I do:
std::string str = "Привет, öäü";
printf("%s\n", str.c_str());
then it prints the string on the terminal just fine. I should mention that I am on Ubuntu 16.04, so a Windows-only solution doesn't work for me; ideally I wish to do this in a portable manner. For those not familiar with SDL: all I need is a way to get every UTF-8 character in a C string. I hope that this is possible.
Addressing only this portion of the question:
all I need is a way to get every UTF-8 character in a C string
Wikipedia has a nice table showing the various encoding rules, the range of codepoints that each covers, and the corresponding UTF-8 length and data bytes.
For covering the first 2000-odd characters, just generate all the one- and two-byte patterns:
unsigned char s[3] = { 0 }; // unsigned char, so the comparisons below also work where plain char is signed
for(s[0] = 0x00; s[0] < 0x80u; ++s[0]) { // can start at 0x20 to skip control characters
    // one byte encodings
}
for(s[0] = 0xC0u; s[0] < 0xE0u; ++s[0]) { // note: lead bytes 0xC0 and 0xC1 only produce overlong (invalid) encodings
    for(s[1] = 0x80u; s[1] < 0xC0u; ++s[1]) {
        // two byte encodings
    }
}
It's no coincidence that the values 0x80u and 0xC0u appear more than once in the loop conditions -- the fact that there is no overlap between lead bytes and following bytes is what gives UTF-8 its self-synchronizing property.
I guess you're relying on the following fact (quoted from Wikipedia):
The first 128 characters (US-ASCII) need one byte. The next 1,920 characters need two bytes to encode, which covers the remainder of almost all Latin-script alphabets, and also Greek, Cyrillic, Coptic, Armenian, Hebrew, Arabic, Syriac, Thaana and N'Ko alphabets, as well as Combining Diacritical Marks.
Because this range contains combining marks, you will have quite a few entries that can't be rendered alone. Whether you skip them or just handle the resulting confusion from the text layout engine is up to you.
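If it helps, here is a rough, untested sketch (the name cacheTwoByteGlyphs and the map-based cache are my own invention, not from the question) of how the caching function above could be extended to the two-byte range, keyed by the UTF-8 string itself; entries the font cannot render alone are simply skipped:

#include <SDL2/SDL.h>
#include <SDL2/SDL_ttf.h>
#include <map>
#include <string>

void cacheTwoByteGlyphs(TTF_Font* font, SDL_Renderer* renderer,
                        std::map<std::string, SDL_Texture*>& cache)
{
    SDL_Color color_font = {255, 255, 255, 255};
    unsigned char s[3] = { 0 };                    // two UTF-8 bytes plus terminator
    for (s[0] = 0xC2u; s[0] < 0xE0u; ++s[0]) {     // skip 0xC0/0xC1, which only give overlong encodings
        for (s[1] = 0x80u; s[1] < 0xC0u; ++s[1]) {
            const char* key = reinterpret_cast<const char*>(s);
            SDL_Surface* glyph = TTF_RenderUTF8_Blended(font, key, color_font);
            if (!glyph)
                continue;                          // e.g. combining marks that can't render alone
            cache[key] = SDL_CreateTextureFromSurface(renderer, glyph);
            SDL_FreeSurface(glyph);
        }
    }
}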
In this question: Convert ISO-8859-1 strings to UTF-8 in C/C++
there is a really nice, concise piece of C++ code that converts ISO-8859-1 strings to UTF-8.
In this answer: https://stackoverflow.com/a/4059934/3426514
I'm still a beginner at C++ and I'm struggling to understand how this works. I have read up on the encoding sequences of UTF-8, and I understand that below 128 the chars are the same, and above 128 the first byte gets a prefix and the rest of the bits are spread over a couple of bytes starting with 10xx, but I see no bit shifting in this answer.
If someone could help me decompose it into a function that processes only one character, it would really help me understand.
Code, commented.
This relies on the fact that Latin-1 0x00 through 0xff map, in order, to the UTF-8 sequences 0x00-0x7f, 0xc2 0x80-0xbf, and 0xc3 0x80-0xbf.
// in and out are assumed to be unsigned char pointers, so the comparisons below work
// converting one byte (one Latin-1 character) of input per iteration
while (*in)
{
    if ( *in < 0x80 )
    {
        // ASCII range: just copy
        *out++ = *in++;
    }
    else
    {
        // first byte is 0xc2 for 0x80-0xbf, 0xc3 for 0xc0-0xff
        // (the condition in () evaluates to true / 1)
        *out++ = 0xc2 + ( *in > 0xbf ),
        // second byte is the lower six bits of the input byte
        // with the highest bit set (and, implicitly, the second-
        // highest bit unset)
        *out++ = ( *in++ & 0x3f ) + 0x80;
    }
}
The problem with a function processing a single (input) character is that the output could be either one or two bytes, making the function a bit awkward to use. You are usually better off (both in performance and cleanliness of code) processing whole strings.
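If you still want a per-character helper, one way to sidestep the one-or-two-bytes problem is to append into a std::string instead of returning the bytes; a minimal sketch (the function name is just illustrative):

#include <string>

void appendLatin1AsUtf8(unsigned char in, std::string& out)
{
    if (in < 0x80) {
        out += static_cast<char>(in);                  // ASCII: copy unchanged
    } else {
        out += static_cast<char>(0xC2 + (in > 0xBF));  // lead byte: 0xC2 for 0x80-0xBF, 0xC3 for 0xC0-0xFF
        out += static_cast<char>(0x80 | (in & 0x3F));  // continuation byte: low six bits, high bit set
    }
}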
Note that the assumption of Latin-1 as input encoding is very likely to be wrong. For example, Latin-1 doesn't have the Euro sign (€), or any of these characters ŠšŽžŒœŸ, which makes most people in Europe use either Latin-9 or CP-1252, even if they are not aware of it. ("Encoding? No idea. Latin-1? Yea, that sounds about right.")
All that being said, that's the C way to do it. The C++ way would (probably, hopefully) look more like this:
#include <unicode/unistr.h>
#include <unicode/bytestream.h>

// ...

icu::UnicodeString ustr( in, "ISO-8859-1" );

// ...work with a properly Unicode-aware string class...

// ...convert to UTF-8 if necessary.
char buffer[ BUFSIZE ];
icu::CheckedArrayByteSink bs( buffer, BUFSIZE );
ustr.toUTF8( bs );
That is using the International Components for Unicode (ICU) library. Note how easily this is adapted to a different input encoding. Different output encodings, iostream operators, character iterators, and even a C API are readily available from the library.
Sometimes manipulating character strings at the character level is unavoidable.
Here I have a function written for ANSI/ASCII-based character strings that replaces CR/LF sequences with LF only, and also replaces lone CR with LF. We use this because incoming text files often have goofy line endings, thanks to the various text or email programs that have made a mess of them, and I need them in a consistent format so that parsing / processing / output works properly down the road.
Here's a fairly efficient implementation of this normalization of various line endings to LF only, for single-byte-per-character strings:
// returns the in-place conversion of a Mac or PC style string to a Unix style string
// (i.e. no CR/LF or CR only, but rather LF only)
char * AnsiToUnix(char * pszAnsi, size_t cchBuffer)
{
    size_t i, j;
    for (i = 0, j = 0; pszAnsi[i]; ++i, ++j)
    {
        // bounds checking
        ASSERT(i < cchBuffer);
        ASSERT(j <= i);

        switch (pszAnsi[i])
        {
        case '\n':
            if (pszAnsi[i + 1] == '\r')
                ++i;
            pszAnsi[j] = '\n'; // must be written explicitly in case j has fallen behind i
            break;
        case '\r':
            if (pszAnsi[i + 1] == '\n')
                ++i;
            pszAnsi[j] = '\n';
            break;
        default:
            if (j != i)
                pszAnsi[j] = pszAnsi[i];
        }
    }

    // append null terminator if we changed the length of the string buffer
    if (j != i)
        pszAnsi[j] = '\0';

    // bounds checking
    ASSERT(pszAnsi[j] == 0);

    return pszAnsi;
}
I'm trying to transform this into something that will work correctly with multibyte/Unicode strings, where the next character can be multiple bytes wide.
So:
I need to look at a character only at a valid character-point (not in the middle of a character)
I need to copy over the portion of the character that is part of the rejected piece properly (i.e. copy whole characters, not just bytes)
I understand that _mbsinc() will give me the address of the start of the next real character. But what is the equivalent for Unicode (UTF-16), and are there already primitives for copying a full character (e.g. length_character(wsz))?
One of the beautiful things about UTF-8 is that if you only care about the ASCII subset, your code doesn't need to change at all. The non-ASCII characters get encoded to multi-byte sequences where all of the bytes have the upper bit set, keeping them out of the ASCII range themselves. Your CR/LF replacement should work without modification.
UTF-16 has the same property. Characters that can be encoded as a single 16-bit entity will never conflict with the characters that require multiple entities.
Do not try to keep text internally in a mix of whatever encodings happen to come in and work with all of them; that is true Hell.
First pick some "internal" encoding. When the target platform is UNIX, UTF-8 is a good candidate; it is slightly easier to display there. When the target platform is Windows, UTF-16 is a good candidate; Windows uses it internally everywhere anyway. Whatever you pick, stick to it and only it.
Then you convert all incoming "dirty" text into that encoding. You can also do some reformatting that looks exactly like your code, except that when wchar_t contains UTF-16 you have to use literals like L'\n'.
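As a sketch of that (assuming wchar_t holds UTF-16, as on Windows; the function name is made up), the line-ending normalization above carries over almost verbatim, because no UTF-16 surrogate code unit ever equals L'\r' or L'\n':

#include <cstddef>

wchar_t* WideToUnix(wchar_t* psz)
{
    std::size_t i = 0, j = 0;
    for (; psz[i]; ++i, ++j)
    {
        switch (psz[i])
        {
        case L'\n':
            if (psz[i + 1] == L'\r')
                ++i;                       // swallow a following CR
            psz[j] = L'\n';
            break;
        case L'\r':
            if (psz[i + 1] == L'\n')
                ++i;                       // swallow a following LF
            psz[j] = L'\n';
            break;
        default:
            if (j != i)
                psz[j] = psz[i];           // shift the character left if something was dropped
        }
    }
    psz[j] = L'\0';                        // safe: j <= i always holds
    return psz;
}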
I'm changing a piece of software in C++, which processes texts in ISO Latin 1 format, to store data in an SQLite database.
The problem is that SQLite works in UTF-8... and the Java modules that use the same database work in UTF-8.
I want a way to convert the ISO Latin 1 characters to UTF-8 characters before storing them in the database. It needs to work on Windows and Mac.
I heard ICU would do that, but I think it's too bloated. I just need a simple conversion routine (preferably back and forth) for these two charsets.
How would I do that?
ISO-8859-1 was incorporated as the first 256 code points of ISO/IEC 10646 and Unicode. So the conversion is pretty simple.
for each char:
    uint8_t ch = code_point; /* assume that code points above 0xff are impossible since latin-1 is 8-bit */
    if (ch < 0x80) {
        append(ch);
    } else {
        append(0xc0 | (ch & 0xc0) >> 6); /* first byte, simplified since our range is only 8-bits */
        append(0x80 | (ch & 0x3f));
    }
See http://en.wikipedia.org/wiki/UTF-8#Description for more details.
EDIT: according to a comment by ninjalj, Latin-1 maps directly to the first 256 Unicode code points, so the above algorithm works.
For C++ I use this:
#include <cstdint>
#include <string>

std::string iso_8859_1_to_utf8(const std::string &str)
{
    std::string strOut;
    for (std::string::const_iterator it = str.begin(); it != str.end(); ++it)
    {
        uint8_t ch = *it;
        if (ch < 0x80) {
            strOut.push_back(ch);
        }
        else {
            strOut.push_back(0xc0 | ch >> 6);
            strOut.push_back(0x80 | (ch & 0x3f));
        }
    }
    return strOut;
}
If general-purpose charset frameworks (like iconv) are too bloated for you, roll your own.
Compose a static translation table (char to UTF-8 sequence) and put together your own translation. Depending on what you use for string storage (char buffers, std::string, or something else) it will look somewhat different, but the idea is: scroll through the source string and replace each character with a code over 127 by its UTF-8 counterpart string. Since this can increase the string length, doing it in place would be rather inconvenient. For added benefit, you can do it in two passes: pass one determines the necessary target string size, pass two performs the translation.
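A rough sketch of that table-driven idea (the names are illustrative; a reserve() stands in for the sizing pass described above):

#include <array>
#include <string>

std::array<std::string, 256> makeLatin1ToUtf8Table()
{
    std::array<std::string, 256> table;
    for (unsigned c = 0; c < 256; ++c) {
        if (c < 0x80)
            table[c] = std::string(1, static_cast<char>(c));         // ASCII: unchanged
        else
            table[c] = { static_cast<char>(0xC0 | (c >> 6)),         // lead byte (0xC2 or 0xC3)
                         static_cast<char>(0x80 | (c & 0x3F)) };     // continuation byte
    }
    return table;
}

std::string translateLatin1ToUtf8(const std::string& in)
{
    static const std::array<std::string, 256> table = makeLatin1ToUtf8Table();
    std::string out;
    out.reserve(in.size() * 2);            // worst case: every byte becomes two
    for (unsigned char c : in)
        out += table[c];
    return out;
}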
If you don't mind doing an extra copy, you can just "widen" your ISO Latin 1 chars to 16-bit characters and thus get UTF-16. Then you can use something like UTF8-CPP to convert it to UTF-8.
In fact, I think UTF8-CPP could even convert ISO Latin 1 to UTF-8 directly (utf16to8 function) but you may get a warning.
Of course, it needs to be real ISO Latin 1, not Windows CP-1252.
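A sketch of the widen-then-convert idea with UTF8-CPP (assuming its single header is available as utf8.h; the function name is illustrative and this is untested): every Latin-1 byte already equals its Unicode code point, so the "widening" is just a copy into 16-bit units.

#include <cstdint>
#include <iterator>
#include <string>
#include <vector>
#include "utf8.h"   // UTF8-CPP (header-only)

std::string latin1ToUtf8ViaUtf16(const std::string& in)
{
    std::vector<std::uint16_t> utf16;
    utf16.reserve(in.size());
    for (unsigned char c : in)
        utf16.push_back(c);                // widen: Latin-1 byte value == Unicode code point
    std::string out;
    utf8::utf16to8(utf16.begin(), utf16.end(), std::back_inserter(out));
    return out;
}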