Getting the number of bytes in a Unicode string - c++

I have a Unicode string and I need to know the number of bytes it uses.
In general, I know that wcslen(s) * 2 will work. But my understanding is that this is not reliable when working with Unicode.
Is there a reliable and performant way to get the number of bytes used by a Unicode string?

wcslen counts the number of wchar_t entities, until it finds a NUL character. It doesn't interpret the data in any way.
(wcslen(s) + 1) * sizeof(wchar_t) will always, reliably calculate the number of bytes required to store the string s.
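A minimal sketch of that calculation (the example string and the printf output are just for illustration):

#include <cstddef>
#include <cstdio>
#include <cwchar>

int main() {
    const wchar_t *s = L"hello";
    // Number of wchar_t units before the terminating NUL.
    std::size_t units = std::wcslen(s);
    // Bytes needed to store s, including the terminating NUL.
    std::size_t bytes = (units + 1) * sizeof(wchar_t);
    std::printf("%zu units, %zu bytes (sizeof(wchar_t) == %zu)\n",
                units, bytes, sizeof(wchar_t));
}

Note that this counts storage units, not Unicode characters: a code point encoded as a surrogate pair (on platforms where wchar_t is 16 bits) still counts as two wchar_t units.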

Related

How to ignore accents in a string so they do not alter its length?

I am determining the length of certain strings of characters in C++ with the function length(), but noticed something strange: say I define in the main function
string str;
str = "canción";
Then, when I calculate the length of str by str.length() I get as output 8. If instead I define str = "cancion" and calculate str's length again, the output is 7. In other words, the accent on the letter 'o' is altering the real length of the string. The same thing happens with other accents. For example, if str = "für" it will tell me its length is 4 instead of 3.
I would like to know how to ignore these accented characters when determining the length of a string; however, I wouldn't want to ignore isolated characters like '. For example, if str = "livin'", the length of str must be 6.
It is a difficult subject. Your string is likely UTF-8 encoded, and str.length() counts bytes. An ASCII character is encoded in 1 byte, but characters with codes larger than 127 are encoded in more than 1 byte.
Counting Unicode code points may not give you the answer you need. Instead, you have to take into account the width of each code point, to handle combining accents and double-width code points (and there may be other cases as well). So it is difficult to do this properly without using a library.
You may want to check out ICU.
If you have a constrained case and you don't want to use a library for this, you may want to check out how UTF-8 encoding works (it is not difficult) and create a simple UTF-8 code point counter: a simple algorithm is to count the bytes where (b & 0xC0) != 0x80, as sketched below.
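For example, a small counter along those lines (it assumes the input is already valid UTF-8 and does no validation):

#include <cstddef>
#include <string>

// Count UTF-8 code points by skipping continuation bytes,
// i.e. bytes of the form 10xxxxxx ((b & 0xC0) == 0x80).
std::size_t utf8_code_points(const std::string &s) {
    std::size_t count = 0;
    for (unsigned char b : s)
        if ((b & 0xC0) != 0x80)
            ++count;
    return count;
}

With the question's example, this would return 7 for a UTF-8 encoded "canción" (byte length 8), assuming the 'ó' is stored as a single precomposed code point rather than 'o' plus a combining accent.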
Sounds like UTF-8 encoding. Since the characters with the accents cannot be stored in a single byte, they are stored in 2 bytes. See https://en.wikipedia.org/wiki/UTF-8

Represent a buffer efficiently as unicode string

I have a random buffer.
I need to encode it into a Unicode string (UTF-16 LE, as used by the Windows wide-char APIs) so it can be used as a PWSTR, for example when calling StringCchPrintfW.
A possible solution would be base64, but in order to make it a Unicode string I would have to add a zero byte after every character, which is inefficient in space.
And if I just print the buffer raw, it might contain '\0', which would terminate the string, or '%', which would affect the formatting (maybe that can be escaped), or other Unicode characters that would prevent it from being used in formatting.
The code that generates the string and the code that parses it at the end will be written in C#, but the buffer will be used in Windows C++ code as a formatting argument and then written to a file.
Here are two methods I can think of:
The easy one: convert each of your bytes into a UTF-16 wchar_t by adding 0x8000 to its value (i.e. you append a 0x80 byte). The efficiency is only 50%, but at least you spare the base64 conversion, which would lower the efficiency to 37.5%.
The efficient but complicated one: read your data in 15-bit chunks (if your total number of bits is not a multiple of 15, pad with null bits at the end). Convert each chunk into a UTF-16 character by adding 0x4000 to its value. Then add a final wchar_t of value 0xC000 + n, where n (0 <= n <= 14) is the number of padding bits in the final chunk. In exchange for a much more complicated algorithm, you get a very good efficiency: 93.75%.
Both methods avoid all the perils of using binary data in a UTF-16 format string: no null bytes, no '%' characters, no surrogate pairs, only printable characters (most of which are Chinese ideograms).
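A minimal sketch of the easy method (written here with char16_t/std::u16string for portability; on Windows, where wchar_t is 16 bits, std::wstring would work the same way):

#include <cstdint>
#include <string>
#include <vector>

// Map each input byte b to the UTF-16 code unit 0x8000 + b.
// Every resulting unit lies in 0x8000..0x80FF, so the string contains
// no NUL units, no '%' and no surrogates.
std::u16string encode_bytes(const std::vector<std::uint8_t> &data) {
    std::u16string out;
    out.reserve(data.size());
    for (std::uint8_t b : data)
        out.push_back(static_cast<char16_t>(0x8000 + b));
    return out;
}

// Reverse of encode_bytes: strip the 0x8000 offset from each code unit.
std::vector<std::uint8_t> decode_bytes(const std::u16string &s) {
    std::vector<std::uint8_t> out;
    out.reserve(s.size());
    for (char16_t c : s)
        out.push_back(static_cast<std::uint8_t>(c - 0x8000));
    return out;
}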

Get number of characters in string?

I have an application accepting a UTF-8 string of at most 255 characters.
If the characters are all ASCII, the number of characters equals the size in bytes.
If the characters are not all ASCII and contain Japanese letters for example, given the size in bytes, how can I get the number of characters in the string?
Input: char *data, int bytes_no
Output: int char_no
You can use mblen to count the length, or use mbstowcs.
sources:
http://www.cplusplus.com/reference/cstdlib/mblen/
http://www.cl.cam.ac.uk/~mgk25/unicode.html#mod
The number of characters can be counted in C in a portable way using mbstowcs(NULL,s,0). This works for UTF-8 like for any other supported encoding, as long as the appropriate locale has been selected. A hard-wired technique to count the number of characters in a UTF-8 string is to count all bytes except those in the range 0x80 – 0xBF, because these are just continuation bytes and not characters of their own. However, the need to count characters arises surprisingly rarely in applications.
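A sketch of the mbstowcs approach (the locale name and the example bytes are my own; the exact locale string is platform dependent):

#include <clocale>
#include <cstdio>
#include <cstdlib>
#include <cstring>

int main() {
    // A UTF-8 locale must be active for mbstowcs to interpret UTF-8.
    std::setlocale(LC_ALL, "en_US.UTF-8");

    const char *data = "\xE6\x97\xA5\xE6\x9C\xAC";  // "日本" in UTF-8, 6 bytes
    int bytes_no = static_cast<int>(std::strlen(data));

    // With a null destination pointer, mbstowcs only counts characters.
    std::size_t char_no = std::mbstowcs(nullptr, data, 0);

    if (char_no == static_cast<std::size_t>(-1))
        std::puts("invalid multibyte sequence (or the locale is not UTF-8)");
    else
        std::printf("%zu characters in %d bytes\n", char_no, bytes_no);
}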
You can store a Unicode character in a wide char (wchar_t).
There's no such thing as "character".
Or, more precisely, what "character" is depends on whom you ask.
If you look in the Unicode glossary you will find that the term has several not fully compatible meanings. As the smallest component of written language that has semantic value (the first meaning), á is a single character. If you take á and count the basic units of encoding for the Unicode character encoding (the third meaning) in it, you may get either one or two, depending on which exact representation (composed or decomposed) is being used.
Or maybe not. This is a very complicated subject and nobody really knows what they are talking about.
Coming down to earth, you probably need to count code points, which is essentially the same as characters in the encoding sense (meaning 3). mblen is one method of doing that, provided your current locale uses a UTF-8 encoding. Modern C++ offers more C++-ish methods; however, they are not well supported on some popular implementations. Boost has something of its own and is more portable. Then there are specialized libraries like ICU, which you may want to consider if your needs are much more complicated than counting characters.

std::string and UTF-8 encoded unicode

If I understand correctly, it is possible to use both string and wstring to store UTF-8 text.
With char, ASCII characters take a single byte, some chinese characters take 3 or 4, etc. Which means that str[3] doesn't necessarily point to the 4th character.
With wchar_t it's the same thing, but the minimum number of bytes used per character is always 2 (instead of 1 for char), and a character that is 3 or 4 bytes wide will take 2 wchar_t.
Right ?
So, what if I want to use string::find_first_of() or string::compare(), etc. with such a weirdly encoded string? Will it work? Does the string class handle the fact that characters have a variable size? Or should I only use them as dummy feature-less byte arrays, in which case I'd rather go for a wchar_t[] buffer?
If std::string doesn't handle that, second question: are there libraries providing string classes that could handle that UTF-8 encoding, so that str[3] actually refers to the 4th character (which would be a byte sequence of length 1 to 4)?
You are talking about Unicode. A Unicode code point fits in 32 bits, and UTF-32 represents every character that way. However, since that wastes memory, there are more compact encodings. UTF-8 is one such encoding. It uses bytes as units and maps Unicode characters to 1, 2, 3 or 4 bytes. UTF-16 is another; it uses 16-bit words as units and maps Unicode characters to 1 or 2 words (2 or 4 bytes).
You can use both encodings with both std::string and wchar_t-based strings. UTF-8 tends to be more compact for English text and numbers.
Some things will work regardless of the encoding and type used (compare, for instance). However, all functions that need to understand a single character will be broken, i.e. the 5th character is not always the 5th entry in the underlying array. It might look like it's working with certain examples, but it will eventually break.
string::compare will work but do not expect to get alphabetical ordering. That is language dependent.
string::find_first_of will work for some cases but not all. Long strings will likely work just because they are long, while shorter ones might get confused by character alignment and generate very hard-to-find bugs.
Best thing is to find a library that handles it for you and ignore the type underneath (unless you have strong reasons to pick one or the other).
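As a small illustration of why per-character operations break (the literal spells out the UTF-8 bytes of "für" so the example does not depend on the source file's encoding):

#include <iostream>
#include <string>

int main() {
    // "für" in UTF-8: 'f', then two bytes (0xC3 0xBC) for 'ü', then 'r'.
    std::string s = "f\xC3\xBCr";

    std::cout << s.size() << '\n';     // 4: bytes, not characters
    std::cout << s.find('r') << '\n';  // 3: a byte offset; as a character index 'r' is at 2

    // s[1] is only the first byte of 'ü'; taken on its own it is not
    // a valid character, which is how the hard-to-find bugs start.
}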
You can't handle Unicode with std::string or any other tools from the Standard Library. Use an external library such as: http://utfcpp.sourceforge.net/
You are correct for those:
...Which means that str[3] doesn't necessarily point to the 4th character...only use them as dummy feature-less byte arrays...
C++'s string can only handle its contents as ASCII characters/bytes. This is different from Java's String, which can handle Unicode characters. You can store the encoded bytes of Chinese characters in a string (a char in C/C++ is just a byte), but this is of limited use because string treats those bytes as individual ASCII chars, so you cannot use the string functions to process them as characters.
wstring may be something you need.
There is something that should be clarified: UTF-8 is just an encoding method for Unicode characters (a way of transforming characters to/from byte sequences).

Differences in string class implementations

Why are string classes implemented in several different ways, and what are the advantages and disadvantages? I have seen it done in several different ways:
Only using a simple char (most basic way).
Supporting UTF8 and UTF16 through a templated string, such as string<UTF8>, where UTF8 is a char and UTF16 is an unsigned short.
Having both a UTF8 and UTF16 in the string class.
Are there any other ways to implement a string class that may be better?
As far as I know, std::basic_string<wchar_t> where sizeof(wchar_t) == 2 is not UTF16 encoding. There are more than 2^16 characters in Unicode, and code points go up to 0x10FFFF, which is > 0xFFFF (the capacity of a 2-byte wchar_t). As a result, proper UTF16 has to use a variable number of code units per letter (one 2-byte wchar_t or two of them), which is not the case with std::basic_string and similar classes that assume that one string element == one character.
As far as I know there are two ways to deal with unicode strings.
Either use big enough type to fit any character into single string element (for example, on linux it is quite normal to see sizeof(wchar_t) == 4), so you'll be able to enjoy "benefits" (basically, easy string length calculation and nothing else) of std::string-like classes.
Or use variable-length encoding (UTF8 - 1..4 bytes per char or UTF16 - 2..4 bytes per char), and well-tested string class that provides string-manipulation routines.
As long as you don't use char it doesn't matter which method you use. char-based strings are likely to cause trouble on machines with a different 8-bit codepage, if you weren't careful enough to take care of that (it is safe to assume that you'll forget about it and won't be careful enough; Microsoft AppLocale was created for a reason).
Unicode contains plenty of non-printable characters (control and formatting characters), so that pretty much defeats any benefit method #1 could provide. Regardless, if you decide to use method #1, you should remember that wchar_t is not big enough to fit all possible characters on some compilers/platforms (on Windows, with the Microsoft compiler, it is 16 bits), and that std::basic_string<wchar_t> is not a perfect solution because of that.
Rendering internationalized text is a PAIN, so the best idea would be just to grab whatever Unicode-compatible string class there is (like QString) that hopefully comes with a text layout engine (one that can properly handle control characters and bidirectional text) and concentrate on more interesting programming problems instead.
-Update-
If unsigned short is not UTF16, then what is, unsigned int? What is UTF8 then? Is that unsigned char?
UTF16 is a variable-length character encoding. UTF16 uses 1..2 2-byte elements (i.e. uint16_t, 16 bits) per character, i.e. the number of elements in a UTF16 string != the number of characters in the string. You can't calculate string length by counting elements.
UTF8 is another variable-length encoding, based on 1-byte elements (8 bits, i.e. "unsigned char"). One Unicode character ("code point") in UTF8 takes 1..4 uint8_t elements. Once again, the number of elements in the string != the number of characters in the string. The advantage of UTF8 is that characters that exist within ASCII take exactly 1 byte each, which saves a bit of space, while in UTF16 a character always takes at least 2 bytes.
UTF32 is a fixed-length character encoding that always uses 32 bits (4 bytes, or uint32_t) per character. Currently any Unicode character fits into a single UTF32 element, and UTF32 will probably remain fixed-length for a long time (I don't think that all the languages of Earth combined would produce 2^31 different characters). It wastes more memory, but the number of elements in the string == the number of characters in the string.
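A small C++11 sketch of the same point, encoding one code point outside the BMP (U+1F600) in all three forms; the element counts differ while the character count is always 1:

#include <iostream>
#include <string>

int main() {
    std::u32string utf32 = U"\U0001F600";      // 1 element  (4 bytes)
    std::u16string utf16 = u"\U0001F600";      // 2 elements (a surrogate pair, 4 bytes)
    std::string    utf8  = "\xF0\x9F\x98\x80"; // 4 elements (4 bytes), UTF-8 for U+1F600

    std::cout << utf32.size() << ' '   // prints 1
              << utf16.size() << ' '   // prints 2
              << utf8.size()  << '\n'; // prints 4
}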
Also, keep in mind that the C++ standard doesn't specify how big "int" or "short" should be.