Right-to-left isolate string in C++

Does anyone have experience with Unicode?
I am facing a tough problem with Farsi Unicode text.
I have an std::wstring s = (L"\u0634\u0646\u0628\u0647"); which is a Farsi word. When I debug it, I see that the underlying word is exactly what I want, but reversed. So I researched and found that U+2067 (RIGHT-TO-LEFT ISOLATE) is for right-to-left reading of the string.
NOTE:
I cannot reverse the string manually because Farsi characters change their shape depending on their position in the string.
So I added the U+2067 at the beginning and got
std::wstring s = (L"\u2067\u0634\u0646\u0628\u0647");.
But now the underlying string is the same; it just added a square at the beginning of the string instead of reversing it.
Does anyone have experience with this stuff? Please suggest a solution. Thanks!

The underlying string will be the same. You haven't changed the order of bytes, which is written right there in the code. But a renderer that understands Unicode should take those bytes and display the characters right-to-left. That's a visual thing. It has nothing to do with the encoding. From your question, it's not entirely clear what else you expected. It may be that you are viewing the string in a debugger, and the debugger does not support this feature of Unicode. If you try outputting the string to a proper console you ought to see it as you expect.
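For example, here is a minimal sketch of that idea (Windows-specific; the _setmode call and the word are just illustrations, and whether the bidi reordering is actually applied still depends on the console and font):
#include <cstdio>
#include <fcntl.h>
#include <io.h>
#include <iostream>
#include <string>

int main()
{
    // U+2067 (RIGHT-TO-LEFT ISOLATE) followed by the Farsi word from the question.
    // The code units stay in logical order; only the visual display is affected.
    std::wstring s = L"\u2067\u0634\u0646\u0628\u0647";

    _setmode(_fileno(stdout), _O_U16TEXT); // let the wide string reach the console intact
    std::wcout << s << L"\n";              // the renderer decides the visual ordering
    return 0;
}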

Related

Replace function has no effect

So the problem I am having right now is that the Replace function has no effect. The code is:
param = WScript.Arguments(0)
param = Replace(param, "б", "vb")
MsgBox(param)
The second parameter is a Cyrillic character (which I suspect could be the reason), yet I don't get any errors. It's just that the output is literally the same as the input, regardless of whether that б character occurs in the input or not.
The reason this problem occurred was indeed encoding: changing the script's encoding from UTF-8 to UTF-16 solved the problem without any further manipulation.

Convert path to \\

Okay, after two days of searching the web and MSDN, I haven't found any real solution to this problem, so I'm going to ask here in the hope that I've overlooked something.
I have an open-file dialog window, and after I get the location of the selected file, it gives me the string in the following form: C:\file.exe. For the next part of my program I need C:\\file.exe. Is there any Microsoft function that can solve this problem, or some workaround?
ofn.lpstrFile = fileName;
char fileNameStr[sizeof(fileName)+1] = "";
if (GetOpenFileName(&ofn))
    strcpy(fileNameStr, fileName);
DeleteFile(fileName); // doesn't work, invalid path
I've posted only this part of the code, because everything else works fine and isn't relevant to this problem. Any assistance is greatly appreciated, as this has been driving me mad for the last two days.
You are confusing the requirement in C and C++ to escape backslash characters in string literals with what Windows requires.
Windows allows double backslashes in paths in only two circumstances:
Paths that begin with "\\?\"
Paths that refer to share names such as "\\myserver\foo"
Therefore, "C:\\file.exe" is never a valid path.
The problem here is that Microsoft made the (disastrous) decision decades ago to use backslashes as path separators rather than forward slashes like UNIX uses. That decision has been haunting Windows programmers since the early 1980s because C and C++ use the backslash as an escape character in string literals (and only in literals).
So in C or C++, if you type something like DeleteFile("c:\file.exe"), what DeleteFile will see is "c:ile.exe" with an unprintable form feed (0x0C) inserted between the colon and "ile.exe". That's because the compiler sees the backslash and interprets it to mean the next character isn't what it appears to be. In this case, the next character is an f, and "\f" is the form-feed escape sequence, so the compiler converts it into the character 0x0C, which isn't valid in a file name.
So how do you create the path "c:\file.exe" in a C/C++ program? You have two choices:
"c:/file.exe"
"c:\\file.exe"
The first choice works because in the Win32 API (and only the API, not the command line), forward slashes are accepted as path separators. The second choice works because the first backslash tells the compiler to treat the next character specially; when that next character is another backslash, the pair is interpreted as a single literal backslash and your string comes out correct.
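As a quick sketch of both choices (the file name is only an example; substitute your own path):
#include <windows.h>
#include <cstdio>
#include <cstring>

int main()
{
    // "\f" in a literal is the form-feed escape, so this string is 10 characters,
    // not the 11 you typed, and contains an invisible 0x0C after the colon.
    const char *bad = "c:\file.exe";
    std::printf("%zu\n", std::strlen(bad)); // prints 10

    // Both of these literals produce the same valid path, c:\file.exe.
    DeleteFileA("c:/file.exe");   // forward slash: accepted by the Win32 API
    DeleteFileA("c:\\file.exe");  // escaped backslash: the string holds a single '\'
    return 0;
}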
The library Boost.Filesystem "provides portable facilities to query and manipulate paths, files, and directories".
In short, you should not use raw strings as file or path names. Use boost::filesystem::path instead. You can still initialize it from a string or char*, and you can convert it back to std::string, but all manipulations and decorations will be done correctly by the class.
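A minimal sketch of that approach (assuming Boost.Filesystem is available and linked):
#include <boost/filesystem.hpp>
#include <iostream>

int main()
{
    // The escaped literal holds a single backslash; the path object takes care
    // of the rest, so no manual doubling of separators is needed.
    boost::filesystem::path p("C:\\file.exe");

    std::cout << p.string() << "\n";          // native form:  C:\file.exe
    std::cout << p.generic_string() << "\n";  // generic form: C:/file.exe
    return 0;
}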
I'm guessing you mean converting "C:\file.exe" to "C:\\file.exe":
std::string output_string;
for (auto character : input_string)
{
    if (character == '\\')
    {
        output_string.push_back(character);
    }
    output_string.push_back(character);
}
Please note it is actually looking for a single backslash to replace; the double backslash used in the code is just there to escape the first one.
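For illustration, here is the same loop wrapped in a hypothetical helper so you can see it run end to end (the function name is made up):
#include <iostream>
#include <string>

std::string double_backslashes(const std::string &input_string)
{
    std::string output_string;
    for (auto character : input_string)
    {
        if (character == '\\')
        {
            output_string.push_back(character); // emit one extra backslash
        }
        output_string.push_back(character);
    }
    return output_string;
}

int main()
{
    // The literal below holds C:\file.exe; the printed result holds C:\\file.exe.
    std::cout << double_backslashes("C:\\file.exe") << "\n";
    return 0;
}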

How to drop control characters when using WideCharToMultiByte

I'm working on a Windows UI Automation client that interfaces with some legacy code. At the point where the legacy code is called, I have to convert the LPWSTR to a char *, which works in most cases, but sometimes the input strings contain control characters (such as the invisible left-to-right mark), and WideCharToMultiByte always maps those characters to '?'.
Is it possible to drop those characters? Is there another function better suited to this purpose? Any help would be greatly appreciated!
It doesn't look like there's a single function that accomplishes this, so I'm using Mark Ransom's solution from the comments.
Let lpDefaultChar point to a character that will never be in your string, such as 0x01, then remove those characters from the output. – Mark Ransom
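A sketch of that approach (assuming an ANSI code page target; lpDefaultChar is not allowed with CP_UTF8, and the helper name is made up for illustration):
#include <windows.h>
#include <algorithm>
#include <string>

std::string NarrowAndDropUnmapped(const std::wstring &wide)
{
    const char sentinel = '\x01';   // a byte that will never appear in real text
    BOOL usedDefault = FALSE;

    // First call measures, second call converts; unconvertible characters
    // (such as U+200E LEFT-TO-RIGHT MARK) become the sentinel byte.
    int len = WideCharToMultiByte(CP_ACP, 0, wide.c_str(), -1,
                                  nullptr, 0, &sentinel, &usedDefault);
    std::string out(len, '\0');
    WideCharToMultiByte(CP_ACP, 0, wide.c_str(), -1,
                        &out[0], len, &sentinel, &usedDefault);

    out.resize(out.find('\0'));     // drop the terminating NUL written by the -1 length
    out.erase(std::remove(out.begin(), out.end(), sentinel), out.end());
    return out;
}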

Remove a sub string in a Korean string in C++

I have a Korean string: "태권소녀 1". Now I want to remove a substring, " 1" (a space and the '1' character). How can I do it in C++?
With an English string it works OK, but I cannot do it with Korean yet.
Thank you so much if you can give me some ideas.
thestring.erase(thestring.find(" 1"), 2);
assuming it's there. This is not code to use as-is; it's a hint about what to look up in the documentation.
The problem you have is probably determining the size in bytes of the particular characters in the string. It depends on the encoding, but generally you may want to look at the family of functions with mb in their names (which stands for multibyte).
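For example, a minimal sketch assuming the source file and the string data are both UTF-8:
#include <iostream>
#include <string>

int main()
{
    std::string s   = "태권소녀 1";  // UTF-8 encoded bytes
    std::string sub = " 1";

    std::size_t pos = s.find(sub);
    if (pos != std::string::npos)
        s.erase(pos, sub.size()); // erase by the substring's byte length, not a guessed count

    std::cout << s << "\n";       // prints 태권소녀
    return 0;
}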

Unicode character for superscript shows a square box: ࠚ

Using the following code to create a Unicode string:
wchar_t HELLO[20];
wsprintf(HELLO, TEXT("%c"), 0x2074);
When I display this on a Win32 control like a text box or a button, it gets mapped to a [] square.
How do I fix this?
I tried compiling with both Eclipse (MinGW) and Microsoft Visual C++ (2010).
Also, UNICODE is defined at the top.
Edit:
I think it might be something to do with my system, because when I visit http://en.wikipedia.org/wiki/Unicode_subscripts_and_superscripts
some of the Unicode characters don't appear.
The font you are using does not contain a glyph for that character. You will likely need to install some new fonts to overcome this deficiency.
The character you have picked out is 'SAMARITAN MODIFIER LETTER EPENTHETIC YUT' (U+081A). Perhaps you were after 'SUPERSCRIPT FOUR' (U+2074); you need the hex form for that in code: 0x2074.
Note that you changed the question to read 0x2074, but the original version read 2074. Either way, if you see a box, that indicates your font is missing that glyph.
The characters you are getting from Wikipedia are expressed in hexadecimal, so your code should be:
wchar_t HELLO[20];
wsprintf(HELLO, TEXT("%c"), (wchar_t)0x2074); // or TEXT('\x2074')
If it still doesn't work, it's a font problem; if you need a pan-Unicode font, it seems that Code2000 is one of the most complete out there.
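A small sketch putting that together (assuming UNICODE is defined and hTextBox is a valid control handle obtained elsewhere; the names are illustrative):
#include <windows.h>

void ShowSuperscriptFour(HWND hTextBox)
{
    wchar_t HELLO[20];
    wsprintf(HELLO, TEXT("%c"), (wchar_t)0x2074); // hex 0x2074, not decimal 2074
    SetWindowText(hTextBox, HELLO);               // shows a box only if the control's font lacks the glyph
}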
Funny fact: the character that has the decimal code 2074 (i.e. hex 81a) seems to actually be a box (or it's such a strange beast that even the image outline at FileFormat.Info is wrong). :)
For the curious ones: it turns out that 0x081A is 'SAMARITAN MODIFIER LETTER EPENTHETIC YUT' (ࠚ).