I realize there may be similar questions, but I have been really stuck for a while now. I'm operating on Windows 10 using C++ and the Win32 API.
I need to read a hexadecimal code from a user textbox, something like U+0470 for Ѱ, although the 'U+' is not going to be present. After receiving this value, I need to take the code and print the appropriate symbol.
Using the code below I can print any Unicode symbol:
createWindow(L"Static", L"\u2230", WS_VISIBLE | WS_CHILD, 10,10, 190,40, hWnd, NULL,NULL,NULL);
I can also read the textbox using GetWindowText(). However, I cannot figure out how to take the user's input, 0470, and print it as a Unicode character in the textbox.
You'll need to convert the string L"0470" into a number. It's a base-16 number, so you'd use std::wcstoul from <cwchar> with 16 as the base argument. You can then cast the result to a WCHAR. I assume you know how to turn a single character into a string.
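A minimal sketch of the whole round trip, assuming the edit and static control handles already exist (the function and variable names here are made up for illustration):

#include <windows.h>
#include <cwchar>

// Read something like L"0470" from the edit control, parse it as hex,
// and display the resulting character in the static control.
void ShowCharFromHex(HWND hEdit, HWND hStatic)
{
    WCHAR buffer[16] = {0};
    GetWindowTextW(hEdit, buffer, 16);                        // e.g. L"0470"

    unsigned long code = std::wcstoul(buffer, nullptr, 16);   // base-16 parse

    WCHAR text[2] = { static_cast<WCHAR>(code), L'\0' };      // one-character string
    SetWindowTextW(hStatic, text);                            // shows Ѱ for 0470
}

Note that the cast to WCHAR only covers code points in the Basic Multilingual Plane; anything above 0xFFFF would need to be encoded as a UTF-16 surrogate pair.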
Visual Studio 2015, C++ language, debugging.
In the Watch1 window I look at the values of my string variables of the wchar_t* and char* types. The first of them is Unicode and the second is ANSI (the CP_OEMCP code page). In the Watch1 window the text of the wchar_t* variable is displayed correctly, but the text of the char* variable is unreadable. Can I specify the necessary code page for an individual string variable in the Watch1 window? I want to see both of my strings correctly in the Watch1 window.
Maybe some syntax exists for such cases, similar to $err,hr (the text of the last error, obtained via the GetLastError() function).
UPD (screenshot added)
The console window shows the right output, but in memory and in the Watch1 window I see an unreadable string for my ansiText variable.
The problem is that the original string (starting with the hex values 8D A0 A6) is not in the Windows-1251 (Windows Cyrillic) code page, but in the OEM 866 code page. These two are different, and Visual Studio expects Windows-1251, because that is the system's code page (the code page used for non-Unicode applications).
It is not possible to specify a code page when you watch a string in the debugger. Everything inside should be Unicode anyway, or at least UTF-8, and for those two there are format specifiers, su and s8. See MSDN for all format specifiers.
What you can do is include the following function in your code, and when you want to see some non-ANSI (or non-CP_ACP, to be precise) string, call this function in the Watch window with the string and code page as parameters (but use the function only once per Watch window, since it shares a static buffer):
// Converts an 8-bit string in the given code page to UTF-16 so the debugger
// can display it. The buffer is static, so evaluate only one call at a time.
LPCWSTR ViewString(LPCSTR szString, UINT nCodePage)
{
    static WCHAR szTemp[1024];
    MultiByteToWideChar(nCodePage, 0, szString, -1, szTemp, 1024);
    return szTemp;
}
So, in your case, in the Watch window instead of (char*)ansiText there would be ViewString(ansiText, 866). Also, note that this is not actually "ANSI text", but "OEM text".
I don't know what exactly your program is supposed to do, but I would convert all non-Unicode strings to Unicode at the earliest point in the code (right where you receive a non-Unicode string), and work only with Unicode strings everywhere else. To convert an OEM 866 string to Unicode you can use the MultiByteToWideChar function with the CodePage parameter set to 866.
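A minimal sketch of that early conversion, assuming the OEM 866 text arrives as a null-terminated char buffer (the function name is illustrative):

#include <windows.h>
#include <string>

// Convert a null-terminated OEM 866 string to a UTF-16 std::wstring.
std::wstring Oem866ToWide(const char* oemText)
{
    // First call asks for the required size in wide characters (including the terminator).
    int len = MultiByteToWideChar(866, 0, oemText, -1, nullptr, 0);
    if (len <= 0)
        return std::wstring();

    std::wstring wide(len, L'\0');
    MultiByteToWideChar(866, 0, oemText, -1, &wide[0], len);
    wide.resize(len - 1);   // drop the trailing null written by the API
    return wide;
}

The same call with CP_UTF8 instead of 866 converts UTF-8 input, and WideCharToMultiByte goes the other way if you ever need an 8-bit string again.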
I'm creating a simple program in C++ using the WinAPI; see this code below:
CreateWindowW(L"STATIC", L"Portão", WS_CHILD | WS_VISIBLE, 10, 10, 100, 20, hwnd, (HMENU)ID_LABEL1, NULL, NULL);
The above code creates a static control on the main form. The problem is that the 2nd parameter uses a Brazilian Portuguese word with an accent (Portão means Gate), and it gives an error:
C:\CBProjects\ListF\main.cpp|46|error: converting to execution character set: Invalid argument|
I'm using wide characters (wchar_t*), but if I replace "Portão" with "Portao" (without the accent), it works just fine. Why? How can I solve this?
I'm using the Code::Blocks IDE with the MinGW compiler.
C++ has notions of a source character set and an execution character set. Basically, the source character set is about the characters in the source file, and the execution character set is about the internal string representation used by the compiler.
Please look at this Stack Overflow question for more details on this topic.
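One common workaround, sketched below, is to keep the source file pure ASCII and spell the accented character as a universal character name (\u00E3 is ã), so the compiler never has to convert a non-ASCII source byte; alternatively, GCC/MinGW can be told the encodings explicitly with the -finput-charset and -fexec-charset options. The control parameters are copied from the question:

// "Port\u00E3o" is "Portão" written with a universal character name,
// so the source file itself contains only ASCII bytes.
CreateWindowW(L"STATIC", L"Port\u00E3o", WS_CHILD | WS_VISIBLE,
              10, 10, 100, 20, hwnd, (HMENU)ID_LABEL1, NULL, NULL);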
I have a Multi-Byte Character Set MFC Windows application. Now I need to display international single-byte ANSI characters on Windows controls. I can't use the ANSI characters directly, because displaying them correctly requires the Windows locale to be set to the appropriate country, and I need the characters to display under every Windows locale. For this purpose I must convert them to Unicode. I can display the required international characters in MessageBoxW, but how do I display them on Windows MFC controls using SetWindowText?
To show a Unicode string in MessageBoxW I construct it in a wstring:
WORD g [] = {0x105,0x106,0x107,0x108,0x109,0x110,0x111,0x112,0x113,0x114,0x115,0x116,0x117,0x118,0x119,0x120};
wstring gg (reinterpret_cast<wchar_t*>(g),15);
MessageBoxW(NULL, gg.c_str() , gg.c_str() , MB_ICONEXCLAMATION | MB_OK);
Setting the MFC form control text:
class MyFrm: public CDialogEx
{
    virtual BOOL OnInitDialog();
};
...
BOOL MyFrm::OnInitDialog()
{
    GetDlgItem(IDC_EDIT_TICKET_NUMBER)->SetWindowText( ???);
}
Is it possible to somehow convert the wstring gg to a CString and show Unicode characters on a window control?
You could try casting your CDialogEx 'this' object to HWND and then explicitly calling the Win32 API to set the text using wide characters. Your code would look something like this:
BOOL MyFrm::OnInitDialog()
{
    CDialogEx::OnInitDialog();
    SetDlgItemTextW((HWND)(*this), IDC_EDIT_TICKET_NUMBER, gg.c_str());
    return TRUE;   // return TRUE unless you set the focus to a control yourself
}
But as I mentioned earlier, Unicode has been supported since Windows XP, and using ASCII is really not a good idea unless you're targeting the very, very old OSes before it. Using ASCII strings nowadays means every string you pass is first converted into Unicode by the Win32 API anyway. So it is a better idea to switch your project entirely to UNICODE.
First, note that you can simply initialize a std::wstring directly with your Unicode hex character data, without any ugly and useless reinterpret_cast<wchar_t*>, etc.
Instead of this:
WORD g [] = {0x105,0x106,0x107,0x108,...,0x120};
wstring gg (reinterpret_cast<wchar_t*>(g),15);
just consider that:
wstring text = L"\x0105\x0106\x0107\x0108...\x0120";
The latter seems much cleaner to me.
Second, if you want to pass an instance of std::wstring to an MFC method that expects a const wchar_t* input string pointer, just use the wstring::c_str() method.
In addition, the best suggestion I can give you is to just port your app to Unicode.
ASCII/MBCS should be considered a programming model of the past for MFC; it brings lots of problems when you want to write "international" code.
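As a rough sketch of what that looks like once the project is built as Unicode (so CString stores wide characters and SetWindowText accepts a wide string), the wstring from the question converts trivially; the control ID is taken from the question and the text is shortened for illustration:

BOOL MyFrm::OnInitDialog()
{
    CDialogEx::OnInitDialog();

    std::wstring gg = L"\x0105\x0106\x0107\x0108";      // sample Unicode text
    CString text(gg.c_str());                           // CString accepts const wchar_t*
    GetDlgItem(IDC_EDIT_TICKET_NUMBER)->SetWindowText(text);

    return TRUE;
}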
OK, so I'm trying to develop an app using C++ and Qt4 for Linux that will map certain key sequences to special Unicode characters. Also, I'm trying to make it bilingual, so the special Unicode character sent depends on the selected language. Example: AltGr+s will send ß or ș, depending on whether German or Romanian is selected. On Windows, I achieved this using AutoHotKey. However, I couldn't get IronAHK to work on Linux, so I have written myself a nice Qt application for it, using Qxt to register "global" shortcuts. I have tried this snippet:
void mainWnd::sendKeypress( unsigned int keycode )
{
    Display *display = QX11Info::display();
    Window curr_focus;
    int revert_to;

    // Find the window that currently holds input focus.
    XGetInputFocus( display, &curr_focus, &revert_to );

    // Synthesize a key press followed by a key release for this keycode.
    XTestFakeKeyEvent( display, keycode, true, 0 );
    XTestFakeKeyEvent( display, keycode, false, 1 );
    XFlush( display );
}
copied from another application (where it works), but here it seems to do nothing. Also, there might be a problem with the fact that the characters I'm trying to send aren't found on the US 101-key keyboard that I currently use on my laptop (and as the layout in the OS).
So my question is: how do I make the app send a Unicode character to whichever app has focus, inserting a special character (sort of like KCharMap)? Remember, these are special characters which are not found on a normal US keyboard. Thanks in advance.
I'm getting a value from the registry. This value might have double-byte characters in it.
I will later have to transfer it across the network to a C# client to display. C# is all Unicode.
The function returns an MBCS string if you call the non-Unicode version.
What should I use?
string result = string(cbData);
RegQueryValueExA(h_sub_key, "DisplayName", NULL, NULL, (LPBYTE) &result[0], &cbData)
or
string result = string(cbData);
RegQueryValueExW(h_sub_key, L"DisplayName", NULL, NULL, (LPBYTE) &result[0], &cbData)
Using Unicode whenever possible will make your life easier. The registry contains Unicode natively and converts to MBCS on the fly when you use RegQueryValueExA, so why would you want to do an unneeded conversion?
Converting to UTF-8 from UTF-16 might make sense for information going over the network, but if you control both ends of the connection it wouldn't be necessary.
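If you do decide to put UTF-8 on the wire, the conversion from the UTF-16 you read is a single API call; a rough sketch (the function name is illustrative):

#include <windows.h>
#include <string>

// Convert a UTF-16 string to UTF-8, e.g. for network transmission.
std::string WideToUtf8(const std::wstring& wide)
{
    if (wide.empty())
        return std::string();

    int len = WideCharToMultiByte(CP_UTF8, 0, wide.c_str(), (int)wide.size(),
                                  NULL, 0, NULL, NULL);
    std::string utf8(len, '\0');
    WideCharToMultiByte(CP_UTF8, 0, wide.c_str(), (int)wide.size(),
                        &utf8[0], len, NULL, NULL);
    return utf8;
}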
No, that's not the way it works. The string you get back from the first snippet is encoded according to the current system code page. It could be a double-byte encoding; it could be anything. That's a big problem, of course: the C# code on the other end of that Internet connection has no way to guess what the code page might be.
So do not use the first snippet. The second one gets you the string in UTF-16, the native encoding used in Windows; result needs to be a std::wstring. UTF-16 is also the encoding used by C#, so you could send the string as binary, although that's not typically a good idea; XML is popular. It is up to you to define the wire format.
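A minimal sketch of the wide-character call, assuming h_sub_key is an already-opened HKEY and the value is a REG_SZ (the value name comes from the question):

#include <windows.h>
#include <string>

// Read the "DisplayName" value from an open registry key as UTF-16.
std::wstring ReadDisplayName(HKEY h_sub_key)
{
    DWORD cbData = 0;
    // First call: ask for the size of the value data in bytes.
    if (RegQueryValueExW(h_sub_key, L"DisplayName", NULL, NULL, NULL, &cbData) != ERROR_SUCCESS || cbData == 0)
        return std::wstring();

    std::wstring result(cbData / sizeof(wchar_t), L'\0');
    if (RegQueryValueExW(h_sub_key, L"DisplayName", NULL, NULL,
                         (LPBYTE)&result[0], &cbData) != ERROR_SUCCESS)
        return std::wstring();

    // The stored data usually includes the terminating null; trim it off.
    while (!result.empty() && result.back() == L'\0')
        result.pop_back();
    return result;
}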