Change input locale (keyboard - Left Alt + Shift + 1) key sequence PROGRAMMATICALLY - C++

On XP, if you go to
Control Panel -> Regional and Language Options -> Languages tab -> Details ->
and you have more than one keyboard in use, you can then click Key Settings. Those are the settings I would like to change: I want to set it up so that the Dvorak keyboard is switched to by Left Alt + Shift + 1. I can use C++, C#, or whatever. I already know how to load a keyboard:
HKL dvorakhkl = LoadKeyboardLayout(TEXT("00010409"), 0);
That loads the Dvorak keyboard layout. This sets it as the default:
SystemParametersInfo(SPI_SETDEFAULTINPUTLANG, 0, (PVOID)&dvorakhkl, 0);
Also, I can change the top part of said dialog box
"Switch between Input Languages"
UINT val = 1; // "1" = ALT+SHIFT, "2" = CTRL+SHIFT, "3" = none.
SystemParametersInfo(SPI_SETLANGTOGGLE, 0, 0, val);
Let me know if you can help. Thanks!
Aaron

By default all programs use the "C" locale (because we all program in C dialects, I suppose).
You can imbue streams with an appropriate locale.
Just remember that you must imbue a stream before opening/using it. An attempt to imbue a stream after it has been opened/used will be silently ignored.
This means that for std::cin and std::cout you should probably do it immediately on startup in main(), to avoid any chance of them being used first.
When creating a locale object, if you specify the empty string it will pick up the name of the locale from the environment (i.e. one of the environment variables). A minimal sketch follows after the link.
See:
http://www.cplusplus.com/reference/iostream/ios_base/imbue/
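For example, a minimal sketch of that startup step (nothing here is specific to the keyboard-layout question; the numeric output is only there to show the locale taking effect):

#include <iostream>
#include <locale>

int main()
{
    // "" picks up the locale named by the environment (LANG / LC_ALL etc.)
    std::locale user_locale("");

    // Imbue before the streams are used for any formatted I/O.
    std::cin.imbue(user_locale);
    std::cout.imbue(user_locale);

    std::cout << 1234567.89 << '\n'; // digit grouping / decimal point follow the user's locale
    return 0;
}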

Related

How to get macOS keyboard shortcuts set in System Preferences programmatically?

On macOS the key combination CMD+Backtick is used to cycle through the open windows of an application when using an English keyboard. On German keyboards, for example, the combination is CMD+<. This shortcut can even be configured using System Preferences -> Keyboard -> Shortcuts -> Keyboard -> Move focus to next window.
For my multi-window GUI application using FLTK I want to utilize this shortcut, but have no idea how to fetch the combination the user has set on his or her system. So what I'm looking for is a macOS system call that gives me the key combination that is used to Move focus to next window on this very Mac.
Of course, if there were a somewhat built-in way using FLTK, I'd prefer that over having to use native system calls.
Googling for this issue is a nightmare ...
Update 08/10/2017
Öö's answer gave me some ideas for additional research. I've since learned that the preferences are stored in com.apple.symbolichotkeys, more precisely in key 27.
27 = {
    enabled = 1;
    value = {
        parameters = (
            98,
            11,
            524288
        );
        type = standard;
    };
};
Parameter 1 (98): That's the ASCII code for "b". The first parameter holds the ASCII code of the shortcut key, or 65535 if it's a non-ASCII character.
Parameter 2 (11): That's the key code kVK_ANSI_B (source). These codes are keyboard-dependent: on a US keyboard kVK_ANSI_Z is 0x06, while on a German keyboard it's 0x10.
Parameter 3 (524288): That's the modifier key mask:
0x000000 => "No modifier",
0x020000 => "Shift",
0x040000 => "Control",
0x080000 => "Option",
0x100000 => "Command",
(0x080000 equals 524288, i.e. Option.)
So my task just seems to be to parse the output of defaults read com.apple.symbolichotkeys, get the key combination from the parameters dictionary, interpret that combination correctly depending on the keyboard layout, and use this information to set the callbacks in my FLTK app (see the sketch below).
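As a rough sketch of that last step, translating the third parameter (the modifier mask values listed above) into FLTK modifier flags might look like this; the helper name and the exact mapping are my own assumption, not something verified against Apple documentation:

#include <FL/Enumerations.H>

// Translate the third symbolichotkeys parameter (modifier mask) into FLTK modifier flags.
static int mac_mask_to_fltk(unsigned long mask)
{
    int state = 0;
    if (mask & 0x020000) state |= FL_SHIFT; // Shift
    if (mask & 0x040000) state |= FL_CTRL;  // Control
    if (mask & 0x080000) state |= FL_ALT;   // Option
    if (mask & 0x100000) state |= FL_META;  // Command
    return state;
}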
I can't test the answer right now ... but I would first try to popen the defaults command, like:
// needs <cstdio> and <cstring>
FILE *file;
if (!(file = popen("defaults read NSGlobalDomain NSUserKeyEquivalents", "r")))
{
    return nullptr;
}
const int MAX_BUF_SIZE = 512;
char temp[MAX_BUF_SIZE + 1] = "";
while (fgets(temp, MAX_BUF_SIZE, file) != nullptr)
{
    printf("%s", temp);
    memset(temp, 0, MAX_BUF_SIZE + 1);
}
pclose(file);
Here I just printf its output, but you will likely want to parse it (see the sketch below).
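For the com.apple.symbolichotkeys output discussed in the question, one (hypothetical, untested) way to pull the three numbers out of the parameters = ( 98, 11, 524288 ); block would be a helper like this:

#include <cstdio>
#include <cstring>

// Scan each line of the `defaults read` output for the "parameters" tuple.
// Returns true once all three values (ascii code, key code, modifier mask) are found.
static bool parse_parameters(FILE *file, long &ascii, long &keycode, long &modifiers)
{
    char line[512];
    bool in_params = false;
    long values[3];
    int found = 0;

    while (fgets(line, sizeof line, file))
    {
        if (strstr(line, "parameters"))
        {
            in_params = true;
            continue;
        }
        if (in_params && found < 3 && sscanf(line, " %ld", &values[found]) == 1)
            ++found;
        if (found == 3)
            break;
    }
    if (found != 3)
        return false;
    ascii = values[0];
    keycode = values[1];
    modifiers = values[2];
    return true;
}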

Writing unicode(?) character directly from source code to WriteConsoleOutput

I'm trying to use WriteConsoleOutput from the WinApi to write characters to the command prompt window buffer. The thing is, I'd really like to be able to write characters such as ☺ directly into the source code, as-is, instead of using some kind of encoding/notation like '\uFFFF' or '0xFF', since I don't understand them too well (differences between codepages/character sets/etc.)
The code below showcases the simplest form of my problem. Running this code does not print ☺ into the command prompt window, but a question mark (?) instead.
#include <Windows.h>

int main()
{
    HANDLE h = GetStdHandle(STD_OUTPUT_HANDLE);
    CHAR_INFO c[1] = {0};
    COORD cS = {1, 1};
    COORD cH = {0, 0};
    SMALL_RECT sr = {0, 0, 0, 0};
    c[0].Attributes = FOREGROUND_INTENSITY;
    c[0].Char.UnicodeChar = '☺';
    WriteConsoleOutput(h, c, cS, cH, &sr);
    Sleep(5000);
    return 0;
}
It is vital for my code to display output identically between all Windows versions, regardless of the languages installed/used. So to my knowledge (which admittedly is absolutely minimal), I'd need to set a specific codepage (one which would hopefully be supported by the command prompt in any language Windows).
I've tried:
• Changing from using the CHAR_INFO.UnicodeChar to CHAR_INFO.AsciiChar
• Fiddling around with SetConsoleCP and SetConsoleOutputCP functions, but I haven't got a clue on how to utilize them to help me with this problem.
• Changing the Visual Studio -> Project -> Project properties.. -> Character Set setting to every possible value.
• Using specifically either WriteConsoleOutputA or WriteConsoleOutputW in addition to the aforementioned settings
• Changing the source code file encoding to UTF-8 with(/out) signature.
In my project I'm programmatically setting the command prompt font to 8x8 Terminal, which to my knowledge does not support actual unicode characters. The available characters are displayed here. Those characters do include '☺', so I'm not entirely sure my question is about unicode. I have no idea anymore. Please help.
C source should be kept ASCII-only. If you embed non-ASCII characters in a C source file, an IDE might show them in what appears to be the correct form, but the compiler quite likely treats them differently, and the function you eventually pass them to can treat them differently still. It's just not portable or reliable. But you can use the escape sequence \x to embed arbitrary bytes in C strings.
UTF-8 is good for internal use, but the Windows APIs don't yet support it here, so you need to convert to Windows 16-bit characters (nearly, but not quite, UTF-16) to display extended characters. However, you have to ensure that you are calling the wide-character version of the Windows API. Most Windows API functions that take strings come in an A and a W version (ANSI and wide) for binary backwards compatibility. If you query the identifier in the IDE (Go To Definition etc.) you should see which version you have.
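A minimal sketch along those lines, reusing the question's example: call the W function explicitly and spell the character as a wide escape (U+263A is the code point for ☺). Whether the glyph actually appears still depends on the console font, but the wide path at least sidesteps the source-encoding problem.

#include <Windows.h>

int main()
{
    HANDLE h = GetStdHandle(STD_OUTPUT_HANDLE);
    CHAR_INFO c[1] = {0};
    COORD cS = {1, 1};
    COORD cH = {0, 0};
    SMALL_RECT sr = {0, 0, 0, 0};
    c[0].Attributes = FOREGROUND_INTENSITY;
    c[0].Char.UnicodeChar = L'\u263A';      // wide escape instead of pasting the glyph into the source
    WriteConsoleOutputW(h, c, cS, cH, &sr); // explicitly the wide (W) version
    Sleep(5000);
    return 0;
}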

wxWidgets: understanding if the key at the left of "1" has been pressed

Is there a somewhat portable way to tell whether the key at the left of "1" on the top row of the keyboard has been pressed, by analyzing a wxKeyEvent?
For that key, on my keyboard both GetRawKeyCode() and GetKeyCode() return 126, which is 0x7E and which seems to correspond to what I read here, but I don't know whether that is portable to "any" (a good majority of) keyboards.
The rationale behind this: my window reacts to pressing 0, 1, 2, 3, and I want the key at the left of 1 to give the same behaviour as 0.
There is no portable way to do it (of course, not all keyboards even have a key to the left of "1"). And I don't think the raw key code is a good way to identify this key even under Windows, as it will be different if you use a non-US keyboard layout. I'd probably use bits 16-23 of the raw key flags, which contain the scan code there. The raw key flags should also work with wxGTK, as they contain the hardware keycode there. I am not sure about OS X; the raw flags there are just the modifiers, and I don't know whether the key code is layout-independent.
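A minimal sketch of that idea on wxMSW; the helper name is hypothetical, and 0x29 is my assumption for the scan code of the key to the left of "1" on a typical PC keyboard:

#include <wx/event.h>

// Hypothetical helper: returns true if the event comes from the key to the left of "1".
bool IsKeyLeftOfOne(const wxKeyEvent& event)
{
    // Bits 16-23 of the raw key flags hold the scan code on wxMSW
    // (and the hardware keycode on wxGTK).
    const unsigned scancode = (event.GetRawKeyFlags() >> 16) & 0xFF;
    const unsigned SCAN_LEFT_OF_1 = 0x29; // assumed scan code on a typical PC keyboard
    return scancode == SCAN_LEFT_OF_1;
}

In the key handler, IsKeyLeftOfOne(event) could then trigger the same code path as GetKeyCode() == '0'.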

Can I get a code page from a language preference?

Windows seems to keep track of at least four dimensions of "current locale":
http://www.siao2.com/2005/02/01/364707.aspx
DEFAULT USER LOCALE
DEFAULT SYSTEM LOCALE
DEFAULT USER INTERFACE LANGUAGE
DEFAULT INPUT LOCALE
My brain hurts just trying to keep track of what the hell four separate locales are useful for...
However, I don't grok the relationship between code page and locale (or LCID, or Language ID), all of which appear to be different (e.g. Japanese (Japan) is LANGID = 0x411 location code 1, but the code page for Japan is 932).
How can I configure our application to use the user's desired language as the default MBCS target when converting between Unicode and narrow strings?
That is to say, we used to be an MBCS application. Then we switched to Unicode. Things work well in English, but fail in Asian languages, apparently because the Windows conversion functions WideCharToMultiByte and MultiByteToWideChar take an explicit code page (not a locale ID or language ID), which can be set to CP_ACP (the default ANSI code page), but there doesn't appear to be a value meaning "default to the user's interface language's code page".
I mean, this is some seriously convoluted twaddle. Four separate dimensions of "current language", three different identifier types, as well as (different) string-identifiers for C library and C++ standard library.
In our previous MBCS builds, disk I/O and user I/O worked correctly: everything remained in the DEFAULT SYSTEM LOCALE (Windows XP term: "Language for non-Unicode Programs"). But now, in our UNICODE builds, everything tries to use "C" as the locale, and file I/O fails to properly transcode Unicode to the user's locale, and vice versa.
We want text files to be written out (when narrow) using the current user's language's code page, and when read in, to be converted from that code page back to Unicode.
Help!!!
Clarification: I would ideally like to use the MUI language code page rather than the OS default code page. GetACP() returns the system default code page, but I am unaware of a function that returns the user's chosen MUI language (which auto-reverts to system default if no MUI specified / installed).
I agree with the comments by Jon Trauntvein: the GetACP function does reflect the user's language settings in the Control Panel. Also, based on the link to the "Sorting it all out" blog that you provided, DEFAULT USER INTERFACE LANGUAGE is the language that the Windows user interface will use, which is not the same as the language to be used by programs.
However, if you really want to use DEFAULT USER INTERFACE LANGUAGE, then you get it by calling GetUserDefaultUILanguage, and then you can map the language id to a code page using the following table.
Language Identifiers and Locales
You can also use the GetLocaleInfo function to do the mapping, but first you would have to convert the language id that you got from GetUserDefaultUILanguage into a locale id, and I think you will get the name of the code page rather than a numeric value, but you could try it and see. A sketch of that route follows.
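This is my reading of that API route rather than something taken from the answer above: MAKELCID builds a locale id from the language id, and LOCALE_RETURN_NUMBER asks GetLocaleInfo to return the code page as a number instead of a string.

#include <Windows.h>
#include <cstdio>

int main()
{
    LANGID langid = GetUserDefaultUILanguage(); // the UI (MUI) language
    LCID lcid = MAKELCID(langid, SORT_DEFAULT); // language id -> locale id

    DWORD codepage = 0;
    int ok = GetLocaleInfoW(lcid,
                            LOCALE_IDEFAULTANSICODEPAGE | LOCALE_RETURN_NUMBER,
                            reinterpret_cast<LPWSTR>(&codepage),
                            sizeof(codepage) / sizeof(WCHAR));
    if (ok)
        printf("UI language 0x%04x -> ANSI code page %lu\n", langid, codepage);
    else
        printf("GetLocaleInfo failed: %lu\n", GetLastError());
    return 0;
}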
If all you want to be able to do is configure a locale object to use the currently selected locale settings, you should be able to do something like this:
std::locale loc = std::locale("");
You can also access the current code page in windows using the Win32 ::GetACP() function. Here is an example that I implemented in a string class to append multi-byte characters to a unicode string:
void StrUni::append_mb(char const *buff, size_t buff_len)
{
    UINT current_code_page = ::GetACP();
    int space_needed;
    if (buff_len == 0)
        return;
    space_needed = ::MultiByteToWideChar(
        current_code_page,
        MB_PRECOMPOSED | MB_ERR_INVALID_CHARS,
        buff,
        buff_len,
        0,
        0);
    if (space_needed > 0)
    {
        reserve(this->buff_len + space_needed + 1);
        MultiByteToWideChar(
            current_code_page,
            MB_PRECOMPOSED | MB_ERR_INVALID_CHARS,
            buff,
            buff_len,
            storage + this->buff_len,
            space_needed);
        this->buff_len += space_needed;
        terminate();
    }
}
Just use CW2A() or CA2W() which will take care of the conversion for you using the current system locale (or language used for non-Unicode applications).
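A small usage sketch of those ATL/MFC conversion helpers; the function name and strings are just for illustration, and note that the converted buffer only lives as long as the CA2W/CW2A temporary:

#include <Windows.h>
#include <atlbase.h>
#include <atlconv.h> // CA2W / CW2A
#include <cstdio>

void convert_examples()
{
    const char    *narrow = "some text in the ANSI code page";
    const wchar_t *wide   = L"some wide text";

    ATL::CA2W asWide(narrow);    // narrow -> wide
    ATL::CW2A asNarrow(wide);    // wide -> narrow

    const wchar_t *w = asWide;   // implicit conversion to the converted buffer
    const char    *n = asNarrow; // valid only while asNarrow is alive

    wprintf(L"%ls\n", w);
    printf("%s\n", n);
}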
FWIW, this is what I ended up doing (a consolidated sketch follows below):
#define _CONVERSION_DONT_USE_THREAD_LOCALE // force CP_ACP *not* CP_THREAD_ACP for MFC CString auto-converters!!!
In application startup, construct the desired locale: m_locale(FStringA(".%u", GetACP()).GetString(), LC_CTYPE)
force it to agree with GetACP(): // force C++ and C libraries based on setlocale() to use system locale for narrow strings
m_locale = ::std::locale::global(m_locale); // we store the previous global so we can restore before termination to avoid memory loss
This gives me relatively ideal use of MFC's built-in narrow<->wide conversions in CString to automatically use the user's default language when converting to or from MBCS strings for the current locale.
Note: m_locale is type ::std::locale
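Putting those steps together, a consolidated sketch; FStringA is the poster's own helper, so here the locale name is built with std::to_string instead, and for simplicity the full named locale is installed rather than only its LC_CTYPE category (both of those are my assumptions):

#define _CONVERSION_DONT_USE_THREAD_LOCALE // force CP_ACP, not CP_THREAD_ACP, for MFC CString auto-converters

#include <Windows.h>
#include <locale>
#include <string>

class App
{
public:
    void init_locale()
    {
        // e.g. ".1252" on a Western-European system; agrees with GetACP()
        std::string name = "." + std::to_string(::GetACP());
        m_locale = std::locale(name.c_str());

        // Make the C and C++ libraries' narrow-string handling use the system ANSI code page;
        // keep the previous global so it can be restored before termination.
        m_previous = std::locale::global(m_locale);
    }

    void shutdown_locale()
    {
        std::locale::global(m_previous); // restore the original global locale
    }

private:
    std::locale m_locale;
    std::locale m_previous;
};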

How to create a partition without Windows assigning a drive letter?

I am trying to initialize and partition an attached virtual hard disk through the Windows API. I have been successful using DeviceIoControl() to do so; however, whenever I apply the desired drive layout, Windows automatically assigns a drive letter to the partition and pops up an annoying "Would you like to format?" dialog.
My intent is to handle the formatting and mounting of this partition later in the program, but I'm not sure how to stop this behavior. I have tried setting RecognizedPartition to FALSE, but this seems to have no effect.
Relevant code:
Layout.PartitionStyle = PARTITION_STYLE_MBR;
Layout.PartitionCount = 4;
Layout.Mbr.Signature = MY_DISK_MBR_SIGNATURE;
Layout.PartitionEntry[0].PartitionStyle = PARTITION_STYLE_MBR;
Layout.PartitionEntry[0].PartitionNumber = 1;
Layout.PartitionEntry[0].StartingOffset.QuadPart = MY_DISK_OFFSET;
Layout.PartitionEntry[0].PartitionLength.QuadPart =
    (Geom.DiskSize.QuadPart - MY_DISK_OFFSET);
Layout.PartitionEntry[0].Mbr.PartitionType = PARTITION_IFS;
Layout.PartitionEntry[0].Mbr.BootIndicator = FALSE;
Layout.PartitionEntry[0].Mbr.RecognizedPartition = FALSE;
Layout.PartitionEntry[0].Mbr.HiddenSectors =
    (MY_DISK_OFFSET / Geom.Geometry.BytesPerSector);

for (int i = 0; i < 4; i++)
{
    Layout.PartitionEntry[i].RewritePartition = TRUE;
}

if (!DeviceIoControl(hDisk, IOCTL_DISK_SET_DRIVE_LAYOUT_EX,
                     &Layout, dwLayoutSz, NULL, 0, &dwReturn, NULL))
{
    // Handle error
}

DeviceIoControl(hDisk, IOCTL_DISK_UPDATE_PROPERTIES,
                NULL, 0, NULL, 0, &dwReturn, NULL);
What can I do to prevent automatic drive letter assignment?
The only reliable way I could find to work around this issue was to stop the "Shell Hardware Detection" service while the volume was created and formatted. However, this approach is so unapologetically silly that I refused to put it into code.
Another "hackish" option is to have the service start up and then immediately spawn itself (or a "worker" executable) in a hidden window via CreateProcess() with the CREATE_NO_WINDOW flag.
Since this software runs as a system service and I'd rather not complicate the code for something that only happens once or twice over the lifetime of the system, I've just had to accept that sometimes there will occasionally be an Interactive Services Detection window pop up for a few moments while creating the partitions.
If anyone discovers a good method for preventing the format prompt while programmatically creating and formatting a drive, I'll happily change the accepted answer (and owe you a beer).
It's been a while since I've used this API, but from memory you can't. But that doesn't stop you from removing the drive letter assignment after the fact (see the sketch below).
I'm not sure if it will stop the format prompt though; all the times that I have done this, the partition had already been formatted correctly before I did the disk layout update.
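A minimal sketch of removing an auto-assigned letter after the fact; the helper name is made up, and you would first have to discover which letter Windows assigned (for example via GetVolumePathNamesForVolumeNameW):

#include <Windows.h>
#include <cstdio>

// Remove the mount point "X:\" so the volume no longer has a drive letter.
// The volume and its data are untouched; requires administrative rights.
bool remove_drive_letter(wchar_t letter)
{
    wchar_t mountPoint[] = L"?:\\";
    mountPoint[0] = letter;

    if (!DeleteVolumeMountPointW(mountPoint))
    {
        fprintf(stderr, "DeleteVolumeMountPoint failed: %lu\n", GetLastError());
        return false;
    }
    return true;
}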
I just solved this problem by waiting several seconds for the drive to become available and then directly issuing a format action. See my answer here.
Rufus has an interesting workaround: it installs a window event hook that detects the "do you want to format this drive?" prompts and immediately closes them. See source code here.
It then goes on to arrange to mount only the partitions it cares about, but that's orthogonal.
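For reference, one way such a workaround could be implemented; this is a generic sketch, not Rufus's actual code, and the window-title check is a rough, locale-dependent placeholder you would have to adapt:

#include <Windows.h>
#include <cwchar>

// Placeholder: check what the actual "do you want to format?" prompt is titled on the target system.
static const wchar_t *kFormatPromptTitle = L"Microsoft Windows";

// Called whenever a window is shown; close it if it looks like the format prompt.
static void CALLBACK OnWinEvent(HWINEVENTHOOK, DWORD, HWND hwnd,
                                LONG idObject, LONG, DWORD, DWORD)
{
    if (idObject != OBJID_WINDOW || hwnd == NULL)
        return;

    wchar_t title[256] = L"";
    GetWindowTextW(hwnd, title, 256);
    // In practice you would also inspect the window class/content before closing it.
    if (wcsstr(title, kFormatPromptTitle) != NULL)
        PostMessageW(hwnd, WM_CLOSE, 0, 0);
}

int main()
{
    HWINEVENTHOOK hook = SetWinEventHook(EVENT_OBJECT_SHOW, EVENT_OBJECT_SHOW,
                                         NULL, OnWinEvent, 0, 0,
                                         WINEVENT_OUTOFCONTEXT | WINEVENT_SKIPOWNPROCESS);

    // Out-of-context event hooks need a message loop on this thread.
    MSG msg;
    while (GetMessageW(&msg, NULL, 0, 0) > 0)
        DispatchMessageW(&msg);

    UnhookWinEvent(hook);
    return 0;
}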