This is for an app running on Windows 10. I have two keyboard layouts loaded, ENG US and ENG INT
I am using GetKeyboardLayout(0); however, I get the same result regardless of which layout I'm using.
How can I detect which of the two keyboard layouts is in use?
This may be my mistake: if I make the call like
GetKeyboardLayout(GetWindowThreadProcessId(::GetForegroundWindow(), 0))
Then I get the correct result each time. Now I'm confused because I was under the impression that the keyboard layout was global on Windows 10.
Languages, Locales, and Keyboard Layouts:
Applications typically use locales to set the language in which input
and output is processed. Setting the locale for the keyboard, for
example, affects the character values generated by the keyboard.
Setting the locale for the display or printer affects the glyphs
displayed or printed. Applications set the locale for a keyboard by
loading and using keyboard layouts. They set the locale for a display
or printer by selecting a font that supports the specified locale.
Applications are generally not expected to manipulate input languages
directly. Instead, the user sets up language and layout combinations,
then switches among them.
An application calls the ActivateKeyboardLayout function to activate the user's default layout for a language, and the GetKeyboardLayout function to retrieve the active layout.
Both functions operate per thread (or per process), not globally.
I guess what you might want is the input locale identifier for the system default input language, which is global:
SystemParametersInfo(SPI_GETDEFAULTINPUTLANG, 0, &hkl, 0);
Background Information: I'm developing a Windows 10 app. Within my app, I'm working with nested consoles. I'm familiar with GetSystemMetrics() and use some of its parameters to define my console's physical appearance (e.g. SM_CXBORDER, SM_CXDLGFRAME, etc.).
Snip: Nested Child Console
Problem: Which parameter should I look into if I want my nested child consoles (i.e. a child's child console) to be resizable? My current logic outputs a user's cmd onto this console. Over time, the outputs accumulate. For example, if a user inputs the cmd Time 10 times, he/she will need to scroll through the outputs to see any previous output. In the desired scenario, the user can input the cmd Time 10 times without having to scroll, which can be achieved by extending the console vertically. As a user, I'd rather extend the console than scroll through the outputs. This is purely for better visibility and less congestion.
Attempt: I tried altering DLGFRAME, DLGWINFRAME, RESIZEFRAME, and SCROLL. However, I didn't have much success.
There is no layout engine in the classic Windows API that will make your window extend its size automatically.
"Fit window size to text" is a feature that is implemented only in more sophisticated GUI toolkits.
If you insist on using the classic Windows API for your GUI (kind of like using stone-age tools), the only option is to calculate how big your rendered text is going to be (either assume it is always one line, or use DrawText with the DT_CALCRECT flag) and extend your main window and text control by that amount.
On the whole you would be far wiser to switch to a real GUI toolkit than to wrestle with the WINAPI and reinvent extremely complex wheels.
By the way, don't call it a console: "console" is the term for Windows console terminals, which use a different API, so your question conflicts with existing terminology.
Is it possible to create a keyboard shortcut to switch between the monitor and portion selection of this Wacom preferences window, via a C++ console program?
Sorry if this is poorly worded, I've had trouble trying to find the right words to search for ways to do it.
I think it should be possible, although a bit tedious. You should be able to use the Windows API: try EnumWindows/EnumDesktopWindows to identify the application's window and its controls (which are also windows).
You should identify the window title and class name for the app window and for the button controls; then, when you enumerate all the desktop windows, you can pick out the ones you are interested in.
Then you can use the SendMessage() API to send messages to the controls (windows) of interest to manipulate them.
It's a bit tedious, but sounds possible.
An example of use here to get an idea:
http://www.cplusplus.com/forum/windows/25280/
I need to get the scan codes of keyboard buttons (or any other codes) in a layout-independent way. More specifically, let's say I have a QEditText and I'm catching keystrokes from it. I press a single button: when the layout is English it has keycode=X; then I switch the layout to Russian (German, French, whatever) and the keycode becomes Y, but the physical button is the same. So I need the code of that physical button. How do I get it?
I am not sure whether you can do this from code alone with Qt/X11 methods, but there is a tool that helps in similar situations: xbindkeys. You can read more here:
https://unix.stackexchange.com/questions/91355/shortcut-keys-that-are-independent-to-keyboard-layout
If you can't use xbindkeys, you can still check its code and see how the author achieved this.
Using EnumWindows and GetWindowText, I see many titles with "M" and "Default IME".
What is their primary function? It seems to be something quite fundamental.
I'm not sure about the "M" one, but the "Default IME" window is created by the default Input Method Editor (IME).
An IME allows the user to enter characters in a script that may involve a number of separate keystrokes, e.g. Chinese or Korean.
Different IMEs can be installed via the Region and Language dialogs in Control Panel.
It's not unusual for a number of hidden windows to exist on Windows, especially when COM components are running (for example, a single-threaded apartment [STA] uses a window message pump to serialize actions).
How would I read keystrokes using the Win32 API? I would also like to see keys from international keyboards, such as German umlauts.
Thanks.
There's a difference between keyboard presses and the characters they generate.
At the lowest level, you can poll the keyboard state with GetKeyboardState. That's often how keylogging malware does it, since it requires the fewest privileges and sees everything regardless of where the focus is. The problem with this approach (besides requiring constant polling) is that you have to piece together the keyboard state into keystrokes and then keystrokes into a character stream. You have to know how the keyboard is mapped, and you have to keep track of shift keys, control keys, alt keys, etc. You also have to know about auto-repeat, dead keys, and possibly other complications.
If you have privileges you can install a keyboard hook, as Jens mentioned in his answer.
If you have focus, and you're a console app, you use one of the functions to read from standard input. On Windows, it's hard to get true Unicode input. You generally get so-called ANSI characters, which correspond to the current code page for the console window. If you know the code page, you can use MultiByteToWideChar to convert the single- or multi-byte input into UTF-16 (which Windows documentation calls Unicode). From there you can convert it to UTF-8 (with WideCharToMultiByte) or whatever other Unicode encoding you want.
If you have focus, and you're a GUI app, you can see keystrokes with WM_KEYDOWN (and friends). You can also get fully resolved UTF-16 characters with WM_CHAR (or UTF-32 from WM_UNICHAR). If you need UTF-8 from those, you'll have to do a conversion.
To get keyboard input regardless of focus, you'll probably need to hook the keyboard.
Take a look at SetWindowsHookEx with WH_KEYBOARD or WH_KEYBOARD_LL. Add a W suffix to the call for the Unicode variant.