I'm writing a program in C++ to implement the keyboard backlight feature from OS X on MacBook Pros running a Linux distro. So far, it turns the backlight on at boot; if no keyboard or mouse events are registered for 20 seconds, it turns the backlight back off, and of course turns it on again when an event is registered. The next thing I need the program to do is capture presses of the keyboard-backlight-up/down keys, but I'm not sure how to approach this.
I am currently using XScreenSaverQueryInfo to get the idle time of keyboard and mouse events, so a method using the X11 API would be fine. I have done a lot of googling but haven't found a way I felt sure about going with. The problem I see with many of the methods I found is that they use a keycode to identify the key, and I don't think that is viable, since the program should work with any available keyboard layout.
Any idea of a method and API I should go with? What would work best?
The normal way to do this is with XGrabKey(). It uses keycodes, but you wouldn't hardcode the keycode, you'd get it with XKeysymToKeycode(). To be more correct you'd also want to redo the grab when you get a MappingNotify (XMappingEvent). (Note, MappingNotify, not MapNotify.) If there isn't a keysym for these keys - there probably isn't on old X versions, but hopefully newer X.org versions have one - then you just have to hardwire the keycode. Which won't be very robust or portable but probably works for everyone on Linux with the same hardware model.
Be prepared that key grabs are global, so if you try to XGrabKey() and something else has already grabbed that key, you'll get an X error - by default that exits the program. Another quirk of XGrabKey() is that it grabs the key with a precise modifier set. For example, to handle both with and without NumLock, you need to grab twice. See Global Hotkey with X11/Xlib
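For illustration, here is a minimal sketch of that approach. It assumes your XF86keysym.h defines XF86XK_KbdBrightnessUp/Down (check your headers; if they aren't there you'd fall back to hardcoded keycodes), and it omits X error handling and the actual backlight control:

#include <X11/Xlib.h>
#include <X11/XF86keysym.h>

int main() {
    Display *dpy = XOpenDisplay(nullptr);
    Window root = DefaultRootWindow(dpy);

    // Resolve the keysyms to whatever keycodes the current layout maps them to
    // (0 means the layout has no keycode for that keysym).
    KeyCode up   = XKeysymToKeycode(dpy, XF86XK_KbdBrightnessUp);
    KeyCode down = XKeysymToKeycode(dpy, XF86XK_KbdBrightnessDown);

    // Grab with and without NumLock (Mod2Mask); a robust program would cover
    // every modifier combination it cares about.
    unsigned mods[] = { 0, Mod2Mask };
    for (unsigned m : mods) {
        XGrabKey(dpy, up,   m, root, False, GrabModeAsync, GrabModeAsync);
        XGrabKey(dpy, down, m, root, False, GrabModeAsync, GrabModeAsync);
    }

    XEvent ev;
    for (;;) {
        XNextEvent(dpy, &ev);
        if (ev.type == KeyPress) {
            if (ev.xkey.keycode == up)   { /* raise the backlight */ }
            if (ev.xkey.keycode == down) { /* lower the backlight */ }
        } else if (ev.type == MappingNotify) {
            XRefreshKeyboardMapping(&ev.xmapping);
            // Re-resolve the keycodes and redo the grabs here.
        }
    }
}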
In a normal Linux setup (if you wanted to get a feature like this into upstream projects), the desktop environments don't want lots of separate apps fighting over the key grabs and getting errors. So there's usually a central coordination point: the window manager or a special daemon does all the keybindings and forwards commands to other processes as needed. If you were trying to get your feature integrated into distributions by default, you would probably want to look at patching the same upstream code that handles other special keys like this.
Another thing to be aware of is the Xkb API, which is a lot more complicated. There is some brain-bending way to grab keys with Xkb but I don't know of any advantage to going that route.
If you haven't done so already, familiarize yourself with xev. Start it, give it focus, and press the keys to see what's happening.
I've fought for a couple of hours with a bug caused by a behavior of SDL2 that I didn't know anything about.
In particular, I didn't know that, on mobile, whenever the user touches the screen, two events are sent:
The first one was quite obvious to me: a finger down event.
The second one was indeed less obvious: a mouse button down event.
The same applies to the finger up/mouse button up events.
Because of this, an internal command was thrown twice, giving me a headache.
The target is to support both mobile and desktop environments for reasons that are beyond the purpose of the question.
Also, I guess SDL2 works like that in order to support smooth migration of existing codebases.
Anyway, is there a (let me say) SDL2 way to inhibit the mouse-related events on mobile?
Honestly, they don't make much sense from my point of view and I would like to get rid of them, unless the software is executed on a desktop environment.
Also, I want neither to use compile-time parameters nor to have dedicated parts of code whose only aim is to suppress those events on mobile.
The code is quite simple. Below is a (maybe) meaningful, reduced example:
SDL_Event ev;
while (SDL_PollEvent(&ev)) {
    switch (ev.type) {
    case SDL_FINGERDOWN:
        // read it and throw an internal event E
        break;
    case SDL_MOUSEBUTTONDOWN:
        // read it and throw an internal event E
        break;
    }
}
Unfortunately, both the events above are read when the user touches the screen, as explained.
* EDIT *
I didn't mention that I was testing my application on an Android device, and I'm far from sure that the same problem arises on iOS.
See the response below. It seems that the issue (which, as far as I've understood, is not exactly an issue) is mainly due to the way SDL2 treats finger events on Android by default.
Even though I really like Christophe's idea of adding an event filter, I've found that SDL2 already provides support for this problem in the form of a hint on Android.
In particular, there exists the hint SDL_HINT_ANDROID_SEPARATE_MOUSE_AND_TOUCH.
One can set it by means of the function SDL_SetHint. See here for further details.
It's as simple as:
SDL_SetHint(SDL_HINT_ANDROID_SEPARATE_MOUSE_AND_TOUCH, "1");
According to the documentation of the hint, it is set to 0 by default, which means that:
mouse events will be handled as touch events and touch will raise fake mouse events
That was indeed my problem; by setting it to 1 we get the following:
mouse events will be handled separately from pure touch events
So, no longer fake mouse events on Android devices.
The reason behind the default value is not so clear to me, but this really sounds like the right way to achieve it.
EDIT (more details)
This change seems to be recent.
Here is a link to the libsdl forum where they were discussing the issues that arose as a consequence of the patch that introduced this behavior.
Someone had the same problem I had, and others also tried to explain why the patch had been accepted.
EDIT: alternative solution
The hint SDL_HINT_ANDROID_SEPARATE_MOUSE_AND_TOUCH has been available since SDL v2.0.4, so it seems that the only viable solution for lower versions of SDL2 is to use an event filter.
Anyway, I discourage querying the platform with SDL_GetPlatform in order to decide whether or not to set an event filter.
Instead, according to the documentation of both SDL_MouseMotionEvent and SDL_MouseButtonEvent, there is a which field that:
[...] may be SDL_TOUCH_MOUSEID, for events that were generated by a touch input device, and not a real mouse. You might want to ignore such events, if your application already handles SDL_TouchFingerEvent.
Because of that, I suggest setting an event filter regardless of the underlying platform, and queuing the events or filtering them out as needed.
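As a minimal sketch of that idea (the function name is mine, not from SDL), an event filter can drop exactly the mouse events whose which field is SDL_TOUCH_MOUSEID:

// Returning 0 drops the event before it reaches the queue, 1 keeps it.
static int dropFakeMouseEvents(void *, SDL_Event *ev) {
    switch (ev->type) {
    case SDL_MOUSEMOTION:
        return (ev->motion.which == SDL_TOUCH_MOUSEID) ? 0 : 1;
    case SDL_MOUSEBUTTONDOWN:
    case SDL_MOUSEBUTTONUP:
        return (ev->button.which == SDL_TOUCH_MOUSEID) ? 0 : 1;
    default:
        return 1;
    }
}

// Installed once after SDL_Init:
// SDL_SetEventFilter(dropFakeMouseEvents, nullptr);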
Unfortunately, there is no such platform-specific deactivation parameter.
The cleanest way to do it would hence be, in the initialisation code, to query the platform with SDL_GetPlatform() and, if mobile, set an event filter with SDL_SetEventFilter() which prevents the mouse events from being queued.
It's not exactly the answer you expect, but I see no other SDL alternative.
A simpler approach, if you control the code of your event loop, would be to set a flag instead of an event filter, and do nothing on the mouse events if the flag is set. This second approach is however not as clean, as you have to take care of platform-specific behaviour throughout your code, whereas it's much more isolated in the first alternative.
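A rough sketch of the first alternative; the filter function is assumed to be something like the one shown earlier, and the platform strings are the ones documented for SDL_GetPlatform:

#include <SDL.h>
#include <cstring>

int dropFakeMouseEvents(void *, SDL_Event *);  // the hypothetical filter sketched above

void installMobileEventFilter() {
    const char *platform = SDL_GetPlatform();
    bool mobile = (std::strcmp(platform, "Android") == 0) ||
                  (std::strcmp(platform, "iOS") == 0);
    if (mobile) {
        // Only mobile builds get the filter; desktop keeps real mouse events.
        SDL_SetEventFilter(dropFakeMouseEvents, nullptr);
    }
}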
I want to monitor when a key is changed/added/deleted in the registry whenever an application is installed or removed. I have tested the sample code from MSDN (link) and it works fine.
But the problem is that it does not tell me which key has actually been modified/added/deleted. How can I retrieve this information using C++?
There are only 3 ways, none of which is both easy and adequate:
RegNotifyChangeKeyValue:
Doesn't give you the info you need, but is very easy to use (see the sketch after this list).
EVENT_TRACE_FLAG_REGISTRY, which is part of Event Tracing for Windows:
This is what ProcMon uses. It works well, but it's quite difficult to use.
I'm not sure exactly how to use it myself, but if I figure it out I'll post it here.
CmRegisterCallback:
Requires a kernel-mode driver, which is a pain on 64-bit.
But otherwise it's the most complete solution.
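For reference, a minimal sketch of the first option. The Uninstall key is just an example target, and this only tells you that something under the key changed, not which value or subkey:

#include <windows.h>
#include <cstdio>

int main() {
    HKEY key;
    if (RegOpenKeyExW(HKEY_LOCAL_MACHINE,
                      L"SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall",
                      0, KEY_NOTIFY, &key) != ERROR_SUCCESS)
        return 1;

    for (;;) {
        // Synchronous call: blocks until a name or value changes anywhere in the subtree.
        LONG rc = RegNotifyChangeKeyValue(key, TRUE,
            REG_NOTIFY_CHANGE_NAME | REG_NOTIFY_CHANGE_LAST_SET,
            nullptr, FALSE);
        if (rc != ERROR_SUCCESS)
            break;
        std::puts("Something changed under the key, but we don't know what.");
    }
    RegCloseKey(key);
    return 0;
}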
Unfortunately Event Tracing for Windows (ETW) does not let you see the full key path in the event. You get only a partial key name and a strange handle, which is actually a key control block. It's not so simple to get information from this block.
Yes, Process Monitor uses ETW, but it does not use the Windows Kernel Trace as a provider.
I've written a program based on an empty Win32 console app in VS2008, running on Win7 64-bit. The program is entirely menu-based, spawning from a main.cpp which only calls external functions that lead to other interfaces based on the user's needs (e.g. cashier, inventory, report, etc.). What I would love to do is provide a new console window for each interface.
Ideally it would close the main menu upon invoking any other interfaces and so on as the user progresses through its functions, including reopening the main menu when necessary.
The basis for doing it this way is that I'm starting a new semester next week, diving deeper into OOP with C++, and I wanted to go over my text and complete the capstone project, which progresses with the topics, to ensure that I have all the basics down pat. As much as I would love to do this the smartest, easiest way, it's best if I stick to the limited knowledge presented in the book, which only hints at the STL and says nothing of additional libraries like Boost.
I, of course, have searched on SO and elsewhere looking for the solution. I have found answers, most of them falling outside of my tight requirements, some dealing with building a console window from scratch. While from-scratch seems the most promising, it seemed to be dealing with those not using a robust IDE like VS and I don't know if it will cause more conflict than it's worth, or if it can even be used in multiplicity. The majority, however, left me with the impression it isn't possible. The one exception to this was linking a console to a process. This is what I hope is in my future!
What brought me to this was the need to present a clean look at each turn of events. At first I was fooling around with trying to clear the screen with a basic function like void clearScreen(int lines); but this will always clear from the bottom. So, if I clear the screen before the next interface it's still at the bottom. If I clear it then accept input, the prompt is still at the bottom.
In case it hasn't been clear up to this point, my question is:
Is it possible, within reason, to produce multiple console windows which are tied to processes, or is there an easy way which I do not know to manipulate the scrolling of the main console window?
Even though I "need" to stay within the confines of the baby-step process of traditional learning, I would love to hear any input aside from switching the app type.
This is more of an OCD issue than a requirement of the task, so if the effort isn't worth the benefit that's okay too.
There is no portable way of moving the cursor around the console window - in Unix/Linux, you can send terminal codes for that, in Windows I have no idea.
What would work cross-platform, but be terribly slow and not too nice, would be:
read your input character-by-character
remember where on the screen the next character should appear
redraw the whole screen after each key press
If you want to do better, you must turn to platform-specific solutions, or find a library which would do it for you (like ncurses in the Unix world), but I don't know if any of these fit in your requirements.
You can set the cursor-position on Windows using SetConsoleCursorPosition.
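A minimal sketch of what that looks like (the helper name is mine):

#include <windows.h>

// Moves the console cursor to column x, row y of the screen buffer.
void moveCursorTo(SHORT x, SHORT y) {
    COORD pos = { x, y };
    SetConsoleCursorPosition(GetStdHandle(STD_OUTPUT_HANDLE), pos);
}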
Since you were saying something about VS, I assume restricting yourself to Windows isn't a problem. If so, you can use the Windows API for this.
Other than that, ncurses seems to be at least partially ported to most common platforms.
If you were looking for a way to do this in standard C++ - it doesn't exist. C++ doesn't require the platform it's running on to even have a console, so there are no console manipulation functions.
Both aren't that hard to use, but if this is really just some student thingy where you expect to learn something useful you probably shouldn't bother. Console manipulation isn't something you'll have or want to do very often.
Although it may not have been clear in my original question, I was looking for a solution to be used in a console window. Ideally the solution would have been operable on at least Linux and Windows because any programs I write for school must be compiled on each. This wasn't an assignment but it's obviously advantageous to learn things that are usable there as well.
Here's what I found. Solution thanks to Tim Wei:
#include <cstdlib>  // for system()

void clearScreen()
{
#ifdef _WIN32
    system("cls");
#else
    system("clear");
#endif
}
This, as simple as it is, was exactly what I was looking for. The function clears the screen and puts the cursor at the top of the console window, making it possible to display static headers or titles above changing data tables. It also allows for simple text-based animations, if you like that sort of thing. It made a significant difference in the look, feel and consistency of my console applications this semester!
I'm making a program that needs to block all input during a short critical section. I used BlockInput, but it still allows the user to use hotkeys like Ctrl+Alt+F1 or Ctrl+Alt+F2 (switching taskbar in both displays). It is crucial that the user is not able to use these two hotkeys.
I read some things about a hook, but I'm not sure where to start with this solution. Any help would be greatly appreciated.
Thanks!
A keyboard hook could do the trick - check out SetWindowsHookEx. Note that it gets tricky on 64-bit systems.
But may I suggest simply setting your process/thread priority to some ludicrously high value? Windows will really favor your process then, and at the highest settings even keyboard and mouse stopped working - I found that out the hard way. :)
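As a sketch of the hook idea: a low-level keyboard hook (WH_KEYBOARD_LL) doesn't require a separate DLL and can swallow key events globally. Whether it catches those particular display-switching hotkeys depends on how the vendor software registers them, so treat this as a starting point rather than a guaranteed fix:

#include <windows.h>

static volatile bool g_blockKeys = true;  // toggle this around your critical section

static LRESULT CALLBACK KeyboardProc(int code, WPARAM wParam, LPARAM lParam) {
    if (code == HC_ACTION && g_blockKeys)
        return 1;  // a nonzero return eats the keystroke
    return CallNextHookEx(nullptr, code, wParam, lParam);
}

int main() {
    HHOOK hook = SetWindowsHookExW(WH_KEYBOARD_LL, KeyboardProc,
                                   GetModuleHandleW(nullptr), 0);
    // A message loop on the installing thread is required for the hook to fire.
    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    UnhookWindowsHookEx(hook);
    return 0;
}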
Basically what I am trying to do is write my own pseudo task bar in C++. The program needs to idle until another program is started up, at which point it needs to visually depict that the other program is running. For each other program that is running, the user should be able to click on the visual representation and have Windows switch focus to the selected program.
The big underlying question at this point: is this even a possibility? Or has Windows hidden most/all of its fiddly-bits to make this close to, if not completely, impossible?
[EDIT:] restructured the question
The obvious starting point would be SetWindowsHookEx(WH_SHELL,...); which will get you notifications when top-level windows are created or destroyed (along with some other related events, like a different window being activated, a window's title changing, etc.)
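A global WH_SHELL hook normally has to live in a DLL so it can be injected into other processes. A related, lighter-weight technique (not the same API, but it delivers the same HSHELL_* notifications to a window of your own) is RegisterShellHookWindow; a rough sketch, with error handling omitted:

#include <windows.h>
#include <cstdio>

static UINT g_shellMsg;  // registered "SHELLHOOK" message id

static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) {
    if (msg == g_shellMsg) {
        HWND target = reinterpret_cast<HWND>(lParam);
        switch (wParam) {
        case HSHELL_WINDOWCREATED:   std::printf("window created:   %p\n", (void*)target); break;
        case HSHELL_WINDOWDESTROYED: std::printf("window destroyed: %p\n", (void*)target); break;
        }
        return 0;
    }
    return DefWindowProcW(hwnd, msg, wParam, lParam);
}

int main() {
    WNDCLASSW wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = GetModuleHandleW(nullptr);
    wc.lpszClassName = L"PseudoTaskbarListener";
    RegisterClassW(&wc);

    // A hidden top-level window is enough to receive the notifications.
    HWND hwnd = CreateWindowW(L"PseudoTaskbarListener", L"", 0,
                              0, 0, 0, 0, nullptr, nullptr, wc.hInstance, nullptr);

    g_shellMsg = RegisterWindowMessageW(L"SHELLHOOK");
    RegisterShellHookWindow(hwnd);

    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    return 0;
}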
Think ahead to actually bringing the window to the front, something I once researched myself.
SetForegroundWindow() won't work unless issued from the foreground process, and neither SwitchToThisWindow() nor the AttachThreadInput() kludge seemed to always work, but maybe I just wasn't doing it right. Anyway, as far as I know there is no way to bring a window to the foreground as reliably as Windows does; please enlighten me if, say, you discover an undocumented call that actually works.
It seems possible to me, at least in a basic way:
1. Set up a shell hook as described by Jerry.
2. Figure out the executable file from the module handle to access its icons using shell services.
The Vista-like feature of keeping a 'live' miniature of the screen seems much more challenging.