I'm looking for a method to programmatically detect Windows 8 Slate devices using C/C++. My definition of "Slate" is "a portable computing device equipped with a touchscreen but without a dedicated physical keyboard" (so including devices that come with a keyboard dock, but excluding laptops and traditional tablets where the physical keyboard is attached).
I tried using WMI Win32_SystemEnclosure and checking the ChassisTypes, but one Slate reported the ChassisTypes as being "Hand Held" and another reported "Main System Chassis", so this doesn't seem to be reliable.
I'm not able to provide any code, since I don't have a "slate" device to test it on, but I can offer you some suggestions.
You'll probably want a heuristic approach: make several API calls to determine the presence or status of various bits of hardware, then decide whether the system matches what you're looking for. The GetSystemMetrics API is likely to be the most useful to you; after looking through the documentation, here are the calls that are likely to help you (a combined sketch follows the list).
GetSystemMetrics with SM_CONVERTIBLESLATEMODE: returns 0 if the system is in Slate Mode and non-zero otherwise. There's no guarantee this will mean the system is an actual slate device, but it can at least tell you if the device has a slate mode and is using it.
GetSystemMetrics with SM_DIGITIZER: returns a bitfield that tells you whether the system supports touch or a pen. Note that TABLET_CONFIG_NONE is zero, so GetSystemMetrics(SM_DIGITIZER) & TABLET_CONFIG_NONE can never evaluate to true; the test you want is GetSystemMetrics(SM_DIGITIZER) == TABLET_CONFIG_NONE, which means no digitizer is present and the device probably isn't a slate. You can also make good use of the other bit flags this call gives you access to.
GetSystemMetrics with SM_MOUSEPRESENT: tells you whether a mouse is present. This is a very weak test since the docs say virtual mice or sometimes just a mouse port will be enough to set this flag, but it's still worth testing. If a mouse isn't present, your device has a higher chance of being a slate.
GetSystemMetrics with SM_TABLETPC: similar to the SM_DIGITIZER test, this tells you if the Tablet PC Input service is started or not. If the service isn't started, your device probably isn't a tablet.
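Here's a minimal sketch of how those checks might be combined. The scores and threshold are invented for illustration only, so calibrate them against real hardware:

// Hedged sketch combining the GetSystemMetrics heuristics above.
// Needs the Windows 8 SDK headers (for SM_CONVERTIBLESLATEMODE).
#include <windows.h>

bool LooksLikeSlate()
{
    int score = 0;

    // 0 means the system is currently in slate mode.
    if (GetSystemMetrics(SM_CONVERTIBLESLATEMODE) == 0)
        score += 2;

    // No digitizer at all: almost certainly not a slate.
    int digitizer = GetSystemMetrics(SM_DIGITIZER);
    if (digitizer == TABLET_CONFIG_NONE)
        return false;
    if (digitizer & NID_INTEGRATED_TOUCH)
        score += 2;

    // A missing mouse is weak evidence for a slate.
    if (!GetSystemMetrics(SM_MOUSEPRESENT))
        score += 1;

    // Tablet PC Input service not running: probably not a tablet.
    if (!GetSystemMetrics(SM_TABLETPC))
        score -= 2;

    return score >= 3; // arbitrary cutoff for illustration
}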
GetSystemPowerStatus could provide a few useful heuristics as well. This API fills in a SYSTEM_POWER_STATUS structure which you can test in the following ways (a short sketch follows the list):
If ACLineStatus is 0, your device isn't connected to AC power, so it's more likely to be a slate.
If BatteryFlag is 128, there is no system battery, so your device probably isn't a slate. If it's any other value (except 255, which is unknown status) there is a battery, which means your device is more likely to be a slate.
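A sketch of those power-status checks (128 and 255 are the documented BatteryFlag values mentioned above):

// Hedged sketch of the battery heuristics described above.
#include <windows.h>

bool BatteryHeuristicSaysSlate()
{
    SYSTEM_POWER_STATUS sps;
    if (!GetSystemPowerStatus(&sps))
        return false; // no data, no opinion

    if (sps.BatteryFlag == 128) // no system battery
        return false;

    // On battery power, with a battery present and its status known:
    // a slate is more plausible.
    return sps.ACLineStatus == 0 && sps.BatteryFlag != 255;
}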
You can also look into WMI's Win32_Keyboard, in particular its Availability, ConfigManagerErrorCode, and Status properties. At the end of the day there is no way to determine whether keyboard input is from a physical or virtual keyboard, but you can at least attempt to test for a physical keyboard.
Your WMI Win32_SystemEnclosure test would become another heuristic in the list. See what ChassisTypes returns: Desktop, Low Profile Desktop, Mini Tower, Tower, and Laptop probably mean the device isn't a slate. Pizza Box, Portable, Notebook (although generally notebook == laptop in common verbiage, so this will require testing), Hand Held, Space Saving, and Lunch Box are probably more likely to be slates. You can also try to run calculations on the Depth, Height, Width and Weight properties, since anything over a certain size and weight probably won't be a portable device and therefore won't be a slate.
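For completeness, here's a rough sketch of reading ChassisTypes from C++ via WMI (most error handling omitted); the same skeleton works for the Win32_Keyboard check above if you swap the WQL query:

// Hedged sketch: query Win32_SystemEnclosure.ChassisTypes via WMI.
// Link with ole32.lib, oleaut32.lib and wbemuuid.lib.
#include <windows.h>
#include <wbemidl.h>
#include <comdef.h>
#include <cstdio>

int main()
{
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    CoInitializeSecurity(nullptr, -1, nullptr, nullptr,
        RPC_C_AUTHN_LEVEL_DEFAULT, RPC_C_IMP_LEVEL_IMPERSONATE,
        nullptr, EOAC_NONE, nullptr);

    IWbemLocator *loc = nullptr;
    CoCreateInstance(CLSID_WbemLocator, nullptr, CLSCTX_INPROC_SERVER,
        IID_IWbemLocator, (LPVOID *)&loc);

    IWbemServices *svc = nullptr;
    loc->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), nullptr, nullptr, nullptr,
        0, nullptr, nullptr, &svc);
    CoSetProxyBlanket(svc, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, nullptr,
        RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE,
        nullptr, EOAC_NONE);

    IEnumWbemClassObject *en = nullptr;
    svc->ExecQuery(_bstr_t(L"WQL"),
        _bstr_t(L"SELECT ChassisTypes FROM Win32_SystemEnclosure"),
        WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY, nullptr, &en);

    IWbemClassObject *obj = nullptr;
    ULONG ret = 0;
    while (en->Next(WBEM_INFINITE, 1, &obj, &ret) == S_OK && ret) {
        VARIANT v;
        if (SUCCEEDED(obj->Get(L"ChassisTypes", 0, &v, nullptr, nullptr)) &&
            (v.vt & VT_ARRAY)) {
            // ChassisTypes holds SMBIOS codes, e.g. 8=Portable, 9=Laptop,
            // 10=Notebook, 11=Hand Held; newer SMBIOS versions also define
            // tablet/convertible codes (30/31).
            LONG lo, hi;
            SafeArrayGetLBound(v.parray, 1, &lo);
            SafeArrayGetUBound(v.parray, 1, &hi);
            for (LONG i = lo; i <= hi; ++i) {
                LONG code = 0;
                SafeArrayGetElement(v.parray, &i, &code);
                printf("ChassisTypes: %ld\n", code);
            }
            VariantClear(&v);
        }
        obj->Release();
    }
    // Release of en/svc/loc omitted for brevity.
    CoUninitialize();
    return 0;
}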
Sample touch detection code from MSDN:
// test for touch
int value = GetSystemMetrics(SM_DIGITIZER);
if (value & NID_READY) { /* stack ready */ }
if (value & NID_MULTI_INPUT) {
    /* digitizer is multitouch */
    MessageBoxW(hWnd, L"Multitouch found", L"IsMulti!", MB_OK);
}
if (value & NID_INTEGRATED_TOUCH) { /* Integrated touch */ }
I'm not a specialist when it comes to WinAPI, but since no one else answered, maybe this will be of some help to you. On MSDN, there's a list of functions available for Windows Store apps, which also specifies whether a function can be used on handheld devices. It seems like the Windows.Devices.Enumeration namespace contains exactly what you need - the DeviceInformation class.
All you have to do is list all the devices (there's code on that page showing how to do it) and search the list for a keyboard.
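If WinRT turns out to be awkward to call from C/C++, an alternative I can think of (not from that page, so treat it as an untested sketch) is the Win32 raw input API: enumerate the input devices and count keyboards. Note that virtual and dock keyboards show up here too, so treat it as another heuristic:

// Alternative sketch using the Win32 raw input API: count keyboards.
#include <windows.h>
#include <vector>
#include <cstdio>

int main()
{
    UINT count = 0;
    GetRawInputDeviceList(nullptr, &count, sizeof(RAWINPUTDEVICELIST));

    std::vector<RAWINPUTDEVICELIST> devices(count);
    GetRawInputDeviceList(devices.data(), &count, sizeof(RAWINPUTDEVICELIST));

    int keyboards = 0;
    for (const RAWINPUTDEVICELIST &d : devices)
        if (d.dwType == RIM_TYPEKEYBOARD)
            ++keyboards;

    printf("%d keyboard device(s) found\n", keyboards);
    return 0;
}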
Please note: I don't own any Windows 8 device, so I can't really test whether this is helpful or not. Give it a try, and I'll delete my comment if it doesn't help you out.
Related
I'm writing a keylogger/mouse tracker for use in an open-source input-heatmapping application, basically identical to Razer's newest heatmapping software but for use with any hardware/OS (using Qt's amazing cross-platform SDK). As you would imagine, this involves intercepting keyboard and mouse messages from the kernel when the application is not the main process.
For Windows I was drawn to GetAsyncKeyState, but there's a note on the return value from MSDN about this function returning zero if "the foreground thread belongs to another process and the desktop does not allow the hook or the journal record."
Barging ahead regardless, I wrote a method for getting the keyboard state (triggered at a set interval via Qt's QTimer) and it just worked:
//The following executes every 100th of a second:
for (int i = 0; i < 256; ++i)
{
    keyboardArray[i] = GetAsyncKeyState(i);
}
As I watch this array in the debugger, I can see the values change as I type, even when the application is not the main process. So, on my machine at least, this function works for monitoring key states when my application is not in focus.
My question is: In what instances does Windows not allow hooks or the journal record? In other words, are there some versions of Windows and/or privileges a user could have/not have where this method could fail? I don't really have access to a bunch of different machines to test this on.
My specs are Windows 7 Home Premium 64 bit, Intel i7 930 (2.8 GHz, quad core hyper threaded), 12 GB DDR3 1333 MHz memory, 2x Nvidia 460 if any of that helps.
Best Regards,
Weikardzaena
EDIT:
Hans Passant gave me an example of situations where this type of implementation fails: applications protected by User Interface Privilege Isolation (UIPI). Basically, if an application runs at a higher integrity level than yours (an elevated command prompt, for example), this type of message interception will not work. I even tested it and it's true: my application stops updating the keyboard array while such a command prompt is the foreground window.
This and what LoPiTaL said suggest that only specific applications will disallow this type of intercept. I'm mainly aiming this application toward gamers who (like myself) would like to see key presses and mouse clicks for their gameplay, so maybe I don't care about this issue as much, but if I want to expand this to general use (including people who use CMD a lot), then it seems like there's actually no way to intercept key messages for those elevated applications.
Is that true, or can methods like SetWindowsHookEx still intercept messages to UIPI applications? I was trying to avoid implementing hooks directly because that might be flagged as a virus on people's home machines, and capturing and re-emitting every input message slows everything down, which in gaming is a pretty big deal.
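For reference, a minimal low-level keyboard hook looks like the sketch below. My understanding is that it hits the same UIPI wall - a medium-integrity process still won't see keystrokes going to elevated windows - but unlike WH_KEYBOARD it needs no DLL injection, and it only observes input rather than capturing and re-emitting it:

// Hedged sketch: observe keystrokes with a WH_KEYBOARD_LL hook.
#include <windows.h>
#include <cstdio>

static HHOOK g_hook = nullptr;

// Called for every low-level keyboard event on this desktop.
LRESULT CALLBACK LowLevelKeyboardProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION &&
        (wParam == WM_KEYDOWN || wParam == WM_SYSKEYDOWN)) {
        const KBDLLHOOKSTRUCT *kb = (const KBDLLHOOKSTRUCT *)lParam;
        printf("vk=0x%02X down\n", (unsigned)kb->vkCode);
    }
    // Always pass the event on so input isn't swallowed.
    return CallNextHookEx(g_hook, nCode, wParam, lParam);
}

int main()
{
    g_hook = SetWindowsHookExW(WH_KEYBOARD_LL, LowLevelKeyboardProc,
                               GetModuleHandleW(nullptr), 0);
    if (!g_hook)
        return 1;

    // A low-level hook requires a message loop on the installing thread.
    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }

    UnhookWindowsHookEx(g_hook);
    return 0;
}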
This is the context: I am running Debian GNU/Linux and I switch regularly between desktop environments ("DE" below).
My question is simple: I want to know which operations, syscalls, or even functions are used when I press the keyboard key "Print Screen".
Does this change with the DE? I.e., do MATE, GNOME, KDE, LXDE, Xfce (etc.) each use a particular call of their own, or is there a generic syscall?
I think the answer (if any) is not Debian-specific but rather relates to X or Wayland, doesn't it?
Thank you in advance for your advice and answers :)
PS: I should mention that I have read a good part of the Xlib source code, but did not find anything useful.
Print Screen itself is definitely not a syscall, but the kernel code that receives key presses definitely causes a routine to execute that uses what you would call a "syscall." I put that in quotes because Print Screen probably causes a program to run that is already in kernel space, which means there won't be any system calls to the kernel since you're already there (unless the window manager actually runs in user space, which isn't true for Mac OS X or Windows, and I'm assuming the same for Linux).
How does it work? It probably works by copying the current display from the screen buffer (a region of RAM that is DMA'd to your graphics card) and then transforming the pixel representation into a bitmap.
The basic principle can be found in the xwd tool.
The code isn't that bad to read. In the simple scenario it uses XGetImage, but if the screen has multiple visual regions it gets more complex; the fundamental principle is to use XGetPixel to read screen pixels and XPutPixel to store them in the temporary image.
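Here's a minimal sketch of the simple (non-composited) case, roughly what xwd does when it can use XGetImage directly. Compile with -lX11:

/* Hedged sketch: grab the root window into a client-side XImage. */
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL); /* connect to $DISPLAY */
    if (!dpy)
        return 1;

    Window root = DefaultRootWindow(dpy);
    XWindowAttributes attr;
    XGetWindowAttributes(dpy, root, &attr);

    /* Copy the whole root window's pixels into client memory. */
    XImage *img = XGetImage(dpy, root, 0, 0, attr.width, attr.height,
                            AllPlanes, ZPixmap);
    if (img) {
        /* XGetPixel can now read individual pixels from the image. */
        unsigned long p = XGetPixel(img, 0, 0);
        printf("%dx%d capture, pixel(0,0)=0x%06lx\n",
               attr.width, attr.height, p);
        XDestroyImage(img);
    }
    XCloseDisplay(dpy);
    return 0;
}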
What happens when you press PrtScrn is the same thing, except it may be some other application that starts. Exactly what application depends on what the graphics package is in the distribution (Gnome, KDE, Unity, etc). But internally, they will do something very similar.
Edit:
As Peter points out, if the windowing system is "compositing" (that is, each window draws its own content off-screen, and the graphics hardware combines the output via composition), then a screen capture has to ask the compositor to render the output off-screen and then copy that.
I'd like to make a function that can move a window in Linux in C++ by its PID. I've tried it under Windows, but I have trouble compiling it for Linux.
Is there any way to do it with Qt? Since I haven't found one, I've tried to compile it for Linux.
I'm using the MoveWindow function, which is part of the Windows API. Is there any Linux equivalent?
You don't have to do this by hand if you don't really want to, as there are already lots of tools out there that can perform such tasks as moving, resizing, maximizing and whatever else with windows.
One tool you might want to take a closer look at goes by the name of wmctrl; even if you don't intend to use it, maybe you'll find some interesting tricks by taking a look into its sources.
The task of moving a window known only by the pid of the client that created it might not be the easiest task of all, for a couple of reasons.
First of all, you really shouldn't try to do this, as in the X Window System philosophy it is the job of the window manager to arrange the windows on the screen.
Both the ICCCM (see: http://de.wikipedia.org/wiki/Inter-Client_Communication_Conventions_Manual) and the EWMH spec (see: http://standards.freedesktop.org/wm-spec/wm-spec-latest.html) strongly discourage any client from trying to move, resize, or otherwise manage windows on its own. Moving windows "owned" by another client might be considered an even bigger evil.
The second problem you might face is that the X11 protocol doesn't have any notion of a pid.
As X was designed to be used over a network, you can never really be sure the program runs on the same machine as the one you are currently sitting in front of. As such, a pid alone doesn't mean much: there might be any number of clients with identical pids displaying windows on the same X server if they run on different machines.
Fortunately enough it is not all that bad, as the EWMH spec encourages any client to set the _NET_WM_PID property on its top-level window to the pid of the client that created the window.
Again, adhering to the EWMH spec isn't enforced by the X server in any way, so while in practice probably almost all clients will set it, there's still no guarantee you'll find the window belonging to a specific pid.
Possibilities
While the points mentioned so far might seem rather limiting, in fact most probably the opposite is true. Precisely because it is practically so easy to totally mess up any other client running in an X session, the whole set of rules about how to be a good citizen in the X world was introduced.
As the X11 protocol itself is a network protocol (well, not 100% true, as locally running clients will most probably communicate with the X server via a UNIX domain socket), there isn't any specific library required to talk to the X server.
Talking about C, as mentioned in your question: Xlib has long been the one and only implementation in wide use, but there's also another binding called xcb, with a slightly different API compared to Xlib.
Xlib
I've never used xcb until now, so I can't tell you much about it; speaking of Xlib, the following functions might be of use:
XOpenDisplay - open connection to the X server
XQueryTree - acquire the tree of windows currently alive on the server
XInternAtom - no fear, it isn't dangerous. Just read about it in the manuals; you'll need it to get the "atom" mapping to the _NET_WM_PID property mentioned above
XListProperties and XGetWindowProperty - find the window whose _NET_WM_PID property carries the pid you are looking for (XListProperties only lists the property atoms; XGetWindowProperty reads the value)
XConfigureWindow, XMoveWindow, XResizeWindow, ... - to finally perform whatever you wish to do.
All functions mentioned above should be documented in the manual pages. Just use man XOpenDisplay for example.
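Putting those together, here's a hedged sketch (no error handling, and it assumes the client actually set _NET_WM_PID) that walks the window tree, finds the first window whose _NET_WM_PID matches, and moves it. Compile with -lX11:

// Hedged sketch: move the first window whose _NET_WM_PID matches argv[1].
#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <stdio.h>
#include <stdlib.h>

static Atom pidAtom;

static Window findByPid(Display *dpy, Window w, unsigned long pid)
{
    Atom type;
    int fmt;
    unsigned long n, left;
    unsigned char *data = NULL;

    if (XGetWindowProperty(dpy, w, pidAtom, 0, 1, False, XA_CARDINAL,
                           &type, &fmt, &n, &left, &data) == Success && data) {
        unsigned long wpid = *(unsigned long *)data;
        XFree(data);
        if (n == 1 && wpid == pid)
            return w;
    }

    /* Recurse into child windows. */
    Window root, parent, *kids;
    unsigned int nkids;
    if (XQueryTree(dpy, w, &root, &parent, &kids, &nkids) && kids) {
        for (unsigned int i = 0; i < nkids; ++i) {
            Window found = findByPid(dpy, kids[i], pid);
            if (found) { XFree(kids); return found; }
        }
        XFree(kids);
    }
    return 0;
}

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;

    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    pidAtom = XInternAtom(dpy, "_NET_WM_PID", True);
    if (pidAtom == None)
        return 1;

    Window w = findByPid(dpy, DefaultRootWindow(dpy),
                         strtoul(argv[1], NULL, 10));
    if (w) {
        XMoveWindow(dpy, w, 100, 100); /* bypasses the WM; see caveats above */
        XFlush(dpy);
        printf("moved window 0x%lx\n", (unsigned long)w);
    }
    XCloseDisplay(dpy);
    return 0;
}

Note that well-behaved tools like wmctrl instead ask the window manager to do the move by sending a _NET_MOVERESIZE_WINDOW client message, which is the approach the EWMH spec prefers.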
Oh, and be sure to learn about all the other tools at your disposal to further investigate the X Window world. Run xlsatoms, check what xwininfo reports, and take a look at the output of xprop for one single (!) window. Try to set some properties yourself to see what happens; xprop will even do that for you if you ask politely.
I am trying to fix an Audacity bug that revolves around portmixer. The output/input level is settable using the Mac version of portmixer, but not always on Windows. I am debugging portmixer's Windows code to try to make it work there.
Using IAudioEndpointVolume::SetMasterVolumeLevelScalar to set the master volume works fine for onboard sound, but with pro external USB or FireWire interfaces like the RME Fireface 400, the output volume won't change, although the change is reflected in Windows' sound control panel for that device, and also in the system mixer.
Also, outside of our program, changing the master slider for the system mixer (in the taskbar) has no effect - the soundcard outputs the same (full) level regardless of the level the system says it is at. The only way to change the output level is to use the custom app that the hardware developers ship with the card.
The IAudioEndpointVolume::QueryHardwareSupport function gives back ENDPOINT_HARDWARE_SUPPORT_VOLUME so it should be able to do this.
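For reference, here's a minimal sketch of the calls in question (error handling omitted): open the default render endpoint, check QueryHardwareSupport, and set the master volume.

// Hedged sketch of the IAudioEndpointVolume calls discussed above.
#include <windows.h>
#include <mmdeviceapi.h>
#include <endpointvolume.h>
#include <cstdio>

int main()
{
    CoInitialize(nullptr);

    IMMDeviceEnumerator *enumr = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void **)&enumr);

    IMMDevice *dev = nullptr;
    enumr->GetDefaultAudioEndpoint(eRender, eConsole, &dev);

    IAudioEndpointVolume *vol = nullptr;
    dev->Activate(__uuidof(IAudioEndpointVolume), CLSCTX_ALL,
                  nullptr, (void **)&vol);

    DWORD mask = 0;
    vol->QueryHardwareSupport(&mask);
    printf("hardware volume support: %s\n",
           (mask & ENDPOINT_HARDWARE_SUPPORT_VOLUME) ? "yes" : "no");

    // 0.5f = 50% of full scale. On the problem devices this call succeeds
    // (and the system UI reflects it) but the analog level doesn't change.
    vol->SetMasterVolumeLevelScalar(0.5f, nullptr);

    vol->Release();
    dev->Release();
    enumr->Release();
    CoUninitialize();
    return 0;
}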
This behavior exists for both input and output on many devices.
Is this possibly a Windows bug?
It is possible to work around this by emulating (scaling) the output, but this is not preferred, as it is not functionally identical - better to let the audio interface do the scaling (especially for input if it involves a preamp).
The cards you talk about - like the RME ones - simply do not support setting the master (or any other) level through software, and there is not much you can do about it. This is not a Windows bug. One could argue that reporting ENDPOINT_HARDWARE_SUPPORT_VOLUME is a bug, though, but that likely originates at the driver level, not in Windows itself.
The only solution I've found so far is hooking up a debugger (or adding a DLL hook) to the vendor-supplied software and watching the DeviceIoControl calls it makes (those are the ones used to talk to the hardware) while you set the volume in the vendor software. It's pretty hard to do this for every single card, but probably worth it for a couple of pro cards, especially for Audacity; for open-source audio software it's actually not that bad, and I can imagine some people being really happy if it could set the volume on their card. (At the time we were exclusively using an RME Multiface, I spent quite some time figuring out the DeviceIoControl calls, but in the end it was definitely worth it, as I could set the volume in dB for any point in the matrix.)
When I run this code:
MIXERLINE MixerLine;
memset( &MixerLine, 0, sizeof(MIXERLINE) );
MixerLine.cbStruct = sizeof(MIXERLINE);
MixerLine.dwComponentType = MIXERLINE_COMPONENTTYPE_SRC_WAVEOUT;
mmResult = mixerGetLineInfo( (HMIXEROBJ)m_dwMixerHandle, &MixerLine, MIXER_GETLINEINFOF_COMPONENTTYPE );
Under XP MixerLine.cChannels comes back as the number of channels that the sound card supports. Often 2, these days often many more.
Under Vista MixerLine.cChannels comes back as one.
I have then been getting a MIXERCONTROL_CONTROLTYPE_VOLUME control and setting the volume for each channel that is supported, setting the volume control to different levels on different channels so as to pan music back and forth between the speakers (left to right).
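Here's roughly what that looks like, continuing the snippet above (a trimmed fragment; error handling omitted, MixerLine and m_dwMixerHandle as in my code):

// Find the volume control on the line, then set each channel separately.
// Needs <windows.h>, <mmsystem.h> and <vector>; link winmm.lib.
MIXERLINECONTROLS lineControls = {0};
MIXERCONTROL volumeControl = {0};
lineControls.cbStruct = sizeof(lineControls);
lineControls.dwLineID = MixerLine.dwLineID;
lineControls.dwControlType = MIXERCONTROL_CONTROLTYPE_VOLUME;
lineControls.cControls = 1;
lineControls.cbmxctrl = sizeof(volumeControl);
lineControls.pamxctrl = &volumeControl;
mixerGetLineControls((HMIXEROBJ)m_dwMixerHandle, &lineControls,
                     MIXER_GETLINECONTROLSF_ONEBYTYPE);

// One entry per channel. Under XP cChannels is the real channel count;
// under Vista it comes back as 1, which is exactly the problem.
std::vector<MIXERCONTROLDETAILS_UNSIGNED> levels(MixerLine.cChannels);
levels[0].dwValue = volumeControl.Bounds.dwMaximum;          // left: full
if (MixerLine.cChannels > 1)
    levels[1].dwValue = volumeControl.Bounds.dwMaximum / 4;  // right: quiet

MIXERCONTROLDETAILS details = {0};
details.cbStruct = sizeof(details);
details.dwControlID = volumeControl.dwControlID;
details.cChannels = MixerLine.cChannels;
details.cbDetails = sizeof(MIXERCONTROLDETAILS_UNSIGNED);
details.paDetails = levels.data();
mixerSetControlDetails((HMIXEROBJ)m_dwMixerHandle, &details,
                       MIXER_SETCONTROLDETAILSF_VALUE);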
Obviously under Vista this approach isn't working since there is only one channel. I can set the volume and it is for both channels at the same time.
I tried to get a MIXERCONTROL_CONTROLTYPE_PAN for this device, but that was not a valid control.
So, the question for all you MMSystem experts is this: what type of control do I need to get to adjust the left/right balance? Alternately, is there a better way? I would like a solution that works with both XP and Vista.
Computer details: Running Vista Ultimate 32-bit SP1 and all the latest patches. Audio is provided by a Creative Audigy 2 ZS card with 4 speakers attached, which can all be properly addressed (controlled) through Vista's sound panel. The driver is the latest on Creative's site (SBAX_PCDRV_LB_2_18_0001). Vista's sound is not set to mono, and all channels are visible and controllable from the sound panel.
Running the program in "XP Compatibility Mode" does not change the behaviour of this problem.
If you run your application in "XP compatibility" mode, the mixer APIs should work much closer to the way they did in XP.
If you're not running in XP mode, then the mixer APIs reflect the mix format - if your PC's audio solution is configured for mono, then you'll see only one channel, but if your machine is configured for multichannel output, the mixer APIs should reflect that.
You can run the speaker tuning wizard to determine the # of channels configured for your audio solution.
Long time Microsoftie Larry Osterman has a blog where he discusses issues like this because he was on the team that redid all the audio stuff in Vista.
In the comments to this blog post he seems to indicate that application controlled balance is not something they see the need for:
CN, actually we're not aware of ANY situations in which it's appropriate for an application to control its balance. Having said that, we do support individual channel volumes for applications, but it is STRONGLY recommended that apps don't use it.
He also indicates that panning the sound from one side to the other can be done, but it is dependent on whether the hardware supports it:
Joku, we're exposing the volume controls that the audio solution implements. If it can do pan, we do pan (we actually expose separate sliders for the left and right channels).
So that explains why the MIXERCONTROL_CONTROLTYPE_PAN thing failed -- the audio hardware on your system does not support it.