Difference between XTestFakeButtonEvent & XSendEvent - c++

I'm trying to write a simple mouse clicker for Ubuntu via X11.
First I wrote this variant of the clicking procedure, based on XSendEvent:
#include <unistd.h>
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <X11/Xlib.h>
#include <X11/Xutil.h>

void mouseClick(int button)
{
    Display *display = XOpenDisplay(NULL);
    XEvent event;
    if(display == NULL)
    {
        std::cout << "clicking error 0" << std::endl;
        exit(EXIT_FAILURE);
    }
    memset(&event, 0x00, sizeof(event));
    event.type = ButtonPress;
    event.xbutton.button = button;
    event.xbutton.same_screen = True;
    XQueryPointer(display, RootWindow(display, DefaultScreen(display)), &event.xbutton.root, &event.xbutton.window, &event.xbutton.x_root, &event.xbutton.y_root, &event.xbutton.x, &event.xbutton.y, &event.xbutton.state);
    event.xbutton.subwindow = event.xbutton.window;
    while(event.xbutton.subwindow)
    {
        event.xbutton.window = event.xbutton.subwindow;
        XQueryPointer(display, event.xbutton.window, &event.xbutton.root, &event.xbutton.subwindow, &event.xbutton.x_root, &event.xbutton.y_root, &event.xbutton.x, &event.xbutton.y, &event.xbutton.state);
    }
    if(XSendEvent(display, PointerWindow, True, 0xfff, &event) == 0)
        std::cout << "clicking error 1" << std::endl;
    XFlush(display);
    event.type = ButtonRelease;
    event.xbutton.state = 0x100;
    if(XSendEvent(display, PointerWindow, True, 0xfff, &event) == 0)
        std::cout << "clicking error 2" << std::endl;
    XFlush(display);
    XCloseDisplay(display);
}
This code works fine with every application except Chrome (it works fine with Mozilla too).
So I wrote a second variant, based on XTestFakeButtonEvent:
#include <X11/extensions/XTest.h>

void SendClick(int button, Bool down)
{
    Display *display = XOpenDisplay(NULL);
    XTestFakeButtonEvent(display, button, down, CurrentTime);
    XFlush(display);
    XCloseDisplay(display);
}
And this code works fine everywhere, including Chrome.
Calling these functions is very simple:
// XSendEvent variant
mouseClick(1);
// XTestFakeButtonEvent variant
SendClick(1, true); // press lmb
SendClick(1, false); // release lmb
1: Help me understand what I'm doing wrong in the first variant (or what may be wrong in Chrome).
1.1: I think I'm sending the event to the wrong window when I open the display with XOpenDisplay(NULL). Does Chrome connect to the X11 server differently?
2: Is it a good idea to use the second variant in applications? It's pretty short and works fine with every app I have.
P.S. To compile this code you need to link the -lX11 and -lXtst libraries.
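For example (assuming the source file is saved as clicker.cpp):

g++ clicker.cpp -o clicker -lX11 -lXtst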

XSendEvent produces events that are marked as sent. Events sent by the server are not marked.
typedef struct {
    int type;
    unsigned long serial;
    Bool send_event;  // <----- here
    Display *display;
    Window window;
} XAnyEvent;
Some applications ignore events that have this flag set, for security reasons. Think of malware that somehow gets access to your X11 server — it can trick any application into doing whatever it wants by sending those events.
It is perfectly OK to use the second variant on your own machine, but it relies on an extension that can be disabled (again, for security reasons), so it will not necessarily work on other people's X11 servers.
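As an illustration (a minimal receiving-side sketch of my own, not code from the question), an Xlib client can test that flag on every event it reads and simply drop the synthetic ones, which is presumably the kind of filtering Chrome applies:

#include <X11/Xlib.h>
#include <cstdio>

int main()
{
    Display *display = XOpenDisplay(NULL);
    if (display == NULL)
        return 1;

    // Small test window that listens for button presses and releases.
    Window win = XCreateSimpleWindow(display, DefaultRootWindow(display),
                                     0, 0, 200, 200, 0, 0, 0);
    XSelectInput(display, win, ButtonPressMask | ButtonReleaseMask);
    XMapWindow(display, win);

    for (;;) {
        XEvent ev;
        XNextEvent(display, &ev);
        if (ev.xany.send_event) {
            // Delivered via XSendEvent: treat it as untrusted and skip it.
            std::puts("ignoring synthetic event");
            continue;
        }
        if (ev.type == ButtonPress)
            std::puts("real button press");
    }
}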

On XCB you can use the following function to verify whether an event was sent via the XSendEvent() / xcb_send_event() API:
static bool fromSendEvent(const void *event)
{
    // From X11 protocol: Every event contains an 8-bit type code. The most
    // significant bit in this code is set if the event was generated from
    // a SendEvent request.
    const xcb_generic_event_t *e = reinterpret_cast<const xcb_generic_event_t *>(event);
    return (e->response_type & 0x80) != 0;
}
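A short usage sketch (the window setup below is my own assumption, just to show where the check goes): create a window that listens for button presses and, using the fromSendEvent() helper above, report whether each press is real or synthetic.

#include <xcb/xcb.h>
#include <cstdint>
#include <cstdio>
#include <cstdlib>

int main()
{
    xcb_connection_t *conn = xcb_connect(nullptr, nullptr);
    if (xcb_connection_has_error(conn))
        return EXIT_FAILURE;

    xcb_screen_t *screen = xcb_setup_roots_iterator(xcb_get_setup(conn)).data;

    // Small test window that only listens for button presses.
    xcb_window_t win = xcb_generate_id(conn);
    const uint32_t mask = XCB_CW_EVENT_MASK;
    const uint32_t values[] = { XCB_EVENT_MASK_BUTTON_PRESS };
    xcb_create_window(conn, XCB_COPY_FROM_PARENT, win, screen->root,
                      0, 0, 200, 200, 0, XCB_WINDOW_CLASS_INPUT_OUTPUT,
                      screen->root_visual, mask, values);
    xcb_map_window(conn, win);
    xcb_flush(conn);

    while (xcb_generic_event_t *ev = xcb_wait_for_event(conn)) {
        if ((ev->response_type & ~0x80) == XCB_BUTTON_PRESS) {
            if (fromSendEvent(ev))              // helper defined above
                std::puts("synthetic (SendEvent) button press - ignoring");
            else
                std::puts("real button press");
        }
        free(ev);
    }
    xcb_disconnect(conn);
    return 0;
}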
AFAICT, there is no way to tell whether an event was sent via the XTest extension.
You should use XTest, as it will work better; XSendEvent knows nothing about the internal X server state. From the XSendEvent manual:
"The contents of the event are otherwise unaltered and unchecked by the X server except to force send_event to True".
So with XSendEvent you might run into unexpected issues in some situations.

Although I'm not using Xlib directly, but rather a Python Xlib wrapper library as a proxy to Xlib, the first approach currently works on every window I have open on my desktop, except for IntelliJ.
In the first approach, you are sending the event directly to a target window, and as others have noted, your event is also marked (tainted) with an attribute identifying it as a simulated one. The receiving window may still act on it just the same, as many application windows do.
With the second approach, however, you are emulating the actual thing happening: per my understanding, it is virtually indistinguishable from a user-initiated event. The event goes through the full X11 input-handling flow (rather than being blindly dispatched directly to the target window), which means it will trickle down to the window (or GNOME desktop widget) under the pointer, just as in the natural flow of real user events.
As such, the second approach appears to be more broadly applicable than the first: it also has the desired effect for windows that choose not to act on events sent through the first approach, as well as for GNOME desktop elements that are not ordinary windows per se (such as the language and power widgets). You supply the coordinates without any mention of a window, and the click goes through.
If I had to come up with an explanation for this duality of routes, I would say that XSendEvent is a general-purpose event-sending facility, whereas XTEST specifically provides the means to simulate user input events.
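To make that concrete, here is a small sketch of a "click at coordinates" helper built from the XTest calls in the question plus XTestFakeMotionEvent (the helper itself is my own, not code from either answer):

#include <X11/Xlib.h>
#include <X11/extensions/XTest.h>

// Click the given button at screen coordinates (x, y).
void clickAt(int x, int y, int button)
{
    Display *display = XOpenDisplay(NULL);
    if (display == NULL)
        return;

    // -1 means "the screen the pointer is currently on".
    XTestFakeMotionEvent(display, -1, x, y, CurrentTime);
    XTestFakeButtonEvent(display, button, True, CurrentTime);
    XTestFakeButtonEvent(display, button, False, CurrentTime);
    XFlush(display);
    XCloseDisplay(display);
}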

Related

How to imitate mouse events with full functionality in Qt

I am trying to remote-control a Qt application over UDP from my host computer.
Currently, the remote-controlled computer runs a Qt application that receives messages and creates mouse events. The host computer sends the type, button, modifier and position (between 0 and 1) whenever there is a new mouse event. I am trying to create events with these values on the remote-controlled computer and make them behave just as if they had been created by a user.
void MainWindow::slotNewMouseEvent(MouseMsg msg)
{
    auto point = scalePoint(msg.x, msg.y); // basic conversion for coordinates
    switch(msg.type)
    {
    case QEvent::MouseButtonPress:
        QTest::mousePress(this->childAt(point), (Qt::MouseButton) msg.button,
                          (Qt::KeyboardModifiers) msg.modifier);
        break;
    case QEvent::MouseButtonRelease:
        QTest::mouseRelease(this->childAt(point), (Qt::MouseButton) msg.button,
                            (Qt::KeyboardModifiers) msg.modifier);
        break;
    case QEvent::MouseButtonDblClick:
        QTest::mouseDClick(this->childAt(point), (Qt::MouseButton) msg.button,
                           (Qt::KeyboardModifiers) msg.modifier);
        break;
    case QEvent::MouseMove:
        QTest::mouseMove(this->childAt(point), point);
        break;
    default:
        break;
    }
}
void MainWindow::slotNewMouseEvent(MouseMsg msg)
{
    auto point = scalePoint(msg.x, msg.y); // basic conversion for coordinates
    QMouseEvent *evt = new QMouseEvent((QEvent::Type) msg.type,
                                       point,
                                       (Qt::MouseButton) msg.button,
                                       (Qt::MouseButtons) msg.buttons,
                                       (Qt::KeyboardModifiers) msg.modifier);
    qApp->postEvent(this->childAt(point), evt);
    // I was expecting functionality like this to be handled automatically
    if(evt->type() == QEvent::MouseButtonDblClick || evt->type() == QEvent::MouseButtonPress)
    {
        this->childAt(point)->setFocus();
    }
    qApp->processEvents();
}
Both of these functions give almost the same result, which is not what I want. I was expecting that sending newly created events to the top widget on the screen would work. However, some functionality, such as setting focus on the clicked element, opening a context menu with a right click, or selecting the elements of a combobox (whose popup is not a widget), does not work.
Is there a solution that keeps these events inside the remote-controlled computer's Qt application? I would prefer not to use system libraries like "windows.h", to avoid harming the system and to keep my application cross-platform. Do I have to use them?

Qt5 Not Registering Touch Events

I'm working on determining if a certain touchscreen will be compatible with an application and recently got a loaner model of an Elo 2402L touchscreen. I've installed the driver the company provides and was able to see multi-touch events using the evtest utility (parser for /dev/input/eventX).
The thing is that I'm running Scientific Linux 6.4, which uses Linux kernel 2.6.32. I've seen a lot of mixed information on touchscreen compatibility for Linux kernels before 3.x.x. Elo says that their driver only supports single-touch for 2.6.32. Also, I've seen people say that the majority of the compatibility issues with touch events in this kernel version are with Xorg interfaces.
I developed a very simple Qt5 application to test whether Qt could detect the touch events or not, because I'm not sure whether Qt applications are X-based and if they read events directly from /dev/input or something else.
A simple mouse event handler correctly registers mouse events, but I also created a simple touch event handler and nothing happens when I touch the main screen. There is a beep, because part of the driver that Elo provides beeps when the screen is touched, so I know that SOMETHING is registering the touch, but neither the desktop nor this application seems to recognize the touch event.
Also, yes, the WA_AcceptTouchEvents attribute is set to true in the window's constructor.
I have a simple mainwindow.h:
...
protected:
int touchEvent(QTouchEvent *ev);
...
And mainwindow.cpp:
MainWindow::MainWindow(QWidget *parent) {
    ...
    setAttribute(Qt::WA_AcceptTouchEvents, true);
    touchPoints = 0;
}
...
int MainWindow::touchEvent(QTouchEvent *ev) {
    switch(ev->type()) {
    case QEvent::TouchBegin:
        touchPoints++;
        break;
    case QEvent::TouchEnd:
        touchPoints--;
        break;
    }
    ui->statusBar->showMessage("Touch Points: " + touchPoints);
}
Is there something wrong with the way I'm using the touch event handler? Or is there some issue with the device itself? Does Qt read input events directly from /dev/input, or does it get its input events from X?
Very confused here, as I haven't used Qt before and want to narrow down the cause before I say that it's the device causing the issue.
Also, if anyone has any insight into the device / kernel compatibility issue, that would be extremely helpful.
The QTouchEvent documentation says:
Touch events occur when pressing, releasing, or moving one or more touch points on a touch device (such as a touch-screen or track-pad). To receive touch events, widgets have to have the Qt::WA_AcceptTouchEvents attribute set and graphics items need to have the acceptTouchEvents attribute set to true.
Probably you just need to call setAttribute(Qt::WA_AcceptTouchEvents, true) inside the MainWindow constructor.
Is there something wrong with the way I'm using the touch event handler?
There is no touch event handler. If you change:
int touchEvent(QTouchEvent *ev);
to:
int touchEvent(QTouchEvent *ev) override;
(which you should always do when you are trying to override virtual functions so you can catch exactly this kind of mistake), you'll see that there is no such function for you to override. What you need to override is the event() handler:
protected:
bool event(QEvent *ev) override;
You need to check for touch events there:
bool MainWindow::event(QEvent *ev)
{
    switch(ev->type()) {
    case QEvent::TouchBegin:
        touchPoints++;
        break;
    case QEvent::TouchEnd:
        touchPoints--;
        break;
    default:
        return QMainWindow::event(ev);
    }
    ui->statusBar->showMessage(QString("Touch Points: %1").arg(touchPoints));
    return true;
}
However, it might be better to work with gestures instead of touch events. But I don't know what kind of application you're writing. If you wanted to let Qt recognize gestures rather than implementing them yourself through touch events, you would first grab the gestures you want, in this case pinching:
setAttribute(Qt::WA_AcceptTouchEvents);
grabGesture(Qt::PinchGesture);
and then handle it:
bool MainWindow::event(QEvent *e)
{
    if (e->type() != QEvent::Gesture) {
        return QMainWindow::event(e);
    }
    auto* gestEv = static_cast<QGestureEvent*>(e);
    if (auto* gest = gestEv->gesture(Qt::PinchGesture)) {
        auto* pinchGest = static_cast<QPinchGesture*>(gest);
        auto sf = pinchGest->scaleFactor();
        // You could use the pinch scale factor here to zoom an image,
        // for example.
        e->accept();
        return true;
    }
    return QMainWindow::event(e);
}
Working with gestures instead of touch events has the advantage of using the platform's gesture recognition facilities, like those of Android and iOS. But again, I don't know what kind of application you're writing or what kind of platform you're working on.

C++ Global Hotkeys with platform APIs

I'm working on an application for taking screenshots on Windows, OS X and Linux in C++/Qt. Now I need to set global hotkeys, so the user can take screenshots while the application is running in the background. I tried Qxt and UGlobalHotkey, which are both Qt libraries, but neither of them seemed to work.
I tried to implement it for OS X with Carbon (tutorial), but I need to call a class member function, which just doesn't work. Could someone provide me with an example? You can find my code here. The function I need to call is new_screenshot().
Or is there any other way to achieve something like this? I really need my application to take a screenshot from the background; otherwise it's pretty useless (yes, I should probably have implemented this at the very beginning to see if it even works). Would it perhaps be better to have a separate client for every platform (Cocoa/Swift for OS X, GTK for Linux, a C# client for Windows)? I have often thought about this over the past few days.
Do I understand correctly that you want to call new_screenshot from the hot key event handler? If so, InstallApplicationEventHandler lets you pass a pointer to user data as its fourth argument. Pass a pointer to your MainWindow instance (based on code from the tutorial):
MainWindow *mainWindow = ... // get main window somehow
InstallApplicationEventHandler(&MyHotKeyHandler,1,&eventType,mainWindow,NULL);
Then you can use it in the event handler.
OSStatus MyHotKeyHandler(EventHandlerCallRef nextHandler, EventRef theEvent, void *userData)
{
    // Do something once the key is pressed
    static_cast<MainWindow*>(userData)->new_screenshot();
    return noErr;
}
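For reference, here is a sketch of the registration side built on the same Carbon calls the tutorial uses (the key code 49 and the Cmd+Shift modifiers are only placeholder values, and the wrapper function is my own):

#include <Carbon/Carbon.h>

void registerScreenshotHotKey(MainWindow *mainWindow)
{
    EventTypeSpec eventType;
    eventType.eventClass = kEventClassKeyboard;
    eventType.eventKind  = kEventHotKeyPressed;
    InstallApplicationEventHandler(&MyHotKeyHandler, 1, &eventType, mainWindow, NULL);

    EventHotKeyID hotKeyID;
    hotKeyID.signature = 'htk1';
    hotKeyID.id = 1;

    EventHotKeyRef hotKeyRef;
    RegisterEventHotKey(49, cmdKey | shiftKey, hotKeyID,
                        GetApplicationEventTarget(), 0, &hotKeyRef);
}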
I did something similar in the past with MFC and the Win32 API, so it only works on Windows, but pressing ALT+F10 was able to hide/show a window:
void CWinHideDlg::OnButtonActive()
{
    CString tmp;
    GetDlgItemText(IDC_BUTTON_ACTIVE,tmp);
    if(0 == strcmp(tmp.GetBuffer(tmp.GetLength()),"Activate"))
    {
        m_myAtom=GlobalAddAtom("MY_GLOBAL_HOT_HIDE_KEY");
        int err=RegisterHotKey(this->GetSafeHwnd(),m_myAtom,MOD_ALT,VK_F10);
        SetDlgItemText(IDC_BUTTON_ACTIVE,"Stop");
        CButton *pBtn = (CButton *)GetDlgItem(IDC_BUTTON_UNHIDE);
        pBtn->EnableWindow(TRUE);
        SetDlgItemText(IDC_STATIC_INFO,"Set the mouse over the window \nand press ALT + F10 to hide it...");
    }
    else
    {
        UnregisterHotKey(this->GetSafeHwnd(),m_myAtom);
        GlobalDeleteAtom(m_myAtom);
        CButton *pBtn = (CButton *)GetDlgItem(IDC_BUTTON_UNHIDE);
        pBtn->EnableWindow(FALSE);
        SetDlgItemText(IDC_BUTTON_ACTIVE,"Activate");
    }
}
Basically this code activates/deactivates the hot key ALT+F10. Once it is activated, you can hide/unhide a running window on the system by placing the mouse pointer over the window and pressing ALT+F10.
This is from the WindowProc function:
if(message == WM_HOTKEY)
{
    CString tmp;
    POINT pc;
    GetCursorPos(&pc);
    if(GetAsyncKeyState(VK_F10))
    {
        HWND hwnd=::WindowFromPoint(pc);
        if(hwnd)
        {
            tmp.Format("%08Xh",hwnd);
            m_HideWins.InsertString(m_HideWins.GetCount(),tmp);
            ::ShowWindow(hwnd,SW_HIDE);
        }
    }
}
You can use the code to register your own hot key and use it to take a screenshot...
Hope it helps...

How come allegro automatically handles minimize button, but not close button?

Here is a sample from the Allegro 5 tutorial (to see the original sample, follow the link; I've simplified it a bit for illustrative purposes):
#include <allegro5/allegro.h>

int main(int argc, char **argv)
{
    ALLEGRO_DISPLAY *display = NULL;
    ALLEGRO_EVENT_QUEUE *event_queue = NULL;
    al_init();
    display = al_create_display(640, 480);
    event_queue = al_create_event_queue();
    al_register_event_source(event_queue, al_get_display_event_source(display));
    al_clear_to_color(al_map_rgb(0,0,0));
    al_flip_display();
    while(1)
    {
        ALLEGRO_EVENT ev;
        ALLEGRO_TIMEOUT timeout;
        al_init_timeout(&timeout, 0.06);
        bool get_event = al_wait_for_event_until(event_queue, &ev, &timeout);
        //-->// if(get_event && ev.type == ALLEGRO_EVENT_DISPLAY_CLOSE) {
        //-->// break;
        //-->// }
        al_clear_to_color(al_map_rgb(0,0,0));
        al_flip_display();
    }
    al_destroy_display(display);
    al_destroy_event_queue(event_queue);
    return 0;
}
If I don't manually check for the ALLEGRO_EVENT_DISPLAY_CLOSE, then I can't close the window or terminate the program (without killing the process through task manager). I understand this. But in this case I don't understand how the minimize button works without me manually handling it. Can someone please explain?
Disclaimer: I don't know Allegro.
Minimizing a window at the most basic level only involves work from the process that deals with the windows (the Window Manager), not the process itself.
Terminating a program usually requires files to be closed, memory to be freed, or something else that only the process itself can do.
The biggest reason that you must handle it yourself via an event is that closing (destroying) a window invalidates the ALLEGRO_DISPLAY * pointer. The request to terminate the window comes from a different thread, so it would be unsafe to destroy it immediately. Allowing you to process it yourself on your own time is safe and easy, and fits in with the event model that Allegro 5 uses for all other things.
There are other ways to solve the problem, but they are no more simple than this method and don't really have any major advantages.
I don't know anything about Allegro, but minimizing windows is generally handled by the window manager without any further intervention by your program. The main window is put into a "minimized" state and your program continues running in the background without a visible window.
You can check whether your app is being minimized by intercepting specific window messages (WM_ACTIVATEAPP, WM_ACTIVATE or WM_SIZE). Maybe Allegro provides something like that, too.
In contrast, closing the window does need to be handled by your program. Clicking the X simply sends a message to the window (WM_CLOSE) telling it that the user has clicked it, and you have to respond accordingly (save state, quit the program, or even prevent the close).
At least that's how the normal Win32 API works, and Allegro seems to work the same way.
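For illustration, this is roughly what "responding accordingly" looks like in a plain Win32 window procedure (a minimal sketch of the underlying mechanism, not Allegro code):

#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_CLOSE:
        // The user clicked the X: save state here, ask for confirmation,
        // or simply return without destroying the window to ignore it.
        DestroyWindow(hwnd);
        return 0;
    case WM_DESTROY:
        PostQuitMessage(0);   // ends the message loop
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}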

C++ subclassing a form to trap F1 - F12 keys

The main form opens a child form that has a handful of button CONTROLs on it. I need to trap keyboard events, so I subclassed one of the controls. All is good until the control loses focus, of course.
Ideally, as long as this child form is open I would like to assign the focus to this control and thus trap all the keystrokes, no matter where the user clicks.
I suspect superclassing might be a better way to go but I am not as familiar with it.
Perhaps what I should do is use accelerators on the main form?
ADDED:
I should mention that the main form has a large listview control that is subclassed to recover up/down arrows and mousewheel etc.
The traditional way is to install a keyboard hook (SetWindowsHookEx), but you need to inject it into every application, and it doesn't work across the 32/64-bit boundary.
What you can do quite easily, however, is poll the keyboard with GetKeyboardState on a timer and check whether your F1-F12 keys are pressed. The timer can tick as slowly as every 100 ms and it will catch almost everything while using virtually no resources.
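A rough sketch of that polling approach (the timer registration and the handler name are my own assumptions, not code from the question):

#include <windows.h>

// Timer callback, registered once with e.g. SetTimer(hwnd, 1, 100, PollFunctionKeys);
void CALLBACK PollFunctionKeys(HWND, UINT, UINT_PTR, DWORD)
{
    BYTE keys[256];
    if (!GetKeyboardState(keys))
        return;

    for (int vk = VK_F1; vk <= VK_F12; ++vk)
    {
        if (keys[vk] & 0x80)    // high bit set: key is currently down
        {
            // React to the function key here.
        }
    }
}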
Assuming that this is within Windows and the Win32 API, one option is to look for messages in your main GetMessage, TranslateMessage, DispatchMessage loop. You can special-case any message within this loop, irrespective of which window it's aimed at.
You should probably use IsChild to check that the message is intended for a control on your main window (as opposed to some dialog box or message box that might be displayed separately). Getting the logic right can be fiddly, too. It would be best to only intercept messages when you know your control has lost the focus, and only intercept the exact messages you need to.
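A minimal sketch of that idea (hMainWnd stands in for the main form's handle and OnFunctionKey is a made-up helper):

MSG msg;
while (GetMessage(&msg, NULL, 0, 0))
{
    if (msg.message == WM_KEYDOWN &&
        msg.wParam >= VK_F1 && msg.wParam <= VK_F12 &&
        IsChild(hMainWnd, msg.hwnd))
    {
        // Special-case F1..F12 aimed at any control of the main window.
        OnFunctionKey(int(msg.wParam - VK_F1) + 1);
        continue;
    }
    TranslateMessage(&msg);
    DispatchMessage(&msg);
}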
Years ago, I wrote a library message loop with a lot of this built in. I had a simple manager class that held pointers to instances of my own little window class. The loop knew the difference between dialogs and normal windows, gave each window class a chance to spy on its children's messages, and so on. You won't be able to run this directly and the conventions are a bit strange, but you might find this useful...
int c_Window_List::Message_Loop (void)
{
    MSG msg;
    bool l_Handled;
    while (GetMessage (&msg, NULL, 0, 0))
    {
        l_Handled = false;
        c_Windows::c_Cursor l_Cursor;
        bool ok;
        for (ok = l_Cursor.Find_First (g_Windows); ok; ok = l_Cursor.Step_Next ())
        {
            if (IsChild (l_Cursor.Key (), msg.hwnd))
            {
                if (l_Cursor.Data ().f_Accelerators != NULL)
                {
                    l_Handled = TranslateAccelerator (l_Cursor.Key (), l_Cursor.Data ().f_Accelerators, &msg);
                    if (l_Handled) break;
                }
                if (l_Cursor.Data ().f_Manager != 0)
                {
                    l_Handled = l_Cursor.Data ().f_Manager->Spy_Msg (l_Cursor.Key (), msg);
                }
                if (l_Handled) break;
                if (l_Cursor.Data ().f_Is_Dialog)
                {
                    l_Handled = IsDialogMessage (l_Cursor.Key (), &msg);
                    if (l_Handled) break;
                }
            }
        }
        if (!l_Handled)
        {
            TranslateMessage (&msg);
            DispatchMessage (&msg);
        }
        if (g_Windows.Size () == 0)
        {
            // When all windows have closed, exit
            PostQuitMessage (0);
        }
    }
    return msg.wParam;
}
The f_ prefixes mean field - I picked up the m_ convention later, but this code hasn't been revisited in a very long time. f_Manager in particular points to an instance of my c_Window_Base class. The c_Cursor class is a kind of iterator, used to step through all the windows stored in the g_Windows variable (actually a static class member rather than a global).