I made my first GUI for Unix with Qt. At the end of my GUI I start another program, which is based on Python. There the user needs to press one button to complete the process.
Now I want to automate that last click. I already read a little about POSIX, but I'm not quite sure it can help me here. I was thinking that, if I can't access the program directly, maybe I could at least move the mouse to a certain position and simulate a click? I know this solution is very dirty, but it might work because I will be using the GUI on one particular touch screen only.
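For example, something along these lines is what I imagine, assuming an X11 desktop (an untested sketch using the XTest extension, which is also what tools like xdotool build on; link with -lX11 -lXtst):

```cpp
// Minimal sketch: move the pointer and fake a left click via XTest (X11 only).
// The coordinates are placeholders for wherever the button sits on the touch screen.
#include <X11/Xlib.h>
#include <X11/extensions/XTest.h>

int main()
{
    Display *dpy = XOpenDisplay(nullptr);
    if (!dpy)
        return 1;

    const int x = 400, y = 300;                        // position of the button on screen
    XTestFakeMotionEvent(dpy, -1, x, y, CurrentTime);  // -1 = screen the pointer is on
    XTestFakeButtonEvent(dpy, 1, True,  CurrentTime);  // button 1 press
    XTestFakeButtonEvent(dpy, 1, False, CurrentTime);  // button 1 release
    XFlush(dpy);

    XCloseDisplay(dpy);
    return 0;
}
```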
I would like a Python option, but that doesn't seem likely. I looked into C++, but I'm not intimately familiar with it and the methods I tried were not working. I need to be able to move the mouse (which I have been able to do) inside the game (which never works), and to press and/or hold keys. AutoHotkey doesn't work for moving the mouse inside of popup dialogs, only for controlling recoil and such. I have permission from the admin; I admin on one of his other servers. I'm not looking to release hacks for the game, it's just a project I dabbled with for a while and would like to see through.
Does anyone have experience with this or ideas as to how I can simulate input from mouse or keyboard?
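For reference, this is roughly the kind of SendInput-based approach I've been trying (a simplified sketch; it moves the cursor fine on the desktop, but whether a game honours injected input depends on how it reads input):

```cpp
// Sketch: synthesize a relative mouse move and a key tap with SendInput (Win32).
#include <windows.h>

void MoveMouseRelative(long dx, long dy)
{
    INPUT in = {};
    in.type = INPUT_MOUSE;
    in.mi.dx = dx;
    in.mi.dy = dy;
    in.mi.dwFlags = MOUSEEVENTF_MOVE;           // relative move
    SendInput(1, &in, sizeof(in));
}

void TapKey(WORD vk)
{
    INPUT in[2] = {};
    in[0].type = INPUT_KEYBOARD;
    in[0].ki.wVk = vk;                          // key down
    in[1].type = INPUT_KEYBOARD;
    in[1].ki.wVk = vk;
    in[1].ki.dwFlags = KEYEVENTF_KEYUP;         // key up
    SendInput(2, in, sizeof(INPUT));
}

int main()
{
    Sleep(3000);               // time to focus the game window manually
    MoveMouseRelative(50, 0);  // nudge the view to the right
    TapKey('W');               // tap the W key
    return 0;
}
```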
I have experience in AHK and game bot making.
https://www.youtube.com/user/FloowSnaake/videos
I'm currently trying to develop an application that uses two mice to perform completely different actions in Windows. However, after having spent a couple of days on it, I'm starting to wonder whether what I want to do is even possible using the Windows APIs. As I'm far from being an expert in the Windows APIs, I would like your opinions on whether I'm going in the right direction or whether I should try to do it completely differently (maybe by developing a driver?).
Here's what I want to do: imagine two mice are plugged into my computer. I would like to use the first one as a regular mouse, while the second one would be used to perform completely different actions. For instance, clicking the second mouse's left button would open a new tab in Firefox (sending a CTRL+T command to the Firefox app), and clicking its right button would send a CTRL+C. Then, moving the second mouse upwards would zoom in, and moving it downwards would zoom the Firefox page out (so the mouse cursor on screen would remain fixed while doing that!). The idea is also to recognize which application is currently being used (which one has mouse/keyboard focus) and perform different actions depending on it. So, for instance, the second mouse's left click would generate a CTRL+T in Firefox, a CTRL+B in Word and a CTRL+S in Notepad (in fact, the idea is to parameterize those actions at will). All of that while the first mouse continues to act just as a regular mouse.
So, it's important to understand that my application will run in the background and will never, per se, interact directly with the user (it has no GUI, as it doesn't require the user to input anything). Its purpose is just to modify the mouse input coming from the second mouse and send other input (messages) to the application currently being used.
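For concreteness, the kind of per-application mapping I have in mind is roughly this sketch (the executable names and shortcuts are just examples, and it assumes the foreground window identifies the target application):

```cpp
// Sketch: pick a shortcut based on which process owns the foreground window,
// then inject it with SendInput. Names and mappings are illustrative only.
#include <windows.h>
#include <cwctype>
#include <map>
#include <string>

std::wstring ForegroundExeName()
{
    HWND hwnd = GetForegroundWindow();
    DWORD pid = 0;
    GetWindowThreadProcessId(hwnd, &pid);

    std::wstring name;
    HANDLE proc = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, pid);
    if (proc) {
        wchar_t path[MAX_PATH] = {};
        DWORD len = MAX_PATH;
        if (QueryFullProcessImageNameW(proc, 0, path, &len)) {
            std::wstring full(path);
            name = full.substr(full.find_last_of(L"\\/") + 1);
            for (auto &c : name) c = towlower(c);   // compare case-insensitively
        }
        CloseHandle(proc);
    }
    return name;
}

void SendCtrlPlus(WORD vk)
{
    INPUT in[4] = {};
    for (auto &i : in) i.type = INPUT_KEYBOARD;
    in[0].ki.wVk = VK_CONTROL;                            // Ctrl down
    in[1].ki.wVk = vk;                                    // key down
    in[2].ki.wVk = vk;         in[2].ki.dwFlags = KEYEVENTF_KEYUP;
    in[3].ki.wVk = VK_CONTROL; in[3].ki.dwFlags = KEYEVENTF_KEYUP;
    SendInput(4, in, sizeof(INPUT));
}

void OnSecondMouseLeftClick()
{
    // Illustrative mapping: which Ctrl+<key> to send per foreground application.
    static const std::map<std::wstring, WORD> mapping = {
        { L"firefox.exe", 'T' },
        { L"winword.exe", 'B' },
        { L"notepad.exe", 'S' },
    };
    auto it = mapping.find(ForegroundExeName());
    if (it != mapping.end())
        SendCtrlPlus(it->second);
}
```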
So far, I'm using raw input. I'm able to differentiate which mouse is being used, and I'm able to send (application-specific) messages to other applications when an action is performed on the second mouse. I'm even able to lock the cursor on screen when the second mouse is moved (so that only the corresponding message is sent to the application of interest!). However, I'm unable to block the button messages sent by the second mouse to the app with the mouse focus. Hence, when clicking the second mouse's right button in Notepad, for instance, my specific command ("aaa" for the moment, as I'm just trying with letters for the sake of simplicity) is sent (and displayed in the Notepad window) BUT the Notepad context menu opens as well… (so it evidently also receives a WM_RBUTTONDOWN message).
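For context, the raw-input side of what I have is roughly along these lines (heavily simplified; the message sending and cursor locking are omitted, and identifying which handle is the second mouse is done elsewhere):

```cpp
// Sketch: register for raw mouse input and tell devices apart via the hDevice handle.
#include <windows.h>
#include <vector>

static HANDLE g_secondMouse = nullptr;   // set once the second mouse has been identified

void RegisterForRawMouse(HWND hwnd)
{
    RAWINPUTDEVICE rid = {};
    rid.usUsagePage = 0x01;              // generic desktop controls
    rid.usUsage     = 0x02;              // mouse
    rid.dwFlags     = RIDEV_INPUTSINK;   // receive input even when not in the foreground
    rid.hwndTarget  = hwnd;
    RegisterRawInputDevices(&rid, 1, sizeof(rid));
}

void HandleWmInput(LPARAM lParam)
{
    UINT size = 0;
    GetRawInputData((HRAWINPUT)lParam, RID_INPUT, nullptr, &size, sizeof(RAWINPUTHEADER));
    std::vector<BYTE> buffer(size);
    if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, buffer.data(), &size,
                        sizeof(RAWINPUTHEADER)) != size)
        return;

    const RAWINPUT *raw = reinterpret_cast<const RAWINPUT *>(buffer.data());
    if (raw->header.dwType != RIM_TYPEMOUSE)
        return;

    const bool fromSecondMouse = (raw->header.hDevice == g_secondMouse);
    if (fromSecondMouse && (raw->data.mouse.usButtonFlags & RI_MOUSE_LEFT_BUTTON_DOWN)) {
        // Map to the application-specific action here (e.g. send Ctrl+T to Firefox).
    }
}
```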
My question is then: how can I block the mouse button messages (WM_RBUTTONDOWN, and so on…) from being received by other applications when the second mouse is used? Is it even possible? The problem is that (in my understanding) those messages have higher priority than the WM_INPUT messages… So by the time I read the WM_INPUT message in my application and detect that the button was pressed on the second mouse, it's already too late and the WM_xBUTTONDOWN has already been sent!
I know that using mouse hooks I could block those, but then there is no way to differentiate the origin of the message (and of course, detecting which mouse is used is the main point of my application).
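For reference, the blocking itself is easy with a low-level mouse hook, something like the sketch below, but the callback carries no device handle, which is exactly the problem:

```cpp
// Sketch: a low-level mouse hook that can swallow right-button events.
// Note: the hook callback has no device handle, so on its own it cannot
// tell which physical mouse generated the event.
#include <windows.h>

static HHOOK g_hook = nullptr;

LRESULT CALLBACK LowLevelMouseProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION && (wParam == WM_RBUTTONDOWN || wParam == WM_RBUTTONUP)) {
        bool shouldBlock = false;   // would have to be decided from outside information
        if (shouldBlock)
            return 1;               // non-zero return swallows the event
    }
    return CallNextHookEx(g_hook, code, wParam, lParam);
}

void InstallHook()
{
    g_hook = SetWindowsHookEx(WH_MOUSE_LL, LowLevelMouseProc, GetModuleHandle(nullptr), 0);
}
```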
I've also tried using DirectInput8, but it no longer supports the use of several mice (the documentation specifically says to use raw input for this).
So, I guess that by now you've gathered that I'm quite lost and have no idea whether what I want to do is even achievable. Any help would be more than welcome.
Looking forward to reading your replies.
I was about to suggest hooks, but then I read that you looked into that already. I guess, the last resort for your problem would be to write your own driver.
After Windows has installed the second mouse in its usual way, you can go to the Device Manager and change the driver of the mouse you want to "repurpose" to your own driver.
That said, developing a driver is probably not something one does as a side task in a project.
Is it possible to create a keyboard shortcut to switch between the monitor and portion selection in this Wacom preferences window, via a C++ console program?
Sorry if this is poorly worded, I've had trouble trying to find the right words to search for ways to do it.
I think it should be possible, although a bit tedious. You should be able to use the Windows API and EnumWindows/EnumDesktopWindows to identify the respective application window and its controls (which are also windows).
You should identify the window title and class IDs for the app window and for the checkbox/button controls; then, when you enumerate all the desktop windows, you can identify the ones you are interested in.
Then you can use the SendMessage() API to send messages to the controls (Windows) of interest to manipulate them.
It's a bit tedious, but sounds possible.
An example of use here to get an idea:
http://www.cplusplus.com/forum/windows/25280/
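A minimal sketch of that approach follows; the window title, class name and control caption below are placeholders, not the real Wacom ones (you would discover those with a tool like Spy++):

```cpp
// Sketch: find a window by title, find a button control inside it, and "click" it.
#include <windows.h>
#include <string>

static HWND g_target = nullptr;

BOOL CALLBACK FindTargetWindow(HWND hwnd, LPARAM)
{
    wchar_t title[256] = {};
    GetWindowTextW(hwnd, title, 256);
    if (std::wstring(title).find(L"Wacom Tablet Properties") != std::wstring::npos) {
        g_target = hwnd;
        return FALSE;                     // stop enumerating
    }
    return TRUE;
}

int main()
{
    EnumWindows(FindTargetWindow, 0);
    if (!g_target)
        return 1;

    // Placeholder: find a child button by its class and caption, then press it.
    HWND button = FindWindowExW(g_target, nullptr, L"Button", L"Portion");
    if (button)
        SendMessageW(button, BM_CLICK, 0, 0);
    return 0;
}
```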
I'm working on a clone of Yakuake and, if you have used it, you'd know that one of its features is stealing the focus for convenience.
Basically, you hit the "show" hotkey, the app appears and you can write on it.
You could be doing anything with any app (with Yakuake hidden), but as soon as you hit the hotkey, Yakuake appears and steals the focus. I want to do the same with my app.
I know there are some window manager rules that prevent applications from doing this, but Yakuake does it, so why am I not able to?
Also, this application is meant to be compatible with Windows, Linux and Mac, so no KDE or Gnome or < insert_your_favourite_window_manager_here > hacks; I won't go the detect-WM-and-do-hack way.
PS: I'm doing that app in C++ and Qt4.
EDIT:
Just to make it clear, I'm not asking for code (but if you actually have some example, I'd appreciate it). I'm asking for a way to do it: what should I do to make the WM assign the focus to my app? Is there any standard way of doing so?
There is the Qt::WindowStaysOnTopHint flag...
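For reference, setting that hint on a Qt widget looks something like the sketch below; note that it only keeps the window above others, it does not by itself grant keyboard focus:

```cpp
#include <QWidget>

// Sketch: keep a window above others. Changing window flags hides the widget,
// so show() must be called again afterwards.
void keepOnTop(QWidget *w)
{
    w->setWindowFlags(w->windowFlags() | Qt::WindowStaysOnTopHint);
    w->show();
}
```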
The solution is simpler than I thought. I ran an animation with a duration of 0 seconds and, at the end of the animation, I simply set the focus. That did the trick.
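A rough sketch of that trick (the animated property is arbitrary; the point is that focus is requested from the animation's finished() signal; this uses Qt 5's connect syntax, with Qt 4 you would connect finished() to a slot instead):

```cpp
// Sketch: zero-length animation whose finished() signal requests focus.
#include <QObject>
#include <QPropertyAnimation>
#include <QWidget>

void showAndFocus(QWidget *w)
{
    w->show();

    auto *anim = new QPropertyAnimation(w, "windowOpacity", w);
    anim->setDuration(0);                 // effectively instantaneous
    anim->setStartValue(1.0);
    anim->setEndValue(1.0);
    QObject::connect(anim, &QPropertyAnimation::finished, [w]() {
        w->raise();
        w->activateWindow();              // ask the window manager for focus
        w->setFocus();
    });
    anim->start(QAbstractAnimation::DeleteWhenStopped);
}
```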
If you want to do it with a "show" hotkey or shortcut, you'll have to create and use a hook on the keyboard.
Qt doesn't provide such things, so you'll have to do it yourself.
You can have a look at this post: QT background process
I don't know about other OSes.
When you get the right keyboard event from your hook, you can create a window with the "always on top" hint and that should be OK.
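On Windows that hook could be a low-level keyboard hook; a minimal sketch follows (the F12 hotkey and the show/focus call are placeholders, and other platforms need their own mechanism):

```cpp
// Sketch (Windows): low-level keyboard hook that reacts to F12 as the "show" hotkey.
#include <windows.h>

static HHOOK g_kbHook = nullptr;

void ShowMyWindow()
{
    // Placeholder: show/raise the Qt window here
    // (e.g. widget->show(); widget->activateWindow();).
}

LRESULT CALLBACK KeyboardProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION && wParam == WM_KEYDOWN) {
        const KBDLLHOOKSTRUCT *kb = reinterpret_cast<const KBDLLHOOKSTRUCT *>(lParam);
        if (kb->vkCode == VK_F12)
            ShowMyWindow();
    }
    return CallNextHookEx(g_kbHook, code, wParam, lParam);
}

void InstallKeyboardHook()
{
    g_kbHook = SetWindowsHookEx(WH_KEYBOARD_LL, KeyboardProc, GetModuleHandle(nullptr), 0);
}
```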
This is a weird question. I have written a .NET application that starts a process. This process is an MFC application written in C++. For some reason, the process does not start doing anything until its window is displayed to the user for the first time. For example, if the process starts minimized, I have to un-minimize it (click on it) before it will start doing whatever it's supposed to do. Also, if my application is running and starts this process while the screen is locked, the process behaves the same as if it were minimized: it doesn't start doing anything until I unlock the screen and it is displayed to the user for the first time. Like I said, this is a weird question, so I hope I'm conveying the problem properly.
Sounds like your functionality is embedded in the MFC window's load event. If you want the application to be more reactive, move that code to your application class.
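If that is the case, one way to decouple the work from the window being shown is to kick it off from the application's InitInstance, for example on a worker thread; a rough sketch under that assumption:

```cpp
// Sketch (MFC): start the real work from InitInstance via a worker thread,
// so it does not wait for the main window to be shown or activated.
#include <afxwin.h>

UINT DoWork(LPVOID /*param*/)
{
    // long-running work previously triggered only once the window was displayed
    return 0;
}

class CMyApp : public CWinApp
{
public:
    BOOL InitInstance() override
    {
        CWinApp::InitInstance();
        // ... create the main window as usual ...
        AfxBeginThread(DoWork, nullptr);   // work starts even if the window stays minimized
        return TRUE;
    }
};

CMyApp theApp;   // the one application object
```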