I have an existing (large) X application based on raw XLib. I would like to add additional windows to this application using Qt 4. What is the best way to do this?
Research so far:
(If it matters for the details, I'm looking at Qt 4.7.4 right now.)
My existing application calls XtAppNextEvent in a loop to handle its events. What I am hoping to do is replace this event loop with a Qt-based event loop, let Qt handle its own events, and make calls to XtDispatchEvent for non-Qt events.
I have located the part of Qt that processes X events (in src/gui/kernel/qapplication_x11.cpp, QApplication::x11ProcessEvent). I believe the key part of this function is:
QETWidget *widget = (QETWidget*)QWidget::find((WId)event->xany.window);
which determines whether the event refers to a window that Qt knows about. For non-Qt windows, this returns NULL. There are a couple of processing exceptions after this, then a block like:
if (!widget) { // don't know this windows
    QWidget *popup = QApplication::activePopupWidget();
    if (popup) {
        // ... bunch of stuff not involving widget ...
    }
    return -1;
}
What I was hoping for was an event callback at this point, invoked for events on non-Qt windows, so that I could simply implement a virtual function in my derived QApplication and proceed with the application's existing event processing. I could add such a function and rebuild Qt, but I would rather avoid that if possible.
Am I on the right track with this, or might there be a better way?
I have found existing questions similar to this, but they're all for Windows (MFC or .NET). This is specific to X.
The solution I ended up with was to find a copy of the Qt Motif Extension (it's no longer available from Digia directly as it is now unsupported, but you can still find copies of qtmotifextension-2.7-opensource.zip). In there, the qtmotif.h and qtmotif.cpp modules show how to create a QAbstractEventDispatcher that handles X events for both Xt/Motif and Qt components.
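For reference, the core of what qtmotif.cpp does is decide, per X event, whether the window belongs to Qt or to Xt. This is not the actual extension code, just a rough sketch of that dispatch decision, assuming appContext is the application's existing XtAppContext:

#include <QApplication>
#include <QWidget>
#include <X11/Intrinsic.h>

void dispatchOneEvent(XtAppContext appContext)
{
    XEvent event;
    XtAppNextEvent(appContext, &event);
    if (QWidget::find((WId)event.xany.window))   // Qt knows this window...
        qApp->x11ProcessEvent(&event);           // ...so let Qt 4 (X11 build) handle it
    else
        XtDispatchEvent(&event);                 // otherwise it's an Xt/Motif window
}

In the extension itself this logic lives inside the processEvents() of a QAbstractEventDispatcher subclass, so Qt's own event loop ends up driving both sets of windows.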
I have Application A which is a third party Windows application (with GUI) that uses the Qt library.
I want to write Application B, which will be responsible for starting Application A. I also want Application B to find buttons on Application A (QWidgets) and send them mouse input (click, double click, etc.).
I can launch Application A using the start function of a QProcess.
How do I do the following from my instance of the QProcess:
Get the top level window(s) for the process
Get a widget by name or (other identifiable attribute)
Get the widget's caption, colour, and coordinates (and maybe some other data)
Send mouse move and click events to specific coordinates
Give a widget keyboard focus
Send keyboard key presses
Note - I know how to do this with the Windows API, but I'm asking for a way to do it via Qt. The specific Application A in question does not use the native windowing system, therefore window handles will not show up in Spy++ or Windows API functions.
Update 1 - Cannot seem to get any meaningful objects through the process's children
My attempt at getting child widgets for the process:
QProcess* process = new QProcess();
QString program = "\"C:\\Program Files (x86)\\foo\\bar.exe\"";
process->start(program);
auto widgets = process->findChildren<QWidget*>("", Qt::FindChildrenRecursively);
auto i = widgets.count();
// i = 0
If I find children of type <QObject*> I get 4 results. I used metaObject()->className() to see that I have two pairs of QWindowsPipeReader and QWinOverlappedIoNotifier objects.
Update 2 - Cannot create/inject window from another process
I noticed that when I run the QProcess I can use Windows API functions to get the top level window (top level only). I read in the Qt documentation that you can create a QWindow from the handle of a window in another process using QWindow::fromWinId.
Using this function throws a 0xC0000005: Access violation reading location 0x00000000 error. I am not passing in a null handle; I am using reinterpret_cast to convert the HWND to a WId. It only creates a QWindow when I create a QApplication beforehand.
The new QWindow has no children (using window->findChildren<QObject*>("", Qt::FindChildrenRecursively)). I assume that creating the QWindow does not bring across the associated child widgets.
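Roughly, the fromWinId attempt looks like this (a sketch rather than the exact code; the window title "bar" is a placeholder, and the QApplication has to be created first, as noted above):

#include <QApplication>
#include <QWindow>
#include <windows.h>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);                        // must exist before fromWinId
    HWND hwnd = ::FindWindowW(nullptr, L"bar");          // placeholder window title
    if (!hwnd)
        return 1;                                        // window not found
    QWindow *foreign = QWindow::fromWinId(reinterpret_cast<WId>(hwnd));
    // "foreign" only wraps the native handle; it exposes no Qt child objects.
    return 0;
}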
Update 3 - I am currently looking into whether inter-process communication can be used
I have come across various threads, questions and code snippets regarding IPC in Qt. I haven't seen anything so far that specifically shows IPC is possible when one of the processes is a third-party application.
I have seen that the Squish GUI test tool lets you interrogate QWidget properties.
This will never work the way you intend it to.
If you really wish to take direct control over the other application, you must inject your code into that application. Let's assume that the application uses a dynamically linked Qt. Then:
Build a binary-compatible version of Qt, using the same compiler that was used to build the application you intend to tweak.
Replace the application's Qt with yours. Everything should still work fine, given that yours should be binary compatible. If not, the binary compatibility isn't there, and you must tweak things until they work.
Edit your Qt to add a hook that initializes your code at the end of the QApplication constructor. This makes the Qt module that provides QApplication dependent on your code.
Put your code into a DLL that the widgets (for Qt 5) or gui (for Qt 4) module is now dependent on.
Again replace the app's Qt with yours, with hooks that start your code.
Your code will of course need to be asynchronous, and will monitor the application's progress by inspecting qApp->activeWindow(), qApp->allWidgets(), and so on.
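For example, a minimal sketch of what that monitoring code might look like (Qt 5 connect syntax, hypothetical button object name):

#include <QApplication>
#include <QPushButton>
#include <QTimer>
#include <QWidget>

// Hypothetical entry point that the QApplication hook from the steps above would call.
void startMonitor()
{
    QTimer *timer = new QTimer(qApp);
    QObject::connect(timer, &QTimer::timeout, [] {
        // Poll the target application's widgets asynchronously.
        const QWidgetList widgets = QApplication::allWidgets();
        for (QWidget *w : widgets) {
            if (auto *button = qobject_cast<QPushButton *>(w)) {
                if (button->objectName() == "okButton")   // hypothetical object name
                    button->click();                      // drive the UI directly, in-process
            }
        }
    });
    timer->start(500);                                    // check twice a second
}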
That's pretty much the only way to do it. You can of course inject your code in any other way you desire, but you will need to work with a binary-compatible version of Qt just to compile your code. Note that binary compatibility encompasses more than merely using the same compiler version and Qt version. Many of the configure switches must be the same, too.
I have been trying to understand Qt's platform handling for hours now, but I don't get it. For my hotkey handling I currently use a mixture of X11Extras for getting the display, Xlib for key conversions, and xcb for the native event handling in Qt. That's three libraries to link. And where does the undocumented QPA layer play a role here? Now I wonder whether all of this is necessary; I need some clarification. I am using Qt 5.4. What is the way to go for the future?
For Qt, you shouldn't need to do any native platform coding for key events unless you're using native windows. Read about QEvent and the event functions in QWidget.
Use QWidget::nativeEvent() or QCoreApplication::installNativeEventFilter() if you need direct access to raw X11 events. Native event filters at the application level are processed for every event and have more of a performance impact than subclassing QWidget and reimplementing nativeEvent().
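For example, a minimal sketch of an application-wide filter on the xcb platform (Qt 5; the actual hotkey handling is left out):

#include <QAbstractNativeEventFilter>
#include <xcb/xcb.h>

class X11HotkeyFilter : public QAbstractNativeEventFilter
{
public:
    bool nativeEventFilter(const QByteArray &eventType, void *message, long *) override
    {
        if (eventType != "xcb_generic_event_t")
            return false;                                 // not running on the xcb platform
        auto *event = static_cast<xcb_generic_event_t *>(message);
        if ((event->response_type & ~0x80) == XCB_KEY_PRESS) {
            auto *key = reinterpret_cast<xcb_key_press_event_t *>(event);
            // key->detail is the hardware keycode; translate it however you need
            (void)key;
        }
        return false;                                     // let Qt keep processing the event
    }
};

// Installed once, e.g. right after constructing the application object:
//   app.installNativeEventFilter(new X11HotkeyFilter);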
Read about QWindow::fromWinId() and QWidget::createWindowContainer() if you need to embed a native window as a child widget.
I have an existing application that uses MFC for the UI, and I'm trying to migrate to Qt. For the most part the migration is straight forward, but I'm not sure how to manage the enabled state of actions (menu and toolbar items).
In MFC you implement a callback with enable/disable logic, and this is called when the item is displayed. In Qt, you only have access to the setEnabled() method.
Is there a built-in or standardized way of connecting an update callback to an action, or do I need to create my own solution using timers and registering actions with it? In a large application such as the one I'm working on, the 'should enable' logic can jump all over the place, e.g. certain files on disk must exist, the main display must have a selection, the application's ProcessManager::isProcessing() must be false, etc. It doesn't seem practical to rely on setEnabled() being called on specific actions when there are so many conditions behind the enable/disable logic.
The most "standard" Qt way would be the use of signals/slots.
In my MDI apps, which are based on the Qt MainWindow/MDI examples, I just connect a single "updateMenus()" function to the signal emitted whenever an MDI subwindow is shown or hidden.
Now that may not be enough granularity for your application. So what you could do is - still have a single "updateMenus()" method - but connect it to each menu's "aboutToShow()/aboutToHide()" signals.
That way you keep the logic from sprawling all over the place, and only update menus right when they are needed (much like MFC's ON_UPDATE_COMMAND_UI handlers).
Here's my mainwindow constructor:
mp_mdiArea = new QMdiArea();
setCentralWidget(mp_mdiArea);
connect(mp_mdiArea, SIGNAL(subWindowActivated(QMdiSubWindow*)), this, SLOT(updateMenus()));
And here's my updateMenus():
void MainWindow::updateMenus()
{
    bool hasMdiChild = (activeMdiChild() != nullptr);
    mp_actionSave->setEnabled(hasMdiChild);
    mp_actionSaveAs->setEnabled(hasMdiChild);
    mp_actionClose->setEnabled(hasMdiChild);
}
See the Qt 4.8 documentation for QMenu::aboutToShow() and aboutToHide().
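A sketch of the aboutToShow() variant (mp_fileMenu and mp_editMenu are hypothetical QMenu members) would simply be a few more connects in the constructor:

connect(mp_fileMenu, SIGNAL(aboutToShow()), this, SLOT(updateMenus()));
connect(mp_editMenu, SIGNAL(aboutToShow()), this, SLOT(updateMenus()));

That way the enable/disable logic only runs when a menu is about to pop up, much like MFC's update handlers.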
In our project we have three independent applications, and we have to develop a Qt control application that controls them. The main window will be separated into three sub-windows, each one displaying a different application.
I thought of using the QX11EmbedWidget and QX11EmbedContainer widgets, but there are two problems with that:
QX11Embed* is based on the X11 protocol, and I don't know if it's supported on non-X11 systems like Windows.
Since Qt 5 these classes no longer exist, and the Qt documentation doesn't mention why.
So I don't know whether to use them or not; I'd be happy to get an answer.
In addition, I see that Qt 5.1 contains the QWidget::createWindowContainer() function, which some posts suggest is the replacement for QX11Embed. Can anyone explain to me how I can use this function to create a Qt widget that runs another application (a calculator, for example) inside it?
I have searched a lot on Google and didn't find answers to my questions.
Can anyone please help me? Am I on the right way?
Thanks!
If all three independent applications are written with Qt, and you have their source, you should be able to unify them just through the parenting of GUI objects in Qt.
http://qt-project.org/doc/qt-4.8/objecttrees.html
http://qt-project.org/doc/qt-4.8/widgets-and-layouts.html
http://qt-project.org/doc/qt-4.8/mainwindows-mdi.html
If you don't have access to them in that way, what you are talking about is like 3rd party window management. It is kind of like writing a shell, like Windows Explorer, that manipulates the state and the size of other window applications.
Use a program like Spy++ or AutoIt Spy for Windows, and similar tools on other operating systems, to learn the identifying markings of the windows you want to control, like the class, the window title, etc. Or you can launch the exe yourself with a QProcess::startDetached() sort of thing.
http://qt-project.org/doc/qt-5.1/qtcore/qprocess.html#startDetached
Then control the windows using OS-dependent calls. The Qt library doesn't have this built in for third-party windows, only for the ones under the QApplication that you launched. There are a lot of examples of doing things like this with AutoHotkey (AHK), a scripting language made for automating a lot of things in the Windows environment; there is a port for Mac as well (though I haven't tried the Mac port myself).
So in the end you are looking at finding your window probably with a call like this:
#include <windows.h>
HWND hwnd_1 = ::FindWindowA("Window_Class", "Window Name"); // ANSI variant, so the narrow string literals compile regardless of UNICODE
LONG retVal = ::GetWindowLongA(hwnd_1, GWL_STYLE);          // query the state (style flags) of the window
Then manipulate the position and state of the window like so:
::MoveWindow(hwnd_1, x, y, width, height, TRUE);
::ShowWindow(hwnd_1, SW_SHOWMAXIMIZED);
You can even draw widgets on top of the windows you are controlling if you set your window flags correctly for the windows you are manipulating.
See, for example, the related questions "transparent QLabel with a pixmap" and "Cannot get QSystemTrayIcon to work correctly with activation reason".
Some gotchas that come up on Windows when doing all of this are the quirks of the Windows UI when display scaling is set differently from what you expect, playing nicely with the taskbar, and handling all the modal windows of the programs you are manipulating.
So overall, it is doable. Qt will make a nice interface for performing these commands, but in the end you are looking at a lot of work and debugging to turn it into a beautiful, reliable window manager.
Hope that helps.
I never tried it myself, but from the Qt 5.1 docs I would try QWindow::fromWinId(WId id), which gives you a QWindow that should be embeddable with createWindowContainer:
QWindow *QWindow::fromWinId(WId id) [static]
Creates a local representation of a window created by another process or by using native libraries below Qt.
Given the handle id to a native window, this method creates a QWindow object which can be used to represent the window when invoking methods like setParent() and setTransientParent(). This can be used, on platforms which support it, to embed a window inside a container or to make a window stick on top of a window created by another process.
But no guarantee. :-)
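An untested sketch of that embedding (Qt 5.1+), where externalWinId is assumed to be the native id of the other application's window, obtained in some platform-specific way:

#include <QApplication>
#include <QVBoxLayout>
#include <QWidget>
#include <QWindow>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    WId externalWinId = 0;                 // placeholder: fill in the real native window id
    QWindow *foreign = QWindow::fromWinId(externalWinId);

    QWidget host;
    auto *layout = new QVBoxLayout(&host);
    // createWindowContainer wraps the QWindow in a QWidget that can be laid out like
    // any other child widget, and it takes ownership of the QWindow.
    layout->addWidget(QWidget::createWindowContainer(foreign, &host));
    host.resize(800, 600);
    host.show();

    return app.exec();
}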
I'm writing an editor, and I have a problem calling the native file save/open dialog from my OpenGL app. The editor is written with my in-game OpenGL GUI. So I came up with the idea that when the user presses "load" or "save", I will create a thread which will create the required (non-visible) wx window and call wxFileDialog, and after the job is done I will delete that thread. Is this possible, or are there better approaches to access the file open/save dialog in a cross-platform way from an OpenGL app?
wxWidgets has an OpenGL widget (wxGLCanvas). Put your OpenGL rendering into it, forward the events received by the widget to your GUI system, and then you won't have to battle for the event loop.
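A minimal sketch of that idea (wxWidgets 3.x; assumes the project links the wx "gl" component, and myGui stands for your in-game GUI): the canvas owns the GL context, renders in its paint handler, and forwards input events, so a native wxFileDialog can then be shown from any event handler without extra threads.

#include <memory>
#include <wx/wx.h>
#include <wx/glcanvas.h>

class EditorCanvas : public wxGLCanvas
{
public:
    explicit EditorCanvas(wxWindow *parent) : wxGLCanvas(parent, wxID_ANY)
    {
        Bind(wxEVT_PAINT, &EditorCanvas::OnPaint, this);
        Bind(wxEVT_LEFT_DOWN, &EditorCanvas::OnMouse, this);
    }

private:
    void OnPaint(wxPaintEvent &)
    {
        wxPaintDC dc(this);                               // required in a paint handler
        if (!m_context)
            m_context.reset(new wxGLContext(this));       // create the GL context lazily
        SetCurrent(*m_context);
        // ... existing OpenGL rendering of the editor goes here ...
        SwapBuffers();
    }
    void OnMouse(wxMouseEvent &event)
    {
        // Forward to the in-game GUI instead of running a second event loop,
        // e.g. myGui->HandleClick(event.GetX(), event.GetY());   (hypothetical call)
        event.Skip();
    }
    std::unique_ptr<wxGLContext> m_context;
};

class EditorApp : public wxApp
{
public:
    bool OnInit() override
    {
        auto *frame = new wxFrame(nullptr, wxID_ANY, "Editor");
        new EditorCanvas(frame);
        frame->Show();
        return true;
    }
};

wxIMPLEMENT_APP(EditorApp);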
As others have already said, the simplest solution is to use wxWidgets for the main loop and wxGLCanvas for OpenGL stuff.
But if this is impossible, for some reason, you should indeed be able to use wxWidgets from another thread. Just remember that wxWidgets GUI functionality can only be used from a single thread so you need to initialize it from that thread too. And, of course, you'll need to handle thread synchronization yourself as wxWidgets won't know anything about the rest of your program.
If you are using GLUT, or equivalent, then you do NOT have a cross-platform framework. If you want a cross-platform app, then you will have to choose a framework (e.g. wxWidgets or Qt or whatever) and proceed from there. Otherwise, you can use native calls to the Windows API if you are on Windows, and the equivalent on other platforms.
GLUT only gives you a console-style application. If you want a GUI, then you have to choose a GUI framework, even if you do not want cross-platform. There are many to choose from; the choice mostly depends on which you are most familiar with. Then you add the calls to the OpenGL library from your GUI application, however it is built. This way, you do not have to muck around with multiple threads.
It may be that you have a massive investment in your GLUT application, and do not wish to discard it merely to get a few GUI capabilities. In this case, I recommend building a new GUI app, separate from your GLUT application, which communicates with your existing app using a socket (or another inter-process communication mechanism) but runs in a separate process. This way you will not encounter all the ghastly, hard-to-fix bugs created by multithreaded apps.