I already have a standalone MFC GUI program. What should be done to make it a VST 2.x plugin? (It would be a lot of rework if I used VSTGUI/Win32/Qt/etc. - or is it possible/appropriate to use VSTGUI?)
Which VST interfaces (GUI and others) should I implement for a VST 2.x GUI plugin?
You are worried about the GUI of a VST when in fact you should be worried about the structure of the rest of your code. VST 2.x hands you an HWND for a frame; all you have to do is create a child window that hosts your GUI. MFC or raw Win32, it does not matter.
However, the real 'problem' is in the rest of the VST 2.x interface. You should study this interface and learn how it works. Then you'll be able to assess whether your code is structured in a way that lets it easily be exposed as a VST plugin.
You just need to slave your window code to the HWND you're given. The easiest way is to reparent your whole window using SetParent, and then write your MFC code as you would in a normal app.
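A minimal sketch of that reparenting step, assuming you already have a modeless MFC dialog; the open(void*) shape mirrors the VST 2.x editor callback, but every class and member name here is illustrative rather than SDK-accurate:

#include <afxwin.h>    // MFC (include order matters in real MFC projects)

// Hypothetical editor wrapper around your existing dialog.
class MyEditor
{
public:
    explicit MyEditor(CDialog* dlg) : m_pluginDlg(dlg) {}

    bool open(void* ptr)                     // ptr = parent HWND from the host
    {
        HWND hostWnd = reinterpret_cast<HWND>(ptr);
        HWND myWnd   = m_pluginDlg->GetSafeHwnd();

        ::SetParent(myWnd, hostWnd);         // slave the whole window to the host frame

        // Turn the top-level dialog into a borderless child window.
        LONG_PTR style = ::GetWindowLongPtr(myWnd, GWL_STYLE);
        style &= ~(WS_POPUP | WS_CAPTION | WS_THICKFRAME);
        style |= WS_CHILD | WS_VISIBLE;
        ::SetWindowLongPtr(myWnd, GWL_STYLE, style);
        ::SetWindowPos(myWnd, nullptr, 0, 0, 0, 0,
                       SWP_NOSIZE | SWP_NOZORDER | SWP_FRAMECHANGED | SWP_SHOWWINDOW);
        return true;
    }

private:
    CDialog* m_pluginDlg;                    // your existing modeless MFC dialog
};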
However, MFC gives you no knobs and no digital or analog readouts. Even with MFC, if you want to make a polished VST interface you'll be rolling your own UI code either way.
So it's almost worth it to just handle the WM_XXXX messages and do the windowing and drawing all yourself.
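As a rough illustration of that do-it-yourself approach (a sketch only, none of this comes from the VST SDK), a custom control is just a registered window class whose procedure paints itself and tracks the mouse:

#include <windows.h>

// Window procedure for a hand-rolled "knob" control (illustrative only).
LRESULT CALLBACK KnobProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_PAINT:
    {
        PAINTSTRUCT ps;
        HDC dc = ::BeginPaint(hwnd, &ps);
        RECT rc;
        ::GetClientRect(hwnd, &rc);
        ::Ellipse(dc, rc.left, rc.top, rc.right, rc.bottom);   // the knob body
        ::EndPaint(hwnd, &ps);
        return 0;
    }
    case WM_LBUTTONDOWN:
        ::SetCapture(hwnd);          // begin a drag that will edit the parameter
        return 0;
    case WM_MOUSEMOVE:
        // map vertical mouse movement to a parameter value and repaint here
        return 0;
    case WM_LBUTTONUP:
        ::ReleaseCapture();
        return 0;
    }
    return ::DefWindowProc(hwnd, msg, wParam, lParam);
}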
Related
I like C++, and I have built GUIs many times in C#, but this time I would like to make a GUI in C++. I already know the basics of the Win32 API, such as creating a window, resource scripts, commands and command handling, and the basics of some controls.
But what I would like to know is how to choose between the pure Win32 API and MFC for making applications with sidebars that can be undocked from and docked back to the window just by clicking and dragging, as in the image below:
And the other kind of control I'd like to identify is the list highlighted in red in the image below. For the part circled at the bottom, I know it's a mix of a tree view with that kind of list; I thought it was a table control or something similar, but it is not.
Anyway, should I continue studying the pure Win32 API, or should I jump directly to MFC? I do not intend to use .NET or C#, only pure C++ with some libraries.
You can also take a look at more modern C++ GUI frameworks, like Qt.
If you want to learn more about Windows, you can use either the Win API or MFC. MFC is just a pretty thin (and object-oriented) layer over the Win API.
qBittorrent is using the Qt framework, so those are most likely a QListWidget/QListView and a QTreeWidget/QTreeView.
In our project we have three independent applications, and we have to develop a Qt control application that controls these three applications. The main window will be separated into three sub-windows, each one displaying one of the applications.
I thought of using the QX11EmbedWidget and QX11EmbedContainer widgets, but there are two problems with that:
QX11Embed* is based on the X11 protocol, and I don't know if it's supported on non-X11 systems like Windows.
Since Qt 5 these classes no longer exist, and the Qt documentation doesn't mention why.
So I don't know whether to use them or not - I'd be happy to get answers.
In addition, I see that Qt 5.1 contains the QWidget::createWindowContainer() function, which some posts suggest should be the replacement for QX11Embed. Can anyone please explain to me how I can use this function to create a Qt widget that runs another application (a calculator, for example) inside it?
I have searched a lot on Google and didn't find answers to my questions.
Can anyone please help me? Am I on the right way?
Thanks!
If all three independent applications are written with Qt, and you have their source, you should be able to unify them just through the parenting of GUI objects in Qt.
http://qt-project.org/doc/qt-4.8/objecttrees.html
http://qt-project.org/doc/qt-4.8/widgets-and-layouts.html
http://qt-project.org/doc/qt-4.8/mainwindows-mdi.html
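If you do have all three applications as source, a minimal sketch of that "unify through parenting" idea is an MDI main window that adopts each application's root widget as a sub-window (plain labels stand in for the real widgets here):

#include <QApplication>
#include <QLabel>
#include <QMdiArea>

int main(int argc, char* argv[])
{
    QApplication app(argc, argv);

    QMdiArea mainArea;   // the control application's main window
    // In the real project each sub-window would host the root widget of one
    // of the three applications; the labels are placeholders.
    mainArea.addSubWindow(new QLabel("Application 1"));
    mainArea.addSubWindow(new QLabel("Application 2"));
    mainArea.addSubWindow(new QLabel("Application 3"));

    mainArea.resize(900, 600);
    mainArea.show();
    return app.exec();
}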
If you don't have access to them in that way, what you are talking about is essentially third-party window management. It is like writing a shell, such as Windows Explorer, that manipulates the state and size of other applications' windows.
Use a program like Spy++ or AutoIt Spy on Windows (and the similar tools on other OSes) to learn the identifying marks of the windows you want to control: the class, the window title, etc. Or you can launch the exe yourself with something like QProcess::startDetached().
http://qt-project.org/doc/qt-5.1/qtcore/qprocess.html#startDetached
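For the launch-it-yourself route, a small sketch (the executable name is just a stand-in for your real application):

#include <QProcess>
#include <QStringList>

void launchExternalApp()
{
    // startDetached() returns immediately and the child keeps running
    // independently of the controlling application.
    QProcess::startDetached("calc.exe", QStringList());
}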
Then control the windows using OS-dependent calls. The Qt library doesn't have this built in for third-party windows, only for the ones under the QApplication that you launched. There are a lot of examples of doing things like this with AutoHotkey (AHK), a scripting language made for automating many things in the Windows environment; there is a port for Mac as well (though I haven't tried the Mac port myself).
So in the end you are looking at finding your window probably with a call like this:
#include <windows.h>

// Narrow-string versions so the literals work regardless of the UNICODE setting.
HWND hwnd_1 = ::FindWindowA("Window_Class", "Window Name");
LONG retVal = ::GetWindowLongA(hwnd_1, GWL_STYLE); // query the window's current style/state
Then manipulate the position and state of the window like so:
::MoveWindow(hwnd_1, x, y, width, height, TRUE);   // reposition/resize; TRUE = repaint
::ShowWindow(hwnd_1, SW_SHOWMAXIMIZED);            // change the show state
You can even draw widgets on top of the windows you are controlling if you set your window flags correctly for the windows you are manipulating.
See, for example: "transparent QLabel with a pixmap" and "Cannot get QSystemTrayIcon to work correctly with activation reason".
Some gotchas that come up on Windows when doing all of this are working out the quirks of the Windows UI when the display scaling is set differently from what you expect, playing nice with the taskbar, and handling all the modal windows of the programs you are manipulating.
So overall, it is doable. Qt will make a nice interface for issuing these commands, but in the end you are looking at a lot of work and debugging to get a beautiful, reliable window manager.
Hope that helps.
I never tried it myself, but from the docs in Qt 5.1 I would try QWindow::fromWinId(WId id), which gives you a QWindow that should be embeddable with createWindowContainer:
QWindow * QWindow::fromWinId(WId id) [static]

Creates a local representation of a window created by another process or by using native libraries below Qt.

Given the handle id to a native window, this method creates a QWindow object which can be used to represent the window when invoking methods like setParent() and setTransientParent(). This can be used, on platforms which support it, to embed a window inside a container or to make a window stick on top of a window created by another process.
But no guarantee. :-)
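For what it's worth, a hedged (and, per the caveat above, untested) sketch of combining FindWindow, QWindow::fromWinId() and QWidget::createWindowContainer() in Qt 5.1 might look like this; the window class name is a placeholder for whatever Spy++ reports for your application:

#include <QApplication>
#include <QVBoxLayout>
#include <QWidget>
#include <QWindow>
#include <windows.h>

int main(int argc, char* argv[])
{
    QApplication app(argc, argv);

    QWidget container;                               // one of the three sub-areas
    QVBoxLayout* layout = new QVBoxLayout(&container);

    HWND foreign = ::FindWindowA("SomeWindowClass", nullptr);   // placeholder class name
    if (foreign) {
        // Wrap the native handle in a QWindow, then wrap that in a QWidget.
        QWindow* wrapped = QWindow::fromWinId(reinterpret_cast<WId>(foreign));
        layout->addWidget(QWidget::createWindowContainer(wrapped, &container));
    }

    container.resize(800, 600);
    container.show();
    return app.exec();
}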
I'm writing an editor and I have a problem that requires calling the native file save/open dialog from my OpenGL app. The editor is written with my in-game OpenGL GUI. So I came up with the idea that when the user presses "load" or "save", I will create a thread which will create the required (non-visible) wx window and call wxFileDialog, and after the job is done I will delete that thread. Is that possible, or are there better approaches to accessing the file open/save dialog in a cross-platform way from an OpenGL app?
wxWidgets has an OpenGL widget. Put your OpenGL stuff into that one, forward the events received by the widget to your GUI system, and then you won't have to fight for the event loop.
As others have already said, the simplest solution is to use wxWidgets for the main loop and wxGLCanvas for OpenGL stuff.
But if this is impossible, for some reason, you should indeed be able to use wxWidgets from another thread. Just remember that wxWidgets GUI functionality can only be used from a single thread so you need to initialize it from that thread too. And, of course, you'll need to handle thread synchronization yourself as wxWidgets won't know anything about the rest of your program.
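A rough sketch of the "let wxWidgets own the main loop" setup suggested above, assuming wxWidgets 3.x; the class names and file filter are illustrative:

#include <wx/wx.h>
#include <wx/glcanvas.h>
#include <wx/filedlg.h>

class EditorFrame : public wxFrame
{
public:
    EditorFrame() : wxFrame(nullptr, wxID_ANY, "Editor")
    {
        m_canvas = new wxGLCanvas(this, wxID_ANY);       // hosts the OpenGL view
        Bind(wxEVT_MENU, &EditorFrame::OnOpen, this, wxID_OPEN);
    }

private:
    void OnOpen(wxCommandEvent&)
    {
        // Native open dialog, called from the GUI thread's event handler.
        wxFileDialog dlg(this, "Open scene", "", "",
                         "Scene files (*.scn)|*.scn",
                         wxFD_OPEN | wxFD_FILE_MUST_EXIST);
        if (dlg.ShowModal() == wxID_OK)
        {
            // load dlg.GetPath() into the editor here
        }
    }

    wxGLCanvas* m_canvas;
};

class EditorApp : public wxApp
{
public:
    bool OnInit() override { (new EditorFrame())->Show(); return true; }
};

wxIMPLEMENT_APP(EditorApp);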
If you are using GLUT, or equivalent, then you do NOT have a cross-platform GUI framework. If you want a cross-platform app, then you will have to choose a framework (e.g. wxWidgets or Qt or whatever) and proceed from there. Otherwise, you can make native calls to the Windows API if you are on Windows, and the equivalent on other platforms.
GLUT only gives you a console-style application. If you want a GUI, then you have to choose a GUI framework, even if you do not want cross-platform support. There are many to choose from; the choice mostly depends on which you are most familiar with. Then you add the calls to the OpenGL library from your GUI application, however it is built. This way, you do not have to muck around with multiple threads.
It may be that you have a massive investment in your GLUT application and do not wish to discard it merely to gain a few GUI capabilities. In this case, I recommend building a new GUI app, separate from your GLUT application, which communicates with your existing app over a socket (or another interprocess communication mechanism) but runs in a separate process. This way you will not encounter all the ghastly, hard-to-fix bugs created by multithreaded apps.
I have been given video calling software which implements an ActiveX control to render the video in a web browser. As ActiveX works only in IE, I have been given the task of implementing a cross-browser version of the ActiveX control using the FireBreath framework.
I need to write a wrapper class for the ActiveX control.
I am new to ActiveX and Visual Studio (everything involved in the project), and the ActiveX code has thousands of lines. It is taking me a long time to understand the code.
Does anyone have any good example wrapper classes and any other suggestions or links which would help my project?
The closest I know of is this: https://github.com/firebreath/FBAXExample
It's an example of hosting an ActiveX control inside a FireBreath plugin. You'd be better off (and it would be a lot cleaner) if you can do a complete port, but it may be possible to get by with just a wrapper; you may also want to look at the WebView library in FireBreath itself, which embeds IE inside a FireBreath plugin. You can find it here: https://github.com/firebreath/FireBreath/tree/master/src/libs/WebView/Win
Of course it would mean your plugin is not cross-platform, but let's focus on the technical side...
Is a browser plugin (like one done with NPAPI) restricted in what it can do? Or do you get fairly free rein to access the PC and the render window you're given? For instance, can you create Win32/MFC controls in your browser this way?
A side question: is a browser plugin conceptually akin to a .DLL, and therefore just arbitrary compiled code implementing a specific interface for browser control/communication?
There are two types of NPAPI plugins: windowed and windowless. Both of them have advantages and disadvantages (see this link). When you deal with a windowed plugin on Win32, you get the HWND of the browser's plugin window and you can work with it like any other window in the OS.
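To make the windowed case concrete, here is a minimal sketch assuming the classic NPAPI SDK headers (in real code you would guard against creating the child more than once, since the browser may call this repeatedly):

#include <windows.h>
#include "npapi.h"   // classic NPAPI SDK header

// The browser calls NPP_SetWindow whenever the plugin's window is (re)created
// or resized; for a windowed plugin on Win32, window->window is the HWND.
NPError NPP_SetWindow(NPP instance, NPWindow* window)
{
    if (!window || !window->window)
        return NPERR_NO_ERROR;

    HWND pluginHwnd = reinterpret_cast<HWND>(window->window);

    // From here you can create ordinary Win32 (or MFC) children inside it.
    ::CreateWindowExA(0, "BUTTON", "Hello from the plugin",
                      WS_CHILD | WS_VISIBLE,
                      10, 10, 200, 30,
                      pluginHwnd, nullptr, nullptr, nullptr);
    return NPERR_NO_ERROR;
}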