I'm looking for an efficient way to use two separate windows for one OpenGL program in C++. It is a drawing program, and like Photoshop or Illustrator, I would like a tool bar that is a separate window: it can be moved around, but it never gets sent behind the composition window when the user starts to draw. Can this be done?
Related
I was also looking at how, sometimes when you right-click, the context menu extends outside of the window.
Is this implemented with a separate window? If so, how can I get this functionality? I am trying to use GLFW, but I understand if it isn't possible.
Currently I am on Windows, but I like keeping my options open, which is why GLFW would be preferable.
I noticed that GLUT has such a feature. If you're not sure what I'm describing, take a look at that.
Thanks for any help!!
Overlapping menus (in MS Windows) have to be implemented as a new top-level window: you would create a new OpenGL rendering context and draw the menu in that space. Yes, it's a fair bit of work, all for the edge case of a menu overspilling the parent window.
However, this isn't often a problem in OpenGL programming: if you're working on a full-screen game, the menu will always be displayed within the main window, and even if it isn't a full-screen game, your users really won't notice, because games tend to use different UI concepts, like radial menus, which wouldn't overspill the parent window.
Or if you're working on a non-game title, chances are it isn't full-screen and is going to be an OpenGL rendering area within a larger application that is rendered using a native UI toolkit (e.g. 3ds Max, AutoCAD, etc), in which case no problem: just use native menus.
You can, of course, use native menus in an OpenGL application anyway, provided you do the necessary plumbing for native window messages.
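For the original toolbar question, the gist with GLFW is simply to create two top-level windows and mark the palette one as floating (always on top). A minimal, untested sketch, assuming GLFW 3.1 or later (titles, sizes and the drawing calls are placeholders):

    // Two GLFW windows; the second one is "floating" (always on top) so it
    // behaves like a tool palette and never drops behind the canvas.
    #include <GLFW/glfw3.h>

    int main()
    {
        if (!glfwInit())
            return -1;

        // Main composition window.
        GLFWwindow* canvas = glfwCreateWindow(1024, 768, "Canvas", nullptr, nullptr);

        // Tool palette: floating, and sharing the canvas's GL context so
        // textures/buffers can be reused between the two windows.
        glfwWindowHint(GLFW_FLOATING, GLFW_TRUE);
        GLFWwindow* tools = glfwCreateWindow(200, 600, "Tools", nullptr, canvas);

        while (!glfwWindowShouldClose(canvas))
        {
            glfwMakeContextCurrent(canvas);
            glClear(GL_COLOR_BUFFER_BIT);
            // ... draw the composition ...
            glfwSwapBuffers(canvas);

            glfwMakeContextCurrent(tools);
            glClear(GL_COLOR_BUFFER_BIT);
            // ... draw the tool buttons ...
            glfwSwapBuffers(tools);

            glfwPollEvents();
        }

        glfwTerminate();
        return 0;
    }

The floating hint keeps the palette above the canvas even when the canvas has focus, which is the Photoshop-style behaviour you described.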
I've been weighing the pros and cons of making a GUI app, and I've decided a console app is much more powerful for my calculator, especially since it does different things like FOIL, quadratic equations, etc. So my question is: can I make the console look like a GUI-based app?
The answer to your question depends on exactly what you mean by "console." If you're talking about Windows console windows, then the answer is "maybe." Some Windows installations can emulate VGA/EGA graphics within a console window, making them able to play old games for DOS.
Your mission would be to implement every GUI widget you need, such as clickable buttons, text-entry fields, etc. in terms of simple graphics primitives for drawing lines and rectangles. Then you have to write code that figures out where the mouse is and draws the mouse pointer in the right spot. You'd also have to write code to make the cursor blink, to make the arrow keys move the cursor, and to make it possible to select characters in a text entry box and copy, cut, and paste them.
When you were done, you'd have a program that works on some people's computers but not on others. On some Windows installations, console windows can't do graphics or go fullscreen, so your app wouldn't work at all on those systems. You could instead write a fullscreen Windows app using a 2D game library such as SDL or Allegro rather than a console app, but that just brings you back to the previous paragraph.
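To make that concrete, here's a rough, untested SDL2 sketch of what that route looks like: you get a fullscreen window and a renderer, and every "widget" is just rectangles and hit-testing you write yourself (colours and sizes below are arbitrary):

    // Minimal SDL2 sketch: a fullscreen window you would draw your own
    // "widgets" into with lines and rectangles.
    #include <SDL.h>

    int main(int, char**)
    {
        if (SDL_Init(SDL_INIT_VIDEO) != 0)
            return 1;

        SDL_Window* win = SDL_CreateWindow("Calculator",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
            0, 0, SDL_WINDOW_FULLSCREEN_DESKTOP);
        SDL_Renderer* ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

        bool running = true;
        while (running)
        {
            SDL_Event e;
            while (SDL_PollEvent(&e))
                if (e.type == SDL_QUIT || e.type == SDL_KEYDOWN)
                    running = false;

            SDL_SetRenderDrawColor(ren, 30, 30, 30, 255);
            SDL_RenderClear(ren);

            // A "button" drawn as a plain rectangle; hit-testing against
            // the mouse position would be entirely up to you.
            SDL_Rect button{100, 100, 160, 60};
            SDL_SetRenderDrawColor(ren, 200, 200, 200, 255);
            SDL_RenderFillRect(ren, &button);

            SDL_RenderPresent(ren);
        }

        SDL_DestroyRenderer(ren);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }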
As you might have guessed by now, rolling your own GUI would be a whole lot more work than writing a Windows GUI program in which the buttons, text fields, etc. are already implemented for you, the cursor already blinks, the mouse already clicks, and so on.
Also, the code that does the actual calculations should be totally separate from the code that gets the input from the user and puts the answers on the screen, so that code shouldn't factor into whether you want to write a GUI or a console app. It shouldn't even be in the same .cpp file as the I/O routines.
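For example, the quadratic-formula part can live in a function that knows nothing about cin/cout, so either a console or a GUI front end could call it. A small sketch (the function name is just illustrative):

    // Calculation logic kept separate from I/O: solveQuadratic() knows
    // nothing about the console (or a GUI), so either front end can call it.
    #include <cmath>
    #include <iostream>
    #include <optional>
    #include <utility>

    // Returns the two real roots of ax^2 + bx + c = 0, or nothing if there are none.
    std::optional<std::pair<double, double>> solveQuadratic(double a, double b, double c)
    {
        double disc = b * b - 4.0 * a * c;
        if (a == 0.0 || disc < 0.0)
            return std::nullopt;
        double r = std::sqrt(disc);
        return std::make_pair((-b + r) / (2.0 * a), (-b - r) / (2.0 * a));
    }

    // The I/O lives in main (ideally in a separate .cpp); swapping it for a
    // GUI later would not touch solveQuadratic at all.
    int main()
    {
        double a, b, c;
        std::cout << "a b c: ";
        std::cin >> a >> b >> c;

        if (auto roots = solveQuadratic(a, b, c))
            std::cout << "roots: " << roots->first << ", " << roots->second << '\n';
        else
            std::cout << "no real roots\n";
        return 0;
    }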
Now, some programmers use the term "console" to refer to xterm windows on Linux. These are not the same thing at all and cannot draw graphics ("console" is the wrong name for them, to boot). But you do sometimes see menus and the like inside them, "drawn" with colored text. Usually these are drawn and managed using the external dialog shell command.
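If you wanted that colored-text style from C++ rather than from a shell script, the usual route is a curses library instead of dialog; a minimal ncurses sketch (link with -lncurses, colors and text are arbitrary):

    // Minimal ncurses sketch: colored-text "widgets" in a terminal.
    #include <ncurses.h>

    int main()
    {
        initscr();              // take over the terminal
        cbreak();
        noecho();
        start_color();
        init_pair(1, COLOR_BLACK, COLOR_CYAN);

        attron(COLOR_PAIR(1));
        mvprintw(2, 4, " [ OK ] ");   // a fake "button" drawn with colored text
        attroff(COLOR_PAIR(1));
        mvprintw(4, 4, "Press any key to quit");

        refresh();
        getch();                // wait for a keypress
        endwin();               // restore the terminal
        return 0;
    }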
For a while I've been using SDL to write my 3D engine, and I've recently been implementing an editor that can export an optimized format for the type of engine I'm building. Right now the editor is fairly simple: objects can just be moved around, and their textures and models can be changed. As of right now, I'm using SDL with OpenGL to render everything, but I want to use Qt for the GUI part of the editor so that it looks native on every platform. I've got it working great so far: I'm running a QApplication inside of the SDL application, so it basically just opens two windows, one that uses SDL and OpenGL and another that uses Qt. Doing a bit of research, I've found that you can manually update a QApplication, which totally removes any threading problems, and everything works. Just in case you're having a hard time visualizing this, here's a picture:
My goal is to merge these windows into one, because on smaller screens (like my laptop's) it's really hard to keep track of all the different windows I would eventually have. I know there's a way to render with OpenGL in Qt, but can this be integrated with SDL? Am I going to need to move away from using an SDL window and use a Qt one when the editor is enabled? Just to clarify, when the engine isn't in editor mode it won't use any Qt, just SDL, so optimally I wouldn't need to do this.
Drop the SDL part. You have Qt, which does everything SDL does as well. Just subclass a QGLWidget and use that.
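A minimal sketch of that, assuming Qt 4 / early Qt 5 where QGLWidget lives in the QtOpenGL module (newer Qt would use QOpenGLWidget instead; the class name here is just illustrative):

    // Minimal QGLWidget subclass: Qt owns the window and the GL context,
    // and your engine's GL calls move into paintGL().
    #include <QApplication>
    #include <QGLWidget>

    class EditorViewport : public QGLWidget
    {
    public:
        explicit EditorViewport(QWidget* parent = nullptr) : QGLWidget(parent) {}

    protected:
        void initializeGL() override
        {
            glClearColor(0.2f, 0.2f, 0.2f, 1.0f);   // one-time GL state setup
        }

        void resizeGL(int w, int h) override
        {
            glViewport(0, 0, w, h);
        }

        void paintGL() override
        {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // ... issue the same GL calls your SDL render loop used to make ...
        }
    };

    int main(int argc, char** argv)
    {
        QApplication app(argc, argv);
        EditorViewport viewport;
        viewport.resize(800, 600);
        viewport.show();   // or lay it out inside a larger editor main window
        return app.exec();
    }

The widget can then sit in a layout next to your native-looking Qt panels, which is exactly the single-window editor you're after.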
You can keep your game and editor separate processes and still make them part of the same app.
Just spawn the window where you want the game to run as part of Qt. At least on Windows, you can then pass that window handle to your game, and when the game is setting up, instead of creating the window yourself in SDL and binding the OpenGL context to it, you bind to the existing handle.
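A rough sketch of that idea with SDL2, where SDL_CreateWindowFrom() adopts the native handle that Qt's winId() returns (the helper name is made up, and getting a GL context on a foreign window can need extra SDL hints depending on the SDL version, so treat this as a starting point rather than a drop-in solution):

    // Wrap an existing Qt widget's native handle (an HWND on Windows) in an
    // SDL window so the engine can render into the editor's UI.
    #include <SDL.h>
    #include <QWidget>

    SDL_Window* attachSdlToQtWidget(QWidget* host)   // hypothetical helper
    {
        SDL_Init(SDL_INIT_VIDEO);

        // winId() returns the native handle of the Qt widget.
        void* nativeHandle = reinterpret_cast<void*>(host->winId());

        // SDL adopts the existing window instead of creating its own.
        return SDL_CreateWindowFrom(nativeHandle);
    }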
There are some gotchas with this technique to watch out for, such as input focus, I believe (I tested with DirectX, but it may be similar with SDL). The problem is that foreground mode does some dumb checks on the "root" window, which for me was not the window that owned the OpenGL context, so it failed to initialize; background mode worked, however. I think that was for a joystick, now that I think about it, but anyway, that's how you can merge everything together.
I'd just like to add a question similar to this one. Is there any way to draw outside the client window using OpenGL, without using any native commands? Or is this beyond OpenGL's privileges?
Another way I could think of is to create a sub-window, remove its built-in borders and buttons, and then draw what I want there.
I've seen things like this and I was wondering if it was possible: say I run my application, and it shows the render on whatever is below it.
So basically, rendering on the screen without a window.
Possible or a lie?
Note: I want to do this on Windows and in C++.
It is possible to use your application to draw on other applications' windows. Once you have found the window you want and have its HWND, you can use it just as if it were your own window for drawing purposes. But since that window doesn't know you've done this, it will probably mess up whatever you have drawn when it tries to redraw itself.
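The basic idea looks something like this with plain GDI (the window title below is just a placeholder, and as noted, the target application will paint over your drawing the next time it redraws):

    // GDI sketch: draw onto another application's window via its HWND.
    #include <windows.h>

    int main()
    {
        // Find the target window by its title (placeholder text).
        HWND target = FindWindowA(nullptr, "Untitled - Notepad");
        if (!target)
            return 1;

        HDC hdc = GetDC(target);            // device context for that window
        RECT r{20, 20, 220, 120};
        FillRect(hdc, &r, (HBRUSH)GetStockObject(GRAY_BRUSH));
        ReleaseDC(target, hdc);
        return 0;
    }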
There are some very complicated ways of getting around this; some of them involve using Windows hooks to intercept drawing messages to that window, so you know when it has redrawn and can do your redrawing as well.
Another option is to use clipping regions on a window. This can allow you to give your window an unusual shape, and have everything behind it still look correct.
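A sketch of the clipping-region approach, assuming hwnd is a window you created yourself (the rounded-corner shape is just an example):

    // Give your own window a non-rectangular shape: whatever is behind it
    // shows through the clipped-away corners.
    #include <windows.h>

    void applyRoundedShape(HWND hwnd)   // hypothetical helper
    {
        RECT rc;
        GetClientRect(hwnd, &rc);

        // Region with 40-pixel rounded corners; the window is clipped to it.
        HRGN region = CreateRoundRectRgn(0, 0, rc.right, rc.bottom, 40, 40);

        // After SetWindowRgn succeeds, the system owns the region handle,
        // so it must not be deleted here.
        SetWindowRgn(hwnd, region, TRUE);
    }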
There are also ways to take over drawing of the desktop background window, and you can actually run an application that draws animations and stuff on the desktop background (while the desktop is still usable). At least, this was possible up through XP, not sure if it has changed in Vista/Win7.
Unfortunately, all of these options are too complex to go into in depth without more information on what you are trying to do.
You can use GetDesktopWindow() to get the HWND of the desktop. But as a previous answer (SoapBox's) says, be careful: you may mess up the desktop, because the OS expects that it owns it.
I wrote an open source project a few years ago to achieve this on the desktop background. It's called Uberdash. If you follow the window hierarchy, the desktop is just a window in a sort of "background" container. Then there is a main container and a front container. The front container is how windows become full screen or "always on top." You may be able to use Aero composition to render a window with alpha in the front container, but you will need to pass events on to the lower windows. It won't be pretty.
Also, there's a technology in some video cards called overlays/underlays. You used to be able to render directly to an overlay: the GPU would apply it directly, without going through main memory, so even if you took a screen capture, your overlay/underlay would not show up in the capture. Unfortunately, Microsoft dropped that technology in Vista...