Simple OpenGL GUI Framework User Interaction Advice? [closed] - c++

I'm designing a simple GUI framework from scratch as a project, using OpenGL and nothing else external, and I need some advice on how I might implement user interaction.
Basically, I have a base class GUIItem from which all elements inherit. This gives each item some basic variables such as position, a vector to contain child elements, and some basic functions for mouse movement and clicking.
All elements are set up like this, with their relevant member variables.
What I'm struggling with is how to implement user interaction properly. In my window manager I would create a new instance of an item, say a GUIButton called button1. When a click occurs, the window manager would iterate through its list of elements (and any child elements they may have), calculate a rectangular area around each object based on its coordinates, height and width, and then run any "on click" function associated with that item, such as changing the value of textlabel1.
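For reference, the containment test being described is just an axis-aligned rectangle check; a minimal sketch (all parameter names are illustrative):

bool pointInRect(float px, float py, float rx, float ry, float rw, float rh)
{
    // true if the point (px, py) lies inside the rectangle at (rx, ry) with size (rw, rh)
    return px >= rx && px <= rx + rw &&
           py >= ry && py <= ry + rh;
}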
Firstly, is there a better way to do this calculation? It would work for rectangular elements, but spherical objects and other shapes would have a much larger erroneous area that could be clicked. Ideally I would check pixels, but I have no real idea how that would be achieved. I've heard about, but never used, GLUT (my project only allows its use for handling mouse/keyboard interaction though). Does GLUT provide anything to assist in this case?
My main issue is with handling what should happen when an "on click" event actually occurs. At the moment GUIButton, for example, has an "on click" function built in, so as far as I can see I'd have to make it a virtual function. That would mean each new button I created would need its own class just to override the "on click" function, and each button instance would be an instance of a unique class that simply inherits from GUIButton. This seems messy to me, as I have no idea where I would store all those classes, and it seems like a lot of extra code. Would I be creating a button1.cpp and button1.h file for each one?
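One common alternative to subclassing per button (just a hedged sketch, not necessarily how the framework above is meant to work) is to store the handler in the instance itself, e.g. via std::function, so button1 stays a plain GUIButton:

#include <functional>

struct GUIItem { float x = 0, y = 0, width = 0, height = 0; };   // stand-in for the real base class

struct GUIButton : GUIItem
{
    std::function<void()> onClick;   // assigned per instance; no per-button subclass or .cpp/.h needed

    void handleClick() const         // called by the window manager when the hit test passes
    {
        if (onClick) onClick();
    }
};

// usage (textlabel1 is hypothetical):
// GUIButton button1;
// button1.onClick = [&]{ textlabel1.setText("clicked"); };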
Any advice on this would be welcome. I'm new to C++ and OpenGL, it's the first time I've been exposed to GUI programming, and there isn't much to go on when an existing GUI framework is the usual choice.

If you want something stupidly simple and fast, then you could:
create a shadow screen buffer containing an ID/index/pointer instead of a color
pre-render this buffer
Render each of your visual components into it, but instead of coloring/texturing, just fill in the ID/index/pointer of the rendered component. Do not forget to clear this with some NULL value first. After this you have a mask of your components. You only need to do this once ...
On mouse events
You simply convert the mouse coordinates to shadow-screen space and pick the value. If it is NULL, then you clicked (or whatever) on an empty area. If it contains an ID, update or call the callbacks for that component ID. If you have a list of all components, the ID can be the list index; otherwise use the component's actual pointer, or encode it in the style (component_type, component_index). As you can see, this is pretty fast O(1) item selection no matter how many components you have. The shadow screen can have a different resolution than your actual screen (to save memory).
This gives pixel-perfect mouse selection accuracy no matter the shape of your components, without the need for nested component search loops.
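A minimal CPU-side sketch of that shadow/ID buffer (the same idea also works rendered into an offscreen GL buffer); all names here are illustrative:

#include <vector>

struct IdBuffer
{
    int width = 0, height = 0;
    std::vector<int> ids;                                // one slot per shadow pixel, -1 = empty (the "NULL" fill)

    void resize(int w, int h)
    {
        width = w; height = h;
        ids.assign(w * h, -1);                           // clear before pre-rendering the components
    }

    // Each component renders its shape here instead of color, e.g. a filled rect for a button.
    void fillRect(int x, int y, int w, int h, int id)
    {
        for (int j = y; j < y + h; ++j)
            for (int i = x; i < x + w; ++i)
                if (i >= 0 && j >= 0 && i < width && j < height)
                    ids[j * width + i] = id;
    }

    // O(1) pick: convert mouse coordinates (screen space) to shadow-buffer space and read the ID.
    int pick(int mouseX, int mouseY, int screenW, int screenH) const
    {
        int x = mouseX * width  / screenW;
        int y = mouseY * height / screenH;
        if (x < 0 || y < 0 || x >= width || y >= height) return -1;
        return ids[y * width + x];
    }
};

The returned value can then be used as the index into the component list to dispatch the click callback.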
[Notes]
As I have done this stuff before, here are some hints:
Create a window class containing the configuration of your components for a single screen. Programs usually have several screens with different sets of components, and rebuilding the screens dynamically over and over again just because you switch a page/screen is a pain.
Use a separate list of components, one list per component type.
Create an IDE/editor for your windows, see the drag & drop example in C++, it might come in handy for this. Add get/set functions controlled by string/enum or flag to easily obtain/change properties and make an Object Inspector possible. This is also how my IDE looks:
The window is saved from the IDE directly as C++ code that I can just copy into my app. Here is the above example without the knob (I forgot to save it):
//---------------------------------------------------------------------------
// OpenGL VCL window beg: win
win.grid.allocate(0);
win.grid.num=0;
win.scale.allocate(0);
win.scale.num=0;
win.button.allocate(0);
win.button.num=0;
win.knob.allocate(0);
win.knob.num=0;
win.scrollbar.allocate(3);
win.scrollbar.num=3;
win.scrollbar[0].x0=200.0;
win.scrollbar[0].y0=19.0;
win.scrollbar[0].xs=256.0;
win.scrollbar[0].ys=16.0;
win.scrollbar[0].fxs=8.0;
win.scrollbar[0].fys=19.0;
win.scrollbar[0].name="_vcl_scrollbar0";
win.scrollbar[0].hint="";
win.scrollbar[0].min=0.000;
win.scrollbar[0].max=1.000;
win.scrollbar[0].pos=0.000;
win.scrollbar[0].dpos=0.100;
win.scrollbar[0].horizontal=1;
win.scrollbar[0].style=0;
win.scrollbar[0].resize();
win.scrollbar[1].x0=200.0;
win.scrollbar[1].y0=45.0;
win.scrollbar[1].xs=256.0;
win.scrollbar[1].ys=16.0;
win.scrollbar[1].fxs=8.0;
win.scrollbar[1].fys=19.0;
win.scrollbar[1].name="_vcl_scrollbar1";
win.scrollbar[1].hint="";
win.scrollbar[1].min=0.000;
win.scrollbar[1].max=1.000;
win.scrollbar[1].pos=0.000;
win.scrollbar[1].dpos=0.100;
win.scrollbar[1].horizontal=1;
win.scrollbar[1].style=0;
win.scrollbar[1].resize();
win.scrollbar[2].x0=200.0;
win.scrollbar[2].y0=70.0;
win.scrollbar[2].xs=256.0;
win.scrollbar[2].ys=16.0;
win.scrollbar[2].fxs=8.0;
win.scrollbar[2].fys=19.0;
win.scrollbar[2].name="_vcl_scrollbar2";
win.scrollbar[2].hint="";
win.scrollbar[2].min=0.000;
win.scrollbar[2].max=1.000;
win.scrollbar[2].pos=0.000;
win.scrollbar[2].dpos=0.100;
win.scrollbar[2].horizontal=1;
win.scrollbar[2].style=0;
win.scrollbar[2].resize();
win.interpbox.allocate(0);
win.interpbox.num=0;
win.dblist.allocate(0);
win.dblist.num=0;
// OpenGL VCL window end: win
//---------------------------------------------------------------------------
Look at the images here plotting real-time data on an oscilloscope for some ideas (I got this working for both GDI and OpenGL).
It is better to use pixel units instead of the OpenGL <-1,+1> screen units, for better visual quality and editing comfort.
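A minimal sketch of setting that up with the legacy fixed-function pipeline (the window size parameters and helper name are assumptions):

#include <GL/gl.h>

// Lay the GUI out in pixel units with the origin at the top-left corner,
// instead of the default <-1,+1> clip-space range.
void setPixelProjection(int windowWidth, int windowHeight)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, windowWidth, windowHeight, 0.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}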

Related

Qt | creating a dynamic object in a customized scene

Good day to all!
I would like to learn from the respected community how to use Qt Designer to create a simple GUI that does what is shown in the attached image. Namely: after a single click of the LMB, a marker is placed at the click location, indicating the beginning of the polygon, and a temporary polygon of the desired type (but of varying size) follows the mouse cursor until we click the LMB again, after which the final polygon is built from the point of the first click to the point of the second click. Such a finished polygon then has to be stored somewhere in memory for further work with it.
As a polygon here I show a complex sector built on some calculated points, but as a simple example you can take a straight line - I don't think that changes the essence of the program much.
I have already looked at many examples of different implementations of various things on QGraphicsScene, but I could not figure out exactly how to create an object the way I need. That is, I know that we need a separate class describing the polygon and calculating the coordinates it consists of, but I don't quite understand how to implement the dynamics - do I delete the temporary polygon and draw a new one at the new coordinates on each mouseMoveEvent, or what? (A minimal sketch of this follows below.)
P.S. If it isn't too difficult, could someone also show how to implement this through a redefined paintScene class, inheriting from and replacing the standard QGraphicsScene class? I am faced with the problem that when I create a custom scene class, clicking on the objects attached to it is ignored by the program: instead of, for example, dragging an object on the scene when clicking on it, the mousePressEvent of the scene rather than the object is triggered, and I do not understand what the problem is.
P.P.S. I apologize for my English and thank everyone for any help!
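For illustration only, a minimal sketch of the scene-subclass approach described above (DrawScene and all member names are hypothetical): the temporary item is created on the first click and only its geometry is updated on each mouseMoveEvent, rather than being deleted and recreated, and the base-class handlers are forwarded to so that clicks on existing items keep working - which is the usual cause of the behaviour described in the P.S.

#include <QGraphicsScene>
#include <QGraphicsSceneMouseEvent>
#include <QGraphicsPolygonItem>
#include <QPolygonF>
#include <QList>

class DrawScene : public QGraphicsScene
{
public:
    using QGraphicsScene::QGraphicsScene;

protected:
    void mousePressEvent(QGraphicsSceneMouseEvent *e) override
    {
        if (e->button() == Qt::LeftButton)
        {
            if (!m_temp)                          // first click: start a temporary polygon
            {
                m_start = e->scenePos();
                m_temp = addPolygon(makePolygon(m_start, m_start));
            }
            else                                  // second click: finalize and keep the item
            {
                m_temp->setPolygon(makePolygon(m_start, e->scenePos()));
                m_finished.append(m_temp);        // store for further work
                m_temp = nullptr;
            }
        }
        QGraphicsScene::mousePressEvent(e);       // keep default item interaction (dragging etc.) working
    }

    void mouseMoveEvent(QGraphicsSceneMouseEvent *e) override
    {
        if (m_temp)                               // update geometry instead of delete/recreate
            m_temp->setPolygon(makePolygon(m_start, e->scenePos()));
        QGraphicsScene::mouseMoveEvent(e);
    }

private:
    // Placeholder: compute the actual sector/polygon points from the two anchor points.
    static QPolygonF makePolygon(const QPointF &a, const QPointF &b)
    {
        QPolygonF p;
        p << a << QPointF(b.x(), a.y()) << b;
        return p;
    }

    QPointF m_start;
    QGraphicsPolygonItem *m_temp = nullptr;
    QList<QGraphicsItem*> m_finished;
};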

wxWidgets Overlay Text (C++)

I am trying to place some overlay text over a wxPanel.
I have several panels with completely different content, but I want to place this overlay text over all of these panels, in the top-right corner of each panel.
I am restricted to wxWidgets 2.8.12.
Do you see any way to achieve this behaviour?
Edit:
Here is a more detailed version of what I am trying to do:
I have a layout that consists of e.g. 5 containers, and each container can contain a module. A module can be a wxPanel containing plain text or input controls, or for example an OpenGL canvas, an image, or something else.
Because I have a lot of content and it does not fit on a single page, I want to make the modules inside a container exchangeable. It would also be nice if the user could perform this action using only the keyboard, e.g. pressing the key "3" switches the content of the third container.
Handling these shortcuts isn't a problem. However, I need to signal the identifier/hotkey of each container to the user.
I could do this by placing an additional headline above each container, but I want to waste as little space as possible on the GUI.
I could also draw directly onto each module's content, but I would have to do this for every module, and every module is designed in a different way (images, multi-column, OpenGL, ...), maybe even by different people.
So I am looking for a simple solution that indicates the number of these containers without consuming much space.
Thanks for your help.
You can use a wxWindowDC to draw anywhere on the window, even over child windows. However, anything you draw will be painted over whenever the windows or controls repaint themselves. You can draw your overlay in an UpdateUI event handler to minimize this. I have used this approach with success on Windows with wxWidgets 2.8.12. Not sure if it works with OpenGL though.
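A minimal sketch of that idea (the helper name, the panel pointer and the label are illustrative, not from the question); call it from an EVT_UPDATE_UI handler so the overlay is refreshed regularly:

#include <wx/wx.h>

void DrawHotkeyOverlay(wxWindow* panel, const wxString& label)
{
    wxWindowDC dc(panel);                            // can draw over the panel and its children
    wxSize size = panel->GetClientSize();
    wxCoord w, h;
    dc.SetFont(*wxSMALL_FONT);
    dc.SetTextForeground(*wxRED);
    dc.GetTextExtent(label, &w, &h);
    dc.DrawText(label, size.GetWidth() - w - 4, 4);  // 4 px padding from the top-right corner
}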

How to determine the content step value of scrollarea

I'm making my own UI from scratch using OpenGL, which is why I'm asking this; please don't be discouraging, as this is just a hobby project.
Currently, I'm stuck on how scrollbars really work. In my current implementation, the content scrolls at the wrong step value, and so does the thumb; I set the step values manually, e.g. a 1px step for each of them.
The structure of my scrollbar implementation is as follows:
I draw the scrollbar, i.e. the main rectangle where the 3 buttons lie.
Those 3 buttons are: thumb, buttonBack and buttonNext.
All of them do the basic scrollbar logic, i.e. when I click each one of them, they move. But the scrollbar as a whole doesn't know how to scroll the contents.
So what I did is: I made another object that I call scrollarea.
It has two scrollbars, a vertical and a horizontal one.
I made functions called scrollToX and scrollToY which do what their names say, but the step values I pass to them are set up manually.
I tried to google scrollbar, scrollarea, scrollview or whatever you call that scrollable rectangle thing, but all I found were implementations, and I could not find any guides on how to build your own. I had no choice but to look at those implementations. I tried my best to understand them, but their whole UI structure is very different from mine, and I could not find anything useful there.
So I ask again here: can anybody explain to me how to make a properly functional scrollbar?
The most specific things I'm really concerned about are:
How do I determine the thumb step value?
How do I determine the content step value?
All of this depends on your content (a sketch of the usual step math follows below):
Is it just an image? If so, you only need to change the offset depending on the size of the image.
Is it a list of values like in Windows Explorer? Then you first need to create a data structure that contains all of it, and show only the content that fits within the window as it scrolls.
OpenGL does not really come into this discussion.
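A minimal sketch of the proportional-scrollbar math commonly used to derive both step values (all names and units are illustrative):

struct ScrollMetrics
{
    float contentSize;   // total scrollable content length (px)
    float viewportSize;  // visible area length (px)
    float trackLength;   // track length between buttonBack and buttonNext (px)
};

// Thumb length is the visible fraction of the content, mapped onto the track.
float thumbLength(const ScrollMetrics& m)
{
    float t = m.trackLength * m.viewportSize / m.contentSize;
    return t < 16.0f ? 16.0f : t;                 // clamp to a minimum grab size
}

// Convert a thumb position on the track into a content offset.
float contentOffset(const ScrollMetrics& m, float thumbPos)
{
    float maxThumb   = m.trackLength - thumbLength(m);
    float maxContent = m.contentSize - m.viewportSize;
    return maxThumb > 0.0f ? thumbPos / maxThumb * maxContent : 0.0f;
}

// Step applied by buttonBack/buttonNext: pick a content step (e.g. one line height),
// then derive the matching thumb step so both stay in sync.
float thumbStepForContentStep(const ScrollMetrics& m, float contentStep)
{
    float maxThumb   = m.trackLength - thumbLength(m);
    float maxContent = m.contentSize - m.viewportSize;
    return maxContent > 0.0f ? contentStep * maxThumb / maxContent : 0.0f;
}

The key relation is that the thumb travel (trackLength - thumbLength) maps linearly onto the content travel (contentSize - viewportSize), so any content step has a matching thumb step and vice versa.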

Efficient method for finding object in map based on coordinates

I am building an editor using C++/Qt which has a click-and-drag feel to it. The behaviour is similar to schematic editors (Eagle, KiCAD, etc.), Microsoft Visio, or other programs where you drag objects from a toolbar into a central editing area.
My problem is that when the user clicks inside the custom widget I want to be able to select the instance of the box-like object and manipulate it. There will also be lines connecting the boxes together. However, I can't decide on an efficient method for selecting those objects.
I have two main thoughts on how to program this. The first is that the widget drawing the entire editor would simply encapsulate every one of the instances of the box. The other is to have each instance of the box (which is in my model) carry with it an instance of a QWidget which would handle rendering the box (which would be in my view... but it would end up being strongly attached to the model). As for the lines connecting them, since they don't have square bounding boxes, they will have to be rendered by the containing widget.
So here is the summary of how I see this being done:
The editor widget turns into a container which holds the widgets, and the widgets process their own click events. The potential issue here is that I don't know how to turn the custom widget into a layout that allows click-and-drag functionality.
The editor widget takes care of all the rendering and processes the mouse clicks (the easier way, in that I don't have to worry about layout; it's just selecting the instances efficiently that I'm unsure about).
So, for the second method, I plan on having each box-like instance carry a bounding rectangle, with the lines represented by 3-4 pixel wide bounding rectangle segments (they are at 90 degree angles). I could iterate through every single box and line, but that seems really inefficient.
The big question: Is there some sort of data structure I can hold rectangles in and link them to widgets (or anything else for that matter) and then give it two coordinates (such as mouse coordinates) and have it spit me out the bounding box or linked object that those coordinates are inside of?
It sounds like your real question is about finding a good way to implement your editor, not the specifics of rectangle intersection performance.
You may be interested in Qt's "Diagram Scene" example project, which demonstrates the QGraphicsScene API. It sounds like a good fit for the scenario you describe. (The full source for the example ships with Qt.)
The best part is that you still don't have to implement hit testing yourself, because the API already provides what you are looking for (e.g., QGraphicsScene::itemAt()).
It's worth noting that internally, QGraphicsScene uses a simple iterative method to perform hit tests. As others have pointed out, this isn't going to be a serious bottleneck unless your scenes have a lot of individual items.
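For illustration, a minimal sketch of hit testing through the scene from a view subclass (EditorView is an assumed name; the itemAt() overload shown is the Qt 5 one, which takes the view transform):

#include <QGraphicsView>
#include <QGraphicsScene>
#include <QGraphicsItem>
#include <QMouseEvent>

class EditorView : public QGraphicsView
{
public:
    using QGraphicsView::QGraphicsView;

protected:
    void mousePressEvent(QMouseEvent *e) override
    {
        QPointF scenePos = mapToScene(e->pos());
        QGraphicsItem *hit = scene()->itemAt(scenePos, transform());  // box or connection under the cursor
        if (hit)
        {
            // select the item / start dragging it
        }
        QGraphicsView::mousePressEvent(e);        // keep the default selection/drag behaviour
    }
};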

Game Interface Design - VBO or Immediate Mode?

I've been developing a 2D RPG based on LWJGL with Java 1.6 for 3 months now. My next goal is to write all of the non-game-ish stuff. This includes menus, text input boxes, buttons and things like the inventory and character information screens. As I am a Computer Engineering student, I'm trying to write everything on my own (except, of course, for the OpenGL part of the LWJGL) so that I can "test" myself at writing a simple 2D game engine.
I know that making such things from scratch basically requires mapping textures to quads (like the buttons), writing text on them and testing mouse/keyboard events which trigger other events inside the code.
My doubt is: should I use VBOs (as I do for the actual game rendering) or immediate mode when rendering such elements? I don't really know whether immediate mode would be such a drop in performance. Another point: do the interface elements have to be updated as fast as the game itself? I don't think so, because nothing is actually moving... Are actual games made like that?
Immediate mode is more straightforward for this task; you would not need to take care of caching and control composition/batching. The performance drop-off is not that big, unless you render a lot of text (thousands of letters) with each glyph in a separate glBegin..glEnd. If you don't use VBOs anywhere else, I would recommend trying them out for text output and doing everything else in the easier immediate mode.
GUI elements might not change as often as the game state does, but there's a catch: you may need to update them each time there's a cursor interaction (e.g. a button gets an OnMouseOver event and needs to be rendered with a highlight). These kinds of events can happen very frequently, so that's why rendering may be needed at full speed.
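For reference, a minimal immediate-mode sketch of one textured GUI quad, written as plain C-style GL calls (LWJGL exposes the same functions on its GL11 class); all parameters are illustrative:

#include <GL/gl.h>

void drawButtonQuad(unsigned int textureId, float x, float y, float w, float h)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, textureId);   // texture previously uploaded for this button
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
    glEnd();
}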