I've been developing a 2D RPG with LWJGL and Java 1.6 for 3 months now. My next goal is to write all of the non-game-ish stuff. This includes menus, text input boxes, buttons and things like the inventory and character information screens. As I am a Computer Engineering student, I'm trying to write everything on my own (except, of course, for the OpenGL part of LWJGL) so that I can "test" myself by writing a simple 2D game engine.
I know that making such things from scratch basically requires mapping textures to quads (like the buttons), drawing text on them and handling mouse/keyboard events which trigger other events inside the code.
My doubt is: should I use VBOs (as I do for the actual game rendering) or Immediate Mode when rendering such elements? I don't really know whether Immediate Mode would cause that much of a performance drop. Another point is: do the interface elements have to be updated as fast as the game itself? I don't think so, because nothing is actually moving... Are actual games made like that?
Immediate Mode is more straightforward for the task; you would not need to worry about caching or control composition/batching. The performance drop is not that big unless you render a lot of text (thousands of glyphs), with each glyph in a separate glBegin..glEnd. If you don't use VBOs anywhere else, I would recommend trying them out for text output and doing everything else in the easier Immediate Mode.
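For illustration, here is a minimal immediate-mode sketch of one textured GUI quad (plain C-style GL; the calls map one-to-one onto LWJGL's GL11 bindings, and the pixel-unit orthographic projection and texture id are assumed to be set up elsewhere):

// Sketch: draw one GUI element (e.g. a button) as a textured quad.
void drawGuiQuad(GLuint texture, float x, float y, float w, float h)
{
    glBindTexture(GL_TEXTURE_2D, texture);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
    glEnd();
}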
GUI elements might not change as often as the game state does, but there's a catch: you may need to update them each time there's a cursor interaction (e.g. a button gets an OnMouseOver event and needs to be rendered with a highlight). These kinds of events can happen very frequently, which is why rendering may still be needed at full speed.
I've got a C++/QWidgets application with two embedded Qt3DWindows. The 3D windows can be hidden (i.e. the user switches to another view) while work is done in other parts of the application. Whenever the user switches back to the view with the embedded Qt3DWindow, there are a lot of changes to be made to the scenegraph.
Rendering is set to OnDemand, but whenever the view is switched back to the 3D windows there is a big lag, up to several seconds. Profiling the application has shown that a lot of rendering work is done at that point because the scenegraph is changing (most of the time is spent in the material classes).
My working theory is that each change to the scenegraph (i.e. removing an entity, adding a new one) causes a separate render update, which leads to the lag due to the sheer number of separate changes.
So my question is:
Is there a way to stop all render updates until the scenegraph has been updated/modified completely?
I'm thinking of something similar to the beginResetModel() and endResetModel() methods in QAbstractItemModel.
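For reference, the pattern I mean looks roughly like this in a list model (a sketch; BulkModel and its members are made-up names):

#include <QAbstractListModel>
#include <QStringList>
#include <QVariant>

// Sketch of the QAbstractItemModel pattern I mean: a bulk change is
// bracketed so attached views refresh only once, at endResetModel().
class BulkModel : public QAbstractListModel
{
public:
    int rowCount(const QModelIndex&) const override { return m_items.size(); }
    QVariant data(const QModelIndex& idx, int role) const override
    { return role == Qt::DisplayRole ? QVariant(m_items.at(idx.row())) : QVariant(); }

    void reload(const QStringList& items)
    {
        beginResetModel();   // views stop reacting to individual changes
        m_items = items;     // arbitrary amount of restructuring happens here
        endResetModel();     // one single update once everything is done
    }

private:
    QStringList m_items;
};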
I'm designing a simple GUI framework from scratch as a project, using OpenGL and nothing else external, and need some advice on how I might implement user interaction.
Basically, I have a base class GUIItem from which all elements inherit. This gives each item some basic variables such as position and a vector to contain child elements, as well as some basic functions for mouse movement and clicking.
All elements are set up as above, with their relevant member variables.
What I'm struggling with is how to implement user interaction properly. In my window manager I would create a new instance of an item, say GUIButton, and call it button1. The window manager would, upon a click occurring, iterate through its list of elements and any child elements they may have, calculate a rectangular area around each object based on its coordinates, height and width, then run any "on click" function associated with that item, such as changing the value of textlabel1.
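Roughly, the loop I have in mind looks like this (just a sketch; the GUIItem accessors and member names here are hypothetical):

// Hypothetical window manager dispatch: rectangular hit test per element,
// then recurse into its children.
void WindowManager::handleClick(int mouseX, int mouseY)
{
    for (GUIItem* item : m_elements)
        dispatchClick(item, mouseX, mouseY);
}

void WindowManager::dispatchClick(GUIItem* item, int mx, int my)
{
    if (mx >= item->x() && mx < item->x() + item->width() &&
        my >= item->y() && my < item->y() + item->height())
        item->onClick();                        // e.g. change the value of textlabel1
    for (GUIItem* child : item->children())     // child elements
        dispatchClick(child, mx, my);
}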
Firstly, is there a better way to do this calculation? It would work for rectangular elements, but circular elements and other shapes would have a much larger erroneous area that could be clicked. Ideally I would check pixels, but I've no real idea how that would be achieved. I've heard about but never used GLUT (my project only allows its use for handling mouse/keyboard interaction, though). Does GLUT provide anything to assist in this case?
My main issue is with handling what should happen when an "on click" event actually occurs. At the moment GUIButton, for example, has an "on click" function built in, so as far as I can see I'd have to make it a virtual function, meaning that each new button I created would need its own class just to override the "on click" function, and each instance of a button would be an instance of a unique class that simply inherits from GUIButton. This seems messy to me, as I've no idea where I would store all those classes, and it seems like a lot of extra code. Would I be creating a button1.cpp and button1.h file?
Any advice on this would really be welcome, as I'm new to C++ and OpenGL, it's the first time I've been exposed to GUI programming, and there's not a lot to go on when an existing GUI framework is the usual choice.
If you want something stupidly simple and fast, then you could:
create a shadow screen buffer containing an ID/index/pointer instead of a color
pre-render this buffer
Just render each of your visual components to it, but instead of coloring/texturing, fill in the ID/index/pointer of the rendered component. Do not forget to clear this with some NULL value first... After this you should have a mask of your components. You need to do this just once...
On mouse events
you simply convert the mouse coordinates to the shadow screen space and pick the value. If it is NULL, then you clicked (or whatever) on an empty area. If it contains an ID, update or call the callbacks for that component. If you have a list of all components, then the ID can be the list index; otherwise use the component's actual pointer, or encode it as (component_type, component_index). As you can see, this is a pretty fast O(1) item selection no matter how many components you have... The shadow screen can have a different resolution than your actual screen (to save memory).
This gives pixel-perfect mouse selection accuracy regardless of the shape of your components, without the need for nested component search loops.
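Here is a rough CPU-side sketch of the idea (all names are just for illustration; rectangles stand in for whatever shape each component actually rasterizes, so the pixel-perfect part comes from writing real shapes into the mask):

#include <vector>
#include <algorithm>

struct Rect { int x, y, w, h; };

const int SHADOW_W = 512, SHADOW_H = 512;                  // may be lower-res than the screen
std::vector<int> shadow(SHADOW_W * SHADOW_H, -1);          // -1 plays the role of NULL

// Rebuild the mask once, whenever the layout changes.
void renderShadow(const std::vector<Rect>& components)
{
    std::fill(shadow.begin(), shadow.end(), -1);
    for (int id = 0; id < (int)components.size(); ++id)    // write the index instead of a color
    {
        const Rect& r = components[id];
        for (int y = r.y; y < r.y + r.h; ++y)
            for (int x = r.x; x < r.x + r.w; ++x)
                shadow[y * SHADOW_W + x] = id;
    }
}

// O(1) pick on every mouse event.
int pick(int mouseX, int mouseY, int screenW, int screenH)
{
    int sx = mouseX * SHADOW_W / screenW;                   // map mouse to shadow-buffer space
    int sy = mouseY * SHADOW_H / screenH;
    return shadow[sy * SHADOW_W + sx];                      // -1 means an empty area was hit
}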
[Notes]
As I have done this stuff before, here are some hints:
create a window class containing the configuration of your components for a single screen. Programs usually have several screens with different sets of components, and rebuilding the screens dynamically over and over again just because you switch page/screen is a pain.
use separate lists of components, one list per component type.
create an IDE editor for your windows; see the drag & drop example in C++, it might come in handy for this. Add get/set functions controlled by string/enum or flag to easily obtain/change properties, which makes an Object Inspector possible. Also, this is how my IDE looks:
The window is saved from the IDE directly as C++ code that I can just copy into my app. This is the above example without the knob (I forgot to save it):
//---------------------------------------------------------------------------
// OpenGL VCL window beg: win
win.grid.allocate(0);
win.grid.num=0;
win.scale.allocate(0);
win.scale.num=0;
win.button.allocate(0);
win.button.num=0;
win.knob.allocate(0);
win.knob.num=0;
win.scrollbar.allocate(3);
win.scrollbar.num=3;
win.scrollbar[0].x0=200.0;
win.scrollbar[0].y0=19.0;
win.scrollbar[0].xs=256.0;
win.scrollbar[0].ys=16.0;
win.scrollbar[0].fxs=8.0;
win.scrollbar[0].fys=19.0;
win.scrollbar[0].name="_vcl_scrollbar0";
win.scrollbar[0].hint="";
win.scrollbar[0].min=0.000;
win.scrollbar[0].max=1.000;
win.scrollbar[0].pos=0.000;
win.scrollbar[0].dpos=0.100;
win.scrollbar[0].horizontal=1;
win.scrollbar[0].style=0;
win.scrollbar[0].resize();
win.scrollbar[1].x0=200.0;
win.scrollbar[1].y0=45.0;
win.scrollbar[1].xs=256.0;
win.scrollbar[1].ys=16.0;
win.scrollbar[1].fxs=8.0;
win.scrollbar[1].fys=19.0;
win.scrollbar[1].name="_vcl_scrollbar1";
win.scrollbar[1].hint="";
win.scrollbar[1].min=0.000;
win.scrollbar[1].max=1.000;
win.scrollbar[1].pos=0.000;
win.scrollbar[1].dpos=0.100;
win.scrollbar[1].horizontal=1;
win.scrollbar[1].style=0;
win.scrollbar[1].resize();
win.scrollbar[2].x0=200.0;
win.scrollbar[2].y0=70.0;
win.scrollbar[2].xs=256.0;
win.scrollbar[2].ys=16.0;
win.scrollbar[2].fxs=8.0;
win.scrollbar[2].fys=19.0;
win.scrollbar[2].name="_vcl_scrollbar2";
win.scrollbar[2].hint="";
win.scrollbar[2].min=0.000;
win.scrollbar[2].max=1.000;
win.scrollbar[2].pos=0.000;
win.scrollbar[2].dpos=0.100;
win.scrollbar[2].horizontal=1;
win.scrollbar[2].style=0;
win.scrollbar[2].resize();
win.interpbox.allocate(0);
win.interpbox.num=0;
win.dblist.allocate(0);
win.dblist.num=0;
// OpenGL VCL window end: win
//---------------------------------------------------------------------------
Look at the images here: plotting real-time data on an oscilloscope, for some ideas (I got this working for both GDI and OpenGL).
It is better to use pixel units instead of OpenGL's <-1,+1> screen units, for visual quality and editing comfort.
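For example, a pixel-unit projection can be set up roughly like this (fixed-function sketch; windowWidth/windowHeight stand for whatever your window reports):

// Sketch: map GL coordinates to window pixels (origin top-left, y pointing down).
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, (double)windowWidth, (double)windowHeight, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();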
What I need to do is create a program that overlays the whole screen, and every 30 seconds the screen needs to flash black once.
The program just needs to be on top of everything; it doesn't have to work over the top of games, but I wouldn't say no if it did!
But I've got no idea where to start. Ideally the solution would be cross-platform for both Windows and OS X.
Does anybody have any ideas about where I should start, or could whip up a quick demo?
OpenGL (you tagged it as such) will not help you with this.
Create a program, that overlays the whole screen,
The canonical way to do this is by creating a decorationless, borderless top-level window with some stay-on-top property set.
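For the window itself, here is a rough sketch using Qt (my pick purely for illustration; any toolkit that can set equivalent window flags will do):

#include <QApplication>
#include <QGuiApplication>
#include <QPalette>
#include <QScreen>
#include <QWidget>

// Sketch only: a frameless, always-on-top, black top-level window covering
// the primary screen. Showing/hiding it on a timer would give the "flash".
int main(int argc, char** argv)
{
    QApplication app(argc, argv);

    QWidget overlay;
    overlay.setWindowFlags(Qt::FramelessWindowHint | Qt::WindowStaysOnTopHint);
    overlay.setAutoFillBackground(true);
    QPalette pal = overlay.palette();
    pal.setColor(QPalette::Window, Qt::black);
    overlay.setPalette(pal);
    overlay.setGeometry(QGuiApplication::primaryScreen()->geometry());

    overlay.show();
    return app.exec();
}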
and every 30 seconds the screen needs to flash black once.
How do you define "flash black once"? Do you mean you want the screen to go black for one single vertical retrace period, or for a given amount of time? Being the electronics tinkerer I am, honestly, I'd do this using a handful of transistors, resistors and capacitors, blanking the analog VGA signal.
Anyway, if you want to do this in software, it is going to be hard work. If you did this using the aforementioned stay-on-top window, then when you "flash" it away, all the programs with visible output would receive redraw events, which would take some time to process. In the best-case scenario the system uses a compositing window manager, which can show the desktop practically immediately. Without a compositor it's going to be impossible to "flash" the screen.
Ideally the solution would be cross-platform for both windows and osx
A task like this cannot be solved cross-platform. There's too much OS-dependent work to do for this.
I presume this is for some kind of neurological or psychological experiment. I think doing this using some VGA-intercepting circuitry would actually be the easier, quicker-to-implement solution. I can help you with that, but I think there's another Stack Exchange site better suited for this. Unfortunately, digital display interfaces (DVI, HDMI and DisplayPort) use a complex line-code scheme which cannot be blanked as easily as VGA, so you would need a computer capable of analog (=VGA) output and a display with a VGA input.
I am working on a touch screen application (WinRT) and currently draw some graphics to the screen. Because it is a touch application, I want to enable pinch-to-zoom for scaling the entire content. For a better experience, I only want to redraw the graphics once the pinch gesture is complete. For the intermediate scalings, I would like to reuse the current bitmap and perform (if possible) a GPU-only scaling (enlarging the bitmap).
Basically, I want to do exactly what iOS and Windows Phone have been doing for years now.
How can I implement this in Direct2D?
As a bonus, if you know a good resource for reading on Direct2D, please tell me. The MSDN documentation is really poor and I have to hunt through different blogs and magazine articles to learn :(
What I tried so far:
m_target->SetTransform(
    D2D1::Matrix3x2F::Scale(
        D2D1::Size(1.5f, 1.5f),
        D2D1::Point2F(500.0f, 500.0f))
);
However, if I do this for interactive elements (like page zoom-in/zoom-out), all objects are rendered (which is also slow).
Another option could be to draw into a BITMAP and use that as the base for the transforms. However, I am not sure if this is a good approach.
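Roughly, what I imagine that would look like (a sketch only; DrawContent stands for my existing drawing code, and the function/parameter names are made up):

#include <d2d1.h>
#include <wrl/client.h>

void DrawContent(ID2D1RenderTarget* target);   // the app's existing (expensive) drawing code

// Render the scene once into a compatible bitmap render target and keep the bitmap.
Microsoft::WRL::ComPtr<ID2D1Bitmap> CacheScene(ID2D1RenderTarget* target)
{
    Microsoft::WRL::ComPtr<ID2D1BitmapRenderTarget> bitmapTarget;
    Microsoft::WRL::ComPtr<ID2D1Bitmap> cached;
    target->CreateCompatibleRenderTarget(&bitmapTarget);
    bitmapTarget->BeginDraw();
    DrawContent(bitmapTarget.Get());           // expensive full redraw, done once
    bitmapTarget->EndDraw();
    bitmapTarget->GetBitmap(&cached);
    return cached;
}

// During the pinch gesture, only redraw the cached bitmap under a scale transform.
void DrawWhilePinching(ID2D1RenderTarget* target, ID2D1Bitmap* cached,
                       float scale, D2D1_POINT_2F center)
{
    target->BeginDraw();
    target->SetTransform(D2D1::Matrix3x2F::Scale(D2D1::Size(scale, scale), center));
    target->DrawBitmap(cached);                // cheap: only one bitmap is resampled
    target->EndDraw();
}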
Note: I am currently debugging on a Desktop but want to target tablets. I have to consider that tablets are orders of magnitude slower. That's why I try to optimize this functionality.
Thanks!
I was looking into making a game with Qt and was wondering if QML has gotten to the point where it could be used as a serious tool on the desktop. I have seen some posts from Qt stating that they will be transitioning most things to QML eventually, so this seems like it may be the way to go, at least according to Qt.
Edit: I realize that QML probably wouldn't be the best bet for a 3D game with heavy graphics. I was looking more for something that does mostly 2D stuff, like a platformer-type game.
I've seen this: http://labs.qt.nokia.com/2010/08/12/a-guide-to-writing-games-with-qml/. So it's obviously possible to some extent. I have also seen some impressive games made solely with JavaScript, which I believe is the basis of QML. I was just curious as to what would be the best way to go with Qt at the moment, since things have been changing lately...
It may depend on "how long" you want to wait before releasing your game.
The Trolls/Qt are redoing their "graphics stack" right now: rather than the historic "every widget renders itself" approach (which is the wrong paradigm for games and rich mobile apps), they are re-implementing it as a single graphics stack that renders the WHOLE interface, where the "widgets" themselves are mere data sets that feed into the rendering. In short, the goal is to give desktop/mobile applications the same performance that high-end games have achieved for decades (with their own graphics stacks that look nothing like the typical X/Xlib/Motif/Xvt/Win/MFC/Qt application graphics stack). Further, the Qt5 plans (in planning/development now; they claim a release sometime next year) rely on OpenGL for this graphics stack implementation.
After this work, the pipeline will be: Widgets==>QML==>(C++ Graphics Stack)==>Hardware. Currently (Qt 4 and previous) it is: QML==>Widgets==>(C++ Graphics Stack)==>Hardware.
You can google for various posts/discussions on this, or here's a long-ish presentation that talks about these efforts: http://qt.nokia.com/developer/learning/online/talks/developerdays2010/tech-talks/performance-do-graphics-the-right-way/
IMHO, QML makes more sense for games, since the interface components are "independent actors" (e.g., not tied to each other through layouts). That's also why QML makes so much more sense for mobile (where screen real estate is at a premium), and for very flashy desktop apps (although it is still relatively young and unproven there).
QML already has lots of rendering/animation options, but they are mostly very rich 2D (with which you could simulate 3D pretty well). QML 3D is undergoing heavy revision right now, but the new stuff looks really good (and sits on OpenGL). So, if you want heavy 3D, it might be experimentation time for the moment, until you see the new Qt5 interfaces and can take advantage of the hardware acceleration (depending on how much 3D you need).
The performance specs I've seen from the new Qt5 stuff with the new graphics stack (in prototype development) are quite impressive, so much so that I've been thinking about writing some games in QML just to play with it. If this were twelve months from now (or so, after the release of Qt5), I'd bet QML would be the best/easiest decision for games (the components are independent actors, it's very simple to use, and I'd push all the game-specific heavy stuff into C++, which is really easy to do with QML on top).
QML is definitely a viable option for designing 2D games and can save you a lot of time and lines of code.
V-Play (v-play.net) is a cross platform 2D game engine based on Qt/QML with many useful V-Play QML game components for handling multiple display resolutions & aspect ratios, animations, particles, physics, multi-touch, gestures, path finding and more (API reference).
If you are curious about the games made with V-Play, here is a quick selection of them:
Squaby: a tower defense game
Chicken Outbreak: a platformer like Doodle Jump
Blockoban: puzzle game
Crazy Elephant: a game similar to Angry Birds
Snowball Mania: multiplayer action game
Blitzkopf: brain game
Remember that QML is only designed to lay out the UI. Fundamentally, it acts as a QGraphicsView with lots of utility functions (code in QML is at least three times shorter than the Qt/C++ equivalent; at least, that's how it felt to me).
The core of the application is still handled either by JavaScript files (if things are not too complex) or by C++/Qt files (if you want the full extent of Qt's possibilities).
As Jeremy Salwen said, for a game that looks like a smartphone app game (big sprites that move with nice transitions, with simple logic behind it), QML is more than sufficient.
But if you want something complex, you will end up using QML only for QDeclarativeItem classes that you have previously defined in C++. Not that useful.