I have two threads: one main thread (OpenGL) for 3D rendering and one thread for logic. How should I connect the threads if I want to create a box mesh in the rendering thread, when the request comes from the logic thread?
In this case the logic thread would use OpenGL commands, which is not possible because every OpenGL command should only be executed in the main thread. I know that I cannot share an OpenGL context across different threads (which seems to be a bad idea anyway), so how should I solve this problem? Does some general-purpose design pattern exist for this, or something else? Thanks.
You could implement a draw command queue. Each draw command contains whatever is needed to make the required OpenGL calls. Each frame, the rendering thread drains the current queue and processes the commands. Any other thread prepares its own commands and enqueues them at any time onto the queue for the next frame.
Very primitive draw commands can be implemented as a class hierarchy with a virtual Draw method. Of course this is not a small change at all, but modern engines adopt this approach, albeit in a much more advanced form. It can be efficient if the subsystems that submitted their command objects re-use them in the next frame, including their buffers. So each submodule constantly prepares and updates its draw command, but submits it only when it should be rendered, based on some logic.
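A minimal sketch of such a queue in C++, assuming a mutex-guarded vector is acceptable for the contention involved (the DrawCommand and CommandQueue names are illustrative, not from any particular engine):

```cpp
#include <memory>
#include <mutex>
#include <vector>

struct DrawCommand {
    virtual ~DrawCommand() = default;
    // Issues the actual OpenGL calls; must run only on the rendering thread.
    virtual void Draw() = 0;
};

class CommandQueue {
public:
    // Called from the logic thread (or any other producer thread).
    void Enqueue(std::unique_ptr<DrawCommand> cmd) {
        std::lock_guard<std::mutex> lock(mutex_);
        pending_.push_back(std::move(cmd));
    }

    // Called once per frame on the rendering thread.
    void DrainAndExecute() {
        std::vector<std::unique_ptr<DrawCommand>> batch;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            batch.swap(pending_);  // grab the whole queue under the lock
        }
        for (auto& cmd : batch)    // run the GL calls outside the lock
            cmd->Draw();
    }

private:
    std::mutex mutex_;
    std::vector<std::unique_ptr<DrawCommand>> pending_;
};
```

The original box-mesh request would then be one concrete subclass: the logic thread enqueues a hypothetical CreateBoxMesh command whose Draw method performs the glGenBuffers/glBufferData calls on the rendering thread.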
There are various ways to approach this. One is to implement a command queue with the logic thread being a command producer and the rendering thread the consumer.
Another approach is to make use of an auxiliary OpenGL context, which is set up to share its data with the primary OpenGL context. You can have both contexts made current at the same time in different threads. And from OpenGL 3.x core onward, you can make a context current without a drawable. You can then use the auxiliary context to load new data, map buffers and so on.
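On Windows this looks roughly like the following sketch (hedged: error handling is omitted, and hdcMain/hdcLoader are assumed to be valid device contexts with compatible pixel formats):

```cpp
#include <windows.h>
#include <GL/gl.h>

// Create a main context and an auxiliary loader context that share data.
// wglShareLists must be called before objects are created in either context.
void SetupSharedContexts(HDC hdcMain, HDC hdcLoader,
                         HGLRC& mainCtx, HGLRC& loaderCtx) {
    mainCtx   = wglCreateContext(hdcMain);
    loaderCtx = wglCreateContext(hdcLoader);
    wglShareLists(mainCtx, loaderCtx);  // share textures, buffer objects, ...
}

// Rendering thread:  wglMakeCurrent(hdcMain, mainCtx);
// Loader thread:     wglMakeCurrent(hdcLoader, loaderCtx);
//                    ... glBufferData / glMapBuffer uploads run here ...
```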
Currently I'm creating an OpenGL application that will render to multiple "targets" (e.g. different WinForms panels, separate windows, etc.).
I actually have a working prototype that uses multiple rendering threads (one for each target). In the threads I use wglMakeCurrent to switch between the different rendering targets.
The problem is: this approach is incredibly slow. Sometimes a rendering cycle can take 30ms or more (and I'm only drawing a single quad).
While doing a little research, I read that rendering on multiple threads in parallel is a very very bad idea (race conditions and whatnot). So now I need some advice on how to do this the right way.
Should I use only one thread to draw to all my "targets"?
Can I get rid of all those slow wglMakeCurrent calls?
Are there any other tricks/best practices to do fast multi-window rendering?
My straight answer would be NO. But I am curious how they created this video http://www.youtube.com/watch?v=HC3JGG6xHN8
They used video editing software. They recorded two nearly deterministic run-throughs of their engine and spliced them together.
As for the question posed by your title, not within the same window. It may be possible within the same application from two windows, but you'd be better off with two separate applications.
Yes, it is possible. I did this as an experiment for a graduate course; I implemented half of a deferred shading graphics engine in OpenGL and the other half in D3D10. You can share surfaces between OpenGL and D3D contexts using the appropriate vendor extensions.
Does it have any practical applications? Not many that I can think of. I just wanted to prove that it could be done :)
I digress, however. That video is just a side-by-side of two separately recorded videos of the Haven benchmark running in the two different APIs.
My straight answer would be NO.
My straight answer would be "probably yes, but you definitely don't want to do that."
But I am curious how they created this video http://www.youtube.com/watch?v=HC3JGG6xHN8
They prerendered the video and simply combined it in a video editor. Because the camera follows a fixed path, that can be done easily.
Anyway, you could render both (DirectX/OpenGL) scenes into offscreen buffers, and then combine them using either API to render the final result. You would read data from the render buffer in one API and transfer it into a renderable buffer used by the other API. The simplest way to do it is through system memory (which will be VERY slow); NVIDIA in particular provides an extension for exactly this scenario (WGL_NV_DX_interop), which keeps the transfer on the GPU.
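The OpenGL half of the system-memory path boils down to a readback like this sketch (the function name and buffer format are illustrative; the Direct3D side would then Map a staging texture and copy the pixels in):

```cpp
#include <windows.h>
#include <GL/gl.h>
#include <vector>

// Read the finished frame back from the current GL context into CPU memory,
// so another API can upload it. Slow: this stalls the GL pipeline.
std::vector<unsigned char> ReadBackFrame(int width, int height) {
    std::vector<unsigned char> pixels(width * height * 4);
    glReadBuffer(GL_BACK);                 // or a bound FBO color attachment
    glPixelStorei(GL_PACK_ALIGNMENT, 1);   // tightly packed rows
    glReadPixels(0, 0, width, height,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    return pixels;
}
```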
On the Windows platform you could also place two child windows/panels side by side on the main window (so you'd get the same effect as in that YouTube video), and create an OpenGL context for one of them and a DirectX device for the other. Unless there's some restriction I'm not aware of, that should work, because in order to render 3D graphics you need a window with a handle (HWND). However, both windows will be completely independent of each other and will not share resources, so you'll need twice the memory for textures alone to run them both.
I realize this is probably a ridiculous question, but before trying to figure out what libraries to use for which projects, I think it makes sense to really understand the purpose of such libraries first.
A lot of video games use libraries like OpenGL. All the tutorials I've seen of such libraries demonstrate how to write code that tells the computer to draw something. Thing is, in games these days everything is modeled using software such as ZBrush, Maya, or 3ds Max. The models are textured and are good to go. It seems like all you'd need to do is write an animation loop that draws the models and updates repeatedly rather than actually program the code to draw every little thing. That would be both extremely time consuming and would make the models useless. So where does OpenGL or Direct3D come in in relation to video games and 3D art? What is so crucial about them when all the graphics are already created and just need to be loaded and drawn? Are they used mainly for shaders and effects?
This question may just prove how new I am to this, but it's one I've never heard asked. I'm just starting to learn programming and I'm understanding the code and logic fairly well, but I don't understand graphics libraries or certain frameworks at all and tutorials are not helping.
It seems like all you'd need to do is write an animation loop that draws the models and updates repeatedly rather than actually program the code to draw every little thing.
Everything that happens in a computer does so because a program of some form tells it exactly what to do. The letters that this message is composed of only appear because your web browser of choice downloaded this file over HTTP via TCP/IP, decoded its UTF-8-encoded text, interpreted that text as defined by the XML, HTML, JavaScript, and so forth standards, and then displayed the visible portion as defined by the Unicode standard for text layout and in accord with HTML et al, using the displaying and windowing abilities of your OS or window manager or whatever.
Every single part of that operation, from the downloading of the file to its display, is governed by a piece of code. Every pixel you are looking at on the screen is where it is because some code put it there.
HTML alone doesn't mean anything. You cannot just take an HTML file and blast it to the screen. Some code must interpret it. You can interpret HTML as a text file, but if you do, it loses all formatting, and you get to see all of the tags. A web browser can interpret it as proper HTML, in which case you get to see the formatting. But in every case, the meaning of the HTML file is determined by how it is used.
The "draws the model" part of your proposed algorithm must be done by someone. If you don't write that code, then you must be using a library or some other system that will cause the model to appear. And what does that library do? How does it cause the model to appear?
A model, like an HTML web page, is meaningless by itself. Or to put it another way, your algorithm can be boiled down to this:
Animate the model.
????
Profit!
You're missing a key component: how to actually interpret the model and cause it to appear on the screen. OpenGL/D3D/a software rasterizer/etc is vital for that task.
A lot of video games use libraries like OpenGL.
First and foremost: OpenGL is not a library per se, but an API (a specification). The OpenGL API may be implemented in the form of a software library, but these days it is much more common to implement OpenGL in the form of a driver that turns OpenGL function calls into control commands for a graphics processor sitting on a graphics card (GPU).
All the tutorials I've seen of such libraries demonstrate how to write code that tells the computer to draw something.
Yes. This is because things need to be drawn to make any use of them.
Thing is, in games these days everything is modeled using software such as ZBrush, Maya, or 3ds Max.
At this point the models just consist of a large list of numbers, plus further numbers that describe how the other numbers form some sort of geometry. Those numbers are not some kind of ready-to-use image.
The models are textured and are good to go.
They are a bunch of numbers, with some additional numbers controlling texturing. The textures themselves are in turn just numbers.
It seems like all you'd need to do is write an animation loop that draws the models
And how do you think this drawing is going to happen? There's no magic "here you have a model, display it" function. For one thing, the numbers making up a model don't have any inherent meaning on their own. Some program must give meaning to those numbers, and that program is a renderer.
and updates repeatedly rather than actually program the code to draw every little thing.
Again, there is no magic "draw it" function. Drawing a model involves going through each of the numbers it consists of and turning those into drawing commands for the GPU.
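Stripped to its core, that per-model work looks something like this hedged sketch (it assumes the model's numbers were already uploaded into a vertex array object, a shader program is bound, and a GL function loader is initialized):

```cpp
#include <glad/glad.h>  // assumption: any GL extension loader works here

// One model is just buffers full of numbers; "drawing" means issuing
// commands that tell the GPU how to interpret those numbers, every frame.
void DrawModel(GLuint vao, GLsizei indexCount) {
    glBindVertexArray(vao);  // select the model's vertex data
    // Indices come from the element buffer captured in the VAO.
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
}
```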
That would be both extremely time consuming and would make the models useless.
How are the models useless, when they are what controls the issuing of commands to OpenGL? Or do you think OpenGL is used to actually "create" models?
So where does OpenGL or Direct3D come in in relation to video games and 3D art?
It is used to turn the numbers of a 3D model, as saved out by a modelling program, into something pleasant to look at.
What is so crucial about them when all the graphics are already created
The graphics are not yet created when the model is done. What's created is a model, plus some auxiliary data in the form of textures and shaders, which are then turned into graphics in real time, at the execution time of the program.
and just need to be loaded and drawn?
Again, after being loaded, a model is just a bunch of numbers. And drawing means turning those numbers into something to look at, which requires sending drawing commands to the graphics processor (GPU), which happens using an API like OpenGL or Direct3D.
Are they used mainly for shaders and effects?
They are used to turn the numbers generated by a 3D modelling program (Blender, Maya, ZBrush) into an actual picture.
You have data. Like a model, with vertices, normals, and textures. As @datenwolf stated above, those are all just numbers sitting on the hard drive or in RAM, not colors on the screen.
Your CPU (which is where the program you write runs) can't talk to the screen directly. Instead, you send the data you want to draw to the GPU. The GPU then draws the data. Graphics APIs like OpenGL and Direct3D allow programs running on the CPU to send data to the GPU and customize how the GPU draws it. This is a gross simplification, but it sounds like you just need an overview.
Ultimately, every graphics program must go through a graphics API. When you draw an image, for example, you send the GPU the image, and the GPU draws it on the screen. Draw some text? Send the data to the GPU. The GPU draws it. Remember, your code can't talk to the screen. It CAN talk to the GPU through OpenGL or Direct3D, and the GPU then draws the data.
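To make "send the data to the GPU" concrete, here is a hedged sketch of the kind of calls involved in uploading an image as an OpenGL texture (the function name and pixel layout are illustrative; a current GL context is assumed):

```cpp
#include <glad/glad.h>  // assumption: any GL extension loader works here

// Copy an RGBA image from CPU memory into GPU memory. Drawing it later is
// a separate set of commands (bind the texture, then issue a draw call).
GLuint UploadImage(int width, int height, const unsigned char* rgba) {
    GLuint tex = 0;
    glGenTextures(1, &tex);                  // ask the GPU for a texture name
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba);  // the actual upload
    // Without mipmaps the default min filter leaves the texture incomplete.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    return tex;
}
```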
Before OpenGL and DirectX, games had to use special instructions depending on which graphics card you had. When you bought a new game, you had to check carefully whether your card was supported, or you couldn't use the game.
OpenGL and DirectX are standardized APIs for the graphics cards. A library is delivered by the manufacturer of the card. If it follows the specification, you are guaranteed that games will work (if they also follow the same specification).
Open Graphics Library (OpenGL) is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics. The API is typically used to interact with a graphics processing unit (GPU), to achieve hardware-accelerated rendering.
I'm writing an application that is composed of multiple (16-32) plots that are updated several times a second and are drawn using OpenGL. Until now I've done most of the prototyping of the plots with GLUT. However, I'd like to adopt a full-fledged framework like Qt, and I'm getting ready to write a test QGLWidget.
Before I get started I'd like to figure out if it's possible for multiple QGLWidgets to share a single OpenGL context. If so, is there anything specific I need to keep track of when sharing an OpenGL context between widgets?
if it's possible for multiple QGLWidgets to share a single OpenGL context?
Now this is not possible to answer in general, because it depends on the platform in question: on X11/GLX it is indeed possible to use an indirect context on multiple drawables; however, the context can be active on only one drawable at a time.
However:
It is also possible (and it is the recommended way to do this) to have multiple contexts share their data. In the very first versions of OpenGL this covered only display lists, hence it is still called list sharing. But with current versions of OpenGL it also includes textures, Pixel Buffer Objects and Vertex Buffer Objects. Framebuffer Objects, however, cannot be shared, but since textures can be used as FBO attachments, that's no big deal.
QGLWidget provides a straightforward API to share context data between QGLWidgets' contexts.
Yes, it is possible to share an OpenGL context by using this constructor.
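In Qt 4 / early Qt 5 terms (where QGLWidget lives), the sharing goes through the constructor's shareWidget argument; a minimal sketch:

```cpp
#include <QGLWidget>

void CreateSharingWidgets(QWidget* parent) {
    QGLWidget* first = new QGLWidget(parent);
    // Passing `first` as the shareWidget argument asks Qt to create the new
    // context so it shares textures, display lists and buffer objects with it.
    QGLWidget* second = new QGLWidget(parent, first);
    // Sharing can silently fail on some platforms, so check:
    bool ok = second->isSharing();
    (void)ok;
}
```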
If so, is there anything specific I need to keep track of when sharing an OpenGL context between widgets?
I am not sure, but I don't think there is anything special you need to take care of.