Suppose I have a uniform variable in a GLSL shader whose value is set at program startup and never changes during program execution.
What I want to do is set this uniform variable from my main C++ program.
My problem is that the uniform variable seems to be cleared each time I call glUseProgram, so I have to call the glUniformXX() API again.
Is there a way to tell OpenGL not to clear uniform variables between glUseProgram calls?
Uniform state is preserved until the next link operation on the program; glUseProgram does not reset it.
For confirmation, see Do uniform values remain in GLSL shader if unbound?
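For example, a minimal sketch (assuming 'prog' is your linked program object and 'u_scale' is a uniform declared in the shader):

glUseProgram(prog);
GLint loc = glGetUniformLocation(prog, "u_scale");
glUniform1f(loc, 2.0f);     // set once at startup
glUseProgram(0);

// ... later, any number of frames and glUseProgram calls later ...
glUseProgram(prog);         // u_scale still holds 2.0f here;
                            // only re-linking 'prog' would reset it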
You may want to separate logic and storage of data in this situation: keep the GL data that your C++ application initializes and hands off to OpenGL and GLSL through API calls separate from your logical calls. A database-like structure or class that keeps track of all instances of your programs, shaders, uniforms and variables then becomes very handy.
Your main engine class or structure would be responsible for setting up your windows, your basic I/O, creating an OpenGL context, creating your programs and shaders, and making the calls that pass them off to the renderer or video card. Your application logic should be separate from this as well. Another nice feature of this approach is that the data structures live in the engine and are driven by whatever input data the specific application supplies during its initialization process.
With a manager class maintaining and tracking all instances of each program, shader and its data, you can keep a single low-cost copy of each uniform as a const entry in a container, while variable-type entries live for their lifetime, are discarded after use, and are recreated when needed. That way, when you enable and disable programs that allocate and destroy their internal data sets, you still have easy access to the existing data to retrieve and reinitialize, as long as the managing class is written in a cache-friendly way. It also requires knowing which data belongs on the stack and which should be dynamically allocated. Templates can be very handy here, along with smart pointers such as std::shared_ptr<T> and std::unique_ptr<T>: they make clean-up easier, help manage ownership, and let you store, create, dispatch and remove both uniforms and variable types from a single container. However, a construct of this kind is fairly complex and beyond the scope of a full answer here, since a game or 3D graphics engine has too many integrated parts to show.
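As a rough illustration of just the uniform-caching part (every class and member name below is invented for this sketch, not part of any real API; it assumes an OpenGL function loader such as glad or GLEW is already included and initialized):

#include <string>
#include <unordered_map>

// Hypothetical cache: remembers uniform locations and last-set float values for
// one program, so they can be looked up once and re-applied after a re-link.
class UniformCache {
public:
    explicit UniformCache(GLuint program) : mProgram(program) {}

    GLint location(const std::string& name) {
        auto it = mLocations.find(name);
        if (it != mLocations.end())
            return it->second;
        GLint loc = glGetUniformLocation(mProgram, name.c_str());
        mLocations.emplace(name, loc);
        return loc;
    }

    void setFloat(const std::string& name, float value) {
        glUseProgram(mProgram);
        glUniform1f(location(name), value);
        mFloats[name] = value;                 // remember for later re-application
    }

    void reapplyAfterRelink() {                // locations may change after a link
        mLocations.clear();
        glUseProgram(mProgram);
        for (const auto& entry : mFloats)
            glUniform1f(location(entry.first), entry.second);
    }

private:
    GLuint mProgram;
    std::unordered_map<std::string, GLint> mLocations;
    std::unordered_map<std::string, float> mFloats;
};

A manager could then hold one such cache per program, for example in a std::unordered_map<GLuint, std::unique_ptr<UniformCache>>.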
My best guess is that GLuint holds a pointer rather than the object itself, and hence it can "hold" any object, because it's actually just holding a pointer to a space in memory.
But if this is true, why do I not need to dereference anything when using these variables?
OpenGL object names are handles referencing an OpenGL object. They are not "pointers"; they are just a unique identifier which specifies a particular object. The OpenGL implementation, for each object type, has a map between object names and the actual internal object storage.
This dichotomy exists for legacy historical reasons.
The very first OpenGL object type was display lists. You created new display lists with the glNewList function. This function doesn't give you names for objects; you tell it which integer names the implementation should use.
This is the foundational reason for the dichotomy: the user decides what the names are, and the implementation maps from the user-specified name to the implementation-defined data. The only limitation is that you can't use the same name twice.
The display list paradigm was modified slightly for the next OpenGL object type: textures. In the new paradigm, there is a function that allows the implementation to create names for you: glGenTextures. But this function was optional. You could call glBindTexture on any integer you want, and the implementation will, in that moment, create a texture object that maps to that integer name.
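To illustrate the two paths (the user-picked-name form is only valid outside core profile contexts):

// Pre-core style: the user invents the name; the texture object springs into
// existence the first time that name is bound.
glBindTexture(GL_TEXTURE_2D, 42);   // texture object 42 now exists

// glGen* style: the implementation hands out an unused name for you.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);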
As new object types were created, OpenGL kept the texture paradigm for them. They had glGen* functions, but they were optional so that the user could specify whatever names they wanted.
Shader objects were a bit of a departure, as their Create functions don't allow you to pick names. But they still used integers because... API consistency matters even when being inconsistent (note that the extension version of GLSL shader objects used pointers, but the core version decided not to).
Of course, core OpenGL did away with user-provided names entirely. But it couldn't get rid of integer object names as a concept without basically creating a new API. While core OpenGL is a compatibility break, it was designed such that, if you coded your pre-core OpenGL code "correctly", it would still work in core OpenGL. That is, core OpenGL code should also be valid compatibility OpenGL code.
And the path of least resistance for that was to not create a new API, even if it makes the API really silly.
I don't understand what the purpose is of binding points (such as GL_ARRAY_BUFFER) in OpenGL. To my understanding glGenBuffers() creates a sort of pointer to a vertex buffer object located somewhere within GPU memory.
So:
glGenBuffers(1, &bufferID)
means I now have a handle, bufferID, to one buffer object on the graphics card. Now I know the next step would be to bind bufferID to a binding point
glBindBuffer(GL_ARRAY_BUFFER, bufferID)
so that I can use that binding point to send data down using the glBufferData() function like so:
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW)
But why couldn't I just use the bufferID to specify where I want to send the data instead? Something like:
glBufferData(bufferID, sizeof(data), data, GL_STATIC_DRAW)
Then when calling a draw function I would also just put in which ever ID to whichever VBO I want the draw function to draw. Something like:
glDrawArrays(bufferID, GL_TRIANGLES, 0, 3)
Why do we need the extra step of indirection with glBindBuffer?
OpenGL uses object binding points for two things: to designate an object to be used as part of a rendering process, and to be able to modify the object.
Why it uses them for the former is simple: OpenGL requires a lot of objects to be able to render.
Consider your overly simplistic example:
glDrawArrays(bufferID, GL_TRIANGLES, 0, 3)
That API doesn't let me have separate vertex attributes come from separate buffers. Sure, you might then propose glDrawArrays(GLint count, GLuint *object_array, ...). But how do you connect a particular buffer object to a particular vertex attribute? Or how do you have 2 attributes come from buffer 0 and a third attribute from buffer 1? Those are things I can do right now with the current API. But your proposed one can't handle it.
And even that is putting aside the many other objects you need to render: program/pipeline objects, texture objects, UBOs, SSBOs, transform feedback objects, query objects, etc. Having all of the needed objects specified in a single command would be fundamentally unworkable (and that leaves aside the performance costs).
And every time the API would need to add a new kind of object, you would have to add new variations of the glDraw* functions. And right now, there are over a dozen such functions. Your way would have given us hundreds.
So instead, OpenGL defines ways for you to say "the next time I render, use this object in this way for that process." That's what binding an object for use means.
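As a sketch of what "bind for use" looks like in practice (the object names here are placeholders for objects created elsewhere):

// Each object is attached to the context ahead of time; the draw call itself
// names none of them.
glUseProgram(program);
glBindVertexArray(vao);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);   // a UBO at binding index 0
glDrawArrays(GL_TRIANGLES, 0, 3);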
But why couldn't I just use the bufferID to specify where I want to send the data instead?
This is about binding an object for the purpose of modifying the object, not saying that it will be used. That is... a different matter.
The obvious answer is, "You can't do it because the OpenGL API (until 4.5) doesn't have a function to let you do it." But I rather suspect the question is really why OpenGL doesn't have such APIs (until 4.5, where glNamedBufferStorage and such exist).
Indeed, the fact that 4.5 does have such functions proves that there is no technical reason for pre-4.5 OpenGL's bind-object-to-modify API. It really was a "decision" that came about by the evolution of the OpenGL API from 1.0, thanks to following the path of least resistance. Repeatedly.
Indeed, just about every bad decision that OpenGL has made can be traced back to taking the path of least resistance in the API. But I digress.
In OpenGL 1.0, there was only one kind of object: display list objects. That means that even textures were not stored in objects. So every time you switched textures, you had to re-specify the entire texture with glTexImage*D. That means re-uploading it. Now, you could (and people did) wrap each texture's creation in a display list, which allowed you to switch textures by executing that display list. And hopefully the driver would realize you were doing that and instead allocate video memory and so forth appropriately.
So when 1.1 came around, the OpenGL ARB realized how mind-bendingly silly that was. So they created texture objects, which encapsulate both the memory storage of a texture and the various state within. When you wanted to use the texture, you bound it. But there was a snag. Namely, how to change it.
See, 1.0 had a bunch of already existing functions like glTexImage*D, glTexParameter and the like. These modify the state of the texture. Now, the ARB could have added new functions that do the same thing but take texture objects as parameters.
But that would mean dividing all OpenGL users into 2 camps: those who used texture objects and those who did not. It meant that, if you wanted to use texture objects, you had to rewrite all of your existing code that modified textures. If you had some function that made a bunch of glTexParameter calls on the current texture, you would have to change that function to call the new texture object function. But you would also have to change the function of yours that calls it so that it would take, as a parameter, the texture object that it operates on.
And if that function didn't belong to you (because it was part of a library you were using), then you couldn't even do that.
So the ARB decided to keep those old functions around and simply have them behave differently based on whether a texture was bound to the context or not. If one was bound, then glTexParameter/etc would modify the bound texture, rather than the context's normal texture.
This one decision established the general paradigm shared by almost all OpenGL objects.
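In code, that paradigm looks like this; the 1.0-era calls never name the texture object they are modifying ('pixels' stands in for image data defined elsewhere):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// These functions modify whatever texture is currently bound to GL_TEXTURE_2D.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 64, 64, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);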
ARB_vertex_buffer_object used this paradigm for the same reason. Notice how the various gl*Pointer functions (glVertexAttribPointer and the like) work in relation to buffers. You have to bind a buffer to GL_ARRAY_BUFFER, then call one of those functions to set up an attribute array. When a buffer is bound to that slot, the function will pick that up and treat the pointer as an offset into the buffer that was bound at the time the *Pointer function was called.
Why? For the same reason: ease of compatibility (or to promote laziness, depending on how you want to see it). ATI_vertex_array_object had to create new analogs to the gl*Pointer functions. Whereas ARB_vertex_buffer_object just piggybacked off of the existing entrypoints.
Users didn't have to change from using glVertexPointer to glVertexBufferOffset or some other function. All they had to do was bind a buffer before calling a function that set up vertex information (and of course change the pointers to byte offsets).
It also meant that they didn't have to add a bunch of glDrawElementsWithBuffer-type functions for rendering with indices that come from buffer objects.
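In code, the piggybacking looks like this; the same entry point serves both cases, and the final argument changes meaning depending on what is bound ('buf' and 'clientArray' are placeholders):

// With a buffer bound to GL_ARRAY_BUFFER, the "pointer" is really a byte
// offset into that buffer.
glBindBuffer(GL_ARRAY_BUFFER, buf);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);

// With nothing bound (legacy client-side arrays), the same argument is an
// actual pointer into application memory.
glBindBuffer(GL_ARRAY_BUFFER, 0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), clientArray);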
So this wasn't a bad idea in the short term. But as with most short-term decision making, it starts being less reasonable with time.
Of course, if you have access to GL 4.5/ARB_direct_state_access, you can do things the way they ought to have been done originally.
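For comparison, the two styles side by side (the second path assumes GL 4.5 or ARB_direct_state_access; 'data' stands in for your vertex data):

// Pre-4.5: bind-to-modify. The buffer has to occupy a binding point just so
// we can talk about it.
GLuint buf;
glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);

// GL 4.5 / direct state access: the object is named explicitly.
GLuint buf45;
glCreateBuffers(1, &buf45);
glNamedBufferStorage(buf45, sizeof(data), data, 0);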
When creating multiple objects on screen (with different vertex lists, not re-using the same vertices), would you define a separate buffer description for each new set of vertices and then call DrawIndexed()?
Currently, I'm trying to wrap this up in a function. I'm a bit confused about how to abstract "ownership" from a local matrix to each new instance of a geometry buffer:
Additionally, is calling DrawIndexed() multiple times (once for each object) in a class member acceptable methodology, or should I call DrawIndexed() once and reference the start element?
To sum up, what is the standard (or something close to it) for drawing multiple pieces of transformable geometry in DirectX 11?
Edit: Pseudo-code is welcome if necessary; I think I have an idea, but I'm nervous about the implementation (whether or not it's optimized).
The purpose of this question was to find a way around a perceived limitation of the DirectX 11 pipeline, namely that only one vertex buffer and one index buffer can be fed into it. After some coding, the sandbox world of C++ and DirectX 11 held up consistently: the pipeline accepts as many buffers as necessary.
The problem was resolved by storing an object-data class per object and a global dynamic structure that registers every object and its properties in the world. The object class provides the "ownership" through natural data separation per class instance.
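In practice, a typical per-object draw loop might look like the sketch below (names such as objects, obj.vertexBuffer, PerObjectConstants and constantBuffer are hypothetical, and UpdateSubresource is just one of several valid ways to update the per-object constants):

for (const auto& obj : objects)
{
    UINT stride = sizeof(Vertex);   // 'Vertex' is whatever your vertex struct is
    UINT offset = 0;

    // Bind this object's geometry.
    context->IASetVertexBuffers(0, 1, &obj.vertexBuffer, &stride, &offset);
    context->IASetIndexBuffer(obj.indexBuffer, DXGI_FORMAT_R32_UINT, 0);

    // Push this object's transform into a shared constant buffer.
    PerObjectConstants cb = { obj.worldViewProj };
    context->UpdateSubresource(constantBuffer, 0, nullptr, &cb, 0, 0);
    context->VSSetConstantBuffers(0, 1, &constantBuffer);

    context->DrawIndexed(obj.indexCount, 0, 0);
}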
For future eyes.
Right now, I'm modelling some sort of little OpenGL library to fool around with graphic programming etc. Therefore, I'm using classes to wrap around specific OpenGL function calls like texture creation, shader creation and so on, so far, so good.
My Problem:
All OpenGL calls must be made by the thread that owns the created OpenGL context (at least under Windows; any other thread will do nothing and generate an OpenGL error). So, in order to get an OpenGL context, I first create an instance of a window class (just another wrapper around the Win API calls) and finally create an OpenGL context for that window. That sounded quite logical to me. (If there's already a flaw in my design that makes you scream, let me know...)
If I want to create a texture, or any other object that needs OpenGL calls for creation, I basically do this (the called constructor of an OpenGL object, example):
opengl_object()
{
    // do necessary stuff for object initialisation
    // pass the object to the OpenGL thread for final construction
    // wait until the object is constructed by the OpenGL thread
}
So, in words, I create an object like any other object using
opengl_object obj;
Which then, in its constructor, puts itself into a queue of OpenGL objects to be created by the OpenGL context thread. The OpenGL context thread then calls a virtual function, implemented in all OpenGL objects, which contains the necessary OpenGL calls to finally create the object.
I really thought this way of handling the problem would be nice. However, right now, I think I'm awfully wrong.
The thing is, even though the above way works perfectly fine so far, I'm having trouble as soon as the class hierarchy goes deeper. For example (not a perfect example, but it shows my problem):
Let's say I have a class called sprite, representing a Sprite, obviously. It has its own create function for the OpenGL thread, in which the vertices and texture coordinates are loaded into the graphics card's memory and so on. That's no problem so far.
Let's further say I want to have two ways of rendering sprites: one instanced and one not. So I would end up with two classes, sprite_instanced and sprite_not_instanced. Both are derived from the sprite class, as both are sprites that are merely rendered differently. However, sprite_instanced and sprite_not_instanced need further OpenGL calls in their create functions.
My Solution so far (and I feel really awful about it!)
I have some understanding of how object construction in C++ works and how it affects virtual functions. So I decided to use the virtual create function of the sprite class only to load the vertex data and so on into graphics memory. The virtual create method of sprite_instanced will then do the preparation needed to render that sprite instanced.
So, if I write
sprite_instanced s;
First the sprite constructor is called and, after some initialisation, the constructing thread passes the object to the OpenGL thread. At this point, the passed object is merely a normal sprite, so sprite::create will be called and the OpenGL thread will create a normal sprite. After that, the constructing thread runs the constructor of sprite_instanced, again does some initialisation and passes the object to the OpenGL thread. This time, however, it's a sprite_instanced, and therefore sprite_instanced::create will be called.
So, if I'm right with the above assumption, everything happens exactly as it should, in my case at least. I spent the last hour reading about calling virtual functions from constructors and how the v-table is built, etc. I've run some tests to check my assumption, but that might be compiler-specific, so I don't rely on them 100%. In addition, it just feels awful and like a terrible hack.
Another Solution
Another possibility would be implementing a factory method in the OpenGL thread class to take care of this. Then I could do all the OpenGL calls inside the constructors of those objects. However, in that case I would need a lot of functions (or one template-based approach), and it feels like a possible loss of rendering time when the OpenGL thread has more to do than it needs to...
My Question
Is it ok to handle it the way I described it above? Or should I rather throw that stuff away and do something else?
You were already given some good advice. So I'll just spice it up a bit:
One important thing to understand about OpenGL is that it is a state machine which doesn't need any elaborate "initialization". You just use it, and that's about it. Objects with data stores (textures, vertex buffer objects, pixel buffer objects) may make it look different, and most tutorials and real-world applications do indeed fill those objects at application start.
However, it is perfectly fine to create them during regular program execution. In my 3D engine I use the free CPU time during the double-buffer swap for asynchronous uploads into buffer objects: map every buffer, start a filler thread, SwapBuffers, wait for the filler thread, then unmap every buffer.
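Spelled out as a rough C++ sketch (the Buffer bookkeeping struct, fill_buffers and hdc are placeholders; the point is only that the map and unmap calls happen on the context thread while another thread writes through the mapped pointers):

// On the GL thread: map all buffers and hand the pointers to a worker.
for (Buffer& b : buffers) {
    glBindBuffer(b.target, b.name);
    b.mappedPtr = glMapBuffer(b.target, GL_WRITE_ONLY);
}
std::thread filler(fill_buffers, std::ref(buffers));  // writes via mappedPtr
SwapBuffers(hdc);             // the GL thread is mostly idle during the swap
filler.join();
for (Buffer& b : buffers) {   // back on the GL thread
    glBindBuffer(b.target, b.name);
    glUnmapBuffer(b.target);
}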
It's also important to understand that simple things like sprites should not each be given their own VBO. One normally groups large numbers of sprites into a single VBO. You don't have to draw them all together, since you can offset into the VBO and issue partial draw calls. But this common OpenGL pattern (geometrical objects sharing a buffer object) goes completely against the one-object-per-class principle of your design. So you'd need some buffer-object manager that hands out slices of a buffer's address space to consumers.
Using a class hierarchy with OpenGL is not in itself a bad idea, but it should sit some levels above OpenGL. If you just map OpenGL 1:1 to classes you gain nothing but complexity and bloat: whether I call OpenGL functions directly or through a class, I still have to do all the grunt work. So a texture class should not just wrap the concept of a texture object; it should also take care of interacting with pixel buffer objects (if used).
If you actually want to wrap OpenGL in classes, I strongly recommend not using virtual functions but static, inline methods (internal to the compilation unit), so that they become syntactic sugar the compiler will not bloat up too much.
The question is simplified by the assumption that a single context is current on a single thread; in reality there can be multiple OpenGL contexts, also on different threads (and, while we're at it, we should consider context name-space sharing).
First of all, I think you should separate the OpenGL calls from the object constructor. Doing this allows you to set up an object without caring about which OpenGL context is current; the object can subsequently be enqueued for creation on the main rendering thread.
An example. Suppose we have two queues: one which holds Texture objects for loading texture data from the filesystem, and one which holds Texture objects for uploading texture data to GPU memory (after the data has been loaded, of course).
Thread 1: The texture loader
{
    for (;;) {
        while (textureLoadQueue.Size() > 0) {
            Texture obj = textureLoadQueue.Dequeue();
            obj.Load();
            textureUploadQueue.Enqueue(obj);
        }
    }
}
Thread 2: The texture uploader code section, essentially the main rendering thread
{
    while (textureUploadQueue.Size() > 0) {
        Texture obj = textureUploadQueue.Dequeue();
        obj.Upload(ctx);
    }
}
The Texture object constructor should look like:
Texture::Texture(const char *path)
{
    mImagePath = path;
    textureLoadQueue.Enqueue(this);
}
This is only an example. Of course each object has different requirements, but this approach scales well.
My solution is essentially described by the interface IRenderObject (the documentation differs quite a bit from the current implementation, since I'm refactoring a lot at the moment and development is at a very alpha stage). The solution is implemented in C#, which introduces additional complexity due to garbage-collection management, but the concepts are perfectly adaptable to C++.
Essentially, the interface IRenderObject defines a base OpenGL object:
It has a name (the one returned by the glGen* routines)
It can be created using a current OpenGL context
It can be deleted using a current OpenGL context
It can be released asynchronously using an "OpenGL garbage collector"
The creation/deletion operations are quite intuitive. They take a RenderContext abstracting the current context; using this object, it is possible to run checks that help find bugs in object creation/deletion:
The Create method checks whether the context is current, whether the context can create an object of that type, and so on...
The Delete method checks whether the context is current and, more importantly, whether the context passed as a parameter shares the same object name space as the context that created the underlying IRenderObject.
Here is an example involving the Delete method. The code runs, but it doesn't do what you might expect:
RenderContext ctx1 = new RenderContext(), ctx2 = new RenderContext();
Texture tex1, tex2;
ctx1.MakeCurrent(true);
tex1 = new Texture2D();
tex1.Load("example.bmp");
tex1.Create(ctx1); // In this case, we have texture object name = 1
ctx2.MakeCurrent(true);
tex2 = new Texture2D();
tex2.Load("example.bmp");
tex2.Create(ctx2); // In this case, we again have texture object name = 1, the same as before, since the two contexts do not share an object name space
// Somewhere in the code
ctx1.MakeCurrent(true);
tex2.Delete(ctx1); // Works, but it actually deletes the texture represented by tex1!!!
The asynchronous release operation aims to delete the object without requiring a current context (in fact the method doesn't take any RenderContext parameter). It can happen that the object is disposed of in a separate thread which doesn't have a current context; also, I cannot rely on the garbage collector (C++ doesn't have one), since it runs on a thread over which I have no control. Furthermore, it is desirable to implement the IDisposable interface, so application code can control the OpenGL object's lifetime.
The OpenGL garbage collector is then executed on a thread that has the right context current.
It is always bad form to call a virtual function from a constructor: the call will not dispatch to a derived class's override, because the derived part of the object has not been constructed yet.
Your data structures are very confused. You should investigate the concept of Factory objects. These are objects that you use to construct other objects. You should have a SpriteFactory, which gets pushed into some kind of queue or whatever. That SpriteFactory should be what creates the Sprite object itself. That way, you don't have this notion of a partially constructed object, where creating it pushes itself into a queue and so forth.
Indeed, anytime you start to write, "Objectname::Create", stop and think, "I really should be using a Factory object."
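A bare-bones sketch of the idea (all class names here are invented for illustration; it assumes an OpenGL loader header is included): the factory holds only CPU-side data and is what gets queued, and the GL thread asks it to produce a fully constructed Sprite:

#include <memory>
#include <vector>

class Sprite {
public:
    Sprite(GLuint vbo, GLuint texture) : mVbo(vbo), mTexture(texture) {}
private:
    GLuint mVbo;
    GLuint mTexture;
};

// The factory is what crosses the thread boundary. It holds only CPU-side
// data; all GL calls happen in create(), invoked on the GL context thread.
class SpriteFactory {
public:
    explicit SpriteFactory(std::vector<float> vertices)
        : mVertices(std::move(vertices)) {}

    std::unique_ptr<Sprite> create() const {    // runs on the GL thread
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER,
                     mVertices.size() * sizeof(float),
                     mVertices.data(), GL_STATIC_DRAW);

        GLuint tex = 0;
        glGenTextures(1, &tex);                 // texture setup omitted

        return std::make_unique<Sprite>(vbo, tex);
    }

private:
    std::vector<float> mVertices;
};

An instanced variant would simply be a different factory (or the same one with a flag), so no virtual call ever happens on a half-constructed object.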
OpenGL was designed for C, not C++. What I've learned works best is to write functions rather than classes to wrap around OpenGL functions, as OpenGL manages its own objects internally. Use classes for loading your data, then pass it to C-style functions which deal with OpenGL. You should be very careful generating/freeing OpenGL buffers in constructors/destructors!
I would avoid having your objects insert themselves into the GL thread's queue on construction. That should be an explicit step, e.g.
gfxObj_t thing(arg);        // read a file or something in the constructor
mWindow.addGfxObj(thing);   // put the thing in mWindow's queue
This lets you do things like constructing a set of objects and then putting them all in the queue at once, and it guarantees that the constructor finishes before any virtual function is called. Note that putting the enqueue at the end of the constructor does not guarantee this, because constructors always run from the base class down to the most-derived class. So if a base constructor queues the object to have a virtual function called on it, a derived object will be enqueued before its own constructor has begun to act. That's a race condition which can cause action on an uninitialized object, and it's a nightmare to debug if you don't realize what you've done.
I think the problem here isn't RAII, or the fact that OpenGL is a c-style interface. It's that you're assuming sprite and sprite_instanced should both derive from a common base. These problems occur all the time with class hierarchies and one of the first lessons I learned about object orientation, mostly through many errors, is that it's almost always better to encapsulate than to derive. EXCEPT that if you're going to derive, do it through an abstract interface.
In other words, don't be fooled by the fact that both of these classes have the name "sprite" in them. They are otherwise totally different in behaviour. For any common functionality they share, implement an abstract base that encapsulates that functionality.