Summary
My goal is to write a rendering engine using DirectX 11. I plan to write it in an OOP fashion.
The plan is to write separate classes: one class for shaders, where I can load, compile, bind and set a shader for use, and a second class for vertex buffers, where I can load vertices, bind buffers and set them for rendering.
My Attempts
I wrote one base class `A` that initializes DirectX and maintains two objects, `m_device` and `m_device_context`. Then I wrote two subclasses, `B` and `C`.
In class `B` I create and bind vertex buffers, and in class `C` I compile and bind vertex shaders.
Both `B` and `C` use the `m_device` object to create resources and `m_device_context` to bind/set them.
In subclass `B`, I use subclass `C` to compile and bind shaders.
Using subclass `B` I initialize the base class, but in subclass `C` I get a memory access violation on `m_device`. This is probably because I would have to reinitialize the base class, yet I can't have different instances of the DirectX objects.
Question
I have read that global variables or objects are not recommended, but how do I solve this problem? How do I maintain global objects that I will need throughout the project?
(My question is specifically about C++ implementation, not games)
I do not have access to the project, but I made a quick UML diagram; I hope this helps clarify things a bit.
Inheritance is built upon an is-a relationship. What most people get wrong is that it's not from the perspective of the object itself, but of the functionality that the object brings with it.
In your case, the functionality of the D3D class is to initialize and allocate the 3D engine context. Right?
The Demo and Shader classes do not extend that behavior, which means that there is no is-a relationship.
So in your case, those classes require the D3D class to be able to function. Another solution might be to not use the D3D class at all in Demo or Shader; it depends on whether the D3D class provides any other functionality (methods) that the Demo or Shader classes use.
You should also be really careful with exposing fields in your classes since it can become a real headache when your application grows. Try instead to expose functionality through methods.
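To make that concrete, here is a minimal sketch of the composition approach, assuming D3D11; the class and member names are illustrative, not taken from the asker's project. The buffer class receives, but never owns or re-creates, the device and context that the initializer set up once.

#include <d3d11.h>

class VertexBuffer {
public:
    // Borrow the device/context created once by the initializer; never own them.
    VertexBuffer(ID3D11Device* device, ID3D11DeviceContext* context)
        : m_device(device), m_context(context) {}

    bool create(const void* vertices, UINT byteWidth, UINT stride) {
        m_stride = stride;
        D3D11_BUFFER_DESC desc = {};
        desc.ByteWidth = byteWidth;
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
        D3D11_SUBRESOURCE_DATA init = {};
        init.pSysMem = vertices;
        return SUCCEEDED(m_device->CreateBuffer(&desc, &init, &m_buffer));
    }

    void bind() {
        UINT offset = 0;
        m_context->IASetVertexBuffers(0, 1, &m_buffer, &m_stride, &offset);
    }

private:
    ID3D11Device* m_device;             // non-owning
    ID3D11DeviceContext* m_context;     // non-owning
    ID3D11Buffer* m_buffer = nullptr;   // Release() in a destructor in real code
    UINT m_stride = 0;
};

A Shader class would take the same two pointers in its constructor, so the single object that initializes DirectX remains the only owner of the device and context, and the access violation from a second, uninitialized base instance cannot occur.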
Related
I have a general question, which can be simply explained by this:
I have 2 objects: bullet and human.
Both of these objects have rigidbodies, and I would like to call both of their OnCollide() methods when they collide with each other.
My question is: how can I implement OnCollide() differently (and effectively) for these objects, when the only difference between their game objects is their physical representation?
Make a Behaviour abstract class which has an OnCollide method, implement this class in BulletBehaviour and HumanBehaviour classes which override OnCollide(), and then have this object's OnCollide called from the Rigidbody's OnCollide()?
I don't think that this is a good way of solving the problem. Or is it?
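For what it's worth, here is a minimal C++ sketch of the design proposed above (all names illustrative): a common abstract Behaviour whose virtual OnCollide the rigid body dispatches to.

#include <iostream>

struct Behaviour {
    virtual ~Behaviour() = default;
    // Each concrete behaviour decides what colliding means for it.
    virtual void onCollide(Behaviour& other) = 0;
};

struct BulletBehaviour : Behaviour {
    void onCollide(Behaviour&) override { std::cout << "bullet hit something\n"; }
};

struct HumanBehaviour : Behaviour {
    void onCollide(Behaviour&) override { std::cout << "human was hit\n"; }
};

struct Rigidbody {
    Behaviour* owner = nullptr;
    // Called by the physics system; forwards to both owners' behaviours.
    void onCollide(Rigidbody& other) {
        owner->onCollide(*other.owner);
        other.owner->onCollide(*owner);
    }
};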
When creating multiple objects on screen (with different vertex lists, not re-using the same vertices), would you define a separate buffer description for each new set of vertices and then call DrawIndexed()?
Currently, I'm trying to wrap this up in a function. I'm a bit confused as to how to abstract "ownership" of a local matrix for each new instance of a geometry buffer.
Additionally, is calling DrawIndexed() multiple times (once per object) in a class member acceptable methodology, or is it better to call DrawIndexed() once and reference the start element of each object within a shared buffer?
To sum up, what is the standard (or something like it) for drawing multiple pieces of transformable geometry in DirectX 11?
Edit: Pseudo-code welcome if necessary; I think I have an idea, but I'm nervous about the implementation (whether or not it's optimized).
The purpose of this question was to find a way around a perceived limitation of the DirectX 11 pipeline: that only one vertex and one index buffer could be fed into it. After some coding, the sandbox mentality of C++ and DirectX 11 held up, and the pipeline accepts as many buffers as necessary.
The problem was resolved by storing an object-data class plus a global dynamic structure logging all objects and their properties registered in the world. The object class created the "ownership" through the natural data separation of each class instance.
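A hedged sketch of that resolution in D3D11 terms, with illustrative names: each object owns its vertex/index buffers and its world transform, and the renderer binds and draws each one in turn.

#include <d3d11.h>
#include <DirectXMath.h>
#include <vector>

struct RenderObject {
    ID3D11Buffer* vertexBuffer = nullptr;
    ID3D11Buffer* indexBuffer = nullptr;
    UINT indexCount = 0;
    UINT stride = 0;
    DirectX::XMMATRIX world;   // the per-instance "ownership" of the transform
};

void drawAll(ID3D11DeviceContext* ctx, const std::vector<RenderObject>& objects) {
    for (const auto& obj : objects) {
        UINT offset = 0;
        ctx->IASetVertexBuffers(0, 1, &obj.vertexBuffer, &obj.stride, &offset);
        ctx->IASetIndexBuffer(obj.indexBuffer, DXGI_FORMAT_R32_UINT, 0);
        // update the world-matrix constant buffer from obj.world here (omitted)
        ctx->DrawIndexed(obj.indexCount, 0, 0);
    }
}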
For future eyes.
The OpenGL standard pages state that OpenGL is callable from C and C++. The API, however, is of course pure C. As OpenGL uses, for example, a lot of enumerations, using enum classes (from C++11) could greatly reduce the number of errors and make the API more approachable for beginners. Lots of bindings like OpenTK (for C#) have been created; creating a good C++ API shouldn't be much harder.
I wasn't able to find anything that was more than an obscure wrapper, hence my questions:
Is there a well-known C++ wrapper using C++11 facilities for OpenGL? and if not,
Is something like this planned by anyone well-known (which especially means the Khronos)?
The whole way OpenGL works does not map well to OOP: http://www.opengl.org/wiki/Common_Mistakes#The_Object_Oriented_Language_Problem
What's not stated in this article is the context-affinity problem. In OpenGL everything happens on the context, so a class "Texture", to be correct, would be nothing more than a glorified handle to be used with the context.
This is wrong:
class Texture {
    /* ... */
public:
    void bind();
};
It would only work if the texture was part of the currently active context.
This is no better either:
class Texture {
    /* ... */
public:
    void bind(Context &ctx);
};
The texture must still be part of the context ctx, and it would only work if ctx were active at the moment.
So what about this:
class Context {
    /* ... */
public:
    void bindTextureToUnit(TextureUnit &tu, Texture &t);
};
Better, but still not correct as the context must be the one currently active in the current thread. You may think "oh, I'll just throw an exception if context is not active". Please don't do this.
So what about this:
class ActiveContext : public Context {
    /* ... */
public:
    void bindTextureToUnit(TextureUnit &tu, Texture &t);
};
Now you've ended up having to make sure that there can be only one ActiveContext instance per thread, which lands you in all kinds of weird thread-singleton mess.
In fact I have numerously tried to implement a clean and sane mapping from OpenGL state and objects into a set of C++ classes, but there are always cases where it simply doesn't work out or ends up in a horrible code mess.
IMHO it's far better not to try mapping the OpenGL API into a set of C++ classes (it can't be done sanely), but instead to use the regular OpenGL API from specialized classes. Any OpenGL context management is so dependent on the program in question that it must be tailored specifically to said program.
Wrapping OpenGL and an OpenGL object model are two different concepts. OpenGL entities can easily be made into objects to wrap their functionality and indeed if you want to write a renderer that can be instantiated with, say, either OpenGL or D3D, this is a strict necessity.
I have classes like this:
class Device
class State
class Buffer
class BufferUniform
class BufferVertices
class BufferIndices
class BufferArray
class Texture
class Texture1d
class Texture2d
class Texture3d
class TextureCubeMap
class TextureArray
class TextureRender
class TextureFrame
class Shader
class ShaderPixel
class ShaderVertex
class ShaderGeometry
class ShaderEvaluator
class ShaderTessellator
class ShaderProgram
class ShaderGenerator
class ShaderGeneratorParser
class ShaderGeneratorNode
class ShaderGeneratorCondition
... and either a D3D or an OpenGL version of each. Renderer<...> is instantiated with one set or the other at compile-time, depending on whether I want D3D or OpenGL to do the work.
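A minimal sketch of what that compile-time selection can look like; the trait structs and empty stub types below are illustrative stand-ins for the real per-API wrapper classes.

// Stub wrapper types standing in for the real per-API implementations.
struct GLDevice {};  struct GLBuffer {};  struct GLTexture {};
struct D3DDevice {}; struct D3DBuffer {}; struct D3DTexture {};

struct OpenGLTypes {
    using Device  = GLDevice;
    using Buffer  = GLBuffer;
    using Texture = GLTexture;
};

struct D3DTypes {
    using Device  = D3DDevice;
    using Buffer  = D3DBuffer;
    using Texture = D3DTexture;
};

template <typename API>
class Renderer {
    typename API::Device  device;   // all rendering code is written against
    typename API::Buffer  vertices; // the API typedefs, never a concrete API
    typename API::Texture diffuse;
};

// Chosen once, at compile time:
using ActiveRenderer = Renderer<OpenGLTypes>;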
Is there a well-known C++ wrapper using C++11 facilities for OpenGL? and if not,
No. There have been some attempts at it.
Is something like this planned by anyone well-known (which especially means the Khronos)?
The Khronos ARB doesn't really try to communicate directly with us about upcoming stuff. However, I highly doubt that they care about this sort of thing.
As for anyone else, again, there are some independent projects out there. But generally speaking, people who use OpenGL aren't that interested in doing this sort of thing.
The basic facts are these: OpenGL objects are directly associated with some global concept, namely the OpenGL context. Those objects can only be manipulated when the context they exist within (or a context in its share group, since objects can be shared between contexts) is active.
Therefore, any C++ object-oriented system must decide how fault-tolerant it wants to be. That is, what kinds of assurances it wants to provide. If you call a function that operates on an object, how certain can you be that the call will succeed?
Here's a list of the levels of assurances that could reasonably be provided:
Completely Anal
In this case, you ensure that every function call either succeeds or properly detects an erroneous condition and fails properly.
In order to pull this off, you need an explicit OpenGL context object. Every OpenGL object must be associated with a share group among contexts (or a specific context itself, if an object type is not shareable).
All object member functions will have to take a context object, which must be the context to which they belong or a member of the share group for that context. There would have to be a per-thread cache of the context, so that they can check to see if the current context is the one they were given, and make it current if it is not.
When a context is destroyed, every object that relied on that context's existence must instantly become non-functional. Thus, every such object needs access to some tag (such as via a std::weak_ptr) that lets it know that all of its function calls will fail.
If the objects are going to be properly RAII'd, then each object must be able to ensure that a context in which it can be destroyed (i.e., via the appropriate glDelete* function) is current. And if one isn't, it needs to make one current. So basically, every object needs to hold a reference to a valid context or otherwise be able to create one.
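A minimal sketch of that weak_ptr tagging idea, with illustrative names and error reporting reduced to a bool:

#include <memory>

struct ContextTag {};  // lives exactly as long as the context does

class Context {
public:
    Context() : m_tag(std::make_shared<ContextTag>()) {}
    std::weak_ptr<ContextTag> tag() const { return m_tag; }
private:
    std::shared_ptr<ContextTag> m_tag;  // destroyed together with the context
};

class Texture {
public:
    explicit Texture(const Context& ctx) : m_ctxTag(ctx.tag()) {}
    bool bind() {
        if (m_ctxTag.expired())   // context destroyed: every call must now fail
            return false;
        // ... the actual bind call against the context would go here ...
        return true;
    }
private:
    std::weak_ptr<ContextTag> m_ctxTag;
};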
On a personal note, I find this all to be painfully silly. No API is this fault tolerant, nor does it need to be. There's a lot of pointless babysitting and sharing of information. C++ is not a safe language, so it shouldn't waste good memory and/or performance just to provide you this level of safety.
Sane
Here, we get rid of the context-based safety checks. It is up to you, the user, to make sure that the proper contexts are current before trying to use any object in any way, just as it is in plain C. The main feature of this API is just being nicer than the raw C API.
The OpenGL objects would be RAII style, but they would also have a "dispose" function, which can be called if you want them to clear themselves without deleting their corresponding objects. This is useful if you're shutting down a context and don't want to have to run through and destruct all of the objects.
This system would basically assume that you want pure Direct State Access for these classes. So none of the modifying member functions actually bind the object to the context. They can achieve that in one of several ways:
1. Use EXT_DSA or the equivalent, where available.
2. If EXT_DSA or the equivalent is not available, store the modified state and send the modifications the next time this object is bound.
3. Or just bind it, make the modification, and unbind it.
Certain kinds of modifications can't use #2. For example, glBufferData, glBufferSubData and glTexSubImage*D calls. The user really expects them to happen now. These functions should be named in such a way that they are distinguishable from guaranteed non-binding functions.
Any such binding functions should make no effort to restore the previous bound state of the object.
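A small sketch of options 1 and 3 from the list above, assuming a GL 4.5 / ARB_direct_state_access capable loader and headers; the hasDSA flag stands in for a real capability check done at load time.

void setMinFilter(GLuint texture, GLint filter, bool hasDSA) {
    if (hasDSA) {
        // Option 1: DSA entry point, no binding needed (GL 4.5 / ARB_DSA).
        glTextureParameteri(texture, GL_TEXTURE_MIN_FILTER, filter);
    } else {
        // Option 3: bind and modify; per the note above, make no effort to
        // restore the previously bound texture.
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, filter);
    }
}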
Permissive
Basically, there's a 1:1 correspondence between C++ member functions and C functions. Sure, you'll use C++ operator overloading and such to reduce the needless variations of functions. But when it comes right down to it, you're pretty much writing your C++ code the way you did your C code.
Objects may employ RAII, but they won't provide any real convenience beyond that. Member functions will either bind the object themselves or expect you to have bound them. Or they will use DSA and fail if DSA isn't available.
Why bother?
At the end of the day, there's really nothing much to gain from having a C++ interface to OpenGL. Sure, you get RAII. Well, you can get RAII just fine by using a std::unique_ptr with a special deleter functor (yes, really, this is very possible). But beyond some slight convenience with the API, what real expressive power do you gain that you did not have before?
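That unique_ptr remark deserves a sketch, because the trick is not obvious: a deleter can define its own pointer type, so the unique_ptr stores a GL name rather than a real pointer. This assumes a GL header/loader is included, and deletion of course still requires an appropriate context to be current.

#include <cstddef>
#include <memory>

// A handle type satisfying the NullablePointer requirements, so unique_ptr
// can store a GLuint name instead of a real pointer.
struct GLBufferHandle {
    GLuint id = 0;
    GLBufferHandle(std::nullptr_t = nullptr) {}
    GLBufferHandle(GLuint i) : id(i) {}
    explicit operator bool() const { return id != 0; }
    friend bool operator==(GLBufferHandle a, GLBufferHandle b) { return a.id == b.id; }
    friend bool operator!=(GLBufferHandle a, GLBufferHandle b) { return a.id != b.id; }
};

struct GLBufferDeleter {
    using pointer = GLBufferHandle;  // tells unique_ptr what to store
    void operator()(GLBufferHandle h) const { glDeleteBuffers(1, &h.id); }
};

using UniqueGLBuffer = std::unique_ptr<GLuint, GLBufferDeleter>;

UniqueGLBuffer makeBuffer() {
    GLuint id = 0;
    glGenBuffers(1, &id);
    return UniqueGLBuffer(GLBufferHandle(id));  // deleted automatically
}

Handles for textures, shaders and the rest work the same way with their matching glDelete* call.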
If you're serious about using OpenGL to develop an application, you're probably going to build a rendering system that, relative to the rest of your code, abstracts OpenGL's concepts away. So nobody would see your fancy C++ interface besides your renderer. And your renderer could just as easily use OpenGL's C API. Why build your renderer off of an abstraction if it buys you pretty much nothing?
And if you're just toying around with OpenGL... what does it matter? Just use the interface you have.
Right now, I'm modelling some sort of little OpenGL library to fool around with graphics programming etc. Therefore, I'm using classes to wrap around specific OpenGL function calls like texture creation, shader creation and so on; so far, so good.
My Problem:
All OpenGL calls must be made by the thread which owns the created OpenGL context (at least under Windows; from every other thread they will do nothing and create an OpenGL error). So, in order to get an OpenGL context, I first create an instance of a window class (just another wrapper around the Win API calls) and finally create an OpenGL context for that window. That sounded quite logical to me. (If there's already a flaw in my design that makes you scream, let me know...)
If I want to create a texture, or any other object that needs OpenGL calls for its creation, I basically do this (the constructor of an OpenGL object, as an example):
opengl_object()
{
    // do necessary stuff for object initialisation
    // pass object to the OpenGL thread for final construction
    // wait until object is constructed by the OpenGL thread
}
So, in words, I create an object like any other object using
opengl_object obj;
This object then, in its constructor, puts itself into a queue of OpenGL objects to be created by the OpenGL context thread. The OpenGL context thread then calls a virtual function which is implemented in all OpenGL objects and contains the necessary OpenGL calls to finally create the object.
I really thought this way of handling the problem would be nice. However, right now, I think I'm awfully wrong.
The case is, even though the above way works perfectly fine so far, I'm having trouble as soon as the class hierarchy goes deeper. For example (which is not perfect, but it shows my problem):
Let's say I have a class called sprite, representing a Sprite, obviously. It has its own create function for the OpenGL thread, in which the vertices and texture coordinates are loaded into the graphics card's memory and so on. That's no problem so far.
Let's further say I want to have two ways of rendering sprites, one instanced and one through another way. So I would end up with two classes, sprite_instanced and sprite_not_instanced. Both are derived from the sprite class, as both are sprites which are only rendered differently. However, sprite_instanced and sprite_not_instanced need further OpenGL calls in their create functions.
My Solution so far (and I feel really awful about it!)
I have some understanding of how object construction works in C++ and how it affects virtual functions. So I decided to use the virtual create function of the sprite class only to load the vertex data and so on into graphics memory. The virtual create method of sprite_instanced will then do the preparation to render that sprite instanced.
So, if I write
sprite_instanced s;
Firstly, the sprite constructor is called and after some initialisation, the constructing thread passes the object to the OpenGL thread. At this point, the passed object is merely a normal sprite, so sprite::create will be called and the OpenGL thread will create a normal sprite. After that, the constructing thread will call the constructor of sprite_instanced, again do some initialisation and pass the object to the OpenGL thread. This time however, it's a sprite_instanced and therefore sprite_instanced::create will be called.
So, if I'm right with the above assumption, everything happens exactly as it should, in my case at least. I spent the last hour reading about calling virtual functions from constructors and how the v-table is built, etc. I've run some tests to check my assumption, but that might be compiler-specific, so I don't rely on them 100%. In addition, it just feels awful and like a terrible hack.
Another Solution
Another possibility would be implementing a factory method in the OpenGL thread class to take care of that. Then I could do all the OpenGL calls inside the constructors of those objects. However, in that case I would need a lot of functions (or one template-based approach), and it feels like a possible loss of potential rendering time when the OpenGL thread has more to do than it needs to...
My Question
Is it ok to handle it the way I described it above? Or should I rather throw that stuff away and do something else?
You were already given some good advice. So I'll just spice it up a bit:
One important thing to understand about OpenGL is, that it is a state machine, which doesn't need some elaborate "initialization". You just use it, and that's about it. Buffer Objects (Textures, Vertex Buffer Objects, Pixel Buffer Objects) may make it look different, and most tutorials and real world applications indeed fill Buffer Objects at application start.
However, it is perfectly fine to create them during regular program execution. In my 3D engine I use the free CPU time during the double-buffer swap for asynchronous uploads into buffer objects: for(b in buffers){ glMapBuffer(b.target, GL_WRITE_ONLY); } start_buffer_filling_thread(); SwapBuffers(); wait_for_buffer_filling_thread(); for(b in buffers){ glUnmapBuffer(b.target); }.
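A slightly fleshed-out version of that pseudocode, assuming Windows (SwapBuffers takes the window's HDC), a GL loader, and a hypothetical Buffer bookkeeping struct; the GL calls stay on the GL thread, and the filling thread only touches the mapped memory.

#include <windows.h>   // SwapBuffers(HDC)
#include <cstring>
#include <thread>
#include <vector>

struct Buffer {            // hypothetical bookkeeping per buffer object
    GLenum target;         // e.g. GL_ARRAY_BUFFER
    GLuint name;
    const void* src;       // CPU-side data to upload
    size_t size;
    void* mapped = nullptr;
};

void uploadDuringSwap(HDC hdc, std::vector<Buffer>& buffers) {
    for (auto& b : buffers) {          // map everything (GL thread only)
        glBindBuffer(b.target, b.name);
        b.mapped = glMapBuffer(b.target, GL_WRITE_ONLY);
    }
    std::thread filler([&buffers] {    // no GL calls here, just memcpy
        for (auto& b : buffers)
            std::memcpy(b.mapped, b.src, b.size);
    });
    SwapBuffers(hdc);                  // GL thread spends the swap time here
    filler.join();
    for (auto& b : buffers) {          // unmap back on the GL thread
        glBindBuffer(b.target, b.name);
        glUnmapBuffer(b.target);
    }
}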
It's also important to understand that simple things like sprites should not each be given their own VBO. One normally groups large batches of sprites into a single VBO. You don't have to draw them all together, since you can offset into the VBO and make partial drawing calls. But this common OpenGL pattern (geometrical objects sharing a buffer object) goes completely against that principle of your classes. So you'd need some buffer-object manager that hands out slices of address space to consumers.
Using a class hierarchy with OpenGL in itself is not a bad idea, but then it should sit some levels above OpenGL. If you just map OpenGL 1:1 to classes you gain nothing but complexity and bloat. Whether I call OpenGL functions directly or through a class, I still have to do all the grunt work. So a texture class should not just map the concept of a texture object; it should also take care of interacting with pixel buffer objects (if used).
If you actually want to wrap OpenGL in classes, I strongly recommend not using virtual functions but static (meaning at the compilation-unit level) inline wrappers, so that they become syntactic sugar the compiler will not bloat up too much.
The question is simplified by the fact that a single context is assumed to be current on a single thread; actually there can be multiple OpenGL contexts, also on different threads (and while we're at it, consider context name-space sharing).
First of all, I think you should separate the OpenGL calls from the object constructor. Doing this allows you to set up an object without caring about OpenGL context currency; subsequently, the object can be enqueued for creation on the main rendering thread.
An example. Suppose we have two queues: one which holds Texture objects for loading texture data from the filesystem, and one which holds Texture objects for uploading texture data to GPU memory (after having loaded the data, of course).
Thread 1: The texture loader
{
    for (;;) {
        while (textureLoadQueue.Size() > 0) {
            Texture obj = textureLoadQueue.Dequeue();
            obj.Load();
            textureUploadQueue.Enqueue(obj);
        }
    }
}
Thread 2: The texture uploader code section, essentially the main rendering thread
{
    while (textureUploadQueue.Size() > 0) {
        Texture obj = textureUploadQueue.Dequeue();
        obj.Upload(ctx);
    }
}
The Texture object constructor should look like:
Texture::Texture(const char *path)
{
    mImagePath = path;
    textureLoadQueue.Enqueue(this);
}
This is only an example. Of course each object has different requirements, but this solution is the most scalable.
My solution is essentially described by the interface IRenderObject (the documentation differs quite a bit from the current implementation, since I'm refactoring a lot at the moment and development is at a very alpha level). This solution is applied to the C# language, which introduces additional complexity due to garbage-collection management, but the concepts are perfectly adaptable to C++.
Essentially, the interface IRenderObject define a base OpenGL object:
It has a name (the one returned by the Gen routines)
It can be created using a current OpenGL context
It can be deleted using a current OpenGL context
It can be released asynchronously using an "OpenGL garbage collector"
The creation/deletion operations are very intuitive. They take a RenderContext abstracting the current context; using this object, it is possible to execute checks that can be useful for finding bugs in object creation/deletion:
The Create method checks whether the context is current, whether the context can create an object of that type, and so on...
The Delete method checks whether the context is current and, more importantly, whether the context passed as parameter shares the same object name space as the context that created the underlying IRenderObject.
Here is an example of the Delete method. Here the code runs, but it doesn't do what you'd expect:
RenderContext ctx1 = new RenderContext(), ctx2 = new RenderContext();
Texture tex1, tex2;
ctx1.MakeCurrent(true);
tex1 = new Texture2D();
tex1.Load("example.bmp");
tex1.Create(ctx1); // In this case, we have texture object name = 1
ctx2.MakeCurrent(true);
tex2 = new Texture2D();
tex2.Load("example.bmp");
tex2.Create(ctx2); // In this case, we have texture object name = 1, the same as before, since the two contexts do not share the object name space
// Somewhere in the code
ctx1.MakeCurrent(true);
tex2.Delete(ctx1); // Works, but it actually deletes the texture represented by tex1!!!
The asynchronous release operation aims to delete the object without having a current context (in fact the method doesn't take any RenderContext parameter). It could happen that the object is disposed of in a separate thread which doesn't have a current context; moreover, I cannot rely on the garbage collector (C++ doesn't have one), since it is executed in a thread over which I have no control. Furthermore, it is desirable to implement the IDisposable interface, so application code can control the OpenGL object's lifetime.
The OpenGL garbage collector is executed on the thread that has the right context current.
It is always bad form to call any virtual function in a constructor. The call will not dispatch to the derived override, because during the base class's constructor the object's dynamic type is still the base class.
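A tiny, runnable illustration of that rule:

#include <iostream>

struct Base {
    Base() { init(); }   // dispatches to Base::init, not Derived::init
    virtual void init() { std::cout << "Base::init\n"; }
    virtual ~Base() = default;
};

struct Derived : Base {
    void init() override { std::cout << "Derived::init\n"; }
};

int main() {
    Derived d;   // prints "Base::init"
}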
Your data structures are very confused. You should investigate the concept of Factory objects. These are objects that you use to construct other objects. You should have a SpriteFactory, which gets pushed into some kind of queue or whatever. That SpriteFactory should be what creates the Sprite object itself. That way, you don't have this notion of a partially constructed object, where creating it pushes itself into a queue and so forth.
Indeed, anytime you start to write, "Objectname::Create", stop and think, "I really should be using a Factory object."
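A hedged sketch of that factory shape, with illustrative names: the queue holds factories, and the GL thread, where a context is current, asks each factory for a fully constructed object.

#include <memory>
#include <string>

struct Sprite { /* GL names, vertex data, ... */ };

struct GfxFactory {
    virtual ~GfxFactory() = default;
    // Runs on the GL thread, where a context is current.
    virtual std::unique_ptr<Sprite> create() = 0;
};

struct SpriteFactory : GfxFactory {
    std::string texturePath;   // plain data, gathered on any thread
    explicit SpriteFactory(std::string path) : texturePath(std::move(path)) {}
    std::unique_ptr<Sprite> create() override {
        auto s = std::make_unique<Sprite>();
        // ... glGen*/glBufferData/texture upload calls go here ...
        return s;   // fully constructed; no half-built object ever escapes
    }
};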
OpenGL was designed for C, not C++. What I've learned works best is to write functions rather than classes to wrap around OpenGL functions, as OpenGL manages its own objects internally. Use classes for loading your data, then pass it to C-style functions which deal with OpenGL. You should be very careful generating/freeing OpenGL buffers in constructors/destructors!
I would avoid having your objects insert themselves into the GL thread's queue on construction. That should be an explicit step, e.g.
gfxObj_t thing(arg);        // read a file or something in the constructor
mWindow.addGfxObj(thing);   // put the thing in mWindow's queue
This lets you do things like constructing a set of objects and then putting them all in the queue at once, and it guarantees that the constructor finishes before any virtual functions are called. Note that putting the enqueue at the end of the constructor does not guarantee this, because constructors are always called from the top-most base class down. That means that if you queue an object to have a virtual function called on it, a derived object can be enqueued before its own constructor has begun acting. This is a race condition which can cause action on an uninitialized object, and it is a nightmare to debug if you don't realize what you've done.
I think the problem here isn't RAII, or the fact that OpenGL is a C-style interface. It's that you're assuming sprite and sprite_instanced should both derive from a common base. These problems occur all the time with class hierarchies, and one of the first lessons I learned about object orientation, mostly through many errors, is that it's almost always better to encapsulate than to derive, EXCEPT that if you're going to derive, do it through an abstract interface.
In other words, don't be fooled by the fact that both of these classes have the name "sprite" in them. They are otherwise totally different in behaviour. For any common functionality they share, implement an abstract base that encapsulates that functionality.
I am working on an application with a GUI using wxWidgets. I have an object used as a "model": its data has to be used to draw the UI, and the UI should modify it. Let's call this class Model.
The structure of the applications looks like this:
A wxApp-derived object, that possesses:
a wxFrame-derived object, that possesses a
wxGLCanvas-derived object.
another wxFrame-derived object.
For the Model class:
1. I could use a singleton. That would make things very simple: I could just use model.getThatData() or model.setThatData() anywhere. However, I can't disagree when people say that it's a global variable in a fancy dress.
2. I could use dependency injection (or is it called something else?): I instantiate Model in the wxApp object, then pass a reference to the instance model in the constructors of both wxFrame-derived classes, do the same with the wxGLCanvas constructor, and store the reference as an attribute of the classes that need it. However, this doesn't seem a very good solution either. Suppose the first wxFrame object doesn't need to use model: we would nonetheless have to pass a reference to model in its constructor just to be able to pass it on to the wxGLCanvas-derived object. That design could lead to many (?) unnecessary passings.
3. ?
What do you think? I have been asking myself this question for a long time...
However, this doesn't seem a very good solution either. Suppose the first wxFrame object doesn't need to use model. We would nonetheless have to pass a reference to model in its constructor to be able to pass it to the wxGLCanvas-derived object. So that design could lead to many (?) unnecessary passings.
Passing pointers around is peanuts compared to the nightmares of untangling the dependencies between classes/objects, hidden in the implementation (== singletons).
#2 is the way I do it. The goal is to be able, just by looking at the class declaration, to get an idea of the class's prerequisites. Ideally, if in the current context I have everything the constructor/init method needs, I should be able to instantiate and use the object. That way the life cycle also becomes clear: the prerequisites may not be released until the object is released.
Is the frame dependent on a specific canvas class? Or is the canvas object interchangeable?
If the latter is the case, then the constructor for the frame should be parameterized by a reference to a canvas object. This way, the application takes care of instantiating the model, creating the canvas using said model, and passing the canvas to the frame. The frame is then no longer directly dependent on the model.
If the frame is dependent on a specific canvas class (that is, the frame instantiates its own canvas and knows what type of canvas it wants), then if the canvas's constructor depends on the Model object, your frame is by proxy also dependent on the model. So #2 is correct. A sketch of the interchangeable-canvas wiring follows.
Put it into a simple MVC model. (Recall that C interacts with M and V, and M and V do not interact with each other.)
Your model is (obviously) the "M" in MVC. Your widgets are the "V" in MVC.
See the problem here? You're trying to give the "M" to the "V"; you're missing the "C" to delegate everything. Your "C" may be your wxApp (it depends on how you want to design things).
In other words, the controller should give the data the view needs from the model to the view; the view shouldn't grab its own data directly from the model.
(Therefore, both of your proposals are, in my opinion, poor options in an MVC application.)