I am in a bit of a pickle. I am using boost::serialization in order to save/load a pointer from memory. The saving part gives me no issues; I was able to verify that the serialization class saves the pointer correctly. As a side note, the pointed-to class is a custom class that I created.
Some background: I am using the wxWidgets library (the latest version, v3.1.0) to create a GUI. The object inherits from the wxGLCanvas class, which requires a pointer to the parent window. The class is used to draw a grid on the screen, and the user can interact with the grid by placing geometry shapes (mainly squares, arcs, and lines). Each shape is its own class. Within my class, I have data members that specify the grid step size, the placement of the camera, the zoom level, and the geometry shape vectors. All of these can be saved. Note that my class does contain other data members as well, but I am not saving those, so they are irrelevant to the discussion. The class in question is called modelDefinition.
Now, we come to the load part of the class. My current implementation is as such:
void MainFrame::load(string filePath)
{
    std::ifstream loadFile(filePath);
    if(loadFile.is_open())
    {
        modelDefinition temp(this, wxPoint(6, 6), this->GetClientSize(), _problemDefinition, this->GetStatusBar());
        //modelDefinition tempDefintion = (*_model);
        boost::archive::text_iarchive ia(loadFile);
        ia >> _problemDefinition;
        ia >> temp;
        temp.copyModel(*_model);
        //*_model = temp;
        //(*_model) = tempDefintion;
        _model->Refresh();
    }
}
Implementation of the copy function:
void copyModel(modelDefinition &target)
{
    target.setGridPreferences(_preferences);
    target.setEditor(_editor);
    target.setZoomX(_zoomX);
    target.setZoomY(_zoomY);
    target.setCameraX(_cameraX);
    target.setCameraY(_cameraY);
}
My idea is this: I create a temporary variable and initialize it to the values that I need. Currently, it is empty. Then I load the data into the temporary variable and copy the needed data structures into my main variable. However, the program crashes at ia >> temp, and I am not sure why. When I go into the debugger, I do not get access to the call stack after the crash, so I have a feeling it is crashing within the boost library. I placed a breakpoint inside the serialize function in modelDefinition, and the program never made it there.
I did come across this forum posting:
Boost serialization with pointers and non-default constructor
To be honest, I am not sure whether it applies to me. I have been trying to think of a way that it does, but so far I cannot find any reason that applies to my case.
Here is the declaration of the modelDefinition constructor:
modelDefinition::modelDefinition(wxWindow *par, const wxPoint &point, const wxSize &size, problemDefinition &definition, wxStatusBarBase *statusBar) : wxGLCanvas(par, wxID_ANY, NULL, point, size, wxBORDER_DOUBLE | wxBORDER_RAISED)
par MUST have a value; null values are not accepted. I did see that the forum post overrode the load function, grabbed the values, and passed them into the constructor of the class. However, in my case, par is a this pointer, and I am not able to serialize it and load it back into the program (besides, this will change on every single function call). this refers back to the parent window, and overriding the load function in a different namespace prevents me from passing this into the function. So basically, that option is out of the water (unless I am missing something).
Again, since I can't pass in NULL into the wxGLCanvas constructor, this option is off the table:
modelDefinition *_model = new modelDefinition();
modelDefinition::modelDefinition() : wxGLCanvas(NULL, 0)
And I believe that this option is also off the table since my parent window that would be associated with the canvas is in a different namespace:
template<class Archive>
inline void load_construct_data(
    Archive & ar, modelDefinition * foo, const unsigned int file_version
){
    double test; // There would be more after this, but to simplify the posting, I am just throwing this in here.
    ar >> test;
    ::new (foo) modelDefinition(this, test); // Yeah, I don't think that this is going to work here.
}
Again, this would need to be pointing to the parent window, which I don't think that I have access to.
So right now, I am a little lost on my options. For the moment, I am continuing with the first approach to see where the program crashes.
Although, I could really use someone's help in solving this issue. How can I load back the data of an object with a non-default constructor through a pointer, when I cannot save the state of the inherited part (because modelDefinition inherits from the wxGLCanvas type, and I am unable to serialize that type)?
Yes, I am aware of the minimal example. It will take me some time to create one. If the forum people need it to effectively come up with a solution, then I will do it and post it here. But again, it will take time and could be rather long.
Yes, load/save construct data is the tool to deal with non-default constructibles.
Your problem is different: you need state from outside because you are trying to load objects that require the state during construction, but it never got saved in the first place. Had it been, you could re-create the parent window just like it existed during serialization.
The only "workaround" I can see here is to use global state (i.e. access it through (thread) global variables).
I do not recommend it, but you're in a pickle so it's good to think about workarounds, even bad ones
As soon as you salvaged your data from the old-style archives, I suggest serializing into a format that
saves all required construct data
serializes a data struct not tied to the GUI elements
Of course I don't know the over-arching goal here, so I can't say which approach is more apt, but without context I'd always strive for separation of concerns, i.e. de-coupling the serialization from any UI elements.
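For reference, the save/load_construct_data mechanism mentioned at the top of this answer looks roughly like the following. This is only a minimal sketch against a hypothetical GridModel class, not your modelDefinition, to show the shape of the pattern:

#include <boost/serialization/access.hpp>
#include <boost/serialization/serialization.hpp>

// Hypothetical class with a non-default constructor.
class GridModel {
public:
    explicit GridModel(double gridStep) : gridStep_(gridStep) {}
    double gridStep() const { return gridStep_; }

private:
    friend class boost::serialization::access;
    template<class Archive>
    void serialize(Archive&, const unsigned int) { /* regular state here */ }

    double gridStep_;
};

namespace boost { namespace serialization {

template<class Archive>
inline void save_construct_data(Archive& ar, const GridModel* m, const unsigned int)
{
    double step = m->gridStep();
    ar << step; // persist everything the constructor will need
}

template<class Archive>
inline void load_construct_data(Archive& ar, GridModel* m, const unsigned int)
{
    double step;
    ar >> step;
    ::new (m) GridModel(step); // placement-new with the reconstructed arguments
}

}} // namespace boost::serialization

The caveat above still applies: this only works when everything the constructor needs was saved, and a parent-window pointer is exactly the kind of thing that never can be.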
I've been working with pointers for a few years now, but I only very recently decided to transition over to C++11's smart pointers (namely unique, shared, and weak). I've done a fair bit of research on them and these are the conclusions that I've drawn:
Unique pointers are great. They manage their own memory and are as lightweight as raw pointers. Prefer unique_ptr over raw pointers as much as possible.
Shared pointers are complicated. They have significant overhead due to reference counting. Pass them by const reference or regret the error of your ways. They're not evil, but should be used sparingly.
Shared pointers should own objects; use weak pointers when ownership is not required. Locking a weak_ptr has equivalent overhead to the shared_ptr copy constructor.
Continue to ignore the existence of auto_ptr, which is now deprecated anyhow.
So with these tenets in mind, I set off to revise my code base to utilize our shiny new smart pointers, fully intending to clear the board of as many raw pointers as possible. I've become confused, however, as to how best to take advantage of the C++11 smart pointers.
Let's assume, for instance, that we were designing a simple game. We decide that it is optimal to load a fictional Texture data type into a TextureManager class. These textures are complex and so it is not feasible to pass them around by value. Moreover, let us assume that game objects need specific textures depending on their object type (i.e. car, boat, etc).
Prior, I would have loaded the textures into a vector (or other container like unordered_map) and stored pointers to these textures within each respective game object, such that they could refer to them when they needed to be rendered. Let's assume the textures are guaranteed to outlive their pointers.
My question, then, is how to best utilize smart pointers in this situation. I see few options:
Store the textures directly in a container, then construct a unique_ptr in each game object.
class TextureManager {
public:
    const Texture& texture(const std::string& key) const
    { return textures_.at(key); }

private:
    std::unordered_map<std::string, Texture> textures_;
};

class GameObject {
public:
    void set_texture(const Texture& texture)
    { texture_ = std::unique_ptr<Texture>(new Texture(texture)); }

private:
    std::unique_ptr<Texture> texture_;
};
My understanding of this, however, is that a new texture would be copy-constructed from the passed reference, which would then be owned by the unique_ptr. This strikes me as highly undesirable, since I would have as many copies of the texture as game objects that use it -- defeating the point of pointers (no pun intended).
Store not the textures directly, but their shared pointers in a container. Use make_shared to initialize the shared pointers. Construct weak pointers in the game objects.
class TextureManager {
public:
    const std::shared_ptr<Texture>& texture(const std::string& key) const
    { return textures_.at(key); }

private:
    std::unordered_map<std::string, std::shared_ptr<Texture>> textures_;
};

class GameObject {
public:
    void set_texture(const std::shared_ptr<Texture>& texture)
    { texture_ = texture; }

private:
    std::weak_ptr<Texture> texture_;
};
Unlike the unique_ptr case, I won't have to copy-construct the textures themselves, but rendering the game objects is expensive since I would have to lock the weak_ptr each time (as complex as copy-constructing a new shared_ptr).
So to summarize, my understanding is such: if I were to use unique pointers, I would have to copy-construct the textures; alternatively, if I were to use shared and weak pointers, I would have to essentially copy-construct the shared pointers each time a game object is to be drawn.
I understand that smart pointers are inherently going to be more complex than raw pointers and so I'm bound to have to take a loss somewhere, but both of these costs seem higher than perhaps they should be.
Could anybody point me in the correct direction?
Sorry for the long read, and thanks for your time!
Even in C++11, raw pointers are still perfectly valid as non-owning references to objects. In your case, you're saying "Let's assume the textures are guaranteed to outlive their pointers," which means you're perfectly safe to use raw pointers to the textures in the game objects. Inside the texture manager, store the textures either automatically (in a container which guarantees a constant location in memory), or in a container of unique_ptrs.
If the outlive-the-pointer guarantee was not valid, it would make sense to store the textures in shared_ptr in the manager and use either shared_ptrs or weak_ptrs in the game objects, depending on the ownership semantics of the game objects with regards to the textures. You could even reverse that - store shared_ptrs in the objects and weak_ptrs in the manager. That way, the manager would serve as a cache - if a texture is requested and its weak_ptr is still valid, it will give out a copy of it. Otherwise, it will load the texture, give out a shared_ptr and keep a weak_ptr.
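To illustrate the reversed, cache-style variant from the last sentence, here is a minimal sketch; Texture and load_from_disk are hypothetical stand-ins:

#include <memory>
#include <string>
#include <unordered_map>

struct Texture { /* pixel data, etc. */ };

// Hypothetical loader stub.
Texture load_from_disk(const std::string&) { return Texture{}; }

// The manager holds weak_ptrs and acts as a cache; callers keep the
// textures alive through the shared_ptrs it hands out.
class TextureCache {
public:
    std::shared_ptr<Texture> texture(const std::string& key)
    {
        if (auto cached = cache_[key].lock())
            return cached; // still alive somewhere: hand out a copy
        auto fresh = std::make_shared<Texture>(load_from_disk(key));
        cache_[key] = fresh; // remember it weakly for next time
        return fresh;
    }

private:
    std::unordered_map<std::string, std::weak_ptr<Texture>> cache_;
};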
To summarize your use case:
*) Objects are guaranteed to outlive their users
*) Objects, once created, are not modified (I think this is implied by your code)
*) Objects are reference-able by name and guaranteed to exist for any name your app will ask for (I'm extrapolating -- I'll deal below with what to do if this is not true.)
This is a delightful use case. You can use value semantics for textures throughout your application! This has the advantages of great performance and being easy to reason about.
One way to do this is to have your TextureManager return a Texture const*. Consider:
using TextureRef = Texture const*;
...
TextureRef TextureManager::texture(const std::string& key) const;
Because the underlying Texture object has the lifetime of your application, is never modified, and always exists (your pointer is never nullptr), you can just treat your TextureRef as a simple value. You can pass them, return them, compare them, and make containers of them. They are very easy to reason about and very efficient to work on.
The annoyance here is that you have value semantics (which is good), but pointer syntax (which can be confusing for a type with value semantics). In other words, to access a member of your Texture class you need to do something like this:
TextureRef t{texture_manager.texture("grass")};
// You can treat t as a value. You can pass it, return it, compare it,
// or put it in a container.
// But you use it like a pointer.
double aspect_ratio{t->get_aspect_ratio()};
One way to deal with this is to use something like the pimpl idiom and create a class that is nothing more than a wrapper to a pointer to a texture implementation. This is a bit more work because you'll end up creating an API (member functions) for your texture wrapper class that forward to your implementation class's API. But the advantage is that you have a texture class with both value semantics and value syntax.
struct Texture
{
    Texture(std::string const& texture_name):
        pimpl_{texture_manager.texture(texture_name)}
    {
        // Either
        assert(pimpl_);
        // or
        if (not pimpl_) {throw /* an appropriate exception */;}
        // or do nothing if TextureManager::texture() throws when the name is not found.
    }

    ...

    double get_aspect_ratio() const {return pimpl_->get_aspect_ratio();}

    ...

private:
    TextureImpl const* pimpl_; // invariant: != nullptr
};
...
Texture t{"grass"};
// t has both value semantics and value syntax.
// Treat it just like int (if int had member functions)
// or like std::string (except lighter weight for copying).
double aspect_ratio{t.get_aspect_ratio()};
I've assumed that in the context of your game, you'll never ask for a texture that isn't guaranteed to exist. If that is the case, then you can just assert that the name exists. But if that isn't the case, then you need to decide how to handle that situation. My recommendation would be to make it an invariant of your wrapper class that the pointer can't be nullptr. This means that you throw from the constructor if the texture doesn't exist. That means you handle the problem when you try to create the Texture, rather than to have to check for a null pointer every single time you call a member of your wrapper class.
In answer to your original question, smart pointers are valuable for lifetime management and aren't particularly useful if all you need is to pass around references to objects whose lifetime is guaranteed to outlast the pointer.
You could have a std::map of std::unique_ptrs where the textures are stored. You could then write a get method that returns a reference to a texture by name. That way, if each model knows the name of its texture (which it should), you can simply pass the name into the get method and retrieve a reference from the map.
class TextureManager
{
public:
    Texture& get_texture(const std::string& key)
    { return *textures_.at(key); }

private:
    std::unordered_map<std::string, std::unique_ptr<Texture>> textures_;
};
You could then just use a Texture& in the game object class, as opposed to a Texture*, weak_ptr, etc.
This way the texture manager can act like a cache: the get method can be re-written to search for the texture and, if found, return it from the map; otherwise load it first, move it into the map, and then return a ref to it, as sketched below.
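A minimal sketch of that cache-style rewrite, assuming a hypothetical load_texture helper that reads the texture from disk:

// Hypothetical helper: loads the texture from disk.
std::unique_ptr<Texture> load_texture(const std::string& key);

Texture& TextureManager::get_texture(const std::string& key)
{
    auto it = textures_.find(key);
    if (it == textures_.end()) {
        // Not cached yet: load it first, then move it into the map.
        it = textures_.emplace(key, load_texture(key)).first;
    }
    return *it->second;
}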
Before I get going, as I accidentally a novel...
TL;DR Use shared pointers for figuring out responsibility issues, but be very cautious of cyclical relationships. If I were you, I would use a table of shared pointers to store your assets, and everything that needs those shared pointers should also use a shared pointer. This eliminates the overhead of weak pointers for reading (as that overhead in game is like creating a new smart pointer 60 times a second per object). It's also the approach my team and I took, and it was super effective. You also say your textures are guaranteed to outlive the objects, so your objects cannot delete the textures if they use shared pointers.
If I could throw my 2 cents in, I'd like to tell you about an almost identical foray I took with smart pointers in my own video game; both the good and the bad.
This game's code takes an almost identical approach to your solution #2: A table filled with smart-pointers to bitmaps.
We had some differences though; we had decided to split our table of bitmaps into 2 pieces: one for "urgent" bitmaps, and one for "facile" bitmaps. Urgent bitmaps are bitmaps that are constantly loaded into memory, and would be used in the middle of battle, where we needed the animation NOW and didn't want to go to the hard disk, which had a very noticeable stutter. The facile table was a table of strings of file paths to the bitmaps on the hdd. These would be large bitmaps loaded at the beginning of a relatively long section of gameplay; like your character's walking animation, or the background image.
Using raw pointers here has some problems, specifically ownership. See, our assets table had a Bitmap *find_image(string image_name) function. This function would first search the urgent table for the entry matching image_name. If found, great! Return a bitmap pointer. If not found, search the facile table. If we find a path matching your image name, create the bitmap, then return that pointer.
The class to use this the most was definitely our Animation class. Here's the ownership problem: when should an animation delete its bitmap? If it came from the facile table then there's no problem; that bitmap was created specifically for you. It's your duty to delete it!
However, if your bitmap came from the urgent table, you could not delete it, as doing so would prevent others from using it, and your program goes down like E.T. the game, and your sales follow suit.
Without smart pointers, the only solution here is to have the Animation class clone its bitmaps no matter what. This allows for safe deletion, but kills the speed of the program. Weren't these images supposed to be time sensitive?
However, if the assets class were to return a shared_ptr<Bitmap>, then you have nothing to worry about. Our assets table was static you see, so those pointers were lasting until the end of the program no matter what. We changed our function to be shared_ptr<Bitmap> find_image (string image_name), and never had to clone a bitmap again. If the bitmap came from the facile table, then that smart pointer was the only one of its kind, and was deleted with the animation. If it was an urgent bitmap, then the table still held a reference upon Animation destruction, and the data was preserved.
That's the happy part, here's the ugly part.
I've found shared and unique pointers to be great, but they definitely have their caveats. The largest one for me is not having explicit control over when your data gets deleted. Shared pointers saved our asset lookup, but killed the rest of the game on implementation.
See, we had a memory leak, and thought "we should use smart pointers everywhere!". Huge mistake.
Our game had GameObjects, which were controlled by an Environment. Each environment had a vector of GameObject *'s, and each object had a pointer to its environment.
You should see where I'm going with this.
Objects had methods to "eject" themselves from their environment. This would be in case they needed to move to a new area, or maybe teleport, or phase through other objects.
If the environment was the only reference holder to the object, then your object couldn't leave the environment without getting deleted. This happens commonly when creating projectiles, especially teleporting projectiles.
Objects also were deleting their environment, at least if they were the last ones to leave it. The environment for most game states was a concrete object as well. WE WERE CALLING DELETE ON THE STACK! Yeah we were amateurs, sue us.
In my experience: use unique_ptrs when you're too lazy to call delete and only one thing will ever own your object; use shared_ptrs when you want multiple objects to point to one thing but can't decide who has to delete it; and be very wary of cyclical relationships with shared_ptrs.
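For what it's worth, the usual fix for the Environment/GameObject cycle described above is to make the back-reference non-owning. A minimal sketch, where the class names are just stand-ins for the structure described:

#include <memory>
#include <vector>

struct Environment; // forward declaration

struct GameObject {
    std::weak_ptr<Environment> env; // non-owning back-reference: no cycle
};

struct Environment : std::enable_shared_from_this<Environment> {
    std::vector<std::shared_ptr<GameObject>> objects; // owning side

    // Requires that this Environment is itself managed by a shared_ptr.
    void adopt(std::shared_ptr<GameObject> obj)
    {
        obj->env = shared_from_this(); // converts to weak_ptr: no ownership
        objects.push_back(std::move(obj));
    }
};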
According to Wikipedia, the prototype pattern is:
The prototype pattern is a creational design pattern used in software development when the type of objects to create is determined by a prototypical instance, which is cloned to produce new objects. This pattern is used to:
Avoid subclasses of an object creator in the client application, like the abstract factory pattern does.
Avoid the inherent cost of creating a new object in the standard way (e.g., using the new keyword) when it is prohibitively expensive for a given application.
I saw certain demo codes of this pattern in C++; all of them use the copy constructor.
Can anyone explain how point number two applies (in general as well as in the context of C++), given that we are using the copy constructor anyway in the clone function? If it can be done without the copy constructor, an example code snippet would be great.
You can copy without dynamic allocation. For example, here's a cloning that only happens in a local scope:
Foo prototype;

void local()
{
    Foo x = prototype; // first copy
    x.mutate();
    Foo y = x; // another copy
}
No dynamic allocation is used, ever.
It is true that return new Foo(*this); also makes a copy, but what's more important is that the object is allocated dynamically. That's the cost to which your article is alluding.
In a game I've been making in Java, I ran into an interesting situation that fit the bill of a prototype pattern quite well. You see, I had this Animation object that stored a container of images to flip through, as well as some other data that tracked how long since the last frame was rendered, which frame it was on, if the animation was running or not, etc.
I found that having multiple characters use the same Animation object was causing problems. If two characters shared an animation, they would turn the animation on and off at conflicting times for each other. I would have guys standing still with walking animations, or moving with standing animations. Creation of the animation objects was costly and time consuming, what with creating the sprites, setting the amount of time they would display for, creating an interval queue of images, etc.
Instead, I made the Animation object a prototype object. If an Animation clones itself, it shares the original collection of frames with all other animations, since those are immutable but expensive to construct. The new objects share this immutable base, but have their own information about which frame to draw and when.
Think of it like a projector. When it gets cloned, the new projector might have its own information on whether it's running, which frame it's on, etc., but it may be using the same piece of film as the original projector. The reason they don't trip each other up is that the film is immutable (and expensive to create).
In all honesty, using the prototype in this manner is a great way to implement a flyweight pattern: objects sharing objects that are expensive to create. If you "clone" them, they are instantiated with their own transient state, but still share those expensive base objects with their creator.
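Translated to C++, a minimal sketch of that design could look like this; Frame and the class layout are illustrative, not the poster's actual Java code:

#include <cstddef>
#include <memory>
#include <vector>

struct Frame { /* image data, display duration, ... */ };

class Animation {
public:
    explicit Animation(std::shared_ptr<const std::vector<Frame>> frames)
        : frames_(std::move(frames)) {}

    // Cloning copies only the cheap per-instance playback state;
    // the expensive, immutable frames are shared with the prototype.
    Animation clone() const { return Animation(frames_); }

private:
    std::shared_ptr<const std::vector<Frame>> frames_; // shared, immutable
    std::size_t current_frame_ = 0;                    // per-instance state
    bool running_ = false;
};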
Calling the copy constructor for an object which doesn't use dynamic memory internally is much faster than performing an allocation in dynamic memory via new, because a dynamic allocation may ultimately involve a system call.
I tend to wrap OpenGL objects in their own classes. In OpenGL there is the concept of binding, where you bind your object, do something with it and then unbind it. For example, a texture:
glBindTexture(GL_TEXTURE_2D, TextureColorbufferName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 1000);
glBindTexture(GL_TEXTURE_2D, 0);
Wrapping this would be something like:
texture->bind();
texture->setParameter(...);
texture->setParameter(...);
texture->unBind();
The problem here is that I want to avoid the bind() and unBind() functions, and instead just be able to call the set methods and have the GLObject be bound automatically.
I could just do it in every method implementation:
void Texture::setParameter(...)
{
    this->bind();
    // do something
    this->unBind();
}
Though then I have to do that for every added method! Is there a better way, so it is automatically done before and after every method added?
Maybe a context object can help here. Consider this small class:
class TextureContext {
public:
    TextureContext(GLuint texname) {
        glBindTexture(GL_TEXTURE_2D, texname);
    }
    ~TextureContext() {
        glBindTexture(GL_TEXTURE_2D, 0);
    }
};
This object is now used within a scope:
{
    TextureContext mycont(textname);
    mytexture->setParameter(...);
    mytexture->setParameter(...);
    mytexture->setParameter(...);
}
The object mycont only lives in the scope and automatically calls its destructor (and the unbind, respectively) once the scope is left.
EDIT:
Maybe you can adjust the TextureContext class to take your Texture instance in the constructor instead, and have it retrieve the texture name itself before binding the texture.
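That variant could look something like this; the Texture wrapper with its name() accessor is an assumption:

#include <GL/gl.h>

// Minimal assumed Texture wrapper: only the name() accessor matters here.
class Texture {
public:
    GLuint name() const { return name_; }
private:
    GLuint name_ = 0;
};

class TextureContext {
public:
    explicit TextureContext(const Texture& tex) {
        glBindTexture(GL_TEXTURE_2D, tex.name());
    }
    ~TextureContext() {
        glBindTexture(GL_TEXTURE_2D, 0);
    }

    // Non-copyable: the bind/unbind pair must run exactly once per scope.
    TextureContext(const TextureContext&) = delete;
    TextureContext& operator=(const TextureContext&) = delete;
};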
The problem here, is that I want to avoid the bind() and unBind() functions
Most likely, you won't be able to get rid of them completely. glTexImage2D will require bind/unBind (or lock/unlock, if you want DirectX-style names).
I could just do it in every method implementation:
You shouldn't do that, because you'll get a massive performance drop. It is called "state thrashing", if I remember correctly. If you need to modify multiple parameters, you should modify them all in one go. Calling bind/unbind frequently is extremely inefficient and isn't recommended in any performance guidelines I've seen (i.e. the DirectX SDK, nvidia documentation, random GDC papers, etc.).
Is there a better way, so it is automatically done before and after every method added?
Yes, there is. Cache multiple state changes and delay actual OpenGL calls till they're absolutely necessary.
I.e. if your program requests to set the min filter, mag filter, wrap_s, wrap_t and other parameters via glTexParameter, and your texture isn't currently bound to any texture stage, don't set the parameters now. Store them in an internal list (within this specific texture), and set them all at once the next time the texture is bound, or when the list reaches a certain size, or when the user calls glGetTexParameter. Once it is time to set parameters, bind the texture, set the parameters, and unbind it, if necessary.
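Here is a minimal sketch of that queueing idea, assuming a simple Texture wrapper that owns a GL texture name:

#include <GL/gl.h>
#include <utility>
#include <vector>

class Texture {
public:
    // No GL call here: the change is only queued.
    void setParameter(GLenum pname, GLint value)
    {
        pending_.emplace_back(pname, value);
    }

    // The queued changes are flushed the next time the texture is bound.
    void bind()
    {
        glBindTexture(GL_TEXTURE_2D, name_);
        for (const auto& p : pending_)
            glTexParameteri(GL_TEXTURE_2D, p.first, p.second);
        pending_.clear();
    }

private:
    GLuint name_ = 0;
    std::vector<std::pair<GLenum, GLint>> pending_;
};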
There are a few potential problems with this approach, though:
Time spent on changing texture states might become a bit unpredictable (because you won't be sure when exactly the actual OpenGL call will be performed).
It might be harder to detect OpenGL errors using this method.
If the internal list uses dynamically allocated memory and frequently calls new/delete, this might become a bottleneck, because calls to new/delete can be slow. This can be solved by using a fixed-size circular buffer for that internal list.
I tend to wrap OpenGL objects in their own classes.
A guy goes to the doctor and says, "It hurts whenever I raise my arm like this." So the doctor says, "Then stop raising your arm like that."
Your problem is that you're trying to provide an object-based interface to a system that is not object based. That will only end in tears and/or pain.
A far better solution would be to raise the level of your abstraction. Instead of wrapping OpenGL objects directly, wrap your graphics API. You might have some external concept of "texture" which would store an OpenGL texture object, but you wouldn't expose functions that change parameters on that texture directly. Or even indirectly.
Don't make promises with your API that you can't keep. Raise the abstraction to the point where the external code simply doesn't care what the texture's filtering mode and other parameters are.
Alternatively, have the filtering (and wrapping) be part of the object's constructor, a fixed value set at creation time.
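A minimal sketch of that fixed-at-creation approach, with an illustrative wrapper class:

#include <GL/gl.h>

class Texture {
public:
    Texture(GLenum minFilter, GLenum magFilter, GLenum wrap)
    {
        glGenTextures(1, &name_);
        glBindTexture(GL_TEXTURE_2D, name_);
        // Set once at creation time; never exposed for mutation afterwards.
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, minFilter);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, magFilter);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrap);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrap);
        glBindTexture(GL_TEXTURE_2D, 0);
    }

    ~Texture() { glDeleteTextures(1, &name_); }

private:
    GLuint name_ = 0;
};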
Right now, I'm modelling some sort of little OpenGL library to fool around with graphic programming etc. Therefore, I'm using classes to wrap around specific OpenGL function calls like texture creation, shader creation and so on, so far, so good.
My Problem:
All OpenGL calls must be made by the thread which owns the created OpenGL context (at least under Windows; from any other thread they will do nothing and raise an OpenGL error). So, in order to get an OpenGL context, I first create an instance of a window class (just another wrapper around the Win API calls) and finally create an OpenGL context for that window. That sounded quite logical to me. (If there's already a flaw in my design that makes you scream, let me know...)
If I want to create a texture, or any other object that needs OpenGL calls for creation, I basically do this (the called constructor of an OpenGL object, example):
opengl_object()
{
    //do necessary stuff for object initialisation
    //pass object to the OpenGL thread for final construction
    //wait until object is constructed by the OpenGL thread
}
So, in words, I create an object like any other object using
opengl_object obj;
This object then, in its constructor, puts itself into a queue of OpenGL objects to be created by the OpenGL context thread. The OpenGL context thread then calls a virtual function, implemented in all OpenGL objects, which contains the necessary OpenGL calls to finally create the object.
I really thought, this way of handling that problem, would be nice. However, right now, I think I'm awfully wrong.
The thing is, even though the above approach works perfectly fine so far, I'm having trouble as soon as the class hierarchy goes deeper. For example (which is not perfect, but it shows my problem):
Let's say I have a class called sprite, representing a Sprite, obviously. It has its own create function for the OpenGL thread, in which the vertices and texture coordinates are loaded into the graphics card's memory, and so on. That's no problem so far.
Let's further say I want to have 2 ways of rendering sprites: one instanced and one through another way. So, I would end up with 2 classes, sprite_instanced and sprite_not_instanced. Both are derived from the sprite class, as they are both sprites which are only rendered differently. However, sprite_instanced and sprite_not_instanced need further OpenGL calls in their create functions.
My Solution so far (and I feel really awful about it!)
I have some understanding of how object construction works in C++ and how it affects virtual functions. So I decided to use the virtual create function of the sprite class only to load the vertex data and so on into graphics memory. The virtual create method of sprite_instanced will then do the preparation to render that sprite instanced.
So, if I want write
sprite_instanced s;
Firstly, the sprite constructor is called and after some initialisation, the constructing thread passes the object to the OpenGL thread. At this point, the passed object is merely a normal sprite, so sprite::create will be called and the OpenGL thread will create a normal sprite. After that, the constructing thread will call the constructor of sprite_instanced, again do some initialisation and pass the object to the OpenGL thread. This time however, it's a sprite_instanced and therefore sprite_instanced::create will be called.
So, if I'm right with the above assumption, everything happens exactly as it should, in my case at least. I spent the last hour reading about calling virtual functions from constructors and how the v-table is built, etc. I've run some tests to check my assumption, but those might be compiler-specific, so I don't rely on them 100%. In addition, it just feels awful, like a terrible hack.
Another Solution
Another possibility would be implementing a factory method in the OpenGL thread class to take care of that. Then I could do all the OpenGL calls inside the constructors of those objects. However, in that case, I would need a lot of functions (or one template-based approach), and it feels like a possible loss of potential rendering time when the OpenGL thread has more to do than it needs to...
My Question
Is it ok to handle it the way I described it above? Or should I rather throw that stuff away and do something else?
You were already given some good advice. So I'll just spice it up a bit:
One important thing to understand about OpenGL is, that it is a state machine, which doesn't need some elaborate "initialization". You just use it, and that's about it. Buffer Objects (Textures, Vertex Buffer Objects, Pixel Buffer Objects) may make it look different, and most tutorials and real world applications indeed fill Buffer Objects at application start.
However it is perfectly fine to create them during regular program execution. In my 3D engine I use the free CPU time during the double buffer swap for asynchronous uploads into Buffer Objects (for(b in buffers){glMapBuffer(b.target, GL_WRITE_ONLY);} start_buffer_filling_thread(); SwapBuffers(); wait_for_buffer_filling_thread(); for(b in buffers){glUnmapBuffer(b.target);}).
It's also important to understand that simple things like sprites should not each be given their own VBO. One normally groups large numbers of sprites into a single VBO. You don't have to draw them all together, since you can offset into the VBO and make partial drawing calls. But this common OpenGL pattern (geometrical objects sharing a buffer object) goes completely against the principle of your classes. So you'd need some buffer object manager that hands out slices of address space to consumers.
Using a class hierarchy with OpenGL in itself is not a bad idea, but it should sit some levels higher than OpenGL. If you just map OpenGL 1:1 to classes, you gain nothing but complexity and bloat. Whether I call OpenGL functions directly or through a class, I still have to do all the grunt work. So a texture class should not just map the concept of a texture object; it should also take care of interacting with Pixel Buffer Objects (if used).
If you actually want to wrap OpenGL in classes, I strongly recommend not using virtual functions but static (i.e. compilation-unit level) inline classes, so that they become syntactic sugar the compiler will not bloat up too much.
The question is simplified by the assumption that a single context is current on a single thread; in reality there can be multiple OpenGL contexts, also on different threads (and while we're at it, we should consider context name-space sharing).
First of all, I think you should separate the OpenGL calls from the object constructor. Doing this allows you to set up an object without caring about which OpenGL context is current; subsequently, the object can be enqueued for creation in the main rendering thread.
An example. Suppose we have 2 queues: one which holds Texture objects for loading texture data from filesystem, one which hold Texture objects for uploading texture data on GPU memory (after having loaded data, of course).
Thread 1: The texture loader
{
    for (;;) {
        while (textureLoadQueue.Size() > 0) {
            Texture obj = textureLoadQueue.Dequeue();
            obj.Load();
            textureUploadQueue.Enqueue(obj);
        }
    }
}
Thread 2: The texture uploader code section, essentially the main rendering thread
{
    while (textureUploadQueue.Size() > 0) {
        Texture obj = textureUploadQueue.Dequeue();
        obj.Upload(ctx);
    }
}
The Texture object constructor should look like:
Texture::Texture(const char *path)
{
    mImagePath = path;
    textureLoadQueue.Enqueue(this);
}
This is only an example. Of course each object has different requirements, but this solution is the most scalable.
My solution is essentially described by the interface IRenderObject (the documentation differs considerably from the current implementation, since I'm refactoring a lot at the moment and development is at a very alpha level). This solution is applied to the C# language, which introduces additional complexity due to garbage collection, but the concepts are perfectly adaptable to C++.
Essentially, the interface IRenderObject define a base OpenGL object:
It has a name (those returned by Gen routines)
It can be created using a current OpenGL context
It can be deleted using a current OpenGL context
It can be released asynchronously using an "OpenGL garbage collector"
The creation/deletion operations are very intuitive. They take a RenderContext abstracting the current context; using this object, it is possible to execute checks that are useful for finding bugs in object creation/deletion:
The Create method checks whether the context is current, whether the context can create an object of that type, and so on...
The Delete method checks whether the context is current and, more importantly, whether the context passed as a parameter shares the same object name space as the context that created the underlying IRenderObject.
Here is an example of the Delete method. The code works, but it doesn't work as expected:
RenderContext ctx1 = new RenderContext(), ctx2 = new RenderContext();
Texture tex1, tex2;

ctx1.MakeCurrent(true);
tex1 = new Texture2D();
tex1.Load("example.bmp");
tex1.Create(ctx1); // In this case, we have texture object name = 1

ctx2.MakeCurrent(true);
tex2 = new Texture2D();
tex2.Load("example.bmp");
tex2.Create(ctx2); // Here we also get texture object name = 1, the same as before, since the two contexts do not share an object name space

// Somewhere in the code
ctx1.MakeCurrent(true);
tex2.Delete(ctx1); // Works, but it actually deletes the texture represented by tex1!!!
The asynchronous release operation aims to delete the object without having a current context (in fact, the method doesn't take any RenderContext parameter). It could happen that the object is disposed in a separate thread which doesn't have a current context; also, I cannot rely on the garbage collector (C++ doesn't have one), since it is executed in a thread over which I have no control. Furthermore, it is desirable to implement the IDisposable interface, so application code can control the OpenGL object lifetime.
The OpenGL GarbageCollector, is executed on the thread having the right context current.
It is always bad form to call any virtual function in a constructor. The call will not dispatch as a normal virtual call: it resolves to the version defined by the class whose constructor is currently running, not the most-derived override.
Your data structures are very confused. You should investigate the concept of Factory objects. These are objects that you use to construct other objects. You should have a SpriteFactory, which gets pushed into some kind of queue or whatever. That SpriteFactory should be what creates the Sprite object itself. That way, you don't have this notion of a partially constructed object, where creating it pushes itself into a queue and so forth.
Indeed, anytime you start to write, "Objectname::Create", stop and think, "I really should be using a Factory object."
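A minimal sketch of that idea, under the assumption that the factory's create() runs on the OpenGL thread (SpriteFactory and its members are illustrative):

#include <memory>

struct Sprite {
    // Holds the GL resources once the OpenGL thread has created them.
};

class SpriteFactory {
public:
    // Runs on the OpenGL thread, so GL calls are safe here; the Sprite
    // is never visible to anyone in a partially constructed state.
    std::unique_ptr<Sprite> create() const
    {
        std::unique_ptr<Sprite> sprite(new Sprite);
        // ... perform the OpenGL calls that finish construction ...
        return sprite;
    }
};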
OpenGL was designed for C, not C++. What I've learned works best is to write functions rather than classes to wrap around OpenGL functions, as OpenGL manages its own objects internally. Use classes for loading your data, then pass it to C-style functions which deal with OpenGL. You should be very careful generating/freeing OpenGL buffers in constructors/destructors!
I would avoid having your objects insert themselves into the GL thread's queue on construction. That should be an explicit step, e.g.
gfxObj_t thing(arg);      // read a file or something in constructor
mWindow.addGfxObj(thing); // put the thing in mWindow's queue
This lets you do things like constructing a set of objects and then putting them all in the queue at once, and guarantees that the constructor ends before any virtual functions are called. Note that putting the enqueue at the end of the constructor does not guarantee this, because constructors are always called from top-most class down. This means that if you queue an object to have a virtual function called on it, derived classes will be enqueued before their own constructors begin acting. This means you have a race condition which can cause action on an uninitialized object! A nightmare to debug if you don't realize what you've done.
I think the problem here isn't RAII, or the fact that OpenGL is a c-style interface. It's that you're assuming sprite and sprite_instanced should both derive from a common base. These problems occur all the time with class hierarchies and one of the first lessons I learned about object orientation, mostly through many errors, is that it's almost always better to encapsulate than to derive. EXCEPT that if you're going to derive, do it through an abstract interface.
In other words, don't be fooled by the fact that both of these classes have the name "sprite" in them. They are otherwise totally different in behaviour. For any common functionality they share, implement an abstract base that encapsulates that functionality.