This is something from the theory of C++ which I never quite got.
Inside class A I have a very large list of small structures (small in size) called nodes.
So I have:
private:
QList<Node> nodes;
Now Class A has an instance of Class B (called cB), and one of cB's functions needs to iterate over nodes and just read them (it will not modify the information in any way).
void ClassB::readNodes(){
for (int i = 0; i < nodes.size(); i++){
// READ nodes and do stuff with that information
}
}
So as far as I know there are two ways to approach this. This is my current approach: inside Class B I have:
public:
void setNodes(const QList<Node> *list) {nodes = list;}
private:
const QList<Node> *nodes;
And then somewhere in Class A I do:
cB.setNodes(&nodes);
The objective of this approach is that I want to avoid copying a very large data structure each time readNodes() is called, and also there is only one copy of the data structure in memory. This is important because Class A will continually change nodes.
The second approach is much simpler and clearer, in my opinion. Simply define readNodes as:
void ClassB::readNodes(const QList<Node> &nodes){
for (int i = 0; i < nodes.size(); i++){
// READ nodes and do stuff with that information
}
}
But I'm not totally sure whether this will incur a performance penalty. As far as I understand it, this approach also takes nodes by reference, so no deep copy of nodes occurs. Is this correct?
Most of the Qt classes (including the container classes) are implicitly shared. So even if you pass them around by value, there won't be a deep copy unless you modify one of the copies (copy-on-write).
Passing const references of objects is a more general and safe approach though.
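To illustrate the cost difference without Qt, here is a small sketch with std::vector; the Node type and its copy counter are invented for the demonstration, not taken from the question:

```cpp
#include <cstddef>
#include <vector>

// Node and the copy counter are made up for this demonstration.
struct Node {
    int value = 0;
    static inline std::size_t copies = 0;          // counts copy constructions
    Node() = default;
    Node(const Node& other) : value(other.value) { ++copies; }
};

// Pass by value: the whole container is copied element by element on every call.
std::size_t sumByValue(std::vector<Node> nodes) {
    std::size_t s = 0;
    for (const Node& n : nodes) s += n.value;
    return s;
}

// Pass by const reference: nothing is copied, and the callee cannot modify.
std::size_t sumByConstRef(const std::vector<Node>& nodes) {
    std::size_t s = 0;
    for (const Node& n : nodes) s += n.value;
    return s;
}
```

With a 1000-element vector, the by-value version performs 1000 element copies per call while the const-reference version performs none, which is the whole argument for the second approach.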
Related
I believe this will be my first question for the site, so I apologize for any mistakes or errors in this post. I am a beginner C++ programmer as well, so forgive me if my questions come across as “noobish”.
Background: A collection of Parent Entity objects are created at startup (and currently not removed or added-to during runtime), and are then linked to a series of Activator Entity objects (both at the beginning, and during, runtime) through a Child Entity object. When establishing a link, the Parent generates a Child (which is stored in a local vector), and returns a pointer to the Child for the Activator to store.
Activators will “activate” children they are linked with, which will then do jobs based off internal and Parent settings. After being activated, they are also updated periodically by the Parent, continuing until eventually deactivating.
Below is a simplified example of the classes present.
class ParentEntity {
std::vector<ChildEntity> m_Children;
std::vector<ChildEntity*> m_ActiveChildren;
public:
//Funcs
ParentEntity(unsigned expectedChildren) { m_Children.reserve(expectedChildren); }
ChildEntity* AddChild(){
m_Children.push_back(ChildEntity(*this));
return &(m_Children.back());
}
void RemoveChild(unsigned iterator) {
//Can't figure out a way to remove from the m_Children list without disrupting all the pointers.
//m_Children.erase(m_Children.begin() + iterator); uses copy assignment, which won't work as const values will be present in Child
}
void AddActiveChild(ChildEntity* activeChild) {
m_ActiveChildren.push_back(activeChild);
}
bool Update(){ //Checks if Children are active,
if (!m_ActiveChildren.empty()) {
std::vector<ChildEntity*> TempActive;
TempActive.reserve(m_ActiveChildren.size());
for (unsigned i = 0; i < m_ActiveChildren.size(); i++) {
if (m_ActiveChildren[i]->Update()) {
TempActive.push_back(m_ActiveChildren[i]);
}
}
if (!TempActive.empty()) {
m_ActiveChildren = TempActive;
return true;
}
else {
m_ActiveChildren.clear();
return false;
}
}
else {
return false;
}
}
};
class ChildEntity {
public:
ChildEntity(ParentEntity& Origin) //Not const because it will call Origin functions that alter the parent
:
m_Origin(Origin)
{}
void SetActive() {
m_ChildActive = true;
m_Origin.AddActiveChild(this);
}
bool Update() { //Pseudo job which causes a state switch
srand(unsigned(time(NULL)));
if ((rand() % 10 + 1) > 5) {
m_ChildActive = false;
}
return m_ChildActive;
}
private:
ParentEntity& m_Origin;
bool m_ChildActive = false;
};
class ActivatorEntity {
std::vector<ChildEntity*> ActivationTargets;
public:
ActivatorEntity(unsigned expectedTargets) { ActivationTargets.reserve(expectedTargets); }
void AddTarget(ParentEntity& Target) {
ActivationTargets.push_back(Target.AddChild());
}
void RemoveTarget(unsigned iterator) {
ActivationTargets.erase(ActivationTargets.begin() + iterator);
}
void Activate(){
for (unsigned i = 0; i < ActivationTargets.size(); i++) {
ActivationTargets[i]->SetActive();
}
}
};
With that all laid out, my three questions are:
Is there a way to update Pointers when a vector resizes?
When a Child is added, if it goes past the expected capacity, the vector creates a new array and moves the original objects to the new location. This breaks all of the Activator pointers, and any m_ActiveChild pointers, as they are pointing to the old location.
Is there a way to remove Child objects from the m_Children vector?
Since ChildEntity objects will host const items within them, copy assignment operations won’t work smoothly, and the Vector’s erase function won’t work. The m_Children vector could be rebuilt without the unwanted object through a temporary vector and copy constructor, but this leads to all of the pointers being wrong again.
Please let me know if there are any other suggested optimizations or corrections I should make!
Thank you all for your help!
Your problem, abstractly seen, is that on one hand you have collections of objects that you want to iterate through, kept in a container; and that on the other hand these objects are linked to each other. Re-ordering the container destroys the links.
Any problem can be solved by an additional indirection: Putting not the objects but object handles in the container would make re-ordering possible without affecting cross-references. The trivial case would be to simply use pointers; modern C++ would use smart pointers.
The disadvantage here is that you'll move to dynamic allocation which usually destroys locality right away (though potentially not if most allocations happen during initialization) and carries the usual run-time overhead. The latter may be prohibitive for simple, short-lived objects.
The advantage is that handling pointers enables you to make your objects polymorphic which is a good thing for "activators" and collections of "children" performing "updates": What you have here is the description of an interface which is typically implemented by various concrete classes. Putting objects in a container instead of pointers prevents such a design because all objects in a container must have the same concrete type.
If you need to store more information you can write your own handle class encapsulating a smart pointer; perhaps that's a good idea from the beginning because it is easily extensible without affecting all client code with only a moderate overhead (both in development and run time).
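As an illustration of the indirection idea (the names here are mine, not from the code above), storing the children behind unique_ptr keeps their addresses stable even when the owning vector reallocates:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Child and Parent are illustrative stand-ins, not the classes from the question.
struct Child { int id; };

class Parent {
    // Extra indirection: the vector may reallocate freely, but each Child
    // lives at a fixed heap address, so handed-out raw pointers stay valid.
    std::vector<std::unique_ptr<Child>> children_;
public:
    Child* addChild(int id) {
        children_.push_back(std::make_unique<Child>(Child{id}));
        return children_.back().get();
    }
    std::size_t count() const { return children_.size(); }
};
```

The price is one heap allocation per child; the payoff is that AddChild can safely return a pointer an Activator may keep indefinitely.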
As part of a simulation, I'm currently iterating through a large vector of objects looking for those that have a certain attribute. It seems to me that it would be faster to store the addresses of those objects in another vector of pointers until I no longer need to operate on them. Example code would look like this (I'm not sure about the erase part):
class student {
public:
bool studying = false;
void study();
};
void student::study()
{
//Read a book
studying = false;
};
int vector_question_main(int argc, char* argv[])
{
    std::vector<student> masterList;
    std::vector<student*> studyingList;
    student lazy1;
    student good1;
    good1.studying = true;
    masterList.push_back(lazy1);
    masterList.push_back(good1);
    student* s = &masterList.at(1);
    studyingList.push_back(s);
    for (size_t i = 0; i < studyingList.size(); )
    {
        studyingList.at(i)->study();
        if (studyingList.at(i)->studying == false)
        {
            studyingList.erase(studyingList.begin() + i);
        }
        else
        {
            i++;
        }
    }
    return 0;
}
Am I on the right track, or is there a "better" way? To help define my situation: I won't know in advance how many objects I have or how many I need to track, and I will need to iterate over and operate on them a lot.
Edit:
Thanks for the initial responses.
@PaulMcKenzie
The hard requirements/statements of qualities I need from my collection of objects so far are:
- It's easily re-sizeable, since I want to add/remove objects frequently.
- I can iterate over the collection (in no particular order), performing some action such as getting/setting an attribute or calling a method.
- An object's method will only act on its own attributes, based on its own attributes or parameters.
- I expect a future requirement will be to iterate over only a smaller subset, but that may be premature optimisation. Thanks @Jack Deeth
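Given those requirements, the "run the job, then drop the finished ones" step is usually written with the erase-remove idiom rather than erasing inside the loop. A hedged sketch, with student as a stand-in matching the code above:

```cpp
#include <algorithm>
#include <vector>

// A stand-in matching the question's student class.
struct student {
    bool studying = false;
    void study() { studying = false; }   // "read a book", then stop studying
};

// Run study() on every tracked student, then drop pointers to those no longer
// studying, using the erase-remove idiom instead of erasing mid-iteration.
// Note: the pointers stay valid only while masterList itself is not resized.
void processStudying(std::vector<student*>& studyingList) {
    for (student* s : studyingList) s->study();
    studyingList.erase(
        std::remove_if(studyingList.begin(), studyingList.end(),
                       [](const student* s) { return !s->studying; }),
        studyingList.end());
}
```

This keeps the pass over the list linear; erasing one element at a time from the middle of a vector is quadratic in the worst case.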
I'm having trouble implementing an ImageManager into my program. I had success using this method with references:
//definition in Brick.h
ImageManager &imgr;
//constructor taking &imgr as a reference upon creation of object
Brick::Brick(ImageManager &im) : imgr(im){
//imgr is now a reference in my class, so it points to the same object that imgr in another class would point to
//this essentially makes one "static" instance of imgr, so all my graphic objects are dealing with the same instance of my ImageManager
imgr.doStuff();
}
This method of passing around my imgr used to work, until I started trying to remove objects from vectors. For instance, in my Level class I try to remove elements from a vector of Brick objects,
void Level::RemoveLine(int line){
//loop through every piece, loop through the given piece's rects; if a rect falls on the removed line, remove it
for(int i = 0; i < gamePieces_.size(); i++){
//one write iterator per gamepiece (assumes GetPieceRectangles() returns a reference)
auto &rects = gamePieces_[i].GetPieceRectangles();
auto write = rects.begin();
for(auto read = rects.begin(); read != rects.end(); read++){
if(read->GetActiveLine() != line){
if(read != write){
*write = std::move(*read);
}
write++;
}
}
rects.erase(write, rects.end());
}
}
but this doesn't work, because the ImageManager &imgr member declared in Brick.h makes Brick non-assignable (a reference member suppresses the copy assignment operator), so elements can't be shifted around when I try to .erase() from the vector. My goal is to implement one static ImageManager object to be used throughout all my classes. How would I go about doing this?
"My goal is to implement one static ImageManager object to be used throughout all my classes"
You can implement ImageManager as a Singleton class. But I have learnt to use a singleton only if there's no other option.
You can also use static data members in your class. In this way only one copy of your class's data members would be in circulation.
Generally speaking this kind of code isn't what you want. Take a look at the Singleton design pattern.
https://en.wikipedia.org/wiki/Singleton_pattern
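A minimal sketch of the singleton approach using a function-local static (a "Meyers singleton", thread-safe since C++11 and free of static-initialization-order problems); the ImageManager shown here is a stand-in, not your real class:

```cpp
#include <string>

// Stand-in ImageManager demonstrating the Meyers-singleton shape.
class ImageManager {
public:
    static ImageManager& instance() {
        static ImageManager mgr;   // constructed once, on first use
        return mgr;
    }
    int loadCount = 0;
    void load(const std::string& /*name*/) { ++loadCount; }
private:
    ImageManager() = default;                      // no outside construction
    ImageManager(const ImageManager&) = delete;    // no copies
    ImageManager& operator=(const ImageManager&) = delete;
};
```

Brick (and everyone else) would then call ImageManager::instance() instead of holding an ImageManager& member, which also makes Brick copyable and erasable from vectors again.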
I'm not quite sure that I need an object pool, yet it seems the most viable solution, but it has some unwanted cons associated with it. I am making a game where entities are stored within an object pool. These entities are not allocated directly with new; instead a std::deque handles the memory for them.
This is what my object pool more or less looks like:
struct Pool
{
Pool()
: _pool(DEFAULT_SIZE)
{}
Entity* create()
{
int index;
if(!_destroyedEntitiesIndicies.empty())
{
index = _destroyedEntitiesIndicies.front();
_destroyedEntitiesIndicies.pop();
}
else
{
index = _nextIndex++;
}
Entity* entity = &_pool[index];
entity->id = index;
return entity;
}
void destroy(Entity* x)
{
_destroyedEntitiesIndicies.emplace(x->id);
x->id = 0;
}
private:
std::deque<Entity> _pool;
std::queue<int> _destroyedEntitiesIndicies;
int _nextIndex = 0;
};
If I destroy an entity, its ID will be added to the _destroyedEntitiesIndicies queue so that the ID will be re-used, and lastly its ID will be set to 0. Now the only pitfall to this is: if I destroy an entity and then immediately create a new one, the Entity that was previously destroyed will be updated to be the same entity that was just created.
i.e.
Entity* object1 = pool.create(); // create an object
pool.destroy(object1); // destroy it
Entity* object2 = pool.create(); // create another object
// now object1 will be the same as object2
std::cout << (object1 == object2) << '\n'; // this will print out 1
This doesn't seem right to me. How do I avoid this? Obviously the above will probably not happen (as I'll delay object destruction until the next frame). But this may cause some disturbance whilst saving entity states to a file, or something along those lines.
EDIT:
Let's say I did NULL out entities to destroy them. What if I was able to get an Entity from the pool, or store a copy of a pointer to the actual entity? How would I NULL all the other duplicate pointers when the entity is destroyed?
i.e.
Pool pool;
Entity* entity = pool.create();
Entity* theSameEntity = pool.get(entity->getId());
pool.destroy(entity);
// now entity == nullptr, but theSameEntity still points to the original entity
If you want an Entity instance only to be reachable via create, you will have to hide the get function (which did not exist in your original code anyway :) ).
I think adding this kind of security to your game is quite a bit of an overkill but if you really need a mechanism to control access to certain parts in memory, I would consider returning something like a handle or a weak pointer instead of a raw pointer. This weak pointer would contain an index on a vector/map (that you store somewhere unreachable to anything but that weak pointer), which in turn contains the actual Entity pointer, and a small hash value indicating whether the weak pointer is still valid or not.
Here's a bit of code so you see what I mean:
struct WeakEntityPtr; // Forward declaration.
struct WeakRefIndex { unsigned int m_index; unsigned int m_hash; }; // Small helper struct.
class Entity {
friend struct WeakEntityPtr;
private:
static std::vector< Entity* > s_weakTable;  // defined elsewhere, e.g. with 100 entries
static std::vector< char > s_hashTable;     // defined elsewhere, e.g. with 100 entries
static WeakRefIndex findFreeWeakRefIndex(); // find next free index and change the hash value in the hashTable at that index
};
struct WeakEntityPtr {
private:
WeakRefIndex m_refIndex;
public:
inline Entity* get() {
Entity* result = nullptr;
// Check if the weak pointer is still valid by comparing the hash values.
if ( m_refIndex.m_hash == Entity::s_hashTable[ m_refIndex.m_index ] )
{
result = Entity::s_weakTable[ m_refIndex.m_index ];
}
return result;
}
};
This is not a complete example though (you will have to take care of proper (copy) constructors, assignment operations etc etc...) but it should give you the idea what I am talking about.
However, I want to stress that I still think a simple pool is sufficient for what you are trying to do in that context. You will have to make the rest of your code play nicely with the entities so they don't reuse objects that they're not supposed to reuse, but I think that is easier done and can be maintained more clearly than the whole handle/weak pointer story above.
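For concreteness, here is a compilable miniature of the generation-checked handle idea; all names and the layout are my own simplification, not code from the question:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified stand-ins for the discussion above.
struct Entity { int id = 0; };

struct Handle { std::uint32_t index; std::uint32_t generation; };

// Each slot carries a generation counter; destroying a slot bumps it, so any
// old handle to that slot resolves to nullptr from then on, even after reuse.
class HandlePool {
    struct Slot { Entity entity; std::uint32_t generation = 0; bool alive = false; };
    std::vector<Slot> slots_;
public:
    explicit HandlePool(std::size_t n) : slots_(n) {}

    Handle create() {
        for (std::uint32_t i = 0; i < slots_.size(); ++i) {
            if (!slots_[i].alive) {
                slots_[i].alive = true;
                return { i, slots_[i].generation };
            }
        }
        slots_.emplace_back();                      // grow when the pool is full
        slots_.back().alive = true;
        return { static_cast<std::uint32_t>(slots_.size() - 1), 0 };
    }

    void destroy(Handle h) {
        if (get(h) != nullptr) {
            slots_[h.index].alive = false;
            ++slots_[h.index].generation;           // invalidates outstanding handles
        }
    }

    Entity* get(Handle h) {
        Slot& s = slots_[h.index];
        return (s.alive && s.generation == h.generation) ? &s.entity : nullptr;
    }
};
```

Callers hold Handle values instead of Entity*, so the "stale copy" problem from the question turns into a nullptr at the point of use instead of silent aliasing.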
This question seems to have various parts. Let's see:
(...) If I destroy an entity and then immediately create a new one,
the Entity that was previously destroyed will be updated to be the
same entity that was just created. This doesn't seem right to me. How
do I avoid this?
You could modify this method:
void destroy(Entity* x)
{
_destroyedEntitiesIndicies.emplace(x->id);
x->id = 0;
}
To be:
void destroy(Entity *&x)
{
_destroyedEntitiesIndicies.emplace(x->id);
x->id = 0;
x = NULL;
}
This way, you will avoid the specific problem you are experiencing. However, it won't solve the whole problem, you can always have copies which are not going to be updated to NULL.
Another way is to use auto_ptr<> (in C++98; unique_ptr<> in C++11), which guarantees that the inner pointer will be set to NULL when released. If you combine this with overloading operators new and delete in your Entity class (see below), you can have a quite powerful mechanism. There are some variations, such as shared_ptr<>, in the new version of the standard, C++11, which can also be useful to you. Your specific example:
auto_ptr<Entity> object1( new Entity ); // calls pool.create()
object1.reset(); // destroys the object (invokes pool.destroy if operator delete is overloaded)
auto_ptr<Entity> object2( new Entity ); // create another object
// now object1 will NOT be the same as object2
std::cout << (object1.get() == object2.get()) << '\n'; // this will print out 0
You have various possible sources of information, such as cplusplus.com, Wikipedia, and a very interesting article from Herb Sutter.
Alternatives to an Object Pool?
Object pools are created in order to avoid continuous memory manipulation, which is expensive, in those situations in which the maximum number of objects is known. There are no alternatives to an object pool that I can think of for your case; I think you are trying the correct design. However, if you have a lot of creations and destructions, maybe the best approach is not an object pool. It is impossible to say without experimenting and measuring times.
About the implementation, there are various options.
In the first place, it is not clear whether you're gaining any performance by avoiding memory allocation, since you are using _destroyedEntitiesIndicies (you are potentially allocating memory each time you destroy an object anyway). You'll have to experiment with your code to see whether this gives you enough of a performance gain in contrast to plain allocation. You can try to remove _destroyedEntitiesIndicies altogether, and look for an empty slot only when you are running out of them (_nextIndex >= DEFAULT_SIZE). Another thing to try is to discard the memory wasted in those free slots and allocate another chunk (DEFAULT_SIZE) instead.
Again, it all depends on the real use you are experiencing. The only way to find out is experimenting and measuring.
Finally, remember that you can modify class Entity in order to transparently support the object pool or not. A benefit of this is that you can experiment whether it is a really better approach or not.
class Entity {
public:
// more things...
void * operator new(size_t size)
{
return pool.create();
}
void operator delete(void * entity)
{
pool.destroy(static_cast<Entity*>(entity));
}
private:
static Pool pool; // must be static: operator new runs before any instance exists
};
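A compilable sketch of that idea; note that the pool is static (operator new is called with no instance around), and the pool shown here is a deliberately tiny stand-in that just hands out fixed-size slots from a static buffer, not the Pool from the question:

```cpp
#include <cstddef>
#include <new>

// Tiny stand-in pool: hands out fixed-size slots from an internal buffer and
// counts allocations, purely to show the operator new/delete routing.
class EntityPool {
    static const std::size_t kSlots = 8;
    static const std::size_t kSlotSize = 64;        // assumed >= sizeof(Entity)
    alignas(std::max_align_t) unsigned char buffer_[kSlots * kSlotSize];
    bool used_[kSlots] = {};
public:
    int allocations = 0;
    void* allocate(std::size_t /*size*/) {
        for (std::size_t i = 0; i < kSlots; ++i)
            if (!used_[i]) { used_[i] = true; ++allocations; return buffer_ + i * kSlotSize; }
        throw std::bad_alloc();                      // pool exhausted
    }
    void release(void* p) {
        std::size_t i = (static_cast<unsigned char*>(p) - buffer_) / kSlotSize;
        used_[i] = false;
    }
};

struct Entity {
    int id = 0;
    static EntityPool pool;                          // one pool for the whole class
    void* operator new(std::size_t size) { return pool.allocate(size); }
    void operator delete(void* p) { pool.release(p); }
};
EntityPool Entity::pool;
```

With this in place, plain `new Entity` / `delete e` (and therefore smart pointers holding Entity*) route through the pool transparently, which is exactly the "experiment with or without the pool" benefit mentioned above.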
Hope this helps.
My application uses a large amount of Panda objects. Each Panda has a list of Bamboo objects. This list does not change once the Panda is initialized (no Bamboo objects are added or removed). Currently, my class is implemented as follows:
class Panda
{
int a;
int b;
int _bambooCount;
Bamboo* _bamboo;
Panda (int count, Bamboo* bamboo)
{
_bambooCount = count;
_bamboo = new Bamboo[count];
// ... copy bamboo into the array ...
}
};
To alleviate the overhead of allocating an array of Bamboo objects, I could implement this class as follows -- basically, instead of creating objects via the regular constructor, a construction method allocates a single memory block to hold both the Panda object and its Bamboo array:
class Panda
{
int a;
int b;
Panda ()
{
// ... other initializations here ...
}
static Panda *createPanda (int count, Bamboo* bamboo)
{
byte* p = new byte[sizeof(Panda) +
sizeof(Bamboo) * count];
new (p) Panda ();
Bamboo* dest = (Bamboo*)
(p + sizeof(Panda));
// ... copy the bamboo objects into the
// memory behind the object ...
return (Panda*)p;
}
};
Can you foresee any problems with the second design, other than the increased maintenance effort? Is this an acceptable design pattern, or simply a premature optimization that could come back to bite me later?
C++ gives you another option. You should consider using std::vector.
class Panda
{
int a;
int b;
std::vector<Bamboo> bamboo;
// if you do not want to store by value:
//std::vector< shared_ptr<Bamboo> > bamboo;
Panda (int count, Bamboo* bamb) : bamboo( bamb, bamb+count ) {}
};
If you want to store Panda and Bamboos in continuous memory you could use solution from this article. The main idea is to overload operator new and operator delete.
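For completeness, here is a self-contained sketch of the std::vector approach; Bamboo is a trivial stand-in for the real class:

```cpp
#include <cstddef>
#include <vector>

// Bamboo is a trivial stand-in for the real class.
struct Bamboo { int height = 0; };

// The vector owns the Bamboo array: allocation, copying and destruction are
// handled for us, and the elements still occupy one contiguous block.
class Panda {
    int a = 0;
    int b = 0;
    std::vector<Bamboo> bamboo_;
public:
    Panda(int count, const Bamboo* bamboo) : bamboo_(bamboo, bamboo + count) {}
    std::size_t bambooCount() const { return bamboo_.size(); }
    int tallest() const {
        int best = 0;
        for (const Bamboo& stalk : bamboo_)
            if (stalk.height > best) best = stalk.height;
        return best;
    }
};
```

Unlike the placement-new version, this Panda can be copied, assigned and stored by value without any of the slicing or ownership hazards discussed below.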
How do we convince people that in programming simplicity and clarity --in short: what mathematicians call 'elegance'-- are not a dispensable luxury, but a crucial matter that decides between success and failure?
-- Edsger W. Dijkstra
You'll be bitten if someone takes a Panda by value e.g.
//compiler allocates only sizeof(Panda) bytes on the stack for this local
//variable, so the Bamboo objects behind it are sliced away
Panda panda = *createPanda(15, bamboo);
It may be acceptable (but is very probably a premature and horrible optimization) if you only ever refer to things by pointer and never by value, and if you beware the copy constructor and assignment operator.
Based on my experience, premature optimization is almost always exactly that: premature. You should profile your code and determine whether there is a need for optimization, or whether you are just creating more work for yourself in the long run.
Also, it seems to me that the questions as to whether the optimization is worth it or not depends a lot on the size of the Bamboo class and the average number of Bamboo objects per Panda.
This was fine in C, but in C++ there is no real need. The real question is: why do you want to do this?
This is a premature optimization, just use a std::vector<> internally and all your problems will disappear.
Because you are using a raw pointer that the class owns, you would need to override the default versions of:
Default Constructor
Destructor
Copy Constructor
Assignment operator
If you're that desperate, you can probably do something like this:
template<std::size_t N>
class Panda_with_bamboo : public Panda_without_bamboo
{
int a;
int b;
Bamboo bamboo[N];
};
But I believe you're not desperate, but optimizing prematurely.
You are using the placement form of operator new. That is fully correct for the Panda itself, but why don't you also use placement new to construct the Bamboo objects, instead of copying raw memory behind them?