Class destruction segfault - C++

This post is not a duplicate. Read my question first.
I'm certainly missing something here. I would like my Entity class to be able to destroy an instance of itself when its health reaches zero.
virtual int takeDamage(int damage) {
    health -= damage;
    if (health <= 0) delete this;
}
What's wrong with the above code? The program crashes with a segfault when the delete this line is executed.
So far I have tried:
Making a custom destructor
Making a destructor function:
virtual void destroy(Entity* e) {
    delete e;
}
Destroying the object where it was allocated
I removed the last line of takeDamage() and instead called delete on the object from main:
if (tris[i]->getHealth() <= 0) delete tris[i]; // called from the main function, where a temporary collision-detection pass runs over all the entities in the program
I've tracked down part of the issue, and it comes down to the following. The instances of all the Entities in my program (of one particular specialization, i.e. Triangles) are stored in a vector. For example, the above code uses this vector: vector<Triangle*> tris; (Triangle inherits from Entity). That vector is iterated through and various operations are performed, such as collision detection, AI, etc. Now, when one of the objects is deleted, the next time we iterate through the entire vector we still come to the object which has been deleted. So the next question is: what would I do to shrink that vector? (This is probably worth flagging as a separate question!)

Given your description, there is a way to do this safely using algorithm functions. The ones you may be looking for are std::stable_partition, std::for_each, and vector::erase.
Assume you have the "game loop", and you're testing for collisions during the loop:
#include <algorithm>
#include <vector>
//...
std::vector<Entity*> allEntities;
//...
while (game_is_still_going())
{
    auto it = allEntities.begin();
    while (it != allEntities.end())
    {
        int damage = 0;
        // ... assume that damage has a value now.
        //...
        // call the takeDamage() function ("it" is an iterator, so it must be dereferenced first)
        (*it)->takeDamage(damage);
        ++it;
    }
    // Now partition off the items that have no health
    auto damaged = std::stable_partition(allEntities.begin(), allEntities.end(),
                                         [](Entity* e) { return e->getHealth() > 0; });
    // call "delete" on each item that has no health
    std::for_each(damaged, allEntities.end(), [](Entity* e) { delete e; });
    // erase the no-health items from the vector.
    allEntities.erase(damaged, allEntities.end());
    // continue the game...
}
Basically, the loop goes through all the entities and calls takeDamage for each one. We don't delete any entities at this stage. When the loop is done, we check which items have no health left by partitioning them off with the std::stable_partition algorithm.
Why stable_partition, and not std::remove or std::remove_if followed by a call to delete on the removed items before erasing them? The reason is that with remove / remove_if, you cannot do a multi-step deletion process (call delete and then erase the elements from the vector). The remove/remove_if algorithm functions assume that whatever has been moved to the end of the container (i.e. the "removed" items) is no longer needed and thus cannot be used for anything except the final call to vector::erase; the trailing elements are left with unspecified values, so calling delete on them is undefined behavior.
So to solve this, you need to partition off the bad items first, deallocate the memory for each one, and only then remove them from the vector. That calls for a partitioning algorithm, and the choice is between std::partition and std::stable_partition. We choose std::stable_partition because it keeps the relative order of the surviving items intact, which may be important for your game implementation.
Now we call stable_partition to partition the items into two sides of the entities vector -- the good items on the left of the partition, the bad items on the right of the partition. The lambda function is used to decide which entity goes where by testing the health value. The "partition" in this case is an iterator which is returned by stable_partition.
Given this, we call for_each, where the lambda calls delete on each item to the right of the partition point, and then vector::erase() to remove everything to the right of it. Then we loop again, repeating this whole process until the game is finished.
Another thing you will notice is the safety of the code above. If all entities still have health, then the calls to stable_partition, for_each, and erase are essentially no-ops, so there is no need to explicitly check that at least one item has no health (but nothing stops you from doing so, if you feel you need this micro-optimization).
And also, make sure your Entity base class has a virtual destructor, otherwise your code will exhibit undefined behavior on the delete.
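For reference, a minimal sketch of such a base class, assuming health lives in the base as in your code (getHealth() is taken from your snippet; the starting value and everything else here is illustrative):

#include <algorithm>
#include <vector>

class Entity {
public:
    virtual ~Entity() = default; // virtual destructor: deleting a Triangle through
                                 // an Entity* is now well-defined
    virtual int takeDamage(int damage) {
        health -= damage;
        return 0;
    }
    int getHealth() const { return health; }
protected:
    int health = 100; // illustrative starting value
};

class Triangle : public Entity {
    // Triangle-specific data and overrides go here.
};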
Edit: The takeDamage() function should be rewritten to not call delete this.
virtual int takeDamage(int damage) {
    health -= damage;
    return 0;
}

Related

Weird behaviour encapsulated std::list in std::vector sometimes

I am working on an RTS game. This game has buildings and actors. A building or actor is an instance of the buildings or actors class, and these instances are stored in a std::vector for easy interaction by index.
The actors and buildings can do tasks, which are stored in a std::list instantiated by the class; in other words, that list lives inside std::vector<actors>. The problem is that every so often, when information about these tasks needs to be retrieved, or a task has to be removed from the list, the program crashes with either "access violation", "trying to pop_front on empty list" or "trying to call front() on an empty list", even though the previous line of code checked that the list was indeed not empty! It is also hard to reproduce because it only happens every so often.
I suspect that somehow iterators or pointers get invalidated, since the list lives in a vector. I tried to circumvent that by reserving space for 1600 units and 1600 buildings, but the problem persists.
if (!this->listOfOrders.empty()) {
    switch (this->listOfOrders.front().orderType) { // error here: calling front() on an empty list
    case stackOrderTypes::stackActionMove:
        this->updateGoal(this->listOfOrders.front().goal, 0);
        break;
    case stackOrderTypes::stackActionGather:
        this->updateGoal(this->listOfOrders.front().goal, 0);
        this->setGatheringRecource(true);
        break;
    }
}
I am really at a loss here.
Simplified class construct to illustrate:
enum class stackOrderTypes
{
    stackActionMove,
    stackActionGather
    //and so on...
};
struct cords // named "cords" since orderStack and stackOrder() use that type
{
    int x;
    int y;
};
struct orderStack
{
    cords goal;
    stackOrderTypes orderType;
};
class actors
{
public:
    //other functions here
    void doNextStackedCommand();
    void stackOrder(cords Goal, stackOrderTypes orderType);
private:
    //other stuff goes here
    std::list<orderStack> listOfOrders;
};
std::vector<actors> listOfActors; //all actors live in here
std::vector<actors> listOfActors; //all actors live in here
To know what is causing this error, you first need to find exactly where in your code the problem happens.
However, from what you describe, I see a possible cause of your problem: item removal.
I suppose you have some loops iterating over your orders, and under some conditions you remove orders from the order list. Check whether your code looks like this:
for (auto it = orderList.begin(); it != orderList.end(); ++it)
{
    // Some code there
    if (OrderIsFinished() == true)
        orderList.erase(it);
    // Some code there
}
This is a common mistake. After calling erase, the item pointed to by the iterator it is removed, and the iterator itself is invalidated. To keep iteration consistent after a removal, you need to change the way you iterate through the elements:
auto it = orderList.begin();
while (it != orderList.end())
{
    // Some code there
    if (OrderIsFinished() == true)
        it = orderList.erase(it);
    else
        ++it;
}
It turned out to be a data race issue: a std::async updater was running for the buildings while the update function was also being called from the main thread!
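For anyone who hits the same symptom: once two threads touch the same std::list, every access (including the empty() check) has to be serialized. A minimal sketch, assuming a std::mutex member added to the actors class (the mutex and lock placement are illustrative, not part of the original code):

#include <mutex>

class actors {
    // ...
    std::list<orderStack> listOfOrders;
    std::mutex ordersMutex; // guards listOfOrders
public:
    void doNextStackedCommand() {
        std::lock_guard<std::mutex> lock(ordersMutex); // the async updater must take the same lock
        if (!listOfOrders.empty()) {
            // safe to call front() / pop_front() here
        }
    }
};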

C++, A way to update Pointers after vector resize, and erase vector objects without copying?

I believe this will be my first question for the site, so I apologize for any mistakes or errors in this post. I am a beginner C++ programmer as well, so forgive me if my questions come across as “noobish”.
Background: A collection of Parent Entity objects are created at startup (and currently not removed or added-to during runtime), and are then linked to a series of Activator Entity objects (both at the beginning, and during, runtime) through a Child Entity object. When establishing a link, the Parent generates a Child (which is stored in a local vector), and returns a pointer to the Child for the Activator to store.
Activators will “activate” children they are linked with, which will then do jobs based off internal and Parent settings. After being activated, they are also updated periodically by the Parent, continuing until eventually deactivating.
Below is a simplified example of the classes present.
class ParentEntity {
    std::vector<ChildEntity> m_Children;
    std::vector<ChildEntity*> m_ActiveChildren;
public:
    //Funcs
    ParentEntity(unsigned expectedChildren) { m_Children.reserve(expectedChildren); }
    ChildEntity* AddChild() {
        m_Children.push_back(ChildEntity(*this));
        return &(m_Children.back());
    }
    void RemoveChild(unsigned iterator) {
        //Can't figure a way to remove from the m_Children list without disrupting all pointers.
        //m_Children.erase(m_Children.begin() + iterator); Uses copy operators, which won't work as const values will be present in Child
    }
    void AddActiveChild(ChildEntity* activeChild) {
        m_ActiveChildren.push_back(activeChild);
    }
    bool Update() { //Checks if Children are active
        if (!m_ActiveChildren.empty()) {
            std::vector<ChildEntity*> TempActive;
            TempActive.reserve(m_ActiveChildren.size());
            for (unsigned i = 0; i < m_ActiveChildren.size(); i++) {
                if (m_ActiveChildren[i]->Update()) {
                    TempActive.push_back(m_ActiveChildren[i]);
                }
            }
            if (!TempActive.empty()) {
                m_ActiveChildren = TempActive;
                return true;
            }
            else {
                m_ActiveChildren.clear();
                return false;
            }
        }
        else {
            return false;
        }
    }
};
class ChildEntity {
public:
    ChildEntity(ParentEntity& Origin) //Not const because it will call Origin functions that alter the parent
        :
        m_Origin(Origin)
    {}
    void SetActive() {
        m_ChildActive = true;
        m_Origin.AddActiveChild(this);
    }
    bool Update() { //Pseudo job which causes state switch
        srand(unsigned(time(NULL)));
        if ((rand() % 10 + 1) > 5) {
            m_ChildActive = false;
        }
        return m_ChildActive;
    }
private:
    ParentEntity& m_Origin;
    bool m_ChildActive = false;
};
class ActivatorEntity {
    std::vector<ChildEntity*> ActivationTargets;
public:
    ActivatorEntity(unsigned expectedTargets) { ActivationTargets.reserve(expectedTargets); }
    void AddTarget(ParentEntity& Target) {
        ActivationTargets.push_back(Target.AddChild());
    }
    void RemoveTarget(unsigned iterator) {
        ActivationTargets.erase(ActivationTargets.begin() + iterator);
    }
    void Activate() {
        for (unsigned i = 0; i < ActivationTargets.size(); i++) {
            ActivationTargets[i]->SetActive();
        }
    }
};
With that all laid out, my three questions are:
Is there a way to update Pointers when a vector resizes?
When a Child is added, if it goes past the expected capacity, the vector creates a new array and moves the original objects to the new location. This breaks all of the Activator pointers, and any m_ActiveChild pointers, as they are pointing to the old location.
Is there a way to remove Child objects from the m_Children vector?
Since ChildEntity objects contain const members, copy assignment won't work, and so the vector's erase function won't work either. The m_Children vector could be rebuilt without the unwanted object through a temporary vector and the copy constructor, but this leads to all of the pointers being wrong again.
Please let me know if there are any other suggested optimizations or corrections I should make!
Thank you all for your help!
Your problem, abstractly seen, is that on one hand you have collections of objects that you want to iterate through, kept in a container; and that on the other hand these objects are linked to each other. Re-ordering the container destroys the links.
Any problem can be solved by an additional indirection: Putting not the objects but object handles in the container would make re-ordering possible without affecting cross-references. The trivial case would be to simply use pointers; modern C++ would use smart pointers.
The disadvantage here is that you'll move to dynamic allocation which usually destroys locality right away (though potentially not if most allocations happen during initialization) and carries the usual run-time overhead. The latter may be prohibitive for simple, short-lived objects.
The advantage is that handling pointers enables you to make your objects polymorphic which is a good thing for "activators" and collections of "children" performing "updates": What you have here is the description of an interface which is typically implemented by various concrete classes. Putting objects in a container instead of pointers prevents such a design because all objects in a container must have the same concrete type.
If you need to store more information you can write your own handle class encapsulating a smart pointer; perhaps that's a good idea from the beginning because it is easily extensible without affecting all client code with only a moderate overhead (both in development and run time).
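To make the indirection idea concrete with the question's own classes, m_Children could hold smart pointers, so the ChildEntity objects themselves never move when the vector reallocates. A sketch under that assumption (not a drop-in replacement):

#include <memory>
#include <vector>

class ParentEntity {
    std::vector<std::unique_ptr<ChildEntity>> m_Children; // the handles move; the objects stay put
    std::vector<ChildEntity*> m_ActiveChildren;
public:
    ChildEntity* AddChild() {
        m_Children.push_back(std::make_unique<ChildEntity>(*this));
        return m_Children.back().get(); // stays valid even after the vector grows
    }
    void RemoveChild(std::size_t index) {
        // Only the unique_ptr handles are moved; no ChildEntity copy assignment is
        // needed, so const members in ChildEntity are no longer a problem.
        m_Children.erase(m_Children.begin() + index);
    }
};

Note that erasing still destroys the removed child, so any ActivatorEntity holding a raw pointer to it must be notified; that is where the handle/weak-pointer idea above comes in.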

What data structure can I use to free memory within a contiguous stretch of memory?

I would like to simulate a population over time and keep a genealogy of the individuals that are still alive (I don't need to keep data about dead lineages). Generations are discrete and non-overlapping. For simplicity, let's assume that reproduction is asexual and each individual has exactly one parent. Here is a class Individual
class Individual
{
public:
    mutable size_t nbChildren; // mutable so that pruning can decrement it through the const parent pointer
    const Individual* parent;
    Individual(const Individual& parent);
};
In my Population class, I would have a vector of the current offsprings and one of the current parents (the current parents being the offsprings of the previous generation).
class Population
{
private:
    std::vector<Individual*> currentOffsprings;
    std::vector<Individual*> currentParents;
public:
    void addIndividual(const Individual& parent) // Is called from some other module
    {
        Individual* offspring = new Individual(parent);
        currentOffsprings.push_back(offspring);
    }
    void pruneDeadLineages() // At the end of each generation, get rid of ancestors that did not leave any offspring today
    {
        // Collect the current parents that have not left any children in the current generation of offsprings
        std::queue<Individual*> individualsWithoutChildren; // FIFO structure
        for (auto& currentParent : currentParents)
        {
            if (currentParent->nbChildren == 0) // nbChildren is a data member, not a function
            {
                individualsWithoutChildren.push(currentParent);
            }
        }
        // Loop through the FIFO to get rid of all individuals in the tree that don't have offspring in this generation
        while (!individualsWithoutChildren.empty())
        {
            auto ind = individualsWithoutChildren.front();
            individualsWithoutChildren.pop(); // std::queue has front()/pop(), not pop_front()
            if (ind->nbChildren == 0)
            {
                ind->parent->nbChildren--;
                if (ind->parent->nbChildren == 0)
                {
                    individualsWithoutChildren.push(ind->parent);
                }
                delete ind;
            }
        }
    }
    void newGeneration() // Announce the beginning of a new generation from some other module
    {
        currentParents.swap(currentOffsprings); // Set offsprings as parents
        currentOffsprings.resize(0); // Get rid of pointers to parents (now grandparents)
    }
    void doStuff() // Some time-consuming function that will run each generation
    {
        for (auto ind : currentOffsprings)
        {
            foo(ind);
        }
    }
};
Assuming that the slow part of my code will be looping through the individuals in the doStuff method, I would like to keep individuals contiguous in memory, and hence
std::vector<Individual*> currentOffsprings;
std::vector<Individual*> currentParents;
would become
std::vector<Individual> currentOffsprings;
std::vector<Individual> currentParents;
Now the problem is that I don't want to consume memory for ancestors that did not leave any offspring in the current generation. In other words, I don't want to keep whole vectors (of length the number of individuals per generation) in the population for each generation. I thought I could give Individual a destructor that does nothing, so that the Individuals of the grandparent generation do not get killed at the line currentOffsprings.resize(0); in void Population::newGeneration(). Then, in void Population::pruneDeadLineages(), I would explicitly delete the individuals with a method Individual::destructor() instead of using delete or Individual::~Individual().
Is it silly? Would it be memory safe (or lead to segmentation faults or memory leaks)? What other options do I have to 1) make sure that the current generation's individuals are contiguous in memory, and 2) be able to free memory within this contiguous stretch for ancestors that did not leave any offspring?
I don't really understand why you need your Individuals stored contiguously in memory.
Since you'll have to remove some of them and add others at each generation, you'll have to reallocate the whole bunch of Individuals in order to keep them contiguous.
But anyway, I won't question what you want to do.
I think the easiest way is to let std::vector do the work for you. No need for pointers.
At each new generation, you move the offsprings from currentOffsprings to currentParents and you clear() currentOffsprings.
Then, for each parent that does not have any children in the current generation, you can just use erase() from std::vector to remove them, and consequently let the std::vector take care of keeping its elements contiguous.
Better than 100 words, something like:
void Population::newGeneration()
{
    currentParents.swap(currentOffsprings);
    currentOffsprings.clear();
}
void Population::pruneDeadLineages()
{
    currentParents.erase(std::remove_if(currentParents.begin(), currentParents.end(),
                                        [](const Individual& ind) { return ind.nbChildren == 0; }),
                         currentParents.end());
}
Of course it assumes that the parents and offsprings are defined in Population as:
std::vector<Individual> currentParents;
std::vector<Individual> currentOffsprings;
Note: std::remove_if shifts the elements to keep toward the front of the container (the elements to remove end up at the end), so the kept elements stay contiguous and the erase() causes no reallocation.
This way your two requirements (keep Individuals contiguous in memory and get rid of dead lineages) will be filled, without doing weird things with destructors,...
And since you have two std::vectors, currentOffsprings is guaranteed to be stored contiguously in memory, and the same goes for currentParents.
The two std::vectors themselves, however, are not guaranteed to be contiguous with each other (but I think you are already aware of this, and it is not what you want).
Let me know if I have misunderstood your actual problem

C++ STL vector with odd size, 0 capacity before anything is added

I am debugging an issue where there is a segfault when trying to call push_back on a vector. The segfault happens on the first attempt to add anything to the vector. For debugging purposes, I printed out size and capacity before this first attempt, and the result is size: 529486, capacity: 0.
The frustrating part is that this is an add-on vector, following the same pattern that already works with other vectors. The size and capacity behave as expected with those other vectors.
As rough pseudo-code for what I am doing:
class NEWTYPE {
public:
    Object* objPtr;
    NEWTYPE();
    void update(float);
    void setObject(Object* o);
};
class ABCD {
    std::vector<TYPE1*> type1List;
    std::vector<TYPE2*> type2List;
    std::vector<TYPE3*> type3List;
    std::vector<TYPE4*> type4List;
    std::vector<TYPE5*> type5List; // <== there were 5 other vectors working
    std::vector<NEWTYPE*> NEWTYPEList;
};
void ABCD::addType1(TYPE1* n) {
    cout << type1List.size() << type1List.capacity(); // <== as expected
    type1List.push_back(n); // <== works for each old type
}
void ABCD::addNewType(NEWTYPE* n) {
    cout << NEWTYPEList.size() << NEWTYPEList.capacity(); // size: 529486, capacity: 0 before first call
    NEWTYPEList.push_back(n); // <== seg fault
}
ABCD instance;
// foo() : This procedure works correctly for the other vectors
void foo() {
    NEWTYPE* test = new NEWTYPE();
    instance.addNewType(test);
}
I am not quite at the point of extracting things to reproduce this in a simple test case. That is one of my next steps.
Anyway, if anyone can point me in the right direction on this, I appreciate the advice. Thanks!
In my case, this turned out to be a build-related issue.
I updated the "master class" (ABCD in the pseudocode) to add the new vector. However, the file that declared the instance was not being rebuilt, so push_back was being called on a vector that did not exist.
FYI... sorry for not being clear in my original question. Since I was using the same procedures as working parts of the code, my line of thinking was that I may have violated some stack constraint, or exceeded some default limit related to how many vectors were being used.
Garbage returned by std::vector::capacity() and std::vector::size() most probably points to an uninitialized object. As the object in your example is global, I suspect foo is called from another global object's constructor. As the order of initialization of global objects across translation units is not defined, you can see different behavior for different instances of vectors. A possible solution is to use a singleton with a local static object:
ABCD& getInstance()
{
    static ABCD theInstance;
    return theInstance;
}
This way, theInstance will be initialized the first time getInstance() is called. This method does not solve the issue with destruction order, though: you should design your program so that destructors of global objects do not call methods of other global objects, or use a different singleton type (for example, the phoenix singleton).
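With this change, the calling code from the question would go through the accessor instead of a global instance, e.g.:

// instead of: ABCD instance; ... instance.addNewType(test);
void foo() {
    NEWTYPE* test = new NEWTYPE();
    getInstance().addNewType(test); // theInstance is constructed on first use
}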

Alternatives to an Object Pool?

I'm not quite sure that I need an object pool, yet it seems the most viable solution, but it has some unwanted drawbacks. I am making a game where entities are stored within an object pool. These entities are not allocated directly with new; instead, a std::deque handles the memory for them.
This is what my object pool more or less looks like:
struct Pool
{
    Pool()
        : _pool(DEFAULT_SIZE)
    {}
    Entity* create()
    {
        int index;
        if (!_destroyedEntitiesIndicies.empty())
        {
            // reuse the slot of a destroyed entity
            index = _destroyedEntitiesIndicies.front();
            _destroyedEntitiesIndicies.pop();
        }
        else
        {
            // otherwise hand out the next fresh slot
            index = _nextIndex++;
        }
        Entity* entity = &_pool[index];
        entity->id = index;
        return entity;
    }
    void destroy(Entity* x)
    {
        _destroyedEntitiesIndicies.emplace(x->id);
        x->id = 0;
    }
private:
    std::deque<Entity> _pool;
    std::queue<int> _destroyedEntitiesIndicies;
    int _nextIndex = 0;
};
If I destroy an entity, its ID will be added to the _destroyedEntitiesIndicies queue, so that the ID will be re-used, and lastly its ID will be set to 0. Now the only pitfall to this is: if I destroy an entity and then immediately create a new one, the entity that was previously destroyed will be updated to be the same entity that was just created.
i.e.
Entity* object1 = pool.create(); // create an object
pool.destroy(object1); // destroy it
Entity* object2 = pool.create(); // create another object
// now object1 will be the same as object2
std::cout << (object1 == object2) << '\n'; // this will print out 1
This doesn't seem right to me. How do I avoid this? Obviously the above will probably not happen in practice (as I'll delay object destruction until the next frame), but it may cause some disturbance while saving entity states to a file, or something along those lines.
EDIT:
Let's say I did NULL entities to destroy them. What if I was able to get an Entity from the pool, or store a copy of a pointer to the actual entity? How would I NULL all the other duplicate pointers when the entity is destroyed?
i.e.
Pool pool;
Entity* entity = pool.create();
Entity* theSameEntity = pool.get(entity->getId());
pool.destroy(entity);
// now entity == nullptr, but theSameEntity still points to the original entity
If you want an Entity instance only to be reachable via create, you will have to hide the get function (which did not exist in your original code anyway :) ).
I think adding this kind of security to your game is a bit of an overkill, but if you really need a mechanism to control access to certain parts of memory, I would consider returning something like a handle or a weak pointer instead of a raw pointer. This weak pointer would contain an index into a vector/map (which you store somewhere unreachable to anything but that weak pointer) that in turn contains the actual Entity pointer, plus a small hash value indicating whether the weak pointer is still valid.
Here's a bit of code so you see what I mean:
struct WeakEntityPtr; // Forward declaration.

struct WeakRefIndex { unsigned int m_index; unsigned int m_hash; }; // Small helper struct.

class Entity {
    friend struct WeakEntityPtr;
private:
    static std::vector< Entity* > s_weakTable; // defined out of line, e.g. with an initial size of 100
    static std::vector< char > s_hashTable;    // parallel table of hash (version) values
    static WeakRefIndex findFreeWeakRefIndex(); // find next free index and change the hash value in the hashTable at that index
};

struct WeakEntityPtr {
private:
    WeakRefIndex m_refIndex;
public:
    inline Entity* get() {
        Entity* result = nullptr;
        // Check if the weak pointer is still valid by comparing the hash values.
        if ( m_refIndex.m_hash == Entity::s_hashTable[ m_refIndex.m_index ] )
        {
            result = Entity::s_weakTable[ m_refIndex.m_index ];
        }
        return result;
    }
};
This is not a complete example though (you will have to take care of proper (copy) constructors, assignment operations etc etc...) but it should give you the idea what I am talking about.
However, I want to stress that I still think a simple pool is sufficient for what you are trying to do in this context. You will have to make the rest of your code play nicely with the entities so they don't reuse objects that they're not supposed to reuse, but I think that is easier to do and to maintain than the whole handle/weak pointer story above.
This question seems to have various parts. Let's see:
(...) If I destroy an entity and then immediately create a new one,
the Entity that was previously destroyed will be updated to be the
same entity that was just created. This doesn't seem right to me. How
do I avoid this?
You could modify this method:
void destroy(Entity* x)
{
    _destroyedEntitiesIndicies.emplace(x->id);
    x->id = 0;
}
To be:
void destroy(Entity*& x)
{
    _destroyedEntitiesIndicies.emplace(x->id);
    x->id = 0;
    x = NULL;
}
This way, you will avoid the specific problem you are experiencing. However, it won't solve the whole problem: you can always have copies of the pointer which are not going to be updated to NULL.
Another way is to use auto_ptr<> (in C++98; unique_ptr<> in C++11), which guarantees that its inner pointer will be set to NULL when reset. If you combine this with overloading operators new and delete in your Entity class (see below), you can have a quite powerful mechanism. There are some variations, such as shared_ptr<>, in the new version of the standard, C++11, which can also be useful to you. Your specific example:
auto_ptr<Entity> object1( new Entity ); // calls pool.create()
object1.reset();                        // destroys the object; release() would only relinquish ownership
auto_ptr<Entity> object2( new Entity ); // create another object
// now object1 will NOT be the same as object2
std::cout << (object1.get() == object2.get()) << '\n'; // this will print out 0
You have various possible sources of information, such as cplusplus.com, Wikipedia, and a very interesting article by Herb Sutter.
Alternatives to an Object Pool?
Object pools are created in order to avoid continuous memory manipulation, which is expensive, in situations where the maximum number of objects is known. There are no alternatives to an object pool that I can think of for your case; I think you are trying the correct design. However, if you have a lot of creations and destructions, maybe the best approach is not an object pool. It is impossible to say without experimenting and measuring times.
About the implementation, there are various options.
In the first place, it is not clear whether you're gaining any performance by avoiding memory allocation, since you are using _destroyedEntitiesIndicies (you are potentially allocating memory each time you destroy an object anyway). You'll have to experiment with your code to see whether this gives you enough performance gain compared to plain allocation. You can try to remove _destroyedEntitiesIndicies altogether and search for an empty slot only when you are running out of fresh ones (_nextIndex >= DEFAULT_SIZE). Another thing to try is to not reclaim the memory wasted in those free slots at all and simply allocate another chunk (DEFAULT_SIZE) instead, as sketched below.
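To sketch that last suggestion (growing the pool by another chunk once the fresh slots run out; this reuses the names from your Pool and drops the index queue entirely):

Entity* create()
{
    if (_nextIndex >= static_cast<int>(_pool.size()))
        _pool.resize(_pool.size() + DEFAULT_SIZE); // growing a deque at the end keeps
                                                   // references to existing elements valid
    Entity* entity = &_pool[_nextIndex];
    entity->id = _nextIndex;
    ++_nextIndex;
    return entity;
}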
Again, it all depends on the real use you are experiencing. The only way to find out is experimenting and measuring.
Finally, remember that you can modify class Entity to transparently support the object pool or not. A benefit of this is that you can easily experiment with whether it really is the better approach.
class Entity {
public:
    // more things...
    void* operator new(size_t size)
    {
        return pool.create(); // hand out a slot from the pool; Entity's constructor then runs in place
    }
    void operator delete(void* entity)
    {
        pool.destroy(static_cast<Entity*>(entity)); // give the slot back to the pool
    }
private:
    static Pool pool; // operator new/delete are implicitly static, so the pool must be static too;
                      // defined out of line as: Pool Entity::pool;
};
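With those overloads in place, ordinary-looking code routes through the pool transparently, which is what makes it easy to benchmark against plain allocation (illustrative usage):

Entity* e = new Entity; // grabs a slot via pool.create()
// ... use e ...
delete e;               // hands the slot back via pool.destroy()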
Hope this helps.