tbb::concurrent_hash_map throws SIGSEGV - c++

I'm running a small program built using TBB on Windows with mingw32. It does a parallel_for; inside the parallel_for my object makes changes to a concurrent_hash_map. It starts running but later raises SIGSEGV when I try to use an accessor. I don't know where the problem is.
My object:
class Foobar
{
public:
    Foobar(FoobarParent* rw) : _rw(rw)
    {
        _fooMap = &_rw->randomWalkers();
    }

    void operator()(const tbb::blocked_range<size_t>& r) const
    {
        for(size_t i = r.begin(); i != r.end(); ++i)
        {
            apply(i);
        }
    }

private:
    void apply(int i) const
    {
        pointMap_t::accessor a;
        _fooMap->find(a, i);
        Point3D current = a->second;
        Point3D next = _rw->getNext(current);
        if (!_rw->hasConstraint(next))
        {
            return;
        }
        a->second = next;
    }

    FoobarParent* _rw;
    pointMap_t* _fooMap;
};
pointMap_t is defined as:
typedef tbb::concurrent_hash_map<int, Point3D> pointMap_t;
Can someone shed some light on this issue? I'm new to TBB. The signal is raised when the apply method dereferences a->second.

There are two potential problems in this code.
First, if find() does not find the specified key, the accessor is left empty and dereferencing a->second is invalid. Either use insert(), which guarantees that the element exists, or add a check such as:
if( a ) // process it
Second, you call getNext and hasConstraint while holding the accessor's lock. Calling anything under the lock is dangerous: the callee may take another lock or call back into TBB, which can lead to deadlock or other problems.
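For illustration, here is one way apply() could be restructured along those lines. It is only a sketch reusing the names from the question: the find() result is checked, and the accessor lock is released before calling back into _rw (which also means another thread could update the same key between the read and the write).

void apply(int i) const
{
    Point3D current;
    {
        pointMap_t::const_accessor ra;       // read lock on the element
        if (!_fooMap->find(ra, i))           // key missing: accessor stays empty
            return;
        current = ra->second;
    }                                        // read lock released here

    Point3D next = _rw->getNext(current);    // no hash_map lock held for these calls
    if (!_rw->hasConstraint(next))
        return;

    pointMap_t::accessor wa;                 // write lock only for the update
    if (_fooMap->find(wa, i))
        wa->second = next;
}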


Prevent or detect "this" from being deleted during use

One error that I often see is a container being cleared whilst it is being iterated through. I have attempted to put together a small example program demonstrating this happening. One thing to note is that it can often happen many function calls deep, so it is quite hard to detect.
Note: This example deliberately shows some poorly designed code. I am trying to find a solution to detect the errors caused by writing code such as this without having to meticulously examine an entire codebase (~500 C++ units)
#include <cstdlib>   /* for rand() */
#include <iostream>
#include <string>
#include <vector>

class Bomb;

std::vector<Bomb> bombs;

class Bomb
{
    std::string name;
public:
    Bomb(std::string name)
    {
        this->name = name;
    }

    void touch()
    {
        if(rand() % 100 > 30)
        {
            /* Simulate everything being exploded! */
            bombs.clear();

            /* An error: "this" is no longer valid */
            std::cout << "Crickey! The bomb was set off by " << name << std::endl;
        }
    }
};

int main()
{
    bombs.push_back(Bomb("Freddy"));
    bombs.push_back(Bomb("Charlie"));
    bombs.push_back(Bomb("Teddy"));
    bombs.push_back(Bomb("Trudy"));

    for(size_t i = 0; i < bombs.size(); i++)
    {
        bombs.at(i).touch();
    }

    return 0;
}
Can anyone suggest a way of guaranteeing this cannot happen?
The only way I can currently detect this kind of thing is to replace the global new and delete with mmap / mprotect and detect use-after-free memory accesses. This and Valgrind, however, sometimes fail to pick it up if the vector does not need to reallocate (i.e. only some elements are removed, or the new size has not yet reached the reserved size). Ideally I don't want to have to clone much of the STL just to make a version of std::vector that reallocates on every insertion/deletion during debugging and testing.
One approach that almost works is storing std::weak_ptr in the std::vector: the call to .lock() creates a temporary reference that prevents deletion while execution is inside the class's method. However, this cannot work with std::shared_ptr, because there is no lock() step to enforce, and the same goes for plain objects. Creating a container of weak pointers just for this would be wasteful.
Can anyone else think of a way to protect ourselves from this?
The easiest way is to run your unit tests with Clang's MemorySanitizer enabled.
Let a continuous-integration Linux box do it automatically on each push to the repository.
MemorySanitizer has use-after-destruction detection (the flag -fsanitize-memory-use-after-dtor plus the environment variable MSAN_OPTIONS=poison_in_dtor=1), so it will blow up the test that executes such code and turn your continuous integration red.
If you have neither unit tests nor continuous integration in place, you can also just manually debug your code with MemorySanitizer, but that is the hard way compared with the easy one. So better to start using continuous integration and to write unit tests.
Note that there may be legitimate reasons for memory reads and writes after the destructor has run but before the memory has been freed. For example, std::variant<std::string, double> lets us assign it a std::string and then a double, so its implementation may destroy the string and reuse the same storage for the double. Filtering such cases out is unfortunately manual work at the moment, but the tools evolve.
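For instance, here is a tiny program that the use-after-destruction check should flag; the build and run commands in the comment are a hypothetical invocation (the file name is mine, and exact flag support depends on your Clang version):

// Build and run, roughly:
//   clang++ -g -fsanitize=memory -fsanitize-memory-use-after-dtor \
//           -fno-omit-frame-pointer use_after_dtor.cpp -o use_after_dtor
//   MSAN_OPTIONS=poison_in_dtor=1 ./use_after_dtor
#include <cstdio>
#include <new>

struct S
{
    int value;
    S() : value(42) {}
    ~S() {}                               // after this runs, MSan poisons the storage
};

int main()
{
    alignas(S) unsigned char storage[sizeof(S)];
    S* s = new (storage) S();             // construct in place
    s->~S();                              // destroy it...
    std::printf("%d\n", s->value);        // ...then read it: MSan reports the use-after-dtor
    return 0;
}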
In your particular example the misery boils down to two design flaws:
Your vector is a global variable. Limit the scope of all of your objects as much as possible and issues like this are less likely to occur.
With the single responsibility principle in mind, I can hardly imagine a class that needs a method which, directly or indirectly (maybe through 100 layers of call stack), deletes objects that could happen to be this.
I am aware that your example is artificial and intentionally bad, so please don't get me wrong here: I'm sure that in your actual case it is not so obvious how sticking to some basic design rules can prevent you from doing this. But as I said, I strongly believe that good design reduces the likelihood of such bugs coming up. In fact, I cannot remember ever facing such an issue myself, but maybe I am just not experienced enough :)
However, if this really keeps being an issue despite sticking with some design rules, then I have this idea how to detect it:
Create a member int recursionDepth in your class and initialize it with 0
At the beginning of each non-private method increment it.
Use RAII to make sure that at the end of each method it is decremented again
In the destructor, check that it is 0; otherwise the destructor is being called, directly or indirectly, by some method of this.
You may want to #ifdef all of this and enable it only in debug builds. That essentially makes it a debug assertion; some people like them :)
Note that this does not work in a multi-threaded environment.
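A minimal sketch of that idea (the names are mine, and, as noted above, it assumes single-threaded use):

#include <cassert>

class Widget
{
    struct RecursionGuard                 // RAII: increment on entry, decrement on exit
    {
        explicit RecursionGuard(int& depth) : d(depth) { ++d; }
        ~RecursionGuard() { --d; }
        int& d;
    };

    int recursionDepth = 0;

public:
    void touch()
    {
        RecursionGuard guard(recursionDepth);
        // ... method body that might indirectly end up deleting this ...
    }

    ~Widget()
    {
        // Non-zero depth means some method of this object is still on the call
        // stack, i.e. we are being destroyed from inside ourselves.
        assert(recursionDepth == 0);
    }
};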
In the end I went with a custom iterator: if the owning std::vector changes whilst an iterator into it is still in scope, it will log an error or abort (giving me a stack trace of the program). This example is a bit convoluted, but I have tried to simplify it as much as possible and have removed unused functionality from the iterator.
This system has flagged up about 50 errors of this nature. Some may be repeats. However, Valgrind and ElectricFence came up clean at this point, which is disappointing (in total they flagged up around 10, which I have already fixed since the start of the code cleanup).
In this example I use clear(), which Valgrind does flag as an error. However, in the actual codebase it is random-access erases (i.e. vec.erase(vec.begin() + 9)) that I need to check, and Valgrind unfortunately misses quite a few of them.
main.cpp
#include "sstd_vector.h"
#include <cstdlib>   /* for rand() */
#include <iostream>
#include <string>
#include <memory>

class Bomb;

sstd::vector<std::shared_ptr<Bomb> > bombs;

class Bomb
{
    std::string name;
public:
    Bomb(std::string name)
    {
        this->name = name;
    }

    void touch()
    {
        if(rand() % 100 > 30)
        {
            /* Simulate everything being exploded! */
            bombs.clear(); // Causes an ABORT

            std::cout << "Crickey! The bomb was set off by " << name << std::endl;
        }
    }
};

int main()
{
    bombs.push_back(std::make_shared<Bomb>("Freddy"));
    bombs.push_back(std::make_shared<Bomb>("Charlie"));
    bombs.push_back(std::make_shared<Bomb>("Teddy"));
    bombs.push_back(std::make_shared<Bomb>("Trudy"));

    /* The key part is the lifetime of the iterator. If the vector
     * changes during the lifetime of the iterator, even if it did
     * not reallocate, an error will be logged */
    for(sstd::vector<std::shared_ptr<Bomb> >::iterator it = bombs.begin(); it != bombs.end(); it++)
    {
        it->get()->touch();
    }

    return 0;
}
sstd_vector.h
#include <vector>
#include <stdlib.h>

namespace sstd
{
    template <typename T>
    class vector
    {
        std::vector<T> data;
        size_t refs;

        void check_valid()
        {
            if(refs > 0)
            {
                /* Report an error or abort */
                abort();
            }
        }

    public:
        vector() : refs(0) { }

        ~vector()
        {
            check_valid();
        }

        vector& operator=(vector const& other)
        {
            check_valid();
            data = other.data;
            return *this;
        }

        void push_back(T val)
        {
            check_valid();
            data.push_back(val);
        }

        void clear()
        {
            check_valid();
            data.clear();
        }

        class iterator
        {
            friend class vector;

            typename std::vector<T>::iterator it;
            vector<T>* parent;

            iterator() { }
            iterator& operator=(iterator const&) { abort(); }

        public:
            iterator(iterator const& other)
            {
                it = other.it;
                parent = other.parent;
                parent->refs++;
            }

            ~iterator()
            {
                parent->refs--;
            }

            bool operator !=(iterator const& other)
            {
                if(it != other.it) return true;
                if(parent != other.parent) return true;
                return false;
            }

            iterator operator ++(int val)
            {
                iterator rtn = *this;
                it++;
                return rtn;
            }

            T* operator ->()
            {
                return &(*it);
            }

            T& operator *()
            {
                return *it;
            }
        };

        iterator begin()
        {
            iterator rtn;
            rtn.it = data.begin();
            rtn.parent = this;
            refs++;
            return rtn;
        }

        iterator end()
        {
            iterator rtn;
            rtn.it = data.end();
            rtn.parent = this;
            refs++;
            return rtn;
        }
    };
}
The disadvantage of this system is that I must use an iterator rather than .at(idx) or [idx]. I personally don't mind this so much, and I can still use .begin() + idx if random access is needed.
It is a little bit slower (nothing compared to Valgrind, though). When I am done, I can do a search and replace of sstd::vector with std::vector and there should be no performance drop.

Is C++11 "range-based for" thread-safe?

I have an environment modelled by lines and points packed in two std::vector.
I want to calculate a field generated by this environment. I multithreaded the process. As the environment is entirely defined at the beginning, the threads only read from it, so I don't use any synchronisation, as described here and there.
The problem comes now: when I iterate through the lines in the environment, I get two different behaviours depending on whether I use the C++11 range-based for statement or a more common for statement with iterators.
It seems that the range-based for isn't thread-safe, and I'm wondering why.
If I'm wrong in assuming that, it might mean I have a deeper problem that may reappear later.
Here is a piece of code; the first worker seems to work, the second provokes a segfault.
Worker::Worker(Environment const* e, int id) : _threadId(id), env(e)
{
}

// worker that seems to do its job.
void Worker::run() const
{
    cout << "in thread n " << _threadId << endl;
    vector<Line> const* lines = &env->_lines;
    for(std::vector<Line>::const_iterator it = lines->begin(); it != lines->end(); ++it){
        it->hello();
    }
}

// creates a segfault
void Worker::run2() const
{
    cout << "in thread n " << _threadId << endl;
    vector<Line> const& lines = env->_lines;
    for(auto it : lines){
        it.hello();
    }
}
The simplified structure of data if needed:
struct Line
{
    void hello() const { std::cout << "hello" << std::endl; }
};

struct Environment
{
    std::vector<Line> _lines;
    std::vector<Point> _points;
};

An attempt to create atomic reference counting is failing with deadlock. Is this the right approach?

So I'm attempting to create a copy-on-write map that uses atomic reference counting on the read side so that reads need no locking.
Something isn't quite right. I see some references getting over-incremented and some are going down negative, so something isn't really atomic. In my tests I have 10 reader threads looping 100 times each doing a get() and 1 writer thread doing 100 writes.
It gets stuck in the writer because some of the references never go down to zero, even though they should.
I'm attempting to use the 128-bit DCAS technique explained in this blog.
Is there something blatantly wrong with this, or is there an easier way to debug it than playing with it in the debugger?
typedef std::unordered_map<std::string, std::string> StringMap;

static const int zero = 0; //provides an l-value for asm code

class NonBlockingReadMapCAS {
public:

    class OctaWordMapWrapper {
    public:
        StringMap* fStringMap;
        //std::atomic<int> fCounter;
        int64_t fCounter;

        OctaWordMapWrapper(OctaWordMapWrapper* copy) : fStringMap(new StringMap(*copy->fStringMap)), fCounter(0) { }

        OctaWordMapWrapper() : fStringMap(new StringMap), fCounter(0) { }

        ~OctaWordMapWrapper() {
            delete fStringMap;
        }

        /**
         * Does a compare and swap on an octa-word - in this case, our two adjacent class members fStringMap
         * pointer and fCounter.
         */
        static bool inline doubleCAS(OctaWordMapWrapper* target, StringMap* compareMap, int64_t compareCounter, StringMap* swapMap, int64_t swapCounter ) {
            bool cas_result;
            __asm__ __volatile__
            (
                "lock cmpxchg16b %0;"      // cmpxchg16b sets ZF on success
                "setz %3;"                 // if ZF set, set cas_result to 1
                : "+m" (*target),
                  "+a" (compareMap),       //compare target's stringmap pointer to compareMap
                  "+d" (compareCounter),   //compare target's counter to compareCounter
                  "=q" (cas_result)        //results
                : "b" (swapMap),           //swap target's stringmap pointer with swapMap
                  "c" (swapCounter)        //swap target's counter with swapCounter
                : "cc", "memory"
            );
            return cas_result;
        }

        OctaWordMapWrapper* atomicIncrementAndGetPointer()
        {
            if (doubleCAS(this, this->fStringMap, this->fCounter, this->fStringMap, this->fCounter +1))
                return this;
            else
                return NULL;
        }

        OctaWordMapWrapper* atomicDecrement()
        {
            while(true) {
                if (doubleCAS(this, this->fStringMap, this->fCounter, this->fStringMap, this->fCounter -1))
                    break;
            }
            return this;
        }

        bool atomicSwapWhenNotReferenced(StringMap* newMap)
        {
            return doubleCAS(this, this->fStringMap, zero, newMap, 0);
        }
    }
    __attribute__((aligned(16)));

    std::atomic<OctaWordMapWrapper*> fReadMapReference;
    pthread_mutex_t fMutex;

    NonBlockingReadMapCAS() {
        fReadMapReference = new OctaWordMapWrapper();
    }

    ~NonBlockingReadMapCAS() {
        delete fReadMapReference;
    }

    bool contains(const char* key) {
        std::string keyStr(key);
        return contains(keyStr);
    }

    bool contains(std::string &key) {
        OctaWordMapWrapper *map;
        do {
            map = fReadMapReference.load()->atomicIncrementAndGetPointer();
        } while (!map);
        bool result = map->fStringMap->count(key) != 0;
        map->atomicDecrement();
        return result;
    }

    std::string get(const char* key) {
        std::string keyStr(key);
        return get(keyStr);
    }

    std::string get(std::string &key) {
        OctaWordMapWrapper *map;
        do {
            map = fReadMapReference.load()->atomicIncrementAndGetPointer();
        } while (!map);
        //std::cout << "inc " << map->fStringMap << " cnt " << map->fCounter << "\n";
        std::string value = map->fStringMap->at(key);
        map->atomicDecrement();
        return value;
    }

    void put(const char* key, const char* value) {
        std::string keyStr(key);
        std::string valueStr(value);
        put(keyStr, valueStr);
    }

    void put(std::string &key, std::string &value) {
        pthread_mutex_lock(&fMutex);
        OctaWordMapWrapper *oldWrapper = fReadMapReference;
        OctaWordMapWrapper *newWrapper = new OctaWordMapWrapper(oldWrapper);
        std::pair<std::string, std::string> kvPair(key, value);
        newWrapper->fStringMap->insert(kvPair);
        fReadMapReference.store(newWrapper);
        std::cout << oldWrapper->fCounter << "\n";
        while (oldWrapper->fCounter > 0);
        delete oldWrapper;
        pthread_mutex_unlock(&fMutex);
    }

    void clear() {
        pthread_mutex_lock(&fMutex);
        OctaWordMapWrapper *oldWrapper = fReadMapReference;
        OctaWordMapWrapper *newWrapper = new OctaWordMapWrapper(oldWrapper);
        fReadMapReference.store(newWrapper);
        while (oldWrapper->fCounter > 0);
        delete oldWrapper;
        pthread_mutex_unlock(&fMutex);
    }
};
Maybe not the answer but this looks suspicious to me:
while (oldWrapper->fCounter > 0);
delete oldWrapper;
A reader thread could be just entering atomicIncrementAndGetPointer() while the counter is 0, and the writer would then pull the rug out from under it by deleting the wrapper.
Edit, to sum up the comments below into a potential solution:
The best implementation I'm aware of is to move fCounter from OctaWordMapWrapper into fReadMapReference (you don't actually need the OctaWordMapWrapper class at all). When the counter is zero, swap the pointer in your writer. Because contention from reader threads can be high enough to block the writer essentially indefinitely, you can reserve the highest bit of fCounter as a reader lock: while this bit is set, the readers spin until it is cleared. The writer sets this bit (__sync_fetch_and_or()) when it is about to change the pointer, waits for the counter to fall to zero (i.e. existing readers finish their work), then swaps the pointer and clears the bit.
This approach should be waterproof, though it obviously blocks readers during writes. I don't know whether that is acceptable in your situation; ideally you would want this to be non-blocking.
The code would look something like this (not tested!):
class NonBlockingReadMapCAS
{
public:
    NonBlockingReadMapCAS() : m_ptr(0), m_counter(0) {}

private:
    StringMap *acquire_read()
    {
        while(1)
        {
            uint32_t counter = atom_inc(m_counter);
            if(!(counter & 0x80000000))
                return m_ptr;
            atom_dec(m_counter);
            while(m_counter & 0x80000000);
        }
        return 0;
    }

    void release_read()
    {
        atom_dec(m_counter);
    }

    void acquire_write()
    {
        uint32_t counter = atom_or(m_counter, 0x80000000);
        assert(!(counter & 0x80000000));
        while(m_counter & 0x7fffffff);
    }

    void release_write()
    {
        atom_and(m_counter, uint32_t(0x7fffffff));
    }

    StringMap *volatile m_ptr;
    volatile uint32_t m_counter;
};
Just call acquire/release_read/write() before and after accessing the pointer for reading or writing. Replace atom_inc/dec/or/and() with __sync_fetch_and_add(), __sync_fetch_and_sub(), __sync_fetch_and_or() and __sync_fetch_and_and() respectively. You don't actually need doubleCAS() for this.
As noted correctly by @Quuxplusone in a comment below, this is a single-producer, multiple-consumer implementation. I modified the code to assert properly to enforce this.
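To make the call pattern concrete, here is a self-contained sketch of the same idea (single writer, multiple readers). The class and helper names are mine, not from the original code, and the generic __sync_* builtins are used directly:

#include <cassert>
#include <cstdint>
#include <string>
#include <unordered_map>

typedef std::unordered_map<std::string, std::string> StringMap;

class CowStringMap {
public:
    CowStringMap() : m_ptr(new StringMap), m_counter(0) {}

    // Reader: copy the value out while holding a read reference.
    std::string get(const std::string& key) {
        StringMap* map = acquire_read();
        StringMap::const_iterator it = map->find(key);
        std::string value = (it != map->end()) ? it->second : std::string();
        release_read();
        return value;
    }

    // Single writer: copy, modify, swap, and free the old map while readers
    // are still blocked out.
    void put(const std::string& key, const std::string& value) {
        StringMap* newMap = new StringMap(*m_ptr);
        (*newMap)[key] = value;
        acquire_write();
        StringMap* oldMap = m_ptr;
        m_ptr = newMap;
        delete oldMap;                               // safe: no reader holds a reference here
        release_write();
    }

private:
    StringMap* acquire_read() {
        while (true) {
            uint32_t counter = __sync_fetch_and_add(&m_counter, 1u);
            if (!(counter & 0x80000000u))
                return m_ptr;
            __sync_fetch_and_sub(&m_counter, 1u);    // back off while a write is pending
            while (m_counter & 0x80000000u) { }
        }
    }
    void release_read()  { __sync_fetch_and_sub(&m_counter, 1u); }

    void acquire_write() {
        uint32_t old = __sync_fetch_and_or(&m_counter, 0x80000000u);
        assert(!(old & 0x80000000u));                // single writer only
        while (m_counter & 0x7fffffffu) { }          // wait for existing readers to drain
    }
    void release_write() { __sync_fetch_and_and(&m_counter, 0x7fffffffu); }

    StringMap* volatile m_ptr;
    volatile uint32_t m_counter;
};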
Well, there are probably lots of problems, but here are the obvious two.
The most trivial bug is in atomicIncrementAndGetPointer. You wrote:
if (doubleCAS(this, this->fStringMap, this->fCounter, this->fStringMap, this->fCounter +1))
That is, you're attempting to increment this->fCounter in a lock-free way. But it doesn't work, because you're fetching the old value twice with no guarantee that the same value is read each time. Consider the following sequence of events:
Thread A fetches this->fCounter (with value 0) and computes argument 5 as this->fCounter +1 = 1.
Thread B successfully increments the counter.
Thread A fetches this->fCounter (with value 1) and computes argument 3 as this->fCounter = 1.
Thread A executes doubleCAS(this, this->fStringMap, 1, this->fStringMap, 1). It succeeds, of course, but we've lost the "increment" we were trying to do.
What you wanted is more like
StringMap* oldMap = this->fStringMap;
int64_t oldCounter = this->fCounter;
if (doubleCAS(this, oldMap, oldCounter, oldMap, oldCounter+1))
...
The other obvious problem is that there's a data race between get and put. Consider the following sequence of events:
Thread A begins to execute get: it fetches fReadMapReference.load() and prepares to execute atomicIncrementAndGetPointer on that memory address.
Thread B finishes executing put: it deletes that memory address. (It is within its rights to do so, because the wrapper's reference count is still at zero.)
Thread A starts executing atomicIncrementAndGetPointer on the deleted memory address. If you're lucky, you segfault, but of course in practice you probably won't.
As explained in the blog post:
The garbage collection interface is omitted, but in real applications you would need to scan the hazard pointers before deleting a node.
Another user has suggested a similar approach, but if you are compiling with gcc (and perhaps with clang), you could use the intrinsic __sync_add_and_fetch_4 which does something similar to what your assembly code does, and is likely much more portable.
I have used it when I implemented refcounting in an Ada library (but the algorithm remains the same).
int __sync_add_and_fetch_4 (int* ptr, int value);
// increments the value pointed to by ptr by value, and returns the new value
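For instance, a minimal intrusive reference count built on these builtins might look like the following sketch (using the type-generic __sync_add_and_fetch / __sync_sub_and_fetch spellings; the class is hypothetical, not part of the question's code):

class RefCounted {
public:
    RefCounted() : refs(1) {}                        // creator holds the first reference

    void add_ref() {
        __sync_add_and_fetch(&refs, 1);              // atomic ++refs
    }

    void release() {
        if (__sync_sub_and_fetch(&refs, 1) == 0)     // atomic --refs, returns the new value
            delete this;                             // last reference gone
    }

protected:
    virtual ~RefCounted() {}                         // destroy only via release()

private:
    int refs;
};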
Although I'm not sure how your reader threads work, I suspect your problem is that you are not catching and handling possible out_of_range exceptions in your get() method that might arise from this line: std::string value = map->fStringMap->at(key);. Note that if key is not found in the map, at() will throw and exit the function without decrementing the counter, which would lead to the condition you describe (getting stuck in the while-loop within the writer thread while waiting for the counters to decrement).
In any event, whether this is the cause of the issues you're seeing or not, you definitely need to either handle this exception (and any others) or modify your code such that there's no risk of a throw. For the at() method, I would probably just use find() instead, and then check the iterator it returns. However, more generally, I would suggest using the RAII pattern to ensure that you don't let any unexpected exceptions escape without unlocking/decrementing. For example, you might check out boost::scoped_lock to wrap your fMutex and then write something simple like this for the OctaWordMapWrapper increment/decrement:
class ScopedAtomicMapReader
{
public:
    explicit ScopedAtomicMapReader(std::atomic<OctaWordMapWrapper*>& map) : fMap(NULL) {
        do {
            fMap = map.load()->atomicIncrementAndGetPointer();
        } while (NULL == fMap);
    }

    ~ScopedAtomicMapReader() {
        if (NULL != fMap)
            fMap->atomicDecrement();
    }

    OctaWordMapWrapper* map(void) {
        return fMap;
    }

private:
    OctaWordMapWrapper* fMap;
}; // class ScopedAtomicMapReader
With something like that, then for example, your contains() and get() methods would simplify to (and be immune to exceptions):
bool contains(std::string &key) {
    ScopedAtomicMapReader mapWrapper(fReadMapReference);
    return (mapWrapper.map()->fStringMap->count(key) != 0);
}

std::string get(std::string &key) {
    ScopedAtomicMapReader mapWrapper(fReadMapReference);
    return mapWrapper.map()->fStringMap->at(key); // Now it's fine if this throws...
}
Finally, although I don't think you should have to do this, you might also try declaring fCounter as volatile as well, given that your access to it in the while-loop in the put() method will be on a different thread than the writes to it on the reader threads.
Hope this helps!
By the way, one other minor thing: fReadMapReference is leaking; I think you should delete it in your destructor.

Position in Member Declaration Breaks Code?

A while ago I asked a question on why the following code did not work:
std::vector<std::vector<std::vector<Tile_Base*>>> map_tiles; // This is located in the Map object. See below.
int t_x, t_y;
t_x = t_y = 200;
map_tiles.begin(); // clear(), resize() and every other function still causes problems
The thing is, it should have worked, yet Visual Studio 2012 throws an exception when the resize function is called. The exception pointed to this piece of code:
*_Pnext != 0; *_Pnext = (*_Pnext)->_Mynextiter)
located in xutility. It said there was an access violation reading memory. I thought maybe I had somehow lost access to the member along the way? (Using VS's watch window I saw the memory was not corrupted.)
So, I fiddled around with the code and tried to figure out what could possibly be going wrong, and after a while I moved the map_tiles member down to the bottom of the list, and it worked:
// WORKS
class Map {
    std::vector<Tile_Base*> spawn_tiles;
    // map tile specific
    bool Is_Valid(int,int);
    std::string name;
    std::vector<std::vector<std::vector<Tile_Base*> > > map_tiles;
public:
    // ...
};

// DOESN'T WORK
class Map {
    std::vector<std::vector<std::vector<Tile_Base*> > > map_tiles;
    std::vector<Tile_Base*> spawn_tiles;
    // map tile specific
    bool Is_Valid(int,int);
    std::string name;
public:
    // ...
};
Any help pointing out what went wrong? I can't come up with any reasonable explanation.
A vector<T> comprises two discrete sets of data: the internal state and the array of Ts. The internal state - capacity, size, pointer - is separate from the array. The issue you're describing is normally caused by something overwriting the vector object, i.e. the internal state. To track this down easily you could use a container class:
typedef std::vector<std::vector<std::vector<Tile_Base*> > > maptiles_t;

class CMapTiles
{
    unsigned int m_guard;
    maptiles_t m_tiles;
    enum { Guard = 0xdeadbeef };
public:
    CMapTiles() : m_guard(Guard), m_tiles() {}
    ~CMapTiles() { assert(m_guard == Guard); }

    void Check() const
    {
#if defined(DEBUG)
        if (m_guard != Guard)
            DebugBreak();
#endif
    }

    void Resize(size_t x, size_t y)
    {
        Check();
        auto init = std::vector<std::vector<Tile_Base*> >(y / 32);
        m_tiles.resize(x / 32, init);
        Check();
    }

    const maptiles_t& tiles() const { Check(); return m_tiles; }
    maptiles_t& tiles()             { Check(); return m_tiles; }
};
And instead of using std::vector<...> map_tiles have CMapTiles map_tiles, and then when you want to get at the vector, map_tiles.tiles().
Hope this helps.

C++ std::vector::clear() crash

I've got a program where I have a std::vector as a member of a class:
class Blackboard
{
public:
    inline std::vector<Vector2<int> > GetPath()
    { return m_path; }

    inline void SetPath(std::vector<Vector2<int> > path)
    { m_path = path; }

    inline void ClearPath()
    { if(m_path.size() > 0) m_path.clear(); }

private:
    std::vector<Vector2<int> > m_path;
};
Where the Vector2 class is defined as:
template <class T>
class Vector2
{
private:
    T m_x;
    T m_y;
public:
    Vector2(void)
    { m_x = 0; m_y = 0; }

    Vector2(T x, T y)
    { m_x = x; m_y = y; }

    ~Vector2(void)
    { }

    inline T x() const
    { return m_x; }

    inline T y() const
    { return m_y; }

    // ...
};
And at some point I call:
m_blackboard.ClearPath();
This works fine in debug, but crashes in release with the "Microsoft Visual Studio C Runtime Library has detected a fatal error in Test2.exe." message.
The call stack, at the last point I can still see, shows:
Test2.exe!std::vector<RBT::Vector2<int>,
std::allocator<RBT::Vector2<int> > >::erase
(std::_Vector_const_iterator<RBT::Vector2<int>,
std::allocator<RBT::Vector2<int> > >
_First_arg={m_x=15 m_y=7 },
std::_Vector_const_iterator<RBT::Vector2<int>,
std::allocator<RBT::Vector2<int> > >
_Last_arg={m_x=15 m_y=8 }) Line 1037 + 0xe bytes C++
Here is where I'm calling the code that ends up crashing:
BTNode::Status GoToDestBehavior::Update()
{
    BTEntityData::Node* node = m_dataRef->m_bTree.GetNode(m_index);

    if(node->m_state == BTNode::STATE_READY)
    {
        BehaviorTree::RequestDeferredAction(Batch::PATHFIND, m_dataRef->m_entityID);
        return BTNode::STATE_RUNNING;
    }
    else if(node->m_state == BTNode::STATE_RUNNING)
    {
        std::vector<Vector2<int>> path = m_dataRef->m_blackboard.GetPath();

        EntitySystem::Entity* entity = EntitySystem::GetEntity(m_dataRef->m_entityID);
        Assert(entity != NULL, "Invalid entity\n");
        Assert(entity->HasComponent(Component::PHYSICS_COMP), "Associated entity must have physics component to move\n");

        int phyIndex = entity->GetComponentIndex(Component::PHYSICS_COMP);
        PhysicsSystem::PhysicsData * physicsData = PhysicsSystem::GetComponent(phyIndex);
        Assert(physicsData != NULL, "Invalid physics data\n");

        // Path is empty, so finish
        if(path.size() == 0)
        {
            physicsData->m_dir = Direction::NONE; // Stop because we are here
            return BTNode::STATE_SUCCESS;
        }

        // Remove last element if we are at it
        //LogFmt("Size of vector %d\n", path.size());
        Vector2<int> last = path.back();
        if(last.x() == physicsData->m_posX && last.y() == physicsData->m_posY)
        {
            path.pop_back();
        }

        // Last node of the path has been traversed
        if(path.size() == 0)
        {
            physicsData->m_dir = Direction::NONE; // Stop because we are here
            m_dataRef->m_blackboard.ClearPath();
            return BTNode::STATE_SUCCESS;
        }

        Vector2<int> step = path.back();
        physicsData->m_dir = Direction::VectorToDirection(physicsData->m_posX, physicsData->m_posY, step.x(), step.y());

        if(physicsData->m_dir == Direction::NONE)
        {
            m_dataRef->m_blackboard.SetPath(path);
            return BTNode::STATE_FAIL;
        }

        m_dataRef->m_blackboard.SetPath(path);
        return BTNode::STATE_RUNNING;
    }

    return BTNode::STATE_ERROR;
}
I don't know why it's behaving like this. Most similar issues I've found online have the problem of calling clear on an empty array, but I have a guard against that, so it shouldn't be the issue.
The other thing I can think of is my Vector2 class requiring some kind of copy constructor or something for when I add elements to the vector, but in the end it's just 2 ints, so I don't know why that might be failing.
I've been over this code too much and might be missing something obvious.
It's perfectly fine to call clear on an empty container of any sort.
Using my psychic debugging skills, I have determined that in code you aren't showing us you're accessing elements of the vector that don't actually exist (possibly before you inserted them, and probably with operator[]). Usually element creation is done through resize, push_back, or insert.
The other possibility is that you have another memory corruption somewhere in your program.
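As a contrived sketch of that second possibility (not taken from the question's code), an off-by-one write into a neighbouring member is enough to trample the vector's internal pointers, and the crash then surfaces later in an innocent-looking call such as clear():

#include <vector>

struct Holder
{
    int ids[4];
    std::vector<int> path;
};

int main()
{
    Holder h;
    h.path.push_back(1);
    for (int i = 0; i <= 4; ++i)   // off-by-one: writes ids[4]...
        h.ids[i] = 0;              // ...which is undefined behaviour and typically
                                   // overwrites path's internal state
    h.path.clear();                // may crash here, later, or only in release builds
    return 0;
}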
I found an issue of my own that was due to a change in data format: the std::list I was using changed from a pointer to a list into the list itself. This started causing all sorts of errors that checking the size of the list did not solve; they were caused by a ZeroMemory()/memset() call that wiped out all of the list's internal tracking data, since the list was now part of the class instead of being behind a pointer.
If you have an empty list and a call to .clear() on it crashes, chances are you have messed up the internal tracking memory, as mentioned by Mark in his answer. Look for places where you are doing memory clearing on the containing classes and the like as the most likely culprits.
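As an illustration of that failure mode (a contrived sketch, not the original code): once the container is a direct member rather than a pointer, a memset()/ZeroMemory() over the owning object wipes its internal tracking data, and the next clear() crashes:

#include <cstring>
#include <list>

struct Track
{
    int id;
    std::list<int> items;            // used to be std::list<int>*, now a direct member
};

int main()
{
    Track t;
    t.items.push_back(1);
    std::memset(&t, 0, sizeof(t));   // wipes the list's internal tracking data too
    t.items.clear();                 // undefined behaviour: typically crashes here
    return 0;
}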
I know it's been 8 years, but I thought I had this problem too: I was destroying an empty BST, and my code was passing a nullptr value to the __p variable in the implementation of "new_allocator.h". That __p must never be null, as mentioned in the file itself:
// __p is not permitted to be a null pointer.
The solution is basically not to pass anything if you don't have something to pass.