I'm trying to adapt from Java to C++ and am unsure about the correct way to manage memory when I pop() an item from an STL priority_queue.
Am I supposed to use delete to clean up items removed from the queue that I no longer need? If so, how? If not, why not?
I have written myself a small program to learn how to use a priority_queue (code below). In this program it's no big deal if there is a memory leak, because it's so small-scale and ends so quickly. But I want to learn the correct way of doing things so I can write a program that correctly handles a much larger queue without memory leaks.
The thing I do not understand is this: top() returns a reference rather than a pointer. But I can't use delete on a reference, can I?
Can someone point me in the right direction here?
--------------------
#include <iostream>
#include <queue>
#include <vector>
using namespace std;

struct PathCost{
int dest;
int cost;
PathCost(int _dest, int _cost){
dest = _dest;
cost = _cost;
}
bool operator<(PathCost other) const;
bool operator>(PathCost other) const;
};
bool PathCost::operator<(PathCost other) const{
return cost < other.cost;
}
bool PathCost::operator>(PathCost other) const{
return cost > other.cost;
}
int main(){
PathCost pc = PathCost(1, 2);
pc = PathCost(3, 4);
PathCost* pcp = new PathCost(5, 6);
delete pcp;
priority_queue<PathCost,
vector<PathCost>,
greater<vector<PathCost>::value_type> > tentativeQ;
cout << "loading priority queue ...\n";
tentativeQ.push(PathCost(8, 88));
tentativeQ.push(PathCost(5, 55));
tentativeQ.push(PathCost(7, 77));
tentativeQ.push(PathCost(4, 44));
cout << "\nlist items on queue in priority order ...\n";
while (!tentativeQ.empty()){
pc = tentativeQ.top();
cout << "dest:" << pc.dest << " cost:" << pc.cost << endl;
tentativeQ.pop();
/* DO I NEED TO DO MEMORY CLEANUP AT THIS POINT? */
}
}
Am I supposed to use delete to clean up items removed from the queue that I no longer need? If so, how? If not, why not?
You do not need to perform any cleanup, because the priority_queue is holding PathCost objects. When they get removed from the queue, their destructor is called automatically, as per the rules of the language.
Behind the scenes, the story can be somewhat more complicated. Inserting an item into the priority_queue will, in general, result in the dynamic allocation of a copy of that object. But resource allocation and de-allocation are taken care of by the underlying container (by default a std::vector), so you do not have to worry about memory management. Standard library containers and container adapters are said to have value semantics.
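To see the difference, here is a small self-contained sketch (it reuses a simplified PathCost; the pointer version is only there to illustrate when delete would become your job, not something I'm recommending):
#include <queue>

struct PathCost {
    int dest, cost;
    PathCost(int d, int c) : dest(d), cost(c) {}
    bool operator<(const PathCost& other) const { return cost < other.cost; }
};

int main() {
    // Storing values: the queue owns copies; cleanup is automatic.
    std::priority_queue<PathCost> byValue;
    byValue.push(PathCost(1, 10));
    byValue.pop();                  // the stored copy is destroyed right here, nothing to delete

    // Storing raw pointers (for contrast only): the queue owns the pointers,
    // not the pointed-to objects, so deleting those would be your responsibility.
    std::priority_queue<PathCost*> byPointer;
    byPointer.push(new PathCost(2, 20));
    delete byPointer.top();         // otherwise this object would leak
    byPointer.pop();
}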
The thing I do not understand is this: top() returns a reference rather than a pointer. But I can't use delete on a reference, can I?
The priority_queue is said to own the elements it holds, so if you take a reference to one of its elements, you cannot delete it. In fact, you cannot know if it needs to be deleted at all or not. Furthermore, although you have access to references of elements in the queue, you are not forced to keep references to these elements. You can also make your own copy:
const PathCost& pRef = tentativeQ.top(); // take constant reference to top element
PathCost p = tentativeQ.top(); // make a copy of the top element
In the first case, you have to be careful that you do not use that reference after the top element has been removed via a call to pop().
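Continuing with the tentativeQ from the question, the difference spelled out (just a fragment, assuming the queue still has elements):
// Safe: copy first, then pop. The copy outlives the queue's element.
PathCost copied = tentativeQ.top();
tentativeQ.pop();
cout << copied.dest << " " << copied.cost << endl;   // fine

// Unsafe: keep only a reference, then pop.
const PathCost& ref = tentativeQ.top();
tentativeQ.pop();
// cout << ref.cost;   // ref now dangles; reading it would be undefined behaviour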
Related
I wanted to ask you about the vector::shrink_to_fit() function.
Let's say I've got a vector of pointers to objects (or unique_ptr in my case)
and I want to resize it to the number of objects it stores.
At some point I remove some of the objects from the vector by choice, using the release() function of unique_ptr,
so there is a null pointer in that specific place in the vector, as far as I know.
So I want to resize the vector and remove that null pointer from between its elements, and I'm asking whether I can do that with the shrink_to_fit() function?
No, shrink_to_fit does not change the contents or size of the vector. All it might do is release some of its internal memory back to a lower level library or the OS, etc. behind the scenes. It may invalidate iterators, pointers, and references, but the only other change you might see would be a reduction of capacity(). It's also valid for shrink_to_fit to do absolutely nothing at all.
It sounds like you want the "Erase-remove" idiom:
vec.erase(std::remove(vec.begin(), vec.end(), nullptr), vec.end());
The std::remove shifts all the elements which don't compare equal to nullptr left, filling the "gaps". But it doesn't change the vector's size; instead it returns an iterator to the position in the vector just after the sequence of shifted elements; the rest of the elements still exist but have been moved from. Then the erase member function gets rid of those unnecessary end elements, reducing the vector's size.
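For instance, a self-contained sketch with a vector of unique_ptr (C++14 for make_unique; the values are made up):
#include <algorithm>
#include <iostream>
#include <memory>
#include <vector>

int main() {
    std::vector<std::unique_ptr<int>> vec;
    vec.push_back(std::make_unique<int>(1));
    vec.push_back(nullptr);                  // a "gap", e.g. left behind by reset()
    vec.push_back(std::make_unique<int>(3));

    // Erase-remove: shift the non-null elements left, then chop off the tail.
    vec.erase(std::remove(vec.begin(), vec.end(), nullptr), vec.end());

    std::cout << vec.size() << '\n';         // prints 2
}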
Or as @chris notes, C++20 adds the non-member functions std::erase and std::erase_if for std::vector, which make this easier. They may already be supported in MSVC 2019. Using the new erase could just look like:
std::erase(vec, nullptr);
This quick test shows that you can't do it like this.
#include <iostream>
#include <vector>
using namespace std;

int main() {
    int x = 1;
    vector<int*> a;
    cout << a.capacity() << endl;
    for (int i = 0; i < 10; ++i) {
        a.push_back(&x);
    }
    cout << a.capacity() << endl;
    a[9] = nullptr;        // the element is still there, just null
    a.shrink_to_fit();     // only trims capacity down to size()
    cout << a.capacity() << endl;
}
Result:
0
16
10
m_gates[index].release(); m_gates.shrink_to_fit();
Based on your comment, what you're looking for is simply to erase this single element from your vector right then and there. Replace both of these statements with:
m_gates.erase(m_gates.begin() + index);
Or a more generic version if swapping containers in the future is a possibility:
using std::begin;
m_gates.erase(std::next(begin(m_gates), index));
erase supports iterators rather than indices, so there's a conversion in there. This will remove the pointer from the vector while calling its destructor, which causes unique_ptr to properly clean up its memory.
Now erasing elements one by one could potentially be a performance concern. If it does end up being a concern, you can do what you were getting at in the question and null them out, then remove them all in one go later on:
m_gates[index].reset();
// At some point in the program's future:
std::erase(m_gates, nullptr);
What you have right now is highly likely to be a memory leak. release releases ownership of the managed memory, meaning you're now responsible for cleaning it up, which isn't what you were looking for. Both erase and reset (or equivalently, = {} or = nullptr) will actually call the destructor of unique_ptr while it still has ownership and properly clean up the memory. shrink_to_fit is for vector capacity, not size, and is unrelated.
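If it helps, here is a small self-contained sketch of the two safe options (the Gate type and the index value are made up for illustration; std::erase needs C++20):
#include <memory>
#include <vector>

struct Gate { /* ... */ };

int main() {
    std::vector<std::unique_ptr<Gate>> m_gates;
    for (int i = 0; i < 3; ++i)
        m_gates.push_back(std::unique_ptr<Gate>(new Gate));
    int index = 1;

    // Option 1: erase right away. The unique_ptr's destructor frees the Gate.
    m_gates.erase(m_gates.begin() + index);

    // Option 2: null it out now, sweep the nulls later (C++20).
    m_gates[0].reset();                   // frees the Gate, leaves a null entry behind
    std::erase(m_gates, nullptr);         // removes all null entries in one pass
}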
In the end, the solution I found was simple:
void Controller::delete_allocated_memory(int index)
{
m_vec.erase(m_vec.begin() + index);
m_vec.shrink_to_fit();
}
It works fine even if the vector is made of unique_ptrs. As far as I know, it doesn't even create the null pointer I was talking about, and it shifts all the remaining objects in the vector to the left.
What do you think?
I am currently working on a program where I read information from various CAN signals and store it in 3 different vectors.
For each signal, a new object is created and the pointer to it is stored in two vectors.
My question is: when I am deleting the vectors, is it enough to delete the pointers to the objects in one vector and just clear the other ones?
Here is my code:
Declaration of vectors and iterator:
std::vector <CAN_Signal*> can_signals; ///< Stores all CAN signals, found in the csv file
std::vector <CAN_Signal*> can_signals_rx; ///< Stores only the Rx CAN signals, found in the csv file
std::vector <CAN_Signal*> can_signals_tx; ///< Stores only the Tx CAN signals, found in the csv file
std::vector <CAN_Signal*>::iterator signal_iterator; ///< Iterator for iterating through the various vectors
Filling the vectors:
for (unsigned int i = 0; i < m_number_of_lines; ++i)
{
string s = csv_file.get_line(i);
CAN_Signal* can_signal = new CAN_Signal(s, i);
if (can_signal->read_line() == false)
return false;
can_signal->generate_data();
can_signals.push_back(can_signal);
if (get_first_character(can_signal->get_PDOName()) == 'R')
{
can_signals_rx.push_back(can_signal);
}
else if (get_first_character(can_signal->get_PDOName()) == 'T')
{
can_signals_tx.push_back(can_signal);
}
else
{
cout << "Error! Unable to detect whether signal direction is Rx or Tx!" << endl;
return false;
}
}
Deleting the vectors:
File_Output::~File_Output()
{
for (signal_iterator = can_signals.begin(); signal_iterator != can_signals.end(); ++signal_iterator)
{
delete (*signal_iterator);
}
can_signals.clear();
//for (signal_iterator = can_signals_rx.begin(); signal_iterator != can_signals_rx.end(); ++signal_iterator)
//{
// delete (*signal_iterator);
//}
can_signals_rx.clear();
//for (signal_iterator = can_signals_tx.begin(); signal_iterator != can_signals_tx.end(); ++signal_iterator)
//{
// delete (*signal_iterator);
//}
can_signals_tx.clear();
cout << "Destructor File_Output!" << endl;
}
When I uncomment the commented-out for-loops and run the program, it crashes when the destructor is called.
So my guess is that the code above is the correct way of doing it, because the pointers are already deleted and it is enough to just clear the remaining two vectors.
But I am not really sure and would really like to hear the opinion of an expert on this.
Thank you very much.
when I am deleting the vectors, is it enough to delete the pointers to the objects in one vector and just clear the other ones?
Since the pointers in the other vectors are copies, not only is it sufficient to delete each object through just one vector, but deleting them again through the copies would in fact be undefined behaviour (a double delete). You don't ever want your program to have undefined behaviour.
Clearing any of the vectors in the destructor of File_Output seems to be unnecessary, assuming the vectors are members of File_Output. This is because the members are about to be destroyed anyway.
Am I deleting my vector of pointers to objects correctly?
With the assumption that you didn't make copies of the pointers that you delete elsewhere: Yes, that is the correct way to delete them.
Your code has a memory leak:
CAN_Signal* can_signal = new CAN_Signal(s, i);
if (can_signal->read_line() == false)
return false;
If that condition is true, then the newly allocated CAN_Signal will be leaked since the pointer is neither deleted nor stored anywhere when the function returns.
Your code is not exception safe: If any of these lines throws, then the pointer is leaked.
if (can_signal->read_line() == false)
return false;
can_signal->generate_data();
can_signals.push_back(can_signal);
It is unclear why you would want to use explicit memory management in the first place. Unless there is a reason, I recommend that you don't do that and instead use std::vector <CAN_Signal> can_signals. This would fix your problems with memory leaks and exception safety, remove the need to implement a custom destructor, and make the implementation of the copy/move constructor/assignment of File_Output simpler.
Note that if you do that, you must reserve the memory for the elements for can_signals to prevent reallocation because the pointers in other vectors would be invalidated upon reallocation. As a side-effect, this makes the program slightly faster.
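If it helps, here is a sketch of how the filling loop could look with a value vector (the fill_signals helper and the CSV_File type of csv_file are my guesses for illustration; everything else is taken from the question, and it assumes CAN_Signal is movable or copyable):
std::vector<CAN_Signal> can_signals;       // owns the objects by value
std::vector<CAN_Signal*> can_signals_rx;   // non-owning pointers into can_signals
std::vector<CAN_Signal*> can_signals_tx;

bool fill_signals(CSV_File& csv_file, unsigned int m_number_of_lines)
{
    // Reserve up front so can_signals never reallocates and the rx/tx pointers stay valid.
    can_signals.reserve(m_number_of_lines);
    for (unsigned int i = 0; i < m_number_of_lines; ++i)
    {
        can_signals.emplace_back(csv_file.get_line(i), i);
        CAN_Signal& sig = can_signals.back();
        if (sig.read_line() == false)
            return false;                  // nothing leaks: the vector owns the element
        sig.generate_data();
        if (get_first_character(sig.get_PDOName()) == 'R')
            can_signals_rx.push_back(&sig);
        else if (get_first_character(sig.get_PDOName()) == 'T')
            can_signals_tx.push_back(&sig);
        else
        {
            cout << "Error! Unable to detect whether signal direction is Rx or Tx!" << endl;
            return false;
        }
    }
    return true;
}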
I am writing a program in C++ that will be used with Windows Embedded Compact 7. I have heard that it is best not to dynamically allocate arrays when writing embedded code. I will be keeping track of between 0 and 50 objects, so I am initially allocating 50 objects.
Object objectList[50];
int activeObjectIndex[50];
static const int INVALID_INDEX = -1;
int activeObjectCount=0;
activeObjectCount tells me how many objects I am actually using, and activeObjectIndex tells me which objects I am using. If the 0th, 7th, and 10th objects were being used I would want activeObjectIndex = [0,7,10,-1,-1,-1,...,-1]; and activeObjectCount=3;
As different objects become active or inactive I would like activeObjectIndex list to remain ordered.
Currently I am just sorting the activeObjectIndex at the end of each loop that the values might change in.
First, is there a better way to keep track of objects (that may or may not be active) in an embedded system than what I am doing? If not, is there an algorithm I can use to keep the objects sorted each time I add or remove and active object? Or should I just periodically do a bubble sort or something to keep them in order?
You have a hard question, where the answer requires quite a bit of knowledge about your system. Without that knowledge, no answer I can give would be complete. However, 15 years of embedded design has taught me the following:
You are correct, you generally don't want to allocate objects during runtime. Preallocate all the objects, and move them to active/inactive queues.
Keeping things sorted is generally hard. Perhaps you don't need to. You don't mention it, but I'll bet you really just need to keep your Objects in "used" and "free" pools, and you're using the index to quickly find/delete Objects.
I propose the following solution. Change your object to the following:
class Object {
    Object *mNext, *mPrev;
public:
    Object() : mNext(this), mPrev(this) { /* etc. */ }
    void insertAfterInList(Object *p2) {
        mNext->mPrev = p2;
        p2->mNext = mNext;
        mNext = p2;
        p2->mPrev = this;
    }
    void removeFromList() {
        mPrev->mNext = mNext;
        mNext->mPrev = mPrev;
        mNext = mPrev = this;
    }
    Object* getNext() {
        return mNext;
    }
    bool hasObjects() {
        return mNext != this;
    }
};
And use your Objects:
#include <cassert>   // for assert()

#define NUM_OBJECTS (50)
Object gObjects[NUM_OBJECTS], gFree, gUsed;

void InitObjects() {
    for (int i = 0; i < NUM_OBJECTS; ++i) {
        gFree.insertAfterInList(&gObjects[i]);
    }
}

Object* GetNewObject() {
    assert(gFree.hasObjects());
    Object *obj = gFree.getNext();
    obj->removeFromList();
    gUsed.insertAfterInList(obj);
    return obj;
}

void ReleaseObject(Object *obj) {
    obj->removeFromList();
    gFree.insertAfterInList(obj);
}
Edited to fix a small glitch. Should work now, although not tested. :)
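For what it's worth, a tiny usage sketch under the same assumptions (gFree and gUsed act as sentinels of circular lists):
int main() {
    InitObjects();                        // all 50 objects start on the free list

    Object *a = GetNewObject();           // moves one object to the used list
    Object *b = GetNewObject();           // and another

    ReleaseObject(a);                     // a goes back to the free pool, no heap traffic

    // Walk the used list; only b is left on it.
    for (Object *p = gUsed.getNext(); p != &gUsed; p = p->getNext()) {
        // process *p (here, only b)
    }
    ReleaseObject(b);
    return 0;
}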
The overhead of a std::vector is very small. The problem you can have is that dynamic resizing will allocate more memory than needed. However, as you have 50 elements, this shouldn't be a problem at all. Give it a try, and change it only if you see a strong impact.
If you cannot/do not want to remove unused objects from a std::vector, you can maybe add a boolean to your Object that indicates if it is active? This won't require more memory than using activeObjectIndex (maybe even less depending on alignment issues).
To sort the data with a boolean (inactive elements at the end), write a comparison function:
bool compare(const Object & a, const Object & b) {
    return a.active && !b.active;   // active elements sort before inactive ones
}
std::sort(objectList,objectList + 50, &compare); // if you use an array
std::sort(objectList.begin(),objectList.end(), &compare); // if you use std::vector
If you want to sort using activeObjectIndex it will be more complicated.
If you want to use a structure that is always ordered, use std::set. However it will require more memory (but for 50 elements, it won't be an issue).
Ideally, implement the following function:
bool operator<(const Object & a, const Object & b) {
    return a.active && !b.active;
}
This will allow you to use std::sort(objectList.begin(), objectList.end()) directly, or to declare a std::set that stays sorted. (Note that a std::set treats elements that compare equivalent as duplicates, so for a set the comparison would need a tie-breaker beyond the active flag.)
One way to keep track of active / inactive is to have the active Objects be on a doubly linked list. When an object goes from inactive to active then add to the list, and active to inactive remove from the list. You can add these to Object
Object * next, * prev;
so this does not require memory allocation.
If no dynamic memory allocation is allowed, I would use a simple C array or std::array plus an index that points one past the last object. The objects are always kept in sorted order.
Addition is done by inserting the new object into the correct position of the sorted list. To find the insert position, lower_bound or find_if can be used. For 50 elements, the second will probably be faster. Removal works the same way.
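A minimal sketch of that idea; the Object layout, the cost field used for ordering, and all the names here are illustrative rather than taken from the question:
#include <algorithm>
#include <array>
#include <cstddef>

struct Object { int cost; /* ... */ };

constexpr std::size_t MAX_OBJECTS = 50;
std::array<Object, MAX_OBJECTS> objects;   // fixed storage, no dynamic allocation
std::size_t objectCount = 0;               // index one past the last used object

bool addObject(const Object& obj) {
    if (objectCount == MAX_OBJECTS)
        return false;                      // pool exhausted
    // Find the first element that should come after obj, so the range stays sorted.
    auto pos = std::lower_bound(objects.begin(), objects.begin() + objectCount, obj,
                                [](const Object& a, const Object& b) { return a.cost < b.cost; });
    // Shift the tail one slot to the right and drop obj into the gap.
    std::move_backward(pos, objects.begin() + objectCount, objects.begin() + objectCount + 1);
    *pos = obj;
    ++objectCount;
    return true;
}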
You should not worry about keeping the list sorted: a method that scans the list of indices for the active ones is O(N), and with an array as small as yours that cost is negligible in practice.
You could maintain the index of the last element checked, until it reaches the limit:
int next(const int last) {
    for (int i = last + 1; i < MAX_ARRAY_SIZE; i++) {
        if (activeObjectIndex[i] != -1) {
            return i;
        }
    }
    return -1;   // no further active object
}
However, if you really want to have a side index, you can triple the size of the index array, creating a doubly linked list over the elements (three slots per entry):
int activeObjectIndex[MAX_ARRAY_SIZE * 3] = {-1};
activeObjectIndex[i] = "element id";
activeObjectIndex[i + 1] = "position of the previous element";
activeObjectIndex[i + 2] = "position of the next element";
I am thinking of how I can implement std::vector from the ground up.
How does it resize the vector?
realloc only seems to work for plain old structs, or am I wrong?
std::vector is a simple templated class which wraps a native array. It does not use malloc/realloc. Instead, it uses the passed allocator (which by default is std::allocator).
Resizing is done by allocating a new array and copy-constructing each element in the new array from the old one (this way it is safe for non-POD objects). To avoid frequent allocations, implementations typically follow a geometric growth pattern (e.g. growing by a factor of 1.5 or 2).
UPDATE: in C++11, the elements will be moved instead of copy constructed if it is possible for the stored type.
In addition to this, it will need to store the current "size" and "capacity". Size is how many elements are actually in the vector. Capacity is how many could be in the vector.
So as a starting point a vector will need to look somewhat like this:
template <class T, class A = std::allocator<T> >
class vector {
public:
// public member functions
private:
T* data_;
typename A::size_type capacity_;
typename A::size_type size_;
A allocator_;
};
The other common implementation is to store pointers to the different parts of the array. This cheapens the cost of end() (which no longer needs an addition) ever so slightly at the expense of a marginally more expensive size() call (which now needs a subtraction). In which case it could look like this:
template <class T, class A = std::allocator<T> >
class vector {
public:
// public member functions
private:
T* data_; // points to first element
T* end_capacity_; // points to one past internal storage
T* end_; // points to one past last element
A allocator_;
};
I believe gcc's libstdc++ uses the latter approach, but both approaches are equally valid and conforming.
NOTE: This is ignoring a common optimization where the empty base class optimization is used for the allocator. I think that is a quality of implementation detail, and not a matter of correctness.
Resizing the vector requires allocating a new chunk of space, and copying the existing data to the new space (thus, the requirement that items placed into a vector can be copied).
Note that it does not use new [] either -- it uses the allocator that's passed, but that's required to allocate raw memory, not an array of objects like new [] does. You then need to use placement new to construct objects in place. [Edit: well, you could technically use new char[size], and use that as raw memory, but I can't quite imagine anybody writing an allocator like that.]
When the current allocation is exhausted and a new block of memory needs to be allocated, the size must be increased by a constant factor compared to the old size to meet the requirement for amortized constant complexity for push_back. Though many web sites (and such) call this doubling the size, a factor around 1.5 to 1.6 usually works better. In particular, this generally improves chances of re-using freed blocks for future allocations.
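To make that concrete, here is a stripped-down sketch of such a reallocation step using std::allocator plus placement new (the names, the ~1.5x factor, and the absence of exception handling are simplifications for illustration, not how any particular standard library actually spells it):
#include <cstddef>
#include <memory>
#include <new>
#include <utility>

template <class T>
struct tiny_vector {
    std::allocator<T> alloc_;
    T* data_ = nullptr;
    std::size_t size_ = 0, capacity_ = 0;

    void push_back(const T& value) {
        if (size_ == capacity_) {
            // Grow geometrically (~1.5x) instead of reallocating on every push_back.
            std::size_t new_cap = capacity_ ? capacity_ + capacity_ / 2 + 1 : 1;
            T* new_data = alloc_.allocate(new_cap);              // raw, uninitialized memory
            for (std::size_t i = 0; i < size_; ++i) {
                // Move (or copy) each old element into the new storage, then destroy the old one.
                ::new (static_cast<void*>(new_data + i)) T(std::move_if_noexcept(data_[i]));
                data_[i].~T();
            }
            if (data_) alloc_.deallocate(data_, capacity_);
            data_ = new_data;
            capacity_ = new_cap;
        }
        ::new (static_cast<void*>(data_ + size_)) T(value);      // construct the new element in place
        ++size_;
    }

    ~tiny_vector() {
        for (std::size_t i = 0; i < size_; ++i) data_[i].~T();
        if (data_) alloc_.deallocate(data_, capacity_);
    }
};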
From Wikipedia, as good an answer as any.
A typical vector implementation consists, internally, of a pointer to
a dynamically allocated array,[2] and possibly data members holding
the capacity and size of the vector. The size of the vector refers to
the actual number of elements, while the capacity refers to the size
of the internal array. When new elements are inserted, if the new size
of the vector becomes larger than its capacity, reallocation
occurs.[2][4] This typically causes the vector to allocate a new
region of storage, move the previously held elements to the new region
of storage, and free the old region. Because the addresses of the
elements change during this process, any references or iterators to
elements in the vector become invalidated.[5] Using an invalidated
reference causes undefined behaviour
Like this:
https://github.com/gcc-mirror/gcc/blob/master/libstdc%2B%2B-v3/include/bits/stl_vector.h
(official gcc mirror on github)
///Implement Vector class
#include <iostream>
#include <stdexcept>
using namespace std;

class MyVector {
int *int_arr;
int capacity;
int current;
public:
MyVector() {
int_arr = new int[1];
capacity = 1;
current = 0;
}
void Push(int nData);
void PushData(int nData, int index);
void PopData();
int GetData(int index);
int GetSize();
void Print();
};
void MyVector::Push(int data)
{
if (current == capacity){
int *temp = new int[2 * capacity];
for (int i = 0; i < capacity; i++)
{
temp[i] = int_arr[i];
}
delete[] int_arr;
capacity *= 2;
int_arr = temp;
}
int_arr[current] = data;
current++;
}
void MyVector::PushData(int data, int index)
{
    if (index == current){
        Push(data);              // appending at the end; grows the array if needed
    }
    else if (index < current){
        int_arr[index] = data;   // overwrite an existing element
    }
}
void MyVector::PopData(){
current--;
}
int MyVector::GetData(int index)
{
    if (index < current){
        return int_arr[index];
    }
    throw out_of_range("MyVector::GetData: index out of range");
}
int MyVector::GetSize()
{
return current;
}
void MyVector::Print()
{
for (int i = 0; i < current; i++) {
cout << int_arr[i] << " ";
}
cout << endl;
}
int main()
{
MyVector vect;
vect.Push(10);
vect.Push(20);
vect.Push(30);
vect.Push(40);
vect.Print();
std::cout << "\nTop item is "
<< vect.GetData(3) << std::endl;
vect.PopData();
vect.Print();
cout << "\nTop item is "
<< vect.GetData(1) << endl;
return 0;
}
It allocates a new array and copies everything over. So, expanding it is quite inefficient if you have to do it often. Use reserve() if you have to use push_back().
You'd need to define what you mean by "plain old structs."
realloc by itself only gives you a block of raw memory; it does not construct any objects. For C structs, this suffices, but for C++ it does not.
That's not to say you couldn't use realloc. But if you were to use it (note you wouldn't be reimplementing std::vector exactly in this case!), you'd need to:
Make sure you're consistently using malloc/realloc/free throughout your class.
Use "placement new" to initialize objects in your memory chunk.
Explicitly call destructors to clean up objects before freeing your memory chunk.
This is actually pretty close to what vector does in my implementation (GCC's libstdc++), except it uses the C++ low-level routines ::operator new and ::operator delete to do the raw memory management instead of malloc and free, rewrites the realloc routine using these primitives, and delegates all of this behavior to an allocator object that can be replaced with a custom implementation.
Since vector is a template, you actually should have its source to look at if you want a reference – if you can get past the preponderance of underscores, it shouldn't be too hard to read. If you're on a Unix box using GCC, try looking for /usr/include/c++/version/vector or thereabouts.
You can implement it with a resizing-array implementation.
When the array becomes full, create an array with twice the size and copy all the content over to the new array. Do not forget to delete the old array.
As for deleting elements from the vector, do the resizing (shrinking) when your array becomes a quarter full. This strategy prevents the performance glitches one might otherwise get from repeated insertion and deletion around half the array size.
It can be mathematically proved that the total time for n insertions is still linear (i.e. the amortized, or average, time per insertion is constant), which is asymptotically the same as you would get with a normal static array.
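Roughly, the argument for the doubling strategy goes like this (a sketch, starting from capacity 1 and doing n = 2^k insertions): copies happen only when the array is full, i.e. at sizes 1, 2, 4, ..., 2^{k-1}, so the total copying work is

1 + 2 + 4 + \cdots + 2^{k-1} = 2^k - 1 < n

which means the n insertions together cost O(n) work, i.e. O(1) amortized per insertion.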
realloc only works on heap memory. In C++ you usually want to use the free store.
I am observing strange behaviour of std::map::clear(). This method is supposed to call the elements' destructors when called; however, the memory is still accessible after the call to clear().
For example:
#include <iostream>
#include <map>

struct A
{
~A() { x = 0; }
int x;
};
int main( void )
{
std::map< int, A * > my_map;
A *a = new A();
a->x = 5;
my_map.insert( std::make_pair< int, A* >( 0, a ) );
// addresses will be the same, will print 5
std::cout << a << " " << my_map[0] << " " << my_map[0]->x << std::endl;
my_map.clear();
// will be 0
std::cout << a->x << std::endl;
return 0;
}
The question is, why is variable a still accessible after its destructor was called by map::clear()? Do I need to write delete a; after calling my_map.clear() or is it safe to overwrite the contents of a?
Thanks in advance for your help,
sneg
If you store pointers in a map (or a list, or anything like that), YOU are responsible for deleting the objects they point to, since the map doesn't know whether they were created with new or not. The clear function only invokes the destructors of the stored elements, and destroying a raw pointer does nothing to the object it points to.
Oh, and one more thing: invoking a destructor (or even calling delete) doesn't mean the memory can't be accessed anymore. It only means that you will be accessing garbage if you do.
std::map does not manage the memory pointed to by the pointer values - it's up to you to do it yourself. If you don't want to use smart pointers, you can write a general purpose free & clear function like this:
template <typename M> void FreeClear( M & amap )
for ( typename M::iterator it = amap.begin(); it != amap.end(); ++it ) {
delete it->second;
}
amap.clear();
}
And use it:
std::map< int, A * > my_map;
// populate
FreeClear( my_map );
That's because map.clear() calls the destructors of the data contained in the map; in your case, that data is the pointer to a, and destroying a raw pointer does nothing.
You might want to put some kind of smart pointer in the map for the memory occupied by a to be automatically reclaimed.
BTW, why do you put the template arguments in the call to make_pair? The template argument deduction should do pretty well here.
When you free a piece of heap memory, its contents don't get zeroed. They are merely available for allocation again. Of course you should consider the memory non accessible, because the effects of accessing unallocated memory are undefined.
Actually preventing access to a memory page happens on a lower level, and std libraries don't do that.
When you allocate memory with new, you need to delete it yourself, unless you use a smart pointer.
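For completeness, a minimal sketch of the smart-pointer variant mentioned above (using std::unique_ptr, which requires C++11; the map then deletes the A objects for you on clear()):
#include <iostream>
#include <map>
#include <memory>

struct A            // same shape as the A in the question
{
    ~A() { x = 0; }
    int x;
};

int main()
{
    std::map< int, std::unique_ptr<A> > my_map;
    my_map[0] = std::unique_ptr<A>(new A());
    my_map[0]->x = 5;

    std::cout << my_map[0]->x << std::endl;  // prints 5

    my_map.clear();   // each unique_ptr is destroyed here and deletes its A for us
    return 0;
}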
Any container stores objects of your type and calls the corresponding constructors; internally, each node might look something like:
struct _Node
{
    _Node *_Next;
    _Ty _Val;
};
Allocation happens by constructing the value based on the type and then linking the node, with something similar to:
_Ty _Val = _Ty();
_Myhead = _Buynode();
_Construct_n(_Count, _Val);
When you delete, the container calls the corresponding destructors.
When you store pointers, the container neither constructs nor destroys the objects they point to; it only manages the pointers themselves.
Having spent the last 2 months eating, sleeping, and breathing maps, I have a recommendation. Let the map allocate its own data whenever possible. It's a lot cleaner, for exactly the kind of reasons you're highlighting here.
There are also some subtle advantages, like if you're copying data from a file or socket to the map's data, the data storage exists as soon as the node exists, because when the map calls malloc() to allocate the node, it allocates memory for both the key and the data (AKA the node's ->first and ->second).
This allows you to use the assignment operator instead of memcpy(), and requires 1 less call to malloc() - the one you make.
IC_CDR CDR, *pThisCDRLeafData; // a large struct{}
while(1 == fread(&CDR, sizeof(CDR), 1, fp)) {
    if(feof(fp)) {
        printf("\nfread() failure in %s at line %i", __FILE__, __LINE__);
    }
    cdrMap[CDR.iGUID] = CDR;                 // no need for a malloc() and memcpy() here
    pThisCDRLeafData = &cdrMap[CDR.iGUID];   // pointer to tree node's data
}
A few caveats to be aware of are worth pointing out here.
1. Do NOT call malloc() or new in the line of code that adds the tree node, as your call to malloc() will return a pointer BEFORE the map's call to malloc() has allocated a place to hold the return from your malloc().
2. In Debug mode, expect to have similar problems when trying to free() your memory. Both of these seem like compiler problems to me, but at least in MSVC 2012, they exist and are a serious problem.
3. Give some thought as to where to "anchor" your maps, IE: where they are declared. You don't want them going out of scope by mistake. main{} is always safe.
INT _tmain(INT argc, char* argv[]) {
IC_CDR CDR, *pThisCDRLeafData=NULL;
CDR_MAP cdrMap;
CUST_MAP custMap;
KCI_MAP kciMap;
4. I've had very good luck, and am very happy, having a critical map allocate a structure as its node data, and having that struct "anchor" a map. While anonymous structs have been abandoned by C++ (a horrible, horrible decision that MUST be reversed), maps that are the 1st struct member work just like anonymous structs. Very slick and clean with zero side-effects. Passing a pointer to the leaf-owned struct, or a copy of the struct by value in a function call, both work very nicely. Highly recommended.
5. You can trap the return values of .insert to determine if it found an existing node with that key, or created a new one (see #12 for code). Using the subscript notation doesn't allow this. It might be better to settle on .insert and stick with it, especially because the [] notation doesn't work with multimaps. (It would make no sense there, as a multimap doesn't have "a" value per key, but a series of entries sharing the same key.)
6. You can, and should, also trap the returns of .erase and .empty(). (YES, it's annoying that some of these things are functions and need the (), and some don't.)
7. You can get both the key value and the data value for any map node using .first and .second, which all maps, by convention, use to return the key and data respectively.
8. Save yourself a HUGE amount of confusion and typing, and use typedefs for your maps, like so:
typedef map<ULLNG, IC_CDR> CDR_MAP;
typedef map<ULLNG, pIC_CDR> CALL_MAP;
typedef struct {
CALL_MAP callMap;
ULNG Knt;
DBL BurnRateSec;
DBL DeciCents;
ULLNG tThen;
DBL OldKCIKey;
} CUST_SUM, *pCUST_SUM;
typedef map<ULNG,CUST_SUM> CUST_MAP;
typedef map<DBL,pCUST_SUM> KCI_MAP;
9. Pass references to maps using the typedef and the & operator, as in:
ULNG DestroyCustomer_callMap(CUST_SUM Summary, CDR_MAP& cdrMap, KCI_MAP& kciMap)
use the "auto" variable type for iterators. The compiler will figure out from the type specified in the rest of the for() loop body what kind of map typedef to use. It's so clean it's almost magic!
for(auto itr = Summary.callMap.begin(); itr!= Summary.callMap.end(); ++itr) {
11. Define some manifest constants to make the return from .erase and .empty() more meaningful:
if(ERASE_SUCCESSFUL == cdrMap.erase (itr->second->iGUID)) {
given that "smart pointers" are really just keeping a reference count, remember you can always keep your own reference count, an probably in a cleaner, and more obvious way. Combining this with #5 and #10 above, you can write some nice clean code like this.
#define Pear(x,y) std::make_pair(x,y) // some macro magic
auto res = pSumStruct->callMap.insert(Pear(pCDR->iGUID,pCDR));
if ( ! res.second ) {
pCDR->RefKnt=2;
} else {
pCDR->RefKnt=1;
pSumStruct->Knt += 1;
}
13. Using a pointer to hang onto a map node which allocates everything for itself (IE: no user pointers pointing to user-malloc()ed objects) works well, is potentially more efficient, and can be used to mutate a node's data without side-effects in my experience.
14. On the same theme, such a pointer can be used very effectively to preserve the state of a node, as in pThisCDRLeafData above. Passing this to a function that mutates/changes that particular node's data is cleaner than passing a reference to the map and the key needed to get back to the node pThisCDRLeafData is pointing to.
15. Iterators are not magic. They are expensive and slow, as you are navigating the map to get values. For a map holding a million values, you can read a node based on a key at about 20 million per second. With iterators it's probably ~1000 times as slow.
I think that about covers it for now. Will update if any of this changes or there's additional insights to share. I am especially enjoying using the STL with C code. IE: not a class in sight anywhere. They just don't make sense in the context I'm working in, and it's not an issue. Good luck.