I am trying to save a pointer to a Block object into an STL map:
void IO::parseInput(void)
{
    map<string, Block*> blocksMap;

    //=== Create new block and write it to the vector in database ===
    Block tmpBlock(bIndex, bName, bWidth, bHeight);
    Block* tmpPointer = db.addBlock(tmpBlock); // addBlock returns a pointer to the added Block
    tmpPointer->printParams();                 // -- here is correct output

    blocksMap.insert( pair<string, Block*>(bName, tmpPointer) );

    // === Test hash ===
    blocksMap.find("anyLegalStringKey")->second->printParams(); // -- wrong output
}
addBlock function in DB class:
Block* DB::addBlock (Block& newBlock)
{
blocks.push_back(newBlock);
Block* ptrToLast = &blocks.back();
return ptrToLast;
}
Vector of Blocks in the DB class:
class DB
{
private:
//=== Database ===
vector <Block> blocks;
};
Problem: Before writing into the map, I access the object (whose pointer I want to save) through tmpPointer and print all its parameters; this works correctly. Then I save the pointer in the map under a specific key. When I later try to access the same pointer via find with that key, I get wrong output (again printing all parameters).
Accessing the pointers stored in the map, for any existing key, leads to one of four different reactions:
Everything is OK and I get normal output (in my case I print all parameters)
I get wrong parameters (e.g. instead of index 7 I get 21814704)
Unreadable output
Segmentation fault
Interestingly, the same Block object always produces the same reaction (e.g. the block named "g22i" always gives unreadable output).
The vector "blocks" in db contains correct data both before and after I save the pointers in the map.
Thanks!
You are trying to use long-lived pointers to std::vector's elements. That's a recipe for disaster. All such pointers will be invalidated the very moment your vector decides to reallocate itself.
If you want to use pointers to vector elements, you have to make sure the vector never reallocates, i.e. you have to reserve enough capacity in advance to store all future elements. Most of the time this is suboptimal, defeats the purpose of using std::vector, or is plain impossible.
You can also use a container that never invalidates pointers to its elements, like std::list. But std::list does not support random access. You can use std::deque, which supports random access (albeit less efficiently than std::vector) and preserves the validity of element pointers as long as you don't insert anything into the middle of the sequence.
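For illustration, a minimal self-contained sketch of the std::deque variant; the Block struct below is a simplified stand-in for the question's class, which takes more constructor parameters:

#include <deque>
#include <iostream>
#include <map>
#include <string>

struct Block {                  // stand-in for the question's Block class
    int index;
    std::string name;
};

int main() {
    std::deque<Block> blocks;   // deque instead of vector
    std::map<std::string, Block*> blocksMap;

    for (int i = 0; i < 1000; ++i) {
        blocks.push_back(Block{i, "b" + std::to_string(i)});
        // The pointer stays valid: growing a deque at its ends does not
        // move the elements that are already stored.
        blocksMap[blocks.back().name] = &blocks.back();
    }
    std::cout << blocksMap["b7"]->index << '\n'; // prints 7
}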
Finally, you can keep using std::vector, but make sure to store [smart] pointers to Block objects in it instead of Block objects themselves. The idea is to ensure that vector reallocation does not cause reallocation of the actual Block objects.
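A minimal sketch of that idea, assuming C++11 and std::unique_ptr for ownership, again with a simplified stand-in Block struct:

#include <iostream>
#include <map>
#include <memory>
#include <string>
#include <vector>

struct Block {                  // stand-in for the question's Block class
    int index;
    std::string name;
};

int main() {
    std::vector<std::unique_ptr<Block>> blocks;   // the vector owns the Blocks through pointers
    std::map<std::string, Block*> blocksMap;

    for (int i = 0; i < 1000; ++i) {
        blocks.push_back(std::unique_ptr<Block>(new Block{i, "b" + std::to_string(i)}));
        // When the vector reallocates, only the unique_ptr handles move;
        // the Block objects themselves stay at the same addresses.
        blocksMap[blocks.back()->name] = blocks.back().get();
    }
    std::cout << blocksMap["b7"]->index << '\n';  // prints 7
}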
P.S. Another remark: when you declare a map<string, Block*> blocksMap, elements of such map have type pair<const string, Block*> (note the extra const). I.e. map<string, Block*>::value_type is actually pair<const string, Block*>.
Later you are trying to call blocksMap.insert with a pair<string, Block*> argument, while blocksMap.insert actually expects pair<const string, Block*>. This compiles, but it involves an implicit conversion from your pair<string, Block*> to pair<const string, Block*> performed by std::pair's conversion constructor. This is not optimal performance-wise, which is why a better idea might be to use map<string, Block*>::value_type instead of trying to spell the element type manually:
blocksMap.insert ( map<string, Block*>::value_type(bName, tmpPointer) );
So I have the following:
vector<vector< tuple<string, double>*>*>* graph;
a 2D vector, with a tuple of string and double.
I want to initialize the graph (the 2D vector) with a certain size, putting
a new vector<tuple<string, double>*> into each element of the big (outer) vector,
and I used the following line:
graph = new vector<vector<tuple<string, double>*>*>(67, new vector< tuple<string,double>*>());
This thing works, but when I tried to free it I found out that all the vectors I created with new are
the same vector,
meaning all the elements point to the same vector. I get why this is happening, but
is there a way to initialize all the vectors without having to do the for loop, i.e.
for(int i....)
graph->push_back(new vector< tuple<string,double>*>);
Problem summary:
In the line
graph = new vector<vector<tuple<string, double>*>*>(67, new vector< tuple<string,double>*>());
constructor (3) of std::vector, the fill constructor, is used (reference).
This evaluates new vector<tuple<string,double>*>() exactly once, then creates a vector containing 67 copies of that single pointer.
Solution:
Don't use pointers unless you have a really good reason to do so. Use
vector<vector<tuple<string, double>>> graph;
then you could simply do
graph.resize(67);
to insert 67 default constructed values. No pointers needed.
Maybe you are used to languages where new is frequently used, but you shouldn't do that in C++. std::vector and std::string are fairly small objects that manage an underlying dynamic array. Creating pointers to them is usually not what you want and might also decrease performance.
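If it helps, here is a minimal self-contained sketch of the pointer-free version; 67 is the size from the question, while the edge labels and weights are made-up placeholders:

#include <iostream>
#include <string>
#include <tuple>
#include <vector>

int main() {
    // 2D structure with no raw pointers: 67 independent, empty inner vectors.
    std::vector<std::vector<std::tuple<std::string, double>>> graph(67);

    // Filling one inner vector does not affect the others.
    graph[0].emplace_back("edge-to-a", 1.5);
    graph[1].emplace_back("edge-to-b", 2.0);

    std::cout << graph[0].size() << ' ' << graph[1].size() << '\n'; // prints "1 1"
}   // everything is freed automatically here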
I am making a game engine and need to use the std::vector container for all of the components and entities in the game.
In a script the user might need to hold a pointer to an entity or component, perhaps to continuously check some kind of state. If something is added to the vector that the pointer points to and the capacity is exceeded, it is my understanding that the vector will allocate new memory and every pointer that points to any element in the vector will become invalid.
Considering this issue I have a couple of possible solutions. After each push_back to the vector, would it be viable to check whether the vector's actual capacity has exceeded a stored "current capacity" variable? And if so, fetch and overwrite the old pointers with the new ones? Would this be guaranteed to "catch" every case where a push_back invalidates pointers?
Another solution that I've found is to instead save an index to the element and access it that way, but I suspect that is bad for performance when you need to continuously check the state of that element (every 1/60 second).
I am aware that other containers do not have this issue, but I'd really like to make it work with a vector. Also it might be worth noting that I do not know in advance how many entities / components there will be.
Any input is greatly appreciated.
You shouldn't worry about the performance of std::vector when you access an element only 60 times per second. By the way, in Release mode std::vector::operator[] typically compiles down to a single lea instruction; in Debug mode some implementations decorate it with runtime range checks.
If the user is going to store pointers to the objects, why even contain them in a vector?
I don't feel it is a good idea to create pointers that point to vector elements (i.e. my_ptr = &my_vec[n];). The whole point of a container is to reference the contents in the normal ways that the container supports, not to create outside pointers to elements of the container.
To answer your question about whether you can detect the allocations, yes you could, but it is still probably a bad idea to reference the contents of a vector by pointers to elements.
You could also reserve space in the vector when you create it, if you have some idea of what the maximum size might grow to. Then it would never resize.
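A minimal sketch of the reserve() approach; the Entity type and the capacity of 1024 are made-up placeholders for a known upper bound:

#include <cassert>
#include <vector>

struct Entity { int id; };

int main() {
    std::vector<Entity> entities;
    entities.reserve(1024);            // assumed known upper bound

    entities.push_back(Entity{1});
    Entity* p = &entities[0];          // pointer into the vector

    for (int i = 2; i <= 1024; ++i)    // stays within the reserved capacity,
        entities.push_back(Entity{i}); // so no reallocation ever happens

    assert(p == &entities[0]);         // p is still valid
}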
edit:
After reading other responses, and thinking about what you asked, another thought occurred. If your vector is a vector of pointers to objects, and you pass out those pointers to your clients, resizing the vector does not invalidate the pointers that the vector holds. The issue becomes keeping track of the lifetime of each object (who owns it), which is why using shared_ptr would be useful.
For example:
vector<shared_ptr<Thing>> my_vec;  // Thing stands in for whatever object type you store
my_vec.push_back(stuff);           // stuff is a shared_ptr<Thing>
if you pass out the pointers contained in the vector to clients...
client_ptr = my_vec[3];
There will be no problem when the vector resizes. The contents of the vector will be preserved, and whatever was at my_vec[3] will still be there. The object pointed to by my_vec[3] will still be at the same address, and my_vec[3] will still contain that address. Whomever got a copy of the pointer at my_vec[3] will still have a valid pointer.
However, if you did this:
client_ptr = &my_vec[3];
And the client is dereferencing like this:
(*client_ptr)->whatever();
You have a problem. Now when my_vec resizes, the old &my_vec[3] is probably no longer valid, and client_ptr points to nowhere.
If something is added to the vector that the pointer points to and the capacity is exceeded, it is my understanding that the vector will allocate new memory and every pointer that points to any element in the vector will become invalid.
I once wrote some code to analyze what happens when a vector's capacity is exceeded. (Have you done this yet?) What that code demonstrated on my Ubuntu system with g++ v5 was that the std::vector implementation simply a) doubles the capacity, b) moves all the elements from the old storage to the new, then c) cleans up the old. Perhaps your implementation is similar; the details of capacity expansion are implementation dependent.
And yes, any pointer into the vector is invalidated when push_back() causes the capacity to be exceeded.
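Something like the following sketch can reproduce that observation; the growth factor and the addresses printed are implementation details, so the exact output will vary:

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    std::size_t last_capacity = 0;

    for (int i = 0; i < 1000; ++i) {
        v.push_back(i);
        if (v.capacity() != last_capacity) {
            // A reallocation happened: the capacity grew and the elements moved.
            std::cout << "size=" << v.size()
                      << " capacity=" << v.capacity()
                      << " data=" << static_cast<const void*>(v.data()) << '\n';
            last_capacity = v.capacity();
        }
    }
}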
1) I simply don't use pointers-into-the-vector (and neither should you). That way the issue is completely eliminated, as it simply cannot occur (see also: dangling pointers). The proper way to access a std::vector (or std::array) element is through an index (via the operator[]() method).
After any capacity expansion, the indexes of all previously stored elements are still valid, since push_back() installs the new element at the 'end' (the highest index). An element's memory location may have changed, but its index is still the same.
2) It is my practice that I simply don't exceed the capacity. Yes, by that I mean that I have been able to formulate all my problems such that I know the required maximum-capacity. I have never found this approach to be a problem.
3) If the vector contents can not be contained in system memory (my system's best upper limit capacity is roughly 3.5 GBytes), then perhaps a vector container (or any ram based container) is inappropriate. You will have to accomplish your goal using disk storage, perhaps with vector containers acting as a cache.
update 2017-July-31
Some code to consider from my latest Game of Life.
Each Cell_t (on the 2-d gameboard) has 8 neighbors.
In my implementation, each Cell_t has a neighbor 'list' (either std::array or std::vector, I've tried both), and after the gameboard has been fully constructed, each Cell_t's init() method is run, filling its neighbor 'list'.
// see Cell_t data attributes
std::array<int, 8> m_neighbors;
// ...
void Cell_t::init()
{
int i = 0;
m_neighbors[i] = validCellIndx(m_row-1, m_col-1); // 1 - up left
m_neighbors[++i] = validCellIndx(m_row-1, m_col); // 2 - up
m_neighbors[++i] = validCellIndx(m_row-1, m_col+1); // 3 - up right
m_neighbors[++i] = validCellIndx(m_row, m_col+1); // 4 - right
m_neighbors[++i] = validCellIndx(m_row+1, m_col+1); // 5 - down right
m_neighbors[++i] = validCellIndx(m_row+1, m_col); // 6 - down
m_neighbors[++i] = validCellIndx(m_row+1, m_col-1); // 7 - down left
m_neighbors[++i] = validCellIndx(m_row, m_col-1); // 8 - left
// ^^^^^^^^^^^^^- returns info to quickly find cell
}
The int value in m_neighbors[i] is the index into the gameboard vector. To determine the next state of the cell, the code 'counts the neighbor's states.'
Note - Some cells are at the edge of the gameboard ... in this implementation, validCellIndx() can return a value indicating 'no-neighbor', (above top row, left of left edge, etc.)
// multiplier: for 100x200 cells,20,000 * m_generation => ~20,000,000 ops
void countNeighbors(int& aliveNeighbors, int& totalNeighbors)
{
{ /* ... initialize m_count[]s to 0 */ }
for(auto neighborIndx : m_neighbors ) { // each of 8 neighbors // 123
if(no_neighbor != neighborIndx) // 8-4
m_count[ gBoard[neighborIndx].m_state ] += 1; // 765
}
aliveNeighbors = m_count[ CellALIVE ]; // CellDEAD = 1, CellALIVE
totalNeighbors = aliveNeighbors + m_count [ CellDEAD ];
} // Cell_Arr_t::countNeighbors
init() pre-computes the indexes of this cell's neighbors. The m_neighbors array holds index integers, not pointers. It is trivial to have NO pointers-into-the-gameboard vector.
Okay, so I'm doing this game-like project where I have 3 different objects.
void Character::shoot(){
Shot* s = new Shot(blablabla);
shots.push_back(s);
}
This happens dynamically, and currently I don't delete the pointers, so there must be loads of pointers to shots in that vector after a little while.
I use Bot, Character and Shot, and as I said I need help with storing and removing a pointer to a shot dynamically, preferably from a vector. I've got it to work so that I put all the shot objects in a vector, but they never disappear from there. And I want to delete them permanently from my program when they collide with something, or reach outside my screen width.
You can use std::remove together with the container's erase() member function to remove content:
Shot* to_be_removed = ...;
std::vector<Shot*>::iterator i = std::remove(shots.begin(), shots.end(), to_be_removed);
shots.erase(i, shots.end());
delete to_be_removed; // don't forget to delete the pointer
This works when you know the element you want to remove. If you don't know the element you must find a way to identify the elements you want removed. Also if you have a system to identify the element it could be easier to use the container iterator in order to do the removal:
std::vector<Shot*>::iterator i = ...; // iterator to the element you want to remove
delete (*i);//delete memory
shots.erase(i);//remove it from vector
Lastly, if you want to remove all pointers from the container and delete all the items at the same time, you can use std::for_each:
//c++ 03 standard:
void functor(Shot* s)
{
delete(s);
}
std::for_each(shots.begin(),shots.end(),functor);
shots.clear();//empty the list
//or c++11 standard:
std::for_each(shots.begin(),shots.end(),[] (Shot * s){ delete(s); } );
//the rest is the same
Using simple vector methods
You can iterate through the std::vector and use the erase() method to remove the current shot:
std::vector<cls*> shots; // Your vector of shots
std::vector<cls*>::iterator current_shot = shots.begin(); // Your shot to be deleted
while( current_shot < shots.end()){
if((*current_shot)->needs_to_be_deleted()){
// Remove item from vector
delete *current_shot;
current_shot = shots.erase(current_shot);
} else {
(*current_shot)->draw();
++current_shot; // Iterate through the vector
}
}
erase() returns an iterator to the element that followed the removed one, which is why a while loop (advancing the iterator only in the else branch) is used.
Setting values to NULL and calling std::remove()
Note that std::vector::erase() relocates everything after the deleted items:
Because vectors use an array as their underlying storage, erasing elements in positions other than the vector end causes the container to relocate all the elements after the segment erased to their new positions. This is generally an inefficient operation compared to the one performed for the same operation by other kinds of sequence containers (such as list or forward_list).
Erasing one element at a time this way can end up with O(n^2) complexity, so you may rather set the values to NULL and use std::remove(), as suggested by juanchopanza in a comment (the erase-remove idiom):
int deleted = 0;
for( current_shot = shots.begin(); current_shot < shots.end(); ++current_shot){
    if((*current_shot)->needs_to_be_deleted()){
        // Delete the object and mark the slot as empty
        delete *current_shot;
        *current_shot = NULL;
        ++deleted;
    }
}
if( deleted){
    // Compact the vector: remove() shifts the non-NULL pointers to the front,
    // erase() then chops off the trailing empty slots
    shots.erase( std::remove(shots.begin(), shots.end(), static_cast<cls*>(NULL)), shots.end());
}
Using std::list
If you need a large number of shot creations and deletions, std::vector may not be the best structure for containing this kind of list (especially when you want to remove items from the middle):
Internally, vectors use a dynamically allocated array to store their elements. This array may need to be reallocated in order to grow in size when new elements are inserted, which implies allocating a new array and moving all elements to it. This is a relatively expensive task in terms of processing time, and thus, vectors do not reallocate each time an element is added to the container.
See link from Raxvan's comment for performance comparison.
You may want to use std::list instead:
List containers are implemented as doubly-linked lists; doubly linked lists can store each of the elements they contain in different and unrelated storage locations. The ordering is kept by the association to each element of a link to the element preceding it and a link to the element following it.
Compared to other base standard sequence containers (array, vector and deque), lists perform generally better in inserting, extracting and moving elements in any position within the container for which an iterator has already been obtained, and therefore also in algorithms that make intensive use of these, like sorting algorithms.
The main drawback of lists and forward_lists compared to these other sequence containers is that they lack direct access to the elements by their position.
Which is great for removing items from the middle of the "array".
As for removing, use delete and std::list::erase():
std::list<cls*> shots;
std::list<cls*>::iterator current_shot;
// You have to use iterators, not direct access
for( current_shot = shots.begin(); current_shot != shots.end(); ){
    if( (*current_shot)->needs_to_be_deleted()){
        delete *current_shot;
        current_shot = shots.erase(current_shot); // erase() returns the next valid iterator
    } else {
        ++current_shot;
    }
}
Using std::shared_ptr
If you have a more complex program structure, use the same object in many places, and cannot easily determine whether you can already delete an object or need to keep it alive a little longer (and if you are using C++11), you can use std::shared_ptr (or a different kind of "smart pointer" which deletes the data when the reference count reaches zero):
// Pushing items will have slightly more complicated syntax
std::list< std::shared_ptr<cls>> shots;
shots.push_back( std::shared_ptr<cls>( new cls()));
std::list< std::shared_ptr<cls>>::iterator current_shot;
// But you may skip using delete
shots.erase(current_shot); // shared_ptr will take care of freeing the memory
Simply use "delete" to deallocate the memory:
vector<Shot*>::iterator i;
for(i = shots.begin(); i != shots.end(); ++i)
{
    delete *i; // delete the data that was pointed to
    *i = 0;
}
shots.clear();
This will delete all elements from the heap and clear your vector.
I have a project in C++03 that has a problem with its data structure: I use a vector instead of a list even though I have to continuously pop_front/push_back. For now that is OK, because fixing it would require rewriting too much code.
My approach is to have a buffer of the last frame_size points, always kept up to date, so each frame I have to pop the front and push the back (maybe there is a name for this approach?).
So I use this code:
Point apoint; // allocate new point
apoint.x = xx;
apoint.y = yy;
int size = points.size();
if (size > frame_size) {
this->points.erase( points.begin() ); // pop_front
}
this->points.push_back(apoint);
I have some ready-to-use code for an object pool, so I thought: it is not a great optimization, but I could store the erased front element in the pool and so save the construction time of apoint.
OK, this is not very useful and probably makes no sense, but I ask just out of educational curiosity: how can I do that?
How can I store the memory of an erased element of a vector for reusing it? Does this question make sense? If not, why?
... because erase does not return the erased element; it returns:
A random access iterator pointing to the new location of the element that followed the last element erased by the function call, which is the vector end if the operation erased the last element in the sequence.
I have some ready-to-use code for an object pool ... how can I do that?
Using a vector, you can't. A vector stores its elements in a contiguous array, so they can't be allocated one at a time, only in blocks of arbitrary size. Therefore, you can't use an object pool as an allocator for std::vector.
How can I store the memory of an erased element of a vector for reusing it? Does this question make sense? If not, why?
The vector already does that. Your call to erase moves all the elements down into the space vacated by the first element, leaving an empty space at the end to push the new element into.
As long as you use a vector, you can't avoid moving all the elements when you erase the first; if that is too inefficient, then use a deque (or possibly a list) instead.
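For illustration, a minimal C++03-compatible sketch of the deque-based sliding window; frame_size, Point, and the coordinate values are placeholders for the question's names:

#include <cstddef>
#include <deque>

struct Point { int x, y; };

int main() {
    const std::size_t frame_size = 128;   // placeholder window size
    std::deque<Point> points;

    for (int i = 0; i < 1000; ++i) {
        Point apoint = { i, 2 * i };
        if (points.size() >= frame_size)
            points.pop_front();           // O(1), no shifting of the remaining elements
        points.push_back(apoint);         // amortized O(1)
    }
    return 0;
}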
I'm not sure I understand what you want to do, but this should be functionally equivalent to what you wrote, without constructing a temporary Point instance:
// don't do this on an empty vector
assert (points.size() > 0);
// rotate elements in the vector, erasing the first element
// and duplicating the last one
std::copy (points.begin()+1, points.end(), points.begin()); // needs <algorithm>
// overwrite the last element with your new data
points.back().x = xx;
points.back().y = yy;
EDIT: As Mike Seymour noted in the comments, neither this solution nor the approach proposed in the question cause any new memory allocation.
static std::map <unsigned int, CPCSteps> bestKvariables;
inline void copyKBestVar(MaxMinVarMap& vMaxMinAll, size_t& K, std::vector<unsigned int>& temp)
{
// copy top k variables and store in a vector.
MaxMinVarMap::reverse_iterator iter1;
size_t count;
for (iter1 = vMaxMinAll.rbegin(), count = 0; iter1 != vMaxMinAll.rend()&& count <= K; ++iter1, ++count)
{
temp.push_back(iter1->second);
}
}
void myAlgo::phase1(unsigned int& vTarget)
{
CPCSteps KBestForT; // To store kbest variables for only target variable finally put in a global storage
KBestForT.reserve(numVars);
std::vector<unsigned int> tempKbest;
tempKbest.reserve(numVars);
.......
.......
copyKBestVar(mapMinAssoc, KBestSize, tempKbest); // Store k top variables as a k best for this CPC variable for each step
KBestForT.push_back(tempKbest);
.....
.....
bestKvariables.insert(make_pair(vTarget, KBestForT)); // Store k best in a Map
.....
....
}
The problem: the map "bestKvariables" is not overwriting the first element, but it keeps updating the rest of the elements. I tried to debug it, and the problem I found is in the insert command.
Thanks in advance for helping.
Another question: can I reserve the size of a map (like vector.reserve(...)) at the beginning to avoid the insertion cost?
Sorry for providing insufficient information.
I mean: if there are four vTarget variables, 1, 2, 3, 4, I do some statistical calculations for each variable.
There is more than one iteration over these variables, and I would like to store the top k results of each variable in a map, to use them in the next iteration.
I saw that the first inserted variable (with the unsigned int key "vTarget") is not updated on further iterations (it keeps the value inserted on the first iteration).
But the other variables (keys inserted after the first) do keep getting updated.
Another question: can I reserve the size of a map (like vector.reserve(...)) at the beginning to avoid the insertion cost?
std::map does not have a reserve() function unlike std::vector.
Usually, the Standard library provides such functions for containers where they ensure good performance or provide a means to achieve it.
For a container like std::vector, reallocation of its storage can be a very costly operation. A simple call to push_back() can lead to every element in the std::vector being copied to a newly allocated block of memory. A call to reserve() can avoid these unnecessary allocations and copy operations for std::vector, and hence it is provided for it.
std::map never needs to copy all of the existing/remaining elements simply because a new element was inserted or removed. Hence it does not provide any such function.
Though the Standard does not specify how a std::map should be implemented, the expected behavior and required complexity ensure that most implementations use a balanced tree, unlike std::vector, which needs its elements to be stored in contiguous memory.
map::insert isn't supposed to update/overwrite anything, just insert not-yet-present elements. Use operator[] for updating; it also inserts an element when the specified key is not present yet.
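A minimal sketch of the difference; the key and value types here are simplified stand-ins for the question's vTarget / CPCSteps:

#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<unsigned int, std::string> best;

    best.insert(std::make_pair(1u, std::string("first")));
    best.insert(std::make_pair(1u, std::string("second"))); // key 1 already exists: insert is a no-op
    std::cout << best[1] << '\n';                            // prints "first"

    best[1] = "third";                                       // operator[] overwrites the existing value
    std::cout << best[1] << '\n';                            // prints "third"
}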