I have an array int *playerNum which stores the list of all the numbers of the players in the team. Each slot, e.g. playerNum[1], represents a position on the team. If I wanted to add a new player for a new position on the team, that is, insert a new element into the array somewhere near the middle, how would I go about doing this?
At the moment I was thinking you memcpy everything up to the position where you want to insert the player into a new array, insert the new player, and then copy over the rest of it?
(I have to use an array)
If you're using C++, I would suggest not using memcpy or memmove but instead using the copy or copy_backward algorithms. These will work on any data type, not just plain old integers, and most implementations are optimized enough that they will compile down to memmove anyway. More importantly, they will work even if you change the underlying type of the elements in the array to something that needs a custom copy constructor or assignment operator.
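For instance, here is a sketch of how copy_backward might be used for the shift; the function and variable names are illustrative, not from the question, and it assumes the array has spare room for one more element at the end:

#include <algorithm>

// Sketch only: shift elements [insertPos, count) one slot to the right,
// then write the new value. Assumes the array has room for count + 1 elements.
void insertPlayer(int players[], int count, int insertPos, int newNumber)
{
    std::copy_backward(players + insertPos, players + count, players + count + 1);
    players[insertPos] = newNumber;
}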
If you have to use an array, after having made sure you have enough storage (using realloc if necessary), use memmove to shift the items from the insertion point to the end by one position, then save your new player at the desired location.
You can't use memcpy if the source and target areas overlap.
This will fail as soon as the objects in your array have non-trivial copy-constructors, and it's not idiomatic C++. Using one of the container classes is much safer (std::vector or std::list for instance).
Your solution using memcpy is correct (under a few assumptions mentioned by others).
However, since you are programming in C++, it is probably a better choice to use std::vector and its insert method.
vector<int> myvector (3,100);
myvector.insert ( myvector.begin() + 1, 42 );
An array occupies a contiguous block of memory; there is no function that inserts an element in the middle for you. You can create a new array whose size is one larger than the original's, then copy the original array into the new one along with the new member:
int insertPos = arSize / 2;            // where the new element goes
for(int i = 0; i < insertPos; i++)
{
    newarray[i] = ar[i];               // elements before the insertion point
}
newarray[insertPos] = newelement;      // place the new element
for(int i = insertPos; i < arSize; i++)
{
    newarray[i + 1] = ar[i];           // remaining elements, shifted up by one
}
If you use the STL, things become easier: use std::list.
As you're talking about an array and "insert" I assume that it is a sorted array. You don't necessarily need a second array provided that the capacity N of your existing array is large enough to store more entries (N>n, where n is the number of current entries). You can move the entries from k to n-1 (zero-indexed) to k+1 to n, where k is the desired insert position. Insert the new element at index position k and increase n by one. If the array is not large enough in the beginning, you can follow your proposed approach or just reallocate a new array of larger capacity N' and copy the existing data before applying the actual insert operation described above.
BTW: As you're using C++, you could easily use std::vector.
While it is possible to use arrays for this, C++ has better solutions to offer. For starters, try std::vector, which is a decent enough general-purpose container, based on a dynamically-allocated array. It behaves exactly like an array in many cases.
Looking at your problem, however, there are two downsides to arrays or vectors:
Indices have to be 0-based and contiguous; you cannot remove elements from the middle without losing the key/value associations for everything after the removed element; so if you remove the player at position 4, then the player from position 9 will move to position 8
Random insertion and deletion (that is, anywhere except the end) is expensive - O(n), that is, execution time grows linearly with array size. This is because every time you insert or delete, a part of the array needs to be moved.
If the key/value thing isn't important to you, and insertion/deletion isn't time critical, and your container is never going to be really large, then by all means, use a vector. If you need random insertion/deletion performance, but the key/value thing isn't important, look at std::list (although you won't get random access then, that is, the [] operator isn't defined, as implementing it would be very inefficient for linked lists; linked lists are also very memory hungry, with an overhead of two pointers per element). If you want to maintain key/value associations, std::map is your friend.
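For instance, here is a rough sketch of the std::map approach for the player/position case; the names and values are purely illustrative:

#include <map>

int main()
{
    // Key = position on the team, value = player number.
    std::map<int, int> playerByPosition;
    playerByPosition[4] = 23;     // position 4 holds player 23
    playerByPosition[9] = 7;      // position 9 holds player 7
    playerByPosition.erase(4);    // removing position 4 leaves position 9 untouched
    // playerByPosition[9] is still 7; nothing shifts.
}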
Losing the tail:
#include <stdio.h>
#include <string.h>

#define s 10
int L[s];

void insert(int v, int p, int *a)
{
    /* shift a[p] .. a[s-2] up one slot; the last element is dropped */
    memmove(a + p + 1, a + p, (s - p - 1) * sizeof(int));
    *(a + p) = v;
}

int main()
{
    for(int i = 0; i < s; i++) L[i] = i;
    insert(11, 6, L);
    for(int i = 0; i < s; i++) printf("%d %p\n", L[i], (void *)&L[i]);
    return 0;
}
I am making a game engine and need to use the std::vector container for all of the components and entities in the game.
In a script the user might need to hold a pointer to an entity or component, perhaps to continuously check some kind of state. If something is added to the vector that the pointer points to and the capacity is exceeded, it is my understanding that the vector will allocate new memory and every pointer that points to any element in the vector will become invalid.
Considering this issue I have a couple of possible solutions. After each push_back to the vector, would it be viable to check if a current capacity variable is exceeded by the actual capacity of the vector? And if so, fetch and overwrite the old pointers with the new ones? Would this be guaranteed to "catch" every case that invalidates pointers when performing a push_back?
Another solution that I've found is to instead save an index to the element and access it that way, but I suspect that is bad for performance when you need to continuously check the state of that element (every 1/60 second).
I am aware that other containers do not have this issue but I'd really like to make it work with a vector. Also it might be worth noting that I do not know in advance how many entities / components there will be.
Any input is greatly appreciated.
You shouldn't worry about the performance of std::vector when you access its elements only 60 times per second. By the way, in Release mode std::vector::operator[] typically compiles down to a single lea instruction. In Debug mode it is decorated with some runtime range checks, though.
If the user is going to store pointers to the objects, why even contain them in a vector?
I don't think it is a good idea to create pointers that point to vector elements, i.e. my_ptr = &my_vec[n];. The whole point of a container is to reference the contents in the normal ways that the container supports, not to create outside pointers to elements of the container.
To answer your question about whether you can detect the allocations, yes you could, but it is still probably a bad idea to reference the contents of a vector by pointers to elements.
You could also reserve space in the vector when you create it, if you have some idea of what the maximum size might grow to. Then it would never resize.
edit:
After reading other responses, and thinking about what you asked, another thought occurred. If your vector is a vector of pointers to objects, and you pass out the pointers to the objects to your clients, resizing the vector does not invalidate the pointers that the vector holds. The issue becomes keeping track of the life of the object (who owns it), which is why using shared_ptr would be useful.
For example:
vector<shared_ptr<Object>> my_vec;   // Object stands for whatever element type you store
my_vec.push_back(stuff);
if you pass out the pointers contained in the vector to clients...
client_ptr = my_vec[3];
There will be no problem when the vector resizes. The contents of the vector will be preserved, and whatever was at my_vec[3] will still be there. The object pointed to by my_vec[3] will still be at the same address, and my_vec[3] will still contain that address. Whoever got a copy of the pointer at my_vec[3] will still have a valid pointer.
However, if you did this:
client_ptr = &my_vec[3];
And the client is dereferencing like this:
(*client_ptr)->whatever();
You have a problem. Now when my_vec resizes, &my_vec[3] is probably no longer valid, and client_ptr points to nowhere.
If something is added to the vector that the pointer points to and the capacity is exceeded, it is my understanding that the vector will allocate new memory and every pointer that points to any element in the vector will become invalid.
I once wrote some code to analyze what happens when a vector's capacity is exceeded. (Have you done this yet?) What that code demonstrated on my Ubuntu system with g++ v5 was that std::vector simply a) doubles the capacity, b) moves all the elements from the old to the new storage, then c) cleans up the old. Perhaps your implementation is similar. I think the details of capacity expansion are implementation dependent.
And yes, any pointer into the vector would be invalidated when push_back() causes capacity to be exceeded.
1) I simply don't use pointers-into-the-vector (and neither should you). In this way the issue is completely eliminated, as it simply can not occur. (see also, dangling pointers) The proper way to access a std::vector (or a std::array) element is to use an index (via the operator[]() method).
After any capacity expansion, the indexes of all elements below the previous capacity limit are still valid, as push_back() installed the new element at the 'end' (at the highest memory address used so far, I think). An element's memory location may have changed, but its index is still the same.
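A small sketch of that difference between a stored pointer and a stored index (illustrative only):

#include <cassert>
#include <cstddef>
#include <vector>

int main()
{
    std::vector<int> v;
    v.push_back(1);
    int* p = &v[0];          // pointer into the vector
    std::size_t idx = 0;     // index into the vector

    for (int i = 0; i < 1000; ++i)
        v.push_back(i);      // almost certainly triggers at least one reallocation

    // p may now dangle and must not be dereferenced,
    // but v[idx] still names the first element we stored.
    (void)p;
    assert(v[idx] == 1);
}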
2) It is my practice that I simply don't exceed the capacity. Yes, by that I mean that I have been able to formulate all my problems such that I know the required maximum-capacity. I have never found this approach to be a problem.
3) If the vector contents can not be contained in system memory (my system's best upper limit capacity is roughly 3.5 GBytes), then perhaps a vector container (or any ram based container) is inappropriate. You will have to accomplish your goal using disk storage, perhaps with vector containers acting as a cache.
update 2017-July-31
Some code to consider from my latest Game of Life.
Each Cell_t (on the 2-d gameboard) has 8 neighbors.
In my implementation, each Cell_t has a neighbor 'list' (either std::array or std::vector, I've tried both), and after the gameboard has been fully constructed, each Cell_t's init() method is run, filling its neighbor 'list'.
// see Cell_t data attributes
std::array<int, 8> m_neighbors;
// ...
void Cell_t::init()
{
int i = 0;
m_neighbors[i] = validCellIndx(m_row-1, m_col-1); // 1 - up left
m_neighbors[++i] = validCellIndx(m_row-1, m_col); // 2 - up
m_neighbors[++i] = validCellIndx(m_row-1, m_col+1); // 3 - up right
m_neighbors[++i] = validCellIndx(m_row, m_col+1); // 4 - right
m_neighbors[++i] = validCellIndx(m_row+1, m_col+1); // 5 - down right
m_neighbors[++i] = validCellIndx(m_row+1, m_col); // 6 - down
m_neighbors[++i] = validCellIndx(m_row+1, m_col-1); // 7 - down left
m_neighbors[++i] = validCellIndx(m_row, m_col-1); // 8 - left
// ^^^^^^^^^^^^^- returns info to quickly find cell
}
The int value in m_neighbors[i] is the index into the gameboard vector. To determine the next state of the cell, the code counts the neighbors' states.
Note - Some cells are at the edge of the gameboard ... in this implementation, validCellIndx() can return a value indicating 'no-neighbor', (above top row, left of left edge, etc.)
// multiplier: for 100x200 cells,20,000 * m_generation => ~20,000,000 ops
void countNeighbors(int& aliveNeighbors, int& totalNeighbors)
{
{ /* ... initialize m_count[]s to 0 */ }
for(auto neighborIndx : m_neighbors ) { // each of 8 neighbors // 123
if(no_neighbor != neighborIndx) // 8-4
m_count[ gBoard[neighborIndx].m_state ] += 1; // 765
}
aliveNeighbors = m_count[ CellALIVE ]; // CellDEAD = 1, CellALIVE
totalNeighbors = aliveNeighbors + m_count [ CellDEAD ];
} // Cell_Arr_t::countNeighbors
init() pre-computes the indexes of this cell's neighbors. The m_neighbors array holds index integers, not pointers. It is trivial to have NO pointers-into-the-gameboard vector.
A classic, I'm looking for optimisation here: I have an array of things, and after some processing I know I'm only interested in elements i to j. How do I trim my array in the fastest, lightest way, with complete deletion/freeing of the memory of the elements before i and after j?
I'm doing embedded C++, so I may not be able to compile all sorts of libraries, let's say. But std or vector things are welcome in a first phase!
I've tried, for an array A to be trimmed between i and j, with a variable numElms telling me the number of elements in A:
A = &A[i];
numElms = i-j+1;
As it is this yields an incompatibility error. Can that be fixed, and even when fixed, does that free the memory at all for now-unused elements?
A little context: This array is the central data set of my module, and it can be heavy. It will live as long as the module lives, and there's no need to carry dead weight all this time. This is the very first thing that is done - figuring out which segment of the data set has to be analyzed at all, and trimming and dumping the rest forever, never to use it again (until the next cycle, where we get a fresh array with possibly a completely different size).
When asking questions about speed, your mileage may vary based on the size of the array you're working with, but:
Your fastest way will be to not trim the array, just use A[index + i] to find the elements you want.
The lightest way to do this, sketched in code after the list, would be to:
Allocate a dynamic array with malloc
Once i and j are found copy that range to the head of the dynamic array
Use realloc to resize the dynamic array to the size j - i + 1
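A rough sketch of those three steps for plain int data, assuming the block originally came from malloc (not new[]); the names are illustrative:

#include <cstdlib>
#include <cstring>

// Trim a malloc'd int array down to elements [i, j], in place.
// Only valid for trivially copyable data and malloc-allocated storage.
int* trimRange(int* a, std::size_t i, std::size_t j)
{
    std::size_t kept = j - i + 1;
    std::memmove(a, a + i, kept * sizeof *a);   // move the kept range to the front
    int* shrunk = static_cast<int*>(std::realloc(a, kept * sizeof *a));
    return shrunk ? shrunk : a;                 // a shrinking realloc rarely fails
}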
However you have this tagged as C++ not C, so I believe that you're also interested in readability and the required programming investment, not raw speed or weight. If this is true then I would suggest use of a vector or deque.
Given vector<thing> A or a deque<thing> A you could do:
A.erase(cbegin(A), next(cbegin(A), i));
A.resize(j - i + 1);
There is no way to change an allocated memory block's size in standard C++ (unless you have POD data, in which case C facilities like realloc could be used). The only way to trim an array is to allocate a new array, copy/move the needed elements, and destroy the old array.
You can do it manually, or using vectors:
int* array = new int[10]{0,1,2,3,4,5,6,7,8,9};
std::vector<int> vec {0,1,2,3,4,5,6,7,8,9};
//We want only elements 3-5
{
int* new_array = new int[3];
std::copy(array + 3, array + 6, new_array);
delete[] array;
array = new_array;
}
vec = std::vector<int>(vec.begin()+3, vec.begin()+6);
If you are using C++11, both approaches should have the same performance.
If you only want to remove extra elements and do not really want to release memory (for example, you might want to add more elements later) you can follow NathanOliver's link.
However, you should consider: do you really need that memory freed immediately? Do you need to move elements right now? Will your array live for such a long time that this memory would be lost to your program completely? Maybe you need a range or perhaps a view into the array content? In many cases you can store two pointers (or a pointer and a size) to denote your "new" array, while keeping the old one to be released all at once.
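One minimal way to express such a view in code (illustrative only; C++20's std::span is a ready-made equivalent):

#include <cstddef>

// A non-owning "view" into an existing array: the original buffer stays
// allocated (and is freed all at once later), but the rest of the module
// only ever sees the interesting range.
struct IntView {
    int*        data;
    std::size_t size;
};

IntView makeView(int* array, std::size_t i, std::size_t j)
{
    return IntView{ array + i, j - i + 1 };
}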
I'd like to fill a vector with a (known at runtime) quantity of data, but the elements arrive in (index, value) pairs rather than in the original order. These indices are guaranteed to be unique (each index from 0 to n-1 appears exactly once) so I'd like to store them as follows:
vector<Foo> myVector;
myVector.reserve(n); //total size of data is known
myVector[i_0] = v_0; //data v_0 goes at index i_0 (not necessarily 0)
...
myVector[i_n_minus_1] = v_n_minus_1;
This seems to work fine for the most part; at the end of the code, all n elements are in their proper places in the vector. However, some of the vector functions don't quite work as intended:
...
cout << myVector.size(); //prints 0, not n!
It's important to me that functions like size() still work--I may want to check for example, if all the elements were actually inserted successfully by checking if size() == n. Am I initializing the vector wrong, and if so, how should I approach this otherwise?
myVector.reserve(n) just tells the vector to allocate enough storage for n elements, so that when you push_back new elements into the vector, the vector won't have to continually reallocate more storage -- it may have to do this more than once, because it doesn't know in advance how many elements you will insert. In other words you're helping out the vector implementation by telling it something it wouldn't otherwise know, and allowing it to be more efficient.
But reserve doesn't actually make the vector be n long. The vector is empty, and in fact statements like myVector[0] = something are illegal, because the vector is of size 0: on my implementation I get an assertion failure, "vector subscript out of range". This is on Visual C++ 2012, but I think that gcc is similar.
To create a vector of the required length simply do
vector<Foo> myVector(n);
and forget about the reserve.
(As noted in the comment, you can also call resize to set the vector size, but in your case it's simpler to pass the size as the constructor parameter.)
You need to call myVector.resize(n) to set (change) the size of the vector. Calling reserve doesn't actually resize the vector; it just makes it so you can later resize without reallocating memory. Writing past the end of the vector (as you are doing here -- the vector size is still 0 when you write to it) is undefined behavior.
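A small demonstration of the difference (illustrative only):

#include <cassert>
#include <vector>

int main()
{
    std::vector<int> v;
    v.reserve(5);            // capacity >= 5, but the vector is still empty
    assert(v.size() == 0);
    // v[0] = 1;             // undefined behavior: no element exists yet

    v.resize(5);             // now five value-initialized elements exist
    assert(v.size() == 5);
    v[2] = 42;               // fine: element 2 exists
}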
I have an assignment that requires me to sort a heap based C style array of names as they're being read rather than reading them all and then sorting. This involves a lot of shifting the contents of the array by one to allow new names to be inserted. I'm using the code below but it's extremely slow. Is there anything else I could be doing to optimize it without changing the type of storage?
//the data member
string *_storedNames = new string[4000];
//together, boundary and index define the range of elements to shift to the right by one
for(int k = p_boundary - 1;k > index;k--)
_storedNames[k]=_storedNames[k - 1];
EDIT2:
As suggested by Cartroo I'm attempting to use memmove with the dynamic data that uses malloc. Currently this shifts the data correctly but once again fails in the deallocation process. Am I missing something?
int numberOfStrings = 10, MAX_STRING_SIZE = 32;
char **array = (char **)malloc(numberOfStrings);
for(int i = 0; i < numberOfStrings; i++)
array[i] = (char *)malloc(MAX_STRING_SIZE);
array[0] = "hello world",array[2] = "sample";
//the range of data to move
int index = 1, boundary = 4;
int sizeToMove = (boundary - index) * sizeof(MAX_STRING_SIZE);
memcpy(&array[index + 1], &array[index], sizeToMove);
free(array);
If you're after minimal changes to your approach, you could use the memmove() function, which is potentially faster than your own manual version. You can't use memcpy() as advised by one commenter as the areas of memory aren't permitted to overlap (the behaviour is undefined if they do).
There isn't a lot else you can do without changing the type of your storage or your algorithm. However, if you change to using a linked list then the operation becomes significantly more efficient, although you will be doing more memory allocation. If the allocation is really a problem (and unless you're on a limited embedded system it probably isn't) then pool allocators or similar approaches could help.
EDIT: Re-reading your question, I'm guessing that you're not actually using Heapsort, you just mean that your array was allocated on the heap (i.e. using malloc()) and you're doing a simple insertion sort. In which case, the information below isn't much use to you directly, although you should be aware that insertion sort is quite inefficient compared to a bulk insert followed by a better sorting algorithm (e.g. Quicksort which you can implement using the standard library qsort() function). If you only ever need the lowest (or highest) item instead of full sorted order then Heapsort is still useful reading.
If you're using a standard Heapsort then you shouldn't need this operation at all - items are appended at the end of the array, and then the "heapify" operation is used to swap them into the correct position in the heap. Each swap just requires a single temporary variable to swap two items - it doesn't require anything to be shuffled down as in your code snippet. It does require everything in the array to be the same size (either a fixed size in-place string or, more likely, a pointer), but your code already seems to assume that anyway (and using variable length strings in a standard char array would be a pretty strange thing to be doing).
Note that strictly speaking Heapsort operates on a binary tree. Since you're dealing with an array I assume you're using the implementation where a contiguous array is used and the children of the node at index n are stored at indices 2n and 2n+1 respectively. If this isn't the case, or you're not using a Heapsort at all, you should explain in more detail what you're trying to do to get a more helpful answer.
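A sketch of the usual index arithmetic for a binary heap kept in an array, using the 1-based convention described above (with 0-based indexing the children would be at 2n + 1 and 2n + 2 instead):

#include <cstddef>

inline std::size_t leftChild(std::size_t n)  { return 2 * n; }
inline std::size_t rightChild(std::size_t n) { return 2 * n + 1; }
inline std::size_t parent(std::size_t n)     { return n / 2; }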
EDIT: The following is in response to you updated code above.
The main reason you see a problem during deallocation is if you trampled some memory - in other words, you're copying something outside the size of the area you allocated. This is a really bad thing to do as you overwrite the values that the system is using to track your allocations and cause all sorts of problems which usually result in your program crashing.
You seem to have some slight confusion as to the nature of memory allocation and deallocation, first of all. You allocate an array of char*, which on its own is fine. You then allocate arrays of char for each string, which is also fine. However, you then just call free() for the initial array - this isn't enough. There needs to be a call to free() to match each call to malloc(), so you need to free each string that you allocate and then free the initial array.
Secondly, you set sizeToMove to a multiple of sizeof(MAX_STRING_SIZE), which almost certainly isn't what you want. This is the size of the variable used to store the MAX_STRING_SIZE constant. Instead you want sizeof(char*). On some platforms these may be the same, in which case things will still work, but there's no guarantee of that. For example, I'd expect it to work on a 32-bit platform (where int and char* are the same size) but not on a 64-bit platform (where they're not).
Thirdly, you can't just assign a string constant (e.g. "hello world") to an allocated block - what you're doing here is replacing the pointer. You need to use something like strncpy() or memcpy() to copy the string into the allocated block. I suggest snprintf() for convenience because strncpy() has the problem that it doesn't guarantee to nul-terminate the result, but it's up to you.
Fourthly, you're still using memcpy() and not memmove() to shuffle items around.
Finally, I've just seen your comment that you have to use new and delete. There is no equivalent of realloc() for these, but that's OK if everything is known in advance. It looks like what you're trying to do is something like this:
bool addItem(char *item, char *list[], size_t listSize, size_t listMaxSize)
{
// Check if list is full.
if (listSize >= listMaxSize) {
return false;
}
// Insert item inside list.
for (unsigned int i = 0; i < listSize; ++i) {
if (strcmp(list[i], item) > 0) {
memmove(list + i + 1, list + i, sizeof(char*) * (listSize - i));
list[i] = item;
return true;
}
}
// Append item to list.
list[listSize] = item;
return true;
}
I haven't compiled and checked that, so watch out for off-by-one errors and the like, but hopefully you get the idea. This function should work whether you use malloc() and free() or new and delete, but it assumes that you've already copied the string item into an allocated buffer that you will keep around, because of course it just stores a pointer.
Remember that of course you need to update listSize yourself outside this function - this just inserts an item into the correct point in the array for you. If the function returns true then increment your copy of listSize by 1 - if it returns false then you didn't allocate enough memory so your item wasn't added.
Also note that in C and C++, for the array list the syntax &list[i] and list + i are totally equivalent - use the first one instead within the memmove() call if you find it easier to understand.
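For example, a hypothetical call site might look like the following; the string, the buffer sizes, and the names are placeholders, and it assumes the addItem() sketch above is visible:

#include <cstring>

void example()
{
    static const std::size_t listMaxSize = 4000;
    static char* list[listMaxSize];
    static std::size_t listSize = 0;

    // Copy the name into storage that outlives the call, since addItem()
    // only stores the pointer.
    char* name = new char[32];
    std::strcpy(name, "Smith");

    if (addItem(name, list, listSize, listMaxSize)) {
        ++listSize;   // the caller tracks the element count, as noted above
    }
}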
I think what you're looking for is heapsort: http://en.wikipedia.org/wiki/Heapsort#Pseudocode
An array is a common way to store a binary heap (i.e. a tree in which every parent node is ordered relative to its children, so the largest, or smallest, element sits at the root).
Heapsort sorts an array of a specified length. In your case, since the size of the array is going to increase "online", all you need to do is change the input size that you pass to heapsort (i.e. increase the number of elements being considered by 1).
Since your array is sorted and you can't use any other data structure your best bet is likely to perform a binary search, then to shift the array up one to free space at the position for insertion and then insert the new element at that position.
To minimise the cost of shifting the array, you could make it an array of pointers to string:
string **_storedNames = new string*[4000];
Now you can use memmove (although you might find now that an element-by-element copy is fast enough). But you will have to manage the allocation and deletion of the individual strings yourself, and this is somewhat error-prone.
Other posters who recommend using memmove on your original array don't seem to have noticed that each array element is a string (not a string* !). You can't use memmove or memcpy on a class like this.
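A sketch of how the pointer-array version might shift entries; the names mirror the question, but the function itself is illustrative:

#include <cstring>
#include <string>

// Shift the pointer slots [index, boundary - 1) up by one and store a new
// string at 'index'. memmove is fine here because only raw pointers are
// moved, never std::string objects themselves.
void insertName(std::string* names[], int boundary, int index, const std::string& value)
{
    std::memmove(&names[index + 1], &names[index],
                 (boundary - 1 - index) * sizeof(std::string*));
    names[index] = new std::string(value);   // the caller owns this and must delete it later
}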
I need to have a dynamic array, so I need to allocate the necessary amount of memory through a pointer. What makes me wonder about which is a good solution, is that C++ has the ability to do something like:
int * p = new int[6];
which allocates the necessary array. What I need is that, afterwards, I want to grow some parts of this array. A (flawed) example:
int *p1 = &p[0];
int *p2 = &p[2];
int *p3 = &p[4];
// delete positions p[2], p[3]
delete [] p2;
// create new array
p2 = new int[4];
I don't know how to achieve this behavior.
EDIT: std::vector does not work for me since I need the time of insertion/deletion of k elements to be proportional to the number k and not to the number of elements stored in the std::vector.
Using pointers, in the general case, I would point to the start of any non continuous region of memory and I would keep account of how many elements it stores. Conceptually, I would fragment the large array into many small ones and not necessarily in continuous space in the memory (the deletion creates "holes" while the allocation does not necessarily "fill" them).
You achieve this behavior by using std::vector:
std::vector<int> v(6); // create a vector with six elements.
v.erase(v.begin() + 2); // erase the element at v[2]
v.insert(v.begin() + 2, 4, 0); // insert four new elements starting at v[2]
Really, any time you want to use a dynamically allocated array, you should first consider using std::vector. It's not the solution to every problem, but along with the rest of the C++ standard library containers it is definitely the solution to most problems.
You should look into STL containers in C++, for example vector has pretty much the functionality you want.
I'd advise against doing this on your own. Look up std::vector for a reasonable starting point.
another option, besides std::vector, is std::deque, which works in much the same way but is a little more efficient at inserting chunks into the middle. If that's still not good enough for you, you might get some mileage using a collection of collections. You'll have to do a little bit more work getting random access to work (perhaps writing a class to wrap the whole thing).
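For instance, a minimal std::deque sketch of a middle insertion (values are illustrative):

#include <deque>

int main()
{
    std::deque<int> d = {1, 2, 3, 5, 6};
    d.insert(d.begin() + 3, 4);   // insert 4 before index 3 -> 1 2 3 4 5 6
}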