A classic, I'm looking for optimisation here: I have an array of things, and after some processing I know I'm only interested in elements i to j. How do I trim my array in the fastest, lightest way, with complete deletion/freeing of the memory of the elements before i and after j?
I'm doing embedded C++, so let's say I may not be able to compile all sorts of libraries. But std or vector things are welcome in a first phase!
I've tried, for an array A to be trimmed between i and j, with a variable numElms telling me the number of elements in A:
A = &A[i];
numElms = i-j+1;
As it is this yields an incompatibility error. Can that be fixed, and even when fixed, does that free the memory at all for now-unused elements?
A little context: this array is the central data set of my module, and it can be heavy. It will live as long as the module lives. And there's no need to carry dead weight all this time. This is the very first thing that is done - figuring out which segment of the data set has to be analyzed at all, and trimming and dumping the rest forever, never to use it again (until the next cycle, where we get a fresh array with possibly a completely different size).
When asking questions about speed, your mileage may vary based on the size of the array you're working with, but:
Your fastest way will be to not trim the array, just use A[index + i] to find the elements you want.
The lightest way to do this would be to (see the sketch after this list):
Allocate a dynamic array with malloc
Once i and j are found copy that range to the head of the dynamic array
Use realloc to resize the dynamic array to the size j - i + 1
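A minimal sketch of that malloc/copy/realloc sequence, assuming the block was obtained from malloc in the first place and that `thing` (the question's element type) is trivially copyable, so realloc is legal:
#include <cstdlib>
#include <cstring>

// keep only elements [i, j] of a malloc'd array; returns the (possibly moved) block
thing* trim(thing* a, std::size_t i, std::size_t j)
{
    std::size_t kept = j - i + 1;
    std::memmove(a, a + i, kept * sizeof(thing));                          // slide the wanted range to the front
    thing* shrunk = static_cast<thing*>(std::realloc(a, kept * sizeof(thing)));
    return shrunk ? shrunk : a;                                            // if realloc fails, the old block is still valid
}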
However you have this tagged as C++ not C, so I believe that you're also interested in readability and the required programming investment, not raw speed or weight. If this is true then I would suggest use of a vector or deque.
Given vector<thing> A or a deque<thing> A you could do:
A.erase(cbegin(A), next(cbegin(A), i));
A.resize(j - i + 1);
There is no way to change an allocated memory block's size in standard C++ (unless you have POD data, in which case C facilities like realloc could be used). The only way to trim an array is to allocate a new array, copy/move the needed elements and destroy the old array.
You can do it manually, or using vectors:
int* array = new int[10]{0,1,2,3,4,5,6,7,8,9};
std::vector<int> vec {0,1,2,3,4,5,6,7,8,9};
//We want only elements 3-5
{
int* new_array = new int[3];
std::copy(array + 3, array + 6, new_array);
delete[] array;
array = new_array;
}
vec = std::vector<int>(vec.begin()+3, vec.begin()+6);
If you are using C++11, both approaches should have the same performance.
If you only want to remove extra elements and do not really want to release memory (for example, you might want to add more elements later) you can follow NathanOliver's link.
However, you should consider: do you really need that memory freed immediately? Do you need to move elements right now? Will your array live for such a long time that this memory would be lost to your program completely? Maybe you need a range, or perhaps a view of the array content? In many cases you can store two pointers (or a pointer and a size) to denote your "new" array, while keeping the old one to be released all at once.
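A rough sketch of that last idea (the names are illustrative, with `thing` standing in for the element type): keep the whole buffer alive, and describe the interesting segment with a pointer and a count:
#include <cstddef>

struct Segment {
    thing*      data;   // points into the original allocation
    std::size_t size;   // number of elements in the segment
};

Segment select(thing* a, std::size_t i, std::size_t j)
{
    return { a + i, j - i + 1 };   // no copy, no free; the whole buffer is released later, in one go
}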
I have data which is N by 4, which I push back as follows.
vector<vector<int>> a;
for(some loop){
...
a.push_back(vector<int>{val1,val2,val3,val4});
}
N would be less than 13000. In order to prevent unnecessary reallocation, I would like to reserve 13000 by 4 spaces in advance.
After reading multiple related posts on this topic (e.g. How to reserve a multi-dimensional Vector?), I know the following will do the job. But I would like to do it with reserve() or any similar function, if there is one, to be able to use push_back().
vector<vector<int>> a(13000, vector<int>(4));
or
vector<vector<int>> a;
a.resize(13000,vector<int>(4));
How can I just reserve memory without increasing the vector size?
If your data is guaranteed to be N x 4, you do not want to use a std::vector<std::vector<int>>, but rather something like std::vector<std::array<int, 4>>.
Why?
It's the more semantically-accurate type - std::array is designed for fixed-width contiguous sequences of data. (It also opens up the potential for more performance optimizations by the compiler, although that depends on exactly what it is that you're writing.)
Your data will be laid out contiguously in memory, rather than every one of the different vectors allocating potentially disparate heap locations.
Having said that - @pasbi's answer is correct: you can use std::vector::reserve() to allocate space for your outer vector before inserting any actual elements (both for vectors-of-vectors and for vectors-of-arrays). Also, later on, you can use the std::vector::shrink_to_fit() method if you ended up inserting a lot less than you had planned.
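A short sketch of that combination (13000 and val1..val4 are taken from the question; the push_back happens inside the reading loop):
#include <array>
#include <vector>

std::vector<std::array<int, 4>> a;
a.reserve(13000);                         // capacity for 13000 rows; a.size() stays 0
a.push_back({val1, val2, val3, val4});    // rows are stored contiguously in a single allocation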
Finally, one other option is to use a gsl::multi_span and pre-allocate memory for it (GSL is the C++ Core Guidelines Support Library).
You've already answered your own question.
There is a function vector::reserve which does exactly what you want.
vector<vector<int>> a;
a.reserve(N);
for(some loop){
...
a.push_back(vector<int>{val1,val2,val3,val4});
}
This will reserve memory to fit N times vector<int>. Note that the actual size of the inner vector<int> is irrelevant at this point, since the data of a vector is allocated somewhere else; only a pointer and some bookkeeping are stored in the actual std::vector class.
Note: this answer is only here for completeness in case you ever come to have a similar problem with an unknown size; keeping a std::vector<std::array<int, 4>> in your case will do perfectly fine.
To pick up on einpoklum's answer, and in case you didn't find this earlier, it is almost always a bad idea to have nested std::vectors, because of the memory layout he spoke of. Each inner vector will allocate its own chunk of data, which won't (necessarily) be contiguous with the others, which will produce cache misses.
Preferably, either:
Like already said, use an std::array if you have a fixed and known amount of elements per vector;
Or flatten your data structure by having a single std::vector<T> of size N x M.
// Assuming N = 13000, M = 4
std::vector<int> vec;
vec.reserve(13000 * 4);
Then you can access it like so:
// Before:
int& element = vec[nIndex][mIndex];
// After:
int& element = vec[mIndex * 13000 + nIndex]; // Still assuming N = 13000
I have an assignment that requires me to sort a heap-based, C-style array of names as they're being read, rather than reading them all and then sorting. This involves a lot of shifting of the array's contents by one to allow new names to be inserted. I'm using the code below, but it's extremely slow. Is there anything else I could be doing to optimize it without changing the type of storage?
//the data member
string *_storedNames = new string[4000];
//together, boundary and index define the range of elements to shift to the right by one
for(int k = p_boundary - 1;k > index;k--)
_storedNames[k]=_storedNames[k - 1];
EDIT2:
As suggested by Cartroo I'm attempting to use memmove with the dynamic data that uses malloc. Currently this shifts the data correctly but once again fails in the deallocation process. Am I missing something?
int numberOfStrings = 10, MAX_STRING_SIZE = 32;
char **array = (char **)malloc(numberOfStrings);
for(int i = 0; i < numberOfStrings; i++)
array[i] = (char *)malloc(MAX_STRING_SIZE);
array[0] = "hello world",array[2] = "sample";
//the range of data to move
int index = 1, boundary = 4;
int sizeToMove = (boundary - index) * sizeof(MAX_STRING_SIZE);
memcpy(&array[index + 1], &array[index], sizeToMove);
free(array);
If you're after minimal changes to your approach, you could use the memmove() function, which is potentially faster than your own manual version. You can't use memcpy() as advised by one commenter as the areas of memory aren't permitted to overlap (the behaviour is undefined if they do).
There isn't a lot else you can do without changing the type of your storage or your algorithm. However, if you change to using a linked list then the operation becomes significantly more efficient, although you will be doing more memory allocation. If the allocation is really a problem (and unless you're on a limited embedded system it probably isn't) then pool allocators or similar approaches could help.
EDIT: Re-reading your question, I'm guessing that you're not actually using Heapsort, you just mean that your array was allocated on the heap (i.e. using malloc()) and you're doing a simple insertion sort. In which case, the information below isn't much use to you directly, although you should be aware that insertion sort is quite inefficient compared to a bulk insert followed by a better sorting algorithm (e.g. Quicksort which you can implement using the standard library qsort() function). If you only ever need the lowest (or highest) item instead of full sorted order then Heapsort is still useful reading.
If you're using a standard Heapsort then you shouldn't need this operation at all - items are appended at the end of the array, and then the "heapify" operation is used to swap them into the correct position in the heap. Each swap just requires a single temporary variable to swap two items - it doesn't require anything to be shuffled down as in your code snippet. It does require everything in the array to be the same size (either a fixed size in-place string or, more likely, a pointer), but your code already seems to assume that anyway (and using variable length strings in a standard char array would be a pretty strange thing to be doing).
Note that strictly speaking Heapsort operates on a binary tree. Since you're dealing with an array, I assume you're using the implementation where a contiguous array is used and the children of the node at index n are stored at indices 2n and 2n+1 respectively. If this isn't the case, or you're not using a Heapsort at all, you should explain in more detail what you're trying to do to get a more helpful answer.
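For illustration, a sketch of that single-swap "heapify up" step for a max-heap stored 0-based in an array (so the children of node n live at 2n+1 and 2n+2); heap and pos are illustrative names, not from the question:
#include <cstddef>
#include <string>
#include <utility>

void siftUp(std::string* heap, std::size_t pos)
{
    while (pos > 0) {
        std::size_t parent = (pos - 1) / 2;
        if (!(heap[parent] < heap[pos])) break;   // heap property already holds
        std::swap(heap[parent], heap[pos]);       // one swap; nothing is shuffled down
        pos = parent;
    }
}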
EDIT: The following is in response to your updated code above.
The main reason you see a problem during deallocation is if you trampled some memory - in other words, you're copying something outside the size of the area you allocated. This is a really bad thing to do as you overwrite the values that the system is using to track your allocations and cause all sorts of problems which usually result in your program crashing.
You seem to have some slight confusion as to the nature of memory allocation and deallocation, first of all. You allocate an array of char*, which on its own is fine. You then allocate arrays of char for each string, which is also fine. However, you then just call free() for the initial array - this isn't enough. There needs to be a call to free() to match each call to malloc(), so you need to free each string that you allocate and then free the initial array.
Secondly, you set sizeToMove to a multiple of sizeof(MAX_STRING_SIZE), which almost certainly isn't what you want. This is the size of the variable used to store the MAX_STRING_SIZE constant. Instead you want sizeof(char*). On some platforms these may be the same, in which case things will still work, but there's no guarantee of that. For example, I'd expect it to work on a 32-bit platform (where int and char* are the same size) but not on a 64-bit platform (where they're not).
Thirdly, you can't just assign a string constant (e.g. "hello world") to an allocated block - what you're doing here is replacing the pointer. You need to use something like strncpy() or memcpy() to copy the string into the allocated block. I suggest snprintf() for convenience because strncpy() has the problem that it doesn't guarantee to nul-terminate the result, but it's up to you.
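For example (a sketch against the question's array and MAX_STRING_SIZE), copying into the allocated blocks instead of replacing the pointers would look like:
#include <cstdio>

std::snprintf(array[0], MAX_STRING_SIZE, "%s", "hello world");   // copies into the malloc'd block, always nul-terminated
std::snprintf(array[2], MAX_STRING_SIZE, "%s", "sample");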
Fourthly, you're still using memcpy() and not memmove() to shuffle items around.
Finally, I've just seen your comment that you have to use new and delete. There is no equivalent of realloc() for these, but that's OK if everything is known in advance. It looks like what you're trying to do is something like this:
bool addItem(char *item, char *list[], size_t listSize, size_t listMaxSize)
{
// Check if list is full.
if (listSize >= listMaxSize) {
return false;
}
// Insert item inside list.
for (unsigned int i = 0; i < listSize; ++i) {
if (strcmp(list[i], item) > 0) {
memmove(list + i + 1, list + i, sizeof(char*) * (listSize - i));
list[i] = item;
return true;
}
}
// Append item to list.
list[listSize] = item;
return true;
}
I haven't compiled and checked that, so watch out for off-by-one errors and the like, but hopefully you get the idea. This function should work whether you use malloc() and free() or new and delete, but it assumes that you've already copied the string item into an allocated buffer that you will keep around, because of course it just stores a pointer.
Remember that of course you need to update listSize yourself outside this function - this just inserts an item into the correct point in the array for you. If the function returns true then increment your copy of listSize by 1 - if it returns false then you didn't allocate enough memory so your item wasn't added.
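A sketch of a call site under those assumptions (name, list, listSize and listMaxSize are illustrative names): copy the incoming string into a buffer you own, insert it, and only bump the count on success:
#include <cstring>

char *copy = new char[std::strlen(name) + 1];
std::strcpy(copy, name);
if (addItem(copy, list, listSize, listMaxSize))
    ++listSize;        // addItem() does not update the caller's count
else
    delete[] copy;     // list was full, don't leak the copy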
Also note that in C and C++, for the array list the syntax &list[i] and list + i are totally equivalent - use the first one instead within the memmove() call if you find it easier to understand.
I think what you're looking for is heapsort: http://en.wikipedia.org/wiki/Heapsort#Pseudocode
An array is a common way to implement a binary heap (i.e. a tree in which each parent node is at least as large as its children, for a max-heap).
Heapsort sorts an array of a specified length. In your case, since the size of the array is going to increase "online", all you need to do is change the input size that you pass to heapsort (i.e. increase the number of elements being considered by 1).
Since your array is sorted and you can't use any other data structure your best bet is likely to perform a binary search, then to shift the array up one to free space at the position for insertion and then insert the new element at that position.
To minimise the cost of shifting the array, you could make it an array of pointers to string:
string **_storedNames = new string*[4000];
Now you can use memmove (although you might find now that an element-by-element copy is fast enough). But you will have to manage the allocation and deletion of the individual strings yourself, and this is somewhat error-prone.
Other posters who recommend using memmove on your original array don't seem to have noticed that each array element is a string (not a string* !). You can't use memmove or memcpy on a class like this.
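If you do stay with the plain std::string array, a sketch of the binary-search-plus-shift insert using the standard algorithms (which, unlike memmove, are safe for class types) could look like this; count is an illustrative name for the number of names stored so far, and the array is assumed to have room for one more:
#include <algorithm>
#include <string>

void insertSorted(std::string* names, int& count, const std::string& name)
{
    std::string* pos = std::upper_bound(names, names + count, name);   // binary search for the slot
    std::copy_backward(pos, names + count, names + count + 1);         // shift the tail right by one
    *pos = name;
    ++count;
}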
int * a;
a = new int[10];
cout << sizeof(a)/sizeof(int);
If I used a normal array the answer would be 10;
alas, the lucky number printed was 1, because sizeof(int) is 4 and sizeof(int*) is 4 too. How do I overcome this? In my case, keeping the size in memory is a complicated option. How do I get the size using code?
My best guess would be to iterate through the array and search for its end - and the end is 0, right? Any suggestions?
--edit
Well, what I fear about vectors is that they will reallocate while pushing back - well, you got the point, I can just allocate the memory. However I can't change the structure, the whole code is relevant. Thanks for the answers, I see there's no way around it, so I'll just look for a way to store the size in memory.
What I asked was not what kind of structure to use.
Simple.
Use std::vector<int> or std::array<int, N> (where N is a compile-time constant).
If you know the size of your array at compile time, and it doesn't need to grow at runtime, then use std::array. Otherwise use std::vector.
These are called sequence containers; they define a member function size() which returns the number of elements in the container. You can use that whenever you need to know the size. :-)
Read the documentation:
std::array with example
std::vector with example
When you use std::vector, you should consider using reserve() if you have some vague idea of the number of elements the container is going to hold. That will give you a performance benefit.
If you worry about performance of std::vector vs raw-arrays, then read the accepted answer here:
Is std::vector so much slower than plain arrays?
It explains why the code in the question is slow, which has nothing to do with std::vector itself, rather its incorrect usage.
If you cannot use either of them, and are forced to use int*, then I would suggest these two alternatives. Choose whatever suits your need.
struct array
{
int *elements; //elements
size_t size; //number of elements
};
That is self-explanatory.
The second one is this: allocate memory for one more element and store the size in the first element as:
int N = howManyElements();
int *array = new int[N+1]; //allocate memory for size storage also!
array[0] = N; //store N in the first element!
//your code : iterate i=1 to i<=N
//must delete it once done
delete []array;
sizeof(a) is going to be the size of the pointer, not the size of the allocated array.
There is no way to get the size of the array after you've allocated it. The sizeof operator has to be able to be evaluated at compile time.
How would the compiler know how big the array was in this function?
void foo(int size)
{
int * a;
a = new int[size];
cout << sizeof(a)/sizeof(int);
delete[] a;
}
It couldn't. So it's not possible for the sizeof operator to return the size of an allocated array. And, in fact, there is no reliable way to get the size of an array you've allocated with new. Let me repeat this: there is no reliable way to get the size of an array you've allocated with new. You have to store the size someplace.
Luckily, this problem has already been solved for you, and it's guaranteed to be there in any implementation of C++. If you want a nice array that stores the size along with the array, use ::std::vector. Particularly if you're using new to allocate your array.
#include <vector>
void foo(int size)
{
::std::vector<int> a(size);
cout << a.size();
}
There you go. Notice how you no longer have to remember to delete it. As a further note, using ::std::vector in this way has no performance penalty over using new in the way you were using it.
If you are unable to use std::vector and std::array as you have stated, then your only remaining option is to keep track of the size of the array yourself.
I still suspect that your reasons for avoiding std::vector are misguided. Even for performance monitoring software, intelligent uses of vector are reasonable. If you are concerned about resizing you can preallocate the vector to be reasonably large.
I have an array int *playerNum which stores the list of all the numbers of the players in the team. Each slot, e.g. playerNum[1], represents a position on the team. If I wanted to add a new player for a new position on the team, that is, insert a new element into the array somewhere near the middle, how would I go about doing this?
At the moment, I was thinking you memcpy up to the position where you want to insert the player into a new array, then insert the new player and copy over the rest of it?
(I have to use an array)
If you're using C++, I would suggest not using memcpy or memmove but instead using the copy or copy_backward algorithms. These will work on any data type, not just plain old integers, and most implementations are optimized enough that they will compile down to memmove anyway. More importantly, they will work even if you change the underlying type of the elements in the array to something that needs a custom copy constructor or assignment operator.
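A minimal sketch of that with the question's playerNum array (pos, count and newPlayer are illustrative, and the array is assumed to have room for one more element):
#include <algorithm>

std::copy_backward(playerNum + pos, playerNum + count, playerNum + count + 1);   // shift [pos, count) right by one
playerNum[pos] = newPlayer;                                                      // write the new value into the gap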
If you have to use an array, after having made sure you have enough storage (using realloc if necessary), use memmove to shift the items from the insertion point to the end by one position, then save your new player at the desired location.
You can't use memcpy if the source and target areas overlap.
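A sketch of that sequence, assuming playerNum was allocated with malloc/realloc in the first place (pos, count and newPlayer are illustrative):
#include <cstdlib>
#include <cstring>

int* grown = static_cast<int*>(std::realloc(playerNum, (count + 1) * sizeof(int)));
if (grown) {
    playerNum = grown;
    std::memmove(playerNum + pos + 1, playerNum + pos, (count - pos) * sizeof(int));   // open a slot at pos
    playerNum[pos] = newPlayer;
    ++count;
}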
This will fail as soon as the objects in your array have non-trivial copy-constructors, and it's not idiomatic C++. Using one of the container classes is much safer (std::vector or std::list for instance).
Your solution using memcpy is correct (under a few assumptions mentioned by others).
However, since you are programming in C++, it is probably a better choice to use std::vector and its insert method:
vector<int> myvector (3,100);
myvector.insert ( myvector.begin() + 1 , 42 ); // insert 42 before position 1
An array takes a contiguous block of memory; there is no function to insert an element into the middle. You can create a new array whose size is larger than the original's by one, then copy the original array into the new one, plus the new member:
//copy the elements before the insertion point (here arSize/2)
for(int i = 0; i < arSize/2; i++)
{
    newarray[i] = ar[i];
}
//put the new element in the gap
newarray[arSize/2] = newelement;
//copy the remaining elements, shifted right by one
for(int i = arSize/2; i < arSize; i++)
{
    newarray[i+1] = ar[i];
}
If you use the STL, things become easier: use std::list.
As you're talking about an array and "insert" I assume that it is a sorted array. You don't necessarily need a second array provided that the capacity N of your existing array is large enough to store more entries (N>n, where n is the number of current entries). You can move the entries from k to n-1 (zero-indexed) to k+1 to n, where k is the desired insert position. Insert the new element at index position k and increase n by one. If the array is not large enough in the beginning, you can follow your proposed approach or just reallocate a new array of larger capacity N' and copy the existing data before applying the actual insert operation described above.
BTW: As you're using C++, you could easily use std::vector.
While it is possible to use arrays for this, C++ has better solutions to offer. For starters, try std::vector, which is a decent enough general-purpose container, based on a dynamically-allocated array. It behaves exactly like an array in many cases.
Looking at your problem, however, there are two downsides to arrays or vectors:
Indices have to be 0-based and contiguous; you cannot remove elements from the middle without losing key/value associations for everything after the removed element. So if you remove the player at position 4, then the player from position 9 will move to position 8.
Random insertion and deletion (that is, anywhere except the end) is expensive - O(n), that is, execution time grows linearly with array size. This is because every time you insert or delete, a part of the array needs to be moved.
If the key/value thing isn't important to you, and insertion/deletion isn't time critical, and your container is never going to be really large, then by all means, use a vector. If you need random insertion/deletion performance, but the key/value thing isn't important, look at std::list (although you won't get random access then, that is, the [] operator isn't defined, as implementing it would be very inefficient for linked lists; linked lists are also very memory hungry, with an overhead of two pointers per element). If you want to maintain key/value associations, std::map is your friend.
Losing the tail:
#include <stdio.h>
#include <string.h>
#define s 10
int L[s];
void insert(int v, int p, int *a)
{
    memmove(a+p+1, a+p, (s-p-1)*sizeof(int)); // shift elements p..s-2 right by one, dropping the last element
    *(a+p) = v;
}
int main()
{
    for(int i=0;i<s;i++) L[i] = i;
    insert(11, 6, L);
    for(int i=0;i<s;i++) printf("%d %p\n", L[i], (void *)&L[i]);
    return 0;
}
I need to have a dynamic array, so I need to allocate the necessary amount of memory through a pointer. What makes me wonder about which is a good solution, is that C++ has the ability to do something like:
int * p = new int[6];
which allocates the necessary array. What I need is that, afterwards, I want to grow some parts of this array. A (flawed) example:
int *p1 = &p[0];
int *p2 = &p[2];
int *p3 = &p[4];
// delete positions p[2], p[3]
delete [] p2;
// create new array
p2 = new int[4];
I don't know how to achieve this behavior.
EDIT: std::vector does not work for me since I need the time of insertion/deletion of k elements to be proportional to the number k and not to the number of elements stored in the std::vector.
Using pointers, in the general case, I would point to the start of any non-contiguous region of memory and I would keep track of how many elements it stores. Conceptually, I would fragment the large array into many small ones, not necessarily in contiguous space in memory (the deletion creates "holes", while the allocation does not necessarily "fill" them).
You achieve this behavior by using std::vector:
std::vector<int> v(6); // create a vector with six elements.
v.erase(v.begin() + 2); // erase the element at v[2]
v.insert(v.begin() + 2, 4, 0); // insert four new elements starting at v[2]
Really, any time you want to use a dynamically allocated array, you should first consider using std::vector. It's not the solution to every problem, but along with the rest of the C++ standard library containers it is definitely the solution to most problems.
You should look into STL containers in C++, for example vector has pretty much the functionality you want.
I'd advise against doing this on your own. Look up std::vector for a reasonable starting point.
Another option, besides std::vector, is std::deque, which works in much the same way but is a little more efficient at inserting chunks into the middle. If that's still not good enough for you, you might get some mileage from using a collection of collections. You'll have to do a little bit more work to get random access working (perhaps writing a class to wrap the whole thing).
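For what it's worth, a std::deque version of the earlier vector example looks almost identical; only the cost profile of the middle insertions changes:
#include <deque>

std::deque<int> d(6);                   // six elements
d.erase(d.begin() + 2);                 // erase the element at d[2]
d.insert(d.begin() + 2, 4, 0);          // insert four new elements starting at d[2]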