Which container should I use for random access, cheap addition and removal (without de/allocation), with a known maximum size? - c++

I need the lightest container that can store up to 128 unsigned ints.
It must let me add, edit and remove each element by index quickly, without allocating new memory every time (I already know the maximum is 128).
Such as:
add int 40 at index 4 (1/128 item used)
add int 36 at index 90 (2/128 item used)
edit to value 42 the element at index 4
add int 36 at index 54 (3/128 item used)
remove element with index 90 (2/128 item used)
remove element with index 4 (1/128 item used)
... and so on. Every time, I want to iterate over only the elements actually added to the container, not all of them while checking which are null.
During this process, as I said, it must not allocate/reallocate memory: I'm working on an app that manages audio data, and touching the heap means a glitch every time.
Which container would be the right candidate?
It sounds like an "indexes" queue.

As I understand the question, you have two operations
Insert/replace element value at cell index
Delete element at cell index
and one predicate
Is cell index currently occupied?
This is an array and a bitmap. When you insert/replace, you stick the value in the array cell and set the bitmap bit. When you delete, you clear the bitmap bit. When you ask, you query the bitmap bit.
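A minimal sketch of that idea, with std::bitset standing in for the bitmap (the names are illustrative):

#include <bitset>
#include <cstddef>

struct SlotArray {
    unsigned int values[128] = {};
    std::bitset<128> occupied;

    void set(std::size_t i, unsigned int v) { values[i] = v; occupied.set(i); }
    void erase(std::size_t i)               { occupied.reset(i); }
    bool contains(std::size_t i) const      { return occupied.test(i); }
};

No allocation ever happens after construction, and all three operations are O(1).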

You can just use std::vector<int> and do vector.reserve(128); to keep the vector from allocating memory. This doesn't allow you to keep track of particular indices though.
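For instance, a sketch of the reserve pattern (the capacity is allocated once, up front):

#include <vector>

std::vector<int> v;
v.reserve(128);              // the only allocation
for (int i = 0; i < 128; ++i)
    v.push_back(i);          // guaranteed not to reallocate while size <= 128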
If you need to keep track of an 'index' you could use std::vector<std::pair<int, int>>. That doesn't give you constant-time access by that index, though.

If you only need cheap setting and erasing of values, just use an array. You can keep track of which cells are used by marking them in another array (or bitmap), or by reserving one value (e.g. 0 or -1) as an "unused" marker.
Of course, if you need to iterate over all used cells, you need to scan the whole array. But that's a tradeoff you need to make: either do more work during adding and erasing, or do more work during a search. (Note that an .insert() in the middle of a vector<> will move data around.)
In any case, 128 elements is so few, that a scan through the whole array will be negligible work. And frankly, I think anything more complex than a vector will be total overkill. :)
Roughly:
unsigned data[128] = {0}; // values
unsigned used[128] = {0}; // 1 if the cell is occupied

// set a value
data[index] = newvalue;
used[index] = 1;

// unset a value
data[index] = used[index] = 0;

// check whether a cell is used
if (used[index]) { /* do something */ } else { /* do something else */ }

I'd suggest a tandem of vectors, one to hold the active indices, the other to hold the data:
#include <algorithm>
#include <cstddef>
#include <vector>

class Container
{
    std::vector<size_t> indices;  // sorted "world" indices of the occupied cells
    std::vector<int> data;        // values, kept parallel to `indices`

    size_t index_worldToData(size_t worldIndex) const
    {
        auto it = std::lower_bound(indices.begin(), indices.end(), worldIndex);
        return it - indices.begin();
    }

public:
    Container()
    {
        indices.reserve(128);
        data.reserve(128);
    }

    int& operator[] (size_t worldIndex)
    {
        return data[index_worldToData(worldIndex)];
    }

    void addElement(size_t worldIndex, int element)
    {
        auto dataIndex = index_worldToData(worldIndex);
        indices.insert(indices.begin() + dataIndex, worldIndex);
        data.insert(data.begin() + dataIndex, element);
    }

    void removeElement(size_t worldIndex)
    {
        auto dataIndex = index_worldToData(worldIndex);
        indices.erase(indices.begin() + dataIndex);
        data.erase(data.begin() + dataIndex);
    }

    class iterator
    {
        Container *cnt;
        size_t dataIndex;
    public:
        iterator(Container *c, size_t i) : cnt(c), dataIndex(i) {}
        int& operator* () const { return cnt->data[dataIndex]; }
        iterator& operator++ () { ++dataIndex; return *this; }
        bool operator!= (const iterator &other) const { return dataIndex != other.dataIndex; }
    };

    iterator begin() { return iterator(this, 0); }
    iterator end() { return iterator(this, indices.size()); }
};
(Disclaimer: code not touched by compiler, preconditions checks omitted)
This one has logarithmic time element access, linear time insertion and removal, and allows iterating over non-empty elements.

You could use a doubly-linked list and an array of node pointers.
Preallocate 128 list nodes and keep them on freelist.
Create an empty itemlist.
Allocate an array of 128 node pointers called items.
To insert at i: pop the head node from freelist, add it to itemlist, set items[i] to point at it.
To access/change a value, use items[i]->value
To delete at i, remove the node pointed to by items[i], reinsert it in 'freelist'
To iterate, just walk itemlist
Everything is O(1) except iteration, which is O(Nactive_items). Only caveat is that iteration is not in index order.
Freelist can be singly-linked, or even an array of nodes, as all you need is pop and push.
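A minimal sketch of this scheme, assuming fixed names for the pool and the two lists (preconditions, e.g. inserting into an already occupied index, are not checked):

#include <cstddef>

struct Node {
    unsigned value;
    Node *prev, *next;
};

// Hypothetical fixed pool: 128 nodes, a freelist, an itemlist, and the
// per-index pointers. Linking and unlinking are O(1) pointer surgery.
struct IndexedList {
    Node pool[128];
    Node *freelist = nullptr;   // singly linked through `next`
    Node *itemlist = nullptr;   // doubly linked, unordered
    Node *items[128] = {};      // items[i] == nullptr when index i is empty

    IndexedList() {
        for (Node &n : pool) { n.next = freelist; freelist = &n; }
    }

    void insert(std::size_t i, unsigned v) {
        Node *n = freelist; freelist = freelist->next;  // pop from freelist
        n->value = v;
        n->prev = nullptr; n->next = itemlist;          // push onto itemlist
        if (itemlist) itemlist->prev = n;
        itemlist = n;
        items[i] = n;
    }

    void erase(std::size_t i) {
        Node *n = items[i];
        if (n->prev) n->prev->next = n->next; else itemlist = n->next;
        if (n->next) n->next->prev = n->prev;
        n->next = freelist; freelist = n;               // push back onto freelist
        items[i] = nullptr;
    }

    template <typename F>
    void forEach(F f) {                                 // walks only active nodes
        for (Node *n = itemlist; n; n = n->next) f(n->value);
    }
};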

#include <set>

class Container {
private:
    std::set<size_t> indices;
    unsigned int buffer[128];
public:
    void set_elem(const size_t index, const unsigned int element) {
        buffer[index] = element;
        indices.insert(index);
    }
    // and so on -- iterate over the indices if necessary
};

There are multiple approaches that you can use; I will cite them in order of the effort they require.
The most affordable solution is to use the Boost non-standard containers, of particular interest is flat_map. Essentially, a flat_map offers the interface of a map over the storage provided by a dynamic array.
You can call its reserve member at the start to avoid memory allocation afterward.
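For instance, a sketch (assuming the Boost.Container header layout; the index values mirror the question's example):

#include <boost/container/flat_map.hpp>
#include <cstddef>
#include <iostream>

int main() {
    boost::container::flat_map<std::size_t, unsigned> m;
    m.reserve(128);  // one allocation up front; none afterwards while size <= 128
    m[4] = 40;       // insert at index 4
    m[90] = 36;
    m[4] = 42;       // edit in place
    m.erase(90);     // remove: shifts elements, but no de/allocation
    for (auto const &kv : m)              // visits only occupied indices
        std::cout << kv.first << " -> " << kv.second << '\n';
}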
A slightly more involved solution is to code your own memory allocator.
The interface of an allocator is relatively easy to deal with, so coding one is quite simple. Create a pool allocator which never releases any element, warm it up (allocate 128 elements), and you are ready to go: it can be plugged into any collection to make it memory-allocation-free.
Of particular interest, here, is of course std::map.
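A bare-bones sketch of the idea, under simplifying assumptions (one static arena per element type, nothing is ever returned, no thread-local pools; the names are illustrative, not a production allocator):

#include <cstddef>
#include <functional>
#include <map>
#include <new>

template <typename T>
struct PoolAllocator {
    using value_type = T;

    PoolAllocator() = default;
    template <typename U> PoolAllocator(const PoolAllocator<U>&) {}

    T* allocate(std::size_t n) {
        static alignas(T) unsigned char pool[128 * 1024]; // fixed arena
        static std::size_t used = 0;
        std::size_t bytes = n * sizeof(T);
        if (used + bytes > sizeof(pool)) throw std::bad_alloc{};
        T* p = reinterpret_cast<T*>(pool + used);
        used += bytes;
        return p;
    }
    void deallocate(T*, std::size_t) {}  // never released back
};

template <typename T, typename U>
bool operator==(const PoolAllocator<T>&, const PoolAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const PoolAllocator<T>&, const PoolAllocator<U>&) { return false; }

// Plugged into std::map, as suggested:
using Map = std::map<int, int, std::less<int>,
                     PoolAllocator<std::pair<const int, int>>>;

Because deallocate is a no-op here, a map that repeatedly inserts and erases will eventually exhaust the arena; a real pool would keep a freelist of returned blocks and reuse them.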
Finally, there is the do-it-yourself road. Much more involved, quite obviously: the number of operations supported by standard containers is just... huge.
Still, if you have the time or can live with only a subset of those operations, then this road has one undeniable advantage: you can tailor the container specifically to your needs.
Of particular interest here is the idea of having a std::vector<boost::optional<int>> of 128 elements... except that since this representation is quite space inefficient, we use the Data-Oriented Design to instead make it two vectors: std::vector<int> and std::vector<bool>, which is much more compact, or even...
#include <bitset>
#include <cstddef>

struct Container {
    static std::size_t const Size = 128;
    int array[Size];
    std::bitset<Size> marker;
};
which is both compact and allocation-free.
Now, iterating requires iterating the bitset for present elements, which might seem wasteful at first, but said bitset is only 16 bytes long so it's a breeze! (because at such scale memory locality trumps big-O complexity)
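Iteration over the present elements then looks something like this (sketch):

int sumPresent(const Container &c) {
    int sum = 0;
    for (std::size_t i = 0; i < Container::Size; ++i)
        if (c.marker[i])           // test the bit, touch the value only if set
            sum += c.array[i];
    return sum;
}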

Why not use std::map<int, int>? It provides access by key and is sparse.

If a vector (pre-reserved) is not handy enough, look into Boost.Container for the various "flat" varieties of indexed collections. They store everything in a vector and need no memory manipulation, but add a layer on top to make it a set or map, indexed by which elements are present and able to tell which are not.

Related

C++: std::vector with space reserving for both ends

Just some thoughts:
I wanted to erase x elements from the beginning of a std::vector with size n > x. Since a vector uses an array internally, this could be as easy as moving the pointer behind vector.begin() forward to the first element that is kept. Instead, erase shifts all elements in the array so that the first kept element ends up at index 0, making the erase operation take much more time than it could.
Furthermore, if the valid 'zone' of the internal array were really controlled by just some start and end indices/pointers in the vector structure, there would also be the option of reserving space in front of the first element. E.g., a vector is initialized and 20 slots are reserved at the end and 10 at the beginning. Internally, an array of size 30 (or 32) is created, where the start index/pointer points at the 11th slot of the internal array, allowing new elements to be added at the front of the 'vector' in constant time.
Well, my point is, I think such a data structure would be somewhat useful, at least for my purposes. I'm pretty sure someone has already thought of this and implemented it. So I want to ask: what is the data structure I'm describing called? If it exists, I'd love to use it. I think it is not a doubly-linked list, since there every element is a struct containing the element value and additional pointers to its neighbors (to my knowledge).
EDIT: And yes, such a structure would probably use more memory than necessary, especially after erasing elements from the beginning, because the internal array keeps its initial size. But memory isn't a big issue anymore for most problems, and there could be a (time-expensive) 'memory-optimize' operation that creates a new, smaller array, copies over all the old values, and deletes the old internal array.
Expanding on @Kerrek SB's comment, boost::circular_buffer<> does, I think, what you need. For example:
#include <iostream>
#include <boost/circular_buffer.hpp>

int main()
{
    boost::circular_buffer<int> cb(3);
    cb.push_back(1);
    cb.push_back(2);
    cb.push_back(3);
    for (auto i : cb) {
        std::cout << i << std::endl;
    }
    // Increase the capacity to hold two more items at the back
    cb.set_capacity(5);
    cb.push_back(4);
    cb.push_back(5);
    for (auto i : cb) {
        std::cout << i << std::endl;
    }
    // Increase the capacity to hold two more items at the front
    cb.rset_capacity(7);
    cb.push_front(0);
    cb.push_front(-1);
    for (auto i : cb) {
        std::cout << i << std::endl;
    }
}
TBH I have not looked at the implementation, so I cannot comment on whether it moves data around (I'd be highly surprised), but if you pull down the source, take a quick peek to satisfy yourself if performance is a concern...
EDIT: A quick look at the code reveals that the push_xxx operations do indeed not move data around; the xxx_capacity operations, however, do result in a move/copy. To avoid that, ensure the ring has enough capacity at the start and it will work as you wish...

hashtable needs lots of memory

I've declared and defined the following HashTable class. Note that I needed a hashtable of hashtables, so my HashEntry struct contains a HashTable pointer. The public part is not a big deal: it has the traditional hash table functions, so I removed them for simplicity.
enum Status{ACTIVE, DELETED, EMPTY};
enum Type{DNS_ENTRY, URL_ENTRY};

class HashTable{
private:
    struct HashEntry{
        std::string key;
        Status current_status;
        std::string ip;
        int access_count;
        Type entry_type;
        HashTable *table;
        HashEntry(
            const std::string &k = std::string(),
            Status s = EMPTY,
            const std::string &u = std::string(),
            const int &a = int(),
            Type e = DNS_ENTRY,
            HashTable *t = NULL
        ): key(k), current_status(s), ip(u), access_count(a), entry_type(e), table(t){}
    };
    std::vector<HashEntry> array;
    int currentSize;
public:
    HashTable(int size = 1181, int csz = 0): array(size), currentSize(csz){}
};
I am using quadratic probing and I double the size of the vector in my rehash function when I hit array.size()/2. The following list is used when a larger table size is needed.
int a[16] = {49663, 99907, 181031, 360461,...}
My problem is that this class consumes a lot of memory. I've just profiled it with massif and found out that it needs 33MB (33 million bytes!) for 125000 insertions. To be clear:
1 insertion -> 47352 bytes
8 insertions -> 48376 bytes
512 insertions -> 76.27KB
1000 insertions -> 2MB (array size increased to 49663 here)
27000 insertions -> 8MB (array size increased to 99907 here)
64000 insertions -> 16MB (array size increased to 181031 here)
125000 insertions -> 33MB (array size increased to 360461 here)
These may be unnecessary but I just wanted to show you how memory usage changes with the input. As you can see, when rehashing is done, memory usage doubles. For example, our initial array size was 1181. And we have just seen that 125000 elements -> 33MB.
To debug the problem, I changed the initial size to 360461. Now 127000 insertions do not require rehashing. And I see that 20MB of memory is used with this initial value. That is still huge, but I think it suggests there is a problem with rehashing. The following is my rehash function.
void HashTable::rehash(){
    std::vector<HashEntry> oldArray = array;
    array.resize(nextprime(array.size()));
    for(int j = 0; j < array.size(); j++){
        array[j].current_status = EMPTY;
    }
    for(int i = 0; i < oldArray.size(); i++){
        if(oldArray[i].current_status == ACTIVE){
            insert(oldArray[i].key);
            int pos = findPos(oldArray[i].key);
            array[pos] = oldArray[i];
        }
    }
}

int nextprime(int arraysize){
    int a[16] = {49663, 99907, 181031, 360461, 720703, 1400863, 2800519, 5600533, 11200031, 22000787, 44000027};
    int i = 0;
    while(arraysize >= a[i]){i++;}
    return a[i];
}
This is the insert function used in rehashing and everywhere else.
bool HashTable::insert(const std::string &k){
    int currentPos = findPos(k);
    if(isActive(currentPos)){
        return false;
    }
    array[currentPos] = HashEntry(k, ACTIVE);
    if(++currentSize > array.size() / 2){
        rehash();
    }
    return true;
}
What am I doing wrong here? Even if it's caused by rehashing, when no rehashing is done it is still 20MB and I believe 20MB is way too much for 100k items. This hashtable is supposed to contain like 8 million elements.
The fact that 360,461 HashEntry objects take 20 MB is hardly surprising. Did you try looking at sizeof(HashEntry)?
Each HashEntry includes two std::strings, a pointer, and three ints. As the old joke has it, it's not easy to answer the question "How long is a string?", in this case because there is a large variety of string implementations and optimizations, so you might find that sizeof(std::string) is anywhere between 4 and 32 bytes. (It would only be 4 bytes on a 32-bit architecture.) In practice, a string requires three pointers plus the string data itself, unless it happens to be empty. If sizeof(std::string) is the same as sizeof(void*), then you've probably got a not-too-recent GNU standard library, in which the std::string is an opaque pointer to a block containing two pointers, a reference count, and the string data. If sizeof(std::string) is 32 bytes, then you might have a recent GNU standard library implementation in which there is a bit of extra space in the string structure for the short-string optimization. See the answer to Why does libc++'s implementation of std::string take up 3x memory as libstdc++? for some measurements. Let's just say 32 bytes per string and ignore the details; it won't be off by much.
So two strings (32 bytes each) plus a pointer (8 bytes) plus three ints (another 12 bytes) and four bytes of padding because one of the ints is between two 8-byte aligned objects, and that's a total of 88 bytes per HashEntry. And if you have 360,461 hash entries, that would be 31,720,568 bytes, about 30 MB. The fact that you're "only" using 20MB is probably because you're using the old GNU standard library, which optimizes empty strings to a single pointer, and the majority of your strings are empty strings because half the slots have never been used.
Now, let's take a look at rehash. Reduced to its essentials:
void rehash() {
    std::vector<HashEntry> tmp = array;  /* Copy the entire array */
    array.resize(new_size());            /* Internally does another copy */
    for (auto const& entry : tmp)
        if (entry.used()) array.insert(entry); /* Yet another copy */
}
At peak, we had two copies of the smaller array as well as the new big array. Even if the new array is only 20 MB, it's not surprising that peak memory usage was almost twice that. (Indeed, this is again surprisingly small, not surprisingly big. Possibly it was not actually necessary to change the address of the new vector because it was at the end of the current allocated memory space, which could just be extended.)
Note that we did two copies of all that data, and array.resize() potentially did another one. Let's see if we can do better:
void rehash() {
    std::vector<HashEntry> tmp(new_size()); /* Make an array of default objects */
    for (auto const& entry : array)
        if (entry.used()) tmp.insert(entry); /* Copy into the new array */
    std::swap(tmp, array);                   /* Not a copy, just swap three pointers */
}
This way, we only do one copy. Instead of a (possible) internal copy by resize, we do a bulk construction of the new elements, which should be similar. (It's just zeroing out the memory.)
Also, in the new version we only copy the actual strings once each, instead of twice each, which is the fiddliest part of the copy and thus probably quite a large saving.
Proper string management could reduce that overhead further. rehash doesn't actually need to copy the strings, since they are not changed. So we could keep the strings elsewhere, say in a vector of strings, and just store the index into that vector in the HashEntry. Since you are not expecting to hold billions of strings, only millions, the index could be a four-byte int. By also shuffling the HashEntry fields around and reducing the enums to a byte instead of four bytes (in C++11, you can specify the underlying integer type of an enum), the HashEntry could be reduced to 24 bytes, and there wouldn't be a need to leave space for as many string descriptors.
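To illustrate, a sketch of such a compacted entry (field names mirror the original; the side vectors of strings and the 4-byte indices into them are assumptions of this sketch):

#include <cstdint>

enum Status : std::uint8_t { ACTIVE, DELETED, EMPTY };
enum Type   : std::uint8_t { DNS_ENTRY, URL_ENTRY };

struct HashEntry {
    std::int32_t key_index;     // index into a std::vector<std::string> of keys
    std::int32_t ip_index;      // index into a std::vector<std::string> of IPs
    std::int32_t access_count;
    std::int32_t table_index;   // index of the sub-table instead of a pointer
    Status current_status;
    Type entry_type;
};
// 4*4 bytes plus two one-byte enums, padded to 20 bytes on typical ABIs --
// well under the original ~88 bytes per entry.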
Since you are using open addressing, half your hash slots have to be empty. Since HashEntry is quite large, storing a full HashEntry in each empty slot is terribly wasteful.
You should store your HashEntry structs somewhere else and put HashEntry* in your hash table, or switch to chaining with a much denser load factor. Either one will reduce this waste.
Also, if you're going to move HashEntry objects around, swap instead of copying, or use move semantics so you don't have to copy so many strings. Be sure to clear out the strings in any entries you're no longer using.
Also, even though you say you need HashTables of HashTables, you don't really explain why. It's usually more efficient to use one hash table with efficiently represented compound keys if small hash tables are not memory-efficient.
I have changed my structure a little bit, just as you all suggested, but there is one thing that nobody noticed.
When rehashing/resizing is done, my rehash function calls insert. In that insert function I increment currentSize, which holds how many elements the hashtable has. So each time a resize happened, currentSize doubled while it should have stayed the same. I removed that line and wrote proper rehashing code, and now I think I'm okay.
I am using two different structs now, and the program consumes 1.6GB of memory for 8 million elements, which is what's expected given the multibyte strings and integers. That number was like 7-8GB before.

Sorting an array of valid and invalid numbers in c++ for an embedded system

I am writing a program in C++ that will be used with Windows Embedded Compact 7. I have heard that it is best not to dynamically allocate arrays when writing embedded code. I will be keeping track of between 0 and 50 objects, so I am initially allocating 50 objects.
Object objectList[50];
int activeObjectIndex[50];
static const int INVALID_INDEX = -1;
int activeObjectCount=0;
activeObjectCount tells me how many objects I am actually using, and activeObjectIndex tells me which objects I am using. If the 0th, 7th, and 10th objects were being used I would want activeObjectIndex = [0,7,10,-1,-1,-1,...,-1]; and activeObjectCount=3;
As different objects become active or inactive I would like activeObjectIndex list to remain ordered.
Currently I am just sorting the activeObjectIndex at the end of each loop in which the values might change.
First, is there a better way to keep track of objects (that may or may not be active) in an embedded system than what I am doing? If not, is there an algorithm I can use to keep the objects sorted each time I add or remove an active object? Or should I just periodically do a bubble sort or something to keep them in order?
You have a hard question, where the answer requires quite a bit of knowledge about your system. Without that knowledge, no answer I can give would be complete. However, 15 years of embedded design has taught me the following:
You are correct, you generally don't want to allocate objects during runtime. Preallocate all the objects, and move them to active/inactive queues.
Keeping things sorted is generally hard. Perhaps you don't need to. You don't mention it, but I'll bet you really just need to keep your Objects in "used" and "free" pools, and you're using the index to quickly find/delete Objects.
I propose the following solution. Change your object to the following:
class Object {
    Object *mNext, *mPrev;
public:
    Object() : mNext(this), mPrev(this) { /* etc. */ }

    void insertAfterInList(Object *p2) {
        mNext->mPrev = p2;
        p2->mNext = mNext;
        mNext = p2;
        p2->mPrev = this;
    }

    void removeFromList() {
        mPrev->mNext = mNext;
        mNext->mPrev = mPrev;
        mNext = mPrev = this;
    }

    Object* getNext() {
        return mNext;
    }

    bool hasObjects() {
        return mNext != this;
    }
};
And use your Objects:
#include <cassert>

#define NUM_OBJECTS (50)

Object gObjects[NUM_OBJECTS], gFree, gUsed;

void InitObjects() {
    for(int i = 0; i < NUM_OBJECTS; ++i) {
        gFree.insertAfterInList(&gObjects[i]);
    }
}

Object* GetNewObject() {
    assert(gFree.hasObjects());
    Object *obj = gFree.getNext();
    obj->removeFromList();
    gUsed.insertAfterInList(obj);
    return obj;
}

void ReleaseObject(Object *obj) {
    obj->removeFromList();
    gFree.insertAfterInList(obj);
}
Edited to fix a small glitch. Should work now, although not tested. :)
The overhead of a std::vector is very small. The problem you can have is that dynamic resizing will allocate more memory than needed. However, as you have 50 elements, this shouldn't be a problem at all. Give it a try, and change it only if you see a strong impact.
If you cannot/do not want to remove unused objects from a std::vector, you can maybe add a boolean to your Object that indicates if it is active? This won't require more memory than using activeObjectIndex (maybe even less depending on alignment issues).
To sort the data with such a boolean (inactive elements at the end), write a comparison function:
bool compare(const Object & a, const Object & b) {
    if(a.active && !b.active) return true;
    else return false;
}

std::sort(objectList, objectList + 50, &compare);           // if you use an array
std::sort(objectList.begin(), objectList.end(), &compare);  // if you use std::vector
If you want to sort using activeObjectIndex it will be more complicated.
If you want to use a structure that is always ordered, use std::set. However it will require more memory (but for 50 elements, it won't be an issue).
Ideally, implement the following function:
bool operator<(const Object & a, const Object & b) {
    if(a.active && !b.active) return true;
    else return false;
}
This will allow you to use std::sort(objectList.begin(), objectList.end()) directly, or to declare a std::set that will stay sorted.
One way to keep track of active/inactive is to have the active Objects on a doubly linked list. When an object goes from inactive to active, add it to the list; when it goes from active to inactive, remove it from the list. You can add these members to Object:
Object * next, * prev;
so this does not require memory allocation.
If no dynamic memory allocation is allowed, I would use a plain C array or std::array plus an index that points one past the last object. Objects are always kept in sorted order.
Addition is done by inserting the new object into the correct position of the sorted list; to find the insert position, lower_bound or find_if can be used (for 50 elements, the second will probably be faster). Removal is similar. A sketch follows below.
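A sketch of that insertion, using ints in place of the objects (lower_bound finds the slot, copy_backward does the shift; count < 50 is assumed):

#include <algorithm>
#include <array>

std::array<int, 50> ids;   // first `count` slots hold the sorted, active values
int count = 0;

// Insert `value`, keeping the used prefix sorted. Assumes count < 50.
void insertSorted(int value) {
    auto pos = std::lower_bound(ids.begin(), ids.begin() + count, value);
    std::copy_backward(pos, ids.begin() + count, ids.begin() + count + 1);
    *pos = value;
    ++count;
}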
You should not worry about keeping the list sorted, as a method that scans the list of indices for the active ones would be O(N) and, in your particular case, amortized to O(1), since your array seems to be small enough for this little extra verification.
You could maintain the index of the last element checked, until it reaches the limit:
int next(const int last) {
    for (int i = last + 1; i < MAX_ARRAY_SIZE; i++) {
        if (activeObjectIndex[i] != -1) {
            return i;
        }
    }
    return -1; // no further active element
}
However, if you really want to have a side index, you can triple the size of the index array and embed a doubly linked list in it, three slots per entry:
int activeObjectIndex[MAX_ARRAY_SIZE * 3] = {-1}; // fill the whole array with -1 before use
// activeObjectIndex[i]     holds the element id
// activeObjectIndex[i + 1] holds the position of the previous element
// activeObjectIndex[i + 2] holds the position of the next element

Map insertion failure: cannot overwrite first element?

static std::map<unsigned int, CPCSteps> bestKvariables;

inline void copyKBestVar(MaxMinVarMap& vMaxMinAll, size_t& K, std::vector<unsigned int>& temp)
{
    // copy top k variables and store in a vector.
    MaxMinVarMap::reverse_iterator iter1;
    size_t count;
    for (iter1 = vMaxMinAll.rbegin(), count = 0; iter1 != vMaxMinAll.rend() && count <= K; ++iter1, ++count)
    {
        temp.push_back(iter1->second);
    }
}

void myAlgo::phase1(unsigned int& vTarget)
{
    CPCSteps KBestForT; // To store k best variables for only the target variable, finally put in a global storage
    KBestForT.reserve(numVars);
    std::vector<unsigned int> tempKbest;
    tempKbest.reserve(numVars);
    .......
    .......
    copyKBestVar(mapMinAssoc, KBestSize, tempKbest); // Store top k variables as a k-best for this CPC variable for each step
    KBestForT.push_back(tempKbest);
    .....
    .....
    bestKvariables.insert(make_pair(vTarget, KBestForT)); // Store k best in a Map
    .....
    ....
}
The problem: the map "bestKvariables" is not overwriting the first element, but it keeps updating the rest of the elements. I tried to debug it, and the problem I found is in the insert command.
Thanks in advance for helping.
Another question: can I reserve the size of a map (like vector.reserve(..)) at the beginning to avoid the insertion cost?
Sorry for providing insufficient information.
I mean, if there are four vTarget variables 1, 2, 3, 4, I do some statistical calculations for each variable.
There is more than one iteration over these variables, and I would like to store the top k results for each variable in a map, to use in the next iteration.
I saw that the first inserted variable (with key unsigned int "vTarget") is not updated on further iterations (it keeps the value inserted on the first iteration).
But the other variables (keys inserted after the first) do get updated.
Another question: can I reserve the size of a map (like vector.reserve(..)) at the beginning to avoid the insertion cost?
std::map does not have a reserve() function, unlike std::vector.
Usually, the Standard library provides such functions for containers where they ensure good performance or provide a means to achieve it.
For a container like std::vector, reallocation of its storage can be a very costly operation: a simple call to push_back() can cause every element in the std::vector to be copied to a newly allocated block of memory. A call to reserve() can avoid these unnecessary allocations and copy operations, hence it is provided for std::vector.
std::map never needs to copy all of the existing/remaining elements simply because a new element was inserted or removed, hence it does not provide any such function.
Though the Standard does not specify how std::map should be implemented, the expected behavior and the desired complexity ensure that most implementations implement it as a tree, unlike std::vector, which needs its elements allocated in contiguous memory.
map::insert isn't supposed to update/overwrite anything, just insert not-yet-present elements. Use operator[] for updating, it also inserts elements when the specified key is not present yet.
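A small demonstration of the difference:

#include <iostream>
#include <map>
#include <utility>

int main()
{
    std::map<unsigned int, int> m;
    m.insert(std::make_pair(1u, 10)); // inserts {1, 10}
    m.insert(std::make_pair(1u, 20)); // key 1 already present: no effect
    std::cout << m[1] << '\n';        // prints 10
    m[1] = 20;                        // operator[] overwrites
    std::cout << m[1] << '\n';        // prints 20
}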

inserting into the middle of an array

I have an array int *playerNum which stores the numbers of all the players in the team. Each slot, e.g. playerNum[1], represents a position on the team. If I wanted to add a new player for a new position on the team, that is, insert a new element into the array somewhere near the middle, how would I go about doing this?
At the moment I was thinking: memcpy everything up to the position you want to insert the player at into a new array, insert the new player, and then copy over the rest of it?
(I have to use an array.)
If you're using C++, I would suggest not using memcpy or memmove but instead using the copy or copy_backward algorithms. These will work on any data type, not just plain old integers, and most implementations are optimized enough that they will compile down to memmove anyway. More importantly, they will work even if you change the underlying type of the elements in the array to something that needs a custom copy constructor or assignment operator.
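For example, a sketch of such an insertion with copy_backward (the playerCount parameter and the one-free-slot precondition are assumptions of this sketch):

#include <algorithm>

// Shift [pos, playerCount) one slot to the right, then drop the new number in.
// Assumes the array has room for at least playerCount + 1 elements.
void insertPlayer(int *playerNum, int playerCount, int pos, int newNumber)
{
    std::copy_backward(playerNum + pos, playerNum + playerCount,
                       playerNum + playerCount + 1);
    playerNum[pos] = newNumber;
}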
If you have to use an array, after having made sure you have enough storage (using realloc if necessary), use memmove to shift the items from the insertion point to the end by one position, then save your new player at the desired location.
You can't use memcpy if the source and target areas overlap.
This will fail as soon as the objects in your array have non-trivial copy-constructors, and it's not idiomatic C++. Using one of the container classes is much safer (std::vector or std::list for instance).
Your solution using memcpy is correct (under the few assumptions mentioned by others).
However, since you are programming in C++, it is probably a better choice to use std::vector and its insert method. Note that insert takes an iterator, not an index:
std::vector<int> myvector(3, 100);          // {100, 100, 100}
myvector.insert(myvector.begin() + 1, 42);  // {100, 42, 100, 100}
An array occupies a contiguous block of memory; there is no function to insert an element in the middle. You can create a new array whose size is larger than the original's by one, then copy the original array into the new one along with the new element:
int insertPos = arSize / 2;            // e.g. insert in the middle
// copy the elements before the insertion point
for (int i = 0; i < insertPos; i++)
    newarray[i] = ar[i];
// place the new element
newarray[insertPos] = newelement;
// copy the remaining elements, shifted one slot to the right
for (int i = insertPos; i < arSize; i++)
    newarray[i + 1] = ar[i];
If you use the STL, things become easier: use std::list.
As you're talking about an array and "insert", I assume it is a sorted array. You don't necessarily need a second array, provided that the capacity N of your existing array is large enough to store more entries (N > n, where n is the number of current entries). You can move the entries at indices k through n-1 (zero-based) to k+1 through n, where k is the desired insert position, insert the new element at index k, and increase n by one. If the array is not large enough to begin with, you can follow your proposed approach, or just reallocate a new array of larger capacity N' and copy the existing data before applying the actual insert operation described above.
BTW: As you're using C++, you could easily use std::vector.
While it is possible to use arrays for this, C++ has better solutions to offer. For starters, try std::vector, which is a decent enough general-purpose container based on a dynamically-allocated array. It behaves exactly like an array in many cases.
Looking at your problem, however, there are two downsides to arrays or vectors:
Indices have to be 0-based and contiguous; you cannot remove elements from the middle without losing the key/value association for everything after the removed element. So if you remove the player at position 4, the player at position 9 will move to position 8.
Random insertion and deletion (that is, anywhere except the end) is expensive - O(n), that is, execution time grows linearly with array size. This is because every time you insert or delete, a part of the array needs to be moved.
If the key/value thing isn't important to you, and insertion/deletion isn't time critical, and your container is never going to be really large, then by all means, use a vector. If you need random insertion/deletion performance, but the key/value thing isn't important, look at std::list (although you won't get random access then, that is, the [] operator isn't defined, as implementing it would be very inefficient for linked lists; linked lists are also very memory hungry, with an overhead of two pointers per element). If you want to maintain key/value associations, std::map is your friend.
Losing the tail:
#include <stdio.h>
#include <string.h>

#define s 10
int L[s];

// Shift the elements from position p onward one slot to the right
// (the last element falls off the end), then store v at position p.
void insert(int v, int p, int *a)
{
    memmove(a + p + 1, a + p, (s - p - 1) * sizeof(int));
    *(a + p) = v;
}

int main()
{
    for(int i = 0; i < s; i++) L[i] = i;
    insert(11, 6, L);
    for(int i = 0; i < s; i++) printf("%d %p\n", L[i], (void*)&L[i]);
    return 0;
}