My function looks like this:
bool getPair(std::vector<std::vector<unsigned short>> Cards) {
    std::sort(Cards.begin(), Cards.end(), Cardsort);
    std::map<unsigned short, int> Counter;
    for (int i = 0; i < 6; i++)
        Counter[Cards[i][0]];
    for (const auto& val : Counter) {
        if (val.second == 2)
            return true;
    }
    return false;
}
I'm pretty sure I'm using std::map incorrectly. I basically have the vector set up like so:
{{2,0},{3,0},{4,1},{3,0},{4,0},{5,0},{6,0}}
where the first number represents the card's value and the second its suit. I realize now that an object might have made this problem less complicated, but now I'm trying to use std::map to count how many times each value shows up; if a value shows up two times, the function should return true (for the vector above it would return true because of the 3). I don't think I'm using std::map properly.
I want to see if Cards has more than one of the same value in Cards[i][0]; I do not care about duplicates in Cards[i][1].
Tested this and it works. The fix is highlighted:
#include <iostream>
#include <vector>
#include <map>
#include <algorithm>
using namespace std;

bool getPair(std::vector<std::vector<unsigned short>> Cards) {
    std::sort(Cards.begin(), Cards.end());
    std::map<unsigned short, int> Counter;
    for (size_t i = 0; i < Cards.size(); i++) // iterate over all cards, not a hard-coded 6
        Counter[Cards[i][0]]++; // ++++++++++++++++++ need to alter the value!
    for (const auto& val : Counter) {
        if (val.second >= 2) // >= 2, so three of a kind still counts as containing a pair
            return true;
    }
    return false;
}

int main() {
    // {{2,0},{3,0},{4,1},{3,0},{4,0},{5,0},{6,0}}
    std::vector<std::vector<unsigned short>> c = {{2,0},{3,0},{4,1},{3,0},{4,0},{5,0},{6,0}};
    std::cout << getPair(c);
    return 0;
}
Here's my suggestion.
Some remarks:
Why use two loops? You already have the map entry in hand when you increment it, so you can check for doubles (a.k.a. pairs) right in the counting loop. There is no need for a second pass, which makes this much cheaper.
I changed the vector parameter to const&. Passing such a container by value is a very bad idea; at least I can't see why it would be appropriate in this case.
I left out the sorting step, since I can't see what it's needed for; just reinsert it if necessary. Sorting is expensive.
You are right that std:: containers do not need explicit initialization: they are properly initialized, because the allocator calls the constructor of each new element, even for e.g. int. That's one reason int got a default-constructor syntax, so you can write things like auto a = int();.
Accessing a nonexistent key of a map simply creates it.
Using a set and counting would not yield better performance.
I think the code is pretty easy to read; here you are:
#include <iostream>
#include <vector>
#include <map>

bool getPair(const std::vector<std::vector<unsigned short>>& cards) {
    std::map<unsigned short, int> counts;
    for (const auto& n : cards) {
        if (++counts[n[0]] == 2)
            return true;
    }
    return false;
}

int main()
{
    std::vector<std::vector<unsigned short>> cards1 = {{2,0},{3,0},{4,1},{3,0},{4,0},{5,0},{6,0}};
    std::vector<std::vector<unsigned short>> cards2 = {{1,0},{2,0},{4,1},{3,0},{5,0},{7,0},{6,0}};
    std::cout << getPair(cards1) << "\n";
    std::cout << getPair(cards2) << "\n";
    return 0;
}
Edit:
Quote from the C++14 standard regarding access to nonexistent members of std::map, just for the sake of completeness:
23.4.4.3 map element access [map.access]
T& operator[](const key_type& x);
Effects: If there is no key equivalent to x in the map, inserts value_type(x, T()) into the map.
Requires: key_type shall be CopyInsertable and mapped_type shall be DefaultInsertable into
*this.
Returns: A reference to the mapped_type corresponding to x in *this.
Complexity: Logarithmic.
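As a quick illustration of that operator[] behavior (the function name lookup_creates_zero is mine, not from the answer):

```cpp
#include <cassert>
#include <map>

// operator[] on a missing key inserts value_type(key, T()),
// i.e. a value-initialized mapped value: for int, that is 0.
int lookup_creates_zero() {
    std::map<unsigned short, int> counts;
    int v = counts[7];          // key 7 did not exist; it is created holding 0
    assert(counts.size() == 1); // the mere access inserted the entry
    return v;
}
```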
First, you access entries in Counter but never actually do anything with them (also: why do you run to 6 instead of Cards.size()? Your vector has size 7, by the way. And why is there some kind of sort in there? You don't need it.):
std::map<unsigned short, int> Counter;
for (int i = 0; i < 6; i++)
    Counter[Cards[i][0]];
Strictly speaking, operator[] does value-initialize the new mapped value, so the int starts at 0 on any conforming implementation; the real problem is that the count is never incremented. You can rewrite the code as follows to make the counting explicit and work in all circumstances:
std::map<unsigned short, int> Counter;
for (int i = 0; i < (int)Cards.size(); i++)
{
    unsigned short card = Cards[i][0];
    auto itr = Counter.find(card);
    if (itr == Counter.end())
        Counter[card] = 1;
    else
        itr->second++;
}
I would recommend using std::set for this task:
std::set<unsigned short> Counter;
for (int i = 0; i < (int)Cards.size(); i++)
{
    unsigned short card = Cards[i][0];
    if (Counter.count(card) > 0)
    {
        return true;
    }
    Counter.insert(card);
}
return false;
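The same idea can be written more compactly, since insert() itself reports whether the value was already present. A sketch (getPairHashed is my name for it; it uses unordered_set for O(1) average lookups):

```cpp
#include <unordered_set>
#include <vector>

// insert() returns a pair whose .second is false when the value was
// already in the set, so the duplicate check and the insertion happen
// in a single call.
bool getPairHashed(const std::vector<std::vector<unsigned short>>& cards) {
    std::unordered_set<unsigned short> seen;
    for (const auto& card : cards)
        if (!seen.insert(card[0]).second) // already seen: we have a pair
            return true;
    return false;
}
```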
I have tried to access the elements of the vector<int> member tasks of my class Part.
#include <iostream>
#include <vector>
using namespace std;

class Part {
    vector<int> tasks;
public:
    void setTasks(void);
    void getTasks(void);
};

void Part::setTasks(void) {
    vector<int>::iterator it;
    int i = 1;
    for (it = this->tasks.begin(); it != this->tasks.end(); ++it)
    {
        *it = i;
        i = i + 1;
    }
}

void Part::getTasks(void) {
    vector<int>::iterator it;
    for (it = this->tasks.begin(); it != this->tasks.end(); ++it)
        cout << *it << "\t";
}

int main()
{
    Part one;
    one.setTasks();
    one.getTasks();
    return 0;
}
I am simply trying to set the values and print them, yet failing. There is no compilation error, but at run time nothing is printed to the terminal. Where is the error?
A default constructed vector has zero size, so the for loop in setTasks is never entered (since the begin() and end() iterators are the same at that point). If you set an initial size to the vector your code will work as intended. For instance, try adding the following at the beginning of setTasks
tasks.resize(10); // sets vector size to 10 elements, each initialized to 0
Another way to write that function would be
#include <numeric>
...
void Part::setTasks(void) {
    tasks.resize(10);
    std::iota(tasks.begin(), tasks.end(), 1); // requires C++11
}
You could also set the initial size of the vector in the default constructor of Part if you wanted to. In that case add the following public constructor
Part() : tasks(10)
{}
Yet another way to achieve setting the size upon construction would be
class Part{
vector<int> tasks = vector<int>(10); // requires C++11
The size of your vector is 0 when you call setTasks(), so the iterator never gets you into the for loop at all. You need to think about what exactly you want setTasks() to do: how many elements of the vector did you intend to set? You should either define your vector with that size, or use that many push_back calls to set the vector to the desired values.
Your vector is empty. Try giving it a size. For example, vector<int> tasks(10). See option 3 in this.
Alternatively, you can use a "back-insert" iterator (#include <iterator>), which internally calls std::vector::push_back, like this:
void Part::setTasks(void) {
    auto back_it = std::back_inserter(tasks);
    for (int i = 0; i < 10; ++i)
        *back_it++ = i;
}
This kind of iterator is especially useful in algorithms where the destination size is unknown. If you do know the size in advance, though, you should use reserve/resize or specify the size at construction, since pushing back into a vector can sometimes be slow due to reallocation.
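The same pattern combines naturally with standard algorithms; a small sketch (make_tasks is a hypothetical helper, and the init-capture lambda requires C++14):

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

// std::generate_n with a back-insert iterator fills the vector with
// 1..n; every generated value is push_back'ed into the destination,
// so no prior resize is needed.
std::vector<int> make_tasks(int n) {
    std::vector<int> tasks;
    tasks.reserve(n); // optional: avoids reallocations along the way
    std::generate_n(std::back_inserter(tasks), n,
                    [i = 1]() mutable { return i++; });
    return tasks;
}
```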
The program below (well, the lines after "from here") is a construct I have to use a lot.
I was wondering whether it is possible (perhaps using functions from the Eigen library)
to vectorize it, or otherwise make this program run faster.
Essentially, given a vector of float x, this construct has to recover the indices
of the sorted elements of x in an int vector SIndex. For example, if the first
entry of SIndex is 10, it means that the 10th element of x was the smallest element
of x.
#include <algorithm>
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <vector>
using std::vector;
using namespace std;
typedef pair<int, float> sortData;

bool sortDataLess(const sortData& left, const sortData& right) {
    return left.second < right.second;
}

int main() {
    int n = 20, i;
    float LO = -1.0, HI = 1.0;
    srand(time(NULL));
    vector<float> x(n);
    vector<float> y(n);
    vector<int> SIndex(n);
    vector<sortData> foo(n);
    for (i = 0; i < n; i++) x[i] = LO + (float)rand() / ((float)RAND_MAX / (HI - LO));
    // from here:
    for (i = 0; i < n; i++) foo[i] = sortData(i, x[i]);
    sort(foo.begin(), foo.end(), sortDataLess);
    for (i = 0; i < n; i++) {
        sortData bar = foo[i];
        y[i] = x[bar.first];
        SIndex[i] = bar.first;
    }
    for (i = 0; i < n; i++) std::cout << SIndex[i] << std::endl;
    return 0;
}
There's no getting around the fact that this is a sorting problem, and vectorization doesn't necessarily improve sorts very much. For example, the partition step of quicksort can do the comparison in parallel, but it then needs to select and store the 0–n values that passed the comparison. This can absolutely be done, but it starts throwing out the advantages you get from vectorization—you need to convert from a comparison mask to a shuffle mask, which is probably a lookup table (bad), and you need a variable-sized store, which means no alignment (bad, although maybe not that bad). Mergesort needs to merge two sorted lists, which in some cases could be improved by vectorization, but in the worst case (I think) needs the same number of steps as the scalar case.
And, of course, there's a decent chance that any major speed boost you get from vectorization will have already been done inside your standard library's std::sort implementation. To get it, though, you'd need to be sorting primitive types with the default comparison operator.
If you're worried about performance, you can easily avoid the last loop, though. Just sort a list of indices using your float array as a comparison:
struct IndirectLess {
    template <typename T>
    IndirectLess(T iter) : values(&*iter) {}

    bool operator()(int left, int right)
    {
        return values[left] < values[right];
    }

    float const* values;
};
int main() {
    // ...
    std::vector<int> SIndex;
    SIndex.reserve(n);
    for (int i = 0; i < n; ++i)
        SIndex.push_back(i); // i, not n: fill with the indices 0..n-1
    std::sort(SIndex.begin(), SIndex.end(), IndirectLess(x.begin()));
    // ...
}
Now you've only produced your list of sorted indices. You have the potential to lose some cache locality, so for really big lists it might be slower. At that point it might be possible to vectorize your last loop, depending on the architecture. It's just data manipulation, though—read four values, store 1st and 3rd in one place and 2nd and 4th in another—so I wouldn't expect Eigen to help much at that point.
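In C++11 the same indirect sort can be sketched with std::iota and a lambda instead of the hand-written functor (sorted_indices is my name for it):

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// After the sort, idx[k] is the index of the k-th smallest element
// of x, which is exactly what SIndex holds in the question.
std::vector<int> sorted_indices(const std::vector<float>& x) {
    std::vector<int> idx(x.size());
    std::iota(idx.begin(), idx.end(), 0); // fill with 0, 1, 2, ...
    std::sort(idx.begin(), idx.end(),
              [&x](int a, int b) { return x[a] < x[b]; });
    return idx;
}
```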
I have a sequence of double (with no duplicates) and I need to sort them. Is filling a vector and then sorting it faster than inserting the values into a set?
Is this question answerable without knowledge of the standard library implementation (and of the hardware the program will run on), just from the information provided by the C++ standard?
#include <vector>
#include <set>
#include <algorithm>
#include <random>
#include <iostream>
std::uniform_real_distribution<double> unif(0, 10000);
std::default_random_engine re;

int main()
{
    std::vector<double> v;
    std::set<double> s;
    std::vector<double> r;
    size_t sz = 10;

    for (size_t i = 0; i < sz; i++) {
        r.push_back(unif(re));
    }

    for (size_t i = 0; i < sz; i++) {
        v.push_back(r[i]);
    }
    std::sort(v.begin(), v.end());

    for (size_t i = 0; i < sz; i++) {
        s.insert(r[i]);
    }
    return 0;
}
From the C++ standard, all we can say is that they both have the same asymptotic complexity, O(n log n).
The set may be faster for large objects that can't be efficiently moved or swapped, since the objects don't need to be moved more than once. The vector may be faster for small objects, since sorting it involves no pointer updates and less indirection.
Which is faster in any given situation can only be determined by measuring (or a thorough knowledge of both the implementation and the target platform).
Using a vector may be faster because of data-cache factors, since the data operated on will (probably) sit in a more coherent memory region.
The vector will also have less memory overhead per value.
If you can, reserve() the vector's capacity before inserting the data, to minimize the work done while filling it with values.
In terms of complexity both should be the same, i.e. O(n log n).
The answer is not trivial. If your software has two main phases, first setup and then lookup, and lookup is used more than setup, the sorted vector can be faster, for two reasons:
the lower_bound function from <algorithm> is faster than the usual tree implementation behind <set>;
std::vector memory is allocated in fewer heap pages, so there will be fewer page faults while you are looking for an element.
If the usage is mixed, or lookup is not used more than setup, then <set> will be faster. More info: Scott Meyers, Effective STL, Item 23.
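The sorted-vector-as-set lookup mentioned above can be sketched like this (contains_sorted is a hypothetical helper; the vector must already be sorted, which is the "setup" phase):

```cpp
#include <algorithm>
#include <vector>

// Binary search over contiguous, already-sorted storage:
// std::lower_bound returns the first element not less than key.
bool contains_sorted(const std::vector<double>& v, double key) {
    auto it = std::lower_bound(v.begin(), v.end(), key);
    return it != v.end() && *it == key;
}
```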
Since you said sorting a range, you could use partial_sort instead of sorting the entire collection.
If we don't want to disturb the existing collection and want a new collection with sorted data and no duplicates, then std::set gives us a straightforward solution.
#include <vector>
#include <set>
#include <algorithm>
#include <iostream>
using namespace std;

int main()
{
    int arr[] = { 1, 3, 4, 1, 6, 7, 9, 6, 3, 4, 9 };
    vector<int> ints(begin(arr), end(arr));
    const int ulimit = 5;
    auto last = ints.begin();
    advance(last, ulimit);

    set<int> sortedset;
    sortedset.insert(ints.begin(), last);
    for_each(sortedset.begin(), sortedset.end(), [](int x) { cout << x << "\n"; });
}
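To show what partial_sort itself does, a brief sketch (smallest_k is my name for it; only the first k positions come out sorted, the tail is left in an unspecified order):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Puts the k smallest elements, in ascending order, into the first k
// positions; this is cheaper than sorting the whole range.
std::vector<int> smallest_k(std::vector<int> v, std::size_t k) {
    std::partial_sort(v.begin(), v.begin() + k, v.end());
    v.resize(k); // keep only the sorted prefix
    return v;
}
```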
I'm writing an inner loop that needs to place structs in contiguous storage. I don't know how many of these structs there will be ahead of time. My problem is that STL's vector initializes its values to 0, so no matter what I do, I incur the cost of the initialization plus the cost of setting the struct's members to their values.
Is there any way to prevent the initialization, or is there an STL-like container out there with resizeable contiguous storage and uninitialized elements?
(I'm certain that this part of the code needs to be optimized, and I'm certain that the initialization is a significant cost.)
Also, see my comments below for a clarification about when the initialization occurs.
SOME CODE:
void GetsCalledALot(int* data1, int* data2, int count) {
    int mvSize = memberVector.size();
    memberVector.resize(mvSize + count); // causes 0-initialization
    for (int i = 0; i < count; ++i) {
        memberVector[mvSize + i].d1 = data1[i];
        memberVector[mvSize + i].d2 = data2[i];
    }
}
std::vector must initialize the values in the array somehow, which means some constructor (or copy-constructor) must be called. The behavior of vector (or any container class) is undefined if you were to access the uninitialized section of the array as if it were initialized.
The best way is to use reserve() and push_back(), so that the copy-constructor is used, avoiding default-construction.
Using your example code:
struct YourData {
    int d1;
    int d2;
    YourData(int v1, int v2) : d1(v1), d2(v2) {}
};

std::vector<YourData> memberVector;

void GetsCalledALot(int* data1, int* data2, int count) {
    int mvSize = memberVector.size();

    // Does not initialize the extra elements
    memberVector.reserve(mvSize + count);

    // Note: consider using std::generate_n or std::copy instead of this loop.
    for (int i = 0; i < count; ++i) {
        // Copy construct using a temporary.
        memberVector.push_back(YourData(data1[i], data2[i]));
    }
}
The only problem with calling reserve() (or resize()) like this is that you may end up invoking the copy-constructor more often than you need to. If you can make a good prediction as to the final size of the array, it's better to reserve() the space once at the beginning. If you don't know the final size though, at least the number of copies will be minimal on average.
In the current version of C++, the inner loop is a bit inefficient as a temporary value is constructed on the stack, copy-constructed to the vectors memory, and finally the temporary is destroyed. However the next version of C++ has a feature called R-Value references (T&&) which will help.
The interface supplied by std::vector does not allow for another option, which is to use some factory-like class to construct values other than the default. Here is a rough example of what this pattern would look like implemented in C++:
template <typename T>
class my_vector_replacement {
    // ...
public:
    template <typename F>
    void push_back_using_factory(F factory) {
        // ... check the size of the array, and resize if needed.
        // Copy construct using placement new.
        new (arrayData + end) T(factory());
        end += sizeof(T);
    }

private:
    char* arrayData;
    size_t end; // Of initialized data in arrayData
};
// One of many possible implementations
struct MyFactory {
    MyFactory(int* p1, int* p2) : d1(p1), d2(p2) {}
    YourData operator()() const {
        return YourData(*d1, *d2);
    }
    int* d1;
    int* d2;
};

void GetsCalledALot(int* data1, int* data2, int count) {
    // ... Still will need the same call to a reserve() type function.
    // Note: consider using std::generate_n or std::copy instead of this loop.
    for (int i = 0; i < count; ++i) {
        // Copy construct using a factory
        memberVector.push_back_using_factory(MyFactory(data1 + i, data2 + i));
    }
}
Doing this does mean you have to create your own vector class, and in this case it also complicates what should have been a simple example. But there may be times when using a factory function like this is better, for instance if the insertion is conditional on some other value, and you would otherwise have to unconditionally construct some expensive temporary even when it wasn't actually needed.
In C++11 (and Boost) you can use the array version of unique_ptr to allocate an uninitialized array. This isn't quite an STL container, but it is still memory-managed and C++-ish, which will be good enough for many applications.
auto my_uninit_array = std::unique_ptr<mystruct[]>(new mystruct[count]);
C++0x adds a new member function template emplace_back to vector (which relies on variadic templates and perfect forwarding) that gets rid of any temporaries entirely:
memberVector.emplace_back(data1[i], data2[i]);
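A slightly fuller sketch of the emplace_back approach (the Data struct and fill function here are stand-ins I made up for the question's two-int struct):

```cpp
#include <vector>

struct Data {
    int d1, d2;
    Data(int a, int b) : d1(a), d2(b) {}
};

// emplace_back forwards its arguments straight to Data's constructor,
// so each element is built in place inside the vector's storage and
// no temporary Data object is created.
std::vector<Data> fill(const int* a, const int* b, int count) {
    std::vector<Data> out;
    out.reserve(count); // one allocation up front, no value-initialization
    for (int i = 0; i < count; ++i)
        out.emplace_back(a[i], b[i]);
    return out;
}
```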
To clarify the reserve() responses: you need to use reserve() in conjunction with push_back(). This way, the default constructor is not called for each element; the copy constructor is called instead. You still incur the penalty of setting up your struct on the stack and then copying it into the vector. On the other hand, it's possible that if you use
vect.push_back(MyStruct(fieldValue1, fieldValue2))
the compiler will construct the new instance directly in the memory that belongs to the vector. It depends on how smart the optimizer is; you need to check the generated code to find out.
You can use boost::noinit_adaptor to default-initialize new elements (which means no initialization at all for built-in types):
std::vector<T, boost::noinit_adaptor<std::allocator<T>>> memberVector;
As long as you don't pass an initializer to resize, it default-initializes the new elements.
So here's the problem: resize calls insert, which copy-constructs each newly added element from a default-constructed element. To get this down to zero cost you would need to write both your own default constructor AND your own copy constructor as empty functions. Doing that to your copy constructor is a very bad idea, because it will break std::vector's internal reallocation algorithms.
Summary: you're not going to be able to do this with std::vector.
You can use a wrapper type around your element type, with a default constructor that does nothing. E.g.:
template <typename T>
struct no_init
{
    T value;

    no_init() {
        static_assert(std::is_standard_layout<no_init<T>>::value && sizeof(T) == sizeof(no_init<T>),
                      "T does not have standard layout");
    }
    no_init(const T& v) { value = v; }
    T& operator=(const T& v) { value = v; return value; }

    no_init(const no_init<T>& n) { value = n.value; }
    no_init(no_init<T>&& n) { value = std::move(n.value); }
    T& operator=(const no_init<T>& n) { value = n.value; return value; } // return value, not this
    T& operator=(no_init<T>&& n) { value = std::move(n.value); return value; }

    T* operator&() { return &value; } // So you can use &(vec[0]) etc.
};
To use:
std::vector<no_init<char>> vec;
vec.resize(2ul * 1024ul * 1024ul * 1024ul);
Err...
try this method:
std::vector<T>::reserve(x)
It will enable you to reserve enough memory for x items without initializing any of them (your vector is still empty). Thus, there won't be a reallocation until you go over x.
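A quick sketch of that size/capacity distinction (reserved_size is a hypothetical helper):

```cpp
#include <cstddef>
#include <vector>

// reserve() allocates capacity but leaves size() at 0: no elements
// exist yet, so nothing gets initialized. resize() would instead
// create (and value-initialize) that many elements.
std::size_t reserved_size(std::size_t n) {
    std::vector<int> v;
    v.reserve(n); // capacity() >= n, size() still 0
    return v.size();
}
```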
The second point is that vector won't initialize the values to zero. Are you testing your code in debug mode?
After verification on g++, the following code:
#include <iostream>
#include <vector>

struct MyStruct
{
    int m_iValue00;
    int m_iValue01;
};

int main()
{
    MyStruct aaa, bbb, ccc;
    std::vector<MyStruct> aMyStruct;

    aMyStruct.push_back(aaa);
    aMyStruct.push_back(bbb);
    aMyStruct.push_back(ccc);
    aMyStruct.resize(6); // [EDIT] double the size

    for (std::vector<MyStruct>::size_type i = 0, iMax = aMyStruct.size(); i < iMax; ++i)
    {
        std::cout << "[" << i << "] : " << aMyStruct[i].m_iValue00 << ", " << aMyStruct[0].m_iValue01 << "\n";
    }

    return 0;
}
gives the following results:
[0] : 134515780, -16121856
[1] : 134554052, -16121856
[2] : 134544501, -16121856
[3] : 0, -16121856
[4] : 0, -16121856
[5] : 0, -16121856
The initialization you saw was probably an artifact.
[EDIT] After the comment on resize, I modified the code to add the resize line. resize does effectively call the default constructor of the objects inside the vector, but if the default constructor does nothing, then nothing is initialized... I still believe it was an artifact (the first time, I managed to have the whole vector zeroed with the following code):
aMyStruct.push_back(MyStruct()) ;
aMyStruct.push_back(MyStruct()) ;
aMyStruct.push_back(MyStruct()) ;
So...
:-/
[EDIT 2] As already suggested by Arkadiy, the solution is an inline constructor taking the desired parameters. Something like
struct MyStruct
{
    MyStruct(int p_d1, int p_d2) : d1(p_d1), d2(p_d2) {}
    int d1, d2;
};
This will probably get inlined in your code.
But you should study your code with a profiler anyway, to be sure this piece of code is the bottleneck of your application.
I tested a few of the approaches suggested here.
I allocated a huge set of data (200GB) in one container/pointer:
Compiler/OS:
g++ (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Settings: (c++-17, -O3 optimizations)
g++ --std=c++17 -O3
I timed the total program runtime with the Linux time command.
1.) std::vector:
#include <vector>
int main() {
    constexpr size_t size = 1024lu * 1024lu * 1024lu * 25lu; // 25B elements = 200GB
    std::vector<size_t> vec(size);
}
real 0m36.246s
user 0m4.549s
sys 0m31.604s
That is 36 seconds.
2.) std::vector with boost::noinit_adaptor
#include <vector>
#include <boost/core/noinit_adaptor.hpp>
int main() {
    constexpr size_t size = 1024lu * 1024lu * 1024lu * 25lu; // 25B elements = 200GB
    std::vector<size_t, boost::noinit_adaptor<std::allocator<size_t>>> vec(size);
}
real 0m0.002s
user 0m0.001s
sys 0m0.000s
So this solves the problem. Just allocating without initializing costs basically nothing (at least for large arrays).
3.) std::unique_ptr<T[]>:
#include <memory>
int main() {
    constexpr size_t size = 1024lu * 1024lu * 1024lu * 25lu; // 25B elements = 200GB
    auto data = std::unique_ptr<size_t[]>(new size_t[size]);
}
real 0m0.002s
user 0m0.002s
sys 0m0.000s
So basically the same performance as 2.), but does not require boost.
I also tested simple new/delete and malloc/free with the same performance as 2.) and 3.).
So the default-construction can have a huge performance penalty if you deal with large data sets.
In practice you want to actually initialize the allocated data afterwards.
However, some of the performance penalty still remains, especially if the later initialization is performed in parallel.
E.g., I initialize a huge vector with a set of (pseudo)random numbers:
(now I use OpenMP (-fopenmp) for parallelization on a 24-core AMD Threadripper 3960X)
g++ --std=c++17 -fopenmp -O3
1.) std::vector:
#include <vector>
#include <random>
int main() {
    constexpr size_t size = 1024lu * 1024lu * 1024lu * 25lu; // 25B elements = 200GB
    std::vector<size_t> vec(size);
    #pragma omp parallel
    {
        std::minstd_rand0 gen(42);
        #pragma omp for schedule(static)
        for (size_t i = 0; i < size; ++i) vec[i] = gen();
    }
}
real 0m41.958s
user 4m37.495s
sys 0m31.348s
That is 42 s, only 6 s more than with the default initialization.
The problem is that the initialization of std::vector is sequential.
2.) std::vector with boost::noinit_adaptor:
#include <vector>
#include <random>
#include <boost/core/noinit_adaptor.hpp>
int main() {
    constexpr size_t size = 1024lu * 1024lu * 1024lu * 25lu; // 25B elements = 200GB
    std::vector<size_t, boost::noinit_adaptor<std::allocator<size_t>>> vec(size);
    #pragma omp parallel
    {
        std::minstd_rand0 gen(42);
        #pragma omp for schedule(static)
        for (size_t i = 0; i < size; ++i) vec[i] = gen();
    }
}
real 0m10.508s
user 1m37.665s
sys 3m14.951s
So even with the random-initialization, the code is 4 times faster because we can skip the sequential initialization of std::vector.
So if you deal with huge data sets and plan to initialize them afterwards in parallel, you should avoid using the default std::vector.
From your comments to other posters, it looks like you're left with malloc() and friends. Vector won't let you have unconstructed elements.
From your code, it looks like you have a vector of structs each of which comprises 2 ints. Could you instead use 2 vectors of ints? Then
copy(data1, data1 + count, back_inserter(v1));
copy(data2, data2 + count, back_inserter(v2));
Now you don't pay for copying a struct each time.
If you really insist on having the elements uninitialized, and are willing to sacrifice some methods like front(), back(), and push_back(), use the Boost vector from boost::numeric. It even allows you not to preserve existing elements when calling resize()...
I'm not sure about all those answers that say it is impossible or tell us about undefined behavior.
Sometimes you need to use an std::vector. But sometimes you know its final size, and you also know that your elements will be constructed later.
Example: when you serialize the vector's contents into a binary file, then read it back later.
Unreal Engine has its TArray::SetNumUninitialized; why not std::vector?
To answer the initial question,
"Is there any way to prevent the initialization, or is there an STL-like container out there with resizeable contiguous storage and uninitialized elements?"
yes and no.
No, because the STL doesn't expose a way to do so.
Yes, because we're coding in C++, and C++ allows you to do a lot of things. If you're ready to be a bad guy (and if you really know what you are doing), you can hijack the vector.
Here is some sample code that works only for Windows's STL implementation; for another platform, look at how std::vector is implemented to use its internal members:
// This macro is to be defined before including VectorHijacker.h. Then you will be able to reuse VectorHijacker.h with different objects.
#define HIJACKED_TYPE SomeStruct

// VectorHijacker.h
#ifndef VECTOR_HIJACKER_STRUCT
#define VECTOR_HIJACKER_STRUCT
struct VectorHijacker
{
    std::size_t _newSize;
};
#endif

template<>
template<>
inline decltype(auto) std::vector<HIJACKED_TYPE, std::allocator<HIJACKED_TYPE>>::emplace_back<const VectorHijacker &>(const VectorHijacker &hijacker)
{
    // We're modifying the size of the vector directly, skipping the extra initialization. This is the part that relies on how the STL was implemented.
    _Mypair._Myval2._Mylast = _Mypair._Myval2._Myfirst + hijacker._newSize;
}

inline void setNumUninitialized_hijack(std::vector<HIJACKED_TYPE> &hijackedVector, const VectorHijacker &hijacker)
{
    hijackedVector.reserve(hijacker._newSize);
    hijackedVector.emplace_back<const VectorHijacker &>(hijacker);
}
But beware: this is hijacking we're talking about. This is really dirty code, and it is only to be used if you really know what you are doing. Besides, it is not portable and relies heavily on how the STL implementation was done.
I won't advise you to use it, because everyone here (me included) is a good person. But I wanted to let you know that it is possible, contrary to all the previous answers that stated it wasn't.
Use the std::vector::reserve() method. It won't resize the vector, but it will allocate the space.
Do the structs themselves need to be in contiguous memory, or can you get away with having a vector of struct*?
Vectors make a copy of whatever you add to them, so using vectors of pointers rather than objects is one way to improve performance.
I don't think STL is your answer. You're going to need to roll your own solution using realloc(). You'll have to store a pointer and either the size or the number of elements, and use that to find where to start adding elements after a realloc().
struct MyStruct { int d1, d2; }; // the array must hold the structs, not plain ints

MyStruct* memberArray;
int arrayCount;

void GetsCalledALot(int* data1, int* data2, int count) {
    memberArray = (MyStruct*)realloc(memberArray, sizeof(MyStruct) * (arrayCount + count));
    for (int i = 0; i < count; ++i) {
        memberArray[arrayCount + i].d1 = data1[i];
        memberArray[arrayCount + i].d2 = data2[i];
    }
    arrayCount += count;
}
I would do something like:
void GetsCalledALot(int* data1, int* data2, int count)
{
    const size_t mvSize = memberVector.size();
    memberVector.reserve(mvSize + count);
    for (int i = 0; i < count; ++i) {
        memberVector.push_back(MyType(data1[i], data2[i]));
    }
}
You need to define a constructor for the type stored in memberVector, but that's a small cost, and it gives you the best of both worlds: no unnecessary initialization is done, and no reallocation occurs during the loop.