Brief Preface
I recognize that there are many nuances and requirements for a standard-compatible allocator. There are a number of questions here covering a range of allocator topics. I realize that the requirements set out by the standard are critical to ensuring that an allocator functions correctly in all cases, doesn't leak memory, doesn't cause undefined behaviour, etc. This is particularly true where the allocator is meant to be used (or at least, can be used) in a wide range of use cases, with a variety of underlying types, different standard containers, object sizes, etc.
In contrast, I have a very specific use case where I personally strictly control all of the conditions associated with its use, as I describe in detail below. Consequently, I believe that what I've done is perfectly acceptable given the highly-specific nature of what I'm trying to implement.
I'm hoping someone with far more experience and understanding than me can either confirm that the description below is acceptable or point out the problems (and, ideally, how to fix them too).
Overview / Specific Requirements
In a nutshell, I'm trying to write an allocator that is to be used within my own code and for a single, specific purpose:
I need "a few" std::vector (probably uint16_t), with a fixed (at runtime) number of elements. I'm benchmarking to determine the best tradeoff of performance/space for the exact integer type[1]
As noted, the number of elements is always the same, but it depends on some runtime configuration data passed to the application
The number of vectors is also either fixed or at least bounded. The exact number is handled by a library providing an implementation of a parallel for_loop(execution::par_unseq, ...)
The vectors are constructed by me (i.e. so I know with certainty that they will always be constructed with N elements)
[1] The values in the vector are used to conditionally copy a float from one of 2 vectors to a target: c[i] = rand_vec[i] < threshold ? a[i] : b[i], where a, b, c are contiguous arrays of float, rand_vec is the std::vector I'm trying to figure out here, and threshold is a single variable of type integer_tbd. The code compiles to SSE SIMD operations. I do not remember the details, but I believe that this requires additional shifting instructions if the ints are smaller than the floats.
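For reference, a scalar sketch of that inner loop (n is a hypothetical name for the element count; everything else is as described above):
for (std::size_t i = 0; i < n; ++i)
    c[i] = rand_vec[i] < threshold ? a[i] : b[i]; // compiles to SIMD compare/select; narrower ints presumably need widening to match 32-bit float lanes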
On this basis, I've written a very simple allocator, with a single static boost::lockfree::queue as the free-list. Given that I will construct the vectors myself and they will go out of scope when I'm finished with them, I know with certainty that all calls to alloc::deallocate(T*, size_t) will always return vectors of the same size, so I believe that I can simply push them back onto the queue without worrying about a pointer to a differently-sized allocation being pushed onto the free-list.
As noted in the code below, I've added runtime tests to both the allocate and deallocate functions for now, while I've been confirming for myself that these situations cannot and will not occur. Again, I believe it is unquestionably safe to delete these runtime tests, although some advice would be appreciated here too -- considering the surrounding code, I think they should be handled adequately by the branch predictor, so they shouldn't have a significant runtime cost (although without instrumenting, it's hard to be 100% certain).
In a nutshell -- as far as I can tell, everything here is completely within my control, completely deterministic in behaviour, and thus completely safe. This is also suggested when running the code under typical conditions -- there are no segfaults, etc. I haven't yet tried running with sanitizers -- I was hoping to get some feedback and guidance before doing so.
I should point out that my code runs 2x faster compared to using std::allocator, which is at least qualitatively to be expected.
CR_Vector_Allocator.hpp
#include <boost/lockfree/queue.hpp>
#include <cstddef>

class CR_Vector_Allocator {
  using T = CR_Range_t; // probably uint16_t or uint32_t, set elsewhere.
private:
  // Free list shared by all instances; holds pointers to same-sized blocks.
  using free_list_type = boost::lockfree::queue<T*>;
  static free_list_type free_list;
public:
  T* allocate(size_t);
  void deallocate(T* p, size_t) noexcept;
  using value_type = T;
  using pointer = T*;
  using reference = T&;
  template <typename U> struct rebind { using other = CR_Vector_Allocator; };
};
CR_Vector_Allocator.cc
#include "CR_Vector_Allocator.hpp"
#include <cstdlib>   // std::malloc, std::free
#include <new>       // std::bad_alloc
#include <stdexcept>
#include <string>

CR_Vector_Allocator::T* CR_Vector_Allocator::allocate(size_t n) {
  // Runtime guard: vectors are always constructed with N > 1 elements.
  if (n <= 1)
    throw std::runtime_error("Unexpected number of elements to initialize: " +
                             std::to_string(n));
  T* addr_;
  // Reuse a recycled same-sized block if one is available.
  if (free_list.pop(addr_)) return addr_;
  addr_ = static_cast<T*>(std::malloc(n * sizeof(T)));
  if (!addr_) throw std::bad_alloc{};
  return addr_;
}
void CR_Vector_Allocator::deallocate(T* p, size_t n) noexcept {
  if (n <= 1) // should never happen, but just in case, I don't want to leak
    std::free(p);
  else        // same-sized block by construction, so it is safe to recycle
    free_list.push(p);
}
CR_Vector_Allocator::free_list_type CR_Vector_Allocator::free_list;
It is used in the following manner:
using CR_Vector_t = std::vector<uint16_t, CR_Vector_Allocator>;
CR_Vector_t Generate_CR_Vector(){
/* total_parameters is a member of the same class
as this member function and is defined elsewhere */
CR_Vector_t cr_vec (total_parameters);
std::uniform_int_distribution<uint16_t> dist_;
/* urng_ is a member variable of type std::mt19937_64 in the class */
std::generate(cr_vec.begin(), cr_vec.end(), [this, &dist_](){
return dist_(this->urng_);});
return cr_vec;
}
void Prepare_Next_Generation(...){
/*
...
*/
using hpx::parallel::execution::par_unseq;
hpx::parallel::for_loop_n(par_unseq, 0l, pop_size, [this](int64_t idx){
auto crossovers = Generate_CR_Vector();
auto new_parameters = Generate_New_Parameters(/* ... */, std::move(crossovers));
});
}
Any feedback, guidance or rebukes would be greatly appreciated.
Thank you!!
Related
An external library gives me a raw pointer of doubles that I want to map to an Eigen type. The raw array is logically a big ordered collection of small dense fixed-size matrices, all of the same size. The main issue is that the small dense matrices may be in row-major or column-major ordering and I want to accommodate them both.
My current approach is as follows. Note that all the entries of a small fixed-size block (in the array of blocks) need to be contiguous in memory.
template<int bs, class Mattype>
void block_operation(double *const vals, const int numblocks)
{
Eigen::Map<Mattype> mappedvals(vals,
Mattype::IsRowMajor ? numblocks*bs : bs,
Mattype::IsRowMajor ? bs : numblocks*bs
);
for(int i = 0; i < numblocks; i++)
if(Mattype::IsRowMajor)
mappedvals.template block<bs,bs>(i*bs,0) = block_operation_rowmajor(mappedvals);
else
mappedvals.template block<bs,bs>(0,i*bs) = block_operation_colmajor(mappedvals);
}
The calling function first figures out the Mattype (out of 2 options) and then calls the above function with the correct template parameter.
Thus all my algorithms need to be written twice and my code is interspersed with these layout checks. Is there a way to do this in a layout-agnostic way? Keep in mind that this code needs to be as fast as possible.
Ideally, I would Map the data just once and use it for all the operations needed. However, the only solution I could come up with was invoking the Map constructor once for every small block, whenever I need to access the block.
template<int bs, StorageOptions layout>
inline Map<Matrix<double,bs,bs,layout>> extractBlock(double *const vals,
const int bindex)
{
return Map<Matrix<double,bs,bs,layout>>(vals+bindex*bs*bs);
}
Would this function be optimized away to nothing (by a modern compiler like GCC 7.3 or Intel 2017 under -std=c++14 -O3), or would I be paying a small penalty every time I invoke this function (once for each block, and there are a LOT of small blocks)? Is there a better way to do this?
Your extractBlock is fine; a simpler but somewhat uglier solution is to use a reinterpret_cast at the start of block_operation:
using BlockType = Matrix<double,bs,bs,layout|DontAlign>;
BlockType* blocks = reinterpret_cast<BlockType*>(vals);
for(int i = 0; i < numblocks; ++i)
    blocks[i] = ...;
This will work for fixed-size matrices only. Also note the DontAlign, which is important unless you can guarantee that vals is aligned on 16 or even 32 bytes, depending on the presence of AVX and on bs... so just use DontAlign!
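For illustration, a fuller self-contained sketch of this cast-based approach (the per-block operation here is a hypothetical in-place scaling, not the block_operation_rowmajor/colmajor from the question):
#include <Eigen/Dense>

// View a contiguous buffer of numblocks * bs * bs doubles as an array of
// fixed-size, unaligned Eigen matrices and operate on each block in place.
template<int bs, int layout>
void scale_blocks(double *const vals, const int numblocks, const double factor)
{
    using BlockType = Eigen::Matrix<double, bs, bs, layout | Eigen::DontAlign>;
    BlockType* blocks = reinterpret_cast<BlockType*>(vals);
    for (int i = 0; i < numblocks; ++i)
        blocks[i] *= factor; // hypothetical per-block operation
}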
In a function I need to store some integers in a vector. The function is called a lot of times. I know that there are fewer than 10, but the number varies for each call of the function. Which choice gives better performance?
For example, I found that this:
std::vector<int> list(10);
std::vector<int>::iterator it = list.begin();
unsigned int num_of_elements_stored = 0;
for ( ... iterate on some structures ... ){
    if (... a specific condition ...){
        *it = integer from structures;
        it++;
        num_of_elements_stored++;
    }
}
is slower than:
std::vector<int> list;
unsigned int num_of_elements_stored(0);
for ( ... iterate on some structures ... ){
if (... a specific condition ...){
list.push_back( integer from structures );
}
}
num_of_elements_stored=list.size();
I'm going to go down an extremely uncool route here. At the risk of being crucified, I would suggest that std::vector isn't so great here. An exception would be if you get lucky with the memory allocator and creating and destroying a bunch of teeny vectors happens to give you the temporal locality it normally wouldn't provide.
Wait!
Before people kill me, I want to say that vector is awesome, generally speaking, as one of the most well-rounded data structures available. But when you're looking at a hotspot like this (hopefully with a profiler) as a result of creating a bunch of teeny vectors repeatedly in a tight loop, that's where this kind of straightforward usage of vector can bite you.
The trouble is that it's a heap-allocated structure (basically a dynamic array), and when we're dealing with a boatload of teeny arrays like this, we really want to use that often-cached memory at the top of the stack that's so cheap to allocate/free when we can.
One way to mitigate this is to reuse the same vector across repeated calls. Store it in the outside caller function's scope and pass it in by reference, clear it, do your push_backs, rinse and repeat. It's worth noting that clear doesn't free any memory in the vector, so it keeps that former capacity around (useful here when we want to reuse the same memory and play to temporal locality).
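A minimal sketch of that reuse pattern (the loop body is a stand-in for the asker's structures and condition):
#include <vector>

void collect_ints(std::vector<int>& out)
{
    out.clear();                  // keeps the capacity from previous calls
    for (int i = 0; i < 100; ++i) // stand-in for "iterate on some structures"
        if (i % 7 == 0)           // stand-in for "a specific condition"
            out.push_back(i);
}

void caller()
{
    std::vector<int> scratch;     // lives across many calls
    for (int call = 0; call < 1000; ++call)
        collect_ints(scratch);    // after the first call, usually no allocation
}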
But here we can play to that stack. As a simplified example (using C-style code that isn't very kosher in C++ or even bothers with exception-safety, but easier to illustrate):
int stack_mem[32];
int num = 0;
int cap = 32;
int* ptr = stack_mem;
for ( ... iterate on some structures ... )
{
if (... a specific condition ...)
{
if (num == cap)
{
cap *= 2;
int* new_ptr = static_cast<int*>(malloc(cap * sizeof(int)));
memcpy(new_ptr, ptr, num * sizeof(int));
if (ptr != stack_mem)
free(ptr);
ptr = new_ptr;
}
ptr[num++] = your_int;
}
}
if (ptr != stack_mem)
free(ptr);
Of course if you use something like this, you should properly wrap it in a reusable class template that does bounds-checking, doesn't use memcpy, has exception-safety, a formal push_back method, emplace_back, copy ctor, move ctor, swap, possibly a fill ctor, range ctor, erase, range erase, insert, range insert, size, empty, iterators/begin/end, uses placement new to avoid requiring copy assignment or default ctor, etc.
The solution uses the stack when N <= 32 (can use a different number suited for your common-case needs) and then switches to heap when exceeded. This allows it to handle your common case scenarios efficiently but also not just go kablooey in those rare case scenarios when N might be huge in some pathological case. That makes it somewhat comparable to variable-length arrays in C (something I actually wish we had in C++, at least until std::dynarray is available) but without the stack overflow tendencies VLAs could have since it switches to heap in rare case scenarios.
I applied all these standard-compliant formalities to a structure based on this idea, with a class template that accepts <T, FixedN>, and now use it almost as much as vector, since I work with so many cases like this where teeny arrays are repeatedly created and should, in the vast majority of common cases, fit on the stack (but always with those ultra-rare exceptional possibilities). It wiped many profiler hotspots I was getting related to memory off the map.
... but applying this basic idea might give you quite a boost. You can apply that kind of effort above of wrapping it into a safe container preserving C++ object semantics if it pays off in your measurements, and I think it should quite a bit in your case.
I would probably go with sort of a middle ground:
std::vector<int> list;
list.reserve(10);
...and the rest could be pretty much like your second version. To be honest, however, it's probably open to question whether this will really make a lot of difference.
If you use a static vector it will be allocated only once.
The first example is slower because it allocates and destroys the vector on each call.
I'm using omniORB and C++.
In my application I get a few small CORBA sequences from different modules and then I need to combine them into one big sequence for further processing.
Is there an easy way to do this? Something like seq2.append(seq1) or seq2.push_back(seq1). Or some operators? (I am really a newbie in STL things.)
The only way I found is to manually go through every element of the small sequences and add it to the large sequence.
//idl
struct Device;
typedef sequence<Device> DevicesList;
//c++
icore::DevicesList full_list;
foreach (const DStatusList &stlist, states_) {
icore::DevicesList list = convertList(stlist);
int newlength = full_list.length() + list.length();
int last_index = full_list.length();
full_list.length(newlength);
int j=0;
for(int i=last_index; i< last_index+list.length(); i++,j++) {
full_list[i] = list[j];
}
}
Thank you.
It isn't too hard to make little utility functions for doing stuff like this. For example, you can make your own push_back for CORBA sequences:
template<class Seq, class T>
void push_back(Seq& seq, const T& item)
{
const CORBA::ULong len = seq.length();
seq.length(len + 1);
seq[len] = item;
}
MySequence s;
push_back(s, 42);
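The same idea extends to appending a whole sequence in one go -- essentially the asker's loop wrapped in a template (a sketch, not tested against any particular ORB):
template<class Seq>
void append(Seq& dst, const Seq& src)
{
    const CORBA::ULong old_len = dst.length();
    dst.length(old_len + src.length()); // grow once, not per element
    for (CORBA::ULong i = 0; i < src.length(); ++i)
        dst[old_len + i] = src[i];
}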
Just be aware that some ORBs' sequence types don't over-allocate memory on push_back (like std::vector does), so this may trigger a reallocation and copying on every single push_back() call. This may be a performance problem. You'll have to check whether your ORB does this or not, probably by looking at the source. Unfortunately I think omniORB has this problem; at least it did a few years ago.
One way to mitigate this is to construct your sequences with maximum values (if you know it in advance). This will make one big allocation up front and you can then call push_back() up to that maximum without triggering a reallocation:
MySequence s(100); // construct with maximum; allocation happens here
push_back(s, 42); // no reallocation
push_back(s, 0); // no reallocation
I also agree with stefaanv's advice in the comments. Use CORBA sequences as little as possible. Basically only use it "around the edges" of your application where you need to make / receive CORBA calls. Get the data into standard containers where it is more easily manipulated as soon as possible. Don't let the CORBA types "seep" into your application. This also helps to make your application more portable if you ever decide to port to a non-CORBA environment.
I've also heard of a new IDL to C++11 mapping that is being proposed. This will map IDL sequences right onto std::vector which will make things considerably easier.
When thinking about this question I started to wonder whether std::copy() and/or std::fill() are specialized (I really mean optimized) for std::vector<bool>.
Is this required by the C++ standard or, perhaps, is it a common approach among C++ standard library vendors?
Simply speaking, I want to know whether the following code:
std::vector<bool> v(10, false);
std::fill(v.begin(), v.end(), true);
is in any way better/different than that:
std::vector<bool> v(10, false);
for (auto it = v.begin(); it != v.end(); ++it) *it = true;
To be very strict: can, let's say, std::fill<std::vector<bool>::iterator>() go into the internal representation of std::vector<bool> and set entire bytes instead of single bits? I assume making std::fill a friend of std::vector<bool> is not a big problem for a library vendor?
[UPDATE]
Next related question: can I (or anybody else :)) specialize such algorithms for, let's say, std::vector<bool>, if they are not already specialized? Is this allowed by the C++ standard? I know this would be non-portable -- but just for one selected standard C++ library? Assuming I (or anybody else) find a way to get at std::vector<bool>'s private parts.
The standard library templates ship as headers with your compiler, so you can look into those headers yourself. GCC's vector<bool> implementation is in stl_bvector.h (other implementations keep it in a similar file). And yes, there is a specialized fill (look near __fill_bvector).
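To get a feel for what such a specialization does, here is a simplified word-level fill (a sketch of the idea only, not libstdc++'s actual __fill_bvector; masking for partial words at the edges is omitted):
#include <cstddef>

// Fill n_words machine words of a packed bit buffer in one pass,
// writing sizeof(unsigned long) * 8 bits per store instead of one bit.
void fill_bits(unsigned long* words, std::size_t n_words, bool value)
{
    const unsigned long pattern = value ? ~0ul : 0ul;
    for (std::size_t i = 0; i < n_words; ++i)
        words[i] = pattern;
}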
Optimizations are nowhere mandated in the standard. It is assumed to be a "quality of implementation" issue whether an optimization could be applied. The asymptotic complexity of most algorithms is, however, restricted.
Optimizations are allowed as long as a correct program behaves according to what the standard mandates. The examples you ask about, i.e., optimizations involving standard algorithms using iterators on std::vector<bool>, can achieve their objective pretty much in any way the implementation sees fit because there is no way to monitor how they are implemented. This said, I doubt very much that there is any standard library implementation optimizing operations on std::vector<bool>. Most people seem to think that this specialization is an abomination in the first place and that it should go away.
A user is only allowed to create specializations of library types if the specialization involves at least one user-defined type. I don't think a user is allowed to provide any function in namespace std at all: there isn't any need, because all such functions would involve a user-defined type and would, thus, be found in the user's namespace. Formulated differently: I think you are out of luck with respect to getting algorithms optimized for std::vector<bool> for the time being. You might consider contributing optimized versions to the open-source implementations (e.g., libstdc++ and libc++), however.
There is no specialization for it, but you can still use it. (even though it's slow)
But here is a trick I found which enables std::fill on std::vector<bool>, using the internal word type std::_Vbase (reached through the iterator's proxy class).
(WARNING: I've tested it only for MSVC2013, so it may not work on other compilers.)
int num_bits = 100000;
std::vector<bool> bit_set(num_bits , true);
int bitsize_elem = sizeof(std::_Vbase) * 8; // 1byte = 8bits
int num_elems = static_cast<int>(std::ceil(num_bits / static_cast<double>(bitsize_elem)));
Here, since using any bit of an element involves the whole element, the number of elements must be rounded up.
Using this information, we will build a vector of pointers pointing to the original elements underlying the bits.
std::vector<std::_Vbase*> elem_ptrs(num_elems, nullptr);
std::vector<bool>::iterator bitset_iter = bit_set.begin();
for (int i = 0; i < num_elems; ++i)
{
std::_Vbase* elem_ptr = const_cast<std::_Vbase*>((*bitset_iter)._Myptr);
elem_ptrs[i] = elem_ptr;
std::advance(bitset_iter, bitsize_elem);
}
(*bitset_iter)._Myptr : By dereferencing the iterator of std::vector<bool>, you can access the proxy class reference and its member _Myptr.
Since the _Myptr member of the proxy returned by std::vector<bool>::iterator::operator*() is const std::_Vbase*, remove its constness by const_cast.
Now we have the pointer to the original element underlying those bits, std::_Vbase* elem_ptr.
elem_ptrs[i] = elem_ptr : Record this pointer,...
std::advance(bitset_iter, bitsize_elem) : ...and continue our journey to find the next element, by jumping bits held by the previous element.
std::fill(elem_ptrs[0], elem_ptrs[0] + num_elems, 0); // fill every bits "false"
std::fill(elem_ptrs[0], elem_ptrs[0] + num_elems, -1); // fill every bits "true"
Now we can use std::fill on the underlying words (starting at elem_ptrs[0]), rather than on the vector of bits.
Perhaps some may feel uncomfortable using the proxy class externally, let alone removing its constness.
But if you don't care about that and want something fast, this is the fastest way.
I did some comparisons below. (made new project, nothing changed config, release, x64)
int it_max = 10; // do it 10 times ...
int num_bits = std::numeric_limits<int>::max(); // 2147483647
std::vector<bool> bit_set(num_bits, true);
for (int it_count = 0; it_count < it_max; ++it_count)
{
std::fill(elem_ptrs[0], elem_ptrs[0] + num_elems, 0);
} // Elapse Time : 0.397sec
for (int it_count = 0; it_count < it_max; ++it_count)
{
std::fill(bit_set.begin(), bit_set.end(), false);
} // Elapse Time : 18.734sec
for (int it_count = 0; it_count < it_max; ++it_count)
{
for (int i = 0; i < num_bits; ++i)
{
bit_set[i] = false;
}
} // Elapse Time : 21.498sec
for (int it_count = 0; it_count < it_max; ++it_count)
{
bit_set.assign(num_bits, false);
} // Elapse Time : 21.779sec
for (int it_count = 0; it_count < it_max; ++it_count)
{
std::vector<bool>(num_bits, false).swap(bit_set); // You cannot use elem_ptrs anymore
} // Elapse Time : 1.3sec
There is one caveat. When you swap() the original vector with another one, then the vector of pointers becomes useless!
23.2.5 Class vector<bool> from the C++ International Standard goes as far as to tell us:
To optimize space allocation, a specialization of vector for bool elements is provided:
after which the bitset specialization is provided. That's as far as the standard goes regarding vector<bool>: vendors need to implement it using a bitset to optimize for space. Optimizing for space comes with a cost here, namely not optimizing for speed.
It's easier to get a book from the library shelf than it is to find a book if all the library's books were stapled closely together in containers...
Take your example: you're trying to do a std::fill or std::copy from begin to end. But that's not always the case; sometimes the range doesn't simply map to whole bytes. That's a bit of a problem in terms of speed optimization. It's easy for the case where you have to change every bit of a byte to one -- that's just setting the byte to 0xFF -- but it becomes much harder if you only change certain bits of a byte. Then you need to actually compute what the byte will be; that's not a trivial thing to do*, or at least not as an atomic operation on current hardware.
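As a tiny illustration of that partial-byte case, setting a sub-range of bits within one byte takes a computed mask and a read-modify-write (a hypothetical helper, not from any library):
// Set bits [first, last) of one byte; the remaining bits are untouched.
unsigned char set_bit_range(unsigned char byte, int first, int last)
{
    const unsigned char mask =
        static_cast<unsigned char>(((1u << (last - first)) - 1u) << first);
    return byte | mask;
}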
It's the premature optimization story, it's nice in terms of space but horrible in terms of performance.
Is having a "is a multiple of 8 bits" check worth the overhead? I doubt it.
* We're talking about multiple bits here, for the case it's just one bit you can of course do a bit operation.
I was wondering whether (apart from the obvious syntax differences) there would be any efficiency difference between having a class containing multiple instances of an object (of the same type) or a fixed size array of objects of that type.
In code:
struct A {
double x;
double y;
double z;
};
struct B {
double xvec[3];
};
In reality I would be using boost::array, which is a better C++ alternative to C-style arrays.
I am mainly concerned with construction/destruction and reading/writing such doubles, because these classes will often be constructed just to invoke one of their member functions once.
Thank you for your help/suggestions.
Typically the representation of those two structs would be exactly the same. It is, however, possible to have poor performance if you pick the wrong one for your use case.
For example, if you need to access each element in a loop, with an array you could do:
for (int i = 0; i < 3; i++)
dosomething(xvec[i]);
However, without an array, you'd either need to duplicate code:
dosomething(x);
dosomething(y);
dosomething(z);
This means code duplication - which can go either way. On the one hand there's less loop code; on the other hand very tight loops can be quite fast on modern processors, and code duplication can blow away the I-cache.
The other option is a switch:
for (int i = 0; i < 3; i++) {
    double *r;
    switch(i) {
        case 0: r = &x; break;
        case 1: r = &y; break;
        case 2: r = &z; break;
    }
    dosomething(*r); // assume this is some big inlined code
}
This avoids the possibly-large i-cache footprint, but has a huge negative performance impact. Don't do this.
On the other hand, it is, in principle, possible for array accesses to be slower, if your compiler isn't very smart:
xvec[0] = xvec[1] + 1;
dosomething(xvec[1]);
Since xvec[0] and xvec[1] are distinct, in principle the compiler ought to be able to keep the value of xvec[1] in a register, so it doesn't have to reload it at the next line. However, it's possible some compilers might not be smart enough to notice that xvec[0] and xvec[1] don't alias. In this case, using separate fields might be a very tiny bit faster.
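If you're worried about a compiler that misses this, a local copy makes the reuse explicit regardless of its aliasing analysis (a trivial sketch):
double t = xvec[1];  // load once
xvec[0] = t + 1;
dosomething(t);      // no reload needed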
In short, it's not about one or the other being fast in all cases. It's about matching the representation to how you use it.
Personally, I would suggest going with whatever makes the code working on xvec most natural. It's not worth spending a lot of human time worrying about something that, at best, will probably only produce such a small performance difference that you'll only catch it in micro-benchmarks.
MSVC++ 2010 generated exactly the same code for reading/writing from two POD structs like in your example. Since the offsets to read/write are computable at compile time, this is not surprising. The same goes for construction and destruction.
As for the actual performance, the general rule applies: profile it if it matters, if it doesn't - why care?
Indexing into an array member is perhaps a bit more work for the user of your struct, but then again, he can more easily iterate over the elements.
In case you can't decide and want to keep your options open, you can use an anonymous union:
#include <iostream>

struct Foo
{
union
{
struct
{
double x;
double y;
double z;
} xyz;
double arr[3];
};
};
int main()
{
Foo a;
a.xyz.x = 42;
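// note: reading arr[0] after writing xyz.x relies on union type-punning
// (widely supported by compilers, but strictly UB in standard C++)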
std::cout << a.arr[0] << std::endl;
}
Some compilers also support anonymous structs, in that case you can leave the xyz part out.
It depends. For instance, the example you gave is a classic one in favor of 'old-school' arrays: a math point/vector (or matrix)
- has a fixed number of elements
- the data itself is usually kept private in an object
- since (if?) it has a class as an interface, you can properly initialize them in the constructor (otherwise, classic array initialization is something I don't really like, syntax-wise)
In such cases (going with the math vector/matrix examples), I always ended up using C-style arrays internally, as you can loop over them instead of writing copy/pasted code for each component.
But this is a special case -- for me, in C++ nowadays arrays == STL vector, it's fast and I don't have to worry about nuthin' :)
The difference can be in how the variables are stored in memory. In the first example the compiler can add padding to align the data. But in your particular case it doesn't matter.
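A quick check of that claim (on typical ABIs both structs come out identical -- 24 bytes with 8-byte alignment -- since three doubles need no padding):
#include <cstdio>

struct A { double x, y, z; };
struct B { double xvec[3]; };

int main() {
    std::printf("%zu %zu\n", sizeof(A), sizeof(B)); // typically prints: 24 24
}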
Raw arrays offer better cache locality than C++ containers. As presented, however, the array example's only advantage over the multiple objects is the ability to iterate over the elements.
The real answer is, of course: create a test case and measure.