I have a threading issue under Windows.
I am developing a program that runs complex physical simulations for different conditions: say, one condition per hour of the year, which would be 8760 simulations. I group the simulations per thread, so each thread runs a for loop over 273 simulations (on average).
I bought an AMD Ryzen 9 5950X with 16 cores (32 threads) for this task. On Linux, all the threads run between 98% and 100% usage, while under Windows I get this:
(The first bar is the I/O thread reading data, the smaller bars are the process threads. Red: synchronization, green: process, purple: I/O)
This is from Visual Studio's Concurrency Visualizer, which tells me that 63% of the time was spent on thread synchronization. As far as I can tell, my code is the same for both the Linux and Windows runs.
I did my best to make the objects immutable to avoid issues, and that provided a big gain on my old 8-thread Intel i7. However, with many more threads, this issue arises.
For threading, I have tried a custom parallel for, and the taskflow library. Both perform identically for what I want to do.
Is there something fundamental about windows threads that produces this behaviour?
The custom parallel for code:
/**
* parallel for
* @tparam Index integer type
* @tparam Callable function type
* @param start start index of the loop
* @param end final +1 index of the loop
* @param func function to evaluate
* @param nb_threads number of threads; if zero, it is determined automatically
*/
template<typename Index, typename Callable>
static void ParallelFor(Index start, Index end, Callable func, unsigned nb_threads=0) {
    // Estimate number of threads in the pool
    if (nb_threads == 0) nb_threads = getThreadNumber();
    // Size of a slice for the range functions (end is one past the last index)
    Index n = end - start;
    Index slice = (Index) std::round(n / static_cast<double>(nb_threads));
    slice = std::max(slice, Index(1));
    // [Helper] Inner loop
    auto launchRange = [&func](Index k1, Index k2) {
        for (Index k = k1; k < k2; k++) {
            func(k);
        }
    };
    // Create pool and launch jobs
    std::vector<std::thread> pool;
    pool.reserve(nb_threads);
    Index i1 = start;
    Index i2 = std::min(start + slice, end);
    for (unsigned i = 0; i + 1 < nb_threads && i1 < end; ++i) {
        pool.emplace_back(launchRange, i1, i2);
        i1 = i2;
        i2 = std::min(i2 + slice, end);
    }
    if (i1 < end) {
        pool.emplace_back(launchRange, i1, end);
    }
    // Wait for jobs to finish
    for (std::thread &t : pool) {
        if (t.joinable()) {
            t.join();
        }
    }
}
A complete C++ project illustrating the issue is uploaded here
Main.cpp:
//
// Created by santi on 26/08/2022.
//
#include "input_data.h"
#include "output_data.h"
#include "random.h"
#include "par_for.h"
void fillA(Matrix& A){
    Random rnd;
    rnd.setTimeBasedSeed();
    for(int i = 0; i < A.getRows(); ++i)
        for(int j = 0; j < A.getCols(); ++j)
            A(i, j) = (int) rnd.randInt(0, 1000);
}
void worker(const InputData& input_data,
            OutputData& output_data,
            const std::vector<int>& time_indices,
            int thread_index){
    std::cout << "Thread " << thread_index << " [" << time_indices[0] << ", " << time_indices[time_indices.size() - 1] << "]\n";
    for(const int& t: time_indices){
        Matrix b = input_data.getAt(t);
        Matrix A(input_data.getDim(), input_data.getDim());
        fillA(A);
        Matrix x = A * b;
        output_data.setAt(t, x);
    }
}
void process(int time_steps, int dim, int n_threads){
    InputData input_data(time_steps, dim);
    OutputData output_data(time_steps, dim);
    // correct the number of threads
    if ( n_threads < 1 ) { n_threads = ( int )getThreadNumber( ); }
    // generate indices
    std::vector<int> time_indices = arrange<int>(time_steps);
    // compute the split of indices per core
    std::vector<ParallelChunkData<int>> chunks = prepareParallelChunks(time_indices, n_threads );
    // run in parallel
    ParallelFor( 0, ( int )chunks.size( ), [ & ]( int k ) {
        // run chunk
        worker(input_data, output_data, chunks[k].indices, k );
    } );
}
int main(){
    process(8760, 5000, 0);
    return 0;
}
The performance problem you see is definitely caused by the many memory allocations, as Matt already suspected in his answer. To expand on this, here is a screenshot from Intel VTune running on an AMD Ryzen Threadripper 3990X with 64 cores (128 threads):
As you can see, almost all of the time is spent in malloc or free, which get called from the various Matrix operations. The bottom part of the image shows the timeline of the activity of a small selection of the threads: green means that the thread is inactive, i.e. waiting. Usually only one or two threads are actually active. Allocating and freeing memory accesses a shared resource, causing the threads to wait for each other.
I think you have only two real options:
Option 1: No dynamic allocations anymore
The most efficient thing to do would be to rewrite the code to preallocate everything and get rid of all the temporaries. To adapt it to your example code, you could replace the b = input_data.getAt(t); and x = A * b; like this:
void MatrixVectorProduct(Matrix const & A, Matrix const & b, Matrix & x)
{
    for (int i = 0; i < x.getRows(); ++i) {
        for (int j = 0; j < x.getCols(); ++j) {
            x(i, j) = 0.0;
            for (int k = 0; k < A.getCols(); ++k) {
                x(i, j) += (A(i, k) * b(k, j));
            }
        }
    }
}
void getAt(int t, Matrix const & input_data, Matrix & b) {
    for (int i = 0; i < input_data.getRows(); ++i)
        b(i, 0) = input_data(i, t);
}
void worker(const InputData& input_data,
            OutputData& output_data,
            const std::vector<int>& time_indices,
            int thread_index){
    std::cout << "Thread " << thread_index << " [" << time_indices[0] << ", " << time_indices[time_indices.size() - 1] << "]\n";
    Matrix A(input_data.getDim(), input_data.getDim());
    Matrix b(input_data.getDim(), 1);
    Matrix x(input_data.getDim(), 1);
    for (const int & t: time_indices) {
        getAt(t, input_data.getMat(), b);
        fillA(A);
        MatrixVectorProduct(A, b, x);
        output_data.setAt(t, x);
    }
    std::cout << "Thread " << thread_index << ": Finished" << std::endl;
}
This fixes the performance problems.
Here is a screenshot from VTune, where you can see a much better utilization:
Option 2: Using a special allocator
The alternative is to use a different allocator that handles allocating and freeing memory more efficiently in multithreaded scenarios. One that I have had very good experience with is mimalloc (there are others, such as Hoard or the one from TBB). You do not need to modify your source code; you just need to link against the library as described in its documentation.
I tried mimalloc with your source code, and it gave near 100% CPU utilization without any code changes.
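If you want to try this yourself, mimalloc's documented Linux usage is a one-liner. A hedged sketch (the library path and the binary name are illustrative; check where your distribution installs it):

```shell
# Option A: override malloc/free at load time, no rebuild required
LD_PRELOAD=/usr/lib/libmimalloc.so ./simulation

# Option B: link the allocator in at build time
g++ -O2 -o simulation main.cpp -lmimalloc
```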
I also found a post on the Intel forums with a similar problem, and the solution there was the same (using a special allocator).
Additional notes
Matrix::allocSpace() allocates the memory using an array of pointers to row arrays. It is better to use one contiguous array for the whole matrix instead of multiple independent allocations: that way, all elements are adjacent in memory, which allows more efficient access.
But in general I suggest using a dedicated linear algebra library such as Eigen instead of the hand-rolled matrix implementation, to exploit vectorization (SSE2, AVX, ...) and to get the benefits of a highly optimized library.
Ensure that you compile your code with optimizations enabled.
Disable various cross-checks if you do not need them: assert() (i.e. define NDEBUG in the preprocessor), and for MSVC possibly /GS-.
Ensure that you actually have enough memory installed.
You said that all your memory was pre-allocated, but in the worker function I see this...
Matrix b = input_data.getAt(t);
which allocates and fills a new matrix b, and this...
Matrix A(input_data.getDim(), input_data.getDim());
which allocates and fills a new matrix A, and this...
Matrix x = A * b;
which allocates and fills a new matrix x.
The heap is a global data structure, so the thread synchronization time you're seeing is probably contention in the memory allocate/free functions.
These are all in a tight loop. You should change the loop to access b by reference, and reuse the other two matrices across iterations.
I'm attempting to benchmark the memory bandwidth on a ccNUMA system with 2x Intel(R) Xeon(R) Platinum 8168:
24 cores @ 2.70 GHz,
L1 cache 32 kB, L2 cache 1 MB, L3 cache 33 MB.
As a reference, I'm using the Intel Advisor's Roofline plot, which depicts the bandwidths of each CPU data-path available. According to this, the bandwidth is 230 GB/s.
In order to benchmark this, I'm using my own little benchmark helper tool, which performs timed experiments in a loop. The API offers an abstract class called experiment_functor, which looks like this:
class experiment_functor
{
public:
    //+/////////////////
    // main functionality
    //+/////////////////
    virtual void init() = 0;
    virtual void* data(const std::size_t&) = 0;
    virtual void perform_experiment() = 0;
    virtual void finish() = 0;
};
The user (myself) can then define the data initialization, the work to be timed (i.e. the experiment), and the clean-up routine, so that freshly allocated data can be used for each experiment. An instance of the derived class can be provided to the API function:
perf_stats perform_experiments(experiment_functor& exp_fn, const std::size_t& data_size_in_byte, const std::size_t& exp_count)
Here's the implementation of the class for the Schönauer vector triad:
class exp_fn : public experiment_functor
{
    //+/////////////////
    // members
    //+/////////////////
    const std::size_t data_size_;
    double* vec_a_ = nullptr;
    double* vec_b_ = nullptr;
    double* vec_c_ = nullptr;
    double* vec_d_ = nullptr;

public:
    //+/////////////////
    // lifecycle
    //+/////////////////
    exp_fn(const std::size_t& data_size)
        : data_size_(data_size) {}

    //+/////////////////
    // main functionality
    //+/////////////////
    void init() final
    {
        // allocate
        const auto page_size = sysconf(_SC_PAGESIZE) / sizeof(double);
        posix_memalign(reinterpret_cast<void**>(&vec_a_), page_size, data_size_ * sizeof(double));
        posix_memalign(reinterpret_cast<void**>(&vec_b_), page_size, data_size_ * sizeof(double));
        posix_memalign(reinterpret_cast<void**>(&vec_c_), page_size, data_size_ * sizeof(double));
        posix_memalign(reinterpret_cast<void**>(&vec_d_), page_size, data_size_ * sizeof(double));
        if (vec_a_ == nullptr || vec_b_ == nullptr || vec_c_ == nullptr || vec_d_ == nullptr)
        {
            std::cerr << "Fatal error, failed to allocate memory." << std::endl;
            std::abort();
        }
        // apply first-touch
        #pragma omp parallel for schedule(static)
        for (auto index = std::size_t{}; index < data_size_; index += page_size)
        {
            vec_a_[index] = 0.0;
            vec_b_[index] = 0.0;
            vec_c_[index] = 0.0;
            vec_d_[index] = 0.0;
        }
    }

    void* data(const std::size_t&) final
    {
        return reinterpret_cast<void*>(vec_d_);
    }

    void perform_experiment() final
    {
        #pragma omp parallel for simd safelen(8) schedule(static)
        for (auto index = std::size_t{}; index < data_size_; ++index)
        {
            vec_d_[index] = vec_a_[index] + vec_b_[index] * vec_c_[index]; // fp_count: 2, traffic: 4+1
        }
    }

    void finish() final
    {
        std::free(vec_a_);
        std::free(vec_b_);
        std::free(vec_c_);
        std::free(vec_d_);
    }
};
Note: The function data serves a special purpose in that it tries to cancel out the effects of NUMA balancing. Every so often, in a random iteration, the function perform_experiments writes, using all threads, in a random fashion to the data provided by this function.
Question: Using this, I consistently get a maximum bandwidth of 201 GB/s. Why am I unable to achieve the stated 230 GB/s?
I am happy to provide any extra information if needed. Thanks very much in advance for your answers.
Update:
Following the suggestions made by @VictorEijkhout, I've now conducted a strong scaling experiment for the read-only bandwidth.
As you can see, the peak bandwidth averages 217 GB/s, with a maximum of 225 GB/s. It is still very puzzling that, at a certain point, adding CPUs actually reduces the effective bandwidth.
Bandwidth performance depends on the type of operation you do. For a mix of reads & writes you will indeed not get the peak number; if you only do reads you will get closer.
I suggest you read the documentation for the STREAM benchmark and take a look at the posted numbers.
Further notes: I hope you tie your threads down with OMP_PROC_BIND? Also, your architecture runs out of bandwidth before it runs out of cores: your optimal bandwidth performance may happen with fewer than the total number of cores.
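A hedged sketch of the thread-pinning suggestion (the environment variables are standard OpenMP 4.0; the binary name is illustrative):

```shell
export OMP_PROC_BIND=close   # bind threads and keep them near each other
export OMP_PLACES=cores      # one place per physical core
export OMP_NUM_THREADS=24    # also try fewer threads than cores
./bandwidth_benchmark
```

Sweeping OMP_NUM_THREADS downward from the core count is how you would locate the point where the strong-scaling curve turns over.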
I'm writing a path tracer as a programming exercise. Yesterday I finally decided to implement multithreading, and it worked well. However, once I wrapped the test code from main() in a separate renderer class, I noticed a significant and consistent performance drop. In short, it seems that filling a std::vector anywhere outside of main() causes threads using its elements to perform worse. I managed to isolate and reproduce the issue with simplified code, but unfortunately I still don't know why it happens or what to do in order to fix it.
Performance drop is quite visible and consistent:
97 samples - time = 28.154226s, per sample = 0.290250s, per sample/th = 1.741498
99 samples - time = 28.360723s, per sample = 0.286472s, per sample/th = 1.718832
100 samples - time = 29.335468s, per sample = 0.293355s, per sample/th = 1.760128
vs.
98 samples - time = 30.197734s, per sample = 0.308140s, per sample/th = 1.848841
99 samples - time = 30.534240s, per sample = 0.308427s, per sample/th = 1.850560
100 samples - time = 30.786519s, per sample = 0.307865s, per sample/th = 1.847191
The code I originally posted in this question can be found here: https://github.com/Jacajack/rt/tree/mt_debug or in edit history.
I created a struct foo that is supposed to mimic the behavior of my renderer class and is responsible for initialization of path tracing contexts in its constructor.
The interesting thing is, when I remove the body of foo's constructor and instead do this (initialize contexts directly from main()):
std::vector<rt::path_tracer> contexts; // Can be on stack or on heap, doesn't matter
foo F(cam, scene, bvh, width, height, render_threads, contexts); // no longer fills `contexts`
contexts.reserve(render_threads);
for (int i = 0; i < render_threads; i++)
    contexts.emplace_back(cam, scene, bvh, width, height, 1000 + i);
F.run(render_threads);
the performance is back to normal. But if I wrap these three lines in a separate function and call it from there, it's worse again. The only pattern I can see here is that filling the contexts vector outside of main() causes the problem.
I initially thought that this was an alignment/caching issue, so I tried aligning path_tracers with Boost's aligned_allocator and TBB's cache_aligned_allocator, with no result. It turns out that the problem persists even when there's only one thread running.
I suspect it must be some kind of wild compiler optimization (I'm using -O3), although that's just a guess. Do you know any possible causes of such behavior, and what can be done to avoid it?
This happens on both gcc 10.1.0 and clang 10.0.0. Currently I'm only using -O3.
I managed to reproduce a similar issue in this standalone example:
#include <iostream>
#include <thread>
#include <random>
#include <algorithm>
#include <chrono>
#include <iomanip>
#include <vector>

struct foo
{
    std::mt19937 rng;
    std::uniform_real_distribution<float> dist;
    std::vector<float> buf;
    int cnt = 0;

    foo(int seed, int n) :
        rng(seed),
        dist(0, 1),
        buf(n, 0)
    {
    }

    void do_stuff()
    {
        // Do whatever
        for (auto &f : buf)
            f = (f + 1) * dist(rng);
        cnt++;
    }
};

int main()
{
    int N = 50000000;
    int thread_count = 6;

    struct bar
    {
        std::vector<std::thread> threads;
        std::vector<foo> &foos;
        bool active = true;

        bar(std::vector<foo> &f, int thread_count, int n) :
            foos(f)
        {
            /*
            foos.reserve(thread_count);
            for (int i = 0; i < thread_count; i++)
                foos.emplace_back(1000 + i, n);
            //*/
        }

        void run(int thread_count)
        {
            auto task = [this](foo &f)
            {
                while (this->active)
                    f.do_stuff();
            };
            threads.reserve(thread_count);
            for (int i = 0; i < thread_count; i++)
                threads.emplace_back(task, std::ref(foos[i]));
        }
    };

    std::vector<foo> foos;
    bar B(foos, thread_count, N);

    ///*
    foos.reserve(thread_count);
    for (int i = 0; i < thread_count; i++)
        foos.emplace_back(1000 + i, N);
    //*/

    B.run(thread_count);

    std::vector<float> buffer(N, 0);
    int samples = 0, last_samples = 0;

    // Start time
    auto t_start = std::chrono::high_resolution_clock::now();

    while (1)
    {
        last_samples = samples;
        samples = 0;
        for (auto &f : foos)
        {
            std::transform(
                f.buf.cbegin(), f.buf.cend(),
                buffer.begin(),
                buffer.begin(),
                std::plus<float>()
            );
            samples += f.cnt;
        }

        if (samples != last_samples)
        {
            auto t_now = std::chrono::high_resolution_clock::now();
            std::chrono::duration<double> t_total = t_now - t_start;
            std::cerr << std::setw(4) << samples << " samples - time = " << std::setw(8) << std::fixed << t_total.count()
                << "s, per sample = " << std::setw(8) << std::fixed << t_total.count() / samples
                << "s, per sample/th = " << std::setw(8) << std::fixed << t_total.count() / samples * thread_count << std::endl;
        }
    }
}
and results:
For N = 100000000, thread_count = 6
In main():
196 samples - time = 26.789526s, per sample = 0.136681s, per sample/th = 0.820088
197 samples - time = 27.045646s, per sample = 0.137288s, per sample/th = 0.823725
200 samples - time = 27.312159s, per sample = 0.136561s, per sample/th = 0.819365
vs.
In foo::foo():
193 samples - time = 22.690566s, per sample = 0.117568s, per sample/th = 0.705406
196 samples - time = 22.972403s, per sample = 0.117206s, per sample/th = 0.703237
198 samples - time = 23.257542s, per sample = 0.117462s, per sample/th = 0.704774
200 samples - time = 23.540432s, per sample = 0.117702s, per sample/th = 0.706213
It seems that the results are the opposite of what is happening in my path tracer, but the visible difference is still there.
Thank you
There is a race condition on foo::buf: one thread stores into it while another reads it. This is undefined behaviour, but on the x86-64 platform it is harmless in this particular code.
I cannot reproduce your observations on Intel i9-9900KS, both variants print the same per sample stats.
Compiled with gcc-8.4, g++ -o release/gcc/test.o -c -pthread -m{arch,tune}=native -std=gnu++17 -g -O3 -ffast-math -falign-{functions,loops}=64 -DNDEBUG test.cc
With int N = 50000000;, each thread operates on its own float[N] array, which occupies 200 MB. Such a data set doesn't fit in the CPU caches, and the program incurs a lot of data cache misses because it needs to fetch the data from memory:
$ perf stat -ddd ./release/gcc/test
[...]
71474.813087 task-clock (msec) # 6.860 CPUs utilized
66 context-switches # 0.001 K/sec
0 cpu-migrations # 0.000 K/sec
341,942 page-faults # 0.005 M/sec
357,027,759,875 cycles # 4.995 GHz (30.76%)
991,950,515,582 instructions # 2.78 insn per cycle (38.43%)
105,609,126,987 branches # 1477.571 M/sec (38.40%)
155,426,137 branch-misses # 0.15% of all branches (38.39%)
150,832,846,580 L1-dcache-loads # 2110.294 M/sec (38.41%)
4,945,287,289 L1-dcache-load-misses # 3.28% of all L1-dcache hits (38.44%)
1,787,635,257 LLC-loads # 25.011 M/sec (30.79%)
1,103,347,596 LLC-load-misses # 61.72% of all LL-cache hits (30.81%)
<not supported> L1-icache-loads
7,457,756 L1-icache-load-misses (30.80%)
150,527,469,899 dTLB-loads # 2106.021 M/sec (30.80%)
54,966,843 dTLB-load-misses # 0.04% of all dTLB cache hits (30.80%)
26,956 iTLB-loads # 0.377 K/sec (30.80%)
415,128 iTLB-load-misses # 1540.02% of all iTLB cache hits (30.79%)
<not supported> L1-dcache-prefetches
<not supported> L1-dcache-prefetch-misses
10.419122076 seconds time elapsed
If you run this application on NUMA CPUs, such as AMD Ryzen or multi-socket Intel Xeon, your observations can probably be explained by adverse placement of threads onto NUMA nodes remote from the node where foo::buf is allocated. Those last-level cache misses have to read from memory, and if that memory is on a remote NUMA node, the access takes longer.
To fix that, you may like to allocate memory in the thread that uses it (not in the main thread as the code does) and use a NUMA-aware allocator, such as TCMalloc. See NUMA aware heap memory manager for more details.
When running your benchmark you may like to fix the CPU frequency, so that it doesn't get dynamically adjusted during the run, on Linux you can do that with sudo cpupower frequency-set --related --governor performance.
EDIT: I added two more benchmarks to compare the use of realloc with the C array and of reserve() with the std::vector. From the last analysis it seems that realloc has a large influence, even though it is called only 30 times. Checking the documentation, I guess this is because realloc can return a completely new pointer, copying the old data.
To complete the scenario, I also added the code and graph for fully allocating the array during initialisation. The difference from reserve() is tangible.
Compile flags: only the optimisation described in the graph; compiling with g++ and nothing more.
Original question:
I made a benchmark comparing std::vector with a new/delete array when adding 1 billion integers, and the second code is dramatically faster than the one using the vector, especially with optimisation turned on.
I suspect that this is caused by the vector internally calling realloc too many times. This would be the case if vector did not grow by doubling its size every time it fills up (here the number 2 has nothing special; what matters is that its size grows geometrically).
In such a case the calls to realloc would be only O(log n) instead of O(n).
If this is what causes the slowness of the first code, how can I tell std::vector to grow geometrically?
Note that calling reserve once would work in this case but not in the more general case in which the number of push_back is not known in advance.
black line
#include<vector>
int main(int argc, char * argv[]) {
const unsigned long long size = 1000000000;
std::vector <int> b(size);
for(int i = 0; i < size; i++) {
b[i]=i;
}
return 0;
}
blue line
#include<vector>
int main(int argc, char * argv[]) {
const int size = 1000000000;
std::vector <int> b;
for(int i = 0; i < size; i++) {
b.push_back(i);
}
return 0;
}
green line
#include<vector>
int main(int argc, char * argv[]) {
const int size = 1000000000;
std::vector <int> b;
b.reserve(size);
for(int i = 0; i < size; i++) {
b.push_back(i);
}
return 0;
}
red line
int main(int argc, char * argv[]) {
const int size = 1000000000;
int * a = new int [size];
for(int i = 0; i < size; i++) {
a[i] = i;
}
delete [] a;
return 0;
}
orange line
#include<cstdlib>
int main(int argc, char * argv[]) {
const unsigned long long size = 1000000000;
int * a = (int *)malloc(size*sizeof(int));
int next_power = 1;
for(int i = 0; i < size; i++) {
a[i] = i;
if(i == next_power - 1) {
next_power *= 2;
a=(int*)realloc(a,next_power*sizeof(int));
}
}
free(a);
return 0;
}
EDIT: checking .capacity(), as suggested, we saw that the growth is indeed exponential. So why is the vector so slow?
The optimized C style array is optimized to nothing.
On godbolt:
xorl %eax, %eax
retq
that is the program.
Whenever you have a program optimized down to nearly 0 s, you should consider this possibility.
The optimizer sees that you are doing nothing with the memory allocated, notes that allocating unused memory may have zero side effects, and eliminates the allocation.
And writing to memory then never reading it also has zero side effects.
In comparison, the compiler has difficulty proving that the vector's allocation is useless. The compiler developers could probably teach it to recognize unused std::vectors the way it recognizes unused raw C arrays, but that optimization really is a corner case, and in my experience it causes lots of problems when profiling.
Note that the vector-with-reserve at any optimization level is basically the same speed as the unoptimized C style version.
In the C style code, the only thing to optimize is "don't do anything". In the vector code, the unoptimized version is full of extra stack frames and debug checks to ensure you don't go out of bounds (and crash cleanly if you do).
Note that on a Linux system, allocating huge chunks of memory doesn't do anything except fiddle with the virtual memory table. Only when the memory is touched does it actually find some zero'd physical memory for you.
Without reserve, the std::vector has to guess an initial small size, resize, and copy, and repeat. This causes a 50% performance loss, which seems reasonable to me.
With reserve, it actually does the work. The work takes just under 5s.
Adding to a vector via push_back does cause it to grow geometrically. Geometric growth results in an asymptotic average of 2-3 copies of each piece of data being made.
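The "2-3 copies" figure follows from a geometric series; a short sketch (not part of the original answer), assuming growth factor $g$ and starting capacity 1:

```latex
% Total elements copied while growing a vector to capacity n with factor g:
\sum_{k=0}^{\log_g n - 1} g^k \;=\; \frac{n-1}{g-1} \;\approx\; \frac{n}{g-1}
```

That is about $1/(g-1)$ copies per element, plus the initial write: roughly 2 writes per element for $g = 2$, and roughly 3 for $g = 1.5$ (the factor MSVC's implementation uses), hence "2-3 copies".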
As for realloc, std::vector does not realloc. It allocates a new buffer, and copies the old data, then discards the old one.
Realloc attempts to grow the buffer, and if it cannot it bitwise copies the buffer.
That is more efficient than std::vector can manage for bitwise-copyable types. I'd bet the realloc version actually never copies; there is always memory space to grow the vector into (in a real program this may not be the case).
The lack of realloc in standard library allocators is a minor flaw. You'd have to invent a new API for it, because you'd want it to work for non-bitwise-copyable types (something like "try to grow the allocated memory", which, if it fails, leaves it up to you to grow the allocation).
when I add 1 billion integers and the second code is dramatically faster than the one using the vector
That's... completely unsurprising. One of your cases involves a dynamically sized container that has to readjust for its load, and the other involves a fixed size container that doesn't. The latter simply has to do way less work, no branching, no additional allocations. The fair comparison would be:
std::vector<int> b(size);
for(int i = 0; i < size; i++) {
b[i] = i;
}
This now does the same thing as your array example (well, almost - new int[size] default-initializes all the ints whereas std::vector<int>(size) zero-initializes them, so it's still more work).
It doesn't really make sense to compare these two to each other. If the fixed-size int array fits your use case, then use it. If it doesn't, then don't. You either need a dynamically sized container or not. If you do, performing slower than a fixed-size solution is something you're implicitly giving up.
If this is what causes the slowness of the first code, how can I tell std::vector to grow geometrically?
std::vector is already mandated to grow geometrically; it's the only way to maintain O(1) amortized push_back complexity.
Is the poor performance of std::vector due to not calling realloc a logarithmic number of times?
Your test neither supports that conclusion, nor does it prove the opposite. However, I would assume that reallocation is called a logarithmic number of times unless there is contrary evidence.
Update: Your new test is apparently evidence against your non-logarithmic reallocation hypothesis.
I suspect that this is caused by the vector internally calling realloc too many times.
Update: Your new test shows that some of the difference is due to reallocations... but not all. I suspect that the remainder is due to the fact that optimizer was able to prove (but only in the case of the non-growing) that the array values are unused, and chose to not loop and write them at all. If you were to make sure that the written values are actually used, then I would expect that the non-growing array would have similar optimized performance to the reserving vector.
The difference (between reserving code and non-reserving vector) in optimized build is most likely due to doing more reallocations (compared to no reallocations of the reserved array). Whether the number of reallocations is too much is situational and subjective. The downside of doing fewer reallocations is more wasted space due to overallocation.
Note that the cost of reallocation of large arrays comes primarily from copying of elements, rather than memory allocation itself.
In unoptimized build, there is probably additional linear overhead due to function calls that weren't expanded inline.
how can I tell std::vector to grow geometrically?
Geometric growth is required by the standard. There is no way and no need to tell std::vector to use geometric growth.
Note that calling reserve once would work in this case but not in the more general case in which the number of push_back is not known in advance.
However, a general case in which the number of push_back is not known in advance is a case where the non-growing array isn't even an option and so its performance is irrelevant for that general case.
This isn't comparing geometric growth to arithmetic (or any other) growth. It's comparing pre-allocating all the necessary space to growing the space as needed. So let's start by comparing std::vector to some code that actually does use geometric growth, and use both in ways that put the geometric growth to use1. So, here's a simple class that does geometric growth:
#include <cstddef>   // std::size_t
#include <utility>   // std::swap
#include <stdexcept>

// exception type thrown by at()
struct bad_index : std::out_of_range {
    bad_index() : std::out_of_range("bad index") {}
};

class my_vect {
    int *data;
    std::size_t current_used;
    std::size_t current_alloc;
public:
    my_vect()
        : data(nullptr)
        , current_used(0)
        , current_alloc(0)
    {}

    void push_back(int val) {
        if (nullptr == data) {
            data = new int[1];
            current_alloc = 1;
        }
        else if (current_used == current_alloc) {
            int *temp = new int[current_alloc * 2];
            for (std::size_t i = 0; i < current_used; i++)
                temp[i] = data[i];
            std::swap(temp, data);
            delete [] temp;
            current_alloc *= 2;
        }
        data[current_used++] = val;
    }

    int &at(std::size_t index) {
        if (index >= current_used)
            throw bad_index();
        return data[index];
    }

    int &operator[](std::size_t index) {
        return data[index];
    }

    ~my_vect() { delete [] data; }
};
...and here's some code to exercise it (and do the same with std::vector):
#include <chrono>
#include <iomanip>
#include <iostream>
#include <locale>
#include <vector>

int main() {
    std::locale out("");
    std::cout.imbue(out);
    using namespace std::chrono;

    std::cout << "my_vect\n";
    for (long long size = 100; size <= 1000000000; size *= 10) {
        auto start = high_resolution_clock::now();
        my_vect b;
        for (int i = 0; i < size; i++) {
            b.push_back(i);
        }
        auto stop = high_resolution_clock::now();
        std::cout << "Size: " << std::setw(15) << size << ", Time: " << std::setw(15) << duration_cast<microseconds>(stop - start).count() << " us\n";
    }

    std::cout << "\nstd::vector\n";
    for (long long size = 100; size <= 1000000000; size *= 10) {
        auto start = high_resolution_clock::now();
        std::vector<int> b;
        for (int i = 0; i < size; i++) {
            b.push_back(i);
        }
        auto stop = high_resolution_clock::now();
        std::cout << "Size: " << std::setw(15) << size << ", Time: " << std::setw(15) << duration_cast<microseconds>(stop - start).count() << " us\n";
    }
}
I compiled this with g++ -std=c++14 -O3 my_vect.cpp. When I execute that, I get this result:
my_vect
Size: 100, Time: 8 us
Size: 1,000, Time: 23 us
Size: 10,000, Time: 141 us
Size: 100,000, Time: 950 us
Size: 1,000,000, Time: 8,040 us
Size: 10,000,000, Time: 51,404 us
Size: 100,000,000, Time: 442,713 us
Size: 1,000,000,000, Time: 7,936,013 us
std::vector
Size: 100, Time: 40 us
Size: 1,000, Time: 4 us
Size: 10,000, Time: 29 us
Size: 100,000, Time: 426 us
Size: 1,000,000, Time: 3,730 us
Size: 10,000,000, Time: 41,294 us
Size: 100,000,000, Time: 363,942 us
Size: 1,000,000,000, Time: 5,200,545 us
I could undoubtedly optimize my_vect to keep up with std::vector (e.g., initially allocating space for, say, 256 items would probably be a pretty large help). I haven't attempted enough runs (and statistical analysis) to be at all sure that std::vector is really dependably faster than my_vect either. Nonetheless, this seems to indicate that when we compare apples to apples, we get results that are at least roughly comparable (e.g., within a fairly small, constant factor of each other).
1. As a side note, I feel obliged to point out that this still doesn't really compare apples to apples--but at least as long as we're only instantiating std::vector over int, many of the obvious differences are basically covered up.
This post includes:
- wrapper classes over realloc and mremap that provide reallocation functionality,
- a custom vector class,
- a performance test.
// C++17
#include <benchmark/benchmark.h> // Google benchmark lib, for benchmarking.
#include <new> // For std::bad_alloc.
#include <memory> // For std::allocator_traits, std::uninitialized_move.
#include <cstdlib> // For C heap management API.
#include <cstddef> // For std::size_t, std::max_align_t.
#include <cassert> // For assert.
#include <utility> // For std::forward, std::declval.
namespace linux {
#include <sys/mman.h> // For mmap, mremap, munmap.
#include <errno.h> // For errno.
auto get_errno() noexcept {
return errno;
}
}
/*
 * Allocators.
 * These allocators will have non-standard-compliant behavior if type T's copy ctor has side effects.
 */
// class mrealloc is useful for allocating small amounts of space for
// std::vector.
//
// It can prevent copying of data and memory fragmentation if there is enough
// contiguous memory at the original location.
template <class T>
struct mrealloc {
using pointer = T*;
using value_type = T;
auto allocate(std::size_t len) {
// len counts elements, so scale by sizeof(T) for the byte-oriented malloc.
if (auto ret = std::malloc(len * sizeof(T)))
return static_cast<pointer>(ret);
else
throw std::bad_alloc();
}
auto reallocate(pointer old_ptr, std::size_t old_len, std::size_t len) {
if (auto ret = std::realloc(old_ptr, len * sizeof(T)))
return static_cast<pointer>(ret);
else
throw std::bad_alloc();
}
void deallocate(void *ptr, std::size_t len) noexcept {
std::free(ptr);
}
};
// class mmaprealloc is suitable for large memory use.
//
// It is useful in situations where a std::vector can grow to a huge
// size.
//
// Users can call reserve without worrying about wasting a lot of memory.
//
// It can prevent data copying and memory fragmentation at any time.
template <class T>
struct mmaprealloc {
using pointer = T*;
using value_type = T;
auto allocate(std::size_t len) const
{
return allocate_impl(len, MAP_PRIVATE | MAP_ANONYMOUS);
}
auto reallocate(pointer old_ptr, std::size_t old_len, std::size_t len) const
{
return reallocate_impl(old_ptr, old_len, len, MREMAP_MAYMOVE);
}
void deallocate(pointer ptr, std::size_t len) const noexcept
{
if (ptr == nullptr) return; // nothing mapped yet; munmap(nullptr, 0) would fail
// Don't wrap the call itself in assert(): it would vanish under NDEBUG.
auto ret = linux::munmap(ptr, get_proper_size(len * sizeof(T)));
assert(ret == 0);
(void) ret;
}
protected:
auto allocate_impl(std::size_t _len, int flags) const
{
// mmap signals failure with MAP_FAILED, not a null pointer.
if (auto ret = linux::mmap(nullptr, get_proper_size(_len * sizeof(T)), PROT_READ | PROT_WRITE, flags, -1, 0); ret != MAP_FAILED)
return static_cast<pointer>(ret);
fail();
}
auto reallocate_impl(pointer old_ptr, std::size_t old_len, std::size_t _len, int flags) const
{
if (auto ret = linux::mremap(old_ptr, get_proper_size(old_len * sizeof(T)), get_proper_size(_len * sizeof(T)), flags); ret != MAP_FAILED)
return static_cast<pointer>(ret);
fail();
}
static inline constexpr const std::size_t magic_num = 4096 - 1;
static inline auto get_proper_size(std::size_t len) noexcept -> std::size_t {
return round_to_pagesize(len);
}
static inline auto round_to_pagesize(std::size_t len) noexcept -> std::size_t {
return (len + magic_num) & ~magic_num;
}
[[noreturn]] static inline void fail()
{
auto _errno = linux::get_errno();
// mmap/mremap report resource exhaustion via EAGAIN or ENOMEM.
assert(_errno == EAGAIN || _errno == ENOMEM);
(void) _errno;
throw std::bad_alloc();
}
};
template <class T>
struct mmaprealloc_populate_ver: mmaprealloc<T> {
auto allocate(size_t len) const
{
return mmaprealloc<T>::allocate_impl(len, MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE);
}
};
namespace impl {
struct disambiguation_t2 {};
struct disambiguation_t1 {
constexpr operator disambiguation_t2() const noexcept { return {}; }
};
template <class Alloc>
static constexpr auto has_reallocate(disambiguation_t1) noexcept -> decltype(&Alloc::reallocate, bool{}) { return true; }
template <class Alloc>
static constexpr bool has_reallocate(disambiguation_t2) noexcept { return false; }
template <class Alloc>
static inline constexpr const bool has_reallocate_v = has_reallocate<Alloc>(disambiguation_t1{});
} /* impl */
template <class Alloc>
struct allocator_traits: public std::allocator_traits<Alloc> {
using Base = std::allocator_traits<Alloc>;
using value_type = typename Base::value_type;
using pointer = typename Base::pointer;
using size_t = typename Base::size_type;
static auto reallocate(Alloc &alloc, pointer prev_ptr, size_t prev_len, size_t new_len) {
if constexpr(impl::has_reallocate_v<Alloc>)
return alloc.reallocate(prev_ptr, prev_len, new_len);
else {
auto new_ptr = Base::allocate(alloc, new_len);
// Move existing array
for(auto _prev_ptr = prev_ptr, _new_ptr = new_ptr; _prev_ptr != prev_ptr + prev_len; ++_prev_ptr, ++_new_ptr) {
new (_new_ptr) value_type(std::move(*_prev_ptr));
_new_ptr->~value_type();
}
Base::deallocate(alloc, prev_ptr, prev_len);
return new_ptr;
}
}
};
template <class T, class Alloc = std::allocator<T>>
struct vector: protected Alloc {
using alloc_traits = allocator_traits<Alloc>;
using pointer = typename alloc_traits::pointer;
using size_t = typename alloc_traits::size_type;
pointer ptr = nullptr;
size_t last = 0;
size_t avail = 0;
~vector() noexcept {
// Note: element destructors are never run here; fine for trivially
// destructible types like the char used in the benchmarks below.
alloc_traits::deallocate(*this, ptr, avail);
}
template <class ...Args>
void emplace_back(Args &&...args) {
if (last == avail)
double_the_size();
alloc_traits::construct(*this, &ptr[last++], std::forward<Args>(args)...);
}
void double_the_size() {
if (__builtin_expect(!!(avail), true)) {
avail <<= 1;
ptr = alloc_traits::reallocate(*this, ptr, last, avail);
} else {
avail = 1 << 4;
ptr = alloc_traits::allocate(*this, avail);
}
}
};
template <class T>
static void BM_vector(benchmark::State &state) {
for(auto _: state) {
T c;
for(auto i = state.range(0); --i >= 0; )
c.emplace_back((char)i);
}
}
static constexpr const auto one_GB = 1 << 30;
BENCHMARK_TEMPLATE(BM_vector, vector<char>) ->Range(1 << 3, one_GB);
BENCHMARK_TEMPLATE(BM_vector, vector<char, mrealloc<char>>) ->Range(1 << 3, one_GB);
BENCHMARK_TEMPLATE(BM_vector, vector<char, mmaprealloc<char>>) ->Range(1 << 3, one_GB);
BENCHMARK_TEMPLATE(BM_vector, vector<char, mmaprealloc_populate_ver<char>>)->Range(1 << 3, one_GB);
BENCHMARK_MAIN();
Performance test.
All the performance tests were done on:
Debian 9.4, Linux version 4.9.0-6-amd64 (debian-kernel#lists.debian.org)(gcc version 6.3.0 20170516 (Debian 6.3.0-18+deb9u1) ) #1 SMP Debian 4.9.82-1+deb9u3 (2018-03-02)
Compiled using clang++ -std=c++17 -lbenchmark -lpthread -Ofast main.cc
The command I used to run this test:
sudo cpupower frequency-set --governor performance
./a.out
Here's the output of google benchmark test:
Run on (8 X 1600 MHz CPU s)
CPU Caches:
L1 Data 32K (x4)
L1 Instruction 32K (x4)
L2 Unified 256K (x4)
L3 Unified 6144K (x1)
----------------------------------------------------------------------------------------------------------
Benchmark Time CPU Iterations
----------------------------------------------------------------------------------------------------------
BM_vector<vector<char>>/8 58 ns 58 ns 11476934
BM_vector<vector<char>>/64 324 ns 324 ns 2225396
BM_vector<vector<char>>/512 1527 ns 1527 ns 453629
BM_vector<vector<char>>/4096 7196 ns 7196 ns 96695
BM_vector<vector<char>>/32768 50145 ns 50140 ns 13655
BM_vector<vector<char>>/262144 549821 ns 549825 ns 1245
BM_vector<vector<char>>/2097152 5007342 ns 5006393 ns 146
BM_vector<vector<char>>/16777216 42873349 ns 42873462 ns 15
BM_vector<vector<char>>/134217728 336225619 ns 336097218 ns 2
BM_vector<vector<char>>/1073741824 2642934606 ns 2642803281 ns 1
BM_vector<vector<char, mrealloc<char>>>/8 55 ns 55 ns 12914365
BM_vector<vector<char, mrealloc<char>>>/64 266 ns 266 ns 2591225
BM_vector<vector<char, mrealloc<char>>>/512 1229 ns 1229 ns 567505
BM_vector<vector<char, mrealloc<char>>>/4096 6903 ns 6903 ns 102752
BM_vector<vector<char, mrealloc<char>>>/32768 48522 ns 48523 ns 14409
BM_vector<vector<char, mrealloc<char>>>/262144 399470 ns 399368 ns 1783
BM_vector<vector<char, mrealloc<char>>>/2097152 3048578 ns 3048619 ns 229
BM_vector<vector<char, mrealloc<char>>>/16777216 24426934 ns 24421774 ns 29
BM_vector<vector<char, mrealloc<char>>>/134217728 262355961 ns 262357084 ns 3
BM_vector<vector<char, mrealloc<char>>>/1073741824 2092577020 ns 2092317044 ns 1
BM_vector<vector<char, mmaprealloc<char>>>/8 4285 ns 4285 ns 161498
BM_vector<vector<char, mmaprealloc<char>>>/64 5485 ns 5485 ns 125375
BM_vector<vector<char, mmaprealloc<char>>>/512 8571 ns 8569 ns 80345
BM_vector<vector<char, mmaprealloc<char>>>/4096 24248 ns 24248 ns 28655
BM_vector<vector<char, mmaprealloc<char>>>/32768 165021 ns 165011 ns 4421
BM_vector<vector<char, mmaprealloc<char>>>/262144 1177041 ns 1177048 ns 557
BM_vector<vector<char, mmaprealloc<char>>>/2097152 9229860 ns 9230023 ns 74
BM_vector<vector<char, mmaprealloc<char>>>/16777216 75425704 ns 75426431 ns 9
BM_vector<vector<char, mmaprealloc<char>>>/134217728 607661012 ns 607662273 ns 1
BM_vector<vector<char, mmaprealloc<char>>>/1073741824 4871003928 ns 4870588050 ns 1
BM_vector<vector<char, mmaprealloc_populate_ver<char>>>/8 3956 ns 3956 ns 175037
BM_vector<vector<char, mmaprealloc_populate_ver<char>>>/64 5087 ns 5086 ns 133944
BM_vector<vector<char, mmaprealloc_populate_ver<char>>>/512 8662 ns 8662 ns 80579
BM_vector<vector<char, mmaprealloc_populate_ver<char>>>/4096 23883 ns 23883 ns 29265
BM_vector<vector<char, mmaprealloc_populate_ver<char>>>/32768 158374 ns 158376 ns 4444
BM_vector<vector<char, mmaprealloc_populate_ver<char>>>/262144 1171514 ns 1171522 ns 593
BM_vector<vector<char, mmaprealloc_populate_ver<char>>>/2097152 9297357 ns 9293770 ns 74
BM_vector<vector<char, mmaprealloc_populate_ver<char>>>/16777216 75140789 ns 75141057 ns 9
BM_vector<vector<char, mmaprealloc_populate_ver<char>>>/134217728 636359403 ns 636368640 ns 1
BM_vector<vector<char, mmaprealloc_populate_ver<char>>>/1073741824 4865103542 ns 4864582150 ns 1
I like some features of D, but I'd be interested to know whether they come with a runtime penalty.
To compare, I implemented a simple program that computes scalar products of many short vectors both in C++ and in D. The result is surprising:
D: 18.9 s [see below for final runtime]
C++: 3.8 s
Is C++ really almost five times as fast or did I make a mistake in the D
program?
I compiled C++ with g++ -O3 (gcc-snapshot 2011-02-19) and D with dmd -O (dmd 2.052) on a moderately recent Linux desktop. The results are reproducible over several runs, and the standard deviations are negligible.
Here the C++ program:
#include <iostream>
#include <random>
#include <chrono>
#include <string>
#include <vector>
#include <array>
typedef std::chrono::duration<long, std::ratio<1, 1000>> millisecs;
template <typename _T>
long time_since(std::chrono::time_point<_T>& time) {
long tm = std::chrono::duration_cast<millisecs>( std::chrono::system_clock::now() - time).count();
time = std::chrono::system_clock::now();
return tm;
}
const long N = 20000;
const int size = 10;
typedef int value_type;
typedef long long result_type;
typedef std::vector<value_type> vector_t;
typedef typename vector_t::size_type size_type;
inline value_type scalar_product(const vector_t& x, const vector_t& y) {
value_type res = 0;
size_type siz = x.size();
for (size_type i = 0; i < siz; ++i)
res += x[i] * y[i];
return res;
}
int main() {
auto tm_before = std::chrono::system_clock::now();
// 1. allocate and fill randomly many short vectors
vector_t* xs = new vector_t [N];
for (int i = 0; i < N; ++i) {
xs[i] = vector_t(size);
}
std::cerr << "allocation: " << time_since(tm_before) << " ms" << std::endl;
std::mt19937 rnd_engine;
std::uniform_int_distribution<value_type> runif_gen(-1000, 1000);
for (int i = 0; i < N; ++i)
for (int j = 0; j < size; ++j)
xs[i][j] = runif_gen(rnd_engine);
std::cerr << "random generation: " << time_since(tm_before) << " ms" << std::endl;
// 2. compute all pairwise scalar products:
time_since(tm_before);
result_type avg = 0;
for (int i = 0; i < N; ++i)
for (int j = 0; j < N; ++j)
avg += scalar_product(xs[i], xs[j]);
avg = avg / N*N;
auto time = time_since(tm_before);
std::cout << "result: " << avg << std::endl;
std::cout << "time: " << time << " ms" << std::endl;
}
And here the D version:
import std.stdio;
import std.datetime;
import std.random;
const long N = 20000;
const int size = 10;
alias int value_type;
alias long result_type;
alias value_type[] vector_t;
alias uint size_type;
value_type scalar_product(const ref vector_t x, const ref vector_t y) {
value_type res = 0;
size_type siz = x.length;
for (size_type i = 0; i < siz; ++i)
res += x[i] * y[i];
return res;
}
int main() {
auto tm_before = Clock.currTime();
// 1. allocate and fill randomly many short vectors
vector_t[] xs;
xs.length = N;
for (int i = 0; i < N; ++i) {
xs[i].length = size;
}
writefln("allocation: %i ", (Clock.currTime() - tm_before));
tm_before = Clock.currTime();
for (int i = 0; i < N; ++i)
for (int j = 0; j < size; ++j)
xs[i][j] = uniform(-1000, 1000);
writefln("random: %i ", (Clock.currTime() - tm_before));
tm_before = Clock.currTime();
// 2. compute all pairwise scalar products:
result_type avg = cast(result_type) 0;
for (int i = 0; i < N; ++i)
for (int j = 0; j < N; ++j)
avg += scalar_product(xs[i], xs[j]);
avg = avg / N*N;
writefln("result: %d", avg);
auto time = Clock.currTime() - tm_before;
writefln("scalar products: %i ", time);
return 0;
}
To enable all optimizations and disable all safety checks, compile your D program with the following DMD flags:
-O -inline -release -noboundscheck
EDIT: I've tried your programs with g++, dmd and gdc. dmd does lag behind, but gdc achieves performance very close to g++. The commandline I used was gdmd -O -release -inline (gdmd is a wrapper around gdc which accepts dmd options).
Looking at the assembler listing, it looks like neither dmd nor gdc inlined scalar_product, but g++/gdc did emit MMX instructions, so they might be auto-vectorizing the loop.
One big thing that slows D down is a subpar garbage collection implementation. Benchmarks that don't heavily stress the GC will show very similar performance to C and C++ code compiled with the same compiler backend. Benchmarks that do heavily stress the GC will show that D performs abysmally. Rest assured, though, this is a single (albeit severe) quality-of-implementation issue, not a baked-in guarantee of slowness. Also, D gives you the ability to opt out of GC and tune memory management in performance-critical bits, while still using it in the less performance-critical 95% of your code.
I've put some effort into improving GC performance lately and the results have been rather dramatic, at least on synthetic benchmarks. Hopefully these changes will be integrated into one of the next few releases and will mitigate the issue.
This is a very instructive thread, thanks for all the work to the OP and helpers.
One note - this test is not assessing the general question of abstraction/feature penalty or even that of backend quality. It focuses on virtually one optimization (loop optimization). I think it's fair to say that gcc's backend is somewhat more refined than dmd's, but it would be a mistake to assume that the gap between them is as large for all tasks.
Definitely seems like a quality-of-implementation issue.
I ran some tests with the OP's code and made some changes. I actually got D going faster for LDC/clang++, operating on the assumption that arrays must be allocated dynamically (xs and associated scalars). See below for some numbers.
Questions for the OP
Is it intentional that the same seed be used for each iteration of C++, while not so for D?
Setup
I have tweaked the original D source (dubbed scalar.d) to make it portable between platforms. This only involved changing the type of the numbers used to access and modify the size of arrays.
After this, I made the following changes:
Used uninitializedArray to avoid default inits for scalars in xs (probably made the biggest difference). This is important because D normally default-inits everything silently, which C++ does not.
Factored out printing code and replaced writefln with writeln
Changed imports to be selective
Used pow operator (^^) instead of manual multiplication for final step of calculating average
Removed the size_type and replaced appropriately with the new index_type alias
...thus resulting in scalar2.d (pastebin):
import std.stdio : writeln;
import std.datetime : Clock, Duration;
import std.array : uninitializedArray;
import std.random : uniform;
alias result_type = long;
alias value_type = int;
alias vector_t = value_type[];
alias index_type = typeof(vector_t.init.length);// Make index integrals portable - Linux is ulong, Win8.1 is uint
immutable long N = 20000;
immutable int size = 10;
value_type scalar_product(in ref vector_t x, in ref vector_t y) { // "in" is the same as "const" here
value_type res = 0;
for(index_type i = 0; i < size; ++i)
res += x[i] * y[i];
return res;
}
int main() {
auto tm_before = Clock.currTime;
auto countElapsed(in string taskName) { // Factor out printing code
writeln(taskName, ": ", Clock.currTime - tm_before);
tm_before = Clock.currTime;
}
// 1. allocate and fill randomly many short vectors
vector_t[] xs = uninitializedArray!(vector_t[])(N);// Avoid default inits of inner arrays
for(index_type i = 0; i < N; ++i)
xs[i] = uninitializedArray!(vector_t)(size);// Avoid more default inits of values
countElapsed("allocation");
for(index_type i = 0; i < N; ++i)
for(index_type j = 0; j < size; ++j)
xs[i][j] = uniform(-1000, 1000);
countElapsed("random");
// 2. compute all pairwise scalar products:
result_type avg = 0;
for(index_type i = 0; i < N; ++i)
for(index_type j = 0; j < N; ++j)
avg += scalar_product(xs[i], xs[j]);
avg /= N ^^ 2;// Replace manual multiplication with pow operator
writeln("result: ", avg);
countElapsed("scalar products");
return 0;
}
After testing scalar2.d (which prioritized optimization for speed), out of curiosity I replaced the loops in main with foreach equivalents, and called it scalar3.d (pastebin):
import std.stdio : writeln;
import std.datetime : Clock, Duration;
import std.array : uninitializedArray;
import std.random : uniform;
alias result_type = long;
alias value_type = int;
alias vector_t = value_type[];
alias index_type = typeof(vector_t.init.length);// Make index integrals portable - Linux is ulong, Win8.1 is uint
immutable long N = 20000;
immutable int size = 10;
// Replaced for loops with appropriate foreach versions
value_type scalar_product(in ref vector_t x, in ref vector_t y) { // "in" is the same as "const" here
value_type res = 0;
for(index_type i = 0; i < size; ++i)
res += x[i] * y[i];
return res;
}
int main() {
auto tm_before = Clock.currTime;
auto countElapsed(in string taskName) { // Factor out printing code
writeln(taskName, ": ", Clock.currTime - tm_before);
tm_before = Clock.currTime;
}
// 1. allocate and fill randomly many short vectors
vector_t[] xs = uninitializedArray!(vector_t[])(N);// Avoid default inits of inner arrays
foreach(ref x; xs)
x = uninitializedArray!(vector_t)(size);// Avoid more default inits of values
countElapsed("allocation");
foreach(ref x; xs)
foreach(ref val; x)
val = uniform(-1000, 1000);
countElapsed("random");
// 2. compute all pairwise scalar products:
result_type avg = 0;
foreach(const ref x; xs)
foreach(const ref y; xs)
avg += scalar_product(x, y);
avg /= N ^^ 2;// Replace manual multiplication with pow operator
writeln("result: ", avg);
countElapsed("scalar products");
return 0;
}
I compiled each of these tests using an LLVM-based compiler, since LDC seems to be the best option for D compilation in terms of performance. On my x86_64 Arch Linux installation I used the following packages:
clang 3.6.0-3
ldc 1:0.15.1-4
dtools 2.067.0-2
I used the following commands to compile each:
C++: clang++ scalar.cpp -o"scalar.cpp.exe" -std=c++11 -O3
D: rdmd --compiler=ldc2 -O3 -boundscheck=off <sourcefile>
Results
The results (screenshot of raw console output) of each version of the source as follows:
scalar.cpp (original C++):
allocation: 2 ms
random generation: 12 ms
result: 29248300000
time: 2582 ms
C++ sets the standard at 2582 ms.
scalar.d (modified OP source):
allocation: 5 ms, 293 μs, and 5 hnsecs
random: 10 ms, 866 μs, and 4 hnsecs
result: 53237080000
scalar products: 2 secs, 956 ms, 513 μs, and 7 hnsecs
This ran for ~2957 ms. Slower than the C++ implementation, but not too much.
scalar2.d (index/length type change and uninitializedArray optimization):
allocation: 2 ms, 464 μs, and 2 hnsecs
random: 5 ms, 792 μs, and 6 hnsecs
result: 59
scalar products: 1 sec, 859 ms, 942 μs, and 9 hnsecs
In other words, ~1860 ms. So far this is in the lead.
scalar3.d (foreaches):
allocation: 2 ms, 911 μs, and 3 hnsecs
random: 7 ms, 567 μs, and 8 hnsecs
result: 189
scalar products: 2 secs, 182 ms, and 366 μs
~2182 ms is slower than scalar2.d, but faster than the C++ version.
Conclusion
With the correct optimizations, the D implementation actually went faster than its equivalent C++ implementation using the LLVM-based compilers available. The current gap between D and C++ for most applications seems only to be based on limitations of current implementations.
dmd is the reference implementation of the language and thus most work is put into the frontend to fix bugs rather than optimizing the backend.
"in" is faster in your case because you are using dynamic arrays, which are reference types. With ref you introduce another level of indirection (which is normally used to alter the array itself and not only the contents).
Vectors are usually implemented with structs where const ref makes perfect sense. See smallptD vs. smallpt for a real-world example featuring loads of vector operations and randomness.
Note that 64-bit can also make a difference. I once missed that on x64 gcc compiles 64-bit code while dmd still defaults to 32-bit (this will change when the 64-bit codegen matures). There was a remarkable speedup with "dmd -m64 ...".
Whether C++ or D is faster is likely to be highly dependent on what you're doing. I would think that when comparing well-written C++ to well-written D code, they would generally either be of similar speed, or C++ would be faster, but what the particular compiler manages to optimize could have a big effect completely aside from the language itself.
However, there are a few cases where D stands a good chance of beating C++ for speed. The main one which comes to mind would be string processing. Thanks to D's array slicing capabilities, strings (and arrays in general) can be processed much faster than you can readily do in C++. For D1, Tango's XML processor is extremely fast, thanks primarily to D's array slicing capabilities (and hopefully D2 will have a similarly fast XML parser once the one that's currently being worked on for Phobos has been completed). So, ultimately whether D or C++ is going to be faster is going to be very dependent on what you're doing.
Now, I am surprised that you're seeing such a difference in speed in this particular case, but it is the sort of thing that I would expect to improve as dmd improves. Using gdc might yield better results and would likely be a closer comparison of the language itself (rather than the backend) given that it's gcc-based. But it wouldn't surprise me at all if there are a number of things which could be done to speed up the code that dmd generates. I don't think that there's much question that gcc is more mature than dmd at this point. And code optimizations are one of the prime fruits of code maturity.
Ultimately, what matters is how well dmd performs for your particular application, but I do agree that it would definitely be nice to know how well C++ and D compare in general. In theory, they should be pretty much the same, but it really depends on the implementation. I think that a comprehensive set of benchmarks would be required to really test how well the two presently compare however.
You can write C code in D, so as far as which is faster, it will depend on a lot of things:
What compiler you use
What feature you use
how aggressively you optimize
Differences in the first aren't fair to drag in. The second might give C++ an advantage as it, if anything, has fewer heavy features. The third is the fun one: D code is in some ways easier to optimize because in general it is easier to understand. It also supports a large degree of generative programming, allowing verbose and repetitive but fast code to be written in shorter form.
Seems like a quality of implementation issue. For example, here's what I've been testing with:
import std.datetime, std.stdio, std.random;
version = ManualInline;
immutable N = 20000;
immutable Size = 10;
alias int value_type;
alias long result_type;
alias value_type[] vector_type;
result_type scalar_product(in vector_type x, in vector_type y)
in
{
assert(x.length == y.length);
}
body
{
result_type result = 0;
foreach(i; 0 .. x.length)
result += x[i] * y[i];
return result;
}
void main()
{
auto startTime = Clock.currTime();
// 1. allocate vectors
vector_type[] vectors = new vector_type[N];
foreach(ref vec; vectors)
vec = new value_type[Size];
auto time = Clock.currTime() - startTime;
writefln("allocation: %s ", time);
startTime = Clock.currTime();
// 2. randomize vectors
foreach(ref vec; vectors)
foreach(ref e; vec)
e = uniform(-1000, 1000);
time = Clock.currTime() - startTime;
writefln("random: %s ", time);
startTime = Clock.currTime();
// 3. compute all pairwise scalar products
result_type avg = 0;
foreach(vecA; vectors)
foreach(vecB; vectors)
{
version(ManualInline)
{
result_type result = 0;
foreach(i; 0 .. vecA.length)
result += vecA[i] * vecB[i];
avg += result;
}
else
{
avg += scalar_product(vecA, vecB);
}
}
avg = avg / (N * N);
time = Clock.currTime() - startTime;
writefln("scalar products: %s ", time);
writefln("result: %s", avg);
}
With ManualInline defined I get 28 seconds, but without it I get 32. So the compiler isn't even inlining this simple function, which I think it clearly should.
(My command line is dmd -O -noboundscheck -inline -release ....)