Exceptions, move semantics and optimizations: at compiler's mercy (MSVC2010)? - c++

While upgrading my old exception class hierarchy to use some C++11 features, I ran some speed tests and came across results that are somewhat frustrating. All of this was done with the x64 MSVC++ 2010 compiler at maximum speed optimization (/O2).
Two very simple structs, both with bitwise copy semantics: one without a move assignment operator (why would you need one?), the other with one. Two simple inlined functions return newly created instances of these structs by value, which get assigned to a local variable. Also, notice the try/catch block around everything. Here is the code:
#include <iostream>
#include <windows.h>
struct TFoo
{
unsigned long long int m0;
unsigned long long int m1;
TFoo( unsigned long long int f ) : m0( f ), m1( f / 2 ) {}
};
struct TBar
{
unsigned long long int m0;
unsigned long long int m1;
TBar( unsigned long long int f ) : m0( f ), m1( f / 2 ) {}
TBar & operator=( TBar && f )
{
m0 = f.m0;
m1 = f.m1;
f.m0 = f.m1 = 0;
return ( *this );
}
};
TFoo MakeFoo( unsigned long long int f )
{
return ( TFoo( f ) );
}
TBar MakeBar( unsigned long long int f )
{
return ( TBar( f ) );
}
int main( void )
{
try
{
unsigned long long int lMin = 0;
unsigned long long int lMax = 20000000;
LARGE_INTEGER lStart = { 0 };
LARGE_INTEGER lEnd = { 0 };
TFoo lFoo( 0 );
TBar lBar( 0 );
::QueryPerformanceCounter( &lStart );
for( auto i = lMin; i < lMax; i++ )
{
lFoo = MakeFoo( i );
}
::QueryPerformanceCounter( &lEnd );
std::cout << "lFoo = ( " << lFoo.m0 << " , " << lFoo.m1 << " )\t\tMakeFoo count : " << lEnd.QuadPart - lStart.QuadPart << std::endl;
::QueryPerformanceCounter( &lStart );
for( auto i = lMin; i < lMax; i++ )
{
lBar = MakeBar( i );
}
::QueryPerformanceCounter( &lEnd );
std::cout << "lBar = ( " << lBar.m0 << " , " << lBar.m1 << " )\t\tMakeBar count : " << lEnd.QuadPart - lStart.QuadPart << std::endl;
}
catch( ... ){}
return ( 0 );
}
Program output:
lFoo = ( 19999999 , 9999999 ) MakeFoo count : 428652
lBar = ( 19999999 , 9999999 ) MakeBar count : 74518
Assembler for both loops (showing surrounding counter calls ) :
//- MakeFoo loop START --------------------------------
00000001`3f4388aa 488d4810 lea rcx,[rax+10h]
00000001`3f4388ae ff1594db0400 call qword ptr [Prototype_Console!_imp_QueryPerformanceCounter (00000001`3f486448)]
00000001`3f4388b4 448bdf mov r11d,edi
00000001`3f4388b7 48897c2428 mov qword ptr [rsp+28h],rdi
00000001`3f4388bc 0f1f4000 nop dword ptr [rax]
00000001`3f4388c0 4981fb002d3101 cmp r11,1312D00h
00000001`3f4388c7 732a jae Prototype_Console!main+0x83 (00000001`3f4388f3)
00000001`3f4388c9 4c895c2450 mov qword ptr [rsp+50h],r11
00000001`3f4388ce 498bc3 mov rax,r11
00000001`3f4388d1 48d1e8 shr rax,1
00000001`3f4388d4 4889442458 mov qword ptr [rsp+58h],rax // these 3 lines
00000001`3f4388d9 0f28442450 movaps xmm0,xmmword ptr [rsp+50h] // are of interest
00000001`3f4388de 660f7f442430 movdqa xmmword ptr [rsp+30h],xmm0 // see MakeBar
00000001`3f4388e4 49ffc3 inc r11
00000001`3f4388e7 4c895c2428 mov qword ptr [rsp+28h],r11
00000001`3f4388ec 4c8b6c2438 mov r13,qword ptr [rsp+38h] // this one too
00000001`3f4388f1 ebcd jmp Prototype_Console!main+0x50 (00000001`3f4388c0)
00000001`3f4388f3 488d8c24c0000000 lea rcx,[rsp+0C0h]
00000001`3f4388fb ff1547db0400 call qword ptr [Prototype_Console!_imp_QueryPerformanceCounter (00000001`3f486448)]
//- MakeFoo loop END --------------------------------
//- MakeBar loop START --------------------------------
00000001`3f4389d1 488d8c24c8000000 lea rcx,[rsp+0C8h]
00000001`3f4389d9 ff1569da0400 call qword ptr [Prototype_Console!_imp_QueryPerformanceCounter (00000001`3f486448)]
00000001`3f4389df 4c8bdf mov r11,rdi
00000001`3f4389e2 48897c2440 mov qword ptr [rsp+40h],rdi
00000001`3f4389e7 4981fb002d3101 cmp r11,1312D00h
00000001`3f4389ee 7322 jae Prototype_Console!main+0x1a2 (00000001`3f438a12)
00000001`3f4389f0 4c895c2478 mov qword ptr [rsp+78h],r11
00000001`3f4389f5 498bf3 mov rsi,r11
00000001`3f4389f8 48d1ee shr rsi,1
00000001`3f4389fb 4d8be3 mov r12,r11 // these 3 lines
00000001`3f4389fe 4c895c2468 mov qword ptr [rsp+68h],r11 // are of interest
00000001`3f438a03 48897c2478 mov qword ptr [rsp+78h],rdi // see MakeFoo
00000001`3f438a08 49ffc3 inc r11
00000001`3f438a0b 4c895c2440 mov qword ptr [rsp+40h],r11
00000001`3f438a10 ebd5 jmp Prototype_Console!main+0x177 (00000001`3f4389e7)
00000001`3f438a12 488d8c24c0000000 lea rcx,[rsp+0C0h]
00000001`3f438a1a ff1528da0400 call qword ptr [Prototype_Console!_imp_QueryPerformanceCounter (00000001`3f486448)]
//- MakeBar loop END --------------------------------
Both times are the same if I remove the try/catch block. But in its presence, the compiler clearly optimizes the code better for the struct with the redundant move operator=. Also, the MakeFoo time depends on the size of TFoo and its layout but is, in general, several times worse than for MakeBar, whose time does not depend on small size changes.
Questions:
Is this a compiler-specific quirk of MSVC++ 2010 (could someone check with GCC)?
Is it because the compiler has to preserve the temporary until the call finishes, so it cannot "rip it apart" in the case of MakeFoo, whereas for MakeBar it knows that we allow move semantics, "rips it apart", and generates faster code?
Can I expect the same behavior for similar code without a try/catch block, but in more complicated scenarios?

Your test is flawed. When compiled with /O2 /EHsc, it runs to completion in a fraction of a second, and there is high variability in the results of the test.
I retried the same test, but ran it for 100x as many iterations, with the following results (the results were similar across several runs of the test):
lFoo = ( 1999999999 , 999999999 ) MakeFoo count : 16584927
lBar = ( 1999999999 , 999999999 ) MakeBar count : 16613002
Your test does not show any difference between the performance of assignment of the two types.

Related

DWCAS-alternative with no help of the kernel

I just wanted to test whether my compiler recognizes ...
atomic<pair<uintptr_t, uintptr_t>>
... and uses DWCASes on it (like x86-64 lock cmpxchg16b) or whether it supplements the pair with a usual lock.
So I first wrote a minimal program with a single noinline function which does a compare-and-swap on an atomic pair. The compiler generated a lot of code for this which I didn't understand, and I didn't see any LOCK-prefixed instructions in it. I was curious whether the implementation places a lock within the atomic, so I printed the sizeof of the above atomic pair: 24 on a 64-bit platform, so obviously without a lock.
Finally I wrote a program in which all the threads my system has (Ryzen Threadripper, 64 cores, Win10, SMT off) increment both portions of a single atomic pair a predefined number of times. Then I calculated the time per increment in nanoseconds. The time is rather high, about 20,000 ns for each successful increment, so at first it looked to me as if there was a lock I had overlooked; but that couldn't be true with a sizeof of 24 bytes for this atomic. And when I looked at the process viewer, all 64 cores were at nearly 100% user CPU time the whole run, so there couldn't be any kernel intervention.
So is there anyone here smarter than me who can identify what this DWCAS substitute does from the assembly dump?
Here's my test-code:
#include <iostream>
#include <atomic>
#include <utility>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <chrono>
#include <vector>
using namespace std;
using namespace chrono;
struct uip_pair
{
uip_pair() = default;
uip_pair( uintptr_t first, uintptr_t second ) :
first( first ),
second( second )
{
}
uintptr_t first, second;
};
using atomic_pair = atomic<uip_pair>;
int main()
{
cout << "sizeof(atomic<pair<uintptr_t, uintptr_t>>): " << sizeof(atomic_pair) << endl;
atomic_pair ap( uip_pair( 0, 0 ) );
cout << "atomic<pair<uintptr_t, uintptr_t>>::is_lock_free: " << ap.is_lock_free() << endl;
mutex mtx;
unsigned nThreads = thread::hardware_concurrency();
unsigned ready = nThreads;
condition_variable cvReady;
bool run = false;
condition_variable cvRun;
atomic_int64_t sumDur = 0;
auto theThread = [&]( size_t n )
{
unique_lock<mutex> lock( mtx );
if( !--ready )
cvReady.notify_one();
cvRun.wait( lock, [&]() -> bool { return run; } );
lock.unlock();
auto start = high_resolution_clock::now();
uip_pair cmp = ap.load( memory_order_relaxed );
for( ; n--; )
while( !ap.compare_exchange_weak( cmp, uip_pair( cmp.first + 1, cmp.second + 1 ), memory_order_relaxed, memory_order_relaxed ) );
sumDur.fetch_add( duration_cast<nanoseconds>( high_resolution_clock::now() - start ).count(), memory_order_relaxed );
lock.lock();
};
vector<jthread> threads;
threads.reserve( nThreads );
static size_t const ROUNDS = 100'000;
for( unsigned t = nThreads; t--; )
threads.emplace_back( theThread, ROUNDS );
unique_lock<mutex> lock( mtx );
cvReady.wait( lock, [&]() -> bool { return !ready; } );
run = true;
cvRun.notify_all();
lock.unlock();
for( jthread &thr : threads )
thr.join();
cout << (double)sumDur / ((double)nThreads * ROUNDS) << endl;
uip_pair p = ap.load( memory_order_relaxed );
cout << "synch: " << (p.first == p.second ? "yes" : "no") << endl;
}
[EDIT]: I've extracted the compare_exchange_weak call into a noinline function and disassembled the code:
struct uip_pair
{
uip_pair() = default;
uip_pair( uintptr_t first, uintptr_t second ) :
first( first ),
second( second )
{
}
uintptr_t first, second;
};
using atomic_pair = atomic<uip_pair>;
#if defined(_MSC_VER)
#define NOINLINE __declspec(noinline)
#elif defined(__GNUC__) || defined(__clang__)
#define NOINLINE __attribute__((noinline))
#endif
NOINLINE
bool cmpXchg( atomic_pair &ap, uip_pair &cmp, uip_pair xchg )
{
return ap.compare_exchange_weak( cmp, xchg, memory_order_relaxed, memory_order_relaxed );
}
mov eax, 1
mov r10, rcx
mov r9d, eax
xchg DWORD PTR [rcx], eax
test eax, eax
je SHORT label8
label1:
mov eax, DWORD PTR [rcx]
test eax, eax
je SHORT label7
label2:
mov eax, r9d
test r9d, r9d
je SHORT label5
label4:
pause
sub eax, 1
jne SHORT label4
cmp r9d, 64
jl SHORT label5
lea r9d, QWORD PTR [rax+64]
jmp SHORT label6
label5:
add r9d, r9d
label6:
mov eax, DWORD PTR [rcx]
test eax, eax
jne SHORT label2
label7:
mov eax, 1
xchg DWORD PTR [rcx], eax
test eax, eax
jne SHORT label1
label8:
mov rax, QWORD PTR [rcx+8]
sub rax, QWORD PTR [rdx]
jne SHORT label9
mov rax, QWORD PTR [rcx+16]
sub rax, QWORD PTR [rdx+8]
label9:
test rax, rax
sete al
test al, al
je SHORT label10
movups xmm0, XMMWORD PTR [r8]
movups XMMWORD PTR [rcx+8], xmm0
xor ecx, ecx
xchg DWORD PTR [r10], ecx
ret
label10:
movups xmm0, XMMWORD PTR [rcx+8]
xor ecx, ecx
movups XMMWORD PTR [rdx], xmm0
xchg DWORD PTR [r10], ecx
ret
Maybe someone understands the disassembly. Remember that XCHG with a memory operand is implicitly LOCK'ed on x86. It seems to me that MSVC uses some kind of software transactional memory here. I can extend the shared structure embedded in the atomic arbitrarily, but the size difference is always 8 bytes; so MSVC always uses some kind of STM.
As Nate pointed out in the comments, it is a spinlock.
You can look up the source; it ships with the compiler and is available on GitHub.
If you build unoptimized debug, you can step into this source during interactive debugging!
There's a member variable called _Spinlock.
And here's the locking function:
#if 1 // TRANSITION, ABI, GH-1151
inline void _Atomic_lock_acquire(long& _Spinlock) noexcept {
#if defined(_M_IX86) || (defined(_M_X64) && !defined(_M_ARM64EC))
// Algorithm from Intel(R) 64 and IA-32 Architectures Optimization Reference Manual, May 2020
// Example 2-4. Contended Locks with Increasing Back-off Example - Improved Version, page 2-22
// The code in mentioned manual is covered by the 0BSD license.
int _Current_backoff = 1;
const int _Max_backoff = 64;
while (_InterlockedExchange(&_Spinlock, 1) != 0) {
while (__iso_volatile_load32(&reinterpret_cast<int&>(_Spinlock)) != 0) {
for (int _Count_down = _Current_backoff; _Count_down != 0; --_Count_down) {
_mm_pause();
}
_Current_backoff = _Current_backoff < _Max_backoff ? _Current_backoff << 1 : _Max_backoff;
}
}
#elif defined(_M_ARM) || defined(_M_ARM64) || defined(_M_ARM64EC)
while (_InterlockedExchange(&_Spinlock, 1) != 0) { // TRANSITION, GH-1133: _InterlockedExchange_acq
while (__iso_volatile_load32(&reinterpret_cast<int&>(_Spinlock)) != 0) {
__yield();
}
}
#else // ^^^ defined(_M_ARM) || defined(_M_ARM64) || defined(_M_ARM64EC) ^^^
#error Unsupported hardware
#endif
}
(disclosure: I brought this increasing backoff from the Intel manual into there; it was just an xchg loop before: issue, PR)
Spinlock use is known to be suboptimal; a mutex that does a kernel wait should have been used instead. The problem with a spinlock is that in the rare case when a context switch happens while a low-priority thread holds the spinlock, it takes a while to release that spinlock, as the scheduler is not aware of a high-priority thread waiting on it.
Sure, not using cmpxchg16b is also suboptimal. Still, for bigger atomics a non-lock-free mechanism has to be used. (There was no deliberate decision to avoid cmpxchg16b; it is just a consequence of ABI compatibility going back to Visual Studio 2015.)
There's an issue about making this better, which will hopefully be addressed with the next ABI break: https://github.com/microsoft/STL/issues/1151
As for transactional memory: it might make sense to use hardware transactional memory there. I can speculate that Intel RTM could possibly be implemented there with intrinsics, or there may be some future OS API for it (like an enhanced SRWLOCK), but it is likely that nobody will want more complexity there, as a non-lock-free atomic is a compatibility facility, not something you would deliberately want to use.

Will template function typedef specifier be properly inlined when creating each instance of template function?

I have written a function that operates on several streams of data at the same time and produces an output result which is put into a destination stream. A huge amount of time has been put into optimizing the performance of this function (OpenMP, intrinsics, etc.), and it performs beautifully.
There is a lot of math involved here; needless to say, it is a very long function.
Now I want to implement the same function with replacement math code for each instance, without writing each version of this function by hand. I want to differentiate between the different instances using only #defines or inlined functions (the code has to be inlined in each version).
I went for templates, but templates only take type (and non-type) parameters, and I realized that #defines can't be used here. The remaining solution is inlined math functions, so the simplified idea is to create a header like this:
'alm_quasimodo.h':
#pragma once
typedef struct ALM_DATA
{
int l, t, r, b;
int scan;
BYTE* data;
} ALM_DATA;
typedef BYTE (*MATH_FX)(BYTE&, BYTE&);
// etc
inline BYTE math_a1(BYTE& A, BYTE& B){ return ((BYTE)((B > A) ? B:A)); }
inline BYTE math_a2(BYTE& A, BYTE& B){ return ((BYTE)(255 - ((long)((long)(255 - A) * (255 - B)) >> 8))); }
inline BYTE math_a3(BYTE& A, BYTE& B){ return ((BYTE)((B < 128)?(2*(((long)A>>1)+64))*((float)B/255):(255-(2*(255-(((long)A>>1)+64))*(float)(255-B)/255)))); }
// etc
template <typename MATH>
inline int const template_math_av (MATH math, ALM_DATA& a, ALM_DATA& b)
{
// ultra simplified version of very complex code
for (int y = a.t; y <= a.b; y++)
{
int yoffset = y * a.scan;
for (int x = a.l; x <= a.r; x++)
{
int xoffset = yoffset + x;
a.data[xoffset] = math(a.data[xoffset], b.data[xoffset]);
}
}
return 0;
}
ALM_API int math_caller(int condition, ALM_DATA& a, ALM_DATA& b);
and math_caller is defined in 'alm_quasimodo.cpp' as follows:
#include "stdafx.h"
#include "alm_quazimodo.h"
ALM_API int math_caller(int condition, ALM_DATA& a, ALM_DATA& b)
{
switch(condition)
{
case 1: return template_math_av<MATH_FX>(math_a1, a, b);
break;
case 2: return template_math_av<MATH_FX>(math_a2, a, b);
break;
case 3: return template_math_av<MATH_FX>(math_a3, a, b);
break;
// etc
}
return -1;
}
The main concern here is optimization: mainly inlining of the MATH function code without breaking the existing optimizations of the original code, and without writing a separate instance of the function for each math operation, of course ;)
So does this template properly inline all the math functions?
And any suggestions on how to optimize this function template?
If nothing else, thanks for reading this lengthy question.
It all depends on your compiler, the optimization level, and how and where the math_a1 to math_a3 functions are defined.
Usually the compiler can optimize this if the functions in question are inline functions in the same compilation unit as the rest of the code.
If this doesn't happen for you, you may want to consider functors instead of functions.
Here are some simple examples I experimented with. You can do the same for your function and check the behavior of different compilers.
For my example, GCC 7.3 and clang 6.0 are pretty good at optimizing out the function calls (provided they see the definition of the function, of course). However, somewhat surprisingly, ICC 18.0.0 is only able to optimize out functors and closures; even inline functions give it some trouble.
Just to have some code here in case the link stops working in the future.
For the following code:
template <typename T, int size, typename Closure>
T accumulate(T (&array)[size], T init, Closure closure) {
for (int i = 0; i < size; ++i) {
init = closure(init, array[i]);
}
return init;
}
int sum(int x, int y) { return x + y; }
inline int sub_inline(int x, int y) { return x - y; }
struct mul_functor {
int operator ()(int x, int y) const { return x * y; }
};
extern int extern_operation(int x, int y);
int accumulate_function(int (&array)[5]) {
return accumulate(array, 0, sum);
}
int accumulate_inline(int (&array)[5]) {
return accumulate(array, 0, sub_inline);
}
int accumulate_functor(int (&array)[5]) {
return accumulate(array, 1, mul_functor());
}
int accumulate_closure(int (&array)[5]) {
return accumulate(array, 0, [](int x, int y) { return x | y; });
}
int accumulate_exetern(int (&array)[5]) {
return accumulate(array, 0, extern_operation);
}
GCC 7.3 (x86) produces the following assembly:
sum(int, int):
lea eax, [rdi+rsi]
ret
accumulate_function(int (&) [5]):
mov eax, DWORD PTR [rdi+4]
add eax, DWORD PTR [rdi]
add eax, DWORD PTR [rdi+8]
add eax, DWORD PTR [rdi+12]
add eax, DWORD PTR [rdi+16]
ret
accumulate_inline(int (&) [5]):
mov eax, DWORD PTR [rdi]
neg eax
sub eax, DWORD PTR [rdi+4]
sub eax, DWORD PTR [rdi+8]
sub eax, DWORD PTR [rdi+12]
sub eax, DWORD PTR [rdi+16]
ret
accumulate_functor(int (&) [5]):
mov eax, DWORD PTR [rdi]
imul eax, DWORD PTR [rdi+4]
imul eax, DWORD PTR [rdi+8]
imul eax, DWORD PTR [rdi+12]
imul eax, DWORD PTR [rdi+16]
ret
accumulate_closure(int (&) [5]):
mov eax, DWORD PTR [rdi+4]
or eax, DWORD PTR [rdi+8]
or eax, DWORD PTR [rdi+12]
or eax, DWORD PTR [rdi]
or eax, DWORD PTR [rdi+16]
ret
accumulate_exetern(int (&) [5]):
push rbp
push rbx
lea rbp, [rdi+20]
mov rbx, rdi
xor eax, eax
sub rsp, 8
.L8:
mov esi, DWORD PTR [rbx]
mov edi, eax
add rbx, 4
call extern_operation(int, int)
cmp rbx, rbp
jne .L8
add rsp, 8
pop rbx
pop rbp
ret

Excessive stack usage for simple function in debug build

I have a simple class using a kind of ATL database access.
All functions are defined in a header file.
The problematic functions all do the same thing. There are some macros in use. The generated code looks like this:
void InitBindings()
{
if (sName) // Static global char*
m_sTableName = sName; // Save into member
{ AddCol("Name", some_constant_data... _GetOleDBType(...), ...); };
{ AddCol("Name1", some_other_constant_data_GetOleDBType(...), ...); };
...
}
AddCol returns a reference to a structure, but as you can see it is ignored.
When I look at the assembler code of a function that makes 6 AddCol calls, I can see that the function requires 2176 bytes of stack space. I have functions that require 20 KB and more. And in the debugger I can see that the stack isn't used at all (all initialized to 0xCC and never touched).
See the assembler code at the end.
The problem can be seen with VS 2015 and VS 2017, and only in Debug mode.
In Release mode the function reserves no extra stack space at all.
The only rule I can see is: more AddCol calls cause more stack to be reserved. Approximately 500 bytes are reserved per AddCol call.
Again: the function returns no object; it returns a reference to the binding information.
I already used the following pragmas in front of the function (but inside the class definition in the header):
__pragma(runtime_checks("", off)) __pragma(optimize("ts", on)) __pragma(strict_gs_check(push, off))
But to no avail. These pragmas should turn optimization on and switch off runtime checks and stack checks. How can I reduce this unneeded stack space that is allocated? In some cases I can see stack overflows in the debug version when these functions are used. No problems in the release version.
; 325 : BIND_BEGIN(CMasterData, _T("tblMasterData"))
push ebp
mov ebp, esp
sub esp, 2176 ; 00000880H
push ebx
push esi
push edi
mov DWORD PTR _this$[ebp], ecx
mov eax, OFFSET ??_C#_1BM#GOLNKAI#?$AAt?$AAb?$AAl?$AAM?$AAa?$AAs?$AAt?$AAe?$AAr?$AAD?$AAa?$AAt?$AAa?$AA?$AA#
test eax, eax
je SHORT $LN2#InitBindin
push OFFSET ??_C#_1BM#GOLNKAI#?$AAt?$AAb?$AAl?$AAM?$AAa?$AAs?$AAt?$AAe?$AAr?$AAD?$AAa?$AAt?$AAa?$AA?$AA#
mov ecx, DWORD PTR _this$[ebp]
add ecx, 136 ; 00000088H
call DWORD PTR __imp_??4?$CStringT#_WV?$StrTraitMFC_DLL#_WV?$ChTraitsCRT#_W#ATL#####ATL##QAEAAV01#PB_W#Z
$LN2#InitBindin:
; 326 : // Columns:
; 327 : B$C_IDENT (_T("Id"), m_lId);
push 0
push 0
push 1
push 4
push 0
call ?_GetOleDBType#ATL##YAGAAJ#Z ; ATL::_GetOleDBType
add esp, 4
movzx eax, ax
push eax
push 0
push OFFSET ??_C#_15NCCOGFKM#?$AAI?$AAd?$AA?$AA#
mov ecx, DWORD PTR _this$[ebp]
call ?AddCol#CDBAccess#DB##QAEAAUS_BIND#2#PB_WKGKW4TYPE#32#0_N#Z ; DB::CDBAccess::AddCol
; 328 : B$C (_T("Name"), m_szName);
push 0
push 0
push 0
push 122 ; 0000007aH
mov eax, 4
push eax
call ?_GetOleDBType#ATL##YAGQA_W#Z ; ATL::_GetOleDBType
add esp, 4
movzx ecx, ax
push ecx
push 4
push OFFSET ??_C#_19DINFBLAK#?$AAN?$AAa?$AAm?$AAe?$AA?$AA#
mov ecx, DWORD PTR _this$[ebp]
call ?AddCol#CDBAccess#DB##QAEAAUS_BIND#2#PB_WKGKW4TYPE#32#0_N#Z ; DB::CDBAccess::AddCol
; 329 : B$C (_T("Data"), m_data);
push 0
push 0
push 0
push 4
push 128 ; 00000080H
call ?_GetOleDBType#ATL##YAGAAVCComBSTR#1##Z ; ATL::_GetOleDBType
add esp, 4
movzx eax, ax
push eax
push 128 ; 00000080H
push OFFSET ??_C#_19IEEMEPMH#?$AAD?$AAa?$AAt?$AAa?$AA?$AA#
mov ecx, DWORD PTR _this$[ebp]
call ?AddCol#CDBAccess#DB##QAEAAUS_BIND#2#PB_WKGKW4TYPE#32#0_N#Z ; DB::CDBAccess::AddCol
It is a compiler bug, already known on Microsoft Connect.
EDIT: The problem seems to be fixed in VS 2017 15.5.1.
The problem has to do with a bug in the built-in offsetof.
It is not possible for me to #undef _CRT_USE_BUILTIN_OFFSETOF as suggested for this case.
For me it only works to #undef offsetof and to use one of these:
#define myoffsetof1(s,m) ((size_t)&reinterpret_cast<char const volatile&>((((s*)0)->m)))
#define myoffsetof2(s, m) ((size_t)&(((s*)0)->m))
#undef offsetof
#define offsetof myoffsetof1
All ATL DB consumers are affected.
Here is a minimal repro that shows the bug. Set a breakpoint on the Init function, look at the assembler code, and wonder how much stack is used!
// StackUsage.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <string>
#include <list>
#include <iostream>
using namespace std;
struct CRec
{
char t1[20];
char t2[20];
char t3[20];
char t4[20];
char t5[20];
int i1, i2, i3, i4, i5;
GUID g1, g2, g3, g4, g5;
DBTIMESTAMP d1, d2, d3, d4, d5;
};
#define sizeofmember(s,m) sizeof(reinterpret_cast<const s *>(0)->m)
#define typeofmember(c,m) _GetOleDBType(((c*)0)->m)
#define myoffsetof1(s,m) ((size_t)&reinterpret_cast<char const volatile&>((((s*)0)->m)))
#define myoffsetof2(s, m) ((size_t)&(((s*)0)->m))
// Undef this lines to fix the bug
// #undef offsetof
// #define offsetof myoffsetof1
#define COL(n,v) { AddCol(n,offsetof(CRec,v),typeofmember(CRec,v),sizeofmember(CRec,v)); }
class CFoo
{
public:
CFoo()
{
Init();
}
void Init()
{
COL("t1", t1);
COL("t2", t2);
COL("t3", t3);
COL("t4", t4);
COL("t5", t5);
COL("i1", i1);
COL("i2", i2);
COL("i3", i3);
COL("i4", i4);
COL("i5", i5);
COL("g1", g1);
COL("g2", g2);
COL("g2", g3);
COL("g2", g4);
COL("g2", g5);
COL("d1", d1);
COL("d2", d2);
COL("d2", d3);
COL("d2", d4);
COL("d2", d5);
}
void AddCol(PCSTR szName, ULONG nOffset, DBTYPE wType, ULONG nSize)
{
cout << szName << '\t' << nOffset << '\t' << wType << '\t' << nSize << endl;
}
};
int main()
{
CFoo foo;
return 0;
}

Adding stringstream/cout hurts performance, even when the code is never called

I have a program in which a simple function is called a large number of times. I have added some simple logging code and find that this significantly affects performance, even when the logging code is not actually called. A complete (but simplified) test case is shown below:
#include <chrono>
#include <iostream>
#include <random>
#include <sstream>
using namespace std::chrono;
std::mt19937 rng;
uint32_t getValue()
{
// Just some pointless work, helps stop this function from getting inlined.
for (int x = 0; x < 100; x++)
{
rng();
}
// Get a value, which happens never to be zero
uint32_t value = rng();
// This (by chance) is never true
if (value == 0)
{
value++; // This if statement won't get optimized away when the printing below is commented out.
std::stringstream ss;
ss << "This never gets printed, but commenting out these three lines improves performance." << std::endl;
std::cout << ss.str();
}
return value;
}
int main(int argc, char* argv[])
{
// Just for timing
high_resolution_clock::time_point start = high_resolution_clock::now();
uint32_t sum = 0;
for (uint32_t i = 0; i < 10000000; i++)
{
sum += getValue();
}
milliseconds elapsed = duration_cast<milliseconds>(high_resolution_clock::now() - start);
// Use (print) the sum to make sure it doesn't get optimized away.
std::cout << "Sum = " << sum << ", Elapsed = " << elapsed.count() << "ms" << std::endl;
return 0;
}
Note that the code contains stringstream and cout, but these are never actually called. However, the presence of these three lines of code increases the run time from 2.9 to 3.3 seconds. This is in release mode on VS2013. Curiously, if I build in GCC with the '-O3' flag, the extra three lines of code actually decrease the runtime by half a second or so.
I understand that the extra code could impact the resulting executable in a number of ways, such as by preventing inlining or causing more cache misses. The real question is whether there is anything I can do to improve this situation. Switching to sprintf()/printf() doesn't seem to make a difference. Do I need to simply accept that adding such logging code to small functions will affect performance even if it is never called?
Note: For completeness, my real/full scenario is that I use a wrapper macro to throw exceptions, and I like to log when such an exception is thrown. So when I call THROW_EXCEPT(...) it inserts code similar to that shown above and then throws. This then hurts when I throw exceptions from inside a small function. Any better alternatives here?
Edit: Here is a VS2013 solution for quick testing, and so compiler settings can be checked: https://drive.google.com/file/d/0B7b4UnjhhIiEamFyS0hjSnVzbGM/view?usp=sharing
So I initially thought that this was due to branch prediction and optimising out branches, so I took a look at the annotated assembly with the logging code commented out:
if (value == 0)
00E21371 mov ecx,1
00E21376 cmove eax,ecx
{
value++;
Here we see that the compiler has helpfully optimised out our branch, so what if we put in a more complex statement to prevent it from doing so:
if (value == 0)
00AE1371 jne getValue+99h (0AE1379h)
{
value /= value;
00AE1373 xor edx,edx
00AE1375 xor ecx,ecx
00AE1377 div eax,ecx
Here the branch is left in, but when running this it is about as fast as the previous example with those lines commented out. So let's have a look at the assembly with those lines left in:
if (value == 0)
008F13A0 jne getValue+20Bh (08F14EBh)
{
value++;
std::stringstream ss;
008F13A6 lea ecx,[ebp-58h]
008F13A9 mov dword ptr [ss],8F32B4h
008F13B3 mov dword ptr [ebp-0B0h],8F32F4h
008F13BD call dword ptr ds:[8F30A4h]
008F13C3 push 0
008F13C5 lea eax,[ebp-0A8h]
008F13CB mov dword ptr [ebp-4],0
008F13D2 push eax
008F13D3 lea ecx,[ss]
008F13D9 mov dword ptr [ebp-10h],1
008F13E0 call dword ptr ds:[8F30A0h]
008F13E6 mov dword ptr [ebp-4],1
008F13ED mov eax,dword ptr [ss]
008F13F3 mov eax,dword ptr [eax+4]
008F13F6 mov dword ptr ss[eax],8F32B0h
008F1401 mov eax,dword ptr [ss]
008F1407 mov ecx,dword ptr [eax+4]
008F140A lea eax,[ecx-68h]
008F140D mov dword ptr [ebp+ecx-0C4h],eax
008F1414 lea ecx,[ebp-0A8h]
008F141A call dword ptr ds:[8F30B0h]
008F1420 mov dword ptr [ebp-4],0FFFFFFFFh
That's a lot of instructions if that branch is ever hit. So what if we try something else?
if (value == 0)
011F1371 jne getValue+0A6h (011F1386h)
{
value++;
printf("This never gets printed, but commenting out these three lines improves performance.");
011F1373 push 11F31D0h
011F1378 call dword ptr ds:[11F30ECh]
011F137E add esp,4
Here we have far fewer instructions, and once again it runs as quickly as with all the lines commented out.
So I'm not sure I can say for certain exactly what is happening here, but at the moment I believe it is a combination of branch prediction and CPU instruction cache misses.
To solve this problem you could move the logging into a function like so:
void log()
{
std::stringstream ss;
ss << "This never gets printed, but commenting out these three lines improves performance." << std::endl;
std::cout << ss.str();
}
and
if (value == 0)
{
value++;
log();
Then it runs as fast as before, with all those instructions replaced with a single call to log (011C12E0h).

How can I prevent MSVC++ from over-allocating stack space for a switch statement?

As part of updating the toolchain for a legacy codebase, we would like to move from the Borland C++ 5.02 compiler to the Microsoft compiler (VS2008 or later). This is an embedded environment where the stack address space is predefined and fairly limited. It turns out that we have a function with a large switch statement which causes a much larger stack allocation under the MS compiler than with Borland's and, in fact, results in a stack overflow.
The form of the code is something like this:
#ifdef PKTS
#define RETURN_TYPE SPacket
typedef struct
{
int a;
int b;
int c;
int d;
int e;
int f;
} SPacket;
SPacket error = {0,0,0,0,0,0};
#else
#define RETURN_TYPE int
int error = 0;
#endif
extern RETURN_TYPE pickone(int key);
void findresult(int key, RETURN_TYPE* result)
{
switch(key)
{
case 1 : *result = pickone(5 ); break;
case 2 : *result = pickone(6 ); break;
case 3 : *result = pickone(7 ); break;
case 4 : *result = pickone(8 ); break;
case 5 : *result = pickone(9 ); break;
case 6 : *result = pickone(10); break;
case 7 : *result = pickone(11); break;
case 8 : *result = pickone(12); break;
case 9 : *result = pickone(13); break;
case 10 : *result = pickone(14); break;
case 11 : *result = pickone(15); break;
default : *result = error; break;
}
}
When compiled with cl /O2 /FAs /c /DPKTS stack_alloc.cpp, a portion of the listing file looks like this:
_TEXT SEGMENT
$T2592 = -264 ; size = 24
$T2582 = -240 ; size = 24
$T2594 = -216 ; size = 24
$T2586 = -192 ; size = 24
$T2596 = -168 ; size = 24
$T2590 = -144 ; size = 24
$T2598 = -120 ; size = 24
$T2588 = -96 ; size = 24
$T2600 = -72 ; size = 24
$T2584 = -48 ; size = 24
$T2602 = -24 ; size = 24
_key$ = 8 ; size = 4
_result$ = 12 ; size = 4
?findresult##YAXHPAUSPacket###Z PROC ; findresult, COMDAT
; 27 : switch(key)
mov eax, DWORD PTR _key$[esp-4]
dec eax
sub esp, 264 ; 00000108H
...
$LN11#findresult:
; 30 : case 2 : *result = pickone(6 ); break;
push 6
lea ecx, DWORD PTR $T2584[esp+268]
push ecx
jmp SHORT $LN17#findresult
$LN10#findresult:
; 31 : case 3 : *result = pickone(7 ); break;
push 7
lea ecx, DWORD PTR $T2586[esp+268]
push ecx
jmp SHORT $LN17#findresult
$LN17#findresult:
call ?pickone##YA?AUSPacket##H#Z ; pickone
mov edx, DWORD PTR [eax]
mov ecx, DWORD PTR _result$[esp+268]
mov DWORD PTR [ecx], edx
mov edx, DWORD PTR [eax+4]
mov DWORD PTR [ecx+4], edx
mov edx, DWORD PTR [eax+8]
mov DWORD PTR [ecx+8], edx
mov edx, DWORD PTR [eax+12]
mov DWORD PTR [ecx+12], edx
mov edx, DWORD PTR [eax+16]
mov DWORD PTR [ecx+16], edx
mov eax, DWORD PTR [eax+20]
add esp, 8
mov DWORD PTR [ecx+20], eax
; 41 : }
; 42 : }
add esp, 264 ; 00000108H
ret 0
The allocated stack space includes dedicated locations for each case to temporarily store the structure returned from pickone(), though in the end, only one value will be copied to the result structure. As you can imagine, with larger structures, more cases, and recursive calls in this function, the available stack space is consumed rapidly.
If the return type is POD, as when the above is compiled without the /DPKTS directive, each case copies directly to result, and stack usage is more efficient:
$LN10#findresult:
; 31 : case 3 : *result = pickone(7 ); break;
push 7
call ?pickone##YAHH#Z ; pickone
mov ecx, DWORD PTR _result$[esp]
add esp, 4
mov DWORD PTR [ecx], eax
; 41 : }
; 42 : }
ret 0
Can anyone explain why the compiler takes this approach, and whether there's a way to convince it to do otherwise? I have limited freedom to re-architect the code, so pragmas and the like are the more desirable solutions. So far, I have not found any combination of optimization, debug, etc. arguments that makes a difference.
Thank you!
EDIT
I understand that findresult() needs to allocate space for the return value of pickone(). What I don't understand is why the compiler allocates additional space for each possible case in the switch. It seems that space for one temporary would be sufficient. This is, in fact, how gcc handles the same code. Borland, on the other hand, appears to use RVO, passing the pointer all the way down and avoiding use of a temporary. The MS C++ compiler is the only one of the three that reserves space for each case in the switch.
I know that it's difficult to suggest refactoring options when you don't know which portions of the test code can change -- that's why my first question is why does the compiler behave this way in the test case. I'm hoping that if I can understand that, I can choose the best refactoring/pragma/command-line option to fix it.
Why not just
void findresult(int key, RETURN_TYPE* result)
{
if (key >= 1 && key <= 11)
*result = pickone(4+key);
else
*result = error;
}
Assuming this counts as a smaller change, I just remembered an old question about scope, specifically related to embedded compilers. Does the optimizer do any better if you wrap each case in braces to explicitly limit the temporary scope?
switch(key)
{
case 1 : { *result = pickone(5 ); break; }
Another scope-changing option:
void findresult(int key, RETURN_TYPE* result)
{
RETURN_TYPE tmp;
switch(key)
{
case 1 : tmp = pickone(5 ); break;
...
}
*result = tmp;
}
This is all a bit hand-wavy, because we're just trying to guess which input will coax a sensible response from this unfortunate optimizer.
I'm going to assume that rewriting that function is allowed, as long as the changes don't "leak" outside the function. I'm also assuming that (as mentioned in the comments) you actually have a number of separate functions to call (but that they all receive the same type of input and return the same result type).
For such a case, I'd probably change the function to something like:
RETURN_TYPE func1(int) { /* ... */ }
RETURN_TYPE func2(int) { /* ... */ }
// ...
void findresult(int key, RETURN_TYPE *result) {
typedef RETURN_TYPE (*f)(int);
f funcs[] = { func1, func2, func3, func4, func5, /* ... */ };
if (in_range(key))
*result = funcs[key](key+4);
else
*result = error;
}