Why is one thread faster than just calling a function, mingw - c++

When I call the function directly, execution time is 6.8 sec.
Calling it from a thread takes 3.4 sec,
and when using 2 threads, 1.8 sec. No matter what optimization I use, the ratios stay the same.
In Visual Studio the times are as expected: 3.1, 3.0 and 1.7 sec.
#include <math.h>
#include <stdio.h>
#include <windows.h>
#include <time.h>

using namespace std;

#define N 400

float a[N][N];

struct b {
    int begin;
    int end;
};

DWORD WINAPI thread(LPVOID p)
{
    b b_t = *(b*)p;
    for (int i = 0; i < N; i++)
        for (int j = b_t.begin; j < b_t.end; j++)
        {
            a[i][j] = 0;
            for (int k = 0; k < i; k++)
                a[i][j] += k * sin(j) - j * cos(k);
        }
    return 0;
}

int main()
{
    clock_t t;
    HANDLE hn[2];
    b b_t[3];
    b_t[0].begin = 0;      // full range, done by a single caller
    b_t[0].end   = N;
    b_t[1].begin = 0;      // first half
    b_t[1].end   = N / 2;
    b_t[2].begin = N / 2;  // second half
    b_t[2].end   = N;

    // 0: plain function call on the main thread
    t = clock();
    thread(&b_t[0]);
    printf("0 - %d\n", clock() - t);

    // 1: the same work in one created thread
    t = clock();
    hn[0] = CreateThread(NULL, 0, thread, &b_t[0], 0, NULL);
    WaitForSingleObject(hn[0], INFINITE);
    printf("1 - %d\n", clock() - t);

    // 2: the work split across two created threads
    t = clock();
    hn[0] = CreateThread(NULL, 0, thread, &b_t[1], 0, NULL);
    hn[1] = CreateThread(NULL, 0, thread, &b_t[2], 0, NULL);
    WaitForMultipleObjects(2, hn, TRUE, INFINITE);
    printf("2 - %d\n", clock() - t);

    return 0;
}
Times:
0 - 6868
1 - 3362
2 - 1827
CPU - Core 2 Duo T9300
OS - Windows 8, 64 - bit
compiler: mingw32-g++.exe, gcc version 4.6.2
edit:
Tried different order, same result, even tried separate applications.
Task Manager shows CPU utilization around 50% for the plain function call and for 1 thread, and 100% for 2 threads.
Sum of all elements after each call is the same: 3189909.237955
Cygwin result: 2.5, 2.5 and 2.5 sec
Linux result(pthread): 3.7, 3.7 and 2.1 sec
#borisbn results: 0 - 1446 1 - 1439 2 - 721.

The difference is a result of something in the math library implementing sin() and cos() - if you replace the calls to those functions with something else that takes time, the significant difference between step 0 and step 1 goes away.
Note that I see the difference with gcc (tdm-1) 4.6.1, which is a 32-bit toolchain targeting 32 bit binaries. Optimization makes no difference (not surprising since it seems to be something in the math library).
However, if I build using gcc (tdm64-1) 4.6.1, which is a 64-bit toolchain, the difference does not appear - regardless of whether the build creates a 32-bit program (using the -m32 option) or a 64-bit program (-m64).
Here are some example test runs (I made minor modifications to the source to make it C99 compatible):
Using the 32-bit TDM MinGW 4.6.1 compiler:
C:\temp>gcc --version
gcc (tdm-1) 4.6.1
C:\temp>gcc -m32 -std=gnu99 -o test.exe test.c
C:\temp>test
0 - 4082
1 - 2439
2 - 1238
Using the 64-bit TDM 4.6.1 compiler:
C:\temp>gcc --version
gcc (tdm64-1) 4.6.1
C:\temp>gcc -m32 -std=gnu99 -o test.exe test.c
C:\temp>test
0 - 2506
1 - 2476
2 - 1254
C:\temp>gcc -m64 -std=gnu99 -o test.exe test.c
C:\temp>test
0 - 3031
1 - 3031
2 - 1539
A little more information:
The 32-bit TDM distribution (gcc (tdm-1) 4.6.1) links to the sin()/cos() implementations in the msvcrt.dll system DLL via a provided import library:
c:/mingw32/bin/../lib/gcc/mingw32/4.6.1/../../../libmsvcrt.a(dcfls00599.o)
0x004a113c _imp__cos
While the 64-bit distribution (gcc (tdm64-1) 4.6.1) doesn't appear to do that, instead linking to some static library implementation provided with the distribution:
c:/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/4.6.1/../../../../x86_64-w64-mingw32/lib/../lib32/libmingwex.a(lib32_libmingwex_a-cos.o)
C:\Users\mikeb\AppData\Local\Temp\cc3pk20i.o (cos)
Update/Conclusion:
After a bit of spelunking in a debugger, stepping through the assembly of msvcrt.dll's implementation of cos(), I've found that the difference in the timing of the main thread versus an explicitly created thread is due to the FPU's precision being set to a non-default value (presumably the MinGW runtime in question does this at start-up). In the situation where the thread() function takes twice as long, the FPU is set to 64-bit precision (REAL10, or in MSVC-speak _PC_64). When the FPU control word is something other than 0x27f (the default state?), the msvcrt.dll runtime will perform the following steps in the sin() and cos() functions (and probably other floating-point functions):
save the current FPU control word
set the FPU control word to 0x27f (I believe it's possible for this value to be modified)
perform the fsin/fcos operation
restore the saved FPU control word
The save/restore of the FPU control word is skipped if it's already set to the expected/desired 0x27f value. Apparently saving/restoring the FPU control word is expensive, since it appears to double the amount of time the function takes.
You can solve the problem by adding the following line to main() before calling thread():
_control87( _PC_53, _MCW_PC); // requires <float.h>
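For reference, here is a minimal sketch of how that fix slots into the question's main() (a sketch only - the rest of main() is unchanged and elided; _control87, _PC_53 and _MCW_PC come from <float.h>):
#include <float.h>   // _control87, _PC_53, _MCW_PC

int main()
{
    // Force the x87 precision control to 53 bits before any of the timed
    // calls, so the main thread matches what threads created later with
    // CreateThread() already get.
    _control87(_PC_53, _MCW_PC);

    // ... the three timed runs from the question stay exactly as they are ...
    return 0;
}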

Not a cache matter here.
Likely different runtime libraries for user-created threads and the main thread.
You may compare the calculation a[i][j]+=k*sin(j)-j*cos(k); in detail (the actual numbers) for specific values of i, j, and k to confirm the difference.
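For example, a small sketch along the lines of that suggestion (the fixed values j = 7 and k < 5 are arbitrary; it just prints the same partial sum from the main thread and from a worker thread so the two outputs can be diffed):
#include <math.h>
#include <stdio.h>
#include <windows.h>

// Print one partial sum with full precision so the main-thread and
// worker-thread results can be compared digit by digit.
DWORD WINAPI dump(LPVOID)
{
    double s = 0;
    double j = 7.0;                       // arbitrary fixed column index
    for (int k = 0; k < 5; k++)
        s += k * sin(j) - j * cos((double)k);
    printf("%.17g\n", s);
    return 0;
}

int main()
{
    dump(NULL);                                             // main thread
    HANDLE h = CreateThread(NULL, 0, dump, NULL, 0, NULL);  // worker thread
    WaitForSingleObject(h, INFINITE);
    return 0;
}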

The reason is that the main thread is doing 64-bit float math and the threads are doing 53-bit math.
You can confirm this / fix it by changing the code to:
...
extern "C" unsigned int _control87( unsigned int newv, unsigned int mask );

DWORD WINAPI thread(LPVOID p)
{
    printf( "_control87(): 0x%.4x\n", _control87( 0, 0 ) );
    _control87(0x00010000,0x00010000);
    ...
The output will be:
c:\temp>test
_control87(): 0x8001f
0 - 2667
_control87(): 0x9001f
1 - 2683
_control87(): 0x9001f
_control87(): 0x9001f
2 - 1373
c:\temp>mingw32-c++ --version
mingw32-c++ (GCC) 4.6.2
You can see that case 0 was going to run without the 0x10000 flag, but once it is set, it runs at the same speed as cases 1 & 2. If you look up the _control87() function, you'll see that this value is the _PC_53 flag, which sets the precision to 53 bits instead of the 64 bits it would have used had the flag been left as zero.
For some reason, MinGW isn't setting it at process init time to the same value that CreateThread() does at thread creation time.
Another workaround is to turn on SSE2 with _set_SSE2_enable(1), which will run even faster, but may give different results.
c:\temp>test
0 - 1341
1 - 1326
2 - 702
I believe this is on by default for the 64 bit because all 64 bit processors support SSE2.
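If you want to try the SSE2 workaround with MinGW, here is a rough sketch (the hand-written declaration is an assumption - MinGW's headers may not declare _set_SSE2_enable, which is an MSVC CRT function - and it mirrors the extern "C" trick used for _control87() above):
#include <stdio.h>

// MSVC CRT function (declared in MSVC's <math.h>); declared by hand here
// because MinGW's headers may not provide it.
extern "C" int _set_SSE2_enable(int flag);

int main()
{
    // Ask the CRT math functions to use their SSE2 code paths instead of
    // the x87 ones; the return value is nonzero if SSE2 was enabled.
    int ok = _set_SSE2_enable(1);
    printf("SSE2 enabled: %d\n", ok);

    // ... rest of the benchmark as in the question ...
    return 0;
}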

As others suggested, change the order of your three tests to get some more insight. Also, the fact that you have a multi-core machine pretty well explains why using two threads doing half the work each takes half the time. Take a look at your CPU usage monitor (Control-Shift-Escape) to find out how many cores are maxed out during the running time.

Related

Strange behaviour of Parallel Boost Graph Library example code

I have set up simple tests with the Parallel Boost Graph Library (PBGL), which I have never used before, and observed entirely unexpected behaviour that I would like to understand.
My steps were as follows:
Dump test data in METIS format (a kind of social graph with 50 mln vertices and 100 mln edges);
Build modified PBGL example from graph_parallel\example\dijkstra_shortest_paths.cpp
The example was slightly extended to run the Eager Dijkstra, Crauser and delta-stepping algorithms.
Note: building of the example required some obscure workaround about the MUTABLE_QUEUE define in crauser_et_al_shortest_paths.hpp (example code is in fact incompatible with the new mutable_queue)
int lookahead = 1;
delta_stepping_shortest_paths(g, start, dummy_property_map(), get(vertex_distance, g), get(edge_weight, g), lookahead);
dijkstra_shortest_paths(g, start, distance_map(get(vertex_distance, g)).lookahead(lookahead));
dijkstra_shortest_paths(g, start, distance_map(get(vertex_distance, g)));
Run
mpiexec -n 1 mytest.exe mydata.me
mpiexec -n 2 mytest.exe mydata.me
mpiexec -n 4 mytest.exe mydata.me
mpiexec -n 8 mytest.exe mydata.me
The observed behaviour:
-n 1:
mem usage: 35 GB in 1 running process, which utilizes exactly 1 device thread (processor load 12.5%)
delta stepping time: about 1 min 20 s
eager time: about 2 min
crauser time: about 3 min 20 s.
-n 2:
crash at the data loading stage.
-n 4:
mem usage: 40+ GB in roughly equal parts across 4 running processes, each of which utilizes exactly 1 device thread
calculation times are unchanged within the margin of observation error.
-n 8:
mem usage: 44+ GB in roughly equal parts across 8 running processes, each of which utilizes exactly 1 device thread
calculation times are unchanged within the margin of observation error.
So, apart from the inappropriate memory usage and very low overall performance, the only changes I observe when more MPI processes are running are a slightly increased total memory consumption and a linear rise in processor load.
The fact that the initial graph is somehow partitioned between the processes (probably by vertex number ranges) is nevertheless evident.
What is wrong with this test (and, probably, with my idea of MPI usage in general)?
My environment:
- one Win 10 PC with 64 GB RAM and 8 cores;
- MS MPI 10.0.12498.5;
- MSVC 2017, toolset 141;
- boost 1.71
N.B. See original example code here.

Inconsistency when benchmarking two contiguous measurements

I was benchmarking a function and I saw that some iterations were slower than others.
After some tests I tried to benchmark two contiguous measurements and I still got some weird results.
The code is on wandbox.
For me the important part is :
using clock = std::chrono::steady_clock;
// ...
for (int i = 0; i < statSize; i++)
{
    auto t1 = clock::now();
    auto t2 = clock::now();
}
The loop is optimized away as we can see on godbolt.
call std::chrono::_V2::steady_clock::now()
mov r12, rax
call std::chrono::_V2::steady_clock::now()
The code was compiled with:
g++ bench.cpp -Wall -Wextra -std=c++11 -O3
and gcc version 6.3.0 20170516 (Debian 6.3.0-18+deb9u1) on an Intel® Xeon® W-2195 Processor.
I was the only user on the machine, and I tried running with and without high priority (nice or chrt); the result was the same.
The result I got with 100 000 000 iterations was:
The Y-axis is in nanoseconds; it's the result of the line
std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1).count()
These 4 lines make me think of: no cache miss / L1 / L2 / L3 cache misses (even if the "L3 cache miss" line seems too close to the L2 line).
I am not sure why there would be cache misses; maybe the storage of the result, but it's not in the measured code.
I tried running the program 10 000 times with a loop of 1500, because the L1 cache of this processor is:
lscpu | grep L1
L1d cache: 32K
L1i cache: 32K
And 1500 * 16 bits = 24 000 bits = 3 000 bytes, which is far less than 32K, so there shouldn't be any cache misses.
And the results:
I still have my 4 lines (and some noise).
So if it's really a cache miss, I don't have any idea why it is happening.
I don’t kown if it’s useful for you but I run:
sudo perf stat -e cache-misses,L1-dcache-load-misses,L1-dcache-load ./a.out 1000
With the value 1 000 / 10 000 / 100 000 / 1 000 000
I got between 4.30% and 4.70% of all L1-dcache hits, which seems pretty decent to me.
So the questions are:
What is the cause of these slowdowns?
How can I produce a good benchmark of a function when I can't get a constant time for a no-op?
PS: I don't know if I am missing useful information / flags, feel free to ask!
How reproduce:
The code:
#include <iostream>
#include <chrono>
#include <vector>
#include <cstdint>
#include <cstdlib>

int main(int argc, char **argv)
{
    int statSize = 1000;
    using clock = std::chrono::steady_clock;

    if (argc == 2)
    {
        statSize = std::atoi(argv[1]);
    }

    std::vector<uint16_t> temps;
    temps.reserve(statSize);

    for (int i = 0; i < statSize; i++)
    {
        auto t1 = clock::now();
        auto t2 = clock::now();
        temps.push_back(
            std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1).count());
    }

    for (auto t : temps)
        std::cout << (int)t << std::endl;

    return 0;
}
Build:
g++ bench.cpp -Wall -Wextra -std=c++11 -O3
Generate output (sudo needed):
In this case I run the program 10 000 times. Each run takes 100 measurements, and I remove the first one, which is always about 5 times slower:
for i in {1..10000} ; do sudo nice -n -17 ./a.out 100 | tail -n 99 >> fast_1_000_000_uint16_100 ; done
Generate graph:
cat fast_1_000_000_uint16_100 | gnuplot -p -e "plot '<cat'"
The result I have on my machine:
Where I am after Zulan's answer and all the comments
The current_clocksource is set to tsc and no switch is seen in dmesg; command used:
dmesg -T | grep tsc
I use this script to disable HyperThreading (HT),
then
grep -c proc /proc/cpuinfo
=> 18
Subtract 1 from the last result to obtain the last available core:
=> 17
Edit /etc/default/grub and add isolcpus=(last result) in GRUB_CMDLINE_LINUX:
GRUB_CMDLINE_LINUX="isolcpus=17"
Finally:
sudo update-grub
reboot
// reexecute the script
Now I can use:
taskset -c 17 ./a.out XXXX
So I run 10 000 times a loop of 100 iterations.
for i in {1..10000} ; do sudo /usr/bin/time -v taskset -c 17 ./a.out 100 > ./core17/run_$i 2>&1 ; done
Check if there is any Involuntary context switches:
grep -L "Involuntary context switches: 0" result/* | wc -l
=> 0
There is none, good. Let's plot :
for i in {1..10000} ; do cat ./core17/run_$i | head -n 99 >> ./no_switch_taskset ; done
cat no_switch_taskset | gnuplot -p -e "plot '<cat'"
Result :
There are still 22 measurements greater than 1000 (when most values are around 20) that I don't understand.
Next step, TBD
Do the
sudo nice -n -17 perf record...
part of Zulan's answer.
I can't reproduce it with these particular clustered lines, but here is some general information.
Possible causes
As discussed in the comments, nice on a normal idle system is just a best effort. You still have at least
The scheduling tick timer
Kernel tasks that are bound to a certain core
Your task may be migrated from one core to another for an arbitrary reason
You can use isolcpus and taskset to get exclusive cores for certain processes to avoid some of that, but I don't think you can really get rid of all the kernel tasks. In addition, use nohz=full to disable the scheduling tick. You should also disable hyperthreading to get exclusive access to a core from a hardware thread.
Except for taskset, which I absolutely recommend for any performance measurement, these are quite unusual measures.
Measure instead of guessing
If there is a suspicion what could be happening, you can usually setup a measurement to confirm or disprove that hypothesis. perf and tracepoints are great for that. For example, we can start with looking at scheduling activity and some interrupts:
sudo nice -n -17 perf record -o perf.data -e sched:sched_switch -e irq:irq_handler_entry -e irq:softirq_entry ./a.out ...
perf script will now list every occurrence for you. To correlate that with slow iterations you can use perf probe and a slightly modified benchmark:
void __attribute__((optimize("O0"))) record_slow(int64_t count)
{
    (void)count;
}
...
auto count = std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1).count();
if (count > 100) {
    record_slow(count);
}
temps.push_back(count);
And compile with -g
sudo perf probe -x ./a.out record_slow count
Then add -e probe_a:record_slow to the call to perf record. Now if you are lucky, you find some close events, e.g.:
a.out 14888 [005] 51213.829062: irq:softirq_entry: vec=1 [action=TIMER]
a.out 14888 [005] 51213.829068: probe_a:record_slow: (559354aec479) count=9029
Be aware: while this information will likely explain some of your observation, you enter a world of even more puzzling questions and oddities. Also, while perf is pretty low-overhead, there may be some perturbation on what you measure.
What are we benchmarking?
First of all, you need to be clear what you actually measure: The time to execute std::chrono::steady_clock::now(). It's actually good to do that to figure out at least this measurement overhead as well as the precision of the clock.
That's actually a tricky point. The cost of this function, with clock_gettime underneath, depends on your current clocksource1. If that's tsc you're fine - hpet is much slower. Linux may switch quietly2 from tsc to hpet during operation.
What to do to get stable results?
Sometimes you might need to do benchmarks with extreme isolation, but usually that's not necessary even for very low-level micro-architecture benchmarks. Instead, you can use statistical effects: repeat the measurement. Use the appropriate methods (mean, quantiles), sometimes you may want to exclude outliers.
If the measurement kernel is not significantly longer than timer precision, you will have to repeat the kernel and measure outside to get a throughput rather than a latency, which may or may not be different.
Yes - benchmarking right is very complicated; you need to consider a lot of aspects, especially when you get closer to the hardware and your kernel times get very short. Fortunately there's some help, for example Google's benchmark library provides a lot of help in terms of doing the right number of repetitions and also in terms of having experiment factors.
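As an illustration, here is a minimal sketch that uses Google Benchmark to measure the clock read itself; the library chooses the number of repetitions and reports a per-iteration time (the benchmark name is made up, the API calls are the library's documented ones):
#include <benchmark/benchmark.h>
#include <chrono>

// Measures the cost of a single steady_clock::now() call per iteration.
static void BM_SteadyClockNow(benchmark::State& state) {
    for (auto _ : state) {
        auto t = std::chrono::steady_clock::now();
        benchmark::DoNotOptimize(t);   // keep the call from being optimized out
    }
}
BENCHMARK(BM_SteadyClockNow);

BENCHMARK_MAIN();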
1 /sys/devices/system/clocksource/clocksource0/current_clocksource
2 Actually it's in dmesg as something like
clocksource: timekeeping watchdog on CPU: Marking clocksource 'tsc' as unstable because the skew is too large:

pthread multithreading in Mac OS vs Windows multithreading

I've developed a multi-platform program (using the FLTK toolkit) and implemented multithreading to do intensive background tasks.
I have followed the FLTK tutorials/examples on multithreading, which involve using pthread on Mac, i.e. the function pthread_create, and Windows threading on Windows, i.e. _beginthread.
What I have noticed is that the performance is much higher on Windows, i.e. 3 to 4 times faster in these background threads (in the time to execute them).
Why might this be? Is it the threading libraries I'm using? Surely there shouldn't be such a difference? Or could it be the runtime libraries underneath it all?
Here are my machine details
Mac:
Intel(R) Core(TM) i7-3820QM CPU @ 2.70GHz
16 GB DDR3 1600 MHz
Model MacBookPro9,1
OS: Mac OSX 10.8.5
Windows:
Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
16 GB DDR3 1600 MHz
Model: Dell Latitude E5530
OS: Windows 7 Service Pack 1
EDIT
To do a basic speed comparison I compiled this on both machines and ran it from the command line:
#include <math.h>
#include <time.h>
#include <iomanip>
#include <iostream>
#include <sstream>

int main(int argc, char **argv)
{
    time_t t = time(NULL);
    tm* tt = localtime(&t);
    std::stringstream s;
    s << std::setfill('0') << std::setw(2) << tt->tm_mday << "/" << std::setw(2) << tt->tm_mon+1 << "/"
      << std::setw(4) << tt->tm_year+1900 << " " << std::setw(2) << tt->tm_hour << ":"
      << std::setw(2) << tt->tm_min << ":" << std::setw(2) << tt->tm_sec;
    std::cout << "1: " << s.str() << std::endl;

    double sum = 0;
    for (int i = 0; i < 100000000; i++) {
        double ii = i * 0.123456789;
        sum = sum + sin(ii) * cos(ii);
    }

    t = time(NULL);
    tt = localtime(&t);
    s.str("");
    s << std::setfill('0') << std::setw(2) << tt->tm_mday << "/" << std::setw(2) << tt->tm_mon+1 << "/"
      << std::setw(4) << tt->tm_year+1900 << " " << std::setw(2) << tt->tm_hour << ":"
      << std::setw(2) << tt->tm_min << ":" << std::setw(2) << tt->tm_sec;
    std::cout << "2: " << s.str() << std::endl;
}
Windows takes less than a second. Mac takes 4-5 seconds. Any ideas?
On Mac I'm compiling with g++; on Windows with Visual Studio 2013.
SECOND EDIT
if I change the line
std::cout<<"2: "<<s.str()<<std::endl;
to
std::cout<<"2: "<<s.str()<<" "<<sum<<std::endl;
Then all of a sudden Windows takes a little bit longer...
This makes me think that the whole thing might be down to compiler optimisation. So the question would be: is g++ (4.2 is the version I have) worse at optimisation, or do I need to provide additional flags?
THIRD(!) AND FINAL EDIT
I can report that I achieve comparable performance by ensuring the g++ optimisation flag -O was provided at compile time (a sample command line follows the aside below). One of those annoying things that happens so often:
A: Im tearing my hair out on problem x
B: Are you sure you're not doing y?
A: That works, why is this information not plastered all over the place and in every tutorial on problem x on the web?
B: Did you read the manual?
A: No, if I completely read the manual for every single bit of code/program I used I would never actually get round to doing anything...
Meh.
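For reference, the kind of invocation that made the difference on the Mac side (the file name and exact -O level are illustrative; -O, -O2 or -O3 all turn on the optimisation that matters here):
g++ -O2 bench.cpp -o bench
./bench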

CUDA memory error

I run high-performance calculations on multiple GPUs (two GPUs per machine), currently I test my code on GeForce GTX TITAN. Recently I noticed that random memory errors occur so that I can't rely on the outcome anymore. Tried to debug and ran into things I don't understand. I'd appreciate if someone could help me understand why the following is happening.
So, here's my GPU:
$ nvidia-smi -a
Driver Version : 331.67
GPU 0000:03:00.0
Product Name : GeForce GTX TITAN
...
VBIOS Version : 80.10.2C.00.02
FB Memory Usage
Total : 6143 MiB
Used : 14 MiB
Free : 6129 MiB
Ecc Mode
Current : N/A
Pending : N/A
My Linux machine (Ubuntu 12.04 64-bit):
$ uname -a
Linux cluster-cn-211 3.2.0-61-generic #93-Ubuntu SMP Fri May 2 21:31:50 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
Here's my code (basically, allocate 4G of memory, fill with zeros, copy back to host and check if all values are zero; spoiler: they're not)
#include <cstdio>

#define check(e) {if (e != cudaSuccess) { \
    printf("%d: %s\n", e, cudaGetErrorString(e)); \
    return 1; }}

int main() {
    size_t num = 1024*1024*1024;       // 1 billion elements
    size_t size = num * sizeof(float); // 4 GB of memory
    float *dp;
    float *p = new float[num];
    cudaError_t e;

    e = cudaMalloc((void**)&dp, size); // allocate
    check(e);
    e = cudaMemset(dp, 0, size);       // set to zero
    check(e);
    e = cudaMemcpy(p, dp, size, cudaMemcpyDeviceToHost); // copy back
    check(e);

    for(size_t i=0; i<num; i++) {
        if (p[i] != 0)                 // this should never happen, amiright?
            printf("%lu %f\n", i, p[i]);
    }
    return 0;
}
I run it like this
$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2013 NVIDIA Corporation
Built on Sat_Jan_25_17:33:19_PST_2014
Cuda compilation tools, release 6.0, V6.0.1
$ nvcc test.cu
nvcc warning : The 'compute_10' and 'sm_10' architectures are deprecated, and may be removed in a future release.
$ ./a.out | head
516836128 -0.000214
516836164 -0.841684
516836328 -3272.289062
516836428 -644673853950867887966360388719607808.000000
516836692 0.000005
516850472 232680927002624.000000
516850508 909806289566040064.000000
...
$ echo $?
0
This is not what I expected: many elements are non-zero. Here are a couple of observations:
I checked with cuda-memcheck - no errors. Checked with valgrind's memcheck - no errors.
the memory allocation works as expected, nvidia-smi reports 4179MiB / 6143MiB
the same happens if I
allocate less memory (e.g. 2 GB)
compile with -arch sm_30 or -arch compute_30 (see capabilities)
go from SDK version 6.0 back to 5.5
go from GTX Titan to Tesla K20c (here the ECC checking is enabled and all counters are zero); behavior is the same, I was able to test it on five different GPU cards.
allocate multiple smaller arrays on the device
the errors disappear if I test on a GTX 680
Again, the question is: why do I see those memory errors and how can I ensure that this never happens?
I also perform calculations using the CPU and we have found the same issue. We are using a GeForce GTX 660 Ti.
I have checked that the number of errors increases with the time the GPU has been working.
The problem can be solved by shutting down the computer (it doesn't go away if the machine is merely rebooted), but after some time of working the problem starts again.
I have no idea why that happens. I have tried several codes to check the memory and all of them give the same result.
As far as I have checked, this problem cannot be avoided, and the only way to be sure that your results are OK is to check the memory after the calculations and to shut down the machine every so often. I know this is not a good solution, but it is the only one I have found.

Linux futex 5.4 OS Kernel version 2.6.9 giving issues

We have migrated a C++ app to RHEL 5.4 from RH 6.2 and found that the application has broken. One of our investigations led to the finding that the code on the 5.4 box refers to 'futex'. Note: our app is compiled using the 32-bit compiler option.
grep futex tool_strace.txt
futex(0xff8ea454, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0xf6d1f4fc, FUTEX_WAKE_PRIVATE, 2147483647) = 0
futex(0xf6c10a4c, FUTEX_WAKE_PRIVATE, 2147483647) = 0
As per http://www.akkadia.org/drepper/assumekernel.html I added this code to the 5.4 build:
setenv("LD_ASSUME_KERNEL" , "2.4.1" , 1); // to use LinuxThreads
But the strace dump still shows 'futex' being called.
All the addresses ff8ea454, f6d1f4fc and f6c10a4c are 32-bit addresses. So, if my assumption is right, how can I arrange for the 'futex' calls to be suppressed or not called at all?
Is there any known issue with futex calls?
I believe the following to be true:
LD_ASSUME_KERNEL has to be set before your program starts in order to have any effect (see the example below).
futex is used to implement any type of lock, so you can't avoid it.
You shouldn't need LD_ASSUME_KERNEL when you are compiling your own code, as it should use the newer interfaces as appropriate.
2.4.1 is an ancient kernel version to be trying to emulate. Given your mention of 32-bit compiles, you are presumably on an AMD64 architecture machine, and that may not even have libraries that go back that far.
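A minimal example of the point about LD_ASSUME_KERNEL (the program name myapp is hypothetical): the variable must already be in the environment when the dynamic loader maps the C library, so it has to be set by the parent shell rather than by a setenv() call inside main(), which runs far too late:
LD_ASSUME_KERNEL=2.4.1 ./myapp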