In my company we are building programs for different versions of Debian. We are using a Jenkins build chain with virtual machines on ESXi.
The programs compile with GCC. Based on some tests, we found that compilation on Stretch/Buster is 50% slower than on Wheezy/Jessie.
For example, a simple Hello World program:
jessie
------
real 0m0.099s
user 0m0.076s
sys 0m0.012s
buster
------
real 0m0.201s
user 0m0.168s
sys 0m0.032s
For small programs it's not really important, but for bigger projects the time difference is really visible (even with -O3 flags):
jessie
------
real 0m29.996s
user 0m26.636s
sys 0m1.688s
buster
------
real 0m59.051s
user 0m53.226s
sys 0m5.164s
Our biggest project takes 25 min to compile on Jessie against 45 min on Stretch.
Note this is done on two different virtual machines, but on the same physical machine. The CPU model is: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz.
I think one reason might be the Meltdown and Spectre patches applied to the kernel, but I don't know whether those patches are enabled on Stretch.
Do you have any idea about the possible reasons for this performance difference? How can I check it? And how can I fix it, if possible?
Regards.
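One way to check whether Meltdown/Spectre mitigations are active is the kernel's sysfs vulnerabilities interface (a sketch, assuming a kernel new enough to expose it; Stretch's 4.9 kernel gained it through stable updates, while the older Jessie/Wheezy kernels may not have these files at all):

```shell
# Show mitigation state per vulnerability; falls back to a message on
# kernels that predate the sysfs interface.
grep . /sys/devices/system/cpu/vulnerabilities/* 2>/dev/null \
    || echo "no vulnerabilities interface on this kernel"

# Boot parameters currently in effect; look for options such as
# pti=off or spectre_v2=off (and mitigations=off on newer kernels).
cat /proc/cmdline
```

Comparing build times with mitigations toggled via those boot parameters would show how much of the slowdown they account for.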
I'm trying to generate a call graph using OProfile and for some reason it fails.
I'm using the commands below to configure it:
opcontrol --shutdown
opcontrol --reset
opcontrol --no-vmlinux
opcontrol --separate=library
opcontrol --event=default
opcontrol --callgraph=20
opcontrol --status
Here I get:
Daemon not running
Event 0: CPU_CLK_UNHALTED:100000:0:1:1
Separate options: library
vmlinux file: none
Image filter: none
Call-graph depth: 20
Buffer size: 10000000
CPU buffer watershed: 2560000
CPU buffer size: 160000
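Note the "Daemon not running" line above: configuration alone collects no samples. A full session normally also starts the daemon, runs the workload, and dumps the data before reporting (a hedged sketch of the usual opcontrol sequence; pdpd stands in for the profiled binary):

```shell
opcontrol --start       # start the daemon and begin sampling
./pdpd                  # run the workload to be profiled
opcontrol --dump        # flush collected samples to disk
opcontrol --shutdown    # stop the daemon
opreport -l --callgraph ./pdpd -o profile_pdp.txt
```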
Then, when trying to generate the call graph (for example using opreport pdpd -l --callgraph -o profile_pdp.txt),
I get:
30 0.7659 libpthread-2.5.so pthread_mutex_lock
30 100.000 libpthread-2.5.so pthread_mutex_lock [self]
My Linux kernel version is 2.6.18.
I do get the following error when running opreport (I don't know if it's relevant):
opreport: /usr/lib64/libstdc++.so.6: no version information available (required by opreport)
Any idea why I can't get the full call graph?
Found the issue: I was running a 64-bit kernel while debugging a 32-bit executable. I don't know why that is an issue for OProfile.
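The mismatch described above (64-bit kernel, 32-bit binary) can be spotted quickly with two standard commands (a sketch; ./pdpd is the executable profiled above):

```shell
# Word size of the running kernel/machine, e.g. x86_64 for 64-bit.
uname -m

# Word size of the binary: prints "ELF 32-bit ..." or "ELF 64-bit ...".
file ./pdpd
```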
I am currently running a simulator which is mounted on a disk. When I try to cross-compile an application, it always gives me this error:
/mnt/mipsroot/cross-tools/bin/../libexec/gcc/mips-unknown-linux-gnu/4.6.3/cc1plus: error while loading shared libraries: libcloog.so.0: cannot open shared object file: No such file or directory
I have tried a lot to rectify it, but in vain. Is there any way I can get my application to cross-compile?
[root@Canada ~]# yum search cloog
Redirecting to '/usr/bin/dnf search cloog' (see 'man yum2dnf')
Last metadata expiration check: 0:56:47 ago on Thu Nov 10 14:11:03 2016.
================================= N/S Matched: cloog =================================
cloog.i686 : The Chunky Loop Generator
cloog.x86_64 : The Chunky Loop Generator
cloog-devel.i686 : Development tools for the Chunky Loop Generator
cloog-devel.x86_64 : Development tools for the Chunky Loop Generator
[root@Canada ~]# yum install cloog-devel
Did you do this?
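As a complement to searching packages, the dynamic linker itself can list exactly which libraries the failing binary cannot resolve (a sketch, using the cc1plus path from the error message above):

```shell
# Any line ending in "not found" names a library that must be installed
# or made visible to the loader before cross-compiling will work.
ldd /mnt/mipsroot/cross-tools/libexec/gcc/mips-unknown-linux-gnu/4.6.3/cc1plus \
    | grep "not found"

# If libcloog.so.0 is installed in a non-default location, point the
# loader at it instead of reinstalling:
export LD_LIBRARY_PATH=/usr/lib:$LD_LIBRARY_PATH
```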
I am noticing that the elapsed times of my unit tests in Visual Studio 2013 Pro are only consistent when they are run via the same command. When the command changes, the elapsed times change dramatically.
My specific situation is this: I have 4 passing tests. When I run them all using the "Run All" command in the Test Explorer window, I get:
But when I run those same 4 tests again, this time using the "Run Passed Tests" command, I get this:
Are these tests run in the sequence in which they are listed in the Test Explorer?
Why does test1 take 16 ms when I use Run All and then 1 ms when I use Run Passed Tests, while test2 takes 4 ms with Run All and 16 ms with Run Passed Tests?
I need to figure out which translation units need to be restructured to improve compile times. How do I get hold of the compilation time for my translation units using CMake?
The following properties can be used to time compiler and linker invocations:
RULE_LAUNCH_COMPILE
RULE_LAUNCH_CUSTOM
RULE_LAUNCH_LINK
These properties can be set globally, per directory, and per target. That way you can have only a subset of your targets (say, tests) impacted by the property. You can also use a different "launcher" for each target, which can be useful as well.
Keep in mind that using "time" directly is not portable, because this utility is not available on all platforms supported by CMake. However, CMake provides "time" functionality in its command-line tool mode. For example:
# Set global property (all targets are impacted)
set_property(GLOBAL PROPERTY RULE_LAUNCH_COMPILE "${CMAKE_COMMAND} -E time")
# Set property for my_target only
set_property(TARGET my_target PROPERTY RULE_LAUNCH_COMPILE "${CMAKE_COMMAND} -E time")
Example CMake output:
[ 65%] Built target my_target
[ 67%] Linking C executable my_target
Elapsed time: 0 s. (time), 0.000672 s. (clock)
Note that as of CMake 3.4, only the Makefile and Ninja generators support these properties.
Also note that as of CMake 3.4, cmake -E time has problems with spaces inside arguments. For example:
cmake -E time cmake "-GUnix Makefiles"
will be interpreted as:
cmake -E time cmake "-GUnix" "Makefiles"
I submitted a patch that fixes this problem.
I would expect to replace the compiler (and/or linker) with 'time original-cmd'. Using plain 'make', I'd say:
make CC="time gcc"
The 'time' program would run the command and report on the time it took. The equivalent mechanism would work with 'cmake'. If you need to capture the command as well as the time, then you can write your own command analogous to time (a shell script would do) that records the data you want in the way you want.
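A minimal sketch of such a wrapper script (the path /usr/local/bin/timed-gcc and the log file name are my own choices, not anything CMake prescribes):

```shell
#!/bin/bash
# Append the full compiler command line to a log, then run the real
# compiler under bash's 'time' keyword; the timing (written to stderr
# by 'time') is appended to the same log, next to its command.
LOG=/tmp/compile-times.log
echo "gcc $*" >> "$LOG"
{ time gcc "$@"; } 2>> "$LOG"
```

It can then be hooked in with make CC=/usr/local/bin/timed-gcc or cmake -DCMAKE_C_COMPILER=/usr/local/bin/timed-gcc. Note that the stderr redirection also sends compiler warnings into the log, which may or may not be what you want.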
To expand on the previous answer, here's a concrete solution that I just wrote up — which is to say, it definitely works in practice, not just in theory, but it has been used by only one person for approximately three minutes, so it probably has some infelicities.
#!/bin/bash
{ time clang "$@"; } 2> >(cat <(echo "clang $@") - >> /tmp/results.txt)
I put the above two lines in /tmp/time-clang and then ran
chmod +x /tmp/time-clang
cmake .. -DCMAKE_C_COMPILER=/tmp/time-clang
make
You can use -DCMAKE_CXX_COMPILER= to hook the C++ compiler in exactly the same way.
I didn't use make -j8 because I didn't want the results to get interleaved in weird ways.
I had to put an explicit hashbang #!/bin/bash on my script because the default shell (dash, I think?) on Ubuntu 12.04 wasn't happy with those redirection operators.
I think that the best option is to use:
set_property(GLOBAL PROPERTY RULE_LAUNCH_COMPILE "time -v")
set_property(GLOBAL PROPERTY RULE_LAUNCH_LINK "time -v")
Despite what has been said above:
Keep in mind, that using "time" directly is not portable, because this utility is not available on all platforms supported by CMake. However, CMake provides "time"...
https://stackoverflow.com/a/34888291/5052296
If your system has it, you will get much more detailed results with the -v flag.
e.g.
time -v /usr/bin/c++ CMakeFiles/basic_ex.dir/main.cpp.o -o basic_ex
Command being timed: "/usr/bin/c++ CMakeFiles/basic_ex.dir/main.cpp.o -o basic_ex"
User time (seconds): 0.07
System time (seconds): 0.01
Percent of CPU this job got: 33%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.26
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 16920
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 6237
Voluntary context switches: 7
Involuntary context switches: 23
Swaps: 0
File system inputs: 0
File system outputs: 48
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0