I'm trying to profile a C++ application that I did not write, to get a sense of where the major computation points are. I'm not a C++ expert, and even less of a C++ debugging/profiling expert. I believe I am running into a (common?) problem with dynamic libraries.
I compile and link to the Google CPU Profiler (OS X, g++) using:
env LIBS=-lprofiler ./configure
make
make install
I then profile the installed application (jags) with:
env CPUPROFILE=./jags.prof /usr/local/bin/jags regression.cmd
pprof /usr/local/bin/jags jags.prof
Unfortunately, I get the error:
pprof /usr/local/bin/jags jags.prof
Can't exec "objdump": No such file or directory at /usr/local/bin/pprof line 2833.
objdump /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLAPACK.dylib: No such file or directory
The program dynamically links to libLAPACK.dylib, so pprof does not seem to understand it (?). I thought about trying to link statically, but the documentation that comes with the program says it is impossible to statically link in LAPACK or BLAS (two required libraries).
Is there a way to have the profiler ignore libLAPACK? I'm okay if it doesn't sample within libLAPACK. Or how might I get profiling to work?
This error was caused by jags being a shell script that subsequently calls the profilable code.
pprof /usr/local/bin/REAL_EXEC jags.prof
fixes the problem.
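If you also want finer control over what gets sampled (for example, to leave out time spent inside the libLAPACK calls you don't care about), gperftools lets you start and stop the CPU profiler explicitly from your own code with ProfilerStart/ProfilerStop. Below is a minimal sketch, not taken from jags; the file and function names are made up, and the header is <gperftools/profiler.h> in recent gperftools (older installs ship it as <google/profiler.h>):

// Build with something like: g++ -O2 region_profile.cpp -lprofiler -o region_profile
// Inspect afterwards with:   pprof ./region_profile region.prof
#include <gperftools/profiler.h>   // gperftools CPU profiler API
#include <cstdio>

static double busy_work()          // stand-in for the computation you care about
{
    double s = 0.0;
    for (int i = 1; i < 50000000; ++i)
        s += 1.0 / i;
    return s;
}

int main()
{
    ProfilerStart("region.prof");  // begin sampling; samples are written to region.prof
    double s = busy_work();
    ProfilerStop();                // stop sampling before code you want left out
    std::printf("%f\n", s);
    return 0;
}

With explicit ProfilerStart/ProfilerStop you don't need the CPUPROFILE environment variable; only the bracketed region is sampled.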
I don't see a clean way to do it, but maybe there's a hacky workaround -- what happens if you hack the pprof perl script (or better, a copy thereof ;-) at line 2834, so that instead of calling error it emits the message and then does return undef;?
If you're profiling on OSX, the Shark tool is really great as well. It's very simple to use, and has worked out of the box for me when I've tried it.
I have an OCaml program that worked fine on Ubuntu 16, but when recompiled and run on Ubuntu 20 I get the following error:
$ ocamldebug ./linearizer
OCaml Debugger version 4.08.1
(ocd) r
Loading program... done.
Time: 89534
Program end.
Uncaught exception: Sys_error "Illegal seek"
(ocd) b
Time: 89533 - pc: 624888 - module Netaccel_link
No source file for Netaccel_link.
I thought this was due to missing dev libraries, but:
$ sudo apt install libocamlnet-ocaml-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
libocamlnet-ocaml-dev is already the newest version (4.1.6-1build6).
0 upgraded, 0 newly installed, 0 to remove and 20 not upgraded.
What setup step am I missing on Ubuntu 20?
This looks like a regression bug in libocamlnet. You should report an issue there or, though I am a bit pessimistic that you will get any response, you can try to debug the issue yourself.
The problem that you are facing has nothing to do with missing libraries (those would be reported during installation or, if the package were broken, would show up as linker errors). It may, however, result from some misconfiguration of the system. If that is the case, then you're lucky, as you can fix it yourself.
I will give you some advice that might help you in debugging this issue. For more, please use discuss.ocaml.org as a more suitable medium (SO doesn't favor this kind of discussion and we might get deleted by the admins).
The Illegal seek exception is thrown when a seek operation is applied to a non-regular file, aka the ESPIPE Unix error. So check your inputs. It could be that what was previously regarded as a file in Ubuntu is now a pipe or a socket.
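If it helps to see the underlying mechanism, here is a small sketch in C++ (not OCaml, and unrelated to your program) showing that seeking on a pipe fails with ESPIPE, which is exactly what the OCaml runtime surfaces as Sys_error "Illegal seek":

#include <cerrno>
#include <cstdio>
#include <cstring>
#include <unistd.h>

int main()
{
    int fds[2];
    if (pipe(fds) != 0) {                 // create a pipe: a non-regular "file"
        std::perror("pipe");
        return 1;
    }
    if (lseek(fds[0], 0, SEEK_SET) == -1) // seeking on a pipe is not allowed
        std::printf("lseek failed: %s (ESPIPE? %s)\n",
                    std::strerror(errno), errno == ESPIPE ? "yes" : "no");
    close(fds[0]);
    close(fds[1]);
    return 0;
}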
Try to use ltrace or strace to pinpoint the culprit e.g.,
ltrace ./linearizer
or, if it overwhelms you, try strace
strace ./linearizer
Instead of using ocamldebug you can use plain gdb. You can use gdb's interfaces to provide the path to the source code (though most likely that won't work, since ocamlnet is not compiled with debug information). I believe it will give you a more meaningful backtrace.
Instead of using the system installation try using opam. Install your dependencies with opam and try older versions as well as newer versions of the OCaml compiler. Also, try different versions of ocamlnet. Ideally, try to reproduce the environment that used to work for you.
When nothing else works, you can use objdump -d and look at the disassembly of your binary. OCaml uses a pretty readable and intuitive name-mangling scheme (<module_name>__<function_name>_<uid>), so you can easily find the source code (search for the <module_name>.ml file and look for <function_name> there).
Finally, just use docker or any other container to run your application. Consider switching from ocamlnet to something more modern and supported.
I know how to code but I really do not know my way around a computer.
I have a program that I have to run for my master's thesis. It is a code with multiple collabs and runs perfectly on Linux. However, it is a very complex simulation code and therefore takes time to run for multiple parameters. I've been using my Linux machine at the university to run it, but I would like to run some of it on my personal computer (macOS). It works by using the R language to call C++ functions as follows (where filename is a C++ source file).
On a Rstudio script:
Sys.setenv("PKG_CPPFLAGS" = "-fopenmp -DPARALLEL")
system("rm filename.so")
system("rm filename.o")
system ("R CMD SHLIB filename.cpp")
dyn.load("filename.so")
After system("R CMD SHLIB filename.cpp") I get the error:
clang: error: unsupported option '-fopenmp'
make: *** [filename.o] Error 1
I've researched the subject and found this:
Enable OpenMP support in clang in Mac OS X (sierra & Mojave)
I've installed LLVM, yet I do not know how to use it in this case.
How do I use it in this case?
Thank you in advance.
"Don't do it that way." Read up on R and Rcpp and use the proper tools (especially for packaging and/or compiling) which should pick up OpenMP where possible. In particular,
scan at least the Rcpp Introduction vignette
also look at the Rcpp Attributes vignette
"Just say no" to building the compilation commands by hand unless you know what you are doing with R and have read Writing R Extensions carefully a few times. It can be done, I used to show how in tutorials and workshops (see old slides from 12-15 years ago on my website) but we first moved to package inline which helps here, and later relied on the much better Rcpp Attributes.
Now, macOS has some extra hurdles regarding which tools work and which ones don't. The rcpp-devel mailing list may be of help; the default step otherwise is to consult the tutorial by James.
Edit: And of course, if you "just want the above to work", try the obvious step of removing the part causing the error, i.e. use
Sys.setenv("PKG_CPPFLAGS" = "")
as your macOS box appears to have a compiler but not OpenMP (which, as I understand it, is the default thanks to some "surprising" default choices at Apple -- see the aforementioned tutorial for installation help).
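For illustration only (this is my sketch, not taken from the vignettes; file and function names are made up): with Rcpp Attributes the C++ side could look roughly like the following, saved as omp_sum.cpp and loaded from R with Rcpp::sourceCpp("omp_sum.cpp"). The plugin line asks R's build machinery for the OpenMP flags where the toolchain actually supports them, instead of setting PKG_CPPFLAGS by hand.

#include <Rcpp.h>
#ifdef _OPENMP
#include <omp.h>                 // only pulled in when the toolchain provides OpenMP
#endif

// [[Rcpp::plugins(openmp)]]

// [[Rcpp::export]]
double omp_sum(Rcpp::NumericVector x)
{
    const double *p = x.begin(); // raw pointer to the vector's data
    const int n = x.size();
    double total = 0.0;
#ifdef _OPENMP
#pragma omp parallel for reduction(+ : total)   // parallel loop when OpenMP is enabled
#endif
    for (int i = 0; i < n; ++i)
        total += p[i];
    return total;
}

From R you would then call, for example, omp_sum(runif(1e7)); on a box without OpenMP the same file still compiles and simply runs the loop serially.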
I am currently installing the HDF5 library, more precisely the hdf5-1.10.0-patch1, on Cygwin, as I want to use it with Fortran. Following the instructions from the hdfgroup website
(here is the link), I did the following:
./configure --enable-fortran
make > "out1_check.txt" 2> "warn1_check.txt" &
make check > "out2_check.txt" 2> "warn2_check.txt" &
The execution of the last command (make check) proceeds as it should until it gets stuck. The process does not stop and something is happening (8-12% CPU in use by sh.exe, already 39 hours of CPU time), but "out2_check.txt" looks like
Making check in src
...
[many successful checks]
...
============================
No need to test testlinks_env.sh again.
============================
============================
Testing testswmr.sh
Unfortunately, I do not have the output file from the first run of make check, but it did not contain more information on Testing testswmr.sh. There was never any error message.
So, what is this testswmr.sh, why does it get stuck and how can I finalize the installation process? Maybe I can skip the remaining checks and just proceed to make install?
Important note: an older version of HDF5 is already installed from the Cygwin repo. It does not seem to support Fortran however, so I decided to install the current version myself.
Available (and used) compilers are gcc and gfortran.
As far as I can tell, only Intel Fortran is supported on Windows. There is no Cygwin download here https://support.hdfgroup.org/HDF5/release/obtain518.html and I have never come across a report of experience for Cygwin/Fortran/HDF5.
Your options:
Use Intel Fortran
Use Linux or Mac
Sorry
Currently I'm trying to start programming on my new Mac. I installed TextWrangler and chose C++ as my language of choice, since I have some prior knowledge of it from when I used Windows.
So, I wrote the ever so common "Hello World" program. However, when I tried to run it, I got an error:
"This file doesn’t appear to contain a valid ‘shebang’ line (application error code: 13304)"
I tried searching the error code to find out how to fix this, but I couldn't find anything. I have no idea what a 'shebang' line is. Can someone help me out?
You need to compile it with a compiler first. I assume you tried to run the source file like ./source, but C++ doesn't work that way.
With some compilers however, you can provide a shebang-line as the first line of the source file (the #! is known as shebang or crunchbang, hence the name), like so:
#!/path/to/compiler
That way, the shell knows what application is used to run that sort of file, and when you attempt to run the source file by itself, the compiler will compile and run it for you. That's a compiler-dependent feature though, so I recommend just plain compiling with g++ (or whatever Macs use) to get an executable, and then running that.
While I wouldn't recommend it for regular C++ development, I'm using a simple shell script wrapper for small C++ utilities. Here is a Hello World example:
#if 0 // -- build and run wrapper script for C++ ------------------------------
TMP=$(mktemp -d)
c++ -o ${TMP}/a.out ${0} && ${TMP}/a.out "${@:1}" ; RV=${?}
rm -rf ${TMP}
exit ${RV}
#endif // ----------------------------------------------------------------------
#include <iostream>
int main(int argc, char *argv[])
{
std::cout << "Hello world" << std::endl;
return 0;
}
It does appear that you are trying to run the source file directly; however, you will need to compile it using a C++ compiler, such as the one included in GCC (the GNU Compiler Collection), which provides the C++ compiler g++ for the Mac. It is not included with the Mac; you have to download it first:
from http://www.tech-recipes.com/rx/726/mac-os-x-install-gcc-compiler/ : "To install the gcc compiler, download the xcode package from http://connect.apple.com/. You’ll need to register for an Apple Developer Connection account. Once you’ve registered, login and click Download Software and then Developer Tools. Find the Download link next to Xcode Tools (version) – CD Image and click it!"
Once it's installed, if you are going for a quick Hello World, then, from a terminal window in the directory of your source file, you can execute the command g++ HelloWorld.cpp -o HelloWorld. Then you should be able to run it as ./HelloWorld.
Also, if you're coming from a Visual Studio world, you might want to give Mono and MonoDevelop a try. Mono is a free implementation of C# (and other languages), and MonoDevelop is an IDE which is very similar to Visual Studio. MonoDevelop supports C# and other .NET languages, including Visual Basic .NET, as well as C/C++ development. I have not used it extensively, but it does seem to be very similar to VS, so you won't have to learn everything new all in a day. I have also used KDevelop, which I liked a lot while I was using it, although that's been a while now. It has a lot of support for GNU-style development in C/C++, and was very powerful as I recall.
Good luck with your endeavors!
Links:
Mono: http://mono-project.com/Main_Page
MonoDevelop: http://monodevelop.com/
KDevelop: http://kdevelop.org/
Shebang is explained at http://en.wikipedia.org/wiki/Shebang_%28Unix%29.
I'm not sure why your program is not running; you will need to compile and link it to make an executable.
What I find confusing (/interesting) is a C++ program giving a "shebang line" error. A shebang line is a way for Unix-like operating systems to specify which program should be used to interpret the rest of the file; it usually points to the path of the interpreter. C++ is a compiled language and does not have an interpreter.
To get the real technical details of how shebang lines work, do a man execve and get that man page online here - man execve.
If you're on a Mac, then doing something like this on the command line:
g++ -o program program.cpp
will compile and link your program into an executable called program. Then you can run it like:
./program
The reason you got the 'shebang' error is probably because you tried to run the cpp file like:
./program.cpp
and the shell then tries to find an interpreter to run the code in the file. Because this is C++ there is no relevant interpreter, but if your file contained Python or Bash, then having a line like this
#!/usr/bin/python
as the first line of your source file will tell the shell to use the Python interpreter.
Lines that start with a pattern like #!/.../.../... are called shebang lines. In other words, a shebang is the character sequence consisting of the number sign and exclamation mark (#!). In Unix-like operating systems, when a text file with a shebang is used as if it were an executable, the program loader parses the rest of the file's initial line as an interpreter directive. The loader executes the specified interpreter program, passing to it as an argument the path that was initially used when attempting to run the script, so that the program may use the file as input data.
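To see this mechanism in action without leaving C++, the sketch below (file names are made up) writes a tiny script whose interpreter is /bin/echo, marks it executable, and execs it; the kernel parses the shebang and runs /bin/echo with the script's path as an argument, so the path is simply printed:

#include <cstdlib>
#include <fstream>
#include <sys/stat.h>
#include <unistd.h>

int main()
{
    const char *path = "./echo_self.sh";    // hypothetical demo script
    {
        std::ofstream script(path);
        script << "#!/bin/echo\n";          // the interpreter directive (shebang)
    }                                       // stream closed here, file written
    chmod(path, 0755);                      // make the script executable

    // The loader sees "#!/bin/echo" and effectively runs: /bin/echo ./echo_self.sh
    execl(path, path, static_cast<char *>(nullptr));
    return EXIT_FAILURE;                    // reached only if exec failed
}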
I'm using Linux Red Hat 3. Can someone explain how it is possible that I am able to analyze, with gdb, a core dump generated on Linux Red Hat 5?
Not that I'm complaining :) but I need to be sure this will always work... ?
EDIT: the shared libraries are the same version, so no worries about that; they are placed on shared storage so they can be accessed from both Linux 5 and Linux 3.
Thanks.
You can try the following GDB commands to open a core file:
gdb
(gdb) exec-file <path to executable>
(gdb) set solib-absolute-prefix <path to shared library>
(gdb) core-file <path to core file>
The reason why you can't rely on it is that every process uses libc or other system shared libraries, which will certainly have changed from Red Hat 3 to Red Hat 5. So the instruction addresses and the number of instructions in native functions will differ, and that is where the debugger gets goofed up and can possibly show you wrong data to analyze. So it is always good to analyze the core on the same platform, or, if you can, copy all the required shared libraries to the other machine and set the path through set solib-absolute-prefix.
In my experience, analysing a core file generated on another system does not work, because the standard library (and the other libraries your program probably uses) will typically be different, so the addresses of the functions are different and you cannot even get a sensible backtrace.
Don't do it, because even if it works sometimes, you cannot rely on it.
You can always run gdb -c /path/to/corefile /path/to/program_that_crashed. However, if program_that_crashed has no debug infos (i.e. was not compiled and linked with the -g gcc/ld flag) the coredump is not that useful unless you're a hard-core debugging expert ;-)
Note that the generation of corefiles can be disabled (and it's very likely that it is disabled by default on most distros). See man ulimit. Call ulimit -c to see the limit of core files, "0" means disabled. Try ulimit -c unlimited in this case. If a size limit is imposed the coredump will not exceed the limit size, thus maybe cutting off valuable information.
Also, the path where a coredump is generated depends on /proc/sys/kernel/core_pattern. Use cat /proc/sys/kernel/core_pattern to query the current pattern. It's actually a path, and if it doesn't start with / then the file will be generated in the current working directory of the process. And if cat /proc/sys/kernel/core_uses_pid returns "1" then the coredump will have the PID of the crashed process as a file extension. You can also set both values, e.g. echo -n /tmp/core > /proc/sys/kernel/core_pattern will force all coredumps to be generated in /tmp.
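If you want to test the whole workflow end to end, a deliberately crashing program is handy. The sketch below (file name is made up) segfaults on purpose; build it with -g, raise the core limit, run it, and you should get a core file to load into gdb as described above:

// Build and run, e.g.:
//   g++ -g -O0 crashme.cpp -o crashme
//   ulimit -c unlimited
//   ./crashme                # crashes and dumps core
//   gdb -c /path/to/corefile ./crashme
#include <cstdio>

static void boom()
{
    volatile int *p = 0;       // volatile so the write is not optimised away
    *p = 42;                   // null-pointer write: SIGSEGV, core dumped
}

int main()
{
    std::printf("about to crash...\n");
    boom();
    return 0;
}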
I understand the question as: how is it possible that I am able to analyse a core that was produced under one version of an OS under another version of that OS?
Just because you are lucky (even that is questionable). There are a lot of things that can go wrong by trying to do so:
the tool chains gcc, gdb etc. will be of different versions
the shared libraries will be of different versions
so no, you shouldn't rely on that.
You have asked a similar question and accepted an answer (of course by yourself) here: Analyzing core file of shared object
Once you load the core file you can get the stack trace, get the last function call, and check the code for the reason of the crash.
There is a small tutorial here to get started with.
EDIT:
Assuming you want to know how to analyse a core file using gdb on Linux, as your question is a little unclear.