Possible to write improved cmake logging macro with position information? - c++

When debugging CMake code, I constantly find myself writing things like the following:
message( "[some_filename.cmake]: some message about what is going on here." )
In C++, I use macros to automatically log the file name and line number - is this possible in CMake? Ideally, I'd like to write a macro so that the following call:
log_info( "some message about what is going on here." )
would print this to the console:
|info | some_filename.cmake[72] some message about what is going on here.

With CMAKE_CURRENT_LIST_FILE and CMAKE_CURRENT_LIST_LINE this should be possible.
However, using CMAKE_CURRENT_LIST_LINE directly in the macro will always give you the line in the macro, not where the macro was used (at least with CMake 3.6.1). Thus you need to pass it as an argument:
macro(log_info _line)
  # get the path of the CMakeLists file evaluating this macro
  # relative to the project's root source directory
  file(RELATIVE_PATH _curr_file
       "${PROJECT_SOURCE_DIR}"
       "${CMAKE_CURRENT_LIST_FILE}")
  message("|info | ${_curr_file}[${_line}] ${ARGN}")
endmacro(log_info)
And use it as
log_info(${CMAKE_CURRENT_LIST_LINE} "an info message")
A more advanced solution would be:
macro(log _level _line)
  # get the path of the CMakeLists file evaluating this macro
  # relative to the project's root source directory
  file(RELATIVE_PATH _curr_file
       "${PROJECT_SOURCE_DIR}"
       "${CMAKE_CURRENT_LIST_FILE}")
  set(level_padded " ")
  if("${_level}" MATCHES "[Dd][Ee][Bb][Uu][Gg]")
    set(level_padded "debug ")
  elseif("${_level}" MATCHES "[Ii][Nn][Ff][Oo]")
    set(level_padded "info ")
  elseif("${_level}" MATCHES "[Ee][Rr][Rr][Oo][Rr]")
    set(level_padded "error ")
  endif()
  message("|${level_padded} | ${_curr_file}[${_line}] ${ARGN}")
endmacro(log)
Usage:
log(debug ${CMAKE_CURRENT_LIST_LINE} "debug info")
log(Info ${CMAKE_CURRENT_LIST_LINE} "info message")
log(ERROR ${CMAKE_CURRENT_LIST_LINE} "error message")

I don't think this is possible in a general manner. CMake does not offer much help with regard to debugging or introspection.
I can imagine a macro that uses global variables which must be added by the user at the beginning of each macro or function. In the end, it wouldn't reduce the amount of manual work but would clutter the code.
Maybe you can use a function or macro of your editor. Some editors can insert the current file name, the current timestamp or a pseudo-random UUID with a keystroke. Whenever you want to log a message, just add that value and you can later search the code for this unique sequence. Not sure whether this is a better workflow for you.
The best place would be a feature request for CMake. You are not the only one who could profit from such a feature. Maybe Kitware or someone else is willing to develop and upstream a log macro.

Related

Immuconf with Clojure not handling three config files

Whenever I add a third config file to my .immuconf.edn I get:
No configuration files were specified, and neither an .immuconf.edn file nor
an IMMUCONF_CFG environment variable was found
This is driving me crazy since I can't really find anything wrong.
Using this loads things OK:
["configs/betfair.edn" "configs/web-server.edn"]
however, this generated an error:
["configs/betfair.edn" "configs/web-server.edn" "~/betfair.edn"]
This is the content of betfair.edn
{:betfair {:usr "..."
           :pwd "..."
           :app-key "..." ;; key used
           :app-key-live "..."
           :app-key-test "..."}}
(where ... is replaced with actual strings)
Why am I getting this error when adding the third file and how can I fix this?
Make sure that the last file specified in your <project dir>/.immuconf.edn (~/betfair.edn) exists in your home directory.
Immuconf does some magic to replace ~ in filenames specified in .immuconf.edn with a value of (System/getProperty "user.home") so you might check if that system property points to the same directory where your ~/betfair.edn file is located.
I have recreated your setup and it works on my machine, so it is probably a problem with the locations of or access rights to your files. Unfortunately, error handling for the no-arg invocation of (immuconf.config/load) doesn't help in troubleshooting, as it swallows any exceptions and returns nil. That exception would probably tell you what kind of error occurred (some file not found or some IO error). You might want to file a pull request with a patch to log such errors as warnings instead of ignoring them.

One source needs to compile differently on multiple machines

I have a program in C++ that is designed to run a simulation for a summer project I'm doing. It is pretty computationally intensive, so I have gotten permission to run it on a cluster computer, but I test and develop it on my own laptop. This program generates text files as output, and this is where I run into trouble.
I need the text files to be saved in different paths depending on whether I'm running the program on my own computer or on the cluster computer. My solution so far has been to use $(shell hostname) in my makefile to check which machine the code is being compiled on and, based on that output, define macros in the makefile for conditional compilation. At one time I was using two different versions of a header that defined macros differently on my laptop versus the cluster, but since I'm using a git repository to transfer changes back and forth, I had a very difficult time excluding one file like that.
I was just wondering what the preferred practice is for setting paths at compile time on different computers from the same source.
It doesn't sound to me like it needs to compile differently on different machines. It sounds like it needs to take some paths at run-time from either the command line, or from some sort of config file.
One suggestion would be to use the Boost Program Options library, which with one simple setup allows you to read the same parameters either from the command line or from a config file. This is what I used when running similar jobs on a big cluster and on my laptop, and it worked nicely.
Below is a simple example from their docs:
#include <iostream>
#include <boost/program_options.hpp>

namespace po = boost::program_options;
using std::cout;

int main(int ac, char* av[])
{
    // Declare the supported options.
    po::options_description desc("Allowed options");
    desc.add_options()
        ("help", "produce help message")
        ("compression", po::value<int>(), "set compression level")
    ;

    po::variables_map vm;
    po::store(po::parse_command_line(ac, av, desc), vm);
    po::notify(vm);

    if (vm.count("help")) {
        cout << desc << "\n";
        return 1;
    }

    if (vm.count("compression")) {
        cout << "Compression level was set to "
             << vm["compression"].as<int>() << ".\n";
    } else {
        cout << "Compression level was not set.\n";
    }
    return 0;
}
I agree with Alex: the easiest solution will not be at compile time, but at runtime, either via a config file or command line arguments. All other things being equal, it may be easier for you to just try passing the path via command line arguments using argv and argc.
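For illustration, a minimal sketch of the argv/argc approach (the argument layout and the ./results fallback are my own example, not something prescribed by the answer):

#include <iostream>
#include <string>

int main(int argc, char* argv[])
{
    // Use the first argument as the output directory, or fall back to a default.
    std::string outputDir = (argc > 1) ? argv[1] : "./results";
    std::cout << "Writing output files to " << outputDir << "\n";
    // ... open files under outputDir and run the simulation ...
    return 0;
}

On the cluster you would then invoke it as, say, ./sim /scratch/myuser/results, and on the laptop with a local path or no argument at all.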
I am not very experienced with this, but I can think of one simple way of doing this.
Set up an environment variable that points to the appropriate directory on each machine,
and use that environment variable in your makefile.
For example, in machine 1's ~/.bashrc:
export MY_DIRECTORY=~/Foo
and in machine 2's ~/.bashrc:
export MY_DIRECTORY=~/Bar
Your Makefile will then use the environment variable of the machine it is running on, e.g. $(MY_DIRECTORY).
And since ~/.bashrc is not part of your repository, different copies can exist on the two machines.
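If the path needs to end up inside the program itself, one way (a sketch; the OUTPUT_DIR macro name and the -D flag wiring are my own illustration, not part of the answer) is to forward the variable as a preprocessor define, e.g. with CXXFLAGS += -DOUTPUT_DIR=\"$(MY_DIRECTORY)\" in the Makefile, and then:

#include <fstream>
#include <string>

#ifndef OUTPUT_DIR
#define OUTPUT_DIR "./output"   // fallback when the Makefile does not define it
#endif

int main()
{
    // OUTPUT_DIR expands to whatever $(MY_DIRECTORY) held on the build machine.
    std::ofstream results(std::string(OUTPUT_DIR) + "/results.txt");
    results << "simulation output\n";
    return 0;
}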
If you can stomach a dependency on QtCore (between 750K and 4MB library depending on compile options and platform), you can use QSettings to conveniently store the directory path without having to set the directory path each time. You can pass it on the command line at runtime once and have the program store the result into the settings file, and then that setting will become the new default for future invocations of the program without the command line argument.
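A rough sketch of that idea, assuming QtCore is available (the organization/application names and the "outputDir" key are made up for illustration):

#include <QCoreApplication>
#include <QSettings>
#include <QString>

int main(int argc, char* argv[])
{
    QCoreApplication app(argc, argv);
    QCoreApplication::setOrganizationName("MyLab");
    QCoreApplication::setApplicationName("Simulation");

    // If a directory is passed on the command line, remember it for next time.
    QSettings settings;
    if (argc > 1)
        settings.setValue("outputDir", QString::fromLocal8Bit(argv[1]));

    // Otherwise fall back to whatever was stored previously (or a default).
    QString outputDir = settings.value("outputDir", "./results").toString();
    // ... write the simulation output under outputDir ...
    return 0;
}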
Other dependency-free alternatives would involve writing your own configuration file parsing routines or using existing ones, but I always like to rely on well-tested and open source code.
Good luck!
You could continue with the route of using separate headers. Include both in your Git repository as clusterHeader.hpp and laptopHeader.hpp, and use your existing Makefile script to build with a different header on each system. To ease linking troubles, perhaps temporarily rename or copy the appropriate file within the script from clusterHeader.hpp or laptopHeader.hpp to just plain old header.hpp while building, and change the name back at the end of the script.
If you want to keep the changes in your headers consistent between builds, use this method on another header file, and #include that header within your OS-independent header.
i.e.
source.cpp
  |
  -> header.hpp
       |
       -> clusterHeader.hpp OR laptopHeader.hpp
Alternatively, as long as the systems aren't exactly the same OS (which I'm assuming they aren't since one is a cluster), you could probably quite easily get it to work with some simple #ifdef statements.
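A sketch of what that could look like, assuming for illustration that the cluster runs Linux and the laptop is a Mac or Windows machine (the paths are invented):

#include <string>

// Pick the output directory at compile time from predefined platform macros.
#if defined(__linux__)
static const std::string kOutputDir = "/scratch/myuser/results";   // cluster
#elif defined(__APPLE__) || defined(_WIN32)
static const std::string kOutputDir = "./results";                 // laptop
#else
#error "Unknown platform: please set an output directory"
#endif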
Finally, CMake or qmake are always options.

How to get the function name and line number when a project crashes in release mode

I have a project in C++/MFC.
When I run it in debug mode and the project crashes,
I can get the function and line number in the code
with the SetUnhandledExceptionFilter function,
but in release mode I cannot get them.
I have tested these functions and sources:
_set_invalid_parameter_handler msdn.microsoft.com/en-us/library/a9yf33zb(v=vs.80).aspx
StackWalker http://www.codeproject.com/KB/threads/StackWalker.aspx
MiniDumpReader & crashrpt http://code.google.com/p/crashrpt/
StackTracer www.codeproject.com/KB/exception/StackTracer.aspx
Is there any way to get the function and line of code when the project crashes in release mode
without requiring a PDB file, map file or source file?
PDB files are meant to provide you with this information; the flaw is that you don't want a PDB file. I can understand not wanting to release the PDB to end users, but in that case why would you want them to see stack trace information? To me, your goal conflicts with itself.
The best solution for gathering debug info from end users is via a minidump, not by piecing together a stack trace on the client.
So, you have a few options:
Work with minidumps (ideal, and quite common)
Release the PDBs (which won't contain much more info than you're already trying to deduce)
Use inline trace information in your app such as __LINE__, __FILE__, and __FUNCTION__.
Just capture the crash address if you can't piece together a meaningful stack trace.
Hope this helps!
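For the minidump route, here is a minimal sketch of an unhandled-exception filter that writes a dump on crash (the crash.dmp file name is just an example, and error handling is omitted):

#include <windows.h>
#include <dbghelp.h>
#pragma comment(lib, "dbghelp.lib")

LONG WINAPI WriteCrashDump(EXCEPTION_POINTERS* exceptionInfo)
{
    // Write a minidump next to the executable; analyze it later with the PDB
    // that stays on the build machine and never ships to end users.
    HANDLE file = CreateFileA("crash.dmp", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file != INVALID_HANDLE_VALUE)
    {
        MINIDUMP_EXCEPTION_INFORMATION info;
        info.ThreadId = GetCurrentThreadId();
        info.ExceptionPointers = exceptionInfo;
        info.ClientPointers = FALSE;
        MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                          MiniDumpNormal, &info, NULL, NULL);
        CloseHandle(file);
    }
    return EXCEPTION_EXECUTE_HANDLER;
}

// Install the filter early, e.g. at the start of main() or InitInstance():
// SetUnhandledExceptionFilter(WriteCrashDump);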
You can get verbose output from the linker that will show you where each function is placed in the executable. Then you can use the offset from the crash report to figure out which function was executing.
In release mode, this sort of debugging information isn't included in the binary. You can't use debugging information which simply isn't there.
If you need to debug release-mode code, start manually tracing execution by writing to a log file or stdout. You can include information about where your code appears using __FUNCTION__ and __LINE__, which the compiler will replace with the function/line they appear in/on. There are many other useful predefined macros which you can use to debug your code.
Here's a very basic TR macro, which you can sprinkle throughout your code to follow the flow of execution.
#include <iostream>

void trace_function(const char* function, int line) {
    std::cout << "In " << function << " on line " << line << std::endl;
}

#define TR() trace_function(__FUNCTION__, __LINE__)
Use it by placing TR() at the top of each function or anywhere you want to be sure the flow of execution is reaching:
void my_function() {
    TR();
    // your code here
}
The best solution though, is to do your debugging in debug mode.
You can separate the debug symbols so that your release version is clean, then bring them together with the core dump to diagnose the problem afterwards.
This works well for GNU/Linux, not sure what the Microsoft equivalent is. Someone mentioned PDB...?

How to get my program name in GDB when writing a "define" script?

I found it's very annoying to get the program name when defining a GDB script. I can't find any corresponding "info" command, and we can't use argv[0] either, because of multi-process/thread and frame-choosing.
So, what should I do?
If you are using a recent gdb (7 and above) you can play around with the Python support which is quite extensive (and probably where you want to go when defining gdb-scripts in general). I'm no expert at it but for a test-program /tmp/xyz I could use:
(gdb) python print gdb.current_progspace().filename
/tmp/xyz
See http://sourceware.org/gdb/current/onlinedocs/gdb/Python.html for more info on the Python support.
In "normal" gdb you could get the process name with "info proc" and "info target", but I guess you want it not just printed but usable further in scripts? I don't know how to get the value out of the Python runtime into a gdb variable other than the extremely ugly "write it to a log file, source it, and hope for the best" approach. This is how that could be done:
define set-program-name
  set logging file tmp.gdb
  set logging overwrite on
  set logging redirect on
  set logging on
  python print "set $programname = \"%s\"" % gdb.current_progspace().filename
  set logging off
  set logging redirect off
  set logging overwrite off
  source tmp.gdb
end
and in your own function doing:
define your-own-func
  set-program-name
  printf "The program name is %s\n", $programname
end
I would suggest "going all in" on the Python support and scrapping gdb scripting. I believe it is worth the effort.

Using CMake with CTest and CDash

I am going to use CDash with CMake/CTest on my C++ project.
In order to enable CDash and customize settings such as
MEMORYCHECK_SUPPRESSIONS_FILE and DART_TESTING_TIMEOUT, I added the following lines to the root CMakeLists.txt:
set(MEMORYCHECK_SUPPRESSIONS_FILE "${CMAKE_SOURCE_DIR}/valgrind.supp")
set(DART_TESTING_TIMEOUT "120")
include(CTest)
However, the generated "DartConfiguration.tcl" does not contain my settings at all
(MemoryCheckSuppressionFile is empty and TimeOut is still the default value).
I found that, for example, if I pass -DDART_TESTING_TIMEOUT:STRING=120 on the command line, it works, but it fails if I specify the settings in the CMakeLists.txt.
Thank you in advance :)
DartConfiguration.tcl
# Dynamic analysis and coverage
PurifyCommand:
ValgrindCommand:
ValgrindCommandOptions:
MemoryCheckCommand: /usr/bin/valgrind
MemoryCheckCommandOptions:
MemoryCheckSuppressionFile:
CoverageCommand: /usr/bin/gcov
# Testing options
# TimeOut is the amount of time in seconds to wait for processes
# to complete during testing. After TimeOut seconds, the
# process will be summarily terminated.
# Currently set to 25 minutes
TimeOut: 1500
There are three possible solutions:
1. Create cache variables. This also creates a GUI entry for the variable, which is not always what you want for automatic testing: SET(DART_TESTING_TIMEOUT "120" CACHE STRING "")
2. Specify your options with a simple "set" command, but in a file called DartConfig.cmake instead of the main CMakeLists.txt. This file gets parsed to create the DartConfiguration.tcl.
3. Use CTest scripting to set up your dartclient: http://www.cmake.org/Wiki/CMake_Scripting_Of_CTest