I have created a boost thread using:
boost::thread thrd(&connectionThread); where connectionThread is a simple void function. This works fine. However, when I try to make it wait for some seconds, for example using:
boost::xtime xt;
boost::xtime_get(&xt, boost::TIME_UTC);
xt.sec += 1;
boost::thread::sleep(xt); // Sleep for 1 second
The program crashes at the xtime_get line. Even when I try to set xt.sec manually, it doesn't work. I've tried several other methods, but I can't seem to make it work. Is there something I'm doing wrong? Is there an easier way to achieve my goal?
"Is there an easier way?"
Maybe something along these lines:
boost::this_thread::sleep(boost::posix_time::seconds(1));
boost::thread::sleep(boost::posix_time::seconds(1));
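For context, a minimal sketch of how the first variant might be used (assumptions: Boost.Thread is linked, and connectionThread's body is invented here as a stub):

#include <boost/thread.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

// Stand-in for the asker's connectionThread; the body is hypothetical.
void connectionThread()
{
    std::cout << "connecting..." << std::endl;
    boost::this_thread::sleep(boost::posix_time::seconds(1)); // sleep 1 second
    std::cout << "done" << std::endl;
}

int main()
{
    boost::thread thrd(&connectionThread);
    thrd.join(); // wait for the thread to finish
    return 0;
}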
boost::xtime_get() looks like one of the few Boost APIs that's not implemented in a header, so this might be a matter of not having the Boost library compiled correctly. It is probably something like mismatched calling conventions. I don't know off the top of my head what steps you might need to go through to rebuild the library; all I've ever used in Boost has been stuff that only requires the headers.
It might be helpful if you just trace into the xtime_get() routine, even if it's at the assembly level. The xtime struct is very, very basic and xtime_get() really doesn't do anything more than call a platform-specific API to get the numbers to plug into the xtime struct.
With only that code (not knowing, for example, where you put it), all I can say is that xtime_get() returns the type of the measurement taken. That is, you have to be sure, for example, that the following assertion holds:
int res = boost::xtime_get(&xt, boost::TIME_UTC);
assert(res == boost::TIME_UTC);
It may happen that on your system this is not the case.
However, looking at the code again, it occurs to me that the crash may not be related to this call in particular, but to other things you're doing in your application. Again, it depends on where you're using this code. Is it within the operator() of your thread?
I was trying to think of a way to deal with C libraries that expect you to initialize them globally, and I came up with this:
#include <curl/curl.h>
#include <cstdio>

namespace {

class curl_guard {
public:
    curl_guard()
    {
        puts("curl_guard constructor");
        curl_global_init(CURL_GLOBAL_DEFAULT);
    }
    ~curl_guard()
    {
        puts("curl_guard destructor");
        curl_global_cleanup();
    }
};

curl_guard curl{}; // nothing is ever printed to terminal

} // namespace
But when I link this into an executable and run it, there's no output, because the object is optimized out (verified with objdump), even in a debug build. As I understand it, this is intended, because this type is never accessed in any way. Is there any way to mark it so that it is not excluded? Preferably without making it accessible to the user of the library. I'd prefer a general solution, but GCC-only also works for my purposes.
I am aware of typical singleton patterns, but I think this is a special case where none of them apply, because I never want to access this object, even from the library internals. I simply want a class tucked away with the one job of initializing and deinitializing a C library, triggered by the library being linked in, not by something arbitrary like "just don't forget to construct this in main", which is about as useful as going back to writing C code.
The real solution was pretty simple: the code can't live in a separate compilation unit; it needs to be in one of the compilation units that are actually used by the user of the library.
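For the GCC-only route mentioned in the question, a sketch of the compiler-side half (an assumption on my part: __attribute__((used)) keeps the compiler from discarding the unreferenced object, but it does not force the linker to pull the containing object file out of a static archive, which is consistent with the solution above):

namespace {

// Same guard class as above. GCC/Clang-specific: __attribute__((used))
// asks the compiler to emit and keep the variable even though nothing
// references it.
__attribute__((used)) curl_guard curl{};

} // namespace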
libcurl needs to be initialized globally, and initialization is NOT thread-safe, because libraries it depends on also cannot be initialized in a thread-safe manner. So this is one of those cases where global pre-main initialization is not only convenient but useful. I am in fact using several other libraries like that, and I separate them from libraries that do use threads in the background by initializing the latter in main, as opposed to pre-main.
And while there are some concerns with doing something like this at all, it's exactly what I need, including the library aborting before main even runs if it's installed improperly.
Sure, it's "good practice" to ignore critical errors just to please the users of libraries who insist that a library aborting is oh so terrible, but I'm sure no one likes to learn that in the next 50 seconds they will be dead, flying vertically downwards, because of something that could have been found and fixed early.
In my application I would like to use a configuration file. However, I have a chicken-and-egg problem here: the configuration file can even describe how SDL (and other things) should be initialized. Thus, I would need to call SDL_GetPrefPath() even before SDL_Init() to get the common per-user place where my application's configuration is stored. I'm not sure if that's possible. I also need SDL_GetBasePath() for a similar (fall-back) purpose; I know that's ugly, but some users would optionally like that for trying to find the configuration file. Reading the configuration file itself is not a problem, as I don't use SDL-related functions for that, but I do need to get the directory where the configuration file can be found.
Surely, I can test whether it works, but that's not the best approach: it may fail with different SDL versions, on another architecture, or who knows. I would like to know whether it's safe "by design" (and "future-proof") or not.
On the SDL-2 wiki, some functions, like SDL_ShowSimpleMessageBox(), are noted as safe to use even before SDL_Init(), but I am not sure whether that is the situation here. I'm also not sure the SDL-2 wiki is detailed enough to always point this out (as it does for SDL_ShowSimpleMessageBox()), since this information is not mentioned on the pages of the functions I'm asking about.
Note that my application is intended to run on Linux, macOS and Windows, so it would be hard to judge by myself where SDL will put its preference directory after initialization, and doing so would be ugly, redundant stuff anyway ...
Maybe it's useful for others too, so this is what I could figure out:
https://bugzilla.libsdl.org/show_bug.cgi?id=3796
I've submitted an SDL Bugzilla ticket about this. The answer is that it's basically platform-dependent, and it is not safe to use these functions without SDL_Init(). It may work on some platforms and not on others (or in the future). However, I got a tip: wrap the calls in SDL_Init(0) and SDL_Quit(). So, with my own ideas added as well, something like this:
char *my_pref_dir_path = NULL;

if (SDL_Init(0) == 0) {  /* SDL_Init() returns 0 on success */
    char *p = SDL_GetPrefPath(app_org, app_name);
    if (p) {
        my_pref_dir_path = strdup(p);
        SDL_free(p);
    }
    SDL_Quit();
}

if (!my_pref_dir_path) {
    /* ... panic or exit or whatever ... */
}
SDL_Init(0) won't initialize much (not even the video or other subsystems, so it won't interfere with the plan that I may not need SDL at all later), but it should now be safe to use the desired functions. After that, SDL_Quit() makes SDL "go away". Later, of course, you can have a "proper" SDL initialization, for example SDL_Init(SDL_INIT_EVERYTHING), just as if this code had never been in your program at all.

Since I am not sure whether the pointer returned by SDL_GetPrefPath() remains valid after SDL_Quit(), I strdup() it, and I call SDL_free() on the pointer returned by SDL_GetPrefPath() before SDL_Quit(), since that's recommended by the documentation anyway. Maybe I'm being too careful here, but I think it's a safer bet when I play tricks like this.

After this code, my_pref_dir_path will hold a pointer to the SDL preferences directory string, or NULL if an error occurred.
I'm implementing a logger for an OpenGL application (the only reason I'm mentioning it is that it runs in a loop). I'd like to somehow log every method call, or some group of method calls of some classes, every time they are called.

My initial approach was to place the required logger function call in all the methods (which actually kind of works like comments :) ), but I got really tired of it really fast, so I started looking for a more effective way. I searched Google for some time, but since I don't really know what I'm looking for, I ran out of ideas.

The best thing for my case would be some kind of magical method that would be called every time I invoked any other class method, ideally with the name and a parameter string as arguments (kind of like PHP's magic method __call(), but that one is invoked only if the method is not defined). I don't know what I am looking for, whether something like that even exists, and if it does, what it is called.
P.S.:
my logging works on macros, so no worries for performance there :)
#if DEV_LOG
#define log_init() logInit()
#define log_write(a,b) writeToLog(to_str(a), to_str(b))
#else
#define log_init()
#define log_write(a,b)
#endif
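(For reference, to_str is not shown in the post; presumably it is the usual two-level stringizing macro:)

// Assumed definition, not part of the original post: two levels so that
// macro arguments are expanded before being stringized.
#define to_str_impl(x) #x
#define to_str(x) to_str_impl(x)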
( And if there's a nicer way to do this, let me know, please :) )
Thank you!
First, I have to re-cite my co-answerer Filip here:
C++ doesn't have this kind of "magical method", so you are stuck with explicitly stating a function call inside every member-function, if you'd like one to be made.
Such things are implemented as compiler-specific features, like GCC's profiling support: code is generated to track function calls, their parameters, where they were called from, and how often.

The general usage is to compile and link your code with special compiler flags that generate this instrumentation. When your code is run, the information is stored in a specific kind of database that can be analyzed with a separate tool afterwards (e.g. gprof for the GCC toolchain).

A similar tooling suite is used for retrieving the code coverage of certain program runs (e.g. test suites for your code): gcov, A Test Coverage Program.
C++ doesn't have this kind of "magical method", so you are stuck with explicitly stating a function call inside every member-function, if you'd like one to be made.
You could instead use a debugger to track the calls made; the program you've written shouldn't have to be responsible for questions such as "what code is called, when, and with what?". That's exactly the question a profiler, or a debugger, was made to answer.
I have the following requirements:

Adding text at the entry and exit point of any function.
Not altering the source code, besides the insertions described above (so no pre-processor or anything like that).
For example:
void fn(param-list)
{
    ENTRY_TEXT(param-list)
    //some code
    EXIT_TEXT
}
But it must work not only in such a simple case; it also has to cope with pre-processor directives!
Example:
void fn(param-list)
#ifdef __WIN__
{
    ENTRY_TEXT(param-list)
    //some windows code
    EXIT_TEXT
}
#else
{
    ENTRY_TEXT(param-list)
    //some any-os code
    if (condition)
    {
        return; //should become EXIT_TEXT
    }
    EXIT_TEXT
}
#endif
So my question is: is there a proper way of doing this?
I already tried some work with the parsers used by compilers, but since they all rely on running a pre-processor before parsing, they are useless to me.

Also, some of the token-generating parsers that do not need a pre-processor are somewhat useless, because they generate an in-memory mapping of tokens, which then leads to a completely new source file instead of just inserting the text.
One thing I am working on is trying it with FLEX (or JFlex); if this is a valid option, I would appreciate some input on it. ;-)
EDIT:
To clarify a little bit: the purpose is to allow something like a stack trace.

I want to trace every function call, and in order to follow the call hierarchy, I need to place a macro at the entry point and at the exit point of each function.

This builds a function-call trace. :-)
EDIT2: Compiler-specific options are not quite suitable, since we use many different compilers, and many of them are probably not well supported by any tools out there.
Unfortunately, your idea is not only impractical (C++ is complex to parse), it's also doomed to fail.
The main issue you have is that exceptions will bypass your EXIT_TEXT macro entirely.
You have several solutions.
As has been noted, the first solution would be to use a platform-dependent way of computing the stack trace. It can be somewhat imprecise, especially because of inlining: small functions inlined into their callers do not appear in the stack trace, as no function call is generated at the assembly level. On the other hand, it's widely available, does not require any surgery on the code, and does not affect performance.
A second solution would be to introduce something only on entry and use RAII to do the exit work. Much better than your scheme, as it automatically deals with multiple returns and exceptions, but it suffers from the same issue: how to perform the insertion automatically. For this you will probably want to operate at the AST level and modify the AST to introduce your little gem. You could do it with Clang (look up the C++11 migration tool for examples of rewrites at large) or with GCC (using plugins).
Finally, you also have manual annotations. While it may seem underpowered (and a lot of work), this way you do not leave the logging to a tool. I see three advantages to doing it manually: you can avoid introducing the overhead in performance-sensitive parts, you can retain only a "summary" of big arguments, and you can customize the summary based on what's interesting for the current function.
I would suggest using LLVM libraries & Clang to get started.
You could also leverage the C++ language itself to simplify the process: insert a small object that is constructed on function-scope entry, and rely on the fact that it will be destroyed on exit. That should massively simplify recording the 'exit' of the function.
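A minimal sketch of that idea (the names here are mine, not from any library):

#include <cstdio>

// Logs entry in the constructor; the destructor fires on every exit
// path, including early returns and exceptions.
class scope_trace {
public:
    explicit scope_trace(const char* fn) : fn_(fn) { std::printf("ENTER %s\n", fn_); }
    ~scope_trace()                                 { std::printf("EXIT  %s\n", fn_); }
private:
    const char* fn_;
};

void fn()
{
    scope_trace trace(__func__); // one insertion point per function
    // ... body; every return or throw still logs the exit ...
}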
This does not really answer your question; however, for your initial need, you may use the backtrace() function from execinfo.h (if you are using GCC).
How to generate a stacktrace when my gcc C++ app crashes
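A minimal sketch of backtrace() usage (glibc-specific; link with -rdynamic so backtrace_symbols() can resolve names):

#include <execinfo.h>
#include <cstdio>
#include <cstdlib>

// Captures up to 64 return addresses from the current call stack and
// prints their symbolic names (mangled, unless demangled separately).
void print_stacktrace()
{
    void* addrs[64];
    int n = backtrace(addrs, 64);
    char** symbols = backtrace_symbols(addrs, n);
    if (symbols) {
        for (int i = 0; i < n; ++i)
            std::printf("%s\n", symbols[i]);
        free(symbols); // the array returned by backtrace_symbols() is malloc'd
    }
}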
Having used gprof and callgrind many times, I have reached the (obvious) conclusion that I cannot use them efficiently when dealing with large programs (as in a CAD program that loads a whole car). I was thinking that maybe I could use some C/C++ macro magic and build a simple (but nice) logging mechanism. For example, one can call a function using the following macro:
#define CALL_FUN(fun_name, ...) \
fun_name (__VA_ARGS__);
We could add some clocking/timing stuff before and after the function call, so that every function called through CALL_FUN gets timed, e.g.:
#define CALL_FUN(fun_name, ...) \
    time(&t0);                  \
    fun_name(__VA_ARGS__);      \
    time(&t1);
The variables t0 and t1 could live in a global logging object. That logging object could also hold the call graph for each function called through CALL_FUN. Afterwards, that object can be written to a (specifically formatted) file and parsed by some other program.

So here comes my (first) question: do you find this approach tractable? If yes, how can it be enhanced, and if not, can you propose a better way to measure time and log call graphs?
A colleague proposed another approach to deal with this problem: annotating each function (that we care to log) with a specific comment. Then, during the make process, a special preprocessor runs, parses each source file, adds the logging logic for each function we care to log, creates a new source file with the newly added code, and builds that instead. I guess that reading CALL_FUN... macros (my proposal) all over the place is not the best approach, and his approach would solve this problem. So what is your opinion of his approach?
PS: I am not well versed in the pitfalls of C/C++ macros, so if this can be done using another approach, please say so.
Thank you.
Well, you could do some C++ magic to embed a logging object. Something like:
class CDebug
{
public:
    CDebug()  { /* ... log somehow ... */ }
    ~CDebug() { /* ... log somehow ... */ }
    void heythishappened() { /* ... log an event ... */ }
};

Then in your functions you simply write:

void foo()
{
    CDebug dbg;
    // ...
    // you could add some debug info
    dbg.heythishappened();
    // ...
} // now the dtor is called, even if the function is interrupted or returns from elsewhere
I am a bit late, but here is what I am doing for this:
On Windows there is a /Gh compiler switch which makes the compiler insert a hidden _penter function call at the start of each function. There is also a switch for getting a _pexit call at the end of each function.
You can utilize this to get callbacks on each function call. Here is an article with more details and sample source code:
http://www.johnpanzer.com/aci_cuj/index.html
I am using this approach in my custom logging system for storing the last few thousand function calls in a ring buffer. This turned out to be useful for crash debugging (in combination with MiniDumps).
Some notes on this:
The performance impact very much depends on your callback code. You need to keep it as simple as possible.
You just need to store the function address and module base address in the log file. You can then later use the Debug Interface Access SDK to get the function name from the address (via the PDB file).
All this works surprisingly well for me.
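For illustration, a sketch of what the entry hook might look like (x86 MSVC only; _penter itself must be compiled without /Gh, and log_enter is a hypothetical callback of mine; see the linked article for the real details):

extern "C" void log_enter(void* return_address); // hypothetical logging callback

// MSVC inserts a call to _penter at the start of every function
// compiled with /Gh; the hook must preserve all registers.
extern "C" void __declspec(naked) __cdecl _penter()
{
    __asm {
        pushad                  // save general-purpose registers
        mov  eax, [esp + 32]    // return address into the instrumented function
        push eax
        call log_enter
        add  esp, 4             // pop the __cdecl argument
        popad                   // restore registers
        ret
    }
}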
Many nice industrial libraries have their functions' declarations and definitions wrapped in (by default empty) macros, just in case. If your code is already like that, go ahead and debug your performance problems with a simple asynchronous trace logger. If not, post-insertion of such macros can be unacceptably time-consuming.
I can understand the pain of running a 1M×1M matrix solver under Valgrind, so I would suggest starting with the so-called "Monte Carlo profiling method": start the process and, in parallel, run pstack repeatedly, say, each second. As a result you will have N stack dumps (N can be quite significant). Then the mathematical approach is to count the relative frequency of each stack and draw conclusions about the most frequent ones. In practice, you either immediately see the bottleneck or, if not, you switch to bisection, gprof, and finally to Valgrind's toolset.
Let me assume the reason you are doing this is that you want to locate any performance problems (bottlenecks) so you can fix them to get higher performance, as opposed to measuring speed or getting coverage info.

It seems you're thinking the way to do this is to log the history of function calls and measure how long each call takes. There's a different approach. It's based on the idea that, mainly, the program walks a big call tree. If time is being wasted, it is because the call tree is bushier than necessary, and during the time that's being wasted, the code that's doing the wasting is visible on the stack. It can be terminal instructions, but is more likely function calls, at almost any level of the stack. Simply pausing the program under a debugger a few times will eventually display it. Anything you see it doing on more than one stack sample, if you can improve it, will speed up the program.

It works whether the time is being spent in CPU, I/O, or anything else that consumes wall-clock time. What it doesn't show you is the tons of stuff you don't need to know. The only way it can fail to show you bottlenecks is if they are very small, in which case the code is pretty near optimal.

Here's more of an explanation.
Although I think it will be hard to do anything better than gprof, you can create a special class, LOG for instance, and instantiate it at the beginning of each function you want to log.
#include <ctime>

class LOG {
public:
    LOG(const char* fn, const char* file) : fn_(fn), file_(file)
    {
        time(&start_); // log time_t of the beginning of the call
    }
    ~LOG() // note: a destructor cannot take parameters
    {
        // calculate the total time spent, by difference between the
        // current time and the one saved in the constructor
        time_t now;
        time(&now);
        // ... write fn_, file_ and (now - start_) to the log ...
    }
private:
    const char* fn_;
    const char* file_;
    time_t start_;
};

void somefunction() {
    LOG log(__FUNCTION__, __FILE__);
    // ... do other things
}
Now you can integrate this approach with the preprocessing one you mentioned; just add something like this at the beginning of each function you want to log:
// ### LOG
and then you replace the string automatically in debug builds (shouldn't be hard).
Maybe you should use a profiler. AQTime is a relatively good one for Visual Studio. (If you have VS2010 Ultimate, you already have a profiler.)