Visual Studio Function Debugging - c++

I am working in VS 2008. I want to get the following information for all my methods:
1) The time at call entry
2) The time at call exit, and the return value
GDB lets me set a breakpoint at each function's entry and exit, run a script at the breakpoint, and then continue debugging. I am tired of searching for a way to do something similar in VS. I even considered writing a script to parse my entire codebase and insert fprintf's at entry and exit, but that is very complex. Desperately looking for help.

Using WinDbg, you can also set a breakpoint at each function entry and run a script.
For instance, the following command adds a breakpoint on every function of your module; at each hit it displays the name of the function and the current time, runs until the function exits, displays the time again, and continues.
bm yourmodule!* "kcL1;.echotime;gu;.echotime;gc"

Basically this is function-level time-based profiling (TBP). Several tools can help you with this:
Visual Studio Profiling Tools: available with the Visual Studio Ultimate and Premium editions only. http://msdn.microsoft.com/en-us/library/z9z62c29.aspx
Intel VTune: it can do lots of things, including function-level profiling. http://software.intel.com/en-us/articles/intel-vtune-amplifier-xe/
AMD CodeAnalyst: a free tool. It works with Intel CPUs as well (with limited functionality, but enough for your purpose) and can do source-level profiling: http://developer.amd.com/cpu/codeanalyst/codeanalystwindows/pages/default.aspx
I suggest you try AMD CodeAnalyst first if you don't have the Visual Studio Premium or Ultimate edition.

I assume you are using C++. You can define a time-trace class that displays the timestamps:
/* define this in a header file */
#include <iostream>
#include <windows.h> // for GetTickCount64
class ShowTimestamp {
private:
    static int level_; // nesting level for the function call tree
private:
    const char *func_;
public:
    ShowTimestamp(const char* f) : func_(f) {
        std::cout << func_ << ":" << (level_++) << ":begin\t" << GetTickCount64() << std::endl;
    }
    ~ShowTimestamp() {
        std::cout << func_ << ":" << (--level_) << ":end\t" << GetTickCount64() << std::endl;
    }
};
#ifndef NO_TRACE_TIMER
#define TIMESTAMP_TRACER ShowTimestamp _stt_(__FUNCTION__);
#else
#define TIMESTAMP_TRACER
#endif
The static member level_ should be defined in a .cpp file separately.
// You need to define the static member in a CPP file
int ShowTimestamp::level_ = 0;
In your code, you can do
int Foo(int bar) {
TIMESTAMP_TRACER
// all the other things.
......
return bar;
}
If you don't want the trace timer any longer, just define NO_TRACE_TIMER.

Visual Studio is not suited for this; you would have to use WinDbg. It has its own scripting language that would let you do what you are seeking. Unfortunately, I don't know the first thing about its scripting language; you will have to read the help file (which, for once, is actually more or less useful).

Related

boost, coroutine2 (1.63.0): throwing exception crashes visual studio on 32bit windows

In my application I'm using coroutine2 to generate some objects which I have to decode from a stream. These objects are generated using coroutines. My problem is that as soon as I reach the end of the stream and would theoretically throw std::ios_base::failure, my application crashes under certain conditions.
The function providing this feature is implemented in C++, exported as a C function, and called from C#. This all happens in a 32-bit process on Windows 10 x64. Unfortunately, it only reliably crashes when I start my test from C# in debug mode WITHOUT the native debugger attached. As soon as I attach the native debugger, everything works as expected.
Here is a small test application to reproduce this issue:
Api.h
#pragma once
extern "C" __declspec(dllexport) int __cdecl test();
Api.cpp
#include <iostream>
#include <vector>
#include <sstream>
#include "Api.h"
#define BOOST_COROUTINES2_SOURCE
#include <boost/coroutine2/coroutine.hpp>
int test()
{
using coro_t = boost::coroutines2::coroutine<bool>;
coro_t::pull_type source([](coro_t::push_type& yield) {
std::vector<char> buffer(200300, 0);
std::stringstream stream;
stream.write(buffer.data(), buffer.size());
stream.exceptions(std::ios_base::eofbit | std::ios_base::badbit | std::ios_base::failbit);
try {
std::vector<char> dest(100100, 0);
while (stream.good() && !stream.eof()) {
stream.read(&dest[0], dest.size());
std::cerr << "CORO: read: " << stream.gcount() << std::endl;
}
}
catch (const std::exception& ex) {
std::cerr << "CORO: caught ex: " << ex.what() << std::endl;
}
catch (...) {
std::cerr << "CORO: caught unknown exception." << std::endl;
}
});
std::cout << "SUCCESS" << std::endl;
return 0;
}
C#:
using System;
using System.Runtime.InteropServices;
namespace CoroutinesTest
{
class Program
{
[DllImport("Api.dll", EntryPoint = "test", CharSet = CharSet.Ansi, CallingConvention = CallingConvention.Cdecl)]
internal static extern Int32 test();
static void Main(string[] args)
{
test();
Console.WriteLine("SUCCESS");
}
}
}
Some details:
We are using Visual Studio 2015 (v14) and dynamically link the C++ runtime.
The test library statically links Boost 1.63.0.
We also tried to reproduce this behaviour by calling the functionality directly from C++ and from Python. Neither test has been successful so far.
If you start the C# code with Ctrl+F5 (i.e., without the .NET debugger), everything is also fine. Only if you start it with F5 (i.e., with the .NET debugger attached) does the Visual Studio instance crash. Also, be sure not to enable the native debugger!
Note: if we don't use exceptions on the stream, everything seems to be fine as well. Unfortunately, the code decoding my objects makes use of them, so I cannot avoid this.
It would be amazing if you had some additional hints on what might be going wrong here, or a solution. I'm not entirely sure whether this is a Boost bug; it could also be the C# debugger interfering with boost-context.
Thanks in advance! Best regards, Michael
I realize this question is old but I just finished reading a line in the docs that seemed pertinent:
Windows using fcontext_t: turn off global program optimization (/GL) and change /EHsc (the compiler assumes that functions declared extern "C" never throw a C++ exception) to /EHs (tells the compiler that functions declared extern "C" may throw an exception).
This is just a guess, but in your coroutine I think you are supposed to push a boolean to your sink (named yield in your code), and the code is not doing that.

Why would the compiler leave in the binary the implementation of an inlined function?

Consider this simple piece of code:
#include <iostream>
#include <sstream>
class Utl
{
public:
// singleton accessor
static Utl& GetInstance();
virtual std::string GetHello( bool full ) = 0;
};
class UtlImpl : public Utl
{
public:
UtlImpl() {}
virtual std::string GetHello( bool full )
{
return (full)?"full":"partial";
}
};
Utl& Utl::GetInstance()
{
static UtlImpl instance;
return instance;
}
int main( int argc, char* argv[] )
{
std::cout << Utl::GetInstance().GetHello(true) << std::endl;
std::cout << Utl::GetInstance().GetHello(false) << std::endl;
return 0;
}
I compile this with Visual Studio 2015 in "Debug" and "RelWithDebInfo" mode.
Then I use a coverage validation tool (Software verify - Coverage Validator).
For the Debug build, the tool reports 100% coverage.
For the RelWithDebInfo build, the tool reports 66.67% coverage and says the function Utl::GetInstance() has not been executed.
As RelWithDebInfo is optimized, I suspect that's because the function has been inlined by the compiler (I'm not familiar enough with assembly code to verify that, but I can post anything that would help if someone explains how to check this). But when I use Software Verify's DbgHelp browser tool, it reports that a Utl::GetInstance() is present in the binary.
Is it possible that Visual Studio inlined the code of the Utl::GetInstance() function but also kept a "real" Utl::GetInstance() in the binary (possibly ending up with two copies of this code)? That would explain why the tool reports that the function has never been called, while its code has definitely been executed...
Any function with external linkage needs a standalone, callable copy in the binary in addition to any inlined expansions, so there can be duplicates.
Setting "Inline Function Expansion" to "Disabled (/Ob0)" when building allowed the OP to get 100% coverage for this test.

Void Functions, cout statements, and compilers

This is something I have noticed; I don't have an answer to it, and it bothers me.
Let's say we have two simple functions.
void foo()
{
std::cout << "Rainbows are cute!" << std::endl;
return;
}
int main()
{
foo();
return 0;
}
Now these two functions are both part of the same .cpp file.
If I compile this file with gcc, the program prints "Rainbows are cute!",
but if I do it in Xcode or Visual Studio, the output does not appear. I mention VS and Xcode because these are two common environments, used by many.
My question is: why does this happen? What is going on such that one setup displays the output of the void function and the others do not?
The printout appears in VS and Xcode as well. The difference is in how you run the program. When you execute it from Visual Studio, a console window briefly pops up, displays the message, and promptly disappears.
To prevent this from happening, you can set a breakpoint on the return 0; line and run in debug mode. When the breakpoint is hit, switch to the console window to see the message.

Visual Studio 2010 debugger points to wrong line

The debugger in Visual Studio 2010 has recently been pointing at the wrong lines and/or skipping lines, and I have no idea why. This is a CUDA project and it only happens in CUDA files. I've noticed the following:
It always happens at the same part of the program.
The lines it points to are always the same, i.e. not random.
Putting extra code after the culprit lines changes which lines it points to.
It only happens in .cu-files. Moving the code to a .cpp-file does not recreate the problem.
What I have tried:
Cleaning and rebuilding the solution.
Installing SP1 for MSVC10 and doing all possible updates via Windows Update.
Setting the compiler to not use optimizations in debug mode, for both C/C++ and CUDA C/C++.
Manually deleting all generated files and then rebuilding from the solution folder.
Deleting the folder C:\Windows\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files.
Recreating the solution using only the source files.
Disabling my extensions.
I've managed to reduce the code to the following, which should reproduce the problem. Mind that this code has to be inside a .cu file and most probably needs to be compiled with the CUDA compiler. Including Boost is not really necessary, but this example does show the problems I'm having. A shorter example is at the end.
#include <boost/numeric/ublas/matrix.hpp>
using boost::numeric::ublas::matrix;
struct foo {
foo() : mat(NULL) {}
matrix<float>* mat;
};
bool func(foo *data) {
bool status; // <- skipped line
status = false;
if (status) {
std::cout << "test\n";
return (status); // <- error reported here
}
int size = data->mat->size1(); // instead of here
return status;
}
int main(int args, char* argv[]) {
func(NULL); // force error by passing NULL pointer
return 0;
}
Does anyone have any idea how to solve this, or how it could be happening? It's pretty annoying having to debug this way.
Here is a shorter example showing only the skipped lines. No external libraries necessary.
bool func() {
bool status; // <- skipped line
status = false;
return status;
}
int main(int args, char* argv[]) {
func();
return 0;
}
The debugger only stops on lines that produce CPU instructions, and a declaration of a variable whose type has no constructor produces no instructions. The debugger simply executes instructions and uses the debugging information the compiler provided to map them back to source lines, so a line that generates no instructions is skipped.

How to trace all calls to a predefined set of functions in C++?

I have a C++ application that uses a third-party library. Every here and there in my code there're calls to that library. I would like to trace all such calls.
It would be easy if those were functions in my code - I would insert a macro that would obtain the current function name and time of call start and pass those to a local object constructor, then on function exit the object would be destroyed and trace the necessary data. The macro would expand to an empty string for configurations where I don't need tracing to eliminate the associated overhead.
Is there some easy way to reliably do something similar for calls to an external library? All the interface to the library I have is the .h file with functions prototypes included into my code.
You could try writing a wrapper library that exposes the same interface and internally redirects the calls to the original lib.
Then you can easily add your trace code to the wrapper functions.
All that changes for your project is the lib you are going to link against.
To prevent multiple symbols being defined, you can include the external libs header inside a separate namespace.
EDIT:
Including the external lib's header in a namespace does not solve the symbol problem. You have to use a macro in your header that renames the original function and every occurrence of it in your code. Use something like this for the new wrapper library's header:
#define originalExportedFunction WRAPPED_originalExportedFunction
extern "C" int originalExportedFunction(int);
Your implementation in the wrapper lib then might look like:
extern "C" int WRAPPED_originalExportedFunction(int i)
{
//trace code here...
return originalExportedFunction(i);
}
If you happen to work under Unix/Linux, use
ltrace
to trace library calls, and
strace
for system calls. These are commands, not in-code solutions, though. You can also look at valgrind with the --tool=callgrind option to profile.
Well, you could just add another layer on top of the third-party lib calls. That way you can add whatever sophisticated tracing wrapper you want.
e.g.
struct trace
{
static void myfoo() { cout << "calling foo" << endl; foo(); }
// or
// static void myfoo() { if (_trace) {..} foo(); }
};
Since you seem to know which functions you want to call (and their signatures), you can still use your macro/class wrapper idea. Something like:
#include <iostream>
typedef void (*pfun)(int);
class Foo {
pfun call;
public:
Foo(pfun p) : call(p) {}
void operator()(int x) {
std::cout << "Start trace..." << std::endl;
(*call)(x);
std::cout << "End trace" << std::endl;
}
};
void bar (int x) {
std::cout << "In bar: " << x << std::endl;
}
int main () {
Foo foo(&bar);
foo (42);
return 0;
}
Try creating a macro for each interface API.
Suppose the API is called as:
obj->run_first(var1);
A macro name cannot contain "->", so have the macro take the object as an argument:
#define RUN_FIRST(obj, args) \
dumptimestamp(__FUNCTION__, __LINE__); \
(obj)->run_first(args); \
dumptimestamp(__FUNCTION__, __LINE__);
You can generate the list of similar macros from the lib's header file, since it lists all the interface methods.
dumptimestamp dumps a timestamp along with the function name and line number.
If you don't want to change your code, there is a way to do this with instrumentation. If you're interested, take a look at a nice dynamic binary instrumentation toolkit called Pin (maintained by Intel):
http://www.pintool.org/downloads.html
With Pin, you can insert your own code at function entry/exit. One example is capturing malloc/free:
http://www.pintool.org/docs/29972/Pin/html/index.html#FindSymbol
This is quite a different way to trace function calls, but it's worth a look.