How can I set a breakpoint on an empty statement in C++? - c++

Specifically, I want to write a macro that
1) allows me to set a breakpoint
2) does nothing else
3) causes no compiler warnings
#define NO_OP ((void)0)
int main()
{
bool b = true;
if (b)
NO_OP; // I try to set a breakpoint here, but
} // it jumps to here (in Visual Studio 2010)
I also tried
#define NO_OP (assert(1)) // doesn't work
#define NO_OP (sizeof(int)) // doesn't work
#define NO_OP __asm{} // doesn't work
#define NO_OP do {(void)0;} while(0) // warning: conditional is constant
The only thing that works so far is the cringe-worthy
#define NO_OP { int x = 0; x = x; }
There has to be a better way.
EDIT
Thanks Blorgbeard, __asm{ nop } does indeed work. But I just realized that anything with braces is less than perfect (problematic?) because it leaves a useless semi-colon hanging there after it. (Later) I don't know squat about assembler but I tried removing the braces, and voila! There's the answer: __asm nop
Thanks!
FOR THE CURIOUS
Here's a slightly less absurd example:
string token = GetNextToken();
if (!Ignore(token))
{
// process token
DoThis(token);
DoThat(token);
}
This code is complete -- as long as the program works correctly, I don't care to know anything about ignored tokens. But at any given time (and without changing the code) I want to be able to make sure that I'm not rejecting good tokens:
string token = GetNextToken();
if (Ignore(token))
{
NO_OP; // when desired, set breakpoint here to monitor ignored tokens
}
else
{
// process token
DoThis(token);
DoThat(token);
}

An actual no-op instruction:
__asm nop

Perhaps you can do this:
#define BREAKPOINT __asm { int 3; }
This raises interrupt 3, the x86 breakpoint interrupt, so the breakpoint is compiled directly into your code rather than set from the debugger.
Now, if you just want an operation that does nothing except give you a line to break on, you'll probably have to compile without optimization: a NO_OP as you've implemented it is likely to be optimized out of the code when the optimization switches are on.
The other point is that this seems like a very strange thing to do. Normally one sets a breakpoint on a line of code one wants to look at: to see what the variable states are, step one line at a time, etc. I don't really see how setting a breakpoint on a line with no significance in your program will help you debug anything.

C++03:
inline void __dummy_function_for_NO_OP () {}
#define NO_OP __dummy_function_for_NO_OP ()
int main () {
NO_OP;
}
C++11:
#define NO_OP [](){}()
int main () {
NO_OP;
}

On MSVC x64 there is an intrinsic for this:
__nop();
Header file: <intrin.h>
doc: https://learn.microsoft.com/en-us/cpp/intrinsics/nop?view=msvc-170

How about __asm int 3? Also, are optimizations enabled? That could be the reason for the others failing (I've actually never tried to break on them).

You could define a global unsigned int debug_counter, then your "no-op" macro can be debug_counter++. Pretty sure the compiler won't remove that, but to be absolutely sure, put some code somewhere to print the value of the counter.

Define this in myassert.h and include it everywhere in your app (force-include in Visual Studio?).
// Works cross-platform. No overhead in release builds
#ifdef DEBUG
// This function may need to be implemented in a cxx module if
// your compiler optimizes this away in debug builds
inline bool my_assert_func(const bool b)
{
// can set true/false breakpoints as needed
if (b) {
return true;
}
else {
return false;
}
}
#define myassert(b) assert(my_assert_func(b))
#else // RELEASE
// In release builds this is a NOP
#define myassert(b) ((void)0)
#endif
#define DEBUG_BREAKPOINT myassert(true)
Now you can:
string token = GetNextToken();
if (Ignore(token)) {
DEBUG_BREAKPOINT;
}
else {
// process token
DoThis(token);
DoThat(token);
}

Related

please solve this memory leak problem in c++

class A
{
public:
unique_ptr<int> m_pM;
A() { m_pM = make_unique<int>(5); };
~A() { };
public:
void loop() { while (1) {}; } // stands in for doing some real work; simplified
};
int main()
{
_CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
A a;
a.loop(); // if I force-quit while this function is running, it causes a memory leak
}
Is there any way to avoid the memory leak when I force-quit this program while it's running?
a.loop() is an infinite loop, so everything after it is unreachable, and the compiler is within its rights to remove all code after the call to a.loop(). See Compiler Explorer for proof.
I believe that, outside of some niche and very rare scenarios, truly infinite loops like the one you wrote here are pretty useless, since they literally mean “loop indefinitely”. So what’s the compiler supposed to do? In some sense it just postpones the destruction of your object until some infinitely distant point in the future.
What you usually do is use break inside such a loop, and break when some condition is met. A simplified example: https://godbolt.org/z/sxr7eG4W1
Here you can see the unique_ptr::default_delete in the disassembly and also see that the compiler is actually checking the condition inside the loop.
Note: extern volatile is used to ensure the compiler doesn’t optimise away the flag, since it’s a simplified example and the compiler is smart enough to figure out that the flag is never changed. In real code I’d advise against using volatile; just check for the stop condition. That’s it.

Preprocessor Directive for Repeated Code Blocks (with condition)

Is there any way in C++ to implement a concept like the following pseudo-code?
#pragma REPEAT
for (;;)
{
// code block #1
#pragma REPEAT_CONDITION(a==1)
// code
#end_pragma
// code block #2
}
#end_pragma
Which would get compiled as something like this:
if (a == 1)
{
for (;;)
{
// code block #1
// code
// code block #2
}
}
else
{
for (;;)
{
// code block #1
// code block #2
}
}
The goal here being to generate an easily readable piece of performance code by abstracting a condition from the inner loop. Thus not having to manually maintain duplicated code blocks.
Honestly, the preprocessor should be used for conditional compilation and precious little else nowadays. With inline(-suggesting) functions, insanely optimising compilers and enumerations, their most common use cases have been gradually whittled away.
I'm assuming here you don't want to check the condition every time through the loop, even though this cleans up your code considerably:
for (;;) {
// code block #1
if (a == 1) {
// code
}
// code block #2
}
The only reason I can think of for doing this would be the extra speed of not performing the check on every iteration, but you should actually measure the impact it has. Unless // code is pitifully simple, it will most likely swamp the effect of a single conditional statement.
If you do need the separate loops for whatever reason, you may be better off putting those common code blocks into functions and simply calling them with a one-liner:
if (a == 1) {
for (;;) {
callCodeBlock1();
// code
callCodeBlock2();
} else {
for (;;) {
callCodeBlock1();
callCodeBlock2();
}
}

Critical section via constexpr

In embedded programming there is a need to create atomic sections of code - so called critical sections. They are usually implemented via macros, for example, like this:
#define ENTER_CRITICAL() int saved_status_ = CPU_STATUS_REGISTER; __disable_irq();
#define EXIT_CRITICAL() CPU_STATUS_REGISTER = saved_status_
I.e. on entering, the status of interrupts (enabled or disabled) is saved; on exit it is restored. The problem is that an additional variable is needed for this.
My question is: is it possible to implement critical sections via constexpr functions (and get rid of macros altogether)?
RAII solution would be traditional:
struct CriticalSection {
int saved_status_;
void Enter() {
saved_status_ = CPU_STATUS_REGISTER;
__disable_irq();
}
CriticalSection() { Enter(); }
void Exit() {
CPU_STATUS_REGISTER = saved_status_;
}
~CriticalSection() {
Exit(); // Can you call this more than once safely? Dunno.
}
};
you'd use it like this:
void foo() {
// unprotected code goes here
{
CriticalSection _;
// protected code goes here
}
// unprotected code goes here
}
Doing this without any state is not possible, because CPU_STATUS_REGISTER is a runtime value. State in C/C++ is mostly stored in variables.
I strongly suspect that, under any non-trivial optimization level, the above RAII class will compile to the exact same code that your macros compiled to, except you no longer have to remember to EXIT_CRITICAL().

How can I make a DEBUG function in C++ such as ConditionalAttribute in C#

In C#, one can make a method such as this:
[Conditional("DEBUG")]
private void MyFunction()
{
}
In release, this method will no longer "exist".
Using C++, I have a class where I want to perform the same assertions at the beginning of every method, so I would like to put the group of assertions in their own method. If I chose to do it that way, would I have to rely on the compiler optimizing out an empty function (since asserts will be optimized out as well)? For example:
class MyClass
{
private:
void DebugFunction()
{
assert(...);
assert(...);
assert(...);
// ...
}
};
Or would I have to introduce a macro:
#ifdef NDEBUG
#define DebugFunction
#endif
What's the best way to do this?
The compiler will definitely optimize out the empty functions. I would prefer a single function full of asserts over maintaining different versions of the code for debug and release. Of course, you should name the function appropriately, and document your reasons as well :-)
If, for some reason, you absolutely did have the urge to use #ifndef, make sure you do it inside the CheckState() function. This allows you to perform checks in release mode as well, should you later decide to do so. For example:
class MyClass
{
private:
void CheckState()
{
assert(...);
assert(...);
#ifndef NDEBUG
// some expensive check to only run on Debug builds
#endif
// Some check you want to always make
}
};

optimizing branching by re-ordering

I have this sort of C function -- that is being called a zillion times:
void foo ()
{
if (/*condition*/)
{
}
else if(/*another_condition*/)
{
}
else if (/*another_condition_2*/)
{
}
/*And so on, I have 4 of them, but we can generalize it*/
else
{
}
}
I have a good test-case that calls this function, causing certain if-branches to be called more than the others.
My goal is to figure the best way to arrange the if statements to minimize the branching.
The only way I can think of is to write to a file for every if condition branched to, thereby creating a histogram. This seems tedious. Is there a better way, or better tools?
I am building it on AS3 Linux, using gcc 3.4; using oprofile (opcontrol) for profiling.
It's not portable, but many versions of GCC support a function called __builtin_expect() that can be used to tell the compiler what we expect a value to be:
if(__builtin_expect(condition, 0)) {
// We expect condition to be false (0), so we're less likely to get here
} else {
// We expect to get here more often, so GCC produces better code
}
The Linux kernel uses these as macros to make them more intuitive, cleaner, and more portable (i.e. redefine the macros on non-GCC systems):
#ifdef __GNUC__
# define likely(x) __builtin_expect((x), 1)
# define unlikely(x) __builtin_expect((x), 0)
#else
# define likely(x) (x)
# define unlikely(x) (x)
#endif
With this, we can rewrite the above:
if(unlikely(condition)) {
// we're less likely to get here
} else {
// we expect to get here more often
}
Of course, this is probably unnecessary unless you're aiming for raw speed and/or you've profiled and found that this is a problem.
Try a profiler (gprof?) - it will tell you how much time is spent. I don't recall if gprof counts branches, but if not, just call a separate empty method in each branch.
Running your program under Callgrind will give you branch information. Also, I hope you profiled and actually determined this piece of code is problematic, as this seems like a micro-optimization at best. The compiler is going to generate a branch table from the if/else chain if it's able to, which would require no branching (this depends on what the conditionals are, obviously), and even failing that, the branch predictor on your processor (assuming this is not for embedded work; if it is, feel free to ignore me) is pretty good at determining the target of branches.
It doesn't actually matter what order you put them in, IMO. The branch predictor will store the most common branch and take it automatically anyway.
That said, there is something you could try... You could maintain a set of job queues and then, based on the if statements, assign work items to the correct job queue before executing them one after another at the end.
This could be further optimised by using conditional moves and so forth (this does require assembler though, AFAIK). It could be done by conditionally moving a 1 into a register that is initialised to 0, on condition a. Place the pointer value at the end of the queue and then decide whether to increment the queue counter by adding that conditional 1 or 0 to it.
Suddenly you have eliminated all branches, and it becomes immaterial how many branch mispredictions there are. Of course, as with any of these things, you are best off profiling, because though it seems like it would provide a win, it may not.
We use a mechanism like this:
// pseudocode
class ProfileNode
{
public:
inline ProfileNode( const char * name ) : m_name(name)
{ }
inline ~ProfileNode()
{
s_ProfileDict.Find(m_name).Value() += 1; // as if Value() returns a non-const ref
}
static DictionaryOfNodesByName_t s_ProfileDict;
const char * m_name;
};
And then in your code
void foo ()
{
if (/*condition*/)
{
ProfileNode("Condition A");
// ...
}
else if(/*another_condition*/)
{
ProfileNode("Condition B");
// ...
} // etc..
else
{
ProfileNode("Condition C");
// ...
}
}
void dumpinfo()
{
ProfileNode::s_ProfileDict.PrintEverything();
}
And you can see how easy it is to put a stopwatch timer in those nodes too, to see which branches are consuming the most time.
Some counters may help. Once you look at the counters and see large differences, you can sort the conditions in decreasing order.
static int cond_1, cond_2, cond_3, ...
void foo (){
if (condition){
cond_1 ++;
...
}
else if(/*another_condition*/){
cond_2 ++;
...
}
else if (/*another_condition*/){
cond_3 ++;
...
}
else{
cond_N ++;
...
}
}
EDIT: a "destructor" can print the counters at the end of a test run:
void cond_print(void) __attribute__((destructor));
void cond_print(void){
printf( "cond_1: %6i\n", cond_1 );
printf( "cond_2: %6i\n", cond_2 );
printf( "cond_3: %6i\n", cond_3 );
printf( "cond_4: %6i\n", cond_4 );
}
I think it is enough to modify only the file that contains the foo() function.
Wrap the code in each branch into a function and use a profiler to see how many times each function is called.
Line-by-line profiling gives you an idea which branches are called more often.
Using something like LLVM could apply this optimization automatically.
As a profiling technique, this is what I rely on.
What you want to know is: Is the time spent in evaluating those conditions a significant fraction of execution time?
The samples will tell you that, and if not, it just doesn't matter.
If it does matter (for example, if the conditions include function calls that are on the stack a significant part of the time), what you want to avoid is spending much time in comparisons that turn out false. The way you tell this: if you often see a comparison function being called from, say, the first or second if statement, catch it in such a sample and step out of it to see whether it returns false or true. If it typically returns false, that condition should probably move farther down the list.