Is it possible to define a section or scope in my code within which a different code path is executed, without using a global or passed-down state variable?
For debugging purposes, I want to be able to surround a section of faulty code with a scope or #define to temporarily switch on pre-defined debugging behavior within this section, e.g. use debug data, a more precise data type, an already validated algorithm, … This needs to work in a multi-threaded application in which multiple threads will likely execute the same shared code concurrently, but only some of them have called this code from within the defined section.
For example, here is some pseudo-code that is not working, but might illustrate what I'd like to do. A static expensive function that is called from several places concurrently:
Result Algorithms::foo()
{
#ifdef DEBUG_SECTION
return Algorithms::algorithmPrecise(dataPrecise);
#else
return Algorithms::algorithmOptimized(dataOptimized);
#endif
}
Three classes of which instances need to be updated frequently:
Result A::update()
{
return Algorithms::foo();
}
Result B::update()
{
Result result;
#define DEBUG_SECTION
...
result = a.update() + 1337;
...
#undef DEBUG_SECTION
return result;
}
Result C::update()
{
return a.update();
}
As you can see, class A directly calls foo(), whereas in class B, foo() is called indirectly by calling a.update() and some other stuff. Let us assume B::update() returns a wrong result, so I want to be able to use the debug implementation of foo() only from this location. In C::update(), the optimized version should still be used.
My conceptual idea is to define a DEBUG_SECTION around the faulty code which would use the debug implementation at this location. This, however, does not work in practice, as Algorithms::foo() is compiled once with DEBUG_SECTION not being defined. In my application, Algorithms, A, B, and C are located in separate libraries.
I want a section defined in the code within which shared code takes a different code path. Outside of this section, the original code should still be executed, and at runtime both will run concurrently, so I cannot simply use a state variable. I could add a debugFlag parameter to each call within the DEBUG_SECTION and pass it down through every nested call until it reaches Algorithms::foo(), but this is extremely error-prone (no call may be missed, and the section could be quite large, spread over different files, …) and quite messy in a larger system. Is there a better way to do this?
I need a solution for C++11 and MSVC.
This might work by using a template:
template<bool pDebug>
Result Algorithms::foo()
{
if(pDebug)
return Algorithms::algorithmPrecise(dataPrecise);
else
return Algorithms::algorithmOptimized(dataOptimized);
}
On the other hand, this means moving your function definition into a header (or forcing explicit template instantiation, see these answers).
The downside is that changing every call from Algorithms::foo<false>() to Algorithms::foo<true>() each time you want to switch between debugging and release takes effort. If you have multiple affected calls you could use a compile-time const variable to reduce the typing effort, but not knowing your code exactly I can't estimate whether this is a feasible solution.
If the majority of your code uses the optimized version of the function you can also set the template parameter to default to false (template<bool pDebug = false>) to avoid changing existing code that will not call the debug-version.
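For illustration, here is a minimal sketch of how the templated foo() could be reached through the indirect call in the question; the forwarding template parameter on A::update() and the file-local const flag are assumptions layered on top of the question's pseudo-code, not working code from it:
// A::update() forwards the flag so that foo() can be reached through the
// indirect call in B::update(); pDebug defaults to false so C stays unchanged.
template<bool pDebug = false>
Result A::update()
{
    return Algorithms::foo<pDebug>();
}

Result B::update()
{
    // compile-time switch replacing #define DEBUG_SECTION
    static const bool debugSection = true;    // set to false to restore the optimized path

    Result result;
    // ...
    result = a.update<debugSection>() + 1337; // debug implementation used only here
    // ...
    return result;
}

Result C::update()
{
    return a.update();                        // default pDebug = false, optimized path
}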
Related
I have three threads in an application I'm building, all of which remain open for the lifetime of the application. Several variables and functions should only be accessed from specific threads. In my debug build, I'd like a check to be run and an error to be raised if one of these functions or variables is accessed from an illegal thread, but I don't want this overhead in my final build. I really just want this so that I, the programmer, don't make stupid mistakes, not to protect my executing program from making mistakes.
Originally, I had a 'thread protected' class template that would wrap around return types for functions, and run a check on construction before implicitly converting to the intended return type, but this didn't seem to work for void return types without disabling important warnings, and it didn't resolve my issue for protected variables.
Is there a method of doing this, or is it outside the scope of the language? 'If you need this solution, you're doing it wrong' comments are not appreciated; I managed to nearly halve my program's execution time with this methodology, but it's just too likely that I'll make a mistake resulting in a silent race condition and, ultimately, undefined behavior.
What you described is exactly what the assert macro is for.
assert(condition)
In a debug build, the condition is checked; if it is false, the program prints a diagnostic and aborts at that line. In a release build (with NDEBUG defined), the assert and whatever is inside the parentheses aren't compiled at all, so the expression isn't even evaluated.
Without being harsh, it would have been more helpful if you had explained the variables you are trying to protect. What type are they? Where do they come from? What's their lifetime? Are they global? Why do you need to protect a returned type if it's void? How did you end up in a situation where one thread might accidentally access something? I kind of have to guess, but I'll throw out some ideas here:
#include <thread>
#include <cassert>

// protect a free function (g_thread1 is assumed to be a global std::thread)
void protectedFunction()
{
    assert(std::this_thread::get_id() == g_thread1.get_id());
}
// protect a global singleton (full program lifetime)
std::string& protectedGlobalString()
{
static std::string inst;
assert(std::this_thread::get_id() == g_thread1.get_id());
return inst;
}
// protect a class member
int SomeClass::protectedInt()
{
assert(std::this_thread::get_id() == g_thread1.get_id());
return this->m_theVar;
}
// thread protected wrapper
template <typename T>
class ThreadProtected
{
std::thread::id m_expected;
T m_val;
public:
ThreadProtected(T init, std::thread::id expected)
: m_val(init), m_expected(expected)
{ }
T Get()
{
assert(std::this_thread::get_id() == m_expected);
return m_val;
}
};
// specialization for void
template <>
class ThreadProtected<void>
{
public:
ThreadProtected(std::thread::id expected)
{
assert(std::this_thread::get_id() == expected);
}
};
assert is oldschool. We were actually told to stop using it at work because it was causing resource leaks (the exception was being caught high up in the stack). It has the potential to cause debugging headaches because the debug behavior is different from the release behavior. A lot of the time if the asserted condition is false, there isn't really a good choice of what to do; you usually don't want to continue running the function but you also don't know what value to return. assert is still very useful when developing code. I personally use assert all the time.
static_assert will not help here because the condition you are checking for (e.g. "Which thread is running this code?") is a runtime condition.
Another note:
Don't put things that you want to be compiled inside an assert. It seems obvious now, but it's easy to do something dumb like
int* p;
assert(p = new (std::nothrow) int); // check that `new` returns a value -- BAD!!
It's good to check the allocation of new, but the allocation won't happen in a release build, and you won't even notice until you start release testing!
int* p;
p = new (std::nothrow) int;
assert(p); // check that `new` returns a value -- BETTER...
Lastly, if you write the protected accessor functions in a class body or in a .h, you can goad the compiler into inlining them.
Update to address the question:
The real question though is where do I PUT an assert macro? Is it a requirement that I write setters and getters for all my thread-protected variables, then declare them as inline and hope they get optimised out in the final release?
You said there are variables that should be checked (in the debug build only) when accessed, to make sure the correct thread is accessing them. So, theoretically, you would want an assert macro before every such access. This is easy if there are only a few places (if this is the case, you can ignore everything I'm about to say). However, if there are so many places that it starts to violate the DRY principle, I suggest writing getters/setters and putting the assert inside (this is what I've casually given examples of above). While the assert won't add overhead in release mode (since it's conditionally compiled), the extra functions (probably) add function-call overhead. However, if you write them in the .h, there's a good chance they'll be inlined.
Your requirement for me was to come up with a way to do this without release overhead. Now that I've mentioned inlining, I'm obligated to say that the compiler knows best. There are usually compiler-specific ways to force inlining (since the compiler is allowed to ignore the inline keyword), and you should profile the code before trying to inline things. See the answers to this question: Is it good practice to make getters and setters inline? You can easily check whether the compiler is inlining the function by looking at the assembly. Don't worry, you don't have to be good at assembly: just find the calling function and look for a call to the getter/setter. If the function was inlined, you won't see a call; you'll probably see a mov instead.
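To make that concrete, here is a small sketch of the kind of header-defined accessor being suggested; the class, the member, and g_ownerThreadId are made-up names for illustration, not from the question:
#include <cassert>
#include <thread>

extern std::thread::id g_ownerThreadId;   // set once when the owning thread starts

class Counters
{
public:
    // defined in the header so the compiler can inline it; the assert
    // disappears entirely in release builds (NDEBUG defined)
    int value() const
    {
        assert(std::this_thread::get_id() == g_ownerThreadId);
        return m_value;
    }

    void set_value(int v)
    {
        assert(std::this_thread::get_id() == g_ownerThreadId);
        m_value = v;
    }

private:
    int m_value = 0;
};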
I'm implementing a helper class which has a number of useful functions which will be used in a large number of classes. However, a few of them are not designed to be called from within certain sections of code (from interrupt functions, this is an embedded project).
However, for users of this class the reasons why some functions are allowed while others are prohibited from being called from interrupt functions might not be immediately obvious, and in many cases the prohibited functions might work but can cause very subtle and hard to find bugs later on.
The best solution for me would be to cause a compiler error if the offending function is called from a code section it shouldn't be called from.
I've considered a few possible solutions, some of them non-technical, but a technical one would be preferred:
1. Indicate it in the documentation with a warning. Might be easily missed, especially when the function seems obvious, like read_byte(); why would anyone study the documentation to see whether the function is reentrant or not?
2. Indicate it in the function's name. Ugly. Who likes function names like read_byte_DO_NOT_CALL_FROM_INTERRUPT()?
3. Have a global variable in a common header, included in each and every file, which is set to true at the beginning of each interrupt and set to false at the end; the offending functions check it at their beginning and exit if it's set. Problem: interrupts might interrupt each other. Also, it doesn't cause compile-time warnings or errors.
4. Similar to #3, have a global handler with a stack, so that nested interrupts can be handled. Still has the problem of only working at runtime, and it also adds a lot of overhead. Interrupts should not waste more than a clock cycle or two on this feature, if at all.
5. Abusing the preprocessor. Unfortunately, the naive way of a #define at the beginning and an #undef at the end of each interrupt, with an #ifdef at the beginning of the offending function, doesn't work, because the preprocessor doesn't care about scope.
6. As interrupts are always classless functions, I could make the offending functions protected and declare them as friends in all classes which use them. This way it would be impossible to use them directly from within interrupts. As main() is classless, I'd have to place most of it into a class method. I don't like this too much, as it can become needlessly complicated, and the error it generates is not obvious (so users of this function might encapsulate them to "solve" the problem, without realizing what the real problem was). A compiler or linker error message like "ERROR: function_name() is not to be used from within an interrupt" would be much preferable.
7. Checking the interrupt registers within the function has several issues. In a large microcontroller there are a lot of registers to check. Also, there is a very small but dangerous chance of a false positive when an interrupt flag is set exactly one clock cycle before, so my function would fail because it thinks it was called from an interrupt, while the interrupt would actually be entered in the next cycle. Also, in nested interrupts, the interrupt flags are cleared, causing a false negative. And finally, this is yet another runtime solution.
I did play with some very basic template metaprogramming a while ago, but I'm not that experienced with it to find a very simple and elegant solution. I would rather try other ways before committing myself to try to implement a template metaprogramming bloatware.
A solution working with only features available in C would also be acceptable, even preferable.
Some comments below. As a warning, they won't be fun reading, but I won't do you a service by not pointing out what's wrong here.
If you are calling external functions from inside an ISR, no amount of documentation or coding will help you. Since in most cases, it is bad practice to do so. The programmer must know what they are doing, or no amount of documentation or coding mechanisms will save the program.
Programmers do not design library functions specifically for the purpose of getting called from inside an ISR. Rather, programmers design ISRs with all the special restrictions that come with an ISR in mind: make sure interrupt flags are cleared correctly, keep the code short, do not call external functions, do not block the MCU longer than necessary, consider re-entrancy, consider dangerous compiler optimizations (use volatile). A person who does not know this is not competent enough to write ISRs.
If you actually have a function int read_byte(int address) then this suggests that the program design is bad to begin with. This function could do one of two things:
Either it can read a byte from some peripheral hardware, in which case the function name is very bad and should be changed.
Or it could read any generic byte from an address, in which case the function is 100% useless "bloatware". You can safely assume that a somewhat competent C programmer can read a byte from a memory address without some bloatware holding their hand.
In either case, int is not a byte: it is a word of 16 or 32 bits. The function should be returning uint8_t. Similarly, if the parameter passed is used to describe a memory-mapped address of an MCU, it should have type void*, uint8_t* or uintptr_t. Everything else is wrong.
Notably, if you are using int rather than stdint.h for embedded systems programming, then this whole discussion is the least of your problems, as you haven't even gotten the fundamental basics right. Your programs will be filled to the brim with undefined behavior and implicit promotion bugs.
Overall, all the solutions you suggest are simply not acceptable. The root of the problem here appears to be the program design. Deal with that instead of inventing ways to defend the broken design with horrible meta programming.
I would suggest option 8 & 9.
Peer reviews & assertions.
You state in the comments that your interrupt functions are short. If that's really the case, then reviewing them will be trivial. Adding comments in the header will make it so that anyone can see what's going on. As for adding an assert: while it means debug builds may return a wrong result on error, it also ensures that you will catch any offending calls, and it gives you a fighting chance during testing to catch the problem.
Ultimately, the macro processing just won't work since the best you can do is catch if a header has been included, but if the callstack goes via another wrapper (that doesn't have comments) then you just can't catch that.
Alternatively you could make your helper a template, but then that would mean every wrapper around your helper would also have to be a template so that can know if you're in an interrupt routine... which will ultimately be your entire code base.
If you have one file for all interrupt routines, then this might be helpful:
Define one macro in the class header, say FORBID_INTERRUPT_ROUTINE_ACCESS,
and in the interrupt handler file check for that macro definition:
#ifdef FORBID_INTERRUPT_ROUTINE_ACCESS
#error : cannot access function from interrupt handlers.
#endif
If someone includes that class's header file in order to use the class in an interrupt handler, the build will fail with this error.
Note: #error always stops the build; if you use #warning instead, you have to build the target with warnings treated as errors.
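As a rough sketch of the two files involved (the file names and the Helper class are hypothetical, and the guard only works if the check appears after all #include directives in the interrupt handler file):
// helper.h
#define FORBID_INTERRUPT_ROUTINE_ACCESS
class Helper { /* ... functions that must not run in ISRs ... */ };

// interrupt_handlers.cpp -- the single file containing all interrupt routines;
// place the check after all of its #includes so indirect inclusion is caught too
#ifdef FORBID_INTERRUPT_ROUTINE_ACCESS
#error cannot access function from interrupt handlers.
#endif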
Here is the C++ template functions suggestion.
I don't think this is metaprogramming or bloatware.
First make 2 classes which will define the context which the user will be using the functions in:
class In_Interrupt_Handler;
class In_Non_Interrupt_Handler;
If you have some common implementations shared between the two contexts, a base class can be added:
class Handy_Base
{
protected:
static int Handy_protected() { return 0; }
public:
static int Handy_public() { return 0; }
};
The primary template is declared without any implementation; the implementations will be provided by the specialization classes:
template< class Is_Interrupt_Handler >
class Handy_functions;
And the specializations.
// Functions can be used when inside an interrupt handler
template<>
struct Handy_functions< In_Interrupt_Handler >
: Handy_Base
{
static int Handy1() { return 1; }
static int Handy2() { return 2; }
};
// Functions can be used when inside any function
template<>
struct Handy_functions< In_Non_Interrupt_Handler >
: Handy_Base
{
static int Handy1() { return 4; }
static int Handy2() { return 8; }
};
In this way, if the user of the API wants to access the functions, the only way is by specifying what type of functions are needed.
Example of usage:
#include <iostream>

int main()
{
    using IH_funcs = Handy_functions<In_Interrupt_Handler>;
    std::cout << IH_funcs::Handy1() << '\n';
    std::cout << IH_funcs::Handy2() << '\n';

    using Non_IH_funcs = Handy_functions<In_Non_Interrupt_Handler>;
    std::cout << Non_IH_funcs::Handy1() << '\n';
    std::cout << Non_IH_funcs::Handy2() << '\n';
}
In the end I think the problem boils down to the developer using your framework, and how much boilerplate your framework requires the developer to write.
The above does not stop the developer from calling the non-interrupt-handler functions from inside an interrupt handler.
I think that type of analysis would require some type of static analysis checking system.
Is there a way to tell g++ more about a type, function, or specific variable (other than attributes) that I might know is safe to preform.
Example:
TurnLedOn();
TurnLedOn();
Only the first call actually turns the LED on; the second call does not actually do anything. So would it be possible to tell g++ more about the function so that it gets rid of the second call when it knows the LED is already on (because it knows that a corresponding TurnLedOff() function has not been called)?
The reason I do not want to use g++ attributes is that I want to define optimizations arbitrarily, which is not really possible with attributes (and I believe the optimization I am attempting here is not possible with attributes to begin with).
These are optimisations you need to code. Such as:
class LedSwitch {
    bool isOn{false};
public:
    inline void turnLedOn() {
        if (!isOn) {
            isOn = true;
            // ... actually switch the hardware LED on
        }
    }
    // ...
};
// ...
If the code is inlined, the compiler might notice that the bool makes the second hard-coded sequential call a no-op and drop it, but why write the two calls like that in the first place?
Maybe you should revisit design if things like this are slowing down your code.
One possibility is to make it so that the second TurnLedOn call does nothing, and make it inline and declare it in a header file so the compiler can see the definition in any source file:
extern bool isLedOn; // defined somewhere else
inline void TurnLedOn()
{
if(!isLedOn)
{
ActuallyTurnLedOn();
isLedOn = true;
}
}
Then the compiler might be able to figure out by itself that calling TurnLedOn twice does nothing useful. Of course, as with any optimization, you have no guarantees.
Contrary to your thinking, the answer by #immibis is what you were expecting.
This way to describe the complex behavior of the function TurnLedOn (i.e. needn't be called twice in a row unless unlocked by some other action) is indeed how you tell the compiler to perform this "optimization".
Could you imagine other annotations such as
#pragma call_once_toggle_pair(TurnLEDOn, TurnLEDOff)
with innumerable variants describing all your vagaries?
The C++ language has enough provisions to let you express arbitrarily complex situations, please don't add yet a layer of complexity on top of that.
I am working on a very large C++ project. It has a lot of real-time critical functions and also a lot of slow background functions. These background functions should not be called from the time-critical functions. Is there a way to detect background functions being called from critical functions? Compile time would be good, but I would at least like to detect it before the background function runs.
More info: both the slow and the critical functions are part of the same class and share the same header.
Some more information: the critical functions run in a really fast thread (>= 10 kHz), the slower ones in a different, slower thread (<= 1 kHz). Class member variables are protected by critical sections in the slow functions, since both kinds of functions use the same class member variables. That's why calling slow functions from critical functions will slow down overall system performance, and that's why I would like to find all such calls automatically instead of checking manually.
Thanks....
You need to leverage the linker. Separate the "realtime" and slow functions into two modules, and link them in the correct order.
For example, split the files into two directories. Create a library from each directory (use ar/ranlib to combine the object files into an archive), then link your final application using:
c++ -o myapp main.o lib1/slowfns.a lib2/realtime.a
If you try to call anything from slowfns.a inside realtime.a then, depending on the linker, it will fail to link (some toolchains may need options to enforce this).
In addition, this lets you easily manage compile-time declarations too: make sure that the headers from the slowfns library aren't on the include path when compiling the "realtime" functions library, for added protection.
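As a rough illustration (file and function names invented here), a traditional single-pass linker such as GNU ld searches archives once, in command-line order, so a call from the realtime library into the slow library is typically left unresolved:
// slowfns.cpp  -> compiled into lib1/slowfns.a
void background_cleanup() { /* slow, non-realtime work */ }

// realtime.cpp -> compiled into lib2/realtime.a
void background_cleanup();           // forbidden dependency
void control_loop()
{
    background_cleanup();            // undefined reference at link time, because
}                                    // slowfns.a has already been searched

// main.cpp
void control_loop();
int main() { control_loop(); }

// c++ -o myapp main.o lib1/slowfns.a lib2/realtime.a   <-- link order matters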
Getting a compile-time detection other than the one proposed by Nicholas Wilson will be extremely hard if not impossible, but assuming "background" really refers to the functions, and not to multiple threads (I saw no mention of threads in the question, so I assume it's just an odd wording) you could trivially use a global flag and a locker object, and either assert or throw an exception. Or, output a debug message. This will, of course, be runtime-only -- but you should be able to very quickly isolate the offenders. It will also be very low overhead for debug builds (almost guaranteed to run from L1 cache), and none for release builds.
Using CaptureStackBackTrace, one should be able to capture the offending function's address, which a tool like addr2line (or whatever the MS equivalent is) can directly translate to a line in your code. There is probably even a toolhelp function that can directly do this translation (though I wouldn't know).
So, something like this (untested!) might do the trick:
namespace global { int slow_flag = 0; }
struct slow_func_locker
{
slow_func_locker() { ++global::slow_flag; }
~slow_func_locker(){ --global::slow_flag; }
};
#ifndef NDEBUG
#define REALTIME if(global::slow_flag) \
{ \
void* backtrace; \
CaptureStackBackTrace(0, 1, &backtrace, 0); \
printf("RT function %s called from %08x\n", __FUNCTION__, backtrace); \
}
#define SLOW_FUNC slow_func_locker slow_func_locker_;
#else
#define REALTIME
#define SLOW_FUNC
#endif
void foo_class::some_realtime_function(...)
{
    REALTIME;
    //...
}

void foo_class::some_slow_function(...)
{
    SLOW_FUNC;
    //...
    some_realtime_function(blah); // this will trigger
}
The only real downside (apart from not being compile-time) is you have to mark each and every slow and realtime function with either marker, but since the compiler cannot magically know which is what, there's not much of a choice anyway.
Note that the global "flag" is really a counter, not a flag. The reason for this is that a slow function could immediately call another slow function that returns and clears the flag -- incorrectly suggesting a fast function is now running (the approach with critical sections suggested by xgbi might deadlock in this case!). A counter prevents this from happening. In the presence of threads, one might also replace int with std::atomic_int.
EDIT:
As it is clear now that there are really two threads running, and it only matters that one of them (the "fast" thread) never calls a "slow" function, there is another simple, working solution (the example uses the Win32 API, but the same can be done with POSIX):
When the "fast" thread starts up (the "slow" thread does not need to do this), store its thread ID somewhere accessible, either as a global variable or as a member of the object that contains all the fast/slow functions:
global::fast_thread_id = GetCurrentThreadId();
The macro to bail out on "unwelcome" function calls could then look like:
#define CHECK_FAST_THREAD assert(GetCurrentThreadId() != global::fast_thread_id)
This macro is then added to any "slow" function that should never be called from the "fast" thread. If the fast thread calls a function that it must not call, the assert triggers and it is known which function is called.
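Putting those pieces together, a minimal sketch might look like this (the thread entry point and function names are placeholders):
#include <cassert>
#include <windows.h>

namespace global { DWORD fast_thread_id = 0; }

#define CHECK_FAST_THREAD assert(GetCurrentThreadId() != global::fast_thread_id)

DWORD WINAPI FastThreadMain(LPVOID)
{
    global::fast_thread_id = GetCurrentThreadId();   // remember the fast thread
    // ... high-frequency loop calling only fast functions ...
    return 0;
}

void some_slow_function()
{
    CHECK_FAST_THREAD;    // asserts in debug builds if the fast thread strays here
    // ... slow work ...
}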
Don't know how to do that at compile time, but for runtime, maybe use a mutex?
static Mutex critical_mutex;
#define CALL_SLOW( f ) if( critical_mutex.try_lock() == FAIL) \
    printf("SLOW FUNCTION " #f " called while in CRITICAL\n"); \
    else critical_mutex.unlock(); /* not in a critical section: release the probe lock */ \
    f
#define ENTER_CRITICAL() critical_mutex.lock()
#define EXIT_CRITICAL() critical_mutex.unlock()
Whenever you use a slow function while in a critical section, the trylock will fail.
void slow_func(){
}
ENTER_CRITICAL();
CALL_SLOW( slow_func() );
EXIT_CRITICAL();
Will print:
SLOW FUNCTION slow_func() called while in CRITICAL
If you need speed, you can implement your lightweight mutex with InterlockedIncrement on Windows or the __sync* builtins on Linux.
Preshing has an awesome set of blog posts about this HERE.
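For what it's worth, a portable C++11 stand-in for such a lightweight mutex could be built on std::atomic, roughly like this (a sketch only, and a spinlock rather than a fair mutex):
#include <atomic>

class LightweightMutex
{
    std::atomic<bool> locked_{false};
public:
    bool try_lock() { return !locked_.exchange(true, std::memory_order_acquire); }
    void lock()     { while (locked_.exchange(true, std::memory_order_acquire)) { /* spin */ } }
    void unlock()   { locked_.store(false, std::memory_order_release); }
};

static LightweightMutex critical_mutex;   // drops into the macros above,
                                          // with try_lock() returning false as FAIL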
If you're free to modify the code as you wish, there's a type-system-level solution that involves adding some boilerplate.
Basically, you create a new class, SlowFunctionToken. Every slow function in your program takes a reference to SlowFunctionToken. Next, you make SlowFunctionToken's default and copy constructors private.
Now only functions that already have a SlowFunctionToken can call slow functions. How do you get a SlowFunctionToken? Add friend declarations to SlowFunctionToken; specifically, friend the thread entry functions of the threads that are allowed to use slow functions. Then, create local SlowFunctionToken objects there and pass them down.
class SlowFunctionToken;
class Stuff {
public:
void FastThread();
void SlowThread();
void ASlowFunction(SlowFunctionToken& sft);
void AnotherSlowFunction(SlowFunctionToken& sft);
void AFastFunction();
};
class SlowFunctionToken {
SlowFunctionToken() {}
SlowFunctionToken(const SlowFunctionToken&) {}
friend void Stuff::SlowThread();
};
void Stuff::FastThread() {
AFastFunction();
//SlowFunctionToken sft; doesn't compile
//ASlowFunction(???); doesn't compile
}
void Stuff::SlowThread() {
SlowFunctionToken sft;
ASlowFunction(sft);
}
void Stuff::ASlowFunction(SlowFunctionToken& sft) {
AnotherSlowFunction(sft);
AFastFunction(); // works, but that function can't call slow functions
}
I have programmed in both Java and C, and now I am trying to get my hands dirty with C++.
Given this code:
class Booth {
private :
int tickets_sold;
public :
int get_tickets_sold();
void set_tickets_sold();
};
In Java, wherever I needed the value of tickets_sold, I would call the getter repeatedly.
For example:
if (obj.get_tickets_sold() > 50 && obj.get_tickets_sold() < 75){
//do something
}
In C I would just get the value of the particular variable in the structure:
if( obj_t->tickets_sold > 50 && obj_t->tickets_sold < 75){
//do something
}
So while using structures in C, I save the two calls (the two getters) that I would otherwise make in Java. I am not even sure whether those are actual calls or whether Java somehow inlines them.
My point is if I use the same technique that I used in Java in C++ as well, will those two calls to getter member functions cost me, or will the compiler somehow know to inline the code? (thus reducing the overhead of function call altogether?)
Alternatively, am I better off using:
int num_tickets = 0;
if ( (num_tickets = obj.get_ticket_sold()) > 50 && num_tickets < 75){
//do something
}
I want to write tight code and avoid unnecessary function calls; I would care about this in Java because, well, we all know why. But I want my code to be readable and to use the private and public keywords to correctly reflect what is to be done.
Unless your program is too slow, it doesn't really matter. In 99.9999% of code, the overhead of a function call is insignificant. Write the clearest, easiest to maintain, easiest to understand code that you can and only start tweaking for performance after you know where your performance hot spots are, if you have any at all.
That said, modern C++ compilers (and some linkers) can and will inline functions, especially simple functions like this one.
If you're just learning the language, you really shouldn't worry about this. Consider it fast enough until proven otherwise. That said, there are a lot of misleading or incomplete answers here, so for the record I'll flesh out a few of the subtler implications. Consider your class:
class Booth
{
public:
int get_tickets_sold();
void set_tickets_sold();
private:
int tickets_sold;
};
The implementation (known as a definition) of the get and set functions is not yet specified. If you'd specified the function bodies inside the class declaration, the compiler would consider you to have implicitly requested that they be inlined (but it may ignore that if they're excessively large). If you specify them later using the inline keyword, that has exactly the same effect. Summarily...
class Booth
{
public:
int get_tickets_sold() { return tickets_sold; }
...
...and...
class Booth
{
public:
int get_tickets_sold();
...
};
inline int Booth::get_tickets_sold() { return tickets_sold; }
...are equivalent (at least in terms of what the Standard encourages us to expect, but individual compiler heuristics may vary - inlining is a request that the compiler's free to ignore).
If the function bodies are specified later without the inline keyword, then the compiler is under no obligation to inline them, but may still choose to do so. It's much more likely to do so if they appear in the same translation unit (i.e. in the .cc/.cpp/.c++/etc. "implementation" file you're compiling or some header directly or indirectly included by it). If the implementation is only available at link time then the functions may not be inlined at all, but it depends on the way your particular compiler and linker interact and cooperate. It is not simply a matter of enabling optimisation and expecting magic. To prove this, consider the following code:
// inline.h:
void f();
// inline.cc:
#include <cstdio>
void f() { printf("f()\n"); }
// inline_app.cc:
#include "inline.h"
int main() { f(); }
Building this:
g++ -O4 -c inline.cc
g++ -O4 -o inline_app inline_app.cc inline.o
Investigating the inlining:
$ gdb inline_app
...
(gdb) break main
Breakpoint 1 at 0x80483f3
(gdb) break f
Breakpoint 2 at 0x8048416
(gdb) run
Starting program: /home/delroton/dev/inline_app
Breakpoint 1, 0x080483f3 in main ()
(gdb) next
Single stepping until exit from function main,
which has no line number information.
Breakpoint 2, 0x08048416 in f ()
(gdb) step
Single stepping until exit from function _Z1fv,
which has no line number information.
f()
0x080483fb in main ()
(gdb)
Notice the execution went from 0x080483f3 in main() to 0x08048416 in f() then back to 0x080483fb in main()... clearly not inlined. This illustrates that inlining can't be expected just because a function's implementation is trivial.
Notice that this example is with static linking of object files. Clearly, if you use library files you may actually want to avoid inlining of the functions specifically so that you can update the library without having to recompile the client code. It's even more useful for shared libraries where the linking is done implicitly at load time anyway.
Very often, classes providing trivial functions use the two forms of expected-inlined function definitions (i.e. inside class or with inline keyword) if those functions can be expected to be called inside any performance-critical loops, but the countering consideration is that by inlining a function you force client code to be recompiled (relatively slow, possibly no automated trigger) and relinked (fast, for shared libraries happens on next execution), rather than just relinked, in order to pick up changes to the function implementation.
These kind of considerations are annoying, but deliberate management of these tradeoffs is what allows enterprise use of C and C++ to scale to tens and hundreds of millions of lines and thousands of individual projects, all sharing various libraries over decades.
One other small detail: as a ballpark figure, an out-of-line get/set function is typically about an order of magnitude (10x) slower than the equivalent inlined code. That will obviously vary with CPU, compiler, optimisation level, variable type, cache hits/misses etc..
No, repetitive calls to member functions will not hurt.
If it's just a getter function, it will almost certainly be inlined by the C++ compiler (at least with release/optimized builds) and the Java Virtual Machine may "figure out" that a certain function is being called frequently and optimize for that. So there's pretty much no performance penalty for using functions in general.
You should always code for readability first. Of course, that's not to say that you should completely ignore performance outright, but if performance is unacceptable then you can always profile your code and see where the slowest parts are.
Also, by restricting access to the tickets_sold variable behind getter functions, you can pretty much guarantee that the only code that can modify tickets_sold is the member functions of Booth. This allows you to enforce invariants on the program's behavior.
For example, tickets_sold is obviously not going to be a negative value. That is an invariant of the structure. You can enforce that invariant by making tickets_sold private and making sure your member functions do not violate that invariant. The Booth class makes tickets_sold available as a "read-only data member" via a getter function to everyone else and still preserves the invariant.
Making it a public variable means that anybody can go and trample over the data in tickets_sold, which basically completely destroys your ability to enforce any invariants on tickets_sold. Which makes it possible for someone to write a negative number into tickets_sold, which is of course nonsensical.
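For example, a minimal sketch of how Booth itself could maintain that invariant (the sell/refund members are invented purely for illustration):
#include <cassert>

class Booth {
public:
    int get_tickets_sold() const { return tickets_sold; }

    void sell_ticket() { ++tickets_sold; }

    void refund_ticket()
    {
        assert(tickets_sold > 0);      // only member functions can touch the field,
        if (tickets_sold > 0)          // so the invariant is enforced in one place
            --tickets_sold;
    }

private:
    int tickets_sold = 0;              // invariant: never negative
};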
The compiler is very likely to inline function calls like this.
class Booth {
public:
int get_tickets_sold() const { return tickets_sold; }
private:
int tickets_sold;
};
Your compiler should inline get_tickets_sold, I would be very surprised if it didn't. If not, you either need to use a new compiler or turn on optimizations.
Any compiler worth its salt will easily optimize the getters into direct member access. The only times that won't happen are when you have optimization explicitly disabled (e.g. for a debug build) or if you're using a brain-dead compiler (in which case, you should seriously consider ditching it for a real compiler).
The compiler will very likely do the work for you, but in general, for things like this I would approach it more from the C perspective than the Java perspective, unless you want the member access to return a const reference. However, when dealing with integers there's usually little value in a const reference over a copy (at least in 32-bit environments, since both are 4 bytes), so your example isn't really a good one here... Perhaps this may illustrate why you would use a getter/setter in C++:
#include <string>

class StringHolder
{
public:
    const std::string& get_string() { return my_string; }
    void set_string(const std::string& val) { if(!val.empty()) { my_string = val; } }
private:
    std::string my_string;
};
That prevents modification except through the setter which would then allow you to perform extra logic. However, in a simple class such as this, the value of this model is nil, you've just made the coder who is calling it type more and haven't really added any value. For such a class, I wouldn't have a getter/setter model.