Debugging macros can take a lot of time. We are much better off
avoiding them except in the very rare cases when neither constants,
functions nor templates can do what we want.
What are the rare cases?
If you want actual textual replacement, that's where you use macros. Take a look at Boost.Preprocessor; it's a great way to simulate variadic templates in C++03 without repeating yourself too much.
In other words, if you want to manipulate the program code itself, use macros.
Another useful application is assert, which is defined to be a no-op when NDEBUG is defined (usually in release-mode compilation).
That brings us to the next point, which is a specialization of the first one: Different code with different compilation modes, or between different compilers. If you want cross-compiler support, you can't get away without macros. Take a look at Boost in general, it needs macros all the time because of various deficiencies in various compilers it has to support.
Another important point is when you need call-site information without wanting to bug the user of your code. You have no way to automatically get that with just a function.
#define NEEDS_INFO() \
has_info(__FILE__, __LINE__, __func__)
With a suitable declaration of has_info (and C++11/C99 __func__ or similar).
This question doesn't appear to have a definite, closed-form answer, so I'll just give a couple of examples.
Suppose you want to print information about a given type. Type names don't exist in the compiled code, so they cannot possibly be expressed by the language itself (except for C++ extensions). Here the preprocessor must step in:
#define PRINT_TYPE_INFO(type) do { printf("sizeof(" #type ") = %zu\n", sizeof(type)); } while (false)
PRINT_TYPE_INFO(int);
PRINT_TYPE_INFO(double);
Similarly, function names are not themselves variable, so if you need to generate lots of similar names, the preprocessor helps:
#define DECLARE_SYM(name) fhandle libfoo_##name = dlsym(lib, "foo_" #name);
DECLARE_SYM(init); // looks up "foo_init()", declares "libfoo_init" pointer
DECLARE_SYM(free);
DECLARE_SYM(get);
DECLARE_SYM(set);
My favourite use is for dispatching CUDA function calls and checking their return value:
#define CUDACALL(F, ARGS...) do { e = F(ARGS); if (e != cudaSuccess) throw cudaException(#F, e); } while (false)
CUDACALL(cudaMemcpy, data, dp, s, cudaMemcpyDeviceToHost);
CUDACALL(cudaFree, dp);
Since this is an open-ended question, here is a trick which I often use and find convenient.
If you want to write a wrapper over a free function, say malloc, without modifying each and every call site in your code, a simple macro suffices:
#define malloc(X) my_malloc( X, __FILE__, __LINE__, __FUNCTION__)
void* my_malloc(size_t size, const char *file, int line, const char *func)
{
    /* Compile this definition where the malloc macro is not visible
       (or #undef malloc first), so the call below is not expanded too. */
    void *p = malloc(size);
    printf("Allocated = %s, %i, %s, %p[%zu]\n", file, line, func, p, size);
    /* Linked-list bookkeeping goes in here */
    return p;
}
You can use this trick to write your own memory leak detector and the like, for debugging purposes.
Though the example is for malloc, it can be reused for any free-standing function.
One example is token pasting if you want to use a value as both an identifier and a value. From the msdn link:
#define paster( n ) printf_s( "token" #n " = %d", token##n )
int token9 = 9;
paster( 9 ); // => printf_s( "token9 = %d", token9 );
There are also cases in the C++ FAQ where, though there may be alternatives, the macro solution is the best way to do things. One example is pointers to member functions, where the right macro
#define CALL_MEMBER_FN(object,ptrToMember) ((object).*(ptrToMember))
makes the call much easier than dealing with all the assorted hair of trying to do it without the macro.
int ans = CALL_MEMBER_FN(fred,p)('x', 3.14);
Honestly I just take their word for it and do it this way, but apparently it gets worse as the calls become more complicated.
Here's an example of someone trying to go it alone.
When you need the call itself to optionally return from a function.
#define MYMACRO(x) do { if (x) { return; } } while (false)
void fn(bool a, bool b, bool c)
{
MYMACRO(a);
MYMACRO(b);
MYMACRO(c);
}
This is usually used for small bits of repetitive code.
I am not sure that debugging macros takes a lot of time. I find debugging macros simple (even 100-line monster macros), because you have the possibility to look at the expansion (using gcc -C -E, for instance), which is less possible with e.g. C++ templates.
C macros are useful on several occasions:
you want to process a list of things in several different ways
you want to define an "lvalue" expression
you need efficiency
you need the location of the macro invocation (through __LINE__)
you need unique identifiers
etc., etc.
Look at the many uses of #define-d macros inside major free software (like Gtk, Gcc, Qt, ...).
What I regret a lot is that the C macro language is so limited... Imagine if the C macro language were as powerful as Guile! (Then you could write things as complex as flex or bison as macros.)
Look at the power of Common Lisp macros!
If you are using C, you need to use macros to simulate templates.
From http://www.flipcode.com/archives/Faking_Templates_In_C.shtml
#define CREATE_VECTOR_TYPE_H(type) \
typedef struct _##type##_Vector { \
    type *pArray; \
    type illegal; \
    int size; \
    int len; \
} type##_Vector; \
void type##_InitVector(type##_Vector *pV, type illegal); \
void type##_InitVectorEx(type##_Vector *pV, int size, type illegal); \
void type##_ClearVector(type##_Vector *pV); \
void type##_DeleteAll(type##_Vector *pV); \
void type##_EraseVector(type##_Vector *pV); \
int type##_AddElem(type##_Vector *pV, type Data); \
type type##_SetElemAt(type##_Vector *pV, int pos, type data); \
type type##_GetElemAt(type##_Vector *pV, int pos);
#define CREATE_VECTOR_TYPE_C(type) \
void type##_InitVector(type##_Vector *pV, type illegal) \
{ \
    type##_InitVectorEx(pV, DEF_SIZE, illegal); \
} \
void type##_InitVectorEx(type##_Vector *pV, int size, type illegal) \
{ \
    pV->len = 0; \
    pV->illegal = illegal; \
    pV->pArray = malloc(sizeof(type) * size); \
    pV->size = size; \
} \
void type##_ClearVector(type##_Vector *pV) \
{ \
    memset(pV->pArray, 0, sizeof(type) * pV->size); \
    pV->len = 0; \
} \
void type##_EraseVector(type##_Vector *pV) \
{ \
    if (pV->pArray != NULL) \
        free(pV->pArray); \
    pV->len = 0; \
    pV->size = 0; \
    pV->pArray = NULL; \
} \
int type##_AddElem(type##_Vector *pV, type Data) \
{ \
    type *pTmp; \
    if (pV->len == pV->size) \
    { \
        pTmp = malloc(sizeof(type) * pV->size * 2); \
        if (pTmp == NULL) \
            return -1; \
        memcpy(pTmp, pV->pArray, sizeof(type) * pV->size); \
        free(pV->pArray); \
        pV->pArray = pTmp; \
        pV->size *= 2; \
    } \
    pV->pArray[pV->len] = Data; \
    return pV->len++; \
} \
type type##_SetElemAt(type##_Vector *pV, int pos, type data) \
{ \
    type old = pV->illegal; \
    if (pos >= 0 && pos < pV->len) \
    { \
        old = pV->pArray[pos]; \
        pV->pArray[pos] = data; \
    } \
    return old; \
} \
type type##_GetElemAt(type##_Vector *pV, int pos) \
{ \
    if (pos >= 0 && pos < pV->len) \
        return pV->pArray[pos]; \
    return pV->illegal; \
}
Consider the standard assert macro.
It uses conditional compilation to ensure that the code is included only in debug builds (rather than relying on the optimizer to elide it).
It uses the __FILE__ and __LINE__ macros to create references to the location in the source code.
I once used a macro to generate a large string array along with an index enumeration:
strings.inc
GEN_ARRAY(a)
GEN_ARRAY(aa)
GEN_ARRAY(abc)
GEN_ARRAY(abcd)
// ...
strings.h
// the actual strings
#define GEN_ARRAY(x) #x ,
const char *strings[]={
#include "strings.inc"
""
};
#undef GEN_ARRAY
// indexes
#define GEN_ARRAY(x) enm_##x ,
enum ENM_string_Index{
#include "strings.inc"
enm_TOTAL
};
#undef GEN_ARRAY
It is useful when you have several arrays that have to be kept synchronized.
To expand on #tenfour's answer about conditional returns: I do this a lot when writing Win32/COM code where it seems I'm checking an HRESULT every second line. For example, compare the annoying way:
// Annoying way:
HRESULT foo() {
HRESULT hr = SomeCOMCall();
if (SUCCEEDED(hr)) {
hr = SomeOtherCOMCall();
}
if (SUCCEEDED(hr)) {
hr = SomeOtherCOMCall2();
}
// ... ad nauseam.
return hr;
}
With the macro-y nice way:
// Nice way:
HRESULT foo() {
SUCCEED_OR_RETURN(SomeCOMCall());
SUCCEED_OR_RETURN(SomeOtherCOMCall());
SUCCEED_OR_RETURN(SomeOtherCOMCall2());
// ... ad nauseam.
// If control makes it here, nothing failed.
return S_OK;
}
It's doubly handy if you wire up the macro to log any failures automatically, using other macro ideas like token pasting and __FILE__, __LINE__, etc.; I can even make the log entry contain the code location and the expression that failed. You could also throw an assert in there if you wanted to!
#define SUCCEED_OR_RETURN(expression) do { \
    HRESULT hrTest = (expression); \
    if (!SUCCEEDED(hrTest)) { \
        logFailure( \
            #expression, \
            HResultValueToString(hrTest), \
            __FILE__, \
            __LINE__, \
            __FUNCTION__); \
        return hrTest; \
    } \
} while (false)
Debugging becomes much easier when your project is divided into various modules, one for each task. Macros can be very useful when you have a large and complex software project, but there are some pitfalls, which are stated here.
For me it's more comfortable to use macros for constants and for parts of code that have no separate logical functionality. But there are some important differences between (inline) functions and (function-like) macros; here they are:
http://msdn.microsoft.com/en-us/library/bf6bf4cf.aspx
Related
I want to write a simple macro. Because this macro is used in many places by different C++ functions, I encountered a variable-scope issue. I would like to know if there is a quick way to solve it. Thank you very much.
As you can see in the attached code, depending on whether the macro is called in the function for the first time or not, I want to either declare or reuse the variable ptrCandidate. Note that the variable's scope is the function, not the file or translation unit. In other words, every time the macro is invoked in a new function for the first time, I want the top macro; and within the same function, if the macro is invoked again, I want the bottom macro.
#define EXPECT_MY_CLASS_EQ(expectedStr, candidateStr) \
auto ptrCandidate = parseAndGetPtr(candidateStr); \
doWork(ptrCandidate); \
EXPECT_EQ(expectedStr, convertToString(ptrCandidate));
#define EXPECT_MY_CLASS_EQ(expectedStr, candidateStr) \
ptrCandidate = parseAndGetPtr(candidateStr); \
doWork(ptrCandidate); \
EXPECT_EQ(expectedStr, convertToString(ptrCandidate));
void foo(){
EXPECT_MY_CLASS_EQ("123","abcd")
}
void bar(){
EXPECT_MY_CLASS_EQ("111","aabb")
EXPECT_MY_CLASS_EQ("222","ccdd")
}
void foo(){
auto ptrCandidate = parseAndGetPtr("abcd");
doWork(ptrCandidate);
EXPECT_EQ("123", convertToString(ptrCandidate));
}
void bar(){
auto ptrCandidate = parseAndGetPtr("aabb");
doWork(ptrCandidate);
EXPECT_EQ("111", convertToString(ptrCandidate));
/* auto */ ptrCandidate = parseAndGetPtr("ccdd");
doWork(ptrCandidate);
EXPECT_EQ("222", convertToString(ptrCandidate));
}
As shown in another answer, you don't need a macro in this case.
Generally speaking though, you can avoid re-definitions of variable names by the following means:
Use of __LINE__ preprocessor symbol (or __COUNTER__, though IIRC that's not standard). Note that creating a variable name with the preprocessor requires two levels of indirection (replace VARIABLE in the link with __LINE__).
A do { /* code */ } while(0) ... which is AFAIK the most common way to write macros that are more than just a simple expression.
A lambda which is immediately executed:
([](auto var) { /* code using var */ })(initExpressionForVar())
Note that each of these approaches actually creates a new variable each time, so is semantically different from your approach with two separate macros! This is especially important if the type of the (assigned) variable has a non-default assignment operator!
If, for some reason, you rely on the reuse of a single variable and the assignment to it, then IMO the easiest approach is to define two macros. One macro which declares the variable (and initializes it, if necessary), and another macro with the code which uses the variable.
It seems regular function works:
void EXPECT_MY_CLASS_EQ(const char* expectedStr, const char* candidateStr)
{
auto ptrCandidate = parseAndGetPtr(candidateStr);
doWork(ptrCandidate);
EXPECT_EQ(expectedStr, convertToString(ptrCandidate));
}
A possible way might be to use __LINE__ or __COUNTER__ with preprocessor symbol concatenation.
In your case, you probably don't need any macro: prefer some static inline function.
Here is a real-life example (using concatenation and __LINE__) from my Bismon project's file cmacros.h, line 285 (it is in C, but the same trick could be done in C++):
#define LOCAL_FAILURE_HANDLE_ATBIS_BM(Fil,Lin,Lockset,Flabel,FcodVar,ReasonVar,PlaceVar) \
struct failurehandler_stBM fh_##Lin \
= { \
.pA = {.htyp = typayl_FailureHandler_BM}, \
.failh_magic = FAILUREHANDLEMAGIC_BM, \
.failh_lockset = Lockset, \
.failh_reason = NULL, \
.failh_jmpbuf = {}}; \
curfailurehandle_BM = &fh_##Lin; \
volatile int failcod_##Lin = setjmp(fh_##Lin.failh_jmpbuf); \
FcodVar = failcod_##Lin; \
if (failcod_##Lin) { \
ReasonVar = fh_##Lin.failh_reason; \
PlaceVar = fh_##Lin.failh_place; \
goto Flabel; \
}; \
(void)0
#define LOCAL_FAILURE_HANDLE_AT_BM(Fil,Lin,Lockset,Flabel,FcodVar,ReasonVar,PlaceVar) \
LOCAL_FAILURE_HANDLE_ATBIS_BM(Fil,Lin,Lockset,Flabel,FcodVar,ReasonVar,PlaceVar)
/// code using LOCAL_FAILURE_HANDLE_BM should probably backup and
/// restore the curfailurehandle_BM
#define LOCAL_FAILURE_HANDLE_BM(Lockset,Flabel,FcodVar,ReasonVar,PlaceVar) \
LOCAL_FAILURE_HANDLE_AT_BM(__FILE__,__LINE__,Lockset,Flabel,FcodVar,ReasonVar,PlaceVar)
Back to your question, if you still want a macro: just create a block, e.g.
#define EXPECT_MY_CLASS_EQ(expectedStr, candidateStr) do{ \
auto ptrCandidate = parseAndGetPtr(candidateStr); \
doWork(ptrCandidate); \
EXPECT_EQ(expectedStr, convertToString(ptrCandidate));} while(0)
How can we check if there is an assignment in a macro param, as in the example below?
define:
#define value(x) {...}
call:
case a: value( a = 10 )
case b: value( 10 )
What I want to do is implement a string enum in the following way:
#define STR_ENUM_DICT_ITEM_(value) [@((MethodX)value) stringValue] : @#value,
#define STR_ENUM_DICT_ITEM(idx, value) STR_ENUM_DICT_ITEM_(value)
#define STR_ENUM(type, name, ...) \
typedef NS_ENUM (type, name){__VA_ARGS__}; \
NSString *name##_S(type value) \
{ \
static NSDictionary *values; \
static dispatch_once_t onceToken; \
dispatch_once(&onceToken, ^{ \
values = @{ \
metamacro_foreach(STR_ENUM_DICT_ITEM, , __VA_ARGS__) \
}; \
}); \
return [values valueForKey:[@(value) stringValue]]; \
}
STR_ENUM(NSUInteger, MethodX,
Method1 = 100 // this is a comment
, Method2
, Method3 = Method1
);
So I need to check if there is an assignment in the param, or some other way to get the value of (Method1 = 100) or (Method3 = Method1), which should result in 100 and 100.
Not very efficient in terms of performance but it works:
#define value(x) \
do { \
assert(!strchr(#x, '=')); \
/* rest of macro */ \
} while (0)
This is a simple example only covering the two cases provided by the OP. However, using the # operator to convert the macro's argument into a "string", one can create rules to test against that are as complex as one likes.
Can you specify in more detail which cases you want to distinguish? How about
value(a)
value(a+2)
value(a==10)
value(a<=10)
value('=')
?
What do you want to happen in case it contains an assignment? A compilation error or something else?
For a compilation error, I managed to get the following work
#define check(a) if (a==a);
int main() {
int a;
check(10);
check(a);
check(a+2);
check(a==10);
check(a<=10);
check('=');
check(a=10);
return 0;
}
Every macro invocation except check(a=10) compiles. The latter expands to if (a=10==a=10);, which does not compile: since == binds tighter than =, it parses as a = ((10==a) = 10), and (10==a) is not an lvalue.
I have a macro that checks an error state. If there is an error, it logs the result and returns out of the method.
#define CHECKHR_FAILED_RETURN(hr) if(FAILED(hr)){LOGHR_ERROR(hr); return hr;}
The macro is called like this:
CHECKHR_FAILED_RETURN(_recordingGraph->StopRecording(&currentFile));
However, if the result has indeed FAILED(hr), the parameter is evaluated again to perform the LOGHR_ERROR(hr). I see why my StopRecording is getting called twice in case of an error, so my question is...
How do you evaluate the result of a parameter in a macro, but use it multiple times within the same macro?
UPDATE:
Based on suggestions below, I changed my macros to the following.
#define CHECKHR_FAILED_RETURN(hr) do { \
HRESULT result = hr; \
if(FAILED(result)) \
{ \
LOGHR_ERROR(result); \
return result; \
} \
} while (false)
#define CHECKHR_FAILED(hr) do { \
HRESULT result = hr; \
if(FAILED(result)) \
{ \
LOGHR_ERROR(result); \
return true; \
} \
else \
{ \
return false; \
} \
} while (false)
As one commenter says, prefer a function to a macro in every place where it's possible. In this case it's not possible, since you want to embed a return into the code.
You can do an assignment to a temporary variable within the macro and use it instead of calling the parameter multiple times.
#define CHECKHR_FAILED_RETURN(hr) do{ HRESULT hr2=hr; if(FAILED(hr2)) {LOGHR_ERROR(hr2); return hr2; }}while(false)
The do loop is an idiom ensuring that the macro can be used in an if-else just like a function call. With C++11 and onwards you can alternatively use a lambda expression.
How do you write a macro with a variable number of arguments to define a function? Suppose that we define class class1 with two parameters and class class2 with three parameters.
class class1 {
public:
int arg1;
int arg2;
class1(int x1, int x2): arg1(x1), arg2(x2) {}
};
class class2 {
public:
int arg1;
int arg2;
int arg3;
class2(int x1, int x2, int x3): arg1(x1), arg2(x2), arg3(x3) {}
};
For each class that I define or even classes that have been defined before I want to write the following:
template<> inline void writeInfo<class1>(const class1& obj, FILE* fp) {
writeAmount(2, fp);
writeName("arg1", fp);
writeInfo(obj.arg1, fp);
writeName("arg2", fp);
writeInfo(obj.arg2, fp);
}
template<> inline void writeInfo<class2>(const class2& obj, FILE* fp) {
writeAmount(3, fp);
writeName("arg1", fp);
writeInfo(obj.arg1, fp);
writeName("arg2", fp);
writeInfo(obj.arg2, fp);
writeName("arg3", fp);
writeInfo(obj.arg3, fp);
}
We do not need to care about the definitions of writeAmount, writeName or writeInfo. What I would like to do is write something like:
MACROWRITEINFO(class1, 2, arg1, arg2);
MACROWRITEINFO(class2, 3, arg1, arg2, arg3);
Is it possible to create such a macro so that it expands to the above template definitions? I've read in a lot of places that macros are evil, but in this case I believe they are very helpful, since they'll reduce the amount of code I type and thus the number of typos I make while creating the template functions.
First of all, you should improve your formatting/code. Your code lacks "class" keywords and semicolons after the class definitions; when you post a snippet, make sure it's proper code, because some people (i.e. me) will try to compile it.
Second of all, don't use function template specialization. If macros are evil, then specializations must be Satan incarnate. Just stick to the good old overloads. See here for details.
And at last, an answer. You could mess around with variadic macros if all args were of the same type: for example, you could create an array inside the writeInfo function and iterate over the elements. Since that's clearly not the case here, you can define several variants of the MACROWRITEINFO macro for different numbers of parameters, using some common blocks to reduce code repetition. For example:
#define MACROWRITEINFO_BEGIN(type, amount) \
void writeInfo(const type& obj, FILE* fp) \
{ \
writeAmount(amount, fp);
#define MACROWRITEINFO_NAMEINFO(name) \
writeName(#name, fp); \
writeInfo(obj.name, fp);
#define MACROWRITEINFO_END() \
}
Using those you can now define variants based on number of arguments.
#define MACROWRITEINFO1(type, arg1) \
MACROWRITEINFO_BEGIN(type, 1) \
MACROWRITEINFO_NAMEINFO(arg1) \
MACROWRITEINFO_END()
#define MACROWRITEINFO2(type, arg1, arg2) \
MACROWRITEINFO_BEGIN(type, 2) \
MACROWRITEINFO_NAMEINFO(arg1) \
MACROWRITEINFO_NAMEINFO(arg2) \
MACROWRITEINFO_END()
And so on...
EDIT:
Well I guess it is possible to use variadic macros here. Take at look at this SO question. It's pure madness, but you should be able to achieve what you want.
EDIT:
My idea was to expand variadic arguments into array then iterate over them; if they were of the same type, let's say int, you could write:
#define VAARGSSAMPLE(...) \
int args[] = { __VA_ARGS__ }; \
for (int i = 0; i < sizeof(args)/sizeof(int); ++i) \
{ \
printf("%d\n", args[i]); \
}
VAARGSSAMPLE(1, 5, 666);
So if all your variables were of the same type you could put them in an array. But they are not, so it won't do. If you really, really want to stick to variadic arguments go to my first edit.
I don't think it's possible to do that with a macro. You can use variable arguments (variadic) but you can't generate code which depends on the arguments.
I'd suggest you create a DSL (e.g. simple XML) and generate code out of it. This is much cleaner and good practice.
You could do this:
<writeInfos>
<writeInfo class="class1" amount="3">
<arguments>
<argument>arg1</argument>
<argument>arg2</argument>
</arguments>
</writeInfo>
</writeInfos>
Then create source code out of this. You should add this step to your build process.
But you can also define something much simpler: you could put your MACROWRITEINFO "functions" in a text file and parse it yourself.
It's my first question here, and it's a noobish one :).
I'm facing a problem with C++ and Qt 4.6: I want to factor out some of my code which invokes public slots of a QObject through the QMetaMethod::invoke() method.
The problem I'm facing to, is that the Q_ARG macro is defined as follow:
#define Q_ARG(type, data) QArgument<type >(#type, data)
That says that I should know the type at compile time. But I get my arguments for the method as QVariants. I can get their types through the ->type() accessor, which returns an enum value of type QVariant::Type, but naturally not a compile-time type.
So to simply generate the arguments for the invocation, I made the following macro:
#define PASS_SUPPORTED_TYPE(parameterToFill, requiredType, param, supported) { \
    supported = true; \
    \
    switch (requiredType) { \
    case QVariant::String: \
        parameterToFill = Q_ARG(QString, \
                                param.value<QString>()); \
        break; \
    \
    case QVariant::Int: \
        parameterToFill = Q_ARG(int, param.value<int>()); \
        break; \
    \
    case QVariant::Double: \
        parameterToFill = Q_ARG(double, param.value<double>()); \
        break; \
    \
    case QVariant::Char: \
        parameterToFill = Q_ARG(char, param.value<char>()); \
        break; \
    \
    case QVariant::Bool: \
        parameterToFill = Q_ARG(bool, param.value<bool>()); \
        break; \
    \
    case QVariant::Url: \
        parameterToFill = Q_ARG(QUrl, param.value<QUrl>()); \
        break; \
    \
    default: \
        supported = false; \
    } \
}
The same could be done in a method which returns true or false instead of setting the "supported" flag, but this would force me to do a heap allocation, because the param.value<T>() call returns a copy of the QVariant's value, which I would then have to store on the heap through a new or a memcpy.
And that is my problem, I don't want to do heap allocation in this method, because this will get called thousands of time (this is a request handling module).
for (int k = 0; k < methodParams.size(); ++k) {
QVariant::Type paramType = QVariant::nameToType(methodParams[k].toAscii());
[...]
bool supportedType = false;
PASS_SUPPORTED_TYPE(
paramsToPass[k],
paramType,
params[k],
supportedType);
[...]
}
metaMethod.invoke(objectToCall, paramsToPass[0], paramsToPass[1], paramsToPass[2] [...]);
This does not please me because it's not type-safe. So the question I'm asking myself is: how could I get rid of this macro and replace it with a method which would do stack allocation and not heap allocation?
I thank you all in advance for your help and interest.
And that is my problem, I don't want to do heap allocation in this method, because this will get called thousands of times (this is a request handling module).
Don't second-guess performance issues. Yes, stack allocation is faster, and yes, one should avoid copies when they aren't needed. However, this looks like premature optimization to me.
It seems you're building a very complex code architecture in order to save a few CPU cycles. In the end, you won't be able to tell reliably what gets called and how many times, and you'll have unmaintainable code.
My advice would be: focus on the correctness and simplicity of your code, and if you really face performance issues at some point, profile your code to see what is wrong.