This is my very first post, so I would like to say hello to everybody. The problem I have encountered is code optimization at compile time, and to be more specific, removing debug prints.
Let's imagine that we have the native syslog logger and we are wrapping it (without using macros, which is a very important point!) with the following code:
#include <sstream>
#include <syslog.h>

enum severity { info_log, debug_log, warning_log, error_log };

template <severity S> struct flusher;  // forward declaration so logger can mention it

struct logger {
    std::ostringstream stream;
    template <severity T>
    flusher<T> operator<<(flusher<T> (*m)(logger&)) { return m(*this); }
};

template <severity S>
struct flusher {
    logger* log_;
    flusher(logger* log) : log_(log) {}
    flusher(const flusher& rhs) : log_(rhs.log_) {}
    ~flusher() { syslog(S, "%s", log_->stream.str().c_str()); log_->stream.str(""); }
    operator std::ostream& () { return log_->stream; }
};

#ifdef NDEBUG
// Release builds: the debug flusher is an empty shell that swallows everything.
template <> struct flusher<debug_log> {
    flusher(logger*) {}
    flusher(const flusher&) {}
    ~flusher() {}
    template <typename T> flusher& operator<<(T const&) { return *this; }
};
#endif

inline flusher<info_log> info(logger& log) { return flusher<info_log>(&log); }
inline flusher<debug_log> debug(logger& log) { return flusher<debug_log>(&log); }
inline flusher<warning_log> warning(logger& log) { return flusher<warning_log>(&log); }
inline flusher<error_log> error(logger& log) { return flusher<error_log>(&log); }
I thought that the empty implementation of flusher would encourage the compiler to remove such useless code, but with both -O2 and -O3 it is not removed.
Is there any way to provoke the mentioned behaviour?
Thanks in advance
I have successfully done what you're attempting, although with at least two differences: 1) I wasn't using templates - that might be creating a complexity the compiler is unable to optimize out, and 2) my log use included a macro (see below).
Additionally (you may have already done this), make sure all your "empty" definitions are in the logger's header file, so optimizations are done at compile time and not postponed to link time.
// use it like this
my_log << "info: " << 5 << endl;
The release definition looks like this:
#define my_log if(true);else logger
and the debug definition looks like this:
#define my_log if(false);else logger
Note that the compiler optimizes out the logger for all if(true) in release, and uses the logger in debug. Also note the full if/else syntax in both cases avoids funny situations where you have unscoped use, e.g.
if (something)
my_log << "this" << endl;
else
somethingelse();
would cause somethingelse() to become the else of my_log's hidden if without it.
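For reference, a minimal self-contained sketch of this approach (my_log and logger_stream are illustrative stand-ins for your real logger object):
#include <iostream>

std::ostream &logger_stream = std::clog;        // assumed logging sink for this sketch

#ifdef NDEBUG
#define my_log if (true) ; else logger_stream   // release: the streaming expression is dead code
#else
#define my_log if (false) ; else logger_stream  // debug: the streaming expression runs
#endif

int main() {
    my_log << "info: " << 5 << std::endl;       // optimized away entirely when NDEBUG is set
}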
Your current code is not preventing the call to f() and any side effects it may have; it only prevents the actual printing. This is why macros are the traditional approach to this problem - they provide an unevaluated context in which you can check whether the value should be printed before actually printing it.
In order to achieve this without macros, some extra indirection is needed, e.g. std::function, function pointers, etc. As an example, you could provide a wrapper class which contains a std::function, and specialise your stream operators to call the std::function in the default case, and not in the NDEBUG case.
Very rough example:
#include <functional>

//Wrapper object for holding a callable without evaluating it
template <typename Func>
struct debug_function_t {
    debug_function_t(Func & f) : f(f) {}
    std::function<Func> f;                      // must be declared before the decltype below
    decltype(f()) operator()() { return f(); }
};
//Helper function for type deduction
template <typename Func>
debug_function_t<Func> debug_function(Func & f) {
    return debug_function_t<Func>(f);
}
struct debug_logger {
    template <typename T>
    debug_logger & operator<<(T & rhs) { return *this; }
    template <typename Func> //Doesn't call f(), so it's never evaluated
    debug_logger & operator<<(debug_function_t<Func> f) { return *this; }
};
Then in your client code
int f(){ std::cout << "f()\n"; return 0; }
debug_logger log;
log << debug_function(f);
So, following the comment's code:
inline int f()
{
std::cout << 1;
return 1;
}
needs to be made into:
inline int f()
{
#ifndef NDEBUG
std::cout << 1;
#endif
return 1;
}
or something like this:
#ifndef NDEBUG
static const int debug_enable = 1;
#else
static const int debug_enable = 0;
#endif
inline int f()
{
if (debug_enable)
{
std::cout << 1;
}
return 1;
}
You need to tell the compiler somehow that this code isn't needed.
The technique I've used for a few games requires the debug printing to be a function rather than a general expression. E.g.:
debug_print("this is an error string: %s", function_that_generates_error_string());
In release mode, the definition of debug_print is:
#define debug_print sizeof
That removes debug_print and any expression passed to it from the executable. It still has to be passed valid expressions, but they are not evaluated at runtime.
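To illustrate (a hedged sketch; debug_print and make_msg are made-up names here, and the (void) cast is only added to silence unused-value warnings):
#include <cstdio>
#include <string>

std::string make_msg() { std::puts("make_msg ran"); return "details"; }

#ifndef NDEBUG
inline void debug_print(const char *fmt, const std::string &s) { std::printf(fmt, s.c_str()); }
#else
// Release: the whole argument list becomes the unevaluated operand of sizeof.
#define debug_print (void)sizeof
#endif

int main() {
    debug_print("error: %s\n", make_msg());   // make_msg() is never called when NDEBUG is set
}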
Related
Let's say I'm trying to write multiple handlers for multiple message types.
enum MESSAGE_TYPE { TYPE_ZERO, TYPE_ONE, TYPE_TWO, TYPE_THREE, TYPE_FOUR };
One solution might be
void handler_for_type_one(...){ ... }
void handler_for_type_two(...){ ... }
...
switch(message_type){
case TYPE_ONE: handler_for_type_one(); break;
case TYPE_TWO: handler_for_type_two(); break;
...
And yeah, that would work fine. But now I want to add logging that wraps each of the handlers. Let's say a simple printf at the beginning / end of the handler function (before and after is fine too).
So maybe I do this:
template<MESSAGE_TYPE>
void handler() {
std::printf("[default]");
}
template<> void handler<TYPE_ONE>() {
std::printf("[one]");
}
template<> void handler<TYPE_TWO>() {
std::printf("[two]");
}
template<> void handler<TYPE_THREE>() {
std::printf("[three]");
}
int main()
{
std::printf("== COMPILE-TIME DISPATCH ==\n");
handler<TYPE_ZERO>();
handler<TYPE_ONE>();
handler<TYPE_TWO>();
handler<TYPE_THREE>();
handler<TYPE_FOUR>();
}
And it works how I'd expect:
== COMPILE-TIME DISPATCH ==
[default][one][two][three][default]
When the message-type is known at compile time, this works great. I don't even need that ugly switch. But outside of testing I won't know the message type and even if I did, wrap_handler (for the logging) "erases" that, requiring me to use the switch "map".
void wrap_handler(MESSAGE_TYPE mt) {
std::printf("(before) ");
switch (mt) {
case TYPE_ZERO: handler<TYPE_ZERO>(); break;
case TYPE_ONE: handler<TYPE_ONE>(); break;
case TYPE_TWO: handler<TYPE_TWO>(); break;
case TYPE_THREE: handler<TYPE_THREE>(); break;
//case TYPE_FOUR: handler<TYPE_FOUR>(); break; // Showing "undefined" path
default: std::printf("(undefined)");
}
std::printf(" (after)\n");
}
int main()
{
std::printf("== RUNTIME DISPATCH ==\n");
wrap_handler(TYPE_ZERO);
wrap_handler(TYPE_ONE);
wrap_handler(TYPE_TWO);
wrap_handler(TYPE_THREE);
wrap_handler(TYPE_FOUR);
}
== RUNTIME DISPATCH ==
(before) [default] (after)
(before) [one] (after)
(before) [two] (after)
(before) [three] (after)
(before) (undefined) (after)
My "goals" for the solution are:
Have the enum value as close to the handler definition as possible -- template specialization like I show above seems to be about the best I can do in this area, but I may be missing something.
When adding a message-type/handler, I'd prefer to keep the changes as local/tight as possible. (Basically, I'm looking for any way to get rid of that switch).
If I do need a switch or map, etc., since it'd be far away from the new handler, I'd like a way at compile time to tell whether there's a message type (enum value) without a corresponding switch case. (Maybe make the switch a map/array? Not sure if you can get the size of an initialized map at compile time.)
Minimize boilerplate
The other solution that seems obvious is a virtual method that's overridden in different subclasses, one for each message type, but it doesn't seem like there's a way to "bind" a message type (enum value) to a specific implementation as cleanly as the template specialization above.
Just to round it out, this could be done perfectly with decorators (in other languages):
#handles(MESSAGE_TYPE.TYPE_ZERO)
def handler(...):
...
Any ideas?
One way to get rid of the manual switch statements is to use template recursion, as follows. First, we create an integer sequence from your enum, like so:
enum MESSAGE_TYPE { TYPE_ZERO, TYPE_ONE, TYPE_TWO, TYPE_THREE, TYPE_FOUR };
using message_types = std::integer_sequence<MESSAGE_TYPE, TYPE_ZERO, TYPE_ONE, TYPE_TWO, TYPE_THREE, TYPE_FOUR>;
Second, let's change the handler slightly and make it a class with a static function:
template <MESSAGE_TYPE M>
struct Handler
{
// replace with this whatever your handler needs to do
static void handle(){std::cout << (int)M << std::endl;}
};
// specialise as required
template <>
struct Handler<MESSAGE_TYPE::TYPE_FOUR>
{
static void handle(){std::cout << "This is my last message type" << std::endl;}
};
Now, with these we can easily use template recursion to create a generic switch map:
template <class Sequence>
struct ct_map;
// specialisation to end recursion
template <class T, T Head>
struct ct_map<std::integer_sequence<T, Head>>
{
template <template <T> class F>
static void call(T t)
{
return F<Head>::handle();
}
};
// recursion
template <class T, T Head, T... Tail>
struct ct_map<std::integer_sequence<T, Head, Tail...>>
{
template <template <T> class F>
static void call(T t)
{
if(t == Head) return F<Head>::handle();
else return ct_map<std::integer_sequence<T, Tail...>>::template call<F>(t);
}
};
And use as follows:
int main()
{
ct_map<message_types>::call<Handler>(MESSAGE_TYPE::TYPE_ZERO);
ct_map<message_types>::call<Handler>(MESSAGE_TYPE::TYPE_THREE);
ct_map<message_types>::call<Handler>(MESSAGE_TYPE::TYPE_FOUR);
}
Now, if you want to create your wrap_handler, you can do this:
template <MESSAGE_TYPE M>
struct WrapHandler
{
static void handle()
{
std::cout << "Before" << std::endl;
Handler<M>::handle();
std::cout << "After" << std::endl;
}
};
int main()
{
ct_map<message_types>::call<WrapHandler>(MESSAGE_TYPE::TYPE_THREE);
}
Live code here
The way I understand it, a function pointer may be what you need.
Going from your example, the code would be like this:
template<MESSAGE_TYPE>
void handler() {
std::printf("[default]");
}
template<> void handler<TYPE_ONE>() {
std::printf("[one]");
}
template<> void handler<TYPE_TWO>() {
std::printf("[two]");
}
template<> void handler<TYPE_THREE>() {
std::printf("[three]");
}
void wrap_handler(void (*handler)()) {
std::printf("(before) ");
if (!handler)
std::printf("(undefined)");
else
handler();
std::printf(" (after)\n");
}
int main()
{
std::printf("== COMPILE-TIME DISPATCH ==\n");
handler<TYPE_ZERO>();
handler<TYPE_ONE>();
handler<TYPE_TWO>();
handler<TYPE_THREE>();
handler<TYPE_FOUR>();
std::printf("\n\n");
std::printf("== RUNTIME DISPATCH ==\n");
wrap_handler(handler<TYPE_ZERO>);
wrap_handler(handler<TYPE_ONE>);
wrap_handler(handler<TYPE_TWO>);
wrap_handler(handler<TYPE_THREE>);
wrap_handler(nullptr); // no handler registered: takes the "(undefined)" path
}
The function pointer must mirror the prototype of the handler function (meaning all handlers need a compatible signature).
In order to pass an argument, the function would change to:
void wrap_handler(void (*handler)(ArgumentType), const ArgumentType &arg) {
std::printf("(before) ");
if (!handler)
std::printf("(undefined)");
else
handler(arg);
std::printf(" (after)\n");
}
A way around this would be to use std::function (C++11).
void wrap_handler(std::function<void()> handler) {
std::printf("(before) ");
if (!handler)
std::printf("(undefined)");
else
handler();
std::printf(" (after)\n");
}
Possible ways to call this include:
wrap_handler(&functionWithoutArguments);
wrap_handler(std::bind(functionWithArgument, someArgument));
wrap_handler([=](){ LambdaCode; });
etc.
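Putting the pieces together, a self-contained sketch of the std::function variant could look like this (just an illustration of the idea, not a drop-in solution):
#include <cstdio>
#include <functional>

enum MESSAGE_TYPE { TYPE_ZERO, TYPE_ONE, TYPE_TWO, TYPE_THREE, TYPE_FOUR };

template <MESSAGE_TYPE>
void handler() { std::printf("[default]"); }
template <> void handler<TYPE_ONE>() { std::printf("[one]"); }

void wrap_handler(std::function<void()> h) {
    std::printf("(before) ");
    if (!h) std::printf("(undefined)");
    else    h();
    std::printf(" (after)\n");
}

int main() {
    wrap_handler(handler<TYPE_ZERO>);          // plain function, prints [default]
    wrap_handler([] { handler<TYPE_ONE>(); }); // lambda wrapper, prints [one]
    wrap_handler(nullptr);                     // empty std::function: the "(undefined)" path
}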
This is a common problem for all applications receiving messages or events.
However, in C++ a switch or some kind of table of handlers is the best you can do. The reason is that the value of the enum is only known at run time, so you cannot make that decision at compile time.
Other languages, like Python, can provide the solution you are looking for, because they are interpreted languages, so compile time and run-time are the same.
Boost.Asio is a good example of how you can hide the switch, but my experience is that hiding it is not as good as it seems at first.
When you need to debug your code, or someone else has to find the handler that belongs to a certain event, or you have to check whether a handler is registered, you need to know where the switch is, place a breakpoint there, or log the incoming messages. This is much more difficult in systems like Asio.
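For what it's worth, the "table of handlers" variant can be as simple as an array of function pointers indexed by the enum (a rough sketch with made-up handler names):
#include <cstdio>

enum MESSAGE_TYPE { TYPE_ZERO, TYPE_ONE, TYPE_TWO, TYPE_THREE, TYPE_FOUR, TYPE_COUNT };

void handle_zero() { std::printf("[zero]\n"); }
void handle_one()  { std::printf("[one]\n"); }

// One slot per enum value; a null slot means "no handler registered".
using handler_fn = void (*)();
handler_fn handler_table[TYPE_COUNT] = { handle_zero, handle_one, nullptr, nullptr, nullptr };

void dispatch(MESSAGE_TYPE mt) {
    if (handler_table[mt]) handler_table[mt]();
    else                   std::printf("(undefined)\n");
}

int main() {
    dispatch(TYPE_ONE);   // [one]
    dispatch(TYPE_THREE); // (undefined)
}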
C++ requires the exact function signature to be figured out at compile time, and this includes determining template parameters. You won't be able to get rid of some logic that determines the exact operation to execute, whether you create a map-like data structure for this or keep it as a switch. If you're just worried about accidentally leaving out some enum constant in the switch, or about the boilerplate code, this may be the time to get the preprocessor involved.
#ifdef MESSAGE_TYPES
# error macro name conflict for MESSAGE_TYPES may result in errors
#endif
// x is a function-like macro that takes 1 parameter (2, if you want the constants to be assigned a specific value)
#define MESSAGE_TYPES(x) \
x(TYPE_ZERO) \
x(TYPE_ONE) \
x(TYPE_TWO) \
x(TYPE_THREE) \
x(TYPE_FOUR)
#ifdef MESSAGE_TYPE_ENUM_CONSTANT
# error macro name conflict for MESSAGE_TYPE_ENUM_CONSTANT may result in errors
#endif
#define MESSAGE_TYPE_ENUM_CONSTANT(c) c,
enum MESSAGE_TYPE { MESSAGE_TYPES(MESSAGE_TYPE_ENUM_CONSTANT) };
#undef MESSAGE_TYPE_ENUM_CONSTANT
template<MESSAGE_TYPE>
void handler() {
std::printf("[default]");
}
template<> void handler<TYPE_ONE>() {
std::printf("[one]");
}
template<> void handler<TYPE_TWO>() {
std::printf("[two]");
}
template<> void handler<TYPE_THREE>() {
std::printf("[three]");
}
void wrap_handler(MESSAGE_TYPE mt) {
std::printf("(before) ");
#ifdef HANDLER_CALL_SWITCH_CASE
# error macro name conflict for HANDLER_CALL_SWITCH_CASE may result in errors
#endif
#define HANDLER_CALL_SWITCH_CASE(c) case c: handler<c>(); break;
switch (mt) {
MESSAGE_TYPES(HANDLER_CALL_SWITCH_CASE);
default:
std::printf("(undefined)");
break;
}
#undef HANDLER_CALL_SWITCH_CASE
std::printf(" (after)\n");
}
#undef MESSAGE_TYPES
I want to use a preprocessor directive to control the code's execution path, because this way I can save run time.
However, #if (sizeof(T)==1) doesn't compile; it fails with error C1017:
template<typename T>
class String
{
public:
static void showSize()
{
#if (sizeof(T)==1)
cout << "char\n";
#else
cout << "wchar_t\n";
#endif
}
};
inline void test()
{
String<char>::showSize();
String<wchar_t>::showSize();
}
The preprocessor runs before the C++ compiler. It knows nothing of C++ types; only preprocessor tokens.
While I would expect any decent compiler to optimize away an if (sizeof(T) == 1), you can be explicit about it in C++17 with the new if constexpr:
template<typename T>
class String
{
public:
static void showSize()
{
if constexpr (sizeof(T) == 1) {
std::cout << "char\n";
} else {
std::cout << "wchar_t\n";
}
}
};
Live Demo
Pre C++17 it's a bit less straightforward. You could use some partial-specialization shenanigans. It's not particularly pretty, and I don't think it will even be more efficient in this case, but the same pattern could be applied in other situations:
template <typename T, size_t = sizeof(T)>
struct size_shower
{
static void showSize()
{
std::cout << "wchar_t\n";
}
};
template <typename T>
struct size_shower<T, 1>
{
static void showSize()
{
std::cout << "char\n";
}
};
template<typename T>
class String
{
public:
static void showSize()
{
size_shower<T>::showSize();
}
};
Live Demo
In this case you could directly specialize String, but I'm assuming in your real situation it has other members that you don't want to have to repeat.
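For completeness, directly specializing just that one member would look something like the following sketch; note that this keys on the exact type char rather than on sizeof(T) == 1:
#include <iostream>

template <typename T>
class String
{
public:
    static void showSize() { std::cout << "wchar_t\n"; }
};

// Explicit specialization of the single member for the char case.
template <>
void String<char>::showSize() { std::cout << "char\n"; }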
The C and C++ preprocessor is mostly a glorified (well, not that glorious) text replacement engine. It doesn't really understand C or C++ code. It doesn't know sizeof, and it doesn't know C or C++ types. (It certainly won't know what T from your template class is.)
If you want to do things conditionally on T and on sizeof, then you'll need to write C++ code to do it (i.e., if (...) instead of #if ....)
As @some-programmer-dude mentioned in the comments, sizeof is not part of the preprocessor.
You should use if constexpr if you want the check to happen at compile time.
If you don't care whether it happens at compile time or at run time, just use a regular if statement.
Keep in mind that if constexpr is a new feature in C++17!
By the way, Borland C++ and Watcom C++ support sizeof() in preprocessor expressions; I do not know whether gcc supports it.
The relevant code is:
std::fstream fout("Logs.txt");
class Logs;
typedef std::ostream& (*ostream_manipulator2)(std::ostream&);
class LogsOutput
{
public:
LogsOutput() {}
~LogsOutput() {}
Logs * pLogs;
friend LogsOutput& operator<<(LogsOutput &logsClass, std::string &strArg);
friend LogsOutput& operator<<(LogsOutput &logsClass, const char *strArg);
friend LogsOutput& operator<<(LogsOutput &logsClass, ostream_manipulator2 pf);
friend LogsOutput& operator<<(LogsOutput &logsClass, uint64_t number);
};
LogsOutput *pLogsOutput;
template <typename T>
T& LOUToutput()
{
if (pLogsOutput)
{
return (*pLogsOutput);
}
else
return fout;
}
I would like to call this function as such:
LOUToutput() << "Print this line " << std::endl;
Sometimes, however, the LogsOutput class isn't created, so dereferencing its pointer would crash, in which case I would rather output to a file instead.
I understand that the compiler cannot tell at compile time whether the LogsOutput class will be instantiated or not and thus cannot deduce the type of the template, but I don't see any other way I could make it work.
So my question is how can my function return a different type based on a run time condition ?
The complex solution to this is to use inheritance. If you were to inherit from std::ostream, you could return a common base class (Here is a discussion if you are interested: How to inherit from std::ostream?)
The simpler solution, IMO, is to return a proxy class that redirects output as necessary.
struct LogProxy {
    LogsOutput *pLog;
    // ...
    LogProxy &operator<<(std::string &o) {
        if(pLog) {
            *pLog << o;
        } else {
            // Assuming fout is available as a global... You probably don't want to do that
            fout << o;
        }
        return *this;
    }
    // ....
};
LogProxy LOUToutput() {
    return LogProxy { pLogsOutput };
}
A few other general comments:
If you want to use templates, you would need to make this a compile-time condition. You could use something like std::enable_if<> to provide multiple template overloads of LOUToutput() which choose at compile time where to log to (a rough sketch of this appears after the sample below).
I'm guessing it was just for the purposes of posting to SO, but your code has multiple globals declared in a header file. You'll need to fix that.
There are no const declarations in your code. A lot of those operators look like they should at least declare their output (string, etc.) parameters const.
EDIT: Here is a working (compiles correctly) sample of this idea:
#include <iostream>
struct PRXY {
bool cond;
const PRXY &operator<<(const std::string &t) const {
if(cond) {
std::cout << t;
} else {
std::cerr << t;
}
return *this;
}
};
PRXY pr(bool cond) {
return PRXY { cond };
}
void test() {
pr(false) << "Hello";
}
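And, tying back to the std::enable_if<> comment above: that approach only works when the choice of sink is a compile-time constant, which is exactly what the question lacks. A rough, purely illustrative sketch:
#include <fstream>
#include <iostream>
#include <type_traits>

std::ofstream fout("Logs.txt");

// The two overloads differ only in their SFINAE-constrained return type.
template <bool UseFile>
typename std::enable_if<UseFile, std::ofstream&>::type LOUToutput() { return fout; }

template <bool UseFile>
typename std::enable_if<!UseFile, std::ostream&>::type LOUToutput() { return std::cout; }

void test() {
    LOUToutput<true>()  << "to the file\n";
    LOUToutput<false>() << "to stdout\n";
}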
Do you know how to perform a lazy evaluation of string, like in this D snippet:
void log(lazy string msg) {
static if (fooBarCondition)
writefln(…) /* something with msg */
}
Actually, the problem might not need laziness at all, given the static if. Maybe it's possible to discard char const* strings when not used? Like, in C++:
void log(char const *msg) {
#ifdef DEBUG
cout << … << endl; /* something with msg */
#else /* nothing at all */
#endif
}
Any idea? Thank you.
#ifdef DEBUG
#define log(msg) do { cout << … << endl; } while(0)
#else
#define log(msg) do { } while(0)
#endif
There are two ways to achieve laziness in C++11: macros and lambda expressions. Neither is technically "lazy"; both give what is called "normal-order evaluation" (as opposed to "eager evaluation"), which means that an expression might be evaluated any number of times. So if you are translating a program from D (or Haskell) to C++ you will have to be careful not to use expressions with side effects (including computation time) in these expressions.
To achieve true laziness, you will have to implement memoizing, which is not that simple.
For simple logging, macros are just fine.
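As a hedged illustration of the lambda flavour (log and make_msg are made-up names): the message expression is wrapped in a lambda and only evaluated if the logging code decides to call it.
#include <iostream>
#include <string>

template <typename MakeMsg>
void log(MakeMsg make_msg) {
#ifndef NDEBUG
    std::cout << make_msg() << std::endl;  // the message is built only in debug builds
#else
    (void)make_msg;                        // release: the lambda body is never evaluated
#endif
}

int main() {
    std::string world = "world";
    log([&] { return "hello, " + world + "!"; });  // concatenation is skipped under NDEBUG
}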
You could mix macros and lambdas to create this effect
you could have a type, lazy
template<class T>
class lazy {
...
}
and then you could have a LAZY wrapper that created one of these using a lambda
#define LAZY(E) my_lazy_type<decltype((E))>([&](){ return E; })
All my_lazy_type needs is a constructor that accepts a std::function, and an overload of operator() that evaluates it and returns the result. On each evaluation you can replace the thunk with one that just returns the already computed value, so it only gets computed once.
edit:
Here is an example of what I am talking about. I would like, however, to point out that this is not a perfect example: it passes around a bunch of stuff by value inside the lazy, which may completely defeat the purpose of doing this in the first place. It uses mutable because I need to be able to memoize the thunk in const cases. This could be improved in a lot of ways, but it's a decent proof of concept.
#include <iostream>
#include <functional>
#include <memory>
#include <string>
#define LAZY(E) lazy<decltype((E))>{[&](){ return E; }}
template<class T>
class lazy {
private:
struct wrapper {
std::function<T()> thunk;
wrapper(std::function<T()>&& x)
: thunk(std::move(x)) {}
wrapper(const std::function<T()>& x)
: thunk(x) {}
};
//anytime I see mutable, I feel a bit odd
//this seems to be warranted here however
mutable std::shared_ptr<wrapper> thunk_ptr;
public:
lazy(std::function<T()>&& x)
: thunk_ptr(std::make_shared<wrapper>(std::move(x))) {}
T operator()() const {
T val = thunk_ptr->thunk();
thunk_ptr->thunk = [val](){return val;};
return val;
}
};
void log(const lazy<std::string>& msg) {
std::cout << msg() << std::endl;
}
int main() {
std::string hello = "hello";
std::string world = "world";
log(LAZY(hello + ", " + world + "!"));
return 0;
}
While Elazar's answer works, I prefer not to use macros for this (especially not ones with all-lowercase names).
Here is what I would do instead:
#include <iostream>

template<bool /* = false */>
struct logger_impl {
template<typename T>
static std::ostream & write(std::ostream & stream, T const &) {
return stream;
}
};
template<>
struct logger_impl<true> {
template<typename T>
static std::ostream & write(std::ostream & stream, T const & obj) {
return stream << obj;
}
};
template<typename T>
void log(T const & obj) {
#if defined(NDEBUG)
logger_impl<false>::write(std::cout, obj); // release build: no-op
#else
logger_impl<true>::write(std::cout, obj); // debug build: actually writes
#endif
}
Just my 2 cents.
I would like to know which is better to use in my situation, and why. First of all, I have heard that using RTTI (typeid) is bad. Could anyone explain why? If I know the exact types, what is wrong with comparing them at runtime? Furthermore, is there any example of how to use boost::type_of? I have found none searching through the mighty Google :) The other solution for me is specialization, but I would need to specialize at least 9 variants of the method. Here is an example of what I need:
I have this class
template<typename A, typename B, typename C>
class CFoo
{
void foo()
{
// Some chunk of code depends on old A type
}
};
So I would rather check with typeid (which I heard is BAD) and write these 3 variants, for example like this:
void foo()
{
if (typeid(A) == typeid(CSomeClass))
    // Do this chunk of code related to A type
else if (typeid(B) == typeid(CSomeClass))
    // Do this chunk of code related to B type
else if (typeid(C) == typeid(CSomeClass))
    // Do this chunk of code related to C type
}
So what is the best solution? I don't want to specialize for all of A, B, C, because each type would need 3 specializations, so I would end up with 9 methods, versus just this typeid check.
It's bad because:
A, B and C are known at compile time, but you're using a runtime mechanism. If you invoke typeid, the compiler will make sure to include metadata in the object files.
If you replace "Do this chunk of code related to A type" with actual code that makes use of CSomeClass's interface, you'll see that you won't be able to compile the code when A != CSomeClass and A has an incompatible interface. The compiler still tries to translate the code even though it is never run (see the example below).
What you normally do is factoring out the code into separate function templates or static member functions of classes that can be specialized.
Bad:
template<typename T>
void foo(T x) {
if (typeid(T)==typeid(int*)) {
*x = 23; // instantiation error: an int can't be dereferenced
} else {
cout << "haha\n";
}
}
int main() {
foo(42); // T=int --> instantiation error
}
Better:
template<typename T>
void foo(T x) {
cout << "haha\n";
}
void foo(int* x) {
*x = 23;
}
int main() {
foo(42); // fine, invokes foo<int>(int)
}
Cheers, s
Well, generally solutions can be found without RTTI. Needing it "can" show you haven't thought the design of the software out properly, and THAT is bad. Sometimes RTTI can be a good thing, though.
Nonetheless, there IS something odd in what you want to do. Could you not create an intermediate template designed something like the following:
template< class T > class TypeWrapper
{
T t;
public:
void DoSomething()
{
}
};
then specialise it for the classes you want, as follows:
template<> class TypeWrapper< CSomeClass >
{
CSomeClass c;
public:
void DoSomething()
{
c.DoThatThing();
}
};
Then in your class defined above you would do something such as:
template<typename A, typename B, typename C>
class CFoo
{
TypeWrapper< A > a;
TypeWrapper< B > b;
TypeWrapper< C > c;
void foo()
{
a.DoSomething();
b.DoSomething();
c.DoSomething();
}
};
This way it only actually does something in the "DoSomething" call if it is going through the specialised template.
The problem lies in the code chunks you write for every specialization.
It doesn't matter, in terms of length, whether you write
void foo()
{
if (typeid(A) == typeid(CSomeClass))
    // Do this chunk of code related to A type
else if (typeid(B) == typeid(CSomeClass))
    // Do this chunk of code related to B type
else if (typeid(C) == typeid(CSomeClass))
    // Do this chunk of code related to C type
}
or
void foo()
{
A x;
foo_( x );
B y;
foo_( y );
C z;
foo_( z );
}
void foo_( CSomeClass1& ) {}
void foo_( CSomeClass2& ) {}
void foo_( CSomeClass3& ) {}
The upside of the second case is that when you add a class D, the compiler reminds you that an overload of foo_ is missing which you have to write. This can be forgotten in the first variant.
I'm afraid this is not going to work in the first place. Those "chunks of code" have to be compilable even if the type is not CSomeClass.
I don't think type_of is going to help either (if it is the same as auto and decltype in C++0x).
I think you could extract those three chunks into separate functions and overload each for CSomeClass. (Edit: oh there are else if's. Then you might indeed need lots of overloads/specialization. What is this code for?)
Edit2: It appears that your code is hoping to do the equivalent of the following, where int is the special type:
#include <iostream>
template <class T>
bool one() {return false; }
template <>
bool one<int>() { std::cout << "one\n"; return true; }
template <class T>
bool two() {return false; }
template <>
bool two<int>() { std::cout << "two\n"; return true; }
template <class T>
bool three() {return false; }
template <>
bool three<int>() { std::cout << "three\n"; return true; }
template <class A, class B, class C>
struct X
{
void foo()
{
one<A>() || two<B>() || three<C>();
}
};
int main()
{
X<int, double, int>().foo(); //one
X<double, int, int>().foo(); //two
X<double, double, double>().foo(); //...
X<double, double, int>().foo(); //three
}
I think you've got your abstractions wrong somewhere.
I would try redefining A, B & C in terms of interfaces they need to expose (abstract base classes in C++ with pure virtual methods).
Templates basically give you duck typing, but it sounds like CFoo knows too much about the A, B & C classes.
typeid is bad because:
typeid can be expensive, bloats binaries, and carries around extra information that shouldn't be required.
Not all compilers support it.
It's basically breaking the class hierarchy.
What I would recommend is refactoring: remove the templating, and instead define interfaces for A, B & C and make CFoo take those interfaces. That will force you to refactor the behaviour so that A, B & C are actually cohesive types.
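A rough, hedged sketch of that refactoring (all names are invented for illustration; the real interface would expose whatever those chunks of code actually need):
#include <iostream>

// The behaviour CFoo actually relies on, expressed as an abstract base class.
struct IChunk {
    virtual void do_chunk() = 0;
    virtual ~IChunk() {}
};

struct CSomeClass : IChunk {
    void do_chunk() { std::cout << "special chunk\n"; }
};

struct COtherClass : IChunk {
    void do_chunk() { std::cout << "generic chunk\n"; }
};

// CFoo no longer needs templates or typeid; it just talks to the interface.
class CFoo {
public:
    CFoo(IChunk& a, IChunk& b, IChunk& c) : a_(a), b_(b), c_(c) {}
    void foo() { a_.do_chunk(); b_.do_chunk(); c_.do_chunk(); }
private:
    IChunk& a_;
    IChunk& b_;
    IChunk& c_;
};

int main() {
    CSomeClass special;
    COtherClass other;
    CFoo foo(special, other, other);
    foo.foo();
}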