Recently, I have been working on something like reflection in C++ for my plugin system. Right now, I wonder whether I can convert a super-class pointer into a sub-class pointer given the string name of the sub-class:
#include <string>
using std::string;

class SuperClass
{
public:
    SuperClass(const string &name) : class_name(name) {}
    // a convert function like
    // the return value should vary, e.g. SubClassA * or SubClassB *
    // SubClassA * ConvertByName();
private:
    string class_name;
};
class SubClassA : public SuperClass
{
public:
    SubClassA() : SuperClass("SubClassA") {}
};
class SubClassB : public SuperClass
{
public:
    SubClassB() : SuperClass("SubClassB") {}
};
when using:
// some place create instance
SuperClass *one = new SubClassA;
SuperClass *two = new SubClassB;
// other place using
auto a = one->ConvertByName(); // a should be of type SubClassA *
auto b = two->ConvertByName(); // b should be of type SubClassB *
Can this be realized? Or is there a better way to do it in C++?
[Update 1]
There may be other sub-classes, such as SubClassC, SubClassD, ...
So basically, we don't know which sub-classes are derived from this SuperClass, or how many. All we know about a sub-class is its class name as a string.
[Update 2]
My motivation
I need this for a plugin system. I want to be able to create a plugin at any time without hacking into the plugin core system code; that is, plugin code is isolated from the core project. The plugin system will never know which plugins, or how many, are added to the system until runtime.
It is possible, but this way you are essentially reimplementing dynamic dispatch by hand, and it makes your class hierarchy effectively sealed.
#include <functional>   // std::invoke
#include <iostream>
#include <memory>
#include <stdexcept>
#include <string>
#include <type_traits>

struct Base {
    Base(std::string type_id): type_id(std::move(type_id)) {}
    template<class F> auto visitThis(F &&f) const;
    template<class F> auto visitThis(F &&f);
private:
    std::string type_id;
};
struct Child1: Base { Child1(): Base("Child1") {}};
struct Child2: Base { Child2(): Base("Child2") {}};
template<class F> auto Base::visitThis(F &&f) const {
if(type_id == "Child1") {
return std::invoke(std::forward<F>(f),
static_cast<Child1 const *>(this));
}
else if(type_id == "Child2") {
return std::invoke(std::forward<F>(f),
static_cast<Child2 const *>(this));
}
else throw std::runtime_error("Unsupported subclass");
}
template<class F> auto Base::visitThis(F &&f) {
if(type_id == "Child1") {
return std::invoke(std::forward<F>(f), static_cast<Child1 *>(this));
}
else if(type_id == "Child2") {
return std::invoke(std::forward<F>(f), static_cast<Child2 *>(this));
}
else throw std::runtime_error("Unsupported subclass");
}
int main() {
    std::unique_ptr<Base> b1 = std::make_unique<Child1>();
    // The callable must accept every child pointer type, hence a generic lambda.
    b1->visitThis([](auto *ch) {
        if constexpr (std::is_same_v<std::decay_t<decltype(*ch)>, Child1>)
            std::cout << "Hi, Ch1!\n";
    });
}
If your classes all have at least one virtual member function, consider using dynamic_cast.
See this C++ reference for details, and read a good C++ programming book.
Read also the documentation of your C++ compiler (e.g. GCC)
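For instance, a minimal sketch of the dynamic_cast approach, assuming the base class has at least one virtual member (here a virtual destructor) so that the hierarchy is polymorphic:
#include <iostream>

class SuperClass {
public:
    virtual ~SuperClass() {}               // any virtual member makes the type polymorphic
};
class SubClassA : public SuperClass {};
class SubClassB : public SuperClass {};

int main() {
    SuperClass *one = new SubClassA;
    if (SubClassA *a = dynamic_cast<SubClassA *>(one))
        std::cout << "one points to a SubClassA\n";   // taken
    if (SubClassB *b = dynamic_cast<SubClassB *>(one))
        std::cout << "one points to a SubClassB at " << b << "\n";   // not taken
    delete one;
}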
Right now, I wonder if I can convert a super-class pointer into sub-class pointer given the string name of sub-class
This is not possible without specific coding and programming conventions
(since class names do not exist at runtime by default). Look inside Qt or RefPerSys for examples.
A possible approach could be to write your own C++ code generator to help you (i.e. generate parts of your C++ code - probably some header file containing your class declarations - like Qt does with its moc, and configure your build automation tool, e.g. your Makefile, accordingly). Look perhaps inside ANTLR, SWIG, GPP, etc.
A more ambitious approach, if you use GCC, would be to write your own GCC plugin. Consider also extending Clang. This is worthwhile only for large existing code bases.
A plugin system will never know what and how many plugins are added into system until runtime
It seems that you are designing some plugin machinery. Take inspiration from Qt plugins or FLTK plugins. If on Linux, see manydl.c and consider generating some of the C++ code of your plugins (see e.g. this draft report, and the CHARIOT and DECODER European projects).
BTW, do you want to unload plugins (on Linux, call dlclose(3), and read also the C++ dlopen mini-HOWTO)? Do you have a multi-threaded application? If you do, you had better use some locking (e.g. std::mutex) to avoid parallel plugin loading.
You could also consider generating some glue code at runtime: e.g. using libgccjit or asmjit, or simply generating some temporary C++ code (e.g. on Linux in /tmp/generated.cc, which you would compile - maybe with popen(3) - using g++ -Wall -O -fPIC -shared /tmp/generated.cc -o /tmp/generated-plugin.so) and later dlopen(3) that /tmp/generated-plugin.so. Read Drepper's paper on how to write shared libraries (for Linux).
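To make the dlopen(3) part concrete, here is a minimal sketch for Linux; the extern "C" factory name create_plugin is just an invented convention, and SuperClass stands for the common plugin base class from the question:
#include <dlfcn.h>
#include <iostream>

class SuperClass;                              // the common base class of all plugins
typedef SuperClass *(*CreateFn)();

SuperClass *loadPlugin(const char *path)
{
    void *handle = dlopen(path, RTLD_NOW);     // load the shared object
    if (!handle) { std::cerr << dlerror() << "\n"; return nullptr; }
    void *sym = dlsym(handle, "create_plugin");
    if (!sym) { std::cerr << dlerror() << "\n"; return nullptr; }
    CreateFn create = reinterpret_cast<CreateFn>(sym);
    return create();                           // the plugin constructs its own subclass
}
// Each plugin is built with: g++ -Wall -O -fPIC -shared plugin.cc -o plugin.so
// and the host program is linked with -ldl.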
C++ does not ship enough information in binaries to write new code against them.
Dynamically linked C++ code does not carry enough information for other dynamically linked code to build a copy of a class at link time.
So there is no way, short of shipping a C++ compiler, to do exactly what you are asking. I have heard of some people who go that far and embed C++ compilers into their hand-grown "dynamic linking" environments, but usually by that point you are better off using a language where that is a built-in feature, or not using the raw C++ object model and using something reflection-enabled instead.
It is quite likely that the underlying problem you are trying to solve with this technique, if there is one, can be solved in C++ another way.
The code below came from a post about C++ interview questions. I had never seen this technique before :) (though it is claimed to be a good one :)). My questions are: in which situations do we need to use it? Do you often see it in real production/legacy code?
Question:
Implement a method to get topSecretValue for any given Something* object. The method should be cross-platform compatible and not depend on sizeof (int, bool, string).
class Something {
public:
    Something() {
        topSecretValue = 42;
    }
    bool somePublicBool;
    int somePublicInt;
    std::string somePublicString;
private:
    int topSecretValue;
};
Answer:
Create another class which has all the members of Something in the same order, but has additional public method which returns the value. Your replica Something class should look like:
class SomethingReplica {
public:
int getTopSecretValue() { return topSecretValue; } // <-- new member function
bool somePublicBool;
int somePublicInt;
std::string somePublicString;
private:
int topSecretValue;
};
int main(int argc, const char * argv[]) {
Something a;
SomethingReplica* b = reinterpret_cast<SomethingReplica*>(&a);
std::cout << b->getTopSecretValue();
}
It’s important to avoid code like this in a final product, but it’s nevertheless a good technique when dealing with legacy code, as it can be used to extract intermediate calculation values from a library class. (Note: If it turns out that the alignment of the external library is mismatched to your code, you can resolve this using #pragma pack.)
You can do this without reinterpret_cast. There is a trick using templates and friends that is outlined in the following blog post that demonstrates the technique:
Access to private members. That's easy!
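For reference, here is a condensed sketch of that kind of template/friend trick, applied to the Something class from the question; the tag type SomethingSecret and the helper Rob are names invented here:
#include <iostream>
#include <string>

class Something {
public:
    Something() { topSecretValue = 42; }
    bool somePublicBool;
    int somePublicInt;
    std::string somePublicString;
private:
    int topSecretValue;
};

// Tag describing the member we want; the friend declaration reserves a slot
// for the accessor that the explicit instantiation below will define.
struct SomethingSecret {
    typedef int Something::*type;
    friend type get(SomethingSecret);
};

template <typename Tag, typename Tag::type M>
struct Rob {
    friend typename Tag::type get(Tag) { return M; }
};

// An explicit instantiation may name a private member: access checks are not
// applied to the template arguments of explicit instantiations.
template struct Rob<SomethingSecret, &Something::topSecretValue>;

int main() {
    Something a;
    std::cout << a.*get(SomethingSecret()) << "\n";   // prints 42
}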
This is certainly safer than the interviewer's approach, since it eliminates human error in re-creating the class definition. Is this approach good at all, though? The given question has some incredibly artificial constraints that would rarely apply to a 'real' project. If it's a C++ project and you have access to the header file, why not just add a getter? If it's not a C++ project, why are you so constrained in your definition of the interop class?
I want to write a function that calls several sub-functions and returns the results of those sub-functions.
sub functions:
template<class A> A sub1(A a)
template<class B> B sub2(B b, int i)
template<class C> C sub3(C c, string p)
The function will call these accordingly in a switch statement.
Sorry, I only have pseudo-code, since I am confused by the issue and have not started writing the real code yet.
mf(string s)
{
    int k;
    k = process(s);
    switch (k) {
    case 0:
        return sub1(k);
    case 1:
        return sub2(s, k);
    case 2:
        return sub3(k, s);
    default:
        break;
    }
}
How can I define mf above, given that there is no single return type for it? By using a template again?
By the way, my C++ compiler does support the C++11 standard, which I am not very familiar with.
C++ is basically a statically typed language, which means the types of all expressions are decided at compile time rather than at run time.
Using dynamic typing in a statically typed language is possible, but not recommended for wide use, because in doing so you give up almost all the polymorphism features provided by the language: you have to check types manually, or implement your own dynamic-type-based polymorphism.
If the data returned is not too complex, a tagged structure is usually a good idea:
struct Value
{
enum {INT, FLOAT, PTR} type;
union
{
int int_data;
float float_data;
void *ptr_data;
};
};
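For example, a small usage sketch, assuming the questioner's mf is rewritten to return such a Value (the payloads below are made up):
#include <iostream>

Value mf(int k)
{
    Value v;
    if (k == 0) { v.type = Value::INT;   v.int_data = 1;      }
    else        { v.type = Value::FLOAT; v.float_data = 2.5f; }
    return v;
}

int main()
{
    Value v = mf(1);
    switch (v.type) {                   // always check the tag before reading the union
    case Value::INT:   std::cout << v.int_data   << "\n"; break;
    case Value::FLOAT: std::cout << v.float_data << "\n"; break;
    case Value::PTR:   std::cout << v.ptr_data   << "\n"; break;
    }
}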
For more complex data types with a lot of operations needed to support, you should consider using abstract interfaces and inheritance.
If you have considered the problem seriously and believe that none of the methods above applies to your problem, and that dynamic typing really is the best way, here are some options:
boost::any -- A unique container for all types. Need to test for types and convert them manually before use.
boost::variant -- A union-like container which supports unary polymorphic operations via boost::static_visitor.
Some programming frameworks have their own support for dynamic-typing. One example is QVariant in Qt. If you are in such a framework, it's usually recommended to use them instead of something else from another library.
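For example, here is a minimal sketch using boost::variant with a static_visitor, assuming the sub-functions can be reduced to returning int, double or std::string:
#include <boost/variant.hpp>
#include <iostream>
#include <string>

typedef boost::variant<int, double, std::string> Result;

struct Print : boost::static_visitor<void> {
    void operator()(int i) const                { std::cout << "int: "    << i << "\n"; }
    void operator()(double d) const             { std::cout << "double: " << d << "\n"; }
    void operator()(const std::string &s) const { std::cout << "string: " << s << "\n"; }
};

Result mf(int k)
{
    if (k == 0) return 1;               // int
    if (k == 1) return 2.5;             // double
    return std::string("hello");        // std::string
}

int main()
{
    Result r = mf(1);
    boost::apply_visitor(Print(), r);   // prints "double: 2.5"
}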
If you need a function that returns the value of its sub-functions, you need the same return type for all of them.
Here is a small, meaningless example:
double calculatePositive(double value)
{
    // Do stuff
    return value;
}
double calculateNegative(double value)
{
    // Do stuff
    return -value;
}
double functionA(double value)
{
    if (value > 0)
        return calculatePositive(value);
    else
        return calculateNegative(value);
}
P.S. We could provide you with a better answer if you said what you are trying to achieve ;)
In a pure C++ world we can generate interfacing or glue code between different components or interfaces at compile time, using a combination of template-based compile-time and runtime-techniques (to e.g. mostly automatically marshall to/from calls using legacy types).
When having to interface C++ applications with Objective-C/Cocoa for GUI, system integration or IPC, though, things become harder due to the less strict typing - yet often nothing more than a flat, repetitive interface layer is needed: thin bridging delegates have to be defined, or conversion code for language-bridging calls has to be written.
If you have to deal with interfaces of non-trivial size and want to avoid script-based code generation, this quickly becomes cumbersome and is just a pain every time refactorings have to take place. Using a combination of (template) metaprogramming and the Objective-C runtime library, it should be possible to reduce the amount of code considerably...
Before I go and reinvent the wheel (and possibly waste time), does anyone know of techniques, best practices or examples in this direction?
As an example, let's say we need a delegate that supports this informal protocol:
- (NSString*)concatString:(NSString*)s1 withString:(NSString*)s2;
- (NSNumber*) indexOf:(CustomClass*)obj;
Instead of implementing an Obj-C class that explicitly bridges to a C++ instance, I'd like to do something like this instead:
class CppObj {
ObjcDelegate m_del;
public:
CppObj() : m_del(this)
{
m_del.addHandler
<NSString* (NSString*, NSString*)>
("concatString", &CppObj::concat);
m_del.addHandler
<NSNumber* (CustomClass*)>
("indexOf", &CppObj::indexOf);
}
std::string concat(const std::string& s1, const std::string& s2) {
return s1.append(s2);
}
size_t indexOf(const ConvertedCustomClass& obj) {
return 42;
}
};
All that should be needed from the user to support additional types would be to specialize a conversion template function:
template<class To, class From> To convert(const From&);
template<>
NSString* convert<NSString*, std::string>(const std::string& s) {
// ...
}
// ...
The example above of course ignores support for formal protocols etc., but it should get the point across. Also, because the type information for Objective-C runtime types mostly decays into either some native type or a generic class type, I don't think the explicit specification of parameter and return types for the delegate methods can be avoided.
I didn't find anything satisfactory and came up with a prototype that, given the following informal protocol:
- (NSString*)concatString:(NSString*)s1 withString:(NSString*)s2;
and this C++ code:
struct CppClass {
std::string concatStrings(const std::string& s1, const std::string& s2) const {
return s1+s2;
}
};
std::string concatStrings(const std::string& s1, const std::string& s2) {
return s1+s2;
}
allows creating and passing a delegate:
CppClass cpp;
og::ObjcClass objc("MyGlueClass");
objc.add_handler<NSString* (NSString*, NSString*)>
("concatString:withString:", &cpp, &CppClass::concatStrings);
// or using a free function:
objc.add_handler<NSString* (NSString*, NSString*)>
("concatString:withString:", &concatStrings);
[someInstance setDelegate:objc.get_instance()];
which can then be used:
NSString* result = [delegate concatString:@"abc" withString:@"def"];
assert([result compare:@"abcdef"] == NSOrderedSame);
Boost.Function objects can also be passed, which means Boost.Bind can easily be used as well.
While the basic idea works, this is still a prototype. I did a short blog post on the subject and the prototype source is available via bitbucket. Constructive input and ideas welcome.
Did you look at the wxWidgets library? I don't code in Objective-C, but at least the developers claim decent support for Cocoa/Objective-C, which means they have some mapping from C++ implemented somehow. The library's website is http://www.wxwidgets.org.
At my workplace, we tend to use iostream, string, vector, map, and the odd algorithm or two. We haven't actually found many situations where template techniques were a best solution to a problem.
What I am looking for here are ideas, and optionally sample code that shows how you used a template technique to create a new solution to a problem that you encountered in real life.
As a bribe, expect an up vote for your answer.
General info on templates:
Templates are useful any time you need the same code to operate on different data types, where the types are known at compile time, and also whenever you have any kind of container object.
A very common usage is for just about every type of data structure. For example: Singly linked lists, doubly linked lists, trees, tries, hashtables, ...
Another very common usage is for sorting algorithms.
One of the main advantages of using templates is that you can remove code duplication. Code duplication is one of the biggest things you should avoid when programming.
You could implement a function Max as either a macro or a template, but the template implementation would be type safe and therefore better.
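For instance, a quick sketch contrasting the two (myMax is a name invented here to avoid clashing with std::max):
#include <iostream>

// The macro version has no type checking and evaluates its arguments twice.
#define MAX_MACRO(a, b) ((a) > (b) ? (a) : (b))

// The template version is type safe and evaluates each argument exactly once.
template <typename T>
const T &myMax(const T &a, const T &b)
{
    return (a > b) ? a : b;
}

int main()
{
    int i = 3;
    std::cout << MAX_MACRO(i++, 2) << "\n";   // increments i twice: a subtle bug
    std::cout << myMax(i, 7) << "\n";         // works for int, double, std::string, ...
}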
And now onto the cool stuff:
Also see template metaprogramming, which is a way of pre-evaluating code at compile time rather than at run time. Template metaprogramming works only with immutable values, so its "variables" cannot change; because of this it can be seen as a kind of functional programming.
Check out this example of template metaprogramming from Wikipedia. It shows how templates can be used to execute code at compile time. Therefore at runtime you have a pre-calculated constant.
template <int N>
struct Factorial
{
enum { value = N * Factorial<N - 1>::value };
};
template <>
struct Factorial<0>
{
enum { value = 1 };
};
// Factorial<4>::value == 24
// Factorial<0>::value == 1
void foo()
{
int x = Factorial<4>::value; // == 24
int y = Factorial<0>::value; // == 1
}
I've used a lot of template code, mostly in Boost and the STL, but I've seldom had a need to write any.
One of the exceptions, a few years ago, was in a program that manipulated Windows PE-format EXE files. The company wanted to add 64-bit support, but the ExeFile class that I'd written to handle the files only worked with 32-bit ones. The code required to manipulate the 64-bit version was essentially identical, but it needed to use a different address type (64-bit instead of 32-bit), which caused two other data structures to be different as well.
Based on the STL's use of a single template to support both std::string and std::wstring, I decided to try making ExeFile a template, with the differing data structures and the address type as parameters. There were two places where I still had to use #ifdef WIN64 lines (slightly different processing requirements), but it wasn't really difficult to do. We've got full 32- and 64-bit support in that program now, and using the template means that every modification we've done since automatically applies to both versions.
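Sketched with invented names (the real PE structures have many more fields), the shape of that change looks roughly like this:
#include <cstdint>

// The address type and the header structure become template parameters, so the
// 32-bit and 64-bit variants share a single implementation.
template <typename AddrT, typename OptionalHeaderT>
class ExeFileT {
public:
    AddrT imageBase() const { return optional_header_.image_base; }
    // ... parsing and patching code shared by both variants ...
private:
    OptionalHeaderT optional_header_;
};

// Hypothetical header structs differing only in their address-sized fields.
struct OptionalHeader32 { std::uint32_t image_base; };
struct OptionalHeader64 { std::uint64_t image_base; };

typedef ExeFileT<std::uint32_t, OptionalHeader32> ExeFile32;
typedef ExeFileT<std::uint64_t, OptionalHeader64> ExeFile64;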
One place where I do use templates to create my own code is to implement policy classes as described by Andrei Alexandrescu in Modern C++ Design. At present I'm working on a project that includes a set of classes that interact with BEA's (now Oracle's) Tuxedo TP monitor.
One facility that Tuxedo provides is transactional persistent queues, so I have a class TpQueue that interacts with the queue:
class TpQueue {
public:
    void enqueue(...);
    void dequeue(...);
    // ...
};
However, as the queue is transactional, I need to decide what transaction behaviour I want; this could be done separately outside of the TpQueue class, but I think it's more explicit and less error prone if each TpQueue instance has its own policy on transactions. So I have a set of TransactionPolicy classes such as:
class OwnTransaction {
public:
    void begin(...);  // Suspend any open transaction and start a new one
    void commit(...); // Commit my transaction and resume any suspended one
    void abort(...);
};
class SharedTransaction {
public:
    void begin(...);  // Join the currently active transaction or start a new one if there isn't one
    // ...
};
And the TpQueue class gets re-written as
template <typename TXNPOLICY = SharedTransaction>
class TpQueue : public TXNPOLICY {
    // ...
};
So inside TpQueue I can call begin(), abort(), commit() as needed but can change the behaviour based on the way I declare the instance:
TpQueue<SharedTransaction> queue1 ;
TpQueue<OwnTransaction> queue2 ;
I used templates (with the help of Boost.Fusion) to achieve type-safe integers for a hypergraph library that I was developing. I have a (hyper)edge ID and a vertex ID both of which are integers. With templates, vertex and hyperedge IDs became different types and using one when the other was expected generated a compile-time error. Saved me a lot of headache that I'd otherwise have with run-time debugging.
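A minimal sketch of the idea, without Boost.Fusion and with names invented here: wrapping the integer in a class template parameterized on a tag type makes vertex and edge IDs distinct, non-convertible types.
template <typename Tag>
class TypedId {
public:
    explicit TypedId(int value) : value_(value) {}
    int value() const { return value_; }
private:
    int value_;
};

struct VertexTag {};
struct EdgeTag {};
typedef TypedId<VertexTag> VertexId;
typedef TypedId<EdgeTag>   EdgeId;

void addToEdge(EdgeId e, VertexId v);      // hypothetical hypergraph operation

// addToEdge(VertexId(1), EdgeId(2));      // compile-time error: arguments swapped
// addToEdge(EdgeId(2), VertexId(1));      // OK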
Here's one example from a real project. I have getter functions like this:
bool getValue(wxString key, wxString& value);
bool getValue(wxString key, int& value);
bool getValue(wxString key, double& value);
bool getValue(wxString key, bool& value);
bool getValue(wxString key, StorageGranularity& value);
bool getValue(wxString key, std::vector<wxString>& value);
And then I wanted a variant with a 'default' value: it returns the value for key if it exists, or the default value if it doesn't. The template saved me from having to write six more functions myself.
template <typename T>
T get(wxString key, const T& defaultValue)
{
T temp;
if (getValue(key, temp))
return temp;
else
return defaultValue;
}
Templates I regularly consume include a multitude of container classes, Boost smart pointers, scope guards, and a few STL algorithms.
Scenarios in which I have written templates:
custom containers
memory management, implementing type safety and CTor/DTor invocation on top of void * allocators
common implementations for overloads with different types, e.g.
bool ContainsNan(float * , int)
bool ContainsNan(double *, int)
which both just call a (local, hidden) helper function
template <typename T>
bool ContainsNanT(T * values, int len) { /* ... actual code goes here ... */ }
Specific algorithms that are independent of the type, as long as the type has certain properties, e.g. binary serialization.
template <typename T>
void BinStream::Serialize(T & value) { /* ... */ }

// to make a type serializable, you need to implement
void SerializeElement(BinStream & stream, Foo & element);
void DeserializeElement(BinStream & stream, Foo & element);
Unlike virtual functions, templates allow more optimizations to take place.
Generally, templates allow you to implement one concept or algorithm for a multitude of types, and have the differences resolved at compile time.
We use COM and accept a pointer to an object that can implement another interface either directly or via [IServiceProvider](http://msdn.microsoft.com/en-us/library/cc678965(VS.85).aspx). This prompted me to create this helper, cast-like function.
// Get interface either via QueryInterface of via QueryService
template <class IFace>
CComPtr<IFace> GetIFace(IUnknown* unk)
{
CComQIPtr<IFace> ret = unk; // Try QueryInterface
if (ret == NULL) { // Fallback to QueryService
if(CComQIPtr<IServiceProvider> ser = unk)
ser->QueryService(__uuidof(IFace), __uuidof(IFace), (void**)&ret);
}
return ret;
}
I use templates to specify function object types. I often write code that takes a function object as an argument -- a function to integrate, a function to optimize, etc. -- and I find templates more convenient than inheritance. So my code receiving a function object -- such as an integrator or optimizer -- has a template parameter to specify the kind of function object it operates on.
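For illustration, here is a small sketch of what that looks like, using an invented trapezoid-rule integrator: the function object's type is a template parameter instead of an abstract base class.
#include <iostream>

// F can be a function pointer, a lambda, a std::function, or a hand-written functor.
template <typename F>
double integrate(F f, double a, double b, int n)
{
    double h = (b - a) / n;
    double sum = 0.5 * (f(a) + f(b));
    for (int i = 1; i < n; ++i)
        sum += f(a + i * h);
    return sum * h;
}

double square(double x) { return x * x; }

int main()
{
    std::cout << integrate(square, 0.0, 1.0, 1000) << "\n";                         // ~0.333
    std::cout << integrate([](double x) { return 2 * x; }, 0.0, 1.0, 1000) << "\n"; // ~1.0
}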
The obvious reasons (like preventing code-duplication by operating on different data types) aside, there is this really cool pattern that's called policy based design. I have asked a question about policies vs strategies.
Now, what's so nifty about this feature. Consider you are writing an interface for others to use. You know that your interface will be used, because it is a module in its own domain. But you don't know yet how people are going to use it. Policy-based design strengthens your code for future reuse; it makes you independent of data types a particular implementation relies on. The code is just "slurped in". :-)
Traits are in themselves a wonderful idea. They can attach particular behaviour, data and type information to a model, and they allow complete parameterization of all three. Best of all, they are a very good way to make code reusable.
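As a small, classic illustration of the traits idea (names invented here), a traits class can tell an accumulation algorithm which type to accumulate into and what the neutral starting value is:
#include <iostream>

template <typename T>
struct AccumulationTraits;                 // primary template, specialized per element type

template <>
struct AccumulationTraits<char> {
    typedef int AccT;                      // accumulate chars into an int
    static AccT zero() { return 0; }
};

template <>
struct AccumulationTraits<float> {
    typedef double AccT;                   // accumulate floats into a double
    static AccT zero() { return 0.0; }
};

template <typename T>
typename AccumulationTraits<T>::AccT sum(const T *begin, const T *end)
{
    typename AccumulationTraits<T>::AccT total = AccumulationTraits<T>::zero();
    for (; begin != end; ++begin)
        total += *begin;
    return total;
}

int main()
{
    char letters[] = { 'a', 'b', 'c' };
    std::cout << sum(letters, letters + 3) << "\n";   // 294, not a truncated char
}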
I once saw the following code:
void doSomethingGeneric1(SomeClass * c, SomeClass & d)
{
// three lines of code
callFunctionGeneric1(c) ;
// three lines of code
}
repeated ten times:
void doSomethingGeneric2(SomeClass * c, SomeClass & d)
void doSomethingGeneric3(SomeClass * c, SomeClass & d)
void doSomethingGeneric4(SomeClass * c, SomeClass & d)
// Etc
Each function had the same six lines of code copied and pasted, and each called another function callFunctionGenericX with the matching number suffix.
There was no way to refactor the whole thing altogether, so I kept the refactoring local.
I changed the code this way (from memory):
template<typename T>
void doSomethingGenericAnything(SomeClass * c, SomeClass & d, T t)
{
// three lines of code
t(c) ;
// three lines of code
}
And modified the existing code with:
void doSomethingGeneric1(SomeClass * c, SomeClass & d)
{
doSomethingGenericAnything(c, d, callFunctionGeneric1) ;
}
void doSomethingGeneric2(SomeClass * c, SomeClass & d)
{
doSomethingGenericAnything(c, d, callFunctionGeneric2) ;
}
Etc.
This is somewhat hijacking the template mechanism, but in the end I guess it's better than playing with typedef'd function pointers or using macros.
I personally have used the Curiously Recurring Template Pattern as a means of enforcing some form of top-down design and bottom-up implementation. An example would be a specification for a generic handler where certain requirements on both form and interface are enforced on derived types at compile time. It looks something like this:
template <class Derived>
struct handler_base {
void pre_call() {
// do any universal pre_call handling here
static_cast<Derived *>(this)->pre_call();
};
void post_call(typename Derived::result_type & result) {
static_cast<Derived *>(this)->post_call(result);
// do any universal post_call handling here
};
typename Derived::result_type
operator() (typename Derived::arg_pack const & args) {
pre_call();
typename Derived::result_type temp = static_cast<Derived *>(this)->eval(args);
post_call(temp);
return temp;
};
};
Something like this can then be used to make sure your handlers derive from this template, enforcing top-down design and allowing bottom-up customization:
struct my_handler : handler_base<my_handler> {
typedef int result_type; // required to compile
typedef tuple<int, int> arg_pack; // required to compile
void pre_call(); // required to compile
void post_call(int &); // required to compile
int eval(arg_pack const &); // required to compile
};
This then allows you to have generic polymorphic functions that deal with only handler_base<> derived types:
template <class T, class Arg0, class Arg1>
typename T::result_type
invoke(handler_base<T> & handler, Arg0 const & arg0, Arg1 const & arg1) {
return handler(make_tuple(arg0, arg1));
};
It's already been mentioned that you can use templates as policy classes to do something. I use this a lot.
I also use them, with the help of property maps (see the Boost site for more information on these), in order to access data in a generic way. This gives you the opportunity to change the way you store data without ever having to change the way you retrieve it.
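A simplified, Boost-free sketch of the idea (names invented here): the algorithm reads values through a "property map" object, so the underlying storage can change without the retrieval code changing.
#include <iostream>
#include <map>
#include <vector>

// Two property maps with the same interface but different storage.
struct VectorWeightMap {
    const std::vector<double> *weights;
    double get(int key) const { return (*weights)[key]; }
};
struct MapWeightMap {
    const std::map<int, double> *weights;
    double get(int key) const { return weights->find(key)->second; }
};

// The algorithm only knows how to read through a map, not how the data is stored.
template <typename WeightMap>
double totalWeight(const WeightMap &wm, const std::vector<int> &keys)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < keys.size(); ++i)
        sum += wm.get(keys[i]);
    return sum;
}

int main()
{
    std::vector<double> v(3, 1.5);
    VectorWeightMap wm = { &v };
    std::vector<int> keys;
    keys.push_back(0);
    keys.push_back(2);
    std::cout << totalWeight(wm, keys) << "\n";   // 3
}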