I would like a generic way to create unique compile-time identifiers for any C++ user defined types.
for example:
unique_id<my_type>::value == 0 // true
unique_id<other_type>::value == 1 // true
I've managed to implement something like this using preprocessor metaprogramming; the problem is that the serialization is not consistent. For instance, if the class template unique_id is instantiated with other_type first, then any serialization from previous revisions of my program will be invalidated.
I've searched for solutions to this problem and found several ways to implement it with non-consistent serialization when the unique values are compile-time constants. If RTTI or similar mechanisms, like boost::sp_typeinfo, are used, then the unique values are obviously not compile-time constants, and extra overhead is present. An ad-hoc solution would be instantiating all of the unique_ids in a separate header in the correct order, but this causes additional maintenance and boilerplate code, which is no different from using an enum unique_id{my_type, other_type};.
A good solution to this problem would be user-defined literals; unfortunately, as far as I know, no compiler supports them at this moment. The syntax would be "my_type"_id; "other_type"_id; with UDLs.
I'm hoping somebody knows a trick that allows implementing serializable unique identifiers in C++ with the current standard (C++03/C++0x). I would be happy if it works with the latest stable MSVC and GNU G++ compilers, although I expect that if there is a solution, it's not portable.
I would like to make clear that using mpl::set or similar constructs like mpl::vector and filtering does not solve this problem, because the scope of the meta-set/vector is limited, and it actually causes more problems than plain preprocessor metaprogramming does.
A while back I added a build step to one project of mine, which allowed me to write #script_name(args) in a C++ source file and have it automatically replaced with the output of the associated script, for instance ./script_name.pl args or ./script_name.py args.
You may balk at the idea of polluting the language into nonstandard C++, but all you'd have to do is write #sha1(my_type) to get the unique integer hash of the class name, regardless of build order and without the need for explicit instantiation.
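For illustration, here is roughly what that could look like (a sketch: the #sha1 marker is the nonstandard build-step syntax, and the numeric constant stands in for whatever the script prints):

// before the build step (nonstandard #sha1 marker; unique_id is the
// primary template from the question):
template <> struct unique_id<my_type>
{ static const unsigned value = #sha1(my_type); };

// after the build step, the marker is replaced by the script's output:
template <> struct unique_id<my_type>
{ static const unsigned value = 0x5f3c2a18; };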
This is just one of many possible nonstandard solutions, and I think a fairly clean one at that. There's currently no great way to impose an arbitrary, consistent ordering on your classes without just specifying it explicitly, so I recommend you simply give in and go the explicit instantiation route; there's nothing really wrong with centralising the information, but as you said it's not all that different from an enumeration, which is what I'd actually use in this situation.
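If you do take the explicit route, a minimal sketch of the central header might look like this (type names invented; the crucial rule is that new entries are only ever appended):

#include <cstddef>

// ids.hpp -- the single place where identifiers are assigned, in a fixed order
struct my_type {};
struct other_type {};

template <typename T> struct unique_id; // primary template, intentionally undefined

template <> struct unique_id<my_type>    { static const std::size_t value = 0; };
template <> struct unique_id<other_type> { static const std::size_t value = 1; };
// new types are only ever appended, so previously serialized data stays valid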
Persistence of data is a very interesting problem.
My first question would be: do you really want serialization? If you are willing to investigate an alternative, then jump to the next section.
If you're still there, I think you have not given the typeid solution all its due.
#include <cstddef>
#include <typeinfo>

// some_hash: any stable hash of the type's identity (e.g. of
// typeid(T).name()); declared here, defined elsewhere
std::size_t some_hash(std::type_info const&);

// static detection
template <typename T>
std::size_t unique_id()
{
    static std::size_t const id = some_hash(typeid(T)); // or boost::sp_typeinfo
    return id;
}

// dynamic detection
template <typename T>
std::size_t unique_id(T const& t)
{
    return some_hash(typeid(t)); // no memoization possible
}
Note: I am using a local static to avoid the order-of-initialization issue, in case this value is required before main is entered.
It's pretty similar to your unique_id<some_type>::value, and even though it's computed at runtime, it's only computed once, and the result (for the static detection) is then memoized for future calls.
Also note that it's fully generic: no need to explicitly write the function for each type.
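A quick usage sketch (the two types are invented here, and some_hash must be defined somewhere for this to link):

#include <cassert>

struct my_type {};
struct other_type {};

int main()
{
    assert(unique_id<my_type>() != unique_id<other_type>());
    my_type m;
    assert(unique_id(m) == unique_id<my_type>()); // static and dynamic agree
}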
It may seem silly, but the issue of serialization is that you have a one-to-one mapping between the type and its representation:
- you need to version the representation, so as to be able to decode "older" versions
- dealing with forward compatibility is pretty hard
- dealing with cyclic references is pretty hard (some frameworks handle it)
- and then there is the issue of moving information from one version to another: deserializing older versions becomes messy and frustrating
For persistent saves, I usually recommend using a dedicated BOM. Think of the saved data as a message to your future self. And I usually go the extra mile and propose the awesome Google Protocol Buffers library:
- backward and forward compatibility baked in
- several output formats: human-readable (for debugging) or binary
- several languages can read/write the same messages (C++, Java, Python)
A minimal sketch of the C++ side follows the list.
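Here the SaveGame message and file names are invented; the class would be generated by protoc from the .proto definition shown in the comment:

// Hypothetical message defined in save.proto:
//   message SaveGame {
//     optional int32  version = 1;
//     optional string player  = 2;
//   }
#include "save.pb.h" // generated by protoc
#include <fstream>

void write_save(SaveGame const& save)
{
    std::ofstream out("game.sav", std::ios::binary);
    save.SerializeToOstream(&out); // compact binary encoding
}

bool read_save(SaveGame& save)
{
    std::ifstream in("game.sav", std::ios::binary);
    return save.ParseFromIstream(&in); // unknown fields are skipped, not fatal
}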
Pretty sure you will have to implement your own extension to make this happen; I've not seen nor heard of any such construct for compile time. MSVC offers __COUNTER__ for the preprocessor, but I know of no template equivalent.
In C++11 and later, the <type_traits> header contains many classes for type checking, such as std::is_empty, std::is_polymorphic, std::is_trivially_constructible and many others.
While we use these classes just like normal classes, I cannot figure out any way one could possibly write their definitions. No amount of SFINAE (even with C++14/17 rules) or other techniques seems to be able to tell whether a class is polymorphic, empty, or satisfies other such properties. A class that is empty still occupies a positive amount of space, as every object must have a unique address.
How, then, might compilers define such classes in C++? Or is it perhaps necessary for the compiler to be intrinsically aware of these class names and parse them specially?
Back in the olden days, when people were first fooling around with type traits, they wrote some really nasty template code in attempts to detect certain properties portably. My take on this was that you had to put a drip-pan under your computer to catch the molten metal as the compiler overheated trying to compile this stuff. Steve Adamczyk, of Edison Design Group (provider of industrial-strength compiler frontends), had a more constructive take on the problem: instead of writing all this template code that takes enormous amounts of compiler time and often breaks compilers, ask me to provide a helper function.
When type traits were first formally introduced (in TR1, 2006), there were several traits that nobody knew how to implement portably. Since TR1 was supposed to be exclusively library additions, these couldn't count on compiler help, so their specifications allowed them to get an answer that was occasionally wrong, but they could be implemented in portable code.
Nowadays, those allowances have been removed; the library has to get the right answer. The compiler help for doing this isn't special knowledge of particular templates; it's a function call that tells you whether a particular class has a particular property. The compiler can recognize the name of the function, and provide an appropriate answer. This provides a lower-level toolkit that the traits templates can use, individually or in combination, to decide whether the class has the trait in question.
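For example, a library might implement a trait as a thin wrapper over such a compiler-provided helper. A sketch (__is_polymorphic is an intrinsic recognized by GCC, Clang and MSVC):

#include <type_traits>

// the compiler answers __is_polymorphic(T) itself; no portable template
// trickery could determine this
template <typename T>
struct is_polymorphic
    : std::integral_constant<bool, __is_polymorphic(T)>
{};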
I'm about to create a lexer for a project. A proof of concept exists and the idea works, but as I was about to start writing it I realised:
Why chars?
(I'm moving away from C; I'm still fairly suspicious of the standard libraries, and I found it easier to deal in char* for offsets and such than to learn about strings.)
Why not wchar_t or something, ints, or indeed any type (given it has some defined operations)?
So should I use a template? So far it seems like yes, I should, but there are two counter-arguments to consider:
Firstly, modular complication: the moment I write "template" it must go in a header file / be available with its implementation to whatever uses it. (It's not a matter of hiding source code; I don't mind having to show the code, it will be free (as in freedom) software.) This means extra parsing and things like that.
My C background screams not to do this; I seem to want separate .o files. I understand why I can't, by the way; I'm not asking for a way.
Separate object files speed up compilation, because the makefile (you tell it, or have it use -MM with the compiler to figure it out for itself) won't run the compilation command for things that haven't changed, and so forth.
Secondly, with templates, I know of no way to specify what a type must do, other than have the user realise when something fails (you know how Java has an extends keyword). I suspect that C++11 builds on this, as meta-programming is a large chapter in "The C++ Programming Language", 4th edition.
Are these reasons important these days? I learned with the following burned into my mind:
"You are not creating one huge code file that gets compiled, you create little ones that are linked" and templates seem to go against this.
I'm sure G++ parses very quickly, but it also optimises: if it spends a lot of time optimising one file, it'll re-do that optimisation every time it sees that code in a translation unit, whereas with separate object files it does the bulk (general optimisations) only once, and perhaps a bit more if you use LTO (link-time optimisation).
Or I could create a class that every input to the lexer derives from and use that (runtime polymorphism, I believe it's called), but my C roots say "eww, virtuals" and urge me towards the char*.
I understand this is quite open; I just don't know where to draw the line between using a template and not using a template.
Templates don't have to be in the header! If you have only a few instantiations, you can explicitly instantiate the class and function templates in suitable translation units. That is, a template would be split into three parts:
1. A header declaring the templates.
2. A header including the first and implementing the templates, but otherwise only included in the third set of files.
3. Source files including the header in 2. and explicitly instantiating the templates with the corresponding types.
Users of these templates would only include the header and never the implementation header. An example where this can be done is IOStreams: there are basically just two instantiations, one for char and one for wchar_t. Yes, you can instantiate the streams for other types, but I doubt that anybody does (I sometimes wonder whether anybody uses streams with a character type other than char at all, but presumably people do).
That said, the concepts used by templates are, indeed, not explicitly represented in the source and C++11 doesn't add any facilities to do so either. There were discussions on adding concepts to C++ but so far they are not part of any standard. There is a concepts light proposal which, I think, will be included in C++14.
However, in practice I haven't found that much of a problem: it is quite possible to document the concepts and use things like static_assert() to potentially produce nicer error messages. The problem is more that many concepts are actually more restrictive than the underlying algorithms and that the extra slack is sometimes quite useful.
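For instance, in the lexer case a "character type" concept can be documented and enforced with a couple of static_asserts (a sketch; the exact requirements are up to you):

#include <type_traits>

template <typename CT>
struct lexer
{
    // document the requirements on CT right where they matter
    static_assert(std::is_integral<CT>::value,
                  "lexer: the character type must be an integral type");
    static_assert(!std::is_same<CT, bool>::value,
                  "lexer: bool is not a usable character type");
    // ...
};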
Here is a brief and somewhat made-up example of how to implement and instantiate the template. The idea is to implement something like std::basic_ostream but merely provide our scaled-down version of a string output operator:
// simple-ostream.hpp
#include "simple-streambuf.hpp"

template <typename CT>
class simple_ostream {
    simple_streambuf<CT>* d_sbuf;
public:
    simple_ostream(simple_streambuf<CT>* sbuf);
    simple_streambuf<CT>* rdbuf() { return this->d_sbuf; } // should be inline
};

template <typename CT>
simple_ostream<CT>& operator<< (simple_ostream<CT>&, CT const*);
Except for the rdbuf() member the above is merely a class definition with a few member declarations and a function declaration. The rdbuf() function is implemented directly to show that you can mix and match the visible implementation, where performance is necessary, with external implementation, where decoupling is more important. The class template simple_streambuf used here is meant to be similar to std::basic_streambuf and is, at least, declared in the header "simple-streambuf.hpp".
// simple-ostream.tpp
// the implementation, only included to create explicit instantiations
#include "simple-ostream.hpp"

template <typename CT>
simple_ostream<CT>::simple_ostream(simple_streambuf<CT>* sbuf): d_sbuf(sbuf) {}

template <typename CT>
simple_ostream<CT>& operator<< (simple_ostream<CT>& out, CT const* str) {
    for (; *str; ++str) {
        out.rdbuf()->sputc(*str);
    }
    return out;
}
This implementation header is only included when explicitly instantiating the class and function templates. For example, the instantiations for char would look like this:
// simple-ostream-char.cpp
#include "simple-ostream.tpp"
// instantiate all class members for simple_ostream<char>:
template class simple_ostream<char>;
// instantiate the free-standing operator
template simple_ostream<char>& operator<< <char>(simple_ostream<char>&, char const*);
Any use of the simple_ostream<CT> would just include simple-ostream.hpp. For example:
// use-simple-ostream.cpp
#include "simple-ostream.hpp"

int main()
{
    simple_streambuf<char> sbuf;
    simple_ostream<char> out(&sbuf);
    out << "hello, world\n";
}
Of course, to build an executable you will need both use-simple-ostream.o and simple-ostream-char.o but assuming the template instantiations are part of a library this isn't really adding any complexity. The only real headache is when a user wants to use the class template with unexpected instantiations, say, char16_t, but only char and wchar_t are provided: In this case the user would need to explicitly create the instantiations or, if necessary, include the implementation header.
In case you want to try the example out, below is a somewhat simple-minded and sloppy (because it is header-only) implementation of simple_streambuf<CT>:
#ifndef INCLUDED_SIMPLE_STREAMBUF
#define INCLUDED_SIMPLE_STREAMBUF

#include <iostream>

template <typename CT> struct stream;

template <>
struct stream<char> {
    static std::ostream& get() { return std::cout; }
};

template <>
struct stream<wchar_t> {
    static std::wostream& get() { return std::wcout; }
};

template <typename CT>
struct simple_streambuf
{
    void sputc(CT c) {
        stream<CT>::get().rdbuf()->sputc(c);
    }
};

#endif
Yes, it should be limited to chars. Why? Because you're asking...
I have little experience with templates, but when I have used them the necessity arose naturally; I didn't need to go out of my way to use templates.
My 2 cents, FWIW.
1: Firstly, modular complication, the moment I write "template" it must go in a header file…
That's not a real argument. You have the ability to use C, C++, structs, classes, templates, classes with virtual functions, and all the other benefits of a multi-paradigm language. You're not coerced to take an all-or-nothing approach with your designs, and you can mix and match these functionalities based on your design's needs. So you can use templates where they are an appropriate tool, and other constructs where templates are not ideal. It's hard to know when that will be until after you have had experience using them all. Template/header-only libraries are popular, but one of the reasons the approach is used is that they simplify linking and the build process, and can reduce dependencies if designed well. If they are designed poorly, then yes, they can result in an explosion in compile times. That's not the language's fault -- it's the implementor's design.
Heck, you could even put your implementations behind C opaque types and use templates for everything, keeping the core template code visible to exactly one translation unit.
2: Secondly, with templates, I know of no way to specify what a type must do…
That is generally regarded as a feature. Instantiation can result in further instantiations, which makes it possible to select different implementations and specializations -- this is template metaprogramming territory. Often, all you really need to do is instantiate the implementation, which results in evaluation of the type and parameters. This -- simulating "concepts" and verifying interfaces -- can increase your build times, however. Furthermore, it may not be the best design, because deferring instantiation is in many cases preferable.
If you just need to brute-force instantiate all your variants, one approach is to create a separate translation unit which does just that -- you don't even need to link it into your library; add it to a unit test or some separate target. That way, you can validate that instantiation and functionality are correct without significant impact on clients including/linking the library. For example:
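Here the file and template names are invented for the sketch:

// instantiate-all.cpp -- not linked into the library itself; its only
// purpose is to force every supported variant to compile
#include "lexer.hpp"

template class lexer<char>;
template class lexer<wchar_t>;
template class lexer<unsigned char>;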
Are these reasons important these days?
No. Build times are of course very important, but I think you just need to learn the right tool to use, and when and why some implementations must be abstracted (or put behind compilation firewalls) when/if you need fast builds and scalability for large projects. So yes, they are important, but a good design can strike a good balance between versatility and build times. Also remember that template metaprogramming is capable of moving a significant amount of program validation from runtime to compile time. So a hit on compile times does not have to be bad, because it can save you from a lot of runtime validations/issues.
I'm sure G++ parses very quickly, but it also optimises, if it spends a lot of time optimising one file, it'll re-do that optimisation every time it sees that in a translation unit…
Right; that redundancy can kill fast build times.
whereas with separate object files, it does a bit (general optimisations) only once, and perhaps a bit more if you use LTO (link time optimisation) … Separate object files speed up compilation, because the makefile (you tell it or have it use -MM with the compiler to figure out for itself) won't run the compilation command for things that haven't changed and so forth.
Not necessarily so. First, many object files place a lot of demand on the linker. Second, more translations multiply the work, so reducing object files is a good thing. This really depends on the structure of your libraries and dependencies. Some teams take the opposite approach (I do quite regularly), and use a process which produces few object files. This can make your builds many times faster with complex projects because you eliminate redundant work for the compiler and linker. For best results, you need a good understanding of the process and your dependencies. In large projects, translation/object reductions can result in builds which are many times faster. This is often referred to as a "Unity Build". Large Scale C++ Design by John Lakos is a great read on dependencies and C++ project structures, although it's rather dated at this point, so you should not take every bit of advice at face value.
So the short answer is: Use the best tool for the problem at hand -- a good designer will use many available tools. You're far from exhausting the capabilities of the tools and build systems. A good understanding of these subjects will take years.
From what I understand, standard layout allows three things:
- Empty base class optimization
- Backwards compatibility with C with certain pointer casts
- Use of offsetof
Now, the library includes the is_standard_layout predicate metafunction, but I can't see much use for it in generic code, as the C-compatibility features I listed above seem to need checking extremely rarely there. The only thing I can think of is using it inside static_assert, but that is only to make code more robust and isn't required.
How is is_standard_layout useful? Are there any things which would be impossible without it, thus requiring it in the standard library?
General response
It is a way of validating assumptions. You wouldn't want to write code that assumes standard layout if that wasn't the case.
C++11 provides a bunch of utilities like this. They are particularly valuable for writing generic code (templates) where you would otherwise have to trust the client code to not make any mistakes.
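As a small sketch (the Packet type is invented), guarding an offsetof-based layout assumption looks like this:

#include <cstddef>
#include <cstdint>
#include <type_traits>

struct Packet { std::uint32_t id; std::uint16_t len; };

// offsetof is only guaranteed to work on standard-layout types,
// so validate that assumption instead of silently relying on it
static_assert(std::is_standard_layout<Packet>::value,
              "Packet must be standard-layout for the offsetof below");
const std::size_t id_offset = offsetof(Packet, id);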
Notes specific to is_standard_layout
It looks to me like the definition of is_pod would roughly be the following (written here as an actual trait rather than pseudocode):
// note: both constituent traits apply their checks recursively to all members
template <typename T>
struct is_pod : std::integral_constant<bool,
    std::is_standard_layout<T>::value && std::is_trivial<T>::value> {};
So, you need to know is_standard_layout in order to implement is_pod. Given that, we might as well expose is_standard_layout as a tool available to library developers. Also of note: if you have a use-case for is_pod, you might want to consider the possibility that is_standard_layout might actually be a better (more accurate) choice in that case, since POD is essentially a subset of standard layout.
I get the feeling that they added every conceivable variant of type evaluation, regardless of any obvious value, just in case someone might encounter a need sometime before the next standard comes out. I doubt if piling on these "extra" type properties adds a significant additional burden to compiler developers.
There is a nice discussion of standard layout here: Why is C++11's POD "standard layout" definition the way it is?
There is also a lot of good detail at cppreference.com: Non-static data members
That is my question. I'm just curious what the consensus is on limiting the types that can be passed in to a generic function or class. I thought I had read at some point, that if you're doing generic programming, it was generally better to leave things open instead of trying to close them down (don't recall the source).
I'm writing a library that has some internal generic functions, and I feel that they should only allow types within the library to be used with them, simply because that's how I mean for them to be used. On the other hand, I'm not really sure my effort to lock things down is worth it.
Anybody maybe have some sources for statistics or authoritative commentary on this topic? I'm also interested in sound opinions. Hopefully that doesn't invalidate this question altogether :\
Also, are there any tags here on SO that equate to "best-practice"? I didn't see that one specifically, but it seems like it'd be helpful to be able to bring up all best-practice info for a given SO topic... maybe not, just a thought.
Edit: One answer so far mentioned that the type of library I'm doing would be significant. It's a database library that ends up working with STL containers, variadics (tuple), Boost Fusion, things of that nature. I can see how that would be relevant, but I'd also be interested in rules of thumb for determining which way to go.
Always leave it as open as possible - but make sure to:
- document the required interface and behaviour for valid types to use with your generic code.
- use a type's interface characteristics (traits) to determine whether to allow/disallow it. Don't base your decision on the type name.
- produce reasonable diagnostics if someone uses a wrong type. C++ templates are great at raising tons of deeply-nested errors if they get instantiated with the wrong types - using type traits, static assertions and related techniques, one can easily produce more succinct error messages (a sketch follows the list).
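A sketch of that approach (is_storable is an invented opt-in trait; a real library would specialize it for each supported type):

#include <type_traits>

// opt-in trait: specialize to derive from std::true_type for acceptable types
template <typename T> struct is_storable : std::false_type {};

template <typename T>
void store(T const& value)
{
    static_assert(is_storable<T>::value,
                  "store<T>: T does not model the Storable concept");
    // ... actual work ...
}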
In my database framework, I decided to forgo templates and use a single base class. Generic programming meant that any or all objects can be used. The specific type classes outweighed the few generic operations. For example, strings and numbers can be compared for equality; BLOBs (Binary Large OBjects) may want to use a different method (such as comparing MD5 checksums stored in a different record).
Also, there was an inheritance branch between strings and numeric types.
By using an inheritance hierarchy, I can refer to any field by using the Field class or to a specialized class such as Field_Int.
It's one of the strongest selling points of the STL that it's so open: its algorithms work with my data structures as well as with the ones it provides itself, and my algorithms work with its data structures as well as with mine.
Whether it makes sense to leave your algorithms open to all types or limit them to yours depends largely on the library you're writing, which we know nothing about.
(Initially I meant to answer that being wide open is what Generic Programming is all about, but now I see that there are always limits to genericity, and that you have to draw the line somewhere. It might just as well be limited to your types, if that makes sense.)
At least IMO, the right thing to do is roughly what concepts attempted: rather than attempting to verify that you're receiving the specified type (or one of the set of specified types), do your best to specify the requirements on the type, and verify that the type you've received has the right characteristics, and can meet the requirements of your template.
Much like with concepts, much of the motivation for that is to simply provide good, useful error messages when those requirements aren't met. Ultimately, the compiler will produce an error message if somebody attempts to instantiate your template over a type that doesn't meet its requirements. The problem is that, as likely as not, the error message won't be very helpful unless you take steps to ensure that it is.
The Problem
If your clients can see your internal functions in public headers, and if the names of these internal generic functions are "common", then you may be putting your clients at risk of accidentally calling your internal generic functions.
For example:
namespace Database
{
    // internal API, not documented
    template <class DatabaseItem>
    void
    store(DatabaseItem)
    {
        // ...
    }

    struct SomeDataBaseType {};
} // Database

namespace ClientCode
{
    template <class T, class U>
    struct base
    {
    };

    // external API, documented
    template <class T, class U>
    void
    store(base<T, U>)
    {
        // ...
    }

    template <class T, class U>
    struct derived
        : public base<T, U>
    {
    };
} // ClientCode

int main()
{
    ClientCode::derived<int, Database::SomeDataBaseType> d;
    store(d); // intended ClientCode::store
}
In this example the author of main doesn't even know Database::store exists. He intends to call ClientCode::store, and gets lazy, letting ADL choose the function instead of specifying ClientCode::store. After all, his argument to store comes from the same namespace as store, so it should just work.
It doesn't work. This example calls Database::store. Depending on the innards of Database::store this call may result in a compile-time error, or worse yet, a run time error.
How To Fix
The more generically you name your functions, the more likely this is to happen. Give your internal functions (the ones that must appear in your headers) really non-generic names. Or put them in a sub-namespace like details. In the latter case you have to make sure your clients won't ever have details as an associated namespace for the purpose of ADL. That's usually accomplished by not creating types that the client will use, either directly or indirectly, in namespace details.
If you want to get more paranoid, start locking things down with enable_if.
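A sketch of that lockdown (C++11 syntax; is_database_item is a hypothetical opt-in trait):

#include <type_traits>

template <class T> struct is_database_item : std::false_type {};

namespace Database
{
    // participates in overload resolution only for opted-in types, so
    // ADL can no longer select it for arbitrary client types
    template <class DatabaseItem>
    typename std::enable_if<is_database_item<DatabaseItem>::value>::type
    store(DatabaseItem item)
    {
        // ...
    }
}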
If perhaps you think your internal functions might be useful to your clients, then they are no longer internal.
The above example code is not far-fetched. It has happened to me. It has happened to functions in namespace std. I call store in this example overly generic. std::advance and std::distance are classic examples of overly generic code. It is something to guard against. And it is a problem concepts attempted to fix.
I would like to generate various data types in C++ with unique deterministic names. For example:
struct struct_int_double { int mem0; double mem1; };
At present my compiler synthesises names using a counter, which means the names don't agree when compiling the same data type in distinct translation units.
Here's what won't work:
- Using the ABI mangled_name function, because it already depends on structs having unique names. It might work in a C++11-compliant ABI by pretending the struct is anonymous?
- Templates, e.g. struct2, because templates don't work with recursive types.
- A complete mangling, because it gives names which are way too long (hundreds of characters!).
Apart from a global registry (YUK!) the only thing I can think of is to first create a unique long mangled name, and then use a digest or hash function to shorten it (and hope there are no clashes).
Actual problem: to generate libraries which can be called where the types are anonymous, e.g. tuples, sum types, function types.
Any other ideas?
EDIT: Additional description of the recursive type problem. Consider defining a linked list like this:
template<class T>
typedef pair<list<T>*, T> list;
This is actually what is required. It doesn't work for two reasons: first, you can't template a typedef. [NO, you can NOT use a template class with a typedef in it; it doesn't work.] Second, you can't pass list* in as an argument because it isn't defined yet. In C, without polymorphism, you can do it:
struct list_int { struct list_int *next; int value; };
There are several workarounds. For this particular problem you can use a variant of the Barton-Nackman trick, but it doesn't generalise.
There is a general workaround, first shown me by Gabrielle des Rois, using a template with open recursion, and then a partial specialisation to close it. But this is extremely difficult to generate and would probably be unreadable even if I could figure out how to do it.
There's another problem with doing variants properly too, but that's not directly related (it's just worse because of the stupid restriction against declaring unions with constructible types).
Therefore, my compiler simply uses ordinary C types. It has to handle polymorphism anyhow: one of the reasons for writing it was to bypass the problems of the C++ type system, including templates. This then leads to the naming problem.
Do you actually need the names to agree? Just define the structs separately, with different names, in the different translation units and reinterpret_cast<> where necessary to keep the C++ compiler happy. Of course that would be horrific in hand-written code, but this is code generated by your compiler, so you can (and I assume do) perform the necessary static type checks before the C++ code is generated.
If I've missed something and you really do need the type names to agree, then I think you already answered your own question: Unless the compiler can share information between the translation of multiple translation units (through some global registry), I can't see any way of generating unique, deterministic names from the type's structural form except the obvious one of name-mangling.
As for the length of names, I'm not sure why it matters? If you're considering using a hash function to shorten the names then clearly you don't need them to be human-readable, so why do they need to be short?
Personally I'd probably generate semi-human-readable names, in a similar style to existing name-mangling schemes, and not bother with the hash function. So, instead of generating struct_int_double you might generate sid (struct, int, double) or si32f64 (struct, 32-bit integer, 64-bit float) or whatever. Names like that have the advantage that they can still be parsed directly (which seems like it would be pretty much essential for debugging).
Edit
Some more thoughts:
Templates: I don't see any real advantage in generating template code to get around this problem, even if it were possible. If you're worried about hitting symbol name length limits in the linker, templates can't help you, because the linker has no concept of templates: any symbols it see will be mangled forms of the template structure generated by the C++ compiler and will have exactly the same problem as long mangled names generated directly by the felix compiler.
Any types that have been named in felix code should be retained and used directly (or nearly directly) in the generated C++ code. I would think there are practical (soft) readability/maintainability constraints on the complexity of anonymous types used in felix code, which are the only ones you need to generate names for. I assume your "variants" are discriminated unions, so each component part must have a name (the tag) defined in the felix code, and again these names can be retained. (I mentioned this in a comment, but since I'm editing my answer I might as well include it)
Reducing mangled-name length: Running a long mangled name through a hash function sounds like the easiest way to do it, and the chance of collisions should be acceptable as long as you use a good hash function and retain enough bits in your hashed name (and your alphabet for encoding the hashed name has 37 characters, so a full 160-bit sha1 hash could be written in about 31 characters). The hash function idea means that you won't be able to get directly back from a hashed name to the original name, but you might never need to do that. And you could dump out an auxiliary name-mapping table as part of the compilation process I guess (or re-generate the name from the C struct definition maybe, where it's available). Alternatively, if you still really don't like hash functions, you could probably define a reasonably compact bit-level encoding (then write that in the 37-character identifier alphabet), or even run some general purpose compression algorithm on that bit-level encoding. If you have enough felix code to analyse you could even pre-generate a fixed compression dictionary. That's stark raving bonkers of course: just use a hash.
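A sketch of the hashing step (64-bit FNV-1a for brevity; substitute a stronger digest like SHA-1 for real use):

#include <cstdint>
#include <string>

// shorten a long mangled name deterministically: hash it, then spell
// the hash in the 37-character identifier alphabet mentioned above
std::string short_name(std::string const& mangled)
{
    std::uint64_t h = 14695981039346656037ull; // FNV offset basis
    for (std::string::size_type i = 0; i != mangled.size(); ++i) {
        h ^= static_cast<unsigned char>(mangled[i]);
        h *= 1099511628211ull;                 // FNV prime
    }
    static char const alpha[] = "0123456789abcdefghijklmnopqrstuvwxyz_";
    std::string out("t_");
    do { out += alpha[h % 37]; h /= 37; } while (h != 0);
    return out;
}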
Edit 2: Sorry, brain failure -- sha-1 digests are 160 bits, not 128.
PS. Not sure why this question was down-voted -- it seems reasonable to me, although some more context about this compiler you're working on might help.
I don't really understand your problem.
template <typename T>
struct SListItem
{
    SListItem* m_prev;
    SListItem* m_next;
    T m_value;
};

int main()
{
    SListItem<int> sListItem;
}