Why is the Loki library not more widely used? - c++

The Loki library implements some very widely used concepts (smart pointer, visitor, factory, etc.). The associated book "Modern C++ Design" is often mentioned, but the library itself is not widely used. Why is that?
Most developers seem to prefer Boost. In particular, why do people often decide to use Boost's smart pointers rather than Loki's?

Loki is a research/proof-of-concept sort of thing. Alexandrescu pushes new ideas; other people adopt them for real-world use. Also, boost::shared_ptr is almost literally in TR1.

Loki suffers from being a good library touching on several functional areas (template metaprogramming support with a few specific applications: smart pointers, singletons, function objects, scope guards, etc.), whereas boost is a collection of many libraries, typically exhaustively covering each functional area and much more highly tuned for portability (first).
When 9 birds out of 10 can be killed with the same stone, many people just start with boost and fill in the gaps with third-party libraries. It's very hard to compete with boost if you overlap. Because you won't overlap with much of boost, people will download/install boost anyway to get the other functionality, so unless you nail an area that boost is weak at - and the difference is significant to the project - they'll "settle" for boost there too.
Further, Alexandrescu made repeated attempts to get Loki included in boost, and some of the key boost authors just weren't cooperative. My personal view is that they want the more complete but much less user-friendly MPL to have more "market share": as authors of the library and the hard-copy books that are the only decent documentation (in stark contrast with most other boost libraries which have excellent online documentation), they do quite well out of this.
If anyone is offended by and disagrees with this analysis, I'm all ears.
Another practical issue with extremely parameterised code is that in large projects where different developers/teams work independently, they'll often end up using subtly different instantiations of the same template pretty arbitrarily. This makes it harder to pass values between those subsystems: the receiver may need to:
be parameterised (i.e. templated, and hence inline, which introduces compilation dependencies and slower builds in enterprise-scale systems)
provide some minimal coverage for all possible instantiations (e.g. checking error codes and expecting/handling exceptions)
work through some compile-time to run-time hand-over based on an abstract base accessor with implementations for each instantiation, which compromises some of the performance benefits of parameterisation
This is all possible, but it takes a great programmer to navigate the terrain.
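To make the third option concrete, here is a minimal sketch of such a compile-time to run-time hand-over (all names are invented for the illustration):

#include <string>

// Abstract base accessor: the run-time interface the subsystems agree on.
struct ConfigSource {
    virtual ~ConfigSource() {}
    virtual std::string get(const std::string& key) const = 0;
};

// One thin adapter per template instantiation actually in use.
template <typename Backend>
class ConfigSourceFor : public ConfigSource {
public:
    explicit ConfigSourceFor(const Backend& b) : backend_(b) {}
    virtual std::string get(const std::string& key) const {
        return backend_.lookup(key);   // whatever that instantiation offers
    }
private:
    Backend backend_;
};

// Receivers now take a ConfigSource& instead of being templates themselves,
// at the price of a virtual call per access.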

You want to use a library that the next programmer is going to know and that is going to be well supported in the future - so you pick a major lib.
Because it's a major lib lots of people use it, so it becomes the default choice.

I actually prefer Loki's way of doing things, and I have myself contributed a Decorator pattern to Loki, which now sits in the tracker because the project, as far as I know, is no longer maintained.
I use boost::shared_ptr just because it will be the standard very soon. I may dislike the fact that I can't customize it to act exactly the way I want, but I have to live with it.
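For flavour, here is the kind of per-instantiation customization that Loki-style policy-based design allows and a fixed-interface shared_ptr doesn't. This is only a sketch of the idea, not Loki's actual interface (though Loki uses similar policy names):

#include <cassert>

// Checking policies: decide what happens on a null dereference.
template <typename T>
struct AssertCheck {
    static void check(T* p) { assert(p != 0); }
};

template <typename T>
struct NoCheck {
    static void check(T*) {}
};

// A deliberately tiny, non-owning wrapper parameterised by policy.
template <typename T, template <class> class CheckingPolicy = AssertCheck>
class CheckedPtr {
public:
    explicit CheckedPtr(T* p = 0) : p_(p) {}
    T& operator*() const { CheckingPolicy<T>::check(p_); return *p_; }
    T* operator->() const { CheckingPolicy<T>::check(p_); return p_; }
private:
    T* p_;
};

// CheckedPtr<int>           asserts on a null dereference
// CheckedPtr<int, NoCheck>  compiles the check away entirely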
Usage of the standard library is important as it keeps the code maintainable by other programmers. If it's open source and you want to experiment go ahead and use Loki. No one is stopping you.
Actually Windows Vista uses some of Loki's features.
I am guessing they are not using the redundant implementations of smart pointers and visitors.

Speaking as someone who's used quite a bit of the Boost library, and also looked at Loki more than once, the biggest problem was the sparsity of documentation. Also, Loki uses some of the hairiest bits of C++ templates. Exciting stuff, but also rather daunting.

I used Loki once for a little tool (basically an interpreter) and actually liked it. My coworkers were less enthusiastic about the library, so its use remained constrained to this small sub-project.

Related

Is it common practice to abstract library dependencies from implementation?

My answer to this question would be "no." But my coworkers disagree.
We're rebuilding our product and have a lot of critical decisions to make in the near-term.
While doing some of my own work I noticed that we've got some in-house C++ classes to abstract some of the POSIX API (threads, mutexes, semaphores, and rw locks) and other utility classes. Note that these classes are basic, and haven't been ported from Linux (portability is a factor in the rebuild). We are also using the POCO C++ libraries.
I brought this to the attention of my coworkers and suggested that we ditch our in-house classes in favour of their POCO equivalents. I want to take full advantage of the library we're already using. They suggested that we should implement our in-house classes using POCO, and further abstract additional POCO classes as necessary, so as not to depend on any specific C++ library (citing future unknowns: what if we want to use a different lib/framework like Qt or boost, what if the one we choose turns out to be no good or development becomes inactive, etc.).
They also don't want to refactor legacy code, and by abstracting parts of POCO with our own classes, we can implement additional functionality (classic OOP). Both of these arguments I can appreciate. However, I argue that if we're doing a recode we should go big or go home. Now would be the time to refactor, and it really shouldn't be that bad, especially given the similarity between our classes and those in POCO (threads, etc.). I don't know what to say regarding the second point - should we only use extended classes where the functionality is necessary?
My coworkers also don't want to litter the POCO namespace all over the place. I argue that we should pick a library/framework/toolkit and stick with it. Take full advantage of its features. Is this not typical practice? The only project I've seen that abstracts an entire framework is FreeSWITCH (which provides its own interface to APR).
One suggestion is that the API we expose to each other, and potential customers, should be free of POCO, but it would be present in the implementation (which makes sense.)
None of us really have experience in these kinds of design decisions, and it shows in the current product. Having been at this since I was young, I've got some intuition that has brought me here, but no practical experience either. I really want to avoid poor solutions to problems that are already solved.
I think my question boils down to this: When building a product, should we a) choose a dominant framework on which to base most of our code, and b) expect that framework to be tightly coupled with the product? Isn't that the point of a framework? (Is framework or library more appropriate for POCO?)
First, the API that you expose should definitely be free of POCO, Boost, Qt, or any other type that is not part of the standard C++ library. This is because the base libraries have their own release cycle, distinct from the release cycle of your library. If the users of your library also use Boost, but a different, incompatible version, they would need to spend time resolving the incompatibility. The only exception to this rule is when you design a library to be released as part of a wider framework - say, an addition to the POCO toolkit. In this case the release of your library is tied to the release of the entire toolkit.
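A common way to satisfy that rule is the pimpl idiom: the framework type lives in the .cpp file and never appears in the public header. A sketch (the class and its members are made up for the example):

// channel.h -- public header: no POCO types appear here.
#include <string>

class Channel {
public:
    explicit Channel(const std::string& endpoint);
    ~Channel();
    void send(const std::string& message);
private:
    struct Impl;                          // defined in channel.cpp, where it
    Impl* impl_;                          // can hold e.g. a Poco::Net::StreamSocket
    Channel(const Channel&);              // non-copyable (C++03 style)
    Channel& operator=(const Channel&);
};

Because Impl is only defined in the implementation file, clients of channel.h never see a POCO header, and the POCO dependency can change without touching them.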
Internally, however, you should avoid using your own wrappers, unless the library that you are abstracting out is a true "commodity library"1. The reason for this is that when you hide an external library behind your classes, most of the time you mimic the level of abstraction of the library that you are hiding. The code that uses your wrapper will program to the level of abstraction dictated by the external library. When you swap the implementation behind your wrapper for a different framework, it is very likely that you will either (1) adapt the new framework to fit the level of abstraction of the old framework, or (2) need to change the way in which you use your wrapper. Both cases are highly suspect: if you do (1), perhaps you shouldn't switch in the first place, and if you do (2), then your wrappers prove to be useless.
1 By "commodity library" I mean a library that provides a level of abstraction commonly found in other libraries that serve a similar purpose.
There are two situations where I think it's worth having your own wrappers:
1) You've looked at several different mutex implementations on different systems/libraries, you've established a common set of requirements that they can all satisfy and that are sufficient for your software. Then you define that abstraction and implement it one or more times, knowing that you've planned ahead for flexibility. The rest of your code is written to rely only on your abstraction, not on any incidental properties of the current implementation(s). I have done this in the past, although not in code I can show you.
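A minimal sketch of what such an agreed abstraction might look like, with one POSIX implementation behind it (names invented for the example):

// mutex.h -- the only interface the rest of the code relies on.
class Mutex {
public:
    Mutex();
    ~Mutex();
    void lock();
    void unlock();
private:
    void* handle_;                        // opaque: hides pthread_mutex_t
    Mutex(const Mutex&);                  // non-copyable
    Mutex& operator=(const Mutex&);
};

// mutex_posix.cpp -- one implementation of the agreed contract.
#include <pthread.h>

Mutex::Mutex() : handle_(new pthread_mutex_t) {
    pthread_mutex_init(static_cast<pthread_mutex_t*>(handle_), 0);
}
Mutex::~Mutex() {
    pthread_mutex_destroy(static_cast<pthread_mutex_t*>(handle_));
    delete static_cast<pthread_mutex_t*>(handle_);
}
void Mutex::lock()   { pthread_mutex_lock(static_cast<pthread_mutex_t*>(handle_)); }
void Mutex::unlock() { pthread_mutex_unlock(static_cast<pthread_mutex_t*>(handle_)); }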
A classic example of this "least common interface" would be to restrict rename in the filesystem abstraction, on the basis that Windows cannot implement an atomic rename-over-an-existing-file. So your code must not rely on atomic rename-replacement if you might in future swap out your current *nix implementation for one that can't do that. You have to restrict the interface from the start.
When done right, this kind of interface can considerably ease any kind of future porting, either to a new system or because you want to change your third-party library dependencies. However, an entire framework is probably too big to successfully do this with -- essentially you'd be inventing and writing your own framework, which is not a trivial task and conceivably is a larger task than writing your actual software.
2) You want to be able to mock/stub/sham/spoof (or whatever the next clever technique is called) the mutex in tests, and decide that you will find this easier if you have your own wrapper thrown over it than if you're trying to mess with symbols from third-party libraries, or ones that are built in.
Note that defining your own functions called wrap_pthread_mutex_init, wrap_pthread_mutex_lock, etc., that precisely mimic the pthread_* functions and take exactly the same parameters, might satisfy (2) but doesn't satisfy (1). And anyway, doing (2) properly probably requires more than just wrappers; you usually also want to inject the dependencies into your code.
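A brief illustration of that injection point (the interface and class names are invented for the example):

// The code under test depends on an abstract interface, not on pthreads.
struct Lockable {
    virtual ~Lockable() {}
    virtual void lock() = 0;
    virtual void unlock() = 0;
};

class Counter {
public:
    explicit Counter(Lockable& m) : mutex_(m), value_(0) {}
    void increment() {
        mutex_.lock();
        ++value_;
        mutex_.unlock();
    }
private:
    Lockable& mutex_;
    int value_;
};

// In tests, pass a fake Lockable that records lock()/unlock() calls;
// in production, pass an adapter over pthreads, boost, or POCO.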
Doing extra work under the heading of flexibility, without actually providing for flexibility, is pretty much a waste of time. It can be very difficult or even provably impossible to implement one threading environment in terms of another one. If you decide in future to switch from pthreads to std::thread in C++, then having used an abstraction that looks exactly like the pthreads API under different names is (approximately) no help whatsoever.
For another possible change you might make, implementing the full pthreads API on Windows is sort of possible, but probably more difficult than only implementing what you actually need. So if you port to Windows, all your abstraction saves you is the time to search and replace all calls in the rest of your software. You're still going to have to (a) plug in a complete POSIX implementation for Windows, or (b) do the work to figure out what you actually need, and only implement that. Your wrapper won't help with the real work.

Boost library acceptance in industry

I have seen lots of people suggesting the Boost library on Stack Overflow, so I am also thinking of learning it. But today I came across this link: http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Boost
I wanted to know about its acceptance in industry at a broader level. My current company also doesn't allow me to use it, so I am confused about whether to look into it or not.
Parts of the Boost library are currently being accepted into the Standard Library for C++0x, and Boost is regarded as one of the top libs with a high industry acceptance. I'm actually unaware of any other library getting accepted into the C++ Standard Library on such a large scale.
"Ten Boost libraries are already included in the C++ Standards Committee's Library Technical Report (TR1) and will be in the new C++0x Standard now being finalized. C++0x will also include several more Boost libraries in addition to those from TR1. More Boost libraries are proposed for TR2."
You should definitely look into this. Don't go by Google or any other large institution. They generally have to work to a subset of any complex language like C++. Hence, they'll have restrictions on which parts they can use so that it's easier to hire and train engineers to use the code base.
Additionally, Boost leverages many aspects of the higher forms of functionality within C++ - case in point, template metaprogramming. Boost provides for a safer, though bulkier, form of functions as first-class objects. They add in a more powerful "bind" which works so well with the standard library that I'd be lost without it. Finally, they have in place tuples and hash tables, both fundamental datatypes in modern development libraries.
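To give a flavour of the bind/function point, a small sketch (assuming Boost's bind and function headers of that era):

#include <algorithm>
#include <vector>
#include <boost/bind.hpp>
#include <boost/function.hpp>

struct Mail {
    void send() const { /* ... */ }
};

int main() {
    std::vector<Mail> inbox(3);

    // Call a member function on every element: clumsy with the C++03
    // binders, a one-liner with boost::bind.
    std::for_each(inbox.begin(), inbox.end(), boost::bind(&Mail::send, _1));

    // Functions as first-class objects.
    boost::function<void(const Mail&)> op = boost::bind(&Mail::send, _1);
    op(inbox.front());
}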
In short, I really can't name one reason why you wouldn't want to look at Boost, even just to learn something. It's peer reviewed and largely platform independent. The source code is a treasure trove of information and more advanced programming techniques.
I think the "Who's Using Boost?" web page speaks for itself. Notably: Adobe, McAfee, and Real Networks probably qualify as industry.
My current company also does not allow me to use [boost]. So I am confused whether to look into it or not.
You might want to dig a little further and find out why. As others have said, Boost is a fantastically useful set of open source, peer-reviewed libraries of extremely high quality. Look at their development LOC chart for an idea of how long and how much $$ it would cost your company to reinvent the wheel.

Boost advocacy - help needed

Possible duplicates
Is there a reason to not use Boost?
What are the advantages of using the C++ BOOST libraries?
OK, the high-level question is: "Please provide me with what you consider to be the most effective arguments for why the entire Boost library, or some specific parts of it, should be compiled on our company's system and endorsed in software engineering standards".
Details of what I need:
Would gladly accept both positive arguments (why install) and proposed rebuttals of likely counter-arguments I might hear (see question context below).
Arguments should be aimed at technical Software Engineering team members and/or very technical senior managers - in other words, for the latter, the details of the argument may (and should) be technical, but the thrust of the argument should be "how would this make/save the company X money versus losing the company Y money as the cost of adding it to our toolset".
Context of the question:
I'm a developer in a company with several hundred developers, many dozens of whom do C++.
I had the (mis)fortune of being reassigned from my beloved Perl development spot to a team where I am also doing C++ development. So far I have found numerous things that are easy in Perl but very hard/cumbersome in C++ (the foreach loop, for example; see the sketch after this list), and any time I hit one of these, the answer is 50% likely to end up being "You can't do this in standard C++, but you can do it with Boost".
Our toolkit includes some legacy RogueWave libraries, and a VERY limited number of Boost libraries (e.g. no regex, no foreach), of very old vintage.
Any development must use libraries compiled and vetted by Software Engineering team. That is a hard and fast rule.
The SE team is somewhat resistant to adding new libraries, for a variety of reasons (e.g. the effort to do this; functionality conflicts with RogueWave, for example for RegEx; the risk of installing and using any new software; the cost of educating developers, etc.). They will add the libraries if presented with sufficient business need or a majorly convincing cost/benefit argument, but they have a pretty tough threshold.
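(To illustrate the foreach point from the context above - a minimal sketch, assuming a Boost version recent enough to ship boost/foreach.hpp:)

#include <iostream>
#include <vector>
#include <boost/foreach.hpp>

int main() {
    std::vector<int> v;
    v.push_back(1); v.push_back(2); v.push_back(3);

    // Standard C++03: iterator boilerplate.
    for (std::vector<int>::const_iterator it = v.begin(); it != v.end(); ++it)
        std::cout << *it << '\n';

    // Boost.Foreach: close to Perl's "foreach my $x (@v)".
    BOOST_FOREACH(int x, v)
        std::cout << x << '\n';
}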
So, I'm looking for examples of which parts of Boost are so wonderful (with exact cost/benefit estimates) that installing them would be an Obviously Worth It Effort for Software Engineering.
Thanks in advance for any ideas/suggestions/examples.
Please don't mark this question as subjective as I am looking for measurable answers, not merely wonderful feelings :)
Wherever I worked in the last decade, when they had their own smart pointer class, I found bugs in it - usually within a few weeks. And, no, I never went and looked at it hoping to find errors.
I got into the habit of posting the following quote from the TR1 smart pointer proposal:
The Boost developers found a shared-ownership smart pointer exceedingly difficult to implement correctly. Others have made the same observation. For example, Scott Meyers [Meyers01] says:
"The STL itself contains no reference-counting smart pointer, and writing a good one - one that works correctly all the time - is tricky enough that you don't want to do it unless you have to. I published the code for a reference-counting smart pointer in More Effective C++ in 1996, and despite basing it on established smart pointer implementations and submitting it to extensive pre- publication reviewing by experienced developers, a small parade of valid bug reports has trickled in for years. The number of subtle ways in which reference-counting smart pointers can fail is remarkable."
This plus a detailed analysis of the bug(s) I found usually got me the job of incorporating the boost libs into the code base. :)
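For flavour, one of the classic subtle failure modes - an invented illustration, not one of the actual bugs referred to above:

// A naive hand-rolled reference-counting pointer with a classic bug.
template <typename T>
class NaivePtr {
public:
    explicit NaivePtr(T* p = 0) : p_(p), count_(new int(1)) {}
    NaivePtr(const NaivePtr& o) : p_(o.p_), count_(o.count_) { ++*count_; }
    NaivePtr& operator=(const NaivePtr& o) {
        release();                 // BUG: on self-assignment (or when both
        p_ = o.p_;                 // pointers share one count), this destroys
        count_ = o.count_;         // the very object we are about to copy
        ++*count_;
        return *this;
    }
    ~NaivePtr() { release(); }
    T& operator*() const { return *p_; }
private:
    void release() { if (--*count_ == 0) { delete p_; delete count_; } }
    T* p_;
    int* count_;
};

// The usual fixes: increment o's count before releasing, or copy-and-swap.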
It seems to me you're doing this the wrong way around. Since proposals to add new libraries are going to be met with a lot of resistance, don't even bother trying to argue for boost as a whole. Choose your battles instead.
Find the specific Boost libraries which you know (with your knowledge of the application it's to be used in) will be useful and save time and money. And then propose adding those.
I could easily list the Boost libs I've found useful, and why I think they're great, but I don't know if they'll be of any use in your application.
Push for individual boost libraries to be included, and then perhaps, over time, so many of them will be included that it'll be simpler for everyone to just include all of Boost.
It's open and not controlled by a specific company (no licensing costs)
It's cross-platform
It's expertly designed and written, very fast and efficient, and extensively tested
There are open source implementations which your team can compile themselves.
Parts of Boost will soon become part of the standard C++ library
Here is a slightly oldish 2005 article on Dr. Dobb's discussing the upcoming C++0x standard.
http://www.ddj.com/cpp/184401958
I had to maintain a component using that old vintage Tools.h++ from RogueWave, on a Solaris system.
On Solaris, if we want to use boost, we need to use either gcc, or SunStudio with the STLport implementation of the standard library (instead of the RogueWave one). And as Tools.h++ requires the old RogueWave pre-standard implementation - on Solaris - I had to give up on boost.
In the end I rewrote a simplified version of a few boost-like features I needed.
If you are in that same situation (*), you would not be able to move from the RogueWave library to boost that easily. There is a non-negligible cost to this operation, as for instance the pointer containers from the two libraries have quite different interfaces.
(*) Where we can't slowly change bits by bits of the old code to progressively use boost. In that situation, the migration has to be radical and to change simultaneously every occurrence of Tools.h++ by something more trendy, or even better.
NB: Most people are able to progressively use boost in old projects, and may miss a very important, and yes technical, point. Hence my negative answer.
Boost is a great tool and an invaluable part of our product development (we'd be lost without smart_ptr)... but because it is changing so fast, the stability of releases can be affected.
For example, we were happily introducing new versions of Boost as soon as they came out, without thinking twice. That is, until we were stung by a bug in the 1.35 threading library that produced occasional (i.e. difficult to debug) but critical errors. Fortunately we identified the issue before anything was released to the public and could move back to 1.34.
Ever since then we've taken a specific version, extensively tested it, and not updated without a compelling reason to do so.
Here are two suggestions for advocating boost:
Who's Using Boost? (http://www.boost.org/users/uses.html)
lots of major projects use boost (e.g., Adobe Photoshop, CERN)
Boost Project Cost (http://www.boost.org/development/index.html)
How much would it cost to hire a team to write boost from scratch? There's a nifty (somewhat gimmicky) calculator there that helps to put it in perspective.

Why STL containers are preferred over MFC containers?

Previously, I used to use MFC collection classes such as CArray and CMap. After a while I switched to STL containers and have been using them for a while. Although I find STL much better, I am unable to pinpoint the exact reasons for it. Some of the reasoning, such as:
It requires MFC: does not hold, because other parts of my program use MFC
It is platform dependent: does not hold, because I run my application only on Windows (no need for portability)
It is defined in the C++ standard: OK, but MFC containers still work
The only reason I could come up is that I can use algorithms on the containers. Is there any other reason that I am missing here - what makes STL containers better than MFC containers?
Ronald Laeremans, VC++ Product Unit Manager, even said to use STL in June 2006:
And frankly the team will give you the same answer. The MFC collection classes are only there for backwards compatibility. C++ has a standard for collection classes and that is the Standard C++ Library. There is no technical drawback to using any of the standard library in an MFC application.
We do not plan on making significant changes in this area.
Ronald Laeremans, Acting Product Unit Manager, Visual C++ Team
However, at one point where I was working on some code that ran during the installation phase of Windows, I was not permitted to use STL containers, but was told to use ATL containers instead (actually CString in particular, which I guess isn't really a container). The explanation was that the STL containers had dependencies on runtime bits that might not actually be available at the time the code had to execute, while those problems didn't exist for the ATL collections. This is a rather special scenario that shouldn't affect 99% of the code out there.
STL containers:
Have performance guarantees
Can be used in STL algorithms, which also have performance guarantees (see the sketch after this list)
Can be leveraged by third-party C++ libraries like Boost
Are standard, and likely to outlive proprietary solutions
Encourage generic programming of algorithms and data structures. If you write new algorithms and data structures that conform to STL you can leverage what STL already provides at no cost.
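As a trivial illustration of the container/algorithm fit mentioned in the list above (a sketch):

#include <algorithm>
#include <vector>

int main() {
    std::vector<int> v;
    v.push_back(3); v.push_back(1); v.push_back(2);

    std::sort(v.begin(), v.end());                       // O(n log n) guaranteed
    std::vector<int>::iterator pos =
        std::lower_bound(v.begin(), v.end(), 2);         // O(log n) on random access
    (void)pos;
}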
Compatibility with other libraries (such as boost) in syntax, interoperability, and paradigm. It's a non-trivial benefit.
Using STL will develop a skillset that is more likely to be useful in other contexts. MFC isn't so widely used any more; STL is.
Using STL will develop a mindset that you may (or may not) find useful in code you write yourself.
Using something other than STL isn't inherently wrong though.
STL has more collection types than MFC
Visual Studio (2008+) debugger visualizes STL much better than MFC. (AUTOEXP.DAT magic can fix that - but it is a pain! Nothing like debugging your debugger when you screw it up...)
One good thing with MFC is that there is still a large corpus of MFC code out there. Other answers talk about third-party compatibility. Don't forget third party MFC-based stuff.
I always prefer using more standard/compatible libraries where I can since I may have future projects that I can reuse a portion of the code on. I have no idea what libraries future projects will use, but I have a better chance of making my code reusable if I use standard/compatible stuff.
Also, the more I use a library, the more comfortable and quicker I get with it. If I am going to invest the time to learn a library, I want to make sure it's going to stay around and is not tied to a specific platform or framework.
Of course, I say all of this assuming that my choices are rather similar when it comes to performance, features and ease of use. For instance, if the MFC classes are a significant enough improvement in these areas, I would use them instead.
In fact you can use STL algorithms on some of the MFC containers as well. However, STL containers are preferred for another very practical reason: many third-party libraries (Boost, arabica, Crypto++, utf-cpp...) are designed to work with STL, but know nothing about MFC containers.
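For instance, a sketch assuming CArray, whose GetData()/GetSize() expose the underlying buffer as a pointer range:

#include <afxtempl.h>    // CArray
#include <algorithm>

void sortScores(CArray<int, int>& scores)
{
    // A pointer range works as a pair of random-access iterators,
    // so standard algorithms apply directly.
    std::sort(scores.GetData(), scores.GetData() + scores.GetSize());
}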
MFC containers derive from CObject, and CObject has its assignment operator declared private. I have found this very annoying in practice.
std::vector, unlike CArray, guarantees that the memory block is contiguous, so you can interop with C programming interfaces easily:
std::vector<char> buffer(1024);   // must be non-empty before taking an address
char* c_buffer = &buffer[0];      // &*buffer.begin() is undefined if the vector is empty
It is now assumed that C++ developers are at least passingly familiar with the STL. Not so for the MFC containers. So if you're adding a new developer to your team, you will have an easier time finding one who knows the STL than one who knows the MFC containers, and they will thus be able to contribute immediately.
I think it boils down to a simple question: Who do you trust more? If you trust Microsoft, then continue to use the MFC variants. If you trust the industry, then use STL.
I vote for STL because the code that runs on Windows today might need to be ported to another platform tomorrow. :)
This is a case of whichever tools work for the job you want to do, and 'better' being a subjective term.
If you need to use your containers with other standards-compliant code, or if it is ever going to be code that is shared across platforms, STL containers are probably a better bet.
If you're certain that your code will remain in MFC-land, and MFC containers work for you, why not continue to use them?
STL containers aren't inherently better than MFC containers, however as they are part of the standard they are more useful in a wider range of contexts.
Next to the aspects already mentioned - well supported, standard, optimized for performance, designed for use with the algorithms - I might add one more: type safety, and loads of compile-time checks. You can't even imagine drawing a double out of a std::set<int>.
Because a library that uses iterators to combine sequences of any kind with algorithms - so that A) all sensible permutations are possible and B) it's universally extensible - is (when you think about what a container library looked like 15 years ago) such a mind-blowingly marvelous idea that it has pretty much blown everything else out of the water within less than a decade?
Seriously, the STL has its flaws (why not ranges instead of iterators? and that erase-remove idiom is not exactly pretty), but it's based on a brilliant idea and there's very good reasons we don't need to learn a new container/algorithms library every time we start at a new job.
I wouldn't totally dismiss the portability argument. Just because you use MFC today doesn't mean that you always will. And even if you mostly write for MFC, some of your code could be re-used in other applications if it were more generic and standards-based.
I think the other reason to consider STL is that its design and evolution has benefited from the libraries that came before it, including MFC and ATL. So many of the solutions are cleaner, more reusable, and less error-prone. (Though I wish it had a better naming convention.)

What are the good parts in the poorly-thought-of non-standard C++ libraries?

In trying to get up to speed with C++ (coming from a long experience with C), I am obviously trying to do the right thing and use as much as possible that is standard.
However, in my readings on the matter I come across a lot of criticism for standard things, and praise for non-standard things. For example, even the (I assume) poorly-thought-of MFC library has features - for example, in its CString class - that some folks think useful enough to cause them to continue using it, despite the fact that (a) it's non-standard and (b) it's (I assume, from the wealth of criticism) deficient in many important ways.
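(One commonly cited CString convenience, for concreteness - a sketch assuming an ATL/MFC build:)

#include <atlstr.h>      // CString (shared ATL/MFC implementation)

void example()
{
    int done = 3, total = 10;
    CString s;
    s.Format(_T("%d of %d items"), done, total);   // printf-style formatting
    // The nearest C++03 standard equivalent needs std::ostringstream or
    // snprintf into a raw buffer - noticeably clunkier for quick jobs.
}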
My question is twofold, then:
A. Which poorly-thought-of libraries contain features that nonetheless make it worth continuing to use them? What are those features, and what's so good about them?
B. Are there "adaptor" libraries out there that simplify and/or tighten up the use of such libraries, e.g. providing nice interfaces that guard against resource leaks, adaptors that go from a non-STL library interface to an STL one, and so on?
As a relative newbie to StackOverflow, I'm not 100% sure that this question is sufficiently on-point, so I apologise up-front if it's too open-ended.
Thanks in advance
My personal grudge is with ACE. It was sort of the other way around: great ideas, nothing else available at the time for cross-platform threaded and network development in C++, wide deployment, books by the library authors, etc. But the implementation was terrible, usage patterns were complicated, and almost all the useful features of C++ were suppressed (or didn't exist at the time). I think this library alone is responsible for a good chunk of people thinking that C++ is hard and ugly. Very recently the Boost collection started catching up with threads, ipc, and networking, so there is at least an alternative. BUT, with all that said, I still think it's worth being familiar with ACE if you are in that space since, again, way too many people use it, the ideas are good, and it can serve as a great negative example for library design.
IMO the best thing about MFC is that, historically, it was available before the STL was available, and something was better than nothing.
MFC is still good, if you're writing code to be compatible with an existing MFC code base.
Apart from that there's little merit in MFC, except that it's perhaps still the most obvious (or at least one of the most obvious) C++ class libraries for Windows.
On a more general note I think one of the reasons that people stick with old, awkward and perhaps even obsolete libraries is that they have grown used to it and anything new and shiny might do the job better but are harder to grasp/understand initially. And so you stay with what is familiar and keep your productivity.
I think it is a balance between getting the job done and getting the job done "right". Somewhere in between is probably where most of us end up.
CString is non-standard because MFC is not cross-platform, I think. std::string is standard, but if we are to use everything standard, then why did MS develop MFC?