Boost advocacy - help needed - c++

Possible duplicates:
Is there a reason to not use Boost?
What are the advantages of using the C++ BOOST libraries?
OK, the high-level question is: "Please provide me with what you consider the most effective arguments for why the entire Boost library, or some specific parts of it, should be compiled on our company's system and endorsed in our software engineering standards."
Details of what I need:
I would gladly accept both positive arguments (why install it) and proposed rebuttals of likely counter-arguments I might hear (see question context below).
Arguments should be aimed at technical Software Engineering team members and/or very technical senior managers - in other words, for the latter, the details of the argument may (and should) be technical, but its thrust should be "this would make/save the company X money versus costing it Y money to add to our toolset".
Context of the question:
I'm a developer in a company with several hundred developers, many dozens of whom do C++.
I had the (mis)fortune of being reassigned from my beloved Perl development spot to a team where I am also doing C++ development. So far I have found numerous things that I could do easily in Perl but that are very hard or cumbersome in C++ (a foreach loop, for example), and any time I hit one of these, about half the time the answer ends up being "You can't do this in standard C++, but you can do it with Boost."
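To make the foreach example concrete, here is a minimal sketch of the gap, assuming a pre-C++11 compiler (BOOST_FOREACH comes from the Boost.Foreach library):

#include <vector>
#include <boost/foreach.hpp>

void example() {
    std::vector<int> v(3, 42);

    // Standard C++98: iterator boilerplate for a simple traversal
    for (std::vector<int>::const_iterator it = v.begin(); it != v.end(); ++it) {
        int x = *it; (void)x;
    }

    // With Boost: one line, close to Perl's "foreach my $x (@v)"
    BOOST_FOREACH(int x, v) {
        (void)x;
    }
}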
Our toolkit includes some legacy RogueWave libraries and a VERY limited number of Boost libraries (e.g. no regex, no foreach), of very old vintage.
Any development must use libraries compiled and vetted by Software Engineering team. That is a hard and fast rule.
The SE team is somewhat resistant to adding new libraries, for a variety of reasons (e.g. the effort involved; functionality conflicts with RogueWave, for example over regex; the risk of installing and using any new software; the cost of educating developers, etc.). They will add libraries if presented with a sufficient business need or a convincing cost/benefit argument, but they have a pretty tough threshold.
So, I'm looking for examples of which parts of Boost are so wonderful (with exact cost/benefit estimates) that installing them would be an Obviously Worth It Effort for Software Engineering.
Thanks in advance for any ideas/suggestions/examples.
Please don't mark this question as subjective as I am looking for measurable answers, not merely wonderful feelings :)

Everywhere I worked in the last decade, if they had their own smart pointer class, I found bugs in it - usually within a few weeks. And no, I never went looking at it hoping to find errors.
I got into the habit of posting the following quote from the TR1 smart pointer proposal:
The Boost developers found a shared-ownership smart pointer exceedingly difficult to implement correctly. Others have made the same observation. For example, Scott Meyers [Meyers01] says:
"The STL itself contains no reference-counting smart pointer, and writing a good one - one that works correctly all the time - is tricky enough that you don't want to do it unless you have to. I published the code for a reference-counting smart pointer in More Effective C++ in 1996, and despite basing it on established smart pointer implementations and submitting it to extensive pre- publication reviewing by experienced developers, a small parade of valid bug reports has trickled in for years. The number of subtle ways in which reference-counting smart pointers can fail is remarkable."
This plus a detailed analysis of the bug(s) I found usually got me the job of incorporating the boost libs into the code base. :)
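To give a concrete flavour of the "subtle ways" Meyers mentions, here is a hypothetical sketch (not from any specific codebase) of a classic self-assignment bug that hand-rolled reference-counting pointers often contain:

template <typename T>
class NaiveSharedPtr {
    T*    ptr_;
    long* count_;
public:
    explicit NaiveSharedPtr(T* p) : ptr_(p), count_(new long(1)) {}
    NaiveSharedPtr(const NaiveSharedPtr& o) : ptr_(o.ptr_), count_(o.count_) { ++*count_; }
    ~NaiveSharedPtr() { if (--*count_ == 0) { delete ptr_; delete count_; } }

    NaiveSharedPtr& operator=(const NaiveSharedPtr& other) {
        // BUG: on self-assignment (&other == this) the object is released
        // before it is copied, so ptr_ and count_ are read after being freed.
        if (--*count_ == 0) { delete ptr_; delete count_; }
        ptr_ = other.ptr_;
        count_ = other.count_;
        ++*count_;
        return *this;
    }
};

boost::shared_ptr gets this right, along with things the sketch doesn't even touch (thread-safe counts, weak references, custom deleters), which is exactly why reimplementing it rarely pays.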

It seems to me you're doing this the wrong way around. Since proposals to add new libraries are going to be met with a lot of resistance, don't even bother trying to argue for boost as a whole. Choose your battles instead.
Find the specific Boost libraries which you know (with your knowledge of the application it's to be used in) will be useful and save time and money. And then propose adding those.
I could easily list the Boost libs I've found useful, and why I think they're great, but I don't know if they'll be of any use in your application.
Push for individual boost libraries to be included, and then perhaps, over time, so many of them will be included that it'll be simpler for everyone to just include all of Boost.

It's an open standard not controlled by a specific company ( no licensing costs )
It's cross platform
It's expertly designed and written, very fast and efficient, and extensively tested.
There are open source implementations which your team can compile themselves.
Parts of Boost will soon become part of the standard C++ library.
Here is a slightly dated 2005 article from Dr. Dobb's discussing the upcoming C++0x standard.
http://www.ddj.com/cpp/184401958

I had to maintain a component using that old vintage Tools.h++ from RogueWave, on a Solaris system.
On Solaris, if we wanted to use Boost, we needed to use either gcc, or SunStudio with the STLport implementation of the standard library (instead of the RogueWave one). And since Tools.h++ requires the old RogueWave pre-standard implementation on Solaris, I had to give up on Boost.
In the end I rewrote a simplified version of a few boost-like features I needed.
If you are in that same situation (*), you will not be able to move from the RogueWave library to Boost that easily. There is a non-negligible cost in this operation; for instance, the pointer containers from the two libraries have quite different interfaces (see the sketch after the footnote below).
(*) Where you can't slowly change bits of the old code to use Boost progressively. In that situation, the migration has to be radical, simultaneously replacing every use of Tools.h++ with something newer, or better.
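As a minimal illustration of the interface mismatch (showing only the Boost side, since the Tools.h++ API is vendor-specific and pre-standard): Boost's pointer containers take ownership of raw pointers and hand elements back as references.

#include <boost/ptr_container/ptr_vector.hpp>

struct Widget { void draw() {} };

void example() {
    boost::ptr_vector<Widget> v;
    v.push_back(new Widget);  // the container takes ownership
    v[0].draw();              // elements come back as references, not pointers
}                             // all Widgets deleted automatically here

Code written against a container that returns raw pointers has to be touched at every call site, which is where the migration cost comes from.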
NB: Most people are able to adopt Boost progressively in old projects, and so may miss this very important, and yes technical, point. Hence my negative answer.

Boost is a great tool and an invaluable part of our product development (we'd be lost without smart_ptr)... but because it is changing so fast, the stability of releases can be affected.
For example, we were happily introducing new versions of Boost as soon as they came out, without thinking twice. That is, until we were stung by a bug in the 1.35 threading library that produced occasional (i.e. difficult-to-debug) but critical errors. Fortunately we identified the issue before anything was released to the public and could move back to 1.34.
Ever since then we've taken a specific version, extensively tested it, and not updated without a compelling reason to do so.

Here are two suggestions for advocating boost:
Who's Using Boost? (http://www.boost.org/users/uses.html)
Lots of major projects use Boost (e.g., Adobe Photoshop, CERN).
Boost Project Cost (http://www.boost.org/development/index.html)
How much would it cost to hire a team to write Boost from scratch? There's a nifty (somewhat gimmicky) calculator there that helps put it in perspective.
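For a back-of-envelope feel of what such calculators compute, here is a hedged sketch using the basic COCOMO "organic" model (effort in person-months = 2.4 * KLOC^1.05); the line count and the salary figure below are assumptions for illustration, not Boost's actual numbers:

#include <cmath>
#include <cstdio>

int main() {
    const double kloc   = 1000.0;                     // assume ~1M lines of code
    const double months = 2.4 * std::pow(kloc, 1.05); // basic COCOMO, organic mode
    const double cost   = months * 10000.0;           // assume $10k per person-month
    std::printf("~%.0f person-months, roughly $%.1fM\n", months, cost / 1e6);
    return 0;
}

Even with conservative inputs this lands in the tens of millions of dollars, which is the point the Boost page is making.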

Related

Has Boost ever been used in a regulated project (FDA, FAA)?

While posting a comment recently, I found myself remarking that, in my experience, Boost is not widely used in regulated industries (FDA, FAA). In fact, I don't know of any project that uses it or has used it. I realize, though, that my experience may be lacking here, so I wanted to know if anybody had knowledge of a project using boost in a medical device or in an aviation flight system (lighting, cabin controls, cockpit equipment, etc.).
I am not sure this is the right place to ask it (maybe some other SO site), but I thought this would be a good place to start.
This is not a question about whether or not boost should be used in these areas, it is a question about anybody knowing if it has been used.
EDIT Some examples projects that might help clarify this: Aircraft cabin lighting systems, cabin management systems, cockpit instrumentation, infusion/food/insulin pumps, dialysis machines, laboratory diagnostic devices, blood center data collection systems, etc. Some are life sustaining or potentially flight critical, some collect data, some collect data used to make medical decisions, etc., but I believe all are covered as regulated devices by the FAA/FDA.
EDIT Outside libraries (ones that did not come with the development chain) are often brought into these types of projects for other purposes (graphics libraries, drivers, USB stacks, etc.). These are treated as SOUP (Software Of Unknown Provenance). The use of Boost would fall under this approach. Does anybody know of a project where Boost was used this way?
EDIT Boost is a very large framework, with multiple components. I'm looking for any part of it that has been used in a project. For example "Boost smart pointers" or "Boost Enable" or "Boost Array" or "Boost Optional", etc. But used in "whole", not part. Not used by looking at the Boost code and re-using the idea; used as a whole component of the system (i.e. the legal sense).
This is central to the question, because used in this way means that tradeoffs of handling the SOUP must be dealt with. This may place this question outside the scope of this SO site...not sure.
I would say that a lot of it comes down to how the system designer(s) handle system safety at an architectural level.
Triplication
For instance, if the approach taken is triplicate redundancy with a trusted voter, then system trials/testing is going to be the major step in approving the implementation. Suppose that one of the triplicates' development teams chose to use Boost. If the system as a whole passed all its test vectors, then one could argue that one didn't need to trawl through Boost itself looking for implementation errors. Obviously if all three triplicates had chosen to use Boost, that would be cause for concern, because then the scope of the test vectors becomes unmanageable.
Triplication is a standard approach to handling the problem of using software resources like compilers, libraries and programmers all of which are at risk of error. Boost is just another one of these. One could argue that Boost shared pointers are clearly a way of reducing the risk of programmer error. That in the round is most likely to be beneficial to the system as a whole.
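To make the triplication idea concrete, here is a minimal sketch of the 2-of-3 majority voter such architectures rely on (illustrative only; a real voter also handles timing, staleness and channel health):

// Returns the majority value of three independently computed results.
// If all three disagree, 'agreed' is cleared and the caller must fall
// back to a safe state.
template <typename T>
T vote2of3(const T& a, const T& b, const T& c, bool& agreed) {
    agreed = true;
    if (a == b || a == c) return a;
    if (b == c) return b;
    agreed = false;
    return a;  // arbitrary; the caller must not trust this value
}

The certification case then leans on the voter and the independence of the three channels, not on proving every line of whatever library (Boost included) one channel happens to use.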
Important, but Not Safety Critical
Where it gets interesting is when triplication is not being used, and one is now in the realms of really having to trust stuff. Interestingly the way a lot of systems seem to get round the problem is to say that ultimately there is a human in control who is supervising and able to intervene in the event of system error.
For example the car industry has a set of programming rules called MISRA. The software for an ABS system is supposed to be written to this rule set, and the development tools are supposed to be set to enforce those rules on the source code. The idea is that this will reduce the risk of undetected bugs to an acceptable level. And because ultimately there is a driver driving the car they can always do their own cadence braking. And thus the car industry has avoided having to have a triplicate implementation of ABS.
They are extending the same philosophy to more complex car systems like adaptive cruise control and self-driving cars. Personally I think that such an extension is unreasonable for self-driving cars. The relevant legislation makes it clear that it is the driver's fault if such a vehicle has a crash (i.e. you are still 'driving' it), but the glossy advertising won't dwell on that important aspect.
It's the same in the world of medical devices; there's supposed to be a nurse or someone monitoring the patient anyway, so the occasional blip is covered by that supervision. The whole thing is very poor anyway; whilst the software for a medical device may have been written, tested and approved, quite often these things run on embedded Windows XP. They all get networked up and end up infested with computer viruses, etc. The FDA won't let you have an auto-update system in place; only the device vendor can update it, and of course they can't ever hope to keep up. So you end up with a nicely written, well-tested piece of medical software running on top of an OS installation which has had all the world's hackers running around inside it doing god knows what. I think the use of Boost in these circumstances is not going to add much to the overall system risk.
So if a MISRA compliant toolchain offered Boost as part of that toolchain then I don't see why that would be any different to a toolchain offering a standard C library. If the toolchain vendor is certifying it then it's no different to the situation with anything else.
There are weaknesses with that approach. In my experience I have come across a MISRA-compliant toolchain in widespread use whose compiler turned out junk object code when all optimisations were turned on. I was actually able to verify this in the disassembly. I then took a look at the source code for the toolchain's standard C library, and it clearly wasn't itself written to the MISRA rule set; furthermore it contained glaring, horrible bugs.
And yet there is no regulatory block to building, testing and selling a car ABS system using this tool chain so long as you tick the MISRA checkbox in the project settings. Adding Boost to that toolchain would hardly make matters worse.
Safety Critical Without Triplication
The final approach is no triplication and no human supervision. This is really hard because you then need formal proof of the correctness of every component of the tool chain, OS, drivers, chips, etc. AFAIK it's never been done for a truly safety critical system like nuclear reactors, flight control avionics, or other systems that really will definitely kill people if they go wrong.
The only thing that comes close so far as I can tell is Green Hills' compiler suite and their INTEGRITY operating system. They can give you (for a large fee) formal testing and verification evidence for every single line of the OS, all their libraries and their compiler, everything. If one were ever to attempt a truly safety-critical system without triplication, that would be a starting point.
I don't think they've done a C++11 toolchain yet, though I have added Boost to their toolchain and it worked just fine (it wasn't in a safety-critical system, I hasten to add).
Conclusion
Certainly if outfits like Green Hills, with a well-deserved reputation for reliable and thoroughly tested toolchains, offered Boost, then I think one would be in a good position to use it in a regulated system. However I doubt that the whole of Boost would ever be offered that way; they are more likely to track the compiler standards.
I also know that GCC has in the past been put through formal compiler validation testing so that it could be used in Stuff That Matters. I expect that will be repeated sooner or later for the more recent incarnations that have taken on aspects of Boost.
I think the best answer we can have here is "yes and no." I will try to explain why.
Boost is a huge umbrella for many constituent libraries. Some of them depend on each other in various ways (e.g. when a higher-level library needs features provided by a lower-level part of Boost like Type Traits). This limits the usefulness of a simple answer to the question, because if three parts of Boost have been used in a regulated project, but they are different parts than you want to use, it is of little value to know this. And we will never know the full answer regarding all parts, because you cannot prove a negative (and there are too many parts to ever expect a "100% yes" answer).
Boost is (and always has been) rapidly evolving. Entirely new libraries are added all the time. ASIO is a big one that didn't exist at all until somewhat recently. This makes it even more difficult to answer the question, because over time there are parts of Boost which are young and not as well tested as others. Additionally, existing libraries sometimes go through backward-incompatible revisions (e.g. "Boost Filesystem 3" not too long ago).
Many parts of Boost end up in projects not by a traditional dependency but rather by copy-pasting code from Boost, and perhaps modifying it to taste (e.g. adding or removing support for specific compilers). Likewise, many parts of Boost end up in projects via the fact that Boost is sort of a proving ground for many new C++ standard library features, such as shared_ptr (C++11) and unordered_map (TR1). Some features which are part of the language today were originally part of Boost, so many people have used "Boost code" without even knowing it.
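A hedged illustration of that proving-ground effect: the Boost originals and their standardized descendants map almost one-to-one, so code written against one often migrates mechanically to the other.

#include <boost/shared_ptr.hpp>     // standardized as std::shared_ptr (C++11)
#include <boost/unordered_map.hpp>  // standardized as std::unordered_map (TR1/C++11)
#include <string>

void example() {
    boost::shared_ptr<int> p(new int(42));          // same semantics as the std version
    boost::unordered_map<std::string, int> counts;  // same interface as the std version
    counts["boost"] = 1;
}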
Note that code does not somehow become safer when it transitions to official status within the language; GCC has had bugs which did not exist in the Boost equivalent implementations of the same concepts. This matters when considering practical questions like "Should we allow the use of Boost in our project or should we restrict ourselves to what the compiler vendor gives us?" If you are thinking of using a feature which has been implemented very recently by your compiler vendor (say, within the past year), you may well be better off using a third-party (e.g. Boost) implementation which is more mature.
Finally, since it seems that the impetus for this question is to gain some reassurance that using Boost is not a bad idea for a production project: I would certainly say that in general using Boost is fine and good, with the huge caveat that you need a local expert in Boost who knows which parts of Boost should not be used in your domain. For example, Boost Spirit, Phoenix, and Wave are examples of libraries which have been in Boost for a while but which very few people truly, deeply understand. It's one thing to use library code you don't fully understand (we all do), but quite another to use code which almost no person on earth understands.
In summary, I don't think anyone will be able to give you the reassurance that you seek that Boost is OK for safety-critical systems. You need to evaluate it on your own, the same as you need to evaluate your own compiler vendor's software, your other third-party dependencies, and the code you write yourself. I have used all four categories of software quite a lot, and in my experience Boost had fewer critical bugs than any of the others, and fewer regressions than either GCC or my own code.

Should you migrate a project to C++11?

I have been trying to get our team to migrate a large C++ project to VS2012 from VS2008. I want to do it mostly because I want to start using C++11 and the IDE is much nicer. So my reasons are somewhat selfish.
My team lead is pushing back because he doesn't see the business case for the migration, citing that most of the performance-improvement features we'd get with C++11 we already have with BOOST and other libraries. He also says that this will require changing runtimes on all of our platforms, which may change certain behavior. That would mean we would need to retest on all of the servers we have deployed to.
The first argument I somewhat understand, although I believe C++11 code will be much cleaner than using BOOST (again not a great business case).
The argument about using different runtimes I don't understand. What runtime does a native C++ application use? This is not VC++. Would his concern be just that the STL won't be exactly the same implementation?
I don't see what the issue there would be. Is there something I am missing? Are there any other good arguments for migrating that I should cite to help my case?
All 3rd party libraries need to be built with the new compiler
Code might currently be unknowingly reliant on undefined behavior, a new compiler might do something totally different for UB than the current one (and cause problems)
Performance won't change much because you wouldn't have been coding in C++11 style (basically, lots of stuff is passed by value where before it would not have been). If your code base has a lot of...
std::vector<Blah> func(std::vector<Asdf> v); // notice all the pass by value
... C++11 could be a great performance improvement. But in C++98/03 you just wouldn't do that.
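To spell out why that style becomes cheap in C++11 (a minimal sketch; Blah and Asdf are the hypothetical types from the signature above):

#include <utility>
#include <vector>

struct Asdf {};
struct Blah {};

// Pass-by-value: in C++03 this forces a deep copy of the argument; in C++11
// the caller can move into it and the return value is moved out (or elided).
std::vector<Blah> func(std::vector<Asdf> v) {
    return std::vector<Blah>(v.size());
}

void example() {
    std::vector<Asdf> data(1000);
    std::vector<Blah> out = func(std::move(data)); // no deep copies in C++11
}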
You need to lower the barrier to entrance for your team lead. Do the migration yourself and smoke test your product. Then show it to him. After that, here are abstract reasons to upgrade:
C++11 style is less code and simpler
VS2012 has the C++11 standard library additions - you can stop hand-rolling 50 bug-ridden replacements (see the sketch after this list)
Programmers want to work using a modern language and modern tools. This will spark a company wide resurgence of learning and best practices which will improve code quality, employee retention, employee continuing education, etc
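A hedged sketch of what those standard additions replace (VS2012 ships std::thread, std::mutex and std::make_shared; Widget here is a hypothetical stand-in for the hand-rolled Win32 wrappers teams used before):

#include <memory>
#include <mutex>
#include <thread>

struct Widget { void update() {} };  // hypothetical stand-in

void example() {
    std::mutex m;
    std::shared_ptr<Widget> w = std::make_shared<Widget>();

    // Previously: a hand-rolled thread class over CreateThread, a
    // CRITICAL_SECTION wrapper, and a home-grown ref-counted pointer.
    std::thread t([&] {
        std::lock_guard<std::mutex> lock(m);
        w->update();
    });
    t.join();
}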
It's a delicate balance when to do this sort of upgrade. If you do it too often, you're spending money without gaining any business advantage. If you do it too infrequently, you're dealing with so much legacy tech and legacy code that maintenance can become a nightmare. When a significant language change comes along that you will ultimately move to, it's best to do it sooner rather than later (and by the way, this isn't particularly soon) - otherwise you just keep accumulating what will later be considered legacy code. Moving to a new compiler just for new tools often isn't worth it. Moving for an important language upgrade usually is.
Whether any of that is compelling to your team lead is beyond me. Good luck though
VS2012 is so last year, obsolete already; it's a good thing Microsoft replaced it after only one year of use, criticism, eye-blinding whiteness, and ALL CAPS menus!
But since you can build a VS2008 project in the newer IDEs, you can upgrade to VS2013 today and move the project to the VS2013 tooling over time.
Your TL is correct, though: an upgrade requires a complete re-test. But if you have time between adding features, it is possible to fit such a large test in.
I'd say the main reason to upgrade is just to keep up to date. It's not such a big deal today, but in another 5 years your old VS2008 builds might start to hold you back (as I learned recently, upgrading a VS2002 project to 2010). It's never a good idea to fall so far behind: the longer you leave it, the greater the effort the eventual upgrade will take. That's the real reason to do it - the chances that Microsoft will not support 2008 builds in the next version, and that the older IDEs won't run on Windows 9, grow greater every year. Best to fix this problem while you have time.
I guess I wouldn't expect much in the way of performance improvements, but regarding testing, I'd recommend a full retest on any kind of compiler or toolchain change, regardless of whether you change your code.
EDIT: I'll add that I'd push to move to the more recent compiler on the next release cycle... that removes the testing argument (since you'll need to test anyway).

Boost library acceptance in industry

I have seen lots of people suggesting the Boost library on Stack Overflow, so I am also thinking of learning it. But today I came across this link: http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Boost
I wanted to know about its acceptance in industry at a broader level. My current company also doesn't allow me to use it, so I am unsure whether to look into it or not.
Parts of the Boost library are currently being accepted into the standard library for C++0x, and it is regarded as one of the top libraries, with high industry acceptance. I'm actually unaware of any other library being accepted into the C++ standard library on such a large scale.
"Ten Boost libraries are already included in the C++ Standards Committee's Library Technical Report (TR1) and will be in the new C++0x Standard now being finalized. C++0x will also include several more Boost libraries in addition to those from TR1. More Boost libraries are proposed for TR2."
You should definitely look into this. Don't go by Google or any other large institution. They generally have to work with a subset of any complex language like C++. Hence, they'll have restrictions on which parts they can use so that it's easier to hire and train engineers to use the code base.
Additionally, Boost leverages many aspects of the higher forms of functionality within C++, case in point: template metaprogramming. Boost provides a safer, though bulkier, form of functions as first-class objects. It adds a more powerful "bind" which works so well with the standard library that I'd be lost without it. Finally, it has tuples and hash tables, both fundamental data types in modern development libraries.
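A small sketch of that "more powerful bind" working with a standard algorithm (Account and overdrawn are hypothetical names for illustration):

#include <algorithm>
#include <vector>
#include <boost/bind.hpp>

struct Account {
    int balance;
    bool overdrawn() const { return balance < 0; }
};

long countOverdrawn(const std::vector<Account>& accounts) {
    // boost::bind adapts the member function so count_if can apply it
    // to each element; _1 is the placeholder for that element.
    return std::count_if(accounts.begin(), accounts.end(),
                         boost::bind(&Account::overdrawn, _1));
}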
In short, I really can't name one reason why you wouldn't want to look at Boost, even just to learn something. It's peer reviewed and largely platform independent. The source code is a treasure trove of information and more advanced programming techniques.
I think the "Who's Using Boost?" web page speaks for itself. Notably, Adobe, McAfee, and RealNetworks probably qualify as industry.
"My current company also does not allow me to use [boost]. So I am confused whether to look into it or not."
You might want to dig a little further and find out why. As others have said, Boost is a fantastically useful set of open-source, peer-reviewed libraries of extremely high quality. Look at their development LOC chart for an idea of how long it would take and how much $$ it would cost your company to reinvent the wheel.

Why is the Loki library not more widely used?

The Loki library implements some very widely used concepts (smart pointer, visitor, factory, etc.). The associated book "Modern C++ Design" is often mentioned, but the library itself is not widely used. Why is that?
Most developers seem to prefer Boost. In particular, why do people often decide to use Boost's smart pointers rather than Loki's?
Loki is a research/proof-of-concept sort of thing. Alexandrescu pushes new ideas, other people adopt those for real world. Also boost::shared_ptr is almost literally in TR1.
Loki suffers from being a good library touching on several functional areas (template metaprogramming support with a few specific applications: smart pointers, singletons, function objects, scope guards, etc.), whereas Boost is a collection of many libraries, typically covering each functional area exhaustively, and tuned for portability first.
When 9 birds out of 10 can be killed with the same stone, many people just start with boost and fill in the gaps with third party libraries. It's very hard to compete with boost if you overlap. Because you won't overlap with much of boost, people will download/install boost anyway to get the other functionality, so unless you nail an area that boost is weak at - and the difference is significant to the project, they'll "settle" for boost there too.
Further, Alexandrescu made repeated attempts to get Loki included in boost, and some of the key boost authors just weren't cooperative. My personal view is that they want the more complete but much less user-friendly MPL to have more "market share": as authors of the library and the hard-copy books that are the only decent documentation (in stark contrast with most other boost libraries which have excellent online documentation), they do quite well out of this.
If anyone is offended by and disagrees with this analysis, I'm all ears.
Another practical issue with extremely parameterised code is that in large projects where different developers/teams work independently, they'll often end up using subtly different instantiations of the same template pretty arbitrarily. This makes it harder to pass values between those subsystems: the receiver may need to:
be parameterised (i.e. templated, and hence inline, which introduces compilation dependencies and slower builds in enterprise-scale systems)
provide some minimal coverage for all possible instantiations (e.g. checking error codes and expecting/handling exceptions)
work through some compile-time to run-time hand-over (based on an abstract base accessor with implementations for each instantiation), which compromises some of the performance benefits of parameterisation
This is all possible, but it takes a great programmer to navigate the terrain.
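A hypothetical sketch of that mismatch using Loki's policy-based SmartPtr (policy names as in Loki's documentation; the surrounding types are invented):

#include <loki/SmartPtr.h>

struct Widget {};

// Team A picks reference counting; Team B picks deep copies.
typedef Loki::SmartPtr<Widget, Loki::RefCounted> TeamAPtr;
typedef Loki::SmartPtr<Widget, Loki::DeepCopy>   TeamBPtr;

void consume(TeamAPtr p);  // cannot accept a TeamBPtr: they are unrelated types

Every boundary between the two subsystems now needs conversion glue, an extra template parameter, or a common abstract base - exactly the three options listed above.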
You want to use a library that the next programmer is going to know and that is going to be well supported in the future - so you pick a major lib.
Because it's a major lib lots of people use it, so it becomes the default choice.
I actually prefer Loki's way of doing things, and I have myself contributed a Decorator pattern to Loki, which now sits in the tracker because, as far as I know, the project is no longer maintained.
I use boost::shared_ptr just because it will be the standard very soon. I may dislike the fact that I can't customize it to act exactly the way I want, but I have to live with it.
Usage of the standard library is important as it keeps the code maintainable by other programmers. If it's open source and you want to experiment go ahead and use Loki. No one is stopping you.
Actually Windows Vista uses some of Loki's features.
I am guessing they are not using the redundant implementations of smart pointers and visitors.
Speaking as someone who's used quite a bit of the Boost library, and also looked at Loki more than once, the biggest problem was the sparsity of documentation. Also, Loki uses some of the hairiest bits of C++ templates. Exciting stuff, but also rather daunting.
I used Loki once for a little tool (basically an interpreter) and actually liked it. My coworkers were less enthusiastic about the library, so its use remained constrained to this small sub-project.

What are the good parts in the poorly-thought-of non-standard C++ libraries?

In trying to get up to speed with C++ (coming from long experience with C), I am obviously trying to do the right thing and use as much as possible that is standard.
However, in my readings on the matter I come across a lot of criticism of standard things and praise for non-standard things. For example, even the (I assume) poorly-thought-of MFC library has features (in its CString class, for example) that some folks find useful enough to keep using it, despite the fact that it's (a) non-standard and (b) (I assume, from the wealth of criticism) deficient in many important ways.
My question is twofold, then:
A. What libraries that are poorly-thought-of contain features that nonetheless make it worth continuing to use them, what are those features, and what's so good about them?
B. Are there "adaptor" libraries out there that simplify and/or tighten up the use of such libraries, e.g. providing nice interfaces that abstract resources leaks, adaptors that go from a non-STL library interface to a STL, and so on
As a relative newbie to StackOverflow, I'm not 100% sure that this question is sufficiently on-point, so I apologise up-front if it's too open-ended.
Thanks in advance
My personal grudge is with ACE. It was sort of the other way around: great idea, nothing else available at the time for cross-platform threaded and network development in C++, wide deployment, books by the library authors, etc. But the implementation was terrible, the usage patterns were complicated, and almost all the useful features of C++ were suppressed (or didn't exist at the time). I think this library alone is responsible for a good chunk of people thinking that C++ is hard and ugly. Very recently the Boost collection started catching up with threads, IPC, and networking, so there is at least an alternative. BUT, with all that said, I still think it's worth being familiar with ACE if you are in that space since, again, way too many people use it, the ideas are good, and it can serve as a great negative example of library design.
IMO the best thing about MFC is that, historically, it was available before the STL was available, and something was better than nothing.
MFC is still good, if you're writing code to be compatible with an existing MFC code base.
Apart from that there's little merit in MFC, except that it's perhaps still the most (or at least one of the most) obvious C++ class libraries for Windows.
On a more general note, I think one of the reasons people stick with old, awkward and perhaps even obsolete libraries is that they have grown used to them, and anything new and shiny might do the job better but is harder to grasp initially. And so you stay with what is familiar and keep your productivity.
I think it is a balance between getting the job done and getting the job done "right". Somewhere in between is probably where most of us end up.
CString is non-standard because MFC is not cross-platform, I think. std::string is standard, but if everything standard were enough, then why did MS develop MFC?
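As a concrete example of the CString convenience the question alludes to (a sketch; CString is MFC/ATL-only, and the values are invented):

#include <atlstr.h>   // CString (ATL/MFC, Windows only)
#include <sstream>
#include <string>

void example() {
    int done = 7, total = 10;

    // The feature people stay for: printf-style Format() built in.
    CString s;
    s.Format(_T("%d of %d items"), done, total);

    // The portable C++98 equivalent takes noticeably more ceremony.
    std::ostringstream os;
    os << done << " of " << total << " items";
    std::string t = os.str();
}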