Embedded C++ project - smart pointer support wanted. Possible portability issues? - c++

I'm working on a big project for embedded systems.
The project is a library and some binaries that must be integrated into customer's code/solution.
So it must be as OS/platform independent as possible.
We've been working on embedded Linux so far without problems. However, it is possible that non-Linux-based platforms will join the fun in the near future.
To illustrate the kind of platform we are working with: they must be capable of running demanding modules such as a Java Virtual Machine.
I'm not sure which kind of platform may show up and what kind of compilers they may offer.
So I'm a little worried about using advanced C++ features or libraries that may cause a lot of trouble. Mainly, I want to avoid the risk of incompatibility down the road.
We are refactoring a few C++ modules of our solution. They are really tricky, and smart pointer support would help a lot.
At first, I thought about making a custom smart pointer package, but that seems a little risky to me (bugs here would cause a huge headache).
So I thought about using boost's smart pointers.
Do you guys think I'm going to have trouble in the future if I use Boost's smart pointers?
I tried to extract the Boost smart pointer package using bcp; however, a lot of other things come along with it, something like 4 MB of code.
The extracted directories are:
config/compiler
config/stdlib
config/platform
config/abi
config/no_tr1
detail
smart_ptr
mpl (and subdirs)
preprocessor (and subdirs)
exception (and subdirs)
type_traits (and subdirs)
That doesn't seem very portable to me (but I may be wrong about it).
What do you guys think about it?
Thanks very much for your help.

Newer compilers include shared_ptr as part of C++11/TR1. If you have a reasonably modern compiler (which you really want to have, because of C++11), then it should not be problematic.
If you do not currently have a customer who cannot use TR1, then rock on with it. You can deal with future customers when they arrive: YAGNI applies here, and smart pointers are very important. As are C++11 features like move semantics.
However, if you were desperate, you could roll your own shared_ptr; the concept is not particularly complex.
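To make that concrete, here is a minimal sketch of what a hand-rolled reference-counted pointer can look like. The class name is made up, and it is deliberately simplified: no thread-safe counting, no custom deleters and no weak references, all of which a production version would need.

    #include <cstddef>
    #include <algorithm>

    // Minimal reference-counted pointer (illustrative sketch only).
    // Not thread-safe; no weak_ptr, no custom deleters, no array support.
    template <typename T>
    class shared_ptr_lite
    {
    public:
        explicit shared_ptr_lite(T* p = 0) : ptr_(p), count_(new std::size_t(1)) {}

        shared_ptr_lite(const shared_ptr_lite& other)
            : ptr_(other.ptr_), count_(other.count_) { ++*count_; }

        // Copy-and-swap assignment: the temporary carries the old state away.
        shared_ptr_lite& operator=(shared_ptr_lite other) { swap(other); return *this; }

        ~shared_ptr_lite()
        {
            if (--*count_ == 0) { delete ptr_; delete count_; }
        }

        void swap(shared_ptr_lite& other)
        {
            std::swap(ptr_, other.ptr_);
            std::swap(count_, other.count_);
        }

        T& operator*() const  { return *ptr_; }
        T* operator->() const { return ptr_; }
        T* get() const        { return ptr_; }

    private:
        T*           ptr_;
        std::size_t* count_;
    };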

Don't hesitate to use smart pointers. The smart pointer package you extracted should be portable to all decent compilers.
If it doesn't work with your compiler, you can replace the conflicting parts of the code manually. Boost code is more complicated than it strictly needs to be because it contains workarounds for various compiler bugs and missing functionality; that's one of the reasons Boost.Preprocessor and Boost.TypeTraits were pulled in.

Boost is very portable; the source code size of the library is no indication of target image size; much of the library code will remain unused and will not be included in the target image.
Moreover, most common (and not-so-common and obsolete) 32-bit platforms are supported by "bare-metal" ports of GCC. However, while GCC is portable without an OS, GNU libc targets POSIX-compliant OSes, so bare-metal and non-POSIX ports usually use alternative libraries such as uClibc or Newlib. On top of these, GNU libstdc++ will run happily, and so will many Boost libraries. Parts of Boost such as threads will need porting for unsupported targets; purely data-structure-related features such as smart pointers have no target-environment dependencies.

Related

Replace Visual Studio standard library

I am on Windows, and suppose I want to use a different implementation of the standard C++ library for my projects - say, libstdc++ or libc++.
Is there a way to persuade my Visual Studio to use it instead of the MSVC library, so I can still use #include <algorithm> and not #include <custom/algorithm>? I believe that I can achieve it by simply adding the path to my headers to the project, but I am looking for a more "system-wide" way, so I don't have to repeat it for every single project.
Will it actually be worth the hassle, specifically in terms of modern C++ features being available / standard-compliant?
If so, what would be the cons of such a replacement, apart from the possibility of using some features that are not present in other implementations?
Particularly, answers to this question mention that there may be compatibility issues with other libs - does this apply only to the Linux world, or will I have problems on Windows too?
Note: this is mostly a theoretical question - I'm fine with MSVC library, but I'd really like to know more about different stdlib implementations.
Theoretically it's not impossible to swap out stdlib implementations. With clang, you can choose between libc++ (clang's) and libstdc++ (GCC's).
However, in practice, stdlib implementations are often tied fairly fundamentally to the internals of the compiler they ship with, especially when it comes to newly added C++ features, and nowhere is this more true than with Visual Studio.
Could you make it work with a lot of hacking around? Maybe. Would it be worthwhile? I very much doubt it. Even if you succeeded, you would have sacrificed a reproducible build environment and would be relying on some deeply dark arts. Your project would not be reusable.
There is no indication in your question as to why you think you need to switch implementations, but it seems unlikely that any reason you could come up with would be worth the trouble.

How to find Boost libraries that do not contain any platform-specific code

For our current project, we are thinking to use Boost framework.
However, the project should be truly cross-platform and might be shipped to some exotic platforms. Therefore, we would like to use only Boost packages (libraries) that do not contain any platform-specific code: pure C++ and that's all.
Boost has the idea of header-only packages (libraries).
Can one assume that these packages (libraries) are free from platform-specific code?
If not, is there a way to identify such packages in Boost?
All C++ code is platform-specific to some extent. On the one side, there is this ideal concept of "pure standard C++ code", and on the other side, there is reality. Most of the Boost libraries are designed to maintain the ideal situation on the user-side, meaning that you, as the user of Boost, can write platform-agnostic standard C++ code, while all the underlying platform-specific code is hidden away in the guts of those Boost libraries (for those that need them).
But at the core of this issue is the problem of how to define platform-specific code versus standard C++ code in the real world. You can, of course, look at the standard document and say that anything outside of it is platform-specific, but that's nothing more than an academic discussion.
If we start from this scenario: assume we have a platform that only has a C++ compiler and a C++ standard library implementation, and no other OS or OS-specific API to rely on for other things that aren't covered by the standard library. Well, at that point, you still have to ask yourself:
What compiler is this? What version?
Is the standard library implementation correct? Bug-free?
Are those two entirely standard-compliant?
As far as I know, there is essentially no universal answer to this and there are no realistic guarantees. Most exotic platforms rely on exotic (or old) compilers with partial or non-compliant standard library implementations, and sometimes have self-imposed restrictions (e.g., no exceptions, no RTTI, etc.). An enormous amount of "pure standard C++ code" would never compile on these platforms.
Then, there is also the reality that most platforms today, even really small embedded systems, have an operating system. The vast majority of them are POSIX-compliant to some level (except for Windows, but Windows doesn't support any exotic platforms anyway). So, in effect, platform-specific code that relies on POSIX functions is not really that bad, since it is likely that most exotic platforms have them, for the most part.
I guess what I'm really getting at here is that this pure dividing line that you have in your mind about "pure C++" versus platform-specific code is really just an imaginary one. Every platform (compiler + std-lib + OS + ext-libs) lies somewhere along a continuum of level of support for standard language features, standard library features, OS API functions, and so on. And by that measure, all C++ code is platform-specific.
The only real question is how wide of a net it casts. For example, most Boost libraries (except for recent "flimsy" ones) generally support compilers down to a reasonable level of C++98 support, and many even try to support as far back as early 90s compilers and std-libs.
To know whether a library, part of Boost or not, has wide enough support for your intended applications or platforms, you have to define the boundaries of that support. Just saying "pure C++" is not enough; it means nothing in the real world. You cannot say that you will be using C++11 compilers right after you've taken Boost.Thread as an example of a library with platform-specific code. Many C++11 implementations have very flimsy support for std::thread, while others do better, and that issue is as much of a "platform-specific" issue as using Boost.Thread will ever be.
The only real way to ever be sure about your platform support envelope is to actually set up machines (e.g., virtual machines, emulators, or real hardware) that provide representative worst cases. You have to select those worst-case machines based on a realistic assessment of what your clients may be using, and you have to keep that assessment up to date. You can create a regression test suite for your particular project, using the particular (Boost) libraries, and run that suite on all your worst-case test environments. Whatever doesn't pass the test doesn't pass the test; it's that simple. And yes, you might find out in the future that some Boost library won't work on some new exotic platform. If that happens, you need to either get the Boost dev team to add code to support it, or rewrite your code to get around it; but that's what software maintenance is all about, it's a cost you have to anticipate, and such problems will come not only from Boost but from the OS and compiler vendors too! At least with Boost you can fix the code yourself and contribute it back, which you can't always do with OS or compiler vendors.
We had "Boost or not" discussion too. We decided not to use it.
We had some atypical hardware platforms to serve from a single source base. In particular, running Boost on AVR was simply impossible because RTTI and exceptions, which Boost requires for a lot of things, aren't available.
There are parts of Boost which use compiler-specific "hacks" to e.g. get information about class structure.
We tried splitting out the packages, but the interdependency was quite high (at least 3 or 4 years ago).
In the meantime, C++11 was underway and GCC started supporting more and more of it. With that, many reasons to use Boost faded (Which Boost features overlap with C++11?). We implemented the rest of our needs from scratch (with relatively low effort, thanks to variadic templates and other TMP features in C++11).
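For example (a hypothetical illustration, not the actual code from that project), a small C++11 variadic-template helper can replace a Boost facility in a few lines; C++11 itself shipped no std::make_unique, so one such helper might look like this:

    #include <memory>
    #include <utility>

    // Hypothetical helper: perfect-forwards its arguments to T's constructor
    // and wraps the result in a std::unique_ptr, without any external library.
    template <typename T, typename... Args>
    std::unique_ptr<T> make_unique(Args&&... args)
    {
        return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
    }

    // Usage:
    //   std::unique_ptr<Widget> w = make_unique<Widget>(42);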
After a steep learning curve we have all we need without external libraries.
At the same time, we pondered the future of Boost. We expected the newly standardized C++11 features to be removed from Boost. I don't know the current roadmap for Boost, but at the time our uncertainty made us vote against it.
This is not a real answer to your question, but it may help you decide whether to use Boost. (And sorry, it was too large for a comment.)

STL vs STLport: Which one is more lightweight

I have been using STLport to develop a WinCE-based custom OS, but I am now thinking about using the STL provided by Windows. I read that functionally they are not different from each other, so currently what matters is my image's size. Unfortunately, I cannot simply give both of them a try (first use the STL and make a runtime image, then use STLport, and compare both images' sizes), because I have a lot of other problems that I need to solve in order to successfully build the OS. Hence I wanted to get an expert opinion:
Which one do you think would be more lightweight? I know how STLport is attached, loaded, etc., but I am not quite sure about the STL. I looked into the STL headers and all I saw were thousands of inline functions. But is that all? I need to be sure about it. Does the STL link in any other libraries, or does it simply include the headers and use those inline functions?
Best
Ps: I am using VS2012 and working on wec2013
Ps2: I know what STL and STLport stand for and how to build an application using them. My actual question is which one would consume less memory and take up less space on disk? (Considering things like STLport being a lib while the STL is not, etc.)
I assume that by STL you mean your compiler's standard library. This is a common misunderstanding, as STL was the original name of a library that was proposed and accepted into the standard, and the standard library has evolved from that. Taking this into account, the question becomes:
Should I use the standard library provided with my compiler or use stlport [or other alternatives]?
The answer is that it will depend on your use case, but the good thing is that, as long as you use the library as defined in the standard (i.e. without extensions), you should be able to easily switch from building with one to the other, and that means you can test this yourself. You can also test building with different compiler flags. This is especially important in VS, as by default the library uses checked iterators, which are good for debugging but come at the cost of extra memory and processing.
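If you want to see which iterator-checking level a given build configuration actually uses, something like the following works. The macro names are from the MSVC documentation as I remember them (_SECURE_SCL for VS2005/2008, _ITERATOR_DEBUG_LEVEL from VS2010 on), and they must be set consistently for the whole build, normally via the project's preprocessor definitions rather than in individual source files.

    #include <iostream>

    int main()
    {
    #if defined(_ITERATOR_DEBUG_LEVEL)
        // VS2010 and later: 0 = no checking, 1 = checked iterators, 2 = full debug checks
        std::cout << "_ITERATOR_DEBUG_LEVEL = " << _ITERATOR_DEBUG_LEVEL << '\n';
    #elif defined(_SECURE_SCL)
        // VS2005/VS2008: 1 = checked iterators enabled (the default), 0 = disabled
        std::cout << "_SECURE_SCL = " << _SECURE_SCL << '\n';
    #else
        std::cout << "No MSVC iterator-checking macros defined\n";
    #endif
        return 0;
    }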
STLport is designed to be used on platforms that do not provide an STL for some reason (for example, embedded platforms without C++ exception support), or where the native STL is outdated.
So, usually you do not need to replace the native STL. There should be strong reasons to use STLport in your project. In my experience, I used it for some embedded DSP platforms (no native STL), and for a UEFI platform (not really embedded, but no native STL either, and the runtime does not support C++ exceptions).
STLPort is highly customizable (you can disable exceptions, streams, etc), and can be used on almost any platform with basic C++ support.

Cross-platform C++ command line utility

I need to develop a Windows/Linux command line utility. The utility will talk to middleware that has a standard API on both platforms. I have done some cross-platform development before, on FreeBSD/Linux, which was considerably easier - and I had people in the group with experience that I could talk to.
At this point there is no one in my group who has tackled a Windows/Linux development project. I am looking for advice on how to best set it up. I'm kind of a newbie to C++ too, I have mostly developed C#/.Net GUI applications and Linux device driver level "stuff". Kind of a weird mix.
I was thinking that it would be best to define my own data types and not use either the Linux- or the Windows-defined types, keeping the OS-specific code in separate folders that are included conditionally. That's kind of what we did for the Linux/BSD work. So it seemed like a good start.
One of the developers here is a big fan of Boost... another thought the TCLAP command line parser library was easier to use... Obviously everything has to be compatible with the licensing.
The code will be open sourced, but it is production code - so I don't want to be sloppy. What else should I be doing or looking for? Are there any best practices out there?
Boost is good, as is ACE. Between the two of them, they cover pretty much anything you would want to do in a cross-platform manner. I have also gone the route of getting POSIX libraries for Windows and using GCC on Cygwin, but I don't recommend it.
Use a portable runtime that is supported on both platforms. I have had good luck with the Apache Portable Runtime.
Use standard C or C++ for most of the project. Only use platform-specific functions when necessary. If possible, put those behind a wrapper in isolated files so that the build (makefile) can substitute in the correct version for the appropriate platform (see the sketch below).
Refrain from using #ifdef LINUX or #ifdef WINDOWS or similar conditional compilation. That gets really hard to debug, and it is error prone when the macro is not supplied to the compiler.
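A minimal sketch of what such a wrapper can look like (the file names and the sleep_ms function are invented for illustration; the point is that the makefile compiles exactly one of the two implementation files per platform, so no #ifdef appears in the calling code):

    // sleep_portable.h -- the only header the rest of the project sees
    #ifndef SLEEP_PORTABLE_H
    #define SLEEP_PORTABLE_H
    void sleep_ms(unsigned int milliseconds);
    #endif

    // sleep_posix.cpp -- compiled only for Linux/Unix targets
    #include "sleep_portable.h"
    #include <unistd.h>
    void sleep_ms(unsigned int milliseconds)
    {
        usleep(milliseconds * 1000);   // usleep() takes microseconds
    }

    // sleep_win32.cpp -- compiled only for Windows targets
    #include "sleep_portable.h"
    #include <windows.h>
    void sleep_ms(unsigned int milliseconds)
    {
        Sleep(milliseconds);           // Win32 Sleep() takes milliseconds
    }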
Use Boost. Among other things, you'll get a portable implementation of a subset of TR1, which is worth it if only for <cstdint> and the types within - int32_t etc. As well, shared_ptr is essential for many moderately complicated data structures.
Boost also has a slew of helper types which are extremely convenient in day-to-day C++ tasks. Two specific ones that come to mind right away are optional and the ptr_... polymorphic container types. The string algorithm library is also very handy, considering the lack of very commonly needed string functions, such as case conversion or trimming, in the standard library.
Speaking of more heavyweight components, Boost.Filesystem is a very decent cross-platform abstraction for filesystem navigation, also a relatively common task in command-line tools. Then there's Boost.MultiIndex, a Swiss army knife of containers: rarely truly needed, but when it is, it's indispensable.
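As a small taste of those day-to-day helpers, here is a sketch using the names as I remember them from the Boost documentation (normalise_name is an invented example function):

    #include <string>
    #include <boost/algorithm/string.hpp>
    #include <boost/optional.hpp>

    // Invented example: normalise a user-supplied name and signal "no value"
    // with boost::optional instead of a magic sentinel string.
    boost::optional<std::string> normalise_name(const std::string& raw)
    {
        std::string name = raw;
        boost::algorithm::trim(name);       // strip leading/trailing whitespace
        boost::algorithm::to_lower(name);   // case conversion
        if (name.empty())
            return boost::none;
        return name;
    }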
I did a gig this summer in .NET and just ported to Mono. Worked great.
Although there are some good cross platform libraries out there (like Boost), remember that they are probably not there by default. This is especially problematic if you are shipping binaries only. The target platform is unlikely to have the library (or correct version of the library) that you need.
First prize is to stick with standard C++ (even if you need to implement simple stuff yourself). This avoids library dependencies altogether.
If you must use a library, try statically linking against it (although this may create big binaries). This will allow you to avoid runtime failure due to lack of binaries.
If you must ship DLLs (or .so files on some Unixes), make sure that the correct version is shipped with your product and that there is some way to avoid conflicts with the wrong version.
If you are shipping code, include the library with the code and build the library as well as your utility.
Also beware of GPL and possibly LGPL code. If you release a library with a GPL dependency (or modify an LGPL library) you will need to supply the code and allow redistribution as per the GPL.
TCLAP is the only header-only CLI parsing option that I'm aware of. As such, it strikes me as the most portable and is probably your best bet (it's currently what I use and recommend for exactly those reasons). It also helps that the API for TCLAP is very developer friendly and automatically generates decently formatted help messages for you.
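A short sketch of what TCLAP usage looks like, based on its tutorial as I recall it (the argument names here are made up; double-check the constructor parameters against the version you ship):

    #include <iostream>
    #include <string>
    #include <tclap/CmdLine.h>

    int main(int argc, char** argv)
    {
        try
        {
            TCLAP::CmdLine cmd("Example tool", ' ', "1.0");

            TCLAP::ValueArg<std::string> nameArg(
                "n", "name", "Name to greet", false /*required*/, "world", "string");
            TCLAP::SwitchArg verboseArg("v", "verbose", "Enable verbose output");

            cmd.add(nameArg);
            cmd.add(verboseArg);
            cmd.parse(argc, argv);   // also generates --help and --version

            if (verboseArg.getValue())
                std::cout << "verbose mode on\n";
            std::cout << "Hello, " << nameArg.getValue() << "!\n";
        }
        catch (TCLAP::ArgException& e)
        {
            std::cerr << "error: " << e.error() << " for arg " << e.argId() << '\n';
            return 1;
        }
        return 0;
    }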
Boost program_options has a shared library component to it, which is irritating to maintain ABI with. It also gets around nuisance parsing incompatibilities and behaviors from the getopt family of arguments.
I have used libparamset, which is a cross-platform (Windows, OS X, Linux) CLI parser. It provides a flexible and powerful CLI parser and various UI-building features (input error handling, wildcards, typo detection, task resolving, help formatting, ...) to build a good command-line tool. It is suitable for both C and C++ projects.

Using STL containers without Boost pointers

My company is currently not won over by the Boost libraries, and while I've used them and have been getting them pushed through for some work, some projects, due to their nature, will not be allowed to use Boost. Basically, libraries like Boost cannot be brought in for this work, so I am limited to the libraries available by default (currently using Visual Studio 2005).
So... My question is, if I am unable to use Boost::shared_ptr and its little brothers, what is the alternative when using STL containers with pointers?
One option I see is writing a container class like a shared_ptr that looks after a given pointer, but I'd like to know if there are other alternatives first.
If they're not going to accept boost, I presume other "not developed here" libraries are out of the question.
It seems to me you're left with two options:
Roll your own shared_ptr.
Use raw pointers, and manage the memory yourself.
Neither is ideal, and each comes with its own pain. Your saving grace might be that you have all of the Boost source available to you. You can use it as a model for writing your own shared_ptr.
In Visual Studio 2008, std::tr1::shared_ptr is available. I'm not sure it is available in VS2005; you should check.
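If it is there, switching is mostly a matter of spelling. A quick sketch, assuming the TR1 headers ship with your toolchain (with VS2008 SP1 / the Feature Pack, shared_ptr lives in <memory>; on GCC it is in <tr1/memory>):

    #include <memory>      // VS2008 SP1 / Feature Pack: TR1 lives here
    #include <vector>

    struct Widget { int id; };

    int main()
    {
        // Note the space between the two '>' -- required by pre-C++11 compilers.
        std::vector<std::tr1::shared_ptr<Widget> > widgets;
        widgets.push_back(std::tr1::shared_ptr<Widget>(new Widget()));
        widgets[0]->id = 42;
        return 0;   // both copies are released automatically
    }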
That definitely depends on what you want to do. It's not as if shared_ptr is absolutely necessary for a project that uses pointers.
If you really need them, import the classes/templates/functions you really need into your own project, if possible without importing the whole Boost library.
Without knowing the background, it's hard to say why Boost's libraries aren't permitted. If the reason is to avoid complex dependencies, you can easily work around the issue: almost all Boost libraries work with only a simple #include; in short, they don't need linking and thus avoid DLL hell or any variant thereof.
So, if external libraries aren't appreciated because of the complexities involved in linking (whether statically or dynamically) them, you can simply copy the boost headers you need into the project by hand and use them directly.
For clarity and to make future upgrades and maintenance easier, I'd avoid renaming the Boost libraries (so future coders know where the code came from). If "they" don't want such simple code inclusions, well, you could make the argument that quite a few Boost headers are headed for inclusion in the spec, and that they'll save everyone a bunch of headaches and time. Legally, the Boost license is specifically designed to be as easy and safe to integrate as possible: all files have an explicit license which permits all relevant things, and almost all libs have exactly the same license.
I'm curious though: why exactly aren't Boost headers permitted?