C++ Stdlib IO Implementation details

Are there any guarantees that C++ std IO will be portable across all desktop and mobile OSes (I'm interested in iOS and Android)?
Is the implementation of std IO different across platforms, or is it rather uniform? If it is different, is that due to the platform SDK (in other words, do SDKs provide those different implementations)?
Who provides those implementations? Who is the author? Does anybody update them?
Where is the documentation?

Are there any guarantees that C++ std IO will be portable across all desktop and mobile OSes (I'm interested in iOS and Android)?
No, there are no guarantees that these platforms will implement the standard library at all, let alone correctly.
Is the implementation of std IO different across platforms, or is it rather uniform? If it is different, is that due to the platform SDK (in other words, do SDKs provide those different implementations)?
It's different. I/O is very different on different platforms.
Who provides those implementations? Who is the author? Does anybody update them? Where is the documentation?
Either the compiler implementor or the platform owner provides them. The C++ Standard describes what the library must do.

I think you are failing to see the power of the standard libraries. They are designed to provide a common set of functionality that is available with any standards-compliant compiler. For example, if I write the following code:
#include <iostream>

int main()
{
    std::cout << "Hello World" << std::endl;
    return 0;
}
This will compile with any standards-compliant compiler. You're getting hung up on the fact that the way std::cout works is different on each platform - yes, of course it is. But that is the beauty of it - why do you have to care? On Windows, if you compile this with MS Visual C++, that compiler will have the correct implementation (which the standard doesn't care about) to support the standard way of writing to standard out shown above. Similarly, on Linux GCC will have the correct code to write to its implementation of standard out, and on Solaris, CC will do the same.
You don't have to worry or frankly care. The handling for your platform is provided by the compiler that you are using for that platform. You have a nice clean high-level interface to work with.
Do you care how the Java VM handles the details of each platform? You don't - it's not your concern; you know that when you call System.out.println() it will be written to the screen (or wherever is appropriate for that VM). So why are you getting hung up on this?
The thing you have to understand is whether the compiler that you are using on a specific platform provides all the functionality in the standard library (i.e. whether it is fully standards compliant or not), and if not, what's missing and how to work around it. The rest is frankly irrelevant!
And if you don't like it, well, pay for something like Roguewave - which frankly is pissing money away, but it's your money to piss away...

The standard library is exactly that - standard. It's defined by the standard, and every standard-compliant compiler must provide it. So the guarantee is that it will be portable across standard-compliant implementations (whether there's one for your target platform is a whole different question altogether). It has nothing to do with platform SDKs: whether it's implemented using one doesn't matter, because the observable behaviour must be the same.

The idea of a standard (hence the std) is that it is respected and uniform no matter what platform you are on.
Some developers ship devices with support for all or only part of the std library; it's really just up to them how it is implemented.
This is platform-specific and is probably covered in each platform's SDK documentation, which is most likely shipped with the SDK or available on the vendor's website.

Related

Is C++ std::string platform independent?

I am wondering whether C++'s std::string is platform-independent, meaning that it is available in all compiler implementations.
The background is that I am working with some pretty old legacy code that does not make use of std::string (or std::anything) and that makes it troublesome to work with. It is compiled on Windows and Unix systems, including somewhat more exotic platforms like AIX, zLinux, Solaris and HP-UX.
It would make life much easier if I could just use the std library but I don't know if it will work on all of those platforms. Any experiences with things like this?
std::string is part of the C++ standard (cf. [string.classes]), so it is available with every standard-conforming C++ implementation.
Be aware, though, that things change from one major C++ version to another, e.g. std::string::front exists only since C++11. If you want your codebase to be portable and consistent, you should keep this in mind and also check which C++ version you can actually target (for whatever reason - stability, policies, backward compatibility, etc.).
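For illustration, here is a minimal sketch (not taken from any particular codebase) of guarding a C++11-only member such as std::string::front behind a __cplusplus check, so the same file builds under older standards:

#include <cstdio>
#include <string>

int main() {
    std::string s = "portable";
#if __cplusplus >= 201103L
    char first = s.front();    // std::string::front() exists only since C++11
#else
    char first = s[0];         // pre-C++11 fallback
#endif
    std::printf("%c\n", first);
    return 0;
}

Note that some compilers (older MSVC in particular) report an outdated __cplusplus value by default, so a build-system check can be more reliable than the macro in practice.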
std::string is part of the standard library and should be available on any conforming implementation.

Are Unix and Linux API headers compatible with C++?

I have previously written C++ code that #includes Unix and Linux API headers and these programs have produced the expected behavior. That said, I don't know if this can be relied on. It's possible that incompatibilities between C and C++ could cause valid C headers to act in unexpected ways when used by C++ programs.
Can Unix and Linux API headers reliably be used by code that will be compiled as C++?
Is this a goal of the authors of those headers? Or are those headers only intended to be valid C?
Are there any known pitfalls when doing this?
Obviously the Unix and Linux distributions are numerous and I don't expect an answer to address every distribution one by one. My expectation is that the same answer will apply to almost all distributions of Unix and Linux and exceptions will prove the rule. If this assumption is wrong, an explanation of that would also be a valid answer.
By Unix headers I mean these:
http://www.unix.org/version3/apis/headers.html
By Linux headers I mean the headers provided by Linux distributions usually as a package named "linux-headers" that allow programs to interact with the Linux kernel. For example, this Debian package:
https://packages.debian.org/wheezy/kernel/linux-headers-3.2.0-4-amd64
I realize the Unix link is only a specification and that each Linux distribution is different but again I suspect it's reasonable to ask this question for most distributions. If that's not true then correct me.
Edit: I only mean to refer to headers used by user-space programs.
C standard headers like <stdio.h>, <stdlib.h>, and so forth are specified in Appendix D of the C++ standard. These are deprecated features, where "deprecated" is defined as:
"Normative for the current edition of the Standard, but not guaranteed to be part of the Standard in future revisions."
The non-deprecated C++ versions of the C standard headers have names like <cstdio>, <cstdlib>, etc., and they technically put their definitions into the std (not global) namespace. So, to be 100% compliant with the non-deprecated part of the C++ spec, you need to write something like this:
#include <cstdio>

int main() {
    std::printf("Hello, world!\n");
}
That said, to my knowledge, no existing implementation actually forces you to do this, and in my opinion it is unlikely any ever will. So in practice, you can safely use C standard headers in C++ without much concern.
Also, if you are on (e.g.) a POSIX system, you can generally use POSIX functionality from C++ equally safely. Certainly nobody is going to deliberately break any of this because users would revolt.
However, accidental breakage is conceivable when mixing paradigms. If both the platform and the language standard provide some feature, you should use one or the other but not both. In particular, I would not mix POSIX threading and synchronization mechanisms with standard C++11 threading and synchronization mechanisms, because it is easy to imagine an optimizer knowing too much about the latter and generating code incompatible with the former.
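To make that concrete, here is a minimal sketch (assuming a C++11 compiler) that stays entirely within the standard threading facilities instead of mixing in pthread calls:

#include <iostream>
#include <mutex>
#include <thread>

std::mutex out_mutex;

void worker(int id) {
    std::lock_guard<std::mutex> lock(out_mutex);   // C++11 synchronization only
    std::cout << "worker " << id << " done\n";
}

int main() {
    std::thread a(worker, 1), b(worker, 2);        // C++11 threads, no pthread_create
    a.join();
    b.join();
    return 0;
}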
[Update, to elaborate somewhat]
<unistd.h> is an example of what I mean about platform-dependent functionality. It will generally work fine from C++, and neither the library nor the compiler developers will break it gratuitously because that would be too annoying. So go ahead and call getpid() or pipe() or whatever.
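For example, this rough sketch compiles as ordinary C++ on a POSIX system (the header and both functions come from POSIX, not from standard C++):

#include <unistd.h>    // POSIX: getpid(), pipe() - not part of standard C++
#include <iostream>

int main() {
    int fds[2];
    if (pipe(fds) == 0) {
        std::cout << "process " << getpid()
                  << " created pipe descriptors " << fds[0]
                  << " and " << fds[1] << '\n';
    }
    return 0;
}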
But be aware that mixing paradigms raises all sorts of questions. To name just a few off the top of my head:
Can you call new from a signal handler?
Can you use dup2 onto descriptor 0 to redirect cin?
What POSIX functions can you call safely during static initialization (i.e. before main executes)?
These questions and others like them are not addressed by any spec. The answers depend on your specific implementation and could change between releases.
Having said all that... Just about every non-trivial C++ program relies on platform-specific functionality exposed by some C interface. So what you are describing will work fine in practice provided you (a) have some idea what is going on "under the hood"; (b) have reasonable expectations; and (c) do not attempt to mix standard and platform-specific paradigms.
1) Yes: "standard headers" are standard. You can safely use them regardless of platform.
2) Yes: you can mix C headers (e.g. <stdio.h>) with C++ headers (e.g. <iostream>) in the same C++ translation unit.
3) NO: you should NOT use Linux kernel headers in a user-mode program, nor vice versa.
Linux kernel headers are intended for kernel-mode code such as drivers, not for "normal" user-space applications.
Here is a bit more information:
https://unix.stackexchange.com/questions/27042/what-does-a-kernel-source-tree-contain-is-this-related-to-linux-kernel-headers
http://kernelnewbies.org/KernelHeaders

How to find Boost libraries that do not contain any platform-specific code

For our current project, we are thinking of using the Boost framework.
However, the project should be truly cross-platform and might be shipped to some exotic platforms. Therefore, we would like to use only Boost packages (libraries) that do not contain any platform-specific code: pure C++ and that's all.
Boost has the idea of header-only packages (libraries).
Can one assume that these packages (libraries) are free from platform-specific code?
If not, is there a way to identify such Boost packages?
All C++ code is platform-specific to some extent. On the one side, there is this ideal concept of "pure standard C++ code", and on the other side, there is reality. Most of the Boost libraries are designed to maintain the ideal situation on the user-side, meaning that you, as the user of Boost, can write platform-agnostic standard C++ code, while all the underlying platform-specific code is hidden away in the guts of those Boost libraries (for those that need them).
But at the core of this issue is the problem of how to define platform-specific code versus standard C++ code in the real world. You can, of course, look at the standard document and say that anything outside of it is platform-specific, but that's nothing more than an academic discussion.
If we start from this scenario: assume we have a platform that only has a C++ compiler and a C++ standard library implementation, and no other OS or OS-specific API to rely on for other things that aren't covered by the standard library. Well, at that point, you still have to ask yourself:
What compiler is this? What version?
Is the standard library implementation correct? Bug-free?
Are those two entirely standard-compliant?
As far as I know, there is essentially no universal answer to this and there are no realistic guarantees. Most exotic platforms rely on exotic (or old) compilers with partial or non-compliant standard library implementations, and sometimes have self-imposed restrictions (e.g., no exceptions, no RTTI, etc.). An enormous amount of "pure standard C++ code" would never compile on these platforms.
Then, there is also the reality that most platforms today, even really small embedded systems, have an operating system. The vast majority of them are POSIX compliant to some level (except for Windows, but Windows doesn't support any exotic platform anyway). So, in effect, platform-specific code that relies on POSIX functions is not really that bad, since it is likely that most exotic platforms have them, for the most part.
I guess what I'm really getting at here is that this pure dividing line that you have in your mind about "pure C++" versus platform-specific code is really just an imaginary one. Every platform (compiler + std-lib + OS + ext-libs) lies somewhere along a continuum of level of support for standard language features, standard library features, OS API functions, and so on. And by that measure, all C++ code is platform-specific.
The only real question is how wide of a net it casts. For example, most Boost libraries (except for recent "flimsy" ones) generally support compilers down to a reasonable level of C++98 support, and many even try to support as far back as early 90s compilers and std-libs.
To know if a library, part of Boost or not, has wide enough support for your intended applications or platforms, you have to define the boundaries of that support. Just saying "pure C++" is not enough; it means nothing in the real world. You cannot say that you will be using C++11 compilers right after you've taken Boost.Thread as an example of a library with platform-specific code. Many C++11 implementations have very flimsy support for std::thread, but others do better, and that issue is as much of a "platform-specific" issue as using Boost.Thread will ever be.
The only real way to ever be sure about your platform support envelope is to actually set up machines (e.g., virtual machines, emulators, or real hardware) that provide representative worst cases. You have to select those worst-case machines based on a realistic assessment of what your clients may be using, and you have to keep that assessment up to date. You can create a regression test suite for your particular project, using the particular (Boost) libraries, and run that suite on all your worst-case test environments. Whatever doesn't pass the test doesn't pass the test; it's that simple. And yes, you might find out in the future that some Boost library won't work under some new exotic platform. If that happens, you need to either get the Boost dev team to add code to support it, or re-write your code to get around it, but that's what software maintenance is all about, and it's a cost you have to anticipate. Such problems will come not only from Boost, but from the OS and from the compiler vendors too! At least with Boost you can fix the code yourself and contribute it back, which you can't always do with OS or compiler vendors.
We had "Boost or not" discussion too. We decided not to use it.
We had some untypical hardware platforms to serve with one source code. Especially running boost on AVR was simply impossible because RTTI and exceptions, which Boost requires for a lot of things, aren't available.
There are parts of Boost which use compiler-specific "hacks" to, for example, get information about class structure.
We tried splitting out individual packages, but the interdependencies were quite high (at least 3 or 4 years ago).
In the meantime, C++11 was underway and GCC started supporting more and more of it. With that, many reasons to use Boost faded (see "Which Boost features overlap with C++11?"). We implemented the rest of our needs from scratch, with relatively low effort thanks to variadic templates and other TMP features in C++11.
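As a purely hypothetical illustration (not our actual code), this is the kind of small C++11 variadic-template utility that can replace a library dependency:

#include <iostream>

template <typename T>
T min_of(T a) { return a; }                  // base case: a single argument

template <typename T, typename... Rest>
T min_of(T a, Rest... rest) {                // recursive case: peel off one argument
    T m = min_of(rest...);
    return a < m ? a : m;
}

int main() {
    std::cout << min_of(7, 3, 9, 5) << '\n';   // prints 3
    return 0;
}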
After a steep learning curve we have all we need without external libraries.
At the same time, we pondered the future of Boost. We expected the newly standardized C++11 features to be removed from Boost. I don't know the current roadmap for Boost, but at the time our uncertainty made us vote against it.
This is not a real answer to your question, but it may help you decide whether to use Boost. (And sorry, it was too large for a comment.)

How portable IS C++?

In C++, if I write a simple game like Pong on Linux, can that same code be compiled on Windows and OS X? How can I tell where it won't compile?
You have three major portability hurdles.
The first, and simplest, is writing C++ code that all the target compilers understand. Note: this is different from writing to the C++ standard. The problem with "writing to the standard" starts with: which standard? C++98, C++03, TR1, C++11, C++14 or C++17? These are all revisions of C++, and the newer the one you use, the less likely compilers are to support it fully. C++ is very large, and realistically the best you can hope for is C++98 with some C++03 features.
Compilers all add their own extensions, and it's all too easy to unknowingly use them. You would be wise to write to the standard and not to the compiler documentation. Some compilers have a "strict" mode where they will turn off all extensions. You would be wise to do primary development in the compiler which has the most strictures and the best standard compliance. gcc has the -Wstrict family of flags to turn on strict warnings. -ansi will remove extensions which conflict with the standard. -std=c++98 will tell the compiler to work against the C++98 standard and remove GNU C++ extensions.
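As a small sketch of what "writing to the standard" under strict options looks like in practice (the flags and file name here are just an example):

// Builds cleanly with: g++ -std=c++98 -pedantic -Wall -Wextra strict.cpp
int main() {
    const int len = 4;
    int buf[len];        // constant-size array: standard C++
    // int n = 4;
    // int vla[n];       // variable-length array: a GNU extension, diagnosed by -pedantic
    (void)buf;           // silence the unused-variable warning
    return 0;
}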
With that in mind, to remain sane you must restrict yourself to a handful of compilers and only their recent versions. Even writing a relatively simple C library for multiple compilers is difficult. Fortunately, both Linux and OS X use gcc. Windows has Visual C++, but its different versions are more like a squabbling family than a single compiler when it comes to compatibility (with the standard or with each other), so you'll have to pick a version or two to support. Alternatively, you can use one of the gcc-derived compiler environments such as MinGW. Check the list of C++ compilers for compatibility information, but keep in mind this is only for the latest versions.
Next is your graphics and sound library. It has to be not just cross-platform; it has to look good and be fast on all platforms. These days there are a lot of possibilities; Simple DirectMedia Layer is one. You'll have to choose at what level you want to code. Do you want detailed control? Or do you want an engine to take care of things? There's an existing answer for this so I won't go into details. Be sure to choose one that is dedicated to being cross-platform, not one that just happens to work. Compatibility bugs in your graphics library can sink your project fast.
Finally, there are the simple incompatibilities which exist between the operating systems. POSIX compliance has come a long way, and you're lucky that both Linux and OS X are Unix under the hood, but Windows will always be the odd man out. The things most likely to bite you have to do with the filesystem. Here's a handful:
Filesystem layout
File path syntax (i.e. C:\foo\bar vs /foo/bar)
Mandatory Windows file locking
Differing file permissions systems
Differing models of interprocess communication (i.e. fork, shared memory, etc.)
Differing threading models (your graphics library should smooth this out)
There you have it. What a mess, huh? Cross-platform programming is as much a state of mind and statement of purpose as it is a technique. It requires some dedication and extra time. There are some things you can do to make the process less grueling...
Turn on all strictures and warnings and fix them
Turn off all language extensions
Periodically compile and test in Windows, not just at the end
Get a programmer who likes Windows onto the project
Restrict yourself to as few compilers as you can
Choose a well maintained, well supported graphics library
Isolate platform-specific code (for example, in a subclass; see the sketch after this list)
Treat Windows as a first class citizen
The most important thing is to do this all from the start. Portability is not something you bolt on at the end. Not just your code, but your whole design can become unportable if you're not vigilant.
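As a rough sketch of the "isolate platform-specific code" point above (the class names and return values are made up purely for illustration):

#include <string>

// Platform-neutral interface that the rest of the program codes against.
class FileDialog {
public:
    virtual ~FileDialog() {}
    virtual std::string pickFile() = 0;
};

#ifdef _WIN32
// Windows-specific subclass; a real one would call the Win32 common dialogs.
class Win32FileDialog : public FileDialog {
public:
    std::string pickFile() { return "C:\\picked.txt"; }
};
#else
// POSIX/desktop subclass; a real one would call GTK+, Qt or Cocoa.
class PosixFileDialog : public FileDialog {
public:
    std::string pickFile() { return "/tmp/picked.txt"; }
};
#endif

int main() {
#ifdef _WIN32
    Win32FileDialog dlg;
#else
    PosixFileDialog dlg;
#endif
    FileDialog& ui = dlg;    // platform-independent code only ever sees the interface
    return ui.pickFile().empty() ? 1 : 0;
}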
C++ is ultra portable and has compilers available on more platforms than you can shake a stick at. Languages like Java are typically touted as being massively cross-platform; ironically, they are in fact usually implemented in C++ or C.
That covers "portability". If you actually mean "how cross-platform is C++?", then not so much: the C++ standard only defines an IO library suitable for console IO - i.e. text based - so as soon as you want to develop some kind of GUI, you are going to need a GUI framework, and GUI frameworks are historically very platform-specific. Windows has multiple "native" GUI frameworks now - the C++ framework made available by Microsoft is still MFC, which wraps the native Win32 API, which is a C API. (WPF and WinForms are available to CLR C++.)
The Apple Mac's GUI framework is called Cocoa; it is an Objective-C library, but it's easy to call Objective-C from C++ in that development environment.
On Linux there are the GTK+ and Qt frameworks, both of which have actually been ported to Windows and the Mac, so one of these C++ frameworks can solve the problem of writing a GUI application in C++ once that builds and runs on Windows, Mac and Linux.
Of course, it's difficult to regard Qt as strictly C++ anymore - Qt defines a special markup for signals and slots that requires a separate pre-compilation step.
You can read the standard - if a program respects the standard, it should be compilable on all platforms that have a C++ standard-compliant compiler.
As for 3rd party libraries you might be using, the platform availability is usually specified in the documentation.
When it comes to the GUI, there are cross-platform options (such as Qt), but you should probably ask yourself: do I really want portability when it comes to the UI? Sometimes it's better to have the GUI part platform-specific.
If you are thinking of porting from Linux to Windows, using OpenGL for the graphics gives you the freedom to run your program on both operating systems, as long as you don't use any system-specific functionality.
Compared to C, C++ portability is extremely limited, if not completely nonexistent. For one, you can't disable exceptions (well, you can, but the standard specifically says that's undefined behaviour). Many devices don't even support exceptions. So as far as that goes, C++ is zero portable. Plus, given the UB, it's obviously a no-go for zero-fail, high-performance real-time systems in which exceptions are taboo - undefined behaviour has no place in a zero-fail environment. Then there's name mangling, which most, if not every, compiler does completely differently. For good portability and inter-compatibility, extern "C" has to be used to export symbols, yet this throws away any and all namespace information, resulting in duplicate symbols. One can of course choose not to use namespaces and use unique symbol names - yet another C++ feature rendered void. Then there's the complexity of the language, which results in implementation difficulties in the various compilers for various architectures. Due to these difficulties, true portability becomes a problem. One can work around this with a large chain of compiler directives/#ifdefs/macros. Templates? Not even supported by most compilers.
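To illustrate the extern "C" point, a contrived sketch (not from any real project):

// With C++ linkage, name mangling keeps these two functions distinct:
namespace audio { void init() {} }
namespace video { void init() {} }

// With C linkage (the usual route to a compiler-independent ABI), the
// namespace information is discarded, so the same two definitions would
// fight over the single unmangled name "init":
//
//     namespace audio { extern "C" void init() {} }
//     namespace video { extern "C" void init() {} }   // conflicts with audio::init
//
int main() {
    audio::init();
    video::init();
    return 0;
}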
What portability? You mean the semi-portability between a couple of mainstream build targets like MSVC for Windows and GCC for Linux? Even there, in that mainstream segment, all the above problems and limitations exist. It's absurd to even think C++ is portable.

Program portability

How to make sure that my program will be fully portable?
Continuous integration on all target platforms.
1. Test
This is a necessary but not a sufficient condition for doing anything properly. To test portability, you'll want multiple platforms and compilers.
2. Write to the standard, not to your development platform.
This means, only do something if the standard says you can do it. Only expect a particular result if the standard says you can expect it. Only use a library or API if the standard says it exists. The standard is available here (among other places):
http://openassist.googlecode.com/files/C%2B%2B%20Standard%20-%20ANSI%20ISO%20IEC%2014882%202003.pdf
It helps if you assume that:
CHAR_BIT is equal to 9.
sizeof(int) is equal to 5 and int is a 37 bit type. Or a 16 bit type.
the basic character set is EBCDIC.
The epoch began in 1721.
time_t is a double.
And so on. By which I don't mean write code that relies on those things being true; I mean write code that will work if they are, and will also work on a sane implementation.
3. Use the most restrictive and pedantic compiler options you can find.
This is the only practical way to give yourself a reasonable chance of achieving (2).
4. Understand that "real compilers" fail to implement the standard correctly or fully, and make some concessions to this fact.
Theoretically, there's nothing non-portable about a C++ program that uses export. If it's a perfectly good C++ program in every other respect, then it will work on any conforming C++ compiler. But hardly anyone uses a conforming C++ compiler, so there's a de facto common subset of C++ that you'll want to stick to.
5. Understand that the C++ standard provides quite a restricted programming environment
Certain things are not portable in standard C++, such as drawing graphics on a screen, since standard C++ has no graphics or GUI API. So there is no such thing as a "fully portable" GUI program written in C++. So you may or may not need to revise your goal, depending what your program is supposed to do.
If your program requires something that simply cannot be done entirely within standard C++, then you can make your program easier to port by encapsulating that behaviour within an interface which you think should be implementable on all platforms you care about. Then set about implementing it for each one. This doesn't result in a "fully portable" program, though, since to me that means a program which you can compile and run unchanged on any conforming C++ implementation. A program which can probably be ported, with some bespoke programming work, to most platforms that have a C++ compiler (assuming they have a screen and a mouse) isn't the same thing.
All this can be taken too far, of course. You will probably actually want to assume that CHAR_BIT is 8 (reading files is madness otherwise), and perhaps even rely on a GUI framework like Qt. But you did say, "fully portable", and one of the main things you need to do to write portable programs is usually to work out how far you're willing to compromise on "fully".
6. Assert what you assume
At compile-time if you can, or runtime otherwise, ensure that if your program requires int to be at least 32 bits (or whatever), then it will fail noisily when it isn't. OK, so comprehensive test coverage would catch cases where your arithmetic silently overflows and gives the wrong answer, but it's hard to write truly comprehensive tests, and anyway the tests might make the same non-portable errors as the code, or some poor sucker who has downloaded your code might not run them all properly.
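For instance, a minimal sketch of such an assertion, with the C++11 static_assert form and a C++98 fallback:

#include <climits>

#if __cplusplus >= 201103L
static_assert(sizeof(int) * CHAR_BIT >= 32, "int must be at least 32 bits");
#else
// C++98: an array with a negative size makes the build fail when the condition is false.
typedef char assert_int_at_least_32_bits[(sizeof(int) * CHAR_BIT >= 32) ? 1 : -1];
#endif

int main() {
    return 0;
}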
When you use libraries, you are effectively doing this automatically. You'll #include some header, and if the library isn't available that will fail immediately. At least, you hope it will - it's conceivable that some other implementation could have a header of the same name which does something radically or subtly different. Radical differences usually result in compilation failures, for subtle differences you can test for preprocessor symbols to identify implementations.
Your question:
How to make sure that my program will be fully portable?
cannot be answered satisfyingly. You cannot, in any real-world application, make sure it's portable. You can only prove your expectation by accurate tests of the application on the target platform, as has already been proposed here by Lou Franco.
In the process of developing and testing in parallel on different platforms or environments, every one of us finds his bag of tricks and explores his share of pitfalls. You said in one comment that you work on a Windows system. That is fine. Try to get your program working with the Visual Studio compiler (and environment). Then install Cygwin with the GCC 4.x compiler suite. Install the NetBeans IDE & C++ environment and create a project based on the same sources; NetBeans will use the Cygwin GCC 4.x. If your compiled program works with both toolchains, you have probably mastered about 90% of the real-world portability hurdles.
Regards
rbo
Avoid platform-specific libraries.
Make it standards compliant. At least a common subset of the standard that is implemented by vendors on all platforms you intend to deploy your application on.
Factor out platform specific portions from platform-independent ones. Typically, the lowest layer or two should deal with the platform.
Keep abreast with changes of:
Platform/OS APIs
Tool chains
Language features
Test, Deploy. Rinse and repeat.
Unit test it, on each platform, during development
Avoid using platform-specific libraries. If you can implement the desired functionality using the STL and Boost only, go ahead.
Develop on the most restrictive compilation environment. Use the smallest set of features from C++. Split the platform-dependent portions of code into separate files. Develop a configuration (make) environment for each platform, as part of the software package.
Making sure to only use libraries that actually exist on all target platforms would be a good start.
It is impossible. What happens when I write my operating system that has a weird C compiler?
That said, to be portable, you need to:
Avoid Win32
Avoid POSIX (which is annoying... You may want to just use Cygwin to provide Windows support)
Avoid any platform-specific library. This usually limits your graphics choices to wxWindows, GTK, and Qt.
TEST. Make sure it works.
Don't assume anything. Windows is weird and uses \r\n, so be careful about that.
I think Visual C++ on Windows gives you warnings about "unsafe C functions" and asks you to use the "safe" ones, which are not standard. Don't fall for Microsoft's attempt to monopolize your program.
Some things will help:
Autoconf will allow any decent system (i.e. one that includes a shell) to detect common portability issues and set up the correct headers
CMake can do this as well, but only on platforms that CMake itself is available on
Know the platforms that you intend to ship for. If some platform convention contradicts the standard, ignore the standard. I'm serious about that. For example, if you use the standard std::ifstream constructor, which takes a char* argument, you won't be able to open any files with Unicode filenames on Windows—you must use the nonstandard wchar_t* overload there. The functionality lost by not being able to open files that are allowed and legal on the platform severely outweighs the portability gained by using only what the standard knows; in the end, it's the functionality that matters, not the adherence to a particular standard.
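A rough sketch of that trade-off (the filename is made up; the wchar_t* constructor is a Microsoft extension, not standard C++):

#include <fstream>
#include <iostream>

int main() {
#ifdef _WIN32
    std::ifstream in(L"caf\u00e9.txt");   // nonstandard wchar_t* overload, needed for Unicode paths on Windows
#else
    std::ifstream in("caf\u00e9.txt");    // on POSIX systems narrow paths are typically UTF-8 already
#endif
    std::cout << (in.is_open() ? "opened\n" : "could not open\n");
    return 0;
}

In newer code, C++17's std::filesystem::path overloads smooth over this particular case, but the general point about platform conventions still stands.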
This is less a direct answer to the question, than an answer in the light of other answers.
You need to balance a requirement for absolute portability against the expectations of platform users - there are different basic HCI/HIG guidelines for Windows, OS X, KDE and Gnome, and none of the portable GUI toolkits will automatically produce the right results in each (some allow you to apply different layouts, which is a start).
The in-between approach is to have a pure portable core with multiple native GUIs.
It's not necessary (there is a lot of software that succeeds despite ignoring conventions) but it is a trade-off that needs to be considered - in particular if there is an existing strong native application.