Cross-Platform Networking Code in C++?

I'm starting development on a new application, and although my background is mainly Mac/iOS based, I need to build a Windows application to participate in Microsoft's Imagine Cup.
This project includes communication between clients through a socket connection (to a server, not ad hoc), and I need the Mac and Windows clients to be able to communicate with each other. I'd also like to avoid writing the networking code twice, and simply write different native-UI code on each platform. This makes the networking easier (I'm confident that the two platforms are not going to be interacting with the server in different ways) and allows for a native UI on both platforms.
Is C++ the best language for this task? Is the standard library the same on both platforms? I understand that I'll have to use Microsoft's Visual C++ library, as it seems as though it is hard to utilize C++ code from C#; is this true?
I've never really written a cross-platform application before, especially one that deals with networking between platforms.

If you're going to use C++, I second "the_mandrill" above in strongly recommending that you look at Boost.Asio - that is a great way to write the code once and support both platforms.
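To give a concrete feel for it, here is a minimal sketch of a synchronous Boost.Asio TCP client; the host, port, and message are placeholders, and error handling is omitted for brevity:

    // Minimal synchronous Boost.Asio TCP client (sketch).
    // Builds unchanged on Windows and Mac; the endpoint below is a placeholder.
    #include <boost/asio.hpp>
    #include <iostream>
    #include <string>

    int main() {
        using boost::asio::ip::tcp;
        boost::asio::io_service io;
        tcp::resolver resolver(io);
        tcp::resolver::query query("example.com", "8080"); // placeholder endpoint
        tcp::socket socket(io);
        boost::asio::connect(socket, resolver.resolve(query));

        std::string msg = "hello\n";
        boost::asio::write(socket, boost::asio::buffer(msg));

        boost::asio::streambuf response;
        boost::asio::read_until(socket, response, "\n");
        std::cout << &response; // print the server's reply
    }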
As to whether or not C++ is the right language - that's rather more subjective. My personal feeling is that if you need to ask, odds are it's not the best approach.
Reasons for selecting C++ to implement networking code are:
Possible to achieve very low latencies and high throughput with very carefully designed and implemented code.
Possible to take explicit control of memory management - avoiding variations in performance associated with garbage collection.
Potential for tight integration of networking code in-process with other native libraries.
Potential to build small components suitable for deployment in resource constrained environments.
Low-level access to the C socket API exposes features such as socket options, for protocols beyond vanilla TCP/IP and UDP (see the sketch after this list).
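To illustrate that last point, a sketch using the BSD sockets API (POSIX headers shown; Winsock is nearly identical) - disabling Nagle's algorithm on a connected TCP socket:

    // Disable Nagle's algorithm for lower per-message latency (sketch).
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    bool disable_nagle(int fd) {
        int flag = 1;
        return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY,
                          &flag, sizeof(flag)) == 0;
    }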
Reasons for avoiding C++ are:
Lower productivity in developing code compared with higher-level languages (e.g., Java, C#, Python).
Greater potential for significant implementation errors (pointer-abuse etc.)
Additional effort required to verify code compiles on each platform.
Alternatives you might consider include:
Python... using the low-level socket module directly, or the high-level Twisted library. Rapid development using convenient high-level idioms and minimal porting effort. Biggest disadvantage: poor support for exploiting multiple processor cores (the global interpreter lock) in high-throughput systems.
Ruby (socket) / Perl (IO::Socket)... high-level languages which might be particularly suited if the communicated information is represented as text strings.
Java... garbage collection simplifies memory management; wide range of existing libraries.
CLR languages (C#, etc.)... also garbage collected, like Java. WCF is an effective framework within which bespoke protocols can be developed, which may prove useful.
You also asked: "I understand that I'll have to use Microsoft's Visual C++ library, as it seems as though it is hard to utilize C++ code from C#; is this true?"
You can avoid Visual C++ libraries entirely - either by developing in a language other than C++, or by using an alternative C++ compiler - for example, Cygwin and MinGW offer G++... though I'd recommend using Visual C++ to build C and C++ code for Windows.
It's not hard to use C++ code from C# - though I don't recommend it as an approach; it's likely overly complicated. Visual C++ can (optionally) compile "managed code" from C++ source - there are a few syntax extensions to grasp, and the syntax for interoperation differs slightly under Mono rather than Visual C++, but these are not major issues IMHO. The resulting CLR objects interact directly with C# objects and can be linked together into a single assembly without issue. It's also easy to access native DLLs (potentially written using C++ for the native architecture) using P/Invoke.
All this is somewhat irrelevant, however, as the .NET framework has good support for low-level networking (similar to that provided by Winsock[2]) in System.Net - this provides a convenient, C#-oriented interface to similar facilities, and will likely be a more convenient API to develop against if you use C# (or VB.NET or any of the other CLR languages).
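For completeness, a sketch of the P/Invoke route (the names here are hypothetical): the native side just exports an unmangled, C-style entry point from a DLL, which C# can then bind with [DllImport]:

    // mathlib.cpp - hypothetical native DLL built with Visual C++.
    // extern "C" prevents C++ name mangling; __declspec(dllexport) exports it.
    extern "C" __declspec(dllexport) int add_numbers(int a, int b) {
        return a + b;
    }

    // C# side, for reference:
    //   [DllImport("mathlib.dll")] static extern int add_numbers(int a, int b);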

I would suggest that you take a look at Qt. IMO, Qt is one of the best C++ libraries for doing cross-platform applications. The benefit of Qt compared with Boost is that Qt also provides GUI classes.

"Best language" is very subjective, but if you want portable, fast code with useful abstractions and C-style syntax, then C++ is a good choice. Note that if you do not already know C++, there is a steep learning curve.
The standard library as defined by the ISO standard is by definition the same on each platform; however, a given implementation may be more or less compliant. That said, GCC, Clang, and MSVC (post-.NET) all implement C++98 very well, so long as you don't use compiler-specific extensions.
I strongly recommend looking at Boost.Asio (and the Boost libraries in general) for your networking; it uses the proactor design pattern, which is very efficient. However, it does take some time to get your head around it.
http://www.boost.org/doc/libs/1_48_0/doc/html/boost_asio.html
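To illustrate the proactor style: you initiate an operation and hand Asio a completion handler, which it invokes when the operation finishes. A minimal sketch (the endpoint is a placeholder):

    // Asynchronous read in Boost.Asio's proactor style (sketch).
    #include <boost/asio.hpp>
    #include <boost/bind.hpp>
    #include <iostream>

    using boost::asio::ip::tcp;

    char buf[512];

    void handle_read(const boost::system::error_code& ec, std::size_t n) {
        if (!ec)
            std::cout.write(buf, n); // completion handler: runs when data arrives
    }

    int main() {
        boost::asio::io_service io;
        tcp::resolver resolver(io);
        tcp::resolver::query query("example.com", "8080"); // placeholder endpoint
        tcp::socket socket(io);
        boost::asio::connect(socket, resolver.resolve(query));

        // Initiate the read; Asio calls handle_read on completion.
        socket.async_read_some(boost::asio::buffer(buf),
            boost::bind(&handle_read,
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));

        io.run(); // dispatches completion handlers until none remain
    }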
Stick with the Standard library and boost and for the most part cross platform problems are not a major concern.
Lastly, I would avoid using C++11 features in cross-platform code for now, because MSVC, GCC, and Clang have each implemented different parts of the standard.

If you want to spend a year and put in 1,000 hours, sure, use boost::asio directly. Or you can use a library built around boost::asio that provides C++ network templates. It is cross-platform, and you can find it here:
https://bitbucket.org/ptroen/crossplatformnetwork/src/master/
It compiles on Windows, Android, macOS, and Linux.
That is not to say an absolute expert in boost::asio couldn't do a tiny bit better on performance, but if you want something like 98% of the performance gains, you may find it useful. It also supports HTTP, HTTPS, NACK, RTP, TCP, UDP, multicast server, and multicast client.
Examples:
TCPServer: https://bitbucket.org/ptroen/crossplatformnetwork/src/master/OSI/Application/Stub/TCPServer/main.cc
HTTPServer: https://bitbucket.org/ptroen/crossplatformnetwork/src/master/OSI/Application/Stub/HTTPServer/httpserver.cc
    // Instantiate a TCP client transport for the sample protocol
    // (from the library's examples linked above).
    OSI::Transport::TCP::TCP_ClientTransport<
        SampleProtocol::IncomingPayload<OSI::Transport::Interface::IClientTransportInitializationParameters>,
        SampleProtocol::OutgoingPayload<OSI::Transport::Interface::IClientTransportInitializationParameters>,
        SampleProtocol::SampleProtocolClientSession<OSI::Transport::Interface::IClientTransportInitializationParameters>,
        OSI::Transport::Interface::IClientTransportInitializationParameters
    > tcpTransport(init_parameters);

    // Build the request payload and run the client against the server.
    SampleProtocol::IncomingPayload<OSI::Transport::Interface::IClientTransportInitializationParameters> request(init_parameters);
    request.msg = init_parameters.initialPayload;
    std::string ipMsg = init_parameters.ipAddress;
    LOGIT1(ipMsg)
    tcpTransport.RunClient(init_parameters.ipAddress, request);
I'm biased because I wrote the library.

You can also check out this communication library:
finalmq
This library has the following properties:
C++, cross-platform, async/non-blocking, multiple protocols (TCP, HTTP, MQTT 5), multiple encoding formats (JSON, Protobuf). Check the examples.

Related

COM in the non-Windows world?

Hope this question isn't going to be too vague. Reading through the COM spec and Don Box's Essential COM book, there is plenty of talk of the "problems that COM solves" - and they all sound important, relevant and current.
So how are the problems that COM addresses dealt with on other systems (linux, unix, OSX, android)? I'm thinking of things like:
binary compatibility across compilers and compiler versions
binary component reuse
compiling an application such that it has run-time dependencies rather than load-time ones (so that it runs even when a dependency is missing)
access to library functionality from languages other than the library's own
reasonably low-overhead remote procedure calls to components loaded in the address space of a different process
etc (I'm sure the list goes on)
I'm basically just trying to understand why for instance on Linux CORBA isn't a thing like COM is a thing on Windows (if that makes any sense). Does maybe software development on Linux subscribe to a different philosophy than the component-based model proposed by COM?
And finally, is COM a C/C++ thing? Several times I've come across comments from people saying COM is made "obsolete" by .NET but without really explaining what they meant by that.
For the remainder of this post, I'm going to use Linux as an example of open-source software. Where I mention "Linux" it's mostly a short/simple way to refer to open source software in general though, not anything specific to Linux.
COM vs. .NET
COM isn't actually restricted to C and C++, and .NET doesn't actually replace COM. However, .NET does provide alternatives to COM for some situations. One common use of COM is to provide controls (ActiveX controls). .NET provides/supports its own protocol for controls that allows somebody to write a control in one .NET language, and use that control from any other .NET language--more or less the same sort of thing that COM provides outside the .NET world.
Likewise, .NET provides Windows Communication Foundation (WCF). WCF implements SOAP (Simple Object Access Protocol)--which may have started out simple, but grew into something a lot less simple at best. In any case, WCF provides many of the same kinds of capabilities as COM does. Although WCF itself is specific to .NET, it implements SOAP, and a SOAP server built using WCF can talk to one implemented without WCF (and vice versa). Since you mention overhead, it's probably worth mentioning that WCF/SOAP tend to add more overhead than COM (I've seen anywhere from nearly equal to about double the overhead, depending on the situation).
Differences in Requirements
For Linux, the first two points tend to have relatively low relevance. Most software is open source, and many users are accustomed to building from source in any case. For such users, binary compatibility/reuse is of little or no consequence (in fact, quite a few users are likely to reject all software that isn't distributed in source code form). Although binaries are commonly distributed (e.g., with apt-get, yum, etc.) they're basically just caching a binary built for a specific system. That is, on Windows you might have a single binary for use on anything from Windows XP up through Windows 10, but if you use apt-get on, say, Ubuntu 18.04, you're installing a binary built specifically for Ubuntu 18.04, not one that tries to be compatible with everything back to Ubuntu 10 (or whatever).
Being able to load and run (with reduced capabilities) when a component is missing is also most often a closed-source problem. Closed source software typically has several versions with varying capabilities to support different prices. It's convenient for the vendor to be able to build one version of the main application, and give varying levels of functionality depending on which other components are supplied/omitted.
That's primarily to support different price levels though. When the software is free, there's only one price and one version: the awesome edition.
Access to library functionality between languages again tends to be based more on source code instead of a binary interface, such as using SWIG to allow use of C or C++ source code from languages like Python and Ruby. Again, COM is basically curing a problem that arises primarily from lack of source code; when using open source software, the problem simply doesn't arise to start with.
Low-overhead RPC to code in other processes again seems to stem primarily from closed source software. When/if you want Microsoft Excel to be able to use some internal "stuff" in, say, Adobe Photoshop, you use COM to let them communicate. That adds run-time overhead and extra complexity, but when one of the pieces of code is owned by Microsoft and the other by Adobe, it's pretty much what you're stuck with.
Source Code Level Sharing
In open source software, however, if project A has some functionality that's useful in project B, what you're likely to see is (at most) a fork of project A to turn that functionality into a library, which is then linked into both the remainder of project A and into Project B, and quite possibly projects C, D, and E as well--all without imposing the overhead of COM, cross-procedure RPC, etc.
Now, don't get me wrong: I'm not trying to act as a spokesperson for open source software, nor to say that closed source is terrible and open source is always dramatically superior. What I am saying is that COM is defined primarily at a binary level, but for open source software, people tend to deal more with source code instead.
Of course, SWIG is only one example among several tools that support cross-language development at a source-code level. While SWIG is widely used, COM differs from it in one rather crucial way: with COM, you define an interface in a single, neutral language, and then generate a set of language bindings (proxies and stubs) that fit that interface. This is rather different from SWIG, where you're mapping directly from one source language to one target language (e.g., bindings to use a C library from Python).
Binary Communication
There are still cases where it's useful to have at least some capabilities similar to those provided by COM. These have led to open-source systems that resemble COM to a rather greater degree. For example, a number of open-source desktop environments use/implement D-bus. Where COM is mostly an RPC kind of thing, D-bus is mostly an agreed-upon way of sending messages between components.
D-bus does, however, specify things it calls objects. Its objects can have methods that you can invoke, and they can emit signals. Although D-bus itself defines this primarily in terms of a messaging protocol, it's fairly trivial to write proxy objects that make invoking a method on a remote object look pretty much like invoking one on a local object. The big difference is that COM has a "compiler" that can take a specification of the protocol and automatically generate those proxies for you (and corresponding stubs in the far end to receive the message and invoke the proper function based on the message it received). That's not part of D-bus itself, but people have written tools to take (for example) an interface specification and automatically generate proxies/stubs from that specification.
As such, although the two aren't exactly identical, there's enough similarity that D-bus can be (and often is) used for many of the same sorts of things as COM.
Systems Similar to DCOM
COM also allows you to build distributed systems using DCOM (Distributed COM). That is, a system where you invoke a method on one machine, but (at least potentially) execute that invoked method on another machine. This adds more overhead, but since (as pointed out above with respect to D-bus) RPC is basically communication with proxies/stubs attached to the ends, it's pretty easy to do the same thing in a distributed fashion. The difference in overhead, however, tends to lead to differences in how systems need to be designed to work well, so the practical advantage of using exactly the same system for distributed systems as for local systems tends to be fairly minimal.
As such, the open source world provides tools for doing distributed RPC, but doesn't usually work hard at making them look the same as non-distributed systems. CORBA is well known, but generally viewed as large and complex, so (at least in my experience) current use is fairly minimal. Apache Thrift provides some of the same general type of capabilities, but in a rather simpler, lighter-weight fashion. In particular, where CORBA attempts to provide a complete set of tools for distributed computing (complete with everything from authentication to distributed time keeping), Thrift follows the Unix philosophy much more closely, attempting to meet exactly one need: generate proxies and stubs from an interface definition (written in a neutral language). If you want to do those CORBA-like things with Thrift you undoubtedly can, but in a more typical case of building internal infrastructure where the caller and callee trust each other, you can avoid a lot of overhead and just get on with the business at hand. Likewise, gRPC provides roughly the same sorts of capabilities as Thrift.
OS X Specific
Cocoa provides Distributed Objects, which are fairly similar to COM. This is based on Objective-C, though, and I believe it's now deprecated.
Apple also offers XPC. XPC is more about inter-process communication than RPC, so I'd consider it more directly comparable to D-bus than to COM. But, much like D-bus, it has a lot of the same basic capabilities as COM, in a different form that places more emphasis on communication and less on making things look like local function calls (and many now prefer messaging to RPC anyway).
Summary
Open source software has enough different factors in its design that there's less demand for something providing the same mix of capabilities as Microsoft's COM provides on Windows. COM is largely a single tool that tries to meet all needs. In the open-source world, there's less drive to provide that single, all-encompassing solution, and more tendency toward a kit of tools, each doing one thing well, that can be put together into a solution for a specific need.
Being more commercially oriented, Apple OS X probably has what are (at least arguably) closer analogs to COM than most of the more purely open-source world.
A quick answer on the last question: COM is far from being obsolete. Almost everything in the Microsoft world is COM-based, including the .NET engine (the CLR), and including the new Windows 8.x's Windows Runtime.
Here is what Microsoft says about .NET in its latest C++ pages, Welcome Back to C++ (Modern C++):
C++ is experiencing a renaissance because power is king again. Languages like Java and C# are good when programmer productivity is important, but they show their limitations when power and performance are paramount. For high efficiency and power, especially on devices that have limited hardware, nothing beats modern C++.
PS: which is a bit of a shock for a developer who has invested more than 10 years on .NET :-)
In the Linux world, it is more common to develop components that are statically linked, or which run in separate processes and communicate by piping text (maybe JSON or XML) back and forth.
Some of this is due to tradition. UNIX developers have been doing stuff like this long before CORBA or COM existed. It's "the UNIX way".
As Jerry Coffin says in his answer, when you have the source code for everything, binary interfaces are not as important, and in fact just make everything more difficult.
COM was invented back when personal computers were a lot slower than they are today. In those days, loading components into your app's process space and invoking native code was often necessary to achieve reasonable performance. Now, parsing text and running interpreted scripts aren't things to be afraid of.
CORBA never really caught on in the open-source world because the initial implementations were proprietary and expensive, and by the time high-quality free implementations were available, the spec was so complicated that nobody wanted to use it if they weren't required to do so.
To a large extent, the problems solved by COM are simply ignored on Linux.
It is true that binary compatibility is less important when you have the source code available. However, you still have to worry about modularisation and versioning. If two different programs depend on different versions of the same library, you need to somehow support that.
Then there is the case of the same program using different versions of the same library. This is often useful when working on large legacy programs, where upgrading everything can be prohibitively expensive but you would like to use new features anyway. With COM, the old parts of the program can just be left alone, since new library versions can more easily be made backwards compatible.
In addition, having to compile from source instead of relying on binary compatibility is a huge hassle - especially if you are actively developing software, since binary incompatibility means you have to recompile much more often. If one tiny part changes in a large C++ program, you may have to wait for a 30-minute recompile. If the different pieces are binary compatible, only the part which changed has to be recompiled.
COM, and DCOM in particular, have been around in Windows for some considerable time now, and naturally Windows developers have made use of this powerful framework.
We are now in the cross-platform age, and when porting such applications to other platforms we are faced with challenges - challenges which in many cases can be mitigated or eliminated altogether, unless the application we are porting is more than just one simple standalone app.
If you're dealing with a whole suite of modules running on different machines, all communicating using Windows-specific technologies such as DCE/RPC, DCOM, or even Windows named pipes, then your job just became an order of magnitude harder.
DCE/RPC, DCOM, and Windows named pipes are all very Windows-specific, non-portable, and of course subject to Windows security access control.
For instance, anyone familiar with OPC DA (an industrial automation protocol based on DCOM, still very much in use but now superseded by OPC UA, which avoids DCOM) will know that there are no elegant solutions here if the client (or server) needs to be available for Linux!
Sure, there are some technical hurdles here given that the MS code is not in the public domain, but projects such as Wine have a partly-working DCE/RPC implementation, and MS does publish some of the protocol docs. Even so, try searching and you will probably find little information and few products, open source or otherwise, to help you.
Perhaps the lack of open-source or affordable options here is due more to legal concerns - I wonder!
Some simpler solutions involve installing a "gateway service" on the Windows machines to allow an alternative means of access to the DCOM interfaces on those machines. This is fine unless the Windows machine belongs to an unwilling third party, which unfortunately is sometimes the case! "I know, we'll just chuck another Windows machine in the middle as a gateway" is the usual global-warming-enhancing solution to that problem.
I would conclude that Linux-to-Windows DCOM interoperability is certainly not impossible, but it does appear to be a topic that few are interested in talking about unless you get your wallet out!

Cross-platform C++ tool chain

I'm a die-hard .NET developer with limited experience in C++. I'm really familiar with how happily interpreted languages (and scripting languages) work cross-platform, but what about C++?
I realize that GCC/G++ and some other compilers sort of work multi-platform with the right compiler flags, and I understand that the STL is standardized between compilers, but what else am I missing? I'll need to do audio in/out, high-accuracy timers, and multithreading. I don't think any of these things are supported in the STL, so I'll be needing a cross-platform library of some type, right? Which one should I use?
I'm aiming to support the latest Mac and Windows platforms. This is a shared framework/sdk and won't have a UI. I plan on writing the UI in a native language such as Objective-C/Cocoa and .NET/WPF (both which have excellent native UI support).
So what should my tool chain look like? Should I be using GCC/G++ or MinGW? What other libraries should I integrate that will function cross-platform? I'd like to set up my build environment to "just work" so that I can build a Mac-compatible binary and a Windows-compatible binary (32-bit and 64-bit). How can I do that?
At this point, I'll be writing code for each platform so that I can interop with this C++ multi-platform framework/SDK/API thingy. On Windows, I think this will look like a managed DLL, is that right? Any thoughts on how I'll do this on a Mac?
Any suggestions or recommendations?
Thanks for the suggestions,
Brett
1) Your real goal here should be to write portable C++ -- you get portability by keeping your language use clean, not by using a particular compiler.
2) Some of what you want, like audio i/o, is fairly inherently platform dependent. You can, however, cleanly isolate the platform dependencies inside particular modules and conditionally compile a set of platform support files on each platform (see the sketch after this list).
3) Stuff like multithreading can be done in a platform independent manner using standard libraries.
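A sketch of point 2, using a hypothetical sleep_ms() wrapper - the platform dependency is confined to one small function, and everything else calls only the portable interface:

    // platform_sleep.cpp - isolate the OS dependency behind one function (sketch).
    #ifdef _WIN32
      #include <windows.h>
    #else
      #include <unistd.h>
    #endif

    void sleep_ms(unsigned ms) {
    #ifdef _WIN32
        Sleep(ms);           // Win32 API
    #else
        usleep(ms * 1000);   // POSIX API
    #endif
    }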
Getting into subjective territory, but for a developer used to working with large, do-it-all frameworks like those we find in .NET, Qt would probably be your best cross-platform C++ analogy (though I am not a huge fan of it). There you have your threads, XML I/O, localization, sockets, high-accuracy timers, GUI building blocks, etc.
[...] and I'll need to do multithreading. I don't think any of these things are supported in the STL [...]
Just a small thing, but the STL properly refers only to the containers and generic algorithms of the C++ Standard Library; the two are not synonymous. C++11 does specify concurrency support in the standard library, but popular compilers are still slow to support it fully. You also have Boost if you need threads in the meantime: http://www.boost.org - an extremely cross-platform library.
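A minimal Boost.Thread sketch; its interface closely mirrors what C++11 standardized, so code like this ports forward easily:

    // Spawn and join a thread with Boost.Thread (sketch).
    #include <boost/thread.hpp>
    #include <iostream>

    void worker() {
        std::cout << "hello from a worker thread\n";
    }

    int main() {
        boost::thread t(worker); // starts running worker() immediately
        t.join();                // wait for it to finish
    }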
As for building an API that can happily plug in to various environments (.NET, Cocoa, etc), your best, most cross-platform option is actually to expose a C API (you're free to implement it using C++). It'll cause the least headaches this way (no issues with name mangling, trying to interop with complex, user-defined C++ types, etc.).
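A sketch of that approach with a hypothetical Engine type - the header exposes only an opaque handle and plain C functions, while the implementation behind it is ordinary C++:

    /* audio_api.h - hypothetical C facade over a C++ implementation. */
    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct Engine Engine;   /* opaque handle; callers never see inside */

    Engine* engine_create(void);
    void    engine_start(Engine* e);
    void    engine_destroy(Engine* e);

    #ifdef __cplusplus
    }
    #endif

    // audio_api.cpp - ordinary C++ behind the C boundary.
    #include "audio_api.h"
    struct Engine { /* C++ members, RAII, etc. */ };
    extern "C" Engine* engine_create(void)       { return new Engine(); }
    extern "C" void    engine_start(Engine*)     { /* start audio... */ }
    extern "C" void    engine_destroy(Engine* e) { delete e; }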
I recommend you get some practice building DLLs/shared libraries in C++ as a way to extend .NET and Cocoa applications (e.g., a C++ module used from Objective-C) before you start trying to develop a grand library.
For timing, the standard library has <chrono>, which has nanosecond resolution on Mac. Unfortunately, it only has millisecond resolution on Windows as of the VS11 Beta. I'm hoping they fix that without too much delay.
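Basic usage looks like this; the code itself is portable even though the clock's actual resolution varies by platform, as noted above:

    // Time a piece of work with <chrono> (C++11, sketch).
    #include <chrono>
    #include <iostream>

    int main() {
        auto start = std::chrono::high_resolution_clock::now();
        // ... work to be timed ...
        auto end = std::chrono::high_resolution_clock::now();
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
        std::cout << us.count() << " microseconds\n";
    }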

C or C++ for a Robot?

Greetings,
I am trying to decide between C and C++ for my robot. I am a 5+ year veteran of Visual Basic .NET; however, I'm going with Linux (Ubuntu) on this robot, and there is a compatibility problem between Linux and the .NET Framework. I want to stick with whichever language I choose for all of my projects, so I want to make sure that I choose the most appropriate one for the task.
For reference, I will describe my current robot in progress and what I am going to do with it. I am in the process of building a full-sized R4 Astromech (yep, I'm one of those guys). I have incorporated a PC motherboard with an Intel Core 2 2.1 GHz processor, 1 GB RAM. I will be using a scratch-built parallel interface card to control the drive motors, head motor, as well as a secondary parallel interface card (going to a second parallel port) which all of the sensors (IR, Ultrasonic Ranging, Visual Recognition via webcam, etc.) will be going to. Commands will be given using speech recognition (currently have a VB.NET scratch-built recognition program that I will be adapting to the new language).
Given the specifications and desired goals listed above, would I be better off with C or C++? I greatly appreciate any suggestions that you might have.
Thanks!
Thaskalas
What do you mean by a compatibility problem? Have you looked at Mono? It's an open-source implementation of the .NET libraries. It's geared toward C#, not VB.NET, but if you're more comfortable in a .NET environment, use it. Speed isn't really an issue here, as a Core 2 Duo is plenty fast for what you need to do.
If Mono won't work for you, I'd recommend C++. There are a lot more libraries out there for C++ (or at least, I am familiar with more, e.g., Boost), and C++ can use most C libraries too. There's no real speed penalty for using C++. While using C wouldn't be bad per se, C++ has some benefits and no drawbacks, so it's probably the better choice.
I would recommend using ROS. It will let you get started with a sophisticated Inter-Process Communications manager, as well as a large library of sophisticated robotics code, including multiple implementations of SLAM and other critical robotics algorithms. ROS also lets you program in multiple languages, including C, C++, and Python, so you aren't stuck with one language or another down the road.
I would also recommend C++ and ROS. In our company we're migrating to it, because there are so many people working on it, expanding it, and adding lots of cool features.
With this, you can forget about implementing most of the basic low-level stuff and start working on what you intend to research.
It's really easy to set up and start developing.
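For a flavour of what roscpp code looks like, here is a minimal hypothetical publisher node; the node name, topic, and message are placeholders, not part of any real robot:

    // Hypothetical roscpp node publishing motor commands at 10 Hz (sketch).
    #include <ros/ros.h>
    #include <std_msgs/String.h>

    int main(int argc, char** argv) {
        ros::init(argc, argv, "astromech_driver");
        ros::NodeHandle nh;
        ros::Publisher pub = nh.advertise<std_msgs::String>("motor_cmd", 10);

        ros::Rate rate(10); // 10 Hz control loop
        while (ros::ok()) {
            std_msgs::String cmd;
            cmd.data = "forward";
            pub.publish(cmd);
            ros::spinOnce();
            rate.sleep();
        }
    }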
Since you're running Linux on it, I'd recommend a split approach, where you do the lower-level (device interface, where you may need fast performance) stuff in C (or C++), and the higher level stuff in a modern language like C# (using Mono) or Java, or even Python.
Python especially is hugely expressive, has a large set of libraries, and has a pretty straightforward C interface.
Writing your high-level control stuff in a low-level language like C/C++ will get old fast (IMHO). Robots should be fun!
Have you considered D? It's a fairly new language, is compiled to native code and can link directly to C. (The entire C standard library is even available from D, and bindings to the POSIX API are included in the standard library.) Basically all you need to do to use any C library from D is compile it with a C compiler and translate the function prototypes, constant declarations, etc. in the header file.
D is low-level enough that an experimental kernel is written in it, but has modern features like garbage collection (though manual memory management is still permitted), builtin strings and arrays, and more advanced/easier to use template metaprogramming facilities than C++. The biggest disadvantage is lack of a mature toolchain and libraries for enterprise-y things, but for your purposes that probably doesn't matter. BTW, if you need to do a bunch of matrix manipulation and stuff, there's the SciD project, which provides nice templated wrappers over LAPACK and BLAS.
Use C++. You have the space. You can use it "as a better C" to start with.
C++ is a bigger tool bag; why would you not want that!? You need not use all the tools, but with C you'd have no choice. Most importantly with C++ you have the choice of using both C and C++ third-party libraries.

If ANSI C++ doesn't support multithreading, how can unmanaged C++ apps be multithreaded?

I have heard that C++ offers no native support for multithreading. I assume that multithreaded C++ apps depend on managed code for multithreading; that is, for example, a Visual C++ app uses MFC or .NET or something along those lines to provide multithreading capability. I further assume that some or all of those managed-code capabilities are unavailable to unmanaged applications. But I have read about unmanaged multithreaded applications. How is this possible? Which of my assumptions is false?
It is wholly up to the operating system to provide support for multi-threading. On Windows, the necessary functionality is available via the Win32 API. Frameworks such as MFC provide wrappers over the low-level threading functions to simplify things, while of course .NET/CLR has its own managed interface for accessing Win32 multi-threading capabilities.
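A bare-bones sketch of that low-level Win32 API (the same calls the frameworks wrap); note that code using the C runtime often prefers _beginthreadex over raw CreateThread:

    // Creating and joining a thread with the raw Win32 API (sketch).
    #include <windows.h>
    #include <iostream>

    DWORD WINAPI worker(LPVOID) {
        std::cout << "hello from a Win32 thread\n";
        return 0;
    }

    int main() {
        HANDLE h = CreateThread(NULL, 0, worker, NULL, 0, NULL);
        WaitForSingleObject(h, INFINITE); // "join" the thread
        CloseHandle(h);
    }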
A good explanation is offered in this article (Multithreading in C++):
Why Doesn't C++ Contain Built-In Support for Multithreading?
C++ does not contain any built-in support for multithreaded applications. Instead, it relies entirely upon the operating system to provide this feature. Given that both Java and C# provide built-in support for multithreading, it is natural to ask why this isn't also the case for C++. The answers are efficiency, control, and the range of applications to which C++ is applied. Let's examine each.
By not building in support for multithreading, C++ does not attempt to define a "one size fits all" solution. Instead, C++ allows you to directly utilize the multithreading features provided by the operating system. This approach means that your programs can be multithreaded in the most efficient means supported by the execution environment. Because many multitasking environments offer rich support for multithreading, being able to access that support is crucial to the creation of high-performance, multithreaded programs.
Multithreading in C++ does not require managed code.
In very much the same way that C++ does not provide native support for displaying graphics or emitting sounds or reading input from a mouse, the operating system that's being used will provide a C++ API for utilizing these features.
It's not a matter of C++ not being able to do it. It simply hasn't been written into the C++ standard yet.
Some of your assumptions are not quite right. The operating system (I'm talking about Win32, since you mention .NET) provides support for threading. There are lots of good threading libraries that build on top of the OS functionality in C++ to make multithreading "easier" :) -- pthreads, for example. There is more at MSDN.
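For comparison, the pthreads equivalent on POSIX systems - the same pattern, layered on the OS just as Win32 threads are:

    // Creating and joining a POSIX thread (sketch).
    #include <pthread.h>
    #include <cstdio>

    void* worker(void*) {
        std::puts("hello from a pthread");
        return NULL;
    }

    int main() {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
        pthread_join(tid, NULL); // wait for the thread to finish
    }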
The ISO standard for the programming language C++ neither defines nor prohibits multithreading. An implementation is allowed to provide extensions if it wishes. A program is allowed to use implementation extensions if it wishes, and then the program will only run on systems that provide those extensions.
For comparison, the ISO standard for the programming language C++ neither defines nor prohibits the use of a mouse. A program is allowed to use implementation extensions and then it will only run on systems that provide those extensions. For another comparison, the ISO standard for C++ neither defines nor prohibits UTF-8, so your program can depend on Latin-1 and then your program will only run on systems that provide Latin-1.
Native C++ does not offer "built-in" multithreading support simply because it was never intended to - nor, in fact, was it needed.
Your misconception is that this is a fault, while it is in fact a strength of the language. By being "oblivious" to multithreading, C++ seamlessly integrates with the MT support offered by whatever OS your code will compile and run on, thereby offering much more flexibility and efficiency than if it came with its own "MT baggage", so to speak.
You mention MFC and .NET as examples - be aware that these libraries/wrappers are merely a layer over the basic Win32 APIs. Using C++ as intended will give you efficient code that will run multithreaded on ANY OS, as long as you separate the logic from the OS-specific MT API calls (i.e., thread creation, etc.), so that porting between OSes is greatly facilitated.
Unlike Java, which defined language constructs and JVM specs, the C++ standard is oblivious to threading (as is C). As far as these languages are concerned, anything thread-related consists of function calls to OS functionality. Libraries compiled for multithreading simply make sense of the same calls, but from a language perspective they are plain old code.
I think you misunderstand the definition of 'managed' code. 'Managed' code is a Microsoft-specific term meaning code that uses the .NET framework and is thus subject to the various aspects of .NET. 'Unmanaged' code means code that runs outside that and does not operate through the .NET layer. MFC code is 'unmanaged'; it's merely a spectacularly bad wrapper for the ubiquitous Win32 API (which isn't even the lowest-level API available on Windows).
The .NET libraries (including multithreading) are almost all, at some level, interfaces for the more basic system APIs used by traditional, 'unmanaged' applications. There is, generally speaking, no functionality available to 'managed' code that cannot be replicated in 'unmanaged' code with sufficient effort, though the reverse is not true (this is called the abstraction penalty, if you wish to know more). While it may be easier to do in 'managed' code, that's just because somewhere, some 'unmanaged' code is doing it for you, more or less. In the case of a threading API, it is in turn an interface to the operating system kernel, which itself accesses the processor's capabilities to allow a process to run in multiple places concurrently (if using multiple cores; if not, then it's just a pseudo-concurrency).
The C++ standard currently provides no definition of threads (the upcoming C++1x standard does). There are a number different threading libraries available, including those provided by Win32 and MFC, the pthreads library found on POSIX systems, and Boost.Thread, which will use the platform's local threading library.
The next C++ standard (named C++0x) will have support for multithreading, including atomic operations.
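As a sketch of what that atomics support looks like, using the syntax as it eventually landed in C++11:

    // Lock-free shared counter with std::atomic (C++11, sketch).
    #include <atomic>
    #include <thread>
    #include <iostream>

    std::atomic<int> counter(0);

    void bump() {
        for (int i = 0; i < 100000; ++i)
            counter.fetch_add(1); // atomic increment, no mutex needed
    }

    int main() {
        std::thread a(bump), b(bump);
        a.join();
        b.join();
        std::cout << counter << "\n"; // always 200000
    }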

Making portable code

With all the fuss about open-source projects, how come there is still no strong standard that enables you to write portable code (I mean in C/C++, not Java or C#)?
Everyone is kind of making their own soup.
There are even some third-party libs, like the Apache Portable Runtime.
Yes, there is no standard, but libraries like Qt and Boost can make your life much easier when you do cross-platform development.
wxWidgets is a great abstraction layer over the native GUI widgets of most window managers.
I think the main reason there isn't any single library anyone agrees on is that everyone's requirements are different. When you want to wrap system libraries you'll often need to make some assumptions about what the use cases will be, unless you want to make the wrapper huge and impossible to work with. I think that might be the main reason there's no single, common cross platform runtime.
For GUI, the reason would be that each platform has its own UI conventions; you can't code one GUI that fits all - you'll simply get one that fits just one platform, or none at all.
There are many libraries that make cross-platform development easier on their own, but making a complete wrapper for all platforms ends up being either small and highly customized, or massive and completely ridiculous.
Carried to its logical conclusion, a complete wrapper for all aspects of an operating system becomes an entire virtual runtime. You might as well make your own programming language.
The ADAPTIVE Communication Environment (ACE) is an excellent object oriented framework that provides cross-platform support for all of the low level OS functionality like threading, sockets, mutexes, etc. It runs with a crazy number of compilers and operating systems.
C and C++ are standardized languages. If you closely follow their rules when coding (that means not using vendor-specific extensions), your code should be portable, and you should be able to compile it with any modern compiler on any OS.
However, C and C++ don't ship with a GUI library the way Java and C# do, but there are free and commercial GUI libraries that will allow you to write portable GUI applications.
I think the most popular are Qt (commercial) and wxWidgets (FOSS). According to Wikipedia, there are a lot more.
There is also Boost; while not a GUI library, Boost is a really great complement to C++'s standard library. In fact, some of the Boost libraries will be added to the next C++ standard.
If you make sure it compiles cleanly with both GCC and MS VC++, it will take little extra effort to port it anywhere else.