I am involved in a venture that will port some communications, parsing, and data-handling functionality from Win32 to Linux, and both platforms will be supported. The problem domain is very sensitive to throughput and performance.
I have very little experience with performance characteristics of boost and ACE. Specifically we want to understand which library provides the best performance for threading.
Can anyone provide some data -- documented or word-of-mouth or perhaps some links -- about the relative performance between the two?
EDIT
Thanks all. Confirmed our initial thoughts - we'll most likely choose boost for system level cross-platform stuff.
Neither library should really have any overhead compared to using native OS threading facilities. You should be looking at which API is cleaner. In my opinion the boost thread API is significantly easier to use.
ACE tends to be more "classic OO", while boost tends to draw from the design of the C++ standard library. For example, launching a thread in ACE requires creating a new class derived from ACE_Task, and overriding the virtual svc() function which is called when your thread runs. In boost, you create a thread and run whatever function you want, which is significantly less invasive.
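To make the contrast concrete, here is a minimal sketch of both styles (ACE_Task_Base with its virtual svc() hook versus boost::thread; this is my own illustration, not code from either project's docs):

```cpp
// ACE style: derive from ACE_Task_Base and override the virtual svc() hook.
#include <ace/Task.h>

class Worker : public ACE_Task_Base {
public:
    virtual int svc() {            // body of the spawned thread
        // ... do the work ...
        return 0;
    }
};

// boost style: hand any callable straight to the thread constructor.
#include <boost/thread.hpp>

void do_work() { /* ... do the work ... */ }

int main() {
    Worker w;
    w.activate();                  // spawn the ACE thread
    w.wait();                      // wait for it to finish

    boost::thread t(do_work);      // spawn the boost thread
    t.join();
    return 0;
}
```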
Do yourself a favor and steer clear of ACE. It's a horrible, horrible library that should never have been written, if you ask me. I've worked with it (or rather HAD to work with it) for 3 years, and I tell you it's a poorly designed, poorly documented, poorly implemented piece of junk using archaic C++ and built on completely brain-dead design decisions ... calling ACE "C with classes" is actually doing it a favor. If you look into the internal implementations of some of its constructs, you'll often have a hard time suppressing your gag reflex.
Also, I can't stress the "poor documentation" aspect enough. Usually, ACE's notion of documenting a function consists of simply printing the function's signature. As to the meaning of its arguments, its return value and its general behavior, well you're usually left to figure that out on your own. I'm sick and tired of having to guess which exceptions a function may throw, which return value denotes success, which arguments I have to pass to make the function do what I need it to do or whether a function / class is thread-safe or not.
Boost, on the other hand, is simple to use, modern C++, extremely well documented, and it just WORKS! Boost is the way to go, down with ACE!
Don't worry about the overhead of an OS-abstraction layer on threading and synchronization objects. Threading overhead literally doesn't matter at all (since it only applies to thread creation, which is already enormously slow compared to the overhead of a pimpl-ized pointer indirection). If you find that mutex ops are slowing you down, you're better off looking at atomic operations or rearranging your data access patterns to avoid contention.
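As an illustration of that last suggestion, here is a minimal sketch contrasting a mutex-guarded counter with an atomic one (standard C++11 atomics shown purely as an example; neither library is required for this):

```cpp
#include <atomic>
#include <mutex>

std::mutex m;
long hits_locked = 0;

void count_locked() {
    std::lock_guard<std::mutex> lock(m);   // every caller contends on the mutex
    ++hits_locked;
}

std::atomic<long> hits_atomic(0);

void count_atomic() {
    hits_atomic.fetch_add(1, std::memory_order_relaxed);  // lock-free increment
}
```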
Regarding boost vs. ACE, it's a matter of "new-style" vs. "old-style" programming. Boost has a lot of header-only template-based shenanigans (that are beautiful to work with, if you can appreciate it). If, on the other hand, you're used to "C with classes" style of C++, ACE will feel much more natural. I believe it's mostly a matter of personal taste for your team.
I've used ACE for numerous heavy-duty production servers. It never failed me. It is rock solid and has done the work for many years now. I tried to learn Boost's ASIO network framework and couldn't get the hang of it. While Boost is more "modern" C++, it is also harder to use for non-trivial tasks - and without "modern" C++ experience and deep STL knowledge it is difficult to use correctly.
Even if ACE is a kind of old-school C++, it still has many thread-oriented features that boost doesn't provide yet.
At the moment I see no reason not to use both (but for different purposes). Once boost provides a simple means to implement message queues between tasks, I may consider abandoning ACE.
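For context, this is roughly what one ends up hand-rolling without something like ACE_Message_Queue - a hypothetical minimal sketch using standard threading primitives, with none of ACE's timeouts, priorities, or water marks:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

class MessageQueue {
public:
    void push(std::string msg) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(msg));
        }
        cv_.notify_one();
    }

    std::string pop() {                        // blocks until a message arrives
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        std::string msg = std::move(q_.front());
        q_.pop();
        return msg;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::string> q_;
};
```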
When it comes to ease of use, boost is way better than ACE. boost-asio has a more transparent API, its abstractions are simpler, and they can easily provide building blocks for your application. Compile-time polymorphism is used judiciously in boost to warn about or prevent illegal code. ACE's use of templates, on the other hand, is limited to generalization and is hardly ever user-centric enough to disallow illegal operations. You're more likely to discover problems at run-time with ACE.
A simple example I can think of is ACE_Reactor - a fairly scalable and decoupled interface - but you must remember to call its "own" function if you're running its event loop in a different thread from the one where it was created. I spent hours figuring this out the first time and could easily have spent days. Ironically enough, its object model shows more details than it hides - good for learning but bad for abstraction.
https://groups.google.com/forum/?fromgroups=#!topic/comp.soft-sys.ace/QvXE7391XKA
Threading is really only a small part of what boost and ACE provide, and the two aren't really comparable overall. I agree that boost is easier to use, as ACE is a pretty heavy framework.
I wouldn't call ACE "C with classes." ACE is not intuitive, but if you take your time and use the framework as intended, you will not regret it.
From what I can tell, after reading Boost's docs, I'd want to use ACE's framework and Boost's container classes.
Use ACE and boost cooperatively. ACE has a better communication API, based on OO design patterns, whereas boost has a more "modern C++" design and works well with containers, for example.
We started to use ACE believing that it would hide the platform differences between Windows and Unix in TCP sockets and the select call. It turns out it does not. ACE's take on select, the reactor pattern, cannot mix sockets and stdin on Windows, and the semantic differences between the platforms concerning socket writability notifications are still present at the ACE level.
By the time we realized this, we were already using the thread and process features of ACE (the latter of which, again, does not hide the platform differences to the extent we would have liked), so our code is now tied to a huge library that actually prevents us from porting our code to 64-bit MinGW!
I can't wait for the day when the last ACE usage in our code is finally replaced with something different.
I've been using ACE for many years (8) but I have just started investigating the use of boost again for my next project. I'm considering boost because it has a bigger tool bag (regex, etc) and parts of it are getting absorbed into the C++ standard so long term maintenance should be easier.
That said, boost is going to require some adjustment. Although Greg mentions that the thread support is less invasive because it can run any (C or static) function, if you're used to thread classes more akin to the Java and C# thread classes - which is what ACE_Task provides - you have to use a little finesse to get the same with boost.
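One possible way to get that finesse - a hypothetical Runnable-style base class (the names are mine, not from either library) that puts a boost::thread behind a Java/ACE_Task-like interface:

```cpp
#include <boost/thread.hpp>

class Runnable {
public:
    virtual ~Runnable() {}
    void start() { thread_ = boost::thread(&Runnable::run, this); }  // spawn
    void join()  { thread_.join(); }                                 // wait for exit
protected:
    virtual void run() = 0;      // override with the thread body, much like svc()
private:
    boost::thread thread_;
};

class Printer : public Runnable {
protected:
    virtual void run() { /* ... thread body ... */ }
};

int main() {
    Printer p;
    p.start();
    p.join();
    return 0;
}
```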
Related
My answer to this question would be "no." But my coworkers disagree.
We're rebuilding our product and have a lot of critical decisions to make in the near-term.
While doing some of my own work I noticed that we've got some in-house C++ classes to abstract some of the POSIX API (threads, mutexes, semaphores, and rw locks) and other utility classes. Note that these classes are basic, and haven't been ported from Linux (portability is a factor in the rebuild.) We are also using POCO C++ libraries.
I brought this to the attention of my coworkers and suggested that we ditch our in-house classes in favour of their POCO equivalents. I want to take full advantage of the library we're already using. They suggested that we should implement our in-house classes using POCO, and further abstract additional POCO classes as necessary, so as not to depend on any specific C++ library (citing future unknowns - what if we want to use a different lib/framework like QT or boost, what if the one we choose turns out to be no good or development becomes inactive, etc.)
They also don't want to refactor legacy code, and by abstracting parts of POCO with our own classes, we can implement additional functionality (classic OOP.) Both of these arguments I can appreciate. However, I argue that if we're doing a recode we should go big, or go home. Now would be the time to refactor and it really shouldn't be that bad especially given the similarity between our classes and those in POCO (threads, etc.) I don't know what to say regarding the second point - should we only use extended classes where the functionality is necessary?
My coworkers also don't want to litter the POCO namespace all over the place. I argue that we should pick a library/framework/toolkit, and stick with it. Take full advantage of its features. Is this not typical practice? The only project I've seen that abstracts an entire framework is Freeswitch (that provides its own interface to APR.)
One suggestion is that the API we expose to each other, and potential customers, should be free of POCO, but it would be present in the implementation (which makes sense.)
None of us really have experience in these kinds of design decisions, and it shows in the current product. Having been at this since I was young, I've got some intuition that has brought me here, but no practical experience either. I really want to avoid poor solutions to problems that are already solved.
I think my question boils down to this: When building a product, should we a) choose a dominant framework on which to base most of our code, and b) expect that framework to be tightly coupled with the product? Isn't that the point of a framework? (Is framework or library more appropriate for POCO?)
First, the API that you expose should definitely be free of POCO, boost, qt, or any other type that is not part of the standard C++ library. This is because the base libraries have their own release cycle, distinct from the release cycle of your library. If the users of your library also use boost, but a different, incompatible, version, they would need to spend time to resolve the incompatibility. The only exception to this rule is when you design a library to be released as part of a wider framework - say, an addition to the POCO toolkit. In this case the release of your library is tied to the release of the entire toolkit.
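A sketch of the usual way to keep those types out of the public API - a hypothetical pimpl'd header (the class and member names are made up for illustration):

```cpp
// transport.h -- the public interface exposes only standard types.
#include <memory>
#include <string>

class Transport {
public:
    Transport(const std::string& host, unsigned short port);
    ~Transport();
    void send(const std::string& payload);
private:
    struct Impl;                   // defined in transport.cpp; may hold Poco::Net members
    std::unique_ptr<Impl> impl_;   // pimpl: callers never include the POCO headers
};
```

Only transport.cpp would include the POCO (or boost, or Qt) headers, so upgrading or replacing the base library never forces your users to compile against its types.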
Internally, however, you should avoid using your own wrappers, unless the library that you are abstracting out is a true "commodity library"1. The reason for this is that when you hide an external library behind your classes, most of the time you mimic the level of abstraction of the library that you are hiding. The code that uses your wrapper will program to the level of abstraction dictated by the external library. When you swap the implementation behind your wrapper for a different framework, it is very likely that you would either (1) adapt the new framework to fit the level of abstraction of the old framework, or (2) will need to change the way in which you use your wrapper. Both cases are highly suspect: if you do (1), perhaps you shouldn't switch in the first place, and if you do (2), then your wrappers prove to be useless.
1 By "commodity library" I mean a library that provides a level of abstraction commonly found in other libraries that serve a similar purpose.
There are two situations where I think it's worth having your own wrappers:
1) You've looked at several different mutex implementations on different systems/libraries, you've established a common set of requirements that they can all satisfy and that are sufficient for your software. Then you define that abstraction and implement it one or more times, knowing that you've planned ahead for flexibility. The rest of your code is written to rely only on your abstraction, not on any incidental properties of the current implementation(s). I have done this in the past, although not in code I can show you (a generic sketch appears at the end of this answer).
A classic example of this "least common interface" would be to restrict rename in a filesystem abstraction, on the basis that Windows cannot implement an atomic rename over an existing file. So your code must not rely on atomic rename-replacement if you might in future swap out your current *nix implementation for one that can't do that. You have to restrict the interface from the start.
When done right, this kind of interface can considerably ease any kind of future porting, either to a new system or because you want to change your third-party library dependencies. However, an entire framework is probably too big to successfully do this with -- essentially you'd be inventing and writing your own framework, which is not a trivial task and conceivably is a larger task than writing your actual software.
2) You want to be able to mock/stub/sham/spoof/plagiarize/whatever-the-next-clever-technique-is the mutex in tests, and you decide that you will find this easier if you have your own wrapper thrown over it than if you're trying to mess with symbols from third-party libraries, or ones that are built in.
Note that defining your own functions called wrap_pthread_mutex_init, wrap_pthread_mutex_lock etc, that precisely mimic pthread_* functions, and take exactly the same parameters, might satisfy (2) but doesn't satisfy (1). And anyway, doing (2) properly probably requires more than just wrappers, you usually also want to inject the dependencies into your code.
Doing extra work under the heading of flexibility, without actually providing for flexibility, is pretty much a waste of time. It can be very difficult or even provably impossible to implement one threading environment in terms of another one. If you decide in future to switch from pthreads to std::thread in C++, then having used an abstraction that looks exactly like the pthreads API under different names is (approximately) no help whatsoever.
For another possible change you might make, implementing the full pthreads API on Windows is sort of possible, but probably more difficult than only implementing what you actually need. So if you port to Windows, all your abstraction saves you is the time to search and replace all calls in the rest of your software. You're still going to have to (a) plug in a complete POSIX implementation for Windows, or (b) do the work to figure out what you actually need, and only implement that. Your wrapper won't help with the real work.
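For what it's worth, here is a sketch of the kind of least-common-denominator abstraction described in (1), assuming pthreads and Win32 critical sections as the two backends (no timed lock, no recursion promise, because not every backend offers them):

```cpp
#ifdef _WIN32
  #include <windows.h>
#else
  #include <pthread.h>
#endif

class Mutex {
public:
#ifdef _WIN32
    Mutex()  { InitializeCriticalSection(&cs_); }
    ~Mutex() { DeleteCriticalSection(&cs_); }
    void lock()   { EnterCriticalSection(&cs_); }
    void unlock() { LeaveCriticalSection(&cs_); }
private:
    CRITICAL_SECTION cs_;
#else
    Mutex()  { pthread_mutex_init(&m_, 0); }
    ~Mutex() { pthread_mutex_destroy(&m_); }
    void lock()   { pthread_mutex_lock(&m_); }
    void unlock() { pthread_mutex_unlock(&m_); }
private:
    pthread_mutex_t m_;
#endif
};
```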
Is the new C++11 going to contain any socket library? So that one could do something std::socket-ish?
Seeing as how std::thread will be added, it feels as if sockets should be added as well. C-style sockets are a pain... They feel extremely counter-intuitive.
Anyways: Will there be C++ sockets in C++11 (googled it but couldn't find an answer)? If not, are there any plans on adding this? Why (or why not)?
No, it is not. As for the near future, the C++ standards committee has created a study group that is developing a networking layer proposal. It looks like they're going for a bottom-up approach, starting with a basic socket layer, then building HTTP/etc support on top of that. They're looking to present the basic socket proposal at the October committee meeting.
As for why they didn't put this into C++11, that is purely speculative.
If you want my opinion on the matter, it's for this reason.
If you are making a program that does something, that has a specific functionality to it, then you can pick libraries for one of two reasons. One reason is because that library does something that is necessary to implement your code. And the other is because it does something that is helpful in implementing code in general.
It is very difficult for a design for a particular program to say, "I absolutely must use a std::vector to hold this list of items!" The design for a program isn't that specific. If you're making a web browser, the idea of a browser doesn't care if it holds its tabs in a std::vector, std::list, or a user-created object. Now, some design can strongly suggest certain data structures. But rarely does the design say explicitly that something low-level like a std::list is utterly essential.
std::list could be used in just about any program. As can std::vector, std::deque, etc.
However, if you're making a web browser, bottled within that design is networking. You must either use a networking library or write a networking layer yourself. It is a fundamental requirement of the idea.
The term I use for the former type, for libraries that could be used in anything, is "utility" libraries.
Threading is a utility library. Design might encourage threading through the need to respond to the user, but there are ways to be responsive without preemptive multithreading. Therefore, in most cases, threading is an implementation choice. Threading is therefore a utility.
Networking is not. You only use networking if your design specifically calls for it. You don't decide to just dump networking into a program. It isn't an implementation detail; it is a design requirement.
It is my opinion that the standard C/C++ library should only implement utilities. It's also why I'm against other heavyweight ideas like XML parsers, etc. It isn't wrong for other libraries to have these things, but for C and C++, these are not good choices.
I think it should, since a lot of other popular languages support socket operations as a part of the language (they don't force the user to use any OS-specific API). If we already have file streams to read/write local files, I don't see why we can't have some method of transferring data with sockets.
There will be no sockets in C++11. The difference between threads and sockets is that supporting threads mostly means making more guarantees about ordering, if your program involves threads; on a platform with just one core, C++11 doesn't mandate that your CPU sprout an extra core. Sockets, on the other hand, would be... difficult to implement portably and to have fail gracefully on systems that don't have them.
It is so weird that in 2022 there is still no standard for such a basic OS construct as sockets in C++.
The closest I found is kissnet (apparently around since 2019).
It's small (~1500 lines), runs on Windows and Linux, uses OpenSSL, and requires C++17 (which is a plus in my book) - basically everything I needed.
There will not be sockets in C++0x. There are proposals to add them in a future version.
The amount of new stuff in C++0x had to be limited to give the committee time to deal with it all thoroughly.
The wikipedia page for C++0x is usually pretty up to date and the section on library changes doesn't seem to mention sockets.
I've been writing a number of network daemons in different languages over the past years, and now I'm about to start a new project which requires a new custom implementation of a proprietary network protocol.
The said protocol is pretty simple - some basic JSON-formatted messages wrapped in some basic framing so that clients know when a message has arrived completely and is ready to be parsed.
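(For illustration only - the framing itself is left unspecified above; one common choice is a length prefix, sketched here with hypothetical names:)

```cpp
#include <arpa/inet.h>   // htonl
#include <cstdint>
#include <string>

// Hypothetical framing: 4-byte big-endian length, then the JSON payload.
// The receiver reads 4 bytes, then exactly that many payload bytes.
std::string frame_message(const std::string& json) {
    uint32_t len = htonl(static_cast<uint32_t>(json.size()));
    std::string frame(reinterpret_cast<const char*>(&len), sizeof(len));
    frame += json;
    return frame;
}
```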
The daemon will need to handle a number of connections (about 200 at the same time) and do some management of them and pass messages along, like in a chat room.
In the past I've been using mostly C++ to write my daemons. Often with the Qt4 framework (the network parts, not the GUI parts!), because that's what I also used for the rest of the projects and it was simple to do and very portable. This usually worked just fine, and I didn't have much trouble.
Being a Linux administrator for a good while now, I noticed that most of the network daemons in the wild are written in plain C (of course some are written in other languages, too, but I get the feeling that > 80% of the daemons are written in plain C).
Now I wonder why that is.
Is this due to a pure historic UNIX background (like KISS) or for plain portability or reduction of bloat? What are the reasons to not use C++ or any "higher level" languages for things like daemons?
Thanks in advance!
Update 1:
For me using C++ usually is more convenient because of the fact that I have objects which have getter and setter methods and such. Plain C's "context" objects can be a real pain at some point - especially when you are used to object oriented programming.
Yes, I'm aware that C++ is a superset of C, and that C code is basically C++ - you can compile any C code with a C++ compiler. But that's not the point. ;)
Update 2:
I'm aware that nowadays it might make more sense to use a high level (scripting) language like Python, node.js or similar. I did that in the past, and I know of the benefits of doing that (at least I hope I do ;) - but this question is just about C and C++.
I for one can't think of any technical reason to choose C over C++. Not one that I can't instantly think of a counterpoint for, anyway.
Edit in reply to edit: I would seriously discourage you from considering, "...C code is basically C++." Although you can technically compile any C program with a C++ compiler (in as far as you don't use any feature in C that's newer than what C++ has adopted) I really try to discourage anyone from writing C like code in C++ or considering C++ as "C with objects."
In response to C being standard in Linux, only in as far as C developers keep saying it :p C++ is as much part of any standard in Linux as C is and there's a huge variety of C++ programs made on Linux. If you're writing a Linux driver, you need to be doing it in C. Beyond that...I know RMS likes to say you're more likely to find a C compiler than a C++ one but that hasn't actually been true for quite a long time now. You'll find both or neither on almost all installations.
In response to maintainability - I of course disagree.
Like I said, I can't think of one that can't instantly be refuted. Vice versa too, really.
The resistance to C++ for the development of daemon code stems from a few sources:
C++ has a reputation for making it hard to avoid memory leaks. And memory leaks are a no-no in any long-running software. This is to a degree untrue - the problem is that developers with a C background tend to use C idioms in C++, and that style is very leaky. Using the available C++ features like vectors and smart pointers can produce leak-free code (a minimal sketch follows this list).
Conversely, the smart pointer template classes, while they hide resource allocation and deallocation from the programmer, do a lot of it under the covers. In fact, C++ generally has a lot of implicit allocation as a result of copy constructors and so on. As a result, the C++ heap can become fragmented over time, and daemon processes will eventually fail with an out-of-memory error even though there is sufficient RAM. This can be ameliorated by using modern heap managers that are more fragmentation-resistant, but they do this by consuming more resources up front.
While this doesn't apply to user-mode daemon code, kernel-mode developers avoid C++, again because of the implicit code C++ generates and the exceptions C++ libraries use to handle errors. Most C++ compilers implement C++ exceptions in terms of hardware exceptions, and lots of kernel-mode code is executed in environments where exceptions are not allowed to be thrown. Also, all the implicit code generated by C++, being implicit, cannot be wrapped in #pragma directives to guarantee its placement in pageable or non-pageable memory.
As a result, C++ is effectively ruled out for kernel development on just about any platform, and it is generally shunned by daemon developers too. Even if one's code is written using the proper smart memory-management classes and does not leak, keeping on top of potential memory-fragmentation issues makes languages where memory allocation is explicit a preferred choice.
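To illustrate the first point above, a minimal sketch of the leak-free style (standard library types shown here; boost's shared_ptr-era equivalents work the same way):

```cpp
#include <memory>
#include <string>
#include <vector>

struct Connection {
    std::string peer;
    // ... sockets, buffers, etc. ...
};

void serve() {
    std::vector<std::unique_ptr<Connection> > clients;   // the vector owns every Connection
    std::unique_ptr<Connection> c(new Connection);
    c->peer = "10.0.0.1";                                 // hypothetical example data
    clients.push_back(std::move(c));
    // ... work with clients ...
}   // all memory is released here, even if an exception is thrown
```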
I would recommend whichever you feel more comfortable with. If you are more comfortable with C++, your code is going to be cleaner, and run more efficiently, as you'll be more used to it, if you know what I mean.
The same applies on a larger scale to something like a Python vs Perl discussion. Whichever you are more comfortable with will probably produce better code, because you'll have experience.
I think the reason is that ANSI C is the standard programming language in Linux. It is important to follow this standard whenever people want to share their code with others etc. But it is not a requirement if you just want to write something for yourself.
You personally can use C or C++ and the result will be identical. I think you should choose C++ if you know it well and can exploit some of its object-oriented features in your code. Don't pay too much attention to other people here; if you are good at C++, just go and write your daemon in C++. I would personally write it in C++ as well.
You're right. The reason for not using C++ is KISS, particularly if you ever intend for someone else to maintain your code down the road. Most folks that I know of learned to write daemons from existing source or reading books by Stevens. Pretty much that means your examples will be in C. C++ is just fine, I've written daemons in it myself, but I think if you expect it to be maintained and you don't know who the maintainer might be down the road it shows better foresight to write in C.
Boost makes it incredibly easy to write single threaded, or multi-threaded and highly scalable, networking daemons with the asio library.
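As a taste, here is a minimal blocking boost::asio sketch (io_context in recent Boost releases; older ones call it io_service) - the port number and greeting are made up for illustration:

```cpp
#include <boost/asio.hpp>
#include <string>

int main() {
    using boost::asio::ip::tcp;
    boost::asio::io_context io;                                   // the event loop
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 12345));  // listen on port 12345

    for (;;) {
        tcp::socket socket(io);
        acceptor.accept(socket);                                  // wait for a client
        std::string greeting = "hello\n";
        boost::asio::write(socket, boost::asio::buffer(greeting));
    }
}
```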
I would recommend using C++, with a reservation about using exception handling and dynamic RTTI. These features may have run-time performance costs and may not be supported well across platforms.
C++ is more modular and maintainable, so if you can avoid these features, go ahead and use it for your project.
Both C and C++ are perfectly suited for the task of writing daemons.
Besides that, nowadays you should also consider scripting languages such as Perl or Python. Performance is usually good enough, and you will be able to write more robust applications in less time.
BTW, take a look at ACE, a framework for writing portable network applications in C++.
In trying to get up to speed with C++ (coming from a long experience with C), I am obviously trying to do the right thing, and use as much as is standard as is possible.
However, in my readings on the matter I come across a lot of criticism of standard things, and praise for non-standard things. For example, even the (I assume) poorly-thought-of MFC library has features - in, for example, its CString class - that some folks think useful enough to make them continue using it, despite the fact that (a) it's non-standard and (b) it's (I assume, from the wealth of criticism) deficient in many important ways.
My question is twofold, then:
A. What libraries that are poorly-thought-of contain features that nonetheless make it worth continuing to use them, what are those features, and what's so good about them?
B. Are there "adaptor" libraries out there that simplify and/or tighten up the use of such libraries, e.g. providing nice interfaces that abstract away resource leaks, adaptors that go from a non-STL library interface to an STL one, and so on?
As a relative newbie to StackOverflow, I'm not 100% sure that this question is sufficiently on-point, so I apologise up-front if it's too open-ended.
Thanks in advance
My personal gripe is with ACE. It was sort of the other way around - great idea, nothing else available at the time for cross-platform threaded and network development in C++, wide deployment, books by the library authors, etc. But the implementation was terrible, usage patterns were complicated, and almost all the useful features of C++ were suppressed (or didn't exist at the time). I think this library alone is responsible for a good chunk of people thinking that C++ is hard and ugly. Very recently the Boost collection started catching up with threads, IPC, and networking, so there is at least an alternative. BUT with all that said, I still think it's worth being familiar with ACE if you are in that space since, again, way too many people use it, the ideas are good, and it can serve as a great negative example for library design.
IMO the best thing about MFC is that, historically, it was available before the STL was available, and something was better than nothing.
MFC is still good, if you're writing code to be compatible with an existing MFC code base.
Apart from that there's little merit in MFC, except that it's perhaps still one of the most obvious (if not the most obvious) C++ class libraries for Windows.
On a more general note, I think one of the reasons that people stick with old, awkward and perhaps even obsolete libraries is that they have grown used to them; anything new and shiny might do the job better, but it is harder to grasp and understand initially. And so you stay with what is familiar and keep your productivity.
I think it is a balance between getting the job done and getting the job done "right". Somewhere in between is probably where most of us end up.
CString is non-standard because MFC is not cross-platform, I think. std::string is standard, but if we were to use only what is standard, then why did MS develop MFC?
I have been working with the Boost C++ Libraries for quite some time. I absolutely love the Boost Asio C++ library for network programming. However I was introduced to two other libraries: POCO and Adaptive Communication Environment (ACE) framework. I would like to know the good and bad of each.
As rdbound said, Boost has a "near STL" status. So if you don't need another library, stick to Boost. However, I use POCO because it has some advantages for my situation. The good things about POCO IMO:
Better thread library, especially the ActiveMethod implementation (sketched after this list). I also like the fact that you can set the thread priority.
More comprehensive network library than boost::asio. However boost::asio is also a very good library.
Includes functionality that is not in Boost, like XML and database interface to name a few.
It is more integrated as one library than Boost.
It has clean, modern and understandable C++ code. I find it far easier to understand than most of the Boost libraries (but I am not a template programming expert :)).
It can be used on a lot of platforms.
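Regarding the ActiveMethod point above, this is roughly the pattern, reconstructed from memory of the POCO documentation (double-check the exact signatures there; the Adder class is a made-up example):

```cpp
#include "Poco/ActiveMethod.h"
#include "Poco/ActiveResult.h"

class Adder {
public:
    Adder(): add(this, &Adder::addImpl) {}

    Poco::ActiveMethod<int, int, Adder> add;   // callable; runs addImpl on another thread

private:
    int addImpl(const int& n) { return n + 1; }
};

int main() {
    Adder adder;
    Poco::ActiveResult<int> result = adder.add(41);  // returns immediately
    result.wait();                                   // block until the result is ready
    // result.data() == 42
    return 0;
}
```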
Some disadvantages of POCO are:
It has limited documentation. This is somewhat offset by the fact that the source is easy to understand.
It has a far smaller community and user base than, say, Boost. So if you post a question on Stack Overflow, for example, your chances of getting an answer are lower than for Boost.
It remains to be seen how well it will be integrated with the new C++ standard. You know for sure that it will not be a problem for Boost.
I never used ACE, so I can't really comment on it. From what I've heard, people find POCO more modern and easier to use than ACE.
Some answers to the comments by Rahul:
I don't know about versatile and advanced. The POCO thread library provides some functionality that is not in Boost: ActiveMethod, Activity, and ThreadPool. IMO POCO threads are also easier to use and understand, but this is a subjective matter.
POCO network library also provides support for higher level protocols like HTTP and SSL (possibly also in boost::asio, but I am not sure?).
Fair enough.
Integrated library has the advantage of having consistent coding, documentation and general "look and feel".
Being cross-platform is an important feature of POCO, this is not an advantage in relation to Boost.
Again, you should probably only consider POCO if it provides some functionality you need and that is not in Boost.
I've used all three so here's my $0.02.
I really want to vote for Doug Schmidt and respect all the work he's done, but to be honest I find ACE mildly buggy and hard to use. I think that library needs a reboot. It's hard to say this, but I'd shy away from ACE for now unless there is a compelling reason to use TAO, or you need a single code base to run C++ on both Unix variants and Windows. TAO is fabulous for a number of difficult problems, but the learning curve is intense, and there's a reason CORBA has a number of critics. I guess just do your homework before making a decision to use either.
If you are coding in C++, boost is in my mind a no-brainer. I use a number of the low-level libraries and find them essential. A quick grep of my code reveals shared_ptr, program_options, regex, bind, serialization, foreach, property_tree, filesystem, tokenizer, various iterator extensions, algorithm, and mem_fn. These are mostly low-level functionality that really ought to be in the compiler. Some boost libraries are very generic; it can be work to get them to do what you want, but it's worthwhile.
Poco is a collection of utility classes that provide functionality for some very concrete common tasks. I find the libraries are well-written and intuitive. I don't have to spend much time studying documentation or writing silly test programs. I'm currently using Logger, XML, Zip, and Net/SMTP. I started using Poco when libxml2 irritated me for the last time. There are other classes I could use but haven't tried, e.g. Data::MySQL (I'm happy with mysql++) and Net::HTTP (I'm happy with libCURL). I'll try out the rest of Poco eventually, but that's not a priority at this point.
Many POCO users report using it alongside Boost, so it is obvious that there are incentives for people in both projects. Boost is a collection of high-quality libraries. But it is not a framework. As for ACE, I have used it in the past and did not like the design. Additionally, its support for ancient non-compliant compilers has shaped the code base in an ugly way.
What really distinguishes POCO is a design that scales and an interface with rich library availability reminiscent of those one gets with Java or C#. At this time, the most acutely lacking thing from POCO is asynchronous IO.
I have used ACE for a very high-performance data acquisition application with real-time constraints. A single thread handles I/O from over thirty TCP/IP socket connections and a serial port. The code runs on both 32- and 64-bit Linux. A few of the many ACE classes I have used are ACE_Reactor, ACE_Time_Value, ACE_Svc_Handler, ACE_Message_Queue, and ACE_Connector. ACE was a key factor in the success of our project. It does take a significant effort to understand how to use the ACE classes. I have all the books written about ACE. Whenever I have had to extend the functionality of our system, it typically takes some time to study what to do, and then the amount of code required is very small. I have found ACE to be very reliable. I also use a little bit of code from Boost. I do not see the same functionality in Boost. I would use either or both libraries.
I recently got a new job and work on a project that uses ACE and TAO. Well, what I can tell you is that ACE and TAO work and fully accomplish their tasks. But the overall organisation and design of the libraries are quite daunting...
For example, the main part of ACE consists of hundreds of classes starting with "ACE_". It seems like they've ignored namespaces for decades.
Additionally, many of ACE's class names don't provide useful information either. Or can you guess what classes like ACE_Dev_Poll_Reactor_Notify or ACE_Proactor_Handle_Timeout_Upcall can be used for?
Additionally, the documentation of ACE is really lacking, so unless you want to learn ACE the hard way (and it is really hard without any good documentation), I would NOT recommend using ACE unless you really need TAO for CORBA. If you don't need CORBA, go ahead and use some modern libraries.
Boost enjoys a "near STL" status due to the number of people on the C++ standards committee who are also Boost developers. Poco and ACE do not enjoy that benefit, and from my anecdotal experience Boost is more widespread.
However, POCO as a whole is more centered around network-type stuff. I stick to Boost so I can't help you there, but the plus for Boost is its (relatively) widespread use.
The ACE socket libraries are solid. If you are trying to port a standard implementation of sockets, you can't go wrong. The ACE code sticks to a rigid development paradigm. The higher-level constructs are a little confusing to use. The rigid paradigm causes some anomalies with exception handling. There are, or used to be, situations where a pair of string values passed into an exception with one of the pair being null causes an exception to be thrown inside the exception, which will boggle you. The depth of the class layering is tedious when debugging. I have never tried the other libraries, so I can't make an intelligent comment.
Boost is great, and I've only heard good things about POCO (though I've never used it), but I don't like ACE and would avoid it in future. Although you will find fans of ACE, you will also find many detractors, which you don't tend to get with boost or POCO (IME); to me that sends a clear signal that ACE is not the best tool (although it does what it says on the tin).
Out of those I've only ever really used ACE. ACE is a great framework for cross-platform enterprise networking applications. It's extremely versatile and scalable and comes with TAO and JAWS for quick, powerful development of ORB and/or Web based applications.
Getting up to speed with it can be somewhat daunting, but there is a lot of literature on it, and commercial support available.
It's somewhat heavy, though, so for smaller-scale apps it may be a bit of overkill. Reading the summary for POCO, it sounds like they're aiming for a system that can run on embedded systems, so I'm assuming it can be used in a much lighter way. I may now give it a whirl :P
I think it is really a matter of opinion; there is hardly a right answer.
In my experience writing portable Win32/Linux server code (15+ years), I personally find boost/ACE unnecessarily bloated; they introduce maintenance hazards (otherwise known as "DLL hell") for the little advantage they give.
ACE also seems to be horribly outdated; it is a "C++ library" written by "C programmers" in the '90s, and it really shows, in my opinion. As it happens, right now I am re-engineering a project written with Pico; it seems to me it completely follows the ACE idea, but in more contemporary terms - not much better at that.
In any case for high performance, efficient, elegant server communications you might be better off not using any of them.