Is it common practice to abstract library dependencies from implementation? - C++

My answer to this question would be "no." But my coworkers disagree.
We're rebuilding our product and have a lot of critical decisions to make in the near-term.
While doing some of my own work I noticed that we've got some in-house C++ classes to abstract some of the POSIX API (threads, mutexes, semaphores, and rw locks) and other utility classes. Note that these classes are basic, and haven't been ported from Linux (portability is a factor in the rebuild). We are also using the POCO C++ libraries.
I brought this to the attention of my coworkers and suggested that we ditch our in-house classes in favour of their POCO equivalents. I want to take full advantage of the library we're already using. They suggested that we should implement our in-house classes using POCO, and further abstract additional POCO classes as necessary, so as not to depend on any specific C++ library (citing future unknowns - what if we want to use a different lib/framework like Qt or Boost, what if the one we choose turns out to be no good or development becomes inactive, etc.)
They also don't want to refactor legacy code, and by abstracting parts of POCO with our own classes, we can implement additional functionality (classic OOP). Both of these arguments I can appreciate. However, I argue that if we're doing a recode we should go big or go home. Now would be the time to refactor, and it really shouldn't be that bad, especially given the similarity between our classes and those in POCO (threads, etc.) I don't know what to say regarding the second point - should we only use extended classes where the functionality is necessary?
My coworkers also don't want to litter the code with the POCO namespace. I argue that we should pick a library/framework/toolkit and stick with it. Take full advantage of its features. Is this not typical practice? The only project I've seen that abstracts an entire framework is FreeSWITCH (which provides its own interface to APR).
One suggestion is that the API we expose to each other, and to potential customers, should be free of POCO, but that it would be present in the implementation (which makes sense).
None of us really have experience in these kinds of design decisions, and it shows in the current product. Having been at this since I was young, I've got some intuition that has brought me here, but no practical experience either. I really want to avoid poor solutions to problems that are already solved.
I think my question boils down to this: When building a product, should we a) choose a dominant framework on which to base most of our code, and b) expect that framework to be tightly coupled with the product? Isn't that the point of a framework? (Is framework or library more appropriate for POCO?)

First, the API that you expose should definitely be free of POCO, Boost, Qt, or any other type that is not part of the standard C++ library. This is because the base libraries have their own release cycle, distinct from the release cycle of your library. If the users of your library also use Boost, but a different, incompatible version, they would need to spend time resolving the incompatibility. The only exception to this rule is when you design a library to be released as part of a wider framework - say, an addition to the POCO toolkit. In this case the release of your library is tied to the release of the entire toolkit.
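To make that concrete, here is a minimal pimpl-style sketch of keeping POCO out of a public header; the class names are invented for illustration, and the POCO socket is just one plausible thing an Impl might hold:

// channel.h - the public header; no POCO types appear here
#include <memory>
#include <string>

class Channel {
public:
    Channel(const std::string& host, int port);
    ~Channel();                      // defined in the .cpp, where Impl is complete
    void send(const std::string& message);
private:
    struct Impl;                     // holds the POCO members
    std::unique_ptr<Impl> impl_;     // pimpl keeps POCO out of client builds
};

// channel.cpp - the implementation is free to use POCO directly, e.g.:
// #include "Poco/Net/StreamSocket.h"
// struct Channel::Impl { Poco::Net::StreamSocket socket; };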
Internally, however, you should avoid using your own wrappers, unless the library that you are abstracting away is a true "commodity library"1. The reason for this is that when you hide an external library behind your classes, most of the time you mimic the level of abstraction of the library that you are hiding. The code that uses your wrapper will program to the level of abstraction dictated by the external library. When you swap the implementation behind your wrapper for a different framework, it is very likely that you would either (1) adapt the new framework to fit the level of abstraction of the old framework, or (2) need to change the way in which you use your wrapper. Both cases are highly suspect: if you do (1), perhaps you shouldn't switch in the first place, and if you do (2), then your wrappers prove to be useless.
1 By "commodity library" I mean a library that provides a level of abstraction commonly found in other libraries that serve a similar purpose.

There are two situations where I think it's worth having your own wrappers:
1) You've looked at several different mutex implementations on different systems/libraries, you've established a common set of requirements that they can all satisfy and that are sufficient for your software. Then you define that abstraction and implement it one or more times, knowing that you've planned ahead for flexibility. The rest of your code is written to rely only on your abstraction, not on any incidental properties of the current implementation(s). I have done this in the past, although not in code I can show you.
A classic example of this "least common interface" would be to change rename in the filesystem abstraction, on the basis that Windows cannot implement an atomic rename-over-an-existing-file. So your code must not rely on atomic rename-replacement if you might in future swap out your current *nix implementation for one that can't do that. You have to restrict the interface from the start.
When done right, this kind of interface can considerably ease any kind of future porting, either to a new system or because you want to change your third-party library dependencies. However, an entire framework is probably too big to successfully do this with -- essentially you'd be inventing and writing your own framework, which is not a trivial task and conceivably is a larger task than writing your actual software.
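As a sketch of the shape such a least-common-denominator abstraction might take (hypothetical names; only operations every target can satisfy are exposed):

#include <pthread.h>

class Mutex {
public:
    virtual ~Mutex() {}
    virtual void lock() = 0;
    virtual void unlock() = 0;
    // deliberately no timed or recursive locking unless every
    // intended implementation can honour those semantics
};

class PthreadMutex : public Mutex {
public:
    PthreadMutex()  { pthread_mutex_init(&m_, 0); }
    ~PthreadMutex() { pthread_mutex_destroy(&m_); }
    void lock()     { pthread_mutex_lock(&m_); }
    void unlock()   { pthread_mutex_unlock(&m_); }
private:
    pthread_mutex_t m_;
};
// a Win32- or POCO-backed implementation could sit alongside PthreadMutex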
2) You want to be able to mock/stub/sham/spoof/plagiarize/whatever-the-next-clever-technique-is the mutex in tests, and you decide that you will find this easier if you have your own wrapper thrown over it than if you're trying to mess with symbols from third-party libraries, or ones that are built in.
Note that defining your own functions called wrap_pthread_mutex_init, wrap_pthread_mutex_lock, etc., that precisely mimic the pthread_* functions and take exactly the same parameters, might satisfy (2) but doesn't satisfy (1). And anyway, doing (2) properly probably requires more than just wrappers; you usually also want to inject the dependencies into your code.
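For instance, reusing the Mutex interface sketched above, injection for tests might look like this (illustrative names only):

class RecordingMutex : public Mutex {      // test double: counts lock calls
public:
    RecordingMutex() : locks(0) {}
    void lock()   { ++locks; }
    void unlock() {}
    int locks;
};

class Account {
public:
    Account(Mutex& m) : m_(m), balance_(0) {}  // dependency injected, not hard-wired
    void deposit(int amount) { m_.lock(); balance_ += amount; m_.unlock(); }
private:
    Mutex& m_;
    int balance_;
};

// in a test:
// RecordingMutex fake;
// Account a(fake);
// a.deposit(10);    // assert(fake.locks == 1);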
Doing extra work under the heading of flexibility, without actually providing for flexibility, is pretty much a waste of time. It can be very difficult or even provably impossible to implement one threading environment in terms of another one. If you decide in future to switch from pthreads to std::thread in C++, then having used an abstraction that looks exactly like the pthreads API under different names is (approximately) no help whatsoever.
For another possible change you might make, implementing the full pthreads API on Windows is sort of possible, but probably more difficult than only implementing what you actually need. So if you port to Windows, all your abstraction saves you is the time to search and replace all calls in the rest of your software. You're still going to have to (a) plug in a complete Posix implementation for Windows, or (b) do the work to figure out what you actually need, and only implement that. Your wrapper won't help with the real work.

Related

Generic interface for big projects

Let's say you are writing a bigger project and you have to use 3rd party libraries, so your complete project will depend on these libraries.
I thought that instead of using those 3rd party libraries directly, I would write some sort of a wrapper library or an interface that would look something like this.
// forwards to the third-party implementation ("thirdPartyMainloop" stands in
// for whatever the real library provides; "3rdpartyMainloop" isn't a valid C++ name)
void myMainloop(/* args */) {
    thirdPartyMainloop(/* args */);
}
So if the 3rd party library gets outdated, I could just switch to another library by integrating it into my wrapper library.
Is this a good thing to do? What alternatives do I have?
I am a little bit worried that if I have two libraries that essentially do the same thing but are designed completely differently, it may not be possible to find a generic interface for both.
You could write a generic interface to which you could adapt one or more 3rd party libraries. However, it would require very careful design and planning, not to mention a huge amount of development effort if you're talking about a library of the size and scope of Qt. You'd first have to design your interfaces, then implement those interfaces for each library you wanted to be able to plug in.
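In outline that is the classic adapter pattern - something like this sketch, where "LibA" and "LibB" stand in for real third-party libraries:

class EventLoop {                         // your generic interface
public:
    virtual ~EventLoop() {}
    virtual void run() = 0;
};

class LibAEventLoop : public EventLoop {  // adapter for hypothetical LibA
public:
    void run() { /* libA_run_mainloop(); */ }
};

class LibBEventLoop : public EventLoop {  // adapter for hypothetical LibB
public:
    void run() { /* LibB::mainloop(); */ }
};

// application code depends only on EventLoop; swapping libraries means
// writing a new adapter, not touching the callers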
Bottom line, it's a very lofty goal, and you're not likely to end up with anything that's truly "generic". At best, you'd probably have to dumb down the interface to the point where you're not taking advantage of many of the features that each library has to offer. Also, you'd likely introduce overhead that would diminish your application's performance, so keep that in mind if performance is critical.
I'd say just choose the library that suits your needs the best, unless you want to roll your own, which I doubt.

Advantage of wrapping classes in DLLs

I've just finished a phase in my project where I wrote a small infrastructure to carry out a certain task, made of a core class with several auxiliary classes.
The C++'ness is quite basic - single inheritance, some STL containers, that's it.
No threads - client runs the show.
What I would like to do now is wrap it all up in a DLL, version it, and use it as a standalone unit. I'd like that separation in order to track changes and development better, and perhaps for other projects as well.
As I don't have experience with classes in DLLs, I would like to hear yours:
What's your approach to this problem?
Specifically:
Is it worth the trouble?
Do you do that often or not at all?
What about compatibility issues (like clients compiled using a different compiler)?
I'm not really asking for a debate (though that's the probable outcome), but rather for advice from experience.
Thanks for your time.
I find it hard to see any benefit with this. I can see plenty of problems:
No type checking across a DLL boundary. Any version mismatches will result in runtime failures, harder to detect than compile time failures.
Extra deployment headaches. You may be tempted to update some but not all modules and so deal with complex dependencies.
All clients that want to use these DLLs must use the same compiler.
Only make this change if you can identify benefits that outweigh the negatives.
C++ code is not binary compatible between compilers, so it's generally no use creating DLLs that expose C++ classes unless they are built as part of the project that uses them.
If you want to create a Windows DLL with a well-defined object-oriented interface that the rest of the world can use, make it a COM inproc server.
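A common COM-flavoured sketch of that idea (hypothetical names; it still relies on the de-facto vtable layout that COM-style interfaces assume on Windows):

// public header: only an abstract class and a C-linkage factory cross the boundary
class IWorker {
public:
    virtual void doTask() = 0;
    virtual void destroy() = 0;   // release through the DLL's own heap/runtime
protected:
    ~IWorker() {}                 // clients call destroy(), never delete
};

extern "C" __declspec(dllexport) IWorker* createWorker();

// inside the DLL
class Worker : public IWorker {
public:
    void doTask()  { /* ... */ }
    void destroy() { delete this; }
};

extern "C" IWorker* createWorker() { return new Worker; }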

Why is the Loki library not more widely used?

The Loki library implements some very widely used concepts (smart pointer, visitor, factory, etc.). The associated book "Modern C++ Design" is often mentioned, but the library itself is not widely used. Why is that?
Most developers seem to prefer Boost. In particular, why do people often decide to use Boost's smart pointers rather than Loki's?
Loki is a research/proof-of-concept sort of thing. Alexandrescu pushes new ideas; other people adopt them for the real world. Also, boost::shared_ptr is almost literally in TR1.
Loki suffers from being a good library touching on several functional areas (template metaprogramming support with a few specific applications: smart pointers, singletons, function objects, scope guards, etc.), whereas boost is a collection of many libraries, typically exhaustively covering each functional area, and much more highly tuned for portability (first).
When 9 birds out of 10 can be killed with the same stone, many people just start with boost and fill in the gaps with third-party libraries. It's very hard to compete with boost if you overlap. Because you won't overlap with much of boost, people will download/install boost anyway to get the other functionality, so unless you nail an area that boost is weak at - and the difference is significant to the project - they'll "settle" for boost there too.
Further, Alexandrescu made repeated attempts to get Loki included in boost, and some of the key boost authors just weren't cooperative. My personal view is that they want the more complete but much less user-friendly MPL to have more "market share": as authors of the library and the hard-copy books that are the only decent documentation (in stark contrast with most other boost libraries which have excellent online documentation), they do quite well out of this.
If anyone is offended by and disagrees with this analysis, I'm all ears.
Another practical issue with extremely parameterised code is that in large projects where different developers/teams work independently, they'll often end up using subtly different instantiations of the same template pretty arbitrarily. This makes it harder to pass values between those subsystems: the receiver may need to:
be parameterised (i.e. templated, and hence inline, which introduces compilation dependencies and slower builds in enterprise-scale systems)
provide some minimal coverage for all possible instantiations (e.g. checking error codes and expecting/handling exceptions)
work through some compile-time-to-run-time hand-over (based on an abstract base accessor with implementations for each instantiation), which compromises some of the performance benefits of parameterisation
This is all possible, but it takes a great programmer to navigate the terrain.
You want to use a library that the next programmer is going to know and that is going to be well supported in the future - so you pick a major lib.
Because it's a major lib lots of people use it, so it becomes the default choice.
I actually prefer Loki's way of doing things, and I have myself contributed a Decorator pattern to Loki, which now sits in the tracker because, as far as I know, the project is no longer maintained.
I use boost::shared_ptr just because it will be the standard very soon. I may dislike the fact that I can't customize it to act exactly the way I want it to act, but I have to live with it.
Usage of the standard library is important as it keeps the code maintainable by other programmers. If it's open source and you want to experiment go ahead and use Loki. No one is stopping you.
Actually Windows Vista uses some of Loki's features.
I am guessing they are not using the redundant implementations of smart pointers and visitors.
Speaking as someone who's used quite a bit of the Boost library, and also looked at Loki more than once, the biggest problem was the sparsity of documentation. Also, Loki uses some of the hairiest bits of C++ templates. Exciting stuff, but also rather daunting.
I used Loki once for a little tool (basically an interpreter) and actually liked it. My coworkers were less enthusiastic about the library, so its use remained constrained to this small sub-project.

How to design a C / C++ library to be usable in many client languages? [closed]

I'm planning to code a library that should be usable by a large number of people on a wide spectrum of platforms. What do I have to consider to design it right? To make this question more specific, there are four "subquestions" at the end.
Choice of language
Considering all the known requirements and details, I concluded that a library written in C or C++ was the way to go. I think the primary usage of my library will be in programs written in C, C++ and Java SE, but I can also think of reasons to use it from Java ME, PHP, .NET, Objective-C, Python, Ruby, bash scripts, etc... Maybe I cannot target all of them, but if it's possible, I'll do it.
Requirements
It would be too much to describe the full purpose of my library here, but there are some aspects that might be important to this question:
The library itself will start out small, but definitely will grow to enormous complexity, so it is not an option to maintain several versions in parallel.
Most of the complexity will be hidden inside the library, though
The library will construct an object graph that is used heavily inside. Some clients of the library will only be interested in specific attributes of specific objects, while other clients must traverse the object graph in some way
Clients may change the objects, and the library must be notified thereof
The library may change the objects, and the client must be notified thereof, if it already has a handle to that object
The library must be multi-threaded, because it will maintain network connections to several other hosts
While some requests to the library may be handled synchronously, many of them will take too long and must be processed in the background, and notify the client on success (or failure)
Of course, answers are welcome no matter if they address my specific requirements, or if they answer the question in a general way that matters to a wider audience!
My assumptions, so far
So here are some of my assumptions and conclusions, which I gathered in the past months:
Internally I can use whatever I want, e.g. C++ with operator overloading, multiple inheritance, template metaprogramming... as long as there is a portable compiler which handles it (think of gcc / g++)
But my interface has to be a clean C interface that does not involve name mangling
Also, I think my interface should only consist of functions, with basic/primitive data types (and maybe pointers) passed as parameters and return values (a sketch of such a header follows this list)
If I use pointers, I think I should only use them to pass back to the library, not to operate directly on the referenced memory
For usage in a C++ application, I might also offer an object-oriented interface (which is also prone to name mangling, so the app must either use the same compiler or include the library in source form)
Is this also true for usage in C#?
For usage in Java SE / Java EE, the Java native interface (JNI) applies. I have some basic knowledge about it, but I should definitely double check it.
Not all client languages handle multithreading well, so there should be a single thread talking to the client
For usage on Java ME, there is no such thing as JNI, but I might go with NestedVM
For usage in Bash scripts, there must be an executable with a command line interface
For the other client languages, I have no idea
For most client languages, it would be nice to have kind of an adapter interface written in that language. I think there are tools to automatically generate this for Java and some others
For object-oriented languages, it might be possible to create an object-oriented adapter which hides the fact that the interface to the library is function-based - but I don't know if it's worth the effort
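As a sketch of the flat C header meant above (every name here is invented for illustration):

#ifdef __cplusplus
extern "C" {
#endif

typedef struct mylib_graph mylib_graph;    /* opaque handle; layout stays hidden */

mylib_graph* mylib_graph_create(void);
void         mylib_graph_destroy(mylib_graph* g);
int          mylib_graph_node_count(const mylib_graph* g);  /* primitives in and out */

#ifdef __cplusplus
}
#endif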
Possible subquestions
is this possible with manageable effort, or is it just too much portability?
are there any good books / websites about this kind of design criteria?
are any of my assumptions wrong?
which open source libraries are worth studying to learn from their design / interface / source?
meta: This question is rather long, do you see any way to split it into several smaller ones? (If you reply to this, do it as a comment, not as an answer)
Mostly correct. A straight procedural interface is best (which is not entirely the same as C, btw(**), but close enough).
I interface DLLs a lot(*), both open source and commercial, so here are some points that I remember from daily practice; note that these are more recommended areas to research than cardinal truths:
Watch out for decoration and similar "minor" mangling schemes, especially if you use an MS compiler. Most notably, the stdcall convention sometimes leads to decoration generation for VB's sake (decoration is stuff like @6 after the function symbol name)
Not all compilers can actually lay out all kinds of structures:
so avoid overusing unions.
avoid bitpacking
and preferably pack the records for 32-bit x86. While theoretically slower, at least all compilers can access packed records afaik, and the official alignment requirements have changed over time as the architecture evolved
On Windows, use stdcall. This is the default for Windows DLLs. Avoid fastcall; it is not entirely standardized (especially how small records are passed)
Some tips to make automated header translation easier:
macros are hard to autoconvert due to their untypedness. Avoid them; use functions
Define separate types for each pointer type, and don't use composite types (xtype **) in function declarations.
follow the "define before use" mantra as much as possible. This will save users who translate your headers from having to rearrange them if their language requires define-before-use, and makes it easier for one-pass parsers to translate them, or for tools that need context info to auto-translate.
Don't expose more than necessary. Leave handle types opaque if possible; anything more will only cause versioning troubles later. (Several of these tips are sketched in code after this list.)
Do not return structured types like records/structs or arrays as the return type of functions.
always have a version-check function (it makes distinguishing versions easier).
be careful with enums and booleans. Other languages might have slightly different assumptions. You can use them, but document well how they behave and how large they are. Also think ahead, and make sure that enums don't become larger if you add a few fields and break the interface. (E.g. in Delphi/Pascal, by default booleans are 0 or 1, and other values are undefined. There are special types for C-like booleans (byte, 16-bit or 32-bit word size), though they were originally introduced for COM, not C, interfacing)
I prefer string types that are a pointer to char plus a length as a separate field (COM also does this), preferably not relying on zero termination. This is not just for security (overflow) reasons, but also because it is easier/cheaper to interface them with Delphi native types that way.
Memory: always create the API in a way that encourages a total separation of memory management. IOW, don't assume anything about memory management. This means that all structures in your lib are allocated via your own memory manager, and if a function passes a struct to you, copy it instead of storing a pointer made with the client's memory management. Because you will sooner or later accidentally call free or realloc on it :-)
(Implementation language, not interface:) be reluctant to change the coprocessor exception mask. Some languages change this as part of conforming to their standard floating-point error-/exception-handling.
Always pair a callback with a user-configurable context. This can be used by the user to give the callback state without defining global variables (e.g. an object instance).
be careful with the coprocessor status word. It might be changed by others and break your code, and if you change it, other code might stop working. The status word is generally not saved/restored as part of calling conventions. At least not in practice.
don't use C-style varargs parameters. Not all languages allow a variable number of parameters in an unsafe way
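A small header sketch applying several of these tips at once - opaque handle, version check, pointer-plus-length strings, and a context-carrying callback (every name is made up, with MYLIB_API standing in for a fixed calling convention such as MSVC's __stdcall):

#define MYLIB_API __stdcall                   /* fixed calling convention (MSVC-style) */

typedef struct mylib_session mylib_session;   /* opaque: clients never see the layout */

int MYLIB_API mylib_version(void);            /* version-check function */

mylib_session* MYLIB_API mylib_open(void);    /* library allocates... */
void MYLIB_API mylib_close(mylib_session* s); /* ...and frees its own structures */

/* string as pointer + explicit length; no reliance on zero termination */
int MYLIB_API mylib_send(mylib_session* s, const char* data, unsigned len);

/* callback paired with a caller-supplied context pointer */
typedef void (MYLIB_API *mylib_on_event)(void* context, int event_code);
int MYLIB_API mylib_set_handler(mylib_session* s, mylib_on_event cb, void* context);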
(*) Delphi programmer by day, a job that involves interfacing a lot of hardware and thus translating vendor SDK headers. By night Free Pascal developer, in charge of, among others, the Windows headers.
(**)
This is because what "C" means at the binary level still depends on the C compiler used, especially if there is no real universal system ABI. Think of stuff like:
C adding an underscore prefix on some binary formats (a.out, COFF?)
sometimes different C compilers have different opinions on what to do with small structures passed by value. Officially they shouldn't support it at all afaik, but most do.
structure packing sometimes varies, as do details of calling conventions (like skipping integer registers or not if a parameter can be passed in an FPU register)
===== automated header conversions =====
While I don't know SWIG that well, I know and use some Delphi-specific header tools (h2pas, Darth/headconv, etc.).
However, I never use them in fully automatic mode, since more often than not the output sucks. Comments change lines or are stripped, and formatting is not retained.
I usually make a small script (in Pascal, but you can use anything with decent string support) that splits a header up, and then I try a tool on relatively homogeneous parts (e.g. only structures, or only defines, etc.).
Then I check if I like the automated conversion output, and either use it, or try to make a specific converter myself. Since it is for a subset (like only structures) it is often way easier than making a complete header converter. Of course it depends a bit what my target is. (nice, readable headers or quick and dirty). At each step I might do a few substitutions (with sed or an editor).
The most complicated scheme I did was for the Winapi commctrl and ActiveX/comctl headers. There I combined IDL and the C header (IDL for the interfaces, which are a bunch of unparsable macros in C; the C header for the rest), and managed to get the macros typed for about 80% (by propagating the typecasts in sendmessage macros back to the macro declaration, with reasonable (wparam, lparam, lresult) defaults)
The semi-automated way has the disadvantage that the order of declarations is different (e.g. first constants, then structures, then function declarations), which sometimes makes maintenance a pain. I therefore always keep the original headers/SDK to compare with.
The Jedi winapi conversion project might have more info; they translated about half of the Windows headers to Delphi, and thus have enormous experience.
I don't know, but if it's for Windows, you might try either a straight C-like API (similar to the WINAPI) or packaging your code as a COM component, because I'd guess that programming languages might want to be able to invoke the Windows API and/or use COM objects.
Regarding automatic wrapper generation, consider using SWIG. For Java, it will do all the JNI work. Also, it is able to translate complex OO-C++-interfaces properly (provided you follow some basic guidelines, i.e. no nested classes, no over-use of templates, plus the ones mentioned by Marco van de Voort).
Think C, nothing else. C is one of the most popular programming languages. It is widely used on many different software platforms, and there are few computer architectures for which a C compiler does not exist. All popular high-level languages provide an interface to C. That makes your library accessible from almost all platforms in existence. Don't worry too much about providing an object-oriented interface. Once you have the library done in C, an OOP, functional or any other style of interface can be created in the appropriate client languages. No other systems programming language will give you C's flexibility and portability.
NestedVM, I think, is going to be slower than pure Java because of the array bounds checking on the int[][] that represents the MIPS virtual machine memory. It is such a good concept, but it might not perform well enough right now (until phone manufacturers add NestedVM support (if they do!), most stuff is going to be SLOW for now, n'est-ce pas?). Whilst it may be able to unpack JPEGs without error, speed is of no small concern! :)
Nothing else in what you've written sticks out, which isn't to say that it's right or wrong! The principles sound (mainly just listening to choice of words and language to be honest) like roughly standard best practice but I haven't thought through the details of everything you've said. As you said yourself, this really ought to be several questions. But of course doing this kind of thing is not automatically easy just because you're fixed on perhaps a slightly different architecture to the last code base you've worked on...! ;)
My thoughts:
All your comments on C interface compatibility sound sensible to me, pretty much best practice, except that you don't seem to properly address memory management policy - some sentences are a bit ambiguous/vague/wrong-sounding. The design of the memory management will be determined to a large extent by the access patterns made in your application, rather than the functionality per se. I suggest you carefully study others' attempts at making portable interfaces, like the standard ANSI C API, Unix API, Win32 API, Cocoa, J2SE, etc.
If it were me, I'd write the library in a carefully chosen subset of the common elements of regular Java and Dalvik virtual machine Java, and also write my own custom parser that translates the code to C for platforms that support C, which would of course be most of them. I would suggest that if you restrict yourself to data types of various-size ints, bools, Strings, Dictionaries and Arrays, and make careful use of them, that will help with cross-platform issues without affecting performance much most of the time.
Your assumptions seem OK, but I see trouble ahead, much of which you have already spotted in your assumptions.
As you said, you can't really export C++ classes and methods; you will need to provide a function-based C interface. Whatever facade you build around that, it will remain a function-based interface at heart.
The basic problem I see with that is that people choose a specific language and its runtime because their way of thinking (functional or object-oriented) or the problem they address (web programming, databases, ...) corresponds to that language in some way or other.
A library implemented in C will probably never feel like the libraries they are used to, unless they program in C themselves.
Personally, I would always prefer a library that "feels like python" when I use python, and one that feels like java when I do Java EE, even though I know c and c++.
So your effort might be of little actual use (other than your gain in experience), because people will probably want to stick with their mindset, and rather re-implement the functionality than use a library that does the job, but does not fit.
I also fear the desired portability will seriously hamper development. Just think of the infinite build settings needed, and tests for that. I have worked on a project that tried to maintain compatibility for 5 operating systems (all posix-like, but still) and about 10 compilers, the builds were a nightmare to test and maintain.
Give it an XML interface, whether passed as a parameter and return value or as files through a command-line invocation. This may not seem as direct as a normal function interface, but it is the most practical way to access an executable from, e.g., Java.

boost vs ACE C++ cross platform performance comparison?

I am involved in a venture that will port some communications, parsing, data handling functionality from Win32 to Linux and both will be supported. The problem domain is very sensitive to throughput and performance.
I have very little experience with performance characteristics of boost and ACE. Specifically we want to understand which library provides the best performance for threading.
Can anyone provide some data -- documented or word-of-mouth or perhaps some links -- about the relative performance between the two?
EDIT
Thanks all. Confirmed our initial thoughts - we'll most likely choose boost for system level cross-platform stuff.
Neither library should really have any overhead compared to using native OS threading facilities. You should be looking at which API is cleaner. In my opinion the boost thread API is significantly easier to use.
ACE tends to be more "classic OO", while boost tends to draw from the design of the C++ standard library. For example, launching a thread in ACE requires creating a new class derived from ACE_Task, and overriding the virtual svc() function which is called when your thread runs. In boost, you create a thread and run whatever function you want, which is significantly less invasive.
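Roughly, the contrast looks like this (a sketch of the two APIs from memory; check each library's docs for details):

// ACE: derive from ACE_Task_Base and override the virtual svc()
#include <ace/Task.h>

class MyTask : public ACE_Task_Base {
public:
    virtual int svc() { /* thread body */ return 0; }
};
// MyTask t;
// t.activate();   // spawns a thread that runs svc()
// t.wait();       // joins it

// Boost: hand any callable straight to boost::thread
#include <boost/thread.hpp>

void threadBody() { /* thread body */ }
// boost::thread t(threadBody);
// t.join();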
Do yourself a favor and steer clear of ACE. It's a horrible, horrible library that should never have been written, if you ask me. I've worked (or rather HAD to work with it) for 3 years and I tell you it's a poorly designed, poorly documented, poorly implemented piece of junk using archaic C++ and built on completely brain-dead design decisions ... calling ACE "C with classes" is actually doing it a favor. If you look into the internal implementations of some of its constructs you'll often have a hard time suppressing your gag reflex.
Also, I can't stress the "poor documentation" aspect enough. Usually, ACE's notion of documenting a function consists of simply printing the function's signature. As to the meaning of its arguments, its return value and its general behavior, well you're usually left to figure that out on your own. I'm sick and tired of having to guess which exceptions a function may throw, which return value denotes success, which arguments I have to pass to make the function do what I need it to do or whether a function / class is thread-safe or not.
Boost, on the other hand, is simple to use, modern C++, extremely well documented, and it just WORKS! Boost is the way to go; down with ACE!
Don't worry about the overhead of an OS-abstraction layer on threading and synchronization objects. Threading overhead literally doesn't matter at all (since it only applies to thread creation, which is already enormously slow compared to the overhead of a pimpl-ized pointer indirection). If you find that mutex ops are slowing you down, you're better off looking at atomic operations or rearranging your data access patterns to avoid contention.
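For example, a mutex-protected counter that turns out to be a contention hot spot can often be replaced outright (a sketch assuming C++11's <atomic>, or boost::atomic as an equivalent, is available):

#include <atomic>

std::atomic<long> hits(0);

void onRequest() {
    hits.fetch_add(1, std::memory_order_relaxed);  // no lock, no contention stall
}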
Regarding boost vs. ACE, it's a matter of "new-style" vs. "old-style" programming. Boost has a lot of header-only template-based shenanigans (that are beautiful to work with, if you can appreciate it). If, on the other hand, you're used to "C with classes" style of C++, ACE will feel much more natural. I believe it's mostly a matter of personal taste for your team.
I've used ACE for numerous heavy-duty production servers. It never failed me. It is rock solid and has done the work for many years now. I tried to learn Boost's ASIO network framework and couldn't get the hang of it. While Boost is more "modern" C++, it is also harder to use for non-trivial tasks - and without "modern" C++ experience and deep STL knowledge, it is difficult to use correctly.
Even if ACE is a kind of old-school C++, it still has many thread-oriented features that boost doesn't provide yet.
At the moment I see no reason not to use both (but for different purposes). Once boost provides a simple means to implement message queues between tasks, I may consider abandoning ACE.
When it comes to ease of use, boost is way better than ACE. boost-asio has a more transparent API, its abstractions are simpler, and they easily provide building blocks for your application. Compile-time polymorphism is used judiciously in boost to warn about/prevent illegal code. ACE's use of templates, on the other hand, is limited to generalization and is hardly ever user-centric enough to disallow illegal operations. You're more likely to discover problems at run-time with ACE.
A simple example I can think of is ACE_Reactor - a fairly scalable and decoupled interface - but you must remember to call its "own" function if you're running its event loop in a thread different from the one where it was created. I spent hours figuring this out for the first time and could easily have spent days. Ironically enough, its object model shows more details than it hides - good for learning but bad for abstraction.
https://groups.google.com/forum/?fromgroups=#!topic/comp.soft-sys.ace/QvXE7391XKA
Threading is really only a small part of what boost and ACE provide, and the two aren't really comparable overall. I agree that boost is easier to use, as ACE is a pretty heavy framework.
I wouldn't call ACE "C with classes." ACE is not intuitive, but if you take your time and use the framework as intended, you will not regret it.
From what I can tell, after reading Boost's docs, I'd want to use ACE's framework and Boost's container classes.
Use ACE and boost cooperatively. ACE has the better communication API, based on OO design patterns, whereas boost has a "modern C++" design and works well with containers, for example.
We started to use ACE believing that it would hide the platform differences between Windows and Unix in TCP sockets and the select call. It turns out it does not. ACE's take on select, the reactor pattern, cannot mix sockets and stdin on Windows, and the semantic differences between the platforms concerning socket writability notifications are still present at the ACE level.
By the time we realized this, we were already using the thread and process features of ACE (the latter of which, again, does not hide the platform differences to the extent we would have liked), so our code is now tied to a huge library that actually prevents porting our code to 64-bit MinGW!
I can't wait for the day when the last ACE usage in our code is finally replaced with something different.
I've been using ACE for many years (8), but I have just started investigating the use of boost again for my next project. I'm considering boost because it has a bigger tool bag (regex, etc.) and parts of it are being absorbed into the C++ standard, so long-term maintenance should be easier.
That said, boost is going to require some adjustment. Although Greg mentions that the thread support is less invasive since it can run any (C or static) function, if you're used to thread classes more akin to the Java and C# thread classes - which is what ACE_Task provides - you have to use a little finesse to get the same with boost.
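One way to apply that finesse is a thin Runnable-style base class over boost::thread (illustrative only; the names are mine, and the caller must join before the object is destroyed):

#include <boost/thread.hpp>

class Runnable {
public:
    virtual ~Runnable() {}
    virtual void run() = 0;   // override this, much as you would ACE_Task::svc()
    void start() { t_ = boost::thread(&Runnable::run, this); }
    void join()  { t_.join(); }
private:
    boost::thread t_;
};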