Recommendations for a Boost::ASIO based CORBA library - C++

I'm looking for a CORBA kit. I need the IDL compiler plus libraries (or source) for the ORB. I don't really know a helluva lot more about CORBA, but we need to interface with a server whose functions are exposed via CORBA.
The requirements I've been given, in rough order of priority are:
1 - Low cost or license amenable to commercial (closed source) use.
2 - Performance performance performance - is there a Boost::ASIO based ORB?
3 - Simple to integrate for at least Windows and Linux development.
We measure our software's performance in microseconds, so I need to be sure that the underlying network latency has been kept to an absolute minimum, but also, personally, I don't want to wrestle with a half-finished or half-working project and I don't want integrating this stuff to become the whole project. Essentially I need to get this API built and be calling remote functions with as little fuss as possible. That might just be wishful thinking, but it's worth mentioning.
So, has anyone out there had RECENT experience integrating CORBA into a modern desktop application project? What would you recommend using, and what should I beware of?

I'm currently using omniORB for embedded software in the telecommunications field.
As for your questions:
It is free even for commercial use. It comes with an LGPL license.
I haven't measured performance, but I've had good results in an embedded real-time project. (About your question on boost::asio: I'm pretty sure that an ORB based on boost::asio doesn't exist.)
It's been tested on many platforms, including linux and windows.
Maybe you could give omniORB a try. Otherwise you could try TAO: it's a real-time ORB, but I have never used it.
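To give an idea of the workflow with any of these ORBs, here is a minimal sketch based on the classic omniORB Echo example (the interface name is hypothetical; Echo.hh is what omniidl generates from the IDL):

    // Echo.idl -- the interface, written in CORBA IDL
    interface Echo {
      string echoString(in string mesg);
    };

    // client.cpp -- calling the remote function through the generated proxy
    #include <iostream>
    #include "Echo.hh"   // generated by: omniidl -bcxx Echo.idl

    int main(int argc, char** argv) {
      try {
        CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);
        // argv[1] is a stringified IOR published by the server
        CORBA::Object_var obj = orb->string_to_object(argv[1]);
        Echo_var echo = Echo::_narrow(obj);           // downcast to the IDL type
        CORBA::String_var reply = echo->echoString("Hello!");
        std::cout << reply.in() << std::endl;
        orb->destroy();
      }
      catch (const CORBA::Exception&) {
        std::cerr << "Caught a CORBA exception." << std::endl;
        return 1;
      }
      return 0;
    }

The IDL compiler generates the proxy (stub) code, so once the ORB is set up, the remote call looks like an ordinary member function call.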

As far as I know there is no ORB built on top of boost::asio. I would recommend having a look at TAO or TAOX11, which is a modern CORBA implementation. There is a free CORBA Programmers Guide with some starter information by Remedy IT, or the OCI Developers Guide.

Related

COM in the non-Windows world?

Hope this question isn't going to be too vague. Reading through the COM spec and Don Box's Essential COM book, there is plenty of talk of the "problems that COM solves" - and they all sound important, relevant and current.
So how are the problems that COM addresses dealt with on other systems (linux, unix, OSX, android)? I'm thinking of things like:
binary compatibility across compilers and compiler versions
binary component reuse
compiling an application such that it has run-time dependencies rather than load-time ones (so that it runs even when a dependency is missing)
access to library functionality from languages other than the library's own
reasonably low-overhead remote procedure calls to components loaded in the address space of a different process
etc (I'm sure the list goes on)
I'm basically just trying to understand why, for instance, on Linux CORBA isn't a thing like COM is a thing on Windows (if that makes any sense). Does software development on Linux perhaps subscribe to a different philosophy than the component-based model proposed by COM?
And finally, is COM a C/C++ thing? Several times I've come across comments from people saying COM is made "obsolete" by .NET but without really explaining what they meant by that.
For the remainder of this post, I'm going to use Linux as an example of open-source software. Where I mention "Linux" it's mostly a short/simple way to refer to open source software in general though, not anything specific to Linux.
COM vs. .NET
COM isn't actually restricted to C and C++, and .NET doesn't actually replace COM. However, .NET does provide alternatives to COM for some situations. One common use of COM is to provide controls (ActiveX controls). .NET provides/supports its own protocol for controls that allows somebody to write a control in one .NET language, and use that control from any other .NET language--more or less the same sort of thing that COM provides outside the .NET world.
Likewise, .NET provides Windows Communication Foundation (WCF). WCF implements SOAP (Simple Object Access Protocol)--which may have started out simple, but grew into something a lot less simple at best. In any case, WCF provides many of the same kinds of capabilities as COM does. Although WCF itself is specific to .NET, it implements SOAP, and a SOAP server built using WCF can talk to one implemented without WCF (and vice versa). Since you mention overhead, it's probably worth mentioning that WCF/SOAP tend to add more overhead than COM (I've seen anywhere from nearly equal to about double the overhead, depending on the situation).
Differences in Requirements
For Linux, the first two points tend to have relatively low relevance. Most software is open source, and many users are accustomed to building from source in any case. For such users, binary compatibility/reuse is of little or no consequence (in fact, quite a few users are likely to reject all software that isn't distributed in source code form). Although binaries are commonly distributed (e.g., with apt-get, yum, etc.) they're basically just caching a binary built for a specific system. That is, on Windows you might have a single binary for use on anything from Windows XP up through Windows 10, but if you use apt-get on, say, Ubuntu 18.04, you're installing a binary built specifically for Ubuntu 18.04, not one that tries to be compatible with everything back to Ubuntu 10 (or whatever).
Being able to load and run (with reduced capabilities) when a component is missing is also most often a closed-source problem. Closed source software typically has several versions with varying capabilities to support different prices. It's convenient for the vendor to be able to build one version of the main application, and give varying levels of functionality depending on which other components are supplied/omitted.
That's primarily to support different price levels though. When the software is free, there's only one price and one version: the awesome edition.
Access to library functionality across languages again tends to be based more on source code than on a binary interface, such as using SWIG to allow use of C or C++ source code from languages like Python and Ruby. Again, COM is basically curing a problem that arises primarily from lack of source code; when using open source software, the problem simply doesn't arise to start with.
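For a concrete picture of that source-level approach, this is roughly what a minimal SWIG interface file looks like (module and function names are made up for illustration):

    /* example.i -- a hypothetical SWIG interface file */
    %module example
    %{
    #include "example.h"   /* declares fact() */
    %}
    extern int fact(int n);

Running swig -python example.i generates wrapper source that you compile and link against the C library; the binding is derived from the C declarations themselves, not from any binary standard.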
Low-overhead RPC to code in other processes again seems to stem primarily from closed source software. When/if you want Microsoft Excel to be able to use some internal "stuff" in, say, Adobe Photoshop, you use COM to let them communicate. That adds run-time overhead and extra complexity, but when one of the pieces of code is owned by Microsoft and the other by Adobe, it's pretty much what you're stuck with.
Source Code Level Sharing
In open source software, however, if project A has some functionality that's useful in project B, what you're likely to see is (at most) a fork of project A to turn that functionality into a library, which is then linked into both the remainder of project A and into Project B, and quite possibly projects C, D, and E as well--all without imposing the overhead of COM, cross-procedure RPC, etc.
Now, don't get me wrong: I'm not trying to act as a spokesperson for open source software, nor to say that closed source is terrible and open source is always dramatically superior. What I am saying is that COM is defined primarily at a binary level, but for open source software, people tend to deal more with source code instead.
Of course SWIG is only one example among several of tools that support cross-language development at a source-code level. While SWIG is widely used, COM is different from it in one rather crucial way: with COM, you define an interface in a single, neutral language, and then generate a set of language bindings (proxies and stubs) that fit that interface. This is rather different from SWIG, where you're matching directly from one source to one target language (e.g., bindings to use a C library from Python).
Binary Communication
There are still cases where it's useful to have at least some capabilities similar to those provided by COM. These have led to open-source systems that resemble COM to a rather greater degree. For example, a number of open-source desktop environments use/implement D-bus. Where COM is mostly an RPC kind of thing, D-bus is mostly an agreed-upon way of sending messages between components.
D-bus does, however, specify things it calls objects. Its objects can have methods you can invoke, and they can emit signals. Although D-bus itself defines this primarily in terms of a messaging protocol, it's fairly trivial to write proxy objects that make invoking a method on a remote object look pretty much like invoking one on a local object. The big difference is that COM has a "compiler" that can take a specification of the protocol, and automatically generate those proxies for you (and corresponding stubs in the far end to receive the message, and invoke the proper function based on the message it received). That's not part of D-bus itself, but people have written tools to take (for example) an interface specification and automatically generate proxies/stubs from that specification.
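For concreteness, a D-bus interface specification is just introspection XML like the following (interface and method names are hypothetical); tools such as gdbus-codegen can generate proxy and stub code from it:

    <node>
      <interface name="org.example.Frobnicator">
        <method name="Frobnicate">
          <arg type="s" name="input" direction="in"/>
          <arg type="s" name="result" direction="out"/>
        </method>
        <signal name="Frobnicated"/>
      </interface>
    </node>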
As such, although the two aren't exactly identical, there's enough similarity that D-bus can be (and often is) used for many of the same sorts of things as COM.
Systems Similar to DCOM
COM also allows you to build distributed systems using DCOM (Distributed COM). That is, a system where you invoke a method on one machine, but (at least potentially) execute that invoked method on another machine. This adds more overhead, but since (as pointed out above with respect to D-bus) RPC is basically communication with proxies/stubs attached to the ends, it's pretty easy to do the same thing in a distributed fashion. The difference in overhead, however, tends to lead to differences in how systems need to be designed to work well, so the practical advantage of using exactly the same system for distributed systems as local systems tends to be fairly minimal.
As such, the open source world provides tools for doing distributed RPC, but doesn't usually work hard at making them look the same as non-distributed systems. CORBA is well known, but generally viewed as large and complex, so (at least in my experience) current use is fairly minimal. Apache Thrift provides some of the same general type of capabilities, but in a rather simpler, lighter-weight fashion. In particular, where CORBA attempts to provide a complete set of tools for distributed computing (complete with everything from authentication to distributed time keeping), Thrift follows the Unix philosophy much more closely, attempting to meet exactly one need: generate proxies and stubs from an interface definition (written in a neutral language). If you want to do those CORBA-like things with Thrift you undoubtedly can, but in a more typical case of building internal infrastructure where the caller and callee trust each other, you can avoid a lot of overhead and just get on with the business at hand. Likewise, Google's gRPC provides roughly the same sorts of capabilities as Thrift.
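To give a flavor of how lightweight that is, a Thrift interface definition (hypothetical service) is just this; running thrift --gen cpp on it produces the C++ client proxy and the server-side handler skeleton:

    // calculator.thrift -- a hypothetical Thrift interface definition
    service Calculator {
      i32 add(1: i32 a, 2: i32 b)
    }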
OS X Specific
Cocoa provides distributed objects that are fairly similar to COM. This is based on Objective-C though, and I believe it's now deprecated.
Apple also offers XPC. XPC is more about inter-process communication than RPC, so I'd consider it more directly comparable to D-bus than to COM. But, much like D-bus, it has a lot of the same basic capabilities as COM, but in a different form that places more emphasis on communication, and less on making things look like local function calls (and many now prefer messaging to RPC anyway).
Summary
Open source software has enough different factors in its design that there's less demand for something providing the same mix of capabilities as Microsoft's COM provides on Windows. COM is largely a single tool that tries to meet all needs. In the open-source world, there's less drive to provide that single, all-encompassing solution, and more tendency toward a kit of tools, each doing one thing well, that can be put together into a solution for a specific need.
Being more commercially oriented, Apple OS X probably has what are (at least arguably) closer analogs to COM than most of the more purely open-source world.
A quick answer on the last question: COM is far from being obsolete. Almost everything in the Microsoft world is COM-based, including the .NET engine (the CLR) and the new Windows Runtime introduced in Windows 8.x.
Here is what Microsoft says about .NET in its latest C++ pages, Welcome Back to C++ (Modern C++):
C++ is experiencing a renaissance because power is king again. Languages like Java and C# are good when programmer productivity is important, but they show their limitations when power and performance are paramount. For high efficiency and power, especially on devices that have limited hardware, nothing beats modern C++.
PS: which is a bit of a shock for a developer who has invested more than 10 years on .NET :-)
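To make "COM-based" concrete, here is a minimal C++ sketch that consumes a stock COM object (the Vista-and-later file-open dialog); note the reference counting and the interface-pointer style that all of COM shares. Error handling is trimmed for brevity:

    // Link against ole32.lib (and uuid.lib for the CLSID).
    #include <windows.h>
    #include <shobjidl.h>   // IFileOpenDialog, a stock COM interface

    int main() {
      HRESULT hr = CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);
      if (SUCCEEDED(hr)) {
        IFileOpenDialog* dlg = nullptr;
        hr = CoCreateInstance(CLSID_FileOpenDialog, nullptr, CLSCTX_INPROC_SERVER,
                              IID_PPV_ARGS(&dlg));
        if (SUCCEEDED(hr)) {
          dlg->Show(nullptr);   // call a method through the binary interface
          dlg->Release();       // COM objects are reference counted
        }
        CoUninitialize();
      }
      return 0;
    }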
In the Linux world, it is more common to develop components that are statically linked, or which run in separate processes and communicate by piping text (maybe JSON or XML) back and forth.
Some of this is due to tradition. UNIX developers have been doing stuff like this long before CORBA or COM existed. It's "the UNIX way".
As Jerry Coffin says in his answer, when you have the source code for everything, binary interfaces are not as important, and in fact just make everything more difficult.
COM was invented back when personal computers were a lot slower than they are today. In those days, loading components into your app's process space and invoking native code was often necessary to achieve reasonable performance. Now, parsing text and running interpreted scripts aren't things to be afraid of.
CORBA never really caught on in the open-source world because the initial implementations were proprietary and expensive, and by the time high-quality free implementations were available, the spec was so complicated that nobody wanted to use it if they weren't required to do so.
To a large extent, the problems solved by COM are simply ignored on Linux.
It is true that binary compatibility is less important when you have the source code available. However, you still have to worry about modularisation and versioning. If two different programs depend on different versions of the same library, you need to somehow support that.
Then there is the case of the same program using different versions of the same library. This is often useful when working on large legacy programs, where upgrading everything can be prohibitively expensive but you would like to use new features anyway. With COM, the old parts of the program can just be left alone, since new library versions can more easily be made backwards compatible.
In addition, having to compile from source instead of relying on binary compatibility is a huge hassle. Especially if you are actually developing software, since binary incompatibility means you have to recompile much more often. If one tiny part changes in a large C++ program, you may have to wait for a 30-minute recompile. If the different pieces are compatible, only the part which changed has to be recompiled.
COM, and DCOM in particular, have been around in Windows for some considerable time now, and naturally Windows developers have made use of this powerful framework.
We are now in the cross-platform age, and when porting such applications to other platforms we are faced with challenges which in many cases can be mitigated or eliminated altogether, unless the application we are porting is more than just one simple standalone app.
If you're dealing with a whole suite of modules running on different machines, all communicating using Windows-specific technologies such as DCE/RPC, DCOM or even Windows named pipes, then your job just became an order of magnitude harder.
DCE/RPC, DCOM and Windows named pipes are all very Windows-specific, non-portable and, of course, subject to Windows security access control.
For instance, anyone familiar with OPC DA (an industrial automation protocol based on DCOM, still very much in use but now superseded by OPC UA, which avoids DCOM) will know that there are no elegant solutions here if the client (or server) needs to be available for Linux!
Sure, there appear to be some technical hurdles here, given that the MS code is not in the public domain, but projects such as Wine have a partly OK DCE/RPC implementation and MS do publish some of the protocol docs. Try searching and you will probably find little information and few products, open source or otherwise, to help you.
Perhaps the lack of open source or affordable options here is more due to legal concerns - I wonder!
Some simpler solutions simply involve installing a "gateway service" on the Windows machines to allow an alternative means of access to DCOM interfaces on that machine. This is fine as long as the Windows machine does not belong to an unwilling third party, which unfortunately is sometimes the case! "I know, we'll just chuck another Windows machine in the middle as the gateway" is the usual global-warming-enhancing solution to that problem.
I would conclude that Linux to Windows DCOM interoperability is certainly not impossible but it does appear to be a topic that few are interested in talking about unless you get your wallet out!

Advantages of COM over implementing your own Office library

COM has a severe performance penalty, since it creates a separate process with all the resources allocated, like a normal application instance. It also requires Microsoft Office to be installed on the system and is not cross-platform. So are there any advantages of using COM other than saving the effort of churning out your own library?
Also, are there any open source implementations available for C++ for handling Office files, or does one have to build everything from scratch? How difficult is it to build such a library to support all capabilities?
Supporting all capabilities would not be just difficult -- it would be pretty much impossible. Office documents are layer upon layer of historical kludges, mistakes, and design decisions good and bad. And much of it is undocumented.
But supporting some capabilities is certainly doable, and some libraries do exist. Apache POI, the one I'm most familiar with, is in Java, though.

What is the C version of RMI

I have a set of functions in C/C++ that I need to be available to accept calls and return values to C/C++ code in a remote location, similar to RMI on the Java platform. With RMI the Java methods are set up through the rmiregistry and remain available in memory to accept requests. I'm looking for similar functionality in C/C++, but I'm a bit confused by all the options that are out there. Is this the type of scenario that CORBA was intended for, and if so, is it still the best technology to use or are there better options out there? I've read about XML-RPC, CORBA, and a few others, but I'm not sure which of these is what I need.
Thanks for your help.
Mike
Is this the type of scenario that CORBA was intended for, and if so, is it still the best technology to use or are there better options out there?
Yes, this is what CORBA was intended to solve. Whether it's "best" is subjective and argumentative. :) I can say, from my personal experience, I don't miss my short experience with CORBA and would suggest you explore other options.
I've read about XML-RPC, CORBA, and a few others but i'm not sure which of these is what i need.
As you seem to be aware, you're looking for any technology that implements RMI (also frequently called RPC). It's not built into C/C++.
On Linux, there is SunRPC. I would also recommend looking at Google protocol buffers, which provide a mechanism for serializing data as well as an interface for defining RPC services. There are several service implementations available, but I don't have experience with them.
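For illustration, an RPC service definition in the protocol buffers language looks like this (names hypothetical); service implementations such as gRPC generate the client stubs and server interfaces from it:

    // echo.proto -- a hypothetical service definition
    syntax = "proto3";

    message EchoRequest { string text = 1; }
    message EchoReply   { string text = 1; }

    service Echo {
      rpc Say (EchoRequest) returns (EchoReply);
    }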
On Unix-like platforms, you're probably looking for Sun RPC (remote procedure calls).
CORBA is also relevant but has a more natural binding to languages with object oriented capability.
There's no built-in method for accomplishing this in C or C++. That said, there are several libraries that can accomplish this.
If you're on Windows, then the best answer is probably DCOM, which is part of the OS itself. I'm not sure about other platforms.
I would suggest either CORBA or any available web-service library.
CORBA is a reasonable choice for you (though it may be a bit of an old technology by now). I used CORBA for several years in my previous job.
I should say, the learning curve for CORBA is kind of steep, and you need to cater for a lot of extra setup, but once it is done correctly, it becomes smooth to use. (The problem is that it really takes some time to get it right.)
Web services are the de facto industry standard now, and I believe C++ has some reasonable implementations and libraries for that. CORBA covers more features than WS, but those features are seldom used in simple systems.

ACE vs Boost vs POCO [closed]

I have been working with the Boost C++ Libraries for quite some time. I absolutely love the Boost Asio C++ library for network programming. However I was introduced to two other libraries: POCO and Adaptive Communication Environment (ACE) framework. I would like to know the good and bad of each.
As rdbound said, Boost has a "near STL" status. So if you don't need another library, stick to Boost. However, I use POCO because it has some advantages for my situation. The good things about POCO IMO:
A better thread library, especially the Active Method implementation (see the sketch after this list). I also like the fact that you can set the thread priority.
More comprehensive network library than boost::asio. However boost::asio is also a very good library.
Includes functionality that is not in Boost, like XML and database interfaces, to name a few.
It is more integrated as one library than Boost.
It has clean, modern and understandable C++ code. I find it far easier to understand than most of the Boost libraries (but I am not a template programming expert :)).
It can be used on a lot of platforms.
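Since the Active Method point above is probably the least familiar, here is a minimal sketch of how it looks in POCO (class and method names are made up): the call returns immediately and the work runs on the object's own thread.

    #include "Poco/ActiveMethod.h"
    #include "Poco/ActiveResult.h"

    class Worker {
    public:
      Worker(): square(this, &Worker::squareImpl) {}
      Poco::ActiveMethod<int, int, Worker> square;  // asynchronously callable member
    private:
      int squareImpl(const int& n) { return n * n; }
    };

    // usage:
    //   Worker w;
    //   Poco::ActiveResult<int> r = w.square(7);  // returns at once
    //   r.wait();                                 // block until the result is ready
    //   int value = r.data();                     // 49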
Some disadvantages of POCO are:
It has limited documentation. This is somewhat offset by the fact that the source is easy to understand.
It has a far smaller community and user base than, say, Boost. So if you put a question on Stack Overflow, for example, your chances of getting an answer are lower than for Boost.
It remains to be seen how well it will be integrated with the new C++ standard. You know for sure that it will not be a problem for Boost.
I never used ACE, so I can't really comment on it. From what I've heard, people find POCO more modern and easier to use than ACE.
Some answers to the comments by Rahul:
I don't know about versatile and advanced. The POCO thread library provides some functionality that is not in Boost: ActiveMethod and Activity, and ThreadPool. IMO POCO threads are also easier to use and understand, but this is a subjective matter.
The POCO network library also provides support for higher-level protocols like HTTP and SSL (possibly also in boost::asio, but I am not sure?); see the HTTP sketch after this list.
Fair enough.
Integrated library has the advantage of having consistent coding, documentation and general "look and feel".
Being cross-platform is an important feature of POCO, this is not an advantage in relation to Boost.
Again, you should probably only consider POCO if it provides some functionality you need and that is not in Boost.
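As a taste of the higher-level protocol support mentioned above, here is a minimal POCO HTTP GET (the host is a placeholder):

    #include "Poco/Net/HTTPClientSession.h"
    #include "Poco/Net/HTTPRequest.h"
    #include "Poco/Net/HTTPResponse.h"
    #include "Poco/StreamCopier.h"
    #include <iostream>

    int main() {
      Poco::Net::HTTPClientSession session("www.example.com", 80);
      Poco::Net::HTTPRequest request(Poco::Net::HTTPRequest::HTTP_GET, "/");
      session.sendRequest(request);                       // write the request
      Poco::Net::HTTPResponse response;
      std::istream& body = session.receiveResponse(response);
      std::cout << response.getStatus() << " " << response.getReason() << "\n";
      Poco::StreamCopier::copyStream(body, std::cout);    // dump the body
      return 0;
    }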
I've used all three so here's my $0.02.
I really want to vote for Doug Schmidt and respect all the work he's done, but to be honest I find ACE mildly buggy and hard to use. I think that library needs a reboot. It's hard to say this, but I'd shy away from ACE for now unless there is a compelling reason to use TAO, or you need a single code base to run C++ on both Unix variants and Windows. TAO is fabulous for a number of difficult problems, but the learning curve is intense, and there's a reason CORBA has a number of critics. I guess just do your homework before making a decision to use either.
If you are coding in C++, boost is in my mind a no-brainer. I use a number of the low level libraries and find them essential. A quick grep of my code reveals shared_ptr, program_options, regex, bind, serialization, foreach, property_tree, filesystem, tokenizer, various iterator extensions, algorithm, and mem_fn. These are mostly low-level functionality that really ought to be in the compiler. Some boost libraries are very generic; it can be work to get them to do what you want, but it's worthwhile.
Poco is a collection of utility classes that provide functionality for some very concrete common tasks. I find the libraries are well-written and intuitive. I don't have to spend much time studying documentation or writing silly test programs. I'm currently using Logger, XML, Zip, and Net/SMTP. I started using Poco when libxml2 irritated me for the last time. There are other classes I could use but haven't tried, e.g. Data::MySQL (I'm happy with mysql++) and Net::HTTP (I'm happy with libCURL). I'll try out the rest of Poco eventually, but that's not a priority at this point.
Many POCO users report using it alongside Boost, so there are clearly incentives to use both. Boost is a collection of high-quality libraries, but it is not a framework. As for ACE, I have used it in the past and did not like the design. Additionally, its support for ancient non-compliant compilers has shaped the code base in an ugly way.
What really distinguishes POCO is a design that scales, and a rich library interface reminiscent of what one gets with Java or C#. At this time, the thing most acutely lacking from POCO is asynchronous IO.
I have used ACE for a very high performance data acquisition application with real-time constraints. A single thread handles I/O from over thirty TCP/IP socket connections and a serial port. The code runs on both 32- and 64-bit Linux. A few of the many ACE classes I have used are ACE_Reactor, ACE_Time_Value, ACE_Svc_Handler, ACE_Message_Queue, and ACE_Connector. ACE was a key factor in the success of our project. It does take a significant effort to understand how to use the ACE classes, and I have all the books written about ACE. Whenever I have had to extend the functionality of our system, it typically takes some time to study what to do, and then the amount of code required is very small. I have found ACE to be very reliable. I also use a little bit of code from Boost. I do not see the same functionality in Boost. I would use either or both libraries.
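To give a flavor of that single-threaded reactor style, here is a stripped-down sketch of an ACE event handler (names hypothetical, error handling omitted):

    #include "ace/Reactor.h"
    #include "ace/Event_Handler.h"
    #include "ace/SOCK_Stream.h"

    class Echo_Handler : public ACE_Event_Handler {
    public:
      ACE_SOCK_Stream& peer() { return peer_; }
      virtual ACE_HANDLE get_handle() const { return peer_.get_handle(); }

      // Called by the reactor whenever this socket becomes readable.
      virtual int handle_input(ACE_HANDLE) {
        char buf[512];
        ssize_t n = peer_.recv(buf, sizeof buf);
        if (n <= 0) return -1;   // tells the reactor to remove this handler
        peer_.send(buf, n);      // echo the data back
        return 0;
      }
    private:
      ACE_SOCK_Stream peer_;
    };

    // After accepting a connection into handler->peer(), register it and let
    // one thread drive all the I/O:
    //   ACE_Reactor::instance()->register_handler(handler,
    //                                             ACE_Event_Handler::READ_MASK);
    //   ACE_Reactor::instance()->run_reactor_event_loop();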
I recently got a new job and work on a project that uses ACE and TAO. Well, what I can tell is, that ACE and TAO work and fully accomplish their tasks. But the overall organisation and design of the libraries are quite daunting...
For example, the main part of ACE consists of hundreds of classes starting with "ACE_". It seems like they've ignored namespaces for decades.
Additionally, many of ACE's class names don't provide useful information either. Or can you guess what classes like ACE_Dev_Poll_Reactor_Notify or ACE_Proactor_Handle_Timeout_Upcall can be used for?
Additionally, the documentation of ACE is really lacking, so unless you want to learn ACE the hard way (it is really hard without any good documentation), I would NOT recommend using ACE unless you really need TAO for CORBA. If you don't need CORBA, go ahead and use some modern libraries.
Boost enjoys a "near STL" status due to the number of people on the C++ standards committee who are also Boost developers. Poco and ACE do not enjoy that benefit, and from my anecdotal experience Boost is more widespread.
However, POCO as a whole is more centered around network-type stuff. I stick to Boost so I can't help you there, but the plus for Boost is its (relatively) widespread use.
The ACE socket libraries are solid. If you are trying to port a standard implementation of sockets you can't go wrong. The ACE code sticks to a rigid development paradigm. The higher-level constructs are a little confusing to use. The rigid paradigm causes some anomalies with exception handling. There are, or used to be, situations where a string value pair passed into an exception, with one of the pair being null, causes an exception to be thrown from within the exception itself, which will boggle you. The depth of the class layering is tedious when debugging. I have never tried the other libraries so I can't make an intelligent comment.
Boost is great, I've only heard good things about POCO (but never used it), but I don't like ACE and would avoid it in future. Although you will find fans of ACE, you will also find many detractors, which you don't tend to get with boost or poco (IME); to me that sends a clear signal that ACE is not the best tool (although it does what it says on the tin).
Out of those I've only ever really used ACE. ACE is a great framework for cross-platform enterprise networking applications. It's extremely versatile and scalable and comes with TAO and JAWS for quick, powerful development of ORB and/or Web based applications.
Getting up to speed with it can be somewhat daunting, but there is a lot of literature on it, and commercial support available.
It's somewhat heavy though, so for smaller-scale apps it may be a bit of overkill. Reading the summary for POCO, it sounds like they're aiming for a system that can be run on embedded systems, so I'm assuming it can be used in a much lighter way. I may now give it a whirl :P
I think it is really a matter of opinion; there is hardly a right answer.
In my experience writing portable Win32/Linux server code (15+ years), I personally find boost/ACE unnecessarily bloated; they introduce maintenance hazards (otherwise known as "dll hell") for the little advantage they give.
ACE also seems to be horribly outdated. It is a "C++ library" written by "C programmers" in the '90s, and it really shows, in my opinion. As it happens, right now I am re-engineering a project written with POCO; it seems to me it completely follows the ACE idea, but in more contemporary terms, and is not much better at that.
In any case for high performance, efficient, elegant server communications you might be better off not using any of them.

What is the best approach to both modularity and platform independence? [closed]

I hope this question does not come off as broad as it may seem at first. I am designing a software application that I would like to be both cross-platform and modular. I am still in the planning phase and can pick practically any language and toolset.
This makes things harder, not easier, because there are seemingly so many ways of accomplishing both of the goals (modularity, platform agnosticism).
My basic premise is that security, data storage, interaction with the operating system, and configuration should all be handled by a "container" application - but most of the other functionality will be supplied through plug-in modules. If I had to describe it at a high level (without completely giving away my idea), it would be a single application that can do many different jobs, all dedicated to the same goal (there are lots of disparate things to do, but all the data has to interact and be highly available).
I find myself wrestling with not so much how to do it (I can think of lots of ways), but which method is best.
For example, I know that Eclipse practically embodies what I am describing, but I find Java applications in general (and Eclipse is no exception) to be too large and slow for what I need. Ditto for desktop apps written in Python and Ruby (which are excellent languages!).
I don't mind recompiling the code base for different platforms as native executables. Yet C and C++ have their own set of issues.
As a C# developer, I have a preference for managed code, but I am not at all sold on Mono, yet (I could be convinced).
Does anyone have any ideas/experiences/specific favorite frameworks to share?
Just to cite an example: for .NET apps there are the CAB (Composite Application Block) and the Composite Application Guidance for WPF. Both are mainly implementations of a set of several design patterns focused on modularity and loose coupling between components similar to a plug-in architecture: you have an IOC framework, MVC base classes, a loosely coupled event broker, dynamic loading of modules and other stuff.
So I suppose that kind of pattern infrastructure is what you are trying to find, just not specifically for .NET. But if you see the CAB as a set of pattern implementations, you can see that almost every language and platform has some form of already built-in or third party frameworks for individual patterns.
So my take would be:
Study (if you are not familiar with them) some of those design patterns. You could take as an example the CAB framework for WPF documentation: Patterns in the Composite Application Library
Design your architecture by thinking about which of those patterns would be useful for what you want to achieve, without thinking of specific pattern implementations or products at first.
Once you have your 'architectural requirements' defined more specifically, look for individual frameworks that help accomplish each one of those patterns/features for the language you decide to use and put together your own application framework based on them.
I agree that the hard part is to make all this platform independent. I really cannot think of any solution other than choosing a mature platform-independent language like Java.
Are you planning a desktop or web application?
Everyone around here seems to think that Mono is great, but I still do not think it is ready for industry use. I would equate Mono to where Wine is: great idea; when it works it works well, and when it doesn't... well, you're out of luck. mod_mono for Apache is extremely glitchy and is hard to get running correctly.
If you're aiming for the desktop, nothing beats the Eclipse RCP (Rich Client Platform) framework: http://wiki.eclipse.org/index.php/Rich_Client_Platform.
You can build for Windows, Linux and Mac all from the same code, and all UI components are native to the OS. And RCP wins in modularity hands down; it has a plug-in architecture that is unrivaled (from what I have seen).
I have worked with RCP for 1.5 years now and I dunno what else could replace it; it is #1 in its niche.
If you're totally opposed to Java, I would look into wxWidgets with either Python or C++.
If you want platform independence, then you'll have to trade off between performance and development effort. C++ may be faster than Java (this is debatable FWIW) but you'll get platform independence a lot more easily with Java. Python and Ruby are in the same boat.
I doubt that .NET would be much faster than Java (they're both VM languages after all), but the big problem with .NET is platform independence. Mono has a noble goal and surprisingly good results so far but it will always be playing catch-up with Microsoft on Windows. You might be able to accept its limitations but it's still not the same as having identical multiplatform environments that Java, Python, and Ruby have. Also: the .NET development and support tools are heavily skewed towards Windows, and probably always will be.
IMO, your best bet is to target Java... or, at the very least, the JVM. If you don't like the Java language (and as a C# dev I'm guessing that's not the case) then you at least have options like Jython, JRuby, and Scala. With the JVM, you get very good platform independence, good performance, and access to a huge number of libraries and support tools. There's almost always a Java library, port or implementation that will do what you need it to do. I don't think any other platform out there has the same number of options; there's real value in that flexibility.
As for modularity: that's more about how you build the software than what platform you use. I don't know much about plugin architectures like you describe but I'm guessing that it will be possible in pretty much any modern platform you pick.
If you plan on doing Python development, you can always use Pyrex to optimize some of the slower parts.
With my limited Mono experience I can say I'm quite sold on it. The fact that there is active development and a lot of ongoing effort to bring it up to spec with the latest .Net technologies is encouraging. It is incredibly useful to be able to use existing .Net skills on multiple platforms. I had similar issues with performance when attempting to accomplish some basic tasks in Python + PyGTK -- maybe they can be made to perform in the right hands but it is nice to not have to worry about performance 90% of the time.
For desktop applications, writing it in an interpreted language and using a cross-platform UI toolkit like wxWidgets will get you a long way towards platform independence (you just have to be careful not to use any other modules that aren't cross-platform; use things like Python's os.path module in place of hard-coding paths like config_path = "/home/$USER").
That said, to make a good cross-platform application, you will have to do some things differently on each platform..
For example, OS X is probably the most different: preferences are usually stored in ~/Library/Preferences/ as .plists, and UIs are generally based around floating windows with a single menu bar docked at the top of the screen.
I suppose this is where the modularity comes into play. With the preferences example above, you could have a class UserConfig, of which you have OS-specific versions. The Windows one stores config data in the appropriate Application Data folder, or the registry. The Mac OS one uses .plist files in ~/Library/Preferences/, and the unix'y one uses ~/.dotfiles.
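A minimal sketch of that idea in C++ terms (paths and names are illustrative); the rest of the program only ever calls config_dir() and never hard-codes a platform path:

    #include <cstdlib>
    #include <string>

    // One interface, a per-OS implementation selected at compile time.
    // (Real code should check getenv() for null.)
    std::string config_dir() {
    #if defined(_WIN32)
      return std::string(std::getenv("APPDATA")) + "\\MyApp";       // Application Data
    #elif defined(__APPLE__)
      return std::string(std::getenv("HOME")) + "/Library/Preferences/MyApp";
    #else
      return std::string(std::getenv("HOME")) + "/.myapp";          // unix'y dotfile
    #endif
    }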