Testing framework for functional/system testing for C/C++?

For C++, there are lots of good unit test frameworks out there, but I couldn't find a good one for functional testing. By functional testing I mean stuff which touches the disk, requires the whole application to be in place, etc.
Case in point: what framework helps with testing things like whether your I/O works? I've got a hand-rolled system in place which creates temporary folders and copies around a bunch of data, so the tests are always run in the same environment. But before I spend more time on my custom framework -- is there a good one out there already?
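To make the question concrete, the kind of fixture I mean is roughly this (a minimal sketch assuming C++17's std::filesystem; the TempWorkspace name and the fixtures/ directory are just examples):

    // Minimal sketch of a temp-directory fixture (C++17). The fixtures/ path
    // and the copy step are placeholders for whatever data your tests need.
    #include <filesystem>
    #include <string>

    namespace fs = std::filesystem;

    class TempWorkspace {
    public:
        explicit TempWorkspace(const std::string& name)
            : dir_(fs::temp_directory_path() / ("test_" + name)) {
            fs::remove_all(dir_);                       // start from a clean slate
            fs::create_directories(dir_);
            fs::copy("fixtures/" + name, dir_,          // hypothetical test data
                     fs::copy_options::recursive);
        }
        ~TempWorkspace() { fs::remove_all(dir_); }      // always clean up
        const fs::path& path() const { return dir_; }
    private:
        fs::path dir_;
    };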

I wrote one from scratch three times already - twice for testing C++ apps that talked to exchanges using FIX protocol, once for a GUI app.
The problem is, you need to emulate the outside world to do proper system testing. I don't mean "outside of your code" - outside of your application. This involves emulating end users, outside entities, the Internet and so on.
I usually use Perl to write my system testing framework and tests, mostly because it's good at accessing all sorts of OS facilities and regexps are first-class citizens.
Some tips: make sure your logs are easy to parse, detailed but not too verbose. Have a sane default configuration. Make it easy to "reset" the application - you need to do it after each test.
The approach I usually use is to have some sort of "adapter" that turns the application's communications with the outside world into stdin/stdout of some executable. Then I build a Perl framework on top of that, and then the test cases use the framework.

Below I list a couple of tools and larger testing applications of which I am aware. If you provide more information on your platform (OS, etc.) we can probably provide better answers.
For part of what you require, Microsoft provides the Application Verifier:
Application Verifier (AppVerifier) is a runtime verification tool used in testing applications for compatibility with Microsoft Windows XP. This tool can be used to test for a wide variety of known compatibility issues while the application is running. This article details the steps for using AppVerifier as an effective addition to the application development and testing cycles.
Application Verifier can be useful for testing out low memory conditions, other low resources, and other API usage.
Another part of the puzzle, is the Microsoft Detours package, which can be used to replace API calls with your own code (useful for say, returning error codes for tests that are hard to set up).
Detours is a library for instrumenting arbitrary Win32 functions on x86, x64, and IA64 machines. Detours intercepts Win32 functions by re-writing the in-memory code for target functions. The Detours package also contains utilities to attach arbitrary DLLs and data segments (called payloads) to any Win32 binary.
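To illustrate the idea: a rough sketch of the kind of interception Detours enables, here forcing CreateFileW to report a disk-full error so that error-handling paths can be exercised in tests. The names are illustrative, and the usual Detours setup (linking detours.lib, matching the target architecture) is omitted:

    // Sketch only: force CreateFileW to fail so error-handling paths can be tested.
    // Assumes the Detours headers/libs are available; error checking omitted.
    #include <windows.h>
    #include <detours.h>

    static decltype(&CreateFileW) Real_CreateFileW = CreateFileW;

    static HANDLE WINAPI Fake_CreateFileW(LPCWSTR, DWORD, DWORD, LPSECURITY_ATTRIBUTES,
                                          DWORD, DWORD, HANDLE) {
        SetLastError(ERROR_DISK_FULL);        // simulate a hard-to-reproduce failure
        return INVALID_HANDLE_VALUE;
    }

    void InstallFailingCreateFile() {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourAttach(&(PVOID&)Real_CreateFileW, (PVOID)Fake_CreateFileW);
        DetourTransactionCommit();
    }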
There are other, larger (and more expensive) comprehensive packages available too. Borland makes Silk.
AutomatedQA makes TestComplete. The selection of one of these tools would depend on your needs and your applications.
IBM/Rational provides Rational Functional Tester, which is available across many platforms and is feature-rich.

Hi, I am not sure if the framework we have helps in your situation, but it hooks into Rational Functional Tester, allows the user to create various datasets to be attached to different tests, lets you change environments without changing the scripting, and reuses the automation in an efficient way.
Have a look if you're interested:
http://www.testpro.com.au/Test-Automation-Framework.html

Related

Need to sandbox application that compiles C++ modules from untrusted sources online

I’m developing a C++ application where I want to compile C++ modules from potentially untrusted sources online, and have them operate on a specific bank of data within a single process. I’d like to sandbox these somehow. This is obviously a complex issue, but hoping to discover if there’s any potential approach or tool/library I haven’t yet thought of. The app will run on Windows & OSX at minimum, and (hopefully) Linux, iOS, Android.
My app would locally compile the C++ modules it downloads, and dynamically link the object code to a process in the app (not necessarily the “main” app process). The C++ modules would only have access to my API via the headers I provide; however, the API (and any dependent libraries) needs to be linked into the same process. The API’s dependent libraries are compute-based only, such as native SIMD-based math and possibly memory allocation. I don’t expect they will need to call any network, disk, or other OS functionality, for that matter, except for needing to communicate their input data and computed results to the main process (maybe over shared memory?).
I don’t care if the sandboxed process’ memory is corrupted or hollowed, as long as it’s contained in that process. I also want to avoid having any system API call addresses linked into the process memory space, to prevent compromised code from finding them.
I’ve done a review of the basic security issues (stack crashes, return oriented programming hacks, etc.). Also looked at some related projects:
I see Google has a sandbox subproject within the Chromium repo which might be useful, but I’m unsure of its utility in my case.
Windows Sandbox is a Microsoft tool for Windows only, and isn’t available on some versions anyway. Moreover, there are big performance issues with using it. The app runs in real time, with frame-rate requirements similar to a video game.
I considered compiling to WebAssembly, but at the moment it seems too immature (no SIMD, hard to debug, and potentially vulnerable to hacks in the wrapping host or browser).
I thought there might be some kind of wrapper library already out there to intercept all OS calls and allow custom configuration of what calls get passed through (in my case, anything except what’s needed for the inter-process communication would be denied)
Any other ideas, architectural suggestions, or promising open source projects on the horizon for this?
Thanks,
C
Compiling untrusted source code and linking it into your app sounds really unsafe. If I understand your problem correctly, you need to "provide a safe runtime environment for single-threaded user code with access only to your API"; in my opinion it's better to use a runtime interpreter instead. It will give you more control and sandboxing capabilities, safe API calls, and handling of exceptions in user code.
If you have doubts about interpreter performance, it's a good trade-off for safety, flexibility, and control. Most interpreters compile source code to bytecode and run really fast. You can also get better performance by providing a fast API to the script.
In my Java enterprise projects I use the built-in Rhino JavaScript interpreter to run user scripts and provide an API, which gives me flexibility, the required performance, and control. These scripts can call nothing but my API. It's safe, flexible, and fully controllable.
I found these C/C++ (C-like syntax) interpreter libraries:
JavaScript (ECMA)
https://v8.dev/
Lua
http://acamara.es/blog/2012/08/running-a-lua-5-2-script-from-c/
C++ interpreter
https://github.com/root-project/cling
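For instance, a rough sketch of the embedding pattern with Lua (assuming Lua 5.2 or later; the compute function is just a stand-in for your own API, and skipping luaL_openlibs keeps the io/os libraries away from scripts):

    // Sketch of embedding Lua (5.2+) and exposing only a whitelisted API.
    // compute() here is a stand-in for your own data-processing call.
    #include <lua.hpp>
    #include <iostream>

    static int compute(lua_State* L) {              // exposed to scripts as "compute"
        double x = luaL_checknumber(L, 1);
        lua_pushnumber(L, x * 2.0);                 // placeholder for real work
        return 1;                                   // one return value
    }

    int main() {
        lua_State* L = luaL_newstate();             // no luaL_openlibs -> no io/os access
        lua_register(L, "compute", compute);
        if (luaL_dostring(L, "result = compute(21)") != LUA_OK) {
            std::cerr << lua_tostring(L, -1) << "\n";
        }
        lua_getglobal(L, "result");
        std::cout << lua_tonumber(L, -1) << "\n";   // prints 42
        lua_close(L);
        return 0;
    }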

COM in the non-Windows world?

Hope this question isn't going to be too vague. Reading through the COM spec and Don Box's Essential COM book, there is plenty of talk of the "problems that COM solves" - and they all sound important, relevant and current.
So how are the problems that COM addresses dealt with on other systems (Linux, Unix, OS X, Android)? I'm thinking of things like:
binary compatibility across compilers and compiler versions
binary component reuse
compiling an application such that it has run-time dependencies rather than load-time ones (so that it runs even when a dependency is missing)
access to library functionality from languages other than the library's own
reasonably low-overhead remote procedure calls to components loaded in the address space of a different process
etc (I'm sure the list goes on)
I'm basically just trying to understand why, for instance, CORBA isn't a thing on Linux the way COM is a thing on Windows (if that makes any sense). Does software development on Linux perhaps subscribe to a different philosophy than the component-based model proposed by COM?
And finally, is COM a C/C++ thing? Several times I've come across comments from people saying COM is made "obsolete" by .NET but without really explaining what they meant by that.
For the remainder of this post, I'm going to use Linux as an example of open-source software. Where I mention "Linux" it's mostly a short/simple way to refer to open source software in general though, not anything specific to Linux.
COM vs. .NET
COM isn't actually restricted to C and C++, and .NET doesn't actually replace COM. However, .NET does provide alternatives to COM for some situations. One common use of COM is to provide controls (ActiveX controls). .NET provides/supports its own protocol for controls that allows somebody to write a control in one .NET language, and use that control from any other .NET language--more or less the same sort of thing that COM provides outside the .NET world.
Likewise, .NET provides Windows Communication Foundation (WCF). WCF implements SOAP (Simple Object Access Protocol)--which may have started out simple, but grew into something a lot less simple at best. In any case, WCF provides many of the same kinds of capabilities as COM does. Although WCF itself is specific to .NET, it implements SOAP, and a SOAP server built using WCF can talk to one implemented without WCF (and vice versa). Since you mention overhead, it's probably worth mentioning that WCF/SOAP tend to add more overhead than COM (I've seen anywhere from nearly equal to about double the overhead, depending on the situation).
Differences in Requirements
For Linux, the first two points tend to have relatively low relevance. Most software is open source, and many users are accustomed to building from source in any case. For such users, binary compatibility/reuse is of little or no consequence (in fact, quite a few users are likely to reject all software that isn't distributed in source code form). Although binaries are commonly distributed (e.g., with apt-get, yum, etc.) they're basically just caching a binary built for a specific system. That is, on Windows you might have a single binary for use on anything from Windows XP up through Windows 10, but if you use apt-get on, say, Ubuntu 18.04, you're installing a binary built specifically for Ubuntu 18.04, not one that tries to be compatible with everything back to Ubuntu 10 (or whatever).
Being able to load and run (with reduced capabilities) when a component is missing is also most often a closed-source problem. Closed source software typically has several versions with varying capabilities to support different prices. It's convenient for the vendor to be able to build one version of the main application, and give varying levels of functionality depending on which other components are supplied/omitted.
That's primarily to support different price levels though. When the software is free, there's only one price and one version: the awesome edition.
Access to library functionality between languages again tends to be based more on source code instead of a binary interface, such as using SWIG to allow use of C or C++ source code from languages like Python and Ruby. Again, COM is basically curing a problem that arises primarily from lack of source code; when using open source software, the problem simply doesn't arise to start with.
Low-overhead RPC to code in other processes again seems to stem primarily from closed source software. When/if you want Microsoft Excel to be able to use some internal "stuff" in, say, Adobe Photoshop, you use COM to let them communicate. That adds run-time overhead and extra complexity, but when one of the pieces of code is owned by Microsoft and the other by Adobe, it's pretty much what you're stuck with.
Source Code Level Sharing
In open source software, however, if project A has some functionality that's useful in project B, what you're likely to see is (at most) a fork of project A to turn that functionality into a library, which is then linked into both the remainder of project A and into Project B, and quite possibly projects C, D, and E as well--all without imposing the overhead of COM, cross-procedure RPC, etc.
Now, don't get me wrong: I'm not trying to act as a spokesperson for open source software, nor to say that closed source is terrible and open source is always dramatically superior. What I am saying is that COM is defined primarily at a binary level, but for open source software, people tend to deal more with source code instead.
Of course SWIG is only one example among several of tools that support cross-language development at a source-code level. While SWIG is widely used, COM is different from it in one rather crucial way: with COM, you define an interface in a single, neutral language, and then generate a set of language bindings (proxies and stubs) that fit that interface. This is rather different from SWIG, where you're matching directly from one source to one target language (e.g., bindings to use a C library from Python).
Binary Communication
There are still cases where it's useful to have at least some capabilities similar to those provided by COM. These have led to open-source systems that resemble COM to a rather greater degree. For example, a number of open-source desktop environments use/implement D-bus. Where COM is mostly an RPC kind of thing, D-bus is mostly an agreed-upon way of sending messages between components.
D-bus does, however, specify things it calls objects. Its objects can have methods you can invoke and signals they can emit. Although D-bus itself defines this primarily in terms of a messaging protocol, it's fairly trivial to write proxy objects that make invoking a method on a remote object look pretty much like invoking one on a local object. The big difference is that COM has a "compiler" that can take a specification of the protocol, and automatically generate those proxies for you (and corresponding stubs in the far end to receive the message, and invoke the proper function based on the message it received). That's not part of D-bus itself, but people have written tools to take (for example) an interface specification and automatically generate proxies/stubs from that specification.
As such, although the two aren't exactly identical, there's enough similarity that D-bus can be (and often is) used for many of the same sorts of things as COM.
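To make the "looks like a local call" point concrete, here is a rough sketch of invoking a method on a remote D-Bus object (the bus daemon itself), using the sd-bus C API, which is just one of several D-Bus bindings (assumes libsystemd is available; error handling is trimmed):

    // Sketch: calling a D-Bus method (GetId on the bus itself) via sd-bus.
    // Compile with -lsystemd; error handling trimmed for brevity.
    #include <systemd/sd-bus.h>
    #include <cstdio>

    int main() {
        sd_bus* bus = nullptr;
        sd_bus_error error = SD_BUS_ERROR_NULL;
        sd_bus_message* reply = nullptr;

        sd_bus_open_user(&bus);                       // connect to the session bus
        sd_bus_call_method(bus,
                           "org.freedesktop.DBus",    // service
                           "/org/freedesktop/DBus",   // object path
                           "org.freedesktop.DBus",    // interface
                           "GetId",                   // method
                           &error, &reply, "");       // no input arguments
        const char* id = nullptr;
        sd_bus_message_read(reply, "s", &id);         // single string result
        std::printf("bus id: %s\n", id ? id : "(none)");

        sd_bus_message_unref(reply);
        sd_bus_error_free(&error);
        sd_bus_unref(bus);
        return 0;
    }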
Systems Similar to DCOM
COM also allows you to build distributed systems using DCOM (Distributed COM). That is, a system where you invoke a method on one machine, but (at least potentially) execute that invoked method on another machine. This adds more overhead, but since (as pointed out above with respect to D-bus) RPC is basically communication with proxies/stubs attached to the ends, it's pretty easy to do the same thing in a distributed fashion. The difference in overhead, however, tends to lead to differences in how systems need to be designed to work well, so the practical advantage of using exactly the same system for distributed systems as for local systems tends to be fairly minimal.
As such, the open source world provides tools for doing distributed RPC, but doesn't usually work hard at making them look the same as non-distributed systems. CORBA is well known, but generally viewed as large and complex, so (at least in my experience) current use is fairly minimal. Apache Thrift provides some of the same general type of capabilities, but in a rather simpler, lighter-weight fashion. In particular, where CORBA attempts to provide a complete set of tools for distributed computing (complete with everything from authentication to distributed time keeping), Thrift follows the Unix philosophy much more closely, attempting to meet exactly one need: generate proxies and stubs from an interface definition (written in a neutral language). If you want to do those CORBA-like things with Thrift you undoubtedly can, but in a more typical case of building internal infrastructure where the caller and callee trust each other, you can avoid a lot of overhead and just get on with the business at hand. Likewise, gRPC provides roughly the same sorts of capabilities as Thrift.
OS X Specific
Cocoa provides distributed objects that are fairly similar to COM. This is based on Objective-C though, and I believe it's now deprecated.
Apple also offers XPC. XPC is more about inter-process communication than RPC, so I'd consider it more directly comparable to D-bus than to COM. But, much like D-bus, it has a lot of the same basic capabilities as COM, but in different form that places more emphasis on communication, and less on making things look like local function calls (and many now prefer messaging to RPC anyway).
Summary
Open source software has enough different factors in its design that there's less demand for something providing the same mix of capabilities as Microsoft's COM provides on Windows. COM is largely a single tool that tries to meet all needs. In the open-source world, there's less drive to provide that single, all-encompassing solution, and more tendency toward a kit of tools, each doing one thing well, that can be put together into a solution for a specific need.
Being more commercially oriented, Apple OS X probably has what are (at least arguably) closer analogs to COM than most of the more purely open-source world.
A quick answer on the last question: COM is far from being obsolete. Almost everything in the Microsoft world is COM-based, including the .NET engine (the CLR), and including the new Windows 8.x's Windows Runtime.
Here is what Microsoft says about .NET in its latest C++ pages, Welcome Back to C++ (Modern C++):
C++ is experiencing a renaissance because power is king again. Languages like Java and C# are good when programmer productivity is important, but they show their limitations when power and performance are paramount. For high efficiency and power, especially on devices that have limited hardware, nothing beats modern C++.
PS: which is a bit of a shock for a developer who has invested more than 10 years on .NET :-)
In the Linux world, it is more common to develop components that are statically linked, or which run in separate processes and communicate by piping text (maybe JSON or XML) back and forth.
Some of this is due to tradition. UNIX developers have been doing stuff like this long before CORBA or COM existed. It's "the UNIX way".
As Jerry Coffin says in his answer, when you have the source code for everything, binary interfaces are not as important, and in fact just make everything more difficult.
COM was invented back when personal computers were a lot slower than they are today. In those days, loading components into your app's process space and invoking native code was often necessary to achieve reasonable performance. Now, parsing text and running interpreted scripts aren't things to be afraid of.
CORBA never really caught on in the open-source world because the initial implementations were proprietary and expensive, and by the time high-quality free implementations were available, the spec was so complicated that nobody wanted to use it if they weren't required to do so.
To a large extent, the problems solved by COM are simply ignored on Linux.
It is true that binary compatibility is less important when you have the source code available. However, you still have to worry about modularisation and versioning. If two different programs depend on different versions of the same library, you need to somehow support that.
Then there is the case of the same program using different versions of the same library. This is often useful when working on large legacy programs, where upgrading everything can be prohibitively expensive but you would like to use new features anyway. With COM, the old parts of the program can just be left alone, since new library versions can more easily be made backwards compatible.
In addition, having to compile from source instead of relying on binary compatibility is a huge hassle, especially if you are actually developing software, since binary incompatibility means you have to recompile much more often. If one tiny part changes in a large C++ program, you may have to wait for a 30-minute recompile. If the different pieces are compatible, only the part which changed has to be recompiled.
COM, and DCOM in particular, has been around in Windows for a considerable time now, and naturally Windows developers have made use of this powerful framework.
We are now in the cross-platform age, and when porting such applications to other platforms we face challenges which in many cases can be mitigated or eliminated altogether, unless the application we are porting is more than just one simple standalone app.
If you're dealing with a whole suite of modules running on different machines, all communicating using Windows-specific technologies such as DCE/RPC, DCOM, or even Windows named pipes, then your job just became an order of magnitude harder.
DCE/RPC, DCOM, and Windows named pipes are all very Windows-specific, non-portable, and of course subject to Windows security access control.
For instance, anyone familiar with OPC DA (an industrial automation protocol based on DCOM, still very much in use but now superseded by OPC UA, which avoids DCOM) will know that there are no elegant solutions here if the client (or server) needs to be available for Linux!
Sure, there appear to be some technical hurdles here given that the MS code is not in the public domain, but projects such as Wine have a partly-OK DCE/RPC implementation and MS does publish some of the protocol docs. Still, try searching and you will probably find little information and few products, open source or otherwise, to help you.
Perhaps the lack of open source or affordable options here is more due to legal concerns - I wonder!
Some simpler solutions simply involve installing a "gateway service" on the Windows machines to allow an alternative means of access to DCOM interfaces on that machine. This is fine if the Windows machine does not belong to an unwilling third party, which unfortunately is sometimes the case! "We'll just chuck another Windows machine in the middle as the gateway" is the usual, global-warming-enhancing solution to that problem.
I would conclude that Linux to Windows DCOM interoperability is certainly not impossible but it does appear to be a topic that few are interested in talking about unless you get your wallet out!

Easiest way to build a cross-platform application

I have read a few articles in the cross-platform tag. However, as I'm starting a fresh application (mostly a terminal/console app), I'm wondering about the easiest way to make it cross-platform (i.e. working for Linux, Mac OS X, and Windows). I have thought about the following:
adding various macros/#ifdefs in my code to build different binary executables for each operating system
using the Qt platform to develop a cross-platform app (although the GUI and platform components would add more development time, as I'm not familiar with Qt)
Your thoughts? Thanks in advance for your contribution!
Edit: Sounds like there are a lot of popular responses on Java and Qt. What are the tradeoffs between these two while we're at it?
Do not go the first way. You'll encounter a lot of problems that are already solved for you by numerous tools.
Qt is an excellent choice if you definitely want C++. In fact, it will speed up development even if you aren't familiar with it, as it has excellent documentation and is easy to use. The good part about it is that it isn't just a GUI framework, but also networking, XML, I/O and lots of other stuff you'll probably need.
If C++ isn't a requirement, I'd go with Java. C++ is far too low-level a language for most applications. Debugging memory management and corrupt stacks can be a nightmare.
To your edited question:
The obvious one: Java has garbage collection, C++ doesn't. It means no memory leaks in Java (unless you count possible bugs in JVM), no need to worry about dangling pointers and such.
Another obvious one: it is extremely easy to use platform-dependent code in C++ using #ifdefs (see the short sketch after this list). In Java it is a real pain. There is JNI, but it isn't easy to use at all.
Java has very extensive support of exceptions. While C++ has exceptions too, Qt doesn't use them, and some things that generate exceptions in Java will leave you with corrupt memory and crashes in C++ (think buffer overflows).
"Write once, run everywhere." Recompiling C++ program for many platforms can be daunting. Java programs don't need to be recompiled.
It is open to debate, but I think Java has more extensive and well-defined library. The abstraction level is generally higher, the interfaces are cleaner. And it supports more useful things, like XML schemas and such. I can't think of a feature that is present in Qt, but absent in Java. Maybe multimedia or something, I'm not sure.
Both languages are very fast nowadays, so performance is usually not an issue, but Java can be a real memory hog. That isn't extremely important on modern hardware either, but still.
The least obvious one: C++ can be more portable than Java. One example is FreeBSD OS which had very poor support for Java some time ago (don't know if it is still the case). C++/Qt works perfectly there. If you plan on supporting a wide range of Unix systems, C++ may be a better choice.
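To illustrate the #ifdef point from the list above (a trivial sketch; the function name is made up):

    // Sketch of the #ifdef approach: fetching the home directory differs per
    // platform, but callers see one portable function.
    #include <cstdlib>
    #include <string>

    std::string home_directory() {
    #if defined(_WIN32)
        const char* p = std::getenv("USERPROFILE");   // e.g. C:\Users\alice
    #else
        const char* p = std::getenv("HOME");          // e.g. /home/alice
    #endif
        return p ? p : ".";
    }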
Use Java. As much bashing as it gets/used to get, it's the best thing to get stuff working across any platform. Sure, you will still need to handle external OS related functions you may be using, but it's much better than using anything else.
Apart from Java, there are a few things you can run on the JVM - JRuby, Jython, Scala come to mind.
You could also write in scripting languages directly (Ruby, Python, etc.).
C/C++ is best left for applications that demand complete memory control and high controllability.
I'd go with the Qt (or some other framework) option. If you went with the first, you'd find it considerably harder. After all, you have to know what to put into the various conditionally compiled sections for all the platforms you're targeting.
I would suggest using a technology designed for cross-platform application development. Here are two technologies I know of that -- as long as you read the documentation and use the features properly -- you can build the application to run on all 3 platforms:
Java
XULRunner (Mozilla's Development Platform)
Of course, there is always the web. I mostly use web applications not just for their portability, but also because they run on my Windows PC, my Ubuntu computer, and my Mac.
We mainly build web applications because the web is the future. Local applications are viewed in my organization as mostly outdated, unless there is of course some feature or technology the web doesn't yet support that holds that application back from being fully web-based.
I would also suggest GitHub's Electron, which allows you to build cross-platform desktop applications using Node.js and Google's Chromium. The only drawback of this method is that an Electron application runs much slower than a native C++ application due to the abstraction layers between JavaScript and native C++.
If you're making a console app, you should be able to use the same source for all three platforms if you stick to the functions defined in the POSIX libraries. Setting up your build environment is the most complicated part, especially if you want to be able to build for multiple platforms out of the same source tree.
I'd say if you really want to use C++, Qt is the easiest way to build a cross-platform application. I find myself using Qt whenever I need a UI, and beyond the GUI, Qt has a large set of libraries which make pretty much everything easier in C++.
If you don't want to use Qt, then you need a good design and a lot of abstraction to make a cross-platform application.
However, I'm using the Python bindings to Qt more and more for medium-sized applications.
If you are working on a console application and you know a bit of Python, you might find Python scripting much more comfortable than C++. It keeps the time-consuming stuff out of the way so you can focus on your application.

Are there any automated unit testing frameworks for testing an in-house threading framework?

We have created a common threading framework to manage how we want to use threads in our applications. Are there any frameworks out there like gtest or cppunit that solely focus on unit testing threads, thread pools, thread queues, and such?
Right now I just kind of manually go through some steps that I know I should cover and do checks in the code to make sure that certain conditions are met (like values aren't corrupted because a shared resource was accessed simultaneously by two or more threads). If I'm not able to create a definitive check, then I step through the debugger, but this seems like testing in the 1990s.
I would like to more systematically test the functionality of the threading framework, both for its internal functionality that might not be the same as all threading frameworks, and for common functionality that all threading frameworks should have (like not deadlocking, not corrupting data, i.e. counts are what they should be, etc.).
Any suggestions would be greatly appreciated.
If your threads are built on OpenMP, you can use VivaMP for static checking.
But you want dynamic checking with unit tests. I'm not aware of any existing framework for this purpose. You could roll your own with one of the many unit test frameworks out there, but it would be hard to make it robust. Intel has a suite of parallel development tools that might be of interest, but I've never used them. They say that they can help with unit tests from within Visual Studio.
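For what it's worth, a rolled-your-own check might look something like the minimal sketch below (assuming googletest with gtest_main linked; the mutex-protected std::queue is only a stand-in for your own framework's structures):

    // Minimal sketch: N producers push into a shared queue; afterwards the
    // count must be exact. Swap the stand-in queue for your own classes.
    #include <gtest/gtest.h>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    TEST(ThreadingFramework, ConcurrentPushesAreNotLost) {
        std::queue<int> q;
        std::mutex m;
        const int kThreads = 8, kPerThread = 10000;

        std::vector<std::thread> workers;
        for (int t = 0; t < kThreads; ++t) {
            workers.emplace_back([&] {
                for (int i = 0; i < kPerThread; ++i) {
                    std::lock_guard<std::mutex> lock(m);
                    q.push(i);
                }
            });
        }
        for (auto& w : workers) w.join();

        EXPECT_EQ(q.size(), static_cast<size_t>(kThreads * kPerThread));
    }

A test like this only catches failures that actually happen during a run, so running it under a race detector such as ThreadSanitizer or Valgrind's Helgrind makes it considerably more effective.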
If you write a threading library, you have to debug it yourself. Threading libraries aren't as general-purpose as general-purpose programs =D so you won't find a unit testing framework for your specific problem :D
After that disclaimer, though: if you were running on Solaris, OS X, or FreeBSD, DTrace would make it trivial to unit test your library.

What is the best approach to both modularity and platform independence? [closed]

I hope this question does not come off as broad as it may seem at first. I am designing a software application that I would like to be both cross-platform and modular. I am still in the planning phase and can pick practically any language and toolset.
This makes things harder, not easier, because there are seemingly so many ways of accomplishing both of the goals (modularity, platform agnosticism).
My basic premise is that security, data storage, interaction with the operating system, and configuration should all be handled by a "container" application - but most of the other functionality will be supplied through plug-in modules. If I had to describe it at a high level (without completely giving away my idea), it would be a single application that can do many different jobs, all dedicated to the same goal (there are lots of disparate things to do, but all the data has to interact and be highly available).
I find myself wrestling with not so much how to do it (I can think of lots of ways), but which method is best.
For example, I know that Eclipse practically embodies what I am describing, but I find Java applications in general (and Eclipse is no exception) to be too large and slow for what I need. Ditto desktop apps written in Python and Ruby (which are excellent languages!).
I don't mind recompiling the code base for different platforms as native executables. Yet C and C++ have their own set of issues.
As a C# developer, I have a preference for managed code, but I am not at all sold on Mono, yet (I could be convinced).
Does anyone have any ideas/experiences/specific favorite frameworks to share?
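For concreteness, the kind of plug-in seam I have in mind looks roughly like this hypothetical sketch, written in C++/dlopen only because it is easy to show (all names are made up; on Windows it would be LoadLibrary/GetProcAddress instead, and the same shape applies in other languages):

    // Rough sketch of a plug-in seam (illustrative names only).
    // Container side, POSIX: each plug-in .so exports a factory function.
    #include <dlfcn.h>
    #include <memory>
    #include <string>

    struct IModule {                         // stable interface owned by the container
        virtual ~IModule() = default;
        virtual std::string name() const = 0;
        virtual void run() = 0;
    };

    using CreateModuleFn = IModule* (*)();

    std::unique_ptr<IModule> load_module(const std::string& path) {
        void* handle = dlopen(path.c_str(), RTLD_NOW);
        if (!handle) return nullptr;
        auto create = reinterpret_cast<CreateModuleFn>(dlsym(handle, "create_module"));
        return create ? std::unique_ptr<IModule>(create()) : nullptr;
    }

    // Each plug-in implements IModule and exports:
    //   extern "C" IModule* create_module();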
Just to cite an example: for .NET apps there are the CAB (Composite Application Block) and the Composite Application Guidance for WPF. Both are mainly implementations of a set of several design patterns focused on modularity and loose coupling between components similar to a plug-in architecture: you have an IOC framework, MVC base classes, a loosely coupled event broker, dynamic loading of modules and other stuff.
So I suppose that kind of pattern infrastructure is what you are trying to find, just not specifically for .NET. But if you see the CAB as a set of pattern implementations, you can see that almost every language and platform has some form of already built-in or third party frameworks for individual patterns.
So my take would be:
Study (if you are not familiar with) some of those design patterns. You could take as an example the CAB framework for WPF documentation: Patterns in the Composite Application Library
Design your architecture thinking on which of those patterns you think would be useful for what you want to achieve first without thinking in specific pattern implementations or products.
Once you have your 'architectural requirements' defined more specifically, look for individual frameworks that help accomplish each one of those patterns/features for the language you decide to use and put together your own application framework based on them.
I agree that the hard part is making all this platform-independent. I really cannot think of any solution other than choosing a mature, platform-independent language like Java.
Are you planning a desktop or web application?
Everyone around here seems to think that Mono is great, but I still do not think it is ready for industry use. I would equate Mono to where Wine is: great idea; when it works it works well, and when it doesn't... well, you're out of luck. mod_mono for Apache is extremely glitchy and is hard to get running correctly.
If you're aiming for the desktop, nothing beats the Eclipse RCP (Rich Client Platform) framework: http://wiki.eclipse.org/index.php/Rich_Client_Platform.
You can build for Windows, Linux, and Mac all from the same code, and all UI components are native to the OS. RCP also wins in modularity hands down; it has a plug-in architecture that is unrivaled (from what I have seen).
I have worked with RCP for 1.5 years now and I don't know what else could replace it; it is #1 in its niche.
If you're totally opposed to Java, I would look into wxWidgets with either Python or C++.
If you want platform independence, then you'll have to trade off between performance and development effort. C++ may be faster than Java (this is debatable FWIW) but you'll get platform independence a lot more easily with Java. Python and Ruby are in the same boat.
I doubt that .NET would be much faster than Java (they're both VM languages after all), but the big problem with .NET is platform independence. Mono has a noble goal and surprisingly good results so far but it will always be playing catch-up with Microsoft on Windows. You might be able to accept its limitations but it's still not the same as having identical multiplatform environments that Java, Python, and Ruby have. Also: the .NET development and support tools are heavily skewed towards Windows, and probably always will be.
IMO, your best bet is to target Java... or, at the very least, the JVM. If you don't like the Java language (and as a C# dev I'm guessing that's not the case) then you at least have options like Jython, JRuby, and Scala. With the JVM, you get very good platform independence, good performance, and access to a huge number of libraries and support tools. There's almost always a Java library, port or implementation that will do what you need it to do. I don't think any other platform out there has the same number of options; there's real value in that flexibility.
As for modularity: that's more about how you build the software than what platform you use. I don't know much about plugin architectures like you describe but I'm guessing that it will be possible in pretty much any modern platform you pick.
If you plan on doing Python development, you can always use Pyrex to optimize some of the slower parts.
With my limited Mono experience I can say I'm quite sold on it. The fact that there is active development and a lot of ongoing effort to bring it up to spec with the latest .Net technologies is encouraging. It is incredibly useful to be able to use existing .Net skills on multiple platforms. I had similar issues with performance when attempting to accomplish some basic tasks in Python + PyGTK -- maybe they can be made to perform in the right hands but it is nice to not have to worry about performance 90% of the time.
For desktop applications, writing it in an interpreted language and using a cross-platform UI toolkit like wxWidgets will get you a long way towards platform independence (you just have to be careful not to use any other modules that aren't cross-platform; use things like Python's os.path module in place of hard-coding paths like config_path = "/home/$USER").
That said, to make a good cross-platform application, you will have to do some things differently on each platform..
For example, OS X is probably the most different: preferences are usually stored in ~/Library/Preferences/ as .plists, and UIs are generally based around floating windows, with a single menu bar docked at the top of the screen.
I suppose this is where the modularity comes into play. With the preferences example above, you could have a class UserConfig with OS-specific versions. The Windows one stores config data in the appropriate Application Data folder, or the registry. The Mac OS one uses .plist files in ~/Library/Preferences/, and the Unix-y one uses ~/.dotfiles.
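Sketched in C++ for concreteness (the paragraph above assumes an interpreted language, but the shape is the same), the UserConfig idea might be one shared header plus one implementation file per OS, selected by the build system. All names here are illustrative:

    // user_config.h -- shared interface; each platform gets its own .cpp,
    // chosen by the build system.
    #pragma once
    #include <filesystem>

    class UserConfig {
    public:
        // Directory where this platform expects per-user settings to live.
        std::filesystem::path config_dir() const;
    };

    // user_config_mac.cpp -- compiled only for the macOS target
    // #include "user_config.h"
    #include <cstdlib>

    std::filesystem::path UserConfig::config_dir() const {
        const char* home = std::getenv("HOME");
        return std::filesystem::path(home ? home : ".") / "Library" / "Preferences";
    }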