What do the terms platform and framework refer to? - c++

I have run into these terms many times and recently saw them again, but I still don't know what they actually mean in computer engineering.
What do platform and framework refer to?
I see many terms like platform-independent and development platforms, and the same for frameworks, but I can't quite understand them. Do they refer to libraries? Do they refer to different kinds of operating system?

The term framework is very well defined: a framework is very similar to a library, except that control is inverted. (Inversion of Control is the defining characteristic of what constitutes a framework.) In other words: you call a library, but a framework calls you.
Another way to think about it is that you write an application but leave all the uninteresting details blank, and use libraries to fill them in. A framework, OTOH, is an application: one which has all the interesting details left blank for you to fill in. (Of course, in the code you use to fill in the blanks you can in turn call libraries yourself. Also, the framework itself will call libraries to implement its inner workings. And frameworks usually come bundled with a rich set of libraries which are tightly integrated with the framework. However, the distinction is still clear: just because the framework and the libraries ship together in one package doesn't mean there is no distinction.)
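To make the distinction concrete, here is a minimal C++ sketch; the names are invented for illustration and not taken from any real framework:

```cpp
#include <iostream>
#include <string>

// Library: your code owns the control flow and calls in.
std::string make_greeting(const std::string& name) {
    return "Hello, " + name;
}

// Framework: the framework owns the control flow and calls you back
// through an interface you implement.
struct AppDelegate {
    virtual ~AppDelegate() = default;
    virtual void on_start() = 0;
    virtual void on_event(const std::string& event) = 0;
};

// The "application with the interesting details left blank":
// it drives everything, delegating the blanks to your code.
void run_framework(AppDelegate& app) {
    app.on_start();
    app.on_event("clicked");
    app.on_event("closed");
}

struct MyApp : AppDelegate {
    void on_start() override { std::cout << "starting\n"; }
    void on_event(const std::string& e) override { std::cout << "got " << e << "\n"; }
};

int main() {
    std::cout << make_greeting("world") << "\n";  // you call the library
    MyApp app;
    run_framework(app);                           // the framework calls you
}
```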
The term platform, however, is not so well defined. It is also heavily overloaded. In the context of porting native applications, it usually refers to the combination of CPU ISA (e.g. x86, AMD64, IA-64, POWER, MIPS, ARMv9, SPARC), hardware architecture (PC, CHRP, PReP, Mac), kernel (Linux, NT) and base libraries (POSIX, Win32, Core Foundation).
In the broader context of software development, "platform" usually literally means "that which your code stands on". For a native application, that could be basically the same as above, for a JVM application it could be the JVM plus the JRE plus OSGI.
Basically, you can take the metaphor quite literally: a platform allows you (i.e. your code) to stand on higher ground than you could without it.

Platform is an amorphous term, which can mean:
Hardware (usually CPU/architecture), e.g. x86, Alpha, etc.
Operating system, e.g. Unix, Windows, Linux, Mac OS X, etc.
Virtual machine, e.g. the Java JVM, the Flash Player AVM
Frameworks, on the other hand, are usually a collection of tools: software, hardware, or methodology/pattern based (although not necessarily all of these in any particular framework), which combine to provide a way of building applications (or specific layers of an application).
A few examples of frameworks are:
Software
Java Swing
Microsoft WPF
Adobe Flex
Ruby on Rails
Django (Python)
Hardware/Software
Arduino (arguable)
Trusted ILLIAC
Method/Patterns (or Process)
SCRUM
IBM Rational Unified Process

Platform usually means something to do with the environment the software is running in. So it often means the operating system (e.g. Windows or Linux), but sometimes the architecture (x86 might be a platform, or the Java virtual machine). A framework is usually a collection of functions or classes, so it is often much the same as a library, or can be roughly understood in the same way.

I'll try the platform part: a platform is something that you "build upon", or "stand upon" for a literal analogy, to get something done. I've used "telephony platforms", which consist of software and hardware components that enable the development of interactive voice response systems.


Related

Communicate with CoDeSys program on a Linux-based WAGO PFC200 PLC

I'm currently getting familiar with PLCs, the WAGO 750-8206 PLC in particular. It offers a Linux OS and can run CoDeSys programs. There are some I/O modules attached to the controller: 750-530, 750-430 and 750-600. What I would like to know is this:
Is it possible to write a C++ Linux application that runs on the PLC and gets/sets the digital inputs and outputs?
Even better: can I write a CoDeSys program that "talks to the I/Os" and handles all the logic, and at the same time can be accessed by a C++ Linux program? The idea is this: I would like the CoDeSys program to check, let's say, two digital inputs. If both are high, a variable should be set to a defined value. The Linux application should be able to read that variable and conduct further processing (such as sending JSON data to a server or similar).
Also, I would need to be able to send commands from the Linux application to the CoDeSys program in order to switch digital outputs (or set values on analog outputs, etc.) when the Linux application receives a message that triggers the command.
Any thoughts and comments on this topic are greatly appreciated as I am completely new to this topic. Thanks in advance!
The answer you might want
The situation has changed to the opposite of what the previous answer describes.
WAGO's recent Board Support Packages and documentation actively support you in making changes and extensions to the PFC200 line, specifically the WAGO 750-8206 and (as of March 2016) 17 other PLCs:
wago.us -> Products -> Components for Automation -> Modular WAGO-I/O-SYSTEM, IP 20 (750/753 Series)
What you have to do is get in touch with them and ask for their latest Board Support Package (BSP) for the PFC200 line.
I quote from the previous answer and mark the changes; my additions are in bold.
Synopsis
Could you hack a PFC200 and get custom binaries executed? ~~Probably~~ **Absolutely yes.** **As long as the program is content to run on the Linux 3.6.11 kernel and glibc 2.16 and is compiled for the "armhf" ABI, any existing ARM application, provided you also copy the libraries it uses, will just run, without even being compiled specifically for the PFC200.**
Would it be easy or quick? ~~No.~~ **Yes, if you have no fear of the Linux command line. It is as easy as using the cross compiler provided by the Board Support Package (BSP) with the provided C libraries, and then running this to transfer your program to the PFC's flash and run it:**

    scp your-program root@PFC200:/usr/bin
    ssh root@PFC200 /usr/bin/your-program

**Of course, you can use Eclipse CDT with the cross toolchain for the PFC200 and configure Eclipse to do remote run and debug.**
Will this change in the future? Maybe. Remember that the PFC200 is fairly new in North America. **It has; the PFC200 appeared in September 2014.**
The public HOWTO "Building FORTE for Wago" describes how to use the initial BSP to run FORTE, the IEC 61499 runtime environment of 4DIAC (sf.net/projects/fordiac), an open source PLC environment that allows implementing industrial control solutions in a vendor-neutral way. 4DIAC implements IEC 61499, extending IEC 61131-3 with better support for controller-to-controller communication and dynamic reconfiguration.
In case you want to access the KBUS (which talks to the I/Os) directly, you have to know that currently only one application can be in charge of KBUS.
So either CODESYS, or FORTE, or your own KBUS application can be in charge of the KBUS.
The BSP from 2015 has many examples and demos of how to use all the I/O of the PFC200 (KBUS, CAN, MODBUS, PROFIBUS, as well as the switches and LEDs on the PFC200 directly). Sources for the kernel, all kernel drivers, and the other open source components are provided and compiled in the Board Support Package (BSP).
However, the sources for the libraries and tools developed from scratch by WAGO, which are not based on GPL/open source code, are not provided. These include the Application Device Interface (ADI)/Device Abstraction Layer (DAL) libraries, which handle CANopen, PROFIBUS slave and KBus (which is used by all PLC I/O modules connected to the main PLC unit).
While CANopen uses the standard Linux SocketCAN API to talk to the kernel, so you could just write a normal SocketCAN program using the provided libsocketcan, the KBus API is a WAGO-specific invention; there you would have to do some reverse engineering if you did not want to use WAGO's DAL for accessing all the electrical I/O of the PLC. But the DAL is documented, and examples of how to use it are provided in the BSP.
If you use CODESYS, however, there is a "codesys_lib_demo-0.1" example library which shows how to provide a library for CODESYS to use.
Outdated Answer
This answer was very specific to circumstances in 2014 and 2015. As of 2016, it contains incorrect information. Still going to leave as-is for now to provide background.
The quick answer you probably don't want
You could very reasonably write code using Codesys that puts together a JSON packet and sends it off to a server elsewhere. JSON is just text, and Codesys can manipulate text in a fashion very similar to C. And there are many Ethernet protocols available from within Codesys using add-on libraries provided by Wago.
Now the long answer
First some background
Since you seem to be new to Wago and the philosophy of Codesys in general... a short history.
Codesys is used to build and deploy hard real-time execution environments, and it is important to understand that utilizing libraries without fully understanding the consequences can destabilize the performance of the entire system (bringing Codesys to its knees and throwing watchdog errors in the program). Remember, many PLCs are controlling equipment that could kill someone if it ever crashed.
Wago is fond of using Linux to provide the preemptive RT kernel for the low-level task scheduling, and of configuring Codesys to utilize many of the standard C libraries that often accompany Linux. Wago has been doing this for quite some time, but they would never allow you to peel back the covers without going through Codesys (which means using IEC 61131 languages, which do not include C++), and this was for your own safety (and their product image). If you wanted the power of Linux on a Wago, you had to get a special PLC with a completely naked OS, practically no manual or support, and forfeit the entire Codesys runtime.
The new PFC200s have much more RAM and storage available than previous models, allowing more of the standard Linux userland stack (ssh, ftp, http, ...) to be included without compromising the Codesys runtime, and they advertise this. BUT... they are still keeping a lid on the compilation tools and the header files required to compile and link against Codesys libraries or to access specialized hardware (the Wago KBUS, which interfaces your I/O modules).
The Synopsis
Could you hack a PFC200 and get custom binaries executed? Probably yes.
Would it be easy or quick? No.
Will this change in the future? Maybe. Remember that PFC200 is fairly new in North America.
Things you may not know
Codesys does not necessarily know or care about Wago. You can get target platforms for Codesys that target Intel processors running a Linux OS. Codesys DOES SUPPORT accessing external libraries (communication in the reverse direction is dangerous), but it often expects a C-style interface, and you can only access those libraries by defining C headers that Codesys will analyze, so you may need to do some magic to get C++ working seamlessly. What you can do is create a segment of shared memory that both C++ and Codesys access, and that is how they pass information (synchronization is another problem).
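To make the shared-memory idea concrete, here is a minimal sketch of the Linux/C++ side, assuming a POSIX system; the segment name and struct layout are hypothetical, the Codesys side would have to map the same region and agree on the exact layout, and synchronization is deliberately left out (as noted, that is a separate problem):

```cpp
#include <sys/mman.h>   // shm_open, mmap
#include <fcntl.h>      // O_CREAT, O_RDWR
#include <unistd.h>     // ftruncate, close
#include <cstdio>       // perror

struct SharedData {      // layout must match what the Codesys side expects
    int inputs_high;     // written by the PLC program
    int command;         // written by the Linux application
};

int main() {
    // Create (or open) a named shared-memory segment.
    // Build with -lrt on older glibc versions.
    int fd = shm_open("/plc_bridge", O_CREAT | O_RDWR, 0666);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, sizeof(SharedData)) != 0) { perror("ftruncate"); return 1; }

    void* mem = mmap(nullptr, sizeof(SharedData),
                     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }
    auto* data = static_cast<SharedData*>(mem);

    if (data->inputs_high)    // react to a value set by the PLC side
        data->command = 1;    // request an output change (PLC side reads this)

    munmap(mem, sizeof(SharedData));
    close(fd);
}
```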
You can get an open Wago PLC running Codesys on Linux. Wago's IPCs are made specifically for this purpose. They have more power, memory, and communication capabilities in general, but they cost more than double your typical Wago PLC.
If you feel like toying with the idea of hacking a Wago, you will need to tear apart the manuals for Codesys (it has its own), the manuals for the Wago IPCs, and already be familiar with Linux-style inter-process communication and/or dynamic libraries.
Also, there is an older Wago PLC (750-8??) that had the naked Linux on it. It also has a very good manual on how to access the Wago hardware using the supplied headers.
You must first understand how Codesys expects to talk to its target operating system. Then you work backwards to make it talk to Wago-specific libraries living on that operating system. You must be careful not to hijack Codesys.
Your extra C++ libraries should assist Codesys, not take it over. For instance, host an SQLite database on the same device and use C++ to manage the database, providing a very simple interface that Codesys can utilize. All Codesys would do is call a function and pass some values; your C++ would actually build the SQL query and issue it to the database (Codesys doesn't need to know why or how this is happening).
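A rough sketch of what such a helper could look like; the SQLite calls are the library's real C API, but the function name, table, and the actual Codesys binding are assumptions:

```cpp
#include <sqlite3.h>
#include <cstdio>

// A C-style entry point that Codesys could bind to. Codesys just calls
// log_value(path, x); the SQL stays entirely on the C++ side.
extern "C" int log_value(const char* db_path, int value) {
    sqlite3* db = nullptr;
    if (sqlite3_open(db_path, &db) != SQLITE_OK) return -1;

    char sql[128];
    std::snprintf(sql, sizeof(sql),
                  "INSERT INTO readings(value) VALUES(%d);", value);

    char* err = nullptr;
    int rc = sqlite3_exec(db, sql, nullptr, nullptr, &err);
    if (err) sqlite3_free(err);
    sqlite3_close(db);
    return rc == SQLITE_OK ? 0 : -1;
}
```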
I hope at least one paragraph is helpful in some way.

COM in the non-Windows world?

Hope this question isn't going to be too vague. Reading through the COM spec and Don Box's Essential COM book, there is plenty of talk of the "problems that COM solves" - and they all sound important, relevant and current.
So how are the problems that COM addresses dealt with on other systems (linux, unix, OSX, android)? I'm thinking of things like:
binary compatibility across compilers and compiler versions
binary component reuse
compiling an application such that it has run-time dependencies rather than load-time ones (so that it runs even when a dependency is missing)
access to library functionality from languages other than the library's own
reasonably low-overhead remote procedure calls to components loaded in the address space of a different process
etc (I'm sure the list goes on)
I'm basically just trying to understand why, for instance, CORBA isn't a thing on Linux the way COM is a thing on Windows (if that makes any sense). Does software development on Linux perhaps subscribe to a different philosophy than the component-based model proposed by COM?
And finally, is COM a C/C++ thing? Several times I've come across comments from people saying COM is made "obsolete" by .NET but without really explaining what they meant by that.
For the remainder of this post, I'm going to use Linux as an example of open-source software. Where I mention "Linux" it's mostly a short/simple way to refer to open source software in general though, not anything specific to Linux.
COM vs. .NET
COM isn't actually restricted to C and C++, and .NET doesn't actually replace COM. However, .NET does provide alternatives to COM for some situations. One common use of COM is to provide controls (ActiveX controls). .NET provides/supports its own protocol for controls that allows somebody to write a control in one .NET language, and use that control from any other .NET language--more or less the same sort of thing that COM provides outside the .NET world.
Likewise, .NET provides Windows Communication Foundation (WCF). WCF implements SOAP (Simple Object Access Protocol), which may have started out simple, but grew into something a lot less simple at best. In any case, WCF provides many of the same kinds of capabilities as COM. Although WCF itself is specific to .NET, it implements SOAP, and a SOAP server built using WCF can talk to one implemented without WCF (and vice versa). Since you mention overhead, it's probably worth mentioning that WCF/SOAP tend to add more overhead than COM (I've seen anywhere from nearly equal to about double the overhead, depending on the situation).
Differences in Requirements
For Linux, the first two points tend to have relatively low relevance. Most software is open source, and many users are accustomed to building from source in any case. For such users, binary compatibility/reuse is of little or no consequence (in fact, quite a few users are likely to reject all software that isn't distributed in source code form). Although binaries are commonly distributed (e.g., with apt-get, yum, etc.), they're basically just caching a binary built for a specific system. That is, on Windows you might have a single binary for use on anything from Windows XP up through Windows 10, but if you use apt-get on, say, Ubuntu 18.04, you're installing a binary built specifically for Ubuntu 18.04, not one that tries to be compatible with everything back to Ubuntu 10 (or whatever).
Being able to load and run (with reduced capabilities) when a component is missing is also most often a closed-source problem. Closed source software typically has several versions with varying capabilities to support different prices. It's convenient for the vendor to be able to build one version of the main application, and give varying levels of functionality depending on which other components are supplied/omitted.
That's primarily to support different price levels though. When the software is free, there's only one price and one version: the awesome edition.
Access to library functionality between languages again tends to be based more on source code instead of a binary interface, such as using SWIG to allow use of C or C++ source code from languages like Python and Ruby. Again, COM is basically curing a problem that arises primarily from lack of source code; when using open source software, the problem simply doesn't arise to start with.
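For a flavor of the source-level approach, here is a minimal SWIG sketch; the file names and the function are invented for illustration, and only the %-directives are SWIG's real syntax:

```
/* geometry.h -- ordinary C++ we want to use from, say, Python */
double circle_area(double radius);

/* geometry.i -- a minimal SWIG interface file for it. Running
 *   swig -c++ -python geometry.i
 * generates the Python binding directly from these declarations;
 * no COM-style binary interface or neutral IDL is involved. */
%module geometry
%{
#include "geometry.h"
%}
double circle_area(double radius);
```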
Low-overhead RPC to code in other processes again seems to stem primarily from closed source software. When/if you want Microsoft Excel to be able to use some internal "stuff" in, say, Adobe Photoshop, you use COM to let them communicate. That adds run-time overhead and extra complexity, but when one of the pieces of code is owned by Microsoft and the other by Adobe, it's pretty much what you're stuck with.
Source Code Level Sharing
In open source software, however, if project A has some functionality that's useful in project B, what you're likely to see is (at most) a fork of project A to turn that functionality into a library, which is then linked into both the remainder of project A and into Project B, and quite possibly projects C, D, and E as well--all without imposing the overhead of COM, cross-procedure RPC, etc.
Now, don't get me wrong: I'm not trying to act as a spokesperson for open source software, nor to say that closed source is terrible and open source is always dramatically superior. What I am saying is that COM is defined primarily at a binary level, but for open source software, people tend to deal more with source code instead.
Of course SWIG is only one example among several of tools that support cross-language development at a source-code level. While SWIG is widely used, COM is different from it in one rather crucial way: with COM, you define an interface in a single, neutral language, and then generate a set of language bindings (proxies and stubs) that fit that interface. This is rather different from SWIG, where you're matching directly from one source to one target language (e.g., bindings to use a C library from Python).
Binary Communication
There are still cases where it's useful to have at least some capabilities similar to those provided by COM. These have led to open-source systems that resemble COM to a rather greater degree. For example, a number of open-source desktop environments use/implement D-bus. Where COM is mostly an RPC kind of thing, D-bus is mostly an agreed-upon way of sending messages between components.
D-bus does, however, specify things it calls objects. Its objects can have methods, to which you can send signals. Although D-bus itself defines this primarily in terms of a messaging protocol, it's fairly trivial to write proxy objects that make invoking a method on a remote object look pretty much like invoking one on a local object. The big difference is that COM has a "compiler" that can take a specification of the protocol, and automatically generate those proxies for you (and corresponding stubs in the far end to receive the message, and invoke the proper function based on the message it received). That's not part of D-bus itself, but people have written tools to take (for example) an interface specification and automatically generate proxies/stubs from that specification.
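As a rough illustration, a hand-written proxy of the kind just described might look like this using the low-level libdbus C API; the service, object path, interface, and method names are made up for the example:

```cpp
#include <dbus/dbus.h>
#include <string>
#include <stdexcept>

// A hand-rolled proxy: to the caller this is an ordinary function,
// but underneath it packs a D-Bus method call, sends it, and unpacks
// the reply.
std::string get_status(DBusConnection* conn) {
    DBusError err;
    dbus_error_init(&err);

    DBusMessage* msg = dbus_message_new_method_call(
        "org.example.Service",     // destination (hypothetical)
        "/org/example/Object",     // object path
        "org.example.Interface",   // interface
        "GetStatus");              // method
    if (!msg) throw std::runtime_error("out of memory");

    DBusMessage* reply = dbus_connection_send_with_reply_and_block(
        conn, msg, -1 /* default timeout */, &err);
    dbus_message_unref(msg);
    if (!reply) throw std::runtime_error(err.message ? err.message : "call failed");

    const char* status = nullptr;
    dbus_message_get_args(reply, &err,
                          DBUS_TYPE_STRING, &status, DBUS_TYPE_INVALID);
    std::string result = status ? status : "";
    dbus_message_unref(reply);
    return result;                 // caller sees an ordinary function call
}
```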
As such, although the two aren't exactly identical, there's enough similarity that D-bus can be (and often is) used for many of the same sorts of things as COM.
Systems Similar to DCOM
COM also allows you to build distributed systems using DCOM (Distributed COM). That is, a system where you invoke a method on one machine, but (at least potentially) execute that invoked method on another machine. This adds more overhead, but since (as pointed out above with respect to D-bus) RPC is basically communication with proxies/stubs attached to the ends, it's pretty easy to do the same thing in a distributed fashion. The difference in overhead, however, tends to lead to differences in how systems need to be designed to work well, though, so the practical advantage of using exactly the same system for distributed systems as local systems tends to be fairly minimal.
As such, the open source world provides tools for doing distributed RPC, but doesn't usually work hard at making them look the same as non-distributed systems. CORBA is well known, but generally viewed as large and complex, so (at least in my experience) current use is fairly minimal. Apache Thrift provides some of the same general type of capabilities, but in a rather simpler, lighter-weight fashion. In particular, where CORBA attempts to provide a complete set of tools for distributed computing (complete with everything from authentication to distributed timekeeping), Thrift follows the Unix philosophy much more closely, attempting to meet exactly one need: generate proxies and stubs from an interface definition (written in a neutral language). If you want to do those CORBA-like things with Thrift you undoubtedly can, but in the more typical case of building internal infrastructure where the caller and callee trust each other, you can avoid a lot of overhead and just get on with the business at hand. Likewise, Google's gRPC provides roughly the same sorts of capabilities as Thrift.
OS X Specific
Cocoa provides distributed objects that are fairly similar to COM. This is based on Objective-C though, and I believe it's now deprecated.
Apple also offers XPC. XPC is more about inter-process communication than RPC, so I'd consider it more directly comparable to D-bus than to COM. But, much like D-bus, it has a lot of the same basic capabilities as COM, but in different form that places more emphasis on communication, and less on making things look like local function calls (and many now prefer messaging to RPC anyway).
Summary
Open source software has enough different factors in its design that there's less demand for something providing the same mix of capabilities as Microsoft's COM provides on Windows. COM is largely a single tool that tries to meet all needs. In the open-source world, there's less drive to provide that single, all-encompassing solution, and more tendency toward a kit of tools, each doing one thing well, that can be put together into a solution for a specific need.
Being more commercially oriented, Apple OS X probably has what are (at least arguably) closer analogs to COM than most of the more purely open-source world.
A quick answer on the last question: COM is far from obsolete. Almost everything in the Microsoft world is COM-based, including the .NET engine (the CLR) and the new Windows Runtime in Windows 8.x.
Here is what Microsoft says about .NET in its latest C++ pages, Welcome Back to C++ (Modern C++):
C++ is experiencing a renaissance because power is king again. Languages like Java and C# are good when programmer productivity is important, but they show their limitations when power and performance are paramount. For high efficiency and power, especially on devices that have limited hardware, nothing beats modern C++.
PS: which is a bit of a shock for a developer who has invested more than 10 years on .NET :-)
In the Linux world, it is more common to develop components that are statically linked, or which run in separate processes and communicate by piping text (maybe JSON or XML) back and forth.
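A toy example of that piping style; the child command and its JSON output are invented for illustration:

```cpp
#include <cstdio>
#include <string>

int main() {
    // Run a (hypothetical) helper and read its JSON output from a pipe.
    FILE* child = popen("sensor-tool --json", "r");
    if (!child) return 1;

    std::string output;
    char buf[256];
    while (std::fgets(buf, sizeof(buf), child))
        output += buf;            // e.g. {"temp": 21.5}
    int status = pclose(child);

    std::printf("child exited %d, said: %s\n", status, output.c_str());
    // A real program would hand `output` to a JSON parser here.
}
```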
Some of this is due to tradition. UNIX developers have been doing stuff like this long before CORBA or COM existed. It's "the UNIX way".
As Jerry Coffin says in his answer, when you have the source code for everything, binary interfaces are not as important, and in fact just make everything more difficult.
COM was invented back when personal computers were a lot slower than they are today. In those days, loading components into your app's process space and invoking native code was often necessary to achieve reasonable performance. Now, parsing text and running interpreted scripts aren't things to be afraid of.
CORBA never really caught on in the open-source world because the initial implementations were proprietary and expensive, and by the time high-quality free implementations were available, the spec was so complicated that nobody wanted to use it if they weren't required to do so.
To a large extent, the problems solved by COM are simply ignored on Linux.
It is true that binary compatibility is less important when you have the source code available. However, you still have to worry about modularisation and versioning. If two different programs depend on different versions of the same library, you need to somehow support that.
Then there is the case of the same program using different versions of the same library. This is often useful when working on large legacy programs, where upgrading everything can be prohibitively expensive but you would like to use new features anyway. With COM, the old parts of the program can just be left alone, since new library versions can more easily be made backwards compatible.
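To make that concrete, here is a plain C++ sketch of the COM-style versioning pattern, where a new interface version extends the frozen old one so existing callers are untouched; all names are invented:

```cpp
// Version 1 of a published interface: once clients depend on it,
// it is frozen.
struct IParserV1 {
    virtual ~IParserV1() = default;
    virtual bool parse(const char* text) = 0;
};

// Version 2 extends version 1 instead of changing it, so code compiled
// against IParserV1 keeps working with the new library.
struct IParserV2 : IParserV1 {
    virtual bool parse_with_options(const char* text, int options) = 0;
};

struct Parser : IParserV2 {
    bool parse(const char* text) override {
        return parse_with_options(text, 0);   // old entry point still works
    }
    bool parse_with_options(const char*, int) override {
        return true;                          // real parsing elided
    }
};
```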
In addition, having to compile from source instead of binary compatibility is a huge hassle. Especially if you are actually developing software, since binary incompatibility means you have to recompile much more often. If one tiny part changes in a large C++ program, you may have to wait for a 30 minute recompile. If the different pieces are compatible, only the part which changed has to be recompiled.
COM, and DCOM in particular, have been around in Windows for some considerable time now, and naturally Windows developers have made use of this powerful framework.
We are now in the cross-platform age, and when porting such applications to other platforms we are faced with challenges which in many cases can be mitigated or eliminated altogether, unless the application we are porting is more than just one simple standalone app.
If you're dealing with a whole suite of modules running on different machines, all communicating using Windows-specific technologies such as DCE/RPC, DCOM or even Windows named pipes, then your job just became an order of magnitude harder.
DCE/RPC, DCOM and Windows named pipes are all very Windows-specific, non-portable and, of course, subject to Windows security access control.
For instance, anyone familiar with OPC DA (an industrial automation protocol based on DCOM, still very much in use but now superseded by OPC UA, which avoids DCOM) will know that there are no elegant solutions here if the client (or server) needs to be available for Linux!
Sure, there appear to be some technical hurdles here, given that the MS code is not in the public domain, but projects such as Wine have a partly-OK DCE/RPC implementation and MS does publish some of the protocol docs. Even so, try searching and you will probably find little information and few products, open source or otherwise, to help you.
Perhaps the lack of open source or affordable options here is due more to legal concerns; I wonder!
Some simpler solutions involve installing a "gateway service" on the Windows machines to allow an alternative means of access to the DCOM interfaces on those machines. This is fine if the Windows machine does not belong to an unwilling third party, which unfortunately is sometimes the case! "Just chuck another Windows machine in the middle as a gateway" is the usual, global-warming-enhancing solution to that problem.
I would conclude that Linux-to-Windows DCOM interoperability is certainly not impossible, but it does appear to be a topic that few are interested in talking about unless you get your wallet out!

what are the disadvantages of using a cross-platform framework to develop for one platform?

Lately I've been contemplating whether I should start studying another framework, since I only have a Windows machine and I don't intend to make cross-platform software anytime soon. So, to help me with that decision...
Is there any disadvantage to using a cross-platform framework when I don't intend to develop cross-platform? Intuitively I would say that a framework specialized for a certain platform would perform better in said platform than a cross-platform framework. But I am just assuming that.
Please enumerate frameworks and libraries that I can start studying for rapid application development on Windows using C++. One with lots of documentation is preferred. I would appreciate it if you included a link that can help me get started.
Is there any disadvantage to using a cross-platform framework when I don't intend to develop cross-platform?
It depends on the framework. Most frameworks limit themselves to functionality which is available across all platforms, which may limit you somewhat. You may also not be able to take advantage of the best features of a given platform or the best development environment on that platform.
Please enumerate frameworks and libraries that I can start studying for rapid application development on Windows using C++.
A good option here is Qt. It provides a very nice C++ based framework for Windows and other platforms. If you want Windows only, there are other options, including the Windows Runtime via C++ (for Windows 8 development), or the Microsoft Foundation Classes.
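For a sense of what Qt code looks like, here is a minimal but complete Qt Widgets program (build it against the Widgets module with qmake or CMake):

```cpp
#include <QApplication>
#include <QPushButton>

int main(int argc, char* argv[]) {
    QApplication app(argc, argv);      // the framework owns the event loop
    QPushButton button("Hello, Qt!");
    button.show();
    return app.exec();                 // hand control over to Qt
}
```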
By using a cross-platform framework, you will miss out on platform-specific frills, like programmatic control over Windows 7 Jump Lists. Because of these things, it won't quite feel like a native application, but like a port of an application written for another OS. In many cases this doesn't matter.
A modern C++ framework built using templates isn't going to perform any worse simply because it's cross-platform. You'll simply miss out on features that don't exist on multiple platforms.
Generally, the issues with cross-platform frameworks are framework-specific.
e.g. wxWidgets: fast, but not too many GUI classes available. The documentation is not excellent, but it is updated properly.
GNOME (GTK): widely used, but requires a heavy runtime deployment and is a bit heavier in terms of memory usage.
Both of these are UI frameworks; both are GPL, and hence you can use them.
Nokia Qt: an excellent cross-platform framework, and not just yet another UI toolkit but a complete framework for cross-platform development. However, one problem with Qt is the meta-object compiler (moc). It is a kind of language extension.
I would recommend that you pick Qt as your next framework. It is being actively developed, is lightweight, was recently open-sourced, and is available under the LGPL licence.
Just to second the other answers, Qt is a great framework (and is hopefully going to survive Nokia).
Cross-platform frameworks have two main disadvantages: performance (often they add another layer that is not necessary on the native platform) and, of course, being cross-platform, i.e. often not supporting functionality that is specific to your target platform. With Qt, I never saw performance as a problem. Qt also brings in so many libraries that actually extend what you can do natively on Windows that the second point is not really a disadvantage here either.
The only real problem with Qt is in fact the meta-object compiler (moc). In the beginning you will stumble across some strange compiler errors that ultimately come from the moc. Just remember this and google the errors; you will get used to it.
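For reference, this is the kind of class the moc processes; forgetting to run moc over such a header is the classic source of those strange errors (typically an "undefined reference to vtable" message at link time):

```cpp
#include <QObject>

class Counter : public QObject {
    Q_OBJECT                          // this is what the moc processes
public:
    int value() const { return m_value; }

public slots:
    void setValue(int v) {
        if (v != m_value) {
            m_value = v;
            emit valueChanged(v);     // 'emit' and signals are moc territory
        }
    }

signals:
    void valueChanged(int newValue);  // declared here, implemented by moc

private:
    int m_value = 0;
};
```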

Can a desktop app be developed in C++ that would work on both Windows and Mac OS?

I am trying to save some money and develop a desktop application that would work on both Windows and Mac OS. Is this possible? Can we write it in C++ and then, with a few fixes and tweaks, reuse the same app on both OSes?
Yes this is possible. Some code may differ as there are differences in the operating systems.
You should use a common library for GUI such as Qt: http://qt.nokia.com/
It is worth noting that Qt brings much more cross-platform features to the table, so familiarize yourself with it.
There will be some differences to handle such as
File paths (C: doesn't exist on Mac, \ and / are path separators, etc)
Line endings differ (CRLF on Windows, LF on Mac)
You need to compile for two different target CPUs. Most C++ compilers can do this.
The same code can be used for both; you just define regions to be (or not be) included depending on which OS the compiler is targeting.
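Those "regions" are just conditional compilation. A tiny sketch (the paths are purely illustrative):

```cpp
#include <string>

std::string config_dir() {
#ifdef _WIN32
    return "C:\\Users\\me\\AppData\\MyApp";                // Windows build
#elif defined(__APPLE__)
    return "/Users/me/Library/Application Support/MyApp";  // Mac build
#else
    return "/home/me/.config/myapp";                       // everything else
#endif
}
```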
Just google a cross-OS development guide; looooots of people have done this before. :)
It may not be relevant, but it is still worth noting (because you said "save money") that both Java and the Mono Project (.NET, Qt) allow you to write cross-platform applications with limited knowledge of the underlying platform. They are higher-level languages, which in general are considered a time saver (but that is a separate discussion).
Expanding on my comment:
Don't.
Write your library code in portable C++, putting as much of the functionality as possible into the library, and make sure you study the platform-specific APIs (probably Cocoa and .NET) as you go, so the interfaces to the library are at least moderately suitable for either.
Then wrap your library in native binaries, paying attention to how applications are supposed to look on each platform, as well as how they feel.
Building an application that looks like an X11 application and does everything in a manner somewhere between a Gnome application, a KDE application, an OS X application and a Windows application will really hurt user experience.
Badly.
wxWidgets
This question gets asked a lot.
Coming in late to the party here!
I'm in the last stages of finishing a cross-platform, commercial application (OS X and Windows right now, conceivably Linux or iOS later).
We're using an open-source, cross-platform C++ development library called Juce, and I can't speak highly enough of it. It's extremely full-featured, the code is solid and high-quality, and you can apparently build for Windows, OS X, Linux, iOS and Android from the same codebase (we've only tried the first two, but other developers are apparently reporting success on the other platforms).
What's particularly nice is that lead developer is very active on his bulletin boards and extremely responsive to trouble reports.
Also, you can license the library under GPL, and they also have a very reasonably priced commercial license.
Juce is very popular amongst people doing digital audio applications; indeed, to the best of my knowledge many or perhaps most of the top commercial digital audio apps use this system. But it's very full-featured and extremely fast, and it should be considered a top candidate for any cross-platform development project.

Testing framework for functional/system testing for C/C++?

For C++, there are lots of good unit test frameworks out there, but I couldn't find a good one for functional testing. By functional testing I mean stuff that touches the disk, requires the whole application to be in place, etc.
Case in point: what framework helps with testing things like whether your I/O works? I've got a hand-rolled system in place which creates temporary folders and copies around a bunch of data, so the tests are always run in the same environment. But before I spend more time on my custom framework: is there a good one out there already?
I wrote one from scratch three times already: twice for testing C++ apps that talked to exchanges using the FIX protocol, and once for a GUI app.
The problem is, you need to emulate the outside world to do proper system testing. I don't mean "outside of your code" - outside of your application. This involves emulating end users, outside entities, the Internet and so on.
I usually use Perl to write my system-testing framework and tests, mostly because it's good at accessing all sorts of OS facilities, and regexes are first-class citizens.
Some tips: make sure your logs are easy to parse, detailed but not too verbose. Have a sane default configuration. Make it easy to "reset" the application - you need to do it after each test.
The approach I usually use is to have some sort of "adapter" that turns the application's communications with the outside world into stdin/stdout of some executable. Then I build a perl framework on top of that, and then the test cases use the framework.
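A toy version of such an adapter, reduced to its essence in C++; the command vocabulary is invented:

```cpp
#include <iostream>
#include <string>

int main() {
    // All external communication is funneled through stdin/stdout so a
    // scripted harness (Perl or anything else) can drive the application.
    std::string line;
    while (std::getline(std::cin, line)) {
        if (line == "PING")
            std::cout << "PONG\n";                          // liveness check
        else if (line.rfind("ORDER ", 0) == 0)
            std::cout << "ACK " << line.substr(6) << "\n";  // echo the order id
        else
            std::cout << "ERR unknown command\n";
        std::cout.flush();                // harness reads line by line
    }
}
```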
Below I list a couple of tools and larger testing applications of which I am aware. If you provide more information on your platform (OS, etc.), we can probably give better answers.
For part of what you require, Microsoft provides the Application Verifier:
Application Verifier (AppVerifier) is a runtime verification tool used in testing applications for compatibility with Microsoft Windows XP. This tool can be used to test for a wide variety of known compatibility issues while the application is running. This article details the steps for using AppVerifier as an effective addition to the application development and testing cycles.
Application Verifier can be useful for testing out low memory conditions, other low resources, and other API usage.
Another part of the puzzle, is the Microsoft Detours package, which can be used to replace API calls with your own code (useful for say, returning error codes for tests that are hard to set up).
Detours is a library for instrumenting arbitrary Win32 functions on x86, x64, and IA64 machines. Detours intercepts Win32 functions by re-writing the in-memory code for target functions. The Detours package also contains utilities to attach arbitrary DLLs and data segments (called payloads) to any Win32 binary.
There are other, larger (and more expensive) comprehensive packages available too. Borland makes Silk.
AutomatedQA makes TestComplete. The selection of one of these tools would come down to the needs of your applications.
IBM/Rational provides the Rational Functional Tester, which is available across many platforms, and feature-rich.
Hi, I am not sure if the framework we have helps in your situation, but it hooks into Rational Functional Tester and allows the user to create various datasets to be attached to different tests, to change environments without changing the scripting, and to reuse the automation in an efficient way.
Have a look if you're interested:
http://www.testpro.com.au/Test-Automation-Framework.html