Running ELF binaries on ReactOS - C++

Please be patient in answering as I am new to all this and want to get my basics 100 percent right. I am a Mechanical Engineer, so do not be harsh. I am learning about some very basic low-level stuff and was interested in understanding a concept related to compiler backends. The C/C++ compiler output is machine code tailored for the target computer architecture. This suggests it should be the same on Windows and Linux if both run on the same hardware, say, an i7 processor. But there is another layer of difference in the form of the binary format: we have ELF (Executable and Linkable Format) on Linux and PE/COFF (Portable Executable) on Windows.
Thus, I feel, the compilers on Linux and Windows have backends that work differently and emit binaries in ELF or PE/COFF format respectively.
ReactOS is a clone of Windows and is binary compatible to an extent with Windows.
Is it theoretically possible to have a LOADER in ReactOS that understands ELF and loads it properly?
I understand that we need to have a layer of software that maps the Linux APIs to ReactOS APIs. If such a mapping layer exists, does my question make sense?

A loader is not enough.
Operating systems have their own system call interfaces. I don't know too much about the Linux and Windows binary APIs; the last time I used system calls directly was on MS-DOS.
In MS-DOS, you call a DOS function by loading the function code into the AH register, then issuing INT 21H. Register AL is often used for a sub-function or a primary parameter. E.g. I can recall how to exit a program:
MOV AX,4C01H ; function AH = $4C (exit), error code AL = 1
INT 21H
; execution never reaches here
So, other operating systems provide interfaces in other fashions. E.g. AmigaDOS keeps exec.library's address at the absolute address 4 (yep, $00000004), and library functions are accessed through a jump table located at negative offsets from the library's "base" address (-4, -8, etc.). Pointers to other libraries can be obtained from exec.library, using its open function.
Okay, MS-DOS and AmigaDOS run on different architectures, but together they are a good example of how operating system calls may differ: software interrupts vs. library addresses provided by the first library.
Sometimes the differences work out luckily. When the two operating systems' calls do not interfere, it's possible to write a wrapper which receives the alien operating system's calls and transforms them into host operating system calls. It would be perfect if operating system APIs differed only in the order of system call parameters - but the situation is more difficult. Simpler functions can be mapped to the other OS's flavour, but more complex functions - with callbacks! - are harder. Wrappers may have to emulate not only the functions, but even the bugs of the original operating system.
Anyway, there is some good stuff in this genre.
A good example is Cygwin, which lets you run Linux programs under Win32. When I last used it, there were no problems running any command-line stuff, even with threads, networking, etc. EDITED: it requires re-compiling and libs, as #fortran says.
For Linux, WINE is a nice effort to run Win32 apps. There are even official Linux versions of commercial software which use WINE! If your program doesn't utilize the latest Windows API calls, WINE should work.
As Linux and BSD are both POSIX compatible operating systems, it's no surprise, that such a thing, like Linux Compatibility Layer for BSD exists.

Related

Can an x86 executable run on any x86 platform given the right runtime libraries?

While I did find similar-ish questions, they did not really answer this specific question.
Can a compiled x86 executable run on any x86 platform given the right runtime libraries?
Say I make a C++17 program without dependencies, could I run this program on Windows 95 or is there some sort of support required by the OS?
I also heard that RTTI (in the case of C++) may not be supported everywhere, is this only due to the processor having to support this feature or does the OS play a role in that? This would imply that new features would maybe not be supported by, e.g., Windows 95.
Edit
What I'm after is whether an executable (e.g., x86) can run on any platform supporting that instruction set, or whether certain features, like RTTI, need specific OS support and thus are not available on all platforms supporting that instruction set.
In general you cannot, even if you restricted your universe to x86 hardware - at least not without some conversion of the binary or some platform-specific "loader" for each target platform.
For example, a typical binary emitted by a C or C++ compiler1 will have some minimal dependency on the OS and runtime, for example to load and do runtime linking on the executable. Different platforms have different binary formats (such as PE/COFF on Windows or ELF across various UNIX flavors and Linux) and there isn't any common "x86 format" that would work directly on any platform.
Furthermore, any non-trivial program - and in many cases any program, trivial or not - is going to have platform-specific dependencies on the language runtime. For example, even an empty main() function often requires runtime support to get from the OS-defined "start" method to the main method, and without unusual build options there are often calls at startup to initialize parts of the standard library.
Finally, as you alluded to with your comment about RTTI, various language or platform features may essentially be compiled into the binary and require OS support. RTTI probably doesn't obviously fall into this category, but things like position-independent code, thread-local storage and stack-unwinding support for exception handling often do. The compiled x86 code that uses such features may be quite different on different platforms since it needs to build in assumptions of how those work.
In principle, however, you could imagine this working, at least for some limited subset of programs. For example, while the various executable formats are in practice incompatible, they aren't that different and tools exist to convert between them. So you could certainly implement a minimal runtime on your platform of interest that takes an x86 executable compiled to whatever fixed format you choose and converts at runtime to the local format and runs it.
Beyond that, actually trying to map even standard library calls would be quite difficult since different operating systems use different calling conventions, but it could be possible for "C" functions using some thunks to put things in the right place. C++ is pretty much right out because the ABI there is much more complex, compiler-and-platform specific, and much of the implementation detail is already compiled in for stuff implemented in headers.
In fact, the idea that (a subset of) x86 might provide an interesting intermediate language for cross-platform execution is exactly the idea exploited in Google's NaCl project. Essentially, the NaCl runtime provides platform-agnostic "loading" capabilities which allow x86 code to run more-or-less natively on various platforms. Subsequently other native formats such as ARM were added, but it started as an x86 sandbox. A large part of the project deals with running code that is provably safe (i.e., sandboxed) - but it shows that with some infrastructure you can write "portable" x86. A standard C or C++ compiler isn't going to emit NaCl-compatible code directly, however.
1 Really, any compiler that compiles to a native format. I just call out C and C++ since they seem like the ones you are interested in and are widely familiar.
This question misses the point. C++ is, first and foremost, a language to describe the behaviour of a computer program.
Using a compiler to create a native binary executable file to produce that behaviour on an actual computer is the typical way of using the language.
Once you have the binary file, all traces of the source code used to produce it are gone (unless you have built a special version for debugging purposes). The compatibility of the binary file with specific hardware or operating systems is beyond the scope of C++ itself.
The same is true for C, or any other programming language which typically gets compiled to native binary code.
Or, to answer the question more briefly:
Can compiled C++/C code (i.e. an executable) run anywhere given the right runtime libraries?
No.
Can a compiled x86 executable run anywhere given the right runtime libraries?
No, it will only work on x86 hardware, or other hardware (or software, such as a virtual machine) that emulates the x86 instruction set (such as an x64 CPU). In practice, that's very likely to be a far cry from "anywhere."
And even if the hardware matches, an x86 executable will have operating system dependencies. A Windows binary won't run on Linux, even if the hardware is the same. There are various strategies that can make things like this "work" in some situations; Microsoft's Windows Subsystem for Linux is one recent example, which allows Linux binaries to run unchanged on Windows. Again, a far cry from "anywhere."

How does a language expand itself?

I am learning C++ and I've just started learning about some of Qt's capabilities to code GUI programs. I asked myself the following question:
How does C++, which previously had no syntax capable of asking the OS for a window or a way to communicate through networks (with APIs which I don't completely understand either, I admit) suddenly get such capabilities through libraries written in C++ themselves? It all seems terribly circular to me. What C++ instructions could you possibly come up with in those libraries?
I realize this question might seem trivial to an experienced software developer but I've been researching for hours without finding any direct response. It's gotten to the point where I can't follow the tutorial about Qt because the existence of libraries is incomprehensible to me.
A computer is like an onion: it has many, many layers, from the inner core of pure hardware to the outermost application layer. Each layer exposes parts of itself to the next outer layer, so that the outer layer may use some of the inner layer's functionality.
In the case of e.g. Windows, the operating system exposes the so-called WIN32 API to applications running on Windows. The Qt library uses that API to provide its own API to applications using Qt. You use Qt, Qt uses WIN32, WIN32 uses lower levels of the Windows operating system, and so on until it's electrical signals in the hardware.
You're right that in general, libraries cannot make anything possible that isn't already possible.
But the libraries don't have to be written in C++ in order to be usable by a C++ program. Even if they are written in C++, they may internally use other libraries not written in C++. So the fact that C++ didn't provide any way to do it doesn't prevent it from being added, so long as there is some way to do it outside of C++.
At a quite low level, some functions called by C++ (or by C) will be written in assembly, and the assembly contains the required instructions to do whatever isn't possible (or isn't easy) in C++, for example to call a system function. At that point, that system call can do anything your computer is capable of, simply because there's nothing stopping it.
C and C++ have 2 properties that allow all this extensibility that the OP is talking about.
C and C++ can access memory
C and C++ can call assembly code for instructions not in the C or C++ language.
In the kernel or in a basic non-protected mode platform, peripherals like the serial port or disk drive are mapped into the memory map in the same way as RAM is. Memory is a series of switches and flipping the switches of the peripheral (like a serial port or disk driver) gets your peripheral to do useful things.
In a protected-mode operating system, when one wants to access the kernel from userspace (say, to write to the file system or to draw a pixel on the screen), one needs to make a system call. C does not have an instruction for making a system call, but C can call assembler code which triggers the correct system call. This is what allows one's C code to talk to the kernel.
In order to make programming a particular platform easier, system calls are wrapped in more complex functions which may perform some useful function within one's own program. One is free to call the system calls directly (using assembler) but it is probably easier to just make use of one of the wrapper functions that the platform supplies.
There is another level of API that is a lot more useful than a raw system call. Take for example malloc. Not only will this call the system to obtain large blocks of memory, but it will also manage that memory, doing all the bookkeeping on what is taking place.
Win32 APIs wrap some graphics functionality with a common platform widget set. Qt takes this a bit further by wrapping the Win32 (or X Windows) API in a cross-platform way.
Fundamentally, though, a C compiler turns C code into machine code, and since the computer is designed to use machine code, you should expect C to be able to accomplish the lion's share of what a computer can do. All the wrapper libraries do is the heavy lifting for you, so that you don't have to.
Languages (like C++11) are specifications, on paper, usually written in English. Look inside the latest C++11 draft (or buy the costly final spec from your ISO vendor).
You generally use a computer with some language implementation. (You could in principle run a C++ program without any computer, e.g. using a bunch of human slaves interpreting it; that would be unethical and inefficient.)
Your C++ implementation generally works above some operating system and communicates with it (using some implementation-specific code, often in some system library). Generally that communication is done through system calls. Look for instance into syscalls(2) for a list of system calls available on the Linux kernel.
From the application's point of view, a syscall is an elementary machine instruction, like SYSCALL on x86-64, with some conventions (the ABI).
On my Linux desktop, the Qt libraries sit above the X11 client libraries, communicating with the X11 server Xorg through the X Window System protocols.
On Linux, use ldd on your executable to see the (long) list of dependencies on libraries. Use pmap on your running process to see which ones are "loaded" at runtime. BTW, on Linux your application is probably using only free software; you could study its source code (from Qt, to Xlib, libc, ... the kernel) to understand more of what is happening.
I think the concept you are missing is system calls. Each operating system provides an enormous amount of resources and functionality that you can tap into to do low-level operating system related things. Even when you call a regular library function, it is probably making a system call behind the scenes.
System calls are a low-level way of making use of the power of the operating system, but can be complex and cumbersome to use, so are often "wrapped" in APIs so that you don't have to deal with them directly. But underneath, just about anything you do that involves O/S related resources will use system calls, including printing, networking and sockets, etc.
In the case of Windows, Microsoft Windows has its GUI actually written into the kernel, so there are system calls for making windows, painting graphics, etc. In other operating systems the GUI may not be part of the kernel, in which case, as far as I know, there wouldn't be any system calls for GUI-related things, and you could only work at an even lower level with whatever low-level graphics- and input-related calls are available.
Good question. Every new C or C++ developer has this in mind. I am assuming a standard x86 machine for the rest of this post. If you are using the Microsoft C++ compiler, open your notepad and type this (name the file Test.c):
int main(int argc, char **argv)
{
    return 0;
}
And now compile this file (using the developer command prompt): cl Test.c /FaTest.asm
Now open Test.asm in your notepad. What you see is the translated code - C/C++ is translated to assembler. Do you get the hint?
_main PROC
push ebp
mov ebp, esp
xor eax, eax
pop ebp
ret 0
_main ENDP
C/C++ programs are designed to run on the metal. That means they have access to lower-level hardware, which makes it easier to exploit the hardware's capabilities. Say I am going to write a C library function getch() for an x86 machine.
Depending on the assembler, I would type something like this:
_getch proc
xor AH, AH
int 16h
;AL contains the keycode (AX is already there - so just return)
ret
I run it over with an assembler and generate a .OBJ - Name it getch.obj.
I then write a C program (I don't #include anything):
extern char getch();
int main(int argc, char **argv)
{
    getch();
    return 0;
}
Now name this file GetChTest.c. Compile this file, passing getch.obj along. (Or compile each to a .obj individually and LINK GetChTest.obj and getch.obj together to produce GetChTest.exe.)
Run GetChTest.exe and you will find that it waits for keyboard input.
C/C++ programming is not just about the language. To be a good C/C++ programmer you need a good understanding of the type of machine it runs on. You will need to know how memory management is handled, how the registers are structured, etc. You may not need all this information for regular programming, but it helps immensely. Apart from basic hardware knowledge, it certainly helps if you understand how the compiler works (i.e., how it translates), which can enable you to tweak your code as necessary. It is an interesting package!
Both languages support the __asm keyword, which means you can mix in assembly language code too. Learning C and C++ will make you a better-rounded programmer overall.
It is not necessary to always link with assembler. I mentioned it because I thought it would help you understand better. Mostly, such library calls make use of system calls / APIs provided by the operating system (the OS in turn does the hardware interaction).
How does C++ ... suddenly get such capabilities through libraries
written in C++ themselves ?
There's nothing magical about using other libraries. Libraries are simply big bags of functions that you can call.
Consider writing a function like this yourself:
void addExclamation(std::string &str)
{
    str.push_back('!');
}
Now if you include that file you can write addExclamation(myVeryOwnString);. Now you might ask, "how did C++ suddenly get the capability to add exclamation points to a string?" The answer is easy: you wrote a function to do that then you called it.
So to answer your question about how C++ can get capabilities to draw windows through libraries written in C++, the answer is the same. Someone else wrote function(s) to do that, and then compiled them and gave them to you in the form of a library.
The other questions answer how the window drawing actually works, but you sounded confused about how libraries work so I wanted to address the most fundamental part of your question.
The key is the possibility of the operating system to expose an API and a detailed description on how this API is to be used.
The operating system offers a set of APIs with calling conventions.
The calling convention defines the way parameters are passed into the API, how results are returned, and how the actual call is executed.
Operating systems and the compilers creating code for them play together nicely, so you usually don't have to think about it; you just use it.
There is no need for a special syntax for creating windows. All that is required is that the OS provides an API to create windows. Such an API consists of simple function calls for which C++ does provide syntax.
Furthermore C and C++ are so called systems programming languages and are able to access arbitrary pointers (which might be mapped to some device by the hardware). Additionally, it is also fairly simple to call functions defined in assembly, which allows the full range of operations the processor provides. Therefore it is possible to write an OS itself using C or C++ and a small amount of assembly.
It should also be mentioned that Qt is a bad example, as it uses a so-called meta-compiler to extend C++'s syntax. This is, however, not related to its ability to call into the APIs provided by the OS to actually draw or create windows.
First, there's a little misunderstanding, I think:
How does C++, which previously had no syntax capable of asking the OS for a window or a way to communicate through networks
There is no syntax for doing OS operations. It's a question of semantics.
suddenly get such capabilities through libraries written in C++ themselves
Well, the operating system is written mostly in C. You can use shared libraries (.so, .dll) to call external code. Additionally, the operating system code can register system routines as syscalls or interrupts, which you can invoke using assembly. Those shared libraries often just make the system calls for you, so you are spared using inline assembly.
Here's a nice tutorial on this: http://www.win.tue.nl/~aeb/linux/lk/lk-4.html
It's for Linux, but the principles are the same.
How does the operating system do operations on graphics cards, network cards, etc.? It's a very broad topic, but mostly you need to access interrupts and ports, or write some data to a special memory region. Since those operations are protected, you need to go through the operating system anyway.
In an attempt to provide a slightly different view to other answers, I shall answer like this.
(Disclaimer: I am simplifying things slightly, the situation I give is purely hypothetical and is written as a means of demonstrating concepts rather than being 100% true to life).
Think of things from the other perspective, imagine you've just written a simple operating system with basic threading, windowing and memory management capabilities. You want to implement a C++ library to let users program in C++ and do things like make windows, draw onto windows etc. The question is, how to do this.
Firstly, since C++ compiles to machine code, you need to define a way to use machine code to interface with C++. This is where functions come in: functions accept arguments and give return values, and thus provide a standard way of transferring data between different sections of code. They do this by establishing something known as a calling convention.
A calling convention states where and how arguments should be placed in memory so that a function can find them when it gets executed. When a function gets called, the calling function places the arguments in memory and then asks the CPU to jump over to the other function, where it does what it does before jumping back to where it was called from. This means that the code being called can be absolutely anything and it will not change how the function is called. In this case however, the code behind the function would be relevant to the operating system and would operate on the operating system's internal state.
So, many months later, you've got all your OS functions sorted out. Your user can call functions to create windows and draw onto them, they can make threads and all sorts of wonderful things. Here's the problem though: your OS's functions are going to be different from Linux's functions or Windows' functions. So you decide you need to give the user a standard interface so they can write portable code. Here is where Qt comes in.
As you almost certainly know, Qt has loads of useful classes and functions for doing the sorts of things operating systems do, but in a way that appears independent of the underlying operating system. The way this works is that Qt provides classes and functions that are uniform in the way they appear to the user, but the code behind the functions is different for each operating system. For example, Qt's QApplication::closeAllWindows() would actually be calling each operating system's specialised window-closing function depending on the version used. On Windows it would most likely call CloseWindow(hwnd), whereas on an OS using the X Window System it would potentially call XDestroyWindow(display,window).
As is evident, an operating system has many layers, all of which have to interact through interfaces of many varieties. There are many aspects I haven't even touched on, but to explain them all would take a very long time. If you are further interested in the inner workings of operating systems, I recommend checking out the OS dev wiki.
Bear in mind though that the reason many operating systems choose to expose interfaces to C/C++ is that they compile to machine code, they allow assembly instructions to be mixed in with their own code and they provide a great degree of freedom to the programmer.
Again, there is a lot going on here. I would like to go on to explain how libraries like .so and .dll files do not have to be written in C/C++ and can be written in assembly or other languages, but I feel that if I add any more I might as well write an entire article, and as much as I'd love to do that I don't have a site to host it on.
When you try to draw something on the screen, your code calls some other piece of code which calls some other code (etc.) until finally there is a "system call", which is a special instruction that the CPU can execute. These instructions can be either written in assembly or can be written in C++ if the compiler supports their "intrinsics" (which are functions that the compiler handles "specially" by converting them into special code that the CPU can understand). Their job is to tell the operating system to do something.
When a system call happens, a function gets called that calls another function (etc.) until finally the display driver is told to draw something on the screen. At that point, the display driver looks at a particular region in physical memory which is actually not memory, but rather an address range that can be written to as if it were memory. Instead, however, writing to that address range causes the graphics hardware to intercept the memory write, and draw something on the screen.
Writing to this region of memory is something that could be coded in C++, since on the software side it's just a regular memory access. It's just that the hardware handles it differently.
So that's a really basic explanation of how it can work.
Your C++ program uses the Qt library (also coded in C++). The Qt library will be using the Windows CreateWindowEx function (coded in C inside user32.dll). Or under Linux it may be using Xlib (also coded in C), but it could just as well be sending the raw bytes that in the X protocol mean "Please create a window for me".
Related to your catch-22 question is the historical note that “the first C++ compiler was written in C++”, although actually it was a C compiler with a few C++ notions, enough so it could compile the first version, which could then compile itself.
Similarly, the GCC compiler uses GCC extensions: it is first compiled to a version then used to recompile itself. (GCC build instructions)
The way I see it, this is actually a compiler question.
Look at it this way: you write a piece of code in assembly (you could do it in any language) which translates your newly written language - let's call it Z++ - into assembly. For simplicity, call this program a compiler (it is a compiler).
Now you give this compiler some basic functions, so that you can work with ints, strings, arrays, etc. - in fact, you give it enough abilities that you can write the compiler itself in Z++. Now you have a compiler for Z++ written in Z++. Pretty neat, right?
What's even cooler is that now you can add abilities to that compiler using the abilities it already has, thus expanding the Z++ language with new features built on the previous features.
For example, if you write enough code to draw a pixel in any color, then you can expand it using Z++ to draw anything you want.
The hardware is what allows this to happen. You can think of the graphics memory as a large array (consisting of every pixel on the screen). To draw to the screen you can write to this memory using C++ or any language that allows direct access to that memory. That memory just happens to be accessible by or located on the graphics card.
On modern systems, accessing the graphics memory directly requires writing a driver because of various restrictions, so you use indirect means: libraries that create a window (really just an image like any other image) and then write that image to the graphics memory, which the GPU then displays on screen. Nothing has to be added to the language except the ability to write to specific memory locations - which is what pointers are for.

Using 32-bit library in 64-bit C++ program

Is there any way to use an old 32-bit static library (*.a) on a 64-bit system?
There is no chance of obtaining the source code of this old library to compile it again.
I also do not want to use -m32 in gcc, because the program uses many 64-bit libraries.
Thanks.
That depends entirely on the platform on which you're running. On OS X on PowerPC, for example, that would "Just Work".
On x86 platforms, you can't link a 32-bit library into a 64-bit executable. If you really need to use that library, you'll need to start a separate 32-bit process to handle your calls to the library, and use some form of IPC to pass those calls between your 64-bit application and that helper process. Be forewarned: this is a lot of hassle. Make sure that you really need that library before starting down this road.
On the x86/x86_64 platform, you can't do this. I mean, maybe you could if you wrote custom assembly language wrappers for each and every 32 bit function you wanted to call. But that's the only way it's even possible. And even if you were willing to do that work I'm not sure it would work.
The reason for this is that the calling conventions are completely different. The x86_64 platform has many more registers to play with, and the 64-bit ABI (Application Binary Interface - basically how parameters are passed, how a stack frame is set up, and things like that) standards for all of the OSes make use of these extra registers for parameter passing and the like.
This makes the ABI of 32-bit and 64-bit x86/x86_64 systems completely incompatible. You'd have to write a translation layer. And it's possible the 32-bit ABI allows 32-bit code to fiddle around with CPU stuff that 64-bit code is not allowed to fiddle with, and that would make your job even harder since you'd be required to restore the possibly modified state before returning to the 64-bit code.
And that's not even talking about this issue of pointers. How do you pass a pointer to a data structure that's sitting at a 64-bit address to 32-bit code?
Simple answer: You can't.
You need to use -m32 in order to load a 32-bit library.
Probably your best approach is to create a server wrapping the library. A 64-bit application can then use IPC (various methods, e.g. sockets, FIFOs) to communicate to and from the process hosting the library.
On Windows this would be called out-of-process COM. I don't know that there's a similar framework on unix, but the same approach will work.

How can I make a portable executable?

Is there a way to compile a C/C++ source file to output a .exe file that can be run on other processors on different computers?
I am asking this for the Windows platform.
I know it can be done with Java or C#, but they use a virtual machine.
PS: For those who said it can be done only with virtual machines, or that the source code must be compiled on every machine: are all viruses written in Java or C#, so that you need a VM to be infected? Or do you need to compile the worm's source code on your machine to be infected? (I am not trying to make a virus, but it is a good example. :) )
Different computers use different instruction sets, OS system calls, etc., which is why machine code is platform specific. This is why various technologies like byte code, virtual machines, etc., have been developed to allow portable executables. However, generally C/C++ compiles directly to platform-specific machine code.
So a Windows exe simply won't run on another platform without some kind of emulation layer.
However, you can make your C/C++ source code portable. This means all you need to do to make your application run on another platform is to compile it for that platform.
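For example, a single portable source file can isolate the platform-specific part behind the preprocessor; you then recompile the same file on each platform instead of shipping one binary everywhere. The function name below is just illustrative:

```c
#include <assert.h>  /* only for the quick self-check */

/* One portable source file: the platform-specific choice happens at
   compile time, so the same code builds on Windows, macOS, and Linux. */
const char *platform_name(void)
{
#if defined(_WIN32)
    return "windows";
#elif defined(__APPLE__)
    return "macos";
#elif defined(__linux__)
    return "linux";
#else
    return "unknown";
#endif
}
```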
Yes, you can, but it's not necessarily a good idea.
Apple introduced the idea of a fat binary when they were migrating from the Motorola 68000 to the PowerPC chips back in the early 90s (I'm not saying they invented it, that's just the earliest incarnation I know of). That link also describes the FatELF Linux universal binaries but, given how little we hear about them, they don't seem to have taken off.
This was a single file which basically contained both the 68000 and PowerPC executables bundled into one file, and it required some smarts from the operating system so it could load and execute the relevant one.
You could, if you were so inclined, write a compiler which produced a fat binary that would run on a great many platforms but:
it would be hideously large; and
it would almost certainly require special loaders on each target system.
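To give a concrete feel for what such a loader deals with, here is a minimal reader for the classic Mach-O fat header mentioned above. The on-disk magic number 0xCAFEBABE and the architecture count are stored big-endian regardless of the host CPU; the function name is invented for this sketch:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define FAT_MAGIC 0xCAFEBABEu  /* classic Mach-O fat binary magic */

/* On-disk fields are big-endian, so decode byte by byte. */
static uint32_t be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Return the number of architecture slices, or -1 if the buffer is
   not a fat binary. A real loader would then scan the fat_arch
   entries that follow and jump to the slice matching the host CPU. */
int fat_slice_count(const unsigned char *buf, size_t len)
{
    if (len < 8 || be32(buf) != FAT_MAGIC)
        return -1;
    return (int)be32(buf + 4);
}
```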
Since gcc has such a huge amount of support for different platforms and cross-compiling, it would be where I would concentrate the effort, were I mad enough to try :-)
The short answer is: you can't. The longer answer is: write in portable (standard) C/C++ and compile it on the platforms you need.
You can, however, do it in a different language. If you need something to run on multiple platforms, I suggest you investigate Java. It's a similar language to C/C++, and you can "compile" (sort of) programs to run on pretty much any computer.
Do not confuse processor platforms with OS platforms.
For different OS platforms, the machine binaries are altogether different; it is not even possible to build a one-to-one instruction mapper. That is because the whole instruction set architecture may be different: different instruction groups may have totally different instruction formats, and some instructions may be missing entirely on the target platform.
Only an emulator or virtual machine can do this.
Actually, some operating systems support this; it is usually called a "fat binary".
In particular, Mac OS uses (used) it to support PowerPC and x86 in one binary.
On MS Windows however, this is not possible as the OS does not support it.
Windows can run 32-bit executables in 64-bit mode with no problem, so your exe will be portable if you compile it in 32-bit mode; otherwise you must release two versions of your executable. If you use Java or C# (or any bytecode-compiled language), the JIT/interpreter can optimize your code for the current OS's mode, so it is fully portable. But since C++ produces native code, I'm afraid this can't be done except by shipping two versions of your binary.
The trick to do this, is to create a binary which has machine instructions which will be emulated in a virtual machine on the operating systems and processors you want to support.
The most widespread such virtual machines are variants of the Java virtual machine, so my suggestion would be to look at a compiler which compiles C code to Java byte code.
Also, Windows once upon a time treated x86 as a virtual machine on other (Alpha) architectures.
To summarize the other answers, if you want to create a single executable file that can be loaded and run on multiple platforms, you basically have two options:
Create a "fat binary", which contains the machine code for multiple platforms. This is not normally supported by most development tools and may require special loaders on the target platform;
Compile to byte code for the JVM or for .NET. I've heard of one C compiler that generates Java byte code (can't remember the name offhand), but I've never used it, nor do I have any idea what the quality of the implementation would be.
Normally, the procedure for supporting multiple platforms in C is to generate different executables for each target, either by using a cross compiler or running a compiler on each platform. That requires you to put some thought into how you write and organize your source code so that the platform-specific bits can be easily swapped out without affecting the overall program logic, for varying degrees of "easily".
The short answer is you can't. The long answer is that there are several options.
Fat binary. The downside is that this requires OS support. The only user-level OS I know of that supports it is OS X, for its PowerPC-to-Intel migration.
On-the-fly cross-translation, as used by Transmeta and Apple. Again, no general solution that I know of.
A C/C++ interpreter. There is at least one I am aware of: Ch. It runs on Windows, Linux, and OS X. Note that Ch is not fully C++ compatible.
This question is like asking, "Is there a way to travel from Canada to any other city in the world?"
And the answer is: "Yes, there is."
To compile C/C++ source code into an executable file for the Windows platform without any virtual machine, you can use the legacy Windows API or MFC (especially with MFC in a static library instead of a DLL). This executable will run on approximately all PCs that have Windows, because Windows runs on only three platforms (x86, x64, IA64; except Windows 8 and 8.1, which also support ARM). Of course, you should compile your source to x86 code so it runs on both 32-bit and x86-64 platforms; Itaniums can run your exe in emulation. But for other processors that run Windows as their OS (like ARM in mobiles), you should compile it for Windows Phone or Windows CE.
You can write a mid-library, such as:
[        Library        ]
[      mid-library      ]
[linux part][windows part]
Then your app uses the library through that middle layer, and it stays portable.

Issues in porting C/C++ code to VxWorks

I need to port a C/C++ codebase that already supports Linux/Mac to VxWorks. I am pretty new to VxWorks. Could you let me know what possible issues could arise?
We recently did the opposite conversion - we ported code from a PowerPC machine running VxWorks to an Intel system running Linux. I don't remember hitting many snags as far as the differences between the operating systems. Obviously any call to an OS specific API will have to change and we were not making extensive use of these functions.
Our biggest problem was not the difference between the operating systems, but rather the difference between PowerPC and Intel hardware. PowerPC is Big Endian and Intel is Little Endian. Our software is written in C and made many assumptions as to the order of bytes and this was an absolute nightmare to get it working smoothly again. There were literally hundreds of structures that defined bitfields and needed to be re-ordered to work correctly. We ended up implementing a #pragma in GCC that reversed these bitfields at their definition (#pragma reverse_bitfields).
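The kind of fix-up that crossing between Big Endian PowerPC and Little Endian Intel requires can be sketched as a plain 32-bit byte swap plus a runtime endianness check:

```c
#include <assert.h>
#include <stdint.h>

/* Reverse the byte order of a 32-bit value: the same word read from a
   big-endian (PowerPC) file or wire format must be swapped before a
   little-endian (x86) CPU can interpret it correctly. */
uint32_t swap32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
           ((v << 8) & 0x00FF0000u) | (v << 24);
}

/* Detect the host's own byte order at run time. */
int host_is_little_endian(void)
{
    uint16_t one = 1;
    return *(unsigned char *)&one == 1;
}
```

Note that a byte swap only fixes whole-field ordering; bitfields laid out by the compiler, as in the structures described above, are a separate problem that no simple swap routine can solve.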
Much depends on which version of VxWorks you're targeting, and the actual target processor itself. One thing you will have to deal with is that there is no paged memory system or virtual memory--you have what's there. The environment itself is far more constrained than a linux system. Sometimes the work involved in porting applications goes all the way back to the architecture level because resources are not as unlimited as they are in linux.
Some other tips:
License VxWorks such that you have the source code available
Use a real, physical target as soon as possible in the development cycle; do not count on the simulators accurately emulating the target
Use TSRs (technical support requests) as necessary; I don't know how they structure the purchase of the right to create TSRs, but don't let anybody cheap out on these
Depending on which processor you are running VxWorks on, endianness, structure packing, and memory alignment could all be issues. The last time I used VxWorks, it supported pthreads, sockets, and a mutex layer that mimicked the Unix environment easily enough.
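Structure packing is easy to demonstrate: the same field declarations can occupy different numbers of bytes depending on whether the compiler is allowed to insert alignment padding. The `__attribute__((packed))` form below is a GCC/Clang extension (GCC-based VxWorks toolchains generally accept it, but verify for your target), and the struct names are invented:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* The compiler inserts padding to satisfy alignment rules, so the
   in-memory layout of the same declaration can differ between
   compilers and targets: a classic porting trap for structs that are
   written to disk or sent over the wire. */
struct natural {          /* typically 1 + 3(pad) + 4 + 2 + 2(pad) = 12 bytes */
    uint8_t  tag;
    uint32_t value;
    uint16_t count;
};

struct packed_layout {    /* no padding: exactly 7 bytes */
    uint8_t  tag;
    uint32_t value;
    uint16_t count;
} __attribute__((packed));
```

Code that memcpy()s such structs to I/O buffers silently depends on one particular layout, which is why explicit packing (or field-by-field serialization) matters when porting.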
It's difficult to tell, without knowing more about the application that you're porting: What linux libraries and api calls does it use? Is it self-contained, or does it rely on slews of linux command-line tools and scripts to do its job?
As Average says, endianness can cause you way more problems than you expect - particularly if you're not prepared for it.