If I have code compiled under Solaris 8 and 10, and now have a vendor that wants to use my bin/exe under Linux, could there be compatibility issues?
I am pretty sure I would need to compile/link under Linux for it to work 100%, but I just wanted to know if someone can give me the breakdown as to why it would not work on Linux, even though the exe has everything in it and there is nothing dynamic about it - so it should not need anything further to run. Unless we are talking about runtime libs which, if mismatched, might cause the exe to fail.
You have to recompile your application on Linux.
Linux is a completely different run-time environment compared to Solaris. Even if you have compiled your application statically, the interface to the kernel - the system calls - differs between the two operating systems. The processor architecture might be different too, e.g. SPARC vs. x86.
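To make the "a static binary still talks to the kernel" point concrete, here is a hedged sketch (Linux-only: SYS_write and the syscall() wrapper are Linux/glibc facilities) showing that even code with no dynamic dependencies ultimately issues kernel-specific system calls:

    #include <sys/syscall.h>   // Linux-specific system call numbers (SYS_write, ...)
    #include <unistd.h>        // syscall() wrapper

    int main() {
        const char msg[] = "hello via the Linux system-call interface\n";
        // Even a fully static binary ends up here: a trap into the kernel using
        // Linux's call numbers and conventions. Solaris uses different numbers
        // and conventions, so the same machine code cannot simply run there.
        syscall(SYS_write, 1, msg, sizeof(msg) - 1);
        return 0;
    }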
Both Solaris and Linux support most of the standard C and POSIX APIs, so if you've not used any APIs exclusive to Solaris, recompiling on Linux is often not that big a deal - but you surely should test everything thoroughly, and be aware of any endianness and potential 64-bit vs. 32-bit issues.
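Since SPARC is big-endian and x86 is little-endian, one concrete thing to audit is any code that reads or writes raw bytes. A minimal sketch of a runtime check:

    #include <cstdint>
    #include <cstring>
    #include <iostream>

    // Returns true on little-endian hosts (x86), false on big-endian (SPARC).
    bool is_little_endian() {
        std::uint32_t value = 1;
        unsigned char bytes[sizeof value];
        std::memcpy(bytes, &value, sizeof value);
        return bytes[0] == 1;
    }

    int main() {
        std::cout << (is_little_endian() ? "little-endian\n" : "big-endian\n");
    }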
Other things that I think will not allow your Solaris binary to run on Linux out of the box are:
the hardware architecture:
1.1 Solaris usually runs on Sun's own SPARC machines, although versions 8 - 10 can run on Intel architectures as well;
1.2 Linux usually runs on Intel machines (although it can run on SPARC machines).
the compilers:
2.1 Solaris 8 uses Sun's own compilers (Sun WorkShop 6+) & standard library implementation (so you'll have different library names, ABI incompatibilities and so on). Solaris 10 actually comes with gcc but you're probably not using it (I gather you're building on Solaris 8 only);
2.2 Linux uses g++, same as above for library names, ABI incompatibilities & so on.
Related
Suppose we take a compiled language, for example C++, and an example framework, say Qt. Qt has its source code publicly available and lets users download pre-built binaries and use its API. My question is: when they compiled their code, it was compiled for their specific hardware, operating system, and so on. I understand that much software requires recompilation for different operating systems (including 32- vs. 64-bit) and offers multiple downloads on its website, but how does it not go even further - being hardware specific as well - and end up making the redistribution of compiled executables extremely frustrating to produce?
Code gets compiled to a target base CPU (e.g. 32-bit x86, x86_64, or ARM), but not necessarily a specific processor like the Core i9-10900K. By default, the compiler typically generates the code to run on the widest range of processors. And Intel and AMD guarantee forward compatibility for running that code on newer processors. Compilers often offer switches for optimizing to run on newer processors with new instruction sets, but you rarely do that since not all your customers have that config. Or perhaps you build your code twice (once for older processors, and an optimized build for newer processors).
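As a rough illustration of that trade-off (the flag names below are GCC/Clang's; MSVC has /arch:... equivalents, and the exact behaviour depends on your toolchain):

    // g++ -O2 prog.cpp                 -> baseline x86-64 code; runs on the
    //                                     widest range of 64-bit Intel/AMD CPUs
    // g++ -O2 -march=native prog.cpp   -> tuned for the build machine's CPU;
    //                                     may die with "illegal instruction"
    //                                     on older processors
    #include <iostream>

    int main() {
        std::cout << "same source, different instruction-set targets\n";
    }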
There's also a concept called cross-compiling. That's where the compiler generates code for a completely different processor than the one it runs on. Such is the case when you build your iOS app on a Mac: the compiler itself is an x86_64 program, but it generates ARM instructions to run on the iPhone.
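A small sketch of that idea: the target architecture is baked in at compile time, which you can see through the compiler's predefined macros. Build it with an ordinary g++/clang++/cl, or with a cross toolchain such as aarch64-linux-gnu-g++ if you have one installed (that toolchain name is an assumption about your setup):

    #include <iostream>

    int main() {
    #if defined(__x86_64__) || defined(_M_X64)
        std::cout << "built for x86_64\n";
    #elif defined(__aarch64__) || defined(_M_ARM64)
        std::cout << "built for 64-bit ARM\n";
    #elif defined(__i386__) || defined(_M_IX86)
        std::cout << "built for 32-bit x86\n";
    #else
        std::cout << "built for some other architecture\n";
    #endif
    }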
Code gets compiled and linked with a certain set of OS APIs and external runtime libraries (including the C/C++ runtime). If you want your code to run on Windows 7 or Mac OS X Mavericks, you wouldn't statically link to an API that only exists on Windows 10 or macOS Big Sur. The code would compile, but it wouldn't run on the older operating systems. Instead, you'd do a workaround or conditionally load the API if it is available. Microsoft and Apple provide forward compatibility by keeping those same runtime library APIs available on later OS releases.
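A sketch of the "conditionally load the API" workaround on Windows - resolve a newer function at run time so the binary still loads on older versions that lack it (GetTickCount64 is just an illustrative Vista-and-later API, not one tied to the versions mentioned above):

    #include <windows.h>
    #include <iostream>

    int main() {
        using GetTickCount64Fn = ULONGLONG (WINAPI*)();
        // Linking to GetTickCount64 directly would stop the EXE from loading on
        // pre-Vista Windows; looking it up at run time keeps the EXE loadable.
        HMODULE kernel32 = GetModuleHandleW(L"kernel32.dll");
        auto pGetTickCount64 = reinterpret_cast<GetTickCount64Fn>(
            GetProcAddress(kernel32, "GetTickCount64"));

        if (pGetTickCount64)
            std::cout << "uptime (ms): " << pGetTickCount64() << '\n';
        else
            std::cout << "uptime (ms): " << GetTickCount() << '\n';  // older API
    }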
Additionally Windows supports running 32-bit processes on 64-bit chips and OS. Mac can even emulate x86_64 on their new ARM based devices coming out later this year. But I digress.
As for Qt, they actually offer several pre-built configurations for their reference binary downloads, because, at least on Windows, the MSVCRT (the C-runtime APIs from Visual Studio) is closely tied to the compiler version of Visual Studio. So they offer various downloads to match the configuration you want to build your code for (32-bit, 64-bit, VS2017, VS2019, etc.). So when you put together a complete application with 3rd party dependencies, some of these build, linkage, and CPU/OS configs have to be accounted for.
Is it safe to assume that any x86 compiled app would always run under x64 edition of same OS the app was compiled in?
As far as I know, for Windows the answer is "Yes" - the Windows x86 emulation layer is built for exactly this purpose. But I just want to reconfirm this with the experts here.
What about Unix, Linux? Are there any caveats?
No - for x86 code to run, it needs to be run in compatibility or legacy mode. If the OS doesn't support running processes in compatibility mode, the program would most likely not be able to run.
Linux and, AFAIK, Windows currently support compatibility mode, and it looks like many other systems do too, more or less. My understanding is that NetBSD requires a special module to support this, so it is not necessarily supported without special care, which shows it's quite possible that there exist OSes where the possibility has been dropped completely.
Then, in addition, there's the possibility of backwards compatibility being broken in the future. That has already happened on the CPU side: virtual 8086 mode is no longer available from long mode, which is why you can't run 16-bit programs anymore under 64-bit Windows or Linux.
It could also happen on the OS side: the developers could decide not to support compatibility mode anymore. Note that this may already have happened, in the sense that it might be possible to support virtual 8086 mode by first switching to legacy mode, but if it is possible, no one seems to have bothered doing it. Similarly, neither the Windows nor the Linux developers seem to have bothered implementing the ability to run kernel code in legacy mode in a 64-bit kernel.
The preceding two paragraphs show that there are present, or at least possible future, indications that this might not always be possible.
Besides, as this is a C++ question, you would have to ask yourself why you would want to make such an assumption. If well written, your code should be able to be compiled for 64-bit mode - you haven't relied on data types being of a specific width, have you?
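For reference, a minimal sketch of what "not relying on specific widths" looks like in practice, assuming C++11's <cstdint> is available:

    #include <cstdint>

    std::int32_t  exactly_32_bits   = 0;  // same width on every platform
    std::int64_t  exactly_64_bits   = 0;  // same width on every platform
    std::intptr_t wide_as_a_pointer = 0;  // grows with the pointer size

    // If something truly requires a particular size, state it explicitly so a
    // 64-bit build fails loudly instead of misbehaving quietly:
    static_assert(sizeof(void*) >= 4, "expected at least 32-bit pointers");

    int main() {}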
No. We have a whole bunch of Debian servers which lack the multi-arch i386 (32-bit) libraries. On Windows Server Core, WoW64 (the 32-bit subsystem) is optional. So for both Linux and Windows, there are known 64-bit systems that won't run x86 executables.
I am developing a cross-platform application and I need to determine whether machine B will be able to run the application that is compiled on machine A.
I am using Qt and I already understand that I need to either package the Qt libraries with the application or statically link against Qt itself.
I also understand that something compiled on Windows can't run on Linux.
However, there are still some other vectors that I'm not sure to what extent they matter. Here is the summary of my current understanding:
Affects Portability
Operating System (Windows, Mac, Linux)
Availability of third party libraries (Qt, static vs dynamic linking, etc)
May Affect Portability
Flavor of Linux (Ubuntu, Red Hat, Fedora)
Architecture (32 or 64-bit)
Version of Operating System (Windows 7 vs Windows XP, Rhel5 vs Rhel6)
Instruction type (i386, x64)
Of the May Affect Portability items, which ones actually do? Are there any that I am missing?
All. At least potentially.
If two different machines have no binary compatibility (e.g. they run on different architectures, or interface to incompatible systems), then it will be impossible to create a single binary that will run on both. (Or... does running a Windows program under Wine on Linux count?)
Otherwise, it depends. You mention third party libraries: if they're dynamically loaded, they have to be there, but there's always static linking, and there may be ways of deploying with the dynamic library, so that it will be there.
The 32-bit vs. 64-bit question is a difference in architectures: a 32-bit program will not run in a 64-bit environment and vice versa. But most modern systems will make both environments available if they are on a 64-bit machine.
Issues like the flavor and version of the OS are more complex. If you use any functions recently added to the OS, of course, you won't be able to run on machines with an OS from before they were added. Otherwise: the main reason why the low-level system libraries are dynamically loaded is to support forwards and backwards compatibility; I've heard that it doesn't always work, but I suspect that any problems involve some of the rarer functions. (There are limits to this. Modern Windows programs will not run under Windows 95, and vice versa.)
There is also the issue of whether various optional packages are installed. Qt requires X Windows under Linux or Solaris; I've worked on a lot of Linux and Solaris boxes where it wasn't installed (and where there wasn't even a display device).
And there is the issue of whether it will run acceptably. It may run on a smaller, older machine than the one on which you tested it, but it could end up paging like crazy, to the point where it becomes unusable.
If you compile an application on a 64-bit processor, it won't by default run on a 32-bit processor. However, you can pass options to the compiler to have it generate code for a 32-bit processor. For example, if you're using GCC on a 64-bit machine and you pass -m32, it will compile 32-bit code. 32-bit code, by default, can run on a 64-bit machine.
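A small sketch of that (assuming GCC on 64-bit Linux with the 32-bit support libraries installed - on many distributions that means a multilib package, which is an assumption about your setup):

    // g++ -m64 size.cpp -o size64   -> prints "pointer size: 64 bits"
    // g++ -m32 size.cpp -o size32   -> prints "pointer size: 32 bits",
    //                                  and the 32-bit binary still runs on
    //                                  the 64-bit machine
    #include <iostream>

    int main() {
        std::cout << "pointer size: " << sizeof(void*) * 8 << " bits\n";
    }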
Sources
https://stackoverflow.com/a/3501880/193973
Different flavors of Linux or versions of operating systems may have different system libraries. For example, the GetModuleFileNameEx function is only available on Windows XP and up. As long as you watch which functions you use, though, it shouldn't be too much of a problem.
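One common way to "watch which functions you use" at compile time is to declare the oldest Windows version you target before including the headers. A hedged sketch (link against psapi.lib / -lpsapi for GetModuleFileNameEx):

    #define _WIN32_WINNT 0x0501   // we target Windows XP and later
    #define WINVER       0x0501
    #include <windows.h>
    #include <psapi.h>            // GetModuleFileNameEx lives here

    int main() {
        wchar_t path[MAX_PATH];
        // Fine: available on XP and up, as noted above.
        GetModuleFileNameExW(GetCurrentProcess(), nullptr, path, MAX_PATH);
        // Many APIs introduced after XP are guarded by these defines in the SDK
        // headers, so an accidental use tends to fail at build time rather than
        // when the program starts on an older machine.
        return 0;
    }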
The x64 architecture is backwards compatible with x86 ("32-bit"), so programs compiled for the x86 will run on x64 machines, but not vice versa. Note that there are other, less common architectures too, such as the ARM and PowerPC.
I can immediately think of three things that interfere with portability. But if your code is in a file format understood by both systems, uses an instruction set understood by both systems, and only makes system calls understood by both systems, then your code should probably run on both systems.
Executable File format
Windows understands PE, COFF, COM, .NET assemblies, and others
Linux by default understands ELF, a.out, and others
Mac OS X uses Mach-O
Linux has extensions to run the Windows/Mac formats too (Wine, Mono...)
Instruction set
Each processor is a little different, but
There is usually a "lowest common denominator" which can be targeted
Sometimes compilers will emit the code for a function twice:
one version with the LCD instruction set, one with a "faster" instruction set,
and the program will pick the right one at runtime (see the sketch at the end of this answer).
x64 can run 64-bit and 32-bit code, but not 16-bit.
x86 can run 32-bit and 16-bit code, but not 64-bit.
Operating System calls
Each OS usually has different calls
This can be avoided by using a dynamic library, instead of direct OS calls
Such as... the C runtime library.
Or the CLR for .Net.
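As a sketch of the "compile the function twice and pick the right one at runtime" idea from the instruction-set section above, GCC (6+) and recent Clang on x86 Linux offer function multi-versioning; the exact toolchain support is an assumption here:

    #include <cstddef>

    // The compiler emits an AVX2 version, an SSE2 version and a baseline
    // version of this function; a resolver picks the best one at load time.
    __attribute__((target_clones("avx2", "sse2", "default")))
    double dot(const double* a, const double* b, std::size_t n) {
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            sum += a[i] * b[i];
        return sum;
    }

    int main() {
        double x[4] = {1, 2, 3, 4}, y[4] = {4, 3, 2, 1};
        return dot(x, y, 4) > 0 ? 0 : 1;
    }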
I am worried about the reliability of the MinGW compiler for 64-bit, as an alternative to the Visual C++ compiler.
For example, assuming C++ code builds and runs perfectly under Linux using GCC 4.6.2, will the corresponding MinGW produce similarly reliable executables/libraries under 64-bit Windows?
Is Cygwin a better option in terms of reliability? Or is neither as reliable as the Visual C++ compiler?
First, some misconceptions:
MinGW(.org) does not provide a 64-bit version of its runtime. MinGW-w64 does, in addition to its own 32-bit CRT. They are also working on ARM support, and they support various additional APIs (Win32 and others).
Cygwin <-> MinGW-w64: Cygwin does not use the MS CRT (msvcrt.dll). Instead it inserts a POSIX compatibility layer, cygwin1.dll, between your Cygwin app and the system's OS libraries (kernel32.dll, ntdll.dll, etc.).
On to the question then...
I have found the MinGW-w64 compilers very good, and GCC 4.6 and above (actually, 4.5.1 and above) are very capable of producing good 64-bit code for Windows. Please remember that MinGW provides essentially the same C API as msvcrt.dll, so go to msdn.com for documentation (and be sure to look at the "MSVC++ 2003" version of the documentation - some functions differ in the newer runtimes); do not think that because it's GCC, glibc documentation suddenly applies to Windows. Your code will have to be cross-platform. Also note that sizeof(long) != sizeof(T*) on x64 Windows - a commonly encountered error when porting *nix or x86 Windows code to x64 Windows.
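To make that last point concrete, here is a small sketch: on x64 Windows (LLP64) it prints 4 and 8, while on x64 Linux or Solaris (LP64) it prints 8 and 8:

    #include <cstdint>
    #include <iostream>

    int main() {
        std::cout << "sizeof(long)  = " << sizeof(long)  << '\n'
                  << "sizeof(void*) = " << sizeof(void*) << '\n';
        // Portable way to carry a pointer in an integer on either model:
        void* p = nullptr;
        std::uintptr_t bits = reinterpret_cast<std::uintptr_t>(p);
        (void)bits;
    }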
It is said that by using C/C++ one can write 'native' programs - programs that run directly on the platform. I am confused about what is considered native: the processor architecture or the OS version?
For example:
I have a 32-bit processor and Windows 7 (32-bit), and I compile and generate an .exe file. Is it guaranteed to run on any 32-bit Windows 7? (Win 7 32-bit on 32/64-bit machines)
Edit1:
I did not intend only Windows here. My example can be extended to Linux as well - for example, generating an executable (by default a.out) on a 32-bit Linux OS running on a 32-bit processor, and then running it on a 32-bit Linux on a 64-bit processor.
Edit2:
Thanks for the responses, but I also intended that I am using only the standard libraries and functions - nothing OS-specific, just the ones specified by the ANSI or ISO C++ standard. No references to OS-specific windowing systems or other libraries.
Thanks
Both; kind of.
The actual instructions don't really differ across Windows and Linux as they are compiled down for a single CPU architecture (x86).
However, a binary is more than just code that runs on bare hardware. For instance, it also contains information that tells the operating system how to load the executable and its dependencies. The binary is packaged in a specific format. This format can be different in different operating systems.
Besides that, the operating system provides services to applications (through system calls and APIs). The services that operating systems provide, and the way they can be used, vary from one operating system to another.
These reasons contribute to the fact that, most of the time, a native binary depends on both the OS and the CPU architecture it's compiled for.
Answer to the updated question:
The C++ standard doesn't require anything about the nature of the compiled target; it only specifies compatibility requirements at the source level. Consequently, if you stick to the standard libraries, you'll be able to use the same source code to compile on any platform that offers a conforming C++ implementation. The standard doesn't say anything about binary portability. As I mentioned above, the primitive system calls that operating systems provide can vary, and the actual implementation of the standard library depends on the way those system calls are provided by the OS.
In order to run a Windows binary on Linux, you need to use some sort of emulation like Wine which understands Windows binary format and simulates Windows API for applications.
1) The processor architecture (plus the targeted libraries, static or dynamic)
2) Yes
A 32-bit Windows application will run on a 64-bit Windows platform under WOW64.
If your (Windows) compiler's target architecture is x86 (32-bit), then it can run on any 32-bit and 64-bit Windows 7. But if it's x86-64, it will only run on 64-bit Windows 7.
To answer the title specifically, you code for both.
The executable contains machine code, which is specific to the processor, and a lot of metadata for the OS on how to load/execute the program, which is specific to the OS.
The code may also (and typically does) contain calls into functions defined by the OS. And so, while it is just perfectly ordinary machine code that any compatible CPU will understand, it attempts to call code that only exists on Windows.
So "native" really means both. You code for the specific OS (and all compatible OS'es) and that specific CPU (and all compatible CPUs).
In the case of Windows, you typically target a specific version of Windows, and the program will then work on that, and future versions of Windows.
For the processor on which Windows (and your program) runs, the executable contains x86 machine code, which can be executed on any x86 CPU, whether it is from Intel, AMD, VIA or whoever else has made compatible processors over the years.
Without being able to see your code, only you can tell us whether you're coding for a 32-bit or 64-bit platform - for example, if you reinterpret_cast a pointer into a 32-bit int and then back to a pointer, you are coding for 32-bit, whereas if you use a type such as intptr_t you are safe whether your code is compiled for 32- or 64-bit machines. Similarly, coding for Windows desktops, your code can assume the machine's endianness.
If, as in your example, you compile that code for 32-bit Windows 7, then it will also run on 64 bit Windows 7. If you use Windows 7 features, it won't run on earlier versions. Microsoft are very good at backward compatibility, so it probably will run on later versions.
Short answer: No.
Longer: when you compile "native code", you compile for a specific processor architecture - MIPS, ARM, x86, 68k, SPARC and so on. These architectures can have a word length of 8, 16, 32 or 64 bits (there are exceptions). Also, these architectures can have extensions from generation to generation, like MMX, SSE, SSE2, NEON and such.
You also need to consider the operating system, what libraries you can use, and the different calling conventions.
So, there's no guarantee. But if you compile with MSVC on Windows 7, it's almost guaranteed to run on Windows 7. I think it only exists for x86 at the moment.