I'm working on a game using C++ and relying on legacy features of OpenGL.
I've mostly been programming on Windows machines and am now looking into expanding to Linux and Mac OS. As a personal challenge, I'd like to keep my software as backwards-compatible as possible and support as many OSes as possible. On Windows, I've been able to maintain support as far back as Windows 95 (with Visual Studio giving me some trouble in the process, but I worked it out). However, I have next to no knowledge of Mac OS, having never used it myself until just now. Considering my software runs on Windows 95, it certainly does not require any advanced features or functions.
The lowest version of Mac OS that old versions of Xcode seem to support is 10.1.x. If I were to write software for this version of Mac OS, would it run on more modern versions? I've read that there are software patches that enable some older Mac OS software to run on newer versions, but I would rather not inconvenience the end user like that.
I would not mind building two different versions of the software (say, a legacy and modern launcher) but having to build five or ten different versions of the software would be a massive pain and I'd rather avoid that if at all possible.
I apologize for my lack of knowledge in the field, my own research has only yielded very limited information and I would rather not waste weeks on a fruitless endeavor.
TL;DR: I want to write software in Xcode (C++) for Mac OS X that supports as many versions of the OS as possible. If I were to target Mac OS X 10.1, could I expect it to run on modern versions of the OS? If not, how much effort would it take to support as many versions of Mac OS as possible?
You're going to have to decide where to cut off support, or else build multiple versions of the app in separate, purpose-built build environments.
Modern macOS (10.15) is strictly 64-bit x86-64 only, with zero support for 32-bit apps. On 10.15 you can't even build a 32-bit binary, and if you could, it would be linked against libraries that don't exist on older versions of the OS.
Mac OS X 10.6 was the last release to run on 32-bit Intel machines and sat in the middle of the 32-to-64-bit transition. It was also the last release that could still run PowerPC applications (through Rosetta); 10.5 was the last version that actually shipped for PowerPC hardware.
If you want to support PowerPC machines, which were the only option in the 10.1 days, you'll need a compatible system to build on.
Ecosystem-wise, most Mac users are able to run current versions of the OS. Anything from 2012 onward can still get current OS releases, which covers a lot of hardware. Anything older is a mix of earlier Intel (some of it 32-bit) and PowerPC hardware, though you won't see a lot of people with systems like that in the wild.
In short, 10.15 is easy, and 10.14 and 10.13 aren't going to be hard. Anything older than that will be challenging to various degrees, and once you go past 10.6 things get very complicated.
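To make the "pick a cutoff" idea concrete, here is a minimal sketch of how one binary can span several of those releases, assuming the Apple Clang toolchain; the flag and version numbers below are placeholders, not a recommendation. You set a minimum deployment target at build time (e.g. -mmacosx-version-min=10.13 or the MACOSX_DEPLOYMENT_TARGET environment variable) and guard anything newer with a runtime availability check.
    // Sketch only: assumes a reasonably recent Clang/Xcode (9+) on macOS.
    // Build with something like: clang++ -std=c++11 -mmacosx-version-min=10.13 demo.cpp
    #include <cstdio>

    int main() {
    #if defined(__APPLE__) && defined(__clang__)
        // __builtin_available is Clang's runtime OS-version check.
        if (__builtin_available(macOS 10.15, *)) {
            std::puts("10.15 or newer: APIs newer than the deployment target may be used here");
        } else {
            std::puts("older release: fall back to code paths that exist on the deployment target");
        }
    #else
        std::puts("not building with an Apple toolchain");
    #endif
        return 0;
    }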
One way to estimate popularity is things like the Steam Hardware Survey where the breakdown looks like this:
10.15: 31.6%
10.14: 34.1%
10.13: 14.8%
10.12: 5.7%
10.11: 3.7%
So the good news is that things drop off pretty steeply after 10.13, and by 10.11 there aren't many users running an OS that old.
Related
Suppose we take a compiled language, for example C++, and an example framework, say Qt. Qt's source code is publicly available, and it also offers pre-built binaries so users can simply use its API. My question is: when that code was compiled, it was compiled for specific hardware and a specific operating system. I understand why much software requires recompilation for different operating systems (including 32- vs 64-bit) and offers multiple downloads on its website, but why doesn't this go even further? Why isn't compiled code also hardware-specific, to the point where redistributing compiled executables becomes extremely frustrating?
Code gets compiled for a target base CPU (e.g. 32-bit x86, x86_64, or ARM), but not necessarily for a specific processor like the Core i9-10900K. By default, the compiler typically generates code that runs on the widest range of processors, and Intel and AMD guarantee forward compatibility for running that code on newer processors. Compilers often offer switches for optimizing for newer processors with newer instruction sets, but you rarely use them, since not all your customers have that configuration. Or perhaps you build your code twice: once for older processors, and an optimized build for newer ones.
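To make the "build twice or dispatch at runtime" idea concrete, here is a hedged sketch using __builtin_cpu_supports, a GCC/Clang extension for x86 targets (MSVC would need __cpuid instead); the function names are made up for illustration.
    #include <cstdio>

    // Baseline path that any x86 CPU can run.
    void sum_baseline(const float* a, const float* b, float* out, int n) {
        for (int i = 0; i < n; ++i) out[i] = a[i] + b[i];
    }

    // Stand-in for an optimized path; in a real project this would live in a
    // separate translation unit compiled with -mavx2.
    void sum_avx2(const float* a, const float* b, float* out, int n) {
        for (int i = 0; i < n; ++i) out[i] = a[i] + b[i];
    }

    int main() {
    #if defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
        __builtin_cpu_init();
        const bool has_avx2 = __builtin_cpu_supports("avx2");
    #else
        const bool has_avx2 = false;  // non-x86 or non-GCC/Clang: always take the baseline
    #endif
        float a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, out[4];
        (has_avx2 ? sum_avx2 : sum_baseline)(a, b, out, 4);
        std::printf("out[0] = %.1f (avx2 path: %d)\n", out[0], has_avx2 ? 1 : 0);
        return 0;
    }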
There's also a concept called cross-compiling. That's where the compiler generates code for a completely different processor than the one it runs on. Such is the case when you build your iOS app on a Mac: the compiler itself is an x86_64 program, but it generates the ARM instruction set to run on the iPhone.
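A small sketch of that point: the compiler's predefined macros describe the target it is generating code for, not the machine it is running on, so the same source reports different things when cross-compiled.
    #include <cstdio>

    int main() {
    // These macros are set by the compiler for the *target* architecture.
    #if defined(__aarch64__) || defined(_M_ARM64)
        std::puts("compiled for 64-bit ARM (e.g. an iPhone)");
    #elif defined(__x86_64__) || defined(_M_X64)
        std::puts("compiled for x86_64");
    #elif defined(__i386__) || defined(_M_IX86)
        std::puts("compiled for 32-bit x86");
    #else
        std::puts("compiled for some other architecture");
    #endif
        return 0;
    }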
Code gets compiled and linked against a certain set of OS APIs and external runtime libraries (including the C/C++ runtime). If you want your code to run on Windows 7 or Mac OS X Mavericks, you wouldn't statically link to an API that only exists on Windows 10 or macOS Big Sur. The code would compile, but it wouldn't run on the older operating systems. Instead, you'd use a workaround or conditionally load the API if it is available. Microsoft and Apple provide forward compatibility by keeping those same runtime library APIs available on later OS releases.
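As a rough sketch of the "conditionally load the API" workaround on a POSIX system (the symbol name some_new_api is purely hypothetical): look the function up at runtime instead of linking against it, so the binary still starts on an OS release where it doesn't exist. On Windows the equivalent would be GetProcAddress; on Linux you may need to link with -ldl.
    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE  // glibc needs this for RTLD_DEFAULT
    #endif
    #include <dlfcn.h>
    #include <cstdio>

    int main() {
        using new_api_fn = int (*)(int);
        // RTLD_DEFAULT searches the libraries already loaded into the process.
        void* sym = dlsym(RTLD_DEFAULT, "some_new_api");  // hypothetical symbol
        if (sym) {
            auto fn = reinterpret_cast<new_api_fn>(sym);
            std::printf("new API present, result = %d\n", fn(42));
        } else {
            std::puts("new API missing on this OS release, using the fallback path");
        }
        return 0;
    }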
Additionally, Windows supports running 32-bit processes on a 64-bit chip and OS. Macs will even be able to emulate x86_64 on the new ARM-based devices coming out later this year. But I digress.
As for Qt, they actually offer several pre-built configurations for their reference binary downloads. That's because, at least on Windows, the MSVCRT (the C runtime APIs from Visual Studio) is closely tied to the compiler version of Visual Studio. So they offer various downloads to match the configuration you want to build your code for (32-bit, 64-bit, VS2017, VS2019, etc.). When you put together a complete application with third-party dependencies, some of these build, linkage, and CPU/OS configurations have to be accounted for.
Is it safe to assume that any x86-compiled app would always run under the x64 edition of the same OS it was compiled on?
As far as I know, for Windows the answer is "yes": Windows' x86 emulation layer (WOW64) is built for exactly this purpose. But I just want to reconfirm this with the experts here.
What about Unix, Linux? Are there any caveats?
No. For x86 code to run, it needs to be run in compatibility or legacy mode. If the OS doesn't support running processes in compatibility mode, the program will most likely not be able to run.
Linux and, AFAIK, Windows currently support compatibility mode, and it looks like many other systems do too, more or less. My understanding is that NetBSD requires a special module to support this, so support doesn't necessarily come without special care, and it's quite possible that there are OSes where it has been dropped completely.
In addition, there's the possibility of backwards compatibility breaking in the future. That has already happened on the CPU side: virtual 8086 mode is not available from long mode, which is why you can't run 16-bit programs under 64-bit Windows or Linux anymore.
It could also happen on the OS side: the developers could decide not to support compatibility mode anymore. In a sense that may already have happened, as it might be possible to support virtual 8086 mode by first switching to legacy mode, but if it is possible, no one seems to have bothered doing it. Similarly, neither the Windows nor the Linux developers seem to have bothered implementing the ability to run kernel code in legacy mode in a 64-bit kernel.
The preceding two paragraphs show that there are present and future indications that this might not always be possible.
Besides, as this is a C++ question, you have to ask yourself why you would want to make such an assumption at all. If it is well written, your code should be able to be compiled for 64-bit mode; you haven't relied on data types being of a specific width, have you?
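For example, this is the kind of assumption that breaks when moving between 32-bit and 64-bit targets; a minimal sketch using the fixed-width types from <cstdint>:
    #include <cstdint>
    #include <cstdio>

    int main() {
        // long and pointers change size between targets
        // (and long stays 4 bytes even on 64-bit Windows).
        std::printf("sizeof(long)     = %zu\n", sizeof(long));
        std::printf("sizeof(void*)    = %zu\n", sizeof(void*));
        // Fixed-width types keep the same size everywhere.
        std::printf("sizeof(int32_t)  = %zu\n", sizeof(std::int32_t));
        std::printf("sizeof(intptr_t) = %zu\n", sizeof(std::intptr_t));  // wide enough to hold a pointer
        return 0;
    }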
No. We have a whole bunch of Debian servers that lack the multi-arch i386 (32-bit) libraries. On Windows Server Core, WOW64 (the 32-bit subsystem) is an optional component. So for both Linux and Windows, there are known 64-bit systems that won't run x86 executables.
I was recently tasked with performing a feasibility study on switching from DOS to Linux as the OS for running our industrial control software (developed internally). In a nutshell, I have been restricted to using Ubuntu 8.04 (with a vendor-supplied kernel upgrade providing drivers for the hardware on the board). As this release is no longer supported, I am unable to update or install software, meaning that I am stuck using GCC 4.2. I want to be able to use C++ and preferably the Boost libraries, but currently it seems I will not be able to do so.
Basically, I am asking how companies/professionals go about using Linux as a development environment. Is what I described above a common occurrence? Do you simply pick a version and a compiler and stick with them throughout the product lifetime to ensure that the development environment doesn't change too much, or can you freely upgrade the kernel, compiler, etc. as you go along? Is it common to be constrained by what a particular vendor can provide? Would anyone be prepared to give their opinion as to whether Ubuntu 8.04 is a suitable choice of OS for development of industrial control software?
I am not a Linux expert at all, but my research and experimentation so far are leading me to the conclusion that I should abandon the Linux approach and use DOS. Our company has no Linux knowledge and is very small, and for personal career reasons I have no interest in learning a redundant technology like DOS.
I realise this is not exactly a yes/no type question but any responses will be gratefully received.
GCC 4.2 has no C++11 support, but its C++03 support should be good, and you should be able to find a version of Boost that works with it quite easily.
Ultimately, Linux has many upsides you won't find in DOS: no segmented memory model, proper virtual memory, and other things that make it easier and faster to develop software, not to mention the additional libraries you might need, since absolutely nobody supports DOS today.
With Linux-based systems there's not much reason to stick with a fixed OS + toolchain version, because backwards compatibility is taken very seriously in the Unix world. Sometimes it is good to target a certain fixed system, but frankly those cases are rather rare, and even then development can be done on up-to-date systems as long as testing is done on the target machine/platform.
Basically, you could just upgrade to, for example, Ubuntu 12.04 LTS (long-term support) for development and stick with it; it is very unlikely that there would be any sort of incompatibility problem on the target platform/machine.
Libraries and such tend to change between Linux distros, between versions of the same distro, and between other *nix OSes.
I once worked on a C++ application that had to run on both Windows and RHEL. I was the 'Linux guy' on the team, so I got to deal with coaxing all the open-source Linux libraries we were using to build and work on Windows (using Cygwin), and with getting the latest changes made by the devs working on Windows to work on Linux.
Midway through development, we upgraded to a newer version of RHEL. It was not a fun experience. Library versions had changed, some had been removed in favor of other 'equivalent' libs, etc. Shaking out all of the problems caused by changing gcc versions took a little while too (granted, the newer gcc version was a bit less forgiving and exposed some stuff in our code that probably wasn't quite right anyway).
A couple of days before a big demo, management informed us that the app needed to run on Solaris as well. That was not a trivial task -- Solaris is NOT Linux. They hinted about wanting it to run on IRIX at one point. Glad that didn't happen.
I would recommend that you pick a specific version of a Linux distro, GCC, etc., and stick with it throughout development. Upgrading that stuff can happen later, when the software is in maintenance. RHEL offers long-term support, at a cost. You might also consider the newly released Ubuntu 12.04 LTS.
If I have code compiled under Solaris 8 and 10, and now have a vendor that wants to use my binary/executable under Linux, could there be compatibility issues?
I am pretty sure I would need to compile/link under Linux for it to work 100%, but I just wanted to know if someone can give me the breakdown as to why it would not work on Linux, even though the executable has everything and there is nothing dynamic about it, as in it should not need anything further to run. Unless we're talking about runtime libs, which, if there is a mismatch, might cause the executable to fail.
You have to recompile your application on Linux.
Linux is a completely different runtime compared to Solaris. Even if you have compiled your application statically, the interface/system calls to the kernel differ between these two operating systems. The processor architecture might be different too, e.g. SPARC vs. x86.
Both Solaris and Linux support most of the standard C and POSIX APIs, so if you haven't used any APIs exclusive to Solaris, recompiling on Linux is often not that big a deal. But you surely should test everything thoroughly, and be aware of endianness and potential 64-bit vs 32-bit issues.
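As a small illustration of the endianness point (SPARC is big-endian, x86 little-endian), here is a quick runtime check you might use before exchanging binary data between the two:
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        const std::uint32_t value = 0x01020304;
        unsigned char bytes[sizeof(value)];
        std::memcpy(bytes, &value, sizeof(value));  // inspect the in-memory byte order
        if (bytes[0] == 0x01)
            std::puts("big-endian (e.g. SPARC)");
        else
            std::puts("little-endian (e.g. x86)");
        return 0;
    }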
Other things that I think will not allow your Solaris binary to run on Linux out of the box are:
the hardware architecture:
1.1 Solaris usually runs on Sun's own SPARC machines, although Solaris 8 through 10 can run on Intel architectures as well;
1.2 Linux usually runs on Intel machines (although it can run on SPARC machines too).
the compilers:
2.1 Solaris 8 uses Sun's own compilers (Sun WorkShop 6+) and standard library implementation, so you'll have different library names, ABI incompatibilities, and so on. Solaris 10 actually comes with GCC, but you're probably not using it (I gather you're building on Solaris 8 only);
2.2 Linux uses g++; the same comments about library names and ABI incompatibilities apply.
I would like to port my C/C++ apps to OS X.
I don't have a Mac, but I have Linux and Windows. Is there any tool for this?
For Linux, there is a prebuilt GCC cross-compiler (built from Apple's publicly available modified GCC sources).
https://launchpad.net/~flosoft/+archive/cross-apple
Update for 2015
After so many years, Visual Studio (the industry-standard IDE) now supports OS X/iOS/Android development.
http://channel9.msdn.com/Events/Visual-Studio/Connect-event-2014/311
Embarcadero's RAD Studio also supports building OS X/iOS/Android apps on Windows.
This answer by Thomas also provides a cross-compilation tool.
For all these options, you still need a real Mac or iOS device to test the application.
I have created a project called OSXCross which aims to target OS X (10.4-10.9) from Linux.
It currently supports Clang 3.2 up to 3.8 (trunk); you can use your distribution's Clang.
In addition you can build an up-to-date vanilla GCC as well (4.6+).
LTO works as well, for both Clang and GCC.
Currently using cctools-870 with ld64-242.
https://github.com/tpoechtrager/osxcross
There appear to be some scripts that have been written to help get you set up for cross-compiling for the Mac; I can't say how good they are, or how applicable to your project. In the documentation, they refer to these instructions for cross-compiling for 10.4, and these for cross-compiling for 10.5; those instructions may be more helpful than the scripts, depending on how well the scripts fit your needs.
If your program is free or open source software, then you may wish instead to create a MacPorts portfile (documentation here), and allow your users to build your program using MacPorts; that is generally the preferred way to install portable free or open source software on Mac OS X. MacPorts has been known to run on Linux in the past, so it may be possible to develop and test your Portfile on Linux (though it will obviously need to be tested on a Mac).
Get "VMware Player"
Get "Mac OS X vm image"
Compile/Debug/Integrate-and-test your code on the new OS to make sure everything works
When you are trying to get something working on multiple platforms, you absolutely must compile/run/integrate/test on the intended platform. You cannot just compile/run on one platform and then say "oh, it should work the same on the other platform".
Even with a really good cross-platform language like Java, you will run into problems where things don't work exactly the same on the other platform.
The only way I have found that respects my time/productivity/ability-to-rapidly iterate on multiple platforms is to use a VM of the other platforms.
There are other solutions, like dual-booting, and others I haven't mentioned, but I find that they don't respect my productivity/time.
Take dual-booting as an example:
I make a change on OS 1
reboot into OS 2
forget something on OS 1
reboot into OS 1
make a change on OS 1
reboot into OS 2 ... AGAIN...
BAM there goes 30 minutes of my time and I haven't done anything productive.
You would need a toolchain to cross-compile for Mach-O, but even if you had that, you won't have the Apple libraries around to develop with. I'm not sure how you would be able to port without them, unfortunately.
Apple development is a strange beast unto itself. OS X uses a port of GCC with some modifications to make it 'appley'. In theory, it's possible to take the sources of Apple's GCC and toolchain, along with the Apple kernel and library headers, and build a cross-compiler on your Windows machine.
Why you'd want to go down this path is beyond me. You can get a cheap Mac mini for $600. The time you invest getting a cross-compiler working right (particularly with a Windows host for Unix tools) will probably cost more than the $600 anyway.
If you're really keen to make your app cross-platform, look into Qt, wxWidgets, or FLTK. All provide cross-platform support with minimal changes to the base code. At least that way, all you need to do is find a Mac to compile your app on, and that's not too hard if you have some technically minded friends who don't mind giving you SSH access to their Mac.
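To give a feel for what "minimal changes to the base code" means in practice, here is a tiny sketch using Qt (Qt 5 widgets assumed); the same source builds unchanged on Windows, macOS, and Linux once Qt is installed.
    #include <QApplication>
    #include <QLabel>

    int main(int argc, char* argv[]) {
        QApplication app(argc, argv);
        QLabel label("The same source code, built on Windows, macOS, and Linux");
        label.show();
        return app.exec();  // Qt runs the platform-specific event loop for you
    }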
You will definitely need OS X somehow. Even if you don't own a Mac, you can still try some alternatives.
I found this small documentation on the net:
http://devs.openttd.org/~truebrain/compile-farm/apple-darwin9.txt
This describes exactly what you want. I am interested in it myself but haven't tested it yet (I found it 10 minutes ago). The documentation seems good, though.
You can rent a Mac in the cloud from this website. Prices start at $1, which should be enough (unless you need root access, in which case you are looking at $49+).
There are a few cross-compiler setups, but nearly all of them are meant for distcc-style distributed compiling. To my knowledge there is no way to directly target the Mac platform without actually having a Mac. The closest you can get without resorting to Qt or wxWidgets is OpenStep with GNUstep or similar, but that's not a true Cocoa platform, just very close.
I know this question isn't very active, but I'm answering anyway. Why don't you try using TransMac, then download the Xcode image and do it that way? Or you can use a VM, or Sosumi. You'll definitely find a video about Sosumi on YouTube.
The short answer is: kind of. You will need to use a cross-platform library like Qt. There are IDEs like Qt Creator that will let you develop on one OS and generate Makefiles for the others. For more information on cross-platform development, check out the cross-platform episodes of this podcast (note that the series isn't over and new episodes appear to come out weekly).
As other answers explain, you can probably compile for a Mac on Windows or Linux, but you won't be able to test your application. So if you're doing professional programming, you should probably spend the $600 on a Mac; or, if you're working on open-source software, find a developer with a Mac who will help you.