Say I developed a simple piece of software using C++, and I want to distribute it for free. For simplicity's sake, assume it is some sort of GUI. Now when I compile it on Windows and run the .exe, it works as intended. However, the end user might have a different version of Windows than I do. So the .exe file may not run because of differences in machine code or instruction set (I think?). In this situation, is it common practice to ship the compiler to the end user so that my source code is compiled on their machine, or do I use a cross compiler instead? Or how is it commonly done? Sorry for the stupid question.
Okay, so I know that there are multiple C++ versions. And I don't really know much about the differences between them, but my question is:
Let's say I made a C++ application in C++11 and sent it off to another computer. Would it come up with errors from other versions of C++, or would it automatically detect the version and run with it? Or am I getting this wrong, and is it defined at compile time? Someone please tell me, because I have yet to find a single answer to my question on Google.
It depends on whether you copy the source code to the other machine and compile it there, or compile it on your machine and send the resulting binary to the other computer.
C++ is translated by the compiler to machine code, which runs directly on the processor. Any computer with a compatible processor will understand the machine code, but there is more to it than that. The program needs to interface with a filesystem, graphics adapters, and so on. This part is typically handled by the operating system, in different ways of course. Even if some of this is abstracted away by C++ libraries, the calls into the operating system are different and specific to it.
A compiled binary for Ubuntu will not run on Windows, for example, even if both computers have the same processor and hardware.
If you copy the source code to the other machine and compile it there (or use a cross-compiler), your program should compile and run fine, provided you don't use OS-specific features.
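To make "OS-specific features" concrete, here is a minimal sketch (the helper name is mine, not from the question) in which the same C++ function must call a different operating-system API on each platform, selected by predefined macros:

#include <string>

#ifdef _WIN32
#include <windows.h>    // Windows-specific API
#else
#include <limits.h>     // PATH_MAX
#include <unistd.h>     // POSIX getcwd()
#endif

// Hypothetical helper: ask the OS for the current working directory.
// The C++ around it is identical everywhere; only the OS call differs.
std::string currentDirectory()
{
#ifdef _WIN32
    char buf[MAX_PATH];
    return GetCurrentDirectoryA(MAX_PATH, buf) ? buf : "";
#else
    char buf[PATH_MAX];
    return getcwd(buf, sizeof buf) ? buf : "";
#endif
}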
The C++ version does matter for compilation: you need a C++11-capable compiler if you have C++11 source code, of course. But once the program is compiled, the language version does not matter any more.
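For example, the snippet below only compiles with a C++11-capable compiler (e.g. g++ -std=c++11 main.cpp), yet the binary it produces is plain machine code with no trace of the language version left in it:

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v{1, 2, 3};                 // C++11 brace initialization
    auto doubler = [](int x) { return 2 * x; };  // C++11 lambda with auto
    for (int x : v)                              // C++11 range-based for
        std::cout << doubler(x) << '\n';
}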
C++ is compiled to machine code, which can then run on any computer with that architecture, e.g. i386 or x64 (putting processor features like SSE aside).
Java, to pick a counterexample, is different. There the code is compiled to a bytecode format that is machine-independent. This bytecode format is read and understood by the Java Virtual Machine (JVM). The JVM then has to be available for your architecture, and the correct version has to be installed.
Or am I getting this wrong and is it defined at compile time?
This is precisely the idea: the code is compiled, and after that the language version is almost irrelevant. The only possible pitfall would be if a newer C++ version introduced a breaking change to the standard C++ library (the library, not the language itself!). However, since the vast majority of that library is template code, it is compiled along with your own code anyway; it is basically baked into your .exe file, so it is just as portable as your code. Also, both the C and C++ designers take great care not to break old code, so you can expect even those parts that are provided by the system itself (the standard C library) not to break anything.
So, even though there are things that could break in theory, pure C++ code should run fine on all machines that understand the same .exe format as the machine it was compiled on.
So I made a C++ console game. Now I'd like to "release" the game. I want to give out only the .exe file and not the code. How do I go about this? I'd like to make sure it will run on all Windows devices.
I used the following headers:
iostream
windows.h
MMSystem.h
conio.h
fstream
ctime
string
string.h
*I used namespace std
*I used Code::Blocks 13.12 with MinGW
And I used the following library:
libwinmm.a
Thank you in advance
There are many different ways of installing applications. You could go with an installer like Inno Setup, or just ship a regular ZIP file. Some programs can even be standalone, packaging all resources within the executable, but to my knowledge this is not an easy option for C++.
I suppose the most basic way is to create different builds for different architectures with static libraries, then find any other DLLs specific to each architecture and bundle everything together in one folder. Supporting x86/x86-64/ARM should be enough for most purposes. I do know that LLVM/Clang and GCC have extensive support for many architectures; if need be, you should be able to download the source code of the libraries you use and compile them for each architecture you plan to support, with the compilation options each one needs.
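If it helps to check which architecture a build targets, the compiler's predefined macros can be tested at compile time. A small sketch follows (macro coverage varies between compilers, so treat the list as illustrative):

#include <iostream>

int main()
{
#if defined(__x86_64__) || defined(_M_X64)
    std::cout << "built for x86-64\n";
#elif defined(__i386__) || defined(_M_IX86)
    std::cout << "built for 32-bit x86\n";
#elif defined(__aarch64__) || defined(_M_ARM64) || defined(__arm__)
    std::cout << "built for ARM\n";
#else
    std::cout << "built for some other architecture\n";
#endif
}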
A virtual machine can also be helpful for this cross-compilation and compatibility testing.
tldr; Get all the libraries you need in either static or dynamic (DLL) format. Check that they are of the right architecture (x86 programs/code will not run on MIPS and vice versa). Get all your resources. Get a virtual machine, and then test your program on it. Keep testing until all the dependency problems go away.
Note: when I did this, I actually had some compatibility issues with, of all things, MinGW-w64. Just a note: you may need some DLLs from MinGW, or, if you're using Cygwin, you of course need the Cygwin DLL. I don't know much about MSVC, but I would assume it too has DLLs that are needed on some level, especially if you decide to support an outdated Windows OS.
I'm a bit stuck trying to port my code from Windows to Linux. I created a Bluetooth-based program, which seems to work well on Windows, that I need to get working in Ubuntu.
Unfortunately the computer with Linux on it isn't mine, so I can't resort to easy hacks using Wine or other heavyweight workarounds; I really need some advice on porting my code across so it'll be recognised and work on the different OS.
The computer does have Code::Blocks installed, which from what I understand is fairly useful for cross-OS compiling, but I'm not getting too far.
The original code was written in Visual Studio 2013, and understandably it doesn't play nice in Code::Blocks. I'm getting a lot of 'can't find header' errors, but I don't think simply finding all the missing headers and copying them across will work (will it?).
I need some suggestions on the easiest, standalone solution for my situation. By standalone I mean I want to keep as much of the needed changes and libraries in my project as possible, rather than changing/installing lots of things on the Linux machine.
I don't really know where to start and searches online don't seem to be too helpful.
Thanks!
First of all, I suggest you examine your Windows code and use the PIMPL idiom in your classes to isolate all platform-dependent code into separate Windows and Linux class implementations. Your main platform-independent class then simply delegates to the appropriate implementation, selected at compile time using preprocessor macros that include the matching platform header and .cpp files.
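Here is a minimal sketch of that arrangement, with hypothetical names; in a real project the two Impl definitions would live in separate bluetooth_win.cpp and bluetooth_linux.cpp files, only one of which is compiled into each build:

#include <iostream>
#include <memory>

class Bluetooth
{
public:
    Bluetooth();
    ~Bluetooth();                 // defined where Impl is complete
    void scan();                  // same public API on every platform
private:
    struct Impl;                  // opaque: callers never see OS headers
    std::unique_ptr<Impl> impl_;
};

#ifdef _WIN32
struct Bluetooth::Impl {          // Windows implementation
    void scan() { std::cout << "scan via the Windows Bluetooth API\n"; }
};
#else
struct Bluetooth::Impl {          // Linux implementation (e.g. BlueZ)
    void scan() { std::cout << "scan via BlueZ\n"; }
};
#endif

Bluetooth::Bluetooth() : impl_(new Impl) {}
Bluetooth::~Bluetooth() = default;
void Bluetooth::scan() { impl_->scan(); }

int main()
{
    Bluetooth bt;
    bt.scan();   // identical call site, platform-specific behaviour
}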
Beyond this, many runtime functions as implemented in Visual Studio are either Microsoft-specific, or have been 'modified' so that they are no longer compatible with, or even named the same as, the standard ones you will find on Linux. For these, you'll need a platform.h and platform.cpp file, with separate sections for the two operating systems, containing the missing functions either in macro-defined form (e.g. Windows: strnicmp(), Linux: strncasecmp()), or written yourself. Example:
// Linux section: map the MSVC-only name to its POSIX equivalent.
// (__linux__ is predefined by compilers targeting Linux; a plain
// LINUX macro would have to be defined by your own build system.)
#ifdef __linux__
#define strnicmp strncasecmp
#endif
The final work involved depends on how many Windows-specific calls you have in your code.
I am a C++ developer and I am interested in the Cocos2d-x framework. I know that you can write C++ code using the framework, compile it for different platforms, and that's it: you have your 2D game on Windows, Android, and iOS. This is amazing, but I don't understand how it is done, and consequently I worry that something I have done for one platform will not work on another. To go into detail, let me spell out my concerns. For that, let's clarify what compiling and running code actually mean.
What does it mean to compile C++ code for a platform (a platform being OS + CPU architecture)? It means that the C++ source code is mapped to instructions understandable by a concrete CPU architecture, and that the final set of instructions is packaged into an executable file in a format a concrete OS understands, so that only an OS (or OSes) that knows how to handle that format can run it. Also, we should not forget that among the instructions the executable contains there can be system calls, which are likewise OS-specific.
What does it mean to run the executable? It means that the OS knows the format of the particular executable file. When you give the run command, the OS loads it into virtual memory and starts executing its CPU instructions step by step. (Very rough, but in general it is like that, I guess.)
Now returning to Cocos2d-x: how is it possible to compile C++ code so that it can be recognized and loaded by different OSes and different CPUs? What mechanisms do we use in order to get the appropriate .apk, .ipa, or .exe files? Is there a trap we can fall into when using system calls or processor-specific instructions? In general, how are all these problems solved? Please explain the process, for example for Android; it would be great for iOS too. :)
Cocos2d-x has about 95% of its code shared across all target OSes and platforms, and about 5% written for each concrete platform. For example, there are some Java sources for Android, and there are some Obj-C files for iOS. There is also some platform-specific C++, with #define used to separate it; an example of such code is working with files, which is written in C++ but differs from platform to platform.
Generating the appropriate output file is the responsibility of the compiler and SDK used for the target platform. For example, Xcode with the Clang compiler will generate the iOS build, while the Android NDK, with GCC inside, will build the .apk.
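To make that #define separation concrete: Cocos2d-x defines platform macros (CC_TARGET_PLATFORM and the CC_PLATFORM_* constants) that platform-specific C++ branches on. Below is a sketch in that style; the helper function and its return values are mine, not the engine's actual code, and the include path is assumed from Cocos2d-x 3.x:

#include "platform/CCPlatformConfig.h"  // defines CC_TARGET_PLATFORM

// Hypothetical helper showing how one C++ code base branches per platform.
static const char* defaultAssetRoot()
{
#if (CC_TARGET_PLATFORM == CC_PLATFORM_ANDROID)
    return "assets/";      // packed inside the .apk
#elif (CC_TARGET_PLATFORM == CC_PLATFORM_IOS)
    return "";             // resources live inside the app bundle
#elif (CC_TARGET_PLATFORM == CC_PLATFORM_WIN32)
    return "Resources/";   // desktop build reads from a folder
#else
    return "Resources/";
#endif
}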
I am currently developing a C++ command line utility to be distributed as an open-source utility on Github. However, I want people who download the program to be able to easily compile and run the program on any platform (specifically Mac, Linux, and Windows) in as few steps as possible. Assuming only small changes have to be made to the code to make it compatible with the various platform-independent C++ compilers (g++ and win32), how can I do this? Are makefiles relevant?
My advice is: do not use makefiles. Maintaining the files for big enough projects is tedious, and errors sometimes happen that you don't catch immediately (because a stale *.o file is still there).
Makefiles are indeed highly relevant. You may find that you need (at least) two different makefiles to compensate for the fact that you have different compilers.
It's hard to be specific about how you solve this, since it depends on how complex the project is. It may be easiest to write a script/batch file and just document it ("Use the command build.sh on Linux/Unix, and build.bat on Windows"), and then let the respective files deal with, for example, setting up the name of the compiler, the flags, etc.
Or you can have an include in the makefile that is determined by the architecture. Or different makefiles.
If the project is REALLY simple, it may be enough to just provide a basic makefile, but that's unlikely: compiling x.cpp on Linux/MacOS makes an object file called x.o, while on Windows the object file is called x.obj. Libraries have different names, DLLs have different names, and on Linux/MacOS the final executable (typically) has no extension, so it's called "myprog", where the executable under Windows is called "myprog.exe".
These sorts of differences mean that the makefile needs to be different.
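As a tiny illustration of those naming differences, the same source file builds everywhere; only the commands and artifact names change (the invocations in the comments are typical ones, not the only options):

// main.cpp -- identical source on every platform; only the build
// commands and the artifact names differ:
//
//   Linux/MacOS:  g++ -c main.cpp             -> main.o
//                 g++ -o myprog main.o        -> myprog     (no extension)
//   Windows:      cl /c main.cpp              -> main.obj
//                 cl /Fe:myprog.exe main.obj  -> myprog.exe
#include <iostream>

int main()
{
    std::cout << "same code, different build artifacts\n";
}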