Note: before downvoting or anything like that, this is a general question to understand better how everything works.
The question is simply this:
Assuming I compile a program with C++11 features (using VS2012 on Windows), is there a guarantee that this program will run on older processors (like the Core 2 Duo, which most laptops have)?
I'm currently working with VS2010, but I found libraries that need C++11 features.
So I want to port the whole project to VS2012, but my knowledge of how this will work is limited.
Please correct anything wrong in the question.
Edit:
Two more questions:
1 - Can I "mix" a compiled C++11 program with an older one?
For example, calling functions that live in the new version (a .dll) from an old version (an .exe), so I have two files:
one compiled with VS2010, the other compiled with VS2012. With DLL export, can they work together like that?
2 - Can you suggest a better environment than VS2012 for Windows?
As long as the architecture the target is built for is the same (x86 for 32-bit or amd64 for 64-bit), you shouldn't have any issue.
Of course, you will need to provide the older machine with the correct runtime library to run your program (for that architecture).
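As for mixing a VS2010 .exe with a VS2012 .dll (question 1 in the edit above): the usual advice is to keep the DLL boundary to a plain C interface, because the C++ ABI and the CRT are not guaranteed to match across compiler versions (so no std::string, no exceptions, and no cross-module new/delete at the boundary). Here is a minimal sketch of that pattern; the library name and function are made up for the example:

    // mylib.h - shared header; only plain C types cross the DLL boundary
    // (no std::string, no exceptions, no STL containers).
    #ifdef MYLIB_EXPORTS
    #define MYLIB_API extern "C" __declspec(dllexport)
    #else
    #define MYLIB_API extern "C" __declspec(dllimport)
    #endif

    // Hypothetical export: fills the caller's buffer, returns the number
    // of bytes written (excluding the terminator), or -1 if it won't fit.
    MYLIB_API int mylib_get_greeting(char* buffer, int buffer_size);

    // mylib.cpp - built with VS2012 (define MYLIB_EXPORTS); C++11 is free
    // to appear *inside* the DLL, just not in the interface.
    #include <string>
    #include <cstring>

    MYLIB_API int mylib_get_greeting(char* buffer, int buffer_size)
    {
        std::string s = "hello from the VS2012 side";
        if (buffer_size <= static_cast<int>(s.size()))
            return -1;
        std::memcpy(buffer, s.c_str(), s.size() + 1);
        return static_cast<int>(s.size());
    }

Note that the caller provides the buffer: memory should be allocated and freed on the same side of the boundary, since the two modules may use different CRT heaps.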
Yes, the compiler requires additional libraries to build the program, but this shouldn't affect the ability to run on older processors. The only time this changes is when you're trying to run a 64-bit program on a 32-bit processor.
Porting to VS2012 is simple: open the solution in VS2012 and save it as a VS2012 solution. It should all be fine.
Edit: odds are, if you're new to programming, all of your programs are compiled for 32-bit processors by default unless you change this, so you shouldn't worry. You can run 32-bit programs on 64-bit processors, just not the other way around. If you really want to step it up, you can make a program that runs on both ;)
Given a compiled executable, the usual requirements to run it are:
ABI
platform
libraries
and since Windows is a commercial product, depending on what you are doing, you could add another factor:
environment
which means that sometimes a software house intentionally breaks compatibility with other products to sell more stuff.
In general VS is not that good; it's certainly not the best compiler I have ever used, and basically anything from GCC to Clang to MinGW can supersede VS easily. But VS is the official compiler and environment for Windows, so it's what you have to deal with most of the time.
If you have fulfilled the listed requirements, you are good to go.
By the way, a Core 2 Duo is not that old, and the current Core i generation is not that different either.
Related
Suppose we take a compiled language, for example C++. Now let's take an example framework, say Qt. Qt has its source code publicly available and lets users download the binary files and use its API. My question, however: when they compiled their code, it was compiled for their specific hardware, operating system, and so on. I understand that much software requires recompilation for different operating systems (including 32- vs 64-bit) and offers multiple downloads on its website, but how does it not go even further, making binaries hardware specific as well, and eventually making the redistribution of compiled executables extremely frustrating to produce?
Code gets compiled to a target base CPU (e.g. 32-bit x86, x86_64, or ARM), but not necessarily a specific processor like the Core i9-10900K. By default, the compiler typically generates the code to run on the widest range of processors. And Intel and AMD guarantee forward compatibility for running that code on newer processors. Compilers often offer switches for optimizing to run on newer processors with new instruction sets, but you rarely do that since not all your customers have that config. Or perhaps you build your code twice (once for older processors, and an optimized build for newer processors).
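One way to make that "build twice" idea concrete is to check at runtime which instruction sets the CPU actually supports before choosing a code path. A minimal sketch using MSVC's __cpuid intrinsic (the two paths are just placeholders here):

    // cpu_dispatch.cpp - detect SSE4.2 support at runtime (MSVC).
    #include <intrin.h>
    #include <iostream>

    static bool cpu_has_sse42()
    {
        int info[4] = {0};
        __cpuid(info, 1);                  // leaf 1: feature bits
        return (info[2] & (1 << 20)) != 0; // ECX bit 20 = SSE4.2
    }

    int main()
    {
        if (cpu_has_sse42())
            std::cout << "SSE4.2 available - could call the optimized path\n";
        else
            std::cout << "Falling back to the baseline x86 path\n";
    }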
There's also a concept called cross-compiling. That's where the compiler generates code for a completely different processor than it runs on. Such is the case when you build your iOS app on a Mac. The compiler itself is an x86_64 program, but it's generating ARM CPU instruction set to run on the iPhone.
Code gets compiled and linked against a certain set of OS APIs and external runtime libraries (including the C/C++ runtime). If you want your code to run on Windows 7 or Mac OS X Mavericks, you wouldn't statically link to an API that only exists on Windows 10 or macOS Big Sur. The code would compile, but it wouldn't run on the older operating systems. Instead, you'd use a workaround or conditionally load the API if it is available. Microsoft and Apple provide forward compatibility by keeping those same runtime library APIs available on later OS releases.
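To make the "conditionally load the API" point concrete, here is a minimal Windows sketch. GetTickCount64 is a real API that only exists on Vista and later, so code meant to also run on XP would look it up at runtime instead of linking to it directly:

    // Look up GetTickCount64 at runtime so the binary still loads on
    // systems where the API doesn't exist (pre-Vista Windows).
    #include <windows.h>
    #include <iostream>

    typedef ULONGLONG (WINAPI *GetTickCount64Fn)(void);

    int main()
    {
        HMODULE kernel32 = GetModuleHandleW(L"kernel32.dll");
        GetTickCount64Fn pGetTickCount64 =
            (GetTickCount64Fn)GetProcAddress(kernel32, "GetTickCount64");

        if (pGetTickCount64)
            std::cout << "Uptime (ms): " << pGetTickCount64() << "\n";
        else
            std::cout << "API not available - 32-bit fallback: "
                      << GetTickCount() << "\n";
    }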
Additionally, Windows supports running 32-bit processes on 64-bit chips and OSes. Macs can even emulate x86_64 on their new ARM-based devices coming out later this year. But I digress.
As for Qt, they actually offer several pre-built configurations for their reference binary downloads, because, at least on Windows, the MSVCRT (the C-runtime APIs from Visual Studio) is closely tied to the compiler version of Visual Studio. So they offer various downloads to match the configuration you want to build your code for (32-bit, 64-bit, VS2017, VS2019, etc.). So when you put together a complete application with 3rd-party dependencies, some of these build, linkage, and CPU/OS configs have to be accounted for.
Is it safe to assume that any x86-compiled app will always run under the x64 edition of the same OS it was compiled on?
As far as I know, for Windows the answer is "yes": the Windows x86 emulation layer (WoW64) is built for exactly this purpose. But I just want to reconfirm this with the experts here.
What about Unix, Linux? Are there any caveats?
No. For x86 code to run, it needs to be run in compatibility (legacy) mode, and if the OS doesn't support running processes in compatibility mode the program will most likely not be able to run.
Linux and, AFAIK, Windows currently support compatibility mode, and it looks like many others support it too, more or less. My understanding is that NetBSD requires a special module for this, so it isn't necessarily supported without special care, and it shows that there may well be OSes where the possibility has been dropped completely.
Then there's the possibility of breaking backwards compatibility in the future. That has already happened on the CPU side: virtual 8086 mode is no longer available from long mode, which is why you can't run 16-bit programs anymore under 64-bit Windows or Linux.
It could also happen on the OS side: the developers could decide not to support compatibility mode anymore. In a sense this has already happened, since it might be possible to support virtual 8086 mode by first switching to legacy mode, but nobody seems to have bothered doing it. Similarly, neither the Windows nor the Linux developers seem to have bothered implementing the possibility of running kernel code in legacy mode in a 64-bit kernel.
The preceding two paragraphs show that there are present, or at least future, indications that this might not always be possible.
Besides, as this is a C++ question, you should ask yourself why you would want to make such an assumption in the first place. If your code is well written, it should compile for 64-bit mode as well - you haven't relied on data types being a specific width, have you?
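On that last point, a quick sketch of how to make width assumptions explicit at compile time instead of discovering them on the 64-bit build (an illustration, not from the original answer):

    // width_checks.cpp - make hidden width assumptions explicit so the
    // compiler rejects the build if they no longer hold.
    #include <cstdint>

    // If some legacy code truly requires 32-bit pointers, say so loudly;
    // a 64-bit build then fails here instead of misbehaving at runtime.
    // static_assert(sizeof(void*) == 4, "this code assumes 32-bit pointers");

    // Better: use fixed-width types wherever the size matters.
    std::int32_t counter = 0;  // 32 bits on every platform
    std::int64_t total   = 0;  // note: 'long' is 32 bits on 64-bit Windows
                               // (LLP64) but 64 bits on 64-bit Linux (LP64)

    int main()
    {
        total += counter;
        return 0;
    }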
No. We have a whole bunch of Debian servers which lack the Multi-Arch i386 (32-bit) libraries. And on Windows Server Core, WoW64 (the 32-bit subsystem) is optional. So for both Linux and Windows, there are known 64-bit systems that won't run x86 executables.
Firstly, please forgive my ignorance regarding these matters, I have done a search and not found any comprehensive answers as of yet.
I plan on learning how to develop for Windows, however I am very fond of the GNU toolchain and don't really want to move onto using big environments like Visual Studio until I feel more comfortable with the underlying basics.
From what I understand, one can download the Windows SDK, which contains the headers and libraries needed to build native Windows applications.
Is the SDK literally just a collection of libraries and headers? If so, as my logic goes, it should be possible to point MinGW towards these libraries/headers and simply build as normal.
When I build using Visual Studio, I can't see which preprocessor directives are being defined, what is being linked in, and so on. As I am still learning, I like to know exactly what is going on, preferably by having to define and link things manually myself. Hence the question.
So, what I want to know: is my logic correct?
Again, apologies if the question is rudimentary, I am still learning.
P.S. I am planning to develop Windows applications in a Windows environment; this is not a question about cross-compilation.
Thanks!
MinGW is not compatible with the official Windows SDK, one of the reasons being that the SDK contains many VS-specific things (as opposed to the GCC base of MinGW). MinGW has adapted many of the necessary files, and for many programs this is enough.
You don't need to know the VS project settings for a program; MinGW is still GCC at its core and is used as such. If you can compile programs with GCC on Linux, learning how to use MinGW won't be hard.
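To illustrate (not part of the original answer): a minimal Win32 program built with MinGW from the command line, with everything visible, looks like this. The file name is arbitrary:

    // hello.cpp - build with: g++ hello.cpp -o hello.exe -mwindows
    // (MinGW links user32/kernel32 by default; -mwindows selects the
    // GUI subsystem so no console window appears)
    #include <windows.h>

    int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
    {
        MessageBoxW(NULL, L"Hello from MinGW", L"Demo", MB_OK);
        return 0;
    }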
If you need functions/structures/etc. which are not yet part of it, you're out of luck, other than doing the adaptation yourself, which can be anything between very easy and very hard, depending on the case.
Additionally, proper thread usage is a bit quirky (it has some "hidden" pitfalls which could go unnoticed in an actual program for years, but then...).
(While this is a disadvantage compared to VS, you'll get C++11/14 support (while VS hasn't even finished with 11, see link), better optimization in many cases, etc.)
If you're choosing what exactly to download, look at MinGW-w64 instead of the "original" old one. The original project has more or less stopped, has poor lib support compared to w64, no 64-bit compiler, etc. (and don't misunderstand the "w64": it can be used for 32-bit programs too).
I am required to write a C++ application to run on an embedded Linux setup (DMP Vortex86DX processor). The vendor provides a minimal linux installation image that can be installed to the board and contains appropriate hardware drivers. My question is motivated by the answer to my previous question about writing Linux software on a particular kernel to run on a different kernel . I don't really know where to start when it comes to writing the software with regards to ensuring compatibility.
My instinctive approach would be to install the same version of g++ on the embedded device and on my desktop development machine, write the application on the dev machine, copy it to the board, and compile it there. This seems like madness though, and I find it hard to believe that this is how embedded software is developed. With regards to the answer to my previous question, is there a way I can simply build on my desktop but use the version of glibc that exists on the embedded device - and if so, how can I enforce linkage to a specific version? Or is it possible to build everything statically so that the application doesn't link to anything dynamically (I doubt this is possible)?
I am a total novice to embedded development, and foresee months of frustration unless I can get hold of some good advice or resources. Any pointers or suggestion of where to start will be very gratefully received no matter how simple or trivial they seem - I really am starting at the very bottom with regards to embedded stuff.
OK, given that the Vortex86SX/DX/MX claims to be x86 compatible, a small set of compiler switches should let you compile code for your target machine: -m32 to ensure 32-bit code, and no -march switch targeting a specific newer CPU.
Then you'll need to link your code. As long as you don't use anything fancy, but simple established glibc functions, I'd expect the ABI to be the same on your development machine and the embedded system. In other words, you compile against your host libraries, copy the binary to the embedded system, and it should simply run using the libraries available there.
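If the board's image really does use glibc, a quick sanity check under those assumptions is to build a trivial program on the host with those flags and run it on the board. gnu_get_libc_version() is a real glibc function that reports the runtime version (building with -m32 requires the 32-bit multilib packages on the host):

    // glibc_check.cpp - build on the host:
    //   g++ -m32 glibc_check.cpp -o glibc_check
    // then copy the binary to the board and run it there.
    #include <gnu/libc-version.h>
    #include <cstdio>

    int main()
    {
        // For the copied binary to run, the board's glibc must be at
        // least as new as the one it was linked against on the host.
        std::printf("glibc runtime version: %s\n", gnu_get_libc_version());
        return 0;
    }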
If X-Linux were to use some other libc, like uclibc or similar, then you'd need a cross compiler on your host. I have little experience with Ubuntu in that regard, but I know that the sys-devel/crossdev package for Gentoo linux makes generation of cross-compilers very easy. This can be both for different architectures (not needed in your case) and different libraries (like e.g. uclibc).
I'd say simply give copying the binaries a try, and report back if you encounter any problems there.
I have a Windows application built using Visual C++. It's being built and run in a 32-bit Windows environment. Now I need to make sure it works in a Windows Vista / 7 64-bit environment. What are all the things I need to consider for this porting process?
That's not porting from 32-bit to 64-bit; that's just running your 32-bit code on a 64-bit machine to make sure it still works.
The way to do that is to just test all the functionality on the 64-bit machine, just as you do every time you release a new version, right? :-)
If you really want to port it (i.e., compile it as a 64-bit executable), the first step is to just try it. You may find it works as-is. I'd only worry about porting problems if you try it and problems actually appear.
Then, and only then, would I go looking for the causes. Otherwise it's potentially wasted effort.
Porting guide: http://msdn.microsoft.com/en-us/library/aa384190(VS.85).aspx
Before building your project in x64 mode:
Include all necessary 64-bit DLLs required by your project.
Include library files under Linker - Additional Dependencies in the configuration properties.
Add the necessary preprocessor definitions under C/C++ - Preprocessor in the configuration properties.
Enable the 64-bit portability warnings; when compiling, the compiler then warns about things such as:
conversion of a datatype from int to size_t, where there might be loss of data
storing a pointer address in a 32-bit integer
magic numbers
Refer to the link below for more on errors and warnings while porting:
http://www.viva64.com/en/a/0065/
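A small sketch of the kinds of warnings meant above (illustrative examples, not taken from the linked article):

    // port_pitfalls.cpp - typical 32-bit to 64-bit porting hazards.
    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    int main()
    {
        const char* text = "hello";

        // int len = strlen(text);            // warns: size_t (64-bit) -> int
        std::size_t len = std::strlen(text);  // keep the full width instead

        // int addr = (int)text;              // truncates the address on x64
        std::intptr_t addr = reinterpret_cast<std::intptr_t>(text);

        // char buf[4];                       // magic number: only equals
        char buf[sizeof(void*)];              // sizeof(pointer) on x86

        std::memcpy(buf, &text, sizeof(text));
        return static_cast<int>(len + (addr & 1));
    }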
One thing to watch out for is whether you're storing plain old data (POD) in files, or passing POD data to other apps via IPC, sockets, etc.
We also had code which assumed 4-byte longs and 4-byte pointers. Needless to say, we removed these anachronisms.
Compilers are usually good at spotting the other kinds of errors, i.e. long-to-int conversions etc., so it's usually just a case of heeding your compiler's warnings and altering your code accordingly.