What makes a C/C++ program 32/64 bit? [closed]

I have written many programs since I learned to program about a year ago, but I never could understand why I have to download separate programs for my 32- and 64-bit machines. When I write a program on my 32-bit machine, it runs on my 64-bit machine. So my question is: what can you do in C/C++ that defines the bit type of the program?

What makes a C/C++ program 32/64 bit?
Whether it's 32-bit or 64-bit is baked in at compile/link time along with other things, like the target architecture.
When I write a program on my 32-bit machine, it runs on my 64-bit machine
Assuming your OS is Windows, and you're compiling the application on a 32-bit machine and outputting a 32-bit executable (i.e. not cross-compiling), then Windows has a technology called Windows 32-bit on Windows 64-bit (WOW64) that allows 32-bit code to run on a 64-bit operating system.
I never could understand why I have to download separate programs for my 32- and 64-bit machines
This isn't universally true; you only need to download separate versions of some programs. Things like drivers interface more closely with the kernel and need to have "bitness parity": if you have a 32-bit operating system you need 32-bit drivers, and if you have a 64-bit operating system you need 64-bit drivers.
For the most part, you can get away with 32-bit applications on 64-bit Windows thanks to WOW64.
So my question is: what can you do in C/C++ that defines the bit type of the program
Nothing. It's up to the compiler.

What really matters is the CPU architecture. There are PPC, ARM, i386, etc.; some of the older, most famous CPUs are the Z80 and the 6502. Each runs a different instruction set (binary). Some 64-bit processors extend, or are capable of running, the instruction set of the widespread i386 CPUs.
There are 64-bit CPUs that cannot run code built for other 64-bit CPUs. 64-bit binaries are definitely not compatible with 32-bit CPUs, because those CPUs are missing a lot of instructions. I haven't heard of a 32-bit CPU that supports some 64-bit instructions; that would be of little use anyway, since a CPU that can't fully support 64-bit operation would make running a 64-bit binary pointless.
The C/C++ compiler generates the instructions for the target machine (64-bit, 32-bit, ARM, PPC, etc.), and those instructions (also known as the binary) differ between targets. It's like speaking a foreign language to a machine: it doesn't understand what to do.

On a 64-bit machine, compare the output of these two builds of bits.cpp:

// bits.cpp
#include <iostream>
int main() {
    std::cout << sizeof(void*) << std::endl;
    return 0;
}

g++ -Wall -Wextra -m32 bits.cpp
g++ -Wall -Wextra -m64 bits.cpp
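If the program itself needs to know which target it was built for, compilers also predefine target macros that can be tested at compile time. A minimal sketch (macro names vary by toolchain; these cover the common GCC, Clang, and MSVC spellings):

#include <iostream>

int main() {
#if defined(_WIN64) || defined(__x86_64__) || defined(__aarch64__)
    std::cout << "built as a 64-bit binary\n";   // check _WIN64 before _WIN32: both are defined on Win64
#elif defined(_WIN32) || defined(__i386__) || defined(__arm__)
    std::cout << "built as a 32-bit binary\n";
#else
    std::cout << "unrecognized target\n";
#endif
    return 0;
}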

Related

What components of a machine affect the machine code produced given a C++ file input? [closed]

I wrote this question, What affects generated machine code at each step of the compilation process?, and realized that it was much too broad. So I will try to ask about each component of it in a separate question.
The first question I will ask is: given an arbitrary C++ file, what affects the resulting executable binary file it produces? So far I understand that each of the following plays a role:
The CPU architecture, like x86_64, ARM64, PowerPC, MicroBlaze, etc.
The kernel of a machine, like Linux kernel v5.18 or v5.17, a Windows kernel version, a Mac kernel version, etc.
The operating system, such as Debian, CentOS, Windows 7, Windows 10, Mac OS X Mountain Lion, Mac OS X Sierra.
(I'm not sure what the OS changes on top of what the kernel changes.)
Finally, the tools used to compile, assemble, and link: things like GCC, Clang, Visual Studio (VS), the GNU assembler, the GNU compiler, the VS compiler, the VS linker, etc.
So the two questions I have from this are:
Is there some other component that I left out that affects what the final executable looks like?
And does the operating system play a role in affecting the final executable machine code? Because I thought it would all be due to the kernel.
The main one I think you're missing is the Application Binary Interface.  Part of the ABI is the calling convention, which determines certain properties of register usage and parameter passing, so these directly affect the generated machine code.
The kernel has a loader, and that loader works with file formats, like ELF or PE. These influence the machine code by determining the layout of the process, how the program's code & data are loaded into memory, and how the machine code instructions access data and other code. Some environments want position-independent code, for example, which affects some of the machine code instructions.
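To make the position-independence point concrete, you can compile the same file both ways and diff the generated assembly. A rough sketch (assumes g++ on Linux; example.cpp is any file that writes to an extern global):

g++ -O2 -S -fno-pic example.cpp -o plain.s
g++ -O2 -S -fPIC example.cpp -o pic.s
diff plain.s pic.s    # the PIC build reaches the global through the GOT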
The CPU architecture, like x86_64, ARM64, PowerPC, MicroBlaze, etc.
Yes. The instruction set architecture defines the available instructions, which in turn define the available CPU registers and how they can be used, as well as the sizes of things like pointers.
The kernel of a machine, like Linux kernel v5.18 or v5.17, a Windows kernel version, a Mac kernel version, etc.
Not really.  The operating system choice influences the ABI, which is very relevant, though.
The operating system, such as Debian, CentOS, Windows 7, Windows 10, Mac OS X Mountain Lion, Mac OS X Sierra.
The operating system usually dictates the ABI, which is important.
the tools used to compile, assemble, and link: things like GCC, Clang, Visual Studio (VS), the GNU assembler, the GNU compiler, the VS compiler, the VS linker, etc.
Of course. Different tools produce different machine code; sometimes the differences are functionally equivalent, though some tools produce better machine code than others for some inputs.
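A quick way to see this is to compile the same file with two toolchains and compare the assembly (a sketch, assuming both g++ and clang++ are installed):

g++ -O2 -S example.cpp -o gcc.s
clang++ -O2 -S example.cpp -o clang.s
diff gcc.s clang.s    # instruction selection and scheduling usually differ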

Is it safe to assume that any x86 compiled app would always run under x64 edition?

Is it safe to assume that any x86 compiled app would always run under x64 edition of same OS the app was compiled in?
As far as I know, for Windows the answer is "Yes": the Windows x86 emulation layer (WOW64) is built for exactly this purpose. But I just want to reconfirm this with the experts here.
What about Unix, Linux? Are there any caveats?
No. For x86 code to run, it needs to run in compatibility or legacy mode. If the OS doesn't support running processes in compatibility mode, the program will most likely not be able to run.
Linux and, AFAIK, Windows currently support compatibility mode, and it looks like many more systems support it too, more or less. My understanding is that NetBSD requires a special module for this, so support doesn't necessarily come without special care, and it shows that it's quite possible for an OS to have dropped the capability completely.
Then, in addition, there's the possibility of breaking backwards compatibility in the future. That has already happened on the CPU side: virtual 8086 mode is no longer available from long mode, which is why you can't run 16-bit programs anymore under 64-bit Windows or Linux.
It could also happen on the OS side: the developers could decide not to support compatibility mode anymore. Note that something like this may already have happened, since it might be possible to support virtual 8086 mode by first switching to legacy mode, but if it is possible, no one seems to have bothered doing it. Similarly, neither the Windows nor the Linux developers seem to have bothered implementing the ability to run kernel code in legacy mode in a 64-bit kernel.
The preceding two paragraphs show that there are present, and possibly future, indications that this might not always be possible.
Besides, as this is a C++ question, you should ask yourself why you would want to make such an assumption in the first place. If it's well written, your code should compile for 64-bit mode, because you haven't relied on data types having a specific width. Have you?
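To illustrate the kind of width assumption meant here, a minimal sketch: stuffing a pointer into a 32-bit integer breaks on 64-bit targets, while the pointer-sized types from <cstdint> stay portable.

#include <cstdint>

int main() {
    int value = 42;
    int* p = &value;

    // Non-portable: a 32-bit integer is too small to hold a pointer on
    // 64-bit targets, so this reinterpret_cast won't even compile there.
    // std::uint32_t bad = reinterpret_cast<std::uint32_t>(p);

    // Portable: std::uintptr_t is defined to be wide enough for any pointer.
    std::uintptr_t bits = reinterpret_cast<std::uintptr_t>(p);
    int* back = reinterpret_cast<int*>(bits);   // round-trips safely
    return (*back == 42) ? 0 : 1;
}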
No. We have a whole bunch of Debian servers that are missing the multi-arch i386 (32-bit) libraries. On Windows Server Core, WOW64 (the 32-bit subsystem) is an optional component. So for both Linux and Windows, there are known 64-bit systems that won't run x86 executables.

Why is the 64-bit version of my app much slower than the 32-bit version?

In order to solve the 3 GB memory limit on Ubuntu (sometimes we do need more than 3 GB), I compiled my app in a 64-bit environment to use more memory.
But my 64-bit app is much slower than the 32-bit version.
The 32-bit version is built on a 32-bit machine;
the 64-bit version is built on a 64-bit machine;
both versions run on the 64-bit machine in our load test.
I googled, and some folks said the unnecessary long type can make the 64-bit version slower than the 32-bit one, because:
man g++:
-m64
Generate code for a 32-bit or 64-bit environment. The 32-bit environment
sets int, long and pointer to 32 bits and generates code that runs on any
i386 system. The 64-bit environment sets int to 32 bits and long and
pointer to 64 bits and generates code for AMD's x86-64 architecture. For
darwin only the -m64 option turns off the -fno-pic and -mdynamic-no-pic
options.
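To see the data model the man page describes, a quick check along these lines (assumes g++ on x86-64 with the 32-bit support libraries installed):

// sizes.cpp
#include <iostream>
int main() {
    std::cout << "int=" << sizeof(int)
              << " long=" << sizeof(long)
              << " ptr=" << sizeof(void*) << std::endl;
    return 0;
}

g++ -m32 sizes.cpp && ./a.out    # int=4 long=4 ptr=4
g++ -m64 sizes.cpp && ./a.out    # int=4 long=8 ptr=8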
So I changed all my longs to ints, but still the same result.
Please advise.
Peter
Edit:
About memory: both the 32- and 64-bit versions use a similar amount, about 1.5 to 2.5 GB, and my machine has 9 GB of physical memory.
I profiled using OProfile, and for most of the functions the 64-bit version collects more profiling samples than the 32-bit version.
I cannot think of any other bottlenecks; please advise.
My app is a server, and the load test was done with 100 client connections. The server does a lot of computation processing the audio data from the clients.
Profile your app. That will tell you where the slow code is.
As for the question "why": no one will know the reason without details. You must analyze the profiling results, and if there is any problem with them, post it as a question here.
If your app does not need more than 4 GB of RAM (1.5 to 2.5 GB in your case), you should try x32. It's a newer ABI that allows for 32-bit pointers in a 64-bit environment.
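For reference, a sketch of what building for x32 looks like (assumes a GCC built with x32 support plus the x32 runtime libraries, which not all distributions ship):

g++ -mx32 app.cpp -o app    # 64-bit registers, but 32-bit pointers
file app                    # reports a 32-bit ELF for the x86-64 architecture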

32 bit library on 64 bit system

Can it cause any problem if I use 32 bit library on a 64 bit system?
What could be the incompatibilities?
I know this question is too vague, so here is an example:
I tried to set up the 32-bit FreeGLUT library with VS2010 on 64-bit Windows 7. There were a lot of issues at first, so I went looking for a 64-bit FreeGLUT, thinking the 32-bit one might conflict with 64-bit Windows. But later on I managed to run my 32-bit FreeGLUT on 64-bit Windows without any problem.
The question is: is there anything in a program that we should look out for when using a library whose bitness doesn't match the system (a 32-bit library on a 64-bit OS)?
64 bit Windows is fully capable of running 32 bit applications. In fact, the default configuration for Visual C++ is to build an x86 application regardless of the operating system it is running on.
There are some gotchas you have to be aware of when running a 32-bit app on 64-bit Windows. For instance, there's registry redirection (to avoid it, you pass KEY_WOW64_64KEY to e.g. RegOpenKeyEx). There's filesystem redirection, too. These are important things to keep in mind when you are reading system configuration values, for example.
Also, be aware that you can't mix 32-bit and 64-bit code in the same process: your 32-bit app can't use 64-bit DLLs or static libraries.
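A minimal sketch of the KEY_WOW64_64KEY workaround mentioned above (Windows only; error handling kept to a minimum):

#include <windows.h>
#include <iostream>

int main() {
    HKEY key;
    // From a 32-bit process, KEY_WOW64_64KEY opts out of WOW64 registry
    // redirection and opens the 64-bit view of the hive.
    LSTATUS rc = RegOpenKeyExW(HKEY_LOCAL_MACHINE,
                               L"SOFTWARE\\Microsoft\\Windows\\CurrentVersion",
                               0, KEY_READ | KEY_WOW64_64KEY, &key);
    if (rc == ERROR_SUCCESS) {
        std::cout << "opened the 64-bit registry view\n";
        RegCloseKey(key);
    }
    return rc == ERROR_SUCCESS ? 0 : 1;
}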
Visual studio can compile for 32 bit or 64 bit based on the project setting.
But the question you probably mean to ask is about linking a 32-bit library into a 64-bit program.
The answer is:
You can't directly link to 32-bit code inside a 64-bit program.
The only option is to compile a 32-bit (standalone) program that can run on your 64-bit platform (using IA-32 compatibility), and then use inter-process communication to talk to it from your 64-bit program, as sketched below.
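A very rough sketch of that out-of-process approach, assuming a hypothetical 32-bit helper executable named helper32 that wraps the 32-bit library and prints its result to stdout (uses POSIX popen; on Windows you'd use CreateProcess and pipes instead):

#include <cstdio>
#include <iostream>
#include <string>

int main() {
    // Launch the 32-bit helper as a separate process and capture its output.
    FILE* pipe = popen("./helper32 --compute", "r");
    if (!pipe) return 1;

    std::string result;
    char buf[256];
    while (std::fgets(buf, sizeof buf, pipe) != nullptr)
        result += buf;                      // collect the helper's stdout
    pclose(pipe);

    std::cout << "helper32 replied: " << result;
    return 0;
}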
It's not about the operating system: Windows doesn't care whether your code is 32-bit or 64-bit. What matters is that your process has the same bitness as all the libraries it wants to load; they must all match.
You might be interested in this link explaining common compiler errors when porting to 64-bit. It might help you solve your problem.
To try to answer your question more directly: there are things that can make 32-bit libraries break in a 64-bit environment, but that's too much information to cover in an SO answer.
This link is the MSDN index related to development for 64 bit systems and might interest you as well.
Your question could also be about how to develop code specifically to run natively on 64-bit versus 32-bit hardware. There is info on that here; in particular, you would have to follow the instructions there to enable the 64-bit build tools.
If you are building 64-bit binaries, you will need to link with 64-bit libraries. Note that 32-bit binaries should run just fine on 64-bit Windows, so you may not need to go to this trouble.

Am I coding for an OS or the Processor?

It is said that by using C/C++ one can write 'native' programs, i.e. programs that run on the platform. I am confused about what is considered native: the processor architecture or the OS version?
For example:
I have a 32-bit processor and 32-bit Windows 7, and I compile and generate an .exe file. Is it guaranteed to run on any 32-bit Windows 7? (Win 7 32-bit on 32- or 64-bit machines)
Edit1:
I did not intend only Windows here; my example can be extended to Linux also. For example, generating an executable (by default a.out) on a 32-bit Linux OS running on a 32-bit processor, and then running it on a 32-bit Linux on a 64-bit processor.
Edit2:
Thanks for the responses, but I also intended that I am using only the standard libraries and functions, nothing OS-specific: just the ones specified by the ANSI/ISO C++ standard, with no references to OS-specific windowing systems or other libraries.
Thanks
Both; kind of.
The actual instructions don't really differ between Windows and Linux, as both are compiled down to the same CPU architecture (x86).
However, a binary is more than just code that runs on bare hardware. For instance, it also contains information that tells the operating system how to load the executable and its dependencies. The binary is packaged in a specific format. This format can be different in different operating systems.
Besides that, the operating system provides some services to the applications (through system calls and APIs). The services that operating systems provide, and the way they can be used varies from an operating system to another.
These reasons contribute to the fact that, most of the time, a native binary depends on both the OS and the CPU architecture it's compiled for.
Answer to the updated question:
The C++ standard doesn't require anything about the nature of the compiled target; it just specifies compatibility requirements at the source level. Consequently, if you stick to the standard libraries, you'll be able to use the same source code to compile on any platform that offers a conforming C++ implementation. The standard doesn't say anything about binary portability. As I mentioned above, the primitive system calls that operating systems provide vary, and the actual implementation of the standard library depends on how those system calls are provided by the OS.
In order to run a Windows binary on Linux, you need some sort of emulation layer, like Wine, which understands the Windows binary format and simulates the Windows API for applications.
1) The processor architecture (plus the targeted libraries, static or dynamic).
2) Yes. A 32-bit Windows application will run on a 64-bit Windows platform under WOW64.
If your (Windows) compiler's target architecture is x86 (32-bit), then the result can run on any 32-bit or 64-bit Windows 7. But if it's x86-64, it will only run on 64-bit Windows 7.
To answer the title specifically, you code for both.
The executable contains machine code, which is specific to the processor, and a lot of metadata for the OS on how to load/execute the program, which is specific to the OS.
The code may also (and typically does) contain calls into functions defined by the OS. So, while it is perfectly ordinary machine code that any compatible CPU will understand, it attempts to call code that only exists on Windows.
So "native" really means both. You code for the specific OS (and all compatible OS'es) and that specific CPU (and all compatible CPUs).
In the case of Windows, you typically target a specific version of Windows, and the program will then work on that version and on future versions of Windows. As for the processor on which Windows (and your program) runs: the executable contains x86 machine code, which can be executed on any x86 CPU, whether it is from Intel, AMD, VIA or whoever else has made compatible processors over the years.
Without being able to see your code, only you can tell us whether you're coding for a 32-bit or a 64-bit platform. For example, if you reinterpret_cast a pointer to a 32-bit int and back to a pointer, you are coding for 32-bit, whereas if you use a type such as intptr_t you are safe whether your code is compiled for 32- or 64-bit machines. Similarly, when coding for Windows desktops, your code can assume the machine's endianness.
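For the endianness point, a small runtime check (a sketch; from C++20 on, std::endian in <bit> answers this at compile time instead):

#include <cstdint>
#include <cstring>
#include <iostream>

int main() {
    std::uint32_t x = 1;
    unsigned char first;
    std::memcpy(&first, &x, 1);   // inspect the lowest-addressed byte
    std::cout << (first == 1 ? "little-endian\n" : "big-endian\n");
    return 0;
}

On x86 Windows desktops this always prints little-endian, which is why the assumption is safe there.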
If, as in your example, you compile that code for 32-bit Windows 7, then it will also run on 64-bit Windows 7. If you use Windows 7 features, it won't run on earlier versions. Microsoft is very good at backward compatibility, so it probably will run on later versions too.
Short answer: No.
Longer: when you compile "native code", you compile for a specific processor architecture: MIPS, ARM, x86, 68k, SPARC and so on. These architectures can have a word length of 8, 16, 32 or 64 bits (there are exceptions). They can also gain extensions from generation to generation, like MMX, SSE, SSE2, NEON and such.
You also need to consider the operating system, which libraries you can use, and the different calling conventions.
So, there's no guarantee. But if you compile with MSVC on Windows 7, it's almost guaranteed to run on Windows 7, which I think only exists for x86 at the moment.