I'm a C# developer, so to solve this I tried MSDN and Stack Overflow. Unfortunately, they seem to conflict here. MSDN states that it depends on the architecture: 64 bits for a 64-bit process and 32 bits for a 32-bit one. SO, on the other hand, has this answer describing SYSTEM_INFO that seems to indicate that it's always 32 bits, because MinimumApplicationAddress and MaximumApplicationAddress are defined as UInt32s, even though the question is tagged with win-universal-app, which is more often x64 than x86. (Yes, I know that doesn't mean that every process will be 64 bits.)
LP is a Microsoft prefix (part of its Hungarian notation) that indicates a pointer. Originally it was short for "long pointer", which made sense in DOS programming. For 32-bit code it's 32 bits, and for 64-bit code it's 64 bits.
If you're talking about a UInt32, that is guaranteed to be 32 bits. If you're talking about a pointer to an int, that will vary from system to system. So yes, it depends on what the underlying code is and what it's doing.
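To make that concrete, here is a quick C snippet (mine, not from either source) showing the difference; on a 64-bit build sizeof(void *) is typically 8, on a 32-bit build it is 4, while uint32_t is 4 everywhere:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Pointer sizes follow the target architecture... */
    printf("sizeof(void *)   = %zu\n", sizeof(void *));
    printf("sizeof(int *)    = %zu\n", sizeof(int *));
    /* ...while fixed-width types do not. */
    printf("sizeof(uint32_t) = %zu\n", sizeof(uint32_t));
    return 0;
}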
I am working on a project (C++ integration with Python) which has migrated from a 32-bit machine to a 64-bit machine. In Python, a C long is mapped to a Python integer.
So I cannot change the Python interface (the client interface), and I always get an overflow error from the Python client. It was working fine on the 32-bit machine.
So I have the following options:
1) convert all longs to ints on the 64-bit machine;
2) declare a 32-bit long on the 64-bit machine.
Do we have any general solution/header file which gives me the option to always declare a 32-bit datatype, so I can handle this issue in a more general way?
I know it may be a small issue, but I have not been able to find a general solution.
Do we have any general solution/header file which gives me the option to always declare a 32-bit datatype?
Yes, there is, since C99.
#include <stdint.h>
uint32_t foo;
Standard C99 (and newer) has the <stdint.h> header defining int32_t for 32-bit signed integers (and many other fixed-width types), and recent C++ has <cstdint>.
If you care about bignums (arbitrary-precision numbers), be aware that it is a difficult subject; use some existing library, like GMP.
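Applied to the original problem, a minimal sketch of the idea (the function name is just a placeholder): declare the values crossing the Python boundary as int32_t, so they mean the same thing on 32-bit and 64-bit builds:

#include <stdint.h>

/* Hypothetical exported function: int32_t stays 32 bits on every
 * platform, whereas a C long is 32 bits on 32-bit systems but
 * 64 bits on 64-bit Unix, which is what caused the overflow. */
int32_t get_counter_value(void)
{
    int32_t value = 42;  /* always fits the client's 32-bit expectation */
    return value;
}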
I know that the C/C++ standards only guarantee a minimum of 8 bits per char, and that theoretically 9/16/42/anything else is possible, and that therefore all sites about writing portable code warn against assuming 8bpc. My question is how "non-portable" is this really?
Let me explain. As I see it, there are 3 categories of systems:
Computers - I mean desktops, laptops, servers, etc. running Mac/Linux/Windows/Unix/*nix/posix/whatever (I know that list isn't strictly correct, but you get the idea). I would be very surprised to hear of any such system where char is not exactly 8 bits. (please correct me if I am wrong)
Devices with operating systems - This includes smartphones and similar embedded systems. While I would not be very surprised to find such a system where char is more than 8 bits, I have not heard of one to date (again, please inform me if I am just unaware)
Bare metal devices - VCRs, microwave ovens, old cell phones, etc. In this field I haven't the slightest experience, so anything can happen here. However, do I really need my code to be cross platform between my Windows desktop and my microwave oven? Am I likely to ever have code common to both?
Bottom line: Are there common (more than 0.001%) platforms (in categories 1 & 2 above) where char is not 8 bits? And is my surmise above true?
Use <limits.h>: it defines CHAR_BIT, the number of bits in a char.
See http://www.cplusplus.com/reference/clibrary/climits/
Also, when you want a type of exactly a given size, use <stdint.h>.
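For instance, a quick way to see what your platform reports (a minimal sketch):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* CHAR_BIT is the number of bits in a char on this platform. */
    printf("CHAR_BIT    = %d\n", CHAR_BIT);
    /* Bit widths of other types follow from sizeof * CHAR_BIT. */
    printf("bits in int = %zu\n", sizeof(int) * CHAR_BIT);
    return 0;
}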
For example, many DSPs have CHAR_BIT greater than or equal to 16.
Also, just as integer sizes grew on 64-bit architectures, future platforms may use a wider char with more bits; ASCII might become obsolete, replaced by Unicode. This might be a reason to be cautious.
You can normally safely assume that files will have 8 bit bytes, or if not, that 8 bit byte files can be converted to a zero padded native format by a commonly-used tool. But it is much more dangerous to assume that CHAR_BIT == 8. Currently that is almost always the case, but it might not always be the case in future. 8 bit access to memory is increasingly a bottleneck.
The Posix standards require CHAR_BIT to be 8.
So, if you only care about your code running on Posix compliant platforms, then assuming CHAR_BIT == 8 is fine and good.
The vast majority of commodity PC platforms and build systems comply with this requirement. Almost any platform that uses the BSD socket interface implicitly has this requirement, because the assumption that a platform byte is an octet is extremely widespread.
#include <limits.h>

/* Refuse to build on platforms where a byte is not an octet. */
#if CHAR_BIT != 8
#error Your platform is unsupported!
#endif
Why did POSIX mandate CHAR_BIT==8?
You should only worry about this assumption / constraint if you want your code to run today on embedded and esoteric platforms. Otherwise, it's a pretty safe assumption in my view.
I would like to use SWIG on Windows for building 64 bit applications. I have a class that has a pointer in it to a buffer, and because I would like to interface with a .NET class, the pointer is declared as intptr_t.
The problem is that the standard SWIG stdint.i assumes that intptr_t is either int (on 32 bit environment) or long (on 64 bit environment). While this is true on Unix, it is false on Windows. Does anyone have either similar experience or any ideas how to create a workaround for this?
I already set up the typemaps needed for the intptr_t => IntPtr conversion and it is working fine in 32 bit environment, but it truncates the pointer in 64 bit environment.
OK, I will answer my own question. It seems that this is a bug in SWIG on Windows: it treats long as int64 on 64-bit Windows, while in reality it is an int32. See more about this topic here:
What is the bit size of long on 64-bit Windows?
The other problem with SWIG is that it differentiates between 32-bit and 64-bit code, but the whole reason I used intptr_t was to avoid bitness issues, as by definition it is an integer large enough to hold a pointer.
So what I did at the end was to write a script that I run after generating the wrapper to fix the type signatures from int to intptr_t. While this is not elegant, I already have to do this for other reasons for my Python and PHP wrappers.
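For reference, the property relied on above, that a pointer survives a round trip through intptr_t on 32-bit and 64-bit platforms alike, can be checked with a small C snippet (mine, for illustration):

#include <stdint.h>
#include <assert.h>

int main(void)
{
    int x = 123;
    /* intptr_t is defined to be an integer type wide enough to hold
     * an object pointer, so this cast does not truncate anything. */
    intptr_t bits = (intptr_t)&x;
    int *p = (int *)bits;
    assert(p == &x && *p == 123);
    return 0;
}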
I know this is a strange question, but I was wondering if it is possible to make a 32-bit pointer in a 64-bit compile on Solaris using g++. The final object needs to be 64-bit; however, one of my pointer offsets becomes larger on Solaris than it is on Windows when I compile 64-bit. This is causing a big problem. I was wondering if it is possible to have a 32-bit pointer within my 64-bit compiled object.
Pointer size is a property of your target architecture, so you cannot mix and match 32- and 64-bit pointers. I would strongly suggest rethinking your design (which smells like the usual mistake of casting pointers to integers and back). You can theoretically work with "limited-reach" offsets, but again, please ask yourself why, and whether there is a better way of doing it.
You can't change regular pointers; the size of a pointer is sizeof(void *). And if you could, what would you do with a 32-bit pointer on a 64-bit system?
Do you mean pointers in C or do you maybe mean pointers to a file offset?
If you have a pointer type there, then you shouldn't make it 32-bit in a 64-bit program. If it is just some offset not related to the memory model, then you could use a type with a stable size across platforms, something like uint32_t.
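A minimal sketch of that idea, with illustrative names: store a uint32_t offset into a shared buffer rather than a raw pointer, so the stored value is 4 bytes on every platform:

#include <stdint.h>
#include <stdio.h>

/* A 32-bit offset into a buffer instead of a raw pointer: the
 * offset has the same size everywhere, while a pointer does not. */
struct record {
    uint32_t name_offset;  /* offset into the shared buffer */
};

int main(void)
{
    char buffer[] = "hello\0world";
    struct record r = { 6 };  /* points at "world" */
    printf("%s\n", buffer + r.name_offset);
    return 0;
}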
It does not make sense to "need" a 32-bit pointer on a 64-bit machine. I also don't understand this line:
The final object would need to be 64 bit however
I would take a closer look and try to fix the bug on your end. If you post some example code we may be able to help more.
As per the C99 standard, the size of long long must be at least 64 bits. How is this implemented on a 32-bit machine (e.g., addition or multiplication of two long longs)? Also, what is the equivalent of long long in C++?
The equivalent in C++ is long long as well. It's not required by the standard, but most compilers support it because it's so useful.
How is it implemented? Most computer architectures already have built-in support for multi-word additions and subtractions. They don't do 64-bit additions directly, but use the carry flag and a special add-with-carry instruction to build a 64-bit add from two 32-bit adds.
The same extension exists for subtraction as well (the carry is called borrow in these cases).
Long-word multiplications and divisions can be built from smaller multiplications without the help of carry flags. Sometimes simply doing the operations bit by bit is faster, though.
There are architectures that don't have any flags at all (some DSP chips and simple microcontrollers). On these architectures the overflow has to be detected with logic operations. Multi-word arithmetic tends to be slow on these machines.
On the IA-32 architecture, 64-bit integers are implemented using two 32-bit registers (eax and edx).
There are platform-specific equivalents for C++, and you can use the stdint.h header where available (Boost provides one).
As everyone has stated, a 64-bit integer is typically implemented by simply using two 32-bit integers together. Clever code generation then tracks the carry and/or borrow bits to detect overflow and adjust accordingly.
This of course makes such arithmetic more costly in terms of code space and execution time, than the same code compiled for an architecture with native support for 64-bit operations.
If you care about bit-sizes, you should use
#include <stdint.h>
int32_t n;
and friends. This works for C++ as well.
64-bit numbers on 32-bit machines are implemented as you think, with 4 extra bytes. You could therefore implement your own 64-bit datatype by doing something like this:
struct my_64bit_integer {
    uint32_t low;   /* least significant 32 bits */
    uint32_t high;  /* most significant 32 bits */
};
You would of course have to implement mathematical operators yourself.
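For instance, addition for that struct can be written in portable C. This is just a sketch (the function name is mine), mirroring the add/add-with-carry pattern described earlier; the carry out of the low words is detected by the unsigned wrap-around:

/* 64-bit addition built from two 32-bit adds: unsigned arithmetic
 * wraps modulo 2^32, so r.low < a.low exactly when the low add
 * carried. */
struct my_64bit_integer add_my64(struct my_64bit_integer a,
                                 struct my_64bit_integer b)
{
    struct my_64bit_integer r;
    r.low  = a.low + b.low;                       /* may wrap around */
    r.high = a.high + b.high + (r.low < a.low);   /* add 1 on carry */
    return r;
}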
There is an int64_t in the stdint.h that comes with my GCC version, and in Microsoft Visual C++ you have an __int64 type as well.
The next C++ standard (due in 2009, or maybe 2010) is slated to include the long long type. As mentioned earlier, it's already in common use.
The implementation is up to the compiler writers, although computers have always supported multiple precision operations. Some languages, like Python and Common Lisp, require support for indefinite-precision integers. Long ago, I wrote 64-bit multiplication and division routines for a computer (the Z80) that could manage 16-bit addition and subtraction, with no hardware multiplication at all.
Probably the easiest way to see how an operation is implemented on your particular compiler is to write a code sample and examine the assembler output, which is available from all the major compilers I've worked with.
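For instance (the file name is mine), GCC's -S flag writes the generated assembly to a .s file, and adding -m32 shows how a 32-bit target synthesizes the 64-bit add:

/* add64.c -- compile with: gcc -m32 -O2 -S add64.c, then read add64.s */
long long add64(long long a, long long b)
{
    return a + b;   /* on 32-bit x86 this becomes an add/adc pair */
}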