32 bit long vs 64 bit long - c++

I am working on a project (C++ integration with Python) which has been migrated from a 32-bit machine to a 64-bit machine. In Python, a C long is mapped to a Python integer.
So I cannot change the Python interface (the client interface), and the Python client now always gets an overflow error. It was working fine on the 32-bit machine.
So I have the following solutions:
1) Convert all long to int on the 64-bit machine.
2) Declare a 32-bit long on the 64-bit machine.
Do we have any general solution/header file which gives me the option to always declare a 32-bit
datatype, so I can handle this issue in a more general way?
I know it may be a small issue, but I am not able to find a general solution.

Do we have any general solution/header file which gives me the option to always declare a 32-bit datatype?
Yes, there is, since C99.
#include <stdint.h>
uint32_t foo;

Standard C99 (and newer) has the <stdint.h> header defining int32_t for 32-bit signed integers (and many other types), and recent C++ has <cstdint>.
If you care about bignums (arbitrary-precision numbers), be aware that it is a difficult subject; use an existing library, like GMP.

Related

What is the length of an LP variable?

I'm a C# developer, so to solve this I tried MSDN and Stack Overflow. Unfortunately, they seem to conflict here. MSDN states that it will depend on the architecture: 64 bits for a 64-bit process and 32 bits for a 32-bit one. SO, on the other hand, has this answer describing SYSTEM_INFO that seems to indicate that it's always 32 bits, because MinimumApplicationAddress and MaximumApplicationAddress are defined as UInt32s, even though the question is tagged with win-universal-app, which is more often x64 than x86. (Yes, I know that doesn't mean that every process will be 64 bits.)
LP is a Microsoft prefix notation (known as Hungarian notation) that indicates a pointer. Originally it was short for “long pointer”, which made sense in DOS programming. For 32-bit code it's 32 bits, and for 64-bit code it's 64 bits.
If you're talking about a uint32, that is guaranteed to be 32 bits. If you are talking about a pointer to int, that will vary from system to system; so yes, what the underlying code is and what it's doing will determine this.

Convert byte array to/from boost numeric?

I'm trying to convert a byte array to and from a Boost number with the cpp_int backend. What is a portable way to do this?
The platforms I'm concerned about are all little endian, but can be 32 or 64 bit and can be compiled with different compilers. Some of the ways I've seen to do this break depending on compiler versions and such, and that's what I want to avoid.
The only real difference between x86 and x64 is the size of pointers. So unless it relies on the size of pointers somehow, there shouldn't be much of a problem, especially since a byte is always 8 bits and you already ruled out endianness problems.

How to make SWIG use 64bit integer for intptr_t on Windows

I would like to use SWIG on Windows for building 64 bit applications. I have a class that has a pointer in it to a buffer, and because I would like to interface with a .NET class, the pointer is declared as intptr_t.
The problem is that the standard SWIG stdint.i assumes that intptr_t is either int (on 32 bit environment) or long (on 64 bit environment). While this is true on Unix, it is false on Windows. Does anyone have either similar experience or any ideas how to create a workaround for this?
I already set up the typemaps needed for the intptr_t => IntPtr conversion and it is working fine in 32 bit environment, but it truncates the pointer in 64 bit environment.
OK, I will answer my own question. It seems that this is a bug in SWIG on Windows: it treats long as int64 on 64-bit Windows, while in reality it is an int32. See more about this topic here:
What is the bit size of long on 64-bit Windows?
The other problem with SWIG is that it differentiates between 32- and 64-bit code, but the reason I used intptr_t was to avoid bitness issues, since by definition it gives an integer large enough to hold a pointer.
So what I did in the end was write a script that I run after generating the wrapper to fix the type signatures from int to intptr_t. While this is not elegant, I already have to do this for other reasons for my Python and PHP wrappers.

porting linux 32 bit app to 64 bit?

I'm about to port a very large scale application to 64 bits.
I've noticed that there are articles on the web which show
many pitfalls of this porting.
I wondered if there is any tool which can assist in porting to 64 bit, meaning
finding the places in the code that need to be changed... maybe gcc with warnings enabled? Is it good enough? Is there anything better?
EDIT: Guys, I am searching for a tool, if any, that might be a complement to the compiler.
I know GCC can assist, but I doubt it will find all unportable problems that
will be discovered at run-time... maybe a static code analysis tool that emphasizes
porting to 64 bits?
thanks
Here's a guide. Another one
The sizes of some data types are different on 32-bit and 64-bit OSes, so check for places where the code assumes the size of a data type. E.g. if you were casting a pointer to an int, that won't work in 64-bit. This should fix most of the issues.
If your app uses third-party libraries, make sure those work in 64-bit too.
A good tool is called grep ;-) Do
grep -nH -e '\<int\>\|\<short\>\|\<long\>' *
and replace all bare uses of these basic integer types with the proper one:
array indices should be size_t
pointer casts should be uintptr_t
pointer differences should be ptrdiff_t
types with an assumption of width N should be uintN_t
and so on; I probably forgot some. Then gcc with all warnings on will tell you. You could also use clang as a compiler; it gives even more diagnostics.
First off, why would there be 'porting'?
Consider that most distros have merrily provided 32- and 64-bit variants for well over a decade. So unless you programmed in a truly unportable manner (and you almost have to try), you should be fine.
What about compiling the project on a 64-bit OS? The gcc compiler looks like such a tool :)
Here is a link to an Oracle webpage that talks about issues commonly encountered porting a 32bit application to 64bit:
http://www.oracle.com/technetwork/server-storage/solaris/ilp32tolp64issues-137107.html
One section talks how to use lint to detect some common errors. Here is a copy of that section:
Use the lint Utility to Detect Problems with 64-bit long and Pointer Types
Use lint to check code that is written for both the 32-bit and the 64-bit compilation environment. Specify the -errchk=longptr64 option to generate LP64 warnings: it checks portability to an environment in which the size of long integers and pointers is 64 bits and the size of plain integers is 32 bits. The -errchk=longptr64 flag checks assignments of pointer expressions and long integer expressions to plain integers, even when explicit casts are used.
Use the -errchk=longptr64,signext option to find code where the normal ISO C value-preserving rules allow the extension of the sign of a signed-integral value in an expression of unsigned-integral type. Use the -m64 option of lint when you want to check code that you intend to run in the Solaris 64-bit SPARC or x86 64-bit environment.
When lint generates warnings, it prints the line number of the offending code, a message that describes the problem, and whether or not a pointer is involved. The warning message also indicates the sizes of the involved data types. When you know a pointer is involved and you know the size of the data types, you can find specific 64-bit problems and avoid the pre-existing problems between 32-bit and smaller types.
You can suppress the warning for a given line of code by placing a comment of the form "NOTE(LINTED())" on the previous line. This is useful when you want lint to ignore certain lines of code such as casts and assignments. Exercise extreme care when you use the "NOTE(LINTED())" comment because it can mask real problems. When you use NOTE, also include the corresponding header file. Refer to the lint man page for more information.

long long implementation in 32 bit machine

As per the C99 standard, the size of long long should be a minimum of 64 bits. How is this implemented on a 32-bit machine (e.g. addition or multiplication of two long longs)? Also, what is the equivalent of long long in C++?
The equivalent in C++ is long long as well. It's not required by the standard, but most compilers support it because it's so useful.
How is it implemented? Most computer architectures already have built-in support for multi-word additions and subtractions. They don't do 64-bit additions directly but use the carry flag and a special add-with-carry instruction to build a 64-bit add from two 32-bit adds.
The same extension exists for subtraction as well (the carry is called borrow in these cases).
Longword multiplications and divisions can be built from smaller multiplications without the help of carry flags. Sometimes simply doing the operations bit by bit is faster, though.
There are architectures that don't have any flags at all (some DSP chips and simple micros). On these architectures the overflow has to be detected with logic operations. Multi-word arithmetic tends to be slow on these machines.
On the IA32 architecture, 64-bit integers are implemented using two 32-bit registers (eax and edx).
There are platform-specific equivalents for C++, and you can use the stdint.h header where available (Boost provides you with one).
As everyone has stated, a 64-bit integer is typically implemented by simply using two 32-bit integers together. Then clever code generation keeps track of the carry and/or borrow bits to detect overflow, and adjusts accordingly.
This of course makes such arithmetic more costly in terms of code space and execution time, than the same code compiled for an architecture with native support for 64-bit operations.
If you care about bit-sizes, you should use
#include <stdint.h>
int32_t n;
and friends. This works for C++ as well.
64-bit numbers on 32-bit machines are implemented as you think, by 4 extra bytes. You could therefore implement your own 64-bit datatype by doing something like this:
struct my_64bit_integer {
    uint32_t low;
    uint32_t high;
};
You would of course have to implement mathematical operators yourself.
There is an int64_t in the stdint.h that comes with my GCC version, and in Microsoft Visual C++ you have an __int64 type as well.
The next C++ standard (due 2009, or maybe 2010), is slated to include the "long long" type. As mentioned earlier, it's already in common use.
The implementation is up to the compiler writers, although computers have always supported multiple precision operations. Some languages, like Python and Common Lisp, require support for indefinite-precision integers. Long ago, I wrote 64-bit multiplication and division routines for a computer (the Z80) that could manage 16-bit addition and subtraction, with no hardware multiplication at all.
Probably the easiest way to see how an operation is implemented on your particular compiler is to write a code sample and examine the assembler output, which is available from all the major compilers I've worked with.