Integers greater than 4294967295 on 32-bit Windows - C++

I'm trying to get to grips with C++ basics by building a simple arithmetic calculator application. Right now I'm trying to figure out how to make it capable of dealing with integers greater than 4294967295 on 32-bit Windows. I know that Windows' integrated Calculator is capable of this. What have I missed?
Note that this application should be compilable with both MSVC compiler and g++ (MinGW/GCC).
Thank you.

If you want to be compatible with both gcc and MSVC, use <stdint.h>. It's source-code compatible with both.
You probably want uint64_t for this. It will get you up to 18,446,744,073,709,551,615.
There are also libraries that will get you up to integers as large as you have memory to handle.
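A minimal sketch of the uint64_t route, assuming your compiler provides <stdint.h> (older MSVC did not ship it):

#include <stdint.h>
#include <iostream>

int main() {
    // uint64_t is 64 bits even when targeting 32-bit Windows
    uint64_t a = 4294967296ULL;       // one past the 32-bit unsigned maximum
    uint64_t b = 10000000000ULL;
    std::cout << a + b << std::endl;  // prints 14294967296
    return 0;
}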

Use __int64 to get 64-bit int calculations in Visual C++ - not sure if GCC will like this, though.
You could create a header file that typedefs (say) MyInt64 to the appropriate thing for each compiler. Then you can work internally with MyInt64, and the compiled code will be correct for each target. This is a pretty standard way of supporting different target compilers on one source codebase.
As far as I can tell, long long would work OK for both, but I have not used GCC, so YMMV - see here for GCC info and here for Visual C++.
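A sketch of the compiler-switching header described above (MyInt64 is just the illustrative name from the previous answer):

// myint64.h - pick the right 64-bit type per compiler
#ifdef _MSC_VER
typedef __int64 MyInt64;    // Visual C++'s built-in 64-bit type
#else
typedef long long MyInt64;  // GCC/MinGW and other compilers
#endif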

You could also create a "Large Number" class that would basically store the value across multiple variables in one form or another.

There are different solutions. If 2^64 is big enough for you, you can use a 64-bit integer type (these are implementation-dependent, so search for your particular compiler). On the other hand, if you want to be able to handle any number, you will have to use or implement a BigInteger type that encapsulates it. The implementation is an interesting exercise... basically, use a vector of a smaller type, operate on each subelement, and then merge and normalize the result.
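A very rough sketch of the addition case, using base-10^9 limbs (the BigInteger name and everything in it is illustrative, not a finished design):

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative minimal BigInteger: base-10^9 "digits", least significant first.
static const std::uint32_t kBase = 1000000000;  // 10^9 fits comfortably in 32 bits

struct BigInteger {
    std::vector<std::uint32_t> digits;  // digits[0] is the least significant

    explicit BigInteger(std::uint64_t v = 0) {
        do {
            digits.push_back(static_cast<std::uint32_t>(v % kBase));
            v /= kBase;
        } while (v != 0);
    }
};

// Only non-negative addition is sketched; other operations follow the same pattern.
BigInteger add(const BigInteger& a, const BigInteger& b) {
    BigInteger r;
    r.digits.assign(std::max(a.digits.size(), b.digits.size()) + 1, 0);
    std::uint64_t carry = 0;
    for (std::size_t i = 0; i < r.digits.size(); ++i) {  // operate on each subelement
        std::uint64_t sum = carry;
        if (i < a.digits.size()) sum += a.digits[i];
        if (i < b.digits.size()) sum += b.digits[i];
        r.digits[i] = static_cast<std::uint32_t>(sum % kBase);
        carry = sum / kBase;
    }
    while (r.digits.size() > 1 && r.digits.back() == 0)  // normalize: strip leading zeros
        r.digits.pop_back();
    return r;
}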

Related

Can I convert my Visual C++ Win32 project into a cross-platform one later?

Does anybody know if it is possible to use the code from a Win32 static library
in a cross-platform one? I know I have to make some changes to the code, but I think C++ code is the same on all platforms except for the classes. For simple types, somebody told me to use int16_t or int_fast16_t instead of short, for example. Is it correct that I could just copy all the headers and files from Visual Studio and compile them for Mac with, for example, CodeLite - if the code itself is cross-platform?
Best Regards,
K
It depends on what's in that library. The fact that it's a Visual C++ Win32 static library project tells us how your library is compiled. It does not tell us anything about what code goes into that library. It might be all perfectly portable Standard C++ code. It might just as well be code where every second line is a Windows API function call that will obviously not be portable.
Whether or not the code of a library will be portable all depends on the code. Replacing short with int16_t or int_fast16_t will do nothing to increase the portability of the code (unless the original use of short assumed some implementation-defined properties). So I'm not sure what a blanket replacement of short with int16_t is supposed to achieve. short is a fundamental type built into the language. int16_t is a type defined by the standard library if the target platform supports it. So, in a way, int16_t is actually less portable than short. While int_fast16_t is always defined, so is short.

Use the fixed-width integer types of the standard library if you need the semantics that they provide. If you don't, then there's no reason to use them. Note that to use the fixed-width integer types in C++, include the C++ header <cstdint> rather than <stdint.h>, which is not guaranteed to be present in C++. Also note that <cstdint> is only guaranteed to place declarations in namespace std. So for maximum portability, use std::int16_t, because it is not guaranteed that ::int16_t is available.
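A minimal sketch of that last point:

#include <cstdint>  // the C++ header; declarations are guaranteed only in namespace std

std::int16_t sample = 42;        // exactly 16 bits, where the platform provides it
std::int_fast16_t counter = 0;   // always defined: fastest type of at least 16 bits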
If the code of the library is portable, then all you will need to build that library on another platform is a build system for that platform. So yes, it is correct that if the code is portable, all you need to do is compile that code on the other platform using whatever tools you use on that other platform… that's kind of the very definition of what it means for code to be portable… ;-)

How to make C++ code robust with regard to changing compilers/OSes

I have been developing some hopefully generic C++ code using MS Visual Studio 2008 / Windows. The code is ultimately going to be used within both iOS and Android apps. After some initial testing we found that my program behaved differently on Android/iOS, and we traced this down to different values of RAND_MAX. Now the code is behaving better, but it is still not exactly the same as on Windows, and it is a tricky process finding the differences, especially as I do not have the iOS/Android development environments set up at my end and my client is in a different time zone.
My question is: what can I do to avoid issues caused by subtle compiler differences? For example, is there a way of making one compiler behave like another? Or perhaps a website that lists common problems with compiler differences?... any ideas?
EDIT: The program does not employ any third party libraries.
The way to make code easier to move from one compiler to another is to make your code as standard-compliant as possible. If you take RAND_MAX as an example, the C11 standard (7.22.2.1 (5)) says:
The value of the RAND_MAX macro shall be at least 32767
So if you are using RAND_MAX, you have to take into account that it could be more than 32767, depending on what compiler you are using.
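One way to stay independent of its actual value is to always scale by RAND_MAX rather than assuming its size; a small sketch:

#include <cstdlib>

// Maps rand() into [0, 1) whatever RAND_MAX happens to be.
// Using 1.0 (not 1) forces double arithmetic and avoids integer
// overflow when RAND_MAX equals INT_MAX.
double random_unit() {
    return std::rand() / (RAND_MAX + 1.0);
}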
I would suggest getting a copy of both the C and C++ standards and starting to get familiar with them. Whenever you are going to make an assumption about how the code will be treated, you should consult those standards to make sure that you are relying on well-defined behavior.
If you are having problems with RAND_MAX being different, that implies you're also using srand and rand - are you? Keep in mind that no standard actually says what pseudo-random number generator they need to implement, so you will not get the same sequence of random numbers on different platforms even if RAND_MAX happens to have the same value. In order to get identical sequences of random numbers on varying platforms, you need to implement a pseudo-random number generator of your own - see e.g. http://en.wikipedia.org/wiki/Linear_congruential_generator for an example of a very simple one.
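A minimal sketch of such a generator, using the well-known Numerical Recipes constants (an illustration, not a statistical-quality recommendation):

#include <cstdint>

// Linear congruential generator: produces the same sequence on every
// platform, because it uses only fixed-width unsigned arithmetic.
struct Lcg {
    std::uint32_t state;
    explicit Lcg(std::uint32_t seed) : state(seed) {}
    std::uint32_t next() {
        state = 1664525u * state + 1013904223u;  // modulus 2^32 via unsigned wraparound
        return state;
    }
};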
As for the rest of your question - in most cases, it's not compiler differences that will cause you problems (i.e., as long as you don't rely on undefined behaviour or odd corner cases, chances are that your code itself will behave the same assuming that it doesn't fail to compile at all), but differences in the APIs and environment. In this case, RAND_MAX doesn't depend on the compiler, but is a feature of the standard C library of your target platform.
So if your code fails to compile, you clearly are relying on some non-standard feature of the language (or a non-standard/non-portable API), but if it does compile and behaves differently, you're relying on some implementation-defined detail of the standard C/C++ libraries.
In my work, our code is compiled with Visual Studio 2008 (and soon 2013), gcc 4.8 on Linux, gcc 4.8 for Android, and Xcode (clang) for iOS. We're doing standard C++, or try to, but each compiler has its own way of dealing with what the standard defines.
The best thing to do is to use only standard libraries (STL, Boost) as much as possible. If some functions are available only on one platform or compiler, you have to define a generic one yourself and call the platform-specific version for each platform.
And from what I've seen, if it builds with gcc, almost 90% of the code (if not 99%) will be fine on Android and iOS.
You can give Cygwin and gcc a try; they run on Windows, and that could help you detect issues before your code is tested on the other platforms.

Char type on 32 bit vs 64 bit

Here is the following issue:
If I am developing on a 32-bit machine and want my code to be ported to a 64-bit machine, here is the scenario.
My functions internally use a lot of std::strings. Now, if I want to provide an API, can I ask callers to send a char *, which I can then use internally? Or ask them to send me an __int64, which I convert to a string?
Another reason to use char * in my API is that at least one Unix implementation of the tool (a different version of it) picks up data from stdin via argv, which is a char *.
In the Windows version I am not sure what to do. I could just ask for an __int64 and then convert it into a string... and make it work that way, or just use char * as well?
If you're providing a C++ implementation, then your public interface should just use std::string.
If however for compatibility reasons (which it sounds like you may have) you need to provide a C-style interface, then using char* is precisely the way to do it. In your 32-bit library it will be a 32 bit pointer, and in the 64 bit version of the library it will be 64 bits. This will then agree with the client users' expectations regarding the API. You should absolutely convert to a std::string inside your library at the earliest possible point however.
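A sketch of that boundary (all the names here are invented for illustration):

#include <string>

// Internal C++ code works in terms of std::string throughout.
static std::string process_impl(const std::string& input) {
    return input;  // stand-in for the real work
}

// C-style surface: accept char*, convert to std::string immediately.
extern "C" int mylib_process(const char* input) {
    if (input == 0) return -1;   // validate at the boundary
    std::string s(input);        // earliest possible conversion
    process_impl(s);
    return 0;
}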
You seem somewhat confused. If the code you are writing is used only within the target machine, a recompile will take care of most of the problems. Just don't rely on a specific memory layout and you are fine. Using strings (as opposed to wstrings) probably means that the character encoding is UTF-8 (if not, reconsider), and thus a limited form of data exchange (e.g. files) between platforms is also fine.
In this case, your interface decision comes down to selecting between (const) std::string(&) and (const) char* plus an integer length (don't rely on the null terminator, please). The deciding factor is whether or not you anticipate a need to support other compilers or programming languages.
Now, if you intend to make the interface callable from other machines (i.e., a network interface), you have a much tougher job. In that case, specify the size of everything explicitly.
char is always one byte in size, both on 32-bit and 64-bit systems. However, using the std library is not the worst choice. ;) std should cope with different platforms as it is platform independent for the "most" part...
Converting to/from char* doesn't really help if you can't represent the number on your architecture.
If you are converting a 64-bit integer from its decimal (or hexadecimal) textual representation into a value, you still need 64 bits to store it.
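For example, a sketch of parsing the textual form into a 64-bit value (std::stoull is C++11; strtoull is the C99 equivalent):

#include <cstdint>
#include <string>

// "12345678901234567890" will not fit in 32 bits; the result needs uint64_t.
std::uint64_t parse_u64(const std::string& text) {
    return std::stoull(text);  // throws on malformed or out-of-range input
}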
You would do well to convert to std::string at the earliest opportunity; it is the recommended/standard practice for C++ and will help do away with all your char* problems.
There are a few scenarios you can follow to write portable code; see this question:
How to do portable 64 bit arithmetic, without compiler warnings
You would have problems achieving binary portability between different architectures; C++ provides for source-level portability.

Porting a Linux 32-bit app to 64-bit?

I'm about to port a very large-scale application to 64 bits.
I've noticed that there are some articles on the web which show many pitfalls of this porting.
I wondered if there is any tool which can assist in porting to 64 bits, meaning finding the places in the code that need to be changed... maybe gcc with warnings enabled? Is that good enough? Is there anything better?
EDIT: Guys, I am searching for a tool, if there is one, that could complement the compiler. I know GCC can assist, but I doubt it will find all the unportable problems that will only be discovered at run time... maybe a static code analysis tool that emphasizes porting to 64 bits?
thanks
Here's a guide. Here's another one.
The sizes of some data types are different in 32-bit and 64-bit OSes, so check for places where the code is assuming the size of a data type. E.g. if you were casting a pointer to an int, that won't work in 64-bit. Fixing these should take care of most of the issues.
If your app uses third-party libraries, make sure those work in 64-bit too.
A good tool is called grep ;-) do
grep -nH -e '\<int\>\|\<short\>\|\<long\>' *
and replace all bare uses of these basic integer types by the proper one:
array indices should be size_t
pointer casts should be uintptr_t
pointer differences should be ptrdiff_t
types with an assumption of width N should be uintN_t
and so on; I probably forgot some. Then gcc with all warnings on will tell you about the rest - a small before/after sketch follows below. You could also use clang as a compiler; it gives even more diagnostics.
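A before/after sketch of the kind of replacement meant here (the function is invented for illustration):

#include <cstddef>
#include <cstdint>

void example(int* begin, int* end) {
    // Before: int i;  int diff = end - begin;  int addr = (int)begin;
    for (std::size_t i = 0; i < 4; ++i) { (void)i; }  // array indices: size_t
    std::ptrdiff_t diff = end - begin;                // pointer differences: ptrdiff_t
    std::uintptr_t addr =
        reinterpret_cast<std::uintptr_t>(begin);      // pointer casts: uintptr_t
    (void)diff;
    (void)addr;
}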
First off, why would there be 'porting'?
Consider that most distros have merrily provided 32- and 64-bit variants for well over a decade. So unless you programmed in a truly unportable manner (and you almost have to try to do that), you should be fine.
What about compiling the project on a 64-bit OS? The gcc compiler looks like such a tool. :)
Here is a link to an Oracle webpage that talks about issues commonly encountered when porting a 32-bit application to 64-bit:
http://www.oracle.com/technetwork/server-storage/solaris/ilp32tolp64issues-137107.html
One section describes how to use lint to detect some common errors. Here is a copy of that section:
Use the lint Utility to Detect Problems with 64-bit long and Pointer Types
Use lint to check code that is written for both the 32-bit and the 64-bit compilation environment. Specify the -errchk=longptr64 option to generate LP64 warnings. Also use the -errchk=longptr64 flag which checks portability to an environment for which the size of long integers and pointers is 64 bits and the size of plain integers is 32 bits. The -errchk=longptr64 flag checks assignments of pointer expressions and long integer expressions to plain integers, even when explicit casts are used.
Use the -errchk=longptr64,signext option to find code where the normal ISO C value-preserving rules allow the extension of the sign of a signed-integral value in an expression of unsigned-integral type. Use the -m64 option of lint when you want to check code that you intend to run in the Solaris 64-bit SPARC or x86 64-bit environment.
When lint generates warnings, it prints the line number of the offending code, a message that describes the problem, and whether or not a pointer is involved. The warning message also indicates the sizes of the involved data types. When you know a pointer is involved and you know the size of the data types, you can find specific 64-bit problems and avoid the pre-existing problems between 32-bit and smaller types.
You can suppress the warning for a given line of code by placing a comment of the form "NOTE(LINTED())" on the previous line. This is useful when you want lint to ignore certain lines of code such as casts and assignments. Exercise extreme care when you use the "NOTE(LINTED())" comment because it can mask real problems. When you use NOTE, also include the header that defines it. Refer to the lint man page for more information.
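Putting the flags from that section together, an invocation might look like this (the file name is just an example):

lint -errchk=longptr64,signext -m64 myapp.c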

Should a C++ embedded application use a common header with typedefs for built-in C++ types?

It's common practice where I work to avoid directly using built-in types and instead include a standardtypes.h that has items like:
// \Common\standardtypes.h
typedef double Float64_T;  // assumes double is 64 bits on every target
typedef int SInt32_T;      // assumes int is 32 bits on every target
Almost all components and source files become dependent on this header, but some people argue that it's needed to abstract the size of the types (in practice this hasn't been needed).
Is this a good practice (especially in large-componentized systems)? Are there better alternatives? Or should the built-in types be used directly?
You can use the standardized versions available in modern C and C++ implementations in the header file stdint.h.
It has types like uint8_t, int32_t, etc.
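For example, the homebrew header above could be reduced to something like this (a sketch; note that stdint.h has no fixed-width floating-point types):

// \Common\standardtypes.h (sketch using the standard header)
#include <stdint.h>

typedef int32_t SInt32_T;   // or just use int32_t directly
typedef double Float64_T;   // no stdint.h equivalent; assumes a 64-bit IEEE double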
In general this is a good way to protect code against platform dependency. Even if you haven't experienced a need for it to date, it certainly makes the code easier to interpret, since one doesn't need to guess the storage size as one would for 'int' or 'long', which vary in size with the platform.
It would probably be better to use the standard fixed-width types defined in stdint.h et al., e.g. uint8_t, int32_t, etc. I'm not sure if they are part of C++ yet, but they are in C99.
Since it hasn't been said yet, and even though you've already accepted an answer:
Only use concretely-sized types when you need concretely-sized types. Mostly, this means when you're persisting data, if you're directly interacting with hardware, or using some other code (e.g. a network stack) that expects concretely-sized types. Most of the time, you should just use the abstractly-sized types so that your compiler can optimize more intelligently and so that future readers of your code aren't burdened with useless details (like the size and signedness of a loop counter).
(As several other responses have said, use stdint.h, not something homebrew, when writing new code and not interfacing with the old.)
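A sketch of the distinction (the struct and names are invented for illustration):

#include <cstddef>
#include <cstdint>

// Concrete sizes where layout is a contract: persisted or wire-format data.
struct PacketHeader {
    std::uint32_t length;    // must be exactly 32 bits everywhere
    std::uint16_t version;   // must be exactly 16 bits everywhere
};

// Abstract sizes elsewhere: a loop counter carries no layout contract.
int sum(const int* values, std::size_t count) {
    int total = 0;
    for (std::size_t i = 0; i < count; ++i)  // plain size_t, not uint32_t
        total += values[i];
    return total;
}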
The biggest problem with this approach is that so many developers do it that if you use a third-party library you are likely to end up with a symbol name conflict, or multiple names for the same types. It would be wise where necessary to stick to the standard implementation provided by C99's stdint.h.
If your compiler does not provide this header (as for example VC++), then create one that conforms to that standard. One for VC++ for example can be found at https://github.com/chemeris/msinttypes/blob/master/stdint.h
In your example I can see little point in defining size-specific floating-point types, since these are usually tightly coupled to the FP hardware of the target and the representation used. Also, the range and precision of a floating-point value are determined by the combination of exponent width and significand width, so the overall width alone does not tell you much, or guarantee compatibility across platforms. With respect to single and double precision, there is far less variability across platforms, most of which use IEEE-754 representations. On some 8-bit compilers float and double are both 32-bit, while long double on x86 GCC is 80 bits, but only 64 bits in VC++ (the x86 FPU supports 80 bits in hardware).
I think it's not a good practice. Good practice is to use something like uint32_t where you really need a 32-bit unsigned integer, and if you don't need a particular range, just use unsigned.
It might matter if you are making cross-platform code, where the size of native types can vary from system to system. For example, the wchar_t type can vary from 8 bits to 32 bits, depending on the system.
Personally, however, I don't think the approach you describe is as practical as its proponents may suggest. I would not use that approach, even for a cross-platform system. For example, I'd rather build my system to use wchar_t directly, and simply write the code with an awareness that the size of wchar_t will vary depending on platform. I believe that is FAR more valuable.
As others have said, use the standard types as defined in stdint.h. I disagree with those who say to only use them in some places. That works okay when you work with a single processor. But when you have a project which uses multiple processor types (e.g. ARM, PIC, 8051, DSP) (which is not uncommon in embedded projects) keeping track of what an int means or being able to copy code from one processor to the other almost requires you to use fixed size type definitions.
At least it is required for me, since in the last six months I worked on 8051, PIC18, PIC32, ARM, and x86 code for various projects and I can't keep track of all the differences without screwing up somewhere.