Limb in the vocabulary of arbitrary precision integer? - c++

What does a "limb" refer to in the domain of arbitrary-precision integers?

In the GNU Multiple Precision Arithmetic Library (GMP), as mentioned in the comments, a limb is the largest integer word the library works with natively:
#ifdef __GMP_SHORT_LIMB
typedef unsigned int mp_limb_t;
typedef int mp_limb_signed_t;
#else
#ifdef _LONG_LONG_LIMB
typedef unsigned long long int mp_limb_t;
typedef long long int mp_limb_signed_t;
#else
typedef unsigned long int mp_limb_t;
typedef long int mp_limb_signed_t;
#endif
#endif
typedef unsigned long int mp_bitcnt_t;
typedef struct
{
int _mp_alloc; /* Number of *limbs* allocated and pointed to by the _mp_d field. */
int _mp_size; /* abs(_mp_size) is the number of limbs the last field points to. If _mp_size is negative this is a negative number. */
mp_limb_t *_mp_d; /* Pointer to the limbs. */
} __mpz_struct;
...
typedef __mpz_struct mpz_t[1];
So a limb can be an unsigned int, unsigned long int, or unsigned long long int, depending on the underlying architecture.
GMP then uses multiple limbs to store and compute with multiple-precision integers, combining machine-specific integer code with highly optimized algorithms for multiple-precision arithmetic. The reason to use machine unsigned integers for these calculations is that unsigned integer arithmetic is simple, fast, and very reliable, whereas floating-point and signed integer arithmetic are not nearly as standardized and portable as unsigned integer arithmetic.
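As a rough illustration, here is a minimal sketch (assuming GMP is installed and the program is linked with -lgmp) that prints the limbs a large value is split into; mpz_size() and mpz_getlimbn() are GMP's documented accessors for the limb array:
#include <cstdio>
#include <gmp.h>

int main() {
    mpz_t n;
    mpz_init_set_str(n, "340282366920938463463374607431768211455", 10); /* 2^128 - 1 */

    /* mpz_size() reports how many limbs are in use, mpz_getlimbn() reads one of them. */
    size_t limbs = mpz_size(n);
    std::printf("bits per limb: %d, limbs used: %zu\n", mp_bits_per_limb, limbs);
    for (size_t i = 0; i < limbs; ++i)
        std::printf("limb %zu: %#llx\n", i,
                    (unsigned long long) mpz_getlimbn(n, (mp_size_t) i));

    mpz_clear(n);
    return 0;
}
On a typical 64-bit GNU/Linux build this prints two limbs of 0xffffffffffffffff; on a 32-bit build the same value occupies four 32-bit limbs.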


Too many types in declaration C++

If unsigned/signed long int a; is possible,
why is unsigned/signed long float/double a; not possible?
Why do I get a "too many types in declaration" error for the latter and not for the former?
There are three floating point types: float, double and long double. None of these have unsigned equivalents, so putting signed or unsigned in front of them is not valid. There is no such type as long float.
You are getting that message because long double exists but unsigned long double does not. On its own, unsigned means unsigned int, so the latter declaration effectively names two types: unsigned (int) and long double. There is also no long float in C++.
That is because the first (signed/unsigned long int) is a type the language defines, while the second isn't.
The fundamental data types that the C++ language supports include:
char
unsigned char
signed char
int
unsigned int
signed int
short int
unsigned short int
signed short int
long int
signed long int
unsigned long int
float
double
long double
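As a quick illustration of the answers above, here is a minimal sketch; the commented-out lines are the ones the compiler rejects with errors such as "too many types in declaration":
int main() {
    unsigned long int a = 0;    // OK: signed/unsigned combine with the integer types
    signed long int   b = 0;    // OK
    long double       c = 0.0;  // OK: long may modify double (but not float)
    // unsigned long double d = 0.0;  // error: unsigned cannot modify a floating-point type
    // signed long float    e = 0.0f; // error: there is no "long float" type at all
    (void)a; (void)b; (void)c;
    return 0;
}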

What types can "unsigned" be used with? When is "signed" needed?

int
short
long
long long
unsigned int / unsigned
unsigned short
unsigned long
unsigned long long
char
bool
float
double
I just can't figure out where the list ends. Are these all, or are there more, like:
unsigned char
unsigned bool
unsigned float
unsigned double
or any other?
I have a test tomorrow and I want to be clear on the basics.
I just can't figure out where the list ends. Are these all[?] ...
Don't bother providing links, I have a text book for that matter. Just
answer my question. Yes or No? This is really frustrating. Nothing has
been explicitly mentioned anywhere.
No.
The standard integer and character types (int, short, long, long long, char) support the signedness modifiers (signed/unsigned) and can therefore all be made unsigned. Note that bool and wchar_t, although integral types, do not accept these modifiers.
Floating point types (e.g., float, double, long double) do not support signedness modifiers and therefore cannot be unsigned or explicitly signed, for that matter.
A few examples of valid expressions:
char
unsigned char
int
signed int
unsigned short
unsigned long long
A few examples of invalid expressions:
signed double
unsigned double
unsigned float
signed unsigned int
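For a compile-time check of these rules, here is a minimal sketch using <type_traits> (C++11 or later assumed):
#include <type_traits>

static_assert(std::is_unsigned<unsigned char>::value, "unsigned char is a real, unsigned type");
static_assert(std::is_unsigned<unsigned long long>::value, "so is unsigned long long");
static_assert(std::is_signed<double>::value, "double is arithmetic and signed...");
// ...but "unsigned double", "unsigned float" and "unsigned bool" are not valid
// type names, so lines such as the following simply do not compile:
//   unsigned double x;
//   unsigned bool   y;

int main() { return 0; }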

What are the rules for 'typing' of parameters when explicit integer suffixes are not given?

For example, if I were to say:
#define UINT_DEF 500u
Then such a definition would have the type unsigned int. However, what is the default rule for when such suffixes are not given? As in
#define SOME_DEF 500
being placed in the type int. That is, at compile-time, if no suffix is given, are the constants slotted into the lowest data type in which they fit?
Would, for instance,
#define SOME_DEF_2 100
acquire the data type char, since it fits?
I asked a previous question on a similar topic and got some good responses, but little was said about the case where no suffix is given. It was mentioned that if a suffix is given and the value does not fit in the requested type, then the constant gets promoted, but little else. I imagine the answer here is similar: a default type (perhaps the smallest available) is given to the constant, and if the value does not fit in that default type, a promotion occurs.
And finally, do arithmetic promotion rules still apply as normal for macros? That is, would
#define TEST_DEF 5000000/50
#define TEST_DEF_2 5000000/50.0
respectively evaluate to 100,000 with a type of long int and 100,000.00 of type float (assuming 5,000,000 is a long and 50 is an int/char, whatever).
Or in the case:
#define TEST_MACRO(x) (16*x)
Since 16 is a constant of type int most likely, would TEST_MACRO(70000) promote the whole thing to long?
#define SOME_DEF 500
500 has type int. The type of an unsuffixed decimal integer constant is the first of the corresponding list in which its value can be represented: int, long, long long.
Then:
#define TEST_DEF 5000000/50
#define TEST_DEF_2 5000000/50.0
Assuming 5000000 is of type int on your system, then:
5000000/50 is of type int
5000000/50.0 is of type double
Of course, the fact that it is a macro does not change anything, as macros are just relatively simple textual substitutions.
Finally, assuming 70000 is of type int then:
16 * 70000 is also of type int
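If you want the compiler to confirm these types, here is a minimal sketch using decltype and <type_traits> (it assumes int is at least 32 bits, so 5000000 and 70000 fit in an int):
#include <type_traits>

#define TEST_DEF   5000000 / 50
#define TEST_DEF_2 5000000 / 50.0

static_assert(std::is_same<decltype(500), int>::value, "unsuffixed 500 is an int");
static_assert(std::is_same<decltype(TEST_DEF), int>::value, "int / int stays int (value 100000)");
static_assert(std::is_same<decltype(TEST_DEF_2), double>::value, "int / double is converted to double");
static_assert(std::is_same<decltype(16 * 70000), int>::value, "int * int stays int, no promotion to long");

int main() { return 0; }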
Per the 2011 online draft of the C standard:
6.4.4.1 Integer constants
...
5 The type of an integer constant is the first of the corresponding list in which its value can be represented.
Suffix            Decimal Constant            Octal or Hexadecimal Constant
----------------------------------------------------------------------------
None              int                         int
                  long int                    unsigned int
                  long long int               long int
                                              unsigned long int
                                              long long int
                                              unsigned long long int
----------------------------------------------------------------------------
u or U            unsigned int                unsigned int
                  unsigned long int           unsigned long int
                  unsigned long long int      unsigned long long int
----------------------------------------------------------------------------
l or L            long int                    long int
                  long long int               unsigned long int
                                              long long int
                                              unsigned long long int
----------------------------------------------------------------------------
Both u or U       unsigned long int           unsigned long int
and l or L        unsigned long long int      unsigned long long int
----------------------------------------------------------------------------
ll or LL          long long int               long long int
                                              unsigned long long int
----------------------------------------------------------------------------
Both u or U       unsigned long long int      unsigned long long int
and ll or LL
----------------------------------------------------------------------------
So, if you have a decimal integer constant without a suffix, its type will be the smallest of int, long int, or long long int that can represent that value.
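The suffix column of that table can be checked directly; a minimal sketch (500 fits in an int everywhere, so only the suffix decides the type):
#include <type_traits>

static_assert(std::is_same<decltype(500),    int>::value,                    "no suffix");
static_assert(std::is_same<decltype(500u),   unsigned int>::value,           "u");
static_assert(std::is_same<decltype(500l),   long int>::value,               "l");
static_assert(std::is_same<decltype(500ul),  unsigned long int>::value,      "u and l");
static_assert(std::is_same<decltype(500ll),  long long int>::value,          "ll");
static_assert(std::is_same<decltype(500ull), unsigned long long int>::value, "u and ll");

int main() { return 0; }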
Not so elegant, but also possible, is to cast:
#include <inttypes.h> /* For uint16_t */
#define MYFLOAT ((float) 1)
#define MYUNSIGNED16BITINT ((uint16_t) 42.)
#define MYVOIDPOINTER ((void *) 0)

What should `intmax_t` be on platform with 64-bit `long int` and `long long int`?

In section 18.4 of the C++ standard it specifies:
typedef 'signed integer type' intmax_t;
By the standard(s) on a platform with a 64-bit long int and a 64-bit long long int which should this "signed integer type" be?
Note that long int and long long int are distinct fundamental types.
The C++ standard says:
The header <cstdint> defines all functions, types, and macros the same as 7.18 in the C standard.
and in 7.18 of the C standard (N1548) it says:
The following type designates a signed integer type capable of representing any value of
any signed integer type:
intmax_t
It would seem that in this case both long int and long long int qualify?
Is that the correct conclusion? That either would be a standard-compliant choice?
Yes, your reasoning is correct. Most real-world implementations choose the lowest-rank type satisfying the conditions.
Well, assuming the GNU C library is correct (from /usr/include/stdint.h):
/* Largest integral types. */
#if __WORDSIZE == 64
typedef long int intmax_t;
typedef unsigned long int uintmax_t;
#else
__extension__
typedef long long int intmax_t;
__extension__
typedef unsigned long long int uintmax_t;
#endif
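To see which choice your own implementation made, here is a minimal sketch that reports whether intmax_t is long int or long long int:
#include <cstdint>
#include <cstdio>
#include <type_traits>

int main() {
    std::printf("sizeof(long) = %zu, sizeof(long long) = %zu, sizeof(intmax_t) = %zu\n",
                sizeof(long), sizeof(long long), sizeof(std::intmax_t));
    std::printf("intmax_t is long int:      %s\n",
                std::is_same<std::intmax_t, long>::value ? "yes" : "no");
    std::printf("intmax_t is long long int: %s\n",
                std::is_same<std::intmax_t, long long>::value ? "yes" : "no");
    return 0;
}
With glibc on a 64-bit target this prints sizes of 8/8/8 and reports long int, matching the header excerpt above.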

Difference between different integer types

I was wondering what the difference is between uint32_t and uint32, and when I looked in the header files I found this:
types.h:
/** @brief 32-bit unsigned integer. */
typedef unsigned int uint32;
stdint.h:
typedef unsigned uint32_t;
This only leads to more questions:
What is the difference between
unsigned varName;
and
unsigned int varName;
?
I am using MinGW.
unsigned and unsigned int are synonymous, much like unsigned short [int] and unsigned long [int].
uint32_t is a type that's (optionally) defined by the C standard. uint32 is just a non-standard name someone made up in that types.h, although it happens to be defined as the same thing.
There is no difference.
In your case unsigned int = uint32 = uint32_t = unsigned, and unsigned int = unsigned always holds.
unsigned and unsigned int are synonymous for historical reasons; they both mean "unsigned integer of the most natural size for the CPU architecture/platform", which is often (but by no means always) 32 bits on modern platforms.
<stdint.h> is a standard header in C99 that is supposed to give type definitions for integers of particular sizes, with the uint32_t naming convention.
The <types.h> that you're looking at appears to be non-standard and presumably belongs to some framework your project is using. Its uint32 typedef is compatible with uint32_t. Whether you should use one or the other in your code is a question for your manager.
There is absolutely no difference between unsigned and unsigned int.
Whether that type is a good match for uint32_t is implementation-dependent, though; an int could be "shorter" than 32 bits.
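A minimal sketch that makes the first point checkable at compile time and reports the second at run time:
#include <climits>
#include <cstdint>
#include <cstdio>
#include <type_traits>

static_assert(std::is_same<unsigned, unsigned int>::value,
              "unsigned and unsigned int are always the very same type");

int main() {
    /* Whether uint32_t is a typedef for unsigned int is up to the implementation;
       on MinGW (32-bit int) it is, but the standard only requires int to have
       at least 16 bits. */
    std::printf("int has %zu bits\n", sizeof(int) * CHAR_BIT);
    std::printf("uint32_t is unsigned int: %s\n",
                std::is_same<std::uint32_t, unsigned int>::value ? "yes" : "no");
    return 0;
}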