How can I manually calculate the high part of a signed multiplication in C++? Like in Getting the high part of 64 bit integer multiplication (which covers the unsigned case only), but how do I calculate the carry/borrow?
I do not mean casting to a larger type (that's simple), but a truly manual calculation, so it also works with int128_t.
My goal is to write a template function that always returns the correct high part for signed and unsigned arguments (u/int8..128_t):
template <typename Type>
constexpr Type mulh(const Type& op1, const Type& op2) noexcept
{
    if constexpr (std::is_signed_v<Type>) return ???;
    else return see link;
}
It seems that you are implementing things that are usually available as compiler builtin functions. Implementing them in standard C++ usually results in less efficient code. It can still be fun as a mental exercise, but then why ask us to spoil the whole thing?
If you can do the unsigned case, then you could convert the signed operands to their magnitudes, x = (x < 0) ? -x : x, get the result, and then apply the sign afterwards: z = ((x < 0) != (y < 0)) ? -z : z. For just the high part, applying the sign means negating the whole double-width product, which is where the borrow from the low half comes in.
This works because the magnitude of a signed integer of arbitrary length always fits into an unsigned integer of the same length, and you know that if both numbers are negative or both are positive the answer will be positive, while if only one of them is negative it will be negative.
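For illustration, here is a sketch of that idea as a template. It assumes an unsigned helper umulh() implemented as in the linked question (the name is made up here), and it shows where the carry/borrow from the low half appears when the double-width product has to be negated:
#include <type_traits>
// umulh(): assumed helper returning the high half of the full unsigned
// product, implemented as in the linked unsigned-only question.
template <typename U>
constexpr U umulh(U a, U b) noexcept;
template <typename S>
constexpr S mulh_signed(S a, S b) noexcept
{
    // std::make_unsigned_t does not cover __int128 in strict modes,
    // so you may need a trait of your own there.
    using U = std::make_unsigned_t<S>;
    const U ua = a < 0 ? U(0) - U(a) : U(a);   // magnitudes, safe even for the minimum value
    const U ub = b < 0 ? U(0) - U(b) : U(b);
    U hi = umulh(ua, ub);
    const U lo = ua * ub;                      // wrapping low half of |a|*|b|
    if ((a < 0) != (b < 0)) {
        // Negate the full double-width product: the low half carries into
        // the high half only when it is exactly zero.
        hi = ~hi + U(lo == 0);
    }
    // Converting an out-of-range unsigned value back to S is
    // implementation-defined before C++20 (two's complement from C++20 on).
    return static_cast<S>(hi);
}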
Related
I'm writing C++ code in an environment in which I don't have access to the C++ standard library, specifically not to std::numeric_limits. Suppose I want to implement
template <typename T> constexpr T all_ones( /* ... */ )
Focusing on unsigned integral types, what do I put there? Specifically, is static_cast<T>(-1) good enough? (Other types I could treat as an array of unsigned chars based on their size I guess.)
Use the bitwise NOT operator ~ on 0.
T allOnes = ~(T)0;
A static_cast<T>(-1) may look like it assumes two's complement, but for unsigned types the conversion is defined modulo 2^N, so it is portable. If you are only concerned about unsigned types, hvd's answer is the way to go.
Working example: https://ideone.com/iV28u0
Focusing on unsigned integral types, what do I put there? Specifically, is static_cast<T>(-1) good enough?
If you're only concerned about unsigned types, yes, converting -1 is correct for all standard C++ implementations. Operations on unsigned types, including conversions of signed types to unsigned types, are guaranteed to work modulo (max+1).
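A couple of quick compile-time checks of that rule, assuming the usual 8-bit char and 16-bit short:
static_assert(static_cast<unsigned char>(-1) == 0xFF, "conversion is modulo 2^8");
static_assert(static_cast<unsigned short>(-1) == 0xFFFF, "conversion is modulo 2^16");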
There's this disarmingly direct way:
T allOnes;
memset(&allOnes, ~0, sizeof(T));   // memset is in <cstring>
Focusing on unsigned integral types, what do I put there?
Specifically, is static_cast<T>(-1) good enough?
Yes, it is good enough.
But I prefer a hex value because my background is embedded systems, and I have always had to know the sizeof(T).
Even in desktop systems, we know the sizes of the following T:
uint8_t allones8 = 0xff;
uint16_t allones16 = 0xffff;
uint32_t allones32 = 0xffffffff;
uint64_t allones64 = 0xffffffffffffffff;
Another way is
static_cast<T>(-1ull)
which is more correct and works with any signed integer representation, regardless of one's complement, two's complement, or sign-magnitude. You can also use static_cast<T>(-UINTMAX_C(1)).
This works because unary minus of an unsigned value is defined as:
"The negative of an unsigned quantity is computed by subtracting its value from 2^n, where n is the number of bits in the promoted operand."
Therefore -1u always yields an all-one-bits value of type unsigned int. The ll suffix makes it work for any type narrower than unsigned long long. There are no extended integer types (yet) in C++, so this should be fine.
However, a solution that expresses the intent more clearly would be
static_cast<T>(~0ull)
This question already has answers here:
Difference between unsigned and unsigned int in C
I saw in some C++ code the keyword "unsigned" in the following form:
const int HASH_MASK = unsigned(-1) >> 1;
and later:
unsigned hash = HASH_SEED;
(it is taken from the CS106B/X reader - of Stanford - by Eric S. Roberts - on the topic of "implementation of the hash code function for strings").
Can someone please tell me what that keyword means and when I would use it?
Thanks!
Take a look: https://stackoverflow.com/a/7176690/1758762
unsigned is a modifier which can apply to any integral type (char, short, int, long, etc.) but on its own it is identical to unsigned int.
It's a short version of unsigned int. Syntactically, you can use it anywhere you would use any other datatype like float or short.
Unsigned types are types that can't represent negative numbers; only zero and positive numbers. In C++, they use modular arithmetic; the modulus for an N-bit type is 2^N. It's a good idea to use unsigned rather than signed types when messing around with bit patterns (for example, when calculating hash codes), since C++ allows several different representations of negative numbers which could lead to portability issues.
unsigned can be used as a qualifier for any integer type (e.g. unsigned int or unsigned long long); or on its own as shorthand for unsigned int.
So the first converts -1 into unsigned int. Due to modular arithmetic, this gives the largest representable value. This could also be written (more clearly, in my opinion) as std::numeric_limits<unsigned>::max().
The second declares and initialises a variable of type unsigned int.
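As a quick illustration (assuming a typical platform where int and unsigned are 32 bits wide with no padding bits):
#include <climits>
static_assert(unsigned(-1) == UINT_MAX, "all bits set");
static_assert((unsigned(-1) >> 1) == 0x7FFFFFFFu, "top bit cleared");
const int HASH_MASK = unsigned(-1) >> 1;   // the largest positive int on such a platform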
Integer types are signed by default, which means they can hold positive or negative values. The unsigned keyword specifies that a variable cannot hold negative values.
Signed types effectively use one bit to indicate whether the value is negative. The unsigned keyword makes that bit part of the value instead, allowing larger positive numbers to be stored.
Lastly, unsigned hash is interpreted by compilers as unsigned int hash (int being the default type in C programming).
To get a good idea of what unsigned means, one has to understand signed and unsigned integers. For a full explanation of two's complement, search Wikipedia, but in a nutshell, a computer stores a negative number by subtracting its magnitude from 2^32 (for a 32-bit integer). In this way, -1 is stored as 2^32 - 1. This does mean that you only have 2^31 positive numbers, but that is by the by. This is known as a signed integer (as it can have a positive or negative sign).
Unsigned tells the compiler that you don't want two's complement and are dealing only in positive numbers. When -1 is cast (as it is in the code) to an unsigned int it becomes
2^32 - 1 = 0b11111111...
Thus that is an easy way of getting a whole lot of 1s in binary.
Use unsigned rarely: when you need to do bit operations, or when for some reason you need only positive integers bigger than 2^31. Otherwise, if you leave it out, C++ assumes signed integers.
C allows chars to be signed or unsigned, depending on which is more efficient for the host computer. If you want to be sure your char is unsigned, you can declare your variable as unsigned char. You can use signed char if you want to ensure signed interpretation.
Incidentally, C and C++ compilers treat char, signed char, and unsigned char as three distinct types, even though plain char has the same representation as one of the other two.
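A quick compile-time check of that last point:
#include <type_traits>
static_assert(!std::is_same_v<char, signed char>, "char is a distinct type");
static_assert(!std::is_same_v<char, unsigned char>, "char is a distinct type");
static_assert(!std::is_same_v<signed char, unsigned char>, "distinct types");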
Let's say you have a very long binary word (>64 bits) which represents an unsigned integral value, and you would like to print the actual number. We're talking C++, so let's assume you start off with a bool[], std::vector<bool>, or std::bitset, and end up with a std::string or some kind of std::ostream - whatever your solution prefers. But please only use the core language and STL.
Now, I suspect you must evaluate it chunkwise, to get intermediate results that are small enough to store away - preferably base 10, as in x·10^k. I could figure out how to assemble the number from that point. But since no chunk width in bits corresponds to base 10, I don't know how to do it. Of course, you can start with any other chunk width, say 3 bits, to get intermediates of the form x·(2^3)^k, and then convert to base 10, but this leads to x·10^(3·k·lg 2), which obviously has a non-integer exponent and isn't of any help.
Anyway, I'm exhausted by this math stuff and I would appreciate a thoughtful suggestion.
Yours sincerely,
Armin
I'm going to assume you already have some sort of bignum division/modulo function to work with, because implementing such a thing is a complete nightmare.
#include <algorithm>
#include <iterator>
#include <ostream>
#include <string>
class bignum {
public:
    bignum(unsigned value=0);
    bignum(const bignum& rhs);
    bignum(bignum&& rhs);
    // divides *this in place; the remainder is written to out_modulo
    void divide(const bignum& denominator, bignum& out_modulo);
    explicit operator bool();
    explicit operator unsigned();
};
std::ostream& operator<<(std::ostream& out, bignum value) {
    std::string backwards;
    bignum remainder;
    do {
        value.divide(10, remainder);                      // peel off the lowest decimal digit
        backwards.push_back(char(unsigned(remainder) + '0'));
    } while(value);
    std::copy(backwards.rbegin(), backwards.rend(), std::ostream_iterator<char>(out));
    return out;
}
If rounding is an option, it should be fairly trivial to convert most bignums to double as well, which would be a LOT faster. Namely, copy the 64 most significant bits into a 64-bit unsigned integer, convert that to a double, and then multiply by 2.0 raised to the number of significant bits minus 64. (I say significant bits because you have to skip any leading zeros.)
So if you have 150 significant bits, copy the top 64 into a 64-bit unsigned integer, convert that to a double, and multiply by std::pow(2.0, 150-64) ≈ 7.73e+25 to get the result. If you only have 40 significant bits, pad with zeros on the right and it still works: copy the 40 bits to the most significant bits of the 64-bit integer, convert that to a double, and multiply by std::pow(2.0, 40-64) ≈ 5.96e-8 to get the result!
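Here is a rough sketch of that conversion; bits_to_double() is a made-up helper where the caller passes the 64 most significant bits (or the whole value, right-aligned, if there are fewer than 64) together with the total number of significant bits:
#include <cmath>
#include <cstdint>
double bits_to_double(std::uint64_t top64, unsigned significant_bits)
{
    if (significant_bits == 0)
        return 0.0;                                   // the whole number is zero
    if (significant_bits <= 64)
        return static_cast<double>(top64);            // nothing was dropped
    // top64 holds only the leading 64 bits; scale by the bits that were dropped
    return static_cast<double>(top64)
         * std::pow(2.0, static_cast<double>(significant_bits - 64));
}
For example, bits_to_double(0x8000000000000000ull, 150) gives roughly 2^149.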
Edit
Oli Charlesworth posted a link to the Wikipedia page on Double Dabble, which blows the first algorithm I showed out of the water. Don't I feel silly.
I'm writing a fixed-point class, but have run into a bit of a snag... I'm not sure how to emulate the multiplication and division portions. I took a very rough stab at the division operator, but I'm sure it's wrong. Here's what it looks like so far:
class Fixed
{
Fixed(short int _value, short int _part) :
value(long(_value + (_part >> 8))), part(long(_part & 0x0000FFFF)) {};
...
inline Fixed operator -() const // example of some of the bitwise it's doing
{
return Fixed(-value - 1, (~part)&0x0000FFFF);
};
...
inline Fixed operator / (const Fixed & arg) const // example of how I'm probably doing it wrong
{
long int tempInt = value<<8 | part;
long int tempPart = tempInt;
tempInt /= arg.value<<8 | arg.part;
tempPart %= arg.value<<8 | arg.part;
return Fixed(tempInt, tempPart);
};
long int value, part; // members
};
I... am not a very good programmer, haha!
The class's part is 16 bits wide (but expressed as a 32-bit long since I imagine it'd need the room for possible overflows before they're fixed), and the same goes for value, which is the integer part. When part goes over 0xFFFF in one of its operations, the highest 16 bits are added to value, and then part is masked so only its lowest 16 bits remain. That's done in the init list.
I hate to ask, but if anyone knows where I could find documentation for something like this, or even just the 'trick' of how to do those two operators, I would be very happy! I am a dimwit when it comes to math, and I know someone must have done/asked this before, but searching Google has for once not taken me to the promised land...
As Jan says, use a single integer. Since it looks like you're specifying 16 bit integer and fractional parts, you could do this with a plain 32 bit integer.
The "trick" is to realise what happens to the "format" of the number when you do operations on it. Your format would be described as 16.16. When you add or subtract, the format stays the same. When you multiply, you get 32.32 -- So you need a 64 bit temporary value for the result. Then you do a >>16 shift to get down to 48.16 format, then take the bottom 32 bits to get your answer in 16.16.
I'm a little rusty on the division -- In DSP, where I learned this stuff, we avoided (expensive) division wherever possible!
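To make the format bookkeeping concrete, here is a small numeric illustration (the values are arbitrary):
#include <cassert>
#include <cstdint>
int main() {
    int32_t a = 0x00028000;                   // 2.5 in 16.16
    int32_t b = 0x00030000;                   // 3.0 in 16.16
    int64_t wide = (int64_t)a * (int64_t)b;   // 32.32 intermediate
    int32_t c = (int32_t)(wide >> 16);        // shift down, keep the bottom 32 bits
    assert(c == 0x00078000);                  // 7.5 in 16.16
    return 0;
}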
I'd recommend using one integer value instead of separate whole and fractional parts. Then addition and subtraction are simply their integral counterparts, and you can use 64-bit arithmetic for the intermediate results, which all common compilers support these days:
Multiplication:
Fixed operator*(const Fixed &other) const {
    // 16.16 * 16.16 yields a 32.32 intermediate; shift back down to 16.16
    return Fixed(((int64_t)value * (int64_t)other.value) >> 16);
}
Division:
Fixed operator/(const Fixed &other) const {
    // pre-shift the dividend so the quotient keeps its 16.16 scaling
    return Fixed(((int64_t)value << 16) / (int64_t)other.value);
}
64-bit integer types are available as follows:
On gcc, stdint.h (or cstdint, which places the types in the std:: namespace) should be available, so you can use the types mentioned above. Otherwise it's long long on 32-bit targets and long on 64-bit targets.
On Windows, it's always long long or __int64.
To get things up and running, first implement the (unary) inverse(x) = 1/x, and then implement a/b as a*inverse(b). You'll probably want to represent the intermediates as a 32.32 format.
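A sketch of that inverse-based division on raw 16.16 values stored in int32_t (the function names are made up; no rounding or divide-by-zero handling):
#include <cstdint>
int64_t inverse_32_32(int32_t b)              // 1/b, returned in 32.32 format
{
    // 1.0 in 32.32 is 2^32; dividing by a 16.16 value adds another 2^16 of scale
    return (int64_t(1) << 48) / b;
}
int32_t divide_16_16(int32_t a, int32_t b)    // a/b, all three values in 16.16
{
    // 16.16 * 32.32 is scaled by 2^48; shift by 32 to land back on 16.16.
    // The intermediate can overflow int64_t when the quotient would not fit
    // in 16.16 anyway.
    return int32_t((int64_t(a) * inverse_32_32(b)) >> 32);
}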
Say I want a function that takes two floats (x and y), and I want to compare them using not their float representation but rather their bitwise representation as a 32-bit unsigned int. That is, a number like -495.5 has bit representation 0b11000011111001011100000000000000 or 0xC3E5C000 as a float, and I have an unsigned int with the same bit representation (corresponding to a decimal value 3286614016, which I don't care about). Is there any easy way for me to perform an operation like <= on these floats using only the information contained in their respective unsigned int counterparts?
You must do a signed compare unless you ensure that all the original values were positive. You must use an integer type that is the same size as the original floating point type. Each chip may have a different internal format, so comparing values from different chips as integers is most likely to give misleading results.
Most float formats look something like this: sxxxmmmm
s is a sign bit
xxx is an exponent
mmmm is the mantissa
The value represented will then be something like: 1mmm << (xxx-k)
1mmm because there is an implied leading 1 bit unless the value is zero.
If xxx < k then it will be a right shift. k is near but not equal to half the largest value that could be expressed by xxx. It is adjusted for the size of the mantissa.
All that is to say that, disregarding NaN, comparing floating point values as signed integers of the same size yields meaningful results when at least one value is non-negative; when both values are negative, the order comes out reversed, because the format is sign-magnitude rather than two's complement, so a small fix-up is needed (see the sketch below). The formats are designed this way so that floating point comparisons are no more costly than integer comparisons, and there are compiler optimizations to turn off NaN checks so that the comparisons become straight integer comparisons if the floating point format of the chip supports it.
As an integer, NaN compares greater than infinity, which compares greater than all finite values. If you try an unsigned compare, all the negative values will come out larger than the positive values, just like signed integers cast to unsigned.
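For completeness, here is a sketch of that fix-up, assuming a 32-bit IEEE 754 float and an unsigned integer of the same size. It maps each bit pattern to a key whose unsigned order matches the float order (NaNs excluded, and -0.0 sorts just below +0.0 instead of comparing equal):
#include <cstdint>
#include <cstring>
std::uint32_t ordered_key(float f)
{
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof u);            // copy the bits without aliasing trouble
    // Negative floats: flip every bit, which reverses their descending order.
    // Non-negative floats: set the sign bit so they sort above all negatives.
    return (u & 0x80000000u) ? ~u : (u | 0x80000000u);
}
bool less_by_bits(float a, float b)
{
    return ordered_key(a) < ordered_key(b);
}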
If you truly truly don't care about what the conversion yields, it isn't too hard. But the results are extremely non-portable, and you almost certainly won't get an ordering that at all resembles what you'd get by comparing the floats directly.
typedef unsigned int TypeWithSameSizeAsFloat; // Fix this for your platform
bool compare1(float one, float two) {
    union Convert {
        float f;
        TypeWithSameSizeAsFloat i;
    };
    Convert lhs, rhs;
    lhs.f = one;
    rhs.f = two;
    return lhs.i < rhs.i;       // reading the inactive member is technically UB in C++
}
bool compare2(float one, float two) {
    return reinterpret_cast<TypeWithSameSizeAsFloat&>(one)
         < reinterpret_cast<TypeWithSameSizeAsFloat&>(two);
}
Just understand the caveats, and choose your second type carefully. It's a nearly worthless exercise at any rate.
In a word, no. IEEE 754 might allow some kinds of hacks like this, but they do not always work and do not handle all cases, and some platforms do not use that floating point standard (for example, doubles on x87 have 80-bit precision internally).
If you're doing this for performance reasons, I suggest you strongly reconsider: if the integer comparison is faster, the compiler will probably do it for you, and if it is not, you pay for the float-to-integer moves multiple times, when a simple comparison might be possible without moving the floats out of registers at all.
Maybe I'm misreading the question, but I suppose you could do this:
bool compare(float a, float b)
{
return *((unsigned int*)&a) < *((unsigned int*)&b);
}
But this assumes all kinds of things, and it also raises the question of why you'd want to compare the bitwise representations of two floats in the first place.