C++ - Getting size in bits of integer

I need to know whether an integer is 32 bits long or not; that is, whether it's exactly 32 bits long (8 hexadecimal characters). How could I achieve this in C++? Should I work with the hexadecimal representation or with the unsigned int one?
My code is as follows:
mistream.open("myfile.txt");
if (mistream)
{
    for (int i = 0; i < longArray; i++)
    {
        mistream >> hex >> datos[i];
    }
}
mistream.close();
Where mistream is of type std::ifstream and datos is an unsigned int array.
Thank you

std::numeric_limits<unsigned>::digits
is a static integer constant (a constexpr since C++11) giving the number of bits (since unsigned is stored in base 2, it gives binary digits).
You need to #include <limits> to get this, and you'll notice that it gives the same value as Thomas' answer (while also being generalizable to other primitive types).
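For instance, a minimal usage sketch (the printed values are platform-typical, not guaranteed):
#include <iostream>
#include <limits>
int main()
{
    // Value bits in unsigned: typically 32
    std::cout << std::numeric_limits<unsigned>::digits << '\n';
    // For signed types, digits excludes the sign bit: typically 31 for int
    std::cout << std::numeric_limits<int>::digits << '\n';
}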
For reference (you changed your question after I answered), every integer of a given type (e.g., unsigned) in a given program is exactly the same size.
What you're now asking is not the size of the integer in bits, because that never varies, but whether the top bit is set. You can test this trivially with
bool isTopBitSet(uint32_t v) {
    return v & 0x80000000u;
}
(replace the unsigned hex literal with something like T{1} << (std::numeric_limits<T>::digits-1) if you want to generalise to unsigned T other than uint32_t).
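A hedged sketch of that generalisation, assuming an unsigned type T:
#include <limits>
// True if the highest value bit of an unsigned type T is set.
template <typename T>
bool isTopBitSet(T v)
{
    return (v & (T{1} << (std::numeric_limits<T>::digits - 1))) != 0;
}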

As already hinted in a comment by @chux, you can use a combination of the sizeof operator and the CHAR_BIT macro constant. The former tells you (at compile time) the size of its argument type in multiples of sizeof(char), i.e. bytes. The latter is the number of bits per byte (usually 8).
You can encapsulate this nicely into a function template.
#include <climits>  // CHAR_BIT
#include <cstddef>  // std::size_t
#include <iostream> // std::cout, std::endl

template <typename T>
constexpr std::size_t
bit_size() noexcept
{
    return sizeof(T) * CHAR_BIT;
}

int
main()
{
    std::cout << bit_size<int>() << std::endl;
    std::cout << bit_size<long>() << std::endl;
}
On my implementation, it outputs 32 and 64.
Since the function is constexpr, you can use it in static contexts, such as in static_assert(bit_size<int>() >= 32, "too small");.

Try this:
#include <climits>
unsigned int bits_per_byte = CHAR_BIT;
unsigned int bits_per_integer = CHAR_BIT * sizeof(int);
The identifier CHAR_BIT represents the number of bits in a char.
The sizeof operator returns the number of char locations occupied by the integer.
Multiplying them gives us the number of bits for an integer.
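For example, a tiny demo (the outputs shown in the comments are typical for platforms with 8-bit bytes and 32-bit int, not guaranteed):
#include <climits>
#include <iostream>
int main()
{
    std::cout << CHAR_BIT << '\n';               // commonly 8
    std::cout << CHAR_BIT * sizeof(int) << '\n'; // commonly 32
}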

OP said "if it's exactly 32 bits long (8 hexadecimal characters)" and further with ".. interested in knowing if the value is between power(2, 31) and power(2, 32) - 1". So it is a little fuzzy on negative 32-bit numbers.
Certainly OP wants to know the result based on the value and not the type.
#include <climits> // INT_MAX

bool integer_is_32_bits_long(int x)
{
    return
        // cope with 32-bit int: the top bit set means the value is negative
        ((INT_MAX == 0x7FFFFFFF) && (x < 0)) ||
        // int wider than 32 bits: value lies in [2^31, 2^32 - 1]
        ((INT_MAX > 0x7FFFFFFF) && (x >= 0x80000000) && (x <= 0xFFFFFFFF));
}
Of course if int is 16-bit, then the result is always false.
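A usage sketch, assuming the function above is in scope and int is 32 bits:
#include <iostream>
int main()
{
    std::cout << std::boolalpha
              << integer_is_32_bits_long(-1) << '\n'  // true: bit pattern 0xFFFFFFFF
              << integer_is_32_bits_long(42) << '\n'; // false: fits in 31 bits
}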

I want to know if it's exactly 32 bits long (8 hexadecimal characters)
I am interested in knowing if the value is between power(2, 31) and power(2, 32) - 1
So you want to know if the upper bit is set? Then you can simply test if the number is negative:
bool upperBitSet(int x)
{
    return x < 0;
}
For unsigned numbers, you can simply shift left and back right and then check if you lost data:
bool upperBitSet(unsigned x)
{
    return (x << 1 >> 1) != x;
}
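A quick usage sketch, assuming both overloads above are in scope:
#include <iostream>
int main()
{
    std::cout << std::boolalpha
              << upperBitSet(-1) << '\n'           // true: negative, sign bit set
              << upperBitSet(0x80000000u) << '\n'  // true: the shift loses the top bit
              << upperBitSet(1u) << '\n';          // false
}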

The simplest way probably is to check if the 32nd bit is set:
bool isReally32bitsLong(uint32_t in) {
    return (in >> 31) != 0;
}
bool isExactly32BitsLong(uint64_t in) {
    return ((in >> 31) != 0) && ((in >> 32) == 0);
}
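For example, a sketch assuming the two helpers above are in scope:
#include <cstdint>
#include <iostream>
int main()
{
    std::cout << std::boolalpha
              << isReally32bitsLong(0x80000000u) << '\n'      // true: bit 31 set
              << isReally32bitsLong(0x7FFFFFFFu) << '\n'      // false
              << isExactly32BitsLong(0xFFFFFFFFull) << '\n'   // true: in [2^31, 2^32 - 1]
              << isExactly32BitsLong(0x100000000ull) << '\n'; // false: needs bit 32
}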

Related

How to extract a subset of a bitset larger than unsigned long long without using to_string or extracting bits one-by-one?

I want to extract a subset from a given bitset, where:
both G (the given bitset size) and S (the subset size) are known at compile-time
G > sizeof(unsigned long long) * 8
S <= sizeof(int) * 8
The subset may start anywhere in the given bitset
The resulting subset should be a bitset of the correct size, S
I can isolate the subset using bitwise math, but then bitset.to_ullong does not work because of an overflow error.
I know I can use bitset.to_string and do string manipulations/conversion. What I'm currently doing is extracting bits one-by-one manually (this is enough for my use case).
I'm just interested to know if there are other ways of doing this.
std::bitset has bitshift operators, so for example you can write a function to place the bits [begin, end) at the front:
template <size_t size>
std::bitset<size> extract_bits(std::bitset<size> x, int begin, int end) {
    (x >>= (size - end)) <<= (size - end);
    x <<= begin;
    return x;
}
For example, to get bits 1 and 2:
int main()
{
    std::bitset<6> x{12};
    std::cout << x << "\n";
    std::cout << extract_bits(x, 1, 3);
}
Output:
001100
010000
If you know begin and end at compile-time you can get a std::bitset containing only the desired bits:
template <size_t begin, size_t end, size_t in>
std::bitset<end - begin> crop(const std::bitset<in>& x) {
    return std::bitset<end - begin>{ x.to_string().substr(begin, end - begin) };
}
auto c = crop<1,3>(x);
std::cout << c; // 01
If the bitset is too large to fit in an unsigned long long, I don't know of another way that avoids to_string() or looping over individual bits.

Converting a 'long' type into a binary String

My objective is to write an algorithm that would be able to convert a long number into a binary number stored in a string.
Here is my current block of code:
#include <iostream>
#define LONG_SIZE 64; // size of a long type is 64 bits
using namespace std;
string b10_to_b2(long x)
{
    string binNum;
    if (x < 0) // determine if the number is negative; a two's complement number is negative if its first bit is one.
    {
        binNum = "1";
    }
    else
    {
        binNum = "0";
    }
    int i = LONG_SIZE - 1;
    while (i > 0)
    {
        i--;
        if ((x & (1 << i)) == (1 << i))
        {
            binNum = binNum + "1";
        }
        else
        {
            binNum = binNum + "0";
        }
    }
    return binNum;
}
int main()
{
    cout << b10_to_b2(10) << endl;
}
The output of this program is:
00000000000000000000000000000101000000000000000000000000000001010
I want the output to be:
00000000000000000000000000000000000000000000000000000000000001010
Can anyone identify the problem? For whatever reason the function outputs 10 represented by 32 bits concatenated with another 10 represented by 32 bits.
Why would you assume long is 64 bits?
Try const size_t LONG_SIZE = sizeof(long) * 8; instead.
Check this: the program works correctly with my changes:
http://ideone.com/y3OeB3
Edit: as @Mats Petersson pointed out, you can make it more robust by changing this line
if( (x & ( 1 << i) ) == ( 1 << i) )
to something like
if( (x & ( 1UL << i) ) ), where the UL is important; you can see his explanation in the comments.
Several suggestions:
Make sure you use a type that is guaranteed to be 64-bit, such as uint64_t, int64_t or long long.
Use the above-mentioned 64-bit type for your variable i to guarantee that 1 << i is calculated correctly. This is because shifting is only defined by the standard when the number of bits shifted is strictly less than the number of bits in the type being shifted, and 1 has type int, which on most modern platforms (evidently including yours) is 32 bits.
Don't put a semicolon at the end of your #define LONG_SIZE; or better yet, use const int long_size = 64; as this allows all manner of better behaviour. For example, in the debugger print long_size will give 64, whereas print LONG_SIZE will yield an error when LONG_SIZE is a macro. A corrected sketch applying these suggestions follows.
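As a hedged illustration (not the original poster's code), the suggestions combined:
#include <cstdint>
#include <iostream>
#include <string>
const int long_size = 64; // no macro, no stray semicolon
std::string b10_to_b2(std::int64_t x)
{
    std::string binNum;
    // Work on an unsigned copy so every shift is well defined,
    // then test bits from most to least significant.
    std::uint64_t bits = static_cast<std::uint64_t>(x);
    for (int i = long_size - 1; i >= 0; --i)
        binNum += ((bits >> i) & 1u) ? '1' : '0';
    return binNum;
}
int main()
{
    std::cout << b10_to_b2(10) << '\n'; // 60 zeros followed by 1010
}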

Binary-Decimal Negative bit set

How can I tell if a binary number is negative?
Currently I have the code below. It works fine converting to Binary. When converting to decimal, I need to know if the left most bit is 1 to tell if it is negative or not but I cannot seem to figure out how to do that.
Also, instead of making my Bin2 function print 1's and 0's, how can I make it return an integer? I didn't want to store it in a string and then convert to int.
EDIT: I'm using 8 bit numbers.
int Bin2(int value, int Padding = 8)
{
    for (int I = Padding; I > 0; --I)
    {
        if (value & (1 << (I - 1)))
            std::cout << '1';
        else
            std::cout << '0';
    }
    return 0;
}
int Dec2(int Value)
{
    //bool Negative = (Value & 10000000);
    int Dec = 0;
    for (int I = 0; Value > 0; ++I)
    {
        if (Value % 10 == 1)
        {
            Dec += (1 << I);
        }
        Value /= 10;
    }
    //if (Negative) (Dec -= (1 << 8));
    return Dec;
}
int main()
{
    Bin2(25);
    std::cout << "\n\n";
    std::cout << Dec2(11001);
}
You are checking for a negative value incorrectly. Do the following instead:
bool Negative = (value & 0x80000000); // works only where int is 32 bits
Or maybe just compare it with 0.
bool Negative = (value < 0);
Why don't you just compare it to 0? That should work fine, and you almost certainly can't do this more efficiently than the compiler.
I am entirely unclear if this is what the OP is looking for, but it's worth a toss:
If you know you have a value in a signed int that is supposed to be representing a signed 8-bit value, you can pull it apart, store it in a signed 8-bit value, then promote it back to a native int signed value like this:
#include <stdio.h>
int main(void)
{
    // signed integer, value is 245. 8-bit signed value is (-11)
    int num = 0xF5;
    // pull out the low 8 bits, storing them in a signed char.
    signed char ch = (signed char)(num & 0xFF);
    // now let the signed char promote to a signed int.
    int res = ch;
    // finally print both.
    printf("%d ==> %d\n", num, res);

    // do it again for an 8-bit positive value,
    // this time with just direct casts.
    num = 0x70;
    printf("%d ==> %d\n", num, (int)((signed char)(num & 0xFF)));
    return 0;
}
Output
245 ==> -11
112 ==> 112
Is that what you're trying to do? In short, the code above will take the 8 bits sitting at the bottom of num, treat them as a signed 8-bit value, then promote them to a signed native int. The result is that you not only "know" whether the 8 bits were a negative number (since res will be negative if they were), you also get the 8-bit signed number as a native int in the process.
On the other hand, if all you care about is whether the 8th bit is set in the input int, and is supposed to denote a negative value state, then why not just :
int IsEightBitNegative(int val)
{
    return (val & 0x80) != 0;
}

How do I use bitwise operators on a "double" on C++?

I was asked to get the internal binary representation of different types in C. My program currently works fine with 'int' but I would like to use it with "double" and "float". My code looks like this:
template <typename T>
string findBin(T x) {
    string binary;
    for (int i = 4096; i >= 1; i /= 2) {
        if ((x & i) != 0) binary += "1";
        else binary += "0";
    }
    return binary;
}
The program fails when I try to instantiate the template using a "double" or a "float".
Succinctly, you don't.
The bitwise operators do not make sense when applied to double or float, and the standard says that the bitwise operators (~, &, |, ^, >>, <<, and the assignment variants) do not accept double or float operands.
Both double and float have 3 sections - a sign bit, an exponent, and the mantissa. Suppose for a moment that you could shift a double right. The exponent, in particular, means that there is no simple translation to shifting a bit pattern right - the sign bit would move into the exponent, and the least significant bit of the exponent would shift into the mantissa, with completely non-obvious sets of meanings. In IEEE 754, there's an implied 1 bit in front of the actual mantissa bits, which also complicates the interpretation.
Similar comments apply to any of the other bit operators.
So, because there is no sane or useful interpretation of the bit operators to double values, they are not allowed by the standard.
From the comments:
I'm only interested in the binary representation. I just want to print it, not do anything useful with it.
This code was written several years ago for SPARC (big-endian) architecture.
#include <stdio.h>

union u_double
{
    double dbl;
    char data[sizeof(double)];
};

union u_float
{
    float flt;
    char data[sizeof(float)];
};

static void dump_float(union u_float f)
{
    int exp;
    long mant;
    printf("32-bit float: sign: %d, ", (f.data[0] & 0x80) >> 7);
    exp = ((f.data[0] & 0x7F) << 1) | ((f.data[1] & 0x80) >> 7);
    printf("expt: %4d (unbiassed %5d), ", exp, exp - 127);
    mant = ((((f.data[1] & 0x7F) << 8) | (f.data[2] & 0xFF)) << 8) | (f.data[3] & 0xFF);
    printf("mant: %16ld (0x%06lX)\n", mant, mant);
}

static void dump_double(union u_double d)
{
    int exp;
    long long mant;
    printf("64-bit float: sign: %d, ", (d.data[0] & 0x80) >> 7);
    exp = ((d.data[0] & 0x7F) << 4) | ((d.data[1] & 0xF0) >> 4);
    printf("expt: %4d (unbiassed %5d), ", exp, exp - 1023);
    mant = ((((d.data[1] & 0x0F) << 8) | (d.data[2] & 0xFF)) << 8) | (d.data[3] & 0xFF);
    mant = (mant << 32) | ((((((d.data[4] & 0xFF) << 8) | (d.data[5] & 0xFF)) << 8) | (d.data[6] & 0xFF)) << 8) | (d.data[7] & 0xFF);
    printf("mant: %16lld (0x%013llX)\n", mant, mant);
}

static void print_value(double v)
{
    union u_double d;
    union u_float f;
    f.flt = v;
    d.dbl = v;
    printf("SPARC: float/double of %g\n", v);
    // image_print(stdout, 0, f.data, sizeof(f.data));
    // image_print(stdout, 0, d.data, sizeof(d.data));
    dump_float(f);
    dump_double(d);
}

int main(void)
{
    print_value(+1.0);
    print_value(+2.0);
    print_value(+3.0);
    print_value( 0.0);
    print_value(-3.0);
    print_value(+3.1415926535897932);
    print_value(+1e126);
    return(0);
}
The commented-out image_print() function prints an arbitrary set of bytes in hex, with various minor tweaks. Contact me if you want the code (see my profile).
If you're using Intel (little-endian), you'll probably need to tweak the code to deal with the reverse bit order. But it shows how you can do it - using a union.
You cannot directly apply bitwise operators to float or double, but you can still access the bits indirectly by putting the variable in a union with a character array of the appropriate size, then reading the bits from those characters. For example:
string BitsFromDouble(double value) {
    union {
        double doubleValue;
        char asChars[sizeof(double)];
    };
    doubleValue = value; // Write to the union
    /* Extract the bits; CharToBits is assumed to render one char's bits. */
    string result;
    for (size_t i = 0; i < sizeof(double); ++i)
        result += CharToBits(asChars[i]);
    return result;
}
You may need to adjust your routine to work on chars, which usually don't range up to 4096, and there may also be some weirdness with endianness here, but the basic idea should work. It won't be cross-platform compatible, since machines use different endianness and representations of doubles, so be careful how you use this.
Bitwise operators don't generally work with the "binary representation" (also called the object representation) of any type. Bitwise operators work with the value representation of the type, which is generally different from the object representation. That applies to int as well as to double.
If you really want to get at the internal binary representation of an object of any type, as you stated in your question, you need to reinterpret the object of that type as an array of unsigned char objects and then use the bitwise operators on those unsigned chars.
For example
double d = 12.34;
const unsigned char *c = reinterpret_cast<unsigned char *>(&d);
Now by accessing elements c[0] through c[sizeof(double) - 1] you will see the internal representation of type double. You can use bitwise operations on these unsigned char values, if you want to.
Note, again, that in the general case, in order to access the internal representation of type int you have to do the same thing. It applies to any type other than the char types.
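For instance, a small sketch that dumps those bytes in hex (byte order will differ between little- and big-endian machines):
#include <cstdio>
int main()
{
    double d = 12.34;
    const unsigned char *c = reinterpret_cast<const unsigned char *>(&d);
    for (unsigned i = 0; i < sizeof(double); ++i)
        std::printf("%02X ", (unsigned)c[i]); // one byte of the object representation
    std::printf("\n");
}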
Do a bit-wise cast of a pointer to the double to long long * and dereference.
Example:
inline double bit_and_d(double* d, long long mask) {
    long long t = (*(long long*)d) & mask;
    return *(double*)&t;
}
Edit: This is almost certainly going to run afoul of gcc's enforcement of strict aliasing. Use one of the various workarounds for that. (memcpy, unions, __attribute__((__may_alias__)), etc)
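For instance, a memcpy-based sketch of the same operation that stays within the aliasing rules (bit_and_d_safe is a hypothetical name; assumes double and long long are both 64 bits):
#include <cstring>
// Copy the double's bytes into an integer, apply the mask, copy back.
inline double bit_and_d_safe(double d, long long mask)
{
    long long t;
    std::memcpy(&t, &d, sizeof t);
    t &= mask;
    std::memcpy(&d, &t, sizeof d);
    return d;
}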
Another solution is to get a pointer to the floating-point variable and cast it to a pointer to an integer type of the same size, then read the integer that pointer points to. Now you have an integer variable with the same binary representation as the floating-point one, and you can use your bitwise operator.
string findBin(float f) {
    string binary;
    long x = *(long *)&f; // reinterpret the float's bits as an integer
    for (long i = 4096; i >= 1; i /= 2) {
        if ((x & i) != 0) binary += "1";
        else binary += "0";
    }
    return binary;
}
But remember: you have to cast to a type with same size. Otherwise unpredictable things may happen (like buffer overflow, access violation etc.).
As others have said, you can use a bitwise operator on a double by casting double* to long long* (or sometimes just long*).
#include <stdio.h>
#include <stdlib.h>
int main() {
    double *x = (double *)malloc(sizeof(double));
    *x = -5.12345;
    printf("%f\n", *x);
    *((long *)x) &= 0x7FFFFFFFFFFFFFFF; // clear the sign bit (assumes 64-bit long)
    printf("%f\n", *x);
    return 0;
}
On my computer, this code prints:
-5.123450
5.123450

C++: max integer [duplicate]

This question already has answers here:
maximum value of int
(7 answers)
Closed 7 years ago.
Is there a C++ cross-platform library that provides me with a portable maximum integer number?
I want to declare:
const int MAX_NUM = /* call some library here */;
I use MSVC 2008 unmanaged.
In the C++ standard library header <limits>, you will find:
std::numeric_limits<int>::max()
This will tell you the maximum value that can be stored in a variable of type int. numeric_limits is a class template, and you can pass it any of the numeric types to get the maximum value that they can hold.
The numeric_limits class template has a lot of other information about numeric types as well.
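Putting it together for the declaration in the question:
#include <limits>
const int MAX_NUM = std::numeric_limits<int>::max();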
See limits.h (C) or climits (C++). In this case you would want the INT_MAX constant.
I know it's an old question but maybe someone can use this solution:
int size = 0; // Fill all bits with zero (0)
size = ~size; // Negate all bits, thus all bits are set to one (1)
So far we have -1 as the result, since size is a signed int.
size = (unsigned int)size >> 1; // Shift the bits of size one position to the right.
As the Standard says, the bits shifted in are 1 if the variable is signed and negative, and 0 if the variable is unsigned, or signed and positive.
As size is signed and negative, we would shift in the sign bit, which is 1 and not much help, so we cast to unsigned int, forcing a 0 to be shifted in instead; this sets the sign bit to 0 while letting all other bits remain 1.
cout << size << endl; // Prints out size, which is now set to the maximum positive value.
We could also use a mask and xor, but then we would have to know the exact bit size of the variable. By shifting bits in at the front, we never need to know how many bits the int has on any machine or compiler, nor do we need extra libraries.
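Assembled into a runnable sketch (assumes the usual two's complement representation):
#include <iostream>
int main()
{
    int size = 0;                   // all bits zero
    size = ~size;                   // all bits one: -1 on two's complement
    size = (unsigned int)size >> 1; // logical shift clears the top bit
    std::cout << size << std::endl; // prints the maximum int, e.g. 2147483647
}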
I know the answer has been given, but from my old days I remember I used to do
int max = (unsigned int)-1
Will it give the same result as
std::numeric_limits<int>::max()
?
On HP-UX with the aCC compiler:
#include <iostream>
#include <limits>
using namespace std;
int main() {
    if (sizeof(int) == sizeof(long)) {
        cout << "sizeof int == sizeof long" << endl;
    } else {
        cout << "sizeof int != sizeof long" << endl;
    }
    if (numeric_limits<int>::max() == numeric_limits<long>::max()) {
        cout << "INT_MAX == LONG_MAX" << endl;
    } else {
        cout << "INT_MAX != LONG_MAX" << endl;
    }
    cout << "Maximum value for int: " << numeric_limits<int>::max() << endl;
    cout << "Maximum value for long: " << numeric_limits<long>::max() << endl;
    return 0;
}
It prints:
sizeof int == sizeof long
INT_MAX != LONG_MAX
I checked: both int and long types are 4 bytes.
The manpage limits(5) says that INT_MAX and LONG_MAX are both 2147483647:
http://nixdoc.net/man-pages/HP-UX/man5/limits.5.html
So, in conclusion: the values reported by std::numeric_limits<type> are not portable across platforms.