C Bitwise, TRUNCATING zeros in hex. 0x15000000 -> 0x15 ??? HOW? - c++

Is this even possible at all?
How would I truncate the trailing zeros in the integer withOUT using any masking techniques (NOT ALLOWED: things like 0x15000000 & 0xff000000), and also WITHOUT any casting?

Well, really, if you want to truncate the right side, the naive solution is:
unsigned int input = 0x150000;
if (input)
{
    while (!(input & 0x01)) // Replace with while(!(input % 2)) if you are actually against masking.
    {
        input >>= 1;
    }
}
// input, here, will be 0x15.
Though, you can unroll this loop. As in:
if(!(input & 0xFFFF)) { input >>= 16; }
if(!(input & 0x00FF)) { input >>= 8; }
if(!(input & 0x000F)) { input >>= 4; } // Comment out from this line down if you want to align on bytes.
if(!(input & 0x0003)) { input >>= 2; } // Likewise from here down to align on nybbles.
if(!(input & 0x0001)) { input >>= 1; }
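Wrapped up as a function, the unrolled version might look like this (a sketch of my own; unsigned avoids sign-extension surprises on the right shifts):
unsigned int strip_trailing_zero_bits(unsigned int input) // hypothetical name
{
    if (input == 0) return 0;                 // nothing to strip
    if (!(input & 0xFFFF)) input >>= 16;
    if (!(input & 0x00FF)) input >>= 8;
    if (!(input & 0x000F)) input >>= 4;
    if (!(input & 0x0003)) input >>= 2;
    if (!(input & 0x0001)) input >>= 1;
    return input;                             // 0x15000000 -> 0x15
}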

One way to do it without any masking (assuming you want to truncate zero bits):
int input = 0x150000;
while (input && !(input % 2))
    input >>= 1;
Here's a complete program which illustrates it.
#include <stdio.h>
#include <stdlib.h> /* for atoi */

int main (int argc, char *argv[]) {
    int input = 0;
    if (argc < 2) {
        fprintf (stderr, "Needs at least one parameter.\n");
        return 1;
    }
    input = atoi (argv[1]);
    printf ("%x -> ", input);
    while (input && !(input % 2))
        input >>= 1;
    printf ("%x\n", input);
    return 0;
}
If you want to truncate zero nybbles, use:
while (input && ((input % 16) == 0))
    input >>= 4;

John Gietzen's answer is my favourite, but for fun, it can actually be done without a loop!
If you know how many trailing zeros there are, then you can just shift right by that number of bits. There are a number of techniques for counting the trailing zeros without a loop. See the few sections following the linear algorithm here: http://graphics.stanford.edu/~seander/bithacks.html#ZerosOnRightLinear.
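For instance, the parallel method from that page can be adapted along these lines (a sketch; it does use masks, but this answer is offered "for fun" rather than under the question's no-masking rule):
// Count trailing zeros of a 32-bit value without a loop,
// following the parallel method from the linked page.
unsigned int ctz32(unsigned int v) // hypothetical name
{
    unsigned int c = 32;
    v &= 0u - v; // isolate the lowest set bit (two's-complement trick)
    if (v) c--;
    if (v & 0x0000FFFF) c -= 16;
    if (v & 0x00FF00FF) c -= 8;
    if (v & 0x0F0F0F0F) c -= 4;
    if (v & 0x33333333) c -= 2;
    if (v & 0x55555555) c -= 1;
    return c;
}
With that, truncation is a single shift: if (input) input >>= ctz32(input); (the guard avoids an undefined shift by 32 when input is zero).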

Divide by 16 (one nybble or hex digit's worth), as long as it's a multiple of 16:
if ( number )
    while ( number % 16 == 0 )
        number /= 16;
Of course, you can drop the initial test for zero if you know you'll never get that as input.

Does this count as masking?
unsigned int truncate(unsigned int input)
{
    while (input != 0 && input % 0x10 == 0)
    {
        input /= 0x10;
    }
    return input;
}

The easiest way is to convert the int to a byte, so you are down to 8 bits; not everything is necessarily truncated, but it will have fewer zeroes.
I am hoping this isn't for homework, but because I fear it is, giving you all the code would not be fair.

Shift right by 24. Make sure your variable is unsigned.
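That only works if you already know the value occupies exactly the top byte; a minimal illustration:
unsigned int input = 0x15000000;
input >>= 24; // input is now 0x15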

Related

Converting a 'long' type into a binary String

My objective is to write an algorithm that would be able to convert a long number into a binary number stored in a string.
Here is my current block of code:
#include <iostream>
#include <string>
#define LONG_SIZE 64; // size of a long type is 64 bits
using namespace std;

string b10_to_b2(long x)
{
    string binNum;
    if(x < 0) // determine if the number is negative; a number in two's complement will be negative if its first bit is one.
    {
        binNum = "1";
    }
    else
    {
        binNum = "0";
    }
    int i = LONG_SIZE - 1;
    while(i > 0)
    {
        i --;
        if( (x & ( 1 << i) ) == ( 1 << i) )
        {
            binNum = binNum + "1";
        }
        else
        {
            binNum = binNum + "0";
        }
    }
    return binNum;
}

int main()
{
    cout << b10_to_b2(10) << endl;
}
The output of this program is:
00000000000000000000000000000101000000000000000000000000000001010
I want the output to be:
00000000000000000000000000000000000000000000000000000000000001010
Can anyone identify the problem? For whatever reason the function outputs 10 represented by 32 bits concatenated with another 10 represented by 32 bits.
Why would you assume long is 64-bit? Try:
const size_t LONG_SIZE = sizeof(long) * 8;
Check this; the program works correctly with my changes:
http://ideone.com/y3OeB3
Edit: and as @Mats Petersson pointed out, you can make it more robust by changing this line
if( (x & ( 1 << i) ) == ( 1 << i) )
to something like
if( (x & ( 1UL << i) ) )
where the UL is important; you can see his explanation in the comments.
Several suggestions:
Make sure you use a type that is guaranteed to be 64-bit, such as uint64_t, int64_t or long long.
Use the above-mentioned 64-bit type in the 1 << i calculation (e.g. 1ULL << i) to guarantee that it is computed correctly. Shift is only guaranteed by the standard when the number of bits shifted is strictly less than the number of bits in the type being shifted - and 1 has type int, which on most modern platforms (evidently including yours) is 32 bits.
Don't put a semicolon at the end of your #define LONG_SIZE - or better yet, use const int long_size = 64;, as this allows all manner of better behaviour; for example, in the debugger you can print long_size and get 64, whereas printing LONG_SIZE when it is a macro will yield an error.
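Folding those suggestions together, the function might end up looking something like this (a sketch, not the answerer's exact code):
#include <iostream>
#include <string>
#include <cstdint>

const int LONG_SIZE = 64; // no macro, no stray semicolon

std::string b10_to_b2(int64_t x)
{
    std::string binNum;
    for (int i = LONG_SIZE - 1; i >= 0; --i)
    {
        // 1ULL forces a 64-bit shift; bit 63 is the sign bit,
        // so no separate negative check is needed.
        binNum += (x & (1ULL << i)) ? '1' : '0';
    }
    return binNum;
}

int main()
{
    std::cout << b10_to_b2(10) << '\n'; // 64 chars ending in ...1010
}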

Swapping the lower byte (0-7) with the higher one (8-15)

I now know how it's done in one line, although I fail to see why my first draft doesn't work as well. What I'm trying to do is save the lower part into a different variable, shift the higher byte to the right and combine the two numbers via OR. However, it just cuts off the lower half of the hexadecimal and returns the rest.
short int method(short int number) {
    short int a = 0;
    for (int x = 8; x < 16; x++){
        if ((number & (1 << x)) == 1){
            a = a | (1 << x);
        }
    }
    number = number >> 8;
    short int solution = number | a;
    return solution;
}
You are doing it one bit at a time; a better approach would do it with a single operation:
uint16_t method(uint16_t number) {
    return (number << 8) | (number >> 8);
}
The code above specifies 16-bit unsigned type explicitly, thus avoiding issues related to sign extension. You need to include <stdint.h> (or <cstdint> in C++) in order for this to compile.
if ((number & (1 << x)) == 1)
This is only going to return true if x is 0, since 1 in binary is 00000000 00000001, and 1 << x sets all but the xth bit to 0.
You don't care if it's 1 or not, you just care if it's non-zero. Use:
if (number & (1 << x))
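With that fix, and with the loop actually collecting the low byte so something is left after the shift, the draft approach can work. A sketch (my own corrected version, not the OP's code):
#include <cstdint>

uint16_t method(uint16_t number) {
    uint16_t a = 0;
    for (int x = 0; x < 8; x++) {           // walk the LOW byte's bits
        if (number & (1u << x)) {           // non-zero test, not == 1
            a |= (uint16_t)(1u << (x + 8)); // move each bit up into the high byte
        }
    }
    return (uint16_t)(number >> 8) | a;     // high byte down, low byte up
}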

Binary-Decimal Negative bit set

How can I tell if a binary number is negative?
Currently I have the code below. It works fine converting to binary. When converting to decimal, I need to know if the leftmost bit is 1 to tell if it is negative or not, but I cannot seem to figure out how to do that.
Also, instead of making my Bin2 function print 1's and 0's, how can I make it return an integer? I didn't want to store it in a string and then convert to int.
EDIT: I'm using 8-bit numbers.
int Bin2(int value, int Padding = 8)
{
    for (int I = Padding; I > 0; --I)
    {
        if (value & (1 << (I - 1)))
            std::cout << '1';
        else
            std::cout << '0';
    }
    return 0;
}

int Dec2(int Value)
{
    //bool Negative = (Value & 10000000);
    int Dec = 0;
    for (int I = 0; Value > 0; ++I)
    {
        if (Value % 10 == 1)
        {
            Dec += (1 << I);
        }
        Value /= 10;
    }
    //if (Negative) (Dec -= (1 << 8));
    return Dec;
}

int main()
{
    Bin2(25);
    std::cout << "\n\n";
    std::cout << Dec2(11001);
}
You are checking for negative value incorrectly. Do the following instead:
bool Negative = (value & 0x80000000); //It will work for 32-bit platforms only
Or maybe just compare it with 0:
bool Negative = (value < 0);
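Applied to the OP's Dec2, that check has to happen after decoding, on bit 7 of the result. A sketch with the commented-out lines fixed (assuming the stated 8-bit width; 0x80 replaces the decimal literal 10000000):
int Dec2(int Value)
{
    int Dec = 0;
    for (int I = 0; Value > 0; ++I)
    {
        if (Value % 10 == 1)
            Dec += (1 << I);
        Value /= 10;
    }
    if (Dec & 0x80)      // bit 7 set -> negative in 8-bit two's complement
        Dec -= (1 << 8); // map 128..255 onto -128..-1
    return Dec;
}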
Why don't you just compare it to 0? That should work fine, and you almost certainly can't do this more efficiently than the compiler.
I am entirely unclear if this is what the OP is looking for, but it's worth a toss:
If you know you have a value in a signed int that is supposed to be representing a signed 8-bit value, you can pull it apart, store it in a signed 8-bit value, then promote it back to a native int signed value like this:
#include <stdio.h>

int main(void)
{
    // signed integer, value is 245. 8-bit signed value is (-11)
    int num = 0xF5;

    // pull out the low 8 bits, storing them in a signed char.
    signed char ch = (signed char)(num & 0xFF);

    // now let the signed char promote to a signed int.
    int res = ch;

    // finally print both.
    printf("%d ==> %d\n", num, res);

    // do it again for an 8-bit positive value,
    // this time with just direct casts.
    num = 0x70;
    printf("%d ==> %d\n", num, (int)((signed char)(num & 0xFF)));

    return 0;
}
Output
245 ==> -11
112 ==> 112
Is that what you're trying to do? In short, the code above takes the 8 bits sitting at the bottom of num, treats them as a signed 8-bit value, then promotes them to a signed native int. The result is that you not only "know" whether the 8 bits were a negative number (since res will be negative if they were), you also get the 8-bit signed number as a native int in the process.
On the other hand, if all you care about is whether the 8th bit is set in the input int, and is supposed to denote a negative value state, then why not just :
int IsEightBitNegative(int val)
{
    return (val & 0x80) != 0;
}

How to convert an int to a binary string representation in C++

I have an int that I want to store as a binary string representation. How can this be done?
Try this:
#include <bitset>
#include <iostream>

int main()
{
    std::bitset<32> x(23456);
    std::cout << x << "\n";

    // If you don't want a variable, just create a temporary.
    std::cout << std::bitset<32>(23456) << "\n";
}
I have an int that I want to first convert to a binary number.
What exactly does that mean? There is no type "binary number". Well, an int is already represented in binary form internally unless you're using a very strange computer, but that's an implementation detail -- conceptually, it is just an integral number.
Each time you print a number to the screen, it must be converted to a string of characters. It just so happens that most I/O systems chose a decimal representation for this process so that humans have an easier time. But there is nothing inherently decimal about int.
Anyway, to generate a base b representation of an integral number x, simply follow this algorithm:
1. Initialize s with the empty string.
2. m = x % b
3. x = x / b
4. Convert m into a digit, d.
5. Append d on s.
6. If x is not zero, go to step 2.
7. Reverse s.
Step 4 is easy if b <= 10 and your computer uses a character encoding where the digits 0-9 are contiguous, because then it's simply d = '0' + m. Otherwise, you need a lookup table.
Steps 5 and 7 can be simplified to append d on the left of s if you know ahead of time how much space you will need and start from the right end in the string.
In the case of b == 2 (e.g. binary representation), step 2 can be simplified to m = x & 1, and step 3 can be simplified to x = x >> 1.
Solution with reverse:
#include <string>
#include <algorithm>
std::string binary(unsigned x)
{
    std::string s;
    do
    {
        s.push_back('0' + (x & 1));
    } while (x >>= 1);
    std::reverse(s.begin(), s.end());
    return s;
}
Solution without reverse:
#include <string>
std::string binary(unsigned x)
{
    // Warning: this breaks for numbers with more than 64 bits
    char buffer[64];
    char* p = buffer + 64;
    do
    {
        *--p = '0' + (x & 1);
    } while (x >>= 1);
    return std::string(p, buffer + 64);
}
AND the number with 100000..., then 010000..., 0010000..., etc. Each time, if the result is 0, put a '0' in a char array, otherwise put a '1'.
#include <string>

const int numberOfBits = sizeof(int) * 8; // const so the array size is a constant expression
char binary[numberOfBits + 1];
int decimal = 29;

for (int i = 0; i < numberOfBits; ++i) {
    if ((decimal & (0x80000000 >> i)) == 0) {
        binary[i] = '0';
    } else {
        binary[i] = '1';
    }
}
binary[numberOfBits] = '\0';
std::string binaryString(binary);
http://www.phanderson.com/printer/bin_disp.html is a good example.
The basic principle of a simple approach:
1. Loop until the number is 0.
2. & (bitwise AND) the number with 1. Print the result (1 or 0) to the end of a string buffer.
3. Shift the number by 1 bit using >>=.
4. Repeat the loop.
5. Print the reversed string buffer.
To avoid reversing the string or needing to limit yourself to numbers that fit the buffer's length, you can (see the sketch after these steps):
1. Compute ceiling(log2(N)) - say L.
2. Compute mask = 2^L.
3. Loop until mask == 0:
   a. & (bitwise AND) the mask with the number. Print the result (1 or 0).
   b. number &= (mask - 1)
   c. mask >>= 1 (divide by 2)
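A minimal sketch of that mask-walk idea (my own code; it starts the mask at the highest set bit, so there are no leading zeros and nothing to reverse):
#include <string>

std::string to_binary(unsigned n) // hypothetical name
{
    if (n == 0) return "0";

    // Find the highest power of two not exceeding n.
    unsigned mask = 1u;
    while ((n >> 1) >= mask) mask <<= 1;

    // Walk the mask down, emitting one digit per bit.
    std::string s;
    while (mask) {
        s += (n & mask) ? '1' : '0';
        mask >>= 1;
    }
    return s;
}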
I assume this is related to your other question on extensible hashing.
First define some mnemonics for your bits:
const int FIRST_BIT = 0x1;
const int SECOND_BIT = 0x2;
const int THIRD_BIT = 0x4;
Then you have your number you want to convert to a bit string:
int x = someValue;
You can check if a bit is set by using the bitwise & operator.
if (x & FIRST_BIT)
{
    // The first bit is set.
}
And you can keep a std::string, appending 1 to it if a bit is set and appending 0 if not. Depending on what order you want the string in, you can start with the last bit and move to the first, or just go first to last.
You can refactor this into a loop and use it for arbitrarily sized numbers by calculating the mnemonic bits above with current_bit_value <<= 1 after each iteration.
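That refactored loop might look like this (a sketch; the function name and num_bits parameter are my own):
#include <string>

std::string bits_to_string(unsigned x, int num_bits)
{
    std::string s;
    unsigned current_bit_value = 0x1; // starts at FIRST_BIT
    for (int i = 0; i < num_bits; ++i) {
        // Prepend, so the most significant bit ends up leftmost.
        s = ((x & current_bit_value) ? "1" : "0") + s;
        current_bit_value <<= 1;
    }
    return s;
}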
There isn't a direct function; you can just walk along the bits of the int (hint: see >>) and insert a '1' or '0' in the string.
Sounds like a standard interview / homework type question
Use the sprintf function to store the formatted output in a string variable instead of printing directly with printf. Note, however, that these functions work with C strings, not C++ strings.
There's a small header only library you can use for this here.
Example:
std::cout << ConvertInteger<Uint32>::ToBinaryString(21);
// Displays "10101"
auto x = ConvertInteger<Int8>::ToBinaryString(21, true);
std::cout << x << "\n"; // displays "00010101"
auto y = ConvertInteger<Uint8>::ToBinaryString(21, true, "0b");
std::cout << y << "\n"; // displays "0b00010101"
Solution without reverse, no additional copy, and with 0-padding:
#include <iostream>
#include <string>
template <short WIDTH>
std::string binary( unsigned x )
{
    std::string buffer( WIDTH, '0' );
    char *p = &buffer[ WIDTH ];
    do {
        --p;
        if (x & 1) *p = '1';
    } while (x >>= 1);
    return buffer;
}

int main()
{
    std::cout << "'" << binary<32>(0xf0f0f0f0) << "'" << std::endl;
    return 0;
}
This is my best implementation for converting integers (any type) to a std::string. You can remove the template if you are only going to use it for a single integer type. To the best of my knowledge, it strikes a good balance between the safety of C++ and the cryptic nature of C. Make sure to include the needed headers.
template<typename T>
std::string bstring(T n)
{
    std::string s;
    for (int m = sizeof(n) * 8; m--; ) {
        s.push_back('0' + ((n >> m) & 1));
    }
    return s;
}
Use it like so,
std::cout << bstring<size_t>(371) << '\n';
This is the output on my computer (it may differ between computers):
0000000000000000000000000000000000000000000000000000000101110011
Note that the entire binary string is produced, padding zeros included, which helps show the bit width. So the length of the string is the size of size_t in bits.
Let's try a signed integer (negative number):
std::cout << bstring<signed int>(-1) << '\n';
This is the output on my computer (as stated, it may differ):
11111111111111111111111111111111
Note that the string is now smaller; this shows that on this machine signed int occupies less space than size_t. As you can see, my computer uses the two's complement method to represent signed integers (negative numbers). You can now see why unsigned short(-1) > signed int(1).
Here is a version made just for signed integers, without templates; use this if you only intend to convert signed integers to a string.
std::string bstring(int n)
{
    std::string s;
    for (int m = sizeof(n) * 8; m--; ) {
        s.push_back('0' + ((n >> m) & 1));
    }
    return s;
}

What is the fastest way to calculate the number of bits needed to store a number

I'm trying to optimize some bit packing and unpacking routines. In order to do the packing I need to calculate the number of bits needed to store integer values. Here is the current code.
if (n == -1) return 32;
if (n == 0) return 1;

int r = 0;
while (n)
{
    ++r;
    n >>= 1;
}
return r;
Non-portably, use the bit-scan-reverse opcode available on most modern architectures. It's exposed as an intrinsic in Visual C++.
Portably, the code in the question doesn't need the edge-case handling. Why do you require one bit for storing 0? In any case, I'll ignore the edges of the problem. The guts can be done efficiently thus:
if (n >> 16) { r += 16; n >>= 16; }
if (n >> 8) { r += 8; n >>= 8; }
if (n >> 4) { r += 4; n >>= 4; }
if (n >> 2) { r += 2; n >>= 2; }
if (n - 1) ++r;
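Assembled into a full function (a sketch; r starts at 1 because any non-zero value needs at least one bit, and the edge cases the answer set aside are handled up front):
int bits_needed(unsigned n) // hypothetical name
{
    if (n == 0) return 1; // keep the questioner's convention for 0

    int r = 1;
    if (n >> 16) { r += 16; n >>= 16; }
    if (n >> 8)  { r += 8;  n >>= 8;  }
    if (n >> 4)  { r += 4;  n >>= 4;  }
    if (n >> 2)  { r += 2;  n >>= 2;  }
    if (n - 1) ++r; // n is now 1, 2, or 3; 2 and 3 need one more bit
    return r;
}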
You're looking to determine the integer log base 2 of a number (the position of the highest set bit). Sean Anderson's "Bit Twiddling Hacks" page has several methods, ranging from the obvious loop that counts bits to versions that use table lookup. Note that most of the methods demonstrated will need to be modified a bit to work with 64-bit ints, if that kind of portability is important to you.
http://graphics.stanford.edu/~seander/bithacks.html#IntegerLogObvious
Just make sure that any shifting you use to work out the highest set bit is done on an unsigned version of the number, since a compiler implementation might or might not sign-extend the >> operation on a signed value.
What you are trying to do is find the most significant bit. Some architectures have a special instruction just for this purpose. For those that don't, use a table lookup method.
Create a table of 256 entries, wherein each element identifies the uppermost set bit of its index.
Either loop through each byte in the number, or use a few if-statements to find the highest-order non-zero byte.
I'll let you take the rest from here.
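For concreteness, one possible shape of that approach (a sketch with hypothetical names; the table holds the bit count of each byte value, i.e. floor(log2(b)) + 1):
static unsigned char bit_count_table[256];

void init_table()
{
    bit_count_table[0] = 0;
    for (int i = 1; i < 256; ++i)
        bit_count_table[i] = bit_count_table[i / 2] + 1; // one more bit than i/2
}

int bits_needed(unsigned n)
{
    // Find the highest-order non-zero byte with a few tests,
    // then add the table's answer for that byte.
    if (n >> 24) return 24 + bit_count_table[n >> 24];
    if (n >> 16) return 16 + bit_count_table[n >> 16];
    if (n >> 8)  return  8 + bit_count_table[n >> 8];
    return bit_count_table[n]; // yields 0 for n == 0
}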
Do a binary search instead of a linear search.
if ((n >> 16) != 0)
{
    r += 16;
    n >>= 16;
}
if ((n >> 8) != 0)
{
    r += 8;
    n >>= 8;
}
if ((n >> 4) != 0)
{
    r += 4;
    n >>= 4;
}
// etc.
If your hardware has bit-scan-reverse, an even faster approach would be to write your routine in assembly language. To keep your code portable, you could do
#ifdef ARCHITECTURE_WITH_BSR
asm // ...
#else
// Use the approach shown above
#endif
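As a concrete sketch of that route (MSVC exposes the opcode as _BitScanReverse in <intrin.h>; GCC and Clang offer __builtin_clz instead; the wrapper name is my own):
#ifdef _MSC_VER
#include <intrin.h>

int bits_needed(unsigned long n)
{
    unsigned long index;
    // _BitScanReverse returns nonzero iff any bit is set,
    // and stores the index of the highest set bit.
    return _BitScanReverse(&index, n) ? (int)index + 1 : 1;
}
#else
int bits_needed(unsigned n)
{
    // __builtin_clz counts leading zero bits (undefined for 0, hence the guard).
    return n ? 32 - __builtin_clz(n) : 1;
}
#endif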
You would have to check the execution time to figure out the granularity, but my guess is that doing 4 bits at a time, and then reverting to one bit at a time, would make it faster. Log operations would probably be slower than logical/bit operations.
if (n < 0) return 32;

int r = 0;
while (n & 0x7FFFFFF0) { // & (bitwise), not && - loop while bits above the low nybble remain
    r += 4;
    n >>= 4;
}
while (n) {
    r++;
    n >>= 1;
}
return r;
number_of_bits = floor(log2(integer_number)) + 1
for non-zero input (rounding log2 up instead gives the wrong answer for exact powers of two).
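As a quick sketch of that formula (beware floating-point rounding near large powers of two, which is why the bitwise answers above are preferred for packing code):
#include <cmath>

int bits_needed(unsigned n) // hypothetical name
{
    return n ? (int)std::log2((double)n) + 1 : 1;
}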