I've written a program that shows the binary representation of a particular integer value, using bitwise operators in C++. For even numbers it works as I expect, but for odd numbers it adds a 1 to the left of the binary representation.
#include <iostream>

using std::cout;
using std::cin;
using std::endl;

int main()
{
    unsigned int a = 128;
    for (int i = sizeof(a) * 8; i >= 0; --i) {
        if (a & (1UL << i)) { // if i-th digit is 1
            cout << 1;        // output 1
        }
        else {
            cout << 0;        // otherwise output 0
        }
    }
    cout << endl;
    system("pause");
    return 0;
}
Results:
For a = 128: 000000000000000000000000010000000,
For a = 127: 100000000000000000000000001111111
You might prefer the CHAR_BIT macro from <climits> instead of the raw 8.
Consider your start value! Assuming unsigned int has 32 bits, your start value is int i = 4*8 = 32, so 1U << i shifts the value out of range. This is undefined behaviour and could result in anything; evidently your specific compiler or hardware reduces the shift count modulo 32, so you get an initial value & 1, which produces the unexpected leading 1. Did you notice that you actually printed 33 digits instead of only 32?
The problem is here:
for (int i = sizeof(a) * 8; i >= 0; --i) {
it should be:
for (int i = sizeof(a) * 8; i--; )
http://coliru.stacked-crooked.com/a/8cb2b745063883fa
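For completeness, here is a minimal corrected sketch of the whole program (my assembly of the advice above, using CHAR_BIT as suggested):

#include <iostream>
#include <climits>

int main()
{
    unsigned int a = 127;
    // i is decremented before each use, so the shift index runs
    // from (number of bits - 1) down to 0 and never leaves range.
    for (int i = sizeof(a) * CHAR_BIT; i--; ) {
        std::cout << ((a >> i) & 1);
    }
    std::cout << '\n';
}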
I have been trying to learn C++ from a C# background, and I am running into a very weird mistake in my Maximum Pair Product method.
The method works as expected with small numbers, but for products of large numbers it produces strange results and outputs the wrong answer.
int64_t Testing::MaximumPairProductFast(const std::vector<int>& numbers)
{
    int64_t largestIntA = 0;
    int64_t largestIntB = 0;
    for (int i = 0; i < numbers.size(); i++)
    {
        // find first largest number
        if (numbers[i] > largestIntA) largestIntA = numbers[i];
    }
    for (int i = 0; i < numbers.size(); i++)
    {
        // find second largest number
        if (numbers[i] > largestIntB && numbers[i] != largestIntA) largestIntB = numbers[i];
    }
    return largestIntA * largestIntB;
}
The main program:
int main()
{
    Testing ch;
    std::vector<int> test{ 100000, 90000 };
    int result = ch.MaximumPairProductFast(test);
    std::cout << result << "\n";
}
The output of the computation is 410065408 instead of 9000000000, which is the correct answer.
The weird thing is when I try to print this:
std::cout << (int64_t) 100000 * 90000 << "\n";
It does give the correct result 9000000000.
But if I don't cast it to int64_t:
std::cout << 100000 * 90000 << "\n";
The result is 410065408, which is exactly the output from my method, even though I made sure to return an int64_t type.
I have also tried this, but the outcome is the same:
int64_t Test::MaximumPairProductFast(const std::vector<int>& numbers)
{
    int64_t largestIntA = 0;
    int64_t largestIntB = 0;
    for (int i = 0; i < numbers.size(); i++)
    {
        // find first largest number
        if ((int64_t)numbers[i] > largestIntA) largestIntA = (int64_t)numbers[i];
    }
    for (int i = 0; i < numbers.size(); i++)
    {
        // find second largest number
        if ((int64_t)numbers[i] > largestIntB && (int64_t)numbers[i] != largestIntA) largestIntB = (int64_t)numbers[i];
    }
    return largestIntA * largestIntB;
}
Am I missing an obvious detail?
You are truncating your result from an int64_t to an int. On a typical system these days, you'll truncate a 64-bit integer and store it in a 32-bit value.
If you compile with the warning level increased, the compiler will tell you about this problem.
The solution is to declare result with the proper type:
int64_t result = ch.MaximumPairProductFast(test);
Or, just let the compiler use the correct type with auto:
auto result = ch.MaximumPairProductFast(test);
In your second example, the constants 100000 and 90000 are ints, so they are multiplied as ints. When you cast one of the values to int64_t, the multiplication is done with 64-bit ints. This can also be expressed with 100000LL.
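A minimal demonstration of that point (my illustration, not from the original post):

#include <cstdint>
#include <iostream>

int main()
{
    // int * int wraps on a 32-bit int (strictly, signed overflow is
    // undefined behaviour; 410065408 is what typical platforms print).
    std::cout << 100000 * 90000 << "\n";
    // Widening either operand makes the multiplication 64-bit:
    std::cout << 100000LL * 90000 << "\n";          // 9000000000
    std::cout << (int64_t)100000 * 90000 << "\n";   // 9000000000
}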
On your system, int is a 32-bit integer and int64_t is a 64-bit integer. The problem in your code is that you are multiplying two 32-bit ints, so you get a 32-bit int result, and your answer overflows. When you explicitly cast to int64_t, the operands are 64-bit ints and you get a 64-bit result, which can hold the large value without overflowing.
I suggest you either:
use std::vector<int64_t> instead of std::vector<int>, or
if you're worried about space, do as you did: explicitly cast to a 64-bit int and then multiply (a sketch follows below).
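As a sketch of the second option (my assembly, with names following the original function): keep std::vector<int>, track the two largest values as plain ints, and widen only at the multiplication.

#include <cstdint>
#include <iostream>
#include <vector>

int64_t MaximumPairProductFast(const std::vector<int>& numbers)
{
    int largestIntA = 0;
    int largestIntB = 0;
    for (size_t i = 0; i < numbers.size(); i++)
    {
        // find first largest number
        if (numbers[i] > largestIntA) largestIntA = numbers[i];
    }
    for (size_t i = 0; i < numbers.size(); i++)
    {
        // find second largest number
        if (numbers[i] > largestIntB && numbers[i] != largestIntA) largestIntB = numbers[i];
    }
    return (int64_t)largestIntA * largestIntB; // 64-bit multiply, no overflow
}

int main()
{
    std::cout << MaximumPairProductFast({ 100000, 90000 }) << "\n"; // 9000000000
}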
I am writing a function in C++ to convert a number from some base to decimal.
It works fine when the number of digits is even, but when it is odd it gives a wrong answer.
For example:
Number to convert: 100
Base to convert from: 10
Correct answer: 100
Function's output: 99
Here is the code:
unsigned long long convertToDecimal(const std::string& number, const unsigned base)
{
    std::string characters = "0123456789abcdef";
    unsigned long long res = 0;
    for (int i = 0, len = number.size(); i < len; ++i)
    {
        res += characters.find(number.at(i)) * std::pow(base, len - 1 - i);
    }
    return res;
}
I'm using g++ with C++11.
I can't reproduce your particular issue, but std::pow returns a floating point number, and your implementation may have introduced some sort of rounding error which led to a wrong result when converted to unsigned long long.
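As a hypothetical illustration of that failure mode (whether it reproduces depends on your platform's pow implementation):

#include <cmath>
#include <iostream>

int main()
{
    // If pow(10, 2) were to return 99.999999... instead of exactly 100.0,
    // the conversion to an integer type would truncate it to 99.
    double p = std::pow(10, 2);
    unsigned long long truncated = p; // truncation, not rounding
    std::cout << truncated << '\n';   // may print 99 on an affected platform
}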
To avoid such errors when dealing with integer numbers, you should consider avoiding std::pow altogether. Your function, for example, could have been written like this:
#include <iostream>
#include <string>

unsigned long long convertToDecimal(const std::string& number, const unsigned base)
{
    std::string characters = "0123456789abcdef";
    unsigned long long res = 0;
    unsigned long long power = 1;
    for (auto i = number.crbegin(); i != number.crend(); ++i)
    {
        // As in your code, I'm not checking for erroneous input
        res += characters.find(*i) * power;
        power *= base;
    }
    return res;
}

int main()
{
    std::cout << convertToDecimal("100", 2) << '\n';     // --> 4
    std::cout << convertToDecimal("1234", 8) << '\n';    // --> 668
    std::cout << convertToDecimal("99999", 10) << '\n';  // --> 99999
    std::cout << convertToDecimal("fedcba", 16) << '\n'; // --> 16702650
}
My objective is to write an algorithm that would be able to convert a long number into a binary number stored in a string.
Here is my current block of code:
#include <iostream>
#define LONG_SIZE 64; // size of a long type is 64 bits
using namespace std;

string b10_to_b2(long x)
{
    string binNum;
    if (x < 0) // determine if the number is negative; in two's complement a number is negative if its first bit is one
    {
        binNum = "1";
    }
    else
    {
        binNum = "0";
    }
    int i = LONG_SIZE - 1;
    while (i > 0)
    {
        i--;
        if( (x & ( 1 << i) ) == ( 1 << i) )
        {
            binNum = binNum + "1";
        }
        else
        {
            binNum = binNum + "0";
        }
    }
    return binNum;
}

int main()
{
    cout << b10_to_b2(10) << endl;
}
The output of this program is:
00000000000000000000000000000101000000000000000000000000000001010
I want the output to be:
00000000000000000000000000000000000000000000000000000000000001010
Can anyone identify the problem? For whatever reason the function outputs 10 represented by 32 bits concatenated with another 10 represented by 32 bits.
Why would you assume long is 64-bit?
Try const size_t LONG_SIZE = sizeof(long) * 8; instead.
Check this; the program works correctly with my changes:
http://ideone.com/y3OeB3
Edit: and as @Mats Petersson pointed out, you can make it more robust by changing this line
if( (x & ( 1 << i) ) == ( 1 << i) )
to something like
if (x & (1UL << i)) — that UL is important; you can see his explanation in the comments.
Several suggestions:
Make sure you use a type that is guaranteed to be 64-bit, such as uint64_t, int64_t or long long.
Use the above-mentioned 64-bit type for the constant being shifted (e.g. 1ULL << i) to guarantee that 1 << i calculates correctly. The standard only defines a shift when the shift count is strictly less than the number of bits in the type being shifted, and a plain 1 has type int, which on most modern platforms (evidently including yours) is 32 bits.
Don't put a semicolon at the end of your #define LONG_SIZE — or better yet, use const int long_size = 64;, which behaves better in many ways. For example, in a debugger you can print long_size and get 64, whereas printing LONG_SIZE when it's a macro will yield an error. (A sketch applying these suggestions is below.)
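A minimal sketch applying those suggestions (my assembly of the advice above, not the original poster's code):

#include <cstdint>
#include <iostream>
#include <string>

std::string b10_to_b2(int64_t x)
{
    std::string binNum;
    // Emitting bit 63 first prints the sign bit too, so no separate
    // negative-number branch is needed.
    for (int i = 64; i-- > 0; )
    {
        binNum += (((uint64_t)x >> i) & 1) ? '1' : '0'; // shift done in a 64-bit type
    }
    return binNum;
}

int main()
{
    std::cout << b10_to_b2(10) << '\n';
    // 0000000000000000000000000000000000000000000000000000000000001010
}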
How do I detect the length of an integer? Say I had, e.g., int test(234567545);
How do I know how long the int is, i.e. that there are 9 digits in it?
I have tried:
char buffer_length[100];
// assign directly to a string.
sprintf(buffer_length, "%d\n", 234567545);
string sf = buffer_length;
cout << sf.length() - 1 << endl;
But there must be a simpler or cleaner way of doing it...
How about division:
int length = 1;
int x = 234567545;
while (x /= 10)
    length++;
or use the log10 function from <math.h>.
Note that log10 returns a double, so you'll have to adjust the result.
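For example, a sketch of the log10 approach (this only works for positive values):

#include <cmath>
#include <iostream>

int main()
{
    int x = 234567545;
    int length = (int)std::log10(x) + 1; // truncate the double result, then add 1
    std::cout << length << '\n';         // 9
}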
Make a function:
int count_numbers(int num) {
    int count = 0;
    while (num != 0) {
        count++;
        num /= 10;
    }
    return count;
}
Nobody seems to have mentioned converting it to a string, and then getting the length. Not the most performant, but it definitely does it in one line of code :)
int num = -123456;
int len = to_string(abs(num)).length();
cout << "LENGTH of " << num << " is " << len << endl;
// prints "LENGTH of 123456 is 6"
You can use stringstream for this as shown below
stringstream ss;
int i = 234567545;
ss << i;
cout << ss.str().size() << endl;
if "i" is the integer, then
int len ;
char buf[33] ;
itoa (i, buf, 10) ; // or maybe 16 if you want base-16 ?
len = strlen(buf) ;
if(i < 0)
len-- ; // maybe if you don't want to include "-" in length ?
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main() {
    int i = 2384995;
    char buf[100];
    itoa(i, buf, 10); // 10 is the decimal base
    printf("Length: %zu\n", strlen(buf));
    return 0;
}
Beware that itoa is not a standard function, even if it is supported by many compilers.
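For a fully standard alternative, the same program can be written with snprintf instead (my sketch, base 10 only):

#include <cstdio>
#include <cstring>

int main() {
    int i = 2384995;
    char buf[100];
    std::snprintf(buf, sizeof buf, "%d", i); // standard, unlike itoa
    std::printf("Length: %zu\n", std::strlen(buf));
    return 0;
}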
len = 1 + floor(log10(n)); // C++, needs <cmath>
Looking across the internet, it's a common mistake to initialize the counter variable to 0 and then enter a pre-condition loop that tests whether the number is non-zero — that returns 0 digits for an input of 0. A do-while loop is perfect to avoid this.
unsigned udc(unsigned u) // unsigned digit count
{
    unsigned c = 0;
    do
        ++c;
    while ((u /= 10) != 0);
    return c;
}
It's probably cheaper to test whether u is less than 10 first, to avoid the unnecessary division, increment, and compare instructions in cases where u < 10.
But while on the subject of optimization, you could simply test u against constant powers of ten:
unsigned udc(unsigned u) // unsigned digit count
{
    if (u < 10) return 1;
    if (u < 100) return 2;
    if (u < 1000) return 3;
    //...
    return 0; // number was not supported
}
This saves you 3 instructions per digit, but it is less adaptable to different radixes, in addition to being less attractive and tedious to write by hand — in which case you'd rather write a routine that generates the routine before inserting it into your program. Because C only supports a finite set of widths (64-bit, 32-bit, 16-bit, 8-bit), you could simply generate up to the maximum and benefit all sizes.
To account for negative numbers, you'd simply negate u if u < 0 before counting the digits — after, of course, making the routine accept signed numbers.
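A minimal sketch of such a signed wrapper (my addition, reusing the udc() above):

unsigned udc(unsigned u) // unsigned digit count, as above
{
    unsigned c = 0;
    do ++c; while ((u /= 10) != 0);
    return c;
}

unsigned sdc(int n) // signed digit count (hypothetical wrapper)
{
    // Convert to unsigned before negating so that INT_MIN, whose
    // magnitude does not fit in an int, is still counted correctly.
    unsigned u = (unsigned)n;
    return udc(n < 0 ? 0u - u : u);
}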
If you know that u < 1000, it's probably easier to just write this instead of calling the routine:
if (u > 99) len = 3;
else if (u > 9) len = 2;
else len = 1;
Here are a few different C++ implementations* of a function named digits() which takes a size_t as argument and returns its number of digits. If your number is negative, you will have to pass its absolute value to the function for it to work properly:
The While Loop
int digits(size_t i)
{
    int count = 1;
    while (i /= 10) {
        count++;
    }
    return count;
}
The Exhaustive Optimization Technique
int digits(size_t i) {
    if (i > 9999999999999999999ull) return 20;
    if (i > 999999999999999999ull) return 19;
    if (i > 99999999999999999ull) return 18;
    if (i > 9999999999999999ull) return 17;
    if (i > 999999999999999ull) return 16;
    if (i > 99999999999999ull) return 15;
    if (i > 9999999999999ull) return 14;
    if (i > 999999999999ull) return 13;
    if (i > 99999999999ull) return 12;
    if (i > 9999999999ull) return 11;
    if (i > 999999999ull) return 10;
    if (i > 99999999ull) return 9;
    if (i > 9999999ull) return 8;
    if (i > 999999ull) return 7;
    if (i > 99999ull) return 6;
    if (i > 9999ull) return 5;
    if (i > 999ull) return 4;
    if (i > 99ull) return 3;
    if (i > 9ull) return 2;
    return 1;
}
The Recursive Way
int digits(size_t i) { return i < 10 ? 1 : 1 + digits(i / 10); }
Using snprintf() as a Character Counter
⚠ Requires #include <stdio.h> and may incur a significant performance penalty compared to other solutions. This method capitalizes on the fact that snprintf() counts the characters it discards when the buffer is full. Therefore, with the right arguments and format specifiers, we can force snprintf() to give us the number of digits of any size_t.
int digits(size_t i) { return snprintf (NULL, 0, "%llu", i); }
The Logarithmic Way
⚠ Requires #include <cmath> and is unreliable for unsigned integers with more than 14 digits.
// WARNING! There is a silent implicit conversion precision loss that happens
// when we pass a large int to log10() which expects a double as argument.
int digits(size_t i) { return !i ? 1 : 1 + log10(i); }
Driver Program
You can use this program to test any function that takes a size_t as argument and returns its number of digits. Just replace the definition of the function digits() in the following code:
#include <iostream>
#include <stdio.h>
#include <cmath>

using std::cout;

// REPLACE this function definition with the one you want to test.
int digits(size_t i)
{
    int count = 1;
    while (i /= 10) {
        count++;
    }
    return count;
}

// driver code
int main()
{
    const int max = digits(-1ull);
    size_t i = 0;
    int d;
    do {
        d = digits(i);
        cout << i << " has " << d << " digits." << '\n';
        i = d < max ? (!i ? 9 : 10 * i - 1) : -1;
        cout << i << " has " << digits(i) << " digits." << '\n';
    } while (++i);
}
* Everything was tested on a Windows 10 (64-bit) machine using GCC 12.2.0 in Visual Studio Code .
As long as you are mixing C stdio and C++ iostream, you can use the snprintf NULL, 0 trick to get the number of digits in the integer representation of the number. Specifically, per man 3 printf, if the string exceeds the size parameter provided and is truncated, snprintf() will return
... the number of characters (excluding the terminating null byte)
which would have been written to the final string if enough space
had been available.
This allows snprintf() to be called with the str parameter NULL and the size parameter 0, e.g.
int ndigits = snprintf (NULL, 0, "%d", 234567545);
In your case, where you simply wish to output the number of digits required for the representation, you can output the return value directly, e.g.
#include <iostream>
#include <cstdio>

int main() {
    std::cout << "234567545 is " << snprintf(NULL, 0, "%d", 234567545)
              << " characters\n";
}
Example Use/Output
$ ./bin/snprintf_trick
234567545 is 9 characters
note: the downside to using the snprintf() trick is that you must provide the conversion specifier, which limits the number of digits representable. E.g. "%d" will limit you to int values, while "%lld" would allow space for long long values. The C++ approach using std::stringstream, while still limited to the numeric conversions supported by the << operator, handles the different integer types without manually specifying the conversion. Something to consider.
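A minimal sketch of that std::stringstream alternative (my illustration):

#include <iostream>
#include <sstream>

int main() {
    long long n = 234567545;  // any type streamable with << works unchanged
    std::ostringstream ss;
    ss << n;
    std::cout << n << " is " << ss.str().size() << " characters\n";
}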
second note: you shouldn't dangle the "\n" on the end of your sprintf() conversion. Add the newline as part of your output instead, and you won't have to subtract 1 from the length...
I made a function that converts numbers to binary. For some reason it's not working: it gives the wrong output. The output is in binary format, but it always gives the wrong result for binary numbers that end with a zero (at least, that's what I noticed...).
unsigned long long to_binary(unsigned long long x)
{
    int rem;
    unsigned long long converted = 0;
    while (x > 1)
    {
        rem = x % 2;
        x /= 2;
        converted += rem;
        converted *= 10;
    }
    converted += x;
    return converted;
}
Please help me fix it, this is really frustrating..
Thanks!
Use std::bitset to do the translation:
#include <iostream>
#include <bitset>
#include <limits.h>

int main()
{
    int val;
    std::cin >> val;
    std::bitset<sizeof(int) * CHAR_BIT> bits(val);
    std::cout << bits << "\n";
}
You're reversing the bits.
Also, you cannot use what remains of x as an indicator of when to terminate the loop.
Consider e.g. 4.
After first loop iteration:
rem == 0
converted == 0
x == 2
After second loop iteration:
rem == 0
converted == 0
x == 1
And then you set converted to 1.
Try:
int i = sizeof(x) * 8; // i is now the number of bits in x
while (i > 0) {
    --i;
    converted *= 10;
    converted |= (x >> i) & 1;
    // Shift x right to get bit number i in the rightmost position,
    // then AND with 1 to remove any bits left of bit number i,
    // and finally OR it into the rightmost position in converted.
}
Running the above code with x as an unsigned char (8 bits) with value 129 (binary 10000001):
Starting with i = 8 (size of unsigned char * 8): in the first loop iteration i becomes 7. We then take x (129) and shift it right 7 bits, which gives the value 1. This is OR'ed into converted, which becomes 1. In the next iteration, we start by multiplying converted by 10 (so it is now 10), then shift x right 6 bits (the value becomes 2) and AND it with 1 (the value becomes 0). We OR 0 into converted, which is then still 10. The 3rd through 7th iterations do the same thing: converted is multiplied by 10 and one specific bit is extracted from x and OR'ed into converted. After these iterations, converted is 1000000.
In the last iteration, converted is first multiplied by 10 and becomes 10000000; we shift x right 0 bits, yielding the original value 129. We AND x with 1, which gives 1, and that 1 is OR'ed into converted, which becomes 10000001.
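For reference, a self-contained version of that walkthrough (my sketch, using the unsigned char value 129 from the explanation):

#include <iostream>

int main()
{
    unsigned char x = 129;           // binary 10000001
    unsigned long long converted = 0;
    int i = sizeof(x) * 8;           // 8 bits
    while (i > 0) {
        --i;
        converted *= 10;
        converted |= (x >> i) & 1;
    }
    std::cout << converted << '\n';  // prints 10000001
}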
You're doing it wrong ;)
http://www.bellaonline.com/articles/art31011.asp
The remainder of the first division is the rightmost bit in the binary form; with your function it becomes the leftmost bit.
You can do something like this:
unsigned long long to_binary(unsigned long long x)
{
    int rem;
    unsigned long long converted = 0;
    unsigned long long multiplicator = 1;
    while (x > 0)
    {
        rem = x % 2;
        x /= 2;
        converted += rem * multiplicator;
        multiplicator *= 10;
    }
    return converted;
}
Edit: the code proposed by CygnusX1 is a little more efficient, but less easy to follow, I think; I'd advise taking his version.
Improvement: I changed the stop condition of the while loop, so we can remove the line adding x at the end.
You are actually reversing the binary number!
to_binary(2) will return 01 instead of 10. When the leading zeroes are truncated, it looks the same as 1.
How about doing it this way:
unsigned long long digit = 1;
while (x > 0) {
    if (x % 2)
        converted += digit;
    x /= 2;
    digit *= 10;
}
What about std::bitset?
http://www.cplusplus.com/reference/stl/bitset/to_string/
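For example, a minimal sketch using the to_string() member documented there:

#include <bitset>
#include <iostream>
#include <string>

int main()
{
    unsigned long long x = 10;
    std::string s = std::bitset<64>(x).to_string();
    std::cout << s << '\n'; // 64-character binary string
}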
If you want to display your number as binary, you need to format it as a string. The easiest way to do this that I know of is to use the STL bitset.
#include <bitset>
#include <iostream>
#include <sstream>

typedef std::bitset<64> bitset64;

std::string to_binary(const unsigned long long int& n)
{
    const static unsigned mask = 0xffffffff;
    unsigned upper = (n >> 32) & mask;
    unsigned lower = n & mask;
    bitset64 upper_bs(upper);
    bitset64 lower_bs(lower);
    bitset64 result = (upper_bs << 32) | lower_bs;
    std::stringstream ss;
    ss << result;
    return ss.str();
}

int main()
{
    for (int i = 0; i < 10; ++i)
    {
        std::cout << i << ": " << to_binary(i) << "\n";
    }
    return 0;
}
The output from this program is:
0: 0000000000000000000000000000000000000000000000000000000000000000
1: 0000000000000000000000000000000000000000000000000000000000000001
2: 0000000000000000000000000000000000000000000000000000000000000010
3: 0000000000000000000000000000000000000000000000000000000000000011
4: 0000000000000000000000000000000000000000000000000000000000000100
5: 0000000000000000000000000000000000000000000000000000000000000101
6: 0000000000000000000000000000000000000000000000000000000000000110
7: 0000000000000000000000000000000000000000000000000000000000000111
8: 0000000000000000000000000000000000000000000000000000000000001000
9: 0000000000000000000000000000000000000000000000000000000000001001
If your purpose is only display them as their binary representation, then you may try itoa or std::bitset
#include <stdlib.h>
#include <stdio.h>
#include <iostream>
#include <bitset>
#include <limits>

using namespace std;

int main()
{
    unsigned long long x = 1234567890;

    // C way (note: itoa is non-standard, and the buffer needs room for the terminator)
    char buffer[sizeof(x) * 8 + 1];
    itoa(x, buffer, 2);
    printf("binary: %s\n", buffer);

    // C++ way
    cout << bitset<numeric_limits<unsigned long long>::digits>(x) << endl;

    return EXIT_SUCCESS;
}
void To(long long num, char *buff, int base)
{
    if (buff == NULL) return;
    long long m = 0, no = num, i = 1;
    while ((no /= base) > 0) i++; // count the digits
    buff[i] = '\0';
    no = num;
    while (no > 0) // fill the buffer from the rightmost digit
    {
        m = no % base;
        no = no / base;
        buff[--i] = (m > 9) ? ('A' + m - 10) : ('0' + m);
    }
}
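Hypothetical usage of To() (my illustration; buff must be large enough for the digits plus the terminator, and the To() function above is assumed to be in scope):

#include <cstdio>

int main()
{
    char buff[65];             // enough room for 64 digits plus '\0'
    To(29, buff, 2);
    std::printf("%s\n", buff); // 11101
    To(255, buff, 16);
    std::printf("%s\n", buff); // FF
}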
Here is a simple solution.
#include <iostream>
using namespace std;

int main()
{
    int num = 241; // assuming a 16-bit integer
    for (int i = 15; i >= 0; i--) cout << ((num >> i) & 1); // most significant bit first
    cout << endl;
    for (int i = 0; i < 16; i++) cout << ((num >> i) & 1);  // least significant bit first
    cout << endl;
    return 0;
}