How can I find 2^x quickly in C? If you have any ideas, please help.
Is x an int or a float? For int, use a left shift. For float, use the pow() function.
Bit-shift to the left: each place you shift multiplies the number by 2, just as shifting a decimal number one place to the left multiplies it by 10.
Use the << operator, like so:
int twoPowZero = 1; // any number^0 is 1
int twoPowOne = 1 << 1; // this sets the '2' bit to '1'
int twoPowTwo = 1 << 2;
int twoPowFive = 1 << 5;
int twoPowTen = 1 << 10;
and so on until you get to 1 << 30. If you're using a signed 32-bit integer then 1 << 31 overflows into the sign bit and will typically give you -2147483648 because of two's complement. If you want to go higher, then use unsigned long long int or uint64_t (a 64-bit integer), or, if your compiler supports it, a 128-bit type such as unsigned __int128.
If you want to go even higher, you'll need to roll your own "big integer" code. Note that some platforms and compilers come with a 128-bit integer type, but runtime performance varies: they may require a processor that can perform 128-bit operations, or they might break it down into two 64-bit operations.
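For example, a minimal sketch of the 64-bit case (this assumes x is between 0 and 63; shifting by 64 or more, or by a negative amount, is undefined behavior):
#include <cstdint>
#include <iostream>

int main() {
    int x = 40;
    uint64_t result = 1ULL << x; // the 1 must already be a 64-bit value before shifting
    std::cout << "2^" << x << " = " << result << std::endl; // prints 2^40 = 1099511627776
}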
Recall that in a binary system a bit in position N represents 2^N. Therefore, the formula for a positive int is
1 << x
#include <stdio.h>
#include <math.h>
int main ()
{
printf ("7.0 ^ 3 = %lf\n", pow (7.0,3));
printf ("4.73 ^ 12 = %lf\n", pow (4.73,12));
printf ("32.01 ^ 1.54 = %lf\n", pow (32.01,1.54));
return 0;
}
output:
7.0 ^ 3 = 343.000000
4.73 ^ 12 = 125410439.217423
32.01 ^ 1.54 = 208.036691
#include <math.h>
float powf(float x, float y); /* C99 */
double pow(double x, double y);
long double powl(long double x, long double y); /* C99 */
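For the original question of 2^x with a floating-point exponent, a small sketch using these functions (exp2 and ldexp are standard <cmath> alternatives worth knowing; ldexp takes an integer exponent):
#include <cmath>
#include <cstdio>

int main() {
    double x = 10.5;
    std::printf("pow(2, 10.5)  = %f\n", std::pow(2.0, x));    // general power function
    std::printf("exp2(10.5)    = %f\n", std::exp2(x));        // base-2 exponential (C99/C++11)
    std::printf("ldexp(1, 10)  = %f\n", std::ldexp(1.0, 10)); // 1 * 2^10, integer exponents only
}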
Set a 1 in the xth bit position: 1 << x.
In this case x must be non-negative and less than the width of the integer type; otherwise the shift is undefined behavior.
Related
Say I have a number, 100000. I can use some simple maths to check its size, i.e. log(100000) -> 5 (base-10 logarithm). There's also another way of doing this, which is quite slow: std::string num = std::to_string(100000); num.size(). Is there a way to mathematically determine the length of a number? (Not just 100000, but for things like 2313455, 123876132, etc.)
Why not use ceil? It rounds up to the nearest whole number, so you can just wrap it around your log call, then add a check afterwards to catch the fact that an exact power of 10 would come out 1 less than expected.
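A minimal sketch of that idea (digitCount is a hypothetical helper name, and it assumes n is positive; note the power-of-10 patch-up described above):
#include <cmath>
#include <cstdio>

// Hypothetical helper: digit count of a positive number via ceil(log10).
int digitCount(double n) {
    int d = (int)std::ceil(std::log10(n));
    if (std::pow(10.0, d) == n) ++d; // exact powers of 10 (including 1) come out one short
    return d;
}

int main() {
    std::printf("%d %d %d\n", digitCount(100000), digitCount(2313455), digitCount(1)); // 6 7 1
}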
Here is a solution to the problem using single-precision floating-point numbers in O(1):
#include <cstdio>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <string>
int main(){
float x = 500; // to be converted
uint32_t f;
std::memcpy(&f, &x, sizeof(uint32_t)); // Reinterpret the float's bits as a manageable int
uint8_t exp = (f & (0b11111111 << 23)) >> 23; // get the exponent bits
exp -= 127; // remove the floating-point bias
exp /= 3.32; // divide by log2(10); the integer assignment truncates, which is fine for this case
std::cout << std::to_string(exp) << std::endl;
}
For a number a*10^e in scientific notation (with 1 <= a < 10) this will return e, so the length of the number (if its absolute value is larger than 1) will be exp + 1.
For double precision this works, but you have to adapt it (bias is 1023 I think, and bit alignment is different. Check this)
This only works for floating-point numbers, though, so it's probably not very useful in this case. The efficiency relative to the logarithm will also be determined by how fast the int -> float conversion is.
Edit:
I just realised the question was about double. The modified result is:
int16_t getLength(double a){
uint64_t bits;
std::memcpy(&bits, &a, sizeof(uint64_t));
int16_t exp = (bits >> 52) & 0b11111111111; // There is no 11-bit int type, so mask the 11 exponent bits by hand
exp -= 1023;
exp /= 3.32;
return exp + 1;
}
There are some changes so that it behaves better (and also less shifting).
You can also use frexp() to get the exponent without bias.
If the number is whole, keep dividing by 10, until you're at 0. You'd have to divide 100000 6 times, for example. For the fractional part, you need to keep multiplying by 10 until trunc(f) == f.
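A quick sketch of the whole-number part of that approach (countWholeDigits is a hypothetical name):
#include <cstdio>

// Hypothetical helper: count digits of the whole part by repeated division.
int countWholeDigits(long long n) {
    int count = 0;
    do {
        n /= 10;
        ++count;
    } while (n != 0);
    return count;
}

int main() {
    std::printf("%d %d\n", countWholeDigits(100000), countWholeDigits(2313455)); // 6 7
}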
I have two numbers which are 32-digit decimal floating-point numbers, like 1.2345678901234567890123456789012, and I want to get the product, which should also be a 32-digit decimal floating-point number. Is there any efficient way to do this?
Just use boost::multiprecision. You can use arbitrary precision, but there is a typedef cpp_bin_float_50 which is a floating-point type with 50 decimal digits of precision.
Example of multiplying two big decimal numbers:
#include <iostream>
#include <iomanip>
#include <boost/multiprecision/cpp_bin_float.hpp>
int main(){
boost::multiprecision::cpp_bin_float_50 val1("1.2345678901234567890123456789012");
boost::multiprecision::cpp_bin_float_50 val2("2.2345678901234567890123456789012");
std::cout << std::setprecision(std::numeric_limits< boost::multiprecision::cpp_bin_float_50>::max_digits10);
std::cout << val1*val2 << std::endl;
}
Output:
2.7587257654473404640618808351577828416864868162811293
Use the usual grade school algorithm (long multiplication). If you used 3 ints (instead of 4):
A2A1A0 * B2B1B0 =   A2*B2  A2*B1  A2*B0
                           A1*B2  A1*B1  A1*B0
                                  A0*B2  A0*B1  A0*B0
Every multiplication will have a 2-int result. You have to sum every column on the right side, and propagate carry. In the end, you'll have a 6-int result (if inputs are 4-int, then the result is 8-int). You can then round this 8-int result. This is how you can handle the mantissa part. The exponents should just be added together.
I recommend dividing the problem into two parts:
1. multiplying a long number by a single int (a sketch of this step follows the workhorse function below)
2. adding the result from step 1 into the final result
You'll need something like this as a workhorse (note that this code assumes that int is 32-bit, long long is 64-bit):
void wideMul(unsigned int &hi, unsigned int &lo, unsigned int a, unsigned int b) {
unsigned long long int r = (unsigned long long int)a*b;
lo = (unsigned int)r;
hi = (unsigned int)(r>>32);
}
Note that if you had larger numbers, there are faster algorithms.
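As an illustration of step 1, here is a sketch that uses the wideMul workhorse above to multiply a little-endian array of 32-bit limbs by a single word (mulByWord is a hypothetical helper; it makes the same 32-bit int / 64-bit long long assumptions):
#include <cstdio>

// Same workhorse as above (32-bit limbs, 64-bit intermediate product).
void wideMul(unsigned int &hi, unsigned int &lo, unsigned int a, unsigned int b) {
    unsigned long long int r = (unsigned long long int)a * b;
    lo = (unsigned int)r;
    hi = (unsigned int)(r >> 32);
}

// Hypothetical helper: multiply a little-endian array of 32-bit limbs by a
// single 32-bit word in place, returning the final carry limb.
unsigned int mulByWord(unsigned int *limbs, int n, unsigned int w) {
    unsigned int carry = 0;
    for (int i = 0; i < n; ++i) {
        unsigned int hi, lo;
        wideMul(hi, lo, limbs[i], w);
        unsigned int sum = lo + carry;
        carry = hi + (sum < lo);   // propagate the carry from the addition
        limbs[i] = sum;
    }
    return carry;
}

int main() {
    unsigned int a[2] = {0xFFFFFFFFu, 0x1u};    // the value 0x1FFFFFFFF
    unsigned int carry = mulByWord(a, 2, 10u);  // multiply it by 10
    std::printf("%08X %08X %08X\n", carry, a[1], a[0]); // prints 00000000 00000013 FFFFFFF6
}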
As I know, the binary representation of -1 as a signed int is 11111111111111111111111111111111,
and based on this I'm trying to compute the maximum and minimum int values for my program without using the limits.h header file. After running my code below I get the minimum value as -2147483648, but the maximum value comes out as 0. Here is my code:
int MaxInt(){
int MAX = -1;
MAX = 0 << ((sizeof(int)*8)-1);
return MAX;
}
int MinInt(){
int MIN = 0;
MIN = 1 << ((sizeof(int)*8)-1);
return MIN;
}
What's wrong with my implementation?
Your implementation has several mistakes:
First, your representation of -1 assumes that int has a twos-complement 32 bit representation. This is not guaranteed for int. (It is for std::int32_t.)
Second, you assume that int has sizeof(int)*8 bits. This again is not guaranteed at all.
Under all these assumptions, you still have a mistake in your implementation:
0 << ((sizeof(int)*8)-1);
can be written (mathematically, not in c++) as:
0 * 2**((sizeof(int)*8)-1)
Now, as you know, multiplying something with 0 results in 0.
Assuming that twos-complement is given, the following simple implementations should work:
MIN = -1 << ((sizeof(int)*8)-1);
MAX = ~MIN;
In the function
int MaxInt(){
int MAX = -1;
MAX = 0 << ((sizeof(int)*8)-1);
return MAX;
}
you first assign -1 to MAX and then immediately overwrite it, so that initial assignment serves no purpose.
Also, if you shift 0 to the left, you get 0 again no matter how far you shift it. :)
The simplest way to get the maximum value of an object of type int, assuming the 2's complement internal representation, is the following
int MaxInt()
{
int MAX = -1u >> 1;
return MAX;
}
Or you can write simply
int MaxInt()
{
return -1u >> 1;
}
Here is a demonstrative program
#include <iostream>
constexpr int MaxInt()
{
return -1u >> 1;
}
constexpr int MinInt()
{
return ~( -1u >> 1 );
}
int main()
{
std::cout << MaxInt() << std::endl;
std::cout << MinInt() << std::endl;
}
The program output might look like
2147483647
-2147483648
What's wrong with my implementation?
MAX = 0 << ((sizeof(int)*8)-1);
Shifting zero by any amount will always be zero.
This isn't specific to C++, but rather about 2's complement form. In 2's complement, the most significant bit doesn't merely indicate the sign (that the value is negative); it carries a negative weight of that power of 2 (that is, for an 8-bit 2's complement number, the most significant bit represents -2^7).
To make the most negative number, only the most significant bit should be set.
// Disclaimer: this should work for *most* devices, but it
// is device-specific in that I am assuming 2's complement
// and I am also assuming that a char is 8-bits. In theory,
// you might find a custom chip where this isn't true,
// but any popular chip will probably have this behavior:
int number_of_digits_in_int = sizeof(int) * 8;
int most_significant_digit_index = number_of_digits_in_int - 1;
int most_negative_int = 1 << most_significant_digit_index;
To make the largest positive number, all positive bits should be set:
// The complement of 0 has all bits set. This value, by the way
// is the same as "-1" in 2s complement form, but writing it
// this way for clarity as to its meaning.
int all_bits_set = ~(static_cast<int>(0));
// Using an XOR with the most negative integer clears the
// most-signficant sign bit, leaving only positive bits.
int largest_positive_int = all_bits_set ^ most_negative_int;
Or, more simply:
// Since the most negative integer has only the negative bit set,
// its complement has only the positive bits set.
int largest_positive_int = ~most_negative_int;
As others have stated, though, you should just use std::numeric_limits. This will also make your code portable and work even on very bizarre devices that don't use 2's complement, for example, not to mention that the less code you write yourself, the fewer mistakes you'll make.
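For completeness, the std::numeric_limits version is just:
#include <iostream>
#include <limits>

int main() {
    std::cout << std::numeric_limits<int>::min() << '\n'   // platform's INT_MIN
              << std::numeric_limits<int>::max() << '\n';  // platform's INT_MAX
}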
I am testing the function fitsBits(int x, int n) on my own and I have found an input that this function doesn't handle correctly. What is the problem?
/*
* fitsBits - return 1 if x can be represented as an
* n-bit, two's complement integer.
* 1 <= n <= 32
* Examples: fitsBits(5,3) = 0, fitsBits(-4,3) = 1
* Legal ops: ! ~ & ^ | + << >>
* Max ops: 15
* Rating: 2
*/
int fitsBits(int x, int n) {
int r, c;
c = 33 + ~n;
r = !(((x << c)>>c)^x);
return r;
}
It seems like it gives the wrong answer in
fitsBits(0x80000000, 0x20);
It gives me 1, but actually it should be 0...
How could I fix it?
Thank you!
fitsBits(0x80000000, 0x20);
This function returns 1, because the first argument of your function is int, which is (in practice these days) a 32 bit signed integer. The largest value that signed 32 bit integer can represent is 0x7FFFFFFF, which is less than the value you are passing in. Because of that your value gets truncated and becomes -0x80000000, something that 32 bit integer can represent. Therefore your function returns 1 (yes, my first argument is something that can be represented using 0x20 = 32 bits).
If you want your function to properly classify the number 0x80000000 as something that cannot be represented using 32 bits, you need to change the type of the first argument of your function. One option would have been to use an unsigned int, but from your problem definition it seems like you need to properly handle negative numbers, so your remaining option is long long int, which can hold numbers between -0x8000000000000000 and 0x7FFFFFFFFFFFFFFF.
You will need to make a couple more adjustments: you need to explicitly specify that your constant is of type long long by using the LL suffix, and you now need to shift by 64 - n, not by 32 - n:
#include <stdio.h>
int fitsBits(long long x, int n) {
long long r;
int c;
c = 65 + ~n;
r = !(((x << c)>>c)^x);
return r;
}
int main() {
printf("%d\n", fitsBits(0x80000000LL, 0x20));
return 0;
}
Link to IDEONE: http://ideone.com/G8I3kZ
Left shifts that cause overflow are undefined for signed types. Hence the compiler may optimise (x<<c)>>c as simply x, and the entire function reduces down to return 1;.
Probably you want to use unsigned types.
A second cause of undefined behavior in your code is that c may be greater than or equal to the width of int. A shift of more than the width of the integer type is undefined behavior.
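A sketch of that suggestion, ignoring the puzzle's operator restrictions: do the left shift on an unsigned copy and guard the shift count. (Converting back and shifting right still relies on two's complement wrap-around and an arithmetic right shift, which is implementation-defined but near-universal.)
#include <cstdint>
#include <cstdio>

int fitsBits(int32_t x, int n) {
    if (n >= 32) return 1;                        // every 32-bit value fits in 32 bits
    uint32_t u = (uint32_t)x;
    uint32_t shifted = u << (32 - n);             // left shift on unsigned: well defined
    int32_t back = (int32_t)shifted >> (32 - n);  // shift back (implementation-defined, but typical)
    return back == x;
}

int main() {
    std::printf("%d %d %d\n", fitsBits(5, 3), fitsBits(-4, 3), fitsBits(-5, 3)); // 0 1 0
}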
r = (((x << c)>>c)^x); //This will give you 0, meaning r = 0;
OR
r = !((x << c)>>c);
Your function can be simplified to
int fitsBits(int x) {
int r, c;
c = 33;
r = (((x << c)>>c)^x);
return r;
}
Note that when the NOT (!) is applied, you are asking for the opposite of r.
I'm writing a program that calculates the probability of lotteries.
The specification is to choose 5 numbers out of 47 and 1 out of 27.
So I did the following:
#include <iostream>
long int choose(unsigned n, unsigned k);
long int factorial(unsigned n);
int main(){
using namespace std;
long int regularProb, megaProb;
regularProb = choose(47, 5);
megaProb = choose(27, 1);
cout << "The probability of the correct number is 1 out of " << (regularProb * megaProb) << endl;
return 0;
}
long int choose(unsigned n, unsigned k){
return factorial(n) / (factorial(k) * factorial(n-k));
}
long int factorial(unsigned n){
long int result = 1;
for (int i=2;i<=n;i++) result *= i;
return result;
}
However, the program doesn't work. It calculates for 30 seconds, then gives me "Process 4 exited with code -1,073,741,676". I have to change all the long int to long double, but that loses precision. Is it because long int is too short for the big values? I thought long int was 64-bit nowadays? My compiler is g++ on win32 (64-bit host).
Whether long is 64-bit or not depends on the data model; Windows uses a 32-bit long even in 64-bit builds. Use int64_t from <stdint.h> if you need to ensure it is 64-bit.
But even if long is 64-bit it is still too small to hold factorial(47).
47! == 2.58623242e+59
2^64 == 1.84467441e+19
although 47C5 is way smaller than that.
You should never use nCr == n!/(r! (n-r)!) directly to do the calculation, as it overflows easily. Instead, factor out the n!/(n-r)! to get:
          47 * 46 * 45 * 44 * 43
47C5  =   -----------------------
            5 * 4 * 3 * 2 * 1
this can be managed even by a 32-bit integer.
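A small sketch of that factored-out computation (choose here is a hypothetical helper; the division at each step is exact, so the intermediate values stay small):
#include <cstdint>
#include <iostream>

uint64_t choose(unsigned n, unsigned k) {
    if (k > n) return 0;
    if (k > n - k) k = n - k;          // use symmetry: nCk == nC(n-k)
    uint64_t result = 1;
    for (unsigned i = 1; i <= k; ++i) {
        // result * (n - k + i) is always divisible by i here, so the division
        // is exact and the running value stays a binomial coefficient.
        result = result * (n - k + i) / i;
    }
    return result;
}

int main() {
    std::cout << choose(47, 5) << " " << choose(27, 1) << "\n"; // 1533939 and 27
}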
BTW, for #Coffee's question: a double only has 53 bits of precision, whereas 47! requires about 198 bits. 47! and 42! represented in double would be
47! = (0b10100100110011011110001010000100011110111001100100100 << 145) ± (1 << 144)
42! = (0b11110000010101100000011101010010010001101100101001000 << 117) ± (1 << 116)
so 47! / (42! × 5!)'s possible range of value will be
0b101110110011111110011 = 1533939
                                                       53 bits
                                                             v
max = 0b101110110011111110011.000000000000000000000000000000001001111...
val = 0b101110110011111110010.111111111111111111111111111111111010100...
min = 0b101110110011111110010.111111111111111111111111111111101011010...
that's enough to get the exact value 47C5.
To use a 64-bit long, you should use long long (as mentioned here).
KennyTM has it right: you're going to overflow no matter what type you use. You need to approach the problem more smartly and factor out a lot of the work. If you're OK with an approximate answer, then take a look at Stirling's approximation:
ln(n!) ~ n*ln(n) - n
So if you have
n!/(k!*(n-k)!)
You could say that's
exp(ln(n!/(k!*(n-k)!)))
which after some math (double check to make sure I got it right) is
exp(n*ln(n) - k*ln(k) - (n-k)*ln(n-k))
And that shouldn't overflow (but it's an approximate answer)
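A rough sketch of that estimate (stirlingChoose is a hypothetical name; for 47 choose 5 it gives about 8.3 million versus the exact 1533939, so treat it purely as an order-of-magnitude figure):
#include <cmath>
#include <cstdio>

// Stirling-based estimate of n choose k: exp(n*ln(n) - k*ln(k) - (n-k)*ln(n-k)).
double stirlingChoose(double n, double k) {
    return std::exp(n * std::log(n) - k * std::log(k) - (n - k) * std::log(n - k));
}

int main() {
    std::printf("%f\n", stirlingChoose(47.0, 5.0)); // rough estimate of 47C5
}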
It's easy to calculate binomial coefficients up to 47C5 and beyond without overflow, using standard unsigned long 32-bit arithmetic. See my response to this question: https://math.stackexchange.com/questions/34518/are-there-examples-where-mathematicians-needs-to-calculate-big-combinations/34530#comment-76389