Is this mathematical assumption about counting leading zeros right? - C++

There are 4 unsigned 32-bit integers:
a, b, c, d (c > a > d > b)
and a function:
clz(x), which returns the number of leading zeros of x, e.g. clz(2) == 30.
Then we compute:
n = clz(a^b) // xor
m = clz(c^d)
The question is: can we assume that m is certainly less than or equal to n?
thanks in advance!

Consider a simplified 4-bit version, written in binary: c = 0b0111, a = 0b0110, d = 0b0101, b = 0b0001.
Then c^d = 0b0010 and a^b = 0b0111, so clz(c^d) = 2 > clz(a^b) = 1.
So all you need is a, c, d having the same number of leading zeros and b having more of them to break the assumption.
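To make this concrete, here is a small self-contained check of that counterexample with GCC/Clang's __builtin_clz on full 32-bit values (the particular numbers are just one instance of the pattern above):
#include <cstdint>
#include <iostream>

int main() {
    uint32_t c = 7, a = 6, d = 5, b = 1;        // c > a > d > b holds
    int n = __builtin_clz(a ^ b);               // clz(6 ^ 1) = clz(7) = 29
    int m = __builtin_clz(c ^ d);               // clz(7 ^ 5) = clz(2) = 30
    std::cout << "n = " << n << ", m = " << m << '\n';  // m > n, so the assumption fails
}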


How to store output of very large Fibonacci number?

I am making a program for the nth Fibonacci number. I made the following program using recursion and memoization.
The main problem is that the value of n can go up to 10000, which means that the 10000th Fibonacci number is more than 2000 digits long.
With a little bit of googling, I found that I could use an array and store every digit of the solution in an element of the array, but I am still not able to figure out how to implement this approach in my program.
#include<iostream>
using namespace std;

long long int memo[101000];
long long int n;

long long int fib(long long int n)
{
    if(n==1 || n==2)
        return 1;
    if(memo[n]!=0)
        return memo[n];
    return memo[n] = fib(n-1) + fib(n-2);
}

int main()
{
    cin>>n;
    long long int ans = fib(n);
    cout<<ans;
}
How do I implement that approach or if there is another method that can be used to achieve such large values?
One thing that should be pointed out is that there are other ways to implement fib that are much easier for something like C++ to compute.
Consider the following pseudocode:
function fib (n) {
    let a = 0, b = 1, _;
    while (n > 0) {
        _ = a;
        a = b;
        b = b + _;
        n = n - 1;
    }
    return a;
}
This doesn't require memoisation and you don't have to worry about blowing up your stack with too many recursive calls. Recursion is a really powerful looping construct, but it's one of those things best left to languages like Lisp, Scheme, Kotlin, Lua (and a few others) that support it so elegantly.
That's not to say tail call elimination is impossible in C++, but unless you're explicitly optimising/compiling for it, I'm doubtful that whatever compiler you're using supports it by default.
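For reference, a direct C++ translation of that loop might look like the sketch below. Note that unsigned long long overflows for n > 93, so on its own this cannot reach n = 10000:
#include <iostream>

// Iterative Fibonacci; overflows unsigned long long for n > 93.
unsigned long long fib(unsigned n) {
    unsigned long long a = 0, b = 1;
    while (n > 0) {
        unsigned long long tmp = a;
        a = b;
        b = b + tmp;
        n = n - 1;
    }
    return a;
}

int main() {
    std::cout << fib(93) << '\n';   // the largest n that still fits in 64 bits
}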
As for computing the exceptionally large numbers, you'll have to either get creative and do the addition The Hard Way, or rely on an arbitrary-precision arithmetic library like GMP. I'm sure there are other libs for this too.
Adding The Hard Way™
Remember how you used to add big numbers when you were a little tater tot, fresh off the aluminum foil?
5-year-old math
  1259601512351095520986368
+   50695640938240596831104
---------------------------
                          ?
Well you gotta add each column, right to left. And when a column overflows into the double digits, remember to carry that 1 over to the next column.
                       001  <- carries, filled in right to left
  1259601512351095520986368
+   50695640938240596831104
---------------------------
                        472  <- partial sum so far
The 10,000th Fibonacci number is thousands of digits long, so there's no way it will fit in any integer type C++ provides out of the box. So, without relying on a library, you could use a string or an array of single-digit numbers. To output the final number, you'll have to convert it to a string though.
(Wolfram Alpha: fibonacci 10000)
Doing it this way, you'll perform a couple million single-digit additions; it might take a while, but it should be a breeze for any modern computer to handle. Time to get to work!
Here's an example of a Bignum module in JavaScript:
const Bignum =
  { fromInt: (n = 0) =>
      n < 10
        ? [ n ]
        : [ n % 10, ...Bignum.fromInt (n / 10 >> 0) ]
  , fromString: (s = "0") =>
      Array.from (s, Number) .reverse ()
  , toString: (b) =>
      b .reverse () .join ("")
  , add: (b1, b2) =>
    {
      const len = Math.max (b1.length, b2.length)
      let answer = []
      let carry = 0
      for (let i = 0; i < len; i = i + 1) {
        const x = b1[i] || 0
        const y = b2[i] || 0
        const sum = x + y + carry
        answer.push (sum % 10)
        carry = sum / 10 >> 0
      }
      if (carry > 0) answer.push (carry)
      return answer
    }
  }
We can verify that the Wolfram Alpha answer above is correct
const { fromInt, toString, add } = Bignum

const bigfib = (n = 0) =>
{
  let a = fromInt (0)
  let b = fromInt (1)
  let _
  while (n > 0) {
    _ = a
    a = b
    b = add (b, _)
    n = n - 1
  }
  return toString (a)
}
bigfib (10000)
// "336447 ... 366875"
Try not to use recursion for a simple problem like Fibonacci. And if you'll only use it once, don't use an array to store all results: an array of 2 elements containing the 2 previous Fibonacci numbers is enough. In each step, you then only have to sum up those 2 numbers.
How can you save 2 consecutive Fibonacci numbers? Well, you know that of 2 consecutive integers one is even and one is odd, so you can use that property to know where to get/place a Fibonacci number: for fib(i), if i is even (i%2 is 0) place it in the first element of the array (index 0), else (i%2 is then 1) place it in the second element (index 1).
Why can you just place it there? Well, when you're calculating fib(i), the value currently sitting where fib(i) should go is fib(i-2) (because (i-2)%2 is the same as i%2). But you won't need fib(i-2) any more: fib(i+1) only needs fib(i-1) (that's still in the array) and fib(i) (that just got inserted into the array).
So you could replace the recursion calls with a for loop like this:
int fibonacci(int n){
    if( n <= 0){
        return 0;
    }
    int previous[] = {0, 1}; // start with fib(0) and fib(1)
    for(int i = 2; i <= n; ++i){
        // modulo can be implemented with bit operations (much faster): i % 2 == i & 1
        previous[i&1] += previous[(i-1)&1]; // shorter way to say: previous[i&1] = previous[i&1] + previous[(i-1)&1]
    }
    // Result is in previous[n&1]
    return previous[n&1];
}
Recursion is actually discouraged in programming because of the time (function calls) and resources (stack) it consumes. So each time you use recursion, try to replace it with a loop, plus a stack with simple push/pop operations if you need to save the "current position" (in C++ you can use a vector). In the case of Fibonacci the stack isn't even needed, but if you are iterating over a tree data structure, for example, you will need one (it depends on the implementation though). As I was writing my solution, I saw @naomik provided a solution with the while loop. That one is fine too, but I prefer the array with the modulo operation (it's a bit shorter).
Now, concerning the problem of the limited size of long long int: it can be solved by using external libraries that implement operations for big numbers (like the GMP library or Boost.Multiprecision). But you could also create your own BigInteger-like class (as in Java) and implement the basic operations yourself. I'll only describe the addition here (try to implement the others; they are quite similar).
The main idea is simple: a BigInt represents a big decimal number by cutting its little-endian representation into pieces (I'll explain why little-endian at the end). The length of those pieces depends on the base you choose. If you want to work with decimal representations, this only works if your base is a power of 10: if you choose 10 as base, each piece represents one digit; if you choose 100 (= 10^2), each piece represents two consecutive digits, starting from the end (see little-endian); if you choose 1000 (= 10^3), each piece represents three consecutive digits; and so on.
For example, with base 100, 12765 becomes [65, 27, 1], 1789 becomes [89, 17], 505 becomes [5, 5] (= [05, 5]); with base 1000, 12765 becomes [765, 12], 1789 becomes [789, 1], 505 becomes [505]. It's not the most efficient representation, but it is the most intuitive (I think...).
The addition is then a bit like the addition on paper we learned at school:
1. begin with the lowest piece of the BigInt
2. add it to the corresponding piece of the other one
3. the lowest piece of that sum (= the sum modulo the base) becomes the corresponding piece of the final result
4. the "bigger" pieces of that sum will be added ("carried") to the sum of the following pieces
5. go to step 2 with the next piece
6. if no piece is left, add the carry and the remaining bigger pieces of the other BigInt (if it has pieces left)
For example:
9542 + 1097855 = [42, 95] + [55, 78, 09, 1]
lowest piece = 42 and 55 --> 42 + 55 = 97 = [97]
---> lowest piece of result = 97 (no carry, carry = 0)
2nd piece = 95 and 78 --> (95+78) + 0 = 173 = [73, 1]
---> 2nd piece of final result = 73
---> remaining: [1] = 1 = carry (will be added to sum of following pieces)
no piece left in first `BigInt`!
--> add carry ( [1] ) and remaining pieces from second `BigInt`( [9, 1] ) to final result
--> first additional piece: 9 + 1 = 10 = [10] (no carry)
--> second additional piece: 1 + 0 = 1 = [1] (no carry)
==> 9542 + 1 097 855 = [42, 95] + [55, 78, 09, 1] = [97, 73, 10, 1] = 1 107 397
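The full class isn't reproduced here, but a minimal C++ sketch of the piece-based addition just described (base 100, little-endian vector of pieces; the names BigInt, BASE and add are only illustrative) could look like this:
#include <cstdint>
#include <iostream>
#include <vector>

// Little-endian pieces in base 100: 9542 -> {42, 95}, 1097855 -> {55, 78, 9, 1}.
using BigInt = std::vector<uint32_t>;
constexpr uint32_t BASE = 100;

BigInt add(const BigInt& lhs, const BigInt& rhs) {
    BigInt result;
    uint32_t carry = 0;
    for (size_t i = 0; i < lhs.size() || i < rhs.size(); ++i) {
        uint32_t sum = carry;
        if (i < lhs.size()) sum += lhs[i];
        if (i < rhs.size()) sum += rhs[i];
        result.push_back(sum % BASE);   // lowest piece of the column sum
        carry = sum / BASE;             // the rest is carried into the next piece
    }
    if (carry > 0) result.push_back(carry);
    return result;
}

int main() {
    BigInt a = {42, 95};         // 9542
    BigInt b = {55, 78, 9, 1};   // 1097855
    BigInt sum = add(a, b);      // expect {97, 73, 10, 1}, i.e. 1107397
    for (auto it = sum.rbegin(); it != sum.rend(); ++it)
        std::cout << *it << ' ';
    std::cout << '\n';
}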
Using such a class, I calculated the Fibonacci of 10000 in a demo (the result is too big to copy here).
Good luck!
PS: Why little-endian? For ease of implementation: it lets you use push_back when adding digits, and the iteration inside the operations starts from the first piece of the array instead of the last.

Linear Programming: Depict logical expression in a boolean variable

I have a mixed integer linear program (MIP or MILP).
In the end I want a boolean variable in my linear program that has the following properties.
I have two variables:
boolean b.
real x, with x being 0 or larger.
What I want to achieve is:
b == false if x == 0.
b == true if x > 0.
I found a way to express whether x is in a specific range (e.g. between 2 and 3) via:
2*b <= x
x <= 3*b
The problem with the above formulation is that b will be true if x is in the given range and false if it is outside that range.
Does anybody know a way to set a boolean variable to false if x == 0 and to true if x is larger than 0?
If U is an upper bound of x, then the implication
x > 0 ==> b == 1
can be modelled as
x <= U*b
The second part (x == 0 ==> b == 0) needs to be modified to
x < epsilon ==> b == 0
which can be modelled as
b <= 1 + x - epsilon
where epsilon is a small number. Beyond being good practice, this is necessary because solvers do not work in rational arithmetic (although there are some research efforts to make them do so), but with certain precision thresholds, and therefore quantities such as 10e-12 are treated as zero.
I hope this helps!
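As an illustration (this check is mine, with U and epsilon picked arbitrarily), a tiny brute-force enumeration of which values of b the two constraints leave feasible for a few sample values of x:
#include <iostream>

int main() {
    const double U   = 100.0;   // assumed upper bound on x
    const double eps = 1e-6;    // assumed tolerance
    const double xs[] = {0.0, 0.5, 3.0, 100.0};
    for (double x : xs) {
        for (int b = 0; b <= 1; ++b) {
            bool c1 = (x <= U * b);         // x > 0  forces b = 1
            bool c2 = (b <= 1 + x - eps);   // x == 0 forces b = 0
            if (c1 && c2)
                std::cout << "x = " << x << " -> feasible b = " << b << '\n';
        }
    }
}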
You could use the signum function (http://en.wikipedia.org/wiki/Signum_function), take the absolute value and negate it. Since you didn't name a specific programming language, I'll keep it general.

Fast inner product of ternary vectors

Consider two vectors, A and B, of size n, 7 <= n <= 23. Both A and B consist of -1s, 0s and 1s only.
I need a fast algorithm which computes the inner product of A and B.
So far I've thought of storing the signs and values in separate uint32_ts using the following encoding:
sign 0, value 0 → 0
sign 0, value 1 → 1
sign 1, value 1 → -1.
The C++ implementation I've thought of looks like the following:
struct ternary_vector {
    uint32_t sign, value;
};

int inner_product(const ternary_vector & a, const ternary_vector & b) {
    uint32_t psign  = a.sign ^ b.sign;
    uint32_t pvalue = a.value & b.value;
    psign &= pvalue;
    pvalue ^= psign;
    return __builtin_popcount(pvalue) - __builtin_popcount(psign);
}
This works reasonably well, but I'm not sure whether it is possible to do it better. Any comment on the matter is highly appreciated.
I like having the two uint32_ts, but I think your actual calculation is a bit wasteful.
Just a few minor points:
I'm not sure about the reference (getting a and b by const &): this adds a level of indirection compared to putting them on the stack. When the code is this small (a couple of clock cycles, maybe), this is significant. Try passing by value and see what you get.
__builtin_popcount can, unfortunately, be very inefficient. I've used it myself, but found that even a very basic implementation I wrote was far faster. However, this is dependent on the platform.
Basically, if the platform has a hardware popcount implementation, __builtin_popcount uses it. If not, it uses a very inefficient replacement.
The one serious problem here is the reuse of the psign and pvalue variables for the positive and negative parts of the product. You are doing neither your compiler nor yourself any favors by obfuscating your code in this way.
Would it be possible for you to encode your ternary state in a std::bitset<2> and define the product in terms of AND? For example, if your ternary types are:
1 = P = (1, 1)
0 = Z = (0, 0)
-1 = M = (1, 0) or (0, 1)
I believe you could define their product as:
1 * 1 = 1 => P * P = P => (1, 1) & (1, 1) = (1, 1) = P
1 * 0 = 0 => P * Z = Z => (1, 1) & (0, 0) = (0, 0) = Z
1 * -1 = -1 => P * M = M => (1, 1) & (1, 0) = (1, 0) = M
Then the inner product could start by taking the AND of the bits of the elements, and... I am still working on how to add them together.
Edit:
My foolish suggestion did not consider that (-1)*(-1) = 1, which cannot be handled by the representation I proposed. Thanks to @user92382 for bringing this up.
Depending on your architecture, you may want to optimize away the temporary bit vectors -- e.g. if your code is going to be compiled for an FPGA, or laid out as an ASIC, then a sequence of logical operations will be better in terms of speed/energy/area than storing and reading/writing two big buffers.
In this case, you can do:
int inner_product(const ternary_vector & a, const ternary_vector & b) {
    return __builtin_popcount( a.value & b.value & ~(a.sign ^ b.sign))
         - __builtin_popcount( a.value & b.value &  (a.sign ^ b.sign));
}
This will lay out very well -- the (a.value & b.value & ... ) can enable/disable an XOR gate, whose output splits into two signed accumulators, with the first pathway NOTed before accumulation.
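For completeness, a small usage sketch (the pack helper and the test values below are mine, not from the question) that encodes two short vectors and checks the bitwise formulation against a naive loop:
#include <cstdint>
#include <iostream>
#include <vector>

struct ternary_vector { uint32_t sign = 0, value = 0; };

// Pack a vector of -1/0/1 into the sign/value encoding from the question.
ternary_vector pack(const std::vector<int>& v) {
    ternary_vector t;
    for (size_t i = 0; i < v.size(); ++i) {
        if (v[i] != 0) t.value |= 1u << i;
        if (v[i] < 0)  t.sign  |= 1u << i;
    }
    return t;
}

int inner_product(ternary_vector a, ternary_vector b) {
    uint32_t psign  = (a.sign ^ b.sign) & a.value & b.value;   // positions where the product is -1
    uint32_t pvalue = (a.value & b.value) ^ psign;             // positions where the product is +1
    return __builtin_popcount(pvalue) - __builtin_popcount(psign);
}

int main() {
    std::vector<int> A = {1, -1, 0, 1, -1, 1, 0};
    std::vector<int> B = {1,  1, 1, 0, -1, -1, 0};
    int naive = 0;
    for (size_t i = 0; i < A.size(); ++i) naive += A[i] * B[i];
    std::cout << inner_product(pack(A), pack(B)) << " == " << naive << '\n';
}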

Logical / Relational Expression Optimization

I need to optimize an expression of the form:
(a > b) || (a > c)
I tried several optimized forms, one of which is as follows:
(a * 2) > (b + c)
The optimization is not from the compiler's point of view; I would like to reduce the two >s to one.
This is based on the assumption that 1 <= a, b, c <= 26.
However, this works only for some cases. Is the optimization I am trying to do really possible? If yes, a start would be really helpful.
The answer is probably: you do not want to optimize that. Moreover, I doubt that there's any way to write this more efficiently. If you say that a, b and c are values between 1 and 26, you shouldn't be using integers (you don't need that precision) if you wanted to be optimal (in size) anyway.
If a > b, the expression a > c will not be evaluated anyway (short-circuiting). So you have at most 2 (and at least 1) comparisons, which is really not worth an optimization.
I'm quite doubtful this is even an optimisation in most cases.
a > b || a > c
will evaluate to:
    compare a b
    jump if not greater
    compare a c
    jump if not greater
whereas
a * 2 > b + c
gives:
    shift a left 1 (in temp1)
    add b to c (in temp2)
    compare temp1 temp2
    jump if not greater
As always with performance, it's much better to base your decision on actual measurements (preferably on a selection of processor architectures).
The best I can come up with is this:
char a, b, c;
std::cin >> a >> b >> c;
if (((b-a) | (c-a)) & 0x80) {
    // a > b || a > c
}
With gcc -O2 this generates only one conditional branch
40072e: 29 c8 sub %ecx,%eax
400730: 29 ca sub %ecx,%edx
400732: 09 d0 or %edx,%eax
400734: a8 80 test $0x80,%al
400736: 74 17 je 40074f <main+0x3f>
This leverages the constraints of the input values: since the values cannot be greater than 26, subtracting a from b gives a negative value when a > b, and in two's complement you know bit 7 will be set in that case; the same applies to c. I then OR the two so that bit 7 indicates whether a > b || a > c, and lastly we inspect bit 7 by ANDing with 0x80 and branch on that.
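A quick way to convince yourself the bit trick matches the original condition over the stated range (this exhaustive check is my own addition, not part of the answer):
#include <iostream>

int main() {
    // Compare the bit trick against (a > b || a > c) for all values in 1..26.
    for (int a = 1; a <= 26; ++a)
        for (int b = 1; b <= 26; ++b)
            for (int c = 1; c <= 26; ++c) {
                bool trick    = ((b - a) | (c - a)) & 0x80;
                bool original = (a > b) || (a > c);
                if (trick != original)
                    std::cout << "mismatch at " << a << ' ' << b << ' ' << c << '\n';
            }
    std::cout << "done\n";   // no mismatches expected
}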
Update: Out of curiosity, I timed 4 different ways of coding this. To generate test data I used a simple linear congruential pseudo-random number generator. I timed it in a loop for 100 million iterations. I assumed for simplicity that if the condition is true we want to add 5 to a counter, and do nothing otherwise. I timed it using g++ (GCC) 4.6.3 20120306 (Red Hat 4.6.3-2) on an Intel Xeon X5570 @ 2.93GHz at the -O2 optimization level.
Here's the code (comment out all but one of the conditional variants):
#include <iostream>
#include <algorithm> // for the std::max variant

unsigned myrand() {
    static unsigned x = 1;
    return (x = x * 1664525 + 1013904223);
}

int main() {
    size_t count = 0;
    for(size_t i=0; i<100000000; ++i ) {
        int a = 1 + myrand() % 26;
        int b = 1 + myrand() % 26;
        int c = 1 + myrand() % 26;
        count += 5 & (((b-a) | (c-a)) >> 31);     // 0.635 sec
        //if (((b-a) | (c-a)) & 0x80) count += 5; // 0.660 sec
        //if (a > std::max(b,c)) count += 5;      // 0.677 sec
        //if ( a > b || a > c) count += 5;        // 1.164 sec
    }
    std::cout << count << std::endl;
    return 0;
}
The fastest is a modification of the suggestion in my answer, where we use sign extension to generate a mask that is either 32 ones or 32 zeros depending on whether the condition is true or false, and use that to mask the 5 being added, so that it adds either 5 or 0. This variant has no branches. The times are in a comment on each line. The slowest was the original expression (a > b || a > c).

Turn while loop into math equation?

I have two simple while loops in my program that I feel ought to be math equations, but I'm struggling to convert them:
float a = someValue;
int b = someOtherValue;
int c = 0;

while (a <= -b / 2) {
    c--;
    a += b;
}

while (a >= b / 2) {
    c++;
    a -= b;
}
This code works as-is, but I feel it could be simplified into math equations. The idea is that this code takes an offset (someValue) and adjusts a coordinate (c) to minimize the distance from the center of a tile (of size someOtherValue). Any help would be appreciated.
It can be proved that the following is correct:
c = floor((a+b/2)/b)
a = a - c*b
Note that floor means round down, towards negative infinity: not towards 0. (E.g. floor(-3.1)=-4. The floor() library functions will do this; just be sure not to just cast to int, which will usually round towards 0 instead.)
Presumably b is strictly positive, because otherwise neither loop will ever terminate: adding b will not make a larger and subtracting b will not make a smaller. With that assumption, we can prove that the above code works. (And paranoidgeek's code is also almost correct, except that it uses a cast to int instead of floor.)
Clever way of proving it:
The code adds or subtracts multiples of b from a until a is in [-b/2,b/2), which you can view as adding or subtracting integers from a/b until a/b is in [-1/2,1/2), i.e. until (a/b+1/2) (call it x) is in [0,1). As you are only changing it by integers, the value of x does not change mod 1, i.e. it goes to its remainder mod 1, which is x-floor(x). So the effective number of subtractions you make (which is c) is floor(x).
Tedious way of proving it:
At the end of the first loop, the value of c is the negative of the number of times the loop runs, i.e.:
0 if: a > -b/2 <=> a+b/2 > 0
-1 if: -b/2 ≥ a > -3b/2 <=> 0 ≥ a+b/2 > -b <=> 0 ≥ x > -1
-2 if: -3b/2 ≥ a > -5b/2 <=> -b ≥ a+b/2 > -2b <=> -1 ≥ x > -2 etc.,
where x = (a+b/2)/b, so c is 0 if x > 0 and ceiling(x)-1 otherwise. If the first loop ran at all, then a was ≤ -b/2 just before the last time the loop body was executed, so it is ≤ -b/2+b now, i.e. ≤ b/2. Depending on whether it is exactly b/2 or not (i.e., whether x at the start was exactly a non-positive integer or not), the second loop runs exactly 1 time or 0 times, and c is either ceiling(x) or ceiling(x)-1. So that solves it for the case when the first loop did run.
If the first loop didn't run, then the value of c at the end of the second loop is:
0 if: a < b/2 <=> a-b/2 < 0
1 if: b/2 ≤ a < 3b/2 <=> 0 ≤ a-b/2 < b <=> 0 ≤ y < 1
2 if: 3b/2 ≤ a < 5b/2 <=> b ≤ a-b/2 < 2b <=> 1 ≤ y < 2, etc.,
where y = (a-b/2)/b, so c is: 0 if y<0 and 1+floor(y) otherwise. [And a now is certainly < b/2 and ≥ -b/2.]
So you can write an expression for c as:
x = (a+b/2)/b
y = (a-b/2)/b
c = (x≤0)*(ceiling(x) - 1 + (x is integer))
+(y≥0)*(1 + floor(y))
Of course, next you notice that (ceiling(x)-1+(x is integer)) is same as floor(x+1)-1 which is floor(x), and that y is actually x-1, so (1+floor(y))=floor(x), and as for the conditionals:
when x≤0, it cannot be that (y≥0), so c is just the first term which is floor(x),
when 0 < x < 1, neither of the conditions holds, so c is 0,
when 1 ≤ x, then only 0≤y, so c is just the second term which is floor(x) again.
So c = floor(x) in all cases.
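As a quick sanity check (this snippet is my own addition, restricted to even b so that the integer division b / 2 in the loop conditions equals the exact b/2 used above), compare the closed form against the original loops:
#include <cmath>
#include <iostream>

int main() {
    for (int b = 2; b <= 8; b += 2) {
        for (float a0 = -30.0f; a0 <= 30.0f; a0 += 0.25f) {
            // Original loops.
            float a = a0;
            int c = 0;
            while (a <= -b / 2) { c--; a += b; }
            while (a >=  b / 2) { c++; a -= b; }
            // Closed form: c = floor((a + b/2) / b), a = a - c*b.
            int c2 = (int)std::floor((a0 + b / 2) / (double)b);
            float a2 = a0 - c2 * b;
            if (c != c2 || a != a2)
                std::cout << "mismatch: a = " << a0 << ", b = " << b << '\n';
        }
    }
    std::cout << "done\n";   // no mismatches expected
}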
c = (int)((a - (b / 2)) / b + 1);
a -= c * b;
Test case at http://pastebin.com/m1034e639
I think you want something like this:
c = ((int) a + b / 2 * sign(a)) / b
That should match your loops except for certain cases where b is odd because the range from -b/2 to b/2 is smaller than b when b is odd.
Assuming b is positive, abs(c) = floor((abs(a) - b/2) / b). Then, apply sign of a to c.