Is there any fast way in C (below 1 sec) to find the number of perfect squares between two numbers?
For example, for 1 <-> 10 we have 2 perfect squares, 4 and 9. But what about between 1 and 2^60, or some other bigger number?
This is slow
while (i * i <= n)
{
    sum += i == ((long long)(sqrt(i * i)));
    i++;
}
where n is, let's say, 2^60 and we start with i = 2.
x = (int)sqrt(n2) - (int)sqrt(n1);
It's trivial. Assume you have two endpoints, a and b, with a < b.
What is the next perfect square after a? Hint, what is sqrt(a)? What would rounding up do?
What is the largest perfect square that does not exceed b? Hint, what is sqrt(b)? Again, how would rounding help here?
Once you know those two numbers, counting the number of perfect squares seems truly trivial.
By the way, be careful. Even the square root of 2^60 is a big number, although it fits into a double just fine. The real problem is that not every integer up around 2^60 can be represented exactly in a double, because a double only has 53 bits of precision. So beware precision issues.
Don't iterate. The equation:
floor(sqrt(b)) - ceil(sqrt(a)) + 1
gives the number of perfect squares in the interval from a up to b inclusive.
https://en.wikipedia.org/wiki/Intermediate_value_theorem
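Taken literally in C (and counting 1 as a perfect square), that formula is a one-liner. The catch is the precision warning above: plain double sqrt is not guaranteed to be exact for 60-bit inputs. Below is a hedged sketch using long double, which is exact on platforms with an 80-bit long double (x86 glibc, for instance) but degrades to plain double precision elsewhere; the function name is mine.

#include <math.h>

/* Perfect squares in [a, b], 1 <= a <= b: floor(sqrt(b)) - ceil(sqrt(a)) + 1,
 * done in long double; on platforms where long double is only 53 bits wide,
 * verify the two roots in integer arithmetic (see the integer version below). */
long long count_squares(long long a, long long b)
{
    return (long long)floorl(sqrtl((long double)b))
         - (long long)ceill(sqrtl((long double)a)) + 1;
}
/* count_squares(2, 10) == 2 (4 and 9), matching the example in the question. */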
if (n1 is a perfect square)
    x = (int)sqrt(n2) - (int)sqrt(n1) + 1;
else
    x = (int)sqrt(n2) - (int)sqrt(n1);
Language-agnostic formula to get the number of perfect squares in the range [a, b] inclusive, as long as the basic math functions sqrt, ceil and floor are provided by the standard library:
cnt = int(floor(sqrt(b))) - int(ceil(sqrt(a))) + 1
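For C specifically, here is a hedged end-to-end sketch of this count that uses an exact integer square root, so the 2^53 issue mentioned earlier cannot bite. It assumes 1 <= a <= b below roughly 2^62, and the names isqrt64 and count_squares are my own.

#include <stdint.h>
#include <stdio.h>
#include <math.h>

/* Exact integer square root: take the double estimate, then nudge it until
 * r*r <= n < (r+1)*(r+1).  The (r+1)*(r+1) product stays in range for n
 * up to about 2^62. */
static uint64_t isqrt64(uint64_t n)
{
    uint64_t r = (uint64_t)sqrt((double)n);
    while (r > 0 && r * r > n)
        r--;
    while ((r + 1) * (r + 1) <= n)
        r++;
    return r;
}

/* Count of perfect squares in [a, b]; equivalent to the floor/ceil formula. */
static uint64_t count_squares(uint64_t a, uint64_t b)
{
    return isqrt64(b) - isqrt64(a - 1);
}

int main(void)
{
    printf("%llu\n", (unsigned long long)count_squares(2, 10));         /* 2: 4 and 9 */
    printf("%llu\n", (unsigned long long)count_squares(2, 1ULL << 60)); /* squares of 2..2^30 */
    return 0;
}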
I know that a number x lies between n and f (f > n > 0). So my idea is to bring that range to [0, 0.65535] with 0.65535 * (x - n) / (f - n).
Then I could just multiply by 10000, round, and store the integer in two bytes.
Is that an effective use of the storage in terms of precision?
I'm doing it for a WebGL 1.0 shader, so I'd like simple encoding/decoding math; I don't have access to bitwise operations.
Why multiply by 0.65535 and then by 10000.0? That introduces a second rounding with an unnecessary loss of precision.
The data will be represented well if it has equal likelihood over the entire range (n, f). But this is not always a reasonable assumption. What you're doing is similar to creating a fixed-point representation (fixed step size, just not starting at 0 or with steps that are negative powers of 2).
Floating-point numbers use bigger step sizes for bigger numbers. You could do the same by calculating log(x/f) / log(n/f) * 65535.
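In code, the two encodings described here might look like the sketch below (written in C for clarity; the same arithmetic ports to a GLSL shader). It assumes n < f, that x stays inside [n, f], and, for the logarithmic variant, that n and f are positive; the function names are made up.

#include <stdint.h>
#include <math.h>

/* Linear 16-bit fixed point over [n, f]: one scaling, one rounding.
 * Step size is (f - n) / 65535, the best a uniform 16-bit grid can do. */
uint16_t encode16(double x, double n, double f)
{
    double t = (x - n) / (f - n);            /* map [n, f] onto [0, 1] */
    return (uint16_t)(t * 65535.0 + 0.5);    /* scale once, round once */
}

double decode16(uint16_t v, double n, double f)
{
    return n + (double)v / 65535.0 * (f - n);
}

/* Logarithmic variant from above: relative steps, i.e. bigger absolute
 * steps for bigger values.  Requires 0 < n < f. */
uint16_t encode16_log(double x, double n, double f)
{
    return (uint16_t)(log(x / f) / log(n / f) * 65535.0 + 0.5);
}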
I am working on a cryptocurrency and there is a calculation that nodes must make:
average /= total;
double ratio = average/DESIRED_BLOCK_TIME_SEC;
int delta = -round(log2(ratio));
It is required that every node has the exact same result no matter what architecture or stdlib is being used by the system. My understanding is that log2 might have different implementations that yield very slightly different results, or that flags like -ffast-math could affect the output.
Is there a simple way to convert the above calculation to something that is verifiably portable across different architectures (fixed point?), or am I overthinking the precision that is needed (given that I round the answer at the end)?
EDIT: average is a long and total is an int... so average ends up truncated to a whole number of seconds.
DESIRED_BLOCK_TIME_SEC is 30.0 (a float) and is #defined.
For this kind of calculation to be exact, one must either calculate all the divisions and logarithms exactly -- or one can work backwards.
-round(log2(x)) == round(log2(1/x)), meaning that one of the divisions can be turned around to get (1/x) >= 1.
round(log2(x)) == floor(log2(x * sqrt(2))) == binary_log((int)(x*sqrt(2))).
One minor detail here is whether (double)sqrt(2) rounds down or up. If it rounds up, then there might exist one or more values for which x * sqrt2 == 2^n + epsilon (after rounding), whereas if it rounded down we would get 2^n - epsilon. One would give the integer value n, the other n-1. Which is correct?
Naturally, the correct one is the candidate whose ratio to the theoretical value x * sqrt(2) is smaller (with each ratio taken so that it is at least 1):
x * sqrt(2) / 2^(n-1) < 2^n / (x * sqrt(2)) -- multiply by x*sqrt(2)
x^2 * 2 / 2^(n-1) < 2^n -- multiply by 2^(n-1)
x^2 * 2 < 2^(2*n-1)
In order for this comparison to be exact, x^2 or pow(x,2) must be exact as well on the boundary -- and it matters what range the original values are in. A similar analysis can and should be done while expanding x = a/b, so that the inexactness of the division can be mitigated at the cost of possible overflow in the multiplication...
Then again, I wonder how all the other similar applications handle the corner cases, which may not even exist -- and those could be brute-force searched, assuming that average and total are small enough integers.
EDIT
Because average is an integer, it makes sense to tabulate those exact integer values that sit on the boundaries of -round(log2(average / 30.0)).
From Octave: d = -round(log2((1:1000000)/30.0)); find([true, d(2:end) ~= d(1:end-1)])
1 2 3 6 11 22 43 85 170 340 679 1358 2716
5431 10862 21723 43445 86890 173779 347558 695115
All the averages in [1, 2) -> 5
All the averages in [2, 3) -> 4
All the averages in [3, 6) -> 3
...
All the averages in [43445, 86890) -> -11
int a = find_lower_bound(average, table); // linear or binary search
return 5 - a;
No floating point arithmetic needed
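A minimal C sketch of that lookup, with the boundary table transcribed from the Octave output above (so it assumes average >= 1, averages below one million, and the 30-second target); find_lower_bound is spelled out here as a binary search, and the names are mine.

#include <stddef.h>

static const long BOUNDS[] = {
    1, 2, 3, 6, 11, 22, 43, 85, 170, 340, 679, 1358, 2716,
    5431, 10862, 21723, 43445, 86890, 173779, 347558, 695115
};
#define NBOUNDS (sizeof BOUNDS / sizeof BOUNDS[0])

/* delta = -round(log2(average / 30.0)), computed with integers only. */
int delta_from_average(long average)
{
    /* find_lower_bound: index of the last boundary <= average */
    size_t lo = 0, hi = NBOUNDS;
    while (hi - lo > 1) {
        size_t mid = lo + (hi - lo) / 2;
        if (BOUNDS[mid] <= average)
            lo = mid;
        else
            hi = mid;
    }
    return 5 - (int)lo;   /* [1, 2) -> 5, [2, 3) -> 4, ..., [43445, 86890) -> -11 */
}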
I want to compute the nth positive root of p. For example, if we have n=2 and p=16 the answer is 4, because
4^2 = 16. I want to do this for huge numbers (1 <= n <= 200, 1 <= p < 10^101). I don't know how I should do it as fast as possible.
Example:
n=2 p=16 Answer 4
n=7 p=4357186184021382204544 Answer 1234
There are arbitrary-precision math packages out there, if you don't want to come up with your own algorithm.
But you might try this: Get p into a double any way you can (a double can handle the range of 10^101). Then use pow(p, 1.0/n) from math.h, and that answer will be close to the right integer (round it?). But this will fail when the answer needs more than about 15 digits, i.e. when p is large and n is too small: e.g., p = 10^100, n = 2 gives a 50-digit answer, which is too big an integer for a double to represent exactly.
To get a 101-digit p into a double: cut the number (string) into 10-digit chunks, multiply each by 10 to the appropriate power, and add them up.
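As a rough sketch of that suggestion: strtod can do the string-to-double conversion (it accepts arbitrarily long decimal strings, so it plays the role of the manual chunking), and pow does the rest. Keep in mind the result is only trustworthy while the true root fits in 15-16 significant digits.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *p_str = "4357186184021382204544";  /* second example from the question */
    int n = 7;

    double p = strtod(p_str, NULL);   /* fits in a double's range, not its precision */
    double root = pow(p, 1.0 / n);
    printf("approx root: %.0f\n", root);  /* should print 1234 for this input */
    return 0;
}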
Try Newton's method as described here:
http://en.wikipedia.org/wiki/Nth_root_algorithm
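For reference, here is a sketch of that Newton iteration in double precision, starting from a power-of-two overestimate. Like the pow approach it only approximates the root once the answer outgrows a double; for exact 100-digit inputs the same loop would have to run on big integers instead. The function name is mine.

#include <math.h>
#include <stdio.h>

/* Newton's method for the n-th root of p (p >= 1, n >= 1):
 * x <- ((n-1)*x + p / x^(n-1)) / n, starting above the root so the
 * iterates decrease monotonically until they stop changing. */
double nth_root(double p, int n)
{
    int e;
    frexp(p, &e);                       /* p = m * 2^e with 0.5 <= m < 1 */
    double x = ldexp(1.0, e / n + 1);   /* 2^(e/n + 1) is always >= the true root */

    for (;;) {
        double next = ((n - 1) * x + p / pow(x, n - 1)) / n;
        if (next >= x)                  /* no further progress: converged */
            return x;
        x = next;
    }
}

int main(void)
{
    printf("%.0f\n", nth_root(4357186184021382204544.0, 7));  /* approximately 1234 */
    return 0;
}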
Take the log of p, divide by n, and take the anti-log:
nthRoot(p, n) := Math.Power(10, Math.Log10(p) / n)
Not sure whether you're specifically dealing with integers or what, but that is the pseudo-code for it (the log and the anti-log just have to use the same base).
The program asks the user for the number of times to flip a coin (n; the number of trials).
A success is considered a heads.
The program correctly creates a random number that is either 0 or 1. 0's are considered heads and a success.
Then, the program is supposed to output the expected values of getting x heads. For example, if the coin was flipped 4 times, what are the following probabilities, using the formula
nCk * p^k * (1-p)^(n-k)
Expected 0 heads with n flips: xxx
Expected 1 heads with n flips: xxx
...
Expected n heads with n flips: xxx
When doing this with "larger" numbers, the results come out to weird values. It happens if 15 or 20 is given as the input: I have been getting 0's and negative values where xxx should be.
Debugging, I have noticed that nCk comes out negative and incorrect towards the upper values, and I believe this is the issue. I use this formula for my combination:
double combo = fact(n)/fact(r)/fact(n-r);
Here is the pseudocode for my fact function:
long fact(int x)
{
    int e; // local counter
    factor = 1;
    for (e = x; e != 0; e--)
    {
        factor = factor * e;
    }
    return factor;
}
Any thoughts? My guess is my factorial or combo functions are exceeding the max values or something.
You haven't mentioned how factor is declared. I think you are getting integer overflows. I suggest you use double: since you are calculating expected values and probabilities, you shouldn't be too concerned about exact precision.
Try changing your fact function to:
double fact(double x)
{
    int e; // local counter
    double factor = 1;
    for (e = x; e != 0; e--)
    {
        factor = factor * e;
    }
    return factor;
}
EDIT:
Also to calculate nCk, you need not calculate factorials 3 times. You can simply calculate this value in the following way.
if k > n/2, k = n-k.
nCk = n(n-1)(n-2)...(n-k+1) / factorial(k)
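A sketch of that in C: one running product that multiplies and divides alternately, so after step i the value equals C(n-k+i, i) and nothing factorial-sized is ever formed. Returning double keeps it usable for the probability formula in the question; the names are mine.

#include <math.h>
#include <stdio.h>

/* nCk via one running product; the intermediate value never exceeds
 * roughly k times the final answer. */
double n_choose_k(int n, int k)
{
    if (k < 0 || k > n)
        return 0.0;
    if (k > n - k)
        k = n - k;                          /* use the smaller k, as noted above */
    double result = 1.0;
    for (int i = 1; i <= k; i++)
        result = result * (n - k + i) / i;  /* multiply, then divide: stays small */
    return result;
}

int main(void)
{
    int n = 20;
    double p = 0.5;                          /* fair coin */
    for (int k = 0; k <= n; k++) {
        double prob = n_choose_k(n, k) * pow(p, k) * pow(1.0 - p, n - k);
        printf("Expected %2d heads with %d flips: %f\n", k, n, prob);
    }
    return 0;
}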
You're exceeding the maximum value of a long. Factorial grows so quickly that you need the right type of number--what type that is will depend on what values you need.
long is a signed integer, and (where long is 32 bits) as soon as you pass 2^31 - 1 the value wraps around and shows up as negative (two's-complement math).
Using an unsigned long will buy you a little time (one more bit), but for factorials it's probably not worth it. If your compiler supports long long, then try an "unsigned long long". That will (usually, depending on compiler and CPU) double the number of bits you're using.
You can also try switching to use double. The problem you'll face there is that you'll lose accuracy as the numbers increase. A double is a floating point number, so you'll have a fixed number of significant digits. If your end result is an approximation, this may work okay, but if you need exact values, it won't work.
If none of these solutions will work for you, you may need to resort to using an "infinite precision" math package, which you should be able to search for. You didn't say if you were using C or C++; this is going to be a lot more pleasant with C++ as it will provide a class that acts like a number and that would use standard arithmetic operators.
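If exact integer results are not required (and here they are not, since the end result is a probability), one further option, not mentioned above, is to do the whole calculation in log space with lgamma from <math.h>, which never forms the large factorials at all. A rough sketch, with names of my own choosing:

#include <math.h>
#include <stdio.h>

/* nCk * p^k * (1-p)^(n-k) computed in log space: lgamma(n + 1) is log(n!),
 * so even n in the thousands cannot overflow.  Assumes 0 < p < 1. */
double binom_pmf(int n, int k, double p)
{
    double log_nck = lgamma(n + 1.0) - lgamma(k + 1.0) - lgamma(n - k + 1.0);
    return exp(log_nck + k * log(p) + (n - k) * log(1.0 - p));
}

int main(void)
{
    for (int k = 0; k <= 15; k++)
        printf("Expected %2d heads with 15 flips: %f\n", k, binom_pmf(15, k, 0.5));
    return 0;
}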
Let's say I am given integers x and y (satisfying x <= y, both with a ones digit of 0, so in particular both are even). Then I know that their average avg = (x + y) / 2 is an integer as well. I would like to find this midpoint rounded up to a resolution of 100. In other words, if my two inputs are 75200 and 75300, then avg is 75250, and rounding up to the nearest 100 (but without exceeding or equaling the bigger number) forces the answer to be 75200.
How can I implement this logic without first dividing everything by 100 and using the following floating point arithmetic:
x + std::floor((y - x) * .5 * 100 + .5)*0.01
In other words, how can I do the above without floating point values but obtain the same behavior at the resolution of 100 instead of 0.01?
To compute the average you can do
avg = (x + y) / 2
(BTW, integer addition and division by 2 are very cheap operations even on small microcontrollers.)
To round this to the nearest multiple of 100 (corresponding to your floating-point example) you can do
result = ((avg + 50) / 100) * 100
as integer division rounds down to the nearest integer. By changing the 50 to 0 you can always round down, while changing it to 99 always rounds up.
Edit: Note that this method for rounding doesn't work for negative numbers. Since integer division rounds towards zero, in that case you'll need to subtract the 50, subtract 99 to always round down and subtract 0 to always round up.
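A small self-contained check of those three variants, assuming non-negative inputs as discussed:

#include <stdio.h>

int main(void)
{
    int x = 75200, y = 75300;
    int avg = (x + y) / 2;                   /* 75250 */

    int nearest = ((avg + 50) / 100) * 100;  /* 75300: round to nearest */
    int down    = ((avg +  0) / 100) * 100;  /* 75200: always round down */
    int up      = ((avg + 99) / 100) * 100;  /* 75300: always round up */

    printf("%d %d %d %d\n", avg, nearest, down, up);
    return 0;
}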
Your problematic example requires strong conditions:
the difference between x and y needs to be not greater than 100
y % 100 must be 0
So for most cases, a simple rounded average is perfect for you:
avg100 = avg - (avg % 100) + 100
The tricky part is fixing the remaining error without a condition - if you want to avoid conditions, or slow operations.
For this, the best way is to use a multiplication, and split the expression into two:
avg100 = avg - (avg % 100)
avg100 += 100 * !!(y - avg100 - 100)
For most cases, y is more than 100 above the rounded-down avg100, so the !! operator returns 1 and the full 100 is added, giving the rounded-up value. In the rare problematic case, where adding 100 would land exactly on y, the argument is 0, !! returns 0, and the value stays rounded down.
(I don't know if the compiler will really generate branch-free code for the '!!' operator, but I don't have a better idea, and if it is possible, I think it will. If not, this code is still short and easy to understand.)
Also, you can calculate the average using the following expression:
avg = y - (y-x)/2
Or even change the division into a bit shift as an optimization.
This doesn't require both of the numbers to be even, just that they have the same parity.
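A tiny check of that form of the average; as a side note, because it never forms the sum x + y it also cannot overflow for large non-negative inputs where (x + y) / 2 might:

#include <stdio.h>

int main(void)
{
    int x = 75205, y = 75295;                       /* both odd: same parity */
    printf("%d\n", y - (y - x) / 2);                /* 75250, the exact midpoint */

    unsigned long big_x = 4000000000UL, big_y = 4000000100UL;
    printf("%lu\n", big_y - (big_y - big_x) / 2);   /* 4000000050; big_x + big_y would
                                                       wrap on a 32-bit unsigned long */
    return 0;
}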