I am using a 1D vector to represent a 3D data structure, with indices 0 to x*y*z, where x, y and z are the number of cells in each direction.
In order to index a specific cell [xi][yi][zi], I use:
index = xi + x*yi + x*y*zi;
My problem is, I can't work out how to go the other way!
E.g. I want to recover the individual coordinates represented by index 43.
I think I've worked out how to get xi:
xi = index % x;
but I can't for the life of me get the others... :/
EDIT: Hmmm. Is this right?
xi = index % x;
yi = ((index - xi)/x) % y;
zi = ((index - xi - x*yi) / x) / y;
Try this:
xi = index % x;
yi = (index / x) % y;
zi = ((index / x) / y) % z;
This can be easily generalized as one might expect.
Some modular arithmetic should do the trick.
index % x = (xi + x*yi + x*y*zi) % x = xi % x,
but 0 <= xi < x therefore xi % x = xi.
Then for yi:
(index / x) % y = yi.
Then lastly:
index / (x * y) = zi.
EDIT:
In code (as opposed to math above) it would be:
xi = index % x;
yi = (index / x) % y;
zi = index / (x * y);
Given N. Find number of all the integer pairs x, y such that
1 <= x, y <= N
and x^2 - y is a perfect square.
N is large, but an O(sqrt(N)) solution will be fast enough.
I tried to solve this problem like, letting z^2 be the square number
x^2 - z^2 = y = (x+z)(x-z)
then let x + z = p and x - z = q;
then x = (p+q)/2 and z = (p-q)/2;
and (p+q)/2<=N;
and p and q should have same parity (both even or odd as (p+q)/2 is integer)
also pq<=N
Now I don't know how to proceed from here.
Alternatively, can someone suggest another method to solve this problem efficiently?
This solution solves the problem in O(sqrt N).
Rephrasing the problem
Let z^2 = x^2 - y, z ≥ 0, or equivalently 0 < y = x^2 - z^2 ≤ N
We need pairs of perfect squares under N^2 whose differences are less than or equal to N. By arithmetic series,
1 + 3 + 5 + 7 + ... + (2k - 1) = k^2
That means x^2 - z^2 is a sum of some n consecutive odd integers.
Counting odd integers
z^2 + (2z + 1) + (2z + 3) + ... + (2x - 1) = x^2. Apply arithmetic series formula
z^2 + n/2 * (4z + 2 + 2(n - 1)) = x^2
z^2 + n * (2z + n) = x^2
Since y = x^2 - z^2 = n(2z + n) and y ≤ N:
n(2z + n) ≤ N
z ≤ floor((N/n - n)/2)
For each n, this gives the largest z such that the sum of the n consecutive odd integers starting at 2z + 1 still stays within N.
For each z, the x can be z+1, z+2 ... z+n, for a total of n pairs.
#include <cmath>
#include <iostream>

int N = 99;

int main(void){
    int z = -1;
    // z = 0 is valid for x^2 < N, so -1 is largest invalid z.
    int count = 0;
    for (int n = std::sqrt(N); n > 0; n--){
        int zNew = (N/n - n)/2;
        // zNew is max z that has n perfect squares from z + 1 to z + n
        count += (zNew - z) * n;
        z = zNew;
    }
    std::cout << count << '\n';
}
A version in Java passed these unit tests.
(N, count) = (1, 1), (3, 2), (5, 4), (8, 6), (60, 68), (99, 124), (500, 808)
I use the following code to calculate standard deviation:
std::vector<float> k = {4,6,2};
float mean = 4;
float sum = std::accumulate(k.begin(), k.end(), 0, [&mean](float x, float y) {
return (y - mean) * (y - mean);
});
float variance = sum / k.size();
float stdev = sqrt(variance);
std::accumulate returns 4 when it should return:
(4-4)^2 + (6-4)^2 + (2-4)^2 = 8
Furthermore, printing (y - mean) * (y - mean) gives:
0
4
4
So, why doesn't it return 0 + 4 + 4?
You do not use the x parameter. Try the following:
float sum = std::accumulate(k.begin(), k.end(), 0.0F, [&mean](float x, float y) {
return x + (y - mean) * (y - mean);
});
UPDATE: the initial value must also be a float (0.0F instead of 0); with an int initial value, std::accumulate deduces int as the accumulation type and truncates each intermediate result.
I have tried the following C++ code:
#include <algorithm>
#include <iostream>
#include <utility>
#include <vector>
using namespace std;

bool comp(const pair<int,int>&A, const pair<int,int>&B)
{
    if(A.second<=B.second)
    {
        if(A.first>=B.first)
            return 1;
        else
            return 0;
    }
    return 0;
}

int main()
{
    int a, b, c, x[10], y[10];
    cin>>a;
    cin>>b;
    cin>>c;
    for(int i=0;i<4;++i)
    {
        cin>>x[i];
        y[i]=a*x[i]*x[i]+b*x[i]+c;
    }
    vector<pair<int,int> >V;
    for(int i=0;i<4;++i)
    {
        V.push_back(make_pair(x[i],y[i]));
    }
    for(int i=0;i<4;++i)
    {
        sort(V.begin(),V.end(),&comp);
    }
    for(size_t i=0;i<V.size();i++)
    {
        cout<<V[i].first;
        cout<<" "<<V[i].second<<" ";
    }
    return 0;
}
STDIN: a b c x1 x2 x3 ..., where the x values are in sorted order, i.e. x1 < x2 < x3. The code should generate a new list (y = y1 y2 y3) using the parabola equation for every x and sort that list with a run-time complexity of <= O(log n).
STDOUT: x3,y3 x1,y1 x2,y2 ... (assuming computed y3 < y1 < y2 ...).
The code should NOT compute the y's. Multiplication on this compute node is "too" costly. The solution should identify a way of sorting the list without computing the "y" values.
My code computes the y values. Can anyone find a method of sorting without computing them? A Python implementation would also work for me.
The farther an x value is from the parabola's apex x0, the higher is its y value when a is positive and the lower its y value when a is negative.
|x1 - x0| > |x2 - x0| && a > 0 --> y1 > y2
|x1 - x0| > |x2 - x0| && a < 0 --> y1 < y2
When a is zero, your parabola is really a line and the x values are already sorted in the correct order when b is positive or in the reverse order when b is negative.
So when a isn't zero, find the apex:
x0 = - b / (2*a)
Now find the value in your sorted list of x values that is closest to x0:
i = index(x: min(|x - x0|))
Add point i to the list. Create two indices:
l = i - 1
r = i + 1
Now take the point at index l or index r that is closer to the apex, add it to the list, and advance that index; repeat until both sides are exhausted.
Revert the list when a is negative. (Or add the items from the end of the list.)
Edit: Here's an implementation in Python. It pops elements off sub-lists rather than using array indices, but the logic is the same:
import bisect

def parasort(a, b, c, x):
    """Return list sorted by y = a*x*x + b*x + c for sorted input x."""
    if not x:
        return x
    if a == 0:  # degenerate case: line
        if b < 0: return x[::-1]
        return x[:]
    x0 = -0.5 * b / a  # apex of parabola
    i = bisect.bisect_left(x, x0) + 1  # closest point via bin. search
    l = x[:i][::-1]  # left array, reversed
    r = x[i:]  # right array
    res = []
    while l and r:  # merge l and r
        if x0 - l[0] > r[0] - x0:  # right item is smaller
            res += [r.pop(0)]
        else:  # left item is smaller
            res += [l.pop(0)]
    res += l + r  # append rest of arrays
    if a < 0: return res[::-1]
    return res

a = 4
b = 0
c = 0
xx = parasort(a, b, c, [-3, 0, 1, 2])
for x in xx:
    print(x, a * x * x + b * x + c)
Distance from point to point: dist = sqrt(dx * dx + dy * dy);
But sqrt is too slow and I can't accept that. I found a method in a book, based on the Maclaurin (Taylor) series, to estimate the distance between two points, but I can't comprehend the following code. Thanks to anyone who helps me.
#define MIN(a, b) ((a < b) ? a : b)

int FastDistance2D(int x, int y)
{
    // This function computes the distance from 0,0 to x,y with 3.5% error
    // First compute the absolute value of x, y
    x = abs(x);
    y = abs(y);
    // Compute the minimum of x, y
    int mn = MIN(x, y);
    // Return the distance
    return x + y - (mn >> 1) - (mn >> 2) + (mn >> 4);
}
I have consulted related material about the Maclaurin series, but I still can't comprehend how the return value uses it to estimate the distance. Thanks, everyone!
This task is almost a duplicate of another one:
Very fast 3D distance check?
And there was link to great article:
http://www.azillionmonkeys.com/qed/sqroot.html
In the article you can find different approaches for approximating the square root. For example, this one may suit you:
int isqrt (long r) {
    float tempf, x, y, rr;
    int is;
    rr = (long) r;
    y = rr*0.5;
    *(unsigned long *) &tempf = (0xbe6f0000 - *(unsigned long *) &rr) >> 1;
    x = tempf;
    x = (1.5*x) - (x*x)*(x*y);
    if (r > 101123) x = (1.5*x) - (x*x)*(x*y);
    is = (int) (x*rr + 0.5);
    return is + (((signed int) (r - is*is)) >> 31);  // parentheses: shift before add
}
If you can compute the square root quickly, then you can calculate the distance in the regular way:
return isqrt(a*a + b*b);
And one more link:
http://www.flipcode.com/archives/Fast_Approximate_Distance_Functions.shtml
u32 approx_distance( s32 dx, s32 dy )
{
    u32 min, max;

    if ( dx < 0 ) dx = -dx;
    if ( dy < 0 ) dy = -dy;

    if ( dx < dy )
    {
        min = dx;
        max = dy;
    } else {
        min = dy;
        max = dx;
    }

    // coefficients equivalent to ( 123/128 * max ) and ( 51/128 * min )
    return ((( max << 8 ) + ( max << 3 ) - ( max << 4 ) - ( max << 1 ) +
             ( min << 7 ) - ( min << 5 ) + ( min << 3 ) - ( min << 1 )) >> 8 );
}
You are right, sqrt is quite a slow function. But do you really need to compute the distance?
In a lot of cases you can use the squared distance instead.
E.g.
If you want to find out which distance is shorter, you can compare the squares of the distances just as well as the real distances.
If you want to check if 100 > distance, you can just as well check 10000 > distanceSquared.
Using the distance squared in your program instead of the distance, you can often avoid calculating the sqrt.
It depends on your application whether this is an option for you, but it is always worth considering.
I'm reading the following code (taken from here)
void linear_interpolation_CPU(float2* result, float2* data,
                              float* x_out, int M, int N) {
    float a;
    for(int j = 0; j < N; j++) {
        int k = floorf(x_out[j]);
        a = x_out[j] - floorf(x_out[j]);
        result[j].x = a*data[k+1].x + (-data[k].x*a + data[k].x);
        result[j].y = a*data[k+1].y + (-data[k].y*a + data[k].y);
    }
}
but I don't get it.
Why isn't result[j] calculated by using the usual linear-interpolation formula y = y0 + (y1 - y0)*(x - x0)/(x1 - x0)?
It is calculated that way.
Look at the first two lines:
int k = floorf(x_out[j]);
a = x_out[j] - floorf(x_out[j]);
The first line defines x0 using the floor function. This is because the article assumes a lattice spacing of one for the sample points, as per the line:
the samples are obtained on the 0,1,...,M lattice
Now we could rewrite the second line for clarity as:
a = x_out[j] - k;
The second line is therefore x-x0.
Now, let us examine the equation:
result[j].y = a*data[k+1].y + (-data[k].y*a + data[k].y);
Rewriting this in terms of y, x, and x0 gives:
y = (x-x0)*data[k+1].y + (-data[k].y*(x-x0) + data[k].y);
Let's rename data[k+1].y as y1 and data[k].y as y0:
y = (x-x0)*y1 + (-y0*(x-x0) + y0);
Let's rearrange this by pulling out x-x0:
y = (x-x0)*(y1-y0) + y0;
And rearrange again:
y = y0 + (y1-y0)*(x-x0);
Again, the lattice spacing is important:
the samples are obtained on the 0,1,...,M lattice
Thus, x1-x0 is always 1. If we put it back in, we get
y = y0 + (y1-y0)*(x-x0)/(x1-x0);
Which is just the equation you were looking for.
Granted, it's ridiculous that the code is not written so as to make that apparent.