Compute integer bounds to include scaled floating point values - C++

I am trying to compute integer array bounds that will include floating point limits divided by a scale. For example, if my origin is 0, my scale is 10, and my floating point maximum is 10, then my integer array bounds need to be 2. The obvious formula is to divide the maximum by the scale, which gives the incorrect result of 1.
I need to divide the inclusive maximum value by the scale and add one when the division lands exactly on a multiple.
I am running into a mismatch between the normal way to define and use integer array indexes and my desired way to use real-valued coordinates. I am trying to map inclusive real-valued coordinates onto integer array indexes, using a scaling term.
(I am actually working with two dimensional maps, but the problem can be expressed more simply in one dimension.)
This is wrong:
int get_array_size(double scale, double maximum)
{
    return std::ceil(maximum / scale); // Fails on exact multiples
}
This is wasteful:
int get_array_size(double scale, double maximum)
{
    return 1 + std::ceil(maximum / scale); // Allocates extra array memory
}
This is ugly and I am not sure if it is correct:
int get_array_size(double scale, double maximum)
{
    if (std::fmod(maximum, scale) == 0) // % doesn't compile for doubles; I am not sure fmod is correct here either
        return 1 + std::ceil(maximum / scale);
    else
        return std::ceil(maximum / scale); // Maybe I can eliminate the call to std::ceil?
}
I am trying to get the value maximum / scale, rounded up, on the open intervals between multiples of scale, and 1 + maximum / scale at each exact multiple of scale. I am not sure how to correctly express this in mathematical terms or how to implement it in C++. I would be grateful if someone could clarify my understanding and point me in the right direction.
Mathematically, I think I am trying to define f(x, s) = n + 1 for the integer n satisfying s * n <= x < s * (n + 1). I want to implement this efficiently while respecting the difference between the <= and < comparisons.

The way I interpret this question, I think maximum and scale don't actually matter - what you are really asking about is how to correctly map from floats to ints with specific boundary conditions. For example [0.0, 1.0) to 0, [1.0, 2.0) to 1, etc. So the question becomes a bit simpler if we just consider maximum / scale to be a single quantity; I'll call it t.
I believe you actually want to use std::floor instead of std::ceil: since s * n <= x < s * (n + 1) is equivalent to n <= x / s < n + 1, you have floor(x / s) = n, so your f(x, s) is simply floor(x / s) + 1.
int scaled_coord_to_index(float t) {
    return static_cast<int>(std::floor(t)); // needs #include <cmath>
}
And the size of your array should always be the maximum scaled coordinate + 1 (with negative values normalized to start at 0).
int array_size(float min_t, float max_t) {
    // NOTE: This will "anchor" your coords based on the most negative value.
    // e.g. if that value is 1.6, then your bins will be [1.6, 2.6), [2.6, 3.6), etc.
    // To change that behavior you could use std::floor(min_t) instead.
    return scaled_coord_to_index(max_t - min_t) + 1;
}
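As a quick sanity check against the example in the question (origin 0, maximum 10, scale 10, so t = maximum / scale = 1.0), here is a minimal driver; the main() is my own illustration, not part of the original answer:
#include <cmath>
#include <iostream>

int scaled_coord_to_index(float t) {
    return static_cast<int>(std::floor(t));
}

int array_size(float min_t, float max_t) {
    return scaled_coord_to_index(max_t - min_t) + 1;
}

int main() {
    std::cout << array_size(0.0f, 1.0f) << '\n';  // 2: an exact multiple gets its own slot
    std::cout << array_size(0.0f, 0.95f) << '\n'; // 1: still inside the first interval
}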

Related

Subsampling an array of numbers

I have a series of 100 integer values which I need to reduce/subsample to 77 values for the purpose of fitting into a predefined space on screen. This gives a fraction of 77/100 values-per-pixel - not very neat.
Assuming the 77 is fixed and cannot be changed, what are some typical techniques for subsampling 100 numbers down to 77? I get a sense that it will be a jagged mapping, by which I mean the first new value is the average of [0, 1], then the next value is [3], then the average of [4, 5], etc. But how do I approach getting the pattern for this mapping?
I am working in C++, although I'm more interested in the technique than implementation.
Thanks in advance.
Whether you downsample or oversample, you are trying to reconstruct a signal at points in time that were never sampled... so you have to make some assumptions.
The sampling theorem tells you that if you sample a signal knowing that it has no frequency components above half the sampling frequency, you can continuously and completely recover the signal over the whole timing period. There's a way to reconstruct the signal using sinc() functions (that is, sin(x)/x).
sinc() (more precisely sin(M_PI*x/Sampling_period)/(M_PI*x/Sampling_period)) is a function that has the following properties:
Its value is 1 for x == 0.0 and 0 for x == k*Sampling_period with k == +-1, +-2, ...
It has no frequency components above half of the sampling frequency derived from Sampling_period.
So consider the functions F_k(x) = Y[k]*sinc(x/Sampling_period - k): each one is the sinc function that equals the sample value at position k and 0 at every other sample position. If you sum F_k over all k in your sample set, you get the best continuous function that has no components at frequencies above half the sampling frequency and takes the same values as your samples.
That said, you can resample this function at whatever positions you like, getting the best possible resampling of your data.
This is, by far, a complicated way of resampling data (it also has the problem of not being causal, so it cannot be implemented in real time), and several methods have been used in the past to simplify the interpolation. You have to construct a sinc function for each sample point and add them all together, then resample the resulting function at the new sampling points and give that as the result.
Next is an example of the interpolation method just described. It accepts some input data (in_sz samples) and outputs interpolated data using the method described above. (I assumed the endpoints coincide, which maps the input endpoints onto the output endpoints and leads to the somewhat intricate (in_sz - 1)/(out_sz - 1) calculations in the code; change them to in_sz/out_sz if you want a plain N samples -> M samples conversion.)
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* normalized sinc function */
double sinc(double x)
{
    x *= M_PI;
    if (x == 0.0) return 1.0;
    return sin(x)/x;
} /* sinc */

/* interpolate a function made of in samples at point x */
double sinc_approx(double in[], size_t in_sz, double x)
{
    int i;
    double res = 0.0;
    for (i = 0; i < in_sz; i++)
        res += in[i] * sinc(x - i);
    return res;
} /* sinc_approx */

/* do the actual resampling. Change (in_sz - 1)/(out_sz - 1) if you
 * don't want the initial and final samples to coincide, as is done here.
 */
void resample_sinc(
    double in[],
    size_t in_sz,
    double out[],
    size_t out_sz)
{
    int i;
    double dx = (double) (in_sz-1) / (out_sz-1);
    for (i = 0; i < out_sz; i++)
        out[i] = sinc_approx(in, in_sz, i*dx);
}

/* test case */
int main()
{
    double in[] = {
        0.0, 1.0, 0.5, 0.2, 0.1, 0.0,
    };
    const size_t in_sz = sizeof in / sizeof in[0];
    const size_t out_sz = 5;
    double out[out_sz];
    int i;

    for (i = 0; i < in_sz; i++)
        printf("in[%d] = %.6f\n", i, in[i]);
    resample_sinc(in, in_sz, out, out_sz);
    for (i = 0; i < out_sz; i++)
        printf("out[%.6f] = %.6f\n", (double) i * (in_sz-1)/(out_sz-1), out[i]);
    return EXIT_SUCCESS;
} /* main */
There are different ways of interpolating (see Wikipedia).
The linear one would be something like:
#include <array>

std::array<int, 77> sampling(const std::array<int, 100>& a)
{
    std::array<int, 77> res;
    for (int i = 0; i != 76; ++i) {
        int index = i * 99 / 76;
        int p = i * 99 % 76;
        res[i] = ((p * a[index + 1]) + ((76 - p) * a[index])) / 76;
    }
    res[76] = a[99]; // done outside of loop to avoid out of bound access (0 * a[100])
    return res;
}
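For instance, a quick driver to spot-check the endpoints; this main() is my own sketch, not part of the original answer:
#include <array>
#include <iostream>

// sampling() as defined above

int main()
{
    std::array<int, 100> a{};
    for (int i = 0; i != 100; ++i)
        a[i] = i; // a linear ramp is easy to verify by eye
    std::array<int, 77> res = sampling(a);
    std::cout << res.front() << ' ' << res.back() << '\n'; // expect 0 and 99
}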
Create 77 new pixels based on the weighted average of their positions.
As a toy example, think about the 3 pixel case which you want to subsample to 2.
Original (denote as multidimensional array original with RGB as [0, 1, 2]):
|----|----|----|
Subsample (denote as multidimensional array subsample with RGB as [0, 1, 2]):
|------|------|
Here, it is intuitive to see that the first subsample seems like 2/3 of the first original pixel and 1/3 of the next.
For the first subsample pixel, subsample[0], you make it the RGB average of the original pixels that overlap it, in this case original[0] and original[1]. But we do so in weighted fashion.
subsample[0][0] = original[0][0] * 2/3 + original[1][0] * 1/3 # for red
subsample[0][1] = original[0][1] * 2/3 + original[1][1] * 1/3 # for green
subsample[0][2] = original[0][2] * 2/3 + original[1][2] * 1/3 # for blue
In this example original[1][2] is the blue component of the second original pixel.
Keep in mind for different subsampling you'll have to determine the set of original cells that contribute to the subsample, and then normalize to find the relative weights of each.
There are much more complex graphics techniques, but this one is simple and works.
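A minimal sketch of that idea for a 1-D single-channel signal, generalizing the 3-to-2 example above (the function name and the overlap-based weighting code are my own illustration, not from the original answer):
#include <algorithm>
#include <iostream>
#include <vector>

// Downsample src to dst_size values. Each output pixel covers a span of
// src.size() / dst_size input pixels; every input pixel contributes in
// proportion to how much of it falls inside that span.
std::vector<double> box_downsample(const std::vector<double>& src, int dst_size)
{
    const double span = static_cast<double>(src.size()) / dst_size;
    std::vector<double> dst(dst_size);
    for (int i = 0; i < dst_size; ++i) {
        const double lo = i * span, hi = (i + 1) * span;
        double sum = 0.0;
        for (int j = static_cast<int>(lo); j < hi && j < static_cast<int>(src.size()); ++j) {
            // overlap of input pixel [j, j+1) with output span [lo, hi)
            const double w = std::min<double>(hi, j + 1) - std::max<double>(lo, j);
            sum += w * src[j];
        }
        dst[i] = sum / span; // normalize by the total weight
    }
    return dst;
}

int main()
{
    const std::vector<double> src = {3.0, 6.0, 9.0}; // the 3-pixel toy example
    for (double v : box_downsample(src, 2))
        std::cout << v << '\n'; // 4 (= 3*2/3 + 6*1/3) and 8 (= 6*1/3 + 9*2/3)
}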
Everything depends on what you wish to do with the data - how do you want to visualize it.
A very simple approach would be to render to a 100-wide image, and then smooth scale the image down to a narrower size. Whatever graphics/development framework you're using will surely support such an operation.
Say, though, that your goal might be to retain certain qualities of the data, such as minima and maxima. In such a case, for each bin, you could draw a line of darker color up to the minimum value and then continue with a lighter color up to the maximum. Or, instead of just putting a pixel at the average value, you could draw a line from the minimum to the maximum.
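A sketch of that min/max-per-bin idea (the binning helper and the proportional bin mapping are my own illustration, assuming 100 inputs folded into 77 columns):
#include <algorithm>
#include <climits>
#include <iostream>
#include <utility>
#include <vector>

// For each of n_bins output columns, keep the min and max of the input
// values that land in that column, so a renderer can draw a vertical
// line from min to max instead of a single averaged pixel.
std::vector<std::pair<int, int>> minmax_bins(const std::vector<int>& v, int n_bins)
{
    std::vector<std::pair<int, int>> bins(n_bins, {INT_MAX, INT_MIN});
    for (std::size_t i = 0; i < v.size(); ++i) {
        std::size_t b = i * n_bins / v.size(); // proportional mapping, always < n_bins
        bins[b].first = std::min(bins[b].first, v[i]);
        bins[b].second = std::max(bins[b].second, v[i]);
    }
    return bins;
}

int main()
{
    std::vector<int> v(100);
    for (int i = 0; i < 100; ++i)
        v[i] = i % 10; // sawtooth test signal
    std::cout << minmax_bins(v, 77).size() << '\n'; // 77
}
With 100 inputs and 77 bins, each bin ends up holding one or two samples.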
Finally, you might wish to render as if you had 77 values only - then the goal is to somehow transform the 100 values down to 77. This will imply some kind of an interpolation. Linear or quadratic interpolation is easy, but adds distortions to the signal. Ideally, you'd probably want to throw a sinc interpolator at the problem. A good list of them can be found here. For theoretical background, look here.

C++: "Distance" between 2 coordinates in 2D array

For a game I'm writing I need to find an integer value for the distance between two sets of coordinates. It's a 2D array that holds the different maps (like the original Zelda). The further you go from the center (5,5), the higher the number should be, since the difficulty of enemies increases. Ideally it should be between 0 and 14. The array is 11x11.
Now, I tried to use the Pythagorean formula that I remember from high school, but it's spewing out overflow numbers. I can't figure out why.
srand(rand());
int distance=sqrt(pow((5-worldx), 2)-pow((5-worldy), 2));
if(distance<0) //alternative to abs()
{
    distance+=(distance * 2);
}
if(distance>13)
{
    distance=13;
}
int rnd=rand()%(distance+1);
Monster testmonster = monsters[rnd];
srand(rand()); does not make sense; it should be srand(time(NULL));
Don't use pow for squaring; just use x*x.
Your formula is also wrong: you should add the two squares together, not subtract.
sqrt returns a double, and the cast to int will round it down.
sqrt always returns a non-negative number anyway, so the distance < 0 check does nothing.
You know abs exists, right? Why not use it? Also, distance = -distance is better than distance += (distance * 2).
srand(time(NULL));
int dx = 5 - worldx;
int dy = 5 - worldy;
int distance=sqrt(dx * dx + dy * dy);
if(distance>13)
{
    distance=13;
}
int rnd=rand()%(distance+1);
Monster testmonster = monsters[rnd];
It's a^2 + b^2 = c^2, not minus. Once you call sqrt with a negative argument, you're on your own.
You're subtracting squares inside your square root, instead of adding them ("...-pow...").

Converting polygon coordinates from Double to Long for use with Clipper library

I have two polygons with their vertices stored as Double coordinates. I'd like to find the intersecting area of these polygons, so I'm looking at the Clipper library (C++ version). The problem is, Clipper only works with integer math (it uses the Long type).
Is there a way I can safely transform both my polygons with the same scale factor, convert their coordinates to Longs, perform the Intersection algorithm with Clipper, and scale the resulting intersection polygon back down with the same factor, and convert it back to a Double without too much loss of precision?
I can't quite get my head around how to do that.
You can use a simple multiplier to convert between the two:
#include <limits>
#include <stdexcept>

/* Using a power of two because it is exactly representable and makes
   the scaling operation (not the rounding!) lossless. The value 1024
   preserves roughly three decimal digits. */
double const scale = 1024.0;

// representable range
double const min_value = std::numeric_limits<long>::min() / scale;
double const max_value = std::numeric_limits<long>::max() / scale;

long
to_long(double v)
{
    if(v < 0)
    {
        if(v < min_value)
            throw std::out_of_range("coordinate below representable range");
        return static_cast<long>(v * scale - 0.5);
    }
    else
    {
        if(v > max_value)
            throw std::out_of_range("coordinate above representable range");
        return static_cast<long>(v * scale + 0.5);
    }
}
Note that the larger you make the scale, the higher your precision will be, but it also lowers the range. Effectively, this converts a floating-point number into a fixed-point number.
Lastly, you should be able to easily locate code that computes intersections between line segments using floating-point math, so I wonder why you specifically want to use Clipper.
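The reverse direction is just a division; as a sketch of the round trip (the from_long helper is my own addition, not from the original answer):
double from_long(long v)
{
    // scale as defined above. The division is exact, because scale is a
    // power of two and typical polygon coordinates stay well within the
    // 53-bit mantissa of a double.
    return static_cast<double>(v) / scale;
}
The round-trip error then comes from the rounding step alone: |x - from_long(to_long(x))| <= 0.5 / scale, i.e. about 0.0005 with scale = 1024.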

How can I get such relative coordinates even in negative number space?

Imagine a simple two dimensional integer grid. It is divided into chunks by a larger integer grid. The dimensions of the equal chunks are given.
To get the chunk from global coordinates, I can simply divide a coordinate by the chunk size and truncate the decimal places: chunk.x = global.x / chunksize.x. This works only for unsigned numbers, because negative coordinates won't be rounded in the right direction. Therefore I apply rounding downwards manually: chunk.x = (int)floor((float)global.x / chunksize.x). This works quite well, but here comes the other part.
I also want to calculate coordinates relative to the containing chunk from global coordinates. For unsigned numbers, I simply used the remainder: local.x = global.x % chunksize.x;. But that doesn't work for negative coordinates, since there the remainder comes out negative (integer division truncates toward zero) instead of counting up from the chunk's lower edge.
How can I calculate the local coordinates even in negative number space, without calculating the chunk first?
This
const int M = 100000; // big enough that global.x + M*chunksize.x never goes negative
//chunk.x = (global.x + M*chunksize.x) / chunksize.x - M;
local.x = (global.x + M*chunksize.x) % chunksize.x;
should be much, much faster than converting to and from floating point.
Or,
//chunk.x = global.x / chunksize.x;
local.x = global.x % chunksize.x;
if (local.x < 0) {
    //chunk.x--;
    local.x += chunksize.x;
}
For negative outcomes, add the chunk size to them (which makes them positive). And if you then take the modulo again, you get an expression that works equally well for positive and negative global.x:
local.x = ((global.x % chunksize.x) + chunksize.x) % chunksize.x;
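Putting chunk and local together, a small self-contained check; the split() helper and the sample values are my own illustration of the formulas above:
#include <iostream>

// Floor-style chunk index and always-non-negative local coordinate,
// using the double-modulo trick from above (chunksize must be > 0).
void split(int global, int chunksize, int& chunk, int& local)
{
    local = ((global % chunksize) + chunksize) % chunksize;
    chunk = (global - local) / chunksize; // exact: global - local is a multiple of chunksize
}

int main()
{
    int chunk, local;
    split(-1, 16, chunk, local);
    std::cout << chunk << ' ' << local << '\n'; // -1 15
    split(17, 16, chunk, local);
    std::cout << chunk << ' ' << local << '\n'; // 1 1
}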

Vector quantization to directional code words

I need to quantize my vectors and generate directional code words from 0 to 15. I implemented the following line of C++ to achieve that: it takes two points and calculates an atan() value from them. But it only returns values from 0 to 7; the other values never occur. It also sometimes returns very large numbers, like 42345. How can I modify this to return directional code words from 0 to 15?
double angle = abs(atan((acc.y - acc.lastY)/(acc.x - acc.lastX))/(20*3.14159/180));
That's what the std::atan2 function is for.
The tan function is periodic over just a half circle, so if you negate both coordinates, the expression in the argument comes out the same and you can't tell the two cases apart. You would have to first check the signs to see which half-plane you are in, and then add 180° when you are in the negative one. The std::atan2 function does all of that for you.
double angle = std::atan2(acc.y - acc.lastY, acc.x - acc.lastX) * (8 / PI);
It has the added benefit of actually working when acc.x == acc.lastX, while your expression will signal division by zero.
Additionally, the use of abs is wrong. atan2 gives you an angle between -π and π; if you want an angle between 0 and 2π, you need to write:
double angle = std::atan2(acc.y - acc.lastY, acc.x - acc.lastX); // keep it in radians
if(angle < 0)
    angle += 2 * PI;
return angle * (8 / PI); // convert to [0, 16)
With abs you are conflating the cases with opposite signs of y but the same x.
Additionally, if you want to round the values so that code word 0 represents directions along the x axis, slightly off to either side, you need to shift by half the interval width, and you have to do it before normalizing to the [0, 2π) range. You'd start with:
double angle = std::atan2(acc.y - acc.lastY, acc.x - acc.lastX) + PI/16;
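Assembled into one function (the function name, the PI constant, and the final & 15 wrap-around guard are my own sketch of the steps described above, not code from the original answer):
#include <cmath>

const double PI = 3.14159265358979323846;

// Map a displacement (dx, dy) to a directional code word in [0, 15],
// with code 0 centered on the positive x axis.
int direction_code(double dx, double dy)
{
    double angle = std::atan2(dy, dx) + PI / 16; // shift by half a bin width
    if (angle < 0)
        angle += 2 * PI;                         // normalize to [0, 2*PI)
    return static_cast<int>(angle * (8 / PI)) & 15; // 16 bins; guard the 2*PI edge
}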