Bilinear Interpolation - OSRM Rastersource - c++

I've got a question about bilinear interpolation in the OSRM-Project.
I understand the "normal" bilinear interpolation. Here is the picture from Wikipedia that illustrates it:
Now I'm trying to understand the bilinear interpolation which is used in the OSRM-Project for raster source data.
// Query raster source using bilinear interpolation
RasterDatum RasterSource::GetRasterInterpolate(const int lon, const int lat) const
{
    if (lon < xmin || lon > xmax || lat < ymin || lat > ymax)
    {
        return {};
    }

    const auto xthP = (lon - xmin) / xstep;
    const auto ythP = (ymax - lat) / ystep; // the raster texture uses a different coordinate
                                            // system with y pointing downwards

    const std::size_t top = static_cast<std::size_t>(fmax(floor(ythP), 0));
    const std::size_t bottom = static_cast<std::size_t>(fmin(ceil(ythP), height - 1));
    const std::size_t left = static_cast<std::size_t>(fmax(floor(xthP), 0));
    const std::size_t right = static_cast<std::size_t>(fmin(ceil(xthP), width - 1));

    // Calculate distances from corners for bilinear interpolation
    const float fromLeft = xthP - left; // this is the fraction part of xthP
    const float fromTop = ythP - top;   // this is the fraction part of ythP
    const float fromRight = 1 - fromLeft;
    const float fromBottom = 1 - fromTop;

    return {static_cast<std::int32_t>(raster_data(left, top) * (fromRight * fromBottom) +
                                      raster_data(right, top) * (fromLeft * fromBottom) +
                                      raster_data(left, bottom) * (fromRight * fromTop) +
                                      raster_data(right, bottom) * (fromLeft * fromTop))};
}
Original Code here
Can someone explain to me how the code works?
The input is SRTM data in ASCII format.
The variables height and width are defined as nrows and ncolumns.
The variables xstep and ystep are defined as:

    return (max - min) / (static_cast<float>(count) - 1)

where count is height for ystep and width for xstep; max and min are the corresponding coordinate bounds.
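(For example, a 1-degree SRTM3 tile has 1201 samples per row, so xstep = (xmax - xmin) / (1201 - 1) = 1/1200 of a degree per sample.)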
And another question:
Can I use the same code for data in TIFF format and for the whole world?

Horizontal pixel coordinates are in the range [0, width - 1]; similarly, vertical coordinates are in [0, height - 1] (the zero-indexing convention used in many languages, including C++).
The line

    const auto xthP = (lon - xmin) / xstep; // and likewise for ythP

converts the input image-space coordinates (lon, lat) into pixel coordinates. xstep is the width of each pixel in image-space.
Rounding this down (using floor) gives pixels intersected by the sample area on one side, and rounding up (ceil) gives the pixels on the other side. For the X-coordinate these give left and right.
The reason for using fmin and fmax is to clamp the coordinates so that they don't exceed the pixel coordinate range.
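To make the weights concrete, here is a minimal, self-contained sketch of the same scheme on a hypothetical 2x2 grid (the grid values and the query point are made-up illustration values, not OSRM's):

    #include <cstdio>

    int main()
    {
        // made-up 2x2 grid, indexed [row][col] with row 0 at the top
        const float grid[2][2] = {{10.f, 20.f}, {30.f, 40.f}};
        const float xthP = 0.25f, ythP = 0.75f; // query point in pixel coordinates
        const int left = 0, right = 1, top = 0, bottom = 1; // floor/ceil, already in range

        const float fromLeft = xthP - left, fromTop = ythP - top;
        const float fromRight = 1 - fromLeft, fromBottom = 1 - fromTop;

        const float value = grid[top][left] * (fromRight * fromBottom) +
                            grid[top][right] * (fromLeft * fromBottom) +
                            grid[bottom][left] * (fromRight * fromTop) +
                            grid[bottom][right] * (fromLeft * fromTop);
        std::printf("%g\n", value); // prints 27.5
        return 0;
    }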
EDIT: since you are trying to interpret this picture, I'll list the corresponding parts below:
Q11 = (left, top)
Q12 = (left, bottom), etc.
P = (xthP, ythP)
R1 = fromTop, R2 = fromBottom, etc.
A good start point would be http://www.cs.uu.nl/docs/vakken/gr/2011/Slides/06-texturing.pdf, slide 27. In future though, Google is your friend.

Related

Implement an Ellipse Structural Element

I want to implement the following using OpenCV (I'll post my attempt at the bottom of the post). I am aware that OpenCV has a function for something like this, but I want to try to write my own.
In an image (Mat) (the coordinate system is at the top left, since it is an image) of width width and height height, I want to display a filled ellipse with the following properties:
it should be centered at (width/2, height/2)
the image should be binary, so the points corresponding to the ellipse should have a value of 1 and others should be 0
the ellipse should be rotated by angle radians around the origin (or degrees, this does not matter all that much, I can convert)
ellipse: the semi-major axis parameter is a and the semi-minor axis parameter is b, and these two parameters also represent the size of these axes in the picture, so "no matter" the width and height, the ellipse should have a major axis of size 2*a and a minor axis of size 2*b
Ok, so I've found an equation similar to this (https://math.stackexchange.com/a/434482/403961) for my purpose. My code is as follows. It does seem to do pretty well on the rotation side, but, sadly, depending on the rotation angle, the SIZE (of the major axis; I am not sure about the minor) visibly increases/decreases, which is not normal, since I want it to have the same size, independent of the rotation angle.
NOTE: The biggest size is seemingly achieved when the angle is 45 or -45 degrees, and the smallest for angles like -90, 0, 90.
Code:
inline double sqr(double x)
{
    return x * x;
}

Mat ellipticalElement(double a, double b, double angle, int width, int height)
{
    // just to make sure I don't use some bad values for my parameters
    assert(2 * a < width);
    assert(2 * b < height);

    Mat element = Mat::zeros(height, width, CV_8UC1);
    Point center(width / 2, height / 2);

    for (int x = 0; x < width; x++)
        for (int y = 0; y < height; y++)
        {
            if (sqr((x - center.x) * cos(angle) - (y - center.y) * sin(angle)) / sqr(a) +
                    sqr((x - center.x) * sin(angle) - (y - center.y) * cos(angle)) / sqr(b) <= 1)
                element.at<uchar>(y, x) = 1;
        }
    return element;
}
A pesky typo sneaked into your inequality. The first summand must be
sqr((x - center.x) * cos(angle) + (y - center.y) * sin(angle)) / sqr(a)
Note the plus sign instead of minus.
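For reference, a sketch of the corrected loop body (only the sign changes; everything else is from the question's code):

    double dx = x - center.x;
    double dy = y - center.y;
    double u = dx * cos(angle) + dy * sin(angle); // plus, not minus
    double v = dx * sin(angle) - dy * cos(angle);
    if (sqr(u) / sqr(a) + sqr(v) / sqr(b) <= 1)
        element.at<uchar>(y, x) = 1;

With the plus sign, (u, v) is a proper rotation of (dx, dy) (u * u + v * v == dx * dx + dy * dy), so the axis lengths no longer depend on the angle.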

line-width for ellipse is not constant

I am drawing a hollow ellipse using OpenGL. I calculate the vertices in C++ code using the standard ellipse formula. In the fragment shader I just assign a color to each fragment. The ellipse that I see on the screen has a thinner line width on the sharper curves compared to where the curve is less sharp. So the question is: how do I make the line width consistent around the entire perimeter of the ellipse? Please see the image below:
C++ code :
std::vector<float> BCCircleHelper::GetCircleLine(float centerX, float centerY,
                                                 float radiusX, float radiusY,
                                                 float lineWidth, int32_t segmentCount)
{
    auto vertexCount = (segmentCount + 1) * 2;
    auto floatCount = vertexCount * 3;
    std::vector<float> array(floatCount);
    const std::vector<float>& data = GetCircleData(segmentCount);
    float halfWidth = lineWidth * 0.5f;
    for (int32_t i = 0; i < segmentCount + 1; ++i)
    {
        float sin = data[i * 2];
        float cos = data[i * 2 + 1];
        array[i * 6 + 0] = centerX + sin * (radiusX - halfWidth);
        array[i * 6 + 1] = centerY + cos * (radiusY - halfWidth);
        array[i * 6 + 3] = centerX + sin * (radiusX + halfWidth);
        array[i * 6 + 4] = centerY + cos * (radiusY + halfWidth);
        array[i * 6 + 2] = 0;
        array[i * 6 + 5] = 0;
    }
    return array; // plain return enables NRVO; std::move would inhibit copy elision
}
std::vector<float> BCCircleHelper::GetCircleData(int32_t segmentCount)
{
    // note: returning by value; the original returned a const reference
    // to this local vector, which dangles once the function returns
    int32_t floatCount = (segmentCount + 1) * 2;
    float segmentAngle = static_cast<float>(M_PI * 2) / segmentCount;
    std::vector<float> array(floatCount);
    for (int32_t i = 0; i < segmentCount + 1; ++i)
    {
        array[i * 2 + 0] = sin(segmentAngle * i);
        array[i * 2 + 1] = cos(segmentAngle * i);
    }
    return array;
}
Aiming for this:
The problem is likely that your fragments are basically line segments radiating from the center of the ellipse.
If you draw a line from the center of the ellipse through the ellipse you've drawn, at any point on the perimeter, you can probably convince yourself that the distance covered by that red line is in fact the width that you're after (roughly, since you're working at low spatial resolution and things are somewhat pixelated). But since this is an ellipse, that distance is not perpendicular to the path being traced. And that's the problem. This works great for circles, because a ray from the center is always perpendicular to the circle. But for these flattened ellipses, it's very oblique!
How to fix it? Can you draw circles at each point on the ellipse, instead of line segments?
If not, you might need to recalculate what it means to be that thick when measured at that oblique angle - it's no longer your line width; it may require some calculus and a bit more trigonometry.
Ok, so a vector tangent to the curve described by
c(i) = (a * cos(i), b * sin(i))
is
c'(i) = (- a * sin(i), b * cos(i))
(note that this is not a unit vector). The perpendicular to this is
c'perp = (b * cos(i), a * sin(i))
You should be able to convince yourself that this is true by computing their dot product.
Let's calculate the magnitude of c'perp, and call it k for now:
k = sqrt(b * b * cos(i) * cos(i) + a * a * sin(i) * sin(i))
So we go out to a point on the ellipse (c(i)) and we want to draw a segment that's perpendicular to the curve - that means we want to add on a scaled version of c'perp. The scaling is to divide by the magnitude (k), and then multiply by half your line width. So the two end points are:
P1 = c(i) + halfWidth * c'perp / k
P2 = c(i) - halfWidth * c'perp / k
I haven't tested this, but I'm pretty sure it's close. Here's the geometry you're working with:
--
Edit:
So the values for P1 and P2 that I give above are end-points of a line segment that's perpendicular to the ellipse. If you really wanted to continue with just altering the radiusX and radiusY values the way you were doing, you could do this. You just need to figure out what the 'Not w' length is at each angle, and use half of this value in place of halfWidth in radiusX +/- halfWidth and radiusY +/- halfWidth. I leave that bit of geometry as an exercise for the reader.
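For what it's worth, here is an untested sketch of vertex generation that offsets along the true normal, following the math above (the function name is mine; it mirrors the question's GetCircleLine):

    #include <cmath>
    #include <cstdint>
    #include <vector>

    std::vector<float> GetEllipseStrip(float centerX, float centerY,
                                       float radiusX, float radiusY,
                                       float lineWidth, int32_t segmentCount)
    {
        const float kPi = 3.14159265f;
        const float halfWidth = lineWidth * 0.5f;
        const float segmentAngle = 2.0f * kPi / segmentCount;
        std::vector<float> array((segmentCount + 1) * 6);
        for (int32_t i = 0; i < segmentCount + 1; ++i)
        {
            const float t = segmentAngle * i;
            // point on the ellipse: c(i) = (a*cos, b*sin), with a = radiusX, b = radiusY
            const float px = centerX + radiusX * cosf(t);
            const float py = centerY + radiusY * sinf(t);
            // perpendicular to the curve: c'perp = (b*cos, a*sin), normalized by its magnitude k
            float nx = radiusY * cosf(t);
            float ny = radiusX * sinf(t);
            const float k = sqrtf(nx * nx + ny * ny);
            nx /= k;
            ny /= k;
            // inner and outer strip edges: the P2 and P1 from above
            array[i * 6 + 0] = px - halfWidth * nx;
            array[i * 6 + 1] = py - halfWidth * ny;
            array[i * 6 + 2] = 0.0f;
            array[i * 6 + 3] = px + halfWidth * nx;
            array[i * 6 + 4] = py + halfWidth * ny;
            array[i * 6 + 5] = 0.0f;
        }
        return array;
    }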

Extracting subimage with a specified aspect ratio

I need to extract an object from an image. I know the location of the object inside the image, i.e. the region where the object is located: this region is provided as a pair of coordinates [xmin, ymin] and [xmax, ymax].
I would like to modify the coordinates of this region (thus increasing the height and width in a suitable way) in order to extract a subimage with a specified aspect ratio. So, we have the following constraints:
in order to avoid cutting the object incorrectly, the width and height of the region must not be reduced;
bounds checking: the adaptation of the region size must ensure that the new coordinates are inside the image;
the width/height ratio of the subimage should be approximately equal to the specified aspect ratio.
How to solve this problem?
UPDATE: one possible solution
The solution to my problem is mainly the algorithm proposed by Mark in this answer. The result of this algorithm is a new region, wider or higher than the original, whose aspect ratio is very close to the one specified, without moving the center of the original region (when this is feasible, depending on the position of the region within the original image). The region obtained from this algorithm can be further processed by the following algorithm to bring the aspect ratio even closer to the specified one.
for left=0:(xmin-1),                      // it tries all possible combinations
    for right=0:(imgWidth-xmax),          // of increments of the region size
        for top=0:(ymin-1),               // along the four directions
            for bottom=0:(imgHeight-ymax),
                x1 = xmin - left;
                x2 = xmax + right;
                y1 = ymin - top;
                y2 = ymax + bottom;
                newRatio = (x2 - x1) / (y2 - y1);
                if (newRatio == ratio)
                    rect = [x1 y1 x2 y2];
                    return;
                end
            end
        end
    end
end
Example... An image with 976 rows and 1239 columns; an initial region [xmin ymin xmax ymax] = [570 174 959 957].
First algorithm (main processing).
Input: the initial region and the image size.
Output: it produces new region r1 = [568 174 960 957],
width = 392 and height = 783, so the aspect ratio is equal to 0.5006.
Second algorithm (post-processing).
Input: the region r1.
Output: new region r2 = [568 174 960 958],
width = 392 and height = 784, so the aspect ratio is equal to 0.5.
obj_width = xmax - xmin
obj_height = ymax - ymin
if (obj_width / obj_height > ratio)
{
    height_adjustment = ((obj_width / ratio) - (ymax - ymin)) / 2;
    ymin -= height_adjustment;
    ymax += height_adjustment;
    if (ymin < 0)
    {
        ymax -= ymin;
        ymin = 0;
    }
    if (ymax >= image_height)
        ymax = image_height - 1;
}
else if (obj_width / obj_height < ratio)
{
    width_adjustment = ((obj_height * ratio) - (xmax - xmin)) / 2;
    xmin -= width_adjustment;
    xmax += width_adjustment;
    if (xmin < 0)
    {
        xmax -= xmin;
        xmin = 0;
    }
    if (xmax >= image_width)
        xmax = image_width - 1;
}
Let's start with your region: a w x h rectangle centered on a point p. You want to extend this region to have the aspect ratio r. The idea is to extend the width or the height:
(trivial case) If w / h == r, then return.
Compute w' = h x r.
If w' > w, then the resulting region is of width w', height h and center p.
Else, the resulting region is of width w, height h' = w / r, and center p.
Move the center p to follow the edges of the image if the region has to be clipped. For example, if the resulting region's upper-left point is outside of the image: let u be the upper-left point of the resulting region and d = (min(u.x, 0), min(u.y, 0)); then the final center is p' = p - d. The lower-right part of the region is handled similarly.
Clip the resulting region to the image.
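A minimal sketch of those steps (Rect is a hypothetical struct; rounding to integer pixels is glossed over):

    struct Rect { float x, y, w, h; }; // (x, y) = upper-left corner

    Rect expandToRatio(Rect rg, float r, float imgW, float imgH)
    {
        // grow the width or the height, keeping the center p fixed
        const float cx = rg.x + rg.w / 2, cy = rg.y + rg.h / 2;
        const float w2 = rg.h * r;  // w' = h x r
        if (w2 > rg.w)
            rg.w = w2;              // widen to w'
        else
            rg.h = rg.w / r;        // or heighten to h' = w / r
        rg.x = cx - rg.w / 2;
        rg.y = cy - rg.h / 2;
        // move the center to follow the image edges
        if (rg.x < 0) rg.x = 0;
        if (rg.y < 0) rg.y = 0;
        if (rg.x + rg.w > imgW) rg.x = imgW - rg.w;
        if (rg.y + rg.h > imgH) rg.y = imgH - rg.h;
        // clip whatever still overflows (region larger than the image)
        if (rg.x < 0) { rg.x = 0; rg.w = imgW; }
        if (rg.y < 0) { rg.y = 0; rg.h = imgH; }
        return rg;
    }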

Line-Circle Algorithm not quite working as expected

First, see:
https://math.stackexchange.com/questions/105180/positioning-a-widget-involving-intersection-of-line-and-a-circle
I have an algorithm that solves for the height of an object given a circle and an offset.
It sort of works but the height is always off:
Here is the formula:
and here is a sketch of what it is supposed to do:
And here is sample output from the application:
In the formula, offset = 10 and widthRatio is 3. This is why the leading factor is (1 / 10): the denominator is (3 * 3) + 1 = 10.
The problem, as you can see is the height of the blue rectangle is not correct. I set the bottom left offsets to be the desired offset (in this case 10) so you can see the bottom left corner is correct. The top right corner is wrong because from the top right corner, I should only have to go 10 pixels until I touch the circle.
The code I use to set the size and location is:
void DataWidgetsHandler::resize(int w, int h)
{
    int tabSz = getProportions()->getTableSize() * getProportions()->getScale();
    int r = tabSz / 2;
    agui::Point tabCenter = agui::Point(
        w * getProportions()->getTableOffset().getX(),
        h * getProportions()->getTableOffset().getY());

    float widthRatio = 3.0f;
    int offset = 10;
    int height = solveHeight(offset, widthRatio, tabCenter.getX(), tabCenter.getY(), r);
    int width = height * widthRatio;
    int borderMargin = height;

    m_frame->setLocation(offset, h - height - offset);
    m_frame->setSize(width, height);
    m_borderLayout->setBorderMargins(0, 0, borderMargin, borderMargin);
}
I can assert that the table radius and table center location are correct.
This is my implementation of the formula:
int DataWidgetsHandler::solveHeight(int offset, float widthRatio, float h, float k, float r) const
{
    float denom = (widthRatio * widthRatio) + 1.0f;
    float rSq = denom * r * r;
    float eq = widthRatio * offset - offset - offset + h - (widthRatio * k);
    eq *= eq;
    return (1.0f / denom) *
           ((widthRatio * h) + k - offset - (widthRatio * (offset + offset)) - sqrt(rSq - eq));
}
It uses the quadratic formula to find what the height should be so that the distance from the top right of the rectangle to the circle, and the bottom-left and top-left offsets, are all equal to offset.
Is there something wrong with the formula or implementation? The problem is the height is never long enough.
Thanks
Well, here's my solution, which looks to resemble your solveHeight function. There might be some arithmetic errors in the below, but the method is sound.
You can think in terms of matching the coordinates at the point of the circle across from the rectangle (P).
Let o_x, o_y be the lower-left corner offset distances, w and h the width and height of the rectangle, w_r the width ratio, dx the desired distance between the top-right corner of the rectangle and the circle (moving horizontally), c_x and c_y the coordinates of the circle's centre, theta the angle, and r the circle radius.
Labelling it is half the work! Simply write down the coordinates of the point P:
P_x = o_x + w + dx = c_x + r cos(theta)
P_y = o_y + h = c_y + r sin(theta)
and we know w = w_r * h.
To simplify the arithmetic, let's collect some of the constant terms, and let X = o_x + dx - c_x and Y = o_y - c_y. Then we have
X + w_r * h = r cos(theta)
Y + h = r sin(theta)
Squaring and summing gives a quadratic in h:
(w_r^2 + 1) * h^2 + 2 (X*w_r + Y) h + (X^2+Y^2-r^2) == 0
If you compare this with your effective quadratic, then as long as we made different mistakes :-), you might be able to figure out what's going on.
To be explicit: we can solve this using the quadratic formula, setting
a = (w_r^2 + 1)
b = 2 (X*w_r + Y)
c = (X^2+Y^2-r^2)
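A sketch of that in code (untested; it assumes the question's setup where o_x = o_y = offset and dx = offset):

    #include <cmath>

    float solveHeightQuadratic(float offset, float w_r, float c_x, float c_y, float r)
    {
        const float X = offset + offset - c_x; // o_x + dx - c_x
        const float Y = offset - c_y;          // o_y - c_y
        const float a = w_r * w_r + 1.0f;
        const float b = 2.0f * (X * w_r + Y);
        const float c = X * X + Y * Y - r * r;
        const float disc = b * b - 4.0f * a * c;
        if (disc < 0.0f)
            return -1.0f; // the rectangle cannot reach the circle
        // two roots; which one is the wanted height depends on which side of
        // the circle the rectangle sits on
        return (-b - std::sqrt(disc)) / (2.0f * a);
    }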

Creating a linear gradient in 2D array

I have a 2D bitmap-like array of let's say 500*500 values. I'm trying to create a linear gradient on the array, so the resulting bitmap would look something like this (in grayscale):
(source: showandtell-graphics.com)
The input would be the array to fill, two points (like the starting and ending point for the Gradient tool in Photoshop/GIMP) and the range of values which would be used.
My current best result is this:
(my current result: http://img222.imageshack.us/img222/1733/gradientfe3.png)
...which is nowhere near what I would like to achieve. It looks more like a radial gradient.
What is the simplest way to create such a gradient? I'm going to implement it in C++, but I would like some general algorithm.
This is really a math question, so it might be debatable whether it really "belongs" on Stack Overflow, but anyway: you need to project the coordinates of each point in the image onto the axis of your gradient and use that coordinate to determine the color.
Mathematically, what I mean is:
Say your starting point is (x1, y1) and your ending point is (x2, y2)
Compute A = (x2 - x1) and B = (y2 - y1)
Calculate C1 = A * x1 + B * y1 for the starting point and C2 = A * x2 + B * y2 for the ending point (C2 should be larger than C1)
For each point in the image, calculate C = A * x + B * y
If C <= C1, use the starting color; if C >= C2, use the ending color; otherwise, use a weighted average:
(start_color * (C2 - C) + end_color * (C - C1))/(C2 - C1)
I did some quick tests to check that this basically worked.
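For example, with starting point (0, 0) and ending point (100, 0): A = 100, B = 0, so C1 = 0 and C2 = 10000. A point such as (25, 40) gives C = 2500, so its color is (start_color * 7500 + end_color * 2500) / 10000, i.e. a quarter of the way into the gradient regardless of y, as expected for a horizontal gradient.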
In your example image, it looks like you have a radial gradient. Here's my impromptu math explanation for the steps you'll need. Sorry for the math; the other answers are better in terms of implementation.
Define a linear function (like y = x + 1) with the domain (i.e. x) going from the colour you want to start with to the colour you want to end with. You can think of this in terms of a range within 0x0 to 0xFFFFFF (for 24-bit colour). If you want to handle things like brightness, you'll have to do some tricks with the range (i.e. the y value).
Next you need to map a vector across the matrix you have, as this defines the direction in which the colours will change. Also, the colour values defined by your linear function will be assigned at each point along the vector. The start and end points of the vector also define the min and max of the domain in step 1. You can think of the vector as one line of your gradient.
For each cell in the matrix, colours can be assigned a value from the vector where a perpendicular line from the cell intersects the vector. See the diagram below, where c is the position of the cell and . is the point of intersection. If you pretend that the colour at . is red, then that's what you'll assign to the cell.
|
c
|
|
Vect:____.______________
|
|
I'll just post my solution.
int ColourAt(int x, int y)
{
    float imageX = (float)x / (float)BUFFER_WIDTH;
    float imageY = (float)y / (float)BUFFER_WIDTH;

    float xS = xStart / (float)BUFFER_WIDTH;
    float yS = yStart / (float)BUFFER_WIDTH;
    float xE = xEnd / (float)BUFFER_WIDTH;
    float yE = yEnd / (float)BUFFER_WIDTH;
    float xD = xE - xS;
    float yD = yE - yS;

    float mod = 1.0f / (xD * xD + yD * yD);

    float gradPos = ((imageX - xS) * xD + (imageY - yS) * yD) * mod;

    float mag = gradPos > 0 ? (gradPos < 1.0f ? gradPos : 1.0f) : 0.0f;

    int colour = (int)(255 * mag);
    colour |= (colour << 16) + (colour << 8);
    return colour;
}
For speed-ups, cache the derived "direction" values (hint: premultiply xD and yD by mod).
There are two parts to this problem.
Given two colors A and B and some percentage p, determine what color lies p 'percent of the way' from A to B.
Given a point on a plane, find the orthogonal projection of that point onto a given line.
The given line in part 2 is your gradient line. Given any point P, project it onto the gradient line. Let's say its projection is R. Then figure out how far R is from the starting point of your gradient segment, as a percentage of the length of the gradient segment. Use this percentage in your function from part 1 above. That's the color P should be.
Note that, contrary to what other people have said, you can't just view your colors as regular numbers in your function from part 1. That will almost certainly not do what you want. What you do depends on the color space you are using. If you want an RGB gradient, then you have to look at the red, green, and blue color components separately.
For example, if you want a color "halfway between" pure red and blue, then in hex notation you are dealing with
ff 00 00
and
00 00 ff
Probably the color you want is something like
80 00 80
which is a nice purple color. You have to average out each color component separately. If you try to just average the hex numbers 0xff0000 and 0x0000ff directly, you get 0x7F807F, which is a medium gray. I'm guessing this explains at least part of the problem with your picture above.
Alternatively if you are in the HSV color space, you may want to adjust the hue component only, and leave the others as they are.
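For instance, a minimal sketch of part 1 for packed 24-bit RGB (a hypothetical helper, not code from this thread):

    #include <cstdint>

    // mix two 0xRRGGBB colours channel by channel; p in [0, 1]
    uint32_t mixRgb(uint32_t a, uint32_t b, float p)
    {
        auto mix = [p](uint32_t ca, uint32_t cb) {
            return static_cast<uint32_t>(ca * (1.0f - p) + cb * p + 0.5f);
        };
        const uint32_t red   = mix((a >> 16) & 0xFF, (b >> 16) & 0xFF);
        const uint32_t green = mix((a >> 8) & 0xFF, (b >> 8) & 0xFF);
        const uint32_t blue  = mix(a & 0xFF, b & 0xFF);
        return (red << 16) | (green << 8) | blue;
    }

    // mixRgb(0xFF0000, 0x0000FF, 0.5f) == 0x800080, the purple from the example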
void Image::fillGradient(const SColor& colorA, const SColor& colorB,
                         const Point2i& from, const Point2i& to)
{
    Point2f dir = to - from;
    if (to == from)
        dir.x = width - 1; // horizontal gradient
    dir *= 1.0f / dir.lengthQ2(); // 1.0 / (dir.x * dir.x + dir.y * dir.y)

    float default_kx = float(-from.x) * dir.x;
    float kx = default_kx;
    float ky = float(-from.y) * dir.y;

    uint8_t* cur_pixel = base; // array of rgba pixels
    for (int32_t h = 0; h < height; h++)
    {
        for (int32_t w = 0; w < width; w++)
        {
            float k = std::clamp(kx + ky, 0.0f, 1.0f);
            *(cur_pixel++) = colorA.r * (1.0 - k) + colorB.r * k;
            *(cur_pixel++) = colorA.g * (1.0 - k) + colorB.g * k;
            *(cur_pixel++) = colorA.b * (1.0 - k) + colorB.b * k;
            *(cur_pixel++) = colorA.a * (1.0 - k) + colorB.a * k;
            kx += dir.x;
        }
        kx = default_kx;
        ky += dir.y;
    }
}