What did I do wrong in my image rotation algorithm? - C++

I just want to rotate image #1 and write the result into memory #2 (#1 is Body, #2 is TurnBody), rotating around the center of the image.
ki and kj are just (i - radius) and (j - radius), used in the rotation. SIN and COS are just the sine and cosine of the rotation angle.
radius is just half of the image side (my image is square).
6.28 = pi * 2.
Example of what I need to get:
Example of what I have:
(I rotate not the whole image, just a small square in the center, and add it to the big screen image.)
TurnAngle is just my global value (it shows the angle the image is currently rotated by).
void Turn(double angle, int radius, COLORREF* Body, COLORREF* TurnBody)
{
    if (abs(TurnAngle += angle) > 6.28)
    {
        TurnAngle = 0;
    }
    int i, ki, j, kj;
    const double SIN = sin(TurnAngle), COS = cos(TurnAngle);
    for (i = 0, ki = -radius; i < 2 * radius; i++, ki++)
    {
        for (j = 0, kj = -radius; j < 2 * radius; j++, kj++)
        {
            if (Body[i * 2 * radius + j]) // if Pixel not black
            {
                TurnBody[static_cast<int>(kj * COS - ki * SIN + radius + (ki * COS + kj * SIN + radius) * 2 * radius)] = Body[i * 2 * radius + j];
            }
        }
    }
}
This works now; something was wrong with the parentheses or the double values, I really don't know... Thank you guys:
this->TurnBody[(int)(kj * COS - ki * SIN) + this->radius + ((int)(ki * COS + kj * SIN) + this->radius) * 2 * this->radius] = this->Body[i * 2 * this->radius + j];

I think this is wrong:
TurnBody[static_cast<int>(kj * COS - ki * SIN + radius + (ki * COS + kj * SIN + radius) * 2 * radius)] = Body[i * 2 * radius + j];
The cast is applied to the whole index expression, so the fractional part of the rotated x coordinate gets scaled by 2 * radius and mixed into the row offset. I think it should be more like this, casting each coordinate separately:
TurnBody[(int)(kj * COS - ki * SIN) + radius + ((int)(ki * COS + kj * SIN) + radius) * 2 * radius] = Body[i * 2 * radius + j];
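To make the fix concrete, here is a minimal sketch of the whole loop with per-coordinate casts. It assumes the same layout as the question (square (2 * radius) x (2 * radius) row-major COLORREF buffers, COLORREF coming from &lt;windows.h&gt;) and adds a bounds check, since rotated coordinates can land outside the square:
// Sketch: rotate src into dst around the image center.
// Assumes both buffers are (2 * radius) x (2 * radius), row-major.
void TurnFixed(double angleRad, int radius, const COLORREF* src, COLORREF* dst)
{
    const int side = 2 * radius;
    const double SIN = sin(angleRad), COS = cos(angleRad);
    for (int i = 0; i < side; i++)
    {
        for (int j = 0; j < side; j++)
        {
            const int ki = i - radius, kj = j - radius;
            // Truncate x and y separately, THEN build the linear index;
            // casting the whole expression lets the fractional part of x
            // leak into the row offset.
            const int x = static_cast<int>(kj * COS - ki * SIN) + radius;
            const int y = static_cast<int>(ki * COS + kj * SIN) + radius;
            if (x >= 0 && x < side && y >= 0 && y < side && src[i * side + j])
                dst[y * side + x] = src[i * side + j];
        }
    }
}
Truncating each coordinate before combining them is what keeps the fractional part of the rotated x coordinate from scrambling the row offset, which is exactly the difference between the broken line and the working one above.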

Related

Cubic Interpolation with the official formula fails

I am trying to implement the Cubic Interpolation method using the following formula with a = -0.5, as usual.
My Linear Interpolation and Nearest Neighbor interpolation are working great, but for some reason the Cubic Interpolation fails on white pixels, sometimes turning them turquoise and sometimes messing up other colors.
For example, using rotation (NOTE: please look carefully at the right image and you will notice the problems):
Another example, with many more black pixels. It almost seems to work perfectly, but look at the dog's tongue (strong white pixels turn turquoise again):
You can see that my implementation of the Linear Interpolation works great:
Since the actual rotation worked, I think I have a small mistake in the code that I did not notice, or maybe it's a numeric error or a double / float error.
It is important to note that I read the image normally and store the destination image as follows:
cv::Mat img = cv::imread("../dogfails.jpeg");
cv::Mat rotatedImageCubic(img.rows,img.cols,CV_8UC3);
Clarifications:
Inside my cubic interpolation function, srcPoint (newX and newY) is the "landing point" from the inverse transformation.
In my inverse transformations I am not using matrix multiplication with the pixels; right now I am just using the formulas for rotation, which might be relevant for the "numerical errors". For example:
rotatedX = x * cos(angle * toRadian) + y * sin(angle * toRadian);
rotatedY = x * (-sin(angle * toRadian)) + y * cos(angle * toRadian);
Here is my code for the Cubic Interpolation:
// Keys cubic convolution kernel; d is the distance from the sample point.
double cubicEquationSolver(double d, double a) {
    d = abs(d);
    if (0.0 <= d && d <= 1.0) {
        return (a + 2.0) * pow(d, 3.0) - (a + 3.0) * pow(d, 2.0) + 1.0;
    }
    else if (1 < d && d <= 2) {
        return a * pow(d, 3.0) - 5.0 * a * pow(d, 2.0) + 8.0 * a * d - 4.0 * a;
    }
    else
        return 0.0;
}
void Cubic_Interpolation_Helper(const cv::Mat& src, cv::Mat& dst, const cv::Point2d& srcPoint, cv::Point2i& dstPixel) {
    double newX = srcPoint.x;
    double newY = srcPoint.y;
    double dx = abs(newX - round(newX));
    double dy = abs(newY - round(newY));
    double sumCubicBValue = 0;
    double sumCubicGValue = 0;
    double sumCubicRValue = 0;
    double sumCubicGrayValue = 0;
    double uX = 0;
    double uY = 0;
    if (floor(newX) - 1 < 0 || floor(newX) + 2 > src.cols - 1 || floor(newY) < 0 || floor(newY) > src.rows - 1) {
        if (dst.channels() > 1)
            dst.at<cv::Vec3b>(dstPixel) = cv::Vec3b(0, 0, 0);
        else
            dst.at<uchar>(dstPixel) = 0;
    }
    else {
        for (int cNeighbor = -1; cNeighbor <= 2; cNeighbor++) {
            for (int rNeighbor = -1; rNeighbor <= 2; rNeighbor++) {
                uX = cubicEquationSolver(rNeighbor + dx, -0.5);
                uY = cubicEquationSolver(cNeighbor + dy, -0.5);
                if (src.channels() > 1) {
                    sumCubicBValue += (double) src.at<cv::Vec3b>(cv::Point2i(round(newX) + rNeighbor, cNeighbor + round(newY)))[0] * uX * uY;
                    sumCubicGValue += (double) src.at<cv::Vec3b>(cv::Point2i(round(newX) + rNeighbor, cNeighbor + round(newY)))[1] * uX * uY;
                    sumCubicRValue += (double) src.at<cv::Vec3b>(cv::Point2i(round(newX) + rNeighbor, cNeighbor + round(newY)))[2] * uX * uY;
                } else {
                    sumCubicGrayValue += (double) src.at<uchar>(cv::Point2i(round(newX) + rNeighbor, cNeighbor + round(newY))) * uX * uY;
                }
            }
        }
        if (dst.channels() > 1)
            dst.at<cv::Vec3b>(dstPixel) = cv::Vec3b((int) round(sumCubicBValue), (int) round(sumCubicGValue), (int) round(sumCubicRValue));
        else
            dst.at<uchar>(dstPixel) = sumCubicGrayValue;
    }
}
I hope someone here will be able to help me. Thanks!
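One thing worth checking, offered as a guess rather than a confirmed diagnosis: with a = -0.5 the cubic kernel has negative lobes, so near strong edges the weighted sum can overshoot the [0, 255] range. Writing an overshot double into a uchar channel wraps around, which would turn a saturated white pixel turquoise when only one channel overflows. A minimal sketch of clamping the write-back with OpenCV's saturate_cast:
// Sketch: clamp each channel before writing it back, so overshoot
// from the negative kernel lobes saturates instead of wrapping.
dst.at<cv::Vec3b>(dstPixel) = cv::Vec3b(
    cv::saturate_cast<uchar>(sumCubicBValue),
    cv::saturate_cast<uchar>(sumCubicGValue),
    cv::saturate_cast<uchar>(sumCubicRValue));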

Draw satellite coverage zone on equirectangular projection

I need to draw the borders of a satellite's observation zone on an equirectangular projection. I found these formulas (1) and figure:
sin(fi) = cos(alpha) * sin(fiSat) - sin(alpha) * sin(Beta) * cos(fiSat);
sin(lambda) = (cos(alpha) * cos(fiSat) * sin(lambdaSat)) / cos(asin(sin(fi))) +
              (sin(alpha) * sin(Beta) * sin(fiSat) * sin(lambdaSat)) / cos(asin(sin(fi))) -
              (sin(alpha) * cos(Beta) * cos(lambdaSat)) / cos(asin(sin(fi)));
cos(lambda) = (cos(alpha) * cos(fiSat) * cos(lambdaSat)) / cos(asin(sin(fi))) +
              (sin(alpha) * sin(Beta) * sin(fiSat) * cos(lambdaSat)) / cos(asin(sin(fi))) -
              (sin(alpha) * cos(Beta) * sin(lambdaSat)) / cos(asin(sin(fi)));
Cross-sections of the Earth in various planes:
And the system of equations (2), with a figure:
if sin(lambda) > 0, cos(lambda) > 0 then lambda = asin(sin(lambda));
if sin(lambda) > 0, cos(lambda) < 0 then lambda = 180 - asin(sin(lambda));
if sin(lambda) < 0, cos(lambda) < 0 then lambda = 180 - asin(sin(lambda));
if sin(lambda) < 0, cos(lambda) > 0 then lambda = asin(sin(lambda));
Scheme of reference angles for the longitude of the Earth:
Where: alpha – polar angle;
fiSat, lambdaSat – latitude and longitude of the satellite;
Beta – angle which changes from 0 to 2*Pi and helps to draw the observation zone;
fi, lambda – latitude and longitude of point B on the border of the observation zone.
I evaluate both formulas (1) and (2) in a loop from 0 to 2*Pi to draw the border of the observation zone, but I am not quite sure about the system of equations (2).
Inside the intervals [-180;-90], [-90;90], [90;180] the zone is drawn correctly.
Center at -35;45:
Center at 120;60:
Center at -120;-25:
But near the boundary between -90 and 90 degrees it gets messy:
Center at -95;-50:
Center at 95;30:
Can you help me with formulas (1) and (2), or write other ones?
double deltaB = 1.0 * M_PI / 180;
observerZone.clear();
for (double Beta = 0.0; Beta <= (M_PI * 2); Beta += deltaB) {
    double sinFi = cos(alpha) * sin(fiSat) - sin(alpha) * sin(Beta) * cos(fiSat);
    double sinLambda = (cos(alpha) * cos(fiSat) * sin(lambdaSat)) / cos(asin(sinFi)) +
                       (sin(alpha) * sin(Beta) * sin(fiSat) * sin(lambdaSat)) / cos(asin(sinFi)) -
                       (sin(alpha) * cos(Beta) * cos(lambdaSat)) / cos(asin(sinFi));
    double cosLambda = (cos(alpha) * cos(fiSat) * cos(lambdaSat)) / cos(asin(sinFi)) +
                       (sin(alpha) * sin(Beta) * sin(fiSat) * cos(lambdaSat)) / cos(asin(sinFi)) -
                       (sin(alpha) * cos(Beta) * sin(lambdaSat)) / cos(asin(sinFi));
    if (sinLambda > 0) {
        if (cosLambda > 0) {
            sinLambda = asin(sinLambda);
        } else {
            sinLambda = M_PI - asin(sinLambda);
        }
    } else if (cosLambda > 0) {
        sinLambda = asin(sinLambda);
    } else {
        sinLambda = -M_PI - asin(sinLambda);
    }
    sinFi = asin(sinFi);
    Point point;
    point.latitude = qRadiansToDegrees(sinFi);
    point.longitude = qRadiansToDegrees(sinLambda);
    observerZone.push_back(point);
}
I solved my problem. In equation (1), when calculating cosLambda, it should be + instead of -.
double cosLambda = (cos(alpha) * cos(fiSat) * cos(lambdaSat)) / cos(asin(sinFi)) +
                   (sin(alpha) * sin(Beta) * sin(fiSat) * cos(lambdaSat)) / cos(asin(sinFi)) +
                   (sin(alpha) * cos(Beta) * sin(lambdaSat)) / cos(asin(sinFi));
Sorry for disturbing.
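A side note on system (2): the four sign cases can be replaced by the two-argument arctangent, which resolves the quadrant from the signs of its inputs by itself. A minimal sketch of the end of the loop body, assuming sinLambda, cosLambda, sinFi and Point are as in the code above:
// atan2 picks the quadrant from the signs of its two arguments,
// replacing the four-case system (2).
double lambda = atan2(sinLambda, cosLambda); // in (-Pi, Pi]
double fi = asin(sinFi);                     // in [-Pi/2, Pi/2]
Point point;
point.latitude = qRadiansToDegrees(fi);
point.longitude = qRadiansToDegrees(lambda);
observerZone.push_back(point);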

How does this lighting calculation work?

I have this piece of code that is responsible for lighting a pyramid.
float Geometric3D::calculateLight(int vert1, int vert2, int vert3) {
    float ax = tabX[vert2] - tabX[vert1];
    float ay = tabY[vert2] - tabY[vert1];
    float az = tabZ[vert2] - tabZ[vert1];
    float bx = tabX[vert3] - tabX[vert1];
    float by = tabY[vert3] - tabY[vert1];
    float bz = tabZ[vert3] - tabZ[vert1];
    float Nx = (ay * bz) - (az * by);
    float Ny = (az * bx) - (ax * bz);
    float Nz = (ax * by) - (ay * bx);
    float Lx = -300.0f;
    float Ly = -300.0f;
    float Lz = -1000.0f;
    float lenN = sqrtf((Nx * Nx) + (Ny * Ny) + (Nz * Nz));
    float lenL = sqrtf((Lx * Lx) + (Ly * Ly) + (Lz * Lz));
    float res = ((Nx * Lx) + (Ny * Ly) + (Nz * Lz)) / (lenN * lenL);
    if (res < 0.0f)
        res = -res;
    return res;
}
I cannot understand the calculations at the end. Can someone explain the maths behind them? I know that the program first calculates two edge vectors of the plane to compute its normal (which becomes vector N). Vector L stands for the lighting, but what happens next? Why do we calculate the lengths of the normal and the light vector and then divide by their product?
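For reference, the last few statements are the standard dot-product identity for the angle between two vectors:

$$\cos\theta = \frac{N \cdot L}{\|N\|\,\|L\|} = \frac{N_x L_x + N_y L_y + N_z L_z}{\sqrt{N_x^2 + N_y^2 + N_z^2}\,\sqrt{L_x^2 + L_y^2 + L_z^2}}$$

Dividing the dot product by the two lengths normalizes both vectors, so res depends only on their relative orientation: it is the cosine of the angle between the face normal and the light direction, a Lambert-style intensity. The final abs makes front-facing and back-facing triangles receive the same lighting.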

Half of my ellipse drawn in the wrong place

Here is the code for an oval drawing method I am working on. I am applying the Bresenham method to plot its co-ordinates, and taking advantage of the ellipse's symmetrical properties to draw the same pixel in four different places.
void cRenderClass::plotEllipse(int xCentre, int yCentre, int width, int height, float angle, float xScale, float yScale)
{
    if ((height == width) && (abs(xScale - yScale) < 0.005))
        plotCircle(xCentre, yCentre, width, xScale);

    std::vector<std::vector<float>> rotate;
    if (angle > 360.0f)
    {
        angle -= 180.0f;
    }
    rotate = maths.rotateMatrix(angle, 'z');
    //rotate[0][0] = cos(angle)
    //rotate[0][1] = sin(angle)
    float theta = atan2(-height * rotate[0][1], width * rotate[0][0]);
    if (angle > 90.0f && angle < 180.0f)
    {
        theta += PI;
    }
    //add scaling in at a later date
    float xShear = (width * (cos(theta) * rotate[0][0])) - (height * (sin(theta) * rotate[0][1]));
    float yShear = (width * (cos(theta) * rotate[0][1])) + (height * (sin(theta) * rotate[0][0]));
    float widthAxis = abs(sqrt(((rotate[0][0] * width) * (rotate[0][0] * width)) + ((rotate[0][1] * height) * (rotate[0][1] * height))));
    float heightAxis = (width * height) / widthAxis;
    int aSquared = widthAxis * widthAxis;
    int fourASquared = 4 * aSquared;
    int bSquared = heightAxis * heightAxis;
    int fourBSquared = 4 * bSquared;

    // first region of the midpoint algorithm
    x0 = 0;
    y0 = heightAxis;
    int sigma = (bSquared * 2) + (aSquared * (1 - (2 * heightAxis)));
    while ((bSquared * x0) <= (aSquared * y0))
    {
        drawPixel(xCentre + x0, yCentre + ((floor((x0 * yShear) / xShear)) + y0));
        drawPixel(xCentre - x0, yCentre + ((floor((x0 * yShear) / xShear)) + y0));
        drawPixel(xCentre + x0, yCentre + ((floor((x0 * yShear) / xShear)) - y0));
        drawPixel(xCentre - x0, yCentre + ((floor((x0 * yShear) / xShear)) - y0));
        if (sigma >= 0)
        {
            sigma += (fourASquared * (1 - y0));
            y0--;
        }
        sigma += (bSquared * ((4 * x0) + 6));
        x0++;
    }

    // second region of the midpoint algorithm
    x0 = widthAxis;
    y0 = 0;
    sigma = (aSquared * 2) + (bSquared * (1 - (2 * widthAxis)));
    while ((aSquared * y0) <= (bSquared * x0))
    {
        drawPixel(xCentre + x0, yCentre + ((floor((x0 * yShear) / xShear)) + y0));
        drawPixel(xCentre - x0, yCentre + ((floor((x0 * yShear) / xShear)) + y0));
        drawPixel(xCentre + x0, yCentre + ((floor((x0 * yShear) / xShear)) - y0));
        drawPixel(xCentre - x0, yCentre + ((floor((x0 * yShear) / xShear)) - y0));
        if (sigma >= 0)
        {
            sigma += (fourBSquared * (1 - x0));
            x0--;
        }
        sigma += (aSquared * (4 * y0) + 6);
        y0++;
    }

    //the above algorithm hasn't been quite completed
    //there are still a few things I want to enquire Andy about
    //before I move on

    //this other algorithm definitely works, however
    //it is computationally expensive
    //and the line drawing isn't as refined as the first one
    //only use this as a last resort
    /* std::vector<std::vector<float>> rotate;
    rotate = maths.rotateMatrix(angle, 'z');
    float s = rotate[0][1];
    float c = rotate[0][0];
    float ratio = (float)height / (float)width;
    float px, py, xNew, yNew;
    for (int theta = 0; theta <= 360; theta++)
    {
        px = (xCentre + (cos(maths.degToRad(theta)) * (width / 2))) - xCentre;
        py = (yCentre - (ratio * (sin(maths.degToRad(theta)) * (width / 2)))) - yCentre;
        x0 = (px * c) - (py * s);
        y0 = (px * s) + (py * c);
        drawPixel(x0 + xCentre, y0 + yCentre);
    }*/
}
Here's the problem: when testing the rotation matrix on my oval drawing function, I expect it to draw an ellipse at a slant from its original horizontal position, as specified by 'angle'. Instead, it draws a heart shape. This is sweet, but not the result I want.
I have managed to get the other algorithm (seen in the commented-out part at the bottom of that code sample) working successfully, but it takes more time to compute and doesn't draw lines quite as nicely. I only plan to use it if I can't get this Bresenham one working.
Can anyone help?
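In case it helps to narrow things down, here is a minimal standalone sketch of the parametric fallback from the commented-out block above, cleaned up. drawPixel as a free function and plain sin/cos in place of maths.rotateMatrix are assumptions carried over from the question; the ratio * width / 2 factor has been folded into height / 2, which is equivalent:
#include <cmath>

void drawPixel(int x, int y); // assumed to exist, as in the question

// Sketch: rotated ellipse via its parametric form. Walks the
// axis-aligned ellipse point by point and rotates each point
// about the center.
void plotEllipseParametric(int xCentre, int yCentre, int width, int height, float angleDeg)
{
    const float toRad = 3.14159265f / 180.0f;
    const float c = std::cos(angleDeg * toRad);
    const float s = std::sin(angleDeg * toRad);
    for (int t = 0; t <= 360; ++t)
    {
        // point on the unrotated ellipse, relative to the center
        float px = std::cos(t * toRad) * (width / 2.0f);
        float py = -std::sin(t * toRad) * (height / 2.0f); // minus: screen y grows downwards
        // rotate about the origin, then translate back to the center
        drawPixel(static_cast<int>(xCentre + px * c - py * s),
                  static_cast<int>(yCentre + px * s + py * c));
    }
}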

How to speed up bilinear interpolation of image?

I'm trying to rotate an image with interpolation, but it's too slow for real time for big images.
The code is something like:
for (int y = 0; y < dst_h; ++y)
{
    for (int x = 0; x < dst_w; ++x)
    {
        // do inverse transform
        fPoint pt(Transform(Point(x, y)));
        // in coordinates of src
        int x1 = (int)floor(pt.x);
        int y1 = (int)floor(pt.y);
        int x2 = x1 + 1;
        int y2 = y1 + 1;
        if ((x1 >= 0 && x1 < src_w && y1 >= 0 && y1 < src_h) && (x2 >= 0 && x2 < src_w && y2 >= 0 && y2 < src_h))
        {
            Mask[y][x] = 1; // show pixel
            float dx1 = pt.x - x1;
            float dx2 = 1 - dx1;
            float dy1 = pt.y - y1;
            float dy2 = 1 - dy1;
            // bilinear
            pd[x].blue  = (dy2 * (ps[y1 * src_w + x1].blue  * dx2 + ps[y1 * src_w + x2].blue  * dx1) +
                           dy1 * (ps[y2 * src_w + x1].blue  * dx2 + ps[y2 * src_w + x2].blue  * dx1));
            pd[x].green = (dy2 * (ps[y1 * src_w + x1].green * dx2 + ps[y1 * src_w + x2].green * dx1) +
                           dy1 * (ps[y2 * src_w + x1].green * dx2 + ps[y2 * src_w + x2].green * dx1));
            pd[x].red   = (dy2 * (ps[y1 * src_w + x1].red   * dx2 + ps[y1 * src_w + x2].red   * dx1) +
                           dy1 * (ps[y2 * src_w + x1].red   * dx2 + ps[y2 * src_w + x2].red   * dx1));
            // nearest neighbour
            //pd[x] = ps[((int)pt.y) * src_w + (int)pt.x];
        }
        else
            Mask[y][x] = 0; // transparent pixel
    }
    pd += dst_w;
}
How can I speed up this code? I tried to parallelize it, but there seems to be no speedup, presumably because of the memory access pattern(?).
The key is to do most of your computations as ints. The only thing that needs to be done as a float is the weighting. See here for a good resource.
From that same resource:
int px = (int)x; // floor of x
int py = (int)y; // floor of y
const int stride = img->width;
const Pixel* p0 = img->data + px + py * stride; // pointer to first pixel
// load the four neighboring pixels
const Pixel& p1 = p0[0 + 0 * stride];
const Pixel& p2 = p0[1 + 0 * stride];
const Pixel& p3 = p0[0 + 1 * stride];
const Pixel& p4 = p0[1 + 1 * stride];
// Calculate the weights for each pixel
float fx = x - px;
float fy = y - py;
float fx1 = 1.0f - fx;
float fy1 = 1.0f - fy;
int w1 = fx1 * fy1 * 256.0f;
int w2 = fx * fy1 * 256.0f;
int w3 = fx1 * fy * 256.0f;
int w4 = fx * fy * 256.0f;
// Calculate the weighted sum of pixels (for each color channel)
int outr = p1.r * w1 + p2.r * w2 + p3.r * w3 + p4.r * w4;
int outg = p1.g * w1 + p2.g * w2 + p3.g * w3 + p4.g * w4;
int outb = p1.b * w1 + p2.b * w2 + p3.b * w3 + p4.b * w4;
int outa = p1.a * w1 + p2.a * w2 + p3.a * w3 + p4.a * w4;
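The snippet stops before the final normalization. Since the four integer weights were scaled by 256, each accumulated channel presumably has to be scaled back down before being stored, along these lines (the Pixel fields are assumed from the snippet above):
// Sketch: the weights w1..w4 sum to roughly 256, so shift the
// accumulated per-channel sums back down to the 0..255 range.
Pixel out;
out.r = outr >> 8;
out.g = outg >> 8;
out.b = outb >> 8;
out.a = outa >> 8;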
Wow, you are doing a lot inside the innermost loop, for example:
1. float to int conversions
You could do it all in floats; they are pretty fast these days, and the conversion is what is killing you. You are also mixing floats and ints together (if I see it right), which costs the same.
2. Transform(x, y)
Any unnecessary function call adds overhead and slows things down. Instead, add two variables xx, yy and interpolate them inside your for loops (see the sketch after this list).
3. the if ...
Why the heck are you adding an if? Limit the for ranges before the loop, not inside it; the background can be filled with other for loops before or afterwards.
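A minimal sketch of point 2, assuming the inverse transform is a pure rotation by some angle plus a translation (tx, ty); the bilinear sampling body is elided because it is the same as in the question:
#include <cmath>

// Sketch: for an affine inverse transform, the source coordinates of
// consecutive destination pixels differ by a constant step, so the
// per-pixel Transform() call can be replaced by two additions.
void rotateBilinearFast(float angle, float tx, float ty, int dst_w, int dst_h)
{
    const float cosA = std::cos(angle), sinA = std::sin(angle);
    float rowX = tx, rowY = ty; // source coordinates of dst pixel (0, y)
    for (int y = 0; y < dst_h; ++y)
    {
        float xx = rowX, yy = rowY;
        for (int x = 0; x < dst_w; ++x)
        {
            // ... bilinear sample of the source at (xx, yy),
            //     exactly as in the question's inner loop ...
            xx += cosA;  // step along the row
            yy += -sinA;
        }
        rowX += sinA;    // step down one row
        rowY += cosA;
    }
}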