Problem with multi-gradient brush implementation from scratch in C++ and GDI

I am trying to implement a gradient brush from scratch in C++ with GDI. I don't want to use GDI+ or any other graphics framework. I want the gradient to be of any direction (arbitrary angle).
My algorithm in pseudocode:
for each pixel in the x direction
    for each pixel in the y direction
        current position = current pixel - centre               // translate origin to the centre
        rotate this position by the given angle
        scalingFactor = (rotated position + centre) / extentDistance   // translate origin back
        rgbColor = startColor + scalingFactor * (endColor - startColor)
extentDistance is the length of the line segment that passes through the centre of the rectangle with slope equal to the gradient angle.
OK, so far so good. I can draw this and it looks nice. BUT, unfortunately, because of the rotation step the rectangle's corners get the wrong color. The result is perfect only for angles that are multiples of 90 degrees. The problem appears to be that the scaling factor doesn't scale over the entire size of the rectangle.
I am not sure if you got my point, because it's really hard to explain this problem without a visualisation of it.
If anyone can help or redirect me to some helpful material I'd be grateful.

OK guys, fixed it. The problem was that when rotating the gradient fill (not the rectangle) I wasn't calculating the scaling factor correctly. The distance over which the gradient is scaled changes with the gradient direction. What must be done is to find where the corner points of the rect end up after the rotation; based on that, you can find the distance over which the gradient should be scaled. So what needs to be corrected in my algorithm is the extentDistance.
How to do it:
•Transform the coordinates of all four corners
•Find the smallest of the four x's and call it minX
•Find the largest of the four x's and call it maxX
•Do the same for the y's
•The distance between the min and max along the gradient axis (e.g. maxX - minX) is the extentDistance
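
For reference, here is a minimal C++/GDI sketch of the corrected approach. FillGradientRect is a hypothetical helper name, and SetPixel is used only for clarity (a real implementation would write into a DIB section for speed); the min/max of the rotated corners replaces the "translate origin back" step in the original pseudocode:

#include <windows.h>
#include <cmath>
#include <algorithm>

void FillGradientRect(HDC hdc, const RECT& rc, COLORREF start, COLORREF end, double angleRad)
{
    const double cx = (rc.left + rc.right) / 2.0;
    const double cy = (rc.top + rc.bottom) / 2.0;
    // Rotating by -angle maps the gradient direction onto the +x axis.
    const double c = std::cos(-angleRad), s = std::sin(-angleRad);

    // Rotate all four corners (relative to the centre) and track the
    // extent along the gradient (x) axis -- this gives extentDistance.
    const POINT corners[4] = { {rc.left, rc.top}, {rc.right, rc.top},
                               {rc.left, rc.bottom}, {rc.right, rc.bottom} };
    double minX = 1e30, maxX = -1e30;
    for (const POINT& p : corners) {
        double x = (p.x - cx) * c - (p.y - cy) * s;
        minX = std::min(minX, x);
        maxX = std::max(maxX, x);
    }
    const double extentDistance = maxX - minX;

    for (LONG y = rc.top; y < rc.bottom; ++y) {
        for (LONG x = rc.left; x < rc.right; ++x) {
            // Rotate the pixel into "gradient space" and scale over the
            // full rotated extent, so the corners get the right colour.
            double xr = (x - cx) * c - (y - cy) * s;
            double t = (xr - minX) / extentDistance;   // 0..1 across the whole rect
            BYTE r = (BYTE)(GetRValue(start) + t * (GetRValue(end) - GetRValue(start)));
            BYTE g = (BYTE)(GetGValue(start) + t * (GetGValue(end) - GetGValue(start)));
            BYTE b = (BYTE)(GetBValue(start) + t * (GetBValue(end) - GetBValue(start)));
            SetPixel(hdc, x, y, RGB(r, g, b));
        }
    }
}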

Related

Relate texture areas of a cube with the current Oculus viewport

I'm creating a 360° image player using the Oculus Rift SDK.
The scene is composed of a cube, and the camera is placed at its center with the ability to rotate around yaw, pitch and roll only.
I've drawn the object using OpenGL, with a 2D texture for each cube face to create the 360° effect.
I would like to find the portion of the original texture that is actually shown in the Oculus viewport at a given instant.
Up to now, my approach has been to try to find an approximate pixel position of some significant points of the viewport (i.e. the central point and the corners) using the Euler angles, in order to identify some areas in the original textures.
Considering all the problems of using Euler angles, this does not seem the smartest way to do it.
Is there any better approach to accomplish it?
Edit
I made a small example that can be run in the render loop:
//Get the orientation from Oculus (point 1)
OVR::Matrix4f rotation = Matrix4f(hmdState.HeadPose.ThePose);
//Find the view vector for a certain point in the viewport, in this case the center (point 2)
FovPort fov_viewport = FovPort::CreateFromRadians(hmdDesc.CameraFrustumHFovInRadians, hmdDesc.CameraFrustumVFovInRadians);
Vector2f temp2f = fov_viewport.TanAngleToRendertargetNDC(Vector2f(0.0, 0.0)); //these values are the tangents at the center
Vector3f vector_view = Vector3f(temp2f.x, temp2f.y, -1.0); //add the third component, pointing into the screen
vector_view.Normalize();
//Apply the rotation (point 3)
Vector3f final_vect = rotation.Transform(vector_view); //seems to be the right operation
//An example check: are we looking at the front face? (partial point 4)
if (abs(final_vect.z) > abs(final_vect.x) && abs(final_vect.z) > abs(final_vect.y) && final_vect.z < 0) {
    system("pause");
}
Is it right to consider the entire viewport, or should this be done for each eye separately?
How can a point of the viewport other than the center be indicated? I don't really understand which values should be the input of TanAngleToRendertargetNDC().
You can get a full rotation matrix by passing the camera pose quaternion to the OVR::Matrix4 constructor.
You can take any 2D position in the eye viewport and convert it to its camera space 3D coordinate by using the fovPort tan angles. Normalize it and you get the direction vector in camera space for this pixel.
If you apply the rotation matrix obtained earlier to this direction vector, you get the actual direction of that ray.
Now you have to convert from this direction to your texture UV. The component with the highest absolute value in the direction vector gives you the face of the cube it's looking at. The remaining components can be used to find the actual 2D location on the texture. This depends on how your cube faces are oriented, whether they are x-flipped, etc.
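As a concrete illustration of that last step, here is a hedged sketch of a direction-to-face/UV conversion. It assumes the standard OpenGL cube map face conventions; your own face orientations and flips may differ, as noted above:

#include <cmath>

struct FaceUV { int face; float u, v; }; // face: 0=+X 1=-X 2=+Y 3=-Y 4=+Z 5=-Z

// Map a (normalized) view direction to a cube face and 2D texture
// coordinates in [0,1]. Sign conventions follow the OpenGL cube map
// spec; adjust to match how your six textures are mapped onto the cube.
FaceUV DirectionToCubeUV(float x, float y, float z)
{
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    float sc, tc, ma;
    int face;
    if (ax >= ay && ax >= az) { ma = ax; face = x > 0 ? 0 : 1; sc = x > 0 ? -z :  z; tc = -y; }
    else if (ay >= az)        { ma = ay; face = y > 0 ? 2 : 3; sc =  x;              tc = y > 0 ? z : -z; }
    else                      { ma = az; face = z > 0 ? 4 : 5; sc = z > 0 ?  x : -x; tc = -y; }
    // Remap from [-1,1] to [0,1] texture coordinates.
    return { face, 0.5f * (sc / ma + 1.0f), 0.5f * (tc / ma + 1.0f) };
}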
If you are at the rendering part of the viewer, you will want to do this in a shader. If this is to find where the user is looking in the original image, or the extent of their field of view, then only a handful of rays would suffice, as you wrote.
Edit
Here is a bit of code to go from tan angles to camera space coordinates.
float u = (x / eyeWidth) * (leftTan + rightTan) - leftTan;
float v = (y / eyeHeight) * (upTan + downTan) - upTan;
float w = 1.0f;
x and y are pixel coordinates, eyeWidth and eyeHeight are eye buffer size, and *Tan variables are the fovPort values. I first express the pixel coordinate in [0..1] range, then scale that by the total tan angle for the direction, and then recenter.
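Putting the snippets together, a minimal sketch of the whole pixel-to-world-ray computation (the OVR types are the same ones used in the question's code; the -Z forward convention is an assumption matching the earlier snippet, whereas w = 1.0f above implies the opposite convention):

// Build the camera-space ray for pixel (x, y), then rotate it into world
// space with the head-pose matrix. leftTan/rightTan/upTan/downTan come
// from the eye's FovPort; 'rotation' is the OVR::Matrix4f built from the
// head pose.
OVR::Vector3f PixelToWorldRay(float x, float y, float eyeWidth, float eyeHeight,
                              float leftTan, float rightTan, float upTan, float downTan,
                              const OVR::Matrix4f& rotation)
{
    float u = (x / eyeWidth)  * (leftTan + rightTan) - leftTan;
    float v = (y / eyeHeight) * (upTan + downTan)    - upTan;
    OVR::Vector3f dir(u, v, -1.0f);  // -Z points into the screen here
    dir.Normalize();
    return rotation.Transform(dir);  // world-space direction of the ray through (x, y)
}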

Angle of object relative to the camera? Video and camera output different resolutions

I am wondering if I have got my thinking right about this. I have calibrated my camera, and now I want to get the angle of detected objects relative to the camera, only on the x-axis (the horizontal).
I am thinking I can put some grid lines across the image at known pixel values, match those with known real-world distances, and calculate the angle per pixel that way, knowing the distances in the triangles: starting at the centre of the image at 0 degrees, and as we move towards the right +X degrees and towards the left -X degrees.
Assuming this is a correct way to go about it: for some reason the video I'm working with was recorded at 704x576 pixels, but when I plug the camera into my computer it outputs 640x480 pixels, and it's the same camera that made the recordings. I assume this will affect my results somewhat, both for the calibration and definitely for the angle-per-pixel measurement that I want. I am working with OpenCV in C++. Is there a way/function to set the capture size to 704x576 when I call up the camera? And if I then do my measurements at this size, can I get a somewhat accurate angle-per-pixel measurement, or do I need to do something else?
I'm still figuring my way around camera geometry and openCV, and any help would be much appreciated, thanks.
It is probably easier than you think. Say your camera has a 60.0 deg horizontal field of view (FOV). Then each pixel along the X axis covers just 60.0/640 deg. You can easily calculate the FOV by considering a right triangle whose sides are the focal length vector and half of the screen width:
FOV = 2*atan2(640/2, focal), where the focal length is in pixels;
for example, for focal = 500 pixels:
FOV = 2*atan2(640/2, 500) = 1.14 rad = 65.2 deg
One thing to keep in mind is that the focal length scales proportionally with image resolution. For example, if you calculated focal = 500 based on a 640x320 image, then for a 320x160 image focal = 250.
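A small self-contained sketch of these formulas (the focal length and image width are illustrative values; in practice they come from your calibration):

#include <cmath>
#include <cstdio>

int main()
{
    // Assumed values for illustration: a 640-pixel-wide image with the
    // principal point at its centre and a focal length of 500 pixels.
    const double focal = 500.0;
    const double cx    = 640.0 / 2.0;
    const double toDeg = 180.0 / 3.14159265358979;

    // Horizontal FOV from the right triangle described above.
    double fov = 2.0 * std::atan2(cx, focal);  // ~1.14 rad
    std::printf("FOV = %.2f rad = %.1f deg\n", fov, fov * toDeg);

    // Exact horizontal angle of a given pixel column relative to the
    // optical axis (more accurate than a constant deg-per-pixel).
    double x = 100.0;                          // example pixel column
    double angle = std::atan2(x - cx, focal);
    std::printf("pixel %.0f -> %.2f deg\n", x, angle * toDeg);

    // If the capture resolution changes, scale the focal length by the
    // same factor, e.g. 704/640 when moving from 640 wide to 704 wide.
    double focal704 = focal * 704.0 / 640.0;
    (void)focal704;
    return 0;
}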

How can I find the center of an object?

I have a black and white image after binarization. In it I get one object with an irregular shape. Link to this image is below.
How can I circumscribe this object with a circle? Or, how can I find the "center" of this object?
You can find the center of gravity of the white pixels using a simple formula: the sum of the x coordinates divided by the number of points, and the sum of the y coordinates divided by the number of points.
Then you can draw a circle centered at the center of gravity, with radius equal to half of the maximum distance between points.
This sounds like a smallest-circle problem on the set of white pixels. It can be solved in time linear in the number of pixels, which is the best you will ever get if your input is just an array of binary pixels.
Well, you could scan from the top down for the top-most white pixel, then from the bottom up for the bottom-most white pixel, and the same for left and right. That gives you a rectangle. Finding the center of the rectangle is easy (e.g. left + (right - left) / 2), and that's your circle center. Then find the distance to a corner (any will do), and that's your circle radius.
I think the center of the object can easily be found as the arithmetic mean of the x and y coordinates. If you want to replace the object by a circle, I'd say the diameter is double the mean distance of all points to the center.
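
If you happen to be using OpenCV (an assumption, as the question doesn't say), all three suggestions map onto a few library calls; a minimal sketch:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // "binary.png" is a placeholder path; the image is assumed to be a
    // white object on a black background, as after binarization.
    cv::Mat img = cv::imread("binary.png", cv::IMREAD_GRAYSCALE);

    // Center of gravity of the white pixels (first answer):
    cv::Moments m = cv::moments(img, /*binaryImage=*/true);
    cv::Point2d centroid(m.m10 / m.m00, m.m01 / m.m00);

    // Smallest enclosing circle (second answer):
    std::vector<cv::Point> points;
    cv::findNonZero(img, points);
    cv::Point2f center;
    float radius;
    cv::minEnclosingCircle(points, center, radius);

    // Bounding-box center (third answer):
    cv::Rect box = cv::boundingRect(points);
    cv::Point2d boxCenter(box.x + box.width / 2.0, box.y + box.height / 2.0);

    return 0;
}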

Circle that moves on the edge of a circle

As the title describes, I want to make a tiny circle that circulates along the edge of a sector of another, big circle. I have implemented the sector of the circle; the only issue now is how to make the small circle circulate along the edge of this sector. I have tried various ways, but none of them proved successful, so I would be grateful for some tips on how to implement it.
Thanks in advance.
You just have to consider that, for a circle of radius 1 centered on the origin, every point on the circle can be described as:
P = [sin(alpha); cos(alpha)]
with 0 <= alpha < 2*pi.
Now, if you change the radius and the center you will have:
P = [radius * sin(alpha) + x_center; radius * cos(alpha) + y_center]
So, just have a loop for alpha going from 0 to 2*pi (or whatever section of circle you need) and use the above equation to calculate the position of the center of the small circle.
I presume you have a function that can draw a circle given a position in cartesian coordinates and a radius.
Use polar coordinates (angle/radius) and set the radius to the radius of the big circle minus that of the small circle. Set the angle to wherever you want the small circle to start. Then set up a loop that increments the angle by a given amount. After each increment, clear the screen and draw the big circle; then convert the polar coordinates to cartesian, add the centre of the big circle, and draw the small circle. Hold for as long as you want.
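Both answers boil down to the same parametric loop. Here is a minimal sketch; ClearScreen, DrawCircle and Wait are assumed placeholders for whatever drawing and timing routines you have (whether you use sin or cos for x merely changes where alpha = 0 points):

#include <cmath>

// Placeholders for your actual drawing/timing routines (assumptions).
void ClearScreen();
void DrawCircle(double cx, double cy, double radius);
void Wait(int milliseconds);

void AnimateOrbit(double bigX, double bigY, double bigR, double smallR,
                  double startAngle, double endAngle)  // radians; the sector of the big circle
{
    const double step = 0.01;  // angle increment per frame (assumption)
    for (double alpha = startAngle; alpha <= endAngle; alpha += step) {
        ClearScreen();
        DrawCircle(bigX, bigY, bigR);
        // Parametric point on the big circle's edge; subtracting smallR
        // keeps the small circle riding on the inside of the edge.
        double x = bigX + (bigR - smallR) * std::cos(alpha);
        double y = bigY + (bigR - smallR) * std::sin(alpha);
        DrawCircle(x, y, smallR);
        Wait(16);  // roughly 60 fps
    }
}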

Get rectangle vertices by center, normal, length and height

I am looking for a way to get all the vertices of a rectangle whose center, normal, length and height I know. I am a little weak in maths, so please help me.
Edit: the plane is in 3D space.
You can easily calculate the x and y coordinates of the vertices of a rectangle in 2D space, given the center, width and height, by subtracting/adding half the width/height from/to the x/y position of the center point.
If you need this in 3D space, it becomes a little more tricky and relies on a bit of trigonometry, but it still follows the same principle. You'll need one extra piece of information: some way of fixing the orientation of the rectangle, i.e. which direction the rectangle is 'facing'. The normal will tell you what plane the rectangle lies on, but without some orientation within that plane, the best you can do is work out a set of possible positions on a circle around the center for each of the vertices.
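To make that concrete, here is a minimal sketch that resolves the ambiguity with an extra 'up' hint vector, using plain vector math (no particular library is assumed):

#include <array>
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s)      const { return {x * s, y * s, z * s}; }
};

Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Vertices of a rectangle from center, unit normal, length and height.
// 'up' is the extra orientation hint discussed above; it must not be
// parallel to the normal.
std::array<Vec3, 4> RectangleVertices(const Vec3& center, const Vec3& normal,
                                      double length, double height, const Vec3& up)
{
    Vec3 right     = normalize(cross(up, normal));  // in-plane "length" axis
    Vec3 upInPlane = cross(normal, right);          // in-plane "height" axis
    Vec3 dl = right * (length / 2.0);
    Vec3 dh = upInPlane * (height / 2.0);
    return { center - dl - dh, center + dl - dh,
             center + dl + dh, center - dl + dh };
}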