CDC::Ellipse doesn't draw circles correctly - C++

In a project of mine (VC++ 2010, MFC), I want to draw a circle using CDC::Ellipse. I set two points: the first is the center of the circle, the second is a point that I want to lie on the circumference.
I pass to CDC::Ellipse( int x1, int y1, int x2, int y2 ) the coordinates of the upper-left and lower-right corners of the bounding rectangle.
Briefly: using the Pythagorean theorem I calculate the distance between the two points (the radius), then I subtract this value from the coordinates of the center to obtain the upper-left corner and add it to obtain the lower-right one.
When I draw the circle and the points and zoom in, I see that the second point isn't on the circumference as expected; it is slightly inside, unless I place it at 0°, 45°, 90° and so on with respect to the absolute coordinate system.
Then I tried to draw the same circle using CDC::Polyline; I gave this method the points obtained by rotating another point around the center at a distance equal to the radius. In this case the point is on the circumference everywhere I set it.
Overlaying these two circles shows that they coincide perfectly at 0°, 45°, 90° and so on, but the gap between them is largest at 22.5°, 67.5° and so on.
Has anyone ever noticed a similar behavior?
Thanks to everybody that can help me!
Code snippet:
This is how I calculate the radius given two points:
centerPX = vvFPoint( 1380, 845 );
secondPointPX = vvFPoint( 654,654 );
double radiusPX = (sqrt( (secondPointPX.x - centerPX.x) * (secondPointPX.x - centerPX.x) + (secondPointPX.y - centerPX.y) * (secondPointPX.y - centerPX.y) ));
(vvFPoint is a custom type derived from CPoint.)
This is how I draw the "circle" with CDC::Ellipse:
int up = (int)(((double)(m_p1.y-(double)originY - m_radius) / zoom) + 0.5) + offY;
int left = (int)(((double)(m_p1.x-(double)originX - m_radius) / zoom) + 0.5) + offX;
int down = (int)(((double)(m_p1.y-(double)originY + m_radius) / zoom) + 0.5) + offY;
int right = (int)(((double)(m_p1.x-(double)originX + m_radius) / zoom) + 0.5) + offX;
pDC->Ellipse( left, up, right, down);
(m_p1 is the center of the circle, originX/Y is the origin of the image, m_radius is the radius of the circle, zoom is the scale factor, and offX/Y is an offset in the client area of my software.)
This is how I draw the circle "manually" (a rather trivial method) using a custom polyline class:
1) create the array of points:
pt1.x = centerPX.x + radiusPX;   // start on the circumference, at 0° from the center
pt1.y = centerPX.y;
for ( i=0; i < 3600; i++ )
{
    pt1.RotateDeg ( centerPX, (double)0.1 );
    poly->AddPoint( pt1 );
}
(RotateDeg is a custom method that rotates a point, using the first argument as the pivot and the second as the angle in degrees; AddPoint is a custom method that appends to the array of points; poly is my custom polyline object.)
2) draw it:
When Draw( CDC* pDC ) is called, I use the previous array to draw the polyline, starting with:
pDC->MoveTo(p);
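The rest of the loop is just LineTo calls over the stored points; simplified, it looks roughly like this (GetPoint() and GetCount() here stand in for my actual accessor names):
pDC->MoveTo( poly->GetPoint( 0 ) );            // start at the first point
for ( int i = 1; i < poly->GetCount(); i++ )
    pDC->LineTo( poly->GetPoint( i ) );        // connect successive points
pDC->LineTo( poly->GetPoint( 0 ) );            // close the circle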
I hope this can help you to reproduce my weird observations!
Code snippet 2:
template <class Tipo>
void vvPoint<Tipo>::RotateDeg(const vvPoint<Tipo> &center, double angle)
{
    vvPoint<Tipo> ptB;
    angle *= -(M_PI / 180);   // degrees to radians, with the sign flipped
    *this -= center;          // translate so the pivot is at the origin
    ptB.x = ((this->x * cos(angle)) - (this->y * sin(angle)));   // rotate about the origin
    ptB.y = ((this->x * sin(angle)) + (this->y * cos(angle)));
    *this = ptB + center;     // translate back
}
To help you better understand my observations I would like to add a few images showing where my question started from, but I can't add images since I don't yet have 10 reputation. I have uploaded a .zip file to Dropbox and can send you its URL if you want. Let me know if this is a correct (and safe) way around the problem.
Thanks!

This might be a possible explanation. As MSDN says about CDC::Ellipse (with my emphasis):
The center of the ellipse is the center of the bounding rectangle
specified by x1, y1, x2, and y2, or lpRect. The ellipse is drawn with
the current pen, and its interior is filled with the current brush.
The figure drawn by this function extends up to, but does not include,
the right and bottom coordinates. This means that the height of the
figure is y2 – y1 and the width of the figure is x2 – x1.
The way you described how you calculate the bounding rectangle is not entirely clear (some source code would have helped) but, given the second paragraph quoted above, you possibly need to add 1 to your x2 and y2 values, to make sure you have a circle with the desired radius.
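For example, something like this (a sketch only, using the left/up/right/down values from your snippet) would compensate for the exclusive edges:
// Sketch only: widen the bounding box by one pixel on the exclusive sides,
// so the drawn figure is (right + 1 - left) == (down + 1 - up) pixels across.
pDC->Ellipse( left, up, right + 1, down + 1 );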
It's also worth noting that there may be slight rounding differences between your two drawing methods where you have an odd-sized bounding box (i.e. so the centre point falls logically on a half-pixel).
UPDATE
Using your code snippets (thanks), and assuming no zoom and zero offsets etc., I get a radius of 750.704 pixels and the following parameters for the ellipse:
pDC->Ellipse(629, 94, 2131, 1596);
According to MSDN, this means that the ellipse will be drawn in a figure of the following dimensions:
width = (2131 - 629) = 1502
height = (1596 - 94) = 1502
So as far as I can see, this should produce a circle rather than an ellipse.
The next thing to do is to find out how you're drawing the polygon - for that we need to see the implementation of RotateDeg - can you post that code? I'm suspecting some simple rounding error here, that maybe gets magnified when you zoom.
UPDATE 2
Just looking at this code:
for ( i=0; i < 3600; i++ )
{
pt1.RotateDeg ( centerPX, (double)0.1 );
poly->AddPoint( pt1 );
}
You are rotating your polygon points incrementally by 0.1 degrees each time. This will possibly accumulate errors, so it may be worth doing something like this instead:
for ( i=0; i < 3600; i++ )
{
    vvFPoint ptNew = pt1;
    ptNew.RotateDeg ( centerPX, (double)i * 0.1 );
    poly->AddPoint( ptNew );
}
Maybe this will mean you have to change your RotateDeg function to take care of the correct quadrants.
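Alternatively, you could avoid incremental rotation altogether and compute each point directly from its angle (a sketch only, relying on the two-argument vvFPoint constructor from your first snippet):
// Sketch only: generate each point from the parametric form of the circle,
// so no rounding error can accumulate from one point to the next.
for ( int i = 0; i < 3600; i++ )
{
    const double a = (double)i * 0.1 * M_PI / 180.0;   // angle in radians
    vvFPoint ptNew( centerPX.x + radiusPX * cos( a ),
                    centerPX.y + radiusPX * sin( a ) );
    poly->AddPoint( ptNew );
}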
One other point: you mentioned that you see the problem when you zoom into the image. If this means you are using your zoom variable, it is worth checking in this line ...:
pDC->Ellipse( left, up, right, down);
... that the parameters still form a square shape, so (right - left) == (down - up).
UPDATE 3
I just ran your RotateDeg function, in its current form, to see how the error accumulates (by feeding in the previous result to the next iteration). At each step, I calculated the distance between the new point and the centre and compared this with the required radius.
The chart below shows the result, where you can see an error of 4 pixels by the time all 3600 points have been calculated.
I think that this at least explains part of the difference (i.e. your polygon drawing is flawed), and, depending on zoom, you may introduce asymmetry into the ellipse parameters, which you can debug by comparing the width to the height as I described above.
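For reference, the measurement loop was essentially the following (a sketch only, reusing your vvFPoint type and RotateDeg, with the 750.704 radius from above; sqrt/fabs from <cmath> assumed):
// Sketch only: feed each rotated point back into the next rotation (as your
// code does) and track how far it drifts from the ideal radius.
vvFPoint centerPX( 1380, 845 );
vvFPoint pt1( centerPX.x + 750.704, centerPX.y );
double maxError = 0.0;
for ( int i = 0; i < 3600; i++ )
{
    pt1.RotateDeg( centerPX, 0.1 );
    const double dx = pt1.x - centerPX.x;
    const double dy = pt1.y - centerPX.y;
    const double error = fabs( sqrt( dx * dx + dy * dy ) - 750.704 );
    if ( error > maxError )
        maxError = error;   // reaches roughly 4 pixels by the last point
}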

Related

How can I maintain points inside a rectangle that keeps its aspect ratio inside a changing outer container?

So my problem involves trying to keep points drawn in a rectangle at the same relative position in the rectangle while an outer container box is scaled to any size.
The rectangle containing the points will keep its aspect ratio while it grows and shrinks in the center of the outer box.
I'm able to keep the inner box's aspect ratio constant but am having problems drawing the points in the correct place when scaling the outer box. Here's an example of my problem.
I'd like the point to stay on the same spot in the picture no matter how the outer box is scaled. The coordinate system has 0,0 at the top left of the outer box, and the inner box is centered using offsets, allowing it to be as big as possible while maintaining its aspect ratio. However, I'm stuck on getting the points I add to maintain their position in the box. Here's a look at what I think I should be doing:
void PointsHandler::updatePoints()
{
    double imgRatio = boxSize.width() / boxSize.height();
    double oldXOffset = (oldContainerSize.width() - oldBoxSize.width()) / 2;
    double oldYOffset = (oldContainerSize.height() - oldBoxSize.height()) / 2;
    double newXOffset = (containerSize.width() - boxSize.width()) / 2;
    double newYOffset = (containerSize.height() - boxSize.height()) / 2;
    for (int i = 0; i < points.size(); i++) {
        double newX = ((points[i].x() - oldXOffset) + newXOffset) * boxRatio;
        double newY = ((points[i].y() - oldYOffset) + newYOffset) * boxRatio;
        points.replace(i, Point(newX, newY));
    }
}
The required transformations are only translations and a uniform scale.
To preserve the original aspect ratio of the inner box, the scale factor must be the same for the x and y axes. To choose which one to apply, compare the width-to-height ratio of the new outer box with the aspect ratio of the old inner box: if it's lower, the scale is the ratio between the new width and the old one; otherwise it's the ratio between the heights.
To apply the transformations in the correct order, you need to first translate the points so that the old center of the inner box coincides with the origin of the axes (the top-left corner of the outer box, apparently), then scale the points, and finally translate them to the new center of the outer box. That's not what the posted code does, because it seems that the scale is applied last.
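As a sketch only (reusing the member names from the question and assuming boxSize already holds the new, aspect-preserving inner-box size), that order would look roughly like this:
void PointsHandler::updatePoints()
{
    // Uniform scale factor, chosen as described above so the aspect ratio is kept.
    double scale;
    if (containerSize.width() / containerSize.height()
            < oldBoxSize.width() / oldBoxSize.height())
        scale = boxSize.width() / oldBoxSize.width();
    else
        scale = boxSize.height() / oldBoxSize.height();

    // Old and new centers of the inner box (it is centered in its container).
    const double oldCX = oldContainerSize.width() / 2.0;
    const double oldCY = oldContainerSize.height() / 2.0;
    const double newCX = containerSize.width() / 2.0;
    const double newCY = containerSize.height() / 2.0;

    for (int i = 0; i < points.size(); i++) {
        // translate to the origin, scale, then translate to the new center
        const double newX = (points[i].x() - oldCX) * scale + newCX;
        const double newY = (points[i].y() - oldCY) * scale + newCY;
        points.replace(i, Point(newX, newY));
    }
}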

Drawing a sprite on the circumference of a circle based on the position of other objects

I'm making a sniper-shooter arcade-style game in GameMaker Studio 2. I want the positions of targets outside the viewport to be pointed to by chevrons that move along the circumference of the scope as it moves. I'm using trig techniques to determine the coordinates, but the chevron jumps around and doesn't seem to point to the target. The code is split in two: the step event of the enemy class (the objects to be pointed to) determines the coordinates, and a draw event in the same class draws the chevron. Additionally, when I try to rotate the chevron so it also points at the enemy, it doesn't draw at all.
Here are the coordinate algorithm and the code to draw the chevrons, respectively.
//determine the angle the target makes with the player
delta_x = abs(ObjectPlayer.x - x); //x axis displacement
delta_y = abs(ObjectPlayer.y - y); //y axis displacement
angle = arctan2(delta_y, delta_x); //angle in radians
angle *= 180/pi; //angle in degrees
//determine the direction based on the larger dimension
largest_distance = max(x, y);
plusOrMinus = (largest_distance == x) ?
    sign(ObjectPlayer.x - x) : sign(ObjectPlayer.y - y);
//define the chevron coordinates
chevron_x = ObjectPlayer.x + plusOrMinus*(cos(angle) + 20);
chevron_y = ObjectPlayer.y + plusOrMinus*(sin(angle) + 20);
The drawing code
if (object_exists(ObjectEnemy)) {
    draw_text(ObjectPlayer.x, ObjectPlayer.y - 10, string(angle));
    draw_sprite(Spr_Chevron, -1, chevron_x, chevron_y);
    //sSpr_Chevron.image_angle = angle;
}
Your current code is slightly more complex than it needs to be for this. If you want to draw chevrons pointing towards all enemies, you might as well do that on the spot in the Draw event, and use degree-based functions if you're going to need degrees for drawing anyway:
var px = ObjectPlayer.x;
var py = ObjectPlayer.y;
with (ObjectEnemy) {
    var angle = point_direction(px, py, x, y);
    var chevron_x = px + lengthdir_x(20, angle);
    var chevron_y = py + lengthdir_y(20, angle);
    draw_sprite_ext(Spr_Chevron, -1, chevron_x, chevron_y, 1, 1, angle, c_white, 1);
}
(Also see an almost decade-old blog post of mine about doing this while clamping to the screen edges instead.)
Specific problems with your existing code are:
Using a single-axis plusOrMinus with two axes
Adding 20 to sine/cosine instead of multiplying them by it
Trying to apply an angle to sSpr_Chevron (?) instead of using draw_sprite_ext to draw a rotated sprite.
Calculating largest_distance based on the executing instance's X/Y instead of the delta X/Y.

How to find Relative Offset of a point inside a non axis aligned box (box that is arbitrarily rotated)

I'm trying to solve a problem where I cannot find the Relative Offset of a Point inside a Box that exists in a space that can be arbitrarily rotated and translated.
I know the WorldSpace Location of the Box (and its 4 Corners; the coordinates on the image are relative) as well as its Rotation. These can be arbitrary (it's actually a 3D Trigger Volume within a game, but we are only concerned with it in a 2D plane from top down).
Looking at it aligned to an axis, the Red Point's Relative Position would be (0.25, 0.25).
If the Box is rotated arbitrarily, I cannot figure out how to ensure that, when we sample the same Point (its World Location will have changed), its Relative Position doesn't change even though the World Rotation of the Box has.
For reference, the Red Point represents an Object that exists in the scene that the Box is encompassing.
bool UPGMapWidget::GetMapMarkerRelativePosition(UPGMapMarkerComponent* MapMarker, FVector2D& OutPosition)
{
    bool bResult = false;
    if (MapMarker)
    {
        const FVector MapMarkerLocation = MapMarker->GetOwner()->GetActorLocation();
        float RelativeX = FMath::GetMappedRangeValueClamped(
            -FVector2D(FMath::Min(GetMapVolume()->GetCornerTopLeftLocation().X, GetMapVolume()->GetCornerBottomRightLocation().X), FMath::Max(GetMapVolume()->GetCornerTopLeftLocation().X, GetMapVolume()->GetCornerBottomRightLocation().X)),
            FVector2D(0.f, 1.f),
            MapMarkerLocation.X
        );
        float RelativeY = FMath::GetMappedRangeValueClamped(
            -FVector2D(FMath::Min(GetMapVolume()->GetCornerTopLeftLocation().Y, GetMapVolume()->GetCornerBottomRightLocation().Y), FMath::Max(GetMapVolume()->GetCornerTopLeftLocation().Y, GetMapVolume()->GetCornerBottomRightLocation().Y)),
            FVector2D(0.f, 1.f),
            MapMarkerLocation.Y
        );
        OutPosition.X = FMath::Abs(RelativeX);
        OutPosition.Y = FMath::Abs(RelativeY);
        bResult = true;
    }
    return bResult;
}
Currently, you can see from the above code that I'm only using the Top Left and Bottom Right corners of the Box to try to calculate the offset. I know this is not a sufficient solution, as it does not allow for Rotation (I'd need to use the other 2 corners as well), but I cannot for the life of me work out what I need to do to reach the solution.
FMath::GetMappedRangeValueClamped converts a value from one range onto another; (20 - 50) becomes (0 - 1), for example.
Any assistance/advice on how to approach this problem would be much appreciated.
Thanks.
UPDATE
@Voo's comment helped me realize that the solution was much simpler than anticipated.
By knowing the Location of 3 of the Corners of the Box, I'm able to find the closest points on the 2 lines these 3 Locations create; then simply mapping those points into a 0-1 range gives the appropriate value regardless of how the Box is translated.
bool UPGMapWidget::GetMapMarkerRelativePosition(UPGMapMarkerComponent* MapMarker, FVector2D& OutPosition)
{
    bool bResult = false;
    if (MapMarker && GetMapVolume())
    {
        const FVector MapMarkerLocation = MapMarker->GetOwner()->GetActorLocation();
        const FVector TopLeftLocation = GetMapVolume()->GetCornerTopLeftLocation();
        const FVector TopRightLocation = GetMapVolume()->GetCornerTopRightLocation();
        const FVector BottomLeftLocation = GetMapVolume()->GetCornerBottomLeftLocation();
        FVector XPlane = FMath::ClosestPointOnLine(TopLeftLocation, TopRightLocation, MapMarkerLocation);
        FVector YPlane = FMath::ClosestPointOnLine(TopLeftLocation, BottomLeftLocation, MapMarkerLocation);
        // Convert the X axis into a 0-1 range.
        float RelativeX = FMath::GetMappedRangeValueUnclamped(
            FVector2D(GetMapVolume()->GetCornerTopLeftLocation().X, GetMapVolume()->GetCornerTopRightLocation().X),
            FVector2D(0.f, 1.f),
            XPlane.X
        );
        // Convert the Y axis into a 0-1 range.
        float RelativeY = FMath::GetMappedRangeValueUnclamped(
            FVector2D(GetMapVolume()->GetCornerTopLeftLocation().Y, GetMapVolume()->GetCornerBottomLeftLocation().Y),
            FVector2D(0.f, 1.f),
            YPlane.Y
        );
        OutPosition.X = RelativeX;
        OutPosition.Y = RelativeY;
        bResult = true;
    }
    return bResult;
}
The above code is the amended code from the original question with the correct solution.
Assume the origin corner is at (x0, y0), the other three corners are at (x_x_axis, y_x_axis), (x_y_axis, y_y_axis) and (x1, y1), and the object is at (x_obj, y_obj).
Do these operations to all five points:
(1) translate all five points by (-x0, -y0), so that the origin moves to (0, 0) (after this, (x_x_axis, y_x_axis) has moved to (x_x_axis - x0, y_x_axis - y0));
(2) rotate all five points around (0, 0) by -arctan((y_x_axis - y0)/(x_x_axis - x0)), so that (x_x_axis - x0, y_x_axis - y0) moves onto the x axis;
(3) with the new coordinates written as (0, 0), (x_x_axis', 0), (0, y_y_axis'), (x_x_axis', y_y_axis') and (x_obj', y_obj'), the object's zero-one coordinate is (x_obj'/x_x_axis', y_obj'/y_y_axis').
Rotation formula: (x_new, y_new) = (x_old * cos(theta) - y_old * sin(theta), x_old * sin(theta) + y_old * cos(theta))
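A small sketch of these three steps in plain C++ (not the Unreal API; atan2 is used instead of arctan so all quadrants are handled):
#include <cmath>

struct Vec2 { double x, y; };

// Returns the object's zero-one coordinates inside the box.
// origin      = (x0, y0)             -- the corner chosen as the box origin
// xAxisCorner = (x_x_axis, y_x_axis) -- the corner along the box's x axis
// yAxisCorner = (x_y_axis, y_y_axis) -- the corner along the box's y axis
// obj         = (x_obj, y_obj)       -- the object's world position
Vec2 RelativePositionInBox(Vec2 origin, Vec2 xAxisCorner, Vec2 yAxisCorner, Vec2 obj)
{
    // (1) translate so the origin corner sits at (0, 0)
    const Vec2 ax = { xAxisCorner.x - origin.x, xAxisCorner.y - origin.y };
    const Vec2 ay = { yAxisCorner.x - origin.x, yAxisCorner.y - origin.y };
    const Vec2 p  = { obj.x - origin.x,         obj.y - origin.y };

    // (2) rotate by -atan2 so the x-axis corner lands on the positive x axis
    const double theta = -std::atan2(ax.y, ax.x);
    const double c = std::cos(theta), s = std::sin(theta);
    const double xAxisLen = ax.x * c - ax.y * s;   // x_x_axis' (its y component becomes 0)
    const double yAxisLen = ay.x * s + ay.y * c;   // y_y_axis'
    const double objX     = p.x * c - p.y * s;     // x_obj'
    const double objY     = p.x * s + p.y * c;     // y_obj'

    // (3) normalise to the 0-1 range along each box axis
    return Vec2{ objX / xAxisLen, objY / yAxisLen };
}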
Update:
Note:
If you use the distance method, you have to take care of the sign of the coordinates in case the object can go outside the scene in the future;
If there will be other transformations on the scene in the future (like a symmetry transformation if you have mirror magic in the game, or a shear (transvection) transformation if you have shockwaves, heatwaves or gravitational waves in the game), then the distance method no longer applies, and you still have to reverse all the transformations applied to your scene in order to get the object's coordinates.

QGraphicsScene, how to place items based on center point

I have a QGraphicsScene, and I have determined where my center point is, but I now need to figure out how to place my items in the scene based on that information.
I have 2 pieces of data I need to work with: range and bearing.
Range obviously is how far away the item is from the center point (or my location), and bearing is the direction from the center point, with 0 being north, 180 being south, and so on.
So for example, if I need to place an item at range 20, bearing 90, the item will be 20 (units) directly to the right of the center point. Currently, placing the item with this data is based on 0,0 being the top left of the scene.
This all needs to be able to scale with the zoom state of the QGraphicsScene as well.
I'm totally lost at this conversion.
Unsure how to get proper offsets when converting from one coordinate system to another, I managed to find a solution. Hopefully it's not considered too much of a hack job.
First I needed to offset from the top left (0,0), and knowing that my scene was 360 x 360 that was easy.
Not being a math guy, I was unsure about getting the angles, but after some research I saw that the info I had was exactly what I needed to derive a vector.
Here is the method I wrote to help me generate items in my QGraphicsScene.
// Requires <QtMath> for qCos, qSin and qDegreesToRadians.
QPointF Mainwindow::pointLocation(double bearing, double range){
    int offset = 90;        // used to offset the Cartesian system
    double centerX = 180;   // push my center location out to the halfway point
    double centerY = 180;
    double newX = centerX + qCos(qDegreesToRadians(bearing - offset)) * range;
    double newY = centerY + qSin(qDegreesToRadians(bearing - offset)) * range;
    QPointF newPoint = QPointF(newX, newY);
    return newPoint;
}
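For example, from elsewhere in Mainwindow (a sketch only; scene is assumed to be a QGraphicsScene* member, and the 5-pixel marker size is arbitrary):
// Place a small marker 20 units from the center at bearing 90 (due east).
QPointF p = pointLocation(90.0, 20.0);
scene->addEllipse(p.x() - 2.5, p.y() - 2.5, 5.0, 5.0);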

3d model to fit in viewport

How is a 3D model handled unit-wise?
When I have a random model that I want to fit in my viewport, I don't know whether it is too big or not, or whether I need to translate it to be in the middle...
I think a 3D object might have its own origin.
You need to find a bounding volume for your object: a shape that encloses all the object's vertices and that is easier to work with than the object itself. Spheres are often used for this. Either the artist can define the sphere as part of the model information, or you can work it out at run time. Calculating the optimal sphere is very hard, but you can get a good approximation using the following:
determine the min and max value of each point's x, y and z
for each vertex
    min_x = min (min_x, vertex.x)
    max_x = max (max_x, vertex.x)
    min_y = min (min_y, vertex.y)
    max_y = max (max_y, vertex.y)
    min_z = min (min_z, vertex.z)
    max_z = max (max_z, vertex.z)

sphere centre = (max_x + min_x) / 2, (max_y + min_y) / 2, (max_z + min_z) / 2
sphere radius = distance from centre to (max_x, max_y, max_z)
Using this sphere, determine a world position that allows the sphere to be viewed in full; simple geometry will determine this.
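A minimal C++ version of the pseudocode above (a sketch only; the Vec3 vertex type is a stand-in for whatever your mesh format uses):
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct BoundingSphere { Vec3 centre; float radius; };

// Approximate bounding sphere built from the axis-aligned extents of the
// vertices (assumes verts is non-empty).
BoundingSphere approximateBoundingSphere(const std::vector<Vec3>& verts)
{
    Vec3 mn = verts[0], mx = verts[0];
    for (const Vec3& v : verts) {
        mn.x = std::min(mn.x, v.x);  mx.x = std::max(mx.x, v.x);
        mn.y = std::min(mn.y, v.y);  mx.y = std::max(mx.y, v.y);
        mn.z = std::min(mn.z, v.z);  mx.z = std::max(mx.z, v.z);
    }
    const Vec3 centre = { (mn.x + mx.x) * 0.5f, (mn.y + mx.y) * 0.5f, (mn.z + mx.z) * 0.5f };
    const float dx = mx.x - centre.x, dy = mx.y - centre.y, dz = mx.z - centre.z;
    return { centre, std::sqrt(dx * dx + dy * dy + dz * dz) };
}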
Sorry, your question is very unclear. I suppose you want to center a 3D model in a viewport. You can achieve this by calculating the model's bounding box. To do this, traverse all polygons and take the minimum/maximum X/Y/Z coordinates. The bounding box given by the points (min_x, min_y, min_z) and (max_x, max_y, max_z) will contain the whole model. Now you can center the model by looking at the center of this box. With some further calculations (depending on your FOV) you can also get the left/right/upper/lower borders inside your viewport.
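For example, for a perspective camera those further calculations boil down to something like this (a sketch only; the vertical FOV in radians and the viewport aspect ratio are assumed known):
#include <algorithm>
#include <cmath>

// Distance from the bounding sphere's center at which the whole sphere fits
// inside the view frustum (perspective projection).
float fitDistance(float sphereRadius, float verticalFovRadians, float aspectRatio)
{
    // The tighter of the vertical FOV and the horizontal FOV (implied by the
    // aspect ratio) limits how close the camera can get.
    const float horizontalFov = 2.0f * std::atan(std::tan(verticalFovRadians * 0.5f) * aspectRatio);
    const float fov = std::min(verticalFovRadians, horizontalFov);
    return sphereRadius / std::sin(fov * 0.5f);
}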
"so i tried to scale it down"
The best thing to do in this situation is not to transform your model at all! Leave it be. What you want to change is your camera.
First calculate the bounding box of your model somewhere in 3D space.
Next calculate the radius of it by taking the max( aabb.max.x-aabb.min.x, aabb.max.y-aabb.min.y, aabb.max.z-aabb.min.z ). It's crude but it gets the job done.
To center the object in the viewport, place the camera at the object's position. If Y is your forward axis, subtract the radius from Y. If Z is the forward axis, subtract the radius from it instead. Also subtract a fudge factor to get you past the pesky near plane so your model doesn't clip out. I use quaternions in my engine with a nice lookat() method, so call lookat() and pass in the center of the bounding box. Voila! Your object is centered in the viewport regardless of where it is in the world.
This always places the camera axis-aligned, so you might want to get fancy and transform the camera into model space instead, subtract off the radius, then lookat() the center again. Then you're always looking at the back of the model. The key is always the lookat().
Here's some example code from my engine. It checks whether we're trying to frame a chunk of static terrain (if so, it looks down from a height), a light, or a static mesh. A visual is anything that draws in the scene, and there are dozens of different types. A Visual::Instance is a copy of the visual, or where to draw it.
void EnvironmentView::frameSelected(){
  if( m_tSelection.toInstance() ){
    Visual::Instance& I = m_tSelection.toInstance().cast();
    Visual* pVisual = I.toVisual();
    if( pVisual->isa( StaticTerrain::classid )){
      toEditorCamera().toL2W().setPosition( pt3( 0, 0, 50000 ));
      toEditorCamera().lookat( pt3( 0 ));
    }else if( I.toFlags()->bIsLight ){
      Visual::LightInstance& L = static_cast<Visual::LightInstance&>( I );
      qst3& L2W = L.toL2W();
      const sphere s( L2W.toPosition(), L2W.toScale() );
      const f32 y =-(s.toCenter()+s.toRadius()).y();
      const f32 z = (s.toCenter()+s.toRadius()).y();
      qst3& camL2W = toEditorCamera().toL2W();
      camL2W.setPosition( s.toCenter()+pt3( 0, y, z ));//45 deg above
      toEditorCamera().lookat( s.toCenter() );
    }else{
      Mesh::handle hMesh = pVisual->getMesh();
      if( hMesh ){
        qst3& L2W = m_tSelection.toInstance()->toL2W();
        vec4x4 M;
        L2W.getMatrix( M );
        aabb3 b0 = hMesh->toBounds();
        b0.min = M * b0.min;
        b0.max = M * b0.max;
        aabb3 b1;
        b1 += b0.min;
        b1 += b0.max;
        const sphere s( b1.toSphere() );
        const f32 y =-(s.toCenter()+s.toRadius()*2.5f).y();
        const f32 z = (s.toCenter()+s.toRadius()*2.5f).y();
        qst3& camL2W = toEditorCamera().toL2W();
        camL2W.setPosition( L2W.toPosition()+pt3( 0, y, z ));//45 deg above
        toEditorCamera().lookat( b1.toOrigin() );
      }
    }
  }
}