Find position of point after scaling (OpenGL and C++)

I've created a program that draws points to the screen in OpenGL (It draws the letter "X" at a specific point). The drawing is then scaled based on user input.
if (GetAsyncKeyState(VK_UP))
{
    /* "zoom" is a global float variable */
    zoom += 0.005;
}
glScaled(1 + zoom, 1 + zoom, 1);
I want to find the new position of the points relative to the screen. For example, a point may be drawn at (100, 100), but after scaling it may end up somewhere like (150, 200) in screen coordinates, while the coordinates I pass for rasterisation stay the same, in this case (100, 100). Is there a function in OpenGL that can return the new coordinates of a point based on a specific scaling?
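GLU does provide such a function: gluProject maps object coordinates through the current modelview, projection, and viewport matrices to window coordinates (it is the inverse of gluUnProject). A minimal sketch, assuming it is called while the scaled matrices are still current; the helper name and the final Y-flip are only illustrative:
#include <GL/glu.h>

// Returns the window-space position of an object-space point, taking the
// current modelview/projection/viewport state (including the glScaled zoom)
// into account.
void GetScreenPosition(double objX, double objY, double& winX, double& winY)
{
    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    GLdouble winZ;
    gluProject(objX, objY, 0.0, model, proj, viewport, &winX, &winY, &winZ);

    // gluProject returns coordinates with the origin at the lower-left corner;
    // flip Y if you want the usual top-left screen convention.
    winY = viewport[3] - winY;
}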

Related

How to rotate a QGraphicsPixmap around a point according to mouseMoveEvent?

I want to rotate a QGraphicsPixmapItem around a point according to the mouse position.
So I tried this:
void Game::mouseMoveEvent(QMouseEvent* e){
    setMouseTracking(true);
    QPoint midPos((sceneRect().width() / 2), 0), currPos;
    currPos = QPoint(mapToScene(e->pos()).x(), mapToScene(e->pos()).y());
    QPoint itemPos((midPos.x() - cannon->scenePos().x()), (midPos.y() - cannon->scenePos().y()));
    double angle = atan2(currPos.y(), midPos.x()) - atan2(midPos.y(), currPos.x());
    cannon->setTransformOriginPoint(itemPos);
    cannon->setRotation(angle);
}
But the pixmap only moves by a few pixels.
I want a result like this:
Besides the mix-up of degrees and radians that @rafix07 pointed out, there is a bug in the angle calculation. You basically need the angle of the line from midPos to currPos, which you calculate with
double angle = atan2(currPos.y() - midPos.y(), currPos.x() - midPos.x());
Additionally the calculation of the transformation origin assumes the wrong coordinate system. The origin must be given in the coordinate system of the item in question (see QGraphicsItem::setTransformOriginPoint), not in scene coordinates. Since you want to rotate around the center of that item it would just be:
QPointF itemPos(cannon->boundingRect().center());
Then there is the question of whether midPos is actually the point highlighted in your image in the middle of the cannon. The y-coordinate is set to 0, which would normally be the edge of the screen, but your coordinate system may be different.
I would assume the itemPos calculated above is just the right point; you only need to map it to scene coordinates (cannon->mapToScene(itemPos)).
Lastly, I would strongly advise against rounding scene coordinates (which are doubles) to ints, as is done in the code by forcing them into QPoint instead of QPointF. Just use QPointF whenever you are dealing with scene coordinates.
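Putting those pieces together, a rough sketch of the corrected handler could look like the following (untested; it assumes Game derives from QGraphicsView and that cannon is the QGraphicsPixmapItem):
#include <QtMath>   // qRadiansToDegrees
#include <cmath>    // std::atan2

void Game::mouseMoveEvent(QMouseEvent* e)
{
    // Rotate around the item's own centre, given in item coordinates.
    const QPointF itemCenter = cannon->boundingRect().center();
    cannon->setTransformOriginPoint(itemCenter);

    // Pivot point and mouse position, both in scene coordinates.
    const QPointF pivot = cannon->mapToScene(itemCenter);
    const QPointF mouse = mapToScene(e->pos());

    // Angle of the line from the pivot to the mouse, converted to degrees
    // because QGraphicsItem::setRotation expects degrees.
    const double angleRad = std::atan2(mouse.y() - pivot.y(), mouse.x() - pivot.x());
    cannon->setRotation(qRadiansToDegrees(angleRad));
}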

OpenGL - get mouse position co-ordinates

I am making a 2D board game. The game board grid is 8x8 and each cell of the grid is an object, so a board consists of 64 cell objects. My aim is to work out which cell the mouse is in. I am attempting this by tracking the mouse coordinates and comparing them to the grid coordinates.
My coordinate system is as follows:
gluOrtho2D(-4,4,-4,4);
I am trying to get the current mouse position by using the following in my update function:
POINT p;
if (GetCursorPos(&p))
{
}
if (ScreenToClient(hWnd, &p))
{
}
However, although this tracks the coordinates of the mouse, it does not give me the world coordinates I set with gluOrtho2D. How can I achieve this?
It depends on your glViewport.
Let's say you have:
glViewport(0,0, 640, 640);
The mouse position is (mousePos.x, mousePos.y) and the world position you want to know is (world.x, world.y).
Given that the top-left corner of your screen is the (0, 0) coordinate, we can do the following:
world.x = -4.0 + (mousePos.x / 640.0) * (4*2)
world.y = 4.0 - (mousePos.y / 640.0) * (4*2)
What we are doing here is a linear interpolation: take the normalized position of the mouse within the screen (mousePos.x / 640.0) and multiply it by the width of the world (4*2).
Given that the top-left corner of the grid starts at (-4, 4), we add the offset of the world position.
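As a sketch of how this could be wrapped up with the Win32 calls from the question (the 640x640 viewport and the (-4, 4, -4, 4) ortho range are just the values assumed above; the helper name is illustrative):
#include <windows.h>

struct WorldPos { double x, y; };

// Converts the current cursor position into the gluOrtho2D(-4,4,-4,4) world space,
// assuming a 640x640 viewport whose top-left client pixel is (0, 0).
WorldPos MouseToWorld(HWND hWnd)
{
    POINT p = {0, 0};
    GetCursorPos(&p);          // cursor in screen coordinates
    ScreenToClient(hWnd, &p);  // convert to client (window) coordinates

    const double viewW = 640.0, viewH = 640.0;
    const double left = -4.0, right = 4.0, bottom = -4.0, top = 4.0;

    WorldPos world;
    world.x = left + (p.x / viewW) * (right - left);
    world.y = top  - (p.y / viewH) * (top - bottom);  // screen y grows downwards
    return world;
}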

SDL Screen Render Position Error Along Top of Window

I am making a top down isometric game using SDL 2.0 and C++ and have come across a glitch.
When a texture is rendered to the screen using the SDL_RenderCopy function, the moment the top of the texture hits the top of the screen it gets pushed down by one pixel, thus causing the missing borders seen in the following picture:
Pre-edit with no annotations
Post-edit with annotations
The following is my render function specific to the world itself; the world renders differently from everything else in the game because I simply copy from a "source" texture instead of loading a texture for every single tile in the game, which would be absurdly inefficient.
//-----------------------------------------------------------------------------
// Rendering
DSDataTypes::Sint32 World::Render()
{
//TODO: Change from indexing to using an iterator (pointer) for efficiency
for(int index = 0; index < static_cast<int>(mWorldSize.mX * mWorldSize.mY); ++index)
{
const int kTileType = static_cast<int>(mpTilesList[index].GetType());
//Translate the world so that when camera panning occurs the objects in the world will all be in the accurate position
I am also incorporating camera panning as follows (paraphrased, with some snippets of code included, as my camera panning logic spans multiple files due to the object-oriented design of my game):
(code from above immediately continued below)
mpTilesList[index].SetRenderOffset(Window::GetPanOffset());
//position (dstRect)
SDL_Rect position;
position.x = static_cast<int>(mpTilesList[index].GetPositionCurrent().mX + Window::GetPanOffset().mX);
position.y = static_cast<int>(mpTilesList[index].GetPositionCurrent().mY + Window::GetPanOffset().mY);
position.w = static_cast<int>(mpTilesList[index].GetSize().mX);
position.h = static_cast<int>(mpTilesList[index].GetSize().mY);
//clip (frame)
SDL_Rect clip;
clip.x = static_cast<int>(mpSourceList[kTileType].GetFramePos().mX);
clip.y = static_cast<int>(mpSourceList[kTileType].GetFramePos().mY);
clip.w = static_cast<int>(mpSourceList[kTileType].GetFrameSize().mX);
clip.h = static_cast<int>(mpSourceList[kTileType].GetFrameSize().mY);
I am confused as to why this is happening, as regardless of whether I include my simple culling algorithm or not (as shown below), the same result occurs.
(code from above immediately continued below)
//Check to ensure tile is being drawn within the screen size. If so, rendercopy it, else simply skip over and do not render it.
//If the tile's position.x is greater than the left border of the screen
if(position.x > (-mpSourceList[kTileType].GetRenderSize().mX))
{
//If the tile's position.y is greater than the top border of the screen
if(position.y > (-mpSourceList[kTileType].GetRenderSize().mY))
{
//If the tile's position.x is less than the right border of the screen
if(position.x < Window::msWindowSize.w)
{
//If the tile's position.y is less than the bottom border of the screen
if(position.y < Window::msWindowSize.h)
{
SDL_RenderCopy(Window::mspRenderer.get(), mpSourceList[kTileType].GetTexture(), &clip, &position);
}
}
}
}
}
return 0;//TODO
}
You may have a rounding error when you cast the positions to ints. Perhaps you should round to the nearest integer instead of just truncating (which is what your cast is doing). A tile at position (0.8, 0.8) will be rendered at pixel (0, 0) when it should probably be rendered at position (1, 1).
Or you could ensure that the size of your tiles is always an integer; then errors shouldn't accumulate.
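A small standalone demonstration of that difference, using std::lround (the tile value here is hypothetical):
#include <cmath>
#include <cstdio>

int main()
{
    const double tileY = 0.8;                                  // hypothetical tile position
    const int truncated = static_cast<int>(tileY);             // 0: tile ends up one pixel too high
    const int rounded = static_cast<int>(std::lround(tileY));  // 1: nearest whole pixel
    std::printf("truncated=%d rounded=%d\n", truncated, rounded);
    return 0;
}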
Short version of the answer:
Fixing my data-type issue allowed me to fix my math library, which removed the problem of rounding fractions of a pixel, since there is no such thing as less than one but more than zero pixels on the screen when rendering.
Long Answer:
I believe the issue was caused by a rounding error in the offset logic used when rotating a 2D grid to a diagonal isometric perspective, with the rounding error only occurring for screen coordinates between -1 and +1.
Since I based the conversion from an orthogonal grid to a diagonal grid on the y axis (rows), this explains why the single-pixel offset occurred only at the top border of the screen and not at the bottom border.
Even though every single row had implicit rounding occurring without any safety checks, only the conversion from world coordinates to screen coordinates dealt with rounding between a positive and a negative number.
The reason behind all of this is that my templatized math library had an issue: a lot of my code was based on typedef'd user types such as:
typedef unsigned int Uint32;
typedef signed int Sint32;
So I had simply used DSMathematics::Vector2<float> instead of the proper DSMathematics::Vector2<int>.
The reason this is an issue is that there cannot be "half a pixel" on the screen, and thus integers must be used instead of floating-point values.

Mirroring the Y axis in SFML

Hey, so I'm integrating Box2D and SFML, and Box2D has the same odd, mirrored Y-axis coordinate system as SFML, meaning everything is rendered upside down. Is there some kind of function or short amount of code I can put in that simply mirrors the window's render contents?
I'm thinking I can put something in sf::View to help with this...
How can I easily flip the Y-axis for rendering purposes, without affecting the bodies' dimensions/locations?
I don't know what Box2D is, but when I wanted to flip the Y axis using OpenGL, I just applied a negative scaling factor to the projection matrix, like:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glScalef(1.0f, -1.0f, 1.0f);
If you want to do it independently of OpenGL, simply apply an sf::View with a negative size in y.
It sounds like your model uses a conventional coordinate system (positive y points up), and you need to translate that to the screen coordinate system (positive y points down).
When copying model/Box2D position data to any sf::Drawable, manually transform between the model and screen coordinate systems:
b2Vec2 position = body->GetPosition();
sprite.SetPosition( position.x, window.GetHeight() - position.y );
You can hide this in a wrapper class or function, but it needs to sit between the model and renderer as a pre-render transform. I don't see a place to set that in SFML.
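As a rough illustration of the wrapper idea (the function name is made up, the Box2D include path may differ between versions, and any metres-to-pixels scaling is ignored, just as in the snippet above):
#include <SFML/System/Vector2.hpp>
#include <Box2D/Box2D.h>  // <box2d/box2d.h> in newer Box2D versions

// Convert a Box2D position (y points up) to an SFML screen position (y points down).
sf::Vector2f ToScreen(const b2Vec2& worldPos, float windowHeight)
{
    return sf::Vector2f(worldPos.x, windowHeight - worldPos.y);
}

// Usage when syncing a drawable with its body:
//   sprite.SetPosition(ToScreen(body->GetPosition(), window.GetHeight()));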
I think Box2D has the coordinate system you want; just set the gravity vector based on your model (0, -10) instead of the screen.
How can I easily flip the Y-axis for rendering purposes, without affecting the bodies' dimensions/locations?
By properly applying transforms. First, you can apply a transform that sets the window's bottom-left corner as the origin. Then, scale the Y axis by a factor of -1 to flip it as the second transform.
For this, you can use sf::Transformable to specify each transformation individually (i.e., the setting of the origin and the scaling) and then – by calling sf::Transformable::getTransform() – obtain an sf::Transform object that corresponds to the composed transform.
Finally, when rendering the corresponding object, pass this transform object to the sf::RenderTarget::draw() member function as its second argument. An sf::Transform object implicitly converts to a sf::RenderStates which is the second parameter type of the corresponding sf::RenderTarget::draw() overload.
As an example:
#include <SFML/Graphics.hpp>

auto main() -> int {
    auto const width = 300, height = 300;
    sf::RenderWindow win(sf::VideoMode(width, height), "Transformation");
    win.setFramerateLimit(60);

    // create the composed transform object
    const sf::Transform transform = [height]{
        sf::Transformable transformation;
        transformation.setOrigin(0, height); // 1st transform
        transformation.setScale(1.f, -1.f);  // 2nd transform
        return transformation.getTransform();
    }();

    sf::RectangleShape rect({30, 30});

    while (win.isOpen()) {
        sf::Event event;
        while (win.pollEvent(event))
            if (event.type == sf::Event::Closed)
                win.close();

        // update rectangle's position
        rect.move(0, 1);

        win.clear();
        rect.setFillColor(sf::Color::Blue);
        win.draw(rect);            // no transformation applied
        rect.setFillColor(sf::Color::Red);
        win.draw(rect, transform); // transformation applied
        win.display();
    }
}
There is a single sf::RectangleShape object that is rendered twice with different colors:
Blue: no transform was applied.
Red: the composed transform was applied.
They move in opposite directions as a result of flipping the Y axis.
Note that the object space position coordinates remain the same. Both rendered rectangles correspond to the same object, i.e., there is just a single sf::RectangleShape object, rect – only the color is changed. The object space position is rect.getPosition().
What is different for these two rendered rectangles is the coordinate reference system. Therefore, the absolute space position coordinates of these two rendered rectangles also differ.
You can use this approach in a scene tree. In such a tree, the transforms are applied in a top-down manner from the parents to their children, starting from the root. The net effect is that children's coordinates are relative to their parent's absolute position.

World-Coordinate Issues with gluUnProject()

I'm currently calling Trace (method below) from a game loop. Right now all I'm trying to do is get the world coordinates from the screen mouse so I can move objects around in the world space. The values I'm getting from gluUnProject are however; puzzling me.
I was using glReadPixels(...) to get the Z value, but that produced little to no movement in the object I was drawing, and the resulting vector ended up being the same as my camera's location (except for the tiny decimal changes due to mouse movement), so I decided to get rid of the call and replace the Z value with 1.
My question is: does the following code look right to you? Every example I've seen thus far is either identical or -very- similar, but I can't seem to produce correct results, even if I lock down the Y axis. If the code is correct, then I'm guessing that I'm just not using the resulting vector properly. Should I not be able to draw an object or point directly with the resulting vector, or do I have to do something else with it, like normalize?
The current render mode is GL_RENDER and I am using glFrustum with a NearZ value of 1 and FarZ value of 2048, to create a perspective. There is also a series of viewports created along with scissors, with a size and width of 512x768 and positioned in each corner of a 1024x768 window. Trace(...) is called in between rendering of the upper left viewport and is the only perspective projection, while the other viewports are orthographic. FOV is set to 45.
void VideoWindow::Trace(int cursorX, int cursorY)
{
    double objX, objY, objZ; //holder for world coordinates
    GLint view[4];           //viewport dimensions+pos
    GLdouble p[16];          //projection matrix
    GLdouble m[16];          //modelview matrix
    GLdouble z;              //Z-Buffer Value?

    glGetDoublev(GL_MODELVIEW_MATRIX, m);
    glGetDoublev(GL_PROJECTION_MATRIX, p);
    glGetIntegerv(GL_VIEWPORT, view);

    //view[3]-cursorY = conversion from upper left (0,0) to lower left (0,0)
    //Unproject 2D screen coordinates into wonderful world coordinates
    gluUnProject(cursorX, view[3] - cursorY, 1, m, p, view, &objX, &objY, &objZ);

    //Do something useful here???
}
Any ideas?
Edit: I've changed the winZ value to 0.5 instead of 1, which gives a vector that's more reasonable, but drawing a point still wasn't matching the mouse. I found out that the value of view[3] was 384, which is correct for the viewport I'm using, but I replaced it with 768 (the actual window size) and the point followed the mouse 100%. Further experimentation reveals that I can't use these coordinates to move a 3D object around in the perspective world space; however, moving a 3D object around in orthographic space works fine.
The winZ argument to gluUnProject specifies the depth from the camera at which you're "picking" your points. As you've stated, this coordinate should be in the [0, 1] range.
Some tutorials, like NeHe's, read the z coordinate out of the depth buffer so that you "pick" at the right depth; of course, for this to work you'll have to do the gluUnProject after you've rendered everything else.
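For reference, reading the depth under the cursor could look roughly like this, reusing the variables from Trace above (a sketch; it must run after the scene has been rendered and with the matching viewport bound):
GLfloat winZ = 0.0f;
glReadPixels(cursorX, view[3] - cursorY, 1, 1,
             GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
gluUnProject(cursorX, view[3] - cursorY, winZ, m, p, view, &objX, &objY, &objZ);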
Regardless, if you set winZ to 0.5 or something (not 0 or 1, or the point will end up on the near or far clip plane and maybe be culled) and do the following:
gluUnProject(cursorX, view[3]-cursorY, 0.5, m, p, view, &objX, &objY, &objZ);
//Do something useful here???
glPointSize(10);
glBegin(GL_POINTS);
glColor3f(1, 0, 0);
glVertex3f(objX, objY, objZ);
glEnd();
You should end up with a red blob at the mouse pointer (provided nothing else overdraws it afterwards and you don't have any funny render states that render the point invisible).
Just a thought, but if the third argument to gluUnProject is the z distance to the camera, wouldn't any point you draw at that location be on the near clipping plane of your frustum?
Better make that z value a bit higher.