I am making a 2D board game. The game board grid is 8x8 and each cell of the grid is an object, so a board consists of 64 cell objects. My aim is to work out which cell the mouse is in. I am attempting this by tracking the mouse coordinates and comparing them to the grid coordinates.
My coordinate system is as follows:
gluOrtho2D(-4,4,-4,4);
I am trying to get the current mouse position by using the following in my update function:
POINT p;
if (GetCursorPos(&p))
{
    // Convert from screen pixels to client-area pixels
    if (ScreenToClient(hWnd, &p))
    {
        // p now holds the cursor position relative to the window's client area
    }
}
However, although this tracks the mouse's client coordinates, it does not give me the world coordinates that I set with gluOrtho2D. How can I achieve this?
It depends on your glViewport.
Let's say you have:
glViewport(0,0, 640, 640);
The mouse position is (mousePos.x, mousePos.y) and the world position you want to know is (world.x, world.y).
Given that the top-left corner of your screen is the (0, 0) coordinate, we can do the following:
world.x = -4.0 + (mousePos.x / 640.0) * (4*2)
world.y = 4.0 - (mousePos.y / 640.0) * (4*2)
What we are doing here is a linear interpolation using the normalized position of the mouse within the screen (mousePos.x / 640.0), multiplying that value by the width of the world (4*2).
Given that the top-left corner of the grid starts at (-4, 4), we add the offset of the world position.
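Putting it together for the question's 8x8 board, a minimal sketch (the 640x640 client area and 1x1 world-unit cells are assumptions based on the numbers above):

POINT p;
if (GetCursorPos(&p) && ScreenToClient(hWnd, &p)
    && p.x >= 0 && p.x < 640 && p.y >= 0 && p.y < 640)
{
    // Map client pixels into the gluOrtho2D(-4, 4, -4, 4) world
    double worldX = -4.0 + (p.x / 640.0) * 8.0;
    double worldY =  4.0 - (p.y / 640.0) * 8.0;

    // With 1x1 cells covering [-4, 4] x [-4, 4], the indices fall out directly
    int col = (int)(worldX + 4.0); // 0..7, left to right
    int row = (int)(4.0 - worldY); // 0..7, top to bottom
    // the cursor is inside board cell (row, col)
}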
I want to rotate a QGraphicsPixmapItem around a point according to the mouse position.
So I tried this:
void Game::mouseMoveEvent(QMouseEvent* e)
{
    setMouseTracking(true);
    QPoint midPos((sceneRect().width() / 2), 0), currPos;
    currPos = QPoint(mapToScene(e->pos()).x(), mapToScene(e->pos()).y());
    QPoint itemPos((midPos.x() - cannon->scenePos().x()),
                   (midPos.y() - cannon->scenePos().y()));
    double angle = atan2(currPos.y(), midPos.x()) - atan2(midPos.y(), currPos.x());
    cannon->setTransformOriginPoint(itemPos);
    cannon->setRotation(angle);
}
But the pixmap only moves by a few pixels.
I want a result like this:
Besides the mix-up of degrees and radians that @rafix07 pointed out, there is a bug in the angle calculation. You basically need the angle of the line from midPos to currPos, which you calculate with:
double angle = atan2(currPos.y() - midPos.y(), currPos.x() - midPos.x());
Additionally, the calculation of the transformation origin assumes the wrong coordinate system. The origin must be given in the coordinate system of the item in question (see QGraphicsItem::setTransformOriginPoint), not in scene coordinates. Since you want to rotate around the center of that item, it would just be:
QPointF itemPos(cannon->boundingRect().center());
Then there is the question whether midPos is actually the point highlighted in your image in the middle of the cannon. Its y coordinate is set to 0, which would normally be the edge of the screen, but your coordinate system may be different.
I would assume the itemPos calculated above is just the right point; you only need to map it to scene coordinates (cannon->mapToScene(itemPos)).
Lastly, I would strongly advise against rounding scene coordinates (which are doubles) to ints, as is done in the code by forcing them into QPoints instead of QPointFs. Just use QPointF whenever you are dealing with scene coordinates.
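Putting the pieces together, a sketch of the corrected handler (qRadiansToDegrees comes from <QtMath>; keeping the question's midPos as the pivot is an assumption):

void Game::mouseMoveEvent(QMouseEvent* e)
{
    QPointF midPos(sceneRect().width() / 2, 0); // pivot, as in the question
    QPointF currPos = mapToScene(e->pos());     // stay in qreal, no rounding

    // Angle of the line from midPos to currPos, converted to degrees
    double angle = std::atan2(currPos.y() - midPos.y(),
                              currPos.x() - midPos.x());
    cannon->setTransformOriginPoint(cannon->boundingRect().center());
    cannon->setRotation(qRadiansToDegrees(angle));
}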
I have built a small custom QML item that is used as a selection area (something like the QRubberBand component provided in Qt Widgets). The item also gives the user the ability to resize the content of the selection, so by grabbing the bottom corner of the selection rectangle it is possible to drag to enlarge the content. After the user has finished resizing, I would like to compute the QTransform matrix of the transformation. QTransform provides a convenient QTransform::scale method to get a scale transformation matrix (which I can use by comparing the width and height ratios with the previous size of the selection). The problem is that QTransform::scale assumes that the center point of the transformation is the center of the object, whereas I would like my transformation origin to be the top left of the selection (since the user is dragging from the bottom right).
So for example, if I have the following code:
QRectF selectionRect = QRectF(QPointF(10,10), QPointF(200,100));
// let's resize the rectangle by changing its bottom-right corner
auto newSelectionRect = selectionRect;
newSelectionRect.setBottomRight(QPointF(250, 120));
QTransform t;
t.scale(newSelectionRect.width()/selectionRect.width(), newSelectionRect.height()/selectionRect.height());
The problem here is that if I apply the transformation t to my original selectionRect I don't get my new rectangle newSelectionRect back, but I get the following:
QRectF selectionRect = QRectF(QPointF(10,10)*sx, QPointF(200,100)*sy);
where sx and sy are the scale factors of the transform. I would like a way to compute the QTransform of my transformation that gives back newSelectionRect when applied to selectionRect.
The problem lies in this assumption:
QTransform::scale assumes that the center point of the transformation is the center of the object
All transformations performed by QTransform are relative to the origin of the axes; a transform is just an application of various transformation matrices (https://en.wikipedia.org/wiki/Transformation_matrix).
Also, QTransform::translate (https://doc.qt.io/qt-5/qtransform.html#translate) states:
Moves the coordinate system dx along the x axis and dy along the y axis, and returns a reference to the matrix.
Therefore, what you are looking for is:
QTransform t;
t.translate(+10, +10); // Move the origin to the top left corner of the rectangle
t.scale(newSelectionRect.width()/selectionRect.width(), newSelectionRect.height()/selectionRect.height()); // scale
t.translate(-10, -10); // move the origin back to where it was
QRectF resultRect = t.mapRect(selectionRect); // resultRect == newSelectionRect!
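The same idea as a reusable helper (a sketch; rectTransform is a hypothetical name, and it assumes both rectangles share their top-left corner, as in the example):

// Builds the transform that scales `from` onto `to` about the top-left
// corner of `from` (both rectangles are assumed to share that corner).
QTransform rectTransform(const QRectF &from, const QRectF &to)
{
    QTransform t;
    t.translate(from.left(), from.top());
    t.scale(to.width() / from.width(), to.height() / from.height());
    t.translate(-from.left(), -from.top());
    return t;
}

With the rectangles above, rectTransform(selectionRect, newSelectionRect).mapRect(selectionRect) gives back newSelectionRect.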
I've created a program that draws points to the screen in OpenGL (it draws the letter "X" at a specific point). The drawing is then scaled based on user input.
if (GetAsyncKeyState(VK_UP))
{
/*"zoom" is a global float variable*/
zoom += 0.005;
}
glScaled(1 + zoom, 1 + zoom, 1);
I want to find the new position of the points relative to the screen. (A point may be drawn at (100, 100), but after scaling it may end up somewhere like (150, 200) in screen coordinates, while the rasterisation values stay the same, in this case (100, 100).) Is there a function in OpenGL that can return the new coordinates of a point after specific scaling?
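One option is gluProject from GLU, which maps an object-space point through the current modelview and projection matrices to window coordinates; a minimal sketch (reading the matrices right after the glScaled call is an assumption about where this runs):

#include <GL/glu.h>

GLdouble model[16], proj[16];
GLint viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, model); // includes the glScaled() above
glGetDoublev(GL_PROJECTION_MATRIX, proj);
glGetIntegerv(GL_VIEWPORT, viewport);

GLdouble winX, winY, winZ;
if (gluProject(100.0, 100.0, 0.0, model, proj, viewport,
               &winX, &winY, &winZ) == GL_TRUE)
{
    // (winX, winY) is where the scaled point lands on screen,
    // with winY measured from the bottom of the viewport
}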
I'm trying to detect if my mouse is hovering over a rectangle that I drew (with VBOs), but when I get the mouse coordinates with Mouse.getX() & Mouse.getY(), it returns the window coordinates (e.g. (480, 200)). How can I get the mouse coordinates in the range of [-1, 1]?
Trivial approach
You can do this knowing only the viewport, or, if you are drawing into the whole window, its inner size. Assume mouse coordinates are (0, 0) in the top-left corner.
The following will normalize the input to [-1, 1].
double normalizedX = -1.0 + 2.0 * (double)Mouse.getX() / window.width;
double normalizedY = 1.0 - 2.0 * (double)Mouse.getY() / window.height;
You can also use a more intricate solution by creating an inverse matrix for the viewport and multiply out the mouse input vector.
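For reference, the same normalization written against an arbitrary viewport rather than the whole window (a sketch; x0, y0, w, h mirror the glViewport parameters, with y0 expressed in the same top-left convention as the mouse):

double normalizedX = -1.0 + 2.0 * (mouseX - x0) / w;
double normalizedY =  1.0 - 2.0 * (mouseY - y0) / h;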
In my scene I have terrain that I want to "grab" and then have the camera pan (with its height, view vector, field of view, etc. all remaining the same) as I move the cursor.
So the initial "grab" point will be the working point in world space, and I'd like that point to remain under the cursor as I drag.
My current solution is to take the previous and current screen points, unproject them, subtract one from the other, and translate my camera with that vector. This is close to what I want, but the cursor doesn't stay exactly over the initial scene position, which can be problematic if you start near the edge of the terrain.
// Calculate scene points
MthPoint3D current_scene_point =
    camera->screenToScene(current_point.x, current_point.y);
MthPoint3D previous_scene_point =
    camera->screenToScene(previous_point.x, previous_point.y);

// Make sure the cursor didn't go off the terrain
if (current_scene_point.x != MAX_FLOAT &&
    previous_scene_point.x != MAX_FLOAT)
{
    // Move the camera to match the distance
    // covered by the cursor in the scene
    camera->translate(
        MthVector3D(
            previous_scene_point.x - current_scene_point.x,
            previous_scene_point.y - current_scene_point.y,
            0.0));
}
Any ideas are appreciated.
With some more sleep:
Get the initial position of your intersected point, in world space and in model space (relative to the model's origin),
i.e. use screenToScene().
Create a ray that goes from the camera through the mouse position: {ray.start, ray.dir}.
ray.start is camera.pos; ray.dir is (screenToScene() - camera.pos).
Solve NewPos = ray.start + x * ray.dir knowing that NewPos.y = initialpos_worldspace.y:
-> ray.start.y + x * ray.dir.y = initialpos_worldspace.y
-> x = (initialpos_worldspace.y - ray.start.y) / ray.dir.y (beware of division by zero)
-> reinject x into NewPos_worldspace = ray.start + x * ray.dir
Subtract initialpos_modelspace from that to "re-center" the model.
The last bit seems suspect, though.
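Putting the uncontested steps into code, a minimal sketch of the ray/plane intersection (Vec3 and the function name are hypothetical; the derivation's y-up convention is kept, so swap y and z for the question's z-up setup):

#include <cmath>

struct Vec3 { double x, y, z; };

// Intersect the cursor ray with the horizontal plane through grabPoint.
// Returns false when the ray is (nearly) parallel to the plane.
bool intersectGroundPlane(const Vec3 &rayStart, const Vec3 &rayDir,
                          const Vec3 &grabPoint, Vec3 &hit)
{
    if (std::fabs(rayDir.y) < 1e-9)
        return false; // division by zero: ray parallel to the plane
    double t = (grabPoint.y - rayStart.y) / rayDir.y;
    hit = { rayStart.x + t * rayDir.x,
            rayStart.y + t * rayDir.y,
            rayStart.z + t * rayDir.z };
    return true;
}

For the camera-panning variant in the question, translating the camera by (grabPoint - hit) each frame should keep the grabbed point under the cursor, since moving the camera without rotating it moves the ray's plane intersection by the same amount.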