Algorithm to zoom into mouse (OpenGL) - C++

I have an OpenGL scene with a top-left coordinate system. When I glScale, it zooms in from (0,0), the top left. I want it to zoom in from the mouse's coordinates (relative to the OGL frame). How is this done?
Thanks

I believe this can be done in four steps:
1. Find the mouse's x and y coordinates using whatever function your windowing system (e.g. GLUT or SDL) provides for that, and use gluUnProject to get the object coordinates that correspond to those window coordinates
2. Translate by (x,y,0) to put the origin at those coordinates
3. Scale by your desired vector (i,j,k)
4. Translate by (-x,-y,0) to put the origin back at the top left
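With the fixed-function pipeline, steps 2-4 look roughly like this (a sketch, assuming mouseX/mouseY have already been converted to scene coordinates in step 1, zoom is your scale factor, and drawScene() is a hypothetical function that draws your scene):
glTranslatef(mouseX, mouseY, 0.0f);    // step 2: put the origin at the mouse coordinates
glScalef(zoom, zoom, 1.0f);            // step 3: scale around that point
glTranslatef(-mouseX, -mouseY, 0.0f);  // step 4: put the origin back at the top left
drawScene();                           // hypothetical: draw the scene as usual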

I did a smooth zoom-in using glOrtho. The skeleton of my solution is:
glOrtho(initial viewport x, y & size)
glCallList(my display list)
render
...
loop that gradually moves to the final viewport coordinates/size; implement your timing and FPS requirements here
...
glOrtho(final viewport x, y & size)
glCallList(my display list)
render
I hope you get the general idea. There are a few other methods to achieve this, but I find the glOrtho method the easiest to comprehend.
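A rough sketch of that loop (assuming sceneList is the display list, startLeft/startRight/startBottom/startTop and endLeft/... are the initial and final ortho bounds, and swapAndPaceFrame() is a hypothetical helper that swaps buffers and handles your timing/FPS):
for (int i = 0; i <= nSteps; ++i) {
    float t = (float)i / nSteps;                    // interpolation factor, 0 -> 1
    float left   = startLeft   + t * (endLeft   - startLeft);
    float right  = startRight  + t * (endRight  - startRight);
    float bottom = startBottom + t * (endBottom - startBottom);
    float top    = startTop    + t * (endTop    - startTop);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(left, right, bottom, top, -1.0, 1.0);   // gradually shrinking view volume

    glMatrixMode(GL_MODELVIEW);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glCallList(sceneList);                          // my display list
    swapAndPaceFrame();                             // hypothetical: swap buffers, pace FPS
}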

Related

Get screen position of the center of a mesh

My goal is to create an intuitive 3D manipulator to handle rotations of meshes displayed in my 3D editor, made with Qt / QML.
To do that, when the user clicks on an entity, 3 tori are spawned around the mesh, representing the Euler angles the user can act on. If the user then clicks on one torus, I want them to be able to rotate the mesh by dragging the mouse. The natural way users seem to do that is by dragging the mouse around the torus in the direction they want the mesh to rotate.
I therefore need a way to know how the user is rotating the mouse. I thought of a way: when the user clicks on the torus, I retrieve the position of the center of the torus. Then I translate this world position to its screen position. Then I monitor the angle between the mouse cursor and the center of the torus. The evolution of this angle should tell me everything I need: if the angle increases clockwise, the mesh should rotate clockwise, and vice versa. This solution should yield a result good enough for my application, since it won't depend on the angle of the camera, or only very minimally.
However, I can't find a way to translate a world position to its screen position with Qt. I found the function QVector3D::project(const QMatrix4x4 &modelView, const QMatrix4x4 &projection, const QRect &viewport), but its documentation is very scarce and I couldn't find anyone using it... I might have found what to feed in for the projection argument (the projectionMatrix property from QCamera, here https://doc.qt.io/qt-5/qml-qt3d-render-camera.html), but that's it. What is the modelView? And viewport? Is it simply QRect(0, 0, 1920, 1080)?
If anyone has any kind of lead, it would be amazing; I can't find anything anywhere and I'm kind of losing hope now. Or maybe another, simpler solution to my problem? Please note that the user can also freely move the camera around the mesh, which adds complexity.
Thanks a lot for your time, and have a nice day!
Yes, you should be able to translate from a world position to a screen position using the mentioned function. You are correct about the projection argument. As for the modelView argument, use the viewMatrix property from QCamera; it is missing from the official documentation, but it works for me. The viewport parameter represents the dimensions of the part of the screen you are projecting onto. You could use QRect(0, 0, 1920, 1080) if you render full screen at Full HD; otherwise use something like QRect(QPoint(0, 0), view->size()), where view is the widget or window showing your 3D image. Be careful: the resulting screen position will have y = 0 at the bottom with positive values going up, which is the opposite of the usual screen coordinates.
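A minimal sketch (assuming camera is your Qt3DRender::QCamera* and viewSize is the size of the window or widget showing the scene; worldToScreen is just a hypothetical helper name):
#include <Qt3DRender/QCamera>
#include <QVector3D>
#include <QPointF>
#include <QRect>
#include <QSize>

QPointF worldToScreen(const QVector3D &worldPos,
                      const Qt3DRender::QCamera *camera,
                      const QSize &viewSize)
{
    const QRect viewport(0, 0, viewSize.width(), viewSize.height());
    // project() returns window coordinates with the origin at the bottom left
    const QVector3D win = worldPos.project(camera->viewMatrix(),
                                           camera->projectionMatrix(),
                                           viewport);
    // flip y so the result matches the usual top-left screen convention
    return QPointF(win.x(), viewSize.height() - win.y());
}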

mfc when to use logical/device coordinates

I have heard that rectangles, mouse coordinates and other things involving drawing all use device coordinates. Is this true? Is there any way to tell whether I have logical or device coordinates?
I could look at the documentation of functions that give me the coordinates, but sometimes they don't explicitly say if these are logical or device coordinates. For example, the documentation for GetCursorPos function says it "retrieves the position of the mouse cursor, in screen coordinates."
I am assuming screen coordinates are the same as device coordinates? Does this mean I have to convert the screen coordinates I get from the function into client coordinates?
You know that coordinate (0,0) is the top-left corner of the screen. But on paper, when we draw a graph, (0,0) may be at the bottom left, or at the centre of the graph paper.
By default, the logical coordinates and the physical/device coordinates are the same, and (0,0) is the top left. But what if you want to draw a line from the bottom left to somewhere in the middle of the screen, matching the math/trigonometry you've learnt or are practising? Then you change the logical coordinate system to something of your liking.
You'd use SetMapMode to change the logical coordinate system. You may later use LPtoDP, DPtoLP, ClientToScreen, ScreenToClient etc. for mapping between logical and device coordinates, and between screen and window coordinates.
About Coordinate Spaces and Transformations
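To the last part of the question: yes, the position from GetCursorPos is in screen (device) coordinates, and to use it for drawing you typically convert screen -> client -> logical. A sketch for MFC, assuming it runs inside a CView member function (so ScreenToClient resolves to CWnd::ScreenToClient) and pDC is the CDC* you draw with:
CPoint pt;
::GetCursorPos(&pt);   // screen (device) coordinates
ScreenToClient(&pt);   // screen -> client (still device) coordinates of this window
pDC->DPtoLP(&pt);      // device -> logical; honours SetMapMode and the window/viewport origins
// pt is now in the logical coordinates used by the drawing functions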

GPU mouse picking OpenGL/WebGL

I understand I need to render only a 1x1 or 3x3 pixel part of the screen where the mouse is, with object IDs as colors, and then get the ID from the color.
I have implemented ray-cast picking with spheres, and I am guessing it has something to do with making the camera look in the direction of the mouse ray?
How do I render the correct few pixels?
Edit:
Setting the camera in the direction of the mouse ray works, but if I make the viewport smaller, the picture scales; what (I think) I need is for it to be cropped rather than scaled. How would I achieve this?
The easiest solution is to use the scissor test. It allows you to render only pixels within a specified rectangular sub-region of your window.
For example, to limit your rendering to 3x3 pixels centered at pixel (x, y):
glScissor(x - 1, y - 1, 3, 3);
glEnable(GL_SCISSOR_TEST);
glDraw...(...);
glDisable(GL_SCISSOR_TEST);
Note that the origin of this coordinate system is at the bottom left of the window, while most window systems report mouse coordinates with the origin at the top left. If that's the case on your system, you will have to flip the y-coordinate by subtracting it from windowHeight - 1, i.e. use windowHeight - 1 - y.
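Putting it together, a sketch of a full picking pass (assuming drawSceneWithIdColors() is a hypothetical function that renders every object in a flat color encoding its ID, and mouseX/mouseY are window coordinates with the origin at the top left):
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
int x = mouseX;
int y = viewport[3] - 1 - mouseY;          // flip y to OpenGL's bottom-left origin

glScissor(x - 1, y - 1, 3, 3);             // rasterize only a 3x3 region around the mouse
glEnable(GL_SCISSOR_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawSceneWithIdColors();
glDisable(GL_SCISSOR_TEST);

unsigned char pixel[4];
glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
unsigned int id = pixel[0] | (pixel[1] << 8) | (pixel[2] << 16);   // decode the object ID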

the order of translate and scale for zoom and pan

The first thing I want to do is translate to the center of the screen and draw all of the objects from there.
Then I would like to apply translate for panning and scale for zoom. I want to zoom relative to a center point! So what should the order be so that it works?
glTranslatef(width/2, height/2, 0);
glTranslatef(centerX, centerY, 0); // go to center point
glScalef(zoom, zoom, 1);
glTranslatef(offset.x/zoom, offset.y/zoom, offset.z/zoom); // pan
I tried the above order, but it doesn't go to the center point and it always zooms relative to (0,0).
I suppose you are drawing a square with both x and y between 0 and 1.
First you have to translate to the point where the scaled object should be:
glTranslatef(centerX, centerY, 0);
glScalef(zoom, zoom, 1);
glTranslatef(-0.5f, -0.5f, 0); // to the middle
draw stuff
OpenGL applies the transformations to the vertices in the reverse order of the calls, since each call multiplies onto the current matrix.
Reading the above sequence bottom-up gives the key.
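Applied to the question, a sketch of an order that zooms relative to (centerX, centerY) and then pans (assuming offset holds the pan in screen units and zoom is the scale factor; remember the calls act on the vertices bottom-up):
glLoadIdentity();
glTranslatef(offset.x, offset.y, 0.0f);    // 4) finally pan the zoomed scene
glTranslatef(centerX, centerY, 0.0f);      // 3) move the pivot back
glScalef(zoom, zoom, 1.0f);                // 2) scale
glTranslatef(-centerX, -centerY, 0.0f);    // 1) move (centerX, centerY) to the origin
drawScene();                               // hypothetical draw call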

Tracing a ray from the camera to the mouse pointer in GLUT

I'm not really sure if it makes sense, but I need to do it, and there is a chance there won't be any obstacle in front.
It's not clear what you mean by tracing, but:
1. You may be looking for gluUnProject to go from screen coordinates to 3D space coordinates. With the help of the Z buffer for the distance from the camera, you can get the coordinates of the 3D point which is seen at the specified pixel.
2. You want to draw a 3D line from the camera origin to some 3D point at the mouse cursor. Seen from the camera, such a line projects to just a point at the mouse cursor.
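A sketch of option 1 (assuming mouseX/mouseY are the GLUT mouse coordinates, which have their origin at the top left):
GLdouble model[16], proj[16];
GLint viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, model);
glGetDoublev(GL_PROJECTION_MATRIX, proj);
glGetIntegerv(GL_VIEWPORT, viewport);

GLdouble winX = mouseX;
GLdouble winY = viewport[3] - 1 - mouseY;            // flip y to OpenGL's bottom-left origin
GLfloat  winZ = 0.0f;
glReadPixels((GLint)winX, (GLint)winY, 1, 1,
             GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);   // depth of the pixel under the mouse

GLdouble objX, objY, objZ;
gluUnProject(winX, winY, winZ, model, proj, viewport, &objX, &objY, &objZ);
// (objX, objY, objZ) is the 3D point seen at the mouse pixel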