The picture below is from one of the slides of my computer graphics class. Can anyone please explain what the last 4 lines of code do?
Those 4 lines of code are specific to the application you're writing in your class (or that your teacher is showing you). They appear to be setting up scene coordinates so that the center of the scene always stays aligned with the center of the window. cx is the x coordinate of the center of the scene (in world coordinates, I imagine), and dy is the height of the scene (also in world coordinates). In the last 2 lines, the left and right edges of the scene are adjusted based on the aspect ratio of the window: the new left edge is the center minus half the height times the aspect ratio of the window, and the new right edge is the center plus the same amount.
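Something along these lines is probably what the slide is doing. This is only a sketch, assuming cx/cy hold the scene center and dy its height in world coordinates; cy and the gluOrtho2D call are my guesses, not taken from the slide:

```cpp
#include <GL/glut.h>

double cx = 0.0, cy = 0.0;   // scene center in world coordinates (assumed)
double dy = 10.0;            // scene height in world coordinates (assumed)

void reshape(int width, int height)
{
    glViewport(0, 0, width, height);

    // Widen or narrow the visible world region so the scene center stays
    // centered in the window regardless of the window's aspect ratio.
    double aspect = static_cast<double>(width) / static_cast<double>(height);
    double left   = cx - 0.5 * dy * aspect;
    double right  = cx + 0.5 * dy * aspect;
    double bottom = cy - 0.5 * dy;
    double top    = cy + 0.5 * dy;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(left, right, bottom, top);
    glMatrixMode(GL_MODELVIEW);
}
```

You would register such a function with glutReshapeFunc(reshape) so it runs whenever the window is resized.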
Also, you should understand that OpenGL does not have a Reshape function. That is part of GLUT, a separate library that aids in using OpenGL but is not part of OpenGL itself. I mention that mainly because GLUT is deprecated on some platforms.
My goal is to create an intuitive 3D manipulator to handle rotations of meshes displayed in my 3D editor, made with Qt / QML.
To do that, when the user clicks on an entity, 3 tori are spawned around the mesh, representing the Euler angles the user can act on. If the user then clicks on one torus, I want him to be able to rotate the mesh by dragging the mouse. The natural way users seem to do that is by dragging the mouse around the torus in the direction they want the mesh to rotate.
I therefore need a way to know how the user is rotating his mouse. I thought of a way: when the user clicks on the torus, I retrieve the position of the center of the torus. Then, I translate this world position to its screen position. Then, I monitor the angle between the cursor of the mouse and the center of the torus. The evolution of this angle should tell me everything I need: if the angle increases clockwise, the mesh should rotate clockwise and vice versa. This solution should yield a result good enough for my application, since it won't depend on the angle of the camera, or only very minimally.
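To make the idea concrete, here is a small sketch of the angle-tracking part, assuming the torus center has already been projected to screen coordinates; all names are illustrative:

```cpp
#include <cmath>

const float kPi = 3.14159265358979f;

// Angle of the mouse cursor around the projected torus center, in radians.
float angleAroundCenter(float mouseX, float mouseY,
                        float centerX, float centerY)
{
    return std::atan2(mouseY - centerY, mouseX - centerX);
}

// Called on every mouse-move while a torus is grabbed: the change in angle
// since the last event is the amount to rotate the mesh by.
float rotationDelta(float previousAngle, float currentAngle)
{
    float delta = currentAngle - previousAngle;
    // Unwrap across the -pi/+pi boundary so a small mouse movement never
    // produces a huge jump in the rotation.
    if (delta >  kPi) delta -= 2.0f * kPi;
    if (delta < -kPi) delta += 2.0f * kPi;
    return delta;
}
```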
However, I can't find a way to translate a world position to its screen position with Qt. I found the function QVector3D::project(const QMatrix4x4 &modelView, const QMatrix4x4 &projection, const QRect &viewport), but its documentation is very scarce and I couldn't find anyone using it... I might have found what to feed in for the projection argument (the projectionMatrix property from QCamera, here https://doc.qt.io/qt-5/qml-qt3d-render-camera.html), but that's it. What is the modelView? And the viewport? Is it simply QRect(0, 0, 1920, 1080)?
If anyone has any kind of lead, it would be amazing; I can't find anything anywhere and I'm kind of losing hope now. Or maybe there is another, simpler solution to my problem? Please note that the user can also freely move the camera around the mesh, which adds complexity.

Thanks a lot for your time, and have a nice day!
Yes, you should be able to translate from world position to screen position using the mentioned function. You are correct about the projection argument. As for the modelView argument, you should use the viewMatrix property from QCamera, which is missing from the official documentation, but it works for me. The viewport parameter represents the dimensions of the part of the screen you are projecting onto. You could use QRect(0, 0, 1920, 1080) if you render full-screen at Full HD; otherwise use something like QRect(QPoint(0, 0), view->size()), where view is the widget or window with your 3D image. Be careful: the resulting screen position will have y = 0 at the bottom with positive values going up, which is the opposite of the usual screen coordinates.
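Put together, a minimal sketch of that projection could look like the following (worldToScreen, torusCenter and viewSize are illustrative names, not Qt API):

```cpp
#include <QMatrix4x4>
#include <QPointF>
#include <QRect>
#include <QSize>
#include <QVector3D>
#include <Qt3DRender/QCamera>

// Project a world-space point (e.g. the torus center) to window coordinates.
QPointF worldToScreen(const QVector3D &torusCenter,
                      Qt3DRender::QCamera *camera,
                      const QSize &viewSize)
{
    const QRect viewport(0, 0, viewSize.width(), viewSize.height());
    const QVector3D projected = torusCenter.project(camera->viewMatrix(),
                                                    camera->projectionMatrix(),
                                                    viewport);
    // project() returns y = 0 at the bottom with y growing upward; flip it
    // here to get the usual top-left-origin screen coordinates.
    return QPointF(projected.x(), viewSize.height() - projected.y());
}
```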
I've been using the Bullet physics engine with OpenGL to visualise my simulations. I currently have a very simple simulation: a cube with an initial horizontal and forward velocity falls from the sky and collides with the walls of a room, which are all slanted at 45 degrees, with the bottom of each wall meeting the floor.
I use getOpenGLMatrix to get the orientation, position, etc. of the cube and map it to OpenGL by making that matrix the Model matrix. However, when I run it and visualise the simulation the cube behaves as expected (rolls down the wall), but it does not "touch" the rendered OpenGL wall (I say touch but of course mean the rendered cube does not appear to come near the rendered wall).
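For reference, the mapping described above is usually done roughly like this; a sketch, assuming the default single-precision Bullet build and glm on the OpenGL side (cubeBody and modelMatrixFromBullet are illustrative names):

```cpp
#include <btBulletDynamicsCommon.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Turn the cube's current Bullet transform into an OpenGL model matrix.
glm::mat4 modelMatrixFromBullet(btRigidBody *cubeBody)
{
    btTransform transform;
    cubeBody->getMotionState()->getWorldTransform(transform);

    // getOpenGLMatrix writes a column-major 4x4 matrix, which is the layout
    // OpenGL (and glm) expect, so no transpose is needed. btScalar is float
    // in the default single-precision Bullet build.
    btScalar matrix[16];
    transform.getOpenGLMatrix(matrix);
    return glm::make_mat4(matrix);
}
```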
My Bullet cube is 2x2x2 (specified by btBoxShape(btVector3(1.0f,1.0f,1.0f))).
My OpenGL cube is also 2x2x2, with the origin at 0 and corners 1.0 away in each direction.
The only thing I can think of is that the coordinates in Bullet physics do not map directly to the coordinates of OpenGL (for example, a cube edge of length 1 in Bullet is X pixels, but a cube edge of length 1 in OpenGL is Y pixels). Is this the case? If not, can you think of why I might have this issue? (Obviously I don't expect you to magically know the answer, I'm just wondering if there are any known issues like this.)
Thanks
I'm trying to use the mouse listener in Haskell using OpenGL and have run into a problem. Apparently the x and y coordinates are returned as GLint values. The problem is then using these, because I need a GLfloat. Simple parsing doesn't appear to fix the problem. On a more technical note, why would this return an int at all when the entire size of the screen is represented in OpenGL as only 1 square unit?
GLUT or GLFW, which are the toolkits that you're probably using to access OpenGL, manage windows for you. They have no idea what the current viewport looks like -- heck, you could even be rendering to only a quarter of the window; that would mess up your coordinates pretty badly if GLUT/GLFW cared about OpenGL coordinates! Luckily, they do not; the x and y coordinates that you're getting are actual pixel coordinates on the window, and I think (0, 0) is even in the top left, with the Y axis going downwards. The mouse coordinates are completely separate from OpenGL.
Nonetheless, you can convert a GLint into a GLfloat using fromIntegral.
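For reference, the relationship between those pixel coordinates and OpenGL's normalized device coordinates is just a rescale; the math is language-agnostic, so here it is sketched in C++ (names are illustrative):

```cpp
// Convert a window-space mouse position (origin top-left, y pointing down,
// measured in pixels) into OpenGL normalized device coordinates
// (origin at the center, x and y in [-1, 1], y pointing up).
void mouseToNDC(int mouseX, int mouseY,
                int windowWidth, int windowHeight,
                float &ndcX, float &ndcY)
{
    ndcX = 2.0f * static_cast<float>(mouseX) / windowWidth - 1.0f;
    ndcY = 1.0f - 2.0f * static_cast<float>(mouseY) / windowHeight;
}
```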
I am testing some rendering stuff with OpenGL and I noticed that I have some issues with long thin polygons that form a plane. When two of these long polygons are directly next to each other, meeting along the long side, some of the pixels at the shared edge are invisible. These invisible pixels move around when I move the camera.
What I found is that the pixels at the edge of these "sliver" polygons are invisible because the rasterizer decides that they are not inside the polygon at that specific view angle.
What I didn't figure out is how to tell OpenGL to also put pixels on screen that are directly at the edge of that polygon.
If you find my description of the problem a bit weird, see http://www.ugrad.cs.ubc.ca/~cs314/Vjan2008/slides/week5.day3-4x4.pdf, page 27 and following. That's what I mean.
EDIT: OK, I think I have to make clear what my problem is, because I have a feeling that I can't address it with anti-aliasing techniques.
aaa|b|cc
aaa|b|cc
aaa|b|cc
   ^ ^
   1 2
- The polygons a, b and c form a plane
- Some pixels at edges 1 and 2 are invisible at certain camera angles
What I didn't figure out is how to tell OpenGL to also put pixels on screen that are directly at the edge of that polygon.
In general, you don't. If OpenGL thinks that a part of a triangle is too thin to be rendered at a given resolution, then it's too thin to be rendered. The general form of this issue is called "aliasing".
The solution is to use an antialiasing technique, for example multisampling: when you create the context, request a number of samples to use.
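As a minimal sketch, here is how you could request a multisampled framebuffer with GLUT (already mentioned elsewhere in this thread; GLFW, SDL and Qt have equivalent options that are set before the context is created):

```cpp
#include <GL/glut.h>

void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... draw the long thin triangles here ...
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    // GLUT_MULTISAMPLE asks for a multisampled default framebuffer; how many
    // samples you actually get is up to the implementation.
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_MULTISAMPLE);
    glutInitWindowSize(800, 600);
    glutCreateWindow("multisampled scene");

    // Multisample rasterization is enabled by default on a multisampled
    // framebuffer, so no extra glEnable call is strictly required.
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```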
I'm writing a 2D game using a wrapper of OpenGLES. There is a camera aiming at a bunch of textures, which are the sprites for the game. The user should be able to move the view around by moving their fingers around on the screen. The problem is, the camera is about 100 units away from the textures, so when the finger is slid across the screen to pan the camera, the sprites move faster than the finger due to the parallax effect.
So basically, I need to convert 2D screen coordinates to 3D coordinates at a specific z distance away (in my case 100 away, because that's how far away the textures are).
There are some "Unproject" functions in C#, but I'm using C++ so I need the math behind this function. I'm extremely new to 3D stuff and I'm very bad at math, so if you can explain it like you are explaining to a 10-year-old, that would be much appreciated.

If I can do this, I can pan the camera at such a speed that it looks like the distant sprites are panning with the user's finger.
For picking purposes, there are better ways than doing a reverse projection. See this answer: https://stackoverflow.com/a/1114023/252687
In general, you will have to scale your finger-movement distance to use it in a far-away plane (z units away).
i.e., if l is the amount of finger movement and you want to find the effect z units away, the length is l' = l/z.
But please check the effect and adjust l' (double/halve, etc.) until you get the desired result.
Found the answer at:
Wikipedia
It has the following formula:
To determine which screen x-coordinate corresponds to a point at Ax, Az, multiply the point coordinates by:

Bx = Ax * Bz / Az

where

Bx is the screen x coordinate,
Ax is the model x coordinate,
Bz is the focal length (the axial distance from the camera center to the image plane), and
Az is the subject distance.
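Applied to the panning problem, the formula can be inverted to turn a finger movement in pixels into a movement in world units at the sprite plane; a sketch, assuming a standard perspective projection (all names are illustrative):

```cpp
#include <cmath>

// screenX = worldX * focalLength / planeDistance   (Bx = Ax * Bz / Az)
// =>  worldX = screenX * planeDistance / focalLength
float screenDragToWorldDrag(float dragPixels,      // finger movement on screen
                            float planeDistance,   // e.g. 100: distance to the sprites
                            float fovYRadians,     // vertical field of view
                            float viewportHeight)  // window height in pixels
{
    // Focal length expressed in pixels for this field of view and viewport.
    float focalLength = (viewportHeight * 0.5f) / std::tan(fovYRadians * 0.5f);
    return dragPixels * planeDistance / focalLength;
}
```

Moving the camera by that world-space amount keeps the sprites at the given distance visually locked to the finger.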