How to clip rendering in OpenGL (C++)

How to clip rendering in OpenGL (simple rectangle area)?
Please post a C++ example.

What you probably need is OpenGL's scissor mechanism.
It clips rendering of pixels that do not fall into a rectangle defined by x, y, width and height parameters.
Note also that, when enabled, this OpenGL state affects the glClear command by restricting the area cleared.
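A minimal sketch of the scissor approach (assumes a current OpenGL context; the rectangle values are illustrative, measured in pixels from the window's lower-left corner):

```cpp
glEnable(GL_SCISSOR_TEST);
glScissor(40, 50, 200, 100);   // x, y, width, height in window pixels

glClear(GL_COLOR_BUFFER_BIT);  // with the test enabled, clears only
                               // the scissored rectangle

// ... draw here; fragments outside the rectangle are discarded ...

glDisable(GL_SCISSOR_TEST);    // back to unrestricted rendering
```

Unlike the viewport, the scissor box does not change the projection: geometry is simply cut off at the rectangle's edges.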

If you only want to display a specific rectangle, you need a combination of something like glFrustum or glOrtho along with glViewport. It's actually glViewport that sets the rectangle on screen; glFrustum, glOrtho (gluPerspective, etc.) then map some set of world coordinates to that rectangle. Typically you hardly notice glViewport, because it's normally set to the entire area of whatever window you're using, and what you change is the mapping to get different views in the window.
If you adjust only glFrustum (for example), the display area on the screen stays the same and you just change the mapping, so you still fill the entire window area and effectively move the virtual camera around, zooming in or out (etc.) on the "world" being displayed. Conversely, if you adjust only glViewport, you display exactly the same data, but into a smaller rectangle.
To "clip" the data to the smaller rectangle, you need to adjust both at once, in more or less "opposite" directions: as your viewport rectangle gets smaller, you zoom in your view frustum to compensate.

Related

Is it possible to separate normalized device coordinates and window clipping in openGL (glViewport)

Is there a way to set a transformation for NDC to window, but separately specify the clipping region so it matches the actual window size?
Background: I have a bunch of OpenGL code that renders a 2D map to a window. It's a lot of complex code, because I use both the GPU and the CPU to draw on the map, so it's important that I keep to a consistent coordinate system in both places. To keep that simple, I use glViewport(0, 0, mapSizeX, mapSizeY), and now map coordinates correspond well to pixel coordinates in the frame buffer, exactly what I need. I can use GLSL to draw some of the map, call glReadPixels and use the CPU to draw on top of that, and glDrawPixels to send that back to the frame buffer, all of that using the same coordinate system. Finally, I use GLSL to draw a few final things over that (which I don't want zoomed). That all works, except...
The window isn't the same size as the map, and glViewport doesn't just set up the transformation. It also sets up clipping. So now when I go to draw a few last items, and the window is larger than the map, things I draw near the top of the screen get clipped away. Is there a workaround?
glViewport doesn't just set up the transformation. It also sets up clipping.
No, it just sets up the transformation. By the time the NDC-to-window space transform happens, clipping has already been done. That happened immediately after vertex processing; your vertex shader (or whatever you're doing to transform vertices) handled that based on how it transformed vertices into clip-space.
You should use the viewport to set up how you want the NDC box to visibly appear in the window. Your VS needs to handle the transformation into the clipping area. So it effectively decides how much of the world gets put into the NDC box that things get clipped to.
Basically, you have map space (the coordinates used by your map) and clip-space (the coordinates after vertex transformations). And you have some notion of which part of the map you want to actually draw to the window. You need to transform the region of your map that you want to see such that the corners of this region appear in the corners of the clipping box (for orthographic projections, this is typically [-1, 1]).
In compatibility OpenGL, this transform might be set up with glOrtho for orthographic projections. In a proper vertex shader, you'll need to provide an appropriate orthographic matrix.

Methods of zooming in/out with openGL (C++)

I am wondering what kind of methods are commonly used when we do zoom in/out.
In my current project, I have to display millions of 2D rectangles on the screen, and I am using a fixed viewport and changing the gluOrtho2D parameters when I have to zoom in/out.
I am wondering if this is a good way of doing it and what other solution can I use.
I also have another question which I think is related to how I should do zoom in/out.
As I said, I am currently using a fixed viewport and changing the gluOrtho2D parameters in my code, and I assumed that OpenGL would be able to figure out which rectangles are off screen and not render them. However, it seems like OpenGL is redrawing all the rectangles again and again. The rendering time for viewing millions of rectangles (zoomed out) is equal to that for viewing hundreds of rectangles (zoomed into a particular area), which is the opposite of what I expected. I am wondering if this is related to the zooming method I used, or am I missing something important.
i.e. I am using a VBO while rendering the rectangles.
and I assumed that OpenGL would be able to figure out which rectangles are off screen
You assumed wrong.
and not render them.
OpenGL is a rather dumb drawing API. There's no such thing as a scene in OpenGL. All it does is color pixels on the framebuffer, one point, line or triangle at a time. When geometry lies outside the viewport, it still has to be processed up to the point where it gets clipped (and then discarded).
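So any culling has to happen on your side, before the draw call. A minimal sketch of a CPU-side visibility test against the current gluOrtho2D region (the names are illustrative):

```cpp
struct Rect { float x, y, w, h; };   // axis-aligned, origin at lower-left

// True if the rectangle overlaps the visible world region at all.
bool overlapsView(const Rect& r, const Rect& view)
{
    return r.x < view.x + view.w && r.x + r.w > view.x &&
           r.y < view.y + view.h && r.y + r.h > view.y;
}
```

Skipping off-screen rectangles this way (or better, with a spatial index such as a quadtree) is what makes a zoomed-in view cheaper than a zoomed-out one; OpenGL itself still clips per primitive, but only after the vertices have been processed.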

How to find out how many units across the screen plane in OpenGL

How would one get the relative size of the viewing plane in OpenGL's own units? I need to find out the width and height in "OpenGL units". Is there a function that will retrieve this information?
I assume that one unit (let us say 1.0f) in Z would be equivalent to one unit in X, even if conversion to a real measurement system is meaningless.
I know I can get the screen size either by use of GetSystemMetrics(SM_CXSCREEN) or glutGet(GLUT_SCREEN_WIDTH), but this is in pixels.
To handle the graphical window calls, I am using freeglut on non-windows OSes and the WinAPI on Windows.
Assuming you want to draw something like a UI, set your projection matrix to an Orthographic matrix with glOrtho, then you don't have any perspective and have a direct orthographic mapping between world coordinates and screen coordinates. The arguments to your glOrtho call determine how wide/high your view port is in world coordinates.
If you want to draw both a UI and a 3D scene, draw the UI with glOrtho and draw the scene with gluPerspective, using a clipping mask to make sure you don't ruin your UI.
If, on the other hand, you want to know the width of the view port in a 3D scene with perspective, so that you know how big to draw your object, then you'll have to deal with the perspective projection. You need to know at which Z coordinate you want the width/height of the view port. You can use gluUnProject to calculate the world coordinate corresponding to a given screen coordinate and Z plane.
However it would probably be better to do it the other way around, always draw your object with a given size and then calculate what your projection matrix should be to have that object appear properly in your view port.
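For the perspective case, the visible size at a given distance can also be computed directly from the field of view passed to gluPerspective; this is pure math with no GL calls, and the helper names are illustrative:

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;

// Height of the visible slice of the view frustum at distance z in front
// of the camera, for a projection set up like gluPerspective(fovyDeg, aspect, ...).
double visibleHeightAt(double fovyDeg, double z)
{
    return 2.0 * z * std::tan(fovyDeg * kPi / 360.0);  // tan of fovy/2, in radians
}

double visibleWidthAt(double fovyDeg, double aspect, double z)
{
    return visibleHeightAt(fovyDeg, z) * aspect;       // width = height * aspect
}
```

For example, with a 90-degree vertical FOV, the view plane one unit in front of the camera is exactly two units tall.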

Make an object clickable (click if the mouse is in the area of object)

I have a rectangle on my window and I am trying to make this rectangle click-able by defining the area of the rectangle.
If the mouse click is inside this area, then it's a click else not.
For example: on the window, let's assume the vertex of the rectangle is:
x = 40, y = 50; width = 200, height = 100;
So, a click will be counted when
(mouseXPos > getX()) && (mouseXPos < (getX()+width)) && (mouseYPos > getY()) && (mouseYPos < (getY()+height))
Now, I am applying a lookAt transformation to the object by inheriting a class which has lookAt functions. I am also using a camera to check the different faces of the object (camera rotation), so the object rotates along various axes and shows different faces when the camera is used.
However, when the object moves, I would have thought the vertices of the rectangle would change. The vertices of the rectangle should also have changed after the gluLookAt call, but it looks like they do not, and my click area always remains stationary at those points although the object is not there. How do I tackle this problem? How do I make my object clickable and add some mouse events to it?
If you're trying to click on 3D shapes, and you are moving the camera, I wouldn't check this in screen coordinates.
Instead, you can project the point where the user has clicked into a line in 3D space, and intersect that against the objects which can be clicked on.
GLU has a function for this: gluUnProject()
If you give that function the details of your view, along with the screen point being clicked on, it can tell you where in 3D space the user has clicked. If you check a point at both the near and far planes, the line between these points can be checked against your object.
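Once you have the two unprojected points, the hit test itself is plain geometry. A sketch of a segment-versus-axis-aligned-box slab test, for box-shaped clickables (illustrative names, no GL calls):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

// Does the segment p0 -> p1 (e.g. the unprojected near/far points)
// intersect the axis-aligned box [boxMin, boxMax]?
bool segmentHitsBox(Vec3 p0, Vec3 p1, Vec3 boxMin, Vec3 boxMax)
{
    double tMin = 0.0, tMax = 1.0;                      // segment parameter range
    const double o[3]  = { p0.x, p0.y, p0.z };
    const double d[3]  = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    const double lo[3] = { boxMin.x, boxMin.y, boxMin.z };
    const double hi[3] = { boxMax.x, boxMax.y, boxMax.z };
    for (int i = 0; i < 3; ++i) {
        if (std::abs(d[i]) < 1e-12) {                   // parallel to this slab
            if (o[i] < lo[i] || o[i] > hi[i]) return false;
        } else {
            double t1 = (lo[i] - o[i]) / d[i];
            double t2 = (hi[i] - o[i]) / d[i];
            if (t1 > t2) std::swap(t1, t2);
            tMin = std::max(tMin, t1);                  // tighten the overlap
            tMax = std::min(tMax, t2);
            if (tMin > tMax) return false;              // slabs don't overlap
        }
    }
    return true;
}
```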
The other alternative, is some kind of ID buffer, where you render the scene off-screen, but instead of outputting shaded pixels, you output ID values for each object. The hit-test is then just a matter of reading a pixel from that buffer.
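A sketch of the ID-buffer bookkeeping: pack each object's ID into the color written during the off-screen pass, then decode the pixel under the cursor once it is read back with glReadPixels. This assumes an 8-bit-per-channel render target; the names are illustrative:

```cpp
#include <cstdint>

struct Rgb { std::uint8_t r, g, b; };

// Pack a 24-bit object ID into a color for the ID pass...
Rgb encodeId(std::uint32_t id)
{
    return { std::uint8_t(id & 0xFF),
             std::uint8_t((id >> 8) & 0xFF),
             std::uint8_t((id >> 16) & 0xFF) };
}

// ...and recover it from the pixel under the mouse.
std::uint32_t decodeId(Rgb c)
{
    return std::uint32_t(c.r)
         | (std::uint32_t(c.g) << 8)
         | (std::uint32_t(c.b) << 16);
}
```

Render the scene off-screen with each object flat-shaded in encodeId(objectId), read one pixel at the mouse position, and decodeId gives you the clicked object directly.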
Seeing that nobody else is answering, I will give it a try. :)
Though not very experienced with OpenGL, this seems like the expected behavior to me, and it looks like you are mixing up world coordinates with on-screen coordinates.
Think of it this way: if you take a picture of a table in your living room from two different angles, it will look different, but in both images it occupies the same space in that room. The same can be said about entities when working with OpenGL: even though you move the camera, the coordinates of the entity do not change, just the perception of it.
Now, if I'm not mistaken, you can translate your world coordinates into on-screen coordinates by applying the same transformations as applied by OpenGL. I would suggest taking a look at this: OpenGL: projecting mouse click onto geometry
Thumbs up for the ID buffer alternative. I used it several times and it was really the right thing to do: no expensive geometry picking, and an almost total disconnection from the underlying geometry.
Still, you can only pick the closest geometry (that is, the one "visible" on the screen) and not the models that could be hidden behind it. The same problem appears when dealing with translucent materials.
One solution could be to do some ID peeling, rendering up to four overlapping IDs into an RGBA ID texture (I have a feeling this could be non-optimal, but it's worth a shot).

Using texture during window resize

So what I'm trying to do is copy the current front buffer to a texture and use this during a resize to mimic what a normal window resize would do. I'm doing this because the scene is too expensive to render during a resize and I want to provide a fluid resize.
The texture copying is fine, but I'm struggling to work out the maths to get the texture to scale and translate appropriately (I know there will be borders visible when magnifying beyond the largest image dimension).
Can anyone help me?
but I'm struggling to work out the maths to get the texture to scale and translate appropriately
Well, this depends on which axis your field of view is based on. If it's the vertical axis, then increasing the width/height ratio will inevitably lead to letterboxing left and right. Similarly, if your FOV is based on the horizontal axis, increasing height/width will letterbox top and bottom. If the aspect varies in the opposite direction, you have no letterboxing, because you don't need additional picture information.
There's unfortunately no one-size-fits-all solution for this. Either you live with some borders, or you stretch your image without keeping the aspect, or before resizing the window you render with a much larger FOV to a square texture of which you show only a subset.
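For the first option (living with borders), the scale and translation of the saved frame reduce to fitting one rectangle inside another while preserving aspect. A sketch, with illustrative names:

```cpp
#include <algorithm>

struct Placement { double scale, offsetX, offsetY; };

// Fit a texW x texH image into a winW x winH window without cropping or
// distorting it; the remaining space becomes the letterbox borders.
Placement fitKeepAspect(double texW, double texH, double winW, double winH)
{
    double s = std::min(winW / texW, winH / texH);  // largest scale that still fits
    return { s,
             (winW - texW * s) / 2.0,               // center horizontally
             (winH - texH * s) / 2.0 };             // center vertically
}
```

Draw the textured quad at (offsetX, offsetY) with size (texW * scale, texH * scale) in window coordinates, e.g. under a glOrtho(0, winW, 0, winH, -1, 1) projection.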