How to drag a line segment by selecting a vertex - OpenGL

The line loop is drawn with OpenGL (GL_LINE_LOOP). I need to pick a vertex of the line segment and then drag it somewhere on the 2D screen.
My idea is to pick the vertex using the OpenGL picking method, with the buffer storing the hit records created by glSelectBuffer. The problem is: how can I tell which vertex was selected from the information in the returned buffer? The buffer stores the name of the vertex, but it seems the vertex does not have a name in GL_RENDER mode?
Update: Is there any other convenient way to drag lines with the mouse?

OpenGL is not a scene graph (gah, it seems every other question on OpenGL I answer begins with this statement). After you've drawn something, OpenGL no longer has any recollection of what you actually sent it. The old OpenGL selection mode technically just tests whether the submitted geometry falls within the projection's clip-space range. On most OpenGL implementations, selection mode falls back to software rendering, so you'll take a major performance hit.
There are several better ways to do selection (that's why selection mode has been removed from OpenGL, after all). If it's just single vertices of specific geometry (like a selection rubberband) that you're after, you should perform the whole transformation of those points into normalized device coordinates yourself and sort them into some screen-space spatial subdivision structure (2D kd-tree, quadtree, etc.), so that you can determine the clicked point in O(log n) time – in contrast to the O(n) you'd get with selection mode, where you have to "draw" the whole rubberband so that all points get tested.
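For a single line loop, even the brute-force version of this CPU-side transform is enough to start with (swap the loop for a kd-tree/quadtree query once the point count grows). A minimal sketch, assuming a hypothetical Vec3 vertex type and the legacy matrix stacks; gluProject does the object-to-window transform:

    #include <vector>
    #include <GL/glu.h>

    struct Vec3 { double x, y, z; };          // hypothetical vertex type

    // Return the index of the vertex within pickRadius pixels of the
    // mouse click, or -1. Brute force O(n) over the point list.
    int pickVertex(const std::vector<Vec3>& verts, int mouseX, int mouseY)
    {
        GLdouble model[16], proj[16];
        GLint view[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, view);

        const double pickRadius = 6.0;        // pixels
        int best = -1;
        double bestD = pickRadius * pickRadius;

        for (size_t i = 0; i < verts.size(); ++i) {
            GLdouble wx, wy, wz;
            gluProject(verts[i].x, verts[i].y, verts[i].z,
                       model, proj, view, &wx, &wy, &wz);
            // GL window origin is bottom-left; mouse is usually top-left
            double dx = wx - mouseX;
            double dy = wy - (view[3] - mouseY);
            double d  = dx * dx + dy * dy;
            if (d < bestD) { bestD = d; best = (int)i; }
        }
        return best;
    }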
EDIT/Update
Since OpenGL is (just) a drawing API, you also can't "drag around" things. You have to redraw them. Technically you should redraw the whole scene; or, when a drag starts, render the scene without the object about to be dragged into a texture (color and maybe depth), and then for each dragging step clear the view to the cached contents of that texture and draw the dragged object on top in its updated position.
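A sketch of what the simple redraw-everything variant looks like with GLUT-style callbacks (pickVertex is the helper from above; the vertices container is an assumed application global):

    int draggedVertex = -1;                   // -1 = no drag in progress

    void onMouseButton(int button, int state, int x, int y)
    {
        if (button == GLUT_LEFT_BUTTON)
            draggedVertex = (state == GLUT_DOWN) ? pickVertex(vertices, x, y)
                                                 : -1;   // release ends drag
    }

    void onMouseMove(int x, int y)
    {
        if (draggedVertex < 0) return;
        GLdouble model[16], proj[16], ox, oy, oz;
        GLint view[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, view);
        // in an orthographic 2D view, x/y do not depend on the winZ argument
        gluUnProject(x, view[3] - y, 0.0, model, proj, view, &ox, &oy, &oz);
        vertices[draggedVertex].x = ox;
        vertices[draggedVertex].y = oy;
        glutPostRedisplay();                  // trigger the full redraw
    }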

Related

Select object in OpenGL when doing transformations in the vertex shader

I'm pretty new to OpenGL and am trying to implement a simple program where I can draw cubes, move them around with the mouse, and delete them.
Previously I had done my drag operations by translating on the CPU. In this way I was able to use ray-tracing to pick out the element I wanted, because the vertices themselves were being updated.
However, I'm trying to move all of the transformations to the GPU, and in doing so realized that I would be giving up updated access to the vertices on the CPU (the CPU still thinks the vertices are the untransformed ones). How does one handle this so that I don't have to do the transformations on the CPU as well as in the vertex shader?
No matter where you do your transformations, you will typically have a model matrix that describes where each object is in the scene. Instead of transforming each object into world space just to check it against a world-space ray, you can transform the ray into the object space of each object by transforming it with the inverse model matrix.
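For example (a sketch using GLM; the Ray type is made up for illustration), the world-space mouse ray can be moved into an object's local space like this:

    #include <glm/glm.hpp>

    struct Ray { glm::vec3 origin, dir; };    // hypothetical ray type

    // Transform a world-space ray into the object's local space so it
    // can be tested against the *untransformed* CPU-side vertices.
    Ray toObjectSpace(const Ray& world, const glm::mat4& modelMatrix)
    {
        glm::mat4 inv = glm::inverse(modelMatrix);
        Ray local;
        local.origin = glm::vec3(inv * glm::vec4(world.origin, 1.0f)); // point: w = 1
        local.dir    = glm::normalize(
                       glm::vec3(inv * glm::vec4(world.dir, 0.0f)));   // direction: w = 0
        return local;
    }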
One general issue with ray-tracing is that, as your scene gets larger, brute-force testing of each object gets increasingly slow. You can use acceleration structures like an octree or a bounding volume hierarchy to speed things up. A completely different approach to picking is to render an ID buffer: a buffer with the same resolution as the currently rendered frame which, for each pixel, stores the ID of the object visible at that pixel. Then you can simply read back the value of the pixel underneath the cursor to find out which object you hit, without doing any ray-tracing. The ID buffer can be rendered in a separate pass, or likely just added as an additional render target to a pass you're already doing, e.g. prefilling the depth buffer, or the main scene pass if you only render once.
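A minimal sketch of the readback idea in legacy GL, drawing each object in a unique flat color into the back buffer and reading one pixel (the objects container and its drawGeometryOnly call are placeholders; a real renderer would use a dedicated integer render target instead):

    GLuint pickObjectId(int mouseX, int mouseY)
    {
        GLint view[4];
        glGetIntegerv(GL_VIEWPORT, view);

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glDisable(GL_LIGHTING);               // colors must arrive unmodified
        glDisable(GL_TEXTURE_2D);

        for (GLuint i = 0; i < objects.size(); ++i) {
            GLuint id = i + 1;                // 0 is reserved for "background"
            glColor3ub( id        & 0xFF,     // encode the ID in RGB
                       (id >> 8)  & 0xFF,
                       (id >> 16) & 0xFF);
            objects[i].drawGeometryOnly();    // placeholder draw call
        }

        unsigned char p[3];
        glReadPixels(mouseX, view[3] - mouseY, 1, 1,
                     GL_RGB, GL_UNSIGNED_BYTE, p);
        return p[0] | (p[1] << 8) | (p[2] << 16);   // 0 = nothing hit
    }

Since the buffers are never swapped, the flat-colored frame is never shown to the user.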

Generic picking solution for 3D scenes with vertex-shader-based geometry deformation applied

I'm trying to implement a navigation technique for 3D scenes (in OpenSceneGraph with OpenGL). Among other things, the user should be able to click on a scene object on the screen to move towards it.
The navigation technique should be integrated into another project which uses a vertex shader to apply a global deformation to the scene geometry. And here is the problem: since the geometry is deformed by a vertex shader, it is not straightforward to un-project the mouse cursor position to the world coordinates of the spot the user actually selected. But I need those coordinates to perform the proper camera movement in my navigation technique.
One way of performing this un-projection would be to modify the vertex shader (used for the deformation) to let it also store the vertex's original position and normal in separate textures. Afterwards, one could read those textures at the mouse position to get the desired values.
Now, as I said, the vertex shader belongs to another project which I actually don't want to touch. One goal of my navigation technique is to be as generic as possible to be easily integrated into other projects as well.
So here is the question: is there any feature in OpenSceneGraph or OpenGL that I have not considered so far? Anything that allows me to get the world coordinates of a fragment, independently of the vertex shader code?
Well, you could always do an OpenGL selection operation:
http://www.glprogramming.com/red/chapter13.html
Alternatively, you could rasterize to a very small (1px*1px) framebuffer where the user clicked, read back the z-buffer, and unproject the Z value you get into world space.
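A sketch of that depth-readback variant, reading straight from the buffer the scene was just rendered to (this assumes the deformation shader still uses the fixed-function modelview/projection matrices for its final transform):

    // Read the depth under the cursor and unproject it to world space.
    // Returns false if the click hit empty background (depth == 1.0).
    bool unprojectCursor(int mouseX, int mouseY, GLdouble world[3])
    {
        GLdouble model[16], proj[16];
        GLint view[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, view);

        GLint winY = view[3] - mouseY;        // flip to bottom-left origin
        GLfloat depth;
        glReadPixels(mouseX, winY, 1, 1,
                     GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
        if (depth >= 1.0f) return false;

        // note: this yields the *deformed* surface position, which is
        // exactly what was rasterized - no access to the shader needed
        return gluUnProject(mouseX, winY, depth, model, proj, view,
                            &world[0], &world[1], &world[2]) == GL_TRUE;
    }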

What exactly is a buffer in OpenGL, and how can I use multiple ones to my advantage?

Not long ago, I tried out a program from an OpenGL guidebook that was said to be double-buffered; it displayed a spinning rectangle on the screen. Unfortunately, I don't have the book anymore, and I haven't found a clear, straightforward definition of what a buffer is in general. My guess is that it is a "place" to draw things, where using several could be something like layering?
If that is the case, I am wondering if I can use multiple buffers to my advantage for a polygon clipping program. I have a nice little window that allows the user to draw polygons on the screen, plus a utility to drag and draw a selection box over the polygons. When the user has drawn the selection rectangle and lets go of the mouse, the polygons will be clipped based on the rectangle boundaries.
That is doable enough, but I also want the user to be able to start over: when the escape key is pressed, the clip box should disappear and the original polygons should be restored. Since I am doing things pixel by pixel, it seems very difficult to figure out how to change the rectangle pixels back to either black (like the background) or the color of a particular polygon, depending on where they were drawn (unless I find a way to save the colors when each polygon pixel is drawn, but that seems like overkill). I was wondering if it would help to give the rectangle its own buffer, in the hope that it would act like a sort of transparent layer that could easily be cleared off(?). Is this the way buffers can be used, or do I need to find another solution?
OpenGL knows several kinds of buffers:
Framebuffers: Portions of memory to which drawing operations are directed, changing the pixel values in the buffer. By default, OpenGL has on-screen buffers, which can be split into a front and a back buffer; drawing operations happen invisibly on the back buffer and are swapped to the front when finished. In addition, OpenGL uses a depth buffer for depth testing (the Z-sort implementation) and a stencil buffer used to limit rendering to cut-out (= stencil) portions of the framebuffer. There used to be auxiliary and accumulation buffers as well; however, those have been superseded by so-called framebuffer objects, which are user-created objects combining several textures or renderbuffers into new framebuffers that can be rendered to (see the sketch after this list).
Renderbuffers: User-created render targets, to be attached to framebuffer objects.
Buffer Objects (vertex and pixel): User-defined data storage, used for geometry and image data.
Textures: Textures are a sort of buffer too, i.e. they hold data which can be sourced in drawing operations.
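For illustration, a minimal sketch of setting up a framebuffer object with a color texture and a depth renderbuffer (OpenGL 3.0-style API; width and height are assumed to be defined):

    GLuint fbo, colorTex, depthRb;

    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenRenderbuffers(1, &depthRb);            // depth as a renderbuffer
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);

    // while 'fbo' is bound, all drawing lands in colorTex
    glBindFramebuffer(GL_FRAMEBUFFER, 0);       // back to the window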
The usual approach with OpenGL is to re-render the whole scene whenever something changes. If you want to save those drawing operations, you can copy the contents of the framebuffer to a texture and then just draw that texture to a single quad, overdrawing it with your selection rubberband rectangle.
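A sketch of that caching trick in legacy immediate mode (sceneTex is an already-created texture matching the window size, identity matrices are assumed for the full-screen quad, and drawSelectionRect is a placeholder):

    // once, right after the polygons have been rendered:
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);

    // then, every time the rubberband changes:
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);                          // full-screen textured quad
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
    glDisable(GL_TEXTURE_2D);
    drawSelectionRect();                        // placeholder: the rubberband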

How to enable depth testing for the GL_SELECT buffer?

I am using the GL selection buffer to implement mouse picking. Unfortunately, OpenGL is returning hits in the selection buffer even for objects that are entirely occluded. For example, if there is a man hidden behind a wall, the selection buffer will include a hit record for the man even though he is not visible.
Selection is implemented in roughly the way described in the OpenGL Programming Guide: switch to the GL_SELECT render mode -- glRenderMode(GL_SELECT) -- render the scene, and then parse the selection buffer. The depth buffer and depth testing are enabled, but GL seems to ignore depth settings in GL_SELECT mode.
Is it possible for OpenGL to do depth culling in GL_SELECT mode? Is there another way of discarding hit records for hidden objects without re-implementing selection using another method?
The selection buffer gives you all the objects that match your mouse position, regardless of their depth from the camera. It's up to you to determine whether you want the closest, the furthest, or all objects. Remember that the mouse only works in a 2D world while trying to do selection in a 3D space. Imagine a ray shooting out in the -z direction at the x,y coordinate where you clicked; all the objects that intersect that ray are returned in the selection buffer. It sounds like you want the closest one.
See Jerome's tutorial and NeHe's tutorial on selection.
The processHits function from the OpenGL Programming Guide shows how to get the z values of the objects at the hit location. Use the z value to sort the objects and pick the closest one.
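A sketch of that sorting step: each hit record in the selection buffer is laid out as {name count, zMin, zMax, name0, name1, ...}, with z scaled to the full unsigned 32-bit range, so keeping the smallest zMin gives the nearest object.

    // Return the name of the nearest hit, or 0 if nothing was hit.
    GLuint pickNearest(GLint hits, const GLuint* buffer)
    {
        GLuint nearestName = 0;
        GLuint nearestZ = 0xFFFFFFFFu;
        const GLuint* p = buffer;
        for (GLint i = 0; i < hits; ++i) {
            GLuint names = *p++;              // number of names on the stack
            GLuint zMin  = *p++;              // nearest z of this hit
            ++p;                              // skip zMax
            if (names > 0 && zMin < nearestZ) {
                nearestZ = zMin;
                nearestName = p[names - 1];   // innermost (deepest) name
            }
            p += names;                       // skip the name list
        }
        return nearestName;
    }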

Combining OpenGL renderings into one view

I have a simple solid modeling application in which I want to implement several "navigation modes", ways for the user to navigate the camera through 3d space. One of them is the ubiquitous 'drag and pan/rotate' that is used in SketchUp, Blender etc.; I also want to implement something that is more relevant to my specific application. Specifically, I want to implement a mode where the camera floats on a 'ring' above the object being modeled (a building), and always looks at the center of the model; this way, a user can easily 'circle' around the object, a common operation in my application.
So, what I want to do is render the building in my view, and display a torus in the top right of the view, with a small sphere on the torus to represent the camera location. There would be a north arrow in the torus, and the user would drag the camera around the model object by dragging the sphere; moving the sphere would reposition the camera and redraw the scene.
It looks like what I should do is the following: render the 'main view', i.e. the building; then render the torus and sphere (with different perspective settings and lighting) to an offscreen buffer, and blit it from there to my main view.
Then, however, I get to the hit testing. I want to detect whether the user clicks on the sphere or the torus; from what I understand of OpenGL picking (it seems to be a hard subject :/ ), all picking methods apply only to selecting within one 'scene'. Apart from that, I still want to detect 'normal' picking operations on the building model, obviously.
So, my questions:
How do I render to an offscreen buffer and blit it into another OpenGL context (with alpha blending and transparency, e.g. for the center of the torus)?
How do I do hit testing in the described scenario?
I don't think you need to do off-screen rendering for this. You should be able to just re-set the camera and viewport and render the overlay after the main scene. You might have issues with Z-ordering and/or buffering, but perhaps the "sub-scene" is simple enough for that not to matter, or you could of course just clear the Z buffer before rendering it.
As far as drawing the torus/sphere goes, create a separate class for that and implement a "draw" method. Have the class contain the location of both the sphere and torus and have draw() render those things on the screen.
Then just call myRing.draw() in your main drawing method and you'll have a sphere and torus!
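Putting those suggestions together, a minimal sketch of such an overlay pass (the viewport size and camera numbers are arbitrary; myRing is the hypothetical class from above):

    void drawNavigationOverlay(int winW, int winH)
    {
        int size = winW / 5;                     // top-right corner square
        glViewport(winW - size, winH - size, size, size);
        glClear(GL_DEPTH_BUFFER_BIT);            // overlay always wins the z-test

        glMatrixMode(GL_PROJECTION);
        glPushMatrix();
        glLoadIdentity();
        gluPerspective(40.0, 1.0, 0.1, 10.0);    // the overlay's own camera
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glLoadIdentity();
        gluLookAt(0, 2, 3,  0, 0, 0,  0, 1, 0);

        myRing.draw();                           // torus + camera sphere

        glMatrixMode(GL_PROJECTION); glPopMatrix();
        glMatrixMode(GL_MODELVIEW);  glPopMatrix();
        glViewport(0, 0, winW, winH);            // restore the full view
    }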
If you mean you want a circle/ring rendered in 2D (which might be easier) in the top-right corner of the window, then the same sort of idea would apply as in your hitbox post (except without that annoying projection calculation!)
Lastly, I'd consider using a modifier key in combination with mouse drags to implement the functionality you want... e.g. the user holds "shift" and then click-drags the mouse across the screen. These mouse events are caught, and the x-delta is used to compute the angle of rotation. The camera's location is updated as this happens, and you get a smooth sliding motion :)
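A sketch of that drag-to-orbit idea (cx/cy/cz is the model center, radius and height place the camera on its ring; all of these names are made up, and GLUT is assumed for the redraw request):

    #include <cmath>

    float orbitAngle = 0.0f;                     // driven by the mouse x-delta

    void onShiftDrag(int dx)                     // dx in pixels
    {
        orbitAngle += dx * 0.01f;                // sensitivity: tune to taste
        glutPostRedisplay();
    }

    void applyRingCamera()                       // call at the start of display()
    {
        float ex = cx + radius * std::cos(orbitAngle);
        float ez = cz + radius * std::sin(orbitAngle);
        glLoadIdentity();
        gluLookAt(ex, cy + height, ez,           // eye floats on the ring
                  cx, cy, cz,                    // always look at the center
                  0.0f, 1.0f, 0.0f);
    }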
I agree with @unwind; you don't need an offscreen buffer. If you want one anyway, search for "render-to-texture".
As for hit testing, the OpenGL FAQ has an entry on it. It describes several solutions: using the GL_SELECT render mode, using gluUnProject() to get a 3D collision ray, and a simple 2D solution using unique colors.
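For the gluUnProject() route, a sketch of building that collision ray by unprojecting the cursor at the near and far planes:

    // Build a world-space pick ray through the cursor position.
    void cursorRay(int mouseX, int mouseY, GLdouble nearPt[3], GLdouble farPt[3])
    {
        GLdouble model[16], proj[16];
        GLint view[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, view);

        GLdouble winY = view[3] - mouseY;        // flip to bottom-left origin
        gluUnProject(mouseX, winY, 0.0, model, proj, view,
                     &nearPt[0], &nearPt[1], &nearPt[2]);   // near plane
        gluUnProject(mouseX, winY, 1.0, model, proj, view,
                     &farPt[0],  &farPt[1],  &farPt[2]);    // far plane
        // ray origin = nearPt, direction = normalize(farPt - nearPt)
    }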