OpenGL Mouse Picking Strategy

I am using OpenGL to render a model of an object that is rotationally symmetric in a given plane, and I want the user to be able to mouse over the model (possibly after rotations, zooms, etc.) and determine which world coordinate on the model the mouse is currently pointing to.
The reason I mentioned the symmetry is that I'm building the model using VBOs of individual components for ease of use. An analogy to what I'm doing would be a bicycle wheel - I'd have one VBO for a spoke, one for the hub, and one for the wheel/tire, and I'd reuse the spoke VBO a number of times (after suitable translations and rotations). My first question is: is this arrangement conducive to the kind of picking that I'm trying to do? I want each individual spoke in the resulting model to be "pickable", for example. Do I need a separate VBO for each quad/triangle in the mesh to do the kind of selection that I'm trying to do? I really hope that's not the case...
Also, what would be the best picking algorithm to use? I've heard nothing but negative things about OpenGL's built-in selection mode. Thanks in advance!

Regarding your question about VBOs, there is nothing wrong with having a few VBOs and reusing them. In fact, you can have everything in one VBO and only index into it. For your bicycle-wheel analogy, the spoke vertices could be followed by the tire vertices, and you could draw the vertices of the spokes several times (maybe even with instancing via glDrawElementsInstanced, or batching the draws with glMultiDrawElements), followed by drawing the tire from the same VBO but starting at a different index.
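As a hedged sketch of that layout (names like wheelVao, spokeCount, spokeTransform, and setModelMatrix are stand-ins for your own code), drawing every spoke and then the tire from the one shared VBO/index buffer could look like:

    // One VAO/VBO/IBO holds the spoke geometry followed by the tire geometry.
    glBindVertexArray(wheelVao);
    for (int i = 0; i < spokeCount; ++i) {
        setModelMatrix(spokeTransform(i));  // translate/rotate each spoke copy
        glDrawElements(GL_TRIANGLES, spokeIndexCount, GL_UNSIGNED_INT, (void*)0);
    }
    setModelMatrix(tireTransform());
    glDrawElements(GL_TRIANGLES, tireIndexCount, GL_UNSIGNED_INT,
                   (void*)(spokeIndexCount * sizeof(GLuint)));  // tire indices follow the spokes'
    glBindVertexArray(0);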
Regarding your picking question, one easy way of getting the world coordinate of the point on the model under the mouse is to read back the depth value (glReadPixels on a 1x1 rectangle, preferably through a pixel buffer object on the last frame's data, so that you hide the transfer latency), then call gluUnProject to get world coordinates.
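A minimal sketch of that read-back path, in its simple synchronous form without the PBO optimization (mouseX/mouseY are assumed window coordinates from your windowing toolkit):

    GLint viewport[4];
    GLdouble modelview[16], projection[16];
    glGetIntegerv(GL_VIEWPORT, viewport);
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);

    GLint x = mouseX;
    GLint y = viewport[3] - mouseY;  // OpenGL's window origin is bottom-left
    GLfloat depth;
    glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

    GLdouble wx, wy, wz;             // the picked point in world coordinates
    gluUnProject(x, y, depth, modelview, projection, viewport, &wx, &wy, &wz);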

Related

OpenGL - How to render many different models?

I'm currently struggling to find a good approach to render many (thousands of) slightly different models. The model itself is a simple cube with some vertex offsets; think of a skewed quad face. Each 'block' has a different offset for its vertices, so basically I have a voxel engine on steroids, as each block is not a perfect cube but rather a skewed cuboid. Rendering this shape takes 48 vertices (two triangles per face, six faces), which can be cut to 24 vertices since only 3 faces are visible; with indexing we are down to 12 vertices (4 for each visible face).
But, now that I have the vertices for each block in the world, how do I render them?
What I've tried:
Instanced rendering. Sounds good, but doesn't work since my models are not all the same.
I could simplify distant blocks to a cube and render them with glDrawArraysInstanced/glDrawElementsInstanced.
Put everything in one giant VBO. This performs better than rendering each cube individually, but has the downside of being one large mesh. This is not desirable, as I need every cube to have different textures, lighting, etc., and selecting a single cube within that huge mesh is not possible.
I am aware of frustum culling and occlusion culling, but I already have performance problems with just the cubes in front of me (tested with a 128x128 world).
My requirements:
Draw a few thousand models.
Each model has vertex offsets to make the block less cubic, stored in another VBO.
Each block has to be an individual object, as you should be able to place/remove blocks.
Any good performance advice?
This is not desirable as I need every cube to have different textures, lighting, etc... Selecting a single cube within that huge mesh is not possible.
Programmers should avoid declaring that something is "impossible"; it limits your thinking.
Giving each face of these cubes different textures has many solutions. The Minecraft approach uses texture atlases. Each "texture" is really just a sub-section of one large texture, and you use texture coordinates to select which sub-section a particular face uses. But you can get more complex.
Array textures allow for a more direct way to solve this problem. Here, the texture coordinates would be the same, but you use a per-vertex integer to select the correct texture layer for a face; all of the vertices for a particular face would share the same index. And if you're clever, you don't even really need texture coordinates: you can generate them in your vertex shader, based on per-vertex values like gl_VertexID and the like.
Lighting parameters would work the same way: use some per-vertex data to select parameters from a UBO or SSBO.
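As a hedged GLSL illustration of both ideas (embedded here as a C++ string; every name in it is an assumption, not an established API), a single flat integer can drive both the array-texture layer and the material lookup:

    const char* blockFragSrc = R"(
    #version 330 core
    flat in int blockIndex;                // forwarded from the vertex shader
    in vec2 uv;
    uniform sampler2DArray blockTextures;  // one layer per block appearance
    layout(std140) uniform Materials {
        vec4 lightParams[256];             // per-block-type lighting parameters
    };
    out vec4 fragColor;
    void main() {
        vec4 base = texture(blockTextures, vec3(uv, float(blockIndex)));
        fragColor = base * lightParams[blockIndex];
    }
    )";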
As for the "individual object" bit, that's merely a matter of how you're thinking about the problem. Do not confuse what happens in the player's mind with what happens in your code. Games are an elaborate illusion; just because something appears to the user to be an "individual object" doesn't mean it is one to your rendering engine.
What you need is the ability to modify your world's data to remove and add new blocks. And if you need to show a block as "selected" or something, then you simply need another per-block value (like the lighting parameters and texture index) which tells you whether to draw it as a "selected" block or as an "unselected" one. Or you can just redraw that specific selected block. There are many ways of handling it.
Any decent graphics card (since about 2010) can render a few million vertices in the blink of an eye.
The right approach depends on how much changes per frame; in other words, on how much data must be transferred to the GPU per frame.
For a small number of changes, storing the data in one big VBO or in many smaller VBOs (and their VAOs), sending the changes via uniforms, and issuing several glDraw* calls all show similar performance; different hardware behaves with little difference. Indexed data may improve speed.
When most of the data changes every frame and those changes are hard or impossible to do in the shaders, your app is memory-transfer bound, and streaming is good advice.
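If you do land in that memory-transfer-bound case, one common streaming idiom is buffer orphaning; a sketch, assuming streamVbo, dataSize, and vertexData come from your own code:

    glBindBuffer(GL_ARRAY_BUFFER, streamVbo);
    // Orphan the old storage so the driver need not stall on a buffer the GPU
    // may still be reading from, then upload this frame's data.
    glBufferData(GL_ARRAY_BUFFER, dataSize, nullptr, GL_STREAM_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0, dataSize, vertexData);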

OpenGL 3.2+ Drawing Cubes around existing vertices

So I have a cool program that renders a pretty cube in the centre of the screen.
I'm now trying to create a tiny cube on each corner of the existing cube (so 8 tiny cubes), centred on each of the existing cube's corners (or vertices).
I'm assuming an efficient way to implement this would be with a loop of some kind, to minimise the amount of code.
My query is: how does this affect the VAOs/VBOs? Even in a loop, would each one need its own buffer, or could they all be sent at the same time...
Secondly, if it can be done, what would the structure of this loop be like, in terms of handling separate vertices, given that each vertex has different coordinates...
As Vaughn Cato said, each object (using the same VBOs) can simply be drawn at a different location in world space, so you do not need to define separate VBOs for each object.
To complete this task, you simply need a loop that modifies the model matrix before each cube is rendered, changing the origin at which each cube is drawn.
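A sketch of that loop using GLM (modelLoc and cubeIndexCount are assumed to come from your existing shader and mesh setup, and the big cube is assumed to be a unit cube centred on the origin):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <glm/gtc/type_ptr.hpp>

    // Reuse the one cube VAO/VBO: draw a shrunken copy at each of the 8 corners.
    void drawCornerCubes(GLint modelLoc, GLsizei cubeIndexCount) {
        for (int i = 0; i < 8; ++i) {
            glm::vec3 corner((i & 1) ? 0.5f : -0.5f,   // pick +/- for each axis
                             (i & 2) ? 0.5f : -0.5f,
                             (i & 4) ? 0.5f : -0.5f);
            glm::mat4 model = glm::translate(glm::mat4(1.0f), corner)
                            * glm::scale(glm::mat4(1.0f), glm::vec3(0.1f));
            glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));
            glDrawElements(GL_TRIANGLES, cubeIndexCount, GL_UNSIGNED_INT, nullptr);
        }
    }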

rendered 3D Scene to point cloud

Is there a way to extract a point cloud from a rendered 3D scene (using OpenGL)?
In detail:
The input should be a rendered 3D scene.
The output should be, e.g., a three-dimensional array of vertices (x, y, z).
Mission possible or impossible?
Render your scene using an orthographic view so that all of it fits on screen at once.
Use a g-buffer (search for this term or "fat pixel" or "deferred rendering") to capture (X, Y, Z, R, G, B, A) at each sample point in the framebuffer.
Read back your framebuffer and put the (X, Y, Z, R, G, B, A) tuple at each sample point into a linear array.
You now have a point cloud sampled from your conventional geometry using OpenGL. Apart from the readback from the GPU to the host, this will be very fast.
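The read-back step could look like this sketch, assuming your g-buffer wrote positions into a floating-point color attachment (width, height, and gbufferFbo are all assumptions):

    #include <vector>

    // Pull the per-pixel (X, Y, Z) samples back to the host as one linear array;
    // a second glReadPixels on another attachment would fetch the RGBA data.
    std::vector<GLfloat> points(3 * width * height);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, gbufferFbo);
    glReadBuffer(GL_COLOR_ATTACHMENT0);  // the position render target
    glReadPixels(0, 0, width, height, GL_RGB, GL_FLOAT, points.data());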
Going further with this:
Use depth peeling (search for this term) to generate samples on surfaces that are not nearest to the camera.
Repeat the rendering from several viewpoints (or, equivalently, for several rotations of the scene) to be sure of capturing fragments from all the nooks and crannies of the scene, and append the points generated from each pass into one big linear array.
I think you should take your input data and manually multiply it by your transformation and modelview matrices. No need to use OpenGL for that, just some vector/matrix math.
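In GLM terms that's a one-liner; a sketch, where modelview is the same matrix you would otherwise hand to OpenGL:

    #include <glm/glm.hpp>

    // Transform an object-space vertex on the CPU, no GL calls involved.
    glm::vec4 transformed = modelview * glm::vec4(vertex, 1.0f);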
If I understand correctly, you want to deconstruct a final rendering (2D) of a 3D scene. In general, there is no capability built into OpenGL that does this.
There are however many papers describing approaches to analyzing a 2D image to generate a 3D representation. This is for example what the Microsoft Kinect does to some extent. Look at the papers presented at previous editions of SIGGRAPH for a starting point. Many implementations probably make use of the GPU (OpenGL, DirectX, CUDA, etc.) to do their magic, but that's about it. For example, edge-detection filters to identify the visible edges of objects and histogram functions can run on the GPU.
Depending on your application domain, you might be in for something near impossible or there might be a shortcut you can use to identify shapes and vertices.
edit
I think you might have a misunderstanding of how OpenGL rendering works. The application produces and sends to OpenGL the vertices of triangles forming polygons and 3D objects. OpenGL then rasterizes (i.e. converts to pixels) these objects to form a 2D rendering of the 3D scene from a particular point of view with a particular field of view. When you say you want to retrieve a "point cloud" of the vertices, it's hard to understand what you want, since you are responsible for producing these vertices in the first place!

Mesh deformation and VBOs

In my current project I render a series of basically cubic 3D models arranged in a grid. These 3D tiles form the walls of a dungeon level in a game, so they're not perfectly cubic, but I pay special attention to be certain all the edges line up and everything tiles correctly.
I'm interested in implementing a height-map deformation, which seems like it'd require me to manually deform the vertices of the 3D tiles, first by raising or lowering a corner, then by calculating a line between two corners and shifting all the vertices based on the height of that line. Seems pretty straightforward.
My current issue is this: I'm using OpenGL, which provides an optimization called VBOs, which basically are (to my understanding) static copies of the mesh kept in GPU memory for speed. I render using VBOs because I only use three basic models (L-corner, straight wall, and a cap to join walls when they don't meet in an L). If I have to manually fiddle with the vertices of my models, it seems like I'd have to replace the contents of the VBO for every tile, which pretty much negates the point of using them.
It seems to me that I might be able to use simple rotation and translation transforms to achieve a similar effect, but I can't figure out how to do it without leaving gaps between the tiles. Any thoughts?
You may be able to use a vertex program on your GPU. The main difficulty (if I understand your problem correctly) is that vertex programs must rely on either global or per-vertex parameters, and there is a strictly limited amount of space available for each.
Without more details, I can only suggest being clever about how you set up the parameters...
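As one hedged illustration of such a vertex program (GLSL embedded as a C++ string; the names, and the assumption that each tile's local XZ footprint spans [0,1], are mine), four per-tile corner heights fit easily in that limited parameter space:

    const char* tileDeformVS = R"(
    #version 330 core
    layout(location = 0) in vec3 position;  // tile-local vertex
    uniform mat4 mvp;
    uniform vec4 cornerHeights;             // heights at (0,0),(1,0),(0,1),(1,1)
    void main() {
        // Bilinearly interpolate the corner heights across the tile footprint.
        float h0 = mix(cornerHeights.x, cornerHeights.y, position.x);
        float h1 = mix(cornerHeights.z, cornerHeights.w, position.x);
        vec3 p = position + vec3(0.0, mix(h0, h1, position.z), 0.0);
        gl_Position = mvp * vec4(p, 1.0);
    }
    )";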

Stuck building a game engine

I'm trying to build a (simple) game engine using C++, SDL and OpenGL, but I can't seem to figure out the next step. This is what I have so far...
An engine object which controls the main game loop
A scene renderer which will render the scene
A stack of game states that can be pushed and popped
Each state has a collection of actors and each actor has a collection of triangles.
The scene renderer successfully sets up the view projection matrix
I'm not sure if the problem I am having relates to how to store an actor's position or how to create a rendering queue.
I have read that it is efficient to create a rendering queue that will draw opaque polygons front to back and then draw transparent polygons from back to front. Because of this my actors make calls to the "queueTriangle" method of the scene renderer object. The scene renderer object then stores a pointer to each of the actors triangles, then sorts them based on their position and then renders them.
The problem I am facing is that for this to happen the triangle needs to know its position in world coordinates, but if I'm using glTranslatef and glRotatef I don't know these coordinates!
Could someone please, please, please offer me a solution or perhaps link me to a (simple) guide on how to solve this.
Thank you!
If you write a camera class and use its functions to move/rotate it in the world, you can use the matrix you get from its internal quaternion to transform the vertices into camera space, so you can sort triangles from back to front.
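For instance, with GLM (camOrientation and camPosition are assumed members of such a camera class):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <glm/gtc/quaternion.hpp>

    // Build the view matrix from the camera's quaternion and position, then
    // bring a world-space point into camera space; its z is the sort key.
    glm::mat4 view = glm::mat4_cast(glm::conjugate(camOrientation))
                   * glm::translate(glm::mat4(1.0f), -camPosition);
    glm::vec4 camSpace = view * glm::vec4(worldPos, 1.0f);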
A 'queueTriangle' call sounds very inefficient to me. Modern engines often work with many thousands of triangles at a time, so you'd normally hardly ever be working with anything at the level of a single triangle. And if you were changing textures a lot to accomplish this ordering, then that is even worse.
I'd recommend a simpler approach: order your opaque polygons much less rigorously by sorting the actor positions in world space rather than the positions of individual triangles, and render the actors from front to back, one actor at a time. Your transparent/translucent polygons still require the back-to-front approach (provided you're not using premultiplied alpha), but everything else should be simpler and faster.
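A sketch of that per-actor ordering (Actor is a stand-in for your own type):

    #include <algorithm>
    #include <vector>
    #include <glm/glm.hpp>

    struct Actor { glm::vec3 position; /* mesh, material, ... */ };

    // Sort opaque actors front to back by squared distance to the camera;
    // flip the comparison for the translucent back-to-front pass.
    void sortFrontToBack(std::vector<Actor>& actors, const glm::vec3& camPos) {
        std::sort(actors.begin(), actors.end(),
                  [&](const Actor& a, const Actor& b) {
                      const glm::vec3 da = a.position - camPos;
                      const glm::vec3 db = b.position - camPos;
                      return glm::dot(da, da) < glm::dot(db, db);
                  });
    }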