I have a simple solid modeling application in which I want to implement several "navigation modes", ways for the user to navigate the camera through 3d space. One of them is the ubiquitous 'drag and pan/rotate' that is used in SketchUp, Blender etc.; I also want to implement something that is more relevant to my specific application. Specifically, I want to implement a mode where the camera floats on a 'ring' above the object being modeled (a building), and always looks at the center of the model; this way, a user can easily 'circle' around the object, a common operation in my application.
So, what I want to do is render the building in my view, and display a torus in the top right of the view, with a small sphere on the torus to represent the camera location. There would be a north arrow in the torus, and the user would drag the camera around the model object by dragging the sphere; moving the sphere would reposition the camera and redraw the scene.
It looks like what I should do is the following: render the 'main view', i.e. the building; then render the torus and sphere (with different perspective settings and lighting) to an offscreen buffer, and blit it from there to my main view.
Then however I get to the hit testing. I want to detect if the user clicks on the sphere or the torus; from what I understand of OpenGL picking (it seems to be a hard subject :/ ), all picking methods apply only to selecting within one 'scene'. Apart from that, I still want to detect 'normal' picking operations in the building model, obviously.
So, my questions:
How do I render to an offscreen buffer and blit it into another OpenGL context (with alpha blending & transparency, e.g. for the open center of the torus)?
How do I do hit testing in the described scenario?
I don't think you need to do off-screen rendering for this. You should be able to just re-set the camera and viewport and render the overlay after the main scene. You might have issues with Z-ordering and/or buffering, but perhaps the "sub-scene" is simple enough for that not to matter, or you could of course just clear the Z buffer before rendering it.
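For instance, a minimal sketch of that approach in fixed-function GL, where drawOverlay() stands in for whatever draws your torus and sphere:

void renderOverlay(int winWidth, int winHeight)
{
    int size = winWidth / 5;                          /* overlay covers ~1/5 of the window */

    /* Restrict drawing (and the depth clear) to the top-right corner. */
    glViewport(winWidth - size, winHeight - size, size, size);
    glEnable(GL_SCISSOR_TEST);
    glScissor(winWidth - size, winHeight - size, size, size);
    glClear(GL_DEPTH_BUFFER_BIT);                     /* the overlay always draws on top */

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    gluPerspective(45.0, 1.0, 0.1, 100.0);            /* the overlay's own projection */

    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);  /* the overlay's own camera */

    drawOverlay();                                    /* hypothetical: torus + camera sphere */

    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);

    glDisable(GL_SCISSOR_TEST);
    glViewport(0, 0, winWidth, winHeight);            /* back to the full-window viewport */
}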
As far as drawing the torus/sphere goes, create a separate class for that and implement a "draw" method. Have the class contain the location of both the sphere and torus and have draw() render those things on the screen.
Then just call myRing.draw() in your main drawing method and you'll have a sphere and torus!
If you mean you want to have a circle/ring rendered in 2D (which might be easier) in the top right corner of the window, then the same sort of idea would apply as in your hitbox post (except without that annoying projection calculation!)
Lastly, I'd consider using a modifier key in combination with mouse drags to implement the functionality you want... E.g. the user holds "shift" and then click-drags the mouse across the screen. These mouse events are caught and the x-delta is used to compute the angle of rotation. The camera's location is updated as this happens and you get a smooth sliding motion :)
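A rough sketch of that drag-to-orbit idea (the g_* variables and requestRedraw() are assumed application state and plumbing, not anything from your code):

/* The horizontal mouse delta becomes an angle on the ring above the model;
   the camera is recomputed from it on the next redraw. */
void onMouseDrag(int deltaX)
{
    g_angle += deltaX * 0.01f;                        /* ~0.01 rad per pixel, tune to taste */
    requestRedraw();                                  /* hypothetical: trigger a repaint */
}

void applyRingCamera(void)                            /* call when setting up the modelview */
{
    float eyeX = g_center[0] + g_radius * cosf(g_angle);
    float eyeZ = g_center[2] + g_radius * sinf(g_angle);
    gluLookAt(eyeX, g_center[1] + g_height, eyeZ,     /* eye on the ring */
              g_center[0], g_center[1], g_center[2],  /* always look at the model center */
              0.0f, 1.0f, 0.0f);
}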
I agree with #unwind; you don't need an offscreen buffer. If you want to anyway, search for "render-to-texture".
As for hit testing, the OpenGL FAQ has an entry on it. It describes several solutions: using the GL_SELECT render mode, using gluUnProject() to get a 3D collision ray, and a simple 2D solution using unique colors.
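As an illustration of the FAQ's unique-color idea applied to the overlay: render the pickable objects in flat ID colors with lighting off, read back the single pixel under the cursor, and don't swap buffers afterwards so the user never sees this pass. drawSphereFlat() and drawTorusFlat() are hypothetical flat-shaded versions of the overlay objects, and the overlay's viewport/camera are assumed to be set up exactly as in the normal draw.

int pickOverlay(int mouseX, int mouseY, int winHeight)
{
    glDisable(GL_LIGHTING);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glColor3ub(255, 0, 0);  drawSphereFlat();         /* id 1: pure red   */
    glColor3ub(0, 255, 0);  drawTorusFlat();          /* id 2: pure green */

    unsigned char pixel[3];
    glReadPixels(mouseX, winHeight - mouseY, 1, 1,    /* window y is flipped */
                 GL_RGB, GL_UNSIGNED_BYTE, pixel);
    glEnable(GL_LIGHTING);

    if (pixel[0] == 255) return 1;                    /* sphere */
    if (pixel[1] == 255) return 2;                    /* torus  */
    return 0;                                         /* nothing under the cursor */
}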
I am wondering what kind of methods are commonly used when we do zoom in/out.
In my current project, I have to display millions of 2D rectangles on the screen, and I am using a fixed viewport and changing the gluOrtho2D parameters when I have to zoom in/out.
I am wondering if this is a good way of doing it and what other solutions I could use.
I also have another question which I think is related to how I should do zoom in/out.
As I said, I am currently using a fixed viewport and changing the gluOrtho2D parameters in my code, and I assumed that OpenGL would be able to figure out which rectangles are outside the screen and not render them. However, it seems like OpenGL is redrawing all the rectangles again and again. The rendering time for viewing millions of rectangles (zoomed out) is equal to that for viewing hundreds of rectangles (zoomed into a particular area), which is the opposite of what I expected. I am wondering if this is related to the zooming method I used, or am I missing something important?
(I am using VBOs to render the rectangles.)
and I assumed that OpenGL would be able to figure out which rectangles are outside the screen
You assumed wrong.
and not render them.
OpenGL is a rather dumb drawing API. There's no such thing as a scene in OpenGL. All it does is color pixels in the framebuffer, one point, line or triangle at a time. Geometry that lies outside the viewport still has to be processed up to the point where it gets clipped (and then discarded).
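In practice that means you do the culling yourself before issuing the draw calls. A minimal CPU-side sketch, where Rect, the view bounds and drawRect() are placeholders for however you actually store and draw your rectangles:

typedef struct { float xmin, ymin, xmax, ymax; } Rect;

/* Axis-aligned overlap test against the current glOrtho view bounds. */
int isVisible(const Rect *r, const Rect *view)
{
    return r->xmax >= view->xmin && r->xmin <= view->xmax &&
           r->ymax >= view->ymin && r->ymin <= view->ymax;
}

void drawRects(const Rect *rects, int count, const Rect *view)
{
    for (int i = 0; i < count; ++i)
        if (isVisible(&rects[i], view))
            drawRect(&rects[i]);                      /* hypothetical: issues the actual GL calls */
}

With millions of rectangles you would not test them one by one every frame, and drawing them individually defeats the purpose of your VBOs; instead bucket them spatially (a uniform grid or quadtree, one VBO per bucket) and cull whole buckets, so a zoomed-in view really does submit only the few hundred rectangles that are on screen.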
I'm trying to implement a navigation technique for 3D scenes (in OpenSceneGraph with OpenGL). Among other things the user should be able to click on an scene object on the screen to move towards it.
The navigation technique should be integrated into another project which uses a vertex shader to apply a global deformation to the scene geometry. And here is the problem: since the geometry is deformed by a vertex shader, it is not straightforward to un-project the mouse cursor position to the world coordinates of the spot the user actually selected. But I need those coordinates to perform the proper camera movement in my navigation technique.
One way of performing this un-projection would be to modify the vertex shader (used for the deformation) to also store each vertex's original position and normal in separate textures. Afterwards one could read those textures at the mouse position to get the desired values.
Now, as I said, the vertex shader belongs to another project which I actually don't want to touch. One goal of my navigation technique is to be as generic as possible to be easily integrated into other projects as well.
So here is the question: is there any feature in OpenSceneGraph or OpenGL that I have not considered so far? Anything that allows me to get the world coordinates of a fragment, independently of the vertex shader code?
Well, you could always do an OpenGL selection operation:
http://www.glprogramming.com/red/chapter13.html
Alternatively, you could rasterize to a very small (1px*1px) framebuffer around the spot where the user clicked, read back the z-buffer and unproject the Z value you get into world space.
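A sketch of that second option in plain OpenGL (OSG has its own intersection/unproject helpers, but the principle is the same); it assumes the deformed scene has just been rendered and that the fixed-function modelview/projection matrices still describe your camera:

/* Read the depth under the cursor and unproject it to world space. */
void worldPosUnderCursor(int mouseX, int mouseY, double out[3])
{
    GLdouble model[16], proj[16];
    GLint viewport[4];
    GLfloat depth;

    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    int winY = viewport[3] - mouseY;                  /* flip y: GL's window origin is bottom-left */
    glReadPixels(mouseX, winY, 1, 1,
                 GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

    gluUnProject(mouseX, winY, depth,
                 model, proj, viewport,
                 &out[0], &out[1], &out[2]);
}

Note that this gives you the deformed (on-screen) surface position, which is usually what you want for steering the camera, and it works no matter what the vertex shader did to the geometry.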
Well, I have a 3D scene, currently with just a quad (a painting) with a texture on it. Between the painting and the "camera" I have placed another quad which I would like to behave like an optical lens, distorting the picture "below" it.
How would one achieve this, preferably with a shader and some pixel buffers?
Here is an example I found a while ago which does something very similar to what you want. http://www.paulsprojects.net/opengl/refract/refract.html
You will probably have to modify the code a bit to achieve the inversion effect you want, but this will get you started on the right track.
Edit:
By the way, you will not need the second image (the inverted small rectangle). Just use a single background image and the shader.
Between the painting and the "camera" I have placed another quad which I would like to behave like an optical lens:
This is a tricky one. First one must understand that OpenGL is a so-called localized rendering model rasterizer, which means in layman's terms that it works like pencils and brushes on a canvas.
It thus works in stark contrast to global scene representation renderers like raytracers. A raytracer operates on a fully defined scene; because of that it can do things like refraction trivially.
Indeed one must treat OpenGL like an artist treats their tools. So any optical "effect" you want to create must be implemented by mastering the various drawing techniques possible with the tools OpenGL offers. To create the effect you desire you must implement a multistage process.
For refraction you first render the scene as "seen" by the refracting object in all directions (you create a dynamic cube map), then you use this cube map as input data for rasterizing the "refracting" object, where a shader is used to determine the refracted direction of a ray of light hitting the rasterized fragments.
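To make the second stage concrete, here is roughly what the fragment shader for the "refracting" object looks like, written as a GLSL 1.x source string; envMap is the dynamic cube map produced in the first stage, 1.0/1.5 is an assumed air-to-glass index ratio, and vNormal/vEyeDir are varyings a matching vertex shader would have to supply:

static const char *refractFragSrc =
    "uniform samplerCube envMap;                                \n"
    "varying vec3 vNormal;    /* surface normal             */  \n"
    "varying vec3 vEyeDir;    /* fragment-to-eye direction  */  \n"
    "void main()                                                \n"
    "{                                                          \n"
    "    vec3 r = refract(normalize(-vEyeDir),                  \n"
    "                     normalize(vNormal), 1.0 / 1.5);       \n"
    "    gl_FragColor = textureCube(envMap, r);                 \n"
    "}                                                          \n";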
BTW: what holds for refraction holds for any other similar interaction effect. Shadows are just as non-trivial as refraction in OpenGL.
So I have what is essentially a game... There is terrain in this game. I'd like to be able to create a top-down view minimap so that the "player" can see where they are going. I'm doing some shading etc on the terrain so I'd like that to show up in the minimap as well. It seems like I just need to create a second camera and somehow get that camera's display to show up in a specific box. I'm also thinking something like a mirror would work.
I'm looking for approaches that I could take that would essentially give me the same view I currently have, just top down... Does this seem feasible? Feel free to ask questions... Thanks!
One way to do this is to create an FBO (frame buffer object) with a render buffer attached, render your minimap to it, and then bind the FBO's color attachment as a texture. You can then map the texture onto anything you'd like, generally a quad. You can do this for all sorts of HUD objects. This also means that you don't have to redraw the contents of your HUD/menu objects as often as your main view; update the associated buffer only as often as you require. You will often want to use lower-detail (in the polygon count sense) versions of the objects/scene you render to the FBO in this case. The functions in the API you'll want to check out are:
glGenFramebuffersEXT
glBindFramebufferEXT
glGenRenderbuffersEXT
glBindRenderbufferEXT
glRenderbufferStorageEXT
glFramebufferRenderbufferEXT
glFramebufferTexture2DEXT
glGenerateMipmapEXT
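A minimal sketch of how those calls fit together for the minimap case; SIZE and drawTerrainTopDown() are placeholders, and error checking via glCheckFramebufferStatusEXT is omitted:

GLuint fbo, depthRb, mapTex;
const int SIZE = 256;                                 /* minimap resolution, pick your own */

/* Color target: a texture you will later map onto the minimap quad. */
glGenTextures(1, &mapTex);
glBindTexture(GL_TEXTURE_2D, mapTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, SIZE, SIZE, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* Depth target: a renderbuffer, since we never sample the depth. */
glGenRenderbuffersEXT(1, &depthRb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, SIZE, SIZE);

/* Tie both to the FBO. */
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, mapTex, 0);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, depthRb);

/* Render the top-down pass into the FBO, then return to the window. */
glViewport(0, 0, SIZE, SIZE);
drawTerrainTopDown();                                 /* hypothetical minimap pass */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);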
There is a write-up on using FBOs on gamedev.net. Another potential optimization: if the contents of the minimap are static and you are simply moving a camera over this static view (truly just a map), you can render a portion of the map that is much larger than what you actually want to display to the player, and fake a camera by adjusting the texture coordinates of the object it's mapped onto. This only works if your minimap uses an orthographic projection.
Well, I don't have an answer to your specific question, but it's common in games to render the world to an image using an orthographic projection from above, and use that for the minimap. It would at least be less performance-intensive than rendering it on the fly.
I'm building a simple solid modeling application. Users need to be able to manipulate object in both orthogonal and perspective views. For example, when there's a box in the screen and the user clicks on it to select it, it needs to get 'handles' at the corners and in the center so that the user can move the mouse over such a handle and drag it to enlarge or move the box.
What strategies are there to do this, and which one is the best one? I can think of two obvious ones:
1) Treat the handles as 3D objects. I.e. for a box, add small boxes to the scene at the corners of the 'main' box. Problems: this won't work well in perspective view; I'd need to determine the size of the boxes relative to the current zoom level (the handles need to have the same size no matter how far the user is zoomed in/out).
2) Add the handles after the scene has been rendered. Render to an offscreen buffer, determine the 2D locations of the corners somehow and use regular 2D drawing techniques to draw the handles. Problems: how will I do hit testing? I'd need a two-stage hit-testing approach as well; how do I draw in 2D on a 3D-rendered image? Fall back to GDI?
There are probably more problems with both approaches. Is there an industry-standard way of tackling this problem?
I'm using OpenGL, if that makes a difference.
I would treat the handles as 3D objects. This provides many advantages - it's more consistent, they behave well, hit testing is easy, etc.
If you want the handles to be a constant size, you can still treat them as 3D objects, but you will have to scale their size as appropriate based off the distance to camera. This is a bit of a hassle, but since there are typically only a few handles, and these are usually small objects, it should be fine performance wise.
However, I'd actually say let the handles scale with the scene. As long as you pick a rendering style for the handle that makes them stand out (ie: bright orange boxes, etc), the perspective effects (smaller handles in the background) actually makes working with them easier for the end-user in many ways. It is difficult to get a sense of depth from a 3D scene - the perspective effects on the handles help provide more visual clues as to how "deep" the handle is into the screen.
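If you do go the constant-size route, the scale factor falls out of the distance to the camera and the vertical field of view. A sketch (eye, fovY in radians, HANDLE_PIXELS and drawUnitHandleCube() are all assumptions, not anything standard):

/* Scale the handle so it covers roughly HANDLE_PIXELS pixels on screen. */
void drawHandleConstantSize(const float pos[3], const float eye[3],
                            float fovY, int winHeight)
{
    float dx = pos[0] - eye[0], dy = pos[1] - eye[1], dz = pos[2] - eye[2];
    float dist = sqrtf(dx * dx + dy * dy + dz * dz);

    /* World units spanned by one pixel at that distance. */
    float worldPerPixel = 2.0f * dist * tanf(fovY * 0.5f) / (float)winHeight;
    float scale = HANDLE_PIXELS * worldPerPixel;

    glPushMatrix();
    glTranslatef(pos[0], pos[1], pos[2]);
    glScalef(scale, scale, scale);
    drawUnitHandleCube();                             /* hypothetical 1x1x1 handle mesh */
    glPopMatrix();
}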
First off, project the handle/corner co-ordinates onto the camera's plane (effectively converting them to 2D coordinates on the screen; normalize this against the screen dimensions.)
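gluProject() does exactly that projection for you. A small sketch, assuming the scene's matrices are still current on the GL state and handleX/Y/Z hold the handle's world position:

GLdouble model[16], proj[16];
GLint viewport[4];
GLdouble winX, winY, winZ;

glGetDoublev(GL_MODELVIEW_MATRIX, model);
glGetDoublev(GL_PROJECTION_MATRIX, proj);
glGetIntegerv(GL_VIEWPORT, viewport);

gluProject(handleX, handleY, handleZ,     /* world coordinates of the handle */
           model, proj, viewport,
           &winX, &winY, &winZ);          /* winX/winY are pixel coords, origin bottom-left */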
Here's some simple code to enable orthogonal/2D-overlay drawing:
void enable2D()
{
    /* Save the current projection matrix and replace it with a pixel-aligned
       orthographic projection matching the viewport. */
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();

    GLint wind[4];
    glGetIntegerv(GL_VIEWPORT, wind);      /* wind[2]/wind[3] = viewport width/height */
    glOrtho(0, wind[2], 0, wind[3], -1, 1);

    /* Save the modelview matrix too and reset it to identity. */
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
}

void disable2D()
{
    /* Restore the projection and modelview matrices saved in enable2D(). */
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
}
enable2D() saves the current modelview/projection matrices, replaces the projection matrix with an orthographic one mapped to the screen's pixel dimensions (i.e. the width/height of the viewport), and loads the identity matrix into the modelview matrix.
After making this call, you can make glVertex2f() calls using screen/pixel coordinates, allowing you to draw in 2D! (This will also allow you to hit-test since you can easily get the mouse's current pixel coords.)
When you're done, call disable2D to restore your old modelview/projection matrices :)
The hardest part is computing where the hitboxes fall on the 2D plane and dealing with overlap (if two project to the same place, which one do you select on click?).
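One way to handle both, sketched below: keep the projected window coordinates (and the winZ depth that gluProject also returns) for every handle, test the mouse against a small pixel radius, and when several handles overlap pick the one with the smallest depth, i.e. the one nearest the camera.

typedef struct { double x, y, z; } ProjectedHandle;   /* window coords + depth from gluProject */

int pickHandle(const ProjectedHandle *h, int count,
               double mouseX, double mouseY, double radiusPx)
{
    int best = -1;
    double bestZ = 2.0;                               /* winZ lies in [0, 1] */
    for (int i = 0; i < count; ++i) {
        double dx = h[i].x - mouseX, dy = h[i].y - mouseY;
        if (dx * dx + dy * dy <= radiusPx * radiusPx && h[i].z < bestZ) {
            bestZ = h[i].z;
            best = i;
        }
    }
    return best;                                      /* -1 means no handle was hit */
}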
Hope this helped :)
I've coded up a manipulator with handles for a 3d editing package, and ran into a lot of these same issues.
First, there's an open source manipulator out there. I couldn't find it in my most recent search, probably because there's a plethora of names for these things - 3D widgets, gizmos, manipulators, gimbals, etc.
Anyhow, the way I did it was to add a manipulator object to the scene that, when drawn, draws all of the handles. It does the same thing for bounding box computation, and selection.
Reed's idea about keeping them the same size is interesting for handles that exist on objects, and might work there. For a manipulator, I found that it was more of a 3d UI element, and it was much more usable if it did not change size. I had a bug where the size was only determined based on the active viewport, which resulted in horrible huge/tiny manipulators in other viewports, very useless. If you're going to add them to the scene, you might want to add them per-viewport, or make them actually have a fixed size.
I know the question is really old. But just in case someone needs it:
Interactive Techniques in Three-dimensional Scenes (Part 1): Moving 3D Objects with the Mouse using OpenGL 2.1
Article is good and has an interesting link section at the bottom.