I'm looking for a way to create a second view from the top of my current 3D scene. I would like to do this as easily as possible. The basic idea is that you have a subwindow that displays a top view of the scene.
I've looked into subwindows in OpenGL, but the problem is that you have to redraw everything (basically ending up with two scenes at different angles, which is not good). Also, because this will be used in a 3D game called "tower box stacking" (you have to place boxes on top of each other and build a high tower), it's impossible to use the subwindow approach (since you would get two scenes with different blocks/locations/actions/...).
So how can I add a "second camera" to my current scene and then position it on top?
I've looked into subwindows in OpenGL but the problem is you have to redraw everything (basically ending up with two scenes with different angles = not good)
This is actually the one and only way to do this with OpenGL.
So how can I add a "second camera" to my current scene and then position it on top?
OpenGL doesn't have cameras. It doesn't even have a scene. OpenGL merely draws very simple shapes: Points, Lines and Triangles. Above that OpenGL has no understanding of geometry or complex scenes.
Scene management is up to you, and drawing multiple views of a scene is yours to implement as well.
Update: Pseudocode
draw_scene:
    for o in objects:
        glPushMatrix()
        glMultMatrix(o.transform)
        o.draw()
        glPopMatrix()

render_main_view:
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    glFrustum(...)
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    glMultMatrix(main_camera_transform)
    draw_scene()

render_secondary_view:
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    glFrustum(...)
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    glMultMatrix(secondary_camera_transform)
    draw_scene()

scissor_viewport(x, y, w, h):
    glScissor(x, y, w, h)
    glViewport(x, y, w, h)

render:
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glEnable(GL_SCISSOR_TEST)
    scissor_viewport(main_viewport.x, main_viewport.y, main_viewport.w, main_viewport.h)
    render_main_view()
    scissor_viewport(secondary_viewport.x, secondary_viewport.y, secondary_viewport.w, secondary_viewport.h)
    glClear(GL_DEPTH_BUFFER_BIT)   # clear depth only where the secondary view goes
    render_secondary_view()
Draw the scene once using your default settings.
Then apply a different view transformation (corresponding to your second "camera"), use glViewport to select a sub-rectangle of the screen and draw the scene again. (Don't forget to reset the glViewport to cover your entire screen again afterwards)
If you want the mini-map to have a different aspect ratio (w/h), then during the second pass you'll need to also change the perspective transformation so that everything looks OK.
Disclaimer: I haven't tried this and it's a suggestion really.
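To make that concrete, here is a minimal sketch of the two-pass drawing, assuming a win_w x win_h window; the 200x200 corner rectangle and the apply_main_camera()/draw_scene() helpers are placeholders, not part of the answer above.

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* Pass 1: the normal view, covering the whole window */
    glViewport(0, 0, win_w, win_h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, (double)win_w / win_h, 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    apply_main_camera();   /* placeholder: your usual camera transform */
    draw_scene();          /* placeholder: draws the box tower */

    /* Pass 2: the same scene again, seen from above, in a 200x200 corner */
    glClear(GL_DEPTH_BUFFER_BIT);                 /* pass 1's depth must not hide pass 2 */
    glViewport(win_w - 200, win_h - 200, 200, 200);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 1.0, 0.1, 100.0);        /* square sub-viewport, so aspect = 1 */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 20.0, 0.0,    /* eye straight above the tower           */
              0.0, 0.0, 0.0,     /* looking at the tower's base            */
              0.0, 0.0, -1.0);   /* any up vector not parallel to the view */
    draw_scene();                /* exactly the same draw call as in pass 1 */

    glViewport(0, 0, win_w, win_h);   /* restore the full-window viewport */
}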
So you say your game is about stacking boxes and you want an overhead view. Why not 'fake' the overhead view? Basically, you create a texture that holds a minimap of your game, rendered as an orthographic top-down view. You would only need to update that texture when a new block gets stacked. To show it, you then set the appropriate texture coordinates on the 'sub-window' or viewport quad.
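A rough sketch of that idea, using glCopyTexImage2D so it works with plain fixed-function GL; minimap_tex, the 256x256 size and draw_scene() are assumptions, not from the answer above:

GLuint minimap_tex;   /* placeholder texture handle */

void init_minimap_texture(void)       /* call once at startup */
{
    glGenTextures(1, &minimap_tex);
    glBindTexture(GL_TEXTURE_2D, minimap_tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

void update_minimap_texture(void)     /* call only when a new box lands */
{
    glViewport(0, 0, 256, 256);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-10.0, 10.0, -10.0, 10.0, 0.1, 100.0);   /* straight-down orthographic view */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 20.0, 0.0,  0.0, 0.0, 0.0,  0.0, 0.0, -1.0);
    draw_scene();                                    /* placeholder */

    /* copy the 256x256 corner of the back buffer into the minimap texture */
    glBindTexture(GL_TEXTURE_2D, minimap_tex);
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, 256, 256, 0);
}

/* Every frame, just draw a small textured quad in a corner using minimap_tex. */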
Related
As stated in the title, I need to render one part of the teapot in each of the four viewports, so that the four parts together form a complete teapot. I can already achieve the effect with gluOrtho2D, but with gluPerspective I cannot use gluLookAt to change the observation position.
This is my result without gluLookAt:
Did you by chance try to place the rendering in each viewport by transforming it there with the modelview and projection matrices?
If so, here's a hint: consider why the function is called glViewport and not glWindow (which doesn't exist). Just use glViewport to define the subset of the window you want to render to.
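One way to make that concrete goes slightly beyond the hint: pair each quadrant-sized viewport with the matching quarter of the full frustum, and keep the camera the same for all four passes. A sketch with placeholder frustum and window values:

/* The full frustum a single-viewport render would use (placeholder values) */
double l = -1.0, r = 1.0, b = -1.0, t = 1.0, n = 2.0, f = 100.0;

/* Upper-right quadrant of the window + upper-right quarter of the frustum */
glViewport(win_w / 2, win_h / 2, win_w / 2, win_h / 2);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum((l + r) / 2.0, r, (b + t) / 2.0, t, n, f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslated(0.0, 0.0, -5.0);   /* same camera for every quadrant, no gluLookAt tricks */
glutSolidTeapot(1.0);

/* Repeat with the other three viewport/frustum quarters to assemble the full teapot */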
Suppose you have some objects which are rendered based on camera position, and then you have side panels (some buttons, text, etc.) which are always at the same position on the screen.
How could I achieve this effect with opengl?
I'm not sure what I should be looking for, but I have two ideas for how this could be done. The first is to draw a semi-transparent texture after applying the view and projection matrices. The second is to render to a texture (like here) and then draw it on a plane, and also render the panels.
What method is the most efficient and/or what method is usually used by game developers?
glViewport(full_window);                    /* scene fills the whole window */
set_projection_and_modelview_for_scene();
draw_scene();

glViewport(sidebar_position);               /* restrict drawing to the panel area */
glScissor(sidebar_position);
glEnable(GL_SCISSOR_TEST);
set_projection_and_modelview_for_sidebar(); /* typically an orthographic projection */
draw_sidebar();
glDisable(GL_SCISSOR_TEST);
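For the sidebar pass, a common choice for that projection helper is a pixel-aligned orthographic projection, so panel elements can be placed in screen coordinates. A sketch, with sidebar_w/sidebar_h as placeholder panel dimensions:

void set_projection_and_modelview_for_sidebar(void)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, sidebar_w, 0.0, sidebar_h);   /* panel-local pixel coordinates */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glDisable(GL_DEPTH_TEST);   /* panel elements are drawn in order, no depth needed */
}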
I am creating a 3D asteroid game where you navigate a spaceship through some asteroids. When I click an asteroid I destroy it, and before the asteroid is destroyed I want to add a HUD like you see in sci-fi movies, with "target locked" and so on. So: select asteroid, animate the HUD, destroy asteroid. What is the best approach to achieving this? Should I simply create some planes and render them only when I need to, or is there another approach, like when you create text and set up a new projection to render the text over the main window?
Woah, woah, you're tackling several problems at once here.
First you must determine the on-screen position (and maybe the bounds) of the asteroid. You do this by mimicking the vertex transformation pipeline on the barycenter position of the asteroid. The usual way is
p_clip = Projection · (Modelview · p)
p_ndc = p_clip / p_clip.w
p_window.xy = viewport.xy + (p_ndc.xy · 0.5 + 0.5) · viewport.wh
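A sketch of that computation using the GLU helper gluProject, which does the clip-space division and the viewport mapping for you (asteroid_pos is a placeholder for the asteroid's barycenter):

GLdouble model[16], proj[16];
GLint view[4];
GLdouble win_x, win_y, win_z;

glGetDoublev(GL_MODELVIEW_MATRIX, model);
glGetDoublev(GL_PROJECTION_MATRIX, proj);
glGetIntegerv(GL_VIEWPORT, view);

gluProject(asteroid_pos.x, asteroid_pos.y, asteroid_pos.z,
           model, proj, view,
           &win_x, &win_y, &win_z);
/* (win_x, win_y) is the on-screen position to draw the HUD marker at;
   note that OpenGL's window y axis runs bottom-up. */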
Drawing the HUD overlay requires getting a newbie misconception out of the way. If you followed one of the usual, bad tutorials, you'll find the projection matrix setup in the window reshape function. That's not where it belongs.
If you put the whole viewport and projection setup into the drawing function, things become obvious. You can set and reset the viewport and projection as often as required. So first draw the scene using your usual projection and viewport settings. Then you clear the depth buffer and switch to a projection suitable for rendering the overlay.
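For instance, once the scene pass is done, the overlay setup could look roughly like this (a sketch; win_w/win_h and draw_target_hud() are placeholders, and (win_x, win_y) is the projected position computed above):

/* the 3D scene has already been drawn with its perspective projection */
glClear(GL_DEPTH_BUFFER_BIT);          /* the overlay always lands on top */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0.0, win_w, 0.0, win_h);    /* HUD works in pixel coordinates */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
draw_target_hud(win_x, win_y);         /* placeholder: marker at the projected position */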
I have a simple solid modeling application in which I want to implement several "navigation modes", ways for the user to navigate the camera through 3d space. One of them is the ubiquitous 'drag and pan/rotate' that is used in SketchUp, Blender etc.; I also want to implement something that is more relevant to my specific application. Specifically, I want to implement a mode where the camera floats on a 'ring' above the object being modeled (a building), and always looks at the center of the model; this way, a user can easily 'circle' around the object, a common operation in my application.
So, what I want to do is render the building in my view, and display a torus in the top right of the view, with a small sphere on the torus to represent the camera location. There would be a north arrow in the torus, and the user would drag the camera around the model object by dragging the sphere; moving the sphere would reposition the camera and redraw the scene.
It looks like what I should do is the following: render the 'main view', i.e. the building; then render the torus and sphere (with different perspective settings and lighting) to an offscreen buffer, and blit it from there to my main view.
Then however I get to the hit testing. I want to detect if the user clicks on the sphere, or the torus; from what I understand from OpenGL picking (it seems to be a hard subject :/ ), all picking methods apply only for selecting in one 'scene'. Apart from that, I still want to detect 'normal' picking operations in the building model, obviously.
So, my questions:
How do I render to an offscreen buffer and blit it into another OpenGL context (with alpha blending & transparency, like for the center of the torus)?
How do I do hit testing in the described scenario?
I don't think you need to do off-screen rendering for this. You should be able to just re-set the camera and viewport and render the overlay after the main scene. You might have issues with Z-ordering and/or buffering, but perhaps the "sub-scene" is simple enough for that not to matter, or you could of course just clear the Z buffer before rendering it.
As far as drawing the torus/sphere goes, create a separate class for that and implement a "draw" method. Have the class contain the location of both the sphere and torus and have draw() render those things on the screen.
Then just call myRing.draw() in your main drawing method and you'll have a sphere and torus!
If you mean you want to have a circle/ring rendered in 2D (which might be easier) in the top right corner of the window, then the same sort of idea would apply as in your hitbox post (except without that annoying projection calculation!)
Lastly, I'd consider using a function key in combination with mouse drags to implement the functionality you want... E.g. the user holds "shift" and then click-drags the mouse across the screen. These mouse events are caught and the x-delta is used to compute the angle of rotation. The camera's location is updated as this happens and you get a smooth sliding motion :)
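A sketch of that drag handling, assuming the camera sits on a ring around the model's center; every name and the 0.01 sensitivity factor here are made up for illustration:

#include <math.h>

/* called while the mouse is dragged with shift held */
void on_mouse_drag(int dx_pixels)
{
    ring_angle += dx_pixels * 0.01f;              /* sensitivity is arbitrary */

    /* camera floats on a ring of radius ring_radius at height ring_height,
       always looking at the model's center */
    cam_x = center_x + ring_radius * cosf(ring_angle);
    cam_z = center_z + ring_radius * sinf(ring_angle);
    cam_y = center_y + ring_height;

    /* later, in the draw function:
       gluLookAt(cam_x, cam_y, cam_z, center_x, center_y, center_z, 0, 1, 0); */
}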
I agree with #unwind; you don't need an offscreen buffer. If you want to anyway, search for "render-to-texture".
As for hit testing, the OpenGL FAQ has an entry on it. It describes several solutions: using the GL_SELECT render mode, using gluUnProject() to get a 3D collision ray, and a simple 2D solution using unique colors.
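For the gluUnProject route, a sketch of turning a mouse click into a pick ray (mouse_x/mouse_y are placeholders; note that OpenGL's window y axis runs bottom-up):

GLdouble model[16], proj[16];
GLint view[4];
GLdouble nx, ny, nz, fx, fy, fz;

glGetDoublev(GL_MODELVIEW_MATRIX, model);
glGetDoublev(GL_PROJECTION_MATRIX, proj);
glGetIntegerv(GL_VIEWPORT, view);

GLdouble wx = mouse_x;
GLdouble wy = view[3] - mouse_y;   /* flip y: window coords are bottom-up in GL */

gluUnProject(wx, wy, 0.0, model, proj, view, &nx, &ny, &nz);  /* point on near plane */
gluUnProject(wx, wy, 1.0, model, proj, view, &fx, &fy, &fz);  /* point on far plane  */

/* The ray from (nx,ny,nz) towards (fx,fy,fz) can be intersected with the sphere,
   the torus and the building model to find what was clicked. */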
I'd like to try and implement some HCI for my existing OpenGL application. If possible, the menus should appear in front of my 3D graphics, which would be in the background.
I was thinking of drawing a square directly in front of the "camera", and then drawing either textures or more primitives on top of the "base" square.
While the menus are active the camera can't move, so that the camera doesn't look away from the menus.
Does this sound far-fetched to anyone, or am I on the right track? How would everyone else do it?
I would just glPushMatrix, glLoadIdentity, do your drawing, then glPopMatrix and not worry about where your camera is.
You'll also need to disable and re-enable depth test, lighting and such
There is the GLUI library to do this (no personal experience)
Or if you are using Qt, there are ways of rendering Qt widgets transparently on top of the OpenGL model; there is also beta support for rendering all of Qt in OpenGL.
You could also do all your 3D rendering, then switch to an orthographic projection and draw all your menu objects. This would be much easier than putting it all on a large billboarded quad as you suggested.
Check out this excerpt, specifically the heading "Projection Transformations".
As stated here, you need to apply a translation of 0.375 in x and y to get pixel-perfect alignment:
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, width, 0, height);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.375, 0.375, 0.0);
/* render all primitives at integer positions */
The algorithm is simple:
Draw your 3D scene, presumably with depth testing enabled.
Disable depth testing so that your GUI elements will draw over the 3D stuff.
Use glPushMatrix to store your current modelview and projection matrices (assuming you want to restore them; otherwise, just overwrite them). Note that glPushMatrix only saves the currently selected stack, so push the projection and modelview stacks separately.
Set up your model view and projection matrices as described in the above code
Draw your UI stuff
Use glPopMatrix to restore the matrices you pushed (assuming you pushed them)
Doing it like this makes the camera position irrelevant - in fact, as the camera moves, the 3D parts will be affected as normal, but the 2D overlay stays in place. I'm expecting that this is the behaviour you want.
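Putting those steps together, a sketch of the overlay pass (draw_ui() and the width/height values are placeholders):

/* the 3D scene has already been drawn with depth testing enabled */
glDisable(GL_DEPTH_TEST);          /* you may also want to disable lighting, texturing, etc. */

glMatrixMode(GL_PROJECTION);
glPushMatrix();                    /* keep the scene's projection */
glLoadIdentity();
gluOrtho2D(0, width, 0, height);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();                    /* keep the scene's modelview */
glLoadIdentity();
glTranslatef(0.375f, 0.375f, 0.0f);

draw_ui();                         /* placeholder: menus/HUD at integer pixel positions */

glMatrixMode(GL_PROJECTION);
glPopMatrix();                     /* restore the scene's matrices */
glMatrixMode(GL_MODELVIEW);
glPopMatrix();

glEnable(GL_DEPTH_TEST);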