Terrain minimap in OpenGL? - c++

So I have what is essentially a game... There is terrain in this game. I'd like to be able to create a top-down minimap so that the "player" can see where they are going. I'm doing some shading etc. on the terrain, so I'd like that to show up in the minimap as well. It seems like I just need to create a second camera and somehow get that camera's output to show up in a specific box. I'm also thinking something like a mirror might work.
I'm looking for approaches that I could take that would essentially give me the same view I currently have, just top down... Does this seem feasible? Feel free to ask questions... Thanks!

One way to do this is to create an FBO (framebuffer object), attach a texture for the color output and a renderbuffer for depth, render your minimap into it, and then use that texture like any other. You can map the texture onto anything you'd like, generally a quad. You can do this for all sorts of HUD objects. This also means that you don't have to redraw the contents of your HUD/menu objects as often as your main view; update the associated buffer only as often as you require. You will often want to use lower-detail (in the polygon-count sense) versions of the objects/scene you are rendering to the FBO for this case. The functions in the API you'll want to check into are:
glGenFramebuffersEXT
glBindFramebufferEXT
glGenRenderbuffersEXT
glBindRenderbufferEXT
glRenderbufferStorageEXT
glFramebufferRenderbufferEXT
glFramebufferTexture2DEXT
glGenerateMipmapEXT
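As a rough sketch of how those calls fit together (names like minimapTex, depthRb and MINIMAP_SIZE are placeholders, and completeness/error checks are omitted):

    // Sketch only: create an FBO with a color texture and a depth renderbuffer.
    const GLsizei MINIMAP_SIZE = 256;

    GLuint minimapTex;
    glGenTextures(1, &minimapTex);
    glBindTexture(GL_TEXTURE_2D, minimapTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, MINIMAP_SIZE, MINIMAP_SIZE, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    GLuint depthRb;
    glGenRenderbuffersEXT(1, &depthRb);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24,
                             MINIMAP_SIZE, MINIMAP_SIZE);

    GLuint fbo;
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, minimapTex, 0);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                 GL_RENDERBUFFER_EXT, depthRb);

    // Each time the minimap needs refreshing:
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glViewport(0, 0, MINIMAP_SIZE, MINIMAP_SIZE);
    // ... draw the terrain with a top-down camera here ...
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

After that, bind minimapTex like any other texture and draw a screen-aligned quad with it in the corner of your main view.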
There is a write-up on using FBOs on gamedev.net. Another potential optimization: if the contents of the minimap are static and you are simply moving a camera over this static view (truly just a map), you can render a portion of the map that is much larger than what you actually want to display to the player and fake a camera by adjusting the texture coordinates of the object it's mapped onto. This only works if your minimap uses an orthographic projection.
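For example, a sketch of that texture-coordinate trick, assuming the whole map has already been rendered into mapTex and the player's position is known in normalized map coordinates (all names here are made up):

    // Sketch: fake a scrolling camera over the pre-rendered map by shifting
    // texture coordinates instead of re-rendering the scene.
    void drawMinimap(GLuint mapTex, float playerU, float playerV,
                     float x, float y, float w, float h)
    {
        const float window = 0.25f;             // show 25% of the map per axis
        float u0 = playerU - window * 0.5f;     // playerU/V: player position in
        float v0 = playerV - window * 0.5f;     // normalized map coords [0,1]

        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, mapTex);
        glBegin(GL_QUADS);
        glTexCoord2f(u0,          v0);          glVertex2f(x,     y);
        glTexCoord2f(u0 + window, v0);          glVertex2f(x + w, y);
        glTexCoord2f(u0 + window, v0 + window); glVertex2f(x + w, y + h);
        glTexCoord2f(u0,          v0 + window); glVertex2f(x,     y + h);
        glEnd();
    }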

Well, I don't have an answer to your specific question, but it's common in games to render the world once to an image using an orthographic projection from above, and use that image for the minimap. It would at least be less performance-intensive than rendering it on the fly.
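A sketch of what such a top-down orthographic camera could look like in fixed-function GL (the world bounds and center values are placeholders for your terrain's extents):

    // Sketch: a top-down orthographic camera covering the whole terrain.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(worldMinX, worldMaxX, worldMinZ, worldMaxZ, 1.0, 1000.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(centerX, 500.0, centerZ,    // eye: high above the terrain
              centerX,   0.0, centerZ,    // target: straight down at the center
              0.0,       0.0, -1.0);      // map "up" points along -Z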

Related

Rendering a rect-based minimap

Cocos2d-iPhone, 1.0.1.
My game has a map. And I have an array containing NSValues (CGRects) that basically represent the collisions in the map. Anyway, what I need is to literally create a texture which is pretty much a grey background with black-filled rectangles representing my rects, and later I'll use this texture to create my minimap.
Anyway, the problem is the texture-creation part. I want to know about this because creating CCSprites to represent my rectangles isn't really feasible (there are hundreds of them per map!). I also considered drawing primitives with stuff like CCDrawLine and such, but I'm not so sure about that.
What do you recommend? How would you create a texture?
Creating a minimap means creating a scaled-down version of whatever map you're using to represent the game world.
One approach that might work is to scale down your map layer (CCTMXTiledMap?) so it fits the size of your minimap, then render it onto a CCRenderTexture. This may be time-consuming, so it's a good idea not to update the minimap render texture every frame.
Alternatively, loop over your tilemap and, for each tile, render a single pixel at the appropriate position onto the render texture using a color based on the tile's type (grass, mountain, water, etc.). If the resulting minimap is too small, render 2x2 pixels (or more) per tile; if it's too large, scale down the render texture.
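If you'd rather build the bitmap yourself, here is a sketch in plain OpenGL terms (cocos2d sits on top of OpenGL ES, so you'd probably wrap the resulting pixel data in a CCTexture2D instead of a raw GL texture); mapW, mapH and the Rect type are made-up names:

    #include <vector>
    struct Rect { int x, y, w, h; };   // hypothetical collision-rect type

    // Sketch: rasterize collision rects into a grey/black RGBA buffer and
    // upload it as a texture to use for the minimap.
    GLuint buildMinimapTexture(int mapW, int mapH, const std::vector<Rect>& rects)
    {
        std::vector<unsigned char> pixels(mapW * mapH * 4, 180); // grey background
        for (size_t i = 3; i < pixels.size(); i += 4) pixels[i] = 255; // opaque alpha

        for (const Rect& r : rects)
            for (int y = r.y; y < r.y + r.h; ++y)
                for (int x = r.x; x < r.x + r.w; ++x) {
                    unsigned char* p = &pixels[(y * mapW + x) * 4];
                    p[0] = p[1] = p[2] = 0;                      // black-filled rect
                }

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mapW, mapH, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return tex;
    }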
Sounds like a tile-based game to me.

Lens shader / Image disortion

Well, I have a 3D scene, currently with just a quad (a painting) with a texture on it. Between the painting and the "camera" I have placed another quad that I would like to behave like an optical lens, distorting the picture "below" it.
How would one achieve this, preferably with a shader and some pixel buffers?
Here is an example I found a while ago which does something very similar to what you want. http://www.paulsprojects.net/opengl/refract/refract.html
You will probably have to modify the code a bit to achieve the inversion effect you want, but this will get you started on the right track.
Edit:
By the way, you will not need the second image (the inverted small rectangle). Just use a single background image and the shader.
Between the painting and the "camera" I have placed another quad that I would like to behave like an optical lens:
This is a tricky one. First, one must understand that OpenGL is a so-called localized rendering model rasterizer, which means, in layman's terms, that it works like pencils and brushes on a canvas.
It thus works in stark contrast to global scene representation renderers like raytracers. A raytracer operates on a fully defined scene, and because of that it can do things like refraction trivially.
Indeed, one must treat OpenGL like an artist treats their tools. So any optical "effect" you want to create must be implemented by mastering the various drawing techniques possible with the tools OpenGL offers. To create the effect you desire you must implement a multistage process.
For refraction you first render the scene as "seen" by the refracting object in all directions (you create a dynamic cube map), then you use this cube map as input data for rasterizing the "refracting" object, where a shader is used to determine the refracted direction of a ray of light hitting the rasterized fragments.
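As a very rough sketch of that second stage, the fragment shader can boil down to a single refract-and-lookup (GLSL kept here as a C++ string literal; envMap, eta and the varying names are illustrative, and the vertex shader that feeds them is omitted):

    // Sketch of the refraction fragment shader described above.
    const char* refractFrag = R"(
        uniform samplerCube envMap;   // dynamic cube map rendered from the object
        uniform float eta;            // ratio of refractive indices, e.g. 1.0/1.5
        varying vec3 vNormal;         // interpolated surface normal
        varying vec3 vViewDir;        // direction from the eye to the fragment

        void main()
        {
            vec3 r = refract(normalize(vViewDir), normalize(vNormal), eta);
            gl_FragColor = textureCube(envMap, r);
        }
    )";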
BTW: What holds for refraction holds for any similar light-interaction effect. Shadows are just as non-trivial as refraction in OpenGL.

Sprite Sheet With OpenGL and SDL

I've been working on a new game and finally reached the point where I've started to code the motion of my main character, but I have doubts about how to do it.
Previously, I made two games in Allegro, where spritesheets are kind of easy to implement, because I establish the frame and position on the image and save every frame in a different bitmap. But I know that doing that with OpenGL is not necessary and costs a little bit more.
So I've been thinking about how to store my spritesheet and use it in my program, and I have only one idea:
load the image and turn it into a texture, and in the function that handles my animation simply grab a portion of the texture to draw, instead of storing every single frame as its own texture in my program.
Is this the best way to do it?
Thanks beforehand for the help.
You're on the right track.
Things to consider:
leave enough dead space around each sprite so that the video card does not blend in texels from adjacent sprites at small scales.
set texture min/mag filtering appropriately. GL_NEAREST is OK if you're going for the blocky look.
if you want to be fancy and save some texture memory, there's no reason that the sprites have to be laid out in a regular grid. Smaller sprites can be packed closer in the texture.
if your sprites are being rendered from 3D models, you could output normal & displacement maps from the model into another texture, then combine them in a fragment shader for some awesome lighting and self-shadowing.
You got the right idea: if you have a bunch of sprites, it is much better to just stick them all in one big texture. Just draw your sprites as textured quads whose texture coordinates index into the frame of the sprite. You can do a few optimizations, but most of them revolve around trying to get the most out of your texture memory and packing the sprites closely together without blending issues.
I know that doing that with OpenGL is not necessary and costs a little bit more.
Why not? There are no real downsides to putting a lot of sprites into a single texture. All you need to do is change the texture coordinates to pick the region in question out of the texture.
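For illustration, here is a sketch of picking one frame out of a sheet laid out as a regular grid (immediate mode for brevity; all parameter names are made up):

    // Sketch: draw frame 'frame' of a sheet laid out as 'columns' x 'rows' cells.
    void drawSpriteFrame(GLuint sheetTex, int frame, int columns, int rows,
                         float x, float y, float w, float h)
    {
        float du = 1.0f / columns, dv = 1.0f / rows;
        float u0 = (frame % columns) * du;      // cell column -> left texcoord
        float v0 = (frame / columns) * dv;      // cell row    -> top texcoord

        glBindTexture(GL_TEXTURE_2D, sheetTex);
        glBegin(GL_QUADS);
        glTexCoord2f(u0,      v0);      glVertex2f(x,     y);
        glTexCoord2f(u0 + du, v0);      glVertex2f(x + w, y);
        glTexCoord2f(u0 + du, v0 + dv); glVertex2f(x + w, y + h);
        glTexCoord2f(u0,      v0 + dv); glVertex2f(x,     y + h);
        glEnd();
    }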

Using Vertex Buffer Objects for a tile-based game and texture atlases

I'm creating a tile-based game in C# with OpenGL and I'm trying to optimize my code as best as possible.
I've read several articles and sections in books and all come to the same conclusion (as you may know) that use of VBOs greatly increases performance.
I'm not quite sure, however, how they work exactly.
My game will have tiles on the screen, some will change and some will stay the same. To use a VBO for this, I would need to add the coordinates of each tile to an array, correct?
Also, to texture these tiles, I would have to create a separate VBO for this?
I'm also not quite sure what the code would look like for setting up these coordinates when I've got some tiles that are animated and some that stay static on the screen.
Could anyone give me a quick rundown of this?
I plan on using a texture atlas of all of my tiles. I'm not sure where to begin to use this atlas for the textured tiles.
Would I need to compute the coordinates of the tile in the atlas to be applied? Is there any way I could simply use the coordinates of the atlas to apply a texture?
If anyone could clear up these questions it would be greatly appreciated. I could even possibly reimburse someone for their time & help if wanted.
Thanks,
Greg
OK, so let's split this into parts. You didn't specify which version of OpenGL you want to use - I'll assume GL 3.3.
VBO
Vertex buffer objects, when considered as an alternative to client vertex arrays, mostly save GPU bandwidth. A tile map is not really a lot of geometry. However, in recent GL versions vertex buffer objects are the only way of specifying vertices (which makes a lot of sense), so we cannot really talk about "increasing performance" here. If you mean "compared to deprecated vertex specification methods like immediate mode or client-side arrays", then yes, you'll get a performance boost, but you'd probably only feel it with 10k+ vertices per frame, I suppose.
Texture atlases
Texture atlases are indeed a nice feature for saving on texture switching. However, on GL3 (and DX10)-enabled GPUs you can save yourself a LOT of the trouble characteristic of this technique, because a more modern and convenient approach is available. Check the GL reference docs for TEXTURE_2D_ARRAY - you'll like it. If GL3 cards are your target, forget texture atlases. If not, do some searching to find out which older cards support texture arrays as an extension; I'm not familiar with the details.
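A sketch of what setting up such an array can look like (written as raw GL calls, which your C# binding should expose under the same names; tileW, tileH, layerCount and tilePixels are placeholders):

    // Sketch: allocate a GL_TEXTURE_2D_ARRAY with one layer per tile image
    // and upload each layer separately.
    GLuint tileArray;
    glGenTextures(1, &tileArray);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tileArray);
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8,
                 tileW, tileH, layerCount,               // width, height, layers
                 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    for (int layer = 0; layer < layerCount; ++layer)
        glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                        0, 0, layer,                     // x, y, layer offset
                        tileW, tileH, 1,                 // one layer at a time
                        GL_RGBA, GL_UNSIGNED_BYTE, tilePixels[layer]);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);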
Rendering
So how do you draw a tile map efficiently? Let's focus on the data. There are lots of tiles and each tile has the following information:
grid position (x,y)
material (let's call it "material" not "texture" because as you said the image might be animated and change in time; the "material" would then be interpreted as "one texture or set of textures which change in time" or anything you want).
That should be all the "per-tile" data you'd need to send to the GPU. You want to render each tile as a quad or triangle strip, so you have two alternatives:
send 4 vertices (x,y),(x+w,y),(x+w,y+h),(x,y+h) instead of (x,y) per tile,
use a geometry shader to calculate the 4 points along with texture coords for every 1 point sent.
Pick your favourite. Also note that this choice directly corresponds to what your VBO is going to contain - the latter solution would make it 4x smaller.
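A sketch of the first alternative, written as raw GL calls with made-up Tile/TileVertex types (your C# binding exposes the same entry points):

    #include <vector>
    struct Tile { int gx, gy, material; };            // hypothetical per-tile data

    // Four corner vertices per tile: grid position expanded to a corner,
    // UVs, and the material ID, all packed into one VBO.
    struct TileVertex { float x, y, u, v, material; };

    GLuint buildTileVBO(const std::vector<Tile>& tiles, float tileW, float tileH)
    {
        std::vector<TileVertex> verts;
        for (const Tile& t : tiles) {
            float x = t.gx * tileW, y = t.gy * tileH, m = float(t.material);
            TileVertex c[4] = { { x,         y,         0, 0, m },
                                { x + tileW, y,         1, 0, m },
                                { x + tileW, y + tileH, 1, 1, m },
                                { x,         y + tileH, 0, 1, m } };
            verts.insert(verts.end(), c, c + 4);
        }
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(TileVertex),
                     verts.data(), GL_STATIC_DRAW);
        return vbo;        // draw as indexed GL_TRIANGLES (6 indices per tile)
    }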
For the material, you can pass it as a symbolic integer, and in your fragment shader - based on the current time (passed as a uniform variable) and the material ID for a given tile - you can decide on the texture ID from the texture array to use. In this way you can make a simple texture animation.
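A sketch of that idea as a GLSL 3.30 fragment shader, kept as a C++ string literal (the layout of materials and animation frames in the array, and all names, are assumptions for illustration):

    const char* tileFrag = R"(
        #version 330
        uniform sampler2DArray tiles;
        uniform float time;                 // seconds, updated every frame
        in vec2  vUV;
        flat in float vMaterial;            // material ID from the vertex shader
        out vec4 fragColor;

        void main()
        {
            // Assume each material owns 4 animation frames, played at 8 fps.
            float frame = floor(mod(time * 8.0, 4.0));
            float layer = vMaterial * 4.0 + frame;
            fragColor = texture(tiles, vec3(vUV, layer));
        }
    )";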

Combining OpenGL renderings into one view

I have a simple solid modeling application in which I want to implement several "navigation modes", ways for the user to navigate the camera through 3d space. One of them is the ubiquitous 'drag and pan/rotate' that is used in SketchUp, Blender etc.; I also want to implement something that is more relevant to my specific application. Specifically, I want to implement a mode where the camera floats on a 'ring' above the object being modeled (a building), and always looks at the center of the model; this way, a user can easily 'circle' around the object, a common operation in my application.
So, what I want to do is render the building in my view, and display a torus in the top right of the view, with a small sphere on the torus to represent the camera location. There would be a north arrow in the torus, and the user would drag the camera around the model object by dragging the sphere; moving the sphere would reposition the camera and redraw the scene.
It looks like what I should do is the following: render the 'main view', i.e. the building; then render the torus and sphere (with different perspective settings and lighting) to an offscreen buffer, and blit it from there to my main view.
Then, however, I get to the hit testing. I want to detect if the user clicks on the sphere or the torus; from what I understand of OpenGL picking (it seems to be a hard subject :/ ), all picking methods apply only to selecting within one 'scene'. Apart from that, I still want to detect 'normal' picking operations in the building model, obviously.
So, my questions:
How do I render to an offscreen buffer and blit it into another OpenGL context (with alpha blending & transparency, e.g. for the center of the torus)?
How do I do hit testing in the described scenario?
I don't think you need to do off-screen rendering for this. You should be able to just reset the camera and viewport and render the overlay after the main scene. You might have issues with Z-ordering and/or buffering, but perhaps the "sub-scene" is simple enough for that not to matter, or you could of course just clear the Z buffer before rendering it.
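A sketch of that structure (sizes, camera setup and the actual draw calls are placeholders):

    // Sketch: draw the navigation gizmo after the main scene into a corner
    // viewport, clearing only the depth buffer so it sits on top.
    void drawFrame(int winW, int winH)
    {
        glViewport(0, 0, winW, winH);
        // ... set the main camera and draw the building ...

        int size = winW / 5;                              // overlay in the top-right corner
        glViewport(winW - size, winH - size, size, size);
        glClear(GL_DEPTH_BUFFER_BIT);                     // ignore the main scene's depth
        // ... set the overlay camera/projection, draw the torus and sphere ...

        glViewport(0, 0, winW, winH);                     // restore the full viewport
    }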
As far as drawing the torus/sphere goes, create a separate class for that and implement a "draw" method. Have the class contain the location of both the sphere and torus and have draw() render those things on the screen.
Then just call myRing.draw() in your main drawing method and you'll have a sphere and torus!
If you mean you want to have a circle/ring rendered in 2D (which might be easier) in the top right corner of the window, then the same sort of idea would apply as in your hitbox post (except without that annoying projection calculation!)
Lastly, I'd consider using a function key in combination with mouse drags to implement the functionality you want... E.g. the user holds "shift" and then click-drags the mouse across the screen. These mouse events are caught and the x-delta is used to compute the angle of rotation. The camera's location is updated as this happens and you get a smooth sliding motion :)
I agree with #unwind; you don't need an offscreen buffer. If you want to anyway, search for "render-to-texture".
As for hit testing, the OpenGL FAQ has an entry on it. It describes several solutions: using the GL_SELECT render mode, using gluUnProject() to get a 3D collision ray, and a simple 2D solution using unique colors.
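The unique-color approach is usually the easiest one to get working. A sketch (the IDs and names are made up, and you'd restore your normal GL state afterwards):

    // Sketch of color-based picking: redraw just the pickable objects in flat,
    // unique colors, read back the pixel under the cursor, and map the color
    // back to an object ID.
    enum PickId { PICK_NOTHING = 0, PICK_SPHERE = 1, PICK_TORUS = 2 };

    int pickAt(int mouseX, int mouseY, int winH)
    {
        glDisable(GL_LIGHTING);
        glDisable(GL_TEXTURE_2D);
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glColor3ub((GLubyte)PICK_SPHERE, 0, 0);  // ... draw the sphere here ...
        glColor3ub((GLubyte)PICK_TORUS, 0, 0);   // ... draw the torus here ...

        unsigned char pixel[3];
        glReadPixels(mouseX, winH - mouseY - 1, 1, 1,
                     GL_RGB, GL_UNSIGNED_BYTE, pixel);
        return pixel[0];   // 0 = nothing, 1 = sphere, 2 = torus
        // Don't swap buffers afterwards; just redraw the normal frame.
    }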