Rendering a rect-based minimap - cocos2d-iphone

Cocos2d-iPhone, 1.0.1.
My game has a map, and I have an array of NSValues (wrapping CGRects) that represent the collision areas in the map. What I need is to create a texture that is basically a grey background with black-filled rectangles representing my rects; later I'll use this texture to build my minimap.
The problem is the texture-creation part. Creating CCSprites to represent my rectangles is out of the question (there are hundreds per map!). I also considered drawing primitives with something like ccDrawLine, but I'm not so sure about that.
What do you recommend? How would you create a texture?

Creating a minimap means creating a scaled-down version of whatever map you're using to represent the game world.
One approach that might work is to scale down your map layer (CCTMXTiledMap?) so it fits the size of your minimap, then render it onto a CCRenderTexture. This may be time-consuming, so it's a good idea not to update the minimap render texture every frame.
Alternatively, loop over your tilemap and, for each tile, render a single pixel at the appropriate position onto the render texture, colored according to the tile's type (grass, mountain, water, etc.). If the resulting minimap is too large or too small, plot 2x2 pixels (or more) per tile, or scale the render texture.
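If you'd rather build the texture yourself instead of going through CCRenderTexture, you can also rasterize the rects into a pixel buffer on the CPU and upload it as a single texture (cocos2d can wrap a raw GL texture, e.g. via CCTexture2D's initWithData: initializer). A rough C/OpenGL sketch, where MAP_W, MAP_H, and the Rect struct are placeholders standing in for your own map size and CGRect array:

```c
/* Rough sketch: rasterize collision rects into a grey/black pixel buffer
 * and upload it as one GL texture. Include your platform's GL header
 * (e.g. <OpenGLES/ES1/gl.h> on iOS). Rects are assumed to be already
 * clipped to the map bounds. */
#include <string.h>

#define MAP_W 256
#define MAP_H 256

typedef struct { int x, y, w, h; } Rect;

GLuint buildMinimapTexture(const Rect *rects, int count)
{
    static unsigned char pixels[MAP_W * MAP_H * 3];   /* RGB */
    memset(pixels, 0x80, sizeof(pixels));             /* grey background */

    for (int i = 0; i < count; ++i)                   /* black-filled rects */
        for (int y = rects[i].y; y < rects[i].y + rects[i].h; ++y)
            for (int x = rects[i].x; x < rects[i].x + rects[i].w; ++x) {
                unsigned char *p = &pixels[(y * MAP_W + x) * 3];
                p[0] = p[1] = p[2] = 0x00;
            }

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, MAP_W, MAP_H, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);
    return tex;
}
```

Since the buffer is built once on the CPU, hundreds of rects cost nothing at draw time: the minimap is just one textured sprite from then on.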

Sounds like a tile-based game to me.

Related

OpenGL 3.2+ Drawing Cubes around existing vertices

So I have a cool program that renders a pretty cube in the centre of the screen.
I'm now trying to create a tiny cube on each corner of the existing cube (so 8 tiny cubes), centred on each of the existing cube's corners (or vertices).
I'm assuming an efficient way to implement this would be with a loop of some kind, to minimise the amount of code.
My query is: how does this affect the VAOs/VBOs? Even in a loop, would each one need its own buffer, or could they all be sent at the same time...
Secondly, if it can be done, what would the structure of this loop look like, in terms of handling the separate vertices, given that each vertex has different coordinates...
As Vaughn Cato said, each object (using the same VBO) can simply be drawn at a different location in world space, so you do not need to define separate VBOs for each object.
To accomplish this, you simply need a loop that modifies the model matrix before each cube is rendered, changing the origin at which it is drawn.
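As a rough sketch of that loop, assuming a shader with a mat4 model uniform and a unit cube (36 vertices, spanning -1..1) already set up in a VAO; vao, modelLoc, and the 0.2f scale are placeholders, and a GL function loader (e.g. GLEW) is assumed to be initialized elsewhere:

```c
/* Reuse one cube VAO/VBO and draw it nine times: once at full size, then
 * once per corner at a small scale, by updating a single model uniform. */
static const float I[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };

static const float corners[8][3] = {
    {-1,-1,-1}, { 1,-1,-1}, {-1, 1,-1}, { 1, 1,-1},
    {-1,-1, 1}, { 1,-1, 1}, {-1, 1, 1}, { 1, 1, 1},
};

void drawCubes(GLuint vao, GLint modelLoc)
{
    glBindVertexArray(vao);

    /* the big cube, untransformed */
    glUniformMatrix4fv(modelLoc, 1, GL_FALSE, I);
    glDrawArrays(GL_TRIANGLES, 0, 36);

    /* one tiny cube centred on each corner */
    for (int i = 0; i < 8; ++i) {
        const float s = 0.2f;                /* tiny-cube scale */
        const float model[16] = {            /* column-major: scale + translate */
            s, 0, 0, 0,
            0, s, 0, 0,
            0, 0, s, 0,
            corners[i][0], corners[i][1], corners[i][2], 1,
        };
        glUniformMatrix4fv(modelLoc, 1, GL_FALSE, model);
        glDrawArrays(GL_TRIANGLES, 0, 36);
    }
}
```

The only thing that changes per iteration is the uniform upload; the buffers themselves are bound once and shared by all nine draws.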

Preventing Overdraw in Isometric Art

Background:
I am creating a game that presents the world in an isometric perspective, achieved by drawing isometric tiles. My current implementation is naive, using the painter's algorithm: drawing from back to front, from bottom to top, using surface blits from tile images.
The Problem:
I'm concerned (maybe unduly so, please let me know if this is the case) about overdraw. Here's a small snapshot of a single layer of tiles:
The areas highlighted in pink are the areas where the back-to-front, bottom-to-top method blits pixels to the canvas more than once. This is a small and contrived example, but in practice I hope to accomplish something more along the lines of this:
(image credit eBoy)
With an image as complex as this, and a tile-based implementation, each screen pixel is drawn to several times before the final image is composited, which feels really inefficient. Since these are just 2D images with, in the end, one-bit alpha masks, there aren't as many concerns as there would be in 3D (e.g. no wasted lighting or transform math), but it still seems there should be a more elegant way of deciding whether a pixel should be drawn at all, based on whether it would be occluded in the final composition.
Solutions?
The best solution I've come up with so far (sketched in code below) is to:
Reverse the drawing order and draw front-to-back, top-to-bottom.
Keep a single-bit-per-pixel fake z-buffer that records whether or not a pixel has been drawn yet.
Only draw a tile if some of the pixels it covers haven't been drawn yet.
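In code, I imagine the fake z-buffer looking something like this (a rough sketch; SCREEN_W, SCREEN_H, and blitTile are placeholders, and I'm using a byte per pixel instead of packed bits for simplicity):

```c
/* Rough sketch of the 1-bit coverage idea. Tiles are assumed to be
 * already clipped to the screen. */
#include <string.h>

#define SCREEN_W 640
#define SCREEN_H 480

static unsigned char covered[SCREEN_W * SCREEN_H];

/* Is any pixel in this tile's screen rect still uncovered? */
static int tileVisible(int x0, int y0, int w, int h)
{
    for (int y = y0; y < y0 + h; ++y)
        for (int x = x0; x < x0 + w; ++x)
            if (!covered[y * SCREEN_W + x])
                return 1;
    return 0;
}

/* Mark the tile's pixels as drawn. For a truly correct mask this should
 * follow the tile's alpha mask, not its bounding rect. */
static void markCovered(int x0, int y0, int w, int h)
{
    for (int y = y0; y < y0 + h; ++y)
        memset(&covered[y * SCREEN_W + x0], 1, w);
}
```

The draw loop, iterating front-to-back and top-to-bottom, would clear covered[] each frame and then, per tile, call blitTile() and markCovered() only when tileVisible() returns true.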
Is there a better way to do this? Are blit operations superefficient and I'm tilting at windmills here?
Windmills. Especially if you're using OpenGL-accelerated SDL2 blits.

Sprite Sheet With OpenGL and SDL

I've been working on a new game and have finally reached the point of coding the motion of my main character, but I have a doubt about how to do that.
Previously, I made two games in Allegro, where sprite sheets are fairly easy to implement: I establish the frame size and position within the image and save every frame as a separate bitmap. But I know that doing it that way with OpenGL isn't necessary and costs a bit more.
So I've been thinking about how to store my sprite sheet and use it in my program, and I have only one idea:
Load the image and turn it into a texture; then, in the function that handles my animation, simply grab a portion of that texture to draw, instead of storing every single frame as its own texture.
Is this the best way to do it?
Thanks in advance for the help.
You're on the right track.
Things to consider:
leave enough dead space around each sprite so that the video card does not blend in texels from adjacent sprites at small scales.
set texture min/mag filtering appropriately. GL_NEAREST is OK if you're going for the blocky look.
if you want to be fancy and save some texture memory, there's no reason that the sprites have to be laid out in a regular grid. Smaller sprites can be packed closer in the texture.
if your sprites are being rendered from 3D models, you could output normal & displacement maps from the model into another texture, then combine them in a fragment shader for some awesome lighting and self-shadowing.
You've got the right idea: if you have a bunch of sprites, it is much better to stick them all in one big texture. Just draw your sprites as textured quads whose texture coordinates index into the frame of the sprite. You can do a few optimizations, but most of them revolve around getting the most out of your texture memory and packing the sprites closely together without blending issues.
I know that doing it that way with OpenGL isn't necessary and costs a bit more.
Why not? There are no real downsides to putting a lot of sprites into a single texture. All you need to do is change the texture coordinates to pick the region in question out of the texture.
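As a rough sketch of the texture-coordinate math, assuming the sheet is laid out as a regular grid and you're using old fixed-function GL (which fits an SDL 1.x setup); SHEET_W, SHEET_H, FRAME_W, and FRAME_H are placeholders for your own layout:

```c
/* Draw frame `index` of a sprite sheet as one textured quad. Assumes an
 * orthographic projection in window pixels; depending on how the image
 * was loaded, the v coordinates may need flipping. */
#define SHEET_W 256
#define SHEET_H 256
#define FRAME_W 32
#define FRAME_H 32

void drawSprite(GLuint sheetTex, int index, float x, float y)
{
    int cols = SHEET_W / FRAME_W;
    float u0 = (index % cols) * (float)FRAME_W / SHEET_W;
    float v0 = (index / cols) * (float)FRAME_H / SHEET_H;
    float u1 = u0 + (float)FRAME_W / SHEET_W;
    float v1 = v0 + (float)FRAME_H / SHEET_H;

    glBindTexture(GL_TEXTURE_2D, sheetTex);
    glBegin(GL_QUADS);
    glTexCoord2f(u0, v0); glVertex2f(x,           y);
    glTexCoord2f(u1, v0); glVertex2f(x + FRAME_W, y);
    glTexCoord2f(u1, v1); glVertex2f(x + FRAME_W, y + FRAME_H);
    glTexCoord2f(u0, v1); glVertex2f(x,           y + FRAME_H);
    glEnd();
}
```

Animation then becomes nothing more than incrementing `index` over time; the texture stays bound and untouched.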

OpenGL 2D game question

I want to make a game with Worms-like destructible terrain in 2D, using OpenGL.
What is the best approach for this?
Draw it pixel by pixel? (Uh, not good?)
Have the world as a texture and manipulate it (is that possible?)
Thanks in advance
Thinking about the way Worms terrain looked, I came up with this idea. But I'm not sure how you would implement it in OpenGL. It's more of a layered 2D drawing approach. I'm posting the idea anyway. I've emulated the approach using Paint.NET.
First, you have a background sky layer.
And you have a terrain layer.
The terrain layer is masked so the top portion isn't drawn. Draw the terrain layer on top of the sky layer to form the scene.
Now for the main idea. Any time there is an explosion or other terrain-deforming event, you draw a circle or other shape on the terrain layer, using the terrain layer itself as a drawing mask (so only the part of the circle that overlaps existing terrain is drawn), to wipe out part of the terrain. Use a transparent/mask-color brush for the fill and some color similar to the terrain for the thick pen.
You can repeat this process to add more deformations. You could keep this layer in memory and add deformations as they occur or you could even render them in memory each frame if there aren't too many deformations to render.
I guess you'd better use texture-filled polygons with the correct mapping (a linear one that doesn't stretch the texture to use all the texels, but leaves the cropped areas out), and then reshape them as they get destroyed.
I'm assuming your problem will be to implement the collision between characters/weapons/terrain.
As long as you aren't doing this on OpenGL ES, you might be able to get away with using the stencil buffer to do per-pixel collision detection and have your terrain be a single modifiable texture.
This page will give you an idea:
http://kometbomb.net/2007/07/11/hardware-accelerated-2d-collision-detection-in-opengl/
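The rough idea, sketched below in the spirit of that article (not its exact code), assumes a desktop GL context with a stencil buffer and occlusion queries (GL 1.5+); drawObjectA and drawObjectB are placeholders for your own rendering, with depth testing assumed off:

```c
/* Hardware per-pixel overlap test: draw A into the stencil buffer, then
 * draw B under an occlusion query counting samples that pass the stencil
 * test. Any passing sample means the two shapes overlap. */
GLuint query;
glGenQueries(1, &query);

glClear(GL_STENCIL_BUFFER_BIT);
glEnable(GL_STENCIL_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);

glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawObjectA();                         /* marks A's pixels with 1 */

glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glBeginQuery(GL_SAMPLES_PASSED, query);
drawObjectB();                         /* only overlapping pixels pass */
glEndQuery(GL_SAMPLES_PASSED);

/* Note: fetching the result stalls the pipeline, so batch these tests. */
GLuint overlap = 0;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &overlap);
/* overlap > 0 means a pixel-level collision */

glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDisable(GL_STENCIL_TEST);
```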
The way I imagine it is this:
a plane with the texture applied
a path (a vector of points/segments) used for ground collisions.
When something explodes, you do a Boolean operation (rectangle minus circle) on the texture (revealing the background) and on the 'walkable' path.
What I'm trying to say is: you do a geometric Boolean operation and use the result to update the texture (with an alpha mask or something) and to update the data structure you use to keep track of the walkable area (whichever that might be).
Split things up, instead of relying only on GL draw methods.
I think I would start by drawing the foreground into the stencil buffer so the stencil buffer is set to 1 bits anywhere there's foreground, and 0 elsewhere (where you want your sky to show).
Then to draw a frame, you draw your sky, enable the stencil buffer, and draw the foreground. For the initial frame (before any explosion has destroyed part of the foreground) the stencil buffer won't really be doing anything.
When you do have an explosion, however, you draw it to the stencil buffer (clearing the stencil buffer for that circle). Then you re-draw your data as before: draw the sky, enable the stencil buffer, and draw the foreground.
This lets you get the effect you want (the foreground disappears where desired) without having to modify the foreground texture at all. If you prefer not to use the stencil buffer, the alternative that seems obvious to me would be to enable blending, and just manipulate the alpha channel of your foreground texture -- set the alpha to 0 (transparent) where it's been affected by an explosion. IMO, the stencil buffer is a bit cleaner approach, but manipulating the alpha channel is pretty simple as well.
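To make the stencil sequence concrete, here is a rough sketch, assuming a context created with a stencil buffer; drawSky, drawForeground, and drawExplosionCircle are placeholders for your own rendering code:

```c
/* One-time setup: write 1 into the stencil buffer wherever the
 * foreground covers a pixel. */
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawForeground();

/* On explosion: punch a hole by writing 0 over the blast area.
 * Mask off color writes so only the stencil is touched. */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilFunc(GL_ALWAYS, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawExplosionCircle(x, y, radius);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

/* Every frame: sky everywhere, foreground only where stencil == 1. */
glDisable(GL_STENCIL_TEST);
drawSky();
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawForeground();
```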
I think, but this is just a quick idea, that a good way might be to draw a Very Large Number of Lines.
I'm thinking that you represent the landscape as a bunch of line segments; for each column of the screen you have 0..n vertical lines that make up the ground:
12 789
0123 6789
0123456789
0123456789
In the above awesomeness, the column of "0"s makes up a single line, and so on. I didn't try to illustrate the case where a single pixel column has more than one line, since it's a bit hard in this coarse format.
I'm not sure this will be efficient, but it at least makes some sense since lines are an OpenGL primitive.
You can color and texture the lines by enabling texture-mapping and specifying the desired texture coordinates for each line segment.
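For the simple case of one ground segment per column, the drawing itself is tiny; heights[] is an assumed array of per-column ground heights, and texturing would add a glTexCoord call per vertex:

```c
/* One vertical GL line per screen column, assuming an orthographic
 * projection in window pixels with y = 0 at the bottom. */
#define SCREEN_W 640

void drawTerrainLines(const int *heights)
{
    glBegin(GL_LINES);
    for (int x = 0; x < SCREEN_W; ++x) {
        glVertex2i(x, 0);            /* bottom of the screen */
        glVertex2i(x, heights[x]);   /* top of the ground in this column */
    }
    glEnd();
}
```

An explosion then just lowers (or splits) the entries of heights[] it touches, and the next frame draws the crater automatically.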
Typically the way I have seen it done is to have each entity be a textured quad, then update the texture for animation. For destructible terrain it might be best to break the terrain into tiles; then you only have to update the ones that have changed. Don't use glDrawPixels; it is probably the slowest approach possible (outside of reloading textures from disk every frame, though even that would be close).

Terrain minimap in OpenGL?

So I have what is essentially a game... There is terrain in this game. I'd like to be able to create a top-down view minimap so that the "player" can see where they are going. I'm doing some shading etc on the terrain so I'd like that to show up in the minimap as well. It seems like I just need to create a second camera and somehow get that camera's display to show up in a specific box. I'm also thinking something like a mirror would work.
I'm looking for approaches that I could take that would essentially give me the same view I currently have, just top down... Does this seem feasible? Feel free to ask questions... Thanks!
One way to do this is to create an FBO (framebuffer object) with a renderbuffer attached, attach a texture to the FBO as its color target, and render your minimap into it. You can then map that texture onto anything you'd like, generally a quad. You can do this for all sorts of HUD objects. This also means that you don't have to redraw the contents of your HUD/menu objects as often as your main view; update the associated buffer only as often as you require. You will often want to downsample (in the polygon-count sense) the objects/scene you are rendering to the FBO in this case. The functions in the API you'll want to check into are:
glGenFramebuffersEXT
glBindFramebufferEXT
glGenRenderbuffersEXT
glBindRenderbufferEXT
glRenderbufferStorageEXT
glFramebufferRenderbufferEXT
glFramebufferTexture2DEXT
glGenerateMipmapEXT
There is a write-up on using FBOs on gamedev.net. Another potential optimization applies if the contents of the minimap are static and you are simply moving a camera over this static view (truly just a map): you can render a portion of the map that is much larger than what you actually want to display to the player, and fake a camera by adjusting the texture coordinates of the object it's mapped onto. This only works if your minimap uses an orthographic projection.
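As a rough sketch of that setup using the EXT entry points above (the 256x256 size and the top-down render call are assumptions standing in for your own minimap resolution and scene code):

```c
GLuint fbo, depthRb, minimapTex;

/* Color target: a texture we can later map onto a HUD quad. */
glGenTextures(1, &minimapTex);
glBindTexture(GL_TEXTURE_2D, minimapTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* Depth renderbuffer so the top-down render is depth-tested. */
glGenRenderbuffersEXT(1, &depthRb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 256, 256);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, minimapTex, 0);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, depthRb);
if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
    /* handle the error */;

/* When the minimap needs refreshing (not necessarily every frame): */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glViewport(0, 0, 256, 256);
/* ... set an orthographic top-down camera and render the terrain ... */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);  /* back to the default framebuffer */
```

After that, minimapTex can be drawn in a corner of the screen like any other texture.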
Well, I don't have an answer to your specific question, but it's common in games to render the world to an image using an orthographic top-down projection, and to use that image for the minimap. It would at least be less performance-intensive than rendering it on the fly.