I recently started looking at cocos2d game development.
What's the difference between sprite and texture?
Maybe I could throw 'bitmap' into the mix as well. What is a bitmap?
They all seem to be the same thing: a 2D image.
A texture is an in-memory image that the device can draw onto the screen.
A sprite actually draws the texture, or just a specific rectangle of the texture, on the screen. The sprite can be scaled, rotated, positioned, skewed, and tinted (colorized), among other things.
Multiple sprites can share the same texture. The texture is loaded into memory only once, regardless of how many sprites use it. Moreover, with CCSpriteBatchNode you can "batch" the drawing of all sprites that use the same texture to achieve better performance.
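As a rough sketch of that idea (cocos2d-x 2.x style API; "sheet.png", the parent layer, and the 64x64 frame rectangles are hypothetical):

    // Two sprites sharing one texture, each drawing a different region of it.
    CCSpriteBatchNode* batch = CCSpriteBatchNode::create("sheet.png");
    layer->addChild(batch);

    CCSprite* hero  = CCSprite::createWithTexture(batch->getTexture(),
                                                  CCRectMake(0, 0, 64, 64));
    CCSprite* enemy = CCSprite::createWithTexture(batch->getTexture(),
                                                  CCRectMake(64, 0, 64, 64));
    batch->addChild(hero);   // drawn in one batch because they share the texture
    batch->addChild(enemy);

The texture behind "sheet.png" is loaded once; both sprites just reference different rectangles of it.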
A bitmap is a general term for a computer image where each pixel is represented by one or more bits. There's also the image format BMP, which is (or was) popular on Windows. Most people would just say "image" these days, since there are other forms of "bitmaps" that are not images. For example, in AI code you often have bitmaps (arrays of bits) that represent state information for the AI or for pathfinding algorithms across all areas of the game world; e.g., each area in the world could have a "blocking" bit or a "resource" bit that helps the AI make decisions.
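A minimal sketch of that non-image kind of bitmap (the 64x64 grid and all names are made up for illustration):

    #include <bitset>

    // One "blocking" bit per cell of a hypothetical 64x64 world grid.
    constexpr int W = 64, H = 64;
    std::bitset<W * H> blocking;

    bool isBlocked(int x, int y)  { return blocking[y * W + x]; }
    void setBlocked(int x, int y) { blocking.set(y * W + x); }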
See also Wikipedia:
Texture Mapping
Bitmap
You can load a texture into memory; your image file, once loaded, is a texture. A sprite is an object with a set of parameters, several of which are a pointer to the texture, a size, and texture coordinates.
For example, you can load a 2048x2048 texture into memory, then create a sprite that uses only part of this texture.
Related
I'm using DevIL to load OpenGL textures, which are chopped up into sprites by metadata. The problem is that each chopped segment has white fringing around its edges.
This sprite is 64 x 64, so no pixel-padding procedures have been used. The spritesheet is tightly packed: all of the sprites are assembled right next to each other, as opposed to being separated.
Is it possible that there is a configuration that I am missing in my DevIL texture loading function? Or is it a rendering feature of OpenGL?
What is your texture filter? Does setting it to GL_NEAREST solve your problem? If so, I have written extensively about the cause of this problem:
opengl, Black lines in-between tiles.
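For reference, setting the bound texture's filters to GL_NEAREST is a couple of standard GL calls (here `tex` is assumed to be an already-created texture id):

    glBindTexture(GL_TEXTURE_2D, tex);
    // Sample the single nearest texel instead of blending neighbors, which
    // stops texels from adjacent sprites bleeding in at the edges.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);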
I am doing video texturing onto a rectangle surface I created. I need to create two more rectangles of different sizes, then copy a part of the video texture playing on the first surface (for example, the middle part of the video) and play it on the newly created surfaces. Is this possible using OpenGL ES? Through my native video surface renderer I can do this and map it into the OGLES application. I was just wondering whether it is possible to do it directly from the GL app itself, by copying a selected rectangle from one of the video-textured surfaces.
If your texture is full-motion video, you should not copy the texture data, because that will be too slow to keep up with video frame rates. You should avoid using glTexImage2D() and instead use the EGL Image extensions, as detailed in my third article here:
http://montgomery1.com/opengl/
But either way, once you have the image in a texture and the texture is bound with glBindTexture(), then any number of rectangles you draw will be textured with that same currently-bound texture, without more copying. These rectangles are actually geometry constructed of triangles and not "surfaces". The framebuffer is the surface. The texture coordinates can be different for each rectangle, which allows you to crop and/or scale the texture mapping uniquely for each.
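A rough sketch of that idea with GLES 1.1-style client arrays (the texture id, vertex positions, and crop rectangle are all hypothetical):

    // One bound video texture, two rectangles, each cropping the texture
    // differently through its own texture coordinates.
    glBindTexture(GL_TEXTURE_2D, videoTex);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    // First rectangle: the whole video frame.
    GLfloat verts1[]  = { -1.0f,-1.0f,   0.0f,-1.0f,  -1.0f,0.0f,   0.0f,0.0f };
    GLfloat texFull[] = {  0.0f, 1.0f,   1.0f, 1.0f,   0.0f,0.0f,   1.0f,0.0f };
    glVertexPointer(2, GL_FLOAT, 0, verts1);
    glTexCoordPointer(2, GL_FLOAT, 0, texFull);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // Second rectangle: same texture, cropped to the middle of the frame.
    GLfloat verts2[]  = { 0.10f,0.10f,  0.90f,0.10f,  0.10f,0.90f,  0.90f,0.90f };
    GLfloat texCrop[] = { 0.25f,0.75f,  0.75f,0.75f,  0.25f,0.25f,  0.75f,0.25f };
    glVertexPointer(2, GL_FLOAT, 0, verts2);
    glTexCoordPointer(2, GL_FLOAT, 0, texCrop);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);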
Let's say I have this image, and in it is an object (a cube). That object is being tracked (with labels) and I manage to render a virtual cube onto it (augmented reality). Now that I can render a virtual cube onto it, I want to be able to make the object 'disappear' with a really basic diminished-reality technique called "inpainting". The inpainting in question is pretty simple (it has to be, or the FPS will suffer) and it requires me to do some operations on pixels and their neighbors (as with Gaussian blur or other basic image processing).
To do that I first need:
A mask: black background with a white cube in it.
Access each pixel of the initial image (at coordinates x and y) as well as its neighborhood and do stuff based on the pixel value of the mask at the same x and y coordinates. So basically the mask serves as a way to say ignore this pixel or use this pixel.
How do I do this using OpenGL? I want to be able to access pixel values one by one, preferably in 2D because of the neighbors.
Do I use FBOs or PBOs? I've read many things about buffers and methods like glDrawPixels(), but I'm having trouble putting them all together. The paper I saw this method in used the GL_BACK buffer, but mine is already used. Some sample code (C++) would be really appreciated, with all the formalities (OpenGL calls), since I'm still a beginner in OpenGL.
I'm even thinking of using OpenCV if pixel manipulation is too hard in OpenGL since my AR library (Aruco) works on top of OpenCV. In that case I will still need to get the mask (white cube on black background), convert it to a cv::Mat and then do my processing.
I know this approach is inefficient (going back and forth from the GPU/CPU) but my goal (for now) is to at least make the basics work.
Set up a framebuffer object to render your original image + virtual cube into. Here's a tutorial.
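A minimal render-to-texture setup might look like this (sizes and names are hypothetical):

    // Create a texture to receive the rendered scene.
    GLuint fbo, sceneTex;
    glGenTextures(1, &sceneTex);
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    // Attach it to a framebuffer object and render into it.
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, sceneTex, 0);
    // ... draw the original image + virtual cube here ...
    glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer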
Next you can attach that framebuffer texture as an input (sampler) texture of your next stage and render a quad (two triangles) that covers your mask.
In the fragment shader you can read the built-in variable gl_FragCoord to get the pixel's window (screen) coordinate; dividing it by the framebuffer size gives the corresponding texture coordinate. With the texture filter set to GL_NEAREST you access exact texels, and the neighboring pixels are available at a displacement of one texel (deltaX = 1/Width, deltaY = 1/Height).
Using a previous framebuffer texture as the source is mandatory, as the currently active framebuffer is write-only.
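A sketch of such a fragment shader (GLSL 1.x, written as a C++ string; the uniform names and the simple 4-neighbor average are illustrative, not a full inpainting algorithm):

    // Averages the 4 neighbors wherever the mask is white; passes through elsewhere.
    const char* inpaintFrag = R"glsl(
        uniform sampler2D scene;   // color attachment from the previous FBO pass
        uniform sampler2D mask;    // white where the cube is, black elsewhere
        uniform vec2 texel;        // vec2(1.0/width, 1.0/height)
        void main() {
            vec2 uv = gl_FragCoord.xy * texel;   // window coord -> texture coord
            if (texture2D(mask, uv).r > 0.5) {
                vec4 sum = texture2D(scene, uv + vec2( texel.x, 0.0))
                         + texture2D(scene, uv + vec2(-texel.x, 0.0))
                         + texture2D(scene, uv + vec2(0.0,  texel.y))
                         + texture2D(scene, uv + vec2(0.0, -texel.y));
                gl_FragColor = sum * 0.25;
            } else {
                gl_FragColor = texture2D(scene, uv);
            }
        }
    )glsl";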
I have an assignment to render a terrain from a greyscale 8-bit BMP and to color the terrain from a 24-bit BMP texture. I managed to get a proper landscape with heights and so on, and I also get the colors from the texture bitmap. The problem is that the fully colored, rendered terrain is very "blocky": it shows the right colors and heights, but it looks so blocky that I can almost see the pixels from the bitmap, even though I use glShadeModel(GL_SMOOTH). Any hints are appreciated.
Do you use the bitmap as a texture, or do you set vertex colours from the bitmap? I suggest you use a texture, with the planar vertex position as the texture coordinate.
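A minimal sketch of that suggestion (the vertex layout and grid size are hypothetical):

    // Derive texture coordinates from the planar (x, z) grid position, so the
    // 24-bit color texture is stretched smoothly across the whole terrain.
    struct Vertex { float x, y, z, u, v; };

    void assignTexCoords(Vertex* verts, int gridW, int gridH) {
        for (int z = 0; z < gridH; ++z)
            for (int x = 0; x < gridW; ++x) {
                Vertex& v = verts[z * gridW + x];
                v.u = (float)x / (gridW - 1);
                v.v = (float)z / (gridH - 1);
            }
    }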
One thing to take into consideration is whether you render with GL_TRIANGLES or GL_TRIANGLE_STRIP; this makes a difference to performance. Second, if you are using lighting, you have to define normals, either per triangle or per vertex of each triangle. This gets tricky because almost every triangle lies on a different plane, and not having proper normals will make the terrain look blocky. Third, the size of the triangles matters: smaller triangles, i.e. more subdivisions in your [x,z] plane, increase the resolution and thus the visual quality, but also slow down your frame rate, so you have to find a good balance between the two.
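For the normals point, one common approximation is to build a smooth per-vertex normal from the heightmap with central differences (here `h` is a hypothetical array holding one height per grid cell, with unit grid spacing assumed):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Smooth normal at grid cell (x, z) from the height differences of its
    // neighbors, clamped at the borders, then normalized.
    Vec3 vertexNormal(const float* h, int gridW, int gridH, int x, int z) {
        float hl = h[z * gridW + (x > 0 ? x - 1 : x)];
        float hr = h[z * gridW + (x < gridW - 1 ? x + 1 : x)];
        float hd = h[(z > 0 ? z - 1 : z) * gridW + x];
        float hu = h[(z < gridH - 1 ? z + 1 : z) * gridW + x];
        Vec3 n = { hl - hr, 2.0f, hd - hu };
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        return { n.x / len, n.y / len, n.z / len };
    }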
I've been working on a new game and finally reached the point where I started to code the motion of my main character, but I have a doubt about how to do that.
Previously I made two games in Allegro, where spritesheets are fairly easy to implement: I establish the frame and its position on the image and save every frame as a different bitmap. But I know that doing that in OpenGL isn't necessary and costs a little bit more.
So I've been thinking about how to store my spritesheet and use it in my program, and I have only one idea:
Load the image and turn it into a texture; then, in the function that handles my animation, simply grab a portion of the texture to draw instead of storing every single frame as its own texture.
Is this the best way to do that?
Thanks beforehand for the help.
You're on the right track.
Things to consider:
leave enough dead space around each sprite so that the video card does not blend in texels from adjacent sprites at small scales.
set texture min/mag filtering appropriately. GL_NEAREST is OK if you're going for the blocky look.
if you want to be fancy and save some texture memory, there's no reason that the sprites have to be laid out in a regular grid. Smaller sprites can be packed closer in the texture.
if your sprites are being rendered from 3D models, you could output normal & displacement maps from the model into another texture, then combine them in a fragment shader for some awesome lighting and self-shadowing.
You've got the right idea: if you have a bunch of sprites, it is much better to stick them all in one big texture. Just draw your sprites as textured quads whose texture coordinates index into the frame of the sprite. You can do a few optimizations, but most of them revolve around getting the most out of your texture memory and packing the sprites closely together without blending issues.
I know that doing that in OpenGL isn't necessary and costs a little bit more.
Why not? There are no real downsides to putting a lot of sprites into a single texture. All you need to do is change the texture coordinates to pick the region in question out of the texture.
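For example, a tiny helper along these lines (the grid layout and names are hypothetical) computes the texture coordinates for frame i of a sheet:

    // UV rectangle for frame `i` of a sheet laid out as a cols x rows grid.
    struct UVRect { float u0, v0, u1, v1; };

    UVRect frameUV(int i, int cols, int rows) {
        int cx = i % cols;                       // column of the frame
        int cy = i / cols;                       // row of the frame
        float w = 1.0f / cols, h = 1.0f / rows;
        return { cx * w, cy * h, cx * w + w, cy * h + h };
    }

Feed the resulting u0..u1 and v0..v1 into the quad's texture coordinates to show that frame.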