I'm using DevIL to load OpenGL textures, which are chopped up into sprites using metadata. The problem is that the chopped segment has white around the edges (see below):
This sprite is 64 × 64, so no pixel-padding procedure has been used. The spritesheet is tightly packed: all of the sprites sit right next to each other (as opposed to being separated by padding).
Is it possible that there is a configuration option I am missing in my DevIL texture-loading function? Or is it a rendering feature of OpenGL?
What is your texture filter? Does setting it to GL_NEAREST solve your problem? If so, I have written extensively about the cause of this problem:
OpenGL: black lines in-between tiles.
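If you need to keep GL_LINEAR filtering, the usual fix is to inset each tile's texture coordinates by half a texel so interpolation never samples a neighboring tile. A minimal sketch (the `UVRect`/`tileUVs` names are my own, and a square atlas with a uniform tile grid is assumed):

```cpp
#include <cassert>
#include <cmath>

// UV rectangle for one tile in a uniform grid atlas, inset by half a texel
// on every side so GL_LINEAR never reads texels from adjacent tiles.
struct UVRect { float u0, v0, u1, v1; };

// atlasSize: atlas edge in pixels (square atlas assumed),
// tileSize: tile edge in pixels, col/row: tile indices in the grid.
UVRect tileUVs(int atlasSize, int tileSize, int col, int row) {
    float texel = 1.0f / atlasSize;
    UVRect r;
    r.u0 = (col * tileSize) * texel + 0.5f * texel;
    r.v0 = (row * tileSize) * texel + 0.5f * texel;
    r.u1 = ((col + 1) * tileSize) * texel - 0.5f * texel;
    r.v1 = ((row + 1) * tileSize) * texel - 0.5f * texel;
    return r;
}
```

The half-texel inset keeps the sample centers strictly inside the tile, at the cost of dropping half a pixel from each edge.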
Related
I have a 512 × 512 texture which holds a number of images that I want to use in my application. After adding the image data to the texture, I save the texture coordinates of the individual images. Later I apply these to some quads that I am drawing. The texture has mipmapping activated.
When I take a screenshot of the rendered scene at exactly the same instant in two different runs of the application, I notice differences in the image only among the quads textured with this mipmapped texture. Can mipmapping cause such an issue?
My best guess is that it has to do with precision in your shader. Check out this problem that I had (and fought with for a while) and my solution:
opengl texture mapping off by 5-8 pixels
It is probably a combination of mipmapping's automatic scaling of your texture atlas and the precision hints in your shader code.
Also see the other linked question:
Why is a texture coordinate of 1.0 getting beyond the edge of the texture?
I am working on a game with a friend, and we are using OpenGL, GLUT, DevIL, and C++ to render everything. Most of the .png files we are using render properly, but there are random pixels showing up as white.
These pixels fall into two categories. The first are pixels on the edge of the image, which result from the anti-aliasing of Photoshop's stroke feature (which I am trying to fix). The second is more mysterious: when the enemy is standing still, the texture looks fine, but as soon as it jumps, a random white line appears along its top.
The line on top is of varying solidity (this shot is not the most solid)
It seems like a blending issue, but I am not that familiar with the way OpenGL handles transparency (our transparency code was learned from other questions on Stack Overflow, though I couldn't find anything on this particular issue). I am hoping something will fix both issues, but I am more worried about the second.
Our current setup code:
glEnable(GL_BLEND);                                 // standard "over" alpha blending
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_TEXTURE_2D);
glDisable(GL_DEPTH_TEST);                           // 2D sprites: rely on draw order instead
Transparent areas of a bitmap also have a color; if a pixel is 100% transparent, you usually can't see that color. Photoshop usually fills these areas with white.
If your minification or magnification filter is anything other than GL_NEAREST, you will get interpolation. If you interpolate between two pixels, where one is blue and opaque and the other is white and transparent, the result is 50% transparent and light blue. You can get the same problem with mipmaps, since interpolation is used to build them. If you use mipmaps, one solution is to generate them yourself; that way you can ignore the transparent areas when doing the interpolation. See some good explanations here: http://answers.unity3d.com/questions/10302/messy-alpha-problem-white-around-edges.html
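A common alternative to hand-generated mipmaps is to premultiply the image by its alpha before upload and render with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); a fully transparent white texel then becomes (0, 0, 0, 0) and can no longer bleed white into its opaque neighbors during interpolation. A minimal sketch (the `premultiply`/`premultiplyImage` helpers are hypothetical names, operating on plain RGBA8 bytes):

```cpp
#include <cassert>

// Premultiply one 8-bit channel by alpha, with rounding.
unsigned char premultiply(unsigned char channel, unsigned char alpha) {
    return (unsigned char)((channel * alpha + 127) / 255);
}

// Premultiply an RGBA8 image in place (4 bytes per pixel), before uploading
// it with glTexImage2D. Alpha itself is left untouched.
void premultiplyImage(unsigned char* rgba, int pixelCount) {
    for (int i = 0; i < pixelCount; ++i) {
        unsigned char a = rgba[i * 4 + 3];
        rgba[i * 4 + 0] = premultiply(rgba[i * 4 + 0], a);
        rgba[i * 4 + 1] = premultiply(rgba[i * 4 + 1], a);
        rgba[i * 4 + 2] = premultiply(rgba[i * 4 + 2], a);
    }
}
```

Note that premultiplied textures must be drawn with GL_ONE as the source factor; using GL_SRC_ALPHA would apply alpha twice.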
Why are you using PNG files? You save some disk space, but you need to include complex libraries like DevIL. You don't save any space in the delivery of an application, as most tools that create delivery packages compress very efficiently. And you don't save any memory on the GPU, which may be the most critical resource.
This looks like an artifact in your source PNG. Are you sure there are no such light opaque pixels there?
The white line appearing on top could be a UV interpolation error bleeding in from the neighboring texture in your texture atlas (or from padding, if you pad your NPOT textures to POT with white opaque pixels). That's why you usually need to pad textures with at least one edge pixel in every direction. That won't help with mipmaps, though; as Lars said, you might need to use custom mipmap generation or drop it altogether.
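As a sketch of that one-edge-pixel padding (the `blitWithGutter` helper is a made-up name; pixels are plain ints standing in for packed RGBA values), the idea is to copy the sprite into the atlas and then replicate its border pixels one texel outward, so linear filtering at the sprite's edge still samples the sprite's own colors:

```cpp
#include <cassert>
#include <vector>

// Copy a w x h sprite into an atlas at (x, y), then extrude its edge pixels
// one texel outward into the surrounding gutter (including corners).
// Assumes the caller left at least a 1-pixel gap around the destination.
void blitWithGutter(std::vector<int>& atlas, int atlasW,
                    const std::vector<int>& sprite, int w, int h,
                    int x, int y) {
    for (int r = 0; r < h; ++r)
        for (int c = 0; c < w; ++c)
            atlas[(y + r) * atlasW + (x + c)] = sprite[r * w + c];
    for (int r = 0; r < h; ++r) {                     // left/right columns
        atlas[(y + r) * atlasW + (x - 1)] = sprite[r * w];
        atlas[(y + r) * atlasW + (x + w)] = sprite[r * w + w - 1];
    }
    for (int c = -1; c <= w; ++c) {                   // top/bottom rows
        int cc = c < 0 ? 0 : (c >= w ? w - 1 : c);    // clamp for corners
        atlas[(y - 1) * atlasW + (x + c)] = sprite[cc];
        atlas[(y + h) * atlasW + (x + c)] = sprite[(h - 1) * w + cc];
    }
}
```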
I recently started looking at cocos2d game development.
What's the difference between sprite and texture?
Maybe I could throw 'bitmap' in there too. What is a bitmap?
They all seem to be the same thing as 2d image.
A texture is an in-memory image that the device can draw onto the screen.
A sprite actually draws the texture, or just a specific rectangle of the texture, on the screen. The sprite can be scaled, rotated, positioned, skewed, tinted (colorized) among other things.
Multiple sprites can share the same texture. The texture is only loaded to memory once regardless of how many sprites are using the same texture. Moreover with CCSpriteBatchNode you can "batch" the drawing of all sprites that are using the same texture to achieve better performance.
A bitmap is a general term for a computer image where each pixel is represented by one or more bits. There's also the image format BMP, which is/was popular on Windows. Most people would just say "image" these days, as there are other forms of "bitmaps" that are not images. For example, in AI code you often have bitmaps (arrays of bits) that represent state information for AI or pathfinding algorithms across the game world: each area in the world could have a "blocking" bit, or a "resource" bit, that helps the AI make decisions.
See also Wikipedia:
Texture Mapping
Bitmap
You can load a texture into memory; for example, your image file becomes a texture. A sprite is an object with a set of parameters, among them a pointer to the texture, a size, and texture coordinates.
You can load a 2048 × 2048 texture into memory, then create a sprite that uses part of this texture.
I have an assignment to render a terrain from a greyscale 8-bit BMP and to color the terrain from a 24-bit BMP texture. I managed to get a proper landscape with heights and so on, and I also get the colors from the texture bitmap. The problem is that the fully colored rendered terrain is very "blocky": it shows the right colors and heights, but it looks so blocky, almost as if I can see the pixels from the bitmap. I use glShadeModel(GL_SMOOTH), but it still looks blocky. Any hints are appreciated.
Do you use the bitmap as texture, or do you set vertex colours from the bitmap? I suggest you use a texture, using the planar vertex position as texture coordinate.
There are a few things to take into consideration. First, when rendering, are you using GL_TRIANGLES or GL_TRIANGLE_STRIP? This makes a difference for performance. Second, if you are using lighting, you have to define normals, either per triangle or per vertex of each triangle; this becomes tricky because almost every triangle lies on a different plane, and not having proper normals will make the terrain look blocky. Third, the size of the triangles matters: smaller triangles, i.e. more subdivisions in your [x,z] plane, increase the resolution and thus the visual quality, but also lower your frame rate. You have to find a good balance between the two.
I've been working on a new game and finally reached the point where I start to code the motion of my main character, but I have a doubt about how to do it.
Previously I made two games in Allegro, where spritesheets are kind of easy to implement, because I establish the frame and position on the image and save every frame as a different bitmap. But I know that doing it that way in OpenGL is unnecessary and costs a bit more.
So I've been thinking about how to store my spritesheet and use it in my program, and I have only one idea:
Load the image and turn it into a texture; then, in the function that handles animation, simply grab a portion of the texture to draw instead of storing every single frame as its own texture.
Is this the best way to do it?
Thanks in advance for the help.
You're on the right track.
Things to consider:
leave enough dead space around each sprite so that the video card does not blend in texels from adjacent sprites at small scales.
set texture min/mag filtering appropriately. GL_NEAREST is OK if you're going for the blocky look.
if you want to be fancy and save some texture memory, there's no reason that the sprites have to be laid out in a regular grid. Smaller sprites can be packed closer in the texture.
if your sprites are being rendered from 3D models, you could output normal & displacement maps from the model into another texture, then combine them in a fragment shader for some awesome lighting and self-shadowing.
You've got the right idea: if you have a bunch of sprites, it is much better to stick them all in one big texture. Just draw your sprites as textured quads whose texture coordinates index into the frame of the sprite. You can do a few optimizations, but most of them revolve around getting the most out of your texture memory and packing the sprites closely together without blending issues.
I know that doing that with OpenGL is unnecessary and costs a bit more.
Why not? There are no real downsides to putting a lot of sprites into a single texture. All you need to do is change the texture coordinates to pick the region in question out of the texture.
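As a sketch of that coordinate change (the `frameUVs` helper is a hypothetical name; frames are assumed to be numbered left to right, top to bottom in a uniform cols × rows grid):

```cpp
#include <cassert>
#include <cmath>

// UV rectangle for frame i of a spritesheet laid out as a cols x rows grid.
struct Frame { float u0, v0, u1, v1; };

Frame frameUVs(int i, int cols, int rows) {
    Frame f;
    f.u0 = (float)(i % cols) / cols;   // column within the row
    f.v0 = (float)(i / cols) / rows;   // row within the sheet
    f.u1 = f.u0 + 1.0f / cols;
    f.v1 = f.v0 + 1.0f / rows;
    return f;
}
```

Animating is then just incrementing `i` and handing the resulting UVs to the quad, with the texture bound once.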