I'm having trouble using textures in OpenGL (2D only).
I want to show a level (size 15x15), and every field can have a different value/picture.
I have about 100 different field types, and depending on the level design I have to display a different image for each type.
If I use a single TGA image for every possible field (100 files), everything works fine,
but now I have put all the images together in one file and, depending on the field type, I display a different part of that image.
The problem: sometimes thin lines in the images aren't displayed, and between the different sprites there are often black or gray lines, which make the whole graphic look ugly.
Is there a way to solve this problem easily?
(Maybe I have to load the TGA image into one GLuint and then split it up into 100 different GLuints? Or is there a setting that improves the rendering? Or do I have to change the resolution of the TGA image file itself?)
Here's a part of my image file; every element has a resolution of 220x220 pixels, so the whole picture is 2200x2200 pixels.
And this is the OpenGL output:
I really don't want to have hundreds of image files, especially because loading all these files takes a lot of time, and I'm sure there's a solution out there.
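For reference, a sketch of the per-tile texture coordinate math being described (assuming tiles are indexed left to right, top to bottom; the struct and function names are only illustrative):

struct UVRect { float u0, v0, u1, v1; };

// Plain UV rectangle for the tile at (col, row) in the 2200x2200 atlas.
UVRect tileUV(int col, int row)
{
    const float tile  = 220.0f;   // each element is 220x220 pixels
    const float atlas = 2200.0f;  // whole image is 2200x2200 pixels
    UVRect r;
    r.u0 = (col * tile) / atlas;
    r.v0 = (row * tile) / atlas;
    r.u1 = ((col + 1) * tile) / atlas;
    r.v1 = ((row + 1) * tile) / atlas;
    return r;
}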
EDIT:
I'm using the following settings:
// Specify filtering and edge actions
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
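(Side note: GL_CLAMP clamps against the border color, which can itself cause seams at the texture's outer edge; GL_CLAMP_TO_EDGE, core since OpenGL 1.2, clamps to the edge texel instead and is usually the safer choice for sprite sheets:)

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);  // clamp to edge texel, not border color
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);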
EDIT2:
Now that I've changed GL_LINEAR to GL_NEAREST and added a half texel to the texture coordinates, there are no lines between the images anymore :)
But I still have the other problem: small elements (thin lines) in the image aren't displayed correctly.
This is how it should look:
"Keep in mind, that OpenGL samples textures at the texel centers"
Have a look at the answer here:
OpenGL ES Texture Coordinates Slightly Off
You need to add padding pixels to your textures to avoid lines along their edges, so that when OpenGL samples pixels from the texture being drawn (including mipmap levels, if you render the sprites smaller than they are in your files), no wrongly colored pixels get picked up.
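A minimal sketch of the half-texel inset from the edit above, building on the tileUV sketch earlier (the 220 and 2200 are the question's tile and atlas sizes; names are illustrative). It shrinks each tile's UV rectangle by half a texel on every side, so sampling never picks up a neighboring sprite's texels:

struct UVRect { float u0, v0, u1, v1; };

// Shrink tile (col, row)'s UV rectangle by half a texel on each side.
UVRect tileUVInset(int col, int row)
{
    const float tile  = 220.0f;        // tile size in pixels
    const float atlas = 2200.0f;       // atlas size in pixels
    const float half  = 0.5f / atlas;  // half a texel in UV units
    UVRect r;
    r.u0 = (col * tile) / atlas + half;
    r.v0 = (row * tile) / atlas + half;
    r.u1 = ((col + 1) * tile) / atlas - half;
    r.v1 = ((row + 1) * tile) / atlas - half;
    return r;
}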
What is the best way to texture terrain made from quads in OpenGL? I have around 30 different textures I want to use for my terrain (one texture per terrain type, so 30 terrain types) and would like smooth transitions between any two of them.
I have been browsing the web and found that there are many different methods, including 3D texturing, alpha channels, blending, and shaders. But which of these is the most efficient, and which can handle the number of textures I am looking to use? For example, this popular answer describes some of the techniques, but since the mixmap only has 4 channels (RGBA), it can only support 4 textures.
I should also note that I know nothing about shaders, so techniques that don't require shaders would be preferable.
Since you linked to an answer that describes texture splatting, and its question mentions the game Oblivion, I can provide some additional insight into that.
Basic texture splatting with an RGBA mixmap only supports four textures per terrain quad, but you can use different sets of textures for different quads. Oblivion divides its terrain into squares (called "cells") of 32 grid points (192 feet) per side, and each cell defines its own set of four terrain textures. So you can't have lots of texture diversity within a small area, but you can easily vary your textures over larger regions. If you prefer, you can define texture sets for smaller regions, even individual quads, at the expense of using more memory.
If you really need more than four textures in a quad, you can use multiple mixmaps. For each additional one, you just do another texture lookup to get four more blending factors, and blend in four more textures on top of the results from the previous mixmap. You can scale up to as many textures as you want, again at the expense of memory.
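As a sketch of the blend math (written as plain C++ for illustration; in a real renderer this runs per fragment in a shader, and it assumes the blending factors across all mixmaps sum to one; every name below is made up):

struct Color { float r, g, b; };

// Weighted sum of 4 terrain colors per mixmap; weights[i] is the blend
// factor read from the mixmaps, colors[i] the matching terrain sample.
Color splat(const float* weights, const Color* colors, int mixmaps)
{
    Color out = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < mixmaps * 4; ++i) {
        out.r += weights[i] * colors[i].r;
        out.g += weights[i] * colors[i].g;
        out.b += weights[i] * colors[i].b;
    }
    return out;
}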
Texture splatting can be tricky to combine with LOD techniques on the height map, because when a single low-detail terrain quad represents a group of high-detail quads, you have to sample several different mixmaps for different regions of the big quad. Oblivion sidesteps that problem by using texture splatting only for full-detail terrain; distant cells, rendered at lower resolution, use precomputed textures produced by the editor, which does the splatting and downscaling in advance.
One alternative to texture splatting is to use a clipmap to render a "megatexture". With this approach, you have a single large texture that represents your entire terrain, and you avoid filling up your RAM by loading different parts of it with only as much detail as is actually needed to render it based on the viewer's current position. (Distant parts of the terrain can't be seen at full detail, so there's no need to load them at full detail.)
The advantage of this approach is its artistic freedom: you can place fine details anywhere you want in the texture, without regard to the vertex grid. The disadvantage is that it's rather complex to implement, and the entire clipmap has to be stored somewhere, probably in a big file on disk, so that you can load parts of it into RAM as needed.
I am working on a game with a friend and we are using OpenGL, GLUT, DevIL, and C++ to render everything. Simply put, most of the .pngs we are using render properly, but there are random pixels showing up as white.
These pixels fall into two categories. The first are pixels on the edge of the image, which result from the anti-aliasing of Photoshop's stroke feature (which I am trying to fix). The second is more mysterious: when the enemy is standing still the texture looks fine, but as soon as it jumps, a random white line appears on top of it.
The line on top is of varying solidity (this shot is not the most solid).
It seems like a blending issue, but I am not that familiar with how OpenGL handles transparency (our code for transparency was learned from other questions on Stack Overflow, though I couldn't find anything on this particular issue). I am hoping something will fix both issues, but I am more worried about the second.
Our current setup code:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_TEXTURE_2D);
glDisable(GL_DEPTH_TEST);
Transparent areas of a bitmap also have a color. If an area is 100% transparent you usually can't see it; Photoshop usually fills these areas with white.
If you are using minifying or magnifying flags that are not GL_NEAREST, then you will have interpolation. If you interpolate between two pixels, where one is blue and opaque and the other is white and transparent, you will get something that is 50% transparent and light blue. You can get the same problem with mipmaps, since interpolation is used to build them. If you use mipmaps, one solution is to generate them yourself; that way you can ignore the transparent areas when doing the interpolation. See some good explanations here: http://answers.unity3d.com/questions/10302/messy-alpha-problem-white-around-edges.html
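A minimal sketch of the related fix of bleeding opaque colors into the transparent areas before upload, so interpolation has nothing white to pick up (a single-pass approximation that only fills texels directly adjacent to opaque ones; the function name is illustrative):

#include <vector>
#include <cstdint>

// Copy an opaque neighbor's color into every fully transparent texel.
// Alpha stays 0, so the image looks the same, but GL_LINEAR and mipmap
// interpolation no longer mix in Photoshop's white fill color.
void bleedEdges(std::vector<std::uint8_t>& rgba, int w, int h)
{
    const int dx[] = {-1, 1, 0, 0}, dy[] = {0, 0, -1, 1};
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            std::uint8_t* p = &rgba[(y * w + x) * 4];
            if (p[3] != 0) continue;  // texel is at least partly opaque
            for (int i = 0; i < 4; ++i) {
                int nx = x + dx[i], ny = y + dy[i];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                const std::uint8_t* q = &rgba[(ny * w + nx) * 4];
                if (q[3] != 0) { p[0] = q[0]; p[1] = q[1]; p[2] = q[2]; break; }
            }
        }
    }
}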
Why are you using PNG files? You save some disk space, but you need to include complex libraries like DevIL. You don't save any space in the delivery of an application, since most tools that create delivery packages have very efficient compression. And you don't save any memory on the GPU, which may be the most critical resource.
This looks like an artifact in your source PNG. Are you sure there are no such light opaque pixels there?
The white line appearing on top could be a UV interpolation error bleeding in from a neighboring texture in your texture atlas (or from the padding, if you pad your NPOT textures to POT with white opaque pixels). That's why you usually need to pad textures with at least one edge pixel in every direction. That won't help with mipmaps, though; as Lars said, you might need custom mipmap generation, or to drop mipmapping altogether.
I am using OpenGL 1.3 to do 2D sprite rendering, supporting both POTS (power-of-two size) textures and NPOTS (non-power-of-two size) textures with TEXTURE_2D and TEXTURE_RECTANGLE_ARB respectively.
I started with POTS textures (using TEXTURE_2D), which worked fine, but now I am adding NPOTS textures (using TEXTURE_RECTANGLE_ARB), and this addition has caused the POTS textures (with TEXTURE_2D) to break.
By "break" I mean that the POTS textures are rendered as a grayscale gradient, ranging from gray in the bottom-left corner to white in the top right.
An extra data point (discovered while trying to fix this error): one big difference between TEXTURE_RECTANGLE_ARB and TEXTURE_2D is that the former uses non-normalized coordinates on the textures, whereas TEXTURE_2D uses normalized coordinates ([0.0, 1.0]). I decided to replace TEXTURE_2D's normalized coordinates with non-normalized ones, and this removed the grayscale problem by creating another: it was rendering the wrong texture!
I.e. when using a POTS and an NPOTS texture together, my POTS texture tries to render the NPOTS texture.
Does anyone have any idea why this might be happening? Thank you!
Okay, so it turns out it was a rather silly mistake in the original TEXTURE_2D code: I had forgotten to end the rendering with the matching glDisable!
I.e. the rendering code began with glEnable(targetType) but did not end with glDisable(targetType) [where targetType was the correct choice of GL_TEXTURE_2D or GL_TEXTURE_RECTANGLE_ARB].
I guess, somehow, the two rendering environments got intertwined.
Lesson: whenever you begin with glEnable, make sure you end with the matching glDisable.
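A minimal sketch of that pairing, wrapped in an illustrative helper (targetType is the correct choice of GL_TEXTURE_2D or GL_TEXTURE_RECTANGLE_ARB, as above):

// Keep each texture target enabled only for the draw that uses it, so a
// leftover GL_TEXTURE_RECTANGLE_ARB can't take precedence over GL_TEXTURE_2D.
void drawSprite(GLenum targetType, GLuint texture /*, geometry ... */)
{
    glEnable(targetType);
    glBindTexture(targetType, texture);

    // ... emit the textured quad here ...

    glDisable(targetType);  // the fix: always undo the matching glEnable
}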
Edit: considering the comment below, I did a little digging and found out about target precedence. The idea is presented at the beginning of this article:
http://math.hws.edu/graphicsnotes/c4/s5.html
"At most one texture target will be used when a surface is rendered. If several targets are enabled, 3D textures have precedence over 2D textures, and 2D textures have precedence over 1D. Except for one example later in this section, we will work only with 2D textures."
In terms of TEXTURE_RECTANGLE_ARB's precedence, this is described in section 10 of the specification: http://www.opengl.org/registry/specs/ARB/texture_rectangle.txt
I did not know about target priority or precedence at the time of the bug, so thanks to @datenwolf!
Even though you have solved your problem, I'd like to clarify a concept that might save you from similar problems in the near future.
Here is what is wrong:
now I am adding NPOTS textures (using TEXTURE_RECTANGLE_ARB)
The GL_texture_rectangle OpenGL extension is not meant to support non-power-of-two (NPOT) textures; the correct OpenGL extension to query is GL_texture_non_power_of_two.
GL_texture_non_power_of_two cannot break existing applications that expect power-of-two (POT) textures, because it merely relaxes the specification to accept textures of any width/height/depth (within the limits, of course). Here is a quote from the extension:
There is no additional procedural or enumerant api introduced by this extension except that an implementation which exports the extension string will allow an application to pass in texture dimensions for the 1D, 2D, cube map, and 3D targets that may or may not be a power of two.
Instead, GL_texture_rectangle allows you to specify texture coordinates by addressing the texture extents (width and height) in pixel coordinates, which are integers (instead of the usual floating-point coordinates normalized to the range [0.0f, 1.0f]). Additionally, rectangle textures cannot support mipmapping.
Also, if you read the GL_texture_rectangle specification, it is not meant as general NPOT support, because rectangle textures come with additional restrictions compared to 2D textures (the lack of mipmapping among them).
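A minimal sketch of picking the right target at load time (the helper name is illustrative, and it assumes a context where glGetString is available; simple strstr matching is fine for OpenGL 1.3's single space-separated extension string):

#include <cstring>

// Prefer true NPOT support on GL_TEXTURE_2D when the driver exports
// GL_ARB_texture_non_power_of_two; otherwise fall back to rectangle
// textures, with their unnormalized coordinates and no mipmaps.
GLenum pickTarget(int width, int height)
{
    bool pot = ((width & (width - 1)) == 0) && ((height & (height - 1)) == 0);
    if (pot)
        return GL_TEXTURE_2D;

    const char* ext = (const char*)glGetString(GL_EXTENSIONS);
    if (ext && std::strstr(ext, "GL_ARB_texture_non_power_of_two"))
        return GL_TEXTURE_2D;            // NPOT sizes allowed on the 2D target
    return GL_TEXTURE_RECTANGLE_ARB;     // fallback, with its restrictions
}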
I am trying to create a mipmapped textured image that represents elevation. The image must be 940 x 618. I realize that my texture must have a width and height that are powers of 2. So far I have tried incrementally doing all my texturing in squares (e.g. 64x64, 128x128, even 512x512), but the image still comes out blurry. Any idea how to better texture an image of this size?
Use a 1024x1024 texture and put your image in just a part of it, the 940x618 region. Then use the values 940.0/1024.0 and 618.0/1024.0 as the maximum texture coordinates, or scale the TEXTURE_MATRIX. This gives a 1:1 mapping for your pixels. You might also need to shift the model half a pixel to get a perfect fit; this depends on your model setup and view.
This is the technique I used in this screensaver I made for the Mac: http://memention.com/void/ It grabs the screen contents and uses them as a texture on some 3D effects, and I really wanted a pixel-perfect fit.
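A minimal sketch of that sub-image approach (the function name is illustrative, and pixels is assumed to be the 940x618 RGBA source data):

// Allocate a 1024x1024 POT texture, upload the 940x618 image into its
// lower-left corner, and draw with max texcoords 940/1024 and 618/1024.
GLuint makePaddedTexture(const void* pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Allocate POT storage without data, then fill only the used region.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 1024, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 940, 618,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}
// When drawing, use (0,0) to (940.0f/1024.0f, 618.0f/1024.0f) as texcoords.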
As far as I know, modern hardware does not require you to use power-of-two dimensions. Just be aware that if this code runs on an older machine, you'll have some problems. How old is your machine?
The texture is probably not mapped 1:1, and you have GL_LINEAR or GL_NEAREST filtering. Try a higher-resolution texture, mipmapping, and a 1:1 screen mapping.
Use a 940x618 texture (if this is truly the size of the surface it's applied to) and set the texture's minification/magnification filters to GL_LINEAR. That should give you the results you're after.
In each frame I render, I make a smaller version of it containing just the objects that the user can select (and any selection-obstructing objects). In that buffer I render each object in a different color.
When I have the user's mouseX and mouseY, I look up which color in that buffer corresponds to that position and find the matching object.
I can't work with FBOs, so I just render this buffer to a texture, rescale the texture orthogonally to the screen, and use glReadPixels to read a "hot area" around the mouse cursor. I know it's not the most efficient approach, but performance is OK for now.
Now I have the problem that this buffer of "colored objects" has some accuracy problems. Of course I disable all lighting and fragment shaders, but somehow I still get artifacts. Obviously I really need clean sheets of color without any variation.
Note that I put all the color information into an unsigned byte in GL_RED (assuming for now that I have at most 255 selectable objects).
Is this caused by rescaling the texture? (I could replace this by looking up scaled coordinates in the small texture.) Or do I need to disable some other flag to really get the exact colors I want?
Can this technique even be used reliably?
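For reference, a minimal sketch of the hot-area readback described above (names are illustrative; it assumes the ID buffer has already been drawn to the current framebuffer):

// Read a small "hot area" around the cursor and return the first nonzero
// object ID found in the red channel (0 meaning background / no object).
unsigned pickObject(int mouseX, int mouseY, int viewportHeight)
{
    const int r = 2;  // hot-area radius in pixels
    unsigned char ids[(2 * r + 1) * (2 * r + 1)];
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    // glReadPixels uses a bottom-left origin; window coords use top-left.
    glReadPixels(mouseX - r, (viewportHeight - mouseY) - r,
                 2 * r + 1, 2 * r + 1, GL_RED, GL_UNSIGNED_BYTE, ids);
    for (unsigned char id : ids)
        if (id != 0) return id;
    return 0;
}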
It looks like you're using GL_LINEAR for your GL_TEXTURE_MAG_FILTER. Use GL_NEAREST instead if you don't want interpolated colors.
I could replace this by looking up scaled coordinates in the small texture.
You should. Rescaling is certainly more expensive than converting the coordinates.
That said, scaling a uniform texture should not introduce artifacts if you keep an integer ratio (like a 2x upscale) with no fancy filtering. It looks blurry at the polygon edges, so I'm assuming that's not what you use.
Also, rescaling should introduce variations only at the polygon boundaries. Did you check that there are no variations in the unscaled texture? That would confirm whether it's the scaling that introduces your "artifacts".
What exactly do you mean by "variance"? Please explain in more detail.
Now a suggestion: if your rendering doesn't depend on stencil buffer operations, you could put the object ID into the stencil buffer during the render pass to the window itself and skip the detour through a separate texture. On current hardware you usually get 8 bits of stencil. Of course the best solution, if you want an index buffer approach, is to use multiple render targets and render the object ID into an index buffer together with the color and the other stuff in one pass. See http://www.opengl.org/registry/specs/ARB/draw_buffers.txt
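A minimal sketch of the stencil variant (illustrative helper names; each object's ID, 1 to 255, is written wherever its fragments land, and read back under the cursor afterwards):

// Tag every fragment an object renders with its ID in the stencil buffer.
void drawObjectWithID(unsigned char id /*, geometry ... */)
{
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, id, 0xFF);         // always pass, reference = id
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);  // write id where the object draws
    // ... draw the object normally here ...
}

// Read back the object ID under a window position (bottom-left origin).
unsigned char readIDAt(int x, int yFromBottom)
{
    GLuint id = 0;
    glReadPixels(x, yFromBottom, 1, 1,
                 GL_STENCIL_INDEX, GL_UNSIGNED_INT, &id);
    return (unsigned char)id;
}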