GLSL Partially Overlapping Textures

I'm trying to blend two partially overlapping textures in GLSL and am wondering if I'm misunderstanding the concept of multi-texturing. Is it required that the textures fully overlap or can you have two offset textures that blend only where they overlap?
I have two images similar to the following (minus grid lines and text):
Example image
Ideally, the overlapping sections of the image would blend together nicely so that the final result would look like one smooth image that combines the two together. Overlapping orange pixels, for example, would blend together or take the higher intensity.
I'm new to GLSL and have been using this article, GLSL Shader Article, which uses a fragment shader to blend the textures (a fairly standard approach).
Following the article, I'm setting up each texture like so:
glUseProgramObjectARB( m_hProgramObject );
GLint nParamObj = glGetUniformLocationARB( m_hProgramObject, pParamName_i );
...
glActiveTexture( GL_TEXTURE0 + nTextureID_i ); // select the texture unit
glBindTexture( GL_TEXTURE_2D, nTextureID_i );  // bind the texture object to it
glUniform1iARB( nParamObj, nTextureID_i );     // sampler uniform = unit index, not object name
I then bind each texture and draw triangle strips. My textures are created as:
glBindTexture( GL_TEXTURE_2D, m_nTextureID );
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
// GL_RGBA as the internal format, rather than the legacy component count "4"
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, nWidth, nHeight, 0, GL_RGBA,
             GL_UNSIGNED_BYTE, pbyData);
Does that process seem reasonable or am I misunderstanding the concept? Any tips or advice on how to achieve this?

That process certainly seems adequate. The advantage of using a fragment shader is that you get complete control over how the textures are combined. For the offset, you may want two sets of texture coordinates, one for each image, or you could generate them implicitly. Figuring out exactly what you want and writing the fragment shader will probably be the difficult bit. Unfortunately, if you want to blend many different textures, a fragment shader used this way can get quite expensive, or simply won't work once too many textures are bound.
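For concreteness, here is a minimal sketch of such a fragment shader, written against the same ARB/GLSL-1.20-era setup as the code above. The uniform names (tex0, tex1, tex1Offset) are hypothetical; it assumes the second texture's placement is expressed as an offset in the first texture's coordinate space, that the geometry being drawn is the first texture's quad, and that overlapping pixels should take the higher intensity per channel, as described for the orange pixels:
#version 120
// Sketch (hypothetical names): blend two offset textures where they overlap.
uniform sampler2D tex0;
uniform sampler2D tex1;
uniform vec2 tex1Offset; // position of tex1 relative to tex0, in tex0's UV space

void main()
{
    vec2 uv0 = gl_TexCoord[0].st;
    vec2 uv1 = uv0 - tex1Offset;

    vec4 c0 = texture2D(tex0, uv0);
    vec4 c1 = texture2D(tex1, uv1);

    // 1.0 only where uv1 falls inside [0,1]^2, i.e. where tex1 covers this fragment
    float inside = step(0.0, uv1.x) * step(uv1.x, 1.0)
                 * step(0.0, uv1.y) * step(uv1.y, 1.0);

    // Higher intensity per channel in the overlap, tex0 passed through elsewhere
    gl_FragColor = mix(c0, max(c0, c1), inside);
}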
Your example image doesn't look like any blending has occurred at all - the images are just positioned over each other. In this case, it's easier just to draw separate bits of geometry with mapped textures.
Blending is typically done by the fixed-pipeline blending stage, for example with the following calls:
glEnable(GL_BLEND);
glBlendFunc(src_scale, dest_scale);
One of the most common configurations is alpha blending with the over operator: glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), in which the amount blended is given by the alpha value of the colour you're drawing, possibly influenced by the A component of your GL_RGBA texture. You can further manipulate the blend equations if needed. See Blending.
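With that configured, a minimal sketch of the draw sequence for the two textured strips might look like the following; the helper names are hypothetical, and the order matters because the over operator composites each new primitive onto whatever is already in the framebuffer:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawFirstStrip();  // hypothetical helper: the base image, drawn opaque
drawSecondStrip(); // hypothetical helper: the overlapping image; fragments
                   // with alpha 0 leave the base image untouched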

Related

How to set texture parameters in OpenGL/GLFW to avoid texture aliasing (wavy behavior on object borders) when viewing objects from a distance?

I am rendering a huge 3D cube array that sometimes contains thousands of cubes aligned right next to one another. I am rendering a jpg texture onto the cubes, which is just a simple color with a black border around the frame.
The problem:
The array is huge, and the distant parts of the array get mixed into one another, so to speak. In other words, the borders on the distant cubes sometimes disappear completely, and sometimes they form an arbitrary wavy line together with other neighboring borders. All in all, the scene looks messy because all the fine details (the hard borders between neighboring cubes) are lost or melted together. After searching for a solution online, I understand that the problem might lie in my choice of texture filtering options.
This is what the problem actually looks like in OpenGL:
This is the current code for loading the texture and setting the texture parameters:
glGenTextures(1, &texture3);
glBindTexture(GL_TEXTURE_2D, texture3);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
// set texture filtering parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
//load image:
data = stbi_load("resources/textures/gray_border.jpg", &width, &height, &nrChannels, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
glGenerateMipmap(GL_TEXTURE_2D);
So far I have tried passing different parameters to glGenerateMipmap() and switching between the parameters in the glTexParameteri() calls, but none of it has worked.
If you want to enable Mip Mapping, then you have to use one of the minifying functions like GL_NEAREST_MIPMAP_NEAREST, GL_LINEAR_MIPMAP_NEAREST, GL_NEAREST_MIPMAP_LINEAR or GL_LINEAR_MIPMAP_LINEAR, see glTexParameter and Texture - Mip Maps:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
A further improvement can be gained by anisotropic filtering, which is provided by the extension ARB_texture_filter_anisotropic and is a core feature since OpenGL 4.6.
e.g.
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY, 16.0f);
See Sampler Object - Anisotropic filtering
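Put together with the loading code from the question, a minimal sketch of the combined fix might look like this (the anisotropy constants assume GL 4.6 or ARB_texture_filter_anisotropic; GL_MAX_TEXTURE_MAX_ANISOTROPY queries the hardware limit):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, data);
glGenerateMipmap(GL_TEXTURE_2D); // must run after the base image is uploaded

// Optional: anisotropic filtering, clamped to what the hardware supports
GLfloat maxAniso = 1.0f;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY, &maxAniso);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY, maxAniso);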

GL_NEAREST in GLSL?

If I use the fixed pipeline, I can use
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
to make an image 'pixelated', as opposed to the fragments between the pixels of the image being interpolated. How would I do the same thing in a GLSL program? I'm using the texture2D function. I ask because I am using a shader program for my skybox, and you can see the edges because the edge pixels get blurred with grey. This problem is fixed if I use the fixed pipeline and the above function calls.
You can use the same texture minification and magnification filters with the programmable pipeline. It sounds like the issue is not the min/mag filter, but how you're handling texture clamping/wrapping. Either that, or your textures have gray in them, which you probably don't want.
To set up texture clamping, you can do the following:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
This will cause any pixels sampled from outside the texture to return the same color as the nearest pixel within the texture to that sample location.
As the other answers and comments already pointed out, the texture sampling states affect both the fixed-function pipeline and the programmable pipeline in the same way. I'd just like to add that in shaders you can also bypass the sampling completely and use the GLSL texelFetch() function, which gives you direct access to the unfiltered texels; the result basically looks like GL_NEAREST filtering. You also lose the wrapping functionality and have to use unnormalized integer texture coordinates, so this is probably not what you want in that scenario, though.
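For illustration, a minimal sketch of that approach, assuming a sampler2D uniform named tex and normalized coordinates arriving from the vertex shader (both names are hypothetical):
#version 330 core
uniform sampler2D tex;
in vec2 uv;          // normalized [0,1] coordinates from the vertex shader
out vec4 fragColor;

void main()
{
    ivec2 size  = textureSize(tex, 0);
    ivec2 texel = ivec2(uv * vec2(size));     // convert to integer texel coords
    texel = clamp(texel, ivec2(0), size - 1); // manual "clamp to edge", since
                                              // texelFetch ignores wrap modes
    fragColor = texelFetch(tex, texel, 0);    // unfiltered texel from level 0
}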

OpenGL - How to read Alpha from a texture

I want the alpha value to be read for each pixel from the texture, so that some pixels completely disappear. The texture file(targa format) does contain the proper alpha channel.
Screenshot: http://i43.tinypic.com/2i79s1x.png
Here are the options I am using:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGR,
             GL_UNSIGNED_BYTE, targaImage);
// changing GL_BGR to anything else doesn't do a thing :? also tried GL_BGRA
I have also tried most of the combinations of parameters for glBlendFunc, but none achieves the effect, although I might have skipped one. This is the one that gets the regular blending right (based on the alpha from glColor):
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Rectangle drawing:
glColor4f(1,1,1,0.5);
mRect->Render();
If I set alpha to 1 it is fully opaque, but there is still white in the bottom right, meaning that the alpha is read from the texture but the white polygon beneath it is visible. So I need to make the polygon disappear somehow, but the texture to remain visible.
That's how I achieve what is shown in the picture. I have also experimented with this:
glAlphaFunc(GL_GREATER, 0.49);
glEnable(GL_ALPHA_TEST);
It only proves that the alpha of each "fragment" of my rectangle is 0.5.
The texture file has a gradient with full red around the blue circle in the middle, but the alpha goes from 0 in the top-left to full in the bottom-right (it's not the red color fading to white).
I would supply the whole code but it has more than 2k lines and I have split everything into classes, so I am just pulling out the parts I think are important.
Do I need my own shader to do this? I have only made my first contact with OpenGL and C++ a couple of weeks ago and I'm not into them yet, so if that's the solution I would appreciate a link to a tutorial that deals with alpha and GLSL.
Thank you :)
It looks like you're using the old fixed-function pipeline. With that, you must properly configure the texture environment. Specifically, you want the texture to modulate or replace the base color. Either is fine, but I presume replace mode is better suited for you.
After binding the texture, set the mode using glTexEnvi; in your case specifically
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
Also, I want to remind you that you actually must enable blending (glEnable(GL_BLEND);).
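Putting those pieces together with the code from the question, a minimal sketch might look like this (note that uploading with format GL_BGR reads no alpha from the client data at all, so every texel's alpha defaults to 1.0; a 32-bit targa needs GL_BGRA here):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, targaImage);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE); // texture alpha wins
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
mRect->Render(); // the quad itself can stay opaque; with GL_REPLACE the
                 // fragment's alpha comes entirely from the texture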
If you look at the reference page for glTexEnv, I think you'll easily grasp how ridiculously complex the state space of the fixed-function texture environment became. I strongly suggest you don't bother with it and go directly to the programmable pipeline, i.e. fragment shaders. Yes, the learning curve is significantly steeper, but with shaders you can actually write something legible like
#version 330
uniform sampler3D sampler_albedo;
uniform sampler2D sampler_diffuse;
uniform vec3 basecolor;

in vec3 texcoord;
out vec4 outcolor;

void main()
{
    float albedo  = texture(sampler_albedo, texcoord).r;
    vec4  diffuse = texture(sampler_diffuse, texcoord.st);
    outcolor = vec4(mix(basecolor, diffuse.rgb, diffuse.a) * albedo, 1.0);
}
instead of spending over 15 lines setting up the texture environment and register combiners.

Mipmaps and Nearest filtering result in darker image

I am loading images into an OpenGL app. Usually I use linear filtering, but now, testing nearest filtering, I found the resulting image is significantly darker than the original one. By the way, it also seems to me that linear filtering causes some brightness loss too. Here are examples:
Linear filtering :
Nearest filtering :
Original image:
Now, I am setting the mipmap levels (to 4). I found that when not using mipmaps the original brightness is intact. What can the problem be? Is it related to gamma correction?
Here is the code for image load and mipmap generation:
ILinfo imageInfo;
iluGetImageInfo(&imageInfo);
iluFlipImage();
// Convert both RGB and RGBA input to BGRA for upload
if (imageInfo.Format == IL_RGB)
{
    ilConvertImage(IL_BGRA, IL_UNSIGNED_BYTE);
}
else if (imageInfo.Format == IL_RGBA)
{
    ilConvertImage(IL_BGRA, IL_UNSIGNED_BYTE);
}
iluGetImageInfo(&imageInfo);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &textureName);
glBindTexture(GL_TEXTURE_2D, textureName);
glTexStorage2D(GL_TEXTURE_2D,numMipMapLevels,GL_RGBA8,imageInfo.Width,imageInfo.Height);
glTexSubImage2D(GL_TEXTURE_2D,0,0,0,imageInfo.Width,imageInfo.Height,GL_BGRA,GL_UNSIGNED_BYTE,imageInfo.Data);
/* ==================================== */
// Trilinear filtering by default
if (smooth) {
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glGenerateMipmap(GL_TEXTURE_2D);
} else {
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
}
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
I am also running an MSAA pass in a custom FBO, but it appears to be irrelevant to the issue, as I also tested with MSAA turned off and the same problem persists.
From your code it looks like you create a mipmapped texture (with 4 mipmap levels, as you say) but then only set the image for the first level. This means all the other levels' images are undefined. When you then use GL_NEAREST_MIPMAP_LINEAR, it will access the two mipmap levels that best fit the pixel-to-texel ratio (the MIPMAP_LINEAR part), pick the single nearest texel from each level (the NEAREST part), and interpolate between those.
From your image it looks like the unspecified mipmap levels are just black, so you get an interpolation between the texture color and black, and thus a darkened texture. (They could actually contain anything; with mutable storage the texturing wouldn't even work because the texture would be incomplete, but immutable storage from glTexStorage2D allocates every level up front, so the texture is complete even though the contents of the smaller levels are undefined.) When not using mipmaps (thus creating only a single level with glTexStorage2D), only that single level is used in the filtering (even with a mipmapped filter), and it of course has a valid image.
If you intend to use some kind of mipmapping, you should actually set the texture image for each and every mipmap level (or set the top-level image and call glGenerateMipmap afterwards). If you just want real nearest-neighbour filtering, then just use GL_NEAREST (I've never actually seen much practical use for the other mipmap filters apart from the real trilinear filter, GL_LINEAR_MIPMAP_LINEAR).
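Applied to the loading code above (same variable names as in the question), a minimal sketch of that fix:
glTexStorage2D(GL_TEXTURE_2D, numMipMapLevels, GL_RGBA8,
               imageInfo.Width, imageInfo.Height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, imageInfo.Width, imageInfo.Height,
                GL_BGRA, GL_UNSIGNED_BYTE, imageInfo.Data);
glGenerateMipmap(GL_TEXTURE_2D); // derives levels 1..numMipMapLevels-1 from level 0

// Nearest filtering is now safe, since every mipmap level has valid data
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Or skip mipmaps entirely for plain nearest-neighbour sampling:
// glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);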

texturing a glutSolidSphere

I need to add an earth texture to a glutSolidSphere. The problem is that I cannot figure out how to make the texture stretch over the entire sphere and still be able to rotate it.
We're enabling the textures with:
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE,GL_OBJECT_LINEAR);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE,GL_OBJECT_LINEAR);
glEnable(GL_TEXTURE_2D);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
//drawcode...
Using GL_SPHERE_MAP as the mode instead of GL_OBJECT_LINEAR makes the texture look right, but it doesn't rotate with the sphere.
The parameters I use for the texture are:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
I understand that GL_REPEAT tiles the texture, while using GL_CLAMP instead puts the texture on the object once, but I cannot get it to stretch over the whole sphere.
Does anyone know how to properly texture a glutSolidSphere?
glutSolidSphere doesn't provide proper texture coordinates, and OpenGL's built-in texture coordinate generation only allows linear mappings from vertex position to texture coordinate, which essentially means you cannot use it to cover a sphere with a single flat, bounded 2D texture (for a mathematical explanation, look up the topology of manifolds and map theory).
So what can you do? There are a number of possible solutions:
Don't use glutSolidSphere, but some other geometry generator that does provide proper texture coordinates (though texturing a sphere with just a single bounded 2D texture is a difficult topic; there are several mappings, each with its own problems)
Use a texture with the same topology as a sphere, i.e. a cube map; then you can use GL_NORMAL_MAP for the texture gen mode, i.e.
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_NORMAL_MAP);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_NORMAL_MAP);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_NORMAL_MAP);
Look up tutorials about cube mapping. But in essence, a cube map consists of 6 texture faces arranged as a cube around the origin; texture coordinates are not points on the cube itself but a direction from the origin, and the addressed texel is the one where that direction ray intersects the cube.
Use a vertex shader that generates texture coordinates from the vertex positions, as in the sketch below. Since a vertex shader is freely programmable, the mapping isn't required to be linear. Of course, you will run into the peculiarities of mapping a sphere with a bounded, flat 2D texture again.
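As a sketch of that last option: a vertex shader computing equirectangular (longitude/latitude) coordinates from the object-space position, assuming the sphere is centred on the origin. The seam where the longitude wraps around is exactly one of the peculiarities mentioned above:
#version 120
varying vec2 uv; // passed to a fragment shader that samples the earth texture

void main()
{
    vec3 p = normalize(gl_Vertex.xyz);                   // point on the unit sphere
    float u = atan(p.z, p.x) / (2.0 * 3.14159265) + 0.5; // longitude -> [0,1]
    float v = asin(p.y) / 3.14159265 + 0.5;              // latitude  -> [0,1]
    uv = vec2(u, v);
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}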