I'm trying to figure out how to do texture mapping using GLSL version 4.10. I'm pretty new to GLSL and was happy to get a triangle rendering today with colors fading based on sin(time) using shaders. Now I'm interested in using shaders with a single texture.
A lot of tutorials and even Stack Overflow answers suggest using gl_MultiTexCoord0. However, that has been deprecated since GLSL 1.30, and the latest version is now 4.20. My graphics card doesn't support 4.20, which is why I'm trying to use 4.10.
I know I'm generating and binding my texture appropriately, and I have proper vertex coordinates and texture coordinates because my heightmap rendered perfectly when I was using the fixed-function pipeline, and it renders fine with color rather than the texture.
Here are my GLSL shaders and some of my C++ draw code:
---heightmap.vert (GLSL)---
#version 410

in vec3 position;
in vec2 texcoord;
out vec2 p_texcoord;

uniform mat4 projection;
uniform mat4 modelview;

void main(void)
{
    gl_Position = projection * modelview * vec4(position, 1.0);
    p_texcoord = texcoord;
}
---heightmap.frag (GLSL)---
#version 410

in vec2 p_texcoord;
out vec4 color;

uniform sampler2D tex; // "texture" would clash with the built-in texture() function in GLSL 4.10

void main(void)
{
    color = texture(tex, p_texcoord);
}
---Heightmap::Draw() (C++)---
// Bind Shader
// Bind VBO + IBO
// Enable Vertex and Texcoord client state
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureId);
// glVertexPointer(...)
// glTexCoordPointer(...)
glUniformMatrix4fv(projLoc, 1, GL_FALSE, projection);
glUniformMatrix4fv(modelviewLoc, 1, GL_FALSE, modelview);
glUniform1i(textureId, 0);
// glDrawElements(...)
// glDisable/unbind everything
The other thing I'm suspicious about is whether I have to pass the texture coordinates to the fragment shader as a varying, since I'm not doing anything with them in the vertex shader. Also, I have no idea how the fragment shader is going to get the interpolated texcoords from that; it seems like it would just get 0.0 or 1.0, not an interpolated coordinate. I don't know enough about shaders to understand how that works. If somebody could enlighten me I would be thrilled!
Edit 1:
@Bahbar: So sorry, that was a typo; I'm typing code on one machine while reading it off another. Like I said, it all worked with the fixed-function pipeline. Although glEnableClientState and gl[Vertex|TexCoord]Pointer are deprecated, they should still work with shaders, no? Using glVertexPointer rather than glVertexAttribPointer worked when I was rendering with colors rather than textures. Also, I am using glBindAttribLocation (position to 0 and texcoord to 1).
The reason I am still using glVertexPointer is that I am trying to un-deprecate one thing at a time.
glBindTexture takes a texture object as a second parameter.
// Enable Vertex and Texcoord client state
I assume you meant the generic vertex attributes? Where are your position and texcoord attributes set up? To do that, you need calls to glEnableVertexAttribArray and glVertexAttribPointer instead of glEnableClientState and glVertexPointer/glTexCoordPointer (all of those are deprecated in the same way that gl_MultiTexCoord0 is in GLSL).
And of course, to figure out where the attributes are bound, you need to either call glGetAttribLocation to find out where the GL chose to put the attribute, or define the location yourself with glBindAttribLocation (before linking the program).
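For illustration, a minimal sketch of that setup (the program and vbo handles and the interleaved 3-float position / 2-float texcoord layout are assumptions, not the asker's actual code):

// Bind attribute locations before linking, then point the generic
// attributes at the interleaved VBO data. This replaces
// glEnableClientState plus glVertexPointer/glTexCoordPointer.
glBindAttribLocation(program, 0, "position");
glBindAttribLocation(program, 1, "texcoord");
glLinkProgram(program);

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)(3 * sizeof(float)));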
Edit to add, following your addition:
Well, 0 might end up pulling data from glVertexPointer (for reasons you should not rely on: attribute 0 is special, and most IHVs make it work just like gl_Vertex), but 1 very likely won't be pulling data from glTexCoordPointer.
In theory, there is no overlap between the generic attributes (like your texcoord, which gets its data from glVertexAttribPointer(1, ...), 1 being your chosen location) and the built-in attributes (like gl_MultiTexCoord0, which gets its data from glTexCoordPointer).
Now, NVIDIA is known not to follow the spec here and does alias attributes (this comes from the Cg model, as far as I know). It even goes so far as to reserve a specific attribute location for glTexCoordPointer data (the Cg spec suggests location 8 for TexCoord0, while location 1 is the blendweight attribute; see table 39, p. 242). But really, you should just bite the bullet and switch your glTexCoordPointer calls to glVertexAttribPointer, as in the sketch above.
I have always written my shaders in GLSL 3 (with the #version 330 line), but that is starting to get pretty old, so I recently tried to write a shader in GLSL 4 and use it with the SFML library for rendering, instead of pure OpenGL.
For now, my goal is a basic shader for a 2D game that takes the color of each pixel of a texture and modifies it. I have always done that with gl_TexCoord[0].xy, but that seems to be deprecated now, so I searched around and read that I should use in and out variables with a vertex shader, so I tried that.
Fragment shader
#version 400
in vec2 fragCoord;
out vec4 fragColor;
uniform sampler2D image;
void main(){
    // Get the color
    vec4 color = texture( image, fragCoord );
    /*
     * Do things with the color
     */
    // Return the color
    fragColor = color;
}
Vertex shader
#version 400
in vec3 position;
in vec2 textureCoord;
out vec2 fragCoord;
void main(){
    // Set the position of the pixel and vertex (I guess)
    fragCoord = textureCoord;
    gl_Position = vec4( position, 1.0 );
}
I have also seen that you can pass in the projection, model, and view matrices, but I don't know how to do that with SFML (I'm not even sure you can), and I don't want to learn a lot of complex OpenGL or SFML just to change some colors in a 2D game. So here is my question:
Is there an easy way to just get the coordinates of the pixel we're working on? Maybe get rid of the vertex shader, or use it without matrices?
Unless you really want to learn a lot of nasty OpenGL, writing your own shaders just for textures is overkill. SFML can handle textures and shaders for you behind the scenes (here is a good article on how to use them), so you don't need to worry about shaders at all. Also note that you can change the color of SFML sprites (which is, I believe, what you are trying to do) with sprite.setColor(sf::Color(...));. Plus, there's no problem with using version 330; that's what I usually use, albeit with the in and out style as well.
If you really want to use your own shaders for fancy effects like pixellation, blurring, etc., I can't help you much, since I've only ever worked with pure OpenGL, so I don't know how SFML handles the vertex information. But this is some interesting example code you can check out, here is a tutorial, and here is a reference.
To more directly answer your question: gl_FragCoord is a GLSL built-in variable that holds the fragment's window-space position, but you still have to set gl_Position in the vertex shader. You can't get rid of the vertex shader if you are doing anything OpenGL related. You'd have to do the matrix math yourself (this is a wonderful library) and probably set up buffers (like this) to tell OpenGL where everything is.
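To illustrate, gl_FragCoord can stand in for per-vertex texture coordinates in a full-screen 2D case by normalizing it with the viewport size. A sketch (the resolution uniform is a hypothetical name you would have to set yourself from the application; it is not built in):

// Hypothetical sketch: sample a texture by window position alone,
// so no per-vertex texcoord varying is needed.
const char* fragSrc = R"(
    #version 400
    uniform sampler2D image;
    uniform vec2 resolution;   // viewport size in pixels (assumed uniform)
    out vec4 fragColor;
    void main()
    {
        vec2 uv = gl_FragCoord.xy / resolution;  // now in [0,1]
        fragColor = texture(image, uv);
    }
)";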
I am learning OpenGL, and I thought I understood fragment shaders pretty well. My intuition is that the fragment shader is applied once to every pixel, but when I started working with textures recently, I became confused about how exactly they work.
First of all, the fragment shader typically takes in a series of texture coordinates, so if I have a quad, the fragment shader takes in the texture coordinates for the four corners of the quad. What I don't understand is the sampling process, that is, the process of taking the texture coordinates and getting the appropriate color value at those coordinates. Specifically, since I only supply four texture coordinates, how does OpenGL know to sample the coordinates in between for the color values?
This is made even more confusing when you consider that the output of the vertex shader goes straight to the fragment shader, and the vertex shader is applied per vertex. That means that at any given time, the fragment shader only knows about the texture coordinate corresponding to a single vertex, rather than all four coordinates that make up the quad. How does it know to sample values that fit the shape on the screen when it only has one texture coordinate available at a time?
All varying variables are interpolated automatically.
Thus if you put texture coordinates for each vertex into a varying, you don't need to do anything special with them after that.
It could be as simple as this:
// Vertex
#version 330 compatibility
attribute vec2 a_texcoord;
varying vec2 v_texcoord;
void main()
{
    gl_Position = ftransform();
    v_texcoord = a_texcoord;
}

// Fragment
#version 330 compatibility
uniform sampler2D u_texture;
varying vec2 v_texcoord;
void main()
{
    gl_FragColor = texture2D(u_texture, v_texcoord);
}
Disclaimer: I used the old GLSL syntax. In newer GLSL versions, attribute would be replaced with in; varying would be replaced with out in the vertex shader and with in in the fragment shader; gl_FragColor would be replaced with a custom out vec4 variable; and texture2D() would be replaced with texture().
Notice how this fragment shader doesn't do any manual interpolation. It receives just a single vec2 v_texcoord, which was interpolated under the hood from the v_texcoords of the vertices comprising the primitive[1] the current fragment belongs to.
[1] A primitive means a point, a line, a triangle, or a quad.
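To make the disclaimer above concrete, here is the same pair translated to the newer syntax, as an untested sketch written as C++ source strings; the a_position input is an assumption added because the translated version can no longer rely on the fixed-function ftransform():

// Same example in modern GLSL (sketch). a_position is an assumed input;
// the position data and any matrices must come from the application.
const char* vertexSrc = R"(
    #version 330 core
    in vec4 a_position;
    in vec2 a_texcoord;
    out vec2 v_texcoord;
    void main()
    {
        gl_Position = a_position;  // or: some MVP matrix * a_position
        v_texcoord = a_texcoord;
    }
)";

const char* fragmentSrc = R"(
    #version 330 core
    uniform sampler2D u_texture;
    in vec2 v_texcoord;
    out vec4 fragColor;
    void main()
    {
        fragColor = texture(u_texture, v_texcoord);
    }
)";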
First: some drivers still accept gl_FragColor in a core context, but it was removed from core GLSL, so a declared out variable is the portable choice.
Second: texels, fragments, and real monitor pixels are different things.
This line controls how texels are mapped to fragments when a texel covers more than one fragment (pixel), i.e. during magnification:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
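For context, the magnification filter is normally set together with its minification counterpart right after the texture is bound; a minimal sketch (textureId is an assumed handle):

// GL_TEXTURE_MAG_FILTER: one texel covers several fragments (magnified).
// GL_TEXTURE_MIN_FILTER: several texels fall on one fragment (minified).
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);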
I want to put a texture on a rectangle which has been transformed by a non-affine transform (more specifically a perspective transform).
I have a very complex implementation based on OpenSceneGraph, loading my own vertex and fragment shaders.
The problem starts with the fact that the shaders were written quite a long time ago and use GLSL 120.
The OpenGL side is written in C++ and, in its simplest form, loads a texture and applies it to a quad. Until recently everything was working fine, because the quad was at most affine-transformed (rotation + translation), so the rendering of the texture on it was correct.
Now however we want to support quads of any shape, including something like this:
http://ibin.co/1dbsGPpzbkOX
As you can see in the picture above, the texture on it is incorrect in the middle (shown by the arrows).
After hours of research I found out that this is due to OpenGL splitting quads into triangles and rendering each triangle independently. This is of course incorrect if my quad is shaped as shown, because the fourth point influences the texture stretch.
I then even found that this issue has a name: it's a "perspectively incorrect interpolation of texture coordinates", as explained here:
[1]
Looking for solutions to this, I came across this article, which mentions the use of the "smooth" interpolation qualifier in later GLSL versions: [2]
but this means updating my shaders to a newer version.
An alternative I found was to use glHint, as described here: [3]
but the disadvantage here is that it is only a hint, and there is no way to make sure it is used.
Now that I have shown my research, here is my question:
Updating my (complex) shaders and all the OpenGL code that goes with them to follow the new pipeline paradigm would be too time-consuming. So I tried using "#version 330 compatibility" in the GLSL, changing the "varying" declarations to "smooth out" and "smooth in", and adding the GL_NICEST perspective-correction hint on the C++ side, but these changes did not solve my problem. Is this expected, because compatibility mode somehow doesn't support correct perspective interpolation? Or is there something more that I need to do?
Or is there a better way for me to get this functionality without needing to refactor everything?
Here is my vertex shader:
#version 330 compatibility
smooth out vec4 texel;
void main(void) {
    gl_Position = ftransform();
    texel = gl_TextureMatrix[0] * gl_MultiTexCoord0;
}
The fragment shader is much too complex to post in full, but it starts with:
#version 330 compatibility
smooth in vec4 texel;
Using derhass's hint, I solved the problem in a quite different way.
It is true that the "smooth" keyword was not the problem; the problem was the projective texture mapping itself.
To solve it, I pass the perspective-transform matrix from my C++ code directly to the fragment shader and compute the "correct" texture coordinate there myself, instead of relying on GLSL's built-in (per-triangle) interpolation.
To help anyone with the same problem, here is a cut-down version of my shaders:
.vert
#version 330 compatibility
smooth out vec4 inQuadPos; // Used for the frag shader to know where each pixel is to be drawn
void main(void) {
    gl_Position = ftransform();
    inQuadPos = gl_Vertex;
}
.frag
#version 330 compatibility

uniform mat3 transformMat; // the transformation between texture coordinates and final quad coordinates (passed in from C++)
uniform sampler2DRect source;
smooth in vec4 inQuadPos;

void main(void)
{
    // Calculate the correct texel coordinate using the transformation matrix
    vec3 real_texel = transformMat * vec3(inQuadPos.x / inQuadPos.w, inQuadPos.y / inQuadPos.w, 1.0);
    vec2 tex = vec2(real_texel.x / real_texel.z, real_texel.y / real_texel.z);
    gl_FragColor = texture2DRect(source, tex).rgba;
}
Note that the fragment shader code above has not been tested exactly as written, so I cannot guarantee it will work out of the box, but it should be mostly there.
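On the C++ side, feeding transformMat to the shader is a single call once the program is bound. A sketch (program and the column-major array m are assumed names; computing the 3x3 projective mapping for your quad is up to you):

// Upload the 3x3 projective transform (sketch). m holds the matrix in
// column-major order, which is what OpenGL expects with transpose = GL_FALSE.
GLfloat m[9] = { /* your quad-to-texture mapping */ };
glUseProgram(program);
GLint loc = glGetUniformLocation(program, "transformMat");
glUniformMatrix3fv(loc, 1, GL_FALSE, m);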
As the title states, using a sampler2DShadow causes an error in the lighting shader of my multisampled FBO, but I cannot pinpoint the problem, because a very similar configuration in a standard deferred-rendering setup without multisampling works fine.
Is there a compatibility issue with sampler2DShadow and multisampling in OpenGL, or some alternative I should be using?
The shaders compile fine.
The code works fine until I run this line:
texture(gShadowMap2D, vec3(pCoord.xy, (pCoord.z) / pCoord.w));
and retrieve the result. I then get GL_INVALID_OPERATION.
The shadow map comes from a directional light (the depth map is valid and visible) and uses GL_COMPARE_R_TO_TEXTURE on a standard texture (GL_TEXTURE_2D).
The multisampled deferred FBO textures use GL_TEXTURE_2D_MULTISAMPLE.
I'm using GLSL 330 (OpenGL 3.3 core profile).
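For reference, the compare-mode setup described above usually amounts to two texture parameters on the depth texture; a sketch (shadowMapId is an assumed handle, and in 3.3 core the enum is spelled GL_COMPARE_REF_TO_TEXTURE):

// Depth-compare setup required for sampling through a sampler2DShadow.
glBindTexture(GL_TEXTURE_2D, shadowMapId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);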
UPDATE
I think the problem is related to getting the world position from the position map in the multisampled fragment shader.
The standard way:
vec3 worldPos = texture(gPositionMap, texCoord).xyz;
The multisampled way:
vec3 worldPos = vec3(0.0);
vec2 texCoordMS = floor(vertTextureSize * texCoord.xy);
for (int i = 0; i < samples; i++)
{
    worldPos += texelFetch(gPositionMapMS, ivec2(texCoordMS), i).xyz;
}
worldPos = worldPos / samples;
(I omitted the other samplers.)
I'm guessing I am going out of bounds, which throws the error when the sampler2DShadow is accessed (pCoord is calculated using worldPos).
Now to figure out how to get this multisampled worldPos to give the same result as the standard way.
Standard way (mDepthVP is the light's depth view-projection mat4):
vec4 coord = gLight.mDepthVP * vec4(worldPos, 1.0);
Well, after almost pulling my hair out desperately searching for a single hint as to why this was happening, I finally figured it out, though I'm not entirely sure why it caused the problem.
During the geometry pass (before the lighting pass), the models are rendered to the position, colour (diffuse), normal, and depth-stencil attachments as you would expect. During this pass a texture is bound (the diffuse texture of a mesh), but only as a standard texture (GL_TEXTURE_2D) at unit zero (GL_TEXTURE0) (I'm only using diffuse for now).
I left it like that because the system worked: the lighting pass overrides that unit when it binds the four FBO textures for reading. However, in the multisampled FBO they were being bound as multisample textures (GL_TEXTURE_2D_MULTISAMPLE), and it just so happens that the position map was using unit zero (GL_TEXTURE0).
For some reason this didn't overwrite the previously bound unit from the geometry pass, and it caused the GL_INVALID_OPERATION error. After calling:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, 0);
straight after the geometry pass the problem went away.
The question I'm left with comes down to: why didn't it overwrite?
I'm trying to figure out how to make a semi-transparent 2D overlay over my 3D scene, reading the OpenGL SuperBible 5th edition for reference.
It has an example (in Chapter 7) which overlays the OpenGL logo over a scene using the texture target GL_TEXTURE_RECTANGLE and a GLSL uniform type called sampler2DRect; the texture is sampled in the fragment shader with the texture() function.
The example in this book uses many source files, and I'm having a really hard time implementing it in a simple program, so I'm wondering if anyone could point me to a simpler example of sampler2DRect.
I have no trouble with the part about switching to an orthographic projection; rather, when I try to load the texture, the surface just renders white. My code is getting really messy at this point and I can't seem to pinpoint the problem, so I'd rather start over from scratch following a simpler example, if one is available anywhere.
P.S. I'm using SFML 2.0rc for loading the image file, in case it matters.
error C1101: ambiguous overloaded function reference "mul(mat4, vec3)"
(0) : mat3x4 mul(mat3x1, mat1x4)
(0) : mat3 mul(mat3x1, mat1x3)
(0) : mat3x2 mul(mat3x1, mat1x2)
(0) : mat3x1 mul(mat3x1, mat1)
(0) : mat2x4 mul(mat2x1, mat1x4)
.....
This is a very wordy way to tell you that there's no such function that multiplies a mat4 with a vec3. It's then listing all of the legal variants of mul.
Your dimensions must match when you multiply, and what you likely want is to multiply a mat4 with a vec4. If this is your position coordinate, then add a 1.0 as the final component of the vector:
uniform mat4 mvpMatrix;
in vec3 position;

void main()
{
    gl_Position = mvpMatrix * vec4(position, 1.0);
}
In addition to Tim's answer, make sure that:
Your texture is bound: glBindTexture(GL_TEXTURE_2D, textureID);
The vertex shader outputs UV coords: out vec2 UV;
The fragment shader takes UV coords in: in vec2 UV;
The VBO with the UVs exists, is enabled, bound and set ( glEnableVertexAttribArray, glBindBuffer, glVertexAttribPointer )
glEnable(GL_TEXTURE_2D)
And special items for rectangle textures (a minimal sketch follows the list):
The UV coords are in [0,width]x[0,height] (special case for rectangle textures).
Make sure that your quad is approximately the same size as the texture (rectangle textures don't have mipmaps).
Use standard textures instead. They can be NPOT.
Also: use gDEBugger.
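Putting the rectangle-specific items together, a minimal sketch of the sampler2DRect path (the rectTexId and program handles and the "overlay" uniform name are assumptions for illustration):

// Fragment shader: note sampler2DRect and pixel-space coordinates.
const char* fragSrc = R"(
    #version 150
    uniform sampler2DRect overlay;
    in vec2 UV;                  // in [0,width]x[0,height], not [0,1]
    out vec4 color;
    void main() { color = texture(overlay, UV); }
)";

// C++ side: bind to the rectangle target and point the sampler at unit 0.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE, rectTexId);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // rectangle textures have no mipmaps
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glUniform1i(glGetUniformLocation(program, "overlay"), 0);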