Alternative to gl_TexCoord.xy to get texture coordinate - c++

I always did my shaders in GLSL 3 (with the #version 330 line), but it's starting to get pretty old, so I recently tried to write a shader in GLSL 4 and use it with the SFML library for rendering instead of pure OpenGL.
For now, my goal is a basic shader for a 2D game that takes the color of each pixel of a texture and modifies it. I always did that with gl_TexCoord[0].xy, but that seems to be deprecated now, so I searched around and read that I should use in and out variables together with a vertex shader, so I tried.
 
Fragment shader
#version 400

in vec2 fragCoord;
out vec4 fragColor;

uniform sampler2D image;

void main() {
    // Get the color
    vec4 color = texture(image, fragCoord);

    /*
     * Do things with the color
     */

    // Return the color
    fragColor = color;
}
 
Vertex shader
#version 400

in vec3 position;
in vec2 textureCoord;
out vec2 fragCoord;

void main() {
    // Set the position of the pixel and vertex (I guess)
    fragCoord = textureCoord;
    gl_Position = vec4(position, 1.0);
}
I've also seen that you can add the projection, model, and view matrices, but I don't know how to do that with SFML (I don't even think you can), and I don't want to learn a pile of complex OpenGL or SFML just to change some colors in a 2D game, so here is my question:
Is there an easy way to just get the coordinates of the pixel we're working on? Maybe by getting rid of the vertex shader, or by using one without matrices?

Unless you really want to learn a lot of nasty OpenGL, writing your own shaders just for textures is a little overkill. SFML can handle textures and shaders for you behind the scenes (here is a good article on how to use them), so you don't need to worry about shaders at all. Also note that you can change the color of SFML sprites (which is, I believe, what you are trying to do) with sprite.setColor(sf::Color(/*whatever*/));. Plus, there's no problem with using version 330. That's what I usually use, albeit with the in and out style as well.
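If you do decide to apply a shader through SFML anyway, a minimal sketch looks roughly like this (window, texture, and file names are illustrative; setUniform is the SFML 2.4+ name, older versions use setParameter):
sf::Texture texture;
texture.loadFromFile("image.png");
sf::Sprite sprite(texture);
sprite.setColor(sf::Color(255, 128, 128)); // tint the sprite, no shader needed for this

sf::Shader shader;
shader.loadFromFile("color.frag", sf::Shader::Fragment);
shader.setUniform("image", sf::Shader::CurrentTexture); // the sprite's own texture

window.draw(sprite, &shader); // SFML binds the texture and matrices for you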
If you really want to use your own shaders for fancy effects like pixelation, blurring, etc., I can't help you much, since I've only ever worked with pure OpenGL, so I don't know how the vertex information is handled by SFML, but this is some interesting example code you can check out, here is a tutorial, and here is a reference.
To more directly answer your question: gl_FragCoord is a built-in GLSL variable that holds the fragment's window-space position, but you still have to set gl_Position in the vertex shader. You can't get rid of the vertex shader if you are doing anything OpenGL related. You'd have to do the matrix math yourself (this is a wonderful library) and probably set up vertex buffers (like this) to tell OpenGL where everything is.
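That said, for the SFML case there is a middle ground: SFML's sprite renderer still drives the old fixed-function attributes and matrices, so a compatibility-style vertex shader can simply forward the built-in texture coordinate into the out variable your fragment shader expects. A sketch under that assumption (shown with #version 130; use a matching version in the fragment shader):
#version 130

out vec2 fragCoord;

void main() {
    // Forward the fixed-function texture coordinate supplied by SFML
    fragCoord = gl_MultiTexCoord0.xy;
    // Use the fixed-function matrices SFML has already set up
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}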

Related

Draw a geometric object and texture in different coordinates using same shader in Opengl (GLSL)

I wonder if there is a nice (or at least any) way to draw a geometric shape and a texture using the same shader program in OpenGL 2 (or maybe higher).
I saw this example in a book for a fragment shader (as an example of how the glTexEnvi function from OpenGL 1 can be replaced in OpenGL >= 2):
precision mediump float;

uniform sampler2D s_tex0;

varying vec2 v_texCoord;
varying vec4 v_primaryColor;

void main()
{
    gl_FragColor = texture2D(s_tex0, v_texCoord) * v_primaryColor;
}
Though it is very hard for me to guess the vertex shader if I want to draw the texture and some geometry at different coordinates (possibly intersecting in some place).
Does anybody have an idea?
There has to be a way. It would just make some things (for example different blendings) so much easier to do.
P.S. I had the idea of using a "switcher" in the vertex shader to pass different coordinates depending on whether it is in the "1" or "0" state; for some reason it didn't work out. Hope you know a better solution.
I'll just leave it here.
Though I still don't know a possible vertex shader for the question above, I was lucky enough to solve my subgoal the harder way, using blending.
It turned out that blending with the constants GL_ONE_MINUS_DST_ALPHA and GL_DST_ALPHA didn't work as expected (when the destination is the rendered geometry) because the framebuffer's alpha channel was "turned off" by default (you could still use the alpha channel from the image), so you have to "turn it on" to make blending with these constants work properly.
In Android Studio (and Java in general) you can do that with the setEGLConfigChooser function.
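For reference, a rough sketch of that setup: the destination-alpha blend factors only behave as expected if the default framebuffer was actually created with alpha bits (on Android via GLSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0); the GLFW hint below is just one desktop equivalent and is an assumption, not part of the original setup):
// Request a default framebuffer that stores destination alpha
glfwWindowHint(GLFW_ALPHA_BITS, 8);

// Blend against the destination alpha channel
glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA);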

Perspective-correct shader rendering

I want to put a texture on a rectangle which has been transformed by a non-affine transform (more specifically a perspective transform).
I have a very complex implementation based on openscenegraph and loading my own vertex and fragment shaders.
The problem starts with the fact that the shaders were written quite a long time ago and use GLSL 1.20.
The OpenGL side is written in C++ and, in its simplest form, loads a texture and applies it to a quad. Until recently everything was working fine, because the quad was at most affine-transformed (rotation + translation), so the rendering of the texture on it was correct.
Now however we want to support quads of any shape, including something like this:
http://ibin.co/1dbsGPpzbkOX
As you can see in the picture above, the texture on it is incorrect in the middle (shown by arrows)
After hours of research I found out that this is because OpenGL splits quads into triangles and renders each triangle independently. This is of course incorrect for a quad shaped like the one shown, because the fourth point influences how the texture is stretched.
I then even found that this issue has a name: it's a "perspectively incorrect interpolation of texture coordinates", as explained here:
[1]
Looking for solutions, I came across this article, which mentions the use of the "smooth" interpolation qualifier in later GLSL versions: [2]
but this means updating my shaders to a newer version.
An alternative I found was to use glHint, as described here: [3]
but the disadvantage is that it is only a hint, and there is no way to make sure it is honored.
Now that I have shown my research, here is my question:
Updating my (complex) shaders and all the OpenGL code that goes with them to the new OpenGL pipeline paradigm would be too time-consuming, so I tried using "#version 330 compatibility", changing the "varying" declarations to "smooth out" / "smooth in", and adding the GL_NICEST hint on the C++ side, but these changes did not solve my problem. Is that expected, because the compatibility profile somehow doesn't support this perspective-correct interpolation? Or is there something more that I need to do?
Or is there a better way for me to get this functionality without needing to refactor everything?
Here is my vertex shader:
#version 330 compatibility

smooth out vec4 texel;

void main(void) {
    gl_Position = ftransform();
    texel = gl_TextureMatrix[0] * gl_MultiTexCoord0;
}
and the fragment shader is much too complex, but it starts with
#version 330 compatibility
smooth in vec4 texel;
Using derhass's hint I solved the problem in a quite different way.
It is true that the "smooth" keyword was not the problem; the real issue is the projective texture mapping.
To solve it, I passed the perspective transform matrix directly from my C++ code to the fragment shader and calculated the "correct" texture coordinate there myself, instead of using the coordinate interpolated by GLSL.
To help anyone with the same problem, here is a cut-down version of my shaders:
.vert
#version 330 compatibility

smooth out vec4 inQuadPos; // Used by the frag shader to know where each pixel is to be drawn

void main(void) {
    gl_Position = ftransform();
    inQuadPos = gl_Vertex;
}
.frag
#version 330 compatibility

uniform mat3 transformMat; // the transformation between texture coordinates and final quad coordinates (passed in from C++)
uniform sampler2DRect source;

smooth in vec4 inQuadPos;

void main(void)
{
    // Calculate the correct texel coordinate using the transformation matrix
    vec3 real_texel = transformMat * vec3(inQuadPos.x / inQuadPos.w, inQuadPos.y / inQuadPos.w, 1.0);
    vec2 tex = vec2(real_texel.x / real_texel.z, real_texel.y / real_texel.z);
    gl_FragColor = texture2DRect(source, tex).rgba;
}
Note that the fragment shader code above has not been tested exactly like that so I cannot guarantee it will work out-of-the-box, but it should be mostly there.
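For completeness, the C++ side of passing that matrix is just a mat3 uniform upload. A sketch with illustrative names, assuming transformMat holds the 3x3 homography from quad coordinates to texture coordinates in column-major order:
// transformMatColumnMajor: the 3x3 homography from quad coords to texture coords
const GLfloat transformMatColumnMajor[9] = { /* filled in from the computed homography */ };

GLint loc = glGetUniformLocation(program, "transformMat");
glUseProgram(program);
glUniformMatrix3fv(loc, 1, GL_FALSE, transformMatColumnMajor); // count 1, no transpose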

OpenGL: Passing random positions to the Vertex Shader

I am starting to learn OpenGL (3.3+), and now I am trying to write an algorithm that draws 10000 points at random positions on the screen.
The problem is that I don't know exactly where to put the algorithm. Since the points are random, I can't declare them in a VBO (or can I?), so I was thinking of passing a uniform value to the vertex shader with the varying position (I would loop, changing the uniform value each time) and doing the operation 10000 times. I would also pass a random color value to the shader.
Here is roughly my thought:
#version 330 core

uniform vec3 random_position;
uniform vec3 random_color;

out vec3 Color;

void main() {
    gl_Position = vec4(random_position, 1.0);
    Color = random_color;
}
This way I would do the calculations outside the shaders and just pass the results in through the uniforms, but I think a better way would be to do these calculations inside the vertex shader. Would that be right?
The vertex shader is called once for every vertex you pass to the vertex shader stage. The uniforms are the same for each of these calls, so you shouldn't pass the vertices - be they random or not - as uniforms. If you have global transformations (i.e. a camera rotation, a model matrix, etc.), those would go into the uniforms.
Your vertices should be passed in a vertex buffer object. Just generate them randomly in your host application and draw them. They will automatically become the in variables of your shader.
You can change the array in every iteration; however, it might be a good idea to keep its size constant. For this it's sometimes useful to give each vertex a fourth component that is 1 if the vertex is used and 0 otherwise. This way you can simply check whether a vertex should be drawn or not.
Then just clear GL_COLOR_BUFFER_BIT and draw the arrays before updating the screen.
In your shader just set gl_Position from your in variables (i.e. the vertices) and pass the color on to the fragment shader - it is not applied in the vertex shader yet.
In the fragment shader, take the color you passed from the vertex shader and write it to the output (e.g. gl_FragColor).
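To make this concrete, here is a rough sketch of the host side (illustrative names; it assumes a core 3.3 context, the shader's position input bound to attribute 0, and points generated directly in clip space so no matrices are needed):
#include <random>
#include <vector>

std::vector<float> points;
points.reserve(10000 * 3);
std::mt19937 rng{std::random_device{}()};
std::uniform_real_distribution<float> dist(-1.0f, 1.0f); // clip-space x/y
for (int i = 0; i < 10000; ++i) {
    points.push_back(dist(rng)); // x
    points.push_back(dist(rng)); // y
    points.push_back(0.0f);      // z
}

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, points.size() * sizeof(float), points.data(), GL_STATIC_DRAW);
glEnableVertexAttribArray(0); // attribute 0 = the shader's position input
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

// each frame:
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_POINTS, 0, 10000);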
By the way, if you draw something as GL_POINTS it will result in little squares. There are lots of tricks to make them actually round; the easiest is probably the simple if below in the fragment shader. However, you should then configure them as point sprites (glEnable(GL_POINT_SPRITE)).
if (dot(gl_PointCoord - vec2(0.5, 0.5), gl_PointCoord - vec2(0.5, 0.5)) > 0.25)
    discard;
I suggest you read up a little on what the fragment and vertex shaders do, what vertices and fragments are, and what their respective in/out/uniform variables represent.
Since programs with full vertex buffer objects, shader programs etc. get quite huge, you can also start out with glBegin() and glEnd() to draw vertices directly. However this should only be a very early starting point to understand what you are drawing where and how the different shaders affect it.
The lighthouse3d tutorials (http://www.lighthouse3d.com/tutorials/) usually are a good start, though they might be a bit outdated. Also a good reference is the glsl wiki (http://www.opengl.org/wiki/Vertex_Shader) which is up to date in most cases - but it might be a bit technical.
Whether you are working with C++, Java, or another language, the concepts of OpenGL are usually the same, so almost all tutorials will do well.

Names of `out` variables in a fragment shader

I'm having some trouble understanding one line in the most basic (flat) shader example while reading the OpenGL SuperBible.
In chapter 6, Listings 6.4 and 6.5 introduce the following two very basic shaders.
6.4 Vertex Shader:
// Flat Shader
// Vertex Shader
// Richard S. Wright Jr.
// OpenGL SuperBible
#version 130

// Transformation Matrix
uniform mat4 mvpMatrix;

// Incoming per vertex
in vec4 vVertex;

void main(void)
{
    // This is pretty much it, transform the geometry
    gl_Position = mvpMatrix * vVertex;
}
6.5 Fragment Shader:
// Flat Shader
// Fragment Shader
// Richard S. Wright Jr.
// OpenGL SuperBible
#version 130

// Make geometry solid
uniform vec4 vColorValue;

// Output fragment color
out vec4 vFragColor;

void main(void)
{
    gl_FragColor = vColorValue;
}
My confusion is that the out declaration says vFragColor while main() writes to gl_FragColor.
On the other hand, in the code from the website it has been corrected to vFragColor = vColorValue; in main().
My question is: other than this being a typo in the book, what is the rule for naming the out variables of shaders? Do they have to follow specific names?
On OpenGL.org I've found that gl_Position is required as the output of the vertex shader. Is there any such thing for the fragment shader? Or is it just that if there is only one output, it will be the color written to the buffer?
What happens when there is more than one out variable in a fragment shader? How does the GLSL compiler know which one goes to which buffer?
As stated in the GLSL specification for version 1.30, the use of gl_FragColor in the fragment shader is deprecated. Instead, you should use a user-defined output variable like the vFragColor variable declared in your fragment shader. As you said, it's a typo.
What is the rule for naming out values of shaders?
The variable name can be anything you like, as long as it doesn't collide with any existing or reserved names.
What happens when there is more than one out variable in a fragment shader? How does the GLSL compiler know which one goes to which buffer?
When there is more than one out variable in the fragment shader, you should assign slots to the fragment shader outputs by calling glBindFragDataLocation. You can then say which slots will render to which render target by calling glDrawBuffers.
The specification states that if you have exactly one output variable defined in the fragment shader, it will be assigned to location 0 and output 0. For more information, I recommend you take a look at it yourself.
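A short sketch of what that looks like in code (the program handle, output names, and attachments are illustrative):
// Map user-defined fragment outputs to color numbers (takes effect at link time)
glBindFragDataLocation(program, 0, "vFragColor");
glBindFragDataLocation(program, 1, "vBrightColor");
glLinkProgram(program);

// Decide which color attachments of the bound framebuffer those numbers write to
GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);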
gl_FragColor was the original output variable in early versions of GLSL. It was the color of the fragment to be drawn.
Your initial confusion is justified, as there's no reason to declare that out variable and then write to gl_FragColor.
In later versions it became customizable, so that you can give arbitrary names to your output variables. You can map these outputs to specific buffers with the command glBindFragDataLocation.
I'm not 100% positive, but I believe that if you don't call this function before linking, your output variables will be assigned to buffers arbitrarily. If you only have one output, it should always be assigned to buffer 0.

GLSL 4.10 Texture Mapping

I'm trying to figure out how to do texture mapping using GLSL version 4.10. I'm pretty new to GLSL and was happy to get a triangle rendering today with colors fading based on sin(time) using shaders. Now I'm interested in using shaders with a single texture.
A lot of tutorials and even Stack Overflow answers suggest using gl_MultiTexCoord0. However, this has been deprecated since GLSL 1.30, and the latest version is now 4.20. My graphics card doesn't support 4.20, which is why I'm trying to use 4.10.
I know I'm generating and binding my texture appropriately, and I have proper vertex coordinates and texture coordinates because my heightmap rendered perfectly when I was using the fixed-function pipeline, and it renders fine with color rather than the texture.
Here are my GLSL shaders and some of my C++ draw code:
---heightmap.vert (GLSL)---
in vec3 position;
in vec2 texcoord;

out vec2 p_texcoord;

uniform mat4 projection;
uniform mat4 modelview;

void main(void)
{
    gl_Position = projection * modelview * vec4(position, 1.0);
    p_texcoord = texcoord;
}
---heightmap.frag (GLSL)---
in vec2 p_texcoord;

out vec4 color;

uniform sampler2D texture;

void main(void)
{
    color = texture2D(texture, p_texcoord);
}
---Heightmap::Draw() (C++)---
// Bind Shader
// Bind VBO + IBO
// Enable Vertex and Texcoord client state
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureId);
// glVertexPointer(...)
// glTexCoordPointer(...)
glUniform4fv(projLoc, projection);
glUniform4fv(modelviewLoc, modelview);
glUniform1i(textureId, 0);
// glDrawElements(...)
// glDisable/unbind everything
The thing I am also suspicious about is whether I have to pass the texture coordinates to the fragment shader as a varying, since I'm not doing anything with them in the vertex shader. Also, I have no idea how the fragment shader is going to get the interpolated texcoords from that. It seems like it's just going to get 0.f or 1.f, not the interpolated coordinate. I don't know enough about shaders to understand how that works. If somebody could enlighten me I would be thrilled!
Edit 1:
@Bahbar: So sorry, that was a typo. I'm typing code on one machine while reading it off another. Like I said, it all worked with the fixed-function pipeline. Although glEnableClientState and gl[Vertex|TexCoord]Pointer are deprecated, they should still work with shaders, no? glVertexPointer rather than glVertexAttribPointer worked with colors rather than textures. Also, I am using glBindAttribLocation (position to 0 and texcoord to 1).
The reason I am still using glVertexPointer is that I am trying to un-deprecate one thing at a time.
glBindTexture takes a texture object as a second parameter.
// Enable Vertex and Texcoord client state
I assume you meant the generic vertex attributes? Where are your position and texcoord attributes set up? To do that, you need calls to glEnableVertexAttribArray and glVertexAttribPointer instead of glEnableClientState and glVertexPointer/glTexCoordPointer (all of those are deprecated in the same way that gl_MultiTexCoord0 is in GLSL).
And of course, to figure out where the attributes are bound, you need to either call glGetAttribLocation to figure out where the GL chose to put the attrib, or define it yourself with glBindAttribLocation (before linking the program).
Edit to add, following your addition:
Well, 0 might end up pulling data from glVertexPointer (for reasons you should not rely on: attrib 0 is special, and most IHVs make it work just like Vertex), but 1 very likely won't be pulling data from glTexCoordPointer.
In theory, there is no overlap between the generic attributes (like your texcoord, which gets its data from glVertexAttribPointer(1, XXX), 1 here being your chosen location) and the built-in attributes (like gl_MultiTexCoord0, which gets its data from glTexCoordPointer).
Now, NVIDIA is known not to follow the spec here and does alias attributes (this comes from the Cg model, as far as I know); it will go so far as saying to use a specific attribute location in place of glTexCoordPointer (the Cg spec suggests location 8 for TexCoord0 - and location 1 is the blendweight attribute - see table 39, p. 242). But really you should just bite the bullet and switch your glTexCoordPointer calls to glVertexAttribPointer.
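A rough sketch of that switch, with illustrative buffer and program names, matching the attribute locations from the question (position = 0, texcoord = 1):
// Bind the locations before linking (or query them afterwards with glGetAttribLocation)
glBindAttribLocation(program, 0, "position");
glBindAttribLocation(program, 1, "texcoord");
glLinkProgram(program);

// At draw time: generic attributes instead of the deprecated client state
glBindBuffer(GL_ARRAY_BUFFER, vertexVbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

glBindBuffer(GL_ARRAY_BUFFER, texcoordVbo);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, nullptr);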