I'm working on a material editor very similar to Unreal's or Unity's and I'm stuck. I have no idea how to implement the most important piece of code: how to translate the diagrams / nodes into GLSL. So I have some questions:
Does the vertex shader contain a constant number of attributes (for example: position, texture coordinates, normals, tangents, bitangents, and all light information), or does it depend on which nodes I have in the material, such as a normal map node?
Do I have to regenerate only the fragment shader code, the vertex/fragment shader code, or the geometry/vertex/fragment shader code after the diagram has changed?
You should probably learn modern OpenGL first, before taking on this sort of task. Here are brief answers to your questions; for in-depth information, please grab a good OpenGL book and read it.
Does the vertex shader contain a constant number of attributes?
Nope, it's user-dependent. Based on your buffer configuration you may have position, normal, and UV attributes, if you need to perform texture and normal mapping. In other cases you may want just a position, if all you need is to draw geometry. The maximum allowed number of attributes is hardware-dependent, and you can query it using the GL_MAX_VERTEX_ATTRIBS token.
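For example, a minimal sketch of that query on the C++ side (assuming a current OpenGL context and a loader such as GLEW; adapt the include to whatever loader you use):

#include <GL/glew.h> // or any other loader that declares glGetIntegerv
#include <cstdio>

// Query how many vertex attributes this implementation allows per vertex shader.
// The specification guarantees at least 16.
void printMaxVertexAttribs()
{
    GLint maxAttribs = 0;
    glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &maxAttribs);
    std::printf("GL_MAX_VERTEX_ATTRIBS = %d\n", maxAttribs);
}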
Do I have to regenerate only the fragment shader code, the vertex/fragment shader code, or the geometry/vertex/fragment shader code after the diagram has changed?
It depends. There are special variables called varyings; those are used to pass data from one shader stage to another. For example, here is a simple shader program for drawing texture-mapped geometry:
Vertex shader:
#version 430 core
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 uv;
smooth out vec2 v_uv;
void main()
{
    gl_Position = position; // we don't do any projection
    v_uv = uv;              // send the interpolated UVs to the fragment shader
}
Fragment shader:
#version 430 core
layout(binding = 1) uniform sampler2D tex;
smooth in vec2 v_uv;
out vec4 o_color;
void main()
{
    o_color = texture(tex, v_uv);
}
As you can see, we pass the uv attribute into the fragment shader using the out keyword, which marks the variable as a varying. In your case, if someone using your editor adds a varying to the vertex shader, they also have to add the matching in declaration of that varying to the fragment shader. Hope I answered your questions.
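As a purely hypothetical sketch of the code-generation side (the Varying struct and function names below are made up for illustration, not part of any real editor), the editor could emit the matching out/in pairs from one list, so the two stages never get out of sync:

#include <string>
#include <vector>

// Hypothetical description of one value the node graph needs to pass
// from the vertex stage to the fragment stage.
struct Varying {
    std::string type; // e.g. "vec2", "vec3"
    std::string name; // e.g. "v_uv", "v_normal"
};

// Emit the vertex-stage declarations.
std::string emitVertexOuts(const std::vector<Varying>& varyings)
{
    std::string code;
    for (const Varying& v : varyings)
        code += "smooth out " + v.type + " " + v.name + ";\n";
    return code;
}

// Emit the matching fragment-stage declarations from the same list.
std::string emitFragmentIns(const std::vector<Varying>& varyings)
{
    std::string code;
    for (const Varying& v : varyings)
        code += "smooth in " + v.type + " " + v.name + ";\n";
    return code;
}

Generating both stages from the same list is one way to guarantee that every varying added by a node has its in counterpart in the fragment shader.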
I have always written my shaders in GLSL 3 (with the #version 330 line), but that is starting to be pretty old, so I recently tried to write a shader in GLSL 4 and use it with the SFML library for rendering, instead of pure OpenGL.
For now, my goal is to write a basic shader for a 2D game which takes the color of each pixel of a texture and modifies it. I always did that with gl_TexCoord[0].xy, but that seems to be deprecated now, so I searched and heard that I must use in and out variables with a vertex shader, so I tried.
Fragment shader
#version 400
in vec2 fragCoord;
out vec4 fragColor;
uniform sampler2D image;
void main(){
    // Get the color
    vec4 color = texture( image, fragCoord );

    /*
     * Do things with the color
     */

    // Return the color
    fragColor = color;
}
Vertex shader
#version 400
in vec3 position;
in vec2 textureCoord;
out vec2 fragCoord;
void main(){
    // Set the position of the pixel and vertex (I guess)
    fragCoord = textureCoord;
    gl_Position = vec4( position, 1.0 );
}
I have also seen that we could add the projection, model, and view matrices, but I don't know how to do that with SFML (I don't even think we can), and I don't want to learn complex OpenGL or SFML internals just to change some colors in a 2D game, so here is my question:
Is there an easy way to just get the coordinates of the pixel we're working on? Maybe get rid of the vertex shader, or use it without using matrices?
Unless you really want to learn a lot of nasty OpenGL, writing your own shaders just for textures is a little overkill. SFML can handle textures and shaders for you behind the scenes (here is a good article on how to use them), so you don't need to worry about shaders at all. Also note that you can change the color of SFML sprites (which is, I believe, what you are trying to do) with sprite.setColor(sf::Color(/*whatever*/));. Plus, there's no problem in using version 330. That's what I usually use, albeit with the in and out stuff as well.
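As a minimal sketch of that approach (SFML 2-style API; "image.png" is just an example path):

#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(800, 600), "Tinted sprite");

    sf::Texture texture;
    if (!texture.loadFromFile("image.png")) // example path, replace with your own
        return 1;

    sf::Sprite sprite(texture);
    sprite.setColor(sf::Color(255, 128, 128)); // tints every pixel of the texture

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed)
                window.close();

        window.clear();
        window.draw(sprite);
        window.display();
    }
}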
If you really want to use your own shaders for fancy effects like pixellation, blurring, etc., I can't help you much, since I've only ever worked with pure OpenGL, so I don't know how the vertex information is handled by SFML; but this is some interesting example code you can check out, here is a tutorial, and here is a reference.
To answer your question more directly: gl_FragCoord is a built-in GLSL variable that holds the fragment's window-space position, but you still have to set gl_Position in the vertex shader. You can't get rid of the vertex shader if you are doing anything OpenGL-related. You'd have to do fancy matrix stuff (this is a wonderful library) and probably buffer stuff (like this) to tell GLSL yourself where everything is.
I want to put a texture on a rectangle which has been transformed by a non-affine transform (more specifically a perspective transform).
I have a fairly complex implementation based on OpenSceneGraph that loads my own vertex and fragment shaders.
The problem starts with the fact that the shaders were written quite a long time ago and are using GLSL 120.
The OpenGL side is written in C++ and, in its simplest form, loads a texture and applies it to a quad. Until recently, everything was working fine because the quad was at most affine-transformed (rotation + translation), so the rendering of the texture on it was correct.
Now however we want to support quads of any shape, including something like this:
http://ibin.co/1dbsGPpzbkOX
As you can see in the picture above, the texture on it is incorrect in the middle (shown by the arrows).
After hours of research I found out that this is due to OpenGL splitting quads into triangles and rendering each triangle independently. This is of course incorrect if my quad is as shown, because the 4th point influences the texture stretch.
I then even found that this issue has a name: it's a "perspectively incorrect interpolation of texture coordinates", as explained here:
[1]
Looking for solutions to this, I came across this article which mentions the use of the "smooth" attribute in later GLSL versions: [2]
but this means updating my shaders to a newer version.
An alternative I found was to use GL hints (glHint), as described here: [3]
but the disadvantage here is that it is only a hint, and there is no way to make sure it is used.
Now that I have shown my research, here is my question:
Updating my (complex) shaders and all the OpenGL code that goes with them to the new OpenGL pipeline paradigm would be too time-consuming, so I tried using GLSL "version 330 compatibility", changing the "varying" declarations to "smooth out" / "smooth in", and adding the GL_NICEST hint on the C++ side, but these changes did not solve my problem. Is this normal, because compatibility mode somehow doesn't support this correct perspective transform? Or is there something more that I need to do?
Or is there a better way for me to get this functionality without needing to refactor everything?
Here is my vertex shader:
#version 330 compatibility
smooth out vec4 texel;
void main(void) {
    gl_Position = ftransform();
    texel = gl_TextureMatrix[0] * gl_MultiTexCoord0;
}
and the fragment shader is much too complex to post in full, but it starts with:
#version 330 compatibility
smooth in vec4 texel;
Using derhass's hint, I solved the problem in a quite different way.
It is true that the "smooth" keyword was not the problem; the real issue was the projective texture mapping.
To solve it, I passed the perspective transform matrix directly from my C++ code to the fragment shader and calculated the "correct" texture coordinate there myself, without relying on GLSL's per-triangle (barycentric) interpolation of the texture coordinates.
To help anyone with the same problem, here is a cut-down version of my shaders:
.vert
#version 330 compatibility
smooth out vec4 inQuadPos; // Used for the frag shader to know where each pixel is to be drawn
void main(void) {
    gl_Position = ftransform();
    inQuadPos = gl_Vertex;
}
.frag
#version 330 compatibility
uniform mat3 transformMat; // the transformation between texture coordinates and final quad coordinates (passed in from C++)
uniform sampler2DRect source;
smooth in vec4 inQuadPos;
void main(void)
{
    // Calculate the correct texel coordinate using the transformation matrix
    vec3 real_texel = transformMat * vec3(inQuadPos.x / inQuadPos.w, inQuadPos.y / inQuadPos.w, 1.0);
    vec2 tex = vec2(real_texel.x / real_texel.z, real_texel.y / real_texel.z);
    gl_FragColor = texture2DRect(source, tex).rgba;
}
Note that the fragment shader code above has not been tested exactly as written, so I cannot guarantee it will work out of the box, but it should be mostly there.
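For completeness, here is a minimal sketch of the C++ side that uploads such a matrix with raw GL calls (the program handle and the "transformMat" name must match your own setup, and how you compute the 3x3 homography between texture space and quad space is application-specific; under OpenSceneGraph you would more likely attach an osg::Uniform to the state set):

#include <GL/glew.h> // or whatever header declares the GL entry points in your build

// Upload a 3x3 homography (column-major, as OpenGL expects) to the
// "transformMat" uniform of an already-linked shader program.
void setTransformMatrix(GLuint program, const GLfloat homography[9])
{
    glUseProgram(program);
    GLint loc = glGetUniformLocation(program, "transformMat");
    if (loc != -1)
        glUniformMatrix3fv(loc, 1, GL_FALSE, homography);
}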
I'm just starting to learn graphics using OpenGL and barely grasp the ideas of shaders and so forth. Following a set of tutorials, I've drawn a triangle on screen and assigned a color attribute to each vertex.
Using a vertex shader I forwarded the color values to a fragment shader which then simply assigned the vertex color to the fragment.
Vertex shader:
[.....]
layout(location = 1) in vec3 vertexColor;
out vec3 fragmentColor;
void main(){
    [.....]
    fragmentColor = vertexColor;
}
Fragment shader:
[.....]
out vec3 color;
in vec3 fragmentColor;
void main()
{
    color = fragmentColor;
}
So I assigned a different colour to each vertex of the triangle. The result was a smoothly interpolated coloured triangle.
My question is: since I send a specific colour to the fragment shader, where did the smooth interpolation happen? Is it a state enabled by default in OpenGL? What other values can this state have, and how do I switch among them? I would expect to have total control over the pixel colours using a fragment shader, but there seem to be calculations behind the scenes that alter the result. There are clearly things I don't understand; can anyone help on this matter?
Within the OpenGL pipeline, between the vertex shading stages (vertex, tessellation, and geometry shading) and fragment shading, is the rasterizer. Its job is to determine which screen locations are covered by a particular piece of geometry (point, line, or triangle). Knowing those locations, along with the input vertex data, the rasterizer linearly interpolates the data values for each varying variable in the fragment shader and sends those values as inputs into your fragment shader. When applied to color values, this is called Gouraud shading.
Source: OpenGL Programming Guide, Eighth Edition.
If you want to see what happens without interpolation, call glShadeModel(GL_FLAT) before you draw; the default value is GL_SMOOTH. Note that glShadeModel only affects the legacy fixed-function colors, though; with user-defined in/out variables like yours, the equivalent is the flat interpolation qualifier on the varying (e.g. flat out vec3 fragmentColor; in the vertex shader and flat in vec3 fragmentColor; in the fragment shader).
I know using a very simple vertex shader like
attribute vec3 aVertexPosition;
attribute vec4 aVertexColor;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
varying vec4 vColor;
void main(void) {
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
    vColor = aVertexColor;
}
and a very simple fragment shader like
precision mediump float;
varying vec4 vColor;
void main(void) {
    gl_FragColor = vColor;
}
to draw a triangle with red, blue, and green vertices will end up producing a triangle like this:
My questions are:
Do the calculations for interpolating fragment colors belonging to one triangle (or primitive) happen in parallel on the GPU?
What algorithm is used, and what hardware support exists, for interpolating fragment colors inside the triangle?
The interpolation happens in the rasterization step, which feeds the fragment processor.
The algorithm is very simple: the color is just interpolated according to the fragment's position within the triangle.
Yes, absolutely.
Triangle color interpolation is part of the fixed-function pipeline (it's actually part of the rasterization step, which happens before fragment processing), so it is carried out entirely in hardware on probably all video cards. The equations for interpolating vertex data can be found e.g. in the OpenGL 4.3 specification, section 14.6.1 (pp. 405-406). The algorithm defines barycentric coordinates for the triangle and uses them to interpolate between the vertices.
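As a rough illustration of what that interpolation computes (a simplified, screen-space sketch that ignores perspective correction; the struct and function names are made up):

struct Vec2  { float x, y; };
struct Color { float r, g, b; };

// Compute the barycentric coordinates (a, b, c) of point p with respect to
// triangle (p0, p1, p2), then use them to blend the three vertex colors.
Color interpolateColor(Vec2 p, Vec2 p0, Vec2 p1, Vec2 p2,
                       Color c0, Color c1, Color c2)
{
    float det = (p1.y - p2.y) * (p0.x - p2.x) + (p2.x - p1.x) * (p0.y - p2.y);
    float a = ((p1.y - p2.y) * (p.x - p2.x) + (p2.x - p1.x) * (p.y - p2.y)) / det;
    float b = ((p2.y - p0.y) * (p.x - p2.x) + (p0.x - p2.x) * (p.y - p2.y)) / det;
    float c = 1.0f - a - b;
    return { a * c0.r + b * c1.r + c * c2.r,
             a * c0.g + b * c1.g + c * c2.g,
             a * c0.b + b * c1.b + c * c2.b };
}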
Besides the answers given here, I wanted to add that there doesn't have to be dedicated fixed-function hardware for the interpolation. Modern GPUs tend to use "pull-model interpolation", where the interpolation is actually done by the shader units.
I recommend reading Fabian Giesen's blog article about the topic (and the whole series of articles about the graphics pipeline in general).
On the first question: though there are parallel units on the GPU, it depends on the size of the triangle under consideration. For most GPUs, drawing happens on a tile-by-tile basis; depending on the "screen" size of the triangle, if it falls completely within one tile, it will be processed entirely by one tile processor. If it spans several tiles, it can be processed in parallel by different units.
The second question is answered by other posters before me.
I'm having some trouble understanding one line in the most basic (flat) shader example while reading the OpenGL SuperBible.
In chapter 6, Listings 6.4 and 6.5 introduce the following two very basic shaders.
6.4 Vertex Shader:
// Flat Shader
// Vertex Shader
// Richard S. Wright Jr.
// OpenGL SuperBible
#version 130
// Transformation Matrix
uniform mat4 mvpMatrix;
// Incoming per vertex
in vec4 vVertex;
void main(void)
{
    // This is pretty much it, transform the geometry
    gl_Position = mvpMatrix * vVertex;
}
6.5 Fragment Shader:
// Flat Shader
// Fragment Shader
// Richard S. Wright Jr.
// OpenGL SuperBible
#version 130
// Make geometry solid
uniform vec4 vColorValue;
// Output fragment color
out vec4 vFragColor;
void main(void)
{
    gl_FragColor = vColorValue;
}
My confusion is that it says vFragColor in the out declaration while saying gl_FragColor in main().
On the other hand, in the code from the website, it has been corrected to 'vFragColor = vColorValue;' in the shader's main() function.
My question is: other than being a typo in the book, what is the rule for naming the out values of shaders? Do they have to follow specific names?
On OpenGL.org I've found that gl_Position is required as an output of the vertex shader. Is there any such requirement for the fragment shader? Or is it just that if there is only one output, it will be the color written to the buffer?
What happens when there is more than one out in a fragment shader? How does the GLSL compiler know which one goes to which buffer?
As stated in the GLSL 1.30 specification, the use of gl_FragColor in the fragment shader is deprecated. Instead, you should use a user-defined output variable like the vFragColor variable declared in your fragment shader. As you said, it's a typo.
What is the rule for naming out values of shaders?
The variable name can be anything you like, as long as it doesn't collide with any reserved names (for example, names starting with gl_).
What happens when there is more than one out in a fragment shader? How does the GLSL compiler know which one goes to which buffer?
When there is more than one out in the fragment shader, you should assign slots to the fragment shader outputs by calling glBindFragDataLocation before linking the program. You can then say which slots render to which render target by calling glDrawBuffers.
The specification states that if you have only one output variable defined in the fragment shader, it will be assigned to location 0 (draw buffer 0). For more information, I recommend you take a look at the specification yourself.
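A minimal sketch of that setup on the C++ side (the output names "o_color" and "o_normal" and the attachment choices are just examples; adjust them to your shader and framebuffer):

#include <GL/glew.h> // or another loader that declares the GL entry points

// Route two user-defined fragment shader outputs to two color attachments.
void setupFragmentOutputs(GLuint program)
{
    // Must happen before linking for the bindings to take effect.
    glBindFragDataLocation(program, 0, "o_color");
    glBindFragDataLocation(program, 1, "o_normal");
    glLinkProgram(program);

    // With the target framebuffer object bound, send color number 0 to
    // attachment 0 and color number 1 to attachment 1.
    const GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, buffers);
}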
gl_FragColor was the original output variable in early versions of GLSL. This was the color of the fragment that was to be drawn.
Your initial confusion is justified, as there's no reason to declare that out variable and then write to gl_FragColor.
In later versions it became customizable, such that you could give arbitrary names to your output variables. You can map these arbitrary outputs to specific buffers with the command glBindFragDataLocation.
I'm not 100% positive, but I believe that if you don't call this function before linking, your output variables will be assigned to buffers arbitrarily. If you only have one output, it should always be assigned to buffer 0.