Vertex shaders in and out? - C++

I've got two shaders like this:
const char* vertexShaderData =
"#version 450 \n"
"in vec3 vp;"
"in vec3 color;\n"
"out vec3 Color;\n"
"void main(){"
"Color=color;"
"gl_Position = vec4(vp, 1.0);"
"}";
const char* fragShaderData =
"#version 410\n"
"uniform vec4 incolor;\n"
"in vec3 Color;"
"out vec4 outColor;"
"void main(){"
"outColor = vec4(Color, 1.0);"
"}";
I understand that each shader is called for each vertex.
Where do the in parameters in my vertexShaderData get their values? At no point in the code do I specify what vp is or what color is. In the second shader, I get that the in value comes from the first shader's out value. But where do those initial ins come from?
About the out value of the fragShaderData: how is this value used? In other words, how does OpenGL know that this is an RGB color value and that it should paint the triangle with this color?

For the vertex shader,
you can use glGetAttribLocation in C++ to get the driver-assigned location, or set it manually in GLSL like this: layout (location = 0) in vec3 vp;. Then you upload the data in C++ like this:
// (Vertex buffer must be bound at this point)
glEnableVertexAttribArray( a ); // 'a' would be 0 if you used the layout qualifier
glVertexAttribPointer( a, 3, GL_FLOAT, GL_FALSE, sizeof( Vertex ), nullptr ); // stride = size of one vertex
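A fuller sketch of the query-based variant, with both attributes from your shader; 'shaderProgram' and the Vertex struct are placeholders for your own objects:
// Assumes the program is linked and the vertex buffer is bound.
struct Vertex { float pos[3]; float color[3]; };
GLint vp  = glGetAttribLocation( shaderProgram, "vp" );
GLint col = glGetAttribLocation( shaderProgram, "color" );
glEnableVertexAttribArray( vp );
glVertexAttribPointer( vp, 3, GL_FLOAT, GL_FALSE, sizeof( Vertex ), (void*)offsetof( Vertex, pos ) );
glEnableVertexAttribArray( col );
glVertexAttribPointer( col, 3, GL_FLOAT, GL_FALSE, sizeof( Vertex ), (void*)offsetof( Vertex, color ) );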
For the fragment shader,
'in' variables must match the vertex shader's 'out' variables, as in your sample code: out vec3 Color; -> in vec3 Color;
gl_Position determines where on screen the fragments that receive outColor are painted.

You feed the data to the vertex shader from your OpenGL calls (on the CPU). Once you have compiled the program (vertex shader + fragment shader), you feed it the vertices you want.
Unlike the vertex shader, the fragment shader runs once for EVERY pixel inside the triangle you are rendering. The outColor will be a vec4 (R,G,B,A) that "goes to your framebuffer". As for the color: in theory, this is abstract to OpenGL. The channels are called RGBA for convenience; you can even access the same data as XYZW (it's an alias for RGBA). OpenGL will output NUMBERS to the framebuffer you choose (according to the rules of color attachments, etc.). It so happens that you have 4 channels which the monitor uses to output RGB (with A used for transparency). In other words, you can use GL programs to draw triangles that output 1 channel or 2 channels, depending on your needs, and those channels can mean anything you need. For example, you can interpolate a YUV image, or just a UV plane (2 channels). If you output these to the monitor, the colors won't be correct, since the monitor expects RGB, but the OpenGL concept is broader than RGB. It will interpolate numbers for every pixel inside the triangle. That's it.
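As a sketch of that idea (assuming the bound framebuffer's color attachment 0 is a two-channel GL_RG texture, and an 'interpolatedUV' varying from the vertex shader), a fragment shader can output just two numbers per pixel:
#version 450
in vec2 interpolatedUV;
layout(location = 0) out vec2 outUV; // two channels; OpenGL attaches no color meaning to them
void main(){
    outUV = interpolatedUV; // just interpolated numbers written to the attachment
}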

Related

How does rendering to multiple textures work in modern OpenGL?

If I understand FBOs correctly, I can attach several 2D textures to different color attachment points. I'd like to use this capability to write out a few different types of data from a fragment shader (e.g. world position and normals). It looks like in ye olden days, this involved writing to gl_FragData at different indices, but gl_FragData doesn't exist anymore. How is this done now?
You can just add output variables to your fragment shader. Here is an example:
layout(location=0) out vec4 color;
layout(location=1) out vec3 normal;
layout(location=2) out vec3 position;
void main() {
    color = vec4(1,0,0,1);
    normal = vec3(0,1,0);
    position = vec3(1,2,3);
}
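On the C++ side those output locations still have to be mapped to actual attachments; a minimal sketch, assuming colorTex, normalTex and posTex are already-allocated 2D textures:
GLuint fbo;
glGenFramebuffers( 1, &fbo );
glBindFramebuffer( GL_FRAMEBUFFER, fbo );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0 );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, normalTex, 0 );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, posTex, 0 );
// Map fragment output location i to bufs[i]:
GLenum bufs[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
glDrawBuffers( 3, bufs );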
As dari said, you can use the layout(location=0) specifier.
Another method to assign the location outside of the shader is to use:
glBindFragDataLocation(_program, 0, "color");
Then in the shader:
out vec4 color;
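Note that glBindFragDataLocation only takes effect when the program is linked, so the order matters; roughly:
glAttachShader( _program, fragmentShader ); // 'fragmentShader' is assumed to be compiled already
glBindFragDataLocation( _program, 0, "color" ); // must happen before linking
glLinkProgram( _program );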
See this for a more thorough discussion of output buffers:
https://www.opengl.org/wiki/Fragment_Shader#Output_buffers

How vertex shader works?

I'm a beginner trying to draw a circle by drawing a square, but it failed! Here's my vertex shader:
#define RADIUS 0.5
#define WHITE vec4(1.0,1.0,1.0,1.0)
#define RED vec4(1.0,0.0,0.0,1.0)
attribute vec2 a_position;
varying vec4 v_color; //defines color in fragment shader
....
void main(){
    gl_Position = a_position;
    v_color = (a_position[0]*a_position[0] + a_position[1]*a_position[1] < RADIUS*RADIUS) ? RED : WHITE;
}
It does not work as I want. WHY?
In short: Not like this!
As the name suggests, the code in a vertex shader is executed once per vertex. So if you draw a square, the vertex shader is only executed for the 4 vertices you specify for the draw call.
The expression you have in your shader code needs to be executed for each fragment (at least for this discussion, you can assume that fragment is the same as a pixel). You want to evaluate for each pixel if it is inside or outside the circle. Therefore, the logic needs to be in the fragment shader.
To get this working, it's easiest if you pass the original position to the fragment shader. There is a built-in variable (gl_FragCoord) for the position available in the fragment shader, but it is in pixels, which makes your calculation more complicated.
So your vertex shader would look like this:
attribute vec2 a_position;
varying vec2 v_origPosition;
...
void main() {
    gl_Position = vec4(a_position, 0.0, 1.0); // gl_Position is a vec4, so expand the vec2
    v_origPosition = a_position;
}
Most of what you had in the vertex shader then goes to the fragment shader:
...
varying vec2 v_origPosition;
...
void main() {
    gl_FragColor = (dot(v_origPosition, v_origPosition) < RADIUS * RADIUS) ? RED : WHITE;
}

DirectX11 / OpenGL only renders half of the texture

This is how it should look. It uses the same vertices/UV coordinates that are used for DX11 and OpenGL. This scene was rendered in DirectX10.
This is how it looks in DirectX11 and OpenGL.
I don't know how this can happen. I am using the same code on top for both DX10 and DX11, and they handle things very similarly. Do you have an idea what the problem may be and how to fix it?
I can send code if needed.
Update: I also tried using another texture, and changed the transparent part of the texture to red.
Fragment Shader GLSL
#version 330 core
in vec2 UV;
in vec3 Color;
uniform sampler2D Diffuse;
out vec4 color; // the core profile has no gl_FragColor
void main()
{
    //color = vec4( texture( Diffuse, UV ).rgb, 1 );
    color = texture( Diffuse, UV );
    //color = vec4( Color, 1 );
}
Vertex Shader GLSL
#version 330 core
layout(location = 0) in vec3 vertexPosition;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in vec3 vertexColor;
layout(location = 3) in vec3 vertexNormal;
uniform mat4 Projection;
uniform mat4 View;
uniform mat4 World;
out vec2 UV;
out vec3 Color;
void main()
{
    mat4 MVP = Projection * View * World;
    gl_Position = MVP * vec4(vertexPosition,1);
    UV = vertexUV;
    Color = vertexColor;
}
Quickly said, it looks like you are using back-face culling (which is good) and the faces on the other side of your model have the wrong winding. You can check that this is the problem by turning back-face culling off (in OpenGL: glDisable(GL_CULL_FACE)).
The real fix (if this is the problem) is to use the correct winding of faces, which is usually counter-clockwise. Where to correct it depends on where the model comes from: if you generate it yourself, fix the winding in your model generation routine. Model files created by 3D modeling software usually have correct face winding already.
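As a quick test, something like this (a sketch):
glDisable( GL_CULL_FACE ); // if the model now renders completely, winding is the culprit
// ...after fixing the winding, restore culling (counter-clockwise front faces are the GL default):
glEnable( GL_CULL_FACE );
glFrontFace( GL_CCW );
glCullFace( GL_BACK );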
This is just a guess, but are you telling the system the correct number of polygons to draw? Calls like glBufferData() take the size in bytes of the data, not the number of vertices or polygons. (Maybe they should have named the parameter numBytes instead of size?) Also, the size has to cover all the data: if you have color, normals, texture coordinates and vertices all interleaved, it needs to include the size of all of that.
This is made more confusing by the fact that glDrawElements() and other calls take the number of vertices as their size argument. The argument is named count, but it's not obvious that it's the vertex count, not the polygon count.
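To make the distinction concrete, a sketch assuming interleaved data in a std::vector<Vertex> and indices in a std::vector<GLuint>:
// glBufferData wants BYTES:
glBufferData( GL_ARRAY_BUFFER, vertices.size() * sizeof( Vertex ), vertices.data(), GL_STATIC_DRAW );
// glDrawElements wants the number of VERTICES (indices), not polygons or bytes:
glDrawElements( GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, nullptr );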
I found the error.
The reason is that I forgot to set the texture sampler state to wrap/repeat.
It was set to clamp, so the UV coordinates were clamped to 1.
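The OpenGL equivalent, as a sketch ('texture' being the texture object in question):
glBindTexture( GL_TEXTURE_2D, texture );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );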
A few things that you could try:
Is depth test enabled? It seems that the inner faces of the polygons from the 'other' side are being rendered over the polygons that are closer to the viewpoint. This could happen if depth test is disabled. Enable it just in case.
Is lighting enabled? If so, turn it off. Some flashes of white seem to appear in the rotating image, which could be caused by incorrect normals...
HTH

Fragment shader always uses 1.0 for alpha channel

I have a 2d texture that I loaded with
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, gs.width(), gs.height(), 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, gs.buffer());
where gs is an object with methods that return the proper types.
In the fragment shader I sample from the texture and attempt to use that as the alpha channel for the resultant color. If I use the sampled value for other channels in the output texture it produces what I would expect. Any value that I use for the alpha channel appears to be ignored, because it always draws Color.
I am clearing the screen using:
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
Can anyone suggest what I might be doing wrong? I am getting an OpenGL 4.0 context with 8 red, 8 green, 8 blue, and 8 alpha bits.
Vertex Shader:
#version 150
in vec2 position;
in vec3 color;
in vec2 texcoord;
out vec3 Color;
out vec2 Texcoord;
void main()
{
    Texcoord = texcoord;
    Color = color;
    gl_Position = vec4(position, 0.0, 1.0);
}
Fragment Shader:
#version 150
in vec3 Color;
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D tex;
void main()
{
    float t = texture(tex, Texcoord);
    outColor = vec4(Color, t);
}
Frankly, I am surprised this actually works. texture (...) returns a vec4 (unless you are using a shadow/integer sampler, which you are not). You really ought to be swizzling that texture down to just a single component if you intend to store it in a float.
I am guessing you want the alpha component of your texture, but who honestly knows -- try this instead:
float t = texture (tex, Texcoord).a; // Get the alpha channel of your texture
A half-way decent GLSL compiler would warn or error about what you are trying to do right now. I suspect yours does too, but you are not checking the shader info log when you compile your shader.
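A minimal sketch of that check ('shader' is the GLuint you called glCompileShader on):
GLint status = GL_FALSE;
glGetShaderiv( shader, GL_COMPILE_STATUS, &status );
if ( status != GL_TRUE ) {
    char log[1024];
    glGetShaderInfoLog( shader, sizeof( log ), nullptr, log );
    fprintf( stderr, "Shader compile log:\n%s\n", log );
}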
Update:
The original answer did not even begin to address the madness you are doing with your GL_DEPTH_COMPONENT internal format texture. I completely missed that because the code did not fit on screen.
Why are you using gs.rgba() to pass data to a texture whose internal and pixel transfer format is exactly 1 component? Also, if you intend to use a depth texture in your shader then the reason it is always returning a=1.0 is actually very simple:
Beginning with GLSL 1.30, when sampled using texture (...), depth textures are automatically set up to return the following vec4:
vec4(r, r, r, 1.0)
The RGB components are replaced with the value of R (the floating-point depth), and A is replaced with a constant value of 1.0.
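So read the depth from the red channel instead (float t = texture(tex, Texcoord).r;), or, since you have a GL 4.0 context, remap the channels with a texture swizzle so that .a returns the depth; a sketch ('depthTex' being your GL_DEPTH_COMPONENT texture):
glBindTexture( GL_TEXTURE_2D, depthTex );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_A, GL_RED ); // .a now aliases the depth value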
Your issue is that you're only passing in a vec3 when you need a vec4. RGBA - 4 components, not just three.

OpenGL luminance to color mapping?

I was wondering if there is a way to process OpenGL texture buffers so that a buffer of grayscale values is converted to RGB values on the fly through some formula of my choosing.
I already have a function like this, which works well but outputs a 3-component color vector:
rgb convertToColor(float value);
I am very new to OpenGL and was wondering what kind of shader I should use and where it should go. My program currently cycles frames as such:
program1.bind();
program1.setUniformValue("texture", 0);
program1.enableAttributeArray(vertexAttr1);
program1.enableAttributeArray(vertexTexr1);
program1.setAttributeArray(vertexAttr1, vertices.constData());
program1.setAttributeArray(vertexTexr1, texCoords.constData());
glBindTexture(GL_TEXTURE_2D, textures[0]);
glTexSubImage2D(GL_TEXTURE_2D,0,0,0,widthGL,heightGL, GL_LUMINANCE,GL_UNSIGNED_BYTE, &displayBuff[displayStart]);
glDrawArrays(GL_TRIANGLES, 0, vertices.size());
glBindTexture(GL_TEXTURE_2D,0);
//...
Shader
QGLShader *fshader1 = new QGLShader(QGLShader::Fragment, this);
const char *fsrc1 =
"uniform sampler2D texture;\n"
"varying mediump vec4 texc;\n"
"void main(void)\n"
"{\n"
" gl_FragColor = texture2D(texture, texc.st);\n"
"}\n";
I am trying to recreate effects like MATLAB's imagesc, which maps grayscale values to a color scale.
Something like this would work:
"uniform sampler2D texture;\n"
"uniform sampler1D mappingTexture;\n"
"varying mediump vec4 texc;\n"
"void main(void)\n"
"{\n"
" gl_FragColor = texture1D(mappingTexture, texture2D(texture, texc.st).s);\n"
"}\n";
where mappingTexture is a 1D texture that maps grayscale to color.
Of course, you could also write a function that calculates the RGB color from the grayscale value, but (depending on your hardware) it might be faster to just do a texture lookup.
It is not clear how to bind two textures to an OpenGL program at once.
By binding each texture to its own texture unit and pointing each sampler at the right unit.
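Concretely, each sampler uniform is assigned a texture unit index, and one texture is bound to each unit; a sketch in the style of the question's Qt code, with 'mappingTexture' being the GLuint of the 1D lookup texture:
program1.bind();
program1.setUniformValue("texture", 0);        // sampler2D reads from texture unit 0
program1.setUniformValue("mappingTexture", 1); // sampler1D reads from texture unit 1
glActiveTexture( GL_TEXTURE1 );
glBindTexture( GL_TEXTURE_1D, mappingTexture );
glActiveTexture( GL_TEXTURE0 );
glBindTexture( GL_TEXTURE_2D, textures[0] );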
It looks like you already have a program loaded and set up to read the texture (program1 in your code). Assuming the vertex shader is already set up to pass the pixel shader texture coordinates to look up into the texture (in the below program this is "texcoord"), you should be able to change the pixel shader to something like this:
uniform sampler2D texture; // this is the greyscale texture
varying vec2 texcoord; // passed from your vertex shader
void main()
{
    float luminance = texture2D(texture, texcoord).r; // read the luminance channel
    gl_FragColor = convertToColor(luminance); // run your function
}
This reads the luminance texture and calls your function, which converts a luminance value to a color. If your function only returns a 3-component RGB vector, you can change the last line to:
gl_FragColor = vec4(convertToColor(luminance), 1.0);