Rendering rectangle texture with GLSL

I created a class that renders video frames (on Mac) to a custom framebuffer object. As input I have a YUV texture, and I successfully created a fragment shader that takes three rectangle textures as input (one each for the Y, U and V planes; the data is uploaded with glTexSubImage2D using GL_TEXTURE_RECTANGLE_ARB, GL_LUMINANCE and GL_UNSIGNED_BYTE). Before rendering I select three different texture units (0, 1 and 2) with glActiveTexture and bind one texture to each, and for performance reasons I use GL_APPLE_client_storage and GL_APPLE_texture_range. Then I render it using glUseProgram(myProg), glBegin(GL_QUADS) ... glEnd().
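For reference, a fragment shader of this kind might look roughly as follows (a minimal sketch, not the original code; the sampler uniform names and the BT.601 conversion constants are assumptions):
#version 110
#extension GL_ARB_texture_rectangle : enable
uniform sampler2DRect texY;
uniform sampler2DRect texU;
uniform sampler2DRect texV;
varying vec2 texCoordY;
varying vec2 texCoordUV;
void main()
{
    // rectangle textures are sampled with unnormalized (pixel) coordinates
    float y = texture2DRect(texY, texCoordY).r - 0.0625;
    float u = texture2DRect(texU, texCoordUV).r - 0.5;
    float v = texture2DRect(texV, texCoordUV).r - 0.5;
    // BT.601 YUV -> RGB conversion
    gl_FragColor = vec4(1.164 * y + 1.596 * v,
                        1.164 * y - 0.391 * u - 0.813 * v,
                        1.164 * y + 2.018 * u,
                        1.0);
}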
That worked fine, and I got the expected result (aside from a flickering effect, which I guess has to do with the fact that I use two different GL contexts on two different threads, and I suppose they get in each other's way at some point [that's a topic for another question later]). Anyway, I decided to further improve my code by adding a vertex shader as well, so that I can skip the glBegin/glEnd - which I read is outdated and should be avoided anyway.
So as a next step I created two buffer objects, one for the vertices and one for the texture coordinates:
const GLsizeiptr posSize = 4 * 4 * sizeof(GLfloat);
const GLfloat posData[] =
{
-1.0f, -1.0f, -1.0f, 1.0f,
1.0f, -1.0f, -1.0f, 1.0f,
1.0f, 1.0f, -1.0f, 1.0f,
-1.0f, 1.0f, -1.0f, 1.0f
};
const GLsizeiptr texCoordSize = 4 * 2 * sizeof(GLfloat);
const GLfloat texCoordData[] =
{
0.0, 0.0,
1.0, 0.0,
1.0, 1.0,
0.0, 1.0
};
glGenBuffers(1, &m_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, posSize, posData, GL_STATIC_DRAW);
glGenBuffers(1, &m_texCoordBuffer);
glBindBuffer(GL_ARRAY_BUFFER, m_texCoordBuffer);
glBufferData(GL_ARRAY_BUFFER, texCoordSize, texCoordData, GL_STATIC_DRAW);
Then after loading the shaders I try to retrieve the locations of the attributes in the vertex shader:
m_attributeTexCoord = glGetAttribLocation( m_shaderProgram, "texCoord");
m_attributePos = glGetAttribLocation( m_shaderProgram, "position");
which gives me 0 for texCoord and 1 for position, which seems fine.
After getting the attributes I also call
glEnableVertexAttribArray(m_attributePos);
glEnableVertexAttribArray(m_attributeTexCoord);
(I am doing that only once; or does it have to be done before every glVertexAttribPointer and glDrawArrays? Does it need to be done per texture unit, or while my shader is activated with glUseProgram? Or can I do it just anywhere?)
After that I changed the rendering code to replace the glBegin/glEnd:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, texID_Y);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, texID_U);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, texID_V);
glUseProgram(myShaderProgID);
// new method with shaders and buffers
glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer);
glVertexAttribPointer(m_attributePos, 4, GL_FLOAT, GL_FALSE, 0, NULL);
glBindBuffer(GL_ARRAY_BUFFER, m_texCoordBuffer);
glVertexAttribPointer(m_attributeTexCoord, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glDrawArrays(GL_QUADS, 0, 4);
glUseProgram(0);
But since changing the code to this, I only ever get a black screen as a result. So I suppose I am missing some simple steps, maybe some glEnable/glDisable or setting some things properly, but like I said I am new to this, so I haven't really got an idea. For your reference, here is the vertex shader:
#version 110
attribute vec2 texCoord;
attribute vec4 position;
// the tex coords for the fragment shader
varying vec2 texCoordY;
varying vec2 texCoordUV;
//the shader entry point is the main method
void main()
{
texCoordY = texCoord;
texCoordUV = texCoordY * 0.5; // U and V are only half the size of Y texture
gl_Position = gl_ModelViewProjectionMatrix * position;
}
My guess is that I am missing something obvious here, or just don't have a deep enough understanding of the processes going on here yet. I tried using OpenGLShaderBuilder as well, which helped me get the original code for the fragment shader right (this is why I haven't posted it here), but since adding the vertex shader it doesn't give me any output either (I was wondering how it could know how to produce the output if it doesn't know the position/texCoord attributes anyway?)

I haven't closely studied every line, but I think your logic is mostly correct. What I don't see in the render code you posted is glEnableVertexAttribArray: you need to make sure both vertex attribute arrays are enabled at the time of the glDrawArrays call.
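As a rough sketch (not a verified fix; the variable names are taken from the question), the per-frame path could enable the arrays right next to the pointer setup so that their state cannot get lost between frames:
glUseProgram(myShaderProgID);
glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer);
glVertexAttribPointer(m_attributePos, 4, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(m_attributePos);        // make sure the array is enabled for this draw
glBindBuffer(GL_ARRAY_BUFFER, m_texCoordBuffer);
glVertexAttribPointer(m_attributeTexCoord, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(m_attributeTexCoord);
glDrawArrays(GL_QUADS, 0, 4);
glDisableVertexAttribArray(m_attributeTexCoord);  // leave a clean state for other code paths
glDisableVertexAttribArray(m_attributePos);
glUseProgram(0);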


Make a shader storage buffer in different shader programs accessible

What layout and binding do I have to use to make a (working) shader storage buffer readable in a second shader program?
I set up and populated an SSBO which I bound successfully and used in a geometry shader. That shader reads and writes to that SSBO - no problems so far. No rendering done there.
In the next step, my rendering pass (second shader program) shall have access to this data. The idea is to have a big data set while the vertex shader of the second program only uses some indices per render call to pick certain values of that SSBO.
Am I missing some specific binding commands, or did I place them at the wrong spot?
Is the layout consistent in both programs? Did I mess up the instances?
I just can't find any examples of an SSBO used in two programs...
Creating, populating and binding:
float data[48000];
data[0] = -1.0;
data[1] = 1.0;
data[2] = -1.0;
data[3] = -1.0;
data[4] = 1.0;
data[5] = -1.0;
data[6] = 1.0;
data[7] = 1.0;
data[16000] = 0.0;
data[16001] = 1.0;
data[16002] = 0.0;
data[16003] = 0.0;
data[16004] = 1.0;
data[16005] = 0.0;
data[16006] = 1.0;
data[16007] = 1.0;
GLuint ssbo;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(data), &data, GL_DYNAMIC_COPY);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, ssbo);
Instancing in geometry shader
layout(std140, binding = 1) buffer mesh
{
vec2 points[8000];
vec2 texs[8000];
vec4 colors_and_errors[8000];
} mesh_data;
Second instance in the vertex shader of the other program
layout(std140, binding = 1) buffer mesh
{
vec2 points[8000];
vec2 texs[8000];
vec4 colors_and_errors[8000];
} mesh_data;
Are the instances working against each other?
Right now I am not posting the bindings I do in the render loop, since I am not sure what I am doing there. I tried to bind before/after changing the used program, without success.
Does anybody have an idea?
EDIT: Do I also have to bind the SSBO to the second program outside of the render loop? In a different way than the first binding?
EDIT: Although I did not solve this particular problem, I found a workaround that might be even more in the spirit of OpenGL.
I used the SSBO of the first program as vertex attributes in the second program. This and OpenGL's indexed rendering solved the issue.
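A rough sketch of that workaround, with assumed names (second_program, index_count): a buffer object is just memory, so the same buffer that served as the SSBO can later be bound as GL_ARRAY_BUFFER and sourced as vertex attributes by the second program.
// reuse the SSBO as a vertex attribute source in the second program
glUseProgram(second_program);
glBindBuffer(GL_ARRAY_BUFFER, ssbo);                            // same buffer object as before
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);   // points[] live at the start of the buffer
// assumes an index buffer is bound to GL_ELEMENT_ARRAY_BUFFER
glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, (void*)0);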
(Should this be marked as solved?)
It seems like you're most of the way there, but there are a few things you should watch out for.
Is the layout consistent in both programs?
layout(std140, binding = 1) buffer mesh
You need to be careful about this layout. std140 rounds the array stride up to vec4 alignment, so the vec2 arrays will no longer line up with the tightly packed data you're providing from the C code. In this case, std430 should work for you.
Do i also have to bind the SSBO to the second program outside of the render loop? In a different way than the first binding?
Once you've bound the SSBO once, assuming both programs use the same binding point (in your example they do), you should be fine. Sharing data between programs is fine, but synchronisation is required. You can enforce this with a memory barrier.
You don't mention VAOs, but you will only be able to use SSBOs after you've bound a VAO (not on the default one).
I think this might be best explained with an example.
Vertex shader for the first program. It uses the buffer data for its position and texture coords and then flips the positions in Y.
layout(std430, binding = 1) buffer mesh {
vec4 points[3];
vec2 texs[3];
} mesh_data;
out highp vec2 coords;
void main() {
coords = mesh_data.texs[gl_VertexID];
gl_Position = mesh_data.points[gl_VertexID];
mesh_data.points[gl_VertexID] = vec4(gl_Position.x, -gl_Position.y, gl_Position.zw);
}
Vertex shader for the second program. It just uses the data but doesn't modify it.
layout(std430, binding = 1) buffer mesh {
vec4 points[3];
vec2 texs[3];
} mesh_data;
out highp vec2 coords;
void main() {
coords = mesh_data.texs[gl_VertexID];
gl_Position = mesh_data.points[gl_VertexID];
}
In the application, you need to bind a VAO.
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
Then setup your SSBO.
float const data[] = {
-0.5f, -0.5f, 0.0f, 1.0,
0.0f, 0.5f, 0.0f, 1.0,
0.5f, -0.5f, 0.0f, 1.0,
0.0f, 0.0f,
0.5f, 1.0f,
1.0f, 0.0f
};
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(data), data, GL_DYNAMIC_COPY);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, ssbo);
Make the draw calls using the first program.
glUseProgram(first_program);
glDrawArrays(GL_TRIANGLES, 0, 3);
Insert a memory barrier to ensure the writes complete from the preceding draw call before the next draw call tries to read from the buffer.
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
Make the draw calls using the second program.
glUseProgram(second_program);
glDrawArrays(GL_TRIANGLES, 0, 3);
I hope that clarifies things! Let me know if you have any further questions.

ImGui+OpenGL, render function can't render fonts [closed]

As the title says, I am using ImGui and I can't get my render function to render the fonts.
Things I have done:
Verified my texture is loaded properly with RenderDoc
Verified that my vertex attribute pointers are compliant with ImGui's convention (array of structs).
Below is my rendering code. You can also see the developer's example code for OpenGL here: https://github.com/ocornut/imgui/blob/master/examples/opengl3_example/imgui_impl_glfw_gl3.cpp
// Setup some GL state
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDisable(GL_CULL_FACE);
glDisable(GL_DEPTH_TEST);
glEnable(GL_SCISSOR_TEST);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
// Setup orthographic projection
glViewport(0, 0, (GLsizei)fb_width, (GLsizei)fb_height);
const float ortho_projection[4][4] =
{
{ 2.0f/io.DisplaySize.x, 0.0f, 0.0f, 0.0f },
{ 0.0f, 2.0f/-io.DisplaySize.y, 0.0f, 0.0f },
{ 0.0f, 0.0f, -1.0f, 0.0f },
{-1.0f, 1.0f, 0.0f, 1.0f },
};
// Setup the shader. bind() calls glUseProgram and enables/disables the proper vertex attributes
shadeTextured->bind();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, g_FontTexture);
shadeTextured->setUniformM4(Shader::uhi_transform, *(glm::mat4*)&ortho_projection[0][0]);
shadeTextured->setUniformSampler(1, 0);
// Set my vertex attribute pointers for position and tex coords
glVertexAttribPointer(0,
2,
GL_FLOAT,
GL_FALSE,
sizeof(ImDrawVert),
(GLvoid*)IM_OFFSETOF(ImDrawVert, pos));
glVertexAttribPointer(1,
2,
GL_FLOAT,
GL_FALSE,
sizeof(ImDrawVert),
(GLvoid*)IM_OFFSETOF(ImDrawVert, uv));
// Loop through all commands ImGui has
for (int n = 0; n < draw_data->CmdListsCount; n++) {
const ImDrawList* cmd_list = draw_data->CmdLists[n];
const ImDrawIdx* idx_buffer_offset = 0;
glBindBuffer(GL_ARRAY_BUFFER, g_VboHandle);
glBufferData(GL_ARRAY_BUFFER,
(GLsizeiptr)cmd_list->VtxBuffer.Size * sizeof(ImDrawVert),
(const GLvoid*)cmd_list->VtxBuffer.Data,
GL_STREAM_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, g_ElementsHandle);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
(GLsizeiptr)cmd_list->IdxBuffer.Size * sizeof(ImDrawIdx),
(const GLvoid*)cmd_list->IdxBuffer.Data,
GL_STREAM_DRAW);
for (int cmd_i = 0; cmd_i < cmd_list->CmdBuffer.Size; cmd_i++) {
const ImDrawCmd* pcmd = &cmd_list->CmdBuffer[cmd_i];
glScissor((int)pcmd->ClipRect.x,
(int)(fb_height - pcmd->ClipRect.w),
(int)(pcmd->ClipRect.z - pcmd->ClipRect.x),
(int)(pcmd->ClipRect.w - pcmd->ClipRect.y));
glDrawElements(GL_TRIANGLES,
(GLsizei)pcmd->ElemCount,
sizeof(ImDrawIdx) == 2 ? GL_UNSIGNED_SHORT : GL_UNSIGNED_INT,
idx_buffer_offset);
idx_buffer_offset += pcmd->ElemCount;
}
}
And here are the (very, very simple) shaders I have written. They have worked for texturing a button before, so I am assuming they are functionally correct.
Vertex shader:
#version 330 core
layout (location = 0) in vec2 pos;
layout (location = 1) in vec2 texCoord;
out vec2 fragTexCoord;
uniform mat4 transform;
void main() {
gl_Position = transform * vec4(pos, 0.0, 1.0);
fragTexCoord = texCoord;
}
Fragment shader:
#version 330 core
out vec4 fragColor;
in vec2 fragTexCoord;
uniform sampler2D sampler;
void main() {
fragColor = texture(sampler, fragTexCoord);
}
I'm at a total loss! Any help would be greatly appreciated
Debugging an incorrect OpenGL setup/state can be quite difficult. It's unclear why you are rewriting your own version instead of using exactly the code provided in imgui_impl_glfw_gl3.cpp, but here is what you can do:
Start again from the known-working imgui_impl_glfw_gl3.cpp, turn it step by step into your own code, and see what makes it break.
Disable the scissor test temporarily.
Since you are already using RenderDoc: does it show you the correct mesh? Are the vertices it shows you OK?

Confused trying to draw coloured elements with glDrawElements using vertex and fragment shaders

I'm creating a set of classes to read in 3D objects from COLLADA files. I started with some basic code to read in the positions and normals and plot them with OpenGL. I successfully added code to scale the vertices, and I added all the code I need to read in the color or texture connected with each graphics element in the COLLADA file. But now I need to add the code to draw the vertices with color. I have created the array of buffer objects to hold a color array for each of the vertex arrays and buffer objects.
This is the code I have to build the arrays from data I obtain from the COLLADA file:
Keep in mind I am still creating this, so it's not perfect.
// Set vertex coordinate data
glBindBuffer(GL_ARRAY_BUFFER, vbosPosition[i]);
glBufferData(GL_ARRAY_BUFFER, col->vectorGeometry[i].map["POSITION"].size,
scaledData, GL_STATIC_DRAW);
free(scaledData);
loc = glGetAttribLocation(program, "in_coords");//get a GLuint for the attribute and put it into GLuint loc.
glVertexAttribPointer(loc, col->vectorGeometry[i].map["POSITION"].stride, col->vectorGeometry[i].map["POSITION"].type, GL_FALSE, 0, 0);//glVertexAttribPointer — loc specifies the index of the generic vertex attribute to be modified.
glEnableVertexAttribArray(0);
#ifdef Testing_Mesh3D
PrintGLVertex(vbosPosition[i], col->vectorGeometry[i].map["POSITION"].size / 4);
#endif // Set normal vector data
glBindBuffer(GL_ARRAY_BUFFER, vbosNormal[i]);
glBufferData(GL_ARRAY_BUFFER, col->vectorGeometry[i].map["NORMAL"].size, col->vectorGeometry[i].map["NORMAL"].data, GL_STATIC_DRAW);
loc = glGetAttribLocation(program, "in_normals");
glVertexAttribPointer(loc, col->vectorGeometry[i].map["NORMAL"].stride, col->vectorGeometry[i].map["NORMAL"].type, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, vbosColor[i]);
Material* material = col->mapGeometryUrlToMaterial2Effect[col->vectorGeometry[i].id];
if (material->effect1.size() > 0)
{
Effect effect1 = material->effect1[0];
if (effect1.type == enumEffectTypes::color)
{
Color color = effect1.color;
glBufferData(GL_ARRAY_BUFFER, color.length, color.values, GL_STATIC_DRAW);
loc = glGetAttribLocation(program, "in_colors");
glVertexAttribPointer(loc, color.length, color.type, GL_FALSE, 0, 0);
}
else
{
}
}
}
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
// Initialize uniform data
void Mesh3D::InitializeUniforms(GLuint program) {
GLuint program_index, ubo_index;
struct LightParameters params;
// Specify the rotation matrix
glm::vec4 diff_color = glm::vec4(0.3f, 0.3f, 1.0f, 1.0f);
GLint location = glGetUniformLocation(program, "diffuse_color");
glUniform4fv(location, 1, &(diff_color[0]));
// Initialize UBO data
params.diffuse_intensity = glm::vec4(0.5f, 0.5f, 0.5f, 1.0f);
params.ambient_intensity = glm::vec4(0.3f, 0.3f, 0.3f, 1.0f);
params.light_direction = glm::vec4(-1.0f, -1.0f, 0.25f, 1.0f);
// Set the uniform buffer object
glUseProgram(program);
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, 3 * sizeof(glm::vec4), &params, GL_STREAM_DRAW);
glBindBuffer(GL_UNIFORM_BUFFER, 0);
glUseProgram(program);
// Match the UBO to the uniform block
glUseProgram(program);
ubo_index = 0;
program_index = glGetUniformBlockIndex(program, "LightParameters");
glUniformBlockBinding(program, program_index, ubo_index);
glBindBufferRange(GL_UNIFORM_BUFFER, ubo_index, ubo, 0, 3 * sizeof(glm::vec4));
glUseProgram(program);
This is a header file containing the two string literals housing the strings used to build the vertex and fragment shaders. Again, I am new to this and not sure how I need to modify the shaders to include colored vertices; I have started by adding an input vec4 for the four-float colour (including alpha). Any help?
#pragma once
#ifndef Included_shaders
#define Included_shaders
#include<stdio.h>
#include<iostream>
static std::string shaderVert = "#version 330\n"
"in vec3 in_coords;\n"
"in vec3 in_normals;\n"
"in vec4 in_colors; \n"//added by me
"out vec3 vertex_normal;\n"
"void main(void) {\n"
"vertex_normal = in_normals;\n"
"gl_Position = vec4(in_coords, 1.0);\n"
"}\n";
static std::string shaderFrag = "#version 330\n"
"in vec3 vertex_normal;\n"
"out vec4 output_color;\n"
"layout(std140) uniform LightParameters{\n"
"vec4 diffuse_intensity;\n"
"vec4 ambient_intensity;\n"
"vec4 light_direction;\n"
"};\n"
"uniform vec4 diffuse_color;\n"
"void main() {\n"
"/* Compute cosine of angle of incidence */\n"
"float cos_incidence = dot(vertex_normal, light_direction.xyz);\n"
"cos_incidence = clamp(cos_incidence, 0, 1);\n"
"/* Compute Blinn term */\n"
"vec3 view_direction = vec3(0, 0, 1);\n"
"vec3 half_angle = normalize(light_direction.xyz + view_direction);\n"
"float blinn_term = dot(vertex_normal, half_angle);\n"
"blinn_term = clamp(blinn_term, 0, 1);\n"
"blinn_term = pow(blinn_term, 1.0);\n"
"/* Set specular color and compute final color */\n"
"vec4 specular_color = vec4(0.25, 0.25, 0.25, 1.0);\n"
"output_color = ambient_intensity * diffuse_color +\n"
"diffuse_intensity * diffuse_color * cos_incidence +\n"
"diffuse_intensity * specular_color * blinn_term;\n"
"}\n";
#endif
Finally, this is the function I am modifying to draw the colored elements:
void Mesh3D::DrawToParent()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Draw elements of each mesh in the vector
for (int i = 0; i<nVectorGeometry; i++)
{
glBindVertexArray(vaos[i]);
glDrawElements(col->vectorGeometry[i].primitive/*This is 4 for GL_Triangles*/, col->vectorGeometry[i].index_count,
GL_UNSIGNED_SHORT, col->vectorGeometry[i].indices);
}
glBindVertexArray(0);
glutSwapBuffers();
}
I am getting a little confused about glVertexAttribPointer and glGetAttribLocation, though I think I get the basic idea. Am I using them right?
Am I setting up the buffer object for colors correctly? Am I correct that I need a color for each vertex in this buffer? Right now I have only placed the single color that applies to all associated vertices in this array, and I probably need to change that.
How exactly do I go about drawing the colored vertices when I call glDrawElements?
Please don't just refer me to the OpenGL resources; a lot of the wordy explanations make little sense to me.
Make your vertex shader output color and make the fragment shader take it as input. The color will be interpolated between vertices.
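As a sketch (reusing the in_colors attribute name from the posted shader; the vertex_color varying name is just an example), the change could look like this:
#version 330
// vertex shader: forward the per-vertex color
in vec3 in_coords;
in vec3 in_normals;
in vec4 in_colors;
out vec3 vertex_normal;
out vec4 vertex_color;
void main(void) {
    vertex_normal = in_normals;
    vertex_color = in_colors;
    gl_Position = vec4(in_coords, 1.0);
}
// fragment shader (only the changed lines): use the interpolated color
// in place of the diffuse_color uniform
in vec4 vertex_color;
output_color = ambient_intensity * vertex_color +
               diffuse_intensity * vertex_color * cos_incidence +
               diffuse_intensity * specular_color * blinn_term;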
Yes, it seems you have understood glVertexAttribPointer correctly.
glDrawElements takes the indices of the vertices you want to draw, but the last argument should not be a pointer to the indices here. Instead, it should probably be null (an offset into the bound index buffer). The indices should be uploaded to a buffer bound with glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ...).
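For example (a sketch; ibo, index_count and index_data are assumed names), the indices are uploaded once while the VAO is bound, and the draw call then passes an offset of 0 instead of a client pointer:
// while vaos[i] is bound: upload the indices to an element array buffer
GLuint ibo;
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);   // this binding is recorded in the VAO
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
             index_count * sizeof(GLushort),
             index_data, GL_STATIC_DRAW);
// later, when drawing:
glBindVertexArray(vaos[i]);
glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, (void*)0);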
If the color buffer is correctly bound you don't need to do anything special in glDrawElements for the colors. The shaders will get all the enabled attribute arrays.
I can help you better if you run the code and you tell me what problems you get. It would also help if the code was easier to read. If you split it into functions under ten lines in length you may spot some errors yourself and possibly remove some duplication.

OpenGL : Why can't I pass a single float from vertex shader to fragment shader?

EDIT: see at the end for new investigations on the subject.
I've been experiencing an odd behavior with my shaders. In short, I find it very strange that to pass a single float from a vertex shader to a fragment shader I also have to pass a fake variable, and I am looking for an explanation of this behavior.
In more details : I wrote two minimalistic shaders
a vertex shader that passes one float (only one, this is important), vertAlpha, to the fragment shader, like so:
#version 150
uniform float alphaFactor;
uniform mat4 cameramodel;
in float vertAlpha;
in vec3 vert;
in vec3 vertScale;
in vec3 trans;
out float fragAlpha;
void main()
{
fragAlpha = alphaFactor * vertAlpha;
gl_Position = cameramodel * vec4( trans + ( vert * vertScale ), 1.0f );
}
and a fragment shader that uses the passed variable:
#version 150
in float fragAlpha;
out vec4 finalColor;
void main()
{
finalColor = vec4( 0.0f, 0.0f, 0.0f, fragAlpha );
}
But that doesn't work: nothing appears on the screen. It seems that in the fragment shader fragAlpha keeps its initialization value of 0 and ignores the passed value.
After investigating, I found a "hack" to solve this. I found that the fragment shader "sees" the passed value for fragAlpha only if a fake (unused) value is passed with it (depending on the platform (osx / NVidia GeForce 9400, Windows laptop / Intel HD Graphics), a vec3 or a vec2 is sufficient).
So this vertex shader solves the problem:
#version 150
uniform float alphaFactor;
uniform mat4 cameramodel;
in vec3 vertFake;
in float vertAlpha;
in vec3 vert;
in vec3 vertScale;
in vec3 trans;
out float fragAlpha;
out vec3 fragFake;
void main()
{
fragAlpha = alphaFactor * vertAlpha;
fragFake = vertFake;
gl_Position = cameramodel * vec4( trans + ( vert * vertScale ), 1.0f );
}
I find this more like a "hack" than a solution. What should I do to solve this properly?
Is there a "per-driver manufacturer minimum threshold" in terms of size of data that can pass from a vertex shader to a fragment shader?
EDIT:
After reading derhass comment, I went back to my code to see if I was doing something wrong.
After more investigation, I found that the problem is not inherent to the fact that I pass the attribute value to the fragment shader. I changed the order of the attribute declarations in the vertex shader and saw that the attribute that has location 0 (the location returned by glGetAttribLocation) is not updated by a call to glVertexAttrib; its value stays at 0.
It will occur for example for the "trans" attribute if it is declared before all other attributes. Introducing a fake attribute in my shader only fixed the issue because the fake attribute took the location "0".
In any case, glGetError returns no error anywhere.
I use glDrawElements to render, with a VBO that contains the vertex positions; maybe I'm using it inadequately... or it's not well supported by my hardware?
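One way to control which attribute ends up at location 0, rather than relying on whatever the linker assigns, is to bind the locations explicitly before linking; a sketch, assuming p is the program object and using the attribute names from the shader above:
// assign explicit attribute locations before glLinkProgram, so that the
// attribute actually sourced from the VBO ("vert") gets location 0
glBindAttribLocation(p, 0, "vert");
glBindAttribLocation(p, 1, "vertAlpha");
glBindAttribLocation(p, 2, "vertScale");
glBindAttribLocation(p, 3, "trans");
glLinkProgram(p);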
To give some more context, I copy here the calls I make to OpenGL, for setup and rendering:
Setup:
GLuint vao, vbo;
glGenBuffers(1, &vbo);
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
GLfloat vertexData[] = {
// X Y Z
0.0f, 0.0f, 1.0f, // 0
1.0f, 0.0f, 1.0f, // 1
0.0f, 0.0f, 0.0f, // 2
1.0f, 0.0f, 0.0f, // 3
0.0f, 1.0f, 1.0f, // 4
1.0f, 1.0f, 1.0f, // 5
1.0f, 1.0f, 0.0f, // 6
0.0f, 1.0f, 0.0f // 7
};
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);
GLint vert = glGetAttribLocation(p, "vert");
glEnableVertexAttribArray(vert);
glVertexAttribPointer(vert, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glBindVertexArray(0);
render:
glUseProgram(p);
alphaFactor = glGetUniformLocation(p, "alphaFactor");
glUniform1f(alphaFactor, 1.0f);
cameramodel = glGetUniformLocation(p,"cameramodel");
glm::mat4 mat = gCamera.matrix();
glUniformMatrix4fv(cameramodel, 1, GL_FALSE, glm::value_ptr(mat));
glBindVertexArray(vao);
GLubyte indices[36] =
{
3, 2, 6, 7, 6, 2, 6, 7, 4, 2, 4, 7, 4, 2, 0, 3, 0, 2, 0, 3, 1, 6, 1, 3, 1, 6, 5, 4, 5, 6, 5, 4, 1, 0, 1, 4
};
GLint trans = glGetAttribLocation(p, "trans");
GLint alpha = glGetAttribLocation(p, "vertAlpha");
GLint scale = glGetAttribLocation(p, "vertScale");
loop over many entities:
glVertexAttrib3f(trans, m_vInstTransData[offset], m_vInstTransData[offset + 1], m_vInstTransData[offset + 2]);
glVertexAttrib1f(alpha, m_vInstTransData[offset + 3]);
glVertexAttrib3f(scale, m_vInstTransData[offset + 4], m_vInstTransData[offset + 5], m_vInstTransData[offset + 6]);
glDrawElements(GL_TRIANGLES, sizeof(indices) / sizeof(GLubyte), GL_UNSIGNED_BYTE, indices);
end loop
glBindVertexArray(0);
You need to specify the interpolation qualifier for this. To prevent interpolation between fragments of the same primitive, change:
out float fragAlpha;
in float fragAlpha;
into:
out flat float fragAlpha;
in flat float fragAlpha;
because float is interpolated by default. Similarly mat3 is not interpolated by default and if you want to interpolate it then use smooth instead of flat ...
If I remember correctly Scalars (int,float,double) and vectors (vec?,dvec?) are interpolated by default and matrices (mat?) are not.
I'm not sure which vertex's value ends up in your flat variable; my bet is the provoking vertex, which by default is the last vertex of the primitive. In case you need to compute it on some specific vertex, you should move the computation into a geometry shader.
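If it matters which vertex supplies the flat value, the convention can also be changed on the application side (a sketch; requires a GL 3.2+ context):
// make flat outputs take their value from the first vertex of each
// primitive instead of the last one (the default convention)
glProvokingVertex(GL_FIRST_VERTEX_CONVENTION);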

What are shaders in OpenGL and what do we need them for? [closed]

When I try to get through the OpenGL wiki and the tutorials on www.learnopengl.com, I never end up with an intuitive understanding of how the whole concept works. Can someone explain to me in a more abstract way how it works? What are the vertex shader and the fragment shader, and what do we use them for?
The OpenGL wiki gives a good definition:
A Shader is a user-defined program designed to run on some stage of a graphics processor.
History lesson
In the past, graphics cards were non-programmable pieces of silicon which performed a set of fixed algorithms:
inputs: 3D coordinates of triangles, their colors, light sources
output: a 2D image
all using a single fixed parameterized algorithm, typically similar to the Phong reflection model.
Such architectures were known as "fixed function pipeline", as they could only implement a single algorithm.
But that was too restrictive for programmers who wanted to create many different complex visual effects.
So as semiconductor manufacturing technology advanced, and GPU designers were able to cram more transistors per square millimeter, vendors started allowing some parts of the rendering pipeline to be programmed with programming languages like the C-like GLSL.
Those languages are then converted to semi-undocumented instruction sets that run on small "CPUs" built into those newer GPUs.
In the beginning, those shader languages were not even Turing complete!
The term General Purpose GPU (GPGPU) refers to this increased programmability of modern GPUs, and new languages were created to be more adapted to it than OpenGL, notably OpenCL and CUDA. See this answer for a brief discussion of which kind of algorithm lends itself better to GPU rather than CPU computing: What do the terms "CPU bound" and "I/O bound" mean?
Overview of the modern shader pipeline
In the OpenGL 4 model, only a few stages of the rendering pipeline are programmable: the vertex, tessellation, geometry, fragment and compute shader stages.
Shaders take the input from the previous pipeline stage (e.g. vertex positions, colors, and rasterized pixels) and customize the output to the next stage.
The two most important ones are:
vertex shader:
input: position of points in 3D space
output: 2D projection of the points (using 4D matrix multiplication)
This related example shows more clearly what a projection is: How to use glOrtho() in OpenGL?
fragment shader:
input: 2D position of all pixels of a triangle + (color of edges or a texture image) + lighting parameters
output: the color of every pixel of the triangle (if it is not occluded by another closer triangle), usually interpolated between vertices
The fragments are discretized from the previously calculated triangle projections, see:
How fragment shader determines the number of fragments from vertex shader output?
https://gamedev.stackexchange.com/questions/8977/what-is-a-fragment-in-3d-graphics-programming/118820#118820
Related question: What are Vertex and Pixel shaders?
From this we see that the name "shader" is not very descriptive for current architectures. The name originates of course from "shading", which is handled by what we now call the "fragment shader". But "shaders" in GLSL now also manage vertex positions, as is the case for the vertex shader, not to mention OpenGL 4.3's GL_COMPUTE_SHADER, which allows arbitrary calculations completely unrelated to rendering, much like OpenCL.
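For illustration, a trivial compute shader that just doubles every value in a buffer might look like this (a minimal sketch; the binding point is arbitrary):
#version 430
layout(local_size_x = 64) in;
layout(std430, binding = 0) buffer Data {
    float values[];
};
void main() {
    // arbitrary computation with no relation to rendering
    uint i = gl_GlobalInvocationID.x;
    values[i] = values[i] * 2.0;
}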
TODO could OpenGL be efficiently implemented with OpenCL alone, i.e., making all stages programmable? Of course, there must be a performance / flexibility trade-off.
The first GPUs with shaders even used different specialized hardware for vertex and fragment shading, since those have quite different workloads. Current architectures however use multiple passes of a single type of hardware (basically small CPUs) for all shader types, which saves some hardware duplication. This design is known as a Unified Shader Model.
The summary from the Asianometry channel at https://youtu.be/GuV-HyslPxk?t=350 also clarifies that in earlier technology some of the pipeline was actually handled by the CPU itself rather than the GPU, and that the move onto the GPU was largely led by NVIDIA.
The same video also mentions that NVIDIA's GeForce 3 series from 2001 was the first product to introduce some level of shader programmability.
Source code example
To truly understand shaders and all they can do, you have to look at many examples and learn the APIs. https://github.com/JoeyDeVries/LearnOpenGL for example is a good source.
In modern OpenGL 4, even hello world triangle programs use super simple shaders, instead of older deprecated immediate APIs like glBegin and glColor.
Consider this triangle hello world example that has both the shader and immediate versions in a single program: https://stackoverflow.com/a/36166310/895245
main.c
#include <stdio.h>
#include <stdlib.h>
#define GLEW_STATIC
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#define INFOLOG_LEN 512
static const GLuint WIDTH = 512, HEIGHT = 512;
/* vertex data is passed as input to this shader
* ourColor is passed as input to the to the fragment shader. */
static const GLchar* vertexShaderSource =
"#version 330 core\n"
"layout (location = 0) in vec3 position;\n"
"layout (location = 1) in vec3 color;\n"
"out vec3 ourColor;\n"
"void main() {\n"
" gl_Position = vec4(position, 1.0f);\n"
" ourColor = color;\n"
"}\n";
static const GLchar* fragmentShaderSource =
"#version 330 core\n"
"in vec3 ourColor;\n"
"out vec4 color;\n"
"void main() {\n"
" color = vec4(ourColor, 1.0f);\n"
"}\n";
GLfloat vertices[] = {
/* Positions Colors */
0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 0.0f,
-0.5f, -0.5f, 0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.5f, 0.0f, 0.0f, 0.0f, 1.0f
};
int main(int argc, char **argv) {
int immediate = (argc > 1) && argv[1][0] == '1';
/* Used in !immediate only. */
GLuint vao, vbo;
GLint shaderProgram;
glfwInit();
GLFWwindow* window = glfwCreateWindow(WIDTH, HEIGHT, __FILE__, NULL, NULL);
glfwMakeContextCurrent(window);
glewExperimental = GL_TRUE;
glewInit();
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glViewport(0, 0, WIDTH, HEIGHT);
if (immediate) {
float ratio;
int width, height;
glfwGetFramebufferSize(window, &width, &height);
ratio = width / (float) height;
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-ratio, ratio, -1.f, 1.f, 1.f, -1.f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glBegin(GL_TRIANGLES);
glColor3f( 1.0f, 0.0f, 0.0f);
glVertex3f(-0.5f, -0.5f, 0.0f);
glColor3f( 0.0f, 1.0f, 0.0f);
glVertex3f( 0.5f, -0.5f, 0.0f);
glColor3f( 0.0f, 0.0f, 1.0f);
glVertex3f( 0.0f, 0.5f, 0.0f);
glEnd();
} else {
/* Build and compile shader program. */
/* Vertex shader */
GLint vertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertexShader, 1, &vertexShaderSource, NULL);
glCompileShader(vertexShader);
GLint success;
GLchar infoLog[INFOLOG_LEN];
glGetShaderiv(vertexShader, GL_COMPILE_STATUS, &success);
if (!success) {
glGetShaderInfoLog(vertexShader, INFOLOG_LEN, NULL, infoLog);
printf("ERROR::SHADER::VERTEX::COMPILATION_FAILED\n%s\n", infoLog);
}
/* Fragment shader */
GLint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragmentShader, 1, &fragmentShaderSource, NULL);
glCompileShader(fragmentShader);
glGetShaderiv(fragmentShader, GL_COMPILE_STATUS, &success);
if (!success) {
glGetShaderInfoLog(fragmentShader, INFOLOG_LEN, NULL, infoLog);
printf("ERROR::SHADER::FRAGMENT::COMPILATION_FAILED\n%s\n", infoLog);
}
/* Link shaders */
shaderProgram = glCreateProgram();
glAttachShader(shaderProgram, vertexShader);
glAttachShader(shaderProgram, fragmentShader);
glLinkProgram(shaderProgram);
glGetProgramiv(shaderProgram, GL_LINK_STATUS, &success);
if (!success) {
glGetProgramInfoLog(shaderProgram, INFOLOG_LEN, NULL, infoLog);
printf("ERROR::SHADER::PROGRAM::LINKING_FAILED\n%s\n", infoLog);
}
glDeleteShader(vertexShader);
glDeleteShader(fragmentShader);
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
/* Position attribute */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(0);
/* Color attribute */
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(1);
glBindVertexArray(0);
glUseProgram(shaderProgram);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);
}
glfwSwapBuffers(window);
/* Main loop. */
while (!glfwWindowShouldClose(window)) {
glfwPollEvents();
}
if (!immediate) {
glDeleteVertexArrays(1, &vao);
glDeleteBuffers(1, &vbo);
glDeleteProgram(shaderProgram);
}
glfwTerminate();
return EXIT_SUCCESS;
}
Adapted from Learn OpenGL, my GitHub upstream.
Compile and run on Ubuntu 20.04:
sudo apt install libglew-dev libglfw3-dev
gcc -ggdb3 -O0 -std=c99 -Wall -Wextra -pedantic -o main.out main.c -lGL -lGLEW -lglfw
# Shader
./main.out
# Immediate
./main.out 1
Both produce an identical outcome: a triangle whose corners are red, green and blue, with the colors interpolated in between.
From that we see how:
the vertex and fragment shader programs are being represented as C-style strings containing GLSL language (vertexShaderSource and fragmentShaderSource) inside a regular C program that runs on the CPU
this C program makes OpenGL calls which compile those strings into GPU code, e.g.:
glShaderSource(fragmentShader, 1, &fragmentShaderSource, NULL);
glCompileShader(fragmentShader);
the shaders define their expected inputs, and the C program provides them to the GPU code through a pointer to memory. For example, the vertex shader defines its expected inputs as an array of vertex positions and colors:
"layout (location = 0) in vec3 position;\n"
"layout (location = 1) in vec3 color;\n"
"out vec3 ourColor;\n"
and also defines one of its outputs, ourColor, as an array of colors, which then becomes an input to the fragment shader:
static const GLchar* fragmentShaderSource =
"#version 330 core\n"
"in vec3 ourColor;\n"
The C program then provides the array containing the vertex positions and colors from the CPU to the GPU
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
In the immediate, non-shader example however, we see that magic API calls are made that explicitly give positions and colors:
glColor3f( 1.0f, 0.0f, 0.0f);
glVertex3f(-0.5f, -0.5f, 0.0f);
We understand therefore that this represents a much more restricted model, since the positions and colors are not arbitrary user-defined arrays in memory that then get processed by an arbitrary user provided program anymore, but rather just inputs to a Phong-like model.
In both cases, the rendered output normally goes straight to the video output, without passing back through the CPU, although it is possible to read it back to the CPU, e.g. if you want to save it to a file: How to use GLUT/OpenGL to render to a file?
Cool non-trivial shader applications to 3D graphics
One classic cool application of a non-trivial shader is dynamic shadows, i.e. shadows cast by one object on another, as opposed to shadows that only depend on the angle between the normal of a triangle and the light source, which was already covered in the Phong model.
Cool non-3D fragment shader applications
https://www.shadertoy.com/ is a "Twitter for fragment shaders". It contains a huge selection of visually impressive shaders and can serve as a "zero setup" way to play with fragment shaders. Shadertoy runs on WebGL, an OpenGL interface for the browser, so when you click on a shadertoy, it renders the shader code in your browser. Like most "fragment shader graphing applications", it just uses a fixed, simple vertex shader that draws two triangles covering the screen right in front of the camera (see: WebGL/GLSL - How does a ShaderToy work?), so users only code the fragment shader.
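For instance, a minimal Shadertoy-style fragment shader (using the site's mainImage entry point and its iResolution/iTime uniforms) that cycles colors over time:
// Shadertoy supplies the surrounding boilerplate and the iResolution/iTime uniforms
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord / iResolution.xy;   // normalized pixel coordinates
    vec3 col = 0.5 + 0.5 * cos(iTime + uv.xyx + vec3(0.0, 2.0, 4.0));
    fragColor = vec4(col, 1.0);
}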
Here are some more scientifically oriented examples hand-picked by me:
image processing can be done faster than on CPU for certain algorithms: Is it possible to build a heatmap from point data at 60 times per second?
plotting can be done faster than on CPU for certain functions: Is it possible to build a heatmap from point data at 60 times per second?
Shaders basically give you the correct coloring of the object that you want to render, based on several light equations. So if you have a sphere, a light, and a camera, then the camera should see some shadows, some shiny parts, etc, even if the sphere has only one color. Shaders perform the light equation computations to give you these effects.
The vertex shader transforms each vertex's 3D position in virtual space (your 3d model) to the 2D coordinate at which it appears on the screen.
The fragment shader basically gives you the coloring of each pixel by doing light computations.
In short and simple terms, the GPU's rendering routines provide hook/callback functions so that you can control how the textures of the faces are painted. These hooks are the shaders.