I'm having a problem reading an image from a fragment shader. First I write into the image in shader program A (I'm just painting it blue), then I read from it in another shader program B to display the image. But the read doesn't return the right color: I get a black image.
[Image: unexpected result]
This is my application code:
void GLAPIENTRY MessageCallback(GLenum source, GLenum type, GLuint id, GLenum severity, GLsizei length, const GLchar* message, const void* userParam)
{
std::cout << "GL CALLBACK: type = " << std::hex << type << ", severity = " << std::hex << severity << ", message = " << message << "\n"
<< (type == GL_DEBUG_TYPE_ERROR ? "** GL ERROR **" : "") << std::endl;
}
class ImgRW
: public Core
{
public:
ImgRW()
: Core(512, 512, "JFAD")
{}
virtual void Start() override
{
glEnable(GL_DEBUG_OUTPUT);
glDebugMessageCallback(MessageCallback, nullptr);
shader_w = new Shader("w_img.vert", "w_img.frag");
shader_r = new Shader("r_img.vert", "r_img.frag");
glGenTextures(1, &space);
glBindTexture(GL_TEXTURE_2D, space);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA32F, 512, 512);
glBindImageTexture(0, space, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
glGenVertexArrays(1, &vertex_array);
glBindVertexArray(vertex_array);
}
virtual void Update() override
{
shader_w->use(); // writing shader
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT | GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
shader_r->use(); // reading shader
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
virtual void End() override
{
delete shader_w;
delete shader_r;
glDeleteTextures(1, &space);
glDeleteVertexArrays(1, &vertex_array);
}
private:
Shader* shader_w;
Shader* shader_r;
GLuint vertex_array;
GLuint space;
};
#if 1
CORE_MAIN(ImgRW)
#endif
and these are my fragment shaders:
Writing to the image (GLSL):
#version 430 core
layout (binding = 0, rgba32f) uniform image2D img;
out vec4 out_color;
void main()
{
imageStore(img, ivec2(gl_FragCoord.xy), vec4(0.0f, 0.0f, 1.0f, 1.0f));
}
Reading from the image (GLSL):
#version 430 core
layout (binding = 0, rgba32f) uniform image2D img;
out vec4 out_color;
void main()
{
vec4 color = imageLoad(img, ivec2(gl_FragCoord.xy));
out_color = color;
}
The only way I get the correct result is if I change the order of the drawing commands, and then I don't even need the memory barriers, like this (in the Update function above):
shader_r->use(); // reading shader
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
shader_w->use(); // writing shader
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
I don't know if the problem is the graphics card or the drivers, or if I'm missing some kind of flag that enables memory barriers, or if I used the wrong barrier bits, or if I placed the barriers in the wrong part of the code.
The vertex shader for both shader programs is the following:
#version 430 core
void main()
{
vec2 v[4] = vec2[4]
(
vec2(-1.0, -1.0),
vec2( 1.0, -1.0),
vec2(-1.0, 1.0),
vec2( 1.0, 1.0)
);
vec4 p = vec4(v[gl_VertexID], 0.0, 1.0);
gl_Position = p;
}
and this is my init function:
void Window::init()
{
glfwInit();
window = glfwCreateWindow(getWidth(), getHeight(), name, nullptr, nullptr);
glfwMakeContextCurrent(window);
glfwSetFramebufferSizeCallback(window, framebufferSizeCallback);
glfwSetCursorPosCallback(window, cursorPosCallback);
//glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);
assert(gladLoadGLLoader((GLADloadproc)glfwGetProcAddress) && "Couldn't initialize OpenGL");
glEnable(GL_DEPTH_TEST);
}
and in my Run function I call my Start, Update, and End functions:
void Core::Run()
{
std::cout << glGetString(GL_VERSION) << std::endl;
Start();
float lastFrame{ 0.0f };
while (!window.close())
{
float currentFrame = static_cast<float>(glfwGetTime());
Time::deltaTime = currentFrame - lastFrame;
lastFrame = currentFrame;
glViewport(0, 0, getWidth(), getHeight());
glClearBufferfv(GL_COLOR, 0, &color[0]);
glClearBufferfi(GL_DEPTH_STENCIL, 0, 1.0f, 0);
Update();
glfwSwapBuffers(window);
glfwPollEvents();
}
End();
}
glEnable(GL_DEPTH_TEST);
As I suspected.
Just because a fragment shader doesn't write a color output doesn't mean that those fragments will not affect the depth buffer. If the fragment passes the depth test and the depth write mask is on (assuming no other state is involved), it will update the depth buffer with the current fragment's depth (and the color buffer with uninitialized values, but that's a different matter).
Since you're drawing the same geometry both times, the second rendering's fragments will get the same depth values as the corresponding fragments from the first rendering. But the default depth function is GL_LESS. Since any value is not less than itself, this means that all fragments from the second rendering fail the depth test.
And therefore, they don't get rendered.
So just turn off the depth test. And while you're at it, turn off color writes for your "writing" rendering pass, since you're not writing to the color buffers.
Now, you do properly need the memory barrier between the two draw calls. But you only need the GL_SHADER_IMAGE_ACCESS_BARRIER_BIT, since that's how you're reading the data (via image load/store, not samplers).
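A minimal sketch of how the Update() function from the question could look with those changes applied (the glDisable call could equally go once into Start()):

virtual void Update() override
{
    glDisable(GL_DEPTH_TEST); // both passes draw the same quad, so the second would fail GL_LESS

    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // the writing pass only does imageStore
    shader_w->use(); // writing shader
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT); // make the stores visible to imageLoad

    shader_r->use(); // reading shader
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}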
Related
I am trying to render some polygons to a texture, and then render the texture to the screen.
I'm not sure how to debug my code, since that would require probing the internal state of OpenGL, so I would appreciate tips on how to debug this myself even more than having my error pointed out.
Anyway, I commented the code I wrote, explaining what I expect each line to do.
Here is a description of what the code is supposed to do.
Basically, I made a vertex shader that provides the position, UV, and color to the fragment shader. The fragment shader has a uniform to activate texture sampling; otherwise it will just output the input color. In both cases, the color is multiplied by a uniform color. First I create a texture and fill it with red and green raw pixel data for testing. This texture is correctly rendered to the screen (I see the red and green parts exactly as I initialized them). Then I try to do the actual rendering to the texture: I try to render a small blue square in the middle of it (sampler disabled in the fragment shader, color uniform set to blue), but I can't get this blue square to appear on the rendered texture.
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include "utils.h"
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <iostream>
using namespace std;
#define numVAOs 1
#define numVBOs 1
GLuint shaderProgram;
GLuint unifUseTexture, unifInTexture, unifTMat, unifDrawColor;
GLuint texture;
GLuint textureFrameBuffer;
GLuint vao[numVAOs];
GLuint vbo[numVBOs];
void drawRectangle() {
}
void init() {
// Compile the shaderProgram
shaderProgram = createShaderProgram("vertex.glsl","fragment.glsl");
// Retrieve the uniform location
unifUseTexture = glGetUniformLocation(shaderProgram,"useTexture");
unifInTexture = glGetUniformLocation(shaderProgram,"inTexture");
unifTMat = glGetUniformLocation(shaderProgram,"tMat");
unifDrawColor = glGetUniformLocation(shaderProgram,"drawColor");
// Create vertex array object and vertex buffer object
glGenVertexArrays(numVAOs,vao);
glBindVertexArray(vao[0]);
float xyzuvrgbaSquare[54] = {
/* C */ 1.0,-1.0,0.0, 1.0,0.0, 1.0,1.0,1.0,1.0,
/* A */ -1.0,1.0,0.0, 0.0,1.0, 1.0,1.0,1.0,1.0,
/* B */ 1.0,1.0,0.0, 1.0,1.0, 1.0,1.0,1.0,1.0,
/* A */ -1.0,1.0,0.0, 0.0,1.0, 1.0,1.0,1.0,1.0,
/* C */ 1.0,-1.0,0.0, 1.0,0.0, 1.0,1.0,1.0,1.0,
/* D */-1.0,-1.0,0.0, 0.0,0.0, 1.0,1.0,1.0,1.0
};
glGenBuffers(numVBOs,vbo);
glBindBuffer(GL_ARRAY_BUFFER,vbo[0]);
glBufferData(GL_ARRAY_BUFFER, 4*54,xyzuvrgbaSquare,GL_STATIC_DRAW);
// Associate vbo with the correct vertex attribute to display the rectangle
glBindBuffer(GL_ARRAY_BUFFER,vbo[0]);
glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,36,0); // inPosition
glVertexAttribPointer(1,4,GL_FLOAT,GL_FALSE,36,(void*)20); // inColor
glVertexAttribPointer(2,2,GL_FLOAT,GL_FALSE,36,(void*)12); // inUV
glEnableVertexAttribArray(0); // location=0 in the shader
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
// Generate a small 128x128 texture. I followed the tutorial
// over http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/
// generate a frameBuffer to contain the texture
glGenFramebuffers(1,&textureFrameBuffer);
// Bind it, so when I will generate the texture it will be associated with it
glBindFramebuffer(GL_FRAMEBUFFER, textureFrameBuffer);
glGenTextures(1,&texture);
glBindTexture(GL_TEXTURE_2D,texture);
// Put some raw data inside of it for testing purposes. I will fill it
// half with green, half with red
unsigned char* imageRaw = new unsigned char[4*128*128];
for(int i=0; i<4*128*64; i+=4) {
imageRaw[i] = 255;
imageRaw[i+1] = 0;
imageRaw[i+2] = 0;
imageRaw[i+3] = 255;
imageRaw[4*128*64+i] = 0;
imageRaw[4*128*64+i+1] = 255;
imageRaw[4*128*64+i+2] = 0;
imageRaw[4*128*64+i+3] = 255;
}
glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA,128,128,0,GL_RGBA,GL_UNSIGNED_BYTE,imageRaw);
// Setup some required parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
// Draw a small blue square on the texture
// So, activate the previously compiled shader program and setup the uniforms
glUseProgram(shaderProgram);
// First, create a transform matrix to make the square smaller (20% of texture)
glm::mat4 tMat = glm::scale(glm::mat4(1.0f),glm::vec3(0.2,0.2,0));
glUniformMatrix4fv(unifTMat,1,GL_FALSE,glm::value_ptr(tMat));
// do not use a texture (ignore sampler2D in fragment shader)
glUniform1i(unifUseTexture,0);
// use the color BLUE for the rectangle
glUniform4f(unifDrawColor,0.0,0.0,1.0,1.0);
// Bind the textureFrameBuffer to render on the texture instead of the screen
glBindFramebuffer(GL_FRAMEBUFFER,textureFrameBuffer);
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture,0);
GLenum drawBuffers[1] = {GL_COLOR_ATTACHMENT0};
glDrawBuffers(1, drawBuffers);
GLenum status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
if( status != GL_FRAMEBUFFER_COMPLETE ) {
cout << "framebuffer status: " << status << endl;
}
// the vertex buffer and vertex attribute pointer have already been
// described, so I'll just do the draw call here
glDrawArrays(GL_TRIANGLES,0,6);
// Display the texture on screen
// Bind the screen framebuffer (0) so the following rendering will occur on screen
glBindFramebuffer(GL_FRAMEBUFFER,0);
// Put a white background color
glClearColor(1.0,1.0,1.0,1.0);
glClear(GL_COLOR_BUFFER_BIT);
// Change properly the shader uniforms
glUniform4f(unifDrawColor,1.0,1.0,1.0,1.0); // multiply by white, no changes
glUniform1i(unifUseTexture,1); // set useTexture to True
// Create a transform matrix to scale the rectangle so that it uses up only half screen
tMat = glm::scale(glm::mat4(1.0f),glm::vec3(.5,.5,.0));
glUniformMatrix4fv(unifTMat,1,GL_FALSE,glm::value_ptr(tMat));
// Put the sampler2D
glActiveTexture(GL_TEXTURE0); // Work on texture0
// 0 because of (binding = 0) on the fragment shader
glBindTexture(GL_TEXTURE_2D,texture);
glDrawArrays(GL_TRIANGLES,0,6); // 6 vertices
}
int main(int argc, char** argv) {
// Build the window
if (!glfwInit()) exit(EXIT_FAILURE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR,4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR,3);
GLFWwindow* window = glfwCreateWindow(600,600,"Dashboard",NULL,NULL);
glfwMakeContextCurrent(window);
if(glewInit() != GLEW_OK) exit(EXIT_FAILURE);
glfwSwapInterval(1);
init();
while(!glfwWindowShouldClose(window)) {
//display(window,glfwGetTime());
glfwSwapBuffers(window);
glfwPollEvents();
}
glfwDestroyWindow(window);
glfwTerminate();
exit(EXIT_SUCCESS);
}
Edit: I forgot to put the shader code here, though the problem is not within the shaders, because they do work when used to render the texture to the screen.
vertex.glsl:
#version 430
layout (location=0) in vec3 inPosition;
layout (location=1) in vec4 inColor;
layout (location=2) in vec2 inUV;
uniform mat4 tMat;
uniform vec4 drawColor;
out vec4 varyingColor;
out vec2 varyingUV;
void main(void) {
gl_Position = tMat * vec4(inPosition,1.0);
varyingColor = inColor*drawColor;
varyingUV = inUV;
}
fragment.glsl:
#version 430
in vec4 varyingColor;
in vec2 varyingUV;
layout(location = 0) out vec4 color;
layout (binding=0) uniform sampler2D inTexture;
uniform bool useTexture;
void main(void) {
if( useTexture )
color = vec4(texture(inTexture,varyingUV).rgb,1.0) * varyingColor;
else
color = varyingColor;
}
The texture which is attached to the framebuffer has a different size than the window. Hence you have to adjust the viewport rectangle (glViewport) to the size of the currently bound framebuffer before drawing the geometry:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 128, 128, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageRaw);
// [...]
glBindFramebuffer(GL_FRAMEBUFFER, textureFrameBuffer);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, texture,0);
glViewport(0, 0, 128, 128);
// [...]
glDrawArrays(GL_TRIANGLES, 0, 6);
// [...]
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, WIDTH, HEIGHT);
// [...]
glDrawArrays(GL_TRIANGLES, 0, 6);
I'm trying to write a simple OpenGL program that cyclically grows and shrinks a single point at the center of the screen using glPointSize() and a variable pointSize. Printing the value of pointSize and stepping through the code with a debugger appears to show that pointSize updates correctly on each iteration. The rendering of the point also appears correct while pointSize is increasing, but once pointSize starts decreasing the point is still rendered at its maximum size on the screen, and it keeps rendering that way no matter how much pointSize grows or shrinks in value afterwards. I cannot figure out why.
#include <GL\glew.h>
#include <GLFW\glfw3.h>
#include <iostream>
#define numVAOs 1
GLuint renderingProgram;
GLuint vao[numVAOs];
GLuint createShaderProgram() {
const char *vshaderSource =
"#version 430 \n"
"void main(void) \n"
"{ gl_Position = vec4(0.0, 0.0, 0.0, 1.0); }";
const char *fshaderSource =
"#version 430 \n"
"out vec4 color; \n"
"void main(void) \n"
"{ color = vec4(0.0, 0.0, 1.0, 1.0); }";
GLuint vShader = glCreateShader(GL_VERTEX_SHADER);
GLuint fShader = glCreateShader(GL_FRAGMENT_SHADER);
GLuint vfprogram = glCreateProgram();
glShaderSource(vShader, 1, &vshaderSource, NULL);
glShaderSource(fShader, 1, &fshaderSource, NULL);
glCompileShader(vShader);
glCompileShader(fShader);
glAttachShader(vfprogram, vShader);
glAttachShader(vfprogram, fShader);
glLinkProgram(vfprogram);
return vfprogram;
}
void init() {
renderingProgram = createShaderProgram();
glGenVertexArrays(numVAOs, vao);
glBindVertexArray(vao[0]);
}
GLfloat pointSize = 100.0f;
GLfloat increment = 1.0f;
void display() {
glUseProgram(renderingProgram);
if (pointSize > 200 || pointSize < 2) increment *= -1.0f;
pointSize += increment;
glPointSize(pointSize);
glDrawArrays(GL_POINTS, 0, 1);
}
int main(void) {
if (!glfwInit()) exit(EXIT_FAILURE);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
GLFWwindow* window = glfwCreateWindow(600, 600, "Test", nullptr, nullptr);
glfwMakeContextCurrent(window);
if (glewInit() != GLEW_OK) exit(EXIT_FAILURE);
glfwSwapInterval(1);
init();
while (!glfwWindowShouldClose(window)) {
display();
glfwSwapBuffers(window);
glfwPollEvents();
}
glfwDestroyWindow(window);
glfwTerminate();
exit(EXIT_SUCCESS);
}
Your code does not clear the screen with glClear. Therefore, when the point grows, you can see it. When it shrinks, it's simply drawn 'inside' the bigger dot without you noticing. You could catch that if you changed the color of the dot as it changes size.
Simply clear the color buffer at the beginning of the frame:
void display() {
// default clear color is black, but you can change it:
//glClearColor(1,0,0,1);
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(renderingProgram);
...
I'm working on developing code in OpenGL, and I was completing one of the tutorials for a lesson. However, the code that I completed did not color the triangle. Based on the tutorial, my triangle should come out green, but it keeps turning out white. I think there is an error in the code for my shaders, but I can't seem to find it.
I tried altering the code a few times, and I even moved on to the next tutorial, which shades each vertex. However, my triangle is still coming out as white.
#include <iostream> //Includes C++ i/o stream
#include <GL/glew.h> //Includes glew header
#include <GL/freeglut.h> //Includes freeglut header
using namespace std; //Uses the standard namespace
#define WINDOW_TITLE "Modern OpenGL" //Macro for window title
//Vertex and Fragment Shader Source Macro
#ifndef GLSL
#define GLSL(Version, Source) "#version " #Version "\n" #Source
#endif
//Variables for window width and height
int WindowWidth = 800, WindowHeight = 600;
/* User-defined Function prototypes to:
* initialize the program, set the window size,
* redraw graphics on the window when resized,
* and render graphics on the screen
* */
void UInitialize(int, char*[]);
void UInitWindow(int, char*[]);
void UResizeWindow(int, int);
void URenderGraphics(void);
void UCreateVBO(void); //This step is missing from Tutorial 3-3
void UCreateShaders(void);
/*Vertex Shader Program Source Code*/
const GLchar * VertexShader = GLSL(440,
in layout(location=0) vec4 vertex_Position; //Receive vertex coordinates from attribute 0. i.e. 2
void main(){
gl_Position = vertex_Position; //Sends vertex positions to gl_position vec 4
}
);
/*Fragment Shader Program Source Code*/
const GLchar * FragmentShader = GLSL(440,
void main(){
gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0); //Sets the pixels / fragments of the triangle to green
}
);
//main function. Entry point to the OpenGL Program
int main(int argc, char* argv[])
{
UInitialize(argc, argv); //Initialize the OpenGL program
glutMainLoop(); // Starts the Open GL loop in the background
exit(EXIT_SUCCESS); //Terminates the program successfully
}
//Implements the UInitialize function
void UInitialize(int argc, char* argv[])
{
//glew status variable
GLenum GlewInitResult;
UInitWindow(argc, argv); //Creates the window
//Checks glew status
GlewInitResult = glewInit();
if(GLEW_OK != GlewInitResult)
{
fprintf(stderr, "Error: %s\n", glewGetErrorString(GlewInitResult));
exit(EXIT_FAILURE);
}
//Displays GPU OpenGL version
fprintf(stdout, "INFO: OpenGL Version: %s\n", glGetString(GL_VERSION));
UCreateVBO(); //Calls the function to create the Vertex Buffer Object
UCreateShaders(); //Calls the function to create the Shader Program
//Sets the background color of the window to black. Optional
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
}
//Implements the UInitWindow function
void UInitWindow(int argc, char* argv[])
{
//Initializes freeglut
glutInit(&argc, argv);
//Sets the window size
glutInitWindowSize(WindowWidth, WindowHeight);
//Memory buffer setup for display
glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
//Creates a window with the macro placeholder title
glutCreateWindow(WINDOW_TITLE);
glutReshapeFunc(UResizeWindow); //Called when the window is resized
glutDisplayFunc(URenderGraphics); //Renders graphics on the screen
}
//Implements the UResizeWindow function
void UResizeWindow(int Width, int Height)
{
glViewport(0,0, Width, Height);
}
//Implements the URenderGraphics function
void URenderGraphics(void)
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); //Clears the screen
/*Creates the triangle*/
GLuint totalVertices = 3; //Specifies the number of vertices for the triangle i.e. 3
glDrawArrays(GL_TRIANGLES, 0, totalVertices); //Draws the triangle
glutSwapBuffers(); //Flips the back buffer with the front buffer every frame. Similar to GL Flush
}
//Implements the CreateVBO function
void UCreateVBO(void)
{
//Specifies coordinates for triangle vertices on x and y
GLfloat verts[] =
{
0.0f, 1.0f, //top-center of the screen
-1.0f, -1.0f, //bottom-left of the screen
1.0f, -1.0f //bottom-right of the screen
};
//Stores the size of the verts array / number of the coordinates needed for the triangle i.e. 6
float numVertices = sizeof(verts);
GLuint myBufferID; //Variable for vertex buffer object id
glGenBuffers(1, &myBufferID); //Creates 1 buffer
glBindBuffer(GL_ARRAY_BUFFER, myBufferID); //Activates the buffer
glBufferData(GL_ARRAY_BUFFER, numVertices, verts, GL_STATIC_DRAW); //Sends vertex or coordinate data to GPU
/*Creates the Vertex Attribute Pointer*/
GLuint floatsPerVertex = 2; //Number of coordinates per vertex
glEnableVertexAttribArray(0); //Specifies the initial position of the coordinates in the buffer
/*Instructs the GPU on how to handle the vertex buffer object data.
* Parameters: attribPointerPosition | coordinates per vertex | data type | deactivate normalization | 0 strides | 0 offset
*/
glVertexAttribPointer(0, floatsPerVertex, GL_FLOAT, GL_FALSE, 0, 0);
}
//Implements the UCreateShaders function
void UCreateShaders(void)
{
//Create a shader program object
GLuint ProgramId = glCreateProgram();
GLuint vertexShaderId = glCreateShader(GL_VERTEX_SHADER); //Create a Vertex Shader Object
GLuint fragmentShaderId = glCreateShader(GL_FRAGMENT_SHADER); //Create a Fragment Shader Object
glShaderSource(vertexShaderId, 1, &VertexShader, NULL); //Retrieves the vertex shader source code
glShaderSource(fragmentShaderId, 1, &FragmentShader, NULL); //Retrieves the fragment shader source code
glCompileShader(vertexShaderId); //Compile the vertex shader
glCompileShader(fragmentShaderId); //Compile the fragment shader
//Attaches the vertex and fragment shaders to the shader program
glAttachShader(ProgramId, vertexShaderId);
glAttachShader(ProgramId, fragmentShaderId);
glLinkProgram(ProgramId); //Links the shader program
glUseProgram(ProgramId); //Uses the shader program
}
When completed correctly, the code should result in a solid green triangle.
The variable gl_FragColor is unavailable in the GLSL 4.4 core profile, since it was deprecated and later removed. Because you don't specify a compatibility profile, core is assumed by default. Either use
#version 440 compatibility
for your shaders, or, even better, use the approach for GLSL 4.4 onwards:
#version 440 core
layout(location = 0) out vec4 OUT;
void main(){
OUT = vec4(0.0, 1.0, 0.0, 1.0);
}
My university lecturer gave us this code and it doesn't work. It never has, and no one has been able to get it to work so far. Are we being stupid, or is our lecturer giving us broken material? I seriously can't figure this out and need help. I managed to fix many mistakes part way through, but after that the issues got harder and harder to solve, despite this being '100% working' code. Side note: all the directories are formatted correctly, and the additional dependencies have all been set up correctly to the best of my knowledge.
//First Shader Handling Program
#include "stdafx.h"
#include "gl_core_4_3.hpp"
#include <GLFW/glfw3.h>
int _tmain(int argc, _TCHAR* argv[])
{
//Select the 4.3 core profile
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
//Start the OpenGL context and open a window using the //GLFW helper library
if (!glfwInit()) {
fprintf(stderr, "ERROR: could not start GLFW3\n");
glfwTerminate();
return 1;
}
GLFWwindow* window = glfwCreateWindow(640, 480, "First GLSL Triangle", NULL, NULL);
if (!window) {
fprintf(stderr, "ERROR: could not open window with GLFW3\n");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent(window);
//Load the OpenGL functions for C++ gl::exts::LoadTest didLoad = gl::sys::LoadFunctions(); if (!didLoad) {
//Load failed
fprintf(stderr, "ERROR: GLLoadGen failed to load functions\n");
glfwTerminate();
return 1;
}
printf("Number of functions that failed to load : %i.\n", didLoad.GetNumMissing());
//Tell OpenGL to only draw a pixel if its shape is closer to //the viewer
//i.e. Enable depth testing with smaller depth value //interpreted as being closer gl::Enable(gl::DEPTH_TEST); gl::DepthFunc(gl::LESS);
//Set up the vertices for a triangle
float points[] = {
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f
};
//Create a vertex buffer object to hold this data GLuint vbo=0;
gl::GenBuffers(1, &vbo);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::BufferData(gl::ARRAY_BUFFER, 9 * sizeof(float), points,
gl::STATIC_DRAW);
//Create a vertex array object
GLuint vao = 0;
gl::GenVertexArrays(1, &vao);
gl::BindVertexArray(vao);
gl::EnableVertexAttribArray(0);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::VertexAttribPointer(0, 3, gl::FLOAT, FALSE, 0, NULL);
//The shader code strings which later we will put in //separate files
//The Vertex Shader
const char* vertex_shader =
"#version 400\n"
"in vec3 vp;"
"void main() {"
" gl_Position = vec4(vp, 1.0);"
"}";
//The Fragment Shader
const char* fragment_shader =
"#version 400\n"
"out vec4 frag_colour;"
"void main() {"
" frag_colour = vec4(1.0, 0.5, 0.0, 1.0);"
"}";
//Load the strings into shader objects and compile GLuint vs = gl::CreateShader(gl::VERTEX_SHADER); gl::ShaderSource(vs, 1, &vertex_shader, NULL); gl::CompileShader(vs);
GLuint fs = gl::CreateShader(gl::FRAGMENT_SHADER); gl::ShaderSource(fs, 1, &fragment_shader, NULL); gl::CompileShader(fs);
//Compiled shaders must be compiled into a single executable //GPU shader program
//Create empty program and attach shaders GLuint shader_program = gl::CreateProgram(); gl::AttachShader(shader_program, fs); gl::AttachShader(shader_program, vs); gl::LinkProgram(shader_program);
//Now draw
while (!glfwWindowShouldClose(window)) {
//Clear the drawing surface
gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
gl::UseProgram(shader_program);
gl::BindVertexArray(vao);
//Draw point 0 to 3 from the currently bound VAO with
//current in-use shader
gl::DrawArrays(gl::TRIANGLES, 0, 3);
//update GLFW event handling
glfwPollEvents();
//Put the stuff we have been drawing onto the display glfwSwapBuffers(window);
}
//Close GLFW and end
glfwTerminate();
return 0;
}
Your line endings seem to have been mangled.
There are multiple places in your code where a line was not properly broken in two, so actual code now sits on the same line as a comment and is therefore never executed. This is your program with proper line endings:
//First Shader Handling Program
#include "stdafx.h"
#include "gl_core_4_3.hpp"
#include <GLFW/glfw3.h>
int _tmain(int argc, _TCHAR* argv[])
{
//Select the 4.3 core profile
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
//Start the OpenGL context and open a window using the
//GLFW helper library
if (!glfwInit()) {
fprintf(stderr, "ERROR: could not start GLFW3\n");
glfwTerminate();
return 1;
}
GLFWwindow* window = glfwCreateWindow(640, 480, "First GLSL Triangle", NULL, NULL);
if (!window) {
fprintf(stderr, "ERROR: could not open window with GLFW3\n");
glfwTerminate();
return 1;
}
glfwMakeContextCurrent(window);
//Load the OpenGL functions for C++
gl::exts::LoadTest didLoad = gl::sys::LoadFunctions();
if (!didLoad) {
//Load failed
fprintf(stderr, "ERROR: GLLoadGen failed to load functions\n");
glfwTerminate();
return 1;
}
printf("Number of functions that failed to load : %i.\n", didLoad.GetNumMissing());
//Tell OpenGL to only draw a pixel if its shape is closer to
//the viewer
//i.e. Enable depth testing with smaller depth value
//interpreted as being closer
gl::Enable(gl::DEPTH_TEST);
gl::DepthFunc(gl::LESS);
//Set up the vertices for a triangle
float points[] = {
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f
};
//Create a vertex buffer object to hold this data
GLuint vbo=0;
gl::GenBuffers(1, &vbo);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::BufferData(gl::ARRAY_BUFFER, 9 * sizeof(float), points, gl::STATIC_DRAW);
//Create a vertex array object
GLuint vao = 0;
gl::GenVertexArrays(1, &vao);
gl::BindVertexArray(vao);
gl::EnableVertexAttribArray(0);
gl::BindBuffer(gl::ARRAY_BUFFER, vbo);
gl::VertexAttribPointer(0, 3, gl::FLOAT, FALSE, 0, NULL);
//The shader code strings which later we will put in
//separate files
//The Vertex Shader
const char* vertex_shader =
"#version 400\n"
"in vec3 vp;"
"void main() {"
" gl_Position = vec4(vp, 1.0);"
"}";
//The Fragment Shader
const char* fragment_shader =
"#version 400\n"
"out vec4 frag_colour;"
"void main() {"
" frag_colour = vec4(1.0, 0.5, 0.0, 1.0);"
"}";
//Load the strings into shader objects and compile
GLuint vs = gl::CreateShader(gl::VERTEX_SHADER);
gl::ShaderSource(vs, 1, &vertex_shader, NULL);
gl::CompileShader(vs);
GLuint fs = gl::CreateShader(gl::FRAGMENT_SHADER);
gl::ShaderSource(fs, 1, &fragment_shader, NULL);
gl::CompileShader(fs);
//Compiled shaders must be compiled into a single executable
//GPU shader program
//Create empty program and attach shaders
GLuint shader_program = gl::CreateProgram();
gl::AttachShader(shader_program, fs);
gl::AttachShader(shader_program, vs);
gl::LinkProgram(shader_program);
//Now draw
while (!glfwWindowShouldClose(window)) {
//Clear the drawing surface
gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
gl::UseProgram(shader_program);
gl::BindVertexArray(vao);
//Draw point 0 to 3 from the currently bound VAO with
//current in-use shader
gl::DrawArrays(gl::TRIANGLES, 0, 3);
//update GLFW event handling
glfwPollEvents();
//Put the stuff we have been drawing onto the display
glfwSwapBuffers(window);
}
//Close GLFW and end
glfwTerminate();
return 0;
}
1) Do I understand correctly that, to draw using vertex arrays or VBOs, I need for all my attributes to either call glBindAttribLocation before the shader program linkage, or call glGetAttribLocation after the shader program was successfully linked, and then use the bound/obtained index in the glVertexAttribPointer and glEnableVertexAttribArray calls?
To be more specific: these three functions - glGetAttribLocation, glVertexAttribPointer and glEnableVertexAttribArray - all have an input parameter named "index". Is it the same "index" for all three? And is it the same value as the one returned by glGetAttribLocation?
If yes:
2) I've been facing a problem on OS X, I described it here: https://stackoverflow.com/questions/28093919/using-default-attribute-location-doesnt-work-on-osx-osx-opengl-bug , but unfortunately didn't get any replies.
The problem is that depending on what attribute locations I bind to my attributes I do or do not see anything on the screen. I only see this behavior on my MacBook Pro with OS X 10.9.5; I've tried running the same code on Linux and Windows and it seems to work on those platforms independently from which locations are my attributes bound to.
Here is a code example (which is supposed to draw a red triangle on the screen) that exhibits the problem:
#include <iostream>
#include <GLFW/glfw3.h>
GLuint global_program_object;
GLint global_position_location;
GLint global_aspect_ratio_location;
GLuint global_buffer_names[1];
int LoadShader(GLenum type, const char *shader_source)
{
GLuint shader;
GLint compiled;
shader = glCreateShader(type);
if (shader == 0)
return 0;
glShaderSource(shader, 1, &shader_source, NULL);
glCompileShader(shader);
glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
if (!compiled)
{
GLint info_len = 0;
glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &info_len);
if (info_len > 1)
{
char* info_log = new char[info_len];
glGetShaderInfoLog(shader, info_len, NULL, info_log);
std::cout << "Error compiling shader" << info_log << std::endl;
delete[] info_log; // allocated with new[], so it must be freed with delete[]
}
glDeleteShader(shader);
return 0;
}
return shader;
}
int InitGL()
{
char vertex_shader_source[] =
"attribute vec4 att_position; \n"
"attribute float dummy;\n"
"uniform float uni_aspect_ratio; \n"
"void main() \n"
" { \n"
" vec4 test = att_position * dummy;\n"
" mat4 mat_projection = \n"
" mat4(1.0 / uni_aspect_ratio, 0.0, 0.0, 0.0, \n"
" 0.0, 1.0, 0.0, 0.0, \n"
" 0.0, 0.0, -1.0, 0.0, \n"
" 0.0, 0.0, 0.0, 1.0); \n"
" gl_Position = att_position; \n"
" gl_Position *= mat_projection; \n"
" } \n";
char fragment_shader_source[] =
"void main() \n"
" { \n"
" gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); \n"
" } \n";
GLuint vertex_shader;
GLuint fragment_shader;
GLuint program_object;
GLint linked;
vertex_shader = LoadShader(GL_VERTEX_SHADER , vertex_shader_source );
fragment_shader = LoadShader(GL_FRAGMENT_SHADER, fragment_shader_source);
program_object = glCreateProgram();
if(program_object == 0)
return 1;
glAttachShader(program_object, vertex_shader );
glAttachShader(program_object, fragment_shader);
// Here any index except 0 results in observing the black screen
glBindAttribLocation(program_object, 1, "att_position");
glLinkProgram(program_object);
glGetProgramiv(program_object, GL_LINK_STATUS, &linked);
if(!linked)
{
GLint info_len = 0;
glGetProgramiv(program_object, GL_INFO_LOG_LENGTH, &info_len);
if(info_len > 1)
{
char* info_log = new char[info_len];
glGetProgramInfoLog(program_object, info_len, NULL, info_log);
std::cout << "Error linking program" << info_log << std::endl;
delete[] info_log; // allocated with new[], so it must be freed with delete[]
}
glDeleteProgram(program_object);
return 1;
}
global_program_object = program_object;
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glUseProgram(global_program_object);
global_position_location = glGetAttribLocation (global_program_object, "att_position");
global_aspect_ratio_location = glGetUniformLocation(global_program_object, "uni_aspect_ratio");
GLfloat vertices[] = {-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.0f, 0.5f, 0.0f};
glGenBuffers(1, global_buffer_names);
glBindBuffer(GL_ARRAY_BUFFER, global_buffer_names[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 9, vertices, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
return 0;
}
void Render()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glUseProgram(global_program_object);
glBindBuffer(GL_ARRAY_BUFFER, global_buffer_names[0]);
glVertexAttribPointer(global_position_location, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(global_position_location);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableVertexAttribArray(global_position_location);
glUseProgram(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
void FreeGL()
{
glDeleteBuffers(1, global_buffer_names);
glDeleteProgram(global_program_object);
}
void SetViewport(int width, int height)
{
glViewport(0, 0, width, height);
glUseProgram(global_program_object);
glUniform1f(global_aspect_ratio_location, static_cast<GLfloat>(width) / static_cast<GLfloat>(height));
}
int main(void)
{
GLFWwindow* window;
if (!glfwInit())
return -1;
window = glfwCreateWindow(640, 480, "Hello World", NULL, NULL);
if (!window)
{
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window);
InitGL();
// Double the resolution to correctly draw with Retina display
SetViewport(1280, 960);
while (!glfwWindowShouldClose(window))
{
Render();
glfwSwapBuffers(window);
glfwPollEvents();
}
FreeGL();
glfwTerminate();
return 0;
}
Does this look like a bug to you? Can anyone reproduce it? If it's a bug, where should I report it?
P.S.
I've also tried SDL instead of GLFW, the behavior is the same...
The behavior you see is actually correct as per the spec, and MacOSX has something to do with this, but only in a very indirect way.
To answer question 1) first: you are basically correct. With modern GLSL (>= 3.30), you can also specify the desired index via the layout(location=...) qualifier directly in the shader code, instead of using glBindAttribLocation(), but that is only a side note.
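As a sketch of that side note (illustrative, not taken from the question's code), the layout qualifier fixes the location at compile time:

#version 330
// equivalent to calling glBindAttribLocation(program, 0, "att_position") before linking
layout(location = 0) in vec4 att_position;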
The problem you are facing is that you are using a legacy GL context. You do not specify a desired version, so you will get maximum compatibility with the old way. On Windows, you are very likely to get a compatibility profile of the highest version supported by the implementation (typically GL 3.x or GL 4.x on non-ancient GPUs).
However, on OSX, you are limited to at most GL 2.1. And this is where the crux lies: your code is invalid in GL 2.x. To explain this, I have to go back in GL history. In the beginning, there was immediate mode, so you drew via
glBegin(primType);
glColor3f(r,g,b);
glVertex3f(x,y,z);
[...]
glColor3f(r,g,b);
glVertex3f(x,y,z);
glEnd();
Note that the glVertex call is what actually creates a vertex. All other per-vertex attributes are basically some current vertex state which can be set any time, but calling glVertex will take all of those current attributes together with the position to form the vertex which is fed to the pipeline.
Now when vertex arrays were added, we got functions like glVertexPointer(), glColorPointer() and so on, and each attribute array could be enabled or disabled separately via glEnableClientState(). The array-based draw calls are actually defined in terms of the immediate mode in the OpenGL 2.1 specification as glDrawArrays(GLenum mode, GLint first, GLsizei count) being equivalent to
glBegin(mode);
for (i=0; i<count; i++)
ArrayElement(first + i);
glEnd();
with ArrayElement(i) being defined as follows (this one is derived from the wording of the GL 1.5 spec):
if ( normal_array_enabled )
Normal3...( <i-th normal value> );
[...] // similar for all other builtin attribs
if ( vertex_array_enabled)
Vertex...( <i-th vertex value> );
This definition has a subtle consequence: you must have the GL_VERTEX_ARRAY attribute array enabled, otherwise nothing will be drawn, since no equivalents of the glVertex calls are generated.
Now when the generic attributes were added in GL2.0, a special guarantee was made: generic attribute 0 is aliasing the builtin glVertex attribute - so both can be used interchangeably, in immediate mode as well as in arrays. So glVertexAttrib3f(0,x,y,z) "creates" a vertex the same way glVertex3f(x,y,z) would have. And using an array with glEnableVertexAttribArray(0) is as good as glEnableClientState(GL_VERTEX_ARRAY).
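A tiny illustration of that aliasing (immediate mode, purely for exposition):

// emits one point the classic way...
glBegin(GL_POINTS);
glVertex3f(x, y, z);
glEnd();

// ...and the exact same point via generic attribute 0
glBegin(GL_POINTS);
glVertexAttrib3f(0, x, y, z); // attribute 0 aliases the builtin vertex attribute
glEnd();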
In GL 2.1, the ArrayElement(i) function now looks as follows:
if ( normal_array_enabled )
Normal3...( <i-th normal value> );
[...] // similar for all other builtin attribs
for (a=1; a<max_attribs; a++) {
if ( generic_attrib_a_enabled )
glVertexAttrib...(a, <i-th value of attrib a> );
}
if ( generic_attrib_0_enabled)
glVertexAttrib...(0, <i-th value of attrib 0> );
else if ( vertex_array_enabled)
Vertex...( <i-th vertex value> );
Now this is what happens to you. You absolutely need attribute 0 (or the old GL_VERTEX_ARRAY attribute) to be enabled for this to generate any vertices for the pipeline.
Note that it should be possible in theory to just enable attribute 0, no matter if it is used in the shader or not. You should just make sure that the corresponding attrib pointer points to valid memory, to be 100% safe. So you could simply check if your attribute index 0 is used, and if not, just set the same pointer as attribute 0 as you did for your real attribute, and the GL should be happy. But I haven't tried this.
In more modern GL, these requirements are not there anymore, and drawing without attribute 0 will work as intended, and that is what you saw on those other systems. Maybe you should consider switching to modern GL, say >= 3.2 core profile, where the issue will not be present (but you need to update your code a bit, including the shaders).
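Applied to the code in the question, the simplest fix under a GL 2.1 context is therefore the one-line change of binding the position attribute to index 0 instead of 1 (sketch):

// any index except 0 gave a black screen; with 0, vertices are actually generated
glBindAttribLocation(program_object, 0, "att_position");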