OpenGL will not display anything - C++

I'm trying to get a small triangle to display.
Here is my initialization code:
void PlayerInit(Player P1)
{
    glewExperimental = GL_TRUE;
    glewInit();
    //initialize buffers needed to draw the player
    GLuint vao;
    GLuint buffer;
    glGenVertexArrays(1, &vao);
    glGenBuffers(1, &buffer);
    //bind the buffer and vertex objects
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, buffer);
    //Set up a buffer to hold 9 floats for position and 9 floats for color
    glBufferData(GL_ARRAY_BUFFER, sizeof(float)*18, NULL, GL_STATIC_DRAW);
    //push the vertices of the player into the buffer
    glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(float)*9, CalcPlayerPoints(P1.GetPosition()));
    //push the color of each player vertex into the buffer
    glBufferSubData(GL_ARRAY_BUFFER, sizeof(float)*9, sizeof(float)*9, CalcPlayerColor(1));
    //create and compile the vertex/fragment shader objects
    GLuint vs = create_shader("vshader.glsl", GL_VERTEX_SHADER);
    GLuint fs = create_shader("fshader.glsl", GL_FRAGMENT_SHADER);
    //create a program object and link the shaders
    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);
    //error checking for linking
    GLint linked;
    glGetProgramiv(program, GL_LINK_STATUS, &linked);
    if (!linked)
    {
        std::cerr << "Shader program failed to link" << std::endl;
        GLint logSize;
        glGetProgramiv(program, GL_INFO_LOG_LENGTH, &logSize);
        char* logMsg = new char[logSize];
        glGetProgramInfoLog(program, logSize, NULL, logMsg);
        std::cerr << logMsg << std::endl;
        delete [] logMsg;
        exit(EXIT_FAILURE);
    }
    glUseProgram(program);
    //create attributes for color and position to pass to shaders
    //enable each attribute
    GLuint Pos = glGetAttribLocation(program, "vPosition");
    glEnableVertexAttribArray(Pos);
    GLuint Col = glGetAttribLocation(program, "vColor");
    glEnableVertexAttribArray(Col);
    //set a pointer at the proper offset into the buffer for each attribute
    glVertexAttribPointer(Pos, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)(sizeof(float)*0));
    glVertexAttribPointer(Col, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)(sizeof(float)*9));
}
I don't have much experience writing shader loading and linking code, so I thought that might be where the problem is. I do have some error checking in the shader loader, though, and nothing comes up, so I think that part is fine.
Next I have my display and main function:
//display function for the game
void GameDisplay(void)
{
    //set the background color
    glClearColor(1.0, 0.0, 1.0, 1.0);
    //clear the screen
    glClear(GL_COLOR_BUFFER_BIT);
    //Draw the Player
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glutSwapBuffers();
}
//main function
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
    glutInitWindowSize(500, 500);
    glutCreateWindow("Asteroids");
    Player P1 = Player(0.0, 0.0);
    PlayerInit(P1);
    glutDisplayFunc(GameDisplay);
    glutMainLoop();
    return 0;
}
Vertex shader:
attribute vec3 vPosition;
attribute vec3 vColor;
varying vec4 color;
void main()
{
    color = vec4(vColor, 1.0);
    gl_Position = vec4(vPosition, 1.0);
}
Fragment shader:
varying vec4 color;
void main()
{
    gl_FragColor = color;
}
That's all of the relevant code. CalcPlayerPoints just returns a float array of size 9 to hold the triangle coordinates. CalcPlayerColor does something similar.
One last problem that may help with diagnosis: whenever I try to exit the program by closing the application window, I hit a breakpoint inside glutMainLoop; if I close the console window instead, it exits fine.
Edit: I added the shaders for reference.
Edit: I am using OpenGL version 3.1.

Without the shaders, we can't tell whether the faulty code is in the GLSL (bad vertex transformations, etc.).
Have you tried checking glGetError to see whether the problem comes from your initialization code?
Maybe try setting the output of your fragment shader to, say, vec4(1.0, 0.0, 0.0, 1.0) to check whether its normal output is ill-formed.
Your last problem hints at undefined behavior, like a bad memory allocation/deallocation, which may take place in your Player class (by the way, consider passing the object by reference in your initialization code, because at the moment it may trigger a shallow copy and then a double free of some pointer).
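A minimal sketch of such a glGetError check (the helper name and messages are illustrative, not from the original code):
#include <GL/glew.h>
#include <iostream>
//Drain and print every pending GL error; sprinkle calls like
//CheckGLErrors("after glBufferSubData") through PlayerInit to
//narrow down which call fails.
static void CheckGLErrors(const char* where)
{
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
        std::cerr << "GL error 0x" << std::hex << err << std::dec
                  << " " << where << std::endl;
}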

Turns out there was something wrong with how I was returning the array of vertices from the function used to compute them. The rest of the code worked fine after that fix.
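For readers hitting the same symptom: the classic form of this mistake is returning a pointer to a function-local array, which is dangling by the time glBufferSubData reads it. A sketch of the bug and one possible fix (the Position type and the elided values are hypothetical stand-ins):
#include <array>
//Broken: 'points' lives on the stack and dies when the function returns,
//so reading through the returned pointer is undefined behavior.
float* CalcPlayerPointsBroken(Position pos)
{
    float points[9] = { /* triangle coordinates */ };
    return points; //dangling pointer
}
//One fix: return by value in a container the caller owns,
//then hand points.data() to glBufferSubData.
std::array<float, 9> CalcPlayerPoints(Position pos)
{
    std::array<float, 9> points = { /* triangle coordinates */ };
    return points;
}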

OpenGL Won't Render With Shaders Mac

I'm trying to get OpenGL to render a basic triangle with shaders on my newer MacBook Pro with the M1 chip. I'm stuck using Qt Creator as well.
I was able to set it up and get a basic fixed-function pipeline scene rendering, but when I changed over to modern OpenGL, the reported version is 4.1 even though I set it to 3.2.
Here's how I set the version:
QSurfaceFormat format;
format.setDepthBufferSize(24);
format.setStencilBufferSize(8);
format.setProfile(QSurfaceFormat::CoreProfile);
format.setMajorVersion(3);
format.setMinorVersion(2);
QSurfaceFormat::setDefaultFormat(format);
this->setFormat(format); // must be called before the widget or its parent window gets shown
When I print out the shader sources I can see that I'm getting exactly what I put into my vertex shader and fragment shader files, so I know that I'm loading in the files correctly. Here's how I'm using them inside of my Shader class:
programID = glCreateProgram(); // Creates the program ID
vs = glCreateShader(GL_VERTEX_SHADER); // Creates the vertex shader
fs = glCreateShader(GL_FRAGMENT_SHADER); // Creates the fragment shader
// Shader reading code removed...
const char* vSrc = vSource.c_str();
int size = vSource.size();
glShaderSource(vs, 1, &vSrc, &size);
glCompileShader(vs);
int result;
glGetShaderiv(vs, GL_COMPILE_STATUS, &result);
if (result == GL_FALSE)
{
    int length;
    glGetShaderiv(vs, GL_INFO_LOG_LENGTH, &length);
    char* msg = new char[length];
    glGetShaderInfoLog(vs, length, &length, msg);
    std::cout << "Vertex Shader Message:\n" << msg << std::endl;
    exit(1);
}
const char* fSrc = fSource.c_str();
int size2 = fSource.size();
glShaderSource(fs, 1, &fSrc, &size2);
glCompileShader(fs);
int result2;
glGetShaderiv(fs, GL_COMPILE_STATUS, &result2);
if (result2 == GL_FALSE)
{
    int length;
    glGetShaderiv(fs, GL_INFO_LOG_LENGTH, &length);
    char* msg = new char[length];
    glGetShaderInfoLog(fs, length, &length, msg);
    std::cout << "Fragment Shader Message:\n" << msg << std::endl;
    exit(1);
}
glAttachShader(programID, vs);
glAttachShader(programID, fs);
glLinkProgram(programID);
glValidateProgram(programID);
And then in my initOpenGL function I have:
float positions[6] =
{
    -0.5, -0.5,
     0.0,  0.5,
     0.5, -0.5
};
glGenBuffers(1, &currentBuffer); // Generates the buffer
glBindBuffer(GL_ARRAY_BUFFER, currentBuffer); // Binds it before we use it
glBufferData(GL_ARRAY_BUFFER, 6 * sizeof(float), positions, GL_STATIC_DRAW); // Sends the data to OpenGL
glEnableVertexAttribArray(0); // Enables the buffer layout
glBindBuffer(GL_ARRAY_BUFFER, currentBuffer);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(float) * 2, 0); // Tells OpenGL how we stored the data
s = new Shader("main.vsh", "main.fsh");
glUseProgram(s->getShaderID());
Then, I render the data with the following call in my render function:
glDrawArrays(GL_TRIANGLES, 0, 3);
Here is the vertex shader code:
#version 410 core
layout (location = 0) in vec4 position;
void main()
{
    gl_Position = position;
}
Here is the fragment shader code:
#version 410 core
layout (location = 0) out vec4 color;
void main(void)
{
    color = vec4(1.0, 0.0, 0.0, 1.0);
}
This is not a problem with the shaders. However, in a core profile OpenGL context you have to create a Vertex Array Object, since the default VAO (0) is not valid there. This is not optional:
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glEnableVertexAttribArray(0); // Enables the buffer layout
glBindBuffer(GL_ARRAY_BUFFER, currentBuffer);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(float) * 2, 0);
You have to bind the VAO before the vertex specification. The vertex specification is stored in the VAO.
The vertex specification stored in the currently bound VAO is used when drawing a mesh. Hence, you need to make sure the VAO is bound before the drawing command as well:
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);
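Under Qt specifically, an equivalent sketch (assuming Qt 5.1 or later; this is an alternative convenience, not something the fix requires) is the QOpenGLVertexArrayObject wrapper:
#include <QOpenGLVertexArrayObject>
//Needs a current context, e.g. inside the widget's initializeGL.
//Typically a member of the widget class so it stays alive between frames:
QOpenGLVertexArrayObject vao;
vao.create();
vao.bind(); //bind before the vertex specification and again before drawing
// ... glBindBuffer / glVertexAttribPointer / glEnableVertexAttribArray ...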

How to set up shaders in OpenGL

I'm working on code in OpenGL, and I was completing one of the tutorials for a lesson. However, the completed code did not color the triangle. Based on the tutorial, my triangle should come out green, but it keeps turning out white. I think there is an error in the code for my shaders, but I can't seem to find it.
I tried altering the code a few times, and I even moved on to the next tutorial, which shades each vertex. However, my triangle is still coming out white.
#include <iostream> //Includes C++ i/o stream
#include <GL/glew.h> //Includes glew header
#include <GL/freeglut.h> //Includes freeglut header
using namespace std; //Uses the standard namespace
#define WINDOW_TITLE "Modern OpenGL" //Macro for window title
//Vertex and Fragment Shader Source Macro
#ifndef GLSL
#define GLSL(Version, Source) "#version " #Version "\n" #Source
#endif
//Variables for window width and height
int WindowWidth = 800, WindowHeight = 600;
/* User-defined Function prototypes to:
* initialize the program, set the window size,
* redraw graphics on the window when resized,
* and render graphics on the screen
* */
void UInitialize(int, char*[]);
void UInitWindow(int, char*[]);
void UResizeWindow(int, int);
void URenderGraphics(void);
void UCreateVBO(void); //This step is missing from Tutorial 3-3
void UCreateShaders(void);
/*Vertex Shader Program Source Code*/
const GLchar * VertexShader = GLSL(440,
    in layout(location=0) vec4 vertex_Position; //Receive vertex coordinates from attribute 0
    void main(){
        gl_Position = vertex_Position; //Sends vertex positions to gl_Position (vec4)
    }
);
/*Fragment Shader Program Source Code*/
const GLchar * FragmentShader = GLSL(440,
    void main(){
        gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0); //Sets the pixels / fragments of the triangle to green
    }
);
//main function. Entry point to the OpenGL Program
int main(int argc, char* argv[])
{
    UInitialize(argc, argv); //Initialize the OpenGL program
    glutMainLoop(); //Starts the OpenGL loop in the background
    exit(EXIT_SUCCESS); //Terminates the program successfully
}
//Implements the UInitialize function
void UInitialize(int argc, char* argv[])
{
    //glew status variable
    GLenum GlewInitResult;
    UInitWindow(argc, argv); //Creates the window
    //Checks glew status
    GlewInitResult = glewInit();
    if (GLEW_OK != GlewInitResult)
    {
        fprintf(stderr, "Error: %s\n", glewGetErrorString(GlewInitResult));
        exit(EXIT_FAILURE);
    }
    //Displays GPU OpenGL version
    fprintf(stdout, "INFO: OpenGL Version: %s\n", glGetString(GL_VERSION));
    UCreateVBO(); //Calls the function to create the Vertex Buffer Object
    UCreateShaders(); //Calls the function to create the Shader Program
    //Sets the background color of the window to black. Optional
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
}
//Implements the UInitWindow function
void UInitWindow(int argc, char* argv[])
{
    //Initializes freeglut
    glutInit(&argc, argv);
    //Sets the window size
    glutInitWindowSize(WindowWidth, WindowHeight);
    //Memory buffer setup for display
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
    //Creates a window with the macro placeholder title
    glutCreateWindow(WINDOW_TITLE);
    glutReshapeFunc(UResizeWindow); //Called when the window is resized
    glutDisplayFunc(URenderGraphics); //Renders graphics on the screen
}
//Implements the UResizeWindow function
void UResizeWindow(int Width, int Height)
{
    glViewport(0, 0, Width, Height);
}
//Implements the URenderGraphics function
void URenderGraphics(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); //Clears the screen
    /*Creates the triangle*/
    GLuint totalVertices = 3; //Specifies the number of vertices for the triangle i.e. 3
    glDrawArrays(GL_TRIANGLES, 0, totalVertices); //Draws the triangle
    glutSwapBuffers(); //Flips the back buffer with the front buffer every frame. Similar to GL Flush
}
//Implements the CreateVBO function
void UCreateVBO(void)
{
    //Specifies coordinates for triangle vertices on x and y
    GLfloat verts[] =
    {
         0.0f,  1.0f, //top-center of the screen
        -1.0f, -1.0f, //bottom-left of the screen
         1.0f, -1.0f  //bottom-right of the screen
    };
    //Stores the byte size of the verts array (the 6 coordinates needed for the triangle)
    float numVertices = sizeof(verts);
    GLuint myBufferID; //Variable for vertex buffer object id
    glGenBuffers(1, &myBufferID); //Creates 1 buffer
    glBindBuffer(GL_ARRAY_BUFFER, myBufferID); //Activates the buffer
    glBufferData(GL_ARRAY_BUFFER, numVertices, verts, GL_STATIC_DRAW); //Sends vertex or coordinate data to the GPU
    /*Creates the Vertex Attribute Pointer*/
    GLuint floatsPerVertex = 2; //Number of coordinates per vertex
    glEnableVertexAttribArray(0); //Specifies the initial position of the coordinates in the buffer
    /*Instructs the GPU on how to handle the vertex buffer object data.
     *Parameters: attribPointerPosition | coordinates per vertex | data type | deactivate normalization | 0 strides | 0 offset
     */
    glVertexAttribPointer(0, floatsPerVertex, GL_FLOAT, GL_FALSE, 0, 0);
}
//Implements the UCreateShaders function
void UCreateShaders(void)
{
    //Create a shader program object
    GLuint ProgramId = glCreateProgram();
    GLuint vertexShaderId = glCreateShader(GL_VERTEX_SHADER); //Create a Vertex Shader Object
    GLuint fragmentShaderId = glCreateShader(GL_FRAGMENT_SHADER); //Create a Fragment Shader Object
    glShaderSource(vertexShaderId, 1, &VertexShader, NULL); //Retrieves the vertex shader source code
    glShaderSource(fragmentShaderId, 1, &FragmentShader, NULL); //Retrieves the fragment shader source code
    glCompileShader(vertexShaderId); //Compile the vertex shader
    glCompileShader(fragmentShaderId); //Compile the fragment shader
    //Attaches the vertex and fragment shaders to the shader program
    glAttachShader(ProgramId, vertexShaderId);
    glAttachShader(ProgramId, fragmentShaderId);
    glLinkProgram(ProgramId); //Links the shader program
    glUseProgram(ProgramId); //Uses the shader program
}
When completed correctly, the code should result in a solid green triangle.
The variable gl_FragColor is unavailable in the GLSL 4.4 core profile: it was deprecated and later removed from core. Because you don't specify a compatibility profile, the default, core, is assumed. Either use
#version 440 compatibility
for your shaders, or, even better, use the modern explicit-output approach:
#version 440 core
layout(location = 0) out vec4 OUT;
void main(){
    OUT = vec4(0.0, 1.0, 0.0, 1.0);
}
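For reference, this drops straight into the question's GLSL stringizing macro: since #Version stringizes its whole first argument, passing 440 core produces the "#version 440 core" line. A sketch under that assumption (the output name fragColor is an arbitrary choice):
/*Fragment Shader Program Source Code*/
const GLchar * FragmentShader = GLSL(440 core,
    layout(location = 0) out vec4 fragColor;
    void main(){
        fragColor = vec4(0.0, 1.0, 0.0, 1.0); //Sets the fragments to green
    }
);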

OpenGL -- glGenVertexArrays, "thread 1: exc_bad_access (code =1, address=0x0)"

I am writing an OpenGL program (C++) which draws a ground plane with two 3D objects above it. The tool I use is Xcode 8.0 (8A218a) on OS X 10.11.6.
My code (main.cpp):
#include <GL/glew.h>
#include <GL/freeglut.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <iostream>
#include <fstream>
using namespace std;
using glm::vec3;
using glm::mat4;
GLint programID;
//initialize all OpenGL objects
GLuint groundVAO, groundVBO, groundEBO; //ground
bool checkStatus( //OK
    GLuint objectID,
    PFNGLGETSHADERIVPROC objectPropertyGetterFunc,
    PFNGLGETSHADERINFOLOGPROC getInfoLogFunc,
    GLenum statusType)
{
    GLint status;
    objectPropertyGetterFunc(objectID, statusType, &status);
    if (status != GL_TRUE)
    {
        GLint infoLogLength;
        objectPropertyGetterFunc(objectID, GL_INFO_LOG_LENGTH, &infoLogLength);
        GLchar* buffer = new GLchar[infoLogLength];
        GLsizei bufferSize;
        getInfoLogFunc(objectID, infoLogLength, &bufferSize, buffer);
        cout << buffer << endl;
        delete[] buffer;
        return false;
    }
    return true;
}
bool checkShaderStatus(GLuint shaderID) //OK
{
    return checkStatus(shaderID, glGetShaderiv, glGetShaderInfoLog, GL_COMPILE_STATUS);
}
bool checkProgramStatus(GLuint programID) //OK
{
    return checkStatus(programID, glGetProgramiv, glGetProgramInfoLog, GL_LINK_STATUS);
}
string readShaderCode(const char* fileName) //OK
{
    ifstream meInput(fileName);
    if (!meInput.good())
    {
        cout << "File failed to load..." << fileName;
        exit(1);
    }
    return std::string(
        std::istreambuf_iterator<char>(meInput),
        std::istreambuf_iterator<char>()
    );
}
void installShaders() //OK
{
    GLuint vertexShaderID = glCreateShader(GL_VERTEX_SHADER);
    GLuint fragmentShaderID = glCreateShader(GL_FRAGMENT_SHADER);
    const GLchar* adapter[1];
    //adapter[0] = vertexShaderCode;
    string temp = readShaderCode("VertexShaderCode.glsl");
    adapter[0] = temp.c_str();
    glShaderSource(vertexShaderID, 1, adapter, 0);
    //adapter[0] = fragmentShaderCode;
    temp = readShaderCode("FragmentShaderCode.glsl");
    adapter[0] = temp.c_str();
    glShaderSource(fragmentShaderID, 1, adapter, 0);
    glCompileShader(vertexShaderID);
    glCompileShader(fragmentShaderID);
    if (!checkShaderStatus(vertexShaderID) ||
        !checkShaderStatus(fragmentShaderID))
        return;
    programID = glCreateProgram();
    glAttachShader(programID, vertexShaderID);
    glAttachShader(programID, fragmentShaderID);
    glLinkProgram(programID);
    if (!checkProgramStatus(programID))
        return;
    glDeleteShader(vertexShaderID);
    glDeleteShader(fragmentShaderID);
    glUseProgram(programID);
}
void keyboard(unsigned char key, int x, int y)
{
    //TODO:
}
void sendDataToOpenGL()
{
    //TODO:
    //create solid objects here and bind to VAO & VBO
    //Ground, vertices info
    const GLfloat Ground[]
    {
        -5.0f, +0.0f, -5.0f, //0
        +0.498f, +0.898, +0.0f, //grass color
        +5.0f, +0.0f, -5.0f, //1
        +0.498f, +0.898, +0.0f,
        +5.0f, +0.0f, +5.0f, //2
        +0.498f, +0.898, +0.0f,
        -5.0f, +0.0f, +5.0f
    };
    GLushort groundIndex[] = {1,2,3, 1,0,3};
    //Pass ground to vertexShader
    //VAO
    glGenVertexArrays(1, &groundVAO);
    glBindVertexArray(groundVAO);
    //VBO
    glGenBuffers(1, &groundVBO);
    glBindBuffer(GL_ARRAY_BUFFER, groundVBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(Ground), Ground, GL_STATIC_DRAW);
    //EBO
    glGenBuffers(1, &groundEBO);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, groundEBO);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(groundIndex), groundIndex, GL_STATIC_DRAW);
    //connectToVertexShader
    glEnableVertexAttribArray(0); //position
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(float) * 6, 0);
    glEnableVertexAttribArray(1); //color
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(float) * 6, (char*)(sizeof(float)*3));
}
void paintGL(void)
{
    //TODO:
    //render your objects and control the transformation here
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    //translate model
    glm::mat4 modelTransformMatrix = glm::translate(glm::mat4(), vec3(+0.0f, +0.0f, -3.0f));
    //perspective view
    glm::mat4 projectionMatrix = glm::perspective(+40.0f, +1.0f, +1.0f, +60.0f);
    //ultimate matrix
    glm::mat4 ultimateMatrix;
    //register location on the graphics card
    GLint ultimateMatrixUniformLocation = glGetUniformLocation(programID, "ultimateMatrix");
    /*GLint modelTransformMatrixUniformLocation = glGetUniformLocation(programID, "modelTransformMatrix");
    GLint projectionMatrixUniformLocation = glGetUniformLocation(programID, "projectionMatrix");*/
    //drawing the ground
    /*glUniformMatrix4fv(modelTransformMatrixUniformLocation, 1, GL_FALSE, &modelTransformMatrix[0][0]);
    glUniformMatrix4fv(projectionMatrixUniformLocation, 1, GL_FALSE, &projectionMatrix[0][0]);*/
    glBindVertexArray(groundVAO);
    ultimateMatrix = projectionMatrix * modelTransformMatrix;
    glUniformMatrix4fv(ultimateMatrixUniformLocation, 1, GL_FALSE, &ultimateMatrix[0][0]);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
    glFlush();
    glutPostRedisplay();
}
void initializedGL(void) //run only once
{
    glewInit();
    glEnable(GL_DEPTH_TEST);
    sendDataToOpenGL();
    installShaders();
}
int main(int argc, char *argv[])
{
    /*Initialization*/
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutCreateWindow("Try");
    glutInitWindowSize(700, 700);
    //const GLubyte* glversion = glGetString(GL_VERSION);
    /*Register different CALLBACK functions for GLUT to respond
    to different events, e.g. window sizing, mouse click or
    keyboard stroke */
    initializedGL();
    //glewExperimental = GL_TRUE;
    glutDisplayFunc(paintGL);
    glutKeyboardFunc(keyboard);
    /*Enter the GLUT event processing loop which never returns.
    It will call the different registered CALLBACKs according
    to different events. */
    //printf("OpenGL ver: %s\n", glversion);
    glutMainLoop();
    return 0;
}
VertexShaderCode.glsl:
#version 430 // GLSL version your computer supports
in layout(location=0) vec3 position;
in layout(location=1) vec3 vertexColor;
uniform mat4 ultimateMatrix;
out vec3 theColor;
void main()
{
    vec4 v = vec4(position, 1.0);
    gl_Position = ultimateMatrix * v;
    theColor = vertexColor;
}
FragmentShaderCode.glsl:
#version 430 //GLSL version your computer supports
out vec4 theColor2;
in vec3 theColor;
void main()
{
    theColor2 = vec4(theColor, 1.0);
}
The functions checkStatus, checkShaderStatus, checkProgramStatus, readShaderCode, and installShaders should all be fine.
void keyboard() can be ignored since I haven't implemented it yet (it's just for keyboard control).
I implemented the object "Ground" in sendDataToOpenGL(). But when I compiled and ran the program, "thread 1: exc_bad_access (code=1, address=0x0)" occurred on the glGenVertexArrays line of the VAO setup.
And the pop-up window is just a white screen instead of the green grass (3D).
I have tried a method that was provided in another Stack Overflow post: using glewExperimental = GL_TRUE;. I didn't see any errors with that, but the window vanished immediately after it appeared, so it didn't help the problem.
Can someone give me a hand? Thank you!
glGenVertexArrays has been available since OpenGL 3.0. Whether vertex array objects are supported can be checked with glewGetExtension("GL_ARB_vertex_array_object").
GLEW can expose additional extensions via glewExperimental = GL_TRUE;. See the GLEW documentation, which says:
GLEW obtains information on the supported extensions from the graphics driver. Experimental or pre-release drivers, however, might not report every available extension through the standard mechanism, in which case GLEW will report it unsupported. To circumvent this situation, the glewExperimental global switch can be turned on by setting it to GL_TRUE before calling glewInit(), which ensures that all extensions with valid entry points will be exposed.
Add this to your code:
glewExperimental = GL_TRUE;
glewInit();
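A slightly fuller sketch of that fix inside the question's initializedGL, with the usual error checks added (the extension query is optional; glewGetErrorString and glewGetExtension are standard GLEW calls):
void initializedGL(void) //run only once
{
    glewExperimental = GL_TRUE; //expose all extensions with valid entry points
    GLenum err = glewInit();
    if (err != GLEW_OK)
    {
        cout << "glewInit failed: " << glewGetErrorString(err) << endl;
        exit(1);
    }
    //Optional sanity check before creating VAOs
    if (!glewGetExtension("GL_ARB_vertex_array_object"))
        cout << "VAOs are not reported as supported" << endl;
    glEnable(GL_DEPTH_TEST);
    sendDataToOpenGL();
    installShaders();
}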

OpenGL: Can't draw vertices generated by a compute shader stored in SSBO

For a few days now, I've been struggling with compute shaders and buffers. I've looked through multiple examples of their usage, such as in the OpenGL Red Book, 8th edition, and in the threads "OpenGL vertices in shader storage buffer" and "OpenGL Compute Shader SSBO," but I can't seem to get this working.
I'm trying to make, for now, a simple program that invokes a compute shader to generate vertices, stores those vertices in an SSBO, passes them to a vertex shader, and then goes through the rest of the pipeline to draw a single, stationary image, which in this case is just a line. However, after compiling the program, only a dot at (0,0) is shown.
The compute shader:
#version 430 core
layout(std430, binding = 0) buffer data
{
    float coordinate_array[200][3];
};
layout(local_size_x=20, local_size_y=1, local_size_z=1) in;
void main(void)
{
    uint x = gl_GlobalInvocationID.x;
    if (x%2 == 0)
    {
        coordinate_array[x][0] = -.5;
        coordinate_array[x][0] = -.5;
        coordinate_array[x][0] = 0;
    }
    else
    {
        coordinate_array[x][0] = .5;
        coordinate_array[x][0] = .5;
        coordinate_array[x][0] = 0;
    }
}
Program:
char *source;
GLuint vertexShader, fragmentShader, computeShader;
GLuint program, computeShaderProgram;
GLuint computeShaderBuffer;
numVertices = 200;
dimension = 3;
size = sizeof(float**)*numVertices*dimension;
//Create the buffer the compute shader will write to
glGenBuffers(1, &computeShaderBuffer);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, computeShaderBuffer);
glBufferData(GL_SHADER_STORAGE_BUFFER, size, NULL, GL_STATIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, computeShaderBuffer);
//Create the compute shader
computeShader = glCreateShader(GL_COMPUTE_SHADER);
source = getSource(computeShaderSrc);
glShaderSource(computeShader, 1, &source, NULL);
glCompileShader(computeShader);
//Create and dispatch the compute shader program
computeShaderProgram = glCreateProgram();
glAttachShader(computeShaderProgram, computeShader);
glLinkProgram(computeShaderProgram);
glUseProgram(computeShaderProgram);
glDispatchCompute(numVertices/20, 1, 1);
//Generate the vertex array object
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO);
//Use buffer as a set of vertices
glMemoryBarrier(GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT);
glBindBuffer(GL_ARRAY_BUFFER, computeShaderBuffer);
//Create the vertex shader
vertexShader = glCreateShader(GL_VERTEX_SHADER);
source = getSource(vertexShaderSource);
glShaderSource(vertexShader, 1, &source, NULL);
glCompileShader(vertexShader);
//Create the fragment shader
fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
source = getSource(fragmentShaderSource);
glShaderSource(fragmentShader, 1, &source, NULL);
glCompileShader(fragmentShader);
//Create the program to draw
program = glCreateProgram();
glAttachShader(program, vertexShader);
glAttachShader(program, fragmentShader);
glLinkProgram(program);
glUseProgram(program);
//Enable the vertex array
glVertexAttribPointer(0, dimension, GL_FLOAT, 0, 0, (void*)0);
glEnableVertexAttribArray(0);
//Draw it
glClear(GL_COLOR_BUFFER_BIT);
glBindVertexArray(VAO);
glDrawArrays(GL_POINTS, 0, numVertices);
glFlush();
Vertex Shader:
#version 430 core
layout(location = 0) in vec4 vPosition;
void main()
{
    gl_Position = vPosition;
}
In this sample, I try to use an SSBO to hold the 2 endpoints of a line segment (-.5, -.5) (.5, .5). The shader storage buffer holds this segment 100 times instead of just once; I thought there was a dispatch minimum I wasn't meeting.
If I call glGetError after working with either program, it returns GL_NO_ERROR. Also, the shaders compile successfully and both programs link without error. It does, however, warn me that the 420pack extension is required in the compute shader for some reason; adding an #extension directive for ARB_420pack (not my exact wording) didn't get rid of the warning. My drivers are up to date and I have a Radeon R9 270x (OpenGL 4.4). I'm using MinGW and C to create this program, but I do know some C++.
Does anyone have any ideas on what I'm doing wrong?
Does this seem right in your CS:
if (x%2 == 0)
{
    coordinate_array[x][0] = -.5;
    coordinate_array[x][0] = -.5;
    coordinate_array[x][0] = 0;
}
else
{
    coordinate_array[x][0] = .5;
    coordinate_array[x][0] = .5;
    coordinate_array[x][0] = 0;
}
Look carefully at the indices you assign to: it's always [x][0]. Clearly you want at least one index to vary.
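In other words, the second index should step through the x, y and z components. A corrected sketch of that block, assuming the intent is to write the endpoints (-.5, -.5, 0) and (.5, .5, 0):
if (x%2 == 0)
{
    coordinate_array[x][0] = -.5; // x
    coordinate_array[x][1] = -.5; // y
    coordinate_array[x][2] = 0;   // z
}
else
{
    coordinate_array[x][0] = .5;
    coordinate_array[x][1] = .5;
    coordinate_array[x][2] = 0;
}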

Meaning of "index" parameter in glEnableVertexAttribArray and (possibly) a bug in the OS X OpenGL implementation

1) Do I understand correctly that to draw using vertex arrays or VBOs I need, for all my attributes, to either call glBindAttribLocation before linking the shader program or call glGetAttribLocation after the shader program was successfully linked, and then use the bound/obtained index in the glVertexAttribPointer and glEnableVertexAttribArray calls?
To be more specific: these three functions - glGetAttribLocation, glVertexAttribPointer and glEnableVertexAttribArray - all have an input parameter named "index". Is it the same "index" for all three? And is it the same thing as the one returned by glGetAttribLocation?
If yes:
2) I've been facing a problem on OS X, which I described here: https://stackoverflow.com/questions/28093919/using-default-attribute-location-doesnt-work-on-osx-osx-opengl-bug , but unfortunately didn't get any replies.
The problem is that, depending on which locations I bind my attributes to, I do or do not see anything on the screen. I only see this behavior on my MacBook Pro with OS X 10.9.5; I've tried running the same code on Linux and Windows, and it seems to work on those platforms independently of which locations my attributes are bound to.
Here is a code example (which is supposed to draw a red triangle on the screen) that exhibits the problem:
#include <iostream>
#include <GLFW/glfw3.h>
GLuint global_program_object;
GLint global_position_location;
GLint global_aspect_ratio_location;
GLuint global_buffer_names[1];
int LoadShader(GLenum type, const char *shader_source)
{
    GLuint shader;
    GLint compiled;
    shader = glCreateShader(type);
    if (shader == 0)
        return 0;
    glShaderSource(shader, 1, &shader_source, NULL);
    glCompileShader(shader);
    glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
    if (!compiled)
    {
        GLint info_len = 0;
        glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &info_len);
        if (info_len > 1)
        {
            char* info_log = new char[info_len];
            glGetShaderInfoLog(shader, info_len, NULL, info_log);
            std::cout << "Error compiling shader" << info_log << std::endl;
            delete[] info_log;
        }
        glDeleteShader(shader);
        return 0;
    }
    return shader;
}
int InitGL()
{
    char vertex_shader_source[] =
        "attribute vec4 att_position; \n"
        "attribute float dummy;\n"
        "uniform float uni_aspect_ratio; \n"
        "void main() \n"
        " { \n"
        " vec4 test = att_position * dummy;\n"
        " mat4 mat_projection = \n"
        " mat4(1.0 / uni_aspect_ratio, 0.0, 0.0, 0.0, \n"
        " 0.0, 1.0, 0.0, 0.0, \n"
        " 0.0, 0.0, -1.0, 0.0, \n"
        " 0.0, 0.0, 0.0, 1.0); \n"
        " gl_Position = att_position; \n"
        " gl_Position *= mat_projection; \n"
        " } \n";
    char fragment_shader_source[] =
        "void main() \n"
        " { \n"
        " gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); \n"
        " } \n";
    GLuint vertex_shader;
    GLuint fragment_shader;
    GLuint program_object;
    GLint linked;
    vertex_shader = LoadShader(GL_VERTEX_SHADER, vertex_shader_source);
    fragment_shader = LoadShader(GL_FRAGMENT_SHADER, fragment_shader_source);
    program_object = glCreateProgram();
    if (program_object == 0)
        return 1;
    glAttachShader(program_object, vertex_shader);
    glAttachShader(program_object, fragment_shader);
    // Here any index except 0 results in observing the black screen
    glBindAttribLocation(program_object, 1, "att_position");
    glLinkProgram(program_object);
    glGetProgramiv(program_object, GL_LINK_STATUS, &linked);
    if (!linked)
    {
        GLint info_len = 0;
        glGetProgramiv(program_object, GL_INFO_LOG_LENGTH, &info_len);
        if (info_len > 1)
        {
            char* info_log = new char[info_len];
            glGetProgramInfoLog(program_object, info_len, NULL, info_log);
            std::cout << "Error linking program" << info_log << std::endl;
            delete[] info_log;
        }
        glDeleteProgram(program_object);
        return 1;
    }
    global_program_object = program_object;
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glUseProgram(global_program_object);
    global_position_location = glGetAttribLocation(global_program_object, "att_position");
    global_aspect_ratio_location = glGetUniformLocation(global_program_object, "uni_aspect_ratio");
    GLfloat vertices[] = {-0.5f, -0.5f, 0.0f,
                           0.5f, -0.5f, 0.0f,
                           0.0f,  0.5f, 0.0f};
    glGenBuffers(1, global_buffer_names);
    glBindBuffer(GL_ARRAY_BUFFER, global_buffer_names[0]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 9, vertices, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    return 0;
}
void Render()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    glUseProgram(global_program_object);
    glBindBuffer(GL_ARRAY_BUFFER, global_buffer_names[0]);
    glVertexAttribPointer(global_position_location, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(global_position_location);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableVertexAttribArray(global_position_location);
    glUseProgram(0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
void FreeGL()
{
    glDeleteBuffers(1, global_buffer_names);
    glDeleteProgram(global_program_object);
}
void SetViewport(int width, int height)
{
    glViewport(0, 0, width, height);
    glUseProgram(global_program_object);
    glUniform1f(global_aspect_ratio_location, static_cast<GLfloat>(width) / static_cast<GLfloat>(height));
}
int main(void)
{
    GLFWwindow* window;
    if (!glfwInit())
        return -1;
    window = glfwCreateWindow(640, 480, "Hello World", NULL, NULL);
    if (!window)
    {
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(window);
    InitGL();
    // Double the resolution to correctly draw with Retina display
    SetViewport(1280, 960);
    while (!glfwWindowShouldClose(window))
    {
        Render();
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    FreeGL();
    glfwTerminate();
    return 0;
}
Does this look like a bug to you? Can anyone reproduce it? If it's a bug, where should I report it?
P.S.
I've also tried SDL instead of GLFW; the behavior is the same...
The behavior you see is actually correct as per the spec, and Mac OS X has something to do with it, but only in a very indirect way.
To answer question 1) first: you are basically correct. With modern GLSL (>= 3.30), you can also specify the desired index via the layout(location=...) qualifier directly in the shader code, instead of using glBindAttribLocation(), but that is only a side note.
The problem you are facing is that you are using a legacy GL context. You do not specify a desired version, so you get maximum compatibility with the old way. On Windows, you are very likely to get a compatibility profile of the highest version supported by the implementation (typically GL 3.x or 4.x on non-ancient GPUs).
However, on OS X, you are then limited to at most GL 2.1. And this is where the crux lies: your code is invalid in GL 2.x. To explain this, I have to go back in GL history. In the beginning there was immediate mode, where you drew like this:
glBegin(primType);
glColor3f(r,g,b);
glVertex3f(x,y,z);
[...]
glColor3f(r,g,b);
glVertex3f(x,y,z);
glEnd();
Note that the glVertex call is what actually creates a vertex. All other per-vertex attributes are basically some current vertex state which can be set any time, but calling glVertex will take all of those current attributes together with the position to form the vertex which is fed to the pipeline.
Now when vertex arrays were added, we got functions like glVertexPointer(), glColorPointer() and so on, and each attribute array could be enabled or disabled separately via glEnableClientState(). The array-based draw calls are actually defined in terms of the immediate mode in the OpenGL 2.1 specification as glDrawArrays(GLenum mode, GLint first, GLsizei count) being equivalent to
glBegin(mode);
for (i=0; i<count; i++)
ArrayElement(first + i);
glEnd();
with ArrayElement(i) being defined as follows (derived from the wording of the GL 1.5 spec):
if (normal_array_enabled)
    Normal3...( <i-th normal value> );
[...] // similar for all other builtin attribs
if (vertex_array_enabled)
    Vertex...( <i-th vertex value> );
This definition has a subtle consequence: you must have the GL_VERTEX_ARRAY attribute array enabled, otherwise nothing is drawn, since no equivalent of the glVertex calls is generated.
Now when the generic attributes were added in GL 2.0, a special guarantee was made: generic attribute 0 aliases the builtin glVertex attribute, so both can be used interchangeably, in immediate mode as well as in arrays. So glVertexAttrib3f(0, x, y, z) "creates" a vertex the same way glVertex3f(x, y, z) would have, and using an array with glEnableVertexAttribArray(0) is as good as glEnableClientState(GL_VERTEX_ARRAY).
In GL 2.1, the ArrayElement(i) function looks as follows:
if (normal_array_enabled)
    Normal3...( <i-th normal value> );
[...] // similar for all other builtin attribs
for (a = 1; a < max_attribs; a++) {
    if (generic_attrib_a_enabled)
        glVertexAttrib...(a, <i-th value of attrib a> );
}
if (generic_attrib_0_enabled)
    glVertexAttrib...(0, <i-th value of attrib 0> );
else if (vertex_array_enabled)
    Vertex...( <i-th vertex value> );
Now this is what happens to you: you absolutely need attribute 0 (or the old GL_VERTEX_ARRAY attribute) to be enabled for this to generate any vertices for the pipeline.
Note that in theory it should be possible to just enable attribute 0, no matter whether it is used in the shader or not. You should just make sure that the corresponding attrib pointer points to valid memory, to be 100% safe. So you could simply check whether your attribute index 0 is used, and if not, set the same pointer for attribute 0 as you did for your real attribute, and the GL should be happy. But I haven't tried this.
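A sketch of that untested idea, applied to the question's Render() function so that index 0 points at the same valid memory as the real attribute:
//Untested workaround sketch: alias attribute 0 onto the same buffer data as
//the real attribute, so the "attribute 0 creates the vertex" rule is satisfied.
glBindBuffer(GL_ARRAY_BUFFER, global_buffer_names[0]);
glVertexAttribPointer(global_position_location, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(global_position_location);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
glDrawArrays(GL_TRIANGLES, 0, 3);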
In more modern GL, these requirements are gone, and drawing without attribute 0 works as intended; that is what you saw on those other systems. Maybe you should consider switching to modern GL, say a >= 3.2 core profile, where the issue is not present (but you will need to update your code a bit, including the shaders).
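With GLFW, which the question already uses, requesting such a context could look roughly like this in place of the plain glfwCreateWindow call (the forward-compat hint is what OS X requires for core contexts):
//Sketch: request a 3.2 core profile context instead of the legacy default.
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); //mandatory on OS X
window = glfwCreateWindow(640, 480, "Hello World", NULL, NULL);
Remember that the shaders then need a matching #version line (e.g. 150 core) and in/out qualifiers instead of attribute/varying.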