OpenGL Odd Row Fragment Shader Error: Not drawing - opengl

I'm writing an application that renders every frame in interleaved stereoscopic 3D. To make this happen, I am writing two fragment shaders: one to render the odd rows of the left eye's frame, and one to render the even rows of the right eye's frame.
I was using OSX's built-in OpenGL Shader Builder application, and I was able to successfully render every odd row as green:
As you can see, the fragment shader I'm using looks like this:
void main(){
    if ( mod(gl_FragCoord.y - 0.5, 2.0) == 1.0 ){
        gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
    }
}
However, I wrote a small OpenGL application to test this shader (by the way, this is NVIDIA OpenGL 2.1 on OSX 10.6.8):
#include <iostream>
#include <stdio.h>
#ifdef __APPLE__
#include <OpenGL/gl.h>
#include <OpenGL/glu.h>
#include <GLUT/glut.h>
#endif
void DrawGLScene(){
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
glFlush();
glutSwapBuffers();
}
void Shading(){
//Fragment shader we want to use
GLuint oddRowShaderId = glCreateShader(GL_FRAGMENT_SHADER);
std::cout << "Creating the fragment shader with id " << oddRowShaderId << std::endl;
const GLchar *source[] =
{ "void main(){ \n",
" if (mod(gl_FragCoord.y-0.5, 2.0) == 0.0){\n",
" gl_FragColor = vec4( 0.0, 1.0, 0.0, 1.0 );\n",
//" gl_BackColor = vec4( 0.0, 1.0, 0.0, 1.0 );\n"
" }\n",
"}\n"
};
std::cout << "Shader source:\n" << source[0] << source[1] << source[2] << source[3] < < source[4] << std::endl;
std::cout << "Gathering shader source code" << std::endl;
glShaderSource(oddRowShaderId, 1, source, 0);
std::cout << "Compiling the shader" << std::endl;
glCompileShader(oddRowShaderId);
std::cout << "Creating new glCreateProgram() program" << std::endl;
GLuint shaderProgramId = glCreateProgram(); //Shader program id
std::cout << "Attaching shader to the new program" << std::endl;
glAttachShader(shaderProgramId, oddRowShaderId); //Add the fragment shader to the program
std::cout << "Linking the program " << std::endl;
glLinkProgram(shaderProgramId); //Link the program
std::cout << "Using the shader program for rendering" << std::endl;
glUseProgram(shaderProgramId); //Start using the shader
}
void keyboard(int key, int x, int y){
switch(key){
case 's':
Shading();
break;
case 'q':
exit(0);
break;
}
}
void idleFunc(){
glutPostRedisplay(); //Redraw the scene.
}
int main(int argc, char** argv){
glutInit(&argc, argv);
glutInitWindowPosition(100, 100);
glutInitWindowSize(1000,1000);
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH);
glutCreateWindow("anaglyph test");
glutDisplayFunc(DrawGLScene);
glutSpecialFunc(keyboard);
glutIdleFunc(idleFunc);
glutMainLoop();
}
This is the output I get from running the code:
I have a feeling I may not be compiling the fragment shader or calling glUseProgram correctly.

You can save some performance, gain some compatibility, and spare yourself some headache by using a 2-pixel texture mask (even pixels black, odd pixels white) and tiling it on top of full-screen quads (you will have two of them: one for the right eye and one for the left eye). Use an inverted mask, or apply a coordinate offset on the second quad. In general it will look like this:
(perspective and huge gaps added for visualization)
Put the scene texture (or whatever you have) on the quad and use the mask color to discard fragments or control transparency. To avoid using two g-buffers (GPU memory), you can render the full scene for one eye into a texture and the other eye's scene into the view, and on top of it render a full-screen quad with the first scene's texture and the transparency mask.
Or render them separately if you need two outputs.
The main point is that you should use masking instead of per-fragment logic based on coordinates.
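For example, a minimal sketch of the masking idea (hypothetical names: u_mask is the tiled mask texture, u_scene is that eye's rendered scene, and v_texcoord is assumed to come from the quad's vertex shader) could be a fragment shader like this:
uniform sampler2D u_mask;   // tiled mask: even rows black, odd rows white
uniform sampler2D u_scene;  // this eye's rendered scene
varying vec2 v_texcoord;
void main(){
    // Keep this fragment only where the mask is white; the other eye's quad
    // uses an inverted mask (or a one-pixel coordinate offset) so the rows interleave.
    if (texture2D(u_mask, v_texcoord).r < 0.5)
        discard;
    gl_FragColor = texture2D(u_scene, v_texcoord);
}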

Related

Memory barrier problems for writing and reading an image OpenGL

I'm having a problem trying to read an image from a fragment shader. First I write into the image in shader program A (I'm just painting blue on the image), then I read from another shader program B to display the image, but the reading part is not getting the right color: I'm getting a black image.
Unexpected result
This is my application code:
void GLAPIENTRY MessageCallback(GLenum source, GLenum type, GLuint id, GLenum severity, GLsizei length, const GLchar* message, const void* userParam)
{
std::cout << "GL CALLBACK: type = " << std::hex << type << ", severity = " << std::hex << severity << ", message = " << message << "\n"
<< (type == GL_DEBUG_TYPE_ERROR ? "** GL ERROR **" : "") << std::endl;
}
class ImgRW
: public Core
{
public:
ImgRW()
: Core(512, 512, "JFAD")
{}
virtual void Start() override
{
glEnable(GL_DEBUG_OUTPUT);
glDebugMessageCallback(MessageCallback, nullptr);
shader_w = new Shader("w_img.vert", "w_img.frag");
shader_r = new Shader("r_img.vert", "r_img.frag");
glGenTextures(1, &space);
glBindTexture(GL_TEXTURE_2D, space);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA32F, 512, 512);
glBindImageTexture(0, space, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
glGenVertexArrays(1, &vertex_array);
glBindVertexArray(vertex_array);
}
virtual void Update() override
{
shader_w->use(); // writing shader
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT | GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
shader_r->use(); // reading shader
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
virtual void End() override
{
delete shader_w;
delete shader_r;
glDeleteTextures(1, &space);
glDeleteVertexArrays(1, &vertex_array);
}
private:
Shader* shader_w;
Shader* shader_r;
GLuint vertex_array;
GLuint space;
};
#if 1
CORE_MAIN(ImgRW)
#endif
and these are my fragment shaders:
Writing to image
Code glsl:
#version 430 core
layout (binding = 0, rgba32f) uniform image2D img;
out vec4 out_color;
void main()
{
imageStore(img, ivec2(gl_FragCoord.xy), vec4(0.0f, 0.0f, 1.0f, 1.0f));
}
Reading from image
Code glsl:
#version 430 core
layout (binding = 0, rgba32f) uniform image2D img;
out vec4 out_color;
void main()
{
vec4 color = imageLoad(img, ivec2(gl_FragCoord.xy));
out_color = color;
}
The only way that I get the correct result is if I change the order of the drawing commands, and then I don't need the memory barriers, like this (in the Update function above):
shader_r->use(); // reading shader
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
shader_w->use(); // writing shader
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
I don't know if the problem is the graphics card or the drivers, if I'm missing some kind of flag that enables memory barriers, if I put the wrong barrier bits, or if I placed the barriers in the wrong part of the code.
The vertex shader for both shader programs is the following:
#version 430 core
void main()
{
vec2 v[4] = vec2[4]
(
vec2(-1.0, -1.0),
vec2( 1.0, -1.0),
vec2(-1.0, 1.0),
vec2( 1.0, 1.0)
);
vec4 p = vec4(v[gl_VertexID], 0.0, 1.0);
gl_Position = p;
}
and this is my init function:
void Window::init()
{
glfwInit();
window = glfwCreateWindow(getWidth(), getHeight(), name, nullptr, nullptr);
glfwMakeContextCurrent(window);
glfwSetFramebufferSizeCallback(window, framebufferSizeCallback);
glfwSetCursorPosCallback(window, cursorPosCallback);
//glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);
assert(gladLoadGLLoader((GLADloadproc)glfwGetProcAddress) && "Couldn't initialize OpenGL");
glEnable(GL_DEPTH_TEST);
}
and in my Run function I'm calling my Start, Update and End functions:
void Core::Run()
{
std::cout << glGetString(GL_VERSION) << std::endl;
Start();
float lastFrame{ 0.0f };
while (!window.close())
{
float currentFrame = static_cast<float>(glfwGetTime());
Time::deltaTime = currentFrame - lastFrame;
lastFrame = currentFrame;
glViewport(0, 0, getWidth(), getHeight());
glClearBufferfv(GL_COLOR, 0, &color[0]);
glClearBufferfi(GL_DEPTH_STENCIL, 0, 1.0f, 0);
Update();
glfwSwapBuffers(window);
glfwPollEvents();
}
End();
}
glEnable(GL_DEPTH_TEST);
As I suspected.
Just because a fragment shader doesn't write a color output doesn't mean that those fragments will not affect the depth buffer. If the fragment passes the depth test and the depth write mask is on (assuming no other state is involved), it will update the depth buffer with the current fragment's depth (and the color buffer with uninitialized values, but that's a different matter).
Since you're drawing the same geometry both times, the second rendering's fragments will get the same depth values as the corresponding fragments from the first rendering. But the default depth function is GL_LESS. Since any value is not less than itself, this means that all fragments from the second rendering fail the depth test.
And therefore, they don't get rendered.
So just turn off the depth test. And while you're at it, turn off color writes for your "writing" rendering pass, since you're not writing to the color buffers.
Now, you do properly need the memory barrier between the two draw calls. But you only need the GL_SHADER_IMAGE_ACCESS_BARRIER_BIT, since that's how you're reading the data (via image load/store, not samplers).
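A minimal sketch of those changes, assuming the same Update method and member names as in the code above, could look like:
virtual void Update() override
{
    glDisable(GL_DEPTH_TEST);                              // otherwise the second pass fails the GL_LESS depth test
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // the writing pass only does imageStore(), no color output
    shader_w->use();                                       // writing shader
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);   // reads are done via imageLoad, not samplers
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);       // re-enable color writes for the reading pass
    shader_r->use();                                       // reading shader
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}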

OpenGL (ES) Image Processing C++

I am currently trying to do image processing using OpenGL ES. I am trying to do basic image effects like blurring, switching color spaces, and so on.
I want to build the simplest program to do the following things:
Image loading
Image processing (using shader)
Image saving
I have managed to build the following setup :
An OpenGL context
The image I want to do effects on loaded using DevIL.
Two shaders (one vertex shaders and one fragment shaders)
I am now stuck at using the image I loaded to send data to the fragment shader. What I am trying to do is send the image as a sampler2D to the fragment shader and apply processing to it.
I have multiple questions such as:
Do I need a vertex shader if all I want to do is pure 2D image processing ?
If I do, what should be done in this vertex shader as I have no vertices at all. Should I create quad vertices (like (0,0) (1, 0) (0, 1) (1, 1)) ? If so, why ?
Do I need to use things like VBO (which seems to be related to the vertex shader), FBO or other thing like that ?
Can't I just load my image into the texture and wait for the fragment shader to do everything I want on this texture ?
Can someone provides some simple piece of "clean" code that could help me understand (without any fancy classes that makes the understanding so complicated) ?
Here is what my fragment shader looks like for simple color swapping:
uniform int width;
uniform int height;
uniform sampler2D texture;
void main() {
vec2 texcoord = vec2(gl_FragCoord.x/width, gl_FragCoord.y/height);
vec4 texture_value = texture2D(texture, texcoord);
gl_FragColor = texture_value.bgra;
}
and my main.cpp :
int main(int argc, char** argv)
{
if (argc != 4) {
std::cerr << "Usage: " << argv[0] << " <vertex shader path> <fragment shader path> <image path>" << std::endl;
return EXIT_FAILURE;
}
// Get an EGL valid display
EGLDisplay display;
display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
if (display == EGL_NO_DISPLAY) {
std::cerr << "Failed to get EGL Display" << std::endl
<< "Error: " << eglGetError() << std::endl;
return EXIT_FAILURE;
}
else {
std::cerr << "Successfully get EGL Display." << std::endl;
}
// Create a connection to the display
int minor, major;
if (eglInitialize(display, &minor, &major) == EGL_FALSE) {
std::cerr << "Failed to initialize EGL Display" << std::endl
<< "Error: " << eglGetError() << std::endl;
eglTerminate(display);
return EXIT_FAILURE;
}
else {
std::cerr << "Successfully intialized display (OpenGL ES version " << minor << "." << major << ")." << std::endl;
}
// OpenGL ES Config are used to specify things like multi sampling, channel size, stencil buffer usage, & more
// See the doc: https://www.khronos.org/registry/EGL/sdk/docs/man/html/eglChooseConfig.xhtml for more informations
EGLConfig config;
EGLint num_configs;
if (!eglChooseConfig(display, configAttribs, &config, 1, &num_configs)) {
std::cerr << "Failed to choose EGL Config" << std::endl
<< "Error: " << eglGetError() << std::endl;
eglTerminate(display);
return EXIT_FAILURE;
}
else {
std::cerr << "Successfully choose OpenGL ES Config ("<< num_configs << ")." << std::endl;
}
// Creating an OpenGL Render Surface with surface attributes defined above.
EGLSurface surface = eglCreatePbufferSurface(display, config, pbufferAttribs);
if (surface == EGL_NO_SURFACE) {
std::cerr << "Failed to create EGL Surface." << std::endl
<< "Error: " << eglGetError() << std::endl;
}
else {
std::cerr << "Successfully created OpenGL ES Surface." << std::endl;
}
eglBindAPI(EGL_OPENGL_API);
EGLContext context = eglCreateContext(display, config, EGL_NO_CONTEXT, contextAttribs);
if (context == EGL_NO_CONTEXT) {
std::cerr << "Failed to create EGL Context." << std::endl
<< "Error: " << eglGetError() << std::endl;
}
else {
std::cerr << "Successfully created OpenGL ES Context." << std::endl;
}
//Bind context to surface
eglMakeCurrent(display, surface, surface, context);
// Create viewport and check if it has been created correctly
glViewport(0, 0, WIDTH, HEIGHT);
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
if (viewport[2] != WIDTH || viewport[3] != HEIGHT) {
std::cerr << "Failed to create the viewport. Size does not match (glViewport/glGetIntegerv not working)." << std::endl
<< "OpenGL ES might be faulty!" << std::endl
<< "If you are on Raspberry Pi, you should not updated EGL as it will install fake EGL." << std::endl;
eglTerminate(display);
return EXIT_FAILURE;
}
// Clear buffer and get ready to draw some things
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Create a shader program
GLuint program = load_shaders(std::string(argv[1]), std::string(argv[2]));
if (program == -1)
{
std::cerr << "Failed to create a shader program. See above for more details." << std::endl;
eglTerminate(display);
return EXIT_FAILURE;
}
/* Initialization of DevIL */
if (ilGetInteger(IL_VERSION_NUM) < IL_VERSION) {
std::cerr << "Failed to use DevIL: Wrong version." << std::endl;
return EXIT_FAILURE;
}
ilInit();
ILuint image = load_image(argv[3]);
GLuint texId;
glGenTextures(1, &texId); /* Texture name generation */
glBindTexture(GL_TEXTURE_2D, texId); /* Binding of texture name */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); /* We will use linear interpolation for magnification filter */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* We will use linear interpolation for minifying filter */
glTexImage2D(GL_TEXTURE_2D, 0, ilGetInteger(IL_IMAGE_BPP), ilGetInteger(IL_IMAGE_WIDTH), ilGetInteger(IL_IMAGE_HEIGHT),
0, ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE, ilGetData()); /* Texture specification */
Thanks.
Do I need a vertex shader as all I want to do is pure 2D image processing ?
Using vertex and fragment shaders is mandatory in OpenGL ES 2.
If I do, what should be done in this vertex shader as I have no vertices at all. Should I create quad vertices (like (0,0) (1, 0) (0, 1) (1, 1)) ? If so, why ?
Yes. Because that's how OpenGL ES 2 works. Otherwise you would need to use something like compute shaders (supported in OpenGL ES 3.1+) or OpenCL.
Do I need to use things like VBO (which seems to be related to the vertex shader), FBO or other thing like that ?
Using VBO/IBO won't make practically any difference for you since you only have 4 vertices and 2 primitives. You may want to render to texture, depending on your needs.
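If you do want to render into a texture so you can read the processed pixels back and save them, a minimal OpenGL ES 2 sketch could look like the following (width and height are assumed to match your image, and error handling is kept minimal):
// Texture that will receive the processed image
GLuint outTex;
glGenTextures(1, &outTex);
glBindTexture(GL_TEXTURE_2D, outTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Attach it to a framebuffer object and render the full-screen quad into it
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, outTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    std::cerr << "Framebuffer is not complete." << std::endl;
// ... draw the full-screen quad here ...
// Read the result back so it can be saved to disk
unsigned char *pixels = new unsigned char[width * height * 4];
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);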
Can't I just load my image into the texture and wait for the fragment shader to do everything I want on this texture ?
No.
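To make the quad concrete, here is a minimal sketch that fits the fragment shader from the question (the a_position attribute name is hypothetical, and the width, height and texture uniforms still have to be set separately). The vertex shader just passes the positions through:
attribute vec2 a_position;
void main() {
    gl_Position = vec4(a_position, 0.0, 1.0);
}
and the draw call can feed it a client-side vertex array:
// Four corners of a full-screen quad in clip space, drawn as a triangle strip
const GLfloat quad[8] = { -1.0f, -1.0f,  1.0f, -1.0f,  -1.0f, 1.0f,  1.0f, 1.0f };
GLint posLoc = glGetAttribLocation(program, "a_position");
glUseProgram(program);
glEnableVertexAttribArray(posLoc);
glVertexAttribPointer(posLoc, 2, GL_FLOAT, GL_FALSE, 0, quad); // client-side array, no VBO required
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);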

Getting black screen when using glCreateVertexArrays

I am learning OpenGL right now. I have bought a book called OpenGL Superbible, but I haven't managed to properly configure the environment. I use GLFW 3.2 as the windowing toolkit (if that is what it is called) and GLEW 2.0.
I am trying to compile and use shaders to draw on screen. According to the book this should draw a triangle on screen, but it doesn't. Instead, it shows the clear background color that is set by glClearColor.
This is the Code:
#include <iostream>
#include <GLFW\glfw3.h>
#include <GL\glew.h>
GLuint CompileShaders();
int main(void) {
// Initialise GLFW
if (!glfwInit()) {
fprintf(stderr, "Failed to initialize GLFW\n");
getchar();
return -1;
}
// Open a window and create its OpenGL context
GLFWwindow *window;
window = glfwCreateWindow(1024, 768, "Tutorial 01", NULL, NULL);
if (window == NULL) {
fprintf(stderr, "Failed to open GLFW window. If you have an Intel GPU, "
"they are not 3.3 compatible. Try the 2.1 version of the "
"tutorials.\n");
getchar();
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window);
// Initialize GLEW
if (glewInit() != GLEW_OK) {
fprintf(stderr, "Failed to initialize GLEW\n");
getchar();
glfwTerminate();
return -1;
}
// Ensure we can capture the escape key being pressed below
glfwSetInputMode(window, GLFW_STICKY_KEYS, GL_TRUE);
GLuint RederingProgram = CompileShaders();
GLuint VertexArrayObject;
glCreateVertexArrays(1, &VertexArrayObject);
glBindVertexArray(VertexArrayObject);
int LoopCounter = 0;
do {
// Clear the screen. It's not mentioned before Tutorial 02, but it can cause
// flickering, so it's there nonetheless.
/*const GLfloat red[] = {
(float)sin(LoopCounter++ / 100.0f)*0.5f + 0.5f,
(float)cos(LoopCounter++ / 100.0f)*0.5f + 0.5f,
0.0f, 1.0f
};*/
// glClearBufferfv(GL_COLOR, 0, red);
// Draw nothing, see you in tutorial 2 !
glUseProgram(RederingProgram);
glDrawArrays(GL_TRIANGLES, 0, 3);
// Swap buffers
glfwSwapBuffers(window);
glfwPollEvents();
} // Check if the ESC key was pressed or the window was closed
while (glfwGetKey(window, GLFW_KEY_ESCAPE) != GLFW_PRESS &&
glfwWindowShouldClose(window) == 0);
// Close OpenGL window and terminate GLFW
glfwTerminate();
return 0;
}
GLuint CompileShaders() {
GLuint VertexShader;
GLuint FragmentShader;
GLuint Program;
static const GLchar *VertexShaderSource[] = {
"#version 450 core "
" "
"\n",
" "
" \n",
"void main(void) "
" "
"\n",
"{ "
" "
" \n",
"const vec4 vertices[3] = vec4[3](vec4(0.25, -0.25, 0.5, 1.0),\n",
" "
"vec4(-0.25, -0.25, 0.5, 1.0),\n",
" "
"vec4(0.25, 0.25, 0.5, 1.0)); \n",
" gl_Position = vertices[gl_VertexID]; \n",
"} "
" "
" \n"};
static const GLchar *FragmentShaderSource[] = {
"#version 450 core "
" "
"\n",
" "
" \n",
"out vec4 color; \n",
" "
" \n",
"void main(void) "
" "
"\n",
"{ "
" "
" \n",
" color = vec4(0.0, 0.8, 1.0, 1.0); \n",
"} "
" "
" \n"};
// Create and compile vertex shader.
VertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(VertexShader, 1, VertexShaderSource, NULL);
glCompileShader(VertexShader);
// Create and compile fragment shader.
FragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(FragmentShader, 1, FragmentShaderSource, NULL);
glCompileShader(FragmentShader);
// Create program, attach shaders to it, and link it
Program = glCreateProgram();
glAttachShader(Program, VertexShader);
glAttachShader(Program, FragmentShader);
glLinkProgram(Program);
// Delete the shaders as the program has them now.
glDeleteShader(FragmentShader);
glDeleteShader(VertexShader);
return Program;
}
I am working in Visual Studio 2015. I have all the libraries to develop some OpenGL (I think), but something is wrong. Please help me. By the way, the glCreateVertexArrays() function is only in OpenGL 4.5 and above, I know, since the book is written for OpenGL 4.5.
I will go crazy soon because there are no proper tutorials for beginners. People who have learned this are very ambitious people. I bow before those people.
Your shaders shouldn't compile:
glShaderSource(VertexShader, 1, VertexShaderSource, NULL);
This tells the GL that it should expect an array of 1 GLchar pointer. However, your GLSL code is actually split into several individual strings (note the commas):
static const GLchar *VertexShaderSource[] = {
"...GLSL-code..."
"...GLSL-code..."
"...GLSL-code...", // <- this comma ends the first string vertexShaderSource[0]
"...GLSL-code..." // vertexShaderSource[1] starts here
[...]
There are two possible solutions:
Just remove those commas, so that your array consists of just one element pointing to the whole GLSL source as one string.
Tell the GL the truth about your data:
glShaderSource(..., sizeof(vertexShaderSource)/sizeof(vertexShaderSource[0]), vertexShaderSource, ...)
Apart from that, you should always query the compilation and link status of your shaders and program objects, as well as the shader compile and program link info logs. They will contain human-readable messages telling you why the compilation or link failed.
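A minimal sketch of such a check (a hypothetical helper, not part of the book's code) could be:
// Returns true if the shader compiled; prints the driver's info log otherwise
bool CheckCompiled(GLuint shader) {
    GLint status = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status != GL_TRUE) {
        GLchar log[1024];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        fprintf(stderr, "Shader compile error: %s\n", log);
        return false;
    }
    return true;
}
// The link status works the same way, via glGetProgramiv(Program, GL_LINK_STATUS, &status)
// and glGetProgramInfoLog(Program, ...).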

Segmentation Fault glDrawArrays()

I am trying to generate a terrain from a file, and display it in a window on my screen in openGL. I am getting a seg fault and I've localised it to the glDrawArrays() call in my code.
I might be calling it wrong, or my heightmap may have too many vertices for the way I am calling it.
I will link my code below and put a comment next to the segfault line.
/**
* A typical program flow and methods for rendering simple polygons
* using freeglut and openGL + GLSL
*/
#include <stdio.h>
// GLEW loads OpenGL extensions. Required for all OpenGL programs.
#include <GL/glew.h>
#ifdef __APPLE__
#include <GLUT/glut.h>
#else
#include <GL/glut.h>
#endif
// Utility code to load and compile GLSL shader programs
#include "shader.hpp"
#include <iostream>
#include <fstream>
#include <vector>
#define WINDOW_WIDTH 400
#define WINDOW_HEIGHT 400
//#define VALS_PER_VERT 3
//#define VALS_PER_COLOUR 4
//#define NUM_VERTS 3 // Total number of vertices to load/render
#define VALS_PER_VERT_HEIGHT 5
#define VALS_PER_COLOUR_HEIGHT 4
#define HEIGHT_VERTS 5 //height map vertices per line
using namespace std;
// Handle to our VAO generated in setShaderData method
//heightmap
unsigned int vertexVaoHandleHeight;
// Handle to our shader program
unsigned int programID;
/**
* Sets the shader uniforms and vertex data
* This happens ONCE only, before any frames are rendered
* @param id, Shader program object to use
* @returns 0 for success, error otherwise
*/
int setShaderData(const unsigned int &id)
{
/*
* What we want to draw
* Each set of 3 vertices (9 floats) defines one triangle
* You can define more triangles to draw here
*/
float heightmapVerts[ HEIGHT_VERTS*VALS_PER_VERT_HEIGHT ] = {
//5
-0.9, -0.6, -0.4, -0.6, -0.9,
-0.2, 0.1, 0.3, 0.1, -0.3,
0, 0.4, 0.8, 0.4, 0,
-0.2, 0.1, 0.3, 0.1, -0.3,
0.5, -0.6, -0.4, -0.6, -0.9,
};
std::cout << "1" << endl;
// Colours for each vertex; red, green, blue and alpha
// This data is indexed the same order as the vertex data, but reads 4 values
// Alpha will not be used directly in this example program
//may cause problems because less numbers
float heightColours[ HEIGHT_VERTS*VALS_PER_COLOUR_HEIGHT ] = {
0.8f, 0.7f, 0.5f, 1.0f,
0.3f, 0.7f, 0.1f, 1.0f,
0.8f, 0.2f, 0.5f, 1.0f,
};
std::cout << "2" << endl;
// heightmap stuff ##################################################
// Generate storage on the GPU for our triangle and make it current.
// A VAO is a set of data buffers on the GPU
glGenVertexArrays(1, &vertexVaoHandleHeight);
glBindVertexArray(vertexVaoHandleHeight);
std::cout << "3" << endl;
// Generate new buffers in our VAO
// A single data buffer store for generic, per-vertex attributes
unsigned int bufferHeight[2];
glGenBuffers(2, bufferHeight);
// Allocate GPU memory for our vertices and copy them over
glBindBuffer(GL_ARRAY_BUFFER, bufferHeight[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*HEIGHT_VERTS*VALS_PER_VERT_HEIGHT, heightmapVerts, GL_STATIC_DRAW);
// Do the same for our vertex colours
glBindBuffer(GL_ARRAY_BUFFER, bufferHeight[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*HEIGHT_VERTS*VALS_PER_COLOUR_HEIGHT, heightColours, GL_STATIC_DRAW);
std::cout << "4" << endl;
// Now we tell OpenGL how to interpret the data we just gave it
// Tell OpenGL what shader variable it corresponds to
// Tell OpenGL how it's formatted (floating point, 3 values per vertex)
int vertLocHeight = glGetAttribLocation(id, "a_vertex");
glBindBuffer(GL_ARRAY_BUFFER, bufferHeight[0]);
glEnableVertexAttribArray(vertLocHeight);
glVertexAttribPointer(vertLocHeight, VALS_PER_VERT_HEIGHT, GL_FLOAT, GL_FALSE, 0, 0);
std::cout << "5" << endl;
// Do the same for the vertex colours
int colourLocHeight = glGetAttribLocation(id, "a_colour");
glBindBuffer(GL_ARRAY_BUFFER, bufferHeight[1]);
glEnableVertexAttribArray(colourLocHeight);
glVertexAttribPointer(colourLocHeight, VALS_PER_COLOUR_HEIGHT, GL_FLOAT, GL_FALSE, 0, 0);
// heightmap stuff ##################################################
std::cout << "6" << endl;
// An argument of zero un-binds all VAO's and stops us
// from accidentally changing the VAO state
glBindVertexArray(0);
// The same is true for buffers, so we un-bind it too
glBindBuffer(GL_ARRAY_BUFFER, 0);
std::cout << "7" << endl;
return 0; // return success
}
/**
* Renders a frame of the state and shaders we have set up to the window
* Executed each time a frame is to be drawn.
*/
void render()
{
// Clear the previous pixels we have drawn to the colour buffer (display buffer)
// Called each frame so we don't draw over the top of everything previous
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(programID);
std::cout << "8" << endl;
// HEIGHT MAP STUFF ###################################
// Make the VAO with our vertex data buffer current
glBindVertexArray(vertexVaoHandleHeight);
// Send command to GPU to draw the data in the current VAO as triangles
std::cout << "8.5" << endl;
//CRASHES HERE
glDrawArrays(GL_TRIANGLES, 0, /*HEIGHT_VERTS = 5*/ 3);
std::cout << "8.75" << endl;
glBindVertexArray(0); // Un-bind the VAO
std::cout << "9" << endl;
// HEIGHT MAP STUFF ###################################
glutSwapBuffers(); // Swap the back buffer with the front buffer, showing what has been rendered
glFlush(); // Guarantees previous commands have been completed before continuing
}
/**
* Program entry. Sets up OpenGL state, GLSL Shaders and GLUT window and function call backs
* Takes no arguments
*/
int main(int argc, char **argv) {
//READ IN FILE//
std::fstream myfile("heights.csv", std::ios_base::in);
if(!myfile.good()){cout << "file not found" << endl;}
std::vector<float> numbers;
float a;
while (myfile >> a){/*printf("%f ", a);*/
numbers.push_back(a);
}
//for (int i=0; i<numbers.size();i++){cout << numbers[i] << endl;}
getchar();
//READ IN FILE//
// Set up GLUT window
glutInit(&argc, argv); // Starts GLUT systems, passing in command line args
glutInitWindowPosition(100, 0); // Positions the window on the screen relative to top left
glutInitWindowSize(WINDOW_WIDTH, WINDOW_HEIGHT); // Size in pixels
// Display mode takes bit flags defining properties you want the window to have;
// GLUT_RGBA : Set the pixel format to have Red Green Blue and Alpha colour channels
// GLUT_DOUBLE : Each frame is drawn to a hidden back buffer hiding the image construction
// GLUT_DEPTH : A depth buffer is kept so that polygons can be drawn in-front/behind others (not used in this application)
#ifdef __APPLE__
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH | GLUT_3_2_CORE_PROFILE);
#else
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH );
#endif
glutCreateWindow("Hello World!"); // Makes the actual window and displays
// Initialize GLEW
glewExperimental = true; // Needed for core profile
if (glewInit() != GLEW_OK) {
fprintf(stderr, "Failed to initialize GLEW\n");
return -1;
}
// Sets the (background) colour for each time the frame-buffer (colour buffer) is cleared
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
// Set up the shaders we are to use. 0 indicates error.
programID = LoadShaders("minimal.vert", "minimal.frag");
if (programID == 0)
return 1;
// Set this shader program in use
// This is an OpenGL state modification and persists unless changed
glUseProgram(programID);
// Set the vertex data for the program
if (setShaderData(programID) != 0)
return 1;
// Render call to a function we defined,
// that is called each time GLUT thinks we need to update
// the window contents, this method has our drawing logic
glutDisplayFunc(render);
// Start an infinite loop where GLUT calls methods (like render)
// set with glut*Func when needed.
// Runs until something kills the window
glutMainLoop();
return 0;
}
The size argument of glVertexAttribPointer() must be 1, 2, 3, or 4. You pass 5 here:
#define VALS_PER_VERT_HEIGHT 5
...
glVertexAttribPointer(vertLocHeight, VALS_PER_VERT_HEIGHT, GL_FLOAT, GL_FALSE, 0, 0);
You should always call glGetError() if you have problems with your rendering. The call above will immediately give you a GL_INVALID_VALUE error.
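A minimal sketch of such a check (a hypothetical debugging helper, not part of the original code) could be:
// Drains and prints any pending OpenGL errors; call after a suspect GL call while debugging
void checkGLError(const char *where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        std::cerr << "GL error 0x" << std::hex << err << std::dec << " after " << where << std::endl;
}
// For example:
glVertexAttribPointer(vertLocHeight, VALS_PER_VERT_HEIGHT, GL_FLOAT, GL_FALSE, 0, 0);
checkGLError("glVertexAttribPointer"); // reports GL_INVALID_VALUE (0x0501) for size = 5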
Your code also looks generally inconsistent. In some places, you seem to assume that you have 3 vertices, in others 4, in others 5. Then, as shown above, you have vertices with 5 coordinates, which does not really make any sense. You may want to have a careful look at your own code, and make sure that everything is consistent with what you are trying to do.

Meaning of "index" parameter in glEnableVertexAttribArray and (possibly) a bug in the OS X OpenGL implementation

1) Do I understand correctly that to draw using vertex arrays or VBOs I need for all my attributes to either call glBindAttribLocation before the shader program linkage or call glGetAttribLocation after the shader program was successfully linked and then use the bound/obtained index in the glVertexAttribPointer and glEnableVertexAttribArray calls?
To be more specific: these three functions - glGetAttribLocation, glVertexAttribPointer and glEnableVertexAttribArray - they all have an input parameter named "index". Is it the same "index" for all the three? And is it the same thing as the one returned by glGetAttribLocation?
If yes:
2) I've been facing a problem on OS X, I described it here: https://stackoverflow.com/questions/28093919/using-default-attribute-location-doesnt-work-on-osx-osx-opengl-bug , but unfortunately didn't get any replies.
The problem is that depending on what attribute locations I bind to my attributes I do or do not see anything on the screen. I only see this behavior on my MacBook Pro with OS X 10.9.5; I've tried running the same code on Linux and Windows and it seems to work on those platforms independently from which locations are my attributes bound to.
Here is a code example (which is supposed to draw a red triangle on the screen) that exhibits the problem:
#include <iostream>
#include <GLFW/glfw3.h>
GLuint global_program_object;
GLint global_position_location;
GLint global_aspect_ratio_location;
GLuint global_buffer_names[1];
int LoadShader(GLenum type, const char *shader_source)
{
GLuint shader;
GLint compiled;
shader = glCreateShader(type);
if (shader == 0)
return 0;
glShaderSource(shader, 1, &shader_source, NULL);
glCompileShader(shader);
glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
if (!compiled)
{
GLint info_len = 0;
glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &info_len);
if (info_len > 1)
{
char* info_log = new char[info_len];
glGetShaderInfoLog(shader, info_len, NULL, info_log);
std::cout << "Error compiling shader" << info_log << std::endl;
delete info_log;
}
glDeleteShader(shader);
return 0;
}
return shader;
}
int InitGL()
{
char vertex_shader_source[] =
"attribute vec4 att_position; \n"
"attribute float dummy;\n"
"uniform float uni_aspect_ratio; \n"
"void main() \n"
" { \n"
" vec4 test = att_position * dummy;\n"
" mat4 mat_projection = \n"
" mat4(1.0 / uni_aspect_ratio, 0.0, 0.0, 0.0, \n"
" 0.0, 1.0, 0.0, 0.0, \n"
" 0.0, 0.0, -1.0, 0.0, \n"
" 0.0, 0.0, 0.0, 1.0); \n"
" gl_Position = att_position; \n"
" gl_Position *= mat_projection; \n"
" } \n";
char fragment_shader_source[] =
"void main() \n"
" { \n"
" gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); \n"
" } \n";
GLuint vertex_shader;
GLuint fragment_shader;
GLuint program_object;
GLint linked;
vertex_shader = LoadShader(GL_VERTEX_SHADER , vertex_shader_source );
fragment_shader = LoadShader(GL_FRAGMENT_SHADER, fragment_shader_source);
program_object = glCreateProgram();
if(program_object == 0)
return 1;
glAttachShader(program_object, vertex_shader );
glAttachShader(program_object, fragment_shader);
// Here any index except 0 results in observing the black screen
glBindAttribLocation(program_object, 1, "att_position");
glLinkProgram(program_object);
glGetProgramiv(program_object, GL_LINK_STATUS, &linked);
if(!linked)
{
GLint info_len = 0;
glGetProgramiv(program_object, GL_INFO_LOG_LENGTH, &info_len);
if(info_len > 1)
{
char* info_log = new char[info_len];
glGetProgramInfoLog(program_object, info_len, NULL, info_log);
std::cout << "Error linking program" << info_log << std::endl;
delete info_log;
}
glDeleteProgram(program_object);
return 1;
}
global_program_object = program_object;
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glUseProgram(global_program_object);
global_position_location = glGetAttribLocation (global_program_object, "att_position");
global_aspect_ratio_location = glGetUniformLocation(global_program_object, "uni_aspect_ratio");
GLfloat vertices[] = {-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.0f, 0.5f, 0.0f};
glGenBuffers(1, global_buffer_names);
glBindBuffer(GL_ARRAY_BUFFER, global_buffer_names[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 9, vertices, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
return 0;
}
void Render()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glUseProgram(global_program_object);
glBindBuffer(GL_ARRAY_BUFFER, global_buffer_names[0]);
glVertexAttribPointer(global_position_location, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(global_position_location);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableVertexAttribArray(global_position_location);
glUseProgram(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
void FreeGL()
{
glDeleteBuffers(1, global_buffer_names);
glDeleteProgram(global_program_object);
}
void SetViewport(int width, int height)
{
glViewport(0, 0, width, height);
glUseProgram(global_program_object);
glUniform1f(global_aspect_ratio_location, static_cast<GLfloat>(width) / static_cast<GLfloat>(height));
}
int main(void)
{
GLFWwindow* window;
if (!glfwInit())
return -1;
window = glfwCreateWindow(640, 480, "Hello World", NULL, NULL);
if (!window)
{
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window);
InitGL();
// Double the resolution to correctly draw with Retina display
SetViewport(1280, 960);
while (!glfwWindowShouldClose(window))
{
Render();
glfwSwapBuffers(window);
glfwPollEvents();
}
FreeGL();
glfwTerminate();
return 0;
}
Does this look like a bug to you? Can anyone reproduce it? If it's a bug where should I report it?
P.S.
I've also tried SDL instead of GLFW, the behavior is the same...
The behavior you see is actually correct as per the spec, and MacOSX has something to do with this, but only in a very indirect way.
To answer question 1) first: You are basically correct. With modern GLSL (>=3.30), you can also specify the desired index via the layout(location=...) qualifier directly in the shader code, instead of using glBindAttribLocation(), but that is only a side note.
The problem you are facing is that you are using a legacy GL context. You do not specify a desired version, so you will get maximum compatibility to the old way. Now on windows, you are very likely to get a compatibility profile of the highest version supported by the implementation (typically GL3.x or GL4.x on non-ancient GPUs).
However, on OSX, you are limited to at most GL2.1. And this is where the crux lies: your code is invalid in GL2.x. To explain this, I have to go back in GL history. In the beginning, there was immediate mode, so drawing was done like this:
glBegin(primType);
glColor3f(r,g,b);
glVertex3f(x,y,z);
[...]
glColor3f(r,g,b);
glVertex3f(x,y,z);
glEnd();
Note that the glVertex call is what actually creates a vertex. All other per-vertex attributes are basically some current vertex state which can be set any time, but calling glVertex will take all of those current attributes together with the position to form the vertex which is fed to the pipeline.
Now when vertex arrays were added, we got functions like glVertexPointer(), glColorPointer() and so on, and each attribute array could be enabled or disabled separately via glEnableClientState(). The array-based draw calls are actually defined in terms of the immediate mode in the OpenGL 2.1 specification as glDrawArrays(GLenum mode, GLint first, GLsizei count) being equivalent to
glBegin(mode);
for (i=0; i<count; i++)
ArrayElement(first + i);
glEnd();
with ArrayElement(i) being defined as follows (this one is derived from the wording of the GL 1.5 spec):
if ( normal_array_enabled )
Normal3...( <i-th normal value> );
[...] // similar for all other builtin attribs
if ( vertex_array_enabled)
Vertex...( <i-th vertex value> );
This definition has a subtle consequence: you must have the GL_VERTEX_ARRAY attribute array enabled, otherwise nothing will be drawn, since no equivalent of glVertex calls is generated.
Now when the generic attributes were added in GL2.0, a special guarantee was made: generic attribute 0 is aliasing the builtin glVertex attribute - so both can be used interchangeably, in immediate mode as well as in arrays. So glVertexAttrib3f(0,x,y,z) "creates" a vertex the same way glVertex3f(x,y,z) would have. And using an array with glEnableVertexAttribArray(0) is as good as glEnableClientState(GL_VERTEX_ARRAY).
In GL 2.1, the ArrayElement(i) function now looks as follows:
if ( normal_array_enabled )
Normal3...( <i-th normal value> );
[...] // similar for all other builtin attribs
for (a=1; a<max_attribs; a++) {
if ( generic_attrib_a_enabled )
glVertexAttrib...(a, <i-th value of attrib a> );
}
if ( generic_attrib_0_enabled)
glVertexAttrib...(0, <i-th value of attrib 0> );
else if ( vertex_array_enabled)
Vertex...( <i-th vertex value> );
Now this is what happens to you. You absolutely need attribute 0 (or the old GL_VERTEX_ARRAY attribute) to be enabled for this to generate any vertices for the pipeline.
Note that it should be possible in theory to just enable attribute 0, no matter whether it is used in the shader or not. You should just make sure that the corresponding attrib pointer points to valid memory, to be 100% safe. So you could simply check whether your attribute index 0 is used, and if not, just set the same pointer for attribute 0 as you did for your real attribute, and the GL should be happy. But I haven't tried this.
In more modern GL, these requirements are not there anymore, and drawing without attribute 0 will work as intended, and that is what you saw on those other systems. Maybe you should consider switching to modern GL, say >= 3.2 core profile, where the issue will not be present (but you need to update your code a bit, including the shaders).
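Applied to the legacy-context code above, a minimal sketch of the workaround is simply to keep att_position on location 0:
// Binding the position attribute to location 0 makes it alias the builtin glVertex
// attribute, so the array-based draw actually emits vertices on a GL 2.x context.
glBindAttribLocation(program_object, 0, "att_position");
glLinkProgram(program_object);
// global_position_location then becomes 0, and the existing Render() code
// (glVertexAttribPointer/glEnableVertexAttribArray on that location) works unchanged.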